The Privacy Insider Podcast
Where Tabletop Games Meet the Future of Privacy with Dr. Tehilla Shwartz Altshuler of the Israel Democracy Institute
Dr. Tehilla Shwartz Altshuler, Senior Fellow at the Israel Democracy Institute, is at the forefront of conversations where technology, democracy, and human rights collide. As nations scramble to regulate fast-moving innovation, her perspective is critical now. With Israel recently updating its privacy law and global debates intensifying around AI, smart glasses, and workplace surveillance, Tehilla’s work highlights what’s at stake if regulation lags behind reality.
Episode Highlights:
- 00:00 Introduction.
- 05:33 Privacy is essential for protecting autonomy and democracy.
- 09:19 Legislative changes reshape the landscape of data protection.
- 16:16 Alternative approaches to regulation emerge between global models.
- 26:01 New technologies blur the lines between private and public life.
- 28:54 Governance challenges arise around consent and platform oversight.
- 38:51 AI systems increasingly influence workplace management and decisions.
- 41:33 Digital replicas of individuals raise ethical and privacy questions.
- 43:50 Preparing for the near future becomes the most critical skill.
Dr. Tehilla: [00:00:00] The right to privacy is going to become maybe even the mother of all human rights. Because if there's no privacy, there's no autonomy. You can't talk about free elections. You can't talk about anything.
Arlo Gilbert: Hello. My name is Arlo Gilbert. I'm the founder at Osano, a leading data privacy platform, and today I'm your host on the Privacy Insider Podcast. Today we're joined by Dr. Tehilla Shwartz Altshuler. Dr. Shwartz is a senior fellow at the Israel Democracy Institute, an independent research center dedicated to reinforcing Israeli democracy.
She is a [00:01:00] technology, law, and policy expert addressing the challenges and opportunities stemming from the information, surveillance, cyber, and artificial intelligence revolutions, especially their impact on democracy. Dr. Shwartz, welcome to the show.
Dr. Tehilla: Thanks for having me. It's great to be here.
Arlo Gilbert: So, as you know, we love to start our conversations off learning a little bit about the person we're talking to, and I think your story is very interesting. So, if you wouldn't mind, tell us a little bit about how you got into privacy, digital surveillance, and technology rights. I know I would love to understand more about that.
Dr. Tehilla: So I actually began this journey in academia, at the Hebrew University in Jerusalem and then the Harvard Kennedy School. And it's funny, because my doctoral thesis was about media ownership concentration, which sounds, you know, very boring, but it was about power. You know, who gets to shape what we read, what we [00:02:00] see, what we know.
And honestly, my career has kind of evolved in parallel with the digital revolution itself. At first, I was asking what we need to change in traditional communications law: gag orders, protection of sources, election propaganda, freedom of information. Then I moved a little bit into open government, and from there it was almost inevitable that I'd move into data and privacy, because, you know, suddenly the battleground wasn't just newspapers and television anymore. It was digital platforms and databases, and then came social media and later what I call cyfluence, cyber influence, the way people and opinions are manipulated online. And then, obviously, AI came and entered the picture.
And I found myself asking not just how people use data, but how machines use data about people and what that means for democracy, [00:03:00] for privacy, you know, for other human rights. In fact, my recent book is called People, Machines, and the State. When I returned to Israel after five years in Brookline, Massachusetts, I taught at the Hebrew University School of Public Policy, but all of a sudden I realized that academia alone was too detached from the messy reality, and I love to influence the messy reality. So I joined the Israel Democracy Institute, which is Israel's biggest think tank on public policy, and I built my kingdom there.
You know, it's called Democracy in the Information Age, and it spans the entire spectrum from media to technology: ethics, policy, regulation, legislation of media, telecom, and technology. And also, what I've been doing during the past two years or so is teaching at the Jio Institute in Mumbai, which is a data sciences institute in India.
And there I [00:04:00] get to talk with Indian students about AI and social media policy. So yeah, I combine research, policy, teaching, a little bit of public speaking. I wanted to become a public intellectual. I wasn't sure exactly how, but here I am.
Arlo Gilbert: That's amazing. And I gotta tell you, honestly, the most important thing I picked up from that was that you spent time in Brookline. And my memory is that that is where some of the absolute best lox and bagels in the entire Boston area come from.
Dr. Tehilla: Well, I agree. I really miss Brookline. You know, there's a sentence in one of the children's books, Sarah, Plain and Tall: wherever you are, there's always something to miss. So, you know, this is what I miss: Brookline, Massachusetts.
Arlo Gilbert: Indeed. And you spend a lot of time traveling. Is that right?
Dr. Tehilla: Yeah. I mean, I just completed a [00:05:00] very interesting project that I did with a German colleague in India, and other places as well. But I'm located in Jerusalem.
Arlo Gilbert: And can you tell us, very broadly, what is the general theme of your research?
Dr. Tehilla: It's the question, I think, of how technology affects reality: how technology is going to change power relationships, how human rights are going to need to be reframed. And I think specifically the right to privacy is very prominent, because, you know, around 2016, when I wanted to start my deep research into Israeli privacy law, one of my bosses told me, why don't you write something else, about freedom of speech or freedom of the press?
And I said to him, you know, I have this gut feeling, this hunch, that the right to [00:06:00] privacy is going to become maybe even the mother of all human rights. Because if there's no privacy, there's no autonomy. You can't talk about free elections. You can't talk about anything in this world.
And when the world became so data-centered, all of a sudden this right needed some new interpretation and new meaning, and also new attention. And this is where I am.
Arlo Gilbert: I love it. And you know, great minds think alike. I think a lot of people around that era in 2016 who were really paying attention to the writing on the wall started noticing, you know, the European movement towards privacy rights. And I think you were exactly right, and that's the reason I started Osano, for exactly the same reason.
It just felt like privacy was becoming more and more important to society, and more and more important to businesses and individuals. So, super interesting how you ended up there. Now, you got your degree at the Harvard Kennedy School. [00:07:00] Is that correct?
Dr. Tehilla: I did my postdoc there under the supervision of the late Professor Frederick Schauer. He was a First Amendment professor. But think about the fact that that was really the early days of the internet. So since then, so much has happened.
Arlo Gilbert: And if you had to choose purely based on weather, are you a Boston gal or are you an Israel gal? Because those are very different climates.
Dr. Tehilla: True. So it's both the weather and the food. I have to admit, you know, the food in Israel is great, but Israel has its deficiencies. I don't need to be too precise about that. Anyone who opens up the news can understand that.
Arlo Gilbert: Well, let's talk a little bit about Israel. So Israel just passed a new privacy law, isn't that right?
Dr. Tehilla: Yes.
Arlo Gilbert: Yeah. And I'd love it if you could tell us a little bit about that. I know a lot of our audience spends most of their time on the European GDPR and on the California Privacy Rights Act. But Israel is a powerful economy, and [00:08:00] everybody who's involved in privacy really needs to understand Israel's privacy laws.
Dr. Tehilla: Yes, I think you're right. So, Israel's privacy legislation was first passed in 1981, so, you know, it's quite old in terms of updated privacy legislation. I think the most advanced technology mentioned in this old legislation is direct paper mail.
So in this sense, there was an urgent need to update the privacy law. The problem is that it took a lot, a lot of time. Now, since the GDPR was introduced, other countries needed to create some kind of adequacy with this legislation. Even America had to do that. In this sense, there was a lot of pressure on the Israeli decision makers to start amending or updating the privacy law.
So Israel just passed Amendment 13 to its [00:09:00] privacy law, and I was deeply involved throughout the legislative process. You know, they say nobody should ever see how sausages or laws are made. But in this case, despite the political tension in Israel, not to say our political deterioration, the process actually worked remarkably well. There was real cooperation between legislators, the Ministry of Justice, civil society organizations like mine, and even the industry. So that in itself is a small miracle in our current climate. And the law is truly an earthquake for Israel's privacy landscape.
For the first time, it creates a new professional role, what we call DPOs, Data Protection Officers, just like in the GDPR. So suddenly we have a brand new privacy profession in the market, and there are a lot of courses going on now, and people want to take this new role upon themselves.
It also introduces financial penalties for privacy [00:10:00] violations, and this is something we've never had before. Until now, breaching privacy in Israel had almost no price tag. So this is a dramatic step forward in aligning with international standards, especially the GDPR, and that's important for a small market like ours.
You know, we must stay compatible with the GDPR. But that said, the law is far from perfect. It's essentially technical, it's enforcement focused. Israel's privacy framework still relies almost entirely on consent as the gateway for data. And you know that in today's world, consent is what I sometimes call the biggest laundering machine of our generation.
You know, nobody really reads, nobody understands, nobody can understand. Everyone clicks "I agree" or "I accept." So until the law is updated again, which may take a long [00:11:00] time, this will remain a serious weakness. In other words, Amendment 13 is progress, very nice progress, and I'm very proud of being part of it, but it is progress with patches, and you know, patches always come with holes.
Arlo Gilbert: So a very good law, as they say, leaves everybody involved in drafting it equally dissatisfied. I'm curious: Israel is a young country, and it was formed on the heels of the Holocaust. In your experience, is that societal fabric of the foundation of Israel woven into the privacy conversations that were being had when this law was being drafted?
Or was it kept at arm's length, as in: look, we're a first world country, we need to have excellent regulations that protect our citizens from surveillance? I'm curious to hear a little bit about the process and how that might've informed things.
Dr. Tehilla: I would say that, [00:12:00] like any other political question, this could be dealt with from two sides. On the one hand, you would say the fabric of Israeli society is built with very close connections between people.
Israelis are very, very friendly. They're close to each other. You know, our Friday night dinners are what you Americans sometimes have only once a year, at Thanksgiving. So in this sense, Israeli society is really close. No secrets. People are not used to having a lot of privacy. They are used to sitting very closely to each other on the buses. They are used to telling everyone everything, and so on and so forth.
On the other hand, if you mention the Holocaust, or you mention totalitarian regimes and so forth, surveillance and lack of privacy are, I think, preconditions for such regimes. And in this sense, I think there is a lot of [00:13:00] fear, at least in my social circles, of over-surveillance by the government, of microtargeting based on data in order to change public opinion, of democratic deterioration generally, which is based on lack of privacy. So I would say that this is kind of a mixture of everything that we carry on our backs.
Arlo Gilbert: It's really interesting. Yes, I was shomer Shabbat for years when I lived on the East Coast, and I definitely miss challah on Friday nights and the downtime. It is a key piece of being, you know, Jewish, and I've gotta imagine that Israeli society must be such a nice and supportive place to be Jewish.
So that's really interesting. So you were part of drafting this regulation, and it's now in effect, so Israeli citizens have protections [00:14:00] similar to those Europeans have. And one of your focuses is how the regulatory approach to new technologies that you're seeing in the US and the EU is not a great fit for a lot of other countries. Can you tell us a little bit about that?
Dr. Tehilla: Yeah, sure. So let me just say that, you know, here in Israel we are thinking from the perspective of a very small market. We always say to ourselves, you know, we can't be more rigid than the EU, because then digital platforms would bluntly tell us: thank you very much, but we are not interested in delivering services to your market. It's too small and it's too problematic.
So when I started actually getting more interested in what's happening in India, which is, you know, almost the third biggest economy in the world, all of a sudden I realized that there's something really similar between India and Israel. And, you know, you can't really compare them. The scale is so different: the size of the [00:15:00] population, the economy, the market, as I said. But there were some surprising similarities. You know, societies that thrive on innovation and entrepreneurship, emerging markets with unique languages and cultures.
And, as I said before, societies that don't always encourage strong privacy norms, and also, you know, relatively weak democratic and regulatory institutions. And then I ask myself all the time: where do countries like Israel and India position themselves between what we call the Brussels Effect, which everyone talks about, and the Washington Effect, which is more of a, I would say, laissez-faire kind of approach, specifically at the federal level? So what I tried to do, together with an Indian colleague called Raja Lura, was to actually ask ourselves: what should we do if neither model really fits us? Because the American [00:16:00] approach is too permissive and too weak in protecting civil rights.
And that could be a real problem for democracies like Israel's and India's. And the European approach, on the other hand, is really too rigid, too heavy for fast-moving innovation ecosystems, for smaller markets, and so on and so forth. So what we came up with was something in between, and we called it the Mumbai and Tel Aviv Effect.
And that's a third way of regulation, which on the one hand might protect rights more than the US model, but leaves more room for innovation than the EU model. And I think the fact that we deliberately rooted this idea in the lived realities of our two countries made it more, I would say, more substantive, not that theoretical.
So what we actually suggested is something that we call a framework law. And that means legislation that doesn't spell out every [00:17:00] detail, that doesn't try to regulate every technology, but rather sets principles and ties liability to compliance with technical standards.
By that I mean, for example, let's talk about, I don't know, protecting the cybersecurity of the supply chain. You can say: if you adopt this and this technological standard, then if there is a cyber attack, we are not getting into your computers and we are not going to fine you or hold you liable.
So it could be, you know, dataset or database management. It could be cybersecurity, as I said. It could be safeguarding AI applications. If a company aligns itself with the relevant technological standards, international technological standards, then in case of failure, it is protected.
And the advantage [00:18:00] is that the law doesn't need constant rewriting. You know, it's the standards that would evolve, and in this model, technological standardization becomes the golden key. It's industry driven. It adapts quickly, it keeps regulation aligned with the pace of innovation, and crucially, it also gives countries like Israel and India the flexibility to decide in which fields they want to require strict adherence to technological standards and in which fields, you know, we can give more leeway, allowing them to navigate their own path between the European and the American approaches. And this idea of technical standardization as the key for regulation in this very fast-moving technological era, I think it could be used by other countries as well. It could definitely be, you know, the [00:19:00] third way that we were looking for.
Arlo Gilbert: And in Israel specifically, under Amendment 13, you have this framework approach. What body in Israel is responsible for evolving those standards and for keeping up with the standards, since they're defined separately from the regulation itself?
Dr. Tehilla: So in this sense, my approach is really passive, because if you look at NIST in the United States, or you look at CEN and CENELEC in the European Union, you know that they're working very hard on creating technological standardization. ISO also does the same thing. So in this sense, I don't think the Israelis need to invent anything. They just need to translate current international technological standards into Hebrew. And I think that would give us the standardization that we want, that would help us actually [00:20:00] be more adequate with what is happening in other countries.
And I think the EU AI Act kind of adopts this approach as well, but in a much more strict and rigid way. So what I'm trying to say here is that Israel can have its own regulation as long as it is framework regulation that adopts international technological standards, and then Israel can decide in which fields it wants to create this framework regulation.
This way you can actually keep up with what is happening in the rest of the Western world, or the Eastern world when we talk about India, and at the same time decide for yourself what types of priorities you want to adopt when it comes to protecting your own industry, your own people, your own democracy.
Arlo Gilbert: Well, one of the things I love about this standards-based [00:21:00] approach is that, especially with things like AI, we're seeing the pace of innovation move faster than anything I've ever seen in my entire life. I can't imagine a room full of regulators trying to sit around and keep pace with that with new laws. Even maintaining standards around such a fast-moving technology is likely very challenging. Would you agree with that?
Dr. Tehilla: Yes, absolutely. I think this is the biggest deficiency of the EU AI Act: the fact that they tried to foresee technologies, and therefore, you know, their approaches are not going to be suitable to upcoming technological advances. On the other hand, I trust the industry to create standards much faster than I trust regulators to do so.
So if we have the incentive being given by regulation to the [00:22:00] industry to create its own standardization, that, I think, could be the only way to cope with the problem of this very fast-moving technology.
Arlo Gilbert: Well, we opened the Pandora's box of fast-moving technology, and I know that's an area where you spend a lot of time thinking. A lot of the technology regulation right now is focused on how organizations are using data today, but technology obviously doesn't stand still. So, you know, in your work, what sorts of future challenges in tech policy have you found? We're talking about the difficulty of keeping pace, and there's a lot of interesting stuff out there that a lot of people haven't even heard of yet that we have to be thinking about, both from a regulatory and from a societal perspective. And you've done some really interesting research in that area. I'd love to hear some more about that.
Dr. Tehilla: Okay, I am happy to [00:23:00] share it, because this is really a project that I'm particularly happy about. I think that sometimes we privacy people tend to overfocus on datasets and databases and how we collect data, and we forget real-world problems. So all of a sudden I started realizing that every time you see Mark Zuckerberg, he's showing up with smart glasses on his face, and he's obviously not alone.
You talk about Samsung and, you know, Google, and they're all racing to make these a big platform, maybe a substitute for smartphones. And I started researching smart glasses, and I realized that they don't just see what you see. They have three kinds of eyes: outward to the world through their cameras, inward to our bodies to see our reactions to everything that we see, and then networked into the cloud, into the internet. And this is thrilling, because it's the world of post-smartphone [00:24:00] reality. And I call it phygital reality, physical and digital combined, because it's not the internet behind screens anymore.
We are going to go out into the physical world with those new layers of reality that are projected by the smart glasses. And here I conducted a collaboration between Israeli and German and Finnish partners, but they were not policymakers or policy researchers or lawyers. They were industrial designers, because I felt that when we start talking about smart glasses, yeah, I can say privacy by design. Everyone says that. But when there are no designers in the room, what does it mean? Usually this whole privacy-by-design thing collapses into filling out compliance templates. But design is about so much more. It's about shaping [00:25:00] experiences, imagining futures, researching interactions, asking uncomfortable questions.
And this is exactly what I tried to capture. The problem is that smart glasses are not that widespread yet, so the question of how you think about near-future questions has become an interesting one. So what we did was use role-playing games, and I found myself moving from being a policy expert into being a dungeon master of a speculative future. And that shift really mattered to me, because only through this imagination and play could we really see where the cracks in our legal frameworks lie. Because abstract values like privacy, transparency, responsibility, they aren't enough on their own. We need to think in terms of interactions: person to person, person to [00:26:00] space, person to platform. So what really emerged in our role plays is how smart glasses blur the line we've always assumed between private and public. Because, you know, privacy belonged in private spaces. This is where you have the reasonable expectation of privacy.
And then in public spaces, it was normal that people could recognize us on the street or know where we've been, and all of that. But smart glasses change that completely. Imagine walking down the street, and someone's glasses don't just see your face; they pick up that you're exhausted or anxious, or maybe unwell or sick, or not vaccinated. Or picture a casual cafe where someone can download an app called Undressed Waitress, you know, and create this deepfake of the waitress naked, but she doesn't know who downloaded this [00:27:00] app yesterday. These are all speculative exercises, but they forced us to ask what happens when our emotional state and our dignity and even our bodies become visible in the public arena in ways we never consented to.
So in this sense, we have composed all kinds of recommendations for new regulations, for new governance of spaces. Who's responsible for what happens, you know, in the public phygital space? Is it the municipality or the federal government? Is it the platforms? Is it us? Can we actually consent to such things? All of those questions became very, very interesting for me.
Arlo Gilbert: Yeah, I mean, when you think about the tech stack that most people use today, right? We have a computer, we have a phone, and if I'm going to take a picture of you, even if it's in [00:28:00] a public space, you usually have a pretty good clue that I'm doing that, right? I hold my phone up, I angle it. You might hear a clicking sound when the camera goes off. There are all these clues that you're being recorded, that something's happening right now. But with the glasses, it really feels like there's a new wave of problems, because there's no indication, as you talked about with the imaginary app that undresses the waitress. The scary part about that is that the waitress doesn't even know this is happening.
Did you come to any conclusions during your research about likely paths to things like consent? And did you form any opinions about where this would go?
Dr. Tehilla: So, yes. We had a lot of policy recommendations that we think need to be [00:29:00] addressed. First and foremost is how we create new relationships between people, you know, between one person and another in the public space. We need to rethink the right to privacy in this sense, because the right to privacy usually doesn't exist, or doesn't exist in such depth, in the public space. And who is going to govern the app store for the glasses? Because actually they're going to work just the same as smartphones. You know, you're going to have the hardware, then the operating system, then the app store. So maybe it can be regulated via the app stores, which would not allow such apps to be offered. Or should it be the responsibility of, you know, the municipality, or the train station owner, or maybe, you know, the school principal?
So what we also came to understand is that we will need to recreate what we call the public sphere. We will need to differentiate between [00:30:00] different types of public spaces. And all of these need to get regulatory attention at the moment, because otherwise we're going to find ourselves with widespread use of smart glasses and no rules and no restrictions. And this is not going to be, you know, a comfortable world to live in. And I think, you know, Brandeis and Warren, who thought about the right to privacy a hundred years ago, obviously did not imagine that, but they did imagine what happens in the public space when someone walks with a camera.
So in this sense, I think we need to get back to thinking in this interactional, relational line of thought, and not only about what to do with the data or the datasets and all of those questions. So, just to make a long story short, you know, [00:31:00] what we really wanted was for our children to stop staring at the screen and go out to play, right? But now they'll go out to play with smart glasses, and what is going to happen then? This is where I think near-future thinking is so necessary.
Arlo Gilbert: Yeah. And you know, this isn't so different from a lot of the closed-circuit television debates that have occurred in the past, and I'm curious how you see those as being similar or significantly different.
Dr. Tehilla: I think there are similarities, but we need to understand that the depth and the inference abilities of machines are actually making a difference. If in the past no one could actually say, let's say from a distance, that I am, you know, furious or tired or anything like that, they can do that today.
And think about you walking down the street, and someone comes along on the other side of [00:32:00] the road, and you say to yourself, oh, he has a beautiful t-shirt, I'd like to buy one like this, and you ask the glasses, which is great. And then you say to yourself: he looks really nice, maybe I should go and start a conversation, but first I want to check a little bit about him. Is he vaccinated? Is he Jewish? Does he look nice, friendly? Until now, we have made those judgments based on our own senses and our own abilities, and all of a sudden the machine is going to do that for us. In what way is it going to change relationships between people? So that's slightly different from what I would say our ancestors imagined or anticipated.
Arlo Gilbert: Yeah, I mean, we've seen those shifts happen in areas like online dating, right? Dating used to be a very personal thing, where you [00:33:00] met somebody to make the decision about whether or not you would like to see them again and get to know them further. And, you know, the Bumbles and the Tinders and all the swipe-left-and-swipe-right apps have really compressed that discovery cycle. And it sounds like this smart glasses approach is only gonna further compress that, because now I don't have to look and make a decision and read your profile. Instead, my smart glasses just tell me instantly: he's Jewish, he's nice, but he has a bankruptcy in his past.
Dr. Tehilla: Yeah, I totally agree. Those are the billion eyes, you know, that are connected to the smart glasses. And I think the difference between all those dating apps, for example, and smart glasses is that the dating apps happen on the internet. You have the ability to decide whether you wanna move them into the real world, right?
So we [00:34:00] used to say, whatever happens on the internet remains on the internet, right? But all of a sudden, and this is why I call it a phygital reality, it is going to be totally mixed. You're going to go out into the physical world wearing those glasses, and that could change both relationships between people and the relationship between you and the physical world around you.
Think about it: you won't need any kind of traffic signs or traffic lights, or any signs telling you when the next train is coming. The glasses will show it to you personally. So in what sense is the physical world about to change, to be re-engineered, because of smart glasses?
This is something I find fascinating. And what's most important here is the fact that it is happening very fast. The technology behind smart glasses is what is actually allowing it [00:35:00] to become a very near-term prospect, a very short-term plan for big tech, unlike, let's say, the cemetery where the Google Glass was buried a decade ago.
Arlo Gilbert: You know, you mentioned Google Glass, and I'm glad you did, because if you remember, they had a nickname for anybody who wore them. I think it was "Glasshole." And I remember reading news about people who would wear those and go somewhere public, maybe a bar or a cafe, and there was actually violence against people for wearing those, because individuals felt like, I don't want you recording me and taking snapshots surreptitiously and learning about me without having asked me.
And I'm curious: we look back at that Google Glass experiment, and clearly the technology was not ready, and society was not ready. [00:36:00] Do you feel like that's changed? Do you think that if people wear these smart glasses, that will become a socially acceptable norm that we need to begin bracing ourselves for? Or do you think there's a chance that nobody's willing to put these on?
Dr. Tehilla: So that's an excellent question, actually. And I think this is a classic privacy paradox, because on the one hand, a lot has happened since, you know, a decade ago. We know that we are being surveilled all the time. We know that the depth of the data companies have on us is widening, and it can be processed into unbelievable results. On the other hand, I think the awareness towards privacy has changed dramatically. There was a kind of market education here, which we could [00:37:00] witness during COVID with surveillance technologies, and which we can see in a lot of awareness and legislation around the world. And in this sense, I think something interesting is happening, because we will accept smart glasses. If we already put earbuds, you know, like the new Apple earbuds, in our ears, which can actually monitor our heartbeat, then we will wear smart glasses. But on the other hand, I think the call for regulation, for helping us, for protecting us when we go out into the street, this call is going to be stronger. And I wanna be ready when this call comes, because I wanna come with all different kinds of recommendations on how to do it, how to look at it, what types of questions we need to ask.
Arlo Gilbert: And you've done some additional research on some topics that are pretty fascinating. I know recently you completed some research on people managing people through machines. You know, we talk about acceptance, we talk about AI, we talk about [00:38:00] consent. I'm interested in understanding what you saw in terms of the growing use of AI systems in managing workers and workplace relations, because this is a whole other topic where the level of consent is different. There are different regulations governing employer-employee relationships. What did you learn, and what did you find?
Dr. Tehilla: So first of all, the HR profession is changing dramatically, 'cause sometimes when we talk about the intersection of AI and the workplace, we ask ourselves: is my job going to become obsolete? Am I going to lose my job? Can I use ChatGPT while I do my job? But my project was a little bit different, because I explored how employees are increasingly being managed through machines, how AI systems are shaping the entire journey of employees, from [00:39:00] hiring to promotion to dismissal. You know, it all started during COVID, when people started to work remotely, and then employers wanted to see my clicks, or how many windows were open on my screen, or how fast I responded to emails and what my sentiment was.
So it all started there. But all of a sudden, when we think about agents and about different ways to assess productivity, this is a place we should start looking at. And one of the things I found in my recent research was another privacy paradox, because when we spoke with senior managers or with HR managers, they talked about privacy as a top priority.
As I said, you know, the market education here was amazing. But at the same time, they often deny employees, for [00:40:00] example, the right to access their own personal data, or the right to correct or erase this data. So we see some kind of a paradox here. What we also found, I think, was that people are thinking a lot about privacy and about cybersecurity of employees' data, but they don't give enough consideration to what's next, which means decision-making by machines, and inferences machines can make about employees' emotions. You know, the switch from human consideration to machine decision-making, all those types of questions seem to be unanswered at the moment. And I think anyone who works at this intersection of work and privacy should be aware that that's gonna be our next mission, our [00:41:00] next aim: to start resolving, or creating, those new contracts in workplaces where employers can know so much more about their employees.
Employers can change the behavior of their employees. Employers don't need to make any kind of decisions about their employees, because the machines are making them. So in this sense, there's still a lot to do. But one thing that was really fascinating to me in this project, and this is actually going to be my next research project, I think, is the rise of digital twins of employees, human digital twins of employees. You know, they do that in medicine, in order to trial or simulate all kinds of new medicines on a specific person, but I think the workplace will be the next frontier. Employers are starting to build digital replicas of their workers, for example, to run [00:42:00] salary simulations, to train them, even to send them into meetings when they can't be there in person.
And you can imagine many more scenarios, you know. So I will focus exactly on this, because just like with smart glasses, it raises profound questions, not only about privacy, but also about identity, about philosophy, you know? Do I have a right to privacy over my digital twin? What are the limits of an employer's prerogative to use it?
And in the end, you know, where do you bury the twin when the employee leaves the company? So I think there is an industry growing: startups, let's say HeyGen and others like it, that are no longer focused on, you know, generic avatars or agents, but on creating precise replicas of real, identifiable people. And that makes this, I think, one of the most intriguing and urgent [00:43:00] privacy debates we are about to face. So, you know, this is going to be for our next meeting.
Arlo Gilbert: And, you know, to somewhat wind this down, one of the themes that sounds like it's coming through in your research is this kind of excessive focus on privacy, right? To the detriment of paying attention to the very near-term, next-couple-of-years AI shifts that need to be addressed. Is there anything you would recommend that privacy professionals or compliance professionals do to avoid getting stuck with their noses in the book, so to speak, and instead focus on the consequences and challenges that are coming up soon that they might not be prepared for?
Dr. Tehilla: Yeah, I totally agree. And you know, I think of myself not as a futurist but rather as a presentist. When I think about what professionals need [00:44:00] most today, I think the skill of near-future thinking is the most important skill. And we're not talking about 2050 scenarios, 'cause I have no idea what's gonna happen in 2050, but I am talking about the next three years, maybe five years.
And I think we all need to be aware of technologies that are already not in the lab but in early adoption, because they will shape the reality our clients, our regulators, our citizens live in. And I think for lawyers and for policymakers, this mindset is critical. If you only react to problems once they're fully visible, you're already too late.
We've seen that. So if you want me to be more precise, I can say that from my research I see four challenges that don't get enough attention. First is the deep inferences machines can make about us, not just from what we say, but from what can be predicted from patterns in our data, and [00:45:00] it runs very deep now. Second is the emotional layer: machines don't just recognize our feelings, they influence our feelings, step by step, through behavioral micro-steps. And third is the collapse of the old boundary between public and private spaces. This is what we talk about when we talk about smart glasses.
And the fourth is the rise of digital twins, of human digital twins. This is going to be a major privacy challenge. Those are not abstract risks. They're real debates that are waiting to happen, and that's why I keep insisting: if you want governance that works, you need to stretch your imagination just a little further ahead. So privacy is important, but it's only one piece of a much bigger puzzle, and we need to think about it right now.
Arlo Gilbert: Wow. I can't help but feel that I'm living in a Philip [00:46:00] K. Dick science fiction novel when I talk to you. I believe everything you say is gonna come true, but it's just so hard to imagine what we can't see sometimes.
Dr. Tehilla: I know, but this is the present. It is not the future, and it's not sci-fi. Look at Mark Zuckerberg. Just listen to him, and you'll see that everything I say is reality.
Arlo Gilbert: Yeah. Well, we're all living in Mark Zuckerberg's world these days. So, you know, we spend a lot of time as privacy professionals, as researchers, focusing on ethics, thinking about the downsides and the negatives, and that can sometimes come across as a little bit of finger-wagging, or, you know, "listen to me," or "I'm gonna tell you why the answer is no." But one of the things that we have learned through the process of this podcast is that we're all human. And despite the fact that we may have academic and philosophical beliefs, we're still people. And so I'm really interested to understand, [00:47:00] from your perspective, what are the things that you do that maybe you wouldn't recommend other folks do? Do you mind sharing one?
Dr. Tehilla: Oh yeah, sure. And then I wanna tell you a short story, you know, to show how even very literate people can get trapped. So, first of all, I constantly tell people to protect their digital footprints, and yet I use the same three passwords across way too many platforms. I always say to myself, you know, just refresh them, change them. And what's so funny is that the passwords are actually made from the initials of my kids' names, and I joke to myself, who would've ever guessed that? But the other story I wanted to share with you is that I was the subject of a spear-phishing campaign about two months ago.
I received a very convincing email from a well-known person in Israel, and I started chatting with him, [00:48:00] and we decided to set up some kind of a Zoom meeting. And only, you know, really two weeks into this conversation did I realize that it was not him; it was a spear-phishing campaign. And I said to myself, I need to work more slowly. I need to check my emails more carefully. So when you approached me, Arlo, I said to myself, could this be real, or is it another spear-phishing campaign?
Arlo Gilbert: Well, I'm glad to share that I am real, but it does say something about the sophistication of those attacks if somebody whose entire profession is focused on these kinds of topics can still find herself, you know, confused and tricked by some of these tactics. They're very effective.
Dr. Tehilla: Oh yes, absolutely. And this is why I decided to share this story, you know, in Israeli media and international media, just to tell everyone: be careful. If it [00:49:00] happened to me, it could happen to any of you.
Arlo Gilbert: Well, Dr. Shwartz, thank you for joining today. If you're interested in learning more about her research, I highly encourage you to go over to Google or Bing or Brave Search, whatever you use, and type in the I-P-P-S-O project at the University of Potsdam. You will find it as the first result, the I-P-P-S-O UC lab, and it's got a whole lot of interesting information about the smart glasses research and the smart glasses project that Dr. Shwartz was sharing with us today. Dr. Shwartz, thank you for joining us. This has been a pleasure.
Dr. Tehilla: Thank you so much for having me. It was really nice. [00:50:00]
Meet the host
Arlo Gilbert is the host of The Privacy Insider Podcast, CIO and cofounder of Osano, and author of The Privacy Insider Book. A native of Austin, Texas, he has been building software companies for more than twenty-five years in categories including telecom, payments, procurement, and compliance.