The Privacy Insider Podcast
The Data Privacy of the Dead & Critiquing the Digital Divine with Carl Öhman of Uppsala University
As AI systems increasingly shape how people love, parent, vote, and govern, questions about data ownership and human agency become urgent. This conversation is timely because society is quietly outsourcing judgment to machines trained on past data, while policy and ethics struggle to keep pace. That tension makes this discussion essential right now.
This episode features Carl Öhman, Associate Senior Lecturer in Political Science at Uppsala University, whose research examines AI, data, death, and democratic power. Carl explains why digital traces outlive people, how AI increasingly governs present decisions, and what is lost when humans stop taking risks in favor of statistical certainty.
Episode Highlights:
- 00:00 Introduction.
- 02:42 Curiosity about the internet shapes a path across sociology, political science, and AI.
- 07:12 An academic upbringing normalizes uncertainty and intellectual exploration.
- 10:03 Digital traces persist after death, raising unresolved social and political questions.
- 15:38 Deceased individuals lack data protection rights, creating privacy and security risks.
- 19:42 AI systems act as a personified version of society’s digital past.
- 23:50 Outsourcing decisions to AI weakens faith, courage, and human agency.
- 29:14 Genuine political dialogue requires vulnerability rather than optimization.
- 34:39 AI trained on AI risks allowing the dead to govern the living.
- 45:34 Data privacy is a collective societal issue, not an individual consumer problem.
[00:00:00] Arlo: There's no such thing as a self-governing market. The market is always dependent on the states providing a space for it, and the market always acts in response to the incentives that we set up for it. Welcome to the Privacy Insider podcast. My name is Arlo Gilbert. I'm the founder at Osano, a leading data privacy startup, but today I'm your host on the Privacy Insider. We're joined today by one of the sharpest minds in digital culture and ethics, Professor Carl Öhman from Uppsala University in Sweden.
[00:00:58] Arlo: Carl's breakthrough book, The Afterlife of Data, was named one of the top books of 2024 by Nature, The Guardian, and The Economist. It challenges everything we thought we knew about what happens to our data and our digital selves after we're gone. In his work, Carl dives into the hauntingly fascinating reality where your online footprint might actually outlive you, and AI can already talk with the digital dead.
[00:01:27] Arlo: Today we're peeling back the layers of his research, his journey from Oxford to global thought leader, and even the deep philosophical underpinnings of his forthcoming book, Gods of Data, a provocative, atheist-inflected critique of AI. This episode goes from the sociology of death to the soul of your machine.
[00:01:48] Arlo: Carl, welcome to the show.
[00:01:50] Carl: Thank you so much.
[00:01:52] Arlo: Well, no, Carl, you have some really interesting views and I'm super excited to talk about your journey into AI and data. Um, but before we dive into this entire journey, perhaps you could tell us a little bit about your background, how you ended up in political science, and how that political science somehow became centered around data and AI.
[00:02:14] Arlo: Um, maybe a little bit of background about yourself would be helpful.
[00:02:18] Carl: Sure. Yeah, it's, uh, it's one of these kind of funny stories. I actually, um, I applied to political science when I started university. It was my first choice, and I got rejected. Uh, so I had to go with my second choice, which was sociology. And so I studied sociology and comparative literature, uh, as a double major for my undergraduate.
[00:02:42] Carl: And I'm just a, a generally pretentious person, so I knew that I wanted to write about, like, the big revolution of our time. I wanted to write about the internet and digital technologies in some way. So I wrote both of my, um, bachelor's theses on, uh, digital technologies and social media, and around that time, I think it was towards the end of my first year,
[00:03:08] Carl: I went to visit two friends of mine who were doing their undergraduate degrees at the University of Oxford. And I just had the most amazing five days when I visited, you know, it was sunny every day. Uh, it was the beginning of summer. We just hung out in parks and had port and, uh, you know, stayed up late in these beautiful gardens, um, which is like a completely false impression of Oxford.
[00:03:37] Carl: It always rains, it's always cold, but for those five days it was just absolutely amazing. As I said, I had a lovely time, but I kept thinking, like, hang on a minute. These people are not so much smarter than I am; there should be a place for me here. Uh, and then I found out that there was something called the Oxford Internet Institute, and I was like,
[00:03:57] Carl: The Internet Institute? But that's, that's what I study, that's what I do. So I applied for their master's degree and was admitted. Um, and there I met a philosopher, uh, called Luciano, who's the father of, uh, the philosophy and ethics of information. Uh, I did my PhD in his lab as well. Uh, so when people ask me about my discipline, my, uh, my response is often that I'm a sociologist who was brought up by analytical philosophers in Oxford.
[00:04:30] Carl: But when I was done, uh, I knew I wanted to leave the UK, I knew I wanted to go back to Sweden. And there was this position that opened up in AI and political communication at the, the political science department, the first place I applied to when I, uh, applied for my undergraduate. Uh, and so I got this position and, and that's where I'm still at.
[00:04:53] Carl: So I, I guess I'm kind of like a rogue political scientist now.
[00:04:59] Arlo: That's amazing. Now I have to ask, because the way you talk about it, it's almost as though, like at birth, you came out interested in research. I mean, what gets somebody interested in these kinds of topics before you end up in the academic system, uh, as, you know, a graduate or a master's student or even at the college level? Was there something that inspired you, or something that kind of drove you in that general direction?
[00:05:23] Carl: It's a, it's a boring answer, really. Um, most researchers, like, they have such inspiring stories of how, like, I'm such an unlikely person to become a researcher, I never thought I would, but then I had this wonderful professor or this wonderful teacher. For me, that source of inspiration would be, like, everyone in my family, like everyone in my family is an academic.
[00:05:47] Carl: It's kinda like, what do you do? I was like 10 when I learned that there are, there are other jobs than being a professor. Um, jokes aside, like I, I'm, I'm from a, a very academic family. That being said, there was never any pressure whatsoever to pursue academic studies of, of any kind. Um, my, my dad, who's, uh, a professor of education, never once in my life asked me if I had any homework to do.
[00:06:17] Carl: He never asked me, like, about my grades. I, I know that once he asked, like, have you, like, do you get grades in school? And I was
[00:06:27] Arlo: Do you get
[00:06:27] Carl: yeah, yeah. And I was like, yeah, I, I do. And he was like, oh, are they any good? And I was like, yeah, they're very good. And he was like, oh, nice. And that was like the one talk that we ever had about academic performance.
[00:06:40] Carl: So the first couple of years after finishing high school, I think I spent three years just exploring. I did some traveling, I worked some, like, strange jobs. And in a US context, I suppose that would be referred to as sabbaticals or, like, gap years; in a Swedish context it's quite common. Um, but I think I was like 23 when I decided that I should go to university.
[00:07:10] Carl: And even then I wasn't sure. As I said, I wanted to do something with the internet. This was the early to mid-2010s, when everybody was so hyped up about, like, startup culture and technologies. I, uh, I pictured that I would do something like that. But I always had this kind of academic... um, I felt very at home amongst academics.
[00:07:37] Carl: I think that's what eventually drew me to the university. And, and once I got there, I, I, I got completely stuck.
[00:07:43] Arlo: Well, I, I can completely empathize with you, and although we didn't talk about this, I also come from an entire family of academics. So I identify closely with kind of understanding that that's the only job in the world and that everybody gets tenure and then they get to work on whatever's interesting to them with very little oversight.
[00:08:01] Arlo: Uh, certainly is an exciting path if you can manage to get there.
[00:08:05] Carl: Yeah, it's, I remember the one time when I was a kid, um, this was in high school, the only time that, like, I sort of suspected that there was an expectation that I would eventually become a professor. I read an essay to my dad that I had written in, uh, in my Swedish class. And, uh, and my dad's response was like, hmm, that's interesting. I think that maybe you'll write your doctoral dissertation about the Swedish language.
[00:08:38] Arlo: That's right. Some people have parents with expectations that they're gonna become a lawyer or a doctor. You know, if you grow up in an academic family, they think about what your doctoral thesis will be about. I love that. Well, so speaking of doctoral theses, because you, you know, you developed a passion for, um, AI and for data, political science, the intersection of the internet, and then you had this kind of,
[00:09:02] Arlo: I don't know, kind of a prescient set of views that led to a book that is quite incredible, The Afterlife of Data. And I think that our audience would benefit from reading it, but perhaps you can give us a quick, high-level overview of the topics that you discuss in there.
[00:09:18] Carl: So the, the elevator pitch would be that everything that you leave behind on the internet, and I'm not just talking about the data that you consciously upload there, you know, your Instagram photos or your Facebook posts, but like virtually every time you open your phone, every time, I mean, every time you just take a step, your iPhone will record that and send it to the Apple Health app server.
[00:09:43] Carl: So literally everything that you do on your devices, and when I say devices, I'm not just meaning your phone and your laptop, but every connected device, most cars are connected nowadays. Your fridge, your oven, whatever. These gadgets leave traces behind, right? And those traces are still gonna be around when you die.
[00:10:08] Carl: Now somebody's gonna be faced with a question: what do I do with all these digital footprints? And that's a practically challenging question. Where are these data located? Who owns them? How do you gain access to them? And so on. It's also a morally challenging question of, what should I do with my loved one's data?
[00:10:32] Carl: Let's say that one of my parents passes away, or my grandparents. Okay, what should I do with the data? Maybe my kids would wanna see their Instagram profiles in the future, or see their Facebook posts, or whatever kind of data it is. Now I'm asking those kinds of moral questions, but not about
[00:10:52] Carl: one person's data, but about an entire generation of internet users. So in the next three decades alone, about two and a half billion people are gonna die and leave huge heaps of data behind on the web. Now, my book is asking, from a kind of sociological and political point of view: what are we as a society gonna do with these data?
[00:11:19] Carl: What are the political and sociological and economic implications of this question of what do we do with the digital dead? And that's, that's what the book is about.
[00:11:30] Arlo: Yeah, I mean, just hearing you talk about it, I feel emotion, because it's about death, right? And, you know, I think that, uh, I think people in Europe have a little bit more of a reality about the fact that life comes to an end, whereas, uh, you know, in America we have a little bit of a cowboy culture.
[00:11:52] Arlo: I'm never gonna die, I'm not gonna think about it. So what is the right answer? I mean, do we come to a conclusion in your book about how we should be thinking about this data? Because I can think of a million things I don't want my kids to know, right? That Reddit post that maybe I got into when I was in a bad mood and I was kind of snarky and I, you know, posted some kind of, you know,
[00:12:15] Arlo: "I know better than you" reply. Should that belong to my family, or should it disappear when I die? Do you formulate any conclusions about the right philosophical approach?
[00:12:26] Carl: First let me just say that I'm really glad that you say it sort of resonates on an emotional level, because that's something that was very important to me in writing the book, and generally in my research: drawing that connection between the individual psychological and emotional experience, you know, something that everyone can relate to.
[00:12:45] Carl: Whenever I bring this topic up, people are like, oh, this happened to my friend, like I see my friend popping up on social media every birthday and so on, and they have a sort of emotional connection to that. And to connect that experience with these global power struggles over who owns the past, and, you know, what are these major long-term sociological implications?
[00:13:10] Carl: That's really a goal of mine. But to get back to your question of, um, is there a right answer: there are a couple of things to say on this, on various levels. Uh, there are some things that are obviously bad, that everyone would want to avoid. So just to give you a quick example, dead people generally don't have any data protection rights.
[00:13:40] Carl: The GDPR, which governs data privacy in Europe, for instance, explicitly leaves out deceased individuals. That means that if we would have a situation where a big tech platform, that may host hundreds of millions of dead profiles, and sometimes an entire lifetime of data on those profiles, or say it would be, um, one of these ancestry, genealogy sites that holds the DNA of tens of millions of people, many of whom are gonna be dead...
[00:14:18] Carl: Let's say one of these companies goes bust. What's gonna happen to the data? Well, it's gonna be auctioned off to the highest bidder. Now, we living users normally have some means of protecting ourselves, but for these potentially hundreds of millions of dead people, their data is just gonna be auctioned off to whoever is the highest bidder.
[00:14:38] Carl: And that may be Russia, China, you know, any service operating within the same, within the same, uh, industry really.
[00:14:47] Arlo: Well, and now, and now you've got, you've got companies that have figured out ways to really take these massive quantities of data and use them in ways that perhaps, previously there was the question of why would I want a dead person's data? What, what value would that provide to my platform? But now they have a use.
[00:15:06] Carl: Yep. Oh, definitely. And I mean, they have economic uses. You know, it's increasingly becoming a thing that you can, uh, train an AI bot to impersonate someone based on their data. So you can sort of create a digital ghost of someone and then sell that ghost avatar back to the bereaved family, which obviously has some pretty grave, uh, moral implications.
[00:15:34] Carl: But there's nothing really stopping you from doing the same, basically. Let's say that I own the data of your deceased parents or grandparents, uh, and I can contact you and be like, oh, look, I have their data. Do you want me to create a clone of your departed father? Or do you not want me to do that?
[00:15:52] Carl: Either way, pay me money. That is an entirely legal thing to do, because there's nothing protecting dead people's data, which is bizarre.
[00:16:01] Arlo: that's terrifying.
[00:16:02] Carl: It's terrifying. And this is not even mentioning the political implications of this. Imagine that one of these tech giants went down and data about
[00:16:13] Carl: hundreds of millions, potentially billions, of users falls into the hands of morally dub... I mean, you can argue that they're already in the hands of morally dubious actors, but politically dubious actors. I mean, imagine what a weapon you have if you have a lifetime of data on, like, every citizen of your enemy country.
[00:16:37] Carl: So we're putting ourselves at huge risk in sort of outsourcing our national and arguably even civilizational interests to this handful of tech billionaires. But like I said, I wanna make clear, these are the obviously bad scenarios.
[00:16:57] Carl: These are the things that I hope we can all agree would be very, very bad. But that in and of itself doesn't solve the larger question of which principles ought to guide the governance and curation of our collective digital heritage.
[00:17:16] Arlo: Yeah, these are big, big questions. I mean, candidly, I spend most of my day thinking about very complicated topics related to data and use and privacy, but this is really kind of a next step up and above. So I have to ask you, because you wrote this wonderful book, The Afterlife of Data, but you wrote it way before this whole AI revolution
[00:17:41] Arlo: kicked off. And so your book really looks very prescient. And as I understand it, now you have been working on another book, and I would love to hear about it 'cause I haven't read that one. You gotta send your friends a preview copy.
[00:17:57] Carl: I, I will. The first draft of the manuscript is actually just about to be completed. I'm finishing off the, the last chapter right now. But, so the new book is titled, um, Gods of Data, and the subtitle is An Atheist Critique of AI. And, um, it's, it's funny 'cause it sounds like I'm not very creative
[00:18:20] Carl: with my titles, it's always something of data. But the idea here, to give you another elevator pitch, is that AI systems have become the gods of the information age, but not necessarily in the way that you would expect. So what's a god? In a theological sense, a god would be a supernatural, omnipotent being beyond space and time.
[00:18:43] Carl: I mean, that, that, that's
[00:18:45] Arlo: Sitting in the clouds. That's the picture we all have of a, of an omniscient being.
[00:18:49] Carl: Yeah. That's, that's not AI. It's not omniscient. It's, uh, it's good, but it's not that. Now, in an anthropological sense, a god would be an amalgamation of the authority of the ancestors. So the theory here is that all religion emerges first as a veneration of society itself, of the authority of the ancestors. But with time, the ancestors become too numerous and they collapse into a single identity, a single agency: God. Now, that is exactly how a large language model, for instance, works. It takes society, the digital traces that we leave behind in our everyday activities on the internet, and it collapses these into what is experienced as a singular agency.
[00:19:42] Carl: So what we're interacting with when we interact with, uh, these AI chatbots, or at least the large language model ones, is essentially a personification of society's digital past. Now, why am I making this argument, beyond the fact that it's cool and that it works very elegantly? I mean, there's an obvious
[00:20:03] Arlo: It. It does. It can. I mean, you connect the dots here very well.
[00:20:07] Carl: Thanks. I mean, anyone who studies sociology, even if you've just taken the introduction course, I think will have heard of, uh, this theory, Durkheim's theory of the elementary forms of religious life, which is this theory that all religion is essentially the worship of a personified form of society.
[00:20:29] Carl: So it's like one of the foundational theories of sociology, really. But the reason I'm making the argument, that I'm establishing this, I call it a structural symmetry, between gods and AI systems, is because this move allows me to mobilize one of the richest traditions, if not the richest tradition, of Western thought, which is religious critique.
[00:20:52] Carl: So I can use Hegel, Feuerbach, Marx, uh, Nietzsche, Freud, who are all critics of God and the idea of God, and I transpose their arguments and apply them in the realm of AI instead. And to be clear, these are atheist thinkers, but they're not atheists in your standard way. They're not saying, oh, God is not real, therefore we shouldn't believe in him. Their argument is far more sophisticated. They're saying, if there's a common denominator in this very complex tradition, it would be that the problem is not the fact that God isn't real. It's that if there were a perfect being beyond space and time that would sort of solve all of our problems,
[00:21:39] Carl: that wouldn't be desirable. We wouldn't want to live in a world where there would be an external being who could step in and solve all of our problems...
[00:21:50] Arlo: Well, hang on. Time out. Time out. Time out, because you're just, you're describing ChatGPT to me right now.
[00:21:56] Carl: in
[00:21:56] Arlo: all my problems now in a lot of ways.
[00:21:59] Carl: And, and that's kind of, that's how I use the... I mean, I'm, um, under no illusions that AI actually solves all of our problems. But the point that I'm making in the book is that we increasingly come to use it with that logic, that we outsource our decisions to the past.
[00:22:20] Carl: I call it, in the book, the tyranny of the past over the present, in that this age, which is commonly understood as an age where the machines come to dominate humans... I'm saying, no, no, no, it's humans all along. It's just an age where past humans come to dominate present humans, where our own past becomes personified and sort of externalized into this external being that dominates us.
[00:22:51] Carl: But let me give you just an example of how that works in practice. So I have this parable in, uh, in one of my chapters where I say, okay, let's say that you have a young couple who love each other. I call them Alice and Bob, and they have a great passion for one another, but one day they begin fighting, and it turns out that they hold radically different values when it comes to building a family and what's a good life. So they decide to consult an AI couples counselor, and they give it all of their data, their DNA, childhood photos, everything they ever Googled. The AI crunches the numbers and it spits the answer back to them, saying, with 98% certainty, this is gonna end in disaster. You should divorce now. In fact, I've already identified replacement partners that statistically fit you much better, and the chances of you regretting this decision are virtually zero.
[00:23:50] Carl: Now, according to standard AI ethics, it's very difficult to articulate a reason why they shouldn't go their separate paths. Yet I sense that most of us have this kind of gut feeling that a person living their life in such a way loses something essential in what it is to be human. And I think that this tradition of religious critique allows us to articulate what this quality that is being lost is.
[00:24:20] Carl: In this particular chapter, I argue that, ironically, that quality that is lost in such a life would be faith. Because faith is always done in spite of something. We take a leap of faith. We're saying, I know that it's in vain, I know that the chances of success are virtually zero, I'm gonna do it anyway.
[00:24:42] Arlo: And this is the story arc for virtually every great story told in human history: a human being choosing to do something that defies the odds and then winning or dominating in whatever that attempt is. And they become the hero for having bucked the trend and fought against the odds.
[00:25:04] Arlo: And, and we love that as people. So it sounds like part of the, part of the argument is that this may be taking away some of our humanity.
[00:25:13] Carl: Yes, a particular kind of our humanity. We can call it faith. Uh, and an important part of faith is courage. Like you say, every good story is about someone courageously doing something. And, uh, in the book, and I love this because the book is full of these biblical references and it's very highbrow.
[00:25:34] Carl: But to demonstrate, or to illustrate what I mean, I use this episode from the first Harry Potter book, when Harry Potter comes to Hogwarts and he puts on the magical Sorting Hat, which supposedly has some connection to your inner essence. The Sorting Hat goes, oh, I think you would be great in Slytherin.
[00:25:54] Carl: And Harry goes, not Slytherin, not Slytherin. And it says, okay, then it has to be Gryffindor. And towards the end of the book, Harry approaches Dumbledore, and he confesses, I don't really belong in Gryffindor, I belong in Slytherin, that's what the hat said. And Dumbledore says, well, it's not our essential qualities, Harry, that define who we are, but our choices. Now, my argument is that the hat wasn't wrong.
[00:26:24] Carl: The hat was right all along, but in defying its verdict, even though it said the best fit is Slytherin, and Harry went to Gryffindor instead, he retroactively... retroactively, he belonged in Gryffindor all along, but it wasn't until he made that leap of faith, until he defied the hat, that he actually defined for himself who he was.
[00:26:51] Carl: So if you go back to this couple, and you have the magical sorting hat in the shape of modern AI that says, well, here are your chances of success, and they say, we're gonna go with it anyway, we're gonna take this leap of faith because I have faith in our love, then surely a couple that does that, that is ready to sacrifice so much for their love,
[00:27:14] Carl: then after they've made that leap of faith, they're meant to be together. So that's what I'm trying to argue: that we must recognize our human ability to act in the present rather than hide in the safety of the past.
[00:27:32] Arlo: Wow, that is deep.
[00:27:34] Carl: And, and to be clear, the argument is not, I wanna be really clear, I'm not saying that humans are epistemically superior to the machines, or that AI is bad at taking these things into account. No, no. The machine is right. There is 98% certainty that this is gonna end in disaster.
[00:27:54] Carl: Still, it's important to defy it.
[00:27:57] Arlo: So let me ask you, I don't often get people who are deep into philosophical discussions on this show. So yeah, I grew up, um, probably somewhat like you. I watched a lot of Star Trek as a kid, my parents were really into it, and you know, we all saw the Borg, right? This kind of gigantic, cold, universal brain with billions of individuals that had collective thoughts.
[00:28:24] Arlo: When you think about these kinds of topics, does it ever make you question whether or not our kind of individualistic approach to humanity and society is the right approach? Meaning, are humans better off where everybody is making their own independent choices and taking leaps of faith, versus that kind of collaborative, collective type of past-driven knowledge?
[00:28:53] Arlo: Does that change? Is that better? Is it worse? Is there a good or bad here, or is it just up to the person?
[00:28:59] Carl: Hmm. So I think that I don't fully agree with the dichotomy. So this leap of faith thing is not necessarily an individualistic act or move. Actually, quite the opposite. When you make the leap of faith, when you say, I'm going to disregard what this personified past is telling me to do,
[00:29:25] Carl: what you're essentially doing is that you are opening yourself up to failure. You're being courageous in that you're daring to step into a relationship of fundamental vulnerability with others. And this is partly, this is a very, like, a personal experience. When I, uh, when I promised my wife to love her
[00:29:53] Carl: my entire life. I mean, I know, if I look at the statistics of how many marriages end in divorce, I know that it's kind of a ridiculous thing to say when I tell my wife that she's the best wife in the world. I mean, out of the 8 billion people on the planet, what are the chances that I happened to be in the same high school class as the person that was the best fit with my personality?
[00:30:19] Arlo: We're gonna have to make sure your wife doesn't listen to this episode.
[00:30:22] Carl: I write this in my book, actually. But when I say it, I mean it with my whole heart. So that's what I mean in saying that it's a very, um, it's a personal experience, but it's an openness to the other. It's saying, we're making this leap of faith together. Uh, I'm holding someone's hand as I do it, and it's the same thing.
[00:30:45] Carl: It's the exact same mechanism in politics. So politics also, like genuine political discussion, is very dangerous, because we make this leap of faith in opening ourselves to fundamental uncertainty, in that when I speak politically out of my heart, there's always a chance that people are gonna reject and ridicule what I'm saying.
[00:31:11] Carl: And that's what politicians today are trying to do, they're trying to reduce the risk of failure by outsourcing their communication to AI. They're constantly consulting various models: what should I say, which phrases to use, which phrases shouldn't I use? In the book I write about the 2016 Clinton campaign, which had this huge AI model that they just poured tons and tons of data into.
[00:31:43] Carl: You know, these models are like, what car do you drive, what kind of pet do you have, everything that you wouldn't even think would be politically relevant. And then they run simulations. I think the Clinton campaign ran 400 simulations every night, and that spits out where to speak, who to speak to.
[00:32:04] Carl: It basically tailors the entire campaign. And that is the opposite of taking a leap of faith and saying, I don't know if you're gonna like this, but here's what I have to say, and I am accepting that you may not like it, that I may lose by saying these things. And still, if I'm entering into a genuine political dialogue with someone, I also open myself to the fact that
[00:32:32] Carl: I may come out of this a different person. This may change my view and who I am. And we're increasingly losing this in this AI-dominated landscape where, you know, you're on X and someone writes something upsetting about, say, the Israel-Palestine conflict. And then you're like, okay, um, how should I respond to this argument?
[00:32:56] Carl: Uh, and then you just paste it into ChatGPT and ask it, well, what's the best response to this? And then you just take that and you paste it into the chat. The thing is, the person on the other side may do the exact same thing to maximize their point of view. But what you really have is two people not engaging in any risk whatsoever.
[00:33:18] Carl: They just outsource it to the authority of the personified past. So that's what I mean by saying that it's not this individualist versus collectivist kind of thing. It's rather that we must recognize that we must open ourselves to failure, collectively and individually, by being open to the other.
[00:33:36] Arlo: And it sounds like one of the real risks here is homogeneity in our society. I mean, we know that these models are using X to train their own corpus of data. So, you know, I've read many arguments that are concerned, more from a business perspective, about, well, if we are then using AI content to train these AI models, you know, does it all kind of revert to the mean,
[00:34:03] Arlo: where every single model always has the same opinion about every outcome? That sounds terrifying in a
[00:34:11] Carl: It, it is. And the way I framed this in the book is, you know how in the beginning I talked about how the gods are really the authority of the ancestors personified? And right now, the models that we have, at least they're trained on data generated during our lifetime. It's we who have produced the data that goes into the models that then sort of govern our societies. But imagine in a couple of decades, because the models that are trained today, the first generation trained on the internet, those are the models that are gonna train the next generation of models, both in a literal sense, that, you know, you use an LLM to train another LLM, but also in the sense that the corpus they're trained on will increasingly be written by AI itself.
[00:35:09] Carl: So we who are alive now train models that are gonna train models, that are gonna train models, such that in a couple of generations it's literally gonna be the dead governing the living. When you interact with an AI in, say, 2090, uh, insofar as they're still gonna be data-driven, you're literally gonna be interacting with, uh, an amalgamation of your ancestors.
[00:35:38] Arlo: I mean, I can't imagine if today all of the answers we got were coming from the 1920s Prohibition era, right? I mean, the viewpoints about life and the world and how to do things and the right ways to do things to accomplish the outcomes are all changing so fast. Well, these are really deep questions, and I appreciate you taking us through this new book, uh, Gods of Data.
[00:36:01] Arlo: So I really hope that when you do, uh, get this out into the world, we can have you back on to talk more in depth about that particular book.
[00:36:09] Carl: Yeah, I'd love to. And you know, now we've had a very, sort of, deep philosophical discussion, but I do want to emphasize that it's a book that I think anyone can pick up and find thoroughly interesting, I hope, because the areas where I apply this perspective, I think, will resonate with most people.
[00:36:30] Carl: There's a chapter on... or, resonate with, or at least, um, I think most people can recognize why the stakes are very high. The first chapter is about, uh, the AI-for-parenting industry. People increasingly use AI to raise their kids. It's like a huge industry where, you know, your toddler has a tantrum, or throws a tantrum, and, uh, you're tired, you haven't eaten.
[00:36:58] Carl: What are you gonna do? Well, you just take out your app and you tell the AI what your child did or said, and it will literally provide a script for you of, like, here's how you terminate this tantrum, here are literally the things to say. When your kid talks back, you just type it into the app and it gives you another answer for,
[00:37:21] Carl: uh, you know, what is the best way to deescalate the situation. And then there's a chapter on love, the one I talked about with the couple, and that's also like an entire industry of these AI couples counselors that give you very statistically detailed answers on your chances of success and so on.
[00:37:41] Carl: And then there's a chapter on AI in law and a final chapter on AI in political communication. These areas, as I said, love, parenting, law, politics, these are all areas where it's so essential that we open ourselves to the possibility of failure. Otherwise, we're not loving, we're not politically deliberating, and we're not parenting someone else.
[00:38:10] Carl: The results may be better. We may find better jobs, we may be better parents, lovers, politicians, better lives in short, but they're not our lives.
[00:38:26] Arlo: Yeah, it's at the cost of our own humanity. And it's so funny, I mean, as I hear you talk about the toddler tantrums, I have a toddler. Mm-hmm. All I could think is, what app was that that helped them stop the tantrum? Because boy, that sure is useful, right? Quick answers, quick solutions. Well, look, we talked a lot about, um, philosophical concepts here, but you know, you've spent a lot of time now thinking deeply about data, about the impacts of these kinds of tools on our society.
[00:38:55] Arlo: I'd love to hear your thoughts on, on policy. How should humanity at large be thinking about policing and governing and making decisions about what we permit and what we do not permit as it relates to these kinds of, of amazing new technologies? Do, do you have any viewpoints on that?
[00:39:15] Carl: I do, and I get this question, or versions of this question, from journalists whenever I, uh, engage with media, which tends to be a lot, actually. I think that it is an interesting question because there's a hidden assumption underneath it, which is that there is the industry, which is this organic thing that sort of just happens, and then we can place regulation and governance as layers on top of this activity, on top of the commercial activity. And I'd like to think about it the exact other way around. There are no markets without states that provide a space for those markets. And this is historically as well as conceptually true. There's no such thing as a self-governing market. The market is always dependent on the states providing a space for it, and the market always acts in response to the incentives that we set up for it. So rather than focusing on how can we regulate the industry and regulate the production of technology, I would like to think about it in sort of the opposite way: how can we change the way that we produce things? How can we change the actual activity itself? And I think there are many interesting paths towards that.
[00:40:44] Carl: There are many alternative ways of building technology. I'm thinking of, um, the open source community, for instance. Incredible innovation happening there. And, uh, I recently came from, uh, the International Association of Privacy Professionals' big, uh, Data Protection Congress in Brussels. There, the big discussion was, well, sort of the same thing.
[00:41:11] Carl: How do we establish tech sovereignty? All of Europe's technologies are basically owned by external states. So, you know, if the United States wanted to shut Europe down, they could, in an instant, just be like, oh, we didn't like that, so I'm sorry, you can't use Microsoft Teams anymore. And all of a sudden, you know, all of Europe just breaks down. Uh, and the other keynote speaker, whose name now eludes me, but look her up, she was brilliant, she made this fantastic point of, like, if the Americans would wake up and somebody told them that 80% of your digital infrastructure is owned by the Europeans, do you think that they would respond by, huh, let's add another regulation package that limits what they can do?
[00:42:06] Carl: No, no, no. That is not how the Americans would react. They would react in absolute panic
[00:42:14] Arlo: Yeah, and there would probably be, probably be some level of military violence
[00:42:19] Carl: Yeah, yeah, exactly. So what she was saying is that the solution can never be antitrust or adding layers of regulation on top. We must build our own things, and I think that goes not only in a national context, of owning our own fundamental infrastructure, but it must go on a democratic level as well.
[00:42:45] Carl: Like, these technologies are the fundamental infrastructures of society. We must have some level of control, not just some kind of regulatory framework saying, oh, please, could you allow us to walk on our roads? Which is, you know, kind of what we're doing. We've sold all the roads of society to a handful of corporations, and now we're asking them if we could please, like, uh, can we walk on the roads?
[00:43:18] Carl: It's, uh, it's a bizarre situation to me. Um, that being said, of course there are important regulations that one could do. Um, to get back to the afterlife of data, I think it's a shame that we don't ascribe any data protection rights to deceased individuals. We're opening a huge back door into our own privacy.
[00:43:41] Carl: Like, imagine a future scenario where I may not have any data about you, but let's say that I own your departed family's data, a couple of departed friends'. I don't really need your data, because I can track you by proxy, especially nowadays. I mean, you said you have a toddler. I'm just guessing that you may share one or two pictures on, on
[00:44:05] Arlo: Oh yeah, we're, we're terrible about that. We know it's not the right thing to do, yet still we feel
[00:44:11] Carl: Yeah. Yeah. And you know, most people are on the internet before they're even born, because the first picture that's shared is the ultrasound picture when you're in the womb. So even before you were born, you have a digital footprint. And that's sort of my point, that the privacy of the dead isn't a niche issue.
[00:44:34] Carl: It's inextricably linked to the privacy of the living. You cannot have one without the other. So of course, being able to regulate that is a kind of short-term, very important thing that we could do. Long term, we're gonna have to look for bigger solutions.
[00:44:52] Arlo: Wow. I'm walking away both inspired and scared, uh, at the same time. So let me ask you, I mean, you know, you spend a lot of time thinking about this topic, does it change your own behavior? Because I know I'm in data privacy, and this is what I spend all day thinking about: regulatory rights, data privacy, data transfers, these kinds of things.
[00:45:15] Arlo: And yet still, I have a Facebook account and I do exactly what you just described. I share photos of my children. I talk about where I'm going and what I'm doing so that people who I'm close to can understand what's happening in my life. Do you find yourself conflicted in your own day-to-day life?
[00:45:32] Carl: Not very, and maybe this is the political scientist in me who speaks, or the sociologist, in that, for me, these are sort of collective matters. I'm not so concerned about myself, I'm more concerned about the society that I'm living in. So I'm more concerned with our data than my data. That being said, I mean, every once in a while, there's a thing that I will do or sacrifice.
[00:46:04] Carl: Um, I mean, when Elon Musk bought Twitter, for instance, I, uh, I was like, okay, this is really where I draw the line. I think it's unacceptable for me as a privacy advocate and tech ethicist to be on this platform. But often I would encourage people, when it comes to these topics, not to fall for the trap of reducing themselves to a consumer of a platform rather than a citizen of society. As consumers of platforms, we're all helpless. And if you reduce data privacy to this individual good,
[00:46:45] Carl: then the solutions are gonna be clicking a million pop-up notices for GDPR compliance. I don't know if you've used the web in Europe, but it's absolutely terrible. It's like
[00:46:59] Arlo: Yeah. Well, you know, one of the things we do at Osano is we actually make, uh, the world's most prevalent cookie banner. So frequently, uh, when I speak, I have to start off by apologizing to the audience for all of the yes and no buttons they have to click.
[00:47:16] Arlo: Yes, that, that has gotten us there, right?
[00:47:18] Arlo: They, they were trying to solve a societal issue of clear disclosure and it's clearly kind of gotten out of control.
[00:47:24] Carl: Yeah. And I think that by reducing it to this kind of individual good, I think it's gonna cause a backlash. I think it's actually gonna have the opposite effect. Uh, you see this already with people hating on the GDPR. Uh, you know, my wife works at a school and she's like, it's terrible.
[00:47:47] Carl: Like, we can't do anything really because of GDPR compliance. And if you have a parent who comes in and they're like, give me every email that contains my child's name, they have to take out three teachers who are gonna sit an entire day and just search through documents and look through papers with that child's name on them.
[00:48:12] Carl: And it's like, this is not what it's for. So ironically, they created these big regulatory frameworks to bust big tech. But the irony is that big tech is the only kind of organization that has the organizational know-how and, uh, infrastructure to actually comply, whereas everybody else is scrambling.
[00:48:40] Carl: They don't know what they're doing. And sadly, in a lot of organizations, what people choose to do is to just delete data because they're so afraid of GDPR compliance. And that's what you get when you reduce it to this kind of individual good.
[00:48:58] Arlo: Yeah, which results in haves and have-nots in technology, right? 'Cause now the Googles, the Amazons of the world can absolutely afford to keep all of our data, because they can build that regulatory compliance response. But the school, the small business, they're up
[00:49:13] Carl: Yeah.
[00:49:13] Arlo: the creek. 'Cause they have to choose: do I wanna spend a lot of money and time on this, or do I just wanna make do without?
[00:49:20] Carl: And it's the same with AI. I mean, that's why you see OpenAI, a couple of months after they launched ChatGPT, come out and say, oh, these technologies are so dangerous that there should be a license. Like, only we and our friends should be allowed to do this. Which is like such a bullshit way of just saying, we don't want any open source people doing this for free.
[00:49:47] Carl: Uh, like, we wanna make money, because they know that, uh, they have a plausible case to more or less get a monopoly. And that's why I think the tech sector is at least not totally against regulation, because they know that regulation will lock out a lot of competition.
[00:50:12] Arlo: This is true. Look, Carl, this has been a fascinating conversation, and I can't thank you enough for taking the time today to join us and share your views and thoughts. I, for one, am deeply looking forward to Gods of Data, An Atheist Critique of AI. Uh, and for everybody in the audience, thank you.
[00:50:29] Arlo: I can't wait to read it. And for everybody in our audience, um, you can go pick up a copy of The Afterlife of Data. That one, uh, is a couple of years out, but very, very relevant to our present-tense situation. So please go online and search for, you know, Dr. Carl Öhman, The Afterlife of Data. You will absolutely find it to be a riveting read that will leave you breathless and asking a lot of deep questions, like we've done on this show.
[00:50:56] Arlo: So, Carl, thank you for joining.
[00:50:58] Carl: Thank you so much. Cheers.
Meet the host
Arlo Gilbert is the host of The Privacy Insider Podcast, CIO and cofounder of Osano, and author of The Privacy Insider Book. A native of Austin, Texas, he has been building software companies for more than twenty-five years in categories including telecom, payments, procurement, and compliance.