Hello all, and happy Thursday!
Not to say I told you so, but a few weeks ago, I pointed out a major challenge AI models faced when serving as impromptu therapists. Now, OpenAI has empowered ChatGPT to report on certain users and certain conversations.
The question I raised in that previous newsletter was how AI models could effectively, accurately, and privately act as mandated reporters when confronted with users who appeared to be imminent threats to themselves or others.
Here’s how OpenAI has answered that question:
- When OpenAI detects that a user is an imminent threat to others, that conversation is routed to a specialized, trained team.
- That team may refer cases to law enforcement.
- Cases where the user is an imminent threat to themselves will not be referred to law enforcement. This last decision is born of a desire to “respect people’s privacy given the uniquely private nature of ChatGPT interactions.”
From a privacy perspective, I’ve got some follow-up questions. What criteria will be used to detect whether a user is a threat to others? What policies and training will be used to inform the human team in this loop? What legal basis will be used to justify the transfer of data to OpenAI’s human reviewers?
And I’m not being intentionally difficult here. It’s clear that if AI chatbots are going to be a part of our lives, then there needs to be some mechanism for mandated reporting. It’s somewhat reassuring to see that OpenAI has given some consideration to the privacy ramifications of that mechanism.
But still, it all feels like we’re building the plane while flying it—not an uncommon practice in tech, but one that might not be advisable in the context of mental health and law enforcement.
Best,
Arlo
Highlights From Osano
Events
Webinar: Why Should Marketers Give a %#$@ About Data Privacy?
You’ve got pipeline to generate, campaigns to run, and metrics to analyze (SO many metrics)—why should you give a %#$@ about data privacy? Find out in our webinar on September 18th. We'll even bribe (er, woo) you with games and prizes to boot.
Register today | September 18th, 1-2 PM ET
Event: Columbus AI Week
We’re proud to be sponsoring Columbus AI Week, the Midwest’s hub for AI innovation. Use code OHIOMEANSAI for 50% off tickets!
Register today | September 9-11 | WatersEdge in Hilliard, OH
Meetup: AI, IRL: At Work
At the last meetup in our AI, IRL series, we talked about all the pitfalls and opportunities AI brings to our personal lives. Now let’s raise the stakes and bring it into the office. How can you use AI safely and effectively in your daily workflows? What should you automate, and what should you leave to the humans? Meet with Osano experts and your peers to discuss all things AI, IRL at work!
Register today | September 17th, 1-2 PM ET
Event: LogicON
Osano’s Chief Trust & Privacy Officer will be speaking at LogicON 2025! Listen to Rachael Ormiston cover everything you need to know about protecting privacy in an AI-driven world, and catch the other speakers’ insights into proving AI’s ROI, surviving AI regulatory overload, finding the human in the AI, and more.
Register today | October 14-16 | Columbus, OH
In Case You Missed It...
Blog: DSAR Response Letter: How to Respond to a Subject Rights Request
If you’re not accustomed to handling data subject access requests (DSARs), then it’s understandable if you feel a bit nervous about responding. While there aren’t any magic words or phrases you have to say to make your response legal and official, there are key factors to keep in mind when communicating with requesters at different stages of the DSAR process. Learn more in our blog.
Top Privacy Stories of the Week
ChatGPT Will Call the Cops on Its Most Dangerous Users, OpenAI Announces
OpenAI has announced new plans to clamp down on its most dangerous users and report them to law enforcement if necessary. "When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts," it wrote in a policy update.
Schrems-Like Challenge to the Data Privacy Framework Shot Down by EU General Court
Philippe Latombe, a French member of parliament, brought an action for the annulment of the EU-US data transfer agreement before the General Court. The CJEU had previously ruled in “Schrems I” and “Schrems II” that the two prior data transfer agreements were illegal, annulling both. However, the General Court was not convinced by Mr. Latombe’s arguments and confirmed the validity of the EU-US Data Privacy Framework.
Google Avoids Worst-Case Penalties in Antitrust Case, Will Be Forced to Share Data with Competitors
U.S. District Judge Amit Mehta declined to impose the most severe remedies following last year’s finding that Google held an illegal monopoly on internet search. Google will not be forced to sell its Chrome browser, which provides data that helps its advertising business deliver targeted ads. However, it will be required to share search data with rivals, exacerbating privacy concerns.
Disney to Pay $10 Million to Settle FTC Allegations Over COPPA Violations
Disney will pay $10 million to settle Federal Trade Commission allegations that the company allowed personal data to be collected from children who viewed kid-directed videos on YouTube without notifying parents or obtaining their consent as required by COPPA. The proposed order would transform how the entertainment behemoth designates its YouTube videos as “Made for Kids,” while encouraging the adoption of age assurance technologies on the platform.
‘Anonymity Online Is Going to Die’: What Age-Verification Laws Could Look Like in the U.S.
The UK’s Online Safety Act, which went into effect on July 25, requires any website that might have adult content to conduct age-verification checks. Users must provide proof of age in the form of photo IDs, credit cards, photos for facial recognition software, and other so-called age gates, or else be denied access. While U.S. officials have derided the Online Safety Act as a path to censorship, at least 19 states already have laws on the books with similar requirements. States including Arizona, Iowa, Texas, Ohio, Missouri, North Dakota, South Dakota, and Wyoming all require ID before a resident can access porn sites or other platforms that could contain sexually explicit material. And there are at least six more states that have introduced comparable bills.
Like what you hear from the Privacy Insider newsletter?
There's more to explore:
🎙️The Privacy Insider Podcast
We go deeper into additional privacy topics with incredible guests monthly. Available on Spotify or Apple Podcasts.
📖 The Privacy Insider: How to Embrace Data Privacy and Join the Next Wave of Trusted Brands
The book inspired by this newsletter: Osano CEO Arlo Gilbert covers the history of data privacy and how companies can start building a privacy program from the ground up. More details here.
If you’re interested in working at Osano, check out our Careers page!
Arlo Gilbert
Arlo Gilbert is the CEO & co-founder of Osano. A native of Austin, Texas, he has been building software companies for more than 25 years in categories including telecom, payments, procurement, and compliance. In 2005, Arlo invented voice commerce; he has testified before Congress on technology issues and is a frequent speaker on data privacy rights.
