
Hello all, and happy Thursday! 

As AI becomes embedded in our lives and society, one of the hurdles we’ll have to face is how we balance safety and privacy, and whether AI models can replace human judgement on a case-by-case basis.

Consider New York’s new law. Among other requirements, it mandates that AI companions address messages suggesting the possibility of self-harm or suicidal ideation. The privacy risks of this law, at least as far as I understand it, are low: AI companions just have to provide information referring users to suicide prevention resources, so no data has to leave the app.  

But when we look at how users are treating AI companions, a notification listing a phone number and a website may not be enough. People aren’t just casually venting to chatbots; they’re treating them as full-blown therapists. There are a lot of problems with that already, but I want to focus on the privacy and safety challenges it poses.  

Unlike real flesh-and-blood therapists, chatbots aren’t mandated reporters. Since you can't forbid users from treating chatbots as therapy tools, future regulation could very well extend mandated reporting requirements to AI chatbots.   

How much faith can we put in AI developers to perfectly replicate not just human judgement, but expert human judgement in these circumstances? Can a chatbot accurately determine if a user truly is a danger to themselves or others? What happens if it gets it wrong and shares highly sensitive conversations with others when there was no need?

Best, 

Arlo 

 


Highlights from Osano

New From Osano 

Blog: The Data Privacy Certification Guide 

Unless businesses start hiring psychics, certifications will continue to be a critical way for experts to prove that they know what they're talking about. If you’re looking to bring privacy expertise onto your team or are a burgeoning privacy pro seeking to prove your value, check out our guide to privacy certifications and what they mean. 

Read more 

Podcast: Protecting Privacy at Every Walk of Life 

Join Osano CIO Arlo Gilbert as he interviews France Bélanger and Donna Wertalik of Virginia Tech on their academic perspectives on data privacy in the latest episode of the Privacy Insider. 

Listen here 

In Case You Missed It... 

Blog: Taking Osano to the Next Milestone (and Beyond) 

“What got you here won’t get you there.” We strive to apply this truism at Osano on a daily basis. Read this post to learn why Arlo Gilbert decided it was time to pass the CEO role to another in order for us to reach our next “there.”

Read more 



Top Privacy Stories of the Week

France’s Data Protection Authority Finalizes Its Recommendations on Applicability of GDPR to AI Development 

In December 2024, the European Data Protection Board (EDPB) stated that the GDPR applies, in many cases, to AI models trained on personal data. Now, France’s data protection authority, the CNIL, has issued guidance for AI developers on how to determine whether their AI model is subject to the GDPR as well as how to reduce models’ reliance on personal data. 

Read more 

Kentucky AG Files Lawsuit Against Chinese Shopping Platform Temu for Stealing Kentuckians’ Data 

Recently, Kentucky Attorney General Russell Coleman announced his office would be suing Temu, the Chinese online shopping platform, for unlawful data collection, violations of customers’ privacy, and counterfeiting some of Kentucky’s most iconic brands, in violation of the Kentucky Consumer Protection Act.

Read more 

Apple Alerted Iranians to iPhone Spyware Attacks, Say Researchers 

Apple notified more than a dozen Iranians in recent months that their iPhones had been targeted with government spyware, according to security researchers. Miaan Group, a digital rights organization that focuses on Iran, and Hamid Kashfi, an Iranian cybersecurity researcher who lives in Sweden, said they spoke with several Iranians who received the notifications in the last year. 

Read more 

New York Passes Novel Law Requiring Safeguards for AI Companions 

New York has enacted the first law requiring safeguards for AI companion apps. Scheduled to come into effect on November 5, 2025, the law requires operators of AI companions to implement safety measures to detect and address users’ expression of suicidal ideation or self-harm and to regularly disclose to users that they are not communicating with a human. 

Read more 

Federal Appeals Court Hands Out Win in Website Chat Wiretap Case: What It Means for Your Business 

If your company uses a third-party tool to power your website chat function or AI-assisted customer service, the 9th Circuit Court of Appeals just delivered a ruling you should know about. The court affirmed summary judgment for the defense in a high-profile privacy case brought against Converse and made it significantly harder for plaintiffs to weaponize California’s wiretap law (CIPA) against businesses that rely on modern digital tools. 

Read more 

Like what you hear from the Privacy Insider newsletter?

There's more to explore:

🎙️The Privacy Insider Podcast

We go deeper into additional privacy topics with incredible guests monthly. Available on Spotify or Apple.

📖 The Privacy Insider: How to Embrace Data Privacy and Join the Next Wave of Trusted Brands

The book inspired by this newsletter: Osano CEO Arlo Gilbert covers the history of data privacy and how companies can start building a privacy program from the ground up. More details here.

If you’re interested in working at Osano, check out our Careers page.

Get a demo of Osano today