Hello all, and happy Thursday!
I promise this newsletter covers more than just California privacy news, but CalPrivacy hasn’t been making it easy.
Earlier this month, CalPrivacy fined Ford for creating unnecessary friction in the opt-out process. Specifically, Ford required consumers to verify their identities (via email verification) before they could submit an opt-out request. This is not compliant under the CCPA.
It’s an especially interesting violation because it parallels last year’s enforcement action against Honda in three ways:
Taken together, I think these actions underscore a real difference in product philosophy between Osano and legacy privacy providers. The latter are content to hand you all the tools to get compliant, but expect you to know exactly what to do when putting those tools to use.
That’s not a realistic expectation.
If even well-resourced companies like Ford and Honda are getting dinged for non-compliance, it’s pretty clear privacy vendors ought to be putting more thought into how their solutions get implemented.
As an example, Osano doesn’t require identity verification by default when processing Do-Not-Sell/Share requests from California consumers. We don’t expect our customers to have perfect knowledge of every jurisdiction’s compliance requirements. That’s why we bake privacy best practices into the platform to the best of our ability.
Best,
Arlo
Highlights From the Osano Blog: DPDPA Rules: How India’s Privacy Law Will Be Put into Practice
Indian regulators have now implemented the rules that will operationalize India’s flagship data privacy law, the DPDPA. What are they, what do you need to do to stay compliant, and starting when? Find out in our blog.
The California Privacy Protection Agency has issued a $373,703 fine against the Ford Motor Company and is requiring it to change practices that created unnecessary friction in the opt-out process, in violation of the California Consumer Privacy Act (CCPA).
A California federal court denied a request from Elon Musk’s xAI to block enforcement of the state’s AI training data transparency law, rejecting the company’s claims that the disclosure requirements would destroy trade secrets and violate free speech rights.
Recently, Connecticut Attorney General William Tong and Senator James Maroney announced that the state’s lawmakers will soon consider new measures aimed at protecting children and teenagers from potential risks associated with AI technologies. This announcement comes amid growing concerns about the increasing use of chatbots and other AI tools by young people.
As AI systems are increasingly integrated into commercial and government applications, there is a growing demand to monitor these systems in real-world settings. To address this pressing need, the Center for AI Standards and Innovation (CAISI) held three practitioner workshops and conducted an in-depth literature review to map the landscape, focusing on current challenges to robust and effective post-deployment monitoring of AI systems. CAISI’s findings are outlined in a newly released report focused on the identification, organization, and documentation of monitoring challenges.
Organizations continue to explore the many ways artificial intelligence can streamline and automate existing processes, with human resources being a particular area of focus. With most HR tasks requiring a data governance element, stakeholders are wary of additional compliance obligations and challenges stemming from the introduction of AI.
There's more to explore:
Each month, we go deeper into additional privacy topics with incredible guests. Available on Spotify or Apple.
Join our official subreddit to stay up to date on the latest news, analysis, guidance, and content from Osano!
The book inspired by this newsletter: Osano CEO Arlo Gilbert covers the history of data privacy and how companies can start building a privacy program from the ground up. More details here.
If you’re interested in working at Osano, check out our Careers page!