2023 was a year full of surprises. Barbie became a feminist icon, Taylor Swift helped boost the US economy, and former royals and pop stars wrote books. We also saw the advent of a new AI era, complete with foreboding messages that robots were taking over the earth as doomsday hashtags took over the internet.

It therefore makes sense that Merriam-Webster’s 2023 word of the year was “authentic,” reflecting just how hard humans are striving to tell reality from myth in a world of growing digitization and AI.

AI has been at the forefront of privacy professionals’ minds across the globe since the start of the year. While AI is not a new technology, what is novel is the level of accessibility that allows the public to engage with it and marvel first-hand at how extraordinary the technology can be. Companies have been quick to explore its potential uses, rolling out new features at an unprecedented pace, while the societal implications of AI have become dinner-table conversation for tech advocates and technophobes alike.

AI's Privacy Risks

My first introduction to AI was, in fact, as a teenager back in the early 2000s, watching a rerun of a 1998 episode of The X-Files. In that episode, AI was imagined as a technology that could be used for illicit government activity, but the more successful the technology became, the more it evolved into a sentient being that had to be suppressed.

Looking beyond the realms of Mulder and Scully, the potential for misuse of AI has always been clear. AI relies on large language models that process vast amounts of data, and if an individual’s personal information (or even sensitive information) is submitted to a model, it may be revealed in the model’s output. The idea that the technology evolves, learning as it goes to make ever better predictions, is both fascinating and terrifying.

It’s therefore not surprising that privacy pros, whose bread and butter is data, are being called on by their boards and executive teams to lead organizational AI governance efforts. From undertaking impact assessments to ensure that data entering generative AI models meets data protection requirements, to displaying notices and obtaining consent, to delivering training on data minimization strategies, privacy pros have the knowledge and transferable skills to support AI governance goals.

AI Governance, Just Like Privacy, Is a Team Sport

But let’s be clear: AI is not just a privacy issue; it is a standalone discipline that heavily intersects with privacy. Because both disciplines are fueled by data, successful privacy and AI pros will need to work with others across their organization to succeed:

  • Security pros are looking to build AI systems that manage the risk of new vulnerabilities, such as data poisoning, in which attackers corrupt the training data that a large language model (LLM) depends on. The UK and the US have therefore developed global guidelines for AI security. Privacy pros already work closely with security professionals on privacy incidents, so working with these team members will be old hat by now. You’ll likely need to review your current privacy incident reporting process with your security pros to see where AI fits in.
  • Ethicists are working to use AI as a tool for generating realistic and creative content in a manner that promotes accessibility and innovation without misinforming the public. Examples of deceased artists singing recent songs offer a chilling reminder of how hard it can be to know what is real. You might work with these team members to ensure that you are developing policies that align with company ethical standards, questioning whether the “why” of processing the data aligns with the “how” of your usage.
  • Trust teams want to make sure that data usage promotes trustworthiness. Recent statistics from the Pew Research Center show that awareness of AI is increasing, but trust in it, and specifically in the algorithms supporting it, is declining. Only 20-30% of people have confidence in the use of algorithms, irrespective of the organization deploying them. Privacy professionals play a big role in generating trust: given that 97% of people feel data privacy is important, according to a KPMG study, privacy professionals can help trust teams by creating a strong foundation within their privacy policies and programs.
  • DEI teams are looking to ensure that datasets manage the risk of data bias, which means diversifying the data your organization collects while still ensuring it has consent to use that data. There are examples of AI-reliant image analysis software mislabeling photos of people of color, perpetuating deeply offensive racist tropes. We all know the adage “garbage in, garbage out,” meaning that we have to do the work upfront to get valuable output.

What Should Privacy Leaders Do Now?

AI can change our lives for the better, but doing that, and avoiding the pitfalls that sci-fi TV shows depict, requires action now. Policymakers are paving the way for AI regulation globally, and savvy privacy professionals can lean into this shift by promoting responsible data usage as a key part of their strategic efforts toward innovative, compliant digital transformation.

While AI and privacy are not interchangeable, a healthy privacy program will support AI efforts. Knowing your data (where it is, who is accessing it, why you need it, and how you are using it) is foundational to both. By investing in privacy now, companies can harness the benefits of AI in 2024 and beyond.
