December 18, 2023
2023 was a year full of surprises. Barbie became a feminist icon, Taylor Swift helped boost the US economy, and former royals and pop stars wrote books. We also saw the advent of a new AI era, with foreboding messages that robots were taking over the earth as doomsday hashtags took over the internet.
It therefore makes sense that the Merriam-Webster word of the year for 2023 was “authenticity,” reflecting just how hard humans are striving to distinguish reality from myth in a world of growing digitization and AI.
AI has been a topic at the forefront of the minds of privacy professionals across the globe since the start of the year. While AI is not a new technology, what is novel is the level of accessibility that allows the public to engage with it and marvel first-hand at how extraordinary the technology can be. Companies have been quick to explore its potential use, rolling out new features at an unprecedented pace, while societal implications of AI use have become dinner-table conversation for tech advocates and technophobes alike.
My first introduction to AI was, in fact, as a teenager back in the early 2000s, watching a rerun of a season one episode of The X-Files from 1998. In that episode, AI was imagined as a technology that could be used for illicit government activity, but the more successful the technology became, the more it evolved into a sentient being that required suppression.
Looking beyond the realms of Mulder and Scully, the potential for misuse of AI has always been clear. It relies on language models that process vast amounts of data, with the risk that an individual’s personal information (or even sensitive information) — if submitted to the model — may be revealed in AI output. The idea that the technology evolves, learning as it goes to gain greater predictive power, is both fascinating and terrifying.
It’s therefore not surprising that privacy pros, whose bread and butter is data, are being called on by their boards and executive teams to lead organizational AI governance efforts. From undertaking impact assessments to ensure that data entering generative AI models meets data protection requirements, to displaying notices and obtaining consent, to training teams on data minimization strategies, privacy pros have the knowledge and transferable skills that support AI governance goals.
But let’s be clear: AI is not a privacy issue; it is a standalone discipline that heavily intersects with privacy. Successful privacy and AI programs, both fueled by data, will need practitioners who work with others across their organization to lead that success.
AI can change our lives for the better, but doing that — and avoiding the pitfalls that sci-fi TV shows present — requires action now. Policymakers are paving the way for AI regulation globally, and savvy privacy professionals can lean into this by promoting responsible data usage as a key part of their strategic efforts for innovative and compliant digital transformation.
While AI and privacy are not interchangeable, having a healthy privacy program will support AI efforts. Knowing your data (where it is, who is accessing it, why you need it, and how you are using it) is foundational to both. By investing in privacy now, companies can help harness the benefits of AI, which will hopefully help them in 2024 and beyond.
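To make the “know your data” idea concrete, the questions above (where the data is, who accesses it, why it is needed, and how it is used) can be captured as a simple data-inventory record. This is a minimal illustrative sketch, not any specific tool’s schema; the `DataAsset` fields and the `eligible_for_ai_training` check are assumptions introduced here for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    """One hypothetical entry in a 'know your data' inventory."""
    name: str                                      # what the data is
    location: str                                  # where it lives
    owners: list = field(default_factory=list)     # who accesses it
    purpose: str = ""                              # why you need it
    processing: list = field(default_factory=list) # how it is used

inventory = [
    DataAsset(
        name="customer_emails",
        location="crm_database",
        owners=["marketing", "support"],
        purpose="order notifications and support replies",
        processing=["transactional email", "support ticketing"],
    ),
]

def eligible_for_ai_training(asset: DataAsset) -> bool:
    # A simple gate: only assets whose recorded processing purposes
    # already include model training may feed a generative AI model.
    return "model training" in asset.processing

print([a.name for a in inventory if eligible_for_ai_training(a)])  # → []
```

The point of the sketch is that a privacy program which already maintains records like this can answer AI governance questions (for example, “may this dataset train a model?”) by querying purposes it has already documented, rather than auditing from scratch.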
Rachael Ormiston is the Head of Privacy at Osano. With over 15 years of professional experience, she has deep domain expertise in Global Privacy, Cybersecurity, and Crisis and Incident Response. Rachael is an IAPP FIP and has previously served on the IAPP CIPM Exam Development board. She has a personal interest in privacy risk issues associated with emerging technologies.