
Data privacy has never been more top of mind. From regulators to businesses, privacy professionals to consumers, everyone has a stake in data privacy.

With all this attention and focus, the data privacy world is evolving at a breakneck pace, not just in terms of legislation, but also in terms of best practices, awareness, and risk. Here are the top 5 trends we’re seeing unfold in 2024.

1. AI and Data Privacy Butt Heads 

Without a doubt, AI will be making every 2024 data privacy trends list you come across.  

In late 2022, OpenAI released a free research preview of its generative AI project ChatGPT. Then, in 2023, ChatGPT was upgraded to use the more robust GPT-4 model as its core AI. Before long, ChatGPT and other generative AI models were virtually ubiquitous in digital spaces. 

Tech conversations were dominated by AI over the course of 2023. But now that the dust has settled somewhat, the world has had (some) time to digest the impact of generative AI on regulations and business ethics. 

No doubt AI technology will advance rapidly over the next few years, and there will almost certainly be another massive leap in functionality like the one we saw in 2023. But when that happens, we’ll be better prepared: regulators, academics, and technologists are hard at work determining how to mitigate unethical uses of AI, both today and in the future.

There are plenty of ways AI can be misused, but data privacy is of particular concern when it comes to the (un)ethical use of AI. 

The first thing that might come to mind is the accidental exposure of private information. Generative AI models like ChatGPT scrape huge amounts of data from a variety of sources in order to “train” the AI algorithm and generate human-like responses to queries.  

Here’s what ChatGPT had to say about its own training dataset:  

[Screenshot: ChatGPT’s response]

So, less than helpful.  

However, estimates suggest that ChatGPT’s training set consisted of around 570GB of data obtained from internet-available texts—that’s a lot of data, and some of it is likely personal information. If an AI model is trained on personal information, then there is a good chance that information could be exposed in its outputs. And that’s not to mention the fact that any personal information you provide in a chat with ChatGPT is also fair game for use as training data. 

If that data and its data subject are protected by a data privacy regulation, then the AI developer will be in violation unless they take the appropriate steps first, like asking for consent.

But toothpaste isn’t easy to put back into the tube, and personal data isn’t easy to delete from a trained AI model; as a result, the entire model may have to be destroyed. The FTC, for instance, has the power to demand the deletion of such models through an action called “algorithm disgorgement.”

While the exposure of private information in an AI model is a major concern, it’s not the only conflict between AI and data privacy. Some AI applications can be used to scrape biometric information from photos and videos on the web.  

Set aside the fact that a dataset of biometric information in the wrong hands, such as a domestic abuser’s, is dangerous; even if such datasets were reserved for law enforcement, as is the typical use case for this technology, they would effectively place everyone in a permanent police lineup. Clearview AI has already put this use case into practice, and it drew exactly this criticism.

Lastly, AI can be a powerful tool to access sensitive personal information that would have otherwise been protected. Again, some might not object to security-breaking AI tools being in the hands of “the good guys”—but technology has never remained solely in the hands of those capable of using it responsibly for long. 

2. More Emphasis on PETs 

Privacy-enhancing technologies, or PETs, have been around for a while. However, the recent surge in AI technology means they’ll be more important than ever.  

PETs are all about reducing or even eliminating a system’s access to personal information without affecting its functionality. For generative AI systems that rely on huge datasets for training, PETs will thus be essential for complying with regulations while still producing a useful model.

In fact, the PET market is forecast to reach $25.8 billion by 2033. Over that period, we can expect to see more attention paid to PETs such as:

  • Federated learning, an AI technique where individual nodes host a machine-learning model, whose outcomes are shared (federated) with a larger, centralized AI cluster, but whose training data is not shared. 
  • Differential privacy, a mathematical framework for measuring the leakage of personal information in an AI model and reducing that leakage (illustrated in the sketch after this list).
  • Homomorphic encryption, which makes data unreadable but still computable. 
  • Secure multiparty computation, which enables multiple parties to compute a function while keeping their individual inputs private (also illustrated below).
  • And new innovations that improve upon existing techniques. 
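To make two of these techniques concrete, here’s a minimal, self-contained Python sketch. It releases a statistic under differential privacy using the Laplace mechanism, and it computes a joint sum with additive secret sharing, a basic building block of secure multiparty computation. The data values, parameter choices, and function names are invented for illustration; real deployments rely on hardened, audited libraries rather than hand-rolled code like this.

```python
import random

import numpy as np

# --- Differential privacy: the Laplace mechanism ---
# Release the mean of a sensitive dataset with noise calibrated to the
# query's sensitivity, so no single individual's value can be inferred
# (beyond the privacy budget epsilon).

def dp_mean(values, lower, upper, epsilon):
    """Differentially private mean of `values`, each clipped to [lower, upper]."""
    clipped = np.clip(values, lower, upper)
    # One person can shift the mean of n bounded values by at most this much.
    sensitivity = (upper - lower) / len(clipped)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped.mean() + noise

ages = np.array([34, 29, 41, 57, 23, 38, 45, 31])  # illustrative data
print("DP mean age:", dp_mean(ages, lower=18, upper=90, epsilon=0.5))

# --- Secure multiparty computation: additive secret sharing ---
# Each participant splits a private number into random shares that sum
# (mod a large prime) back to the number. Adding up everyone's shares
# reveals the joint total, but never any individual input.

PRIME = 2**61 - 1

def make_shares(secret, n_parties):
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

salaries = [72_000, 85_000, 61_000]  # each party's private input
all_shares = [make_shares(s, n_parties=3) for s in salaries]
# Party j sums the j-th share of every input...
partial_sums = [sum(column) % PRIME for column in zip(*all_shares)]
# ...and combining the partial sums yields the true total: 218000.
print("Joint total:", sum(partial_sums) % PRIME)
```

Note that this sketch shows the underlying math only; production PETs add authenticated channels, careful privacy accounting, and protections against malicious participants.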

3. Enforcement Advances Slowly But Surely 

Greater enforcement was on our list for last year’s data privacy trends, and this year is no exception. As more laws come online, more enforcement is inevitable. 

Especially when it comes to data privacy regulations, state attorneys general, the FTC, the CPPA, and EU data protection authorities are eager to prove that they have bite to match their bark. Throughout 2024, we expect enforcement of state laws to ramp up, especially from the CPPA, whose authority to enforce the CPRA kicked in on July 1, 2023.

Note, however, that this enforcement date applies only to the additional rulemaking the CPPA has gone through; anything in the statutory text of the CPRA itself is already fair game for enforcement (as we saw with 2022’s Sephora enforcement action).

In 2024, several other state laws go into effect and are therefore enforceable. They include: 

  • The Texas Data Privacy and Security Act (July 1, 2024) 
  • The Oregon Consumer Privacy Act (July 1, 2024) 
  • The Florida Digital Bill of Rights (July 1, 2024) 
  • The Montana Consumer Data Privacy Act (October 1, 2024) 

Furthermore, numerous state laws have already gone into effect in previous years and are enforceable. Those include: 

  • The California Consumer Privacy Act (CCPA), as amended by the CPRA 
  • The Virginia Consumer Data Protection Act (VCDPA) 
  • The Colorado Privacy Act (CPA) 
  • The Connecticut Data Privacy Act (CTDPA) 
  • The Utah Consumer Privacy Act (UCPA) 

All of these laws are enforced by state attorneys general eager to make an example of violators. If you want to review an updated list of U.S. privacy laws and their associated characteristics, check out U.S. Data Privacy Laws: A Guide to the 2024 Landscape. 

4. ESG Assimilates Data Privacy 

Investment decisions and corporate identities are increasingly influenced by environmental, social, and governance (ESG) factors. In fact, asset managers’ total ESG-related assets under management (AuM) are forecasted to reach US$33.9 trillion by 2026—that’s nothing to sneeze at.  

The 2004 report titled "Who Cares Wins", which popularized the term “ESG,” introduced the concept like so:  

Ultimately, successful investment depends on a vibrant economy, which depends on a healthy civil society, which is ultimately dependent on a sustainable planet. In the long-term, therefore, investment markets have a clear self-interest in contributing to better management of environmental and social impacts in a way that contributes to the sustainable development of global society. 

As a result, argued the report, investors and businesses should prioritize ESG factors. In doing so, they would promote a sustainable planet, a healthy civil society, and a vibrant economy. 

Few could argue that full-throated support for individuals’ data privacy rights does not contribute to a healthy civil society. But while ESG has its roots in that 2004 report, data privacy, though an issue for far longer, only really reached the public consciousness in 2016, when the GDPR was passed. Perhaps in part because of this gap, data privacy has been viewed as a parallel, but distinct, concern relative to ESG factors.

Increasingly, however, organizations are recognizing that data privacy is very much an integral aspect of ESG. External ESG rating agencies, which investors rely on to identify ESG-focused investment opportunities, often include privacy and cybersecurity as components of a business’s overall ESG score. 

What this trend really means is that data privacy is becoming a brand statement. The public is increasingly aware of their data privacy rights, as well as the data privacy wrongs committed by companies in the past, and businesses are more aware of their responsibility for ethical data stewardship.

In 2024, we can expect this trend to intensify; businesses that wish to be known as ESG-driven and responsible participants in the economy will tout their respect for their customers’ data privacy. 

5. Will “Pay or Okay” Be Okayed? 

Research shows that targeted advertising is only about 4% more profitable than “dumb” advertising, but ad-based businesses are still fighting hard to hold onto that extra 4%. That’s evident in Meta’s experiment with a model dubbed “Pay or Okay” in the EU. 

Under this approach, Meta is offering EU users the choice to either: 

  1. Use Facebook and Instagram for free in exchange for their data, which is then funneled to ad tech networks for targeted advertising purposes. In essence, this is consenting to (or “okaying”) the processing of personal data. 
  2. Pay a monthly subscription fee and see no ads (whether targeted or “dumb”) in return.

It’s a controversial approach, especially when you consider that only 3% of survey respondents would consent to tracking under normal circumstances, but 99.9% would consent if they were otherwise forced to pay a fee. Clearly, there’s a gap between consumers’ actual desires and the options they’re being presented with.

As of this writing, TikTok is already following Meta’s lead with a limited test of the same model. If Meta and TikTok are successful, then it wouldn’t be a surprise if other ad-supported social media platforms offered an ad-free subscription tier. 

This development isn’t sitting well with privacy advocates, however, who characterize the choice between paying a fee or consenting to data processing as coercive. If that’s the case, then users couldn’t be considered to have “freely given” consent, as required by the GDPR.  

If every social media system followed this model, then it could become very expensive indeed to simply wish not to be tracked. Does that mean businesses should provide their services for free? Of course not. Instead, they could simply offer non-targeted advertisements, forgoing the extra 4% of profitability in favor of a lower risk profile and better brand reputation. 

Currently, the “pay or okay” model is being challenged in European courts, and TikTok’s experiment is still ongoing. Over the course of 2024, we’ll see whether or not this approach is legal in the eyes of the EU authorities and tolerable in the eyes of EU citizens. 

How Should Businesses and Privacy Professionals Respond to These Trends?  

Really, these different trends are all the same trend: Data privacy compliance is becoming more important for modern businesses. Consumers are becoming savvier and better equipped, enforcement authorities are on the prowl, businesses of all sizes are building privacy programs, the risk inherent to data collection has become more obvious, and companies are paying attention to the space. All of this adds up to a level of focus on data privacy that has never been higher.  

At the same time, there is also widespread confusion around where to start. There is a lot to compliance, and prioritization is a challenge. For businesses and privacy professionals looking for guidance on what comes next, we recommend checking out Osano’s action plan for compliance with 2024’s privacy laws. 
