Data Privacy Week: Exploring the intersection of AI and Privacy

Concept vector illustration of LLM.

Image credit: Adobe Stock / DRN Studio.

This week at Adobe, we are taking time to dig deeper into what privacy means for individuals and businesses in a rapidly evolving legal and technological world.

Data Privacy Week is an annual, global event that focuses on raising awareness about the importance of privacy. We have fun and engaging activities to help employees strengthen their data privacy knowledge and support responsible data privacy stewardship.

Personal information, whether it belongs to employees, customers, or partners, is a valuable asset and must be handled responsibly. Privacy is not just a matter of regulatory compliance — the proper processing of personal information is our collective responsibility. It is a key driver for creating and retaining trust with customers and employees.

During an employee-wide fireside chat, I sat down with Maneesha Mithal, a partner in the privacy and cybersecurity practice at Wilson Sonsini Goodrich & Rosati law firm, to discuss key ways to balance innovation and privacy considerations for AI technologies.

In Mithal’s previous role at the FTC, she oversaw a team responsible for enforcing privacy and security laws and developing policy positions in areas such as artificial intelligence (AI), facial recognition, biometrics, connected cars, health privacy, children's privacy, ransomware, and the intersection of the evolving space of privacy, competition and generative AI.

This discussion offers a unique perspective on how evolving technology makes privacy an essential component of the ethical choices we face in developing, using, and regulating these new tools. Highlights are included below:

Shabaka: Please share your perspective on the concept of algorithmic bias in AI and its implications for user privacy.

Mithal: Biased algorithms produce unfair outcomes that privilege one arbitrary group over others. The efficacy of an algorithm will depend on the inputs used to train it, which creates a potential “garbage in, garbage out” problem.

As privacy professionals, our challenge, and our opportunity, is to ensure that personal data can be used in ways that benefit society while giving consumers autonomy over how their data is collected, used, and shared.

Advanced AI technologies like facial recognition, if misused, can lead to invasive surveillance, significantly infringing upon personal privacy.

Shabaka: This is great background. At Adobe our fundamental approach to AI is grounded in principles of accountability, responsibility, and transparency. These principles foster trust in Adobe’s products and services and apply to the various cross-functional subject matter areas of ethics, intellectual property, security, and privacy, all of which are critical in AI.

Shabaka: Are there notable examples you can share of AI technologies positively impacting privacy?

Mithal: Absolutely, AI technologies are not just potential threats to privacy, they can be (and indeed are) used to enhance people’s privacy. Many companies in fact use AI tools to assist in privacy and cybersecurity compliance. For example, AI-driven techniques can automatically detect sensitive data across a data ecosystem in real-time. Once flagged, the AI technology can immediately apply privacy protections, such as tokenization. Interestingly, a 2019 study by Gartner projected that 40 percent of privacy compliance technology would be using AI by 2023. I’d be surprised if that number wasn’t exceeded today. And there are numerous examples of how AI can be used to protect data from unauthorized access, such as intrusion detection systems that detect fraudulent patterns to stop unauthorized access to data and authentication mechanisms that rely on AI to defeat hackers trying to infiltrate systems.
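The detect-then-tokenize workflow Mithal describes can be illustrated with a minimal sketch. The patterns, the `tok_` prefix, and the salted-hash tokenizer below are all hypothetical stand-ins, not any particular vendor's implementation; a production system would use learned classifiers for detection and a vault-backed, reversible tokenization service.

```python
import hashlib
import re

# Hypothetical sketch: scan free text for values matching common PII
# patterns (email addresses and US Social Security numbers here), then
# replace each match with a token so downstream systems never see raw data.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),   # email address
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # US SSN
]

def tokenize(value: str, salt: str = "demo-salt") -> str:
    # A real deployment would use a vault-backed, reversible tokenizer;
    # a salted hash stands in for one here.
    return "tok_" + hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def redact(text: str) -> str:
    # Apply each detector in turn, substituting a token for every match.
    for pattern in PII_PATTERNS:
        text = pattern.sub(lambda m: tokenize(m.group()), text)
    return text

record = "Contact jane.doe@example.com, SSN 123-45-6789."
print(redact(record))  # raw email and SSN replaced by tok_... values
```

Because the tokenizer is deterministic, the same input always maps to the same token, so records can still be joined or deduplicated after redaction.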

Shabaka: What advancements can we expect in privacy-preserving AI technologies in the near future?

Mithal: Let me mention a few examples. One is federated learning. This is an approach to training machine learning models that allows data to remain on local devices while still benefiting from collective intelligence. The models learn from decentralized data across devices and only the learned patterns (and not the raw data) are shared, offering a significant boost to privacy. Another example is homomorphic encryption. This is a form of encryption that allows computations to be performed on encrypted data without decrypting it first. Advances in this field could enable AI models to learn from encrypted data, enhancing the privacy of user data significantly. And finally, while the concept of differential privacy has been around for some time, I’d expect that AI technologies would allow us to see further enhancements and wider adoption. Differential privacy introduces “statistical noise” to data, allowing overall trends to be analyzed without compromising individual data points.
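The "statistical noise" idea behind differential privacy can be sketched in a few lines. This is a simplified illustration of the standard Laplace mechanism, not a production-grade library; the survey data and epsilon value are made up for the example.

```python
import math
import random

def laplace_noise(sensitivity: float, epsilon: float) -> float:
    # Draw from Laplace(0, sensitivity/epsilon) via inverse-CDF sampling.
    scale = sensitivity / epsilon
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(values, epsilon: float = 1.0) -> float:
    # Counting queries have sensitivity 1: adding or removing one
    # person changes the true count by at most 1.
    return len(values) + laplace_noise(sensitivity=1.0, epsilon=epsilon)

# Pretend dataset: 1,000 survey respondents' ages.
ages = [34, 29, 41, 52, 38] * 200
print(round(private_count(ages)))  # close to 1000, but deliberately noisy
```

The overall trend (roughly a thousand respondents) survives, while the noise makes it statistically difficult to tell whether any one individual's record was in the dataset. Smaller epsilon values add more noise and thus stronger privacy, at the cost of accuracy.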

Shabaka: The tools you mention involve analyzing data in a privacy-protective way. Innovation can be achieved in a privacy-conscious manner!

As we look ahead and continue to navigate the rapidly evolving digital world, it is crucial for organizations and individuals to prioritize responsible data stewardship and to leverage advancements in privacy-preserving AI technologies, enhancing the protection of personal information while maintaining trust with employees and customers.