AI has a governance problem - privacy professionals need to step up

I’ve found myself wondering lately whether, in years to come, we’ll tell our children where we were and what we were doing when we first learned about ChatGPT. Kind of like those moments when your parents told you (for those of you with parents old enough) what they were doing when they heard about the moon landing.

In my case, I remember it very well. I was browsing LinkedIn and saw a post from Mason Weisz, an attorney at ZwillGen (here, if you’re interested). Mason had asked ChatGPT to “Write a rhyming and deceptive privacy policy about an internet-connected blender in the style of Run DMC” and, well, ChatGPT duly obliged. I was so amazed by the output that I commented on Mason’s post, asking: “For real, this was a chatbot output???”

You see, at the time I couldn’t tell whether Mason was playing some kind of prank or not. But, as it turned out, he wasn’t. Soon enough, I headed over to ChatGPT to check it out for myself.

If I’m honest with myself (and with all of you reading this), I’d been asleep at the wheel when it came to following AI developments - and so was stunned by the capability of generative AI models like ChatGPT when I finally started paying attention properly. I had no excuse: I’d studied computer science at university (admittedly some years back), and even taken courses in AI. But what stuck in my mind most from those courses was my AI lecturers scoffing at the idea that machines could ever become sentient, or more capable than humans - confidently predicting that we were “a very long way off” that happening.

I guess it depends on how you define “a very long way off”. My degree was some 20 years ago, so it didn’t happen overnight. However, I’d like to think the progress AI has made over that intervening period has outstripped the wildest dreams of even my most optimistic AI lecturers (who, at the time, were explaining just how difficult it was to get computer vision to recognise even a single written character). What we face now is sci-fi level stuff.


AI’s hopes and fears

Now we see all sorts of alarm bells being rung about AI, with AI experts, researchers and backers publicly calling for an “immediate pause” on AI development, saying that AI developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control”.

The thing is, we don’t really understand how AI systems work. I mean, the brightest engineers among us understand how to make them work, but the multi-dimensional patterns, correlations, and linkages that deep learning neural nets detect within data in order to reach decisions and produce outputs often seem well beyond the grasp of our puny human minds. Combine that with the almost limitless potential for AI systems to transform almost every aspect of society - from transportation to healthcare, financial systems to defence, law enforcement to communications, journalism to energy - and we simply don’t know what the consequences of widespread AI deployment will be.

Of course there’s a lot of good that can and will come out of AI, including a revolution in the way we diagnose and treat patients, educate our children, achieve business efficiencies, improve road safety, and so much more. 

Indeed, you can already see the world dividing into camps when it comes to the impacts of AI on humanity: the AI believers and the AI doubters. But history shows that viewing the world through a simple lens of good or bad is seldom correct. After all, the invention of dynamite by Alfred Nobel brought huge benefits to mining. It also cost countless lives through its use in weaponry.


The governance problem

So herein lies the central problem: governance. 

AI is so complex, its use cases so varied, and its benefits and risks so manifold, that the central question surfacing is “who owns AI?” - by which I mean, who takes responsibility for ensuring AI behaves as it should and doesn’t do what it shouldn’t? This question arises at both a regulatory level (just take a look at the different approaches emerging across the EU, UK, US and China, for example) and at an organisational level.

Within organisations, responsibility for owning AI seems, increasingly, to be falling into the hands of privacy professionals. There’s logic to this: privacy professionals regularly opine on data, typically in the context of complex technological use cases, have to deal with a variety of different regulatory models around the world, and often fall back on core privacy principles to do so. This seems to fit hand in glove with AI, which relies heavily on data for training, is complex technology, will be subject to various emergent regulatory models around the world, and exists against a backdrop of internationally-recognised principles for responsible AI development.

Handing over responsibility for AI solely to privacy professionals is itself a risk, though. The privacy community broadly views the world through the lens of “personal data” or “not personal data”, and asks whether personal data uses are lawful, transparent, necessary, proportionate, and so on. AI raises many more issues than this: disinformation and freedom of expression, trust and safety (witness the chatbot that allegedly encouraged a user to commit suicide, or the self-driving Tesla that drove into a lorry), intellectual property infringement (see the backlash Adobe suffered after automatically opting in users’ creations for ML training), defamation (see the story about the law professor falsely “accused” by ChatGPT of sexual harassment), ethical issues (should generative AI create essay answers for students?) and so much more. Add to this that privacy professionals, as educated as many of us are, often do not properly understand the technology that makes AI work, and the risk is that we will counsel on AI issues in ways that are inappropriate, unrealistic and impractical.

To do AI governance properly, organisations would therefore ideally have a diverse community of computer scientists, privacy professionals, trust and safety experts, lawyers, policy wonks, ethicists and more to call upon. This presents its own challenges though: the first, and most obvious, being that the great majority of organisations simply won’t have all of those resources and skillsets to throw at the “AI problem” when deciding whether or not to develop, procure or integrate an AI system.

The bigger problem, though - if I may be permitted to blend my metaphors - is that too many cooks often spoil the broth and, if everyone is accountable, no one is accountable. While AI governance needs input and insights from as diverse an audience as possible, it also ultimately needs someone to take responsibility, provide direction, and say “this is how we’ll do things”. Without this, an overly diverse approach to AI governance will simply lend itself to disagreement, infighting and politics among its many factions.


And so back to the privacy professional...

Where should AI governance sit within an organisation, then? Well, in the short term at least, this maybe brings us back to the privacy professional after all - for all the reasons already given above.  

However, if privacy professionals are to take on this responsibility, then they will need a broad church of stakeholders (whether internal or external) to counsel them, and they will need to undertake the learning and development necessary to broaden their skillset beyond the boundaries of “traditional” privacy and into AI issues more widely. Perhaps in time, we’ll see the privacy professionals who take up this challenge evolve into a new breed of dedicated AI professionals - who can say?

Right now, though, rising to this challenge is what I’ll be doing. So should you.

Phil Lee

Managing Director, Digiphile
