Intro to Responsible AI
A discussion guide
Responsible tech pillar(s): #solutions #culture
I was recently preparing a roundtable discussion for data protection professionals. The purpose of the roundtable was to introduce how AI works, with the ultimate goal of enabling a more intuitive understanding of responsible AI issues such as transparency and bias.
The challenge?
Most roundtable participants were AI novices with non-technical backgrounds. In addition, we had only 30 minutes together. I was nervous, to say the least, about covering such a big topic for this audience in so little time.
My solution
Enter my hero: THE ANALOGY! I introduced different types of AI/ML using analogies*, then used those analogies to explore common responsible AI issues. Though this didn’t come close to establishing a deep technical understanding of AI or a comprehensive view of AI risks, it proved to be a great introduction and conversation starter. Most importantly, it made an otherwise intimidating topic accessible and engaging.
*Those of you steeped in all things AI will recognize most of the analogies from published research or go-to examples in AI literature.
Over to you!
I’m sharing my discussion guide for you to use – and not just with non-technical audiences. I’ve been working on trustworthy AI for years, and going through the discussion guide not only refreshed my memory but also reinforced concepts and sparked new insights.
As with everything on this site: Take what’s useful and leave the rest. And feel free to make this discussion guide your own. Add analogies you find helpful — and please share, if you’re so inclined!
The discussion guide
Description. A list of commonly used AI/ML models, explained through analogies. Each model type is accompanied by the responsible AI issues it commonly raises.
Purpose. To introduce how AI works and enable an understanding of common AI risks.
Keep in mind. This discussion guide is not intended to provide a deep technical understanding of AI or a comprehensive view of AI risks.