What we're teaching students about digital ethics

A look inside Professor Porter's Digital Ethics class this January term

đź“« Subscribe | 🗣️ Hire us to speak | 🤝 Sponsor our newsletter & more

Welcome to the Friday edition of our newsletter, which we are newly branding “In the Classroom with Innovation Profs.” The content remains the same … we spend Fridays going deeper into tools and trends related to generative AI (and Tuesdays sharing news updates). This week we will literally go inside the classroom with Professor Porter…

Putting AI Ethics into Practice

Every January, a number of intrepid Drake University students return to campus early after the winter break to partake in our J-term session, in which classes meet daily (or almost daily) for three straight weeks to knock out a semester’s worth of content in a short span.

Both Snider and I (Porter) opt in to teach a J-term course every year; I’m on my 10th straight year, while Snider is on his 13th. This year, like most of the years I’ve taught J-term, I’m teaching digital ethics. The course is required for all students majoring in AI, computer science, or data analytics, but it also fulfills a values and ethics requirement that all Drake students need to satisfy, so I end up with an interesting mix of students from a range of disciplines.

A majority of the course consists of topics pertaining to AI. Continue reading for some highlights of what we’re doing in the classroom this week:

AI Lunch Club returns

Our AI Lunch Club series returns this spring with four FREE virtual events to help you better understand generative AI tools. Click each link to get more info and to sign up:

These events are offered free thanks to a sponsorship from the technology and management consultancy Lean TECHniques.

Deepfakes and Misinformation

For this topic, one of the things I have the students do is listen to the Radiolab episode Breaking News, which covers audio and video deepfakes. Although the episode is a bit dated, for me it functions like a time capsule: it lets students relive the initial surprise of learning that AI-generated audio and video were possible (a surprise the hosts convey vividly), particularly now that we inhabit a world fully saturated in such content. Given the current controversies surrounding deepfake technology, it’s easy for students to connect this material to the present.

Privacy

After reading about the nature of privacy in digital technology, students apply their theoretical insights to everyday scenarios. Highlights this year include recent headlines about the newly launched ChatGPT Health, Meta AI chats being made public, and Google’s $425 million settlement in a privacy lawsuit.

Automation

In this unit, we look at automation and work (as well as the ethics of autonomous vehicles, which I won’t discuss here). Judging by our conversations, students are deeply invested in thinking about how automation is changing the nature of work and what counts as socially responsible automation.

Algorithmic Bias

Next, we look at the topic of algorithmic bias as informed by the documentary Coded Bias, which chronicles the work of Joy Buolamwini, who, as a Ph.D. student at MIT, found that off-the-shelf facial recognition models performed less accurately on darker-skinned faces than on lighter-skinned ones. Algorithmic bias persists across a number of domains; for instance, AI still represents older working women according to certain inaccurate stereotypes.

The Ethics of Large Language Models

This week ends with a look at the ethics of LLMs, particularly their environmental impact but also broader concerns raised in the seminal paper On the Dangers of Stochastic Parrots.

Concluding Thoughts

Our students are going to enter the workforce expected not just to be technically proficient with AI tools but also to understand how to use those tools ethically and responsibly. So our goal this J-term isn’t just to talk about ethical principles in the abstract, but to practice identifying risks, asking better questions, and making grounded choices in the messy, real situations they’ll actually face. If we do this well, they’ll enter the workforce ready to ask the right questions and to act responsibly as AI-driven decisions become even more consequential.

If this sounds like a class that would interest a high school student in your life, maybe they should consider studying artificial intelligence at Drake University.

AI-native CRM

“When I first opened Attio, I instantly got the feeling this was the next generation of CRM.”
— Margaret Shen, Head of GTM at Modal

Attio is the AI-native CRM for modern teams. With automatic enrichment, call intelligence, AI agents, flexible workflows and more, Attio works for any business and only takes minutes to set up.

Join industry leaders like Granola, Taskrabbit, Flatfile and more.