⚖️ Ethical AI: Culture, the Economy, and Social Good
Estimated Read Time: 4 min 54 sec
Teach with expert insights on AI, curated by your trusty Teacher’s AIde
Welcome back to Teacher's AIed, where we help you navigate the ethical challenges of AI, unlike the little 1st grader who thought it would be wise to eat a worm during recess.
Should a driverless car bound to get in a wreck continue along its course and hit the mini-van in front of it or swerve to hit the motorcycle in the other lane? This is a thought experiment first shared with me when I was studying Engineering and Philosophy in undergrad. This “modern Trolley Problem” forces us to come to terms with automated systems making ethical decisions.
Created by DALL·E
Should the car continue on its course, thereby avoiding the liability of making a deliberate decision? Or should the car make a quick probabilistic calculation, assume that the mini-van has more passengers than the motorcycle, and decide to swerve, because one potential casualty is fewer than several?
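For the curious, here is a toy sketch of what that quick calculation might look like in code. Every number in it is invented for illustration; a real autonomous vehicle would estimate these values from sensor data, and whether such a calculation *should* be made at all is precisely the ethical question.

```python
# Toy sketch of the expected-harm comparison described above.
# All probabilities and passenger counts are invented for illustration.

def expected_casualties(p_collision: float, people: int) -> float:
    """Expected harm: chance the collision occurs times people involved."""
    return p_collision * people

# Assumption: the mini-van likely carries several passengers,
# while the motorcycle carries exactly one rider.
stay_course = expected_casualties(p_collision=0.9, people=3)   # hit the mini-van
swerve = expected_casualties(p_collision=0.95, people=1)       # hit the motorcycle

decision = "swerve" if swerve < stay_course else "stay the course"
print(f"stay: {stay_course:.2f}, swerve: {swerve:.2f} -> {decision}")
```

The arithmetic is trivial; the discomfort comes from deciding that the comparison is the right thing to compute in the first place.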
This thought experiment underscores the reality that AI applications are not necessarily morally neutral. This naturally raises a multitude of ethical questions to consider.
So, consider them we shall.
This is the last installment in my series on AI literacy and its implications in K-12 Education. AI Literacy, like digital literacy before it, will need to be taught to students and educators alike. While many components of AI Literacy are already part and parcel of computer literacy, there are areas where they differ.
The organization AI4K12 outlined 5 Big Ideas of AI Literacy. The 5th Big Idea, “Societal Impact,” is divided into four concepts: Ethical AI, AI & Culture, AI & the Economy, and AI for Social Good.
Here's what we have for you today:
1. Ethical AI
At the outset of this series, I shared a case study in Ethical AI. I'll repeat it here in case you missed it:
Story-Time: My mom asked me to weigh in on the recent news reports on Google’s Gemini Image Creator.
Here’s a brief summary: “When asked for an image of a Founding Father of America, Gemini showed a Black man, a Native American man, an Asian man, and a relatively dark-skinned man. Asked for a portrait of a pope, it showed a Black man and a woman of color. Nazis, too, were reportedly portrayed as racially diverse” (Vox).
Of course, there are political and cultural themes to this. These, I will avoid.
Yet, this example provides a great lens to explore the first concept detailed by AI4K12. One of their middle school learning objectives is to “evaluate the ways various stakeholders' goals and values influence the design of AI systems.”
After looking more into this story (I highly recommend this podcast), what surfaced was exactly that: competing stakeholder goals and values. In short, one camp prioritized historical accuracy, while the other valued creative representation.
AI4K12 provides an enduring understanding that offers a lens for a next step: "AI systems need to align with the norms and values of the groups they aim to serve." This becomes difficult for a company as large as Google, but the sentiment of developing AI that conforms to our ethical frameworks remains.
2. AI & Culture
As I was writing this, a friend texted me a screenshot of an email about a flight deal to Savannah, Georgia, the very city we had spoken about just yesterday.
To which I responded, “lol. It’s listening to us.”
Created by DALL·E
Which isn’t an ungrounded accusation. According to Surfshark, “it’s safe to say that your phone is probably listening to you. More often than not, it’s done to improve user experience, but it still causes privacy concerns for users that don’t want to be listened to.”
One component of AI4K12's framework is that AI is already embedded in our daily lives in a myriad of ways, from Google optimizing your search results to Netflix suggesting a new show you might like. These uses of AI feel innocuous enough, but as we've seen in the education sphere, AI can be quite a disruptor and source of debate.
Should AI-generated text be considered plagiarism?
Should students use generative AI in class, and for which purposes?
How should AI technology be regulated by governments?
Regarding that last question, some elements of this technology could be regulated to further protect people’s privacy. To read up on this further, see the EU’s AI Act or EdSurge’s take on the US’s forthcoming AI Literacy Act.
3. AI & the Economy
We’ve been down this road before. Jobs and our economy change with the times. Today, we have fewer horse trainers and more auto mechanics than we did 100 years ago.
Created by DALL·E
yada, yada, yada.
It’s undeniable that our economy and its jobs will evolve with technology. This is a tale as old as time.
Yet this cannot be where the conversation ends, particularly with students who are anxious about entering an uncertain workforce.
Two shifts in thinking about the future of jobs that might offer consolation and affirmation to students (and educators alike!) are:
Instead of saying, “AI will replace jobs,” shift the language to “People who leverage AI will replace people who do not.” While it is true that companies are utilizing AI to complete tasks that humans once completed, there are humans - software engineers, product managers, and designers - behind each of those technologies.
Secondly, instead of saying, “We don’t know what jobs will exist in 5-10 years,” shift the message to “We know what career pathways were created yesterday.” We know that companies are hiring social media marketers, drone engineers, and health professionals. Maybe these roles will change in the future because of AI-enhanced automation, hence “career pathways,” but we owe students a starting point when thinking about their careers.
4. AI for Social Good
One life lesson I always tried to impart to my students while teaching was that they were in the driver’s seat of their lives. They had agency, choice, and opportunity…even when it didn’t feel like it.
When reviewing the final two subconcepts - Democratization of AI Technology and Using AI to Solve Societal Problems - I’m reminded of my students and my encouragement to them to be agents in the world.
The first subconcept encourages students to create computer applications that leverage AI. While this might seem daunting, there are many entry-level resources for educators and students to get their hands dirty - figuratively - and tinker with AI tools (e.g., Scratch programming). The importance of playing around with AI technology cannot be overstated! Students must have the opportunity to explore this new technology.
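For a taste of what that tinkering can look like beyond block-based tools like Scratch, here is a minimal sketch in Python. Everything in it is invented for illustration: it labels a message as happy or sad by finding the most similar hand-labeled example, which is the nearest-neighbor idea behind many real classifiers, shrunk to classroom size.

```python
# A toy "mood classifier" in plain Python: the nearest-neighbor idea
# behind many real AI classifiers, at classroom scale.
# The examples and labels below are invented for illustration.

LABELED_EXAMPLES = [
    ("i love recess and my friends", "happy"),
    ("this is the best day ever", "happy"),
    ("i lost my homework and i am upset", "sad"),
    ("everything went wrong today", "sad"),
]

def similarity(a: str, b: str) -> int:
    """Count how many words two messages have in common."""
    return len(set(a.split()) & set(b.split()))

def classify(message: str) -> str:
    """Give the message the same label as its most similar example."""
    _text, label = max(LABELED_EXAMPLES, key=lambda ex: similarity(message, ex[0]))
    return label

print(classify("my project went wrong and i am upset"))  # -> sad
```

Students can add their own examples and watch the predictions change, which makes the link between training data and model behavior concrete.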
The second subconcept reminds us that students are capable of enacting change and that what they create can be a force for good. In the end, AI is just a human invention, and it is up to us to use it ethically and for the common good. The world is our oyster when it comes to using AI to address global issues like climate change, poverty, inequitable resource allocation, and, of course, education.
This concludes our six-part series on AI Literacy as laid out by AI4K12! We would love your feedback. Did you find this series useful?
Class dismissed!
Lewis Poche & Kourtney Bradshaw-Clay