
❗️ "You said what now?" Natural Interaction and AI Literacy

Estimated Read Time: 3 min 58 sec

Teach with expert insights on AI, curated by your trusty Teacher’s AIde

Welcome back to Teacher's AIed, where spring is in the air!


This is the fifth installment in my series on AI literacy and its implications for K-12 education. AI literacy, like digital literacy before it, will need to be taught to students and educators alike. While many components of AI literacy are already part and parcel of computer literacy, there are areas where they differ.

The organization AI4K12 outlined 5 Big Ideas of AI Literacy. The 4th Big Idea, “Natural Interaction,” is divided into four concepts: Natural Language, Commonsense Reasoning, Understanding Emotion, and Philosophy of Mind. In this blog post, I’ll dive into each of those components.

Did you miss the previous posts? Here are the links:

1. Natural Language

The annals of history, which in this case are Reddit threads, are full of times when voice assistants like Siri and Alexa have failed us. Here’s just one example of a Siri query gone awry:

As AI tools improve, these types of mistakes should become less common.

But that’s not the design concept in question. When considering how AI systems are designed to process natural language, we have to look at the complexities of language that we, as humans, pick up on but computers do not.

AI4K12 notes that “Computers have difficulty understanding text that makes use of metaphor, imagery, hyperbole, sarcasm, humor, or wordplay.”

Understanding language is not simply an algorithmic process of defining terms and piecing together grammatical structures. It’s more of a game (to borrow Wittgenstein’s term). And computers are not traditionally the best players of this game.


Generative AI tools have made significant strides in reasoning with language that employs these literary devices, and it is fair to assume that the human-computer barrier will only continue to lessen.

2. Commonsense Reasoning

We educators often joke that middle school students lack common sense. I once had a student say, “Mr. Poche, I heard you give that direction, but I chose to forget.” I’d love for someone to explain how one can “choose to forget” what they don’t want to hear.

For computers, common sense takes a different form.

Understanding the physical world and making inferences based on everyday experiences is a fundamental aspect of commonsense reasoning for computers. There are things humans instinctively know based on common sense, such as the consequence of a raw egg rolling off a kitchen counter and hitting a hardwood floor. We know that the egg will likely break—not because of a deep understanding of physics or materials—but through common sense.

Just like our middle school students, we might need to explicitly teach computers what we see as common sense.

3. Understanding Emotion

The idea of computers understanding and responding to human emotions often feels like a concept straight out of science fiction. Movies like “Ex Machina” and “M3gan” depict AI characters with fascinating levels of emotional intelligence. They are so advanced they are thoroughly unsettling.


Yet there’s no need to jump to the far reaches of sci-fi. In reality, the current state of AI in understanding emotions is more grounded and much tamer.

One common application is sentiment analysis, where AI tools assess whether a piece of text, like a Google review, has a positive or negative sentiment. More advanced systems aim to interpret a wider range of emotional cues, including body language and vocal tone, and adjust their responses accordingly.
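To make the idea concrete, here is a minimal sketch of sentiment analysis in Python. The word lists and the `classify_sentiment` function are illustrative assumptions of my own, not a real library; production systems use trained language models rather than keyword counts, but the core idea of mapping text to a sentiment label is the same.

```python
# Toy sentiment analysis: score text by counting hypothetical positive
# and negative keywords. Illustrative only -- real systems use trained models.

POSITIVE = {"great", "love", "excellent", "friendly", "helpful"}
NEGATIVE = {"terrible", "hate", "rude", "slow", "awful"}

def classify_sentiment(text: str) -> str:
    """Label text as 'positive', 'negative', or 'neutral' by keyword counts."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify_sentiment("The staff were friendly and helpful!"))      # positive
print(classify_sentiment("Terrible service, and the food was awful.")) # negative
```

A student could extend this by adding words to the lists and watching how the labels change, which makes a nice hands-on entry point to the concept.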

As of now, conversations with AI chatbots feel impersonal, which might be a good quality for helping young learners distinguish between man and machine.

But I don’t believe chatbots will remain entirely impersonal. Many believe the next stage in AI development is AGI (Artificial General Intelligence), where AI tools have the same capacities as humans, including emotional intelligence.

4. Philosophy of Mind

What does it mean to be conscious? Is it just a matter of being aware that you are a self? Is it linked to our memories and our ability to recall?

Our philosophically inclined friends are always more than happy to pose questions that make us think about who we are, and what the answers might imply about what computers are and can be.


The debate among AI experts and philosophers is divided; some assert that computers will never replicate human intelligence in its entirety, while others envision AI surpassing human cognitive capabilities.

Now, this isn’t a philosophy newsletter, so I won’t dive too deeply into the arguments for either side. However, what likely matters more than the content of these philosophical conversations is simply that students are having them.

To wrap up Big Idea #4: getting AI software to the point where interacting with it feels as natural as interacting with a human is still years away. However, with each new model, we get one step closer. First, we will find Siri responding with a bit of sarcasm or genuine humor. Then, there might come a response from ChatGPT that feels genuinely empathetic. Finally, what if we find ourselves in the presence of a computer and a human and cannot tell which is which?

Stay tuned for more in this series as we continue to unravel the intricacies of AI Literacy and its importance in education and beyond!

Class dismissed!

Lewis Poche & Kourtney Bradshaw-Clay
