From Pencils to Pixels: AI Tools Reshaping Student Assessments
AI tools that revitalize low-tech assessments.
Welcome to Teacher’s AIed: the newsletter about AI in the K12 Classroom.
How AI will affect K12 Classrooms is complex. Each week, we curate knowledge for educators about the strengths, weaknesses, opportunities, and threats of AI and K12 education.
In this edition, we continue our series focused on addressing the “student AI-plagiarism problem.” This week’s focus is on using AI-enhanced tools to improve low-tech assessments.
Article Overview:
This article outlines the key characteristics of a plagiarism-proof assessment environment.
Two promising AI tools, Sherpa and GradeCam, suggest models for effective AI-enhanced assessment. Sherpa reintroduces oral exams with AI assistance and GradeCam allows plain paper assessments with AI-driven handwriting recognition for grading.
While these tools have some limitations, particularly in assessing higher-order thinking, they represent an innovative approach to addressing AI-induced plagiarism through low-tech assessments.
When I was just starting graduate school, I remember looking at the list of required courses and thinking to myself, “Really, an entire class just on assessments?” I quickly realized how foundational assessments are to the learning process. I also saw the need for assessments that are fair, reliable, and efficient while minimizing the chances of academic dishonesty.
We now have to add one more consideration to the assessment pot: plagiarism facilitated by AI.
As we stated in the introduction article to this series, there are three main ways educators can proactively mitigate the threat of AI-facilitated plagiarism. Yet it is important to remember that no single solution constitutes a complete response to the AI plagiarism problem; instead, we recommend a combination.
This blog post explores designing assessments that require little to no student interaction with technology while harnessing the power of AI to make grading and assessment management more efficient. We seek tools that take low-tech inputs and transform them with AI. In other words, we want both the cheating-disincentivizing benefits of low-tech assessments and the scalability, efficiency, and analytics of higher-tech solutions.
In the sections below, I’ll share two AI tools that fascinate me because they do just that. But before we jump into those tools, I have two notes:
It is essential to acknowledge that trust is a vital component of any educational process. Our most recent post explored this - the classroom culture and management considerations of AI use in K12 classrooms. Placing full trust in students is a fundamental aspect of their growth and development, and it's often the lack of trust that leads to academic dishonesty. While we cannot eliminate all possibilities of students using AI to cheat, we can certainly design assessments that minimize such risks.
For our elementary school teacher readers, the concern about AI cheating might seem far-fetched. After all, young students may not have the technical skills or resources to employ AI in their assignments. However, AI can still play a crucial role in streamlining assessment processes, saving you valuable time and providing more insightful feedback. The following tools can help you in that quest.
Let's explore some exciting AI tools and concepts that can make assessments in elementary schools more efficient.
Sherpa: “Voice-enabled assignments for students to chat about class material”
Oral assessments were the de facto form of assessment for centuries. But, as education became more democratized and industrialized, educators couldn’t assess each student through a dialogue a la Socrates.
Two undergraduate students, Joseph Tey and Shaurya Sinha, from Stanford University's Piech Lab, are pioneering an innovative way to reintroduce oral exams using artificial intelligence.
To use Sherpa, instructors upload assigned readings or have students upload written papers. The tool then analyzes the reading and either creates a list of questions or uses an instructor-provided list to assess students’ understanding of key concepts. Instructors can choose to record audio and/or video of the conversation between Sherpa and students. Sherpa uses AI to transcribe student responses and highlights areas where the answers seem off-point.
Pretty cool, huh?
But, how does this help with disincentivizing plagiarism?
The short answer is that students are “put on the spot” by Sherpa in a way that a take-home assessment cannot replicate. Students are incentivized to learn the material before their assessment with Sherpa, as a student’s delayed response and the sounds of furious page-flipping or ChatGPTing would raise suspicion of cheating.
See “As AI Chatbots Rise, More Educators Look to Oral Exams — With High-Tech Twist” from EdSurge for more details on Sherpa.
GradeCam: “Assessment made easy.”
I never thought I would say this: Consider reverting to pencil and paper assessments to rule technology out of the equation for students.
Pencil and paper assessments surely have their place in a classroom - particularly in our quest to disincentivize cheating. However, the benefit of their simplicity has usually been outweighed by the extra time needed to grade student responses and analyze the scores.
However, there might just be a “best of both worlds solution.”
Gradient by GradeCam is a versatile assessment tool that offers a range of features for educators. It allows teachers to create plain paper bubble sheets for scoring multiple-choice, rubric-based, or gridded response type answers. Additionally, with the handwriting recognition power of AI, GradeCam can read short student responses - think numeric answers to a math question or a fill-in-the-blank word or phrase. After teachers scan the answer sheets using their phone’s camera, the assessments are automatically scored according to the answer key the teacher loads in.
A Limitation and a Caveat
There are some limitations to both of these platforms. Primarily, I am concerned about the depth of knowledge being assessed. Simple responses, whether they are multiple choice or fill-in-the-blank, struggle to assess knowledge higher up Bloom’s Taxonomy. I’d love to see more AI-enhanced tools mimic a teacher’s ability to rigorously and intentionally assess deeper levels of knowledge.
I haven’t used either of these tools in a classroom, so I’m not speaking from personal experience. Rather, from my time working in and with schools, along with my research into AI and AI-enhanced EdTech, these tools - or at least their models - fascinate me. If you use either of these programs, please reach out to us at [email protected]. We would love to hear your experiences!
In the ever-evolving realm of education, designing assessments that are both equitable and resistant to AI-facilitated plagiarism remains an ongoing challenge. This blog post has explored assessments that minimize students' access to technology while leveraging AI for more efficient grading and management. The tools mentioned strike a balance, capitalizing on the advantages of low-tech assessments in discouraging cheating while harnessing the scalability and analytical capabilities of higher-tech solutions. Though both tools come with limitations, particularly in assessing higher-order thinking, they represent promising steps in the ongoing quest to combat AI-induced plagiarism and offer insights into student performance.
As education evolves, these tools offer educators novel ways to navigate the intricate assessment landscape, fostering authentic and effective learning experiences.