There was a time when educators feared that handheld calculators would encourage students to cheat on math homework and ruin their mathematical reasoning skills because students would lose the ability to do mental math. Similarly dire predictions were made about the spell-check functions on word processors and, later, about programs that corrected grammar and usage errors in student essays. The latest concern over how technology might impair student learning is generative artificial intelligence (AI), including ChatGPT, HayoAI, Tailor, and many others. Teachers have legitimate concerns about the impact of AI on learning, with AI programs writing student essays and lab reports and answering test questions in a broad array of subjects. In a recent survey, 56 percent of students admitted to using AI to craft answers on assignments despite schools’ growing efforts to curtail students’ use of it.
Rather than attempting to ban AI, teachers and educational leaders are better advised to help students use these tools in an ethical and effective manner. Drawing on my research on AI, including interviews with leading AI scholars, I offer three practical suggestions for educational institutions to consider when bringing AI tools into the classroom.
1. Practice in Class, Not at Home
To ensure students don’t misuse AI in learning, my first suggestion is that teachers reconsider what effective practice looks like. Most teachers assign homework because they know that students need practice. If students solve 30 problems with the quadratic formula or read a map of Europe repeatedly, the thinking goes, then these skills should become ingrained.
The problem with this view is that effective practice requires feedback, a response to that feedback, and the immediate application of it. Traditional homework rarely meets these criteria. Rather than building student skills, it encourages cheating, not only with AI but also by copying from friends or leaning on siblings and parents for answers. This explains why some students have perfect homework grades and then perform badly on final assessments.
Conversely, every year I see students who score well on Advanced Placement exams and yet receive low grades because of missing homework. The verdict is clear: homework is a poor predictor of real performance, and in the age of AI this incongruity will only get worse. Moreover, there is little to no evidence that homework has a positive impact on student achievement; it reflects compliance with teacher demands more than genuine learning. The most effective practice happens in class, with feedback and the response to it occurring immediately.
Rather than attempting to ban AI, educators are better advised to help students use these tools in an ethical and effective manner.
When teachers give up lecture time and get students to practice work during class, teachers witness authentic student work and can provide immediate feedback to improve student performance. If students use AI in their work in class, the teacher is there to monitor use, teach effective practices, and point out the tool’s limitations, pushing students to go deeper in their learning and not merely copy AI-generated work.
Moreover, students can give one another feedback in class in addition to the teacher’s. For example, one student might present an analysis of the Spanish-American War generated by ChatGPT, and another might address the same question with Claude or another of the many emerging AI products. The students can then debate and reconcile the disagreements between the AI-generated responses, find errors, and evaluate the biases that are inherent in every AI product. This is critical thinking at its finest, showing students that AI output is only the start of a proposed solution. Practice during class can also generate vibrant classroom discussions in which students debate alternative approaches to a history question, interpretations of data, or a host of other questions that might have more than one defensible answer.
2. Require Evidence of Understanding
One of the rites of passage for graduate students is the oral defense of their thesis. It requires diligent preparation and the ability to respond to a variety of questions to defend the reasoning, research, and conclusions in the thesis. But an oral defense doesn’t have to be limited to graduate students—students of any age can do this work. Critical thinking is an important skill for all students at every grade level and every subject, and requiring students to defend their reasoning in an oral presentation, either in class or in a short video, can improve their critical thinking skills and, at the same time, discourage AI-generated work.
We expect middle school math students to demonstrate that they know the Pythagorean theorem: that the sum of the squares of the two legs of a right triangle equals the square of the hypotenuse (a² + b² = c²). If a teacher wants to know whether a student really understands this mathematical principle, they might ask, “Where is the Pythagorean theorem not true?” (on the surface of a sphere, such as Mars, or any other curved surface) or “How can you demonstrate that the theorem is true using blocks?” These types of questions dig below the “right answer” and test students’ application of learning, and they can be posed in any subject.
The oral defense process is time-consuming, however, and given the large class sizes in many schools, teachers will not have time to require an oral defense from every student. But they do have time to select three students at random (and the selection must be truly random, not restricted to eager volunteers) to defend their work. Over the course of a term, every student should have both the opportunity and the obligation to engage in an oral defense. The teacher can make this emotionally safe by offering the student the chance to “phone a friend” (confer with a classmate) or “answer a question with a question” to keep the discussion rich and ongoing.
3. Require the Use of AI as a First Draft
My third suggestion is that rather than fearing AI and attempting to restrict it, teachers should require it. Too many student assignments are “one and done,” emphasizing getting the work right the first time over the more demanding cycle of responding to teacher feedback and then revising the first draft to improve everything from lab reports to essays to math solutions.
Every writer knows that the real work of presenting a coherent essay lies not in getting the first draft right but in the editing and revision that follow it. Rather than fight AI solutions and the many easily available homework-helper programs, teachers should consider requiring students to use ChatGPT and other technology assists. They should ask students to submit both the computer-generated essay or other work product and a revision that shows how they reworked and improved upon the technology-supported product. This sends two essential messages to students. First, an essential human skill today is improving the work of computers, not passively accepting the premise that bots are better than people. Second, it reinforces the critical thinking skills that are inherent in revision and improvement.
Educators can use AI to improve, rather than diminish, the critical thinking abilities of students.
In my interviews with college professors and employers, one of my most consistent findings is that they want students and employees who can accept and apply feedback. They rail against the “get it right the first time” mentality in which feedback is irrelevant. However sophisticated the technology may be, our students can and must add value to computerized solutions.
For better or worse, AI has escaped Pandora’s box. Attempting to stop it in schools is as futile as expecting students to return to the dress codes of 1953. Nevertheless, educators can use AI to improve, rather than diminish, the critical thinking abilities of students. We need not oppose AI; rather, we can use it to help our students meet the critical thinking challenges of today.