Most teachers want strategies that stop students from using AI to cheat, but truly blocking AI is a moving target. Instead, teachers can design assessments that are much harder for AI to "fake" and that also foster real learning and engagement. Perhaps instead of "AI-proof" assignments, we need to think about "AI-resistant" assessments and "AI-leveraging" learning.
Instead of only focusing on how to prevent students from using AI, what if we also asked:
What's the real purpose of this assignment?
Why might a student be tempted to cheat?
How can we design assessments that are more engaging, authentic, and valuable?
How can we leverage AI as a tool to actually deepen student understanding?
Tip: Clarify the real purpose and value of each assignment to help students buy in and reduce cheating.
Understanding the root cause is the first step. "Cheating" is often a symptom of other issues.
Students juggling multiple deadlines, extracurriculars, and high-stakes grades may look for a shortcut, not out of malice, but from desperation.
If an assignment feels like "busy work" or its purpose isn't clear, students are less motivated to engage with it authentically.
When the penalty for a poor grade is high, students may feel it's safer to get a "correct" answer from AI than risk their own potentially "wrong" answer.
Open the following dropdowns to view strategies and examples.
These strategies make it significantly harder for an AI to generate a complete, high-quality response. They focus on unique, in-the-moment, or process-oriented thinking.
Socratic seminars, timed in-class essays, group presentations, or whiteboard problem-solving. AI can't participate in a live, dynamic discussion.
Connect assignments to a specific event from that day's class discussion, a local community issue, or a personal reflection. (e.g., "Using the 'Tragedy of the Commons' concept we discussed today, analyze the problem of parking at our school.")
Require students to submit process journals, annotated bibliographies, multiple drafts with tracked changes, or a reflection on *how* they arrived at their answer.
Provide a brand-new case study, a unique dataset, a guest speaker's transcript, or a very recent article that AI models are unlikely to have seen in their training data.
These strategies embrace AI as a tool and ask students to use it, then build upon, critique, or refine its output. This shifts the skill from "answer generation" to "critical thinking and refinement."
Have students prompt an AI on a complex topic. Their assignment is to submit the AI's response along with their own detailed critique identifying its inaccuracies, biases, or "shallow" analysis.
Allow students to use AI to generate a first draft. Their graded submission must be a heavily revised and annotated version, with comments explaining *why* they changed what they did.
Ask students to use AI to generate 10 possible solutions to a problem. Their assignment is to select the top 3, justify their choices, and synthesize them into a single, superior solution.
Have students use an AI chatbot to "debate" a topic. They must submit the transcript along with a reflection on the AI's strongest and weakest arguments, and how their own thinking evolved.
When an assignment can be fully completed by AI, it might be a sign that the assignment itself is assessing a skill that is becoming obsolete. How can we assess deeper, more human-centered skills?
Design tasks that mirror real-world work. Instead of a 5-paragraph essay on climate change, have students create a public service announcement, write a policy brief for a local official, or design a community action plan.
The assignment isn't just the final product, but the *reflection* on it. Ask: "What was the hardest part of this project for you? What strategy did you use to overcome it? What would you do differently next time?"
Have students *defend* their work. This could be a formal presentation, a Q&A session, a "gallery walk" of projects, or a podcast recording. The value is in *explaining* and *justifying* their thinking.