
The Illusion of Competence: AI Cheating & the End of Oral Exams

Viva Voce has always felt like the ultimate test of real understanding. No notes, no shortcuts, just direct questions and honest answers. It creates a sense of certainty, as if this is where true knowledge finally shows up.

14 April 2026


That certainty is starting to slip. With AI tools shaping how students prepare, it has become easier to sound confident and structured without fully understanding the subject. Answers can be practiced, refined, and delivered in a way that looks like mastery.

Now here is the uncomfortable question: when a student performs well in a Viva, are we seeing real understanding, or just a well-rehearsed performance? And if the answer is unclear, the role of the final exam begins to change. Instead of confirming learning, it starts exposing what was missed all along.

Flawless Essays, Blank Stares: The Great AI Discrepancy

Something unusual is happening in classrooms today. Assignments are coming back perfect. Clear arguments, polished structure, strong vocabulary. On paper, it looks like students have mastered the topic. But the moment a simple follow-up question is asked, everything changes.

Silence. Confusion. Blank stares.

This growing gap is becoming hard to ignore. On one side, there are flawless submissions. On the other, students who struggle to explain even the basic logic behind their own work. At this point, students using AI to cheat is no longer a suspicion. It is a visible pattern.

Reports from the Associated Press in March 2026 highlight this shift as a widespread issue across higher education. Professors are seeing the same trend again and again. Essays look exceptional, yet understanding does not match. So, what changed?

AI tools have made it easy to generate high-quality answers within seconds. As a result, AI cheating in education is no longer about copying content. It is about presenting work that feels original, structured, and convincing, even when the student has not fully engaged with the material.

Educators are no longer asking if AI is being used. That question is already settled. The real challenge runs much deeper: how do you tell whether learning actually happened?

The "Inquisition" Strategy: Why Universities are Reverting to the Viva Voce

As the gap between polished work and real understanding grows, universities are reacting quickly. The solution many are turning to feels familiar: university oral exams. But the purpose has changed.

Oral exams are no longer just about discussion or deeper learning. They are being used as a kind of lie detector. A direct way to check if a student can actually explain what they submitted.

Interest in this approach has surged since the launch of ChatGPT in 2022. Educators are increasingly convinced of one thing: “You won’t be able to AI your way through an oral exam.”

That belief is driving real changes across institutions:

  • At the University of Pennsylvania, professors like Emily Hammer are pairing written assignments with oral defenses. The idea is simple. If a student truly understands their work, they should be able to explain it.

  • At the NYU Stern School of Business, Professor Panos Ipeirotis has taken a different route. He introduced an AI-powered voice chatbot to conduct oral exams, aiming to scale the process and, in his words, bring oral exams everywhere to “fight fire with fire.”

This shift answers a growing question: do universities check for AI? Yes, but not in the way many expect. Instead of trying to detect AI in the work itself, universities are testing the student behind it.

The Cognitive Shortcut: Unpacking the “Illusion of Competence”

Everything feels clear while reading it. The explanation flows, the logic makes sense, and it feels like you understand. But then comes a simple question, and suddenly, there is nothing to say.

That is the illusion of competence.

AI-generated content is designed to be smooth and easy to follow, which tricks the brain into thinking the learning has already happened. The difficult part, the struggle that builds real understanding, gets skipped without even being noticed.

This is where deep learning and generative AI begin to diverge. One forces you to think, question, and work through confusion. The other hands you a complete answer, removing the need to engage deeply.

Research published on Figshare in 2025 describes this pattern as cognitive offloading: students begin to outsource their thinking to AI systems, and over time their skills fade rather than improve.

One case study makes this even clearer. In tracked engineering courses, student engagement dropped sharply, with attendance falling below 30 percent. Many students believed they understood the material because AI helped them produce correct answers. In reality, their ability to explain or apply those ideas kept declining. 

This explains the moment of failure in oral exams. When the support disappears and the student has to think independently, the illusion breaks.

The Paradigm Shift: Why Continuous Assessment is the Only Path Forward

Understanding does not suddenly appear at the end. It builds quietly, step by step, long before any final exam. That is the shift educators are starting to recognize. Insights from Ken Purnell in early 2026 point to a simple idea: the focus needs to move away from the final artifact and toward the thinking behind it.

This is where continuous assessment changes the game. A formative approach makes learning visible as it develops: staged tasks, in-class activities, and live problem-solving reveal how students actually think. It starts to feel like a continuous defense. Not a one-time performance, but an ongoing demonstration of understanding.

AI also becomes part of that process. Students use it, question it, and explain their choices as they go. And over time, something important happens. You no longer wait until the end to see if learning worked. You can see it happening.

Conclusion: Grading the Process, Not the Post-Mortem

A final exam can only reveal what has already happened. When a student struggles to explain their own work at that stage, the gap in understanding has been there for a long time.

This is why the question of whether exams should be replaced with continuous assessment is gaining real weight. A single moment at the end cannot reflect how learning develops, especially in an AI-driven environment.

A more reliable path is already clear. Shift the focus from the final output to the thinking behind it, and follow that process from the very beginning. When learning is observed as it happens, understanding becomes visible, and the need to catch failure at the end starts to fade.