The AI Cheating Epidemic No One Wants to Talk About
Here's a statistic that should make every hiring manager uncomfortable: 83% of candidates say they would use AI assistance during technical interviews if they thought employers wouldn't detect it. That's not a typo. More than four out of five job seekers are willing to let ChatGPT, Claude, or specialized cheating tools do their thinking for them.
And it's not just willingness; it's already happening at scale. Surveys vary, but they point the same way: 27% of technical candidates admit to using AI during interviews, and in some polls nearly 48% openly acknowledge using unauthorized AI tools. One tech leader reported that 80% of candidates used a large language model to complete their code test, even after being explicitly told not to.
The technical hiring process is broken. But not in the way most people think.
The Rise of Interview Cheating Tools
In early 2025, a 21-year-old Columbia University student named Chungin "Roy" Lee made headlines for developing "Interview Coder"—an AI tool designed to solve technical coding problems discreetly during live interviews. The tool analyzes both written and verbal questions and generates code in real-time, invisible to screen-sharing detection.
Lee's tool claimed a 65% success rate, with users reportedly landing offers at Amazon, Microsoft, and other major tech companies. Despite being suspended from Columbia for sharing recordings about the tool, Lee has turned the controversy into a business, reportedly earning nearly $200,000 monthly in subscriptions.
Interview Coder isn't alone. Tools like Leetcode Wizard, Cluely, and various ChatGPT integrations have created an underground economy of interview fraud. Cluely, for instance, operates as a covert overlay that analyzes exam screens and spoken questions, feeding real-time answers back to candidates while remaining invisible during screen sharing.
Why Is This Happening?
Before we condemn candidates as cheaters, let's acknowledge the uncomfortable irony at play: companies like Google and Amazon actively encourage engineers to use AI tools in their daily work. By Google's own account, more than a quarter of its new code is now AI-generated. Yet during interviews, those same tools are banned.
This disconnect creates a strange moral calculus for candidates:
- "If I'll use AI on the job, why can't I use it to get the job?"
- "Everyone else is probably doing it—I'm at a disadvantage if I don't."
- "The interview doesn't reflect real work anyway."
There's also the matter of stakes. Technical interviews have become increasingly high-pressure, with candidates investing weeks in LeetCode preparation for jobs that may offer $200,000+ compensation packages. When the perceived risk is low and the reward is high, rational actors will game the system.
The Detection Arms Race
Companies aren't taking this lying down. Over 60% of engineering and talent acquisition leaders now cite technical assessment security as their top concern for 2025. This has spawned a new industry of anti-cheating solutions.
Behavioral Analytics
Modern proctoring tools like Sherlock AI use multimodal machine learning to detect interview fraud. Rather than simple rule-based triggers, these systems model natural versus adversarial interaction patterns—tracking device activity, audio environments, typing patterns, and candidate behavior. Sherlock claims detection accuracy has risen from approximately 85% to over 97%.
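Vendors keep their model internals proprietary, but the general shape of the approach is easy to picture: extract behavioral signals from a session, then combine them into a single risk score. Below is a minimal sketch in Python; every feature name and weight is a hypothetical placeholder rather than Sherlock's actual design, and a real system would learn the weights from labeled sessions.

```python
import math
from dataclasses import dataclass

@dataclass
class SessionFeatures:
    """Hypothetical per-session behavioral signals (illustrative, not a vendor's real inputs)."""
    paste_events: int             # large blocks of text inserted at once
    gaze_offscreen_ratio: float   # fraction of time eyes are off the primary screen
    typing_burstiness: float      # normalized variance of inter-keystroke intervals
    focus_switches: int           # window/tab focus changes during the session

def fraud_risk(f: SessionFeatures) -> float:
    """Combine signals into a 0-1 risk score with a toy logistic model.

    The weights below are placeholders for illustration; a production
    system would fit them on labeled honest vs. adversarial sessions.
    """
    z = (
        -3.0                          # bias: most sessions are honest
        + 0.9 * f.paste_events
        + 4.0 * f.gaze_offscreen_ratio
        + 1.5 * f.typing_burstiness
        + 0.3 * f.focus_switches
    )
    return 1.0 / (1.0 + math.exp(-z))

# A session with heavy pasting and sustained off-screen gaze scores high:
print(fraud_risk(SessionFeatures(paste_events=4, gaze_offscreen_ratio=0.6,
                                 typing_burstiness=2.0, focus_switches=12)))
```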
Platform-Level Protection
Assessment platforms have implemented sophisticated countermeasures:
- HackerEarth uses Smart Browser technology, AI-powered snapshots, audio monitoring, and code plagiarism detection, and can flag forbidden tools like ChatGPT or Interview Coder. (A simplified sketch of plagiarism-style similarity checking follows this list.)
- Talview uses intelligent application control to block AI cheating tools before they can run, employing behavioral analytics to identify hidden overlays and suspicious shortcuts.
- HackerRank provides enhanced security protocols through certified assessments with validated question pools.
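These platforms don't publish their detection internals, so treat the following as a sketch of one common building block rather than any vendor's actual method: fingerprint-based code similarity in the spirit of MOSS. The idea is to normalize identifiers, hash overlapping token k-grams, and compare the resulting fingerprint sets; submissions that share most fingerprints likely share an origin, even if variables were renamed.

```python
import re

def fingerprints(code: str, k: int = 5) -> set[int]:
    """Hash every overlapping k-gram of tokens into a fingerprint set.

    Identifiers are collapsed to a single token so that renaming
    variables doesn't defeat the check. (A real tokenizer would also
    distinguish language keywords; this is deliberately simplified.)
    """
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|\S", code)
    normalized = ["ID" if re.match(r"[A-Za-z_]", t) else t for t in tokens]
    return {hash(tuple(normalized[i:i + k])) for i in range(len(normalized) - k + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity of fingerprint sets; values near 1.0 suggest a shared origin."""
    fa, fb = fingerprints(a), fingerprints(b)
    if not fa or not fb:
        return 0.0
    return len(fa & fb) / len(fa | fb)

s1 = "def total(xs):\n    acc = 0\n    for x in xs:\n        acc += x\n    return acc"
s2 = "def sum_all(vals):\n    r = 0\n    for v in vals:\n        r += v\n    return r"
print(similarity(s1, s2))  # 1.0: identical structure despite renamed identifiers
```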
Human Pattern Recognition
Interviewers have learned to spot telltale signs: eyes wandering to the side, reflections of other apps visible in candidates' glasses, answers that sound rehearsed or don't quite match the questions asked. Google has moved beyond surface-level plagiarism checks by layering code structure analysis, response timing, and language pattern review.
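Google's exact heuristics aren't public, but response-timing analysis in particular is easy to illustrate: if an editor log shows hundreds of characters appearing in a single instant after a long silence, something other than typing produced them. A toy example, assuming a hypothetical log of (timestamp, characters inserted) events:

```python
def suspicious_insertions(events: list[tuple[float, int]],
                          min_pause: float = 10.0,
                          min_chars: int = 120) -> list[float]:
    """Flag timestamps where a long pause is followed by a paste-sized insertion.

    `events` is a hypothetical editor log of (timestamp_seconds, chars_inserted)
    pairs. Humans type a few characters per event; 120+ characters landing at
    once after a 10-second silence looks more like pasted, generated code.
    """
    flagged = []
    for (prev_t, _), (t, chars) in zip(events, events[1:]):
        if t - prev_t >= min_pause and chars >= min_chars:
            flagged.append(t)
    return flagged

# Example: quiet for 40 seconds, then 300 characters appear at once.
log = [(0.0, 1), (0.4, 1), (0.9, 2), (41.0, 300)]
print(suspicious_insertions(log))  # [41.0]
```

A signal like this is circumstantial on its own (candidates legitimately paste from their own scratch files), which is why serious systems combine it with the other signals above rather than auto-rejecting.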
The Return to In-Person
Perhaps the most dramatic response has been the revival of onsite interviews. The share of interview rounds conducted in person rose from 24% in 2022 to 38% in 2025, particularly for design and behavioral assessments.
Google has reintroduced onsite interviews specifically to combat AI cheating, while Meta has experimented with AI-assisted interviews that evaluate responses dynamically—essentially fighting fire with fire.
But returning to in-person interviews creates its own problems: increased costs, geographic limitations, candidate inconvenience, and barriers for remote-first companies. It's a retreat, not a solution.
A Better Path Forward: Assess What Actually Matters
Here's the uncomfortable truth that the cheating epidemic exposes: if candidates can successfully cheat their way through your technical assessment using AI, your assessment might be measuring the wrong things.
Think about it. If a candidate uses ChatGPT to solve a LeetCode-style algorithm problem, what have they demonstrated? That they know how to prompt an AI effectively—which, ironically, is increasingly a valuable job skill.
The companies winning the hiring game in 2025 aren't just adding more proctoring. They're fundamentally rethinking what technical assessments should measure:
1. Real-Time Explanation and Reasoning
Require candidates to articulate their thought process throughout the assessment. AI can generate code, but it can't convincingly explain, in a live back-and-forth, why a candidate chose specific tradeoffs or how they'd adapt the solution under different constraints.
2. Custom, Non-Public Problems
Use problems that aren't readily available in public datasets used to train AI models. When your assessment relies on novel scenarios that require genuine problem-solving, AI assistance becomes less useful.
3. Multi-Modal Assessments
Combine coding with system design discussions, behavioral questions, and pair programming exercises. It's much harder to cheat across multiple modalities simultaneously.
4. Assess AI Collaboration Skills
Here's a radical idea: instead of banning AI, explicitly test how candidates work with AI tools. Give them a complex problem, full access to AI assistants, and evaluate:
- How effectively do they prompt and iterate?
- Can they identify and fix AI-generated bugs?
- Do they understand the code well enough to explain and extend it?
- Can they recognize when AI suggestions are wrong?
This approach acknowledges reality: developers will use AI on the job. The skill that matters is human-AI collaboration, not coding in isolation.
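What might such an exercise look like? One hypothetical format: hand the candidate a plausible-looking AI-generated helper with a subtle seeded bug, and ask them to find it, fix it, and explain why a model might produce it. The function below is purely illustrative, not taken from any real assessment; it uses a classic failure mode (a mutable default argument) that language models genuinely do reproduce from low-quality training code:

```python
# Candidate prompt: "This AI-generated helper batches items into fixed-size
# groups, but callers report that results bleed between calls. Find the bug,
# fix it, and explain why the model might have produced it."

def batch(items, size, _out=[]):
    """Group `items` into lists of at most `size` elements."""
    # BUG: the mutable default `_out` is created once and shared across
    # every call, so batches silently accumulate between invocations.
    for i in range(0, len(items), size):
        _out.append(items[i:i + size])
    return _out

# A strong candidate spots the shared default and rewrites it statelessly:
def batch_fixed(items, size):
    """Group `items` into lists of at most `size` elements (no shared state)."""
    return [items[i:i + size] for i in range(0, len(items), size)]

print(batch([1, 2, 3, 4], 2))   # [[1, 2], [3, 4]]
print(batch([5, 6], 2))         # [[1, 2], [3, 4], [5, 6]]  <- stale state!
print(batch_fixed([5, 6], 2))   # [[5, 6]]
```

Scoring here focuses on the conversation: whether the candidate can localize the defect, articulate the root cause, and judge when to trust or discard generated code, exactly the collaboration skills the list above describes.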
The Tipping Point
Some companies have already reached a tipping point. One tech company found AI usage in their assessments so widespread that they decided to simply ignore it, moving top performers to subsequent interview rounds where human skills could be better assessed.
This pragmatic approach recognizes that fighting the AI cheating arms race is expensive, imperfect, and possibly futile. Instead of asking "how do we prevent AI use," these companies ask "how do we identify great engineers regardless of AI use?"
What This Means for Your Hiring Process
If you're responsible for technical hiring, here's the reality check:
- Your current assessments are probably compromised. Assume a significant percentage of candidates are using AI assistance, detected or not.
- Adding more proctoring is a temporary fix. Cheating tools evolve faster than detection tools. This is an arms race you can't win permanently.
- The interview-job disconnect is the real problem. When assessments don't reflect actual job requirements, candidates rationalize cheating. Align your process with reality.
- AI collaboration is a feature, not a bug. The best candidates in 2025 know how to leverage AI effectively. Test for that skill explicitly.
The 83% statistic isn't just about candidate ethics—it's a signal that our assessment methods have failed to keep pace with how software development actually works. The companies that recognize this and adapt will hire better engineers. The ones that double down on outdated methods will keep playing whack-a-mole with cheating tools while missing great candidates who refuse to play the game.
The future of technical hiring isn't about preventing AI use. It's about assessing what humans uniquely bring to the table—and that requires entirely new approaches to evaluation.

