Rethinking coding interviews in the AI era.
You're hiring an engineer for 2026. Why are you still testing them like it's 2005?
Let me paint you a picture.
A candidate sits down for a technical interview. They're given a whiteboard (or its digital equivalent) and 45 minutes to write code from memory while a stranger watches their every keystroke. No Stack Overflow. Definitely no AI assistant.
Meanwhile, back in the real job they're interviewing for, every working engineer has a tab open with ChatGPT, Copilot, or Claude. They're generating boilerplate, asking for code reviews, debugging at the speed of conversation.
We're testing people on skills they will never use in the way we're testing them. Something has to give.
The Problem With the Whiteboard
The classic coding interview was designed to test problem-solving ability under pressure. And honestly? For its era, it made sense. When writing code meant typing everything from memory, it was a reasonable proxy for engineering skill.
But that era is over.
LLMs can produce a working quicksort, a binary search tree, or REST API boilerplate in seconds. Any engineer who isn't using these tools daily is working slower than they need to. So when we refuse to let candidates use them in interviews, we're not testing real-world ability. We're testing a specific, increasingly irrelevant sub-skill: code recall.
Worse, we're accidentally selecting for the wrong things. The candidate who memorised algorithm patterns performs well. The candidate who is exceptional at prompt engineering, code review, architectural thinking, and knowing when the generated code is wrong might stumble. And that second set of skills? That's exactly what matters now.
What a Better Interview Looks Like
Here's a simple shift: stop asking candidates to write code. Start asking them to work with it.
Give them a real, reasonably complex task. Let them use whatever tools they'd use on the job — including AI. Then evaluate what actually matters:
Design decisions. With AI writing the boilerplate, what choices did they make? Did they structure it sensibly? Did they handle edge cases? Did they spot where the LLM cut a corner?
Can they explain it? Sit down with the output and ask them to walk you through it. An engineer who understands what they built, even if they didn't type every line, can explain every part of it. An engineer who just vibed with the AI output will fall apart the moment you dig in.
How do they think about quality? Did they add tests? Is there any documentation, or did they assume the code speaks for itself? How do they talk about maintainability and what happens when a colleague picks this up in six months?
Debugging instincts. Plant a bug in the code yourself, or hand them AI-generated code containing a deliberate flaw, and watch how they approach it. This is still a deeply human skill.
These signals tell you far more about working ability than watching someone implement a linked list from memory.
The Fear Under the Hood
I understand the resistance. The immediate concern is: "If we let candidates use AI, how do we know they can code?"
But I'd flip the question: in 2026, what does "being able to code" actually mean?
It means knowing what to build and why. It means understanding the output well enough to own it. It means catching the subtle bugs, asking the right questions, and making the call when the AI is confidently wrong. None of that is tested by a 45-minute whiteboard session.
The interview process should reflect the job. Right now, for most roles, it doesn't.
A Starting Point
You don't need to throw out your entire interview process overnight. But a few changes go a long way:
Replace one coding round with an AI-assisted design challenge.
Add a "walk me through this code" component. Give candidates something pre-written and ask them to explain and critique it.
Ask explicitly how they use AI in their daily workflow. Candidates who use it thoughtfully and critically are showing you something valuable.
Evaluate documentation and communication as seriously as logic.
The engineers who thrive in the next decade won't be the ones who refused to use AI tools. They'll be the ones who learned to work with them well: critically, creatively, and deliberately.
It would be nice if our interviews were designed to find those people.
What does your team's interview process look like? I'd love to hear whether others are making this shift — or still stuck in the whiteboard era.