
Technical Interview Guide for Hiring Managers

Technical interview questions are the single biggest leverage point most hiring managers ignore. Not the job description. Not the offer letter. The forty-five minutes you spend asking a senior developer to explain how they’d design a caching layer, or watching a DevOps candidate whiteboard an incident response workflow. That conversation decides whether you’re making a $140,000 investment in someone who ships, or a $140,000 lesson in what happens when you wing it.

A 2024 CareerBuilder study found the average cost of a bad hire sits around $17,000. For technical roles, where onboarding takes longer and the blast radius of a wrong decision touches production systems, that number climbs fast. We’ve seen it firsthand. At KORE1, we run technical staffing across software engineering, DevOps, data, and infrastructure. Hundreds of technical interviews debriefed every year. The pattern is consistent: companies with a structured technical interview process hire better and hire faster. Companies that let each interviewer freelance end up in four-month hiring loops wondering why their top candidate took another offer three weeks ago.

This is the guide we wish every hiring manager read before their next technical panel.

Hiring manager conducting technical interview with software engineer at conference table

What Makes a Technical Interview Different?

A technical interview evaluates a candidate’s ability to solve real engineering problems, not just talk about them. It goes beyond resume credentials and behavioral questions into actual demonstration of skill, whether that’s writing code, designing systems, debugging a broken deployment, or explaining tradeoffs between two architectural approaches they’ve actually lived with in production.

That’s the clean definition.

The messy reality is that most technical interviews don’t do any of that well. They test memorization. A hiring manager pulls “top 50 Java interview questions” from Google, reads them off a list, and checks whether the candidate’s answer sounds close enough to the one on the screen. That’s not an interview. That’s a pop quiz. And it selects for people who prepared for your specific pop quiz, not people who can build and maintain the systems you actually need built and maintained.

Schmidt and Hunter’s landmark meta-analysis, cited in virtually every industrial-organizational psychology textbook since 1998, found structured interviews predict job performance with a validity coefficient of .51. Unstructured interviews? .38. That gap sounds small until you multiply it across fifty hires a year over five years. The structured approach isn’t marginally better. It’s a different category of outcome.

How to Structure Your Technical Interview Process

Four stages. Not three, not seven. Four is enough to evaluate thoroughly without burning three weeks of calendar time and losing your best candidates to companies that move faster.

Stage 1: Recruiter Screen (20-30 minutes)

Not technical. This is logistics and mutual fit. Compensation expectations, timeline, remote/hybrid/onsite preferences, visa status if applicable. The recruiter should also be pattern-matching on communication clarity and enthusiasm level, which sounds soft but correlates strongly with interview-to-offer conversion. We track it.

Stage 2: Technical Screen (45-60 minutes)

One engineer. One focused problem. This is the gate that filters out candidates who look good on paper but can’t execute. For software roles, a single well-chosen coding problem. For DevOps, a troubleshooting scenario. For data engineers, a SQL problem with messy joins and a follow-up question about how they’d pipeline the output.

The key: the problem should have a clear minimum viable solution that a competent mid-level engineer can reach in 30 minutes, with extensions that let strong candidates show depth. If your screening problem requires 55 minutes of heads-down coding to reach any working solution, it’s too hard for a screen and you’re losing qualified people.

Stage 3: Deep Technical (90-120 minutes, split into two sessions)

System design in one session. Domain-specific depth in the other. Two different interviewers. This is where you find out whether someone can think at the level the role demands.

System design is non-negotiable for anyone at the senior level or above. Give them a real problem from your stack, simplified. “Design a notification system that handles 10 million users” is generic and overused. “We have a batch processing pipeline that occasionally drops records between our Kafka consumers and Postgres. How would you diagnose and fix that?” is specific, and it reveals real experience versus interview prep theater.

Stage 4: Team Fit and Hiring Manager Conversation (30-45 minutes)

This is not a “culture fit” vibe check. It’s structured. You’re evaluating how they communicate technical decisions to non-technical stakeholders, how they handle disagreement, what they do when they’re wrong, and whether their working style meshes with how your team actually operates.

Ask about a time they pushed back on a technical decision and lost. The answer tells you more about how they’ll function on your team than any coding problem will.

| Stage | Duration | Who Runs It | What You’re Evaluating |
| --- | --- | --- | --- |
| Recruiter Screen | 20-30 min | Recruiter | Logistics, comp alignment, communication baseline |
| Technical Screen | 45-60 min | Senior Engineer | Can they actually code/build/debug at the level claimed? |
| Deep Technical | 90-120 min (2 sessions) | 2 Engineers (system design + domain) | Architecture thinking, depth of expertise, tradeoff reasoning |
| Team Fit + HM | 30-45 min | Hiring Manager | Communication, collaboration, conflict style, motivation |

Total elapsed time for a candidate: roughly 4-5 hours of interview, spread across 7-10 calendar days. Any longer than that and you’re leaking candidates. According to a 2025 hiring trends report, 61% of candidates accept the first offer they receive. Speed is not the enemy of rigor. Slow process is.

Technical interview panel observing candidate whiteboard system design session

Technical Interview Questions by Role

Generic question lists are everywhere. What’s missing is guidance on which questions actually differentiate for specific roles. The questions that surface a strong backend engineer are useless for evaluating a DevOps candidate, and vice versa.

Here’s what we’ve seen work, organized by role type, with notes on what the answers actually reveal.

Software Engineer / Full-Stack Developer

“Walk me through how you’d design the data model and API for a multi-tenant SaaS application where tenants can have different feature sets.”

This isn’t a trick question. There’s no single right answer. But it immediately separates candidates who’ve thought about tenant isolation, schema design tradeoffs (shared vs. siloed databases), feature flagging at the API layer, and authorization boundaries from candidates who’ve only worked on single-tenant systems and don’t know what they don’t know.

Follow up with: “Where does this design break down at scale?” Anyone can design for the happy path. You want the person who’s already thinking about what goes wrong at 10x load.
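A strong answer to the multi-tenant question usually lands on some variant of shared-schema tenancy with an entitlements table. Here's a minimal sketch of that shape, using sqlite3 so it runs anywhere; the table and feature names are invented for illustration, not a prescribed schema.

```python
import sqlite3

# Hypothetical shared-schema design a candidate might describe:
# every row is scoped by tenant_id, and feature access is a join
# against an entitlements table rather than branches in code.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tenants  (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE features (id INTEGER PRIMARY KEY, key TEXT UNIQUE NOT NULL);
    CREATE TABLE tenant_features (
        tenant_id  INTEGER NOT NULL REFERENCES tenants(id),
        feature_id INTEGER NOT NULL REFERENCES features(id),
        PRIMARY KEY (tenant_id, feature_id)
    );
    INSERT INTO tenants  VALUES (1, 'acme'), (2, 'globex');
    INSERT INTO features VALUES (1, 'sso'), (2, 'audit_log');
    INSERT INTO tenant_features VALUES (1, 1), (1, 2), (2, 1);
""")

def tenant_has_feature(tenant_id: int, feature_key: str) -> bool:
    """The authorization check the API layer would run per request."""
    row = conn.execute(
        """SELECT 1 FROM tenant_features tf
           JOIN features f ON f.id = tf.feature_id
           WHERE tf.tenant_id = ? AND f.key = ?""",
        (tenant_id, feature_key),
    ).fetchone()
    return row is not None

print(tenant_has_feature(1, "audit_log"))  # acme is entitled: True
print(tenant_has_feature(2, "audit_log"))  # globex is not: False
```

A candidate who reaches something like this can then be pushed on the follow-up: where does a shared schema break down (noisy-neighbor queries, per-tenant migrations, compliance-driven isolation), and when would they move a tenant to a siloed database instead?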

Other questions worth asking:

  • “Show me a pull request you’re proud of and one you’d do differently today.” Real artifacts beat hypotheticals every time.
  • “You inherit a codebase with no tests and a deployment that takes 45 minutes. Where do you start?” This tests prioritization instincts, not textbook knowledge.
  • “Describe a production incident you were involved in. What broke, what did you do, and what changed afterward?” Listen for whether they describe the system or just their own role.

DevOps / Cloud / Infrastructure Engineer

“Your CI/CD pipeline has been green for six months but deploys have been taking progressively longer. Last week a deploy took 47 minutes. Walk me through how you’d diagnose this.”

Strong candidates will ask clarifying questions first. What’s the build system? Monorepo or polyrepo? What’s in the pipeline besides build and test? Are artifact sizes growing? Weak candidates jump straight to “I’d add caching” without understanding what they’re caching.
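The diagnostic instinct you're listening for can be made concrete: compare each pipeline stage's recent runs against its own baseline before changing anything. This sketch uses invented stage names and durations purely to illustrate the method.

```python
from statistics import mean

# Hypothetical per-stage durations (minutes) from CI run history,
# oldest to newest. The numbers are made up; the point is to find
# WHICH stage is growing before reaching for "I'd add caching".
history = {
    "checkout":   [1, 1, 1, 1, 1, 1],
    "build":      [8, 9, 11, 14, 18, 22],   # growing steadily
    "unit_tests": [6, 6, 7, 6, 7, 7],
    "deploy":     [5, 5, 6, 5, 6, 6],
}

def growth_report(history, window=3):
    """Ratio of recent average to baseline average, per stage."""
    report = {}
    for stage, runs in history.items():
        baseline = mean(runs[:window])
        recent = mean(runs[-window:])
        report[stage] = round(recent / baseline, 2)
    return report

report = growth_report(history)
worst = max(report, key=report.get)
print(worst, report[worst])  # the stage to investigate first
```

In this toy data the build stage has roughly doubled while everything else is flat, which is the kind of targeted finding (growing artifacts, dependency bloat, uncached layers) a strong candidate works toward out loud.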

More questions that surface depth:

  • “Describe your approach to managing Terraform state across multiple teams and environments.” State management is where the real complexity lives in IaC. Textbook answers mention remote backends. Experienced answers mention state locking conflicts, import blocks, and the time they corrupted state and had to manually reconcile.
  • “A Kubernetes pod is in CrashLoopBackOff. Walk me through your debugging process, from the first alert to resolution.” Step-by-step observable methodology versus guessing.
  • “How do you decide what gets monitored versus what gets logged versus what gets traced?” This separates engineers who’ve built observability stacks from engineers who’ve inherited them.

Data Engineer

“You need to build a pipeline that ingests 50 million events per day from a Kafka topic, transforms them, and loads them into a data warehouse for analyst queries. Walk me through your design.”

Listen for: technology choices with reasoning (not just “I’d use Spark”), partitioning strategy, exactly-once vs. at-least-once delivery tradeoffs, how they’d handle late-arriving data, and what monitoring they’d put around it. Candidates who mention backfill strategy without being prompted have done this for real.
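The delivery-semantics tradeoff is easy to probe in code. Kafka consumers that commit offsets after processing get at-least-once delivery, so the warehouse load must be idempotent. A minimal sketch of that idea, with an invented table and sqlite3 standing in for the warehouse:

```python
import sqlite3

# At-least-once means replays happen: a consumer crash before the
# offset commit redelivers the batch. Making the event id the primary
# key and ignoring conflicts absorbs the replay without double-counting.
# Schema and field names are illustrative only.
wh = sqlite3.connect(":memory:")
wh.execute("CREATE TABLE events (event_id TEXT PRIMARY KEY, amount REAL)")

def load_batch(batch):
    wh.executemany(
        "INSERT INTO events VALUES (?, ?) ON CONFLICT(event_id) DO NOTHING",
        batch,
    )

batch = [("e1", 10.0), ("e2", 5.0)]
load_batch(batch)
load_batch(batch)  # simulated redelivery after a consumer crash

count = wh.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # still 2: the replay was absorbed
```

Candidates who reach for exactly-once guarantees from the broker alone, rather than idempotent sinks, usually haven't operated one of these pipelines through a real outage.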

  • “Write a SQL query that finds the top 5 customers by revenue in the last 90 days, excluding any customer who had a full refund in that period.” Then: “The query runs in 12 seconds on a 200 million row table. How do you optimize it?”
  • “What’s the difference between a slowly changing dimension Type 1 and Type 2, and when would you choose each?” Sounds basic. Roughly 40% of data engineer candidates we screen can’t answer it clearly.
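One shape a correct answer to the revenue question can take, run against a toy dataset so it's checkable. The schema is simplified (a refund flag instead of a refunds table, no date column) to keep the example self-contained; a real answer would also filter on `order_date >= date('now', '-90 days')`.

```python
import sqlite3

# Toy data: customer 2 has the highest revenue but also a full refund,
# so the query must exclude them entirely, not just subtract the refund.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (customer_id INT, amount REAL, refunded_full INT);
    INSERT INTO orders VALUES
        (1, 100, 0), (1, 250, 0),
        (2, 900, 0), (2,  50, 1),
        (3, 400, 0),
        (4,  80, 0);
""")

query = """
    SELECT customer_id, SUM(amount) AS revenue
    FROM orders
    WHERE customer_id NOT IN (
        SELECT customer_id FROM orders WHERE refunded_full = 1
    )
    GROUP BY customer_id
    ORDER BY revenue DESC
    LIMIT 5
"""
rows = db.execute(query).fetchall()
print(rows)  # customer 2 is excluded despite high revenue
```

For the 12-second follow-up, you're listening for answers grounded in the plan: a covering index on the filter and grouping columns, checking whether the subquery gets rewritten as a semi-join, and only then reaching for pre-aggregation.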

QA / SDET

“You’re joining a team with zero automated tests and a monthly release cycle that routinely slips. What do you do in your first 30 days?”

You’re not looking for “write a test framework.” You’re looking for someone who says they’d first understand what’s breaking, talk to the engineers about where the pain is, pick the highest-value test targets (probably the integration points, not the unit tests), and ship something small that catches a real regression within two weeks. Strategy first. Coverage second.

  • “Describe a bug that got past your testing. What did you learn?” Honest answers here separate senior QA engineers from everyone else.
  • “When do you stop testing?” This is a question most candidates have never been asked. The ones who think about risk tolerance and diminishing returns are the ones you want.
Software developer completing live coding assessment at dual-monitor workstation

What Good Answers Actually Sound Like

Listing questions is the easy part. Knowing what to listen for is where most hiring managers struggle. You’re not grading a test. You’re evaluating how someone thinks under ambiguity, and that requires knowing what signal actually looks like.

Process over answer. A candidate who arrives at a wrong answer but showed clear reasoning, asked good clarifying questions, identified their own assumptions, and course-corrected when you gave a hint is almost always a better hire than the candidate who memorized the right answer. We placed a senior engineer last year who bombed the system design portion, at least on paper. Got the capacity math wrong by an order of magnitude. But the hiring manager watched him catch his own error, recalculate, and then explain what the error would have caused in production. Hired. Still there. His manager told us at the six-month check-in that he’s the most reliable person on the team.

Specificity is the signal. When a candidate says “I’ve worked with distributed systems,” that tells you nothing. When they say “I maintained a Cassandra cluster with 140 nodes across three data centers, and we had a compaction storm in DC-East that took us offline for six hours because our tombstone TTLs were too aggressive,” that tells you everything. Vague answers mean vague experience. Press for specifics. Always.

Communication matters as much as correctness. Can the candidate explain their solution to someone who isn’t an engineer? Can they simplify without dumbing it down? This matters because senior engineers spend half their time translating technical constraints into business language for product managers and executives. If they can’t do it in an interview, they can’t do it in a sprint planning meeting.

Red flag: blame deflection. “The previous team’s architecture was bad” without any ownership of what they did about it. Every system has legacy problems. You want the person who improved things, not the person who complained about them.

Building a Technical Interview Scoring Rubric

If your interviewers are submitting feedback as freeform paragraphs, you don’t have an evaluation process. You have an opinion collection. Opinions are inconsistent, hard to calibrate across interviewers, and impossible to defend if a candidate challenges your decision.

Here’s a rubric framework we’ve seen work across hundreds of technical hiring debriefs.

| Dimension | 1 (No Hire) | 2 (Weak) | 3 (Meets Bar) | 4 (Strong) | 5 (Exceptional) |
| --- | --- | --- | --- | --- | --- |
| Technical Depth | Cannot solve basic problems at expected level | Solves with significant hints, gaps in fundamentals | Solid fundamentals, reaches working solution independently | Clean solution, identifies edge cases, optimizes without prompting | Teaches the interviewer something new |
| Problem Solving | Freezes or guesses randomly | Tries one approach, gets stuck, can’t pivot | Breaks problem down, works through it methodically | Considers multiple approaches, picks the right one with clear reasoning | Reframes the problem itself, finds a simpler solution others miss |
| Communication | Cannot articulate thought process | Explains only when prompted, unclear | Thinks aloud, explains decisions as they go | Proactively explains tradeoffs, adjusts explanation to audience | Could present this solution to a VP of Engineering and a junior dev equally well |
| System Thinking | Solves in isolation, no awareness of broader system | Mentions adjacent systems when asked | Considers impact on other services, data flows, failure modes | Designs with observability, rollback, and scale in mind from the start | Identifies second-order effects the team hadn’t considered |
| Collaboration | Dismissive, argues without listening | Accepts hints reluctantly, prefers to work alone | Responds well to hints, asks clarifying questions | Treats interview as a conversation, builds on interviewer’s suggestions | Makes the interviewer want to work with them |

Scoring rules that matter: a score of 3 across all dimensions is a hire. A single 1 in any dimension is a no-hire regardless of other scores. Two 2s require a calibration conversation, not an automatic rejection. And the 5 column exists to identify candidates you should fast-track before someone else hires them, not as the bar for a standard offer.
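Those rules are mechanical enough to encode directly, which is a good test of whether your rubric is actually a rubric. A sketch of our reading of them; note the single-2 case isn't specified above, so treating it as a calibration conversation is our assumption, not a stated rule.

```python
# Encodes the scoring rules from the rubric above. The function name
# and the single-2 fallback are ours, not part of any particular tool.
def decide(scores: dict) -> str:
    values = list(scores.values())
    if min(values) == 1:
        return "no-hire"        # a single 1 ends it, regardless of the rest
    if values.count(2) >= 2:
        return "calibration"    # two 2s force a conversation, not rejection
    if min(values) >= 3:
        return "hire"           # 3 across the board meets the bar
    return "calibration"        # one 2 among 3s: unspecified, so discuss

print(decide({"depth": 3, "problem": 3, "comm": 3, "system": 3, "collab": 3}))
print(decide({"depth": 5, "problem": 4, "comm": 1, "system": 4, "collab": 5}))
```

The second example is the one interviewers argue about: four strong scores and one 1 is still a no-hire, because that 1 is usually the dimension that surfaces six months after the start date.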

One more thing. Score immediately after the interview. Not at the end of the day. Not before the debrief meeting tomorrow. The research on this is unambiguous: recall degrades within hours, and delayed scoring introduces recency bias and halo effects. Write your scores while the conversation is still sharp.

Hiring team reviewing interview scorecards during post-interview debrief meeting

Live Coding vs. Take-Home Assignments: The Real Tradeoffs

This debate has been running for a decade and neither side has won because both formats have legitimate failure modes.

Live coding tells you how someone thinks in real time. You can watch them debug, ask them to explain decisions as they go, and gauge how they handle being stuck with someone watching. The downside is that it penalizes anxiety. Talented engineers who freeze under observation, and there are more of them than most hiring managers assume, will underperform their actual ability. You’re measuring composure as much as competence.

Take-home assignments let candidates work on their own schedule, produce higher-quality code, and reduce the anxiety variable. The 2024 CoderPad State of Tech Hiring survey found candidates rated take-homes highest among assessment types at 3.75 out of 5. But the dropout rate is real. Dropbox reported 20% of candidates never completed their take-home, and it was often the strongest candidates who bailed, the ones with three other offers who didn’t have six hours to donate to your homework assignment.

Then there’s the AI problem. With Copilot and ChatGPT, a take-home submission in 2026 might represent the candidate’s work, their AI assistant’s work, or some blend that’s impossible to untangle. At least with live coding you can see the thinking happen.

The approach we see working best right now: a short take-home (2-3 hours max, explicitly time-boxed) followed by a live code review session where the candidate walks through their own submission, explains tradeoffs, and extends it based on new requirements you introduce. You get the take-home’s reduced anxiety plus the live session’s observability. Completion rates stay high because the time commitment is reasonable.

Red Flags That Should Stop a Hiring Process

Not every red flag means no-hire. Some mean “dig deeper.” But a few should end the conversation.

Candidate can’t explain their own resume. Listed Kubernetes on the resume, can’t describe a pod. Listed “architected microservices migration,” can’t name which services or why the migration happened. This isn’t nerves. This is misrepresentation.

Answers sound rehearsed but don’t connect. The candidate delivers a perfect textbook answer on CAP theorem but can’t apply it to a concrete scenario you describe. They prepared for keywords, not for the job. We see this a lot with candidates who’ve spent two weeks on LeetCode and nothing else.

Blames every previous failure on someone else. One bad team, sure. Two, maybe. But if every project went wrong because of other people, either they attract dysfunction or they can’t see their own contribution to it. Neither is great.

Won’t say “I don’t know.” This one’s counterintuitive to some hiring managers. But a candidate who never admits uncertainty is either dishonest or has such a narrow frame of reference that they don’t recognize their own blind spots. Both are problems in a technical role where being wrong about something and not catching it ends up in production at 2 AM.

On the interviewer side, watch for these in your own process:

Your interviewers are asking gotcha questions. “What’s the time complexity of Java’s Arrays.sort for primitive arrays versus object arrays?” If the interviewer can’t explain why the answer matters for your actual codebase, it’s trivia, not evaluation.

Every interviewer asks their pet question regardless of the role. Your infrastructure team lead asks a React question because that’s what they know. Your frontend engineer asks about database indexing. Interviewers should be assigned to evaluate the dimensions they’re qualified to assess.

The debrief is a popularity contest. If your post-interview discussion starts with “I liked them” or “I didn’t get a good vibe,” you don’t have a debrief process, you have a vote. Require scores submitted before the meeting. Discuss the scores, not the vibes.

Hiring manager reviewing technical candidate materials and interview scores at desk

Reducing Bias in Technical Interviews

Structured process is the single most effective bias reduction tool in technical hiring. Not training. Not good intentions. Structure.

Research from the Behavioural Insights Team has shown structured interviews can reduce bias by up to 85% compared to unstructured approaches. The mechanism is simple: when every candidate answers the same questions, scored on the same rubric, by interviewers who submitted their ratings independently before any group discussion, there’s nowhere for bias to hide. When each interviewer asks whatever they feel like and rates candidates on “gut feel,” every cognitive bias humans carry walks straight into the decision.

Practical steps that actually move the needle:

Write your questions before you see the candidate’s resume. Not after. If you design questions after reading the resume, you’ll unconsciously anchor on the schools, companies, and keywords that triggered your existing preferences. Write the questions for the role. Then evaluate every candidate against the same set.

Use diverse interview panels. Not because it checks a box. Because it surfaces different evaluation perspectives. The engineering manager who’s been at the company for eight years evaluates differently than the senior engineer who joined six months ago. Both perspectives matter. One without the other produces blind spots.

Blind the resume during technical stages. Several companies we work with strip names, photos, schools, and company names from resumes before the technical interviewers see them. The interviewer gets a skills summary and relevant project descriptions. Nothing else. It feels extreme until you see how much the evaluation changes.
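The blinding step is simplest to implement as an allow-list: the technical panel receives only skill-relevant fields, and everything else is dropped by default so new resume fields can't leak through. A deliberately small sketch; the field names are invented, not a standard.

```python
# Allow-list approach to resume blinding: anything not explicitly
# permitted is stripped, so identity signals fail closed.
ALLOWED_FIELDS = {"skills", "projects", "years_experience"}

def blind(resume: dict) -> dict:
    return {k: v for k, v in resume.items() if k in ALLOWED_FIELDS}

full = {
    "name": "redacted-for-example",
    "school": "redacted-for-example",
    "companies": "redacted-for-example",
    "skills": ["Python", "Kafka"],
    "projects": ["ETL pipeline rewrite"],
    "years_experience": 7,
}
print(blind(full))  # only the three allowed fields survive
```

The allow-list direction matters: a deny-list ("strip name and school") silently passes through the next identifying field someone adds.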

Score independently, then discuss. Never let interviewers share feedback before scores are submitted. The first person to speak in a debrief anchors everyone else. Collect scores in writing. Then talk.

And for the legal side: the EEOC’s employer guidelines are clear on what you can’t ask. No questions about age, marital status, family planning, disability, religion, or national origin. In a technical interview, these rarely come up through direct questions, but they sneak in through small talk. “Where are you from originally?” before a whiteboard session. “Do you have kids? Just wondering about on-call availability.” Train your interviewers to keep pre-interview conversation neutral. Weather is fine. Weekend plans are fine. Anything touching a protected class is not.

Mistakes Hiring Managers Keep Making

We debrief with hiring managers after every placement. The same mistakes come up over and over. Here are the ones that cost the most.

Testing for knowledge instead of capability. “What port does PostgreSQL run on by default?” is a Google search, not an interview question. The candidate who doesn’t remember port 5432 but can design a replication strategy across availability zones is the one you want. Trivia tests select for memory. Problem-solving tests select for engineers.

Talking more than listening. We’ve sat in on interview panels where the hiring manager spoke for 35 of the 45 minutes. They left thinking the candidate was “a great fit.” They have no idea whether the candidate is a great fit because they never let the candidate demonstrate anything. The interviewer-to-candidate talk ratio should be roughly 20/80 during technical portions. You ask the question and set context. They do the rest.

Moving too slowly after a strong interview. This one kills more hires than bad questions do. A strong senior engineer in 2026 has multiple active processes. Our data shows that candidates who receive an offer within 5 business days of their final interview accept at nearly double the rate of candidates who wait 10+ days. You don’t need to be reckless. You need to have your decision-making process run in parallel with the interview schedule, not sequentially after it.

Skipping the calibration step. You have three interviewers. They all give a candidate a 4 out of 5. But interviewer A thinks a 4 means “solid hire, no concerns.” Interviewer B thinks a 4 means “strong hire, would fight to make this offer.” Interviewer C thinks a 4 means “good but I’ve seen better.” If your interviewers haven’t calibrated what the scores mean before they start scoring, the numbers are meaningless. Run a calibration session with your interview panel using anonymized past candidate examples before your next hiring cycle starts.

Questions Hiring Managers Ask Us About Technical Interviews

How many technical interview questions should I ask in a 45-minute session?

Three to five, maximum. Not fifteen. A 45-minute screen with one substantial coding or design problem and two follow-up questions gives you more signal than rapid-firing a dozen surface-level questions. Depth beats breadth in a screen. Save the breadth for the full technical loop.

Should I let candidates use AI tools like Copilot during live coding?

Depends on whether they’ll use them on the job. If your engineering team codes with Copilot daily, banning it in the interview tests a skill they’ll never need. Meta piloted AI-assisted coding interviews in late 2025, giving candidates access to GPT-4o, Claude, and Gemini during the problem. Their finding: the problems had to change, but the signal quality didn’t drop. If you go this route, focus your questions on system design and architectural reasoning where the AI is a tool, not the answer.

We keep losing candidates between the technical screen and the onsite. What’s going wrong?

Scheduling gap, almost certainly. If there’s more than 5 business days between your technical screen and the next round, candidates with active processes elsewhere are getting offers before you get to your second stage. Tighten the scheduling. Have your onsite interviewers block recurring weekly slots specifically for candidate interviews so availability isn’t the bottleneck every single time.

Do whiteboard interviews still work, or are they outdated?

They work for system design and architecture discussions. Drawing boxes and arrows on a whiteboard while explaining data flow is a legitimate skill that maps to real work. They don’t work for writing code. Nobody writes syntactically correct code on a whiteboard, and asking them to do it tests handwriting and memory, not engineering ability. Use a laptop with a shared screen for coding. Keep the whiteboard for design.

How do I interview for a role I don’t fully understand technically?

Hire a technical interviewer. Seriously. If you’re a VP of Engineering hiring a machine learning engineer and you haven’t done ML work yourself, you should not be evaluating the technical depth of their model architecture answers. Bring in someone who can, whether that’s a current ML engineer on your team, a trusted advisor, or a staffing partner with domain expertise who can help structure the technical evaluation. Your role in that interview is collaboration style, communication, and management fit. Stay in your lane and own that lane well.

What’s the single biggest thing I can do to improve our technical hiring starting this week?

Write down your interview questions and scoring criteria before your next interview. Not during. Before. Then use the same questions for every candidate interviewing for the same role. That single change, applied consistently, will produce more improvement in your hiring outcomes than any other adjustment you could make. The research backs this up. The data backs this up. Our placement data backs this up. Structured beats unstructured, every time, across every role type.
