How We Vet Candidates Before You See the Resume
Last updated: April 25, 2026 | By Devin Hornick
KORE1 runs every candidate through three layers before you see the resume: a four-vector AI scoring pass that grades skills, experience, industry, and title against the requisition; a 30-to-45-minute recruiter interview; and an optional skills assessment scoped to the role.
That stack is why a typical req of 60 to 200 applicants compresses down to four to eight submissions. The point is not volume reduction. It is that what reaches your desk has already been measured against the actual req, not just keyword-matched to a job description.
I’m Devin Hornick, partner at KORE1. “How would you actually vet these candidates” comes up on more discovery calls than any other question. Six times in the last forty days. The answer below is the version we walk through on those calls, plus the sample scorecard the hiring manager usually asks to see by minute fifteen of the conversation.
Bias up front. KORE1 charges a fee on placements through our IT staffing services and adjacent practices, so when I tell you that the way we screen catches things a public job board can’t, you should read it as a claim that benefits us right alongside any claim that benefits you, and both can be true in the same paragraph. The framework below works whether you call us or not.

Why “Just Send Resumes” Is the Wrong Default
Most clients arrive having tried the volume approach first. Post on LinkedIn. Post on Indeed. Ask the in-house team to forward whatever comes in. Two weeks later they’re looking at 140 resumes that all read like the JD bounced off a chatbot. Some are real. Most are not. The hiring manager spends six hours reading them and concludes the market is broken.
The market is not broken. The intake is.
A resume is a marketing document, written by a candidate who is trying to get an interview, edited by a recruiter at the candidate’s last firm, and then optimized for ATS keyword matching by whichever of the dozen LinkedIn-tutorial blogs the candidate happened to read on the train ride home. It tells you what the candidate wants you to think they did. It does not tell you what they actually built, who managed them, what the team size was, or whether they shipped the thing the bullet point claims. Pulling those signals out is work, and the work has to happen before submission. Not after.
The U.S. Bureau of Labor Statistics’ JOLTS data showed monthly hires running near 5.4 million through 2025, with the quits rate holding around 2.0%. The market is moving. It is just not moving in your favor when you are reading 140 resumes by hand.
The KORE1 Vetting Stack: Three Layers, Same Order Every Time
Layer one is automated. We score each candidate against the open req using a four-vector retrieval model: one vector for skills, one for experience and seniority arc, one for industry context, and one for title trajectory. The four vectors are weighted differently per role. For a senior Snowflake data engineer search, skills and industry weigh higher. For a CFO search, experience arc and prior titles weigh higher. The scoring runs in roughly thirty seconds per candidate against our internal corpus of about 1.2 million profiles.
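For the engineers reading along, here is a minimal sketch of the layer-one idea, not our production pipeline: embed the req and the candidate along each facet separately, score each facet on its own, and keep all four scores instead of collapsing them early. The function and field names are hypothetical, and the `embed` stub stands in for whatever embedding model backs the real index.

```python
import hashlib

import numpy as np

VECTORS = ("skills", "experience", "industry", "title")

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Stand-in for a real embedding model (hypothetical): a deterministic,
    # hash-seeded unit vector so the sketch runs without any model download.
    seed = int(hashlib.sha256(text.encode()).hexdigest()[:8], 16)
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

def vector_scores(req: dict, candidate: dict) -> dict:
    # Score each facet independently; keeping four scores instead of one
    # early composite is the whole point of the multi-vector approach.
    scores = {}
    for facet in VECTORS:
        sim = float(embed(req[facet]) @ embed(candidate[facet]))  # cosine (unit vectors)
        scores[facet] = round((sim + 1) / 2 * 100)                # map [-1, 1] to [0, 100]
    return scores

req = {
    "skills": "Terraform on AWS EKS, Kubernetes, GitHub Actions, Datadog",
    "experience": "8+ years, senior IC, growth-stage SaaS",
    "industry": "healthcare payer, healthcare IT",
    "title": "Senior DevOps Engineer",
}
candidate = {
    "skills": "Terraform 1.7 on AWS EKS, Kubernetes, GitHub Actions, Datadog",
    "experience": "8 years, all Series B+ SaaS",
    "industry": "FinTech and HealthTech",
    "title": "Senior DevOps Engineer, prior Lead",
}
print(vector_scores(req, candidate))  # four independent 0-100 scores, one per facet
```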
Layer two is the human read. A KORE1 recruiter takes the top of the AI ranking and runs a thirty-to-forty-five-minute call. Not a phone screen. A real conversation. We’re checking three things the model cannot check.
- Motivation. Why are you actually open to this role this quarter, and what would close the conversation early.
- Comp realism. Does the band the candidate wants match the band on the req, or are we ten percent apart in a way that wastes everyone’s afternoon.
- The narrative match. Does the candidate’s verbal account of their work match the resume, or does the story unravel under a follow-up question about who they reported to and what shipped.
That call also produces the line items in the scorecard your hiring manager will eventually see.
Layer three is optional. About 35% of our reqs trigger a skills assessment. Engineering roles that need a take-home or a live coding round. Workday consultant searches that benefit from a configuration walkthrough. Finance roles with a modeling exercise. We do not use canned platforms. The assessment is built off the real work the role will do in week one, graded by someone who has done that work. When the role does not need this layer, we skip it. Adding a take-home to a director-of-ops search is a fast way to lose your shortlist to a competitor who moved on Monday.

What the Multi-Vector RAG Actually Scores
The four vectors, what each one measures, and what changes when you separate them out:
| Vector | What It Measures | Example Signal |
|---|---|---|
| Skills | Granular technical and functional skills, weighted against the req’s required and nice-to-have lists | “Terraform 1.7 on AWS EKS” scores higher than a generic “infrastructure as code” |
| Experience | Years and seniority arc, including company-stage progression | Series A to Series C to public company reads differently than three Fortune 500s in a row |
| Industry | Domain context and vertical fluency | A Snowflake engineer at a healthcare payer scores closer to a healthcare IT req than the same engineer at AdTech |
| Title | Job titles, trajectory, and lateral signal | Three “Senior” titles in a row reads differently than a clean Senior, Staff, Principal arc |
Each vector returns a score from 0 to 100. The composite score is weighted per the req’s intake, not flat-averaged. A senior cloud engineer search might weight skills 0.5, experience 0.2, industry 0.1, title 0.2. A CFO search might weight experience 0.4, title 0.3, industry 0.2, skills 0.1.
The point of separating the vectors is that a single composite hides why a candidate fits. When we send a submission, you get all four scores plus the recruiter’s notes. If the title score is low but the skills score is high, that is a signal worth a conversation. A flat ranking would have buried it.
This is the part most hiring managers don’t expect. The model is not picking who you should hire. It is showing you, in four dimensions, why the candidate scored where they did, so you can decide whether the dimension that came up short actually matters for your team.
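To make the weighting concrete, here is the composite step as a sketch, assuming a straight weighted sum of the four 0-to-100 scores. The weight profiles mirror the two examples above; the names are illustrative.

```python
SENIOR_CLOUD_WEIGHTS = {"skills": 0.50, "experience": 0.20, "industry": 0.10, "title": 0.20}
CFO_WEIGHTS = {"skills": 0.10, "experience": 0.40, "industry": 0.20, "title": 0.30}

def composite(scores: dict, weights: dict) -> float:
    # Weighted sum of the four vector scores; the weights come from the req
    # intake and must cover the whole composite, hence the sum-to-1 check.
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "intake weights must sum to 1"
    return sum(weights[k] * scores[k] for k in weights)

scores = {"skills": 91, "experience": 78, "industry": 72, "title": 84}
print(composite(scores, SENIOR_CLOUD_WEIGHTS))  # ~85.1 with the skills-heavy weighting
print(composite(scores, CFO_WEIGHTS))           # ~79.9 for the same person, experience-heavy
```

Same four scores, two different composites. That is the per-req weighting doing its work: the CFO profile discounts raw skills overlap and pays up for the experience arc.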
A Sample Candidate Scorecard
Here is a slightly anonymized version of what reaches your inbox. A real submission for a Senior DevOps Engineer search out of Irvine in March.
| Field | Value |
|---|---|
| Role | Senior DevOps Engineer, hybrid Irvine, 2 days/week onsite |
| Candidate | “M.R.” |
| Skills score | 91 / 100 (Terraform, Kubernetes, AWS EKS, GitHub Actions, Datadog) |
| Experience score | 78 / 100 (8 years, all Series B+ SaaS) |
| Industry score | 72 / 100 (FinTech and HealthTech background, no current healthcare payer experience) |
| Title score | 84 / 100 (Senior, Senior, Lead trajectory) |
| Composite | 84 / 100 (weights: skills 0.45, experience 0.25, industry 0.10, title 0.20) |
| Comp expectation | $185K base, open on equity |
| Notice period | 3 weeks |
| Why open | Burned out from on-call. Wants smaller team. Has interviewed at one other company, declined a Staff offer at $200K because the on-call structure was the same. |
| Recruiter notes | Strong signal in the call. Walked through a 2024 incident at a fintech client where his on-call rotation was rewritten after a 4am page took 90 minutes to triage. Owns the fix and the failure honestly. Wife and two kids in Anaheim, hybrid Irvine commute is the actual constraint. |
The scorecard produces a useful first interview because the hiring manager walks in already knowing where to push (the industry gap, the soft 72), where to go shallow (the technical screen can be lighter than usual, since the skills score sits at 91), and what the candidate’s actual constraints look like in concrete terms (Anaheim, two kids; the hybrid Irvine commute is the constraint, not equity). The whole thing compresses the first interview by about twenty minutes on average.
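For the skeptical reader, the composite on the card reconciles with the weighted-sum sketch earlier: 0.45 × 91 + 0.25 × 78 + 0.10 × 72 + 0.20 × 84 = 40.95 + 19.50 + 7.20 + 16.80 = 84.45, which rounds to the 84 shown.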

What This Catches That a Job Board Doesn’t
A job board catches keyword matches. It does not catch:
- The candidate who has Kubernetes on the resume but only ran kubectl commands a senior gave him. Layer two catches this in eight minutes.
- Comp expectations 30% above the band. The model can’t know what the candidate wants. The recruiter can.
- The candidate whose last three jobs lasted under nine months. Not a disqualifier on its own. A signal worth reading in a conversation.
- Whether the candidate is actually leaving. About 12% of “active” applicants we screen are using your interview as leverage in a counteroffer at their current company.
- The narrative gaps. A two-year resume gap with a generic “Career Sabbatical” line in 2023 means something. The honest answer (a parent’s terminal illness, a startup that died at Series A, a divorce and a move across three time zones) matters and reframes how a hiring manager should read the rest of the resume. The model does not ask. The recruiter does.
That last one is the underrated half of vetting. The resume tells you what someone did. The recruiter call tells you why they did it, what they’re walking toward, and whether the story they tell about it on a Tuesday afternoon to a stranger holds up under three or four follow-up questions about the parts of it that the resume happened to leave out.
When We Skip a Step, and Why
We do not apply the full stack to every req. Three cases where we adjust:
The retainer search. If a client engages us on a retained basis for a single critical hire such as a CFO, a head of engineering, a controller, or a VP of operations, the AI scoring still runs but layer two stretches to sixty or ninety minutes, sometimes split across two calls a week apart so we can probe what changes between the first conversation and the second. Layer three is replaced with a written case study and a structured reference workflow. Different shape, same goal.
The volume req. A light industrial warehouse search for twelve forklift operators in Riverside does not need a four-vector model. It needs availability, certifications, shift fit, and a phone screen that actually ran. We adjust.
The repeat hire. If we placed someone two years ago into a role that is reopening, we already have the scorecard and the references on file. We do not redo what the prior file already proved.
How Long the Stack Takes
Layer one runs same-day for most reqs. Layer two takes 24 to 72 hours, depending on how fast candidates return calls. Layer three, when used, adds another 3 to 7 days. Across our last 200 placements, that stack put the median time-to-first-submission at 6 calendar days, with senior cloud and security searches running 9 to 12 because the available pool is genuinely thinner and the bar the hiring manager will eventually want to meet is higher.
Time-to-hire end-to-end averages 17 days for IT roles. That number includes all three layers running on every candidate that reached you. It is not faster because we cut corners. It is faster because the corners get cut on the wrong candidates earlier in the funnel, where it costs nothing.
For context, an MIT Sloan Management Review piece on hiring throughput in 2024 noted that the average corporate role takes closer to 44 days from posting to offer accepted, per SHRM benchmarking. We are not running 17 days because we are smarter than SHRM’s median respondents. We are running 17 days because the screening work happens upstream.
Where This Plugs Into the Engagement Model
The stack runs the same whether the engagement is direct hire staffing or contract staffing. The only difference is layer three weighting. Direct hire searches lean more on the skills assessment. Contract searches weight comp realism and notice period more heavily, because a two-week start beats a perfect score that cannot start for six weeks.
For specialized work, we route the same scorecard through the relevant practice. A Snowflake engineer flows through our data engineering staffing desk. A controller goes through finance consulting. The four vectors and the recruiter call do not change. The reviewer does.
Common Questions
How is this different from what an internal recruiter does?
In philosophy, identical. In throughput, substantially different. An internal recruiter at a 200-person company is running 8 to 12 reqs simultaneously across multiple departments, and the depth of any single screening tracks the time available. We are running one. The screening goes deeper because the math allows it.
Do you actually look at every applicant, or just the top of the AI ranking?
The top 25 to 40 by composite score get a manual recruiter touch. The rest are scored, tagged, and added to the long-term talent pool. About 18% of our placements come from candidates we first sourced for an unrelated req six or more months earlier.
What if your AI scoring misses someone the hiring manager would have liked?
It does sometimes. The model is trained on past submissions and accepted hires, so it carries the biases of past hiring decisions. We audit the misses about once a quarter. When a candidate the model deprioritized turns out to be a hiring-manager favorite, the weighting on similar searches gets adjusted. That feedback loop is part of why the model performs better in 2026 than it did when we deployed it.
Can we see the full scorecard or only your summary?
The full scorecard is the deliverable. Every submission comes with all four vector scores, the recruiter notes, the comp data, the reference status, and a flag if the candidate is actively interviewing elsewhere. We do not filter the scorecard down to a summary. The scorecard is the work.
How does this handle confidential or executive searches?
The candidate’s name and current employer are masked in the scorecard until they consent to a formal submission. Layer two still runs. Layer three usually does not. The model still scores, but the corpus is restricted to candidates who have explicitly opted into a confidential search workflow.
What does all this cost?
Built into our placement fee. There is no separate AI scoring charge, no per-screen fee, no add-on for the skills assessment. The vetting stack is the product. The fee is paid on the hire.
The Honest Summary
Most clients tell us they expected the AI part to be the differentiator. It is not. The AI scoring runs in thirty seconds. The thirty-to-forty-five-minute recruiter call is what catches the candidate who would have wasted your hiring manager’s afternoon. Layer one filters volume. Layer two builds signal. Layer three confirms it.
If you want to see the scorecard format applied to a real role on your side of the table, talk to our team. We will walk you through the last submission we sent on a similar req, with the candidate’s name redacted.
