Hiring for an AI pivot means staffing the technical and product roles that turn an AI announcement into a working product, usually starting with a head of AI or a senior ML engineer, then MLOps, data engineering, and an AI-literate product lead. Most pivots stall right there, at hiring. The Allbirds AI pivot, which involved selling the shoe business and rebranding as NewBird AI, is the loudest recent example on the board.

What actually happened with Allbirds
On April 15, Allbirds announced a $50 million convertible financing facility, sold its shoe business to American Exchange Group for $39 million, and filed to change its name to NewBird AI. The plan, per CNBC’s reporting, is to acquire high-performance AI compute hardware and lease it out under long-term agreements. Shares closed at $14.50 that day, up 582 percent from Tuesday’s close of $2.49, which sounds impressive until you remember the stock had fallen roughly 99 percent from its post-IPO peak, meaning the entire pop was off a very small base.
The pivot also includes a line item most headlines buried. Shareholders are being asked to strip the environmental conservation public benefit language out of the corporate charter. The wool-and-eucalyptus story Allbirds spent a decade telling is being quietly deleted so the new AI compute story can move in.
Short version. A legacy consumer brand sold its operating business, raised a $50M convertible, and is repositioning as an AI infrastructure play. The market paid for the pitch. Whether NewBird AI can staff what it just promised is a separate question, and the one nobody on the earnings call is going to ask directly.
Quick definition, since a lot of people have started Googling the term this week. An AI pivot is a strategic shift where a company exits or deemphasizes its prior business model and repositions around AI as its core product, service, or infrastructure. It usually requires new technical leadership, a near-total rebuild of the engineering team, and access to either proprietary data or compute capacity that the company didn’t previously own.
The hiring gap nobody mentions when a company announces AI
McKinsey’s 2025 State of AI report put a hard number on something we have been watching from the recruiting side for eighteen months. AI adoption sits at 88 percent of organizations surveyed. Only 23 percent of companies have actually scaled agentic AI across the enterprise. The rest are stuck in what McKinsey called pilot purgatory. A PoC that works. No production rollout. No ROI. Per the same analysis, the 6 percent of companies actually capturing meaningful financial returns are three times more likely to have redesigned their workflows around AI rather than sprinkling it onto existing ones.
That gap is a staffing gap. It is not a model gap.
The $547 billion number is worse. Of the $684 billion poured into AI in 2025, $547 billion, roughly 80 percent, failed to deliver measurable business value. You do not get a miss that big from pure technology risk. You get there because most AI programs are launched without the team they need to ship.
A few numbers anchor why hiring is the bottleneck. ManpowerGroup’s 2026 Talent Shortage Survey found AI skills were, for the first time, the hardest-to-fill capability in the world, and roughly 72 percent of employers told the survey that they still cannot find the technical talent their companies need. LinkedIn’s Economic Graph shows 1.3 million new AI-related jobs added in two years. The Bureau of Labor Statistics projects 317,700 annual openings in computer and IT occupations through 2034, and a meaningful share of those roles now require applied ML skills.
We have been in the middle of this as a staffing firm. A Series B client last quarter told us they needed to “stand up an AI team” in 60 days. What they meant, after two discovery calls, was a head of ML, two senior engineers, a data engineer, and a platform lead, on twelve weeks of real runway. They hired the head of ML in week nine. The rest of the team followed four months later. The product roadmap that the pivot had been announced around slipped into the next fiscal year. The board was not happy. The CEO understood why. The VP of engineering spent a week in our office.
For the record, we help companies hire the people inside that gap. That’s our day job. Our AI and ML engineer staffing page walks through what we see working by subspecialty. Biased source, obviously. Still useful.

The real talent math behind an AI transformation
Before any AI pivot plan survives a board meeting, somebody on the exec team needs to see the comp bands. Here are the current US ranges for machine learning engineer roles, pulled from three sources that use different methodologies, so the variance itself is the point.
| Source | 25th Percentile | Median / Average | 75th Percentile |
|---|---|---|---|
| Glassdoor (ML Engineer, US avg) | $129,181 | $160,751 | $202,628 |
| Levels.fyi (ML/AI SWE, total comp) | n/a | $245,000 | n/a |
| Levels.fyi (base only) | n/a | $190,000 | n/a |
| ZipRecruiter (ML Engineer, US) | n/a | $128,769 | n/a |
At the top of the market, the numbers get aggressive. Levels.fyi has Google ML engineer median total comp at $290,000. Apple sits at $359,000. Meta lands at $429,000. Amazon is $265,000. For sub-role breakdowns (applied ML vs research vs MLOps), our ML engineer salary guide goes deeper.
This is where most pivots start to look different on the spreadsheet than they did in the press release. A four-person AI team at big-tech-adjacent comp is roughly $1.4 million to $1.8 million in annual base plus equity load. That number also assumes you can find them, which, per ManpowerGroup, 72 percent of employers cannot.
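If you want to sanity-check that range yourself, here is a minimal back-of-the-envelope sketch. The per-role figures are our own placeholder assumptions, roughly in line with the Levels.fyi numbers above, not quotes from any single source.

```python
# Rough cost model for the first four AI hires. Per-role figures are
# illustrative assumptions (annual base plus equity load), not sourced quotes.
roles = {
    "head_of_ml":         (450_000, 550_000),
    "senior_ml_engineer": (350_000, 450_000),
    "mlops_engineer":     (300_000, 400_000),
    "data_engineer":      (280_000, 380_000),
}

low = sum(lo for lo, _ in roles.values())
high = sum(hi for _, hi in roles.values())

print(f"First four hires: ${low:,} to ${high:,} per year")
# -> First four hires: $1,380,000 to $1,780,000 per year,
#    before payroll taxes, benefits, recruiting fees, or the GPU bill.
```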
Two pricing notes from what we have been seeing on the ground. First, base-only comparisons are useless for senior ML candidates in 2026. They take the total comp conversation or they ghost. Second, if you are not a big tech brand, your premium is usually paid in equity and in clean ownership of a real problem. Not cash. We placed a senior ML engineer last fall who left a $380K total-comp Meta package for a $260K base at a Series A, because the startup gave her full ownership of a model she actually cared about and a clear shot at the first L7-equivalent role on the team. We also had a very well-funded AI startup lose a senior candidate cold because, quote, “there’s no data,” and he turned out to be exactly right: the team was training on a dataset their biggest customer hadn’t yet contractually committed to sharing with them.

The first five AI hires (in this order)
You cannot hire an AI team in parallel. The ordering matters. The first hire changes the caliber of the second. The second changes the third.
- Head of AI or ML. Not a director of engineering with an AI side interest. A technical leader who has shipped production ML systems at scale. They will set the architecture, the eval framework, and the hiring bar itself. Without this hire first, every subsequent hire is miscalibrated, because the people making the hiring decisions do not yet know what “good” looks like in their own domain.
- Senior ML engineer. Not mid-level. Not two juniors stacked in a trench coat. The senior ML hire anchors the modeling and research work and buys the head of ML time to lead instead of ship. This is usually the role where the head of ML spends their personal capital pulling in someone they have worked with before.
- ML platform or MLOps engineer. This is the role almost every pivot forgets for six months and then panics about in month seven. No MLOps, no reliable deploys. No reliable deploys, no revenue. MLOps is also a perfectly reasonable role to start on a contract engagement for the first six months if you are still standing up infrastructure and learning your own data shape.
- Data engineer. The best model in the world is downstream of your pipelines. If your AI pivot inherits a legacy product’s messy data lake, the data engineer frequently matters more than a third ML engineer would. This hire gets consistently under-leveled in comp conversations, which is one reason these searches drag.
- AI product manager. Controversial pick for position five. A lot of companies want this hire earlier, because they want someone to “own the roadmap.” Fine in theory. In practice, if you hire the PM before the technical leader, you get a roadmap that ignores eval, latency, and cost, and the team spends a year rebuilding it.
Where does direct hire fit here versus contract? Positions 1, 2, and 4 are direct-hire conversations almost every time. Position 3 can go either way, and we often recommend contract-to-hire specifically for MLOps during the zero-to-one phase. Position 5 depends on whether the company already has a product motion to extend or is genuinely building something new. If it’s genuinely new, delay.
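To make the sequencing point concrete, here is a minimal timeline sketch. The search durations and the 30-day overlap are assumptions for illustration, loosely consistent with the 70-to-110-day searches discussed in the questions further down, not a formula we run for clients.

```python
# Illustrative only: each search kicks off ~30 days before the prior hire signs,
# because the earlier hire sets the bar (and often sources) for the next one.
searches = [
    ("head_of_ml", 100),           # longest search; calibrates everything after it
    ("senior_ml_engineer", 80),    # often pulled from the head of ML's network
    ("mlops_engineer", 70),        # contract-to-hire can compress this one
    ("data_engineer", 75),
    ("ai_product_manager", 60),    # deliberately last, per the ordering above
]

OVERLAP_DAYS = 30
signed = 0
for role, duration in searches:
    kickoff = max(0, signed - OVERLAP_DAYS)
    signed = kickoff + duration
    print(f"{role:<20} kickoff day {kickoff:>3}, signed day {signed:>3}")

print(f"Team fully staffed around day {signed}, roughly {signed // 30} months in")
```

Under those assumptions the team is fully staffed somewhere around month eight or nine, which is why a pivot announced against a 60-day roadmap tends to slip a fiscal year.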

Why most AI pivots quietly die at the hiring stage
Gartner’s number is 30 percent. That is the share of generative AI projects they expected to be abandoned after proof-of-concept by the end of 2025. They also forecast that 40 percent of agentic AI projects will be canceled by 2027. Those are not technology failures in the strict sense. They are organizational failures with AI wrapped around them.
Three patterns we see most often when a pivot starts to slip on the hiring side.
The first is the senior ML engineer who never starts. Offer extended. It sat on the candidate’s desk for three weeks because the company insisted on a take-home evaluation at the offer stage, which senior ML candidates in 2026 simply will not do. The candidate picked up two counteroffers during the wait. The company lost the hire at week four. The search restarted. The deadline slipped.
The second is the MLOps hire that was never scoped. Two senior ML engineers and a data engineer got hired, everyone celebrated the org chart update, and then six months later nobody owned the model deploy pipeline. The models worked in notebooks. The product didn’t. That one is on the hiring plan, not the engineers.
The third one is less discussed and, honestly, the one I have been yelling about internally for the better part of a year. An AI product manager hired too early, shipping a roadmap before the team can build any of it. Six months in, the engineers are resentful, the PM is frustrated, and the CTO is politely rewriting the roadmap on a Sunday. Nobody did anything wrong individually. The sequence was wrong.
None of those three are model problems. They are sequencing problems. Sequencing is a hiring decision.
What hiring managers should actually do next week
If the Allbirds AI pivot has you asking your board the polite version of “should we be doing this too,” here is the short list before your next planning meeting.
Pull your current engineering org chart and circle the roles that are mislabeled. If you have a “senior engineer, AI” who has shipped zero production ML, that is a mislabel. Not a judgment on the person. A calibration issue that will bite you during the real hiring pass.
Write down the first five hires in order. Not five roles. Five named people. If you cannot name a credible head of ML candidate, the pivot is a press release, not a plan.
Model the comp honestly. $1.4 million to $1.8 million for the first four hires, fully loaded at big-tech-adjacent bands. Know that number before the board asks for it.
Then pick a staffing partner who has actually placed senior ML engineers. Not a generalist tech recruiter who searched “machine learning” in LinkedIn Recruiter this morning. When you are ready, talk to our recruiting team and we will tell you honestly whether your search is a 60-day search or a 140-day search.
Things people ask
What exactly does an “AI pivot” commit a company to hiring?
Four to eight net-new hires inside 12 months, roughly, if you are going beyond a single proof of concept. A technical leader, two to three ML engineers, an MLOps or platform engineer, a data engineer, and later an AI product manager and possibly a research lead depending on the domain. Most companies underestimate it by about half on the first pass. The ones that underestimate by a lot are usually the ones who already told the market they were pivoting.
Realistically, how long does a senior ML engineer search take in 2026?
70 to 110 days from kickoff to signed offer for a non-brand-name company. Brand names can close in 40. The variance is mostly about comp, equity story, and how tight the JD was before posting. If the JD says “AI/ML generalist,” expect 120-plus days and a lot of candidate churn along the way.
Contract or direct hire for the first AI hires?
Direct for the technical leader and the first senior ML engineer. Those roles own the architecture and the team, and that is not ownership you want sitting with a contractor. MLOps and data engineering can absolutely start on a contract or contract-to-hire path for the first six months, especially if the product definition is still moving. We have seen that work cleanly more than a dozen times this year.
Is the Allbirds AI pivot actually going to work?
Genuinely unclear. The stock pulled back after the 582 percent run and the $50 million convertible does not buy you a world-class ML research team plus the compute to train anything original. It buys you, optimistically, a head of AI, three senior engineers, about 18 months of runway, and a GPU bill that grows faster than anyone on the cap table expects. It could work. Most pivots like this don’t.
What is the biggest mistake companies make when they copy a move like this?
Announcing before hiring. The announcement compresses your hiring timeline, tells the market to benchmark you against companies three tiers richer in AI talent, and recruits your target candidates’ existing employers to start counter-offering them the day the job posts. I have watched at least three companies in the last 18 months announce an AI product and then spend nine months hiring the team to build it. The stock popped. The product slipped. Same pattern, different logo. Allbirds is the most photogenic version of this pattern we have seen this month, and based on the public comps trading right now it is very unlikely to be the last one we write about before the end of the year.
