AI coding assistants: what engineering leaders want, and what's holding them back
AI coding assistants are everywhere: 94% of the companies I surveyed have at least some teams actively using these tools. Yet despite widespread availability, adoption among individual developers remains surprisingly low: only around a third of companies have seen these tools adopted by a majority (>50%) of their developers. This gap got me wondering: are these tools delivering on the promises that engineering leaders expect?

I surveyed engineering leaders and Platform teams from dozens of tech companies, ranging from smaller startups to big tech. After diving deeper into these results and having 1:1 conversations with engineering leaders, clear patterns emerged about the benefits they were aiming for and what’s holding them back from seeing those results.
The benefits they’re aiming for
Velocity is king

This should NOT be surprising: When I asked engineering leaders what they hoped to gain from AI coding assistants, one answer stood head and shoulders above the rest: speed. Over 70% of respondents pointed to developer velocity and reduced time-to-market as their top priorities.
This tracks with what I hear every day in conversations. Shipping faster is a universal goal. AI coding assistants offer some big promises: cut down on boilerplate, unblock developers faster, and reduce cycle time without adding headcount.
Reducing toil and improving focus on “higher-level” work
While velocity dominated the responses, another (related) benefit showed up in about 34% of them: reducing toil. This means cutting down on tedious, repetitive work and letting engineers focus on higher-value or higher-level tasks. This goes hand in hand with speeding up developers but speaks to a goal of improving the overall developer experience while they’re at it.
What was super interesting here is that this benefit was seen predominantly among teams with higher AI adoption (i.e., with AI coding adoption > 50% of their developers). So, the companies with higher adoption—and as such, the ones more likely to have actually REALIZED the benefits they were looking for—were the ones seeing their developers focus on higher-level work.
It suggests a pattern: the deeper teams go, the more they begin to see improvements in developer experience, not just output speed.
Ok, so what’s holding back adoption of AI coding tools?
Everyone wants to speed up their developers and reduce toil. So why is adoption still so early? I asked these folks what some of the biggest barriers to adoption have been and what’s slowing them down.
It’s not the cost!
Cost is usually some sort of factor in adopting new tools, but with AI it’s a minor one. Only about 22% of respondents brought up cost as a concern.
In fact, many leaders I spoke to are actually letting their engineering teams choose between a set of AI tools, with a fair bit of redundancy. (There are, on average, ~3 coding assistants in use at most eng orgs at the same time.)
One SVP of Engineering I spoke to was actually slashing his dev-tool budget pretty heavily in general, but still shrugged off the price of these AI tools: “If they speed up my engineers, spending $20/month is nothing.”
Another VP of Engineering worried that today's prices are artificially low due to market competition and investor subsidies, and foresaw future price increases that could eventually become a more significant barrier.
Learning curve: the new way to code
A bit larger of a blocker: 28% of respondents mentioned the learning curve as slowing them down. This one was a bit ironic, as these tools should, in theory, be “intuitive,” given their natural-language interface.
But developers are used to coding in code… not English. And vibing, prompting, refining, and guiding an AI isn’t the same as typing out lines of Python. It takes time to adjust, and that friction slows adoption.
The real barriers: security, quality, and maintainability
In the end, the biggest things I heard that were slowing down adoption were:
- Security: 59%
- Code quality: 53%
- Maintainability: 38%++ (see note below)

Security concerns are important, but surmountable
While 59% of respondents mentioned security concerns (and 44% mentioned the somewhat-related IP concerns), these were often surmountable with the right controls in place. Many organizations have strict rules on which AI tools are approved for use, on turning training and memory off, and on what kinds of data can be shared with these tools.
Nevertheless, several folks still brought up wariness around new threat vectors, especially surrounding MCP (the Model Context Protocol). MCP is still a bit of a wild west, without a lot of guardrails around where MCP servers come from, what’s being run when they’re executed, or what local data might be getting exfiltrated.
Code quality concerns remain
Diving deeper into the 53% of respondents who cited code quality concerns, I saw more deeply rooted issues: AI tools generating incorrect code, or pulling in bad libraries and bad patterns.
These types of concerns frustrate developers and platform engineering teams alike. Developers are frustrated when AI coding assistants make basic mistakes or hallucinate. Platform teams are frustrated when AI-generated code imports old libraries they’re trying to deprecate.
One Staff Engineer I spoke to recalled GitHub Copilot repeatedly hallucinating parameters to GitHub’s own public CLI. Which, as you can imagine, was intensely frustrating, given both are GitHub products.
These quality issues speak to a real tradeoff: time saved with AI coding assistants versus time wasted going down rabbit holes, fixing hallucinations, or taking on unwanted technical debt from bad libraries.
Maintainability: a critical concern for those with low adoption
Maintainability concerns about AI-generated code were cited by only 38% of respondents, so at first blush they don’t seem to be the biggest issue.
However, taking a step back and focusing on just those companies with lower AI adoption rates (25% or less of their developers), a very interesting pattern emerged: maintainability concerns suddenly jumped to 67%, making them a major barrier for this group.
(Logically, focusing just on those companies with lower AI adoption rates gives us more signal on which factors have the most “slowing impact”.)
This distinction is important, and it suggests that maintainability concerns are an early-stage barrier. Teams dipping their toes in the water are especially cautious about long-term code health. They worry that AI-generated code might “work” today but lead to codebases that are difficult for their teams to understand, debug, and maintain over the long term.
Conclusion
Engineering organizations are buying AI coding assistants with the hopes of real, tangible benefits, especially related to velocity and developer experience. But the fears holding teams back are just as real.
In the next post in this series, I’ll go beyond diagnosis and into treatment. I’ll share actionable strategies from high-adoption teams, including how they trained developers, handled concerns, and made AI a regular part of their daily workflows.