
The conversation around artificial intelligence today is deafening. It is a constant, overwhelming rush of breathless releases and predictions on one side and catastrophic warnings on the other. It feels like every keynote, every news headline and every team meeting has been taken over by the subject of AI.
The problem with all this noise is that it forces engineering leadership into one of two camps: either you are the AI diehard who believes these tools will automate every developer out of a job by next Tuesday, or you are the AI skeptic who dismisses it all as hype that will fade and as a distraction from real work.
Neither of these positions is particularly helpful. As engineering leaders, our job is not to be a cheerleader for AI or a doomsayer about job security. Our job is to be strategically pragmatic. We have to find the narrow path between swallowing the hype and succumbing to the pessimism so we can focus our teams on building things that matter.
The hype cycle is a natural phenomenon in technology. I have seen it repeatedly in my career, from the early days of Agile adoption to the fever dream of blockchain. Every new technology arrives with a promise to fundamentally change how our teams work.
The Shiny New Tool Trap
The danger of embracing the hype is that you prioritize the tool over the team's needs. You adopt a new AI coding assistant or a massive language model simply because it is the technology everyone is talking about. Or worse, you adopt every new AI tool in the hopes of keeping up with the latest trends. This leads to wasted budget and time on internal "transformation" projects that are technologically impressive but ultimately fail to increase developer velocity or reduce friction.
When evaluating a new AI tool for your engineering workflow, I look for answers to questions like these:
Can this technology solve a core problem in our software development lifecycle today? Think about slow code reviews, outdated documentation, or excessive toil.
Is the value proposition clearer than the cost and complexity of integration, security and developer training?
Are we adopting this because our competitor is bragging about it or because our developers are actually asking for it?
Do we have a clear measurement for success that does not involve the word AI? Perhaps a reduction in average time to merge, or a faster time to triage.
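To make that last question concrete, a metric like average time to merge is easy to compute from pull request timestamps. Here is a minimal sketch in Python using hypothetical PR records rather than a real API; the idea is to capture a baseline before a pilot and compare after:

```python
from datetime import datetime, timedelta

def average_time_to_merge(prs):
    """Average time from PR creation to merge, counting merged PRs only."""
    durations = [
        pr["merged_at"] - pr["created_at"]
        for pr in prs
        if pr.get("merged_at") is not None
    ]
    if not durations:
        return None
    return sum(durations, timedelta()) / len(durations)

# Hypothetical sample: two merged PRs (6h and 24h) and one still open.
prs = [
    {"created_at": datetime(2024, 5, 1, 9, 0), "merged_at": datetime(2024, 5, 1, 15, 0)},
    {"created_at": datetime(2024, 5, 2, 10, 0), "merged_at": datetime(2024, 5, 3, 10, 0)},
    {"created_at": datetime(2024, 5, 4, 8, 0), "merged_at": None},
]

print(average_time_to_merge(prs))  # 15:00:00 (average of 6h and 24h)
```

In practice you would pull these timestamps from your source control system, but the point stands: the metric is about merges, not about AI.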
To be fair, figuring out the answers to these questions requires experimentation. There is no one-size-fits-all approach to adopting AI tools. Different teams have different workflows, pain points and cultures. What works for one team may not work for another. The key is to start small, measure impact and iterate based on real-world results.
If the only reason for doing something is to say that your engineers are using AI, you are falling victim to the hype. The focus must always be on the tangible improvement in development and operational outcomes, not the tool itself.
The Naysayer Trap
On the flip side, the danger is organizational inertia masked as caution. This often comes from a place of experience: leaders who have seen past "productivity boosts" come and go and assume this is just another fad.
This pessimism is dangerous because it leads to organizational stagnation. When a technology truly is a foundational shift, even an incremental one, rejecting it means falling behind your peers and your competition in recruiting and developer efficiency. The leader stuck in the past often becomes the one whose teams take longer to ship code and whose engineers feel they lack modern tools. This is the risk of being too cautious.
For those prone to pessimism, the challenge is reframing the conversation away from the technology itself and toward engineering capability.
What new capacity for innovation could we unlock if we significantly reduced time spent on writing boilerplate code?
How would our team's ability to focus on complex, novel problems change if we could automate a core, time-consuming task like writing unit tests?
Are there low-risk areas like internal wiki maintenance or ticket summarization that we could use to experiment and build internal competency?
What is the long-term cost to developer morale and retention of doing nothing while everyone else gains efficiency?
The Pragmatic Middle Ground
The way forward is not about belief. It is about application. It requires asking the right questions and adopting an experimental mindset with small groups of trusted engineers.
The pragmatic leader treats AI like any other powerful tool. They understand that while the technology is powerful, it is also messy and imperfect. Their responsibility is to define the boundary between where it adds value and where it introduces risk. They do not wait for the perfect solution to emerge. Instead, they focus on finding high-value, low-risk areas for initial deployment. They are more interested in a tangible 5% increase in code completion speed tomorrow than a promised 50x change in a decade.
We do not have to submit to the hype. We do not have to reject the possibility. We just have to do the hard work of looking past the noise to find where the technology meets an actual engineering need. That is the balance that matters. Good luck!
