Engineering was never just about writing code. It was always about building things that work — and knowing how to fix them when they don’t.

There’s a debate happening in engineering circles right now, and it mostly misses the point. Some argue that AI will replace software engineers. Others argue that engineers will always be needed because AI can’t “truly” reason. Both sides are arguing about the wrong thing.
The more interesting question isn’t whether engineers will be replaced. It’s about what engineering will look like — and what it will mean to be good at it.
At its core, engineering has always been two things: knowing how to build, and knowing how to fix. Not one or the other. Both. A developer who can write clean code but can’t diagnose a cascading failure at 2am isn’t an engineer — they’re a coder. An architect who can design elegant systems but can’t implement anything isn’t an engineer — they’re a theorist. The engineering mindset is the synthesis: build with enough understanding that when things break, you can reason your way to the root cause.
That fundamental doesn’t change. What changes is the medium. And right now, the medium is shifting faster than most people’s mental models can keep up.
The Evolution of Development Workflows
To understand where we’re going, it helps to be precise about where we’ve been and where we are.
The Past: High-Touch, Synchronous Collaboration
Traditional software development was a deeply synchronous process. Development teams were simultaneously engaged in both the creation and verification of every artifact — requirements, designs, code, tests, infrastructure. Humans were present at every step: writing the code, reviewing the code, writing the tests, reviewing the tests, approving the deployment.
The value engineers brought was irreplaceable because everything required human cognitive engagement. The bottleneck was human attention and capacity. You couldn’t go faster than the team could think, write, and review.
This created a craft culture around software development — deeply skilled individuals whose personal knowledge and judgment were the primary assets of the organization. That culture still shapes how most engineering teams think about their work today.
The Present: AI-Driven Creation with Medium Human Involvement
We’re now in a transitional phase. Development teams are synchronously engaged in AI-driven creation and AI-assisted verification. Code generation is trivially fast. Documentation, tests, infrastructure templates — all of these can be produced at a rate that exceeds any individual’s ability to consume and verify.
But the process itself is still largely synchronous. Engineers sit alongside AI, reviewing outputs, providing direction, iterating on prompts, catching errors. The human is still in the loop for most production-quality work. As I’ve explored in The New Asymmetry, the generation velocity has already outpaced verification capacity — but verification is still primarily a human activity.
This is where most organizations are right now. Human involvement has decreased from high to medium, but the workflow model hasn’t fundamentally changed. People are working faster with AI, not working differently.
The Future: Asynchronous AI Orchestration
The near future looks structurally different. Development teams will asynchronously build and manage quality gates, standards, and guardrails for high-autonomy AI agents to develop software with minimal moment-to-moment human intervention.
Think about what that means in practice. An engineer doesn’t write code directly — they define the constraints, quality bars, and acceptance criteria within which an AI agent operates autonomously. They set up the verification pipelines that catch deviations before they compound. They review summaries of what was built, flag edge cases that escaped the guardrails, and evolve the system to handle them better next time.
This is the shift from software engineer to AI workflow conductor. Just as an orchestra conductor doesn’t play every instrument but must understand each section, coordinate their interaction, and retain strategic control of the overall performance — the AI workflow conductor defines constraints and quality bars, ensures the different AI agents work well together, and maintains strategic ownership of what gets built and why. The primary skill isn’t code authorship — it’s directing and coordinating AI-driven development with enough depth to recognize when something is off and course-correct before it compounds.
The engineers who thrive in this environment won’t be the ones who are best at writing code. They’ll be the ones who understand complex systems deeply enough to define what good looks like, build the mechanisms to verify it, and debug when those mechanisms fail.
The Convergence of Engineering Roles
As AI redistributes cognitive work across the development lifecycle, role boundaries are blurring in ways that will reshape how organizations think about specialization.
Architects and Developers: The Same Outcome, Different Tools
The traditional separation between software architects and developers has always been somewhat artificial — a pragmatic response to cognitive limits, not a fundamental distinction in function. Architects made big decisions because they had the time and context that individual developers, buried in implementation, often didn’t. Developers implemented because the execution complexity required deep focus that architects, coordinating across multiple systems, rarely had available.
AI changes both constraints simultaneously. A developer with AI assistance can engage with architectural concerns in a fraction of the time — generating options, evaluating tradeoffs, prototyping approaches quickly enough to inform design decisions. An architect with AI can prototype and validate design decisions at an implementation level that would previously have required weeks of developer work.
The functional outcome — software systems with sound architecture and clean implementation — is the same. The tools enabling that outcome are increasingly available to both roles. The lines between “the person who decides how it should be built” and “the person who builds it” will continue to blur.
This doesn’t mean the distinction disappears entirely. Breadth of context, system-wide reasoning, and technical leadership remain valuable. But the idea that these capabilities only exist in people with “Architect” in their title is increasingly outdated.
Product and Development: A Gradual, Partial Convergence
The relationship between product and development is more nuanced. I don’t think we’re heading toward a world where the distinction collapses. The domains are genuinely different, and the depth of expertise required in each remains substantial.
What’s changing is the degree to which competence in one domain requires engaging with the other. A product manager who can prototype a working application in days has a fundamentally different relationship with technical constraints than one who can’t. A developer who understands user behavior and business outcomes builds different software than one who treats requirements as external inputs to be executed without judgment.
AI has made both capabilities more accessible. Prototyping tools and coding assistants lower the technical barrier for product people. AI-assisted user research and outcome analysis lower the analytical barrier for developers. The result is a gradual, partial convergence — not a merger, but a meaningful extension of each role into the other’s territory.
The practical implication: you still need a primary domain. You can’t be a depth-zero generalist in a world of increasingly capable AI tools and expect to contribute meaningfully. But roles and responsibilities will slowly extend into adjacent territory, and the engineers who cultivate some product intuition and the product people who cultivate some technical depth will have disproportionate leverage.
The Specialist Generalist: T-Shaped to U-Shaped
The idea of a “T-shaped” professional — deep in one area, broad across others — has been the hiring and development model for engineering organizations for decades. The depth of the vertical bar ensures real expertise. The breadth of the horizontal bar enables collaboration and communication.
AI significantly boosts the rate at which people can develop both kinds of knowledge. The horizontal bar gets wider because AI can help you rapidly acquire working knowledge of adjacent domains. And crucially, AI can help you develop meaningful competence in a second area of depth — something that previously required years of deliberate practice.
This suggests a transition from T-shaped to something closer to a U-shaped knowledge profile: deep expertise in a primary domain, meaningful competence in a secondary domain, and broad awareness across the rest. Not because people are suddenly smarter, but because the learning resources available to an engineer with AI assistance are incomparably richer than a textbook, a mentor, or a training course.
This transition won’t happen automatically. It requires deliberate investment and intellectual curiosity. But for engineers who are willing to invest in it, the range of what’s achievable has expanded significantly.
Why Engineering Maturity Still Matters
Here’s the uncomfortable truth that often gets lost in the excitement about AI: poor engineering maturity is still the bottleneck, even with AI.
I’ve written about how your codebase isn’t ready for AI — the structural and documentation deficiencies that hamper AI-assisted development. But the issue runs deeper than codebase hygiene.
Engineering maturity encompasses practices, processes, and culture: how teams approach testing, how systems are designed for observability, how changes are validated before deployment, how technical debt is managed and prioritized, how knowledge is documented and shared. These practices determine whether AI amplifies good engineering or accelerates bad outcomes.
The irony of the AI era is this: code is getting better, but applications are getting worse. AI tools produce syntactically cleaner, better-documented, more consistent code than most developers write under time pressure. But clean code inside a poorly designed system doesn’t make the system better. It makes the system more polished on the surface while the underlying problems — unclear domain boundaries, tangled dependencies, missing observability, undocumented decisions — compound unchecked.
Teams with low engineering maturity use AI to go faster. Faster in the wrong direction is still wrong. The prototype trap is a vivid example: AI makes it trivially easy to build something that works for demonstration purposes and fails catastrophically in production. The trap catches teams that lack the engineering discipline to distinguish between validated-idea quality and production quality.
Redefining Productivity
This brings up a question that more engineering leaders need to ask directly: what does productivity actually mean in the AI era?
The simplest metrics — velocity, story points, lines of code shipped — become actively misleading when AI can generate large volumes of code quickly. A team measuring productivity by output can look extremely productive while accumulating technical debt that will slow them down severely in six months.
More useful definitions of productivity focus on long-term maintainability: how much does the cost of changing this system increase over time? How much of the team’s capacity is consumed by maintenance and incident response versus new capability delivery? How long does it take to onboard a new engineer to productive contribution?
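The capacity question above lends itself to a simple measurement. A minimal sketch, assuming work items are logged with a label and an effort estimate (the label taxonomy here is hypothetical — a real one would come from the team’s own tracker):

```python
from collections import Counter

# Hypothetical labels that count as maintenance rather than new capability.
MAINTENANCE = {"bugfix", "incident", "dependency-upgrade", "tech-debt"}

def capacity_split(work_items):
    """Return the fraction of logged hours consumed by maintenance-type work.

    `work_items` is a list of (label, hours) pairs. A rising fraction over
    successive periods is a rough proxy for the growing cost of change.
    """
    totals = Counter()
    for label, hours in work_items:
        bucket = "maintenance" if label in MAINTENANCE else "new_capability"
        totals[bucket] += hours
    total = sum(totals.values())
    return totals["maintenance"] / total if total else 0.0

# Example: 30h of incidents and bugfixes against 50h of feature work
items = [("incident", 12), ("bugfix", 18), ("feature", 40), ("feature", 10)]
# capacity_split(items) -> 0.375
```

The number itself matters less than its trend: a team whose maintenance fraction climbs quarter over quarter is paying compound interest on debt, however fast it appears to ship.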
Alternatively: how effectively can the team iterate on validated ideas? This isn’t just about speed. Iteration must deliver quality and robustness that matches the business complexity of the system being built. A consumer-facing financial platform has fundamentally different correctness and reliability requirements than an internal tooling prototype — and a team’s engineering practices need to be calibrated accordingly. Shipping quickly into the wrong level of quality isn’t productivity; it’s deferred risk accumulation.
Both dimensions — lower long-term maintenance cost and effective iteration that matches business complexity — require engineering maturity. AI accelerates the iteration cycle dramatically for teams with that maturity. For teams without it, AI primarily accelerates the accumulation of things that are hard to maintain, hard to change, and mismatched to the robustness their business context demands.
Advice to Young Engineers
If you’re early in your engineering career, the signals from the AI era can be disorienting. Old certainties about what skills matter are being disrupted. Entire categories of work are being automated. Career paths that were clear five years ago are murkier today.
Here’s what I think actually matters, based on how the field is developing.
Think About Your Value Proposition — Ownership Over Capability
The most important mental shift you can make is to stop thinking about your value in terms of what you can do and start thinking about it in terms of what you can own.
“I can write React components” is a capability. Capabilities can be commoditized. AI can write React components. Better AI will write better ones. Defining your professional identity around a capability is building on sand.
“I can take responsibility for a production system and ensure it stays healthy, scalable, and evolvable” is ownership. Ownership requires judgment, accountability, and systems thinking that isn’t easily automated — because it requires understanding what the system is for, not just how it works.
The tools will always evolve. Engineers who define themselves as users of specific tools get disrupted when tools change. Engineers who define themselves as builders of systems find that the tools enabling them get better over time. Elevate yourself from a user of tools to a builder of systems. That shift in identity is more durable than any specific technical skill.
This aligns with what I’ve explored in The Identity Anchor — except the advice goes in the opposite direction. Where established engineers are anchored by existing role identities that limit transformation, young engineers have the opportunity to form their professional identity around system ownership from the start, before the old identity patterns calcify.
Understand Real Challenges from Complex Systems
There’s a specific kind of learning that you can only get from exposure to complex, production systems under real pressure — and it’s irreplaceable.
Abstract knowledge of distributed systems is not the same as having debugged a cascading failure in a distributed system at 2am while customers are affected. Understanding CAP theorem from a textbook is not the same as having made a real consistency/availability tradeoff under production constraints. Reading about technical debt is not the same as having lived with a system where it’s accumulated to the point of paralysis.
This experiential knowledge puts everything else into perspective. It tells you which solutions are real and which are theoretical. It gives you the intuition to recognize when something is more complex than it looks. It makes you a better evaluator of AI-generated output — because you’ve seen what production failure looks like and you know what to look for.
Seek out the complex, messy, high-stakes work early. Don’t optimize for comfortable projects where everything goes smoothly. The challenging systems, the legacy codebases, the incidents — these are where the most durable learning happens.
Build Strong Fundamentals, Then Business Acumen
In the short term, the combination that matters most is strong fundamentals in both computer science and AI, then business acumen.
The CS fundamentals — data structures, algorithms, systems design, distributed computing — remain essential. Not because you’ll implement a binary tree from scratch in your day job, but because they give you the mental models to reason about complexity, performance, and correctness at a fundamental level. AI doesn’t replace this reasoning; it assumes you can evaluate its outputs against it.
AI fundamentals are now equally essential: how large language models work, their failure modes and limitations, prompt design and context management, agent frameworks, and evaluation strategies. Not as a user of AI tools, but as an engineer who can build systems that incorporate AI reliably. The gap between engineers who understand AI’s internals and those who treat it as a magic black box will only widen.
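One of the evaluation strategies mentioned above can be sketched in a few lines. This is a minimal, illustrative harness — `stub_model` stands in for a real LLM call, and the cases and checks are invented for the example:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    check: Callable[[str], bool]  # property the output must satisfy

def evaluate(model: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Return the pass rate of `model` over `cases`.

    Checking properties of the output, rather than exact strings, keeps
    the evaluation robust to the non-determinism of generative models.
    """
    passed = sum(1 for c in cases if c.check(model(c.prompt)))
    return passed / len(cases) if cases else 0.0

# Stub standing in for a real model call, so the sketch is self-contained.
def stub_model(prompt: str) -> str:
    return "4" if "2 + 2" in prompt else "unknown"

cases = [
    EvalCase("What is 2 + 2?", lambda out: "4" in out),
    EvalCase("Capital of France?", lambda out: "Paris" in out),
]
# evaluate(stub_model, cases) -> 0.5
```

The engineering judgment lives in choosing the cases and the checks — which is exactly the kind of work that treating AI as a magic black box never prepares you for.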
Then, over time, invest in business acumen. Understand the domains you’re building for. Learn to read a business model. Develop intuition for which technical decisions have business consequences and which don’t. This is what converges product and engineering capabilities in a useful way — not becoming a product manager, but developing enough business context to make better technical decisions and communicate their value more clearly.
The order matters. You can’t build good business-aware systems if you don’t have the technical foundations to build good systems at all. But engineers who stay purely technical in an era where AI is handling more of the technical execution will find their leverage diminishing.
The Conductor’s Mindset
The transition from code writer to AI workflow conductor isn’t just a change in job description. It’s a change in how you think about your work.
A code writer asks: “How do I implement this?” An AI workflow conductor asks: “What constraints, standards, and verification mechanisms do I need to put in place for this to be built correctly? How do I ensure the different parts of this system work well together? What does a good performance look like, and how will I know when it’s going wrong?” The first question focuses on execution. The second focuses on the system within which execution happens — its coherence, its quality, and its strategic direction.
As organizational adaptation to AI progresses, the engineers who thrive will be those who have made this shift — who can design the guardrails, define the quality gates, and build the feedback mechanisms that allow AI to deliver reliably without requiring constant hand-holding.
This requires exactly what’s always made great engineers: deep understanding of complex systems, the judgment to know what matters, and the accountability to own outcomes. The medium is new. The craft isn’t.
Build the systems. Own the outcomes. That’s what it means to be an engineer.