The Identity Anchor: What's Really Slowing AI Adoption

Traditional roles create invisible ceilings on AI adoption

Organizations are racing to adopt AI. New models, better tooling, expanded context windows—the technical capabilities improve by the month. Engineering leaders are investing heavily in coding assistants, experimenting with AI agents, and exploring how large language models can accelerate their teams.

But here’s what I’ve observed across dozens of enterprise transformations: technical adoption is outpacing organizational adaptation by a wide margin. Teams are bolting AI capabilities onto unchanged processes, unchanged structures, and most critically—unchanged role definitions.

The result? Incremental improvements instead of transformation. And the root cause isn’t the technology. It’s the job titles.

The roles we use today—Developer, QA Engineer, Business Analyst, Solutions Architect—were designed for a different era. They emerged from human-only collaboration models, optimized for managing cognitive limits and communication overhead between specialists. These roles do not transfer to AI-native ways of building complex software systems, and the failure to redefine them creates two significant problems that hold organizations back.

The Origin of Current Roles

Before we examine what’s broken, it’s worth understanding how we got here.

Software development roles evolved through decades of process refinement. In the waterfall era, we created distinct phases—requirements, design, implementation, testing—and assigned specialists to each. The division of labor made sense: complex systems required deep expertise, and humans could only hold so much context at once.

Agile methodologies blurred some boundaries, encouraging cross-functional teams and shared ownership. DevOps further eroded the wall between development and operations. But the fundamental role structure persisted. We still have developers who write code, QA engineers who test it, business analysts who define requirements, and architects who design systems.

These roles were optimized for handoffs. A BA gathers requirements and hands them to developers. Developers write code and hand it to QA. QA validates and hands it to operations. Each specialist goes deep in their domain, and the handoff model manages the coordination overhead.

This made sense when humans were the only actors in the system. But AI changes the equation entirely. AI doesn’t need handoffs—it can participate continuously across the entire lifecycle. AI doesn’t have the same cognitive limits—it can hold vastly more context than any individual. The collaboration model that justified our current roles no longer applies.

Yet we keep the roles anyway.

Problem #1: Identity Anchoring

Here’s the first problem: people define their professional identity by their job title.

“I’m a developer.” “I’m a QA engineer.” “I’m a solutions architect.” These aren’t just descriptions of what someone does—they’re statements of who someone is. Years of career development, skill building, and professional pride get wrapped up in that title. It becomes an anchor.

This anchoring effect existed long before AI. Job titles don’t just define what people do—they define who they talk to, what meetings they attend, and what artifacts they produce. BAs write requirements documents. Developers write code. QA engineers write test plans. Architects write design documents. Each role creates its own artifacts, stored in its own systems, following its own conventions.

The silos aren’t just organizational—they’re materialized. Look at any enterprise and you’ll find the evidence: requirements in Confluence, code in GitHub, test cases in Jira or specialized QA tools, architecture diagrams in separate repositories. The division of labor becomes a division of information. Handoffs aren’t just process steps; they’re translations between different artifact formats, different tools, different mental models.

This fragmentation was always inefficient, but it was manageable when humans were the only actors. We built processes to bridge the gaps—review meetings, documentation standards, integration testing. The overhead was the cost of specialization.

AI changes the calculus. AI can work across all these artifacts simultaneously. It doesn’t need handoffs or translations. But when organizations adopt AI within existing role boundaries, they inherit all the fragmentation. A developer’s AI assistant sees the codebase but not the requirements. A BA’s AI assistant sees the business documents but not the implementation. The silos persist, now with AI trapped inside each one.
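To make that fragmentation concrete, here is a minimal sketch. Everything in it is an illustrative assumption rather than any specific tool's API: the directory layout, the file types, and the load_artifacts helper are invented. The point is only that a role-bound assistant gets fed one silo, while an AI-native setup feeds every artifact type into a shared context.

```python
from pathlib import Path

def load_artifacts(roots: list[Path]) -> str:
    """Concatenate text artifacts under the given roots into one context string."""
    chunks = []
    for root in roots:
        for path in sorted(root.rglob("*")):
            if path.is_file() and path.suffix in {".md", ".py", ".feature"}:
                chunks.append(f"--- {path} ---\n{path.read_text()}")
    return "\n\n".join(chunks)

# Role-bound adoption: each role's assistant sees only its own silo.
developer_context = load_artifacts([Path("src")])              # code, no requirements
analyst_context = load_artifacts([Path("docs/requirements")])  # requirements, no code

# AI-native adoption: one shared context spanning every artifact type.
team_context = load_artifacts([
    Path("docs/requirements"),  # what the system should do
    Path("src"),                # how it currently does it
    Path("tests"),              # how that behavior is verified
])
```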

This identity anchoring creates significant inertia when organizations try to transform their processes. AI adoption can only happen within the bounds of traditional role definitions. A developer will use AI to write code faster. A QA engineer will use AI to generate test cases. A BA will use AI to draft user stories. Each role adopts AI tools in isolation, optimizing their slice rather than the whole.

Responsibility boundaries may shift—maybe developers now handle some tasks that used to belong to QA, or architects take on work that was previously a BA’s domain. But the fundamental role structure, and the fragmentation that comes with it, remains unchanged.

This is exactly why most organizations end up with what I call AI-Assisted development—AI constrained to helping within existing role boundaries. It’s the natural result of identity anchoring. You get incremental productivity gains within each role, but you don’t get transformation. You don’t get the 5x or 10x improvements that AI-native approaches can deliver.

Some organizations are experimenting with fundamentally different approaches—small cross-functional teams where roles are defined by phase ownership rather than task specialization, and where the entire team collaborates with AI together rather than in isolation. But adopting these models requires people to let go of their identity anchors. And that’s hard.

Problem #2: The Skills Stagnation Trap

The second problem compounds the first: people try to accelerate existing processes with AI rather than questioning whether those processes should exist at all.

This is a natural response. You’ve spent years mastering a craft—writing clean code, designing test strategies, architecting systems. AI arrives, and the instinct is to use it to do what you already do, just faster. Developers use AI to write code faster. QA engineers use AI to generate test cases faster. Architects use AI to produce design documents faster.

But this “faster” mindset misses the point entirely. AI isn’t just a faster horse—it’s a different mode of transportation. Some of the problems we’ve built entire processes around can be eliminated entirely with a paradigm shift.

Consider test coverage. Traditional QA processes exist because humans writing tests is slow and expensive, so we optimize for coverage efficiency—risk-based testing, test pyramids, strategic sampling. But when AI can generate comprehensive test suites in minutes, the entire optimization problem disappears. The skill isn’t “how do I maximize coverage with limited testing resources”—it’s “how do I validate that AI-generated tests actually verify the right behaviors.”
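One concrete way to do that validation is mutation testing: deliberately break the code and check whether the AI-generated suite notices. Below is a minimal sketch of the idea; the repository layout, the pricing.py module, and the specific mutation are hypothetical, and it assumes the suite runs under pytest. Dedicated tools such as mutmut automate this across many mutations.

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

def suite_catches_mutation(repo: Path, mutate) -> bool:
    """Copy the repo, apply one deliberate bug, and run the test suite.

    Returns True if the suite fails, i.e. the AI-generated tests
    actually caught the regression.
    """
    with tempfile.TemporaryDirectory() as tmp:
        workdir = Path(tmp) / "mutant"
        shutil.copytree(repo, workdir)
        mutate(workdir)
        result = subprocess.run(["pytest", "-q"], cwd=workdir)
        return result.returncode != 0

def flip_discount_boundary(workdir: Path) -> None:
    """Hypothetical mutation: weaken a boundary check in a pricing module."""
    target = workdir / "src" / "pricing.py"
    target.write_text(target.read_text().replace(">=", ">"))

if suite_catches_mutation(Path("."), flip_discount_boundary):
    print("Suite caught the mutation: it verifies this behavior")
else:
    print("Mutation survived: the suite is not verifying this behavior")
```

A suite that lets mutations survive has coverage in name only, which is exactly the failure mode to look for in generated tests.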

Or consider requirements elaboration. Business analysts exist partly because translating high-level business needs into detailed specifications is cognitively demanding and time-consuming. But when AI can elaborate a brief into detailed specs, the skill shifts from “how do I write comprehensive requirements” to “how do I validate that AI-generated specs capture the actual business intent.”
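A lightweight way to approach that validation is traceability: check that every statement of intent in the original brief maps to something in the generated spec, rather than rereading the whole document. The sketch below is deliberately naive; the brief, the user stories, and the keyword-overlap matching are all invented for illustration, and a real check would more likely use an LLM or embeddings than word overlap.

```python
brief = [
    "customers can cancel an order before it ships",
    "refunds are issued to the original payment method",
]

generated_spec = {
    "US-101": "As a customer, I can cancel an unshipped order from my order history.",
    "US-102": "Cancelled orders trigger an automatic refund to the original payment method.",
    "US-103": "Support agents receive a notification when an order is cancelled.",
}

def untraced_intents(brief: list[str], spec: dict[str, str]) -> list[str]:
    """Return brief statements with no plausible match anywhere in the spec."""
    missing = []
    for need in brief:
        keywords = {w for w in need.split() if len(w) > 5}
        if not any(keywords & set(text.lower().split()) for text in spec.values()):
            missing.append(need)
    return missing

print(untraced_intents(brief, generated_spec))  # [] means every intent is traced
```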

The professionals who cling to accelerating existing processes find themselves in a trap. They’re optimizing skills that are becoming commoditized. The developer who prides themselves on typing speed and syntax mastery. The QA engineer who’s built a career on manual test case design. The architect who’s valued for producing detailed documentation. These skills don’t disappear overnight, but their relative value erodes steadily.

Meanwhile, organizations aren’t providing clarity on what skills matter next. Without a defined path forward, people default to defending their current expertise rather than evolving it. They resist AI adoption not because they fear unemployment, but because they fear irrelevance. They’ve invested years building capabilities that may no longer differentiate them.

The mindset shift required is fundamental: stop asking “how can AI help me do my job faster?” and start asking “what outcomes does my role exist to achieve, and what’s the best way to achieve them now that AI exists?”

This reframing changes everything. Job titles stop being defined by activities (writing code, writing tests, writing specs) and start being defined by outcomes (working software, validated quality, clear requirements). Responsibility models shift from “who produces this artifact” to “who ensures this outcome.” Ownership becomes about judgment and accountability, not task execution.

The organizations that help their people make this mindset shift—that actively redefine roles around outcomes rather than activities—create clarity. They show people a path where their domain expertise becomes more valuable, not less. A senior developer’s judgment about what to build matters more when AI handles the how. A QA engineer’s understanding of risk and quality becomes more critical when they’re validating AI-generated tests rather than writing them manually.

But the organizations that leave people to figure this out alone? They get stagnation. People clinging to existing skills, resisting tools that threaten their current value, optimizing for a world that’s rapidly disappearing.

The Imbalance: Technical vs. Organizational Investment

There’s a striking imbalance in how organizations approach AI transformation.

On the technical side, investment is aggressive. Engineering leaders evaluate new models as they release. Teams experiment with different coding assistants, comparing capabilities and workflows. Organizations build internal platforms for AI tooling, establish prompt libraries, and create centers of excellence for AI adoption.

On the organizational side? Crickets. The same job descriptions from five years ago. The same team structures. The same handoff-based processes. Maybe some encouragement to “use AI tools” sprinkled on top.

As I wrote in Beyond Code Completion, this is like putting a jet engine on a horse carriage. The technology has transformed, but the vehicle hasn’t. You might go a bit faster, but you’re not going to fly.

Tools benefit individuals. Processes help teams. Culture elevates organizations. If you only invest in tools, you’re optimizing for individual productivity gains while leaving team and organizational transformation on the table. You’re chasing a local maximum, not a global one.

The organizations that will realize the full gains from AI adoption are the ones investing equally in both sides of the equation. They’re not just adopting new tools—they’re reinventing processes and redefining roles. They’re asking hard questions about what their org structure should look like when AI is a first-class participant in software delivery.

What’s Next: How Roles Will Evolve

We’ve identified the problems. Traditional roles create identity anchors that constrain AI adoption to incremental improvements. The failure to define new roles creates an uncertainty vacuum that breeds fear and resistance. And organizations are over-investing in technical adoption while under-investing in organizational transformation.

So what do we do about it? What do roles actually become in an AI-native world?

In my next post, I’ll share my opinions on how specific software delivery roles will evolve. Not vague predictions, but concrete perspectives on what developers, QA engineers, architects, and product roles become when AI is a true development partner.

Follow me on LinkedIn to make sure you don’t miss it.

Conclusion

The biggest barrier to AI transformation isn’t the technology. It’s the organizational structures and role definitions we’ve inherited from a pre-AI era.

Traditional roles were designed for human-only collaboration—optimized for handoffs, specialization, and managing cognitive overhead. These roles create identity anchors that constrain AI adoption to existing boundaries, resulting in AI-Assisted patterns that deliver incremental gains rather than transformation. And when organizations fail to define what roles become, they create an uncertainty vacuum that breeds fear and resistance.

The path forward requires equal investment in organizational transformation as in technical adoption. It requires actively experimenting with new role definitions, new team structures, and new collaboration models. It requires giving people clarity about their future rather than leaving them to assume the worst.

The organizations that figure this out first will have a significant advantage. They’ll move from AI-Assisted to AI-Driven development. They’ll unlock the 5x and 10x gains that remain out of reach for organizations still anchored to traditional roles.

The technology is ready. The question is whether our organizations are ready to evolve with it.

About the Author - Derick Chen

I'm a Developer Specialist Solutions Architect at AWS Singapore, where I lead the AI-Driven Development Lifecycle (AI-DLC) programme across multiple key countries in ASEAN and the wider APJ region. As an early contributor to the AI-DLC methodology and its foundational white paper, I help engineering organizations build complex software faster and better, unlocking 10X delivery velocity through reimagined processes and team structures.

Previously, I worked at Meta on platform engineering solutions and at DBS Bank on full-stack development for business transformation initiatives. I graduated magna cum laude from New York University with a BA in Computer Science.

Follow me on LinkedIn for more insights on AI-driven development and software engineering.

The views expressed in this article are my own and do not represent the views of my employer.