Beyond Self-Disruption: The Paradigm Shift Software Engineers Need in the AI Era

First Published:
Last Updated:

Throughout my career, I have poured my passion into meticulous design, obsessive attention to code quality, and building deep technical knowledge to support it all. I earned every AWS certification available and was recognized as a Japan AWS Top Engineer. I believed that the accumulation of knowledge and technical skills was the very essence of my value. Yet recently, I made the deliberate decision to disrupt my own way of thinking.

In this article, I share my own experience and reflections on what engineers should value in the AI era, what to let go of, and what to sharpen. This is not a lecture aimed at anyone in particular. It is a record of a paradigm shift that I myself had to face. If you are an engineer grappling with similar conflicts, I hope you find something useful here.

1. What Is Losing Its Value — The Changing Meaning of "Knowing"

I feel that the value of "knowing" things, in and of itself, is rapidly diminishing.
  • Knowing how to use a framework
  • Knowing design patterns
  • Being proficient in a specific programming language
  • Being able to document best practices
All of these can now be handled instantly by AI. The era of competing based on the sheer volume of accumulated knowledge may be drawing to a close.

Why I Came to Think This Way

About a decade ago, people who had memorized Stack Overflow answers were highly valued. By the years leading up to ChatGPT's arrival, the prevailing sentiment had shifted to "there's no point memorizing what you can just Google." And now, AI doesn't just search — it understands the intent behind your question and generates contextually appropriate answers, complete with code.

Let me reflect on some specific examples.

Knowledge of design patterns: There was a time when the ability to judge that "the Strategy pattern is appropriate for this case" carried significant value. But now, if you describe your requirements to AI, it will select the appropriate pattern and complete the implementation. The value of simply knowing a pattern's name has diminished dramatically.

Proficiency in languages and frameworks: "I deeply understand Python's asyncio behavior." "I have a complete grasp of React's rendering cycle." AI has extensive knowledge in these areas. Moreover, it can cross-reference related information far faster than a human reading through documentation one page at a time.

What Big Tech CEOs Are Saying

This isn't just my personal impression. Big Tech CEOs have been publicly declaring, one after another, that the era of AI writing the majority of code has arrived. Google CEO Sundar Pichai stated during the October 2024 earnings report that over 25% of Google's new code is AI-generated (*1). Meta's Mark Zuckerberg predicted in January 2025 that "AI will begin replacing midlevel engineers' work during 2025" (*2). Anthropic CEO Dario Amodei predicted in March 2025 that "AI will write 90% of code within 3 to 6 months" (*3), and by October of the same year, he stated this had become reality within Anthropic (*4).

This is no longer a story about the future. It is something happening right now.

[Sources]

The Thought Pattern I Fell Into

Let me be honest about the thought pattern I fell into. It was this: "I can write it better, therefore I should write it myself."

Having earned every AWS certification and been selected as a Top Engineer for six consecutive years, my belief that "the volume of accumulated knowledge equals my value" was deeply entrenched. "AI writes mediocre code. I can write far more refined code." This feeling I had in the early days of AI was probably not wrong. However, the fact that "I can write it better" does not logically lead to the conclusion that "therefore I should write it myself."

Instead of spending an hour carefully crafting polished code, I have AI write it in 30 seconds and focus on review and improvement. The time saved lets me tackle the next challenge as well. The result, I've found, is dramatically higher output in both quality and quantity within the same timeframe. This realization was a major turning point for me.

2. What Remains Valuable — Or Grows Even More So

So, while some things are losing their value, what remains? I see this not as a simple skill swap, but as a paradigm shift in the very definition of an engineer's value.
  • Source of value: from technical ability (writing, knowing, solving) to problem-defining ability, judgment, and execution (defining, discerning, delivering)
  • Engineer's role: from hands-on craftsperson to director who orchestrates AI to realize outcomes
  • Measure of evaluation: from how good the code you can write is to how significant the problems you can solve are

This transition is not simply an extension of the status quo. That is precisely why "self-disruption" becomes necessary — or at least, that is what I have come to feel.

So what gains value in this new paradigm? In my view, it comes down to three core capabilities:
  1. Problem-defining ability: The power to define what should be built
  2. Judgment: The power to discern the quality of AI's output
  3. Execution: The power to deliver and get it into users' hands
Let me elaborate on each.

2-1. Problem-Defining Ability — The Power to Ask the Right Questions

AI will build whatever you ask it to. But it cannot decide what should be built.
  • What is the real challenge facing this business?
  • What are users truly struggling with?
  • Among everything that is technically feasible, what is worth doing now?
This requires someone who knows the ground truth, talks to people, and makes judgments. And the quality of these questions determines the quality of AI's output.

Why "Questions" Matter So Much

The quality of AI's output is entirely dependent on the quality of its input. Even for building the same feature, the following two instructions produce completely different results:

Instruction A: "Build a user management screen."

Instruction B: "Build a dashboard for a SaaS product with 5,000 monthly active users, designed for the Customer Success team to quickly identify users at high risk of churn. Calculate a health score based on login frequency over the past 30 days, usage rate of key features, and number of support inquiries, then display users sorted by lowest score first."

The person who can give Instruction B understands the business, grasps the users' challenges, and judges what is fundamentally necessary. This "problem-defining ability" is an entirely different capability from technical knowledge, and I believe it cannot be replaced by AI.
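To make the contrast concrete, here is a minimal sketch of the health-score logic that Instruction B describes. The field names, weights, and thresholds are illustrative assumptions of mine, not part of any real product:

```python
from dataclasses import dataclass

@dataclass
class UserActivity:
    user_id: str
    logins_last_30_days: int       # raw login count over the past 30 days
    key_feature_usage_rate: float  # 0.0 to 1.0
    support_inquiries: int         # inquiries in the past 30 days

def health_score(u: UserActivity) -> float:
    """Toy health score: higher means healthier. Weights are illustrative."""
    login_component = min(u.logins_last_30_days / 20, 1.0)  # saturate at 20 logins
    usage_component = u.key_feature_usage_rate
    # Many support tickets suggest friction; penalize, capped at 0.5.
    inquiry_penalty = min(u.support_inquiries * 0.1, 0.5)
    score = 0.5 * login_component + 0.5 * usage_component - inquiry_penalty
    return round(max(score, 0.0), 3)

def churn_risk_order(users: list[UserActivity]) -> list[UserActivity]:
    """Lowest score first, as Instruction B asks."""
    return sorted(users, key=health_score)
```

The point is not this particular formula. It is that only someone who understands the business could specify it at all.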

How to Strengthen Problem-Defining Ability

Here is what I personally try to keep in mind:
  • Depth of domain knowledge: You cannot ask the right questions without a deep understanding of the business domain
  • Touchpoints with users: Knowing real users' voices, behaviors, and frustrations
  • A sense of technical feasibility: Having a rough grasp of what can and cannot be done (AI will fill in the details)
  • Ability to structure problems: The power to decompose vague problems into solvable units

2-2. Judgment — The Eye to Discern the Quality of AI's Output

AI can produce perfect output, but it can also produce output that looks convincing yet is fatally flawed. The eye to discern that difference is something that, in my view, you can only possess because of prior engineering experience.

Humans Observe, Then Ask AI

Verifying scalability and security, identifying edge cases, analyzing performance. AI can address these concerns with high accuracy when asked. But noticing what needs to be checked is the human's role.
  • Because you know the patterns of past production incidents, you can ask AI: "Could this design cause the same problem?"
  • Because you've observed how users actually behave, you can sense that something is off: "Is this screen flow really natural?"
  • Because you've picked up on the unspoken concerns of stakeholders, you can pause and ask: "Will this specification truly get buy-in?"
The painful lessons from past projects, the insights gained from observing user behavior — it is precisely this accumulated experience that enables you to ask AI the right questions.

How to Apply Judgment

What matters here is where you apply your judgment. Judgment should be used for review and course correction. If you use it as a reason to do the work yourself, it actually decreases your productivity.

A film director does not act in their own movie. Yet without their judgment, the film cannot exist. I believe the same applies to an engineer's judgment. When you're dissatisfied with AI's output, instead of rewriting it yourself, feed your course correction back to AI. It is through this iterative loop that judgment is most fully leveraged.

2-3. Execution — The Power to Deliver to Users

While the value of being able to write code is changing, actually running a service and getting it into users' hands remains gritty, hands-on, human work.
  • Configuring and operating infrastructure
  • Building consensus with stakeholders
  • Making decisions and responding during incidents
  • Pivoting based on user feedback
This involves human will and emotion. Even with AI utilized to its fullest, there are aspects of execution that only humans can handle.

What Execution Concretely Entails

Grit — the tenacity to see things through
  • Persisting through setbacks, iterating with AI, and continuing to push until results emerge
  • Adapting flexibly and moving forward even when facing unexpected incidents or specification changes
  • Grinding through the last mile to get things across the finish line
The ability to read human emotions
  • Reading stakeholders' vague dissatisfactions and unarticulated expectations, and drawing out what they truly want
  • Sensing the differences in stakes and urgency among stakeholders and guiding them toward consensus
  • Perceiving team members' motivation levels and anxieties and engaging with them appropriately
The willingness to take responsibility
  • Making the call to say "we're going with this" even in uncertain situations, and owning the outcome
  • Having the resolve to take on the final decision when no one else will and things can't move forward otherwise
AI can support all manner of technical tasks. But sensing a stakeholder's hidden demands, raising team morale, and pushing forward tenaciously until results materialize — these are things only humans can do.

3. On Self-Disruption — Why I Needed It

If you've read this far and felt some discomfort or resistance, you are not alone. I felt exactly the same way. The thought pattern I described in Chapter 1 — "I can write it better, therefore I should write it myself" — was born directly from my attachment to the skills I had spent years building up. Accepting that "the value of skills I've spent years accumulating is changing" was not something I could do easily.

That is precisely why I needed to use such a strong phrase — "self-disruption" — to describe what I was doing.

What I Disrupted

  • The belief that "code written by my own hand has inherent value": The value of code is determined not by who wrote it, but by what it solves. Whether AI writes it or a human does, if it solves the user's problem, it should hold the same value.
  • The evaluation axis that "possessing deep technical knowledge proves excellence": AI has the knowledge. The new measure of excellence, I believe, is shifting to whether you can leverage AI to produce results.
  • The habit of rolling up my sleeves because "I can write it better than AI": This was the hardest to let go. It is precisely because I have judgment that I feel dissatisfied with AI's output and end up rewriting it myself. But what I should have been doing was feeding that dissatisfaction back to AI as course-correcting feedback — not processing it with my own hands.

What I Didn't Need to Disrupt

On the other hand, there were things I didn't need to let go of. The problem-defining ability, judgment, and execution I discussed in the previous chapter are, in fact, growing in value.

However, how they are applied has changed. The broad technical knowledge I cultivated through the process of earning every AWS certification is no longer used for writing code myself. Instead, it has transformed into something I use to cross-reference multiple domains, holistically review AI's output, and provide course corrections. Perhaps I didn't deny the knowledge itself, but rather redefined how to put it to use.

Self-Disruption Is Not a One-Time Event

What I've come to realize is that self-disruption is not a one-off occurrence. AI's capabilities evolve daily. A task you believe only a human can do today may be something AI handles flawlessly six months from now.

That is why I believe it is essential to keep asking yourself: "What is the work that only I can do?" Self-disruption may not be about discarding an old set of values once. It may be the very posture of continuously re-examining your own value.

4. Embracing Local AI Agents — The Catalyst of My Self-Disruption

Up to this point, I have written about self-disruption in fairly abstract terms. But there was, in fact, a very concrete trigger for it: the arrival of local AI agents — specifically, Claude Code. Without seeing what local AI agents can actually do, I do not believe I would have arrived at the level of self-disruption I described in the previous chapter. So in this chapter, I want to step back from the abstract narrative and look squarely at the technology itself.

Claude Code first appeared as a Research Preview on February 24, 2025, alongside Claude 3.7 Sonnet, and reached general availability on May 22, 2025, when Claude 4 was announced. The way it reads through a codebase and autonomously handles design, implementation, and verification fundamentally undermined the premise I had been clinging to: "I can write it better than AI." This is what I really want to convey in this chapter.

4-1. Online AI Agents vs. Local AI Agents

There are many ways to classify generative AI services, but if we divide them by where they run, they fall broadly into two categories.
  • Online AI agents: Used through a browser or chat interface. Examples include Anthropic Claude (web version), OpenAI ChatGPT, and Microsoft 365 Copilot. They are easy to start using, but have inherent limits on how directly they can touch the data on your own machine.
  • Local AI agents: Run on your local terminal (PC) and can directly access the file system. Examples include Anthropic Claude Code (coding-centric), OpenAI Codex (coding-centric), and Anthropic Claude Cowork (general office work). New entrants are appearing one after another.
The defining characteristic of a local AI agent is that it can directly access the file system on your PC. This single difference dramatically widens the range of work AI can take on.

4-2. Why Local AI Agents Forced My Self-Disruption

The strength of local AI agents, in my view, lies in their autonomy that pulls the execution environment itself into the loop.
  • Direct access to the file system and command execution environment: Reading and writing files, executing commands, running tests — the agent does these autonomously on its own.
  • Ability to ingest the full codebase and project-specific context: Repository structure, configuration files, and project-specific instructions like CLAUDE.md are taken in wholesale as context.
  • Self-execution and self-correction loops: It can run a "write → execute → observe failure → fix" loop without a human in the middle.
In short, local AI agents do not stop at "writing code that looks good." They verify and refine the result themselves. Once I saw this in action, the rationale of "I'll write it because I can write it better" stopped holding water. The AI did not just produce code — it iterated on its own output, considered the project's overall context, and delivered something workable. That was the moment I had to face self-disruption squarely.

4-3. The Stages of "AI-Differentiating Skills" — Now Branching into Harness and Environment

The skills that differentiate engineers in the age of AI have evolved over the past one to two years. Through 2024 and 2025 the progression looked sequential: prompt → context → the new local-AI layer. With local AI agents now in mainstream use, that latest stage has resolved into two parallel sub-disciplines: harness engineering and environment engineering.
  • Through 2024: Prompt Engineering. Core question: how do you write the instruction?
  • 2024–2025: Context Engineering. Core question: how do you feed in background information (RAG, etc.)?
  • Now and going forward, a parallel pair:
    • Harness Engineering. Core question: how do you configure the agent runtime (permissions, hooks, MCP, the tool surface) inside the process?
    • Environment Engineering. Core question: how do you bound the world the agent acts in (OS user, sandbox, network) outside the process?

These two final-stage skills are both new to the AI-agent era, and they sit at different layers of the same stack. Harness engineering is the in-process configuration of the agent runtime itself — what permissions are evaluated, what hooks fire, which MCP servers load, what advisories CLAUDE.md carries. Environment engineering is the out-of-process bounding of the world the agent acts in — OS user, filesystem ACLs, container, network egress, backups, audit. (For the environment-engineering side of the pair, see for example Aymen Furter's article on Environment Engineering as Platform Engineering for AI Agents.)

Why Harness and Environment Engineering Matter

In the AI agent era, the agent itself reaches out for the information it needs. The question, then, is no longer "what should I give it" but rather "what should it look at, what should it be allowed to do, and what should it be permitted to touch". In other words, it is a question of guardrail design — and that design lives at two layers, not one.
Concrete elements of harness engineering (inside the process):
  • Permission rules (allow / deny / ask) for tool calls
  • Lifecycle hooks for blocking, mutating, or auditing actions
  • MCP server scopes and per-tool gating
  • CLAUDE.md advisories that travel with the project
  • Approval workflows and prompt-injection-aware prompt handling
Concrete elements of environment engineering (outside the process):
  • OS user separation and filesystem ACLs
  • Container, devcontainer, or VM sandboxing
  • Network egress controls (firewall, allowlisted domains)
  • Backups and disk-level snapshots
  • Audit pipelines for tool calls and filesystem mutations
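As one concrete flavor of harness engineering, here is a sketch of a pre-tool-use hook: a small script that inspects a proposed tool call, received as JSON on stdin, and blocks file writes outside the project root. The payload field names, the `PROJECT_ROOT` path, the tool names, and the exit-code convention are simplified assumptions on my part; consult your agent's hook documentation for the real contract.

```python
import json
import sys
from pathlib import Path

PROJECT_ROOT = Path("/home/dev/myproject")  # hypothetical project root

def decide(tool_name: str, tool_input: dict) -> tuple[bool, str]:
    """Return (allowed, reason). Block file writes outside the project root."""
    if tool_name in ("Write", "Edit"):
        target = Path(tool_input.get("file_path", "")).resolve()
        if not target.is_relative_to(PROJECT_ROOT):
            return False, f"write outside project root: {target}"
    return True, "ok"

def main() -> int:
    """Hook entrypoint: read the event from stdin, signal the verdict
    via exit code (nonzero blocks the action in this sketch)."""
    event = json.load(sys.stdin)
    allowed, reason = decide(event.get("tool_name", ""),
                             event.get("tool_input", {}))
    if not allowed:
        print(reason, file=sys.stderr)
        return 2
    return 0

# Wired as a hook, the script would end with: sys.exit(main())
```

Environment engineering would then add the out-of-process counterpart, for example running the agent as a dedicated OS user inside a container with restricted network egress, so that even a misbehaving hook cannot be bypassed.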

Harness and Environment Engineering Are the Frontline for Local AI

If we map these stages against the two types of AI agents, the picture becomes clear:
  • Online AI: prompt, high; context, high (RAG); harness, limited; environment, limited
  • Local AI: prompt, high; context, high; harness, high (the battlefield); environment, high (the battlefield)

Online AI does its work within the screen it has been given — the harness and the environment are both fixed for you, and only prompt and context are real levers. Local AI, by contrast, touches your own environment directly, and both the harness and the environment are yours to design. The quality of your harness and environment engineering together directly determines the ceiling of what local AI can do for you. Whoever can balance safety and convenience across both layers will be the one who reaps the most from AI.

4-4. To Those Who Have Not Yet Tried a Local AI Agent

I would strongly encourage anyone who has not yet used a local AI agent to simply try one. Reading articles and watching demos is not enough. The shift in perspective only becomes real when you let it work alongside you on your own files. If your company does not yet permit it, I personally believe it is worth using one on your own initiative for personal projects, taking responsibility for the consequences yourself.

And if you are unsure how to get started — that, too, is a perfect first task to ask AI about. Even in this small moment, the three capabilities I described in Chapter 2 — problem-defining ability, judgment, and execution — are quietly being put to the test.

5. How I Work Now — My Own Behavioral Changes

From "hands-on craftsperson" to "the person who orchestrates AI to deliver results."

I don't want to leave this as an abstraction, so let me describe what I am actually practicing — or striving to practice — in concrete terms.

5-1. Let AI Write the Code — Focus on "Defining" and "Judging"

In my previous way of working, I spent the bulk of my day on understanding requirements, designing, coding, and team communication.

Today, my time allocation has changed dramatically. My focus has shifted to defining problems, structuring requirements, instructing AI, reviewing its output, and communicating with the team about AI adoption. The occasions when I write code with my own hands have become nearly nonexistent. By reallocating much of the time I once spent coding to problem definition and evaluation of AI output, I feel the scope I can cover in the same timeframe has expanded significantly.

5-2. Move Across the Full Surface, Not Just in a T-Shape

Traditionally, the ideal skill set was described as "T-shaped" — deep expertise in, say, backend development, with some familiarity with frontend. But now that AI can handle implementation across domains, a single engineer's coverage can expand into a broad surface.

Frontend, backend, infrastructure, database design, security. You don't need to implement all of these by hand, but you do need enough understanding to direct AI and enough experience to judge its output across all of them.

I have experience designing and building across AWS, multi-cloud, and on-premises environments. With AI, the barrier to "moving across the full surface" has dropped dramatically. Previously, you needed to keep each platform's specifications in your own head. Now, AI supplements that knowledge. What you yourself need to retain is not exhaustive knowledge of every domain's details, but rather a grasp of the essentials and enough experience to wield AI effectively.

This is subtly different from being "a mile wide and an inch deep." You need enough depth in each domain to give AI appropriate instructions and judge the quality of its output. Think of it as "broadening the range of your judgment."

5-3. Build and Validate Before Overthinking

Early in my career, I worked in R&D in Silicon Valley. The most valuable lesson I took from that experience was a working style that combines rapid iteration with initiative, creative thinking, tenacity, and adaptability to produce meaningful results. This principle has been at the core of my approach throughout my career.

This is not the first time this principle has proven relevant. The IT industry has seen repeated moments where the established way of doing things stops working — from on-premises to cloud, from monoliths to microservices. In the early days of cloud computing, there was pushback: "You can't trust production workloads to the cloud." When generative AI first appeared, I personally felt that "code I write by hand is higher quality" — and I was struck by the realization that this was the same pattern of resistance all over again. But each time, by moving quickly, learning from failures, and pivoting flexibly, I've continued to ride the wave of change.

And now, with the arrival of generative AI, the cycle of "build quickly and validate" has accelerated dramatically.

Before: Spend two weeks locking down the design, two weeks implementing, and if it doesn't fit, rework everything.

Now: Form a hypothesis in one hour, have AI implement it, see the working product, validate, and pivot. Do this multiple times a day.

It's a shift from the mindset of "create the perfect design upfront" to "build fast and learn fast." Now that the cost of failure has dropped, the time spent hesitating out of fear of failure may be the greater waste.

The scale of this paradigm shift may be greater than anything before it. Yet I believe that the posture itself — taking initiative, moving quickly, thinking creatively, persevering through difficulty, and adapting to change — remains universally important across every era's transitions.

5-4. Keep Asking: "What Work Can Only I Do?"

This is, personally, the guiding principle I hold most dear. In my daily work, I make a point of asking myself:

"Is the task I'm doing right now something AI cannot do?"

If I'm doing something AI can do, it may be time to reconsider how I'm spending that time. The hours humans spend doing what AI can do will increasingly become "wasted time."
  • Routine CRUD implementation → Delegate to AI
  • Writing test code → Delegate to AI
  • Writing documentation → Delegate to AI
  • Bug investigation and fixing → Let AI take the first pass; focus on judgment myself
So where should I spend my time?
  • Talking to users to uncover fundamental challenges
  • Connecting business strategy with technology strategy
  • Leading technical decision-making for the team
  • Integrating AI's outputs into a coherent, shippable product
  • Making the final call during incidents

6. Changes in Teams and Management — How We Relate to Others Is Changing Too

So far, I've been discussing the individual engineer. But engineers don't work in isolation. Teams, management, organizations — how we relate to others is also fundamentally changing through this paradigm shift.

6-1. Changes in Team Structure — Toward Smaller Teams with Greater Impact

As AI takes on implementation, the range a single engineer can cover expands dramatically. This inevitably affects team structure.

A product that previously required a 10-person team with dedicated frontend, backend, infrastructure, QA, and PM specialists is increasingly being covered by 2–3 engineers working collaboratively with AI.

This is not about "reducing headcount." It's about the same number of people producing far greater impact. However, for this to work, each team member needs to possess problem-defining ability, judgment, and execution at a high level. In lean, elite teams, each individual will be expected to act with greater autonomy.

6-2. The Focus of Management Shifts — From "Volume of Work" to "Quality of Judgment"

The focus of management changes too.
  • From estimating and allocating effort to prioritizing problems and judging direction
  • From distributing tasks to designing what AI handles vs. what humans handle
  • From tracking progress to reviewing output quality
  • From assigning based on skill sets to assigning based on judgment and problem-defining ability

Once AI removes implementation speed as a bottleneck, a manager's job shifts from "who does what" to "what should the team focus on" and "how do we ensure the quality of AI output across the entire team."

6-3. Redefining Evaluation Criteria

This may be the most difficult change for organizations.

Traditionally, evaluation relied on visible metrics: lines of code, number of commits, expertise in a specific technology, number of certifications. But in an era where AI writes the code, I sense these metrics alone will no longer be sufficient. I am no exception, as someone who has stacked up certifications.

Possible new evaluation criteria include:
  • How impactful were the problems you defined?
  • How accurately did you judge and ensure the quality of AI's output?
  • How quickly did you run hypothesis-validation cycles and connect them to results?
  • How much did you contribute to elevating the team's overall AI adoption?
Evaluate "what you achieved," not "what you know." Stated plainly, it sounds obvious. But actually designing this into an evaluation system is far from easy. Yet I feel that organizations which fail to face this challenge will lose their best talent.

6-4. The Challenge of Developing Junior Engineers

What I personally find most challenging is the development of junior engineers.

Traditionally, juniors built their understanding of the codebase through relatively straightforward tasks — bug fixes, small feature additions — and gradually developed judgment. Through this "apprenticeship" process, they failed, received code reviews, and accumulated experience bit by bit, growing into senior engineers.

But as AI becomes capable of handling junior-level tasks with high quality, the very opportunities for juniors to gain experience may diminish. At the same time, judgment can only be cultivated through experience. Therein lies a significant dilemma.

One possibility is to make reviewing AI-generated code a learning experience in itself. Training juniors to think about AI output — "Why this design?" "What other options exist?" "What are the risks of this implementation?" — differs from the traditional "learn by writing it yourself," but it may be effective for developing judgment.

Regardless, the growth path from junior to senior is something each organization must consciously redesign.

Furthermore, as individuals become able to move at high speed with AI, there's a risk that information sharing within teams weakens. "Who made what judgment to arrive at this design?" becomes harder to trace. Beyond just junior development, I believe a culture of sharing the rationale behind decisions, rather than just the code itself, will become more important than ever for the team as a whole.

7. AI Cannot Decide "What Should Be Built"

Let me close with what I consider the most important point.

AI is powerful. In many situations, it surpasses humans in both the speed and quality of writing code. Yet there is something AI categorically cannot do: it cannot decide on its own "what should be built."

For AI, the quality of its instructions is everything. Without a human who can ask the right questions, AI cannot unleash its full potential. Conversely, the person who asks the right questions through problem-defining ability, discerns the output through judgment, and delivers through execution may be the most valuable person in the AI era.

I see this not as a threat, but as an opportunity. Precisely because AI handles implementation, engineers can focus on more essential work: "What should we build?" "Why should we build it?" "Are we truly solving the user's problem?"

Letting go of the pride I held as a hands-on craftsperson was, honestly, painful. But beyond self-disruption, I believe there is a new form of engineering — one anchored in problem-defining ability, judgment, and execution — that enables us to create far greater impact.


Everything in this article is based on my own experience and reflections. Others may hold different views.

Some readers may interpret this article as "an engineer's declaration of defeat." But self-disruption is not about standing still. It is about looking squarely at what you have built over the years, identifying what needs to change, and moving forward. Isn't that, in fact, what engineers have always excelled at? We are the people who refactored yesterday's code today. This time, the subject of refactoring simply happens to be our own values.

However, if anything in this article resonated with you, even slightly, I ask you to pose one question to yourself:

Of the work you did today, how much of it was something AI could never do?

I believe that thinking through that answer will lead you to your next step.

Written by Hidekazu Konishi