Beyond Self-Disruption: The Paradigm Shift Software Engineers Need in the AI Era
In this article, I share my own experience and reflections on what engineers should value in the AI era, what to let go of, and what to sharpen. This is not a lecture aimed at anyone in particular. It is a record of a paradigm shift that I myself had to face. If you are an engineer grappling with similar conflicts, I hope you find something useful here.
1. What Is Losing Its Value — The Changing Meaning of "Knowing"
I feel that the value of "knowing" things, in and of itself, is rapidly diminishing.

- Knowing how to use a framework
- Knowing design patterns
- Being proficient in a specific programming language
- Being able to document best practices
Why I Came to Think This Way
Ten years ago, people who had memorized Stack Overflow answers were highly valued. Five years ago, that shifted to "there's no point memorizing what you can just Google." And now, AI doesn't just search — it understands the intent behind your question and generates contextually appropriate answers, complete with code.

Let me reflect on some specific examples.
Knowledge of design patterns: There was a time when the ability to judge that "the Strategy pattern is appropriate for this case" carried significant value. But now, if you describe your requirements to AI, it will select the appropriate pattern and complete the implementation. The value of simply knowing a pattern's name has diminished dramatically.
Proficiency in languages and frameworks: "I deeply understand Python's asyncio behavior." "I have a complete grasp of React's rendering cycle." AI has extensive knowledge in these areas. Moreover, it can cross-reference related information far faster than a human reading through documentation one page at a time.
What Big Tech CEOs Are Saying
This isn't just my personal impression. Big Tech CEOs have been publicly declaring, one after another, that the era of AI writing the majority of code has arrived. Google CEO Sundar Pichai stated during the October 2024 earnings report that over 25% of Google's new code is AI-generated (*1). Meta's Mark Zuckerberg predicted in January 2025 that "AI will begin replacing midlevel engineers' work during 2025" (*2). Anthropic CEO Dario Amodei predicted in March 2025 that "AI will write 90% of code within 3 to 6 months" (*3), and by October of the same year, he stated this had become reality within Anthropic (*4).

This is no longer a story about the future. It is something happening right now.
[Sources]
- (*1) Over 25% of Google's code is written by AI, Sundar Pichai says - Fortune
- (*2) 'AI Can Write The Code': Zuckerberg Says Meta's Midlevel Engineers May Soon Be Replaced - Yahoo Finance
- (*3) Anthropic CEO Says AI Could Write '90% Of Code' In '3 To 6 Months' - Yahoo Finance
- (*4) 90% of code at Anthropic now written by AI, CEO Dario Amodei says humans still essential - Startup News
The Thought Pattern I Fell Into
Let me be honest about the thought pattern I fell into. It was this: "I can write it better, therefore I should write it myself."

Having earned every AWS certification and been selected as a Top Engineer for six consecutive years, my belief that "the volume of accumulated knowledge equals my value" was deeply entrenched. "AI writes mediocre code. I can write far more refined code." This feeling I had in the early days of AI was probably not wrong. However, the fact that "I can write it better" does not logically lead to the conclusion that "therefore I should write it myself."
Instead of spending an hour carefully crafting polished code, I have AI write it in 30 seconds and focus on review and improvement. The time saved lets me tackle the next challenge as well. The result, I've found, is dramatically higher output in both quality and quantity within the same timeframe. This realization was a major turning point for me.
2. What Remains Valuable — Or Grows Even More So
So, while some things are losing their value, what remains? I see this not as a simple skill swap, but as a paradigm shift in the very definition of an engineer's value.

| | Traditional Paradigm | New Paradigm |
|---|---|---|
| Source of Value | Technical ability (writing, knowing, solving) | Problem-defining ability, judgment, and execution (defining, discerning, delivering) |
| Engineer's Role | Hands-on craftsperson | Director who orchestrates AI to realize outcomes |
| Measure of Evaluation | How good the code you can write is | How significant the problems you can solve are |
This transition is not simply an extension of the status quo. That is precisely why "self-disruption" becomes necessary — or at least, that is what I have come to feel.
So what gains value in this new paradigm? In my view, it comes down to three core capabilities:
- Problem-defining ability: The power to define what should be built
- Judgment: The power to discern the quality of AI's output
- Execution: The power to deliver and get it into users' hands
2-1. Problem-Defining Ability — The Power to Ask the Right Questions
AI will build whatever you ask it to. But it cannot decide what should be built.

- What is the real challenge facing this business?
- What are users truly struggling with?
- Among everything that is technically feasible, what is worth doing now?
Why "Questions" Matter So Much
The quality of AI's output is entirely dependent on the quality of its input. Even for building the same feature, the following two instructions produce completely different results:

Instruction A: "Build a user management screen."
Instruction B: "Build a dashboard for a SaaS product with 5,000 monthly active users, designed for the Customer Success team to quickly identify users at high risk of churn. Calculate a health score based on login frequency over the past 30 days, usage rate of key features, and number of support inquiries, then display users sorted by lowest score first."
The person who can give Instruction B understands the business, grasps the users' challenges, and judges what is fundamentally necessary. This "problem-defining ability" is an entirely different capability from technical knowledge, and I believe it cannot be replaced by AI.
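To make concrete why Instruction B is so much easier to act on: the spec it gives maps almost directly onto code. The sketch below is purely illustrative — the weights, thresholds, and field names are my own assumptions, not part of any real product — but it shows the health score and "lowest score first" ordering that Instruction B pins down and Instruction A leaves entirely open.

```python
from dataclasses import dataclass

@dataclass
class UserActivity:
    user_id: str
    logins_last_30_days: int         # login frequency signal
    key_feature_usage_rate: float    # 0.0-1.0, share of key features used
    support_tickets_last_30_days: int

def health_score(u: UserActivity) -> float:
    """Combine the three signals from Instruction B into a 0-100 score.

    The weights (40/40/20) and the caps are illustrative assumptions,
    not a known scoring formula.
    """
    login_component = min(u.logins_last_30_days / 20, 1.0) * 40   # up to 40 pts
    usage_component = u.key_feature_usage_rate * 40               # up to 40 pts
    ticket_penalty = min(u.support_tickets_last_30_days * 5, 20)  # up to -20 pts
    return login_component + usage_component + (20 - ticket_penalty)

def churn_risk_order(users: list[UserActivity]) -> list[UserActivity]:
    """Lowest health score first, as Instruction B requests."""
    return sorted(users, key=health_score)
```

Notice that every line here answers a question Instruction B already settled (which signals, which window, which sort order). With Instruction A, the AI would have to guess all of it.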
How to Strengthen Problem-Defining Ability
Here is what I personally try to keep in mind:

- Depth of domain knowledge: You cannot ask the right questions without a deep understanding of the business domain
- Touchpoints with users: Knowing real users' voices, behaviors, and frustrations
- A sense of technical feasibility: Having a rough grasp of what can and cannot be done (AI will fill in the details)
- Ability to structure problems: The power to decompose vague problems into solvable units
2-2. Judgment — The Eye to Discern the Quality of AI's Output
AI can produce perfect output, but it can also produce output that looks convincing yet is fatally flawed. The eye to discern that difference is something that, in my view, you can only possess because of prior engineering experience.

Humans Observe, Then Ask AI

Verifying scalability and security, identifying edge cases, analyzing performance. AI can address these concerns with high accuracy when asked. But noticing what needs to be checked is the human's role.

- Because you know the patterns of past production incidents, you can ask AI: "Could this design cause the same problem?"
- Because you've observed how users actually behave, you can sense that something is off: "Is this screen flow really natural?"
- Because you've picked up on the unspoken concerns of stakeholders, you can pause and ask: "Will this specification truly get buy-in?"
How to Apply Judgment
What matters here is where you apply your judgment. Judgment should be used for review and course correction. If you use it as a reason to do the work yourself, it actually decreases your productivity.

A film director does not act in their own movie. Yet without their judgment, the film cannot exist. I believe the same applies to an engineer's judgment. When you're dissatisfied with AI's output, instead of rewriting it yourself, feed your course correction back to AI. It is through this iterative loop that judgment is most fully leveraged.
2-3. Execution — The Power to Deliver to Users
While the value of being able to write code is changing, actually running a service and getting it into users' hands remains gritty, hands-on, human work.

- Configuring and operating infrastructure
- Building consensus with stakeholders
- Making decisions and responding during incidents
- Pivoting based on user feedback
What Execution Concretely Entails
Grit — the tenacity to see things through

- Persisting through setbacks, iterating with AI, and continuing to push until results emerge
- Adapting flexibly and moving forward even when facing unexpected incidents or specification changes
- Grinding through the last mile to get things across the finish line
- Reading stakeholders' vague dissatisfactions and unarticulated expectations, and drawing out what they truly want
- Sensing the differences in stakes and urgency among stakeholders and guiding them toward consensus
- Perceiving team members' motivation levels and anxieties and engaging with them appropriately
- Making the call to say "we're going with this" even in uncertain situations, and owning the outcome
- Having the resolve to take on the final decision when no one else will and things can't move forward otherwise
3. On Self-Disruption — Why I Needed It
If you've read this far and felt some discomfort or resistance, you are not alone. I felt exactly the same way. The thought pattern I described in Chapter 1 — "I can write it better, therefore I should write it myself" — was born directly from my attachment to the skills I had spent years building up. Accepting that "the value of skills I've spent years accumulating is changing" was not something I could do easily.

That is precisely why I needed to use such a strong phrase — "self-disruption" — to describe what I was doing.
What I Disrupted
- The belief that "code written by my own hand has inherent value": The value of code is determined not by who wrote it, but by what it solves. Whether AI writes it or a human does, if it solves the user's problem, it should hold the same value.
- The evaluation axis that "possessing deep technical knowledge proves excellence": AI has the knowledge. The new measure of excellence, I believe, is shifting to whether you can leverage AI to produce results.
- The habit of rolling up my sleeves because "I can write it better than AI": This was the hardest to let go. It is precisely because I have judgment that I feel dissatisfied with AI's output and end up rewriting it myself. But what I should have been doing was feeding that dissatisfaction back to AI as course-correcting feedback — not processing it with my own hands.
What I Didn't Need to Disrupt
On the other hand, there were things I didn't need to let go of. The problem-defining ability, judgment, and execution I discussed in the previous chapter are, in fact, growing in value.

However, how they are applied has changed. The broad technical knowledge I cultivated through the process of earning every AWS certification is no longer used for writing code myself. Instead, it has transformed into something I use to cross-reference multiple domains, holistically review AI's output, and provide course corrections. Perhaps I didn't deny the knowledge itself, but rather redefined how to put it to use.
Self-Disruption Is Not a One-Time Event
What I've come to realize is that self-disruption is not a one-off occurrence. AI's capabilities evolve daily. What you think today can only be done by a human may be something AI handles flawlessly six months from now.

That is why I believe it is essential to keep asking yourself: "What is the work that only I can do?" Self-disruption may not be about discarding an old set of values once. It may be the very posture of continuously re-examining your own value.
4. How I Work Now — My Own Behavioral Changes
From "hands-on craftsperson" to "the person who orchestrates AI to deliver results."

I don't want to leave this as an abstraction, so let me describe what I am actually practicing — or striving to practice — in concrete terms.
4-1. Let AI Write the Code — Focus on "Defining" and "Judging"
In my previous way of working, I spent the bulk of my day on understanding requirements, designing, coding, and team communication.

Today, my time allocation has changed dramatically. My focus has shifted to defining problems, structuring requirements, instructing AI, reviewing its output, and communicating with the team about AI adoption. The occasions when I write code with my own hands have become nearly nonexistent. By reallocating much of the time I once spent coding to problem definition and evaluation of AI output, I feel the scope I can cover in the same timeframe has expanded significantly.
4-2. Move Across the Full Surface, Not Just in a T-Shape
Traditionally, the ideal skill set was described as "T-shaped" — deep expertise in, say, backend development, with some familiarity with frontend. But now that AI can handle implementation across domains, a single engineer's coverage can expand into a broad surface.

Frontend, backend, infrastructure, database design, security. You don't need to implement all of these by hand, but you do need enough understanding to direct AI and enough experience to judge its output across all of them.
I have experience designing and building across AWS, multi-cloud, and on-premises environments. With AI, the barrier to "moving across the full surface" has dropped dramatically. Previously, you needed to keep each platform's specifications in your own head. Now, AI supplements that knowledge. What you yourself need to retain is not exhaustive knowledge of every domain's details, but rather a grasp of the essentials and enough experience to wield AI effectively.
This is subtly different from being "a mile wide and an inch deep." You need enough depth in each domain to give AI appropriate instructions and judge the quality of its output. Think of it as "broadening the range of your judgment."
4-3. Build and Validate Before Overthinking
Early in my career, I worked in R&D in Silicon Valley. The most valuable lesson I took from that experience was a working style that combines rapid iteration with initiative, creative thinking, tenacity, and adaptability to produce meaningful results. This principle has been at the core of my approach throughout my career.

This is not the first time this principle has proven relevant. The IT industry has seen repeated moments where the established way of doing things stops working — from on-premises to cloud, from monoliths to microservices. In the early days of cloud computing, there was pushback: "You can't trust production workloads to the cloud." When generative AI first appeared, I personally felt that "code I write by hand is higher quality" — and I was struck by the realization that this was the same pattern of resistance all over again. But each time, by moving quickly, learning from failures, and pivoting flexibly, I've continued to ride the wave of change.
And now, with the arrival of generative AI, the cycle of "build quickly and validate" has accelerated dramatically.
Before: Spend two weeks locking down the design, two weeks implementing, and if it doesn't fit, rework everything.
Now: Form a hypothesis in one hour, have AI implement it, see the working product, validate, and pivot. Do this multiple times a day.
It's a shift from the mindset of "create the perfect design upfront" to "build fast and learn fast." Now that the cost of failure has dropped, the time spent hesitating out of fear of failure may be the greater waste.
The scale of this paradigm shift may be greater than anything before it. Yet I believe that the posture itself — taking initiative, moving quickly, thinking creatively, persevering through difficulty, and adapting to change — remains universally important across every era's transitions.
4-4. Keep Asking: "What Work Can Only I Do?"
This is, personally, the guiding principle I hold most dear. In my daily work, I make a point of asking myself:

"Is the task I'm doing right now something AI cannot do?"
If I'm doing something AI can do, it may be time to reconsider how I'm spending that time. The hours humans spend doing what AI can do will increasingly become "wasted time."
- Routine CRUD implementation → Delegate to AI
- Writing test code → Delegate to AI
- Writing documentation → Delegate to AI
- Bug investigation and fixing → Let AI take the first pass; focus on judgment myself
And the work I keep for myself:

- Talking to users to uncover fundamental challenges
- Connecting business strategy with technology strategy
- Leading technical decision-making for the team
- Integrating AI's outputs into a coherent, shippable product
- Making the final call during incidents
5. Changes in Teams and Management — How We Relate to Others Is Changing Too
So far, I've been discussing the individual engineer. But engineers don't work in isolation. Teams, management, organizations — how we relate to others is also fundamentally changing through this paradigm shift.

5-1. Changes in Team Structure — Toward Smaller Teams with Greater Impact
As AI takes on implementation, the range a single engineer can cover expands dramatically. This inevitably affects team structure.

A product that previously required a 10-person team with dedicated frontend, backend, infrastructure, QA, and PM specialists is increasingly being covered by 2–3 engineers working collaboratively with AI.
This is not about "reducing headcount." It's about the same number of people producing far greater impact. However, for this to work, each team member needs to possess problem-defining ability, judgment, and execution at a high level. In lean, elite teams, each individual will be expected to act with greater autonomy.
5-2. The Focus of Management Shifts — From "Volume of Work" to "Quality of Judgment"
The focus of management changes too.

| Traditional Management | Future Management |
|---|---|
| Estimating and allocating effort | Prioritizing problems and judging direction |
| Distributing tasks | Designing what AI handles vs. what humans handle |
| Tracking progress | Reviewing output quality |
| Assigning based on skill sets | Assigning based on judgment and problem-defining ability |
Once AI removes implementation speed as a bottleneck, a manager's job shifts from "who does what" to "what should the team focus on" and "how do we ensure the quality of AI output across the entire team."
5-3. Redefining Evaluation Criteria
This may be the most difficult change for organizations.

Traditionally, evaluation relied on visible metrics: lines of code, number of commits, expertise in a specific technology, number of certifications. But in an era where AI writes the code, I sense these metrics alone will no longer be sufficient. As someone who has stacked up certifications myself, I am no exception.
Possible new evaluation criteria include:
- How impactful were the problems you defined?
- How accurately did you judge and ensure the quality of AI's output?
- How quickly did you run hypothesis-validation cycles and connect them to results?
- How much did you contribute to elevating the team's overall AI adoption?
5-4. The Challenge of Developing Junior Engineers
What I personally find most challenging is the development of junior engineers.

Traditionally, juniors built their understanding of the codebase through relatively straightforward tasks — bug fixes, small feature additions — and gradually developed judgment. Through this "apprenticeship" process, they failed, received code reviews, and accumulated experience bit by bit, growing into senior engineers.
But as AI becomes capable of handling junior-level tasks with high quality, the very opportunities for juniors to gain experience may diminish. At the same time, judgment can only be cultivated through experience. Therein lies a significant dilemma.
One possibility is to make reviewing AI-generated code a learning experience in itself. Training juniors to think about AI output — "Why this design?" "What other options exist?" "What are the risks of this implementation?" — differs from the traditional "learn by writing it yourself," but it may be effective for developing judgment.
Regardless, the growth path from junior to senior is something each organization must consciously redesign.
Furthermore, as individuals become able to move at high speed with AI, there's a risk that information sharing within teams weakens. "Who made what judgment to arrive at this design?" becomes harder to trace. Beyond just junior development, I believe a culture of sharing the rationale behind decisions, rather than just the code itself, will become more important than ever for the team as a whole.
6. AI Cannot Decide "What Should Be Built"
Let me close with what I consider the most important point.

AI is powerful. In many situations, it surpasses humans in both the speed and quality of writing code. Yet there is something AI categorically cannot do: it cannot decide on its own "what should be built."
For AI, the quality of its instructions is everything. Without a human who can ask the right questions, AI cannot unleash its full potential. Conversely, the person who asks the right questions through problem-defining ability, discerns the output through judgment, and delivers through execution may be the most valuable person in the AI era.
I see this not as a threat, but as an opportunity. Precisely because AI handles implementation, engineers can focus on more essential work: "What should we build?" "Why should we build it?" "Are we truly solving the user's problem?"
Letting go of the pride I held as a hands-on craftsperson was, honestly, painful. But beyond self-disruption, I believe there is a new form of engineering — one anchored in problem-defining ability, judgment, and execution — that enables us to create far greater impact.
Everything in this article is based on my own experience and reflections. Others may hold different views.
Some readers may interpret this article as "an engineer's declaration of defeat." But self-disruption is not about standing still. It is about looking squarely at what you have built over the years, identifying what needs to change, and moving forward. Isn't that, in fact, what engineers have always excelled at? We are the people who refactored yesterday's code today. This time, the subject of refactoring simply happens to be our own values.
So if anything in this article resonated with you, even slightly, I ask you to pose one question to yourself:
Of the work you did today, how much of it was something AI could never do?
I believe that thinking through that answer will lead you to your next step.
Written by Hidekazu Konishi