
The End of Methodology Theatre

  • Graham Anderson
  • Aug 27
  • 17 min read

A thought experiment on what may come to pass and what it means for how we organise ourselves.



Signals of a New Era

The cracks are already showing.


Walk into any technology organisation today and you'll witness a peculiar theatre: teams dutifully pointing stories while Claude coding agents write half the code. Developers debating acceptance criteria while Lovable generates three working prototypes. Project managers updating Gantt charts that everyone knows are fiction the moment they're published.


We're using 20th-century management frameworks to coordinate 21st-century AI capabilities.  It's like using a sundial to schedule Zoom calls.  This isn't about replacing the old ways, but evolving them.  This is not the "emperor's new clothes" - it's a new engine under the hood and we are still arguing about the colour of the paint.


The early signals are everywhere if you know where to look:

  • GitHub Copilot users report completing tasks 55% faster (based on a study by GitHub and Microsoft), yet their organisations still plan sprints as if development takes the same time it did in 2019.

  • Junior developers increasingly skip writing code, jumping straight to AI-prompting and code review - but their organisations still hire and train them as if they'll follow traditional learning paths.

  • Teams are performing methodologies that have lost their value.  The ceremonies designed to provide predictability - from daily stand-ups to estimation sessions - become hollow performances when AI fundamentally alters the pace and nature of work.

  • Middle managers face their own crisis, caught between executives who demand AI transformation and teams who are already transforming faster than the organisation can absorb. They now spend more time translating AI capabilities and team velocity into executive expectations than actually managing people.


As I explored in my previous writing on generational dynamics, we're seeing a fascinating collision. Gen Z employees, approaching one third of the workforce in 2025, arrive as digital natives who intuitively grasp AI tools. They see AI as a colleague, not a threat. They've never known a world without intelligent assistants. While senior developers debate whether AI-generated code is "real" programming, Gen Z developers are already building entire products with AI partners. This isn't a skill gap - it's a paradigm gap.


The cost of maintaining our current methodologies while AI revolutionises development isn't just inefficiency - it's organisational cognitive dissonance that's driving your best talent to companies that get it.


The Two Faces of Agility

The most common misconception in technology today is the belief that "being agile" means "doing Agile."

  • Doing Agile is the ceremonial application of methodologies: the stand-ups, the sprint planning, the story pointing. These are the prescribed rituals and processes that were designed for a world of human limitations.

  • Being agile is the underlying philosophy: the values of customer collaboration, continuous improvement and the ability to respond to change. This is the ultimate goal - to be an organisation that can deliver valuable outcomes with speed, quality and adaptability.


The central thesis of this post is that AI doesn't make us less agile; it makes the practices we used to achieve agility redundant. The change isn't in abandoning the values of agility, but in letting go of the methods we used to express them. We are entering a new era where we can finally focus on being agile without the overhead of doing Agile.


A New Engine for Old Principles

A fair objection: the principles of agile - prioritising individuals over processes, focusing on working software and responding to change - are decades old. The concepts of continuous integration and delivery are also well-established.


The novelty, however, is not in the principles, but in the catalyst and scale of the change. Doing Agile was a human system for managing human limitations. Its ceremonies and processes were born from a world of slow communication, manual processes and high-cost mistakes. They were amplified by consultancy-led frameworks that have become embedded in many organisations' operating systems.


AI is the new variable that removes these limitations, forcing us to ask a fundamentally different question: what happens when our processes are no longer constrained by human slowness, fallibility and communication friction?  The answer is a seismic shift in how work gets done.


From Human Constraints to AI Capabilities

The core of the matter is that AI is automating the very constraints that our methodologies were designed to manage.

Human Limitations (Managed by SDLC Processes) → AI Capabilities (Forcing Evolution)

  • Communication delay (stand-ups, meetings) → Real-time context sharing: AI learns from all team communication and code changes instantly.

  • Manual repetition (testing, writing boilerplate code) → Automated execution: AI generates, tests and validates code in seconds.

  • Cognitive biases (inaccurate estimation) → Data-driven prediction: AI analyses vast datasets to provide probabilistic timelines and resource needs.

  • Limited scalability (a team of 10) → Instant scalability: a team of 10 plus AI agents performing specific roles.

  • Information silos (siloed domain expertise) → Omniscient context: AI has instant access to all codebase, documentation and data made available to it.

A Glimpse into the Future

Sarah didn't realise her team had just performed their last meaningful sprint planning session.  Neither did anyone else in the room.  They went through the familiar rituals: story pointing, capacity planning, the inevitable debate about whether that integration task was a 5 or an 8.


The team kept meeting for the next few sprints, but the conversations started to feel hollow.  They'd spend an hour debating a story, only to discover their AI development partner had already generated three different working solutions in the time it took to grab a coffee.  The point of the debate was gone.  The ceremony became theatre.


What ultimately killed sprint planning wasn't a grand decision or a better methodology.  It was the collective, unspoken realisation that the ceremony itself - a carefully crafted human system for managing human limitations - had become the constraint.  Their methodology hadn't been abandoned; it had simply been rendered obsolete by the new, more effective ways of working they had already started to build.

The Unavoidable Business Imperative

Let's be blunt about the economics.  A typical enterprise spending $50 million annually on software development could see:

  • 70% reduction in time-to-market for standard features when AI handles implementation.

  • $15-20 million in "methodology overhead" (the cost of planning, estimating and coordinating work that can consume 30% or more of a developer’s time) that AI could orchestrate automatically.

  • 3x faster experimentation cycles, meaning faster product-market fit and revenue realisation.

  • Quality improvements of 40%-60% as human attention shifts from debugging code to preventing defects at the design stage where they are 100x cheaper to fix.
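The headline numbers above are internally consistent, and worth sanity-checking. A minimal sketch, using the article's own assumed figures (the dollar amounts and percentages are illustrative, not measured data):

```python
# Back-of-envelope check of the economics quoted above.
# All inputs are the article's illustrative assumptions, not real data.

annual_dev_spend = 50_000_000   # typical enterprise software budget ($)
overhead_share = (0.30, 0.40)   # share of time spent planning/estimating/coordinating

# 30-40% of a $50M budget gives the $15-20M "methodology overhead" range.
overhead_cost = tuple(round(annual_dev_spend * s) for s in overhead_share)
print(f"Methodology overhead: ${overhead_cost[0]:,} - ${overhead_cost[1]:,}")
```

The point of the exercise is simply that the overhead figure falls out of the time-share assumption; plug in your own budget and ceremony-time measurements to get your organisation's number.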


But the real risk isn't inefficiency - it's disruption.


AI-native start-ups are already building products without traditional development teams. They're not faster because they have better processes; they're faster because they have no processes.  A founder with AI tools can now accomplish what used to require a team of ten. When these companies mature and scale, they won't adopt your methodologies - they'll make them extinct.


Consider what's already happening:

  • Replit's AI agents can build entire applications from prompts.

  • Vercel's v0 generates complete UI components from descriptions.

  • Tools like Cursor and Windsurf are challenging the dominance of traditional IDEs by transforming the development workflow with AI.


Your competition isn't just using better tools - they're operating in an entirely different paradigm.


Every Company is a Software Company

Every company is becoming a software company.  Your bank, your retailer, your manufacturer - they're all writing code.  If you're not writing code directly, you're buying it, integrating it or depending on it. The methodologies that govern how software gets built affect every organisation, not just tech companies.


Even if you never write a line of code, your vendors do.  Your partners do.  Understanding this transformation helps you evaluate whether your suppliers are evolving or becoming obsolete.


Three Key Transitions


Process to Principles

Today's reality: your organisation has hundreds of pages of process documentation. Confluence wikis that nobody reads.  Detailed SDLC flowcharts that were outdated before they were published.


The shift: AI doesn't need process documents.  It needs principles.  Give it "ensure user data is encrypted at rest and in transit" not a 50-page security implementation guide.


How to Start (Monday Morning):

  • Pick one well-documented process your team follows.

  • Extract the 3-5 core principles behind it.

  • Test if AI can generate appropriate implementations from just those principles.

  • Measure: Does the AI output match or exceed your process compliance?


90-day experiment: convert one team's entire process library to a principles framework. Track velocity and quality impacts.
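The Monday-morning steps above can be sketched concretely. A hypothetical example of principles replacing a process document - the principle texts and the `build_prompt` helper are made up for illustration, not a real framework or API:

```python
# Hypothetical sketch: a 50-page security process distilled to a few
# principles, handed to an AI agent as part of a prompt instead of a guide.

security_principles = [
    "User data must be encrypted at rest and in transit.",
    "Secrets are never committed to source control.",
    "Every external input is validated before use.",
]

def build_prompt(task: str, principles: list) -> str:
    """Combine a task description with its governing principles into one prompt."""
    rules = "\n".join(f"- {p}" for p in principles)
    return f"Task: {task}\n\nImplement it so that:\n{rules}"

prompt = build_prompt("Add a password-reset endpoint", security_principles)
print(prompt)
```

The measurement step then becomes: does the AI's output from these few lines match or exceed what compliance with the full process document used to produce?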


Estimation to Experimentation

Today's reality: Your teams spend 10-15% of their time in planning ceremonies.  That's about 6 weeks per year per developer estimating work that AI could prototype in hours.


The shift: When implementation is nearly free, estimation becomes theatre.  The new question isn't "how long will this take?" but "which of these five approaches best solves the problem?"


How to Start (Monday Morning):

  • For your next feature request, skip estimation entirely.

  • Have AI generate 3 different implementation approaches.

  • Spend your planning time evaluating trade-offs, not debating story points.

  • Measure: Time from request to working prototype to implementation.


90-day experiment: run one team with no estimation - just rapid prototyping and selection. Compare delivery speed and stakeholder satisfaction.
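The one metric this experiment hinges on is lead time from request to working prototype. A minimal sketch of how to track it - the `Feature` record and the sample timestamps are hypothetical:

```python
# Hypothetical sketch of the measurement above: record when a feature was
# requested and when a working prototype existed, then compare periods
# with and without an estimation step. Sample data is illustrative.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Feature:
    name: str
    requested: datetime
    prototype_ready: datetime

    @property
    def lead_time_hours(self) -> float:
        """Hours from request to first working prototype."""
        return (self.prototype_ready - self.requested).total_seconds() / 3600

features = [
    Feature("dynamic shipping quote", datetime(2025, 9, 1, 9), datetime(2025, 9, 1, 15)),
    Feature("saved-cart reminder", datetime(2025, 9, 2, 10), datetime(2025, 9, 3, 10)),
]

for f in features:
    print(f"{f.name}: {f.lead_time_hours:.1f}h to prototype")
```

Comparing the distribution of these lead times across the estimation-free team and a control team is what makes the 90-day comparison more than anecdote.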


Roles to Capabilities

Today's reality: your organisation has developers, testers, DevOps engineers, each in their silo, each protecting their domain expertise.


The shift: when AI can embody all these capabilities instantly, job titles become limitations. The new roles are about judgment, context and wisdom - things that emerge from experience, not training.


A Day in the Life…

A product team receives a problem: "Users are abandoning checkout because shipping costs are too high." A traditional team would assign this to a developer to build a new feature. In a capability-based pod, the team acts as one. An AI partner generates ten solutions - from a dynamic pricing model to a subscription service. The Curator (a former developer) reviews the solutions for code quality. The Guardian (a former QA tester) identifies ethical or security risks in the AI's proposal. The Philosopher (a former product manager) evaluates which solution best aligns with the company's long-term vision. They don't have titles; they have capabilities.


How to Start (Monday Morning):

  • Create a "capability marketplace" where team members offer expertise regardless of title.

  • Let developers review test strategies, testers suggest architectures.

  • Track who provides valuable input outside their traditional role.

  • Measure: Value delivered versus traditional role boundaries.


90-day experiment: run a "role-free" product team where members contribute based on capability, not title. Use AI to handle the tactical work traditionally associated with each role.

Five Monday Morning Quick Wins

Cancel one recurring meeting and replace it with an AI-generated summary.

Stop estimating bugs - just fix them with AI assistance.

Let AI write your next test suite, then review it.

Convert one process document into 5 principles.

Ask your newest hire what ceremony they find most pointless - then experiment with removing it.

The Battle for Control

Let's address what my previous writing on bad actors in organisations makes clear: transformation isn't just about technology - it's about power.


Who Loses Power

  • Traditional Architects. When AI can generate and evaluate multiple architectural approaches in minutes, the architect who hoards knowledge loses their monopoly on technical decisions.

  • Process Gatekeepers. The PMO leader whose power comes from controlling methodology adherence watches their empire crumble when process becomes principles.

  • Estimation Experts. The senior developer whose status comes from being the best at sizing stories finds their expertise irrelevant.

  • Quality Gatekeepers. Those who insist all AI-generated code is inferior, using quality concerns as a weapon to slow change rather than genuine risk management.


Who Gains Power

  • Problem Articulators. Those who can clearly express business problems in ways AI can act upon become invaluable.

  • Context Translators. People who understand both business needs and AI capabilities become the new power brokers.

  • Learning Accelerators. Leaders who can help others navigate continuous change gain influence over those protecting static expertise.


Managing the Resistance

As I've explored previously, bad actors - people who act out of self-interest in ways that damage the organisation - will exploit this transition.  Watch for:

  • The Methodology Purists who insist on "proper process" to slow AI adoption.

  • The Consultant Manipulators who sell complex AI transformation frameworks that perpetuate the very ceremonies AI makes obsolete.

  • The Insurgent Networks who use AI's mistakes to argue for returning to "proven methods".

  • The Quality Theatre Performers who cherry-pick AI failures while ignoring that AI code quality already exceeds human averages in many metrics.


Counter these by:

  • Making resistance visible through metrics.

  • Celebrating early wins publicly.

  • Creating "safe to fail" experiments that don't threaten entire power structures initially.

  • Building coalitions with Gen Z employees who have no attachment to legacy methodologies.


The Pragmatic Path Forward


Managing the Transition Risks

Evolution doesn't mean recklessness.  As you experiment:

  • Maintain parallel tracks.  Keep existing processes for critical systems while experimenting with new approaches for new initiatives.

  • Create reversibility.  Design experiments that can be rolled back without destroying team morale.

  • Measure religiously.  Track both successes and failures with equal rigour.

  • Communicate constantly. Over-communicate during transitions to maintain psychological safety.


Horizon 1 (now to 6 months): Experiments Within The System

Start where you have autonomy. Don't ask permission to innovate within your sphere:

  • Week 1-2.  Baseline Reality

    • Document actual time spent on ceremonies versus building.

    • Measure how long decisions really take versus implementation.

    • Track where AI is already being used (officially or otherwise).

  • Month 1-3.  Protected Experiments

    • Designate one team as "methodology-free" for one sprint.

    • Let them use AI to replace any ceremony they choose.

    • Measure outcomes, not process compliance.

    • Document what breaks (it's usually not what you expect).

  • Month 3-6.  Scale What Works

    • Expand successful experiments to willing early adopters.

    • Create "AI-first" alternatives to traditional ceremonies.

    • Start building your coalition of the willing.

    • Develop internal case studies with hard metrics.


Horizon 2 (7-18 months): Structural Evolution

This is where organisational antibodies kick in.  You're not experimenting anymore; you're changing structure:

  • Reshape Team Composition

    • Move from role-based teams to capability-based pods.

    • Create "Human + AI" pairs as the basic unit of delivery.

    • Establish new career paths that don't follow traditional progression.

  • Restructure Governance

    • Shift from process audits to outcome reviews.

    • Replace estimation-based planning with capacity-based flow.

    • Create new budget models that assume continuous delivery, not project batches.

  • Rebuild Performance Management

    • Stop measuring individual velocity, start measuring learning speed.

    • Reward problem identification over solution delivery.

    • Create new recognition categories for human-AI collaboration excellence.


Horizon 3 (19+ months): Full Transformation Readiness

By now, you're not adapting to change, you're driving it:

  • New Operating Model

    • Methodology becomes a continuously evolving capability, not a fixed framework.

    • Teams self-organise around problems, not projects.

    • AI orchestrates tactical work while humans provide strategy and judgment.

  • Transformed Workforce

    • Junior roles focus on learning judgment, not writing code.

    • Senior roles focus on wisdom transfer, not technical gatekeeping.

    • Middle management evolves from coordination to coaching - addressing the "Silver Tsunami" crisis I've discussed previously where an aging workforce's knowledge risks being lost without a new system for transferring it.

  • Competitive Advantage

    • Your organisation moves at AI speed while competitors still plan quarters.

    • You attract talent that wants to work with AI, not despite it.

    • You're building categories of value your methodology-bound competitors can't imagine.


Measuring the Transition: How to Know If You're Winning

Leading Indicators (Monthly)

  • Ceremony Reduction Rate.  % decrease in time spent planning versus building.

  • Prototype Velocity.  Number of working prototypes generated per week.

  • Decision Speed.  Time from problem identification to solution selection.

  • AI Amplification Factor. Output per person with AI versus without. (You will need to baseline 'without' early on to have a comparison).
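The monthly indicators above reduce to two simple ratios once a baseline exists. A sketch, with made-up sample numbers (the function names and figures are illustrative):

```python
# Hypothetical sketch of two of the leading indicators above, computed
# from simple per-period measurements. Sample numbers are illustrative.

def ceremony_reduction_rate(baseline_hrs: float, current_hrs: float) -> float:
    """% decrease in time spent planning versus the baseline period."""
    return 100 * (baseline_hrs - current_hrs) / baseline_hrs

def ai_amplification_factor(output_with_ai: float, baseline_output: float) -> float:
    """Output per person with AI versus the pre-AI baseline."""
    return output_with_ai / baseline_output

# e.g. ceremony time down from 40h to 25h per month; output up from 8 to 12
print(f"Ceremony reduction: {ceremony_reduction_rate(40, 25):.1f}%")
print(f"AI amplification: {ai_amplification_factor(12, 8):.1f}x")
```

This is also why the "baseline 'without' early on" caveat matters: both ratios are meaningless without a pre-AI measurement to divide by.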

Success Metrics (Quarterly)

  • Time to Value.  How fast does an idea become customer value?

  • Experimentation Rate.  How many approaches are tested versus debated?

  • Role Fluidity.  How often do people contribute outside traditional boundaries?

  • Learning Velocity.  How quickly do teams adopt new AI capabilities?

Warning Signs (Watch Daily)

  • Methodology Theatre.  Teams performing ceremonies that produce no decisions.

  • AI Resistance Patterns. Sudden increase in "quality concerns" about AI-generated work.

  • Political Manoeuvring.  New process requirements appearing without clear value.

  • Talent Flight.  Your best people leaving for AI-native companies.


Building a Future-Ready Workforce

The challenge isn't just organisational - it's human. As I've explored in previous writing, Gen Z arrives with AI fluency but struggles with organisational dysfunction.  Meanwhile, experienced professionals face an identity crisis as AI replicates their craft.


The Hidden Risk of the Copy-Paste Generation

There's a darker side to AI adoption we're not discussing.  Junior developers who've never written a sorting algorithm or debugged a memory leak are now shipping production code they don't understand.  They're prompt engineers, not software engineers. When the AI suggests code that's functionally correct but architecturally wrong, they can't tell the difference.


This isn't just a skills gap - it's a ticking time bomb. Every line of code that nobody understands is technical debt. Every unchallenged AI suggestion could be a security vulnerability. Every architectural decision delegated to AI without comprehension weakens the foundation of our systems.


The solution isn't to abandon AI - it's to fundamentally restructure how we develop expertise:

  • Teach developers to interrogate AI output, not just accept it.

  • Build understanding of why code works, not just that it works.

  • Create learning paths that use AI as a teaching tool, not a replacement for understanding.

  • Establish code review processes that specifically check for "AI smell" - patterns that work but shouldn't be there.


The Upward Migration of Human Value

Current AI capabilities excel at implementation but typically struggle with high-level architecture, system design and the ideation phase. This isn't a bug - it's a feature that reveals where human value migrates.


Research from software quality experts like Capers Jones has long shown that 60-80% of defects originate before coding even begins - in requirements, architecture and design phases. AI's automation of coding tasks finally frees us to address these root causes of poor software quality.


This creates a paradoxical opportunity: by automating the "bottom" of the pyramid (implementation), we can finally invest properly in the "top" (architecture and design). The 70% reduction in implementation time doesn't just mean faster delivery - it means that time can be redirected to:

  • Deep architectural thinking that prevents systemic issues.

  • Thorough requirements analysis that ensures we're building the right thing.

  • Strategic system design that considers long-term evolution.

  • Quality-focused ideation that addresses root causes, not symptoms.


The New Learning Architecture

Stop teaching this:

  • Syntax and language specifics (AI knows these).

  • Basic implementation patterns (AI generates these).

  • Standard testing approaches (AI automates these).

Start teaching this:

  • Problem decomposition and articulation.

  • Trade-off evaluation and decision-making.

  • Ethical reasoning and bias detection.

  • Human psychology and team dynamics.

  • Business strategy and market understanding.


Creating Psychological Safety in an AI World

The real challenge isn't job loss - it's meaning loss.  When AI can replicate your craft in seconds, who are you professionally?


New Sources of Professional Identity

  • The Curator: Your value isn't in creating solutions but in recognising the best ones.

  • The Guardian: You protect against AI's blindness to context and consequence.

  • The Translator: You bridge between human needs and AI capabilities.

  • The Philosopher: You ask the questions AI doesn't know to ask.


A Tale of Two Developers

Imagine two junior developers, both hired on the same day.


The Traditional Developer (Alex): Alex follows the well-trodden path. His first month is spent in onboarding, reading documentation and setting up his environment. His first few tasks are simple bugs, which he spends hours debugging and googling. When he finally submits his first feature, he waits two days for a senior developer to review it. The senior's time is consumed by low-level code reviews and debugging sessions, leaving little room for more valuable, high-level mentorship. Alex learns, but the learning curve is steep, and the time-to-value for the business is long. His early contributions are minimal and the team's most valuable members spend a significant portion of their time on tactical oversight.


The AI-Augmented Developer (Sarah): Sarah's journey is different. Her AI "partner" helps her set up her environment in a single afternoon. When she's assigned her first bug, she describes the problem to her AI partner. The AI provides a diagnosis, a potential fix and explains the underlying code. The AI handles the tactical work, allowing Sarah to submit her first feature in a fraction of the time.


Now, her senior developer is free to engage in truly valuable mentorship. Instead of spending 60% of their time on code reviews and debugging, they now focus on architecture decisions, system design and ensuring the AI-generated implementations align with long-term strategic goals. They're finally doing the high-value work they were hired for but rarely had time to do properly. They hold high-level sessions to teach Sarah:

  • Problem decomposition and articulation: How to break down a business problem, not just a coding task.

  • Trade-off evaluation and decision-making: How to weigh different architectural solutions and their long-term impact.

  • Ethical reasoning and bias detection: How to spot potential security vulnerabilities or biases in AI-generated code.

  • Human psychology and team dynamics: How to collaborate effectively and understand stakeholder needs.

  • Business strategy and market understanding: How to connect her code to the company's mission.


In this scenario, the senior developer's role is amplified. Their expertise is used to teach judgment and wisdom, not just to fix syntax errors. They guide Sarah on the "why" and "what", while the AI handles the "how". The company's lower cost isn't because Sarah is "worth less"; it's because the tools she uses radically accelerate her ability to deliver meaningful value, and the team's senior talent is now fully leveraged for strategic impact.


The Seniority Shift

Counter-intuitively, AI doesn't eliminate jobs. Organisations embracing AI find themselves needing:

  • More architects (to design systems AI will implement).

  • More strategists (to decide what to build).

  • More quality philosophers (to define what "good" means).

  • Fewer coders, but more code curators.


This mirrors historical patterns. The industrial revolution didn't eliminate jobs; it transformed them. Factory automation didn't reduce employment; it shifted it from manual labour to supervision, maintenance and design. Similarly, the Internet enabled new business models, creating opportunities for new roles to serve them.


The difference now is the need to create time and space for senior roles to act as a workforce multiplier, accelerating the capabilities of the junior workforce. As AI handles junior-level implementation tasks, organisations need more people who can:

  • Think strategically about system architecture

  • Define quality standards and principles

  • Navigate complex business requirements

  • Make judgment calls AI cannot make


This creates better employment outcomes for those in organisations that embrace the change - more intellectually stimulating work, higher-value contributions and roles that leverage uniquely human capabilities rather than competing with machines on repetitive tasks.


The Uncomfortable Truth: Not Everyone Makes It

Here's what nobody wants to discuss. Some organisations simply can't make this transition. Their DNA, built over decades of process optimisation, is incompatible with continuous evolution.


Signs your organisation might not survive:

  • Leadership still debates whether AI is "ready for enterprise."

  • Your PMO is adding AI governance layers to existing processes.

  • You're trying to fit AI into SAFe or scaling frameworks.

  • Your transformation plan is a three-year waterfall project.

  • Middle management sees AI as a threat to control rather than an amplifier of capability.


If this is you, the choice isn't whether to transform - it's whether to transform or prepare for acquisition by someone who did.


The Evolution, Not Revolution

This isn't about burning down everything we've built. Just as Agile methodologies and frameworks evolved from Waterfall when the constraints of software development changed, now AI is changing the constraints again. The principles of agile - individuals over processes, working software over documentation - remain valid. However, the practices built around human limitations (sprints, story points, ceremonies) must evolve for a world where those limitations no longer apply.


The organisations that win won't be those that abandon all methodology, but those that let their methodologies evolve as fast as their capabilities.

Letter from 2027: What Sarah Wishes She'd Known…

"To my 2025 self,

Stop debating whether AI will change everything. It already is. While you're in meetings discussing 'AI readiness,' start-ups are building your replacement with three people and Claude.


The hardest part isn't learning new tools - it's unlearning old assumptions. Every methodology you've mastered is about to become irrelevant. This isn't threatening; it's liberating. You'll spend less time in ceremonies and more time creating value.


Start small. Pick one process that everyone hates and let AI replace it entirely. When it works (and it will), you'll have your coalition for bigger changes.


The people who resist aren't evil - they're scared. Their entire professional identity is tied to expertise that's being commoditised. Help them find new identity in what AI can't do: care about outcomes, understand context, make ethical choices.


Most importantly, the future isn't about humans versus AI or even humans with AI. It's about becoming something new - augmented creators who operate at the speed of thought rather than the speed of typing.


- Sarah, writing from a world where sprint planning is a museum exhibit"


The Choice Before Us

The end of sprint planning - when it comes - is not the end of structure. It's the end of static structure. It's not the end of human development - it's the end of humans doing what machines do better.


As I've explored throughout my writing on momentum, bad actors and organisational dynamics, the challenge isn't technical, it's human. We must overcome not just the inertia of existing processes but the active resistance of those whose power depends on maintaining them.


The organisations that thrive won't be those with the best AI tools - everyone will have those. They'll be those brave enough to let go of the methodologies we built for a world that no longer exists, while building new capabilities for a world that's just being born.


Your competitors are already experimenting. Your customers are already expecting AI-speed delivery. Your best talent is already using AI, with or without your permission.


The day after the last sprint planning isn't a distant future - it may be closer than we think. The question isn't whether this future will arrive, but whether we'll be ready when it does. The time to start preparing isn't tomorrow, it's today. It starts with one simple question: what ceremony will you cancel this week? Or perhaps a better question: what would happen if you gave one team permission to work however they want for one sprint, as long as they deliver value?


The answer might surprise you. It might scare you. Most importantly, it might save you.

The AI evolution in software development isn't about better tools or faster delivery. It's about reimagining our entire organisational operating system for a world where human limitations no longer define how work gets done. The winners won't be those who adopted AI first; they'll be those who abandoned obsolete methodologies fastest.






 
 
 
