The AI Strategy Conversation Nobody's Having
I'm still relatively new to thinking about AI strategy and the future of organisations. As is everyone, to be honest. Even CTOs with decades under their belts are trying to figure out what this looks like. We're all peering into an entirely uncertain future, making educated guesses about technologies that are evolving faster than anyone can keep up with them.
But working on AI and data strategies has revealed something that keeps me awake at night. The conversation is almost entirely about incremental improvement. How do we get an AI boost? How do we become a little bit better, faster, smarter, more efficient? How can this technology accelerate and amplify what we're already trying to accomplish in the business?
This focus makes sense. It's tangible, measurable, and will deliver massive impact in the short to medium term. These aren't bad strategies - they're necessary ones - because organisations need wins, boards need to see ROI, and people need to feel like they're moving forward rather than standing still while the world changes around them.
But there's an existential reality hanging over everyone's heads that nobody seems ready to address. What if everything changes? What if AI doesn't just make current processes more efficient but makes them completely obsolete? There's no space in most organisations for people to think about the gigantic reimagination that could be on the horizon - the radical restructuring of how we do business, manage people, think about resources, products, services, systems, data architecture - the whole fucking shebang. Nobody's ready to have that conversation or to think about laying critical groundwork for transformation that might be coming. Nobody's building in the adaptability now that prevents us from cementing ourselves in an iterative direction only to be completely screwed in a decade or two.
This creates a dangerous strategic blind spot. While you're optimising current workflows, someone else might be inventing entirely new business models. The real threat isn't AI disruption - it's thinking about AI too narrowly while competitors think radically. It's becoming very good at approaches that become irrelevant while you're still perfecting them.
Every AI strategy I review follows the same pattern: identify current inefficiencies, apply AI to optimise them, measure the improvement. The focus is on automation, enhanced decision-making speed, better data analysis, cross-functional insights that break down silos. Teams get tasked with finding ways to make existing processes 15% faster or 20% more accurate. Success gets measured in time saved, costs reduced, errors eliminated.
These approaches deliver measurable ROI within 12-24 months and satisfy board expectations. They make immediate business sense and generate the kind of success stories that justify further AI investment. They also create a seductive cognitive trap. Early wins from incremental AI make radical experimentation seem unnecessary and risky. Why explore uncertain territory when you're getting consistent returns from optimising known processes? Why allocate resources to speculative approaches when you have proven methods for improvement?
This develops into a resource allocation problem that most organisations don't even realise they have. Incremental improvements consume all available AI budget and leadership attention. Innovation teams, if they exist, get tasked with finding more efficiency gains rather than exploring fundamental alternatives. The entire conversation stays within the boundaries of current business models and structures.
The competitive vulnerability this creates is real, and it's more dangerous than most leaders understand. For the first time, AI has unlocked possibilities that weren't technically feasible before. This isn't classic market disruption or Clayton Christensen innovation theory - this is different. While you're getting 15% better at existing approaches, new entrants might be inventing approaches that are 10x more effective. While you're using AI to make traditional customer service faster, someone else might be using AI to eliminate the need for customer service entirely. While you're optimising supply chains, someone else might be creating supply chains that work on completely different principles.
Market timing amplifies this risk. The organisations that will dominate AI-transformed industries won't be those that perfected pre-AI approaches with a bit of AI assistance. They'll be those that reimagined what was possible when cognitive work became essentially free and infinitely scalable. By the time these radical approaches prove themselves in the market, the window for competitive response has closed.
But here's what really frustrates me about this situation: we're not talking about some distant science fiction future. We're talking about scenarios that could unfold in the next five to ten years, and we have the tools right now to explore them. While most businesses are focused on AI-enhanced versions of current operations, the more interesting question is: what becomes possible when we stop assuming current organisational and business model constraints?
What if reporting hierarchies become obsolete when AI can coordinate complex projects across flat networks more effectively than management layers? Most organisations are using AI to make managers better at managing, but what if the need for traditional management disappears when AI provides perfect information flow and coordination? What if departmental silos dissolve when AI can synthesise insights and coordinate actions across all functions simultaneously? Instead of using AI to improve cross-departmental communication, what if departments themselves become an artifact of pre-AI organisational design?
What if annual planning cycles become continuous real-time strategy adaptation based on market signals and competitive intelligence? Most organisations are using AI to make annual planning more data-driven, but what if the entire concept of periodic strategy updates becomes obsolete when AI can maintain complex models of interconnected possibilities and adjust strategy continuously based on emerging conditions?
What if we stop hiring for task-based skills and start hiring purely for judgment and creativity? When AI can handle most technical execution, the value proposition of human workers shifts entirely to areas where human cognition remains superior. What if performance management becomes real-time optimisation based on continuous AI feedback rather than periodic reviews? The annual or quarterly performance review might be a relic of information scarcity that no longer applies when AI can track performance and provide feedback continuously.
What if customer service becomes proactive problem-solving before customers know they have problems? Instead of using AI to respond to customer inquiries faster, what if you could identify and resolve issues before they impact customers? What if product development becomes real-time customisation based on individual usage patterns rather than periodic product launches? The traditional product development cycle might be too slow when AI enables continuous adaptation to user needs.
What if fixed costs become variable when AI handles most operational complexity? Many business models are built around the assumption that certain capabilities require significant fixed investment, but AI might make these capabilities available on demand. What if economies of scale reverse when AI enables mass customisation at individual scale? The competitive advantages of size might diminish when AI can deliver personalised solutions as efficiently as standardised ones.
These aren't abstract thought experiments. These are questions about scenarios that AI is already making technically possible, and some organisations are starting to explore them while others are still focused on making Excel macros faster.
The solution isn't to abandon incremental AI strategies - that would be strategically stupid. It's to run two parallel approaches with different time horizons and success criteria. Call it a two-track strategy if you want a consulting term, but what it really amounts to is hedging your bets against your own obsolescence.
Seventy percent of your AI resources should go toward incremental optimisation. This is the safe bet that keeps the lights on, delivers measurable improvements, and keeps boards happy. Use AI to automate routine processes, enhance decision-making, and improve customer experience. Focus on clear ROI with timelines of one to three years and predictable outcomes. Measure success in efficiency gains, cost savings, and revenue increases.
Thirty percent should go toward radical experimentation. This is the future bet that prevents obsolescence. Experiment with fundamental changes to how your business operates that are only possible with AI. Test completely new approaches to core functions. Accept timelines of three to ten years with uncertain but potentially transformational outcomes. Measure success in learning, optionality creation, and competitive positioning.
Let me be absolutely clear about something: the iterative work isn't optional fluff. You need to get your shit together on the basics. You need proper data structures, data tagging, governance frameworks that actually work. You need to figure out collaborative workflows, upskill your people so they can work effectively with AI tools, and build the data analysis capabilities that make any of this possible. You need to think seriously about which AI partners you want to work with and what use cases actually make sense for your business.
You have to walk before you can run, and most businesses are barely crawling when it comes to basic AI implementation. This foundational work is absolutely critical to get right.
But here's what's driving me insane: it can't be everything. Right now, the entire conversation is "Shit, we need to get on the AI train. Let's do some pilots. Let's sign a big fuck-off deal with OpenAI and give all our future sovereignty away to Mr. Altman." And that's kind of it. That's the extent of the strategic thinking.
While you're building those foundations - and you absolutely should be - you also need to create space for radical thinking about what happens if everything goes completely tits up tomorrow. Do you have a plan? Are you building in parallel the foundations for a much more adaptive and resilient organisation that could handle scenarios you haven't even imagined yet?
The key is keeping these tracks separate with different mandates and success criteria. The radical track needs protection from being judged by incremental metrics, while the incremental track needs protection from being diverted into speculative territory.
But here's where most organisations fuck this up: they try to solve this by creating innovation teams and giving them the radical track to figure out. This doesn't work. Never has worked. We pretend it does. We do innovation workshops and have innovation days and hackathons and all that bullshit, but nothing touches the sides. Innovation teams get trapped in workshop hell and hackathon theatre because they're disconnected from real business operations and lack the authority to implement transformational changes.
The solution is much more practical than that. Use the AI tools you're already embedding to model extreme scenarios and work backwards to actionable steps. This is a governance and leadership issue, not an innovation process issue.
Use the same AI systems you're deploying for incremental improvements to explore radical futures. Feed them economic, technological, geopolitical, and cultural inputs relevant to your industry. Generate thousands of scenario variations to understand how different forces might reshape your competitive landscape. This isn't traditional trend forecasting or strategic planning. You can create infinite scenarios and iterations, pulling fictional levers to see what might happen in reality. Think digital twin simulations of your organisation operating under completely different assumptions about markets, technology capabilities, and customer behaviour.
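To make "thousands of scenario variations" concrete, here's a minimal sketch in plain Python. Every driver, state, and name in it is an illustrative placeholder rather than a forecast - the point is the combinatorics: four drivers with three states each already yield 81 distinct futures, and a few more drivers puts you into the thousands.

```python
import itertools
import random

# Illustrative scenario drivers. The dimensions and states are placeholders -
# swap in the economic, technological, geopolitical, and cultural inputs
# that actually matter for your industry.
DRIVERS = {
    "cognitive_work_cost": ["near_zero", "cheap", "moderate"],
    "regulation": ["permissive", "fragmented", "restrictive"],
    "customer_expectation": ["ai_native", "hybrid", "human_first"],
    "capital_access": ["abundant", "selective", "scarce"],
}

def enumerate_scenarios(drivers):
    """Yield every combination of driver states as a dict."""
    keys = list(drivers)
    for combo in itertools.product(*(drivers[k] for k in keys)):
        yield dict(zip(keys, combo))

def sample_scenarios(drivers, n, seed=42):
    """Monte Carlo sampling for when the full product is too large to enumerate."""
    rng = random.Random(seed)
    return [{k: rng.choice(states) for k, states in drivers.items()} for _ in range(n)]

if __name__ == "__main__":
    scenarios = list(enumerate_scenarios(DRIVERS))
    print(f"{len(scenarios)} distinct scenarios from {len(DRIVERS)} drivers")
    # Pull a fictional lever: fix one driver at its extreme and inspect the rest.
    extreme = [s for s in scenarios if s["cognitive_work_cost"] == "near_zero"]
    print(f"{len(extreme)} scenarios where cognitive work is essentially free")
```

In practice you'd hand each combination to an LLM to expand into a narrative and then interrogate it against your business model; the enumeration is just what keeps the exercise systematic rather than anecdotal.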
Once you've modelled extreme scenarios where AI has fundamentally changed your industry, work backwards to identify the baby steps you need to take today. What organisational capabilities would you need in those futures? What business model innovations would be required? What talent and technology infrastructure would enable those scenarios?
This requires senior leadership engagement, not delegation to innovation teams. The same executives making incremental AI investment decisions need to be running scenario models that challenge their fundamental assumptions about the business. Use AI-powered war gaming to stress-test your current strategy against various future scenarios. What happens to your business model if cognitive work becomes essentially free? What happens to your competitive advantages if AI democratises capabilities that currently require significant expertise or capital?
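As a deliberately crude illustration of that war gaming - every number below is an invented assumption, not data from any real company - here's a toy stress test. It asks the first question directly: if AI collapses the cost of cognitive work for you, and for competitors who then compete your prices down, what happens to your margin?

```python
from dataclasses import dataclass

@dataclass
class BusinessModel:
    revenue: float                 # annual revenue, indexed to 100
    cognitive_labour_cost: float   # cost of work AI could plausibly absorb
    other_costs: float             # everything else

    def margin(self) -> float:
        return (self.revenue - self.cognitive_labour_cost - self.other_costs) / self.revenue

def stress_test(base, labour_multipliers, revenue_multipliers):
    """War-game margin across scenarios where AI cuts cognitive labour costs
    (labour_multipliers) while competition erodes pricing (revenue_multipliers)."""
    for lm in labour_multipliers:
        for rm in revenue_multipliers:
            scenario = BusinessModel(
                revenue=base.revenue * rm,
                cognitive_labour_cost=base.cognitive_labour_cost * lm,
                other_costs=base.other_costs,
            )
            yield lm, rm, scenario.margin()

if __name__ == "__main__":
    today = BusinessModel(revenue=100.0, cognitive_labour_cost=40.0, other_costs=45.0)
    for lm, rm, m in stress_test(today, [1.0, 0.5, 0.1], [1.0, 0.8, 0.6]):
        print(f"labour x{lm:.1f}, revenue x{rm:.1f} -> margin {m:+.0%}")
```

The instructive row is labour x1.0, revenue x0.6: the organisation that keeps its cost structure while the market reprices around it goes from a 15% margin to roughly -42%. A real model would carry far more variables, but even this toy version surfaces the asymmetry the incremental track never tests.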
Start with your existing AI deployments. Instead of only using them to optimise current processes, also use them to model what happens if those processes become obsolete. Use the data analysis capabilities you're building to explore scenarios where your entire industry structure changes.
Make radical scenario modelling a standard part of strategic planning, not a separate innovation process. Every major AI implementation should include both incremental optimisation objectives and radical scenario exploration. This can be implemented today, and it should be. You don't need permission from innovation teams or separate innovation budgets. You need leadership commitment to using AI tools for transformational thinking, not just operational efficiency.
Because the worst thing you can do is give people chatbots and faster PowerPoint generation through Microsoft Copilot and call it transformation. That's not going to prepare you for anything.
The window for this kind of radical experimentation is narrowing as AI capabilities accelerate and competitive landscapes shift. Organisations that wait for proof of concept elsewhere will be following, not leading. The cost of running two tracks is significantly lower than the cost of strategic annihilation. The incremental track funds itself through immediate improvements while the radical track provides insurance against future displacement.
The organisations that survive this transition won’t be the ones squeezing out a little more efficiency. They’ll be the ones modelling futures their competitors can’t even imagine - and building toward them now. Because if your AI strategy is only about efficiency, you don’t actually have a strategy. You have a countdown clock. The companies that matter in the next decade will be those bold enough to bet on futures nobody else believes in yet. Everyone else will still be polishing KPIs when the floor drops out from under them.



To be honest, I was exhausted halfway through reading this.
While I agree massively with most of your points in the piece, I find that it leans heavily on a future hyped by the biggest AI developers and the large consultancies. I think AI development has, for now at least, plateaued. Just look at OpenAI's disappointing GPT-5 launch. We also have to look at the very real threat of model collapse. At least where LLMs are concerned, the models are well on their way to eating their own tails if they don't get fresh human-generated material to train on. The next year will be interesting - but I think this might be the break we all need to think and build new strategies.
And yes, there will always be a chance of business and sector disruption, especially from fresh thinkers wielding new technology, and yes, you should always at least try to imagine what those kinds of alternative innovation paths look like. Especially from more nimble upstarts at the edges. I would love to see radical experimentation built into organisations, and not just in relation to AI. This is also why foresight should always be part of strategy. Sadly, the field has been so watered down that it has become a joke. Every futurist regurgitating the same 'models' and truisms is exactly why the reports are used as bookends and not in strategy or innovation... but that is a different story. I, too, am weary of the tired trope of AI efficiency being the holy grail in strategy for its implementation into organisations. Your urgent call for building the right foundations is spot on, and I couldn't agree more with your points about what (human) skills are needed in tomorrow's organisations. But since I am of the school of 'less but better', I just don't fit into this corporate mindset of efficiency and growth above everything else anyway, so I'll see myself out. I'll be out there looking for the people trying to break the mould (and I'll probably be unemployed - especially since I am one of those pesky futurists ^^). Thank you for your always inspiring thinking and writing.
Hi Zoe, I've had an intellect-crush on your thinking for years (I think I may have told you!) and this piece is exactly why. You manage to cut through the noise with such clarity and courage, naming the blind spots most leaders would rather avoid.
I love how you’ve framed efficiency as a trap, not a strategy. That distinction alone is going to save some people a lot of wasted cycles.
Grateful you keep putting work like this into the world. It’s sharp, generous, and urgently needed.
CB