A New Creative Intelligence
Reconstructing Strategic Work in the AI Era
I’ve been exploring AI in my creative practice for a few years now. When I first wrote about strategy in the era of AI, I was very much trying to plug AI into my existing practice. To see where it could help me along the established processes and ways of working I’d constructed over the course of my career. Could it speed up research? Sure. Help with shitty first drafts? Totally. Generate options? Absolutely.
But something felt off. I was treating AI like a better intern, a faster search engine, a more articulate brainstorming partner, something I could spar with on occasion. I was asking where it could fit into how I already work, like trying to wedge a piece from a different puzzle into the one I’d been building for years.
But as I’ve gotten deeper into this world and into reimagining my own practice, I’ve realised I had it completely backward. This is not about wedging AI into an established practice. It’s about reconstructing that practice completely. It’s about creating a brand new form of creative intelligence.
The breakthrough wasn’t in the tools. It was in realising I was asking the wrong question. Not “How can AI help me do what I do?” but “What becomes possible if I rebuild everything from scratch?”
That changes everything. But it also demands we get honest about what we’re actually working with.
AI is brilliant at pattern completion and absolutely terrible at pattern breaking. It predicts the next most likely thing, which is precisely what creativity is not. This is not a limitation you can prompt your way around. It’s baked into the architecture.
Pip Bingemann at Springboards wrote a piece earlier this year that makes this painfully clear. He ran experiments asking popular AI models the same question a hundred times over: “Pick a random number between 1 and 10.” GPT-4o answered “7” ninety-two times. Claude answered “7” ninety times. Gemini answered “7” a perfect one hundred out of one hundred times. When they asked for a random word, “quokka” appeared a hundred and fifty-five times. And when they specifically asked for “a completely random word that I wouldn’t be able to predict,” the results got even more repetitive. Quokka jumped to three hundred and fifty-five appearances.
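If you want to see this for yourself, the experiment is easy to replicate. Here’s a minimal sketch, assuming the official `openai` Python client with an API key in your environment; the model name and trial count are illustrative, not what Springboards used.

```python
# Minimal sketch of replicating the randomness experiment. Assumes the
# official `openai` Python client with an API key in OPENAI_API_KEY;
# the model name and trial count are illustrative.
from collections import Counter

from openai import OpenAI

client = OpenAI()

def tally_answers(prompt: str, model: str = "gpt-4o", trials: int = 100) -> Counter:
    """Ask the same question `trials` times and count the answers."""
    answers = Counter()
    for _ in range(trials):
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        answers[response.choices[0].message.content.strip()] += 1
    return answers

print(tally_answers("Pick a random number between 1 and 10. Answer with the number only."))
# A sampler drawing uniformly would spread across ten answers.
# A next-token predictor collapses onto one: famously, "7".
```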
This isn’t a bug. It’s what LLMs are designed to do. They’re trained to predict the most likely next word based on everything that came before. Which means when hundreds of thousands of people ask ChatGPT for creative ideas, they’re all getting variations of the same answer. The world is slowly turning beige, and most people can’t see it happening because from a single user’s vantage point, seven followed by three seems perfectly random.
That initial work grew into a larger study with Springboards benchmarking the creative capability of different LLMs specifically for advertising work. I was part of that research. The findings confirmed what Pip’s experiments suggested: when LLMs generate creative output, they consistently prefer logical structure and coherence over emotional depth or exploratory thinking. They’re deterministic, rational, and overconfident. They struggle with divergent thinking.
So with all of this in mind, let’s be clear about what we’re working with. AI is magnificent at speed, doing in seconds what takes us hours. It excels at scale, processing volumes we could never match. It dominates retrieval, finding needles in infinite haystacks. And it can simulate scenarios we’d never have time to explore manually. But it fails spectacularly at genuine novelty. It recombines endlessly but never invents. It sees adjacent possibilities but misses orthogonal leaps entirely. It lacks cultural intuition because it lacks embodied experience. And it cannot make productive mistakes, the kind of intentional wrongness that cracks open something new.
Most people, when they hit this wall, try to fix the AI. They workshop better prompts. They adjust temperature settings. They chain multiple models together hoping the combination will somehow unlock originality. They’re still trying to make AI do the thing it fundamentally cannot do.
That’s the wrong fight.
You cannot prompt your way to genuine creativity (at least not yet). You cannot engineer randomness into a system built for prediction. And you definitely cannot get pattern-breaking thinking from a machine designed to complete the patterns. The architecture won’t allow it. It’s like trying to get a calculator to write poetry by giving it better instructions.
So what if we stop trying to get AI to be creative? What if we use it to build better environments for our own creativity instead?
AI can’t invent, but it can map every corner of what’s already been invented so you can finally see the gaps. It can’t break patterns, but it can surface your unconscious patterns faster than years of self-reflection. It can’t make creative leaps, but it can stress-test your leaps from a dozen perspectives simultaneously. It can’t generate genuine insight, but it can hold enough complexity that you can.
What follows are the experiments I’ve been running. The hacks I’ve been testing. The ways I’ve been rebuilding my own practice from the ground up. Not to force-fit AI into my ‘business as usual’, but to create an entirely new playground. These aren’t best practices because there aren’t any yet. They’re test cases. Provocations. My attempts to construct a completely different form of creative intelligence.
Using AI to Change How We Think, Not What We Think (perspective shifts)
The first move is to stop asking AI to make creative work and to start using it to reimagine how we create. That difference is everything.
We’re living in a world of ideological islands. Small oceans we all swim in without realising the water has a temperature, a pH, a particular composition. My ideology. Your ideology. The ideology embedded in every brief, every framework, every piece of strategic thinking. And if we have any chance of creating work that bridges rather than deepens these divides, we need tools that help us see the water we’re swimming in.
This is where adversarial red-teaming becomes essential rather than just useful. I’ve started setting up what I’m calling “perspective prisons.” I lock an AI into a specific worldview - a complete ideological frame - and ask it to attack my work from inside that constraint. Not a caricature. A genuinely coherent alternative way of seeing the world.
A burned-out middle manager who thinks transformation initiatives are performance theatre for executives. A small business owner who can’t afford to experiment and sees innovation talk as privilege. A traditionalist who thinks constant change is pathological. A radical who thinks incremental reform is complicity.
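Mechanically, this is simple: the worldview lives in the system prompt and the model is never allowed to break frame. A minimal sketch, assuming an OpenAI-style client; the persona text is illustrative.

```python
# Minimal sketch of a "perspective prison": the worldview lives in the
# system prompt and the model argues only from inside it. Persona text
# and model name are illustrative.
from openai import OpenAI

client = OpenAI()

TRADITIONALIST = (
    "You are a traditionalist who believes constant organisational change "
    "is pathological. The most enduring institutions defend a core identity "
    "against environmental pressure. You are coherent and serious, not a "
    "caricature. Argue only from inside this worldview. Never break frame."
)

def red_team(draft: str, persona: str, model: str = "gpt-4o") -> str:
    """Return the strongest critique of `draft` from inside `persona`."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": f"Attack this argument:\n\n{draft}"},
        ],
    )
    return response.choices[0].message.content
```

The same scaffolding carries the temporal work that follows: swap the persona for a 1950s factory owner or a strategist writing from 2065, and the frame shifts across time instead of ideology.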
I ran a piece about organisational evolution through the traditionalist lens and it came back with something I didn’t expect: “You assume organisations should adapt to their environment. But the most enduring institutions in human history succeeded by refusing to adapt - by defending a core identity against environmental pressure. The Catholic Church. Oxford. The Japanese Imperial House. What you call rigidity, they call integrity.”
I didn’t agree. But I couldn’t dismiss it. And suddenly I could see that my entire argument assumed a particular relationship between organisations and their environments that wasn’t universal; it was, in many ways, ideological. The work that came out the other side didn’t abandon the evolution frame, but it knew exactly where it stood and why. It could name its own ideology instead of pretending to be neutral.
This isn’t about being balanced or hearing all sides. Balance is fine when you need it, but strong ideas need conviction and focus, not consensus. This is about something else entirely: making visible the assumptions you can’t see because you’re inside them. AI can hold a coherent worldview that isn’t yours and argue from it convincingly. That’s not a party trick. In a fractured world, it might be the only way to create work that doesn’t just speak to your own island.
Temporal perspective takes the same principle across time rather than viewpoint. I ask AI to argue my thinking from radically different time periods. Not for novelty. Because contemporary thinking has blind spots that only become visible when you step outside the moment.
I was writing about workplace culture and asked AI to argue it from a 1950s factory owner’s perspective. The response was a tad uncomfortable: “You treat employee satisfaction as a business objective. We treated it as a personal concern that had no place at work. People weren’t meant to be happy at work. They were meant to be competent and compensated. The idea that an employer is responsible for an employee’s emotional state would have been seen as patronising and invasive. You’ve created a world where the boundary between work and self has dissolved, then congratulated yourself for caring about wellbeing.”
This irked me, but there was a very real point to be made here. The entire discourse around workplace culture assumes that dissolving the work-self boundary is progress. What if it’s not? What if it’s just a different kind of control? The piece that came out of that wasn’t anti-wellbeing, but it stopped treating the collapse of boundaries as unquestionably good.
I tried this in the other direction too. I asked AI to argue current organisational thinking from a 2065 perspective. It came back with: “You’re still designing organisations around the assumption that humans should adapt to work rather than work adapting to humans. By our time, the idea that people should spend decades in organisations to earn the right to security seems barbaric. You knew the system was broken but kept trying to make it more humane instead of replacing it.”
Neither perspective is correct, necessarily. But both reveal what we’re taking for granted right now. The 1950s view exposes how much we’ve normalised the invasion of work into identity. The 2065 view exposes how much we’ve accepted that broken systems just need better management rather than replacement. Seeing both means I can write from 2025 with my eyes open instead of just channelling the assumptions of the moment.
The same principle works at the organisational level. I’ve been using AI to map the invisible structure of collective thinking. I was working to converge teams around reimagining the role of HR in an AI-enabled organisation. There were literally hundreds of ideas, POVs, and experiments happening across different geographies and functions. Everyone was moving, but nobody could see the whole picture.
I synthesised the patterns myself first - reading everything, abstracting the core arguments, mapping who was saying what - then used AI to help me see connections I was missing. Not feeding it raw documents, but asking it to analyse the landscape I’d already distilled. Where ideas clustered. Where gaps existed. Where teams were unknowingly working on the same problem with different language.
What it surfaced was seriously useful. Three different regional teams had independently built nearly identical frameworks for how HR should evaluate AI skills, each convinced they’d discovered something unique. Two teams were running experiments that would produce contradictory results because they were using different success metrics for the same outcome. And there was a massive gap: everyone was focused on training and compliance, almost nobody was thinking about how HR’s fundamental role changes when AI handles most administrative work.
That map changed the entire conversation. Instead of more alignment meetings where everyone presented their work, we could focus on the actual tensions and gaps. The AI didn’t create the strategy. But it helped me make the invisible structure of collective organisational thinking visible. We could see where we actually disagreed versus where we were just using different words for the same idea. We could identify genuine white space versus redundant effort.
None of these practices ask AI to be creative. They ask it to reveal invisible structures - in individual thinking, across time periods, and within organisations. To make visible what you can’t see because you’re swimming in it.
Interrogating Our Own Creative DNA (seeing your own patterns)
The previous practices help you see your blind spots in the moment. But there’s a different kind of invisibility: the patterns in your own work over years. How your thinking has evolved. What you’ve abandoned and why. The structural signatures you’ve developed without realising it.
I’ve trained a personal creative ‘memory keeper’, based on years of my own thinking. Not to generate new ideas, but to show me my own patterns. Essays, frameworks, abandoned arguments, voice notes about systems and organisations and culture. It’s an external memory that actually works like memory does - by connection, not retrieval.
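Under the hood, this kind of memory is mostly embeddings: encode everything you’ve written, then retrieve by semantic proximity rather than keyword match. A minimal sketch, assuming the sentence-transformers library; the model choice is illustrative and the archive entries are stand-ins for years of real material.

```python
# Minimal sketch of a personal "memory keeper": embed years of your own
# writing and retrieve by semantic connection rather than keyword match.
# Assumes the sentence-transformers library; the model choice is
# illustrative and the archive entries stand in for real essays and notes.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

archive = [
    "Distribution control is becoming less valuable than audience relationship.",
    "Direct-to-fan models bypass traditional intermediaries entirely.",
    "Biometric data may become as valuable as broadcast rights.",
]
archive_vecs = encoder.encode(archive, normalize_embeddings=True)

def recall(question: str, k: int = 3) -> list[str]:
    """Surface the k pieces of past thinking most connected to the question."""
    q = encoder.encode([question], normalize_embeddings=True)[0]
    scores = archive_vecs @ q  # cosine similarity, since vectors are normalised
    return [archive[i] for i in np.argsort(scores)[::-1][:k]]

print(recall("What have I written about content ownership and value creation?"))
```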
Last month I was stuck on how to think about new economic models for sport. The IP landscape is fracturing, tech is enabling entirely new distribution models, and the traditional league-broadcaster-sponsor triangle feels increasingly obsolete. I asked it: “What have I written about the relationship between content ownership and value creation?” It surfaced three different POVs I’d built over five years. One from 2019 about “distribution control becoming less valuable than audience relationship.” One from 2021 about “direct-to-fan models bypassing traditional intermediaries entirely.” One from 2024 about “biometric IP as a new copyright and commercial frontier - where the data generated by athletes becomes as valuable as broadcast rights.” Seeing them together showed me something I couldn’t have seen otherwise: my thinking has been moving from questioning who controls distribution to reimagining what constitutes intellectual property in the first place. That shift matters because it reveals a significant move from “how do we fix the current system?” to “what replaces it?”
This isn’t about outsourcing memory. It’s about making memory useful. I can see how my thinking has evolved. I can find connections I made years ago and forgot. I can test whether I’m repeating myself or actually building on what I’ve done before, which helps me to evolve and expand my perspective.
We all have idea graveyards. Ideas or arguments or seeds that never quite made it. I used to keep mine in a Google Doc - half-written pieces, frameworks I couldn’t finish, arguments that felt interesting but maybe too weak. Now I’ve made it genuinely powerful by building a custom GPT trained on all of it.
I’ve been asking it to show me what I threw away and why. Last year it surfaced a half-written piece from 2020 about organisations as attention economies. I’d abandoned it because it felt too abstract, too disconnected from practical application. Looking at it now, it’s exactly the framework I need for understanding how AI is reshaping organisational decision-making. The attention layer is the only layer that matters when machines handle everything else.
I wasn’t wrong to abandon it in 2020; it was too early to make much sense. But I was wrong to forget it existed. This approach doesn’t just store rejected ideas, it helps you understand why you rejected them and whether those reasons still hold. Sometimes you abandon things because they’re bad. Sometimes you abandon them because you weren’t ready for them yet.
Counter-research protocols mean before I commit to anything, I ask AI to destroy it. Not to poke holes, but to genuinely construct the strongest possible case for why I’m wrong. Find the best evidence against my argument. Steel-man the opposing view with the same rigour I’d use to defend my own position.
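The protocol itself is just a standing prompt I run before anything ships. A minimal sketch, assuming the same OpenAI-style client as earlier; the prompt wording is illustrative.

```python
# Minimal sketch of a counter-research pass: before committing to an
# argument, ask for the strongest case against it. The prompt wording
# is illustrative.
from openai import OpenAI

client = OpenAI()

COUNTER_PROMPT = (
    "Construct the strongest possible case for why the following argument "
    "is wrong. Do not poke holes or list caveats: steel-man the opposing "
    "view with full rigour, and cite the best historical precedents and "
    "evidence against it that you know of.\n\nArgument:\n{argument}"
)

def counter_research(argument: str, model: str = "gpt-4o") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": COUNTER_PROMPT.format(argument=argument)}],
    )
    return response.choices[0].message.content
```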
I’ve killed three major pieces of thinking this year before they went anywhere because the counter-research was too strong. One was about athlete ownership models in professional sports. I thought I’d identified a genuinely novel structure that could shift power from leagues to players. The AI came back with six historical precedents where nearly identical models had been attempted and failed. Not because they were bad ideas in theory, but because the economic incentives made them structurally unstable. Players would defect to traditional leagues the moment money got tight. I’d missed the incentive design entirely.
Better to discover that privately than have someone else point it out after I’ve published and defended it. This is intellectual honesty as creative discipline. Most people use AI to confirm what they already think. I’m using it to test whether I should think it at all.
Finding What Doesn’t Exist Yet (mapping territory)
This is where it gets genuinely strange. Using AI not to create, but to map the territory of what’s been done so completely that you can finally see what hasn’t.
Idea archaeology means treating fields like excavation sites. I was thinking about polyfuturism and asked AI to map every approach to discussing “the future” over the last forty years. Singular. Deterministic. Inevitable. Disruptive. Transformative. The AI returned hundreds of examples of each framing. Then I asked it to show me approaches that existed in academic futures studies but disappeared from mainstream discourse. “Plural futures” was central to academic futurology since the 1960s and basically vanished from popular tech discourse by the late 1990s.
Why? Because the cultural context changed. Academic futures studies emphasised multiple possible futures shaped by different choices. Tech culture wanted a singular inevitable future they were building. Plurality implies uncertainty and agency. Singularity is deterministic and fundable. That disappeared framework led me down rabbitholes I wouldn’t have found otherwise. Indigenous futures. Solarpunk. Afrofuturism. Asian futurisms. Queer temporalities. Entire traditions of imagining futures that exist outside the Silicon Valley monoculture but get almost no mainstream attention. Not retro. Not nostalgia. But genuinely unoccupied territory in popular discourse that had been abandoned because it didn’t fit the venture capital narrative. The multiplicity was always there. Mainstream culture just stopped looking for it.
White space cartography works differently. I fed an AI hundreds of pieces of content about sports innovation from the last decade. Articles, reports, conference talks, investment memos. Then asked it to map where the actual innovation was happening. What came back surprised me. The AI identified women’s sports as the most concentrated area of structural experimentation across the entire sports landscape - new formats, new economic models, community-centered propositions, alternative fandom structures - but almost none of the mainstream “future of sports” discourse was paying attention. Women’s sports were quietly becoming the R&D department for all of sport, testing what actually works when you can’t rely on legacy infrastructure. The white space wasn’t that nobody was talking about women’s sports. It’s that nobody had named what was actually happening there. That’s not a gap you can spot by reading widely. You need to see everything at once to notice the pattern everyone’s missing.
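Mechanically, this is corpus work: embed everything, cluster it, and read the map. Crowded clusters are saturated conversations; thin or absent ones are candidate white space. A minimal sketch, assuming sentence-transformers and scikit-learn; the file paths and cluster count are illustrative.

```python
# Minimal sketch of "white space cartography": embed a corpus about a
# field, cluster it, and read the cluster sizes. Crowded clusters are
# saturated conversations; thin or absent ones are candidate white space.
# Assumes sentence-transformers and scikit-learn; paths and cluster
# count are illustrative.
from collections import Counter
from pathlib import Path

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Your articles, reports, talks, and memos as plain-text files.
docs = [p.read_text() for p in Path("corpus").glob("*.txt")]
vecs = encoder.encode(docs, normalize_embeddings=True)

labels = KMeans(n_clusters=12, n_init="auto", random_state=0).fit_predict(vecs)

for cluster, size in Counter(labels).most_common():
    example = docs[list(labels).index(cluster)]
    print(f"cluster {cluster}: {size} docs, e.g. {example[:80]!r}")
```

The map starts the inquiry; it doesn’t finish it. Naming what a thin cluster actually means is still human work.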
Cross-domain translation has produced some of my most useful breakthroughs. I feed AI frameworks from completely unrelated fields - jazz improvisation, ecosystem design, evolutionary biology, network theory, game theory - and ask it to reinterpret whatever problem I’m stuck on through those lenses. I’m not looking for metaphors. I’m borrowing structural logic from domains that have already solved similar problems in different contexts.
I was stuck on how to think about new models for ad agencies as the current structure collapses. Traditional creative agencies, holding companies, the whole ecosystem is breaking down and nobody seems to know what replaces it. I fed the AI frameworks from jazz improvisation and asked it to reinterpret the problem. It came back with something about how jazz ensembles don’t have fixed structures - they have protocols for coordination. Nobody leads permanently. Authority shifts based on who has the idea in that moment. The structure emerges from the interaction, not from the org chart.
This chimed with how I’d begun to think about what’s next. Maybe agencies aren’t organisations at all anymore. Maybe they’re coordination protocols. Temporary assemblies of capability that form around specific problems and dissolve when they’re solved. The question isn’t “what’s the new agency model?” It’s “what are the protocols that let talent coordinate without infrastructure?” And all of this came from borrowing a completely different structural logic about how creative collaboration actually works.
Cross-domain translation borrows existing structures. But there’s another practice that does the opposite - detecting structures that are still forming, rules that haven’t been articulated yet.
Constraint mining from signals means I’m constantly feeding AI weak cultural signals and asking it to identify emergent constraints. What rules are forming that haven’t been articulated yet? What’s becoming taboo? What’s becoming mandatory? I’ve been tracking signals around truth and reality, and the AI spotted something genuinely Orwellian forming.
The emerging constraint isn’t about cherry-picking truth anymore. That’s been happening for years. What’s new is we’re being asked to deny what we’re seeing with our own eyes. Not interpret it differently - deny it entirely. And the infrastructure makes that possible. AI-generated content, sophisticated propaganda machines, the collapse of verification systems. We genuinely can’t tell anymore what’s real, what’s synthetic, what’s manipulated. That ambiguity is then being weaponised.
The constraint forming is this: you’re expected to accept narratives that contradict direct observation. We’re entering a phase where “I saw it” is no longer sufficient evidence of anything, because you might have seen a deepfake, or propaganda, or you might be misremembering, or you might be lying. The ground itself is unstable.
That’s not fully explicit yet. But it’s hardening. Seeing it means you can prepare for a world where reality itself is contested territory, not just interpretation of it.
Prototyping Culture (simulating impact)
Most testing happens too late. You make something, put it in the world, then see how people respond. By then you’ve committed resources, built momentum, attached your name to it. The cost of being wrong is high.
I’ve been using AI to rehearse cultural response before anything exists. Not to test creative work, but to stress-test how ideas will land, mutate, and be misunderstood across different contexts. You’re prototyping reactions and emotions, not just assets.
I was working on a project for a particularly passionate fandom. The kind that reacts with frustration and anger on a regular basis. Every communication from the organisation got picked apart, misinterpreted, turned into evidence of disrespect or incompetence. Before developing anything, I asked AI to simulate how different approaches would be received. What would set them off? What would they read as patronising? Where would they find conspiracy?
What came back was clarifying. The straightforward transparency approach I’d planned - explaining constraints and trade-offs - would be read as making excuses. The community-first framing would be seen as pandering. Even silence would be interpreted as contempt. The AI showed me the emotional architecture of the fandom’s relationship with the organisation. Years of broken promises had created a lens where nothing could land as intended.
That didn’t mean the project was doomed. It meant I had to design for distrust. Acknowledge it directly instead of trying to overcome it with sincerity. Show understanding of why they were angry rather than asking them not to be. The final approach assumed antagonism as the baseline and worked from there. Not trying to win them over, but to not make things worse.
Cultural propagation modelling takes this further. How would a message spread within the community? Where would it mutate? I ran different framings through simulation. One version would get amplified by the most extreme voices and distort everything. Another would create a split between OG fans and newer ones. A third would generate a brief moment of unity in shared cynicism that might actually be useful.
That’s not prediction. It’s scenario planning for culture. You can’t control how people respond, but you can anticipate the likely shapes of response and design accordingly. You can make certain misreadings less likely. Or you can accept that some misreadings are inevitable and plan for them.
But understanding how things might go wrong isn’t enough. You also need to reverse-engineer what’s actually worked before. I’ve been using AI to break down the narrative structure of the rare moments when this organisation actually connected with its fandom. Not analysing what they said, but extracting the structural DNA - the sequencing, the beats, the logic of how information was revealed.
The pattern that emerged wasn’t about tone or promises. It was architectural. The successful communications followed a specific structure: establish shared reality first, then introduce constraint or complexity, then offer agency within that constraint. The failures skipped straight to solutions or tried to reframe the shared reality. They broke the sequence. When I mapped this structure and tested it against new communications, it held. The architecture mattered more than the content. You’re not carbon-copying what worked. You’re extracting the underlying logic and testing whether it transfers to other moments and contexts.
Inventing Better Problems (constraints and evolution)
Most people use AI to solve problems. I’ve been using it to create better problems. Harder constraints. Weirder challenges. The kind of creative friction that forces you out of default thinking.
The strangest thing I’ve been doing is asking AI to make my work harder.
I’ve been inventing anti-briefs. Not solving the brief I’ve been given, but generating restrictions and challenges that force me into new cognitive spaces. I was working with an organisation trying to navigate new forms of leadership - new capabilities, methodologies, belief systems required for a different kind of future. Instead of asking “what does future leadership look like,” I asked AI to create constraints that would make the problem genuinely difficult. What if leaders couldn’t rely on hierarchy? What if they had to lead people who fundamentally disagreed with the organisation’s direction? What if leadership had to happen entirely through questions rather than answers?
These aren’t hypotheticals for the sake of it. They’re designed to break default patterns. The questions-only constraint led to thinking about leadership as facilitation rather than direction - a completely different model than the command-and-control frame most leadership discourse still assumes. Most of that thinking wouldn’t have worked for the actual brief. But some of it revealed assumptions about what leadership even means in their context that made the final work so much sharper.
You’re using AI to engineer difficulty, not ease. To make the box, not escape it.
The other side of this is generating enough variations that selection becomes the creative act. Instead of trying to think of the perfect idea, you create conditions for hundreds of ideas to emerge, then practice ruthless curation.
I’ve been treating ideas like organisms. Using AI to generate mutations, hybrids, and evolutionary branches of a core concept - especially combinations that shouldn’t work. I was working on a project about new mythologies for a world where our stories no longer hold truth or reality or a future. Instead of refining the idea, I asked AI to hybridise it with incompatible concepts. What if you combined “new mythologies” with “radical transparency” - the exact opposite of how myths work? What if you crossed it with “computational culture” where meaning is algorithmic rather than narrative? What if you merged it with pure individualism, the exact opposite of the collective function myths serve?
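In practice this is a small loop. A minimal sketch, assuming the same OpenAI-style client as earlier; the core concept, the incompatible list, and the prompt wording are all illustrative.

```python
# Minimal sketch of idea mutation: hybridise a core concept with
# deliberately incompatible ones, then curate the results by hand.
# Assumes an OpenAI-style client; prompts and concepts are illustrative.
from openai import OpenAI

client = OpenAI()

CORE = "new mythologies for a culture whose stories no longer hold"
INCOMPATIBLE = ["radical transparency", "computational culture", "pure individualism"]

def hybridise(core: str, other: str, model: str = "gpt-4o") -> str:
    prompt = (
        f"Merge these two concepts into one coherent idea, even though they "
        f"shouldn't work together: '{core}' and '{other}'. Describe the "
        f"hybrid in three sentences."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # push for variation; a human does the selection
    )
    return response.choices[0].message.content

mutations = [hybridise(CORE, other) for other in INCOMPATIBLE]
```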
Most combinations were nonsense. But working through them revealed something I hadn’t seen: myths have already mutated into memes. We still have cultural stories that spread and shape behaviour, but they’ve been stripped of everything that made myths functional and valuable. Memes are mythology without the container. Without the depth, the wisdom, the moral architecture, the sense of continuity and humanity. We’re living in a world of myth-shaped holes.
The real question isn’t how to create new myths. It’s what new container we need to build that can hold what myths used to hold, in a form that works in fragmented, algorithmic culture. The format changed. The function still needs to exist. We just haven’t invented the structure yet.
None of the fifty variations I generated solved that. But collectively, they revealed it. This is Darwin, not Da Vinci. You’re not crafting ideas. You’re creating conditions for variation, then using the mutations - especially the failed ones - to understand what you’re actually building. The practice is in selection, but also in reading the failures correctly.
Building a New Creative Intelligence
This isn’t exhaustive. I have a plethora of other methods. Madcap experiments, hacks, and weirdness that don’t fit neatly into categories or haven’t crystallised enough to write about yet. These are just the ones that have survived long enough to articulate. The test cases that revealed something worth sharing.
But these practices all share something critical - they don’t use AI to generate creative work. They use it to transform the conditions under which creative work happens. To make invisible structures visible. To create friction that sharpens thinking. To map territories too vast to see from inside them. To stress-test ideas before they meet reality. To invent problems worth solving.
AI isn’t the creative force in any of this. It’s infrastructure. The scaffolding that makes certain kinds of thinking possible that weren’t before. The question isn’t whether AI can be creative. It can’t. The question is whether we can build new creative systems that happen to have AI in them.
That’s harder than it sounds. When you can generate a hundred variations of anything in seconds, selection becomes the creative act. When you can map entire cultural territories instantly, knowing where to look becomes the skill. When you can simulate responses to ideas before they exist, judgement about what’s worth making becomes everything.
The practices I’ve been building aren’t about making AI more creative. They’re about making myself more capable of creativity in an environment where the old constraints have dissolved. Where the friction that used to force good thinking has been smoothed away. Where you can produce infinite mediocrity without ever hitting resistance.
So I’m rebuilding the resistance. Using these platforms to create better problems instead of easier solutions. To surface my own patterns so I can break them. To show me the ideology I’m enveloped in. To destroy my arguments before I commit to them. To map what hasn’t been done yet. To stress-test what I’m about to do.
I’m trying to craft a new form of creative intelligence from the ground up.
Because here’s what’s actually at stake. The work ahead isn’t about chasing tools or perfecting prompts. It’s about cultivating new habits of mind. New creative disciplines. New standards for what good even means when anything can be generated instantly. We’re not learning to use AI. We’re learning to stay creative in a world where AI exists.