The Six Loops
Groundhog Day in AI discourse
I was keynoting a client away day last month - one of those internal sessions where a big company gathers its senior people to talk about AI and emerging tech - and I found myself doing something I’ve started doing at every one of these events: mentally cataloguing how long it takes before someone says something I haven’t heard four hundred times already.
Seven minutes. That’s how long it took this time. Seven minutes before a senior leader leaned into the microphone during the Q&A and started talking about “agentic workflows” and how automation was going to “eat the middle layer.” The room nodded. Some people were taking notes. And I thought: we’re still doing this. We’re still stuck.
The vocabulary has evolved. Nobody says “learn to prompt or get left behind” anymore - that’s so 2023. Now it’s agentic this and autonomous that and middle-layer disruption. The words are shinier. But the conversation underneath hasn’t moved. We’re still going round and round the same handful of talking points, performing the same monologues at each other, never actually getting anywhere.
I’ve started counting them. The loops we’re stuck in. There are six. And once you see them, you can’t unsee them. You start predicting which one someone’s about to perform within the first ninety seconds of them speaking. The evangelists have their loops. The skeptics have theirs. Both camps are convinced they’re the ones seeing clearly. Neither is getting anywhere interesting.
I’m half tempted to turn it into a bingo card. But I think I’d be the only one playing, and instead of shouting “bingo” I’d be begging “for the love of God, can we change the script.”
So here they are. The six loops. In no particular order of annoyance.
The first loop is fear.
AI is coming for creative jobs. The robots are replacing us. Adapt or die.
You know this one because it’s everywhere. It’s the opening slide of half the keynotes I’ve sat through. It’s the hook for every AI course being sold to anxious professionals. It’s the subtext of every LinkedIn post from someone who’s just discovered ChatGPT and now feels compelled to warn everyone else that the sky is falling.
That away day I mentioned - one internal speaker opened with a slide showing “ADAPT OR DIE” in massive letters (I shit you not). The audience scribbled notes like they were receiving battlefield orders. Everyone’s so busy being scared that no one stops to ask what exactly they’re supposed to adapt into, or why dying is the only alternative, or who benefits from keeping an entire industry in a state of low-grade panic.
Fear does something to your thinking. It narrows you. It makes you defensive. When you’re scared, you focus on survival - on protecting what you have, on not losing ground, on keeping your head down and hoping the predator doesn’t notice you. You don’t invent new things when you’re scared. You don’t reimagine what’s possible. You cling to what exists and try not to get eaten.
Which is fine, I suppose, if there’s actually a predator. But fear is also a business model now. There’s an entire cottage industry built on terrifying creative people and then selling them the solution. Courses. Certifications. “Future-proof your career” programmes. Learn these seventeen prompting techniques or get left behind. The fear loop serves someone’s interests, and it isn’t yours.
I’ve watched smart people - genuinely talented, experienced creative professionals - tie themselves in knots trying to “stay relevant” in ways that have nothing to do with the actual work they’re good at. Learning tools they don’t need. Chasing credentials that don’t matter. Running on the hamster wheel of anxiety while the thing that made them valuable in the first place - their taste, their judgment, their ability to see what others miss - gets neglected.
Fear is a shit place to think from. And we’ve been thinking from there for two years.
But fear has a twin.
The second loop is hype.
AGI in eighteen months. Artificial superintelligence by 2030. AI agents that will handle everything - your calendar, your emails, your creative work, your life. The singularity is near. Again. Still. Always just around the corner.
If fear is one extreme, hype is the other - and they feed off each other. The doomers and the accelerationists are locked in a strange dance, both convinced that everything is about to change completely, disagreeing only on whether that’s terrifying or exhilarating. One side sees extinction risk; the other sees transcendence. Neither seems particularly interested in what’s actually happening right now, in the messy middle where most of us live and work.
The hype loop has its own vocabulary. Exponential curves. Emergent capabilities. Paradigm shifts. It speaks in timelines and predictions, always just far enough out that you can’t quite call bullshit yet but close enough to create urgency. Eighteen months. Two years. By the end of the decade. The goalposts move but the breathlessness stays constant.
And there’s something almost religious about it. The true believers speak of AI the way previous generations spoke of salvation - as something that will transform everything, render current concerns irrelevant, usher in a new age. The specifics are vague but the conviction is absolute. We’re on the cusp of something unprecedented. Everything is about to change. You just have to have faith.
I find the hype loop exhausting in a different way than the fear loop. Fear at least takes the present seriously, even if it distorts it. Hype skips over the present entirely, always fixated on a future that never quite arrives. It makes it impossible to think clearly about what’s actually in front of us - what these technologies can actually do, what they’re actually being used for, what’s actually changing in how work gets done.
The hype loop and the fear loop are two sides of the same coin. Both treat AI as something that happens to us rather than something we shape. Both make the present feel like a waiting room for a future that’s already been decided. Neither leaves much room for agency.
Most people, of course, aren’t living in either extreme. They’re not paralysed by fear or intoxicated by hype. They’re just... using it. Which brings us to the third loop.
The efficiency loop.
AI will make us more productive. More content, faster. More options, cheaper. Scale what used to be artisanal.
This is the dominant narrative right now. The one with the money behind it. Every AI tool pitched to creative teams does essentially the same thing: it makes production faster. Write more copy. Generate more options. Produce more variants. Resize for more formats. Translate into more languages. The entire investment thesis of the AI-for-creativity industry is “do what you already do, but more of it.”
I sat in a pitch last week where a vendor showed us how their tool could generate three hundred social media posts in an hour. Three hundred. The room was impressed. People were taking photos of the slides. And I sat there thinking: when did we decide the problem with social media was that there wasn’t enough content? When did “more” become the goal?
Because if you actually talk to anyone doing creative work - the strategists, the designers, the writers, the people making things - nobody says the hard part is production. Nobody says “I wish I could generate more options faster.” The hard part is knowing which option is right. The hard part is having a point of view in the first place. The hard part is seeing what everyone else is missing, or holding complexity without flattening it, or making a decision when you can’t possibly know how it’s going to land.
More content doesn’t solve any of that. It just makes it easier to avoid. Can’t decide which direction is right? Generate fifty and pick one. Don’t have a clear strategy? Produce enough variants that something’s bound to stick.
The efficiency narrative is also comfortable for the people selling the tools. Keep everyone focused on “produce more, faster, cheaper” and nobody asks what all this content is for, or whether the underlying strategy makes any sense. Keep them on the how and they’ll never get around to the why.
I’ve started asking a question in client meetings: “What would change if you produced half as much, but it was twice as good?” The silence that follows is always instructive.
Not everyone has bought into efficiency, though. Some people have found a different way to avoid the hard questions.
The fourth loop is exceptionalism.
AI can’t be truly creative. It’s just remixing. It doesn’t have soul, intuition, humanity. We’re safe because we’re special.
I get why this one’s appealing. It’s comforting. It lets you off the hook. If AI can’t really do what we do - if it’s just sophisticated pattern-matching without genuine understanding - then we don’t have to think too hard about what’s changing. We can keep doing what we’ve always done and trust that our ineffable human spark will protect us.
I’ve watched entire rooms of senior creatives relax visibly when someone says this. The shoulders come down. The defensive posture softens. It’s like watching people receive permission to stop paying attention.
And look, there’s something to it. AI doesn’t have lived experience. It doesn’t have a body that’s felt things. It doesn’t wake up at 3am with a half-formed idea that won’t let go. There are dimensions of creative work that emerge from being human in ways that are genuinely hard to replicate.
But the exceptionalism loop isn’t really an argument about what AI can and can’t do. It’s a way of not engaging with what’s actually happening. A way of retreating and closing down the conversation before it gets uncomfortable. The people clinging hardest to creative exceptionalism are usually the ones who really need to be asking difficult questions about their own work - whether it’s actually as distinctive as they think, whether the “human touch” they’re selling is real or performed, whether they’ve been coasting on mystique rather than substance.
Exceptionalism often comes with a side of nostalgia. Remember when we used to actually make things? Remember when craft meant something? Remember when you had to really know your trade? There’s a whole romanticised past being constructed here - a golden age of authentic creative work that AI is threatening to destroy. Never mind that most of what got made in that golden age was also mediocre. Never mind that the “craft” being mourned was often just gatekeeping by another name. The nostalgia isn’t really about the past. It’s about not having to deal with the present.
And then there’s the advice that comes with it: lean into your humanness. Be more human. Double down on empathy and connection and all the things machines can’t do.
Which sounds lovely until you ask the follow-up question: how many “be more human” jobs are there, realistically? Are we expecting a boom in empathy consultants? A surge in demand for blue-sky thinkers? An economy restructured around feelings? Come on.
The questions aren’t going away just because we’ve decided we’re too special to answer them. And the defensive crouch of exceptionalism - insisting that what we do is irreplaceable without actually examining what we do - is just a cope.
Back on the evangelist side, there’s a loop that looks like engagement but keeps everything small.
The fifth loop is tactical.
Here’s how to prompt better. Here’s the workflow hack. Here’s the tool stack that’ll change your life. Ten prompts that will transform your creative process. The ultimate guide to Claude for strategists. I saw one the other day promising “the only AI workflow you’ll ever need” - which is quite a claim given that the tools change every six weeks.
I get the appeal. Tactical advice is actionable. You can do something with it today. There’s a satisfaction in learning a new technique, in feeling like you’re getting better at something concrete. God knows I’ve shared enough prompting tips myself.
But scroll through LinkedIn on any given day and it’s wall-to-wall tactical content. Prompt libraries. Workflow templates. Tool comparisons. “I tested seven AI writing tools so you don’t have to.” How to generate a PowerPoint deck that’s just as shit as your usual PowerPoint decks, only faster. Here’s a copy-paste prompt that will give you copy-paste outputs. Congrats, you're now certified. In something. Probably.
Everyone's optimising their prompts, refining their workflows, stacking their tools. A lot of motion. I'm not sure it's movement.
The tactical loop is seductive because it feels like progress. But it keeps the conversation small. It keeps us focused on how to use the tool, not what the tool makes possible. It’s the difference between learning to operate a printing press and asking what a world with printing presses could become. One of those is a skill. The other is imagination. We’re drowning in the first and starving for the second.
If the fifth loop is for people who’ve embraced AI but think small, the sixth is for people who’ve decided not to think about it much at all.
The minimising loop.
AI is just a tool. Like Photoshop. Like the camera. Nothing fundamental is changing, we just have a new instrument in the kit.
Some tools are just tools. A better hammer is still a hammer. You use it the same way, for the same things, just more effectively.
But some tools change everything.
Writing wasn’t “just a tool.” It restructured how humans think and remember. It made it possible to accumulate knowledge across generations, to build on ideas that came before, to think in ways that weren’t possible when everything had to be held in living memory. Writing changed what humans were capable of.
The printing press wasn’t “just a tool.” Yes, it made books cheaper and faster to produce. But it also enabled the scientific method - the ability to share findings reliably across distances so knowledge could accumulate and be verified. It restructured who could access ideas. It eventually toppled centuries of centralised religious authority. The printing press didn’t just do the old thing more efficiently. It made new things possible that no one could have imagined when Gutenberg was setting type.
The internet wasn’t “just a tool.” Early on, people used it to make digital brochures and send faster mail - doing the old things slightly more efficiently. But the internet enabled strangers to build things together without ever meeting. Open source software. Wikipedia. Entirely new forms of collaboration and knowledge-building that didn’t exist before. The tool created possibilities that weren’t imaginable in advance.
The minimising loop lets us avoid reckoning with the possibility that this might be one of the big ones. And look, maybe it isn’t. Maybe we’re all getting worked up about a slightly fancier autocomplete and in ten years this will all look like the hype cycle it probably partially is. But the certainty with which people declare “it’s just a tool” suggests they’re not actually interrogating the question. They’re just closing it down.
So there they are. Six loops. Fear, hype, efficiency, exceptionalism, tactical, minimising. The setlist that never changes.
None of them are entirely wrong. That’s what makes them so sticky. But none of them are interesting. And none of them point toward what actually matters.
What I keep coming back to - the thing that nags at me continuously - is that technology doesn’t simply replace things. It reconfigures them. It changes what something means, what it’s for, where it lives.
Photography didn’t kill painting. It reconfigured what painting was for. The camera took over documentation and painting became something else - expression, abstraction, impressionism. Things it couldn’t have become while it was busy recording reality. The technology didn’t destroy the art form. It liberated it into new territory.
And so for me, the interesting question isn't replacement or refusal. It's reconfiguration. What becomes possible now that wasn't possible before?
So what’s the alternative? What happens if you step off the setlist entirely?
I’ve been trying to find out. What happens if you rebuild creative practice from scratch for this moment?
What if a brand could make decisions, not just follow guidelines? Not a PDF gathering dust, but a living system that understands how the brand thinks - and can navigate situations the guidelines never anticipated.
What if you could explore strategic directions by playing them out across different time horizons before committing to any of them? Run a positioning forward five years. See where it strains. Fork it. Compare.
What if you could see past the edges of your own worldview? Pressure-test your thinking through frames you don’t naturally hold. Find the blind spots that come from being you, in your context, with your assumptions.
What if the AI wasn’t your personal assistant but the connective tissue between everyone’s thinking? Surfacing connections between people’s work that would otherwise never meet. Holding what groups usually lose - the tangent that got cut off, the idea that wasn’t right for this brief but might be right for another.
I’m calling it Terra Nova. New ground. Four coordinates on a landscape that didn’t exist before.
It’s a talk I’m giving first at Lisbon Digital School in March. After that, I’ll figure out the right format to share it more widely.
I don't have it all figured out. Nobody does. We're early - really early - and my imagination has a ceiling too (which irks me no end). But I'm mapping what I can see from where I'm standing. And I’ve found enough to know that the current conversation is missing the point entirely.
There's something new here. Genuinely exciting, unmapped territory. And after two years of bingo, I'm ready to go explore it.