Complexity Cascades: A Lesson Re-Learned through Building with AI
Stuff I knew but failed to apply.
Note: This is a personal reflection on a specific experience with AI-assisted software development, but the lessons are broadly applicable to AI collaboration. I’m sharing my (slightly embarrassing) story to highlight, and remind myself once again, of the importance of clarity in our interactions with these tools.
I asked Claude Code to review an implementation for a library tool I’d built using Claude Code itself. The library was intended to work as a “content retrieval -> synthesis -> information retrieval” layer for use with AI tools (such as Claude Code) when building AI tools (I know), but it was ridiculously over-engineered. 20,000 lines of code to manage content from 11 URLs. Admittedly, the plan was to extend the content source base beyond that, but here we were.
Claude responded with a 79,000-character critique. Sixteen issues across four tiers. Forty-two thousand characters of improvement plans. It was delivering the same mess it was critiquing. The analysis was so thorough it was utterly useless.
It didn’t take me long to realise that I’d created these conditions myself. I had asked Claude to “review this library” without constraining what that meant or what I was after, and this outcome was where that led.
This got me thinking about how a lack of clarity cascades into complexity.
Engaging deeply with vaguely defined goals spirals into chaos, as every iteration in an unbounded context leads to ever greater divergence. If you don’t know what you’re trying to achieve, you can’t evaluate whether you’re getting closer to it. If you can’t evaluate whether you’re getting closer to it, you have no choice but to keep going in an attempt to cover every possibility. The only way out is to consciously interrupt the cycle.
The Complexity Cascade in Action
The example here relates to software engineering, but I believe the lessons are generally applicable to AI (and even more generally applicable to leadership and “good” thinking, but let’s keep it tight).
To recap:
The system was spectacularly over-engineered. Twenty thousand lines of code. Cache layers, TTL management, scheduled refresh, quality gates. A tonne of infrastructure for a problem that didn’t exist.
The analysis inherited the complexity. Claude delivered a 79,000-character tome (great word, thank you Thesaurus) containing code samples for every recommendation, risk assessments, success metrics, and a four-tier priority system. It was the stuff of nightmares.
The improvement plan compounded it further. Another 42,000 characters detailing how to fix sixteen identified issues. The proposed solution was as impenetrable as the analysis.
At no point did Claude ask if the system should exist at all. And why would it? I never set clear boundaries, expectations, or quality gates. At this point, I realised that the complexity cascade had been set in motion by my own lack of clarity. I asked it to reassess the implementation after I provided it with a set of “expert principles”, starting with realigning on the objective.
Claude was polite enough not to say so outright, but I knew that it knew it was all my fault. Its verdict: “This library solves the wrong problem. Consider starting fresh.”
That one’s on me, Claude...that one’s on me.
You Get What You Prompt For
Anthropic’s own prompt engineering guidance puts it clearly: “When interacting with Claude, think of it as a brilliant but very new employee (with amnesia) who needs explicit instructions.”
Vague prompts produce vague (or in my case, encyclopedic) output. Specific prompts produce focused output, and you cannot be specific if you don’t know what you want. I’d asked for a “review.” That’s not a constraint; it’s an invitation to demonstrate comprehensiveness. Claude obliged. Word choice matters.
Geoffrey Litt makes a similar point in his work on LLM-assisted development: the medium of your interaction fundamentally changes what the AI produces. Chat is inherently expansive. Constrained requests are inherently focused.
This got me thinking about what differentiates novices from experts, and what I needed to ask for to get expert results.
Howard Marks captures this distinction in his writing on second-order thinking: “First-level thinking is simplistic and superficial... Second-level thinking is deep, complex and convoluted.”
But there’s a trap here. Deep thinking shouldn’t produce convoluted output. The expert’s job is to do the complex analysis internally and deliver the simple conclusion externally. Novices demonstrate knowledge by showing everything they know. Experts demonstrate understanding by showing only what matters.
The same novice/expert distinction applies to us as AI users. With AI, you are the expert with respect to the problem you’re trying to solve and in assessing proposed solutions. I’d been acting like a novice user, and Claude happily obliged by giving me novice-level results.
The tragedy in all of this is that I already know this from experience. I know and apply the principles of clear communication in my leadership role, and I understand how this applies to AI (I mean, I was even cheeky enough to write about it here...the shame...). I have significant experience in training and coaching others to become better “doers” (indeed another thing I’ve written about...the shame compounds...).
And yet, it turned out I hadn’t internalised all of this in the context of AI collaboration until I started going really deep and got my ass kicked.
If I have any credibility left at this point, then allow me to share the lesson:
Complexity is contagious, but the infection requires your consent.
When you ask AI to analyse something complex, the analysis will tend toward matching complexity. When you ask it to review something over-engineered, the review will inherit the over-engineering. If you don’t know what you need, there’s no chance (yet) that AI will figure it out for you.
You don’t need to know the solution to your problem, but what you absolutely must work hard at is understanding and communicating the problem you are trying to solve (I can basically hear the ironic chuckles of Product Managers everywhere as I type this).
In this, working with AI is really not that dissimilar from working with people.
Sometimes, you need to relearn the old lessons the hard way.

