The Last Mile Fallacy

March 30, 2026

Why you're using AI backwards


Amazon recently tightened scrutiny on AI-generated code pushes. Engineers were shipping functions they hadn't thought through, and production was breaking in ways that review couldn't catch.

The headlines wrote themselves: "AI writes bad code." "Vibe coding fails."

But the code wasn't the problem. The code was fine.

Nobody had done the thinking that's supposed to happen before the code.

Contribution Theater

Watch someone use AI for any complex task and you'll see the same choreography.

The entire job goes to the LLM. Hard parts, easy parts, everything. Strategy, architecture, execution, formatting - all of it, in one prompt or a rapid-fire sequence of them.

The human's role shrinks to directing. Nudging. "Make it more like this." "That's not quite right." "Try again." They become a coach who never steps on the field, offering touchline instructions while the AI does the actual playing.

But here's the subtle part: when people do take something back from the AI - when they feel the need to contribute with their own hands - they almost always grab the easy work. The formatting. The manual testing. The boilerplate. The cruising altitude.

Not because someone told them to. Because it feels like contribution without being threatening.

This is contribution theater - the performance of involvement in your own work. You're present. You're active. You're "in the loop."

But you've handed away the part that was actually yours to do.

The 80% Trap

Here's a pattern you'll recognize.

You start with a vague idea. Or maybe a concrete reference - a design you saw, an interaction you loved, a product that works the way yours should. You know what "done" looks like. You just haven't figured out how to get there.

So you hand it to the AI. "Build this." "Make it like that." "Here's what I want."

And something remarkable happens: it one-shots a first version that's 80% right.

For a demo, a quick prototype, a proof of concept - this looks done. It's impressive. Your brain registers completion. The urge to ship is strong.

But then you actually use it. Or someone else does. Or you look closer. And you realize 80% isn't a rounding error from done - it's a different universe from done. The last 20% is where all the hard decisions live. The edge cases. The sequencing. The details that separate "looks right" from "is right."

So you start patching. "Fix this part." "No, the other way." "Closer, but not quite."

And the patches don't converge. Each fix introduces new drift. You're playing whack-a-mole with an AI that's working from your vague corrections instead of any real understanding of what "right" means - because you haven't built that understanding either.

Getting from 80% to 90% takes more effort than getting from 0% to 80%.

Getting from 90% to 100% - if anyone even attempts it - takes more effort than everything before it combined.

Most people never attempt it. They ship the 80%, tell themselves it's good enough, and wonder why everything feels slightly off.

I hit this wall trying to replicate an animation. Multiple iterations. Endless coaching from the sidelines. Stuck at 80% - the kind of stuck where more prompting makes it worse.

Eventually I did what I should have done from the start. I played the original at 0.25x speed. Studied it. Built the mental model myself - how it actually worked, not how it looked like it worked.

Then I described that to the LLM. Not the outcome. The underlying structure.

First try. Perfect.

The AI hadn't changed. I had stopped skipping the part that was mine to do.

The Placebo

I saw this pattern again when I reviewed a QA team's workflow.

They were busy. Hundreds of test cases. Detailed execution logs. Dashboards full of green checkmarks. All the artifacts of rigor.

I asked one question: How do you know your test coverage actually guarantees quality?

Silence.

Not the silence of someone thinking. The silence of someone realizing they'd never been asked.

There was no answer because there was no thesis - no framework for what should be tested, why, or how to know when it's sufficient. Just volume. Activity that felt like assurance. A placebo wrapped in process.

When I asked them to bring AI into testing, they automated test case generation and execution. More tests. Faster. Same blindness.

Test slop - the QA equivalent of content farms. 10x the output, identical signal. A hundred people checking random locks in a building when nobody's asked which doors actually matter.

The Autopilot Principle

Commercial pilots rarely automate takeoff and landing.

They automate cruising - the long, stable middle where the variables are known and the risk is low.

The hard parts - where lives depend on split-second decisions in unrepeatable conditions - stay with the human. Always.

Nobody questions this. Everyone understands why.

Yet with AI, we do the opposite. We automate the decisions and volunteer for the cruising.

The Last Mile Fallacy

In logistics, the "last mile" is the final stretch - from the warehouse to your door. Shortest distance. Hardest problem. Irregular addresses, locked gates, missing apartment numbers. No two deliveries are the same.

Companies have spent billions trying to automate it. Drones, robots, autonomous vehicles. Progress is slow - because the last mile is where controlled variables end and messy reality begins.

Your work has a last mile too.

It's the part that requires your specific understanding. Not general knowledge - your particular read of this problem, this context, this moment. The decision about what to build, not how. The diagnosis, not the prescription.

The Last Mile Fallacy is the belief that AI's greatest value is solving this part - the hardest, most context-dependent stretch of your work.

In reality, AI's greatest value is solving everything before the last mile.

It clears the road. Automates the cruising. Handles the predictable middle. So that when you arrive at the part that actually matters, you have more time, more energy, and more focus than you've ever had.

Why Your Brain Fights This

If this is so clear, why does almost everyone get it backwards?

Three reasons - and they compound.

Your brain is a cognitive miser.

Kahneman's System 1 - the fast, intuitive brain - is constantly scanning for ways to reduce effort. It's not laziness; it's evolutionary efficiency. For most of human history, conserving cognitive energy was survival.

When AI offers to handle the hardest part of your work, System 1 doesn't deliberate. It accepts. The decision is made before you're conscious of it. "Let the machine handle it" is the lowest-energy path, and your brain will always prefer it unless something actively overrides the impulse.

AI manufactures the feeling of flow.

Before AI, avoiding the hard part meant doing nothing. The guilt, the deadline, the blank screen - eventually, friction pushed you back to the problem.

Now, avoiding the hard part looks like productivity. You're prompting. Iterating. Generating output. Time passes. Things are happening.

But this isn't flow. Real flow - Csikszentmihalyi's flow - requires working at the edge of your ability on a problem hard enough to demand your full engagement. It's effortful. Uncomfortable. The kind of deep work where you lose track of time because every cognitive resource is committed.

AI-assisted cruising mimics this. You're engaged. The screen is moving. But you're in System 1 - pattern matching, accepting plausible outputs, steering rather than solving.

The sensation of flow without the substance of it.

Most people haven't decided what they actually want.

This is the one nobody talks about.

When a pilot automates cruising, the motivation is unambiguous: land the plane safely. When a chef decides the menu, the goal is clear: create a specific experience for a specific room.

But when most people sit down with an AI tool, they haven't answered the fundamental question: What am I actually trying to achieve?

Not "finish this task." Deeper than that. Do you want to get to the finish line as fast as possible? Or do you want to create something genuinely good - something with a point of view, an angle, an understanding that didn't exist before you engaged with the problem?

If the goal is speed, AI is perfect. You'll get 80% output at 10x pace and it'll be fine. Good enough. Indistinguishable from the median.

If the goal is quality - real quality, the kind that comes from original thinking - then AI is a trap unless you bring a perspective it can't generate. A point of view. A thesis. A deep, specific understanding of why this should exist and what it should be.

This is what "solutioning" means. Code is syntax. Test cases are templates. Drafts are words. The actual work is the thinking that precedes all of it - the decomposition, the framing, the decisions about what matters and what doesn't.

And that thinking is the most energy-expensive task in your workday. Which is why System 1 offers you the escape hatch the moment AI makes one available.

This is why the "vibe coding" discourse misses the point entirely. The question was never "do you read the AI-generated code?" It's: did you do the thinking before the code existed?

If yes - not reading every line is fine. You're an architect reviewing the contractor's work.

If no - reading every line won't save you. You're proofreading a document written in a language you don't speak.

AI doesn't give you less hard work. It gives you more time for hard work. LLMs give you back more time to be miserable - to sit with hard problems, to think past the obvious, to do the cognitive labor that is your job. If you're using AI and your work feels easier, you might be doing it wrong.

Chefs and Cooks

AI just handed everyone a kitchen full of sous chefs.

Now anyone can execute. Generate code, produce test suites, write copy, build interfaces. The recipes are all available. The techniques are all automated.

But a cook follows recipes. A chef decides what to cook, for whom, and why.

A chef reads the room. Knows this table is celebrating and that table just got bad news. Decides tonight's special should be comfort food because it's the first cold evening of the year.

That's the last mile. And the tragedy of contribution theater is that people are using AI to be the chef - "decide what to build, write the strategy, figure out the architecture" - while keeping the cook's job for themselves.

They've automated the thinking and kept the typing.

The Split

AI is dividing the world into two kinds of workers.

Not "people who use AI" versus "people who don't."

People who use AI to skip the hard part versus people who use AI to make room for the hard part.

The first group is growing fast. They produce more, move faster, ship sooner. Their output metrics look great. They'll be rewarded in the short term - because most organizations still measure volume, not understanding.

But they're building on sand. They can't explain why their solution works. They can't adapt when the context shifts. They can't onboard someone new because there's no transferable framework - just a trail of prompts. When the next model is 10x better, they'll need it to be - because they've atrophied the only muscle that was supposed to be theirs.

The second group looks slower. They pause. They slow down when the LLM is ready to sprint. They insist on understanding before generating.

But they're compounding something no model can replicate: their own depth.

Here's the uncomfortable truth about the hard part - the part most people are trying to escape: it's the only part that generates genuine flow. The only part that makes you better at your work. The only part where you produce something that didn't already exist in the training data.

AI commoditized the first 80% of every task. That 80% is table stakes now - everyone gets it, for free, in seconds. Your entire professional identity lives in the last 20%. And that 20% only exists if you have a point of view. A thesis. A deep, specific understanding that no model can generate because it lives in your head, built from your experience, your context, your willingness to sit with the problem longer than was comfortable.

The people who will thrive aren't the best prompters.

They're the ones who've learned to endure - maybe even enjoy - the misery of the last mile.

AI didn't change what makes someone valuable. It just made it impossible to fake.

The Last Mile

Your value was never the execution.

It was never the typing, the formatting, the running of tests, the boilerplate. That was always the middle mile - necessary, but not yours.

Your value is the last mile. The understanding that can't be prompted into existence. The context that lives in your head because you did the hard work of building it.

Let AI clear the road.

Then walk the last mile yourself.

It's harder. Slower. Lonelier.

It's your job.


If this resonated, you might also enjoy Why your side project is worse than Netflix & chill.