A unique colorful paper crane standing out among identical gray ones, illustrating the AI productivity myth and commoditized output.

The AI Productivity Myth: Why Most People Get It Wrong

Ilyas Elaissi
7 min read · May 9, 2026

Last quarter I watched a junior marketer ship 40 blog posts in six weeks using the same ChatGPT prompt template a popular newsletter had published. Traffic went down. The AI productivity myth is that the tools themselves create an edge. They don't. The edge was always whatever you brought to them, and most people are bringing the same thing as everyone else.

This isn't an anti-AI piece. I use Claude every day. I also think about half of what people call an "AI workflow" is actively making them worse at their job, and the other half is making them indistinguishable from the next 10,000 people running the same prompts.

The Average User Is Getting Slower, Not Faster

There's a quiet finding buried in some of the recent NBER working papers on generative AI adoption: the productivity gains are wildly uneven. Top performers in customer support saw modest lifts. Bottom performers saw bigger ones, but mostly because the model gave them a floor they didn't have before. In creative and strategic work, the picture is messier. McKinsey keeps publishing decks about a universal productivity boost across knowledge work, but if you actually talk to people doing the work, the story is closer to: some tasks got 3x faster, some got slightly slower, and a lot of people now spend twenty minutes editing a draft they could have written in fifteen.

That last one is the part nobody talks about. When AI makes you less productive, it usually does it in a way that feels productive. You generated something. You shipped something. The dopamine hit is real. The output is worse than what you would have made yourself, and you can't tell because you didn't make it.

This is the automation bias trap. We trust the machine output more than our own judgment, especially when we're tired or unsure. A 2023 IBM study on AI adoption found that workers who used generative tools for tasks within their expertise often double-checked the output. Workers using it for tasks outside their expertise rarely did. That's exactly backwards from where the risk actually lives.

Why The Same Workflows Everyone Uses Stop Working

Here's the thing about a clever prompt: the moment it goes viral, it stops being clever. It becomes the baseline. Then it becomes the noise.

I watched this happen with cold email. Around mid-2023, a specific ChatGPT-based personalization workflow started circulating on LinkedIn. Scrape the prospect's recent post, feed it to GPT, generate a "personalized" opener. Reply rates spiked for about four months. Then they cratered, because everyone's inbox started receiving fifteen of them a day, all with the same rhythm, all opening with "Loved your post on..." Buyers learned the pattern. The workflow didn't degrade because the model got worse. It degraded because it got copied.

This is how common AI workflows become worthless over time: they're built on the assumption that the input (your prompt, your tool stack, your Zapier automation) is rare. It's not. If you can find it on a Twitter thread with 4,000 likes, your competitors found it too. The output isn't yours. It's a commodity, and commodities compete on price, which in content means attention.

A few examples I've seen recently:

  • Coding: junior devs using Cursor to ship features they don't understand, then spending two days debugging because they can't read the diff. The senior who wrote her own scaffolding shipped the same feature in an afternoon.
  • Design: portfolios full of Midjourney mood boards that look like every other portfolio full of Midjourney mood boards. The hiring designer scrolls past in 8 seconds.
  • Writing: SaaS blogs publishing four posts a week, all ranking for nothing, because Google's helpful content updates have gotten very good at sniffing out the median.
  • Startups: founders generating 200-slide pitch decks. Investors I know now actively penalize decks that smell like AI because the thinking underneath is usually thin.
  • Content creation: YouTubers using the same five "viral hook" prompt templates, all sounding like the same person.

The hype-vs-reality gap on AI isn't that the tools don't work. It's that "works" was always relative to scarcity, and scarcity is gone.

What Strategic AI Use Actually Looks Like

The people getting real leverage from these tools have one thing in common: they had a rare skill or a rare body of context before they touched AI. The model amplifies what's already there. If what's already there is generic, you get faster generic. If what's already there is specific, weird, deeply informed, or hard-won, you get faster specific-weird-deeply-informed.

A friend who's been doing technical SEO for twelve years uses Claude to draft schema audits in minutes that used to take her two hours. The audits are good because she knows what to look for, what to ignore, and which of the model's suggestions are confidently wrong. A junior running the same prompt would ship the confidently-wrong parts and never know.

This is the skill differentiation point that gets lost in the hype-vs-reality argument. AI is not a leveler. It's a multiplier, and multipliers are brutal: 10 times zero is still zero. The same tools that turn a domain expert into a one-person agency turn a generalist into a producer of slightly more generic output.

The creativity paradox is real here. Models are trained on the median of human writing, code, and design. They pull toward the average by mathematical necessity. If you anchor your work to their first draft, you anchor to the average. If you use them to pressure-test a position you already hold, or to draft the boring 70% so you can spend energy on the 30% that needs your judgment, you stay above the average. Same tool, opposite outcomes.

Identical plated dishes on a long table with one different plate, symbolizing generic AI workflows versus original thinking.

The AI Productivity Myth That Costs Beginners The Most

The most expensive version of this trap hits people new to a field. They mistake the model's fluency for their own competence. The output reads professional, so they assume it is professional. Six months later they've shipped a portfolio, a blog, a codebase, or a brand voice that's polished on the surface and hollow underneath, and they can't tell the difference because they never built the taste to evaluate it.

I'm not sure how to solve this, honestly. The advice "learn the fundamentals first" sounds like a boomer telling you to learn long division before using a calculator, and I don't fully buy it. But there's a version of it that's true: you need some friction with the raw material to develop judgment, and AI removes that friction by default.

If you're searching for the best AI tools for students or hunting down free AI productivity tools, the more useful question is which parts of the work you're trying to skip and whether skipping those parts builds or erodes your ability to do the job. Using Claude to explain a concept you're stuck on builds. Using it to write the essay so you can submit it erodes. Both feel productive. Only one of them compounds.

The same logic applies to anyone asking which are the best AI productivity tools for their workflow. The tool barely matters. ChatGPT, Claude, Gemini, Cursor, Perplexity: the gap between them is smaller than the gap between two users of the same tool with different levels of underlying expertise.

How To Use AI Without Losing Your Creative Edge

A few rules I've landed on, none of them clean:

  1. Don't let the model write the first draft of anything you care about. Outline it yourself. Use the model to argue with the outline, fill in the boring connective tissue, or stress-test your weakest section. The first draft is where the thinking happens. If you outsource it, you outsource the thinking.
  2. Audit what you'd lose if the tool disappeared tomorrow. If your entire content engine, sales process, or design pipeline collapses without ChatGPT, you don't have a workflow, you have a dependency. Over-reliance on AI in creative work shows up first as speed and second as the realization that you can't work without it.
  3. Treat any prompt or stack you found in a viral thread as a starting point, not an asset. By the time it's viral, the alpha is gone. The interesting work is what you build on top using context nobody else has.
  4. Spend the time AI saves you on something AI can't do. This is the part most people skip. They use the model to cut a four-hour task to one hour, and then fill the other three hours with more AI-assisted tasks. Net output: more median work. The point of saved time is to invest it in the rare skills, the original research, the customer conversations, the niche expertise that the model can't generate.

There's a version of this where you sound like a Luddite, and I want to be clear I'm not arguing that. The anti-AI position, "real artists don't use it, real engineers write every line themselves," is just as lazy as the maximalist one. Both are ways to avoid thinking about the actual question, which is: what specifically do you bring that the median user of these tools doesn't, and is your workflow set up to amplify that, or to dilute it?

Originality Is The Only Thing That Compounds

If everyone has the same tools, the tools stop being the answer. What's left is what was always going to matter: a real point of view, a specific body of experience, a niche you understand better than the people next to you, taste built from doing the work badly for long enough to learn what good looks like.

AI doesn't replace any of that. It just makes the people who have it move faster, and the people who don't move sideways at higher speed. The honest version of the productivity research is that adoption is wildly heterogeneous and the gains accrue to people who already had something to amplify.

So the question isn't whether to use these tools. Of course you should. The question is what you're using them on, what you're still doing yourself, and whether the answer to the second question is something you'd be proud to put your name on if the model that wrote half of it stopped existing tomorrow. If yes, keep going. If no, that's the work.
