The speed tax
Marcus Lindblom
Head of Product
Every CMS vendor shipped AI features this year. Auto-generate buttons in the rich text editor, AI image creation in the media library, neural translation panels, SEO suggestion overlays. A few years ago, this would have required custom integrations and dedicated teams. Now it ships as checkboxes in a product update.
I keep looking at feature comparison pages, though, and noticing something.
The features are designed for the comparison page. Not for the person who opens the CMS at nine in the morning and spends the next eight hours inside it.
What the comparison page doesn't show
There's a quiet assumption running through all of this: that more features mean more value. That if you can build it, you should ship it. That speed is the same as progress.
It isn't.
Speed without judgment isn't velocity. It's drift.
I watched an editor try to update a hero image last month. Before she could reach the media library, she had to navigate past an AI generation panel, an auto-suggest overlay, and a "smart crop" dialog she'd never seen before. Three new surfaces between her and the thing she came to do. She closed two of them without reading them.
Each of those features makes a reasonable case on its own. Taken together, they made a two-minute task feel adversarial. I keep seeing this pattern: features that look like progress on a product page but feel like friction in an editing workflow.
The speed tax
I've started thinking of this as a tax. Not the kind you pay once, but the kind that compounds.
Editors pay it in complexity. Every new AI button is another choice they didn't need to make. The interface that used to feel intuitive starts to feel cluttered with options that serve the vendor's roadmap more than the editor's workflow.
The product itself pays it in bloat. It gets heavier, slower to learn, harder to explain to a new team member. The original reason someone chose the tool gets buried under layers of capability they don't use.
And vendors pay it eventually, in trust. I notice it in the conversations we have with teams evaluating tools. They're not impressed by long feature lists anymore. They're skeptical. They've been burned by products that promised AI-powered everything and delivered confusion.
Karri Saarinen put it well: more power to build should increase our need to think, not reduce it. That's the part most teams are skipping. The ability to ship a feature in a week is remarkable. The discipline to ask whether it should exist at all is rarer.
The question that matters
I've sat in product meetings where the question was "can we ship this by Friday?" and felt the pull to skip the harder question: "does the editor want it?"
AI translation that understands context and preserves tone across languages is genuinely useful. An auto-crop feature that learns from an editor's past choices saves real time. The technology isn't the problem. The problem is the gap between capability and judgment.
When you can build anything fast enough, the question shifts from "can we?" to "should we?" Most teams are still answering the first question and forgetting to ask the second.
The pace of AI development is remarkable. But remarkable isn't the same as good. The most considered product decision might be the feature you choose not to build.