When Tools Replace Understanding

For over a century, every generation outperformed the one before it. Then we introduced screens in classrooms. What went wrong is a lesson for anyone building digital products.

Anders Granberg

Co-Founder

For over a century, every generation outperformed the one before it on cognitive tests. The trend was so reliable that researchers gave it a name: the Flynn effect. Recent evidence suggests it may be reversing. Neuroscientist Dr. Jared Cooney Horvath, in testimony before the U.S. Senate earlier this year, presented data showing that cognitive development across much of the developed world has stalled and, in several domains, declined [1].

The causes are debated [2]. But Horvath's core argument points to something that reaches well beyond schools: a rapid expansion of educational technology created a structural mismatch with how people actually learn [1]. The tools weren't designed around the people using them.

A Solution Looking for a Problem

Governments around the world have invested heavily in putting screens in classrooms. U.S. K-12 EdTech spending alone now exceeds $30 billion a year [3], and many other countries have followed similar trajectories. The narrative has been the same: education needs transformation, technology can deliver it, and students with tools at their fingertips will thrive.

But the picture that's actually emerging is more complicated. The OECD's own analysis of global PISA data shows that when digital devices are used in moderation to support learning in school, outcomes actually improve [4]. The key word is moderation. The problems start elsewhere: screens in schools are routinely used for things they were never intended for, scattering attention rather than focusing it. And when screens replace teaching rather than support it, outcomes get worse, not better [4][5].

In other words, the data doesn't say technology in schools is inherently harmful. It says that how it's deployed matters enormously, and that most deployments haven't been guided by evidence.

This is not really a story about schools. It's a story about what happens when any organisation scales a tool faster than it builds understanding of the people that tool is supposed to serve.

History is full of moments where we got it wrong because we stopped following the evidence. We are, at our best, a species that observes, tests, and adjusts. But we are also a species that falls in love with its own ideas.

So it's hard to avoid the conclusion that the prevailing narrative, that education is broken and only technology can fix it, was more convenient for the companies selling the tools than it was accurate. UNESCO's 2023 Global Education Monitoring Report found that just 11% of teachers and administrators requested peer-reviewed evidence before adopting new technology [6]. Education systems adopted the premise without asking a more fundamental question: how do students actually learn? The tools weren't the problem. The absence of understanding was.

When the Tool Becomes the Method

There is nothing wrong with teaching students about technology. But that's not what happened at scale. Instead, schools started teaching every subject through technology. Reading happened on tablets. Maths happened through apps. Writing happened in cloud documents. The tool didn't support the method. It replaced it.

That distinction matters far beyond the classroom.

The Universal Pattern

This story keeps repeating itself because the underlying mistake is always the same: building for the tool instead of building for the user.

It happens in software when teams ship features because they can, not because customers asked for them. It happens in marketing when strategies are built around the latest platform instead of around how the audience actually makes decisions. It happens in product development when roadmaps are driven by competitor features rather than user research.

The pattern is seductive because it feels like progress. A new feature is tangible. A new integration is demonstrable. A new technology looks good in a press release. Understanding your users, by contrast, is slow, unglamorous work. It requires listening more than building. It requires the discipline to say "not yet" when everyone around you is saying "ship it."

In his testimony, Horvath makes an interesting analytical move. Published research on EdTech tends to show small positive effects, but those studies typically compare digital tools to doing nothing. Horvath instead compares them to the benchmark of ordinary, competent classroom teaching. Against that benchmark, he argues, most digital interventions fall below the line: the technology looks productive in isolation, but measured against a good teacher simply teaching, it often adds little or makes things worse [1]. This is Horvath's interpretation; the underlying research does show small positive effects in its own framing [6]. But the question he raises is a useful one for anyone building products: are you measuring your tool against the absence of a tool, or against the alternative of simply doing the work well?

How many features in your product would survive that test?

The Investment Nobody Evaluated

UNESCO's data reveals what happened next: 85% of some 7,000 pedagogical tools were either a poor fit or implemented incorrectly, and around two-thirds of education software licences in the U.S. went unused [6]. The tools were bought. The money was spent. The question of whether they worked was rarely asked.

Anyone who has worked with digital platforms will recognise the pattern. The investment is made, the migration is done, the rollout is celebrated. And then nobody circles back to ask what value it actually delivered. The project is too far along to kill. The platform is too expensive to replace. The strategy is too public to question.

Slow Down to Speed Up

Some are starting to rethink. Denmark recently moved to make schools mobile-free and restrict children's access to social media [7][8]. Not by upgrading the technology, but by questioning its role entirely.

That's the real lesson from the EdTech story. Not that technology is dangerous, but that urgency without understanding is. The pressure to adopt, to modernise, to keep pace with competitors, creates an environment where stopping to think feels like falling behind. But the evidence increasingly suggests the opposite is true.

Understanding your users isn't a phase in a project plan. It isn't a box to tick before you build what you were already going to build. It's the foundation that determines whether everything you build afterwards creates value or destroys it.

The next time you're evaluating a platform, a feature, or a tool, ask the question that the EdTech industry failed to ask: does this serve the people using it, or does it just serve the people selling it?


References

[1] Horvath, J.C. — Written testimony, U.S. Senate Committee on Commerce, Science, and Transportation, January 2026. commerce.senate.gov

[2] Dutton, E., van der Linden, D., & Lynn, R. (2016). "The negative Flynn Effect: A systematic literature review." Intelligence, 59, 163–169.

[3] CoSN/SETDA — K-12 EdTech spending data, reported in Education Week, October 2025. edweek.org

[4] OECD — PISA 2022 Results & Managing Screen Time, 2023–2024. oecd.org

[5] OECD — "Using Digital Resources for Learning", Education Working Papers No. 340. oecd.org

[6] UNESCO — Global Education Monitoring Report 2023: "Technology in Education: A Tool on Whose Terms?" unesco.org

[7] Eurydice — "Denmark: Agreement on mobile-free primary and lower secondary schools", 2025. eurydice.eacea.ec.europa.eu

[8] "Denmark's government aims to ban access to social media for children aged under 15" — Associated Press, November 7, 2025. pbs.org