When knowledge becomes worthless

Jan 31, 2026

8 min read

Introduction

In Douglas Adams' The Hitchhiker's Guide to the Galaxy, a supercomputer called Deep Thought is built to answer the Ultimate Question of Life, the Universe, and Everything. After seven and a half million years of computation, it delivers the answer: 42. The answer is precise, verified, and completely useless.

I used to think this was satire. I'm starting to think it was prophecy.

The Coming Abundance

Google DeepMind's AlphaFold has predicted the structures of over 200 million proteins, work that would have consumed lifetimes of experimental effort. Researchers are using AI to discover new drugs, develop novel algorithms, and solve mathematical problems that had resisted human attempts for years. The frontier of what is knowable is expanding faster than any individual can track.

The Internet made knowledge freely accessible. AI is doing something different: it's making knowledge freely producible. AI-powered research agents [1, 2] can now survey hundreds of papers and produce competent literature reviews in minutes, generate novel research ideas, implement experiments, and draft papers autonomously. The hard part is less and less about gathering and organizing what's known, and more about deciding what to investigate next.

As Jason Wei (Meta Superintelligence Labs) illustrated during his talk at the Stanford AI Club, when compute per task rises and information retrieval time collapses, intelligence itself starts to behave like a commodity. The scarce resource shifts elsewhere.

The question is: where?

Questions Are the New Scarce Resource

Demis Hassabis, co-founder and CEO of Google DeepMind, put it directly in his interview with Lex Fridman: "Picking the right question is the hardest part of science, and making the right hypothesis." Not running the experiment. Not analyzing the data. Choosing what to investigate.

Richard Hamming made the same point decades ago in his famous speech You and Your Research: "If you do not work on an important problem, it's unlikely you'll do important work." But Hamming added a crucial nuance. An important problem isn't just one with big consequences (e.g., time travel). It must also have a possible attack. The skill lies in identifying problems that are simultaneously significant and tractable. That judgment, Hamming argued, is what separates great scientists from merely competent ones.

This extends beyond research. Andrew Ng argued in his talk at Y Combinator's AI Startup School that "product is the bottleneck, not code." The hard part isn't building software; it's knowing what to build. And Jensen Huang has described how 90% of his prompts to AI are questions that require genuine cognitive effort. He treats AI not as an answer machine, but as a tool for challenging and clarifying his own thinking.

Across scientific research, product development, and engineering leadership, the pattern is the same: answers are becoming cheap. Questions are becoming the scarce resource.

The harder question is: what does it actually take to ask good questions? And what happens to that ability when AI removes the conditions that produce it?

The Paradox

Consider where the ability to ask good questions actually comes from. It doesn't appear from nowhere. It emerges from struggling with a problem, from hitting dead ends, noticing gaps, and developing intuitions through effort.

Physicists ask sharp questions about quantum mechanics because they spent years wrestling with problems that didn't yield to their first approach. Senior engineers anticipate performance bottlenecks because they spent painful hours debugging systems that failed in production. The struggle produced the questioning ability.

Here's a concrete example. A junior developer notices their web application is running slowly. They use Cursor's agent mode and immediately get the fix: "Your database query is missing an index on the id column." Problem solved in seconds.

But consider what that developer didn't learn. If they had investigated the issue themselves, they would have formed and tested multiple hypotheses. Is it the frontend? The network? The application logic? Eventually they'd discover that the database was scanning every row. Each failed hypothesis teaches something about the shape of the problem space. Six months later, when designing a new feature, this developer instinctively asks: "What queries will run frequently? What's the expected data volume? Where will this break at scale?" These questions arise because they've felt the pain of not asking them.
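If the developer had run that investigation themselves, the last step would look something like the sketch below: asking the database how it actually executes the query, before and after adding an index. This is a minimal, self-contained illustration using SQLite from Python; the orders table, the user_id column, and the index name are hypothetical stand-ins for whatever the real application uses.

```python
import sqlite3

# Hypothetical schema: an orders table that the slow endpoint filters by user_id.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (order_id INTEGER PRIMARY KEY, user_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (user_id, total) VALUES (?, ?)",
    [(i % 1000, float(i)) for i in range(100_000)],
)

query = "SELECT total FROM orders WHERE user_id = ?"

# Before the index: the query plan reports a full table scan ("SCAN orders").
for row in conn.execute("EXPLAIN QUERY PLAN " + query, (42,)):
    print(row)

# The one-line fix the AI hands over:
conn.execute("CREATE INDEX idx_orders_user_id ON orders (user_id)")

# After the index: the plan switches to an index search
# ("SEARCH orders USING INDEX idx_orders_user_id (user_id=?)").
for row in conn.execute("EXPLAIN QUERY PLAN " + query, (42,)):
    print(row)
```

Watching the plan flip from a full scan to an index search is exactly the kind of contrast the instant fix never surfaces.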

The developer who always had AI fix their bugs may ship features quickly, but they can't anticipate problems. Anticipation is pattern recognition across past failures, and they have no failures to pattern-match against.

By making answers effortless, AI may be eroding the very ability to question that makes humans valuable in the loop. The ability to ask good questions is a byproduct of struggling to find answers yourself. Remove the struggle during problem-solving, and the training ground disappears.

What Struggle Actually Develops

If questioning ability grows from struggle, and AI reduces struggle, we face a bootstrapping problem. To find a way forward, we need to understand what struggle specifically develops, so we can preserve those elements even as the future of work changes.

Sharp mental models built from first principles. Geoffrey Hinton has warned about the danger of "fuzzy mental models." People who accept frameworks uncritically, who learn the what without understanding the why, cannot generate novel questions because they don't see the cracks in their own understanding. This is why foundational knowledge like mathematics, physics, logic, and ethics endures even as specialized knowledge like programming frameworks, medical protocols, or legal precedents becomes rapidly obsolete. Foundations are the grammar of questioning itself.

Tolerance for ambiguity. Hamming observed that great scientists "believe enough to proceed, but doubt enough to notice flaws." Most people collapse to one side: they commit so fully to an approach that they can't see its weaknesses, or they doubt so pervasively that they never commit to anything. The ability to hold a hypothesis provisionally, to invest effort while remaining open to being wrong, is what allows you to navigate the space between "this might work" and "this is definitely true." This tolerance doesn't come from reading about ambiguity. It comes from living inside it.

Metacognition, the ability to monitor your own thinking. It's not enough to think well; you must notice how you're thinking. Are you asking this question because it's important, or because it's comfortable? Are you avoiding a line of inquiry because it's unproductive, or because it threatens a belief you're attached to? When AI can retrieve and synthesize information faster than any human, the value of being a repository for factual knowledge drops to near zero. What endures is the ability to notice the gaps where new questions live.

The Open Question

The three qualities above are real, but they raise a harder problem: can we develop questioning ability without the problem-solving struggle that has historically produced it? The tempting answer is to replace struggle with scaffolding, to teach people the metacognitive moves of good questioning without requiring years of trial and error. But I think this frames the problem wrong. The question isn't whether scaffolding can replace struggle. It's whether scaffolding can make struggle more productive.

Failure is a powerful contrastive signal. When your hypothesis is wrong and the system breaks, your brain registers the gap between expectation and reality viscerally. That signal trains intuition over time. A well-scaffolded learner doesn't avoid confusion. They learn to notice what their confusion is telling them, to treat each failure as a data point about the shape of the problem rather than just an obstacle to get past.

What makes this urgent is who's most affected. We can already see a trend forming among those most exposed to AI. A 2025 study from MIT's Media Lab tracked participants writing essays over four months with and without AI assistance. Those who relied on ChatGPT showed the weakest brain connectivity on EEG, the lowest linguistic complexity, and diminished critical thinking even after they stopped using AI to write. A separate study of 666 participants found the pattern was sharpest among 17-to-25-year-olds, who showed higher AI dependence and lower critical thinking scores than any other age group.

Jensen Huang uses AI as a questioning machine because he has decades of hard-won intuition to bring to the conversation. He knows when to question AI's output. But a child encountering AI for the first time has no such foundation. They have no reason to treat it as anything other than an oracle, a system that just gives you answers instantly.

The resolution isn't to keep AI away from young learners. It's to design new human-AI interaction paradigms that preserve productive struggle rather than eliminate it. The scaffolding doesn't replace the struggle; it makes the struggle more productive. The question for educators, researchers, and AI engineers alike is how to build tools and methodologies that help learners extract more signal from each moment of confusion, each failed hypothesis, each encounter with the limits of their own understanding.

We need to figure that out soon, before an entire generation learns to accept "42" as the answer without ever asking what the question was.

Acknowledgments

I'd like to thank Cassey Shao and Racquel Brown for reviewing the initial draft of this blog and providing insightful feedback.