When knowledge becomes worthless

Jan 31, 2026

9 min read

Introduction

In Douglas Adams' The Hitchhiker's Guide to the Galaxy, a supercomputer called Deep Thought is built to answer the Ultimate Question of Life, the Universe, and Everything. After seven and a half million years of computation, it delivers the answer: 42. The answer is precise, verified, and completely useless.

I used to think this was satire. I'm starting to think it was prophecy.

The Coming Abundance

Google DeepMind's AlphaFold has predicted the structures of over 200 million proteins, work that would have consumed lifetimes of experimental effort. Researchers are using AI to discover new drugs, develop novel algorithms, and solve mathematical problems that had resisted human attempts for years. The frontier of what is knowable is expanding faster than any individual can track.

The Internet made knowledge freely accessible. AI is doing something different: it's making knowledge freely producible. AI-powered research agents [1, 2] can now produce competent literature reviews in minutes, generate novel research ideas, and implement experiments to verify hypotheses. This is not limited to abstract disciplines like math, physics, or computer science. In biology, a partnership between OpenAI and Ginkgo Bioworks, which integrated GPT-5 with a cloud-based autonomous laboratory, achieved a 40% reduction in the cost of protein production. The hard part is less and less validating ideas and hypotheses, and more and more deciding what to investigate next.

As Jason Wei (Meta Superintelligence Labs) illustrated during his talk at the Stanford AI Club, when compute per task rises and information retrieval time collapses, intelligence itself starts to behave like a commodity. The scarce resource shifts elsewhere.

The question is: where?

Questions Are the New Scarce Resource

Demis Hassabis, a co-founder of Google DeepMind, put it directly in his interview with Lex Fridman: "Picking the right question is the hardest part of science, and making the right hypothesis." Not running the experiment. Not analyzing the data. Choosing what to investigate.

Richard Hamming made the same point decades ago in his famous speech You and Your Research: "If you do not work on an important problem, it's unlikely you'll do important work." But Hamming added a crucial nuance. An important problem isn't just one with big consequences (e.g., "time travel"). It must also have a possible attack. The skill lies in identifying problems that are simultaneously significant and tractable. That judgment, Hamming argued, is what separates great scientists from merely competent ones.

This extends beyond research. Andrew Ng argued in his talk at Y Combinator's AI Startup School that "product is the bottleneck, not code." The hard part isn't building software; it's knowing what to build. And Jensen Huang has described how 90% of his prompts to AI are questions that require genuine cognitive effort. He treats AI not as an answer machine, but as a tool for challenging and clarifying his own thinking.

Across scientific research, product development, and executive leadership, the pattern is the same: answers are becoming cheap. Questions are becoming the scarce resource.

The harder question is: what does it actually take to ask good questions? And what happens to that ability when AI removes the conditions that produce it?

The Paradox

Where does the ability to ask good questions actually come from? One working hypothesis is that it requires two ingredients:

  1. Curiosity: this is the intrinsic motivation to close a perceived knowledge gap. It's what makes you generate questions in the first place.
  2. Feedback: this is information that violates your expectations. It's what tells you whether your questions are pointing in the right direction. Feedback comes from two sources: the concrete signals you get when you struggle with a problem and discover your hypothesis was wrong, and the metacognitive signals you get from self-critique and peer challenge. The first teaches you through collision with reality. The second teaches you through anticipation, that is, learning to sense whether a question is worth pursuing before you invest the effort.

Independent thinkers have both. Computer scientists ask sharp questions about AI because they are genuinely curious and have spent years thinking about the right questions to ask. They have struggled with countless lines of inquiry that didn't yield the results they expected. They've also been challenged by peers on the problems they chose to solve and have developed an exceptional capacity for self-critique, the habit of actively destroying their own ideas before others can (e.g., Po-Shen Loh or Charlie Munger). Curiosity generates the questions. Feedback shapes them.

Here's what happens when AI short-circuits both.

A junior developer notices their web application is running slowly. They ask an AI coding tool, which immediately identifies the root cause: "Your database query is missing an index on the id column." Problem solved in seconds. The developer moves on. But notice what didn't happen. The fix arrived before curiosity had a chance to activate. There was no knowledge gap to perceive, just a gap that was filled before the developer noticed its shape. And there was no feedback from struggle or reflection. No wrong hypothesis that forced them to rethink how databases work. No peer asking, "Are you sure it's the database layer? What's your evidence?"
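
To appreciate how little the developer had to engage with, here is roughly what that one-line diagnosis amounts to. This is a minimal, hypothetical sketch using SQLite from Python's standard library; the users table and its plain INTEGER id column (deliberately not a PRIMARY KEY, so SQLite doesn't index it automatically) are invented stand-ins for the real schema.

```python
import sqlite3

# A toy table with an unindexed id column, mimicking the slow app's schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [(i, f"user{i}") for i in range(10_000)],
)

query = "SELECT name FROM users WHERE id = ?"

# Before the index: the plan reports a full table scan ('SCAN users').
for row in conn.execute("EXPLAIN QUERY PLAN " + query, (42,)):
    print(row)

# The one-line fix the AI hands over.
conn.execute("CREATE INDEX idx_users_id ON users (id)")

# After: the same lookup walks the index instead of every row
# ('SEARCH users USING INDEX idx_users_id (id=?)').
for row in conn.execute("EXPLAIN QUERY PLAN " + query, (42,)):
    print(row)
```

Copy, paste, done. The fix works, and that is exactly the problem: nothing in it required the developer to form, or falsify, a single hypothesis.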

Now consider a developer who investigates the issue themselves. They start with a hypothesis, maybe the frontend is rendering too much data, and test it. Wrong. That failure is feedback: it tells them their question was pointing in the wrong direction. So they generate a new one. Is it the network? Wrong again. But here's the key: each failed hypothesis doesn't just eliminate a possibility. It reshapes their curiosity and understanding of the problem space. They start asking different kinds of questions. "I've been assuming this is a frontend problem. What if my whole framing is wrong? What layer haven't I even considered?"
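
Here is what that investigation might look like in code, as a hypothetical sketch: the three stub functions below stand in for the real layers (the sleeps simulate where the time actually goes), and timing each layer turns every hypothesis into a falsifiable prediction.

```python
import time
from contextlib import contextmanager

# Hypothetical stand-ins for the real layers of the request path;
# the sleep durations simulate the unindexed query dominating.
def render_page():   time.sleep(0.005)
def fetch_remote():  time.sleep(0.010)
def query_db():      time.sleep(0.180)

@contextmanager
def timed(label):
    start = time.perf_counter()
    yield
    print(f"{label:<20} {(time.perf_counter() - start) * 1000:6.1f} ms")

# Each block is a hypothesis test: a small number falsifies the guess.
with timed("frontend render"):     # hypothesis 1: rendering too much data
    render_page()
with timed("network round-trip"):  # hypothesis 2: the network is slow
    fetch_remote()
with timed("database query"):      # the layer not yet considered
    query_db()
```

Two of the three numbers come back small. Those are the failed hypotheses, and they are doing real work: each one narrows where the next question should point.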

This is curiosity and feedback working together. Struggle provides concrete signals that redirect questioning. Reflection ("why was I so sure it was the frontend?") provides metacognitive feedback that improves the quality of questions they'll generate next time. Six months later, facing a completely different system, they don't just ask "what's slow?" They ask "what assumptions am I making about where the bottleneck lives, and which could be wrong?"

That's not pattern-matching. That's a fundamentally different mode of inquiry, one built from the meaningful feedback of every question asked before.

The developer who always had AI fix their bugs never experienced that cycle. Curiosity was never activated, so they never generated their own hypotheses. They received no feedback from failure, so their mental models were never challenged. They solve problems efficiently. But they never develop the ability to ask the novel questions that open new lines of inquiry.

By making answers effortless, AI short-circuits both ingredients for asking great questions. It resolves knowledge gaps before curiosity engages. It eliminates the feedback, from struggle and from reflection, that teaches which questions are worth asking. Remove either ingredient, and the training ground for questioning disappears.

The Open Question

If answers are becoming cheaper and the curiosity-feedback hypothesis holds, a natural question follows: how can we develop questioning ability during human-AI interaction?

Two research directions seem promising.

The first focuses on curiosity. If we can activate learners' curiosity during AI interactions, perhaps we can cultivate questioning ability even when AI handles much of the cognitive load. This path assumes that curiosity can do more heavy lifting than the framework suggests, and that AI itself might provide the feedback signals that struggle used to generate. The bet: maybe the feedback doesn't have to come from failure. Maybe it can come from the AI explicitly surfacing gaps, contradictions, or unexplored directions.

The second focuses on feedback. Instead of trying to replace struggle, we make it more productive. Failure is a powerful contrastive signal. When your hypothesis is wrong and the system breaks, your brain registers the gap between expectation and reality viscerally. That signal trains intuition over time. An exceptional learner doesn't avoid struggle. They learn to notice what their struggle is telling them, to treat each failure as a data point that updates their mental models rather than just an obstacle to overcome.

These two directions aren't mutually exclusive. Curiosity without feedback produces aimless questioning. Feedback without curiosity leads to rote memorization. The real challenge is finding interventions that activate both.

What makes this urgent is who's most affected.

A 2025 study from MIT's Media Lab tracked participants writing essays over four months with and without AI assistance. Those who relied on ChatGPT showed the weakest brain connectivity on EEG, the lowest linguistic complexity, and diminished critical thinking even after they stopped using the tool. A separate study of 666 participants found the pattern was sharpest among 17-to-25-year-olds, who showed higher AI dependence and lower critical thinking scores than any other age group.

Jensen Huang uses AI as a questioning machine because he brings decades of hard-won intuition to the conversation. He knows when to push back on AI's output, when to probe further, when to reject an answer. A child encountering AI for the first time has no such foundation. They have no reason to treat it as anything other than an oracle.

The resolution isn't to keep AI away from young learners. It's to design new human-AI interaction paradigms that promote curiosity and preserve productive struggle rather than eliminating it. AI should make the struggle more productive. The question for educators, researchers, and engineers: how do we build tools that encourage curiosity and help learners extract more signal from each moment of confusion, each failed hypothesis, each encounter with the limits of their own understanding?

We need to figure this out before an entire generation learns to accept "42" as the answer without ever asking what the question was.

Acknowledgments

I'd like to thank Cassey Shao and Racquel Brown for reviewing an initial draft of this post and providing insightful feedback.