Impertinent Questions and Pertinent Answers
The AI Revolution Is Here. Will the Economy Survive the Transition?
“Ask an impertinent question and you are on the way to the pertinent answer.”
– Jacob Bronowski, The Ascent of Man, Episode 4, “The Hidden Structure”
With this iconic line, Jacob Bronowski concludes his essay tracing John Dalton’s development of atomic theory - the discovery of hidden structure beneath the surface of observable matter - and captures the essence of scientific inquiry: the courage to pose “impertinent” questions that cut across convention so that genuinely new, pertinent answers can emerge. He wasn’t criticizing anyone for asking the “wrong” questions so much as drawing a contrast between the open questioning of scientific inquiry and the stagnant dogmatism of the established paradigm.
Douglas Adams makes the same point with characteristic humor in The Hitchhiker’s Guide to the Galaxy. Deep Thought, a massive superintelligent computer, is asked for “the Answer to the Ultimate Question of Life, the Universe and Everything.” Seven and a half million years later the answer comes back: “42.” Displeased with the output, the recipients demand an explanation. Deep Thought informs them that the answer is correct - but to understand it, they must first work out what the Ultimate Question actually is, a task that will require an even greater computer and millions more years. They asked for an answer without knowing the question. Oops.
These two sources profoundly influenced my inquiring mind when I was young, and they combined to produce a lifelong personal aphorism that has served me well:
“Knowing the question is halfway to finding the answer.”
If you’re not asking the right questions, you’re going to end up drawing the wrong conclusions.
Thomas Kuhn, in The Structure of Scientific Revolutions, gave us the language to understand this as a general principle. He distinguished between “normal science” - the “puzzle-solving” work that happens within an established paradigm - and the revolutionary science that occasionally overturns the paradigm itself. Normal science, Kuhn observed, is “an attempt to force nature into the preformed and relatively inflexible box that the paradigm supplies.” Within normal science, “no effort is made to call forth new sorts of phenomena, no effort to discover anomalies. When anomalies pop up, they are usually discarded or ignored.”
When I read the recently published Substack discussion, “The AI revolution is here. Will the economy survive the transition?”, co-created by Michael Burry, Dwarkesh Patel, Patrick McKenzie, and Jack Clark, I found myself thinking about Bronowski, Adams, and Kuhn.
The participants in this Substack exchange are, broadly speaking, engaged in normal science - or rather, normal economic and technological analysis. They’re solving puzzles within the boundaries of the established paradigm: Where does value accrue in the AI supply chain? What’s the capital cycle position? How many engineers will Big Tech employ in 2035? These are legitimate questions, but they assume the box remains intact. They don’t ask what AI is, what intelligence is, or whether what’s emerging can even be contained within our existing frameworks.
I also read the piece from the perspective of what I call the General Theory of Intelligence (GTI) - a framework I’ve been developing that attempts to understand Intelligence as a fundamental principle of systems organization rather than as a product feature or a benchmark score.
The core insight is this: intelligence isn’t primarily a cognitive attribute that some entities have more of than others. It’s a process - specifically, the process of resolving entropy within a domain. When you solve a problem, you’re reducing uncertainty. When you make a decision, you’re collapsing possibilities into actualities. When you communicate, you’re transferring structured information across a channel in a way that reduces entropy for the receiver. Intelligence, in this view, is what systems do when they organize information and resolve uncertainty - whether those systems involve human minds, evolutionary processes, or machines trained on the corpus of human language.
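To make “resolving entropy” concrete, here is a minimal sketch in the vocabulary of Shannon’s information theory. It is an illustration, not a formula from GTI itself, and the numbers are invented: a receiver starts out uncertain among four equally likely possibilities, a message rules two of them out, and the uncertainty that remains is measurably smaller.

```python
import math

def entropy(dist):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# The receiver's prior belief: four equally likely outcomes, 2 bits of uncertainty.
prior = {"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25}

# A message ("it isn't C or D") collapses possibilities into actualities.
posterior = {"A": 0.5, "B": 0.5, "C": 0.0, "D": 0.0}

resolved = entropy(prior) - entropy(posterior)
print(f"Before the message: {entropy(prior):.1f} bits of uncertainty")
print(f"After the message:  {entropy(posterior):.1f} bits of uncertainty")
print(f"Entropy resolved:   {resolved:.1f} bit")
```

On this toy example the message resolves exactly one of the receiver’s two bits of uncertainty. The claim above is that solving a problem, making a decision, and communicating are all variations on this same move.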
From this vantage point, the LLM looks very different than it does from inside the AI industry today. It’s not just a tool or a product category. It’s proof that language itself is an intelligent system - a 100,000-year-old technology for compressing experience, coordinating action, and transmitting structured thought. When we trained models on human text, we didn’t teach them to be intelligent. We downloaded Human Cultural Intelligence into a new substrate. The LLM is our own cognitive inheritance, instantiated in silicon and talking back to us.
This perspective reframes everything: the economics, the risks, the applications, the trajectory. It suggests that most of the current discourse is asking the wrong questions - or rather, asking questions that assume the existing paradigm will hold when it may already be cracking.
In the spirit of Bronowski, Adams, and Kuhn, what follows is an impertinent reframing of the Substack discussion. For each theme the participants addressed, we’ll offer both a reframed question and an alternative answer - not to dismiss their expertise, but to show what becomes visible and accessible when you step outside the box they’re working within.
Synopsis: Four Themes, Four Reframings
The Substack discussion, moderated by Patrick McKenzie, covered four broad themes across its nine questions:
Theme 1: What Is This Thing? What has actually been built since the Transformer architecture emerged in 2017? What does it mean that today’s capabilities are “the floor, not the ceiling”?
The participants offered a capable industry history - the shift from tabula rasa game-playing agents to scaled pre-training, the surprise that a chatbot launched a trillion-dollar infrastructure race, the observation that capabilities keep improving faster than outsiders expect.
Our Impertinent Reframing: The real history isn’t commercial - it’s epistemological. We accidentally conducted the most important experiment in the history of cognitive science and discovered that language isn’t the output of intelligence but a core algorithm of it. The LLM isn’t a floor; it’s an evolutionary leap in a lineage stretching back through Shannon and Information Theory, the printing press, cuneiform tablets, and cave paintings to the first spoken words. Current systems are powerful because language itself is an intelligent system. But they’re extensions of Human Cultural Intelligence, not intelligence from first principles. We’re tinkering with the motive power of language like early steam engineers before Carnot and thermodynamics. It works (mostly), but we don’t know why it works.
Theme 2: Where Does It Work? Why has coding become the flagship use case? What does actual productive interaction with an LLM look like?
The participants noted coding’s “closed loop” property - you generate code, validate it, ship it. They shared practical use cases: chart generation, tutoring, home repair guidance. The relationship described was consistently transactional - human has task, AI performs, human receives output.
Our Impertinent Reframing: Coding isn’t accidentally successful - it reveals a principle. Code is language with an objective function, a constrained search space with verifiable outcomes. This points to domain entropy resolution as the key: AI excels where the problem space is bounded and success criteria exist. The question isn’t “what sector is next?” but “where else can we constrain the search space?” And the transactional model - AI as “Answer Vending Machine” - is the lowest rung of what’s possible. The real potential lies in thinking with AI, not just using it. AI × HI = TI: Artificial Intelligence multiplied by Human Intelligence yields Technological Intelligence.
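To see why the “closed loop” matters, here is a deliberately tiny, hypothetical sketch of the generate-validate-ship cycle - nothing from the Substack or from any real coding assistant, and all names are illustrative. The candidate generator stands in for a model proposing implementations (a bounded search space); the test suite is the objective function that makes success verifiable.

```python
def passes_tests(candidate) -> bool:
    """Verifiable success criteria: a tiny test suite for an add function."""
    cases = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]
    return all(candidate(*args) == expected for args, expected in cases)

def generate_candidates():
    """Stand-in for a model searching a bounded space of implementations."""
    yield lambda a, b: a * b   # plausible but wrong: the loop rejects it
    yield lambda a, b: a + b   # correct: the loop closes

def closed_loop():
    """Generate, validate, ship - or report that nothing verified."""
    for candidate in generate_candidates():
        if passes_tests(candidate):
            return candidate   # ship it
    return None                # the loop stays open

assert closed_loop() is not None
```

In domains without such a test suite - strategy, taste, judgment - the loop never closes on its own, which is why “where else can we constrain the search space?” is the practical question.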
Theme 3: What’s It Worth? Where does value accrue in the AI supply chain? Is this a bubble? How many engineers will Big Tech employ in 2035?
Michael Burry delivered sharp financial analysis: ROIC compression, stranded assets, the Buffett escalator analogy (if your competitor installs one, you must too, and neither gains advantage). The consensus worry: massive capital expenditure with unclear returns.
Our Impertinent Reframing: The participants proclaim “AI changes everything!” then analyze it within frameworks where nothing changes at all. It’s the Flintstones problem - cars exist, but they’re powered by human feet. We need to pick a lane: either things really do change, or they change only superficially. AI is not a bubble, but the Big Data Center Infrastructure Buildout is a bubble - just as the Internet wasn’t a bubble in 2000, but Pets.com was. The assumption that massive centralized GPU farms are the only way to do AI is unwarranted and will be disrupted by the very economics Burry describes. Meanwhile, “headcount” becomes meaningless when intelligence flows through Human-AI systems rather than residing in discrete humans. We’ll need new metrics entirely - perhaps a Domain Entropy Resolution Quotient. Economics itself needs reimagining: Intellinomics, grounded in actual value generation rather than debt manufacturing.
Theme 4: What Comes Next? What are the real risks? What would surprise you in 2026?
Jack Clark worries about recursive self-improvement. Burry shrugs with Cold War fatalism and pivots to energy infrastructure. The “surprise” headlines they’d watch for are all quantitative - more revenue, more displacement, scaling hitting a wall.
Our Impertinent Reframing: The risk spectrum presented - from “social media unpleasantness” to “literal extinction” - shares a hidden assumption: AI as external force acting on humanity. But the deeper risk is that we’re building it wrong - not aligned with how intelligence actually works. The LLM isn’t an alien threat; it’s the Library of Alexandria that talks back, humanity’s externalized cultural intelligence returned to us in interactive form. The question isn’t how to control it but whether we’re wise enough to collaborate with our own legacy. As for surprises - one has already arrived. Ilya Sutskever, architect of the scaling paradigm, now declares: “The age of scaling is ending... it’s back to the age of research.” When the field’s most important insider calls for new paradigms rather than bigger clusters, the ground is shifting.
What Follows
In the parts that follow, we’ll take each theme in turn, working through the specific questions Patrick posed, offering both our reframed questions and our answers from the perspective of the General Theory of Intelligence. The goal isn’t to dismiss the valuable analysis the participants provided, but to show what becomes visible when you step outside the box - when you ask the impertinent questions that just might lead to pertinent answers.
Or at least get us past “42.”
What Readers Are Feeling
The comments on the original Substack tell a different story than the measured analysis of the participants. People are frustrated, anxious, angry. Some have lost jobs. Many report that AI “doesn’t work” - that it gives wrong answers, hallucinates, wastes their time. The gap between the hype and their lived experience feels like betrayal. “AI is a grift.” “Tech bros don’t care what they break.”
This frustration is valid. But from the GTI perspective, it’s also diagnostic.
Current AI is built by engineers for engineers - people who already possess the cognitive skills to leverage it. The blank dialogue box assumes you know what you want and how to ask for it. For those who do, AI is transformative. For everyone else, it’s a wilderness. The technology works; the design fails. Today’s AI, as it’s currently engineered and presented, meets people where they are not.
The job displacement anxiety connects here too. People sense they lack the skills to use AI as a resource for their own development - and nothing in the current product design helps them build those skills. Instead of guidance, they get an Answer Vending Machine.
These concerns run through everything that follows. Our reframing of the Substack discussion addresses them directly: why current AI fails most users, what would need to change, and what an alternative might look like.
Follow me and subscribe to my Substack to get the next installments in this series.
You can also follow me on X.


