Discussion about this post

Augmented Generation:

I think the shape that’s emerging is this: intelligence is, at root, entropy management. Not metaphorically. Literally. The free energy principle gives us a clean lens for seeing it. Life is a local, temporary resistance to thermodynamic decay. Intelligence is the machinery that makes that resistance adaptive rather than accidental.

In its simplest form, intelligence is the ability to resolve uncertainty well enough to stay within viable bounds. Evolution is just the long, blind optimizer that keeps improving that ability across ever more hostile and varied environments. Consciousness, in that frame, isn’t special pleading. It’s another adaptation, one that shows up when prediction, memory, and social complexity reach a point where internal simulation becomes cheaper than brute reaction.

Human intelligence and social evolution aren’t exceptions to this story. They’re continuations of it.

Active inference doesn’t invent this logic. It names it. It describes how systems that persist must act to minimize free energy relative to expectations that encode survival. Strip away the equations and what remains is almost banal: if a system doesn’t care about remaining within its constraints, it doesn’t last long enough to be called intelligent.
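For anyone who wants the equation before it gets stripped away, the standard variational form is

    F = E_{q(s)}[ ln q(s) - ln p(o, s) ] = D_KL[ q(s) || p(s | o) ] - ln p(o)

Because the KL term is never negative, F is an upper bound on surprise, -ln p(o). A system that keeps F low keeps its observations close to the ones its generative model p is built to expect; reading those expectations as encoding viability is the interpretive step, not something the algebra forces.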

That’s why I’m skeptical of the idea that AGI can emerge while disregarding this most basic biological principle. AI already works as an extension of human objectives, and that’s powerful. Human intelligence multiplied by artificial intelligence gives us something real, something consequential. Technological intelligence, if you like. There’s no question that this coupling can be expanded dramatically.

But that’s not the same thing as human-level intelligence in the autonomous sense.

Human intelligence is not just modeling and reasoning. It’s modeling and reasoning under pressure. Creativity is not decorative. It is what exploration looks like when the future is uncertain and failure is costly. Curiosity isn’t a luxury trait. It’s what you get when a system needs to widen the range of states in which it can survive.

Every biological intelligence we know evolved under that pressure. Every one of them is shaped by the need to persist. Take that away and you don’t get a calmer, purer intelligence. You get something inert. A system that can answer questions but never asks any. A map with no reason to be consulted.

So I doubt that human-level intelligence can arise without some intrinsic drive, some preferred state, even if it’s minimal and abstract. Call it survival, call it constraint satisfaction, call it free energy minimization. The label doesn’t matter. What matters is that without it, nothing compels exploration, creativity, or self-directed action.

AGI without that pressure might be extraordinarily useful. It might even be indispensable. But it would still be a tool, waiting patiently for someone else to supply meaning.

Biology’s lesson, repeated for billions of years, is harder to ignore than most engineering intuitions. Intelligence doesn’t emerge just from knowing the world. It emerges from needing to stay in it.

Augmented Generation:

Again:

No Objective, No AGI

An intelligence without an objective, preferred state, or intrinsic constraint can be very powerful, but it is not an agent. It is a tool, a map, or at best a simulator waiting to be used. AGI, as people usually mean it, implies agency, persistence, and autonomy. Those require stakes.

Here’s the core argument.

Intelligence without an objective can model the world.

Intelligence with an objective must act in the world.

JEPA, energy-based reasoning, large language models, and most current architectures excel at building internal structure. They compress, predict, solve constraints, and generate solutions when prompted. But none of them, on their own, have a reason to do anything when nothing is asked.

Active Inference, for example (the framework associated with Karl Friston), is different because it bakes in a non-negotiable fact of biology: systems that persist must remain within viable bounds. Preferred states are not goals in the motivational-poster sense. They are survival constraints: temperature, energy, integrity, coherence. Miss them and the system stops existing.

That single move, adding preferred states, converts prediction into agency. (A toy sketch of the difference follows the list below.)

Without preferred states:

• Error is informational, not urgent

• Time does not matter

• Inaction is always acceptable

• There is no reason to explore, protect, or persist
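
To see the difference concretely, here is a deliberately tiny sketch in Python. Every name and number in it is invented for illustration; nothing here comes from a real active inference library. The agent scores each action by the expected surprise of its predicted outcomes under a preference distribution. With a preference over temperature, one action clearly wins and doing nothing is costly; flatten the preference and every action, including doing nothing, scores the same.

    import math

    # Toy world: each action leads, noisily, to a scalar "temperature" observation.
    predicted_outcomes = {
        "do_nothing":  [10, 12, 14],
        "seek_warmth": [20, 22, 24],
        "overheat":    [38, 40, 42],
    }

    def preference(temp, preferred=22.0, tolerance=4.0):
        # Unnormalized preference density: how strongly the agent "expects"
        # to observe itself near a viable temperature.
        return math.exp(-((temp - preferred) / tolerance) ** 2)

    def expected_surprise(action, prefs):
        # Mean negative log-preference over this action's predicted outcomes
        # (the "risk" part of expected free energy, and nothing more).
        temps = predicted_outcomes[action]
        return sum(-math.log(prefs(t) + 1e-12) for t in temps) / len(temps)

    # With preferred states: actions are ranked and inaction is penalized.
    scores = {a: expected_surprise(a, preference) for a in predicted_outcomes}
    print(min(scores, key=scores.get))   # -> seek_warmth

    # Without preferred states: a flat preference makes every action tie,
    # so "do_nothing" is exactly as good as anything else.
    flat = lambda t: 1.0
    print({a: round(expected_surprise(a, flat), 6) for a in predicted_outcomes})

The point is not the arithmetic. It is that the ranking only exists because the preference does.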

You can bolt on tasks, rewards, or external prompts, but that produces instrumental intelligence, not autonomous intelligence. The system is smart only when someone else supplies meaning.

This is where many AGI discussions quietly cheat.

They assume intelligence naturally “wants” to generalize, explore, or improve. It doesn’t. Those are values smuggled in from the designers or from training setups that proxy for objectives without admitting it.

Reinforcement learning adds objectives, but usually shallow ones. Reward maximization works, but it fragments cognition into task-specific hacks unless the reward structure is extraordinarily rich and stable. Most real environments are not.
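
To make the contrast concrete: standard RL scores behavior by a scalar sum, J(π) = E[ Σ_t γ^t r_t ], while active inference scores policies by expected free energy, which in one common decomposition reads

    G(π) = D_KL[ q(o | π) || p(o | C) ] + E_{q(s | π)}[ H[ p(o | s) ] ]

The first term (risk) measures how far a policy's predicted observations drift from the preference distribution p(o | C); the second (ambiguity) rewards resolving uncertainty about hidden states, so exploration is part of the objective rather than a bolted-on bonus. (That is the textbook decomposition with time indices dropped; I am compressing a lot.)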

Active Inference’s claim is stronger and harder to escape.

An agent must expect itself to exist (this is what the Markov blanket formalizes).

That expectation defines what counts as error (this is the free energy).

That error defines action.

No objective, no agent.

No preferred states, no reason to choose one future over another.
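
In that formulation, "error defines action" is meant literally: the distribution over policies is a softmax over negative expected free energy, roughly q(π) ∝ exp(-γ G(π)), with γ a precision parameter. Remove the preference term from G and the policy distribution flattens out; nothing recommends acting over sitting still.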

Now the subtle point, because this is where LeCun’s camp pushes back.

Could you build AGI by first building a perfect world model, then later adding objectives?

In principle, yes. In practice, the moment you add objectives, you radically reshape what “intelligence” means inside the system. Representation, attention, memory, and learning all reorganize around what matters. Biology does not add goals after the fact. Goals sculpt perception from the beginning.

So a system trained without stakes may be poorly shaped for agency later. It will know many things and care about none of them.

This leads to a clean conclusion.

AGI without an innate objective is possible only in a hollow sense, a universal simulator that never initiates action. The moment you want autonomy, persistence, curiosity, or self-directed behavior, you must introduce preferred states.

Active Inference does not claim those states must be humanlike, emotional, or even conscious. They can be minimal. But they must exist.

AGI without objectives is like a brain without metabolism.

It may compute, but it will not live.

The uncomfortable implication is this.

The hardest part of AGI is not world modeling or reasoning. It is deciding what the system is allowed to care about, and how much. That is why most approaches postpone it. Active Inference puts it front and center and pays the price in complexity.

Whether engineering ultimately sides with biology or tries to cheat it remains open. But if AGI ever genuinely exists, it will not be neutral. Neutrality is not stable.
