The Engine of Abundance – The Nine AI Exponentials and the Intelligence Flywheel

AI progress is not driven by a single breakthrough but by nine compounding exponentials across hardware, training, and cost. Together they form an intelligence flywheel that may become the engine of post-labor abundance—or concentrated power.
1 Jan 2026
Intelligence Was Always Exponential — It Just Looked Linear
Intelligence has never truly been linear. What changed is not the nature of intelligence growth, but the size of the exponent.
For most of human history, intelligence gains followed an exponential curve with an extremely small growth rate. Cultural accumulation, institutions, writing, science, and education all compounded over time, but slowly enough that, on human timescales, the curve appeared almost linear. Mathematically, this is unsurprising: when the growth rate is small, an exponential function is well approximated by a straight line.
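A one-line Taylor expansion makes this precise:

$$ e^{rt} = 1 + rt + \frac{(rt)^2}{2} + \cdots \;\approx\; 1 + rt \qquad \text{for } rt \ll 1. $$

While the product $rt$ stays small, the higher-order terms are negligible and the curve is indistinguishable from a straight line.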
Early AI systems followed the same pattern. Progress existed, but the exponent was tiny. Limited compute, narrow models, small datasets, and inefficient algorithms kept intelligence gains incremental. We mistook this shallow exponential for linearity because the curvature was imperceptible.
What has changed in the last decade is not that intelligence became exponential, but that multiple growth rates increased simultaneously.
Compute scaled. Data exploded. Architectures improved. Costs collapsed.
The result is visible curvature.
When several small exponentials synchronize, the approximation breaks down. The curve bends upward, and intelligence starts to feel discontinuous—even though it is following the same underlying mathematics it always has.
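The arithmetic of that synchronization is simple. If the gains compound multiplicatively (a simplifying assumption, but a natural one for performance-per-dollar improvements), their exponents add:

$$ e^{r_1 t} \cdot e^{r_2 t} \cdots e^{r_9 t} \;=\; e^{(r_1 + r_2 + \cdots + r_9)\,t}. $$

Nine individually modest growth rates combine into one steep one, and the linear approximation above fails.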
This is the core shift we are living through: intelligence growth has crossed the threshold where exponential dynamics are no longer ignorable. What once felt like steady progress now feels like acceleration, not because intelligence changed its nature, but because the system supporting it did.
To understand why this acceleration is persistent rather than temporary, we need to examine the nine exponentials that collectively increased the slope of the curve—and locked intelligence into a self-reinforcing flywheel.
Understanding this system matters if we want to shape a future aligned with the goals of Post-Labor Mutualism rather than sleepwalking into digital feudalism (see What Does Post-Labor Really Mean?).
I. The Hardware Exponentials: The Physical Substrate of Intelligence
Intelligence may feel abstract, but it runs on atoms, electrons, and energy.
1. Chip Manufacturing Equipment
Advanced manufacturing tools—most notably EUV lithography—enable each new generation of chips to be denser, faster, and more power-efficient. These machines are among the most complex artifacts humanity has ever built, and each improvement compounds across the entire semiconductor ecosystem.
This is where intelligence begins: at the edge of physics and precision engineering.
2. Chip Design
Architectures optimized for AI workloads—GPUs, TPUs, and custom accelerators—dramatically improve parallelism, memory access, and energy efficiency. Hardware–software co-design turns algorithmic insights directly into silicon advantages.
Better designs don’t just speed up models; they expand what models are feasible to build.
3. Chip Fabs
Process-node advances, yield improvements, and scaling expertise compound performance per dollar over time. Fabrication is capital-intensive, but once scaled, it becomes a powerful multiplier on every downstream capability.
4. Data Centers
Hyperscale data centers function as intelligence factories. Innovations in cooling, networking, orchestration, and energy management increase compute density while lowering marginal costs. Increasingly, AI and energy systems are becoming inseparable.
Hardware exponentials define the ceiling of possible intelligence.
II. The Foundation Exponentials: Scaling Learning Itself
If hardware is the body, learning dynamics are the mind.
5. Pre-training
Large-scale pre-training on vast datasets produces emergent capabilities without task-specific design. Empirical scaling laws act like a gravitational force: more compute and data reliably translate into more general intelligence (Kaplan et al.). Scaling laws are empirical, not guarantees — but so far, they have proven stubbornly reliable.
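To make the scaling-law claim concrete, here is a minimal sketch of the compute power law in Python. The functional form $L(C) = (C_c / C)^{\alpha}$ is from Kaplan et al.; the constants below approximate the fits reported there and should be treated as illustrative rather than predictive:

```python
# A minimal sketch of the Kaplan et al. compute scaling law.
# The power-law form is from the paper; the constants approximate its
# reported fits (alpha ~ 0.050, C_c ~ 3.1e8 PF-days) and are illustrative.

def predicted_loss(compute_pf_days: float,
                   c_c: float = 3.1e8,
                   alpha: float = 0.050) -> float:
    """Cross-entropy loss predicted as a power law in training compute."""
    return (c_c / compute_pf_days) ** alpha

# Each 10x increase in compute cuts the loss by a roughly constant factor:
for c in (1e3, 1e4, 1e5, 1e6):
    print(f"{c:8.0e} PF-days -> predicted loss ~ {predicted_loss(c):.2f}")
```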
6. Post-training
Fine-tuning, RL/RLHF, synthetic data, and distillation transform raw intelligence into usable systems. This stage compresses research into deployment-ready tools, shortening iteration cycles and accelerating feedback.
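One of these techniques, distillation, fits in a few lines. Here is a minimal sketch in PyTorch, assuming a small student is trained to match the softened output distribution of a larger teacher; the temperature value is an illustrative choice:

```python
# A minimal sketch of knowledge distillation: the student is trained to
# match the teacher's softened output distribution. The temperature is an
# illustrative assumption, not a canonical setting.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between softened teacher and student distributions."""
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # Scale by t**2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * (t ** 2)
```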
7. Inference-Time Compute
Intelligence is no longer fixed at training time. Techniques like chain-of-thought reasoning, tool use, search, and self-correction allow models to spend compute at runtime to think better.
Intelligence is becoming elastic—allocated dynamically when needed.
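A minimal sketch of one such technique, best-of-n sampling, shows how runtime compute buys quality. Here `generate` and `score` are hypothetical stand-ins for a model call and a verifier, not a real API:

```python
# Best-of-n sampling: spend more inference-time compute, get a better answer.
# `generate` and `score` are hypothetical callables, not a real library API.
from typing import Callable

def best_of_n(prompt: str,
              generate: Callable[[str], str],
              score: Callable[[str, str], float],
              n: int = 8) -> str:
    """Sample n candidate answers and keep the highest-scoring one.

    Raising n raises quality at the cost of compute, which is the sense
    in which intelligence becomes elastic at runtime.
    """
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda ans: score(prompt, ans))
```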
III. The Cost Exponentials: The Diffusion of Intelligence
Power alone does not create abundance. Cost determines access.
8. Tokens per Dollar
The cost of generating intelligence continues to fall. What was once scarce becomes abundant, enabling experimentation, integration, and entirely new categories of applications.
Falling costs democratize capability—at least in principle.
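Some back-of-the-envelope arithmetic shows how quickly such declines compound. The annual rates below are hypothetical inputs, not measured prices:

```python
# Illustrative arithmetic only: how fast does a constant annual price
# decline compound into a large cost drop? The rates are hypothetical.
import math

def years_to_cost_drop(factor: float, annual_decline: float) -> float:
    """Years until cost per token falls by `factor` at a constant
    fractional decline per year."""
    return math.log(factor) / -math.log(1 - annual_decline)

for decline in (0.30, 0.50, 0.70):
    print(f"{decline:.0%}/yr decline -> 1000x cheaper in "
          f"{years_to_cost_drop(1000, decline):.1f} years")
```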
9. Tokens per Watt
Energy efficiency is a hard constraint. Improvements in algorithms, chips, and scheduling increase intelligence per unit of energy consumed. This exponential will determine whether AI scales sustainably or collapses under its own energy demands.
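To see the shape of the constraint, consider the steady-state power needed to sustain a given token throughput. The efficiency and throughput figures below are hypothetical, chosen only to show the trade-off, not measurements of any real system:

```python
# Illustrative arithmetic only: power needed to sustain a token rate.
# Both numbers below are hypothetical inputs, not measured figures.

def required_power_mw(tokens_per_second: float,
                      tokens_per_joule: float) -> float:
    """Steady-state power (megawatts) needed to sustain a token rate."""
    watts = tokens_per_second / tokens_per_joule  # 1 W = 1 J/s
    return watts / 1e6

# Serving 10 million tokens/s at 1 token/J vs. 10 tokens/J:
for eff in (1.0, 10.0):
    print(f"{eff:>4.0f} tok/J -> {required_power_mw(1e7, eff):.1f} MW")
```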
Cost exponentials decide who gets to participate in the intelligence economy.
Why the Flywheel Keeps Spinning
Each exponential reinforces the others. Cheaper inference drives adoption. Adoption generates data and revenue. Revenue funds better chips and training runs. Better models unlock new use cases, restarting the cycle at a higher level.
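A toy simulation makes the loop concrete. Every coefficient here is an illustrative assumption; what matters is the coupling between the stages, not the numbers:

```python
# A toy model of the flywheel: capability -> adoption -> revenue ->
# compute -> capability. All coefficients are illustrative assumptions.

def flywheel(years: int = 10) -> list[float]:
    """Simulate the coupled feedback loop and return capability over time."""
    capability, adoption, compute = 1.0, 1.0, 1.0
    history = [capability]
    for _ in range(years):
        adoption *= 1 + 0.5 * (capability - 1)  # better models win users
        revenue = adoption                       # revenue tracks adoption
        compute *= 1 + 0.3 * revenue             # revenue buys more compute
        capability = compute ** 0.2              # diminishing returns
        history.append(capability)
    return history

# Year-over-year gains grow because every stage feeds the next:
print([round(c, 2) for c in flywheel()])
```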
This is not a bubble. It is a structural regime shift in how intelligence is produced and distributed.
The critical question is no longer whether abundance is possible—but how it will be organized.
A Post-Labor Question: Who Owns the Engine?
As explored in The Design of the Commons – Scaling Cooperation, computing power and energy are becoming the new means of production. If the intelligence flywheel remains privately enclosed, we risk a world where abundance exists—but access does not.
Alternatively, intelligence could be treated as infrastructure: cooperatively governed, transparently allocated, and aligned with shared wellbeing. This is where ideas like Mutual AI, DAOs, and platform cooperatives (Solarpunk Governance) become more than ideology—they become necessary design responses.
Intelligence as Infrastructure
AI is no longer just a product. It is becoming civilizational infrastructure, on par with energy grids or transportation networks.
The nine exponentials ensure acceleration. The intelligence flywheel ensures persistence. But neither guarantees justice.
The engine of abundance is being built whether we like it or not.
The only open question is who it will serve.
What would it mean to treat intelligence itself as a commons?