As intelligence scales, so must the systems that power it.
Artificial intelligence has advanced at a remarkable pace. Over the past two years, development has been driven by breakthroughs in research, expanding datasets and the exponential scaling of compute. But as large models reach new levels of capability, a fundamental question is emerging: can our energy systems keep up?
While algorithmic progress continues, the limiting factor for AI is shifting. The next phase of growth will be determined not by new models but by infrastructure, and most critically by energy.
Leaders sound the alarm
Industry leaders are increasingly acknowledging the scale of the challenge. Former Google CEO Eric Schmidt warned in 2024 that energy availability and cost will be the key constraints on AI development, noting that a single next-generation AI data centre could require up to 10 gigawatts of power, the equivalent of ten nuclear plants. Most US nuclear plants produce just one gigawatt.
“This is a scale of industry I’ve never seen in my life,” he said, estimating that AI data centres could demand up to 96 additional gigawatts by 2030, roughly the combined output of the entire current US nuclear fleet.[1]
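To put those figures in perspective, here is a back-of-the-envelope comparison. The one-gigawatt-per-plant figure comes from the text above; the ~1,200 GW total for US utility-scale generating capacity is an outside approximation, not from the sources cited here.

```python
# Back-of-the-envelope scale comparison for the figures quoted above.
# Assumptions: ~1 GW per typical US nuclear plant (as stated in the text);
# ~1,200 GW total US utility-scale generating capacity (approximate).

DATA_CENTRE_GW = 10        # one next-generation AI data centre (Schmidt)
ADDITIONAL_DEMAND_GW = 96  # projected additional AI demand by 2030 (Schmidt)
PLANT_GW = 1               # typical output of a single US nuclear plant
US_CAPACITY_GW = 1_200     # approximate total US generating capacity

print(f"Plants per data centre:            {DATA_CENTRE_GW / PLANT_GW:.0f}")
print(f"Plants for projected 2030 demand:  {ADDITIONAL_DEMAND_GW / PLANT_GW:.0f}")
print(f"Share of US generating capacity:   {ADDITIONAL_DEMAND_GW / US_CAPACITY_GW:.0%}")
```

On these rough numbers, the projected 2030 demand alone would absorb the output of roughly 96 one-gigawatt plants, close to a tenth of today's total US generating capacity.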
At a US Senate hearing in May 2024, OpenAI CEO Sam Altman echoed the concern. “The cost of AI will converge to the cost of energy,” he said, adding that as compute becomes more efficient and increasingly automated, electricity will become the primary limiting factor.[2]
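Altman's claim can be made concrete with a toy calculation. This is only an illustrative sketch: the per-token energy figure and electricity price below are hypothetical placeholders, not measured values.

```python
# Toy model of "the cost of AI converging to the cost of energy":
# if hardware and software overheads keep shrinking, the floor under
# inference cost is the electricity bill itself.
# Both inputs are hypothetical illustration values, not measurements.

JOULES_PER_TOKEN = 2.0   # hypothetical energy to generate one token
PRICE_PER_KWH = 0.08     # hypothetical electricity price in USD
JOULES_PER_KWH = 3.6e6   # physical constant: 1 kWh = 3.6 MJ

cost_per_token = JOULES_PER_TOKEN / JOULES_PER_KWH * PRICE_PER_KWH
print(f"Energy-floor cost per million tokens: ${cost_per_token * 1e6:.3f}")
```

On these placeholder numbers the energy floor works out to a few cents per million tokens; the convergence argument is that everything above that floor is engineering overhead that efficiency gains will steadily erode.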
A growing bottleneck
Analysts and researchers increasingly agree: energy is becoming the bottleneck for AI advancement. Contributing factors include:
- Rising demand for high-density compute to support foundation models
- Exponential growth in energy use from data centres
- Increasing need for always-on, low-latency global infrastructure
Academic studies and reports such as OpenAI’s GPT-4 Technical Report and foundational work on scaling laws show that increasing model size and dataset size directly improves performance, but also significantly raises energy consumption.[3][4]
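As a simplified sketch of that relationship: the scaling-laws paper fits power laws of the following form, where L is test loss, N is parameter count, D is the number of training tokens, and N_c and α_N are fitted constants; the compute estimate C ≈ 6ND is the paper's widely used rule of thumb (the exact constants vary with the training setup).

```latex
% Power-law scaling of loss with model size, after Kaplan et al. (2020),
% plus the approximate training-compute rule of thumb for D tokens.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad \alpha_N \approx 0.076,
\qquad C \approx 6\,N\,D \ \text{FLOPs}
```

Because α_N is small, loss falls slowly while compute grows linearly in both N and D: halving the model-size-limited loss term takes roughly a 2^(1/0.076) ≈ 10,000-fold increase in parameters, so each further gain in capability costs disproportionately more energy.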
Sustaining progress in AI will require major investment in:
- Renewable energy generation
- Grid capacity and stability
- Long-distance transmission networks
- Efficient, distributed energy systems
Nations scale up for the energy race
Countries with strong energy manufacturing and deployment capacity are scaling rapidly. China, for example, is building nearly twice as much wind and solar capacity as the rest of the world combined. By 2030, its manufacturing base could add solar and storage capacity equivalent to the entire US grid each year—a sign of the scale required to meet future demand.[5]
Fusion for future infrastructure
Amid growing concern over AI’s energy demands, the low-energy nuclear reaction (LENR) sector is emerging as a promising field. LENR technologies aim to deliver clean, efficient and scalable energy without the radioactive waste or emissions associated with conventional nuclear processes. With compact systems and potentially high energy output, LENR could provide a critical foundation for future compute infrastructure.
ENG8’s catalysed fusion system builds on this potential. Designed for high efficiency and minimal environmental impact, it offers a pathway to clean energy that could support AI infrastructure without straining existing grids. With a focus on compact, scalable systems, we aim to support the power needs of today’s compute-intensive operations.
Energy has always powered transformation. Today, the transformation is intelligence itself. The question is whether our energy systems can keep pace.
References
1. Eric Schmidt, Milken Institute Conference, May 2024.
2. Sam Altman, testimony to the US Senate Judiciary Subcommittee on Privacy, Technology and the Law, 16 May 2024.
3. OpenAI (2023). GPT-4 Technical Report. arXiv:2303.08774.
4. Kaplan, J., McCandlish, S., et al. (2020). Scaling Laws for Neural Language Models. arXiv:2001.08361.
5. Global Energy Monitor (2024); Ember (2025); Our World in Data.