Sam Altman’s vision of AI as a utility—like electricity or water—isn’t just a futuristic musing; it’s a structural shift that could redefine how industries, governments, and individuals consume technology. Speaking at the BlackRock Infrastructure Summit, Altman framed AI not as a product to be bought outright but as a metered service, where users pay for computational power as they go. The idea isn’t entirely novel—cloud computing already operates on a similar premise—but the scale and ubiquity of AI suggest a deeper integration into daily life. If OpenAI and its peers treat intelligence as a utility, the implications for pricing, infrastructure, and access could mirror those of traditional utilities: massive capital investments in data centers, regulatory scrutiny over pricing models, and debates over whether such services should be subsidized or universally accessible.
The utility analogy forces a confrontation with a harsh reality: demand and supply imbalances could price out segments of the market. Altman acknowledged as much, warning that if OpenAI can't meet demand, prices would rise, potentially locking out less affluent users. This isn't hypothetical: early signs are already visible in cloud computing, where AI workloads are driving up costs for data center operators. Companies like Nvidia, which supplies the GPUs powering most AI models, are racing to expand capacity, but the sheer scale of demand (ChatGPT alone now serves 900 million weekly active users) means infrastructure bottlenecks could become chronic. The question isn't whether AI will become a utility, but how its pricing and distribution will be managed to prevent a two-tiered system in which only the wealthy or well-funded can afford premium access.
Jensen Huang’s prediction that AI agents will outnumber human workers by a 100:1 ratio at Nvidia by 2034 further underscores the inevitability of AI’s utility-like integration into the economy. If employers start “hiring” AI agents as they would human employees, the demand for computational power won’t just be about answering questions or generating text—it will be about sustaining entire workforces of digital labor. This transition could reshape job markets overnight, creating a new class of “digital workers” that operate alongside humans. But it also raises questions about accountability, ethics, and the human cost of such efficiency. Who bears responsibility when an AI agent makes a mistake? How do we ensure that the benefits of this digital workforce are distributed equitably?
The utility model also introduces a paradox: as AI becomes more essential, the companies controlling its infrastructure gain unprecedented influence. Altman’s mention of “selling tokens” suggests a commoditized approach, but in practice, this could centralize power in the hands of a few tech giants. The water and electricity utilities of the past were often state-regulated to prevent monopolistic abuse. Will AI follow the same path, or will it be governed by market forces alone? The answer will determine whether AI’s utility status empowers users or entrenches existing power structures.
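The "selling tokens" framing can be made concrete with a toy metering calculation. The sketch below is a hypothetical illustration, not any provider's actual pricing: the per-token rates and the `demand_multiplier` parameter are invented placeholders, the latter standing in for Altman's warning that constrained compute supply pushes effective prices up. The point is simply that usage-based AI billing works like a utility meter reading.

```python
# Toy model of utility-style, usage-based AI billing.
# All rates are hypothetical placeholders, not real prices.

IN_RATE = 2.00 / 1_000_000   # dollars per input token (hypothetical)
OUT_RATE = 8.00 / 1_000_000  # dollars per output token (hypothetical)

def metered_cost(input_tokens: int, output_tokens: int,
                 demand_multiplier: float = 1.0) -> float:
    """Bill one request the way a utility meter would.

    demand_multiplier models supply pressure: when demand outstrips
    compute capacity, the effective price per token rises.
    """
    base = input_tokens * IN_RATE + output_tokens * OUT_RATE
    return round(base * demand_multiplier, 6)

# One chat request: 1,500 tokens in, 500 tokens out.
print(metered_cost(1_500, 500))       # normal supply -> 0.007
print(metered_cost(1_500, 500, 2.5))  # constrained supply -> 0.0175
```

Trivial as the arithmetic is, it shows why the debates in the paragraphs above matter: whoever sets the rate constants and the scarcity multiplier effectively sets the price of intelligence, which is exactly the lever regulators historically controlled for water and electricity.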
For industries like water management, which are already grappling with digital transformation, the rise of AI as a utility could be a double-edged sword. On one hand, AI-driven predictive analytics could optimize water distribution, detect leaks in real time, and reduce waste—transforming utilities into smarter, more responsive systems. On the other, the cost of integrating such technologies could widen the gap between wealthy and developing regions, leaving smaller utilities unable to compete. The sector must start asking now whether it wants to be a passive consumer of AI utilities or an active participant in shaping how they’re deployed. The decisions made in the next decade will determine whether AI becomes a democratizing force or a luxury good.

