A recent report from MIT Technology Review is raising questions about Google’s claims around the energy and water consumption of its AI services, suggesting that the tech giant’s numbers may significantly understate the true environmental and financial costs that enterprises need to plan for.
Last week, Google published figures estimating that a single AI text query consumes roughly a quarter of a watt-hour of electricity and the equivalent of five drops of water. While these numbers sound small, Technology Review journalist Casey Crownhart argues they don’t tell the full story. The estimate represents only the median consumption for text-based queries, leaving out more complex requests—like image or video generation—that likely demand far more energy.
Crownhart, who co-authored a comprehensive analysis of AI’s environmental footprint earlier this year, noted that without insight into energy consumption at the higher end of Google’s usage spectrum, it’s impossible to gauge the real environmental impact of AI operations:
“We don’t know how much energy these more complicated queries demand or what the full range looks like,” she wrote.
A Narrow View of a Massive System
By publishing only the resource use of a single text query, Google has effectively minimized the scope of its AI impact. The company hasn’t disclosed how many queries Gemini, its flagship AI model, processes daily. While Google has reported 450 million monthly active users for Gemini, that number represents just a slice of its AI ecosystem, which also powers search summaries, Gmail drafts, and other services.
By contrast, rival OpenAI has been more forthcoming, disclosing that ChatGPT handles 2.5 billion queries every day. Without comparable data from Google, it is difficult for enterprises—or even individual users—to gauge the scale of energy and water use tied to their AI interactions.
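To put those figures in context, a rough back-of-envelope calculation helps. The sketch below is illustrative only: it applies Google’s published median of roughly 0.24 Wh per text query to OpenAI’s disclosed ChatGPT volume, used purely as a stand-in, since Google has not published a comparable number for Gemini.

```python
# Back-of-envelope scale estimate. The per-query figure is Google's published
# median for a text query; the daily volume is OpenAI's ChatGPT figure, used
# only as a stand-in because Google has not disclosed Gemini's query count.
WH_PER_QUERY = 0.24          # roughly a quarter of a watt-hour
QUERIES_PER_DAY = 2.5e9      # hypothetical volume borrowed from ChatGPT

daily_mwh = WH_PER_QUERY * QUERIES_PER_DAY / 1e6   # Wh -> MWh
print(f"Daily energy: {daily_mwh:,.0f} MWh")                 # 600 MWh/day
print(f"Annualized:   {daily_mwh * 365 / 1000:,.0f} GWh/yr") # ~219 GWh/yr
```

Even at the median rate, that works out to hundreds of megawatt-hours per day for text alone, before a single image or video generation is counted, which is precisely the range the published figure leaves out.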
Why It Matters for IT Leaders
The debate isn’t just academic. Enterprises are directly paying for AI services, and those costs include the energy and water consumption behind them. CIOs planning budgets for 2026 face a moving target: AI capabilities are evolving rapidly, and costs will depend on the complexity of future workloads.
For organizations considering whether to bring AI computing in-house, these concerns become even more pressing. Deploying AI infrastructure internally doesn’t just mean sourcing scarce chips like NVIDIA’s top-tier accelerators—it also requires grappling with power and cooling constraints that are increasingly difficult to ignore.
Matt Kimball, VP and principal analyst at Moor Insights & Strategy, warns CIOs to start aligning more closely with operations teams:
If you’re not already speaking with your facilities teams about power requirements versus power availability, start immediately. Power shouldn’t just be a line item—it’s a strategic concern.
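What that conversation might look like in practice: below is a minimal sketch for comparing projected cluster power demand against facility capacity. Every figure in it (GPU count, board power, overhead factor, PUE, available capacity) is an illustrative assumption, not a vendor specification.

```python
# Minimal sketch: projected AI cluster power demand vs. facility capacity.
# All numbers are illustrative placeholders, not vendor specifications.

def facility_power_kw(num_gpus: int, gpu_tdp_w: float,
                      overhead_factor: float = 1.3, pue: float = 1.4) -> float:
    """Estimate total facility draw in kW.

    overhead_factor covers CPUs, memory, networking, and storage per node;
    pue (power usage effectiveness) covers cooling and power distribution.
    """
    it_load_kw = num_gpus * gpu_tdp_w * overhead_factor / 1000
    return it_load_kw * pue

demand_kw = facility_power_kw(num_gpus=512, gpu_tdp_w=1000)  # hypothetical cluster
available_kw = 800                                           # hypothetical budget

print(f"Projected demand: {demand_kw:,.0f} kW vs. available: {available_kw} kW")
if demand_kw > available_kw:
    print("Shortfall: talk to facilities before ordering hardware.")
```

Even this crude model makes the point: cooling and power-distribution overhead can push a cluster that looks affordable on paper well past a facility’s actual capacity.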
Rethinking Infrastructure Efficiency
Energy use isn’t just a problem for compute-heavy tasks. Storage infrastructure is another area where enterprises can cut costs and consumption. Kimball recommends moving away from spinning media in favor of all-flash storage, which is more energy-efficient despite higher upfront costs. He also urges IT leaders to consider whether they need top-of-the-line hardware such as NVIDIA’s B300 GPUs, or whether lower-powered, more affordable options could meet their needs:
“RTX 6000 PRO GPUs, for example, use about 40% of the power compared with a B300 while offering strong performance,” he said.
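As a rough illustration of what that trade-off means for an annual power bill, the sketch below compares two GPU tiers. The board-power and electricity-price figures are assumptions chosen to be in the ballpark of the quoted comparison, not quoted specifications.

```python
# Rough annual power-cost comparison between GPU tiers. TDPs and the
# electricity rate are illustrative assumptions; consult vendor datasheets
# and your utility contract for real figures.
GPUS = 64
HOURS_PER_YEAR = 24 * 365
USD_PER_KWH = 0.12                      # assumed commercial electricity rate

tiers = {
    "top-tier accelerator": 1400,       # assumed board power, watts
    "workstation-class GPU": 600,       # assumed board power, watts
}

for name, tdp_w in tiers.items():
    annual_kwh = GPUS * tdp_w * HOURS_PER_YEAR / 1000
    print(f"{name:>22}: {annual_kwh:,.0f} kWh/yr, about ${annual_kwh * USD_PER_KWH:,.0f}")
```

At those assumed figures the lower tier draws roughly 40% of the power, in line with Kimball’s comparison; whether the performance trade-off works out depends entirely on the workload.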
Simon Ninan, SVP of business strategy at Hitachi Vantara, added that AI’s energy footprint is forcing a fundamental rethink of data center design:
Air cooling is becoming insufficient. We’re seeing a major shift toward liquid cooling for AI workloads, and the industry will need to invest heavily in new solutions that respect environmental constraints.
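One way to frame that shift is through power usage effectiveness (PUE), the ratio of total facility power to IT load. The PUE values in the sketch below are commonly cited approximations; real facilities vary widely.

```python
# How cooling efficiency (expressed as PUE) changes total facility power
# for the same IT load. PUE values are rough approximations, not measurements.
IT_LOAD_KW = 1000                         # hypothetical AI cluster IT load

scenarios = {
    "air-cooled (PUE ~1.5)": 1.5,
    "liquid-cooled (PUE ~1.15)": 1.15,
}

for label, pue in scenarios.items():
    total_kw = IT_LOAD_KW * pue
    overhead_kw = total_kw - IT_LOAD_KW   # power spent on cooling/distribution
    print(f"{label}: {total_kw:,.0f} kW total, {overhead_kw:,.0f} kW overhead")
```

At megawatt scale, the difference amounts to hundreds of kilowatts of continuous overhead, which is why cooling design is increasingly a budget question rather than a facilities detail.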
The Takeaway
Google’s numbers might make AI seem like a minimal drain on resources, but a closer look shows a far more complicated—and costly—picture. For IT leaders, understanding these dynamics is no longer optional. The rise of AI is forcing enterprises to scrutinize energy use, reconsider hardware strategies, and even rethink data center design to stay competitive in an increasingly resource-intensive era of computing.