By Mike Koetting August 19, 2025
Today’s blog is the second of two posts about AI. It’s not about whether AI is good or bad for society. That’s a worthwhile discussion and I expect it to remain a hot topic for the foreseeable future. These posts, however, are based on the observable reality that, like it or not, it’s coming.
These posts focus on some practical questions about its arrival. Part One looked at some of the economic impacts; today’s post considers environmental issues and questions of security and reliability.
Risk #2: Environmental Issues
It is hardly a newsflash that AI has huge energy demands.
A report from the Lawrence Berkeley National Laboratory forecast that, to meet the growing demands of AI, US data centers could consume as much as 12% of the nation’s electricity by 2028. In Texas alone, a single utility reported demand for 119 gigawatts of power from data centers, significantly more than the current power generation capacity of the entire state.
The resulting increase in demand is a well-documented major factor in rising utility rates. The Washington Post, the New York Times, the Chicago Sun-Times and the Chicago Tribune (via an Associated Press article that presumably ran in many other cities) recently reported on the role of AI in driving up electric rates in various localities. In an analysis of rate requests in Ohio, an independent monitor concluded that three-quarters of the price increase was due to data center demand, primarily to support AI.
One of the side problems created by the excess of demand over electricity supply is that data center owners are seeking their own electricity supplies outside, or on the margins of, existing infrastructure. How this works out and what the longer-term impacts will be remain to be seen. In the short term, there are grounds for concern because the regulatory framework for some of these adventures is foggy. This is a particular worry because some data center sponsors are either creating new nuclear power facilities or making deals to extend or retrofit facilities that have been or are being phased out. While the idea shouldn’t be rejected out of hand—there are substantive arguments that it is hard to construct an environmentally sustainable society without nuclear power—doing it on the fly in quasi-controlled circumstances might not be the best way to address the issue.
Another important issue is that the rise of data centers puts pressure on the water supply. Data centers require huge amounts of water for cooling, much of which is not currently recyclable. While water shortages don’t receive as much environmental attention as some other threats, they are a growing problem all over the world, even in the U.S. AI will make it worse. Two-thirds of data centers opened since 2022, or now in development, are in areas with high levels of water stress.

Since AI’s potential for environmental damage is incontrovertible, it shouldn’t surprise anyone that, among other depredations, Trump’s new plan to expand AI does away with existing environmental protections by sidestepping the National Environmental Policy Act, which for the last 55 years has been the foundation of environmental preservation in this country.
Given the magnitude of the issues raised by creating data centers at the pace necessary to support AI-inflated demand, it would make sense if there were some broader regulatory framework for their establishment. The experience of Northern Virginia is instructive. At first, communities welcomed data centers with open arms (and often tax inducements). Now that the full scope of what is and could be happening is coming into focus, they are starting to think about how to regulate. I don’t suppose there is any practical way to stop the regulatory framework from popping up piecemeal (or not at all), but it is clear there has been too little systematic thought about the scale at which we are introducing this change.
A final note here. Some of the damage we anticipate will in fact be avoided. Technology will improve and AI will become more efficient. Already we know the Chinese have developed AI capabilities with dramatically smaller environmental footprints. However, all these improvements are uncertain and the known risks are all too certain. This makes now exactly the worst possible time to allow corporations to ride roughshod over the environment.
Risk #3: Security/Reliability Concerns
One risk that gets far too little attention follows from AI being a smashing success—it delivers on many of its substantive promises, we work out the economic problems, and we mitigate the environmental issues. AI will, at that point, be so thoroughly embedded in the worldwide fabric of life that it would be hard to imagine life without it. In that case—indeed, well before we reach this halcyon state—the security and reliability of the machinery driving AI will become some version of “essential” to our well-being.
Unfortunately, this raises a whole new set of issues, some immediate and obvious, and some a bit more speculative.
One of the more immediate and obvious concerns is President Trump’s idea to house a substantial chunk of AI capability in Saudi Arabia and the United Arab Emirates, in data centers that will serve U.S. and Gulf AI firms operating cutting-edge models. While there are some obvious reasons this might work, is the Middle East really the place where we want to locate something that could be so essential to day-to-day life in our country? It would be hard to imagine a less stable location with more possibilities for disruption, short of situating them on Taiwan. (Which, given its continuing role in providing essential computer chips, is already a risk to stable computing capability in the U.S.)
A more nebulous concern is what happens if we start to run out of energy or water. It’s hard to get people to think about this, in large part because they imagine it as everything disappearing in one day. It won’t happen like that. As fossil fuel gradually becomes harder to pump—and there is no reason to believe that won’t happen within our children’s lifetimes—costs will go up and priorities will have to be set. What will be more important—maintaining AI, investing in food supply systems, powering our personal cars, or keeping the house warm?
I know it is impossible to get policy makers to make decisions based on this. The consequences are drastic, but they sit somewhere in the future, and everything seems fine now. Still, maybe some people should be modeling how this is going to play out without the blithe assumption that “some new technology will emerge.”

The Real Choices
I am not suggesting we abandon AI. That would be like someone in 1900 saying we should un-invent automobiles because there might be accidents in the future that would kill people. Progress entails uncertain ends and, accordingly, requires taking risks.
That said, looking back, we probably could have avoided several million premature deaths by focusing more on safety from the beginning. And we certainly would be in less of an environmental mess if we had taken the alarms seriously 50 years ago instead of putting so much effort into downplaying and avoiding them.
So here’s the real choice about AI: are we going to take prudent actions now against the worst outcomes, or are we going to give the tech bros a blank check to create our future based on their short-run profits? We tend to think of AI as a disembodied technology above it all. But the truth is that it is embedded in a web of material systems. Society has the ability—indeed, I would suggest the obligation—to regulate those systems for the broader good. I am seriously worried that too few people see this as a choice we can make, because our current political system makes it almost impossible for an issue this complicated to get attention, particularly since the winners have so much reason to obfuscate the choice and tie it up in unproductive partisan acrimony.
We can only hope that human intelligence is up to the task of managing artificial intelligence.