This story was originally published by Grist.

The conversation around energy use in the United States has become . . . electric. Everyone from President Donald Trump to the cohosts of the Today show has been talking about the surging demand for, and rising costs of, electrons. Many people worry that utilities won't be able to produce enough power. But a report released today argues that the better question is: Can we use what utilities already produce more efficiently in order to absorb the coming surge?

"A lot of folks have been looking at this from the perspective of, 'Do we need more supply-side resources and gas plants?'" said Mike Specian, utilities manager with the nonprofit American Council for an Energy-Efficient Economy, or ACEEE, who wrote the report. "We found that there is a lack of discussion of demand-side measures."

When Specian dug into the data, he discovered that implementing energy-efficiency measures and shifting electricity usage to lower-demand times are two of the fastest and cheapest ways of meeting the growing thirst for electricity. These moves could help meet much, if not all, of the nation's projected load growth. Moreover, they would cost only half, or less, of what building out new infrastructure would, while avoiding the emissions those operations would bring.

But Specian also found that governments could be doing more to incentivize utilities to take advantage of these demand-side gains. "Energy efficiency and flexibility are still a massive untapped resource in the U.S.," he said. "As we get to higher levels of electrification, it's going to become increasingly important."

The report estimated that by 2040, utility-driven efficiency programs could cut usage by about 8 percent, or around 70 gigawatts, and that making those cuts currently costs around $20.70 per megawatt-hour saved. The cheapest gas-fired power plants now start at about $45 per megawatt-hour generated.

While the cost of load shifting is harder to pin down, the report estimates that moving electricity use away from peak hours (often through time-of-use pricing, smart devices, or utility controls) to times when the grid is less strained and power is cheaper could save another 60 to 200 gigawatts of power by 2035. That alone would far outweigh even the most aggressive near-term projections for data center capacity growth.

Vijay Modi, director of the Quadracci Sustainable Engineering Laboratory at Columbia University, agrees that energy efficiency is critical but isn't sure how many easy savings are left to be had. He also believes that governments at every level, rather than utilities, are best suited to incentivize that work.

He sees greater potential in balancing loads to ease peak demand. "This is a big concern," he said, explaining that when peak load goes up, it can require upgrading substations, transformers, power lines, and a host of other distribution equipment. That raises costs and rates. Utilities, he added, are well positioned to solve this because they have the data needed to effectively shift usage, and they are already taking steps in that direction by investing in load management software, installing battery storage, and generating electricity closer to end users with things like small-scale renewable energy. "It defers some of the heavy investment," said Modi. "In turn, the customer also benefits."
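The arithmetic behind those claims is simple enough to sketch. In the snippet below, only the two dollars-per-megawatt-hour figures come from the report as described above; the toy hourly load curve and the 15 percent "flexible share" are invented for illustration, not numbers from ACEEE or the utilities quoted here.

```python
# Illustrative back-of-the-envelope sketch of the article's two claims:
# (1) efficiency is cheaper per MWh than new gas generation, and
# (2) shifting flexible load off-peak lowers the peak the grid must serve.
# The load profile and the 15% flexible share are made-up illustration
# values, not figures from the ACEEE report.

EFFICIENCY_COST = 20.70   # $/MWh saved, per the report
GAS_COST = 45.00          # $/MWh generated, low end for new gas plants

mwh = 1_000_000           # serve 1 TWh of demand growth either way
savings = (GAS_COST - EFFICIENCY_COST) * mwh
print(f"Meeting {mwh:,} MWh with efficiency instead of gas saves ${savings:,.0f}")

# A toy six-hour load curve (GW); the peak is in hour 2.
load = [60, 75, 90, 85, 70, 55]
flexible_share = 0.15     # assume 15% of peak-hour load can move off-peak

peak = max(load)
new_peak = peak - peak * flexible_share  # that energy is consumed later, off-peak
print(f"Peak falls from {peak} GW to {new_peak:.1f} GW without new plants")
```

The second half is Modi's point in miniature: the same energy gets delivered, but the lower peak is what spares the substations, transformers, and power lines.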
Specian says that one reason utilities tend to focus on the supply side of the equation is that they can often make more money that way. Building infrastructure is considered a capital investment, and utilities can pass that cost on to customers, plus an additional rate of return, or premium, which is typically around 10 percent. Energy-efficiency programs, however, are generally considered an operating expense, which isn't eligible for a rate of return. This setup, he said, motivates utilities to build new infrastructure rather than conserve energy, even if the latter presents a more affordable option for ratepayers. "Our incentives aren't properly lined up," said Specian.

State legislators and regulators can address this, he said, by implementing energy-efficiency resource standards or performance-based regulation. Decoupling, which separates a company's revenue from the amount of electricity it sells, is another tactic that many states are adopting.

Joe Daniel, who runs the carbon-free electricity team at the nonprofit Rocky Mountain Institute, has also been watching a model known as fuel cost sharing, which allows utilities and ratepayers to share any savings or added costs rather than passing them on entirely to customers. "It's a policy that seems to make logical sense," he said. A handful of states across the political spectrum have adopted the approach, and of the people he's spoken with or heard from, Daniel said, "every consumer advocate, every state public commissioner, likes it."

The Edison Electric Institute, which represents all of the country's investor-owned electric companies, told Grist that regardless of regulation, utilities are making progress in these areas. "EEI's member companies operate robust energy-efficiency programs that save enough electricity each year to power nearly 30 million U.S. homes," the organization said in a statement. "Electric companies continue to work closely with customers who are interested in demand response, energy efficiency, and other load-flexibility programs that can reduce their energy use and costs."

Because infrastructure changes happen on long timelines, it's critical to keep pushing on these levers now, said Ben Finkelor, executive director of the Energy and Efficiency Institute at the University of California, Davis. "The planning is 10 years out," he said, adding that preparing today could save billions in the future. "Perhaps we can avoid building those baseload assets."

Specian hopes his report reaches legislatures, regulators, and consumers alike. Whoever reads it, he says, the message should be clear.

By Tik Root

This article originally appeared in Grist. Grist is a nonprofit, independent media organization dedicated to telling stories of climate solutions and a just future. Learn more at Grist.org.
For the past two years, artificial intelligence has felt oddly flat. Large language models spread at unprecedented speed, but they also erased much of the competitive gradient. Everyone has access to the same models, the same interfaces, and, increasingly, the same answers. What initially looked like a technological revolution quickly started to resemble a utility: powerful, impressive, and largely interchangeable, a dynamic already visible in the rapid commoditization of foundation models across providers like OpenAI, Google, Anthropic, and Meta.

That flattening is not an accident. LLMs are extraordinarily good at one thing, learning from text, but structurally incapable of another: understanding how the real world behaves. They do not model causality, they do not learn from physical or operational feedback, and they do not build internal representations of environments, limitations that even their most prominent proponents now openly acknowledge. They predict words, not consequences, a distinction that becomes painfully obvious the moment these systems are asked to operate outside purely linguistic domains.

The false choice holding AI strategy back

Much of today's AI strategy is trapped in binary thinking. Either companies rent intelligence from generic models, or they attempt to build everything themselves: proprietary infrastructure, bespoke compute stacks, and custom AI pipelines that mimic hyperscalers. That framing is both unrealistic and historically illiterate.

Most companies did not become competitive by building their own databases. They did not write their own operating systems. They did not construct hyperscale data centers to extract value from analytics. Instead, they adopted shared platforms and built highly customized systems on top of them, systems that reflected their specific processes, constraints, and incentives. AI will follow the same path.

World models are not infrastructure projects

World models, systems that learn how environments behave, incorporate feedback, and enable prediction and planning, have a long intellectual history in AI research. More recently, they have reemerged as a central research direction precisely because LLMs plateau when faced with reality, causality, and time. They are often described as if they required vertical integration at every layer. That assumption is wrong.

Most companies will not build bespoke data centers or proprietary compute stacks to run world models. Expecting them to do so repeats the same mistake seen in earlier "AI-first" or "cloud-native" narratives, where infrastructure ambition was confused with strategic necessity. What will actually happen is more subtle and more powerful: World models will become a new abstraction layer in the enterprise stack, built on top of shared platforms in the same way databases, ERPs, and cloud analytics are today. The infrastructure will be common. The understanding will not.

Why platforms will make world models ubiquitous

Just as cloud platforms democratized access to large-scale computation, emerging AI platforms will make world modeling accessible without requiring companies to reinvent the stack. They will handle simulation engines, training pipelines, integration with sensors and systems, and the heavy computational lifting, exactly the direction already visible in reinforcement learning, robotics, and industrial AI platforms.
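What "world model" means at the code level is easy to lose in the abstraction, so here is a minimal sketch, not any vendor's API: a toy model that learns a one-dimensional dynamic from observed transitions and corrects itself whenever reality disagrees with its prediction. The environment, the linear dynamics, and the learning rate are all invented for illustration.

```python
# Minimal sketch of the predict -> act -> observe -> correct loop that
# distinguishes a world model from a text predictor. The "environment"
# and its linear dynamics are toy assumptions for illustration only.
import random

class TinyWorldModel:
    """Learns next_state ~= a * state + b * action from feedback."""
    def __init__(self, lr=0.01):
        self.a, self.b, self.lr = 0.0, 0.0, lr

    def predict(self, state, action):
        return self.a * state + self.b * action

    def correct(self, state, action, observed_next):
        # Gradient step on the squared prediction error: the model is
        # literally "corrected by reality" after every observation.
        error = self.predict(state, action) - observed_next
        self.a -= self.lr * error * state
        self.b -= self.lr * error * action

def true_environment(state, action):
    # Hidden dynamics the model must discover (plus noise).
    return 0.9 * state + 0.5 * action + random.gauss(0, 0.01)

model, state = TinyWorldModel(), 1.0
for step in range(5000):
    action = random.uniform(-1, 1)
    next_state = true_environment(state, action)
    model.correct(state, action, next_state)
    state = next_state

print(f"learned a={model.a:.2f} (true 0.9), b={model.b:.2f} (true 0.5)")
```

The differentiation argument of this piece lives in the correct step: two companies can run the same loop on the same platform, and the one whose feedback arrives faster and cleaner ends up with the better model.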
None of this commoditizes world models. Quite the opposite: when the platform layer is shared, differentiation moves upward. Companies compete not on who owns the hardware, but on how well their models reflect reality: which variables they include, how they encode constraints, how feedback loops are designed, and how quickly predictions are corrected when the world disagrees. Two companies can run on the same platform and still operate with radically different levels of understanding.

From linguistic intelligence to operational intelligence

LLMs flattened AI adoption because they made linguistic intelligence universal. But purely text-trained systems lack deeper contextual grounding, causal reasoning, and temporal understanding, limitations well documented in foundation-model research. World models will unflatten adoption again by reintroducing context, causality, and time, the very properties missing from purely text-trained systems.

In logistics, for example, the advantage will not come from asking a chatbot about supply chain optimization. It will come from a model that understands how delays propagate, how inventory decisions interact with demand variability, and how small changes ripple through the system over weeks or months; a toy version of exactly that dynamic appears in the sketch at the end of this piece.

Where competitive advantage will actually live

The real differentiation will be epistemic, not infrastructural. It will come from how disciplined a company is about data quality, how rigorously it closes feedback loops between prediction and outcome (remember this sentence: feedback is all you need), and how well organizational incentives align with learning rather than narrative convenience. World models reward companies that are willing to be corrected by reality, and punish those that are not.

Platforms will matter enormously. But platforms only standardize capability, not knowledge. Shared infrastructure does not produce shared understanding: Two companies can run on the same cloud, use the same AI platform, even deploy the same underlying techniques, and still end up with radically different outcomes, because understanding is not embedded in the infrastructure. It emerges from how a company models its own reality.

Understanding lives higher up the stack, in choices that platforms cannot make for you: which variables matter, which trade-offs are real, which constraints are binding, what counts as success, how feedback is incorporated, and how errors are corrected. A platform can let you build a world model, but it cannot tell you what your world actually is.

Think of it this way: Every company using SAP does not have the same operational insight. Every company running on AWS does not have the same analytical sophistication. The infrastructure is shared; the mental model is not. The same will be true for world models. Platforms make world models possible. Understanding makes them valuable.

The next enterprise AI stack

In the next phase of AI, competitive advantage will not come from building proprietary infrastructure. It will come from building better models of reality on top of platforms that make world modeling ubiquitous. That is a far more demanding challenge than buying computing power. And it is one that no amount of prompt engineering will be able to solve.
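To ground the logistics example promised above: the following is a deliberately tiny simulation, with invented numbers throughout, of the kind of behavior a chatbot cannot see but a world model must capture. A two-day increase in supplier lead time propagates into stockouts days later, because reorders arrive after demand has already drained the shelf.

```python
# Toy supply-chain dynamic: invented numbers, for illustration only.
# Shows how a small change (supplier lead time +2 days) ripples forward
# into stockouts later on -- the temporal, causal behavior a text-only
# model cannot represent but a world model must.
import random

def simulate(lead_time_days, days=60, seed=7):
    random.seed(seed)
    stock, pipeline, stockout_days = 100, [], 0
    for day in range(days):
        demand = random.randint(8, 12)          # daily demand variability
        pipeline = [(eta - 1, qty) for eta, qty in pipeline]
        stock += sum(qty for eta, qty in pipeline if eta <= 0)
        pipeline = [(eta, qty) for eta, qty in pipeline if eta > 0]
        if stock < demand:
            stockout_days += 1                  # shelf ran dry today
        stock = max(0, stock - demand)
        if stock < 50:                          # naive daily reorder policy
            pipeline.append((lead_time_days, 40))
    return stockout_days

print("stockout days, 5-day lead time:", simulate(5))
print("stockout days, 7-day lead time:", simulate(7))
```

The toy reorder policy is beside the point; what matters is that the variables, constraints, and feedback in this loop are precisely the modeling choices the platform cannot make for you.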
Most managers are using AI the same way they use any productivity tool: to move faster. It summarizes meetings, drafts responses, and clears small tasks off the plate. That helps, but it misses the real shift. The real change begins when AI stops assisting and starts acting. When systems resolve issues, trigger workflows, and make routine decisions without human involvement, the work itself changes. And when the work changes, the job has to change too.

Let's take the example of an airline and lost luggage. Generative AI can explain what steps to take to recover a lost bag. Agentic AI aims to actually find the bag, reroute it, and deliver it. The person who was working in lost luggage, doing those easily automated tasks, can now be freed to become more of a concierge for disgruntled passengers. As agentic AI solves the problem, the human handles the soft skills: apologizing, offering vouchers to smooth over an arrival disrupted by a misplaced bag, and perhaps going a step further to recommend local shops where the passenger can pick up supplies.

With AI moving from reporting information to taking action, leaders can now rethink how jobs are designed, measured, and supported to maximize both the potential of the position and the abilities of the person in it. According to data from McKinsey, 78 percent of respondents say their organizations use AI in at least one business function. Yet many are still applying it on top of existing roles rather than redesigning work around it.

1. When tasks disappear, judgment becomes the job

Many roles are still structured around task lists: answer tickets, process requests, close cases. As AI takes on more repeatable execution, what remains for humans are exceptions, tradeoffs, and judgment calls that don't come with a script.

Take, for example, a member of the service team at a car dealership. Until now, the majority of their tasks have been scheduling appointments, sending follow-up emails, and making follow-up calls and texts. Agentic AI can remove the bulk of that work. Now that member of the team can make the decisions that require nuance and critical thinking. They know that the owner of a certain vehicle is retired and has trouble getting around. They can see that the appointment is on a morning when it might snow. The human then calls the customer and rebooks them for when the weather is more favorable. These human touches are what will set this dealership apart and grow customer loyalty.
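A minimal sketch of what that division of labor can look like in software follows, with made-up ticket fields and rules (none of this comes from a real dealership system): routine requests are handled automatically, while anything flagged as an exception is routed to a person with the context attached.

```python
# Toy triage loop for an agentic workflow: routine work is automated,
# exceptions are escalated to a human with the reason attached.
# Field names and rules are invented for illustration.
from dataclasses import dataclass

@dataclass
class Ticket:
    customer: str
    request: str
    mobility_limited: bool = False
    snow_forecast: bool = False

def handle(ticket: Ticket) -> str:
    # Routine path: the agent acts on its own.
    if ticket.request == "schedule_service" and not (
        ticket.mobility_limited and ticket.snow_forecast
    ):
        return f"AUTO: booked service for {ticket.customer}"
    # Exception path: the judgment call goes to a human, with context.
    return (f"HUMAN: review {ticket.customer} "
            f"(mobility={ticket.mobility_limited}, snow={ticket.snow_forecast})")

print(handle(Ticket("A. Rivera", "schedule_service")))
print(handle(Ticket("M. Chen", "schedule_service",
                    mobility_limited=True, snow_forecast=True)))
```

The design choice worth noticing is that the escalation rule is explicit and owned, which is exactly the accountability question the next sections turn to.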
2. Measure what humans now contribute

As AI absorbs volume, measuring people on speed and responsiveness pushes them to compete with machines on machine strengths. Instead, evaluation should reflect what humans uniquely provide: quality of judgment, ability to prevent repeat issues, and stewardship of systems that learn over time.

In the example above, the service team member at the car dealership could now be assessed not by the number of appointments set or cancellations rescheduled, but by outcomes such as customer satisfaction and repeat business. The KPIs should center on in-person or over-the-phone touch points with a customer, used to upsell or to suggest better services that their vehicle will need.

3. Human accountability for AI work

When AI is involved, ownership has to be explicit. Someone must own outcomes, even if a system takes the action. Someone must own escalation rules, workflows, and reviews. Without that clarity, AI doesn't reduce friction; it just shifts it to the moment something goes wrong.

In the car dealership example, a human should still be overseeing the AI agents doing the work and ensuring that it's done well. If there are problems, that person should be able to catch them and come up with solutions. One of the biggest risks with AI isn't failure; it's neglect from the humans overseeing the overall strategy and larger goals the AI is serving. Systems that mostly work fade into the background until they don't. Teams need protected time to review where AI performed well, where it struggled, and why.

Looking ahead

This shift isn't theoretical. Klarna has publicly described how its AI assistant now handles a significant share of customer service interactions, an example of how quickly AI moves from support tool to frontline worker. Once AI is doing real work, the old job descriptions stop making sense. Roles, accountability, metrics, and oversight all need to be redesigned together. AI improves fastest when humans actively review and guide it, not when oversight is treated as an afterthought.

The next phase of work isn't about managing people plus tools. It's about designing systems where expectations are clear, ownership is explicit, humans focus on meaningful decisions, and AI quietly handles the rest. If leaders don't redesign the job intentionally, it will be redesigned for them: by the technology, by urgent failures, and by the slow erosion of clarity inside their teams.