We've spent several years now obsessing over models and assistants, but here's an interesting new truth: the next competitive edge in AI won't be another benchmark, but electrons. And not just any electrons, but cheap ones. As the AI wars heat up, the winners won't simply be those with the best UX or the most compute. They'll be the firms that can secure abundant low-cost power at scale, hour after hour, year after year. That's where AI is colliding with the physical world, and where the story stops being about software and starts being about grids, turbines, and price curves. Most recent analyses show that AI-driven data centers are now a visible driver of U.S. electricity demand and are starting to send retail prices higher, a clear signal that the constraint is shifting from graphics processing units (GPUs) to kilowatt-hours.

Then there's also a lot of insistence about water, and it deserves some observations. The problem is that water use is often confused with water consumption. In data centers, much of the water involved in most cooling systems is withdrawn, used to absorb heat, and later returned. Warmer, yes, but reentering the water cycle once the discharge is brought back within permitted temperature ranges. Only some designs (notably evaporative cooling) consume water through vapor losses; others trade water for electricity by leaning on air-cooled chillers or direct-to-chip liquid loops that dramatically cut onsite withdrawals.

Think local

The right way to think about the problem is local: Water stress is a catchment-level issue, not a global one, and the risk depends on where you site the load and which cooling technology you choose. In short, the headlines often overstate a universal thirst that the engineering and the definitions don't support. None of this, of course, minimizes communities that are water-stressed, where a single facility can matter.
Investigations have shown clusters of data centers in arid regions, prompting scrutiny and new local rules. That's the right debate: match technology choices to basin realities, and stop treating water for AI as the same problem everywhere. In places with abundant non-potable or reclaimed water, or with dry/thermosyphon cooling, the footprint can be managed; in stressed watersheds, it becomes a siting decision, not an engineering afterthought.

Electricity is different. There is no local workaround if the price is structurally high. And on cost, the market is brutally clear. The latest Lazard Levelized Cost of Energy+ (LCOE+) report again shows utility-scale wind and solar at the bottom of the price stack, with new gas combined-cycle plants rising in cost and nuclear still the most expensive new build in rich-country conditions. If you're trying to run large training runs or always-on inference, the delta between clean, cheap power and legacy generation is not a rounding error; it is the margin that decides where you build and whether the unit economics make sense.

Consider nuclear: Georgia's Vogtle expansion finally went online, but only after historic cost and schedule overruns that translated into material rate hikes for customers. If AI's advantage is speed and scale, it's hard to square that with technologies that arrive late, over budget, and with levelized costs that sit at the wrong end of the curve. The physics is fine. The economics, today, are not.

This is why the new moat isn't access to energy in the abstract: It's access to cheap energy, reliably delivered. The firms that can lock in 24/7 low-cost supply, time-shift non-urgent workloads into off-peak windows, and colocate compute with stranded or overbuilt renewables will win. Everyone else will pay retail, and pass those costs on to users or investors.
We are already seeing utilities, grid operators, and tech companies negotiate curtailment and flexibility, and the International Energy Agency's (IEA's) modeling makes the near-term picture obvious: AI-related demand is rising, and it will test systems that were not designed for this kind of always-on compute.

The China factor

This brings us to the comparison nobody in Silicon Valley likes to make out loud: China. Look past the coal headlines for a moment and follow the build rates. China hit its 2030 wind-and-solar target in 2024, six years early, and added roughly 429 GW of net new capacity to the grid in 2024 alone, the vast majority wind and solar, backed by massive investment in transmission. Pace matters, because marginal megawatt-hours from ultra-low-cost renewables set the floor for training and inference costs. China's grid still has big challenges (curtailment among them), but if you're simply asking "Who is manufacturing cheap electrons at scale the fastest?" the answer today is not the United States.

That doesn't mean resignation; it means focus. If the U.S. wants to stay competitive in AI economics, the priority is not another model announcement: It's a buildout of cheap generation and the wires to move it. Anything that delays that, be it doubling down on gas price volatility, pretending coal is cheap once you factor in capacity payments and externalities, or dreaming of next-gen nuclear that won't arrive on time, will keep AI sited where the power is inexpensive and predictable. In a world of location-aware workloads, electrons decide geography.

The takeaway

The practical takeaway for companies is straightforward: If you are spending real money on AI, your CFO should now know your blended cost of electricity as intimately as your cloud bill, and should be negotiating for both.
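To make the "blended cost" idea concrete, here is a back-of-envelope sketch of how time-shifting flexible workloads into off-peak windows changes the blended electricity cost of an AI fleet. All prices, load figures, and the helper function are illustrative assumptions, not figures from any real facility or market.

```python
# Illustrative sketch: blended $/MWh for an AI workload when a share of
# flexible (non-latency-sensitive) load is moved into off-peak windows.
# The prices below are assumed, not quoted from any real tariff.

OFF_PEAK_PRICE = 35.0   # $/MWh, assumed off-peak price
ON_PEAK_PRICE = 110.0   # $/MWh, assumed on-peak price

def blended_cost_per_mwh(total_mwh, flexible_share, shifted_fraction):
    """Blended $/MWh when part of the flexible load runs off-peak.

    flexible_share: portion of total load that can be deferred (e.g. training)
    shifted_fraction: portion of that flexible load actually moved off-peak
    """
    off_peak_mwh = total_mwh * flexible_share * shifted_fraction
    on_peak_mwh = total_mwh - off_peak_mwh
    total_cost = off_peak_mwh * OFF_PEAK_PRICE + on_peak_mwh * ON_PEAK_PRICE
    return total_cost / total_mwh

# Always-on inference with no flexibility vs. a fleet that shifts 80%
# of a 50%-flexible load into off-peak windows.
baseline = blended_cost_per_mwh(1000, flexible_share=0.0, shifted_fraction=0.0)
shifted = blended_cost_per_mwh(1000, flexible_share=0.5, shifted_fraction=0.8)
print(f"baseline: ${baseline:.2f}/MWh, with shifting: ${shifted:.2f}/MWh")
```

Under these assumed prices, shifting drops the blended cost from $110 to $80 per MWh, which is the kind of delta the article argues decides siting and unit economics.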
Favor regions with abundant wind and solar and strong transmission, insist on time-of-use pricing and demand-response programs, and push your vendors toward 24/7 carbon-free energy rather than annual offsets that do nothing for peak prices or local loads. None of this is environmental, social, and governance (ESG) posturing. It's cost control for a compute-intensive product line whose unit economics are married to energy markets.

On water, keep the conversation precise. Ask for cooling designs, not slogans. Is the system evaporative or closed-loop? What's the water-use effectiveness and the discharge temperature profile? Where does the site sit on the World Resources Institute's (WRI's) Aqueduct map today and under climate-adjusted scenarios? If your supplier can't answer those basics, they're not ready to build where you're planning to grow. But don't let the "AI is drinking the planet" meme obscure a simpler reality: With the right technology and siting, the binding constraint is cheap electricity, not moisture in a recirculating loop.

The narrative arc is changing. The first phase of the AI boom rewarded companies that could raise capital and buy a lot of GPUs. The next phase will reward those that can buy electrons cheaply, cleanly, and continuously. If you want a preview of who wins the assistant wars, don't look at the demos. Look at the interconnection queues, the power-purchase agreements, and, mostly, the maps of wind and solar buildouts: the cheapest energy available. Software is glamorous, but power is destiny.
Category:
E-Commerce
In business, the art of the pivot is a delicate thing, difficult to get right. That's why it doesn't happen that often; you only do it when you're convinced the alternative, continuing down a path that isn't working, will be worse. I have to think this is the basic logic factoring into Perplexity's recent relaunch of its revenue-sharing program with publishers.

Quick recap: Perplexity announced a new kind of subscription called Comet Plus. Users can pay $5 a month to access content from Perplexity's publisher partners (that is, those who sign up to participate), and Perplexity passes on most of the revenue to them. It's already set aside $42.5 million to kick-start the program, according to CEO Aravind Srinivas. Although the program is named after the company's new Comet web browser, users can use any browser to access the content via Perplexity. However, using Comet means you'll also be able to use the Comet Assistant; more on why that's important in a minute. And if you already have a Pro or Max subscription, Plus is part of the package.

The thing is, Perplexity already shares revenue with publishers via the Perplexity Publishers' Program. Launched last summer, the PPP is an ad-based program; when a partner's content is featured in an answer, revenue created from ads in that answer (typically a sponsored question) is shared with that partner. Perplexity isn't sunsetting the PPP; Gannett just signed up for it.
Still, it's hard not to see Comet Plus as at least a partial admission that the PPP wasn't a great answer to building a business around AI search, at least not one that excites publishers. It didn't shield the company from their ire either. News Corp sued Perplexity last year over alleged copyright violations, simultaneously praising OpenAI for its willingness to sign up-front content licensing deals instead of experimental revenue-sharing models. Perplexity's recent bid to get the case dismissed failed, and Japanese publishers Nikkei and The Asahi Shimbun Co. sued around the same time.

Getting that agent money

Comet Plus is a different tack on a revenue model, but it's also an opportunity to reset the conversation around monetization, copyright, and the law, at least a little. While competing AI search engines have been slowly migrating toward either licensing deals or "pay per crawl" models that charge bots in the moment they access content, Perplexity has so far been resistant to an approach that involves them (or their bots) paying up front for content. Instead, they're going to monetize when others pay, either advertisers or users, and share the money with publishers. With respect to Comet Plus, Perplexity says it's going to share 80% of that money, with the other 20% going to compute costs. A key part of the structure is that it plans to apportion the money based on three different types of traffic: human engagement, search indexing, and agent activity (i.e., bots).

That in itself is interesting; I've written before about the rise in bot traffic and the opportunity it represents for publishers to provide context for those bots. This is where the Comet Assistant factors in: It's the agent in Comet Plus's three-part revenue plan (obviously, Perplexity can't track and monetize agent bots it doesn't control). Credit to Perplexity for creating a way to make money from the activity that its own Assistant creates. In fact, it might be the only one who could.
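To see how the mechanics above might pencil out, here is a minimal sketch of apportioning a Comet Plus-style pool. The 80/20 split is from Perplexity's public statements; everything else, including the equal weighting of the three traffic types, the publisher names, and the traffic counts, is invented for illustration, since Perplexity has not published its actual formula.

```python
# Hypothetical sketch of an 80/20 revenue pool apportioned by traffic.
# Traffic types are weighted equally here as a simplifying assumption.

def apportion(pool, publisher_traffic):
    """Split the publisher share of a revenue pool by observed traffic.

    pool: total subscription revenue for the period
    publisher_traffic: {publisher: {"human": n, "search": n, "agent": n}}
    Returns (per-publisher payouts, amount retained for compute).
    """
    publisher_pool = pool * 0.80   # 80% shared with publishers
    compute_share = pool * 0.20    # 20% retained for compute costs
    totals = {p: sum(t.values()) for p, t in publisher_traffic.items()}
    grand_total = sum(totals.values())
    payouts = {p: publisher_pool * n / grand_total for p, n in totals.items()}
    return payouts, compute_share

payouts, compute = apportion(
    100_000,
    {
        "outlet_a": {"human": 600, "search": 300, "agent": 100},
        "outlet_b": {"human": 200, "search": 500, "agent": 300},
    },
)
```

Note that even in this toy version, the agent column is both measured and generated by the same party, which is exactly the conflict discussed below.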
That's because Perplexity is one of several AI companies that gives its user agents permission to bypass a site's Robots Exclusion Protocol (the internet standard for blocking bots). So rather than partnering with others on an existing "pay per crawl" program (by, say, paying TollBit or Dappier when its bots want access to content), Perplexity is effectively building its own system, and setting the price of that activity itself. That seems like an obvious conflict. Although a Perplexity spokesperson told me it provides "robust and transparent" visibility to publisher partners about how their content is performing, agent activity is largely uncharted territory. Perplexity promises to compensate publishers based on it, but it also controls it. The company is adamant that its search engine will surface only the answers that best answer a query, but exactly how agents make queries could end up being a subject of great interest, especially to media companies who start to make money off it.

How much for just the scrape?

Comet Plus also exposes the central contradiction of how the AI companies value content, but in a different way. Since the program is charging users to access certain content, that content is by definition valuable. But Perplexity doesn't treat "free" content differently; it will still surface the best content to answer a user query regardless of whether or not the publisher is part of Comet Plus. The onus is on the publisher to erect defenses (block crawling via robots.txt, Cloudflare, or some other means) to prevent that. Put another way, Perplexity is essentially saying, "We're happy to share revenue with you if you join our program, but if you don't, we'll ingest and surface the content anyway, unless you tell us not to." This approach is certainly more legally dicey, but since Perplexity's business model depends on being able to access the entire internet, it's clearly decided that the ambiguity is worth the risk.
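For readers who haven't seen it, the Robots Exclusion Protocol mentioned above is just a plain-text file. Here is a minimal sketch of how a well-behaved crawler consults it, using Python's standard library; the bot names and rules are examples, not Perplexity's actual configuration.

```python
# Sketch of a compliant crawler checking robots.txt before fetching.
# "ExampleBot" and the rules below are hypothetical.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: ExampleBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant crawler calls can_fetch() before requesting each page.
# The controversy is over user agents that skip this check on the theory
# that a human-initiated request isn't "crawling."
print(rp.can_fetch("ExampleBot", "https://example.com/article"))  # blocked
print(rp.can_fetch("OtherBot", "https://example.com/article"))    # allowed
```

The protocol is purely advisory: nothing technically stops a bot from ignoring the file, which is why "block via robots.txt" and "block via Cloudflare or some other means" are different levels of defense.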
And to be fair, Perplexity is hardly the only AI company with this de facto stance. It's not like ChatGPT will ignore sites that don't have a deal with OpenAI. "Ingest first, sort it out later" has essentially become an operational standard in the AI world. How that shakes out will ultimately be answered by the courts.

Will users pay?

In the meantime, the media world will be watching Perplexity's new, three-pronged revenue model with great interest. Monetizing user agents and AI search activity are new ideas, but whether they succeed ultimately depends on whether users think Comet Plus is an experience they want to pay for. Because if they don't, you can bet a different revenue model will rise to take its place: advertising.
At the University of Texas, where I'm senior vice provost of academic affairs, our credo (coined by the head of our Office of Academic Technology, Julie Schell) is to be AI Forward and AI Responsible. In service of this, over the past few years, we launched a homegrown AI tutoring platform (UT Sage), launched a second platform to enable faculty, staff, and students to engage with building AI tools (UT Spark), provided a license for Copilot for everyone at UT, and engaged a working group called Good Systems focused on the ethical implications of AI models.

Although all of the conversation about AI makes it seem like it's taken over the world, it hasn't. Although it appears to be growing in popularity, there's no shame in having waited to see how the tech matures and the hype shakes out. But if you're ready to dip your toe into the AI waters, here's my advice. In this piece you'll learn:

The first thing you should ask AI to do for you
How to get AI to perfect your tone when communicating with clients
Why you actually need to read the terms of service before you start

Read the fine print

It is important to remember that AI/large language models (LLMs) like ChatGPT, Claude, Copilot, and Gemini are commercial products. The dictum that if the product is free to use, then you're the product holds for these tools as well. The generic versions of some LLM tools will include text you enter into the models in the training set for the model. That means you need to actually read the End User License Agreement (that thing you usually skip through before clicking that you agree to the terms). If the model is going to take any text you enter and ingest it (that is, incorporate it into the model's training), then you need to make sure you want to give that text over to the system (and that you have permission to do so if you're adding either proprietary business information or text that is copyrighted by someone else).
When possible, try to use a version of these models that someone has an enterprise license for. Most companies that pay for a license to one of these products stipulate that data entered by employees won't be used by the system for training. That is true for all of the tools we have launched at the University of Texas, for example. If you can't access an enterprise license and want to protect your data, you can consider purchasing an individual license that also typically protects your data.

Many people have heard concerns about the energy consumption and water usage associated with the server farms that power AI models. There is certainly growth in the resources being consumed by the computers underlying these models, and it is important to pay attention to this. At the same time, you'd have a much bigger impact on the environment by giving up eating meat than by stopping your use of AI.

First steps

Because large language models spit out text based on something you type in (the prompt), it is natural to start by getting a system to write something for you. I don't recommend asking these models to write anything for you that you plan to send to someone else. As I have written about before, while asking an LLM to write a document may make you feel like you have improved on your own writing, it tends to make your writing sound like that of anyone else who has engaged with an AI tool. That said, if you have never played with an LLM, give a quick description of yourself, ask the model to pretend you're a superhero, and ask it to describe your superpowers. This is a fun (and harmless) exercise that will give you a flavor of how the models work.

After that, I recommend trying an exercise where you use an LLM as a partner to help you think about a problem. Find something you're struggling with at work. Describe that situation to the LLM in your prompt. Ask for suggestions for solutions, courses of action, or advice.
Often, the system will suggest possibilities you hadn't considered. More importantly, the suggestions you get from the LLM may inspire you to think of other factors you hadn't considered before.

AI as your tone coach

While I don't recommend asking an LLM to write something for you in a professional context, it can be quite helpful in massaging something you have written to give it a different flavor. A colleague in our law school, for example, often asks students in a law clinic to draft letters to clients and then describe the client to an LLM, give it the initial text of the letter, and ask the system to write a new draft tailored to that client. Often, the initial drafts of letters are brusque and clinical, and the drafts produced by the LLM have more empathy and engagement.

You can do the same thing. Write a draft of a document. Then, describe your audience to the LLM as well as the purpose of the document. Paste in the text of the document, and ask the system to rewrite it so that it is tailored to the audience. I don't recommend just taking the output of the LLM verbatim. For one thing, it may actually change the meaning of things you intended. These systems don't actually understand your document; they are just word-prediction engines. But the inspiration you get from seeing a different approach to your text can make your next draft clearer and a better fit for your audience.

Be specific about what you want

The main thing to learn about engaging with an LLM is that it doesn't really know what you want to do. So the more specific the prompt you give it, the more likely it is to give you a valuable output. Here's an exercise you can try to see this in action. Find a large language model you're interested in using. Ask it to write you a blog entry about using AI. You'll get a response. It might even have some interesting suggestions. Now, ask it to write you a blog entry about using AI for the first time. You'll get something different.
Next, ask it to write a blog entry in the style of Fast Company on using AI for the first time. You'll see a shift in tone and style. Finally, ask it to write a blog entry in the style of Art Markman writing for Fast Company. (I have a lot of text on the internet, so this prompt actually makes sense to LLMs.) You'll get a different shift in tone. You can add other specifics to prompts, like how long you want the output to be. The point is that if you try something on an LLM and you don't get quite what you want out of it, don't give up. Ask it a more specific question. Remember that the LLM is not a colleague who will naturally understand every nuance of what you want. The more clearly you describe what you want, the more likely you will be to get it.
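The layering exercise above can be sketched as a small helper that composes the same request with progressively more context. The function and its field names are hypothetical; the resulting text can be pasted into any chat interface.

```python
# Hypothetical helper showing the "be specific" advice: the same task,
# layered with audience, purpose, and style that an LLM can't guess.

def build_prompt(task, audience=None, purpose=None, style=None):
    """Compose a prompt that spells out context explicitly."""
    parts = [task]
    if audience:
        parts.append(f"Audience: {audience}.")
    if purpose:
        parts.append(f"Purpose: {purpose}.")
    if style:
        parts.append(f"Style: {style}.")
    return " ".join(parts)

vague = build_prompt("Write a blog entry about using AI.")
specific = build_prompt(
    "Write a blog entry about using AI for the first time.",
    audience="professionals who have never touched an LLM",
    purpose="reduce anxiety and give concrete first steps",
    style="conversational, in the voice of a business magazine",
)
```

The vague and specific versions will produce noticeably different outputs from the same model, which is the point of the exercise.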