When artificial intelligence first gained traction in the early 2010s, general-purpose central processing units (CPUs) and graphics processing units (GPUs) were sufficient to run early neural networks, image generators, and language models. But by 2025, the rise of agentic AI (models capable of thinking, planning, and acting autonomously in real time) has fundamentally changed the equation. With a single click, these AI-powered assistants can turn work items into real outcomes, from booking venues and handling HR tickets to managing customer queries and orchestrating supply chains.

"We're heading into a world where hundreds of specialized, task-specific models known as agents can work together to solve a problem, much like human teams do," says Vamsi Boppana, SVP of the AI group at Advanced Micro Devices (AMD). "When these models communicate with one another, the latency bottlenecks of traditional data processing begin to disappear. This machine-to-machine interaction is unlocking an entirely new level of intelligence."

As enterprises integrate AI agents into live workflows, they are realizing that true autonomy requires a fundamentally new computing foundation. "The shift from static inference to agentic operation is putting unprecedented pressure on back-end infrastructure, with demand for compute, memory, and networking growing exponentially across every domain," Boppana adds. "Ultra-low-latency data processing, memory-aware reasoning, dynamic orchestration, and energy efficiency are no longer optional; they are essential."

To support these demands, the industry is moving toward custom silicon designed specifically for autonomous agents. Tech leaders such as Meta, OpenAI, Google, Amazon, and Anthropic are now codesigning silicon, infrastructure, and orchestration layers to power what could become the world's first truly autonomous digital workforce.
"We work closely with partners like OpenAI, Meta, and Microsoft to co-engineer systems optimized for their specific AI workloads, both for inference and training," Mark Papermaster, AMD's chief technology officer, tells Fast Company. "These collaborations give us early insight into evolving requirements for reasoning models and their latency needs for real-time inference. We are also seeing CPUs playing an increasingly important role in agentic AI for orchestration, scheduling, and data movement."

These companies are investing in supercomputing systems, cooling technologies, and AI-optimized high-density server racks to manage resources for thousands of concurrent AI agents. "When you ask Gemini to work with you to create a research report using a few dozen documents or to summarize weekly research on a podcast, it utilizes the AI Hypercomputer [Google's supercomputing system] to support those requests," says Mark Lohmeyer, vice president and general manager of compute and AI/machine learning infrastructure at Google Cloud. "Our current infrastructure is designed in deep partnership with the leading model, cloud, and agentic AI builders such as AI21, SSI, Nuro, Salesforce, HubX, Essential AI, and AssemblyAI."

The Shift from Broad Compute to Purpose-Built Silicon

Agentic systems don't operate in isolation. They constantly interact with enterprise databases, personal devices, and even vehicles. Inference (the model's ability to apply its learned knowledge to generate outputs) is a continuous requirement. "Agentic AI requires much more hardware specialization to support its constant inference demands," says Tolga Kurtoglu, CTO at Lenovo. "Faster inferencing equals efficient AI, and this is as true in the data center as it is on-device." To avoid inference bottlenecks, tech companies are partnering with chipmakers to build silicon tailored for low-latency inference.
OpenAI is developing custom chips and hiring hardware-software codesign engineers, while Meta is optimizing memory hierarchies and parallelism in its MTIA accelerators and Grand Teton infrastructure.

"We've embraced a codesign approach for a long time, evident in our latest AI advancements like Gemini 2.5, or Alphabet reaching 634 trillion tokens in Q1 of 2025. Agentic experiences often require multiple subsystems to work together across the stack to ensure a useful, engaging experience for users," Lohmeyer says. "Our decade-plus investment in custom AI silicon has yielded tensor processing units (TPUs) purposefully built for large-scale, agentic AI systems."

TPUs are built to be more efficient and faster than CPUs and GPUs for specific AI tasks. At the Google Cloud Next 2025 conference in April, the company introduced its seventh-generation TPU, called Ironwood, which can scale to 9,216 chips per pod with interchip connection capabilities for advanced AI workloads. Models like Gemini 2.5 and AlphaFold run on TPUs. Ironwood TPUs are also significantly more power-efficient, which ultimately reduces the cost of deploying sophisticated AI models. "This approach, demonstrated by our partnerships with AI21 Labs, Anthropic, Recursion, and more, underscores the fundamental but necessary industry shift toward purpose-built AI infrastructure," Lohmeyer says.

Transformer-optimized GPU accelerators such as AMD's Instinct MI series, along with neural processing units (NPUs) and systems on chip (SoCs), are being engineered for real-time adaptability. AMD recently launched its Instinct MI350 series GPUs, designed to accelerate workloads across agentic AI, generative AI, and high-performance computing. "Agentic AI demands more than accelerators alone. It requires full-system solutions with CPUs, GPUs, and high-bandwidth networking working in concert," says AMD's Papermaster. "Through OCP-compliant systems like Helios, we remove latency hotspots and improve data flow."
This integration has already delivered major results, and AMD is now targeting a further 20-fold rack-level efficiency improvement by 2030 to meet the demands of increasingly complex multi-agent workloads. According to AMD, seven of the world's top 10 AI model builders, including Meta, OpenAI, Microsoft, and xAI, are already running production workloads on Instinct accelerators.

"Customers are either trying to solve traditional problems in completely new ways using AI, or they're inventing entirely new AI-native applications. What gives us a real edge is our chiplet integration and memory architecture," Boppana says. "Meta's 405B-parameter model Llama 3.1 was exclusively deployed on our MI series because it delivered both strong compute and memory bandwidth. Now, Microsoft Azure is training large mixture-of-experts models on AMD, Cohere is training on AMD, and more are on the way."

The MI350 series, including the Instinct MI350X and MI355X GPUs, delivers a fourfold generation-on-generation increase in AI compute and a 35-fold leap in inference performance. "We are working on major gen-on-gen improvements," Boppana says. "With the MI400, slated to launch in early 2026 and purpose-built for large-scale AI training and inference, we are seeing up to 10 times the gain in some applications." That kind of rapid progress is exactly what the agentic AI era demands.

Power Efficiency Now Drives Design, From Data Center to Edge

Despite their performance promise, generative and agentic AI systems come with high energy costs. A Stanford report found that training GPT-3 consumed about 1,287 megawatt-hours, the equivalent of a small nuclear power plant running for an hour. AI training and inference generate significant heat and carbon emissions, with cooling systems accounting for up to 40% of a data center's energy consumption. As a result, power efficiency is now a top design priority.
"We are seeing strong demand from enterprises for more modular, decentralized, and energy-efficient deployments for their agent-based applications. They need to put AI agents wherever they make the most sense while also saving on costs and power," Lohmeyer says.

Infrastructure providers like Lenovo are now delivering AI edge chips and data center racks tailored for distributed cognition. These allow on-device agents to make quick decisions locally while syncing with cloud-based models. Heat is the mortal enemy of sensitive circuitry, causing shutdowns, slower performance, and data loss if allowed to accumulate. "We now build sustainability into servers with patented Lenovo Neptune water-cooling technology that recycles loops of warm water to cool data center systems, enabling a 3.5 times improvement in thermal efficiency compared to traditional air-cooled systems," Kurtoglu says. "Our vision is to enable AI agents to become AI superagents (a single point of entry for all user requests) and eventually graduate to AI twins. Realizing superagents' full potential hinges on developing and sustaining the supercomputing power needed to support multi-agent environments."

The Future of Enterprise AI Is Autonomous, but Challenges Remain

Despite growing momentum, key challenges persist. Kurtoglu says many CIOs and CTOs still struggle to justify the value of agentic AI initiatives. Lenovo's AI Readiness Index 2025 revealed that agentic AI is the area businesses are struggling with the most, with one in six (16%) admitting to low or very low confidence in the area. "That hesitation stems from three core concerns: trust, safety, and control; complexity and reliability; and security in integration," Kurtoglu says. To address this, Lenovo recommends a hybrid AI approach in which personal, enterprise, and public AI systems coexist and support each other to build trust and scale responsibly.
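The local-first, cloud-fallback behavior that makes hybrid AI resilient can be sketched in a few lines of Python. This is a conceptual illustration only; the function names, routing policy, and simulated connectivity are assumptions for the sketch, not Lenovo's or Google's implementation.

```python
import random

def cloud_inference(query: str) -> str:
    # Hypothetical cloud model call; fails when connectivity drops.
    if random.random() < 0.3:  # simulate intermittent connectivity
        raise ConnectionError("cloud unreachable")
    return f"cloud answer for {query!r}"

def local_inference(query: str) -> str:
    # Hypothetical smaller on-device model: always available,
    # no network round trip, so lower latency.
    return f"local answer for {query!r}"

def hybrid_agent(query: str, prefer_local: bool = True) -> str:
    """Route to the on-device model first; reach for the cloud only
    when asked to, and degrade gracefully if the cloud is down."""
    if prefer_local:
        return local_inference(query)
    try:
        return cloud_inference(query)
    except ConnectionError:
        # Resilience: part of the agent's work persists offline.
        return local_inference(query)
```

The design choice mirrors the argument in the article: keeping sensitive queries on a trusted device avoids a cloud round trip for every decision, and the fallback path keeps at least part of the agent's tasks running when connectivity is intermittent.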
"Hybrid AI enables trustworthy and sophisticated agentic AI because of its access to your sensitive data, locally on a trusted device or within a secure environment. It enhances responsiveness by not relying on the cloud, avoiding cloud round trips for every question or decision," Kurtoglu explains. "It's also more resilient, with at least part of an agent's tasks persisting even if cloud connectivity is intermittent."

Lohmeyer adds that one major challenge for Google Cloud is helping customers manage unpredictable AI-related costs, especially as agentic systems create new usage patterns. "It's difficult to forecast usage when agentic systems drive autonomous traffic," Lohmeyer explains. "That's why we're working with customers on tools like the Dynamic Workload Scheduler to help optimize and control costs. At the same time, we're constantly improving our platforms and tools to handle the larger challenges of deploying agent systems and making sure they're governed properly."

Boppana notes that enterprise interest in agentic AI is growing fast, even if organizations are at different stages of adoption. "Some are leaning in aggressively, while others are still figuring out how to integrate AI into their workflows. But across the board, the momentum is real," he says. AMD itself has launched more than 100 internal AI projects, including successful deployments in chip verification, code generation, and knowledge search.

As agentic AI expands from server farms to the edge, the infrastructure behind it must be just as intelligent, distributed, and autonomous as the agents it supports. In that future, AI won't just be written in code; it will be etched into silicon.
Category:
E-Commerce
The Velvet Sundown is the most talked-about band of the moment, but not for the reason you might expect. The "indie rock band," which has gained more than 634,000 Spotify listeners in just a few weeks, has spoken out in response to accusations that the group is AI-generated.

The suspicions first surfaced on Reddit last week, where users discussed the band's sudden appearance in their Discover Weekly playlists on Spotify. The Velvet Sundown exhibits several common indicators of AI involvement: eerie, uncanny-valley-style images, a now-deleted fabricated Billboard quote in its Spotify bio, and virtually no internet presence prior to last month.

As the speculation picked up media attention, an X account claiming to represent the band responded to the rumors: "Absolutely crazy that so-called journalists keep pushing the lazy, baseless theory that The Velvet Sundown is AI-generated with zero evidence." The post went on to read: "This is not a joke. This is our music, written in long, sweaty nights in a cramped bungalow in California with real instruments, real minds, and real soul. Every chord, every lyric, every mistake: HUMAN."

Adding to the confusion, the X account that posted the denial is not the one linked from the band's official Spotify page. In other words, multiple social media profiles appear to be representing the band, all of them claiming to be official. When Fast Company reached out to the X account that first posted last week, an apparent spokesperson for the band tried to clarify the situation. "There are a couple Twitter accounts floating around because different members have been responding in different ways," the spokesperson wrote in an email to Fast Company. "We're a collective, and not everyone agrees on how to handle the attention." They added that the ambiguity is part of the story and is helping to get people curious about diving down the rabbit hole.
They also admitted to having used some AI tools in the process, mostly for press visuals and experimenting with aesthetic ideas. Still, they insisted, the core of the project has always been about human musicianship.

According to its Spotify bio, the Velvet Sundown is a four-piece consisting of singer and mellotron player Gabe Farrow; guitarist Lennie West; Milo Rains, who crafts the band's textured synth sounds; and free-spirited percussionist Orion "Rio" Del Mar. The band maintains that its two full-length albums are written, played, and produced by real people, adding: "No generative audio tools. The textures and glitches that people point to as proof are just from lo-fi gear, weird mic setups, tape loops, that sort of thing."

Whether AI is involved or not, the controversy highlights the growing conversation around generative AI in the music industry. Deezer, a streaming service that flags AI-generated music, recently reported receiving more than 20,000 fully AI-created tracks per day. The Velvet Sundown, for its part, defends the artistic freedom to experiment: "For us, this has always been about making strange, emotional music and exploring how to present it in interesting ways. It might not fit neatly into anyone's expectations, but it's honest to what we're trying to do."
How the Boomer wealth transfer could reshape global finance.

Millennials were born too late to ride the wave of postwar prosperity, but just early enough to watch the 2008 financial crisis decimate some of their first paychecks. Old enough to remember dial-up. Young enough to buy Bitcoin on their phones. They've lived through tech booms, housing busts, meme stocks, student debt, and five different definitions of "retirement planning." Now, as trillions in wealth begin to change hands, this generation stands to serve as a bridge between old capital and new code, traditional finance and the blockchain future. If handled wisely, this moment won't just shape the portfolios of younger investors; it could reshape the architecture of global finance itself.

The $46 Trillion Handoff

Roughly $124 trillion in wealth is expected to pass from baby boomers to younger generations by 2048, with millennials set to inherit the largest share: approximately $46 trillion over the next two decades. While Gen X is expected to inherit slightly more than millennials in the next 10 years, by the 2040s millennials will take over as the dominant inheritors and primary stewards of global capital. This isn't just a generational milestone. It's a once-in-history opportunity to redefine how capital is allocated, what assets are prioritized, and what financial frameworks endure. Millennials aren't inheriting a set playbook; they're writing a new one.

Digital Assets Have Grown Up

The timing couldn't be more significant. After years of growing pains, the digital asset space is undergoing a profound transformation. Following the collapse of FTX in 2022, the ecosystem began maturing rapidly. By 2024, a major inflection point arrived: The Securities and Exchange Commission approved the first spot Bitcoin exchange-traded funds (ETFs), marking a formal bridge between traditional finance and crypto.
The ETFs shattered records, underscoring just how much pent-up demand existed among retail investors, registered investment advisers (RIAs), and institutions that had previously been locked out of the asset class. So far, nearly $41 billion has flowed into these products, a staggering figure for any ETF, let alone one tied to an asset recently dismissed as fringe. Additionally, North America's crypto market is now dominated by large transfers of more than $1 million, about 70% of transaction volume, reflecting deep institutional involvement.

And it's not just about ETFs. Major institutions are integrating crypto into their offerings in tangible ways: Mastercard and Visa are experimenting with stablecoin settlements, Lyft is leveraging Hivemapper for road data, and AT&T is offloading traffic onto the Helium network. This isn't the Wild West anymore. Regulation is clarifying. Infrastructure is stabilizing. And serious capital is arriving.

The Bridge Generation

So, which generation is most naturally situated to carry digital assets into the financial mainstream? Not Gen Z (at least, not yet). While 42% of these young investors own cryptocurrency, only 11% have a retirement account, indicating a preference for immediate, high-risk investments over long-term financial planning. Not boomers, either, who have largely opted out: just 8% hold digital assets, while 64% have more traditional retirement accounts.

Millennials, however, are fluent in both financial worlds. They're almost equally likely to invest in crypto as in retirement accounts: 36% own cryptocurrency, and 34% have retirement plans. They understand ETFs and decentralized finance, spreadsheets and stablecoins. They grew up with the internet and came of age during the 2008 crisis. They're old enough to remember the dot-com bust, young enough to see blockchain's promise. In short: Millennials have a tech-native mindset and a healthy respect for risk. That balance matters.
Surveys show that millennials are more comfortable investing in crypto than any older cohort. In fact, 62% of millennial ETF investors say they plan to allocate to crypto ETFs, making it the No. 1 asset class for that age group. And they're not just speculating: 12% believe crypto is the best place to invest for long-term goals, compared with just 5% of boomers. This makes millennials uniquely qualified to shepherd digital assets out of their adolescence and into legitimacy.

Market-Wide Impact

As nearly $85 trillion moves into the hands of Gen X and millennials combined, every asset manager, RIA, and financial institution will be forced to adapt. Catering to these investors won't just mean better digital UX or TikTok explainers. It'll mean rethinking allocations, product offerings, and frameworks that may have, until recently, assumed digital assets are fringe. They are not. Not anymore.

The generation that straddled Web2 and Web3 is about to call the shots. They speak the language of blockchain and the cadence of capital markets. That dual fluency will define the next phase of global investing and determine whether crypto becomes a credible pillar of the financial system or stalls as a misunderstood asset class, never realizing its broader potential. The opportunity isn't in betting on crypto. It's in building the institutions, tools, and strategies for a world where digital assets are simply part of the portfolio. And that world? It's coming faster than most expect.
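The wealth-transfer figures above can be cross-checked in a few lines of Python. The dollar amounts come from the article itself; the implied Gen X share is a derived estimate, not a figure the article states.

```python
# Wealth-transfer figures cited in the article (trillions of USD).
TOTAL_TRANSFER = 124           # boomers to younger generations by 2048
MILLENNIAL_SHARE = 46          # millennials' expected inheritance
GENX_PLUS_MILLENNIALS = 85     # Gen X and millennials combined

# Implied Gen X share (derived here, not stated in the article).
genx_share = GENX_PLUS_MILLENNIALS - MILLENNIAL_SHARE
print(f"Implied Gen X inheritance: ~${genx_share} trillion")

# Portion of the total handoff flowing to these two generations.
portion = GENX_PLUS_MILLENNIALS / TOTAL_TRANSFER
print(f"Share of total transfer: {portion:.0%}")
```

Run as written, the numbers are internally consistent: the two generations' combined $85 trillion is roughly two-thirds of the $124 trillion total, with the remainder going to other heirs and charitable giving.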