2025-10-30 16:30:59 | Fast Company

For the past 30 years, the web browser has been the primary way humans navigate the internet. It makes sense, then, that as artificial intelligence becomes more humanlike in its capabilities, it would use the same tool. That’s basically the idea behind AI-powered browsers, which are definitely having an “it” moment now that OpenAI has launched Atlas, its own web browser that incorporates ChatGPT as an ever-present helper. Atlas follows Perplexity’s Comet, which arrived over the summer and quickly captured the public’s imagination about what an AI browser could do. In both cases, the user can, at any time, call up an AI assistant (aka agent) able to perform multistep tasks, such as navigating to a grocery retailer and filling an online shopping cart with ingredients for a recipe, from a simple command.

Atlas vs. Comet: Who has the smarter browser?

Who has the better experience? Based on features, the clear winner is Comet, which boasts Chrome-like functionality, supporting multiple user profiles, extensions, and more, including buttons for specific, fast AI-powered actions such as instant summarization of web pages. However, because ChatGPT is the go-to AI that more than 800 million people now use, the context it carries represents a huge advantage for Atlas. When you call up the chatbot in Atlas, you can simply point to the relevant conversation, and it will remember aspects of your browsing experience to better help you.

The Atlas-vs.-Comet fight may be moot, though, since Google Chrome is the incumbent browser for most people (it has 74% market share worldwide), and it has AI features, too. Chrome’s large user base, however, also means Google can’t move as fast: Since the whole idea of AI agents taking control of your browser to perform tasks is fraught with security concerns, Google’s Gemini assistant in Chrome is relatively feeble; if you ask it to, say, shop for you on Amazon, it’ll give you the digital equivalent of a shrug. So Chrome’s continued dominance in the AI era isn’t assured.

But the question of who will win the AI browser war doesn’t matter so much as whether AI browsing will take off at all. I’ve been using Comet heavily for a few months, and although I find the idea of an agent doing all my tedious internet tasks compelling, I’ve found the actual set of things it can do to be quite narrow. Generally, the task needs to be something that doesn’t require a lot of specialized context (since the AI can’t read your mind) or complex prompting (since spending several minutes crafting a prompt is time you could use to just do the task yourself).

Nonetheless, OpenAI imagines a future where most of the activity online is done via AI agents in browsers like Atlas. In its announcement, it says, “This launch marks a step toward a future where most web use happens through agentic systems, where you can delegate the routine and stay focused on what matters most.” OpenAI could be right. Those narrow use cases for agentic browsing could be expanded greatly with more elegant and comprehensive merging of personal context and the browsing experience.
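Mechanically, the “agent” in all of these products is a loop: a language model looks at a goal and the current page, picks an action, the browser executes it, and the result feeds back in. Here is a minimal sketch of that loop; every function and field name is hypothetical and stubbed for illustration, not OpenAI’s or Perplexity’s actual implementation.

```python
# A minimal sketch of the plan-act-observe loop behind agentic browsing.
# Everything here is hypothetical and stubbed for illustration; it is not
# any vendor's actual implementation or API.

def llm_next_action(goal: str, page_text: str, history: list) -> dict:
    """Stub standing in for a call to a language model that picks the
    next browser action (navigate, click, type, or done)."""
    return {"type": "done", "summary": f"(stub) finished goal: {goal}"}

def run_agent(goal: str, max_steps: int = 10) -> str:
    """Core agent loop: ask the model for an action, execute it in the
    browser, observe the resulting page, repeat until done."""
    page_text, history = "", []
    for _ in range(max_steps):
        action = llm_next_action(goal, page_text, history)
        if action["type"] == "done":
            return action["summary"]
        # A real agent would drive a browser here (navigate, click, type)
        # and refresh page_text with whatever the new page shows.
        history.append(action)
    return "step budget exhausted"

print(run_agent("fill a grocery cart with ingredients for a recipe"))
```

The loop is simple; what makes the product good or bad is how much useful context (goal, history, personal preferences) reaches the model at each step.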
If the agent understands the entire background of what you’re doing, the why, and gets better at navigating the web (as it inevitably will), AI browsing might even burst through to the mainstream.

What agentic browsing means for publishers

If that happens, it would have huge implications for the media. Because not only will people get a lot of their information through the lens of their preferred AI agent, the tasks performed on their behalf will be informed by content seen through that same lens. For example, an agent told to search for a “stylish suit” would need to essentially Google what’s in style, then use that information to complete the task. No human eyeballs ever look at the content it uses to research what’s in style, but getting the right information is a crucial part of the agent performing the task well. How agents access that information, and what they do with it, are important questions to answer in building the framework of how all this works.

The whole area of how AI systems access information is of course hotly contested, generating several lawsuits, but there is some consensus. OpenAI made clear in the launch announcement that it would not use Atlas as a “backdoor” to train on content that was otherwise blocked from its training bot. However, access for the agent itself is controversial. AI companies maintain that agents are proxies for users and should, in many cases, be allowed to bypass bot controls to access content and services that a human could access. Others don’t see it that way: because an agent is a robot, with no human attention to cater to, it should not be treated as human, and sites should have the option to block agents specifically. This is essentially the core of what Perplexity and Cloudflare were arguing about this summer.

With the release of Atlas, AI browsing can only accelerate, and answering these questions will become more urgent. Media strategy depends on knowing who your audience is, understanding how they access your content, and having reliable ways of monetizing that behavior. Right now, none of those components are well defined for a future where the primary users of the internet are browser agents.

It’s not just a question of whether sites should be able to block agents specifically. That’s just a building block in creating a system where an agent can work autonomously to either pay or register to access certain content, or prove it has a license to do so. For example, if a subscriber to Fast Company asks their agent to do a task, and in the course of that task the agent needs information the publication can provide, access should be seamless and, importantly, measurable. But if you don’t have a subscription, your agent will be blocked and need to go elsewhere, regardless of whether the actual article is paywalled for humans.

The real power of this idea is in the aggregate, where licensing deals carry over to users of the AI. In the case of OpenAI, which has licensed content from several media companies, that could theoretically carry over to its agents. And since agent activity is measurable, there could theoretically be a way for publications to reach those AI users and turn them into more engaged audience members. It could all be done anonymously, through the AI provider, based on user activity.
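To make that concrete, here is a minimal sketch of what publisher-side agent gating could look like. The X-Agent-License header, the license registry, and the user-agent detection below are invented conventions for illustration, not an existing standard or any CDN’s actual product; HTTP 402 (“Payment Required”) is used to signal that the agent must pay, register, or present a license.

```python
# Hypothetical sketch of publisher-side agent gating. The header name,
# license store, and agent-detection heuristic are all invented for
# illustration; no such standard currently exists.
from http.server import BaseHTTPRequestHandler, HTTPServer

KNOWN_AGENT_MARKERS = ("agent", "bot")   # crude stand-in for real agent detection
VALID_LICENSES = {"demo-license-token"}  # stand-in for a publisher's license registry

class GatedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "").lower()
        is_agent = any(marker in ua for marker in KNOWN_AGENT_MARKERS)
        license_ok = self.headers.get("X-Agent-License") in VALID_LICENSES
        if is_agent and not license_ok:
            # 402: the agent must pay, register, or prove a license
            # before it can read the article on its user's behalf.
            self.send_response(402)
            self.end_headers()
            self.wfile.write(b"Agent access requires a license.")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Full article text for humans and licensed agents.")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), GatedHandler).serve_forever()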
A real deployment would need verifiable agent identity rather than a self-reported header; whether agents must identify themselves honestly at all is exactly the point in dispute.

When your audience isn’t human

It’s questionable whether most web browsing in the future will be done by bots, but regardless of the proportion, it seems likely that agentic activity on the web will expand significantly as security concerns are slowly resolved. That means publishers will need to adapt to a world where bots acting on behalf of users become a big part of their audience, and deciding what those agents see and how much they will pay will be critical. The fundamental question in front of us now, however, is figuring out who decides: the people making the content or the people making the agents.


Category: E-Commerce

 

LATEST NEWS

2025-10-30 16:00:00 | Fast Company

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy. This week, I’m focusing on a stunning stat showing that OpenAI’s ChatGPT engages with more than a million users a week about suicidal thoughts. I also look at new Anthropic research on AI introspection, and at a Texas philosopher’s take on AI and morality. Sign up to receive this newsletter every week via email here. And if you have comments on this issue and/or ideas for future ones, drop me a line at sullivan@fastcompany.com, and follow me on X (formerly Twitter) @thesullivan.

OpenAI’s vulnerable position

OpenAI says that 0.15% of users active in a given week have conversations that include explicit indicators of potential suicidal planning or intent. Considering that ChatGPT has an estimated 700 million weekly active users, that works out to more than a million such conversations every week.
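That figure follows directly from the two numbers given; a quick check of the article’s own math:

```python
# Quick check of the article's math: 0.15% of roughly 700 million
# weekly active users.
weekly_active_users = 700_000_000
share_flagged = 0.0015  # 0.15% show explicit indicators of suicidal planning or intent
print(f"{weekly_active_users * share_flagged:,.0f}")  # 1,050,000 conversations a week
```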
That puts OpenAI in a very vulnerable position. There’s no telling how many of those users will choose their actions based on the output of a language model. There’s the case of teenager Adam Raine, who died by suicide in April after talking consistently with ChatGPT. His parents are suing OpenAI and its CEO Sam Altman, charging that their son took his life as a result of his chatbot discussions. While users feel like they can talk to a nonhuman entity without judgment, there’s evidence that chatbots aren’t always good therapists. Researchers at Brown University found that AI chatbots routinely violate core mental health ethics standards, underscoring the need for legal standards and oversight as use of these tools increases.

All of this helps explain OpenAI’s recent moves around mental health. The company decided to make significant changes in its newest GPT-5 model based on concern about users with mental health issues. It trained the model to be less sycophantic, for example: less likely to constantly validate the user’s thoughts, even when they’re self-destructive.

This week the company introduced further changes. Chatbot responses to distressed users may now include links to crisis hotlines. The chatbot might reroute sensitive conversations to safer models. Some users might see gentle reminders to take breaks during long chat sessions.

OpenAI says it tested its models’ responses to 1,000 challenging self-harm and suicide conversations, finding that the new GPT-5 model gave 91% satisfactory answers, compared to 77% for the previous GPT-5 model. But those are just evals performed in a lab; how well they emulate real-world conversations is anybody’s guess. As OpenAI itself has said, it’s hard to consistently and accurately pick up on signs of a distressed user.

The problem began coming to light with research showing that ChatGPT users, especially younger ones, spend a lot of time talking to the chatbot about personal matters, including self-esteem issues, friend relationships, and the like. While such conversations are not the most numerous on ChatGPT, researchers say they are the lengthiest and most engaged.

Anthropic shows that AI models can think about their own thoughts

It may come as a surprise to some people that AI labs cannot explain, in mathematical terms, how large AI models arrive at the answers they give. There’s a whole subfield in AI safety called mechanistic interpretability dedicated to trying to look inside these models to understand how they make connections and reason.

Anthropic’s Mechanistic Interpretability team has just released new research showing evidence that large language models can display introspection. That is, they can recognize their own internal thought processes, rather than just fabricate plausible-sounding answers when questioned about their reasoning.

The discovery could be important for safety research. If models can accurately report on their own internal mechanisms, researchers could gain valuable insights into their reasoning processes and more effectively identify and resolve behavioral problems, Anthropic says. It also implies that an AI model might be capable of reflecting on wrong turns in its thinking that send it in unsafe directions (perhaps failing to object to a user considering self-harm).

The researchers found the clearest signs of introspection in Anthropic’s largest and most advanced models, Claude Opus 4 and Claude Opus 4.1, suggesting that AI models’ introspective abilities are likely to become more sophisticated as the technology continues to advance. Anthropic is quick to point out that AI models don’t think introspectively in the nuanced way we humans do. Despite the limitations, the observation of any introspective behavior at all goes against prevailing assumptions among AI researchers. Such progress in investigating high-level cognitive capabilities like introspection can gradually take the mystery out of AI systems and how they function.

Can AIs be taught morals and values?

Part of the problem of aligning AI systems with human goals and aspirations is that models can’t easily be taught moral frameworks that help guide their outputs. While AI can mimic human decision-making, it can’t act as a moral agent that understands the difference between right and wrong, such that it can be held accountable for its actions, says Martin Peterson, a philosophy professor at Texas A&M. AI can be observed outputting decisions and recommendations that sound similar to those humans might produce, but the way the AI reasons toward constructing them isn’t very humanlike at all, Peterson adds. Humans make judgments with a sense of free will and moral responsibility, but those things can’t currently be trained into AI models. In a legal sense (which may be a reflection of society’s moral sense), if an AI system causes harm, the blame lies with its developers or users, not the technology itself.

Peterson asserts that AI can be aligned with human values such as fairness, safety, and transparency. But, he says, it’s a hard science problem, and the stakes of succeeding are high. “We cannot get AI to do what we want unless we can be very clear about how we should define value terms such as bias, fairness, and safety,” he says, noting that even with improved training data, ambiguity in defining these concepts can lead to questionable outcomes.

More AI coverage from Fast Company:

Harvey, OpenAI, and the race to use AI to revolutionize Big Law
The 26 words that could kill OpenAI’s Sora
Exclusive new data shows Google is winning the AI search wars
OpenAI finalizes restructure and revises Microsoft partnership

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.


Category: E-Commerce

 

2025-10-30 15:55:20 | Fast Company

Samsung Electronics on Thursday reported a 32.5% increase in operating profit for the third quarter, driven by rebounding demand for its computer memory chips, which the company expects will continue to grow on the back of artificial intelligence.

The South Korean technology giant set a new high in quarterly revenue, which rose nearly 9% to 86 trillion won ($60.4 billion) for the July-September period, fueled by increased sales of semiconductor products and mobile phones.

Samsung, which has dual strength in both components and finished products, said it expects the demand driven by AI to further expand market opportunities in coming months. SK Hynix, another major South Korean chipmaker, on Wednesday reported a record operating profit of 11.4 trillion won ($8 billion), which it also attributed to AI-related growth.

Samsung’s operating profit of 12.2 trillion won ($8.6 billion) in the last quarter marked a 160% increase from the previous quarter, when it said its semiconductor earnings were weighed down by inventory value adjustments and one-off costs related to technology export restrictions on China.

Samsung’s semiconductor division posted 7 trillion won in operating profit for the third quarter, with the company reporting strong sales of high-bandwidth memory chips, which are used to power AI applications.

“The semiconductor market is expected to remain strong, driven by ongoing AI investment momentum,” the company said in a statement. The company said an advanced version of its high-bandwidth memory chips, the HBM3E, is “currently in mass production and being sold to all relevant customers,” while samples of its next-generation product, the HBM4, are being shipped to key clients.

Kim Tong-Hyung, Associated Press
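For readers who want to check the story’s figures, the unstated prior-quarter profit and the exchange rate implied by the article’s own conversion can be backed out directly; a quick sketch, assuming a “160% increase” means 2.6 times the previous quarter:

```python
# Back-of-the-envelope checks on the reported Samsung figures.
# Assumes "160% increase" means the new number is 2.6x the old one.
q3_operating_profit_won = 12.2e12   # 12.2 trillion won ($8.6 billion)

prior_quarter_won = q3_operating_profit_won / 2.6
print(f"implied Q2 operating profit: {prior_quarter_won / 1e12:.1f} trillion won")  # ~4.7

implied_fx = 12.2e12 / 8.6e9        # won per U.S. dollar, from the article's conversion
print(f"implied exchange rate: {implied_fx:.0f} won/dollar")  # ~1419
```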


Category: E-Commerce

 

Latest from this category

30.10 Last-minute Halloween costume ideas inspired by news and pop culture that almost anyone can make
30.10 Eli Lilly’s obesity and diabetes treatments fuel growth and spark bidding war
30.10 Republicans urge Trump administration to back Falun Gong lawsuit against Cisco
30.10 What Mastercard is racing to snag before Visa or Coinbase gets there first
30.10 NBA approves $10 billion Los Angeles Lakers sale to Mark Walter
30.10 U.S. allowed and helped firms sell tech used for China’s surveillance state
30.10 How Hurricane Melissa quickly became one of the most powerful landfall storms in recorded history
30.10 In the future, U.S. troops won’t just deploy drones. They’ll make them

All news

30.10 Stocks Lower into Final Hour on Higher Long-Term Rates, Earnings Outlook Jitters, Profit-Taking, Tech/Consumer Discretionary Sector Weakness
30.10 Spiraling effects of federal government shutdown leave lawmakers grasping for ways to end it
30.10 Last-minute Halloween costume ideas inspired by news and pop culture that almost anyone can make
30.10 Eli Lilly’s obesity and diabetes treatments fuel growth and spark bidding war
30.10 Bull Radar
30.10 Bear Radar
30.10 Stellantis warns of one-off costs as revenue and shipments rise
30.10 Republicans urge Trump administration to back Falun Gong lawsuit against Cisco