
2026-01-15 17:00:00| Fast Company

Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. I'm Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy. This week, I'm focusing on how and why AI will grow from something that chats to something that works in 2026. I also look at a new privacy-focused AI platform from the maker of Signal, and at Google's work on e-commerce agents. Sign up to receive this newsletter every week via email here. And if you have comments on this issue and/or ideas for future ones, drop me a line at sullivan@fastcompany.com, and follow me on X (formerly Twitter) @thesullivan.

Our relationship with AI is changing rapidly

Anthropic kicked off 2026 with a bang. It announced Cowork, a new version of its powerful Claude Code coding assistant that's built for non-developers. As I wrote on January 14, Cowork lets users put AI agents, or teams of agents, to work on complex tasks. It offers all the agentic power of Claude Code while being far more approachable for regular workers (it runs within the Claude chatbot desktop app, not in the terminal as Claude Code does). It also runs at the file system level on the user's computer, and can access email and third-party work apps such as Teams.

Cowork is very likely just the first product of its kind that we'll see this year. Some have expressed surprise that OpenAI hasn't already offered such an agentic tool to consumers and enterprises; it probably will, as may Google and Microsoft, in some form. I think we'll look back at Cowork a year from now and recognize it as a real shift in the way we think about and use AI for our work tasks. AI companies have been talking for a long time about viewing AI as a coworker or copilot, but Cowork may make that concept a reality for many nontechnical workers.

OpenAI's ChatGPT, which debuted in late 2022, gave us a mental picture of how consumer AI would look and act.
It was just a little dialog box, mainly nonvisual and text-based. This shouldn't have been too surprising. After all, the chatbot interface was built by a bunch of researchers who spent their careers teaching machines how to understand words and text.

Functionally, early chatbots could act like a search engine. They could write or summarize text, or listen to problems and give supportive feedback. But their outputs were driven almost entirely by their pretraining, in which they ingested and processed a compressed version of the entire internet. Using ChatGPT was something like text messaging with a smart and informed friend.

Large language models do way, way more than that today. They understand imagery, they reason, they search the web, and they call external tools. But the AI labs continue to try to push much of their new functionality through that same chatbot-style interface. It's time to graduate from that mindset and put more time and effort into meeting human users where they live, that is, delivering intelligence through lots of different interfaces that match the growing number of tasks where AI can be profitably applied.

That will begin to happen in 2026. AI will expand into a full workspace, or into a full web browser (à la OpenAI's Atlas), and will eventually disappear into the operating system. As we saw at this year's Consumer Electronics Show, it may go further: An AI tool may come wrapped in a cute animal form factor.

Interacting with AI will become more flexible, too. You'll see more AI systems that accept real-time voice input this year. Anthropic added a feature to (desktop) Claude in October that lets users talk to the chatbot in natural language after hitting a keyboard shortcut. And Wispr Flow lets users dictate into any input window by holding down a function key.

Signal creator Moxie Marlinspike launches encrypted AI chatbot

People talk to AI chatbots about all kinds of things, including some very personal matters.
Personally, I hesitate to discuss just anything with a chatbot, because I can't be sure that my questions and prompts, and the answers the AI gives, won't somehow be shared with someone who shouldn't see them.

My worry is well-founded, it turns out. Last year a federal court ordered OpenAI to retain all user inputs and AI outputs, because they may be relevant to discovery in a copyright case. And there's always a possibility that unencrypted conversations stored by an AI company could be stolen as part of a hack. Meanwhile, the conversational nature of chatbots invites users to share more and more personal information, including the sensitive kind.

In short, there's a growing need for provably secure and private AI tools. Now the creator of the popular encrypted messaging platform Signal, who goes by the pseudonym Moxie Marlinspike, has created an end-to-end encrypted AI chatbot called Confer. The new platform protects user prompts and AI responses, and makes it impossible to connect users' online identities with their real-world ones. Marlinspike told Ars Technica that Confer users have better conversations with the AI because they're empowered to speak more freely. When I signed up for a Confer account, the first thing the site asked was that I set up a six-digit encryption passkey, which would be stored within the secure element of my computer (or phone), which hackers can't access. Another key is created for the Confer server, and both keys must match before the user can interact with the chatbot. Confer is powered by open-source AI models it hosts, not by models accessed from a third party.

Confer's developers are serious about supporting sensitive conversations. After I logged in, I saw that Confer displays a few suggested conversations near the input window, such as "practice a difficult conversation," "negotiate my salary," and "talk through my mental health."

Google is building the foundations of agentic e-commerce

Agents, of course, will do more than work tasks.
They'll be involved in more personal things, too, like online shopping. Right now human shoppers move through a long process of searching, clicking, data input, and payment in order to buy something. Merchants and brands hope that AI agents will one day do a lot of that work on the human's behalf.

But for this to work, a whole ecosystem of agents, consumer-shopping sites, and brand back-end systems must be able to exchange information in standardized ways. For example, a consumer might want to use a shopping agent to buy a product that comes up in a Google AI Mode search, so the shopping agent would need to shake hands with the Google platform and the product merchant, and they'd both have to connect through a payment agent in the middle.

Google is off to a strong start on building the agentic infrastructure that will make this all work. On January 11, the company announced a new Universal Commerce Protocol (UCP) that creates a common language for consumers, agents, and businesses to ensure that all types of commerce actions are standardized and secure. The protocol relieves all parties involved from having to create an individual agent handshake for every consumer platform and tech partner.

UCP now standardizes three key aspects of a transaction: It offers a standard for guaranteeing the identity of the buyer and seller, a standard for the buying workflow, and a standard for the payment, which uses Google's Agent Payments Protocol (AP2) extension.

Vidhya Srinivasan, Google's VP/GM of Advertising & Commerce, tells Fast Company that this is just the beginning, and that the company intends to build out the UCP to support more parts of the sales process, including related-product suggestions and post-purchase support. Google developed UCP with merchant platforms including Shopify, Etsy, Target, and Walmart. UCP is endorsed by American Express, Mastercard, Stripe, Visa, and others.
More AI coverage from Fast Company:

Why Anthropic's new Cowork could be the first really useful general-purpose AI agent
Governments are considering bans on Grok's app over AI sexual image scandal
Docusign's AI will now help you understand what you're signing
CES 2026: The year AI got serious

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.


Category: E-Commerce

 


2026-01-15 16:45:00| Fast Company

I was born an only child, but now I have a twin. He's an exact duplicate of me, down to my clothing, my home, my facial expressions, and even my voice. I built him with AI, and I can make him say whatever I want. He's so convincing that he could fool my own mother. Here's how I built him, and what AI digital twins mean for the future of people.

Deepfake yourself

From the moment generative AI was born, criminals started using it to trick people. Deepfakes were one of the first widespread uses of the tech. Today, they're a scourge to celebrities and even everyday teenagers, and a massive problem for anyone interested in the truth. As criminals were leveraging deepfakes to scam and blackmail people, though, a set of white-hat companies started quietly putting similar digital cloning technologies to use for good. Want to record a training video for your team, and then change a few words without needing to reshoot the whole thing? Want to turn your 400-page Stranger Things fanfic into an audiobook without spending 10 hours of your life reading it aloud? Digital cloning tech has you covered. You basically deepfake yourself, cloning your likeness, your voice, or both, and then mobilize your resulting digital twin to create mountains of content just as easily as you'd prompt ChatGPT or Claude. I wanted to try the tech out for myself. So I fired up today's best AI cloning tools and made Digital Tom, a perfect digital copy of myself.

Hear me out

I decided to start by cloning my voice. A person's voice feels like an especially intimate, personal thing. Think back on a loved one you've lost. I'll bet you can remember exactly how they sounded. You can probably even remember a specific, impactful conversation you had with them. Cloning a voice, with all the nuance of accent, speaking style, pitch, and breath, is also a tough technical challenge. People are fast to forgive crappy video, chalking up errors or glitchiness in deepfakes to a spotty internet connection or an old webcam.
Content creators everywhere produce bad video every day without any help from AI! A bad AI voice sounds way creepier, though. It's easy to land in the uncanny valley unless every aspect of a voice clone is perfect. To avoid that fate, I turned to ElevenLabs. The company has been around since 2022 but has exploded in popularity over the last year, with its valuation doubling to more than $6.6 billion.

ElevenLabs excels at handling audio; if you've listened to an AI-narrated audiobook, interacted with a speaking character in a video game, or heard sound effects in a TV show or movie, it's a good bet you've inadvertently experienced ElevenLabs tech. To clone my own voice, I shelled out $22 for a Creator account. I then uploaded about 90 minutes of recordings from my YouTube channel to the ElevenLabs interface. The company says you can create a professional voice clone with as little as 30 minutes of audio. You can even create a basic clone with just 10 seconds of speech. ElevenLabs makes you record a consent clip in order to ensure that you're not trying to deepfake a third party. In a few hours, my professional voice clone was ready. Using it is shockingly easy. ElevenLabs provides an interface that looks a lot like ChatGPT. You enter what you want your clone to say, press a button, and in seconds, your digital twin's voice speaks the exact words you typed out. I had my digital twin record an audio update about this article for my Fast Company editor. He described it as "terrifyingly realistic." Then, I sent a clip to my mom. She responded, "It would have fooled me."

In my natural habitat

I was extremely impressed with the voice clone. I could use it right away to spin up an entire AI-generated podcast, prank my friends, or maybe even hack into my bank. But I didn't just want a voice. I wanted a full Digital Tom that I could bend to my will. For the next stage in my cloning experiment, I turned to Synthesia.
I originally met Synthesia's CEO Victor Riparbelli in 2019 at a photo industry event, when his company was a scrappy startup. Today, it's worth $4 billion. Synthesia specializes in creating digital avatars, essentially video clones of a real person. Just as with ElevenLabs, you can type text into an interface and get back a video of your avatar reading it aloud, complete with realistic facial expressions and lip movement. I started a Synthesia trial account and set about creating my personal avatar. Synthesia asked for access to my webcam, and then recorded me reading a preset script off the screen for about 10 minutes. A day later, my avatar was ready. It was a perfect digital clone of my likeness, right down to the shirt I was wearing on the day I made it and my (overly long) winter haircut. It even placed me in my natural habitat: my comfy, cluttered home office. As with my voice clone, I could type in any text I could imagine, and in about 10 minutes I would receive a video of Digital Tom reading it aloud. Synthesia even duplicated the minutiae of my presenting style, right down to my smile and my tendency to look to the camera every few seconds when reading a script from the screen. If I recorded a video with Digital Tom for my YouTube channel, I'm certain most viewers would have no idea it's a fake.

The value of people

My experiment shows that today's AI cloning technology is extremely impressive. I could easily create mountains of audio content with my clone from ElevenLabs, or create an entire social media channel with my Digital Tom as the star. The bigger question, though, is why I'd want to. Sure, there are tons of good use cases for working with a digital twin. Again, Synthesia specializes in creating corporate training videos. Companies can rapidly create specialized teaching materials without renting a studio, hiring a videographer, and shooting countless takes of a talking head in front of a green screen.
They can also edit them by altering a few written words, for example, if a product feature changes subtly. For its part, ElevenLabs does a brisk business in audiobooks and customer service agents. But it also provides helpful services, like creating accessible, read-aloud versions of web pages for visually impaired users. My experiment convinced me, though, that there are fewer good reasons to work with your own digital twin than you might expect.

In an internet landscape where anyone can spin up a thousand-page website in a few minutes using Gemini, and compelling videos are a dime a dozen thanks to Sora, content is cheap. There are not many good ways left for users to sort the wheat from the chaff. Personality is one of the few remaining ones. People like to follow people. For creators, developing a personal relationship with your audience is the best way to keep them consuming your content instead of cheaper (and often better) AI alternatives. Compromising that by shoving an undisclosed digital twin in their face, however convincing it might be, seems like the fastest possible way to ruin that relationship. People want to hear from the meat-based Thomas Smith, even if the artificial intelligence version never forgets a word or gets interrupted by his chickens mid-video.

I could see using one of ElevenLabs' or Synthesia's built-in characters to create (fully disclosed) content. But I can't see putting my digital twins to real-world use. I can see one use for the tech, though. It struck me during my experiment that the best reason to build an AI digital twin isn't to replace your voice or likeness, but to preserve it. I sometimes lose my voice, and it's incredibly disruptive to my content production. If I were ever affected by a vocal disorder and lost it permanently, it's nice to know that there's a highly realistic backup sitting on ElevenLabs' servers. It's also cool to think that in 10 years, when I'm inevitably older and wrinklier than today, I could bring my 2026 Digital Tom back to life.
He'd be frozen in time, a perfect replica of my appearance, mannerisms, and environment in this specific moment, recallable for all eternity. I won't be using Digital Tom to augment my YouTube channel, get into podcasting, or read my kids a bedtime story anytime soon. But there's a strange part of me that's happy he's out there, just in case.



 

2026-01-15 16:35:59| Fast Company

Elon Musk's AI chatbot Grok won't be able to edit photos to portray real people in revealing clothing in places where that is illegal, according to a statement posted on X. The announcement late Wednesday followed a global backlash over sexualized images of women and children, including bans and warnings by some governments. The pushback included an investigation announced Wednesday by the state of California, the U.S.'s most populous, into the proliferation of nonconsensual sexually explicit material produced using Grok that it said was harassing women and girls. Initially, media queries about the problem drew only the response, "legacy media lies." Musk's company, xAI, now says it will geoblock content if it violates laws in a particular place. "We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis, underwear and other revealing attire," it said. The rule applies to all users, including paid subscribers, who have access to more features. xAI also has limited image creation or editing to paid subscribers only "to ensure that individuals who attempt to abuse the Grok account to violate the law or our policies can be held accountable." Grok's "spicy" mode had allowed users to create explicit content, leading to a backlash from governments worldwide. Malaysia and Indonesia took legal action and blocked access to Grok, while authorities in the Philippines said they were working to do the same, possibly within the week. The U.K. and European Union were investigating potential violations of online safety laws. France and India have also issued warnings, demanding stricter controls. Brazil called for an investigation into Grok's misuse. The British government, which has been one of Grok's most vociferous critics in recent days, welcomed the change, while the country's regulator, Ofcom, said it would carry on with its investigation.
"I shall not rest until all social media platforms meet their legal duties and provide a service that is safe and age-appropriate to all users," Technology Secretary Liz Kendall said. California Attorney General Rob Bonta urged xAI to ensure there is no further harassment of women and girls from Grok's editing functions. "We have zero tolerance for the AI-based creation and dissemination of nonconsensual intimate images or of child sexual abuse material," he said. California has passed laws to shield minors from AI-generated sexual imagery of children and to require AI chatbot platforms to remind users they aren't interacting with a human. But Democratic Gov. Gavin Newsom also vetoed a law last year that would have restricted children's access to AI chatbots.

Elaine Kurtenbach, AP business writer. Pan Pylas in London contributed to this report.



 
