2026-02-12 17:18:04 | Fast Company

Adam Mosseri, the head of Meta’s Instagram, testified Wednesday during a landmark social media trial in Los Angeles that he disagrees with the idea that people can be clinically addicted to social media platforms.

The question of addiction is a key pillar of the case, in which plaintiffs seek to hold social media companies responsible for harms to children who use their platforms. Meta Platforms and Google’s YouTube are the two remaining defendants; TikTok and Snap have settled.

At the core of the Los Angeles case is a 20-year-old identified only by the initials “KGM,” whose lawsuit could determine how thousands of similar lawsuits against social media companies play out. She and two other plaintiffs have been selected for bellwether trials, essentially test cases that let both sides see how their arguments fare before a jury.

Mosseri, who has headed Instagram since 2018, said it’s important to differentiate between clinical addiction and what he called problematic use. The plaintiffs’ lawyer, however, presented quotes from a podcast interview a few years ago in which Mosseri used the term addiction in relation to social media use. Mosseri responded that he was probably using the term “too casually,” as people tend to do.

Questioned about his qualifications to comment on the legitimacy of social media addiction, Mosseri said he was not claiming to be a medical expert, but said someone “very close” to him has experienced serious clinical addiction, which is why he was “being careful with my words.”

He said he and his colleagues use the term “problematic use” to refer to “someone spending more time on Instagram than they feel good about, and that definitely happens.”

It’s “not good for the company, over the long run, to make decisions that profit for us but are poor for people’s well-being,” Mosseri said.

Mosseri and the plaintiffs’ lawyer, Mark Lanier, engaged in a lengthy back-and-forth about cosmetic filters on Instagram that changed people’s appearance in ways that seemed to promote plastic surgery.

“We are trying to be as safe as possible but also censor as little as possible,” Mosseri said.

In the courtroom, bereaved parents of children who had struggled with social media appeared visibly upset during a discussion of body dysmorphia and cosmetic filters. Meta shut down all third-party augmented reality filters in January 2025. After the displays of emotion, the judge reminded members of the public on Wednesday not to signal agreement or disagreement with testimony, saying it would be “improper to indicate some position.”

During cross-examination, Mosseri and Meta lawyer Phyllis Jones pushed back on the suggestion in Lanier’s questioning that the company looks to profit off of teens specifically. Mosseri said Instagram makes “less money from teens than from any other demographic on the app,” noting that teens don’t tend to click on ads and many don’t have disposable income to spend on the products advertised to them.

When Lanier got a second opportunity to question Mosseri, he was quick to point to research showing that people who join social media platforms at a young age are more likely to stay on them longer, which he said makes teen users a prime source of long-term profit.

“Often people try to frame things as you either prioritize safety or you prioritize revenue,” Mosseri said. “It’s really hard to imagine any instance where prioritizing safety isn’t good for revenue.”

Meta CEO Mark Zuckerberg is expected to take the stand next week.

In recent years, Instagram has added a slew of features and tools it says have made the platform safer for young people, but those safeguards do not always work. A report last year, for instance, found that teen accounts created by researchers were recommended age-inappropriate sexual content, including “graphic sexual descriptions, the use of cartoons to describe demeaning sexual acts, and brief displays of nudity.”

Instagram also recommended a “range of self-harm, self-injury, and body image content” on teen accounts that the report said “would be reasonably likely to result in adverse impacts for young people, including teenagers experiencing poor mental health, or self-harm and suicidal ideation and behaviors.” Meta called the report “misleading, dangerously speculative” and said it misrepresents the company’s efforts on teen safety.

Meta is also facing a separate trial in New Mexico that began this week.

By Kaitlyn Huamani and Barbara Ortutay, AP Technology Writers



 


2026-02-12 17:00:00 | Fast Company

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week via email here.

Is AI slop code here to stay?

A few months ago I wrote about the dark side of vibe coding tools: they often generate code that introduces bugs or security vulnerabilities that surface later. They can solve an immediate problem while making a codebase harder to maintain over time. It’s true that more developers are using AI coding assistants, and using them more frequently and for more tasks. But many seem to be weighing the time saved today against the cleanup they may face tomorrow.

When human engineers build projects with lots of moving parts and dependencies, they have to hold a vast amount of information in their heads and then find the simplest, most elegant way to execute their plan. AI models face a similar challenge. Developers have told me candidly that AI coding tools, including Claude Code and Codex, still struggle when they need to account for large amounts of context in complex projects. The models can lose track of key details, misinterpret the meaning or implications of project data, or make planning mistakes that lead to inconsistencies in the code, all things that an experienced software engineer would catch.

The most advanced AI coding tools are only now beginning to add testing and validation features that can proactively surface problematic code. When I asked OpenAI CEO Sam Altman during a recent press call whether Codex is improving at testing and validating generated code, he became visibly excited. Altman said OpenAI likes the idea of deploying agents to work behind developers, validating code and sniffing out potential problems. Indeed, Codex can run tests on code it generates or modifies, executing test suites in a sandboxed environment and iterating until the code passes or meets acceptance criteria defined by the developer. Anthropic has built similar testing and validation routines into Claude Code. Some developers say Claude is stronger at higher-level planning and understanding intent, while Codex is better at following specific instructions and matching an existing codebase.

The real question may be what developers should expect from these AI coding tools. Should they be held to the standard of a junior engineer whose work may contain errors and requires careful review? Or should the bar be higher? Perhaps the goal should be not only to avoid generating slop code but also to act as a kind of internal auditor, catching and fixing bad code written by humans. Altman likes that idea. But judging by comments from another OpenAI executive, Greg Brockman, it’s not clear the company believes that standard is fully attainable. Brockman, OpenAI’s president, suggests in a recently posted set of AI coding guidelines that AI slop code isn’t something to eliminate so much as a reality to manage. “Managing AI generated code at scale is an emerging problem, and will require new processes and conventions to keep code quality high,” Brockman wrote on X.
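The “generate, test, iterate” loop these tools describe is easy to picture in miniature. The sketch below is a hypothetical Python harness, not the actual Codex or Claude Code implementation: the names generate_patch, run_tests, generate_and_validate, and MAX_ATTEMPTS are all made up for illustration, and it assumes the project’s tests run under pytest. It applies a model-generated patch in a throwaway sandbox copy of the repo, runs the test suite, and feeds failures back to the model until the tests pass or the attempt budget runs out.

```python
import subprocess
import tempfile
import shutil
from pathlib import Path

MAX_ATTEMPTS = 3  # give the model a few tries before handing back to a human


def generate_patch(task: str, feedback: str = "") -> dict[str, str]:
    """Hypothetical stand-in for a coding-assistant API call.

    Returns a mapping of relative file paths to new file contents.
    """
    raise NotImplementedError("wire this up to whatever model API you use")


def run_tests(workdir: Path) -> subprocess.CompletedProcess:
    """Run the project's test suite inside the sandbox copy (assumes pytest)."""
    return subprocess.run(
        ["python", "-m", "pytest", "-q"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )


def generate_and_validate(repo: Path, task: str) -> bool:
    """Generate code, run the tests, and iterate on failures."""
    feedback = ""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        # Work on a throwaway copy so a bad patch never touches the real repo.
        with tempfile.TemporaryDirectory() as tmp:
            sandbox = Path(tmp) / "repo"
            shutil.copytree(repo, sandbox)

            # Write the model's proposed files into the sandbox.
            for rel_path, contents in generate_patch(task, feedback).items():
                target = sandbox / rel_path
                target.parent.mkdir(parents=True, exist_ok=True)
                target.write_text(contents)

            result = run_tests(sandbox)
            if result.returncode == 0:
                print(f"attempt {attempt}: tests passed")
                return True

            # Feed the failure output back to the model for the next attempt.
            feedback = result.stdout + result.stderr
            print(f"attempt {attempt}: tests failed, retrying")
    return False
```

The point of the sketch is the shape of the loop, not the specifics: the commercial tools wrap the same idea, a sandboxed run, a pass/fail signal, and the failure output fed back as context, in their own agents and infrastructure.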
SaaS stocks still smarting from last week’s SaaSpocalypse

Last week, shares of several major software companies tumbled amid growing anxiety about AI. The share prices of ServiceNow, Oracle, Salesforce, AppLovin, Workday, Intuit, CrowdStrike, FactSet Research, and Thomson Reuters fell so sharply that Wall Street types began to refer to the event as the SaaSpocalypse.

The stocks fell on two pieces of news. First, late in the day on Friday, January 30, Anthropic announced a slate of new AI plugins for its Cowork AI tool aimed at information workers, including capabilities for legal, product management, marketing, and other functions. Then, on February 4, the company unveiled its most powerful model yet, Claude Opus 4.6, which now powers the Claude chatbot, Claude Code, and Cowork. For investors, Anthropic’s releases raised a scary question: How will old-school SaaS companies survive when their products are already being challenged by AI-native tools?

Although software shares rebounded somewhat later in the week, as analysts circulated reassurances that many of these companies are integrating new AI capabilities into their products, the unease lingers. In fact, many of the stocks mentioned above have yet to recover to their late-January levels. (Some SaaS players, like ServiceNow, are now even using Anthropic’s models to power their AI features.) But it’s a sign of the times, and investors will continue to watch carefully for signs that enterprises are moving on from traditional SaaS solutions to newer AI apps or autonomous agents.

China is flexing its video models

This week, some new entrants in the race for best model are very hard to miss. X is awash with posts showcasing video generated by new Chinese video generation models: Seedance 2.0 from ByteDance and Kling 3.0 from Kuaishou. The video is impressive. Many of the clips are difficult to distinguish from traditionally shot footage, and both tools make it easier to edit and steer the look and feel of a scene. AI-generated video is getting scary-good; its main limitation is that the generated videos are still pretty short.

Sample videos from Kling 3.0, which range from 3 to 15 seconds, feature smooth scene transitions and a variety of camera angles. The characters and objects look consistent from scene to scene, a quality that video models have struggled with. The improvements are owed in part to the model’s ability to glean the creator’s intent from the prompts, which can include reference images and videos. Kling also includes native audio generation, meaning it can generate speech, sound effects, ambient audio, lip-sync, and multi-character dialogue in a number of languages, dialects, and accents.

ByteDance’s Seedance 2.0, like Kling 3.0, generates video with multiple scenes and multiple camera angles, even from a single prompt. One video cut from a shot inside a Learjet in flight to a shot from outside the aircraft. The motion looks smooth and realistic, with good character consistency across frames and scenes, and the model can handle complex, high-motion scenes like fights, dances, and action sequences. Seedance can be prompted with text, images, reference videos, and audio. And like Kling, Seedance can generate synchronized audio, including voices, sound effects, and lip-sync, in multiple languages.

More AI coverage from Fast Company:

We’re entering the era of AI unless proven otherwise
A Palantir cofounder is backing a group attacking Alex Bores over his work with . . . Palantir
Why a Korean film exec is betting big on AI
Mozilla’s new AI strategy marks a return to its rebel alliance roots

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.



 

2026-02-12 16:48:20 | Fast Company

Russia has attempted to fully block WhatsApp in the country, the company said, the latest move in an ongoing government effort to tighten control over the internet.

A WhatsApp spokesperson said late Wednesday that the Russian authorities’ action was intended to “drive users to a state-owned surveillance app,” a reference to Russia’s own state-supported MAX messaging app, which critics see as a surveillance tool.

“Trying to isolate over 100 million people from private and secure communication is a backwards step and can only lead to less safety for people in Russia,” the WhatsApp spokesperson said. “We continue to do everything we can to keep people connected.”

Russia’s government has already blocked major social media platforms such as Twitter, Facebook, and Instagram, and has ramped up other online restrictions since Russia’s full-scale invasion of Ukraine in 2022.

Kremlin spokesman Dmitry Peskov said WhatsApp owner Meta Platforms should comply with Russian law to see the app unblocked, according to the state Tass news agency.

Earlier this week, Russian communications watchdog Roskomnadzor said it will introduce new restrictions on the Telegram messaging app after accusing it of refusing to abide by the law. The move triggered widespread criticism from military bloggers, who warned that Telegram is widely used by Russian troops fighting in Ukraine and that throttling it would derail military communications.

Despite the announcement, Telegram has largely been working normally. Some experts say it is a more difficult target than WhatsApp, and some Russian experts said that blocking WhatsApp would free up technological resources and allow authorities to focus fully on Telegram, their priority target.

Authorities had previously restricted access to WhatsApp before moving to finally ban it Wednesday.

Under President Vladimir Putin, authorities have engaged in deliberate and multipronged efforts to rein in the internet. They have adopted restrictive laws, banned websites and platforms that don’t comply, and focused on improving technology to monitor and manipulate online traffic.

Russian authorities have throttled YouTube and methodically ramped up restrictions against popular messaging platforms, blocking Signal and Viber and banning online calls on WhatsApp and Telegram. In December, they imposed restrictions on Apple’s video calling service FaceTime.

While it’s still possible to circumvent some of the restrictions by using virtual private network services, many of those are routinely blocked, too.

At the same time, authorities have actively promoted the “national” messaging app called MAX, which critics say could be used for surveillance. The platform, touted by developers and officials as a one-stop shop for messaging, online government services, making payments, and more, openly declares that it will share user data with authorities upon request. Experts also say it doesn’t use end-to-end encryption.

Associated Press

