
2025-08-20 13:29:55| Fast Company

Beauty magazines have been lying to readers for decades, but at least they used to start with actual humans. NewBeauty magazine's Summer/Fall 2025 issue quietly crossed that line, publishing a multipage article dedicated to the beautification of female skin that featured perfect female models who weren't real. Professional photographer Cassandra Klepac spotted the spread and pointed out that each photo was labeled as AI and included the prompt used to generate it. With the advent of technology capable of synthesizing ultra-high-definition photos of realistic humans, this was bound to happen sooner rather than later. Knowing the hell that Vogue recently faced for featuring Guess advertisements with AI-generated models (sparking the rage of 2.7 million TikTok viewers and subscription cancellations), it is surprising that NewBeauty's editors decided to do the same with actual editorial content, the supposedly "real" part of magazines. So why did the magazine, which calls itself "the beauty authority" in its tagline, do this? "NewBeauty features both real people and patients, alongside AI-generated images," executive editor Liz Ritter told me via email. "We maintain a strict policy of transparency by clearly labeling all AI content in detail in our captions, including the prompts used to create these images, so readers always know the difference." Her nonanswer leaves us to only speculate on the reasons why. Talking about the Guess campaign, Sara Ziff, founder of the Model Alliance, said that it was "less about innovation and more about desperation and the need to cut costs." Given the depressed status of the print media industry, I suspect that may have played a role in the case of NewBeauty.

Perfectly legal . . .

However you may feel about the campaign, we know that NewBeauty didn't do anything illegal.
There are virtually no laws governing editorial use of AI-generated humans at this point, except to stop deceptive use in politics and with regard to the honor of individuals (something already covered by libel laws). Surprisingly, advertising is a little bit more regulated. The Federal Trade Commission can penalize deceptive advertising practices. New York's groundbreaking AI disclosure law targets advertisements, requiring "conspicuous disclosure" when synthetic performers are used. But editorial? It exists in a regulatory wasteland, leaving it to the judgment of editors. Europe's comprehensive AI Act mandates clear labeling of AI-generated content and carries maximum fines of €35 million ($40.7 million), but it focuses on transparency, not prohibition. You can fabricate entire humans for editorial use; you just have to mention it in the caption. NewBeauty did exactly that.

. . . but dangerous anyway?

We also know that just because there's nothing illegal about it doesn't mean it is right. Using artificial intelligence feels dangerous because it is so easy and so powerful. When it spreads, and it will, many professions will be affected. This includes not only the models, photographers, makeup artists, and all the people who make real photoshoots possible, but also the Photoshop artists who retouch what comes out of the digital camera into an image that quite often has very little to do with what the sensors capture. For the past few decades, Photoshop artists have erased wrinkles, refined arms, rebuilt waistlines, adjusted eyes, and turned anything that editors deemed imperfect into whatever fantasy beauty standard the industry set. Reality has been malleable, to be generous. Remember that time Rolling Stone heavily retouched Katy Perry because they didn't think she was pretty enough? Or that Lena Dunham Vogue cover and photo feature, the one in which she was missing an arm?
Dunham said at the time those photos were intended as "fantasy." Like everything else featured in glossy pages. Those were just two high-profile examples of a practice that happens regularly for any cover of any fashion, beauty, or celebrity print magazine. In this sense, acting surprised or offended by NewBeauty's AI models feels hollow, albeit understandable, given the fear of the damage that AI tools will bring to the industry.

It's been a ruse forever

The hard reality is that photographers have used lighting and filtering tricks to make things look more beautiful than they are in real life since the advent of the medium. The editorial and advertising industries, in turn, have been breaking every taboo in digital manipulation since Photoshop was invented. Today, AI is democratizing the deception once again, to the point where a single art director for some random magazine can actually create a high-resolution print spread full of beautiful people who don't exist, simply by using a short prompt and spending a couple of dollars. I get it. It's tempting to tweak reality, sometimes rearranging it completely, to tell a compelling narrative. This summer I went to the Robert Capa museum in Budapest (highly recommended) and stared for a while at that famous Spanish Civil War photo of a Republican "miliciano" being shot. I considered its terrible beauty and the effect it had on the public in an era in which the specter of Nazism and fascism was rising in Europe. I also considered the fact that some experts believe the photo may have been staged (while others vehemently disagree) and pondered what is real and what's not, and the effects of perceived reality versus "real reality" versus manipulated reality. These are questions we constantly face as journalists. If Capa really staged that photo, perhaps it was the right thing to do at the time. Perhaps not. But I digress.
I don't pretend to hold NewBeauty to the same fact-checking standards that governed news media back in the time of Capa. Beauty, fashion, cars, and luxury magazines are all part of that aspirational world in which reality easily gets bent to tell a fantasy. I would say that, by clearly labeling the AI images, NewBeauty is being far more honest than the editors of fashion and beauty magazines have been in decades past. Those magazines never labeled their photos, "THIS CELEB IS PHOTOSHOPPED! THIS AIN'T REAL, STEPHANIE! STOP DIETING! LOL!" Yet all covers and many interior shots were digitally altered, and many times reconstructed beyond recognition, sometimes pathetically so. Indeed, the beauty industry was already constructed on visual lies, but in the age of AI, the powers that be won't stop here. Will it be problematic? Yes. Will it cause real economic and personal damage? Most definitely. But we will get more and more used to it until we stop questioning the practice at all. I hate to tell you I told you so, but I told you so. It's the destruction of reality as we know it.


Category: E-Commerce

 

LATEST NEWS

2025-08-20 13:28:19| Fast Company

It's rare for a tech titan to show any weakness or humanity. Yet even OpenAI's notoriously understated CEO Sam Altman had to admit this week that the rollout of the company's new GPT-5 large language model was a complete disaster. "We totally screwed up," Altman admitted in an interview with The Verge. I agree. As a former OpenAI beta tester, and someone who currently spends over $1,000 per month on OpenAI's API, I've eagerly anticipated the launch of GPT-5 for over a year. When it finally arrived, though, the model was a mess. In contrast to the company's previous GPT-4 series of models, GPT-5's responses feel leaden, cursory, and boring. The new model also makes dumb mistakes on simple tasks and generates shortened answers to many queries. Why is GPT-5 so awful? It's possible that OpenAI hobbled its new model as a cost-cutting measure. But I have a different theory: GPT-5 completely lacks emotional intelligence. And its inability to understand and replicate human emotion cripples the model, especially on any task requiring nuance, creativity, or a complex understanding of what makes people tick.

Getting Too Attached

When OpenAI launched its GPT-4 model in 2023, researchers immediately noted its outstanding ability to understand people. An updated version of the model (dubbed GPT-4.5 and released in early 2025) showed even higher levels of emotional intelligence and creativity. Initially, OpenAI leaned into its model's talent for understanding people, using terms cribbed from the world of psychology to describe the model's update. "Interacting with GPT-4.5 feels more natural. Its broader knowledge base, improved ability to follow user intent, and greater EQ make it useful for tasks like improving writing, programming, and solving practical problems," OpenAI wrote in the model's release notes, subtly dropping in a common psychological term used to measure a person's emotional intelligence. Soon, though, GPT-4's knack for human-like emotional understanding took a more concerning turn.
Plenty of people used the model for mundane office tasks, like writing code and interpreting spreadsheets. But a significant subset of users put GPT-4 to a different use, treating it like a companion, or even a therapist. In early 2024, studies showed that GPT-4 provided better responses than many human counselors. People began to refer to the model as a friend, or even treat it as a confidant or lover. Soon, articles began appearing in major news sources like the New York Times about people using the chatbot as a practice partner for challenging conversations, a stand-in for human companionship, or even an aide for counseling patients. This new direction clearly spooked OpenAI. As Altman pointed out in a podcast interview, conversations with human professionals like lawyers and therapists often involve strong privacy and legal protections. The same may not be true for intimate conversations with chatbots like GPT-4. Studies have also shown that chatbots can make mistakes when providing clinical advice, potentially harming patients. And the bots' tendency to keep users talking, often by reinforcing their beliefs, can lead vulnerable patients into a state of "AI psychosis," where the chatbot inadvertently validates their delusions and sends them into a dangerous emotional spiral. Shortly after the GPT-5 launch, Altman discussed this at length in a post on the social network X. "People have used technology including AI in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that," Altman wrote. "We value user freedom as a core principle, but we also feel responsible in how we introduce new technology with new risks." Altman went on to acknowledge that a lot of people effectively use ChatGPT as a sort of therapist or life coach. While this can be really good, Altman admitted that it made him deeply uneasy.
In his words, if "users have a relationship with ChatGPT where they think they feel better after talking but they're unknowingly nudged away from their longer term well-being (however they define it), that's bad."

Lobotomize the Bot

To avoid that potentially concerning, and legally damaging, direction, OpenAI appears to have deliberately dialed back its bot's emotional intelligence with the launch of GPT-5. The release notes for the new model say that OpenAI has taken steps toward "minimizing sycophancy," tech speak for making the bot less likely to reinforce users' beliefs and tell them what they want to hear. OpenAI also says that GPT-5 errs on the side of "safe completions," giving vague or high-level responses to queries that are potentially damaging, rather than refusing to answer them or risking a wrong or harmful answer. OpenAI also writes that GPT-5 is "less effusively agreeable," and that in training it, the company gave the bot example prompts that led it to agree with users and reinforce their beliefs, and then taught it not to do that. In effect, OpenAI appears to have lobotomized the bot, potentially removing or reconfiguring, through training and negative reinforcement, the parts of its virtual brain that handle many of the emotional aspects of its interactions with users. This may have seemed fine in early testing; most AI benchmarks focus on productivity-centered tasks like solving complex math problems and writing Python code, where emotional intelligence isn't necessary. But as soon as GPT-5 hit the real world, the problems with tweaking its emotional center became immediately obvious. Users took to social media to share how the switch to GPT-5 and the loss of the GPT-4 model felt like losing a friend. Longtime fans of OpenAI bemoaned the cold tone of GPT-5, its curt and businesslike responses, and the loss of an ineffable spark that made GPT-4 a powerful assistant and companion.
Emotion Matters

Even if you don't use ChatGPT as a pseudo-therapist or friend, the bot's emotional lobotomy is a huge issue. Creative tasks like writing and brainstorming require emotional understanding. In my own testing, I've found GPT-5 to be a less compelling writer, a worse idea generator, and a terrible creative companion. If I asked GPT-4 to research a topic, I could watch its chain of reasoning as it carefully considered my motivations and needs before providing a response. Even with Thinking mode enabled, GPT-5 is much more likely to quickly spit out a fast, cursory response to my query, or to provide a response that focuses solely on the query itself and ignores the human motivations of the person behind it. With the right prompting, GPT-4 could generate smart, detailed, nuanced articles or research reports that I would actually want to read. GPT-5 feels more like interacting with a search engine, or reading text written in the dull prose of a product manual. To be fair, for enterprise tasks like quickly writing a web app or building an AI agent, GPT-5 excels. And to OpenAI's credit, use of its APIs appears to have increased since the GPT-5 launch. Still, for many creative tasks, and for many users outside the enterprise space, GPT-5 is a major backslide. OpenAI appears genuinely blindsided by the anger many users felt about the GPT-5 rollout and the bot's apparent emotional stuntedness. OpenAI leader Nick Turley admitted to The Verge that "the degree to which people had such strong feelings about a particular model was certainly a surprise to me." Turley went on to say that the level of passion users have for specific models is quite remarkable and that, in a truly techie bit of word choice, it "recalibrated" his thinking about the process of releasing new models and the things OpenAI owes its longtime users.
The company now seems to be aggressively rolling back elements of the GPT-5 launch: restoring access to the old GPT-4 model, making GPT-5 warmer and friendlier, and giving users more control over how the new model processes queries. Admitting when you're wrong, psychologists say, is a hallmark of emotional intelligence. Ironically, Altman's response to the GPT-5 debacle demonstrates rare emotional nuance, at the exact moment that his company is pivoting away from such things. OpenAI could learn a thing or two from its leader. Whether you're a CEO navigating a disastrous rollout or a chatbot conversing with a human user, there's a simple yet essential lesson you forget at your peril: emotion matters.



 

2025-08-20 13:15:00| Fast Company

Claire's has found a buyer just two weeks after filing for Chapter 11 bankruptcy protection. It announced on Wednesday, August 20, that it plans to sell its North America business and IP to Ames Watson, a private equity firm. Courts in the U.S. and Canada must approve the sale for it to proceed. The company began bankruptcy proceedings on August 6 with $690 million in debt. However, Claire's hasn't disclosed the amount Ames Watson would pay for the assets. It did state that the sale will significantly benefit its attempt to create value during restructuring. Finding a buyer has been a critical goal for Claire's. At the time of filing, Claire's CEO Chris Cramer said the company was in active discussions with potential strategic and financial partners to find alternatives to shutting down stores. Claire's had claimed its North American stores would stay open during bankruptcy proceedings, but named 18 locations across the country that would likely close soon. It said another 1,236 stores could close by October 31 if the company didn't find a buyer in time. In light of the agreement, Claire's has paused the liquidation process at a significant number of stores. The stores that will stay open could total as many as 950 in North America, though some stores in the region will continue with liquidation. "We are pleased to have the opportunity to partner with Claire's and support the next chapter for this iconic brand," Ames Watson CEO Lawrence Berger said in a statement. "We are committed to investing in its future by preserving a significant retail footprint across North America, working closely with the Claire's team to ensure a seamless transition and creating a renewed path to growth based on our deep experience working with consumer brands."



 
