2026-02-12 10:30:00| Fast Company

Every TV and movie critic is loving to hate on Darren Aronofsky these days. The Academy Award-nominated filmmaker, creator of lyrical, surreal, and deeply human movies like Black Swan, The Whale, Mother!, and Pi, has released an AI-generated series called On This Day . . . 1776 to commemorate the semiquincentennial of the American Revolution. Though the series has garnered millions of views, commentators everywhere call it “a horror,” slamming Aronofsky’s work for how stiff the faces look and how everything morphs unrealistically. Although calling it “requiem for a filmmaker” seems excessive, they are not wrong about these faults. The series, created using real human voice-overs and Google’s generative video AI, does suffer from uncanny valley syndrome: our brains can very easily detect what’s off with faces, and we don’t buy them as real, feeling an automatic repulsion.

But this month, two new generative AI models from China have closed the valley’s gap: Kling 3.0 and Seedance 2.0. For the first time, AI is generating video content that is truly indistinguishable from film, with the temporal and subject coherence that will make the 2020s “It’s AI slop!” crybabies disappear like their predecessors in the aughts (“It’s CGI!”) and the 1990s (“It’s Photoshop!”).

Seedance 2.0, developed by TikTok parent company ByteDance, was released in beta on February 9, exclusively in China for now. It’s widely considered the first “director’s tool.” Unlike previous models that gave the feeling you were pulling a slot machine lever and hoping for a coherent result, Seedance allows for what analysts at Chinese investment firm Kaiyuan Securities call director-level control. It achieves this through a breakthrough multimodal input system. ByteDance has redesigned its model to accept images, videos, audio, and text simultaneously as inputs, rather than relying on text prompts alone. A creator can upload up to a dozen reference files, mixing character sheets, specific camera-movement demos, and audio tracks, and the AI will synthesize them into a scene that follows cinematic logic.

The results have been startling. “With its reality enhancements, I feel it’s very hard to tell whether a video is generated by AI,” says Wang Lei, a programmer in Guangdong who tested the model to generate a 10-second history of humanity. He described the output as “smooth in storytelling with cinematic grandeur.” One of the tricks is that ByteDance trained it on the vast video dataset of Douyin (China’s TikTok). This gave the model the capacity to understand human nuance, which shows in the everyday shots it produces in addition to the Hollywood-level cinematic shots it can create.

[Image: Kuaishou]

And then there’s Kling

If Seedance is the visionary director, Kling 3.0 is the rigorous cinematographer. Launched February 5 by Kuaishou Technology, Kling 3.0 has earned the moniker “Motion Engine.” While other models struggle with the basic laws of physics (cars floating, people walking through walls), Kling 3.0 respects gravity and light.

[Image: Kuaishou]

“The physics simulation finally lets you art direct motion instead of hoping for it,” Bilawal Sidhu, a former Google product manager and AI strategist, said on LinkedIn. This makes it uniquely suited to be integrated into commercial work where a product must look and behave like a real object. Commenters on Reddit were in awe of the model’s new abilities, especially for long takes and multishot sequences.
[Image: Kuaishou]

Kling’s major breakthrough is its Elements feature, which allows users to upload reference videos to lock in character consistency. Before, generative video AI would change characters’ faces at random, as in Aronofsky’s series. With Kling, they always look exactly the same in any shot it generates, a holy grail feature for filmmakers who need actors to look like the same person from shot to shot. It doesn’t just generate pixels; it understands narrative pacing, cutting, and continuity. The level of realism is so high that Kaiyuan Securities believes the new model is positioned to be widely adopted first in AI manga and short drama, bringing down costs and improving efficiency to benefit companies with large holdings of intellectual property or traffic.

The markets agreed. The release of these models immediately sent shockwaves through the Chinese tech sector. Digital content company COL Group skyrocketed on anticipation that it will use these models. Shares in studio giant Huace Media and game developer Perfect World rallied 7% and 10%, respectively. Investors aren’t betting on a toy; they’re betting on the total replacement of traditional production pipelines in gaming, film, and publishing.

An industrial revolution for the visual arts

For many professionals in the trenches, generative AI tools are not toys; they are the new standard. Julian Muller, an award-winning director and creative producer, told me the shift is already visible to everyone. “Just from what I saw in the Super Bowl commercials on Sunday, many incorporated AI elements to achieve creative results. We are definitely at the beginning of a shift in what is possible under tighter timelines and leaner production investments,” Muller says.

“I’d say these models [Seedance 2.0 and Kling 3.0] clearly can produce stunning visual results,” Muller tells me, noting, however, that they’re not perfect. “They are very close to being indistinguishable from real production footage, yet I think there is still a detectable artificial quality to it.”

Muller does believe that we have passed the point of no return. “Directors and producers who don’t use AI tools to enhance their projects will soon become the exception and not the rule,” he says. “This is the future, and we’re definitely not going back.”

This sentiment is echoed by Tim Simmons, a 17-year Hollywood veteran who analyzes the industry on the YouTube channel Theoretically Media. He told me that while big studios are paralyzed by their own infrastructure, indie creators are adapting. “Adoption at the large studios will remain slower because of the rigid postproduction specs that necessitate building customized AI workflows,” he says. “The challenge is the time required to build such a workflow versus the speed at which AI models are evolving.” Basically, by the time the studios have finished constructing your bridge, the river has moved 150 miles to the north, he points out.

“Setting aside the complex discussions regarding unions and talent for a moment, it’s safe to say that through 2026, you’ll see tentative steps from larger studios,” Simmons says. “But for indie studios and international production houses working outside the traditional Hollywood system? Utilization will rise rapidly.”

A demo video from the Kling 3.0 announcement [Image: Kuaishou]

No soul in the machine

Not everyone is ready to embrace the algorithm, of course. While the technology has nearly conquered the visual uncanny valley, a deeper, emotional chasm remains.
“I don’t think we’ve ever been amazed and saddened like we are today,” Peter Quinn, a VFX artist and director known for his surreal, handcrafted effects, told me via email. “Spectacular art has just become so dull,” he says. Quinn argues that we value art not just for the final image, but for the human struggle behind it: the painter mixing colors, the stop-motion artist moving a puppet millimeter by millimeter. “Kling 3.0 and Seedance 2.0, while spectacular, are 2026’s latest shiny AI toys . . . capable of generating soulless marvels, birthed in a data center somewhere,” he says. “It’s interesting how the wow fades when we hear it’s AI.”

In fact, Quinn is in the process of creating a TV docuseries about the anti-AI movement. Titled The Creators, it intentionally features dozens of “real” artists who’ve found interesting ways to express creativity by leaning heavily into showing the process, time, and effort it takes to make something. “We see a painter mixing and painstakingly applying paint to a canvas over days, a stop-motion artist’s timelapse of weeks of tiny, well-considered adjustments, a dancer getting it wrong, a collage artist cutting hundreds of pieces by hand, an artist who can create photo-real pencil sketches, a sculptor who knows the nuance of clay, or a photographer who sees something nobody else does,” he tells me. “It just feels like it’s time. [The] time it takes is what makes it valuable and worthy of looking at or hanging on a wall.”

Titans of the industry share his skepticism. Guillermo del Toro has famously dismissed AI art as “an insult to life itself,” while Breaking Bad and Better Call Saul creator Vince Gilligan says he won’t use tools that remove the human element from storytelling. The credits of Pluribus include a line stating that it was proudly made by humans. Maybe TV and cinema will bifurcate between a minority of human-only-made art for the galleries and the purists, and algorithmic content for the masses, just as there are devotees of real film, like Christopher Nolan and Quentin Tarantino, who refuse to use digital cameras like everyone else in the industry.

A demo video from the Kling 3.0 announcement [Image: Kuaishou]

The new impressionism

I understand Quinn, Del Toro, Nolan, and every purist out there. But from a historical perspective, it really doesn’t make a lot of sense. Despite the existential angst (and leaving aside the huge problems this will cause in terms of jobs and copyright, a topic for another article), there is reason for deep optimism. We are standing at a moment in history that mirrors the state of art in the late 19th century. Before the industrial revolution brought us the collapsible paint tube and the pre-stretched, factory-made canvas, painting was an expensive, studio-bound endeavor reserved for the elite artists whose patrons paid them enough to grind their own pigments. The industrial revolution in paint manufacturing liberated every artist. It allowed Monet and Renoir to leave the studio, go outside, and paint the light. It birthed Impressionism.

Seedance 2.0 and Kling 3.0 may be the paint tubes of cinema and TV, media whose costs have come down with the analog and digital video revolutions but that remain reserved for a very few. Those models, and the ones that will come next from Google and others, truly open the gates for AI-generated stories that will feel as real as the ones produced with real people, whether the purists like it or not.
Simmons believes “there is a ‘new media’ coming that isn’t ‘just movies but cheaper.’” “It will be interactive, generative, and personalized in ways we can’t fully articulate yet,” he says. “I don’t think we have the language for it yet. Right now, we are looking at the internet in 1990 and asking, ‘How will this change the fax machine?’ The answer wasn’t a better fax machine.”

I believe he is right. By lowering the barrier to entry to zero, Seedance and Kling are inviting billions of people who have never held a camera to tell their stories. With the uncanny valley closed, the gatekeepers are gone. The only thing left is to see what humanity decides to paint with this terrifying, wonderful new brush.


Category: E-Commerce

 

LATEST NEWS

2026-02-12 10:07:00| Fast Company

When my business went through a difficult season, I turned to my friend, ChatGPT. I asked the large language model (LLM) for insights and advice on how to leverage my strengths and pivot my business as budgets for women’s leadership programs shifted downward. When the well-framed answers started pouring in, I didn’t pause to check in with myself and ask whether my opinion diverged from ChatGPT’s or whether this advice aligned with my values and mission. In fact, I didn’t even think to ask ChatGPT what might work in my favor if I just stayed the course. I was a LLeMming: a term Lila Shroff uses in The Atlantic to describe compulsive AI users. Shroff argues that just as the adoption of writing reduced our memory and calculators devalued basic arithmetic skills, AI could be atrophying our critical thinking skills.

A MODERN LEADERSHIP BLIND SPOT

In my TEDx talk, I share that we are all susceptible to a cognitive bias called authority bias, which means we are heavily influenced by the opinions and judgments of perceived authority figures. This could be accepting your boss’s input without critical evaluation, or it could be blind trust that ChatGPT always provides the right answers. Large language models offer us 24/7 access to advice and guidance. It’s easy to fall into an authority bias toward LLMs because not only do tools like ChatGPT answer all questions with an astonishingly confident tone, but outsourcing our decision-making is convenient. Also known as cognitive offloading, the outsourcing of cognition helps people manage mental load, memory demands, and decision burden. There is also discomfort and effort involved in turning inward (and checking in is not quick, nor are the answers obvious).

Given that LLMs are not well-rounded, critical-minded people, this can be dangerous. LLMs have been known to hallucinate by making up data or resources, reduce cognitive problem-solving skills, and hinder spontaneous creativity. They also have a bias for positivity, which means they can validate or support even the worst of ideas. This bias can be especially powerful in making you drift off track as a leader. Here’s what to do when you realize you’re outsourcing your thinking (whether to an LLM or to a person).

GET CURIOUS ABOUT YOUR MOTIVATIONS

When I sought out ChatGPT to help me make some business pivots at the beginning of 2025, it seemed like a safe place to express my concerns and get advice without judgment. What I was really seeking was a sense of certainty in an uncertain time. It’s tempting to default to our favorite LLM when uncertainty hits. One of my clients, a founder in the events industry, was feeling stuck on a creative strategy. She wanted to offload some of the uncertainty she was feeling around her marketing strategy, so she asked ChatGPT for feedback. It gave her a host of strategies to try. When she asked my opinion, I asked her: Does the strategy align with your values? Does it move you and your team closer to your goals and objectives? Most importantly, does this recommendation energize you, or does it drain you?

If you are wondering whether or not to use a strategy or idea suggested by AI, you can ask yourself these same questions. You can also start to keep track of your own tendencies, like how frequently you turn to your favorite LLM to solve a problem, or even to validate your choices and beliefs. Are you trying to eliminate uncertainty? Seeking validation? Craving alternatives? Or looking for novel ideas?
When you notice why you turn to your favorite LLM for advice, it becomes easier to slow down and ensure you are using it for the right reasons.

TRUST YOURSELF FIRST

For over three months, I’ve stopped asking ChatGPT (my preferred LLM) for advice on business challenges after I realized I was drifting off course. While building a skill set around AI and LLMs is critical as a leader, this exercise helped me rebuild self-trust. I feel better in my physical and mental health, my creativity has returned, and I feel back in alignment with my business, my decisions, and my future. I made some hard decisions to quit things that weren’t working for me (things that ChatGPT had enthusiastically supported).

One of my clients realized that she was trusting Claude, her preferred LLM, too much for leadership advice. To combat this, she started to read the advice in a toddler’s voice. It helped her remember that the recommendations, while sounding smart, generally carry no more experience and education than a toddler’s. It’s often guessing at best. Slow down enough to assess whether the advice aligns with your value system, feels right, or is even advice you’d entertain if a younger coworker suggested it.

DON’T OUTSOURCE YOUR LEADERSHIP POWER

A client of mine remembers the precise day she started looking for a new job. It was the day she shared her annual marketing strategy with her CEO. As CMO, she had spent months gathering data and research to craft this careful plan. Her CEO took her plan, put it in ChatGPT, and told her they would be moving forward with one of ChatGPT’s strategies instead of her custom-crafted plan. She felt her intelligence was undermined as the CEO swapped her decades of marketing knowledge for a tool that has been known to guess.

As modern leaders, we should be both proficient in using AI tools and cognizant of when not to use them. We have to trust that we can bring all five of our senses to real business issues, and AI cannot. Delegating our approach and decisions to AI leads to a sea of sameness, and in my client’s situation, employee disengagement. In my own experience, defaulting to LLMs for the answer made it harder to think creatively and on the fly. Remember, your experience, insights, and senses are unique and valuable. They are your competitive advantage. No AI tool can replace them.


Category: E-Commerce

 

2026-02-12 10:07:00| Fast Company

Romance scams used to feel like a cliché. Everyone pictured an email from an overseas “prince,” poorly written and full of typos and pleas for cash. Now, that cliché is dead. Today’s romance scams are industrial-scale operations. Attackers use artificial intelligence to clone voices, create deepfake video calls, and write scripts with large language models (LLMs). In 2024 alone, the Federal Trade Commission reported that financial losses to romance scams skyrocketed, with victims losing $1.14 billion. The real number, hidden by shame and silence, is likely triple that.

Romance scams aren’t just a tragedy for the victims. A successful scam is a massive risk for businesses, too. When an employee with access to sensitive data or funds is compromised, the “heartbreak hack” can harm an entire organization.

What Today’s Romance Scams Look Like

Phase 1: Contact. Romance scams often start on dating apps, but they’re also prevalent on Instagram, Facebook, and LinkedIn, with a seemingly innocent message. These scams aren’t necessarily about love; they’re about establishing trust. For example: “Is this Alex? We met at the conference last week,” or “Sorry, wrong number, but your profile photo is lovely.” The goal is to move the conversation to an encrypted app, such as Telegram or WhatsApp, where traditional security measures can’t monitor it. Once contact is established, the manipulation becomes emotional.

Phase 2: Love bomb. Over weeks or months, the scammer builds intimacy. They’ll share mundane details, such as photos of their dog or personal struggles. With today’s AI upgrade, LLMs can craft empathetic responses that mirror the victim’s shared information to gain trust. Eventually, the relationship is leveraged for financial gain.

Phase 3: Pivot. Once trust is established, the conversation pivots. The scammer doesn’t ask for a plane ticket or emergency money. They talk about success. They might say, “My uncle has an exclusive crypto trading algorithm.” They offer to teach the victim how to invest, showing massive (yet fake) returns on a legitimate-looking app. Then the victim invests large sums of money.

What makes these scams especially dangerous is that the old warning signs no longer apply.

When the Bot Flirts Back

We used to say, “If they won’t video call you, it’s a scam.” That advice is now obsolete. In deepfake video calls, scammers use real-time face-swapping technology. On your screen, the person moves, blinks, and smiles, wearing the face of the stolen identity. While the tech is good, it’s not perfect. Tip: Look for blurring around the neck and hairline, or glitches when they pass a hand in front of their face.

In voice cloning, scammers send voice notes that sound exactly like the person in the photos. Free AI tools now require less than 10 seconds of audio to clone a voice with 85% accuracy, enabling voicemails that reinforce the persona’s reality.

Organizations Need to Pay Attention

You might be thinking, why is this a CISO’s problem? Take the now-former CEO of Heartland Tri-State Bank, who fell victim to such a scam. Convinced he was investing in a crypto opportunity for his “friend,” he embezzled $47 million of the bank’s funds, leading to the bank’s total collapse and a 24-year prison sentence. Had the bank’s chief information security officer known what was going on, the situation might have been identified earlier and nipped in the bud. Here are three forms of the corporate blast radius.
Embezzlement: Employees with access to payroll or wire transfers may “borrow” company funds, believing they’ll pay it back once their “investment” clears.

Sextortion and blackmail: Scammers typically encourage victims to share intimate images. Once they have this material, it becomes leverage.

BYOD malware: The “trading app” the victim installs is often sophisticated malware that gives the attacker backdoor entry. If that device connects to your corporate network, the attacker is inside.

How to Stop a Romance Scam

Defending against romance scams requires recognizing patterns in infrastructure and in the psychology of influence. Here are three tips to avoid falling victim to a fraudster.

Watch for the vibe shift: If a romantic interest mentions cryptocurrency, foreign exchange (forex), or nodes within the first few weeks, it’s a 100% positive indicator of a scam, no exceptions. If they’ve been patient for months but suddenly an opportunity is closing quickly, that is manufactured urgency designed to bypass critical thinking.

The “specific action” test: Try to hop on a video call and take two actions. First, ask the person to turn their head all the way around. Deepfake models often struggle with extreme movements or facial expressions, and the face can glitch. Second, ask the person to wave a hand in front of or behind their head. AI often gets confused about which object is in front, leading to face distortion.

Move beyond awareness training: Social engineering defense used to be treated as a training problem, measured by click rates and phishing simulations. But modern attacks go beyond inboxes, and they don’t wait for employee mistakes. Today’s most damaging campaigns leverage impersonation tactics across email, messaging platforms, and social media, often targeting trusted relationships. Defense requires moving beyond reactive training toward early detection of impersonation and coordinated disruption, supported by human risk management practices that help employees recognize how attacks like romance scams begin and escalate.

Trust, But Verify

There’s now little distinction between personal life and corporate risk. When an employee or executive is emotionally compromised, so is the organization. Human intuition can’t win a fight against AI-powered psychological warfare. The heart will always be a vulnerability, and in the age of AI, it’s also an attack vector. Romance scams prove that attackers don’t need to break a firewall; they just need to break a heart. It’s time to defend with rigor.


Category: E-Commerce

 
