
2026-01-26 11:00:00| Fast Company

When a stranger smiles at you, you smile back. That is why, when Sir Ian McKellen (The Lord of the Rings, X-Men, Amadeus) walked onto the stage in front of me, looked me straight in the eye, and smiled at me, I smiled back. It was the polite thing to do. It was also completely unnecessary, because McKellen was not actually on the stage in front of me. He smiled at me through a pair of special glasses.

The reason for this unusual social interaction is called An Ark, which bills itself as the first play to be created in mixed reality. Using Magic Leap glasses, the play blends the physical world with the digital realm, creating an unusually intimate theater experience. Opening January 21 at The Shed, the arts center in Manhattan's Hudson Yards, An Ark tells a story of humanity through the perspective of four unnamed characters speaking to you from the afterlife. The characters, played by McKellen, Golda Rosheuvel of Bridgerton fame, Rosie Sheehy (a Welsh stage and screen actor known for her work with the Royal Shakespeare Company), and Arinzé Kene (a British actor and playwright who originated the lead role of Bob Marley in the West End musical Get Up, Stand Up!), appear to sit in a semicircle that you, a member of the audience, are part of.

From the second they appear on stage, their eyes peer straight into your soul as they talk directly to you for the length of the play, which lasts 47 minutes. The illusion, which some might find disconcerting, is that each member of the audience is the center of attention. In a purely physical world, this conceit would be impossible to realize unless the play were performed privately, one audience member at a time. But with the help of technology, it was convincing enough to elicit an unconscious smile from me, until my brain caught up to the trickery and the magic spell broke.

[Photo: Marc J. Franklin/courtesy The Shed]

The making of a mixed-reality play

An Ark was written by British playwright Simon Stephens, who is perhaps most famous for his stage adaptation of The Curious Incident of the Dog in the Night-Time, and directed by Sarah Frankcom, a British director known for her work at the Royal Exchange and National Theatre. The mastermind is Todd Eckert, who both conceived of and produced the play.

[Photo: Marc J. Franklin/courtesy The Shed]

Eckert built a decades-long career in music journalism, film, and dance before embracing technology for its ability to liberate storytelling. In 2012, he joined Magic Leap as director of content development, where he helped pioneer mixed-reality hardware. Four years later, in 2016, he founded a mixed-reality studio called Tin Drum, bought 400 Magic Leap headsets, which he owns to this day, and set out to change what theater could be. First came The Life, a mixed-reality project with his long-time partner, the artist Marina Abramović. Then came Kagami, an ethereal mixed-reality concert by the Japanese composer and pianist Ryuichi Sakamoto, who collaborated with Eckert to create the show before he passed away. Kagami, which premiered at The Shed in 2023 and has since toured globally, was so dazzling that I wept when I attended a performance in Manhattan. Eckert is immensely proud of the work, but he says An Ark was even more ambitious. "Nobody had ever captured four people simultaneously," he says of the underlying technology.

[Photo: Tin Drum]

The team gathered in London, where they rehearsed An Ark like you would rehearse a traditional play, from beginning to end, with no interruption. Then they flew to Grenoble, in southeastern France, where 4DViews, the company that designed the volumetric video system that can capture all four actors in full 3D, is headquartered.
In Grenoble, they filmed the play under the scrutiny of 48 cameras, including a cluster of two cameras that stood in for the eventual audience members. "We ultimately got three full takes," Eckert recalls of the shoot, which took place in an entirely green room he's previously likened to "Kermit land." After three months of data processing, the play was ready for opening night.

[Photo: Marc J. Franklin/courtesy The Shed]

What's next for theater?

Theater is becoming an increasingly endangered art form. Since the pandemic, audiences have been slower to return to in-person performances, production costs have climbed, and public funding has shrunk. Across the country, regional theaters have been cutting back seasons and are still struggling to recover, while Broadway budgets now routinely reach into the tens of millions. As a result, ticket prices have risen, often putting live theater out of reach for younger audiences and first-time attendees.

"There is an entire community of people who feel art is not being made for them," says Daniel Sherman, a San Francisco-based artist who has been producing theater since 2010, and who also recently finished a play in mixed reality (though it hasn't been staged yet). "If we can add a tech component, and meet people where they are, maybe this could be the thing that brings in younger audiences," he says.

One of the obvious promises of mixed-reality technology is that it could make theater more accessible. With no actors to tour, no sets to build or transport, and far fewer recurring labor and logistics costs tied to global touring, a mixed-reality play should be a lot more affordable than a traditional production. (A ticket for An Ark costs around $45.) There are other benefits, too. As producers around the world continue to rethink the genre, technology is increasingly being used not as a cost-cutting tool but as a way to stretch what theater can do.
In the Broadway production of The Picture of Dorian Gray, director Kip Williams used live video capture to allow a single performer (Sarah Snook) to inhabit multiple characters at once in ways that would be difficult to achieve through traditional staging alone. And in Briar & Rose, an augmented-reality children's play that ran across Europe, Glitch studio combined physical performance with augmented-reality technology, placing audiences inside a layered narrative space rather than in front of a fixed stage.

[Photo: Marc J. Franklin/courtesy The Shed]

Still, some have been skeptical of technology's potential for years. Sarah Frankcom, An Ark's very own director, used to be one of them. In fact, when Eckert first approached her, she refused the job, arguing, as Eckert recalls, that she was not interested in technology; she was interested in humans in a room. What made her change her mind? She experienced Kagami through the glasses. "I was intrigued by how it put an audience in a different relationship to a live experience and the possibilities of its intimacy," she told me in an email. "I was excited by the way it could summon up a communal experience." Frankcom says that working with this particular technology has reframed her ideas of what theater could be. "This feels like the beginning of a new form," she wrote. "And whilst there is no live acting in a traditional sense, I've been very struck by how much an audience interact with the actors and how they laugh, cry and reach to hold their hands."

[Photo: Marc J. Franklin/courtesy The Shed]

What do we gain and what do we lose with technology?

Is a play still a play if there are no live actors on stage? Perhaps that's a matter of semantics. Or perhaps it helps to consider a definition of theater that doesn't focus on the physicality of the experience, but rather on the emotions it conjures up. The technology promises cinematic realism, and it mostly delivers.
While some glitches made the actors' arms and feet flicker and stretch into their surroundings (glitches Eckert says he could fix if he had unlimited funds), their faces looked as real as they could through a pair of eyeglasses. The team also fine-tuned the distance between the actors and audience members so the experience feels as intimate as it would in real life. (You can't ever see all four actors at the same time, forcing you to turn your head to stay engaged.) But there is only so much realism to conjure when all it takes to break the spell is to peek underneath the glasses and see a room full of bespectacled people staring into nothing.

[Photo: Marc J. Franklin/courtesy The Shed]

I like to think I would have felt the story in my bones if only the actors had delivered it to me in real life. But I will never be able to put my theory to the test, because this exact play, in this exact configuration, could never be performed without technology. "What can we do that's not possible in any other way?" Eckert first wondered when brainstorming what the play could be with Stephens. The idea, he says, was never to supplant traditional theater but rather to broaden its potential, and having actors of such great caliber address audience members in such an intimate setting accomplishes just that. "Art, I think, is ultimately a way of making sense of things that don't make sense," Eckert told me after the show. If Ian McKellen ushered me off stage to guide me into the afterlife, it would not make sense without a healthy suspension of disbelief. But here, in the hazy world that only mixed reality can afford, it does.


Category: E-Commerce

 


2026-01-26 10:30:00| Fast Company

For decades, people with disabilities have relied on service dogs to help them perform daily tasks like opening doors, turning on lights, or alerting caregivers to emergencies. By some estimates, there are 500,000 service dogs in the U.S., but little attention has been paid to the fact that these dogs have been trained to interact with interfaces made for humans. A team of researchers from the United Kingdom wants to change that by designing accessible products for, and with, dogs.

The Open University's Animal-Computer Interaction Laboratory in the UK was founded in 2011 to help promote the art and science of designing animal-centered systems. Led by Clara Mancini, a professor of animal-computer interaction, the lab studies how animals interact with technology and develops interactive systems designed to improve their wellbeing and support their relationships with humans.

[Video: The Open University]

The team's first commercially available product is a specially designed button that service dogs can press to turn on corresponding appliances at home, like a lamp, a kettle, or a fan. The Dogosophy Button took more than ten years to develop and was tested with about 20 dogs from the UK charity Dogs for Good. It gives dogs more control over certain aspects of their home, which can make training them easier and further strengthen the bond between a human and their dog. It's also taught the team a few lessons about how to design for humans. "I am now a better human designer," says Luisa Ruge, an industrial designer who worked with Mancini and led the design of the button. For now, the Dogosophy Button is only available for purchase in the UK (for about $130).

[Photo: The Open University]

The challenges of designing for animals

Anyone who's ever designed a product for a human client knows the process relies on a perfect storm of variables like gender, age, background, and personal preferences.
But these designers also have one advantage they likely take for granted: they can ask their client what they think at every step of the way. Getting feedback from a dog is much harder and requires an understanding of animal behavior. "There's a lot of iteration," says Ruge, "and a huge ethical and reflective component, because I can't be a dog. I don't [feel] what they feel."

Ruge began her career as an industrial designer, but as she moved up the corporate ladder, she realized she was fascinated with animals. Her interest led her to train as a service dog trainer at Bergin College of Canine Studies in California. "One of the ways to bond is we had to be tied to our dog with a carabiner and leash for 8 days, 24/7," she recalls. Later, she attended a conference on human behavior change for animal welfare, where she met Mancini and became interested in her lab. Ruge immediately enrolled in a PhD at The Open University and spent the next three years writing a thesis on designing for the animal user experience and proving out her dog-centered methodology.

Ruge followed the five human factors model, a method that helps designers understand the end user's behavior by breaking down the UX into five factors. The typical list includes physical, cognitive, social, cultural, and emotional factors, but Ruge added a sixth, sensory, and then later a seventh: consent. To understand the exact characteristics and abilities she had to design for, she focused on Labrador Retrievers and Golden Retrievers, as these are the most common breeds for service dogs. Her research led to various correlations that informed the design of the button. For example: since both breeds have long tails, the button should not feature sensors that might accidentally be activated by a tail. Since both breeds are predisposed to hip dysplasia and joint problems, the button should also not be designed in a way that requires jumping to activate.
And since all dogs see the world in hues of yellow, blue, and brown, the button should be made in one of these colors so it is easy to perceive.

[Video: The Open University]

When Ruge first got involved, the prototype Mancini had developed was square in shape and looked a bit like the standard metallic button that people in wheelchairs can press to open a door. Now, after about 20 iterations and five prototypes, the button is round, convex, and blue. It is textured to prevent a dog's wet snout from sliding on it, and its push depth is such that a more timid dog shouldn't have to press hard to activate it. Ruge had to test some of her designs the hard way. The first prototype she ever made took days to develop, and the dogs destroyed it "in two seconds," she recalls with a laugh. But dogs don't know that a prototype should be handled with care. To them, a work-in-progress product looks no different than a finished product.

Animal design as a discipline

Designing for dogs humbled Ruge's assumptions. "It lets you know you're never 100% right," she says, adding that the only way to confirm her theories was through extensive testing and observation. It also made her a better designer for humans, because she learned to better spot her biases and assumptions. "Sometimes, I'm assuming you feel a handle like I do, and you don't," she says. In the end, though, animal design is where Ruge's passion lies. Since earning her PhD, she has moved back to her native Colombia and started a design consultancy called Ph-auna (pronounced "fauna"), where she focuses on animal-centered innovation. She hosts a podcast called Pomodogo, guiding humans to better connect with their dogs, and is now working on an app that gamifies dog training and inspires humans to be better caretakers. "There's an immense opportunity for animal design to be its own design discipline," she says.
Meanwhile, in the UK, the Dogosophy Button is available to individual customers willing to buy it, but the team is hoping to broaden its scope beyond the home. Mancini, who spearheaded the button project, says they first installed an earlier version of the button to operate the motorized door of a restaurant's accessible toilet, but the restaurant ended up shuttering. Then they tried installing it at a local shopping mall, but the plan fell through due to budget constraints. Still, she plans to continue developing new versions and to adapt them to the characteristics of other species, too. "It is my interest to try and install the buttons in public buildings," she says. "I would love for whole cities to be more accessible for dogs and other urban animals."



2026-01-26 10:00:00| Fast Company

On my phone, there are already videos of the next moon landing. In one, an astronaut springs off the rung of a ladder strung out from the lander before slowly plopping to the surface. He is, alas, still getting accustomed to the weaker gravity. In another, the crew collects a sample, a classic lunar expedition activity, while another person lazily minds the rover. A third video shows an astronaut affixing the American flag to the ground, because this act of patriotism is even better the second time around. The blue oceans of Earth are visible in the background, and a radio calls out: "Artemis crew is on the surface."

America is going back to the moon, and NASA is in the final weeks of preparing for the Artemis II mission, which will have astronauts conduct a lunar flyby for the first time in decades. If all goes well, during the next endeavor, Artemis III, they'll finally land on the lunar surface, marking an extraordinary, historic, and in some sense nostalgic accomplishment. The aforementioned videos are not advance copies or some vision of the future, though. They were generated with OpenAI's video generation model and are extremely fake.

Still, this kind of content is a reminder that the upcoming Artemis missions promise a major epistemic test for deniers of the original moon landing. This is a small but passionate and enduring community who doubt the Apollo moon landing for a host of reasons, including that (they allege) the government lied or (they believe) it is simply physically impossible for humans to go to the moon. Now, when NASA returns to the lunar surface, these people will be confronted with far more evidence than the last time around. The space agency's operation will be broadcast live, with camera technology and social media platforms that just weren't around in the 1960s. But there's also a bigger challenge before us.
NASA will be launching its moon return effort in a period of major distrust in American scientific and government institutions and, amid the proliferation of generative AI, declining confidence in the veracity of digital content. Most observers will be able to sort through the real NASA imagery and anything fake that might show up. Still, there tends to be a small number of people who doubt these kinds of milestones, especially when a U.S. federal agency is involved.

Adding AI to the conspiracy theory cocktail

When the first moon landing happened, AI wasn't a thing. "The sophistication of [the landing] didn't necessarily make us question it," says Daniel Jolley, a professor at the University of Nottingham who studies conspiracy theories. "But now, with the power of AI and the power of images that you can create, it certainly offers that different reality if you want to interpret it in that way. It's the trust in those sources that we need to kind of really create. Of course, if you haven't got trust in our gatekeepers and you don't trust scientists, well, suddenly you are going to lean into: well, this, is this real? Is this just AI?" he continues.

The upcoming Artemis missions aren't yet a major topic among lunar landing deniers. But there are hints they will attract more attention from conspiracy theorists. During the last Artemis mission, which was unmanned, Reuters had to push back on online posts suggesting the expedition proved that Apollo 11 didn't actually happen. (Skeptics suggested that Artemis I's longer mission timeline, a product of a change in route, cast doubt on the original Apollo timeline.) Other online skeptics have already suggested that, with Artemis, NASA is yet again faking a space endeavor. Some people in internet conspiracy communities suggest the upcoming moon missions will be entirely CGI (computer-generated imagery).
Generative AI stands to introduce even more confusion, says Ben Colman, the CEO of Reality Defender, a deepfake detection platform. Generating a believable image of a (fake) moon landing is now something any consumer can do. "Any astute physicist will be able to tell you if these videos get star placement or physics wrong, as they are likely to do," he says, "but even that is getting better with each model iteration."

Conspiracy theories are sticky

There are, of course, many reasons why people say they deny the reality of the first lunar expeditions. There are canonical, misinterpreted references, like the Van Allen belts, a zone of energetic charged particles that surrounds the planet (critics say the belts are too radioactive for manned vehicles to traverse), and the suspicious flag-in-the-wind (there's no wind on the moon!). All of these points, and the many other points deniers bring up, have been thoroughly debunked. Still, this small community of self-appointed detectives is insistent. Even decades after the missions ended, people are still combing through NASA's videos and images, mining for signs of alterations or other surreptitious editing. To them, an unexpected shimmer reveals a film operation just beyond the view of the camera. A movement that might not look right is a hint that the world has been duped. Open source intelligence (OSINT) becomes the rabbit hole.

"Some allege we didn't go to the moon, perhaps because we were trying to trick the Soviets into thinking that we had superior technology than they did," explains Joseph Uscinski, a political scientist at the University of Miami who also studies conspiratorial beliefs. "Some people think we did go but it wasn't televised. And that footage that we saw was made later in a sound studio." Some people think Stanley Kubrick was in charge of filming the faked moon landing footage. For its part, NASA is preparing to point to evidence, should any deepfake allegations come its way.
Agency spokesperson Lauren Low tells Fast Company: "We expect AI experts will be looking closely at all our images and will be able to verify they are real images taken by real astronauts as part of the Artemis II test flight around the Moon." Moreover, Low added, there will be many ways for people to watch the lunar flyby themselves, including live broadcasts, two 24/7 YouTube streams, a news conference, and views from Orion cameras. In other words, the reality of Artemis will be very hard to deny.

Research suggests that conspiracy theories are entertaining and even serve people's core psychological needs, like a desire to understand the world or a way of dealing with uncertainty. Finding other people, including on social media, pushing these theories can help normalize them and make someone feel like they're part of a broader community. Some people simply don't trust institutions, and evidence that something did, indeed, happen only raises further questions, and suspicions that it didn't. To an extent, politics matters, too; people outside the United States are more likely to deny the moon landing, polls show.

In the end, says Uscinski, we should prepare for people who are prone to conspiratorial thinking, or prone to mistrusting institutions, to take a skeptical view of any big news event. This may happen again when the Artemis missions finally launch. "The good news is that belief in conspiracy theories isn't likely to get worse," he explains. "The bad news is that this conspiratorial thinking has always been this pervasive." People are very good at waving away evidence that tells them things they don't want to hear, "and they're very good at believing things, either without evidence or with really shitty evidence, when it tells them what they do want to believe about the world," Uscinski adds. "You don't need AI or sophisticated technology to provide a justification."


