2026-02-19 14:00:00| Fast Company

Can AI help neurodivergent adults connect with each other? That's the bet of a new social network called Synchrony, which believes AI and a well-designed social network with the right safeguards can reduce social atomization and calm the overwhelming cacophony of socializing online. Launching February 19, the social network debuts at a moment when social media, chatbots, and doomscrolling have made digital communication a hot-button topic for parents.

"No other app for the neurodiverse is focusing primarily on reducing social anxiety and encouraging friendship," says cofounder Jamie Pastrano. "I think that's the biggest piece of it, and no other app is focusing on building an authentic community."

Synchrony also has support from the Starry Foundation and Autism Speaks, two large U.S. advocacy groups, and approval from the Apple App Store. "I was really blown away by what they're trying to do," says Bobby Vossoughi, president of the Starry Foundation. "These kids are isolated and their social cues are off. They're creating something that could really change this community's lives for the long term."

A parenting challenge without a solution

The idea for Synchrony came from Pastrano, a former management consultant and executive sales leader, whose son, Jesse, 21, is autistic. As Jesse moved through his teenage years, Pastrano became frustrated with the challenges she saw him facing around the friendship gap; she saw him as a social kid, but planning, timing, even saying the appropriate thing often tripped him up. Unlike other challenges she'd faced as the mother of a neurodivergent child, this one didn't seem to have a solution.

Research shows that people with autism or neurodevelopmental differences (roughly 1 in 5 people, according to the Neurodiversity Alliance) face increasing loneliness as they transition from adolescence to adulthood. New social responsibilities and expectations for life after school, combined with the loss of support systems that may have been embedded in secondary and university education, can lead to isolation.

One of the cofounders, Brittany Moser, an autism specialist who teaches at Park University in Missouri, says that she's held crying students who, forced to operate in a world that's not built for them, are desperate for social connection. She hopes this network can foster it. "Autism doesn't end at 18," Pastrano says. "There was this huge gap in services to support social, emotional, and community needs."

Pastrano sold her company in 2024 and devoted herself to solving the issue with what would become Synchrony. Part of her inspiration came from reality television: the dating show Love on the Spectrum piqued her interest, causing her to think not about romance but about connection, friendship, and community. She even contacted a coach on the show, who suggested she get certified through the PEERS program at UCLA, which teaches social and dating skills to young adults on the spectrum.

[Image: Synchrony]

Broadly speaking, Synchrony is built with the same digital infrastructure as a dating site, but is meant to foster friendships among a unique population. A big part of the design challenge was making sure it was suitable for the audience, and wasn't too distracting or loud. "Profiles focus much more on interests," Pastrano says, since interests weigh much more heavily as a reason to communicate among this population.
There's also a space to list neurodiversity classifications and communication styles and preferences ("I prefer text to phone calls," or "I take a few days to reply," etc.) as part of the effort to front-load key details. Simplified menus and colors, and the absence of ads, help reduce distractions. Pastrano also wants to respect the community and focus on healthy experiences rather than push for rapid growth; users pay a monthly fee of $44.99 after a free 30-day trial, allowing the network to avoid advertisements. Part of the registration process includes two-step verification (both the user and a trusted person, such as a teacher, doctor, or parent, must input personal details and a photo ID) to make sure bad actors outside the community aren't given access.

Social Coach, or 'Seductive Cul-de-sac'

Part of Synchrony's strategy is the use of Jesse (named after Pastrano's son), marketed as an AI-powered social support tool that goes far beyond chat-assist technology. By providing real-time conversation support, the chatbot aims to overcome social anxiety and a lack of confidence around socialization. Talking with Jesse online, its developers claim, will bolster users' self-assurance and communication skills, gains that will eventually carry over into real life.

When Synchrony users get stuck in an online conversation, they can tap an icon to summon Jesse, who will provide editable suggestions for advancing or ending an interaction. The AI coach offers three main options: a tool to help users express themselves, offering ways to continue the conversation; a button that helps parse the conversation to better understand what happened, such as whether something was meant as flirty or friendly; and a final option to protect, suggesting ways to set boundaries and exit a conversation quietly.

Built using a large language model on Amazon Bedrock and trained by Synchrony staff, Jesse constantly scans conversations so that it can provide social coaching when asked.

The use of AI among the neurodivergent population has sparked the same debates as the technology's use among the population at large. Research by a team at Stanford found that an AI chatbot they developed called Noora, designed to improve communication skills, can improve empathy among users with autism. Some members of the community have claimed AI coaches have helped them with relationships and transformed their lives. At the same time, some advocacy groups have warned that chatbots' emotional manipulation can be more severe for the neurodiverse, and some researchers are concerned AI might reinforce bad communication habits. British researcher Chris Papadopoulos sums up the state of play in a recent paper, concluding that while the technology holds the potential to democratize companionship, left unchecked, "AI companions could become a seductive cul-de-sac, capturing autistic people in artificial relationships that stunt their growth or even lead them into harm's way."

Amid awareness of the sometimes destructive and even deadly consequences of chatbot use, there are significant guardrails built into Jesse, says Moser, including a long list of activities and actions to avoid, such as sharing personal addresses. Jesse is also told not to dispense medical advice; Jesse is not a therapist, and as the founders are careful to note, this isn't a clinical app. If users start asking Jesse about off-topic concepts, Moser says, it is programmed to reply with something to the effect of, "Hmm, I don't know if that's really going to help you connect with the other members."
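The article stops at the service level, but the general shape of such a system (a Bedrock-hosted model steered by a system prompt that encodes the coaching modes and guardrails described above) can be sketched with AWS's boto3 SDK. This is a minimal illustration under stated assumptions: the model ID, mode names, prompt text, and coach function below are invented for this sketch, not Synchrony's actual implementation.

# Hypothetical sketch of a guardrailed coaching call on Amazon Bedrock.
# The system prompt, mode names, and model choice are illustrative
# assumptions, not Synchrony's code.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

GUARDRAILS = (
    "You are a supportive social-communication coach, not a therapist. "
    "Never give medical or clinical advice. Never encourage sharing "
    "addresses or other personal details. If the user goes off topic, "
    "gently redirect them toward connecting with other members."
)

# Loosely mirrors the three options the article describes.
MODES = {
    "express": "Suggest two or three editable ways to continue this conversation.",
    "understand": "Explain what likely happened in this exchange, e.g. whether a message reads as flirty or friendly.",
    "protect": "Suggest polite ways to set a boundary and exit the conversation quietly.",
}

def coach(conversation: str, mode: str) -> str:
    """Send the current conversation plus a mode instruction to the model."""
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # any Bedrock chat model
        system=[{"text": GUARDRAILS + " Task: " + MODES[mode]}],
        messages=[{"role": "user", "content": [{"text": conversation}]}],
        inferenceConfig={"maxTokens": 300, "temperature": 0.4},
    )
    return response["output"]["message"]["content"][0]["text"]

Bedrock also offers a managed Guardrails feature (denied topics, content filters) that can be attached to such calls, which would be a natural platform-level fit for the avoid list Moser describes, though the article doesn't say whether Synchrony uses it.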
There will also be warnings if someone is spending too much time just talking with Jesse, and Synchrony is launching with human moderation to provide extra safeguards.

Lynn Koegel, a professor and researcher at Stanford University who has studied autism and technology, says her team has spent time updating and changing its Noora models to make sure the chatbot isn't too harsh, for example by failing to reinforce communication attempts or being too strict about grammar. She says it's very important to do more in-depth studies and clinical research to make sure these tools work well and as intended (she has not seen or tested Synchrony). "My gut feeling is these tools can be very good support," she says. "The jury is out about whether individual programs that haven't been tested can be assistive."

As the Synchrony team works out bugs and final design issues before launch, the challenge becomes building a community robust enough to drive organic growth. Early user testing that started in December, including an alpha test with 14 users and closed beta tests among university support groups for autistic students, helped the team refine the model and layout. The marketing strategy at launch doesn't focus on the users themselves, but rather on neurodiverse employer groups, universities with neurodiversity programs (which can create their own closed-loop campus versions of the app), advocates, and relevant podcast hosts.

"Success is about awareness and attention," says Pastrano. "It's not a numbers game for me. It's a really personal game."



 

LATEST NEWS

2026-02-19 13:41:00| Fast Company

If the thought of AI smart glasses annoys you, you're not alone. This week, the judge presiding over a historic social media addiction trial took a harsh stance on the AI-powered gadgets, which many bystanders find invasive of their privacy: Stop recording or face contempt of court. Here's what you need to know.

What's happened?

Yesterday, Meta CEO Mark Zuckerberg took the stand in a trial that many industry watchers say could have severe ramifications for social media giants, depending on how it turns out. At the heart of the trial is the question of whether social media companies like Meta, via its Facebook and Instagram platforms, purposely designed those platforms to be addictive. Since the trial began, many Big Tech execs have given testimony, and yesterday it was Zuckerberg's turn. But while Zuckerberg was there to talk about his legacy products (Facebook and Instagram, particularly), for a brief moment the presiding judge in the case, Carolyn B. Kuhl, turned her attention to a newer Meta product: the company's Ray-Ban Meta AI Glasses.

Judge warns AI smart glasses wearers

According to multiple reports, at one point during yesterday's trial, Judge Kuhl took a moment to issue a stark warning to anyone wearing AI glasses in the courtroom: stop recording with them and delete the footage, or face contempt. Many courts generally forbid recording during trials, though there are exceptions. However, while the judge did seem worried about recording in general, she also had another concern: the privacy of the jury.

"If your glasses are recording, you must take them off," the judge said, per the Los Angeles Times. "It is the order of this court that there must be no facial recognition of the jury. If you have done that, you must delete it. This is very serious."

Currently, Meta's AI glasses do not include the ability to identify the names of the people a wearer views through them, but that's not likely what the judge meant in her concerns about facial recognition. Instead, she was likely concerned that video recorded by the AI glasses could later be run through external facial recognition software to identify the jury. Some of Meta's AI glasses can record video clips up to three minutes long.

From reports, it does not appear that the judge singled out any specific individuals in the courtroom, but CNBC reports that ahead of Zuckerberg's testimony, members of his team, escorting him into the building, were spotted wearing Meta Ray-Ban artificial intelligence glasses. As the LA Times reported, the judge's admonition was met with silence in the courtroom.

Broader social concerns over AI glasses

The privacy of jurors is critical for fair and impartial trials, as well as for jurors' own safety. Given that, it's no surprise that the judge did not mince words when warning about AI glasses recording. But the judge's courtroom concerns also mirror many people's broader concerns over AI glasses: people are worried about wearers violating their privacy, either by recording them or by using facial recognition to identify them. This concern first became evident more than a decade ago after Google introduced its now-failed smart glasses, Google Glass. Wearers of the device soon became known as "glassholes" due to what many bystanders felt was its intrusive nature. When talking to a person wearing smart glasses, you can never be sure you aren't being recorded, and that freaks people out.
That apprehension about smart glasses has not gone away in the years since Google Glass's demise. Modern smart glasses are much more capable and concealed, and at the same time, everyday consumers are more concerned about their privacy than ever. These privacy concerns will continue to be a major hurdle to AI smart glasses adoption, especially as manufacturers, including Meta, reportedly plan to add facial recognition features in the future. Meta's glasses come with an indicator light that glows when the wearer is recording, although the internet is full of explainers on how to disable it. The judge's admonishment of AI glasses wearers in the courtroom yesterday won't help the devices' already strained reputation.



 

2026-02-19 13:00:00| Fast Company

Generative AI has rapidly become core infrastructure, embedded across enterprise software, cloud platforms, and internal workflows. But that shift is also forcing a structural rethink of cybersecurity: the same systems driving productivity and growth are emerging as points of vulnerability.

Google Cloud's latest AI Threat Tracker report suggests the tech industry has entered a new phase of cyber risk, one in which AI systems themselves are high-value targets. Researchers from Google DeepMind and the Google Threat Intelligence Group have identified a steady rise in model extraction, or distillation, attacks, in which actors repeatedly prompt generative AI systems in an attempt to copy their proprietary capabilities. In some cases, attackers flood models with carefully designed prompts to force them to reveal how they think and make decisions. Unlike traditional cyberattacks that involve breaching networks, many of these efforts rely on legitimate access, making them harder to detect and shifting cybersecurity toward protecting intellectual property rather than perimeter defenses. Researchers say model extraction could allow competitors, state actors, or academic groups to replicate valuable AI capabilities without triggering breach alerts. For companies building large language models, the competitive moat now extends to the proprietary logic inside the models themselves.

The report also found that state-backed and financially motivated actors from China, Iran, North Korea, and Russia are using AI across the attack cycle. Threat groups are deploying generative models to improve malware, research targets, mimic internal communications, and craft more convincing phishing messages. Some are experimenting with AI agents to assist with vulnerability discovery, code review, and multi-step attacks.

John Hultquist, chief analyst at Google Threat Intelligence Group, says the implications extend beyond traditional breach scenarios. Foundation models represent billions in projected enterprise value, and distillation attacks could allow adversaries to copy key capabilities without breaking into systems. The result, he argues, is an emerging cyber arms race, with attackers using AI to operate at machine speed while defenders race to deploy AI that can identify and respond to threats in real time.

Hultquist, a former U.S. Army intelligence specialist who helped expose the Russian threat actor known as Sandworm and now teaches at Johns Hopkins University, tells Fast Company how AI has become both a weapon and a target, and what cybersecurity looks like in a machine-versus-machine future.

AI is shifting from being merely a tool used by attackers to a strategic asset worth replicating. What has changed over the past year to make this escalation structurally and qualitatively different from earlier waves of AI-enabled threats?

AI isn't just an enabler for threat actors. It's a new, unique attack surface, and it's a target in itself. The biggest movements we will see in the immediate future will be actors adopting AI into their existing routines, but as we adopt AI into the stack, they will develop entirely new routines focused on the new opportunity. AI is also an extremely valuable capability, and we can expect the technology itself to be targeted by states and commercial interests looking to replicate it.

The report highlights a rise in model extraction, or distillation, attacks aimed at proprietary systems. How do these attacks work?
Distillation attacks are when someone bombards a model with prompts to systematically replicate its capabilities. In Google's case, someone sent Gemini more than 100,000 prompts to probe its reasoning capabilities in an apparent attempt to reverse-engineer its decision-making structure. Think of it like when you're training an analyst and trying to understand how they came to a conclusion: you might ask them a whole series of questions in an effort to reveal their thought process.

Where are state-sponsored and financially motivated threat groups seeing the most immediate operational gains from AI, and how is it changing the speed and sophistication of their day-to-day attack workflows?

We believe adversaries see the value of AI in day-to-day productivity across the full spectrum of their attack operations. Attackers are increasingly using AI platforms for targeting research, reconnaissance, and social engineering. For instance, an attacker targeting a particular sector might research an upcoming conference and use AI to interpret and highlight themes and interest areas that can then be integrated into phishing emails for a specific targeted organization. This type of adversarial research would usually take a long time, to gather data, translate content, and understand localized context for a particular region or sector. But using AI, an adversary can accomplish hours' worth of work in just a few minutes.

Government-backed actors from Iran, North Korea, China, and Russia are integrating AI across the intrusion lifecycle. Where is AI delivering the greatest operational advantage today, and how is it accelerating the timeline from initial compromise to real-world impact?

Generative AI has been used in social engineering for eight years now, and it has gone from making fake photos for profiles to orchestrating complex interactions and deepfaking colleagues. But there are so many other advantages to the adversary: speed, scale, and sophistication. Even a less experienced hacker becomes more effective with tools that help troubleshoot operations, while more advanced actors may gain faster access to zero-day vulnerabilities. With these gains in speed and scale, attackers can operate inside traditional patch cycles and overwhelm human-driven defenses.

It is also important not to underestimate the criminal impact of this technology. In many applications, speed is actually a liability for espionage actors, who work very hard to stay low and slow, but it is a major asset for criminals, especially since they expect to alert their victims when they launch ransomware or threaten leaks.

We're beginning to see early experimentation with agentic AI systems capable of planning and executing multi-step campaigns with limited human intervention. How close are we to truly autonomous adversaries operating at scale, and what early signals suggest threat velocity is accelerating?

Threat actors are already using AI to gain scale advantages. We see them using AI to automate reconnaissance and social engineering. They are using agentic solutions to scan targets with multiple tools, and we have seen some actors reduce the laborious process of developing tailored social engineering. From our own work with tools such as BigSleep, we know that AI agents can be extremely effective at identifying software vulnerabilities, and we expect adversaries to be exploring similar capabilities.
At a strategic level, are we moving toward a default machine-versus-machine era in cybersecurity? Can defensive AI evolve fast enough to keep pace with offensive capabilities, or has cyber resilience now become inseparable from overall AI strategy?

We are certainly going to lean more on the machines than we ever have, or risk falling behind others that do. In the end, though, security is about risk management, which means human judgment will have to be involved at some level. I'm afraid that attackers may have some advantages when it comes to adapting quickly. They won't have the same bureaucracies to manage or face the same risks. If they take a chance on some new technique and it fails, that won't significantly cost them. That will give them greater freedom to experiment. We are going to have to work hard to keep up with them. But if we don't try and don't adopt AI-based solutions ourselves, we will certainly lose. I don't think there is any future for defenders without AI; it's simply too impactful to be avoided.
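Hultquist's analyst analogy maps onto a textbook machine-learning technique: knowledge distillation, in which a "student" model is trained to imitate a "teacher's" outputs rather than learn from original labeled data. The sketch below shows the classic soft-label formulation (Hinton et al.) to illustrate why query access alone is enough to copy behavior. Note the assumptions: an attacker hitting a commercial API would see only text completions, not logits, and would fine-tune on prompt-response pairs instead; this is the generic technique, not the tooling described in the report.

# Textbook knowledge-distillation step: the student matches the
# teacher's softened output distribution. Query access to the teacher
# is all that's required; its weights are never touched.
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, inputs, optimizer, T=2.0):
    with torch.no_grad():                    # teacher is query-only
        teacher_logits = teacher(inputs)
    student_logits = student(inputs)
    # KL divergence between temperature-softened distributions; the
    # T*T factor keeps gradient magnitudes comparable across temperatures.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()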
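On the defensive side, the report doesn't publish detection logic, but the 100,000-prompt Gemini episode suggests one obvious primitive: per-account anomaly detection on prompt volume and prompt "templatedness." The heuristic below is a toy illustration; the thresholds and features are invented for this sketch, not Google's method.

# Toy heuristic for flagging extraction-style usage: very high prompt
# volume combined with systematic, templated prompt structure.
# Thresholds and features are invented for illustration.
from collections import Counter

def looks_like_extraction(prompts: list[str],
                          volume_threshold: int = 10_000,
                          template_share: float = 0.5) -> bool:
    if len(prompts) < volume_threshold:
        return False
    # Share of prompts that begin with one of the account's three most
    # common five-word openings: a crude proxy for systematic probing.
    openings = Counter(" ".join(p.split()[:5]) for p in prompts)
    top = sum(count for _, count in openings.most_common(3))
    return top / len(prompts) >= template_share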

