2026-02-19 13:41:00| Fast Company

If the thought of AI smart glasses annoys you, you're not alone. This week, the judge presiding over a historic social media addiction trial took a harsh stance on the AI-powered gadgets, which many bystanders find invasive of their privacy: stop recording or face contempt of court. Here's what you need to know.

What's happened?

Yesterday, Meta CEO Mark Zuckerberg took the stand in a trial that many industry watchers say could have severe ramifications for social media giants, depending on how it turns out. At the heart of the trial is the question of whether social media companies like Meta, via its Facebook and Instagram platforms, purposely designed those platforms to be addictive. Since the trial began, many Big Tech execs have taken the stand to give testimony, and yesterday it was Zuckerberg's turn. But while Zuckerberg was there to talk about his legacy products (Facebook and Instagram, particularly), for a brief moment the presiding judge in the case, Judge Carolyn B. Kuhl, turned her attention to a newer Meta product: the company's Ray-Ban Meta AI Glasses.

Judge warns AI smart glasses wearers

According to multiple reports, at one point during yesterday's trial, Judge Kuhl issued a stark warning to anyone wearing AI glasses in the courtroom: stop recording with them and delete the footage, or face contempt. Courts generally forbid recording during trials, though there are exceptions. But while the judge did seem to be worried about recording in general, she also had another concern: the privacy of the jury.

"If your glasses are recording, you must take them off," the judge said, per the Los Angeles Times. "It is the order of this court that there must be no facial recognition of the jury. If you have done that, you must delete it. This is very serious."

Currently, Meta's AI glasses do not include the ability to identify the names of the people a wearer views through them, but that's not likely what the judge meant in her concerns about facial recognition. Instead, the judge was likely concerned that video recorded by the AI glasses could later be run through external facial recognition software to identify the jury. Some of Meta's AI glasses can record video clips up to three minutes long.

From reports, it does not appear that the judge singled out any specific individuals in the courtroom, but CNBC reports that ahead of Zuckerberg's testimony, members of his team escorting him into the building were spotted wearing Meta Ray-Ban artificial intelligence glasses. As the LA Times reported, the judge's admonition was met with silence in the courtroom.

Broader social concerns over AI glasses

The privacy of jurors is critical for fair and impartial trials, as well as for the jurors' own safety. Given that, it's no surprise that the judge did not mince words when warning about AI glasses recording. But the judge's courtroom concerns also mirror many people's broader concerns over AI glasses: people are worried about wearers violating their privacy, either by recording them or by using facial recognition to identify them. This concern first became evident more than a decade ago, after Google introduced its now-failed smart glasses, Google Glass. Wearers of the device soon became known as "glassholes" due to what many bystanders felt was their intrusive nature. When talking to a person wearing smart glasses, you can never be sure you aren't being recorded, and that freaks people out.

That apprehension about smart glasses has not gone away in the years since Google Glass's demise. Modern smart glasses are much more capable and much less conspicuous. At the same time, everyday consumers are more concerned about their privacy than ever. These privacy concerns will continue to be a major hurdle to AI smart glasses adoption, especially as manufacturers, including Meta, reportedly plan to add facial recognition features in the future. Meta's glasses come with an indicator light that glows when the wearer is recording, although the internet is full of explainers on how to disable it. The judge's admonishment of AI glasses wearers in the courtroom yesterday won't help the devices' already strained reputation.


Category: E-Commerce

 


2026-02-19 13:00:00| Fast Company

Generative AI has rapidly become core infrastructure, embedded across enterprise software, cloud platforms, and internal workflows. But that shift is also forcing a structural rethink of cybersecurity: the same systems driving productivity and growth are emerging as points of vulnerability. Google Cloud's latest AI Threat Tracker report suggests the tech industry has entered a new phase of cyber risk, one in which AI systems themselves are high-value targets.

Researchers from Google DeepMind and the Google Threat Intelligence Group have identified a steady rise in model extraction, or distillation, attacks, in which actors repeatedly prompt generative AI systems in an attempt to copy their proprietary capabilities. In some cases, attackers flood models with carefully designed prompts to force them to reveal how they think and make decisions. Unlike traditional cyberattacks that involve breaching networks, many of these efforts rely on legitimate access, making them harder to detect and shifting cybersecurity toward protecting intellectual property rather than perimeter defenses. Researchers say model extraction could allow competitors, state actors, or academic groups to replicate valuable AI capabilities without triggering breach alerts. For companies building large language models, the competitive moat now extends to the proprietary logic inside the models themselves.

The report also found that state-backed and financially motivated actors from China, Iran, North Korea, and Russia are using AI across the attack cycle. Threat groups are deploying generative models to improve malware, research targets, mimic internal communications, and craft more convincing phishing messages. Some are experimenting with AI agents to assist with vulnerability discovery, code review, and multi-step attacks.

John Hultquist, chief analyst at Google Threat Intelligence Group, says the implications extend beyond traditional breach scenarios. Foundation models represent billions in projected enterprise value, and distillation attacks could allow adversaries to copy key capabilities without breaking into systems. The result, he argues, is an emerging cyber arms race, with attackers using AI to operate at machine speed while defenders race to deploy AI that can identify and respond to threats in real time.

Hultquist, a former U.S. Army intelligence specialist who helped expose the Russian threat actor known as Sandworm and now teaches at Johns Hopkins University, tells Fast Company how AI has become both a weapon and a target, and what cybersecurity looks like in a machine-versus-machine future.

AI is shifting from being merely a tool used by attackers to a strategic asset worth replicating. What has changed over the past year to make this escalation structurally and qualitatively different from earlier waves of AI-enabled threats?

AI isn't just an enabler for threat actors. It's a new, unique attack surface, and it's a target in itself. The biggest movements we will see in the immediate future will be actors adopting AI into their existing routines, but as we adopt AI into the stack, they will develop entirely new routines focused on the new opportunity. AI is also an extremely valuable capability, and we can expect the technology itself to be targeted by states and commercial interests looking to replicate it.

The report highlights a rise in model extraction, or distillation, attacks aimed at proprietary systems. How do these attacks work?
Distillation attacks are when someone bombards a model with prompts to systematically replicate its capabilities. In Google's case, someone sent Gemini more than 100,000 prompts to probe its reasoning capabilities in an apparent attempt to reverse-engineer its decision-making structure. Think of it like training an analyst and trying to understand how they came to a conclusion: you might ask them a whole series of questions in an effort to reveal their thought process.

Where are state-sponsored and financially motivated threat groups seeing the most immediate operational gains from AI, and how is it changing the speed and sophistication of their day-to-day attack workflows?

We believe adversaries see the value of AI in day-to-day productivity across the full spectrum of their attack operations. Attackers are increasingly using AI platforms for targeting research, reconnaissance, and social engineering. For instance, an attacker who is targeting a particular sector might research an upcoming conference and use AI to interpret and highlight themes and interest areas that can then be integrated into phishing emails for a specific targeted organization. This type of adversarial research would usually take a long time to gather data, translate content, and understand localized context for a particular region or sector. But using AI, an adversary can accomplish hours' worth of work in just a few minutes.

Government-backed actors from Iran, North Korea, China, and Russia are integrating AI across the intrusion lifecycle. Where is AI delivering the greatest operational advantage today, and how is it accelerating the timeline from initial compromise to real-world impact?

Generative AI has been used in social engineering for eight years now, and it has gone from making fake photos for profiles to orchestrating complex interactions and deepfaking colleagues. But there are so many other advantages to the adversary: speed, scale, and sophistication. Even a less experienced hacker becomes more effective with tools that help troubleshoot operations, while more advanced actors may gain faster access to zero-day vulnerabilities. With these gains in speed and scale, attackers can operate inside traditional patch cycles and overwhelm human-driven defenses. It is also important not to underestimate the criminal impact of this technology. In many applications, speed is actually a liability to espionage actors, who work very hard to stay low and slow, but it is a major asset for criminals, especially since they expect to alert their victims when they launch ransomware or threaten leaks.

We're beginning to see early experimentation with agentic AI systems capable of planning and executing multi-step campaigns with limited human intervention. How close are we to truly autonomous adversaries operating at scale, and what early signals suggest threat velocity is accelerating?

Threat actors are already using AI to gain scale advantages. We see them using AI to automate reconnaissance operations and social engineering. They are using agentic solutions to scan targets with multiple tools, and we have seen some actors reduce the laborious process of developing tailored social engineering. From our own work with tools such as Big Sleep, we know that AI agents can be extremely effective at identifying software vulnerabilities, and we expect adversaries to be exploring similar capabilities.

At a strategic level, are we moving toward a default machine-versus-machine era in cybersecurity? Can defensive AI evolve fast enough to keep pace with offensive capabilities, or has cyber resilience now become inseparable from overall AI strategy?

We are certainly going to lean more on the machines than we ever have, or risk falling behind others that do. In the end, though, security is about risk management, which means human judgment will have to be involved at some level. I'm afraid that attackers may have some advantages when it comes to adapting quickly. They won't have the same bureaucracies to manage or face the same risks. If they take a chance on some new technique and it fails, that won't cost them much. That will give them greater freedom to experiment. We are going to have to work hard to keep up with them. But if we don't try and don't adopt AI-based solutions ourselves, we will certainly lose. I don't think there is any future for defenders without AI; it's simply too impactful to be avoided.
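To make the distillation pattern Hultquist describes concrete, here is a minimal illustrative sketch in Python of the collection loop such an attack relies on: repeatedly prompting a "teacher" model and logging prompt/response pairs that could later be used to fine-tune an imitating "student" model. The client class, model name, and output path are hypothetical placeholders for this article, not Google's tooling or any real provider's API.

import json

class HypotheticalChatClient:
    """Stand-in for a generic chat-completion client (assumed interface)."""
    def complete(self, model: str, prompt: str) -> str:
        # A real client would call a hosted model here; this sketch only
        # illustrates the shape of the loop described in the report.
        return f"[simulated response to: {prompt}]"

def collect_teacher_outputs(client, prompts, model="hypothetical-teacher", out_path="pairs.jsonl"):
    """Send probing prompts to the 'teacher' model and log prompt/response
    pairs, the raw material a distillation attack would use to train a copycat."""
    with open(out_path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            response = client.complete(model=model, prompt=prompt)
            f.write(json.dumps({"prompt": prompt, "response": response}) + "\n")

if __name__ == "__main__":
    probes = ["Explain step by step how you would triage a suspected phishing email."]
    collect_teacher_outputs(HypotheticalChatClient(), probes)

The defensive implication in the report is the mirror image of this loop: because every call looks like legitimate access, defenders have to watch for unusually large, systematic batches of probing prompts rather than for a network breach.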


Category: E-Commerce

 

2026-02-19 12:51:00| Fast Company

United Parcel Service (UPS) is planning to close dozens of packaging facilities this year, the shipping giant revealed in a court filing this week. The plans include shuttering facilities in Texas, Florida, Georgia, Maryland, and several other states, including locations with union employees, according to a docket made public as part of a lawsuit between UPS and the Teamsters union.

UPS revealed in January that it will cut 30,000 jobs over the coming year. The move was announced as its partnership with Amazon was winding down and amid a broader push toward automation. At the time, it also revealed plans to close 24 facilities in total, though it did not reveal the locations. Now the locations of 22 of those facilities have been made public. In the court filings, UPS said "the applicable Local Unions have been notified of these closures and informed of the anticipated impacts."

Which UPS package facilities are closing?

The facilities marked for closure are spread across more than 18 states. They appear below:

Jamieson Park facility in Spokane, Washington
Chalk Hill facility in Dallas, Texas
Jacksonville, Illinois
Rockdale, Illinois
Devils Lake, North Dakota
Laramie, Wyoming
Pendleton, Oregon
North Hills, California
Las Vegas North in Las Vegas, Nevada
Quad Avenue in Baltimore, Maryland
Wilmington, Massachusetts
Ashland, Massachusetts
Sagamore Beach, Massachusetts
Miami Downtown Air in Miami, Florida
Camden, Arkansas
Blytheville, Arkansas
Kosciusko, Mississippi
Atlanta Hub in Atlanta, Georgia
Columbia Hub in West Columbia, South Carolina
Kinston, North Carolina
Austinburg, Ohio
Cadillac, Michigan

What has UPS said about the closures?

"We're well into the largest U.S. network reconfiguration in UPS history, creating a nimbler, more efficient operation by modernizing our facilities and matching our size and resources to support growth initiatives," a UPS spokesperson told Fast Company when reached for comment. "Some positions will be affected, though most changes are expected to occur through attrition. We're committed to supporting our people throughout this process."

The facility closures were reported earlier by FreightWaves. Last year, UPS also shed 48,000 workers. The primary driver for the closures is a broader rightsizing effort outlined back in 2024. Shares of United Parcel Service Inc. (NYSE: UPS) are up almost 15% so far in 2026, but the stock is down significantly from the highs it saw during the early pandemic years.

The impact of the closures will affect members of the International Brotherhood of Teamsters. In response, the Teamsters filed a lawsuit over a planned voluntary buyout program for union drivers, called the Driver Choice Program, or DCP, saying it violates their contract. The Teamsters have asked the court for an injunction pending the two sides' initiation of the grievance process outlined in their contract. In a statement, the Teamsters said they have detailed at least six violations of the National Master Agreement by UPS in the rollout of the buyout program, including direct dealing of new contracts with workers, elimination of union jobs when UPS contractually agreed to establish more positions, and erosion of the rights and privileges of union shop stewards, among other charges.

"For the second time in six months, UPS has proven it doesn't care about the law, has no respect for its contract with the Teamsters, and is determined to try to screw our members out of their hard-earned money," said Teamsters General President Sean M. O'Brien, in comments included in the statement.

UPS's spokesperson tells Fast Company that the company is disappointed in the response. "The world is changing, and the rate of change is accelerating," UPS says. "As we navigate these changes and continue to reshape our network, our drivers appreciate having choices, including the option to make a career change or retire earlier than planned."

This story is developing…


Category: E-Commerce

 
