
2025-11-14 17:30:00| Fast Company

A team of researchers has uncovered what they say is the first reported use of artificial intelligence to direct a hacking campaign in a largely automated fashion. The AI company Anthropic said this week that it disrupted a cyber operation that its researchers linked to the Chinese government. The operation used an artificial intelligence system to direct the hacking campaigns, which researchers called a disturbing development that could greatly expand the reach of AI-equipped hackers.

Concerns about the use of AI to drive cyber operations are not new, but what stands out about this operation is the degree to which AI was able to automate some of the work, the researchers said. "While we predicted these capabilities would continue to evolve, what has stood out to us is how quickly they have done so at scale," they wrote in their report.

The operation was modest in scope, targeting only about 30 individuals who worked at tech companies, financial institutions, chemical companies, and government agencies. Anthropic noticed the operation in September and took steps to shut it down and notify the affected parties. The hackers succeeded in only a small number of cases, according to Anthropic, which noted that while AI systems are increasingly being used in a variety of settings for work and leisure, they can also be weaponized by hacking groups working for foreign adversaries.

Anthropic, maker of the generative AI chatbot Claude, is one of many tech companies pitching AI agents that go beyond a chatbot's capability to access computer tools and take actions on a person's behalf. Agents are valuable for everyday work and productivity, but in the wrong hands they can substantially increase the viability of large-scale cyberattacks, the researchers concluded. These attacks are likely to only grow in their effectiveness.

A spokesperson for China's embassy in Washington did not immediately return a message seeking comment on the report.
Microsoft warned earlier this year that foreign adversaries were increasingly embracing AI to make their cyber campaigns more efficient and less labor-intensive. America's adversaries, as well as criminal gangs and hacking companies, have exploited AI's potential, using it to automate and improve cyberattacks, to spread inflammatory disinformation, and to penetrate sensitive systems. AI can translate poorly worded phishing emails into fluent English, for example, as well as generate digital clones of senior government officials.


Category: E-Commerce

 

LATEST NEWS

2025-11-14 17:00:00| Fast Company

Americans have done a shoddy job of teaching reading and math to the majority of our students. Our scores, when compared to other nations (most with fewer resources), are plummeting. As a scientist, I try to stay solution oriented. To ensure that we bend the curve and change the future, we must first concede that we have failed our students.

We're at the dawn of a new educational era: the age of artificial intelligence. And there is no way we will get it right in this new era if we are still struggling with the previous one. As a congenital optimist, I am hopeful that when it comes to teaching AI (and I mean this in its broadest sense, well beyond the practice of coding), we will learn from our mistakes and get it right this time.

My genetic positivity is reinforced by two recent developments that are important milestones in building a national consensus for ensuring that we create generational AI skills and wisdom.

The White House's executive order

Trump's executive order speaks directly to the existential need for our country to cultivate the skills and understanding to use and create the next generation of AI technology. Upon its issuance, I wrote a column commending its intention. But I also noted, speaking as president and CEO of the Center of Science and Industry (COSI), a board member of the National Academies of Sciences, and a lifelong STEM advocate, that the EO was insufficient: We cannot teach AI without also teaching critical thinking, ethics, and wisdom.

Since then, I was asked to participate in the White House Task Force on AI Education, which is guiding the implementation of the EO and is also establishing public-private partnerships with leading stakeholders in AI. COSI is part of this group, and we have signed on to President Trump's pledge to invest in AI education. I recently attended a meeting of the White House Task Force on AI Education, where the inexorable link between national security, economic prosperity, and AI proficiency was the dominant theme.
I would summarize it as: We need to win, and we must be the global leader in AI capabilities to keep America on top. Yes, but how?

The state of Ohio creates a new state of tomorrow

After the meeting, I returned to Ohio, which has joined the AI conversation in a big way. Ohio is the first state to require that every school district adopt formal policies to govern AI use in schools. To put it simply, the EO urges the "must": that AI education needs to be a priority. The Ohio regulation, by contrast, insists on the "how." It proceeds from the recognition that our schools will be teaching the technology of the future, and demands that the complex nuances of how be determined and agreed to. Chris Woolard, the chief integration officer at the Ohio Department of Education, described the challenge as creating new guardrails that include ground rules for privacy, data quality, ethical use, and academic honesty. And, importantly: "What are the critical thinking skills that are needed for students?"

Beyond just governed, to taught

I commend what Ohio has done. But there is a long way to go. To build foundational pedagogical techniques for the teaching of AI, with no baseline, no historical data, and no trials, is far from trivial. In fact, it is enormously complicated, as we have seen from our inability to effectively teach STEM. Ohio's regulatory framework, which other states should follow, will involve the creation of new practices and metrics, and will require vast sensitivity and nuance, given that every single aspect of education can be weaponized in our undeniably fraught world of culture wars. But we can learn from our mistakes. For example, so-called whole language (versus phonics) is ineffective for the 20% of children with dyslexia. We need to bring all children into the future, and to do that we need to ensure that AI literacy becomes a core marker of educational success. Interestingly enough, AI can help with this.

Teaching AI is like developing AI. Sort of

The rapid evolution of AI comes from the process of training the model; it is how the large language models (LLMs) learn and improve in an iterative and focused manner. But it is also a black box in many ways, which cannot be the case with how we teach AI in our schools. Only transparency and continual improvement will ensure that our K-12 students develop the skills necessary to succeed in a changing workforce.

None of this will be easy. AI represents a profound turning point; the EO is broad and conceptual, while our Constitution assigns the responsibility of education to the states. But nothing can be more important, and I call upon educators everywhere to come together and work together. What makes their mission even more challenging is that AI is changing all the time, and with such speed. So those teaching it must also be capable of commensurate change. But educational standards tend to be fixed. It is hard enough to set them, let alone to build in agility and responsiveness.

I look forward to working with educators, and to continuing to participate in the AI Task Force, to help develop standards and guardrails that are as responsive and dynamic as artificial intelligence itself. Indeed, the time is now.


Category: E-Commerce

 

2025-11-14 17:00:00| Fast Company

AI was supposed to make our lives easier: automating tedious tasks, streamlining communication, and freeing up time for creative thinking. But what if the very tool meant to increase efficiency is fueling cognitive decline and burnout instead?

The Workflation Effect

Since AI entered the workplace, managers expect teams to produce more work in less time. They see tasks completed in two hours instead of two weeks, without understanding the process behind it. Yet AI still makes too many mistakes for high-quality output, forcing workers to adjust, edit, and review everything it produces, creating "workflation," which adds more work to already overloaded plates. AI has accelerated expectations because managers know that teams using it can work faster, but quality work still requires time, focus, and expertise.

"We are seeing that it can lead to a lot of churn and work slop (poor quality output), in particular when it's being used by junior team members," says Carey Bentley, CEO of Lifehack Method, a productivity coaching company. When team members lack the expertise to audit AI output, they take it at face value, which can lead to multimillion-dollar errors.

The percentage of companies using AI in at least one business function is rising every year, and one of the most popular uses is in marketing. However, many brands flood social media with formulaic, off-putting content that prioritizes speed over emotional connection, sacrificing creativity and differentiation. The consequences of using AI without proper quality review aren't just about brand reputation or lost deals; they also add stress while eroding workers' creativity, problem-solving abilities, and critical thinking.

Cognitive Decline and Burnout with AI

Research from MIT shows that relying on AI tools to think for us, rather than with us, leads to cognitive offloading: outsourcing mental effort in ways that gradually weaken memory, problem-solving, and critical thinking.
The study found that participants using GPT-based tools showed measurable declines in these areas compared to control groups. Just as GPS impairs spatial memory, relying on AI for thinking may weaken our capacity for original thought, because the brain needs practice to maintain cognitive functions.

When we layer that cognitive debt on top of the relentless pace that AI enables, we aren't just doing more work; we're doing it with diminished mental capacity. Workers are reviewing AI outputs without having the time to thoroughly evaluate their quality, making decisions without space for reflection, and producing content without engaging the creative processes that generate real insight. In the long term, the overwhelm leads to small mistakes, such as forgetting to add a document, not finishing an edit, or missing a deadline; these are the first signs of burnout. "It really starts small, and that's why it gets missed so often," explains Naomi Carmen, a business consultant specializing in leadership and company culture. These minor errors aren't signs of laziness, distraction, or disengagement, and when managers respond with performance reviews instead of support, the cycle only accelerates.

The Training Gap

Most people using AI haven't been adequately trained, and they confuse its confidence for truth. Neuroscientist David Eagleman refers to this as the "intelligence echo illusion": the perception that AI is intelligent because it responds with apparent insight, when in reality it merely reflects stored human knowledge. Without understanding how AI works, leadership develops unrealistic expectations that cascade through organizations, requiring faster and higher-quality work that's nearly impossible to sustain. "Expecting your team to use AI without proper training is like handing them a Ferrari and expecting them to win races right away," Bentley explains.
Carmen adds, "The input is going to directly affect the output."

Warning Signs AI Is Fueling Burnout

According to a 2024 study by The Upwork Research Institute, 77% of employees believe their workload has increased since they started using AI. Key warning signs include:

Errors and delays: mistakes slip through because workers rush to meet unrealistic deadlines.

Not feeling time savings: employees work harder than ever despite using "time-saving" tools.

Always-on culture: leadership sets expectations at AI speed, resulting in an always-on culture that multiplies workload and stress.

How to Use AI Without Burning People Out

The solution isn't abandoning AI, but implementing it thoughtfully. Here are four ways to do it:

Proper training: hire experts to audit existing workflows and provide recommendations, then show team members how to produce high-quality output.

Clear goals: connect AI use to specific KPIs instead of chasing trends. Companies should remain rooted in their core mission and values, rather than adopting every new AI tool.

Treat AI as a low-level assistant: use it for research, initial drafts, and data organization, but keep creative problem-solving and critical thinking in the hands of humans.

Support your team: life events, stress, and fatigue mean employees can't deliver at a constant, AI-driven pace. Leadership should keep the human element at the center of decisions, recognizing that policies and expectations must account for the complexity of real lives, not just the output.

Moving Forward with AI

In an era defined by AI, sustainable performance comes from empathy, connection, and space for creativity. A healthy workplace, where employees can rest, express themselves, and even have fun, boosts engagement, problem-solving, innovation, and efficiency. AI can support this, but only when implemented thoughtfully, with the human element at its core.


Category: E-Commerce

 
