The hottest AI tool on the market today isn't a powerful frontier model from the likes of OpenAI or Anthropic. Rather, it's a kludgey, wildly complex, open-source platform that's already provoked a trademark dispute, multiple corporate bans, and fawning praise from developers around the world. It's OpenClaw, and it's specifically designed to build AI agents. I set it up, built an agent of my own, and promptly trained it to do my job for me. Here's what happened.

Beware the Claw

For more than a year now, Big AI companies have promised us an agentic AI future. AI wouldn't simply answer our queries or help us shop for a toaster, companies like OpenAI and Anthropic assured us; it would actually do useful things. Turns out, the AI giants are generally too squeamish and cost-sensitive to actually release such a tool. Because AI agents can take actions on behalf of a user, they can easily cause harm or make mistakes at scale. As we'll see, they're also blindingly expensive. Both those things scare Big AI firms with reputations and valuations to protect. Therefore, they've largely given users neutered versions of agentic AI. Today's agents come with strict guardrails and perform very specific, bounded functions (like writing code or performing research). They're engineered to be unlikely to escape their cages or run up the compute bill. OpenClaw is different. The system is open source and model agnostic. That means it can leverage the best LLMs from OpenAI, Anthropic, xAI, or any other company. Developers install OpenClaw on their local server or computer, giving it broad permissions. This combination of unfettered access to hardware and tie-ins to the world's most powerful LLMs is a potent one. It allows OpenClaw to do things that other agents can't, spending minutes or hours acting on its user's behalf: crawling the web, signing into external platforms, and even controlling cameras and local hardware.
The developers behind OpenClaw originally named it Clawdbot, a clear shot at Anthropic's Claude system. Anthropic didn't take kindly to that provocation and threatened a trademark lawsuit. OpenClaw's creators briefly renamed their tool Moltbot before pivoting to the current, lobster-themed moniker. And that's not the only trouble OpenClaw has gotten into during its brief tenure on the planet. Because the bot has such broad access to users' hardware and data, multiple security experts have warned that it's a potential data security disaster. Meta and multiple other Big Tech companies have already banned their own developers from using the bot, ostensibly on privacy and security grounds. Those bans just made me want to try OpenClaw even more. So I went to my hosting provider, found a reasonably safe way to install the bot, and set about training its agentic AI to make me obsolete.

A Steep Curve

To begin experimenting with OpenClaw, I used a Virtual Private Server from Hostinger to create a new OpenClaw instance. Basically, this keeps the bot contained within its own dedicated pretend computer, where it can do minimal damage. I immediately discovered that OpenClaw's learning curve, especially for nonprogrammers, is extremely steep. I know my way around a Linux terminal, but it still took me several hours, and lots of back-and-forth with ChatGPT as my guide, to get OpenClaw successfully set up and ready to use. Once it was active, I paired it with my OpenAI credentials, set it up to use OpenAI's flagship models, and set about building an agent. My goal was simple: I wanted an agent that I could unleash on the open internet, and that would do my job as a Fast Company contributing writer for me.
Specifically, I wanted my agent to research everything happening in the world of AI, find a compelling news story, hunt down all the relevant details, write up a snappy and blindingly clever (but factual) piece in my writing style, add inline citations, craft a strong headline, and deliver the whole thing back to me. Unlike traditional chatbots, OpenClaw allows users to configure the system deeply. To build my agent, I gave OpenClaw specific instructions about my research process, as well as multiple samples of my prior Fast Company stories. That allowed the system to learn the nuances of my writing style and determine exactly what I wanted. After several hours of maddeningly complex configuration work, I had my OpenClaw doppelgänger ready to go. I named it AI News Desk. Then, I set it to work.

Replace Me!

Although configuring OpenClaw is, to put it in technical terms, a pain in the ass, using my AI News Desk agent is extremely easy. All I need to do is fire up a Linux terminal connected to my OpenClaw instance and tell my agent to work its magic. The first thing that struck me was how long OpenClaw spends doing its work. OpenAI users pay the company a flat monthly fee. That gives the company an incentive to do as little work as possible in responding to user queries: the more work and thinking ChatGPT does on a given query, the more OpenAI has to spend on computing power, and the less profit it makes from the user's fixed monthly fee. OpenClaw, in contrast, doesn't care about costs or profit. It's content to blithely burn through tokens to do the best possible job fielding your request. When I asked my agent to research and write an article for me, it often took as long as 20 minutes to produce a response, blowing through $2 to $3 worth of OpenAI API credits in the process. That's not a lot of money in the grand scheme of things, but it's way more than even a blitz-scaling OpenAI or Anthropic would devote to a single query.
With all that work and thinking, though, OpenClaw's responses were quite good. In one test, the system successfully found a relevant piece of juicy AI news (Anthropic's decision to give free users access to its powerful new Sonnet 4.6 model), researched more than 50 sources, chose a solid headline ("Anthropic just moved its best everyday Claude into the cheap seats"), and wrote a piece that's factually accurate and quite polished. "Functionally, the Sonnet tier just cannibalized a lot of work that used to force teams onto Opus," OpenClaw opined in the article. I could see writing that. Human sacrifice metaphors in a business story? That's my jam!

[Image: OpenClaw writing an article]

OpenClaw even captured my propensity for including data and stats in my articles. "Internal evals show developers prefer Sonnet 4.6 over 4.5 about 70% of the time and even choose it over last fall's Opus 4.5 in nearly six out of ten trials," the bot wrote, citing a blog post from Anthropic. Overall, OpenClaw did a surprisingly good job following journalistic best practices. It has a strong sense of what's newsworthy, cites a mixture of sources (including company announcements and external analysis pieces), and keeps things compelling without embellishing facts or hallucinating. Sometimes it drones on about technical things. But then, so do I! In short, it's a decent journalist, if not, I'd like to think, a real replacement for yours truly.

Agents for the Win?

To be clear, I would never use OpenClaw to actually write a Fast Company article for me. But based on my experiments, the system is a compelling and powerful tool. I spent most of my time on the basics. But with more time spent tweaking and improving its instructions and training data, I could likely improve its output even more. I could also give the bot more capabilities beyond just writing.
Because OpenClaw allows deep integrations with other tools, I could train the bot to put its articles into a Google Doc, fact-check them, and even send them directly to my Fast Company editor. Other developers have trained the system to create videos for them, control their smart home devices, build entire iPhone apps, and clear their inboxes by responding to hundreds of emails on their behalf. Beyond the specifics of my experiment, using OpenClaw showed me the real potential of agentic AI, as well as its drawbacks. OpenClaw bills itself as "The AI that actually does things." That's true, and refreshing. It's also expensive. In a day of using OpenClaw, I can easily spend $10 to $15. Companies like OpenAI are already burning through hundreds of billions just fielding basic ChatGPT queries. There's no way they'd let everyday users access such a pricey technology. That means until frontier AI models get far cheaper, agentic AI will be the purview of big enterprises that can build their own bespoke agents, and the crazy few who are devoted (and deep-pocketed) enough to implement tools like OpenClaw for themselves. In short, based on price alone, you can ignore promises of powerful AI agents for the masses. Model prices will come down, though. And when they do, even consumer-friendly tools will be able to pull the same magic as OpenClaw. The agentic future will arrive. But not until it's profitable.
Category:
E-Commerce
Most workplace frustration doesn't come from a lack of effort or commitment. It comes from expectations that weren't met, not because people failed to try, but because those expectations were never clearly stated or truly understood. In our organizational research over the past 30 years, we've seen this pattern repeatedly: when expectations are unclear, trust in leadership and collaboration begins to drop. When this happens, the frustration that follows is real. But the deeper cost is often invisible: trust begins to erode. This dynamic is increasingly common. Roles evolve, priorities shift, and teams are asked to move faster with less certainty. People continue to work in good faith, investing energy and time into what they believe is needed. They solve problems based on experience and what has worked before. When they're later told the outcome fell short, the issue is more than disappointment. It's disorientation. People begin to question their judgment and whether they can reliably meet expectations going forward. Over time, that uncertainty weakens collaboration and trust, the sense that people are truly working with one another toward a shared outcome. Consider a common scenario. A leader asks a team member to "move this forward quickly." The work gets done on time, but when it's delivered, the leader is disappointed. What they needed wasn't just speed, but alignment with a broader strategy, or more collaboration with another team before finalizing decisions. The expectation wasn't ignored; it was incomplete. The leader never named the strategy, nor the need. In the absence of clarity, effort went in one direction while expectations lived in another. Over time, moments like this teach people to hesitate, over-check, or disengage because trust in their understanding has been shaken. Here's how to break that cycle.
Set expectations explicitly

This means being clear not just about tasks or deadlines, but about what success looks like, along with what constraints or tradeoffs are in play. It also means being realistic: considering current priorities and what support may be required to do the work well. Rather than assuming clarity, make it visible. Instead of saying, "Can you move this forward?" try something more specific: "I'd like to review my expectations with you for clarity. What I'm trying to accomplish is [outcome], and what matters most here is [speed, quality, alignment, or collaboration]. I need this delivered by [timeframe], and I want to make sure that's realistic given everything else you're managing." Setting expectations this way signals partnership, not control. It shows consideration for others and consistency in how expectations are applied. It also opens the door to an essential question: "What do you need from me?" Asking that upfront helps leaders provide the right support and ensure people are set up to succeed.

Confirm understanding before work begins

Shared history and good intentions can create the illusion of alignment. Leaders may believe expectations are obvious, that others understand what matters most, or that capable people will speak up if something is unclear. In effect, clarity is assumed, and there's often an unspoken expectation that people will initiate their own understanding. In reality, many people hesitate to ask clarifying questions, especially in environments shaped by urgency or rapid change. They don't want to slow things down, appear uninformed, or challenge direction. Trust is strengthened when leaders treat clarity as something to be created together, not something to be inferred. Rather than assuming alignment, invite it. That might mean asking someone to reflect back what they heard or encouraging them to surface concerns.
For example, instead of asking, "Any questions?", which often shuts conversation down, try something more specific: "Before you get started, I'd like to make sure we're aligned. What are you hearing matters most here?" or "What concerns or constraints do you see?" And if you're the person receiving the instruction, this is a moment to step into ownership. Asking a clarifying question doesn't signal uncertainty: it signals engagement. Questions like, "Can I confirm my understanding of what success looks like?" or "What would be most helpful from you as I work on this?" both clarify expectations and demonstrate initiative. Managers notice this. It builds confidence on both sides and reduces the risk of misalignment later.

Renegotiate expectations when reality changes

Because it always does. Expectations can grow larger than anticipated, take longer than expected, or become more complex as work unfolds. New priorities emerge. Constraints surface. Resources shift. When these changes go unaddressed, people continue operating on outdated assumptions, drifting further out of alignment. Renegotiation isn't a failure of planning; it's a leadership and partnership responsibility. If you're receiving an expectation and recognize that something has changed, bring it up immediately. Share what you're seeing, explain what's different, and be explicit about the support that would help you succeed. That might sound like: "As I've been working on this, I'm realizing the scope is larger than expected because [reason]. I'm concerned I won't be able to meet the original expectation as defined. I'd like to talk about what support, or what adjustment to scope or timing, would help me complete this successfully." Asking for support isn't a sign of weakness; it's a sign of ownership. If you're the one who set the expectation, make support visible. Ask questions like: "Are you running into any challenges?" "Is there anything I need to be aware of that's creating a barrier to progress?"
or "What support would help you get back on track?" These questions normalize course correction and reinforce that success is shared. Renegotiation replaces disappointment with dialogue. It keeps people aligned to what matters now, not what mattered when the expectation was first set. And it reinforces a critical truth: trust isn't built by pushing through in silence, but by adapting together when reality changes. Managing expectations is one of the most overlooked ways trust is built at work. When managers make expectations visible, confirm understanding, and adapt together as needs change, they create more than alignment; they create confidence. People know what's expected, why it matters, and where to ask for support when reality shifts. In a world defined by constant change, that kind of partnership isn't a luxury. It's a management responsibility.
When social psychologist Jonathan Haidt published The Anxious Generation in March 2024, his core proposal, that children should be kept off social media until at least age 16 with tech companies bearing the burden of enforcement, was treated by many as aspirational, even quixotic. The tech industry dismissed it. Libertarian critics called it paternalistic overreach. Skeptics questioned the evidence base. That was then. In barely two years, Haidt's "radical" idea has become something close to a global consensus: a textbook example of what political scientists call the "Overton Window," one that's shifted at extraordinary speed. The Overton Window describes the range of ideas that are considered politically acceptable at any given time, ranging from unthinkable to popular and eventually to policy. Ideas outside the window, no matter how sensible, get dismissed as too extreme, too impractical, or too politically risky to touch. But when conditions change, the window can move, sometimes gradually and sometimes with startling speed, pulling yesterday's fringe idea into today's mainstream. That is exactly what has happened with children and social media. Politicians everywhere are now racing to get on the right side of a window that has moved decisively.

The Floodgates Have Opened

Consider what has happened just since late 2025. Australia led the charge, enacting an outright ban on social media for children under 16 that took effect in December 2025, with monetary penalties falling squarely on the platforms, not on parents or kids. France has passed a bill banning social media for children under 15. Denmark secured cross-party support for a similar ban, expected to become law by mid-2026. Spain, Germany, Malaysia, Slovenia, Italy, and Greece are all moving in the same direction. In the United States, where bipartisan agreement on anything feels miraculous, the Kids Off Social Media Act has attracted co-sponsors from both parties: Sen. Brian Schatz (D-HI) alongside Sen.
Ted Cruz (R-TX), and Chris Murphy (D-CT) alongside Katie Britt (R-AL). Virginia enacted a law, effective January 2026, limiting under-16 social media use to one hour per day unless parents opt in. Over 45 states have pending legislation. And in the U.K., a January 2026 government consultation is explicitly considering a social media ban for children, after the House of Lords defeated the government to insert an under-16 ban into the Children's Wellbeing and Schools Bill. This is no longer a debate about whether to act. It's a debate about the details.

Why the Window Moved So Fast

Several forces converged to make this shift possible. First, mounting evidence. Haidt marshaled data showing that since the early 2010s, precisely when smartphones and social media became ubiquitous among teens, rates of anxiety, depression, self-harm, and suicide among young people have surged across the developed world. The patterns are strikingly consistent across countries and cultures. As Haidt puts it: We "over-protected children in the real world and under-protected them online." Second, personal stories that broke through the noise. Australia's ban originated partly from a mother's letter to Prime Minister Anthony Albanese about her 12-year-old daughter's suicide following social media bullying. At the U.N. General Assembly in September 2025, a mother's speech about her daughter's "death by bullying, enabled by social media" won support from world leaders across continents. Data persuades policymakers; stories move publics. Third, the collective action problem became too painful to ignore. Haidt nailed this insight: Individual parents feel powerless against platforms engineered by billions of dollars of design expertise to maximize engagement. No single family can opt out without socially isolating their child. This is precisely why governments need to shift the responsibility to the platforms.
When enforcement becomes the tech companies' problem, not the parents' problem, the collective action trap breaks. Fourth, early results from related interventions are encouraging. Arkansas' phone-free-school pilot program showed a 51% drop in drug-related offenses and a 57% decline in verbal and physical aggression among students within the first year. Results like these give politicians the cover they need to act boldly.

The Strategic Lesson

For those of us who study how change happens, this is a master class. An idea that seemed politically impossible in early 2024 has become politically inevitable by early 2026. That's the speed at which Overton Windows can move when lived experience, accumulating evidence, moral urgency, and a clear articulation of the problem all align. Note, too, where the burden of proof has shifted. Two years ago, advocates for restricting children's social media access had to justify intervention. Today, it is the tech companies and their defenders who must explain why children should continue to have unrestricted access to platforms designed to be addictive. That reversal, the shift in who must justify what, is the surest signal that an Overton Window has decisively moved. It is further set against the backdrop of the first wave of legal challenges to the platforms' business models, which argue that the companies deliberately designed their products to be harmful in order to maximize profits.

What Comes Next

Haidt, a professor of ethical leadership at New York University, didn't create this movement alone; millions of anxious parents, grieving families, and alarmed educators did. But he gave it a framework, a language, and a set of actionable proposals. And now, politicians everywhere are scrambling to catch up with what parents already knew in their bones: that we handed our children's attention, self-worth, and mental health to companies that optimize for engagement, not well-being, and that better guardrails, uniformly enforced, are essential.