The AI copyright courtroom is heating up. In back-to-back rulings last week, the ongoing legal war between AI companies and content creators shifted significantly, ostensibly in favor of the former. First, Anthropic got the better outcome of a case that examined whether it could claim “fair use” over its ingestion of large book archives to feed its Claude AI models. In another case, a federal judge said Meta did not violate the copyright of several well-known authors who sued the company for training its Llama models on their books.

At a glance, this looks bad if you’re an author or content creator. Although neither case necessarily sets a precedent (the judge in the Meta case even went out of his way to emphasize how narrowly focused it was), two copyright rulings coming down so quickly and definitively on the side of AI companies is a signal, one that suggests “fair use” will be an effective shield for them, potentially even in higher-stakes cases like the ones involving The New York Times and News Corp.

As always, the reality is a little more complicated. The outcomes of both cases were more mixed than the headlines suggest, and they are also deeply instructive. Far from closing the door on copyright holders, they point to places where litigants might find a key.

What the Anthropic ruling says about AI inputs vs. outputs

Before I get going, I need to point out that I’m not a lawyer. What I offer here is my analysis of the cases, based on my experience as a journalist and media executive and what I’ve learned following this space for the past two years. Consider this general guidance for those curious about what’s going on, but if you or your company is in the process of arguing one of these cases or thinking about legal action, you should consult a lawyer, preferably one who specializes in copyright law.

Speaking of, here’s a little refresher on that: Copyright law is well defined in the U.S., and it provides a defense to certain claims of infringement, known as fair use. Almost all of the AI companies at the forefront of building models rely on this defense. Determining whether a fair-use defense holds water comes down to four factors:

The purpose of the use: whether it was for commercial or noncommercial purposes. Courts will be more forgiving of the latter, but obviously what the AI companies are doing is a massively commercial exercise. This factor also covers whether the allegedly violating work is a direct copy or “transformative.” Many have said that AI outputs, because they aren’t word-for-word copies and usually rely on many different sources, are transformative.

The nature of the copyrighted work: More protection usually goes to creative works than factual ones. AI systems often deal with both.

How much of the original work was copied: Reproducing short excerpts is usually OK, but AI companies typically ingest entire works for training. Courts have sometimes tolerated full copying as long as the output doesn’t reproduce the entire work or big chunks verbatim.
Whether the violation caused market harm: This is a major focus in these cases and other ongoing litigation.

The outcome of the Anthropic case drew some lines between what was OK and what wasn’t. The fact is, anyone can buy a book, and for the books that were legally obtained, the judge said that training Anthropic’s AI on them qualified as fair use. However, if those books were illegally obtained, i.e. pirated, that would amount to a copyright violation. Since many of them undoubtedly were, Anthropic might still pay a price for training on the illegally copied books that happened to be in the archives.

An important aspect of the Anthropic case is that it focuses on the inputs of AI systems as opposed to the outputs. In other words, it answers the question, “Is copying a whole bunch of books a violation, independent of what you’re doing with them?” with “No.” In his ruling, the judge cited the precedent-setting case of Authors Guild, Inc. v. Google, Inc. from 2015. That case concluded Google was within its rights to copy books for an online database, and the Anthropic ruling is a powerful signal that extends the concept into the AI realm.

However, the Google case came out in favor of fair use in large part because the outputs of Google Books are limited to excerpts, not entire books. This is important, because a surface-level reading of the Anthropic case might make you think that, if an AI service pays for a copy of something, it can do whatever it wants with it. For example, if you wanted to use the entire archive of The Information, all you’d need to do is pay for the annual subscription. But for digital subscriptions, the permission is to access and read, not to copy and repurpose. Courts have not ruled that buying a digital subscription alone licenses AI training, even though many might read it that way.

The missing piece in the Meta case: harm

The Meta case has a little bit to say about that, and it has to do with the fourth point of the fair-use defense: market harm. The reason the judge ruled in favor of Meta was that the authors, who include comedian Sarah Silverman and journalist Ta-Nehisi Coates, weren’t able to prove that they had suffered a decline in book sales. While that gives a green light for an AI to train on copyrighted works as long as it doesn’t negatively affect their commercial potential, the reverse is also true: Content creators will be more successful in court if they can show that it does.

In fact, that’s exactly what happened earlier this year. In February, Thomson Reuters scored a win against a now-defunct AI company called Ross Intelligence in a ruling that rejected Ross’s claims of fair use for training on material derived from Thomson Reuters’ content. Ross’s business model centered on a product that competed directly with the source of the content, Westlaw, Thomson Reuters’ online legal research service. That was clear market harm in the judge’s eyes.

Taken together, the three cases point to a clearer path forward for publishers building copyright cases against Big AI:

Focus on outputs instead of inputs: It’s not enough that someone hoovered up your work. To build a solid case, you need to show that what the AI company did with it reproduced it in some form.
So far, no court has definitively decided whether AI outputs are meaningfully different enough to count as “transformative” in the eyes of copyright law, but it should be noted that courts have ruled in the past that copyright violation can occur even when only small parts of the work are copied, if those parts represent the “heart” of the original.

Show market harm: This looks increasingly like the main battle. Now that we have a lot of data on how AI search engines and chatbots (which, to be clear, are outputs) are affecting the online behavior of news consumers, the case that an AI service harms the media market is easier to make than it was a year ago. In addition, the emergence of licensing deals between publishers and AI companies is evidence that creating outputs without offering such a deal causes market harm.

Question source legitimacy: Was the content legally acquired or pirated? The Anthropic case opens this up as a possible attack vector for publishers. If they can prove scraping occurred through paywalls, without subscribing first, that could be a violation even absent any outputs.

The case for a better case

This area of law is evolving rapidly. There will certainly be appeals in these cases and others that are still pending, and there’s a good chance this all ends up at the Supreme Court. Also, regulators or Congress could change the rules. The Trump administration has hardly been silent on the issue: It recently fired the head of the U.S. Copyright Office, ostensibly over its changing stance on AI, and when it solicited public comment on its AI action plan, both OpenAI and Google took the opportunity to argue for signing their interpretation of fair use into law.

For now, though, publishers and content creators have a better guide to strengthening their copyright cases. The recent court rulings don’t mean copyright holders can’t win, but that the broad “AI eats everything” narrative won’t win by itself. Plaintiffs will need to show that outputs are market substitutes, that the financial harm is real, or that the AI companies used pirated sources in their training sets. The rulings aren’t saying don’t sue; they show how to sue well.
I work in the data center industry, where we’re known for our digital-ready, adaptive infrastructure. Yet one of our most valuable products is actually the leaders we create. Developing leaders is critical for every growing company. For us, it’s an urgent priority. Demand for AI and high-powered computing means we’re expanding almost 30% annually. In just two years, we’ve grown from under 200 employees in the U.S. to around 900 across five countries.

But as vital as leadership development is, it often gets overlooked. Just four out of 10 executives say their company has high-quality leadership, while 45% of managers don’t think their organization is doing enough to develop senior talent.

Turning your company into a leadership development engine requires looking at tomorrow through a talent lens. It’s not just about hiring great people; it’s about building a pipeline of leaders who can step up, inspire teams, and represent the business at its best. That means promoting from within, bringing in fresh perspectives, and upskilling existing leaders to be ready for what’s next. Even for companies that aren’t on a rapid growth trajectory, our experience offers some lessons worth considering. Here are three things any business can do to develop its leaders.

1. Identify potential and create the roadmap

To start, you need a clear leadership philosophy. Ours is simple: Grow people, grow the business. We see leaders as those who take initiative, elevate others, and deliver results without needing to be micromanaged. The next step: Create a leadership roadmap by figuring out which roles you need today and tomorrow. This isn’t just about identifying people but also pinpointing business needs. Who on your team can be developed to meet those objectives? What roles call for a new hire? Who will need replacing? With an aging workforce threatening a talent shortage, succession planning is increasingly important for future-proofing.

It’s also crucial to balance internal promotions with new blood. When I became CEO, I could have recreated the C-suite from my previous company. Instead, we built a culture rooted in our unique business needs, recruiting leaders from a variety of organizations and developing existing talent. Last quarter alone, we promoted four executives within the company to new roles. I’m also a firm believer that A players should hire A players. That demands letting go of fears about being replaced and bringing on people who help raise everybody’s game.

Finally, one of the most powerful things an organization can do is treat leadership as a behavior, not a goal. Give people the chance to lead projects, influence peers, and solve hard problems before they ever manage a team. It builds confidence, surfaces potential, and helps people grow into leaders long before their title says so.

2. Train and develop your leaders

Identifying a future leader is just the beginning. The real work lies in helping them develop. General Electric’s Leadership Development Institute once set the standard here, especially during the Jack Welch era. IBM’s offerings include online leadership development programs that earn participants a certificate from a top business school. While some companies prefer a one-size-fits-all approach, we break down leadership development into three cohorts. One is for team members who have never led before. The next is for midlevel managers, covering topics like having tough conversations, big-picture thinking, and leading rather than managing.
For high-potential employees (chosen by the C-suite), we offer a Leadership Excellence program designed to accelerate those who can move the business forward. One-on-one training is also essential. Through our mentorship program, we pair top leadership candidates with senior executives. We also have promising leaders shadow more senior team members, especially if they might end up succeeding that person. Such efforts pay off. One study found that the average ROI for every dollar spent on leadership development is $7. Besides a revenue boost, those benefits include savings from higher employee retention and lower recruiting costs.

3. Support the leaders you have

Leaders need autonomy to do what they do best, but that freedom hinges on support from their peers. We recently brought the entire executive team together for an offsite. Such meetings are a chance to align on priorities, share ideas, talk about what is and isn’t working, and brainstorm how to overcome obstacles. Having that peer network to lean on helps set leaders up for success with their teams.

Burnout is also a major problem. Younger people are especially vulnerable, with 75% of leaders under age 35 saying they feel used up at the end of each day. To prevent that, we provide executive coaching, settle on a realistic scope for leaders’ duties, and encourage setting boundaries. Avoiding burnout also means normalizing vulnerability and urging leaders to tell us if they’re at capacity. The worst thing that can happen is someone quitting because they didn’t have bandwidth, especially when we could have helped them, whether that’s by hiring or bringing in staff from elsewhere in the business.

Challenges and opportunities

Developing the next generation of leaders has its stumbling blocks. One hurdle is that many young professionals are reluctant to lead. More than half of Gen Z employees don’t want to be middle managers, and roughly 70% would prefer to advance as individual contributors. We’re tackling this challenge with a robust internship program that gives new grads exposure to multiple career paths, including leadership, so they can make an informed decision about what’s right for them.

AI adds another layer of complexity. On the one hand, I see it becoming a powerful development tool, offering leaders real-time feedback, personalized learning journeys, and data-backed insights into team dynamics. On the other hand, AI is forcing leaders to start thinking about how it will transform the workforce and impact their teams. But no matter what changes AI brings, it can’t replace the human element of leadership. For leaders at any successful business in any industry, qualities like empathy, judgment, and presence can’t be outsourced. If anything, AI frees up more time for leaders to focus on their most important job: bringing out the best in the people around them.

Andrew Schaap is CEO of Aligned Data Centers.
Today, most AI is being built on blind faith inside of black boxes. It requires users to have an unquestioning belief in something neither transparent nor understandable. The industry is moving at warp speed, employing deep learning to tackle every problem, training on datasets that few people can trace, and hoping no one gets sued. The most popular AI models are developed behind closed doors, with unclear documentation, vague licensing, and limited visibility into the provenance of training data. It’s a mess, we all know it, and it’s only going to get messier if we don’t take a different approach.

This “train now, apologize later” mindset is unsustainable. It undermines trust, heightens legal risk, and slows meaningful innovation. We don’t need more hype. We need systems where ethical design is foundational. The only way we will get there is by adopting the true spirit of open source and making the underlying code, model parameters, and training data available for anyone to use, study, modify, and distribute. Increasing transparency in AI model development will foster innovation and lay a stronger foundation for civic discourse around AI policy and ethics.

Open source transparency empowers users

Bias is a technical inevitability in the architecture of current large language models (LLMs). To some extent, the entire process of training is nothing but computing the billions of micro-biases that align with the contents of the training dataset. If we want to align AI with human values, instead of fixating on the red herring of bias, we must have transparency around training. The source datasets, fine-tuning prompts and responses, and evaluation metrics will reveal precisely the values and assumptions of the engineers who create the AI model.

Consider a high school English teacher using an AI tool to summarize Shakespeare for literary discussion guides. If the AI developer sanitizes the Bard for modern sensibilities, filtering out language they personally deem inappropriate or controversial, they’re not just tweaking output, they’re rewriting history. It is impossible to make an AI system tailored to every single user. Attempting to do so has led to the recent backlash against ChatGPT for being too sycophantic. Values cannot be unilaterally determined at a low technical level, and certainly not by just a few AI engineers. Instead, AI developers should provide transparency into their systems so that users, communities, and governments can make informed decisions about how best to align the AI with societal values.

Open source will foster AI innovation

Research firm Forrester has stated that open source can help firms accelerate AI initiatives, reduce costs, and increase architectural openness, ultimately leading to a more dynamic, inclusive tech ecosystem. AI models consist of more than just software code. In fact, most models’ code is very similar. What uniquely differentiates them are the input datasets and the training regimen. Thus, an intellectually honest application of the concept of “open source” to AI requires disclosure of the training regimen as well as the model source code.

The open-source software movement has always been about more than just its tech ingredients. It’s about how people come together to form distributed communities of innovation and collective stewardship. The Python programming language, a foundation for modern AI, is a great example. Python evolved from a simple scripting language into a rich ecosystem that forms the backbone of modern data processing and AI.
It did this through countless contributions from researchers, developers, and innovators, not corporate mandates. Open source gives everyone permission to innovate, without installing any single company as gatekeeper. This same spirit of open innovation continues today, with tools like Lumen AI, which democratizes advanced AI capabilities, allowing teams to transform data through natural language without requiring deep technical expertise.

The AI systems we’re building are too consequential to stay hidden behind closed doors and too complex to govern without collaboration. However, we will need more than open code if we want AI to be trustworthy. We need open dialogue among the enterprises, maintainers, and communities these tools serve, because transparency without ongoing conversation risks becoming mere performance. Real trust emerges when those building the technology actively engage with those deploying it and those whose lives it affects, creating feedback loops that ensure AI systems remain aligned with evolving human values and societal needs.

Open source AI is inevitable and necessary for trust

Previous technology revolutions, like personal computers and the Internet, started with a few proprietary vendors but ultimately succeeded based on open protocols and massively democratized innovation. This benefited both users and for-profit corporations, although the latter often fought to keep things proprietary for as long as possible. Corporations even tried to give away closed technologies “for free,” under the mistaken impression that cost is the primary driver of open source adoption.

A similar dynamic is happening today. There are many free AI models available, but users are left to wrestle with questions of ethics and alignment around these black-boxed, opaque models. For societies to trust AI technology, transparency is not optional. These powerful systems are too consequential to stay hidden behind closed doors, and the innovation space around them will ultimately prove too complex to be governed by a few centralized actors. If proprietary companies insist on opacity, then it falls upon the open source community to create the alternative.

AI technology can and will follow the same commoditization trajectory as previous technologies. Despite all the hyperbolic press about artificial general intelligence, there is a simple, profound truth about LLMs: The algorithm that turns a digitized corpus into a thought-machine is straightforward and freely available. Anyone can do this, given compute time. There are very few secrets in AI today. Open communities of innovation can be built around the foundational elements of modern AI: the source code, the computing infrastructure, and, most importantly, the data. It falls upon us, as practitioners, to insist on open approaches to AI, and not to be distracted by merely “free” facsimiles.

Peter Wang is chief AI and innovation officer at Anaconda.