Claude’s Code Leak Sparks Copyright Clash for Anthropic

The Leak of Claude Code’s Source Code

Anthropic, a leading AI company, found itself in an unexpected situation this week when the source code for its Claude Code AI agent was accidentally leaked. The leak quickly gained traction, drawing millions of views and numerous GitHub adaptations, and set off a frenzy among engineers eager to explore and learn from the code.

The leaked source code for Anthropic’s acclaimed AI agent, Claude Code, appeared on GitHub on Tuesday, creating a free-for-all. Engineers from various backgrounds rushed to access it, hoping to gain insights and improve their own projects. The situation echoes the practices of major AI companies, which have long relied on content created by others to improve their large language models, and Anthropic is no exception.

In a twist of irony, Anthropic moved swiftly to contain the spread of the leaked code, issuing a copyright takedown notice to the GitHub repository hosting it. An Anthropic spokesperson cited the Digital Millennium Copyright Act (DMCA) in response to the incident. The company has faced legal challenges of its own: along with OpenAI and Google, it has been sued over the use of copyrighted material, including published books, articles, scientific journals, and other online content, without explicit permission.

In September, Anthropic agreed to pay $1.5 billion to settle a class-action lawsuit brought by authors and publishers. The lead plaintiffs, Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, alleged that Anthropic used pirated books and shadow libraries to train Claude. Additionally, Reddit sued Anthropic last June for scraping user-generated content without authorization or compensation to train its models. Last month, Universal Music Group, Concord, and ABKCO filed a suit against Anthropic for illegally downloading over 20,000 copyrighted songs to train its models.

Now, the tables have turned, and Anthropic is using copyright laws to protect its creations. “We’re rolling out measures to prevent this from happening again,” a spokesperson for Anthropic said.

Fortunately, the leak may not be as severe as some feared. Paul Price, a cybersecurity specialist and founder of the ethical hacking firm Code Wall, noted that the leak didn’t expose anything critical. He stated, “It’s more embarrassing than detrimental. Most of the real juicy stuff is in their internal source models and that wasn’t leaked.”

Price explained that the company inadvertently exposed its “harness,” the software infrastructure that connects a large language model to the broader context in which it is used. He added that Claude Code is one of the best-designed agent harnesses, and that the industry can now see how Anthropic approaches the hard problems. That could also prove useful intel for competitors.
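To make the term concrete: a harness is the scaffolding around a model. It keeps conversation state, exposes tools such as file reads or shell commands, and loops tool results back into the model until the task is done. The sketch below is purely illustrative, assumes a generic model interface, and uses hypothetical names (run_agent, ToolCall); it is not based on the leaked code.

```python
# Illustrative sketch of an agent "harness": the plumbing that connects a
# language model to tools. All names here are hypothetical, not Anthropic's.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolCall:
    name: str        # which registered tool the model wants to invoke
    arguments: dict  # arguments the model supplied for that tool

# The harness is agnostic about the model: it only needs a callable that takes
# the conversation so far and returns either final text or a ToolCall request.
ModelFn = Callable[[list[dict]], str | ToolCall]

def run_agent(model: ModelFn, tools: dict[str, Callable[..., str]],
              user_prompt: str, max_steps: int = 10) -> str:
    """Drive the model/tool loop until the model replies with plain text."""
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        reply = model(messages)
        if isinstance(reply, ToolCall):
            # Run the requested tool and feed its output back to the model.
            result = tools[reply.name](**reply.arguments)
            messages.append({"role": "tool", "name": reply.name, "content": result})
        else:
            return reply  # plain-text answer: the loop is finished
    return "Stopped: step limit reached."
```

The hard parts of a production harness are the details this toy loop leaves out, such as prompt construction, tool permissioning, error recovery, and context management, which is part of why observers say the leak could be instructive to competitors.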

The leak also highlighted a paradox of the AI hype cycle: the same tools that make it faster to build and ship products also make it easier for information—sensitive or not—to leak, replicate, and spread instantly.

