AI copyright litigation has reached a significant milestone. A federal court in Delaware has ruled that a tech startup unlawfully used copyrighted content to build a competing AI-driven legal research tool, handing Thomson Reuters a substantial victory.
The ruling sets an early precedent for plaintiffs arguing that AI firms' use of proprietary material falls outside the bounds of "fair use." Thomson Reuters, the parent company of the Reuters news agency, has been locked in a protracted legal battle with Ross Intelligence, an AI firm accused of extracting material from Thomson Reuters's Westlaw database.
Thomson Reuters argued in court that Ross Intelligence's use of that data to train its AI legal research application was not fair use and infringed its copyrighted content.

In a pivotal decision, U.S. Circuit Judge Stephanos Bibas rejected Ross's defenses, including its claim of "innocent infringement," and granted summary judgment to Thomson Reuters. Central to the case is the fair use doctrine, which weighs four guiding factors:
(1) The purpose and character of the use, including its commercial or nonprofit nature.
(2) The nature of the copyrighted work used.
(3) The amount and significance of the portion used relative to the overall copyrighted work.
(4) The effect of the use on the value or potential market for the copyrighted work.
The court's fair use analysis was split, with some factors favoring each party, but Judge Bibas emphasized that the fourth factor carried the most weight and found that it favored Thomson Reuters. The ruling may pave the way for more decisive legal action against AI companies accused of unlawfully using copyrighted material.
A New Chapter in AI Training Legal Battles?

One of the most prominent cases in this arena involves The New York Times, which sued OpenAI and its backer Microsoft for allegedly using its content without permission to train products such as ChatGPT. Getty Images has likewise sued Stability AI over unauthorized content scraping.
Anthropic, backed by Amazon, has faced its own copyright fight with Universal Music Group. More recently, a new trend has emerged in which seemingly opposed parties reach licensing agreements to resolve copyright tensions. OpenAI has struck licensing deals with Axios, Hearst, and Condé Nast, among others. Perplexity, embroiled in legal disputes of its own, has secured similar partnerships with Fortune and Time.
Tech giants such as Meta, Google, and Microsoft have also entered into agreements with content providers; notably, the Reuters news agency has formed a licensing deal with Meta. Many observers, however, view these licensing agreements as stopgaps rather than a lasting resolution.
Experts remain divided on how copyright law should be interpreted in the context of artificial intelligence. Questions linger over the threshold at which an AI-generated output that paraphrases copyrighted material gives rise to an infringement claim. Courts are expected to provide more clarity in the near future.