
Artificial Intelligence (AI) and Scholarly Communications

This guide is intended to highlight the ways in which AI can be integrated into various aspects of the research and publishing process and what we should consider when deciding whether AI tools are the right fit for our individual practice.


Infringement and Output Liability

What is Infringement? Copyright infringement occurs when a copyrighted work is reproduced, distributed, performed, publicly displayed, or made into a derivative work without the permission of the copyright owner. Common examples of copyright infringement include:

  • Digital piracy, such as downloading and sharing music, movies, or software from unauthorized distributors 
  • Creating and distributing works, such as fan art or fan fiction, that are based on existing copyrighted characters or stories, without permission from the original creator
  • Using music, images, or other media to create or illustrate your own work without permission. This might look like using a photograph to make a marketing image or a book cover, or incorporating music as a background track in a podcast or film

In the context of AI, infringement might occur when we take copyrighted works we have purchased or otherwise downloaded from the internet and put them into AI tools, such as uploading an image or document to ChatGPT or the image generator Midjourney. Using generative AI, we might also produce works that are deemed to be infringing because they draw on the characters, signature styles, and other creative intellectual property of their creators, particularly if we distribute and/or profit from those works. 

Many generative AI tools have attempted to build guardrails that would prevent infringing outputs from being created, but this is especially difficult when it comes to characters and other well-known artists and images. AI models have been trained in ways intended to prevent them from "memorizing" the exact works in their datasets, and therefore from producing outputs that copy, or come too close to copying, specific images. However, the nature of characters such as Snoopy, Mickey Mouse, and Harry Potter, which are composed of a set of distinct features rather than a single, fixed image or text, has thus far made it impossible to prevent generative AI tools from producing infringing works. Below are two images produced by the legal scholar Matthew Sag, representing an unsuccessful attempt to have the application Midjourney produce infringing images of a specific painting by Salvador Dalí, and a successful attempt to produce infringing images of the character Snoopy. This issue, by which we are unable to prevent infringing works, has been dubbed "The Snoopy Problem."

Figure 12: Failed attempt to infringe on a Salvador Dalí painting

Figure 13: Successful attempt to infringe on Snoopy using Midjourney and Stable Diffusion

Figure 12 (top) and Figure 13 (bottom) are from the article Copyright Safety for Generative AI (Matthew Sag, 2023). Figure 12 shows a failed attempt to infringe on a Salvador Dalí painting; Figure 13 shows a successful attempt to infringe on Snoopy using the applications Midjourney and Stable Diffusion.

Output Liability

So, suppose a generative AI tool is used to create an infringing image, text, or other work. Who is liable? Copyright liability is the legal responsibility for violating a copyright holder's exclusive rights. In the case of AI, compiling the dataset, building and hosting the AI application, and using the application to produce outputs may not all be done by the same individual or entity. This leads us to questions such as:

  • If I create an image using an AI tool that references a character in copyright, am I responsible for that infringement? What if I was unaware of the existence of this character and the infringement was accidental?
  • If I built an AI application and someone else added copyrighted works to the dataset (for example, by uploading an image to ChatGPT) and created infringing works, are they liable, or am I?
  • If I build or operate an AI-powered application, how can I create technical barriers or terms of service that will protect me from liability?

The answers to these questions are still unclear. Just as when we are considering Fair Use, we can look to legal precedents to understand how courts might interpret these issues. For instance, Section 230 of the Communications Decency Act (1996) has been pointed to as a source of legal protection for AI-powered platform operators. This law shields companies from liability for actions taken on their platforms by users. Platforms like social media sites can host user-generated content without being held responsible as if they were the content's publisher. If someone uploads a movie to YouTube or copies and pastes a chapter of a book to Reddit, the platform is not liable for that infringement.

"No provider of user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider"

- Section 230, Communications Decency Act (1996)

By contrast, a landmark case involving the music-sharing service Napster in 2001 found that the company was liable for the infringing activities taking place on its platform, despite the fact that music was being distributed directly between third-party users.

Resources