Artificial Intelligence (AI) and Information Literacy

Learn how AI works and how to spot the common errors AI tools tend to make. You'll also learn fact-checking and critical thinking strategies for AI, how to cite AI in an academic paper, and where to go to explore AI tools and issues in more depth.

AI and Information Literacy: Assessing Content

Lateral reading: your #1 analysis tool

If you cannot take AI-cited sources at face value, and neither you nor the AI's developers can determine where its information comes from, how can you assess the validity of what an AI tool tells you? This is where the most important analysis method available to you comes in: lateral reading. Lateral reading means applying fact-checking techniques by leaving the AI output and consulting other sources to evaluate what the AI produced in response to your prompt. You can think of this as “tabbed reading”: instead of proceeding “vertically” down the page of AI output alone, you move laterally away from it to other sources in other browser tabs.

Diagram representing the concept of lateral reading. A tall icon representing an online resource is labeled “vertical” and a horizontal rectangle labeled “lateral” overlaps it and an icon representing a second online resource.

What does this process look like specifically with AI-based tools? Learn more in the sections below.

Lateral reading and AI

Lateral reading can (and should) be applied to all online sources, but AI output gives you fewer pieces of information to assess. With a typical online source, you can investigate its publication, funding organization, author, or title; none of these identifiers is available to you when assessing AI output. As a result, it is critical to read several sources outside the AI tool to determine whether credible, non-AI sources can confirm the information the tool returned.

With AI, instead of asking “who’s behind this information?” we have to ask “who can confirm this information?” When lateral reading is applied to a conventional online source, there is an organization name, logo, URL, and authors whose identities and motivations can be researched and fact-checked through other sources. AI content has no such identifiers, and AI output is a composite of multiple unidentifiable sources. This means you must examine the factual claims in AI content and judge the validity of the claims themselves rather than the source of the claims.

AI output is not a single source of information; it is drawn from multiple sources that may be both factual and false. You will therefore find it useful to break AI output apart into smaller components of information that can be evaluated independently of one another. For instance, let’s see what happens when we ask ChatGPT to write an essay on Jim Henson’s undergraduate studies at The Ohio State University.

Screenshot of a ChatGPT response, which reads:
"2. Puppetry at Ohio State: During his time at The Ohio State University, Jim Henson also became involved in puppetry activities on campus. He joined the university's puppetry club, where he honed his puppeteering skills and explored storytelling through the art of puppetry. This extracurricular involvement provided him with practical experience and opportunities to experiment with various puppet designs and techniques [2].
3. Influence of Dr. Richard Lederer: One of the most significant influences on Jim Henson's studies at Ohio State was Dr. Richard Lederer, a professor in the Drama Department. Lederer was a passionate advocate for puppetry as a legitimate art form and encouraged Henson's interest in the medium. Under Lederer's guidance, Henson's passion for puppetry deepened, and he began to see it as a viable career path [3]."

Of course, many of you know that Jim Henson completed his undergraduate degree at UMD, not Ohio State. This illustrates a critical point about AI: it will take what you provide and try to answer your question as best it can, but it will NOT fact-check you or spot incorrect assumptions in your prompt. In paragraph 2 of the example above, the AI correctly states that Henson was involved in puppetry activities on campus, but it is wrong in placing those activities at Ohio State. If you assumed the AI output was accurate throughout, you would never notice your own error in prompting it with the incorrect university.

The issues in this example are not limited to the flawed prompt. In paragraph 3, the AI describes a Dr. Richard Lederer as a mentor of Henson’s. A quick Google search reveals no mention of a Dr. Richard Lederer at Ohio State or UMD; the only Richard Lederer of note who comes up is an author, linguist, and speaker with no apparent connection to Henson. This is an example of the AI “hallucinating”: producing a seemingly factual answer that sounds plausible but collapses under some quick lateral reading.
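That quick search is the whole lateral-reading move in miniature, and you can even script the mechanical part of it. Below is a minimal Python sketch that runs the same kind of check against Wikipedia's public MediaWiki search API, assuming the third-party requests library; the query comes from the Lederer example above, and the script only surfaces candidate articles for you to read laterally rather than rendering a verdict.

import requests

def wikipedia_search(query: str, limit: int = 5) -> list[str]:
    """Return the titles of the top Wikipedia articles matching a query."""
    response = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",   # standard MediaWiki query module
            "list": "search",    # full-text search
            "srsearch": query,
            "srlimit": limit,
            "format": "json",
        },
        timeout=10,
    )
    response.raise_for_status()
    return [hit["title"] for hit in response.json()["query"]["search"]]

# Does anything on Wikipedia connect this professor to Henson or puppetry?
for title in wikipedia_search('"Richard Lederer" Jim Henson puppetry'):
    print(title)

An empty or irrelevant result list does not prove the claim false, but it is a strong signal to keep reading laterally before trusting the AI.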

Instructions: tackle an AI fact-check

Diagram of a fact-checking process for AI, titled "AI Fact-Checking." It shows a linear flow chart with five steps, represented by a series of different-colored arrows:
Step 1: Break It Down. Break down the information. Identify specific claims.
Step 2: Search. Look for information supporting a specific claim. For specific info claims, try Google or Wikipedia. For confirming something exists, try Google Scholar or WorldCat.
Step 3: Analyze. Consider the info discovered in light of assumptions: What did your prompt assume? What did the AI assume? What perspective or agenda do your fact-check findings hold?
Step 4: Decide. What is true? What is misleading? What is factually incorrect? Can you update your prompt to address any errors?
Step 5: Repeat/Conclude. Repeat this process for each of the claims identified in the "Break It Down" stage. Make judgment calls on the validity of the claims and decide whether they are relevant and useful for your research.

Here's how to fact-check something you got from ChatGPT or a similar tool:

  1. Break down the information. Take a look at the response and see if you can isolate specific, searchable claims. This is called fractionation.
  2. Then it’s lateral reading time! Open a new tab and look for supporting pieces of information. Here are some good sources to start with:
    • When searching for specific pieces of information: Google results or Wikipedia
    • When seeing if something exists: Google Scholar, UMD Discover, or Wikipedia (see the sketch after this list for one way to automate this check)
    • Tip: Some things to watch out for: is the AI putting correct information in the wrong context (like when it said that Texas A&M’s tradition was a UMD one)? Is it attributing a fake article to a real author?
  3. Next, think more deeply about the assumptions being made here:
    • What did your prompt assume?
    • What did the AI assume?
    • Who would know things about this topic? Would they have a different perspective than what the AI is offering? Where could you check to find out?
  4. Finally, make a judgment call. What here is true, what is misleading, and what is factually incorrect? Can you re-prompt the AI to try to fix some of these errors? Can you dive deeper into one of the sources you found while fact-checking? Remember, you’re repeating this process for each of the claims the AI made: go back to your list from the first step and keep going!
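If you want to automate the "does this source exist?" part of step 2, here is a minimal sketch that pairs fractionation with a lookup against Crossref's public REST API (api.crossref.org), which indexes scholarly publications. It again assumes the requests library, and the citation in the claims list is a made-up stand-in for whatever your AI tool produced; treat the output as leads for lateral reading, not a verdict.

import requests

def crossref_lookup(citation: str, rows: int = 3) -> list[dict]:
    """Search Crossref for published works resembling a free-text citation."""
    response = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": rows},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["message"]["items"]

# Hypothetical citation produced by an AI tool; replace with your own claims.
claims = ["Henson, J. The Art of Puppetry at Ohio State. 1958."]

for claim in claims:
    print(f"Checking: {claim}")
    for item in crossref_lookup(claim):
        title = (item.get("title") or ["(no title)"])[0]
        authors = ", ".join(a.get("family", "?") for a in item.get("author", []))
        print(f"  candidate match: {title} by {authors or 'unknown authors'}")

If nothing close comes back, that is a red flag that the citation may be hallucinated; if a near match appears, you still need to open and read it before relying on it.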

For an example of this in action, take a look at the video at the bottom of the page.

Example: Let's fact-check an AI response!

Check out the videos below to see these lateral reading strategies in action!

The first video has information on fact-checking AI-generated text and links:

 

And the second video has advice on fact-checking AI-generated citations and scholarly sources:

But just checking specific claims isn't all we need to do. Click the "next" button below to learn about critical thinking beyond fact-checking.