If you cannot take AI-cited sources at face value and you (or the AI's programmers) cannot determine where the information is sourced from, how are you going to assess the validity of what AI is telling you? Here you should use the most important method of analysis available to you: lateral reading. Lateral reading means applying fact-checking techniques by leaving the AI output and consulting other sources to evaluate what the AI has given you in response to your prompt. You can think of this as “tabbed reading”: moving laterally away from the AI’s information to sources in other tabs rather than simply proceeding “vertically” down the page, relying on the AI’s response alone.
What does this process look like specifically with AI-based tools? Learn more in the sections below.
Lateral reading can (and should) be applied to all online sources, but AI output gives you fewer pieces of information to assess through lateral reading. While you can typically reach a conclusion about an online source by searching for its publication, funding organization, author, or title, none of these bits of information are available to you when assessing AI output. As a result, it is critical that you read several sources outside the AI tool to determine whether credible, non-AI sources can confirm the information the tool returned.
With AI, instead of asking “who’s behind this information?” we have to ask “who can confirm this information?” In the video above, lateral reading is applied to an online source with an organization name, logo, URL, and authors whose identities and motivations can be researched and fact-checked using other sources. AI content has none of these identifiers; its output is a composite of multiple unidentifiable sources. This means you must examine the factual claims in AI content and judge the validity of the claims themselves rather than the credibility of their source.
Since AI output is not a single source of information but is instead drawn from multiple sources that may be accurate or false, you will find it useful to break AI output apart into smaller pieces of information that can be evaluated independently of one another. For instance, let’s see what happens when we ask ChatGPT to write an essay on Jim Henson’s undergraduate studies at The Ohio State University.
Of course, many of you know that Jim Henson completed his undergraduate degree at UMD, not Ohio State, which illustrates a critical point about AI: it will take what you provide and try to answer your question as best it can, but it will NOT fact-check you or spot incorrect assumptions in your prompt. In paragraph 2 of the example above, the AI correctly states that Henson was involved in puppetry activities on campus, but is incorrect in placing those activities at Ohio State. A reader who assumed the AI output was accurate throughout would never catch their own error in prompting the AI with the wrong university.
The issues in this example are not limited to the flawed prompt. In paragraph 3, the AI describes a “Dr. Richard Lederer” as a mentor of Henson’s. A quick Google search reveals no mention of a Dr. Richard Lederer at Ohio State or UMD; the only Richard Lederer of note who turns up is an author, linguist, and speaker with no apparent connection to Henson. This is an example of the AI “hallucinating” an answer that sounds plausible but proves unfounded after some quick lateral reading.
Here's how to fact-check something you got from ChatGPT or a similar tool:

1. Break the output into individual factual claims (names, dates, institutions, citations).
2. Leave the tool: open new tabs and search for each claim in credible, non-AI sources.
3. Re-read your own prompt for mistaken assumptions the AI may simply have repeated back to you.
4. Treat any specific detail you cannot confirm elsewhere, especially names and citations, as a possible hallucination.
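If you have a long list of claims to check, you can even automate a first pass of step 2. The minimal Python sketch below is just an illustration, not part of the guide's workflow: it takes a hypothetical list of claims (drawn from the Henson example above) and runs each through Wikipedia's public search API, so you can quickly see whether any articles exist that mention the claim. A search hit only means “here is a source to read,” not “the claim is true”; you still have to read those sources laterally yourself.

```python
import requests

# Hypothetical claims pulled from an AI-generated essay (see the Henson example above).
claims = [
    "Jim Henson studied at the University of Maryland",
    "Richard Lederer mentored Jim Henson at Ohio State",
]

def lateral_search(claim, limit=3):
    """Search Wikipedia for a claim and return the top matching article titles.

    This is only a first pass: a match means a source exists to read,
    not that the claim is accurate.
    """
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "list": "search",
            "srsearch": claim,
            "srlimit": limit,
            "format": "json",
        },
        headers={"User-Agent": "lateral-reading-demo/0.1"},
        timeout=10,
    )
    resp.raise_for_status()
    return [result["title"] for result in resp.json()["query"]["search"]]

for claim in claims:
    titles = lateral_search(claim)
    print(f"Claim: {claim}")
    if titles:
        print("  Possible sources to read:", ", ".join(titles))
    else:
        print("  No matches found -- treat this claim as a possible hallucination.")
```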
To see these lateral reading strategies in action, check out the two videos below.
The first video covers fact-checking AI-generated text and links:
The second video covers fact-checking AI-generated citations and scholarly sources:
But just checking specific claims isn't all we need to do. Click the "next" button below to learn about critical thinking beyond fact-checking.