
AI Tools: ChatGPT, Copilot, and Large Language Models

Check AI tools for credibility

How to make sure generative AI output is useful.

Evaluating all information for credibility is essential, regardless of where you find it. This is especially true for generative AI responses. Many tools, checklists, and strategies exist to help evaluate sources, but none of them offers a black-and-white test for deciding whether a source is credible and should be used. The academic and professional risks are too great not to check the computer's "work."

Always consider AI results or responses to be a starting point.  Here are two strategies for evaluating information provided by generative AI tools:

Lateral reading

Don't take what ChatGPT or Copilot says at face value. Look to see whether other reliable sources contain the same information and can confirm it. This could be as simple as reviewing the citations in a Wikipedia entry on the topic or doing a Google search to confirm that a person mentioned actually exists. Consulting multiple sources is the essence of lateral reading and helps you avoid the bias that comes from relying on a single source.

Watch Crash Course's "Check Yourself with Lateral Reading" video (14 min) to learn more.

Verify citations

If a generative AI tool provides a reference, confirm that the source exists. Try copying the article title or citation into a search tool like Google Scholar or the Library's OneSearch. Do a Google search for the lead author.

If the source is real, check that it contains what ChatGPT or Copilot says it does. Read the source or its abstract.  

Generative AI can provide references. However, these references may not be where the information actually came from and, more importantly, may not refer to real sources. ChatGPT can easily hallucinate citations that sound plausible. This can be an issue with other generative AI tools, like Copilot, as well.

For example, ChatGPT provided this citation and summary in response to a prompt about needing an academic journal article on the topic of amphetamine use and social determinants of health:

"Here's an example of a peer-reviewed journal article on amphetamine abuse and its intersections with social determinants of health:
Stoové, M. A., Dietze, P. M., Jolley, D., Fry, C. L., Brewer, N., & Hellard, M. E. (2008). Increasing access to sterile injecting equipment and syringe exchange in a rural area with a high prevalence of bloodborne viruses. Australian Journal of Rural Health, 16(2), 99-106. 
This article describes a study conducted in a rural area of Australia with a high prevalence of bloodborne viruses, including those associated with amphetamine use..." OpenAI. (2023, February 17). [ChatGPT response to a prompt about locating an academic journal article]. https://chat.openai.com/

Although the summary sounds plausible and the citation looks realistic, this article does not exist. The journal exists, as does the lead author. However, Stoové has not published in this journal.

 
