Lateral reading is a technique you can use to critically evaluate resources. The short video below will explain how lateral reading works.
Source material for table: Hervieux, S. & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry. https://thelibrairy.wordpress.com/2020/03/11/the-robot-test
** Content copied and adapted with permission from Nazarbayev University's Artificial Intelligence LibGuide. **
Insufficient data or context - the model does not have enough data to draw on
Excessive generalization - too much generalization, resulting in bizarre and illogical connections between data
Contradictions - inconsistencies and discrepancies in the context and data
False facts - fake sources of information, false statements, fabricated content
Lack of nuance/context - lack of the requisite knowledge to reconcile meaning, or slight and subtle differences in language or context
Erosion of trust - the technology may be seen as untrustworthy because it can provide misleading data
Ethical concerns - potential perpetuation of misinformation
Impact on decision making - poor choices with potentially serious consequences
Legal implications - exposure of AI developers and users to potential legal liability
The response appears unusual - may indicate potential distortion
Lack of coherence in responses - contradictory or self-inconsistent answers
(Codecademy, 2024; Maggiolo, 2024; University of East London, 2024)
** Content copied with permission from Nazarbayev University's Artificial Intelligence LibGuide. **
Codecademy Team. (2024). Detecting hallucinations in generative AI. Codecademy. https://www.codecademy.com/article/detecting-hallucinations-in-generative-ai
Jennings, J. (2023, Oct. 10). AI in education: The problem with hallucinations. eSpark. https://www.esparklearning.com/blog/ai-in-education-the-problem-with-hallucinations/
Kingston, M. (2023). AI for education: Lesson 3: Hallucination detective: Digital and print learning packet. AI for Education. https://aiforeducation.io
Maggiolo, G. (2024, Oct. 5). Can AI experience hallucinations? How to identify false information generated by neural networks. Pigro. https://blog.pigro.ai/en/can-ai-experience-hallucinations-how-to-identify-false-information-generated-by-neural-networks
Marr, B. (2023, Mar. 22). ChatGPT: What are hallucinations and why are they a problem for AI systems. Bernard Marr & Co. https://bernardmarr.com/chatgpt-what-are-hallucinations-and-why-are-they-a-problem-for-ai-systems/
Metz, C. (2023, Apr. 4). What makes A.I. chatbots go wrong? The curious case of hallucinating software. The New York Times. https://www.nytimes.com/2023/03/29/technology/ai-chatbots-hallucinations.html
Ramponi, M. (2023, Aug. 3). How RLHF preference tuning works (and how things may go wrong). AssemblyAI. https://www.assemblyai.com/blog/how-rlhf-preference-model-tuning-works-and-how-things-may-go-wrong/
University of East London. (2024, Feb. 26). Using AI critically. Artificial intelligence (AI). https://libguides.uel.ac.uk/artificial-intelligence/using-ai-critically
Prompts are the instructions users give generative AI (GenAI) to guide its response, which could take the form of text, images, music, code, etc.
Prompts define the task for the GenAI and influence the outputs in terms of style, content, and detail. The more specific the instructions given to the tool, the better the outcome. Prompting the GenAI at each step keeps the tool on topic and makes it more likely to produce the most helpful output.
Prompt engineering is the intentional process of designing clear and actionable prompts or instructions for GenAI models, which guide and influence GenAI's responses. Formulating clear and effective prompts is crucial to obtaining the best possible output.
** Information adapted from University of Saskatchewan's Generative Artificial Intelligence guide and used with permission.**
Context: Provide background information to set the stage.
Action: Designate a specific task.
Result: Provide clear directions to specify the desired outcome.
Example: Share an example that can serve as a model for the desired response.
Concise: Use clear, simple, and succinct language.
Logical: Establish context by providing a logical structure of information within your prompt.
Explicit: Specify the format, length, sources, tone, audience, etc.
Adaptive: Iterate and ask follow-up questions.
Reflective: Evaluate the response(s) and consider how to refine the prompt to receive an improved output.
Character: Share the role you would like the AI to play, e.g., undergraduate student.
Request: Clearly explain the task you would like performed.
Examples: Share examples that can serve as models for the desired response.
Additions: Provide additional information to adjust the output.
Type of Output: Provide clear directions to specify the desired outcome.
Extras: Share additional information to enhance your prompt and, as a result, the output content.
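To see how framework components like these translate into an actual prompt, here is a minimal sketch in Python. The `build_prompt` function and all of its example text are illustrative assumptions, not part of any of the guides above; it simply assembles the Context/Action/Result/Example components into a single prompt string that could be pasted into any GenAI tool.

```python
def build_prompt(context: str, action: str, result: str, example: str) -> str:
    """Assemble Context/Action/Result/Example components into one prompt.

    This is a hypothetical helper for illustration; the component names
    mirror the framework described above.
    """
    return (
        f"{context}\n"
        f"Task: {action}\n"
        f"Desired output: {result}\n"
        f"Example of the style I want: {example}"
    )


prompt = build_prompt(
    context="You are helping a first-year biology student.",
    action="Summarize the process of photosynthesis.",
    result="A plain-language summary of no more than 150 words.",
    example="A short encyclopedia entry written for teenagers.",
)
print(prompt)
```

Writing the components separately, as above, makes it easy to iterate: to refine the output, you change one component (for instance, the desired result) and regenerate the prompt rather than rewriting it from scratch.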