
Artificial Intelligence


Evaluation Resources

Lateral reading

Lateral reading is a technique you can use to critically evaluate sources. The short video below explains how lateral reading works.

Lateral reading is:

  • Looking outside of the website to fact-check claims and verify information.
  • Seeking confirmation of claims from sources that are known to be reliable.
  • Asking who is funding the website.
  • Asking what the website's reputation is.

 

R.O.B.O.T. Method of Evaluating A.I. Resources

Reliability

  • How reliable is the information available about the AI technology?
  • If it’s not produced by the party responsible for the AI, what are the author’s credentials and potential biases?
  • If it is produced by the party responsible for the AI, how much information are they making available? 
    • Is information only partially available due to trade secrets?
    • How biased is the information that they produce?

Objective

  • What is the goal or objective of the use of AI?
  • What is the goal of sharing information about it?
    • To inform?
    • To convince?
    • To find financial support?

Bias

  • What could create bias in the AI technology?
  • Are there ethical issues associated with this?
  • Are bias or ethical issues acknowledged?
    • By the source of information?
    • By the party responsible for the AI?
    • By its users?

Owner

  • Who is the owner or developer of the AI technology?
  • Who is responsible for it?
    • Is it a private company?
    • The government?
    • A think tank or research group?
  • Who has access to it?
  • Who can use it?

Type

  • Which subtype of AI is it?
  • Is the technology theoretical or applied?
  • What kind of information system does it rely on?
  • Does it rely on human intervention? 

Source material for table: Hervieux, S. & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry. https://thelibrairy.wordpress.com/2020/03/11/the-robot-test

Hallucinations

What are "hallucinations" in AI? 

  • A result of algorithmic distortions that lead to the generation of false information, manipulated data, and imaginative outputs (Maggiolo, 2023).
  • The system provides an answer that is factually incorrect, irrelevant, or nonsensical because of limitations in its training data and architecture (Metz, 2023).
  • The generation of outputs that may sound plausible but are either factually incorrect or unrelated to the given context (Marr, 2023).

** Content copied and adapted with permission from Nazarbayev University's Artificial Intelligence LibGuide. **

Factors Affecting AI Hallucinations (Maggiolo, 2023)

  • Insufficient data or context - the model does not have enough data to draw on.
  • Excessive generalization - too much generalization, resulting in bizarre and illogical data connections.

Characteristics of AI Hallucinations (Jennings, 2023)

  • Contradictions - inconsistencies and discrepancies in the context and data.
  • False facts - fake sources of information, false statements, fabricated content.
  • Lack of nuance/context - lack of the requisite knowledge to reconcile meaning, or slight and subtle differences in language or context.

Why is it a problem? (Marr, 2023)

  • Erosion of trust - the technology may be seen as untrustworthy because it provides misleading data.
  • Ethical concerns - potential perpetuation of misinformation.
  • Impact on decision making - poor choices with serious consequences.
  • Legal implications - exposure of AI developers and users to potential legal liabilities.

Red flags to look out for (Maggiolo, 2023)

  • The response appears unusual - may indicate potential distortion.
  • Lack of coherence in responses - contradictory responses.

Ways to Address AI Hallucinations

  • Semantic analysis - examining language and context for inconsistencies.
  • Careful prompting - writing clear, specific prompts; on the developer side, this includes improving training data.
  • Refining prompts - using multiple prompts or iterative refinement.
  • Being skeptical of AI responses - questioning generative AI responses and asking for explanations, reasoning, sources, and evidence, which upholds transparency and credibility.
  • Using different AI models - using other GenAI tools and addressing biases by checking multiple perspectives.
  • Manual searching - verifying sources of information through lateral searching and double-checking information independently.
  • Humans-in-the-loop - some AI tools use reinforcement learning from human feedback (RLHF), which includes humans in fine-tuning data, context, and responses.

(Codecademy, 2024; Maggiolo, 2023; Univ. of East London libguide, 2024)
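Two of the strategies above (using different AI models and refining prompts) amount to asking the same question more than once and comparing the answers. The sketch below illustrates only the comparison step, with invented example answers; word overlap is a deliberately crude stand-in for semantic comparison, which in practice would be done by embeddings or by a human reader.

```python
# Sketch: flag low agreement between responses for manual verification.
# The answers here are invented examples, not real model output.

def word_overlap(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two responses."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa and not wb:
        return 1.0
    return len(wa & wb) / len(wa | wb)

def flag_for_review(responses: list[str], threshold: float = 0.3) -> bool:
    """Return True when any pair of responses diverges sharply."""
    for i in range(len(responses)):
        for j in range(i + 1, len(responses)):
            if word_overlap(responses[i], responses[j]) < threshold:
                return True  # low agreement: verify the claim manually
    return False

answers = [
    "The Eiffel Tower is in Paris France",
    "The Eiffel Tower is located in Paris France",
    "The Eiffel Tower stands in Rome and was built in 1200",
]
print(flag_for_review(answers))  # the outlier answer triggers a review flag
```

A flagged disagreement does not prove a hallucination; it only marks a claim that should be checked manually, for example by lateral reading.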


Codecademy Team. (2024). Detecting hallucinations in generative AI. Codecademy. https://www.codecademy.com/article/detecting-hallucinations-in-generative-ai

Jennings, J. (2023, Oct. 10). AI in education: The problem with hallucinations. eSpark. https://www.esparklearning.com/blog/ai-in-education-the-problem-with-hallucinations/

Kingston, M. (2023). AI for education: Lesson 3: Hallucination detective: Digital and print learning packet. AI for education. https://aiforeducation.io

Maggiolo, G. (2024, Oct. 5). Can AI experience hallucinations? How to identify false information generated by neural networks. Pigro. https://blog.pigro.ai/en/can-ai-experience-hallucinations-how-to-identify-false-information-generated-by-neural-networks

Marr, B. (2023, Mar. 22). ChatGPT: What are hallucinations and why are they a problem for AI systems. Bernard Marr & Co. https://bernardmarr.com/chatgpt-what-are-hallucinations-and-why-are-they-a-problem-for-ai-systems/

Metz, C. (2023, Apr. 4). What makes A.I. chatbots go wrong? The curious case of hallucinating software. The New York Times. https://www.nytimes.com/2023/03/29/technology/ai-chatbots-hallucinations.html

Ramponi, M. (2023, Aug. 3). How RLHF preference tuning works (and how things may go wrong). AssemblyAI. https://www.assemblyai.com/blog/how-rlhf-preference-model-tuning-works-and-how-things-may-go-wrong/

University of East London. (2024, Feb. 26). Using AI critically. Artificial intelligence (AI). https://libguides.uel.ac.uk/artificial-intelligence/using-ai-critically

** Content copied and adapted with permission from Nazarbayev University's Artificial Intelligence LibGuide. **

Crafting AI Prompts - Best Practices

What are prompts?

Prompts are the instructions users give generative AI (GenAI) in order to guide GenAI's response, which could take the form of text, images, music, code, etc.

Why are prompts important?

Prompts define the task for the GenAI and influence the outputs in terms of style, content, and detail. The more specific the instructions given to the tool, the better the outcome. Prompting GenAI at each step keeps the tool on topic and makes it more likely to provide the most helpful output.

What is prompt engineering?

Prompt engineering is the intentional process of designing clear and actionable prompts or instructions for GenAI models, which guide and influence GenAI's responses. Formulating clear and effective prompts is crucial to obtaining the best possible output.

** Information adapted from University of Saskatchewan's Generative Artificial Intelligence guide and used with permission.**

The C.A.R.E. Framework

Context: Provide background information to set the stage.
Action: Designate a specific task.
Result: Provide clear directions to specify the desired outcome.
Example: Share an example that can serve as a model for the desired response.
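As an illustration, the four components above (Context, Action, Result, Example) can be combined into a single prompt. The topic and wording in this sketch are invented for illustration, not taken from the guide.

```python
# Hypothetical example: assembling a prompt from the four components above.
care = {
    "Context": "I am a first-year biology student making a study guide.",
    "Action": "Summarize the process of photosynthesis.",
    "Result": "Use plain language, at most 150 words, as a bulleted list.",
    "Example": "Match the tone of: 'Plants turn sunlight into food.'",
}
prompt = "\n".join(f"{part}: {text}" for part, text in care.items())
print(prompt)
```

The same structure works just as well typed directly into a chat interface; labeling each component simply makes it easy to see which part to refine when the output misses the mark.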

The C.L.E.A.R. Framework

Concise: Use clear, simple, and succinct language.
Logical: Establish context by providing a logical structure of information within your prompt.
Explicit: Specify the format, length, sources, tone, audience, etc.
Adaptive: Iterate and ask follow-up questions.
Reflective: Evaluate the response(s) and contemplate how to refine the prompt to receive an improved output.

The C.R.E.A.T.E. Framework

Character: Share the role you would like the AI to play, e.g., an undergraduate student.
Request: Clearly explain the task you would like performed.
Examples: Share examples that can serve as models for the desired response.
Additions: Provide additional information to adjust the output.
Type of Output: Provide clear directions to specify the desired outcome.
Extras: Share additional information to enhance your prompt and, as a result, the output content.

Master the Perfect ChatGPT Prompt Formula (in just 8 minutes) video by Jeff Su