
6.6. Key Concepts & References

David Gardner

Key Concepts

  • Lateral Reading: Instead of reading a source in isolation, open new tabs to check what trusted sources say about the same claim, author, or publication. This helps you compare and verify rather than just accept.
  • Prompt for Sources: Use prompting techniques from Module 2 to verify AI output. Ask: “What is your source for that claim?” or “Provide links or citations to the original research.” This forces accountability in the generated text.
  • “Trust but Verify” Mindset: Don’t take anything at face value, especially content that provokes strong emotion. Before accepting or sharing, pause and fact-check with a reliable search or reverse image search.
  • Always Ask “Why?”: Consider motive and purpose. Is the content intended to inform, persuade, inflame, sell something, or drive clicks? The underlying aim often reveals deeper biases or manipulations.
  • Hallucination: When the AI produces plausible-sounding but false statements. These errors can be convincing enough to mislead. Similarly, biased training data or incentives can skew what AI “remembers” and how it frames an answer.
  • Deepfakes: Synthetic images, audio, or video created by AI. Deepfakes can be used to distort reality, impersonate people, or spread misinformation. Distinguishing them from real media requires skepticism, technical tools, and cross-referencing with trusted sources.
  • Browser Hygiene: Habits that keep your browsing environment clean and secure, such as using reader view for focus, container tabs or profiles for separation, and periodic permission audits of browser extensions.

References

Grother, P., Ngan, M., & Hanaoka, K. (2019). Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects (NISTIR 8280). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.IR.8280

Ifenthaler, D., & Schumacher, C. (2016). Student perceptions of privacy principles for learning analytics. Educational Technology Research and Development, 64(5), 923–938.

McDonald, N., Johri, A., et al. (2025). Generative AI in higher ed: Evidence from an analysis of institutional policies & guidelines. Computers in Human Behavior: Artificial Humans, 3. https://doi.org/10.1016/j.chbah.2025.100121


License


Basic AI Literacy Copyright © 2025 is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.