
6.4. Navigating the New Information Landscape with AI Media Literacy

David Gardner

Beyond the data we provide, we must also critically evaluate the information produced by AI. As AI-generated content becomes more ubiquitous and harder to distinguish from human writing or media, our traditional media literacy toolkit must evolve. AI can produce plausible, even persuasive output that has no faithful grounding in truth. That’s why AI media literacy is no longer optional; it’s essential.

Core Skills for AI Media Literacy

  • Practice Lateral Reading: Instead of reading a source in isolation, open new tabs to check what trusted sources say about the same claim, author, or publication. This helps you compare and verify rather than just accept.
  • Prompt for Sources: Use prompting techniques from Module 2 to verify AI output. Ask: “What is your source for that claim?” or “Provide links or citations to the original research.” This forces accountability in the generated text.
  • Adopt a “Trust but Verify” Mindset: Don’t take anything at face value, especially content that provokes strong emotion. Before accepting or sharing, pause and fact-check with a reliable search or reverse image search.
  • Always Ask “Why?”: Consider motive and purpose. Is the content intended to inform, persuade, inflame, sell something, or drive clicks? The underlying aim often reveals deeper biases or manipulations.

Dealing with Hallucinations, Bias & Deepfakes

One of the trickiest challenges in AI media is the phenomenon of “hallucination,” when the AI produces plausible-sounding but false statements. These errors can seem convincing enough to mislead. Similarly, biased training data or incentives can skew what AI “remembers” and how it frames an answer.

Deepfakes—synthetic images, audio, or video created by AI—raise further challenges. They can be used to distort reality, impersonate people, or spread misinformation. Distinguishing them from real media requires skepticism, technical tools, and cross-referencing with trusted sources.

Refining Trust in AI Outputs

An AI output should rarely be treated as final truth. Instead, use it as a starting point. Cross-check, ask follow-up questions, examine the logic, and always inspect the evidence. In many ways, AI content requires a new layer of “source hygiene.” Just like you wouldn’t blindly trust an obscure article, you shouldn’t treat AI-generated content as definitive without scrutiny.

Video Resource

Below is a useful video to help you understand the challenges of AI and media literacy:

📚 Weekly Reflection Journal

Reflection Prompt: Recall an AI-generated text or image you encountered recently. What made it seem credible (style, tone, sources)? What questions would you now ask to test its accuracy or bias?

Self-Check: Fact Check the AI Claim

Drag each statement into the correct category: True, False, or Needs Verification.

Looking Ahead

In the next chapter, we will turn from media literacy to ethical digital citizenship. You’ll learn how to build your own framework for safe and equitable participation—protecting both your data and your intellectual agency in an AI-driven world.

License


Basic AI Literacy Copyright © 2025 is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.