{"id":106,"date":"2025-09-02T17:07:49","date_gmt":"2025-09-02T17:07:49","guid":{"rendered":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/?post_type=chapter&#038;p=106"},"modified":"2025-10-13T13:57:43","modified_gmt":"2025-10-13T13:57:43","slug":"chapter-2-5-the-six-element-prompting-model","status":"publish","type":"chapter","link":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/chapter\/chapter-2-5-the-six-element-prompting-model\/","title":{"raw":"2.5. Prompting Pitfalls and How to Avoid Them","rendered":"2.5. Prompting Pitfalls and How to Avoid Them"},"content":{"raw":"Not all prompts lead to useful results. Even when you know the core strategies, it\u2019s easy to slip into habits that cause the AI to misfire. These pitfalls matter for two reasons: first, they waste time and produce weak output; second, they can introduce ethical problems that undermine the integrity of your work (Morris, 2024).\r\n<h2>Common Pitfalls and Fixes<\/h2>\r\n<strong>1. Vagueness<\/strong>\r\n<em>Pitfall:<\/em> \u201cTell me about French history.\u201d\r\n<em>Problem:<\/em> The model generates a generic, unhelpful overview.\r\n<em>Fix:<\/em> Specify the topic, scope, and desired format.\r\nExample: <em>\u201cSummarize three key causes of the French Revolution in 200 words, written for high school students.\u201d<\/em>\r\n\r\n<strong>2. Overstuffing<\/strong>\r\n<em>Pitfall:<\/em> \u201cAct as a teacher, historian, therapist, and comedian. Give me an essay, a table, and a song.\u201d\r\n<em>Problem:<\/em> Too many roles and outputs confuse the model.\r\n<em>Fix:<\/em> Break complex tasks into smaller prompts and build step by step.\r\n\r\n<strong>3. 
Missing Context<\/strong>\r\n<em>Pitfall:<\/em> \u201cWrite a report about patient outcomes.\u201d\r\n<em>Problem:<\/em> Lacks details about audience, goals, or background.\r\n<em>Fix:<\/em> Add constraints.\r\nExample: <em>\u201cWrite a one-page report on occupational therapy outcomes for first-year graduate students.\u201d<\/em>\r\n\r\n<strong>4. Assuming Human Understanding<\/strong>\r\n<em>Pitfall:<\/em> \u201cYou know what I mean, just write it up.\u201d\r\n<em>Problem:<\/em> The AI doesn\u2019t infer unstated intentions\u2014it only predicts based on patterns.\r\n<em>Fix:<\/em> Spell out your expectations clearly.\r\n\r\n<strong>5. Hallucination Traps<\/strong>\r\n<em>Pitfall:<\/em> \u201cCite three studies proving that cats reduce exam stress.\u201d\r\n<em>Problem:<\/em> AI fabricates references when pushed to produce specifics it hasn\u2019t seen.\r\n<em>Fix:<\/em> Ask it to <em>suggest keywords<\/em> or <em>summarize research trends<\/em>, then fact-check in a scholarly database.\r\n\r\n<strong>6. Ethical Blind Spots<\/strong>\r\n<em>Pitfall:<\/em> Prompts that reinforce stereotypes (\u201cWrite a story about a nurse\u2014she is caring and kind\u201d).\r\n<em>Problem:<\/em> Such prompts can amplify biases.\r\n<em>Fix:<\/em> Use neutral or varied wording, and ask the AI to check for unintended bias.\r\n\r\n<strong>7. Over-Reliance<\/strong>\r\n<em>Pitfall:<\/em> Always asking the AI to do the thinking first.\r\n<em>Problem:<\/em> Weakens your own creativity and critical analysis.\r\n<em>Fix:<\/em> Use AI drafts as a starting point, but refine with your own expertise and voice.\r\n\r\n<!-- Sidebar -->\r\n\r\n<details class=\"pb-details\"><summary>\ud83d\udcd6 Sidebar: Why Pitfalls Matter (click to expand)<\/summary>\r\n<div style=\"border: 2px solid #1565c0;padding: 10px;background-color: #e3f2fd;border-radius: 6px;margin-top: 10px\">\r\n\r\nPrompts are not just technical instructions. 
They reflect your priorities and shape the kind of knowledge AI systems produce. Sloppy prompting doesn\u2019t just waste your time\u2014it can introduce bias, misinform readers, or strip away your own critical engagement.\r\n\r\nResponsible prompting means treating the AI as a partner, not an oracle. Think of each interaction as co-authoring: you guide the structure, context, and values that go into the output.\r\n\r\n<\/div>\r\n<\/details>\r\n<h2>\ud83d\udcda Weekly Reflection Journal<\/h2>\r\n<div style=\"border: 2px solid #2e7d32;padding: 10px;background-color: #f1f8e9;border-radius: 6px\">\r\n\r\n<strong>Reflection Prompt: <\/strong>Take one of your recent prompts that didn\u2019t work well.\r\n<ul>\r\n \t<li>Identify which of the pitfalls it fell into.<\/li>\r\n \t<li>Revise it using the fixes described above.<\/li>\r\n \t<li>Compare the outputs: how did clarity, context, or ethical framing change the result?<\/li>\r\n<\/ul>\r\n<\/div>\r\n<h2>Video Resource<\/h2>\r\nFor a concise walkthrough of common prompting mistakes in practice, see this video tutorial:\r\n\r\n[embed]https:\/\/www.youtube.com\/embed\/jHv63Uvk5VA[\/embed]\r\n<h2>Looking Ahead<\/h2>\r\nNext, in <em>2.6 Applied Practice Scenarios<\/em>, you will put these strategies to work through realistic, discipline-flexible exercises. You will design, test, and refine prompts for analysis, creation, and feedback; compare outputs; and note what improves when you apply structure and retrieval. Bring one task from your own context to use as the core scenario.","rendered":"<p>Not all prompts lead to useful results. Even when you know the core strategies, it\u2019s easy to slip into habits that cause the AI to misfire. These pitfalls matter for two reasons: first, they waste time and produce weak output; second, they can introduce ethical problems that undermine the integrity of your work (Morris, 2024).<\/p>\n<h2>Common Pitfalls and Fixes<\/h2>\n<p><strong>1. 
Vagueness<\/strong><br \/>\n<em>Pitfall:<\/em> \u201cTell me about French history.\u201d<br \/>\n<em>Problem:<\/em> The model generates a generic, unhelpful overview.<br \/>\n<em>Fix:<\/em> Specify the topic, scope, and desired format.<br \/>\nExample: <em>\u201cSummarize three key causes of the French Revolution in 200 words, written for high school students.\u201d<\/em><\/p>\n<p><strong>2. Overstuffing<\/strong><br \/>\n<em>Pitfall:<\/em> \u201cAct as a teacher, historian, therapist, and comedian. Give me an essay, a table, and a song.\u201d<br \/>\n<em>Problem:<\/em> Too many roles and outputs confuse the model.<br \/>\n<em>Fix:<\/em> Break complex tasks into smaller prompts and build step by step.<\/p>\n<p><strong>3. Missing Context<\/strong><br \/>\n<em>Pitfall:<\/em> \u201cWrite a report about patient outcomes.\u201d<br \/>\n<em>Problem:<\/em> Lacks details about audience, goals, or background.<br \/>\n<em>Fix:<\/em> Add constraints.<br \/>\nExample: <em>\u201cWrite a one-page report on occupational therapy outcomes for first-year graduate students.\u201d<\/em><\/p>\n<p><strong>4. Assuming Human Understanding<\/strong><br \/>\n<em>Pitfall:<\/em> \u201cYou know what I mean, just write it up.\u201d<br \/>\n<em>Problem:<\/em> The AI doesn\u2019t infer unstated intentions\u2014it only predicts based on patterns.<br \/>\n<em>Fix:<\/em> Spell out your expectations clearly.<\/p>\n<p><strong>5. Hallucination Traps<\/strong><br \/>\n<em>Pitfall:<\/em> \u201cCite three studies proving that cats reduce exam stress.\u201d<br \/>\n<em>Problem:<\/em> AI fabricates references when pushed to produce specifics it hasn\u2019t seen.<br \/>\n<em>Fix:<\/em> Ask it to <em>suggest keywords<\/em> or <em>summarize research trends<\/em>, then fact-check in a scholarly database.<\/p>\n<p><strong>6. 
Ethical Blind Spots<\/strong><br \/>\n<em>Pitfall:<\/em> Prompts that reinforce stereotypes (\u201cWrite a story about a nurse\u2014she is caring and kind\u201d).<br \/>\n<em>Problem:<\/em> Such prompts can amplify biases.<br \/>\n<em>Fix:<\/em> Use neutral or varied wording, and ask the AI to check for unintended bias.<\/p>\n<p><strong>7. Over-Reliance<\/strong><br \/>\n<em>Pitfall:<\/em> Always asking the AI to do the thinking first.<br \/>\n<em>Problem:<\/em> Weakens your own creativity and critical analysis.<br \/>\n<em>Fix:<\/em> Use AI drafts as a starting point, but refine with your own expertise and voice.<\/p>\n<p><!-- Sidebar --><\/p>\n<details class=\"pb-details\">\n<summary>\ud83d\udcd6 Sidebar: Why Pitfalls Matter (click to expand)<\/summary>\n<div style=\"border: 2px solid #1565c0;padding: 10px;background-color: #e3f2fd;border-radius: 6px;margin-top: 10px\">\n<p>Prompts are not just technical instructions. They reflect your priorities and shape the kind of knowledge AI systems produce. Sloppy prompting doesn\u2019t just waste your time\u2014it can introduce bias, misinform readers, or strip away your own critical engagement.<\/p>\n<p>Responsible prompting means treating the AI as a partner, not an oracle. 
Think of each interaction as co-authoring: you guide the structure, context, and values that go into the output.<\/p>\n<\/div>\n<\/details>\n<h2>\ud83d\udcda Weekly Reflection Journal<\/h2>\n<div style=\"border: 2px solid #2e7d32;padding: 10px;background-color: #f1f8e9;border-radius: 6px\">\n<p><strong>Reflection Prompt: <\/strong>Take one of your recent prompts that didn\u2019t work well.<\/p>\n<ul>\n<li>Identify which of the pitfalls it fell into.<\/li>\n<li>Revise it using the fixes described above.<\/li>\n<li>Compare the outputs: how did clarity, context, or ethical framing change the result?<\/li>\n<\/ul>\n<\/div>\n<h2>Video Resource<\/h2>\n<p>For a concise walkthrough of common prompting mistakes in practice, see this video tutorial:<\/p>\n<p><iframe loading=\"lazy\" id=\"oembed-1\" title=\"Complete ChatGPT Tutorial - [Become A Power User in 30 Minutes]\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/jHv63Uvk5VA?feature=oembed&#38;rel=0\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"><\/iframe><\/p>\n<h2>Looking Ahead<\/h2>\n<p>Next, in <em>2.6 Applied Practice Scenarios<\/em>, you will put these strategies to work through realistic, discipline-flexible exercises. You will design, test, and refine prompts for analysis, creation, and feedback; compare outputs; and note what improves when you apply structure and retrieval. 
Bring one task from your own context to use as the core scenario.<\/p>\n","protected":false},"author":6,"menu_order":5,"template":"","meta":{"pb_show_title":"on","pb_short_title":"","pb_subtitle":"","pb_authors":["avas"],"pb_section_license":"cc-by-nc-sa","_links_to":"","_links_to_target":""},"chapter-type":[49],"contributor":[63],"license":[57],"class_list":["post-106","chapter","type-chapter","status-publish","hentry","chapter-type-numberless","contributor-avas","license-cc-by-nc-sa"],"part":30,"jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters\/106","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters"}],"about":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/wp\/v2\/types\/chapter"}],"author":[{"embeddable":true,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/wp\/v2\/users\/6"}],"version-history":[{"count":11,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters\/106\/revisions"}],"predecessor-version":[{"id":772,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters\/106\/revisions\/772"}],"part":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/parts\/30"}],"metadata":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters\/106\/metadata\/"}],"wp:attachment":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/wp\/v2\/media?parent=106"}],"wp:term":[{"taxonomy":"chapter-type","embeddable":true,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapter-type?post=106"},{"taxonomy":"contributor","embeddable":true,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/wp\/v2\/contributor?post=106"},{"taxonomy":"license","embeddable":true,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/wp\/v2\/licen
se?post=106"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}