{"id":119,"date":"2025-09-02T17:30:33","date_gmt":"2025-09-02T17:30:33","guid":{"rendered":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/?post_type=chapter&#038;p=119"},"modified":"2025-10-13T13:57:44","modified_gmt":"2025-10-13T13:57:44","slug":"chapter-2-8-key-concepts","status":"publish","type":"chapter","link":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/chapter\/chapter-2-8-key-concepts\/","title":{"raw":"2.7. Key Concepts &amp; References","rendered":"2.7. Key Concepts &amp; References"},"content":{"raw":"<h2>Key Concepts<\/h2>\r\n<ul>\r\n \t<li><b>Prompt engineering:<\/b> The practice of crafting effective inputs to guide AI toward useful, accurate, and context-aware outputs.<\/li>\r\n \t<li><b>Weak prompts:<\/b> Vague, unconstrained, and context-free inputs that tend to produce generic or off-target AI responses.<\/li>\r\n \t<li><b>Strong prompts:<\/b> Precise, context-rich inputs that provide the AI with clarity and constraints, considering audience, format, scope, and tone.<\/li>\r\n \t<li><b>Six-Element Prompting Model:<\/b> A structured framework for crafting prompts that breaks the input into six components: Role\/Persona, Task\/Instruction, Context\/Constraints, Input Data, Output Format, and Tone\/Style.<\/li>\r\n \t<li><b>Chain-of-Thought (CoT) Prompting:<\/b> An advanced technique where the AI is asked to show its reasoning process step-by-step to improve the accuracy of its conclusions.<\/li>\r\n \t<li><b>Least-to-Most Prompting:<\/b> An advanced technique where a complex problem is broken down into a sequence of simpler subproblems that the AI solves in order.<\/li>\r\n \t<li><b>Self-Consistency in Reasoning Paths:<\/b> An advanced technique that involves generating multiple reasoning chains for the same problem and selecting the answer that appears most often to reduce errors.<\/li>\r\n \t<li><b>Tree of Thoughts (ToT):<\/b> An advanced technique that allows the AI to explore and evaluate multiple branching reasoning paths simultaneously, which is 
useful for planning or strategy tasks.<\/li>\r\n \t<li><b>Prompt Chaining:<\/b> A multi-turn technique where a complex task is broken down into a series of prompts, with each step building on the previous one.<\/li>\r\n \t<li><b>Retrieval-Augmented Generation (RAG):<\/b> A technique where the AI is provided with external documents or sources to ground its response in factual and current information, helping to avoid hallucinations.<\/li>\r\n \t<li><b>Zero-Shot Prompting:<\/b> The simplest type of prompt, which asks a question or gives an instruction without providing any examples or extra context.<\/li>\r\n \t<li><b>Few-Shot Prompting:<\/b> A prompt that includes a few examples to guide the AI\u2019s output style, tone, or structure.<\/li>\r\n \t<li><b>Role-Based Prompting:<\/b> A technique that instructs the AI to act as a specific professional, persona, or viewpoint to anchor its terminology and assumptions.<\/li>\r\n \t<li><b>Hallucinations:<\/b> A common AI pitfall where the model generates incorrect, invented, or fabricated content.<\/li>\r\n<\/ul>\r\n<h2>References<\/h2>\r\n<p class=\"hanging-indent\">Bozkurt, A. (2024). Tell me your prompts and I will make them true: Prompt engineering as a new form of generative art and science. <em>Open Praxis, 16<\/em>(2), 111\u2013118. <a href=\"https:\/\/doi.org\/10.55982\/openpraxis.16.2.661\">https:\/\/doi.org\/10.55982\/openpraxis.16.2.661<\/a><\/p>\r\n<p class=\"hanging-indent\">Giray, L. (2023). Prompt engineering with ChatGPT: A guide for academic writers. <em>Annals of Biomedical Engineering, 51<\/em>(12), 2629\u20132633. <a href=\"https:\/\/doi.org\/10.1007\/s10439-023-03272-4\">https:\/\/doi.org\/10.1007\/s10439-023-03272-4<\/a><\/p>\r\n<p class=\"hanging-indent\">Morris, M. R. (2024). Prompting considered harmful. <em>Communications of the ACM, 67<\/em>(3), 45\u201353. <a href=\"https:\/\/doi.org\/10.1145\/3673861\">https:\/\/doi.org\/10.1145\/3673861<\/a><\/p>\r\n<p class=\"hanging-indent\">PromptingGuide.ai. (2024). 
<em>Tree of Thoughts (ToT)<\/em>. Retrieved October 2025, from <a href=\"https:\/\/www.promptingguide.ai\/techniques\/tot\" target=\"_new\">https:\/\/www.promptingguide.ai\/techniques\/tot<\/a><\/p>\r\n<p class=\"hanging-indent\">Sahoo, P., Singh, A. K., Saha, S., Jain, V., Mondal, S., &amp; Chadha, A. (2024). <em>A systematic survey of prompt engineering in large language models: Techniques and applications<\/em>. arXiv. <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2402.07927\">https:\/\/doi.org\/10.48550\/arXiv.2402.07927<\/a><\/p>\r\n<p class=\"hanging-indent\">Song, Y., He, Y., Zhao, X., Gu, H., Jiang, D., Yang, H., Fan, L., &amp; Yang, Q. (2024). A communication theory perspective on prompting engineering methods for large language models. <em>Journal of Computer Science and Technology, 39<\/em>(4), 984\u20131004. <a href=\"https:\/\/doi.org\/10.1007\/s11390-024-4058-8\">https:\/\/doi.org\/10.1007\/s11390-024-4058-8<\/a><\/p>\r\n<p class=\"hanging-indent\">UNESCO. (n.d.). <em>Ethics of artificial intelligence: Recommendation on the ethics of artificial intelligence<\/em>. UNESCO. Retrieved October 2025, from <a href=\"https:\/\/www.unesco.org\/en\/artificial-intelligence\/recommendation-ethics\" target=\"_new\">https:\/\/www.unesco.org\/en\/artificial-intelligence\/recommendation-ethics<\/a><\/p>\r\n<p class=\"hanging-indent\">Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E., Narang, S., Chowdhery, A., &amp; Zhou, D. (2023). <em>Self-consistency improves chain of thought reasoning in language models<\/em> (Version 4) [Preprint]. arXiv. <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2203.11171\">https:\/\/doi.org\/10.48550\/arXiv.2203.11171<\/a><\/p>\r\n<p class=\"hanging-indent\">Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., &amp; Zhou, D. (2022). <em>Chain-of-thought prompting elicits reasoning in large language models<\/em>. arXiv. <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2201.11903\" target=\"_blank\" rel=\"noopener\">https:\/\/doi.org\/10.48550\/arXiv.2201.11903<\/a><\/p>\r\n<p class=\"hanging-indent\">Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., Yogatama, D., Bosma, M., Zhou, D., Metzler, D., Chi, E. H., Hashimoto, T., Vinyals, O., Liang, P., Dean, J., &amp; Fedus, W. (2022). <em>Emergent abilities of large language models<\/em>. arXiv. <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2206.07682\">https:\/\/doi.org\/10.48550\/arXiv.2206.07682<\/a><\/p>\r\n<p class=\"hanging-indent\">White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., Elnashar, A., Spencer-Smith, J., &amp; Schmidt, D. C. (2023). <em>A prompt pattern catalog to enhance prompt engineering with ChatGPT<\/em>. arXiv. <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2302.11382\">https:\/\/doi.org\/10.48550\/arXiv.2302.11382<\/a><\/p>\r\n<p class=\"hanging-indent\">Zamfirescu-Pereira, J. D., Wong, R. Y., Hartmann, B., &amp; Yang, Q. (2023). Why Johnny can\u2019t prompt: How non-AI experts try (and fail) to design LLM prompts. <em>CHI Conference on Human Factors in Computing Systems (CHI \u201923)<\/em>, 1\u201321. ACM. <a href=\"https:\/\/doi.org\/10.1145\/3544548.3581388\" target=\"_new\">https:\/\/doi.org\/10.1145\/3544548.3581388<\/a><\/p>\r\n<p class=\"hanging-indent\">Zhou, D., Sch\u00e4rli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., &amp; Chi, E. (2023). <em>Least-to-most prompting enables complex reasoning in large language models<\/em> (Version 3) [Preprint]. arXiv. 
<a href=\"https:\/\/doi.org\/10.48550\/arXiv.2205.10625\">https:\/\/doi.org\/10.48550\/arXiv.2205.10625<\/a><\/p>","rendered":"<h2>Key Concepts<\/h2>\n<ul>\n<li><b>Prompt engineering:<\/b> The practice of crafting effective inputs to guide AI toward useful, accurate, and context-aware outputs.<\/li>\n<li><b>Weak prompts:<\/b> Vague, unconstrained, and context-free inputs that tend to produce generic or off-target AI responses.<\/li>\n<li><b>Strong prompts:<\/b> Precise, context-rich inputs that provide the AI with clarity and constraints, considering audience, format, scope, and tone.<\/li>\n<li><b>Six-Element Prompting Model:<\/b> A structured framework for crafting prompts that breaks the input into six components: Role\/Persona, Task\/Instruction, Context\/Constraints, Input Data, Output Format, and Tone\/Style.<\/li>\n<li><b>Chain-of-Thought (CoT) Prompting:<\/b> An advanced technique where the AI is asked to show its reasoning process step-by-step to improve the accuracy of its conclusions.<\/li>\n<li><b>Least-to-Most Prompting:<\/b> An advanced technique where a complex problem is broken down into a sequence of simpler subproblems that the AI solves in order.<\/li>\n<li><b>Self-Consistency in Reasoning Paths:<\/b> An advanced technique that involves generating multiple reasoning chains for the same problem and selecting the answer that appears most often to reduce errors.<\/li>\n<li><b>Tree of Thoughts (ToT):<\/b> An advanced technique that allows the AI to explore and evaluate multiple branching reasoning paths simultaneously, which is useful for planning or strategy tasks.<\/li>\n<li><b>Prompt Chaining:<\/b> A multi-turn technique where a complex task is broken down into a series of prompts, with each step building on the previous one.<\/li>\n<li><b>Retrieval-Augmented Generation (RAG):<\/b> A technique where the AI is provided with external documents or sources to ground its response in factual and current information, helping to avoid 
hallucinations.<\/li>\n<li><b>Zero-Shot Prompting:<\/b> The simplest type of prompt, which asks a question or gives an instruction without providing any examples or extra context.<\/li>\n<li><b>Few-Shot Prompting:<\/b> A prompt that includes a few examples to guide the AI\u2019s output style, tone, or structure.<\/li>\n<li><b>Role-Based Prompting:<\/b> A technique that instructs the AI to act as a specific professional, persona, or viewpoint to anchor its terminology and assumptions.<\/li>\n<li><b>Hallucinations:<\/b> A common AI pitfall where the model generates incorrect, invented, or fabricated content.<\/li>\n<\/ul>\n<h2>References<\/h2>\n<p class=\"hanging-indent\">Bozkurt, A. (2024). Tell me your prompts and I will make them true: Prompt engineering as a new form of generative art and science. <em>Open Praxis, 16<\/em>(2), 111\u2013118. <a href=\"https:\/\/doi.org\/10.55982\/openpraxis.16.2.661\">https:\/\/doi.org\/10.55982\/openpraxis.16.2.661<\/a><\/p>\n<p class=\"hanging-indent\">Giray, L. (2023). Prompt engineering with ChatGPT: A guide for academic writers. <em>Annals of Biomedical Engineering, 51<\/em>(12), 2629\u20132633. <a href=\"https:\/\/doi.org\/10.1007\/s10439-023-03272-4\">https:\/\/doi.org\/10.1007\/s10439-023-03272-4<\/a><\/p>\n<p class=\"hanging-indent\">Morris, M. R. (2024). Prompting considered harmful. <em>Communications of the ACM, 67<\/em>(3), 45\u201353. <a href=\"https:\/\/doi.org\/10.1145\/3673861\">https:\/\/doi.org\/10.1145\/3673861<\/a><\/p>\n<p class=\"hanging-indent\">PromptingGuide.ai. (2024). <em>Tree of Thoughts (ToT)<\/em>. Retrieved October 2025, from <a href=\"https:\/\/www.promptingguide.ai\/techniques\/tot\" target=\"_new\">https:\/\/www.promptingguide.ai\/techniques\/tot<\/a><\/p>\n<p class=\"hanging-indent\">Sahoo, P., Singh, A. K., Saha, S., Jain, V., Mondal, S., &amp; Chadha, A. (2024). <em>A systematic survey of prompt engineering in large language models: Techniques and applications<\/em>. arXiv. 
<a href=\"https:\/\/doi.org\/10.48550\/arXiv.2402.07927\">https:\/\/doi.org\/10.48550\/arXiv.2402.07927<\/a><\/p>\n<p class=\"hanging-indent\">Song, Y., He, Y., Zhao, X., Gu, H., Jiang, D., Yang, H., Fan, L., &amp; Yang, Q. (2024). A communication theory perspective on prompting engineering methods for large language models. <em>Journal of Computer Science and Technology, 39<\/em>(4), 984\u20131004. <a href=\"https:\/\/doi.org\/10.1007\/s11390-024-4058-8\">https:\/\/doi.org\/10.1007\/s11390-024-4058-8<\/a><\/p>\n<p class=\"hanging-indent\">UNESCO. (n.d.). <em>Ethics of artificial intelligence: Recommendation on the ethics of artificial intelligence<\/em>. UNESCO. Retrieved October 2025, from <a href=\"https:\/\/www.unesco.org\/en\/artificial-intelligence\/recommendation-ethics?utm_source=chatgpt.com\" target=\"_new\">https:\/\/www.unesco.org\/en\/artificial-intelligence\/recommendation-ethics<\/a><\/p>\n<p class=\"hanging-indent\"><a href=\"https:\/\/doi.org\/10.48550\/arXiv.2201.11903\" target=\"_blank\" rel=\"noopener\">Wei et al. (2022)<\/a> \u2013 Chain-of-thought prompting, encouraging step-by-step reasoning in LLMs.<\/p>\n<p class=\"hanging-indent\">Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., Yogatama, D., Bosma, M., Zhou, D., Metzler, D., Chi, E. H., Hashimoto, T., Vinyals, O., Liang, P., Dean, J., &amp; Fedus, W. (2022). <em>Emergent Abilities of Large Language Models<\/em>. arXiv. <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2206.07682\">https:\/\/doi.org\/10.48550\/arXiv.2206.07682<\/a><\/p>\n<p class=\"hanging-indent\">Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E., Narang, S., Chowdhery, A., &amp; Zhou, D. (2023). <em>Self-consistency improves chain of thought reasoning in language models<\/em> (Version 4) [Preprint]. arXiv. 
<a href=\"https:\/\/doi.org\/10.48550\/arXiv.2203.11171\">https:\/\/doi.org\/10.48550\/arXiv.2203.11171<\/a><\/p>\n<p class=\"hanging-indent\">White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., Elnashar, A., Spencer-Smith, J., &amp; Schmidt, D. C. (2023). <em>A prompt pattern catalog to enhance prompt engineering with ChatGPT<\/em>. arXiv. <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2302.11382\">https:\/\/doi.org\/10.48550\/arXiv.2302.11382<\/a><\/p>\n<p class=\"hanging-indent\">Zamfirescu-Pereira, J. D., Wong, R. Y., Hartmann, B., &amp; Yang, Q. (2023). Why Johnny can\u2019t prompt: How non-AI experts try (and fail) to design LLM prompts. <em>CHI Conference on Human Factors in Computing Systems (CHI \u201923)<\/em>, 1\u201321. ACM. <a href=\"https:\/\/doi.org\/10.1145\/3544548.3581388\" target=\"_new\">https:\/\/doi.org\/10.1145\/3544548.3581388<\/a><\/p>\n<p class=\"hanging-indent\">Zhou, D., Sch\u00e4rli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., &amp; Chi, E. (2023). <em>Least-to-most prompting enables complex reasoning in large language models<\/em> (Version 3) [Preprint]. arXiv. 
<a href=\"https:\/\/doi.org\/10.48550\/arXiv.2205.10625\">https:\/\/doi.org\/10.48550\/arXiv.2205.10625<\/a><\/p>\n","protected":false},"author":6,"menu_order":7,"template":"","meta":{"pb_show_title":"on","pb_short_title":"","pb_subtitle":"","pb_authors":["avas"],"pb_section_license":"cc-by-nc-sa","_links_to":"","_links_to_target":""},"chapter-type":[49],"contributor":[63],"license":[57],"class_list":["post-119","chapter","type-chapter","status-publish","hentry","chapter-type-numberless","contributor-avas","license-cc-by-nc-sa"],"part":30,"jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters\/119","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters"}],"about":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/wp\/v2\/types\/chapter"}],"author":[{"embeddable":true,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/wp\/v2\/users\/6"}],"version-history":[{"count":10,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters\/119\/revisions"}],"predecessor-version":[{"id":774,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters\/119\/revisions\/774"}],"part":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/parts\/30"}],"metadata":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters\/119\/metadata\/"}],"wp:attachment":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/wp\/v2\/media?parent=119"}],"wp:term":[{"taxonomy":"chapter-type","embeddable":true,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapter-type?post=119"},{"taxonomy":"contributor","embeddable":true,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/wp\/v2\/contributor?post=119"},{"taxonomy":"license","embeddable":true,"href":"https:\/\/books.nbsplab
s.com\/ai-lit-intro\/wp-json\/wp\/v2\/license?post=119"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}