2.7. Key Concepts & References
Asha Vas
Key Concepts
- Prompt engineering: The practice of crafting effective inputs to guide AI toward useful, accurate, and context-aware outputs.
- Weak prompts: Vague, unconstrained, and context-free inputs that tend to produce generic or off-target AI responses.
- Strong prompts: Precise, context-rich inputs that provide the AI with clarity and constraints, considering audience, format, scope, and tone.
- Six-Element Prompting Model: A structured framework for crafting prompts that breaks the input into six components: Role/Persona, Task/Instruction, Context/Constraints, Input Data, Output Format, and Tone/Style.
- Chain-of-Thought (CoT) Prompting: An advanced technique where the AI is asked to show its reasoning process step-by-step to improve the accuracy of its conclusions.
- Least-to-Most Prompting: An advanced technique where a complex problem is broken down into a sequence of simpler subproblems that the AI solves in order.
- Self-Consistency in Reasoning Paths: An advanced technique that involves generating multiple reasoning chains for the same problem and selecting the answer that appears most often to reduce errors.
- Tree of Thoughts (ToT): An advanced technique that allows the AI to explore and evaluate multiple branching reasoning paths before committing to an answer, which is useful for planning or strategy tasks.
- Prompt Chaining: A multi-turn technique where a complex task is broken down into a series of prompts, with each step building on the previous one.
- Retrieval-Augmented Generation (RAG): A technique where the AI is provided with external documents or sources to ground its response in factual and current information, helping to avoid hallucinations.
- Zero-Shot Prompting: The simplest type of prompt, which asks a question or gives an instruction without providing any examples or extra context.
- Few-Shot Prompting: A prompt that includes a few examples to guide the AI’s output style, tone, or structure.
- Role-Based Prompting: A technique that instructs the AI to act as a specific professional, persona, or viewpoint to anchor its terminology and assumptions.
- Hallucinations: A common AI pitfall where the model generates incorrect, invented, or fabricated content.
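Two of the concepts above lend themselves to a short illustration. The sketch below is a minimal, model-agnostic example: it assembles a prompt from the six elements of the Six-Element Prompting Model and applies the majority-vote step of self-consistency to a set of final answers. The function names, field labels, and sample values are illustrative assumptions, not a prescribed API; in practice the sampled answers would come from several independent model runs.

```python
from collections import Counter

def build_prompt(role, task, context, input_data, output_format, tone):
    """Assemble a prompt from the six elements of the
    Six-Element Prompting Model (illustrative template)."""
    return (
        f"Role: {role}\n"
        f"Task: {task}\n"
        f"Context/Constraints: {context}\n"
        f"Input: {input_data}\n"
        f"Output format: {output_format}\n"
        f"Tone: {tone}"
    )

def self_consistent_answer(candidates):
    """Self-consistency: given final answers extracted from several
    independently sampled reasoning chains, return the most frequent one."""
    return Counter(candidates).most_common(1)[0][0]

prompt = build_prompt(
    role="You are a technical editor",
    task="Summarize the attached report in three bullet points",
    context="Audience: executives; no jargon; under 100 words",
    input_data="<report text>",
    output_format="Bulleted list",
    tone="Neutral and concise",
)

# Final answers extracted from five sampled reasoning chains;
# the majority answer is kept, reducing the impact of one-off errors.
print(self_consistent_answer(["42", "42", "41", "42", "40"]))  # → 42
```

Note that the template simply concatenates the six labeled elements; the value of the model lies in forcing each element to be stated explicitly, not in any particular formatting.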
References
Bozkurt, A. (2024). Tell me your prompts and I will make them true: Prompt engineering as a new form of generative art and science. Open Praxis, 16(2), 111–118. https://doi.org/10.55982/openpraxis.16.2.661
Giray, L. (2023). Prompt engineering with ChatGPT: A guide for academic writers. Annals of Biomedical Engineering, 51(12), 2629–2633. https://doi.org/10.1007/s10439-023-03272-4
Morris, M. R. (2024). Prompting considered harmful. Communications of the ACM, 67(3), 45–53. https://doi.org/10.1145/3673861
PromptingGuide.ai. (2024). Tree of Thoughts (ToT). Retrieved October 2025, from https://www.promptingguide.ai/techniques/tot
Sahoo, P., Singh, A. K., Saha, S., Jain, V., Mondal, S., & Chadha, A. (2024). A systematic survey of prompt engineering in large language models: Techniques and applications. arXiv. https://doi.org/10.48550/arXiv.2402.07927
Song, Y., He, Y., Zhao, X., Gu, H., Jiang, D., Yang, H., Fan, L., & Yang, Q. (2024). A communication theory perspective on prompting engineering methods for large language models. Journal of Computer Science and Technology, 39(4), 984–1004. https://doi.org/10.1007/s11390-024-4058-8
UNESCO. (n.d.). Ethics of artificial intelligence: Recommendation on the ethics of artificial intelligence. UNESCO. Retrieved October 2025, from https://www.unesco.org/en/artificial-intelligence/recommendation-ethics
Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E., Narang, S., Chowdhery, A., & Zhou, D. (2023). Self-consistency improves chain of thought reasoning in language models (Version 4) [Preprint]. arXiv. https://doi.org/10.48550/arXiv.2203.11171
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., & Zhou, D. (2022). Chain-of-thought prompting elicits reasoning in large language models. arXiv. https://doi.org/10.48550/arXiv.2201.11903
Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., Yogatama, D., Bosma, M., Zhou, D., Metzler, D., Chi, E. H., Hashimoto, T., Vinyals, O., Liang, P., Dean, J., & Fedus, W. (2022). Emergent abilities of large language models. arXiv. https://doi.org/10.48550/arXiv.2206.07682
White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., Elnashar, A., Spencer-Smith, J., & Schmidt, D. C. (2023). A prompt pattern catalog to enhance prompt engineering with ChatGPT. arXiv. https://doi.org/10.48550/arXiv.2302.11382
Zamfirescu-Pereira, J. D., Wong, R. Y., Hartmann, B., & Yang, Q. (2023). Why Johnny can’t prompt: How non-AI experts try (and fail) to design LLM prompts. CHI Conference on Human Factors in Computing Systems (CHI ’23), 1–21. ACM. https://doi.org/10.1145/3544548.3581388
Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., & Chi, E. (2023). Least-to-most prompting enables complex reasoning in large language models (Version 3) [Preprint]. arXiv. https://doi.org/10.48550/arXiv.2205.10625