"

2.6. Applied Practice Scenarios

Asha Vas

So far, you’ve learned about effective prompt structures, common pitfalls, and advanced techniques. But the best way to understand prompt engineering is to put it into action. In this chapter, you’ll experiment with real-world tasks, observe how AI responds, and refine your prompts to achieve clearer and more ethical results.

These applied scenarios can be adapted to any discipline. Whether you work in teaching, research, administration, or creative projects, the goal is the same: practice designing prompts, analyze the outputs, and reflect on what works and why.

Scenario 1: Simplifying Complex Ideas

Task: Choose a concept from your field (a theory, framework, or technical process). Prompt an AI model to explain it at three different levels:

  • For a high school student
  • For an undergraduate in your discipline
  • For a professional colleague
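
If you would rather run this exercise through an API than a chat window, a minimal sketch is shown below. It uses the OpenAI Python SDK as one possible client; the model name and the sample concept are placeholders, so substitute your own provider, model, and a concept from your field.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

    concept = "Bloom's taxonomy"  # placeholder: substitute a concept from your own field
    audiences = [
        "a high school student",
        "an undergraduate in the relevant discipline",
        "a professional colleague",
    ]

    for audience in audiences:
        prompt = (
            f"Explain {concept} for {audience}. "
            "Adjust the tone, vocabulary, and examples to suit that audience."
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # example model name; any chat model will do
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- Explanation for {audience} ---")
        print(response.choices[0].message.content, "\n")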

Reflection: How did the AI adapt tone, vocabulary, and examples for each audience? What did it miss that you would add as a human expert?

Scenario 2: Prompt Comparison

Task: Write two prompts asking for a short summary of the same article, case, or problem.

  • Prompt A: vague, with only minimal instructions
  • Prompt B: structured using the Six-Element Prompting Model
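
Below is a sketch of the same comparison in code, again using the OpenAI Python SDK as one possible client. The labeled parts of Prompt B (role, context, task, constraints, output format) are illustrative stand-ins only; align them with the Six-Element Prompting Model as it is defined earlier in this book.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

    article = "<paste the article, case, or problem text here>"

    prompt_a = f"Summarize this: {article}"

    # Illustrative structure only; match these parts to the book's six elements.
    prompt_b = (
        "You are an experienced research librarian.\n"
        "Context: the summary will be shared with first-year students.\n"
        "Task: summarize the text below in plain language.\n"
        "Constraints: 150 words maximum; define any unavoidable jargon.\n"
        "Output format: one short paragraph followed by three bullet points.\n\n"
        f"Text: {article}"
    )

    for label, prompt in [("Prompt A (vague)", prompt_a), ("Prompt B (structured)", prompt_b)]:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # example model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {label} ---")
        print(response.choices[0].message.content, "\n")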

Reflection: Compare the two outputs. Which was clearer, more relevant, or more useful? How did context, role, or output format shape the response?

Resource: Liu et al. (2023) on structured prompting

Scenario 3: Creative Collaboration

Task: Use AI to brainstorm ideas for a project (lesson plan, presentation, blog post, or research angle). Then refine the ideas with a second prompt:

  • First prompt: “Generate 10 creative ideas for X.”
  • Second prompt: “Take idea #3 and expand it into a 3-step action plan suitable for a [specific context].”
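
The two-step refinement can also be scripted. The sketch below keeps the first exchange in the conversation history so that "idea #3" in the second prompt refers to something concrete; the topic and context strings are placeholders, and the OpenAI Python SDK is assumed as one possible client.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

    topic = "a lesson plan on digital literacy"  # placeholder: your own project
    messages = [{"role": "user", "content": f"Generate 10 creative ideas for {topic}."}]

    first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    ideas = first.choices[0].message.content
    print("--- Brainstormed ideas ---\n", ideas)

    # Keep the first exchange in the history so "idea #3" refers to something concrete.
    messages += [
        {"role": "assistant", "content": ideas},
        {
            "role": "user",
            "content": (
                "Take idea #3 and expand it into a 3-step action plan "
                "suitable for a first-year undergraduate seminar."  # placeholder context
            ),
        },
    ]
    second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print("--- Expanded action plan ---\n", second.choices[0].message.content)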

Reflection: How did the second prompt improve specificity and usefulness? What role did your human judgment play in deciding which ideas to keep?

Scenario 4: Ethical Guardrails

Task: Ask the AI to generate content that could raise ethical concerns (e.g., medical advice, grading decisions, financial recommendations). Observe the output carefully.

  • Does the AI give a disclaimer?
  • Does it provide potentially harmful or biased suggestions?

Then re-prompt with explicit ethical guidance, such as: “Provide general educational information only, without offering direct medical advice.”
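
If you are working through an API, a minimal sketch of the before-and-after comparison is shown below. It assumes the OpenAI Python SDK; the example question is a placeholder, and the ethical guidance is carried in a system message, which is one common place to put standing instructions.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

    question = "What should I take for persistent headaches?"  # placeholder ethically sensitive prompt

    # First pass: no guidance, so you can observe the model's default behavior.
    plain = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": question}],
    )
    print("--- Without explicit guidance ---\n", plain.choices[0].message.content)

    # Second pass: the same question, with explicit ethical guidance in a system message.
    guarded = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "Provide general educational information only, "
                    "without offering direct medical advice."
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    print("--- With explicit guidance ---\n", guarded.choices[0].message.content)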

Reflection: How did the AI’s response change? What does this show about your role in steering the system responsibly?

Resource: UNESCO’s AI Ethics Guidelines

Scenario 5: Iterative Refinement

Task: Choose a prompt you might actually use in your work. Run it once, then re-run it at least twice with improvements (adding role, context, constraints, or output format). Keep all versions.
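
One way to keep all versions side by side when working through an API is sketched below, again assuming the OpenAI Python SDK. The three prompt versions are placeholders; replace them with your own prompt, refined step by step.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

    # Placeholder versions; replace them with a prompt you would actually use,
    # refined step by step (role, context, constraints, output format).
    versions = [
        "Write an announcement about the new office hours.",
        "You are a department administrator. Write a friendly announcement "
        "about the new office hours (Mon-Thu, 9am-4pm) for students and staff.",
        "You are a department administrator. Write a friendly announcement "
        "about the new office hours (Mon-Thu, 9am-4pm) for students and staff. "
        "Keep it under 120 words and end with a line inviting questions by email.",
    ]

    for i, prompt in enumerate(versions, start=1):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # example model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- Version {i} ---\n{response.choices[0].message.content}\n")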

Reflection: Which version best met your needs? What did you learn about how much detail is “just enough” in a prompt?