4.2. Defining Appropriate AI Use in Courses
Kristin Clark
Clear expectations are one of the most effective tools for ensuring academic integrity in the age of AI. Students often want to use AI tools but are unsure where the boundaries lie. Faculty, meanwhile, may worry that too much restriction will stifle creativity, while too little will invite misuse. This chapter explores how to define appropriate AI use in your courses—through policies, examples, and shared language that empower both students and instructors.
Assistive vs. Generative Use
One way to clarify expectations is to distinguish between assistive and generative uses of AI:
- Assistive use: AI supports learning without replacing essential thinking. Examples: brainstorming topics, rephrasing for clarity, suggesting outlines, or generating practice quiz questions.
- Generative use: AI substitutes for core learning tasks. Examples: writing full essays, solving graded problems, or generating finished products without attribution.
Drawing this distinction helps students see AI as a tool rather than a shortcut.
Why Transparency Matters
Students should be expected to cite or disclose when and how they use AI. Just as we expect proper citation of sources, disclosing AI support builds a culture of honesty and accountability. For practical guidance, see APA’s recommendations on citing ChatGPT and other AI tools.
📖 Analogy: Calculators in Math Class
When calculators first became common, schools debated whether students should use them. The solution was not to ban calculators entirely, but to specify when and how they could be used. In early lessons, calculators might be off-limits so students could learn foundational skills. Later, they became essential for tackling higher-order problems. AI works the same way: its value depends on timing, purpose, and alignment with learning goals.
Quick Self-Check
Test your instincts: for each scenario in the activity below, decide whether the described use of AI is assistive or generative.
Looking Ahead
Next up, 4.3 The Limits of AI Detection Tools examines why detection-first approaches are problematic and how design choices can provide a stronger foundation for integrity.