
1.7. Key Concepts & References

Hou I (Esther) Lau

Key Concepts

  • Artificial Intelligence (AI) – Simulation of human intelligence in machines (learning, reasoning, perceiving, creating).
  • Machine Learning (ML) – Subset of AI where systems learn patterns from data instead of explicit programming.
    • Supervised Learning – Training with labeled data.
    • Unsupervised Learning – Pattern-finding without labels.
  • Reinforcement Learning (RL) – Trial-and-error learning guided by rewards and penalties.
  • Deep Learning (DL) – A form of ML using neural networks with multiple layers to recognize complex patterns (e.g., images, speech, translation).
  • Generative Models (GM) – Models that can create new content (text, images, audio).
  • Large Language Models (LLMs) – Deep learning models trained on vast text datasets to predict and generate human-like language (e.g., ChatGPT, Gemini).
  • Chain-of-Thought (CoT) – A reasoning approach where AI generates intermediate steps before producing answers.
  • Large Reasoning Models (LRMs) – Models designed for extended, multi-step reasoning with long context retention.
  • Retrieval-Augmented Generation (RAG) – Combines generation with external information retrieval for more accurate outputs.
  • Multimodal Models – AI systems that process and generate content across multiple modalities (text, images, audio, video).
  • Multi-Component Prompting (MCP) – Structured prompting strategy for guiding AI toward more accurate, creative results.
  • Agentic AI – Adaptive, goal-directed AI agents capable of planning, reflection, and autonomous task execution (e.g., AutoGPT).
  • Artificial Narrow Intelligence (ANI) – Task-specific AI (today’s dominant form).
  • Artificial General Intelligence (AGI) – Hypothetical AI capable of reasoning and learning across domains like a human.
  • Artificial Superintelligence (ASI) – Speculative future AI surpassing human intelligence in all respects.
  • LLM Chaining (Prompt Chaining) – Links a series of prompts so that each builds on the previous output, progressively guiding an LLM toward a desired result. Chaining supports structured reasoning, more coherent text, and complex workflows that approximate multi-stage human thinking.
  • Mixture of Experts (MoE) – An architecture that distributes computation across multiple specialized “expert” sub-models. A gating mechanism determines which experts to activate for a given input, letting the system scale efficiently while maintaining strong performance. MoE systems optimize resource use and can improve interpretability by segmenting problem domains.
  • Temperature Settings – a hyperparameter that controls randomness in text generation.
    • Low temperatures (0.0–0.5) yield deterministic, precise, and repeatable outputs.
    • Higher temperatures (>1.0) increase creativity and unpredictability in responses.
    • Lets users balance reliability against originality depending on the task.
  • Reinforcement Learning from Human Feedback (RLHF) – A training method where human evaluators provide feedback on model outputs, helping the system learn desirable behaviors and avoid harmful ones. It combines supervised fine-tuning with reinforcement learning, aligning model responses with human values, preferences, and safety considerations. RLHF has been crucial in making state-of-the-art LLMs usable and trustworthy.
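To make retrieval-augmented generation concrete, here is a minimal, self-contained sketch. The tiny document store, keyword-overlap scoring, and `rag_answer` helper are illustrative stand-ins: a real RAG system would use embedding-based similarity search and send the augmented prompt to an actual LLM.

```python
# Toy RAG: retrieve the most relevant document by keyword overlap,
# then prepend it to the question before generation.

DOCS = [
    "The mitochondria is the powerhouse of the cell.",
    "Photosynthesis converts sunlight into chemical energy.",
    "RLHF aligns language models with human preferences.",
]

def retrieve(query, docs):
    # Score each document by how many words it shares with the query
    # (a stand-in for real embedding similarity search).
    query_words = set(query.lower().split())
    return max(docs, key=lambda d: len(query_words & set(d.lower().split())))

def rag_answer(query):
    context = retrieve(query, DOCS)
    # A real system would send this augmented prompt to an LLM;
    # here we return it to show the structure of the technique.
    return f"Context: {context}\nQuestion: {query}"

print(rag_answer("What does photosynthesis convert?"))
```

The key idea is visible in the output: the generator never has to "remember" the fact, because the retrieved context is supplied alongside the question.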
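Prompt chaining can be sketched in a few lines. The `call_llm` function below is a hypothetical offline stand-in for a real provider SDK call; what matters is the structure of `prompt_chain`, where the output of one prompt is fed into the next.

```python
def call_llm(prompt):
    """Stand-in for a real LLM API call (hypothetical; swap in a
    provider SDK in practice). Returns canned text so the sketch
    runs offline."""
    if "expand" in prompt.lower():
        # Second-stage prompt: echo back a "draft" built on the outline.
        return "Draft essay: " + prompt.split("\n", 1)[1]
    # First-stage prompt: produce an outline.
    return "1. Hook  2. Problem  3. Solution"

def prompt_chain(topic):
    # Step 1: ask for an outline.
    outline = call_llm(f"Write a three-point outline about {topic}.")
    # Step 2: feed the outline back in to get a draft. Each prompt
    # builds on the previous output -- the essence of chaining.
    return call_llm(f"Expand this outline into a short essay:\n{outline}")

print(prompt_chain("AI in education"))
```

Breaking a task into stages like this tends to produce more controllable output than a single monolithic prompt, because each intermediate result can be inspected or corrected before the next step.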
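The gating idea behind Mixture of Experts can be shown with two toy "experts" and a hand-written gate. In a real MoE model the gate is a learned network and the experts are neural sub-networks; the rule-based scoring below is purely illustrative.

```python
# Toy Mixture of Experts: a gate scores each expert for the input and
# only the top-scoring expert runs, saving compute vs. running all.

def expert_double(x):
    return 2 * x          # "expert" specialized for small inputs

def expert_square(x):
    return x * x          # "expert" specialized for large inputs

def gate_scores(x):
    # A real gate is a learned network; this fixed rule is a stand-in:
    # prefer doubling for x < 10, squaring otherwise.
    return {"double": 10 - x, "square": x - 10}

def moe_forward(x):
    experts = {"double": expert_double, "square": expert_square}
    scores = gate_scores(x)
    chosen = max(scores, key=scores.get)   # top-1 routing
    return experts[chosen](x)

print(moe_forward(3))    # gate routes to "double" -> 6
print(moe_forward(12))   # gate routes to "square" -> 144
```

Because only the selected expert executes, total compute per input stays roughly constant even as more experts are added, which is the efficiency argument for MoE at scale.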
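Temperature is easiest to understand as a divisor applied to the model's raw scores before sampling. A minimal sketch, using plain softmax sampling over a hypothetical three-token vocabulary:

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Sample a token index from raw model scores (logits).

    Low temperature sharpens the distribution (more deterministic);
    high temperature flattens it (more varied output).
    """
    if temperature <= 0:
        # Temperature 0 is conventionally greedy decoding: pick the max.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [score / temperature for score in logits]
    # Softmax: convert scaled logits into probabilities.
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to those probabilities.
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.1]
print(sample_with_temperature(logits, 0.0))  # always index 0 (greedy)
print(sample_with_temperature(logits, 1.5))  # any index is possible
```

Dividing by a small temperature exaggerates the gaps between scores, so the top token dominates; dividing by a large temperature shrinks the gaps, so lower-ranked tokens are sampled more often.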

References

Aban, T. (2021). Artificial intelligence in education: Promises and implications for teaching and learning (Vol. 78). American Association of School Administrators.

Anderson, J. R. (2010). Cognitive psychology and its implications (7th ed.). Worth Publishers.

Atkinson, R. C., & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In K. W. Spence & J. T. Spence (Eds.), The psychology of learning and motivation (Vol. 2, pp. 89–195). Academic Press.

Bowen, J. A., & Watson, C. E. (2024). Teaching with AI: A practical guide to a new era of human learning. Johns Hopkins University Press.

Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (2000). How people learn: Brain, mind, experience, and school (Expanded ed.). National Academies Press.

Bruner, J. S. (1960). The process of education. Harvard University Press.

Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., … Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274.

Koehler, M. J., & Mishra, P. (2009). What is TPACK? Contemporary Issues in Technology and Teacher Education, 9(1), 60–70.

Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–16). ACM.

Mayer, R. E. (2019). The Cambridge handbook of multimedia learning (2nd ed.). Cambridge University Press.

Memarian, B., & Doleck, T. (2024). A scoping review of reinforcement learning in education. Computers and Education Open, 6, 100175.

Puentedura, R. R. (2009). Transformation, technology, and education. http://hippasus.com/resources/tte.

Russell, S. J., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson.

Schunk, D. H. (2018). Learning theories: An educational perspective (8th ed.). Pearson.

Sinek, S. (2011). Start with why: How great leaders inspire everyone to take action. Portfolio/Penguin.

Skinner, B. F. (1953). Science and human behavior. Macmillan.

Sweller, J., Ayres, P., & Kalyuga, S. (2011). Cognitive load theory. Springer.

License

1.7. Key Concepts & References Copyright © 2025 by Hou I (Esther) Lau is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.