{"id":117,"date":"2025-09-02T17:30:08","date_gmt":"2025-09-02T17:30:08","guid":{"rendered":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/?post_type=chapter&#038;p=117"},"modified":"2025-10-09T18:37:28","modified_gmt":"2025-10-09T18:37:28","slug":"chapter-1-9-key-concepts","status":"publish","type":"chapter","link":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/chapter\/chapter-1-9-key-concepts\/","title":{"raw":"1.7. Key Concepts &amp; References","rendered":"1.7. Key Concepts &amp; References"},"content":{"raw":"<h2>Key Concepts<\/h2>\r\n<ul>\r\n \t<li><strong>Artificial Intelligence (AI)<\/strong> \u2013 Simulation of human intelligence in machines (learning, reasoning, perceiving, creating).<\/li>\r\n \t<li><strong>Machine Learning (ML)<\/strong> \u2013 Subset of AI where systems learn patterns from data instead of explicit programming.\r\n<ul>\r\n \t<li><strong>Supervised Learning<\/strong> \u2013 Training with labeled data.<\/li>\r\n \t<li><strong>Unsupervised Learning<\/strong> \u2013 Pattern-finding without labels.<\/li>\r\n<\/ul>\r\n<\/li>\r\n \t<li><strong>Reinforcement Learning (RL)<\/strong> \u2013 Trial-and-error learning guided by rewards and penalties.<\/li>\r\n \t<li><strong>Deep Learning (DL)<\/strong> \u2013 A form of ML using neural networks with multiple layers to recognize complex patterns (e.g., images, speech, translation).<\/li>\r\n \t<li><strong>Generative Models (GM)<\/strong> \u2013 Models that can create new content (text, images, audio).<\/li>\r\n \t<li><strong>Large Language Models (LLMs)<\/strong> \u2013 Deep learning models trained on vast text datasets to predict and generate human-like language (e.g., ChatGPT, Gemini).<\/li>\r\n \t<li><strong>Chain-of-Thought (CoT)<\/strong> \u2013 A reasoning approach where AI generates intermediate steps before producing answers.<\/li>\r\n \t<li><strong>Large Reasoning Models (LRMs)<\/strong> \u2013 Models designed for extended, multi-step reasoning with long context retention.<\/li>\r\n 
\t<li><strong>Retrieval-Augmented Generation (RAG)<\/strong> \u2013 Combines generation with external information retrieval for more accurate outputs.<\/li>\r\n \t<li><strong>Multimodal Models (MM)<\/strong> \u2013 AI systems that process and generate across multiple input types (text, images, audio, video).<\/li>\r\n \t<li><strong>Multi-Component Prompting (MCP)<\/strong> \u2013 Structured prompting strategy for guiding AI toward more accurate, creative results.<\/li>\r\n \t<li><strong>Agentic AI<\/strong> \u2013 Adaptive, goal-directed AI agents capable of planning, reflection, and autonomous task execution (e.g., AutoGPT).<\/li>\r\n \t<li><strong>Artificial Narrow Intelligence (ANI)<\/strong> \u2013 Task-specific AI (today\u2019s dominant form).<\/li>\r\n \t<li><strong>Artificial General Intelligence (AGI)<\/strong> \u2013 Hypothetical AI capable of reasoning and learning across domains like a human.<\/li>\r\n \t<li><strong>Artificial Superintelligence (ASI)<\/strong> \u2013 Speculative future AI surpassing human intelligence in all respects.<\/li>\r\n \t<li><strong>LLM Chaining (Prompt Chaining)<\/strong> \u2013 A technique that links a series of prompts together to progressively guide an LLM toward a desired output. Each prompt builds on the last, allowing for structured reasoning, more coherent text, and complex workflows that approximate multi-stage human thinking.<\/li>\r\n \t<li><strong>Mixture of Experts (MoE)<\/strong> \u2013 An architecture that distributes computation across multiple specialized \u201cexpert\u201d sub-models. A gating mechanism determines which experts to activate for a given input, allowing the system to scale efficiently while maintaining strong performance. 
MoE systems help optimize resource use and can improve model interpretability by segmenting problem domains.<\/li>\r\n \t<li><strong>Temperature Settings<\/strong> \u2013 A hyperparameter that controls randomness in text generation.\r\n<ul>\r\n \t<li>Low temperatures (0.0\u20130.5) yield more deterministic, precise, and repeatable outputs.<\/li>\r\n \t<li>Higher temperatures (&gt;1.0) increase creativity and unpredictability in responses.<\/li>\r\n \t<li>Adjusting temperature lets users trade reliability for originality depending on the task.<\/li>\r\n<\/ul>\r\n<\/li>\r\n \t<li><strong>Reinforcement Learning from Human Feedback (RLHF)<\/strong> \u2013 A training method where human evaluators provide feedback on model outputs, helping the system learn desirable behaviors and avoid harmful ones. It combines supervised fine-tuning with reinforcement learning, aligning model responses with human values, preferences, and safety considerations. RLHF has been crucial in making state-of-the-art LLMs usable and trustworthy.<\/li>\r\n<\/ul>\r\n<h2>References<\/h2>\r\n<p class=\"hanging-indent\">Aban, T. (2021). <i>Artificial intelligence in education: Promises and implications for teaching and learning<\/i> (Vol. 78). American Association of School Administrators.<\/p>\r\n<p class=\"hanging-indent\">Anderson, J. R. (2010). <i>Cognitive psychology and its implications<\/i> (7th ed.). Worth Publishers.<\/p>\r\n<p class=\"hanging-indent\">Atkinson, R. C., &amp; Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In K. W. Spence &amp; J. T. Spence (Eds.), <i>The psychology of learning and motivation<\/i> (Vol. 2, pp. 89\u2013195). Academic Press.<\/p>\r\n<p class=\"hanging-indent\">Atkinson, R. C., &amp; Shiffrin, R. M. (1971). Human memory: A proposed system and its control processes. <i>Advances in Research and Theory, 2<\/i>, 89\u2013195.<\/p>\r\n<p class=\"hanging-indent\">Bowen, J. A., &amp; Watson, C. E. (2024). 
<i>Teaching with AI: A practical guide to a new era of human learning<\/i>. Johns Hopkins University Press.<\/p>\r\n<p class=\"hanging-indent\">Bransford, J. D., Brown, A. L., &amp; Cocking, R. R. (Eds.). (2000). <i>How people learn: Brain, mind, experience, and school<\/i> (Expanded ed.). National Academies Press.<\/p>\r\n<p class=\"hanging-indent\">Bruner, J. S. (1960). <i>The process of education<\/i>. Harvard University Press.<\/p>\r\n<p class=\"hanging-indent\">Kasneci, E., Sessler, K., K\u00fcchemann, S., Bannert, M., Dementieva, D., Fischer, F., \u2026 Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. <i>Learning and Individual Differences, 103<\/i>, 102274.<\/p>\r\n<p class=\"hanging-indent\">Koehler, M. J., &amp; Mishra, P. (2009). What is TPACK? <i>Contemporary Issues in Technology and Teacher Education, 9<\/i>(1), 60\u201370.<\/p>\r\n<p class=\"hanging-indent\">Long, D., &amp; Magerko, B. (2020). What is AI literacy? Competencies and design considerations. In <i>Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems<\/i>. ACM.<\/p>\r\n<p class=\"hanging-indent\">Mayer, R. E. (2019). <i>The Cambridge handbook of multimedia learning<\/i> (2nd ed.). Cambridge University Press.<\/p>\r\n<p class=\"hanging-indent\">Memarian, B., &amp; Doleck, T. (2024). A scoping review of reinforcement learning in education. <i>Computers and Education Open, 6<\/i>, 100175.<\/p>\r\n<p class=\"hanging-indent\">Puentedura, R. R. (2009). <i>Transformation, technology, and education<\/i>. <a href=\"http:\/\/hippasus.com\/resources\/tte\" target=\"_blank\" rel=\"noopener\">http:\/\/hippasus.com\/resources\/tte<\/a>.<\/p>\r\n<p class=\"hanging-indent\">Russell, S. J., &amp; Norvig, P. (2021). <i>Artificial intelligence: A modern approach<\/i> (4th ed.). Pearson.<\/p>\r\n<p class=\"hanging-indent\">Schunk, D. H. (2018). <i>Learning theories: An educational perspective<\/i> (8th ed.). 
Pearson.<\/p>\r\n<p class=\"hanging-indent\">Sinek, S. (2011). <i>Start with why: How great leaders inspire everyone to take action<\/i>. Portfolio\/Penguin.<\/p>\r\n<p class=\"hanging-indent\">Skinner, B. F. (1953). <i>Science and human behavior<\/i>. Macmillan.<\/p>\r\n<p class=\"hanging-indent\">Sweller, J., Ayres, P., &amp; Kalyuga, S. (2011). <i>Cognitive load theory<\/i>. Springer.<\/p>","rendered":"","protected":false},"author":6,"menu_order":7,"template":"","meta":{"pb_show_title":"on","pb_short_title":"","pb_subtitle":"","pb_authors":["hlau2"],"pb_section_license":"cc-by-nc-sa","_links_to":"","_links_to_target":""},"chapter-type":[49],"contributor":[62],"license":[57],"class_list":["post-117","chapter","type-chapter","status-publish","hentry","chapter-type-numberless","contributor-hlau2","license-cc-by-nc-sa"],"part":22,"jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters\/117","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters"}],"about":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/wp\/v2\/types\/chapter"}],"author":[{"embeddable":true,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/wp\/v2\/users\/6"}],"version-history":[{"count":10,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters\/117\/revisions"}],"predecessor-version":[{"id":767,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters\/117\/revisions\/767"}],"part":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/parts\/22"}],"metadata":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters\/117\/metadata\/"}],"wp:attachment":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/wp\/v2\/media?parent=117"}],"wp:term":[{"taxonomy":"chapter-type","embeddable":true,"href":"https:\/\/books.nbspla
bs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapter-type?post=117"},{"taxonomy":"contributor","embeddable":true,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/wp\/v2\/contributor?post=117"},{"taxonomy":"license","embeddable":true,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/wp\/v2\/license?post=117"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}