{"id":53,"date":"2025-08-27T18:02:57","date_gmt":"2025-08-27T18:02:57","guid":{"rendered":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/?post_type=chapter&#038;p=53"},"modified":"2025-10-05T14:11:37","modified_gmt":"2025-10-05T14:11:37","slug":"chapter-1-5-how-ai-learns-the-core-mechanisms","status":"publish","type":"chapter","link":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/chapter\/chapter-1-5-how-ai-learns-the-core-mechanisms\/","title":{"raw":"1.5. How AI Learns: The Core Mechanisms","rendered":"1.5. How AI Learns: The Core Mechanisms"},"content":{"raw":"In this chapter, we now move under the hood of AI to examine the mechanics of how machines learn. Neural networks, natural language processing (NLP), and large language models (LLMs) are the engines that allow modern systems to generate convincing outputs. By understanding these components, we can better judge what AI can and cannot do.\r\n<h2>Neural Networks: Brains Made of Math<\/h2>\r\n[caption id=\"attachment_182\" align=\"aligncenter\" width=\"512\"]<img class=\"wp-image-182\" style=\"font-size: 1em\" src=\"http:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-content\/uploads\/sites\/3\/2025\/08\/AI-NN-vs-Brain-300x221.png\" alt=\"A comparison chart with two columns. On the left, labeled \u201cAI Neural Networks,\u201d it lists: Nodes (\u201cartificial neurons\u201d), Weights (adjust connections), Learn through patterns, Distributed processing, and Latent space. On the right, labeled \u201cHuman Brain,\u201d it lists: Neurons, Synapses (strengthen\/weaken connections), Learn through patterns, Distributed processing, and Mental maps. An illustration of a neural network diagram is on the left and a drawing of a human brain is on the right.\" width=\"512\" height=\"378\" \/> Image by Jace Hargis[\/caption]\r\n\r\nNeural networks are inspired by the structure of human brains, but they are not biological; they are mathematical models made of layers of nodes, or \u201cartificial neurons.\u201d\r\n<ul>\r\n \t<li><strong>Nodes:<\/strong> units where data flows and decisions are made.<\/li>\r\n \t<li><strong>Weights:<\/strong> adjustable strengths of the connections between nodes, tuned during training.<\/li>\r\n \t<li><strong>Patterns:<\/strong> the system improves by refining which connections lead to accurate outcomes.<\/li>\r\n \t<li><strong>Distributed processing:<\/strong> information is represented across the network, not stored in a single place.<\/li>\r\n \t<li><strong>Latent space:<\/strong> abstract internal representations where models capture relationships we did not explicitly teach them.<\/li>\r\n<\/ul>\r\n<details><summary>\ud83d\udcd6 Analogy: Trails in a Forest<\/summary>\r\n<div style=\"border: 1px solid #ddd;padding: 0.8em;margin-top: 0.5em;background: #fafafa\">\r\n\r\nThink of learning as carving trails. In human learning, the \u201ctrail\u201d is not just a path in the brain. It carries meaning, emotion, and context from lived experience. In machine learning, the trail is a strengthened mathematical pathway through a network: connections (weights) get adjusted so the system follows routes that reduce error. Both create paths, but only one is anchored in experience and meaning.\r\n\r\n<\/div>\r\n<\/details>\r\n<h2>Natural Language Processing (NLP)<\/h2>\r\nHow do machines work with human language? This is the domain of NLP. NLP enables machines to parse, generate, and manipulate language. It does not mean the system understands in a human sense. 
\r\n<h2>Natural Language Processing (NLP)<\/h2>\r\nHow do machines work with human language? This is the domain of NLP, which enables machines to parse, generate, and manipulate language. It does not mean the system understands in a human sense. Instead, it encodes, stores, and retrieves patterns in text.\r\n<ul>\r\n \t<li><strong>Encoding:<\/strong> words are converted into data (tokens).<\/li>\r\n \t<li><strong>Storing:<\/strong> tokens are embedded in vectors that capture statistical relationships.<\/li>\r\n \t<li><strong>Retrieving:<\/strong> models generate new text by predicting the most likely next word given the surrounding context.<\/li>\r\n<\/ul>\r\nFor example, in the sentence <em>\u201cThe therapist sat with the couple during their \u2026\u201d<\/em>, a human supplies \u201csession\u201d based on context. An AI model does the same statistically, scanning nearby words to predict the most probable continuation.
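\r\n<p>As a rough sketch of that statistical guessing, the Python toy below counts which word follows which in a tiny invented corpus (splitting on spaces as a crude stand-in for real tokenization), then \u201cretrieves\u201d a continuation by picking the most frequent follower. Actual models use learned embeddings and far wider context, but the predict-the-next-word objective is the same.<\/p>\r\n<pre><code>from collections import Counter, defaultdict\r\n\r\n# A tiny invented corpus; .split() stands in for real tokenization.\r\ncorpus = (\r\n    \"the therapist sat with the couple during their session \"\r\n    \"the couple returned for their session the next week\"\r\n).split()\r\n\r\n# Count how often each word follows each other word.\r\nfollowers = defaultdict(Counter)\r\nfor word, nxt in zip(corpus, corpus[1:]):\r\n    followers[word][nxt] += 1\r\n\r\ndef predict_next(word):\r\n    # Return the most probable continuation seen so far.\r\n    return followers[word].most_common(1)[0][0]\r\n\r\nprint(predict_next(\"their\"))  # prints \"session\"<\/code><\/pre>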
\r\n<h2>Large Language Models (LLMs)<\/h2>\r\nLLMs scale NLP with deep learning. They are trained on enormous corpora of text, including books, articles, websites, and conversations, and in the process they capture the rhythms and structures of human language. They are not memorizing this material; they are learning to predict its patterns at scale.\r\n\r\n<strong>Capabilities:<\/strong>\r\n<ul>\r\n \t<li>Generate essays, summaries, and explanations.<\/li>\r\n \t<li>Classify or translate text.<\/li>\r\n \t<li>Brainstorm, code, or mimic tone and style.<\/li>\r\n<\/ul>\r\nLLMs produce fluent, human-like responses, but remember: they do not <em>understand<\/em> in human terms. They excel at pattern prediction, not meaning.\r\n<h2>From LLMs to GPT<\/h2>\r\nOne of the best-known LLM architectures is GPT, or Generative Pre-trained Transformer.\r\n<ul>\r\n \t<li><strong>Generative (G):<\/strong> produces original text\u2014stories, summaries, answers, even code.<\/li>\r\n \t<li><strong>Pre-trained (P):<\/strong> trained on massive datasets before being fine-tuned for specific tasks.<\/li>\r\n \t<li><strong>Transformer (T):<\/strong> a neural network design that captures long-range context and relationships in text (a rough sketch follows this section).<\/li>\r\n<\/ul>\r\nGPT models respond to prompts (questions, commands, even fragments) by predicting what comes next based on everything they have seen in training. If LLMs are the brain, GPT is the conversational voice.
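\r\n<p>The \u201cT\u201d is the hardest letter to picture, so here is a shape-level numerical sketch of the attention step at the heart of a transformer, using tiny made-up matrices. Real models use thousands of dimensions and many stacked layers, so treat this as an illustration of the core computation, not an implementation.<\/p>\r\n<pre><code>import numpy as np\r\n\r\ndef attention(Q, K, V):\r\n    # Scaled dot-product attention: each position scores every other\r\n    # position, then mixes in their values according to those scores.\r\n    scores = Q @ K.T \/ np.sqrt(K.shape[-1])\r\n    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))\r\n    weights \/= weights.sum(axis=-1, keepdims=True)   # softmax over rows\r\n    return weights @ V\r\n\r\n# Three token positions, 4-dimensional vectors (made-up numbers).\r\nrng = np.random.default_rng(0)\r\nQ, K, V = (rng.normal(size=(3, 4)) for _ in range(3))\r\nprint(attention(Q, K, V).shape)  # (3, 4): one context-mixed vector per token<\/code><\/pre>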
\r\n<h2>\ud83d\udcda Weekly Reflection Journal<\/h2>\r\n<div style=\"border: 2px solid #4CAF50;padding: 1em;margin: 1em 0;background-color: #f9fff9\"><strong>Reflection Prompt:<\/strong>\r\nHow does the concept of \u201clatent space\u201d in AI compare with your own experiences of intuition or \u201cgut feelings\u201d? Write a short note reflecting on whether the metaphor feels accurate or misleading.<\/div>\r\n<h2>More on Neural Networks<\/h2>\r\nClick through the labeled hotspots to see how data moves through an AI model.\r\n\r\n<!-- H5P: 1.5-A Image Hotspots -->\r\n[h5p id=\"7\"]\r\n<h2>Quick Self-Check<\/h2>\r\nTest your understanding of NLP and LLMs by filling in the missing terms.\r\n\r\n<!-- H5P: 1.5-B Fill in the Blanks -->\r\n[h5p id=\"8\"]\r\n<h2>Looking Ahead<\/h2>\r\nNext, we turn to a human-centered approach to AI: how educators and professionals can frame these technologies in ways that serve human learning and flourishing."
,"protected":false},"author":1,"menu_order":5,"template":"","meta":{"pb_show_title":"on","pb_short_title":"","pb_subtitle":"","pb_authors":["hlau2"],"pb_section_license":"cc-by-nc-sa","_links_to":"","_links_to_target":""},"chapter-type":[49],"contributor":[62],"license":[57],"class_list":["post-53","chapter","type-chapter","status-publish","hentry","chapter-type-numberless","contributor-hlau2","license-cc-by-nc-sa"],"part":22,"jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters\/53","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters"}],"about":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/wp\/v2\/types\/chapter"}],"author":[{"embeddable":true,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/wp\/v2\/users\/1"}],"version-history":[{"count":19,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters\/53\/revisions"}],"predecessor-version":[{"id":702,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters\/53\/revisions\/702"}],"part":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/parts\/22"}],"metadata":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters\/53\/metadata\/"}],"wp:attachment":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/wp\/v2\/media?parent=53"}],"wp:term":[{"taxonomy":"chapter-type","embeddable":true,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapter-type?post=53"},{"taxonomy":"contributor","embeddable":true,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/wp\/v2\/contributor?post=53"},{"taxonomy":"license","embeddable":true,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/wp\/v2\/license?post=53"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}