{"id":478,"date":"2025-09-29T15:27:53","date_gmt":"2025-09-29T15:27:53","guid":{"rendered":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/?post_type=chapter&#038;p=478"},"modified":"2025-11-10T19:02:19","modified_gmt":"2025-11-10T19:02:19","slug":"6-3-the-double-edged-sword-ai-in-learning-analytics-and-proctoring","status":"publish","type":"chapter","link":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/chapter\/6-3-the-double-edged-sword-ai-in-learning-analytics-and-proctoring\/","title":{"raw":"6.3. The Double-Edged Sword: AI in Learning Analytics and Proctoring","rendered":"6.3. The Double-Edged Sword: AI in Learning Analytics and Proctoring"},"content":{"raw":"So far, we\u2019ve explored the rights and regulations that govern how data should be collected and used. But regulations are only half of the story. The other half lies in the everyday applications of AI in higher education, where privacy, equity, and academic integrity collide. Two of the most widely debated uses of AI on campus are <strong>learning analytics<\/strong> and <strong>remote proctoring<\/strong>. These technologies are often marketed as solutions for efficiency and fairness, but they also raise pressing ethical questions about surveillance, bias, and trust in education.\r\n<h3>AI-Powered Learning Analytics<\/h3>\r\nLearning analytics uses student data to understand and optimize learning. AI elevates this practice by providing real-time interventions, customized study recommendations, and predictive alerts when a student may be at risk of falling behind. On the surface, this seems like a win-win: faculty gain actionable insights, and students receive timely support.\r\n\r\nYet the story is more complex. Constant monitoring can easily begin to feel like surveillance, undermining student trust. 
Algorithmic bias presents another concern: if the historical data fed into the system already reflects inequities (for example, performance gaps across socioeconomic groups), the AI may reinforce those same patterns rather than disrupt them. This is where the <strong>ethical principles of fairness, accountability, and transparency<\/strong> (introduced in Module 3) come sharply into focus. The question becomes: how can we leverage data responsibly while still protecting student autonomy and equity?\r\n<h3>AI in Remote Proctoring<\/h3>\r\nRemote proctoring tools surged in popularity during the pandemic and remain in use today. They promise scale, accessibility, and efficiency\u2014enabling exams to be monitored without the need for a physical testing center. But these conveniences come at a cost. Proctoring software often requires extensive access to a user\u2019s computer, camera, and even private living space, raising profound privacy issues. In effect, students must invite surveillance technology into their homes.\r\n\r\nEquity is also at stake. Studies from the <a href=\"https:\/\/www.nist.gov\/programs-projects\/face-recognition-vendor-test-frvt\">National Institute of Standards and Technology (NIST)<\/a> have shown that facial recognition systems are less accurate for people of color and women, resulting in disproportionate false positives. Students with disabilities or neurodivergence may also be unfairly flagged for \u201csuspicious\u201d behavior, such as fidgeting or avoiding eye contact. 
In this sense, while <strong>Module 4 focused on designing assessments that build integrity by design<\/strong>, AI proctoring represents a technological enforcement mechanism, one that introduces its own distinct ethical risks.\r\n<h2>\ud83d\udcda Weekly Reflection Journal<\/h2>\r\n<div style=\"border: 2px solid #2e7d32;background-color: #f9fff9;border-radius: 6px;padding: 12px;margin: 1em 0\">\r\n\r\n<strong>Reflection Prompt: <\/strong>Would you feel more supported or more surveilled if your learning\/workflow was constantly tracked by AI tools? What about during an online exam? How do you balance the potential benefits of personalization and integrity with the costs to privacy and trust?\r\n\r\n<\/div>\r\n<h3>Self-Check: Benefits and Risks of AI Learning Analytics<\/h3>\r\n[h5p id=\"23\"]\r\n<h3>More on Digital Access, AI, and Data Privacy<\/h3>\r\n<div class=\"pb-embed\">\r\n\r\n[embed]https:\/\/www.youtube.com\/embed\/ORkyI4AHPR0[\/embed]\r\n\r\n<\/div>\r\n<h3>Looking Ahead<\/h3>\r\nIn the next chapter, we shift from specific technologies to a broader skillset: <strong>AI media literacy<\/strong>. Just as learning analytics and proctoring force us to question how data is collected and used, AI-generated content challenges us to evaluate the reliability, accuracy, and transparency of the information we consume.","rendered":"<p>So far, we\u2019ve explored the rights and regulations that govern how data should be collected and used. But regulations are only half of the story. The other half lies in the everyday applications of AI in higher education, where privacy, equity, and academic integrity collide. Two of the most widely debated uses of AI on campus are <strong>learning analytics<\/strong> and <strong>remote proctoring<\/strong>. 
These technologies are often marketed as solutions for efficiency and fairness, but they also raise pressing ethical questions about surveillance, bias, and trust in education.<\/p>\n<h3>AI-Powered Learning Analytics<\/h3>\n<p>Learning analytics uses student data to understand and optimize learning. AI elevates this practice by providing real-time interventions, customized study recommendations, and predictive alerts when a student may be at risk of falling behind. On the surface, this seems like a win-win: faculty gain actionable insights, and students receive timely support.<\/p>\n<p>Yet the story is more complex. Constant monitoring can easily begin to feel like surveillance, undermining student trust. Algorithmic bias presents another concern: if the historical data fed into the system already reflects inequities (for example, performance gaps across socioeconomic groups), the AI may reinforce those same patterns rather than disrupt them. This is where the <strong>ethical principles of fairness, accountability, and transparency<\/strong> (introduced in Module 3) come sharply into focus. The question becomes: how can we leverage data responsibly while still protecting student autonomy and equity?<\/p>\n<h3>AI in Remote Proctoring<\/h3>\n<p>Remote proctoring tools surged in popularity during the pandemic and remain in use today. They promise scale, accessibility, and efficiency\u2014enabling exams to be monitored without the need for a physical testing center. But these conveniences come at a cost. Proctoring software often requires extensive access to a user\u2019s computer, camera, and even private living space, raising profound privacy issues. In effect, students must invite surveillance technology into their homes.<\/p>\n<p>Equity is also at stake. 
Studies from the <a href=\"https:\/\/www.nist.gov\/programs-projects\/face-recognition-vendor-test-frvt\">National Institute of Standards and Technology (NIST)<\/a> have shown that facial recognition systems are less accurate for people of color and women, resulting in disproportionate false positives. Students with disabilities or neurodivergence may also be unfairly flagged for \u201csuspicious\u201d behavior, such as fidgeting or avoiding eye contact. In this sense, while <strong>Module 4 focused on designing assessments that build integrity by design<\/strong>, AI proctoring represents a technological enforcement mechanism, one that introduces its own distinct ethical risks.<\/p>\n<h2>\ud83d\udcda Weekly Reflection Journal<\/h2>\n<div style=\"border: 2px solid #2e7d32;background-color: #f9fff9;border-radius: 6px;padding: 12px;margin: 1em 0\">\n<p><strong>Reflection Prompt: <\/strong>Would you feel more supported or more surveilled if your learning\/workflow was constantly tracked by AI tools? What about during an online exam? How do you balance the potential benefits of personalization and integrity with the costs to privacy and trust?<\/p>\n<\/div>\n<h3>Self-Check: Benefits and Risks of AI Learning Analytics<\/h3>\n<div id=\"h5p-23\">\n<div class=\"h5p-iframe-wrapper\"><iframe id=\"h5p-iframe-23\" class=\"h5p-iframe\" data-content-id=\"23\" style=\"height:1px\" src=\"about:blank\" frameBorder=\"0\" scrolling=\"no\" title=\"6.3. 
AI in Learning Analytics and Proctoring\"><\/iframe><\/div>\n<\/div>\n<h3>More on Digital Access, AI, and Data Privacy<\/h3>\n<div class=\"pb-embed\">\n<p><iframe loading=\"lazy\" id=\"oembed-1\" title=\"When Digital Access, AI, and Data Privacy Collide\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/ORkyI4AHPR0?feature=oembed&#38;rel=0\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"><\/iframe><\/p>\n<\/div>\n<h3>Looking Ahead<\/h3>\n<p>In the next chapter, we shift from specific technologies to a broader skillset: <strong>AI media literacy<\/strong>. Just as learning analytics and proctoring force us to question how data is collected and used, AI-generated content challenges us to evaluate the reliability, accuracy, and transparency of the information we consume.<\/p>\n","protected":false},"author":6,"menu_order":3,"template":"","meta":{"pb_show_title":"on","pb_short_title":"","pb_subtitle":"","pb_authors":["david-gardner"],"pb_section_license":"","_links_to":"","_links_to_target":""},"chapter-type":[],"contributor":[67],"license":[],"class_list":["post-478","chapter","type-chapter","status-publish","hentry","contributor-david-gardner"],"part":38,"jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters\/478","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters"}],"about":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/wp\/v2\/types\/chapter"}],"author":[{"embeddable":true,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/wp\/v2\/users\/6"}],"version-history":[{"count":5,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters\/478\/revisions"}],"predecessor-version":[{"id":687,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters\/478\/revisions\/687"}],"part":[{"href":"https:\/\/books.nbspl
abs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/parts\/38"}],"metadata":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapters\/478\/metadata\/"}],"wp:attachment":[{"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/wp\/v2\/media?parent=478"}],"wp:term":[{"taxonomy":"chapter-type","embeddable":true,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/pressbooks\/v2\/chapter-type?post=478"},{"taxonomy":"contributor","embeddable":true,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/wp\/v2\/contributor?post=478"},{"taxonomy":"license","embeddable":true,"href":"https:\/\/books.nbsplabs.com\/ai-lit-intro\/wp-json\/wp\/v2\/license?post=478"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}