6.3. The Double-Edged Sword: AI in Learning Analytics and Proctoring
David Gardner
So far, we’ve explored the rights and regulations that govern how data should be collected and used. But regulations are only half of the story. The other half lies in the everyday applications of AI in higher education, where privacy, equity, and academic integrity collide. Two of the most widely debated uses of AI on campus are learning analytics and remote proctoring. These technologies are often marketed as solutions for efficiency and fairness, but they also raise pressing ethical questions about surveillance, bias, and trust in education.
AI-Powered Learning Analytics
Learning analytics uses student data to understand and optimize learning. AI extends this practice by enabling real-time interventions, generating customized study recommendations, and issuing predictive alerts when a student may be at risk of falling behind. On the surface, this seems like a win-win: faculty gain actionable insights, and students receive timely support.
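To make the idea of a predictive alert concrete, here is a minimal sketch, not any vendor's actual system: a simple model trained on hypothetical engagement features flags students whose predicted risk of falling behind crosses a threshold. The feature names, data, and threshold below are all illustrative assumptions.

```python
# Minimal sketch of a predictive "at-risk" alert.
# Feature names, data, and threshold are hypothetical, not a real product.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [weekly_logins, assignments_submitted, avg_quiz_score]
X_hist = np.array([
    [12, 5, 0.88], [3, 1, 0.41], [9, 4, 0.75],
    [2, 0, 0.30], [14, 6, 0.92], [4, 2, 0.55],
])
# 1 = the student ultimately fell behind, 0 = stayed on track
y_hist = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X_hist, y_hist)

# New semester: score current students and raise alerts above a cutoff.
X_current = np.array([[5, 2, 0.60], [11, 5, 0.85]])
risk = model.predict_proba(X_current)[:, 1]  # estimated P(falling behind)
ALERT_THRESHOLD = 0.5                        # a policy choice, not a technical constant
for student_id, p in enumerate(risk):
    if p >= ALERT_THRESHOLD:
        print(f"student {student_id}: risk={p:.2f} -> advisor alert")
```

Notice that the alert threshold is a human policy decision: where it is set determines how many students get flagged, and therefore how supportive or how intrusive the system feels in practice.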
Yet the story is more complex. Constant monitoring can easily begin to feel like surveillance, undermining student trust. Algorithmic bias presents another concern: if the historical data fed into the system already reflects inequities (for example, performance gaps across socioeconomic groups), the AI may reinforce those same patterns rather than disrupt them. This is where the ethical principles of fairness, accountability, and transparency (introduced in Module 3) come sharply into focus. The question becomes: how can we leverage data responsibly while still protecting student autonomy and equity?
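One concrete way to act on those principles is a disparity audit: before trusting a model's alerts, compare its error rates across demographic groups on held-out data. The sketch below is illustrative only; the group labels and outcomes are invented to show the mechanics of the check.

```python
# Sketch of a simple fairness audit: compare false-positive rates across
# groups before deploying alerts. All data here is hypothetical.
import numpy as np

y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])   # did the student actually fall behind?
y_flag = np.array([1, 0, 1, 0, 1, 1, 0, 1])   # did the model raise an alert?
group  = np.array(["A", "B", "A", "B", "A", "B", "A", "B"])

for g in np.unique(group):
    mask = (group == g) & (y_true == 0)       # students who were in fact fine
    fpr = y_flag[mask].mean()                 # share of them wrongly flagged
    print(f"group {g}: false-positive rate = {fpr:.2f}")
# A large gap between groups suggests the model is reproducing historical
# inequity rather than providing a neutral early warning.
```

An audit like this does not fix bias by itself, but it makes the fairness question measurable and assigns accountability before the system touches students.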
AI in Remote Proctoring
Remote proctoring tools surged in popularity during the pandemic and remain in use today. They promise scale, accessibility, and efficiency—enabling exams to be monitored without the need for a physical testing center. But these conveniences come at a cost. Proctoring software often requires extensive access to a user’s computer, camera, and even private living space, raising profound privacy issues. In effect, students must invite surveillance technology into their homes.
Equity is also at stake. Studies from the National Institute of Standards and Technology (NIST) have shown that facial recognition systems are less accurate for people of color and for women, producing disproportionate false positives. Students with disabilities or neurodivergence may also be unfairly flagged for “suspicious” behavior, such as fidgeting or avoiding eye contact. Where Module 4 focused on designing assessments that build integrity by design, AI proctoring is a technological enforcement mechanism, and it introduces its own distinct ethical risks.
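To see why even small accuracy gaps matter at scale, consider a back-of-envelope calculation. The rates and cohort sizes below are illustrative assumptions, not NIST's published figures; the point is how differential error rates compound over repeated exams.

```python
# Back-of-envelope: how differential false-positive rates compound at scale.
# Rates and cohort sizes are illustrative assumptions, not NIST figures.
cohort = 1_000                             # honest students per group
fpr = {"group_X": 0.01, "group_Y": 0.05}   # hypothetical per-exam false-flag rates
exams_per_term = 4

for g, rate in fpr.items():
    # Probability an honest student is flagged at least once during the term
    p_flagged = 1 - (1 - rate) ** exams_per_term
    print(f"{g}: ~{cohort * p_flagged:.0f} of {cohort} honest students "
          f"flagged at least once ({p_flagged:.1%})")
```

Under these assumed rates, roughly 39 honest students in one group but about 185 in the other would face an integrity investigation at least once in a term, for identical honest behavior. The harm is not evenly distributed, even when each individual error rate sounds small.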
📚 Weekly Reflection Journal
Reflection Prompt: Would you feel more supported or more surveilled if your learning activity were constantly tracked by AI tools? What about during an online exam? How do you balance the potential benefits of personalization and integrity with the costs to privacy and trust?
Looking Ahead
In the next chapter, we shift from specific technologies to a broader skillset: AI media literacy. Just as learning analytics and proctoring force us to question how data is collected and used, AI-generated content challenges us to evaluate the reliability, accuracy, and transparency of the information we consume.