Why AI-Enabled Online Proctoring Harms Education Rather Than Saving It

By Naveen Joshi

AI-based proctoring seemingly creates as many problems as it solves in remotely conducted student assessments.

Until 2020, remote classrooms were little more than an expensive alternative to traditional ones. Remote courses and diplomas were generally pursued by distance or part-time learners, and microcredentials, while in demand, were nowhere near as popular as they are now. Today, things are radically different. Even as the world enters a post-pandemic reality, the education sector may have changed for good, with remote learning and assessments set to be the norm rather than the exception.

Many educators use AI-based proctoring tools to maintain vigilance during online student assessments. While such tools bring real benefits to virtual assessments, there are valid arguments for why they are problematic for the future of online tests and, eventually, remote learning.
Here are two of the most contentious issues associated with AI-based proctoring for online tests.

Racial and Disability-Related Biases

The concept of AI bias and discrimination is not a new one. Such bias is generally caused by the narrow datasets used to train the machine learning models behind proctoring tools. As a result, AI-based proctoring systems may treat Black students unfairly by raising false alarms during pre-assessment identity and facial verification. Worse, such tools may also flag disabled test-takers during an assessment, implying that they are faking their disability to gain an edge over other students.

As stated above, the narrowness of training datasets is the main reason for these systems' biases. To resolve the problem, the developers of such tools must train them on diverse and inclusive datasets that reflect the multiculturalism of global education today.
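
To make the problem concrete, here is a minimal Python sketch of the kind of audit a developer might run before deployment: comparing a verification model's false-alarm rate across demographic groups. The sample records, column meanings, and group labels are all invented for illustration; they do not reflect any vendor's actual data or API.

from collections import defaultdict

# Each record: (group_label, model_flagged_as_mismatch, actually_a_mismatch).
# These sample records are invented purely for illustration.
records = [
    ("group_a", True, False),
    ("group_a", False, False),
    ("group_b", True, False),
    ("group_b", True, False),
    ("group_b", False, False),
]

false_alarms = defaultdict(int)
legitimate = defaultdict(int)

for group, flagged, mismatch in records:
    if not mismatch:            # the candidate was genuine...
        legitimate[group] += 1
        if flagged:             # ...but the system raised an alarm anyway
            false_alarms[group] += 1

for group in sorted(legitimate):
    rate = false_alarms[group] / legitimate[group]
    print(f"{group}: false-alarm rate = {rate:.0%}")

If one group's false-alarm rate is consistently higher than another's, the training data is a likely culprit, and that is exactly the disparity a more diverse dataset is meant to close.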

Incorrect Flagging of Candidates

AI-based proctoring tools monitor a candidate's body and eye movements to ascertain whether they are cheating or engaging in other malpractice during a test. Occasionally, such tools incorrectly disqualify remote test-takers who behave in a way the system classifies as cheating. For example, a student who reads the test questions aloud may be flagged. One of the more bizarre examples of AI-based proctoring gone wrong involved a pregnant candidate who took the bar exam while experiencing labor contractions, exactly the kind of movement a system could misclassify as suspicious behavior.

Resolving this problem is fairly straightforward: whenever an AI-based proctoring system flags a candidate for cheating, a human proctor must assess the situation and make the final call on the test-taker's continued participation.
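
In code, this human-in-the-loop principle might look like the following Python sketch. The class, function names, and confidence threshold are all hypothetical; the point is simply that the AI only escalates, while a human makes the binding decision.

from dataclasses import dataclass

@dataclass
class Flag:
    candidate_id: str
    reason: str          # e.g. "reading questions aloud"
    confidence: float    # model's confidence in the flag

def ai_review(flag: Flag) -> str:
    """The AI never disqualifies anyone; it only escalates or dismisses."""
    return "escalate_to_human" if flag.confidence >= 0.5 else "dismiss"

def human_review(flag: Flag, proctor_decision: bool) -> str:
    """A human proctor reviews the evidence and makes the final call."""
    return "disqualify" if proctor_decision else "allow_to_continue"

flag = Flag("candidate-42", "reading questions aloud", 0.91)
if ai_review(flag) == "escalate_to_human":
    # In practice the proctor would review the recording; here the
    # outcome is hard-coded for illustration.
    print(human_review(flag, proctor_decision=False))  # allow_to_continue

Keeping disqualification authority out of the automated path means a false flag costs the candidate a brief review, not their exam.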

AI-based proctoring tools add undeniable value to online assessments. However, as we have seen, they can also create unnecessary problems for educators and candidates. If AI-based online proctoring is to gain broad acceptance, the developers of such tools will need to make rapid and extensive improvements.
