This post was first published at EdSurge on April 9, 2017
Thoughts about online proctoring have been taking up more of my time and energy than I’d like to admit. Rather than spending most of my time helping people become better online teachers, I have been figuring out how to meet two competing objectives: increase online course offerings and avoid adopting an online proctoring system.
My first objective, to increase the number of online courses, is grounded in research and in campus goals for student success. Research over the last five years has shown that students want choices in how they take their classes. In fact, when students have the option to take a course online or in person, studies show they are more likely to stay in college.
The goal of increasing online and blended course offerings is closely aligned with campus goals of improving student success. However, it is in direct conflict with my desire to avoid online proctoring tools, which mimic poor in-person assessment practices. This issue arises because many faculty who choose to teach online are unwilling to do so unless they can rely on the same methods of assessment they use in their in-person courses. These practices often include an exam with a professor monitoring students to reduce cheating.
This kind of resistance from faculty moving to online teaching is not new. Since the early days of online instruction, the response of many new instructors has been to figure out how to transfer elements of their face-to-face class into the online format. In response, education technology companies have been quick to create products that attempt to replicate in-person teaching. Some examples include learning management systems, lecture capture tools, and early online meeting systems.
The problem with attempting to replicate in-person teaching online, though, is that it doesn’t work. Take lecture capture, for example: even a good 45-minute in-person lecture can put the most engaged students to sleep once it’s put online. The platform is different and the environment is different, which is why the pedagogy and assessment need to be different, too.
Let’s examine the issues involved with online proctoring, an approach that typically uses a webcam and microphone and claims to detect whether a student is, for instance, talking to someone else in the room or simply looking away from the computer screen. First, this is creepy and anxiety-producing. (The idea of your every move being watched through a webcam while taking an exam in your home doesn’t exactly calm one’s nerves.) More than that, it goes against much of what we teach students about online privacy. When else would we encourage students to give a stranger access to their webcam?
Second, online proctoring systems, such as ProctorU or Proctorio, replicate a practice that isn’t effective in person. Exams are only good for a few things: managing faculty workload and assessing low-level skills and content knowledge. What they aren’t good at is demonstrating student learning or mastery of a topic. As authors Rena Palloff and Keith Pratt discuss in their book “Assessing the Online Learner: Resources and Strategies for Faculty,” online exams typically measure skills that require memorization of facts, whereas learning objectives are often written around one’s ability to create, evaluate and analyze course material.
So, why would we spend precious and limited resources on replicating a flawed way of measuring student understanding? Wouldn’t that money be better spent designing assessments that measure student learning and support the uniqueness of the online environment?
In designing online assessments without a proctoring system, we need to take into account the online environment and all it offers. First, time is different: the class period doesn’t begin or end. We know that all students are online, so open book and open notes should be assumed. We also know that students will have access to peers, parents, friends and neighbors.
Knowing these variables, how would you have your students demonstrate their knowledge? Authentic assessments, rather than multiple-choice or other online exams, are one alternative worth exploring. For example, in a chemistry course, students could make a video of themselves working through a set of problems and explaining the process. This would allow instructors to better understand students’ thinking and identify areas where they are struggling. Another example could be in a psychology course, where students could curate and evaluate a set of resources on a given topic to demonstrate their ability to find and critically analyze online information.
Trying to control online learning variables, as online proctoring systems attempt to do, is futile. Instead, let’s embrace them and discover improved ways of assessing student learning online.