Atlas English

Atlas English at the Eaquals International Conference, 2022

It was an honour to give a presentation titled “How to run a level test across 17 time zones: coping with security and operational issues in difficult times” at the 2022 Eaquals International Conference. In this post we’ll explain some of the presentation’s key themes.

There has been a clear shift to online assessments over the past two years, driven principally by the Covid-19 pandemic. For a world in lockdown, online testing had one clear USP: it can be done remotely. But online testing can be much more than simply “remote”. It opens doors to countless options from a test design and test delivery perspective.

To illustrate this point, our presentation analysed two testing projects we’ve had the pleasure of working on over the past few years. For brevity, we will omit the finer details of the two case studies and simply share the key takeaways: how test delivery was adapted to meet the specific challenges of each context.


CASE STUDY 1

Project Context: Our first case study examined a project run with the Gulf Transportation Company (the name has been anonymised at the client’s request) in April 2020. The project was forced to shift rapidly to online testing when Covid-19 derailed GTC’s plans for an in-person assessment of 2,000 job applicants.

Challenges: The key challenges were administrative and twofold. Firstly, test delivery had to accommodate test takers spread globally across 17 time zones. Secondly, for data protection purposes GTC required that test-taker details be anonymised, which ruled out using the Admin Panel’s automated test creation features (read more on those features here).

Test delivery adaptations: To roll this testing project out successfully, the team concentrated its available resources on test administration. To meet GTC’s data privacy requirements, a custom system was set up to anonymise test-taker identities for everyone except GTC’s HR department. This meant the tests could not be delivered through the Admin Panel’s automated email system, so test takers were contacted manually and results had to be cross-referenced and mapped back to individual candidates. Additionally, the original plan of having all test takers sit the test at the same time was not viable with candidates spread across 17 time zones, so the test window was widened to 48 hours.
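
To make that anonymisation-and-mapping workflow a little more concrete, here is a minimal sketch of one way pseudonymised rosters and result re-identification could be handled. Everything in it (the secret key, the candidate records, the ID format) is invented for illustration; it is not the custom system actually built for this project.

```python
import hmac
import hashlib

# Illustrative only: pseudonymise candidate details so that only the party
# holding the secret key (here, the HR department) can map anonymous test
# IDs back to real identities.
SECRET_KEY = b"held-by-hr-only"  # assumption: stored securely by HR, never shared

def anonymous_id(email: str) -> str:
    """Derive a stable pseudonym from a candidate's email address."""
    digest = hmac.new(SECRET_KEY, email.lower().encode(), hashlib.sha256)
    return "TT-" + digest.hexdigest()[:10].upper()

def build_rosters(candidates: list[dict]) -> tuple[list[dict], dict[str, dict]]:
    """Split candidate data into an anonymised roster (shared for test delivery)
    and a private lookup table (retained by HR for re-identification)."""
    anonymised, lookup = [], {}
    for person in candidates:
        tid = anonymous_id(person["email"])
        anonymised.append({"test_id": tid, "time_zone": person["time_zone"]})
        lookup[tid] = person
    return anonymised, lookup

if __name__ == "__main__":
    candidates = [
        {"name": "A. Example", "email": "a@example.com", "time_zone": "UTC+3"},
        {"name": "B. Example", "email": "b@example.com", "time_zone": "UTC-5"},
    ]
    roster, private_lookup = build_rosters(candidates)
    # A result returned against a test_id can later be cross-referenced by HR:
    result = {"test_id": roster[0]["test_id"], "cefr_level": "B1"}
    print(private_lookup[result["test_id"]]["name"], "->", result["cefr_level"])
```

The useful property of a keyed mapping like this is that the delivery side only ever sees the anonymous IDs, while the holder of the key can still reconcile results with named candidates afterwards.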


CASE STUDY 2

Project Context: Our second case study examined the most recent project we ran with DAAD and HOPES LEB (to read about other projects we’ve run with them, click here). The first phase began in February 2022, when we tested the English level of 300 HOPES scholarship students in order to place them in learning groups appropriate to their level.

Challenges: Test takers were based in Lebanon and, at the time of testing, faced unreliable internet connections and frequent power outages. The key obstacles came from a test-delivery perspective: how could an online test be conducted under these conditions? Moreover, given the low English level of many test takers (30% were at A1 or A2 level), a second challenge was communicating to these students how to take the test and what was expected of them.

Test delivery adaptations: The foremost concern was to ensure that unreliable internet connections did not affect the students’ ability to take the test. To address this, we rolled out the test’s Offline Mode. To communicate test details and instructions to low-level English speakers, we translated all support materials (including the test-taker guide, DPT’s ‘Before the test’ section, and the automated welcome email) into Arabic.


It’s interesting to juxtapose these case studies because, superficially at least, they share many commonalities. Both were English level testing projects with partners based in the Middle East, and in both the candidates were assessed remotely. However, the operational priorities were very different, and so were the setup and delivery of each test. This highlights the importance of taking the time to pinpoint the purpose of the test and to understand the nuances of each testing project.


Maximising the potential of online test delivery

Looking to the future of online test delivery, there are a number of ways in which this area promises to revolutionise traditional working practices. Consider the opportunities when testing SEND students, when designing question types, or when deciding on the level of test security.

Online testing offers an abundance of features that can be straightforwardly customised to specific contexts. When it comes to testing SEND (Special Educational Needs and/or Disabilities) students, this presents a big opportunity. Over the past few years, we have worked on projects in which test parameters were adjusted for SEND students to make the testing process more accessible. These adaptations have included additional test time, audio/video playback, enlarged test content, text-to-speech software, and allowing students to take the test in a comfortable environment. There is certainly much scope for this area to be developed even further.
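
As a purely illustrative sketch, per-candidate accommodations can be thought of as a small set of overrides applied to the default test settings at creation time. The parameter names below are invented for the example and are not actual DPT settings.

```python
from dataclasses import dataclass, replace

# Hypothetical illustration of per-candidate accommodations: every field
# name here is invented for the example, not a real DPT parameter.
@dataclass(frozen=True)
class TestSettings:
    time_limit_minutes: int = 60
    allow_audio_replay: bool = False
    font_scale: float = 1.0
    text_to_speech: bool = False

DEFAULTS = TestSettings()

def with_accommodations(base: TestSettings, **overrides) -> TestSettings:
    """Return a copy of the default settings with SEND adjustments applied."""
    return replace(base, **overrides)

# Example: extra time, replayable audio and enlarged content for one candidate.
send_settings = with_accommodations(
    DEFAULTS,
    time_limit_minutes=90,
    allow_audio_replay=True,
    font_scale=1.5,
    text_to_speech=True,
)
print(send_settings)
```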

In the same vein, test items can be designed specifically to exploit the capabilities of online, multimedia delivery. With paper-based placement tests, multiple-choice questions have long been the default. Testing in the online realm drastically changes what is possible. Question types like sentence reconstruction and word placement take a holistic look at structure and syntax, so candidates engage with both the language and the test on a deeper level.

Lastly, let’s consider test security. This is often seen as the downside of online testing, as test invigilation becomes more challenging. That is certainly true if the type of security we have in mind is the typical paper-based variety: all the students sitting in one exam room, supervised by invigilators. There are remote proctoring providers that attempt to bridge this gap with live or AI-powered webcam and audio monitoring. While this is a good option in high-stakes testing situations, we have often found remote proctoring to be a bottleneck, from both an operational and a cost perspective, in large-scale testing contexts.

There are many other options to consider when calibrating the level of test security to the function of the test. For instance, randomised and adaptive test designs mean no two candidates see the same set of questions, which eliminates the risk of students copying from one another. Sophisticated question types like sentence reconstruction make it much harder for students to cheat by looking up the right answer online. And DPT’s Anomaly Tracker flags any suspicious test-taker activity to the test administrator.
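
To illustrate the randomisation idea, here is a generic sketch (not DPT’s actual selection logic) in which each candidate’s paper is drawn from a larger item bank, so no two candidates are likely to receive the same questions in the same order. The item bank and blueprint below are invented for the example.

```python
import random

# Generic illustration of randomised test assembly: each candidate receives
# a different sample (and ordering) of items drawn from a larger bank.
ITEM_BANK = {
    "grammar": [f"G{i:03d}" for i in range(1, 41)],
    "vocabulary": [f"V{i:03d}" for i in range(1, 41)],
    "reading": [f"R{i:03d}" for i in range(1, 21)],
}

BLUEPRINT = {"grammar": 10, "vocabulary": 10, "reading": 5}  # items per section

def assemble_test(candidate_id: str) -> list[str]:
    """Draw a per-candidate question set; seeding on the candidate ID makes
    the draw reproducible if the paper ever needs to be reviewed later."""
    rng = random.Random(candidate_id)
    paper = []
    for section, count in BLUEPRINT.items():
        paper.extend(rng.sample(ITEM_BANK[section], count))
    rng.shuffle(paper)
    return paper

print(assemble_test("TT-0001")[:5])
print(assemble_test("TT-0002")[:5])  # a different set for a different candidate
```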

In these ways, online testing presents a dizzying array of opportunities to test administrators. The key is to identify the purpose of the test being run and to adapt the test’s features to take advantage of what is on offer.