Online assessment in the age of artificial intelligence

Bibliographic Details
Main Author: Alexander Stanoyevitch (Author)
Format: Book
Published: Springer, 2024-08-01.
Subjects:
Online Access:Connect to this object online.

MARC

LEADER 00000 am a22000003u 4500
001 doaj_ddbf67f1739842ef866c0a7165e7d877
042 |a dc 
100 1 0 |a Alexander Stanoyevitch  |e author 
245 0 0 |a Online assessment in the age of artificial intelligence 
260 |b Springer,   |c 2024-08-01T00:00:00Z. 
500 |a 10.1007/s44217-024-00212-9 
500 |a 2731-5525 
520 |a Abstract Online education, while not a new phenomenon, underwent a monumental shift during the COVID-19 pandemic, pushing educators and students alike into the uncharted waters of full-time digital learning. With this shift came renewed concerns about the integrity of online assessments. Amidst a landscape rapidly being reshaped by online exam/homework assistance platforms, whose stocks soared as students availed themselves of questionable exam assistance, and the emergence of sophisticated artificial intelligence tools like ChatGPT, the traditional methods of assessment faced unprecedented challenges. This paper presents the results of an observational study, using data from an introductory statistics course taught every semester by the author, and delves into the proliferation of cheating methods. Analyzing exam scores from before and after the introduction of ChatGPT, the research unpacks the extent of cheating and provides strategies to counteract this trend. The findings starkly illustrate significant increases in exam scores from when exams of similar difficulty were administered in person (pre-Covid) versus online. The format, difficulty, and grading of the exams were the same throughout. Although randomized controlled experiments are generally more effective than observational studies, we will indicate when we present the data why experiments would not be feasible for this research. In addition to presenting these findings, the paper offers some insights, based on the author's extensive experience, to guide educators in crafting more secure online assessments in this new era, both for courses at the introductory level and more advanced courses. The results and findings are relevant to introductory courses that can use multiple-choice exams in any subject, but the recommendations for upper-level courses will be relevant primarily to STEM subjects. 
The research underscores the pressing need for reinventing assessment techniques to uphold the sanctity of online education. 
546 |a EN 
690 |a Online assessments 
690 |a ChatGPT 
690 |a Integrity of exams 
690 |a Pandemic-induced education 
690 |a Online cheating 
690 |a Artificial intelligence in education 
690 |a Education 
690 |a L 
655 7 |a article  |2 local 
786 0 |n Discover Education, Vol 3, Iss 1, Pp 1-12 (2024) 
787 0 |n https://doi.org/10.1007/s44217-024-00212-9 
787 0 |n https://doaj.org/toc/2731-5525 
856 4 1 |u https://doaj.org/article/ddbf67f1739842ef866c0a7165e7d877  |z Connect to this object online.