Detecting performance difficulty of learners in colonoscopy: Evidence from eye-tracking

Eye-tracking can help decode the intricate control mechanisms in human performance. In healthcare, physicians-in-training require extensive practice to improve their skills. When a trainee encounters difficulty during practice, they need feedback from experts to improve their per...
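
For context, the full abstract (MARC field 520 below) describes a two-stage pipeline: DCGANs synthesize additional eye-gaze and pupil sequences, and an LSTM network classifies segments of the colonoscopic procedure as moments of navigation lost (MNL) or normal navigation. The following is only a minimal sketch of how such an LSTM classifier might be assembled, assuming PyTorch, a three-feature input (gaze x, gaze y, pupil size), and a simple training loop that mixes real windows with DCGAN-generated ones; it is not the authors' implementation, and the class name, function name, and hyperparameters are illustrative placeholders.

    # Hypothetical sketch of the LSTM stage described in the abstract (not the authors' code).
    import torch
    import torch.nn as nn

    class MNLClassifier(nn.Module):
        """LSTM over per-frame eye-tracking features (assumed: gaze x, gaze y, pupil size)."""
        def __init__(self, n_features=3, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 2)      # MNL vs. normal navigation

        def forward(self, x):                      # x: (batch, time, n_features)
            _, (h_n, _) = self.lstm(x)
            return self.head(h_n[-1])              # logits for the whole window

    # Hypothetical training loop mixing real windows with DCGAN-synthesized ones,
    # loosely mirroring the "human eye data + 1000 synthesized eye data" strategy.
    def train(model, real_x, real_y, synth_x, synth_y, epochs=20):
        x = torch.cat([real_x, synth_x])           # (N, time, n_features)
        y = torch.cat([real_y, synth_y])           # (N,) binary MNL labels
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
        return model

The window length, feature set, and the choice of a single-layer LSTM are assumptions made only to keep the sketch self-contained; the paper itself reports three different data-feeding strategies and evaluates them against expert video review.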


Bibliographic Details
Main Authors: Xin Liu (Author), Bin Zheng (Author), Xiaoqin Duan (Author), Wenjing He (Author), Yuandong Li (Author), Jinyu Zhao (Author), Chen Zhao (Author), Lin Wang (Author)
Format: Article
Published: Bern Open Publishing, 2021-07-01T00:00:00Z.
Subjects: colonoscopy; simulation; eye-tracking; navigation; Deep Convolutional Generative Adversarial Networks (DCGANs); Long Short-Term Memory (LSTM) networks; Human anatomy
Online Access: Connect to this object online.

MARC

LEADER 00000 am a22000003u 4500
001 doaj_4e29f13d30a04b97a20fedd8bb894933
042 |a dc 
100 1 0 |a Xin Liu  |e author 
700 1 0 |a Bin Zheng  |e author 
700 1 0 |a Xiaoqin Duan  |e author 
700 1 0 |a Wenjing He  |e author 
700 1 0 |a Yuandong Li  |e author 
700 1 0 |a Jinyu Zhao  |e author 
700 1 0 |a Chen Zhao  |e author 
700 1 0 |a Lin Wang  |e author 
245 0 0 |a Detecting performance difficulty of learners in colonoscopy: Evidence from eye-tracking 
260 |b Bern Open Publishing,   |c 2021-07-01T00:00:00Z. 
500 |a 10.16910/jemr.14.2.5 
500 |a 1995-8692 
520 |a Eye-tracking can help decode the intricate control mechanisms in human performance. In healthcare, physicians-in-training require extensive practice to improve their skills. When a trainee encounters difficulty during practice, they need feedback from experts to improve their performance. Such personal feedback is time-consuming and subject to bias. In this study, we tracked the eye movements of trainees during simulated colonoscopy. We applied deep learning algorithms to eye-tracking metrics to detect moments of navigation lost (MNL), a signature sign of performance difficulty during colonoscopy. Basic human eye-gaze and pupil characteristics were learned and verified by deep convolutional generative adversarial networks (DCGANs); the generated data were fed to Long Short-Term Memory (LSTM) networks with three different data-feeding strategies to classify MNLs within the entire colonoscopic procedure. Outputs from deep learning were compared to expert judgments of MNLs based on the colonoscopic videos. The best classification outcome was achieved when human eye data were combined with 1000 synthesized eye-data samples, yielding optimized accuracy (90%), sensitivity (90%), and specificity (88%). This study builds an important foundation for our work on developing a self-adaptive education system for training healthcare skills in simulation. 
546 |a EN 
690 |a colonoscopy 
690 |a simulation 
690 |a eye-tracking 
690 |a navigation 
690 |a Deep Convolutional Generative Adversarial Networks (DCGANs) 
690 |a Long Short-Term Memory (LSTM) networks 
690 |a Human anatomy 
690 |a QM1-695 
655 7 |a article  |2 local 
786 0 |n Journal of Eye Movement Research, Vol 14, Iss 2 (2021) 
787 0 |n https://bop.unibe.ch/JEMR/article/view/7515 
787 0 |n https://doaj.org/toc/1995-8692 
856 4 1 |u https://doaj.org/article/4e29f13d30a04b97a20fedd8bb894933  |z Connect to this object online.