Gaze data collection apparatus in the 2D (top) and 3D (bottom) VR environments, where participants perform memorization, distraction, and recall tasks.

Towards Gaze-based Memory Modeling in 2D and 3D Virtual Environments

In submission to ACM Transactions on Applied Perception
Sunniva Liu, Abhijat Biswas, Henny Admoni, David Lindlbauer

Interactive systems benefit from information about users' cognitive states, such as memory. Current technologies often fall short in modeling users' memory, which is needed to support applications ranging from learning and gaming to healthcare. Eye movements provide a window into memory processes through visual attention. Existing gaze-based memory modeling approaches are limited to predicting recognition, i.e., whether a presented stimulus has been observed before. In this work, we move beyond recognition toward free recall, i.e., retrieving an object from memory without cues. We developed a novel approach to predict free recall of objects in both 2D and 3D virtual scenes using only gaze data with a Convolutional Neural Network. Our results indicate that predicting visual memory recall from gaze is feasible with an accuracy significantly above chance. Our system achieves an AUC (Area Under the Curve) of 0.69 in 2D scenes and 0.66 in 3D scenes. As a proof of concept, our work opens new directions for memory modeling in HCI.

Keywords: Eye Tracking, Human Memory, Mixed Reality, Machine Learning