Synergizing Human Gaze with Machine Vision for Location Mode Prediction
DOI: https://doi.org/10.47750/

Keywords: Machine Learning, Artificial Intelligence (AI), Deep Learning Models, Cloud

Abstract
Before the advent of machine learning and AI, systems predicting human intent and movement relied heavily on sensor-based approaches such as inertial measurement units (IMUs), gyroscopes, and accelerometers, which primarily tracked physical movements. While effective at detecting motion, these systems lacked the nuanced understanding of human intent and environmental context that can be gained by incorporating human gaze. The title "Synergizing Human Gaze with Machine Vision for Location Mode Prediction" reflects the integration of human gaze data, which indicates where a person is looking (and therefore their intent), with machine vision systems that process movement data (point clouds) to predict upcoming locomotion modes or transitions.
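The abstract does not specify an architecture; the following is a minimal, hypothetical sketch of one way such a fusion could be set up, assuming a short window of 2-D gaze fixations and a pooled point-cloud descriptor as inputs to a locomotion-mode classifier. All layer sizes, input dimensions, and mode counts are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: fusing gaze sequences with point-cloud features
# for locomotion-mode prediction. Dimensions and classes are assumptions.
import torch
import torch.nn as nn

class GazeMovementFusion(nn.Module):
    def __init__(self, gaze_dim=2, cloud_dim=64, hidden=128, num_modes=5):
        super().__init__()
        # Encode a short window of (x, y) gaze fixations with a GRU.
        self.gaze_enc = nn.GRU(gaze_dim, hidden, batch_first=True)
        # Encode a pooled point-cloud / movement descriptor with an MLP.
        self.cloud_enc = nn.Sequential(
            nn.Linear(cloud_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Fused head outputs logits over assumed locomotion modes
        # (e.g., level walking, stair ascent, stair descent, ramp, stop).
        self.head = nn.Linear(2 * hidden, num_modes)

    def forward(self, gaze_seq, cloud_feat):
        # gaze_seq:   (batch, time, gaze_dim) gaze coordinates
        # cloud_feat: (batch, cloud_dim) pooled point-cloud descriptor
        _, h = self.gaze_enc(gaze_seq)            # h: (1, batch, hidden)
        fused = torch.cat([h[-1], self.cloud_enc(cloud_feat)], dim=-1)
        return self.head(fused)                   # mode logits

# Example forward pass on random data.
model = GazeMovementFusion()
logits = model(torch.randn(8, 30, 2), torch.randn(8, 64))
print(logits.shape)  # torch.Size([8, 5])
```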