Enrique Sucar (based on joint work with Hector Aviles)
Visual recognition of gestures is an important field of study in human-robot interaction research. Although several approaches to gesture recognition exist, on-line learning of visual gestures has not received the same attention. Teaching a robot a new gesture requires a recognition model that can be trained with just a few examples. In this paper we propose an extension of naive Bayesian classifiers for gesture recognition that we call "dynamic naive Bayesian classifiers". The observation variables of the classifiers combine motion and posture information about the user's right hand. We tested the model on a set of gestures for commanding a mobile robot and compared it with hidden Markov models. When the number of training samples is high, the recognition rates of both types of models are similar; when the number of training samples is low, the dynamic classifiers perform better. We also show that including posture attributes, in the form of spatial relations between the right hand and other parts of the body, significantly improves the recognition rate.
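As a rough illustration of the idea (not the authors' implementation), a dynamic naive Bayesian classifier can be sketched as an HMM-style forward pass in which the emission probability factorizes over the observation attributes — here hypothetical discrete motion and posture attributes — assuming they are conditionally independent given the hidden state. Gesture classification then picks the model with the highest sequence likelihood. All parameter names and the toy models below are assumptions for illustration:

```python
import numpy as np

def forward_log_likelihood(pi, A, emission_tables, obs_seq):
    """Forward pass for a dynamic naive Bayesian classifier (sketch).

    Like the HMM forward algorithm, except each observation at time t
    is a vector of K discrete attributes assumed conditionally
    independent given the hidden state (naive Bayes factorization):
        P(o_t | s_t) = prod_k P(o_t[k] | s_t)

    pi:              (S,) initial state distribution
    A:               (S, S) transition matrix, A[i, j] = P(s'=j | s=i)
    emission_tables: list of K tables, each (S, V_k), one per attribute
    obs_seq:         (T, K) array of discrete attribute indices
    Returns log P(obs_seq | model), computed with scaling for stability.
    """
    S = len(pi)

    def obs_prob(o):
        # Naive Bayes step: multiply per-attribute likelihoods per state.
        p = np.ones(S)
        for k, table in enumerate(emission_tables):
            p *= table[:, o[k]]
        return p

    alpha = pi * obs_prob(obs_seq[0])
    log_lik = 0.0
    c = alpha.sum()
    alpha /= c
    log_lik += np.log(c)
    for o in obs_seq[1:]:
        alpha = (alpha @ A) * obs_prob(o)  # predict, then weight by evidence
        c = alpha.sum()
        alpha /= c
        log_lik += np.log(c)
    return log_lik

def classify(models, obs_seq):
    """Pick the gesture model (name -> (pi, A, tables)) that best
    explains the observed sequence."""
    scores = {name: forward_log_likelihood(*m, obs_seq)
              for name, m in models.items()}
    return max(scores, key=scores.get)
```

With one such model trained per gesture class, recognition reduces to evaluating each model's likelihood on the incoming attribute sequence; the factored emissions are what allow the model to be estimated from far fewer examples than a full joint observation distribution would need.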