Work place: Universidad Autónoma de Yucatán/Computational Learning and Imaging Research, Merida, 97205, Mexico
E-mail: amarting@correo.uady.mx
Research Interests: Pattern Recognition, Computer Architecture and Organization, Image Compression, Image Manipulation, Image Processing, Data Structures and Algorithms
Biography
Anabel Martin-Gonzalez received her Ph.D. degree in Computer Science from the Technische Universitaet Muenchen at the Chair of Computer Aided Medical Procedures, Germany, in 2011. She received her MSc degree in Computer Science from the Universidad Nacional Autónoma de México, Mexico, in 2006, and was awarded the Alfonso Caso Medal for outstanding performance in the Master's program. In 2004, she received her BSc degree in Computer Science from the Universidad Autónoma de Yucatán, Mexico. Currently, she is an Assistant Professor and a member of the Computational Learning and Imaging Research group at the Faculty of Mathematics of the Universidad Autónoma de Yucatán, Mexico. Her main research interests are pattern recognition in images, image processing, and machine learning.
By Jose L. Medina-Catzin, Anabel Martin-Gonzalez, Carlos Brito-Loeza, and Victor Uc-Cetina
DOI: https://doi.org/10.5815/ijitcs.2017.09.07, Pub. Date: 8 Sep. 2017
Personal service robots will soon be part of our world, assisting humans in their daily chores. A highly efficient way of communicating with people is through basic gestures. In this work, we present an efficient body-gesture interface that gives the user a practical way to control a personal service robot. The robot can interpret two body gestures of the subject and perform the actions associated with them. The service robot's setup consists of a Pioneer P3-DX research robot, a Kinect sensor, and a portable workstation. The gesture recognition system is based on tracking the user's skeleton to obtain the relative 3D positions of the body parts. In addition, the system takes depth images from the sensor and extracts their Haar features, which are used to train the AdaBoost algorithm to classify the gesture. The system was developed using the ROS framework and showed good performance during experimental evaluation with users. Our body gesture-based interface may serve as a baseline for developing practical and natural interfaces to communicate with service robots in the near future.
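The classification stage described in the abstract (Haar features extracted from depth images, fed to AdaBoost) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it uses scikit-image's Haar-like feature extractor and scikit-learn's AdaBoostClassifier on small synthetic "depth images" standing in for Kinect data, and the image size, feature type, and number of estimators are arbitrary choices for the sketch.

```python
import numpy as np
from skimage.transform import integral_image
from skimage.feature import haar_like_feature
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

def extract_haar(img):
    # Haar-like features are computed over the integral image;
    # 'type-2-x' is one family of two-rectangle features (arbitrary choice here).
    ii = integral_image(img)
    return haar_like_feature(ii, 0, 0, img.shape[1], img.shape[0],
                             feature_type='type-2-x')

def make_image(label):
    # Synthetic stand-in for a depth image: the two "gestures" differ
    # in which half of the frame is closer to the sensor.
    img = rng.random((12, 12))
    if label == 0:
        img[:, :6] += 1.0
    else:
        img[:, 6:] += 1.0
    return img

# Build a small labeled training set of feature vectors.
X, y = [], []
for _ in range(40):
    for label in (0, 1):
        X.append(extract_haar(make_image(label)))
        y.append(label)
X, y = np.asarray(X), np.asarray(y)

# AdaBoost combines many weak learners (decision stumps by default),
# each picking the single Haar feature that best separates the classes.
clf = AdaBoostClassifier(n_estimators=20, random_state=0).fit(X, y)
print(clf.score(X, y))
```

In the paper's setting the feature vectors would come from real Kinect depth frames cropped around the user, and the trained classifier would run inside a ROS node alongside the skeleton tracker.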