Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms.
Gesture recognition can be seen as a way for computers to begin to understand human
body language, thus building a richer bridge between machines and humans than
primitive text user interfaces or even GUIs (graphical user interfaces), which still
limit the majority of input to keyboard and mouse.
Gesture recognition enables humans to communicate with a machine (human-machine interface, HMI) and interact
naturally without any mechanical input devices. Using the concept of gesture recognition, it
is possible to point a finger at the computer screen so that the cursor will move
accordingly. This could potentially make conventional input devices such as mice,
keyboards and even touch-screens redundant.
Gesture recognition can be conducted with techniques from computer vision and
image processing.
The literature includes ongoing work in the computer vision field on capturing
gestures or more general human pose and movements by cameras connected to a
computer.[2][3][4][5]
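As a rough illustration of the image-processing side, the sketch below (which assumes OpenCV and a webcam, with illustrative skin-colour and depth thresholds rather than values from the cited work) segments a hand by colour thresholding and estimates the number of extended fingers from convexity defects:

```python
import cv2
import numpy as np

# Illustrative HSV skin-colour range; real systems calibrate per user and lighting.
SKIN_LOW = np.array([0, 30, 60], dtype=np.uint8)
SKIN_HIGH = np.array([20, 150, 255], dtype=np.uint8)

def count_extended_fingers(frame):
    """Very rough finger count from a single BGR frame (sketch only)."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LOW, SKIN_HIGH)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)   # assume the largest blob is the hand
    if len(hand) < 4:
        return 0
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0

    # Deep gaps between the hull and the contour roughly correspond to gaps between fingers.
    deep = sum(1 for i in range(defects.shape[0]) if defects[i, 0, 3] / 256.0 > 20)
    return deep + 1 if deep else 0

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print("extended fingers (rough estimate):", count_extended_fingers(frame))
cap.release()
```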
Gesture types
In computer interfaces, two types of gestures are distinguished:[6]
Offline gestures: Gestures that are processed only after the user's interaction
with the object is complete. An example is a stroke gesture that activates a menu.
Online gestures: Direct manipulation gestures. They are used to scale or rotate
a tangible object while the interaction is still in progress; the sketch below
contrasts the two types.
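The distinction can be made concrete with a small sketch, using illustrative class and method names rather than any particular toolkit's API: the offline stroke is classified only once it ends and then triggers a discrete command, while the online pinch updates a zoom factor on every sample while the gesture is still in progress.

```python
from dataclasses import dataclass, field

@dataclass
class GestureSession:
    points: list = field(default_factory=list)   # accumulated (x, y) samples
    zoom: float = 1.0                            # state driven by an online gesture

    # Online gesture: a two-finger pinch updates the zoom factor on every frame,
    # while the fingers are still moving (direct manipulation).
    def on_pinch_update(self, finger_distance, initial_distance):
        self.zoom = finger_distance / initial_distance

    # Offline gesture: the stroke is only interpreted after the user finishes it,
    # e.g. a circular stroke opens a context menu.
    def on_stroke_point(self, x, y):
        self.points.append((x, y))

    def on_stroke_end(self):
        command = classify_stroke(self.points)   # runs once, after the interaction
        self.points.clear()
        return command

def classify_stroke(points):
    """Toy classifier: a stroke that returns near its start is treated as a circle."""
    if len(points) > 10:
        (x0, y0), (x1, y1) = points[0], points[-1]
        if abs(x0 - x1) + abs(y0 - y1) < 20:
            return "open_menu"
    return "ignore"

session = GestureSession()
session.on_pinch_update(finger_distance=150.0, initial_distance=100.0)
print(session.zoom)   # 1.5, applied immediately while the gesture continues
```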
Uses
Gesture recognition is useful for processing information from humans that is not
conveyed through speech or typing. There are also various types of gestures that
can be identified by computers.
Sign language recognition. Just as speech recognition can transcribe speech to text,
certain types of gesture recognition software can transcribe the symbols
represented through sign language into text.[7]
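A heavily simplified sketch of one possible transcription step, assuming per-frame hand-landmark coordinates and a small set of labelled reference poses (both placeholders; real systems use richer models and temporal context):

```python
import numpy as np

def transcribe_frames(frames, templates):
    """Map a sequence of hand-landmark frames to text by nearest-neighbour matching.

    `frames` is an iterable of landmark arrays (e.g. 21 (x, y) points per frame);
    `templates` maps a letter to a reference landmark array from a labelled dataset.
    Both the landmark layout and the matching rule are simplifying assumptions.
    """
    text = []
    for landmarks in frames:
        vec = np.asarray(landmarks, dtype=float).ravel()
        letter = min(
            templates,
            key=lambda k: np.linalg.norm(vec - np.asarray(templates[k], dtype=float).ravel()),
        )
        if not text or text[-1] != letter:   # collapse repeated predictions
            text.append(letter)
    return "".join(text)
```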
For socially assistive robotics. By using appropriate sensors (accelerometers and gyroscopes)
worn on the body of a patient and reading the values from those sensors, robots
can assist in patient rehabilitation. A prominent example is stroke rehabilitation.
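A minimal sketch of the sensor-reading side, with an assumed vertical accelerometer axis and illustrative thresholds: repetitions of an arm-raise exercise are counted with simple hysteresis on the smoothed signal.

```python
def count_arm_raises(accel_z, high=12.0, low=10.5):
    """Count arm-raise repetitions from vertical acceleration samples (m/s^2).

    Uses simple hysteresis: a repetition is counted when the smoothed signal
    rises above `high` after having dropped below `low`. Thresholds are illustrative.
    """
    reps = 0
    armed = True                 # ready to count the next repetition
    smoothed = accel_z[0]
    for sample in accel_z:
        smoothed = 0.9 * smoothed + 0.1 * sample   # exponential smoothing
        if armed and smoothed > high:
            reps += 1
            armed = False
        elif not armed and smoothed < low:
            armed = True
    return reps

# Synthetic session: rest near 9.8 m/s^2 (gravity) with two raise peaks.
samples = [9.8] * 20 + [13.0] * 20 + [9.8] * 20 + [13.0] * 20 + [9.8] * 20
print(count_arm_raises(samples))   # expected: 2
```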
Directional indication through pointing. Pointing serves a very specific purpose in
human communication: to reference an object or location based on its position
relative to the speaker. The use of gesture recognition to determine where a person is
pointing is useful for identifying the context of statements or instructions. This
application is of particular interest in the field of robotics.[8]
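One common geometric formulation, sketched below with assumed 3D keypoints (for example from a depth camera): the line from a body reference point through the fingertip is treated as a pointing ray and intersected with a known plane, such as a tabletop, to recover the referenced location.

```python
import numpy as np

def pointing_target(origin, fingertip, plane_point, plane_normal):
    """Intersect the pointing ray (origin -> fingertip) with a plane.

    All arguments are 3D points/vectors in the same world coordinate frame,
    e.g. metres from a depth camera. Returns None if the ray is parallel to
    the plane or points away from it.
    """
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(fingertip, dtype=float) - origin
    plane_point = np.asarray(plane_point, dtype=float)
    plane_normal = np.asarray(plane_normal, dtype=float)

    denom = direction.dot(plane_normal)
    if abs(denom) < 1e-9:
        return None                      # ray parallel to the plane
    t = (plane_point - origin).dot(plane_normal) / denom
    if t < 0:
        return None                      # plane is behind the pointer
    return origin + t * direction

# Shoulder at 1.4 m height, fingertip slightly lower and forward; tabletop at z = 0.
target = pointing_target(origin=[0.0, 0.0, 1.4],
                         fingertip=[0.2, 0.3, 1.2],
                         plane_point=[0.0, 0.0, 0.0],
                         plane_normal=[0.0, 0.0, 1.0])
print(target)   # approximate point on the table being pointed at
```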
Control through facial gestures. Controlling a computer through facial gestures is a
useful application of gesture recognition for users who may not physically be able to
use a mouse or keyboard. Eye tracking in particular may be of use for controlling
cursor motion or focusing on elements of a display.
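A minimal sketch of the control side, not tied to any particular eye-tracker SDK: normalized gaze coordinates are smoothed to reduce jitter, mapped to screen pixels, and a dwell timer stands in for a mouse click (all parameters illustrative).

```python
class GazeCursor:
    """Map normalized gaze coordinates (0..1) to screen pixels with smoothing
    and a dwell-based 'click'. Parameters are illustrative, not from any device SDK."""

    def __init__(self, screen_w=1920, screen_h=1080, alpha=0.2,
                 dwell_frames=30, dwell_radius=40):
        self.screen_w, self.screen_h = screen_w, screen_h
        self.alpha = alpha                  # smoothing factor (higher = snappier)
        self.dwell_frames = dwell_frames    # frames the gaze must stay put to 'click'
        self.dwell_radius = dwell_radius    # pixels of allowed drift while dwelling
        self.x = self.y = None
        self.dwell_count = 0

    def update(self, gaze_x, gaze_y):
        """Feed one gaze sample; returns (cursor_x, cursor_y, clicked)."""
        px, py = gaze_x * self.screen_w, gaze_y * self.screen_h
        if self.x is None:
            self.x, self.y = px, py
        moved = abs(px - self.x) + abs(py - self.y)
        self.x += self.alpha * (px - self.x)    # exponential smoothing
        self.y += self.alpha * (py - self.y)
        self.dwell_count = self.dwell_count + 1 if moved < self.dwell_radius else 0
        clicked = self.dwell_count == self.dwell_frames
        return int(self.x), int(self.y), clicked
```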
Alternative computer interfaces. Forgoing the traditional keyboard and mouse
setup to interact with a computer, strong gesture recognition could allow users to
accomplish frequent or common tasks using hand or face gestures directed at a camera.[9][10][11][12][13]