
A Wireless Dynamic Gesture User Interface for HCI

Using Hand Data Glove

Shitala Prasad

Computer Science and Engg.,

Indian Institute of Technology

Roorkee, India

[email protected]

Piyush Kumar

Information Technology,

Indian Institute of Information

Technology, Allahabad, India

[email protected]

Kumari Priyanka Sinha

Information Technology,

National Institute of Technology,

Patna, India

[email protected]

Abstract- In this paper, the DG5 hand data glove is used to design an intelligent and efficient human-computer interface for interacting with the VLC media player. It maps static keyboard commands onto dynamic human hand gestures with 22 degrees of freedom (DoF), enabling a more natural way of interacting with the computer. The results are reported as a confusion matrix over the gestures used. Ten gestures are considered: Play, Pause, Forward, Backward, Next, Previous, Stop, Mute, Full Screen, and Null. To study human hand gestures across ages, four age groups are tested: User A (20-30 years), User B (31-45 years), User C (46-60 years), and User D (61 years and above). A decision tree, a powerful learning algorithm, is used to classify these gestures. The system gives the user an immersive feeling of augmented reality, with an accuracy rate of up to 98.88%.

Keywords- Gesture User Interface; Virtual Reality; Human-Computer Interaction; Simple Moving Average; Decision Tree; Wearable Sensors

I. INTRODUCTION

Human-computer interaction (HCI) is the conventional means by which users interact with computers and handheld devices; it concerns the functions, mechanisms, and conventions that such interaction provides. New applications such as wearable computing demand that text or alphanumeric information be entered, recognized, and stored easily and efficiently, beyond what existing techniques allow. In the field of HCI, gesture recognition is becoming one of the most important interface methods [1, 2, 3]. Computer technologies are thus expanding towards computer-human communication that is as natural and realistic as communication between humans. In this paper, we propose a wireless hand data glove gesture recognition system to detect dynamic human gestures.

Gesture recognition is more commonly used to interact with the virtual world (VW) than with the physical world (PW), and such interaction is termed a gesture user interface [1-3]. An augmented reality (AR) application using a hand data glove was first popularized in the movie Minority Report [4]. Human-machine interaction keeps moving towards more intuitive user interfaces, gradually bridging the physical and virtual worlds, as shown in figure 1. Gesture interaction covers recognition of full-body motion, head movement, facial expressions, and hand gestures, each introducing its own complexities. This paper focuses only on hand gestures, which are observed and interpreted by a recognizer system.

Different tasks are assigned to different sets of sub-gestures to provide easy and quick access to common functions; in this paper, the VLC media player is the main application used as a test bed. Along with this, a keyboard mapping method is also proposed.

This paper is organized as follows. Section 2 gives an overview of existing systems and ongoing research in this area. Section 3 proposes the wireless dynamic gesture recognition system and defines the gestures used, and section 4 explains the implementation and the decision tree. Section 5 presents the experimental results, and the final section concludes the paper and outlines future work.

Fig. 1. Physical and Virtual Reality timeline: Reality, Augmented Reality, Augmented Virtuality, and Virtual Reality along the time axis, spanning Mixed Reality.

II. BACKGROUND

Gesture-based user interaction is introduced to replace the old static and fixed keyboard and mouse, which have numerous limitations, with a more natural way of communication. The static mouse has only 2 degrees of freedom (DoF), whereas the human hand has 22 DoF: each finger except the thumb has 3 flexion/extension joints and one abduction/adduction, giving 4 DoF per finger. The thumb is missing one joint and so contributes 3 DoF, for a total of 4 × 4 + 3 = 19 DoF excluding the wrist. The wrist has 3 rotational degrees of freedom, one about each axis (x, y, and z), hence 19 + 3 = 22 DoF in total. Similarly, the glove used here has 22 degrees of freedom and imposes no such limitations.

Gesture recognition is a spatio-temporal pattern matching technique, and gestures may be either static or dynamic. Static gestures do not depend on motion and are defined with respect to a fixed frame of reference [5].


Dynamic gestures, in contrast, use a moving frame of reference; that is, recognition is based on detecting the trajectory formed during the hand motion. Two main techniques are used in gesture recognition. (1) Vision-based hand gesture recognition, for which algorithms have been proposed by many authors such as Kulkarni et al. [6], Chakraborty et al. [7], and Ishida et al. [8]. The major drawbacks of such vision-based systems are that there must be a line of sight between the object and the camera capturing it, and that they are highly dependent on lighting conditions. (2) The second approach is hand gesture recognition through a sensor-equipped hand data glove, which is computationally less intensive.

Vitor F. Pamplona et al. [9] describe the design of an Image-Based Data Glove (IBDG). Tomasz P. Bednarz et al. [10] used the 5DT data glove together with an Inertial Navigation System (INS), which provides acceleration and rotation information, to create an immersive virtual environment. The development of electronic hand data gloves such as the VPL data glove and the Mattel Power Glove has made interaction feel more realistic. Christoph Amma et al. [11] proposed an application of hand gesture recognition for free-hand air-writing using wearable motion sensors; a hidden Markov model (HMM) is used to train the system, resulting in 94.8% accuracy. Song et al. [12] proposed a multi-modal interface to control 3D computer-aided design (CAD) models using finger movement and eye-gaze motion. There are many other methods and techniques that use a hand data glove for more efficient and accurate interaction [16-20], but due to space limitations they are not discussed in detail here. In 2013, a marker-based mobile augmented reality system was proposed to modernize the traditional education system with the latest technologies [21].

Gestures are the most natural way for humans to interact with one another and can also be one of the best methods of interacting with machines (or computers), so gesture recognition has many application fields. These are broadly classified as: man-machine interfaces, 3D animation, tele-presence, visualization, computer game playing, sign language, medicine and health care, and control of mechanical systems such as robotic arms. In this paper, the VLC media player is used to test the proposed system, as discussed in more detail in section 4.

III. PROPOSED SYSTEM

An interactive device such as a data glove is equipped with sensors that measure the fine, detailed movements of the hand. The data glove used in this experiment is the DG5 VHand 2.0 [13]. The DG5 is a Bluetooth-equipped wireless data glove with connectivity up to 10 meters. It uses bi-flex bend sensors whose resistance changes as they are bent, in either direction. Consider a flex sensor of length L with resistance R ohms: after bending, its effective length reduces to L1 < L, so the resistance over the remaining length becomes R1 < R, and this change in resistance is observed as a change in voltage.
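The glove's actual sensing circuitry is not described here, but the idea that a resistance change produces a measurable voltage change can be illustrated with a simple voltage-divider model. This is a minimal sketch; the supply voltage and fixed resistor value are illustrative assumptions, not DG5 specifications.

```python
def divider_voltage(r_flex_ohms, r_fixed_ohms=10_000.0, v_supply=3.3):
    """Output voltage of a voltage divider formed by a flex sensor and a fixed resistor.

    Illustrative model only: r_fixed_ohms and v_supply are assumed values,
    not taken from the DG5 VHand datasheet.
    """
    return v_supply * r_fixed_ohms / (r_flex_ohms + r_fixed_ohms)

# A drop in flex-sensor resistance (R -> R1 after bending) raises the divider output.
v_straight = divider_voltage(25_000.0)  # finger straight, resistance R
v_bent = divider_voltage(18_000.0)      # finger bent, resistance R1 < R
print(f"straight: {v_straight:.2f} V, bent: {v_bent:.2f} V")
```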

The DG5 VHand data glove uses one such flex sensor on each finger to measure the minute details of that finger's movement. The sensors sample the data at some sampling rate Rs. The controller is further connected to a wireless communication device, a Bluetooth module; Bluetooth is used here because it is the most common and widely available option. Through a COM (communication) port the glove is connected to the computer, and the received glove signal is passed through a simple moving average low-pass filter to remove noise. The simple moving average is given by

SMA(t) = (1/k) * Σ_{i=0}^{k-1} x(t - i),

where k is the smoothing window or period, x(t) is the raw input sample at time t, and 1/k is the height of the rectangular window that slides along as the moving average moves. The moving-average low-pass filter concept is shown in figure 2 below.

Fig. 2. Simple moving average low pass filter.

In this method, all k historical samples in the window have equal weight, w = 1/k.
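As a minimal illustration of this filtering step (the original system was implemented in Matlab; this Python sketch only assumes a stream of raw sensor samples and a window size k):

```python
from collections import deque

def smooth(samples, k=5):
    """Simple moving average low-pass filter.

    Each output value is the mean of the most recent (up to k) input samples,
    i.e. every sample in the window gets equal weight 1/k once the window is full.
    """
    window = deque(maxlen=k)
    out = []
    for x in samples:
        window.append(x)
        out.append(sum(window) / len(window))
    return out

# Example: a noisy bend-sensor reading settles to a steadier curve after filtering.
raw = [512, 530, 498, 940, 505, 520, 515, 508]
print(smooth(raw, k=3))
```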

The block diagram of the system is shown in figure 3. First, the sensors fixed on each finger and the thumb sense the bend and angle of that finger; the detailed mapping of the bend and angle calculation is described in section IV. A low-pass filter in the hand glove removes the additional noise induced by the other fingers. The filtered signal is transmitted to the host system via Bluetooth, where, based on the pattern of the input signal, it is mapped to one of the defined gestures; the gestures are classified using a decision tree and mapped to static operations performed on the host system. In this paper, VLC running on the host machine is controlled using this interactive hand glove interface. The static keyboard mapping is briefed in section IV.A. On top of this, users from different age groups are tested to identify gestures that are age independent, so that a user of any age group can operate the system without error: a cognitive study.

Fig. 3. Block diagram of the system (Sensor0, Sensor1, ..., Sensor4 → Low Pass Filter → Bluetooth → Sub-Gestures → Decision Tree → Activity Detection).

The system samples at a fixed rate Rs, so the corresponding sampling period is 1/Rs seconds per cycle.
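A hedged sketch of the host-side acquisition loop follows; the COM port name, baud rate, and the assumption that each line carries comma-separated sensor values are illustrative placeholders, not the DG5 VHand protocol, which is specified in its datasheet [13].

```python
import serial  # pyserial package

def read_glove_samples(port="COM5", baud=115200):
    """Yield raw sample vectors from the glove's serial COM port.

    The port name, baud rate, and one-CSV-line-per-sample framing are
    assumptions for illustration; the real DG5 VHand protocol is defined
    in its datasheet.
    """
    with serial.Serial(port, baud, timeout=1) as link:
        while True:
            line = link.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue
            try:
                yield [float(v) for v in line.split(",")]
            except ValueError:
                continue  # skip malformed frames

# Each channel of the yielded samples could then be smoothed with the
# moving-average filter sketched earlier before gesture classification.
```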

A. Gesture Definition for VLC Media Player

The dynamic gesture recognition system works on the set of gestures defined for it at a given time, so an important task is to define gestures that are independent of each other. The first condition when defining gestures is that they should not be too complex to learn and use; the second is that they should be independent, with a separating gesture or delay between two consecutive gestures to differentiate them. The online gestures defined for the VLC media player in this paper are very simple, as described below:

1) Full Screen Gesture: The gesture for full screen mode is simple and very common. To exit full screen mode, a pinch of the index finger and thumb is performed; to enter full screen mode, the reverse of that operation is performed, as in figure 4 (a), with its graph generated using Matlab in figure 4 (b). A single gesture operation thus covers both entering and exiting full screen.


Fig. 4. (a) Full Screen gesture operation and (b) its graph.

2) Play/Pause Gesture: For the Play operation, the palm is opened with all fingers joined together and the thumb folded. If the same gesture is repeated, it is treated as the Pause operation, so this gesture is overloaded; see figure 5.


Fig. 5. (a) Play/Pause gesture operation and (b) its graph.

3) Stop Gesture: For the Stop operation a fist is made, i.e., all fingers and the thumb are folded together, as in figure 6. This gesture stops the running task, the VLC media player in our case: it discards all operations the system was performing and resets the system to a fresh state.


Fig. 6. (a) Stop gesture operation and (b) its graph.

4) Forward/Backward Gesture: To perform this operation, the Play/Pause gesture is first performed without folding the thumb; a slow shift to the left is the Backward operation and a slow shift to the right is the Forward operation, figure 7, after which the thumb is released to activate the Play operation. The movement or shift must be very slow.


Fig. 7. (a) Forward (Left) and backward (Right) gesture operation and (b) their

graphs.

5) Next/Previous Gesture: In this gesture, keeping the y-axis constant, if the x-axis bends in the negative direction with an open hand performing a wave, it is the Next operation. Similarly, if the x-axis motion is positive, it is treated as the Previous operation. The positive x-axis gesture is chosen for Previous because the hand cannot bend far in the positive x direction and the Previous operation is used much less often than Next.


Fig. 8. (a) Next (Left) and Previous (Right) gesture operation and (b) their

graphs.

6) Mute Gesture: For this gesture, all four fingers are folded while the thumb stays extended, as in figure 9.


Fig. 9. (a) Mute gesture operation and (b) its graph.

7) Null Gesture: This is the most important gesture of all, as it separates the other gestures from one another; the Null gesture is used to differentiate between two successive gestures. It is simply a to-and-fro shake of the hand along the x-axis, irrespective of the position of the fingers and thumb. Once it is performed, gesture input is stopped and no further input is taken, i.e., the data glove is deactivated; performing the same gesture again reactivates it.

Note that in each gesture described above the z-axis must remain positive, otherwise the gesture is dropped and no action is performed. The gestures used here are simple for the user but computationally complex, since many sub-gestures are combined in each operation. Because of this complexity, the duration of a single gesture operation is fixed to a time unit Ť. The time unit Ť must be chosen so that every gesture operation completes within Ť, and any remaining signal is simply dropped as noise. Various algorithms could be used to select Ť, but in this paper a few experiments were performed and the maximum time taken by any gesture operation in this work was used as Ť. Here, Ť is 1.27 seconds; the next gesture operation can therefore only begin after 1.27 seconds, otherwise the input is treated as a null gesture.
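The segmentation idea can be sketched as follows: filtered samples are accumulated into fixed windows of length Ť = 1.27 s before classification. This is a minimal sketch; the sampling period and the idea of handing each window to a classifier are assumptions about how such a loop could be organized, not code from the paper.

```python
GESTURE_WINDOW_S = 1.27  # time unit T fixed in the paper

def segment_gestures(samples, sample_period_s):
    """Group a stream of filtered samples into windows of length T.

    Each completed window corresponds to one gesture operation and can be
    handed to the classifier; samples arriving before the window closes
    belong to the same gesture.
    """
    per_window = max(1, round(GESTURE_WINDOW_S / sample_period_s))
    window = []
    for s in samples:
        window.append(s)
        if len(window) == per_window:
            yield window  # one complete gesture operation's worth of data
            window = []
```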

IV. IMPLEMENTATION AND DECISION TREE

In this section we describe the experimental setup of the proposed system. The DG5 VHand 2.0 data glove discussed above is used; it operates on a single rechargeable battery of 3.5 to 5 volts. The bi-flex bend sensors sense pressure over a temperature range of -45 F to 125 F. Each sensor measures 1024 different positions per finger, and the glove also provides 3 degrees of integrated tracking, i.e., roll (x-axis), pitch (y-axis), and yaw (z-axis).

The data structure used here is simply an array with eight indexes. The first three indexes are roll, pitch, and yaw, and the remaining five are the thumb, index finger, middle finger, ring finger, and little finger, as in figure 10. The finger values range from 0 to 1023. The axis values are calculated from the raw acceleration readings, which range from -32676 to 32676, using the lowest and highest values of that axis observed during that gesture operation.

Fig. 10. Feature vector.
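A minimal sketch of this eight-element feature vector follows; the field names are descriptive labels chosen here for illustration, not identifiers from the paper.

```python
from dataclasses import dataclass

@dataclass
class GloveSample:
    """One feature vector from the glove: 3 orientation values + 5 finger bends."""
    roll: float    # x-axis orientation
    pitch: float   # y-axis orientation
    yaw: float     # z-axis orientation
    thumb: int     # finger bend values, each in 0..1023
    index: int
    middle: int
    ring: int
    little: int

    def as_vector(self):
        return [self.roll, self.pitch, self.yaw,
                self.thumb, self.index, self.middle, self.ring, self.little]
```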

After the gesture operations are performed, activity detection is the next step. Activity detection is essentially the classification of events according to certain attributes using a machine learning algorithm. In this paper, a simple decision tree is used for activity detection.

A. Decision Tree

A decision tree Đ(Δ) is a powerful and popular machine learning algorithm for decision-making problems, and it is also used for classification. Decision trees have been applied in many real-life applications such as medical diagnosis, radar signal classification, weather prediction, credit approval, fraud detection, image segmentation and processing, gesture recognition, and many more. A decision tree is very simple and easy to implement: it is just a set of if-then-else rules, yet it achieves a high detection rate on data with common attributes, representing the data in classes according to their attributes in a graphical way. In addition, decision trees have other advantages: they require little data preparation and are simple to understand and interpret, being a white-box model. Most importantly, they also perform well on large data in a short time.
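For illustration, a learned decision tree over glove feature vectors could be trained as below. This uses scikit-learn's DecisionTreeClassifier as a stand-in and assumes hypothetical labeled training vectors, whereas the tree actually used in this paper is the hand-designed rule set of figure 11.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: 8-element glove vectors (roll, pitch, yaw, 5 fingers)
# and their gesture labels. Values are made up for illustration.
X_train = [
    [0.1, 0.0, 0.9, 900, 900, 100, 100, 100],  # a pinch-like pose
    [0.0, 0.1, 0.8, 100, 100, 100, 100, 100],  # an open palm
]
y_train = ["Full Screen", "Play/Pause"]

clf = DecisionTreeClassifier(max_depth=5, random_state=0)
clf.fit(X_train, y_train)

print(clf.predict([[0.1, 0.0, 0.9, 880, 910, 120, 90, 110]]))
```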

The decision tree Đ(Δ) used in this paper is given in figure 11, mapping the sub-gestures to the VLC interface. To keep the experiment simple, only 10 gesture operations are selected for VLC interaction, namely Play, Pause, Full Screen, Stop, Mute, Forward, Backward, Next, Previous, and Null.

The Đ(Δ) above clearly shows that to perform the Next operation the first condition is that the z-axis is in the positive direction, the second is that the y-axis stays constant, and the last is that the change in the x-axis is negative, i.e., the user waves his/her hand along the negative x-axis:

x_t - x_0 < 0,

and for the Previous operation the change must be positive:

x_t - x_0 > 0,

where x_t is the current position along the x-axis and x_0 was its initial position. Similarly, the other gestures are computed and handled accordingly.

Fig. 11. Decision tree Đ(Δ) for the VLC gesture interface, mapping sub-gesture conditions (finger states and x/y/z-axis motion) to keyboard actions: Full Screen = F/f (pinch of thumb and index), Play/Pause = Space (open palm, thumb bend), Stop = S/s (all fingers and thumb folded), Mute = M/m (all fingers folded), Next = N/n and Previous = P/p (wave with constant y-axis, negative or positive x-axis), Backward = Alt+Left / Ctrl+Left Arrow and Forward = Alt+Right / Ctrl+Right Arrow (slow shift left or right), and Null = x-axis to-and-fro shake with a 1 s delay.
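A minimal sketch of how such if-then-else rules might be evaluated on a glove sample is shown below; the thresholds, the y-axis tolerance, and the helper name are assumptions for illustration and do not come from the paper.

```python
def classify_wave(z_axis, y_drift, x_start, x_now, y_tolerance=0.05):
    """Rule-based check for the Next/Previous wave gestures.

    Mirrors the decision-tree conditions described in the text: positive
    z-axis, (roughly) constant y-axis, and the sign of the x-axis
    displacement decides Next vs. Previous. The tolerance is an
    illustrative assumption.
    """
    if z_axis <= 0:
        return "Null"            # gesture dropped: z-axis not positive
    if abs(y_drift) > y_tolerance:
        return "Null"            # y-axis not constant enough
    dx = x_now - x_start
    if dx < 0:
        return "Next"            # wave towards negative x
    if dx > 0:
        return "Previous"        # wave towards positive x
    return "Null"

print(classify_wave(z_axis=0.8, y_drift=0.01, x_start=0.3, x_now=-0.2))  # -> Next
```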

In the next section, experimental results are shown using

the confusion matrix.

V. EXPERIMENTS AND RESULTS

For the experiments, the complete system is implemented in Matlab 7.7 (R2008b) on a 2.93 GHz Core2Duo processor with 2 GB RAM under Windows. The experiment was simple and was repeated many times, resulting in a high accuracy rate, as shown in table 1. Table 1 covers four users from different age groups: User A is 23 years old (20-30 age group), User B is 32 years old (31-45), User C is 47 years old (46-60), and User D is 62 years old (61 and above). These age groups were chosen because users of different ages have different dexterity and perform gestures accordingly. The results calculated on this basis show that Users A, B, and C achieve nearly similar results, while User D, due to the age difference, has slightly lower accuracy.

The confusion matrix, CM, is a specific table used to visualize the performance of an algorithm. The values in each column of the matrix represent the instances of a predicted class, and each row represents the instances of an actual class. The accuracy rate is the proportion of predictions that were correct:

accuracy rate = (number of correct predictions / total number of predictions) × 100%.

From table 1, the total accuracy rate over all gestures comes out to approximately 93.33%, 96.67%, 98.88%, and 89.44% for user A, user B, user C, and user D respectively. This shows that users of the (46-60) and (31-45) age groups adapt to such systems better than the other groups.
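As a small illustration of this computation, the per-user correct counts below are transcribed from the diagonal of Table I, and the helper is simply the accuracy formula above.

```python
def accuracy_rate(correct, total):
    """Accuracy rate in percent: correct predictions over all predictions."""
    return 100.0 * correct / total

# Diagonal (correct) counts per user from Table I; each user performed
# 9 gestures x 20 samples = 180 attempts.
correct_per_user = {"A": 168, "B": 174, "C": 178, "D": 161}
for user, correct in correct_per_user.items():
    print(user, f"{accuracy_rate(correct, 180):.1f}%")
```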

TABLE I. Confusion matrix of the gesture operations used (each cell lists the counts for Users A, B, C, D; each actual-class row has 20 samples per user).

Actual \ Predicted | Full Screen | Play        | Stop        | Mute        | Forward     | Backward    | Next        | Previous    | Null
Full Screen        | 20,20,20,19 | 0,0,0,1     | 0,0,0,0     | 0,0,0,0     | 0,0,0,0     | 0,0,0,0     | 0,0,0,0     | 0,0,0,0     | 0,0,0,0
Play               | 0,0,0,0     | 20,19,20,19 | 0,0,0,0     | 0,1,0,1     | 0,0,0,0     | 0,0,0,0     | 0,0,0,0     | 0,0,0,0     | 0,0,0,0
Stop               | 0,0,0,0     | 0,0,0,0     | 17,20,19,19 | 3,0,1,1     | 0,0,0,0     | 0,0,0,0     | 0,0,0,0     | 0,0,0,0     | 0,0,0,0
Mute               | 0,0,0,0     | 0,1,0,1     | 3,0,1,1     | 17,19,19,18 | 0,0,0,0     | 0,0,0,0     | 0,0,0,0     | 0,0,0,0     | 0,0,0,0
Forward            | 0,0,0,0     | 0,0,0,0     | 0,0,0,0     | 0,0,0,0     | 18,19,20,18 | 0,0,0,0     | 0,0,0,0     | 1,1,0,1     | 1,0,0,1
Backward           | 0,0,0,0     | 0,0,0,0     | 0,0,0,0     | 0,0,0,0     | 0,0,0,0     | 19,19,20,17 | 0,0,0,0     | 1,1,0,1     | 0,0,0,2
Next               | 0,0,0,0     | 0,0,0,0     | 0,0,0,0     | 0,0,0,0     | 0,0,0,0     | 0,0,0,0     | 20,20,20,19 | 0,0,0,1     | 0,0,0,0
Previous           | 0,0,0,0     | 0,0,0,0     | 0,0,0,0     | 0,0,0,0     | 1,1,0,1     | 1,1,0,1     | 0,0,0,1     | 18,18,20,16 | 0,0,0,1
Null               | 0,0,0,0     | 0,0,0,0     | 0,0,0,0     | 0,0,0,0     | 1,0,0,1     | 0,0,0,2     | 0,0,0,0     | 0,0,0,1     | 19,20,20,16

VI. CONCLUSION AND FUTURE SCOPES

In this paper, we presented and demonstrated a human hand gesture model that uses simple flex-bend sensors to interact with the VLC media player, adding a step towards more natural human-computer interaction. The DG5 hand data glove is used and explored here in an attempt to replace the old static keyboard and mouse, limited to 2 DoF, with the 22 DoF of the human hand. The glove-based interface is more reliable and accurate in collecting motion data than the camera-based interfaces used previously [6, 7, 15]. In [3], only a mouse interface was shown, whereas here keyboard mapping [17] is done using a decision tree with much higher accuracy, making glove devices increasingly ubiquitous in day-to-day life. Beyond simple VLC interaction, the approach can be used in various other fields such as space stations, satellite repair, health care, biomedical surgery and practice, and many more.

The design is very simple and requires no calibration or training by the user. This paper further motivates the search for a more suitable and flexible glove that is easier to put on and take off, reduces computational cost, and mimics a real human hand, to address high-dimensional applications in both the physical and the virtual world. The accuracy for User C reached up to 98.88%. In future, the gesture list can be extended and more powerful machine learning algorithms can be used to improve accuracy and latency, combined with other virtual reality devices, i.e., a real-time system for real-time problems.

Another important direction is to improve the gestures so that elderly users feel comfortable with the system and achieve an accuracy rate higher than that observed for User D in this experiment [20].

REFERENCES

[1] S. P. Priyal, and P. K. Bora, “A study on static hand gesture

recognition using moments,” In Proc. of Int. Conf. on Signal

Processing and Communications (SPCOM), (2010), pp. 1-5.

[2] V. I., Pavlovic, R., Sharma, and T. S., Huang, “Visual

interpretation of hand gestures for human-computer interaction:

A Review,” In IEEE Trans. on Pattern Analysis and Machine

Intelligence, 19(7), (1997), pp. 677–695.

[3] P. Kumar, J. Verma, and S. Prasad, “Hand Data Glove: A

Wearable Real-Time Device for Human Computer Interaction,”

In Int. Journal of Advanced Science and Technology (IJAST),

43, (June 2012), pp. 15-26

[4] P. K. Dick, and S. Frank, “Minority Report,”

http://www.imdb.com/title/tt0181689/, (2009).

[5] S., Reifinger, F., Wallhoff, M., Ablassmeier, T., Poitschke, and

G., Rigoll, “Static and Dynamic Hand-Gesture Recognition for

Augmented Reality Applications,” In Human-Computer

Interaction. HCI Intelligent Multimodal Interaction

Environments, LNCS, Springer, Heidelberg, 4552, (2007), pp.

728–737.

[6] S. K., Vaishali, and S. D., Lokhande, “Appearance Based

Recognition of American Sign Language using Gesture

Segmentation,” In Int. Journal on Computer Science and

Engineering (IJCSE), 2(3), (2010), pp. 560-565.

[7] P. Chakraborty, P. Sarawgi, A. Mehrotra, G. Agarwal, and R.

Pradhan, “Hand Gesture Recognition: A Comparative Study,” In

Proc. of the Int. Multi Conf. of Engineers and Computer

Scientists, 1, (2008).

[8] I., Hiroyuki, T., Tomokazu, I., Ichiro, and M., Hiroshi, “A

Hilbert warping method for handwriting gesture recognition,” In

Pattern Recognition, 43, (2010), pp. 2799–2806.

[9] P. F., Vitor, F. A. F., Leandro, P. L., Joao, N. P., Luciana, and

O. M., Manuel, “The Image-Based Data Glove,” Institute of

Information, Brazil, (2008).

[10] B. P., Tomasz, C., Con, W., Chris, R. B., Peter, F., Gianluca, E.,

Garry, and M., John, “Experiments Utilizing Data Glove and

High-Performance INS Devices in an Immersive Virtual Mining

Environment,” In Int. Global Navigation Satellite Systems

Society Symposium (IGNSSS), (2009).

[11] A., Christoph, G., Dirk, and S., Tanja, “Airwriting Recognition

using Wearable Motion Sensors,” In Proc. of the 1st Augmented

Human Int. Conf. (AH '10). ACM, New York, NY, USA,

Article 10, (2010), 8 pages.

[12] J. Song, S. Cho, S. Y. Baek, K. Lee, and H. Bang, “GaFinC:

Gaze and Finger Control interface for 3D model manipulation in

CAD application,” In Computer-Aided Design, 46, (2014), pp.

239-245.

[13] DG-Tech Engineering Solution, “DG5 VHand 2.0 OEM

Technical Datasheet,” www.dg-tech.it, Release 1.1, (Nov.,

2007).

[14] T. M. Mitchell, “Machine Learning,” McGraw Hill, (1997).

[15] S. Prasad, A. Prakash, P. S. Kumar, and D. Ghosh, “Control of

Computer Process using Image Processing and Computer Vision

for Low-Processing Devices,” In Int. Conf. on Advance in

Computing, Communications and Informatics, ICACCI'12

ACM, (2012), pp.1169-1174.

[16] L., Dipietro, A. M., Sabatini, and P., Dario, “A Survey of Glove-

Based Systems and Their Applications,” In IEEE Trans. on

Systems, Man, and Cybernetics, part c, 38(4), (July, 2008), pp.

461-482.

[17] C., Mehring, F., Kuester, K. D., Singh, and M., Chen, “KITTY:

Keyboard independent touch typing in VR,” In Proc. IEEE

Virtual Reality, (2004), pp. 243–244.

[18] Y. S., Kim, B. S., Soh, and S. G., Lee, “A new wearable input

device: SCURRY,” In IEEE Trans. Ind. Electron., 52(6), (Dec.,

2005), pp. 1490–1499.

[19] M., Deller, A., Ebert, M., Bender, and H., Hagen, “Flexible

gesture recognition for immersive virtual environments,” In

Proc. Inf. Vis., (2006), pp. 563–568.

[20] S. A., Dalley, H. A., Varol, and M., Goldfarb, “A Method for

the Control of Multigrasp Myoelectric Prosthetic Hands,” In

IEEE Trans. on Neural Systems and Rehabilitation Engineering,

20(1), (2012), pp. 58-67.

[21] S., Prasad, P. S., Kumar, and D., Ghosh, “Mobile augmented

reality based interactive teaching & learning system with low

computation approach," In 2013 IEEE Symposium on

Computational Intelligence in Control and Automation

(CICA'12), (April, 2013), pp. 97-103.