
An Android Based Human Computer Interactive System with Motion Recognition and Voice Command Activation

M. Sarkar, M. Z. Haider, D. Chowdhury, G. Rabbi
Department of Electrical and Electronic Engineering
Bangladesh University of Engineering and Technology, Dhaka, Bangladesh
E-mail: [email protected]

Abstract—Human Machine Interface (HMI), referred to as a correlated system of human activities and the operation of a specific device, realizes the control mechanism of a targeted machine through the execution of certain physical actions. This paper presents an effective design of an Android-based Human Computer Interactive (HCI) system with voice command activation and gesture recognition to control a computer. With continuous data acquisition from a 3-D Accelerometer sensor embedded in the smart phone, the proposed system enables remote computing by processing the orientation readings of the physical movement of the phone and by interpreting inputted audio as text. The smart phone, connected over Wi-Fi, is attached to the wrist of the user. The motion parameters are utilized to control the cursor movement of the host computer, and the voice commands are used for the final execution of an instruction. Such a wireless system provides reliable and effective control of computing domains, electronic devices and robotic structures. Physically challenged people benefit from such systems through easier and faster computing operations. The developed work has been tested under certain conditions and the performance analysis affirms its sustainability.

Index Terms—Accelerometer, Android, Human computer interactive, Motion recognition, Voice command, Wireless

I. INTRODUCTION

The world has reinvented itself through the prevalence of smart computing technology and mobile telephony. The introduction of smart phone interfacing with computers has intensified human-computer interaction. Electronic gadgets embedded with Accelerometer, Gyroscope, Magnetometer, Wi-Fi and Bluetooth communication technologies have widened the scope of innovation. Today the command interface to computers is being influenced by advancements in smart phone based communication. Mouse and keyboard applications are tending to be replaced by voice commands and gesture identification. Such a work has been presented in [1], where a pattern matching based motion recognizable gaming console has been introduced. Another gesture identifying prototype, a wireless PowerPoint presentation scheme, has been proposed in [2].

Such practices allow physically challenged people to take advantage of remote computing with ease and comfort. In robotics and automation, wireless machine control techniques are being refined day by day. In a manned space mission, the monitoring process of the expedition can be set up through wireless communication with the controlled machinery.

An efficient system for smart phone based computer control via voice command execution and motion recognition over Wi-Fi connectivity is proposed here. The controller is an Android operated handset and the host is a computer running on either a Windows or Linux platform. The cursor movement is interpreted by moving the phone vertically and horizontally along the axial references. The motion parameters are retrieved from a 3-axis Accelerometer mounted inside the smart phone. The developed system consists of two customized server applications and an executable prompt. The installed programs, as well as basic operations like starting or shutting down a console of the host computer, can be controlled through the voice commanding process. Such a system has been reported in [3], where a Google voice recognition engine has been employed. An Android-based comprehensive Application Programming Interface (API) security system has been reported in [4]. A human-machine interrelated rehabilitation robot control mechanism has been formulated in [5].

The proposed HMI system in this paper operates upon executable commands in both forms: vocal texts and gesture recognition. Manually controlled robots and electronic devices can be operated via transformation of wireless commands by the proposed HMI system. The system identifies separable commands encoded in voice and simultaneous gesture patterns by processing Accelerometer readings through the relevant software programs. A motion recognition based hierarchical classification system which differentiates motion activities from non-motion ones has been proposed in [6]. In [7], integrated computing has been proposed, where the host monitor is controlled by the phone display and the keyboard is controlled by the touchpad. The work reported in [8] presents a smart wheel chair control system via voice and gesture recognition methodologies; it introduces an Android-based voice command and gesture identification interface for the movement of the wheel chair to facilitate smooth application

978-1-5090-1269-5/16/$31.00 ©2016 IEEE

2016 5th International Conference on Informatics, Electronics and Vision (ICIEV)


Fig. 1. Functional diagram of the consistent process of cursor movement

for the physically challenged people. The work presented in [9] has proposed a universal voice control solution for non-visual access to the Android operating system; it eliminates target pointing tasks to improve the graphic interface interaction experience for visually impaired people. The paper presented in [10] has proposed an Accelerometer-based gesture recognition system for mobile devices which deals with a collection of 10 different hand gestures, and incorporates a human-robot interface to control a wheeled robot with gesture identification. The paper presented in [11] has documented an in-depth comparison of different algorithms used to recognize gestures based primarily on 3-D Accelerometer readings for robot manipulation; considering the two arguably best recognition methods, it has proposed a novel approach constructed using distance metric learning.

The system presented in this paper is novel and effective for controlling the targeted device via both vocal inputs and motion parameters. The features of the developed system, the documentation of the proposed layout, the principles of the Accelerometer sensor, the algorithms of the customized software programs, and the relevant testing sequences and performance analysis of the demonstrated work are articulated section by section in this paper.

II. PROPOSED DESIGN AND IMPLEMENTED SYSTEM LAYOUT

The basic manifestation of the developed system includes two major operational segments: (i) the host server application; (ii) the Android-based smart phone application. The salient features of the demonstrated system are documented in the following excerpt.

In Fig. 1 the implementation of mouse cursor movement is shown. In Fig. 2 the voice commanding mechanism is presented. The command transformation from the controller Android application to the controlled host computer invokes

Fig. 2. Functional diagram representing voice command activation procedure

distinctive operational segments which are briefly discussed here.

A. Motion Recognition

Motion identification depends completely on consistent data logging from the embedded 3-D Accelerometer sensor. In this case, linearly accelerated orientation values are processed and then attributed for further manipulation to achieve precise control over the mouse cursor movement. Over the Wi-Fi connectivity medium, the orientation data are continually transferred to the server application. A Java class has been specified to compile the motion parameters, and in accordance with the fragmented software topology the cursor is moved along the host console.
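The Java class mentioned above is not listed in the paper. The following is a minimal sketch of what such a receiver-side mapping might look like; the class name `CursorMapper`, the comma-separated "ax,ay,az" packet format and the pixel gain constant are illustrative assumptions, not the authors' implementation.

```java
import java.awt.Point;

public class CursorMapper {
    // Assumed gain converting accelerometer units to screen pixels
    static final double GAIN = 40.0;

    // Parse one hypothetical "ax,ay,az" packet sent by the phone application
    static double[] parsePacket(String packet) {
        String[] parts = packet.trim().split(",");
        return new double[] {
            Double.parseDouble(parts[0]),
            Double.parseDouble(parts[1]),
            Double.parseDouble(parts[2])
        };
    }

    // Map an orientation change to a new cursor coordinate,
    // clamped to the bounds of the host console
    static Point nextPosition(Point current, double dx, double dy,
                              int width, int height) {
        int x = (int) Math.max(0, Math.min(width - 1, current.x + dx * GAIN));
        int y = (int) Math.max(0, Math.min(height - 1, current.y + dy * GAIN));
        return new Point(x, y);
    }
}
```

On the host side, the resulting point would be applied with `java.awt.Robot.mouseMove(x, y)` from the java.awt package the authors cite.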

B. Voice Command Activation

A customized Android application has been developed to take textual commands as inputs. These commands are the converted interpretations of freely inputted vocal speech and words. The vocal sounds are processed through the Google speech-to-text converter and then transmitted to the server application wirelessly. A customized database contains almost 25 instructional mnemonics. The received text is scrutinized for a match against the pre-allocated texts of the database, and command execution commences once a matching directive is found. A Java Runtime object is responsible for carrying out the command activation.
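The paper does not list the database entries. A minimal sketch of the matching step could look as follows; the class name `CommandTable` and the three sample entries standing in for the ~25 mnemonics are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class CommandTable {
    // Hypothetical subset of the ~25-entry instruction database
    static final Map<String, String> COMMANDS = new HashMap<>();
    static {
        COMMANDS.put("open browser", "firefox");
        COMMANDS.put("shut down", "shutdown -h now");
        COMMANDS.put("open editor", "gedit");
    }

    // Normalize the recognized text and look it up in the database
    static Optional<String> match(String recognizedText) {
        String key = recognizedText.trim().toLowerCase();
        return Optional.ofNullable(COMMANDS.get(key));
    }
}
```

On a match, the server would hand the stored string to `Runtime.getRuntime().exec(...)`, consistent with the Java Runtime object described above.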

C. Implemented Software Kits and Functional Libraries

The host computing program has been developed using JDK 1.8 (Java Development Kit). The respective Android application has been built with Android SDK 4.4.2 (API 19). The programming platform has been provided by Eclipse LUNA. For the server application, the program structure has been built using libraries such as the java.net and java.awt packages. The functional library packages applied in developing the Android application are java.net, android.app, android.hardware, android.content, android.os, android.speech, android.view, android.widget and java.util.

III. EMBEDDED TRI-AXIS ACCELEROMETER SENSOR

An Accelerometer is an electromechanical device that measures acceleration forces. MEMS (Micro Electro Mechanical Systems) Accelerometers are among the simplest types and are generally utilized for low power density and low bandwidth applications requiring high sensitivity. Accelerometers, referred to as orientation sensors, are utilized in smart phones to detect the physical orientation of the device and to evaluate the linear acceleration of movement. There are different ways to fabricate orientation sensors. Some sensors are built using the piezoelectric effect: they contain microscopic crystal structures that get stressed by accelerative forces, which causes a voltage to be generated. Capacitive sensors are made of a spring-loaded, micro-machined structure mounted on a silicon base. A force on the structure deflects the seismic mass attached to the spring, and this deflection is measured using fixed-plate capacitor sensors. Capacitive sensing relies on the variation of capacitance pertaining to its geometry, and its sensitivity is intrinsically insensitive to temperature changes. The change in acceleration can be static (gravity) or dynamic (forced strain). Neglecting fringing effects, the parallel-plate capacitance is

C = \varepsilon A / d

where A = area of the electrodes, d = separation between the electrodes and \varepsilon = permittivity of the material. A change in any of these parameters changes the capacitance, and this variation is used in MEMS sensing.

For the linear acceleration mechanism, the dynamic changes in orientation along the three axes can be defined as

\vec{a}_x = \frac{d\vec{v}_x}{dt}, \quad \vec{a}_y = \frac{d\vec{v}_y}{dt}, \quad \vec{a}_z = \frac{d\vec{v}_z}{dt}

The resultant acceleration would be

\vec{a} = \vec{a}_x + \vec{a}_y + \vec{a}_z

|\vec{a}| = \sqrt{|\vec{a}_x|^2 + |\vec{a}_y|^2 + |\vec{a}_z|^2}

Here d\vec{v}/dt = change in velocity (along a reference axis) with respect to time. The X-axis (lateral), Y-axis (longitudinal) and Z-axis (vertical) orientations of the smart phone are measured from the tri-axis accelerated values of the positioning sensor.
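The resultant-magnitude computation maps directly to code. The following small helper (not from the paper, shown for illustration) computes |a| from the three axis readings:

```java
public class AccelMagnitude {
    // |a| = sqrt(ax^2 + ay^2 + az^2) from the three accelerometer axes
    static double magnitude(double ax, double ay, double az) {
        return Math.sqrt(ax * ax + ay * ay + az * az);
    }
}
```

For a phone at rest lying flat, ax ≈ ay ≈ 0 and az ≈ 9.81 m/s², so the magnitude reduces to the gravitational acceleration.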

IV. PROPOSED MECHANISM OF CURSOR MOVEMENT AND COMMAND EXECUTION

This section elaborates the technical aspects and implemented software topologies of the reported work.

A. Cursor Movement Algorithm

The topology of precise cursor movement deals with continuous data acquisition from the embedded orientation sensor. The cursor position is depicted as a 2-D coordinate value (X, Y), and respective changes in the coordinate values result in corresponding changes in the cursor position. The momentary variations in the (X, Y) values are stored and, according to a set of customized code snippets, these oriented position data are utilized to control the mouse navigation. The systematic procedure of continuous cursor movement is presented in Algorithm 1.

Algorithm 1 Algorithm for Cursor Control

Present X-axis Accelerometer reading → X0
Previous X-axis Accelerometer reading → X1
Present Y-axis Accelerometer reading → Y0
Previous Y-axis Accelerometer reading → Y1
X-axis step size → Ex
Y-axis step size → Ey
State variable → S; server computer input
Positive threshold ← P
Negative threshold ← N
INITIALIZE:
X0 ← 0, X1 ← 0, Y0 ← 0, Y1 ← 0, Ex ← 0, Ey ← 0, S ← 0
START:
X0 ← currentreading(X-axis)
Y0 ← currentreading(Y-axis)
Ex ← X0 − X1
Ey ← Y0 − Y1
if (X0 < 0) && (Y0 < 0) && (Ex > P) && (Ey > P) then
    S ← S + 1
else if (X0 > 0) && (Y0 > 0) && (Ex < N) && (Ey < N) then
    S ← S − 1
else
    S ← 0
end if
Goto START
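One iteration of the cursor-control state update described by Algorithm 1 can be sketched in Java as follows. The threshold values and the carrying of the current readings over to X1/Y1 between iterations (implied by the variable names but not written out in the pseudocode) are assumptions.

```java
public class CursorStateMachine {
    static final double P = 0.5;   // positive threshold (assumed value)
    static final double N = -0.5;  // negative threshold (assumed value)

    double x1 = 0, y1 = 0;  // previous axis readings
    int s = 0;              // state variable sent to the server

    // One iteration of Algorithm 1: compare the current readings
    // with the previous ones and update the state variable
    int step(double x0, double y0) {
        double ex = x0 - x1;  // X-axis step size
        double ey = y0 - y1;  // Y-axis step size
        if (x0 < 0 && y0 < 0 && ex > P && ey > P) {
            s = s + 1;
        } else if (x0 > 0 && y0 > 0 && ex < N && ey < N) {
            s = s - 1;
        } else {
            s = 0;
        }
        x1 = x0;  // remember the readings for the next iteration
        y1 = y0;
        return s;
    }
}
```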

B. Voice Activation Algorithm

The algorithmic approach to voice command and control commences with the creation of a Speech Recognizer object in the firmware. Taking input from the smart phone microphone, the driving program repeatedly sends the available word or text to the server through the wireless networking module. The application console provides probable textual suggestions for every input, searching for the correct samples to materialize the relevant instructions. The implemented topology for voice commanding is presented in Algorithm 2.

Algorithm 2 Algorithm for Voice Commanding

Array of audio texts → A; input to the program
Array index → I
Suggested array of texts and words → B; database
Suggested array index → K
Activated command variable → X
Connection variable → C
C ← 0 ; connection exists
C ← 1 ; connection breaks
INITIALIZE:
I ← 0, K ← 0, X ← 0, C ← 0
START:
Read A[I]
A[I] == B[K] ; input matches a member of the database
A[I] != B[K] ; input does not match any member of the database
if (A[I] == B[K]) && (C == 0) then
    for J ← K; J < size(B); J ← J + 1 do
        X[J] ← A[J] == B[J]
    end for
else
    X ← 0
end if
Goto START

C. Server Application Topology

The server application mechanism initiates with the definition of the Client application variables and their differential changes. The cursor navigation is regulated by assigning two specific conditions upon the changes in the Client variables: positive increment and negative increment. If either holds, the cursor on the host computer is shifted to a new position according to the differential change. If neither conditional assignment holds, the incoming message is treated as a command and checked against a customized database containing approximately 25 instructional directives. When it matches one of the sample commands, execution is activated using the Java Runtime static class and the governing code is executed. The data recording process is then reiterated and takes place continually until the termination instruction is provided. The overall topology of the runtime server application is shown in Algorithm 3.
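The dispatch decision described above (cursor update versus command lookup) can be sketched as a small pure function; the message formats, the `ServerDispatcher` name and the single database entry are illustrative assumptions rather than the authors' protocol.

```java
public class ServerDispatcher {
    // Outcome of handling one incoming client message
    enum Action { MOVE_CURSOR, RUN_COMMAND, IGNORE }

    // A message of the form "dx,dy" drives the cursor; anything else
    // is treated as a candidate voice command and checked against the
    // instruction database (a single entry here for illustration)
    static Action dispatch(String message) {
        if (message.matches("-?\\d+(\\.\\d+)?,-?\\d+(\\.\\d+)?")) {
            return Action.MOVE_CURSOR;
        }
        if (message.trim().equalsIgnoreCase("shut down")) {
            return Action.RUN_COMMAND;
        }
        return Action.IGNORE; // no match in the database: keep listening
    }
}
```

In a full server the MOVE_CURSOR branch would feed `java.awt.Robot` and the RUN_COMMAND branch would feed `Runtime.getRuntime().exec(...)`, as the paper describes.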

Algorithm 3 Algorithm for Server Application

Client application variables → X, Y
Elementary changes → dX, dY
Program termination variable → T
T ← 0 ; program executing
T ← 1 ; program completed
Cursor position → (X0, Y0)
INITIALIZE:
X ← 0, Y ← 0, dX ← 0, dY ← 0, T ← 0
START:
Take inputted (X, Y)
X = X + 1 and Y = Y + 1 ; positive increments
X = X − 1 and Y = Y − 1 ; negative increments
if (X += dX) && (Y += dY) && (T == 0) then
    (X0, Y0) = (X, Y)
else if (X −= dX) && (Y −= dY) && (T == 0) then
    (X0, Y0) = (X, Y)
else
    (X0, Y0) = (X0, Y0)
end if
Goto START

V. RESULT AND ANALYSIS

The overall implemented system incorporates customized Android and Java based software programs to provide an effective remote computing method via a controller smart phone. The manifestation depends on the processing of voice commands and on motion identification of an Accelerometer-embedded smart device.

Fig. 3. Client application Dishary initializing IP address, port number, a textual vocal command and transfer panel

In Fig. 3, the console of the developed Client application is presented. The system has been tested under different circumstances and the accounted performance analysis has been quite satisfactory. Several casework analyses are articulated in this section with reference to some performance indexes.

TABLE I
EVALUATED DEVIATIONS IN CURSOR MOVEMENT

θ (Deg.)   |error|
5          0.08
10         0.10
15         0.12
20         0.17
25         0.21
30         0.24

Case-1: Effect of Angular Orientation on Precise Cursor Movement:
The cursor navigation depends on the axial orientation of the smart phone, and these motion parameters are derived from the processing of the Accelerometer sensing values. The demonstrated system has been tested for accurate mouse movement with respect to the alignment of the smart phone across a 2-D XY surface, with the azimuthal (Z-axis) component of motion held constant. To observe the effects of the angular motion parameters on the simultaneous cursor position, the console of the host computer has been regarded as a virtual 2-D surface of 100 square grids, each with an area of 1 cm². The testing process uses a threshold of θ = 5 degrees for an elementary change of 1 unit in the cursor position, denoted by (X, Y) coordinates. For different angular positions, the respective deviational errors of the cursor movement have been evaluated from a graphical interface on the host machine. The observed deviations in cursor position are listed in Table I.

|error| = |r − r0|

where r0 = reference position and r = actual position.

Case-2: Effect of Voice Intensity on Command Activation:
The voice commanding mechanism has been established through comparative analysis of different vocal samples from both male and female users. The inputted speech files are first converted into their respective text formats, and then the textual instructions are carried out to control the operation of the host machine. The developed system has been experimentally tested with 100 voice samples of different intensities measured in decibels. The vocal intensity varies with the pitch of a sample, and according to these variations the accuracy of detecting executable commands also changes to a considerable extent. The observed accuracies for different vocal intensities are listed in Table II, where I = voice intensity in dB and α = accuracy of command execution in %.

Case-3: Effect of Wireless Connectivity on System Efficiency:

TABLE II
EVALUATED ACCURACIES IN VOICE ACTIVATION

I (dB)   α (%)
90       96
85       93
80       91
75       87
70       84
65       79

TABLE III
SYSTEM EFFICIENCIES FOR CLEAN AND NOISY CONNECTIVITY

d (m)   ηs (%)   ηn (%)
5       91       83
10      88       79
15      83       76
20      81       71
25      77       65

The presented system has been tested in a noisy environment where signaling interference and other communicative variances prevailed. The practical trials have shown that remote control over a targeted computer can be established from a fair distance of 15-20 m via a reasonably strong Wi-Fi network, while on a slow, relatively weak and multiply accessed network the smart phone application can access the host from a distance of 10 m. The caseworks stated in Case-1 and Case-2 were analyzed in the presence of a strong and compatible Wi-Fi networking utility. Distant control operation depends on the availability of an efficient wireless connection free from noise artifacts and interference. Table III contains the system efficiencies for different operating distances within noise-free single-mode and noise-corrupted multi-mode Wi-Fi connectivity zones respectively, where d = tested operating range (m) of the system within a Wi-Fi connectivity zone (the distance between the Client smart phone and the targeted computer), ηs = efficiency in % for noise-free connectivity and ηn = efficiency in % for noisy connectivity.

Case-4: Application of Motion Recognition in Robot Control:
The implemented motion detection based remote control over a host computer suggests the applicability of the accelerated readings to wireless control of a robotic structure from a moderately distant place. As the embedded Accelerometer sensing mechanism has proven efficient, especially in the presence of signal interference, the processed motion data can be utilized to manually control the movement and other activities of a robotic body from a remote place. The data transfer can take place through Wi-Fi, GSM (Global System for Mobile Communication) or Bluetooth technology.

The developed HMI system has been tested for controlling the operation of a customized robotic framework which can navigate along a 2-D XY surface. The robot has been programmed to take inputs from the Android Client application, which sends the orientation parameters of the smart phone.

Fig. 4. Robot control mechanism of the HMI system; (a) shows the orientation of the Accelerometer, (b) shows the tested robotic structure

Fig. 4 presents the robot control scheme via wireless connectivity by means of the motion recognition methodology developed in the proposed HCI system. According to the Accelerometer readings, which represent the physical motion values of the phone, the robot has been made to navigate along an arena. This practice complements the cursor movement, and the robotic activity can be interpreted as the positioning of the mouse cursor on the host computer. The voice command execution has been experimented with on the robotic framework, where some sample vocal sounds have been processed by a code snippet and transferred to the Arduino firmware of the robot. The XY alignment, robotic arm movement and mounted camera rotation are some of the modes of action tested in this casework. The accelerated data transmission from the smart phone to the micro-controller board of the robot has been established via the Bluetooth protocol.

The works reported in [1], [2], [3], [4] and [5] have not provided any practical accuracy evaluation with respect to different voice and gesture inputs, whereas this paper presents real-time performance parameters for different vocal and gesture inputs. The papers articulated in [6], [7], [9] and [10] have not presented any demonstration of the proposed methods, whereas this proposed work consists of a hardware implementation and the relevant performance analysis. The works in [8] and [11] have reported demonstrations but no pragmatic performance evaluation, which is documented in this paper. The evaluated performance analysis ascertains the effectiveness of the proposed work.

The discussed caseworks imply that the demonstrated framework is quite efficient and reliable for practical applications. Several issues have been observed while testing the system, such as the motion sensitivity of the sensor and the corresponding response of the developed software, elementary deflection of the cursor position irrespective of the phone movement, misidentification of different vocal sounds and converted texts by the host software, and small delays in the execution of a specific command. Such aspects are meant to be resolved with further improvement of the work.

VI. CONCLUSION

In this paper, the prospects of an Android-based wireless human-machine (specifically, human-computer) interrelated software dependent system have been elaborately presented. The system consists of certain dedicated software applications and a data acquisition mechanism based on the embedded motion sensor. The motion recognition and voice command execution provide stable remote control over the operation of a host machine. The demonstrated system has proven sustainable on the basis of its performance evaluation. The wireless communicative system is useful in applications of remote control of robots and electronic devices, and physically impaired people are beneficiaries of such an interactive medium. Such reliable systems augment the horizon of advancement in smart computing and invoke further development in this field.

REFERENCES

[1] H. Cho, S. Kim, J. Baek, and P. Fisher, "Motion recognition with smart phone embedded 3-axis accelerometer sensor," in Proc. IEEE Int. Conf. Systems, Man, and Cybernetics (SMC), Oct. 2012, pp. 919–924.

[2] E. Torunski, A. E. Saddik, and E. Petriu, "Gesture recognition on a mobile device for remote event generation," in Proc. IEEE Int. Conf. Multimedia and Expo (ICME), Jul. 2011, pp. 1–6.

[3] P. Kannan, S. K. Udayakumar, and K. R. Ahmed, "Automation using voice recognition with python sl4a script for android devices," in Proc. Int. Conf. Industrial Automation, Information and Communications Technology (IAICT), Aug. 28-30, 2014.

[4] K. I. Shin, J. S. Park, J. Y. Lee, and J. H. Park, "Design and implementation of improved authentication system for android smartphone users," in Proc. 26th Int. Conf. Advanced Information Networking and Applications Workshops, 2012.

[5] F. Zhang, X. Wang, Y. Yang, Y. Fu, and S. Wang, "A human-machine interface software based on android system for hand rehabilitation robot," in Proc. Int. Conf. Information and Automation (ICInfA), Aug. 2015.

[6] S. Zhang, P. McCullagh, C. Nugent, and H. Zheng, "Activity monitoring using a smart phone's accelerometer with hierarchical classification," in Proc. Sixth Int. Conf. Intelligent Environments (IE), Jul. 2010, pp. 158–163.

[7] B. Obuliraj, R. Vijayalakshmi, and K. Sudha, "Remote controlling pc with smartphone inputs from remote place with internets," in Proc. National Conf. Research Advances in Communication, Electrical Science and Structure (NCRACCESS-2015), pp. 40–43.

[8] S. U. Khadilkar and N. Wagdarikar, "Android phone controlled voice, gesture and touch screen operated smart wheelchair," in Proc. Int. Conf. Pervasive Computing (ICPC), Jan. 2015, pp. 1–4.

[9] Y. Zhong, T. V. Raman, C. Burkhardt, F. Biadsy, and J. P. Bigham, "Justspeak: Enabling universal voice control on android," in Proc. 11th Web for All Conf., ser. W4A '14. New York, NY, USA: ACM, 2014, pp. 36:1–36:4. [Online]. Available: http://doi.acm.org/10.1145/2596695.2596720

[10] X. Wang, P. Tarrio, A. Bernardos, and E. Metola, "User-independent accelerometer-based gesture recognition for mobile devices," The Advances in Distributed Computing and Artificial Intelligence Journal, 2012.

[11] T. Marasovic, V. Papic, and J. Marasovic, "Motion-based gesture recognition algorithms for robot manipulation," Int. Journal of Advanced Robotic Systems, 2014.
