International Journal of Mechanical & Mechatronics Engineering IJMME-IJENS Vol:20 No:01 15
200701-5959-IJMME-IJENS © February 2020 IJENS I J E N S
Inverse Kinematic Based Brain Computer Interface to
Control Humanoid Robotic Arm
Abstract— A new Inverse Kinematic based Brain Computer Interface (IK-BCI) system is proposed. The system performs aim selection on behalf of the user by acquiring the user's EEG signal, extracting the signal's features, classifying the intention behind the signal, and applying inverse kinematics to the predicted position so that the robotic arm reaches the desired target. Three types of five-class EEG mental-task signals were acquired with the EMOTIV EPOC EEG headset in separate sessions and compared in terms of online system performance when each was used as the input signal. The proposed feature extraction method is a hybrid that combines Multiclass Common Spatial Patterns (M-CSP) with Autoregressive (AR) coefficient features. A Multiclass Support Vector Machine with Radial Basis kernel Function (SVM-RBF), based on the LIBSVM MATLAB library, was used for the machine learning stage. An analytical solution was proposed to perform the Inverse Kinematics (IK) of a 5-DOF Humanoid Robotic Arm (HRA) controlled in an online fashion. The practical results showed successful cooperation between the IK and BCI modules, with a highest classification accuracy of 88.75%, which led to successful reaching of the desired targets.
Index Terms— Brain Computer Interface, Humanoid Robotic Arm, Inverse Kinematics, Multiclass Common Spatial Patterns.
I. INTRODUCTION
A variety of patients suffering from neuromuscular disorders or amputation become isolated from the outside world because the neural pathways that control their voluntary muscles are damaged. For this reason, and thanks to the development of high-speed, low-cost computers, new channels can be established to communicate with the outside world using the Electroencephalogram (EEG) signal resulting from brain activity. The paradigm of interfacing the EEG signal with computers that perform further processing stages to gain control over remote devices is called a Brain Computer Interface (BCI). The general architecture of a BCI comprises the following stages: an EEG signal acquisition device, signal pre-processing, feature extraction, feature classification, and machine control [1]. One of the most important goals of BCI research is to provide movement to prosthetic devices such as robotic arms to assist users in their daily activities [2]. A BCI system may use Motor Imagery (MI) EEG signals to generate commands that steer the End-Effector (EE) of a robotic arm toward a target. Such control requires the user to sustain a high level of concentration for a long time, which may lead to stress and frustration. One available solution is to adopt a strategy that enables the user to achieve the goal without maintaining prolonged concentration. A semiautonomous BCI embodies this solution by identifying the user's intention as a desired target or aim, computing the corresponding position in terms of the robotic arm's joint frames, and then calculating the joint angles with an appropriate inverse kinematic model to reach that target.
Many BCI approaches have been proposed in the literature. In [3], the EEG signal processing system presented in [4] was used to control a robotic hand in offline mode; the authors used two-class MI data to extract Common Spatial Patterns (CSP) features, classified with a Support Vector Machine with Radial Basis Kernel Function (SVM-RBF). In [5], Motor Imagery with overt Motor Execution (MI+ME) EEG signals were used to generate commands controlling a Humanoid Robotic Hand (HRH) in real time. That BCI paradigm was able to process five-class EEG signals and provide four control commands in real time. To extract features, a hybrid method combining multiclass CSP with Autoregressive coefficients was proposed; the machine learning algorithm was SVM-RBF based on LIBSVM. The system proposed in [6] relies on eye blinking as the EEG signal to control a wheelchair in real time. Regarding systems that integrate Inverse Kinematics (IK) with BCI, the paradigm presented in [7] used the P300 signal to generate mental commands; the BCI2000 framework handled the BCI signal processing, and the Jacobian matrix method was used to solve the IK problem in order to control a seven Degree of Freedom (7-DOF) robotic manipulator. The authors in [2] used the raw EEG signal of Motor Execution (ME) to extract band power features for real-time prediction. The system uses an Artificial
Ammar A. Al-Hamadani¹, Mohammed Z. Al-Faiz², Senior Member, IEEE
¹ Department of Network Engineering, College of Engineering, Al-Iraqia University, and College of Information and Communication Engineering, Al-Nahrain University, Baghdad, Iraq, [email protected]
² College of Information and Communication Engineering, Al-Nahrain University, Baghdad, Iraq
Neural Network (ANN) and SVM-RBF classifiers and compares their performances. A wearable robotic arm was controlled by linking the BCI result to a suitable IK model. The system proposed in [8] used MI signals in the mu and beta bands to control a robotic arm through an IK algorithm; that BCI system was able to classify three mental tasks using band power features with Linear Discriminant Analysis (LDA) as the classifier. This paper presents a development step that takes the system presented in [5] toward controlling a 5-DOF Humanoid Robotic Arm (HRA) using the Inverse Kinematic (IK) model proposed in [9]. Therefore, a new Inverse Kinematic based Brain Computer Interface (IK-BCI) system was proposed, and three mental-task types of EEG signal were tested separately: MI of the four limbs, MI+ME of the right arm, and eye blinking. Five EEG classes were acquired for each mental task using the EMOTIV EPOC. A hybrid feature extraction method was used along with a multiclass SVM-RBF for machine learning. The HRA was constructed based on the InMoov design, and an analytical IK solution was used in cooperation with the BCI system to generate the joint angles corresponding to the predicted class.
II. PROPOSED METHOD
A. System Architecture
Robotic movement control can be regarded as a value-added application of a BCI system. Such control can take the form of forward kinematic control (i.e., providing joint angles directly) or of aim selection. Forward kinematic control requires the BCI system to specify the real-time movement of the robotic arm continuously and is therefore more complex. Aim selection, on the other hand, uses the BCI system only to indicate the intended position of the EE, while the surrounding software and hardware handle the forward kinematic control needed to achieve that position; aim selection thus reflects the sense of inverse kinematic control. Online control of the robotic arm based on the aim-selection strategy was proposed to embody the IK-BCI system. Ten trials of three types of five-class EEG signal were acquired using the EMOTIV headset. The signal types were (MI+ME of the right hand), (pure MI of the four limbs), and (eye blinking), along with a neutral signal for each type. These three signals were used one by one in separate training/online sessions, where each session contained four trials for each class. The classification accuracies of the IK-BCI system obtained with each signal were compared, and the collected results are presented later. A block diagram illustrating the proposed system architecture of the IK-BCI is shown in Fig. 1.
Fig. 1. Proposed Architecture of IK-BCI System.
The training process of the IK-BCI system was repeated three times, each time training the system with one type of EEG signal. Ten trials of each class were used to train the IK-BCI system. In every training session, each recorded class underwent the hybrid feature extraction: each class was decomposed against the neutral class in M-CSP while, simultaneously, the AR features were extracted. The dimensionality reduction algorithm paired with each feature extractor was then applied, followed by feature normalization. To let the IK model work with this system, each class of EEG signal was assigned a predefined, specific target reflecting the position the user most wants to reach. Four desired positions can thus be assigned, since the system is able to classify four EEG classes. These desired positions include the desired orientation of the HRA End-Effector (EE).
B. EEG Signal Acquisition and Mental Task Types
The acquisition system consists of two parts: the headset part, a 14-channel EMOTIV EPOC headset, and the software part, MATLAB code that acquires the signal from the device in real time and stores it in matrices as a dataset for use in the training process. Three types of mental tasks were used, as follows:
1) Five-Class (MI+ME) Signal
Only one healthy subject, with no previous BCI experience, participated in this experiment over a period of three days. During the collection of training data, eleven runs were recorded over the three days, divided as three
runs on the first day and four runs on each of the second and third days. Each run consisted of five separate trials (files) of 5 s Motor Imagery with Motor Execution (MI+ME) task recordings. The classes of the five trials were open, close, pronation, supination, and rest of the right hand; the rest class was later used as the neutral class. The data of each trial were stored in a MATLAB matrix file (.mat) of size (640 × 14). Only ten runs were used to train the machine learning module, while the eleventh run was used to test it. Before each trial recording, the subject was instructed to stay calm, not to clench his jaw or blink his eyes, and to listen for a cue sound (beep) marking the beginning of the acquisition process. In every trial, the subject performed the task upon hearing the beep and stopped upon hearing another beep after 5 seconds of recording. During the online session, the same sound-cue system was employed, and the required (MI+ME) command signal was then generated by the subject as a single trial to be processed by the next stage.
2) Five-Class Four-Limbs MI Signal
The properties of the signal acquisition system are similar to those discussed above. Eleven trials of the five classes of this task type were recorded using the EMOTIV EPOC EEG headset: four of these trials represent the required classes, and the last one is considered the neutral class. For the four recorded MI classes, the subject was instructed to imagine the movement of his right arm, left arm, right leg, or left leg in separate sessions, while for the neutral class the subject was instructed to stay calm and try his best not to imagine any movement. Each recording session lasted 5 seconds, i.e., 640 samples at a sampling rate of 128 Hz.
3) Five-Class Eye-Blinking EEG Signal
Similar acquisition properties were used. For the four recorded blink classes, the subject blinked his eyes one, two, three, or four times in four separate recording sessions to obtain the 4-class data; the neutral class was to keep the eyes open for 5 seconds. Each recording session lasted 5 seconds, i.e., 640 samples at a sampling rate of 128 Hz.
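The acquisition parameters shared by all three mental-task types (5 s trials, 128 Hz, 14 channels) fix the trial matrix size used throughout the paper. A minimal Python sketch (the paper's own tooling is MATLAB; the names here are illustrative):

```python
import numpy as np

# Shared acquisition parameters of the three mental-task types.
FS = 128          # EMOTIV EPOC sampling rate (Hz)
DURATION = 5      # trial length (s)
N_CHANNELS = 14   # EMOTIV EPOC channel count

def empty_trial():
    """Allocate one trial buffer shaped (samples, channels), matching
    the (640 x 14) .mat matrices described in the paper."""
    return np.zeros((FS * DURATION, N_CHANNELS))

trial = empty_trial()
print(trial.shape)  # (640, 14)
```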
C. Features Extraction
A new mix of features was proposed for the online BCI system. The hybrid approach supplies a greater variety of features to represent the information carried by the EEG signal, giving the classification module more evidence with which to identify the intended mental-task class. The proposed hybrid feature extraction method combines (M-CSP+M-VECS) with (AR+PCA) and is termed (M-CSP+AR) for short. Here, M-CSP+M-VECS stands for the Multiclass Common Spatial Patterns feature together with its proposed dimensionality reduction, Multiclass Variance Entropy Channel Selection, while AR+PCA stands for the Autoregressive coefficient features together with their proposed dimensionality reduction, Principal Component Analysis.
D. Multiclass Common Spatial Patterns (M-CSP)
The proposed approach extends two-class CSP to solve the multiclass problem by breaking it down into several two-class problems. This becomes possible by providing the neutral state as an extra mental task. Taking one trial matrix for each class (E), the normalized covariance matrix is formed as follows:

Σ_i = (E_i E_i′) / trace(E_i E_i′)  (1)
where i = 1 to n with n = 4, and Σ_i is the normalized covariance matrix calculated for classes (1, 2, 3, 4). Then Σ_neutral is extracted separately as the normalized covariance matrix of the neutral state. The composite matrix Σ_c therefore becomes as in Eq. 2 and is decomposed by eigendecomposition as in Eq. 3.
Σ_ci = Σ_neutral + Σ_i  (2)
Σ_ci = u_i λ_i u_i′  (3)
where u is a matrix of eigenvectors and λ is the corresponding diagonal matrix of eigenvalues. λ is sorted in descending order to give λ_d. The whitening transformation matrix φ of Eq. 4 is computed to decorrelate Σ_neutral and Σ_i as in Eq. 5; such a transformation leads them to share common eigenvectors. To extract the common eigenvectors, the i-th class is decomposed against the neutral state, and eigendecomposition is applied to the matrices resulting from Eq. 5 to obtain the generalized common eigenvector matrix U as in Eq. 6. As a check, adding the two eigenvalue matrices of Eq. 6 should give the identity.
φ_i = √(λ_d,i⁻¹) u_i′  (4)
S_neutral = φ_i Σ_neutral φ_i′ ,  S_i = φ_i Σ_i φ_i′  (5)
S_neutral = U θ_neutral U′ ,  S_i = U θ_i U′ ,  where θ_neutral + θ_i = I  (6)
where S_neutral and S_i are the whitened (decorrelated) versions of the Σ_neutral and Σ_i matrices respectively, θ_neutral and θ_i are the eigenvalue matrices, and U is the corresponding common eigenvector matrix. The elements of θ_i are sorted in ascending order and, based on their indices, the corresponding eigenvectors in U are reordered for use in spatial filtration. For each trial (E), the spatial filter W is derived as in Eq. 7, E is then filtered using Eq. 8, and the CSP features are extracted for the 14-channel EEG signal as in Eq. 9.
W_i = (U′ φ_i)′  (7)
Z_i = W_i E_i  (8)
f_{i,j} = log( var(Z_ch) / Σ_{c=1}^{C} var(Z_c) )  (9)
where f_{i,j} is the feature vector of the i-th class and j-th run (trial). Each feature vector contains 14 columns representing the 14 EEG channels. Fig. 2 shows the proposed flowchart for computing M-CSP in the training session. Reducing the M-CSP feature dimensionality means selecting the r EEG channels that best represent the M-CSP features; this reduces the classification error and leads to higher classification accuracy, thus enhancing classification performance.
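Eqs. 1–8 for one class against the neutral class can be sketched in Python as follows (the paper works in MATLAB; trials are assumed here to be channels × samples, i.e. the transpose of the stored (640 × 14) matrices, and the function names are illustrative):

```python
import numpy as np

def norm_cov(E):
    """Eq. 1: normalized spatial covariance of one trial E (channels x samples)."""
    C = E @ E.T
    return C / np.trace(C)

def csp_filter(E_class, E_neutral):
    """Two-class CSP of one mental-task class against the neutral class
    (Eqs. 2-7); the paper repeats this once per class (i = 1..4)."""
    S_cls = norm_cov(E_class)
    S_neu = norm_cov(E_neutral)
    S_comp = S_cls + S_neu                    # Eq. 2: composite covariance
    lam, u = np.linalg.eigh(S_comp)           # Eq. 3: eigendecomposition
    order = np.argsort(lam)[::-1]             # sort eigenvalues descending
    lam, u = lam[order], u[:, order]
    phi = np.diag(lam ** -0.5) @ u.T          # Eq. 4: whitening transform
    S_neu_w = phi @ S_neu @ phi.T             # Eq. 5: whitened covariance
    _, U = np.linalg.eigh(S_neu_w)            # Eq. 6: common eigenvectors
    return U.T @ phi                          # Eq. 7 (up to the transpose written in the paper)

# Filtering a trial (Eq. 8) is then Z = W @ E, and the log-variance
# features of Eq. 9 are computed per channel (row) of Z.
```

Because the whitened class covariances sum to the identity, applying W to the composite covariance returns the identity matrix, which is exactly the consistency check mentioned after Eq. 6.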
Fig. 2. Proposed calculation of M-CSP features in training session.
Multiclass Variance Entropy Channel Selection (M-VECS) was proposed as the dimensionality reduction algorithm for the M-CSP features. The proposed combination of M-CSP with M-VECS in the training session is presented in Fig. 3.
Fig. 3. Implementation of the proposed M-CSP on the recorded data used in the SVM training session. Note that the M-CSP algorithm is repeated for the n mental tasks (classes) of each run.
In the training session, feature extraction starts by applying M-CSP, with each class decomposed against the neutral class for each run. CSP features are calculated using Eq. 9 to yield a 14-column (channel) feature vector (f_class,run). M-VECS is then applied to select the r channels that best represent the M-CSP features. The multiclass variance entropy of channel i in the training session is defined as follows:
Entropy of ch(i) = −( var(f¹_ch(i)) ln(var(f¹_ch(i))) + var(f²_ch(i)) ln(var(f²_ch(i))) + var(f³_ch(i)) ln(var(f³_ch(i))) + var(f⁴_ch(i)) ln(var(f⁴_ch(i))) )  (10)

where i is the EEG channel number and (f¹_ch(i), f²_ch(i), f³_ch(i), f⁴_ch(i)) represent the class 1, 2, 3, and 4 feature
vectors of one trial of the i-th channel. The channels with the four lowest variance entropies were selected, since the value r = 4 gave the best performance. Finally, the feature matrix was created by vertically concatenating the 4-channel feature vectors. The same approach was used in the online session, except that the single online trial (of unknown class) was decomposed against each of the 10 neutral classes used in the training session, as in Fig. 4.
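The training-session channel selection of Eq. 10 can be sketched as follows, assuming the CSP features of all classes and runs are stacked into an array of shape (n_classes, n_runs, n_channels); this is a Python illustration, not the authors' code:

```python
import numpy as np

def mvecs_select(features, r=4):
    """M-VECS channel selection (Eq. 10). `features` is assumed to hold
    CSP features with shape (n_classes, n_runs, n_channels); for each
    channel, the per-class variance v is taken across runs, the score
    -sum(v * ln v) is formed over the classes, and the r channels with
    the lowest score are kept (the paper found r = 4 to perform best)."""
    v = features.var(axis=1)                  # per-class, per-channel variance (> 0 assumed)
    score = -(v * np.log(v)).sum(axis=0)      # Eq. 10: one entropy score per channel
    return np.argsort(score)[:r]              # indices of the r lowest entropies
```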
Fig. 4. Proposed calculation of M-CSP features in online session.
The multiclass variance entropy of channel i in the online session is defined as follows:

Entropy of ch(i) = −( var(f^x̂_ch(i)) ln(var(f^x̂_ch(i))) )  (11)

where i = 1, …, 14 indexes the EEG channels and f^x̂_ch(i) is the feature vector of the unknown class (x̂) that needs to be predicted. The proposed online M-CSP feature extraction with M-VECS is presented in Fig. 5.
Fig. 5. Extraction of M-CSP features during the online session. No neutral trial is acquired in the online session; the same neutral data as in training are used.
E. Autoregressive Coefficients (AR)
This is a frequency-domain feature based on the Linear Prediction Coding (LPC) model:

x_n = −Σ_{i=1}^{p} a_i x_{n−i} + w_n  (12)

where x_n is the n-th sample of the signal, a_i are the autoregressive coefficients, p is the order of the autoregressive model, and w_n is white noise. A fourth-order AR model (p = 4) was proposed for the EEG, with the AR coefficients estimated using the Levinson-Durbin method as implemented by the lpc MATLAB function:

AR = lpc(x, P);  (13)
where AR is a (1×56) feature vector, x is the training EEG data of one class (in the training session) or the online EEG data of the unknown class to be predicted (in the online session), and P is the order of the linear predictor. With 14 EEG channels, four AR coefficients are returned per channel, yielding 56 features. To increase the resolution of the feature space, a sliding window was proposed; based on best performance, a window size of 1.6 s (200 EEG samples) with a 0.9 s (120-sample) increment was selected. In each window, 56 features are extracted and vertically concatenated with the next window's features. Since 56 features constitute a high-dimensional feature vector, PCA was used to overcome the classifier over-fitting problem (curse of dimensionality) and thus increase classification accuracy. A MATLAB function was used to process the columns (channels) of the input feature matrix with principal component analysis: it performs eigendecomposition on the feature matrix to calculate the eigenvalues and their corresponding eigenvectors, and only the eigenvectors corresponding to the largest eigenvalues are retained as feature vectors. The MATLAB function usage was as follows:

features = pca_feature_reduction(AR, d);  (14)
where features is the reduced feature vector of size (1×21), AR is the autoregressive coefficient feature vector, and d is the required dimension (number of columns) after reduction. Fig. 6 shows the flowchart of the proposed AR feature extraction algorithm.
Fig. 6. Flowchart of the proposed implementation of the AR feature extraction algorithm.
PCA decorrelates the columns of the AR features so that they are ordered by contribution to the total variation; columns whose contribution is very weak can be removed. The number of retained features was kept as large as possible based on best performance; therefore, d was selected to be 21.
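The AR stage of Eqs. 12–14 — Levinson-Durbin estimation, the 200-sample/120-sample sliding window, and PCA reduction — can be sketched in Python as follows. The paper uses MATLAB's lpc and a custom pca_feature_reduction; these re-implementations are illustrative, not the authors' code:

```python
import numpy as np

def ar_coeffs(x, p=4):
    """AR(p) coefficients via autocorrelation + Levinson-Durbin, a
    re-implementation of what MATLAB's lpc(x, p) computes (Eqs. 12-13).
    Returns [1, a_1, ..., a_p]."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] for k in range(p + 1)]) / n  # biased autocorrelation
    a = np.zeros(p + 1)
    a[0] = 1.0
    err = r[0]
    for k in range(1, p + 1):                       # Levinson-Durbin recursion
        lam = -(r[k] + a[1:k] @ r[1:k][::-1]) / err
        a[1:k + 1] = a[1:k + 1] + lam * a[k - 1::-1]
        err *= 1.0 - lam * lam
    return a

def windowed_ar_features(trial, p=4, win=200, inc=120):
    """Sliding-window AR features: a 200-sample window moved in 120-sample
    steps over a (640 x 14) trial; each window contributes one row of
    14 channels x 4 coefficients = 56 features."""
    n, ch = trial.shape
    rows = [np.concatenate([ar_coeffs(trial[s:s + win, c], p)[1:]
                            for c in range(ch)])
            for s in range(0, n - win + 1, inc)]
    return np.vstack(rows)

def pca_reduce(F, d=21):
    """Project a feature matrix F (windows x features) onto its d leading
    principal components, mimicking pca_feature_reduction (Eq. 14)."""
    Fc = F - F.mean(axis=0)
    w, v = np.linalg.eigh(np.cov(Fc, rowvar=False))
    return Fc @ v[:, np.argsort(w)[::-1][:d]]
```

With these parameters, a 640-sample trial yields four windows (starts at samples 0, 120, 240, 360), hence a 4 × 56 feature matrix before reduction.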
F. Feature Normalization
The normalization stage was proposed for the online BCI system and is used in both the training and online sessions. The final feature matrix of the paradigm contains the two kinds of features, M-CSP and AR, concatenated horizontally. This stage normalizes the hybrid features so that large-scale data do not dominate smaller-scale data. It is crucial because the final feature matrix contains two different kinds of features (i.e., M-CSP and AR), and both must be placed on the same scale, as follows:
S = √( Σ_{i=1}^{n} (f_i − f̄)² / (n − 1) )  (15)
Q = (f − f̄) / S  (16)
where f is one feature data sample, f̄ is the mean value of one row of the feature matrix, n is the total number of data points in each row vector, S is the standard deviation, and Q is the normalized data sample.
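Eqs. 15–16 amount to a row-wise z-score; a minimal Python sketch:

```python
import numpy as np

def normalize_features(f):
    """Eqs. 15-16: row-wise z-score. Each row of the hybrid feature
    matrix is centred on its mean and divided by its sample standard
    deviation (n-1 denominator), putting the M-CSP and AR features on
    a common scale."""
    f = np.asarray(f, dtype=float)
    f_bar = f.mean(axis=1, keepdims=True)
    s = f.std(axis=1, ddof=1, keepdims=True)   # Eq. 15
    return (f - f_bar) / s                     # Eq. 16
```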
G. Features Classification
The LIBSVM MATLAB library, as a Multiclass Support Vector Machine (M-SVM) [10], was employed as the machine learning algorithm for the feature classification stage. SVM solves a quadratic programming problem: data are mapped to a higher dimension and separated by hyperplanes whose margin is maximized by minimizing the weights w, the bias b, and the misclassification error (ξ), as follows:
Minimize { ½ wᵀw + C Σ_{i=1}^{n} ξ_i }
subject to  w·Ø(x_i) + b ≥ +1 − ξ_i  for class 1 (y_i = +1)
            w·Ø(x_j) + b ≤ −1 + ξ_j  for class 2 (y_j = −1)  (17)
where w is the weight vector, ξ is the classification error, C is the weight of the error, and Ø is the transformation to the higher-dimensional space. LIBSVM adopts the "one-against-one" strategy for multiclass prediction, dividing the multiclass data into several two-class classifiers; each trained two-class classifier takes part in predicting the testing data or the online (unknown) data. SVM uses Eq. 18 to predict the output class:
y_p = sign( Σ_i y_i α_i K(x_i, x_j) + b )  (18)
where y_p is the predicted class, y_i is the target class, α_i is the Lagrangian multiplier, b is the bias, and K(x_i, x_j) is the RBF kernel function of Eq. 19.

K(x_i, x_j) = e^(−γ ‖x_i − x_j‖²)  (19)
Based on best classification performance, C = 2⁸ and γ = 2⁻³·⁴⁷⁴ were chosen. The classification accuracy was calculated using Eq. 20.

acc = ( Σ_{i=1}^{n} true class_i ) / total (true + false),  n = 4  (20)
A post-processing technique was used to enhance the classification accuracy by reducing transitional errors. For the single-trial online session, majority voting was used to select the most frequent class and admit it as the single, final output reflecting the unique intention of the user, as illustrated in Fig. 7.
Fig. 7. Example of applying majority voting post processing technique.
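The RBF kernel of Eq. 19 (with the paper's γ, reading the printed exponent as 2⁻³·⁴⁷⁴) and the majority-voting post-processing of Fig. 7 can be sketched as:

```python
import numpy as np
from collections import Counter

GAMMA = 2 ** -3.474   # the paper's RBF width, as interpreted above

def rbf_kernel(xi, xj, gamma=GAMMA):
    """Eq. 19: K(xi, xj) = exp(-gamma * ||xi - xj||^2)."""
    d = np.asarray(xi, dtype=float) - np.asarray(xj, dtype=float)
    return np.exp(-gamma * (d @ d))

def majority_vote(window_predictions):
    """Post-processing of Fig. 7: the class predicted most often over a
    trial's windows becomes the single final output."""
    return Counter(window_predictions).most_common(1)[0][0]

print(majority_vote([1, 3, 1, 1, 2]))  # 1
```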
H. Kinematics of the Humanoid Robotic Arm
For the 5-DOF Humanoid Robotic Arm depicted in Fig. 8, the Inverse Kinematic (IK) problem must be solved to find the five joint angles (θ₁, θ₂, …, θ₅) that achieve the desired position and orientation of the arm's EE. The proposed solution is analytical: the joint angles are represented by equations derived from the reversed Forward Kinematic (FK) matrices using algebraic, geometric, and trigonometric relations. The first assumption made is that the wrist angle (θ₅) is always zero, since it represents the EE of the HRA and its value is assumed not to affect the pose of the HRA. The final IK calculations are shown in the flowchart of Fig. 9.
Fig. 8. Joints Angles and links of the proposed HRA.
Fig. 9. Flowchart of the proposed IK algorithm using the analytical solution, where S denotes sin and C denotes cos.
θ₁ = tan⁻¹( −o_x / o_y )
θ₂ = tan⁻¹( (P_x S₁ − P_y C₁) / P_z )
C₃ = (P_z − a_z L₂) / (−L₁ C₂)
S₃ = √(1 − C₃²)
θ₃ = tan⁻¹( S₃ / C₃ )
SE = √(P_x² + P_y² + P_z²)
θ₄ = π − cos⁻¹( (L₁² + L₂² − SE²) / (−2 L₁ L₂) )
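The flowchart equations can be transcribed directly into code. The following Python sketch fixes θ₅ = 0 as in the paper; which orientation components (o_x, o_y, a_z) are passed in is inferred from the flowchart, not taken from the authors' implementation:

```python
import numpy as np

def ik_5dof(P, o_xy, a_z, L1, L2):
    """Analytical IK transcribed from the Fig. 9 flowchart. P = (Px, Py, Pz)
    is the desired EE position, o_xy = (ox, oy) and a_z are components of
    the desired orientation vectors, and L1, L2 are the upper-arm and
    forearm link lengths. theta5 is fixed at 0 as assumed in the paper."""
    Px, Py, Pz = P
    ox, oy = o_xy
    t1 = np.arctan2(-ox, oy)                       # theta1
    s1, c1 = np.sin(t1), np.cos(t1)
    t2 = np.arctan2(Px * s1 - Py * c1, Pz)         # theta2
    c3 = (Pz - a_z * L2) / (-L1 * np.cos(t2))      # C3 as in Fig. 9
    s3 = np.sqrt(max(0.0, 1.0 - c3 ** 2))          # S3 = sqrt(1 - C3^2)
    t3 = np.arctan2(s3, c3)                        # theta3
    se = np.sqrt(Px ** 2 + Py ** 2 + Pz ** 2)      # shoulder-to-EE distance
    t4 = np.pi - np.arccos((L1 ** 2 + L2 ** 2 - se ** 2) / (-2 * L1 * L2))
    return t1, t2, t3, t4, 0.0                     # theta5 = 0
```

As a sanity check, for a fully stretched arm (SE = L₁ + L₂) the elbow angle θ₄ evaluates to π.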
Note that this flowchart can be used directly in programming the HRA kinematics. Since each mental-task type contains a set of four classes plus one neutral class, each signal type can be assigned to represent four desired positions (A, B, C, D) that the user aims to reach. These desired positions are defined as:
P_A = [X_A, Y_A, Z_A]ᵀ,  P_B = [X_B, Y_B, Z_B]ᵀ,
P_C = [X_C, Y_C, Z_C]ᵀ,  P_D = [X_D, Y_D, Z_D]ᵀ  (21)
For example, for the (MI+ME) signal, hand open can be assigned to represent position P_A, hand closed to P_B, hand pronation to P_C, and hand supination to P_D. For the MI signal, imagining the right hand can be assigned to P_A, the left hand to P_B, the right leg to P_C, and the left leg to P_D. For eye blinking, one blink can be assigned to P_A, two blinks to P_B, three blinks to P_C, and four blinks to P_D.
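Operationally, this assignment is just a lookup from the predicted class to a predefined target before the IK stage is invoked. An illustrative Python sketch, using the coordinates later listed in Table II:

```python
import numpy as np

# Illustrative lookup from predicted EEG class to the predefined target
# of Eq. 21; orientations and positions are those listed in Table II.
TARGETS = {
    1: ("P_A", (-168, 50, -152), np.array([20, -20, -50])),
    2: ("P_B", (-138, 65, -124), np.array([50, -20, -30])),
    3: ("P_C", (0, 90, 0),       np.array([64, 0, 0])),
    4: ("P_D", (180, 31, 180),   np.array([15, 0, -60])),
}

def target_for(predicted_class):
    """Map a majority-voted class (1-4) to (point name, orientation, position)."""
    return TARGETS[predicted_class]

print(target_for(3)[0])  # P_C
```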
Therefore, the proposed M-SVM-RBF algorithm was trained to separate (predict) four different positions intended by the IK-BCI user, so the result of classification followed by majority voting reflects one desired (position, orientation) pair out of four that the user wants the HRA's EE to reach. During the online session, the user is required to perform the particular EEG class corresponding to the desired position and orientation. After online feature extraction, normalization, class prediction, and majority voting, the predicted class is translated into its corresponding (position, orientation) pair and passed to the IK algorithm, which calculates the joint angles required to reach that position. The output of the IK module is signaled to an Arduino as a character representing the required set of joint angles. The programmed Arduino microcontroller translates the arriving character into servo motor rotations that drive the HRA's links, via Pulse Width Modulated (PWM) signals, to the corresponding angles. The subject observing the HRA's movement constitutes visual feedback: any misclassification in the decoder output can be corrected by the observing subject reattempting to generate the right EEG signal. Fig. 10 displays a snapshot of the working environment, where the user is wearing the EEG headset and the HRA is placed at the origin of the coordinate frame plotted on the workspace; the figure also shows the user trying to generate the EEG signal that eventually moves the arm toward the selected aim or position.
Fig. 10. User trying to reach the aimed position using EEG signal in the
proposed IK-BCI system.
III. RESULTS AND DISCUSSION
The aim-selection strategy was implemented based on the EEG signal reflecting the user's intention. The three types of EEG mental tasks were used interchangeably as the means of selecting the desired aim or target. Each signal type was used in turn as the input to train the IK-BCI system and to predict the user's aim of reaching a point in the online session. After the training session, eight online trials were conducted to assess the classification accuracy obtained with each signal. The online results showed different performance for each signal type in terms of classification accuracy. To compare the performance of the three signals, the average overall accuracies were placed in a single plot, as in Fig. 11.
Fig. 11. Combined overall online classification of the three types of EEG
signal.
As can be seen in Fig. 11, the user improved at producing the signal as more trials were performed, reaching 80.75% for MI of the four limbs and 81.5% for the eye-blinking EEG; the same holds for MI+ME, which reached 88.75% after eight trials. This means that such operation requires extensive user training to gain more experience with the IK-BCI system and to achieve better results. Details of the achieved online classification accuracies of the three signals are presented in Table I.
TABLE I
DETAILS OF CLASSIFICATION ACCURACIES (%).

Trial No.   Right hand MI+ME   Eyes blinking   Four limbs MI
1           75                 49              50
2           65                 50              51.25
3           76.25              50              54.75
4           76.87              50              62
5           71.87              50              63.25
6           85.62              57.5            72
7           76.25              65.25           79.25
8           88.75              81.5            80.75
Average     76.95              56.66           64.16
As can be seen from Table I, the signals can be ranked by performance as follows: right-hand MI+ME, four-limbs MI, then eye-blinking EEG. The choice among these signal types depends on the application. For example, the right-hand MI+ME signal could be used in industrial settings, such as a healthy operator controlling a robotic manipulator. On the other hand, the four-limbs MI and eye-blinking signals could serve the rehabilitation engineering sector, for instance users suffering from upper-limb amputation. The proposed IK-BCI system was then implemented on a real robotic arm, the designed HRA. Four online trials were conducted for each class using each of the three signal types separately, with each class represented by one desired position that the user intended to reach. Since the IK-BCI can classify up to four EEG classes, four positions were selected and assigned, one per class, as the desired points to be reached by the HRA, as listed in Table II.
TABLE II
DESIRED POSITION PER CLASS.

Class | Assigned orientation | Assigned position | Point name
1     | (−168, 50, −152)     | [20, −20, −50]ᵀ   | PA
2     | (−138, 65, −124)     | [50, −20, −30]ᵀ   | PB
3     | (0, 90, 0)           | [64, 0, 0]ᵀ       | PC
4     | (180, 31, 180)       | [15, 0, −60]ᵀ     | PD
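In operation, the predicted EEG class selects one of the four targets of Table II, and that target is then handed to the IK stage. A minimal Python sketch of this lookup step; the `solve_ik` name mentioned in the comment stands in for the paper's analytical 5-DOF solver and is hypothetical:

```python
# Desired end-effector targets per predicted EEG class (values from Table II).
# Orientations are in degrees; positions are in the arm's workspace units.
TARGETS = {
    1: {"orientation": (-168, 50, -152), "position": (20, -20, -50), "name": "PA"},
    2: {"orientation": (-138, 65, -124), "position": (50, -20, -30), "name": "PB"},
    3: {"orientation": (0, 90, 0),       "position": (64, 0, 0),     "name": "PC"},
    4: {"orientation": (180, 31, 180),   "position": (15, 0, -60),   "name": "PD"},
}

def target_for_class(predicted_class):
    """Map a classifier output (1-4) to the desired point for the IK stage."""
    if predicted_class not in TARGETS:
        raise ValueError(f"unsupported EEG class: {predicted_class}")
    return TARGETS[predicted_class]

# Example: class 3 selects PC at [64, 0, 0]^T, which would then be passed to
# the analytical IK solver, e.g. solve_ik(target["position"]) (hypothetical).
target = target_for_class(3)
print(target["name"], target["position"])
```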
Fig. 12 displays the projection of these points on the real workspace area, with the four desired positions depicted as red dots. The results of the IK-BCI implementation are illustrated in Fig. 13, which shows the signal type used and the position achieved.
Fig. 12. Preview of the four desired positions projected on the workspace as red dots.
Fig. 13. Successful and unsuccessful online trials to reach the desired positions using three different EEG signals.
(a) Reaching PA, (b) Reaching PB, (c) Reaching PC, (d) Reaching PD.
For each signal type, the IK-BCI system was trained, and four online trials were implemented with each trained system to make the HRA attempt to reach the desired position. Fig. 13(a) presents the user's attempts to reach position PA: in the first trial, the user succeeded in making the HRA reach the point using the MI+ME signal, but failed using four limbs MI and blinking. As can be seen, using the MI signal the HRA reached PB, while using blinking it reached PD, both wrong positions. In the second trial, both MI+ME and MI succeeded in reaching PA, while the blinking signal still led to PD. No signal achieved the desired position in trial three and, finally, only the four limbs MI signal succeeded in reaching PA in the fourth trial. Fig. 13(b) represents the four trials to reach position PB. It is clear from the plot that the eyes-blinking signal succeeded in the 1st trial, while the right hand MI+ME signal succeeded in both the 2nd and 4th trials. Fig. 13(c) displays the results of the attempts to bring the HRA to point PC. The four limbs MI signal successfully achieved the user's aim of reaching PC in the 1st and 2nd trials, while the blinking signal was the only successful one in the 3rd and 4th trials. In Fig. 13(d), the first trial saw the four limbs MI signal succeed in reaching PD, while the second trial was successful with the right hand MI+ME signal. No signal succeeded in the third trial, while only the eye-blinking and MI+ME signals succeeded in the fourth trial. To sum up, the right hand MI+ME signal succeeded seven times, the four limbs MI signal five times, and the eyes-blinking signal four times. The error trend could improve if the trials were done in sequence for each signal type separately and the number of online trials were increased. Such improvement could be attributed to the user gaining experience in generating the required EEG pattern. However, the experiment was conducted as one trial per signal type in turn, which introduced temporal separation between one trial and the next of the same signal kind.
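The success counts quoted above come from tallying, per signal type, how many of the sixteen online trials (four positions, four trials each) reached the intended point. A small sketch of that bookkeeping, with an illustrative (hypothetical) outcome log rather than the exact per-trial record plotted in Fig. 13:

```python
from collections import Counter

# Illustrative outcome log: (signal, position, trial_no, reached_target).
# These entries are hypothetical; the real record is the one shown in Fig. 13.
log = [
    ("MI+ME", "PA", 1, True), ("MI", "PA", 1, False), ("blink", "PA", 1, False),
    ("MI+ME", "PB", 2, True), ("MI", "PC", 1, True),  ("blink", "PB", 1, True),
]

# Count successful reaches per signal type.
successes = Counter(signal for signal, _, _, ok in log if ok)
print(dict(successes))
# {'MI+ME': 2, 'MI': 1, 'blink': 1}
```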
IV. CONCLUSION
An inverse kinematics based brain computer interface was practically implemented to control a 5-DOF humanoid robotic arm in online mode. Each class of EEG signal can be represented by a desired position to be achieved by the HRA's EE; hence, four EEG classes can represent four positions. Results showed that the right hand MI+ME signal achieved the highest average classification accuracy of 76.95%, while 64.16% and 56.66% were achieved using the four limbs MI and eyes-blinking signals, respectively. Such accuracy could improve with increased user experience. Online implementation of the proposed IK-BCI was conducted to reach four desired positions, with four trials per position. For each of the three EEG signals, four online trials were conducted for each EEG class (position) to control the HRA. The implementation results showed that the MI+ME signal succeeded in reaching all four positions, while the four limbs MI and eyes-blinking signals achieved only three of them. The total number of successes across all positions was seven for the MI+ME signal, five for four limbs MI, and four for eyes blinking. The appropriate signal type depends on the application's needs and the user's health condition. For healthy persons, the right hand MI+ME signal might be appropriate for industrial applications. For people with upper-limb disability, four limbs MI or eyes blinking might be the appropriate signals to use with the IK-BCI system in rehabilitation engineering applications.