steady-statemotionvisualevokedpotential(ssmvep) … · 2019. 2. 14. ·...

15
Research Article Steady-StateMotionVisualEvokedPotential(SSMVEP) EnhancementMethodBasedonTime-FrequencyImageFusion WenqiangYan , 1,2 GuanghuaXu , 1,2 LongtingChen, 1,2 andXiaoweiZheng 1,2 1 School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an, China 2 State Key Laboratory for Manufacturing Systems Engineering, Xi’an Jiaotong University, Xi’an, China Correspondence should be addressed to Guanghua Xu; [email protected] Received 14 February 2019; Revised 1 April 2019; Accepted 7 April 2019; Published 21 May 2019 Guest Editor: Zoran Peric Copyright © 2019 Wenqiang Yan et al. is is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. e steady-state motion visual evoked potential (SSMVEP) collected from the scalp suffers from strong noise and is contaminated by artifacts such as the electrooculogram (EOG) and the electromyogram (EMG). Spatial filtering methods can fuse the in- formation of different brain regions, which is beneficial for the enhancement of the active components of the SSMVEP. Traditional spatial filtering methods fuse electroencephalogram (EEG) in the time domain. Based on the idea of image fusion, this study proposed an SSMVEP enhancement method based on time-frequency (T-F) image fusion. e purpose is to fuse SSMVEP in the T-F domain and improve the enhancement effect of the traditional spatial filtering method on SSMVEP active components. Firstly, two electrode signals were transformed from the time domain to the T-F domain via short-time Fourier transform (STFT). e transformed T-F signals can be regarded as T-F images. en, two T-F images were decomposed via two-dimensional multiscale wavelet decomposition, and both the high-frequency coefficients and low-frequency coefficients of the wavelet were fused by specific fusion rules. e two images were fused into one image via two-dimensional wavelet reconstruction. e fused image was subjected to mean filtering, and finally, the fused time-domain signal was obtained by inverse STFT (ISTFT). e experimental results show that the proposed method has better enhancement effect on SSMVEP active components than the traditional spatial filtering methods. is study indicates that it is feasible to fuse SSMVEP in the T-F domain, which provides a new idea for SSMVEP analysis. 1.Introduction To improve the comfort of light-flashing stimulation, we proposed a steady-state motion visual evoked potential (SSMVEP) method to replace light-flashing stimulation with motion stimulation in the previous study [1]. In this study, the SSMVEP signal was selected as a research object. e SSMVEP collected from the scalp will be contaminated by a variety of artifacts, such as the electrooculogram (EOG) and electromyogram (EMG). Consequently, the requirements for subsequent signal processing are high. In electroen- cephalogram (EEG) signal processing, using multichannel EEG is beneficial and effective [2, 3]. In the EEG literature, the method for linearly fusing the multilead signals into single-channel or multichannel signals is called spatial fil- tering. Spatial filtering combines the EEG information of different brain regions, which enhances the active compo- nents of the EEG [4, 5]. At present, the main methods used in EEG spatial fil- tering are average fusion, native fusion, bipolar fusion, Laplacian fusion, common average reference (CAR) fusion, canonical correlation analysis (CCA) fusion, minimum energy fusion, and maximum contrast fusion [4–6]. Average fusion is obtained by averaging all electrode signals, and a bipolar fusion signal is obtained by subtracting the two electrode signals [7, 8]. It is called native fusion if only one of the electrode signals is analyzed. Laplacian fusion is a centered electrode signal minus the sum of four electrodes at the same distance from their surroundings. CAR fusion is another commonly used EEG spatial filtering method that enhances the signal-to-noise ratio (SNR) of the selected electrode signal by subtracting the mean of all electrode Hindawi Computational Intelligence and Neuroscience Volume 2019, Article ID 9439407, 14 pages https://doi.org/10.1155/2019/9439407

Upload: others

Post on 24-Jun-2021

1 views

Category:

Documents


0 download

TRANSCRIPT

Page 1: Steady-StateMotionVisualEvokedPotential(SSMVEP) … · 2019. 2. 14. · wasconductedbyMATLAB’stfrstftfunction.Afterfusing two T-F images, the T-F signal was inversely transformed

Research ArticleSteady-State Motion Visual Evoked Potential (SSMVEP)Enhancement Method Based on Time-Frequency Image Fusion

Wenqiang Yan 12 Guanghua Xu 12 Longting Chen12 and Xiaowei Zheng 12

1School of Mechanical Engineering Xirsquoan Jiaotong University Xirsquoan China2State Key Laboratory for Manufacturing Systems Engineering Xirsquoan Jiaotong University Xirsquoan China

Correspondence should be addressed to Guanghua Xu ghxuxjtueducn

Received 14 February 2019 Revised 1 April 2019 Accepted 7 April 2019 Published 21 May 2019

Guest Editor Zoran Peric

Copyright copy 2019 Wenqiang Yan et al is is an open access article distributed under the Creative Commons AttributionLicense which permits unrestricted use distribution and reproduction in any medium provided the original work isproperly cited

e steady-state motion visual evoked potential (SSMVEP) collected from the scalp suffers from strong noise and is contaminatedby artifacts such as the electrooculogram (EOG) and the electromyogram (EMG) Spatial filtering methods can fuse the in-formation of different brain regions which is beneficial for the enhancement of the active components of the SSMVEP Traditionalspatial filtering methods fuse electroencephalogram (EEG) in the time domain Based on the idea of image fusion this studyproposed an SSMVEP enhancement method based on time-frequency (T-F) image fusion e purpose is to fuse SSMVEP in theT-F domain and improve the enhancement effect of the traditional spatial filtering method on SSMVEP active componentsFirstly two electrode signals were transformed from the time domain to the T-F domain via short-time Fourier transform (STFT)e transformed T-F signals can be regarded as T-F images en two T-F images were decomposed via two-dimensionalmultiscale wavelet decomposition and both the high-frequency coefficients and low-frequency coefficients of the wavelet werefused by specific fusion rules e two images were fused into one image via two-dimensional wavelet reconstruction e fusedimage was subjected to mean filtering and finally the fused time-domain signal was obtained by inverse STFT (ISTFT) eexperimental results show that the proposed method has better enhancement effect on SSMVEP active components than thetraditional spatial filtering methods is study indicates that it is feasible to fuse SSMVEP in the T-F domain which provides anew idea for SSMVEP analysis

1 Introduction

To improve the comfort of light-flashing stimulation weproposed a steady-state motion visual evoked potential(SSMVEP) method to replace light-flashing stimulation withmotion stimulation in the previous study [1] In this studythe SSMVEP signal was selected as a research object eSSMVEP collected from the scalp will be contaminated by avariety of artifacts such as the electrooculogram (EOG) andelectromyogram (EMG) Consequently the requirementsfor subsequent signal processing are high In electroen-cephalogram (EEG) signal processing using multichannelEEG is beneficial and effective [2 3] In the EEG literaturethe method for linearly fusing the multilead signals intosingle-channel or multichannel signals is called spatial fil-tering Spatial filtering combines the EEG information of

different brain regions which enhances the active compo-nents of the EEG [4 5]

At present the main methods used in EEG spatial fil-tering are average fusion native fusion bipolar fusionLaplacian fusion common average reference (CAR) fusioncanonical correlation analysis (CCA) fusion minimumenergy fusion and maximum contrast fusion [4ndash6] Averagefusion is obtained by averaging all electrode signals and abipolar fusion signal is obtained by subtracting the twoelectrode signals [7 8] It is called native fusion if only one ofthe electrode signals is analyzed Laplacian fusion is acentered electrode signal minus the sum of four electrodes atthe same distance from their surroundings CAR fusion isanother commonly used EEG spatial filtering method thatenhances the signal-to-noise ratio (SNR) of the selectedelectrode signal by subtracting the mean of all electrode

HindawiComputational Intelligence and NeuroscienceVolume 2019 Article ID 9439407 14 pageshttpsdoiorg10115520199439407

signals from the selected electrode signal Dennis et al studiedthe enhancement effects of standard ear reference CAR fu-sion and Laplacian fusion on the active components of EEGe results show that the Laplacian fusion achieved the bestfusion effects on EEG active components [9] Friman et alproposed theminimum energy fusion andmaximum contrastfusion methods [5] e minimum energy fusion wasachieved by minimizing the noise energy and the maximumcontrast fusion was achieved by maximizing the EEG SNR Inthe study Friman et al compared the effects of average fusionnative fusion bipolar fusion Laplacian fusion minimumenergy fusion and maximum contrast fusion on steady-statevisual evoked potential (SSVEP) signals Among these theenhancement effect of average fusion was worst and theenhancement effect of minimum energy fusion was best CCAfusion is used to study the linear relationship between twogroups of multidimensional variables and is also a commonlyused spatial filtering method [6]

Since Friman et al demonstrated that minimum energyfusion and maximum contrast fusion had better fusion ef-fects and did not compare minimum energy fusion andmaximum contrast fusion with CCA fusion this study onlycompared the enhancement effects of minimum energyfusion maximum contrast fusion CCA fusion and theproposed method on the active components of the SSMVEPMinimum energy fusion maximum contrast fusion andCCA fusion are used to analyze multidimensional EEGsignals in the time domain is study aims to investigatewhether electrode signals can be fused in the time-frequency(T-F) domainemethod of image fusion can fuse multipleimages into one thus improving the image quality [10ndash12]is study transformed EEG from the time domain to theT-F domain and analyzed multidimensional EEG with theidea of image fusion to improve the enhancement effect ofexisting spatial filtering methods on SSMVEP active com-ponents Firstly two electrode signals were transformedfrom the time domain to the T-F domain via short-timeFourier transform (STFT) en two T-F images weredecomposed by two-dimensional multiscale wavelet de-composition and both the high-frequency coefficients andlow-frequency coefficients of the wavelet were fused byspecific fusion rules After two-dimensional wavelet re-construction two images could be fused into one imageefused image was filtered via mean filter and the fused time-domain signal could be obtained by inverse STFT (ISTFT)e experimental results show that the proposed methodbased on T-F image fusion can enhance the SSMVEP betterthan the traditional time-domain analysis method

2 Methods

21 Subjects Six males and four females (20ndash28 yearsold) were recruited as subjects for this study e partici-pants were healthy and had normal colour and visualperception

22 Experimental Equipment gUSBamp (gtec Austria)was used to collect the EEG e sampling frequency of the

equipment is 1200Hz A gUSBamp EEG amplifier andgGAMMAbox active electrode system were combined toform the experimental platform Prior to the experiment thereference electrode was placed at the left ear of the subjectsand the ground electrode Fpz was placed at the foreheadEEGs were collected from the following six channels PO7Oz PO8 PO3 POz and PO4

23 Experimental Step A checkerboard of radialcontraction-expansion motion was used as a stimulationparadigm and the details of the paradigm can be found inReference [1] e stimulus paradigms were presented ona screen at a refresh rate of 144 Hz and the subjects werepositioned 06ndash08m from the screen e experimentincluded six blocks each containing 20 trials corre-sponding to all 20 targets e stimuli were arranged in a5times4 matrix and the horizontal and vertical intervalsbetween two neighboring stimuli were 4 cm and 3 cmrespectively e stimulus frequencies were 7ndash108 Hzwith a frequency interval of 02 Hz and the radius of thestimuli was 60 pixels Each trial lasted 5 s separated by aninterval of 2 s Between two blocks the subjects wereallowed to rest properly Twenty targets were simulta-neously presented on the screen and numbered 1 to 20Before the stimulus began one of the 20 serial numbersappeared below the corresponding target indicating thefocus target

24 Preprocessing of EEG Data e corresponding EEGdata segments were extracted in accordance with the trialstart and end times e MATLAB library function detrendwas used to remove the linear trends for each channelChebyshev bandpass filtering of 05ndash50Hz was used to re-move low-frequency drifts and high-frequency interferences

25 Significance Test e data were expressed as meanvalues A paired-sample t-test was used to determine thesignificance Statistical significance was defined as plt 005

26 Spatial Filtering Methods Minimum Energy FusionMaximum Contrast Fusion and CCA Fusion For EEG dataY recorded frommultiple electrode leads the spatial filteringmethod obtains new data by linearly summing each of theelectrode signals with spatial filter coefficientsW A spatiallyfiltered signal can be expressed as

Z WTY (1)

e spatial filter coefficients W for minimum energyfusion maximum contrast fusion and CCA fusion aredescribed below

261 Minimum Energy Fusion For a visual stimulus fre-quency f the SSMVEP signal recorded by the ith electrodecan be expressed as follows

2 Computational Intelligence and Neuroscience

yi 1113944n

j1aij sin 2jπft + ϕij1113872 1113873 + ei(t) (2)

where n represents the number of harmonics and aij and ϕij

represent the amplitude and phase of the jth harmoniccomponent respectively e model decomposes the signalinto the sum of the SSMVEP induced by the visual stimulusand the noise ei(t) generated by the electromyogram (EMG)electrooculogram (EOG) and other components usequation (2) can be expressed as

yi Si + ei (3)

where

Si aiX

X X1 X2 Xn1113858 1113859(4)

e submatrix Xn is composed of sin(2nπft) andcos(2nπft) and ai represents the amplitude of SSMVEP at itsstimulus frequency and harmonics We assume that thesignal matrix recorded by N electrodes is Y [Y1 Y2 YN]where each column corresponds to an electrode channel Ycan be projected to the SSMVEP space through a projectionmatrix Q namely

S QY (5)

Reference [13] gives the solution of Q as

Q X XTX1113872 1113873minus1

XT

(6)

us the noise signal can be expressed as

Yprime YminusQY (7)

e minimum energy fusion obtains the spatial filtercoefficients W by minimizing the noise energy namely

minW

YprimeW

2

minW

WTYprimeYprimeW (8)

e above minimization problem can be obtained bydecomposing the eigenvalues of the positive definite matrixYprime

TYprime After decomposition N eigenvalues and corre-

sponding eigenvectors can be obtained e spatial filtercoefficient W is the eigenvector corresponding to thesmallest eigenvalue (MATLAB code for minimum energyfusion and test data is provided in Supplementary Materials(available here))

262 Maximum Contrast Fusion e goal of maximumcontrast fusion is to maximize SSMVEP energy whileminimizing noise energy e SSMVEP energy can be ap-proximated as WTYTYW erefore the maximum contrastfusion can be achieved by the following maximizingequation

maxW

YW2

YprimeW

2 max

W

WTYTYW

WTYprimeTYprimeW (9)

Here Y and Yprime are the same as in Section 261 Equation (9)can be solved by generalized eigenvalue decomposition of

YTY and YprimeTYprime e spatial filter coefficient W is the ei-

genvector corresponding to the largest eigenvalue (MAT-LAB code for maximum contrast fusion and test data isprovided in Supplementary Materials (available here))

263 Canonical Correlation Analysis Fusion CCA fusion isused to study the linear relationship between two groups ofmultidimensional variables Using two sets of signals Y andX the goal is to find two linear projection vectors wY and wX

so that the two groups of linear combination signals wTYY

and wTXX have the largest correlation coefficients

maxwYwX

p E wT

YYXTwX( 1113857

E wTYYYTwY( 1113857

1113969

E wTXXXTwX( 1113857

(10)

e reference signals were constructed at the stimulationfrequency fd

X

cos 2πfdt( 1113857

sin 2πfdt( 1113857

cos 2kπfdt( 1113857

sin 2kπfdt( 1113857

⎧⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎨

⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎩

⎫⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎬

⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎭

t 1fs

l

fs

(11)

where k is the number of harmonics which is dependent onthe number of frequency harmonics existing in SSMVEP fsis the sampling rate and l represents the number of samplepoints e optimization problem of equation (10) can betransformed into the eigenvalue decomposition problemand the spatial filter coefficient W is the eigenvector cor-responding to the largest eigenvalue (MATLAB code forCCA fusion and test data is provided in SupplementaryMaterials (available here))

27 Two-Dimensional T-F Image Representation and Re-construction of EEG Signals e T-F analysis can transformthe signal from the time domain to the T-F domain andthen the T-F domain signal can be regarded as an image foranalysis In this study T-F transform was performed on thetime-domain EEG signal using STFT Let h(t) be a timewindow function with center at τ a height of 1 and a finitewidth e portion of the signal x(t) observed by h(tminus τ) isx(t)h(t minus τ) e STFT is generated by the fast Fouriertransform (FFT) of the windowed signal x(t)h(t minus τ)

STFTx(τ f) 1113946+infin

minusinfinx(t)h(tminus τ)e

minusj2πftdt (12)

Equation (12) can map the signal x(t) onto the two-dimensional T-F plane (τ f) and f can be regarded as thefrequency in the STFT After fusing multiple EEG T-Fimages the fused two-dimensional image signals need to betransformed into one-dimensional time-domain signals forthe subsequent analysis e STFT has an inverse transformthat transforms the T-F domain signal into a time-domainsignal

Computational Intelligence and Neuroscience 3

x(t) 1113946+infin

minusinfin1113946

+infin

minusinfinSTFT(τ f)h(tminus τ)e

j2πftdf dτ (13)

In this study the T-F transform of the SSMVEP signalwas conducted by MATLABrsquos tfrstft function After fusingtwo T-F images the T-F signal was inversely transformedinto a one-dimensional time-domain signal by MATLABrsquostfristft function

28 T-F Image Fusion Based on Two-Dimensional MultiscaleWavelet Transform Among the many image fusion tech-nologies two-dimensional multiscale wavelet transform-based image fusion methods have become a research hot-spot [14ndash16] Let f(x y) denote a two-dimensional signaland x and y are its abscissa and ordinate respectively Letψ(x y) denote a two-dimensional basic wavelet andψab1 b2

(x y) denote the scale expansion and two-dimensional displacement of ψ(x y)

ψab1 b2(x y)

1aψ

xminus b1

ayminus b2

a1113888 1113889 (14)

en the two-dimensional continuous wavelet trans-form is

WTf a b1 b2( 1113857 1a

1113946 1113946 f(x y)ψxminus b1

ayminus b2

a1113888 1113889dx dy

(15)

e factor 1a in equation (15) is a normalization factorthat has been introduced to ensure that the energy remainsunchanged before and after wavelet expansion e two-dimensional multiscale wavelet decomposition and re-construction process is shown in Figure 1 e de-composition process can be described as follows First one-dimensional discrete wavelet decomposition is performedon each line of the image which obtains the low-frequencycomponent L and the high-frequency component H of theoriginal image in the horizontal direction e low-frequency component and high-frequency component arethe low-frequency wavelet coefficients and high-frequencywavelet coefficients obtained after wavelet decompositionrespectively en one-dimensional discrete wavelet de-composition is performed on each column of the trans-formed data to obtain the low-frequency components LL inthe horizontal and vertical directions the high-frequencycomponents LH in the horizontal and vertical directions thelow-frequency components HL in the horizontal and verticaldirections and the high-frequency component HH in thevertical direction e reconstruction process can be de-scribed as follows Firstly one-dimensional discrete waveletreconstruction is performed on each column of the trans-form result en one-dimensional discrete wavelet re-construction is performed on each row of the transformeddata to obtain the reconstructed image

e T-F image obtained by the STFT is decomposed intoN layers by the two-dimensional multiscale wavelet de-composition as shown in Figure 1 and 3N+ 1 frequencycomponents are obtained In this study the LLN

decomposed from the highest level is defined as the low-frequency component and the HLM LHM and HHM (M 12 N) decomposed from each level are defined as high-frequency components e active components of the EEGare mainly contained in the low-frequency component efusion process of two T-F images based on two-dimensionalmultiscale wavelet transform is shown in Figure 2 Firstlythe T-F images 1 and 2 are decomposed by two-dimensionalwavelet decomposition and then the corresponding low-frequency components and high-frequency components arefused according to the corresponding fusion rules Finallytwo-dimensional wavelet reconstruction is performed toobtain the fused image by fusing low-frequency componentsand high-frequency components In this study the two-dimensional wavelet decomposition was performed usingthe MATLABrsquos wavedec2 function and the two-dimensionalwavelet reconstruction was performed using the MATLABrsquoswaverec2 function

29 Image Mean Filtering For the current pixel to beprocessed a template is selected which is composed ofseveral pixels adjacent to the current pixel e method ofreplacing the value of the original pixel with the mean of thetemplate is called mean filtering Defining f(x y) as thepixel value at coordinates (x y) the current pixel g(x y)

can be calculated by

g(x y) 1D

1113944

D

i1f xi yi( 1113857 (16)

where D represents the square of the template size

210 Summary of the ProposedMethod for EEG EnhancementBased on T-F Image Fusion Here the T-F image fusionmethod of the electrode signals x1(t) and x2(t) is introducedand the fusion process is shown in Figure 3

(1) STFT is performed on the two electrode signals x1(t)and x2(t) to obtain T-F images 1 and 2

(2) e low-frequency components LLN1 and LLN2 andthe high-frequency components (HLM1 LHM1 andHHM1) and (HLM2 LHM2 and HHM2) (M 1 2 N) of two T-F images 1 and 2 are obtained byN-layertwo-dimensional multiscale wavelet decomposition

(3) e low-frequency components LLN1 and LLN2 andthe high-frequency components (HLM1 LHM1 andHHM1) and (HLM2 LHM2 and HHM2) (M 1 2 N) of each decomposition layer are fused accordingto the following fusion rules

(a) e low-frequency components of the T-F im-ages 1 and 2 are subtracted to obtain the low-frequency components of the fused image F

LLNF LLN1 minus LLN2 (17)

e low-frequency components represent the activecomponents of the SSMVEP and the subtraction of the

4 Computational Intelligence and Neuroscience

T-F image

Row decomposition

Rowreconstruction

L H

Columndecomposition

Columnreconstruction LH1

LL1 HL1

HH1

Secondarydecomposition

Secondaryreconstruction LH1 HH1

HL1LL2

LH2

HL2

HH2

Figure 1 e process of two-dimensional discrete wavelet decomposition and reconstruction

T-F image 1 T-F image 2

Wavelet decomposition Wavelet decomposition

Low-frequency component

Fusion principle 1 Fusion principle 2

Two-dimensional wavelet reconstruction

Fusion image

High-frequency component

Low-frequency component

High-frequency component

Figure 2 Image fusion process based on two-dimensional wavelet transform

Raw signal 1 Raw signal 2

STFT STFT

Time (s) Time (s)

Freq

uenc

y (H

z)

Freq

uenc

y (H

z)

2D wavelet decomposition

Low-frequency component fusion

High-frequency component fusion

2D wavelet reconstruction

Time (s)

Freq

uenc

y (H

z)

Mean filtering

Time (s)

Freq

uenc

y (H

z)

ISTFT

Fused signal

Figure 3 e fusion process of two-electrode signal T-F images

Computational Intelligence and Neuroscience 5

low-frequency components can remove the commonnoise in the electrode signals

(b) e magnitudes of the high-frequency components(HLM1 LHM1 andHHM1) and (HLM2 LHM2 andHHM2) (M 1 2 N) of each decomposition

layer are obtained for the T-F images 1 and 2 ehigh-frequency component with a large value isused as the high-frequency component of the fusedimage F

HLiMF

HLiM1 real HLi

M11113872 1113873gt real HLiM21113872 1113873

HLiM2 real HLi

M11113872 1113873lt real HLiM21113872 1113873

⎧⎪⎨

⎪⎩

LHiMF

LHiM1 real LHi

M11113872 1113873gt real LHiM21113872 1113873

LHiM2 real LHi

M11113872 1113873lt real LHiM21113872 1113873

⎧⎪⎨

⎪⎩

HHiMF

HHiM1 real HHi

M11113872 1113873gt real HHiM21113872 1113873

HHiM2 real HHi

M11113872 1113873lt real HHiM21113872 1113873

⎧⎪⎨

⎪⎩

⎧⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎨

⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎩

M 1 2 N (18)

e two-dimensional signal after the STFT is acomplex signal the two-dimensional signal aftertwo-dimensional wavelet decomposition is still acomplex signal e value used for the comparison isthe real part of the complex signal Here i representsthe number of frequency components of the Mdecomposition layer (ie the number of waveletcoefficients)

(4) According to both low-frequency components LLNFand high-frequency components (HLMF LHMF andHHMF) (M 1 2 N) of the fused image F thefused image F is generated by two-dimensionalwavelet reconstruction

(5) Mean filtering is performed on the fused image F(6) An ISTFT is performed on the mean filtered image to

generate a fused time-domain signal (MATLAB codefor T-F image fusion and test data is provided inSupplementary Materials (available here))

3 Results

31 Parameter Selection

311 STFT Parameters In this study the EEG time-domainsignals were subjected to STFT and ISTFT using theMATLAB library functions tfrstft and tfristfte parametersthat need to be set here are the number of frequency bins andthe frequency smoothing window If the number of fre-quency bins is too large the calculation speed of the entireprocess will be low In this study the value of this parametercan be set to [32 54 76 98120] Since the Fourier transform ofthe Gaussian function is also a Gaussian function thewindow function of the optimal time localization of theSTFT is a Gaussian function e EEG is a nonstationarysignal [17] and requires a high time resolution of the windowfunction that is the window length should not be too longIn this paper a Gaussian window was selected and the valueof the window length can be set to [37 47 57 67 77 87 97]

312 Two-Dimensional Multiscale Wavelet TransformParameters In this study the MATLAB library functionswavedec2 and waverec2 were used to perform two-dimensional multiscale wavelet decomposition and re-construction on T-F images Here the number of layers ofwavelet decomposition and the wavelet basis functions needto be set e sampling frequency of the EEG acquisitiondevice used in this paper was 1200Hz and the stimulationfrequency of the SSMVEP was 7ndash108Hz Considering thatthe SSMVEP may contain the second harmonic of thestimulation frequency this study sets the number of layers ofwavelet decomposition to 4 e dbN wavelet has good reg-ularity and was chosen as the wavelet basis function evanishing momentN of the wavelet function can be set to [2 35 7 10 13]

313 Mean Filtering Parameters Here the template size ofthemean filtering needs to be set In this study the value of thetemplate size can be set to [2 4 6 8 10 12 14 16 18 20 22 24 26]

314 Low-Frequency Component of the Fused Image Fe values of LLNF taking LLN1minusLLN2 or LLN2minusLLN1 willaffect the experimental results and the values of LLNF needto be determined to optimize the final fusion effect (seedetails in Section 210 (3))

315 Parameter Selection e T-F image fusion methodproposed in this study requires the setting of the followingfive parameters the number of frequency bins frequencysmoothing window vanishing moments of dbN waveletfunctions mean filter template size and the value of LLNFIn this study the grid search method was used to traverse thecombination of all parameters to find the best combinationof parameters that can enhance the SSMVEP active com-ponents In this study six rounds of experiments wereconducted e first three rounds of experimental data wereused to select the optimal combination of parameters for T-F

6 Computational Intelligence and Neuroscience

image fusion According to the selected combination of pa-rameters the last three rounds of experimental data were usedto compare the enhancement effect of SSMVEP active com-ponents by the method of minimum energy fusion maximumcontrast fusion CCA fusion and T-F image fusion edisadvantage of grid search is that the amount of calculation islarge however considering the difficulty of superparameterselection grid search is still a more reliable method

32 Electrode Signal Selection for T-F Image Fusion e T-Fimage fusion method analyzes two electrode signals ustwo suitable electrodes need to be selected from multipleelectrodes e bipolar fusion signal is obtained by sub-tracting the two original EEG signals In equation (17) wesubtracted the wavelet low-frequency components of the twoT-F images to obtain the wavelet low-frequency componentsof the fused image e subtraction of the low-frequencycomponents removes the common noise in the electrodesignals which is similar to the principle of bipolar fusionerefore the two electrode signals with the best bipolarfusion effect were used in the T-F image fusion For SSVEPor SSMVEP stimulation the Oz electrode usually has thestrongest response [18] and is most commonly used toanalyze SSVEP or SSMVEP [19] e Oz electrode was usedas one of the analysis electrodes for bipolar fusion In thisstudy the Oz electrode was fused with the other five elec-trode signals so that the best combination of electrodes canbe selected When bipolar fusion is used the Oz electrode asa reducing electrode or a reduced electrode will not affect theexperimental results For example Oz-POz and POz-Ozhave the same fusion effect In this study the Oz elec-trode was used as the reduced electrode Twenty trials persubject were used for electrode selection e Welch powerspectrum analysis [20] was performed on the obtainedsignals of bipolar fusion for each subject e Gaussianwindow was used and the length of the window was 5 s eoverlapping was 256 and the number of discrete Fouriertransform (DFT) points was 6000 for the Welch periodo-gram 20 focused targets correspond to 20 stimuli fre-quencies and the frequencies corresponding to the highestpower spectrum amplitudes at the 20 frequencies weredetermined as the focused target frequency e high rec-ognition accuracy indicates that the amplitude of the powerspectrum at the stimulation frequency is prominent and itindicates the effectiveness of the fusion method for theenhancement of the SSMVEP active component Table 1shows the recognition accuracy of all 10 subjects under thebipolar fusion e Oz electrode was fused with differentelectrodes and obtained different fusion effects Moreoverdue to the differences among individuals the electrodecombination with the highest recognition accuracy persubject was also different e asterisk (lowast) marks the bipolarfusion electrode group with the highest recognition accuracyof each subject and the corresponding electrode combi-nation was used in T-F image fusion

33 Comparison of Enhancement Effects of Minimum EnergyFusion Maximum Contrast Fusion CCA Fusion and T-F

Image Fusion on SSMVEP Active Components T-F imagefusion used two channel signals and CCA fusion maximumcontrast fusion and minimum energy fusion used all sixchannel signals In this study the accuracy was used as theevaluation standard of the fusion effect e high onlineaccuracy indicates that the amplitude at stimulus frequencyis prominent which shows the effectiveness of the fusionmethod For T-F image fusion the Welch power spectrumanalysis (the parameters are the same as in Section 32) wasperformed on the obtained signal after the T-F image fusionand the frequency with the highest amplitude at twentystimulus frequencies was identified as the focused targetfrequency e minimum energy fusion maximum contrastfusion and CCA fusion require prior knowledge of thefrequency of the signal to be fused However the frequencyof the signal to be fused cannot be determined because theuserrsquos focused target cannot be determined beforehand Forthe current tested signal the minimum energy fusionmaximum contrast fusion and CCA fusion need to firstperform the fusion analysis at all stimulus frequencies andthen the frequency with the highest amplitude was iden-tified as the focused target frequency Take CCA fusion as anexample and assume that the current tested signal is Y First20 template signals are set according to equation (11) andthen 20 spatial filter coefficients are obtained according toequation (10) e tested signal is fused with 20 spatial filtercoefficients respectively e obtained 20 sets of vectors areseparately analyzed by the Welch power spectrum (theparameters are the same as in Section 32) to obtain theamplitude at the corresponding frequency Finally thefrequency with the highest amplitude is identified as thefocused target frequency e first three rounds of experi-mental data were used to select the parameters of the T-Fimage fusion (see details in Section 31) and the last threerounds of the experimental data were used to test the onlinefusion effects under the four methods Figures 4(a)ndash4(d)show the analysis results plotted using the data of subject 4 inthe fourth round of the experiment Figure 4(a) shows thepower spectrum results of the 20 targets under T-F imagefusion Figure 4(b) shows the power spectrum results of 20targets under CCA fusion Figure 4(c) shows the powerspectrum results of 20 targets under minimum energy

Table 1 Accuracies of 10 subjects under bipolar fusion

SubjectsBipolar fusion

Oz-PO7 Oz-PO8 Oz-PO3 Oz-POz Oz-PO4S1 02 07 045 015 095lowastS2 025 0 03lowast 01 02S3 095lowast 085 085 04 08S4 08lowast 01 07 035 035S5 02lowast 015 015 015 01S6 095lowast 03 095lowast 085 07S7 02 015 025 06lowast 055S8 01 005 025 03 09lowastS9 03 015 0 035lowast 02S10 035 035 05 08lowast 04lowaste bipolar fusion electrode group with the highest recognition accuracyof each subject

Computational Intelligence and Neuroscience 7

times10ndash7

0

05

1f = 72 Hz

times10ndash7

0

1

2f = 76 Hz

times10ndash7

0

1

2f = 78 Hz

0

05

1times10ndash7 f = 7 Hz

times10ndash7

0

1

2f = 74 Hz

times10ndash7

0

05

1f = 92 Hz

times10ndash7

0

2

4f = 96 Hz

times10ndash7

0

1

2f = 98 Hz

times10ndash7

0

1

2f = 9 Hz

times10ndash7

0

2

4f = 94 Hz

times10ndash7

0

05

1f = 102 Hz

times10ndash7

0

1

2f = 106 Hz

times10ndash7

0

1

2f = 108 Hz

times10ndash7

0

1

2f = 10 Hz

times10ndash7

0

1

2f = 104 Hz

times10ndash7

0

2

4f = 82 Hz

times10ndash7

0

2

4f = 86 Hz

times10ndash7

0

2

4f = 88 Hz

times10ndash7

0

1

2f = 8 Hz

times10ndash7

0

05

1 f = 84 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(a)

0

2

4times10ndash13 f = 7 Hz

times10ndash13

0

5f = 72 Hz

times10ndash12

0

05

1f = 74 Hz

times10ndash12

0

05

1f = 76 Hz

times10ndash13

0

5f = 78 Hz

times10ndash13

0

2

4f = 9 Hz

times10ndash13

0

5f = 92 Hz

times10ndash12

0

1

2f = 94 Hz

times10ndash12

0

05

1f = 96 Hz

times10ndash13

0

2

4f = 98 Hz

times10ndash13

0

2

4f = 10 Hz

times10ndash12

0

05

1f = 102 Hz

times10ndash12

0

05

1f = 104 Hz

times10ndash12

0

05

1f = 106 Hz

times10ndash13

0

5f = 108 Hz

times10ndash13

0

2

4f = 8 Hz

times10ndash12

0

05

1f = 82 Hz

times10ndash12

0

1

2f = 84 Hz

times10ndash12

0

05

1f = 86 Hz

times10ndash13

0

5f = 88 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(b)

Figure 4 Continued

8 Computational Intelligence and Neuroscience

times10ndash13

0

05

1f = 7 Hz

times10ndash13

0

1

2f = 74 Hz

times10ndash13

0

05

1f = 72 Hz

times10ndash13

0

1

2f = 76 Hz

times10ndash14

0

2

4f = 78 Hz

times10ndash13

0

05

1f = 9 Hz

times10ndash13

0

1

2f = 94 Hz

times10ndash13

0

1

2f = 92 Hz

times10ndash13

0

05

1f = 96 Hz

times10ndash13

0

05

1f = 98 Hz

times10ndash13

0

05

1f = 10 Hz

times10ndash13

0

05

1f = 104 Hz

times10ndash13

0

05

1f = 102 Hz

times10ndash13

0

05

1f = 106 Hz

times10ndash13

0

1

2f = 108 Hz

times10ndash13

0

05

1f = 8 Hz

times10ndash13

0

1

2f = 84 Hz

times10ndash13

0

1

2f = 82 Hz

times10ndash13

0

05

1f = 86 Hz

times10ndash13

0

05

1f = 88 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(c)

times10ndash4

0

05

1f = 72 Hz

times10ndash4

0

1

2f = 74 Hz

times10ndash4

0

1

2f = 76 Hz

times10ndash4

0

05

1f = 78 Hz

times10ndash4

0

05

1f = 7 Hz

times10ndash4

0

05

1f = 92 Hz

times10ndash4

0

2

4f = 94 Hz

times10ndash4

0

1

2f = 96 Hz

times10ndash5

0

5f = 98 Hz

times10ndash5

0

2

4f = 9 Hz

times10ndash4

0

1

2f = 102 Hz

times10ndash4

0

05

1f = 104 Hz

times10ndash4

0

1

2f = 106 Hz

times10ndash4

0

05

1f = 108 Hz

times10ndash4

0

05

1f = 10 Hz

times10ndash4

0

1

2f = 82 Hz

times10ndash4

0

2

4f = 84 Hz

times10ndash4

0

1

2f = 86 Hz

times10ndash4

0

05

1f = 88 Hz

times10ndash4

0

05

1f = 8 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(d)

Figure 4 e power spectrum results of 20 targets under (a) T-F image fusion (b) CCA fusion (c) minimum energy fusion and (d)maximum contrast fusion (the discrete sequence frequency values from left to right are 7 8 910 72 82 92 102 74 84 94 104 76 86 96 10678 88 98 108 Hz respectively)

Computational Intelligence and Neuroscience 9

fusion and Figure 4(d) shows the power spectrum results of20 targets under maximum contrast fusion In the stalk plotthe discrete sequence frequency values from left to right are7 8 9 10 72 82 92 102 74 84 94 104 76 86 96 106 78 8898 108 respectively where f represents the stimulationfrequency of the corresponding target and the green dotmarks the amplitude of the corresponding target stimulusfrequency e red flag f represents the target was mis-identified and the blue flag f represents the target wascorrectly identified Figures 4(a)ndash4(d) show that the T-Fimage fusion method has the best online performance andthe minimum energy fusion method has the worst onlineperformance Maximum contrast fusion and CCA fusionhave similar fusion effects

In online analysis the power spectrum amplitudes of thetest signal at all stimulus frequencies were calculated andthen the frequency corresponding to the maximum am-plitude was identified as the focused target frequency Errorrecognition occurs when the amplitude at the focused targetfrequency is lower than the amplitudes at the nonfocusedtarget frequencieserefore the power spectrum amplitudeat the focused target frequency can be regarded as the activecomponent of the signal and the power spectrum ampli-tudes at the remaining nonfocused target frequencies can beregarded as noise e SNR in this study was defined as theratio of the power spectrum amplitude at the focused targetfrequency to the mean of the power spectrum amplitudes atthe remaining nonfocused target frequencies e SNRs often subjects at all stimulus frequencies were superimposedand averaged and the experimental results are shown inFigure 5 It can be seen from Figure 5 that the T-F imagefusion method obtained the highest SNR which indicatesthat the T-F image fusion method proposed in this study caneffectively enhance the SSMVEP SNR

e online accuracies of each subject in the three roundsof experiments were superimposed and averaged e ex-perimental results are shown in Figure 6 It can be seen thatmost subjects obtained highest online accuracy under theT-F image fusion and some subjects obtained highest onlineaccuracy under CCA fusion and maximum contrast fusionOnly subject 9 obtained the highest online accuracy underthe minimum energy fusion e online accuracies of all thesubjects were superimposed and averaged e experimentalresults are shown in Figure 7 e experimental results showthat the online performance of the T-F image fusion methodis better e online accuracy of the T-F image fusionmethod is 617 606 and 3050 higher than that of theCCA fusion maximum contrast fusion and the minimumenergy fusion respectively e paired t-test shows signifi-cant differences between the accuracies of T-F image fusionand minimum energy fusion (p 00174) Online accuracyanalysis results show that the proposedmethod is better thanthe traditional time-domain fusion method

4 Discussion

e SSMVEP collected from the scalp contains a lot of noiseand requires an effective signal processing method to en-hance the active components of SSMVEP Spatial filtering

methods utilize EEG information of multiple channels andexert positive significance to enhance the active componentsof the SSMVEP At present the commonly used spatialfiltering methods include average fusion native fusion bi-polar fusion Laplacian fusion CAR fusion CCA fusionminimum energy fusion and maximum contrast fusionefirst five spatial filtering methods can only process thepreselected electrode signals Minimum energy fusionmaximum contrast fusion and CCA fusion improve theabove problem and they can be used to fuse any number ofelectrode signals with better fusion effects Minimum energyfusion maximum contrast fusion and CCA fusion fuseSSMVEP in the time domainis study first put forward theidea of fusing EEG in the T-F domain and proposed a T-Fimage fusion method We compared the enhancement ef-fects of minimum energy fusion maximum contrast fusionCCA fusion and T-F image fusion on SSMVEP It is verifiedby the test data that the proposed method is better than thetraditional time-domain fusion method

T-F analysis is a good choice for the transformation ofthe EEG data from one dimension to two dimension EEGtransformed into the T-F domain can be used as an image foranalysis e STFT has an inverse transform which caninverse transform the fused T-F image into a time-domainsignal erefore the STFT was used to transform the EEGfrom the time domain to the T-F domain e two-dimensional wavelet transform can fuse two images intoone to achieve the purpose of multichannel EEG fusionAfter two-dimensional wavelet decomposition the low-frequency components of the two subimages were sub-tracted to obtain the low-frequency components of the fusedimage and the maximum value of the high-frequencycomponents of the two subimages was taken as the high-frequency component of the fused imagee low-frequencycomponents after wavelet decomposition represent the ac-tive components of the SSMVEPe subtraction of the low-

0Methods

5

10

15

Ave

rage

SN

R

T-F image fusionCCA fusionMinimum energy fusionMaximum contrast fusion

Figure 5 e average SNR of ten subjects at all stimulus fre-quencies under four methods

10 Computational Intelligence and Neuroscience

frequency components can remove the common noise in theelectrode signals is is similar to bipolar fusion whichsubtracts two electrode signals in the time domain We alsocompared the enhancement eects of bipolar fusion and theproposed method on SSMVEP active components eelectrodes used for bipolar fusion are the same as those usedin T-F image fusion e results show that the proposedmethod is better than the bipolar fusion (the online accuracyof bipolar fusion is 7152) In this study the fused imagewas ltered by mean lter to further achieve low-pass l-tering We also tried to remove the mean ltering step aftercompleting the T-F image fusion e results show thatremoving themean ltering step reduced the fusion eect Inthis study the mean ltering step was performed after T-Fimage fusion We tested the fusion eect when the meanltering step was performed before T-F image fusion(performing mean ltering step on the subimage afterSTFT) For some subjects it was better to perform the meanltering step after T-F image fusion erefore we recom-mend that researchers follow the analysis steps in this study

e wavelet low-frequency coecients (see details inequation (17)) of the fused image have an important in-uence on the fusion eect If we perform T-F image fusionon six electrode signals at the same time we will get six setsof wavelet low-frequency coecients Here each set of low-frequency coecient is a two-dimensional signal Referringto equation (1) we can assign a coecient to each two-dimensional signal and obtain the fused two-dimensionalsignal (ie the wavelet low-frequency coecients of thefused image) after linear summationis study explored thefeasibility of applying the spatial lter coecients of CCAfusion and maximum contrast fusion to T-F image fusionwhen fusing six electrode signals by using the T-F imagefusion method at the same time Since the fusion eect ofminimum energy fusion was poor the spatial lter co-ecients of the minimum energy fusion were not used heree spatial lter coecients of the maximum contrast fusionand CCA fusion at the frequency f were obtained byequations (9) and (10) and are set to Vmax and Vcca re-spectively Vmax and Vcca are two vectors with dimensions of6times1 where Vmax(1 1) and Vcca(1 1) represent the rstelements of the vectors Vmax and Vcca Since the T-F imagefusion method was performed on the six electrode signalsequation (17) is transformed into equation (19) where Vcorresponds to the spatial lter coecient Vmax or Vcca erest of the analysis process is the same as that in Section 210e same analysis steps were performed for all 20 targets(7ndash108Hz) e Welch spectrum analysis (the parametersare the same as in Section 32) was performed on the ob-tained signal after T-F image fusion Figures 8(a) and 8(b)show the Welch power spectra plotted using the data ofsubject 4 in the fourth round of the experiment Figure 8(a)shows the power spectrum plotted using spatial lter co-ecients of CCA fusion and Figure 8(b) shows the Welchpower spectrum plotted using spatial lter coecients ofmaximum contrast fusion where f represents the stimulusfrequency and the red circle indicates the amplitude at thestimulus frequency Figures 8(a) and 8(b) show that theamplitudes at the stimulation frequencies are prominente spatial lter coecients of CCA fusion and maximumcontrast fusion are eective for T-F image fusion us theT-F image fusion method proposed in this study focuses onthe fusion of wavelet low-frequency coecients of multiple

0

02

04

06

08

1

12

Acc

urac

y

Subject1 2 3 4 5 6 7 8 9 10

T-F image fusionCCA fusion

Minimum energy fusionMaximum contrast fusion

Figure 6 e accuracy of each subject under four fusion methods

0

02

04

06

08

1

12

Ave

rage

accu

racy

T-F image fusionCCA fusionMinimum energy fusionMaximum contrast fusion

Methods

Figure 7 e average accuracy of ten subjects under four fusionmethods

Computational Intelligence and Neuroscience 11

images (by assigning a coecient to each two-dimensionalsignal and obtaining the fused two-dimensional signal afterlinear summation) e premise of the above test results is

that we know the frequency of the signal to be fused If theonline test method is used (see details in Section 33) thesame online fusion results as CCA fusion and maximum

f = 7 Hz

times104

012

10 20 30 400

f = 72 Hz

times104

012

10 20 30 400

f = 74 Hz

times104

024

10 20 30 400

f = 76 Hz

times104

024

10 20 30 400

f = 78 Hz

times104

024

10 20 30 400

f = 8 Hz

times104

012

10 20 30 400

f = 82 Hz

times104

024

10 20 30 400

f = 84 Hz

times104

05

10

10 20 30 400

f = 86 Hz

times104

024

10 20 30 400

f = 88 Hz

times104

024

10 20 30 400

f = 9 Hz

times104

012

10 20 30 400

f = 92 Hz

times104

024

10 20 30 400

f = 94 Hz

times104

05

10

10 20 30 400

f = 96 Hz

times104

024

10 20 30 400

f = 98 Hz

times104

012

10 20 30 400

f = 10 Hz

times104

012

10 20 30 400

f = 102 Hz

times104

012

10 20 30 400

f = 104 Hz

times104

012

10 20 30 400

f = 106 Hz

times104

024

10 20 30 400

f = 108 Hz

times104

012

10 20 30 400Frequency (Hz)

Am

plitu

de V

2 (H

z)

(a)

f = 7 Hz

0

2

4

10 20 30 400

f = 8 Hz

0

2

4

10 20 30 400

f = 9 Hz

0

2

4

10 20 30 400

f = 10 Hz

0

2

4

10 20 30 400

f = 72 Hz0

2

4

10 20 30 400

f = 82 Hz0

5

10

10 20 30 400

f = 92 Hz0

2

4

10 20 30 400

f = 102 Hz0

2

4

10 20 30 400

f = 74 Hz

0

2

4

10 20 30 400

f = 84 Hz

0

10

20

10 20 30 400

f = 94 Hz

0

10

20

10 20 30 400

f = 104 Hz

0

2

4

10 20 30 400

f = 76 Hz0

5

10 20 30 400

f = 86 Hz0

5

10

10 20 30 400

f = 96 Hz0

5

10 20 30 400

f = 106 Hz0

5

10

10 20 30 400

f = 78 Hz0

5

10 20 30 400

f = 88 Hz0

5

10 20 30 400

f = 98 Hz0

2

4

10 20 30 400

f = 108 Hz0

2

4

10 20 30 400Frequency (Hz)

Am

plitu

de V

2 (H

z)

(b)

Figure 8 e Welch power spectrum plotted using the (a) CCA fusion spatial lter coecients and (b) maximum contrast fusion spatiallter coecients

12 Computational Intelligence and Neuroscience

contrast fusion were obtained T-F image fusion usedonly two electrode signals and the better online fusioneffect than that of the CCA fusion and the maximumcontrast fusion was obtained which shows the effec-tiveness of the proposed method Next we will explorethe fusion method of multiple T-F images (more than twoimages) that is to find suitable spatial filter coefficientsfor the wavelet low-frequency components of multipleT-F images

LLNF LLN1 lowastV(1 1) + LLN2 lowastV(2 1) + LLN3 lowastV(3 1)

+ LLN4 lowastV(4 1) + LLN5 lowastV(5 1)

+ LLN6 lowastV(6 1)

(19)

e parameters affect the fusion effect of the proposedmethod We listed the parameters that need to be selectedand the possible values of the parameters in Section 31 Inthis study the grid search method was used to traverse thecombination of all parameters and the best combination ofparameters was found In the experiment we found thatthe number of frequency bins and the Gaussian windowlength can be fixed to 54 and 57 We recommend that theresearchers determine the parameters according to theparameter selection principles and ranges given in Section31 In this study the SSMVEP active component wasenhanced by fusing multichannel signals into a single-channel signal and then the focused target frequencywas identified by performing spectrum analysis on thefused signals is is beneficial for SSMVEP studies basedon spectrum analysis For example Reference [21] pro-posed a frequency and phase mixed coding method in theSSVEP-based brain-computer interface (BCI) which in-creases the number of BCI coding targets by making onefrequency correspond to multiple different phases In thestudy the FFT analysis of the test signal is required to findthe possible focused target frequency and then calculate thephase value at that frequency e proposed method in thisstudy has a positive significance for accurately finding thefocused target frequency in the spectrum Moreover someEEG feature extraction algorithms for the BCI also requirespectrum analysis of the test signals [22 23] erefore themethod proposed in this study has a potential applicationvalue

5 Conclusion

To explore whether T-F domain analysis can achieve betterfusion effects than time-domain analysis this study pro-posed an SSMVEP enhancement method based on T-Fimage fusion e parameters of the T-F image fusionalgorithm were determined by the grid search method andthe electrode signals used for T-F image fusion were se-lected by bipolar fusion e analysis results show that thekey of the T-F image fusion algorithm is the fusion of thewavelet low-frequency components is study comparedthe enhancement effects of minimum energy fusionmaximum contrast fusion CCA fusion and T-F imagefusion on SSMVEP e experimental results show that the

online performance of the T-F image fusion method isbetter than that of the traditional spatial filtering methodswhich indicates that the proposed method is feasible to fuseSSMVEP in the T-F domain

Data Availability

e analytical data used to support the findings of this studyare included within the article and the raw data are availablefrom the corresponding author upon request

Ethical Approval

e subjects provided informed written consent in accor-dance with the protocol approved by the Institutional Re-view Board of Xirsquoan Jiaotong University

Conflicts of Interest

e authors declare that they have no conflicts of interest

Authorsrsquo Contributions

Wenqiang Yan worked on the algorithm and wrote thepaper Guanghua Xu involved in discussions and gavesuggestions throughout this project Longting Chen andXiaowei Zheng worked on the experimental design Allauthors read and approved the final manuscript

Acknowledgments

is research was supported by the National Natural ScienceFoundation of China (NSFC) (no 51775415) We want tothank the subjects for participating in these experiments

Supplementary Materials

MATLAB codes for T-F image fusion minimum energyfusion maximum contrast fusion and CCA fusion (Sup-plementary Materials)

References

[1] W Yan G Xu J Xie et al ldquoFour novel motion paradigmsbased on steady-state motion visual evoked potentialrdquo IEEETransactions on Biomedical Engineering vol 65 no 8pp 1696ndash1704 2018

[2] J R Wolpaw and D J Mcfarland ldquoMultichannel EEG-basedbrain-computer communicationrdquo Electroencephalographyand Clinical Neurophysiology vol 90 no 6 pp 444ndash4491994

[3] B O Peters G Pfurtscheller and H Flyvbjerg ldquoMiningmulti-channel EEG for its information content an ANN-based method for a brain-computer interfacerdquo Neural Net-works vol 11 no 7-8 pp 1429ndash1433 1998

[4] S Parini L Maggi A C Turconi and G Andreoni ldquoA robustand self-paced BCI system based on a four class SSVEPparadigm algorithms and protocols for a high-transfer-ratedirect brain communicationrdquo Computational Intelligence andNeuroscience vol 2009 pp 1ndash11 2009

[5] O Friman I Volosyak and A Graser ldquoMultiple channeldetection of steady-state visual evoked potentials for brain-

Computational Intelligence and Neuroscience 13

computer interfacesrdquo IEEE Transactions on Biomedical En-gineering vol 54 no 4 pp 742ndash750 2007

[6] Z Lin C Zhang WWu and X Gao ldquoFrequency recognitionbased on canonical correlation analysis for SSVEP-basedBCIsrdquo IEEE Transactions on Biomedical Engineering vol 53- no 12 pp 2610ndash2614 2006

[7] M Middendorf G Mcmillan G Calhoun and K S JonesldquoBrain-computer interfaces based on the steady-state visual-evoked responserdquo IEEE Transactions on Rehabilitation En-gineering vol 8 no 2 pp 211ndash214 2000

[8] G R Muller-Putz R Scherer C Brauneis andG Pfurtscheller ldquoSteady-state visual evoked potential(SSVEP)-based communication impact of harmonic fre-quency componentsrdquo Journal of Neural Engineering vol 2no 4 pp 123ndash130 2005

[9] D J Mcfarland L M Mccane S V David and J R WolpawldquoSpatial filter selection for EEG-based communicationrdquoElectroencephalography and Clinical Neurophysiologyvol 103 no 3 pp 386ndash394 1997

[10] H Li B S Manjunath and S K Mitra ldquoMultisensor imagefusion using the wavelet transformrdquo Graphical Models andImage Processing vol 57 no 3 pp 235ndash245 1995

[11] S Li and B Yang ldquoMultifocus image fusion by combiningcurvelet and wavelet transformrdquo Pattern Recognition Lettersvol 29 no 9 pp 1295ndash1301 2008

[12] B Zhang ldquoStudy on image fusion based on different fusionrules of wavelet transformrdquo Acta Photonica Sinica vol 33pp 173ndash176 2004

[13] C DMeyerMatrix Analysis and Applied Linear Algebra Bookand Solutions Manual Society for Industrial and AppliedMathematics Philadelphia PA USA 2000

[14] K J A Sundar M Jahnavi and K Lakshmisaritha ldquoMulti-sensor image fusion based on empirical wavelet transformrdquo inProceedings of the International Conference on ElectricalElectronics Communication Computer and OptimizationTechniques pp 93ndash97 Mysuru India December 2018

[15] L Tian J Ahmed Q Du B Shankar and S Adnan ldquoMultifocus image fusion using combined median and average filterbased hybrid stationary wavelet transform and principalcomponent analysisrdquo International Journal of AdvancedComputer Science and Applications vol 9 no 6 2018

[16] G He S Xing D Dong and X Zhao ldquoPanchromatic andmulti-spectral image fusion method based on two-step sparserepresentation and wavelet transformrdquo in Asia-Pacific Signaland Information Processing Association Summit and Confer-ence pp 259ndash262 Honolulu HI USA January 2018

[17] Y Shin S Lee M Ahn H Cho S C Jun and H-N LeeldquoNoise robustness analysis of sparse representation basedclassification method for non-stationary EEG signal classifi-cationrdquo Biomedical Signal Processing and Control vol 21pp 8ndash18 2015

[18] M Xu L Chen L Zhang et al ldquoA visual parallel-BCI spellerbased on the time-frequency coding strategyrdquo Journal ofNeural Engineering vol 11 no 2 article 026014 2014

[19] J Xie G Xu A Luo et al ldquoe role of visual noise ininfluencing mental load and fatigue in a steady-state motionvisual evoked potential-based brain-computer interfacerdquoSensors vol 17 no 8 p 1873 2017

[20] K Barbe R Pintelon and J Schoukens ldquoWelch methodrevisited nonparametric power spectrum estimation viacircular overlaprdquo IEEE Transactions on Signal Processingvol 58 no 2 pp 553ndash565 2010

[21] C Jia X Gao B Hong and S Gao ldquoFrequency and phasemixed coding in SSVEP-based brain-computer interfacerdquo

IEEE transactions on bio-medical engineering vol 58 no 1pp 200ndash206 2011

[22] C Yang H Wu Z Li W He N Wang and C-Y Su ldquoMindcontrol of a robotic arm with visual fusion technologyrdquo IEEETransactions on Industrial Informatics vol 14 no 9pp 3822ndash3830 2017

[23] J Castillo S Muller E Caicedo and T Bastos ldquoFeatureextraction techniques based on power spectrum for a SSVEP-BCIrdquo in Proceedings of the IEEE 23rd International Sympo-sium on Industrial Electronics (ISIE) Istanbul Turkey June2014

14 Computational Intelligence and Neuroscience

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 2: Steady-StateMotionVisualEvokedPotential(SSMVEP) … · 2019. 2. 14. · wasconductedbyMATLAB’stfrstftfunction.Afterfusing two T-F images, the T-F signal was inversely transformed

signals from the selected electrode signal Dennis et al studiedthe enhancement effects of standard ear reference CAR fu-sion and Laplacian fusion on the active components of EEGe results show that the Laplacian fusion achieved the bestfusion effects on EEG active components [9] Friman et alproposed theminimum energy fusion andmaximum contrastfusion methods [5] e minimum energy fusion wasachieved by minimizing the noise energy and the maximumcontrast fusion was achieved by maximizing the EEG SNR Inthe study Friman et al compared the effects of average fusionnative fusion bipolar fusion Laplacian fusion minimumenergy fusion and maximum contrast fusion on steady-statevisual evoked potential (SSVEP) signals Among these theenhancement effect of average fusion was worst and theenhancement effect of minimum energy fusion was best CCAfusion is used to study the linear relationship between twogroups of multidimensional variables and is also a commonlyused spatial filtering method [6]

Since Friman et al demonstrated that minimum energyfusion and maximum contrast fusion had better fusion ef-fects and did not compare minimum energy fusion andmaximum contrast fusion with CCA fusion this study onlycompared the enhancement effects of minimum energyfusion maximum contrast fusion CCA fusion and theproposed method on the active components of the SSMVEPMinimum energy fusion maximum contrast fusion andCCA fusion are used to analyze multidimensional EEGsignals in the time domain is study aims to investigatewhether electrode signals can be fused in the time-frequency(T-F) domainemethod of image fusion can fuse multipleimages into one thus improving the image quality [10ndash12]is study transformed EEG from the time domain to theT-F domain and analyzed multidimensional EEG with theidea of image fusion to improve the enhancement effect ofexisting spatial filtering methods on SSMVEP active com-ponents Firstly two electrode signals were transformedfrom the time domain to the T-F domain via short-timeFourier transform (STFT) en two T-F images weredecomposed by two-dimensional multiscale wavelet de-composition and both the high-frequency coefficients andlow-frequency coefficients of the wavelet were fused byspecific fusion rules After two-dimensional wavelet re-construction two images could be fused into one imageefused image was filtered via mean filter and the fused time-domain signal could be obtained by inverse STFT (ISTFT)e experimental results show that the proposed methodbased on T-F image fusion can enhance the SSMVEP betterthan the traditional time-domain analysis method

2 Methods

21 Subjects Six males and four females (20ndash28 yearsold) were recruited as subjects for this study e partici-pants were healthy and had normal colour and visualperception

22 Experimental Equipment gUSBamp (gtec Austria)was used to collect the EEG e sampling frequency of the

equipment is 1200Hz A gUSBamp EEG amplifier andgGAMMAbox active electrode system were combined toform the experimental platform Prior to the experiment thereference electrode was placed at the left ear of the subjectsand the ground electrode Fpz was placed at the foreheadEEGs were collected from the following six channels PO7Oz PO8 PO3 POz and PO4

23 Experimental Step A checkerboard of radialcontraction-expansion motion was used as a stimulationparadigm and the details of the paradigm can be found inReference [1] e stimulus paradigms were presented ona screen at a refresh rate of 144 Hz and the subjects werepositioned 06ndash08m from the screen e experimentincluded six blocks each containing 20 trials corre-sponding to all 20 targets e stimuli were arranged in a5times4 matrix and the horizontal and vertical intervalsbetween two neighboring stimuli were 4 cm and 3 cmrespectively e stimulus frequencies were 7ndash108 Hzwith a frequency interval of 02 Hz and the radius of thestimuli was 60 pixels Each trial lasted 5 s separated by aninterval of 2 s Between two blocks the subjects wereallowed to rest properly Twenty targets were simulta-neously presented on the screen and numbered 1 to 20Before the stimulus began one of the 20 serial numbersappeared below the corresponding target indicating thefocus target

24 Preprocessing of EEG Data e corresponding EEGdata segments were extracted in accordance with the trialstart and end times e MATLAB library function detrendwas used to remove the linear trends for each channelChebyshev bandpass filtering of 05ndash50Hz was used to re-move low-frequency drifts and high-frequency interferences

25 Significance Test e data were expressed as meanvalues A paired-sample t-test was used to determine thesignificance Statistical significance was defined as plt 005

26 Spatial Filtering Methods Minimum Energy FusionMaximum Contrast Fusion and CCA Fusion For EEG dataY recorded frommultiple electrode leads the spatial filteringmethod obtains new data by linearly summing each of theelectrode signals with spatial filter coefficientsW A spatiallyfiltered signal can be expressed as

Z WTY (1)

e spatial filter coefficients W for minimum energyfusion maximum contrast fusion and CCA fusion aredescribed below

261 Minimum Energy Fusion For a visual stimulus fre-quency f the SSMVEP signal recorded by the ith electrodecan be expressed as follows

2 Computational Intelligence and Neuroscience

yi 1113944n

j1aij sin 2jπft + ϕij1113872 1113873 + ei(t) (2)

where n represents the number of harmonics and aij and ϕij

represent the amplitude and phase of the jth harmoniccomponent respectively e model decomposes the signalinto the sum of the SSMVEP induced by the visual stimulusand the noise ei(t) generated by the electromyogram (EMG)electrooculogram (EOG) and other components usequation (2) can be expressed as

yi Si + ei (3)

where

Si aiX

X X1 X2 Xn1113858 1113859(4)

e submatrix Xn is composed of sin(2nπft) andcos(2nπft) and ai represents the amplitude of SSMVEP at itsstimulus frequency and harmonics We assume that thesignal matrix recorded by N electrodes is Y [Y1 Y2 YN]where each column corresponds to an electrode channel Ycan be projected to the SSMVEP space through a projectionmatrix Q namely

S QY (5)

Reference [13] gives the solution of Q as

Q X XTX1113872 1113873minus1

XT

(6)

us the noise signal can be expressed as

Yprime YminusQY (7)

e minimum energy fusion obtains the spatial filtercoefficients W by minimizing the noise energy namely

minW

YprimeW

2

minW

WTYprimeYprimeW (8)

e above minimization problem can be obtained bydecomposing the eigenvalues of the positive definite matrixYprime

TYprime After decomposition N eigenvalues and corre-

sponding eigenvectors can be obtained e spatial filtercoefficient W is the eigenvector corresponding to thesmallest eigenvalue (MATLAB code for minimum energyfusion and test data is provided in Supplementary Materials(available here))

262 Maximum Contrast Fusion e goal of maximumcontrast fusion is to maximize SSMVEP energy whileminimizing noise energy e SSMVEP energy can be ap-proximated as WTYTYW erefore the maximum contrastfusion can be achieved by the following maximizingequation

maxW

YW2

YprimeW

2 max

W

WTYTYW

WTYprimeTYprimeW (9)

Here Y and Yprime are the same as in Section 261 Equation (9)can be solved by generalized eigenvalue decomposition of

YTY and YprimeTYprime e spatial filter coefficient W is the ei-

genvector corresponding to the largest eigenvalue (MAT-LAB code for maximum contrast fusion and test data isprovided in Supplementary Materials (available here))

263 Canonical Correlation Analysis Fusion CCA fusion isused to study the linear relationship between two groups ofmultidimensional variables Using two sets of signals Y andX the goal is to find two linear projection vectors wY and wX

so that the two groups of linear combination signals wTYY

and wTXX have the largest correlation coefficients

maxwYwX

p E wT

YYXTwX( 1113857

E wTYYYTwY( 1113857

1113969

E wTXXXTwX( 1113857

(10)

e reference signals were constructed at the stimulationfrequency fd

X

cos 2πfdt( 1113857

sin 2πfdt( 1113857

cos 2kπfdt( 1113857

sin 2kπfdt( 1113857

⎧⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎨

⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎩

⎫⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎬

⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎭

t 1fs

l

fs

(11)

where k is the number of harmonics which is dependent onthe number of frequency harmonics existing in SSMVEP fsis the sampling rate and l represents the number of samplepoints e optimization problem of equation (10) can betransformed into the eigenvalue decomposition problemand the spatial filter coefficient W is the eigenvector cor-responding to the largest eigenvalue (MATLAB code forCCA fusion and test data is provided in SupplementaryMaterials (available here))

27 Two-Dimensional T-F Image Representation and Re-construction of EEG Signals e T-F analysis can transformthe signal from the time domain to the T-F domain andthen the T-F domain signal can be regarded as an image foranalysis In this study T-F transform was performed on thetime-domain EEG signal using STFT Let h(t) be a timewindow function with center at τ a height of 1 and a finitewidth e portion of the signal x(t) observed by h(tminus τ) isx(t)h(t minus τ) e STFT is generated by the fast Fouriertransform (FFT) of the windowed signal x(t)h(t minus τ)

STFTx(τ f) 1113946+infin

minusinfinx(t)h(tminus τ)e

minusj2πftdt (12)

Equation (12) can map the signal x(t) onto the two-dimensional T-F plane (τ f) and f can be regarded as thefrequency in the STFT After fusing multiple EEG T-Fimages the fused two-dimensional image signals need to betransformed into one-dimensional time-domain signals forthe subsequent analysis e STFT has an inverse transformthat transforms the T-F domain signal into a time-domainsignal

Computational Intelligence and Neuroscience 3

x(t) 1113946+infin

minusinfin1113946

+infin

minusinfinSTFT(τ f)h(tminus τ)e

j2πftdf dτ (13)

In this study the T-F transform of the SSMVEP signalwas conducted by MATLABrsquos tfrstft function After fusingtwo T-F images the T-F signal was inversely transformedinto a one-dimensional time-domain signal by MATLABrsquostfristft function

28 T-F Image Fusion Based on Two-Dimensional MultiscaleWavelet Transform Among the many image fusion tech-nologies two-dimensional multiscale wavelet transform-based image fusion methods have become a research hot-spot [14ndash16] Let f(x y) denote a two-dimensional signaland x and y are its abscissa and ordinate respectively Letψ(x y) denote a two-dimensional basic wavelet andψab1 b2

(x y) denote the scale expansion and two-dimensional displacement of ψ(x y)

ψab1 b2(x y)

1aψ

xminus b1

ayminus b2

a1113888 1113889 (14)

en the two-dimensional continuous wavelet trans-form is

WTf a b1 b2( 1113857 1a

1113946 1113946 f(x y)ψxminus b1

ayminus b2

a1113888 1113889dx dy

(15)

e factor 1a in equation (15) is a normalization factorthat has been introduced to ensure that the energy remainsunchanged before and after wavelet expansion e two-dimensional multiscale wavelet decomposition and re-construction process is shown in Figure 1 e de-composition process can be described as follows First one-dimensional discrete wavelet decomposition is performedon each line of the image which obtains the low-frequencycomponent L and the high-frequency component H of theoriginal image in the horizontal direction e low-frequency component and high-frequency component arethe low-frequency wavelet coefficients and high-frequencywavelet coefficients obtained after wavelet decompositionrespectively en one-dimensional discrete wavelet de-composition is performed on each column of the trans-formed data to obtain the low-frequency components LL inthe horizontal and vertical directions the high-frequencycomponents LH in the horizontal and vertical directions thelow-frequency components HL in the horizontal and verticaldirections and the high-frequency component HH in thevertical direction e reconstruction process can be de-scribed as follows Firstly one-dimensional discrete waveletreconstruction is performed on each column of the trans-form result en one-dimensional discrete wavelet re-construction is performed on each row of the transformeddata to obtain the reconstructed image

e T-F image obtained by the STFT is decomposed intoN layers by the two-dimensional multiscale wavelet de-composition as shown in Figure 1 and 3N+ 1 frequencycomponents are obtained In this study the LLN

decomposed from the highest level is defined as the low-frequency component and the HLM LHM and HHM (M 12 N) decomposed from each level are defined as high-frequency components e active components of the EEGare mainly contained in the low-frequency component efusion process of two T-F images based on two-dimensionalmultiscale wavelet transform is shown in Figure 2 Firstlythe T-F images 1 and 2 are decomposed by two-dimensionalwavelet decomposition and then the corresponding low-frequency components and high-frequency components arefused according to the corresponding fusion rules Finallytwo-dimensional wavelet reconstruction is performed toobtain the fused image by fusing low-frequency componentsand high-frequency components In this study the two-dimensional wavelet decomposition was performed usingthe MATLABrsquos wavedec2 function and the two-dimensionalwavelet reconstruction was performed using the MATLABrsquoswaverec2 function

29 Image Mean Filtering For the current pixel to beprocessed a template is selected which is composed ofseveral pixels adjacent to the current pixel e method ofreplacing the value of the original pixel with the mean of thetemplate is called mean filtering Defining f(x y) as thepixel value at coordinates (x y) the current pixel g(x y)

can be calculated by

g(x y) 1D

1113944

D

i1f xi yi( 1113857 (16)

where D represents the square of the template size

210 Summary of the ProposedMethod for EEG EnhancementBased on T-F Image Fusion Here the T-F image fusionmethod of the electrode signals x1(t) and x2(t) is introducedand the fusion process is shown in Figure 3

(1) STFT is performed on the two electrode signals x1(t)and x2(t) to obtain T-F images 1 and 2

(2) e low-frequency components LLN1 and LLN2 andthe high-frequency components (HLM1 LHM1 andHHM1) and (HLM2 LHM2 and HHM2) (M 1 2 N) of two T-F images 1 and 2 are obtained byN-layertwo-dimensional multiscale wavelet decomposition

(3) e low-frequency components LLN1 and LLN2 andthe high-frequency components (HLM1 LHM1 andHHM1) and (HLM2 LHM2 and HHM2) (M 1 2 N) of each decomposition layer are fused accordingto the following fusion rules

(a) e low-frequency components of the T-F im-ages 1 and 2 are subtracted to obtain the low-frequency components of the fused image F

LLNF LLN1 minus LLN2 (17)

e low-frequency components represent the activecomponents of the SSMVEP and the subtraction of the

4 Computational Intelligence and Neuroscience

T-F image

Row decomposition

Rowreconstruction

L H

Columndecomposition

Columnreconstruction LH1

LL1 HL1

HH1

Secondarydecomposition

Secondaryreconstruction LH1 HH1

HL1LL2

LH2

HL2

HH2

Figure 1 e process of two-dimensional discrete wavelet decomposition and reconstruction

T-F image 1 T-F image 2

Wavelet decomposition Wavelet decomposition

Low-frequency component

Fusion principle 1 Fusion principle 2

Two-dimensional wavelet reconstruction

Fusion image

High-frequency component

Low-frequency component

High-frequency component

Figure 2 Image fusion process based on two-dimensional wavelet transform

Raw signal 1 Raw signal 2

STFT STFT

Time (s) Time (s)

Freq

uenc

y (H

z)

Freq

uenc

y (H

z)

2D wavelet decomposition

Low-frequency component fusion

High-frequency component fusion

2D wavelet reconstruction

Time (s)

Freq

uenc

y (H

z)

Mean filtering

Time (s)

Freq

uenc

y (H

z)

ISTFT

Fused signal

Figure 3 e fusion process of two-electrode signal T-F images

Computational Intelligence and Neuroscience 5

low-frequency components can remove the commonnoise in the electrode signals

(b) e magnitudes of the high-frequency components(HLM1 LHM1 andHHM1) and (HLM2 LHM2 andHHM2) (M 1 2 N) of each decomposition

layer are obtained for the T-F images 1 and 2 ehigh-frequency component with a large value isused as the high-frequency component of the fusedimage F

HLiMF

HLiM1 real HLi

M11113872 1113873gt real HLiM21113872 1113873

HLiM2 real HLi

M11113872 1113873lt real HLiM21113872 1113873

⎧⎪⎨

⎪⎩

LHiMF

LHiM1 real LHi

M11113872 1113873gt real LHiM21113872 1113873

LHiM2 real LHi

M11113872 1113873lt real LHiM21113872 1113873

⎧⎪⎨

⎪⎩

HHiMF

HHiM1 real HHi

M11113872 1113873gt real HHiM21113872 1113873

HHiM2 real HHi

M11113872 1113873lt real HHiM21113872 1113873

⎧⎪⎨

⎪⎩

⎧⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎨

⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎩

M 1 2 N (18)

e two-dimensional signal after the STFT is acomplex signal the two-dimensional signal aftertwo-dimensional wavelet decomposition is still acomplex signal e value used for the comparison isthe real part of the complex signal Here i representsthe number of frequency components of the Mdecomposition layer (ie the number of waveletcoefficients)

(4) According to both low-frequency components LLNFand high-frequency components (HLMF LHMF andHHMF) (M 1 2 N) of the fused image F thefused image F is generated by two-dimensionalwavelet reconstruction

(5) Mean filtering is performed on the fused image F(6) An ISTFT is performed on the mean filtered image to

generate a fused time-domain signal (MATLAB codefor T-F image fusion and test data is provided inSupplementary Materials (available here))

3 Results

31 Parameter Selection

311 STFT Parameters In this study the EEG time-domainsignals were subjected to STFT and ISTFT using theMATLAB library functions tfrstft and tfristfte parametersthat need to be set here are the number of frequency bins andthe frequency smoothing window If the number of fre-quency bins is too large the calculation speed of the entireprocess will be low In this study the value of this parametercan be set to [32 54 76 98120] Since the Fourier transform ofthe Gaussian function is also a Gaussian function thewindow function of the optimal time localization of theSTFT is a Gaussian function e EEG is a nonstationarysignal [17] and requires a high time resolution of the windowfunction that is the window length should not be too longIn this paper a Gaussian window was selected and the valueof the window length can be set to [37 47 57 67 77 87 97]

312 Two-Dimensional Multiscale Wavelet TransformParameters In this study the MATLAB library functionswavedec2 and waverec2 were used to perform two-dimensional multiscale wavelet decomposition and re-construction on T-F images Here the number of layers ofwavelet decomposition and the wavelet basis functions needto be set e sampling frequency of the EEG acquisitiondevice used in this paper was 1200Hz and the stimulationfrequency of the SSMVEP was 7ndash108Hz Considering thatthe SSMVEP may contain the second harmonic of thestimulation frequency this study sets the number of layers ofwavelet decomposition to 4 e dbN wavelet has good reg-ularity and was chosen as the wavelet basis function evanishing momentN of the wavelet function can be set to [2 35 7 10 13]

313 Mean Filtering Parameters Here the template size ofthemean filtering needs to be set In this study the value of thetemplate size can be set to [2 4 6 8 10 12 14 16 18 20 22 24 26]

314 Low-Frequency Component of the Fused Image Fe values of LLNF taking LLN1minusLLN2 or LLN2minusLLN1 willaffect the experimental results and the values of LLNF needto be determined to optimize the final fusion effect (seedetails in Section 210 (3))

315 Parameter Selection e T-F image fusion methodproposed in this study requires the setting of the followingfive parameters the number of frequency bins frequencysmoothing window vanishing moments of dbN waveletfunctions mean filter template size and the value of LLNFIn this study the grid search method was used to traverse thecombination of all parameters to find the best combinationof parameters that can enhance the SSMVEP active com-ponents In this study six rounds of experiments wereconducted e first three rounds of experimental data wereused to select the optimal combination of parameters for T-F

6 Computational Intelligence and Neuroscience

image fusion According to the selected combination of pa-rameters the last three rounds of experimental data were usedto compare the enhancement effect of SSMVEP active com-ponents by the method of minimum energy fusion maximumcontrast fusion CCA fusion and T-F image fusion edisadvantage of grid search is that the amount of calculation islarge however considering the difficulty of superparameterselection grid search is still a more reliable method

32 Electrode Signal Selection for T-F Image Fusion e T-Fimage fusion method analyzes two electrode signals ustwo suitable electrodes need to be selected from multipleelectrodes e bipolar fusion signal is obtained by sub-tracting the two original EEG signals In equation (17) wesubtracted the wavelet low-frequency components of the twoT-F images to obtain the wavelet low-frequency componentsof the fused image e subtraction of the low-frequencycomponents removes the common noise in the electrodesignals which is similar to the principle of bipolar fusionerefore the two electrode signals with the best bipolarfusion effect were used in the T-F image fusion For SSVEPor SSMVEP stimulation the Oz electrode usually has thestrongest response [18] and is most commonly used toanalyze SSVEP or SSMVEP [19] e Oz electrode was usedas one of the analysis electrodes for bipolar fusion In thisstudy the Oz electrode was fused with the other five elec-trode signals so that the best combination of electrodes canbe selected When bipolar fusion is used the Oz electrode asa reducing electrode or a reduced electrode will not affect theexperimental results For example Oz-POz and POz-Ozhave the same fusion effect In this study the Oz elec-trode was used as the reduced electrode Twenty trials persubject were used for electrode selection e Welch powerspectrum analysis [20] was performed on the obtainedsignals of bipolar fusion for each subject e Gaussianwindow was used and the length of the window was 5 s eoverlapping was 256 and the number of discrete Fouriertransform (DFT) points was 6000 for the Welch periodo-gram 20 focused targets correspond to 20 stimuli fre-quencies and the frequencies corresponding to the highestpower spectrum amplitudes at the 20 frequencies weredetermined as the focused target frequency e high rec-ognition accuracy indicates that the amplitude of the powerspectrum at the stimulation frequency is prominent and itindicates the effectiveness of the fusion method for theenhancement of the SSMVEP active component Table 1shows the recognition accuracy of all 10 subjects under thebipolar fusion e Oz electrode was fused with differentelectrodes and obtained different fusion effects Moreoverdue to the differences among individuals the electrodecombination with the highest recognition accuracy persubject was also different e asterisk (lowast) marks the bipolarfusion electrode group with the highest recognition accuracyof each subject and the corresponding electrode combi-nation was used in T-F image fusion

33 Comparison of Enhancement Effects of Minimum EnergyFusion Maximum Contrast Fusion CCA Fusion and T-F

Image Fusion on SSMVEP Active Components T-F imagefusion used two channel signals and CCA fusion maximumcontrast fusion and minimum energy fusion used all sixchannel signals In this study the accuracy was used as theevaluation standard of the fusion effect e high onlineaccuracy indicates that the amplitude at stimulus frequencyis prominent which shows the effectiveness of the fusionmethod For T-F image fusion the Welch power spectrumanalysis (the parameters are the same as in Section 32) wasperformed on the obtained signal after the T-F image fusionand the frequency with the highest amplitude at twentystimulus frequencies was identified as the focused targetfrequency e minimum energy fusion maximum contrastfusion and CCA fusion require prior knowledge of thefrequency of the signal to be fused However the frequencyof the signal to be fused cannot be determined because theuserrsquos focused target cannot be determined beforehand Forthe current tested signal the minimum energy fusionmaximum contrast fusion and CCA fusion need to firstperform the fusion analysis at all stimulus frequencies andthen the frequency with the highest amplitude was iden-tified as the focused target frequency Take CCA fusion as anexample and assume that the current tested signal is Y First20 template signals are set according to equation (11) andthen 20 spatial filter coefficients are obtained according toequation (10) e tested signal is fused with 20 spatial filtercoefficients respectively e obtained 20 sets of vectors areseparately analyzed by the Welch power spectrum (theparameters are the same as in Section 32) to obtain theamplitude at the corresponding frequency Finally thefrequency with the highest amplitude is identified as thefocused target frequency e first three rounds of experi-mental data were used to select the parameters of the T-Fimage fusion (see details in Section 31) and the last threerounds of the experimental data were used to test the onlinefusion effects under the four methods Figures 4(a)ndash4(d)show the analysis results plotted using the data of subject 4 inthe fourth round of the experiment Figure 4(a) shows thepower spectrum results of the 20 targets under T-F imagefusion Figure 4(b) shows the power spectrum results of 20targets under CCA fusion Figure 4(c) shows the powerspectrum results of 20 targets under minimum energy

Table 1 Accuracies of 10 subjects under bipolar fusion

SubjectsBipolar fusion

Oz-PO7 Oz-PO8 Oz-PO3 Oz-POz Oz-PO4S1 02 07 045 015 095lowastS2 025 0 03lowast 01 02S3 095lowast 085 085 04 08S4 08lowast 01 07 035 035S5 02lowast 015 015 015 01S6 095lowast 03 095lowast 085 07S7 02 015 025 06lowast 055S8 01 005 025 03 09lowastS9 03 015 0 035lowast 02S10 035 035 05 08lowast 04lowaste bipolar fusion electrode group with the highest recognition accuracyof each subject

Computational Intelligence and Neuroscience 7

times10ndash7

0

05

1f = 72 Hz

times10ndash7

0

1

2f = 76 Hz

times10ndash7

0

1

2f = 78 Hz

0

05

1times10ndash7 f = 7 Hz

times10ndash7

0

1

2f = 74 Hz

times10ndash7

0

05

1f = 92 Hz

times10ndash7

0

2

4f = 96 Hz

times10ndash7

0

1

2f = 98 Hz

times10ndash7

0

1

2f = 9 Hz

times10ndash7

0

2

4f = 94 Hz

times10ndash7

0

05

1f = 102 Hz

times10ndash7

0

1

2f = 106 Hz

times10ndash7

0

1

2f = 108 Hz

times10ndash7

0

1

2f = 10 Hz

times10ndash7

0

1

2f = 104 Hz

times10ndash7

0

2

4f = 82 Hz

times10ndash7

0

2

4f = 86 Hz

times10ndash7

0

2

4f = 88 Hz

times10ndash7

0

1

2f = 8 Hz

times10ndash7

0

05

1 f = 84 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(a)

0

2

4times10ndash13 f = 7 Hz

times10ndash13

0

5f = 72 Hz

times10ndash12

0

05

1f = 74 Hz

times10ndash12

0

05

1f = 76 Hz

times10ndash13

0

5f = 78 Hz

times10ndash13

0

2

4f = 9 Hz

times10ndash13

0

5f = 92 Hz

times10ndash12

0

1

2f = 94 Hz

times10ndash12

0

05

1f = 96 Hz

times10ndash13

0

2

4f = 98 Hz

times10ndash13

0

2

4f = 10 Hz

times10ndash12

0

05

1f = 102 Hz

times10ndash12

0

05

1f = 104 Hz

times10ndash12

0

05

1f = 106 Hz

times10ndash13

0

5f = 108 Hz

times10ndash13

0

2

4f = 8 Hz

times10ndash12

0

05

1f = 82 Hz

times10ndash12

0

1

2f = 84 Hz

times10ndash12

0

05

1f = 86 Hz

times10ndash13

0

5f = 88 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(b)

Figure 4 Continued

8 Computational Intelligence and Neuroscience

times10ndash13

0

05

1f = 7 Hz

times10ndash13

0

1

2f = 74 Hz

times10ndash13

0

05

1f = 72 Hz

times10ndash13

0

1

2f = 76 Hz

times10ndash14

0

2

4f = 78 Hz

times10ndash13

0

05

1f = 9 Hz

times10ndash13

0

1

2f = 94 Hz

times10ndash13

0

1

2f = 92 Hz

times10ndash13

0

05

1f = 96 Hz

times10ndash13

0

05

1f = 98 Hz

times10ndash13

0

05

1f = 10 Hz

times10ndash13

0

05

1f = 104 Hz

times10ndash13

0

05

1f = 102 Hz

times10ndash13

0

05

1f = 106 Hz

times10ndash13

0

1

2f = 108 Hz

times10ndash13

0

05

1f = 8 Hz

times10ndash13

0

1

2f = 84 Hz

times10ndash13

0

1

2f = 82 Hz

times10ndash13

0

05

1f = 86 Hz

times10ndash13

0

05

1f = 88 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(c)

times10ndash4

0

05

1f = 72 Hz

times10ndash4

0

1

2f = 74 Hz

times10ndash4

0

1

2f = 76 Hz

times10ndash4

0

05

1f = 78 Hz

times10ndash4

0

05

1f = 7 Hz

times10ndash4

0

05

1f = 92 Hz

times10ndash4

0

2

4f = 94 Hz

times10ndash4

0

1

2f = 96 Hz

times10ndash5

0

5f = 98 Hz

times10ndash5

0

2

4f = 9 Hz

times10ndash4

0

1

2f = 102 Hz

times10ndash4

0

05

1f = 104 Hz

times10ndash4

0

1

2f = 106 Hz

times10ndash4

0

05

1f = 108 Hz

times10ndash4

0

05

1f = 10 Hz

times10ndash4

0

1

2f = 82 Hz

times10ndash4

0

2

4f = 84 Hz

times10ndash4

0

1

2f = 86 Hz

times10ndash4

0

05

1f = 88 Hz

times10ndash4

0

05

1f = 8 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(d)

Figure 4 e power spectrum results of 20 targets under (a) T-F image fusion (b) CCA fusion (c) minimum energy fusion and (d)maximum contrast fusion (the discrete sequence frequency values from left to right are 7 8 910 72 82 92 102 74 84 94 104 76 86 96 10678 88 98 108 Hz respectively)

Computational Intelligence and Neuroscience 9

fusion and Figure 4(d) shows the power spectrum results of20 targets under maximum contrast fusion In the stalk plotthe discrete sequence frequency values from left to right are7 8 9 10 72 82 92 102 74 84 94 104 76 86 96 106 78 8898 108 respectively where f represents the stimulationfrequency of the corresponding target and the green dotmarks the amplitude of the corresponding target stimulusfrequency e red flag f represents the target was mis-identified and the blue flag f represents the target wascorrectly identified Figures 4(a)ndash4(d) show that the T-Fimage fusion method has the best online performance andthe minimum energy fusion method has the worst onlineperformance Maximum contrast fusion and CCA fusionhave similar fusion effects

In online analysis the power spectrum amplitudes of thetest signal at all stimulus frequencies were calculated andthen the frequency corresponding to the maximum am-plitude was identified as the focused target frequency Errorrecognition occurs when the amplitude at the focused targetfrequency is lower than the amplitudes at the nonfocusedtarget frequencieserefore the power spectrum amplitudeat the focused target frequency can be regarded as the activecomponent of the signal and the power spectrum ampli-tudes at the remaining nonfocused target frequencies can beregarded as noise e SNR in this study was defined as theratio of the power spectrum amplitude at the focused targetfrequency to the mean of the power spectrum amplitudes atthe remaining nonfocused target frequencies e SNRs often subjects at all stimulus frequencies were superimposedand averaged and the experimental results are shown inFigure 5 It can be seen from Figure 5 that the T-F imagefusion method obtained the highest SNR which indicatesthat the T-F image fusion method proposed in this study caneffectively enhance the SSMVEP SNR

e online accuracies of each subject in the three roundsof experiments were superimposed and averaged e ex-perimental results are shown in Figure 6 It can be seen thatmost subjects obtained highest online accuracy under theT-F image fusion and some subjects obtained highest onlineaccuracy under CCA fusion and maximum contrast fusionOnly subject 9 obtained the highest online accuracy underthe minimum energy fusion e online accuracies of all thesubjects were superimposed and averaged e experimentalresults are shown in Figure 7 e experimental results showthat the online performance of the T-F image fusion methodis better e online accuracy of the T-F image fusionmethod is 617 606 and 3050 higher than that of theCCA fusion maximum contrast fusion and the minimumenergy fusion respectively e paired t-test shows signifi-cant differences between the accuracies of T-F image fusionand minimum energy fusion (p 00174) Online accuracyanalysis results show that the proposedmethod is better thanthe traditional time-domain fusion method

4 Discussion

e SSMVEP collected from the scalp contains a lot of noiseand requires an effective signal processing method to en-hance the active components of SSMVEP Spatial filtering

methods utilize EEG information of multiple channels andexert positive significance to enhance the active componentsof the SSMVEP At present the commonly used spatialfiltering methods include average fusion native fusion bi-polar fusion Laplacian fusion CAR fusion CCA fusionminimum energy fusion and maximum contrast fusionefirst five spatial filtering methods can only process thepreselected electrode signals Minimum energy fusionmaximum contrast fusion and CCA fusion improve theabove problem and they can be used to fuse any number ofelectrode signals with better fusion effects Minimum energyfusion maximum contrast fusion and CCA fusion fuseSSMVEP in the time domainis study first put forward theidea of fusing EEG in the T-F domain and proposed a T-Fimage fusion method We compared the enhancement ef-fects of minimum energy fusion maximum contrast fusionCCA fusion and T-F image fusion on SSMVEP It is verifiedby the test data that the proposed method is better than thetraditional time-domain fusion method

T-F analysis is a good choice for the transformation ofthe EEG data from one dimension to two dimension EEGtransformed into the T-F domain can be used as an image foranalysis e STFT has an inverse transform which caninverse transform the fused T-F image into a time-domainsignal erefore the STFT was used to transform the EEGfrom the time domain to the T-F domain e two-dimensional wavelet transform can fuse two images intoone to achieve the purpose of multichannel EEG fusionAfter two-dimensional wavelet decomposition the low-frequency components of the two subimages were sub-tracted to obtain the low-frequency components of the fusedimage and the maximum value of the high-frequencycomponents of the two subimages was taken as the high-frequency component of the fused imagee low-frequencycomponents after wavelet decomposition represent the ac-tive components of the SSMVEPe subtraction of the low-

0Methods

5

10

15

Ave

rage

SN

R

T-F image fusionCCA fusionMinimum energy fusionMaximum contrast fusion

Figure 5 e average SNR of ten subjects at all stimulus fre-quencies under four methods

10 Computational Intelligence and Neuroscience

frequency components can remove the common noise in theelectrode signals is is similar to bipolar fusion whichsubtracts two electrode signals in the time domain We alsocompared the enhancement eects of bipolar fusion and theproposed method on SSMVEP active components eelectrodes used for bipolar fusion are the same as those usedin T-F image fusion e results show that the proposedmethod is better than the bipolar fusion (the online accuracyof bipolar fusion is 7152) In this study the fused imagewas ltered by mean lter to further achieve low-pass l-tering We also tried to remove the mean ltering step aftercompleting the T-F image fusion e results show thatremoving themean ltering step reduced the fusion eect Inthis study the mean ltering step was performed after T-Fimage fusion We tested the fusion eect when the meanltering step was performed before T-F image fusion(performing mean ltering step on the subimage afterSTFT) For some subjects it was better to perform the meanltering step after T-F image fusion erefore we recom-mend that researchers follow the analysis steps in this study

e wavelet low-frequency coecients (see details inequation (17)) of the fused image have an important in-uence on the fusion eect If we perform T-F image fusionon six electrode signals at the same time we will get six setsof wavelet low-frequency coecients Here each set of low-frequency coecient is a two-dimensional signal Referringto equation (1) we can assign a coecient to each two-dimensional signal and obtain the fused two-dimensionalsignal (ie the wavelet low-frequency coecients of thefused image) after linear summationis study explored thefeasibility of applying the spatial lter coecients of CCAfusion and maximum contrast fusion to T-F image fusionwhen fusing six electrode signals by using the T-F imagefusion method at the same time Since the fusion eect ofminimum energy fusion was poor the spatial lter co-ecients of the minimum energy fusion were not used heree spatial lter coecients of the maximum contrast fusionand CCA fusion at the frequency f were obtained byequations (9) and (10) and are set to Vmax and Vcca re-spectively Vmax and Vcca are two vectors with dimensions of6times1 where Vmax(1 1) and Vcca(1 1) represent the rstelements of the vectors Vmax and Vcca Since the T-F imagefusion method was performed on the six electrode signalsequation (17) is transformed into equation (19) where Vcorresponds to the spatial lter coecient Vmax or Vcca erest of the analysis process is the same as that in Section 210e same analysis steps were performed for all 20 targets(7ndash108Hz) e Welch spectrum analysis (the parametersare the same as in Section 32) was performed on the ob-tained signal after T-F image fusion Figures 8(a) and 8(b)show the Welch power spectra plotted using the data ofsubject 4 in the fourth round of the experiment Figure 8(a)shows the power spectrum plotted using spatial lter co-ecients of CCA fusion and Figure 8(b) shows the Welchpower spectrum plotted using spatial lter coecients ofmaximum contrast fusion where f represents the stimulusfrequency and the red circle indicates the amplitude at thestimulus frequency Figures 8(a) and 8(b) show that theamplitudes at the stimulation frequencies are prominente spatial lter coecients of CCA fusion and maximumcontrast fusion are eective for T-F image fusion us theT-F image fusion method proposed in this study focuses onthe fusion of wavelet low-frequency coecients of multiple

0

02

04

06

08

1

12

Acc

urac

y

Subject1 2 3 4 5 6 7 8 9 10

T-F image fusionCCA fusion

Minimum energy fusionMaximum contrast fusion

Figure 6 e accuracy of each subject under four fusion methods

0

02

04

06

08

1

12

Ave

rage

accu

racy

T-F image fusionCCA fusionMinimum energy fusionMaximum contrast fusion

Methods

Figure 7 e average accuracy of ten subjects under four fusionmethods

Computational Intelligence and Neuroscience 11

images (by assigning a coecient to each two-dimensionalsignal and obtaining the fused two-dimensional signal afterlinear summation) e premise of the above test results is

that we know the frequency of the signal to be fused If theonline test method is used (see details in Section 33) thesame online fusion results as CCA fusion and maximum

f = 7 Hz

times104

012

10 20 30 400

f = 72 Hz

times104

012

10 20 30 400

f = 74 Hz

times104

024

10 20 30 400

f = 76 Hz

times104

024

10 20 30 400

f = 78 Hz

times104

024

10 20 30 400

f = 8 Hz

times104

012

10 20 30 400

f = 82 Hz

times104

024

10 20 30 400

f = 84 Hz

times104

05

10

10 20 30 400

f = 86 Hz

times104

024

10 20 30 400

f = 88 Hz

times104

024

10 20 30 400

f = 9 Hz

times104

012

10 20 30 400

f = 92 Hz

times104

024

10 20 30 400

f = 94 Hz

times104

05

10

10 20 30 400

f = 96 Hz

times104

024

10 20 30 400

f = 98 Hz

times104

012

10 20 30 400

f = 10 Hz

times104

012

10 20 30 400

f = 102 Hz

times104

012

10 20 30 400

f = 104 Hz

times104

012

10 20 30 400

f = 106 Hz

times104

024

10 20 30 400

f = 108 Hz

times104

012

10 20 30 400Frequency (Hz)

Am

plitu

de V

2 (H

z)

(a)

f = 7 Hz

0

2

4

10 20 30 400

f = 8 Hz

0

2

4

10 20 30 400

f = 9 Hz

0

2

4

10 20 30 400

f = 10 Hz

0

2

4

10 20 30 400

f = 72 Hz0

2

4

10 20 30 400

f = 82 Hz0

5

10

10 20 30 400

f = 92 Hz0

2

4

10 20 30 400

f = 102 Hz0

2

4

10 20 30 400

f = 74 Hz

0

2

4

10 20 30 400

f = 84 Hz

0

10

20

10 20 30 400

f = 94 Hz

0

10

20

10 20 30 400

f = 104 Hz

0

2

4

10 20 30 400

f = 76 Hz0

5

10 20 30 400

f = 86 Hz0

5

10

10 20 30 400

f = 96 Hz0

5

10 20 30 400

f = 106 Hz0

5

10

10 20 30 400

f = 78 Hz0

5

10 20 30 400

f = 88 Hz0

5

10 20 30 400

f = 98 Hz0

2

4

10 20 30 400

f = 108 Hz0

2

4

10 20 30 400Frequency (Hz)

Am

plitu

de V

2 (H

z)

(b)

Figure 8 e Welch power spectrum plotted using the (a) CCA fusion spatial lter coecients and (b) maximum contrast fusion spatiallter coecients

12 Computational Intelligence and Neuroscience

contrast fusion were obtained T-F image fusion usedonly two electrode signals and the better online fusioneffect than that of the CCA fusion and the maximumcontrast fusion was obtained which shows the effec-tiveness of the proposed method Next we will explorethe fusion method of multiple T-F images (more than twoimages) that is to find suitable spatial filter coefficientsfor the wavelet low-frequency components of multipleT-F images

LLNF LLN1 lowastV(1 1) + LLN2 lowastV(2 1) + LLN3 lowastV(3 1)

+ LLN4 lowastV(4 1) + LLN5 lowastV(5 1)

+ LLN6 lowastV(6 1)

(19)

e parameters affect the fusion effect of the proposedmethod We listed the parameters that need to be selectedand the possible values of the parameters in Section 31 Inthis study the grid search method was used to traverse thecombination of all parameters and the best combination ofparameters was found In the experiment we found thatthe number of frequency bins and the Gaussian windowlength can be fixed to 54 and 57 We recommend that theresearchers determine the parameters according to theparameter selection principles and ranges given in Section31 In this study the SSMVEP active component wasenhanced by fusing multichannel signals into a single-channel signal and then the focused target frequencywas identified by performing spectrum analysis on thefused signals is is beneficial for SSMVEP studies basedon spectrum analysis For example Reference [21] pro-posed a frequency and phase mixed coding method in theSSVEP-based brain-computer interface (BCI) which in-creases the number of BCI coding targets by making onefrequency correspond to multiple different phases In thestudy the FFT analysis of the test signal is required to findthe possible focused target frequency and then calculate thephase value at that frequency e proposed method in thisstudy has a positive significance for accurately finding thefocused target frequency in the spectrum Moreover someEEG feature extraction algorithms for the BCI also requirespectrum analysis of the test signals [22 23] erefore themethod proposed in this study has a potential applicationvalue

5 Conclusion

To explore whether T-F domain analysis can achieve betterfusion effects than time-domain analysis this study pro-posed an SSMVEP enhancement method based on T-Fimage fusion e parameters of the T-F image fusionalgorithm were determined by the grid search method andthe electrode signals used for T-F image fusion were se-lected by bipolar fusion e analysis results show that thekey of the T-F image fusion algorithm is the fusion of thewavelet low-frequency components is study comparedthe enhancement effects of minimum energy fusionmaximum contrast fusion CCA fusion and T-F imagefusion on SSMVEP e experimental results show that the

online performance of the T-F image fusion method isbetter than that of the traditional spatial filtering methodswhich indicates that the proposed method is feasible to fuseSSMVEP in the T-F domain

Data Availability

e analytical data used to support the findings of this studyare included within the article and the raw data are availablefrom the corresponding author upon request

Ethical Approval

e subjects provided informed written consent in accor-dance with the protocol approved by the Institutional Re-view Board of Xirsquoan Jiaotong University

Conflicts of Interest

e authors declare that they have no conflicts of interest

Authorsrsquo Contributions

Wenqiang Yan worked on the algorithm and wrote thepaper Guanghua Xu involved in discussions and gavesuggestions throughout this project Longting Chen andXiaowei Zheng worked on the experimental design Allauthors read and approved the final manuscript

Acknowledgments

is research was supported by the National Natural ScienceFoundation of China (NSFC) (no 51775415) We want tothank the subjects for participating in these experiments

Supplementary Materials

MATLAB codes for T-F image fusion minimum energyfusion maximum contrast fusion and CCA fusion (Sup-plementary Materials)

References

[1] W Yan G Xu J Xie et al ldquoFour novel motion paradigmsbased on steady-state motion visual evoked potentialrdquo IEEETransactions on Biomedical Engineering vol 65 no 8pp 1696ndash1704 2018

[2] J R Wolpaw and D J Mcfarland ldquoMultichannel EEG-basedbrain-computer communicationrdquo Electroencephalographyand Clinical Neurophysiology vol 90 no 6 pp 444ndash4491994

[3] B O Peters G Pfurtscheller and H Flyvbjerg ldquoMiningmulti-channel EEG for its information content an ANN-based method for a brain-computer interfacerdquo Neural Net-works vol 11 no 7-8 pp 1429ndash1433 1998

[4] S Parini L Maggi A C Turconi and G Andreoni ldquoA robustand self-paced BCI system based on a four class SSVEPparadigm algorithms and protocols for a high-transfer-ratedirect brain communicationrdquo Computational Intelligence andNeuroscience vol 2009 pp 1ndash11 2009

[5] O Friman I Volosyak and A Graser ldquoMultiple channeldetection of steady-state visual evoked potentials for brain-

Computational Intelligence and Neuroscience 13

computer interfacesrdquo IEEE Transactions on Biomedical En-gineering vol 54 no 4 pp 742ndash750 2007

[6] Z Lin C Zhang WWu and X Gao ldquoFrequency recognitionbased on canonical correlation analysis for SSVEP-basedBCIsrdquo IEEE Transactions on Biomedical Engineering vol 53- no 12 pp 2610ndash2614 2006

[7] M Middendorf G Mcmillan G Calhoun and K S JonesldquoBrain-computer interfaces based on the steady-state visual-evoked responserdquo IEEE Transactions on Rehabilitation En-gineering vol 8 no 2 pp 211ndash214 2000

[8] G R Muller-Putz R Scherer C Brauneis andG Pfurtscheller ldquoSteady-state visual evoked potential(SSVEP)-based communication impact of harmonic fre-quency componentsrdquo Journal of Neural Engineering vol 2no 4 pp 123ndash130 2005

[9] D J Mcfarland L M Mccane S V David and J R WolpawldquoSpatial filter selection for EEG-based communicationrdquoElectroencephalography and Clinical Neurophysiologyvol 103 no 3 pp 386ndash394 1997

[10] H Li B S Manjunath and S K Mitra ldquoMultisensor imagefusion using the wavelet transformrdquo Graphical Models andImage Processing vol 57 no 3 pp 235ndash245 1995

[11] S Li and B Yang ldquoMultifocus image fusion by combiningcurvelet and wavelet transformrdquo Pattern Recognition Lettersvol 29 no 9 pp 1295ndash1301 2008

[12] B Zhang ldquoStudy on image fusion based on different fusionrules of wavelet transformrdquo Acta Photonica Sinica vol 33pp 173ndash176 2004

[13] C DMeyerMatrix Analysis and Applied Linear Algebra Bookand Solutions Manual Society for Industrial and AppliedMathematics Philadelphia PA USA 2000

[14] K J A Sundar M Jahnavi and K Lakshmisaritha ldquoMulti-sensor image fusion based on empirical wavelet transformrdquo inProceedings of the International Conference on ElectricalElectronics Communication Computer and OptimizationTechniques pp 93ndash97 Mysuru India December 2018

[15] L Tian J Ahmed Q Du B Shankar and S Adnan ldquoMultifocus image fusion using combined median and average filterbased hybrid stationary wavelet transform and principalcomponent analysisrdquo International Journal of AdvancedComputer Science and Applications vol 9 no 6 2018

[16] G He S Xing D Dong and X Zhao ldquoPanchromatic andmulti-spectral image fusion method based on two-step sparserepresentation and wavelet transformrdquo in Asia-Pacific Signaland Information Processing Association Summit and Confer-ence pp 259ndash262 Honolulu HI USA January 2018

[17] Y Shin S Lee M Ahn H Cho S C Jun and H-N LeeldquoNoise robustness analysis of sparse representation basedclassification method for non-stationary EEG signal classifi-cationrdquo Biomedical Signal Processing and Control vol 21pp 8ndash18 2015

[18] M Xu L Chen L Zhang et al ldquoA visual parallel-BCI spellerbased on the time-frequency coding strategyrdquo Journal ofNeural Engineering vol 11 no 2 article 026014 2014

[19] J Xie G Xu A Luo et al ldquoe role of visual noise ininfluencing mental load and fatigue in a steady-state motionvisual evoked potential-based brain-computer interfacerdquoSensors vol 17 no 8 p 1873 2017

[20] K Barbe R Pintelon and J Schoukens ldquoWelch methodrevisited nonparametric power spectrum estimation viacircular overlaprdquo IEEE Transactions on Signal Processingvol 58 no 2 pp 553ndash565 2010

[21] C Jia X Gao B Hong and S Gao ldquoFrequency and phasemixed coding in SSVEP-based brain-computer interfacerdquo

IEEE transactions on bio-medical engineering vol 58 no 1pp 200ndash206 2011

[22] C Yang H Wu Z Li W He N Wang and C-Y Su ldquoMindcontrol of a robotic arm with visual fusion technologyrdquo IEEETransactions on Industrial Informatics vol 14 no 9pp 3822ndash3830 2017

[23] J Castillo S Muller E Caicedo and T Bastos ldquoFeatureextraction techniques based on power spectrum for a SSVEP-BCIrdquo in Proceedings of the IEEE 23rd International Sympo-sium on Industrial Electronics (ISIE) Istanbul Turkey June2014

14 Computational Intelligence and Neuroscience

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 3: Steady-StateMotionVisualEvokedPotential(SSMVEP) … · 2019. 2. 14. · wasconductedbyMATLAB’stfrstftfunction.Afterfusing two T-F images, the T-F signal was inversely transformed

yi 1113944n

j1aij sin 2jπft + ϕij1113872 1113873 + ei(t) (2)

where n represents the number of harmonics and aij and ϕij

represent the amplitude and phase of the jth harmoniccomponent respectively e model decomposes the signalinto the sum of the SSMVEP induced by the visual stimulusand the noise ei(t) generated by the electromyogram (EMG)electrooculogram (EOG) and other components usequation (2) can be expressed as

yi Si + ei (3)

where

Si aiX

X X1 X2 Xn1113858 1113859(4)

e submatrix Xn is composed of sin(2nπft) andcos(2nπft) and ai represents the amplitude of SSMVEP at itsstimulus frequency and harmonics We assume that thesignal matrix recorded by N electrodes is Y [Y1 Y2 YN]where each column corresponds to an electrode channel Ycan be projected to the SSMVEP space through a projectionmatrix Q namely

S QY (5)

Reference [13] gives the solution of Q as

Q X XTX1113872 1113873minus1

XT

(6)

us the noise signal can be expressed as

Yprime YminusQY (7)

e minimum energy fusion obtains the spatial filtercoefficients W by minimizing the noise energy namely

minW

YprimeW

2

minW

WTYprimeYprimeW (8)

e above minimization problem can be obtained bydecomposing the eigenvalues of the positive definite matrixYprime

TYprime After decomposition N eigenvalues and corre-

sponding eigenvectors can be obtained e spatial filtercoefficient W is the eigenvector corresponding to thesmallest eigenvalue (MATLAB code for minimum energyfusion and test data is provided in Supplementary Materials(available here))

262 Maximum Contrast Fusion e goal of maximumcontrast fusion is to maximize SSMVEP energy whileminimizing noise energy e SSMVEP energy can be ap-proximated as WTYTYW erefore the maximum contrastfusion can be achieved by the following maximizingequation

maxW

YW2

YprimeW

2 max

W

WTYTYW

WTYprimeTYprimeW (9)

Here Y and Yprime are the same as in Section 261 Equation (9)can be solved by generalized eigenvalue decomposition of

YTY and YprimeTYprime e spatial filter coefficient W is the ei-

genvector corresponding to the largest eigenvalue (MAT-LAB code for maximum contrast fusion and test data isprovided in Supplementary Materials (available here))

263 Canonical Correlation Analysis Fusion CCA fusion isused to study the linear relationship between two groups ofmultidimensional variables Using two sets of signals Y andX the goal is to find two linear projection vectors wY and wX

so that the two groups of linear combination signals wTYY

and wTXX have the largest correlation coefficients

maxwYwX

p E wT

YYXTwX( 1113857

E wTYYYTwY( 1113857

1113969

E wTXXXTwX( 1113857

(10)

e reference signals were constructed at the stimulationfrequency fd

X

cos 2πfdt( 1113857

sin 2πfdt( 1113857

cos 2kπfdt( 1113857

sin 2kπfdt( 1113857

⎧⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎨

⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎩

⎫⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎬

⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎭

t 1fs

l

fs

(11)

where k is the number of harmonics which is dependent onthe number of frequency harmonics existing in SSMVEP fsis the sampling rate and l represents the number of samplepoints e optimization problem of equation (10) can betransformed into the eigenvalue decomposition problemand the spatial filter coefficient W is the eigenvector cor-responding to the largest eigenvalue (MATLAB code forCCA fusion and test data is provided in SupplementaryMaterials (available here))

27 Two-Dimensional T-F Image Representation and Re-construction of EEG Signals e T-F analysis can transformthe signal from the time domain to the T-F domain andthen the T-F domain signal can be regarded as an image foranalysis In this study T-F transform was performed on thetime-domain EEG signal using STFT Let h(t) be a timewindow function with center at τ a height of 1 and a finitewidth e portion of the signal x(t) observed by h(tminus τ) isx(t)h(t minus τ) e STFT is generated by the fast Fouriertransform (FFT) of the windowed signal x(t)h(t minus τ)

STFTx(τ f) 1113946+infin

minusinfinx(t)h(tminus τ)e

minusj2πftdt (12)

Equation (12) can map the signal x(t) onto the two-dimensional T-F plane (τ f) and f can be regarded as thefrequency in the STFT After fusing multiple EEG T-Fimages the fused two-dimensional image signals need to betransformed into one-dimensional time-domain signals forthe subsequent analysis e STFT has an inverse transformthat transforms the T-F domain signal into a time-domainsignal

Computational Intelligence and Neuroscience 3

x(t) 1113946+infin

minusinfin1113946

+infin

minusinfinSTFT(τ f)h(tminus τ)e

j2πftdf dτ (13)

In this study the T-F transform of the SSMVEP signalwas conducted by MATLABrsquos tfrstft function After fusingtwo T-F images the T-F signal was inversely transformedinto a one-dimensional time-domain signal by MATLABrsquostfristft function

28 T-F Image Fusion Based on Two-Dimensional MultiscaleWavelet Transform Among the many image fusion tech-nologies two-dimensional multiscale wavelet transform-based image fusion methods have become a research hot-spot [14ndash16] Let f(x y) denote a two-dimensional signaland x and y are its abscissa and ordinate respectively Letψ(x y) denote a two-dimensional basic wavelet andψab1 b2

(x y) denote the scale expansion and two-dimensional displacement of ψ(x y)

ψab1 b2(x y)

1aψ

xminus b1

ayminus b2

a1113888 1113889 (14)

en the two-dimensional continuous wavelet trans-form is

WTf a b1 b2( 1113857 1a

1113946 1113946 f(x y)ψxminus b1

ayminus b2

a1113888 1113889dx dy

(15)

e factor 1a in equation (15) is a normalization factorthat has been introduced to ensure that the energy remainsunchanged before and after wavelet expansion e two-dimensional multiscale wavelet decomposition and re-construction process is shown in Figure 1 e de-composition process can be described as follows First one-dimensional discrete wavelet decomposition is performedon each line of the image which obtains the low-frequencycomponent L and the high-frequency component H of theoriginal image in the horizontal direction e low-frequency component and high-frequency component arethe low-frequency wavelet coefficients and high-frequencywavelet coefficients obtained after wavelet decompositionrespectively en one-dimensional discrete wavelet de-composition is performed on each column of the trans-formed data to obtain the low-frequency components LL inthe horizontal and vertical directions the high-frequencycomponents LH in the horizontal and vertical directions thelow-frequency components HL in the horizontal and verticaldirections and the high-frequency component HH in thevertical direction e reconstruction process can be de-scribed as follows Firstly one-dimensional discrete waveletreconstruction is performed on each column of the trans-form result en one-dimensional discrete wavelet re-construction is performed on each row of the transformeddata to obtain the reconstructed image

e T-F image obtained by the STFT is decomposed intoN layers by the two-dimensional multiscale wavelet de-composition as shown in Figure 1 and 3N+ 1 frequencycomponents are obtained In this study the LLN

decomposed from the highest level is defined as the low-frequency component and the HLM LHM and HHM (M 12 N) decomposed from each level are defined as high-frequency components e active components of the EEGare mainly contained in the low-frequency component efusion process of two T-F images based on two-dimensionalmultiscale wavelet transform is shown in Figure 2 Firstlythe T-F images 1 and 2 are decomposed by two-dimensionalwavelet decomposition and then the corresponding low-frequency components and high-frequency components arefused according to the corresponding fusion rules Finallytwo-dimensional wavelet reconstruction is performed toobtain the fused image by fusing low-frequency componentsand high-frequency components In this study the two-dimensional wavelet decomposition was performed usingthe MATLABrsquos wavedec2 function and the two-dimensionalwavelet reconstruction was performed using the MATLABrsquoswaverec2 function

29 Image Mean Filtering For the current pixel to beprocessed a template is selected which is composed ofseveral pixels adjacent to the current pixel e method ofreplacing the value of the original pixel with the mean of thetemplate is called mean filtering Defining f(x y) as thepixel value at coordinates (x y) the current pixel g(x y)

can be calculated by

g(x y) 1D

1113944

D

i1f xi yi( 1113857 (16)

where D represents the square of the template size

210 Summary of the ProposedMethod for EEG EnhancementBased on T-F Image Fusion Here the T-F image fusionmethod of the electrode signals x1(t) and x2(t) is introducedand the fusion process is shown in Figure 3

(1) STFT is performed on the two electrode signals x1(t)and x2(t) to obtain T-F images 1 and 2

(2) e low-frequency components LLN1 and LLN2 andthe high-frequency components (HLM1 LHM1 andHHM1) and (HLM2 LHM2 and HHM2) (M 1 2 N) of two T-F images 1 and 2 are obtained byN-layertwo-dimensional multiscale wavelet decomposition

(3) e low-frequency components LLN1 and LLN2 andthe high-frequency components (HLM1 LHM1 andHHM1) and (HLM2 LHM2 and HHM2) (M 1 2 N) of each decomposition layer are fused accordingto the following fusion rules

(a) e low-frequency components of the T-F im-ages 1 and 2 are subtracted to obtain the low-frequency components of the fused image F

LLNF LLN1 minus LLN2 (17)

e low-frequency components represent the activecomponents of the SSMVEP and the subtraction of the

4 Computational Intelligence and Neuroscience

T-F image

Row decomposition

Rowreconstruction

L H

Columndecomposition

Columnreconstruction LH1

LL1 HL1

HH1

Secondarydecomposition

Secondaryreconstruction LH1 HH1

HL1LL2

LH2

HL2

HH2

Figure 1 e process of two-dimensional discrete wavelet decomposition and reconstruction

T-F image 1 T-F image 2

Wavelet decomposition Wavelet decomposition

Low-frequency component

Fusion principle 1 Fusion principle 2

Two-dimensional wavelet reconstruction

Fusion image

High-frequency component

Low-frequency component

High-frequency component

Figure 2 Image fusion process based on two-dimensional wavelet transform

Raw signal 1 Raw signal 2

STFT STFT

Time (s) Time (s)

Freq

uenc

y (H

z)

Freq

uenc

y (H

z)

2D wavelet decomposition

Low-frequency component fusion

High-frequency component fusion

2D wavelet reconstruction

Time (s)

Freq

uenc

y (H

z)

Mean filtering

Time (s)

Freq

uenc

y (H

z)

ISTFT

Fused signal

Figure 3 e fusion process of two-electrode signal T-F images

Computational Intelligence and Neuroscience 5

low-frequency components can remove the commonnoise in the electrode signals

(b) e magnitudes of the high-frequency components(HLM1 LHM1 andHHM1) and (HLM2 LHM2 andHHM2) (M 1 2 N) of each decomposition

layer are obtained for the T-F images 1 and 2 ehigh-frequency component with a large value isused as the high-frequency component of the fusedimage F

HLiMF

HLiM1 real HLi

M11113872 1113873gt real HLiM21113872 1113873

HLiM2 real HLi

M11113872 1113873lt real HLiM21113872 1113873

⎧⎪⎨

⎪⎩

LHiMF

LHiM1 real LHi

M11113872 1113873gt real LHiM21113872 1113873

LHiM2 real LHi

M11113872 1113873lt real LHiM21113872 1113873

⎧⎪⎨

⎪⎩

HHiMF

HHiM1 real HHi

M11113872 1113873gt real HHiM21113872 1113873

HHiM2 real HHi

M11113872 1113873lt real HHiM21113872 1113873

⎧⎪⎨

⎪⎩

⎧⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎨

⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎩

M 1 2 N (18)

e two-dimensional signal after the STFT is acomplex signal the two-dimensional signal aftertwo-dimensional wavelet decomposition is still acomplex signal e value used for the comparison isthe real part of the complex signal Here i representsthe number of frequency components of the Mdecomposition layer (ie the number of waveletcoefficients)

(4) According to both low-frequency components LLNFand high-frequency components (HLMF LHMF andHHMF) (M 1 2 N) of the fused image F thefused image F is generated by two-dimensionalwavelet reconstruction

(5) Mean filtering is performed on the fused image F(6) An ISTFT is performed on the mean filtered image to

generate a fused time-domain signal (MATLAB codefor T-F image fusion and test data is provided inSupplementary Materials (available here))

3 Results

31 Parameter Selection

311 STFT Parameters In this study the EEG time-domainsignals were subjected to STFT and ISTFT using theMATLAB library functions tfrstft and tfristfte parametersthat need to be set here are the number of frequency bins andthe frequency smoothing window If the number of fre-quency bins is too large the calculation speed of the entireprocess will be low In this study the value of this parametercan be set to [32 54 76 98120] Since the Fourier transform ofthe Gaussian function is also a Gaussian function thewindow function of the optimal time localization of theSTFT is a Gaussian function e EEG is a nonstationarysignal [17] and requires a high time resolution of the windowfunction that is the window length should not be too longIn this paper a Gaussian window was selected and the valueof the window length can be set to [37 47 57 67 77 87 97]

312 Two-Dimensional Multiscale Wavelet TransformParameters In this study the MATLAB library functionswavedec2 and waverec2 were used to perform two-dimensional multiscale wavelet decomposition and re-construction on T-F images Here the number of layers ofwavelet decomposition and the wavelet basis functions needto be set e sampling frequency of the EEG acquisitiondevice used in this paper was 1200Hz and the stimulationfrequency of the SSMVEP was 7ndash108Hz Considering thatthe SSMVEP may contain the second harmonic of thestimulation frequency this study sets the number of layers ofwavelet decomposition to 4 e dbN wavelet has good reg-ularity and was chosen as the wavelet basis function evanishing momentN of the wavelet function can be set to [2 35 7 10 13]

313 Mean Filtering Parameters Here the template size ofthemean filtering needs to be set In this study the value of thetemplate size can be set to [2 4 6 8 10 12 14 16 18 20 22 24 26]

314 Low-Frequency Component of the Fused Image Fe values of LLNF taking LLN1minusLLN2 or LLN2minusLLN1 willaffect the experimental results and the values of LLNF needto be determined to optimize the final fusion effect (seedetails in Section 210 (3))

315 Parameter Selection e T-F image fusion methodproposed in this study requires the setting of the followingfive parameters the number of frequency bins frequencysmoothing window vanishing moments of dbN waveletfunctions mean filter template size and the value of LLNFIn this study the grid search method was used to traverse thecombination of all parameters to find the best combinationof parameters that can enhance the SSMVEP active com-ponents In this study six rounds of experiments wereconducted e first three rounds of experimental data wereused to select the optimal combination of parameters for T-F

6 Computational Intelligence and Neuroscience

image fusion According to the selected combination of pa-rameters the last three rounds of experimental data were usedto compare the enhancement effect of SSMVEP active com-ponents by the method of minimum energy fusion maximumcontrast fusion CCA fusion and T-F image fusion edisadvantage of grid search is that the amount of calculation islarge however considering the difficulty of superparameterselection grid search is still a more reliable method

32 Electrode Signal Selection for T-F Image Fusion e T-Fimage fusion method analyzes two electrode signals ustwo suitable electrodes need to be selected from multipleelectrodes e bipolar fusion signal is obtained by sub-tracting the two original EEG signals In equation (17) wesubtracted the wavelet low-frequency components of the twoT-F images to obtain the wavelet low-frequency componentsof the fused image e subtraction of the low-frequencycomponents removes the common noise in the electrodesignals which is similar to the principle of bipolar fusionerefore the two electrode signals with the best bipolarfusion effect were used in the T-F image fusion For SSVEPor SSMVEP stimulation the Oz electrode usually has thestrongest response [18] and is most commonly used toanalyze SSVEP or SSMVEP [19] e Oz electrode was usedas one of the analysis electrodes for bipolar fusion In thisstudy the Oz electrode was fused with the other five elec-trode signals so that the best combination of electrodes canbe selected When bipolar fusion is used the Oz electrode asa reducing electrode or a reduced electrode will not affect theexperimental results For example Oz-POz and POz-Ozhave the same fusion effect In this study the Oz elec-trode was used as the reduced electrode Twenty trials persubject were used for electrode selection e Welch powerspectrum analysis [20] was performed on the obtainedsignals of bipolar fusion for each subject e Gaussianwindow was used and the length of the window was 5 s eoverlapping was 256 and the number of discrete Fouriertransform (DFT) points was 6000 for the Welch periodo-gram 20 focused targets correspond to 20 stimuli fre-quencies and the frequencies corresponding to the highestpower spectrum amplitudes at the 20 frequencies weredetermined as the focused target frequency e high rec-ognition accuracy indicates that the amplitude of the powerspectrum at the stimulation frequency is prominent and itindicates the effectiveness of the fusion method for theenhancement of the SSMVEP active component Table 1shows the recognition accuracy of all 10 subjects under thebipolar fusion e Oz electrode was fused with differentelectrodes and obtained different fusion effects Moreoverdue to the differences among individuals the electrodecombination with the highest recognition accuracy persubject was also different e asterisk (lowast) marks the bipolarfusion electrode group with the highest recognition accuracyof each subject and the corresponding electrode combi-nation was used in T-F image fusion

33 Comparison of Enhancement Effects of Minimum EnergyFusion Maximum Contrast Fusion CCA Fusion and T-F

Image Fusion on SSMVEP Active Components T-F imagefusion used two channel signals and CCA fusion maximumcontrast fusion and minimum energy fusion used all sixchannel signals In this study the accuracy was used as theevaluation standard of the fusion effect e high onlineaccuracy indicates that the amplitude at stimulus frequencyis prominent which shows the effectiveness of the fusionmethod For T-F image fusion the Welch power spectrumanalysis (the parameters are the same as in Section 32) wasperformed on the obtained signal after the T-F image fusionand the frequency with the highest amplitude at twentystimulus frequencies was identified as the focused targetfrequency e minimum energy fusion maximum contrastfusion and CCA fusion require prior knowledge of thefrequency of the signal to be fused However the frequencyof the signal to be fused cannot be determined because theuserrsquos focused target cannot be determined beforehand Forthe current tested signal the minimum energy fusionmaximum contrast fusion and CCA fusion need to firstperform the fusion analysis at all stimulus frequencies andthen the frequency with the highest amplitude was iden-tified as the focused target frequency Take CCA fusion as anexample and assume that the current tested signal is Y First20 template signals are set according to equation (11) andthen 20 spatial filter coefficients are obtained according toequation (10) e tested signal is fused with 20 spatial filtercoefficients respectively e obtained 20 sets of vectors areseparately analyzed by the Welch power spectrum (theparameters are the same as in Section 32) to obtain theamplitude at the corresponding frequency Finally thefrequency with the highest amplitude is identified as thefocused target frequency e first three rounds of experi-mental data were used to select the parameters of the T-Fimage fusion (see details in Section 31) and the last threerounds of the experimental data were used to test the onlinefusion effects under the four methods Figures 4(a)ndash4(d)show the analysis results plotted using the data of subject 4 inthe fourth round of the experiment Figure 4(a) shows thepower spectrum results of the 20 targets under T-F imagefusion Figure 4(b) shows the power spectrum results of 20targets under CCA fusion Figure 4(c) shows the powerspectrum results of 20 targets under minimum energy

Table 1 Accuracies of 10 subjects under bipolar fusion

SubjectsBipolar fusion

Oz-PO7 Oz-PO8 Oz-PO3 Oz-POz Oz-PO4S1 02 07 045 015 095lowastS2 025 0 03lowast 01 02S3 095lowast 085 085 04 08S4 08lowast 01 07 035 035S5 02lowast 015 015 015 01S6 095lowast 03 095lowast 085 07S7 02 015 025 06lowast 055S8 01 005 025 03 09lowastS9 03 015 0 035lowast 02S10 035 035 05 08lowast 04lowaste bipolar fusion electrode group with the highest recognition accuracyof each subject

Computational Intelligence and Neuroscience 7

times10ndash7

0

05

1f = 72 Hz

times10ndash7

0

1

2f = 76 Hz

times10ndash7

0

1

2f = 78 Hz

0

05

1times10ndash7 f = 7 Hz

times10ndash7

0

1

2f = 74 Hz

times10ndash7

0

05

1f = 92 Hz

times10ndash7

0

2

4f = 96 Hz

times10ndash7

0

1

2f = 98 Hz

times10ndash7

0

1

2f = 9 Hz

times10ndash7

0

2

4f = 94 Hz

times10ndash7

0

05

1f = 102 Hz

times10ndash7

0

1

2f = 106 Hz

times10ndash7

0

1

2f = 108 Hz

times10ndash7

0

1

2f = 10 Hz

times10ndash7

0

1

2f = 104 Hz

times10ndash7

0

2

4f = 82 Hz

times10ndash7

0

2

4f = 86 Hz

times10ndash7

0

2

4f = 88 Hz

times10ndash7

0

1

2f = 8 Hz

times10ndash7

0

05

1 f = 84 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(a)

0

2

4times10ndash13 f = 7 Hz

times10ndash13

0

5f = 72 Hz

times10ndash12

0

05

1f = 74 Hz

times10ndash12

0

05

1f = 76 Hz

times10ndash13

0

5f = 78 Hz

times10ndash13

0

2

4f = 9 Hz

times10ndash13

0

5f = 92 Hz

times10ndash12

0

1

2f = 94 Hz

times10ndash12

0

05

1f = 96 Hz

times10ndash13

0

2

4f = 98 Hz

times10ndash13

0

2

4f = 10 Hz

times10ndash12

0

05

1f = 102 Hz

times10ndash12

0

05

1f = 104 Hz

times10ndash12

0

05

1f = 106 Hz

times10ndash13

0

5f = 108 Hz

times10ndash13

0

2

4f = 8 Hz

times10ndash12

0

05

1f = 82 Hz

times10ndash12

0

1

2f = 84 Hz

times10ndash12

0

05

1f = 86 Hz

times10ndash13

0

5f = 88 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(b)

Figure 4 Continued

8 Computational Intelligence and Neuroscience

times10ndash13

0

05

1f = 7 Hz

times10ndash13

0

1

2f = 74 Hz

times10ndash13

0

05

1f = 72 Hz

times10ndash13

0

1

2f = 76 Hz

times10ndash14

0

2

4f = 78 Hz

times10ndash13

0

05

1f = 9 Hz

times10ndash13

0

1

2f = 94 Hz

times10ndash13

0

1

2f = 92 Hz

times10ndash13

0

05

1f = 96 Hz

times10ndash13

0

05

1f = 98 Hz

times10ndash13

0

05

1f = 10 Hz

times10ndash13

0

05

1f = 104 Hz

times10ndash13

0

05

1f = 102 Hz

times10ndash13

0

05

1f = 106 Hz

times10ndash13

0

1

2f = 108 Hz

times10ndash13

0

05

1f = 8 Hz

times10ndash13

0

1

2f = 84 Hz

times10ndash13

0

1

2f = 82 Hz

times10ndash13

0

05

1f = 86 Hz

times10ndash13

0

05

1f = 88 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(c)

times10ndash4

0

05

1f = 72 Hz

times10ndash4

0

1

2f = 74 Hz

times10ndash4

0

1

2f = 76 Hz

times10ndash4

0

05

1f = 78 Hz

times10ndash4

0

05

1f = 7 Hz

times10ndash4

0

05

1f = 92 Hz

times10ndash4

0

2

4f = 94 Hz

times10ndash4

0

1

2f = 96 Hz

times10ndash5

0

5f = 98 Hz

times10ndash5

0

2

4f = 9 Hz

times10ndash4

0

1

2f = 102 Hz

times10ndash4

0

05

1f = 104 Hz

times10ndash4

0

1

2f = 106 Hz

times10ndash4

0

05

1f = 108 Hz

times10ndash4

0

05

1f = 10 Hz

times10ndash4

0

1

2f = 82 Hz

times10ndash4

0

2

4f = 84 Hz

times10ndash4

0

1

2f = 86 Hz

times10ndash4

0

05

1f = 88 Hz

times10ndash4

0

05

1f = 8 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(d)

Figure 4 e power spectrum results of 20 targets under (a) T-F image fusion (b) CCA fusion (c) minimum energy fusion and (d)maximum contrast fusion (the discrete sequence frequency values from left to right are 7 8 910 72 82 92 102 74 84 94 104 76 86 96 10678 88 98 108 Hz respectively)

Computational Intelligence and Neuroscience 9

fusion and Figure 4(d) shows the power spectrum results of20 targets under maximum contrast fusion In the stalk plotthe discrete sequence frequency values from left to right are7 8 9 10 72 82 92 102 74 84 94 104 76 86 96 106 78 8898 108 respectively where f represents the stimulationfrequency of the corresponding target and the green dotmarks the amplitude of the corresponding target stimulusfrequency e red flag f represents the target was mis-identified and the blue flag f represents the target wascorrectly identified Figures 4(a)ndash4(d) show that the T-Fimage fusion method has the best online performance andthe minimum energy fusion method has the worst onlineperformance Maximum contrast fusion and CCA fusionhave similar fusion effects

In online analysis the power spectrum amplitudes of thetest signal at all stimulus frequencies were calculated andthen the frequency corresponding to the maximum am-plitude was identified as the focused target frequency Errorrecognition occurs when the amplitude at the focused targetfrequency is lower than the amplitudes at the nonfocusedtarget frequencieserefore the power spectrum amplitudeat the focused target frequency can be regarded as the activecomponent of the signal and the power spectrum ampli-tudes at the remaining nonfocused target frequencies can beregarded as noise e SNR in this study was defined as theratio of the power spectrum amplitude at the focused targetfrequency to the mean of the power spectrum amplitudes atthe remaining nonfocused target frequencies e SNRs often subjects at all stimulus frequencies were superimposedand averaged and the experimental results are shown inFigure 5 It can be seen from Figure 5 that the T-F imagefusion method obtained the highest SNR which indicatesthat the T-F image fusion method proposed in this study caneffectively enhance the SSMVEP SNR

e online accuracies of each subject in the three roundsof experiments were superimposed and averaged e ex-perimental results are shown in Figure 6 It can be seen thatmost subjects obtained highest online accuracy under theT-F image fusion and some subjects obtained highest onlineaccuracy under CCA fusion and maximum contrast fusionOnly subject 9 obtained the highest online accuracy underthe minimum energy fusion e online accuracies of all thesubjects were superimposed and averaged e experimentalresults are shown in Figure 7 e experimental results showthat the online performance of the T-F image fusion methodis better e online accuracy of the T-F image fusionmethod is 617 606 and 3050 higher than that of theCCA fusion maximum contrast fusion and the minimumenergy fusion respectively e paired t-test shows signifi-cant differences between the accuracies of T-F image fusionand minimum energy fusion (p 00174) Online accuracyanalysis results show that the proposedmethod is better thanthe traditional time-domain fusion method

4 Discussion

e SSMVEP collected from the scalp contains a lot of noiseand requires an effective signal processing method to en-hance the active components of SSMVEP Spatial filtering

methods utilize EEG information of multiple channels andexert positive significance to enhance the active componentsof the SSMVEP At present the commonly used spatialfiltering methods include average fusion native fusion bi-polar fusion Laplacian fusion CAR fusion CCA fusionminimum energy fusion and maximum contrast fusionefirst five spatial filtering methods can only process thepreselected electrode signals Minimum energy fusionmaximum contrast fusion and CCA fusion improve theabove problem and they can be used to fuse any number ofelectrode signals with better fusion effects Minimum energyfusion maximum contrast fusion and CCA fusion fuseSSMVEP in the time domainis study first put forward theidea of fusing EEG in the T-F domain and proposed a T-Fimage fusion method We compared the enhancement ef-fects of minimum energy fusion maximum contrast fusionCCA fusion and T-F image fusion on SSMVEP It is verifiedby the test data that the proposed method is better than thetraditional time-domain fusion method

T-F analysis is a good choice for the transformation ofthe EEG data from one dimension to two dimension EEGtransformed into the T-F domain can be used as an image foranalysis e STFT has an inverse transform which caninverse transform the fused T-F image into a time-domainsignal erefore the STFT was used to transform the EEGfrom the time domain to the T-F domain e two-dimensional wavelet transform can fuse two images intoone to achieve the purpose of multichannel EEG fusionAfter two-dimensional wavelet decomposition the low-frequency components of the two subimages were sub-tracted to obtain the low-frequency components of the fusedimage and the maximum value of the high-frequencycomponents of the two subimages was taken as the high-frequency component of the fused imagee low-frequencycomponents after wavelet decomposition represent the ac-tive components of the SSMVEPe subtraction of the low-

0Methods

5

10

15

Ave

rage

SN

R

T-F image fusionCCA fusionMinimum energy fusionMaximum contrast fusion

Figure 5 e average SNR of ten subjects at all stimulus fre-quencies under four methods

10 Computational Intelligence and Neuroscience

frequency components can remove the common noise in theelectrode signals is is similar to bipolar fusion whichsubtracts two electrode signals in the time domain We alsocompared the enhancement eects of bipolar fusion and theproposed method on SSMVEP active components eelectrodes used for bipolar fusion are the same as those usedin T-F image fusion e results show that the proposedmethod is better than the bipolar fusion (the online accuracyof bipolar fusion is 7152) In this study the fused imagewas ltered by mean lter to further achieve low-pass l-tering We also tried to remove the mean ltering step aftercompleting the T-F image fusion e results show thatremoving themean ltering step reduced the fusion eect Inthis study the mean ltering step was performed after T-Fimage fusion We tested the fusion eect when the meanltering step was performed before T-F image fusion(performing mean ltering step on the subimage afterSTFT) For some subjects it was better to perform the meanltering step after T-F image fusion erefore we recom-mend that researchers follow the analysis steps in this study

e wavelet low-frequency coecients (see details inequation (17)) of the fused image have an important in-uence on the fusion eect If we perform T-F image fusionon six electrode signals at the same time we will get six setsof wavelet low-frequency coecients Here each set of low-frequency coecient is a two-dimensional signal Referringto equation (1) we can assign a coecient to each two-dimensional signal and obtain the fused two-dimensionalsignal (ie the wavelet low-frequency coecients of thefused image) after linear summationis study explored thefeasibility of applying the spatial lter coecients of CCAfusion and maximum contrast fusion to T-F image fusionwhen fusing six electrode signals by using the T-F imagefusion method at the same time Since the fusion eect ofminimum energy fusion was poor the spatial lter co-ecients of the minimum energy fusion were not used heree spatial lter coecients of the maximum contrast fusionand CCA fusion at the frequency f were obtained byequations (9) and (10) and are set to Vmax and Vcca re-spectively Vmax and Vcca are two vectors with dimensions of6times1 where Vmax(1 1) and Vcca(1 1) represent the rstelements of the vectors Vmax and Vcca Since the T-F imagefusion method was performed on the six electrode signalsequation (17) is transformed into equation (19) where Vcorresponds to the spatial lter coecient Vmax or Vcca erest of the analysis process is the same as that in Section 210e same analysis steps were performed for all 20 targets(7ndash108Hz) e Welch spectrum analysis (the parametersare the same as in Section 32) was performed on the ob-tained signal after T-F image fusion Figures 8(a) and 8(b)show the Welch power spectra plotted using the data ofsubject 4 in the fourth round of the experiment Figure 8(a)shows the power spectrum plotted using spatial lter co-ecients of CCA fusion and Figure 8(b) shows the Welchpower spectrum plotted using spatial lter coecients ofmaximum contrast fusion where f represents the stimulusfrequency and the red circle indicates the amplitude at thestimulus frequency Figures 8(a) and 8(b) show that theamplitudes at the stimulation frequencies are prominente spatial lter coecients of CCA fusion and maximumcontrast fusion are eective for T-F image fusion us theT-F image fusion method proposed in this study focuses onthe fusion of wavelet low-frequency coecients of multiple

0

02

04

06

08

1

12

Acc

urac

y

Subject1 2 3 4 5 6 7 8 9 10

T-F image fusionCCA fusion

Minimum energy fusionMaximum contrast fusion

Figure 6 e accuracy of each subject under four fusion methods

0

02

04

06

08

1

12

Ave

rage

accu

racy

T-F image fusionCCA fusionMinimum energy fusionMaximum contrast fusion

Methods

Figure 7 e average accuracy of ten subjects under four fusionmethods

Computational Intelligence and Neuroscience 11

images (by assigning a coecient to each two-dimensionalsignal and obtaining the fused two-dimensional signal afterlinear summation) e premise of the above test results is

that we know the frequency of the signal to be fused If theonline test method is used (see details in Section 33) thesame online fusion results as CCA fusion and maximum

f = 7 Hz

times104

012

10 20 30 400

f = 72 Hz

times104

012

10 20 30 400

f = 74 Hz

times104

024

10 20 30 400

f = 76 Hz

times104

024

10 20 30 400

f = 78 Hz

times104

024

10 20 30 400

f = 8 Hz

times104

012

10 20 30 400

f = 82 Hz

times104

024

10 20 30 400

f = 84 Hz

times104

05

10

10 20 30 400

f = 86 Hz

times104

024

10 20 30 400

f = 88 Hz

times104

024

10 20 30 400

f = 9 Hz

times104

012

10 20 30 400

f = 92 Hz

times104

024

10 20 30 400

f = 94 Hz

times104

05

10

10 20 30 400

f = 96 Hz

times104

024

10 20 30 400

f = 98 Hz

times104

012

10 20 30 400

f = 10 Hz

times104

012

10 20 30 400

f = 102 Hz

times104

012

10 20 30 400

f = 104 Hz

times104

012

10 20 30 400

f = 106 Hz

times104

024

10 20 30 400

f = 108 Hz

times104

012

10 20 30 400Frequency (Hz)

Am

plitu

de V

2 (H

z)

(a)

f = 7 Hz

0

2

4

10 20 30 400

f = 8 Hz

0

2

4

10 20 30 400

f = 9 Hz

0

2

4

10 20 30 400

f = 10 Hz

0

2

4

10 20 30 400

f = 72 Hz0

2

4

10 20 30 400

f = 82 Hz0

5

10

10 20 30 400

f = 92 Hz0

2

4

10 20 30 400

f = 102 Hz0

2

4

10 20 30 400

f = 74 Hz

0

2

4

10 20 30 400

f = 84 Hz

0

10

20

10 20 30 400

f = 94 Hz

0

10

20

10 20 30 400

f = 104 Hz

0

2

4

10 20 30 400

f = 76 Hz0

5

10 20 30 400

f = 86 Hz0

5

10

10 20 30 400

f = 96 Hz0

5

10 20 30 400

f = 106 Hz0

5

10

10 20 30 400

f = 78 Hz0

5

10 20 30 400

f = 88 Hz0

5

10 20 30 400

f = 98 Hz0

2

4

10 20 30 400

f = 108 Hz0

2

4

10 20 30 400Frequency (Hz)

Am

plitu

de V

2 (H

z)

(b)

Figure 8 e Welch power spectrum plotted using the (a) CCA fusion spatial lter coecients and (b) maximum contrast fusion spatiallter coecients

12 Computational Intelligence and Neuroscience

contrast fusion were obtained T-F image fusion usedonly two electrode signals and the better online fusioneffect than that of the CCA fusion and the maximumcontrast fusion was obtained which shows the effec-tiveness of the proposed method Next we will explorethe fusion method of multiple T-F images (more than twoimages) that is to find suitable spatial filter coefficientsfor the wavelet low-frequency components of multipleT-F images

LLNF LLN1 lowastV(1 1) + LLN2 lowastV(2 1) + LLN3 lowastV(3 1)

+ LLN4 lowastV(4 1) + LLN5 lowastV(5 1)

+ LLN6 lowastV(6 1)

(19)

e parameters affect the fusion effect of the proposedmethod We listed the parameters that need to be selectedand the possible values of the parameters in Section 31 Inthis study the grid search method was used to traverse thecombination of all parameters and the best combination ofparameters was found In the experiment we found thatthe number of frequency bins and the Gaussian windowlength can be fixed to 54 and 57 We recommend that theresearchers determine the parameters according to theparameter selection principles and ranges given in Section31 In this study the SSMVEP active component wasenhanced by fusing multichannel signals into a single-channel signal and then the focused target frequencywas identified by performing spectrum analysis on thefused signals is is beneficial for SSMVEP studies basedon spectrum analysis For example Reference [21] pro-posed a frequency and phase mixed coding method in theSSVEP-based brain-computer interface (BCI) which in-creases the number of BCI coding targets by making onefrequency correspond to multiple different phases In thestudy the FFT analysis of the test signal is required to findthe possible focused target frequency and then calculate thephase value at that frequency e proposed method in thisstudy has a positive significance for accurately finding thefocused target frequency in the spectrum Moreover someEEG feature extraction algorithms for the BCI also requirespectrum analysis of the test signals [22 23] erefore themethod proposed in this study has a potential applicationvalue

5 Conclusion

To explore whether T-F domain analysis can achieve betterfusion effects than time-domain analysis this study pro-posed an SSMVEP enhancement method based on T-Fimage fusion e parameters of the T-F image fusionalgorithm were determined by the grid search method andthe electrode signals used for T-F image fusion were se-lected by bipolar fusion e analysis results show that thekey of the T-F image fusion algorithm is the fusion of thewavelet low-frequency components is study comparedthe enhancement effects of minimum energy fusionmaximum contrast fusion CCA fusion and T-F imagefusion on SSMVEP e experimental results show that the

online performance of the T-F image fusion method isbetter than that of the traditional spatial filtering methodswhich indicates that the proposed method is feasible to fuseSSMVEP in the T-F domain

Data Availability

e analytical data used to support the findings of this studyare included within the article and the raw data are availablefrom the corresponding author upon request

Ethical Approval

e subjects provided informed written consent in accor-dance with the protocol approved by the Institutional Re-view Board of Xirsquoan Jiaotong University

Conflicts of Interest

e authors declare that they have no conflicts of interest

Authorsrsquo Contributions

Wenqiang Yan worked on the algorithm and wrote thepaper Guanghua Xu involved in discussions and gavesuggestions throughout this project Longting Chen andXiaowei Zheng worked on the experimental design Allauthors read and approved the final manuscript

Acknowledgments

is research was supported by the National Natural ScienceFoundation of China (NSFC) (no 51775415) We want tothank the subjects for participating in these experiments

Supplementary Materials

MATLAB codes for T-F image fusion minimum energyfusion maximum contrast fusion and CCA fusion (Sup-plementary Materials)

References

[1] W Yan G Xu J Xie et al ldquoFour novel motion paradigmsbased on steady-state motion visual evoked potentialrdquo IEEETransactions on Biomedical Engineering vol 65 no 8pp 1696ndash1704 2018

[2] J R Wolpaw and D J Mcfarland ldquoMultichannel EEG-basedbrain-computer communicationrdquo Electroencephalographyand Clinical Neurophysiology vol 90 no 6 pp 444ndash4491994

[3] B O Peters G Pfurtscheller and H Flyvbjerg ldquoMiningmulti-channel EEG for its information content an ANN-based method for a brain-computer interfacerdquo Neural Net-works vol 11 no 7-8 pp 1429ndash1433 1998

[4] S Parini L Maggi A C Turconi and G Andreoni ldquoA robustand self-paced BCI system based on a four class SSVEPparadigm algorithms and protocols for a high-transfer-ratedirect brain communicationrdquo Computational Intelligence andNeuroscience vol 2009 pp 1ndash11 2009

[5] O Friman I Volosyak and A Graser ldquoMultiple channeldetection of steady-state visual evoked potentials for brain-

Computational Intelligence and Neuroscience 13

computer interfacesrdquo IEEE Transactions on Biomedical En-gineering vol 54 no 4 pp 742ndash750 2007

[6] Z Lin C Zhang WWu and X Gao ldquoFrequency recognitionbased on canonical correlation analysis for SSVEP-basedBCIsrdquo IEEE Transactions on Biomedical Engineering vol 53- no 12 pp 2610ndash2614 2006

[7] M Middendorf G Mcmillan G Calhoun and K S JonesldquoBrain-computer interfaces based on the steady-state visual-evoked responserdquo IEEE Transactions on Rehabilitation En-gineering vol 8 no 2 pp 211ndash214 2000

[8] G R Muller-Putz R Scherer C Brauneis andG Pfurtscheller ldquoSteady-state visual evoked potential(SSVEP)-based communication impact of harmonic fre-quency componentsrdquo Journal of Neural Engineering vol 2no 4 pp 123ndash130 2005

[9] D J Mcfarland L M Mccane S V David and J R WolpawldquoSpatial filter selection for EEG-based communicationrdquoElectroencephalography and Clinical Neurophysiologyvol 103 no 3 pp 386ndash394 1997

[10] H Li B S Manjunath and S K Mitra ldquoMultisensor imagefusion using the wavelet transformrdquo Graphical Models andImage Processing vol 57 no 3 pp 235ndash245 1995

[11] S Li and B Yang ldquoMultifocus image fusion by combiningcurvelet and wavelet transformrdquo Pattern Recognition Lettersvol 29 no 9 pp 1295ndash1301 2008

[12] B Zhang ldquoStudy on image fusion based on different fusionrules of wavelet transformrdquo Acta Photonica Sinica vol 33pp 173ndash176 2004

[13] C DMeyerMatrix Analysis and Applied Linear Algebra Bookand Solutions Manual Society for Industrial and AppliedMathematics Philadelphia PA USA 2000

[14] K J A Sundar M Jahnavi and K Lakshmisaritha ldquoMulti-sensor image fusion based on empirical wavelet transformrdquo inProceedings of the International Conference on ElectricalElectronics Communication Computer and OptimizationTechniques pp 93ndash97 Mysuru India December 2018

[15] L Tian J Ahmed Q Du B Shankar and S Adnan ldquoMultifocus image fusion using combined median and average filterbased hybrid stationary wavelet transform and principalcomponent analysisrdquo International Journal of AdvancedComputer Science and Applications vol 9 no 6 2018

[16] G He S Xing D Dong and X Zhao ldquoPanchromatic andmulti-spectral image fusion method based on two-step sparserepresentation and wavelet transformrdquo in Asia-Pacific Signaland Information Processing Association Summit and Confer-ence pp 259ndash262 Honolulu HI USA January 2018

[17] Y Shin S Lee M Ahn H Cho S C Jun and H-N LeeldquoNoise robustness analysis of sparse representation basedclassification method for non-stationary EEG signal classifi-cationrdquo Biomedical Signal Processing and Control vol 21pp 8ndash18 2015

[18] M Xu L Chen L Zhang et al ldquoA visual parallel-BCI spellerbased on the time-frequency coding strategyrdquo Journal ofNeural Engineering vol 11 no 2 article 026014 2014

[19] J Xie G Xu A Luo et al ldquoe role of visual noise ininfluencing mental load and fatigue in a steady-state motionvisual evoked potential-based brain-computer interfacerdquoSensors vol 17 no 8 p 1873 2017

[20] K Barbe R Pintelon and J Schoukens ldquoWelch methodrevisited nonparametric power spectrum estimation viacircular overlaprdquo IEEE Transactions on Signal Processingvol 58 no 2 pp 553ndash565 2010

[21] C Jia X Gao B Hong and S Gao ldquoFrequency and phasemixed coding in SSVEP-based brain-computer interfacerdquo

IEEE transactions on bio-medical engineering vol 58 no 1pp 200ndash206 2011

[22] C Yang H Wu Z Li W He N Wang and C-Y Su ldquoMindcontrol of a robotic arm with visual fusion technologyrdquo IEEETransactions on Industrial Informatics vol 14 no 9pp 3822ndash3830 2017

[23] J Castillo S Muller E Caicedo and T Bastos ldquoFeatureextraction techniques based on power spectrum for a SSVEP-BCIrdquo in Proceedings of the IEEE 23rd International Sympo-sium on Industrial Electronics (ISIE) Istanbul Turkey June2014

14 Computational Intelligence and Neuroscience

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 4: Steady-StateMotionVisualEvokedPotential(SSMVEP) … · 2019. 2. 14. · wasconductedbyMATLAB’stfrstftfunction.Afterfusing two T-F images, the T-F signal was inversely transformed

x(t) 1113946+infin

minusinfin1113946

+infin

minusinfinSTFT(τ f)h(tminus τ)e

j2πftdf dτ (13)

In this study the T-F transform of the SSMVEP signalwas conducted by MATLABrsquos tfrstft function After fusingtwo T-F images the T-F signal was inversely transformedinto a one-dimensional time-domain signal by MATLABrsquostfristft function

28 T-F Image Fusion Based on Two-Dimensional MultiscaleWavelet Transform Among the many image fusion tech-nologies two-dimensional multiscale wavelet transform-based image fusion methods have become a research hot-spot [14ndash16] Let f(x y) denote a two-dimensional signaland x and y are its abscissa and ordinate respectively Letψ(x y) denote a two-dimensional basic wavelet andψab1 b2

(x y) denote the scale expansion and two-dimensional displacement of ψ(x y)

ψab1 b2(x y)

1aψ

xminus b1

ayminus b2

a1113888 1113889 (14)

en the two-dimensional continuous wavelet trans-form is

WTf a b1 b2( 1113857 1a

1113946 1113946 f(x y)ψxminus b1

ayminus b2

a1113888 1113889dx dy

(15)

e factor 1a in equation (15) is a normalization factorthat has been introduced to ensure that the energy remainsunchanged before and after wavelet expansion e two-dimensional multiscale wavelet decomposition and re-construction process is shown in Figure 1 e de-composition process can be described as follows First one-dimensional discrete wavelet decomposition is performedon each line of the image which obtains the low-frequencycomponent L and the high-frequency component H of theoriginal image in the horizontal direction e low-frequency component and high-frequency component arethe low-frequency wavelet coefficients and high-frequencywavelet coefficients obtained after wavelet decompositionrespectively en one-dimensional discrete wavelet de-composition is performed on each column of the trans-formed data to obtain the low-frequency components LL inthe horizontal and vertical directions the high-frequencycomponents LH in the horizontal and vertical directions thelow-frequency components HL in the horizontal and verticaldirections and the high-frequency component HH in thevertical direction e reconstruction process can be de-scribed as follows Firstly one-dimensional discrete waveletreconstruction is performed on each column of the trans-form result en one-dimensional discrete wavelet re-construction is performed on each row of the transformeddata to obtain the reconstructed image

e T-F image obtained by the STFT is decomposed intoN layers by the two-dimensional multiscale wavelet de-composition as shown in Figure 1 and 3N+ 1 frequencycomponents are obtained In this study the LLN

decomposed from the highest level is defined as the low-frequency component and the HLM LHM and HHM (M 12 N) decomposed from each level are defined as high-frequency components e active components of the EEGare mainly contained in the low-frequency component efusion process of two T-F images based on two-dimensionalmultiscale wavelet transform is shown in Figure 2 Firstlythe T-F images 1 and 2 are decomposed by two-dimensionalwavelet decomposition and then the corresponding low-frequency components and high-frequency components arefused according to the corresponding fusion rules Finallytwo-dimensional wavelet reconstruction is performed toobtain the fused image by fusing low-frequency componentsand high-frequency components In this study the two-dimensional wavelet decomposition was performed usingthe MATLABrsquos wavedec2 function and the two-dimensionalwavelet reconstruction was performed using the MATLABrsquoswaverec2 function

29 Image Mean Filtering For the current pixel to beprocessed a template is selected which is composed ofseveral pixels adjacent to the current pixel e method ofreplacing the value of the original pixel with the mean of thetemplate is called mean filtering Defining f(x y) as thepixel value at coordinates (x y) the current pixel g(x y)

can be calculated by

g(x y) 1D

1113944

D

i1f xi yi( 1113857 (16)

where D represents the square of the template size

210 Summary of the ProposedMethod for EEG EnhancementBased on T-F Image Fusion Here the T-F image fusionmethod of the electrode signals x1(t) and x2(t) is introducedand the fusion process is shown in Figure 3

(1) STFT is performed on the two electrode signals x1(t)and x2(t) to obtain T-F images 1 and 2

(2) e low-frequency components LLN1 and LLN2 andthe high-frequency components (HLM1 LHM1 andHHM1) and (HLM2 LHM2 and HHM2) (M 1 2 N) of two T-F images 1 and 2 are obtained byN-layertwo-dimensional multiscale wavelet decomposition

(3) e low-frequency components LLN1 and LLN2 andthe high-frequency components (HLM1 LHM1 andHHM1) and (HLM2 LHM2 and HHM2) (M 1 2 N) of each decomposition layer are fused accordingto the following fusion rules

(a) e low-frequency components of the T-F im-ages 1 and 2 are subtracted to obtain the low-frequency components of the fused image F

LLNF LLN1 minus LLN2 (17)

e low-frequency components represent the activecomponents of the SSMVEP and the subtraction of the

4 Computational Intelligence and Neuroscience

T-F image

Row decomposition

Rowreconstruction

L H

Columndecomposition

Columnreconstruction LH1

LL1 HL1

HH1

Secondarydecomposition

Secondaryreconstruction LH1 HH1

HL1LL2

LH2

HL2

HH2

Figure 1 e process of two-dimensional discrete wavelet decomposition and reconstruction

T-F image 1 T-F image 2

Wavelet decomposition Wavelet decomposition

Low-frequency component

Fusion principle 1 Fusion principle 2

Two-dimensional wavelet reconstruction

Fusion image

High-frequency component

Low-frequency component

High-frequency component

Figure 2 Image fusion process based on two-dimensional wavelet transform

Raw signal 1 Raw signal 2

STFT STFT

Time (s) Time (s)

Freq

uenc

y (H

z)

Freq

uenc

y (H

z)

2D wavelet decomposition

Low-frequency component fusion

High-frequency component fusion

2D wavelet reconstruction

Time (s)

Freq

uenc

y (H

z)

Mean filtering

Time (s)

Freq

uenc

y (H

z)

ISTFT

Fused signal

Figure 3 e fusion process of two-electrode signal T-F images

Computational Intelligence and Neuroscience 5

low-frequency components can remove the commonnoise in the electrode signals

(b) e magnitudes of the high-frequency components(HLM1 LHM1 andHHM1) and (HLM2 LHM2 andHHM2) (M 1 2 N) of each decomposition

layer are obtained for the T-F images 1 and 2 ehigh-frequency component with a large value isused as the high-frequency component of the fusedimage F

HLiMF

HLiM1 real HLi

M11113872 1113873gt real HLiM21113872 1113873

HLiM2 real HLi

M11113872 1113873lt real HLiM21113872 1113873

⎧⎪⎨

⎪⎩

LHiMF

LHiM1 real LHi

M11113872 1113873gt real LHiM21113872 1113873

LHiM2 real LHi

M11113872 1113873lt real LHiM21113872 1113873

⎧⎪⎨

⎪⎩

HHiMF

HHiM1 real HHi

M11113872 1113873gt real HHiM21113872 1113873

HHiM2 real HHi

M11113872 1113873lt real HHiM21113872 1113873

⎧⎪⎨

⎪⎩

⎧⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎨

⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎩

M 1 2 N (18)

e two-dimensional signal after the STFT is acomplex signal the two-dimensional signal aftertwo-dimensional wavelet decomposition is still acomplex signal e value used for the comparison isthe real part of the complex signal Here i representsthe number of frequency components of the Mdecomposition layer (ie the number of waveletcoefficients)

(4) According to both low-frequency components LLNFand high-frequency components (HLMF LHMF andHHMF) (M 1 2 N) of the fused image F thefused image F is generated by two-dimensionalwavelet reconstruction

(5) Mean filtering is performed on the fused image F(6) An ISTFT is performed on the mean filtered image to

generate a fused time-domain signal (MATLAB codefor T-F image fusion and test data is provided inSupplementary Materials (available here))

3 Results

31 Parameter Selection

311 STFT Parameters In this study the EEG time-domainsignals were subjected to STFT and ISTFT using theMATLAB library functions tfrstft and tfristfte parametersthat need to be set here are the number of frequency bins andthe frequency smoothing window If the number of fre-quency bins is too large the calculation speed of the entireprocess will be low In this study the value of this parametercan be set to [32 54 76 98120] Since the Fourier transform ofthe Gaussian function is also a Gaussian function thewindow function of the optimal time localization of theSTFT is a Gaussian function e EEG is a nonstationarysignal [17] and requires a high time resolution of the windowfunction that is the window length should not be too longIn this paper a Gaussian window was selected and the valueof the window length can be set to [37 47 57 67 77 87 97]

312 Two-Dimensional Multiscale Wavelet TransformParameters In this study the MATLAB library functionswavedec2 and waverec2 were used to perform two-dimensional multiscale wavelet decomposition and re-construction on T-F images Here the number of layers ofwavelet decomposition and the wavelet basis functions needto be set e sampling frequency of the EEG acquisitiondevice used in this paper was 1200Hz and the stimulationfrequency of the SSMVEP was 7ndash108Hz Considering thatthe SSMVEP may contain the second harmonic of thestimulation frequency this study sets the number of layers ofwavelet decomposition to 4 e dbN wavelet has good reg-ularity and was chosen as the wavelet basis function evanishing momentN of the wavelet function can be set to [2 35 7 10 13]

313 Mean Filtering Parameters Here the template size ofthemean filtering needs to be set In this study the value of thetemplate size can be set to [2 4 6 8 10 12 14 16 18 20 22 24 26]

314 Low-Frequency Component of the Fused Image Fe values of LLNF taking LLN1minusLLN2 or LLN2minusLLN1 willaffect the experimental results and the values of LLNF needto be determined to optimize the final fusion effect (seedetails in Section 210 (3))

315 Parameter Selection e T-F image fusion methodproposed in this study requires the setting of the followingfive parameters the number of frequency bins frequencysmoothing window vanishing moments of dbN waveletfunctions mean filter template size and the value of LLNFIn this study the grid search method was used to traverse thecombination of all parameters to find the best combinationof parameters that can enhance the SSMVEP active com-ponents In this study six rounds of experiments wereconducted e first three rounds of experimental data wereused to select the optimal combination of parameters for T-F

6 Computational Intelligence and Neuroscience

image fusion According to the selected combination of pa-rameters the last three rounds of experimental data were usedto compare the enhancement effect of SSMVEP active com-ponents by the method of minimum energy fusion maximumcontrast fusion CCA fusion and T-F image fusion edisadvantage of grid search is that the amount of calculation islarge however considering the difficulty of superparameterselection grid search is still a more reliable method

32 Electrode Signal Selection for T-F Image Fusion e T-Fimage fusion method analyzes two electrode signals ustwo suitable electrodes need to be selected from multipleelectrodes e bipolar fusion signal is obtained by sub-tracting the two original EEG signals In equation (17) wesubtracted the wavelet low-frequency components of the twoT-F images to obtain the wavelet low-frequency componentsof the fused image e subtraction of the low-frequencycomponents removes the common noise in the electrodesignals which is similar to the principle of bipolar fusionerefore the two electrode signals with the best bipolarfusion effect were used in the T-F image fusion For SSVEPor SSMVEP stimulation the Oz electrode usually has thestrongest response [18] and is most commonly used toanalyze SSVEP or SSMVEP [19] e Oz electrode was usedas one of the analysis electrodes for bipolar fusion In thisstudy the Oz electrode was fused with the other five elec-trode signals so that the best combination of electrodes canbe selected When bipolar fusion is used the Oz electrode asa reducing electrode or a reduced electrode will not affect theexperimental results For example Oz-POz and POz-Ozhave the same fusion effect In this study the Oz elec-trode was used as the reduced electrode Twenty trials persubject were used for electrode selection e Welch powerspectrum analysis [20] was performed on the obtainedsignals of bipolar fusion for each subject e Gaussianwindow was used and the length of the window was 5 s eoverlapping was 256 and the number of discrete Fouriertransform (DFT) points was 6000 for the Welch periodo-gram 20 focused targets correspond to 20 stimuli fre-quencies and the frequencies corresponding to the highestpower spectrum amplitudes at the 20 frequencies weredetermined as the focused target frequency e high rec-ognition accuracy indicates that the amplitude of the powerspectrum at the stimulation frequency is prominent and itindicates the effectiveness of the fusion method for theenhancement of the SSMVEP active component Table 1shows the recognition accuracy of all 10 subjects under thebipolar fusion e Oz electrode was fused with differentelectrodes and obtained different fusion effects Moreoverdue to the differences among individuals the electrodecombination with the highest recognition accuracy persubject was also different e asterisk (lowast) marks the bipolarfusion electrode group with the highest recognition accuracyof each subject and the corresponding electrode combi-nation was used in T-F image fusion

33 Comparison of Enhancement Effects of Minimum EnergyFusion Maximum Contrast Fusion CCA Fusion and T-F

Image Fusion on SSMVEP Active Components T-F imagefusion used two channel signals and CCA fusion maximumcontrast fusion and minimum energy fusion used all sixchannel signals In this study the accuracy was used as theevaluation standard of the fusion effect e high onlineaccuracy indicates that the amplitude at stimulus frequencyis prominent which shows the effectiveness of the fusionmethod For T-F image fusion the Welch power spectrumanalysis (the parameters are the same as in Section 32) wasperformed on the obtained signal after the T-F image fusionand the frequency with the highest amplitude at twentystimulus frequencies was identified as the focused targetfrequency e minimum energy fusion maximum contrastfusion and CCA fusion require prior knowledge of thefrequency of the signal to be fused However the frequencyof the signal to be fused cannot be determined because theuserrsquos focused target cannot be determined beforehand Forthe current tested signal the minimum energy fusionmaximum contrast fusion and CCA fusion need to firstperform the fusion analysis at all stimulus frequencies andthen the frequency with the highest amplitude was iden-tified as the focused target frequency Take CCA fusion as anexample and assume that the current tested signal is Y First20 template signals are set according to equation (11) andthen 20 spatial filter coefficients are obtained according toequation (10) e tested signal is fused with 20 spatial filtercoefficients respectively e obtained 20 sets of vectors areseparately analyzed by the Welch power spectrum (theparameters are the same as in Section 32) to obtain theamplitude at the corresponding frequency Finally thefrequency with the highest amplitude is identified as thefocused target frequency e first three rounds of experi-mental data were used to select the parameters of the T-Fimage fusion (see details in Section 31) and the last threerounds of the experimental data were used to test the onlinefusion effects under the four methods Figures 4(a)ndash4(d)show the analysis results plotted using the data of subject 4 inthe fourth round of the experiment Figure 4(a) shows thepower spectrum results of the 20 targets under T-F imagefusion Figure 4(b) shows the power spectrum results of 20targets under CCA fusion Figure 4(c) shows the powerspectrum results of 20 targets under minimum energy

Table 1 Accuracies of 10 subjects under bipolar fusion

SubjectsBipolar fusion

Oz-PO7 Oz-PO8 Oz-PO3 Oz-POz Oz-PO4S1 02 07 045 015 095lowastS2 025 0 03lowast 01 02S3 095lowast 085 085 04 08S4 08lowast 01 07 035 035S5 02lowast 015 015 015 01S6 095lowast 03 095lowast 085 07S7 02 015 025 06lowast 055S8 01 005 025 03 09lowastS9 03 015 0 035lowast 02S10 035 035 05 08lowast 04lowaste bipolar fusion electrode group with the highest recognition accuracyof each subject

Computational Intelligence and Neuroscience 7

times10ndash7

0

05

1f = 72 Hz

times10ndash7

0

1

2f = 76 Hz

times10ndash7

0

1

2f = 78 Hz

0

05

1times10ndash7 f = 7 Hz

times10ndash7

0

1

2f = 74 Hz

times10ndash7

0

05

1f = 92 Hz

times10ndash7

0

2

4f = 96 Hz

times10ndash7

0

1

2f = 98 Hz

times10ndash7

0

1

2f = 9 Hz

times10ndash7

0

2

4f = 94 Hz

times10ndash7

0

05

1f = 102 Hz

times10ndash7

0

1

2f = 106 Hz

times10ndash7

0

1

2f = 108 Hz

times10ndash7

0

1

2f = 10 Hz

times10ndash7

0

1

2f = 104 Hz

times10ndash7

0

2

4f = 82 Hz

times10ndash7

0

2

4f = 86 Hz

times10ndash7

0

2

4f = 88 Hz

times10ndash7

0

1

2f = 8 Hz

times10ndash7

0

05

1 f = 84 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(a)

0

2

4times10ndash13 f = 7 Hz

times10ndash13

0

5f = 72 Hz

times10ndash12

0

05

1f = 74 Hz

times10ndash12

0

05

1f = 76 Hz

times10ndash13

0

5f = 78 Hz

times10ndash13

0

2

4f = 9 Hz

times10ndash13

0

5f = 92 Hz

times10ndash12

0

1

2f = 94 Hz

times10ndash12

0

05

1f = 96 Hz

times10ndash13

0

2

4f = 98 Hz

times10ndash13

0

2

4f = 10 Hz

times10ndash12

0

05

1f = 102 Hz

times10ndash12

0

05

1f = 104 Hz

times10ndash12

0

05

1f = 106 Hz

times10ndash13

0

5f = 108 Hz

times10ndash13

0

2

4f = 8 Hz

times10ndash12

0

05

1f = 82 Hz

times10ndash12

0

1

2f = 84 Hz

times10ndash12

0

05

1f = 86 Hz

times10ndash13

0

5f = 88 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(b)

Figure 4 Continued

8 Computational Intelligence and Neuroscience

times10ndash13

0

05

1f = 7 Hz

times10ndash13

0

1

2f = 74 Hz

times10ndash13

0

05

1f = 72 Hz

times10ndash13

0

1

2f = 76 Hz

times10ndash14

0

2

4f = 78 Hz

times10ndash13

0

05

1f = 9 Hz

times10ndash13

0

1

2f = 94 Hz

times10ndash13

0

1

2f = 92 Hz

times10ndash13

0

05

1f = 96 Hz

times10ndash13

0

05

1f = 98 Hz

times10ndash13

0

05

1f = 10 Hz

times10ndash13

0

05

1f = 104 Hz

times10ndash13

0

05

1f = 102 Hz

times10ndash13

0

05

1f = 106 Hz

times10ndash13

0

1

2f = 108 Hz

times10ndash13

0

05

1f = 8 Hz

times10ndash13

0

1

2f = 84 Hz

times10ndash13

0

1

2f = 82 Hz

times10ndash13

0

05

1f = 86 Hz

times10ndash13

0

05

1f = 88 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(c)

times10ndash4

0

05

1f = 72 Hz

times10ndash4

0

1

2f = 74 Hz

times10ndash4

0

1

2f = 76 Hz

times10ndash4

0

05

1f = 78 Hz

times10ndash4

0

05

1f = 7 Hz

times10ndash4

0

05

1f = 92 Hz

times10ndash4

0

2

4f = 94 Hz

times10ndash4

0

1

2f = 96 Hz

times10ndash5

0

5f = 98 Hz

times10ndash5

0

2

4f = 9 Hz

times10ndash4

0

1

2f = 102 Hz

times10ndash4

0

05

1f = 104 Hz

times10ndash4

0

1

2f = 106 Hz

times10ndash4

0

05

1f = 108 Hz

times10ndash4

0

05

1f = 10 Hz

times10ndash4

0

1

2f = 82 Hz

times10ndash4

0

2

4f = 84 Hz

times10ndash4

0

1

2f = 86 Hz

times10ndash4

0

05

1f = 88 Hz

times10ndash4

0

05

1f = 8 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(d)

Figure 4 e power spectrum results of 20 targets under (a) T-F image fusion (b) CCA fusion (c) minimum energy fusion and (d)maximum contrast fusion (the discrete sequence frequency values from left to right are 7 8 910 72 82 92 102 74 84 94 104 76 86 96 10678 88 98 108 Hz respectively)

Computational Intelligence and Neuroscience 9

fusion and Figure 4(d) shows the power spectrum results of20 targets under maximum contrast fusion In the stalk plotthe discrete sequence frequency values from left to right are7 8 9 10 72 82 92 102 74 84 94 104 76 86 96 106 78 8898 108 respectively where f represents the stimulationfrequency of the corresponding target and the green dotmarks the amplitude of the corresponding target stimulusfrequency e red flag f represents the target was mis-identified and the blue flag f represents the target wascorrectly identified Figures 4(a)ndash4(d) show that the T-Fimage fusion method has the best online performance andthe minimum energy fusion method has the worst onlineperformance Maximum contrast fusion and CCA fusionhave similar fusion effects

In online analysis the power spectrum amplitudes of thetest signal at all stimulus frequencies were calculated andthen the frequency corresponding to the maximum am-plitude was identified as the focused target frequency Errorrecognition occurs when the amplitude at the focused targetfrequency is lower than the amplitudes at the nonfocusedtarget frequencieserefore the power spectrum amplitudeat the focused target frequency can be regarded as the activecomponent of the signal and the power spectrum ampli-tudes at the remaining nonfocused target frequencies can beregarded as noise e SNR in this study was defined as theratio of the power spectrum amplitude at the focused targetfrequency to the mean of the power spectrum amplitudes atthe remaining nonfocused target frequencies e SNRs often subjects at all stimulus frequencies were superimposedand averaged and the experimental results are shown inFigure 5 It can be seen from Figure 5 that the T-F imagefusion method obtained the highest SNR which indicatesthat the T-F image fusion method proposed in this study caneffectively enhance the SSMVEP SNR

e online accuracies of each subject in the three roundsof experiments were superimposed and averaged e ex-perimental results are shown in Figure 6 It can be seen thatmost subjects obtained highest online accuracy under theT-F image fusion and some subjects obtained highest onlineaccuracy under CCA fusion and maximum contrast fusionOnly subject 9 obtained the highest online accuracy underthe minimum energy fusion e online accuracies of all thesubjects were superimposed and averaged e experimentalresults are shown in Figure 7 e experimental results showthat the online performance of the T-F image fusion methodis better e online accuracy of the T-F image fusionmethod is 617 606 and 3050 higher than that of theCCA fusion maximum contrast fusion and the minimumenergy fusion respectively e paired t-test shows signifi-cant differences between the accuracies of T-F image fusionand minimum energy fusion (p 00174) Online accuracyanalysis results show that the proposedmethod is better thanthe traditional time-domain fusion method

4 Discussion

e SSMVEP collected from the scalp contains a lot of noiseand requires an effective signal processing method to en-hance the active components of SSMVEP Spatial filtering

methods utilize EEG information of multiple channels andexert positive significance to enhance the active componentsof the SSMVEP At present the commonly used spatialfiltering methods include average fusion native fusion bi-polar fusion Laplacian fusion CAR fusion CCA fusionminimum energy fusion and maximum contrast fusionefirst five spatial filtering methods can only process thepreselected electrode signals Minimum energy fusionmaximum contrast fusion and CCA fusion improve theabove problem and they can be used to fuse any number ofelectrode signals with better fusion effects Minimum energyfusion maximum contrast fusion and CCA fusion fuseSSMVEP in the time domainis study first put forward theidea of fusing EEG in the T-F domain and proposed a T-Fimage fusion method We compared the enhancement ef-fects of minimum energy fusion maximum contrast fusionCCA fusion and T-F image fusion on SSMVEP It is verifiedby the test data that the proposed method is better than thetraditional time-domain fusion method

T-F analysis is a good choice for the transformation ofthe EEG data from one dimension to two dimension EEGtransformed into the T-F domain can be used as an image foranalysis e STFT has an inverse transform which caninverse transform the fused T-F image into a time-domainsignal erefore the STFT was used to transform the EEGfrom the time domain to the T-F domain e two-dimensional wavelet transform can fuse two images intoone to achieve the purpose of multichannel EEG fusionAfter two-dimensional wavelet decomposition the low-frequency components of the two subimages were sub-tracted to obtain the low-frequency components of the fusedimage and the maximum value of the high-frequencycomponents of the two subimages was taken as the high-frequency component of the fused imagee low-frequencycomponents after wavelet decomposition represent the ac-tive components of the SSMVEPe subtraction of the low-

0Methods

5

10

15

Ave

rage

SN

R

T-F image fusionCCA fusionMinimum energy fusionMaximum contrast fusion

Figure 5 e average SNR of ten subjects at all stimulus fre-quencies under four methods

10 Computational Intelligence and Neuroscience

frequency components can remove the common noise in theelectrode signals is is similar to bipolar fusion whichsubtracts two electrode signals in the time domain We alsocompared the enhancement eects of bipolar fusion and theproposed method on SSMVEP active components eelectrodes used for bipolar fusion are the same as those usedin T-F image fusion e results show that the proposedmethod is better than the bipolar fusion (the online accuracyof bipolar fusion is 7152) In this study the fused imagewas ltered by mean lter to further achieve low-pass l-tering We also tried to remove the mean ltering step aftercompleting the T-F image fusion e results show thatremoving themean ltering step reduced the fusion eect Inthis study the mean ltering step was performed after T-Fimage fusion We tested the fusion eect when the meanltering step was performed before T-F image fusion(performing mean ltering step on the subimage afterSTFT) For some subjects it was better to perform the meanltering step after T-F image fusion erefore we recom-mend that researchers follow the analysis steps in this study

e wavelet low-frequency coecients (see details inequation (17)) of the fused image have an important in-uence on the fusion eect If we perform T-F image fusionon six electrode signals at the same time we will get six setsof wavelet low-frequency coecients Here each set of low-frequency coecient is a two-dimensional signal Referringto equation (1) we can assign a coecient to each two-dimensional signal and obtain the fused two-dimensionalsignal (ie the wavelet low-frequency coecients of thefused image) after linear summationis study explored thefeasibility of applying the spatial lter coecients of CCAfusion and maximum contrast fusion to T-F image fusionwhen fusing six electrode signals by using the T-F imagefusion method at the same time Since the fusion eect ofminimum energy fusion was poor the spatial lter co-ecients of the minimum energy fusion were not used heree spatial lter coecients of the maximum contrast fusionand CCA fusion at the frequency f were obtained byequations (9) and (10) and are set to Vmax and Vcca re-spectively Vmax and Vcca are two vectors with dimensions of6times1 where Vmax(1 1) and Vcca(1 1) represent the rstelements of the vectors Vmax and Vcca Since the T-F imagefusion method was performed on the six electrode signalsequation (17) is transformed into equation (19) where Vcorresponds to the spatial lter coecient Vmax or Vcca erest of the analysis process is the same as that in Section 210e same analysis steps were performed for all 20 targets(7ndash108Hz) e Welch spectrum analysis (the parametersare the same as in Section 32) was performed on the ob-tained signal after T-F image fusion Figures 8(a) and 8(b)show the Welch power spectra plotted using the data ofsubject 4 in the fourth round of the experiment Figure 8(a)shows the power spectrum plotted using spatial lter co-ecients of CCA fusion and Figure 8(b) shows the Welchpower spectrum plotted using spatial lter coecients ofmaximum contrast fusion where f represents the stimulusfrequency and the red circle indicates the amplitude at thestimulus frequency Figures 8(a) and 8(b) show that theamplitudes at the stimulation frequencies are prominente spatial lter coecients of CCA fusion and maximumcontrast fusion are eective for T-F image fusion us theT-F image fusion method proposed in this study focuses onthe fusion of wavelet low-frequency coecients of multiple

0

02

04

06

08

1

12

Acc

urac

y

Subject1 2 3 4 5 6 7 8 9 10

T-F image fusionCCA fusion

Minimum energy fusionMaximum contrast fusion

Figure 6 e accuracy of each subject under four fusion methods

0

02

04

06

08

1

12

Ave

rage

accu

racy

T-F image fusionCCA fusionMinimum energy fusionMaximum contrast fusion

Methods

Figure 7 e average accuracy of ten subjects under four fusionmethods

Computational Intelligence and Neuroscience 11

images (by assigning a coecient to each two-dimensionalsignal and obtaining the fused two-dimensional signal afterlinear summation) e premise of the above test results is

that we know the frequency of the signal to be fused If theonline test method is used (see details in Section 33) thesame online fusion results as CCA fusion and maximum

f = 7 Hz

times104

012

10 20 30 400

f = 72 Hz

times104

012

10 20 30 400

f = 74 Hz

times104

024

10 20 30 400

f = 76 Hz

times104

024

10 20 30 400

f = 78 Hz

times104

024

10 20 30 400

f = 8 Hz

times104

012

10 20 30 400

f = 82 Hz

times104

024

10 20 30 400

f = 84 Hz

times104

05

10

10 20 30 400

f = 86 Hz

times104

024

10 20 30 400

f = 88 Hz

times104

024

10 20 30 400

f = 9 Hz

times104

012

10 20 30 400

f = 92 Hz

times104

024

10 20 30 400

f = 94 Hz

times104

05

10

10 20 30 400

f = 96 Hz

times104

024

10 20 30 400

f = 98 Hz

times104

012

10 20 30 400

f = 10 Hz

times104

012

10 20 30 400

f = 102 Hz

times104

012

10 20 30 400

f = 104 Hz

times104

012

10 20 30 400

f = 106 Hz

times104

024

10 20 30 400

f = 108 Hz

times104

012

10 20 30 400Frequency (Hz)

Am

plitu

de V

2 (H

z)

(a)

f = 7 Hz

0

2

4

10 20 30 400

f = 8 Hz

0

2

4

10 20 30 400

f = 9 Hz

0

2

4

10 20 30 400

f = 10 Hz

0

2

4

10 20 30 400

f = 72 Hz0

2

4

10 20 30 400

f = 82 Hz0

5

10

10 20 30 400

f = 92 Hz0

2

4

10 20 30 400

f = 102 Hz0

2

4

10 20 30 400

f = 74 Hz

0

2

4

10 20 30 400

f = 84 Hz

0

10

20

10 20 30 400

f = 94 Hz

0

10

20

10 20 30 400

f = 104 Hz

0

2

4

10 20 30 400

f = 76 Hz0

5

10 20 30 400

f = 86 Hz0

5

10

10 20 30 400

f = 96 Hz0

5

10 20 30 400

f = 106 Hz0

5

10

10 20 30 400

f = 78 Hz0

5

10 20 30 400

f = 88 Hz0

5

10 20 30 400

f = 98 Hz0

2

4

10 20 30 400

f = 108 Hz0

2

4

10 20 30 400Frequency (Hz)

Am

plitu

de V

2 (H

z)

(b)

Figure 8 e Welch power spectrum plotted using the (a) CCA fusion spatial lter coecients and (b) maximum contrast fusion spatiallter coecients

12 Computational Intelligence and Neuroscience

contrast fusion were obtained T-F image fusion usedonly two electrode signals and the better online fusioneffect than that of the CCA fusion and the maximumcontrast fusion was obtained which shows the effec-tiveness of the proposed method Next we will explorethe fusion method of multiple T-F images (more than twoimages) that is to find suitable spatial filter coefficientsfor the wavelet low-frequency components of multipleT-F images

LLNF LLN1 lowastV(1 1) + LLN2 lowastV(2 1) + LLN3 lowastV(3 1)

+ LLN4 lowastV(4 1) + LLN5 lowastV(5 1)

+ LLN6 lowastV(6 1)

(19)

e parameters affect the fusion effect of the proposedmethod We listed the parameters that need to be selectedand the possible values of the parameters in Section 31 Inthis study the grid search method was used to traverse thecombination of all parameters and the best combination ofparameters was found In the experiment we found thatthe number of frequency bins and the Gaussian windowlength can be fixed to 54 and 57 We recommend that theresearchers determine the parameters according to theparameter selection principles and ranges given in Section31 In this study the SSMVEP active component wasenhanced by fusing multichannel signals into a single-channel signal and then the focused target frequencywas identified by performing spectrum analysis on thefused signals is is beneficial for SSMVEP studies basedon spectrum analysis For example Reference [21] pro-posed a frequency and phase mixed coding method in theSSVEP-based brain-computer interface (BCI) which in-creases the number of BCI coding targets by making onefrequency correspond to multiple different phases In thestudy the FFT analysis of the test signal is required to findthe possible focused target frequency and then calculate thephase value at that frequency e proposed method in thisstudy has a positive significance for accurately finding thefocused target frequency in the spectrum Moreover someEEG feature extraction algorithms for the BCI also requirespectrum analysis of the test signals [22 23] erefore themethod proposed in this study has a potential applicationvalue

5 Conclusion

To explore whether T-F domain analysis can achieve betterfusion effects than time-domain analysis this study pro-posed an SSMVEP enhancement method based on T-Fimage fusion e parameters of the T-F image fusionalgorithm were determined by the grid search method andthe electrode signals used for T-F image fusion were se-lected by bipolar fusion e analysis results show that thekey of the T-F image fusion algorithm is the fusion of thewavelet low-frequency components is study comparedthe enhancement effects of minimum energy fusionmaximum contrast fusion CCA fusion and T-F imagefusion on SSMVEP e experimental results show that the

online performance of the T-F image fusion method isbetter than that of the traditional spatial filtering methodswhich indicates that the proposed method is feasible to fuseSSMVEP in the T-F domain

Data Availability

e analytical data used to support the findings of this studyare included within the article and the raw data are availablefrom the corresponding author upon request

Ethical Approval

e subjects provided informed written consent in accor-dance with the protocol approved by the Institutional Re-view Board of Xirsquoan Jiaotong University

Conflicts of Interest

e authors declare that they have no conflicts of interest

Authorsrsquo Contributions

Wenqiang Yan worked on the algorithm and wrote thepaper Guanghua Xu involved in discussions and gavesuggestions throughout this project Longting Chen andXiaowei Zheng worked on the experimental design Allauthors read and approved the final manuscript

Acknowledgments

is research was supported by the National Natural ScienceFoundation of China (NSFC) (no 51775415) We want tothank the subjects for participating in these experiments

Supplementary Materials

MATLAB codes for T-F image fusion minimum energyfusion maximum contrast fusion and CCA fusion (Sup-plementary Materials)

References

[1] W Yan G Xu J Xie et al ldquoFour novel motion paradigmsbased on steady-state motion visual evoked potentialrdquo IEEETransactions on Biomedical Engineering vol 65 no 8pp 1696ndash1704 2018

[2] J R Wolpaw and D J Mcfarland ldquoMultichannel EEG-basedbrain-computer communicationrdquo Electroencephalographyand Clinical Neurophysiology vol 90 no 6 pp 444ndash4491994

[3] B O Peters G Pfurtscheller and H Flyvbjerg ldquoMiningmulti-channel EEG for its information content an ANN-based method for a brain-computer interfacerdquo Neural Net-works vol 11 no 7-8 pp 1429ndash1433 1998

[4] S Parini L Maggi A C Turconi and G Andreoni ldquoA robustand self-paced BCI system based on a four class SSVEPparadigm algorithms and protocols for a high-transfer-ratedirect brain communicationrdquo Computational Intelligence andNeuroscience vol 2009 pp 1ndash11 2009

[5] O Friman I Volosyak and A Graser ldquoMultiple channeldetection of steady-state visual evoked potentials for brain-

Computational Intelligence and Neuroscience 13

computer interfacesrdquo IEEE Transactions on Biomedical En-gineering vol 54 no 4 pp 742ndash750 2007

[6] Z Lin C Zhang WWu and X Gao ldquoFrequency recognitionbased on canonical correlation analysis for SSVEP-basedBCIsrdquo IEEE Transactions on Biomedical Engineering vol 53- no 12 pp 2610ndash2614 2006

[7] M Middendorf G Mcmillan G Calhoun and K S JonesldquoBrain-computer interfaces based on the steady-state visual-evoked responserdquo IEEE Transactions on Rehabilitation En-gineering vol 8 no 2 pp 211ndash214 2000

[8] G R Muller-Putz R Scherer C Brauneis andG Pfurtscheller ldquoSteady-state visual evoked potential(SSVEP)-based communication impact of harmonic fre-quency componentsrdquo Journal of Neural Engineering vol 2no 4 pp 123ndash130 2005

[9] D J Mcfarland L M Mccane S V David and J R WolpawldquoSpatial filter selection for EEG-based communicationrdquoElectroencephalography and Clinical Neurophysiologyvol 103 no 3 pp 386ndash394 1997

[10] H Li B S Manjunath and S K Mitra ldquoMultisensor imagefusion using the wavelet transformrdquo Graphical Models andImage Processing vol 57 no 3 pp 235ndash245 1995

[11] S Li and B Yang ldquoMultifocus image fusion by combiningcurvelet and wavelet transformrdquo Pattern Recognition Lettersvol 29 no 9 pp 1295ndash1301 2008

[12] B Zhang ldquoStudy on image fusion based on different fusionrules of wavelet transformrdquo Acta Photonica Sinica vol 33pp 173ndash176 2004

[13] C DMeyerMatrix Analysis and Applied Linear Algebra Bookand Solutions Manual Society for Industrial and AppliedMathematics Philadelphia PA USA 2000

[14] K J A Sundar M Jahnavi and K Lakshmisaritha ldquoMulti-sensor image fusion based on empirical wavelet transformrdquo inProceedings of the International Conference on ElectricalElectronics Communication Computer and OptimizationTechniques pp 93ndash97 Mysuru India December 2018

[15] L Tian J Ahmed Q Du B Shankar and S Adnan ldquoMultifocus image fusion using combined median and average filterbased hybrid stationary wavelet transform and principalcomponent analysisrdquo International Journal of AdvancedComputer Science and Applications vol 9 no 6 2018

[16] G He S Xing D Dong and X Zhao ldquoPanchromatic andmulti-spectral image fusion method based on two-step sparserepresentation and wavelet transformrdquo in Asia-Pacific Signaland Information Processing Association Summit and Confer-ence pp 259ndash262 Honolulu HI USA January 2018

[17] Y Shin S Lee M Ahn H Cho S C Jun and H-N LeeldquoNoise robustness analysis of sparse representation basedclassification method for non-stationary EEG signal classifi-cationrdquo Biomedical Signal Processing and Control vol 21pp 8ndash18 2015

[18] M Xu L Chen L Zhang et al ldquoA visual parallel-BCI spellerbased on the time-frequency coding strategyrdquo Journal ofNeural Engineering vol 11 no 2 article 026014 2014

[19] J Xie G Xu A Luo et al ldquoe role of visual noise ininfluencing mental load and fatigue in a steady-state motionvisual evoked potential-based brain-computer interfacerdquoSensors vol 17 no 8 p 1873 2017

[20] K Barbe R Pintelon and J Schoukens ldquoWelch methodrevisited nonparametric power spectrum estimation viacircular overlaprdquo IEEE Transactions on Signal Processingvol 58 no 2 pp 553ndash565 2010

[21] C Jia X Gao B Hong and S Gao ldquoFrequency and phasemixed coding in SSVEP-based brain-computer interfacerdquo

IEEE transactions on bio-medical engineering vol 58 no 1pp 200ndash206 2011

[22] C Yang H Wu Z Li W He N Wang and C-Y Su ldquoMindcontrol of a robotic arm with visual fusion technologyrdquo IEEETransactions on Industrial Informatics vol 14 no 9pp 3822ndash3830 2017

[23] J Castillo S Muller E Caicedo and T Bastos ldquoFeatureextraction techniques based on power spectrum for a SSVEP-BCIrdquo in Proceedings of the IEEE 23rd International Sympo-sium on Industrial Electronics (ISIE) Istanbul Turkey June2014

14 Computational Intelligence and Neuroscience

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 5: Steady-StateMotionVisualEvokedPotential(SSMVEP) … · 2019. 2. 14. · wasconductedbyMATLAB’stfrstftfunction.Afterfusing two T-F images, the T-F signal was inversely transformed

T-F image

Row decomposition

Rowreconstruction

L H

Columndecomposition

Columnreconstruction LH1

LL1 HL1

HH1

Secondarydecomposition

Secondaryreconstruction LH1 HH1

HL1LL2

LH2

HL2

HH2

Figure 1 e process of two-dimensional discrete wavelet decomposition and reconstruction

T-F image 1 T-F image 2

Wavelet decomposition Wavelet decomposition

Low-frequency component

Fusion principle 1 Fusion principle 2

Two-dimensional wavelet reconstruction

Fusion image

High-frequency component

Low-frequency component

High-frequency component

Figure 2 Image fusion process based on two-dimensional wavelet transform

Raw signal 1 Raw signal 2

STFT STFT

Time (s) Time (s)

Freq

uenc

y (H

z)

Freq

uenc

y (H

z)

2D wavelet decomposition

Low-frequency component fusion

High-frequency component fusion

2D wavelet reconstruction

Time (s)

Freq

uenc

y (H

z)

Mean filtering

Time (s)

Freq

uenc

y (H

z)

ISTFT

Fused signal

Figure 3 e fusion process of two-electrode signal T-F images

Computational Intelligence and Neuroscience 5

low-frequency components can remove the commonnoise in the electrode signals

(b) e magnitudes of the high-frequency components(HLM1 LHM1 andHHM1) and (HLM2 LHM2 andHHM2) (M 1 2 N) of each decomposition

layer are obtained for the T-F images 1 and 2 ehigh-frequency component with a large value isused as the high-frequency component of the fusedimage F

HLiMF

HLiM1 real HLi

M11113872 1113873gt real HLiM21113872 1113873

HLiM2 real HLi

M11113872 1113873lt real HLiM21113872 1113873

⎧⎪⎨

⎪⎩

LHiMF

LHiM1 real LHi

M11113872 1113873gt real LHiM21113872 1113873

LHiM2 real LHi

M11113872 1113873lt real LHiM21113872 1113873

⎧⎪⎨

⎪⎩

HHiMF

HHiM1 real HHi

M11113872 1113873gt real HHiM21113872 1113873

HHiM2 real HHi

M11113872 1113873lt real HHiM21113872 1113873

⎧⎪⎨

⎪⎩

⎧⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎨

⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎩

M 1 2 N (18)

e two-dimensional signal after the STFT is acomplex signal the two-dimensional signal aftertwo-dimensional wavelet decomposition is still acomplex signal e value used for the comparison isthe real part of the complex signal Here i representsthe number of frequency components of the Mdecomposition layer (ie the number of waveletcoefficients)

(4) According to both low-frequency components LLNFand high-frequency components (HLMF LHMF andHHMF) (M 1 2 N) of the fused image F thefused image F is generated by two-dimensionalwavelet reconstruction

(5) Mean filtering is performed on the fused image F(6) An ISTFT is performed on the mean filtered image to

generate a fused time-domain signal (MATLAB codefor T-F image fusion and test data is provided inSupplementary Materials (available here))

3 Results

31 Parameter Selection

311 STFT Parameters In this study the EEG time-domainsignals were subjected to STFT and ISTFT using theMATLAB library functions tfrstft and tfristfte parametersthat need to be set here are the number of frequency bins andthe frequency smoothing window If the number of fre-quency bins is too large the calculation speed of the entireprocess will be low In this study the value of this parametercan be set to [32 54 76 98120] Since the Fourier transform ofthe Gaussian function is also a Gaussian function thewindow function of the optimal time localization of theSTFT is a Gaussian function e EEG is a nonstationarysignal [17] and requires a high time resolution of the windowfunction that is the window length should not be too longIn this paper a Gaussian window was selected and the valueof the window length can be set to [37 47 57 67 77 87 97]

312 Two-Dimensional Multiscale Wavelet TransformParameters In this study the MATLAB library functionswavedec2 and waverec2 were used to perform two-dimensional multiscale wavelet decomposition and re-construction on T-F images Here the number of layers ofwavelet decomposition and the wavelet basis functions needto be set e sampling frequency of the EEG acquisitiondevice used in this paper was 1200Hz and the stimulationfrequency of the SSMVEP was 7ndash108Hz Considering thatthe SSMVEP may contain the second harmonic of thestimulation frequency this study sets the number of layers ofwavelet decomposition to 4 e dbN wavelet has good reg-ularity and was chosen as the wavelet basis function evanishing momentN of the wavelet function can be set to [2 35 7 10 13]

313 Mean Filtering Parameters Here the template size ofthemean filtering needs to be set In this study the value of thetemplate size can be set to [2 4 6 8 10 12 14 16 18 20 22 24 26]

314 Low-Frequency Component of the Fused Image Fe values of LLNF taking LLN1minusLLN2 or LLN2minusLLN1 willaffect the experimental results and the values of LLNF needto be determined to optimize the final fusion effect (seedetails in Section 210 (3))

315 Parameter Selection e T-F image fusion methodproposed in this study requires the setting of the followingfive parameters the number of frequency bins frequencysmoothing window vanishing moments of dbN waveletfunctions mean filter template size and the value of LLNFIn this study the grid search method was used to traverse thecombination of all parameters to find the best combinationof parameters that can enhance the SSMVEP active com-ponents In this study six rounds of experiments wereconducted e first three rounds of experimental data wereused to select the optimal combination of parameters for T-F

6 Computational Intelligence and Neuroscience

image fusion According to the selected combination of pa-rameters the last three rounds of experimental data were usedto compare the enhancement effect of SSMVEP active com-ponents by the method of minimum energy fusion maximumcontrast fusion CCA fusion and T-F image fusion edisadvantage of grid search is that the amount of calculation islarge however considering the difficulty of superparameterselection grid search is still a more reliable method

32 Electrode Signal Selection for T-F Image Fusion e T-Fimage fusion method analyzes two electrode signals ustwo suitable electrodes need to be selected from multipleelectrodes e bipolar fusion signal is obtained by sub-tracting the two original EEG signals In equation (17) wesubtracted the wavelet low-frequency components of the twoT-F images to obtain the wavelet low-frequency componentsof the fused image e subtraction of the low-frequencycomponents removes the common noise in the electrodesignals which is similar to the principle of bipolar fusionerefore the two electrode signals with the best bipolarfusion effect were used in the T-F image fusion For SSVEPor SSMVEP stimulation the Oz electrode usually has thestrongest response [18] and is most commonly used toanalyze SSVEP or SSMVEP [19] e Oz electrode was usedas one of the analysis electrodes for bipolar fusion In thisstudy the Oz electrode was fused with the other five elec-trode signals so that the best combination of electrodes canbe selected When bipolar fusion is used the Oz electrode asa reducing electrode or a reduced electrode will not affect theexperimental results For example Oz-POz and POz-Ozhave the same fusion effect In this study the Oz elec-trode was used as the reduced electrode Twenty trials persubject were used for electrode selection e Welch powerspectrum analysis [20] was performed on the obtainedsignals of bipolar fusion for each subject e Gaussianwindow was used and the length of the window was 5 s eoverlapping was 256 and the number of discrete Fouriertransform (DFT) points was 6000 for the Welch periodo-gram 20 focused targets correspond to 20 stimuli fre-quencies and the frequencies corresponding to the highestpower spectrum amplitudes at the 20 frequencies weredetermined as the focused target frequency e high rec-ognition accuracy indicates that the amplitude of the powerspectrum at the stimulation frequency is prominent and itindicates the effectiveness of the fusion method for theenhancement of the SSMVEP active component Table 1shows the recognition accuracy of all 10 subjects under thebipolar fusion e Oz electrode was fused with differentelectrodes and obtained different fusion effects Moreoverdue to the differences among individuals the electrodecombination with the highest recognition accuracy persubject was also different e asterisk (lowast) marks the bipolarfusion electrode group with the highest recognition accuracyof each subject and the corresponding electrode combi-nation was used in T-F image fusion

33 Comparison of Enhancement Effects of Minimum EnergyFusion Maximum Contrast Fusion CCA Fusion and T-F

Image Fusion on SSMVEP Active Components T-F imagefusion used two channel signals and CCA fusion maximumcontrast fusion and minimum energy fusion used all sixchannel signals In this study the accuracy was used as theevaluation standard of the fusion effect e high onlineaccuracy indicates that the amplitude at stimulus frequencyis prominent which shows the effectiveness of the fusionmethod For T-F image fusion the Welch power spectrumanalysis (the parameters are the same as in Section 32) wasperformed on the obtained signal after the T-F image fusionand the frequency with the highest amplitude at twentystimulus frequencies was identified as the focused targetfrequency e minimum energy fusion maximum contrastfusion and CCA fusion require prior knowledge of thefrequency of the signal to be fused However the frequencyof the signal to be fused cannot be determined because theuserrsquos focused target cannot be determined beforehand Forthe current tested signal the minimum energy fusionmaximum contrast fusion and CCA fusion need to firstperform the fusion analysis at all stimulus frequencies andthen the frequency with the highest amplitude was iden-tified as the focused target frequency Take CCA fusion as anexample and assume that the current tested signal is Y First20 template signals are set according to equation (11) andthen 20 spatial filter coefficients are obtained according toequation (10) e tested signal is fused with 20 spatial filtercoefficients respectively e obtained 20 sets of vectors areseparately analyzed by the Welch power spectrum (theparameters are the same as in Section 32) to obtain theamplitude at the corresponding frequency Finally thefrequency with the highest amplitude is identified as thefocused target frequency e first three rounds of experi-mental data were used to select the parameters of the T-Fimage fusion (see details in Section 31) and the last threerounds of the experimental data were used to test the onlinefusion effects under the four methods Figures 4(a)ndash4(d)show the analysis results plotted using the data of subject 4 inthe fourth round of the experiment Figure 4(a) shows thepower spectrum results of the 20 targets under T-F imagefusion Figure 4(b) shows the power spectrum results of 20targets under CCA fusion Figure 4(c) shows the powerspectrum results of 20 targets under minimum energy

Table 1 Accuracies of 10 subjects under bipolar fusion

SubjectsBipolar fusion

Oz-PO7 Oz-PO8 Oz-PO3 Oz-POz Oz-PO4S1 02 07 045 015 095lowastS2 025 0 03lowast 01 02S3 095lowast 085 085 04 08S4 08lowast 01 07 035 035S5 02lowast 015 015 015 01S6 095lowast 03 095lowast 085 07S7 02 015 025 06lowast 055S8 01 005 025 03 09lowastS9 03 015 0 035lowast 02S10 035 035 05 08lowast 04lowaste bipolar fusion electrode group with the highest recognition accuracyof each subject

Computational Intelligence and Neuroscience 7

times10ndash7

0

05

1f = 72 Hz

times10ndash7

0

1

2f = 76 Hz

times10ndash7

0

1

2f = 78 Hz

0

05

1times10ndash7 f = 7 Hz

times10ndash7

0

1

2f = 74 Hz

times10ndash7

0

05

1f = 92 Hz

times10ndash7

0

2

4f = 96 Hz

times10ndash7

0

1

2f = 98 Hz

times10ndash7

0

1

2f = 9 Hz

times10ndash7

0

2

4f = 94 Hz

times10ndash7

0

05

1f = 102 Hz

times10ndash7

0

1

2f = 106 Hz

times10ndash7

0

1

2f = 108 Hz

times10ndash7

0

1

2f = 10 Hz

times10ndash7

0

1

2f = 104 Hz

times10ndash7

0

2

4f = 82 Hz

times10ndash7

0

2

4f = 86 Hz

times10ndash7

0

2

4f = 88 Hz

times10ndash7

0

1

2f = 8 Hz

times10ndash7

0

05

1 f = 84 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(a)

0

2

4times10ndash13 f = 7 Hz

times10ndash13

0

5f = 72 Hz

times10ndash12

0

05

1f = 74 Hz

times10ndash12

0

05

1f = 76 Hz

times10ndash13

0

5f = 78 Hz

times10ndash13

0

2

4f = 9 Hz

times10ndash13

0

5f = 92 Hz

times10ndash12

0

1

2f = 94 Hz

times10ndash12

0

05

1f = 96 Hz

times10ndash13

0

2

4f = 98 Hz

times10ndash13

0

2

4f = 10 Hz

times10ndash12

0

05

1f = 102 Hz

times10ndash12

0

05

1f = 104 Hz

times10ndash12

0

05

1f = 106 Hz

times10ndash13

0

5f = 108 Hz

times10ndash13

0

2

4f = 8 Hz

times10ndash12

0

05

1f = 82 Hz

times10ndash12

0

1

2f = 84 Hz

times10ndash12

0

05

1f = 86 Hz

times10ndash13

0

5f = 88 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(b)

Figure 4 Continued

8 Computational Intelligence and Neuroscience

times10ndash13

0

05

1f = 7 Hz

times10ndash13

0

1

2f = 74 Hz

times10ndash13

0

05

1f = 72 Hz

times10ndash13

0

1

2f = 76 Hz

times10ndash14

0

2

4f = 78 Hz

times10ndash13

0

05

1f = 9 Hz

times10ndash13

0

1

2f = 94 Hz

times10ndash13

0

1

2f = 92 Hz

times10ndash13

0

05

1f = 96 Hz

times10ndash13

0

05

1f = 98 Hz

times10ndash13

0

05

1f = 10 Hz

times10ndash13

0

05

1f = 104 Hz

times10ndash13

0

05

1f = 102 Hz

times10ndash13

0

05

1f = 106 Hz

times10ndash13

0

1

2f = 108 Hz

times10ndash13

0

05

1f = 8 Hz

times10ndash13

0

1

2f = 84 Hz

times10ndash13

0

1

2f = 82 Hz

times10ndash13

0

05

1f = 86 Hz

times10ndash13

0

05

1f = 88 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(c)

times10ndash4

0

05

1f = 72 Hz

times10ndash4

0

1

2f = 74 Hz

times10ndash4

0

1

2f = 76 Hz

times10ndash4

0

05

1f = 78 Hz

times10ndash4

0

05

1f = 7 Hz

times10ndash4

0

05

1f = 92 Hz

times10ndash4

0

2

4f = 94 Hz

times10ndash4

0

1

2f = 96 Hz

times10ndash5

0

5f = 98 Hz

times10ndash5

0

2

4f = 9 Hz

times10ndash4

0

1

2f = 102 Hz

times10ndash4

0

05

1f = 104 Hz

times10ndash4

0

1

2f = 106 Hz

times10ndash4

0

05

1f = 108 Hz

times10ndash4

0

05

1f = 10 Hz

times10ndash4

0

1

2f = 82 Hz

times10ndash4

0

2

4f = 84 Hz

times10ndash4

0

1

2f = 86 Hz

times10ndash4

0

05

1f = 88 Hz

times10ndash4

0

05

1f = 8 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(d)

Figure 4 e power spectrum results of 20 targets under (a) T-F image fusion (b) CCA fusion (c) minimum energy fusion and (d)maximum contrast fusion (the discrete sequence frequency values from left to right are 7 8 910 72 82 92 102 74 84 94 104 76 86 96 10678 88 98 108 Hz respectively)

Computational Intelligence and Neuroscience 9

fusion and Figure 4(d) shows the power spectrum results of20 targets under maximum contrast fusion In the stalk plotthe discrete sequence frequency values from left to right are7 8 9 10 72 82 92 102 74 84 94 104 76 86 96 106 78 8898 108 respectively where f represents the stimulationfrequency of the corresponding target and the green dotmarks the amplitude of the corresponding target stimulusfrequency e red flag f represents the target was mis-identified and the blue flag f represents the target wascorrectly identified Figures 4(a)ndash4(d) show that the T-Fimage fusion method has the best online performance andthe minimum energy fusion method has the worst onlineperformance Maximum contrast fusion and CCA fusionhave similar fusion effects

In online analysis the power spectrum amplitudes of thetest signal at all stimulus frequencies were calculated andthen the frequency corresponding to the maximum am-plitude was identified as the focused target frequency Errorrecognition occurs when the amplitude at the focused targetfrequency is lower than the amplitudes at the nonfocusedtarget frequencieserefore the power spectrum amplitudeat the focused target frequency can be regarded as the activecomponent of the signal and the power spectrum ampli-tudes at the remaining nonfocused target frequencies can beregarded as noise e SNR in this study was defined as theratio of the power spectrum amplitude at the focused targetfrequency to the mean of the power spectrum amplitudes atthe remaining nonfocused target frequencies e SNRs often subjects at all stimulus frequencies were superimposedand averaged and the experimental results are shown inFigure 5 It can be seen from Figure 5 that the T-F imagefusion method obtained the highest SNR which indicatesthat the T-F image fusion method proposed in this study caneffectively enhance the SSMVEP SNR

e online accuracies of each subject in the three roundsof experiments were superimposed and averaged e ex-perimental results are shown in Figure 6 It can be seen thatmost subjects obtained highest online accuracy under theT-F image fusion and some subjects obtained highest onlineaccuracy under CCA fusion and maximum contrast fusionOnly subject 9 obtained the highest online accuracy underthe minimum energy fusion e online accuracies of all thesubjects were superimposed and averaged e experimentalresults are shown in Figure 7 e experimental results showthat the online performance of the T-F image fusion methodis better e online accuracy of the T-F image fusionmethod is 617 606 and 3050 higher than that of theCCA fusion maximum contrast fusion and the minimumenergy fusion respectively e paired t-test shows signifi-cant differences between the accuracies of T-F image fusionand minimum energy fusion (p 00174) Online accuracyanalysis results show that the proposedmethod is better thanthe traditional time-domain fusion method

4 Discussion

e SSMVEP collected from the scalp contains a lot of noiseand requires an effective signal processing method to en-hance the active components of SSMVEP Spatial filtering

methods utilize EEG information of multiple channels andexert positive significance to enhance the active componentsof the SSMVEP At present the commonly used spatialfiltering methods include average fusion native fusion bi-polar fusion Laplacian fusion CAR fusion CCA fusionminimum energy fusion and maximum contrast fusionefirst five spatial filtering methods can only process thepreselected electrode signals Minimum energy fusionmaximum contrast fusion and CCA fusion improve theabove problem and they can be used to fuse any number ofelectrode signals with better fusion effects Minimum energyfusion maximum contrast fusion and CCA fusion fuseSSMVEP in the time domainis study first put forward theidea of fusing EEG in the T-F domain and proposed a T-Fimage fusion method We compared the enhancement ef-fects of minimum energy fusion maximum contrast fusionCCA fusion and T-F image fusion on SSMVEP It is verifiedby the test data that the proposed method is better than thetraditional time-domain fusion method

T-F analysis is a good choice for the transformation ofthe EEG data from one dimension to two dimension EEGtransformed into the T-F domain can be used as an image foranalysis e STFT has an inverse transform which caninverse transform the fused T-F image into a time-domainsignal erefore the STFT was used to transform the EEGfrom the time domain to the T-F domain e two-dimensional wavelet transform can fuse two images intoone to achieve the purpose of multichannel EEG fusionAfter two-dimensional wavelet decomposition the low-frequency components of the two subimages were sub-tracted to obtain the low-frequency components of the fusedimage and the maximum value of the high-frequencycomponents of the two subimages was taken as the high-frequency component of the fused imagee low-frequencycomponents after wavelet decomposition represent the ac-tive components of the SSMVEPe subtraction of the low-

0Methods

5

10

15

Ave

rage

SN

R

T-F image fusionCCA fusionMinimum energy fusionMaximum contrast fusion

Figure 5 e average SNR of ten subjects at all stimulus fre-quencies under four methods

10 Computational Intelligence and Neuroscience

frequency components can remove the common noise in theelectrode signals is is similar to bipolar fusion whichsubtracts two electrode signals in the time domain We alsocompared the enhancement eects of bipolar fusion and theproposed method on SSMVEP active components eelectrodes used for bipolar fusion are the same as those usedin T-F image fusion e results show that the proposedmethod is better than the bipolar fusion (the online accuracyof bipolar fusion is 7152) In this study the fused imagewas ltered by mean lter to further achieve low-pass l-tering We also tried to remove the mean ltering step aftercompleting the T-F image fusion e results show thatremoving themean ltering step reduced the fusion eect Inthis study the mean ltering step was performed after T-Fimage fusion We tested the fusion eect when the meanltering step was performed before T-F image fusion(performing mean ltering step on the subimage afterSTFT) For some subjects it was better to perform the meanltering step after T-F image fusion erefore we recom-mend that researchers follow the analysis steps in this study

e wavelet low-frequency coecients (see details inequation (17)) of the fused image have an important in-uence on the fusion eect If we perform T-F image fusionon six electrode signals at the same time we will get six setsof wavelet low-frequency coecients Here each set of low-frequency coecient is a two-dimensional signal Referringto equation (1) we can assign a coecient to each two-dimensional signal and obtain the fused two-dimensionalsignal (ie the wavelet low-frequency coecients of thefused image) after linear summationis study explored thefeasibility of applying the spatial lter coecients of CCAfusion and maximum contrast fusion to T-F image fusionwhen fusing six electrode signals by using the T-F imagefusion method at the same time Since the fusion eect ofminimum energy fusion was poor the spatial lter co-ecients of the minimum energy fusion were not used heree spatial lter coecients of the maximum contrast fusionand CCA fusion at the frequency f were obtained byequations (9) and (10) and are set to Vmax and Vcca re-spectively Vmax and Vcca are two vectors with dimensions of6times1 where Vmax(1 1) and Vcca(1 1) represent the rstelements of the vectors Vmax and Vcca Since the T-F imagefusion method was performed on the six electrode signalsequation (17) is transformed into equation (19) where Vcorresponds to the spatial lter coecient Vmax or Vcca erest of the analysis process is the same as that in Section 210e same analysis steps were performed for all 20 targets(7ndash108Hz) e Welch spectrum analysis (the parametersare the same as in Section 32) was performed on the ob-tained signal after T-F image fusion Figures 8(a) and 8(b)show the Welch power spectra plotted using the data ofsubject 4 in the fourth round of the experiment Figure 8(a)shows the power spectrum plotted using spatial lter co-ecients of CCA fusion and Figure 8(b) shows the Welchpower spectrum plotted using spatial lter coecients ofmaximum contrast fusion where f represents the stimulusfrequency and the red circle indicates the amplitude at thestimulus frequency Figures 8(a) and 8(b) show that theamplitudes at the stimulation frequencies are prominente spatial lter coecients of CCA fusion and maximumcontrast fusion are eective for T-F image fusion us theT-F image fusion method proposed in this study focuses onthe fusion of wavelet low-frequency coecients of multiple

0

02

04

06

08

1

12

Acc

urac

y

Subject1 2 3 4 5 6 7 8 9 10

T-F image fusionCCA fusion

Minimum energy fusionMaximum contrast fusion

Figure 6 e accuracy of each subject under four fusion methods

0

02

04

06

08

1

12

Ave

rage

accu

racy

T-F image fusionCCA fusionMinimum energy fusionMaximum contrast fusion

Methods

Figure 7 e average accuracy of ten subjects under four fusionmethods

Computational Intelligence and Neuroscience 11

images (by assigning a coecient to each two-dimensionalsignal and obtaining the fused two-dimensional signal afterlinear summation) e premise of the above test results is

that we know the frequency of the signal to be fused If theonline test method is used (see details in Section 33) thesame online fusion results as CCA fusion and maximum

f = 7 Hz

times104

012

10 20 30 400

f = 72 Hz

times104

012

10 20 30 400

f = 74 Hz

times104

024

10 20 30 400

f = 76 Hz

times104

024

10 20 30 400

f = 78 Hz

times104

024

10 20 30 400

f = 8 Hz

times104

012

10 20 30 400

f = 82 Hz

times104

024

10 20 30 400

f = 84 Hz

times104

05

10

10 20 30 400

f = 86 Hz

times104

024

10 20 30 400

f = 88 Hz

times104

024

10 20 30 400

f = 9 Hz

times104

012

10 20 30 400

f = 92 Hz

times104

024

10 20 30 400

f = 94 Hz

times104

05

10

10 20 30 400

f = 96 Hz

times104

024

10 20 30 400

f = 98 Hz

times104

012

10 20 30 400

f = 10 Hz

times104

012

10 20 30 400

f = 102 Hz

times104

012

10 20 30 400

f = 104 Hz

times104

012

10 20 30 400

f = 106 Hz

times104

024

10 20 30 400

f = 108 Hz

times104

012

10 20 30 400Frequency (Hz)

Am

plitu

de V

2 (H

z)

(a)

f = 7 Hz

0

2

4

10 20 30 400

f = 8 Hz

0

2

4

10 20 30 400

f = 9 Hz

0

2

4

10 20 30 400

f = 10 Hz

0

2

4

10 20 30 400

f = 72 Hz0

2

4

10 20 30 400

f = 82 Hz0

5

10

10 20 30 400

f = 92 Hz0

2

4

10 20 30 400

f = 102 Hz0

2

4

10 20 30 400

f = 74 Hz

0

2

4

10 20 30 400

f = 84 Hz

0

10

20

10 20 30 400

f = 94 Hz

0

10

20

10 20 30 400

f = 104 Hz

0

2

4

10 20 30 400

f = 76 Hz0

5

10 20 30 400

f = 86 Hz0

5

10

10 20 30 400

f = 96 Hz0

5

10 20 30 400

f = 106 Hz0

5

10

10 20 30 400

f = 78 Hz0

5

10 20 30 400

f = 88 Hz0

5

10 20 30 400

f = 98 Hz0

2

4

10 20 30 400

f = 108 Hz0

2

4

10 20 30 400Frequency (Hz)

Am

plitu

de V

2 (H

z)

(b)

Figure 8 e Welch power spectrum plotted using the (a) CCA fusion spatial lter coecients and (b) maximum contrast fusion spatiallter coecients

12 Computational Intelligence and Neuroscience

contrast fusion were obtained T-F image fusion usedonly two electrode signals and the better online fusioneffect than that of the CCA fusion and the maximumcontrast fusion was obtained which shows the effec-tiveness of the proposed method Next we will explorethe fusion method of multiple T-F images (more than twoimages) that is to find suitable spatial filter coefficientsfor the wavelet low-frequency components of multipleT-F images

LLNF LLN1 lowastV(1 1) + LLN2 lowastV(2 1) + LLN3 lowastV(3 1)

+ LLN4 lowastV(4 1) + LLN5 lowastV(5 1)

+ LLN6 lowastV(6 1)

(19)

e parameters affect the fusion effect of the proposedmethod We listed the parameters that need to be selectedand the possible values of the parameters in Section 31 Inthis study the grid search method was used to traverse thecombination of all parameters and the best combination ofparameters was found In the experiment we found thatthe number of frequency bins and the Gaussian windowlength can be fixed to 54 and 57 We recommend that theresearchers determine the parameters according to theparameter selection principles and ranges given in Section31 In this study the SSMVEP active component wasenhanced by fusing multichannel signals into a single-channel signal and then the focused target frequencywas identified by performing spectrum analysis on thefused signals is is beneficial for SSMVEP studies basedon spectrum analysis For example Reference [21] pro-posed a frequency and phase mixed coding method in theSSVEP-based brain-computer interface (BCI) which in-creases the number of BCI coding targets by making onefrequency correspond to multiple different phases In thestudy the FFT analysis of the test signal is required to findthe possible focused target frequency and then calculate thephase value at that frequency e proposed method in thisstudy has a positive significance for accurately finding thefocused target frequency in the spectrum Moreover someEEG feature extraction algorithms for the BCI also requirespectrum analysis of the test signals [22 23] erefore themethod proposed in this study has a potential applicationvalue

5 Conclusion

To explore whether T-F domain analysis can achieve betterfusion effects than time-domain analysis this study pro-posed an SSMVEP enhancement method based on T-Fimage fusion e parameters of the T-F image fusionalgorithm were determined by the grid search method andthe electrode signals used for T-F image fusion were se-lected by bipolar fusion e analysis results show that thekey of the T-F image fusion algorithm is the fusion of thewavelet low-frequency components is study comparedthe enhancement effects of minimum energy fusionmaximum contrast fusion CCA fusion and T-F imagefusion on SSMVEP e experimental results show that the

online performance of the T-F image fusion method isbetter than that of the traditional spatial filtering methodswhich indicates that the proposed method is feasible to fuseSSMVEP in the T-F domain

Data Availability

e analytical data used to support the findings of this studyare included within the article and the raw data are availablefrom the corresponding author upon request

Ethical Approval

e subjects provided informed written consent in accor-dance with the protocol approved by the Institutional Re-view Board of Xirsquoan Jiaotong University

Conflicts of Interest

e authors declare that they have no conflicts of interest

Authorsrsquo Contributions

Wenqiang Yan worked on the algorithm and wrote thepaper Guanghua Xu involved in discussions and gavesuggestions throughout this project Longting Chen andXiaowei Zheng worked on the experimental design Allauthors read and approved the final manuscript

Acknowledgments

is research was supported by the National Natural ScienceFoundation of China (NSFC) (no 51775415) We want tothank the subjects for participating in these experiments

Supplementary Materials

MATLAB codes for T-F image fusion minimum energyfusion maximum contrast fusion and CCA fusion (Sup-plementary Materials)

References

[1] W Yan G Xu J Xie et al ldquoFour novel motion paradigmsbased on steady-state motion visual evoked potentialrdquo IEEETransactions on Biomedical Engineering vol 65 no 8pp 1696ndash1704 2018

[2] J R Wolpaw and D J Mcfarland ldquoMultichannel EEG-basedbrain-computer communicationrdquo Electroencephalographyand Clinical Neurophysiology vol 90 no 6 pp 444ndash4491994

[3] B O Peters G Pfurtscheller and H Flyvbjerg ldquoMiningmulti-channel EEG for its information content an ANN-based method for a brain-computer interfacerdquo Neural Net-works vol 11 no 7-8 pp 1429ndash1433 1998

[4] S Parini L Maggi A C Turconi and G Andreoni ldquoA robustand self-paced BCI system based on a four class SSVEPparadigm algorithms and protocols for a high-transfer-ratedirect brain communicationrdquo Computational Intelligence andNeuroscience vol 2009 pp 1ndash11 2009

[5] O Friman I Volosyak and A Graser ldquoMultiple channeldetection of steady-state visual evoked potentials for brain-

Computational Intelligence and Neuroscience 13

computer interfacesrdquo IEEE Transactions on Biomedical En-gineering vol 54 no 4 pp 742ndash750 2007

[6] Z Lin C Zhang WWu and X Gao ldquoFrequency recognitionbased on canonical correlation analysis for SSVEP-basedBCIsrdquo IEEE Transactions on Biomedical Engineering vol 53- no 12 pp 2610ndash2614 2006

[7] M Middendorf G Mcmillan G Calhoun and K S JonesldquoBrain-computer interfaces based on the steady-state visual-evoked responserdquo IEEE Transactions on Rehabilitation En-gineering vol 8 no 2 pp 211ndash214 2000

[8] G R Muller-Putz R Scherer C Brauneis andG Pfurtscheller ldquoSteady-state visual evoked potential(SSVEP)-based communication impact of harmonic fre-quency componentsrdquo Journal of Neural Engineering vol 2no 4 pp 123ndash130 2005

[9] D J Mcfarland L M Mccane S V David and J R WolpawldquoSpatial filter selection for EEG-based communicationrdquoElectroencephalography and Clinical Neurophysiologyvol 103 no 3 pp 386ndash394 1997

[10] H Li B S Manjunath and S K Mitra ldquoMultisensor imagefusion using the wavelet transformrdquo Graphical Models andImage Processing vol 57 no 3 pp 235ndash245 1995

[11] S Li and B Yang ldquoMultifocus image fusion by combiningcurvelet and wavelet transformrdquo Pattern Recognition Lettersvol 29 no 9 pp 1295ndash1301 2008

[12] B Zhang ldquoStudy on image fusion based on different fusionrules of wavelet transformrdquo Acta Photonica Sinica vol 33pp 173ndash176 2004

[13] C DMeyerMatrix Analysis and Applied Linear Algebra Bookand Solutions Manual Society for Industrial and AppliedMathematics Philadelphia PA USA 2000

[14] K J A Sundar M Jahnavi and K Lakshmisaritha ldquoMulti-sensor image fusion based on empirical wavelet transformrdquo inProceedings of the International Conference on ElectricalElectronics Communication Computer and OptimizationTechniques pp 93ndash97 Mysuru India December 2018

[15] L Tian J Ahmed Q Du B Shankar and S Adnan ldquoMultifocus image fusion using combined median and average filterbased hybrid stationary wavelet transform and principalcomponent analysisrdquo International Journal of AdvancedComputer Science and Applications vol 9 no 6 2018

[16] G He S Xing D Dong and X Zhao ldquoPanchromatic andmulti-spectral image fusion method based on two-step sparserepresentation and wavelet transformrdquo in Asia-Pacific Signaland Information Processing Association Summit and Confer-ence pp 259ndash262 Honolulu HI USA January 2018

[17] Y Shin S Lee M Ahn H Cho S C Jun and H-N LeeldquoNoise robustness analysis of sparse representation basedclassification method for non-stationary EEG signal classifi-cationrdquo Biomedical Signal Processing and Control vol 21pp 8ndash18 2015

[18] M Xu L Chen L Zhang et al ldquoA visual parallel-BCI spellerbased on the time-frequency coding strategyrdquo Journal ofNeural Engineering vol 11 no 2 article 026014 2014

[19] J Xie G Xu A Luo et al ldquoe role of visual noise ininfluencing mental load and fatigue in a steady-state motionvisual evoked potential-based brain-computer interfacerdquoSensors vol 17 no 8 p 1873 2017

[20] K Barbe R Pintelon and J Schoukens ldquoWelch methodrevisited nonparametric power spectrum estimation viacircular overlaprdquo IEEE Transactions on Signal Processingvol 58 no 2 pp 553ndash565 2010

[21] C Jia X Gao B Hong and S Gao ldquoFrequency and phasemixed coding in SSVEP-based brain-computer interfacerdquo

IEEE transactions on bio-medical engineering vol 58 no 1pp 200ndash206 2011

[22] C Yang H Wu Z Li W He N Wang and C-Y Su ldquoMindcontrol of a robotic arm with visual fusion technologyrdquo IEEETransactions on Industrial Informatics vol 14 no 9pp 3822ndash3830 2017

[23] J Castillo S Muller E Caicedo and T Bastos ldquoFeatureextraction techniques based on power spectrum for a SSVEP-BCIrdquo in Proceedings of the IEEE 23rd International Sympo-sium on Industrial Electronics (ISIE) Istanbul Turkey June2014

14 Computational Intelligence and Neuroscience

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 6: Steady-StateMotionVisualEvokedPotential(SSMVEP) … · 2019. 2. 14. · wasconductedbyMATLAB’stfrstftfunction.Afterfusing two T-F images, the T-F signal was inversely transformed

low-frequency components can remove the commonnoise in the electrode signals

(b) e magnitudes of the high-frequency components(HLM1 LHM1 andHHM1) and (HLM2 LHM2 andHHM2) (M 1 2 N) of each decomposition

layer are obtained for the T-F images 1 and 2 ehigh-frequency component with a large value isused as the high-frequency component of the fusedimage F

HLiMF

HLiM1 real HLi

M11113872 1113873gt real HLiM21113872 1113873

HLiM2 real HLi

M11113872 1113873lt real HLiM21113872 1113873

⎧⎪⎨

⎪⎩

LHiMF

LHiM1 real LHi

M11113872 1113873gt real LHiM21113872 1113873

LHiM2 real LHi

M11113872 1113873lt real LHiM21113872 1113873

⎧⎪⎨

⎪⎩

HHiMF

HHiM1 real HHi

M11113872 1113873gt real HHiM21113872 1113873

HHiM2 real HHi

M11113872 1113873lt real HHiM21113872 1113873

⎧⎪⎨

⎪⎩

⎧⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎨

⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎩

M 1 2 N (18)

e two-dimensional signal after the STFT is acomplex signal the two-dimensional signal aftertwo-dimensional wavelet decomposition is still acomplex signal e value used for the comparison isthe real part of the complex signal Here i representsthe number of frequency components of the Mdecomposition layer (ie the number of waveletcoefficients)

(4) According to both low-frequency components LLNFand high-frequency components (HLMF LHMF andHHMF) (M 1 2 N) of the fused image F thefused image F is generated by two-dimensionalwavelet reconstruction

(5) Mean filtering is performed on the fused image F(6) An ISTFT is performed on the mean filtered image to

generate a fused time-domain signal (MATLAB codefor T-F image fusion and test data is provided inSupplementary Materials (available here))

3 Results

31 Parameter Selection

311 STFT Parameters In this study the EEG time-domainsignals were subjected to STFT and ISTFT using theMATLAB library functions tfrstft and tfristfte parametersthat need to be set here are the number of frequency bins andthe frequency smoothing window If the number of fre-quency bins is too large the calculation speed of the entireprocess will be low In this study the value of this parametercan be set to [32 54 76 98120] Since the Fourier transform ofthe Gaussian function is also a Gaussian function thewindow function of the optimal time localization of theSTFT is a Gaussian function e EEG is a nonstationarysignal [17] and requires a high time resolution of the windowfunction that is the window length should not be too longIn this paper a Gaussian window was selected and the valueof the window length can be set to [37 47 57 67 77 87 97]

312 Two-Dimensional Multiscale Wavelet TransformParameters In this study the MATLAB library functionswavedec2 and waverec2 were used to perform two-dimensional multiscale wavelet decomposition and re-construction on T-F images Here the number of layers ofwavelet decomposition and the wavelet basis functions needto be set e sampling frequency of the EEG acquisitiondevice used in this paper was 1200Hz and the stimulationfrequency of the SSMVEP was 7ndash108Hz Considering thatthe SSMVEP may contain the second harmonic of thestimulation frequency this study sets the number of layers ofwavelet decomposition to 4 e dbN wavelet has good reg-ularity and was chosen as the wavelet basis function evanishing momentN of the wavelet function can be set to [2 35 7 10 13]

313 Mean Filtering Parameters Here the template size ofthemean filtering needs to be set In this study the value of thetemplate size can be set to [2 4 6 8 10 12 14 16 18 20 22 24 26]

314 Low-Frequency Component of the Fused Image Fe values of LLNF taking LLN1minusLLN2 or LLN2minusLLN1 willaffect the experimental results and the values of LLNF needto be determined to optimize the final fusion effect (seedetails in Section 210 (3))

315 Parameter Selection e T-F image fusion methodproposed in this study requires the setting of the followingfive parameters the number of frequency bins frequencysmoothing window vanishing moments of dbN waveletfunctions mean filter template size and the value of LLNFIn this study the grid search method was used to traverse thecombination of all parameters to find the best combinationof parameters that can enhance the SSMVEP active com-ponents In this study six rounds of experiments wereconducted e first three rounds of experimental data wereused to select the optimal combination of parameters for T-F

6 Computational Intelligence and Neuroscience

image fusion According to the selected combination of pa-rameters the last three rounds of experimental data were usedto compare the enhancement effect of SSMVEP active com-ponents by the method of minimum energy fusion maximumcontrast fusion CCA fusion and T-F image fusion edisadvantage of grid search is that the amount of calculation islarge however considering the difficulty of superparameterselection grid search is still a more reliable method

32 Electrode Signal Selection for T-F Image Fusion e T-Fimage fusion method analyzes two electrode signals ustwo suitable electrodes need to be selected from multipleelectrodes e bipolar fusion signal is obtained by sub-tracting the two original EEG signals In equation (17) wesubtracted the wavelet low-frequency components of the twoT-F images to obtain the wavelet low-frequency componentsof the fused image e subtraction of the low-frequencycomponents removes the common noise in the electrodesignals which is similar to the principle of bipolar fusionerefore the two electrode signals with the best bipolarfusion effect were used in the T-F image fusion For SSVEPor SSMVEP stimulation the Oz electrode usually has thestrongest response [18] and is most commonly used toanalyze SSVEP or SSMVEP [19] e Oz electrode was usedas one of the analysis electrodes for bipolar fusion In thisstudy the Oz electrode was fused with the other five elec-trode signals so that the best combination of electrodes canbe selected When bipolar fusion is used the Oz electrode asa reducing electrode or a reduced electrode will not affect theexperimental results For example Oz-POz and POz-Ozhave the same fusion effect In this study the Oz elec-trode was used as the reduced electrode Twenty trials persubject were used for electrode selection e Welch powerspectrum analysis [20] was performed on the obtainedsignals of bipolar fusion for each subject e Gaussianwindow was used and the length of the window was 5 s eoverlapping was 256 and the number of discrete Fouriertransform (DFT) points was 6000 for the Welch periodo-gram 20 focused targets correspond to 20 stimuli fre-quencies and the frequencies corresponding to the highestpower spectrum amplitudes at the 20 frequencies weredetermined as the focused target frequency e high rec-ognition accuracy indicates that the amplitude of the powerspectrum at the stimulation frequency is prominent and itindicates the effectiveness of the fusion method for theenhancement of the SSMVEP active component Table 1shows the recognition accuracy of all 10 subjects under thebipolar fusion e Oz electrode was fused with differentelectrodes and obtained different fusion effects Moreoverdue to the differences among individuals the electrodecombination with the highest recognition accuracy persubject was also different e asterisk (lowast) marks the bipolarfusion electrode group with the highest recognition accuracyof each subject and the corresponding electrode combi-nation was used in T-F image fusion

33 Comparison of Enhancement Effects of Minimum EnergyFusion Maximum Contrast Fusion CCA Fusion and T-F

Image Fusion on SSMVEP Active Components T-F imagefusion used two channel signals and CCA fusion maximumcontrast fusion and minimum energy fusion used all sixchannel signals In this study the accuracy was used as theevaluation standard of the fusion effect e high onlineaccuracy indicates that the amplitude at stimulus frequencyis prominent which shows the effectiveness of the fusionmethod For T-F image fusion the Welch power spectrumanalysis (the parameters are the same as in Section 32) wasperformed on the obtained signal after the T-F image fusionand the frequency with the highest amplitude at twentystimulus frequencies was identified as the focused targetfrequency e minimum energy fusion maximum contrastfusion and CCA fusion require prior knowledge of thefrequency of the signal to be fused However the frequencyof the signal to be fused cannot be determined because theuserrsquos focused target cannot be determined beforehand Forthe current tested signal the minimum energy fusionmaximum contrast fusion and CCA fusion need to firstperform the fusion analysis at all stimulus frequencies andthen the frequency with the highest amplitude was iden-tified as the focused target frequency Take CCA fusion as anexample and assume that the current tested signal is Y First20 template signals are set according to equation (11) andthen 20 spatial filter coefficients are obtained according toequation (10) e tested signal is fused with 20 spatial filtercoefficients respectively e obtained 20 sets of vectors areseparately analyzed by the Welch power spectrum (theparameters are the same as in Section 32) to obtain theamplitude at the corresponding frequency Finally thefrequency with the highest amplitude is identified as thefocused target frequency e first three rounds of experi-mental data were used to select the parameters of the T-Fimage fusion (see details in Section 31) and the last threerounds of the experimental data were used to test the onlinefusion effects under the four methods Figures 4(a)ndash4(d)show the analysis results plotted using the data of subject 4 inthe fourth round of the experiment Figure 4(a) shows thepower spectrum results of the 20 targets under T-F imagefusion Figure 4(b) shows the power spectrum results of 20targets under CCA fusion Figure 4(c) shows the powerspectrum results of 20 targets under minimum energy

Table 1 Accuracies of 10 subjects under bipolar fusion

SubjectsBipolar fusion

Oz-PO7 Oz-PO8 Oz-PO3 Oz-POz Oz-PO4S1 02 07 045 015 095lowastS2 025 0 03lowast 01 02S3 095lowast 085 085 04 08S4 08lowast 01 07 035 035S5 02lowast 015 015 015 01S6 095lowast 03 095lowast 085 07S7 02 015 025 06lowast 055S8 01 005 025 03 09lowastS9 03 015 0 035lowast 02S10 035 035 05 08lowast 04lowaste bipolar fusion electrode group with the highest recognition accuracyof each subject

Computational Intelligence and Neuroscience 7

times10ndash7

0

05

1f = 72 Hz

times10ndash7

0

1

2f = 76 Hz

times10ndash7

0

1

2f = 78 Hz

0

05

1times10ndash7 f = 7 Hz

times10ndash7

0

1

2f = 74 Hz

times10ndash7

0

05

1f = 92 Hz

times10ndash7

0

2

4f = 96 Hz

times10ndash7

0

1

2f = 98 Hz

times10ndash7

0

1

2f = 9 Hz

times10ndash7

0

2

4f = 94 Hz

times10ndash7

0

05

1f = 102 Hz

times10ndash7

0

1

2f = 106 Hz

times10ndash7

0

1

2f = 108 Hz

times10ndash7

0

1

2f = 10 Hz

times10ndash7

0

1

2f = 104 Hz

times10ndash7

0

2

4f = 82 Hz

times10ndash7

0

2

4f = 86 Hz

times10ndash7

0

2

4f = 88 Hz

times10ndash7

0

1

2f = 8 Hz

times10ndash7

0

05

1 f = 84 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(a)

0

2

4times10ndash13 f = 7 Hz

times10ndash13

0

5f = 72 Hz

times10ndash12

0

05

1f = 74 Hz

times10ndash12

0

05

1f = 76 Hz

times10ndash13

0

5f = 78 Hz

times10ndash13

0

2

4f = 9 Hz

times10ndash13

0

5f = 92 Hz

times10ndash12

0

1

2f = 94 Hz

times10ndash12

0

05

1f = 96 Hz

times10ndash13

0

2

4f = 98 Hz

times10ndash13

0

2

4f = 10 Hz

times10ndash12

0

05

1f = 102 Hz

times10ndash12

0

05

1f = 104 Hz

times10ndash12

0

05

1f = 106 Hz

times10ndash13

0

5f = 108 Hz

times10ndash13

0

2

4f = 8 Hz

times10ndash12

0

05

1f = 82 Hz

times10ndash12

0

1

2f = 84 Hz

times10ndash12

0

05

1f = 86 Hz

times10ndash13

0

5f = 88 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(b)

Figure 4 Continued

8 Computational Intelligence and Neuroscience

times10ndash13

0

05

1f = 7 Hz

times10ndash13

0

1

2f = 74 Hz

times10ndash13

0

05

1f = 72 Hz

times10ndash13

0

1

2f = 76 Hz

times10ndash14

0

2

4f = 78 Hz

times10ndash13

0

05

1f = 9 Hz

times10ndash13

0

1

2f = 94 Hz

times10ndash13

0

1

2f = 92 Hz

times10ndash13

0

05

1f = 96 Hz

times10ndash13

0

05

1f = 98 Hz

times10ndash13

0

05

1f = 10 Hz

times10ndash13

0

05

1f = 104 Hz

times10ndash13

0

05

1f = 102 Hz

times10ndash13

0

05

1f = 106 Hz

times10ndash13

0

1

2f = 108 Hz

times10ndash13

0

05

1f = 8 Hz

times10ndash13

0

1

2f = 84 Hz

times10ndash13

0

1

2f = 82 Hz

times10ndash13

0

05

1f = 86 Hz

times10ndash13

0

05

1f = 88 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(c)

times10ndash4

0

05

1f = 72 Hz

times10ndash4

0

1

2f = 74 Hz

times10ndash4

0

1

2f = 76 Hz

times10ndash4

0

05

1f = 78 Hz

times10ndash4

0

05

1f = 7 Hz

times10ndash4

0

05

1f = 92 Hz

times10ndash4

0

2

4f = 94 Hz

times10ndash4

0

1

2f = 96 Hz

times10ndash5

0

5f = 98 Hz

times10ndash5

0

2

4f = 9 Hz

times10ndash4

0

1

2f = 102 Hz

times10ndash4

0

05

1f = 104 Hz

times10ndash4

0

1

2f = 106 Hz

times10ndash4

0

05

1f = 108 Hz

times10ndash4

0

05

1f = 10 Hz

times10ndash4

0

1

2f = 82 Hz

times10ndash4

0

2

4f = 84 Hz

times10ndash4

0

1

2f = 86 Hz

times10ndash4

0

05

1f = 88 Hz

times10ndash4

0

05

1f = 8 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(d)

Figure 4 e power spectrum results of 20 targets under (a) T-F image fusion (b) CCA fusion (c) minimum energy fusion and (d)maximum contrast fusion (the discrete sequence frequency values from left to right are 7 8 910 72 82 92 102 74 84 94 104 76 86 96 10678 88 98 108 Hz respectively)

Computational Intelligence and Neuroscience 9

fusion and Figure 4(d) shows the power spectrum results of20 targets under maximum contrast fusion In the stalk plotthe discrete sequence frequency values from left to right are7 8 9 10 72 82 92 102 74 84 94 104 76 86 96 106 78 8898 108 respectively where f represents the stimulationfrequency of the corresponding target and the green dotmarks the amplitude of the corresponding target stimulusfrequency e red flag f represents the target was mis-identified and the blue flag f represents the target wascorrectly identified Figures 4(a)ndash4(d) show that the T-Fimage fusion method has the best online performance andthe minimum energy fusion method has the worst onlineperformance Maximum contrast fusion and CCA fusionhave similar fusion effects

In online analysis the power spectrum amplitudes of thetest signal at all stimulus frequencies were calculated andthen the frequency corresponding to the maximum am-plitude was identified as the focused target frequency Errorrecognition occurs when the amplitude at the focused targetfrequency is lower than the amplitudes at the nonfocusedtarget frequencieserefore the power spectrum amplitudeat the focused target frequency can be regarded as the activecomponent of the signal and the power spectrum ampli-tudes at the remaining nonfocused target frequencies can beregarded as noise e SNR in this study was defined as theratio of the power spectrum amplitude at the focused targetfrequency to the mean of the power spectrum amplitudes atthe remaining nonfocused target frequencies e SNRs often subjects at all stimulus frequencies were superimposedand averaged and the experimental results are shown inFigure 5 It can be seen from Figure 5 that the T-F imagefusion method obtained the highest SNR which indicatesthat the T-F image fusion method proposed in this study caneffectively enhance the SSMVEP SNR

e online accuracies of each subject in the three roundsof experiments were superimposed and averaged e ex-perimental results are shown in Figure 6 It can be seen thatmost subjects obtained highest online accuracy under theT-F image fusion and some subjects obtained highest onlineaccuracy under CCA fusion and maximum contrast fusionOnly subject 9 obtained the highest online accuracy underthe minimum energy fusion e online accuracies of all thesubjects were superimposed and averaged e experimentalresults are shown in Figure 7 e experimental results showthat the online performance of the T-F image fusion methodis better e online accuracy of the T-F image fusionmethod is 617 606 and 3050 higher than that of theCCA fusion maximum contrast fusion and the minimumenergy fusion respectively e paired t-test shows signifi-cant differences between the accuracies of T-F image fusionand minimum energy fusion (p 00174) Online accuracyanalysis results show that the proposedmethod is better thanthe traditional time-domain fusion method

4 Discussion

e SSMVEP collected from the scalp contains a lot of noiseand requires an effective signal processing method to en-hance the active components of SSMVEP Spatial filtering

methods utilize EEG information of multiple channels andexert positive significance to enhance the active componentsof the SSMVEP At present the commonly used spatialfiltering methods include average fusion native fusion bi-polar fusion Laplacian fusion CAR fusion CCA fusionminimum energy fusion and maximum contrast fusionefirst five spatial filtering methods can only process thepreselected electrode signals Minimum energy fusionmaximum contrast fusion and CCA fusion improve theabove problem and they can be used to fuse any number ofelectrode signals with better fusion effects Minimum energyfusion maximum contrast fusion and CCA fusion fuseSSMVEP in the time domainis study first put forward theidea of fusing EEG in the T-F domain and proposed a T-Fimage fusion method We compared the enhancement ef-fects of minimum energy fusion maximum contrast fusionCCA fusion and T-F image fusion on SSMVEP It is verifiedby the test data that the proposed method is better than thetraditional time-domain fusion method

T-F analysis is a good choice for the transformation ofthe EEG data from one dimension to two dimension EEGtransformed into the T-F domain can be used as an image foranalysis e STFT has an inverse transform which caninverse transform the fused T-F image into a time-domainsignal erefore the STFT was used to transform the EEGfrom the time domain to the T-F domain e two-dimensional wavelet transform can fuse two images intoone to achieve the purpose of multichannel EEG fusionAfter two-dimensional wavelet decomposition the low-frequency components of the two subimages were sub-tracted to obtain the low-frequency components of the fusedimage and the maximum value of the high-frequencycomponents of the two subimages was taken as the high-frequency component of the fused imagee low-frequencycomponents after wavelet decomposition represent the ac-tive components of the SSMVEPe subtraction of the low-

0Methods

5

10

15

Ave

rage

SN

R

T-F image fusionCCA fusionMinimum energy fusionMaximum contrast fusion

Figure 5 e average SNR of ten subjects at all stimulus fre-quencies under four methods

10 Computational Intelligence and Neuroscience

frequency components can remove the common noise in theelectrode signals is is similar to bipolar fusion whichsubtracts two electrode signals in the time domain We alsocompared the enhancement eects of bipolar fusion and theproposed method on SSMVEP active components eelectrodes used for bipolar fusion are the same as those usedin T-F image fusion e results show that the proposedmethod is better than the bipolar fusion (the online accuracyof bipolar fusion is 7152) In this study the fused imagewas ltered by mean lter to further achieve low-pass l-tering We also tried to remove the mean ltering step aftercompleting the T-F image fusion e results show thatremoving themean ltering step reduced the fusion eect Inthis study the mean ltering step was performed after T-Fimage fusion We tested the fusion eect when the meanltering step was performed before T-F image fusion(performing mean ltering step on the subimage afterSTFT) For some subjects it was better to perform the meanltering step after T-F image fusion erefore we recom-mend that researchers follow the analysis steps in this study

e wavelet low-frequency coecients (see details inequation (17)) of the fused image have an important in-uence on the fusion eect If we perform T-F image fusionon six electrode signals at the same time we will get six setsof wavelet low-frequency coecients Here each set of low-frequency coecient is a two-dimensional signal Referringto equation (1) we can assign a coecient to each two-dimensional signal and obtain the fused two-dimensionalsignal (ie the wavelet low-frequency coecients of thefused image) after linear summationis study explored thefeasibility of applying the spatial lter coecients of CCAfusion and maximum contrast fusion to T-F image fusionwhen fusing six electrode signals by using the T-F imagefusion method at the same time Since the fusion eect ofminimum energy fusion was poor the spatial lter co-ecients of the minimum energy fusion were not used heree spatial lter coecients of the maximum contrast fusionand CCA fusion at the frequency f were obtained byequations (9) and (10) and are set to Vmax and Vcca re-spectively Vmax and Vcca are two vectors with dimensions of6times1 where Vmax(1 1) and Vcca(1 1) represent the rstelements of the vectors Vmax and Vcca Since the T-F imagefusion method was performed on the six electrode signalsequation (17) is transformed into equation (19) where Vcorresponds to the spatial lter coecient Vmax or Vcca erest of the analysis process is the same as that in Section 210e same analysis steps were performed for all 20 targets(7ndash108Hz) e Welch spectrum analysis (the parametersare the same as in Section 32) was performed on the ob-tained signal after T-F image fusion Figures 8(a) and 8(b)show the Welch power spectra plotted using the data ofsubject 4 in the fourth round of the experiment Figure 8(a)shows the power spectrum plotted using spatial lter co-ecients of CCA fusion and Figure 8(b) shows the Welchpower spectrum plotted using spatial lter coecients ofmaximum contrast fusion where f represents the stimulusfrequency and the red circle indicates the amplitude at thestimulus frequency Figures 8(a) and 8(b) show that theamplitudes at the stimulation frequencies are prominente spatial lter coecients of CCA fusion and maximumcontrast fusion are eective for T-F image fusion us theT-F image fusion method proposed in this study focuses onthe fusion of wavelet low-frequency coecients of multiple

0

02

04

06

08

1

12

Acc

urac

y

Subject1 2 3 4 5 6 7 8 9 10

T-F image fusionCCA fusion

Minimum energy fusionMaximum contrast fusion

Figure 6 e accuracy of each subject under four fusion methods

0

02

04

06

08

1

12

Ave

rage

accu

racy

T-F image fusionCCA fusionMinimum energy fusionMaximum contrast fusion

Methods

Figure 7 e average accuracy of ten subjects under four fusionmethods

Computational Intelligence and Neuroscience 11

images (by assigning a coecient to each two-dimensionalsignal and obtaining the fused two-dimensional signal afterlinear summation) e premise of the above test results is

that we know the frequency of the signal to be fused If theonline test method is used (see details in Section 33) thesame online fusion results as CCA fusion and maximum

f = 7 Hz

times104

012

10 20 30 400

f = 72 Hz

times104

012

10 20 30 400

f = 74 Hz

times104

024

10 20 30 400

f = 76 Hz

times104

024

10 20 30 400

f = 78 Hz

times104

024

10 20 30 400

f = 8 Hz

times104

012

10 20 30 400

f = 82 Hz

times104

024

10 20 30 400

f = 84 Hz

times104

05

10

10 20 30 400

f = 86 Hz

times104

024

10 20 30 400

f = 88 Hz

times104

024

10 20 30 400

f = 9 Hz

times104

012

10 20 30 400

f = 92 Hz

times104

024

10 20 30 400

f = 94 Hz

times104

05

10

10 20 30 400

f = 96 Hz

times104

024

10 20 30 400

f = 98 Hz

times104

012

10 20 30 400

f = 10 Hz

times104

012

10 20 30 400

f = 102 Hz

times104

012

10 20 30 400

f = 104 Hz

times104

012

10 20 30 400

f = 106 Hz

times104

024

10 20 30 400

f = 108 Hz

times104

012

10 20 30 400Frequency (Hz)

Am

plitu

de V

2 (H

z)

(a)

f = 7 Hz

0

2

4

10 20 30 400

f = 8 Hz

0

2

4

10 20 30 400

f = 9 Hz

0

2

4

10 20 30 400

f = 10 Hz

0

2

4

10 20 30 400

f = 72 Hz0

2

4

10 20 30 400

f = 82 Hz0

5

10

10 20 30 400

f = 92 Hz0

2

4

10 20 30 400

f = 102 Hz0

2

4

10 20 30 400

f = 74 Hz

0

2

4

10 20 30 400

f = 84 Hz

0

10

20

10 20 30 400

f = 94 Hz

0

10

20

10 20 30 400

f = 104 Hz

0

2

4

10 20 30 400

f = 76 Hz0

5

10 20 30 400

f = 86 Hz0

5

10

10 20 30 400

f = 96 Hz0

5

10 20 30 400

f = 106 Hz0

5

10

10 20 30 400

f = 78 Hz0

5

10 20 30 400

f = 88 Hz0

5

10 20 30 400

f = 98 Hz0

2

4

10 20 30 400

f = 108 Hz0

2

4

10 20 30 400Frequency (Hz)

Am

plitu

de V

2 (H

z)

(b)

Figure 8 e Welch power spectrum plotted using the (a) CCA fusion spatial lter coecients and (b) maximum contrast fusion spatiallter coecients

12 Computational Intelligence and Neuroscience

contrast fusion were obtained T-F image fusion usedonly two electrode signals and the better online fusioneffect than that of the CCA fusion and the maximumcontrast fusion was obtained which shows the effec-tiveness of the proposed method Next we will explorethe fusion method of multiple T-F images (more than twoimages) that is to find suitable spatial filter coefficientsfor the wavelet low-frequency components of multipleT-F images

LLNF LLN1 lowastV(1 1) + LLN2 lowastV(2 1) + LLN3 lowastV(3 1)

+ LLN4 lowastV(4 1) + LLN5 lowastV(5 1)

+ LLN6 lowastV(6 1)

(19)

e parameters affect the fusion effect of the proposedmethod We listed the parameters that need to be selectedand the possible values of the parameters in Section 31 Inthis study the grid search method was used to traverse thecombination of all parameters and the best combination ofparameters was found In the experiment we found thatthe number of frequency bins and the Gaussian windowlength can be fixed to 54 and 57 We recommend that theresearchers determine the parameters according to theparameter selection principles and ranges given in Section31 In this study the SSMVEP active component wasenhanced by fusing multichannel signals into a single-channel signal and then the focused target frequencywas identified by performing spectrum analysis on thefused signals is is beneficial for SSMVEP studies basedon spectrum analysis For example Reference [21] pro-posed a frequency and phase mixed coding method in theSSVEP-based brain-computer interface (BCI) which in-creases the number of BCI coding targets by making onefrequency correspond to multiple different phases In thestudy the FFT analysis of the test signal is required to findthe possible focused target frequency and then calculate thephase value at that frequency e proposed method in thisstudy has a positive significance for accurately finding thefocused target frequency in the spectrum Moreover someEEG feature extraction algorithms for the BCI also requirespectrum analysis of the test signals [22 23] erefore themethod proposed in this study has a potential applicationvalue

5 Conclusion

To explore whether T-F domain analysis can achieve betterfusion effects than time-domain analysis this study pro-posed an SSMVEP enhancement method based on T-Fimage fusion e parameters of the T-F image fusionalgorithm were determined by the grid search method andthe electrode signals used for T-F image fusion were se-lected by bipolar fusion e analysis results show that thekey of the T-F image fusion algorithm is the fusion of thewavelet low-frequency components is study comparedthe enhancement effects of minimum energy fusionmaximum contrast fusion CCA fusion and T-F imagefusion on SSMVEP e experimental results show that the

online performance of the T-F image fusion method isbetter than that of the traditional spatial filtering methodswhich indicates that the proposed method is feasible to fuseSSMVEP in the T-F domain

Data Availability

e analytical data used to support the findings of this studyare included within the article and the raw data are availablefrom the corresponding author upon request

Ethical Approval

e subjects provided informed written consent in accor-dance with the protocol approved by the Institutional Re-view Board of Xirsquoan Jiaotong University

Conflicts of Interest

e authors declare that they have no conflicts of interest

Authorsrsquo Contributions

Wenqiang Yan worked on the algorithm and wrote thepaper Guanghua Xu involved in discussions and gavesuggestions throughout this project Longting Chen andXiaowei Zheng worked on the experimental design Allauthors read and approved the final manuscript

Acknowledgments

is research was supported by the National Natural ScienceFoundation of China (NSFC) (no 51775415) We want tothank the subjects for participating in these experiments

Supplementary Materials

MATLAB codes for T-F image fusion minimum energyfusion maximum contrast fusion and CCA fusion (Sup-plementary Materials)

References

[1] W Yan G Xu J Xie et al ldquoFour novel motion paradigmsbased on steady-state motion visual evoked potentialrdquo IEEETransactions on Biomedical Engineering vol 65 no 8pp 1696ndash1704 2018

[2] J R Wolpaw and D J Mcfarland ldquoMultichannel EEG-basedbrain-computer communicationrdquo Electroencephalographyand Clinical Neurophysiology vol 90 no 6 pp 444ndash4491994

[3] B O Peters G Pfurtscheller and H Flyvbjerg ldquoMiningmulti-channel EEG for its information content an ANN-based method for a brain-computer interfacerdquo Neural Net-works vol 11 no 7-8 pp 1429ndash1433 1998

[4] S Parini L Maggi A C Turconi and G Andreoni ldquoA robustand self-paced BCI system based on a four class SSVEPparadigm algorithms and protocols for a high-transfer-ratedirect brain communicationrdquo Computational Intelligence andNeuroscience vol 2009 pp 1ndash11 2009

[5] O Friman I Volosyak and A Graser ldquoMultiple channeldetection of steady-state visual evoked potentials for brain-

Computational Intelligence and Neuroscience 13

computer interfacesrdquo IEEE Transactions on Biomedical En-gineering vol 54 no 4 pp 742ndash750 2007

[6] Z Lin C Zhang WWu and X Gao ldquoFrequency recognitionbased on canonical correlation analysis for SSVEP-basedBCIsrdquo IEEE Transactions on Biomedical Engineering vol 53- no 12 pp 2610ndash2614 2006

[7] M Middendorf G Mcmillan G Calhoun and K S JonesldquoBrain-computer interfaces based on the steady-state visual-evoked responserdquo IEEE Transactions on Rehabilitation En-gineering vol 8 no 2 pp 211ndash214 2000

[8] G R Muller-Putz R Scherer C Brauneis andG Pfurtscheller ldquoSteady-state visual evoked potential(SSVEP)-based communication impact of harmonic fre-quency componentsrdquo Journal of Neural Engineering vol 2no 4 pp 123ndash130 2005

[9] D J Mcfarland L M Mccane S V David and J R WolpawldquoSpatial filter selection for EEG-based communicationrdquoElectroencephalography and Clinical Neurophysiologyvol 103 no 3 pp 386ndash394 1997

[10] H Li B S Manjunath and S K Mitra ldquoMultisensor imagefusion using the wavelet transformrdquo Graphical Models andImage Processing vol 57 no 3 pp 235ndash245 1995

[11] S Li and B Yang ldquoMultifocus image fusion by combiningcurvelet and wavelet transformrdquo Pattern Recognition Lettersvol 29 no 9 pp 1295ndash1301 2008

[12] B Zhang ldquoStudy on image fusion based on different fusionrules of wavelet transformrdquo Acta Photonica Sinica vol 33pp 173ndash176 2004

[13] C DMeyerMatrix Analysis and Applied Linear Algebra Bookand Solutions Manual Society for Industrial and AppliedMathematics Philadelphia PA USA 2000

[14] K J A Sundar M Jahnavi and K Lakshmisaritha ldquoMulti-sensor image fusion based on empirical wavelet transformrdquo inProceedings of the International Conference on ElectricalElectronics Communication Computer and OptimizationTechniques pp 93ndash97 Mysuru India December 2018

[15] L Tian J Ahmed Q Du B Shankar and S Adnan ldquoMultifocus image fusion using combined median and average filterbased hybrid stationary wavelet transform and principalcomponent analysisrdquo International Journal of AdvancedComputer Science and Applications vol 9 no 6 2018

[16] G He S Xing D Dong and X Zhao ldquoPanchromatic andmulti-spectral image fusion method based on two-step sparserepresentation and wavelet transformrdquo in Asia-Pacific Signaland Information Processing Association Summit and Confer-ence pp 259ndash262 Honolulu HI USA January 2018

[17] Y Shin S Lee M Ahn H Cho S C Jun and H-N LeeldquoNoise robustness analysis of sparse representation basedclassification method for non-stationary EEG signal classifi-cationrdquo Biomedical Signal Processing and Control vol 21pp 8ndash18 2015

[18] M Xu L Chen L Zhang et al ldquoA visual parallel-BCI spellerbased on the time-frequency coding strategyrdquo Journal ofNeural Engineering vol 11 no 2 article 026014 2014

[19] J Xie G Xu A Luo et al ldquoe role of visual noise ininfluencing mental load and fatigue in a steady-state motionvisual evoked potential-based brain-computer interfacerdquoSensors vol 17 no 8 p 1873 2017

[20] K Barbe R Pintelon and J Schoukens ldquoWelch methodrevisited nonparametric power spectrum estimation viacircular overlaprdquo IEEE Transactions on Signal Processingvol 58 no 2 pp 553ndash565 2010

[21] C Jia X Gao B Hong and S Gao ldquoFrequency and phasemixed coding in SSVEP-based brain-computer interfacerdquo

IEEE transactions on bio-medical engineering vol 58 no 1pp 200ndash206 2011

[22] C Yang H Wu Z Li W He N Wang and C-Y Su ldquoMindcontrol of a robotic arm with visual fusion technologyrdquo IEEETransactions on Industrial Informatics vol 14 no 9pp 3822ndash3830 2017

[23] J Castillo S Muller E Caicedo and T Bastos ldquoFeatureextraction techniques based on power spectrum for a SSVEP-BCIrdquo in Proceedings of the IEEE 23rd International Sympo-sium on Industrial Electronics (ISIE) Istanbul Turkey June2014

14 Computational Intelligence and Neuroscience

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 7: Steady-StateMotionVisualEvokedPotential(SSMVEP) … · 2019. 2. 14. · wasconductedbyMATLAB’stfrstftfunction.Afterfusing two T-F images, the T-F signal was inversely transformed

image fusion According to the selected combination of pa-rameters the last three rounds of experimental data were usedto compare the enhancement effect of SSMVEP active com-ponents by the method of minimum energy fusion maximumcontrast fusion CCA fusion and T-F image fusion edisadvantage of grid search is that the amount of calculation islarge however considering the difficulty of superparameterselection grid search is still a more reliable method

32 Electrode Signal Selection for T-F Image Fusion e T-Fimage fusion method analyzes two electrode signals ustwo suitable electrodes need to be selected from multipleelectrodes e bipolar fusion signal is obtained by sub-tracting the two original EEG signals In equation (17) wesubtracted the wavelet low-frequency components of the twoT-F images to obtain the wavelet low-frequency componentsof the fused image e subtraction of the low-frequencycomponents removes the common noise in the electrodesignals which is similar to the principle of bipolar fusionerefore the two electrode signals with the best bipolarfusion effect were used in the T-F image fusion For SSVEPor SSMVEP stimulation the Oz electrode usually has thestrongest response [18] and is most commonly used toanalyze SSVEP or SSMVEP [19] e Oz electrode was usedas one of the analysis electrodes for bipolar fusion In thisstudy the Oz electrode was fused with the other five elec-trode signals so that the best combination of electrodes canbe selected When bipolar fusion is used the Oz electrode asa reducing electrode or a reduced electrode will not affect theexperimental results For example Oz-POz and POz-Ozhave the same fusion effect In this study the Oz elec-trode was used as the reduced electrode Twenty trials persubject were used for electrode selection e Welch powerspectrum analysis [20] was performed on the obtainedsignals of bipolar fusion for each subject e Gaussianwindow was used and the length of the window was 5 s eoverlapping was 256 and the number of discrete Fouriertransform (DFT) points was 6000 for the Welch periodo-gram 20 focused targets correspond to 20 stimuli fre-quencies and the frequencies corresponding to the highestpower spectrum amplitudes at the 20 frequencies weredetermined as the focused target frequency e high rec-ognition accuracy indicates that the amplitude of the powerspectrum at the stimulation frequency is prominent and itindicates the effectiveness of the fusion method for theenhancement of the SSMVEP active component Table 1shows the recognition accuracy of all 10 subjects under thebipolar fusion e Oz electrode was fused with differentelectrodes and obtained different fusion effects Moreoverdue to the differences among individuals the electrodecombination with the highest recognition accuracy persubject was also different e asterisk (lowast) marks the bipolarfusion electrode group with the highest recognition accuracyof each subject and the corresponding electrode combi-nation was used in T-F image fusion

33 Comparison of Enhancement Effects of Minimum EnergyFusion Maximum Contrast Fusion CCA Fusion and T-F

Image Fusion on SSMVEP Active Components T-F imagefusion used two channel signals and CCA fusion maximumcontrast fusion and minimum energy fusion used all sixchannel signals In this study the accuracy was used as theevaluation standard of the fusion effect e high onlineaccuracy indicates that the amplitude at stimulus frequencyis prominent which shows the effectiveness of the fusionmethod For T-F image fusion the Welch power spectrumanalysis (the parameters are the same as in Section 32) wasperformed on the obtained signal after the T-F image fusionand the frequency with the highest amplitude at twentystimulus frequencies was identified as the focused targetfrequency e minimum energy fusion maximum contrastfusion and CCA fusion require prior knowledge of thefrequency of the signal to be fused However the frequencyof the signal to be fused cannot be determined because theuserrsquos focused target cannot be determined beforehand Forthe current tested signal the minimum energy fusionmaximum contrast fusion and CCA fusion need to firstperform the fusion analysis at all stimulus frequencies andthen the frequency with the highest amplitude was iden-tified as the focused target frequency Take CCA fusion as anexample and assume that the current tested signal is Y First20 template signals are set according to equation (11) andthen 20 spatial filter coefficients are obtained according toequation (10) e tested signal is fused with 20 spatial filtercoefficients respectively e obtained 20 sets of vectors areseparately analyzed by the Welch power spectrum (theparameters are the same as in Section 32) to obtain theamplitude at the corresponding frequency Finally thefrequency with the highest amplitude is identified as thefocused target frequency e first three rounds of experi-mental data were used to select the parameters of the T-Fimage fusion (see details in Section 31) and the last threerounds of the experimental data were used to test the onlinefusion effects under the four methods Figures 4(a)ndash4(d)show the analysis results plotted using the data of subject 4 inthe fourth round of the experiment Figure 4(a) shows thepower spectrum results of the 20 targets under T-F imagefusion Figure 4(b) shows the power spectrum results of 20targets under CCA fusion Figure 4(c) shows the powerspectrum results of 20 targets under minimum energy

Table 1 Accuracies of 10 subjects under bipolar fusion

SubjectsBipolar fusion

Oz-PO7 Oz-PO8 Oz-PO3 Oz-POz Oz-PO4S1 02 07 045 015 095lowastS2 025 0 03lowast 01 02S3 095lowast 085 085 04 08S4 08lowast 01 07 035 035S5 02lowast 015 015 015 01S6 095lowast 03 095lowast 085 07S7 02 015 025 06lowast 055S8 01 005 025 03 09lowastS9 03 015 0 035lowast 02S10 035 035 05 08lowast 04lowaste bipolar fusion electrode group with the highest recognition accuracyof each subject

Computational Intelligence and Neuroscience 7

times10ndash7

0

05

1f = 72 Hz

times10ndash7

0

1

2f = 76 Hz

times10ndash7

0

1

2f = 78 Hz

0

05

1times10ndash7 f = 7 Hz

times10ndash7

0

1

2f = 74 Hz

times10ndash7

0

05

1f = 92 Hz

times10ndash7

0

2

4f = 96 Hz

times10ndash7

0

1

2f = 98 Hz

times10ndash7

0

1

2f = 9 Hz

times10ndash7

0

2

4f = 94 Hz

times10ndash7

0

05

1f = 102 Hz

times10ndash7

0

1

2f = 106 Hz

times10ndash7

0

1

2f = 108 Hz

times10ndash7

0

1

2f = 10 Hz

times10ndash7

0

1

2f = 104 Hz

times10ndash7

0

2

4f = 82 Hz

times10ndash7

0

2

4f = 86 Hz

times10ndash7

0

2

4f = 88 Hz

times10ndash7

0

1

2f = 8 Hz

times10ndash7

0

05

1 f = 84 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(a)

0

2

4times10ndash13 f = 7 Hz

times10ndash13

0

5f = 72 Hz

times10ndash12

0

05

1f = 74 Hz

times10ndash12

0

05

1f = 76 Hz

times10ndash13

0

5f = 78 Hz

times10ndash13

0

2

4f = 9 Hz

times10ndash13

0

5f = 92 Hz

times10ndash12

0

1

2f = 94 Hz

times10ndash12

0

05

1f = 96 Hz

times10ndash13

0

2

4f = 98 Hz

times10ndash13

0

2

4f = 10 Hz

times10ndash12

0

05

1f = 102 Hz

times10ndash12

0

05

1f = 104 Hz

times10ndash12

0

05

1f = 106 Hz

times10ndash13

0

5f = 108 Hz

times10ndash13

0

2

4f = 8 Hz

times10ndash12

0

05

1f = 82 Hz

times10ndash12

0

1

2f = 84 Hz

times10ndash12

0

05

1f = 86 Hz

times10ndash13

0

5f = 88 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(b)

Figure 4 Continued

8 Computational Intelligence and Neuroscience

times10ndash13

0

05

1f = 7 Hz

times10ndash13

0

1

2f = 74 Hz

times10ndash13

0

05

1f = 72 Hz

times10ndash13

0

1

2f = 76 Hz

times10ndash14

0

2

4f = 78 Hz

times10ndash13

0

05

1f = 9 Hz

times10ndash13

0

1

2f = 94 Hz

times10ndash13

0

1

2f = 92 Hz

times10ndash13

0

05

1f = 96 Hz

times10ndash13

0

05

1f = 98 Hz

times10ndash13

0

05

1f = 10 Hz

times10ndash13

0

05

1f = 104 Hz

times10ndash13

0

05

1f = 102 Hz

times10ndash13

0

05

1f = 106 Hz

times10ndash13

0

1

2f = 108 Hz

times10ndash13

0

05

1f = 8 Hz

times10ndash13

0

1

2f = 84 Hz

times10ndash13

0

1

2f = 82 Hz

times10ndash13

0

05

1f = 86 Hz

times10ndash13

0

05

1f = 88 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(c)

times10ndash4

0

05

1f = 72 Hz

times10ndash4

0

1

2f = 74 Hz

times10ndash4

0

1

2f = 76 Hz

times10ndash4

0

05

1f = 78 Hz

times10ndash4

0

05

1f = 7 Hz

times10ndash4

0

05

1f = 92 Hz

times10ndash4

0

2

4f = 94 Hz

times10ndash4

0

1

2f = 96 Hz

times10ndash5

0

5f = 98 Hz

times10ndash5

0

2

4f = 9 Hz

times10ndash4

0

1

2f = 102 Hz

times10ndash4

0

05

1f = 104 Hz

times10ndash4

0

1

2f = 106 Hz

times10ndash4

0

05

1f = 108 Hz

times10ndash4

0

05

1f = 10 Hz

times10ndash4

0

1

2f = 82 Hz

times10ndash4

0

2

4f = 84 Hz

times10ndash4

0

1

2f = 86 Hz

times10ndash4

0

05

1f = 88 Hz

times10ndash4

0

05

1f = 8 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(d)

Figure 4 e power spectrum results of 20 targets under (a) T-F image fusion (b) CCA fusion (c) minimum energy fusion and (d)maximum contrast fusion (the discrete sequence frequency values from left to right are 7 8 910 72 82 92 102 74 84 94 104 76 86 96 10678 88 98 108 Hz respectively)

Computational Intelligence and Neuroscience 9

fusion and Figure 4(d) shows the power spectrum results of20 targets under maximum contrast fusion In the stalk plotthe discrete sequence frequency values from left to right are7 8 9 10 72 82 92 102 74 84 94 104 76 86 96 106 78 8898 108 respectively where f represents the stimulationfrequency of the corresponding target and the green dotmarks the amplitude of the corresponding target stimulusfrequency e red flag f represents the target was mis-identified and the blue flag f represents the target wascorrectly identified Figures 4(a)ndash4(d) show that the T-Fimage fusion method has the best online performance andthe minimum energy fusion method has the worst onlineperformance Maximum contrast fusion and CCA fusionhave similar fusion effects

In online analysis the power spectrum amplitudes of thetest signal at all stimulus frequencies were calculated andthen the frequency corresponding to the maximum am-plitude was identified as the focused target frequency Errorrecognition occurs when the amplitude at the focused targetfrequency is lower than the amplitudes at the nonfocusedtarget frequencieserefore the power spectrum amplitudeat the focused target frequency can be regarded as the activecomponent of the signal and the power spectrum ampli-tudes at the remaining nonfocused target frequencies can beregarded as noise e SNR in this study was defined as theratio of the power spectrum amplitude at the focused targetfrequency to the mean of the power spectrum amplitudes atthe remaining nonfocused target frequencies e SNRs often subjects at all stimulus frequencies were superimposedand averaged and the experimental results are shown inFigure 5 It can be seen from Figure 5 that the T-F imagefusion method obtained the highest SNR which indicatesthat the T-F image fusion method proposed in this study caneffectively enhance the SSMVEP SNR

e online accuracies of each subject in the three roundsof experiments were superimposed and averaged e ex-perimental results are shown in Figure 6 It can be seen thatmost subjects obtained highest online accuracy under theT-F image fusion and some subjects obtained highest onlineaccuracy under CCA fusion and maximum contrast fusionOnly subject 9 obtained the highest online accuracy underthe minimum energy fusion e online accuracies of all thesubjects were superimposed and averaged e experimentalresults are shown in Figure 7 e experimental results showthat the online performance of the T-F image fusion methodis better e online accuracy of the T-F image fusionmethod is 617 606 and 3050 higher than that of theCCA fusion maximum contrast fusion and the minimumenergy fusion respectively e paired t-test shows signifi-cant differences between the accuracies of T-F image fusionand minimum energy fusion (p 00174) Online accuracyanalysis results show that the proposedmethod is better thanthe traditional time-domain fusion method

4 Discussion

e SSMVEP collected from the scalp contains a lot of noiseand requires an effective signal processing method to en-hance the active components of SSMVEP Spatial filtering

methods utilize EEG information of multiple channels andexert positive significance to enhance the active componentsof the SSMVEP At present the commonly used spatialfiltering methods include average fusion native fusion bi-polar fusion Laplacian fusion CAR fusion CCA fusionminimum energy fusion and maximum contrast fusionefirst five spatial filtering methods can only process thepreselected electrode signals Minimum energy fusionmaximum contrast fusion and CCA fusion improve theabove problem and they can be used to fuse any number ofelectrode signals with better fusion effects Minimum energyfusion maximum contrast fusion and CCA fusion fuseSSMVEP in the time domainis study first put forward theidea of fusing EEG in the T-F domain and proposed a T-Fimage fusion method We compared the enhancement ef-fects of minimum energy fusion maximum contrast fusionCCA fusion and T-F image fusion on SSMVEP It is verifiedby the test data that the proposed method is better than thetraditional time-domain fusion method

T-F analysis is a good choice for the transformation ofthe EEG data from one dimension to two dimension EEGtransformed into the T-F domain can be used as an image foranalysis e STFT has an inverse transform which caninverse transform the fused T-F image into a time-domainsignal erefore the STFT was used to transform the EEGfrom the time domain to the T-F domain e two-dimensional wavelet transform can fuse two images intoone to achieve the purpose of multichannel EEG fusionAfter two-dimensional wavelet decomposition the low-frequency components of the two subimages were sub-tracted to obtain the low-frequency components of the fusedimage and the maximum value of the high-frequencycomponents of the two subimages was taken as the high-frequency component of the fused imagee low-frequencycomponents after wavelet decomposition represent the ac-tive components of the SSMVEPe subtraction of the low-

0Methods

5

10

15

Ave

rage

SN

R

T-F image fusionCCA fusionMinimum energy fusionMaximum contrast fusion

Figure 5 e average SNR of ten subjects at all stimulus fre-quencies under four methods

10 Computational Intelligence and Neuroscience

frequency components can remove the common noise in theelectrode signals is is similar to bipolar fusion whichsubtracts two electrode signals in the time domain We alsocompared the enhancement eects of bipolar fusion and theproposed method on SSMVEP active components eelectrodes used for bipolar fusion are the same as those usedin T-F image fusion e results show that the proposedmethod is better than the bipolar fusion (the online accuracyof bipolar fusion is 7152) In this study the fused imagewas ltered by mean lter to further achieve low-pass l-tering We also tried to remove the mean ltering step aftercompleting the T-F image fusion e results show thatremoving themean ltering step reduced the fusion eect Inthis study the mean ltering step was performed after T-Fimage fusion We tested the fusion eect when the meanltering step was performed before T-F image fusion(performing mean ltering step on the subimage afterSTFT) For some subjects it was better to perform the meanltering step after T-F image fusion erefore we recom-mend that researchers follow the analysis steps in this study

e wavelet low-frequency coecients (see details inequation (17)) of the fused image have an important in-uence on the fusion eect If we perform T-F image fusionon six electrode signals at the same time we will get six setsof wavelet low-frequency coecients Here each set of low-frequency coecient is a two-dimensional signal Referringto equation (1) we can assign a coecient to each two-dimensional signal and obtain the fused two-dimensionalsignal (ie the wavelet low-frequency coecients of thefused image) after linear summationis study explored thefeasibility of applying the spatial lter coecients of CCAfusion and maximum contrast fusion to T-F image fusionwhen fusing six electrode signals by using the T-F imagefusion method at the same time Since the fusion eect ofminimum energy fusion was poor the spatial lter co-ecients of the minimum energy fusion were not used heree spatial lter coecients of the maximum contrast fusionand CCA fusion at the frequency f were obtained byequations (9) and (10) and are set to Vmax and Vcca re-spectively Vmax and Vcca are two vectors with dimensions of6times1 where Vmax(1 1) and Vcca(1 1) represent the rstelements of the vectors Vmax and Vcca Since the T-F imagefusion method was performed on the six electrode signalsequation (17) is transformed into equation (19) where Vcorresponds to the spatial lter coecient Vmax or Vcca erest of the analysis process is the same as that in Section 210e same analysis steps were performed for all 20 targets(7ndash108Hz) e Welch spectrum analysis (the parametersare the same as in Section 32) was performed on the ob-tained signal after T-F image fusion Figures 8(a) and 8(b)show the Welch power spectra plotted using the data ofsubject 4 in the fourth round of the experiment Figure 8(a)shows the power spectrum plotted using spatial lter co-ecients of CCA fusion and Figure 8(b) shows the Welchpower spectrum plotted using spatial lter coecients ofmaximum contrast fusion where f represents the stimulusfrequency and the red circle indicates the amplitude at thestimulus frequency Figures 8(a) and 8(b) show that theamplitudes at the stimulation frequencies are prominente spatial lter coecients of CCA fusion and maximumcontrast fusion are eective for T-F image fusion us theT-F image fusion method proposed in this study focuses onthe fusion of wavelet low-frequency coecients of multiple

0

02

04

06

08

1

12

Acc

urac

y

Subject1 2 3 4 5 6 7 8 9 10

T-F image fusionCCA fusion

Minimum energy fusionMaximum contrast fusion

Figure 6 e accuracy of each subject under four fusion methods

0

02

04

06

08

1

12

Ave

rage

accu

racy

T-F image fusionCCA fusionMinimum energy fusionMaximum contrast fusion

Methods

Figure 7 e average accuracy of ten subjects under four fusionmethods

Computational Intelligence and Neuroscience 11

images (by assigning a coecient to each two-dimensionalsignal and obtaining the fused two-dimensional signal afterlinear summation) e premise of the above test results is

that we know the frequency of the signal to be fused If theonline test method is used (see details in Section 33) thesame online fusion results as CCA fusion and maximum

f = 7 Hz

times104

012

10 20 30 400

f = 72 Hz

times104

012

10 20 30 400

f = 74 Hz

times104

024

10 20 30 400

f = 76 Hz

times104

024

10 20 30 400

f = 78 Hz

times104

024

10 20 30 400

f = 8 Hz

times104

012

10 20 30 400

f = 82 Hz

times104

024

10 20 30 400

f = 84 Hz

times104

05

10

10 20 30 400

f = 86 Hz

times104

024

10 20 30 400

f = 88 Hz

times104

024

10 20 30 400

f = 9 Hz

times104

012

10 20 30 400

f = 92 Hz

times104

024

10 20 30 400

f = 94 Hz

times104

05

10

10 20 30 400

f = 96 Hz

times104

024

10 20 30 400

f = 98 Hz

times104

012

10 20 30 400

f = 10 Hz

times104

012

10 20 30 400

f = 102 Hz

times104

012

10 20 30 400

f = 104 Hz

times104

012

10 20 30 400

f = 106 Hz

times104

024

10 20 30 400

f = 108 Hz

times104

012

10 20 30 400Frequency (Hz)

Am

plitu

de V

2 (H

z)

(a)

f = 7 Hz

0

2

4

10 20 30 400

f = 8 Hz

0

2

4

10 20 30 400

f = 9 Hz

0

2

4

10 20 30 400

f = 10 Hz

0

2

4

10 20 30 400

f = 72 Hz0

2

4

10 20 30 400

f = 82 Hz0

5

10

10 20 30 400

f = 92 Hz0

2

4

10 20 30 400

f = 102 Hz0

2

4

10 20 30 400

f = 74 Hz

0

2

4

10 20 30 400

f = 84 Hz

0

10

20

10 20 30 400

f = 94 Hz

0

10

20

10 20 30 400

f = 104 Hz

0

2

4

10 20 30 400

f = 76 Hz0

5

10 20 30 400

f = 86 Hz0

5

10

10 20 30 400

f = 96 Hz0

5

10 20 30 400

f = 106 Hz0

5

10

10 20 30 400

f = 78 Hz0

5

10 20 30 400

f = 88 Hz0

5

10 20 30 400

f = 98 Hz0

2

4

10 20 30 400

f = 108 Hz0

2

4

10 20 30 400Frequency (Hz)

Am

plitu

de V

2 (H

z)

(b)

Figure 8 e Welch power spectrum plotted using the (a) CCA fusion spatial lter coecients and (b) maximum contrast fusion spatiallter coecients

12 Computational Intelligence and Neuroscience

contrast fusion were obtained T-F image fusion usedonly two electrode signals and the better online fusioneffect than that of the CCA fusion and the maximumcontrast fusion was obtained which shows the effec-tiveness of the proposed method Next we will explorethe fusion method of multiple T-F images (more than twoimages) that is to find suitable spatial filter coefficientsfor the wavelet low-frequency components of multipleT-F images

LLNF LLN1 lowastV(1 1) + LLN2 lowastV(2 1) + LLN3 lowastV(3 1)

+ LLN4 lowastV(4 1) + LLN5 lowastV(5 1)

+ LLN6 lowastV(6 1)

(19)

e parameters affect the fusion effect of the proposedmethod We listed the parameters that need to be selectedand the possible values of the parameters in Section 31 Inthis study the grid search method was used to traverse thecombination of all parameters and the best combination ofparameters was found In the experiment we found thatthe number of frequency bins and the Gaussian windowlength can be fixed to 54 and 57 We recommend that theresearchers determine the parameters according to theparameter selection principles and ranges given in Section31 In this study the SSMVEP active component wasenhanced by fusing multichannel signals into a single-channel signal and then the focused target frequencywas identified by performing spectrum analysis on thefused signals is is beneficial for SSMVEP studies basedon spectrum analysis For example Reference [21] pro-posed a frequency and phase mixed coding method in theSSVEP-based brain-computer interface (BCI) which in-creases the number of BCI coding targets by making onefrequency correspond to multiple different phases In thestudy the FFT analysis of the test signal is required to findthe possible focused target frequency and then calculate thephase value at that frequency e proposed method in thisstudy has a positive significance for accurately finding thefocused target frequency in the spectrum Moreover someEEG feature extraction algorithms for the BCI also requirespectrum analysis of the test signals [22 23] erefore themethod proposed in this study has a potential applicationvalue

5 Conclusion

To explore whether T-F domain analysis can achieve betterfusion effects than time-domain analysis this study pro-posed an SSMVEP enhancement method based on T-Fimage fusion e parameters of the T-F image fusionalgorithm were determined by the grid search method andthe electrode signals used for T-F image fusion were se-lected by bipolar fusion e analysis results show that thekey of the T-F image fusion algorithm is the fusion of thewavelet low-frequency components is study comparedthe enhancement effects of minimum energy fusionmaximum contrast fusion CCA fusion and T-F imagefusion on SSMVEP e experimental results show that the

online performance of the T-F image fusion method isbetter than that of the traditional spatial filtering methodswhich indicates that the proposed method is feasible to fuseSSMVEP in the T-F domain

Data Availability

e analytical data used to support the findings of this studyare included within the article and the raw data are availablefrom the corresponding author upon request

Ethical Approval

e subjects provided informed written consent in accor-dance with the protocol approved by the Institutional Re-view Board of Xirsquoan Jiaotong University

Conflicts of Interest

e authors declare that they have no conflicts of interest

Authorsrsquo Contributions

Wenqiang Yan worked on the algorithm and wrote thepaper Guanghua Xu involved in discussions and gavesuggestions throughout this project Longting Chen andXiaowei Zheng worked on the experimental design Allauthors read and approved the final manuscript

Acknowledgments

is research was supported by the National Natural ScienceFoundation of China (NSFC) (no 51775415) We want tothank the subjects for participating in these experiments

Supplementary Materials

MATLAB codes for T-F image fusion minimum energyfusion maximum contrast fusion and CCA fusion (Sup-plementary Materials)

References

[1] W Yan G Xu J Xie et al ldquoFour novel motion paradigmsbased on steady-state motion visual evoked potentialrdquo IEEETransactions on Biomedical Engineering vol 65 no 8pp 1696ndash1704 2018

[2] J R Wolpaw and D J Mcfarland ldquoMultichannel EEG-basedbrain-computer communicationrdquo Electroencephalographyand Clinical Neurophysiology vol 90 no 6 pp 444ndash4491994

[3] B O Peters G Pfurtscheller and H Flyvbjerg ldquoMiningmulti-channel EEG for its information content an ANN-based method for a brain-computer interfacerdquo Neural Net-works vol 11 no 7-8 pp 1429ndash1433 1998

[4] S Parini L Maggi A C Turconi and G Andreoni ldquoA robustand self-paced BCI system based on a four class SSVEPparadigm algorithms and protocols for a high-transfer-ratedirect brain communicationrdquo Computational Intelligence andNeuroscience vol 2009 pp 1ndash11 2009

[5] O Friman I Volosyak and A Graser ldquoMultiple channeldetection of steady-state visual evoked potentials for brain-

Computational Intelligence and Neuroscience 13

computer interfacesrdquo IEEE Transactions on Biomedical En-gineering vol 54 no 4 pp 742ndash750 2007

[6] Z Lin C Zhang WWu and X Gao ldquoFrequency recognitionbased on canonical correlation analysis for SSVEP-basedBCIsrdquo IEEE Transactions on Biomedical Engineering vol 53- no 12 pp 2610ndash2614 2006

[7] M Middendorf G Mcmillan G Calhoun and K S JonesldquoBrain-computer interfaces based on the steady-state visual-evoked responserdquo IEEE Transactions on Rehabilitation En-gineering vol 8 no 2 pp 211ndash214 2000

[8] G R Muller-Putz R Scherer C Brauneis andG Pfurtscheller ldquoSteady-state visual evoked potential(SSVEP)-based communication impact of harmonic fre-quency componentsrdquo Journal of Neural Engineering vol 2no 4 pp 123ndash130 2005

[9] D J Mcfarland L M Mccane S V David and J R WolpawldquoSpatial filter selection for EEG-based communicationrdquoElectroencephalography and Clinical Neurophysiologyvol 103 no 3 pp 386ndash394 1997

[10] H Li B S Manjunath and S K Mitra ldquoMultisensor imagefusion using the wavelet transformrdquo Graphical Models andImage Processing vol 57 no 3 pp 235ndash245 1995

[11] S Li and B Yang ldquoMultifocus image fusion by combiningcurvelet and wavelet transformrdquo Pattern Recognition Lettersvol 29 no 9 pp 1295ndash1301 2008

[12] B Zhang ldquoStudy on image fusion based on different fusionrules of wavelet transformrdquo Acta Photonica Sinica vol 33pp 173ndash176 2004

[13] C DMeyerMatrix Analysis and Applied Linear Algebra Bookand Solutions Manual Society for Industrial and AppliedMathematics Philadelphia PA USA 2000

[14] K J A Sundar M Jahnavi and K Lakshmisaritha ldquoMulti-sensor image fusion based on empirical wavelet transformrdquo inProceedings of the International Conference on ElectricalElectronics Communication Computer and OptimizationTechniques pp 93ndash97 Mysuru India December 2018

[15] L Tian J Ahmed Q Du B Shankar and S Adnan ldquoMultifocus image fusion using combined median and average filterbased hybrid stationary wavelet transform and principalcomponent analysisrdquo International Journal of AdvancedComputer Science and Applications vol 9 no 6 2018

[16] G He S Xing D Dong and X Zhao ldquoPanchromatic andmulti-spectral image fusion method based on two-step sparserepresentation and wavelet transformrdquo in Asia-Pacific Signaland Information Processing Association Summit and Confer-ence pp 259ndash262 Honolulu HI USA January 2018

[17] Y Shin S Lee M Ahn H Cho S C Jun and H-N LeeldquoNoise robustness analysis of sparse representation basedclassification method for non-stationary EEG signal classifi-cationrdquo Biomedical Signal Processing and Control vol 21pp 8ndash18 2015

[18] M Xu L Chen L Zhang et al ldquoA visual parallel-BCI spellerbased on the time-frequency coding strategyrdquo Journal ofNeural Engineering vol 11 no 2 article 026014 2014

[19] J Xie G Xu A Luo et al ldquoe role of visual noise ininfluencing mental load and fatigue in a steady-state motionvisual evoked potential-based brain-computer interfacerdquoSensors vol 17 no 8 p 1873 2017

[20] K Barbe R Pintelon and J Schoukens ldquoWelch methodrevisited nonparametric power spectrum estimation viacircular overlaprdquo IEEE Transactions on Signal Processingvol 58 no 2 pp 553ndash565 2010

[21] C Jia X Gao B Hong and S Gao ldquoFrequency and phasemixed coding in SSVEP-based brain-computer interfacerdquo

IEEE transactions on bio-medical engineering vol 58 no 1pp 200ndash206 2011

[22] C Yang H Wu Z Li W He N Wang and C-Y Su ldquoMindcontrol of a robotic arm with visual fusion technologyrdquo IEEETransactions on Industrial Informatics vol 14 no 9pp 3822ndash3830 2017

[23] J Castillo S Muller E Caicedo and T Bastos ldquoFeatureextraction techniques based on power spectrum for a SSVEP-BCIrdquo in Proceedings of the IEEE 23rd International Sympo-sium on Industrial Electronics (ISIE) Istanbul Turkey June2014

14 Computational Intelligence and Neuroscience

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 8: Steady-StateMotionVisualEvokedPotential(SSMVEP) … · 2019. 2. 14. · wasconductedbyMATLAB’stfrstftfunction.Afterfusing two T-F images, the T-F signal was inversely transformed

times10ndash7

0

05

1f = 72 Hz

times10ndash7

0

1

2f = 76 Hz

times10ndash7

0

1

2f = 78 Hz

0

05

1times10ndash7 f = 7 Hz

times10ndash7

0

1

2f = 74 Hz

times10ndash7

0

05

1f = 92 Hz

times10ndash7

0

2

4f = 96 Hz

times10ndash7

0

1

2f = 98 Hz

times10ndash7

0

1

2f = 9 Hz

times10ndash7

0

2

4f = 94 Hz

times10ndash7

0

05

1f = 102 Hz

times10ndash7

0

1

2f = 106 Hz

times10ndash7

0

1

2f = 108 Hz

times10ndash7

0

1

2f = 10 Hz

times10ndash7

0

1

2f = 104 Hz

times10ndash7

0

2

4f = 82 Hz

times10ndash7

0

2

4f = 86 Hz

times10ndash7

0

2

4f = 88 Hz

times10ndash7

0

1

2f = 8 Hz

times10ndash7

0

05

1 f = 84 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(a)

0

2

4times10ndash13 f = 7 Hz

times10ndash13

0

5f = 72 Hz

times10ndash12

0

05

1f = 74 Hz

times10ndash12

0

05

1f = 76 Hz

times10ndash13

0

5f = 78 Hz

times10ndash13

0

2

4f = 9 Hz

times10ndash13

0

5f = 92 Hz

times10ndash12

0

1

2f = 94 Hz

times10ndash12

0

05

1f = 96 Hz

times10ndash13

0

2

4f = 98 Hz

times10ndash13

0

2

4f = 10 Hz

times10ndash12

0

05

1f = 102 Hz

times10ndash12

0

05

1f = 104 Hz

times10ndash12

0

05

1f = 106 Hz

times10ndash13

0

5f = 108 Hz

times10ndash13

0

2

4f = 8 Hz

times10ndash12

0

05

1f = 82 Hz

times10ndash12

0

1

2f = 84 Hz

times10ndash12

0

05

1f = 86 Hz

times10ndash13

0

5f = 88 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(b)

Figure 4 Continued

8 Computational Intelligence and Neuroscience

times10ndash13

0

05

1f = 7 Hz

times10ndash13

0

1

2f = 74 Hz

times10ndash13

0

05

1f = 72 Hz

times10ndash13

0

1

2f = 76 Hz

times10ndash14

0

2

4f = 78 Hz

times10ndash13

0

05

1f = 9 Hz

times10ndash13

0

1

2f = 94 Hz

times10ndash13

0

1

2f = 92 Hz

times10ndash13

0

05

1f = 96 Hz

times10ndash13

0

05

1f = 98 Hz

times10ndash13

0

05

1f = 10 Hz

times10ndash13

0

05

1f = 104 Hz

times10ndash13

0

05

1f = 102 Hz

times10ndash13

0

05

1f = 106 Hz

times10ndash13

0

1

2f = 108 Hz

times10ndash13

0

05

1f = 8 Hz

times10ndash13

0

1

2f = 84 Hz

times10ndash13

0

1

2f = 82 Hz

times10ndash13

0

05

1f = 86 Hz

times10ndash13

0

05

1f = 88 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(c)

times10ndash4

0

05

1f = 72 Hz

times10ndash4

0

1

2f = 74 Hz

times10ndash4

0

1

2f = 76 Hz

times10ndash4

0

05

1f = 78 Hz

times10ndash4

0

05

1f = 7 Hz

times10ndash4

0

05

1f = 92 Hz

times10ndash4

0

2

4f = 94 Hz

times10ndash4

0

1

2f = 96 Hz

times10ndash5

0

5f = 98 Hz

times10ndash5

0

2

4f = 9 Hz

times10ndash4

0

1

2f = 102 Hz

times10ndash4

0

05

1f = 104 Hz

times10ndash4

0

1

2f = 106 Hz

times10ndash4

0

05

1f = 108 Hz

times10ndash4

0

05

1f = 10 Hz

times10ndash4

0

1

2f = 82 Hz

times10ndash4

0

2

4f = 84 Hz

times10ndash4

0

1

2f = 86 Hz

times10ndash4

0

05

1f = 88 Hz

times10ndash4

0

05

1f = 8 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(d)

Figure 4 e power spectrum results of 20 targets under (a) T-F image fusion (b) CCA fusion (c) minimum energy fusion and (d)maximum contrast fusion (the discrete sequence frequency values from left to right are 7 8 910 72 82 92 102 74 84 94 104 76 86 96 10678 88 98 108 Hz respectively)

Computational Intelligence and Neuroscience 9

fusion and Figure 4(d) shows the power spectrum results of20 targets under maximum contrast fusion In the stalk plotthe discrete sequence frequency values from left to right are7 8 9 10 72 82 92 102 74 84 94 104 76 86 96 106 78 8898 108 respectively where f represents the stimulationfrequency of the corresponding target and the green dotmarks the amplitude of the corresponding target stimulusfrequency e red flag f represents the target was mis-identified and the blue flag f represents the target wascorrectly identified Figures 4(a)ndash4(d) show that the T-Fimage fusion method has the best online performance andthe minimum energy fusion method has the worst onlineperformance Maximum contrast fusion and CCA fusionhave similar fusion effects

In online analysis the power spectrum amplitudes of thetest signal at all stimulus frequencies were calculated andthen the frequency corresponding to the maximum am-plitude was identified as the focused target frequency Errorrecognition occurs when the amplitude at the focused targetfrequency is lower than the amplitudes at the nonfocusedtarget frequencieserefore the power spectrum amplitudeat the focused target frequency can be regarded as the activecomponent of the signal and the power spectrum ampli-tudes at the remaining nonfocused target frequencies can beregarded as noise e SNR in this study was defined as theratio of the power spectrum amplitude at the focused targetfrequency to the mean of the power spectrum amplitudes atthe remaining nonfocused target frequencies e SNRs often subjects at all stimulus frequencies were superimposedand averaged and the experimental results are shown inFigure 5 It can be seen from Figure 5 that the T-F imagefusion method obtained the highest SNR which indicatesthat the T-F image fusion method proposed in this study caneffectively enhance the SSMVEP SNR

e online accuracies of each subject in the three roundsof experiments were superimposed and averaged e ex-perimental results are shown in Figure 6 It can be seen thatmost subjects obtained highest online accuracy under theT-F image fusion and some subjects obtained highest onlineaccuracy under CCA fusion and maximum contrast fusionOnly subject 9 obtained the highest online accuracy underthe minimum energy fusion e online accuracies of all thesubjects were superimposed and averaged e experimentalresults are shown in Figure 7 e experimental results showthat the online performance of the T-F image fusion methodis better e online accuracy of the T-F image fusionmethod is 617 606 and 3050 higher than that of theCCA fusion maximum contrast fusion and the minimumenergy fusion respectively e paired t-test shows signifi-cant differences between the accuracies of T-F image fusionand minimum energy fusion (p 00174) Online accuracyanalysis results show that the proposedmethod is better thanthe traditional time-domain fusion method

4 Discussion

e SSMVEP collected from the scalp contains a lot of noiseand requires an effective signal processing method to en-hance the active components of SSMVEP Spatial filtering

methods utilize EEG information of multiple channels andexert positive significance to enhance the active componentsof the SSMVEP At present the commonly used spatialfiltering methods include average fusion native fusion bi-polar fusion Laplacian fusion CAR fusion CCA fusionminimum energy fusion and maximum contrast fusionefirst five spatial filtering methods can only process thepreselected electrode signals Minimum energy fusionmaximum contrast fusion and CCA fusion improve theabove problem and they can be used to fuse any number ofelectrode signals with better fusion effects Minimum energyfusion maximum contrast fusion and CCA fusion fuseSSMVEP in the time domainis study first put forward theidea of fusing EEG in the T-F domain and proposed a T-Fimage fusion method We compared the enhancement ef-fects of minimum energy fusion maximum contrast fusionCCA fusion and T-F image fusion on SSMVEP It is verifiedby the test data that the proposed method is better than thetraditional time-domain fusion method

T-F analysis is a good choice for the transformation ofthe EEG data from one dimension to two dimension EEGtransformed into the T-F domain can be used as an image foranalysis e STFT has an inverse transform which caninverse transform the fused T-F image into a time-domainsignal erefore the STFT was used to transform the EEGfrom the time domain to the T-F domain e two-dimensional wavelet transform can fuse two images intoone to achieve the purpose of multichannel EEG fusionAfter two-dimensional wavelet decomposition the low-frequency components of the two subimages were sub-tracted to obtain the low-frequency components of the fusedimage and the maximum value of the high-frequencycomponents of the two subimages was taken as the high-frequency component of the fused imagee low-frequencycomponents after wavelet decomposition represent the ac-tive components of the SSMVEPe subtraction of the low-

0Methods

5

10

15

Ave

rage

SN

R

T-F image fusionCCA fusionMinimum energy fusionMaximum contrast fusion

Figure 5 e average SNR of ten subjects at all stimulus fre-quencies under four methods

10 Computational Intelligence and Neuroscience

frequency components can remove the common noise in theelectrode signals is is similar to bipolar fusion whichsubtracts two electrode signals in the time domain We alsocompared the enhancement eects of bipolar fusion and theproposed method on SSMVEP active components eelectrodes used for bipolar fusion are the same as those usedin T-F image fusion e results show that the proposedmethod is better than the bipolar fusion (the online accuracyof bipolar fusion is 7152) In this study the fused imagewas ltered by mean lter to further achieve low-pass l-tering We also tried to remove the mean ltering step aftercompleting the T-F image fusion e results show thatremoving themean ltering step reduced the fusion eect Inthis study the mean ltering step was performed after T-Fimage fusion We tested the fusion eect when the meanltering step was performed before T-F image fusion(performing mean ltering step on the subimage afterSTFT) For some subjects it was better to perform the meanltering step after T-F image fusion erefore we recom-mend that researchers follow the analysis steps in this study

e wavelet low-frequency coecients (see details inequation (17)) of the fused image have an important in-uence on the fusion eect If we perform T-F image fusionon six electrode signals at the same time we will get six setsof wavelet low-frequency coecients Here each set of low-frequency coecient is a two-dimensional signal Referringto equation (1) we can assign a coecient to each two-dimensional signal and obtain the fused two-dimensionalsignal (ie the wavelet low-frequency coecients of thefused image) after linear summationis study explored thefeasibility of applying the spatial lter coecients of CCAfusion and maximum contrast fusion to T-F image fusionwhen fusing six electrode signals by using the T-F imagefusion method at the same time Since the fusion eect ofminimum energy fusion was poor the spatial lter co-ecients of the minimum energy fusion were not used heree spatial lter coecients of the maximum contrast fusionand CCA fusion at the frequency f were obtained byequations (9) and (10) and are set to Vmax and Vcca re-spectively Vmax and Vcca are two vectors with dimensions of6times1 where Vmax(1 1) and Vcca(1 1) represent the rstelements of the vectors Vmax and Vcca Since the T-F imagefusion method was performed on the six electrode signalsequation (17) is transformed into equation (19) where Vcorresponds to the spatial lter coecient Vmax or Vcca erest of the analysis process is the same as that in Section 210e same analysis steps were performed for all 20 targets(7ndash108Hz) e Welch spectrum analysis (the parametersare the same as in Section 32) was performed on the ob-tained signal after T-F image fusion Figures 8(a) and 8(b)show the Welch power spectra plotted using the data ofsubject 4 in the fourth round of the experiment Figure 8(a)shows the power spectrum plotted using spatial lter co-ecients of CCA fusion and Figure 8(b) shows the Welchpower spectrum plotted using spatial lter coecients ofmaximum contrast fusion where f represents the stimulusfrequency and the red circle indicates the amplitude at thestimulus frequency Figures 8(a) and 8(b) show that theamplitudes at the stimulation frequencies are prominente spatial lter coecients of CCA fusion and maximumcontrast fusion are eective for T-F image fusion us theT-F image fusion method proposed in this study focuses onthe fusion of wavelet low-frequency coecients of multiple

0

02

04

06

08

1

12

Acc

urac

y

Subject1 2 3 4 5 6 7 8 9 10

T-F image fusionCCA fusion

Minimum energy fusionMaximum contrast fusion

Figure 6 e accuracy of each subject under four fusion methods

0

02

04

06

08

1

12

Ave

rage

accu

racy

T-F image fusionCCA fusionMinimum energy fusionMaximum contrast fusion

Methods

Figure 7 e average accuracy of ten subjects under four fusionmethods

Computational Intelligence and Neuroscience 11

images (by assigning a coecient to each two-dimensionalsignal and obtaining the fused two-dimensional signal afterlinear summation) e premise of the above test results is

that we know the frequency of the signal to be fused If theonline test method is used (see details in Section 33) thesame online fusion results as CCA fusion and maximum

f = 7 Hz

times104

012

10 20 30 400

f = 72 Hz

times104

012

10 20 30 400

f = 74 Hz

times104

024

10 20 30 400

f = 76 Hz

times104

024

10 20 30 400

f = 78 Hz

times104

024

10 20 30 400

f = 8 Hz

times104

012

10 20 30 400

f = 82 Hz

times104

024

10 20 30 400

f = 84 Hz

times104

05

10

10 20 30 400

f = 86 Hz

times104

024

10 20 30 400

f = 88 Hz

times104

024

10 20 30 400

f = 9 Hz

times104

012

10 20 30 400

f = 92 Hz

times104

024

10 20 30 400

f = 94 Hz

times104

05

10

10 20 30 400

f = 96 Hz

times104

024

10 20 30 400

f = 98 Hz

times104

012

10 20 30 400

f = 10 Hz

times104

012

10 20 30 400

f = 102 Hz

times104

012

10 20 30 400

f = 104 Hz

times104

012

10 20 30 400

f = 106 Hz

times104

024

10 20 30 400

f = 108 Hz

times104

012

10 20 30 400Frequency (Hz)

Am

plitu

de V

2 (H

z)

(a)

f = 7 Hz

0

2

4

10 20 30 400

f = 8 Hz

0

2

4

10 20 30 400

f = 9 Hz

0

2

4

10 20 30 400

f = 10 Hz

0

2

4

10 20 30 400

f = 72 Hz0

2

4

10 20 30 400

f = 82 Hz0

5

10

10 20 30 400

f = 92 Hz0

2

4

10 20 30 400

f = 102 Hz0

2

4

10 20 30 400

f = 74 Hz

0

2

4

10 20 30 400

f = 84 Hz

0

10

20

10 20 30 400

f = 94 Hz

0

10

20

10 20 30 400

f = 104 Hz

0

2

4

10 20 30 400

f = 76 Hz0

5

10 20 30 400

f = 86 Hz0

5

10

10 20 30 400

f = 96 Hz0

5

10 20 30 400

f = 106 Hz0

5

10

10 20 30 400

f = 78 Hz0

5

10 20 30 400

f = 88 Hz0

5

10 20 30 400

f = 98 Hz0

2

4

10 20 30 400

f = 108 Hz0

2

4

10 20 30 400Frequency (Hz)

Am

plitu

de V

2 (H

z)

(b)

Figure 8 e Welch power spectrum plotted using the (a) CCA fusion spatial lter coecients and (b) maximum contrast fusion spatiallter coecients

12 Computational Intelligence and Neuroscience

contrast fusion were obtained T-F image fusion usedonly two electrode signals and the better online fusioneffect than that of the CCA fusion and the maximumcontrast fusion was obtained which shows the effec-tiveness of the proposed method Next we will explorethe fusion method of multiple T-F images (more than twoimages) that is to find suitable spatial filter coefficientsfor the wavelet low-frequency components of multipleT-F images

LLNF LLN1 lowastV(1 1) + LLN2 lowastV(2 1) + LLN3 lowastV(3 1)

+ LLN4 lowastV(4 1) + LLN5 lowastV(5 1)

+ LLN6 lowastV(6 1)

(19)

e parameters affect the fusion effect of the proposedmethod We listed the parameters that need to be selectedand the possible values of the parameters in Section 31 Inthis study the grid search method was used to traverse thecombination of all parameters and the best combination ofparameters was found In the experiment we found thatthe number of frequency bins and the Gaussian windowlength can be fixed to 54 and 57 We recommend that theresearchers determine the parameters according to theparameter selection principles and ranges given in Section31 In this study the SSMVEP active component wasenhanced by fusing multichannel signals into a single-channel signal and then the focused target frequencywas identified by performing spectrum analysis on thefused signals is is beneficial for SSMVEP studies basedon spectrum analysis For example Reference [21] pro-posed a frequency and phase mixed coding method in theSSVEP-based brain-computer interface (BCI) which in-creases the number of BCI coding targets by making onefrequency correspond to multiple different phases In thestudy the FFT analysis of the test signal is required to findthe possible focused target frequency and then calculate thephase value at that frequency e proposed method in thisstudy has a positive significance for accurately finding thefocused target frequency in the spectrum Moreover someEEG feature extraction algorithms for the BCI also requirespectrum analysis of the test signals [22 23] erefore themethod proposed in this study has a potential applicationvalue

5 Conclusion

To explore whether T-F domain analysis can achieve betterfusion effects than time-domain analysis this study pro-posed an SSMVEP enhancement method based on T-Fimage fusion e parameters of the T-F image fusionalgorithm were determined by the grid search method andthe electrode signals used for T-F image fusion were se-lected by bipolar fusion e analysis results show that thekey of the T-F image fusion algorithm is the fusion of thewavelet low-frequency components is study comparedthe enhancement effects of minimum energy fusionmaximum contrast fusion CCA fusion and T-F imagefusion on SSMVEP e experimental results show that the

online performance of the T-F image fusion method isbetter than that of the traditional spatial filtering methodswhich indicates that the proposed method is feasible to fuseSSMVEP in the T-F domain

Data Availability

e analytical data used to support the findings of this studyare included within the article and the raw data are availablefrom the corresponding author upon request

Ethical Approval

e subjects provided informed written consent in accor-dance with the protocol approved by the Institutional Re-view Board of Xirsquoan Jiaotong University

Conflicts of Interest

e authors declare that they have no conflicts of interest

Authorsrsquo Contributions

Wenqiang Yan worked on the algorithm and wrote thepaper Guanghua Xu involved in discussions and gavesuggestions throughout this project Longting Chen andXiaowei Zheng worked on the experimental design Allauthors read and approved the final manuscript

Acknowledgments

is research was supported by the National Natural ScienceFoundation of China (NSFC) (no 51775415) We want tothank the subjects for participating in these experiments

Supplementary Materials

MATLAB codes for T-F image fusion minimum energyfusion maximum contrast fusion and CCA fusion (Sup-plementary Materials)

References

[1] W Yan G Xu J Xie et al ldquoFour novel motion paradigmsbased on steady-state motion visual evoked potentialrdquo IEEETransactions on Biomedical Engineering vol 65 no 8pp 1696ndash1704 2018

[2] J R Wolpaw and D J Mcfarland ldquoMultichannel EEG-basedbrain-computer communicationrdquo Electroencephalographyand Clinical Neurophysiology vol 90 no 6 pp 444ndash4491994

[3] B O Peters G Pfurtscheller and H Flyvbjerg ldquoMiningmulti-channel EEG for its information content an ANN-based method for a brain-computer interfacerdquo Neural Net-works vol 11 no 7-8 pp 1429ndash1433 1998

[4] S Parini L Maggi A C Turconi and G Andreoni ldquoA robustand self-paced BCI system based on a four class SSVEPparadigm algorithms and protocols for a high-transfer-ratedirect brain communicationrdquo Computational Intelligence andNeuroscience vol 2009 pp 1ndash11 2009

[5] O Friman I Volosyak and A Graser ldquoMultiple channeldetection of steady-state visual evoked potentials for brain-

Computational Intelligence and Neuroscience 13

computer interfacesrdquo IEEE Transactions on Biomedical En-gineering vol 54 no 4 pp 742ndash750 2007

[6] Z Lin C Zhang WWu and X Gao ldquoFrequency recognitionbased on canonical correlation analysis for SSVEP-basedBCIsrdquo IEEE Transactions on Biomedical Engineering vol 53- no 12 pp 2610ndash2614 2006

[7] M Middendorf G Mcmillan G Calhoun and K S JonesldquoBrain-computer interfaces based on the steady-state visual-evoked responserdquo IEEE Transactions on Rehabilitation En-gineering vol 8 no 2 pp 211ndash214 2000

[8] G R Muller-Putz R Scherer C Brauneis andG Pfurtscheller ldquoSteady-state visual evoked potential(SSVEP)-based communication impact of harmonic fre-quency componentsrdquo Journal of Neural Engineering vol 2no 4 pp 123ndash130 2005

[9] D J Mcfarland L M Mccane S V David and J R WolpawldquoSpatial filter selection for EEG-based communicationrdquoElectroencephalography and Clinical Neurophysiologyvol 103 no 3 pp 386ndash394 1997

[10] H Li B S Manjunath and S K Mitra ldquoMultisensor imagefusion using the wavelet transformrdquo Graphical Models andImage Processing vol 57 no 3 pp 235ndash245 1995

[11] S Li and B Yang ldquoMultifocus image fusion by combiningcurvelet and wavelet transformrdquo Pattern Recognition Lettersvol 29 no 9 pp 1295ndash1301 2008

[12] B Zhang ldquoStudy on image fusion based on different fusionrules of wavelet transformrdquo Acta Photonica Sinica vol 33pp 173ndash176 2004

[13] C DMeyerMatrix Analysis and Applied Linear Algebra Bookand Solutions Manual Society for Industrial and AppliedMathematics Philadelphia PA USA 2000

[14] K J A Sundar M Jahnavi and K Lakshmisaritha ldquoMulti-sensor image fusion based on empirical wavelet transformrdquo inProceedings of the International Conference on ElectricalElectronics Communication Computer and OptimizationTechniques pp 93ndash97 Mysuru India December 2018

[15] L Tian J Ahmed Q Du B Shankar and S Adnan ldquoMultifocus image fusion using combined median and average filterbased hybrid stationary wavelet transform and principalcomponent analysisrdquo International Journal of AdvancedComputer Science and Applications vol 9 no 6 2018

[16] G He S Xing D Dong and X Zhao ldquoPanchromatic andmulti-spectral image fusion method based on two-step sparserepresentation and wavelet transformrdquo in Asia-Pacific Signaland Information Processing Association Summit and Confer-ence pp 259ndash262 Honolulu HI USA January 2018

[17] Y Shin S Lee M Ahn H Cho S C Jun and H-N LeeldquoNoise robustness analysis of sparse representation basedclassification method for non-stationary EEG signal classifi-cationrdquo Biomedical Signal Processing and Control vol 21pp 8ndash18 2015

[18] M Xu L Chen L Zhang et al ldquoA visual parallel-BCI spellerbased on the time-frequency coding strategyrdquo Journal ofNeural Engineering vol 11 no 2 article 026014 2014

[19] J Xie G Xu A Luo et al ldquoe role of visual noise ininfluencing mental load and fatigue in a steady-state motionvisual evoked potential-based brain-computer interfacerdquoSensors vol 17 no 8 p 1873 2017

[20] K Barbe R Pintelon and J Schoukens ldquoWelch methodrevisited nonparametric power spectrum estimation viacircular overlaprdquo IEEE Transactions on Signal Processingvol 58 no 2 pp 553ndash565 2010

[21] C Jia X Gao B Hong and S Gao ldquoFrequency and phasemixed coding in SSVEP-based brain-computer interfacerdquo

IEEE transactions on bio-medical engineering vol 58 no 1pp 200ndash206 2011

[22] C Yang H Wu Z Li W He N Wang and C-Y Su ldquoMindcontrol of a robotic arm with visual fusion technologyrdquo IEEETransactions on Industrial Informatics vol 14 no 9pp 3822ndash3830 2017

[23] J Castillo S Muller E Caicedo and T Bastos ldquoFeatureextraction techniques based on power spectrum for a SSVEP-BCIrdquo in Proceedings of the IEEE 23rd International Sympo-sium on Industrial Electronics (ISIE) Istanbul Turkey June2014

14 Computational Intelligence and Neuroscience

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 9: Steady-StateMotionVisualEvokedPotential(SSMVEP) … · 2019. 2. 14. · wasconductedbyMATLAB’stfrstftfunction.Afterfusing two T-F images, the T-F signal was inversely transformed

times10ndash13

0

05

1f = 7 Hz

times10ndash13

0

1

2f = 74 Hz

times10ndash13

0

05

1f = 72 Hz

times10ndash13

0

1

2f = 76 Hz

times10ndash14

0

2

4f = 78 Hz

times10ndash13

0

05

1f = 9 Hz

times10ndash13

0

1

2f = 94 Hz

times10ndash13

0

1

2f = 92 Hz

times10ndash13

0

05

1f = 96 Hz

times10ndash13

0

05

1f = 98 Hz

times10ndash13

0

05

1f = 10 Hz

times10ndash13

0

05

1f = 104 Hz

times10ndash13

0

05

1f = 102 Hz

times10ndash13

0

05

1f = 106 Hz

times10ndash13

0

1

2f = 108 Hz

times10ndash13

0

05

1f = 8 Hz

times10ndash13

0

1

2f = 84 Hz

times10ndash13

0

1

2f = 82 Hz

times10ndash13

0

05

1f = 86 Hz

times10ndash13

0

05

1f = 88 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(c)

times10ndash4

0

05

1f = 72 Hz

times10ndash4

0

1

2f = 74 Hz

times10ndash4

0

1

2f = 76 Hz

times10ndash4

0

05

1f = 78 Hz

times10ndash4

0

05

1f = 7 Hz

times10ndash4

0

05

1f = 92 Hz

times10ndash4

0

2

4f = 94 Hz

times10ndash4

0

1

2f = 96 Hz

times10ndash5

0

5f = 98 Hz

times10ndash5

0

2

4f = 9 Hz

times10ndash4

0

1

2f = 102 Hz

times10ndash4

0

05

1f = 104 Hz

times10ndash4

0

1

2f = 106 Hz

times10ndash4

0

05

1f = 108 Hz

times10ndash4

0

05

1f = 10 Hz

times10ndash4

0

1

2f = 82 Hz

times10ndash4

0

2

4f = 84 Hz

times10ndash4

0

1

2f = 86 Hz

times10ndash4

0

05

1f = 88 Hz

times10ndash4

0

05

1f = 8 Hz

Am

plitu

de V

2 (H

z)

Discrete sequence frequency value (Hz)

(d)

Figure 4 e power spectrum results of 20 targets under (a) T-F image fusion (b) CCA fusion (c) minimum energy fusion and (d)maximum contrast fusion (the discrete sequence frequency values from left to right are 7 8 910 72 82 92 102 74 84 94 104 76 86 96 10678 88 98 108 Hz respectively)

Computational Intelligence and Neuroscience 9

fusion and Figure 4(d) shows the power spectrum results of20 targets under maximum contrast fusion In the stalk plotthe discrete sequence frequency values from left to right are7 8 9 10 72 82 92 102 74 84 94 104 76 86 96 106 78 8898 108 respectively where f represents the stimulationfrequency of the corresponding target and the green dotmarks the amplitude of the corresponding target stimulusfrequency e red flag f represents the target was mis-identified and the blue flag f represents the target wascorrectly identified Figures 4(a)ndash4(d) show that the T-Fimage fusion method has the best online performance andthe minimum energy fusion method has the worst onlineperformance Maximum contrast fusion and CCA fusionhave similar fusion effects

In online analysis the power spectrum amplitudes of thetest signal at all stimulus frequencies were calculated andthen the frequency corresponding to the maximum am-plitude was identified as the focused target frequency Errorrecognition occurs when the amplitude at the focused targetfrequency is lower than the amplitudes at the nonfocusedtarget frequencieserefore the power spectrum amplitudeat the focused target frequency can be regarded as the activecomponent of the signal and the power spectrum ampli-tudes at the remaining nonfocused target frequencies can beregarded as noise e SNR in this study was defined as theratio of the power spectrum amplitude at the focused targetfrequency to the mean of the power spectrum amplitudes atthe remaining nonfocused target frequencies e SNRs often subjects at all stimulus frequencies were superimposedand averaged and the experimental results are shown inFigure 5 It can be seen from Figure 5 that the T-F imagefusion method obtained the highest SNR which indicatesthat the T-F image fusion method proposed in this study caneffectively enhance the SSMVEP SNR

e online accuracies of each subject in the three roundsof experiments were superimposed and averaged e ex-perimental results are shown in Figure 6 It can be seen thatmost subjects obtained highest online accuracy under theT-F image fusion and some subjects obtained highest onlineaccuracy under CCA fusion and maximum contrast fusionOnly subject 9 obtained the highest online accuracy underthe minimum energy fusion e online accuracies of all thesubjects were superimposed and averaged e experimentalresults are shown in Figure 7 e experimental results showthat the online performance of the T-F image fusion methodis better e online accuracy of the T-F image fusionmethod is 617 606 and 3050 higher than that of theCCA fusion maximum contrast fusion and the minimumenergy fusion respectively e paired t-test shows signifi-cant differences between the accuracies of T-F image fusionand minimum energy fusion (p 00174) Online accuracyanalysis results show that the proposedmethod is better thanthe traditional time-domain fusion method

4 Discussion

e SSMVEP collected from the scalp contains a lot of noiseand requires an effective signal processing method to en-hance the active components of SSMVEP Spatial filtering

methods utilize EEG information of multiple channels andexert positive significance to enhance the active componentsof the SSMVEP At present the commonly used spatialfiltering methods include average fusion native fusion bi-polar fusion Laplacian fusion CAR fusion CCA fusionminimum energy fusion and maximum contrast fusionefirst five spatial filtering methods can only process thepreselected electrode signals Minimum energy fusionmaximum contrast fusion and CCA fusion improve theabove problem and they can be used to fuse any number ofelectrode signals with better fusion effects Minimum energyfusion maximum contrast fusion and CCA fusion fuseSSMVEP in the time domainis study first put forward theidea of fusing EEG in the T-F domain and proposed a T-Fimage fusion method We compared the enhancement ef-fects of minimum energy fusion maximum contrast fusionCCA fusion and T-F image fusion on SSMVEP It is verifiedby the test data that the proposed method is better than thetraditional time-domain fusion method

T-F analysis is a good choice for the transformation ofthe EEG data from one dimension to two dimension EEGtransformed into the T-F domain can be used as an image foranalysis e STFT has an inverse transform which caninverse transform the fused T-F image into a time-domainsignal erefore the STFT was used to transform the EEGfrom the time domain to the T-F domain e two-dimensional wavelet transform can fuse two images intoone to achieve the purpose of multichannel EEG fusionAfter two-dimensional wavelet decomposition the low-frequency components of the two subimages were sub-tracted to obtain the low-frequency components of the fusedimage and the maximum value of the high-frequencycomponents of the two subimages was taken as the high-frequency component of the fused imagee low-frequencycomponents after wavelet decomposition represent the ac-tive components of the SSMVEPe subtraction of the low-

0Methods

5

10

15

Ave

rage

SN

R

T-F image fusionCCA fusionMinimum energy fusionMaximum contrast fusion

Figure 5 e average SNR of ten subjects at all stimulus fre-quencies under four methods

10 Computational Intelligence and Neuroscience

frequency components can remove the common noise in theelectrode signals is is similar to bipolar fusion whichsubtracts two electrode signals in the time domain We alsocompared the enhancement eects of bipolar fusion and theproposed method on SSMVEP active components eelectrodes used for bipolar fusion are the same as those usedin T-F image fusion e results show that the proposedmethod is better than the bipolar fusion (the online accuracyof bipolar fusion is 7152) In this study the fused imagewas ltered by mean lter to further achieve low-pass l-tering We also tried to remove the mean ltering step aftercompleting the T-F image fusion e results show thatremoving themean ltering step reduced the fusion eect Inthis study the mean ltering step was performed after T-Fimage fusion We tested the fusion eect when the meanltering step was performed before T-F image fusion(performing mean ltering step on the subimage afterSTFT) For some subjects it was better to perform the meanltering step after T-F image fusion erefore we recom-mend that researchers follow the analysis steps in this study

e wavelet low-frequency coecients (see details inequation (17)) of the fused image have an important in-uence on the fusion eect If we perform T-F image fusionon six electrode signals at the same time we will get six setsof wavelet low-frequency coecients Here each set of low-frequency coecient is a two-dimensional signal Referringto equation (1) we can assign a coecient to each two-dimensional signal and obtain the fused two-dimensionalsignal (ie the wavelet low-frequency coecients of thefused image) after linear summationis study explored thefeasibility of applying the spatial lter coecients of CCAfusion and maximum contrast fusion to T-F image fusionwhen fusing six electrode signals by using the T-F imagefusion method at the same time Since the fusion eect ofminimum energy fusion was poor the spatial lter co-ecients of the minimum energy fusion were not used heree spatial lter coecients of the maximum contrast fusionand CCA fusion at the frequency f were obtained byequations (9) and (10) and are set to Vmax and Vcca re-spectively Vmax and Vcca are two vectors with dimensions of6times1 where Vmax(1 1) and Vcca(1 1) represent the rstelements of the vectors Vmax and Vcca Since the T-F imagefusion method was performed on the six electrode signalsequation (17) is transformed into equation (19) where Vcorresponds to the spatial lter coecient Vmax or Vcca erest of the analysis process is the same as that in Section 210e same analysis steps were performed for all 20 targets(7ndash108Hz) e Welch spectrum analysis (the parametersare the same as in Section 32) was performed on the ob-tained signal after T-F image fusion Figures 8(a) and 8(b)show the Welch power spectra plotted using the data ofsubject 4 in the fourth round of the experiment Figure 8(a)shows the power spectrum plotted using spatial lter co-ecients of CCA fusion and Figure 8(b) shows the Welchpower spectrum plotted using spatial lter coecients ofmaximum contrast fusion where f represents the stimulusfrequency and the red circle indicates the amplitude at thestimulus frequency Figures 8(a) and 8(b) show that theamplitudes at the stimulation frequencies are prominente spatial lter coecients of CCA fusion and maximumcontrast fusion are eective for T-F image fusion us theT-F image fusion method proposed in this study focuses onthe fusion of wavelet low-frequency coecients of multiple

0

02

04

06

08

1

12

Acc

urac

y

Subject1 2 3 4 5 6 7 8 9 10

T-F image fusionCCA fusion

Minimum energy fusionMaximum contrast fusion

Figure 6 e accuracy of each subject under four fusion methods

0

02

04

06

08

1

12

Ave

rage

accu

racy

T-F image fusionCCA fusionMinimum energy fusionMaximum contrast fusion

Methods

Figure 7 e average accuracy of ten subjects under four fusionmethods

Computational Intelligence and Neuroscience 11

images (by assigning a coecient to each two-dimensionalsignal and obtaining the fused two-dimensional signal afterlinear summation) e premise of the above test results is

that we know the frequency of the signal to be fused If theonline test method is used (see details in Section 33) thesame online fusion results as CCA fusion and maximum

f = 7 Hz

times104

012

10 20 30 400

f = 72 Hz

times104

012

10 20 30 400

f = 74 Hz

times104

024

10 20 30 400

f = 76 Hz

times104

024

10 20 30 400

f = 78 Hz

times104

024

10 20 30 400

f = 8 Hz

times104

012

10 20 30 400

f = 82 Hz

times104

024

10 20 30 400

f = 84 Hz

times104

05

10

10 20 30 400

f = 86 Hz

times104

024

10 20 30 400

f = 88 Hz

times104

024

10 20 30 400

f = 9 Hz

times104

012

10 20 30 400

f = 92 Hz

times104

024

10 20 30 400

f = 94 Hz

times104

05

10

10 20 30 400

f = 96 Hz

times104

024

10 20 30 400

f = 98 Hz

times104

012

10 20 30 400

f = 10 Hz

times104

012

10 20 30 400

f = 102 Hz

times104

012

10 20 30 400

f = 104 Hz

times104

012

10 20 30 400

f = 106 Hz

times104

024

10 20 30 400

f = 108 Hz

times104

012

10 20 30 400Frequency (Hz)

Am

plitu

de V

2 (H

z)

(a)

f = 7 Hz

0

2

4

10 20 30 400

f = 8 Hz

0

2

4

10 20 30 400

f = 9 Hz

0

2

4

10 20 30 400

f = 10 Hz

0

2

4

10 20 30 400

f = 72 Hz0

2

4

10 20 30 400

f = 82 Hz0

5

10

10 20 30 400

f = 92 Hz0

2

4

10 20 30 400

f = 102 Hz0

2

4

10 20 30 400

f = 74 Hz

0

2

4

10 20 30 400

f = 84 Hz

0

10

20

10 20 30 400

f = 94 Hz

0

10

20

10 20 30 400

f = 104 Hz

0

2

4

10 20 30 400

f = 76 Hz0

5

10 20 30 400

f = 86 Hz0

5

10

10 20 30 400

f = 96 Hz0

5

10 20 30 400

f = 106 Hz0

5

10

10 20 30 400

f = 78 Hz0

5

10 20 30 400

f = 88 Hz0

5

10 20 30 400

f = 98 Hz0

2

4

10 20 30 400

f = 108 Hz0

2

4

10 20 30 400Frequency (Hz)

Am

plitu

de V

2 (H

z)

(b)

Figure 8 e Welch power spectrum plotted using the (a) CCA fusion spatial lter coecients and (b) maximum contrast fusion spatiallter coecients

12 Computational Intelligence and Neuroscience

contrast fusion were obtained T-F image fusion usedonly two electrode signals and the better online fusioneffect than that of the CCA fusion and the maximumcontrast fusion was obtained which shows the effec-tiveness of the proposed method Next we will explorethe fusion method of multiple T-F images (more than twoimages) that is to find suitable spatial filter coefficientsfor the wavelet low-frequency components of multipleT-F images

LLNF LLN1 lowastV(1 1) + LLN2 lowastV(2 1) + LLN3 lowastV(3 1)

+ LLN4 lowastV(4 1) + LLN5 lowastV(5 1)

+ LLN6 lowastV(6 1)

(19)

e parameters affect the fusion effect of the proposedmethod We listed the parameters that need to be selectedand the possible values of the parameters in Section 31 Inthis study the grid search method was used to traverse thecombination of all parameters and the best combination ofparameters was found In the experiment we found thatthe number of frequency bins and the Gaussian windowlength can be fixed to 54 and 57 We recommend that theresearchers determine the parameters according to theparameter selection principles and ranges given in Section31 In this study the SSMVEP active component wasenhanced by fusing multichannel signals into a single-channel signal and then the focused target frequencywas identified by performing spectrum analysis on thefused signals is is beneficial for SSMVEP studies basedon spectrum analysis For example Reference [21] pro-posed a frequency and phase mixed coding method in theSSVEP-based brain-computer interface (BCI) which in-creases the number of BCI coding targets by making onefrequency correspond to multiple different phases In thestudy the FFT analysis of the test signal is required to findthe possible focused target frequency and then calculate thephase value at that frequency e proposed method in thisstudy has a positive significance for accurately finding thefocused target frequency in the spectrum Moreover someEEG feature extraction algorithms for the BCI also requirespectrum analysis of the test signals [22 23] erefore themethod proposed in this study has a potential applicationvalue

5 Conclusion

To explore whether T-F domain analysis can achieve betterfusion effects than time-domain analysis this study pro-posed an SSMVEP enhancement method based on T-Fimage fusion e parameters of the T-F image fusionalgorithm were determined by the grid search method andthe electrode signals used for T-F image fusion were se-lected by bipolar fusion e analysis results show that thekey of the T-F image fusion algorithm is the fusion of thewavelet low-frequency components is study comparedthe enhancement effects of minimum energy fusionmaximum contrast fusion CCA fusion and T-F imagefusion on SSMVEP e experimental results show that the

online performance of the T-F image fusion method isbetter than that of the traditional spatial filtering methodswhich indicates that the proposed method is feasible to fuseSSMVEP in the T-F domain

Data Availability

e analytical data used to support the findings of this studyare included within the article and the raw data are availablefrom the corresponding author upon request

Ethical Approval

e subjects provided informed written consent in accor-dance with the protocol approved by the Institutional Re-view Board of Xirsquoan Jiaotong University

Conflicts of Interest

e authors declare that they have no conflicts of interest

Authorsrsquo Contributions

Wenqiang Yan worked on the algorithm and wrote thepaper Guanghua Xu involved in discussions and gavesuggestions throughout this project Longting Chen andXiaowei Zheng worked on the experimental design Allauthors read and approved the final manuscript

Acknowledgments

is research was supported by the National Natural ScienceFoundation of China (NSFC) (no 51775415) We want tothank the subjects for participating in these experiments

Supplementary Materials

MATLAB codes for T-F image fusion minimum energyfusion maximum contrast fusion and CCA fusion (Sup-plementary Materials)

References

[1] W Yan G Xu J Xie et al ldquoFour novel motion paradigmsbased on steady-state motion visual evoked potentialrdquo IEEETransactions on Biomedical Engineering vol 65 no 8pp 1696ndash1704 2018

[2] J R Wolpaw and D J Mcfarland ldquoMultichannel EEG-basedbrain-computer communicationrdquo Electroencephalographyand Clinical Neurophysiology vol 90 no 6 pp 444ndash4491994

[3] B O Peters G Pfurtscheller and H Flyvbjerg ldquoMiningmulti-channel EEG for its information content an ANN-based method for a brain-computer interfacerdquo Neural Net-works vol 11 no 7-8 pp 1429ndash1433 1998

[4] S Parini L Maggi A C Turconi and G Andreoni ldquoA robustand self-paced BCI system based on a four class SSVEPparadigm algorithms and protocols for a high-transfer-ratedirect brain communicationrdquo Computational Intelligence andNeuroscience vol 2009 pp 1ndash11 2009

[5] O Friman I Volosyak and A Graser ldquoMultiple channeldetection of steady-state visual evoked potentials for brain-

Computational Intelligence and Neuroscience 13

computer interfacesrdquo IEEE Transactions on Biomedical En-gineering vol 54 no 4 pp 742ndash750 2007

[6] Z Lin C Zhang WWu and X Gao ldquoFrequency recognitionbased on canonical correlation analysis for SSVEP-basedBCIsrdquo IEEE Transactions on Biomedical Engineering vol 53- no 12 pp 2610ndash2614 2006

[7] M Middendorf G Mcmillan G Calhoun and K S JonesldquoBrain-computer interfaces based on the steady-state visual-evoked responserdquo IEEE Transactions on Rehabilitation En-gineering vol 8 no 2 pp 211ndash214 2000

[8] G R Muller-Putz R Scherer C Brauneis andG Pfurtscheller ldquoSteady-state visual evoked potential(SSVEP)-based communication impact of harmonic fre-quency componentsrdquo Journal of Neural Engineering vol 2no 4 pp 123ndash130 2005

[9] D J Mcfarland L M Mccane S V David and J R WolpawldquoSpatial filter selection for EEG-based communicationrdquoElectroencephalography and Clinical Neurophysiologyvol 103 no 3 pp 386ndash394 1997

[10] H Li B S Manjunath and S K Mitra ldquoMultisensor imagefusion using the wavelet transformrdquo Graphical Models andImage Processing vol 57 no 3 pp 235ndash245 1995

[11] S Li and B Yang ldquoMultifocus image fusion by combiningcurvelet and wavelet transformrdquo Pattern Recognition Lettersvol 29 no 9 pp 1295ndash1301 2008

[12] B Zhang ldquoStudy on image fusion based on different fusionrules of wavelet transformrdquo Acta Photonica Sinica vol 33pp 173ndash176 2004

[13] C DMeyerMatrix Analysis and Applied Linear Algebra Bookand Solutions Manual Society for Industrial and AppliedMathematics Philadelphia PA USA 2000

[14] K J A Sundar M Jahnavi and K Lakshmisaritha ldquoMulti-sensor image fusion based on empirical wavelet transformrdquo inProceedings of the International Conference on ElectricalElectronics Communication Computer and OptimizationTechniques pp 93ndash97 Mysuru India December 2018

[15] L Tian J Ahmed Q Du B Shankar and S Adnan ldquoMultifocus image fusion using combined median and average filterbased hybrid stationary wavelet transform and principalcomponent analysisrdquo International Journal of AdvancedComputer Science and Applications vol 9 no 6 2018

[16] G He S Xing D Dong and X Zhao ldquoPanchromatic andmulti-spectral image fusion method based on two-step sparserepresentation and wavelet transformrdquo in Asia-Pacific Signaland Information Processing Association Summit and Confer-ence pp 259ndash262 Honolulu HI USA January 2018

[17] Y Shin S Lee M Ahn H Cho S C Jun and H-N LeeldquoNoise robustness analysis of sparse representation basedclassification method for non-stationary EEG signal classifi-cationrdquo Biomedical Signal Processing and Control vol 21pp 8ndash18 2015

[18] M Xu L Chen L Zhang et al ldquoA visual parallel-BCI spellerbased on the time-frequency coding strategyrdquo Journal ofNeural Engineering vol 11 no 2 article 026014 2014

[19] J Xie G Xu A Luo et al ldquoe role of visual noise ininfluencing mental load and fatigue in a steady-state motionvisual evoked potential-based brain-computer interfacerdquoSensors vol 17 no 8 p 1873 2017

[20] K Barbe R Pintelon and J Schoukens ldquoWelch methodrevisited nonparametric power spectrum estimation viacircular overlaprdquo IEEE Transactions on Signal Processingvol 58 no 2 pp 553ndash565 2010

[21] C Jia X Gao B Hong and S Gao ldquoFrequency and phasemixed coding in SSVEP-based brain-computer interfacerdquo

IEEE transactions on bio-medical engineering vol 58 no 1pp 200ndash206 2011

[22] C Yang H Wu Z Li W He N Wang and C-Y Su ldquoMindcontrol of a robotic arm with visual fusion technologyrdquo IEEETransactions on Industrial Informatics vol 14 no 9pp 3822ndash3830 2017

[23] J Castillo S Muller E Caicedo and T Bastos ldquoFeatureextraction techniques based on power spectrum for a SSVEP-BCIrdquo in Proceedings of the IEEE 23rd International Sympo-sium on Industrial Electronics (ISIE) Istanbul Turkey June2014

14 Computational Intelligence and Neuroscience

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 10: Steady-StateMotionVisualEvokedPotential(SSMVEP) … · 2019. 2. 14. · wasconductedbyMATLAB’stfrstftfunction.Afterfusing two T-F images, the T-F signal was inversely transformed

fusion and Figure 4(d) shows the power spectrum results of20 targets under maximum contrast fusion In the stalk plotthe discrete sequence frequency values from left to right are7 8 9 10 72 82 92 102 74 84 94 104 76 86 96 106 78 8898 108 respectively where f represents the stimulationfrequency of the corresponding target and the green dotmarks the amplitude of the corresponding target stimulusfrequency e red flag f represents the target was mis-identified and the blue flag f represents the target wascorrectly identified Figures 4(a)ndash4(d) show that the T-Fimage fusion method has the best online performance andthe minimum energy fusion method has the worst onlineperformance Maximum contrast fusion and CCA fusionhave similar fusion effects

In online analysis the power spectrum amplitudes of thetest signal at all stimulus frequencies were calculated andthen the frequency corresponding to the maximum am-plitude was identified as the focused target frequency Errorrecognition occurs when the amplitude at the focused targetfrequency is lower than the amplitudes at the nonfocusedtarget frequencieserefore the power spectrum amplitudeat the focused target frequency can be regarded as the activecomponent of the signal and the power spectrum ampli-tudes at the remaining nonfocused target frequencies can beregarded as noise e SNR in this study was defined as theratio of the power spectrum amplitude at the focused targetfrequency to the mean of the power spectrum amplitudes atthe remaining nonfocused target frequencies e SNRs often subjects at all stimulus frequencies were superimposedand averaged and the experimental results are shown inFigure 5 It can be seen from Figure 5 that the T-F imagefusion method obtained the highest SNR which indicatesthat the T-F image fusion method proposed in this study caneffectively enhance the SSMVEP SNR

e online accuracies of each subject in the three roundsof experiments were superimposed and averaged e ex-perimental results are shown in Figure 6 It can be seen thatmost subjects obtained highest online accuracy under theT-F image fusion and some subjects obtained highest onlineaccuracy under CCA fusion and maximum contrast fusionOnly subject 9 obtained the highest online accuracy underthe minimum energy fusion e online accuracies of all thesubjects were superimposed and averaged e experimentalresults are shown in Figure 7 e experimental results showthat the online performance of the T-F image fusion methodis better e online accuracy of the T-F image fusionmethod is 617 606 and 3050 higher than that of theCCA fusion maximum contrast fusion and the minimumenergy fusion respectively e paired t-test shows signifi-cant differences between the accuracies of T-F image fusionand minimum energy fusion (p 00174) Online accuracyanalysis results show that the proposedmethod is better thanthe traditional time-domain fusion method

4 Discussion

e SSMVEP collected from the scalp contains a lot of noiseand requires an effective signal processing method to en-hance the active components of SSMVEP Spatial filtering

methods utilize EEG information of multiple channels andexert positive significance to enhance the active componentsof the SSMVEP At present the commonly used spatialfiltering methods include average fusion native fusion bi-polar fusion Laplacian fusion CAR fusion CCA fusionminimum energy fusion and maximum contrast fusionefirst five spatial filtering methods can only process thepreselected electrode signals Minimum energy fusionmaximum contrast fusion and CCA fusion improve theabove problem and they can be used to fuse any number ofelectrode signals with better fusion effects Minimum energyfusion maximum contrast fusion and CCA fusion fuseSSMVEP in the time domainis study first put forward theidea of fusing EEG in the T-F domain and proposed a T-Fimage fusion method We compared the enhancement ef-fects of minimum energy fusion maximum contrast fusionCCA fusion and T-F image fusion on SSMVEP It is verifiedby the test data that the proposed method is better than thetraditional time-domain fusion method

T-F analysis is a good choice for the transformation ofthe EEG data from one dimension to two dimension EEGtransformed into the T-F domain can be used as an image foranalysis e STFT has an inverse transform which caninverse transform the fused T-F image into a time-domainsignal erefore the STFT was used to transform the EEGfrom the time domain to the T-F domain e two-dimensional wavelet transform can fuse two images intoone to achieve the purpose of multichannel EEG fusionAfter two-dimensional wavelet decomposition the low-frequency components of the two subimages were sub-tracted to obtain the low-frequency components of the fusedimage and the maximum value of the high-frequencycomponents of the two subimages was taken as the high-frequency component of the fused imagee low-frequencycomponents after wavelet decomposition represent the ac-tive components of the SSMVEPe subtraction of the low-

0Methods

5

10

15

Ave

rage

SN

R

T-F image fusionCCA fusionMinimum energy fusionMaximum contrast fusion

Figure 5 e average SNR of ten subjects at all stimulus fre-quencies under four methods

10 Computational Intelligence and Neuroscience

frequency components can remove the common noise in theelectrode signals is is similar to bipolar fusion whichsubtracts two electrode signals in the time domain We alsocompared the enhancement eects of bipolar fusion and theproposed method on SSMVEP active components eelectrodes used for bipolar fusion are the same as those usedin T-F image fusion e results show that the proposedmethod is better than the bipolar fusion (the online accuracyof bipolar fusion is 7152) In this study the fused imagewas ltered by mean lter to further achieve low-pass l-tering We also tried to remove the mean ltering step aftercompleting the T-F image fusion e results show thatremoving themean ltering step reduced the fusion eect Inthis study the mean ltering step was performed after T-Fimage fusion We tested the fusion eect when the meanltering step was performed before T-F image fusion(performing mean ltering step on the subimage afterSTFT) For some subjects it was better to perform the meanltering step after T-F image fusion erefore we recom-mend that researchers follow the analysis steps in this study

e wavelet low-frequency coecients (see details inequation (17)) of the fused image have an important in-uence on the fusion eect If we perform T-F image fusionon six electrode signals at the same time we will get six setsof wavelet low-frequency coecients Here each set of low-frequency coecient is a two-dimensional signal Referringto equation (1) we can assign a coecient to each two-dimensional signal and obtain the fused two-dimensionalsignal (ie the wavelet low-frequency coecients of thefused image) after linear summationis study explored thefeasibility of applying the spatial lter coecients of CCAfusion and maximum contrast fusion to T-F image fusionwhen fusing six electrode signals by using the T-F imagefusion method at the same time Since the fusion eect ofminimum energy fusion was poor the spatial lter co-ecients of the minimum energy fusion were not used heree spatial lter coecients of the maximum contrast fusionand CCA fusion at the frequency f were obtained byequations (9) and (10) and are set to Vmax and Vcca re-spectively Vmax and Vcca are two vectors with dimensions of6times1 where Vmax(1 1) and Vcca(1 1) represent the rstelements of the vectors Vmax and Vcca Since the T-F imagefusion method was performed on the six electrode signalsequation (17) is transformed into equation (19) where Vcorresponds to the spatial lter coecient Vmax or Vcca erest of the analysis process is the same as that in Section 210e same analysis steps were performed for all 20 targets(7ndash108Hz) e Welch spectrum analysis (the parametersare the same as in Section 32) was performed on the ob-tained signal after T-F image fusion Figures 8(a) and 8(b)show the Welch power spectra plotted using the data ofsubject 4 in the fourth round of the experiment Figure 8(a)shows the power spectrum plotted using spatial lter co-ecients of CCA fusion and Figure 8(b) shows the Welchpower spectrum plotted using spatial lter coecients ofmaximum contrast fusion where f represents the stimulusfrequency and the red circle indicates the amplitude at thestimulus frequency Figures 8(a) and 8(b) show that theamplitudes at the stimulation frequencies are prominente spatial lter coecients of CCA fusion and maximumcontrast fusion are eective for T-F image fusion us theT-F image fusion method proposed in this study focuses onthe fusion of wavelet low-frequency coecients of multiple

0

02

04

06

08

1

12

Acc

urac

y

Subject1 2 3 4 5 6 7 8 9 10

T-F image fusionCCA fusion

Minimum energy fusionMaximum contrast fusion

Figure 6 e accuracy of each subject under four fusion methods

0

02

04

06

08

1

12

Ave

rage

accu

racy

T-F image fusionCCA fusionMinimum energy fusionMaximum contrast fusion

Methods

Figure 7 e average accuracy of ten subjects under four fusionmethods

Computational Intelligence and Neuroscience 11

images (by assigning a coecient to each two-dimensionalsignal and obtaining the fused two-dimensional signal afterlinear summation) e premise of the above test results is

that we know the frequency of the signal to be fused If theonline test method is used (see details in Section 33) thesame online fusion results as CCA fusion and maximum

f = 7 Hz

times104

012

10 20 30 400

f = 72 Hz

times104

012

10 20 30 400

f = 74 Hz

times104

024

10 20 30 400

f = 76 Hz

times104

024

10 20 30 400

f = 78 Hz

times104

024

10 20 30 400

f = 8 Hz

times104

012

10 20 30 400

f = 82 Hz

times104

024

10 20 30 400

f = 84 Hz

times104

05

10

10 20 30 400

f = 86 Hz

times104

024

10 20 30 400

f = 88 Hz

times104

024

10 20 30 400

f = 9 Hz

times104

012

10 20 30 400

f = 92 Hz

times104

024

10 20 30 400

f = 94 Hz

times104

05

10

10 20 30 400

f = 96 Hz

times104

024

10 20 30 400

f = 98 Hz

times104

012

10 20 30 400

f = 10 Hz

times104

012

10 20 30 400

f = 102 Hz

times104

012

10 20 30 400

f = 104 Hz

times104

012

10 20 30 400

f = 106 Hz

times104

024

10 20 30 400

f = 108 Hz

times104

012

10 20 30 400Frequency (Hz)

Am

plitu

de V

2 (H

z)

(a)

f = 7 Hz

0

2

4

10 20 30 400

f = 8 Hz

0

2

4

10 20 30 400

f = 9 Hz

0

2

4

10 20 30 400

f = 10 Hz

0

2

4

10 20 30 400

f = 72 Hz0

2

4

10 20 30 400

f = 82 Hz0

5

10

10 20 30 400

f = 92 Hz0

2

4

10 20 30 400

f = 102 Hz0

2

4

10 20 30 400

f = 74 Hz

0

2

4

10 20 30 400

f = 84 Hz

0

10

20

10 20 30 400

f = 94 Hz

0

10

20

10 20 30 400

f = 104 Hz

0

2

4

10 20 30 400

f = 76 Hz0

5

10 20 30 400

f = 86 Hz0

5

10

10 20 30 400

f = 96 Hz0

5

10 20 30 400

f = 106 Hz0

5

10

10 20 30 400

f = 78 Hz0

5

10 20 30 400

f = 88 Hz0

5

10 20 30 400

f = 98 Hz0

2

4

10 20 30 400

f = 108 Hz0

2

4

10 20 30 400Frequency (Hz)

Am

plitu

de V

2 (H

z)

(b)

Figure 8 e Welch power spectrum plotted using the (a) CCA fusion spatial lter coecients and (b) maximum contrast fusion spatiallter coecients

12 Computational Intelligence and Neuroscience

contrast fusion were obtained T-F image fusion usedonly two electrode signals and the better online fusioneffect than that of the CCA fusion and the maximumcontrast fusion was obtained which shows the effec-tiveness of the proposed method Next we will explorethe fusion method of multiple T-F images (more than twoimages) that is to find suitable spatial filter coefficientsfor the wavelet low-frequency components of multipleT-F images

LLNF LLN1 lowastV(1 1) + LLN2 lowastV(2 1) + LLN3 lowastV(3 1)

+ LLN4 lowastV(4 1) + LLN5 lowastV(5 1)

+ LLN6 lowastV(6 1)

(19)

e parameters affect the fusion effect of the proposedmethod We listed the parameters that need to be selectedand the possible values of the parameters in Section 31 Inthis study the grid search method was used to traverse thecombination of all parameters and the best combination ofparameters was found In the experiment we found thatthe number of frequency bins and the Gaussian windowlength can be fixed to 54 and 57 We recommend that theresearchers determine the parameters according to theparameter selection principles and ranges given in Section31 In this study the SSMVEP active component wasenhanced by fusing multichannel signals into a single-channel signal and then the focused target frequencywas identified by performing spectrum analysis on thefused signals is is beneficial for SSMVEP studies basedon spectrum analysis For example Reference [21] pro-posed a frequency and phase mixed coding method in theSSVEP-based brain-computer interface (BCI) which in-creases the number of BCI coding targets by making onefrequency correspond to multiple different phases In thestudy the FFT analysis of the test signal is required to findthe possible focused target frequency and then calculate thephase value at that frequency e proposed method in thisstudy has a positive significance for accurately finding thefocused target frequency in the spectrum Moreover someEEG feature extraction algorithms for the BCI also requirespectrum analysis of the test signals [22 23] erefore themethod proposed in this study has a potential applicationvalue

5 Conclusion

To explore whether T-F domain analysis can achieve betterfusion effects than time-domain analysis this study pro-posed an SSMVEP enhancement method based on T-Fimage fusion e parameters of the T-F image fusionalgorithm were determined by the grid search method andthe electrode signals used for T-F image fusion were se-lected by bipolar fusion e analysis results show that thekey of the T-F image fusion algorithm is the fusion of thewavelet low-frequency components is study comparedthe enhancement effects of minimum energy fusionmaximum contrast fusion CCA fusion and T-F imagefusion on SSMVEP e experimental results show that the

online performance of the T-F image fusion method isbetter than that of the traditional spatial filtering methodswhich indicates that the proposed method is feasible to fuseSSMVEP in the T-F domain

Data Availability

e analytical data used to support the findings of this studyare included within the article and the raw data are availablefrom the corresponding author upon request

Ethical Approval

e subjects provided informed written consent in accor-dance with the protocol approved by the Institutional Re-view Board of Xirsquoan Jiaotong University

Conflicts of Interest

e authors declare that they have no conflicts of interest

Authorsrsquo Contributions

Wenqiang Yan worked on the algorithm and wrote thepaper Guanghua Xu involved in discussions and gavesuggestions throughout this project Longting Chen andXiaowei Zheng worked on the experimental design Allauthors read and approved the final manuscript

Acknowledgments

is research was supported by the National Natural ScienceFoundation of China (NSFC) (no 51775415) We want tothank the subjects for participating in these experiments

Supplementary Materials

MATLAB codes for T-F image fusion minimum energyfusion maximum contrast fusion and CCA fusion (Sup-plementary Materials)

References

[1] W Yan G Xu J Xie et al ldquoFour novel motion paradigmsbased on steady-state motion visual evoked potentialrdquo IEEETransactions on Biomedical Engineering vol 65 no 8pp 1696ndash1704 2018

[2] J R Wolpaw and D J Mcfarland ldquoMultichannel EEG-basedbrain-computer communicationrdquo Electroencephalographyand Clinical Neurophysiology vol 90 no 6 pp 444ndash4491994

[3] B O Peters G Pfurtscheller and H Flyvbjerg ldquoMiningmulti-channel EEG for its information content an ANN-based method for a brain-computer interfacerdquo Neural Net-works vol 11 no 7-8 pp 1429ndash1433 1998

[4] S Parini L Maggi A C Turconi and G Andreoni ldquoA robustand self-paced BCI system based on a four class SSVEPparadigm algorithms and protocols for a high-transfer-ratedirect brain communicationrdquo Computational Intelligence andNeuroscience vol 2009 pp 1ndash11 2009

[5] O Friman I Volosyak and A Graser ldquoMultiple channeldetection of steady-state visual evoked potentials for brain-

Computational Intelligence and Neuroscience 13

computer interfacesrdquo IEEE Transactions on Biomedical En-gineering vol 54 no 4 pp 742ndash750 2007

[6] Z Lin C Zhang WWu and X Gao ldquoFrequency recognitionbased on canonical correlation analysis for SSVEP-basedBCIsrdquo IEEE Transactions on Biomedical Engineering vol 53- no 12 pp 2610ndash2614 2006

[7] M Middendorf G Mcmillan G Calhoun and K S JonesldquoBrain-computer interfaces based on the steady-state visual-evoked responserdquo IEEE Transactions on Rehabilitation En-gineering vol 8 no 2 pp 211ndash214 2000

[8] G R Muller-Putz R Scherer C Brauneis andG Pfurtscheller ldquoSteady-state visual evoked potential(SSVEP)-based communication impact of harmonic fre-quency componentsrdquo Journal of Neural Engineering vol 2no 4 pp 123ndash130 2005

[9] D J Mcfarland L M Mccane S V David and J R WolpawldquoSpatial filter selection for EEG-based communicationrdquoElectroencephalography and Clinical Neurophysiologyvol 103 no 3 pp 386ndash394 1997

[10] H Li B S Manjunath and S K Mitra ldquoMultisensor imagefusion using the wavelet transformrdquo Graphical Models andImage Processing vol 57 no 3 pp 235ndash245 1995

[11] S Li and B Yang ldquoMultifocus image fusion by combiningcurvelet and wavelet transformrdquo Pattern Recognition Lettersvol 29 no 9 pp 1295ndash1301 2008

[12] B Zhang ldquoStudy on image fusion based on different fusionrules of wavelet transformrdquo Acta Photonica Sinica vol 33pp 173ndash176 2004

[13] C DMeyerMatrix Analysis and Applied Linear Algebra Bookand Solutions Manual Society for Industrial and AppliedMathematics Philadelphia PA USA 2000

[14] K J A Sundar M Jahnavi and K Lakshmisaritha ldquoMulti-sensor image fusion based on empirical wavelet transformrdquo inProceedings of the International Conference on ElectricalElectronics Communication Computer and OptimizationTechniques pp 93ndash97 Mysuru India December 2018

[15] L Tian J Ahmed Q Du B Shankar and S Adnan ldquoMultifocus image fusion using combined median and average filterbased hybrid stationary wavelet transform and principalcomponent analysisrdquo International Journal of AdvancedComputer Science and Applications vol 9 no 6 2018

[16] G He S Xing D Dong and X Zhao ldquoPanchromatic andmulti-spectral image fusion method based on two-step sparserepresentation and wavelet transformrdquo in Asia-Pacific Signaland Information Processing Association Summit and Confer-ence pp 259ndash262 Honolulu HI USA January 2018

[17] Y Shin S Lee M Ahn H Cho S C Jun and H-N LeeldquoNoise robustness analysis of sparse representation basedclassification method for non-stationary EEG signal classifi-cationrdquo Biomedical Signal Processing and Control vol 21pp 8ndash18 2015

[18] M Xu L Chen L Zhang et al ldquoA visual parallel-BCI spellerbased on the time-frequency coding strategyrdquo Journal ofNeural Engineering vol 11 no 2 article 026014 2014

[19] J Xie G Xu A Luo et al ldquoe role of visual noise ininfluencing mental load and fatigue in a steady-state motionvisual evoked potential-based brain-computer interfacerdquoSensors vol 17 no 8 p 1873 2017

[20] K Barbe R Pintelon and J Schoukens ldquoWelch methodrevisited nonparametric power spectrum estimation viacircular overlaprdquo IEEE Transactions on Signal Processingvol 58 no 2 pp 553ndash565 2010

[21] C Jia X Gao B Hong and S Gao ldquoFrequency and phasemixed coding in SSVEP-based brain-computer interfacerdquo

IEEE transactions on bio-medical engineering vol 58 no 1pp 200ndash206 2011

[22] C Yang H Wu Z Li W He N Wang and C-Y Su ldquoMindcontrol of a robotic arm with visual fusion technologyrdquo IEEETransactions on Industrial Informatics vol 14 no 9pp 3822ndash3830 2017

[23] J Castillo S Muller E Caicedo and T Bastos ldquoFeatureextraction techniques based on power spectrum for a SSVEP-BCIrdquo in Proceedings of the IEEE 23rd International Sympo-sium on Industrial Electronics (ISIE) Istanbul Turkey June2014

14 Computational Intelligence and Neuroscience

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 11: Steady-StateMotionVisualEvokedPotential(SSMVEP) … · 2019. 2. 14. · wasconductedbyMATLAB’stfrstftfunction.Afterfusing two T-F images, the T-F signal was inversely transformed

frequency components can remove the common noise in theelectrode signals is is similar to bipolar fusion whichsubtracts two electrode signals in the time domain We alsocompared the enhancement eects of bipolar fusion and theproposed method on SSMVEP active components eelectrodes used for bipolar fusion are the same as those usedin T-F image fusion e results show that the proposedmethod is better than the bipolar fusion (the online accuracyof bipolar fusion is 7152) In this study the fused imagewas ltered by mean lter to further achieve low-pass l-tering We also tried to remove the mean ltering step aftercompleting the T-F image fusion e results show thatremoving themean ltering step reduced the fusion eect Inthis study the mean ltering step was performed after T-Fimage fusion We tested the fusion eect when the meanltering step was performed before T-F image fusion(performing mean ltering step on the subimage afterSTFT) For some subjects it was better to perform the meanltering step after T-F image fusion erefore we recom-mend that researchers follow the analysis steps in this study

e wavelet low-frequency coecients (see details inequation (17)) of the fused image have an important in-uence on the fusion eect If we perform T-F image fusionon six electrode signals at the same time we will get six setsof wavelet low-frequency coecients Here each set of low-frequency coecient is a two-dimensional signal Referringto equation (1) we can assign a coecient to each two-dimensional signal and obtain the fused two-dimensionalsignal (ie the wavelet low-frequency coecients of thefused image) after linear summationis study explored thefeasibility of applying the spatial lter coecients of CCAfusion and maximum contrast fusion to T-F image fusionwhen fusing six electrode signals by using the T-F imagefusion method at the same time Since the fusion eect ofminimum energy fusion was poor the spatial lter co-ecients of the minimum energy fusion were not used heree spatial lter coecients of the maximum contrast fusionand CCA fusion at the frequency f were obtained byequations (9) and (10) and are set to Vmax and Vcca re-spectively Vmax and Vcca are two vectors with dimensions of6times1 where Vmax(1 1) and Vcca(1 1) represent the rstelements of the vectors Vmax and Vcca Since the T-F imagefusion method was performed on the six electrode signalsequation (17) is transformed into equation (19) where Vcorresponds to the spatial lter coecient Vmax or Vcca erest of the analysis process is the same as that in Section 210e same analysis steps were performed for all 20 targets(7ndash108Hz) e Welch spectrum analysis (the parametersare the same as in Section 32) was performed on the ob-tained signal after T-F image fusion Figures 8(a) and 8(b)show the Welch power spectra plotted using the data ofsubject 4 in the fourth round of the experiment Figure 8(a)shows the power spectrum plotted using spatial lter co-ecients of CCA fusion and Figure 8(b) shows the Welchpower spectrum plotted using spatial lter coecients ofmaximum contrast fusion where f represents the stimulusfrequency and the red circle indicates the amplitude at thestimulus frequency Figures 8(a) and 8(b) show that theamplitudes at the stimulation frequencies are prominente spatial lter coecients of CCA fusion and maximumcontrast fusion are eective for T-F image fusion us theT-F image fusion method proposed in this study focuses onthe fusion of wavelet low-frequency coecients of multiple

0

02

04

06

08

1

12

Acc

urac

y

Subject1 2 3 4 5 6 7 8 9 10

T-F image fusionCCA fusion

Minimum energy fusionMaximum contrast fusion

Figure 6 e accuracy of each subject under four fusion methods

0

02

04

06

08

1

12

Ave

rage

accu

racy

T-F image fusionCCA fusionMinimum energy fusionMaximum contrast fusion

Methods

Figure 7 e average accuracy of ten subjects under four fusionmethods

Computational Intelligence and Neuroscience 11

images (by assigning a coecient to each two-dimensionalsignal and obtaining the fused two-dimensional signal afterlinear summation) e premise of the above test results is

that we know the frequency of the signal to be fused If theonline test method is used (see details in Section 33) thesame online fusion results as CCA fusion and maximum

f = 7 Hz

times104

012

10 20 30 400

f = 72 Hz

times104

012

10 20 30 400

f = 74 Hz

times104

024

10 20 30 400

f = 76 Hz

times104

024

10 20 30 400

f = 78 Hz

times104

024

10 20 30 400

f = 8 Hz

times104

012

10 20 30 400

f = 82 Hz

times104

024

10 20 30 400

f = 84 Hz

times104

05

10

10 20 30 400

f = 86 Hz

times104

024

10 20 30 400

f = 88 Hz

times104

024

10 20 30 400

f = 9 Hz

times104

012

10 20 30 400

f = 92 Hz

times104

024

10 20 30 400

f = 94 Hz

times104

05

10

10 20 30 400

f = 96 Hz

times104

024

10 20 30 400

f = 98 Hz

times104

012

10 20 30 400

f = 10 Hz

times104

012

10 20 30 400

f = 102 Hz

times104

012

10 20 30 400

f = 104 Hz

times104

012

10 20 30 400

f = 106 Hz

times104

024

10 20 30 400

f = 108 Hz

times104

012

10 20 30 400Frequency (Hz)

Am

plitu

de V

2 (H

z)

(a)

f = 7 Hz

0

2

4

10 20 30 400

f = 8 Hz

0

2

4

10 20 30 400

f = 9 Hz

0

2

4

10 20 30 400

f = 10 Hz

0

2

4

10 20 30 400

f = 72 Hz0

2

4

10 20 30 400

f = 82 Hz0

5

10

10 20 30 400

f = 92 Hz0

2

4

10 20 30 400

f = 102 Hz0

2

4

10 20 30 400

f = 74 Hz

0

2

4

10 20 30 400

f = 84 Hz

0

10

20

10 20 30 400

f = 94 Hz

0

10

20

10 20 30 400

f = 104 Hz

0

2

4

10 20 30 400

f = 76 Hz0

5

10 20 30 400

f = 86 Hz0

5

10

10 20 30 400

f = 96 Hz0

5

10 20 30 400

f = 106 Hz0

5

10

10 20 30 400

f = 78 Hz0

5

10 20 30 400

f = 88 Hz0

5

10 20 30 400

f = 98 Hz0

2

4

10 20 30 400

f = 108 Hz0

2

4

10 20 30 400Frequency (Hz)

Am

plitu

de V

2 (H

z)

(b)

Figure 8 e Welch power spectrum plotted using the (a) CCA fusion spatial lter coecients and (b) maximum contrast fusion spatiallter coecients

12 Computational Intelligence and Neuroscience

contrast fusion were obtained T-F image fusion usedonly two electrode signals and the better online fusioneffect than that of the CCA fusion and the maximumcontrast fusion was obtained which shows the effec-tiveness of the proposed method Next we will explorethe fusion method of multiple T-F images (more than twoimages) that is to find suitable spatial filter coefficientsfor the wavelet low-frequency components of multipleT-F images

LLNF LLN1 lowastV(1 1) + LLN2 lowastV(2 1) + LLN3 lowastV(3 1)

+ LLN4 lowastV(4 1) + LLN5 lowastV(5 1)

+ LLN6 lowastV(6 1)

(19)

e parameters affect the fusion effect of the proposedmethod We listed the parameters that need to be selectedand the possible values of the parameters in Section 31 Inthis study the grid search method was used to traverse thecombination of all parameters and the best combination ofparameters was found In the experiment we found thatthe number of frequency bins and the Gaussian windowlength can be fixed to 54 and 57 We recommend that theresearchers determine the parameters according to theparameter selection principles and ranges given in Section31 In this study the SSMVEP active component wasenhanced by fusing multichannel signals into a single-channel signal and then the focused target frequencywas identified by performing spectrum analysis on thefused signals is is beneficial for SSMVEP studies basedon spectrum analysis For example Reference [21] pro-posed a frequency and phase mixed coding method in theSSVEP-based brain-computer interface (BCI) which in-creases the number of BCI coding targets by making onefrequency correspond to multiple different phases In thestudy the FFT analysis of the test signal is required to findthe possible focused target frequency and then calculate thephase value at that frequency e proposed method in thisstudy has a positive significance for accurately finding thefocused target frequency in the spectrum Moreover someEEG feature extraction algorithms for the BCI also requirespectrum analysis of the test signals [22 23] erefore themethod proposed in this study has a potential applicationvalue

5 Conclusion

To explore whether T-F domain analysis can achieve betterfusion effects than time-domain analysis this study pro-posed an SSMVEP enhancement method based on T-Fimage fusion e parameters of the T-F image fusionalgorithm were determined by the grid search method andthe electrode signals used for T-F image fusion were se-lected by bipolar fusion e analysis results show that thekey of the T-F image fusion algorithm is the fusion of thewavelet low-frequency components is study comparedthe enhancement effects of minimum energy fusionmaximum contrast fusion CCA fusion and T-F imagefusion on SSMVEP e experimental results show that the

online performance of the T-F image fusion method isbetter than that of the traditional spatial filtering methodswhich indicates that the proposed method is feasible to fuseSSMVEP in the T-F domain

Data Availability

e analytical data used to support the findings of this studyare included within the article and the raw data are availablefrom the corresponding author upon request

Ethical Approval

e subjects provided informed written consent in accor-dance with the protocol approved by the Institutional Re-view Board of Xirsquoan Jiaotong University

Conflicts of Interest

e authors declare that they have no conflicts of interest

Authorsrsquo Contributions

Wenqiang Yan worked on the algorithm and wrote thepaper Guanghua Xu involved in discussions and gavesuggestions throughout this project Longting Chen andXiaowei Zheng worked on the experimental design Allauthors read and approved the final manuscript

Acknowledgments

is research was supported by the National Natural ScienceFoundation of China (NSFC) (no 51775415) We want tothank the subjects for participating in these experiments

Supplementary Materials

MATLAB codes for T-F image fusion minimum energyfusion maximum contrast fusion and CCA fusion (Sup-plementary Materials)

References

[1] W Yan G Xu J Xie et al ldquoFour novel motion paradigmsbased on steady-state motion visual evoked potentialrdquo IEEETransactions on Biomedical Engineering vol 65 no 8pp 1696ndash1704 2018

[2] J R Wolpaw and D J Mcfarland ldquoMultichannel EEG-basedbrain-computer communicationrdquo Electroencephalographyand Clinical Neurophysiology vol 90 no 6 pp 444ndash4491994

[3] B O Peters G Pfurtscheller and H Flyvbjerg ldquoMiningmulti-channel EEG for its information content an ANN-based method for a brain-computer interfacerdquo Neural Net-works vol 11 no 7-8 pp 1429ndash1433 1998

[4] S Parini L Maggi A C Turconi and G Andreoni ldquoA robustand self-paced BCI system based on a four class SSVEPparadigm algorithms and protocols for a high-transfer-ratedirect brain communicationrdquo Computational Intelligence andNeuroscience vol 2009 pp 1ndash11 2009

[5] O Friman I Volosyak and A Graser ldquoMultiple channeldetection of steady-state visual evoked potentials for brain-

Computational Intelligence and Neuroscience 13

computer interfacesrdquo IEEE Transactions on Biomedical En-gineering vol 54 no 4 pp 742ndash750 2007

[6] Z Lin C Zhang WWu and X Gao ldquoFrequency recognitionbased on canonical correlation analysis for SSVEP-basedBCIsrdquo IEEE Transactions on Biomedical Engineering vol 53- no 12 pp 2610ndash2614 2006

[7] M Middendorf G Mcmillan G Calhoun and K S JonesldquoBrain-computer interfaces based on the steady-state visual-evoked responserdquo IEEE Transactions on Rehabilitation En-gineering vol 8 no 2 pp 211ndash214 2000

[8] G R Muller-Putz R Scherer C Brauneis andG Pfurtscheller ldquoSteady-state visual evoked potential(SSVEP)-based communication impact of harmonic fre-quency componentsrdquo Journal of Neural Engineering vol 2no 4 pp 123ndash130 2005

[9] D J Mcfarland L M Mccane S V David and J R WolpawldquoSpatial filter selection for EEG-based communicationrdquoElectroencephalography and Clinical Neurophysiologyvol 103 no 3 pp 386ndash394 1997

[10] H Li B S Manjunath and S K Mitra ldquoMultisensor imagefusion using the wavelet transformrdquo Graphical Models andImage Processing vol 57 no 3 pp 235ndash245 1995

[11] S Li and B Yang ldquoMultifocus image fusion by combiningcurvelet and wavelet transformrdquo Pattern Recognition Lettersvol 29 no 9 pp 1295ndash1301 2008

[12] B Zhang ldquoStudy on image fusion based on different fusionrules of wavelet transformrdquo Acta Photonica Sinica vol 33pp 173ndash176 2004

[13] C DMeyerMatrix Analysis and Applied Linear Algebra Bookand Solutions Manual Society for Industrial and AppliedMathematics Philadelphia PA USA 2000

[14] K J A Sundar M Jahnavi and K Lakshmisaritha ldquoMulti-sensor image fusion based on empirical wavelet transformrdquo inProceedings of the International Conference on ElectricalElectronics Communication Computer and OptimizationTechniques pp 93ndash97 Mysuru India December 2018

[15] L Tian J Ahmed Q Du B Shankar and S Adnan ldquoMultifocus image fusion using combined median and average filterbased hybrid stationary wavelet transform and principalcomponent analysisrdquo International Journal of AdvancedComputer Science and Applications vol 9 no 6 2018

[16] G He S Xing D Dong and X Zhao ldquoPanchromatic andmulti-spectral image fusion method based on two-step sparserepresentation and wavelet transformrdquo in Asia-Pacific Signaland Information Processing Association Summit and Confer-ence pp 259ndash262 Honolulu HI USA January 2018

[17] Y Shin S Lee M Ahn H Cho S C Jun and H-N LeeldquoNoise robustness analysis of sparse representation basedclassification method for non-stationary EEG signal classifi-cationrdquo Biomedical Signal Processing and Control vol 21pp 8ndash18 2015

[18] M Xu L Chen L Zhang et al ldquoA visual parallel-BCI spellerbased on the time-frequency coding strategyrdquo Journal ofNeural Engineering vol 11 no 2 article 026014 2014

[19] J Xie G Xu A Luo et al ldquoe role of visual noise ininfluencing mental load and fatigue in a steady-state motionvisual evoked potential-based brain-computer interfacerdquoSensors vol 17 no 8 p 1873 2017

[20] K Barbe R Pintelon and J Schoukens ldquoWelch methodrevisited nonparametric power spectrum estimation viacircular overlaprdquo IEEE Transactions on Signal Processingvol 58 no 2 pp 553ndash565 2010

[21] C Jia X Gao B Hong and S Gao ldquoFrequency and phasemixed coding in SSVEP-based brain-computer interfacerdquo

IEEE transactions on bio-medical engineering vol 58 no 1pp 200ndash206 2011

[22] C Yang H Wu Z Li W He N Wang and C-Y Su ldquoMindcontrol of a robotic arm with visual fusion technologyrdquo IEEETransactions on Industrial Informatics vol 14 no 9pp 3822ndash3830 2017

[23] J Castillo S Muller E Caicedo and T Bastos ldquoFeatureextraction techniques based on power spectrum for a SSVEP-BCIrdquo in Proceedings of the IEEE 23rd International Sympo-sium on Industrial Electronics (ISIE) Istanbul Turkey June2014

14 Computational Intelligence and Neuroscience

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 12: Steady-StateMotionVisualEvokedPotential(SSMVEP) … · 2019. 2. 14. · wasconductedbyMATLAB’stfrstftfunction.Afterfusing two T-F images, the T-F signal was inversely transformed

images (by assigning a coecient to each two-dimensionalsignal and obtaining the fused two-dimensional signal afterlinear summation) e premise of the above test results is

that we know the frequency of the signal to be fused If theonline test method is used (see details in Section 33) thesame online fusion results as CCA fusion and maximum

f = 7 Hz

times104

012

10 20 30 400

f = 72 Hz

times104

012

10 20 30 400

f = 74 Hz

times104

024

10 20 30 400

f = 76 Hz

times104

024

10 20 30 400

f = 78 Hz

times104

024

10 20 30 400

f = 8 Hz

times104

012

10 20 30 400

f = 82 Hz

times104

024

10 20 30 400

f = 84 Hz

times104

05

10

10 20 30 400

f = 86 Hz

times104

024

10 20 30 400

f = 88 Hz

times104

024

10 20 30 400

f = 9 Hz

times104

012

10 20 30 400

f = 92 Hz

times104

024

10 20 30 400

f = 94 Hz

times104

05

10

10 20 30 400

f = 96 Hz

times104

024

10 20 30 400

f = 98 Hz

times104

012

10 20 30 400

f = 10 Hz

times104

012

10 20 30 400

f = 102 Hz

times104

012

10 20 30 400

f = 104 Hz

times104

012

10 20 30 400

f = 106 Hz

times104

024

10 20 30 400

f = 108 Hz

times104

012

10 20 30 400Frequency (Hz)

Am

plitu

de V

2 (H

z)

(a)

f = 7 Hz

0

2

4

10 20 30 400

f = 8 Hz

0

2

4

10 20 30 400

f = 9 Hz

0

2

4

10 20 30 400

f = 10 Hz

0

2

4

10 20 30 400

f = 72 Hz0

2

4

10 20 30 400

f = 82 Hz0

5

10

10 20 30 400

f = 92 Hz0

2

4

10 20 30 400

f = 102 Hz0

2

4

10 20 30 400

f = 74 Hz

0

2

4

10 20 30 400

f = 84 Hz

0

10

20

10 20 30 400

f = 94 Hz

0

10

20

10 20 30 400

f = 104 Hz

0

2

4

10 20 30 400

f = 76 Hz0

5

10 20 30 400

f = 86 Hz0

5

10

10 20 30 400

f = 96 Hz0

5

10 20 30 400

f = 106 Hz0

5

10

10 20 30 400

f = 78 Hz0

5

10 20 30 400

f = 88 Hz0

5

10 20 30 400

f = 98 Hz0

2

4

10 20 30 400

f = 108 Hz0

2

4

10 20 30 400Frequency (Hz)

Am

plitu

de V

2 (H

z)

(b)

Figure 8 e Welch power spectrum plotted using the (a) CCA fusion spatial lter coecients and (b) maximum contrast fusion spatiallter coecients

12 Computational Intelligence and Neuroscience

contrast fusion were obtained T-F image fusion usedonly two electrode signals and the better online fusioneffect than that of the CCA fusion and the maximumcontrast fusion was obtained which shows the effec-tiveness of the proposed method Next we will explorethe fusion method of multiple T-F images (more than twoimages) that is to find suitable spatial filter coefficientsfor the wavelet low-frequency components of multipleT-F images

LLNF LLN1 lowastV(1 1) + LLN2 lowastV(2 1) + LLN3 lowastV(3 1)

+ LLN4 lowastV(4 1) + LLN5 lowastV(5 1)

+ LLN6 lowastV(6 1)

(19)

e parameters affect the fusion effect of the proposedmethod We listed the parameters that need to be selectedand the possible values of the parameters in Section 31 Inthis study the grid search method was used to traverse thecombination of all parameters and the best combination ofparameters was found In the experiment we found thatthe number of frequency bins and the Gaussian windowlength can be fixed to 54 and 57 We recommend that theresearchers determine the parameters according to theparameter selection principles and ranges given in Section31 In this study the SSMVEP active component wasenhanced by fusing multichannel signals into a single-channel signal and then the focused target frequencywas identified by performing spectrum analysis on thefused signals is is beneficial for SSMVEP studies basedon spectrum analysis For example Reference [21] pro-posed a frequency and phase mixed coding method in theSSVEP-based brain-computer interface (BCI) which in-creases the number of BCI coding targets by making onefrequency correspond to multiple different phases In thestudy the FFT analysis of the test signal is required to findthe possible focused target frequency and then calculate thephase value at that frequency e proposed method in thisstudy has a positive significance for accurately finding thefocused target frequency in the spectrum Moreover someEEG feature extraction algorithms for the BCI also requirespectrum analysis of the test signals [22 23] erefore themethod proposed in this study has a potential applicationvalue

5 Conclusion

To explore whether T-F domain analysis can achieve betterfusion effects than time-domain analysis this study pro-posed an SSMVEP enhancement method based on T-Fimage fusion e parameters of the T-F image fusionalgorithm were determined by the grid search method andthe electrode signals used for T-F image fusion were se-lected by bipolar fusion e analysis results show that thekey of the T-F image fusion algorithm is the fusion of thewavelet low-frequency components is study comparedthe enhancement effects of minimum energy fusionmaximum contrast fusion CCA fusion and T-F imagefusion on SSMVEP e experimental results show that the

online performance of the T-F image fusion method isbetter than that of the traditional spatial filtering methodswhich indicates that the proposed method is feasible to fuseSSMVEP in the T-F domain

Data Availability

e analytical data used to support the findings of this studyare included within the article and the raw data are availablefrom the corresponding author upon request

Ethical Approval

e subjects provided informed written consent in accor-dance with the protocol approved by the Institutional Re-view Board of Xirsquoan Jiaotong University

Conflicts of Interest

e authors declare that they have no conflicts of interest

Authorsrsquo Contributions

Wenqiang Yan worked on the algorithm and wrote thepaper Guanghua Xu involved in discussions and gavesuggestions throughout this project Longting Chen andXiaowei Zheng worked on the experimental design Allauthors read and approved the final manuscript

Acknowledgments

is research was supported by the National Natural ScienceFoundation of China (NSFC) (no 51775415) We want tothank the subjects for participating in these experiments

Supplementary Materials

MATLAB codes for T-F image fusion minimum energyfusion maximum contrast fusion and CCA fusion (Sup-plementary Materials)

References

[1] W Yan G Xu J Xie et al ldquoFour novel motion paradigmsbased on steady-state motion visual evoked potentialrdquo IEEETransactions on Biomedical Engineering vol 65 no 8pp 1696ndash1704 2018

[2] J R Wolpaw and D J Mcfarland ldquoMultichannel EEG-basedbrain-computer communicationrdquo Electroencephalographyand Clinical Neurophysiology vol 90 no 6 pp 444ndash4491994

[3] B O Peters G Pfurtscheller and H Flyvbjerg ldquoMiningmulti-channel EEG for its information content an ANN-based method for a brain-computer interfacerdquo Neural Net-works vol 11 no 7-8 pp 1429ndash1433 1998

[4] S Parini L Maggi A C Turconi and G Andreoni ldquoA robustand self-paced BCI system based on a four class SSVEPparadigm algorithms and protocols for a high-transfer-ratedirect brain communicationrdquo Computational Intelligence andNeuroscience vol 2009 pp 1ndash11 2009

[5] O Friman I Volosyak and A Graser ldquoMultiple channeldetection of steady-state visual evoked potentials for brain-

Computational Intelligence and Neuroscience 13

computer interfacesrdquo IEEE Transactions on Biomedical En-gineering vol 54 no 4 pp 742ndash750 2007

[6] Z Lin C Zhang WWu and X Gao ldquoFrequency recognitionbased on canonical correlation analysis for SSVEP-basedBCIsrdquo IEEE Transactions on Biomedical Engineering vol 53- no 12 pp 2610ndash2614 2006

[7] M Middendorf G Mcmillan G Calhoun and K S JonesldquoBrain-computer interfaces based on the steady-state visual-evoked responserdquo IEEE Transactions on Rehabilitation En-gineering vol 8 no 2 pp 211ndash214 2000

[8] G R Muller-Putz R Scherer C Brauneis andG Pfurtscheller ldquoSteady-state visual evoked potential(SSVEP)-based communication impact of harmonic fre-quency componentsrdquo Journal of Neural Engineering vol 2no 4 pp 123ndash130 2005

[9] D J Mcfarland L M Mccane S V David and J R WolpawldquoSpatial filter selection for EEG-based communicationrdquoElectroencephalography and Clinical Neurophysiologyvol 103 no 3 pp 386ndash394 1997

[10] H Li B S Manjunath and S K Mitra ldquoMultisensor imagefusion using the wavelet transformrdquo Graphical Models andImage Processing vol 57 no 3 pp 235ndash245 1995

[11] S Li and B Yang ldquoMultifocus image fusion by combiningcurvelet and wavelet transformrdquo Pattern Recognition Lettersvol 29 no 9 pp 1295ndash1301 2008

[12] B Zhang ldquoStudy on image fusion based on different fusionrules of wavelet transformrdquo Acta Photonica Sinica vol 33pp 173ndash176 2004

[13] C DMeyerMatrix Analysis and Applied Linear Algebra Bookand Solutions Manual Society for Industrial and AppliedMathematics Philadelphia PA USA 2000

[14] K J A Sundar M Jahnavi and K Lakshmisaritha ldquoMulti-sensor image fusion based on empirical wavelet transformrdquo inProceedings of the International Conference on ElectricalElectronics Communication Computer and OptimizationTechniques pp 93ndash97 Mysuru India December 2018

[15] L Tian J Ahmed Q Du B Shankar and S Adnan ldquoMultifocus image fusion using combined median and average filterbased hybrid stationary wavelet transform and principalcomponent analysisrdquo International Journal of AdvancedComputer Science and Applications vol 9 no 6 2018

[16] G He S Xing D Dong and X Zhao ldquoPanchromatic andmulti-spectral image fusion method based on two-step sparserepresentation and wavelet transformrdquo in Asia-Pacific Signaland Information Processing Association Summit and Confer-ence pp 259ndash262 Honolulu HI USA January 2018

[17] Y Shin S Lee M Ahn H Cho S C Jun and H-N LeeldquoNoise robustness analysis of sparse representation basedclassification method for non-stationary EEG signal classifi-cationrdquo Biomedical Signal Processing and Control vol 21pp 8ndash18 2015

[18] M Xu L Chen L Zhang et al ldquoA visual parallel-BCI spellerbased on the time-frequency coding strategyrdquo Journal ofNeural Engineering vol 11 no 2 article 026014 2014

[19] J Xie G Xu A Luo et al ldquoe role of visual noise ininfluencing mental load and fatigue in a steady-state motionvisual evoked potential-based brain-computer interfacerdquoSensors vol 17 no 8 p 1873 2017

[20] K Barbe R Pintelon and J Schoukens ldquoWelch methodrevisited nonparametric power spectrum estimation viacircular overlaprdquo IEEE Transactions on Signal Processingvol 58 no 2 pp 553ndash565 2010

[21] C Jia X Gao B Hong and S Gao ldquoFrequency and phasemixed coding in SSVEP-based brain-computer interfacerdquo

IEEE transactions on bio-medical engineering vol 58 no 1pp 200ndash206 2011

[22] C Yang H Wu Z Li W He N Wang and C-Y Su ldquoMindcontrol of a robotic arm with visual fusion technologyrdquo IEEETransactions on Industrial Informatics vol 14 no 9pp 3822ndash3830 2017

[23] J Castillo S Muller E Caicedo and T Bastos ldquoFeatureextraction techniques based on power spectrum for a SSVEP-BCIrdquo in Proceedings of the IEEE 23rd International Sympo-sium on Industrial Electronics (ISIE) Istanbul Turkey June2014

14 Computational Intelligence and Neuroscience

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 13: Steady-StateMotionVisualEvokedPotential(SSMVEP) … · 2019. 2. 14. · wasconductedbyMATLAB’stfrstftfunction.Afterfusing two T-F images, the T-F signal was inversely transformed

contrast fusion were obtained T-F image fusion usedonly two electrode signals and the better online fusioneffect than that of the CCA fusion and the maximumcontrast fusion was obtained which shows the effec-tiveness of the proposed method Next we will explorethe fusion method of multiple T-F images (more than twoimages) that is to find suitable spatial filter coefficientsfor the wavelet low-frequency components of multipleT-F images

LLNF LLN1 lowastV(1 1) + LLN2 lowastV(2 1) + LLN3 lowastV(3 1)

+ LLN4 lowastV(4 1) + LLN5 lowastV(5 1)

+ LLN6 lowastV(6 1)

(19)

e parameters affect the fusion effect of the proposedmethod We listed the parameters that need to be selectedand the possible values of the parameters in Section 31 Inthis study the grid search method was used to traverse thecombination of all parameters and the best combination ofparameters was found In the experiment we found thatthe number of frequency bins and the Gaussian windowlength can be fixed to 54 and 57 We recommend that theresearchers determine the parameters according to theparameter selection principles and ranges given in Section31 In this study the SSMVEP active component wasenhanced by fusing multichannel signals into a single-channel signal and then the focused target frequencywas identified by performing spectrum analysis on thefused signals is is beneficial for SSMVEP studies basedon spectrum analysis For example Reference [21] pro-posed a frequency and phase mixed coding method in theSSVEP-based brain-computer interface (BCI) which in-creases the number of BCI coding targets by making onefrequency correspond to multiple different phases In thestudy the FFT analysis of the test signal is required to findthe possible focused target frequency and then calculate thephase value at that frequency e proposed method in thisstudy has a positive significance for accurately finding thefocused target frequency in the spectrum Moreover someEEG feature extraction algorithms for the BCI also requirespectrum analysis of the test signals [22 23] erefore themethod proposed in this study has a potential applicationvalue

5 Conclusion

To explore whether T-F domain analysis can achieve betterfusion effects than time-domain analysis this study pro-posed an SSMVEP enhancement method based on T-Fimage fusion e parameters of the T-F image fusionalgorithm were determined by the grid search method andthe electrode signals used for T-F image fusion were se-lected by bipolar fusion e analysis results show that thekey of the T-F image fusion algorithm is the fusion of thewavelet low-frequency components is study comparedthe enhancement effects of minimum energy fusionmaximum contrast fusion CCA fusion and T-F imagefusion on SSMVEP e experimental results show that the

online performance of the T-F image fusion method isbetter than that of the traditional spatial filtering methodswhich indicates that the proposed method is feasible to fuseSSMVEP in the T-F domain

Data Availability

e analytical data used to support the findings of this studyare included within the article and the raw data are availablefrom the corresponding author upon request

Ethical Approval

e subjects provided informed written consent in accor-dance with the protocol approved by the Institutional Re-view Board of Xirsquoan Jiaotong University

Conflicts of Interest

e authors declare that they have no conflicts of interest

Authorsrsquo Contributions

Wenqiang Yan worked on the algorithm and wrote thepaper Guanghua Xu involved in discussions and gavesuggestions throughout this project Longting Chen andXiaowei Zheng worked on the experimental design Allauthors read and approved the final manuscript

Acknowledgments

is research was supported by the National Natural ScienceFoundation of China (NSFC) (no 51775415) We want tothank the subjects for participating in these experiments

Supplementary Materials

MATLAB codes for T-F image fusion minimum energyfusion maximum contrast fusion and CCA fusion (Sup-plementary Materials)

References

[1] W Yan G Xu J Xie et al ldquoFour novel motion paradigmsbased on steady-state motion visual evoked potentialrdquo IEEETransactions on Biomedical Engineering vol 65 no 8pp 1696ndash1704 2018

[2] J R Wolpaw and D J Mcfarland ldquoMultichannel EEG-basedbrain-computer communicationrdquo Electroencephalographyand Clinical Neurophysiology vol 90 no 6 pp 444ndash4491994

[3] B O Peters G Pfurtscheller and H Flyvbjerg ldquoMiningmulti-channel EEG for its information content an ANN-based method for a brain-computer interfacerdquo Neural Net-works vol 11 no 7-8 pp 1429ndash1433 1998

[4] S Parini L Maggi A C Turconi and G Andreoni ldquoA robustand self-paced BCI system based on a four class SSVEPparadigm algorithms and protocols for a high-transfer-ratedirect brain communicationrdquo Computational Intelligence andNeuroscience vol 2009 pp 1ndash11 2009

[5] O Friman I Volosyak and A Graser ldquoMultiple channeldetection of steady-state visual evoked potentials for brain-

Computational Intelligence and Neuroscience 13

computer interfacesrdquo IEEE Transactions on Biomedical En-gineering vol 54 no 4 pp 742ndash750 2007

[6] Z Lin C Zhang WWu and X Gao ldquoFrequency recognitionbased on canonical correlation analysis for SSVEP-basedBCIsrdquo IEEE Transactions on Biomedical Engineering vol 53- no 12 pp 2610ndash2614 2006

[7] M Middendorf G Mcmillan G Calhoun and K S JonesldquoBrain-computer interfaces based on the steady-state visual-evoked responserdquo IEEE Transactions on Rehabilitation En-gineering vol 8 no 2 pp 211ndash214 2000

[8] G R Muller-Putz R Scherer C Brauneis andG Pfurtscheller ldquoSteady-state visual evoked potential(SSVEP)-based communication impact of harmonic fre-quency componentsrdquo Journal of Neural Engineering vol 2no 4 pp 123ndash130 2005

[9] D J Mcfarland L M Mccane S V David and J R WolpawldquoSpatial filter selection for EEG-based communicationrdquoElectroencephalography and Clinical Neurophysiologyvol 103 no 3 pp 386ndash394 1997

[10] H Li B S Manjunath and S K Mitra ldquoMultisensor imagefusion using the wavelet transformrdquo Graphical Models andImage Processing vol 57 no 3 pp 235ndash245 1995

[11] S Li and B Yang ldquoMultifocus image fusion by combiningcurvelet and wavelet transformrdquo Pattern Recognition Lettersvol 29 no 9 pp 1295ndash1301 2008

[12] B Zhang ldquoStudy on image fusion based on different fusionrules of wavelet transformrdquo Acta Photonica Sinica vol 33pp 173ndash176 2004

[13] C DMeyerMatrix Analysis and Applied Linear Algebra Bookand Solutions Manual Society for Industrial and AppliedMathematics Philadelphia PA USA 2000

[14] K J A Sundar M Jahnavi and K Lakshmisaritha ldquoMulti-sensor image fusion based on empirical wavelet transformrdquo inProceedings of the International Conference on ElectricalElectronics Communication Computer and OptimizationTechniques pp 93ndash97 Mysuru India December 2018

[15] L Tian J Ahmed Q Du B Shankar and S Adnan ldquoMultifocus image fusion using combined median and average filterbased hybrid stationary wavelet transform and principalcomponent analysisrdquo International Journal of AdvancedComputer Science and Applications vol 9 no 6 2018

[16] G He S Xing D Dong and X Zhao ldquoPanchromatic andmulti-spectral image fusion method based on two-step sparserepresentation and wavelet transformrdquo in Asia-Pacific Signaland Information Processing Association Summit and Confer-ence pp 259ndash262 Honolulu HI USA January 2018

[17] Y Shin S Lee M Ahn H Cho S C Jun and H-N LeeldquoNoise robustness analysis of sparse representation basedclassification method for non-stationary EEG signal classifi-cationrdquo Biomedical Signal Processing and Control vol 21pp 8ndash18 2015

[18] M Xu L Chen L Zhang et al ldquoA visual parallel-BCI spellerbased on the time-frequency coding strategyrdquo Journal ofNeural Engineering vol 11 no 2 article 026014 2014

[19] J Xie G Xu A Luo et al ldquoe role of visual noise ininfluencing mental load and fatigue in a steady-state motionvisual evoked potential-based brain-computer interfacerdquoSensors vol 17 no 8 p 1873 2017

[20] K Barbe R Pintelon and J Schoukens ldquoWelch methodrevisited nonparametric power spectrum estimation viacircular overlaprdquo IEEE Transactions on Signal Processingvol 58 no 2 pp 553ndash565 2010

[21] C Jia X Gao B Hong and S Gao ldquoFrequency and phasemixed coding in SSVEP-based brain-computer interfacerdquo

IEEE transactions on bio-medical engineering vol 58 no 1pp 200ndash206 2011

[22] C Yang H Wu Z Li W He N Wang and C-Y Su ldquoMindcontrol of a robotic arm with visual fusion technologyrdquo IEEETransactions on Industrial Informatics vol 14 no 9pp 3822ndash3830 2017

[23] J Castillo S Muller E Caicedo and T Bastos ldquoFeatureextraction techniques based on power spectrum for a SSVEP-BCIrdquo in Proceedings of the IEEE 23rd International Sympo-sium on Industrial Electronics (ISIE) Istanbul Turkey June2014

14 Computational Intelligence and Neuroscience

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 14: Steady-StateMotionVisualEvokedPotential(SSMVEP) … · 2019. 2. 14. · wasconductedbyMATLAB’stfrstftfunction.Afterfusing two T-F images, the T-F signal was inversely transformed

computer interfacesrdquo IEEE Transactions on Biomedical En-gineering vol 54 no 4 pp 742ndash750 2007

[6] Z Lin C Zhang WWu and X Gao ldquoFrequency recognitionbased on canonical correlation analysis for SSVEP-basedBCIsrdquo IEEE Transactions on Biomedical Engineering vol 53- no 12 pp 2610ndash2614 2006

[7] M Middendorf G Mcmillan G Calhoun and K S JonesldquoBrain-computer interfaces based on the steady-state visual-evoked responserdquo IEEE Transactions on Rehabilitation En-gineering vol 8 no 2 pp 211ndash214 2000

[8] G R Muller-Putz R Scherer C Brauneis andG Pfurtscheller ldquoSteady-state visual evoked potential(SSVEP)-based communication impact of harmonic fre-quency componentsrdquo Journal of Neural Engineering vol 2no 4 pp 123ndash130 2005

[9] D J Mcfarland L M Mccane S V David and J R WolpawldquoSpatial filter selection for EEG-based communicationrdquoElectroencephalography and Clinical Neurophysiologyvol 103 no 3 pp 386ndash394 1997

[10] H Li B S Manjunath and S K Mitra ldquoMultisensor imagefusion using the wavelet transformrdquo Graphical Models andImage Processing vol 57 no 3 pp 235ndash245 1995

[11] S Li and B Yang ldquoMultifocus image fusion by combiningcurvelet and wavelet transformrdquo Pattern Recognition Lettersvol 29 no 9 pp 1295ndash1301 2008

[12] B Zhang ldquoStudy on image fusion based on different fusionrules of wavelet transformrdquo Acta Photonica Sinica vol 33pp 173ndash176 2004

[13] C DMeyerMatrix Analysis and Applied Linear Algebra Bookand Solutions Manual Society for Industrial and AppliedMathematics Philadelphia PA USA 2000

[14] K J A Sundar M Jahnavi and K Lakshmisaritha ldquoMulti-sensor image fusion based on empirical wavelet transformrdquo inProceedings of the International Conference on ElectricalElectronics Communication Computer and OptimizationTechniques pp 93ndash97 Mysuru India December 2018

[15] L Tian J Ahmed Q Du B Shankar and S Adnan ldquoMultifocus image fusion using combined median and average filterbased hybrid stationary wavelet transform and principalcomponent analysisrdquo International Journal of AdvancedComputer Science and Applications vol 9 no 6 2018

[16] G He S Xing D Dong and X Zhao ldquoPanchromatic andmulti-spectral image fusion method based on two-step sparserepresentation and wavelet transformrdquo in Asia-Pacific Signaland Information Processing Association Summit and Confer-ence pp 259ndash262 Honolulu HI USA January 2018

[17] Y Shin S Lee M Ahn H Cho S C Jun and H-N LeeldquoNoise robustness analysis of sparse representation basedclassification method for non-stationary EEG signal classifi-cationrdquo Biomedical Signal Processing and Control vol 21pp 8ndash18 2015

[18] M Xu L Chen L Zhang et al ldquoA visual parallel-BCI spellerbased on the time-frequency coding strategyrdquo Journal ofNeural Engineering vol 11 no 2 article 026014 2014

[19] J Xie G Xu A Luo et al ldquoe role of visual noise ininfluencing mental load and fatigue in a steady-state motionvisual evoked potential-based brain-computer interfacerdquoSensors vol 17 no 8 p 1873 2017

[20] K Barbe R Pintelon and J Schoukens ldquoWelch methodrevisited nonparametric power spectrum estimation viacircular overlaprdquo IEEE Transactions on Signal Processingvol 58 no 2 pp 553ndash565 2010

[21] C Jia X Gao B Hong and S Gao ldquoFrequency and phasemixed coding in SSVEP-based brain-computer interfacerdquo

IEEE transactions on bio-medical engineering vol 58 no 1pp 200ndash206 2011

[22] C Yang H Wu Z Li W He N Wang and C-Y Su ldquoMindcontrol of a robotic arm with visual fusion technologyrdquo IEEETransactions on Industrial Informatics vol 14 no 9pp 3822ndash3830 2017

[23] J Castillo S Muller E Caicedo and T Bastos ldquoFeatureextraction techniques based on power spectrum for a SSVEP-BCIrdquo in Proceedings of the IEEE 23rd International Sympo-sium on Industrial Electronics (ISIE) Istanbul Turkey June2014

14 Computational Intelligence and Neuroscience

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 15: Steady-StateMotionVisualEvokedPotential(SSMVEP) … · 2019. 2. 14. · wasconductedbyMATLAB’stfrstftfunction.Afterfusing two T-F images, the T-F signal was inversely transformed

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom