
Source: shodhganga.inflibnet.ac.in/bitstream/10603/13448/7/07_chapter 2.pdf

CHAPTER 2

LITERATURE SURVEY

2.1 INTRODUCTION

The cardiovascular system consists of the myocardium (the heart), veins, arteries and capillaries. Its main function is to transport oxygen to the muscles and remove carbon dioxide. Furthermore, it carries waste products to the kidneys and liver, delivers white blood cells to tissues and helps control the acid-base balance of the body.

The autonomic nervous system, shown in Figure 2.1, has primary control of the heart's rate and rhythm. It also controls the smooth muscle fibers, the glands, blood flow to the genitals during sexual activity, the gastrointestinal tract, sweating and the pupillary aperture. The autonomic nervous system consists of parasympathetic and sympathetic divisions, which have opposing effects on the body: for example, parasympathetic activation preserves the blood circulation in the muscles, while sympathetic activation accelerates it. The primary research topic in the field of heart rate variability (HRV) is to quantify and interpret the autonomic processes of the human body and the balance between parasympathetic and sympathetic activation.

The monitoring of the heart rate and its variability is an attempt at an indirect measurement of autonomic control. Hence, HRV can be used as a general health index, much like noninvasive diastolic and systolic blood pressure. The clinical applications of such a system would include, for instance, the monitoring of mental stress, the prevention and prediction of myocardial infarction or dysfunction, the monitoring of isometric and dynamic exercise, the prediction and diagnosis of overreaching, the measurement of vitality, and the monitoring of recovery from exercise or injury. However, diagnostic products are yet to come, since at present no widely approved clinical system for monitoring the autonomic nervous system via the heart rate exists.

Figure 2.1 Autonomic nervous system (original figure from the National Parkinson Foundation, www.parkinson.org)

2.2 HEART RATE DYNAMICS

Heart rate is a complex product of several physiological mechanisms, which poses a challenge to a valid interpretation of the heart rate. Emotions and stress may have an instant effect on the heart rate. The recovery from stress may be moderately rapid, but continuous stress may also appear as a long-term alteration of the heart rate level and variability.

The characteristics and properties of the heart rate change considerably between a resting and an exercising individual. This appears in the temporal dynamics and characteristics of the signal. For example, the acceleration of the heart rate from a resting level to an individual's maximal heart rate may be relatively rapid as a maximal exercise response. However, the recovery from the maximal heart rate back to the resting level is not as instantaneous and may take hours, or even days, after heavy exercise. The body remains in a metabolic state to remove carbon dioxide and lactates, and this process keeps the cardiovascular system accelerated. Furthermore, the body has to recover from the oxygen deficit induced by the exercise. A more rapid increase in the heart rate may be achieved with more intense sports, e.g., 400 meters running.

Characteristics of heart rate time series are heavily influenced by inter- and intra-individual variations. The difference between two successive RR intervals decreases as the heart rate increases; during sleep the difference is at its highest. HRV is affected by body movements, position changes, temperature alterations, pain and mental responses. The heart rate, its variation, and the responses to position change and standing differ among individuals. Furthermore, an individual's age, gender, mental stress, vitality and fitness are reported to affect heart rate variability. The investigation of nonlinear dynamics (NLD) and of indices quantifying the complexity of those dynamics has challenged our view of the physiological networks regulating heart rate (HR) and blood pressure, thereby enhancing our knowledge and stimulating significant and innovative research into cardiovascular dynamics.


2.3 PREPROCESSING METHODS

Any artifact in the HRV signal interferes with its analysis. Artifacts within HRV signals can be divided into technical and physiological artifacts. Technical artifacts include missing or additional QRS complex detections and errors in the R-wave occurrence times. Physiological artifacts, on the other hand, include ectopic beats and arrhythmic events. To avoid the interference of such artifacts, these beats must be removed either by editing or by some means of filtering or interpolation.

2.3.1 Removal of Artifacts

Ectopic beats, which originate from secondary or tertiary pacemakers or from locally aberrant foci, temporarily disrupt normal neurocardiac modulation. An ectopic beat often appears early or late with respect to the timing of a sinus beat (Javier Mateo and Pablo Laguna 2003). This creates a sharp spike in the RR interval series, which is likely to add a significant power contribution to the power spectrum at an artifactual frequency. Many of the commonly used standard time domain measures involve Euclidean distance computations, so just one outlier can significantly alter the value of a metric. An algorithm exists that detects and classifies ectopic beats (Laguna et al 1996), but for HRV analysis these beats must be removed either by editing (Salo et al 2001) or by some means of interpolation or filtering.

In addition to ectopic beats, QRS complex misdetections can generate an effect similar to that of ectopic beats in the HRV analysis. Conventionally, ectopic beats are corrected manually before the analysis. Keenan et al (2006) were the first to present a discrete wavelet threshold based interpolation method for detecting and correcting ectopic beats automatically.


2.3.1.1 Discrete Wavelet Transform based filtering

The discrete wavelet transform (DWT) is used both for identification and for filtering of ectopic beats. Wavelet coefficients at the highest level of detail are analyzed to locate the ectopic beats. The DWT coefficients are obtained from the unprocessed RR interval signal by decomposing it into a set of frequency bands with low pass and high pass filter banks. Reconstruction normally takes place after some kind of soft or hard thresholding, or compression; signal reconstruction is therefore the inverse DWT. Following decomposition and prior to the wavelet reconstruction process, ectopic beats are removed by a thresholding process.

Soft thresholding, or shrinkage, requires selecting a threshold such that all wavelet coefficients in each subband falling below it are reduced to zero; the coefficients above the threshold have the threshold subtracted from them, so they shrink toward zero. Hard thresholding reduces the coefficients below the threshold to zero while leaving the coefficients above it unchanged. This nonlinear approach is very different from conventional filtering and has been shown to be very effective in denoising images. The hard thresholding approach used for ectopic beat removal is given by Equation (2.1):

W[n] = { 0,      W[n] > T
       { W[n],   W[n] ≤ T                                  (2.1)

where W[n] represents the wavelet coefficients and the threshold T is chosen for the higher frequency RR intervals so as to remove the ectopic beats while retaining the signal quality. In contrast to conventional denoising, coefficients above this threshold are therefore set to zero.
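As an illustration, the thresholding of Equation (2.1) can be sketched on a one-level Haar DWT of a short RR series. This is a minimal sketch, not the method of the cited work: the function names, the toy threshold T = 0.1 s, and the use of the coefficient magnitude in the threshold test are assumptions, and a real implementation would use a deeper decomposition.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass (detail)
    return a, d

def haar_idwt(a, d):
    """Inverse one-level Haar DWT (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def hard_threshold(w, T):
    """Equation (2.1): zero the coefficients whose magnitude exceeds T."""
    w = np.asarray(w, dtype=float)
    return np.where(np.abs(w) > T, 0.0, w)

# RR series (seconds) with one ectopic pair: a short beat followed by a long one
rr = np.array([0.80, 0.81, 0.79, 0.80, 0.40, 1.20, 0.80, 0.81])
a, d = haar_dwt(rr)
d_clean = hard_threshold(d, T=0.1)   # the spike shows up as a large detail coefficient
rr_clean = haar_idwt(a, d_clean)     # ectopic pair is smoothed to its local mean
```

The normal beats pass through unchanged because their detail coefficients stay below T, while the short/long ectopic pair collapses to the average of the two intervals.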


Figure 2.2 Wavelet filtering: x[n] -> DWT (S = 5) -> hard/soft thresholding -> inverse DWT (S = 5) -> z[n]

The block diagram of Figure 2.2 outlines the process of wavelet filtering. The signal is first passed through the DWT with up to 5 levels of decomposition (scale S = 5). Hard thresholding is applied to the wavelet coefficients of each scale or subband using Equation (2.1), where T is precomputed from the average level of the DWT coefficients. Following hard thresholding, the inverse DWT is applied to the coefficients, in which the higher amplitude coefficients generated by noise have been replaced with zeros. The efficiency of the method depends on the suitability of the wavelet function, the level of wavelet decomposition and the optimality of the threshold for the irregular dynamics of HRV.

2.3.1.2 DWT with linear Interpolation

Another approach to removing ectopic beats is wavelet filtering combined with interpolation. Ectopic beats are first identified and corrected by a one-level DWT, and then two beat cycles are low pass interpolated to create a smooth signal. Ectopic beats are typically beats with a shorter cardiac cycle followed by a longer cardiac cycle. The ectopic beats are thus replaced by linearly interpolated samples, as expressed in Equation (2.2):

          N-1
X̂[k]  =   Σ   a[n] X[k - n]                                (2.2)
          n=0

where N = 17 and the a[n] terms are the coefficients of a symmetrical FIR filter which allows the original data to pass through unchanged


and interpolates between the samples so that the mean square error between the interpolated values and their ideal values is minimized.
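A simplified version of this correction can be sketched as follows. Plain linear interpolation via np.interp stands in for the 17-tap FIR interpolator of Equation (2.2), and the median-deviation detector with its 20% tolerance is an assumption introduced for the example.

```python
import numpy as np

def replace_ectopic_linear(rr, tolerance=0.2):
    """Replace suspected ectopic intervals by linear interpolation.

    A beat is flagged when it deviates from the median of the series by
    more than `tolerance` (as a fraction of the median); flagged samples
    are then linearly interpolated from the surrounding normal beats.
    """
    rr = np.asarray(rr, dtype=float)
    med = np.median(rr)
    bad = np.abs(rr - med) > tolerance * med     # short/long ectopic pair
    idx = np.arange(len(rr))
    clean = rr.copy()
    clean[bad] = np.interp(idx[bad], idx[~bad], rr[~bad])
    return clean

rr = np.array([0.80, 0.81, 0.79, 0.40, 1.20, 0.80, 0.81, 0.80])
clean = replace_ectopic_linear(rr)   # indices 3 and 4 are re-interpolated
```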

2.3.1.3 Rank order filter

A popular scheme for dealing with image noise is the median filter (MF). Ioannis Pitas and Anastasios (1992) presented a median filtering approach for effective noise removal in images. Its widespread use is based on its simplicity, speed and excellent edge preservation properties. The advantages of the MF are its low computational complexity and good results at low noise density. However, its performance deteriorates as the noise density or the data complexity increases; at high noise density it may not be able to remove the noise, since the median itself can be a noisy value. To eliminate this problem, Hwang and Haddad (1995) proposed two new adaptive median filter algorithms. The first, the ranked-order based adaptive median filter (RAMF), is based on a two-level test: the first level tests for the presence of residual impulses in the MF output, and the second level tests whether the center element itself is corrupted by an impulse. The second, the impulse size based adaptive median filter (SAMF), is based on detecting the size of the impulse noise.

An adaptive filter changes its behavior based on the statistical characteristics of the signal, so its performance is usually superior to that of its non-adaptive counterparts. The adaptive structure of this filter ensures that most of the impulse noise is detected even at a high noise level, provided that the window size is large enough. Here, the noise elements are replaced by the median, while the remaining pixels are left unaltered. The expansion of the window size in the adaptive median filter is governed by the criterion of whether the median is noisy or not. This criterion is not appropriate when the noise density is moderate or high: the elements processed by this filter are reused again and again, which degrades the quality of the restored signal.

To avoid these problems of the adaptive median filter (AMF), the adaptive rank order filter (AROF) was proposed. This filter expands the window only if all the elements within the window are noisy. The AROF algorithm addresses two problems encountered in other methods: the blurring of information with a large window size and poor noise removal with a small window size.

2.3.1.4 Adaptive rank order filter

The adaptive rank order filter (AROF) was proposed for image denoising by Cheng-Hsiung Hsieh and Po-Chin Huang (2009). Two types of adaptation are incorporated: adaptive filtering output and adaptive window size. For the filtering output, the output may be a noise-free median or a noise-free non-median value, which is then used to replace the noisy element in the window. As for the window size, the window expands when all the elements within the current window are noisy. The filter proposed in this work adds adaptive threshold conditions to the adaptability of the window size and the non-median filtering. The adaptive threshold condition tracks the non-stationary trend present in the HRV time series; the thresholds are updated based on the non-noisy values of the previous window.
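The window-expansion idea behind these adaptive filters can be sketched for a one-dimensional RR series as follows. This is an illustrative simplification, not the AROF of Hsieh and Huang: the impulse test against the global median and the 20% tolerance are assumptions made for the example.

```python
import numpy as np

def adaptive_median_1d(x, max_half=4, tolerance=0.2):
    """Minimal 1-D adaptive median filter for an RR series (illustrative).

    For each sample, start with a 3-point window and keep expanding it
    while the window median itself still looks like an impulse (deviates
    from the global median by more than `tolerance` as a fraction).  Only
    impulse-like samples are replaced; normal samples are left unaltered.
    """
    x = np.asarray(x, dtype=float)
    g = np.median(x)
    out = x.copy()
    for i in range(len(x)):
        if abs(x[i] - g) <= tolerance * g:
            continue                       # not an impulse: leave unaltered
        for h in range(1, max_half + 1):   # expand window: 3, 5, 7, ... points
            lo, hi = max(0, i - h), min(len(x), i + h + 1)
            m = np.median(x[lo:hi])
            if abs(m - g) <= tolerance * g:
                out[i] = m                 # window median is noise-free
                break
    return out

rr = np.array([0.80, 0.81, 0.79, 1.60, 0.80, 0.81, 0.79, 0.80])
filtered = adaptive_median_1d(rr)   # only the 1.60 s impulse is replaced
```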

2.3.2 Non-stationary Trend

A trend is an intrinsically fitted monotonic function, or a function in which there can be at most one extremum within a given data span. Here, "a given data span" could be the whole length of the data or a part of it. Detrending is the operation of removing the trend; the variability is the residue of the data after the removal of the trend within a given data span. HRV spectral measures employ techniques that assume weak stationarity of the signal. If part of the signal in the selected analysis window exhibits significant changes in mean or variance, the HRV estimate can no longer be trusted. A cursory analysis of any real HRV signal reveals that shifts in the mean or variance are a frequent occurrence (Gari et al 2007). For this reason it is common practice to detrend the signal prior to calculating a metric. However, this detrending should not remove any changes in variability over a stationary scale change, or any changes in the spectral distribution of the component frequencies. It is not only illogical to attempt to calculate a metric that assumes stationarity over the window of interest in such circumstances; it is also unclear what the meaning of a metric taken over segments of differing autonomic tone could be.

2.3.2.1 Predetermined Trend and Detrending

The most common trend is the simple trend, a straight line fitted to the data, and the most common detrending process consists of removing a straight-line best fit, yielding a zero-mean residue. Such a trend may be suitable in a purely linear and stationary world; however, the approach may be illogical and physically meaningless for real-world applications such as HRV analysis. A linearly fitted trend makes little sense for an underlying mechanism that is likely to be nonlinear and non-stationary.

Another commonly used trend is the result of a moving mean of the data. A moving mean requires a predetermined time scale over which to carry out the averaging; such a predetermined time scale has little rational basis for non-stationary processes, where the local time scale is unknown a priori. More complicated trend extraction methods, such as regression analysis or Fourier-based filtering, are often based on stationarity and linearity assumptions, so one faces a similar difficulty in justifying their usage. Even when the trend calculated from a nonlinear regression happens, fortuitously, to fit the data well, there is no justification for selecting a time-independent regression formula and applying it globally to non-stationary processes.

2.3.2.2 Smoothness Prior based Detrending

A better detrending procedure was presented by Tarvainen et al (2001) to remove the non-stationary mean. The approach is based on smoothness priors regularization. The RR series can be considered to consist of two components, z = z_stat + z_trend, where z_stat is the nearly stationary HRV series of interest and z_trend is the low-frequency aperiodic trend component. The trend component is modeled with a linear observation model, z_trend = Hθ + v, where H is the observation matrix, θ contains the regression parameters and v is the observation error. The regularized least squares estimate is

    θ̂_λ = arg min_θ { ||Hθ - z||² + λ² ||D_d Hθ||² }

where λ is the regularization parameter and D_d denotes the discrete approximation of the d-th derivative operator. The estimated trend to be removed is ẑ_trend = Hθ̂_λ. The cutoff frequency of the filter decreases when the regularization parameter λ is increased; the smoothing parameter should be selected in such a way that the spectral components of interest are not significantly affected by the detrending.
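With the common choice H = I in the smoothness priors model, the trend estimate reduces to ẑ_trend = (I + λ²D₂ᵀD₂)⁻¹z, where D₂ is the second-difference operator. A minimal sketch under that assumption (λ = 10 and the synthetic drift are illustrative choices):

```python
import numpy as np

def smoothness_priors_detrend(z, lam=10.0):
    """Smoothness-priors detrending with H = I.

    Solves min ||z - z_trend||^2 + lam^2 ||D2 z_trend||^2, giving
    z_trend = (I + lam^2 D2' D2)^{-1} z.  A larger lam gives a smoother
    trend, i.e. a lower cutoff frequency.
    """
    z = np.asarray(z, dtype=float)
    n = len(z)
    # second-difference matrix D2 of shape (n-2, n)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(n) + lam**2 * (D2.T @ D2), z)
    return z - trend, trend

# synthetic series: slow linear drift plus fast variability
t = np.arange(100)
z = 0.01 * t + 0.05 * np.sin(2 * np.pi * t / 4.0)
stat, trend = smoothness_priors_detrend(z, lam=10.0)
# the drift ends up in `trend`; the fast oscillation stays in `stat`
```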

In general, there is no foundation for the contention that the underlying mechanisms should follow the selected simplistic, or even sophisticated, functional forms, except in cases where the physical processes are completely known. If the functional form of the trend is not preselected, the process of determining the trend has to be adaptive in order to accommodate data from non-stationary and nonlinear processes. The recently developed empirical mode decomposition (EMD) method fits these requirements.

2.4 CONVENTIONAL MEASURES OF HEART RATE VARIABILITY

2.4.1 Time Domain Measures

Variations in heart rate may be evaluated by a number of methods, the simplest of which are the time domain methods. With these methods, either the heart rate at each point in time or the intervals between successive normal complexes are determined. In a continuous electrocardiograph (ECG) record, each QRS complex is detected, and the so-called normal-to-normal (NN) intervals (that is, all intervals between adjacent QRS complexes resulting from sinus node depolarization) or the instantaneous heart rate is determined. Simple time domain variables that can be calculated include the mean NN interval, the mean heart rate, the difference between the longest and shortest NN intervals, the difference between night and day heart rate, etc.

2.4.1.1 Statistical measures

From a series of instantaneous heart rates or cycle intervals, particularly those recorded over longer periods, traditionally 24 hours, more complex statistical time domain measures can be calculated. These may be divided into two classes: (a) those derived from direct measurements of the NN intervals or the instantaneous heart rate, and (b) those derived from the differences between NN intervals. These variables may be derived from analysis of the total electrocardiograph recording or may be calculated using smaller segments of the recording period. The latter method allows comparisons of HRV to be made during varying activities, e.g. rest, sleep, etc.


The simplest variable to calculate is the standard deviation of the NN intervals (SDNN), i.e., the square root of the variance. Since the variance is mathematically equal to the total power of the spectral analysis, SDNN reflects all the cyclic components responsible for variability in the period of recording. In many studies, SDNN is calculated over a 24-hour period and thus encompasses both the short-term high frequency variations and the lowest frequency components seen in a 24-hour period. As the period of monitoring decreases, SDNN estimates shorter and shorter cycle lengths. It should also be noted that the total variance of HRV increases with the length of the analyzed recording. Thus, on arbitrarily selected ECGs, SDNN is not a well defined statistical quantity because of its dependence on the length of the recording period, and in practice it is inappropriate to compare SDNN measures obtained from recordings of different durations. The durations of the recordings used to determine SDNN values should therefore be standardized.

Other commonly used statistical variables calculated from segments of the total monitoring period include SDANN, the standard deviation of the average NN intervals calculated over short periods, usually 5 minutes, which estimates the changes in heart rate due to cycles longer than 5 minutes, and the SDNN index, the mean of the 5-minute standard deviations of the NN intervals calculated over 24 hours, which measures the variability due to cycles shorter than 5 minutes.

The most commonly used measures derived from interval differences include RMSSD, the square root of the mean squared differences of successive NN intervals; NN50, the number of differences between successive NN intervals greater than 50 ms; and pNN50, the proportion derived by dividing NN50 by the total number of NN intervals. All these measurements of short-term variation estimate high frequency variations in heart rate and are therefore highly correlated.
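These statistical measures follow directly from their definitions; a minimal sketch (the eight-beat NN series is synthetic):

```python
import numpy as np

def time_domain_measures(nn_ms):
    """SDNN, RMSSD, NN50 and pNN50 from NN intervals in milliseconds."""
    nn = np.asarray(nn_ms, dtype=float)
    diff = np.diff(nn)
    sdnn = np.std(nn, ddof=1)                 # standard deviation of NN intervals
    rmssd = np.sqrt(np.mean(diff**2))         # root mean square of successive differences
    nn50 = int(np.sum(np.abs(diff) > 50.0))   # successive differences > 50 ms
    pnn50 = 100.0 * nn50 / len(diff)          # as a percentage of all differences
    return sdnn, rmssd, nn50, pnn50

nn = [800, 810, 790, 850, 795, 805, 870, 800]
sdnn, rmssd, nn50, pnn50 = time_domain_measures(nn)
```

(Some definitions divide pNN50 by the total number of NN intervals rather than the number of differences; the choice barely matters for long recordings.)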


2.4.1.2 Geometrical measures

The series of NN intervals can also be converted into a geometric pattern, such as the sample density distribution of NN interval durations, the sample density distribution of differences between adjacent NN intervals, the Lorenz plot of NN or RR intervals, etc., and a simple formula is then used that judges the variability on the basis of the geometric and/or graphic properties of the resulting pattern. Three general approaches are used in geometric methods: (a) a basic measurement of the geometric pattern (e.g., the width of the distribution histogram at a specified level) is converted into a measure of HRV; (b) the geometric pattern is interpolated by a mathematically defined shape (e.g., approximation of the distribution histogram by a triangle, or approximation of the differential histogram by an exponential curve) and the parameters of this mathematical shape are then used; and (c) the geometric shape is classified into one of several pattern-based categories that represent different classes of HRV (e.g., elliptic, linear and triangular shapes of Lorenz plots).

The major advantage of geometric methods in clinical practice is their relative insensitivity to the analytical quality of the series of NN intervals. The major disadvantage is the need for a reasonable number of NN intervals to construct the geometric pattern. In practice, recordings of at least 20 minutes (but preferably 24 hours) should be used to ensure the correct performance of the geometric methods; i.e., the current geometric methods are inappropriate for assessing short-term changes in HRV.
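As an example of approach (a), the HRV triangular index divides the total number of NN intervals by the height (mode bin count) of the NN histogram, conventionally binned at 1/128 s = 7.8125 ms. A minimal sketch on a synthetic distribution:

```python
import numpy as np

def hrv_triangular_index(nn_ms, bin_width=7.8125):
    """HRV triangular index: total NN count divided by the height of the
    NN-interval histogram, using the conventional 1/128 s bin width."""
    nn = np.asarray(nn_ms, dtype=float)
    edges = np.arange(nn.min(), nn.max() + bin_width, bin_width)
    counts, _ = np.histogram(nn, bins=edges)
    return len(nn) / counts.max()

# synthetic NN distribution: 50 beats at 800 ms, 30 at 820 ms, 20 at 780 ms
nn = np.concatenate([np.full(50, 800.0), np.full(30, 820.0), np.full(20, 780.0)])
tri = hrv_triangular_index(nn)   # 100 intervals / mode bin of 50
```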

2.4.2 Frequency domain measures

Power spectral density (PSD) analysis provides the basic information on how power (i.e., variance) is distributed as a function of frequency. Independent of the method employed, only an estimate of the true PSD of the signal can be obtained, by proper mathematical algorithms. Methods for the calculation of the PSD may be generally classified as non-parametric and parametric; in most instances, both provide comparable results. The advantages of the non-parametric methods are (a) the simplicity of the algorithm employed (the Fast Fourier Transform in most cases) and (b) the high processing speed, whilst the advantages of the parametric methods are (a) smoother spectral components that can be distinguished independently of preselected frequency bands, (b) easy post-processing of the spectrum, with automatic calculation of the low and high frequency power components and easy identification of the central frequency of each component, and (c) an accurate estimation of the PSD even on the small number of samples over which the signal is supposed to remain stationary. The basic disadvantage of the parametric methods is the need to verify the suitability of the chosen model and of its complexity (i.e., the order of the model).

Short-term recordings: Three main spectral components are distinguished in a spectrum calculated from short-term recordings of 2 to 5 minutes: the very low frequency (VLF), low frequency (LF) and high frequency (HF) components. The distribution of the power and the central frequencies of LF and HF are not fixed but may vary in relation to changes in the autonomic modulation of the heart period.

Measurements of the VLF, LF and HF power components are usually made in absolute values of power (ms²), but LF and HF may also be measured in normalized units (n.u.), which represent the relative value of each power component in proportion to the total power minus the VLF component. The representation of LF and HF in n.u. emphasizes the controlled and balanced behavior of the two branches of the autonomic nervous system. Moreover, the normalization tends to minimize the effect of changes in total power on the values of the LF and HF components. Nevertheless, n.u. should always be quoted together with the absolute values of the LF and HF power in order to describe completely the distribution of power in the spectral components.
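The computation of band powers and normalized units can be sketched on an evenly resampled RR series. This is a minimal sketch under stated assumptions: the 4 Hz resampling rate, the plain FFT periodogram and the synthetic LF/HF oscillations are choices made for the example, and in practice the irregularly sampled RR series must first be interpolated onto an even grid.

```python
import numpy as np

def band_powers(rr_resampled, fs=4.0):
    """VLF/LF/HF band powers and normalized units from an evenly resampled
    RR series (sampling rate fs), via a simple one-sided FFT periodogram."""
    x = np.asarray(rr_resampled, dtype=float)
    x = x - x.mean()
    n = len(x)
    psd = np.abs(np.fft.rfft(x))**2 / (fs * n)     # periodogram estimate
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    df = f[1] - f[0]
    def power(lo, hi):
        return psd[(f >= lo) & (f < hi)].sum() * df
    vlf = power(0.003, 0.04)
    lf = power(0.04, 0.15)
    hf = power(0.15, 0.40)
    total = vlf + lf + hf
    lf_nu = 100.0 * lf / (total - vlf)             # normalized units
    hf_nu = 100.0 * hf / (total - vlf)
    return vlf, lf, hf, lf_nu, hf_nu

fs = 4.0
t = np.arange(0, 300, 1.0 / fs)
# synthetic RR fluctuation: a 0.1 Hz (LF) and a weaker 0.25 Hz (HF) oscillation
x = 0.03 * np.sin(2 * np.pi * 0.1 * t) + 0.01 * np.sin(2 * np.pi * 0.25 * t)
vlf, lf, hf, lf_nu, hf_nu = band_powers(x, fs)
```

By construction lf_nu + hf_nu = 100, which illustrates why the n.u. values must be quoted together with the absolute powers.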

Long-term recordings: Spectral analysis may also be used to analyze the sequence of NN intervals over an entire 24-hour period. The result then includes an ultra-low frequency (ULF) component in addition to the VLF, LF and HF components. The slope of the 24-hour spectrum can also be assessed on a log-log scale by linear fitting of the spectral values. The problem of stationarity is frequently discussed in connection with long-term recordings. If the mechanisms responsible for heart period modulations of a certain frequency remain unchanged during the whole period of recording, the corresponding frequency component of HRV may be used as a measure of these modulations; if the modulations are not stable, the interpretation of the results of frequency analysis is less well defined. It should be remembered that the components of HRV measure the degree of autonomic modulation rather than the level of autonomic tone, and averages of modulations do not represent an average level of tone.

2.4.3 Non-linear Measures

Non-linear phenomena are determined by complex interactions of haemodynamic, electrophysiological and humoral variables, as well as by autonomic and central nervous regulation. The parameters that have been used to measure the non-linear properties of HRV include the 1/f scaling of Fourier spectra, the Hurst scaling exponent and coarse graining spectral analysis. For data representation, Poincaré sections, low-dimension attractor plots, singular value decomposition and attractor trajectories have been used. For other quantitative descriptions, the D2 correlation dimension, Lyapunov exponents and the Kolmogorov entropy have been employed. While various methods have been proposed to gain deeper insight into the nature of the complex fluctuations of the heart rate, none of them has so far provided a complete characterization of actual HRV. One of the reasons for this might be that the exact mechanism behind heart rate complexity is still not completely understood.

In this research, some of the prominent nonlinear measures are implemented and studied for their efficiency in discriminating the HRV signals of healthy and pathological subjects. The prominent nonlinear measures are:

1. Fractal measures: to assess the self-affinity of heartbeat fluctuations over multiple time scales.

2. Entropy measures: to assess the regularity/irregularity or randomness of heartbeat fluctuations.

3. Poincaré plot representation: to assess the heartbeat dynamics based on a simplified phase-space embedding.

4. Principal dynamic modes: to study the effect of the sympathetic and parasympathetic activities of the ANS on HRV.
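For instance, the Poincaré plot of item 3 is usually quantified by the descriptors SD1 and SD2; a minimal sketch (the short alternating RR series is synthetic):

```python
import numpy as np

def poincare_sd1_sd2(rr):
    """SD1/SD2 descriptors of the Poincare plot of successive RR intervals.

    SD1 measures short-term variability (width of the point cloud
    perpendicular to the line of identity), SD2 long-term variability
    (spread along it)."""
    rr = np.asarray(rr, dtype=float)
    x, y = rr[:-1], rr[1:]                     # (RR_n, RR_{n+1}) pairs
    sd1 = np.std((y - x) / np.sqrt(2.0), ddof=1)
    sd2 = np.std((y + x) / np.sqrt(2.0), ddof=1)
    return sd1, sd2

# an alternating series: strong beat-to-beat variation, little slow drift
rr = np.array([0.80, 0.82, 0.79, 0.83, 0.78, 0.84, 0.80, 0.81])
sd1, sd2 = poincare_sd1_sd2(rr)   # SD1 dominates for this series
```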

To study the above conventional nonlinear measures, long duration HRV signals from two different groups of subjects were analyzed:

i) 12 adults without any clinical evidence of heart disease, and

ii) 12 subjects with congestive heart failure (CHF) pathology.

Data for the two groups were collected from the widely used biomedical website http://www.physionet.org. The healthy data were drawn from the Fantasia database and the CHF data from the BIDMC-CHF database. The details of the databases are as follows:


Fantasia Database: The rigorously screened healthy subjects

underwent 120 minutes (2 hours) of continuous supine resting while

continuous electrocardiographic (ECG) signals were collected. All subjects

remained in a resting state in sinus rhythm while watching the movie

‘Fantasia’ (Disney 1940) to help maintain wakefulness. The ECG signals

were digitized at 250 Hz. Each heartbeat was annotated using an automated

arrhythmia detection algorithm and each beat annotation was verified by

visual inspection.

Congestive Heart Failure Database (CHF database): This

database includes long-term ECG recordings from subjects with severe

congestive heart failure. The individual recordings are about 20 hours in duration each, and contain ECG signals sampled at 250 samples per second with 12-bit resolution over a range of ±10 millivolts. The original analog

recordings were made at Boston's Beth Israel Hospital, using ambulatory

ECG recorders.

2.4.3.1 Fractal Measures

Power-law correlation

Kobayashi and Musha (1982) first reported the frequency

dependence of the power spectrum of HRV. The slope of the regression line

of log power versus log frequency (the 1/f relation), usually calculated in the 10^-4 to 10^-2 Hz frequency range, corresponds to the negative scaling exponent β (Equation 2.3) and provides an index for long-term scaling characteristics (Saul et al 1987).

S(f) ∝ (1/f)^β (2.3)


This broadband spectrum, characterizing mainly slow HR fluctuations, indicates a fractal-like process with long-term dependence (Lombardi 2000). Saul et al (1987) found that β is close to -1 in healthy young men, but Bigger et al (1996) reported an altered regression line (β ≈ -1.15) in patients after MI.
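As an illustrative sketch, the exponent β above can be estimated as the negative least-squares slope of log power against log frequency. The synthetic exact 1/f spectrum below is an assumption used only to exercise the routine; real spectra would come from an HRV power spectral estimate.

```python
import math

def powerlaw_beta(freqs, power):
    """Scaling exponent beta of S(f) ~ (1/f)^beta, taken as the negative
    least-squares slope of log10(power) versus log10(frequency)."""
    xs = [math.log10(f) for f in freqs]
    ys = [math.log10(p) for p in power]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# Synthetic spectrum over the 10^-4 .. 10^-2 Hz band with exact 1/f decay.
freqs = [1e-4 * 10 ** (i / 20.0) for i in range(41)]
power = [1.0 / f for f in freqs]
beta = powerlaw_beta(freqs, power)   # beta is 1.0 for an exact 1/f spectrum
```

For real data, the regression would be restricted to the frequency band mentioned in the text, since β is a long-term scaling index.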

Fractal forms are composed of subunits that resemble the structure

of the overall object. The self-similarity is sometimes not visually obvious but

there may be numerical or statistical measures that are preserved across

scales, a scale-invariant property known as statistical self-similarity.

Figure 2.3 Statistical Self-Similarity (figure from the PhysioNet tutorial, www.physionet.org)

Power laws describe dynamics that have a similar pattern at

different scales with many small variations, and fewer and fewer larger


variations; and the pattern of variation is statistically similar regardless of the

size of the variation. Magnifying or shrinking the scale of the signal reveals

the same relationship that defines the dynamics of the signal as shown in

Figure 2.3. Systems that have power-law correlations usually have certain

scaling properties related to fractal and nonlinear mechanisms, and are

described by homogeneous functions. The physical meaning of a

homogeneous function is that the value of the function at a new scale is

simply related to the value of the function at the original scale by a constant

scale factor. Based on this, the fractal signals are distinguished as follows:

Monofractals: They are homogeneous signals with the same

scaling properties throughout the entire period. They can be characterized by

a single global exponent, which is termed the Hurst exponent, H.

Multifractals: They are inhomogeneous and can be decomposed

into many subsets and are characterized by different local Hurst exponents, h.

These exponents quantify the local singular behavior and thus relate to the

local scaling of the time series. Since the local scaling properties change with

time, multifractal signals require many exponents to fully characterize their

nonstationary properties. The multifractal formalism describes the statistical

properties of some measure in terms of its distribution of the singularity

spectrum D(h) corresponding to its singularity strength h.

Limitations: Stationary conditions, periodicity and the need for

large datasets are required; artifacts and patients movements influence

spectral components.

HRV analysis based on nonlinear fractal dynamics was performed

by Goldberger and West (1987). It was suggested that self-similar (fractal)

scaling may underlie the 1/f –like spectra (Kobayashi and Musha 1982) seen


in multiple systems like heart rate variability. They proposed that this fractal

scale invariance may provide a mechanism for the ‘constrained randomness’

underlying physiological variability and adaptability. Later, Goldberger et al

(1988) reported that patients prone to high risk of sudden cardiac death

showed evidence of nonlinear HR dynamics, including abrupt spectral

changes and sustained low frequency (LF) oscillations. At a later date, they

suggested that a loss of complex physiological variability could occur under

certain pathological conditions such as reduced HR dynamics before sudden

death and ageing (Goldberger 1991).

De-trended fluctuation analysis

This method is based on a modified random walk analysis and was

introduced and applied to physiological time series by Peng et al (1995). It

quantifies the presence or absence of fractal correlation properties in non-

stationary time series. DFA usually involves the estimation of a short-term

fractal scaling exponent α1 over the range of 4-16 heartbeats and a long-term scaling exponent α2 over the range of 16-64 heartbeats (Peng et al

1995). DFA was developed to quantify the fluctuations on multi-length scales.

The self-similarity occurring over a large range of time scales can be defined

for a selected time scale with this method (Mäkikallio et al 1999). Healthy

subjects revealed a scaling exponent of approximately one, thereby indicating

fractal-like behaviour. Patients with cardiovascular disease showed reduced

scaling exponents and suggest a loss of fractal-like HR dynamics (Mäkikallio

et al 1999, Huikuri et al 2000).
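The DFA procedure described above (integrate the mean-subtracted series, detrend it in boxes, and fit the log-log fluctuation plot) can be sketched as follows. This is a minimal illustration with non-overlapping boxes and linear detrending; the Gaussian RR-like series is an assumption used only to test the routine.

```python
import math
import random

def dfa_alpha(rr, scales):
    """Detrended fluctuation analysis: integrate the mean-subtracted
    series, detrend it linearly in non-overlapping boxes of size n, and
    return alpha, the slope of log F(n) versus log n."""
    mean = sum(rr) / len(rr)
    profile, s = [], 0.0
    for r in rr:                       # cumulative-sum profile
        s += r - mean
        profile.append(s)
    log_n, log_f = [], []
    for n in scales:
        sq, count = 0.0, 0
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            xs = list(range(n))
            mx, my = (n - 1) / 2.0, sum(seg) / n
            b = (sum((x - mx) * (v - my) for x, v in zip(xs, seg))
                 / sum((x - mx) ** 2 for x in xs))
            a = my - b * mx            # per-box linear trend a + b*x
            sq += sum((v - (a + b * x)) ** 2 for x, v in zip(xs, seg))
            count += n
        log_n.append(math.log(n))
        log_f.append(0.5 * math.log(sq / count))   # log of RMS fluctuation
    mx = sum(log_n) / len(log_n)
    my = sum(log_f) / len(log_f)
    return (sum((x - mx) * (v - my) for x, v in zip(log_n, log_f))
            / sum((x - mx) ** 2 for x in log_n))

random.seed(1)
rr_like = [random.gauss(0.8, 0.05) for _ in range(2000)]
alpha = dfa_alpha(rr_like, [4, 8, 16, 32, 64])
# Uncorrelated fluctuations give alpha near 0.5; 1/f-like data near 1.0.
```

Fitting the short-scale (4-16) and long-scale (16-64) ranges separately would give the α1 and α2 exponents mentioned in the text.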

Multifractal analysis

Multifractal analysis describes signals that are more complex than

those fully characterized by a monofractal model. Ivanov et al (1999)


demonstrated that healthy HRV is even more complex than previously

suspected and requires a multifractal representation that uses a large number of

local scaling exponents to fully characterize the scaling properties. Ivanov

et al (1999) found a loss in HRV multifractality in patients suffering from

congestive heart failure (CHF). Multifractality in heartbeat dynamics

indicates the involvement of coupled cascades of feedback loops in a system

operating far from equilibrium (Galaska 2008). The multifractal

characteristics of HRV are analyzed using Multifractal Detrended Fluctuation

Analysis (MFDFA). The method was implemented and the multifractality of

healthy and congestive heart failure HRV dynamics were studied. Power laws

describe dynamics that have a similar pattern at different scales. If the signal

f(x) is a fractal, then scaling of its amplitude must be in proportion to scaling

of its independent variable x. This property of similarity can now be

expressed in the form of a dilation equation as given in Equation (2.4).

f(x0 + λΔx) - f(x0) = λ^h(x0) [f(x0 + Δx) - f(x0)], x0 ∈ R (2.4)

where x0 is located at the beginning of the interval over which the scaling is considered, λ is the dilation coefficient, and h(x0) is the scaling or Hurst exponent.

The dilation equation can be used not only to describe self affinity,

but also monofractality and multifractality. If the scaling exponent is constant

over the signal space, the signal is called a monofractal. Monofractal signals

are thus quantified by a single global Hurst exponent, H, where 0 < H < 1.

When H = 0.5, the sequential points of the time series are

independent and therefore uncorrelated. The time series thus exhibits the

properties of a random walk. As H tends towards zero, trends are more

rapidly reversed, where there are large variations between adjacent values,


which give them an irregular look. As H approaches one, the time series

exhibits an overall relative smoothness thereby indicating the presence of

positive long-range correlations.

On the other hand, if the scaling coefficient is not a constant, then it

is called a multifractal. Multifractal signals can be decomposed into many

subsets characterized by different local Hurst exponents h, which quantify the

local singular behavior and thus relate to the local fractal properties of the

time series. The statistical properties of the different subsets characterized by

the different exponent values of h are quantified by the function D (h). Here,

D (h0) is the fractal dimension of the subset of the original time series

characterized by the local Hurst exponent h0.

This figure is courtesy of L.A.N. Amaral, H.E. Stanley et al , Physica A 270 (1999) 309-324

Figure 2.4 Singularity Spectrum

The frequency of occurrence of a given singularity strength ‘h’ is

measured by the multifractal spectrum D (h) shown in Figure 2.4. Fractal

dimension, therefore, serves as a quantifier of complexity. D takes values

between 0 and 1. Smaller D (h) means that fewer points behave with

strength h.


The singularity spectrum D (h) quantifies the degree of non-

linearity in the processes generating the output f(x) in a very compact way.

For a linear fractal process, the output of a system will have the same fractal

properties (i.e., the same type of singularities) regardless of initial conditions

or of driving forces. In contrast, non-linear fractal processes will generate

outputs with different fractal properties that depend on the input conditions or

the history of the system. That is, the output of the system over extended

periods of time will display different types of singularities.

Multifractal Detrended Fluctuation Analysis

Multifractal Detrended Fluctuation Analysis is used to investigate

the spectrum of singularity exponents based on long range power-law

correlations in heart rate variability time series and hence classifies the

various healthy and heart failure subjects. Figures 2.5 (a) and (b) present HRV

of healthy and congestive heart failure subjects. In healthy subjects, the

scaling and the singularity spectrums are nonlinear as shown in Figures 2.6

(a) and (b). In congestive heart failure subject the scaling function is less

nonlinear and the singularity spectrum was broken as shown in Figures 2.7 (a)

and (b), which represent the loss of multifractality in congestive heart failure

subjects.
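The q-order fluctuation function at the heart of MFDFA can be sketched by extending the DFA recipe: the per-box squared fluctuations are averaged with a moment order q, and the slope of the log-log plot gives the generalized Hurst exponent h(q). This is a simplified sketch (linear detrending, non-overlapping boxes, q ≠ 0); the monofractal Gaussian noise is only a test input.

```python
import math
import random

def _slope(xs, ys):
    """Least-squares slope of ys versus xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def generalized_hurst(series, scales, q):
    """h(q) via multifractal DFA: slope of log F_q(n) versus log n,
    with linear detrending in non-overlapping boxes (q != 0)."""
    mean = sum(series) / len(series)
    profile, s = [], 0.0
    for v in series:
        s += v - mean
        profile.append(s)
    log_n, log_f = [], []
    for n in scales:
        f2 = []                          # per-box squared fluctuation
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            xs = list(range(n))
            b = _slope(xs, seg)
            a = sum(seg) / n - b * (n - 1) / 2.0
            f2.append(sum((v - (a + b * x)) ** 2
                          for x, v in zip(xs, seg)) / n)
        # q-th order average of the box fluctuations
        fq = (sum(f ** (q / 2.0) for f in f2) / len(f2)) ** (1.0 / q)
        log_n.append(math.log(n))
        log_f.append(math.log(fq))
    return _slope(log_n, log_f)

random.seed(2)
noise = [random.gauss(0.0, 1.0) for _ in range(4000)]
h_neg = generalized_hurst(noise, [8, 16, 32, 64], -2)
h_pos = generalized_hurst(noise, [8, 16, 32, 64], 2)
# For a monofractal signal, h(q) is nearly flat across q (about 0.5 here).
```

A multifractal series would instead show h(q) varying with q, and the singularity spectrum D(h) follows from a Legendre transform of the scaling function derived from h(q).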

Figure 2.5 (a) Healthy young signal and (b) CHF signal (RR intervals versus number of samples)


Figure 2.6 (a) The scaling function (versus moments q) and (b) Multifractal spectrum D(h) (versus Hurst exponent h) of healthy young signal

Figure 2.7 (a) The scaling function (versus moments q) and (b) Multifractal spectrum D(h) (versus Hurst exponent h) of CHF signal

The efficiency of singularity measures in discriminating healthy and CHF subjects is 91.67% when the CHF records are approximately 20 hours in duration, as shown in Figure 2.8.


Figure 2.8 Discrimination of healthy and CHF subjects' HRV signals using the dominant Hurst value (dominant Hurst value versus record number)

The patients with nearly terminal pathology like congestive heart

failure show a significant breakdown of multifractal complexity.

Physiologically, this loss of multifractality is related to the dysfunction of the

control mechanisms regulating the heartbeat, i.e., the autonomic nervous

system. The method finds the difference between spectra of healthy and heart

failure subjects. For the latter ones, the localizations of the spectra are

changed, i.e., they are moved to higher h values. This observation suggests

that the nonlinear complexity of the heart rate appears to degrade in

characteristic ways with diseases, reducing adaptiveness to various

physiological and physical conditions of the individual.

Limitations: The efficiency of the method depends on the record

length; it requires many local and theoretically infinite exponents to fully

characterize their scaling properties.


2.4.3.2 Entropy measures

To estimate the complexity of cardiovascular dynamics, Pincus

(1991) modified the original correlation dimension and Kolmogorov entropy notions (Grassberger and Procaccia 1983a,b, Eckmann and Ruelle 1985), creating the approximate entropy (ApEn). This technique was later improved

and termed ‘sample entropy’ (SampEn) by Richman and Moorman (2000)

and reduces the superimposed bias within the original method. A very

promising way to quantify complexity over multiple scales was introduced by

Costa et al (2002, 2005). The apparent loss of multiscale complexity in life-

threatening conditions (Norris et al 2008) suggests a clinical importance of

this multiscale complexity measure.

Approximate entropy/sample entropy

The ApEn represents a simple index for the overall complexity and

predictability of time series. ApEn quantifies the likelihood that runs of

patterns, which are close, remain similar for subsequent incremental

comparisons (Ho et al 1997). High values of ApEn indicate high irregularity

and complexity in time series data. For healthy subjects, ApEn values range

from approximately 1.0 to 1.2 and for post-infarction patients ApEn values

are approximately 1.2 (Mäkikallio et al 1996, Ho et al 1997).

Limitations of ApEn: stationary and noise-free data are required; an inherent bias exists due to the counting of self-matches; it depends on the record length; it lacks relative consistency; it evaluates regularity on one scale only; outliers (missed beat detections, artifacts) may affect the entropy values.

SampEn, improving ApEn, quantifies the conditional probability

that two sequences of ‘m’ consecutive data points that are similar to each


other (within a given tolerance r) will remain similar when one consecutive

point is included. Self-matches are not included in calculating the probability.

Lake et al (2002) described a reduction in SampEn of neonatal HR prior to the

clinical diagnosis of sepsis and sepsis-like illness. The SampEn was found to

be significantly reduced before the onset of atrial fibrillation (Tuzcu et al

2006).
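The SampEn definition above (the negative logarithm of the conditional probability that templates matching at length m still match at length m + 1, excluding self-matches) can be sketched directly. This is a straightforward O(N²) illustration; the short regular and random test series are assumptions used only to exercise it.

```python
import math
import random

def sampen(series, m=2, r=0.2):
    """Sample entropy: -ln(A / B), where B counts pairs of templates of
    length m within tolerance and A counts pairs of length m + 1.
    Self-matches are excluded; r is a fraction of the standard deviation."""
    n = len(series)
    mean = sum(series) / n
    sd = (sum((x - mean) ** 2 for x in series) / n) ** 0.5
    tol = r * sd

    def matches(length):
        c = 0
        for i in range(n - m):          # same template count for both lengths
            for j in range(i + 1, n - m):
                if all(abs(series[i + k] - series[j + k]) <= tol
                       for k in range(length)):
                    c += 1
        return c

    b = matches(m)
    a = matches(m + 1)
    return float('inf') if a == 0 or b == 0 else -math.log(a / b)

# A perfectly regular series has near-zero sample entropy, while an
# irregular one yields a substantially larger value.
regular = [1.0, 2.0] * 50
random.seed(3)
irregular = [random.random() for _ in range(200)]
```

Production implementations use template counting tricks to handle the long records discussed in the text, but the probability being estimated is the same.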

Limitations of SampEn: stationary condition is required; higher

pattern length requires an increased number of data points; evaluates

regularity on one scale only; outliers (missed beats, artifacts) may affect the

entropy values.

Multiscale entropy

Biological systems are likely to present structures on multiple

spatio-temporal scales. Multiscale entropy (MSE) assesses multiple time

scales to measure a system’s complexity. The main advantage of MSE is its

ability to measure complexity according to its definition, 'a meaningful structural richness', and its applicability to signals of finite length (Costa et al

2005). The MSE method demonstrated that healthy HRV is more complex

than pathological HRV. Costa et al (2002) found that pathological dynamics

associated with either increased regularity/decreased variability or with

increased variability are both characterized by a reduction in complexity due

to the loss of correlation properties. Costa et al (2002) reported the best

discrimination between pathological (CHF) and healthy HR signals on scale

5. Healthy subjects show structural richness, with higher sample entropy values (e.g., 2.04), while congestive heart failure subjects show more regularity, with lower sample entropy values (e.g., 0.67). The MSE method gives


distinctive signatures for healthy complexity and for pathological complexity

as shown in Figure 2.9 (a) and (b).

The congestive heart failure subject shows a significant reduction in

multiscale entropy. Physiologically, this loss of complexity is related to the

less adaptive nature of the control mechanisms regulating the heartbeat, i.e., the autonomic nervous system. The method finds the difference between the

complexity of healthy and heart failure subjects. This observation suggests

that the multiscale entropy of the heart rate appears to degrade in

characteristic ways with disease, reducing the adaptive capability of the

individual. The efficiency of entropy measures in discriminating healthy and

CHF subjects is 99% as shown in Figure 2.10, but one has to be cautious in

interpreting these signatures in terms of the underlying dynamics. In

particular, different dynamical systems can exhibit the same signatures and

that similar systems may have different signatures depending on the time

scales involved.
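The coarse-graining step that underlies MSE can be sketched as follows; the sample entropy computed at each scale is a separate routine, and the short `rr` list here is purely illustrative.

```python
def coarse_grain(series, scale):
    """Coarse-graining step of multiscale entropy: average the series
    over non-overlapping windows of `scale` consecutive points.
    Sample entropy is then computed on each coarse-grained series to
    build the MSE curve across scales."""
    return [sum(series[i:i + scale]) / scale
            for i in range(0, len(series) - scale + 1, scale)]

# Illustrative toy series: scale 1 returns the series itself,
# larger scales progressively smooth it.
rr = [1, 2, 3, 4, 5, 6]
```

Plotting sample entropy against scale (as in Figure 2.9) then gives the complexity "signature" the text describes.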

Figure 2.9 (a) MSE signature of healthy young signal and (b) MSE signature of CHF signal (sample entropy versus scale)


Figure 2.10 Discrimination of healthy and CHF subjects' HRV signals using multiscale sample entropy (sample entropy versus record number)

Limitations: stationary condition is required; outliers (missed beat

detections, artifacts) may affect the entropy values; the consistency of MSE

will be progressively lost as the number of data points decreases.

2.4.3.3 Poincare plot representation

The Poincare plot analysis (PPA) is a quantitative visual technique,

whereby the shape of the plot is categorized into functional classes (Weiss

et al 1994, Kamen et al 1996, Brennan et al 2002) and provides detailed beat-

to-beat information on the behaviour of the heart. Usually, Poincare plots are

applied for a two-dimensional graphical and quantitative representation

(scatter plots), where RRn is plotted against RRn+1. Most commonly, three

indices are calculated from Poincare plots: the standard deviation of the short-

term RR-interval variability (minor axis of the cloud, SD1), the standard


deviation of the long-term RR-interval variability (major axis of the

cloud, SD2) and the axes ratio (SD1/SD2) (Kamen and Tonkin 1995,

Brennan et al 2002). For the healthy heart, PPA shows a cigar-shaped

cloud of points oriented along the line of identity (Figure 2.11(a)). For congestive heart failure subjects, the cloud is much more scattered (Figure 2.11(b)).

These indices are correlated with linear indices. Laitio et al (2002) showed

that an increased SD1/SD2 ratio was the most powerful predictor of post-operative ischemia. Mäkikallio (1998) found SD2 values of approximately 125 ms in healthy subjects and approximately 85 ms in post-infarction patients with ventricular tachyarrhythmia.
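SD1 and SD2 can be computed by rotating the (RRn, RRn+1) scatter plot 45 degrees, so that the minor and major axes of the point cloud align with the coordinate axes; a minimal sketch follows. The short RR list is illustrative only; real analyses use long recordings.

```python
import math

def poincare_sd(rr):
    """SD1 and SD2 of the Poincare plot: standard deviations of the
    point cloud across and along the line of identity (a 45-degree
    rotation of the (RRn, RRn+1) scatter plot)."""
    pairs = list(zip(rr[:-1], rr[1:]))
    d1 = [(b - a) / math.sqrt(2) for a, b in pairs]  # minor (short-term) axis
    d2 = [(b + a) / math.sqrt(2) for a, b in pairs]  # major (long-term) axis

    def sd(v):
        m = sum(v) / len(v)
        return (sum((u - m) ** 2 for u in v) / len(v)) ** 0.5

    return sd(d1), sd(d2)

# Illustrative RR series in seconds; beat-to-beat alternation makes the
# short-term axis dominate here.
rr = [0.80, 0.82, 0.78, 0.84, 0.79, 0.83, 0.81, 0.80]
sd1, sd2 = poincare_sd(rr)
ratio = sd1 / sd2
```

The SD1/SD2 ratio computed this way is the index plotted per record in Figure 2.12.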

Figure 2.11 (a) Poincare plot of healthy signal and (b) Poincare plot of CHF signal (RRn+1 versus RRn)

The discrimination efficiency of the SD1/SD2 ratio is much lower than that of the other measures, as shown in Figure 2.12. The statistical significance of the singularity, complexity and Poincare plot measures is given in Appendix 3.


Figure 2.12 Discrimination of healthy and CHF subjects' HRV signals using Poincare plots (SD1/SD2 ratio versus record number)

Limitations: SD1 and SD2 are dependent on other time-domain measures, and about 20 hours of data are required for CHF signals.

2.4.3.4 Principal Dynamic Modes

Experimental evidence suggests that myocardial ischemia, acute

myocardial infarction and chronic heart failure exhibit signs of autonomic

imbalance (Huikuri et al 1997). The ratio of the LF to HF power obtained

from spectral analysis has been shown to be a good marker of the

sympathovagal balance in assessing HRV (Bianchi et al 1997).
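The conventional spectral LF/HF computation referred to here amounts to taking the ratio of band powers in the 0.04-0.15 Hz and 0.15-0.40 Hz ranges of the PSD. A minimal sketch follows, using a plain DFT periodogram; the resampling rate, record length, and two-tone test signal are assumptions for illustration only.

```python
import cmath
import math

def band_power(x, fs, lo, hi):
    """Spectral power in the [lo, hi) Hz band from a plain DFT
    periodogram. Assumes x is an evenly resampled RR series at fs Hz."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]               # remove the DC component
    total = 0.0
    for k in range(1, n // 2):
        if lo <= k * fs / n < hi:
            X = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
            total += abs(X) ** 2 / n
    return total

fs, n = 4.0, 512          # assumed resampling rate (Hz) and record length
f_lf = 13 * fs / n        # ~0.10 Hz tone, inside the LF band (exact DFT bin)
f_hf = 38 * fs / n        # ~0.30 Hz tone, inside the HF band (exact DFT bin)
sig = [0.8 + 0.03 * math.sin(2 * math.pi * f_lf * t / fs)
           + 0.01 * math.sin(2 * math.pi * f_hf * t / fs)
       for t in range(n)]
lf_hf = band_power(sig, fs, 0.04, 0.15) / band_power(sig, fs, 0.15, 0.40)
# The 3:1 amplitude ratio of the tones gives a power ratio near 9.
```

The criticism that follows in the text applies to exactly this kind of linear band-power ratio, regardless of how the PSD itself is estimated.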

The LF/HF ratio obtained via the PSD is inaccurate for determining

the state of the ANS. The sympathovagal balance calculated simply by taking

the LF/HF ratio of the PSD relies on two major erroneous assumptions: that

the parasympathetic nervous system dynamics are exhibited only in high

frequencies, and that ANS control is linear. It has been well established that


dynamics of the parasympathetic nervous system are not only reflected in

high frequencies but they are also well represented in low frequencies

(Eckberg 1997). A recent report corroborating this statement suggests that the

low-frequency component results from an interaction of both the sympathetic

and parasympathetic nervous systems, and not solely the sympathetic nervous

activity (Houle and Billman 1999). Secondly, the LF/HF ratio is based on

linear power spectral analysis, which itself is limited because it is widely

recognized that ANS control involves nonlinear interactions. It is through

efficient interactions between vagal and sympathetic nervous systems that

homeostasis of the cardiovascular system is properly maintained. The

interactions are believed to be nonlinear because physiological conditions

would most likely involve ANS regulation based on dynamic and

simultaneous activity of the vagal and sympathetic nervous systems in

response to physical environmental stress.

The nonlinear PDM method was first introduced and applied in the

analysis of physiological systems by Marmarelis et al (1993, 1997, 1999). The

PDMs are calculated using Volterra-Wiener kernels based on expansion of

Laguerre polynomials (Marmarelis et al 1993). Zhong et al (2004)

modified the PDM technique to be used with even a single output signal of

HRV data, whereas the original PDM required both input and output data.

The modified PDM analysis revealed that the first two dominant PDMs obtained

from the heart rate data of healthy human subjects correspond to the two ANS

activities. The dominant PDMs calculated using Volterra-Wiener kernels

based on expansion of Laguerre polynomials represent the parasympathetic

(HF) and sympathetic activities of the autonomic nervous system. Hence, it was

shown that the LF/HF ratio obtained through PDMs will account for the ANS

activities.


Figure 2.13 (a) Principal Dynamic Mode 1 and (b) Principal Dynamic Mode 2 (FFT magnitude versus frequency, Hz)

The PDM is based on the principle that among all possible choices

of expansion bases, there are some that require the minimum number of basis functions to achieve a given mean-square approximation of the system output. Such a minimal set of basis functions is termed the PDMs of the nonlinear

system. The first two dominant PDMs, shown in Figures 2.13 (a) and (b), have frequency characteristics similar to those of the parasympathetic and sympathetic

activities. Validation of the separation of parasympathetic and sympathetic

activities was performed by the application of the autonomic nervous system

blocking drugs atropine and propranolol. With separate application of the respective drugs, a significant decrease in the amplitude of the waveforms corresponding to each nervous activity was observed, and these dynamics were nearly completely eliminated when both drugs were given to the subjects.

The LF/HF ratio obtained through the PDMs thus provides a more accurate assessment of the autonomic nervous balance.

2.4.4 Adaptive Nonlinear Methods

The absolute variability limits of a cardiovascular system and its

response characteristics to triggering forces are not known. It is very difficult

to determine whether an increased or decreased level of fluctuation is due to a


pathological condition, or to a change in physical conditions. Another

important implication is that the HRV observed in one physiological condition

may not appear in all physical conditions. Hence, there is a great need to

develop adaptive methods that preprocess and process the HRV data without

prior assumptions about the dynamics. The empirical mode decomposition

approach is a nonlinear adaptive method that does not assume any functional

forms before HRV analysis.

2.4.4.1 Empirical Mode Decomposition

Empirical mode decomposition (EMD), introduced by Huang et al

in 1998, is a method of decomposing nonlinear, non-stationary, multi-component signals. The components resulting from EMD are called Intrinsic

Mode Functions (IMFs).

The available HRV data is usually of finite duration, non-stationary

and from physiological systems that are non-linear (Yuru Zhong 2004). Under

such conditions the Fourier spectral analysis, spectrogram analysis, Wavelet

analysis are of limited use (Huang 1998). The Fourier transform requires the

underlying process to be linear, so that the superposition of sinusoidal

solutions makes physical sense. If the stationarity condition is not met, the spectral energy spreads, making the physical description not only difficult but also very often non-unique. Spectrogram analysis is one of

the non-stationary data processing methods limited to linear systems. Here the

data is assumed to be piecewise stationary. This assumption is not always

justified as the window size adopted does not coincide with the stationary

time scales of the signal. The very appealing feature of wavelet analysis is

that it gives uniform resolution for all the scales. It is useful in analyzing

signals with gradual frequency changes. Being linear, this analysis also suffers from pitfalls such as leakage due to the limited length of the basis wavelet and its non-adaptive nature, i.e., once a wavelet is selected, it is used to


analyze all the data. The necessary conditions for the basis to represent a non-

linear and non-stationary time series are:

1. Completeness: It guarantees the degree of accuracy of the expansion.

2. Orthogonality: This condition ensures the positivity of energy and avoids leakage.

3. Locality: Since non-stationary data has no time scale, all events have to be identified by the times of their occurrence.

4. Adaptivity: The requirement for adaptivity is crucial for non-linear and non-stationary data.

Only by adapting to the local variations of the data can the decomposition of a time series interpret the underlying dynamics of the process. Being fully data-dependent and highly adaptive, EMD is a highly efficient method of decomposing nonlinear and non-stationary

signals. EMD decomposes the HRV time series as a sum of zero-mean

amplitude- and frequency-modulated components, called Intrinsic Mode Functions (IMFs), that describe the local behavior of the time series. Thus any physiological variations of the data can be localized along the time and frequency axes.
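The core of the EMD algorithm is the sifting step: interpolate envelopes through the local maxima and minima and subtract their mean. A single sifting step is sketched below; to stay dependency-free, it is a simplified assumption that uses piecewise-linear envelopes (full EMD uses cubic-spline envelopes and repeats sifting until an IMF criterion is met).

```python
import math

def sift_once(x):
    """One sifting step of EMD: subtract the mean of the upper and
    lower extrema envelopes. Linear interpolation stands in for the
    cubic-spline envelopes of the full algorithm."""
    n = len(x)
    maxima = [i for i in range(1, n - 1) if x[i - 1] < x[i] >= x[i + 1]]
    minima = [i for i in range(1, n - 1) if x[i - 1] > x[i] <= x[i + 1]]
    if len(maxima) < 2 or len(minima) < 2:
        return x[:]                       # a residue/trend: nothing to sift

    def envelope(idx):
        # Linear interpolation through the extrema, extended flat at the ends.
        pts = [0] + idx + [n - 1]
        vals = [x[idx[0]]] + [x[i] for i in idx] + [x[idx[-1]]]
        env, seg = [], 0
        for i in range(n):
            while i > pts[seg + 1]:
                seg += 1
            a, b = pts[seg], pts[seg + 1]
            t = 0.0 if b == a else (i - a) / (b - a)
            env.append(vals[seg] * (1 - t) + vals[seg + 1] * t)
        return env

    upper, lower = envelope(maxima), envelope(minima)
    return [v - (u + l) / 2 for v, u, l in zip(x, upper, lower)]

# A near-IMF (zero-mean symmetric oscillation) passes through almost
# unchanged, since its envelope mean is approximately zero.
n = 256
tone = [math.sin(2 * math.pi * 8 * t / n) for t in range(n)]
sifted = sift_once(tone)
```

Full EMD would iterate this step to extract one IMF, subtract it, and repeat on the residue until only a monotonic trend remains.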

EMD is defined by an algorithm and has no analytical

formulation. Hence the decomposition is best understood by experimental

investigation rather than analytical results. Balocchi et al (2004) applied the EMD

method to decompose the HRV series into its components in order to identify

the respiratory oscillation. Neto et al (2004) applied EMD to situations where

postural changes occur, provoking instantaneous changes in heart rate as a

result of autonomic modifications. Ortiz et al (2005) applied the EMD method to


decompose the fetal HRV series into its components in order to identify the

high frequency oscillations. Shafqat et al (2009) applied EMD to evaluate the

effect of local anesthesia on HRV parameters. Job and Joydeep Bhattacharya

(2010) used EMD and Independent component analysis method to correct the

blink artifacts. In this research, the EMD method is used to analyze the

latencies present in the half an hour duration HRV signal.

2.4.4.2 Stochastic Models and Particle Filtering

The sources of variability of RR intervals include both purely stochastic components and deterministic ones related to the multiple interactions of the cardiac system, of which the cardiorespiratory interaction plays a dominant role. The dynamics of RR intervals during spontaneous breathing reveal both a deterministic behavior in the so-called “angular component” and a random one in the “radial component” (Janson et al 2001). It is therefore natural to consider a stochastic model for the fluctuations as a candidate to describe other features of the RR series, such as the time-dependent variability. A time-varying variance has been used (Xu and Philips 2008) to model volatility in non-stationary financial series, and it has been proved that the variance of the RR intervals increases with the mean (Camillo Cammarota and Mario Curione 2011). Stochastic models of RR fluctuations have recently been used in different situations (Kuusela et al 2003, Petelczyc et al 2009).

Volatility refers to the fluctuations observed in RR intervals over time; more precisely, volatility is defined as the standard deviation of a random variable. While there are many methods for modeling the mean value of the variable of interest, attention has recently turned to modeling the changing patterns of variability in a time series. When the random component of the time series shows changes in variability, volatility measures based on the assumption of constant volatility over some period become inefficient.
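As a concrete illustration, the simplest time-varying volatility estimate is a sliding-window standard deviation of the RR intervals. This is a minimal sketch: the window length and the synthetic series (whose variability jumps halfway through, violating the constant-volatility assumption) are arbitrary choices for demonstration.

```python
import numpy as np

def rolling_volatility(rr, window=30):
    """Standard deviation of RR intervals in a sliding window:
    a simple estimate of time-varying volatility."""
    rr = np.asarray(rr, dtype=float)
    return np.array([rr[i:i + window].std()
                     for i in range(len(rr) - window + 1)])

# Synthetic RR series (in seconds) whose variability grows
# fivefold halfway through the recording.
rng = np.random.default_rng(0)
rr = np.concatenate([0.8 + 0.01 * rng.standard_normal(200),
                     0.8 + 0.05 * rng.standard_normal(200)])
vol = rolling_volatility(rr)
```

A constant-volatility summary (a single standard deviation over the whole series) would hide exactly the change that `vol` makes visible.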


Some of the nonlinear models that capture the volatility of a time series are the ARCH, GARCH and Stochastic Volatility (SV) models. In ARCH/GARCH models the volatility is considered deterministic, whereas in the SV model it is modeled as stochastic (Chiara Pederzoli 2006, Sangoon Kim 1998); because the volatility is itself stochastic, the SV model can explain the asymmetric properties of the time series. The Stochastic Volatility Model (SVM) is a nonlinear non-Gaussian state space model in which the variance equation has its own innovation component, which makes the process stochastic rather than deterministic. An important task when analyzing data with a state space model is estimation of the underlying state process from measurements of the observation process. The parameters of the SV model are difficult to estimate; although there are various methods, such as quasi-maximum likelihood and maximum likelihood estimation, the best approach is a simulation-based one. The parameters are therefore estimated by particle filtering, particle smoothing and the Expectation Maximization algorithm (Jeongeun Kim 2005).
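A common discrete-time form of the SV model lets the log-variance follow its own AR(1) recursion with an independent innovation, which is exactly what makes the variance equation stochastic. The following simulation sketch uses illustrative parameter values, not estimates from HRV data.

```python
import numpy as np

def simulate_sv(T, mu=-1.0, phi=0.95, sigma_eta=0.2, seed=0):
    """Stochastic volatility model:
        h_t = mu + phi * (h_{t-1} - mu) + sigma_eta * eta_t   (log-variance)
        y_t = exp(h_t / 2) * eps_t                            (observation)
    The innovation eta_t in the variance equation makes the
    volatility stochastic rather than deterministic."""
    rng = np.random.default_rng(seed)
    h = np.empty(T)
    y = np.empty(T)
    h[0] = mu
    for t in range(T):
        if t > 0:
            h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()
        y[t] = np.exp(h[t] / 2.0) * rng.standard_normal()
    return y, h

y, h = simulate_sv(2000)
```

The latent series h is never observed directly; recovering it from y is the state estimation problem that particle filtering addresses in the next section.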

Particle Filters

A particle filter estimates the dynamic state of a nonlinear non-Gaussian stochastic system with the guidance of a nonlinear dynamic state space model. Several filtering approaches have been proposed for estimating the state recursively. The most widely investigated is the Kalman filter, an optimal filter for the case in which the equations are linear and the noises are independent, additive and Gaussian. For scenarios where the equations are nonlinear and the noises are non-Gaussian, various methods have been proposed, of which the Extended Kalman filter is the most prominent; however, in such cases the Extended Kalman filter gives a large estimation error. Particle filtering has become an important alternative to the Extended Kalman filter. Its advantage over the other methods is that the approximation it exploits does not involve linearization around current estimates, but rather approximates the desired distributions by discrete random measures (Geir Storvik 2002, Arulampalam et al 2002).

The state vector contains all relevant information required to describe the system under investigation. The measurement vector represents noisy observations that are related to the state vector and is of lower dimension than the state vector. The state and observation Equations (2.5) and (2.6) are represented as,

    x_t = f_t(x_{t-1}, u_t)    (2.5)

    y_t = g_t(x_t, v_t)    (2.6)

where x_t is the state vector, y_t is the vector of observations (measurements), f_t is the system transition function, g_t is the measurement function, u_t and v_t are noise vectors and t is the time index. In particle filtering, it is assumed that these models are available in probabilistic form. The probabilistic state space formulation and the requirement of updating the information on receipt of each new measurement are ideally suited to the Bayesian approach.
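For concreteness, Equations (2.5) and (2.6) can be instantiated with the standard nonlinear benchmark model widely used in the particle filtering literature; the specific f_t and g_t below come from that benchmark, not from this thesis, and the noise scales are the conventional illustrative choices.

```python
import numpy as np

def simulate(T, seed=0):
    """Benchmark nonlinear state space model:
        x_t = 0.5 x_{t-1} + 25 x_{t-1} / (1 + x_{t-1}^2)
              + 8 cos(1.2 t) + u_t              (state equation)
        y_t = x_t^2 / 20 + v_t                  (observation equation)
    with independent Gaussian noises u_t and v_t."""
    rng = np.random.default_rng(seed)
    x = np.zeros(T)
    y = np.zeros(T)
    for t in range(1, T):
        x[t] = (0.5 * x[t - 1] + 25.0 * x[t - 1] / (1.0 + x[t - 1] ** 2)
                + 8.0 * np.cos(1.2 * t) + rng.normal(scale=np.sqrt(10.0)))
        y[t] = x[t] ** 2 / 20.0 + rng.normal(scale=1.0)
    return x, y

x, y = simulate(100)
```

Because g_t squares the state, the observation is bimodal in x (the sign is lost), which is exactly the kind of nonlinearity where linearization-based filters struggle.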

In the Bayesian approach, one attempts to construct the posterior Probability Density Function (PDF) of the state based on all the available information, including the set of received measurements. Since the PDF carries all the available statistical information, it can be said to be the complete solution of the estimation problem. The main task of particle filtering (sequential signal processing) is to estimate the state x_t recursively from the observations y_t. In general, there are three probability distributions of interest, as described below:


Filtering / Tracking: This estimates the state at time t from all observations up to time t, i.e. p(x_t | y_1, y_2, ..., y_t) is obtained from p(x_{t-1} | y_1, y_2, ..., y_{t-1}).

Smoothing: This estimates the state at time t from all the past and some future observations, i.e. p(x_t | y_1, y_2, ..., y_T), where T > t.

Prediction: This estimates the state beyond the last observation, i.e. p(x_{t+l} | y_1, y_2, ..., y_t) is obtained from p(x_{t+l-1} | y_1, y_2, ..., y_t), where l >= 1.

In particle filtering, the distributions are approximated by discrete random measures defined by particles and the weights assigned to them. If the distribution of interest is p(x), its approximating random measure is given by Equation (2.7),

    χ = { x^(m), w^(m) }, m = 1, ..., M    (2.7)

where the x^(m) are the particles, the w^(m) are the weights and M is the number of particles used. The random measure approximates the distribution p(x) by Equation (2.8),

    p(x) ≈ Σ_{m=1}^{M} w^(m) δ(x − x^(m))    (2.8)

where δ(·) is the Dirac delta function.
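Under the approximation in Equation (2.8), expectations under p(x) become weighted sums over the particles. A quick sketch with a known target, here p(x) = N(2, 1) sampled directly so that all weights equal 1/M (the target and particle count are arbitrary demonstration choices):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 100_000
particles = rng.normal(2.0, 1.0, M)   # x^(m) drawn directly from p(x) = N(2, 1)
weights = np.full(M, 1.0 / M)         # equal weights for direct sampling

# Moments of p(x) recovered from the discrete random measure of Eq. (2.8):
mean_est = np.sum(weights * particles)
var_est = np.sum(weights * (particles - mean_est) ** 2)
```

As M grows, the weighted sums converge to the true mean 2 and variance 1, which is the sense in which the random measure "approximates" p(x).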


Importance Sampling

In order to approximate a distribution by a discrete random measure, it is necessary to sample from the distribution; the concept of importance sampling is used for this purpose. In general, when p(x) can be sampled directly, equal weights (1/M) are assigned to the particles. When direct sampling is intractable, the particles x^(m) can be generated from a distribution π(x), known as the importance function, and weights are assigned as given by Equation (2.9),

    w*^(m) = p(x^(m)) / π(x^(m))    (2.9)
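When p(x) cannot be sampled directly, Equation (2.9) reweights draws from the importance function. The sketch below targets p(x) = N(2, 1) through a broader proposal π(x) = N(0, 3²) and self-normalizes the weights; the densities are written out with NumPy so no extra libraries are needed, and the particular target and proposal are illustrative.

```python
import numpy as np

def normal_pdf(x, mu, sd):
    """Density of N(mu, sd^2), written out explicitly."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(2)
M = 200_000
x = rng.normal(0.0, 3.0, M)                            # particles from pi(x)
w = normal_pdf(x, 2.0, 1.0) / normal_pdf(x, 0.0, 3.0)  # w* = p / pi, Eq. (2.9)
w /= w.sum()                                           # self-normalize

mean_est = np.sum(w * x)                               # estimate of E_p[x] = 2
```

The proposal is deliberately wider than the target so every region where p(x) has mass is covered; a too-narrow importance function would leave some particles with enormous weights and the rest negligible, the degeneracy that resampling addresses next.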

Resampling

A major problem with particle filtering is that the discrete random measure degenerates quickly, i.e. all except a very few particles are assigned negligible weights. This degeneracy indicates a reduction in the performance of the particle filter. It can be reduced by using good importance sampling functions and by resampling. Resampling eliminates particles with small weights and replicates particles with large weights. It is implemented in the following two steps:

Draw M particles x_t^(m) from the random measure χ_t.

Assign equal weights 1/M to the particles.
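The two steps above can be sketched with multinomial resampling, one of several standard schemes (systematic and stratified resampling are common lower-variance alternatives); the particle population here is synthetic, with 90% of the weight on one value.

```python
import numpy as np

def resample(particles, weights, rng):
    """Multinomial resampling: draw M indices with probabilities given by
    the weights (heavy particles get replicated, light ones are dropped),
    then reset all weights to 1/M."""
    M = len(particles)
    idx = rng.choice(M, size=M, p=weights)
    return particles[idx], np.full(M, 1.0 / M)

rng = np.random.default_rng(3)
M = 10_000
particles = np.concatenate([np.zeros(M // 2), np.ones(M // 2)])
weights = np.concatenate([np.full(M // 2, 0.1 / (M // 2)),   # light particles
                          np.full(M // 2, 0.9 / (M // 2))])  # heavy particles
new_particles, new_weights = resample(particles, weights, rng)
```

After resampling, about 90% of the surviving particles carry the value that held 90% of the weight, while every weight is reset to 1/M, curing the degeneracy at the cost of some duplication.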

Figure 2.14 illustrates the particle filtering concept. The solid curves represent the distributions of interest, which are approximated by discrete measures; the sizes of the particles reflect the weights assigned to them.

This figure is from the IEEE Signal Processing Magazine, September 2003

Figure 2.14 Particle Filtering

Expectation Maximization Algorithm

The Expectation Maximization (EM) algorithm is a parameter estimation tool for obtaining the Maximum Likelihood Estimator (MLE); it has been widely applied to cases where the data are considered incomplete, in the sense that they are not fully observable.

The Expectation step [E step] and the Maximization step [M step] together form one iteration of the EM algorithm. At the kth iteration, the updated parameter θ^(k) is obtained from θ^(k-1) as follows:

E Step: Compute the expected log-likelihood Q(θ | θ^(k-1)), where

    Q(θ | θ') = E[ log f(x | θ) | y, θ' ]

M Step: Choose θ^(k) as the value of θ that maximizes Q(θ | θ^(k-1)).


Convergence is assured, since the algorithm is guaranteed not to decrease the likelihood function from one iteration to the next.
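The E and M steps can be illustrated on a classic incomplete-data problem: estimating the means of a two-component Gaussian mixture whose component labels are unobserved. This is a minimal sketch under stated assumptions (unit variances and equal mixing weights known, only the means estimated); it is not the SV-model estimation itself, just the EM mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)
# "Complete" data would include each sample's component label;
# only the values x are observed, so the data are incomplete.
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(5.0, 1.0, 500)])

mu = np.array([1.0, 4.0])                 # initial parameter guess
for _ in range(50):                       # each loop = one E step + one M step
    # E step: posterior responsibility of each component for each point,
    # the expectation that defines Q(theta | theta^(k-1))
    dens = np.exp(-0.5 * (x[None, :] - mu[:, None]) ** 2)
    resp = dens / dens.sum(axis=0)
    # M step: means that maximize the expected complete-data log-likelihood
    mu = (resp * x).sum(axis=1) / resp.sum(axis=1)
```

Each iteration cannot decrease the observed-data likelihood, and with a reasonable initialization the means converge close to the true values 0 and 5.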

2.5 CONCLUSION

This chapter has surveyed the existing nonlinear measures of HRV and the adaptive nonlinear methods reported in the recent literature. Though conventional nonlinear methods are efficient in the removal of artifacts and in discriminating healthy from pathological subjects, they assume functional forms that are not adaptive to the complexity of the signal. The chapter has also briefly described the role of the ANS in interpreting the complexity of HRV. Conventional nonlinear methods require many data points to discriminate a pathological subject's HRV from a healthy subject's HRV. This survey thus paves the way for the development of adaptive nonlinear techniques for preprocessing and processing the HRV signal in order to discriminate pathological complexity from healthy complexity.