
Buletinul Ştiinţific al Universităţii Politehnica Timişoara

TRANSACTIONS on ELECTRONICS and COMMUNICATIONS

Volume 60(74), Issue 1, 2015

Editorial Board

• Prof. Dr. Eng. Ioan NAFORNITA, Editor-in-chief
• Prof. Dr. Eng. Virgil TIPONUT
• Prof. Dr. Eng. Alexandru ISAR
• Prof. Dr. Eng. Dorina ISAR
• Prof. Dr. Eng. Traian JURCA
• Prof. Dr. Eng. Aldo DE SABATA
• Prof. Dr. Eng. Florin ALEXA
• Prof. Dr. Eng. Radu VASIU
• Lecturer Dr. Eng. Maria KOVACI, Scientific Secretary
• Associate Prof. Dr. Eng. Corina NAFORNITA, Scientific Secretary

Scientific Board

• Prof. Dr. Eng. Monica BORDA, Technical University of Cluj-Napoca, Romania
• Prof. Dr. Eng. Aldo DE SABATA, Politehnica University of Timisoara, Romania
• Prof. Dr. Eng. Karen EGUIAZARIAN, Tampere University of Technology, Institute of Signal Processing, Finland
• Prof. Dr. Eng. Liviu GORAS, Technical University Gheorghe Asachi, Iasi, Romania
• Prof. Dr. Eng. Alexandru ISAR, Politehnica University of Timisoara, Romania
• Prof. Dr. Eng. Michel JEZEQUEL, TELECOM Bretagne, Brest, France
• Prof. Dr. Eng. Traian JURCA, Politehnica University of Timisoara, Romania
• Prof. Dr. Eng. Ioan NAFORNITA, Politehnica University of Timisoara, Romania
• Prof. Dr. Eng. Mohamed NAJIM, ENSEIRB Bordeaux, France
• Prof. Dr. Eng. Emil PETRIU, SITE, University of Ottawa, Canada
• Prof. Dr. Eng. Andre QUINQUIS, Ministère de la Défense, Paris, France


• Prof. Dr. Eng. Maria Victoria RODELLAR BIARGE, Polytechnic University of Madrid, Spain
• Prof. Dr. Eng. Alexandru SERBANESCU, Technical Military Academy, Bucharest, Romania
• Prof. Dr. Eng. Virgil TIPONUT, Politehnica University of Timisoara, Romania
• Prof. Dr. Eng. Radu VASIU, Politehnica University of Timisoara, Romania

Advisory Board

• Prof. Dr. Eng. Ioan NAFORNITA, Politehnica University of Timisoara, Romania
• Prof. Dr. Eng. Alexandru ISAR, Politehnica University of Timisoara, Romania
• Prof. Dr. Eng. Radu VASIU, Politehnica University of Timisoara, Romania
• Prof. Dr. Eng. Florin ALEXA, Politehnica University of Timisoara, Romania
• Prof. Dr. Eng. Vladimir CRETU, Politehnica University of Timisoara, Romania


Buletinul Ştiinţific al Universităţii Politehnica Timişoara

TRANSACTIONS on ELECTRONICS and COMMUNICATIONS

Volume 60(74), Issue 1, 2015

CONTENTS

Cristina Stolojescu-Crisan, Alexandru Isar: "Optical Coherence Tomography Speckle Reduction in the Wavelets Domain" .......... 3

Mihai Micea, Cristina Stangaciu, Vladimir Cretu: "Analysis of Non-Preemptive Scheduling Techniques for HRT Systems" .......... 9

Valentin Stangaciu, Olivia Datcu, Mihai Micea, Vladimir Cretu: "INVERTA – Specification of Real-Time Scheduling Algorithms" .......... 15

Cristian Cosariu, Alexandru Iovanovici, Lucian Prodan, Mircea Vladutiu: "TACTICS: Adaptive Framework for Reactive Control of Road Traffic Systems" .......... 21

Maria Kovaci, Horia Balta: "Performance of Turbo Encoders with 64-QAM Modulators Interfacing Systems in Fading Environment" .......... 27

Cuzman Călin-Alexandru, Bunaciu Cristian-Adrian, Marius Marcu, Sebastian Fuicu: "The study of radio coverage and service quality of a Campus-Wide Wireless Network" .......... 33

Cristina Vasilescu, Mihai Onita: "Digital Rights Management – Creative Commons Perspective" .......... 39

Oana Munteanu, Thierry Bouwmans, El-Hadi Zahzah, Radu Vasiu: "The detection of moving objects in video by background subtraction using Dempster-Shafer theory" .......... 45

Instructions for authors at the Scientific Bulletin of the Politehnica University of Timisoara – Transactions on Electronics and Communications .......... 53


Buletinul Ştiinţific al Universităţii Politehnica Timişoara

TRANSACTIONS on ELECTRONICS and COMMUNICATIONS

Volume 60(74), Issue 1, 2015

Optical Coherence Tomography Speckle Reduction in the Wavelets Domain

Cristina Stolojescu-Crisan 1, Alexandru Isar 2

1 Faculty of Electronics and Telecommunications, Communications Dept., Bd. V. Parvan 2, 300223 Timisoara, Romania, e-mail [email protected]
2 Faculty of Electronics and Telecommunications, Communications Dept., Bd. V. Parvan 2, 300223 Timisoara, Romania, e-mail [email protected]

Abstract – This paper proposes a denoising method that associates the Hyperanalytic Wavelet Transform (HWT) with a Maximum A Posteriori (MAP) filter named bishrink. The method is tested on Optical Coherence Tomography (OCT) images. The experimental results prove that the denoising algorithm can effectively reduce the speckle noise while preserving the structural and textural features, improving the quality of OCT images.

Keywords: denoising, Hyperanalytic Wavelet Transform, optical coherence tomography, bishrink filter, speckle noise

I. INTRODUCTION

Worldwide, degenerative eye diseases such as macular degeneration, glaucoma, cataract, or retinal detachment are the main causes of blindness [1]. Moreover, retinal diseases are already the most common cause of childhood blindness worldwide [2]. The main microvascular complication of diabetes in the eye is diabetic retinopathy (DR), which is found in almost 20% of newly diagnosed diabetic patients. Age-related macular degeneration (AMD) is another retinal disease of growing concern, already the third largest cause of blindness in the world. The incidence of retinal detachment (RD) is estimated at 10 per 100,000 per year; globally, 90 eyes are blinded by RD every hour [3].

Optical coherence tomography (OCT) is a non-invasive imaging test that provides high-resolution images of retinal structures, helping the early detection, diagnosis and treatment guidance of retinal diseases in their early stages, before vision is affected. OCT produces cross-sectional views of the retina, with an accuracy ranging from 5 to 10 microns [4]. It is analogous to ultrasound imaging, except that it uses light instead of sound [5-6].

One of the main limitations of OCT images is the presence of unwanted speckle noise, a multiplicative noise that affects small and low-intensity features. Many well-known digital denoising methods have been adapted for OCT images, including median filtering [7-8], anisotropic diffusion filters [8-9], and Bayesian estimations [10]. Wavelet-based denoising methods have the advantage of performing denoising at multiple resolutions. The Dual-Tree Complex Wavelet Transform has been used in [11], while the curvelet transform was used in [12].

This paper presents a speckle reduction method in the wavelet domain that associates the Hyperanalytic Wavelet Transform (HWT) with a Maximum A Posteriori (MAP) filter called bishrink.

The rest of the paper is structured as follows: Section II is dedicated to the theory behind the proposed denoising method; Section III presents the experimental results obtained for real OCT images; the last section is dedicated to conclusions.

II. MATERIAL AND METHODS

Image denoising methods can be classified into two distinct categories: methods acting in the spatial domain and methods acting in the wavelet domain [13]. This paper is focused on the second category. This class of denoising methods has three steps (a minimal sketch of the whole pipeline is given after the list):

1. computation of a wavelet transform;
2. filtering of the detail coefficients; and
3. computation of the corresponding inverse wavelet transform.
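The following Python sketch illustrates the three steps, assuming the PyWavelets package; the soft-thresholding rule, the "db4" mother wavelet and the threshold value are illustrative placeholders, not the HWT+bishrink method of this paper, which is described below.

```python
import pywt

def wavelet_denoise(image, wavelet="db4", levels=3, threshold=10.0):
    # Step 1: computation of a wavelet transform.
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    approx, details = coeffs[0], coeffs[1:]
    # Step 2: detail coefficients filtering (soft-thresholding placeholder).
    details = [tuple(pywt.threshold(d, threshold, mode="soft") for d in band)
               for band in details]
    # Step 3: computation of the corresponding inverse wavelet transform.
    return pywt.waverec2([approx] + details, wavelet)
```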

Regarding the first and the last steps, there are various wavelet transforms that can be used. One of them is the Discrete Wavelet Transform (DWT). However, it has three main disadvantages: it is not shift-invariant, the associated mother wavelets are not symmetric, and its directional selectivity is poor. An alternative to the DWT is the Undecimated Discrete Wavelet Transform (UDWT), also called the Stationary Wavelet Transform (SWT), which was used in [14]. However, even if the UDWT is translation-invariant, its directional selectivity is poor and it is very redundant [13]. The three previously stated disadvantages of the DWT can be diminished using complex wavelet transforms. The interest in complex wavelets may be linked to the development of the dual filter bank [15-16]. The DT-CWT is a quadrature pair of DWT trees, and its coefficients may be interpreted as arising from the DWT associated with a quasi-analytic wavelet. The main property of the 2D DT-CWT is quasi-shift invariance [13]: perfect shift invariance at level 1, and approximately achieved shift invariance beyond this level. In this paper, we focus on the HWT, whose behavior is quite similar to that of the DT-CWT. However, the DT-CWT requires special mother wavelets, while the HWT can be implemented with classical mother wavelets, such as the ones belonging to the Daubechies family.

Concerning the second step of wavelet-based denoising algorithms, one of the most efficient approaches implies the use of maximum a posteriori (MAP) filters. An interesting MAP filter is the bishrink filter.

A. The Hyperanalytic Wavelet Transform (HWT)

Given the real mother wavelet $\psi(x, y)$, the hypercomplex mother wavelet associated with $\psi(x, y)$ is defined as:

$\psi_a(x,y) = \psi(x,y) + i\,\mathcal{H}_x\{\psi(x,y)\} + j\,\mathcal{H}_y\{\psi(x,y)\} + k\,\mathcal{H}_x\{\mathcal{H}_y\{\psi(x,y)\}\}$,   (1)

where $i^2 = j^2 = -k^2 = -1$, $ij = ji = k$ and $\mathcal{H}$ represents the Hilbert transform [13].

The HWT of an image $f(x, y)$ can be computed as:

$HWT_f = HWT\{f(x,y)\} = \langle f(x,y), \psi_a(x,y)\rangle$.   (2)

Using (1) and (2), it results:

$HWT_f = DWT\{f(x,y)\} + i\,DWT\{\mathcal{H}_x f(x,y)\} + j\,DWT\{\mathcal{H}_y f(x,y)\} + k\,DWT\{\mathcal{H}_x \mathcal{H}_y f(x,y)\}$.   (3)

In the end we obtain:

$HWT_f = \langle f_a(x,y), \psi(x,y)\rangle = DWT\{f_a(x,y)\}$,   (4)

where $f_a$ denotes the hypercomplex image associated with $f$. The HWT of the image can thus be obtained using the 2D-DWT of its associated hypercomplex image. The HWT implementation is presented in Fig. 1.

The HWT implementation shown in Fig. 1 uses four trees, each one implementing a 2D-DWT: the first one is applied to the input image, the next two trees are applied to the 1D Hilbert transforms computed across the lines ($\mathcal{H}_x$) or columns ($\mathcal{H}_y$) of the input image, and the last tree is applied to the result obtained by the computation of the two 1D Hilbert transforms on the input image.

Fig. 1. The 2D HWT implementation architecture.
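A rough Python sketch of this four-tree architecture, assuming NumPy, SciPy and PyWavelets (the function name and parameter defaults are our own choices), is:

```python
import numpy as np
from scipy.signal import hilbert
import pywt

def hwt2(image, wavelet="db4", level=3):
    # Four 2D-DWT trees, as in Fig. 1: the input image, its 1D Hilbert
    # transforms across lines (Hx) and columns (Hy), and both combined.
    hx = np.imag(hilbert(image, axis=1))    # Hilbert transform across lines
    hy = np.imag(hilbert(image, axis=0))    # Hilbert transform across columns
    hxy = np.imag(hilbert(hx, axis=0))      # both 1D Hilbert transforms
    return [pywt.wavedec2(tree, wavelet, level=level)
            for tree in (image, hx, hy, hxy)]
```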

B. Bishrink filtering

The bishrink filter is a MAP filter that takes into account the interscale dependency of the wavelet coefficients. It is based on the observation model y = w + n, where n represents the wavelet transform of the noise $n_i$, obtained as the logarithm of the speckle, $n_i = \log(sp)$, and w represents the wavelet transform of the useful component corresponding to the input image s, obtained as the logarithm of the noiseless component u of the acquired image, $s = \log(u)$. The MAP estimation of w is given by:

$\hat{w}(y) = \arg\max_w \{\ln[p_n(y - w)\, p_w(w)]\}$,   (5)

where $p_n$ is the noise probability density function (pdf), the noise being AWGN (independent), while the a priori distribution $p_w(w)$ of the parameter w, or "prior", contains what is known before making the measurements.

For the construction of the bishrink filter, the noise is assumed to be i.i.d. Gaussian [17], because the HWT is a unitary transform, which does not correlate the i.i.d. Gaussian noise [18]:

$p_n(\mathbf{n}) = \dfrac{1}{2\pi\sigma_n^2}\, e^{-\frac{n_1^2 + n_2^2}{2\sigma_n^2}}$,   $\mathbf{n} = [n_1, n_2]$.   (6)

The model of a noiseless image is given by a heavy-tailed distribution:

$p_w(\mathbf{w}) = \dfrac{3}{2\pi\sigma^2}\, e^{-\frac{\sqrt{3}}{\sigma}\sqrt{w_1^2 + w_2^2}}$,   $\mathbf{w} = [w_1, w_2]$.   (7)

If we replace these two pdfs in equation (5), we obtain:


$\hat{w}(y) = \arg\max_w \ln\left[\dfrac{1}{2\pi\sigma_n^2}\, e^{-\frac{(y_1 - w_1)^2 + (y_2 - w_2)^2}{2\sigma_n^2}} \cdot \dfrac{3}{2\pi\sigma^2}\, e^{-\frac{\sqrt{3}}{\sigma}\sqrt{w_1^2 + w_2^2}}\right]$   (8)

After several computations it results:

$y_1 = w_1 + \dfrac{\sqrt{3}\,\sigma_n^2}{\sigma} \cdot \dfrac{w_1}{\sqrt{w_1^2 + w_2^2}}$,   $y_2 = w_2 + \dfrac{\sqrt{3}\,\sigma_n^2}{\sigma} \cdot \dfrac{w_2}{\sqrt{w_1^2 + w_2^2}}$.   (9)

By computing the sum $w_1^2 + w_2^2$ from the two equations in (9), the following result is obtained:

$(w_1^2 + w_2^2)\left(1 + \dfrac{\sqrt{3}\,\sigma_n^2}{\sigma\sqrt{w_1^2 + w_2^2}}\right)^2 = y_1^2 + y_2^2$.   (10)

In the end it results:

$\sqrt{w_1^2 + w_2^2} = \sqrt{y_1^2 + y_2^2} - \dfrac{\sqrt{3}\,\sigma_n^2}{\sigma}$.   (11)

By combining equation (9) and equation (11), we obtain:

$\hat{w}_1 = \dfrac{\left(\sqrt{y_1^2 + y_2^2} - \frac{\sqrt{3}\,\sigma_n^2}{\sigma}\right)_+}{\sqrt{y_1^2 + y_2^2}}\; y_1$,   $\hat{w}_2 = \dfrac{\left(\sqrt{y_1^2 + y_2^2} - \frac{\sqrt{3}\,\sigma_n^2}{\sigma}\right)_+}{\sqrt{y_1^2 + y_2^2}}\; y_2$.   (12)

Thus, the input-output relation of the bishrink filter is:

$\hat{w}_1 = \dfrac{\left(\sqrt{y_1^2 + y_2^2} - \frac{\sqrt{3}\,\sigma_n^2}{\sigma}\right)_+}{\sqrt{y_1^2 + y_2^2}}\; y_1$,   (13)

where $(g)_+ = g$ if $g > 0$ and $0$ otherwise.
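A direct transcription of Eq. (13) into Python (NumPy assumed; the small constant guarding the division is our own addition) is:

```python
import numpy as np

def bishrink(y1, y2, sigma_n, sigma):
    # Eq. (13): y1 is the noisy child coefficient, y2 its parent,
    # sigma_n the noise standard deviation, sigma the local standard
    # deviation of the noiseless image (all may be NumPy arrays).
    r = np.sqrt(y1 ** 2 + y2 ** 2)
    gain = np.maximum(r - np.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0)  # (g)+
    return gain / np.maximum(r, 1e-12) * y1
```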

The bishrink filter requires prior knowledge of the noise variance and of the marginal variance of the noiseless image for each wavelet coefficient. For the estimation of the noise variance from the noisy wavelet coefficients, a robust median estimator applied to the finest scale of wavelet coefficients is used [19]:

$\hat{\sigma}_n^2 = \left(\dfrac{\mathrm{median}(|y_i|)}{0.6745}\right)^2$,   $y_i \in$ sub-band HH.   (14)
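In code, the estimator of Eq. (14) is a one-liner (NumPy assumed):

```python
import numpy as np

def estimate_noise_std(hh_band):
    # Robust median estimator of Eq. (14), on the finest HH sub-band.
    return np.median(np.abs(hh_band)) / 0.6745
```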

The marginal variance of the k-th coefficient can be estimated using the neighboring coefficients in the region N(k), a square window centered on this coefficient, with a size of 7×7 [21]. The estimation can be done using the equation:

$\sigma_y^2 = \sigma^2 + \sigma_n^2$,   (15)

where $\sigma_y^2$ represents the marginal variance of the noisy observations $y_1$ and $y_2$. It results:

$\hat{\sigma} = \sqrt{\left(\hat{\sigma}_y^2 - \hat{\sigma}_n^2\right)_+}$.   (16)

For the estimation of the marginal variance of the noisy observations, the following relation is proposed in [17]:

$\hat{\sigma}_y^2 = \dfrac{1}{M} \sum_{y_i \in N(k)} y_i^2$,   (17)

where the neighborhood N(k) has the size M.
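Eqs. (16)-(17) amount to a local moment estimate over the moving window; a small sketch (SciPy's uniform_filter is our choice for the moving-window average) is:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def estimate_signal_std(band, sigma_n, window=7):
    # Eq. (17): local marginal variance over a window N(k) of M = 7x7
    # coefficients, then Eq. (16): subtract the noise variance and clip
    # at zero before taking the square root.
    local_var = uniform_filter(band ** 2, size=window)
    return np.sqrt(np.maximum(local_var - sigma_n ** 2, 0.0))
```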

In order to estimate the local standard deviation $\hat{\sigma}_2$ of the useful component corresponding to the parent coefficients in a given sub-band, the sub-band is first interpolated by the repetition of each line and column. Then, by applying relations (16) and (17), the local standard deviation of the useful component corresponding to the child coefficients is obtained:

$\hat{\sigma} = \dfrac{\hat{\sigma}_1 + 0.5\,\hat{\sigma}_2}{2}$   (18)

The local variance of a pixel also gives some information about the frequency content of the region to which the considered pixel belongs: pixels having low local variances imply a corresponding region with low frequencies, while pixels having high local variances imply a corresponding region containing high frequencies.

The estimation of the noise variance is obtained using the equation:

$\hat{\sigma}_n^2 = \mathrm{median}(y_i^2)$,   $y_i \in$ sub-band HH.   (19)


The standard deviation of the noiseless coefficients can be estimated as:

$\hat{\sigma} = \begin{cases} \sqrt{\dfrac{1}{M}\sum_{y_i \in N(k)} y_i^2 - \hat{\sigma}_n^2}, & \text{if } \dfrac{1}{M}\sum_{y_i \in N(k)} y_i^2 - \hat{\sigma}_n^2 > 0 \\ 0, & \text{if not} \end{cases}$   (20)

where M is the size of the moving window N(k), centered on the k-th pixel of the acquired image.

The sensitivity of the bishrink filter to the estimation $\hat{\sigma}_n$ of the noise standard deviation can be computed with the relation:

$S_{\hat{\sigma}_n}^{\hat{w}_1} = \dfrac{d\hat{w}_1}{d\hat{\sigma}_n} \cdot \dfrac{\hat{\sigma}_n}{\hat{w}_1}$   (21)

Using the input-output relation of the bishrink filter in equation (13), we obtain:

$S_{\hat{\sigma}_n}^{\hat{w}_1} = \begin{cases} \dfrac{-2\sqrt{3}\,\hat{\sigma}_n^2}{\hat{\sigma}\sqrt{y_1^2 + y_2^2} - \sqrt{3}\,\hat{\sigma}_n^2}, & \text{if } \sqrt{y_1^2 + y_2^2} > \dfrac{\sqrt{3}\,\hat{\sigma}_n^2}{\hat{\sigma}} \\ 0, & \text{otherwise} \end{cases}$   (22)

The absolute value of this sensitivity is an increasing function of $\hat{\sigma}_n$; the performance of the bishrink filter therefore decreases with the increase of the noise standard deviation estimate. Another important parameter of the bishrink filter is the local estimate $\hat{\sigma}$ of the marginal variance of the noiseless image. The sensitivity of the estimation $\hat{w}_1$ to $\hat{\sigma}$ is given by:

$S_{\hat{\sigma}}^{\hat{w}_1} = \begin{cases} \dfrac{\sqrt{3}\,\hat{\sigma}_n^2}{\hat{\sigma}\sqrt{y_1^2 + y_2^2} - \sqrt{3}\,\hat{\sigma}_n^2}, & \text{if } \sqrt{y_1^2 + y_2^2} > \dfrac{\sqrt{3}\,\hat{\sigma}_n^2}{\hat{\sigma}} \\ 0, & \text{otherwise} \end{cases}$   (23)

The estimation precision of the bishrink filter decreases with the decrease of $\hat{\sigma}$.

III. RESULTS

In this section, we test our denoising approach on the three OCT images shown in Fig. 2.

Fig. 2. The three OCT images used for testing: a) OCT 1, b) OCT 2, c) OCT 3.

The obtained results are analyzed in terms of the noise variance and of the Equivalent Number of Looks (ENL), which quantifies the degree of homogeneity of a region. The ENL is defined as the squared ratio of the mean and the standard deviation of the pixels situated in the considered region:

$ENL = \left(\dfrac{\text{mean}}{\text{standard deviation}}\right)^2$.   (24)
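For a homogeneous region stored as a NumPy array, Eq. (24) reads:

```python
import numpy as np

def enl(region):
    # Equivalent Number of Looks, Eq. (24).
    return (np.mean(region) / np.std(region)) ** 2
```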

The results are shown in Table 1.

Table 1

Image    ENL_i    ENL_o     σ_ni     σ_no
OCT 1    5.19     73.94     8.94     0.28
OCT 2    5.56     86.82     8.37     0.33
OCT 3    6.03     100.64    8.386    0.31


In Table 1, ENL_i represents the input ENL value, while ENL_o is the value obtained after the denoising procedure; σ_ni and σ_no are the values of the noise variance before and after the denoising. The denoising algorithm significantly reduces the noise variance, and the output ENL values indicate a good performance of the proposed denoising algorithm.

In Fig. 3, two homogeneous regions (before and after denoising) from each test image are compared. Based on visual inspection, the proposed denoising method appears to be effective.

IV. CONCLUSIONS

This paper presents an effective wavelet-based denoising system for OCT images. Wavelet-based denoising methods have the advantage of performing denoising at multiple resolutions, which is useful in the case of correlated noise.

The proposed denoising algorithm associates the Hyperanalytic Wavelet Transform with the bishrink filter. The implementation of the HWT is very simple and flexible, permitting the use of any orthogonal or biorthogonal real mother wavelets for its computation; in this paper we used the Daubechies family of mother wavelets.

The experimental results presented in Table 1 and in Fig. 3 highlight the effectiveness of the proposed algorithm.

Acknowledgement

This work was partially supported by the strategic grant POSDRU/159/1.5/S/137070 (2014) of the Ministry of National Education, Romania, co-financed by the European Social Fund – Investing in People, within the Sectoral Operational Programme Human Resources Development 2007-2013.

REFERENCES

[1] C. Delcourt, "Nutrition and age-related eye diseases: the Alienor (Antioxydants, Lipides Essentiels, Nutrition et maladies OculaiRes) Study", Journal of Nutrition Health Aging, 14(10), pp. 854-861, 2010.
[2] D. Yorston, "Retinal Diseases and VISION 2020", Community Eye Health, 16(46), pp. 19-20, 2003.
[3] S. Shah, "Blindness and visual impairment due to retinal diseases", Community Eye Health, 22(69), pp. 8-9, 2009.
[4] D. Huang, E. Swanson, C. Lin, J. Schuman, W. Stinson, W. Chang, M. Hee, T. Flotte, K. Gregory, C. Puliafito, J. Fujimoto, "Optical coherence tomography", Science, 254, pp. 1178-1181, 1991.
[5] M. Born, E. Wolf, "Interference and Diffraction with Partially Coherent Light", in Principles of Optics, fourth ed., Pergamon Press, United Kingdom, pp. 491-505, 1970.
[6] Kiernan, W. Mieler, and S. Hariprasad, "Spectral-domain optical coherence tomography: a comparison of modern high-resolution retinal imaging systems", American Journal of Ophthalmology, 149(1), pp. 18-31, 2010.
[7] J. Rogowska and M. E. Brezinski, "Image processing techniques for noise removal, enhancement and segmentation of cartilage OCT images", Physics in Medicine and Biology, 47(4), pp. 641-655, 2002.
[8] C. P. Loizou, C. Theofanous, M. Pantziaris, et al., "Despeckle filtering software toolbox for ultrasound imaging of the common carotid artery", Computer Methods and Programs in Biomedicine, 114, pp. 109-124, 2014.
[9] P. Puvanathasan and K. Bizheva, "Interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in optical coherence tomography images", Optics Express, 17(2), pp. 733-746, 2009.
[10] A. Wong, A. Mishra, K. Bizheva, and D. A. Clausi, "General Bayesian estimation for speckle noise reduction in optical coherence tomography retinal imagery", Optics Express, 18(8), pp. 8338-8352, 2010.
[11] S. Chitchian, M. A. Fiddy, and N. M. Fried, "Denoising during optical coherence tomography of the prostate nerves via wavelet shrinkage using dual-tree complex wavelet transform", Journal of Biomedical Optics, 14(1), pp. 14-31, 2009.
[12] Z. Jian, L. Yu, B. Rao, B. J. Tromberg, and Z. Chen, "Three-dimensional speckle suppression in optical coherence tomography based on the curvelet transform", Optics Express, 18(2), pp. 1024-1032, 2010.
[13] A. Isar, I. Firoiu, C. Nafornita, S. Moga, "Sonar Images Denoising", in N. Kolev (Ed.), Sonar Systems, InTech, Croatia, pp. 173-206, 2011.
[14] S. Foucher, G. B. Benie, J. M. Boucher, "Multiscale MAP Filtering of SAR Images", IEEE Transactions on Image Processing, 10(1), pp. 49-60, 2001.
[15] N. Kingsbury, "The dual-tree complex wavelet transform: a new efficient tool for image restoration and enhancement", Proceedings of EUSIPCO, Rhodes, Greece, 1998, pp. 319-322.
[16] N. Kingsbury, "Complex Wavelets for Shift Invariant Analysis and Filtering of Signals", Applied and Computational Harmonic Analysis, vol. 10, pp. 234-253, 2001.
[17] L. Sendur and I. W. Selesnick, "Bivariate shrinkage functions for wavelet-based denoising exploiting interscale dependency", IEEE Transactions on Signal Processing, 50(11), pp. 2744-2756, 2002.
[18] I. Firoiu, C. Nafornita, D. Isar, A. Isar, "Bayesian Hyperanalytic Denoising of SONAR Images", IEEE Geoscience and Remote Sensing Letters, 8(6), pp. 1065-1069, 2011.
[19] D. L. Donoho and I. M. Johnstone, "Ideal spatial adaptation by wavelet shrinkage", Biometrika, 81(3), pp. 425-455, 1994.


Fig. 3. Results for the OCT images in a homogeneous region, before / after HWT+bishrink: a) OCT 1, b) OCT 2, c) OCT 3.


Buletinul Ştiinţific al Universităţii Politehnica Timişoara

TRANSACTIONS on ELECTRONICS and COMMUNICATIONS

Volume 60(74), Issue 1, 2015

Analysis of Non-Preemptive Scheduling Techniques for HRT Systems

Mihai V. Micea 1, Cristina S. Stangaciu 1, Vladimir I. Cretu 1

1 Faculty of Automation and Computer Engineering, Computer Engineering and Information Technology Dept., Bd. V. Parvan 2, 300223 Timisoara, Romania, e-mail [email protected]

Abstract – Special cases of hard real-time (HRT) scheduling mechanisms, which provide high predictability regarding task scheduling and execution, are studied in this paper. These mechanisms are all based on a proposed task model called ModX. Extensive evaluation tests have been performed to simulate and analyze the proposed scheduling algorithms; their comparative performance is also discussed in this paper.

Keywords: Scheduling, Embedded, Hard Real-Time, Non-Preemptive.

I. INTRODUCTION

Digital control is a topic of major interest in today's engineering and research activities. Embedded systems and digital signal processing (DSP) systems [1]-[4] are widely used in digital control applications, requiring, in most cases, real-time behavior of the hardware-software components. Many applications have a critical impact on the environment and/or on humans; examples include modern flight control systems, fly-by-wire, autopilot, automotive control, industrial mechatronics, nuclear plant surveillance, and so on.

There are two essential characteristics a hardware-software platform has to meet to provide correct operation results for critical applications [5]: (a) the entire process of system development should integrate the time coordinate, and (b) the system must provide maximum predictability for the hard real-time tasks. As a key component of real-time application development and operation, task scheduling is closely related to these requirements.

Although a very large number and variety of scheduling techniques have been developed in recent years for both single-processor and multiprocessor systems [6], hard real-time task scheduling with maximum predictability still remains an open problem for critical applications. Some of the main reasons include the architectures which optimize the average-case system operation (cache, pipelines, etc.), and the unrestricted use of interrupts and of the associated asynchronous mechanisms and tasks [7].

Our research focuses on developing suitable methodologies and architectures that enable hard real-time systems to meet the two basic requirements stated here. The approach is based on studying and integrating proper models of time, signals and tasks, with emphasis on non-preemptive scheduling techniques.

The next section introduces the ModX model of hard real-time tasks, based on which a number of non-preemptive scheduling techniques are studied in Section III. The main results of the evaluation tests performed to simulate and analyze the proposed scheduling algorithms are presented in Section IV. A discussion of the non-preemptive scheduling techniques and their performance, current work and some prospects conclude the paper.

II. HARD REAL-TIME TASK MODEL

It is generally accepted that real-time applications (even those with critical operating requirements) contain both types of tasks: soft real-time (SRT) and hard real-time (HRT). Therefore, the development, scheduling and concurrent execution of the two types of tasks must be accommodated properly. In our approach, a task is classified as SRT if its correct operation is considered with respect to functional behavior only, while a HRT task additionally requires a correct temporal behavior.

SRT tasks can therefore be modelled and analyzed using classical techniques; the model of the HRT tasks, in contrast, must be able to describe and manipulate their temporal parameters, and must thus be considered with extreme care.

A ModX (executable module) is defined [8] as a periodic, modular, HRT task, with complete and strict temporal specifications, scheduled and executed in a non-preemptive context:

$M_i \equiv \langle T, P, S, F \rangle$   (1)

where: P = {P_IN, P_OUT, P_GLB} is the set of input, output and global parameters of M_i, respectively; S = {S_IN, S_OUT} is the set of input and output signals M_i interacts with; F is the task's instruction set (its functional specification); and:

$T = \langle T_{pr}^{M_i}, T_{ex}^{M_i}, T_{dl}^{M_i}, T_{dy}^{M_i}, N^{M_i} \rangle$   (2)


represents the set of temporal parameters of M_i, in their respective order: period, execution time, deadline, delay of execution during each period, and execution count.

Information exchange between the application ModXs is performed through the input, output and global parameters which define the set P (see (1)). ModXs can process input signals or can generate output signals, which formally define the set S. In the case of input signals, their temporal parameters define the behavior of the corresponding ModXs. The input signals (including the asynchronous events) are processed with our ModX model by periodic polling.

III. NON-PREEMPTIVE SCHEDULING ALGORITHMS OF INDEPENDENT MODX SETS

This section discusses the non-preemptive scheduling algorithms of hard real-time tasks on single-processor systems. Several cases are treated, starting from simple ones to more complex and realistic ones.

The task set model consists of simple and independent ModXs, each having its initial invocation time at t_0 = 0. Thus, each ModX M_i in the set can be characterized, according to (2), by:

$T = \langle T_{pr}^{M_i}, T_{ex}^{M_i}, T_{pr}^{M_i}, 0, \infty \rangle$   (3)

In other words, the deadline of M_i equals its period, the execution delay during each period is null, and the execution count states a continuous execution for M_i. The execution of M_i is not conditioned by any control or data dependencies with any other ModX in the set.

Lemma 1. Let M be a set of simple and independent ModXs, characterized as in (3), and T_LCM the time interval equal to the least common multiple of the ModX periods in M:

$T_{LCM} = \min\{CT \mid T_{pr}^{M_i} \,/\, CT,\ \forall M_i \in M\}$   (4)

where x / y means "x divides y". If a particular algorithm is able to schedule the set M within the T_LCM interval, then M is feasible with respect to this scheduling algorithm.

Proof. The set M is composed of simple and independent ModXs, with their initial invocations aligned at the time instance t_0. Moreover, the invocation times of all the ModXs are also aligned at each moment which is a common multiple of the task periods. On the other hand, the scheduling algorithm must guarantee that each ModX executes only once during each of its periods, without missing any of the specified deadlines. As a result, a cyclic behavior of the schedule can be established based on the T_LCM interval.

Lemma 1 reduces the offline schedulability analysis of a set M of ModXs to a time interval of finite length, T_LCM.
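For integer ModX periods, T_LCM of Eq. (4) can be computed directly, e.g. in Python (math.lcm requires Python 3.9+):

```python
from functools import reduce
from math import lcm

def t_lcm(periods):
    # Least common multiple of the ModX periods, Eq. (4).
    return reduce(lcm, periods)

assert t_lcm([10, 15, 90]) == 90
```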

Two main dynamic non-preemptive scheduling algorithms, considered the most efficient in the literature [9],[10], have been adapted to our task model: MLFNP (Minimum Laxity First Non-Preemptive) and EDFNP (Earliest Deadline First Non-Preemptive). Both share a general algorithmic framework, in which the ModX set is first sorted in non-decreasing order of period (i.e., for any pair of tasks M_i and M_j, if i < j, then $T_{pr}^{M_i} \le T_{pr}^{M_j}$). At any scheduling moment t, a ModX is selected for execution if it has not already been scheduled during its current period and if a particular criterion is verified:

(a) MLFNP selects the ModX with the minimum laxity (i.e. the time interval remaining available for the correct scheduling of the ModX, starting from t), as defined by:

$L_i(t) = \left(\left\lfloor \dfrac{t}{T_{pr}^{M_i}} \right\rfloor + 1\right) T_{pr}^{M_i} - T_{ex}^{M_i} - t$   (5)

(b) EDFNP selects the ModX with the earliest deadline with respect to the current time t.

After a particular ModX, M_j, has been scheduled at time t, the scheduling time is increased by the execution time of M_j, and the procedure is reiterated until t reaches T_LCM; a simplified sketch of this loop is given below.
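The following Python sketch illustrates the loop for the EDFNP criterion, assuming integer time units, deadlines equal to periods and all invocations aligned at t_0 = 0 (the data layout and names are our own); replacing the minimum-deadline selection with the laxity of Eq. (5) yields MLFNP.

```python
def edfnp_schedule(tasks, t_lcm):
    # tasks: list of (period, exec_time) pairs, sorted by period.
    # Returns the (start time, task index) schedule, or None if infeasible.
    t, last, schedule = 0, [-1] * len(tasks), []
    while t < t_lcm:
        for i, (p, _) in enumerate(tasks):
            if t // p > last[i] + 1:
                return None        # a whole period elapsed without execution
        ready = [((t // p + 1) * p, i)              # (absolute deadline, task)
                 for i, (p, _) in enumerate(tasks) if t // p > last[i]]
        if not ready:
            t += 1                 # idle: wait for the next invocation
            continue
        deadline, i = min(ready)   # EDFNP: earliest deadline first
        p, c = tasks[i]
        if t + c > deadline:
            return None            # deadline miss: the set is not feasible
        schedule.append((t, i))
        last[i] = t // p
        t += c                     # non-preemptive: run to completion
    return schedule
```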

An important advantage of the non-preemptive task models and scheduling techniques is that the offline analysis of the system feasibility is very close to the actual operating conditions at run-time, thus increasing the system predictability. The offline schedulability analysis can be sped up by applying some necessity and/or sufficiency conditions instead of employing the algorithm itself to verify the feasibility of a task set.

The ModX model imposes some particularities on the schedulability conditions. Consider M a set of n ModXs, sorted in non-decreasing order of period. If M has a feasible schedule, then:

CN1) $\sum_{i=1}^{n} \dfrac{T_{ex}^{M_i}}{T_{pr}^{M_i}} \le 1$   (6)

This necessary condition is the basic relation that characterizes feasible task scheduling on a single-processor system. It states that the cumulative processor utilization cannot exceed unity. The second necessary condition has been demonstrated in [11]:

CN2) $\forall i,\ 1 < i \le n;\ \forall L,\ T_{pr}^{M_1} < L < T_{pr}^{M_i}$:

$L \ge T_{ex}^{M_i} + \sum_{j=1}^{i-1} \left\lfloor \dfrac{L-1}{T_{pr}^{M_j}} \right\rfloor T_{ex}^{M_j}$   (7)

Condition (7) basically states that the processor utilization of a task set over any time interval L should not exceed that interval. Nevertheless, there is a difference between the task model considered in [11] and our ModX set, which is a concrete task set, with initial invocation times aligned to t_0 = 0. Therefore, examples of ModX sets can be found which are schedulable without satisfying CN2):


[Figure: timelines of the ModX with the maximum execution time and of the ModX with the minimum period; the time interval is fully occupied with the executions of the two ModXs.]

Fig. 1. Worst case for a feasible scheduling

$M = \{M_i \equiv (T_{pr}^{M_i}, T_{ex}^{M_i})\} = \{(10, 4), (15, 8), (90, 4), (90, 1)\}$   (8)

For the ModX set in (8), which is schedulable with the EDFNP algorithm, the CN2) condition fails for i = 2 and L = 11.

Theorem 1. Let M be a set of n simple and independent ModXs, characterized as in (3). If M is schedulable, then:

CN3) $T_{ex}^{M_{ex\max}} \le 2\left(T_{pr}^{M_{pr\min}} - T_{ex}^{M_{pr\min}}\right)$   (9)

where $T_{ex}^{M_{ex\max}}$ is the execution time of the ModX with the maximum execution time in the set, and $T_{pr}^{M_{pr\min}}$ and $T_{ex}^{M_{pr\min}}$ are the period and execution time, respectively, of the ModX with the minimum period in the set.

Proof. The theorem specifies a limiting condition for the maximum execution time of any ModX in M, with respect to the minimum ModX period in the set, assuming the execution without preemption of the ModXs. The worst case for the execution (scheduling) of a feasible set M, regarding the two ModXs implied by the theorem, is presented in Fig. 1: the time interval available for scheduling the M_exmax ModX without missing its deadlines is limited by the period and execution time of M_prmin.

Theorem 1 states the necessary condition added by our particular model of hard real-time task sets to the non-preemptive scheduling analysis; both necessary conditions, CN1) and CN3), can be checked directly, as in the sketch below.
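A minimal sketch of these two checks, with tasks given as (period, execution time) pairs (the helper name is ours):

```python
def necessary_conditions(tasks):
    # CN1), Eq. (6): cumulative processor utilization at most 1.
    cn1 = sum(c / p for p, c in tasks) <= 1.0
    # CN3), Eq. (9): the longest execution time must fit in the slack of
    # two consecutive periods of the ModX with the minimum period.
    p_min, c_min = min(tasks)            # ModX with the minimum period
    c_max = max(c for _, c in tasks)     # maximum execution time in the set
    cn3 = c_max <= 2 * (p_min - c_min)
    return cn1 and cn3
```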

IV. PERFORMANCE OF THE NON-PREEMPTIVE ALGORITHMS

The performance evaluation of the non-preemptive scheduling algorithms discussed in the previous section focuses on determining the following parameters:

• the results of the schedulability conditions applied to the scheduling algorithm under test;
• the results of the schedulability analysis performed on randomly generated ModX sets; the analysis consists in applying the scheduling algorithm over the T_LCM interval calculated for the ModX sets under test (according to Lemma 1 and (4));
• the elapsed time of the schedulability analysis for each set of ModXs, on a PC-type workstation; this parameter characterizes only the general behavior of a particular scheduling algorithm during the offline analysis and differs from the run-time behavior parameters of the online scheduler.

Each set of ModXs is randomly generated, based on some general configuration parameters: n, the total number of ModXs in the set; the time interval which contains each of the ModX periods; the type of distribution used by the randomization algorithm to generate the periods (uniform or normal/Gaussian); the interval of rational values containing the processor utilization of the ModX set, U_M = PU; and the upper limit for the T_LCM value.

A comparative evaluation of the MLFNP and EDFNP scheduling algorithms has been performed, using the 12 workstations of the DSPLabs laboratory at UPT Timisoara (http://dsplabs.upt.ro). More than 24,000 tests have been run to calculate the schedulability ratio (SR) of the two algorithms, as a function of the following additional parameters: the total number of ModXs in the sets (9, 15, 20); the processor utilization PU, bounded by the intervals [0.6, 0.7], [0.7, 0.8], [0.8, 0.9] and [0.9, 1.0]; and the ModX periods, randomly generated using the uniform and the normal distributions, with an upper limit of 310 and a lower limit of 10. As a result, the ModXs tested have a maximum ratio of 1/310 between the execution time and the period.

Although the second schedulability condition, CN2), does not apply properly to our ModX model (see the discussion in Section III), we have included it in the evaluation tests (denoted as "Jeffay").

Fig. 2 presents some of the main results of the evaluation tests. The results show clearly that the EDFNP algorithm behaves much better than MLFNP (i.e., the former yields a higher schedulability ratio than the latter) for all the cases considered: any ModX set dimension, any processor utilization PU, and any type of distribution used to generate the temporal parameters of the ModXs. The success ratio of both algorithms decreases when the processor utilization of the ModX sets is increased. On the other hand, the behavior of the algorithms improves when the number of ModXs in each set is increased. The reason is that, while the processor utilization remains constant, increasing the number of ModXs in a set implies lowering the execution time of each ModX; therefore, non-preemptive scheduling has more chances of success with "many, but smaller tasks" (higher task granularity) than vice versa.

Regarding the "Jeffay" test, the results show that EDFNP succeeds in scheduling many ModX sets for which the CN2) condition does not hold. This observation confirms our discussion about CN2) in Section III. On the other hand, the MLFNP results show that the "Jeffay" test can be used as a valid condition for this algorithm in all the cases considered in our tests.

As previously mentioned, an upper-bound parameter has been specified for the T_LCM value calculated for each generated set of ModXs. This limitation is imposed because, for sets of 20 ModXs for example, T_LCM can easily reach a magnitude order of 10^30 or even more, generating a two-fold problem for our offline schedulability analysis approach:

a) the necessity of operating with very large numbers, which cannot be natively represented on PC architectures; as a result, specialized large-integer arithmetic libraries must be used;
b) the time needed to perform the offline schedulability analysis is proportional to the size of T_LCM.

Some scheduling times obtained for sets of 18 ModXs, with the limit of 2,000,000,000 for T_LCM, are shown in Table 1. The processor utilization has been set as low as possible (i.e. in the [1.0, 2.0] interval) to maximize the analysis times for the tested sets. The values in the table can be considered in a comparative manner, showing that the EDFNP algorithm is quicker than MLFNP.

Table 1. Elapsed times for some offline schedulability analysis tests

T_LCM values       Scheduling times [seconds]
                   MLFNP      EDFNP
145,044,900        476        469
325,155,600        1,060      1,052
149,189,040        483        481
681,912,000        2,214      2,212
1,730,907,360      5,698      5,601
Average values:
1,000,000,000      3,275      3,237

V. CONCLUSIONS

Critical and hard real-time applications require high operation predictability of the target system. Non-preemptive task models and scheduling techniques have been proven a valid solution to develop and implement such applications on embedded and DSP-based platforms.

The offline feasibility analysis is a necessary step which eliminates the NP-hard time and system resource requirements of an online analysis. Although reduced to a limited temporal interval (T_LCM) by using Lemma 1, the offline schedulability analysis can be, in many cases, prohibitively time- (resource-) consuming. A set of schedulability conditions (necessary and/or sufficient) can speed up the feasibility decision of a particular non-preemptive scheduling algorithm for a given task set.

Two of the most efficient dynamic non-preemptive scheduling algorithms have been adapted to our ModX model and studied: MLFNP and EDFNP. The performance evaluation tests have shown that EDFNP behaves better than MLFNP. Therefore, EDFNP has been chosen as the core of the online scheduling algorithms further developed to accommodate the realistic implementation of non-preemptive scheduling on real-time platforms.

The theoretical studies and test results showed that the CN2) schedulability condition, demonstrated in [11], does not apply to our ModX set model, which is a particular case of the task set considered in [11].

The non-preemptive task model and scheduling techniques presented in this paper are successfully being used in the development and implementation of a hard real-time kernel on a Motorola DSP56307 EVM platform [12][13]: the HARETICK kernel [5][14].

ACKNOWLEDGEMENTS

This work was partially supported by the strategic grant POSDRU/159/1.5/S/137070 (2014) of the Ministry of National Education, Romania, co-financed by the European Social Fund – Investing in People, within the Sectoral Operational Programme Human Resources Development 2007-2013.


[Four panels: SR vs. PU, for sets of 10 and 15 ModXs, with normally and uniformly distributed periods; curves for Jeffay, EDFNP and MLFNP; PU ranges from 0.6 to 1.0 and SR from 20% to 100%.]

Fig. 2. SR as a function of PU for the MLFNP and EDFNP algorithms


REFERENCES

[1] Lin, Chi-Ying, Li, Chien-Yao: Design and Implementation of Advanced Digital Controls for Piezo-Actuated Systems using Embedded Control Platform. In: Appl. Math 9.1L (2015), p. 251-258.
[2] Antao, R., Mota, A., et al: Adaptive control of a buck converter with an ARM Cortex-M4. In: Proceedings of the 16th IEEE International Power Electronics and Motion Control Conference and Exposition, Antalya, 2014, p. 359.
[3] Morkoc, C., Onal, Y., et al: DSP based embedded code generation for PMSM using sliding mode controller. In: Proceedings of the 16th IEEE International Power Electronics and Motion Control Conference and Exposition, Antalya, 2014, p. 472.
[4] Puiu, D., Moldoveanu, F.: The Time Delay Control of a CAN Network with Message Recognition. In: Bulletin of the Transilvania University of Braşov, Vol 3 (2010): 52, p. 285-292.
[5] Micea, M.V., Cretu, V.: Non-Preemptive Execution Support for Critical and Hard Real-Time Applications on Embedded Platforms. In: Proceedings of the International Symposium on Signals, Systems and Electronics, Linz, 2004.
[6] Baruah, S., Bertogna, M., et al: Multiprocessor Scheduling for Real-Time Systems. Springer, 2015.
[7] Stewart, D. B.: Twenty-five Most Common Mistakes with Real-time Software Development. In: Embedded Systems Conference, San Francisco, 2001.
[8] Micea, M. V., Cretu, V., et al: Program Modeling and Analysis of Real-Time and Embedded Applications. In: Scientific Bulletin of "Politehnica" University of Timisoara, Transactions on Automatic Control and Computer Science, 49 (2004) No. 3, p. 207-212.
[9] George, L., Rivierre, N., et al: Preemptive and Non-Preemptive Real-Time Uni-Processor Scheduling. In: Rapport de recherche, Nr. 2966, Institut National de Recherche en Informatique et en Automatique, INRIA, Rocquencourt, France, 1996.
[10] Kang, S.I., Lee, H.K.: Analysis and Solution of Non-Preemptive Policies for Scheduling Readers and Writers. In: ACM Operating Systems Review 32 (1998), p. 30-50.
[11] Jeffay, K., Stanat, D., et al: On Non-Preemptive Scheduling of Periodic and Sporadic Tasks. In: Proceedings of the 12th IEEE Real-Time Systems Symposium, San Antonio, p. 129.
[12] Motorola, Inc.: DSP56307: 24-Bit Digital Signal Processor: User's Manual, DSP56307UM/D, Rev. 0, 08/10/98, Semiconductor Products Sector, DSP Division, Austin, USA, 1998.
[13] Motorola, Inc.: DSP56300: 24-Bit Digital Signal Processor: Family Manual. Rev. 3, DSP56300FM/AD, Semiconductor Products Sector, DSP Division, Austin, USA, 2000.
[14] Micea, M.V.: HARETICK: A Real-Time Compact Kernel for Critical Applications on Embedded Platforms. In: Proceedings of the 7th International Conference on Development and Application Systems, Suceava, 2004, p. 16.


Buletinul Ştiinţific al Universităţii Politehnica Timişoara

TRANSACTIONS on ELECTRONICS and COMMUNICATIONS

Volume 60(74), Issue 1, 2015

INVERTA – Specification of Real-Time Scheduling Algorithms

V. Stangaciu 1, O. Datcu 2, M. Micea 3, V. Cretu 4

1,2,3,4 Faculty of Automation and Computers, Dept. of Computer and Software Engineering, Bd. V. Parvan 2, 300223 Timisoara, Romania, e-mail [email protected]

Abstract – This paper describes how scheduling algorithms for real-time applications can be specified formally, and presents the development of a simulator that verifies whether a set of tasks of a real-time application can be scheduled with an existing scheduling algorithm or with an algorithm defined by the user. This simulator is part of the integrated visual environment for designing and analysing real-time applications called INVERTA.

Keywords: scheduling, real-time, simulator

I. INTRODUCTION

Embedded systems and digital signal processing (DSP) systems are used in a variety of applications today, including automotive control, nuclear plant surveillance, flight control systems, and industrial mechatronics. These systems usually run hard real-time tasks, for which the violation of the time requirements (deadlines) may have catastrophic impact; thus, special task scheduling policies must be used. This class of hard real-time scheduling policies must provide schedulability tests which state whether a certain set of tasks is feasible or not. If a set of tasks is feasible with a certain algorithm, there is a guarantee that no deadline is missed. Thus, these algorithms have been, and still are, heavily analyzed [1, 2].

OPEN-HARTS (Operating Environment for Hard Real-Time Systems) is a recently introduced methodology for the development and implementation of hard real-time systems and applications, based on signals and tasks. This system is represented by the interconnection of two sub-systems: one for the analysis of the task set, called INVERTA (Integrated Visual Environment for Real-Time Application Analysis and Development), and one for running the task set, called HARETICK (Hard Real-Time Compact Kernel).

The rest of the paper is structured as follows: problem statement, theoretical foundations, related work, proposed solution and research methodology, implementation, experimental results, contributions and conclusions.

II. PROBLEM STATEMENT

INVERTA allows the building, specification and visual display of real-time applications, designed as a set of tasks of different types, each task having a characteristic set of parameters (including temporal parameters) and a set of control links with other tasks of the application.

The INVERTA sub-system presented in this paper, along with the HARETICK (Hard Real-Time Compact Kernel) sub-system, is part of the OPEN-HARTS (Operating Environment for Hard Real-Time Systems) system. The role of the INVERTA sub-system is to take the running context of the current application from the HARETICK module, to analyse the application, to modify its parameters and to send the modified application back to it.

Most scheduling simulators do not offer the possibility to simulate a customized real-time scheduling algorithm. This is a drawback, because users who propose new algorithms cannot test whether they are feasible or not. Another disadvantage of some of the existing scheduling simulators is that they are not optimized to work with a high number of tasks.

III. THEORETICAL FOUNDATIONS

A real-time system is defined by J. S. Ostroff as follows: "A real-time system (RTS) is any system in which the time at which the output is produced is significant. This is usually because the input corresponds to some movement in the physical world, and the output has to relate to that same movement. The lag from input time to output time must be sufficiently small for acceptable timeliness." [3]

Real-time systems can be divided into the following classes: critical RTS (not meeting the deadline can result in a catastrophe), strict RTS (not meeting the deadline results in wrong behaviour of the system), and soft RTS (not meeting the deadline results in the loss of the system's value and of the quality provided by the system).

Task scheduling refers to finding reliable solutions for the assignment of the processor to each task, in a way in which there is no overlapping in their execution while the system operates [4].

Taking into consideration whether or not they admit interruptions, scheduling algorithms can be classified as follows: preemptive (the execution of a task can be interrupted by a task with a higher priority) and non-preemptive (the execution of a task cannot be interrupted).

Off-line non-preemptive scheduling techniques provide solutions to hard real-time constraints and predictability, which are important demands in critical applications. On the other hand, these scheduling techniques do not provide the flexibility of online scheduling techniques, such as the ones that rely on task prioritization (RM, EDF, LLF and others).

A scheduler is the part of a system that deals with the operation of scheduling a task set. In order to find a valid schedule for a task set, the scheduler executes a schedulability test. The scheduler can be preemptive, if the execution of a task can be interrupted by another task, or non-preemptive, if no interruption is allowed.

Fig. 1 presents a real-time scheduler [5]. As can be seen in Fig. 1, the scheduling algorithm needs the task set and the resource management protocol in order to apply the schedulability test, for a given system architecture, and to give an answer whether the task set can be scheduled or not.

Fig. 1. Real-time scheduler

IV. RELATED WORK

Liu and Layland [6] showed that RM is the best fixed-priority algorithm to be used in a uniprocessor system: they proved that a task set that is not schedulable by RM cannot be scheduled by any other fixed-priority scheme. They were the first authors to provide a necessary condition for a set of n periodic tasks under RM, based on the processor utilization factor U (1) and an upper bound b_n (2), both defined below:

$U = \sum_{i=1}^{n} \dfrac{C_i}{T_i}$,   (1)

where $C_i$ represents the computation time of task i and $T_i$ represents the period of the same task i.

$b_n = n\,(2^{1/n} - 1)$   (2)

The condition is that if the processor utilization factor is greater than $b_n$, then the set of tasks is not schedulable by RM. This condition was improved by Bini in [7], where the Hyperbolic Bound (HB) improves the acceptance ratio by a factor of $\sqrt{2}$ for large n, compared with the Liu and Layland test. According to the HB method, a set of periodic tasks is schedulable by RM if condition (3) is satisfied:

$\prod_{i=1}^{n} (U_i + 1) \le 2$   (3)

In [8] a sufficiency test is provided for the same RM algorithm: the task set is proven to be schedulable if the utilization factor satisfies

$U \le n\,(2^{1/n} - 1)$.   (4)
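Both sufficient RM tests translate into a few lines of Python; the following sketch (the helper names are our own) takes the per-task utilizations $U_i = C_i/T_i$ as input:

```python
def liu_layland_test(utilizations):
    # Sufficient RM test, Eqs. (2) and (4): U <= n(2^(1/n) - 1).
    n = len(utilizations)
    return sum(utilizations) <= n * (2 ** (1.0 / n) - 1)

def hyperbolic_bound_test(utilizations):
    # Sufficient RM test of Bini [7], Eq. (3): prod(U_i + 1) <= 2.
    product = 1.0
    for u in utilizations:
        product *= u + 1.0
    return product <= 2.0
```

For example, three tasks with $U_i = 0.25$ each pass both tests, since $0.75 \le 3(2^{1/3} - 1) \approx 0.78$ and $1.25^3 \approx 1.95 \le 2$.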

The first formulation of the Rate Monotonic Analysis was done by Lehoczky in [9]. The goal of that article was to present an exact characterization of the ability of the rate monotonic algorithm to meet the deadlines of a set of periodic tasks; it also includes a stochastic analysis of the performance of the algorithm when the task sets are generated randomly. Manabe and Aoyagi improved this result in [10] by reducing the number of points where the time demand has to be checked. Another improvement was made by Bini and Buttazzo [11], who proposed a way to trade complexity versus accuracy in the RM feasibility tests.

In [5] Chen presents an overview of the existing real-time scheduling tool-kits. These tools are useful for real-time system designers and programmers to verify whether a task set is schedulable with a given scheduling algorithm. Chen divides these scheduling tool-kits, based on their functionality, into the following categories: simulators, simulation languages and frameworks.

A drawback of the simulators is that they have all the functionality predefined and the user cannot add new code. Among the developed simulators are: GAST [12], DET/SAT/SIM, PERTS SAT, DTRESS/PERTSSim, AFTER, Brux, CAISARTS, and Scheduler 1-2-3.

A simulation language called STRESS was proposed in [13]. Although STRESS is a good tool for evaluating scheduling algorithms and can be used to design new ones, the cost of a context switch is considered to be zero, a task can only start on a tick of the system clock, and resources are limited to semaphores. ASSERTS (A Software Simulation Environment for Real-Time Systems) [14] is another simulation language, focused on distributed and heterogeneous systems. The user can define nonstandard systems by specifying the task body in pseudo-code.

Frameworks take into consideration the user requirements and the possibility of extension. A framework is able to generate, compile and then run code based on the user's specification of a simulation environment, scheduler, resource management

protocols, and task set. A framework from Oregon State University, implemented in C++, was presented in Chen's study from [12]. Another framework, targeting failure analysis and hierarchical scheduling, was described by Matthew Francis Storch in [15].

Cheddar [16] is another framework, implemented in the Ada language, which allows the user to check whether a real-time application meets its temporal constraints. The purpose of creating this framework was mainly educational. This framework can easily connect to other tools, such as editors, design tools and simulators, because the data sent to and received by the framework is in XML format.

V. PROBLEM STATEMENT

This paper defines a meta-language for the

INVERTA environment, which has the ability to

model numerous schedulers (executives). The

simulation will be based on scripts that will be

translated into simulation parameters and interpreted

by the simulation engine.

The general architecture of the simulator described in this paper is presented in Fig. 2. The simulator was developed as a plugin for the INVERTA application. As can be seen in the figure, the simulator plugin receives as input a configuration for a task set and an XML file in which the scheduling algorithm is specified. The INVERTA environment is used to describe the configuration of the task set. The XML specification file is generated by the Formal Specification plugin from INVERTA. This plugin offers a user interface where the scheduling algorithm can be defined in an XML format.

Fig. 3 illustrates the structure of the XML file used for describing the scheduling algorithm. The XML file is composed of five tags. The first one is ScheduleName, in which the name of the scheduling algorithm is entered. The second tag, Acronym, identifies the acronym used for the algorithm. The value of this tag is optional. The next tag, DeclarePriority, describes the type of the scheduling algorithm: static, dynamic or special. The fourth tag, DeclarePreemptiveBehaviour, specifies whether the algorithm is preemptive or non-preemptive. The condition for priority assignment is defined in the last tag, called PriorityAssignement.
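As an illustration only, a specification for the Rate Monotonic Non-Preemptive algorithm could look like the sketch below. It merely follows the five-tag structure described above; the root element name and the exact syntax of the priority expression are our assumptions, since the actual file is shown only graphically (see Fig. 7):

<SchedulingAlgorithm>
  <ScheduleName>Rate Monotonic Non-Preemptive</ScheduleName>
  <Acronym>RMNP</Acronym>
  <DeclarePriority>STATIC</DeclarePriority>
  <DeclarePreemptiveBehaviour>NON-PREEMPTIVE</DeclarePreemptiveBehaviour>
  <PriorityAssignement>1 / T[i]</PriorityAssignement>
</SchedulingAlgorithm>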

In order to evaluate the expression that defines the priority assignment for a scheduling algorithm, the expression is first split into atoms, which are stored in a list of atoms. An atom can be an operator, a numeric constant or a task parameter. Based on the literature review, a set of task parameters was identified (see the list after Fig. 3):

Fig. 2. General architecture of the Task Simulator (the Task Set Configuration and the XML Specification of the Scheduling Algorithm, produced by the Formal Specification Plugin, are the inputs of the Scheduling Plugin, which produces the scheduling output)

• SchedulerName – the name of the scheduling algorithm

• Achronime – the acronym used for the scheduling algorithm

• DeclarePriority – priority declaration: STATIC, DYNAMIC, SPECIAL

• DeclarePreemptiveBehaviour – scheduling algorithm preemptive behavior: PREEMPTIVE, NON-PREEMPTIVE

• PriorityAssignement – expression used for the assignment of priorities

Fig. 3. XML Specification file structure

− T[i] - The task relative period

− D[i] - The task relative deadline

− C[i] - The task computation time

− P[i] - The task priority

− S[i] - The task start time inside the current period

− d[i] - The task absolute deadline

− s[i] - The task absolute start time
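A natural in-memory representation of these parameters, shown here only as an illustrative C sketch (the field names are our assumptions, not identifiers from INVERTA), is:

typedef struct {
    double T;   /* relative period                      */
    double D;   /* relative deadline                    */
    double C;   /* computation time                     */
    double P;   /* priority                             */
    double S;   /* start time inside the current period */
    double d;   /* absolute deadline                    */
    double s;   /* absolute start time                  */
} TaskParams;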

In the next step, the expression is transformed into Reversed Polish Notation. From this notation, the binary evaluation tree is constructed. The result of the expression is obtained by recursively evaluating this tree. The above steps are presented in Fig. 4.


Fig. 4. Expression evaluation steps (Expression String → Expression Parser → Reversed Polish Notation → Expression Tree → Expression Tree Evaluation)

The list of atoms is iterated in order to verify each atom. If an atom is a number, it is added to the Reversed Polish Notation list. If the atom is an operator and the stack is empty, the atom is pushed on the stack. If the stack is not empty, the precedence of the current atom is compared with the precedence of the atom at the top of the stack, and a specific action is performed based on the precedence. If the atom is an opening parenthesis, it is pushed on the stack. On the other hand, if a closing parenthesis is encountered, the content of the stack down to the opening parenthesis is moved to the output RPN list. The pseudo-code used to specify the RPN list construction algorithm is very similar to the C programming language. The reserved words are written in bold and the main operations are listed in italic style:

− isNumber – returns true if an atom is a number

and false otherwise

− isOperator – returns true if an atom is an

operator and false otherwise

− isStartParan – returns true if the atom is an opening parenthesis character and false otherwise

− isStopParan – returns true if the atom is a closing parenthesis character and false otherwise

− isStackEmpty – returns true if the stack is empty and false otherwise

− Push – adds an element to the stack

− Pop – removes and returns the element from the top of the stack

− Peek – returns the element from the top of the

stack

− Precedence – returns the precedence of the

operator given as a parameter

− AddRPNList – adds an element to the Reversed

Polish Notation list

Reversed Polish Notation construction algorithm:

foreach (Atom in AtomList) do
    if isNumber(Atom) do
        AddRPNList(Atom)
    else if isOperator(Atom) do
        if isStackEmpty() do
            Push(Atom)
        else if isStartParan(Peek()) do
            Push(Atom)
        else if Precedence(Atom) > Precedence(Peek()) do
            Push(Atom)
        else
            while (!isStackEmpty() && !isStartParan(Peek()) &&
                   Precedence(Atom) <= Precedence(Peek())) do
                TempAtom = Pop()
                AddRPNList(TempAtom)
            end do
            Push(Atom)
        end if
    else if isStartParan(Atom) do
        Push(Atom)
    else if isStopParan(Atom) do
        while (!isStackEmpty() && !isStartParan(Peek())) do
            TempAtom = Pop()
            AddRPNList(TempAtom)
        end do
        Pop()
    end if
end foreach
while (!isStackEmpty()) do
    TempAtom = Pop()
    AddRPNList(TempAtom)
end do

Fig. 5. Reversed Polish Notation Construction Algorithm
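To complete the picture, the following self-contained C sketch evaluates an already-built RPN list. It illustrates the evaluation step only and is not code from the INVERTA plugin; the atom set (numbers and the four basic operators) and the fixed stack size are simplifying assumptions:

#include <stdio.h>

/* Evaluates an RPN atom list; atoms are numeric strings or one of + - * /.
   Numeric task parameters (T[i], d[i], ...) are assumed to have been
   substituted with their values by the parser before this step. */
double eval_rpn(const char *atoms[], int n) {
    double stack[64];                       /* operand stack (size assumed) */
    int top = 0;
    for (int k = 0; k < n; k++) {
        const char *a = atoms[k];
        int is_op = (a[0] == '+' || a[0] == '-' ||
                     a[0] == '*' || a[0] == '/') && a[1] == '\0';
        if (is_op) {
            double rhs = stack[--top];      /* pop the two operands...     */
            double lhs = stack[--top];
            switch (a[0]) {                 /* ...and apply the operator   */
                case '+': stack[top++] = lhs + rhs; break;
                case '-': stack[top++] = lhs - rhs; break;
                case '*': stack[top++] = lhs * rhs; break;
                case '/': stack[top++] = lhs / rhs; break;
            }
        } else {
            double v;
            sscanf(a, "%lf", &v);           /* numeric atom: push it */
            stack[top++] = v;
        }
    }
    return stack[0];                        /* the expression result */
}

int main(void) {
    /* "d[i] - C[i]" with d[i] = 20 and C[i] = 4 becomes "20 4 -" in RPN */
    const char *rpn[] = { "20", "4", "-" };
    printf("priority = %g\n", eval_rpn(rpn, 3));   /* prints 16 */
    return 0;
}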

VI. EXPERIMENTAL RESULTS

The output of the Scheduling PlugIn from INVERTA for the task set defined in Fig. 6, scheduled with Rate Monotonic Non-Preemptive, a static algorithm, is presented in Fig. 8. Fig. 7 presents the XML file that specifies the Rate Monotonic Non-Preemptive algorithm.

Fig. 6 Task set scheduled with RM algorithm


Fig. 7 XML specification for RM algorithm

Fig. 8 RM scheduling example

The output of the Scheduling PlugIn from INVERTA for the task set defined in Fig. 9, scheduled with MLFNP (Minimum Laxity First Non-Preemptive), a dynamic algorithm, is presented in Fig. 11. The task set from Fig. 9 was taken from the example treated in [1] for the MLFNP algorithm. Fig. 10 presents the XML file that specifies the MLFNP algorithm.

Fig. 9 Task set scheduled with MLFNP algorithm

Fig. 10 XML specification for MLFNP algorithm

Fig. 11 MLFNP scheduling example

VII. CONCLUSION

The development of real-time systems remains a very important research domain because of the complexity of the problems which characterize these systems. Task scheduling is one of the most important problems in real-time systems, without which the operation of the system would be unfeasible. This fact is supported by the tremendous number of research papers in this domain which treat different types of scheduling algorithms. The INVERTA environment is intended to help users define real-time applications in a visual, user-friendly environment, analyse these applications from the feasibility point of view, and simulate existing and custom-defined scheduling algorithms.

ACKNOWLEDGMENT

This work was partially supported by the strategic

grant POSDRU/159/1.5/S/137070 (2014) of the

Ministry of National Education, Romania, co-

financed by the European Social Fund – Investing in

People, within the Sectoral Operational Programme

Human Resources Development 2007-2013.


REFERENCES

[1] S. Baruah, M. Bertogna, and G. Buttazzo, "A

Review of Selected Results on

Uniprocessors," in Multiprocessor

Scheduling for Real-Time Systems, ed:

Springer International Publishing, 2015,

ISBN: 978-3-319-08695-8, pp. 29-33.

[2] Yan Feng Zhai and Feng Xiang Zhang, "A

Review of Sufficient Schedulability Analysis

for Fixed Priority Scheduling Systems,"

Applied Mechanics and Materials, vol. 741,

no. 1, pp. 856-859 2015.

[3] J. S. Ostroff, "Formal methods for the

specification and design of real-time safety

critical systems," J. Syst. Softw., vol. 18, no.

1, pp. 33-60, 1992.

[4] M. V. Micea, "Proiectarea si implementarea

sistemelor timp-real pentru aplicatii critice de

achizitie si prelucrare numerica de semnal,"

PhD, Politehnica Timisoara, 2004.

[5] J. Chen, "Extensions to Fixed Priority with

Preemption Threshold and Reservation-

Based Scheduling," PhD, University of

Waterloo, 2005.

[6] C. L. Liu and J. W. Layland, "Scheduling

Algorithms for Multiprogramming in a Hard-

Real-Time Environment," J. ACM, vol. 20,

no. 1, pp. 46-61, 1973.

[7] E. Bini, G. C. Buttazzo, and G. M. Buttazzo, "Rate monotonic analysis: the hyperbolic bound," IEEE Transactions on Computers, vol. 52, no. 7, pp. 933-942, 2003.

[8] R. Devillers, Jo, #235, and l. Goossens, "Liu

and Layland's schedulability test revisited,"

Inf. Process. Lett., vol. 73, no. 5-6, pp. 157-

161, 2000.

[9] J. Lehoczky, S. Lui, and Y. Ding, "The rate

monotonic scheduling algorithm: exact

characterization and average case behavior,"

in Real Time Systems Symposium, 1989.,

Proceedings., 1989, pp. 166-171.

[10] Y. Manabe and S. Aoyagi, "A Feasibility

Decision Algorithm for Rate Monotonic

andDeadline Monotonic Scheduling," Real-

Time Syst., vol. 14, no. 2, pp. 171-181, 1998.

[11] E. Bini and G. C. Buttazzo, "Schedulability

analysis of periodic fixed priority systems,"

Computers, IEEE Transactions on, vol. 53,

no. 11, pp. 1462-1473, 2004.

[12] J. Johnson, "The Impact of Application and

Architecture Properties of Real-Time

Multiprocessor Scheduling," PhD, CTH

Department of Computer Engineering,

Computer Architecture Laboratory (CAL),

MicroMultiProcessor Group, 1997.

[13] N. C. Audsley, A. Burns, M. F. Richardson,

and A. J. Wellings, "STRESS: a simulator

for hard real-time systems," Softw. Pract.

Exper., vol. 24, no. 6, pp. 543-564, 1994.

[14] K. Ghose, S. Aggarwal, P. Vasek, S.

Chandra, A. Raghav, A. Ghosh, and D. R.

Vogel, "ASSERTS: a toolkit for real-time

software design, development and

evaluation," in Real-Time Systems, 1997.

Proceedings., Ninth Euromicro Workshop

on, 1997, pp. 224-232.

[15] M. F. Storch, "A framework for the

simulation of complex real-time systems,"

University of Illinois at Urbana-Champaign,

1997.

[16] F. Singhoff, J. Legrand, L. Nana, and L. Marcé, "Cheddar: a flexible real time scheduling framework," presented at the Proceedings of the 2004 Annual ACM SIGAda International Conference on Ada: The Engineering of Correct and Reliable Software for Real-Time & Distributed Systems Using Ada and Related Technologies, Atlanta, Georgia, USA, 2004.


Buletinul Ştiinţific al Universităţii Politehnica Timişoara

TRANSACTIONS on ELECTRONICS and COMMUNICATIONS

Volume 60(74), Issue 1, 2015

TACTICS: Adaptive Framework for Reactive Control of

Road Traffic Systems

Cristian Cosariu1, Alexandru Iovanovici2, Lucian Prodan3, Mircea Vladutiu4

1 Faculty of Automation and Computer Engineering, Computer Engineering and Information Technology Dept., Bd. V. Parvan 2, 300223 Timisoara, Romania, e-mail [email protected]
2 Faculty of Automation and Computer Engineering, Computer Engineering and Information Technology Dept., Bd. V. Parvan 2, 300223 Timisoara, Romania, e-mail [email protected]
3 Faculty of Automation and Computer Engineering, Computer Engineering and Information Technology Dept., Bd. V. Parvan 2, 300223 Timisoara, Romania, e-mail [email protected]
4 Faculty of Automation and Computer Engineering, Computer Engineering and Information Technology Dept., Bd. V. Parvan 2, 300223 Timisoara, Romania, e-mail [email protected]

Abstract – This paper proposes an adaptive traffic framework used to respond to continuous traffic changes in a network with control points in key intersections, as derived through complex network analysis. The main actuators of this framework are the intelligent traffic lights, which run the entire adaptation algorithm without affecting the currently deployed infrastructure. We illustrate the proposed solution through a case study conducted over the city of Timisoara, Romania. Our algorithm was tested using the VISSIM simulator, and the results show improvements in reducing waiting times and queue lengths over the currently deployed solution based on fixed time plans.

I. INTRODUCTION

Congestion and its side effects are real problems that concern any urban transport system. Intelligent Transportation Systems (ITS) gather the most significant work done in this direction in order to improve urban transportation operations.

Large and complex systems are still being developed and deployed all over the world. A large number of them use a centralized control scheme to coordinate traffic movement based on input read from pavement-installed sensors, cameras, video surveillance, on-car devices, and the list could continue [2]. But all these control systems require a framework to guide the integration of all the smart devices used into a truly intelligent system.

Based on the data acquisition methods, traffic systems can be static or real-time. The real-time control ones respond to traffic changes by processing the recorded data as they read it. A further analysis reveals that real-time traffic systems are reactive or proactive [2]. In the proactive approach, the traffic control system adapts its operations based on the data estimated for a certain moment in time. Reactive systems respond to traffic changes with a certain delay, caused by the read time needed to determine actual traffic conditions. Proactive systems were deployed in the early stages of ITS development, but do not seem to have a general solution and continue to motivate research in this direction. While algorithms trying to forecast traffic conditions are still being developed [3], reactive methodologies are already implemented by systems like SCATS, SCOOTS, UTOPIA, MOTION or BALANCE [2].

Instead of trying to forecast traffic conditions, another solution is to react quickly and adapt to traffic changes as they occur. Minimizing the reaction time with which a system adapts to traffic changes is where reactive systems still have to be improved. The traffic actuator most used by reactive systems remains the traffic signal [4]. Changing the phase order, modifying the cycle length and switching between different timing plans to find the right phase order are just a few of the currently used solutions [5]. Reactive systems are systems whose role is to maintain an ongoing interaction with their environment rather than produce some final value upon termination. Typical examples of reactive systems are air traffic control systems, programs controlling mechanical devices such as a train or a plane, and programs supervising ongoing processes such as a nuclear reactor.

TACTICS is the adaptive traffic framework envisioned to respond to continuous traffic changes in a network that implements the three-layered formalism proposed in [6]. The main actuators of this framework are the intelligent traffic lights, which run the adaptive green time algorithm. The hardware deployment is done without affecting the current infrastructure. New hardware that uses only video camera detection and a communication module will be used, without the need to install pavement sensors where they are not already present. The proposed workflow was partially tested, as described in [6],


using the VISSIM [19] simulator. Improvements were

obtained in terms of reducing waiting times and queue

lengths over the currently deployed solution based on

fixed time plans.

This paper proposes a framework for developing a reactive traffic control system based on the adaptation of green time values for traffic signals, without modifying the cycle length or phase order. As no general solution has been found yet, we define an approach where traffic lights are the only active system components, which self-adapt and communicate with each other in a distributed manner. We cover the definition of the messages exchanged by TACTICS in order to change green times and thus control traffic movements in a traffic network.

II. STATE OF THE ART

Much work has been carried out in the area of intelligent transportation systems. From a theoretical point of view, most of traffic theory was based on the background of ideal fluids, at most taking into consideration compression properties [7]. All these approaches have major problems when applied to real-life traffic or, otherwise stated: real road traffic is neither an ideal fluid nor does it behave like one.

In recent years, the mathematical models for road traffic simulation have been improved. Most of the classical models, inspired by the behavior of gases or fluids in pipes, give non-realistic results in modern traffic situations and are considered inappropriate [8], but in the last decade we have witnessed a refactoring of these models and their implementation in simulation tools [9]. Responsible for this effect is the nonlinear and chaotic character of the systems that describe road traffic, the so-called "butterfly effect" [8]: the slightest changes in traffic conditions on a road upstream of the point of observation induce effects, and current models are not able to give accurate "what-if" simulations.

For these systems, primary data is represented by the number of vehicles passing on a road segment over a given time period (possibly also the distribution by categories: cars, trucks, bicycles, pedestrians, etc.) and the average speed on that given segment of road at any given time of day and any given day of week [misra2011global]. Additional data can be represented by the average acceleration and deceleration when entering and exiting the road, and even by the statistical distribution of the weight of the vehicles and the number of traffic incidents/accidents.

The problem of improving the capacity of the existing transportation infrastructure was previously addressed in ways ranging from applying the mathematical models presented above to evolving control rules that improve the system structure and reduce the complexity of the city topology [11]. In [9, 10] we can see solutions designed to identify the critical areas in an existing topology, or to predict problems in a proposed one, and to perform the simulation and validation (finding the maximum traffic capability) of any particular intersection or road segment. But these approaches require a framework for the implementation of the proposed methodologies.

An adaptive traffic control framework is addressed in [12], where it is used in the case of an emergency large-scale evacuation. The authors use a methodology based on a model reference adaptive control (MRAC) framework to serve their purpose.

The field of Cyber-Physical Systems (CPS), which emerged in 2006, integrates the fields of computation and control of physical entities. As opposed to traditional embedded systems, a CPS is typically designed as a network of interacting elements with physical input and output instead of as standalone devices. The notion is closely related to the concept of sensor networks. Complex, distributed and dynamic systems like the ones providing air and road traffic control and smart cities have been discussed in the CPS community, concluding the need for an inter-disciplinary combination of diverse engineering fields. Several goals and requirements in large-scale CPS have been identified so far: concurrency, real-time capability, distributed control, self-adaptation, self-organization, reliability and fault tolerance [13].

Classical engineered solutions focus on centralized approaches relying on global information; they lack dynamic dependencies, which makes them easy to understand and manage. Centralized approaches, however, assume that collecting and processing data meets real-time requirements. In large and complex systems, this period of collecting and processing data is longer than the entities can wait for a response. Traffic in large road networks is one example of a situation where centralized optimization is almost impossible: continuously collecting dynamic traffic information from all roads and optimizing traffic flows takes too long to be practically deployed in real-world networks. New approaches must at least self-adapt to changing demand and loads in the network to route vehicles to their destinations [13].

Self-organization implies the previously described self-adaptation and also explores new strategies to reach other objectives. Physical environments and conditions may change frequently, requiring methods that detect changes without external request or modification. A main desideratum for any system is high reliability and increased fault tolerance. CPS brings together specific engineering methods and computer science research on embedded systems, scheduling and distributed algorithms, emphasizing the mapping of processes and physical features. A good example of a CPS domain is the control of vehicle flows with the goal of reducing congestion and travel times in a road network.

III. PROPOSED SOLUTION

A. TACTICS Framework

In [6] the authors propose a three-layered traffic system control stack, of which they have described the methodology that runs at the first layer. Briefly, their method consists of several steps that use an adaptive mechanism to modify green time values to improve local conditions for a single intersection.

A1. Deployment

In this context, we consider each intersection as part of a higher-complexity structure, a network in which intersections communicate with each other to find a global traffic optimum. Because we cannot decouple a local intersection's behavior from the entire network, we propose to interconnect the ones identified as the central loading points in terms of traffic load. In [14] the authors proposed the methodology for selecting key nodes that will act in a master-slave configuration to reach correlated decisions using a communication mechanism over the network. Complex Network Analysis is used over the entire network to mark the nodes with the highest betweenness [15] as master nodes. Traffic data collection falls outside the scope of this paper; according to [6], it is a layer 1 specific operation. Selecting key nodes in the traffic network is an operation specific to layer 2 and is directly related to the proposed framework, because it selects the nodes that will constitute the so-called Intersection Control Units, see Fig. 1.

Using the three-layered optimization stack, we define the communication procedure and the specific messages that define the upper layer of the stack. This third and last step is responsible for the system's response and adaptation to continuous traffic changes. Each node, uniquely identified by a traffic light, will be dynamically controlled to act as a traffic officer.

Fig. 1. Traffic network for a city using TACTICS understanding

Our proposed framework defines the physical implementation of the three-layered stack proposed in [6]. The first layer runs local adaptation mechanisms that change green time values at intersection level based on the detected traffic flow. But running this algorithm in each intersection is not an optimal solution, because of the high number of intersections in a city. The layout of this framework can use the algorithm described in [16] to deploy the system in a real-world situation. Because a local intersection's behavior must be seen as part of a traffic network, central loading points in terms of traffic load must be selected. The STiLO methodology [14] identifies "hot points" and selects the relevant ones to work in a master-slave configuration to reach correlated decisions.

TACTICS implements the characteristics of a cyber-physical system to create a fault-tolerant framework for the adaptive control of traffic movements. This system consists of several customized Intersection Controller Units; each of them handles an entire intersection, covering all the signal controllers in that physical location. For each direction, a Queue Detector (QD) is installed to determine the queue length for that specific direction. Their results act as input for each Signal Controller (SC), which is responsible for the new green time changes. All the SCs in the intersection are interconnected (wireless or not), creating the so-called Intersection Controller Unit (ICU), see Fig. 2. This is responsible for the behavior and the adaptation of the entire intersection to traffic changes. Any city, or large portions of it, can be reduced to several independent ICUs which are all interconnected, but with no centralized control center. On each of these units, the STiLO methodology is applied to decide whether it runs in a master or a slave configuration.

Fig 2. Intersection Controller Unit (ICU)

Fig. 3 shows the working flow diagram for each ICU. The literature gives different solutions for real traffic data gathering [4, 17, 18], ranging from license plate recognition to roadside sensors that log traffic data in real time. Each QD reads the queue length using off-the-shelf car detectors and classification tools. Otherwise, a hardware module capable of estimating the length and dynamics of a queue must be implemented and used for queue detection. The collected data is fed into the Traffic Data Acquisition System, which creates the modified Origin Destination table and the traffic/flow matrix of the intersection.

For our proposed framework we have decided to use the video data collection mechanism, mainly for its ease of deployment.

Using the formulas described in [6], these structures provide input for the Adjustment Mechanism working at the SC level. These computations lead to the new set of green times. The newly computed values, along with the parameters and messages, are ready to be sent to the interconnected intersections via the Communication Controller. The Feedback Controller also receives these values and decides whether or not to wait for an external response. The Communication Controller is responsible for sending the messages to the interconnected intersections and also for receiving the corresponding responses. These are parsed and sent to the Feedback Controller, which will


take them into consideration or not before setting the new green times in the ICU.

One can see that the Communication Controller could be missing, in which case the adjustment works only at intersection level. This happens if the intersection being optimized is isolated and works standalone, or if the communication is offline. This framework uses no redundancy, since it can work offline without any centralized control. If the master nodes were to implement hardware redundancy, the cost would increase in order to protect against a failure that is not a real threat to the system, since each signal controller can take over the role of the ICU. Several solutions are to be further studied, such as a failure detection module implemented to monitor the state of the ICU.

Fig 3. Functional block diagram of an ICU of TACTICS

TACTICS implements the three-layered optimization stack in [6]; the communication procedure and the specific messages are defined so that the system responds and adapts to continuous traffic changes. Each node, uniquely identified by a traffic light, is dynamically controlled to act as a virtual traffic officer. For this framework to be operational, the network topology will have to be defined at deployment time. A procedure for inserting a new node, corresponding to a new traffic signal installation, also needs to be defined. Using this mechanism, each node is capable of positioning itself into the network by knowing its neighbors, and it is able to find its role. STiLO must be run for the newly deployed node to determine its role in the network.

The adaptive green time mechanism is the core of this algorithm, because it determines and sends the new green times to the traffic signals operating in intersections. The dynamics of each traffic-light-controlled intersection is defined using a set of only three parameters, and new green time values are derived based on their values. These are: the green time value, meaning the time which allows traffic to flow through an intersection; the traffic flow, representing the number of vehicles passing on a specific direction; and the cycle length, which is the timeframe between two consecutive green times.

Several steps are performed for changing traffic signal timings. The first step is to determine whether a local intersection has a problem in managing the traffic flow passing through it. The next step is to determine whether it is possible to make changes locally, based on the input values read. If the intersection can respond to traffic changes by changing its own green time values, then it will determine the changing coefficient that will be sent to the interconnected ones. In case the current intersection is identified by STiLO as a master, it communicates to the slaves the changes made on the impacted directions. It also notifies the other interconnected masters about the changes. The greenTimeIncrease and the coefficient_level are computed and sent to the connected intersections. The response is expected during the same cycle, in order to know whether the changes are accepted or not. The algorithm starts over and reads traffic data after each cycle is over.

Depending on the desired goal, different sets of parameters can be selected as input data, similar to vehicle-to-infrastructure (V2I) or infrastructure-to-vehicle (I2V) systems, which use physical parameters (speed, acceleration). These cover the behavior of any intersection and provide all the information needed to assess new timing plans. Due to the reduced number of operations, this needs low computational power. In a real-world system, measuring and collecting traffic data values still represents a challenge.

A.2. Adapting Green Time Values

The adaptive green time mechanism is the core of this algorithm, because it is responsible for effectively determining and sending the new green times to the traffic signals operating in intersections. We start by defining the dynamics of each traffic-light-controlled intersection using a set of only three parameters, and we derive the new traffic signal timings based on their values. These are: the green time value (Gt), meaning the time which allows traffic to flow through an intersection; the traffic flow (td), representing the number of vehicles passing on a specific direction; and the cycle length (Cl), which is the timeframe between two consecutive green times.

Several steps must be performed in order to change traffic signal timings. The first is to determine whether the local intersection has a problem in managing the passing traffic flow. The next is to determine whether it is possible for it to make changes locally, based on the input values read; it will then compute the changing coefficient that will be sent to the interconnected ones. If the current intersection was identified by the algorithm as a master, then it will communicate to the slaves the changes made on the impacted directions and will also notify the other interconnected masters about the changes. As the results are sent, a response is expected during the same cycle in order to know whether changes were made or not. The algorithm restarts and reads traffic data on each cycle.
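As a rough illustration of the local decision step, the following C sketch shows how a changing coefficient could be derived from the three parameters. The structure, the service-rate model and the thresholds are our own assumptions; the actual coefficient computation follows the formulas in [6], which are not reproduced here:

typedef struct {
    int Gt;   /* green time value, in seconds        */
    int td;   /* traffic flow, in vehicles per cycle */
    int Cl;   /* cycle length, in seconds            */
} Direction;

/* Hypothetical decision rule: compare the flow measured on a direction
   with what the current green time can serve; service_rate is an assumed
   number of vehicles that can pass per second of green. */
int changing_coefficient(const Direction *d, int service_rate) {
    int served = d->Gt * service_rate;           /* capacity of this green  */
    if (d->td > served)                          /* demand exceeds capacity */
        return (d->td - served > service_rate) ? 2 : 1;  /* HIGH or LOW     */
    if (d->td < served / 2)                      /* green time oversized    */
        return -1;                               /* request a decrease      */
    return 0;                                    /* keep the current timing */
}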

B. Inter Traffic Signal Communication

Since the methodology for reading traffic data and computing the new green times was described earlier, it is the communication part that we will detail here. We define two types of messages, requests and reports, to be exchanged between master and slave intersections. Their format, defined in Fig. 4, is minimal, in order to be easily implemented regardless of the transmission method used (TCP/IP, Bluetooth, etc.).

Message ID | Message Type | Source | Target | Payload

Fig. 4. Message format used by TACTICS

Based on the resulting coefficient values and on the adaptive green time methodology, six Message IDs are defined: REQ_INC_LOW, REQ_INC_HIGH, REQ_DEC_LOW, REQ_DEC_HIGH, REP_YES and REP_NO; an optional ACK can also be used, but this depends on each intersection's load.

REQ_INC_LOW and REQ_INC_HIGH each correspond to a request for increasing the green time value with a low or high coefficient, as described in [14]. The same applies to REQ_DEC_LOW and REQ_DEC_HIGH, which represent requests for decreasing the green time values. REP_YES and REP_NO are the reports sent by the slave intersection as an answer to each of the before-mentioned requests.
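A possible encoding of this message format, given here only as a C sketch (the field widths and types are assumptions; the paper fixes only the field names of Fig. 4 and the six Message IDs), is:

typedef enum {
    REQ_INC_LOW, REQ_INC_HIGH,      /* requests to increase green time */
    REQ_DEC_LOW, REQ_DEC_HIGH,      /* requests to decrease green time */
    REP_YES, REP_NO                 /* reports sent back by the slave  */
} MessageId;

typedef struct {
    MessageId id;       /* one of the six defined Message IDs            */
    unsigned  type;     /* message type: request or report               */
    unsigned  source;   /* identifier of the sending intersection (ICU)  */
    unsigned  target;   /* identifier of the receiving intersection      */
    int       payload;  /* e.g. the coefficient level of a REQ_* message */
} Message;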

A bidirectional communication is proposed to exchange information using a simple request-reply protocol, where each intersection notifies the interconnected one about the changes it is going to perform. Each intersection will also take into consideration the incoming requests if its local conditions permit it. When the other intersection acknowledges the message, it means that the information will be used for the next timing adjustments; a negative answer means the information cannot be used because of the already calculated green times. The time aspect is important because there is no synchronization of the traffic signals.

The main target of the proposed framework is to provide the environment for the traffic optimization process, in order to ensure a continuous traffic flow between key intersections inside an urban traffic network. Each intersection is seen either as a standalone entity or as part of a complex network, described by three parameters: green times, traffic flow and cycle lengths. By correlating intersections and interconnecting nodes to operate in synergy, faster flow will be achieved at network level.

Several cases are identified: one is when the green time of the slave intersections overlaps the master's green time value, and the second is the case when the response from the slave is received during the master's green time. In the first case, the request from the master does not reach the slave in the current cycle, which means no response from the slave. This is the specific case in which the master will adapt its green time without any change from the slave. The adaptation of the slave will take place in the next cycle, following the response to the master.

Each traffic light has its own working time: cycle length, number of phases, changing order, and the list could continue. Because of this aspect, rules must be defined so that the communication between the intersections is optimal and unnecessary overhead inside the ICU is avoided. All computations are done during the first red time period after a cycle is completed. In this interval, the new green times and coefficient levels are determined based on each specific methodology. All other requests coming from slave intersections in the next period will be taken into account only in the next cycle.

Another rule is that no answer is kept for more than one cycle. When the request from the master does not reach the slave because of a larger cycle length, the master keeps changing its values and sending new requests until it gets a response. If the communication is lost, each intersection acts as a master without sending any message. Statistically, acting as a master, an intersection could improve traffic locally for a short time, and because any congestion is limited in time, this could cover the time needed to pass that situation.

IV. CASE STUDY

The case study follows the changes made in the system before and after the framework implementation. An indicator of the improvements in the network is the time in which a queue is decreased, both with no adaptation and using the proposed adaptive control framework. The proposed methodology finds the optimal traffic balance for all directions in a single intersection and communicates its results to the interconnected ones in order to achieve a more balanced network. But continuous recalculation will naturally lead to a point in time when adapting green times is not possible anymore.

The proposed working model was evaluated using the VISSIM simulator, a microscopic simulation tool that provides the conditions for testing different traffic scenarios in a realistic manner. With VISSIM, the urban network was defined around the central part of the city of Timisoara, and several groups of traffic lights working in the TACTICS framework configuration were simulated.

The results cover several traffic-controlled intersections, subject to the adaptive traffic signal control, all in the central area of Timisoara. Using VISSIM, specific queue counters were set on each direction to monitor traffic flow. These counters record the traffic data passing through during simulation time. Two parameters are of specific interest: average queue length and maximum queue length. One central intersection adapts its green time phases dynamically, according to the described methodology. Traffic values are injected into the urban network using VISSIM-specific traffic data zone generators. During the simulation, green times were adapted by five and ten time units, increasing the green time for the directions heading north and decreasing it for the direction heading south.

To determine the impact on one of the studied intersections, traffic conditions were measured on all four exits, recording values before and after the adaptation of green times. The results show improvements at local intersection level for the intersection that adapts its signal timings. Compared with the initial value, there are moments in time when the improvements reach almost 40% for the Average Queue Length, see Fig. 5 and Fig. 6. This parameter describes a more dynamic intersection, with shorter waiting times. Meanwhile, the Maximum Queue Length parameter shows an interesting aspect in that its peak value is reduced, a fact caused by the progressive response to the increasing traffic conditions.

Fig. 5. Queue Length for one intersection - VISSIM simulation results

Fig. 6. Maximum Queue Length for one intersection - VISSIM simulation results

V. CONCLUSIONS AND FUTURE WORK

In this paper we proposed and tested in simulation an adaptive traffic control framework, designed to respond to dynamic changes in traffic conditions by using intelligent traffic signaling. Our approach is efficient in terms of the new hardware required and the communication overhead needed, because it requires only one new module per intersection and uses the current infrastructure without any additional pavement-installed sensors.

TACTICS is designed to interact with already installed traffic monitoring ITS technologies and proposes a self-adapting methodology, without any centralized control and with a low message overhead for each intersection, due to the small number of exchanged messages. The results presented in the case study also show a low message overhead, which makes this framework an energy-efficient one.

The cost of the new hardware installed in each intersection is estimated, based on our calculations, to be around 12,000 Euros. This makes the solution a low-cost one compared to the costs of installing an intelligent solution for an intersection, which usually reach 30,000 - 40,000 Euros.

ACKNOWLEDGMENT

This work was partially supported by the strategic

grant POSDRU/159/1.5/S/137070 (2014) of the

Ministry of National Education, Romania, co-

financed by the European Social Fund – Investing in

People, within the Sectoral Operational Programme

Human Resources Development 2007-2013.

REFERENCES

[1] K. Fehon, "Adaptive Traffic Signals, Are we missing the boat?," in ITE District 6 Annual Meeting, Sacramento, 2004.
[2] A. Stevanovic, "Review of Adaptive Traffic Control Principles and Deployments in Larger Cities," in International Scientific Conference on Mobility and Transport, Munich, 2009.
[3] O. Juhlin, "Traffic behaviour as social interaction - implications for the design of artificial drivers," in Proceedings of the 6th World Congress on Intelligent Transport Systems (ITS), Toronto, 1999.
[4] A. Stevanovic, "Adaptive Traffic Control Systems: Domestic and Foreign State of Practice. A Synthesis of Highway Practice – Advanced Transportation Concepts."
[5] A. Warberg, J. Larsen, and R. M. Jorgensen, "Green wave traffic optimization - a survey," Informatics and Mathematical Modelling, 2008.
[6] C. Cosariu, L. Prodan, and M. Vladutiu, "Toward traffic movement optimization using adaptive inter-traffic signaling," in IEEE 14th International Symposium on Computational Intelligence and Informatics (CINTI), Budapest, 2013.
[7] M. Papageorgiou, C. Diakaki, V. Dinopoulou, A. Kotsialos, and Y. Wang, "Review of road traffic control strategies," Proceedings of the IEEE, pp. 2043-2067, 2003.
[8] C. F. Daganzo, "Requiem for second-order fluid approximations of traffic flow," Transportation Research Part B: Methodological, pp. 277-286, 1995.
[9] A. Aw and M. Rascle, "Resurrection of 'second order' models of traffic flow," SIAM Journal on Applied Mathematics, pp. 916-938, 2000.
[10] M. Bernot, V. Caselles, and J.-M. Morel, "Optimal transportation networks: models and theory," Springer Verlag, 2009.
[11] D. J. Montana and S. Czerwinski, "Evolving Control Laws for a Network of Traffic Signals," MIT Press, pp. 333-338, 1996.
[12] B. Zhou, J. Cao, X. Zeng, and H. Wu, "Adaptive traffic light control in wireless sensor network-based intelligent transportation system," 2010, pp. 1-5.
[13] S. Senge and H. F. Wedde, "Bee-Inspired Road Traffic Control as an Example of Swarm Intelligence in Cyber-Physical Systems," 2012, pp. 258-265.
[14] A. Iovanovici, C. Cosariu, L. Prodan, and M. Vladutiu, "A Hierarchical approach in Deploying Traffic Light based on Complex Network Analysis," 2014, pp. 232-237.
[15] R. Puzis, Y. Altshuler, Y. Elovici, and S. Bekhor, "Augmented betweenness centrality for environmentally-aware traffic monitoring in transportation networks."
[16] A. Iovanovici, A. Topirceanu, C. Cosariu, M. Udrescu, L. Prodan, and M. Vladutiu, "Heuristic Optimization of Wireless Sensor Networks using Social Network Analysis," 2014.
[17] A. Iovanovici, L. Prodan, and M. Vladutiu, "Collaborative environment for road traffic monitoring," 2013, pp. 232-237.
[18] K. Fehon, "Adaptive Traffic Signals: Are we missing the boat?," Citeseer.
[19] http://vision-traffic.ptvgroup.com/en-us/products/ptv-vissim/


Buletinul Ştiinţific al Universităţii Politehnica Timişoara

TRANSACTIONS on ELECTRONICS and COMMUNICATIONS

Volume 60(74), Issue 1, 2015

Performance of Turbo Encoders with 64-QAM

Modulators Interfacing Systems in Fading Environment

Maria Kovaci1, Horia Balta1,2

1 Faculty of Electronics and Telecommunications, Communications Dept., Bd. V. Parvan 2, 300223 Timisoara, Romania, e-mail: [email protected]
2 Valahia University of Targoviste, 2 Avenue King Carol I, 130024, Romania, e-mail: [email protected]

Abstract – This paper presents a study on the interfacing between the turbo encoder and the modulator. The allocation of the bits of a turbo coded symbol to the modulator symbol can be done in several ways. This study shows the performance of the allocation modes, taking into account quadrature amplitude modulation with 64 points and the Rice fluctuating transmission channel. The simulations presented show that the performance of the entire transmission system, measured in coding gain, may be influenced by up to 1 dB by a suitable choice of the allocation method.

Keywords: fading channel, communication systems,

mapping, quadrature amplitude modulation, turbo code

I. INTRODUCTION

One of the most used modulations in current communications systems is undoubtedly Quadrature Amplitude Modulation (QAM). QAM is among the specifications of communications standards. Under its different variants, QAM is used in digital cable television and in wireless and cellular technology applications. 64-QAM is a good compromise between spectral efficiency (6 bit/s/Hz) and bit/frame error rate (B/FER) performance versus signal to noise ratio (SNR) [1]. 64-QAM gives a symbol error rate of 10^-6 at a SNR of about 19 dB for an uncoded system in a non-fluctuating channel (i.e., the Additive White Gaussian Noise channel - AWGN channel) and, practically, it cannot be used in a fading channel. However, using a turbo code, a BER of 10^-10 can be obtained at a SNR of 9 dB for the AWGN channel and at a SNR of 13 dB for the purely fluctuating channel (Rayleigh channel). Obviously, the advantages are the spectral efficiency and the simplicity of the implementation. For these reasons, the square 64-QAM is the digital modulation most frequently encountered in applications. For example, LTE specifies that such modulation techniques with Gray allocation can be used to minimize the BER [2].

Of course, there are also disadvantages. One of them is that QAM constellations with Gray allocation do not protect all the bits of the modulator symbol equally. The 64-QAM modulation constellation is no exception to this. The problem arising is to find the binary allocation variant between the coded symbol and the modulator symbol which optimizes the performance. Our previous studies have been dedicated to this question for QAM constellations [3], [4], [5], in the AWGN channel. In the present paper we study the turbo coded bit allocation for the 64-QAM constellation in a Rice fading environment. A similar study, for 16-QAM, was done in [6]. In this study we used both the double binary turbo code (DBTC) of the DVB-RCS2 standard [7] and the single binary turbo code (SBTC) of the LTE standard [2].

The Rice channel to which we referred above is a model for real channels in which the received signal is a mixture between the direct wave (Line of Sight - LOS), which propagates directly from the transmitter to the receiver, and the waves reflected by different objects.

In this paper, as in [5], we have analysed three locations for the placement of the information and parity bits generated by turbo coding in the modulator symbol. In the first case, the information bit was placed in the best protected position, followed by two parity bits placed in less protected positions. In the second case, the information bit is placed in the middle position, so that the parity bits are placed in the better and less protected positions. Finally, in the third case, the information bit appears in the poorly protected position. The results of the simulations show a completely different behaviour in the B/FER vs SNR performance of these allocation variants.

The structure of this work is organized as follows. Section II presents the turbo encoders used in this paper (single binary - SBTE and double binary - DBTE), in order to identify the bits to be allocated in the modulator symbol. Section III briefly describes the square 64-QAM with the same aim, namely to identify the positions of the modulator symbol that will be filled by the turbo encoded bits nominated previously. Section IV is dedicated to presenting the allocation alternatives. Section V shows the simulation results and Section VI concludes the paper.


Fig. 1. The scheme of the SBTE.

II. THE TURBO ENCODER

The direct coupling between the turbo encoder and the modulator supposes the representation of the turbo coded block in a periodic structure form, with a period equal to the modulator symbol length. The structure of a turbo coded block is influenced by the structure of the turbo encoder and by the puncturing matrix. This section describes the SBTE specified in [2] and the DBTE specified in [7], configured for the coding rates 1/3 and 2/3, respectively.

A. Single binary turbo encoder

Fig. 1 shows the structure of an SBTE. The input sequence u is encoded directly by the convolutional encoder C1 and, via the interleaver (π), by the encoder C0. Depending on the requirements, the outputs of the two convolutional encoders are punctured to obtain a higher coding rate. This results in the redundant sequences x0 and x1 which, along with the original information sequence u = x2, form the SBTE's output. In the absence of puncturing, the (natural) coding rate of the SBTE is 1/3. At this rate, the turbo coded block size is 3×NS, where NS is the interleaving length. In other words, one turbo coded block consists of NS symbols of the form x_j = (x_2^j, x_1^j, x_0^j), with j from 0 to NS - 1.

B. Double binary turbo encoder

Fig. 2 shows the scheme of a DBTE. Unlike the SBTE, a DBTE generates four-bit symbols x_j = (x_3^j, x_2^j, x_1^j, x_0^j) at its natural rate of 1/2. In this case the size of a turbo coded block is 4×ND, where ND is the length of the inter-symbol interleaving. Note that the DBTE performs both inter-symbol interleaving (the information symbols are interleaved) and intra-symbol interleaving (the bits of an information symbol are interleaved).

Fig. 2. The scheme of the DBTE.

Because the modulator symbol for 64-QAM contains 6 bits, three for each carrier, for compatibility we chose to use the coding rate 2/3. To obtain the coding rate 2/3 for the DBTE, we have used the puncturing matrix:

M_{pd} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} ,   (1)

which applies to the sequences x1 and x0. The structure of a turbo coded block is of the form:

... x_3^j , x_3^(j+1) , ...
... x_2^j , x_2^(j+1) , ...
... x_1^j , ...
... x_0^(j+1) , ...

with j taking even values from 0 to N_D - 2.

Thus, in both cases (SBTE with coding rate 1/3 and DBTE with coding rate 2/3) we have obtained a periodic structure of the data block, with a period of 3 or 2×3 bits. These triplets of bits will form the 6-bit modulator symbol for 64-QAM, as shown in the next section.

III. THE SQUARE 64-QAM

The constellation for the square 64-QAM modulation is presented in Fig. 3. A signal modulated using square 64-QAM has the form:

s_j(t) = p_j \varphi_1(t) + q_j \varphi_2(t) , \quad j \in \{1, 2, \ldots, 64\} ,   (2)

Fig. 3. Signal points constellation for square 64-QAM with Gray

allocation.


where φ1(t) and φ2(t) are the in-phase and quadrature carriers, of unitary energy (1 J). The coefficients p_j and q_j take values in the set {-7, -5, -3, -1, 1, 3, 5, 7}·m_0, each of them depending on 3 of the 6 bits of the modulating symbol m_j, where:

m_j = [a_j \, \alpha_j \, b_j \, \beta_j \, c_j \, \gamma_j] , \quad j \in \{1, 2, \ldots, 64\} ,   (3)

with α_j, β_j, γ_j, a_j, b_j, c_j ∈ {0, 1}, and m_0 = 1/\sqrt{42}. (The bit order of m_j (relation (3)) and the m_0 value were chosen as in [2].) For a Gray allocation we have:

p_j = (1 - 2 a_j) \left[ 4 - (1 - 2 b_j)\,(2 - (1 - 2 c_j)) \right] m_0
q_j = (1 - 2 \alpha_j) \left[ 4 - (1 - 2 \beta_j)\,(2 - (1 - 2 \gamma_j)) \right] m_0   (4)

The binary values of α_j and a_j determine the signs of the coefficients q_j and p_j (in negative logic), while the pairs (β_j, γ_j) and (b_j, c_j) determine their modulus. The bits β_j and b_j play the role of the most significant bit, and γ_j and c_j play the role of the least significant bit. Thus, the square 64-QAM modulation protects the bits of m_j differently. The most protected bits are the sign bits, α_j and a_j, followed by the bits of the pairs (β_j, b_j) and (γ_j, c_j).
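The different protection levels can be checked directly from relation (4). The following C sketch, given only for verification (it simply enumerates relation (4) in units of m_0 for the in-phase bits), prints the eight amplitude levels:

#include <stdio.h>

/* In-phase PAM level of relation (4), in units of m0; a is the sign bit
   (negative logic), b the most and c the least significant modulus bit. */
int pam_level(int a, int b, int c) {
    return (1 - 2 * a) * (4 - (1 - 2 * b) * (2 - (1 - 2 * c)));
}

int main(void) {
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            for (int c = 0; c <= 1; c++)
                printf("a=%d b=%d c=%d -> p/m0 = %+d\n",
                       a, b, c, pam_level(a, b, c));
    return 0;   /* prints the eight levels -7, ..., +7 */
}

Adjacent amplitude levels differ in exactly one bit, as required by the Gray allocation, and the sign is decided by the sign bit alone, which is why the sign bits are the best protected.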

The modulated signal is sent through a Rice flat fading channel. At the output of the demodulator, a sample sequence results, of the form:

y_i = \alpha_i h_i + n_i ,   (5)

where αi is the amplitude of the Ricean fading, hi is given by pj or qj, and ni is a sample of the AWGN noise. The fading amplitude has a Rice probability distribution. A random variable with a Rice distribution, α = √(X² + Y²), can be modeled from two normally distributed variables with the same variance σ²: Y, with zero mean, and X, with non-zero mean A. The random variable X can be written as:

X = Z + A ,  (6)

where Z is a normal random variable with zero mean and variance σ².

Thus, the random variable with Rice distribution, α, can be written as:

α = √((Z + A)² + Y²) = √(r² + 2·r·A·cosΦ + A²) ,  (7)

where r = √(Y² + Z²) is a random variable with a Rayleigh distribution and Φ is the phase of the complex variable whose real and imaginary parts are given by the random variables Y and Z.

The ratio of the power of the LOS component to the power of the multipath component is called the Ricean K factor [8], defined as:

K = A² / (2·σ²) .  (8)

In our simulations we assumed the total power α² = A² + r² = A² + 2·σ² (in the mean) to be unitary, so A² ∈ [0, 1].
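Under these assumptions, Rice fading amplitudes can be drawn with a few lines of NumPy (a sketch of such a setup, not the exact code used in the paper):

import numpy as np

def rice_fading(n, a2, rng=np.random.default_rng(0)):
    """Draw n Rice amplitudes alpha = sqrt((Z + A)^2 + Y^2), cf. (6)-(7),
    with total power A^2 + 2*sigma^2 = 1 and LOS power fraction a2 = A^2."""
    A = np.sqrt(a2)
    sigma = np.sqrt((1.0 - a2) / 2.0)      # from 2*sigma^2 = 1 - A^2
    Z = rng.normal(0.0, sigma, n)
    Y = rng.normal(0.0, sigma, n)
    return np.sqrt((Z + A) ** 2 + Y ** 2)

# a2 = 0 gives the Rayleigh channel and a2 = 1 the non-fluctuating (AWGN)
# case; the Ricean factor of (8) is K = A^2/(2*sigma^2) = a2/(1 - a2).
alpha = rice_fading(100000, 0.75)
print((alpha ** 2).mean())                 # should be close to 1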

IV. INTERFACING THE TURBO ENCODER AND THE 64-QAM MODULATOR

This section describes the ways of interconnecting (interfacing) the turbo encoder and the modulator. For each turbo code and coding rate we have chosen three bit-allocation ways, indicated by the acronyms q0, q1 and q2, respectively. In the complete labeling of the variants we denote the SBTC by s, the DBTC by d, and the coding rates 1/3 and 2/3 by 33 and 67, respectively.

A. CMBM variants for SBTC

The variants of coding-to-modulation bit mapping (CMBM) for the SBTC with coding rate 1/3 are shown in Table 1. Since the natural coding rate of the SBTC is 1/3, in this case the bit allocation for the in-phase component is identical to that of the quadrature component. What differs is only the position within the modulator symbol mj in which the information bit x2 is placed. In the first case, s33q0, x2 is the most protected bit (with the role of aj or αj). In the second case, s33q1, x2 is the middle bit (with the role of bj or βj), and in the case of s33q2, x2 is the least protected bit (with the role of cj or γj).

B. CMBM variants for DBTC

We used the 2/3 coding rate for the DBTC. The CMBM variants in this case are shown in Table 2. Because of the symmetry, we chose the bits of the even-index symbols (generated by the DBTE) to be assigned to the in-phase component and the bits of the odd-index symbols to be assigned to the quadrature component. By doing so, we have 2 information bits and only one parity bit in each of the triplets (aj bj cj) and (αj βj γj). The cases chosen and presented in Table 2 differ in the positioning of the parity bit.

Table 1. CMBM variants for SBTC and a coding rate of 1/3

         aj, αj   bj, βj   cj, γj   protects
s33q0    x2       x1       x0       information
s33q1    x1       x2       x0       hybrid
s33q2    x1       x0       x2       parity

Table 2. CMBM variants for DBTC and a coding rate of 2/3

         in-phase                      quadrature
         aj       bj       cj          αj         βj         γj
d67q0    x3^j     x2^j     x1^j        x3^(j+1)   x2^(j+1)   x0^(j+1)
d67q1    x3^j     x1^j     x2^j        x3^(j+1)   x0^(j+1)   x2^(j+1)
d67q2    x1^j     x3^j     x2^j        x0^(j+1)   x3^(j+1)   x2^(j+1)
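The two tables can also be restated compactly as bit-position maps; the sketch below is ours (the string names are just labels, with "+1" marking bits taken from symbol j+1), listing each variant from the most to the least protected position:

# CMBM variants of Tables 1 and 2 as bit-position maps, ordered from the
# most protected position (sign bit) to the least protected one.
CMBM = {
    # SBTC, rate 1/3: in-phase and quadrature use the same allocation
    "s33q0": ("x2", "x1", "x0"),   # information bit x2 most protected
    "s33q1": ("x1", "x2", "x0"),   # hybrid: x2 in the middle position
    "s33q2": ("x1", "x0", "x2"),   # parity bits in the protected positions
    # DBTC, rate 2/3: (aj, bj, cj) from symbol j, (alpha, beta, gamma) from j+1
    "d67q0": (("x3", "x2", "x1"), ("x3+1", "x2+1", "x0+1")),
    "d67q1": (("x3", "x1", "x2"), ("x3+1", "x0+1", "x2+1")),
    "d67q2": (("x1", "x3", "x2"), ("x0+1", "x3+1", "x2+1")),
}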



Fig. 4. The performance of the memory-4 SBTC from [2] with coding rate Rc = 1/3 and the CMBM modes for 64-QAM: s33q0 – red circles, s33q1 – blue x, s33q2 – black diamonds; continuous line after 100 iterations, dashed line after 16 iterations, dotted line after 8 iterations.

V. EXPERIMENTAL RESULTS

This section presents the results of our investigations. More specifically, we present the performance of the SBTC of the LTE standard [2] and of the DBTC of the DVB-RCS2 standard [7], using square 64-QAM and all the CMBM variants presented in the previous section (Tables 1 and 2).

A. Turbo coding parameters used in the simulations

In the simulations we considered the parameters of the TCs specified in the two standards; we refer to the component convolutional encoders and to the specified interleaving methods. We used 1504-bit data blocks in all cases, and for this reason we set NS = 2·ND = 1504. The circular trellis-closing method (tail biting) was considered in all cases [9].
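As a quick size check (our arithmetic, implied by the rates above): the SBTC chain produces 3·NS = 4512 coded bits per data block, i.e. 4512/6 = 752 64-QAM symbols, while the punctured DBTC chain produces 1504/(2/3) = 2256 coded bits, i.e. 376 symbols.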


We used the Max-Log-MAP algorithm for decoding [10], with a weighting of the extrinsic information [11]. The extrinsic information weighting coefficients were 0.7 for the SBTC and 0.75 in the DBTC case. We also used the genie stopping criterion for the iterations [12], with maximum numbers of iterations of 8, 16 and 100. We considered a Rice channel with a percentage of non-fluctuating (LOS) wave power taking the values 0% (Rayleigh channel), 50%, 75% and 100% (AWGN channel).
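The paper does not list the demapper equations; a standard choice consistent with the sample model (5), which could feed the Max-Log-MAP decoder, is the max-log soft demapper sketched below (ours, per in-phase or quadrature component, assuming perfect knowledge of αi and of the noise variance):

import numpy as np

# 8-PAM levels indexed by the bit triplet (sign, msb, lsb), cf. (4)
LEVELS = np.array([7, 5, 1, 3, -7, -5, -1, -3]) / np.sqrt(42)
BITS = np.array([[(i >> k) & 1 for k in (2, 1, 0)] for i in range(8)])

def maxlog_llr(y, alpha, sigma2):
    """Max-log LLRs of the three bits carried by one component of (5)."""
    metric = (y - alpha * LEVELS) ** 2 / (2.0 * sigma2)
    llr = []
    for k in range(3):
        m0 = metric[BITS[:, k] == 0].min()   # best candidate with bit = 0
        m1 = metric[BITS[:, k] == 1].min()   # best candidate with bit = 1
        llr.append(m1 - m0)                  # positive values favour bit 0
    return llr

print(maxlog_llr(0.9, 1.0, 0.1))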

B. Simulation results

The simulation results are shown in Fig. 4 and Fig. 5. For each point of the curves shown in the diagrams of these figures, we carried out simulations until we obtained 500 erroneous blocks or processed 10^9 data blocks.

Fig. 5. The performance of the memory-4 DBTC from [7] with coding rate Rc = 2/3 and the CMBM modes for 64-QAM: d67q0 – red circles, d67q1 – blue x, d67q2 – black diamonds; continuous line after 100 iterations, dashed line after 16 iterations, dotted line after 8 iterations.

Fig. 4 shows the performance of the SBTC for each of the 3 CMBM variants given in Table 1, at the natural coding rate of 1/3. In the waterfall region, the curves built for the same value of A² (the percentage of power of the non-fluctuating wave) are spaced by about 1 dB in SNR. The performance hierarchy in this region is s33q0, s33q1 and s33q2, respectively. With the transition of the curves to the error floor region, the hierarchy changes, the s33q0 version showing a more pronounced error floor effect. The consistent effect of the fluctuating component on performance is noticeable. Thus, if only 25% of the total power goes into the fluctuating component, the system performance, in terms of coding gain, is halved (the curves denoted A² = 75% are placed at mid-distance between the curves for A² = 100% – the non-fluctuating channel – and the curves for A² = 0% – the purely fluctuating channel).

The curves in Fig. 5 show the performance of the DBTC, with rate 2/3, for each of the CMBM methods described in Table 2. As in the first case, here we have a large "spreading" of the curves in the waterfall region. As in the previous cases, the hierarchies are, in order of performance, d67q0, d67q1, d67q2 for the waterfall region and d67q2, d67q1, d67q0 for the error floor region. Also, as for the SBTC, the curves obtained in this case for different values of the Ricean factor (the balance between the fluctuating and the non-fluctuating component) appear as "echoes at the right" of the AWGN channel curves. Regarding the influence of the maximum number of turbo decoding iterations on performance, we note a gain of about 0.1 dB in the waterfall region from 8 to 16 iterations and from 16 to 100 iterations. This gain is canceled for the red-circle curves (s33q0 and


d67q0) in the error floor region, and tends to increase for the black-diamond curves (s33q2 and d67q2). The phenomena are similar for the SBTC as well.

VI. CONCLUSIONS

First, we note that the relative performance of the different CMBM variants is practically independent of the value of A². Noteworthy is the big gap between the performance of the variants with information protection (...q0) and of those with parity protection (...q2). This gap can reach 1 dB.

The ”...q0” variants are the best in the waterfall region. They are followed, in terms of performance, by the hybrid variants denoted ”...q1”. The worst performance in the waterfall region is obtained by the ”...q2” variants, in which the QAM modulation protection is placed on the parity bits. However, as the FER lowers, the curves obtained for the ”...q0” variants lose their superiority one after another in favour of the other variants. At the bottom of the curves (the error floor region) a major difference appears in the gains brought by performing additional iterations. Thus, while for the ”...q0” variants performing additional iterations is unnecessary, for the other variants the additional iterations bring a consistent coding gain. The explanation is that for the hybrid variant and for the parity-protection variant the error floor region was practically not reached down to the investigated FER values.

The hybrid variants (s33q1 and d67q1), which protect the information bits and the parity bits alternately, are a good solution for balancing the waterfall and error floor regions. For the SBTC the difference between the hybrid variant and the information-protection variant is very small, so the exchange in the performance hierarchy between the two variants occurs earlier. Therefore, the study presented in this paper recommends the hybrid CMBM variants.

ACKNOWLEDGEMENTS

This work was partially supported by the strategic

grant POSDRU/159/1.5/S/137070 (2014) of the

Ministry of National Education, Romania, co-

financed by the European Social Fund – Investing in

People, within the Sectoral Operational Programme

Human Resources Development 2007-2013 and by a grant of the Romanian Ministry of Education, CNCS

– UEFISCDI, project number PN-II-RUPD-2012-3-

0122.

REFERENCES

[1] J. G. Proakis, Digital Communications, McGraw-Hill, 4th edition, 2001.
[2] ETSI, 3GPP TS 36.212: “Evolved Universal Terrestrial Radio Access (E-UTRA), Multiplexing and channel coding”. http://www.etsi.org/deliver/etsi_ts/136200_136299/136212/08.08.00_60/ts_136212v080800p.pdf
[3] H. Balta, F. Alexa, and A. Vesa, “On the allocation of double-binary turbo coded bits in the case of 16-QAM modulation”, Proceedings of the 11th International Symposium on Electronics and Telecommunications, ISBN 978-1-4799-7265-4, November 14-15, Timişoara, România, pp. 191-196, 2014.
[4] H. Balta, J. Gal, and C. Stolojescu-Crişan, “On the Double-Binary Turbo Coded Bits Allocation Mode in the Case of 256-QAM Square Modulation”, Proceedings of the 37th International Conference on Telecommunications and Signal Processing (TSP), ISBN 978-80-214-4983-1, ISSN 1805-5435, July 1-3, Berlin, Germany, pp. 129-134, 2014.
[5] R. Lucaciu, M. Kovaci, J. Gal, A. Mihaescu, and H. Balta, “On the Turbo Coded Bits Allocation Mode for the 64-QAM Square Modulation”, 38th International Conference on Telecommunications and Signal Processing (TSP), July 9-11, Prague, Czech Republic, 2015.
[6] M. Kovaci and H. Balta, “A study on turbo coded 16-QAM bit allocation in Rice flat fading channel”, The 10th International Conference on Future Networks and Communications (FNC 2015), August 17-20, Belfort, France, 2015.
[7] European Telecommunications Standards Institute, “DVB Interactive Satellite System, Part 2: Lower Layers for Satellite standard”, DVB Document A155-2, March 2011. Available: http://www.dvb.org/technology/standards/a155-2_DVB-RCS2_Lower_Layers.pdf
[8] F. Vatta, G. Montorsi, and F. Babich, “Analysis and Simulation of Turbo Codes Performance over Rice Fading Channels”, IEEE International Conference on Communications, ICC 2002, 28 April-2 May 2002, New York City, NY, USA, vol. 3, pp. 1506-1510.
[9] C. Weiss, C. Bettstetter, S. Riedel, and D. J. Costello, “Turbo decoding with tailbiting trellises”, in Proc. IEEE Int. Symp. Signals, Syst., Electron., Pisa, Italy, pp. 343-348, Oct. 1998.
[10] W. Koch and A. Baier, “Optimum and sub-optimum detection of coded data disturbed by time-varying intersymbol interference”, in Proc. GLOBECOM ’90, pp. 1679-1684, December 1990.
[11] H. Balta and C. Douillard, “On the Influence of the Extrinsic Information Scaling Coefficient on the Performance of Single and Double Binary Turbo Codes”, Advances in Electrical and Computer Engineering, Vol. 13, No. 2, pp. 77-84, May 2013.
[12] A. Matache, S. Dolinar, and F. Pollara, “Stopping Rules for Turbo Decoders”, TMO Progress Report 42-142, August 2000, Jet Propulsion Laboratory, Pasadena, California.


Buletinul Ştiinţific al Universităţii Politehnica Timişoara

TRANSACTIONS on ELECTRONICS and COMMUNICATIONS

Volume 60(74), Issue 1, 2015

The study of radio coverage and service quality of a

Campus-Wide Wireless Network

Cuzman Călin-Alexandru¹, Bunaciu Cristian-Adrian², Marius Marcu³, Sebastian Fuicu⁴

¹ Faculty of Electronics and Telecommunications, Communications Dept., bd. V. Parvan 2, 300223 Timisoara, Romania, [email protected]
² Faculty of Electronics and Telecommunications, Communications Dept., bd. V. Parvan 2, 300223 Timisoara, Romania, [email protected]
³ Faculty of Automations and Computers, Computer Science Dept., bd. V. Parvan 2, 300223 Timisoara, Romania, [email protected]
⁴ Faculty of Automations and Computers, Computer Science Dept., bd. V. Parvan 2, 300223 Timisoara, Romania, [email protected]

Abstract – The appearance and development of mobile equipment have led to a growth in the usage of Wi-Fi networks. At present, the networks most used to access the Internet are the ones based on the IEEE 802.11 standard. These networks were conceived to serve a limited number of customers, with symmetric traffic for uplink and downlink and with a limited coverage area dependent on the access point (AP) radio transmission power. This paper describes the tools and the steps followed to increase the radio coverage and to improve the quality of the services provided by a campus network made of 200 interior and exterior Wi-Fi hot-spots managed by one dedicated Alcatel-Lucent OmniAccess controller.

Keywords: Wi-Fi networks, radio analysis, radio

coverage, optimization, QoS, radio map

I. INTRODUCTION

Contemporary society is more and more based on mobile equipment and wireless communication, allowing mobile users to access information anywhere, anytime, in a timely and cost-effective way. According to ITU statistics, the number of mobile (cellular) subscriptions worldwide at the end of 2014 was more than 6.95 billion, close to the size of the worldwide population [1]. Smartphone ownership in developed markets surpassed feature phone ownership in 2013 [2]. The smartphone penetration rate in developing markets also follows an increasing trend [2]. It seems that in every aspect of our lives the ability to communicate becomes more and more important, with people using mobile terminals on a daily basis for phone calls, email, and to access the Internet and social network applications. For most employees, the phone or the tablet has become a compulsory instrument that accompanies them everywhere, including at their workplace [3].

The term “Wi-Fi” refers to local wireless networks

which use the specifications of the IEEE 802.11

standard versions. A new version of the IEEE 802.11

family of standards, IEEE 802.11ac, has recently been

defined with the promise of delivering significant

increases in bandwidth while improving the overall

reliability of a wireless connection [3]. The main goal

of this standard is to provide wireless data rates

compared to common wired LAN infrastructures,

over 1 Gbps bandwidth. Wi-Fi networks are used in

schools, campuses, companies and homes, as an

alternative to LAN wired networks. Usually, hotels,

cafes, airports and, generally, public places offer

public access to Internet by Wi-Fi, these locations

being called “hotspots” [4]. Despite their spread,

wireless networks are still lacking the performance

and quality of wired networks. The recognized

problems of WLAN still remain the radio coverage

and variable transfer rates, both resulting in poor

quality of services.

The present paper represents a starting point in improving the radio coverage capacity, as well as the quality of the services provided by the EduRoam network of the Politehnica University of Timisoara.

The first step in this project was finding the software and hardware instruments needed to determine the present functioning state of the network and to evaluate it from the point of view of radio coverage and transfer rates. The second step was generating a radio coverage map by using dedicated software for measuring the radio signal strength of the APs. The measurements were made within the premises of the campus, inside the main university buildings and outdoors, in the nearby park. The next step implied correlating the radio coverage with the transmission rates of the APs in different locations, beginning with the area with the best signal quality and ending with the area with the worst signal quality within the measured areas. The last aspect of this study is the interpretation of the results and the offer of a solution based on coverage, quality and cost for a maximum exploitation of the network.

The following sections will cover a part of the most

important scientific contributions to this subject.


Section II provides the description of previous related

work on radio signal mapping and wireless

optimization. Section III presents the existing

environment under analysis. Section IV describes the

methodology used to do the analysis and monitoring.

The results and optimizing recommendations are

presented in Section V. The last section concludes the paper.

II. RELATED WORKS

Up to this time, several studies have been carried out on the analysis and optimization of Wi-Fi networks, based on generating radio coverage maps that use the RSSI (received signal strength indicator) or on analyzing the traffic generated by users (number of customers, type of data, bandwidth) [5][6].

The study conducted by Pechac, Klepal and Martinez led to an optimization algorithm based on evolution strategies, implemented in a web application for planning radio resources. The algorithm allows the automatic design of heterogeneous wireless LAN models with a minimum of data gathered in the field [7]. The authors use the Architect/One software for planning the WLAN network and placing the APs, so as to reach the optimal layout and quantity that achieve the required network parameters. This method is used before the WLAN implementation and provides no monitoring support for validating the network parameters at runtime.

Connelly, Liu, Bulwinkle, Miller and Bobbitt produced a set of tools for the automatic generation of radio maps outside buildings. The toolkit could collect data with the help of campus personnel or campus security during their normal work (the set being carried in a simple backpack). The collected data were integrated by a merging algorithm in order to obtain a complete image, used afterwards as a radio map [8]. The achieved radio maps, based on RSSI interpolation, are used to implement an outdoor wireless positioning system, but no optimization decisions are taken.

Kotz and Essien studied the wireless network of a campus in 2001, the year in which it was implemented [5]. Henderson, Kotz and Abyzov came back to the campus network when it reached maturity, in 2003-2004 [9]. Another example of a university whose wireless network was studied is North Carolina University [10]. These studies are very important for those who develop, deploy and manage WLAN infrastructure, as well as for those who develop applications for wireless networks. However, these studies consider nomadic computing traffic coming from laptop users. Similar studies are therefore needed for more recent kinds of traffic, such as the traffic coming from mobile users.

Guillet assembled a typical home network environment in order to evaluate and optimize the design of Wi-Fi antennas for residential gateways. His paper describes the measuring process and illustrates the interactions between different antennas and their working environment [11]. The provided examples, illustrating the interactions of the WiFi antennas of monitoring equipment with the indoor multipath channel, have been used to establish the measurement approach and its implementation.

A large scale WLAN monitoring system deployed at

Dartmouth College, covering 210 campus locations

and 5000 users, is presented in [6]. In this paper the

authors describe the monitoring approach, designs and

solutions addressing the technical challenges that have

resulted from efficiency, scalability, security, and

management perspectives of the campus WLAN

network. The proposed WiFi monitoring system is

made of three components: (1) a high-performance

sniffing system, (2) an online network trace

sanitization and distribution system, and (3) a tool for

configuring, launching, monitoring, and terminating

an experiment. The main goal of our work is similar to that of the monitoring system presented in [6]; however, our first radio and transfer rate measurements were performed manually.

Similar studies have been carried on in diverse home

environments. The authors of [12] present a

measurement study of wireless experience in such

environments by deploying an infrastructure

composed of OpenWRT based APs. They are

configured with a dedicated measurement and

monitoring software that communicates with a

measurement controller through an open API.

Although the subject of Wi-Fi is popular in the research field, there are few studies in the area of radio coverage analysis. This is why, with this paper, we bring a contribution to this field by exemplifying the methods that can be used for analyzing and optimizing a wireless network, the work tools used and, also, our conclusions and recommendations for Wi-Fi optimization, with direct application to the campus network under study.

III. WORKING ENVIRONMENT

The UPT EduRoam network was developed with the purpose of improving the Internet communication infrastructure, within an extensive cooperation project between the Politehnica University of Timisoara and the University of Debrecen. The project implied installing 200 specific devices (Access Points) for Wi-Fi communications, connected to a monitoring and managing device called the controller (Fig. 1). The APs ensure coverage for the faculty buildings, as well as for the University dormitories. Coverage inside and outside the buildings was ensured. In the interior, the communal spaces were mostly considered (halls, corridors, study rooms), and in the exterior, coverage of the parks and alleys around the buildings was deployed.

OmniAccess APs are produced by Alcatel-Lucent and operate exclusively with the OAW 6000 WLAN controller to provide network access to wireless customers.

Fig. 1. Controller-managed wireless network.

The equipment supports the IEEE 802.11a/b/g/n standards for wireless systems and

adaptive radio management (ARM). ARM is a radio

frequency resource allocation algorithm enabling each

AP to select the optimum radio channel and

transmission power setting to minimize interference

and maximize coverage and throughput. The APs have the capacity of radio adaptation to the surrounding interference, increasing or decreasing their transmitting power as appropriate and switching channels based on the level of occupancy of the channel. The APs scan for better channels at periodic intervals and report the information to the WLAN controller, which sets up the APs' configuration parameters [13].

The WLAN controller is the central equipment which

manages the configurations of the APs and, at the

same time, functions as a switch for wireless traffic.

The controller is enterprise-class equipment which functions as a bridge for the traffic of wireless customers from/to traditional wired networks. It has many functions, such as:

• The management of the entire wireless network is concentrated at a single point

• It behaves as a firewall between the cabled part

and the wireless part of the network

• VPN connectivity

• Mechanism of detection and prevention of

intrusions

• Central handover mechanism

• Analysis and monitoring of the radio spectrum

The authentication of users to the UPT EduRoam network is based on a user name and a password (the students' e-mail accounts), via an AAA (Authentication, Authorization and Accounting) server using Active Directory services and the RADIUS protocol.

Starting from the existing specifications and the capabilities of the previously presented network, we decided to carry out an extensive analysis of the radio coverage area and of the data transfer rates, which will be described in the following section.

IV. MEASUREMENT METHODOLOGY

In the making of this study, we tried to gather as much information as possible about the way the EduRoam Wi-Fi network functions, its specifications, as well as the exact positioning of the APs. In the measurement process we identify every AP by name (configured in the WLAN controller), MAC address (hardwired), IPv4 address (allocated statically by the controller) and location. At each testing location there are several APs to be taken into consideration.

The next step was finding a way of measuring (quantifying) the coverage area, beginning with the Chanalyzer software used together with the Wi-Spy DBx hardware [14]. The results obtained after processing were not used in the making of the radio map because the physical location of the measurements could not be determined.

After thorough documentation, we used two software applications, the purpose of this choice being to check the accuracy of the measurement data. The first application, called Ekahau HeatMapper, requires a plan of the area or the building where the measurements will take place. The software generates a radio map from repeated measurements of the signal power at different points, in the end being capable of recognizing the surrounding APs as well as their coverage [15].
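HeatMapper's exact interpolation method is not documented publicly; as an illustration of the idea, a minimal inverse-distance-weighted interpolation of RSSI samples into a radio map (our sketch, with made-up sample coordinates) could look like this:

import numpy as np

def idw_radio_map(samples, grid_x, grid_y, power=2.0):
    """Interpolate (x, y, rssi_dBm) samples onto a rectangular grid using
    inverse distance weighting - a rough stand-in for what a site-survey
    tool computes from a measurement walk-through."""
    xs, ys, rssi = (np.array(c, dtype=float) for c in zip(*samples))
    gx, gy = np.meshgrid(grid_x, grid_y)
    d2 = (gx[..., None] - xs) ** 2 + (gy[..., None] - ys) ** 2
    w = 1.0 / np.maximum(d2, 1e-9) ** (power / 2)   # avoid division by zero
    return (w * rssi).sum(-1) / w.sum(-1)

# Example: three measurement points on a 10 m x 10 m floor plan.
heat = idw_radio_map([(1, 1, -45), (8, 2, -60), (5, 9, -72)],
                     np.linspace(0, 10, 50), np.linspace(0, 10, 50))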

The second tool used, Wi-Fi Speed Test, offers information on the quality of the radio connection from the point of view of the transfer speed to and from the user. A notebook featuring two network interfaces (one internal and one external, connected to a USB port) was used to measure and monitor the radio interface and the transfer rates, respectively.
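The internals of the speed-test tool are not described here; conceptually, the downlink measurement amounts to timing a bulk transfer, as in the following sketch (ours; the URL is a placeholder for a large test file on a local server):

import time
import urllib.request

def measure_download_mbit(url, duration=5.0, chunk=64 * 1024):
    """Stream a large file for a few seconds and report the average
    downlink throughput in Mbit/s."""
    total, start = 0, time.monotonic()
    with urllib.request.urlopen(url) as resp:
        while time.monotonic() - start < duration:
            data = resp.read(chunk)
            if not data:                     # end of file reached
                break
            total += len(data)
    elapsed = time.monotonic() - start
    return total * 8 / elapsed / 1e6

# e.g. measure_download_mbit("http://testserver.local/100MB.bin")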

Once the measuring instruments were established, we decided to choose two relevant areas in which to make the preliminary analysis, more precisely the 4th floor of building B and the 3rd floor of building A, due to their specific constraints and problems: (1) building B has many small laboratories and separating walls and (2) building A has a long corridor with variable transfer rates and frequent disconnections in some locations because of ARM. We began our analysis by carrying out repeated measurements on the same floor in order to test two aspects:

• the first step was confirming the hypothesis that the more the distance between the sampled points increases, the more inaccurate the results are;

• the second step was about the modification of the emission strength of the APs when these are in each other's proximity.

After carrying out the measurements and generating multiple radio maps in the specified locations, we moved on to determining the upload/download speeds in different locations. We began with the most concentrated areas, in which the radio signal had the highest values, and went towards the periphery of the coverage area of the APs in order to


see how the degradation of the radio signal influences the data transmission speed in the network.

At the end of the process of measuring and testing the UPT EduRoam network, we gathered several results about coverage and transmission speed, which are presented in the next sections.

V. RESULTS AND OPTIMIZATION RECOMMENDATIONS

We collected a large amount of data and extended our research to several buildings over the course of six weeks, but this paper presents only a small part of it, which we considered relevant.

On the 4th floor of building B we carried out four repeated tests to find out whether the number of samples influences the way the radio coverage is rendered. After the measurements, we arrived at the conclusion that increased sampling is necessary (we must take into account as many points of the building as possible) in order to obtain precise results, as can be observed in Fig. 2 and 3.

At the same time, to confirm the credibility of the results, we decided to increase the measured area. Therefore, besides the communal spaces on the 4th floor, we extended our measurements to the laboratories on the same floor in order to take the separating walls into account (Fig. 4). As can be seen, there is no significant change in the radio coverage, which led us to believe that it is sufficient to follow a measuring track only in the communal spaces, and Ekahau HeatMapper will generate a radio assessment extended to the laboratories and classrooms. However, this assumption is true only in the case of building B, due to the internal walls surrounding the main lobby where the APs have been installed.

Fig. 2. Radio coverage with a reduced number of samples.

Fig. 3. Radio coverage with an increased number of samples.

Once the 4th floor radio map was realized for one active AP, we activated the APs in its neighborhood. (The nearby APs had been turned off in order to observe the signal strength of one single radio device.) In the second scenario we analyzed how nearby APs can influence each other. The cumulative radio map of the APs belonging to our network that emit in that area is presented in Fig. 5. Analyzing the radio coverage map, we confirmed the zones of building B (labs at every floor) that do not have WLAN access – the labs on the left and bottom right. Furthermore, foreign and unofficial hotspots interfering with the EduRoam ones were identified. These APs limit the radio coverage of our APs, overcrowding the radio spectrum in the area.

The next step was monitoring, at the same time, the transmission speed in the coverage area of the AP. The speeds were of minimum 4.35 Mbit/s and maximum 12.7 Mbit/s, as can be seen in Fig. 6. The transfer rates decrease together with the received radio signal strength of the APs.

Fig. 4. Radio coverage of the extended measurement area.

Fig. 5. Cumulative radio coverage of the extended measurement area.


Fig. 6. Transmission speeds in the 4th floor coverage area.

The proposed optimization solution for increasing the radio coverage of the network, considering the previous results, is the following: bringing in 2 additional new APs and fixing them in each classroom access hall, more precisely at the centre of the hall, and relocating the AP which is currently at the centre of one of the halls of the 6th floor, as can be seen in Fig. 7.

After validating the optimization hypothesis, the result was an increased radio coverage on the 4th floor of building B, in the classrooms as well as in the common spaces mostly used by students. Figure 8 presents the radio coverage after applying the optimization process.

After the measurements taken on the 3rd floor of building A, 2 APs of the EduRoam network were identified, having the radio coverage presented in Figure 9. Also, after tests conducted to establish the transmission speed in the coverage area, we obtained a maximum transfer rate of 10.24 Mbit/s and a minimum of 5.25 Mbit/s. A problem observed during the transfer speed test was the connection loss at the border between the two APs, due to their transmission power adaptation mechanism (ARM).

Fig. 7. Optimization solution for radio coverage at the 4th floor of building B.

Fig. 8. Radio coverage after optimization.

Fig. 9. Radio coverage area at the 3rd floor of building A and transmission speeds.

The optimization solution we propose in this case is setting the radio coverage area of each AP manually. Once this is modified, the problem of connection loss at the border between the two APs will disappear.

VI. CONCLUSIONS

This study tried to anticipate our users' need to have unrestricted access to the UPT EduRoam Wi-Fi network over an area as large as possible, with a radio connection quality as good as possible and at transfer speeds close to the actual needs of the students and professors of the Politehnica University of Timisoara. In our study we analyzed several patterns of building structures and how the WLAN behaves in these environments. We identified one design and implementation problem (building B) and one intermittent problem (building A). The first problem occurred due to the thickness of the concrete floors. It has been solved by adding one AP on each floor and reorganizing the existing APs accordingly. The second problem occurred at the boundary between two adjacent APs, due to the automatic adaptation algorithms of the WLAN controller. It has been solved temporarily by limiting the transmission power of the two APs and by statically selecting the channels on which they operate.


ACKNOWLEDGEMENTS

This work has been partially supported by the project

HURO//1101/074/1.2.1 – JCBICS-UDUPT – “Joint

Cross-Border Internet Communication System of the

University of Debrecen and Politehnica University of

Timisoara”, 2013-2015.

REFERENCES

[1] ITU statistics 2015, http://www.itu.int/en/ITU-D/Statistics/Documents/statistics/2015/Mobile_cellular_2000-2014.xls, Jul. 2015.
[2] Bain & Company, “Worldwide surge in smartphone and tablet sales revolutionizes online content consumption”, Digital Media Report, http://www.bain.com/about/press/press-releases/worldwide-surge-in-smartphone-and-tablet-sales-revolutionizes-online-content-consumption.aspx, Nov. 2013.
[3] R. Watson, “Understanding the IEEE 802.11ac Wi-Fi Standard – Preparing for the next gen of WLAN”, Whitepaper, http://www.merunetworks.com/collateral/white-papers/wp-ieee-802-11ac-understanding-enterprise-wlan-challenges.pdf, Jul. 2013.
[4] TechTarget, “Wi-Fi definition”, http://searchmobilecomputing.techtarget.com/definition/Wi-Fi, accessed Jun. 2015.
[5] D. Kotz and K. Essien, “Analysis of a campus-wide wireless network”, Wireless Networks Journal, vol. 11, iss. 1-2, pp. 115-133, Jan. 2005 (extension of the MobiCom’02 original paper).
[6] K. Tan, C. McDonald, B. Vance, C. Arackaparambil, S. Bratus, and D. Kotz, “From MAP to DIST: The Evolution of a Large-Scale WLAN Monitoring System”, IEEE Transactions on Mobile Computing, vol. 13, no. 1, pp. 216-229, Jan. 2014.
[7] P. Pechac, M. Klepal, and A. Martinez, “Modeling and Optimization of Heterogeneous Wireless LAN”, Proceedings of the IEEE Vehicular Technology Conference (VTC2004), vol. 6, pp. 4442-4445, Sep. 2004.
[8] K. Connelly, Y. Liu, D. Bulwinkle, A. Miller, and I. Bobbitt, “A Toolkit for Automatically Constructing Outdoor Radio Maps”, Proceedings of the International Conference on Information Technology: Coding and Computing (ITCC 2005), vol. 2, pp. 248-253, Apr. 2005.
[9] T. Henderson, D. Kotz, and I. Abyzov, “The changing usage of a mature campus-wide wireless network”, Proceedings of the 10th Annual International Conference on Mobile Computing and Networking (MobiCom '04), pp. 187-201, Philadelphia, USA, Sept. 2004.
[10] F. Chinchilla, M. Lindsey, and M. Papadopouli, “Analysis of wireless information locality and association patterns in a campus”, Proceedings of the 23rd Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM 2004), vol. 2, pp. 906-917, Hong Kong, China, Mar. 2004.
[11] V. Guillet, “Over the air antenna measurement test-bed to assess and optimize WiFi performance”, Proceedings of the IEEE Conference on Antenna Measurements & Applications (CAMA 2014), pp. 1-4, Antibes Juan-les-Pins, France, Nov. 2014.
[12] A. Patro, S. Govindan, and S. Banerjee, “Observing home wireless experience through WiFi APs”, Proceedings of the 19th Annual International Conference on Mobile Computing & Networking (MobiCom '13), ACM, New York, NY, USA, pp. 339-350, 2013.
[13] Alcatel-Lucent, “AOS-W User Guide - User-Centric Network Components”, AOS-W Version 3.3.2, Jun. 2008.
[14] MetaGeek, “Diagnose with Wi-Spy + Chanalyzer”, http://www.metageek.net/products/wi-spy/, accessed Jan. 2015.
[15] T. Vanhatupa, “Wi-Fi Capacity Analysis for 802.11ac and 802.11n: Theory & Practice”, Whitepaper, http://www.ekahau.com/userData/ekahau/wifi-design/documents/whitepapers/Wi-Fi_Capacity_Analysis_WP.pdf, 2015.


Buletinul Ştiinţific al Universităţii Politehnica Timişoara

TRANSACTIONS on ELECTRONICS and COMMUNICATIONS

Volume 60(74), Issue 1, 2015

Digital Rights Management - Creative Commons

Perspective

Cristina Vasilescu¹, Mihai Oniță²

¹ Faculty of Communication Sciences, Communication, Public Relations and Digital Media, Str. Traian Lalescu Nr. 2a, 300223 Timisoara, Romania, e-mail [email protected]
² Faculty of Electronics and Telecommunications, Communications Dept., Bd. V. Parvan 2, 300223 Timisoara, Romania, e-mail [email protected]

Abstract - This paper addresses an area with a significant development in recent years: Digital Rights Management (DRM). Such copyright protection can be applied to several types of digital material, such as images, audio recordings, videos, and text. More specifically, we present Creative Commons (CC) technology as an alternative to classical DRM. We discuss the layers and types of a CC license, and we include a case study of the most popular platforms under CC license. We make some recommendations and draw some conclusions.

Keywords: DRM, Creative Commons, Public License,

CC platform, video, audio, text

I. INTRODUCTION

According to the Romanian Copyright Office, copyright is a legal term that recognizes the rights of creators of literary, scientific or any other works of intellectual creation. Digital Rights Management (DRM) concerns the intellectual property rights that authors have over their creations. By creation, researchers refer to any material: photos, audio recordings, videos, written materials (text), etc. These rights represent a method of protection recognized by law, and they apply to everyone, regardless of status, education, race or religion [1]. The Romanian law, for example, gives the author the right to authorize or prohibit (quoted from the Law) [1]:

• Reproduction of work, distribution of work;

• Commercialization of copies with author approval;

• Renting work, loan work;

• Public communication of the creation directly or

indirectly;

• Broadcasting the work;

• Cable retransmission of the work;

• Making derivative works;

These are the (patrimonial) rights that the law recognizes for the author. Of course, there are some exceptions, but no major ones. Copyright applies to published and unpublished materials, finished or unfinished. The material is recognized and protected by the simple fact of its creation, even if it was not brought to the public's attention [2].

Digital Rights Management is connected with systems that restrict access to the digital media space. It is a technology used by content providers to control the usage and distribution of images, digital music, video or files [3]. DRM fights against the illegal modification, copying, viewing or distribution of digital media materials. Some of the copyright holders argue that DRM limits the large losses caused by the illegal distribution of copyrighted material.

The DRM system is designed to adjust the dissemination of digital information for the following types of digital materials: video, music, audio, electronic books, software, and video games. The technology associated with DRM is intended to provide the seller control over digital content or devices after they have been entrusted to the buyer. Content owners may use different types of DRM to protect their intellectual property [4]:

• Restrictive licensing agreements – control access to digital materials, copyright, public areas, etc.;

• Encryption;

• Scrambling – controls online information access and reproduction (e.g. backup copies for personal use);

• Digital signatures – provide secure content and allow secure transactions;

• Fingerprinting/watermarking – incorporates information about ownership to facilitate the tracking and monitoring of use, copying and distribution [5].

II. ALTERNATIVES

Open licenses provide access to works that are implicitly protected by law, so that they can be reused and redistributed [4]. Creative Commons is a global non-governmental organization dedicated to supporting a free and open Internet, enriched through free knowledge and creative resources that people everywhere can use, distribute and develop [6].


Fig. 1. Layers of a CC license [6]

All Creative Commons licenses have common features. Any license helps creators (referred to here as licensors) retain their copyright while allowing others to copy, distribute or use their content. The licenses incorporate an innovative design with a structure composed of three layers: Legal Code, Human Readable, and Machine Readable (Fig. 1). The organization defines four types of elements that may constitute the required type of license [7]:

Attribution: people using the material must give credit to the author.

Noncommercial: individuals are not allowed to distribute, modify or re-use the material if the purpose is a commercial advantage or monetary compensation.

No derivatives: the material can be distributed, but must be kept in its original form, without modification.

Share Alike: the adapted or modified material must be distributed under the same Creative Commons license.

Fig. 2 reveals the possible combinations of CC licenses:

Fig. 2. Types of CC licenses [4]

Attribution CC BY – this type of license allows others to share, remix, modify or add to the original work as long as credit is given for the original work. This type of license is one of the most permissive services of this kind offered by Creative Commons (CC).

Attribution-NoDerivs CC BY ND – allows redistribution (for commercial or non-commercial purposes) with the condition that the content is not altered.

Attribution-NonCommercial-ShareAlike CC BY NC SA – allows others to remix, add to or remove parts of the material for non-commercial purposes, with the condition that they acknowledge the source and license the new content under the same terms.

Attribution-ShareAlike CC BY SA – offers the opportunity to remix, modify or add to the content (even for commercial usage). The procedure has to be as described above for the other licenses. CC BY SA is often compared to open source software licenses. Any derivation of the original work will carry the same license. This type of license is used by Wikipedia and is recommended for materials that allow improvements or additions or that may be used in similar projects.

Attribution-NonCommercial CC BY NC – refers to non-commercial materials that can be remixed, modified or updated without the need for additional licenses for the resulting content.

Attribution-NonCommercial-NoDerivs CC BY NC ND – the most restrictive of all the licenses, allowing others only to download and share the content as it is, with the condition that they acknowledge the source, without being able to make changes or to use it for commercial purposes [7].
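The six license types above follow from a simple combination rule: Attribution is always present, and NoDerivs excludes ShareAlike (a derivative must be allowed before it can be shared alike), so only six of the eight flag combinations are valid. A small sketch of this logic (ours, for illustration only):

def cc_license(nc=False, nd=False, sa=False):
    """Compose a CC license code from the element flags described above."""
    if nd and sa:
        raise ValueError("NoDerivs and ShareAlike cannot be combined")
    parts = ["BY"]
    if nc:
        parts.append("NC")
    if nd:
        parts.append("ND")
    if sa:
        parts.append("SA")
    return "CC " + "-".join(parts)

print(cc_license())                   # CC BY
print(cc_license(nc=True, sa=True))   # CC BY-NC-SA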

III. CASE STUDY - CC LICENSED PLATFORMS

There is a series of platforms and online applications that offer collections of images, music, videos and documents that can be reused under certain copyright-related restrictions. These can be divided into four categories, namely: online image databases, online databases for audio-video materials, online text databases, and online multimedia search applications. In the current study, we have identified those under Creative Commons (CC), catalogued them with their Alexa ranking and briefly described them.

Table 1

Application          Domain           Alexa Rank
Flickr               Images           130
Google Images        Images           2.587.437
Pixabay              Images           1.040
Fotopedia            Images           169.411
Open Clipart         Images           18.964
Instagram            Images           34
Kepguru              Images           265.400
Geograph             Images           57.189
Creativity 103       Images           349.467
DeviantArt           Images           160
Jamendo              Audio-video      20.233
ccMixter             Audio            62.954
Freesound            Audio            12.868
SoundCloud           Audio            176
Tribe of Noise       Audio            1.436.736
Europeana            Audio-video      53.996
YouTube              Audio-video      3
BlipTv               Audio-video      14.975
Vimeo                Audio-video      172
Wisdom Commons       Text             447.235
Travellers Point     Text             39.525
IntraText            Text             336.002
Creative Commons     General content  3.899
Internet Archive     General content  234
Freebase             General content  1.740.431
Wikipedia Commons    General content  207

A. Images

Flickr, www.flickr.com, is a site that hosts photos and videos. It enjoys great popularity among bloggers, who store a lot of pictures there for later use and distribution. It can be used from a mobile phone or from a computer [8].

Google Images, https://images.google.com, is a search and storage platform for images that allows users to search the Web for image content. The keywords for the image search are based on the image's file name. When an image is sought, a thumbnail is displayed. When the user accesses the image, it is displayed in a box over the website it belongs to. The user can close the image and continue browsing the web, or view the full image in various sizes [9].

Pixabay, http://pixabay.com, is a site that provides access to a database of high-quality images under free licenses. The images can be distributed and used without any restriction because they are published under Creative Commons CC0, dedicated to the public domain. Images can be copied, modified, distributed, and even used for commercial purposes without the need for permission and without having to pay for them. There is still the possibility that what is shown in these pictures is under trademark protection or private rights [15].

Fotopedia, http://www.fotopedia.com, was created by five former Apple employees and represents a database of images from photographers and authors who have entered a form of cooperation. The collaborators' names carry a hyperlink leading directly to their personal websites, where one can find entire galleries of high-quality pictures on various topics from around the world. Unfortunately, in July 2014, Fotopedia's management announced its cessation, asking users to store their data on personal computers because otherwise they would lose all the materials stored on the company's servers.

Open Clipart, https://openclipart.org, is a digital media community that stores vector clip art creations under a free license. The project was started in early 2004 by the Inkscape developers, who desired to collect specimens of flags from around the world. It had a positive development, and therefore its objectives were extended to generic clipart.

Instagram, www.instagram.com, is a fun and different way to share one's life with friends through a series of images. It was created from the desire to allow the sharing of life events through images as close as possible to the time they occur. The application was named from a combination of two words: instant and telegram.

Kepguru, http://kepguru.hu, is an online application launched in Hungary that became very popular. To upload images, an email address, a username, a password, and the user's consent to the rules imposed by the developers are required.

Geograph, www.geograph.org.uk, at the moment of its launch had the main goal of collecting, publishing, organizing and archiving information and images representative of Great Britain, Ireland and the Isle of Man. Through this website, access was created to a geographic database freely available to the public. All photographic observations are registered under a Creative Commons Attribution-ShareAlike license, granting those who access the site the right to use the materials for any purpose, as long as credit is given to the copyright holder and derivative works are published under the same license.

Creativity 103, http://creativity103.com, is a source of photographic materials with all sorts of patterns and textures, unusual and abstract, all available for free under Creative Commons licenses. It was released in 2001 due to the lack of sites for people who wanted to use textures and backgrounds in their projects. The platform currently contains more than 2500 files and 6 GB of free photos. The downloads are designed to be used directly in drawings, as layer textures, or as a source of inspiration and ideas for further development.

DeviantArt, www.deviantart.com, is described in Section IV.

B. Audio-video

Jamendo, www.jamendo.com is a music website and

an open community of music authors. It is an

economic model that allows free music downloads for

Internet users while providing revenue opportunities

for artists through commercial usage [11]. The name

"Jamendo" comes from the fusion of two musical

terms, i.e., "jam session" and "crescendo".

ccMixter, http://ccmixter.org, is a website that offers remixed music under Creative Commons licenses. It provides the possibility to download and listen to any type of music anywhere, anytime and with anyone. Some songs may have certain restrictions, depending on the applied licenses. The site supports popular formats like MP3, WMV, OGG and others. Those who wish to upload audio material to this site are advised to archive their materials before sending them.

Freesound, www.freesound.org, aims to create a database of audio snippets, samples, and recordings provided with Creative Commons licenses that allow their reuse. It provides new ways to access the materials by browsing with keywords, and allows uploading and downloading sounds to and from the database under the same Creative Commons license; it also offers the ability to interact with other sound artists.

SoundCloud, https://soundcloud.com, is the largest social music platform in the world, where any user can create sounds and share them. Recording and uploading sounds to this platform allows users to share them easily, either privately with friends or publicly on blogs, websites, and social networks. Also, sound creators can use the platform to receive detailed statistics and feedback from the SoundCloud community. It can be easily accessed via smartphone applications for iPhone and Android.

Tribe of Noise, www.tribeofnoise.com, is an ever-growing community that at this moment has 25,000 artists from 185 countries. It connects amateur musicians with professionals from the media and with enterprises worldwide that need music with all rights included. Independent artists can preserve their rights and, at the same time, take advantage of the best collective business deals.

Europeana, www.europeana.eu is an Internet portal

that acts as an interface for books, paintings, films, art

objects and archival records that have been digitized

in Europe. These data, stored at a single Internet address, allow users to explore Europe's cultural and scientific heritage from early prehistory until today [12].

YouTube, www.youtube.com, is a platform that allows a large number of people to discover, watch and share videos. It provides a forum for people to connect and inform, but also to inspire others. One can find videos, TV clips, music videos, and other content such as video blogs, short original videos, and educational videos. Access to this content is free and possible from any device, as long as there is an Internet connection [13].

BlipTv, www.blip.tv, belongs to Maker Studios. It develops, produces and distributes the best original web series, from well-known productions to potentially successful ones. It provides users free access to a variety of materials of various types, such as drama, comedy, arts, sports and other shows, and it facilitates searching with the help of keywords. Since it was launched in 2005, BlipTv has turned into the largest platform for digital videos in the world, reaching hundreds of millions of views per month.

Vimeo, www.vimeo.com, was released in November 2004 by a group of filmmakers who wanted to share their creations and special moments of their lives with the whole world. As time passed, more and more people discovered the usefulness of this site and helped build a community that supports people with a wide range of passions. It is possible to upload videos from all categories, but since July 2008 the site management does not allow the upload of video game tutorials, one reason being their extremely large size.

C. Text

Wisdom Commons, www.wisdomcommons.org is an interactive website containing a collection of over 3,000 poems, fables, essays and more that can be used without restrictions. It is a place to find and discuss the virtues of life that are considered important, such as generosity, compassion or courage. As a user or member, you can search for or insert quotes, sayings, meditations, stories or essays from all over the world.

Travellerspoint, www.travellerspoint.com is one of the largest and most active online travel communities, with members representing every country in the world. This platform is designed for people seeking guidance before traveling, or for people who cannot decide on a destination for their holiday. It hosts more than 30,000 blogs sharing over 175,000 stories, and more than 1.4 million posted photos.

Intratext, www.intratext.com is an online library managed by experts, publishing works with great accuracy and scientific precision. It contains over 12 million written materials dating from 900 BC to the present. A large share of the materials is licensed under Creative Commons Attribution-NonCommercial-ShareAlike, allowing others to modify, remove from or add to a work (for non-commercial purposes) on the condition that they acknowledge the source and license the new content under the same terms.

D. General content

Creative Commons, www.creativecommons.org is designed to ease the process of searching for materials published on the Internet under free licenses and, at the same time, to link the existing platforms through a single interface. This site is not a search engine but a platform that provides access to other platforms, such as the ones presented above in sections A, B, and C.

Internet Archive, www.archive.org is an online library whose main aim is to provide researchers, historians, students, people with disabilities and the general public with permanent access to historical collections of all types of materials that exist in digital format. Currently, the Archive includes text, audio, moving images, and software, as well as archived web pages, in its collections, and provides specialized services for people with disabilities and the blind.

Freebase, www.freebase.com was launched as a community-powered search engine for all kinds of materials under free licenses. It contains approximately 20 million subjects. Most of the items are related to several categories, such as people, places, books, movies, etc. Therefore, when searching for a specific title, it might be found in many categories and topics at the same time. As of March 31, 2015, the platform became read-only, meaning that materials no longer accept additions or modifications of any type.

Wikimedia Commons, https://commons.wikimedia.org is an online repository for images, sounds, and other media files. This repository is not created, maintained and developed by specialists, but by volunteers who enjoy collecting and archiving multimedia content. Materials found on this site can be used by anyone who has Internet access, whether or not they possess a user account [14].

IV. TUTORIAL

We developed a tutorial for uploading images to Deviant Art. The results can be followed at http://mihai.cm.upt.ro/projects/atracting/tutorial/DeviantArt and consist of the following steps:
• Creating an account;
• Setting up the profile;
• Submitting a photo or a collection;
• Setting the resolution, watermark, tagging and Creative Commons characteristics;
• Uploading;
• Results: an image with the important metadata displayed and with the characteristics established in the previous steps.

Fig. 3. Deviant Art

The platform has a free account version, but after creating the account, the site offers the opportunity to buy a "premium membership", which provides 10 GB of storage space compared to 2 GB for the classic one. The upgrade to premium can be paid monthly or yearly, the price being $2.49 per month or $29.95 per year. It is a platform that gives artists and art lovers the opportunity to interact with each other in different ways. The application's developers support the movement for the liberation of creative expression, so access is unlimited, allowing any user to shape the cultural context in which art is created, discovered and shared. From August 2000 until March 2013, the site registered over 25 million members and over 36 million visitors.

V. CONCLUSIONS

Digital media is part of everyone's life, as it is the quickest form of information dissemination, yet instant access to a huge volume of information has both positive and negative effects: positive because information can travel the world in just minutes, and negative because it is very difficult to monitor such a large volume of content. The described platforms represent just a part of what the Internet has to offer in terms of collections of copyrighted materials. The current paper is the result of a first approach to the world of these kinds of applications. Enforcing copyright for materials created in digital media should be as high a priority as enforcing copyright for materials that come from traditional media. Because of the evolution of digital media and the Internet, widespread and free, or almost free, distribution of copyrighted works has become possible. Creative Commons developed and made available several easy-to-use copyright licenses, known as Creative Commons licenses (CC licenses). They help content creators make their materials available for others to access and reuse, or reserve their rights completely.

ACKNOWLEDGEMENTS

This work was partially supported by the strategic

grant POSDRU/159/1.5/S/137070 (2014) of the

Ministry of National Education, Romania, co-

financed by the European Social Fund – Investing in

People, within the Sectoral Operational Programme

Human Resources Development 2007-2013.

REFERENCES

[1] B. Manolea, The Eighth Law Concerning Copyrights, http://www.legi-internet.ro/legislatie-itc/drept-de-autor/legea-dreptului-de-autor.html#c145, accessed August 2014.
[2] Free Software Federation Europe, "DRM - The Strange, Broken World of Digital Rights Management", EDRi Papers, Issue 04, http://www.edri.org/files/2012EDRiPapers/DRM.pdf, accessed August 2014.
[3] A. Russ, "Digital Rights Management Overview", SANS Institute InfoSec Reading Room, Security Essentials v1.2e, July 2001.
[4] B. Hansen, D. Stith, and L. Tesdell, "Plagiarism: What's The Big Deal?", Business Communication Quarterly, Minnesota State University, Mankato, vol. 74, no. 2, June 2011, pp. 188-191.
[5] E. Thomas and K. Sassi, "An Ethical Dilemma: Talking about Plagiarism and Academic Integrity in the Digital Age", The English Journal, vol. 100, no. 6, July 2011, pp. 47-53.
[6] Creative Commons, About Creative Commons, http://creativecommons.org, accessed November 2014.
[7] Creative Commons, Constituting Elements of Creative Commons Licenses, http://creativecommons.org.nz/licences/licences-explained, accessed November 2014.
[8] Flickr, What is Flickr, https://www.flickr.com/about, accessed September 2014.
[9] University of Melbourne, Finding Creative Commons Images Using Google, http://www.unimelb.edu.au/copyright/information/guides/googleimagesblue.pdf, accessed September 2014.
[10] Pixabay, Free High Quality Images, http://pixabay.com, accessed September 2014.
[11] Jamendo, About Jamendo, https://www.jamendo.com/en, accessed September 2014.
[12] Europeana, About europeana.eu, www.europeana.eu, accessed September 2014.
[13] YouTube, Creative Commons on YouTube, https://www.youtube.com/user/creativecommons, accessed September 2014.
[14] Wikipedia, Wikimedia Commons, http://commons.wikimedia.org/wiki/Main_Page, accessed September 2014.
[15] University of Ottawa, Beware of Plagiarism! It's Easy, It's Tempting ... But It Can Be Very Costly!, www.uOttawa.ca/plagiarism.pdf, accessed September 2014.
[16] University of Oklahoma, Nine Things You Should Already Know About Plagiarism, http://integrity.ou.edu/files/nine_things_you_should_know.pdf, accessed September 2014.
[17] Plagiarism.org, Definitions and Types of Plagiarism, http://www.plagiarism.org, accessed September 2014.
[18] N. Helberger and N. Dufft, "Digital Rights Management and Consumer Acceptability: A Multi-Disciplinary Discussion of Consumer Concerns and Expectations", State-of-the-Art Report, INDICARE project.


Buletinul Ştiinţific al Universităţii Politehnica Timişoara

TRANSACTIONS on ELECTRONICS and COMMUNICATIONS

Volume 60(74), Issue 1, 2015

The detection of moving objects in video by background

subtraction using Dempster-Shafer theory

Oana Munteanu 1,2, Thierry Bouwmans 2, El-Hadi Zahzah 2, Radu Vasiu 1
1 Faculty of Electronics and Telecommunications, Multimedia Dept., Bd. V. Parvan 2, 300223 Timisoara, Romania, e-mail: [email protected], [email protected]
2 Mathematics, Image and Applications Laboratory, University of La Rochelle, Avenue Michel Crepeau, 17042 La Rochelle, France, e-mail: [email protected], [email protected]

Abstract – Detection of moving objects has been widely

used in many computer vision applications like video

surveillance, multimedia applications, optical motion

capture and video object segmentation. The key steps in

detecting the moving objects are the background

subtraction and the foreground detection. To handle

these processes, we need to classify the corresponding

pixels of the current image as background or

foreground. This paper describes the background

subtraction and the foreground detection within the

context of Dempster-Shafer theory which better

represents uncertainty by considering the situations of

risk and ignorance. The proposed method addresses modeling within the Dempster-Shafer theory of evidence by representing the information extracted from the current image as measures of belief. The mass functions are computed from the probabilities assigned to each class and are combined with the Dempster-Shafer rule of combination; the maximum of the mass functions is used for decision-making. The proposed method has been tested on several datasets, showing better performance than other fuzzy approaches based on the Sugeno and Choquet integrals, and has proved its robustness.

Keywords: Dempster-Shafer theory of evidence,

background subtraction, foreground detection,

uncertainty information, data fusion, decision.

I. INTRODUCTION

Background subtraction techniques have been used in

many applications in which the background is not

static, for instance in video surveillance [1],

multimedia applications [2], optical motion capture

[3], video object segmentation [4]. These techniques are based on different methods for subtracting the background and properly managing the background model; several surveys can be found in [5][6][7].

The basic operation needed is the separation of the

moving objects called ”foreground” from the static

information known as ”background” [5]. Background

subtraction is the particular case when: 1) one image

is the background image and the other one is the

current image, and 2) the changes are due to moving

objects. Therefore, in this paper we focus on the

detection of moving objects in videos. The idea of

background subtraction is to find the difference

between the current image and the corresponding

reference of the background model. Such a comparison is made by using color and texture features to compute similarity measures between pixels in the current and background images.

The main contribution of this paper is to propose a foreground-background segmentation algorithm using a Dempster-Shafer fusion approach. Each pixel is characterized by its mass functions, which define each corresponding class. The final segmentation is carried out by assigning each pixel to the maximum belief assumption of its corresponding class. This paper is organised as follows. In Section II we present research that has had an important impact on the background subtraction area, together with some recent surveys regarding the applicability of the Dempster-Shafer theory to image segmentation. Section III gives a brief review of background subtraction techniques, and some fundamental concepts of the Dempster-Shafer theory of evidence are described in Section IV. Furthermore, the description of our system is given in Section V, and in Section VI we discuss the similarity measures. A brief explanation of our proposed Dempster-Shafer method is given in Section VII, followed by the experiments in Section VIII. Based on the results obtained, we highlight some relevant conclusions and future improvements in Section IX.

II. RELATED WORK

Much research on background subtraction can be found in the literature [5][8][9]. In [5], Bouwmans gave a complete overview of the concepts, theories, algorithms and applications regarding both traditional and recent approaches to background modeling for foreground detection. As image segmentation can be performed using fuzzy foreground detection, Zhang and Xu [10] used texture and color features to compute similarity measures between current and background pixels. These similarity measures were aggregated by applying the Sugeno integral, and the moving objects are detected by thresholding the result of the Sugeno integral. El Baf et al. [11] used the same features but applied the Choquet integral instead of the Sugeno approach, proving robustness to shadows and illumination changes. Recently, Azab et al. [12] aggregated three features: color, edge and texture. Fuzzy foreground detection is more robust to illumination changes and shadows than crisp foreground detection.

Several background-foreground segmentation algorithms are available, for example the Background Subtraction Library (BGSLibrary) developed by Sobral [13], which provides a C++ framework including statistical models, clustering models, neural networks and fuzzy models.

The Dempster-Shafer theory of evidence has been successfully applied in many domains [14][15][16], including the image segmentation area [17][18]. Moro et al. [19] introduced an improved foreground-background segmentation algorithm using the Dempster-Shafer theory, providing significant improvements in a complex scenario. Their approach successfully performs background modeling for moving objects that remain stationary for a long time and then start moving again. The Dempster-Shafer theory has also been used in skin detection research [20] as a powerful and flexible framework for representing and handling uncertainties in the available information, overcoming the limitations of the current state-of-the-art methods.
In this paper, we apply the Dempster-Shafer fusion approach to foreground detection by aggregating both color and texture features. The aim is to determine whether our proposed method can perform better than the already applied Sugeno and Choquet fuzzy integrals.

III. BACKGROUND SUBTRACTION: A BRIEF

REVIEW

Several background subtraction methods have been discussed in many articles, proving their efficiency along with their corresponding implementations [13]. The

simplest way of modeling the background is to

consider a background image without any moving

object. Moreover, the background can be affected by

critical changes such as illumination changes,

dynamic backgrounds, objects being introduced or

removed from the scene [5]. To overcome these

issues, the background representation model must be

robust and adaptive.

There are various background representation models that have been developed over time, from traditional to recent ones, such as:

• Basic Background Modeling: The basic way of modeling the background is by using the average [21], the median [22] or a histogram analysis over time [23]. Once the model is computed, the foreground detection can be determined as follows:

$d(I_t(x, y) - B_{t-1}(x, y)) > T$ (1)

where $T$ is a constant threshold, $I_t(x, y)$ the current image and $B_t(x, y)$ the background image at time $t$. If condition (1) is not satisfied, the pixel is classified as background (a short illustrative sketch of this rule is given after this list).

• Statistical Background Modeling: The background representation is modeled using a single Gaussian [24], a Mixture of Gaussians [25][26][27] or Kernel Density Estimation [28][29][30]. Statistical models are used to classify pixels as background or foreground due to their robustness to illumination changes and dynamic backgrounds.

• Fuzzy Models: These models take into consideration the imprecision and uncertainty encountered in the process of background subtraction. The algorithm commonly used is the Gaussian Mixture Model [31], but one drawback is that its parameters are determined using a training sequence which might contain insufficient or noisy data. Combined approaches that aggregate different features, such as color and texture, lead to robust results. Therefore, El Baf et al. [11] fused these two features using the Sugeno and Choquet aggregation integrals, proving that using more than one feature can better overcome the illumination change and shadow issues.
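As a purely illustrative aid, the following NumPy sketch implements the basic modeling rule of equation (1), assuming grayscale frames, a median background and an absolute difference as the distance $d$; the threshold value is arbitrary and not taken from the cited works.

```python
import numpy as np

def basic_foreground_mask(frames, current, T=30):
    """Basic background modeling: the background B is the pixel-wise
    median of a stack of N frames; a pixel of the current image I is
    foreground when d(I - B) > T, as in equation (1)."""
    # frames: array of shape (N, H, W); current: array of shape (H, W)
    background = np.median(frames, axis=0)
    # absolute difference plays the role of the distance d in (1)
    distance = np.abs(current.astype(np.float64) - background)
    return distance > T  # True = foreground, False = background

# Usage (illustrative): mask = basic_foreground_mask(first_frames, frame)
```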

As seen previously, a large variety of background representation models can be used, depending on the critical situations that need to be handled.

IV. DEMPSTER-SHAFER THEORY OF

EVIDENCE: SOME FUNDAMENTALS

The Dempster-Shafer (D-S) theory of evidence was

introduced by Dempster [32] and Shafer [33]. It

provides a unifying framework for representing

uncertainty by taking into consideration the situations

of risk and ignorance. The D-S theory of evidence can

be interpreted as a generalization of probability theory

where probabilities are assigned to sets of possible

events.

In this framework, each piece of information $i$ is characterized by a mass function $m_i$ that maps each subset of the discernment set $\Omega$ to a numerical value in the interval $[0, 1]$. D-S allows the representation of both imprecision and uncertainty through the definition of two functions, belief ($Bel$) and plausibility ($Pl$), both derived from a mass function $m$ [34][32].

Considering the set of classes of interest:

$\Omega = \{C_1, C_2, ..., C_i\}$ (2)

the mass function $m$ is a function from $2^{\Omega}$ onto $[0, 1]$, such that:

$m : 2^{\Omega} \rightarrow [0, 1]$ (3)

$m(\emptyset) = 0, \quad \sum_{A \subseteq \Omega} m(A) = 1$ (4)


A subset $A$ with non-zero mass value is called a focal element. As explained above, the belief and plausibility functions are derived from the mass functions. The belief function for a set $A$ is defined as the sum of all the basic probability assignments of the subsets $B$ of the set of interest $A$ (see equation 5). The plausibility represents the sum of all the basic probability assignments of the sets $B$ that intersect the set of interest $A$ (see equation 6). The belief and plausibility functions satisfy the condition shown in equation (7):

$Bel(A) = \sum_{B \subseteq A} m(B)$ (5)

$Pl(A) = \sum_{B \cap A \neq \emptyset} m(B)$ (6)

$Bel(A) \leq Pl(A)$ (7)

The combination rule is generated by the orthogonal sum, expressed for $n$ sources as:

$(m_1 \oplus m_2 \oplus \dots \oplus m_n)(A) = \dfrac{1}{1-K} \sum_{B_1 \cap B_2 \cap \dots \cap B_n = A} m_1(B_1)\, m_2(B_2) \cdots m_n(B_n)$ (8)

where $A, B_1, B_2, ..., B_n$ are subsets of $\Omega$ and $K$ is the basic probability mass associated with conflict, determined by summing the products of the mass functions of all sets whose intersection is null:

$K = \sum_{B_1 \cap B_2 \cap \dots \cap B_n = \emptyset} m_1(B_1)\, m_2(B_2) \cdots m_n(B_n)$ (9)

The denominator in Dempster's combination rule, $1-K$, is a normalization factor that attributes any probability mass associated with conflict to the null set, so as to ignore the conflict [33].

Note that the combination rule is commutative and associative, but not idempotent or continuous.
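To make the rule concrete, the following Python sketch combines two mass functions over a two-class frame of discernment, with the uncertainty class represented as the full set; the source names and mass values are illustrative assumptions, not taken from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozensets to masses)
    with Dempster's rule: sum products over intersections equal to A,
    then normalize by 1 - K, where K is the conflict mass (eq. 8, 9)."""
    combined, K = {}, 0.0
    for (A, mA), (B, mB) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mA * mB
        else:
            K += mA * mB  # mass falling on the empty set (conflict)
    return {A: v / (1.0 - K) for A, v in combined.items()}

# Illustrative masses on the frame {FG, BG}, with Theta = {FG, BG}:
FG, BG = frozenset({"FG"}), frozenset({"BG"})
Theta = FG | BG
m_color = {FG: 0.6, BG: 0.3, Theta: 0.1}
m_texture = {FG: 0.5, BG: 0.2, Theta: 0.3}
print(dempster_combine(m_color, m_texture))
```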

V. SYSTEM OVERVIEW

The first step of many video analysis systems is the segmentation of foreground objects from the background. This task is very important, since the background subtraction algorithm has to cope with a number of critical situations (e.g., presence of noise, continuous or sudden illumination changes, permanent or temporal variation of background objects).

In the following subsections, we briefly discuss the

fundamental steps that were taken into consideration

when building our system.

A. Background subtraction

The main steps in detecting the background are

illustrated in Fig. 1.

Fig. 1: Diagram of the background management.

a. Background initialization

This first step requires careful exploitation of the frames at the beginning of the sequence. In our case, the background is initialized as the average of the first N video frames, in which objects were present.

b. Background maintenance

An update rule of the background model is required in order to adapt to the changes occurring in the scene over time. The selective maintenance scheme used is:

$B_{t+1}(x, y) = (1-\alpha) B_t(x, y) + \alpha I_{t+1}(x, y)$, if $(x,y)$ is background (10)

$B_{t+1}(x, y) = (1-\beta) B_t(x, y) + \beta I_{t+1}(x, y)$, if $(x,y)$ is foreground (11)

where $B_t(x, y)$ is the background image, $I_{t+1}(x, y)$ is the current image, $\alpha$ is the learning rate which determines the speed of adaptation to illumination changes, and $\beta$ is the learning rate which handles the incorporation of motionless foreground objects.
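A minimal NumPy sketch of the selective maintenance scheme in equations (10)-(11), assuming a binary foreground mask from the previous detection step; the values of alpha and beta are illustrative, not those used in the experiments.

```python
import numpy as np

def update_background(B_t, I_next, fg_mask, alpha=0.05, beta=0.005):
    """Selective background maintenance (equations 10 and 11):
    background pixels adapt with learning rate alpha, foreground
    pixels with the slower rate beta."""
    B_t = B_t.astype(np.float64)
    I_next = I_next.astype(np.float64)
    B_bg = (1.0 - alpha) * B_t + alpha * I_next   # rule (10)
    B_fg = (1.0 - beta) * B_t + beta * I_next     # rule (11)
    return np.where(fg_mask, B_fg, B_bg)
```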

c. Foreground detection

This step is a classification task and consists of labeling pixels as background or foreground. Our foreground detection process is shown in Fig. 2. First, we extract color and texture features from the background image B(t) and the current image I(t + 1). Then, the similarity measures are computed for each feature and aggregated by the Dempster-Shafer method. Finally, the background/foreground classification is made by thresholding with the D-S maximum belief assumption.

Fig. 2: Foreground detection process.


B. Color and texture features

The choice of features is an important task due to their different properties, which allow the critical situations to be handled differently. Color features are often very discriminative, but they have several limitations in the presence of illumination changes, camouflage and shadows. Texture is adapted to illumination changes and shadows. Combining several features can lead to even more robust results.

a. Color features

A number of color space comparisons are presented in the literature [35][36]. In foreground detection, the most commonly used color space is RGB, as it is directly available from the sensor or the camera. For building our system, we use the RGB color space. We choose two components according to the relevant information they contain, so as to have the least sensitivity to illumination changes.

b. Texture feature
We use the eXtended CS-LBP (XCS-LBP) texture feature developed by Silva et al. [37]. This texture feature extracts image details by comparing the gray values of pairs of center-symmetric pixels and considering the result as a binary number.
The XCS-LBP mathematical expression is:

$XCS\text{-}LBP_{P,R}(c) = \sum_{i=0}^{(P/2)-1} s\big(g_1(i,c) + g_2(i,c)\big)\, 2^i$ (12)

where $g_1(i,c)$ and $g_2(i,c)$ are given by:

$g_1(i,c) = g_i - g_{i+(P/2)} + g_c$
$g_2(i,c) = (g_i - g_c)(g_{i+(P/2)} - g_c)$ (13)

and the threshold function $s$, which determines the type of local pattern transition, is defined as follows:

$s(x_1 + x_2) = \begin{cases} 1, & \text{if } (x_1 + x_2) \geq 0 \\ 0, & \text{otherwise} \end{cases}$ (14)
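The following sketch is a simplified illustration of the operator of equations (12)-(14) for P = 8 neighbors at radius R = 1, built with wrapped image shifts; it is our own reading of the definition, not the implementation of [37].

```python
import numpy as np

def xcs_lbp(img):
    """XCS-LBP for P=8, R=1 (equations 12-14): for each of the
    P/2 = 4 center-symmetric neighbor pairs (g_i, g_{i+4}), set bit i
    to 1 when g1(i,c) + g2(i,c) >= 0."""
    g = img.astype(np.float64)
    # 8-neighborhood offsets, ordered so that pair i is (i, i+4);
    # neighbors[i][y, x] == g[y+dy, x+dx] (borders wrap in this sketch)
    offs = [(-1, 0), (-1, 1), (0, 1), (1, 1),
            (1, 0), (1, -1), (0, -1), (-1, -1)]
    neighbors = [np.roll(np.roll(g, -dy, axis=0), -dx, axis=1)
                 for dy, dx in offs]
    code = np.zeros_like(g, dtype=np.uint8)
    for i in range(4):
        gi, gj = neighbors[i], neighbors[i + 4]
        g1 = gi - gj + g           # first line of equation (13)
        g2 = (gi - g) * (gj - g)   # second line of equation (13)
        code |= ((g1 + g2 >= 0).astype(np.uint8) << i)
    return code
```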

We therefore perform the fusion of these two features, namely color and texture, by using the Dempster-Shafer theory, which will be described in Section VII.

VI. SIMILARITY MEASURES

Foreground detection is based on the comparison

between the current and the background images. We

propose to detect the foreground by defining a

similarity measure between pixels in the current and

background images.

A. Color similarity measures
When computing the color similarity measure, we consider:

$S_k(x,y) = \begin{cases} \dfrac{I_k^C(x,y)}{I_k^B(x,y)}, & \text{if } I_k^C(x,y) < I_k^B(x,y) \\[4pt] 1, & \text{if } I_k^C(x,y) = I_k^B(x,y) \\[4pt] \dfrac{I_k^B(x,y)}{I_k^C(x,y)}, & \text{if } I_k^C(x,y) > I_k^B(x,y) \end{cases}$ (15)

where $k \in \{1, 2, 3\}$ indexes one of the three color features, and $B$ and $C$ denote the background and the current images at time $t$. If $I_k^B(x,y)$ and $I_k^C(x,y)$ are similar, we assign the value 1; otherwise the measure takes a value between 0 and 1.

B. Texture similarity measures
Based on the same idea, the texture similarity measure $S_T(x, y)$ for the pixel $(x, y)$ is computed as follows:

$S_T(x,y) = \begin{cases} \dfrac{L^C(x,y)}{L^B(x,y)}, & \text{if } L^C(x,y) < L^B(x,y) \\[4pt] 1, & \text{if } L^C(x,y) = L^B(x,y) \\[4pt] \dfrac{L^B(x,y)}{L^C(x,y)}, & \text{if } L^C(x,y) > L^B(x,y) \end{cases}$ (16)

where $L^B(x,y)$ and $L^C(x,y)$ represent the texture of pixel $(x,y)$ in the background and the current images at time $t$. $S_T(x,y)$ is 1 if $L^B(x,y)$ and $L^C(x,y)$ are similar; otherwise $S_T(x,y)$ takes a value between 0 and 1.
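Both piecewise definitions reduce to the ratio of the smaller to the larger of the two feature values; a compact NumPy sketch follows (the small eps guard against division by zero is our addition, and the feature images are assumed non-negative).

```python
import numpy as np

def ratio_similarity(feat_current, feat_background, eps=1e-6):
    """Pixel-wise similarity of equations (15)-(16): 1 when the two
    values are equal, otherwise min/max, giving a value in (0, 1]."""
    c = feat_current.astype(np.float64)
    b = feat_background.astype(np.float64)
    return np.minimum(c, b) / (np.maximum(c, b) + eps)
```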

VII. THE PROPOSED DEMPSTER-SHAFER

ALGORITHM

Another fundamental task in foreground detection is the aggregation of the similarity measures through the Dempster-Shafer theory. Starting from the theoretical concepts discussed in Section IV, we propose the following problem formulation.
Let us consider the discernment set comprising three main classes: $FG$ representing the foreground, $BG$ the background, and $\Theta$ the uncertainty, with $m(\emptyset) = 0$ (see equation 17):

$\Omega = \{\emptyset, FG, BG, \Theta\}$ (17)

An illustrative diagram of the Dempster-Shafer fusion flow is shown in Fig. 3.

For each pixel (x,y), we take into consideration three

sources represented by the two color components of

the RGB color space and the XCS-LBP texture

feature. For each source, we define three hypothetical

mass functions corresponding to the foreground,

background and uncertainty classes.

48

Page 51: Editorial Board - Politehnica University of Timișoara · 2015-09-10 · Buletinul Ştiinţific al Universităţii Politehnica Timişoara TRANSACTIONS on ELECTRONICS and COMMUNICATIONS

Fig. 3: Dempster-Shafer fusion’s framework.

We start by fusing the first two sources (e.g., the two color components), using all the corresponding probabilities assigned to each class. For instance, when fusing the R and G components, we calculate the combination rule for each class as follows:

$m(S_{12})_{FG} = m_{FG}^{R} m_{FG}^{G} + m_{FG}^{R} m_{\Theta}^{G} + m_{\Theta}^{R} m_{FG}^{G}$
$m(S_{12})_{BG} = m_{BG}^{R} m_{BG}^{G} + m_{BG}^{R} m_{\Theta}^{G} + m_{\Theta}^{R} m_{BG}^{G}$
$m(S_{12})_{\Theta} = m_{\Theta}^{R} m_{\Theta}^{G}$ (18)

where the factor of conflict, $K$, is defined as:

$K = m_{FG}^{R} m_{BG}^{G} + m_{BG}^{R} m_{FG}^{G}$ (19)

Then, we determine the next fusion, between the third source $m(S_3)$ and the previous fusion result $m(S_{12})$. The final fusion is represented by the sum of the two fused results, normalized so that the values lie in the $[0, 1]$ interval. We can now define the $[Belief, Plausibility]$ interval, which is computed as follows:

$Bel = M_{FG}$
$Pl = M_{FG} + M_{\Theta}$
$Bel \leq Pl$ (20)

where $M_{FG}$ and $M_{\Theta}$ are the results of the final fusion describing the foreground and the uncertainty.
Knowing both Belief and Plausibility, we search for the best decision rule by determining which of the hypothesis mass functions are included in this interval, assigning the foreground as follows:

$\text{pixel}(x,y) = \begin{cases} \text{foreground}, & \text{if } m(S_1) + m(S_2) + m(S_3) \leq \max(Bel) \\ \text{background}, & \text{otherwise} \end{cases}$ (21)
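A hypothetical per-pixel sketch of the fusion and decision steps, reusing the dempster_combine helper and the FG, BG and Theta sets from the sketch in Section IV (with the uncertainty class Θ represented as the full set {FG, BG}); the final decision line is only one plausible reading of equation (21).

```python
def classify_pixel(m_r, m_g, m_tex):
    """Fuse three per-pixel sources with Dempster's rule (eq. 18-19)
    and decide from the [Bel, Pl] interval (eq. 20-21)."""
    m12 = dempster_combine(m_r, m_g)    # fuse the two color components
    m = dempster_combine(m12, m_tex)    # then fuse with the texture source
    bel = m.get(FG, 0.0)                # Bel = M_FG (equation 20)
    pl = bel + m.get(Theta, 0.0)        # Pl = M_FG + M_Theta, so Bel <= Pl
    # one plausible reading of equation (21): keep the pixel as
    # foreground when the fused foreground belief dominates
    return "foreground" if bel >= m.get(BG, 0.0) else "background"

# Illustrative per-pixel masses derived from similarity values:
print(classify_pixel({FG: 0.7, BG: 0.2, Theta: 0.1},
                     {FG: 0.6, BG: 0.3, Theta: 0.1},
                     {FG: 0.5, BG: 0.3, Theta: 0.2}))
```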

After all these steps, we can proceed to extract the foreground mask; the obtained results are shown in the following section.

VIII. EXPERIMENTS

The proposed Dempster-Shafer method has been evaluated on several datasets: the first one is the Aquateque dataset3, used in a multimedia application [2], where the output images are 384×288 pixels; the second dataset4 is provided by the Scene Background Modeling and Initialization (SBMI2015) workshop. For each dataset, we provide a comparison with other approaches, namely the Sugeno and Choquet fuzzy integrals [11], whose thresholds are optimized to give the best results.

A. Aquateque dataset

This dataset consists of video sequences presenting fishes in a tank. The goal is to detect the fishes and identify them. In these video sequences, there are several critical local or global situations, such as the illumination changes due to the ambient light, the spotlights which light the tank from the inside and from the outside, the movement of the water due to the fish, and the continuous renewal of the water. Furthermore, the aquarium environment (e.g., rocks, algae) and the texture of the fishes amplify the consequences of the brightness variations.

Fig. 4 illustrates the experiments performed on the

sequence #201.

(a) Original image #201 (b) XCS-LBP texture

(c) Ground truth (d) Sugeno

(e) Choquet (f) Proposed D-S

Fig. 4: Aquateque dataset.

3 sites.google.com/site/thierrybouwmans/recherche---aqu-theque-dataset
4 sbmi2015.na.icar.cnr.it


As shown above, we compare the ideal result given by the ground truth (see 4c) with the results obtained by applying the two existing approaches (see 4d and 4e) and our proposed Dempster-Shafer method (see 4f). As can be observed, the proposed method gives better results than the other two approaches.

Furthermore, we compute a quantitative evaluation using the similarity measure also employed in [11]. Considering $A$ a detected region and $B$ the corresponding ground truth, the similarity measure between $A$ and $B$ is defined as:

$S(A,B) = \dfrac{A \cap B}{A \cup B}$ (22)

If $A$ and $B$ are similar, $S(A,B)$ approaches 1; otherwise it approaches 0. Table 1 shows the similarity values obtained when applying the three methods to sequence #201 of the Aquateque dataset. As can be seen, the best result is given by our proposed method; thus, foreground pixels have been better mapped by the D-S method than by the other two approaches.

Table 1: Similarity Measure

Method Sugeno Choquet D-S

S(A,B) 0.166 0.159 0.205
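Reading equation (22) as the intersection-over-union of the detected and ground-truth binary masks, a minimal NumPy version might look as follows:

```python
import numpy as np

def mask_similarity(A, B):
    """Similarity measure of equation (22): |A ∩ B| / |A ∪ B| for
    binary foreground masks A (detection) and B (ground truth)."""
    A, B = A.astype(bool), B.astype(bool)
    union = np.logical_or(A, B).sum()
    return np.logical_and(A, B).sum() / union if union else 1.0
```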

To further estimate the performance of each algorithm, we show in Table 2 the results obtained for Precision, Recall and F-measure. To that end, we compute each of the measures as follows:

$Precision = \dfrac{TP}{TP + FP}$

$Recall = \dfrac{TP}{TP + FN}$

$F\text{-}measure = \dfrac{2 \cdot Precision \cdot Recall}{Precision + Recall}$ (23)

where $TP$ is the total number of true positives, $FP$ the total number of false positives, and $FN$ the total number of false negatives.
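These counts can be accumulated directly from the binary masks; a short illustrative sketch:

```python
import numpy as np

def precision_recall_f(detected, truth):
    """Precision, Recall and F-measure of equation (23), computed
    from binary detection and ground-truth masks."""
    detected, truth = detected.astype(bool), truth.astype(bool)
    tp = np.logical_and(detected, truth).sum()
    fp = np.logical_and(detected, ~truth).sum()
    fn = np.logical_and(~detected, truth).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f
```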

As the F-measure lies within the [0, 1] interval, the higher the F-measure, the better the performance of the algorithm at correctly detecting pixels as foreground. Therefore, we can notice that our proposed method gives better results than the Sugeno and Choquet integrals.

Table 2: Performance Measures

Method Sugeno Choquet D-S

Precision 0.811 0.816 0.799

Recall 0.173 0.164 0.216

F-measure 0.285 0.274 0.340

B. SBMI2015 datasets

Furthermore, we test our proposed Dempster-Shafer

method on another datasets provided by SBMI2015.

These datasets consists of indoor and outdoor

sequences in video surveillance context. The goal is to

detect moving persons and/or vehicles. We also

provide the comparison of our proposed algorithm

with respect to the Sugeno and Choquet approaches.

Once again, we illustrate that the use of our proposed

method gives more robustness in the foreground-

detection segmentation.

(a) Original image #295 (b) Sugeno

(c) Choquet (d) Proposed D-S

Fig. 5: Hall&Monitor dataset.

(a) Original image #257 (b) Sugeno

(c) Choquet (d) Proposed D-S

Fig. 6: CaVignal dataset.

(a) Original image #499 (b) Sugeno

(c) Choquet (d) Proposed D-S

Fig. 7: HighwayII dataset.


IX. CONCLUSION

In this paper, we have presented a foreground detection method using the Dempster-Shafer fusion approach for aggregating RGB color space and XCS-LBP texture features. The experiments using the Aquateque and SBMI2015 datasets show more robustness to shadows and illumination changes than the other two methods. Furthermore, the quantitative evaluation reflects that our proposed method gives better results than the use of the Choquet and Sugeno fuzzy integrals.
Some directions for future work include the expansion of the fusion and the comparison of other color and texture features. Further research also consists of performing more quantitative evaluations on other datasets, proving the Dempster-Shafer method's efficiency.

ACKNOWLEDGEMENTS

We thank the PhD students Andrews Sobral and Carolina Silva for their support during this research.

REFERENCES

[1] S. Brutzer, B. Hoferlin, and G. Heidemann. “Evaluation of

Background Subtraction Techniques for Video Surveillance”.

Computer Vision and Pattern Recognition (CVPR), pp. 1937 - 1944, June 2011.

[2] F. El Baf, and T. Bouwmans. “Comparison of background

subtraction methods for a multimedia learning space”. International

Conference on Signal Processing and Multimedia, July 2007.

[3] D. D. Doyle, A. L. Jennings, and J. T. Black. “Optical flow

background subtraction for real-time PTZ camera object tracking”.

Instrumentation and Measurement Technology Conference

(I2MTC), pp. 37-46, May 2013.

[4] R. S. Basuki, M. A. Soeleman, R. A. Pramunendar, A. F. Yogananti, and C. Supriyanto. “Video object segmentation

applying spectral analysis and background subtraction”. Journal of

Theoretical and Applied Information Technology, pp. 208- 214,

February 2015.

[5] T. Bouwmans. “Traditional and Recent Approaches in

Background Modeling for Foreground Detection: An Overview”. Lab. MIA Univ. La Rochelle France, May 2014.

[6] M. Piccardi. “Background subtraction techniques: a review”. Proceedings of the International Conference on Systems, Man and

Cybernetics, pp. 3199-3204, October 2004.

[7] S. Elhabian, K. El-Sayed, and S. Ahmed. “Moving object

detection on spatial domain using background removal techniques -

State-of-Art”. Recent Patents on Computer Science, vol. 1, no. 1,

pp. 32-54, January 2008.

[8] A. L. Nel, P. E. Robinson, and C. J. F. Reyneke. “Comparison

of background subtraction techniques under sudden illumination

changes”. Conference proceedings (APK Electrical and Electronic Engineering Science), 2014.

[9] A. Sobral, and A. Vacavant. “A comprehensive review of

background subtraction algorithms evaluated with synthetic and

real videos”. Computer Vision and Image Understanding

ELSEVIER, vol. 122, pp. 4–21, May 2014.

[10] H. Zhang, and D. Xu. “Fusing color and texture features for background model”. Third International Conference on Fuzzy

Systems and Knowledge Discovery (FSKD), pp. 887–893,

September 2006.

[11] F. El Baf, T. Bouwmans, and B. Vachon. “Fuzzy Integral for

Moving Object Detection”. Fuzzy Systems, 2008. FUZZ-IEEE

2008 (IEEE World Congress on Computational Intelligence), pp. 1729-1736, June 2008.

[12] M. Azab, H. Shedeed, and A. Hussein. “A new technique for

background modeling and subtraction for motion detection in real-

time videos”. International Conference on Image Processing (ICIP),

pp. 3453–3456, September 2010.
[13] A. Sobral. "BGSLibrary: An OpenCV C++ Background Subtraction Library". IX Workshop de Visao Computacional (WVC), 2013.

[14] J. Ruo-yu, Y. Jing-feng, L. Qi-ming, and C. Yan. “The

application of Dempster-Shafer theory in soft information

management of construction projects”. International Conference on Management Science & Engineering (ICMSE), pp. 1814 - 1819,

Aug. 2014.

[15] M. Khazaee, H. Ahmadi, M. Omid, A. Moosavian, and M.

Khazaee. “Classifier fusion of vibration and acoustic signals for

fault diagnosis and classification of planetary gears based on

Dempster–Shafer evidence theory”. Journal of Process Mechanical Engineering, vol. 228, no.1, pp. 21-32, February 2014.

[16] Y. Wang, Y. Dai, Y. Chen, and F. Meng. "The Evidential

Reasoning Approach to Medical Diagnosis using Intuitionistic

Fuzzy Dempster-Shafer Theory”. International Journal of

Computational Intelligence Systems, vol. 8, pp. 75-94, September

2014.

[17] S. B. Chaabane, M. Sayadi, F. Fnaiech, E. Brassart.

"Relevance of the Dempster-Shafer Evidence Theory for Image

Segmentation”. 2009 International Conference on Signals, Circuits and Systems, pp. 1 - 4, 2009.

[18] J. Ni, J. Luo, and W. Liu. “3D Palmprint Recognition Using

Dempster-Shafer Fusion Theory”. Journal of Sensors, vol. 2015,

article ID 252086, 7 pages, January 2015.

[19] A. Moro, E. Mumolo, M. Nolich, K. Terabayashi, and K.

Umeda. “Improved Foreground-Background Segmentation using

Dempster-Shafer Fusion”. 8th International Symposium on Image

and Signal Processing and Analysis (ISPA 2013), pp. 4-6, September 2013.

[20] M. Shoyaib, M. Abdullah-Al-Wadud, and O. Chae. “A skin

detection approach based on the Dempster–Shafer theory of

evidence”. International Journal of Approximate Reasoning

ELSEVIER, vol. 53, pp. 636–659, January 2012.

[21] B. Lee, and M. Hedley. “Background Estimation for Video

Surveillance”. Image and Vision Computing New Zealand, pp. 315-

320, 2002.

[22] N. McFarlane, and C. Schofield. “Segmentation and tracking of piglets in images”. British Machine Vision and Applications, pp.

187-193, 1995.

[23] J. Zheng, and Y. Wang. “Extracting Roadway Background

Image: A mode based approach”. Transportation Research Board,

2006.

[24] M. Zhao, N. Li, and C. Chen. “Robust automatic video object segmentation technique”. IEEE International Conference on Image

Processing (ICIP), September 2002.

[25] C. Stauffer. “Adaptive background mixture models for real-

time tracking”. Proceedings IEEE Conference on Computer Vision

and Pattern Recognition, pp. 246-252, 1999.

[26] J. Zhang, and C. Chen. “Moving Objects Detection and

Segmentation in Dynamic Video Backgrounds”. Conference on

Technologies for Homeland Security, pp. 64-69, Woburn, USA,

May 2007.
[27] R. Tan, H. Huo, J. Qian, and T. Fang. "Traffic Video Segmentation using Adaptive-K Gaussian Mixture Model". The

International Workshop on Intelligent Computing (IWICPAS),

August 2006.

[28] A. Elgammal, and L. Davis. “Nonparametric Model for

Background Subtraction”. 6th European Conference on Computer Vision, June 2000.

[29] C. Ianasi, V. Gui, C. Toma, and D. Pescaru. “Fast Algorithm

for Background Tracking in Video Surveillance, Using

Nonparametric Kernel Density Estimation”. Facta Universitatis,

Series: Electronics and Energetics, vol. 18, no. 1, pp. 127-144,

April 2005.
[30] A. Tavakkoli, M. Nicolescu, and G. Bebis. "Robust Recursive

Learning for Foreground Region Detection in Videos with Quasi-

Stationary Backgrounds”. Proceedings of the International

Conference on Pattern Recognition (ICPR), vol. 1, pp. 315-318,

Hong Kong, August 2006.

[31] C. Stauffer, and E. Grimson. “Adaptive background mixture

models for real-time tracking”. IEEE Conference on Computer

Vision and Pattern Recognition (CVPR), pp. 246-252, 1999.


[32] A. Dempster. “Upper and lower probabilities induced by

multivalued mapping”. Annals of Mathematical Statistics, vol. 38, pp. 325-339, 1967.

[33] G. Shafer. “A Mathematical Theory of Evidence”. Princeton

University Press, 1976.

[34] A. Appriou. “Probabilites et incertitude en fusion de donnees

multisenseurs”. Revue scientifique et technique de la defense, pp.

27-40, November 1991.
[35] H. Ribeiro and A. Gonzaga. "Hand Image Segmentation in

Video Sequence by GMM: a comparative analysis”. XIX Brazilian

Symposium on Computer Graphics and Image Processing

(SIBGRAPI), pp. 357- 364, Manaus, Brazil, 2006.

[36] S. Kanprachar, and S. Tangkawanit. “Performance of RGB and

HSV color systems in object detection applications under different illumination intensities”. International Multi Conference of

Engineers and Computer Scientists, vol. 2, pp. 1943-1948,

Kowloon, China, March 2007.

[37] C. Silva, T. Bouwmans, and C. Frelicot. "An eXtended Center-Symmetric Local Binary Pattern for Background Modeling

and Subtraction in Videos”. Conference on Computer Vision

Theory and Applications (VISAPP), pp. 1-8, March 2015.


Buletinul Ştiinţific al Universităţii Politehnica Timişoara

TRANSACTIONS on ELECTRONICS and COMMUNICATIONS

Volume 60(74), Issue 2, 2015

Instructions for authors at the Scientific Bulletin of the

Politehnica University of Timisoara - Transactions on

Electronics and Communications

First Author 1, Second Author 2
1 Faculty of Electronics and Telecommunications, Communications Dept., Bd. V. Parvan 2, 300223 Timisoara, Romania, e-mail [email protected]
2 Faculty of Electronics and Telecommunications, Communications Dept., Bd. V. Parvan 2, 300223 Timisoara, Romania, e-mail [email protected]

Abstract – These instructions present a model for editing

the papers accepted at the Scientific Bulletin of

“Politehnica” University of Timisoara, Transactions on

Electronics and Communications. The abstract should

contain the description of the problem, methods,

solutions and results in a maximum of 12 lines. No

references are allowed here.

Keywords: editing, Bulletin, author

I. INTRODUCTION

The page format is A4. The articles must be of 6

pages or less, tables and figures included.

II. GUIDELINES

The paper should be sent in this standard form. Use a

good quality printer, and print on a single face of the

sheet. Use a double column format with 0.5 cm in

between columns, on an A4, portrait oriented,

standard size. The top and bottom margins should be

of 2.28 cm, and the left and right margins of 2.54 cm.

Microsoft Word for Windows is recommended as a

text editor. Choose Times New Roman fonts, and

single spaced lines. Font sizes should be: 18 pt bold

for the paper title, 12 pt for the author(s), 9 pt bold for

the abstract and keywords, 10 pt capitals for the

section titles, 10 pt italic for the subsection titles;

distance between section numbers and titles should be

of 0.25 cm; use 10 pt for the normal text, 8 pt for

affiliation, footnotes, figure captions, and references.

III. FIGURES AND TABLES

Figures should be centered, and tables should be left

aligned, and should be placed after the first reference

in the text. Use abbreviations such as “Fig.1.” even at

the beginning of the sentence. Leave an empty line

before and after equations. Equation numbering

should be simple: (1), (2), (3) … and right aligned:

$x(t) = \int_{-a}^{a} y(t - \tau)\, d\tau$ . (1)

IV. ABOUT REFERENCES

References should be numbered in a simple form [1],

[2], [3]…, and quoted accordingly [1]. References are

not allowed in footnotes. It is recommended to

mention all authors; “et al.” should be used only for

more than 6 authors.

Table 1

Parameter Value Unit

I 2.4 A

U 10.0 V

V. REMARKS

A. Abbreviations and acronyms

Abbreviations and acronyms should be explained

when they appear for the first time in the text.

Abbreviations such as IEEE, IEE, SI, MKS, CGS, ac,

dc and rms need no further explanation. It is

recommended not to use abbreviations in section or

subsection titles.

Fig. 1. Amplitudes in the standing wave


B. Further recommendations

The International System of Units is recommended. Do not mix SI and CGS. Preliminary experimental results are not accepted. Roman section numbering is optional.

REFERENCES

[1] A. Ignea, “Preparation of papers for the International

Symposium Etc. ’98”, Buletinul Universităţii “Politehnica”, Seria

Electrotehnica, Electronica si Telecomunicatii, Tom 43 (57), 1998,

Fascicola 1, 1998, pp. 81.

[2] R. E. Collin, Foundations for Microwave Engineering, Second

Edition, McGraw-Hill, Inc., 1992.

[3] http://www.tc.etc.upt.ro/bulletin
