
International Journal of Advanced Research in Engineering and Technology (IJARET), ISSN 0976 – 6480 (Print), ISSN 0976 – 6499 (Online), Volume 5, Issue 12, December (2014), pp. 73-86 © IAEME

NEURAL NETWORK FOR THE RELIABILITY ANALYSIS OF A SERIES-PARALLEL SYSTEM SUBJECTED TO FINITE COMMON-CAUSE AND FINITE HUMAN ERROR FAILURES

K. Umamaheswari, Research Scholar, Dept. of Mathematics, Sri Krishnadevaraya University, Ananthapuramu-515001, A.P., India

A. Mallikarjuna Reddy, Professor, Dept. of Mathematics, Sri Krishnadevaraya University, Ananthapuramu-515001, A.P., India

ABSTRACT

Artificial neural networks can achieve high computation rates by employing a massive number of simple processing elements with a high degree of connectivity between the elements. Neural networks with feedback connections provide a computing model capable of exploiting fine-grained parallelism to solve a rich class of complex problems. In this paper we discuss a complex series-parallel system subjected to finite common-cause and finite human error failures, and analyze its reliability using the neural network method.

Keywords: Reliability, availability, Markov Model, Neural networks and Series-parallel system.

1. INTRODUCTION

Artificial neural networks can achieve high computation rates by employing a massive number of simple processing elements with a high degree of connectivity between the elements. Neural networks with feedback connections provide a computing model capable of exploiting fine-grained parallelism to solve a rich class of complex problems. Network parameters are explicitly computed from the problem specification, causing the network to converge to an equilibrium that represents a solution. Recently, Mahmoud and Suliman [1990] introduced a new approach to the


reliability analysis based on the neural network approach. In general, the reliability of hardware under design is usually arrived at by assuming suitable values for certain parameters, such as the failure rate, coverage factor and repair rate, whichever is applicable to the design. The reliability of the system is then computed using discrete- or continuous-time analysis. If the resulting reliability does not meet the design requirements, then the whole process is repeated to obtain another set of values. A table is usually set out at the end showing the various reliability values with the corresponding parameters of the design, so that the designer may pick the most suitable features for the system. This technique is lengthy and complicated when dealing with complex fault-tolerant systems; the neural network philosophy is used to avoid this complication.

The initial conditions and the desired reliability are fed into the neural network. When the neural network converges, its weights indicate the appropriate parameters, and hence the features, of the system under investigation. A computation is performed collectively by the whole network, with the activity distributed over all the computing elements. This collective operation results in a high degree of parallel computation for the fast solution of complex problems.

Artificial neural networks (ANNs) are motivated by biological nervous systems. Modern computers and algorithmic computations are good at well-defined tasks. Biological brains, on the other hand, easily solve speech and vision problems under a wide range of conditions, tasks that no digital computer has solved adequately. This inadequacy has prompted researchers to study biological neural systems in an attempt to design computational systems with brain-like capabilities. At the same time, modern analog and digital integrated circuit technology is offering the potential for implementing massively parallel networks of simple processing elements. Neurocomputing will enable us to take advantage of these advances in VLSI by providing the computational model necessary to program and coordinate the behavior of thousands of processing elements.

Neural network models are providing new approaches to problem solving. Neural networks can be simulated on special-purpose neural hardware accelerators as well as on conventional machines. For maximum processing speed they may even be realized using optical implementations or silicon VLSI. The key to the utility of ANNs is that they provide a computational model that can be used to systematize the programming of simple processors. This paper deals with the definition of artificial neural networks, types of neural networks, their use, and a systematic approach to the availability analysis of a series-parallel system with repair, which illustrates the neural network approach. The discrete-time Markov model of a series-parallel system is realized using a feed-forward recursive neural network. The obtained results are verified against the continuous-time solutions of the Markov models and digital simulation.

2. ARTIFICIAL NEURAL NETWORKS

2.1 Network Models

Networks may be distinguished on the basis of the directions in which signals flow. Basically, there are two types of network: feed-forward and feedback networks. A network in which signals propagate in only one direction, from an input stage through intermediate neurons to an output stage, is called a feed-forward network. Feedback networks, on the other hand, are networks in which signals may propagate from the output of any neuron to the input of any neuron.

2.2 Feed Forward Networks

Fig. 1 illustrates a feed-forward network. The first layer serves only to distribute a weighted version of the input vector to the neurons in the inner layer. Neurons in the inner layer, called hidden neurons, respond to the accumulated effects of their inputs and propagate their response signals to neurons in the output layer. Neurons in the output layer also accumulate the effects of the signals they receive and collectively produce an output vector of signals which represents the response of the


network to the input vector. There are several powerful algorithms [1991] & [1988] available for adapting the strengths of the interconnections between neurons in a feed-forward network so that the network learns to map input patterns into desired output patterns. Feed-forward networks have been applied successfully to a number of problem areas.

Fig.1: Feed Forward Networks
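The one-directional signal flow described above can be sketched as a minimal forward pass. The layer sizes, random weights, and tanh activation below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 3 inputs, 4 hidden neurons, 2 outputs.
W_hidden = rng.normal(size=(4, 3))   # input -> hidden weights
W_output = rng.normal(size=(2, 4))   # hidden -> output weights

def feed_forward(x):
    """Propagate an input vector in one direction only: input -> hidden -> output."""
    h = np.tanh(W_hidden @ x)        # hidden neurons respond to weighted inputs
    y = np.tanh(W_output @ h)        # output neurons accumulate hidden responses
    return y

y = feed_forward(np.array([0.5, -0.2, 1.0]))
print(y.shape)  # (2,)
```

Because no signal path leads back to an earlier layer, the output is a fixed function of the input rather than the equilibrium of a dynamical system.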

2.3 Feedback Networks

Fig. 2 shows a feedback network with five neurons. Each black dot represents a set of feedback connections that are analogous to biological synapses. Because the output of a neuron may be fed back into the network as an input to other neurons, a neuron may influence its own future state. Neural models that permit feedback have been employed to develop networks capable of unsupervised learning, self-organization, retrieving stored memory patterns, and computing solutions to a variety of optimization problems. The neural solution to each of these problems involves interpreting the state of the network after it stabilizes. It is therefore necessary to state criteria for the design of suitable neural networks.

Fig.2: Feedback Networks


2.4 Cohen-Grossberg Stability Results

Grossberg [1963] developed a mathematical model which encompasses a variety of neural network models, as well as models from population biology and macromolecular evolution. The analysis of this model by Cohen and Grossberg yielded conditions under which the systems of differential equations used to characterize a number of popular neural network models converge to stable states. The model which they analyzed is a dynamical system of mutually independent differential equations of the form

\frac{dX_i}{dt} = a_i(X_i)\left[ b_i(X_i) - \sum_{j=1}^{n} C_{ij}\, g_j(X_j) \right]    (1)

They showed the existence of a Lyapunov function for a system of such equations if the matrix [Cij] and the functions ai, bi and gi meet three conditions:

1. The matrix [Cij] must be symmetric (i.e., Cij = Cji).
2. The functions ai and bi must be continuous, with ai nonnegative.
3. The functions gi must be non-decreasing.

A Lyapunov function for a dynamical system places constraints on the collective behavior of the equations comprising the system. The central idea is that the system always evolves in a manner that does not increase the value of the Lyapunov function. The existence of a Lyapunov function for a system of independent differential equations of the form of equation (1) therefore guarantees that the system will follow a trajectory leading to a stable state, regardless of the initial state, provided the above conditions hold. The Cohen-Grossberg proof establishes the stability of any neural network model that can be characterized by equation (1). One of the cases of this model was independently conceived by Hopfield.
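As an illustrative sketch (not from the paper), the three conditions can be checked numerically for a candidate system before relying on the stability guarantee; the matrix and functions below are assumed examples, and continuity of b is taken on faith rather than tested:

```python
import numpy as np

def check_cohen_grossberg(C, a, g, xs):
    """Numerically probe the Cohen-Grossberg conditions on sample points xs.
    C: interconnection matrix; a, g: scalar functions a_i and g_i.
    xs must be sorted ascending. Returns True if all sampled checks pass."""
    symmetric = np.allclose(C, C.T)               # condition 1: C_ij = C_ji
    a_nonneg = all(a(x) >= 0 for x in xs)         # condition 2 (partial): a_i nonnegative
    gs = [g(x) for x in xs]                       # condition 3: g_i non-decreasing
    g_nondecreasing = all(g2 >= g1 for g1, g2 in zip(gs, gs[1:]))
    return symmetric and a_nonneg and g_nondecreasing

C = np.array([[0.0, 1.0], [1.0, 0.0]])            # assumed symmetric interconnections
xs = np.linspace(-5, 5, 101)                      # sorted sample points
ok = check_cohen_grossberg(C, a=lambda x: 1.0, g=np.tanh, xs=xs)
print(ok)  # True
```

With an asymmetric C the same check fails, which is exactly the case the Cohen-Grossberg theorem does not cover.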

2.5 The Hopfield Neural Network

The earliest neural network model introduced by Hopfield employed two-state neurons. He used this model to design neural content-addressable memories. Hopfield later introduced a modified version of his earlier model which employed a continuous non-linear function to describe the output behavior of the neurons. It is Hopfield's continuous model that corresponds to a special case of the Grossberg mathematical model for additive neural networks.

In Hopfield's continuous model, the behavior of a neuron is characterized by its activation level ui, which is governed by the differential equation

\frac{du_i}{dt} = -\frac{u_i}{n_i} + \sum_{j=1}^{n} W_{ij}\, g_j(u_j) + I_i    (2)

where -ui/ni is a passive decay term, Wij is the strength of the interconnection between neurons i and j, gj(uj) is the activation function for neuron j, and Ii is the external input to neuron i. The activation level ui is a continuous variable that corresponds to the membrane potential in biological neurons. In the absence of an external input and inputs from other neurons, the passive decay term causes ui to decay toward 0 at a rate determined by ni. The output of neuron i can be described by its mean firing rate vi corresponding to the activation level ui. The output vi is continuous over its range and is related to ui


by the activation function vi = gi(ui). The activation function is typically a smooth sigmoid; a frequent choice for g(u) is

g(u) = 0.5[1 + tanh(gain · u)]    (3)

As long as g(u) is non-decreasing, it meets the Cohen-Grossberg requirements for stability. Thus, if the external inputs are maintained at a constant value, a network of neurons modeled by equation (3) will eventually equilibrate, regardless of the starting state. Hopfield discovered a Lyapunov function for a network of n neurons characterized by equation (3), which can be expressed as

E = -\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} W_{ij}\, g_i(u_i)\, g_j(u_j) - \sum_{i=1}^{n} I_i\, g_i(u_i)    (4)

Equation (4) is a Lyapunov function when the gain of the activation function is sufficiently high. This expression, which Hopfield refers to as the network's "computational energy" or simply its "energy function", can be derived from the Lyapunov function discovered earlier by Cohen and Grossberg. The term "energy function" stems from an analogy between the network behavior and that of certain physical systems. Just as physical systems may evolve toward an equilibrium state, a network of neurons will always evolve toward a minimum of the energy function. The stable states of a network of neurons therefore correspond to the local minima of the energy function.

Hopfield and Tank [1985] had a key insight when they recognized that it was possible to use the energy function to perform computations. Because a network of neurons will seek to minimize the energy function, one may design a neural network for function minimization by associating variables in an optimization problem with variables in the energy function.
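The two-state content-addressable memory mentioned above can be sketched minimally. The stored pattern, network size, and Hebbian weight choice W = p pᵀ with zero diagonal are illustrative assumptions, not the paper's construction; asynchronous updates never increase the two-state analogue of the energy function:

```python
import numpy as np

# Hebbian storage of one pattern in a two-state Hopfield network.
p = np.array([1, -1, 1, 1, -1, 1, -1, -1])
n = len(p)
W = np.outer(p, p).astype(float)
np.fill_diagonal(W, 0.0)                 # symmetric, zero-diagonal weights

def energy(x):
    # Two-state analogue of eq. (4), with no external input term.
    return -0.5 * x @ W @ x

x = p.copy()
x[0] = -x[0]                             # corrupt one bit of the stored memory
e0 = energy(x)
for i in range(n):                       # asynchronous updates: each step cannot raise E
    x[i] = 1 if W[i] @ x >= 0 else -1

print((x == p).all(), energy(x) <= e0)  # True True
```

Starting from the corrupted state, the network settles into the stored pattern, i.e. a local minimum of the energy function, which is precisely the "interpret the stable state" usage described in the text.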

2.6 Series-Parallel System

Consider the Markov model of a series-parallel system consisting of n units, as shown in Fig. 3.

Assumptions: The following assumptions are associated with the system under study:

• Failures are statistically independent.
• All system units are active, identical and form a parallel network.
• A unit failure rate is constant.
• A common-cause failure or a critical human error leads to system failure.
• A common-cause failure or a critical human error can occur when one (or more) unit is operating.
• Critical human error and common-cause failure rates are constant.
• Failed system repair rates are constant / non-constant.
• At least one unit must operate normally for the system's success.

Symbols: The following symbols are associated with this model:

n - number of units in the parallel system
λ - constant failure rate of a unit
i - system up state, as shown in the boxes of Fig. 3:
i = 0 (all units operating normally),
i = 1 (one unit failed, (n-1) operating),
i = 2 (two units failed, (n-2) operating),



i = 3 (three units failed, (n-3) operating),
i = 4 (four units failed, (n-4) operating),
i = k (k units failed, (n-k) operating).
k - number of failed units in the system and corresponding up state of the system, for k = 1, 2, ..., (n-1)
λc1i, λc2i - constant common-cause failure rates from system up state i, for i = 0, 1, 2, ..., k
λh1i, λh2i - constant critical human error rates from system up state i, for i = 0, 1, 2, ..., k
j - system down state, as shown in the boxes of Fig. 3:
j = n (all units failed other than due to a common-cause failure or a critical human error),
j = c1, c2, c3, ..., ck (system failed due to a common-cause failure),
j = h1, h2, h3, ..., hk (system failed due to a critical human error)
Pi(t) - probability that the system is in up state i at time t, for i = 0, 1, 2, 3, ..., k
Pj(t) - probability that the system is in down state j at time t, for j = n, c1, c2, ..., ck, h1, h2, ..., hk
Pi - steady-state probability that the system is in up state i, for i = 0, 1, 2, ..., k
Pj - steady-state probability that the system is in down state j, for j = n, c1, c2, ..., ck, h1, h2, ..., hk

GENERAL MODEL

The system transition diagram is shown in Fig. 3. The discrete-time equations for the Markov model are given by

P0(t+∆t) = P0(t)[1 - nλ∆t - λc10∆t - λc20∆t - λc30∆t - ... - λck0∆t - λh10∆t - λh20∆t - λh30∆t - ... - λhk0∆t] + P1(t)nλ∆t + Pc1(t)λc10∆t + Pc2(t)λc20∆t + Pc3(t)λc30∆t + ... + Pck(t)λck0∆t + Ph1(t)λh10∆t + Ph2(t)λh20∆t + Ph3(t)λh30∆t + ... + Phk(t)λhk0∆t

Pc1(t+∆t) = P0(t)r1∆t + Pc1(t)[1 - r1∆t]
Pc2(t+∆t) = P0(t)r2∆t + Pc2(t)[1 - r2∆t]
Pc3(t+∆t) = P0(t)r3∆t + Pc3(t)[1 - r3∆t]
...
Pck(t+∆t) = P0(t)rk∆t + Pck(t)[1 - rk∆t]

Ph1(t+∆t) = P0(t)z1∆t + Ph1(t)[1 - z1∆t]
Ph2(t+∆t) = P0(t)z2∆t + Ph2(t)[1 - z2∆t]
Ph3(t+∆t) = P0(t)z3∆t + Ph3(t)[1 - z3∆t]
...
Phk(t+∆t) = P0(t)zk∆t + Phk(t)[1 - zk∆t]

Fig.3: System Transition


P1(t+∆t) = Pc1(t)λc11∆t + Pc2(t)λc21∆t + Pc3(t)λc31∆t + ... + Pck(t)λck1∆t + Ph1(t)λh11∆t + Ph2(t)λh21∆t + Ph3(t)λh31∆t + ... + Phk(t)λhk1∆t + P2(t)(n-1)λ∆t + P1(t)[1 - λc11∆t - λc21∆t - λc31∆t - ... - λck1∆t - λh11∆t - λh21∆t - λh31∆t - ... - λhk1∆t - (n-1)λ∆t]

P2(t+∆t) = Pc1(t)λc12∆t + Pc2(t)λc22∆t + Pc3(t)λc32∆t + ... + Pck(t)λck2∆t + Ph1(t)λh12∆t + Ph2(t)λh22∆t + Ph3(t)λh32∆t + ... + Phk(t)λhk2∆t + P3(t)(n-2)λ∆t + P2(t)[1 - λc12∆t - λc22∆t - λc32∆t - ... - λck2∆t - λh12∆t - λh22∆t - λh32∆t - ... - λhk2∆t - (n-2)λ∆t]

P3(t+∆t) = Pc1(t)λc13∆t + Pc2(t)λc23∆t + Pc3(t)λc33∆t + ... + Pck(t)λck3∆t + Ph1(t)λh13∆t + Ph2(t)λh23∆t + Ph3(t)λh33∆t + ... + Phk(t)λhk3∆t + P4(t)(n-3)λ∆t + P3(t)[1 - λc13∆t - λc23∆t - λc33∆t - ... - λck3∆t - λh13∆t - λh23∆t - λh33∆t - ... - λhk3∆t - (n-3)λ∆t]

Pk(t+∆t) = Pc1(t)λc1k∆t + Pc2(t)λc2k∆t + Pc3(t)λc3k∆t + ... + Pck(t)λckk∆t + Ph1(t)λh1k∆t + Ph2(t)λh2k∆t + Ph3(t)λh3k∆t + ... + Phk(t)λhkk∆t + Pk+1(t)(n-k)λ∆t + Pk(t)[1 - λc1k∆t - λc2k∆t - λc3k∆t - ... - λckk∆t - λh1k∆t - λh2k∆t - λh3k∆t - ... - λhkk∆t - (n-k)λ∆t]

Pn(t+∆t) = P0(t)µ∆t + Pn(t)[1 - µ∆t]
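The P(t+∆t) bookkeeping pattern above can be sketched for a deliberately reduced illustration: one up state, one common-cause down state, and one human-error down state, with assumed rates and no repair. This is not the paper's full k-state model, only the discrete-time update scheme:

```python
import numpy as np

# Illustrative 3-state discrete-time Markov update: state 0 = up,
# state 1 = down (common-cause failure), state 2 = down (human error).
lam_c, lam_h, dt = 0.002, 0.001, 0.1     # assumed rates per hour, time step

# One-step matrix M[i][j] = P(move from state i to state j in dt),
# mirroring the P(t+dt) = P(t)[1 - sum(rates)dt] + ... structure above.
M = np.array([
    [1 - (lam_c + lam_h) * dt, lam_c * dt, lam_h * dt],
    [0.0, 1.0, 0.0],                     # down states absorbing (no repair here)
    [0.0, 0.0, 1.0],
])

P = np.array([1.0, 0.0, 0.0])            # initial conditions: system up
for _ in range(int(10 / dt)):            # propagate to t = 10 hours
    P = P @ M

print(round(P.sum(), 10), P[0] > 0.9)    # 1.0 True
```

Because each row of M sums to 1, total probability is conserved at every step, which is the sanity check one would also apply to the full model's transition matrix.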

2.7 The Neural Network for a Series-Parallel System

A feed-forward cascade recursive network is set up to represent the parallel system. As shown in Fig. 4, the network consists of two layers of neurons: one formed by the inputs and the other forming the outputs; the number of neurons in each layer equals the number of states in the Markov model. The weights connecting the input and output neurons represent the entries of the transition matrix of the differential equations. In other words, the weights of the neural network are related to the Markov model as follows.

W12 = λc10∆t        W21 = r1∆t        W2k+2,2 = λc11∆t      W3k+1,2 = λc1k∆t
W13 = λc20∆t        W31 = r2∆t        W2k+2,3 = λc21∆t      W3k+1,3 = λc2k∆t
W14 = λc30∆t        W41 = r3∆t        W2k+2,4 = λc31∆t      W3k+1,4 = λc3k∆t
W1,k+1 = λck0∆t     Wk+1,1 = rk∆t     W2k+2,k+1 = λck1∆t    W3k+1,k+1 = λckk∆t
W1,k+2 = λh10∆t     Wk+2,1 = z1∆t     W2k+3,2 = λc12∆t      W3k+1,k+2 = λh1k∆t
W1,k+3 = λh20∆t     Wk+3,1 = z2∆t     W2k+3,3 = λc22∆t      W3k+1,k+3 = λh2k∆t
W1,k+4 = λh30∆t     Wk+4,1 = z3∆t     W2k+3,4 = λc32∆t      W3k+1,k+4 = λh3k∆t
W1,2k+1 = λhk0∆t    W2k+1,1 = zk∆t    W2k+3,k+1 = λck2∆t    W3k+1,2k+1 = λhkk∆t
W1,2k+2 = nλ∆t                        W2k+4,2 = λc13∆t      W3k+1,2k+2 = (n-k)λ∆t
W2k+2,2k+3 = (n-1)λ∆t                 W2k+4,3 = λc23∆t      W2k+n+1,1 = µ∆t
W2k+3,2k+4 = (n-2)λ∆t                 W2k+4,4 = λc33∆t
W2k+4,2k+5 = (n-3)λ∆t                 W2k+4,k+1 = λck3∆t

W11 = 1 - [W12 + W13 + W14 + W1,k+1 + W1,k+2 + W1,k+3 + W1,k+4 + W1,2k+1 + W1,2k+2]
W22 = 1 - W21
W33 = 1 - W31


W44 = 1 - W41
...
Wk+1,k+1 = 1 - Wk+1,1
Wk+2,k+2 = 1 - Wk+2,1
Wk+3,k+3 = 1 - Wk+3,1
Wk+4,k+4 = 1 - Wk+4,1
...
W2k+1,2k+1 = 1 - W2k+1,1
W2k+2,2k+2 = 1 - W2k+2,2
W2k+3,2k+3 = 1 - W2k+3,2
W2k+4,2k+4 = 1 - W2k+4,2
W3k+1,3k+1 = 1 - W3k+1,2
W2k+n+1,2k+n+1 = 1 - W2k+n+1,1

At any time t during operation of the system,

X1 = P0(t)
X2 = Pc1(t)
X3 = Pc2(t)
X4 = Pc3(t)
:
Xk+1 = Pck(t)
Xk+2 = Ph1(t)
Xk+3 = Ph2(t)
Xk+4 = Ph3(t)
:
X2k+1 = Phk(t)
X2k+2 = P1(t)
X2k+3 = P2(t)
X2k+4 = P3(t)
:
X3k+1 = Pk(t)
X2k+n+1 = Pn(t)

Fig.4: Series-Parallel System


Y1 = P0(t+∆t)
Y2 = Pc1(t+∆t)
Y3 = Pc2(t+∆t)
Y4 = Pc3(t+∆t)
:
Yk+1 = Pck(t+∆t)
Yk+2 = Ph1(t+∆t)
Yk+3 = Ph2(t+∆t)
Yk+4 = Ph3(t+∆t)
:
Y2k+1 = Phk(t+∆t)
Y2k+2 = P1(t+∆t)
Y2k+3 = P2(t+∆t)
Y2k+4 = P3(t+∆t)
:
Y3k+1 = Pk(t+∆t)
Y2k+n+1 = Pn(t+∆t)

The initial conditions are given by

X1 = 1,  X2 = X3 = X4 = X5 = ... = X2k+n+1 = 0

The basic equations of the neural network are

Y1 = W11X1 + W21X2 + W31X3 + W41X4 + Wk+1,1Xk+1 + Wk+2,1Xk+2 + Wk+3,1Xk+3 + Wk+4,1Xk+4 + W2k+1,1X2k+1 + W2k+n+1,1X2k+n+1

Y2 = W12X1 + W22X2 + W2k+2,2X2k+2 + W2k+3,2X2k+3 + W2k+4,2X2k+4 + ... + W3k+1,2X3k+1 + ... + W2k+n,2X2k+n

Y3 = W13X1 + W33X3 + W2k+2,3X2k+2 + W2k+3,3X2k+3 + W2k+4,3X2k+4 + ... + W3k+1,3X3k+1 + ... + W2k+n,3X2k+n

Y4 = W14X1 + W44X4 + W2k+2,4X2k+2 + W2k+3,4X2k+3 + W2k+4,4X2k+4 + ... + W3k+1,4X3k+1 + ... + W2k+n,4X2k+n

Yk+1 = W1,k+1X1 + Wk+1,k+1Xk+1 + W2k+2,k+1X2k+2 + W2k+3,k+1X2k+3 + W2k+4,k+1X2k+4 + ... + W3k+1,k+1X3k+1 + ... + W2k+n,k+1X2k+n

Yk+2 = W1,k+2X1 + Wk+2,k+2Xk+2 + W2k+2,k+2X2k+2 + W2k+3,k+2X2k+3 + W2k+4,k+2X2k+4 + ... + W3k+1,k+2X3k+1 + ... + W2k+n,k+2X2k+n

Yk+3 = W1,k+3X1 + Wk+3,k+3Xk+3 + W2k+2,k+3X2k+2 + W2k+3,k+3X2k+3 + W2k+4,k+3X2k+4 + ... + W3k+1,k+3X3k+1 + ... + W2k+n,k+3X2k+n


Yk+4 = W1,k+4X1 + Wk+4,k+4Xk+4 + W2k+2,k+4X2k+2 + W2k+3,k+4X2k+3 + W2k+4,k+4X2k+4 + ... + W3k+1,k+4X3k+1 + ... + W2k+n,k+4X2k+n

Y2k+1 = W1,2k+1X1 + W2k+1,2k+1X2k+1 + W2k+2,2k+1X2k+2 + W2k+3,2k+1X2k+3 + W2k+4,2k+1X2k+4 + ... + W3k+1,2k+1X3k+1 + ... + W2k+n,2k+1X2k+n

Y2k+2 = W1,2k+2X1 + W2k+2,2k+2X2k+2
Y2k+3 = W2k+2,2k+3X2k+2 + W2k+3,2k+3X2k+3
Y2k+4 = W2k+3,2k+4X2k+3 + W2k+4,2k+4X2k+4

Y3k+1 = W3k,3k+1X3k + W3k+1,3k+1X3k+1,    k = 2, 3, 4, ..., n-1

Y2k+n+1 = W2k+n+1,1X1 + W2k+n+1,2k+n+1X2k+n+1

The energy function E for the neural network and the update equations are obtained using the least mean square, gradient-descent learning procedure as follows:

E = \sum_{i=1}^{2k+n+1} (Y_i - D_i)^2

where Yi is the output of neuron i in the output layer, corresponding to the probability of the system being in state i, and Di is the desired output of neuron i, equivalent to the design requirement for the probability of the system being in state i after a specified time of operation; it is to be determined from the target reliability of the design.

By the least mean square, gradient-descent procedure, the update equation for the neural network is derived as follows.

The change in the weight Wij, denoted by ∆Wij, is related to the energy function by the following update relation:

∆Wij = -K ∂E/∂Wij

where K is the constant of proportionality. Now, by using the chain rule,

∂E/∂Wij = Σ_{m=1}^{n} (∂E/∂Ym)(∂Ym/∂Wij)

and since the energy function is quadratic,

∂E/∂Ym = 2(Ym - Dm)

so that

∆Wij = -2K Σ_{m=1}^{n} (Ym - Dm) ∂Ym/∂Wij = -2K Σ_{m=1}^{n} Em ∂Ym/∂Wij

In particular,

∆W12 = -K ∂E/∂W12 = 2KX1[E2 - E1]

∆W13 = -K ∂E/∂W13



= 2KX1[E3 - E1]

∆W14 = 2KX1[E4 - E1]
∆W1,k+1 = 2KX1[Ek+1 - E1]
∆W1,k+2 = 2KX1[Ek+2 - E1]
∆W1,k+3 = 2KX1[Ek+3 - E1]
∆W1,k+4 = 2KX1[Ek+4 - E1]
∆W1,2k+1 = 2KX1[E2k+1 - E1]
∆W1,2k+2 = 2KX1[E2k+2 - E1]
∆W21 = 2KX2[E1 - E2]
∆W31 = 2KX3[E1 - E3]
∆W41 = 2KX4[E1 - E4]
∆Wk+1,1 = 2KXk+1[E1 - Ek+1]
∆Wk+2,1 = 2KXk+2[E1 - Ek+2]
∆Wk+3,1 = 2KXk+3[E1 - Ek+3]
∆Wk+4,1 = 2KXk+4[E1 - Ek+4]
∆W2k+1,1 = 2KX2k+1[E1 - E2k+1]
∆W2k+2,2 = 2KX2k+2[E2 - E2k+2]
∆W2k+2,3 = 2KX2k+2[E3 - E2k+2]
∆W2k+2,4 = 2KX2k+2[E4 - E2k+2]
∆W2k+2,k+1 = 2KX2k+2[Ek+1 - E2k+2]
∆W2k+2,k+2 = 2KX2k+2[Ek+2 - E2k+2]
∆W2k+2,k+3 = 2KX2k+2[Ek+3 - E2k+2]
∆W2k+2,k+4 = 2KX2k+2[Ek+4 - E2k+2]
∆W2k+2,2k+1 = 2KX2k+2[E2k+1 - E2k+2]
∆W2k+2,2k+3 = 2KX2k+2[E2k+3 - E2k+2]
∆W2k+3,2 = 2KX2k+3[E2 - E2k+3]
∆W2k+3,3 = 2KX2k+3[E3 - E2k+3]
∆W2k+3,4 = 2KX2k+3[E4 - E2k+3]
∆W2k+3,k+1 = 2KX2k+3[Ek+1 - E2k+3]
∆W2k+3,k+2 = 2KX2k+3[Ek+2 - E2k+3]
∆W2k+3,k+3 = 2KX2k+3[Ek+3 - E2k+3]
∆W2k+3,k+4 = 2KX2k+3[Ek+4 - E2k+3]
∆W2k+3,2k+1 = 2KX2k+3[E2k+1 - E2k+3]
∆W2k+3,2k+2 = 2KX2k+3[E2k+2 - E2k+3]
∆W2k+3,2k+4 = 2KX2k+3[E2k+4 - E2k+3]
∆W2k+4,2 = 2KX2k+4[E2 - E2k+4]
∆W2k+4,3 = 2KX2k+4[E3 - E2k+4]


∆W2k+4,4 = 2KX2k+4[E4 - E2k+4]
∆W2k+4,k+1 = 2KX2k+4[Ek+1 - E2k+4]
∆W2k+4,k+2 = 2KX2k+4[Ek+2 - E2k+4]
∆W2k+4,k+3 = 2KX2k+4[Ek+3 - E2k+4]
∆W2k+4,k+4 = 2KX2k+4[Ek+4 - E2k+4]
∆W2k+4,2k+1 = 2KX2k+4[E2k+1 - E2k+4]
∆W2k+4,2k+2 = 2KX2k+4[E2k+2 - E2k+4]
∆W2k+4,2k+3 = 2KX2k+4[E2k+3 - E2k+4]
∆W2k+4,2k+5 = 2KX2k+4[E2k+5 - E2k+4]
∆W3k+1,2 = 2KX3k+1[E2 - E3k+1]
∆W3k+1,3 = 2KX3k+1[E3 - E3k+1]
∆W3k+1,4 = 2KX3k+1[E4 - E3k+1]
∆W3k+1,k+1 = 2KX3k+1[Ek+1 - E3k+1]
∆W3k+1,k+2 = 2KX3k+1[Ek+2 - E3k+1]
∆W3k+1,k+3 = 2KX3k+1[Ek+3 - E3k+1]
∆W3k+1,k+4 = 2KX3k+1[Ek+4 - E3k+1]
∆W3k+1,2k+1 = 2KX3k+1[E2k+1 - E3k+1]
∆W3k+1,2k+2 = 2KX3k+1[E2k+2 - E3k+1]
∆W2k+n+1,1 = 2KX2k+n+1[E1 - E2k+n+1]

where errorm = Em = (Ym - Dm) is the difference between the actual and the desired output of neuron m in the output layer.
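The least mean square, gradient-descent update can be sketched generically for one linear output layer. The sizes, learning constant, and target vector below are assumptions, and the sketch omits the row-sum coupling that produces the [Em - En] difference terms above:

```python
import numpy as np

rng = np.random.default_rng(2)
n_states = 6
W = rng.uniform(0, 0.1, size=(n_states, n_states))  # illustrative initial weights

X = np.zeros(n_states)
X[0] = 1.0                                          # initial conditions: X1 = 1, rest 0
D = rng.dirichlet(np.ones(n_states))                # assumed desired state probabilities
K = 0.1                                             # learning constant of proportionality

for _ in range(500):
    Y = X @ W                    # output layer: Y_j = sum_i W_ij X_i
    E = Y - D                    # error E_m = Y_m - D_m
    # Least mean square gradient descent: dW_ij = -2K * X_i * E_j
    W -= 2 * K * np.outer(X, E)

print(np.allclose(X @ W, D, atol=1e-6))  # True
```

Each pass shrinks the error by the factor (1 - 2K), so the weights converge to values that reproduce the desired outputs, which is the sense in which the converged weights "indicate the appropriate parameters" of the design.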

3. SIMULATION RESULTS AND DISCUSSION

Computer software was developed to simulate the neural network representing the series-parallel system and was tested on a 4-unit series-parallel system. The time of operation of the system is taken as t = 10 hrs with ∆t = 0.1 sec. The initial failure and repair rates were chosen within an attainable practical range. Samples of the results obtained from the simulation are shown in Table 1.

The discrete-time Markov model of a series-parallel system can be realized by a neural network after feeding in the desired reliability, as shown in the table. The main interesting feature of this method is the utilization of the collective computational abilities of the neural network in the analysis.


Table 1: Convergence results
(P0 and P1 are the desired values)

P0     P1     µ        r1      r2      r3      r4       r5       r6      z1      z2       z3      z4      z5      z6       λ        λc10       λc11       λc12
0.14   0.15   2.4154   1.263   0.3553  0.2211  2.823    1.234    0.1439  2.408   0.7311   2.323   1.197   2.850   2.954    0.1711   0.002769   0.0004688  0.005546
0.45   0.028  1.5034   2.402   1.633   2.221   1.300    0.7559   2.125   1.556   2.825    1.668   0.7148  2.099   1.404    0.5278   0.0001129  0.002444   0.001536
0.089  0.22   1.1152   0.8042  0.4251  2.508   0.02967  0.02526  1.512   2.949   1.607    2.477   0.9990  0.6617  2.643    0.2707   0.0007709  0.003394   0.006189
0.36   0.20   3.2201   0.9059  2.896   0.7288  0.8549   0.7342   1.869   1.336   2.849    2.672   1.380   1.756   2.385    0.2983   0.003241   0.004894   0.004623
0.33   0.25   3.1443   2.847   1.737   2.760   1.458    1.979    0.5869  1.211   2.300    2.850   0.1606  2.346   0.7098   0.3533   0.005836   0.004620   0.005624
0.30   0.17   3.7493   0.9366  2.984   2.104   1.606    2.229    1.068   0.5388  2.243    2.605   0.3248  1.768   0.3395   0.1762   0.005173   0.003882   0.005405
0.17   0.29   2.5308   1.266   1.149   1.221   0.2340   2.899    0.1036  1.314   1.049    1.629   0.8807  2.059   0.04188  0.3928   0.002961   0.004208   0.002784
0.21   0.076  2.5193   2.945   0.3157  2.435   0.6051   1.131    0.6316  2.128   0.1201   0.7900  2.585   0.7912  2.803    0.5649   0.004946   0.005627   0.004741
0.50   0.38   3.3016   1.666   2.809   0.3457  2.352    1.198    1.550   2.749   2.601    0.2652  1.702   0.3785  0.7112   0.1101   0.001197   0.001386   0.004890
0.34   0.17   3.5862   2.029   1.182   1.265   2.858    1.327    1.507   2.497   1.066    0.9200  0.7750  1.275   2.006    0.2722   0.004436   0.003880   0.001589
0.20   0.082  3.0756   0.2582  1.833   1.369   2.689    1.481    2.736   2.780   0.5314   0.2456  2.393   2.225   0.6047   0.1158   0.002214   0.0001335  0.007049
0.053  0.087  3.6493   1.537   1.350   1.066   1.404    2.805    2.480   0.7619  0.9171   2.420   2.210   1.286   0.3532   0.05304  0.002598   0.003101   0.006559
0.16   0.086  0.79659  0.7729  0.8554  2.369   1.328    1.081    1.281   0.8590  0.06880  1.291   1.102   1.162   1.212    0.07133  0.004356   0.005430   0.0003172
0.49   0.22   1.4996   1.994   1.167   0.4753  0.8153   2.493    0.2828  0.8916  0.6100   0.5014  1.506   1.926   1.062    0.07784  0.005081   0.005264   0.001352
0.40   0.23   1.3423   2.369   1.997   1.037   2.857    2.493    2.760   2.800   1.036    2.494   1.334   2.883   1.711    0.2812   0.002213   0.003378   0.006718

Table 1 (continued)

λc13       λc20       λc21       λc22       λc23       λh10       λh11       λh12       λh13       λh20       λh21       λh22       λh23       Time    K      No. of iterations
0.002470   0.007490   0.003476   0.0008517  0.001725   0.003723   0.005912   0.006878   0.003432   0.002233   0.002469   0.0009689  0.003107   0.005   0.19   24
0.002002   0.001511   0.001662   0.004714   0.001627   0.006731   0.002518   0.006452   0.002629   0.005627   0.001737   0.001380   2.414e-05  0.001   0.30   20
0.0002665  0.004247   0.0006597  0.0005125  0.005883   0.004232   0.0002264  0.001067   0.0004986  0.004919   0.001673   0.007155   0.001954   0.003   0.056  4
0.001865   0.005377   0.001876   0.002330   0.003794   0.004181   0.001619   0.002871   0.005939   0.006751   0.004524   0.006259   0.003057   0.005   0.081  30
0.004561   0.001443   0.003698   0.001913   0.001415   0.0004229  0.004166   0.001787   0.001617   0.003325   0.004481   0.001923   9.848e-05  0.005   0.27   28
0.004250   0.003359   0.005588   0.001435   0.002930   0.002636   0.004634   0.003023   0.003657   0.002617   0.003558   0.002330   0.001539   0.004   0.24   30
0.001497   0.005375   0.005071   0.005190   3.255e-05  0.007488   0.0009458  0.004125   0.003462   0.0008592  0.004278   0.0008390  0.002058   0.004   0.25   2
0.002438   0.001133   0.001685   0.002335   0.003888   0.005396   0.0004669  0.006016   0.004153   0.006328   0.005654   0.0005117  0.0009189  0.003   0.13   4
0.005301   0.003706   0.005177   0.001634   0.001327   0.004494   0.0006264  0.002540   0.002622   0.006426   0.005617   0.003751   0.004202   0.0009  0.26   12
0.001515   0.006930   0.002454   0.007175   0.0009321  0.001982   0.0009686  0.0005553  0.002869   0.003266   0.004522   0.003037   0.002230   0.003   0.25   30
0.004233   0.007042   0.001199   0.002796   0.001863   0.001719   0.005821   0.007409   0.0003738  0.003064   0.003316   0.004348   0.004338   0.006   0.13   16
0.0004429  0.0008827  0.001150   0.003139   0.002824   0.006001   0.005938   0.003851   0.001079   0.0004140  0.005986   0.002348   0.003226   0.003   0.12   22
0.002856   0.007376   0.005304   0.007953   0.001288   0.007816   0.001754   0.0005891  0.005284   0.004791   0.002514   0.007416   0.002580   0.002   0.14   10
0.004004   0.006747   0.001139   0.001383   0.005973   0.007246   0.003012   0.004768   0.004989   0.004395   0.0001063  0.003959   0.0001166  0.0002  0.22   26
0.0008449  0.006459   0.003146   0.001524   0.003590   0.001575   0.004326   0.006201   0.001150   0.005874   0.001719   0.005978   0.003975   0.003   0.14   28


REFERENCES

1. Hopfield, J.J. and D.W. Tank (1985), "Neural" Computation of Decisions in Optimization Problems, Biological Cybernetics, Vol. 52, pp. 141-152.
2. John Hertz, Anders Krogh, Richard G. Palmer (1991), Introduction to the Theory of Neural Computation, Addison-Wesley Publishing Company.
3. Mahmoud, A. Manzaul & Mamauss Suliman (1990), Microelectronics Reliability, Vol. 30, pp. 785 and 801.
4. Nakagawa Toshio (1988), Sequential Imperfect Preventive Maintenance Policies, IEEE Transactions on Reliability, Vol. 40, No. 3.
5. Sandler, G.H. (1963), System Reliability Engineering, Prentice-Hall, Englewood Cliffs.
6. Sweety H Meshram, Vinay Keswani and Nilesh Bodne, "Data Filtration and Simulation By Artificial Neural Network", International Journal of Electronics and Communication Engineering & Technology (IJECET), Volume 5, Issue 7, 2014, pp. 46-55, ISSN Print: 0976-6464, ISSN Online: 0976-6472.
7. Shikha Dixit and Appu Kuttan K.K., "Artificial Neural Network Based Data Mining Approach For Human Heart Disease Prediction", International Journal of Computer Engineering & Technology (IJCET), Volume 5, Issue 6, 2014, pp. 136-142, ISSN Print: 0976-6367, ISSN Online: 0976-6375.
8. Dheerendra Vikram Singh, Govind Maheshwari, Neha Mathur, Pushpendra Mishra, Ishan Patel, "Comparison Between Training Function Trainbfg And Trainbr In Modeling of Neural Network For Predicting The Value of Specific Heat Capacity of Working Fluid LiBr-H2O Used In Vapour Absorption Refrigeration System", International Journal of Advanced Research in Engineering & Technology (IJARET), Volume 1, Issue 1, 2010, pp. 118-127, ISSN Print: 0976-6480, ISSN Online: 0976-6499.