

Journal of Molecular Liquids 175 (2012) 85–90


Modeling viscosity of nanofluids using diffusional neural networks

Fakhri Yousefi a,⁎, Hajir Karimi b, Mohammad Mehdi Papari c

a Department of Chemistry, Yasouj University, Yasouj, 75914-353, Iran
b Department of Chemical Engineering, Yasouj University, Yasouj, 75914-353, Iran
c Department of Chemistry, Shiraz University of Technology, 71555-313, Iran

⁎ Corresponding author. Fax: +98 741 222 1711. E-mail address: fyousefi@mail.yu.ac.ir (F. Yousefi).

0167-7322/$ – see front matter © 2012 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.molliq.2012.08.015

Article info

Article history: Received 10 August 2011; Received in revised form 15 August 2012; Accepted 17 August 2012; Available online 28 August 2012

Keywords: Viscosity; Nanofluid; Neural networks; Suspension

Abstract

In our previous work (Int. J. Thermal Sci. 50 (2011) 44–52), we developed a diffusional neural network scheme to model the thermal conductivity of several nanofluids. In this paper, we extend the neural network method to predict the relative viscosity of nanofluids, namely CuO nanoparticles suspended in propylene glycol+water, CuO suspended in ethylene glycol+water, SiO2 suspended in water, SiO2 suspended in ethanol, Al2O3 suspended in water, and TiO2 suspended in water. The results obtained are compared with other theoretical models as well as with experimental values. The relative viscosities of the suspensions predicted using diffusional neural networks (DNN) are in accordance with the literature values.

© 2012 Elsevier B.V. All rights reserved.

1. Introduction

Nanofluids are mixtures of solid nanoparticles with average particle size smaller than 100 nm dispersed in base fluids such as water, ethylene glycol, propylene glycol or engine oil. Research on nanofluids has received great attention during the last decade due to the prospect of enhanced transport properties. Among transport properties, viscosity is a fundamental characteristic property of a fluid that influences flow and heat transfer phenomena. Determining the viscosity of nanofluids is essential for optimizing flow transport devices in energy supply.

Many experimental and theoretical studies have been dedicated to the thermal conductivity of nanofluids [1–9]. However, experimental data for the effective viscosity of nanofluids are limited to certain nanofluids [10–22]. The ranges of the investigated variables, such as the particle volume concentration, particle size, and temperature, are also limited. Most correlations reported for determining nanofluid viscosity are based on the simple Einstein model [23,24]. A few theoretical models have been developed for determining the viscosity of a nanoparticle suspension [25,26]. Still, the experimental data show the trend that the effective viscosities of nanofluids are higher than the existing theoretical predictions [14]. In an attempt to remedy this situation, researchers have proposed models for specific applications, e.g., Al2O3 in water [14,27], Al2O3 in ethylene glycol [27], and CuO in water with temperature change [28]. However, the problem with these models is that they do not reduce to the Einstein model [23] at very low particle volume concentrations and, hence, lack a sound physical basis.


Moreover, many of the deterministic or conceptual viscosity models need a sufficient amount of data for calibration and validation, which makes them computationally inefficient. As a result, researchers have turned their attention to a separate category of models called systems-theoretic models. These models, such as artificial neural networks (ANN), also known as black-box models, attempt to develop relationships between the input and output variables involved in a physical process without considering the underlying physics. The ANN technique has been applied successfully in various fields of modeling and prediction in engineering, mathematics, medicine, economics, metrology and many others, and it has become increasingly popular during the last decade. The advantages of ANN compared to conceptual models are its high speed, simplicity, and large capacity, which reduce the engineering effort. Some recent applications concern thermophysical properties [29–34]. The back-propagation neural network (BPNN) is widely used because it can effectively solve non-linear problems. However, the BP neural network has some deficiencies, such as getting trapped in local extrema and slow convergence. This is disadvantageous given the limited experimental data on nanofluid viscosity.

In the present study, the effective viscosity of nanofluids is determined as a function of the temperature, nanoparticle volume fraction, nanoparticle size, and the base-fluid physical properties. We use the neural network method to calculate the relative viscosity of suspensions of CuO nanoparticles in propylene glycol+water, CuO in ethylene glycol+water, SiO2 in water, SiO2 in ethanol, Al2O3 in water, and TiO2 in water. In this respect, first, we utilize a hybrid model based upon the mega-trend-diffusion technique and neural networks to estimate the expected domain range of the data. Second, we generate a number of virtual data points to reduce the error of the estimated function with respect to the small actual dataset. Third, we build a robust prediction model for the relative viscosity of the aforementioned nanofluids. Finally, we compare the obtained results with actual data as well as with other models to show the reliability of the ANN method.


Fig. 1. Topology of the conventional P-S-1 MLP (input layer of P variables, hidden layer of S neurons, one output).



2. Conventional artificial neural networks

An ANN supplies a non-linear mapping of a set of input variables onto the corresponding output variables, without requiring the actual mathematical form of the relation between the input and output variables to be specified.

The multilayer perceptron (MLP) is a type of feedforward neural network that is commonly used to approximate functions [35]. ANNs are discussed in detail in the literature; therefore, only a few well-known features are given here to describe the general nature of the network.

MLP neural networks consist of multiple layers of simple activation units called neurons, arranged so that each neuron in one layer is connected to each neuron in the next layer by weighted connections (see Fig. 1). The layers make up the global architecture: MLP networks comprise one input layer, at least one hidden layer, and an output layer. The number of neurons in the input layer is defined by the problem to be solved; the input layer receives the data. The output layer delivers the response corresponding to the property values. The hidden layer processes and organizes the information received from the input layer and delivers it to the output layer; its neurons, whose number is chosen by the user, to some extent play the role of intermediate variables. Each neuron has a series of weighted inputs, wij, which may be either outputs from other neurons or inputs from external sources. Each neuron calculates the sum of its weighted inputs and transforms it by the following transfer function:

$$n_j = \frac{1}{1 + \exp(-x)} \qquad (1)$$

Table 1
The lower and upper limits of the virtual data for each variable.

Limit   T [K]     f (fraction)   Particle size [nm]   Density ratio (fluid/particle)   Vis. of base [Pa·s]   Relative vis. of nanofluid
L       238.25    0.0246         78.72                0.2980                           2.172                 1.378
U       323.35    0.0431         105.11               0.3530                           3.999                 1.647

where nj is the output of the j-th neuron and x is given by:

$$x = \sum_{i=1}^{n} w_{ij}\, p_i + b_j \qquad (2)$$

where wij represents the weight applied to the connection from the i-th neuron in the previous layer to the j-th neuron, pi is the output from the i-th neuron, and bj is a bias term. MLP networks operate in the supervised learning mode. In the training algorithm, commonly the back-propagation error (BPE) algorithm, training data are presented to the network, and the network iteratively adjusts the connection weights w and biases b (starting from initial random values) until the network predictions match the actual values satisfactorily. In the BPE training algorithm, this adjustment is carried out by comparing the actual values tij and the predicted values aij of the network by means of the total sum of squared errors (SSE) over the n data points of the training dataset:

$$\mathrm{SSE} = \sum_{i=1}^{n} \left(t_{ij} - a_{ij}\right)^2 \qquad (3)$$
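As an illustration only (not code from the paper), the following Python sketch evaluates a single neuron according to Eqs. (1) and (2) and the SSE of Eq. (3); all numerical values are made-up placeholders.

```python
import numpy as np

def neuron_output(p, w, b):
    """Eq. (2): weighted sum of the inputs plus bias, passed through Eq. (1)."""
    x = np.dot(w, p) + b                  # x = sum_i w_ij * p_i + b_j
    return 1.0 / (1.0 + np.exp(-x))       # logistic transfer function

def sse(t, a):
    """Eq. (3): total sum of squared errors over the training data."""
    t, a = np.asarray(t, float), np.asarray(a, float)
    return float(np.sum((t - a) ** 2))

# Toy usage with made-up numbers:
p = np.array([0.2, -0.5, 0.7])            # outputs of the previous layer
w = np.array([0.1, 0.4, -0.3])            # connection weights w_ij
print(neuron_output(p, w, b=0.05))        # single neuron response
print(sse([1.0, 1.2], [0.9, 1.3]))        # SSE for two training targets
```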

Although artificial neural networks (ANNs) are widely utilized to extract knowledge from acquired data, neural-network modeling usually assumes that the data available for training are sufficient. When the data are insufficient, a nonlinear system cannot be identified reliably; in other words, there is a non-negligible error between the real function and the function estimated by the trained neural network [36]. Moreover, with basic neural networks and few training points, it is difficult to guarantee that a good predictive model will be obtained over the complete actual domain. In most cases, good predictions are achieved only in regions in the vicinity of the actual points contained in the training dataset [37]. A sufficient training dataset is defined as one that provides enough information for a learning method to obtain stable training accuracy.

2.1. Detailed steps in the ANN modeling procedure

In this study, the actual dataset was taken from Refs. [38–42]; the number of actual data points is 182 in total.

According to the available data, there are five input variables, namely temperature, particle volume fraction, particle size, base-fluid viscosity, and the density ratio of base fluid to particle, and the relative viscosity of the nanofluid is the output of the network. As explained previously, the available experimental data (182 data points) do not provide sufficient information for training a robust predictive neural network. Therefore, following the work of Li et al. [43], the mega-trend-diffusion and estimation-of-domain-range techniques were used to create virtual data (adding a number of virtual data points) to construct the prediction model. Because the purpose of this research is to apply the procedure proposed by Li et al. [43], the details of the derivation of the methodology are omitted here. In order to fill the data gaps and estimate the data trend, the mega-trend-diffusion method assumes that the selected data are located within a certain range and have possibility indexes determined by a common membership function (MF). The numbers of data points smaller and greater than the average of the data determine the skewness of the MF, which is used to estimate the trend of the population.



Table 2
Random virtual numbers selected between L and U for each variable, with their MF values.

        T [K]                     f (fraction)               Particle size [nm]     Density ratio (fluid/particle)   Vis. of base [Pa·s]      Relative viscosity
Nv      238.58  263.56  284.12    0.0067  0.0405  0.0501     29.00  23.33  83.40    0.24  0.25  0.41                 0.656  6.220  0.001      1.117  1.462  1.841
MF      0.01    0.59    0.92      0.1905  0.8416  0.5688     0.19   0.13   0.80     0.06  0.21  0.34                 0.190  0.194  0.000      0.351  0.863  0.572


When the data collected are insufficient to build a reliable prediction model, the procedure systematically generates virtual data between the estimated lower and upper extremes (L and U) of the range to fill the data gaps. The computation of L and U mainly employs the statistical diffusion techniques provided in the procedure. The procedure further calculates the MF value of each virtual data point as its importance level.

Li et al. [43] showed that if the number of actual (experimental) data points chosen for generating virtual data increases, the average error rate of the trained net decreases. Therefore, in this study all available actual data were first used to generate virtual data, the ANN was then trained using the generated data, and finally the actual data were used to test the ANN. The procedure, implemented with the 182 actual data points, consists of four steps:

Step 1. Selection of all actual data as the training data of the ANN.
Step 2. Use of Eqs. (4) and (5) to calculate the lower bound (L) and upper bound (U), respectively, for each variable (i.e., the input and output variables) of the ANN. The results are shown in Table 1.

$$L = U_{set} - Sk_L \sqrt{-2\,\frac{s_x^2}{N_L}\,\ln\!\big(\phi(L)\big)} \qquad (4)$$

$$U = U_{set} + Sk_U \sqrt{-2\,\frac{s_x^2}{N_U}\,\ln\!\big(\phi(U)\big)} \qquad (5)$$

In these equations:

$$h_{set} = s_x^2/n \qquad (6)$$

$$U_{set} = (\min + \max)/2 \qquad (7)$$

where $s_x^2 = \sum_{i=1}^{n}(x_i - \bar{x})^2/(n-1)$ is the dataset variance, $n$ is the dataset size, and $U_{set}$ is the core of the dataset, so that

$$Sk_L = N_L/(N_L + N_U) \quad \text{and} \quad Sk_U = N_U/(N_L + N_U) \qquad (8)$$

The skewness (Sk) characterizes the degree of asymmetry of a distribution around the dataset core. SkU indicates a distribution fraction with an asymmetric tail extending above the core value.

Fig. 2. Topology of the 2P-S-1 MLP used for small-dataset training (input layer of the P variables and their P MF values, hidden layer of S* neurons, one output).

SkL indicates a distribution fraction with an asymmetric tail extending below the core value. NL and NU represent the numbers of data points smaller and greater than Uset, respectively. Since a common function is used to diffuse the dataset, the values of the membership function at the two range limits, the left end L and the right end U, approach zero. Consequently, in Eqs. (4) and (5) a very small number, 10^-20, was assigned to φ(L) and φ(U) as the value of the membership function. The lower (L) and upper (U) limits calculated using Eqs. (4) and (5) for each variable are listed in Table 1.
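As a minimal sketch of Step 2 (an assumption of ours, not the authors' code), the function below applies Eqs. (4)–(8) to one variable; the function name and the toy temperature array are hypothetical.

```python
import numpy as np

def mtd_bounds(x, phi=1e-20):
    """Estimate the lower/upper diffusion limits (L, U) of one variable, Eqs. (4)-(8)."""
    x = np.asarray(x, dtype=float)
    u_set = (x.min() + x.max()) / 2.0         # Eq. (7): core of the dataset
    s2 = x.var(ddof=1)                        # sample variance s_x^2
    n_l = int(np.sum(x < u_set))              # number of points below the core
    n_u = int(np.sum(x > u_set))              # number of points above the core
    sk_l = n_l / (n_l + n_u)                  # Eq. (8): left skewness
    sk_u = n_u / (n_l + n_u)                  # Eq. (8): right skewness
    L = u_set - sk_l * np.sqrt(-2.0 * s2 / n_l * np.log(phi))   # Eq. (4)
    U = u_set + sk_u * np.sqrt(-2.0 * s2 / n_u * np.log(phi))   # Eq. (5)
    return L, U, u_set

# Toy usage with made-up temperatures in kelvin (not data from the paper):
print(mtd_bounds(np.array([293.2, 298.1, 303.1, 308.2, 313.2])))
```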

Step 3. Generation of virtual data between L and U for each variable. After the domain (L, U) of each variable has been calculated, ns data points are generated within this domain; these are defined as virtual data. These data, however, cannot be used directly to train the BPNN because they do not carry complete information about the data distribution. Therefore, a membership-function value is calculated for each data point using Eq. (9). The viscosity ratio is the output of the ANN, so there is no need to calculate its MF value. As an example, a set of virtual data points generated for each variable and the corresponding MF values are summarized in Table 2.

$$MF = \begin{cases} \dfrac{N_v - L}{U_{set} - L}, & N_v \le U_{set} \\[6pt] \dfrac{U - N_v}{U - U_{set}}, & N_v > U_{set} \end{cases} \qquad (9)$$

where Nv is a virtual number.

Step 4. Repeating Step 3 for each variable gives rise to 462 virtual data points for training the MLP network.
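A hedged sketch of Steps 3–4 is given below (not the authors' implementation): virtual points are drawn between L and U and given MF values by Eq. (9). The uniform sampling, the function names, and the assumed core value halfway between the Table 1 limits are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mf_value(nv, L, U, u_set):
    """Eq. (9): membership value of a virtual point nv located in (L, U)."""
    nv = np.asarray(nv, dtype=float)
    return np.where(nv <= u_set,
                    (nv - L) / (u_set - L),
                    (U - nv) / (U - u_set))

def virtual_points(L, U, u_set, ns):
    """Step 3: draw ns virtual points in (L, U) and attach their MF values."""
    nv = rng.uniform(L, U, size=ns)           # uniform sampling is an assumption
    return nv, mf_value(nv, L, U, u_set)

# Toy usage with the temperature limits from Table 1 (L = 238.25 K, U = 323.35 K)
# and an assumed core value halfway between them:
nv, mf = virtual_points(238.25, 323.35, (238.25 + 323.35) / 2.0, ns=5)
print(np.round(nv, 2), np.round(mf, 3))
```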

The conventional topology of the MLP can be represented as a P-S-1 network, as shown in Fig. 1. Here P denotes the five input variables corresponding to the temperature (T), the volume fraction of nanoparticles (f), the particle size (d), the base-fluid viscosity (μf), and the density ratio of base fluid to particle; S is the number of neurons in the hidden layer; and 1 is the single output corresponding to the relative viscosity (μe/μf).

Fig. 3. Comparison of the predicted relative viscosity of CuO-(60:40) PG+H2O at T=293.35 K with the experimental values [39] and other models [23–25,44,45].


Fig. 4. Comparison of the predicted relative viscosity with the experimental values [42] and other models [23–25,44,45] for SiO2-H2O at volume fraction 0.019.

Fig. 6. Comparison of the predicted relative viscosity with the experimental values [41] and other models [23–25,44,45] for TiO2-water at T=298.15 K.



In sum, the hybrid model employed in this work combines the mega-trend-diffusion technique with the neural network. It comprises an input layer of ten neurons, corresponding to the temperature (T), the volume fraction of nanoparticles (f), the particle size (d), the base-fluid viscosity (μf), and the density ratio of base fluid to particle, together with their corresponding MF values; a variable number of neurons in the hidden layer; and one neuron in the output layer corresponding to the relative viscosity (μe/μf). This topology is illustrated in Fig. 2. The output-layer neuron has a linear transfer function, while the hidden-layer neurons have a hyperbolic tangent transfer function. The neural network was trained with the scaled conjugate gradient backpropagation (trainscg) algorithm available in the MATLAB neural network toolbox.

The 462 virtual data points obtained in Step 4, plus 10% of the actual data (18 points), were assigned to the training of the MLP, while the rest of the actual data were used to test the model.
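As a minimal sketch of this training setup (not the authors' MATLAB code), the snippet below fits a 2P-S-1 network on synthetic placeholder arrays; scikit-learn's MLPRegressor with an LBFGS solver stands in for the scaled conjugate gradient (trainscg) training used in the paper, and the seven hidden neurons anticipate the optimization result reported below.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

# Placeholder arrays standing in for the datasets described in the text:
# 10 inputs (T, f, d, base-fluid viscosity, density ratio, and their 5 MF values),
# output = relative viscosity. The random values carry no physical meaning.
rng = np.random.default_rng(1)
X_train, y_train = rng.random((480, 10)), 1.0 + 0.8 * rng.random(480)
X_test, y_test = rng.random((164, 10)), 1.0 + 0.8 * rng.random(164)

# Scale all points to [-1, 1], as in the first step of the training procedure.
scaler = MinMaxScaler(feature_range=(-1, 1))
X_train_s = scaler.fit_transform(X_train)
X_test_s = scaler.transform(X_test)

# 2P-S-1 topology: tanh hidden layer, linear output neuron.
net = MLPRegressor(hidden_layer_sizes=(7,), activation="tanh",
                   solver="lbfgs", max_iter=300, random_state=0)
net.fit(X_train_s, y_train)
mse = np.mean((net.predict(X_test_s) - y_test) ** 2)
print(f"test MSE on the synthetic data: {mse:.4f}")
```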

In the first step of the training procedure, all data points were scaled to the range [−1, 1]. The number of neurons in the hidden layer plays an important role in network optimization. Therefore, in order to optimize the network, achieve good generalization, and avoid over-fitting, we started with one neuron in the hidden layer and gradually increased the number of neurons until no significant improvement in the performance of the net was observed.

Fig. 5. Comparison of the predicted relative viscosity with the experimental values [42] and other models [23–25,44,45] for Al2O3-water at volume fraction 0.015.

For this study, the mean square error (MSE) was chosen as the measure of network performance. The optimal network has seven neurons in the hidden layer, a maximum of 300 epochs, a momentum constant of 0.8, and a learning rate of 0.01; the MSE for this configuration was 0.0014. After determining the optimal topology of the network, the actual data and the corresponding MF values were used as testing data for the trained MLP to calculate the absolute average error. The average absolute deviation (AAD) is defined as:

$$\mathrm{AAD\%} = \frac{100}{N}\sum_{i=1}^{N}\left|\frac{X_i^{\exp} - X_i^{calc}}{X_i^{\exp}}\right| \qquad (10)$$

where N is the number of data points.
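As a quick illustration of Eq. (10) (not taken from the paper), the short sketch below computes the AAD in percent for made-up viscosity ratios.

```python
import numpy as np

def aad_percent(x_exp, x_calc):
    """Eq. (10): average absolute deviation (%) of calculated from experimental values."""
    x_exp = np.asarray(x_exp, dtype=float)
    x_calc = np.asarray(x_calc, dtype=float)
    return 100.0 * np.mean(np.abs((x_exp - x_calc) / x_exp))

# Toy usage with made-up relative viscosities:
print(round(aad_percent([1.10, 1.25, 1.40], [1.08, 1.27, 1.36]), 2))
```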

3. Results and discussion

As already mentioned, in this study the diffusional neural network (DNN) scheme has been employed to compute the relative viscosity of nanofluids. The selected nanofluids were CuO in propylene glycol+water, SiO2 in water, Al2O3 in water, TiO2 in water, CuO in ethylene glycol+water, and SiO2 in ethanol. Several models [23–25,44,45] capable of calculating the viscosity of nanofluids were selected for comparison with our results. It should be remembered that most of the older theoretical models [23,24,44,45] depend only on the volume fraction of particles and the viscosity of the base fluid. A few theoretical models have been developed for determining the viscosity of a nanoparticle suspension [25]; in the model of Masoumi et al. [25], the volume fraction of particles, the viscosity of the base fluid, and the particle diameter are considered. However, the parameters of that model are adjusted for a limited range of nanoparticle sizes, so it is not usable over a wide range of nanoparticle sizes.
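For orientation, the comparison models can be written in closed form; the paper does not reproduce their equations, so the expressions below are the commonly cited forms of the Einstein [23], Batchelor [24], Brinkman [44], and Lundgren [45] correlations (the Masoumi model [25], which also requires the particle diameter, is omitted here).

```python
def einstein(phi):      # Einstein [23]: mu_r = 1 + 2.5*phi
    return 1.0 + 2.5 * phi

def batchelor(phi):     # Batchelor [24]: mu_r = 1 + 2.5*phi + 6.2*phi**2
    return 1.0 + 2.5 * phi + 6.2 * phi ** 2

def brinkman(phi):      # Brinkman [44]: mu_r = (1 - phi)**(-2.5)
    return (1.0 - phi) ** -2.5

def lundgren(phi):      # Lundgren [45]: mu_r = 1 / (1 - 2.5*phi)
    return 1.0 / (1.0 - 2.5 * phi)

phi = 0.04  # 4 vol% nanoparticles
print([round(f(phi), 4) for f in (einstein, batchelor, brinkman, lundgren)])
```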

Fig. 7. Comparison of the predicted relative viscosity with the experimental values [40] and other models [23–25,44,45] for CuO-(60:40) ethylene glycol+H2O at volume fraction 0.05.


Fig. 8. Comparison of the predicted relative viscosity with the experimental values [40] and other models [23–25,44,45] for CuO-(60:40) ethylene glycol+H2O at T=303.05 K.

Fig. 10. Comparison of the predicted relative viscosity with the experimental data [38] and other models [23–25,44,45] for SiO2-ethanol at d=35 nm and T=298.15 K.



We carried out the diffusional neural network calculations explained in the previous section for all fluids and compared the results with those calculated by the above-mentioned models. To verify the validity of our calculations and of the models, the predicted values of the viscosity ratio were also compared with the experimental measurements [38–42]. Fig. 3 compares the predicted relative viscosity of CuO-(60:40) PG:H2O at T=293.35 K, as a function of the nanoparticle volume fraction, with the experimental data [39] and other models [23–25,44,45]. As may be observed in this figure, the relative viscosity calculated using the DNN method agrees well with the experimental values [39].

Fig. 4 shows the comparison of the predicted relative viscosity of SiO2/H2O at volume fraction 0.019, as a function of temperature, with the experimental values [42] and other models [23–25,44,45]; the DNN clearly outperforms the other models. The predicted relative viscosity of Al2O3-water at volume fraction 0.015, as a function of temperature, is compared with the experimental values [42] and other models [23–25,44,45] in Fig. 5. Fig. 6 compares the predicted relative viscosity of TiO2-water at T=298.15 K, as a function of volume fraction, with the experimental values [41] and other models [23–25,44,45].

Fig. 9. Comparison of the predicted relative viscosity with the experimental values [38] and other models [23–25,44,45] for SiO2-ethanol at d=94 nm and T=298.15 K.

Furthermore, Figs. 7 and 8 illustrate, respectively, the relative viscosity of CuO-(60:40) EG:H2O at volume fraction 0.05 as a function of temperature, and of CuO-(60:40) EG:H2O at T=303.05 K as a function of volume fraction, compared with the experimental values [40] and other models [23–25,44,45].

Figs. 9 and 10 show, respectively, the comparison of the predicted relative viscosity of SiO2-ethanol with nanoparticle size 94 nm at T=298.15 K and of SiO2-ethanol with nanoparticle size 35 nm at T=298.15 K, as a function of volume fraction, with the experimental data [38] and other models [23–25,44,45]. As all the figures illustrate, the DNN method outperforms the other models.

To check the validity of the results further, the average absolute deviation (AAD) of the relative viscosity from the literature values over the entire range of volume fraction, for all the nanofluids, is tabulated in Table 3. As Table 3 attests, the relative viscosity computed with the mega-trend-diffusion neural network method is superior to that of the other models. From Table 3 it is immediately verifiable that the DNN-MLP model possesses a high ability to predict the relative viscosity, with an overall AAD of 3.44%. There is good agreement between the predicted and experimental values of the viscosity ratio (r=0.994).

Acknowledgments

The authors thank Yasouj University and Shiraz University ofTechnology for providing the computer facility.

Table 3
The AAD% of the calculated relative viscosity using the Einstein, Batchelor, Brinkman, Lundgren, and Masoumi models and the ANN method, relative to the experimental data [38–42].

Systems                               Einstein   Batchelor   Brinkman   Lundgren   Masoumi   ANN
CuO-(60:40) ethylene glycol+H2O       14.32      13.71       13.87      13.61      44.9      1.63
SiO2-H2O                              27.58      27.52       27.54      27.52      67.6      5.55
Al2O3-H2O                             31.86      31.81       31.83      31.81      7.59      6.64
TiO2-H2O                              6.84       6.76        6.78       6.76       17.41     4.08
CuO-(60:40) propylene glycol+H2O      18.64      18.00       18.15      17.90      32.24     1.22
SiO2-Ethanol                          18.01      18.09       18.22      18.01      8.20      3.25



References

[1] S.U.S. Choi, ASME Publications 66 (1995) 99–105.
[2] S. Lee, S.U.S. Choi, S. Li, J.A. Eastman, Journal of Heat Transfer 121 (1999) 280–289.
[3] X. Zhang, M. Fujii, International Journal of Heat and Mass Transfer 48 (2004) 2926–2932.
[4] Y. Ren, H. Xie, A. Cai, Applied Physics 38 (2005) 3958–3961.
[5] R. Prasher, E.P. Phelan, ASME Journal of Heat Transfer 128 (2006) 588–596.
[6] S. Lee, S.U.S. Choi, S. Li, J.A. Eastman, ASME Journal of Heat Transfer 121 (1999) 280–290.
[7] H.A. Mintsa, G. Roy, C.T. Nguyen, D. Doucet, International Journal of Thermal Sciences 48 (2009) 363–371.
[8] M. Mehrabi, M. Sharifpur, J.P. Meyer, International Communications in Heat and Mass Transfer 39 (2012) 971–977.
[9] S.K. Das, N. Putra, P. Thiesen, W. Roetzel, ASME Journal of Heat Transfer 125 (2003) 567–573.
[10] N.S. Cheng, A.W.K. Law, Powder Technology 129 (2003) 156–160.
[11] C.T. Nguyen, F. Desgranges, G. Roy, N. Galanis, T. Mare, S. Boucher, H.A. Mintsa, International Journal of Heat and Fluid Flow 28 (2007) 1492–1506.
[12] S.M.S. Murshed, K.C. Leong, C. Yang, International Journal of Thermal Sciences 47 (2008) 560–568.
[13] R. Prasher, D. Song, J. Wang, P.E. Phelan, Applied Physics Letters 89 (2006) 133108–133111.
[14] C.T. Nguyen, F. Desgranges, N. Galanis, G. Roy, T. Mare, S. Boucher, H. Angue Mintsa, International Journal of Thermal Sciences 47 (2008) 103–111.
[15] Y. Yang, Z.G. Zhang, E.A. Grulke, W.B. Anderson, G. Wu, Journal of Heat and Mass Transfer 48 (2005) 1107–1116.
[16] W.J. Tseng, K.C. Lin, Journal of Materials Science and Engineering A 355 (2003) 186–192.
[17] H. Chen, W. Yang, Y. He, Y. Ding, L. Zhang, C. Tan, A.A. Lapkin, D.V. Bavaykin, Journal of Powder Technology 183 (2007) 63–72.
[18] Y. He, Y. Jin, H. Chen, Y. Ding, D. Cang, H. Lu, Journal of Heat and Mass Transfer 50 (2007) 2272–2281.
[19] C.T. Nguyen, F. Desgranges, G. Roy, N. Galanis, T. Maré, S. Boucher, H.A. Mintsa, Journal of Heat and Fluid Flow 28 (2007) 1492–1506.
[20] P.K. Namburu, D.P. Kulkarni, D. Misra, D.K. Das, Experimental Thermal and Fluid Science 32 (2007) 397–402.
[21] M. Chandrasekar, S. Suresh, A. Chandra Bose, Experimental Thermal and Fluid Science 34 (2010) 210–216.
[22] W. Duangthongsuk, S. Wongwises, Journal of Experimental Thermal and Fluid Science 33 (2009) 706–714.
[23] A. Einstein, Annals of Physics 19 (1906) 289–306.
[24] G.K. Batchelor, Journal of Fluid Mechanics 83 (1977) 97–117.
[25] N. Masoumi, N. Sohrabi, A. Behzadmehr, Journal of Physics D: Applied Physics 42 (2009) 055501–055507.
[26] M.S. Hosseini, A. Mohebbi, S. Ghader, Chinese Journal of Chemical Engineering 18 (2010) 102–110.
[27] S.E.B. Maiga, C.T. Nguyen, N. Galanis, G. Roy, Superlattices and Microstructures 35 (2004) 543–557.
[28] D.P. Kulkarni, D.K. Das, G. Chukwu, Journal of Nanoscience and Nanotechnology 6 (2006) 1150–1154.
[29] H. Karimi, F. Yousefi, Chinese Journal of Chemical Engineering 15 (2007) 765–771.
[30] H. Karimi, N. Saghatoleslami, M.R. Rahimi, Chemical and Biochemical Engineering Quarterly 24 (2009) 167–176.
[31] H. Kurt, M. Kayfeci, Applied Energy 86 (2009) 2244–2248.
[32] S.S. Sablani, A. Kacimov, J. Perret, A.S. Mujumdar, A. Campo, International Journal of Heat and Mass Transfer 48 (2005) 665–790.
[33] H. Kurt, K. Atik, M. Ozkaymak, A.K. Binark, Journal of the Energy Institute 80 (2006) 46–51.
[34] M.M. Papari, F. Yousefi, J. Moghadasi, H. Karimi, A. Campo, International Journal of Thermal Sciences 50 (2011) 44–52.
[35] P.P. Van der Smagt, Neural Networks 7 (1994).
[36] C.C.F. Huang, C. Moraga, International Journal of Approximate Reasoning 35 (2004) 137–161.
[37] R. Lanouette, J. Thibault, J.L. Valade, Computers & Chemical Engineering 23 (1999) 1167–1176.
[38] J. Chevalier, O. Tillement, F. Ayela, Applied Physics Letters 91 (2007) 233103.
[39] D.P. Kulkarni, D.K. Das, S.L. Patil, Journal of Nanoscience and Nanotechnology 7 (2007) 2318–2322.
[40] P.K. Namburu, D.P. Kulkarni, D. Misra, D.K. Das, Experimental Thermal and Fluid Science 32 (2007) 397–402.
[41] W. Duangthongsuk, S. Wongwises, Experimental Thermal and Fluid Science 33 (2009) 706–714.
[42] A. Tavman, M. Turgut, H.P. Chirtoc, S. Schuchmann, International Scientific Journal 34 (2008) 99–104.
[43] D.C. Li, C.S. Wu, T.I. Tsai, Y.S. Lin, Computers and Operations Research 34 (2007) 966–982.
[44] H.C. Brinkman, Journal of Chemical Physics 20 (1952) 571–581.
[45] T. Lundgren, Journal of Fluid Mechanics 51 (1972) 273–299.