Maximum entropy distribution under moments and quantiles constraints

Bartosz Barzdajn 1

Główny Urząd Miar, Laboratorium Długości Zakładu Długości i Kąta, ul. Elektoralna 2, 00-139 Warszawa, Poland

1 Currently at Imperial College London, Department of Materials, South Kensington Campus, London SW7 2AZ, UK.
E-mail address: [email protected]
URL: http://www.gum.gov.pl

Measurement 57 (2014) 102–107
http://dx.doi.org/10.1016/j.measurement.2014.07.012
0263-2241/© 2014 Elsevier Ltd. All rights reserved.

Article history: Received 10 February 2014; Received in revised form 24 June 2014; Accepted 23 July 2014; Available online 1 August 2014.

Keywords: Relative entropy; Maximum entropy principle; Moments; Quantiles; Lagrange multipliers; Uncertainty; Measurement

Abstract

When the results of a measurement are transferred from one stage in the chain of traceability to the next, the information gathered about the measurement is summarised. The summary involves, for example, details about applied measurement methods, environmental conditions, and measurement results including measurement uncertainty. The information about uncertainty usually takes the form of summary statistics such as an estimate, a standard deviation and a coverage interval specified by two quantiles. The information is used to construct a probability distribution for a given property or characteristic of an artefact, which is needed when the artefact is used as a reference in a subsequent stage. But in order to ensure impartiality in the process to establish the probability distribution, a general rule should be applied, for example, the principle of maximum entropy. In this paper, the application of this principle to establish a probability distribution when the mentioned summary statistics are available will be discussed, and its extension to moment constraints to satisfy the requirements of metrology will be introduced.

© 2014 Elsevier Ltd. All rights reserved.

1. Introduction

Establishing traceability in metrology requires that measurement results can be linked to references through a documented unbroken chain [1]. Consider the following simple example. At the first stage of a traceability chain, an artefact has been calibrated, for example, a steel rule. As a result of the calibration, the expectation of a parameter characterising the artefact, for example, the deviation from the nominal value, is given with the associated standard deviation and two quantiles, but without any information about the probability distribution for the parameter. The artefact is then used in a subsequent stage as a reference standard. At the formulation stage of uncertainty evaluation of the subsequent stage, the information about the parameter is used to assign a probability distribution for the parameter [2]. To ensure impartiality in the process to assign the probability distribution, it is recommended in Section 6 of the GUMS1 [3] to use the principle of maximum entropy. Since the GUM [4] and its supplements are key documents giving harmonised procedures for uncertainty evaluation in metrology, it is important to provide explicit tools for the assignment of a probability distribution for a quantity when the available information is that most commonly encountered in metrology, i.e., an estimate, a standard deviation and quantiles, regardless of whether they were calculated according to the GUM, GUMS1 or any other guidelines. It is worth emphasising that the method presented in this paper is of interest only in cases when the quantiles and higher moments do not introduce redundant information. An example might be a situation in which we interpret the results of Monte Carlo simulations applied to a model for which the conditions of applicability of the central limit theorem have not been met. This may be due to non-linearity of the measurement model or the impact of non-Gaussian influence factors, but what is most important here is that, in such cases, the distribution often cannot be expressed explicitly in a closed form.

The paper is organised as follows. First, the concept of entropy and the maximum entropy principle will be introduced, including several potential applications. Next, the general framework for evaluating a maximum entropy distribution using Lagrange multipliers will be discussed. Further, it will be elaborated how this technique can be applied to cases when the moments of a distribution are known. This topic is very well known and widely described in the literature; however, it provides the background for further considerations. In the following section it will be explained how the mentioned technique can be applied to evaluate a maximum entropy distribution specified by moments and quantiles. Those considerations will be backed by two numerical experiments constructed in such a way that they allow verification of the presented results.

2. Entropy and the maximum entropy principle

In this paper, the concept of entropy taken from information theory, the so-called Shannon entropy [5], will be considered. Since the concept of entropy originated in physics, and was then strongly developed for use in information theory, it is not an easy task to provide an intuitive interpretation of the concept appropriate to the topic of uncertainty evaluation. Entropy is often regarded as an expectation of information content. Information content itself is a measure of the "informativeness" of a given possible outcome. Therefore, by selecting the maximum entropy distribution we select a least informative distribution. The principle of maximum entropy was introduced by Jaynes in [6]. Since then, the principle has found many applications, for example [7–10].

For a continuous one-dimensional random variable X described by a probability density function p(x) that has infinite support, the entropy h is defined by the functional

h[p(x)] = -K \int_{-\infty}^{\infty} p(x) \log p(x) \, dx,    (2.1)

where K is a constant value. Here, square brackets are used to embrace the argument of a functional. We will restrict ourselves in this paper to the case described above, although the presented framework can be applied to other cases as well, e.g. for a random variable that has support bounded from both sides. Furthermore, it will be assumed that K = 1.
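As a quick numerical illustration of Eq. (2.1), the following sketch (not part of the original paper; it assumes NumPy and SciPy are available) evaluates the entropy of a Gaussian density by quadrature and compares it with the known closed form (1/2) log(2 pi e sigma^2):

import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def entropy(pdf):
    # h[p] = -int p(x) log p(x) dx, Eq. (2.1) with K = 1
    integrand = lambda x: -pdf(x) * np.log(pdf(x)) if pdf(x) > 0 else 0.0
    value, _ = quad(integrand, -np.inf, np.inf)
    return value

sigma = 0.3
h_numeric = entropy(lambda x: norm.pdf(x, loc=1.0, scale=sigma))
h_closed = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
print(h_numeric, h_closed)  # the two values agree to quadrature accuracy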

Using the principle of maximum entropy, summary statistics may be used to determine the distribution of the random variable. For example, if the only available information is that the variable has finite support, then according to the principle of maximum entropy a rectangular distribution should be assigned to it. Many other commonly encountered cases are also well known. For example, GUMS1 [3] treats situations when we have different kinds of knowledge about the random variable and its distribution. Furthermore, maximum entropy distributions under moment constraints have been widely studied. Nevertheless, there is still a lack of study of cases when other summary statistics are known, such as the median, the quartile range, or quantiles of various orders together with classical moments. Such statistics can be the outcome of a calculation of uncertainty when an expectation and expanded uncertainty are insufficient to summarise the results. The development of a general framework will not only provide the means to reconstruct the distribution from such summary information, but could also popularise the application of more robust summary statistics based on ranks, and overall give rise to new important results. However, one has to keep in mind that, when applying the principle of maximum entropy, the distribution evaluated on the basis of limited information is by no means identical to the state-of-knowledge distribution. The difference between these two distributions will reflect the loss of information due to an imperfect way of summarising information about uncertainty. This fact can be used to assess the applicability of various frameworks.

3. Maximum entropy framework

Recall that entropy, like most summary statistics, is a functional. Maximisation of a functional is an optimisation problem and we may use the method of Lagrange multipliers to obtain its solution. The general form of the Lagrangian, in the case when we wish to determine the argument p(x) that minimises the functional f[p(x)] under N constraints g_i[p(x)] = c_i, is

L[p(x), \lambda] = f[p(x)] - \sum_{i=1}^{N} \lambda_i \left( g_i[p(x)] - c_i \right),

where \lambda_i are Lagrange multipliers. To find a maximum of f[p(x)], we seek to minimise the objective function -f[p(x)] by solving the system of equations

\frac{\partial L[p(x), \lambda]}{\partial p(x)} = 0,    (3.1)

\frac{\partial L[p(x), \lambda]}{\partial \lambda_i} = 0,    (3.2)

where Eq. (3.1) defines a stationary point and Eq. (3.2) defines arguments for which the constraints are satisfied. By solving the above equations with f[p(x)] replaced by the entropy h and the constraints expressing the given information about the distribution, we can determine the maximum entropy distribution. Furthermore, by exploring the properties of those equations we may also specify the family of distributions to which the maximum entropy distribution belongs.

4. Maximum entropy distribution determined by moments

It is assumed that if the moments of the distribution are known, it is possible to specify the family of distributions to which the corresponding maximum entropy distribution belongs in terms of a set of parameters. The following optimisation problem defines the maximum entropy distribution determined by a set of moment constraints:

\min_p \; -h[p(x)] \; : \; \int_{-\infty}^{\infty} x^n p(x) \, dx = m_n, \quad n = 0, 1, \ldots, N,    (4.1)

where m_n is the nth arithmetical moment

m_n := m_n[p(x)] = \int_{-\infty}^{\infty} x^n p(x) \, dx,    (4.2)

with m_0 = 1 equal to the normalisation factor and m_1 the expectation. If we shift the distribution by its expectation, then the second arithmetical moment will define the variance of the distribution. To write down a general formula for p(x) under such constraints we will use, as mentioned before, the method of Lagrange multipliers. The Lagrangian for such an optimisation problem is

L[p(x), \lambda] = -h[p(x)] - \sum_{n=0}^{N} \lambda_n \left( \int_{-\infty}^{\infty} x^n p(x) \, dx - m_n \right).

Since

\int \frac{\partial f[p(x)]}{\partial p(x)} \, \phi(x) \, dx = \lim_{\epsilon \to 0} \frac{f[p(x) + \epsilon \phi(x)] - f[p(x)]}{\epsilon} = \left. \frac{d f[p(x) + \epsilon \phi(x)]}{d\epsilon} \right|_{\epsilon = 0},

we may show that

\frac{\partial f[p(x)]}{\partial p(x)} = r(x') \quad \text{if} \quad f[p(x)] = \int_{-\infty}^{\infty} p(x) r(x) \, dx,    (4.3)

where x' is a dummy variable, which implies that

\frac{\partial m_n[p(x)]}{\partial p(x)} = x^n

and

\frac{\partial f[p(x)]}{\partial p(x)} = 1 + \log p(x') \quad \text{if} \quad f[p(x)] = \int_{-\infty}^{\infty} p(x) \log p(x) \, dx.    (4.4)

By applying the relations (4.3) and (4.4) to the functional derivatives of the Lagrangian we get

\frac{\partial L[p(x)]}{\partial p(x)} = 0 = 1 + \log p(x) - \sum_{n=0}^{N} \lambda_n x^n.    (4.5)

From this equation we can evaluate the general formula for the maximum entropy distribution

p(x) = \exp\left( \sum_{n=0}^{N} \lambda_n x^n - 1 \right),    (4.6)

parametrised by \{\lambda_i\}. It can easily be seen that, if the expectation and variance are known, the maximum entropy distribution will be Gaussian, and, if only the boundaries of the support are known, the maximum entropy distribution will be rectangular. In order to find specific values of the \{\lambda_i\}, and therefore to specify (4.6) explicitly, we have to solve a system of N + 1 nonlinear equations
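For instance, for N = 2 the Gaussian claim can be checked directly by completing the square in the exponent of (4.6) (a short verification added here for clarity):

% For N = 2, Eq. (4.6) gives p(x) = exp(\lambda_0 + \lambda_1 x + \lambda_2 x^2 - 1),
% which is normalisable only for \lambda_2 < 0. Completing the square gives
p(x) = \exp\left( \lambda_2 \left( x + \frac{\lambda_1}{2\lambda_2} \right)^2
       + \lambda_0 - 1 - \frac{\lambda_1^2}{4\lambda_2} \right),
\qquad \mu = -\frac{\lambda_1}{2\lambda_2}, \quad \sigma^2 = -\frac{1}{2\lambda_2},

i.e. a Gaussian density with mean \mu and variance \sigma^2, the constant term being fixed by the normalisation constraint m_0 = 1.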

m_n = \int_{-\infty}^{\infty} x^n \exp\left( \sum_{k=0}^{N} \lambda_k x^k - 1 \right) dx, \quad n = 0, 1, \ldots, N.

This can be accomplished, e.g., by an iterative Newton method.
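As an illustration, the following sketch (my construction, not code from the paper) recovers the multipliers for N = 2 and the moments of a standard Gaussian; it uses SciPy's root finder in place of a hand-rolled Newton iteration, and truncates the support to [-20, 20] so that intermediate iterates remain integrable:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import root

m = [1.0, 0.0, 1.0]  # target moments m_0, m_1, m_2 of the standard Gaussian

def p(x, lam):
    # Eq. (4.6) with N = 2
    return np.exp(lam[0] + lam[1] * x + lam[2] * x**2 - 1.0)

def residuals(lam):
    # n-th moment of the current density minus the target moment m_n
    return [quad(lambda x: x**n * p(x, lam), -20, 20)[0] - m[n] for n in range(3)]

# start with a negative quadratic coefficient so the density is normalisable
sol = root(residuals, x0=[1.0, 0.0, -0.5])
print(sol.x)  # approaches (1 - log(sqrt(2*pi)), 0, -1/2), i.e. N(0, 1)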

5. Maximum entropy distribution determined by quantiles and moments

In metrology the most commonly used summary statistics are the expectation \mu, the standard deviation \sigma, and two quantiles q_1 and q_2 determining a coverage interval that is defined as

\int_{q_1}^{q_2} p(x) \, dx = c.

Here, p(x) is a probability density function for the random variable X, and c defines the coverage probability, i.e. the probability that X takes values in the interval [q_1, q_2] \subset \mathbb{R}, in most cases taken as 95%. In this section I present a method to determine the maximum entropy distribution based on these summary statistics. By including quantiles we also provide tools to determine the maximum entropy distribution when other summary statistics are available, such as the median (specified by the quantile of rank 0.5) or the quartile range (represented by the two quantiles of rank 0.25 and 0.75). The following optimisation is relevant to the considered case:

\min_p \; -h[p(x)] \; : \; \left\{ \int_{-\infty}^{\infty} p(x) \, dx = 1 \right\} \cap \left\{ \int_{-\infty}^{\infty} x \, p(x) \, dx = \mu \right\} \cap \left\{ \int_{-\infty}^{\infty} (x - \mu)^2 p(x) \, dx = \sigma^2 \right\} \cap \left\{ \int_{-\infty}^{q_1} p(x) \, dx = Q_1 \right\} \cap \left\{ \int_{-\infty}^{q_2} p(x) \, dx = Q_2 \right\},

where Q_1 and Q_2 are the orders of the quantiles. In order to apply the same framework as in the previous section we need to adjust the limits of integration. To achieve this we use the Heaviside step function, defined as

h(x) = \begin{cases} 0, & \text{for } x < 0, \\ \tfrac{1}{2}, & \text{for } x = 0, \\ 1, & \text{for } x > 0, \end{cases}

to redefine the quantile constraints. Since

\int_{-\infty}^{q_1} p(x) \, dx = \int_{-\infty}^{\infty} p(x) \, h(q_1 - x) \, dx,

the optimisation may be defined in the following way:

\min_p \; -h[p(x)] \; : \; \left\{ \int_{-\infty}^{\infty} p(x) \, dx = 1 \right\} \cap \left\{ \int_{-\infty}^{\infty} x \, p(x) \, dx = \mu \right\} \cap \left\{ \int_{-\infty}^{\infty} (x - \mu)^2 p(x) \, dx = \sigma^2 \right\} \cap \left\{ \int_{-\infty}^{\infty} p(x) \, h(q_1 - x) \, dx = Q_1 \right\} \cap \left\{ \int_{-\infty}^{\infty} p(x) \, h(q_2 - x) \, dx = Q_2 \right\}.

Repeating the steps from the previous section we get

\frac{\partial L[p(x), \lambda]}{\partial p(x)} = 0 = 1 + \log p(x) - \lambda_0 - \lambda_1 x - \lambda_2 (x - \mu)^2 - \lambda_3 h(q_1 - x) - \lambda_4 h(q_2 - x).

By putting together constant factors we get a general formula for the maximum entropy distribution under the considered constraints:

p(x) = \exp\left( \lambda_0 - 1 + \lambda_1 x + \lambda_2 (x - \mu)^2 + \lambda_3 h(q_1 - x) + \lambda_4 h(q_2 - x) \right).    (5.1)

Using the same approach we can derive a general formula for the maximum entropy distribution under constraints of known arithmetical moments and quantiles:

p(x) = \exp\left( \sum_{n=0}^{N} a_n x^n + \sum_{m=1}^{M} b_m \, h(q_m - x) \right),    (5.2)

where N is the number of moment constraints and M is the number of quantile constraints. We can switch between arithmetical and central moments by shifting the constraints by the expectation (or an estimate of the expectation). Since a distribution of the form

q(x) = \exp\left( a_0 + a_1 x + a_2 x^2 \right)

is a Gaussian distribution, the maximum entropy distribution when the expectation, variance and two quantiles are known will be a piecewise continuous distribution, shaped on each interval as a Gaussian distribution, only with different scaling coefficients; the constant term in the exponent is

a_0 + a_3 + a_4 \quad \text{for } x \in (-\infty, q_1),
a_0 + a_4 \quad \text{for } x \in (q_1, q_2),
a_0 \quad \text{for } x \in (q_2, \infty).

At the points q_1 and q_2 themselves we have to mind that the Heaviside step function takes the value 1/2, so the scaling factors are different there.
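To make this piecewise structure concrete, here is a minimal sketch (added for illustration; the function name and parameter layout are mine) that evaluates a density of the family (5.1), using NumPy's built-in step function with h(0) = 1/2:

import numpy as np

def maxent_pdf(x, lam, mu, q1, q2):
    # Eq. (5.1); np.heaviside(z, 0.5) is the step function with h(0) = 1/2
    return np.exp(lam[0] - 1 + lam[1] * x + lam[2] * (x - mu)**2
                  + lam[3] * np.heaviside(q1 - x, 0.5)
                  + lam[4] * np.heaviside(q2 - x, 0.5))

# On (-inf, q1), (q1, q2) and (q2, inf) this is one Gaussian shape scaled by
# the constant factors exp(lam3 + lam4), exp(lam4) and 1, respectively.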

6. Case study

6.1. Description of case study

Consider the simple measurement equation

d = f(r, \alpha) = r \sin \alpha,

where d is the quantity of interest, e.g., a parameter characterising an artefact, and r and \alpha are influence factors. This simple model could reflect the situation when we wish to determine a cathetus of a right-angled triangle knowing the hypotenuse and the opposite angle. Suppose that the uncertainty of measurement of r and \alpha can be modelled by assigning to them, respectively, the random variables X_1 \sim N(10, 0.1) and X_2 \sim N(0.275\pi, 0.2). Then, according to the GUM, the uncertainty of measurement of d can be expressed by the random variable Y defined as

Y = f(X_1, X_2) = X_1 \sin X_2.    (6.1)

The probability density function for Y is given by

p_Y(\eta) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} p_X(\xi_1, \xi_2) \, \delta(\eta - f(\xi_1, \xi_2)) \, d\xi_1 \, d\xi_2,    (6.2)

where p_X is the joint probability density function for X = (X_1, X_2)^\top. Since in many cases it is impossible to evaluate the integral (6.2) analytically, the most common approach is to use a Monte Carlo method to approximate it. In this method we simulate random draws of pairs of values from the joint probability distribution for X, and evaluate Eq. (6.1) for each random draw. As a result, we obtain a simulation of random draws from the distribution for Y. The next step is to summarise the information obtained. In most cases, this would be done by evaluating the mean and standard deviation of the random draws together with estimates of the two quantiles of rank 0.025 and 0.975.
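A minimal sketch of this Monte Carlo step (assuming NumPy, and reading the second parameters of N(., .) as standard deviations; the draw count matches the 10^6 used in Section 6.3):

import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(10.0, 0.1, size=10**6)           # X1 ~ N(10, 0.1)
x2 = rng.normal(0.275 * np.pi, 0.2, size=10**6)  # X2 ~ N(0.275*pi, 0.2)
y = x1 * np.sin(x2)                              # Eq. (6.1)

m, s = y.mean(), y.std(ddof=1)                   # estimate and standard deviation
q1, q2 = np.quantile(y, [0.025, 0.975])          # quantiles of rank 0.025, 0.975
print(m, s, q1, q2)  # the summary statistics passed down the traceability chain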

Now suppose we passed the calibrated artefact and this summary information to another laboratory where the artefact will be used as a reference standard in a subsequent calibration. This laboratory would also like to use the Monte Carlo method for uncertainty evaluation. The random variable Y will represent an influence factor in the same way that X_1 and X_2 did in the previous calibration. The problem is that this laboratory does not have any information about the distribution for Y other than the summary statistics included in the calibration certificate. Nevertheless, the principle of maximum entropy can be used to "generate" the distribution for Y from the summary information.

6.2. Details of numerical evaluation

The first step in the proposed framework is to choose the number of parameters specifying the probability density function according to the number of constraints. In the considered case there are five parameters. Having completed the specification, we obtain from Eq. (5.2) an analytical form of the probability density function and we may perform the numerical optimisation in order to describe the distribution explicitly. In this paper the results were calculated using GNU Octave [11], a numerical computing environment similar to Matlab, and its 'sqp' routine. To evaluate the constraints and the objective function, a doubly-adaptive Clenshaw–Curtis quadrature rule was used [12]. In order to validate the results, an optimisation in terms of the values of the probability density function at evenly spaced discrete values of the quantity was performed, without any assumption about the family to which the distribution belongs. As a starting point for the discrete case, a rectangular distribution was chosen. Summing up, in the continuous case five parameters were optimised, i.e. a_1, a_2, ..., a_5, while in the discrete case the values of the probability density function were treated as separate parameters. Therefore, the discretised distribution could assume a general shape.
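The paper's computation uses Octave's 'sqp'; the sketch below (my reconstruction under stated assumptions, not the author's code) sets up the analogous constrained problem with SciPy's SLSQP, truncating the support to a finite interval and splitting every integral at the quantiles so the quadrature never straddles the jumps of (5.1):

import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

# illustrative summary statistics (close to those of Fig. 6.1)
mu, sigma, q1, q2 = 7.46, 1.29, 4.55, 9.52
LO, HI = mu - 8 * sigma, mu + 8 * sigma  # truncated support (an assumption)

def pdf(x, lam):
    # Eq. (5.1) with the five parameters lam[0], ..., lam[4]
    return np.exp(lam[0] - 1 + lam[1] * x + lam[2] * (x - mu)**2
                  + lam[3] * np.heaviside(q1 - x, 0.5)
                  + lam[4] * np.heaviside(q2 - x, 0.5))

def integral(f, lam):
    # integrate f(x) * pdf(x) over [LO, HI], split at the quantile jumps
    return sum(quad(lambda x: f(x) * pdf(x, lam), a, b)[0]
               for a, b in [(LO, q1), (q1, q2), (q2, HI)])

def cdf_upto(t, lam):
    # integral of the density from LO up to t, split at interior quantiles
    pts = [LO] + [q for q in (q1, q2) if q < t] + [t]
    return sum(quad(lambda x: pdf(x, lam), a, b)[0] for a, b in zip(pts, pts[1:]))

neg_entropy = lambda lam: integral(lambda x: np.log(pdf(x, lam)), lam)  # -h[p]
constraints = [
    {'type': 'eq', 'fun': lambda lam: integral(lambda x: 1.0, lam) - 1.0},
    {'type': 'eq', 'fun': lambda lam: integral(lambda x: x, lam) - mu},
    {'type': 'eq', 'fun': lambda lam: integral(lambda x: (x - mu)**2, lam) - sigma**2},
    {'type': 'eq', 'fun': lambda lam: cdf_upto(q1, lam) - 0.025},
    {'type': 'eq', 'fun': lambda lam: cdf_upto(q2, lam) - 0.975},
]
res = minimize(neg_entropy, x0=[1.0, 0.0, -0.5 / sigma**2, 0.0, 0.0],
               constraints=constraints, method='SLSQP')
print(res.x)  # the five fitted parameters of Eq. (5.1)

The starting point is an unnormalised Gaussian shape (negative quadratic coefficient), which keeps every iterate integrable on the truncated support.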

6.3. Numerical results

In Fig. 6.1, the plot on the left shows the results of the Monte Carlo calculation with 10^6 draws and the values of specific statistics. The plot on the right shows the results for the continuous and discrete approximations of the maximum entropy distribution.

[Fig. 6.1. On the left – scaled histogram of a sample drawn from the distribution for Y and summary statistics calculated from this sample: mean m = 7.46; m − s = 6.17 and m + s = 8.75, where s is the standard deviation; q1 = 4.55 and q2 = 9.52, the lower and upper quantiles of rank 0.025 and 0.975. On the right – the continuous and discrete approximations of the maximum entropy distribution constrained to those summary statistics.]

[Fig. 6.2. On the left – scaled histogram of a sample drawn from N(μ, σ) and summary statistics calculated from this sample: mean m = 0.00; m − s = −1.00 and m + s = 1.00, where s is the standard deviation; q1 = −1.96 and q2 = 1.96, the lower and upper quantiles of rank 0.025 and 0.975. On the right – the continuous and discrete approximations of the maximum entropy distribution constrained to those summary statistics, compared with N(m, s).]

Because of the finite domain and the limited resolution of the discrete approximation, we may observe different results and jumps near the quantile constraints. Recall that the discrete approximation is introduced only to compare the shapes of the probability density functions, mainly to see if similar jumps at the quantiles would be present.

In order to gain more confidence in the results another test was performed. The same approach was applied to a sample drawn from a normal distribution. Because the Gaussian distribution is the maximum entropy distribution when the expectation and variance are known, constraining the optimisation additionally to the quantiles of the Gaussian distribution should recreate this distribution. The results are presented in Fig. 6.2.

7. Conclusions

This work presents a new family of probability distributions specified by the maximum entropy principle, moments and quantiles. Although one might argue that a distribution belonging to this family is unobservable in nature, we have to keep in mind that it reflects our lack of knowledge arising from the decision to present detailed information about an experiment in a compact and convenient form.

The key motivation for this research was to provide a basis for measuring the information loss related to summarising a distribution by a finite set of summary statistics that are most common in metrology, when the measure of information loss is the Kullback–Leibler divergence [13]. Such studies can answer some important questions in metrology: "Which quantiles are most informative?", "Which way of summarising information about uncertainty is applicable in a given case?", "Can the estimation of higher moments bring an improvement in this scope?". Furthermore, this might lead to improvements in uncertainty evaluation through better modelling of the influence factors or better presentation of the outcomes. When the recommendations presented in GUMS1 [3] are applied, usually bounds and moments are taken into account. The presented results might help to extend this framework to utilise such summary statistics as the median, sample quantiles, etc.
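For reference, the measure in question is the standard Kullback–Leibler divergence (its definition is recalled here; the paper itself does not restate it):

D_{\mathrm{KL}}(p \,\|\, q) = \int_{-\infty}^{\infty} p(x) \log \frac{p(x)}{q(x)} \, dx,

where p would be the state-of-knowledge distribution and q the maximum entropy distribution recovered from the summary statistics; the smaller the divergence, the less information the chosen summary has discarded.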

It is worth mentioning that the metrological community is looking forward to introducing in the near future a new standard for calibration certificates. This standard should allow the information written on the certificate to be extended by digital content. By means of that, it may become possible to provide the customer with, for instance, complete information about the results of a Monte Carlo simulation, and in consequence to solve some of the issues addressed in this paper.

Acknowledgements

This paper was prepared as a part of EMRP Joint Research Project NEW06 "Traceability for Computationally-Intensive Metrology". The EMRP is jointly funded by the EMRP participating countries within EURAMET and the European Union. The aim of this project is to establish mechanisms for assuring the quality of algorithms, their implementations and computational software in the scope of metrology.

I would like to thank Peter Harris from the National Physical Laboratory for his support in writing this paper, and also Maurice Cox and Alistair Forbes for their valuable comments.

References

[1] JCGM 200:2008, International vocabulary of metrology – Basic and general concepts and associated terms.

[2] M.G. Cox, P.M. Harris, Software support for metrology, Tech. Rep., National Physical Laboratory, 2010.

[3] JCGM 101:2008, Evaluation of measurement data – Supplement 1 to the "Guide to the expression of uncertainty in measurement".

[4] JCGM 100:2008, Evaluation of measurement data – Guide to the expression of uncertainty in measurement.

[5] C.E. Shannon, A mathematical theory of communication, Bell Syst. Tech. J. 27 (1948) 379–423, 623–656.

[6] E.T. Jaynes, Information theory and statistical mechanics, Phys. Rev. 106 (1957) 620–630.

[7] S.J. Phillips, M. Dudík, R.E. Schapire, A maximum entropy approach to species distribution modeling, in: Proceedings of the Twenty-First International Conference on Machine Learning, ACM, 2004, p. 83.

[8] J.M. Cozzolino, M.J. Zahner, The maximum-entropy distribution of the future market price of a stock, Oper. Res. 21 (6) (1973) 1200–1211.

[9] E.T. Jaynes, On the rationale of maximum-entropy methods, Proc. IEEE 70 (9) (1982) 939–952.

[10] J.N. Kapur, H.K. Kesavan, Entropy Optimization Principles with Applications, Academic Press, 1992.

[11] J.W. Eaton, D. Bateman, S. Hauberg, GNU Octave Version 3.0.1 Manual: A High-Level Interactive Language for Numerical Computations, CreateSpace Independent Publishing Platform, 2009, ISBN 1441413006.

[12] C.W. Clenshaw, A.R. Curtis, A method for numerical integration on an automatic computer, Numer. Math. 2 (1) (1960) 197–205.

[13] S. Kullback, R.A. Leibler, On information and sufficiency, Ann. Math. Stat. 22 (1) (1951) 79–86.