Solving Zadeh's Magnus Problem

Post on 22-Feb-2015


Mohammad Reza Rajati¹, Jerry Mendel¹, Dongrui Wu²

¹University of Southern California, ²GE Global Research

Kolmogorov → Dempster → Zadeh

Zadeh: “…[Various theories of uncertainty such as] fuzzy logic and probability theory are complementary rather than competitive”

• Most Swedes are tall. Most tall Swedes are blond. What is the probability that Magnus (a Swede picked at random) is blond?

• Involves linguistic quantifiers (most) and linguistic attributes (tall, blond)

• An implicit assignment of the linguistic value “Most” to:

- the portion of Swedes who are tall

- the portion of tall Swedes who are blond

• Therefore categorized as a prototypical advanced CWW (Computing with Words) problem.

• Q1 A’s are B’s

• Q2 (A and B)’s are C’s

• Q1 × Q2 A’s are (B and C)’s

• At least (Q1 × Q2) A’s are C’s

• × is the multiplication of two fuzzy sets via:

$$\mu_{Q_1 \times Q_2}(z) = \sup_{z = xy} \min\big(\mu_{Q_1}(x), \mu_{Q_2}(y)\big)$$

• At least is the following operation:

$$\mu_{\text{At least } Q}(x) = \sup_{y \le x} \mu_Q(y)$$
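A discretized numerical sketch of these two operations, assuming a hypothetical triangular MF for Most (the parameters below are illustrative, not the talk's model):

```python
import numpy as np

# Discretized sketch of the extension-principle product Q1 x Q2 and of
# "At least Q". The triangular model of Most is an illustrative assumption.

u = np.linspace(0.0, 1.0, 101)  # grid of proportions in [0, 1]

def tri(x, a, b, c):
    """Triangular MF with support [a, c] and peak at b."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

most = tri(u, 0.5, 0.75, 1.0)  # hypothetical MF for "Most"

def product(mu1, mu2):
    """mu_{Q1xQ2}(z) = sup over z = x*y of min(mu_{Q1}(x), mu_{Q2}(y))."""
    out = np.zeros_like(u)
    for i, x in enumerate(u):
        for j, y in enumerate(u):
            k = int(round(x * y * (len(u) - 1)))  # grid index nearest z = x*y
            out[k] = max(out[k], min(mu1[i], mu2[j]))
    return out

def at_least(mu):
    """mu_{At least Q}(x) = sup over y <= x of mu_Q(y): a running maximum."""
    return np.maximum.accumulate(mu)

most2 = product(most, most)   # MF of Most^2, peaked near 0.75^2 = 0.5625
al = at_least(most2)          # nondecreasing envelope of Most^2
```

On this grid, `most2` peaks near u = 0.56 and `at_least` turns it into a nondecreasing MF that stays at 1 beyond the peak.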

• 50% of the students of the EE Department at USC are graduate students.

• 80% of the graduate students of the EE Department at USC are on F1 visas.

• 50% × 80% = 40% of the students of the EE Department at USC are on F1 visas.

• At least 40% of the students of the EE Department at USC are on F1 visas.

• In the Magnus problem: Q1 = Most, Q2 = Most, A = Swede, B = tall, C = blond

• Therefore, At least (Most × Most) = Most² Swedes are both tall and blond.

• Most is modeled as a monotonic quantifier, and therefore At least (Most²) = Most²
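Since a monotonic quantifier has a nondecreasing MF, the running supremum in At least leaves it unchanged. A minimal check, using an assumed S-shaped MF for Most² (not the talk's actual model):

```python
import numpy as np

# Check that "At least Q" = Q for a monotonically nondecreasing quantifier.
# The piecewise-linear S-shaped MF below is an illustrative assumption.

u = np.linspace(0.0, 1.0, 101)
most2 = np.clip((u - 0.4) / 0.35, 0.0, 1.0)  # hypothetical nondecreasing MF

at_least_most2 = np.maximum.accumulate(most2)  # sup over y <= x

assert np.allclose(at_least_most2, most2)  # At least(Most^2) = Most^2
```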

• Zadeh interprets a linguistic constraint on the portion of a population as a linguistic probability (LProb), and directly concludes that:

• LProb(Magnus is blond) = Most × Most = Most²

• We construct an MF (membership function) for Most:

• We construct a vocabulary of type-1 fuzzy probabilities to translate the solution to a word: Absolutely improbable, Almost improbable, Very unlikely, Unlikely, Moderately likely, Likely, Very likely, Almost certain, Absolutely certain

• MFs of the words are shown here:

• The MF of Most² is depicted in the following:

• We compute the Jaccard similarity between Most² and the members of the vocabulary

• It is concluded that “It is Likely that Magnus is blond”
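The decoding step computes, for discretized fuzzy sets, the Jaccard similarity sum(min)/sum(max) and keeps the most similar word. A sketch with hypothetical stand-in MFs (not the talk's vocabulary):

```python
import numpy as np

# Jaccard-similarity decoder for discretized fuzzy sets. All MFs here are
# illustrative stand-ins, not the MFs used in the talk.

u = np.linspace(0.0, 1.0, 101)

def tri(x, a, b, c):
    """Triangular MF with support [a, c] and peak at b."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

def jaccard(mu_a, mu_b):
    """J(A, B) = sum(min(mu_A, mu_B)) / sum(max(mu_A, mu_B))."""
    return np.sum(np.minimum(mu_a, mu_b)) / np.sum(np.maximum(mu_a, mu_b))

solution = tri(u, 0.5, 0.65, 0.8)  # stand-in for the MF of Most^2
vocabulary = {
    "Unlikely": tri(u, 0.1, 0.25, 0.4),
    "Likely": tri(u, 0.5, 0.65, 0.8),
    "Very likely": tri(u, 0.7, 0.85, 1.0),
}

best = max(vocabulary, key=lambda w: jaccard(solution, vocabulary[w]))
print(best)  # with these stand-in MFs the closest word is "Likely"
```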

• Most Swedes are tall

• A few Swedes are not tall

• We generally have the following syllogism:

• Q A’s are B’s

• ¬Q A’s are not B’s

$$\mu_{\neg Q}(u) = \mu_Q(1 - u)$$

$$\mu_{\text{not } B}(u) = 1 - \mu_B(u)$$
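Both negation rules can be verified on a discretized grid. A minimal sketch with assumed S-shaped MFs (the parameters are illustrative, not the talk's models):

```python
import numpy as np

# Numerical check of the two negation rules:
#   antonym quantifier:   mu_{notQ}(u) = mu_Q(1 - u)
#   complement attribute: mu_{not B}(u) = 1 - mu_B(u)
# The MF parameters below are illustrative assumptions.

u = np.linspace(0.0, 1.0, 101)

most = np.clip((u - 0.5) / 0.35, 0.0, 1.0)  # hypothetical S-shaped "Most"
few = most[::-1]                            # mu_Few(u) = mu_Most(1 - u)

tall = np.clip((u - 0.4) / 0.3, 0.0, 1.0)   # hypothetical attribute "tall"
not_tall = 1.0 - tall                       # complement: mu_{not tall}

# Reversing the symmetric grid evaluates Most at 1 - u:
assert np.allclose(few, np.clip(((1.0 - u) - 0.5) / 0.35, 0.0, 1.0))
```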

• Similarly:

• Most tall Swedes are blond

• A few tall Swedes are not blond

• However, we do not know the distribution of blonds among those few Swedes who are not tall.

• All of them or none of them could be blond

• The available information is summarized in the following tree:

• In the pessimistic case, none of the Swedes who are not tall are blond, so:

$$LProb^{-} = \frac{Most \times Most + Few \times None}{Most + Few}$$

• In the optimistic case, all of the Swedes who are not tall are blond, so:

$$LProb^{+} = \frac{Most \times Most + Few \times All}{Most + Few}$$

• LProb(blond|Swede) = LProb(tall|Swede) × LProb(blond|tall and Swede) + LProb(¬tall|Swede) × LProb(blond|¬tall and Swede)

• Assuming LProb(blond|¬tall and Swede) is either None or All yields LProb⁻(Magnus is blond) or LProb⁺(Magnus is blond).
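Substituting crisp stand-in values for the fuzzy quantities shows how the decomposition brackets the answer; the numbers below are illustrative assumptions, not centroids from the talk:

```python
# Crisp sanity check of the total-probability decomposition.
# "Most" is replaced by an illustrative stand-in value of 0.85.

most = 0.85        # crisp stand-in for LProb(tall|Swede)
few = 1.0 - most   # stand-in for LProb(not tall|Swede)

pessimistic = most * most + few * 0.0  # LProb(blond|not tall) = None
optimistic = most * most + few * 1.0   # LProb(blond|not tall) = All

# pessimistic ~ 0.72, optimistic ~ 0.87: the answer lies in between.
```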

• All and None are modeled as singletons:

$$\mu_{None}(u) = \begin{cases} 1 & u = 0 \\ 0 & \text{otherwise} \end{cases} \qquad \mu_{All}(u) = \begin{cases} 1 & u = 1 \\ 0 & \text{otherwise} \end{cases}$$

• We also construct models for Most and Few, and a vocabulary of linguistic probabilities

• MFs of the T2FS (type-2 fuzzy set) models of Most and Few:

• We construct a vocabulary of linguistic probabilities to decode the solution to a word:

• The pessimistic and optimistic linguistic probabilities are depicted here:

• The Jaccard similarities between the solutions and the members of the vocabulary are shown in the following table:

• “The probability that Magnus is blond is between Likely and Very Likely”

• Using the average centroids of the solutions, we can also say that:

• “The probability that Magnus is blond is between around 80% and around 89%.”

• Linguistic approximation is similar to rounding numeric values

• The resolution of the vocabulary is important: when vocabularies are small, the pessimistic and optimistic probabilities may map to the same word

• We studied the effect of the size of the vocabulary on the decoded solution

• Vocabularies of different sizes:

• The tables show the similarities of the solutions with the members of each vocabulary

• Using all of these vocabularies, both the pessimistic and the optimistic solutions map to the same word, which is Likely for the first vocabulary and Very Likely for the others.

• For small vocabularies, the total ignorance present in the problem does not affect the outcome.
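The resolution effect can be illustrated with a toy decoder that maps a numeric probability to the word with the nearest centroid; all word centroids and solution values below are hypothetical:

```python
# Toy illustration of vocabulary resolution: a coarse vocabulary can map the
# pessimistic and optimistic solutions to the same word, while a finer one
# can separate them. All centroids below are hypothetical.

def decode(p, vocabulary):
    """Return the word whose centroid is nearest to the probability p."""
    return min(vocabulary, key=lambda w: abs(vocabulary[w] - p))

coarse = {"Unlikely": 0.25, "Moderately likely": 0.5, "Likely": 0.85}
fine = {"Unlikely": 0.25, "Moderately likely": 0.5, "Likely": 0.7,
        "Very likely": 0.85, "Almost certain": 0.95}

pessimistic, optimistic = 0.72, 0.87  # stand-in centroids of the solutions

print(decode(pessimistic, coarse), decode(optimistic, coarse))  # same word
print(decode(pessimistic, fine), decode(optimistic, fine))      # two words
```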

• Novel Weighted Averages are promising when dealing with linguistic probabilities

• Our solution builds a probability model for the problem which obeys a set of axioms

• Is the problem really reduced to calculating the belief and plausibility of a Dempster-Shafer model?
