TRANSCRIPT

Page 1: Solving Zadeh's Magnus Problem

Mohammad Reza Rajati¹, Jerry Mendel¹, Dongrui Wu²

¹University of Southern California, ²GE Global Research

Page 2: Solving Zadeh's Magnus Problem

Kolmogorov → Dempster → Zadeh

Zadeh: "…[Various theories of uncertainty such as] fuzzy logic and probability theory are complementary rather than competitive."

Page 3: Solving Zadeh's Magnus Problem

• Most Swedes are tall. Most tall Swedes are blond. What is the probability that Magnus (a Swede picked at random) is blond?

Page 4: Solving Zadeh's Magnus Problem

• Involves linguistic quantifiers (most) and linguistic attributes (tall, blond).

• An implicit assignment of the linguistic value "Most" to:

  the portion of Swedes who are tall,

  the portion of tall Swedes who are blond.

• Therefore, it is categorized as a prototypical advanced CWW (Computing with Words) problem.

Page 5: Solving Zadeh's Magnus Problem

• Q1 A's are B's

• Q2 (A and B)'s are C's

• (Q1 × Q2) A's are (B and C)'s

• At least (Q1 × Q2) A's are C's

• × is the multiplication of two fuzzy sets, defined via:

  µ_{Q1×Q2}(z) = sup_{z = x·y} min(µ_{Q1}(x), µ_{Q2}(y))

• At least is the following operation:

  µ_{At least Q}(x) = sup_{y ≤ x} µ_Q(y)
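A minimal numerical sketch of these two operations (not the authors' code): type-1 fuzzy sets discretized on [0, 1], with an illustrative ramp model of Most; the grid size and the MF parameters are assumptions made here for illustration only.

import numpy as np

U = np.linspace(0.0, 1.0, 201)              # discretized proportion/probability axis

def ramp(u, a, b):
    """Nondecreasing shoulder MF: 0 below a, linear on [a, b], 1 above b."""
    return np.clip((u - a) / (b - a), 0.0, 1.0)

mu_most = ramp(U, 0.5, 0.75)                # assumed (monotone) model of "Most"

def fuzzy_product(mu_q1, mu_q2):
    """mu_{Q1 x Q2}(z) = sup over z = x*y of min(mu_{Q1}(x), mu_{Q2}(y))."""
    out = np.zeros_like(U)
    for i, x in enumerate(U):
        for j, y in enumerate(U):
            k = int(round(x * y * (len(U) - 1)))    # nearest grid index to the product x*y
            out[k] = max(out[k], min(mu_q1[i], mu_q2[j]))
    return out

def at_least(mu_q):
    """mu_{At least Q}(x) = sup over y <= x of mu_Q(y): a running maximum along the grid."""
    return np.maximum.accumulate(mu_q)

mu_most2 = fuzzy_product(mu_most, mu_most)  # MF of Most^2, used on the following slides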

Page 6: Solving Zadeh's Magnus Problem

• 50% of the students of the EE Department at USC are graduate students.

• 80% of the graduate students of the EE Department at USC are on F1 visas.

• 50% × 80% = 40% of the students of the EE Department at USC are graduate students on F1 visas.

• At least 40% of the students of the EE Department at USC are on F1 visas.

Page 7: Solving Zadeh's Magnus Problem

• In the Magnus problem:

  Q1 = Most, Q2 = Most, A = Swede, B = tall, C = blond

• Therefore, at least (Most × Most) = Most² Swedes are both tall and blond.

• Most is modeled as a monotonic quantifier, and therefore At least (Most²) = Most².
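Using the sketch given after Page 5 (with its assumed monotone model of Most), this claim can be checked numerically:

# Most^2 inherits monotonicity from Most, so the running maximum that implements
# "At least" leaves it essentially unchanged (expected ~0, up to grid rounding).
print(np.max(np.abs(at_least(mu_most2) - mu_most2)))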

Page 8: Solving Zadeh's Magnus Problem

• Zadeh interprets a linguistic constraint on the portion of a population as a linguistic probability (LProb), and directly concludes that:

• LProb(Magnus is blond) = Most × Most = Most²

Page 9: Solving Zadeh's Magnus Problem

• We construct an MF for Most:

Page 10: Solving Zadeh's Magnus Problem

• We construct a vocabulary of type-1 fuzzy probabilities to translate the solution into a word:

  Absolutely improbable, Almost improbable, Very unlikely, Unlikely, Moderately likely, Likely, Very likely, Almost certain, Absolutely certain

Page 11: Solving Zadeh's Magnus Problem

• The MFs of the words are shown here:

Page 12: Solving Zadeh's Magnus Problem

• The MF of Most² is depicted in the following:

Page 13: Solving Zadeh's Magnus Problem

• We compute the Jaccard similarity between Most² and the members of the vocabulary.

• It is concluded that "It is Likely that Magnus is blond."
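A small sketch of this decoding step, reusing U, ramp, np, and mu_most2 from the sketch after Page 5. The Jaccard similarity of two discretized fuzzy sets is the sum of minima over the sum of maxima; the word MFs below are rough placeholders chosen here, not the vocabulary used in the paper.

def trapezoid(u, a, b, c, d):
    """Trapezoidal MF with support (a, d) and core [b, c] (assumes a < b <= c < d)."""
    return np.minimum(ramp(u, a, b), 1.0 - ramp(u, c, d))

def jaccard(mu_a, mu_b):
    """Jaccard similarity of two discretized fuzzy sets."""
    return float(np.sum(np.minimum(mu_a, mu_b)) / np.sum(np.maximum(mu_a, mu_b)))

vocabulary = {                                   # placeholder word models
    "Unlikely":          trapezoid(U, 0.10, 0.20, 0.30, 0.40),
    "Moderately likely": trapezoid(U, 0.35, 0.45, 0.55, 0.65),
    "Likely":            trapezoid(U, 0.60, 0.70, 0.80, 0.90),
    "Very likely":       ramp(U, 0.80, 0.95),
}

# Decode Most^2 to the most similar word in the vocabulary.
best_word = max(vocabulary, key=lambda w: jaccard(mu_most2, vocabulary[w]))
print(best_word)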

Page 14: Solving Zadeh's Magnus Problem

• Most Swedes are tall.

• A few Swedes are not tall.

• In general, we have the following syllogism:

• Q A's are B's

• (¬Q) A's are (not B)'s

  where

  µ_{¬Q}(u) = µ_Q(1 − u)

  µ_{not B}(u) = 1 − µ_B(u)
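The two negation operations can be sketched on the same grid as before; here "A few" is obtained purely as the antonym of the assumed model of Most, which is an illustrative choice rather than the paper's MF.

def antonym(mu_q):
    """mu_{notQ}(u) = mu_Q(1 - u); reversing the uniform grid on [0, 1] realizes u -> 1 - u."""
    return mu_q[::-1]

def complement(mu_b):
    """mu_{not B}(u) = 1 - mu_B(u), the complement of an attribute MF such as tall."""
    return 1.0 - mu_b

mu_few = antonym(mu_most)    # model of "A few", for the portion of Swedes who are not tall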

Page 15: Solving Zadeh's Magnus Problem

• Similarly:

• Most tall Swedes are blond.

• A few tall Swedes are not blond.

• However, we do not know about the distribution of blonds among those few Swedes who are not tall.

• All of them or none of them can be blond.

Page 16: Solving Zadeh's Magnus Problem

• The available information is summarized in the following tree:

Page 17: Solving Zadeh's Magnus Problem

• In the pessimistic case, none of the Swedes who are not tall are blond, so:

  LProb⁻ = (Most × Most + Few × None) / (Most + Few)

• In the optimistic case, all of the Swedes who are not tall are blond, so:

  LProb⁺ = (Most × Most + Few × All) / (Most + Few)

Page 18: Solving Zadeh's Magnus Problem

• LProb(blond | Swede) = LProb(tall | Swede) × LProb(blond | tall and Swede) + LProb(¬tall | Swede) × LProb(blond | ¬tall and Swede)

• Assuming LProb(blond | ¬tall and Swede) is either None or All yields LProb⁻(Magnus is blond) or LProb⁺(Magnus is blond), respectively.
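A brute-force sketch of these two linguistic probabilities, one alpha level at a time: sample the alpha-cuts of the assumed Most and Few models from the earlier sketches and take the extremes of (Most × Most + Few × b)/(Most + Few), with b = 0 standing for None and b = 1 for All. The paper's actual computation uses Novel Weighted Averages on its own (type-2) models, so this grid sampling only approximates the idea.

def alpha_cut(mu, alpha):
    """Smallest and largest grid points whose membership is at least alpha."""
    pts = U[mu >= alpha]
    return float(pts.min()), float(pts.max())

def lprob_cut(alpha, b):
    """Approximate alpha-cut of LProb for b = 0 (None, pessimistic) or b = 1 (All, optimistic)."""
    m_lo, m_hi = alpha_cut(mu_most, alpha)   # LProb(tall | Swede) and LProb(blond | tall and Swede)
    f_lo, f_hi = alpha_cut(mu_few, alpha)    # LProb(not tall | Swede)
    vals = [(x1 * x2 + y * b) / (x1 + y)
            for x1 in np.linspace(m_lo, m_hi, 21)
            for x2 in np.linspace(m_lo, m_hi, 21)
            for y in np.linspace(f_lo, f_hi, 21)]
    return min(vals), max(vals)

print("pessimistic:", lprob_cut(0.5, 0.0))   # alpha = 0.5 cut of LProb-
print("optimistic: ", lprob_cut(0.5, 1.0))   # alpha = 0.5 cut of LProb+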

Page 19: Solving Zadeh's Magnus Problem

• All and None are modeled as singletons:

  µ_None(u) = 1 if u = 0, and 0 otherwise

  µ_All(u) = 1 if u = 1, and 0 otherwise

• We also construct models for Most and Few, and a vocabulary of linguistic probabilities.

Page 20: Solving Zadeh's Magnus Problem

• MFs of the T2FS (type-2 fuzzy set) models of Most and Few:

Page 21: Solving Zadeh's Magnus Problem

• We construct a vocabulary of linguistic probabilities to decode the solution into a word:

Page 22: Solving Zadeh's Magnus Problem

• The pessimistic and optimistic linguistic probabilities are depicted here:

Page 23: Solving Zadeh's Magnus Problem

• The Jaccard similarities between the solutions and the members of the vocabulary are shown in the following table:

Page 24: Solving Zadeh's Magnus Problem

• "The probability that Magnus is blond is between Likely and Very Likely."

• Using the average centroids of the solutions, we can also say that:

• "The probability that Magnus is blond is between around 80% and around 89%."
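One way such a "between around X% and around Y%" summary can be produced, sketched here for the type-1 case: rebuild each solution MF from its alpha-cuts and take its center of gravity. The paper uses interval type-2 models, whose centroids are intervals computed with the KM algorithm and then averaged, and its own MFs, so the placeholder models in these sketches will not reproduce the exact 80%/89% figures. This reuses lprob_cut from the sketch after Page 18.

def rebuild(cuts, levels=50):
    """Reassemble a discretized MF from its alpha-cuts (resolution principle)."""
    mu = np.zeros_like(U)
    for alpha in np.linspace(0.02, 1.0, levels):
        lo, hi = cuts(alpha)
        mu = np.maximum(mu, alpha * ((U >= lo) & (U <= hi)))
    return mu

def centroid(mu):
    """Center of gravity of a discretized fuzzy set on U."""
    return float(np.sum(U * mu) / np.sum(mu))

lprob_pess = rebuild(lambda a: lprob_cut(a, 0.0))
lprob_opt  = rebuild(lambda a: lprob_cut(a, 1.0))
print(round(100 * centroid(lprob_pess)), round(100 * centroid(lprob_opt)))   # rough percentages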

Page 25: Solving Zadeh's Magnus Problem

• Linguistic approximation is similar to rounding numeric values.

• The resolution of the vocabulary is important.

• When vocabularies are small, the pessimistic and optimistic probabilities may map to the same word.

• We studied the effect of the size of the vocabulary on the decoded solution.

Page 26: Solving Zadeh's Magnus Problem

• Vocabularies with different sizes:

Page 27: Solving Zadeh's Magnus Problem
Page 28: Solving Zadeh's Magnus Problem
Page 29: Solving Zadeh's Magnus Problem

• Tables show the similarities of the solutions with the members of each of the vocabularies.

Page 30: Solving Zadeh's Magnus Problem

• Using all of these vocabularies, both the pessimistic and the optimistic solutions map to the same word, which is Likely for the first vocabulary and Very Likely for the others.

• For small vocabularies, the total ignorance present in the problem does not affect the outcome.

Page 31: Solving Zadeh's Magnus Problem

• Novel Weighted Averages are promising when dealing with linguistic probabilities.

• Our solution builds a probability model for the problem that obeys a set of axioms.

• Is the problem really reduced to calculating the belief and plausibility of a Dempster-Shafer model?