Granular Knowledge Representation and Inference using
Labels and Label Expressions
Jonathan Lawry
Dept. Engineering Mathematics,
University of Bristol,
Bristol, BS8 1TR, UK
Granular Knowledge Representation and Inference using Labels and Label Expressions – p. 1/37
Overview
Information granules and granular models.
Representing imprecise labels.
A prototype interpretation of labels.
Imprecise probabilities from imprecise descriptions.
Applications: Case Studies.
Conclusion.
Information Granules
Fundamental to human communication is the ability to effectively describe the continuous domain of sensory perception in terms of a finite set of description labels.
Granular modelling permits us to process and transmit information efficiently at an appropriate level of detail.
Information Granule [Zadeh]: A granule is a clump of objects (points) which are drawn together by indistinguishability, similarity, proximity and functionality.
Information Granule [Lawry]: An information granule is a characterisation of the relationship between a discrete label or expression and elements of the underlying (often continuous) domain which it describes.
Granular Models
Granular modelling aims to develop formal representations of information granules and embed these into computational models (intelligent systems).
Granular models (Zadeh) are high-level (often rule-based) models which incorporate concepts as represented by information granules.
Mechanisms for representing and processing uncertainty and linguistic vagueness are fundamental.
Model transparency and accuracy are dual goals.
Traceability of decision processes is vital in many applications.
Key problem areas are: learning, fusion and reasoning.
Deciding What to Say...
Describing the world requires us to make decisions about what is the best choice of words when referring to objects and instances, in order to convey the information we intend.
Suppose you are witness to a robbery: how should you describe the robber so that police on patrol in the streets will have the best chance of spotting him?
You will have certain labels that can be applied, for example tall, short, medium, fat, thin, blonde, etc., but which are more appropriate (according to convention)?
We contend that this is fundamentally an epistemic problem (Williamson).
The Epistemic Stance
Within a population of communicating agents, individuals assume the existence of a set of labelling conventions for the population governing what linguistic labels and expressions can be appropriately used to describe particular instances.
Making an assertion to describe an object or instance x involves making a decision as to what labels can be appropriately used to describe x.
An individual's knowledge of labelling conventions is partial and uncertain.
This will result in uncertainty about the appropriateness of labels to describe instance x.
In accordance with De Finetti it is reasonable to model this uncertainty using subjective probabilities.
Measures of Appropriateness
As a (big) simplification...
Assume that there is a finite set of labels LA = {L1, . . . , Ln} for describing elements of the universe Ω.
LE is the set of expressions generated from LA through recursive application of the connectives ∧, ∨ and ¬.
LA = {red, blue, green, yellow, . . .}; LE = {red ∧ blue, ¬yellow, green ∨ ¬yellow, . . .}
For θ ∈ LE, x ∈ Ω, µθ(x) = the subjective probability that θ is appropriate to describe x.
Mass Functions
Dx is the complete set of labels appropriate to describe x.
Dx = {red, pink} means that both red and pink can be appropriately used to describe x, and no other labels are appropriate.
mx : 2^LA → [0, 1] is a probability mass function on subsets of labels.
For F ⊆ LA, mx(F) is the subjective probability that Dx = F.
The mass function mx and the appropriateness measure µ are strongly related: µθ(x) is the sum of mx over those values of Dx consistent with θ.
General Relationships
'x is θ' requires that Dx ∈ λ(θ), where ∀θ, ϕ ∈ LE:
∀L ∈ LA, λ(L) = {F ⊆ LA : L ∈ F}
λ(θ ∧ ϕ) = λ(θ) ∩ λ(ϕ)
λ(θ ∨ ϕ) = λ(θ) ∪ λ(ϕ)
λ(¬θ) = λ(θ)^c
This results in the following equation relating mx and µ:
µθ(x) = P(Dx ∈ λ(θ)) = Σ_{S ∈ λ(θ)} mx(S)
For example, λ(red ∧ ¬pink) = {F ⊆ LA : red ∈ F, pink ∉ F}, so that
µ_{red ∧ ¬pink}(x) = Σ_{F : red ∈ F, pink ∉ F} mx(F)
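As a minimal sketch of the relationship µθ(x) = Σ_{S ∈ λ(θ)} mx(S), the following code computes λ(θ) and µθ(x) directly; the labels, mass values and the tuple encoding of expressions are all invented for illustration:

```python
from itertools import chain, combinations

LA = ["red", "pink", "orange"]

# Hypothetical mass function m_x: subjective probabilities that D_x equals
# each set of labels (the masses sum to 1).
m_x = {
    frozenset(["red", "pink"]): 0.5,
    frozenset(["red"]): 0.3,
    frozenset(): 0.2,
}

def lam(theta):
    """lambda(theta): the label sets F consistent with expression theta.
    theta is a nested tuple: ('L', name), ('and', a, b), ('or', a, b), ('not', a)."""
    all_F = {frozenset(s) for s in chain.from_iterable(
        combinations(LA, r) for r in range(len(LA) + 1))}
    op = theta[0]
    if op == "L":
        return {F for F in all_F if theta[1] in F}
    if op == "and":
        return lam(theta[1]) & lam(theta[2])
    if op == "or":
        return lam(theta[1]) | lam(theta[2])
    return all_F - lam(theta[1])  # 'not'

def mu(theta, m_x):
    """mu_theta(x) = P(D_x in lambda(theta)) = sum of m_x(S) over S in lambda(theta)."""
    consistent = lam(theta)
    return sum(p for S, p in m_x.items() if S in consistent)

# mu_{red and not pink}(x): total mass on sets containing red but not pink.
theta = ("and", ("L", "red"), ("not", ("L", "pink")))
print(mu(theta, m_x))  # 0.3 = m_x({red})
```

With these masses µ_red(x) = 0.5 + 0.3 = 0.8 and µ_¬red(x) = 0.2, matching the probability-measure properties listed on the next slide.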
Some Links ...
...to probability: For fixed x, µθ(x) forms a probability measure on LE as θ varies.
If ⊨ θ then ∀x, µθ(x) = 1
If θ ≡ ϕ then ∀x, µθ(x) = µϕ(x)
∀θ, ∀x, µ¬θ(x) = 1 − µθ(x)
If ⊨ ¬(θ ∧ ϕ) then µθ∨ϕ(x) = µθ(x) + µϕ(x)
...to DS theory: For conjunctions and disjunctions of labels, appropriateness measures can be interpreted as commonality and plausibility functions:
µ_{L1 ∧ . . . ∧ Lk}(x) = Σ_{F : {L1, . . . , Lk} ⊆ F} mx(F) = Q({L1, . . . , Lk}|x)
µ_{L1 ∨ . . . ∨ Lk}(x) = Σ_{F : F ∩ {L1, . . . , Lk} ≠ ∅} mx(F) = Pl({L1, . . . , Lk}|x)
Assuming Consonance
Focal Sets: Fx = {F ⊆ LA : mx(F) > 0}.
Consonance: For F, F′ ∈ Fx, either F ⊆ F′ or F′ ⊆ F.
Ordering Labels: Assume that for each x ∈ Ω an agent first identifies a total ordering on the appropriateness of labels. They then evaluate their belief values mx about which labels are appropriate to describe x in such a way as to be consistent with this ordering.
LE∧,∨ = expressions only involving the connectives ∧ or ∨.
Under consonance, ∀θ, ϕ ∈ LE∧,∨ and ∀x ∈ Ω it holds that µθ∧ϕ(x) = min(µθ(x), µϕ(x)) and µθ∨ϕ(x) = max(µθ(x), µϕ(x))
Consonance and Functionality
[Figure: three plots of appropriateness measures for the labels small, medium and large on the domain [0, 10].]
Suppose µL1(x), . . . , µLn(x) are ordered so that µLi(x) ≥ µLi+1(x) for i = 1, . . . , n − 1. Then:
mx({L1, . . . , Ln}) = µLn(x)
mx({L1, . . . , Li}) = µLi(x) − µLi+1(x) for i = 1, . . . , n − 1
mx(∅) = 1 − µL1(x)
[Figure: the resulting masses mx({s}), mx({s, m}), mx({m}), mx({m, l}), mx({l}) shown as areas, illustrating that µm∧¬l(x) = min(µm(x), 1 − µl(x)).]
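A small sketch of this construction (the label names and appropriateness values below are invented): given the µLi(x) it builds the consonant mass function and checks the min rule for conjunctions from the previous slide.

```python
def consonant_mass(mu_vals):
    """Build the consonant mass function on the nested sets {L1}, {L1,L2}, ...
    where labels are sorted by appropriateness, descending."""
    labels = sorted(mu_vals, key=mu_vals.get, reverse=True)
    vals = [mu_vals[L] for L in labels]
    m = {frozenset(): 1.0 - vals[0]}                 # m_x(empty) = 1 - mu_L1(x)
    for i in range(len(labels) - 1):
        m[frozenset(labels[:i + 1])] = vals[i] - vals[i + 1]
    m[frozenset(labels)] = vals[-1]                  # m_x({L1,...,Ln}) = mu_Ln(x)
    return m

mu_vals = {"small": 0.0, "medium": 0.8, "large": 0.3}
m = consonant_mass(mu_vals)
# m(empty) = 0.2, m({medium}) = 0.5, m({medium, large}) = 0.3

# Under consonance mu_{medium and large}(x) = min(mu_medium(x), mu_large(x)):
mu_both = sum(p for F, p in m.items() if {"medium", "large"} <= F)
print(mu_both)  # 0.3
```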
Prototype Theory
The membership of elements in a concept is determined by their similarity to certain prototypical cases (Rosch 1973).
Prototypes may be actual exemplars of the concept (case-based reasoning) or abstractions or aggregations (clustering).
[Figure: two diagrams, a Voronoi model and a neighbourhood model.]
Conceptual Spaces: NCS Colour Spindle
[Figure: the NCS colour spindle, with hue (red, orange, yellow, green, blue, violet) around the circumference, chromaticness along the radius, and white, grey and black on the vertical axis.]
A Prototype Interpretation
Let d : Ω² → [0, ∞) be a distance function satisfying d(x, x) = 0 and d(x, y) = d(y, x).
For S, T ⊆ Ω let d(T, S) = inf{d(x, y) : x ∈ S, y ∈ T}.
For Li ∈ LA let Pi ⊆ Ω be a set of prototypical elements for Li.
Let ε be a random variable into [0, ∞) with density function δ.
Li is appropriate to describe x iff d(x, Pi) ≤ ε.
Dεx = {Li : d(x, Pi) ≤ ε}.
∀F ⊆ LA, mx(F) = δ({ε : Dεx = F})
Naturally satisfies consonance.
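This definition of mx can be sketched with a Monte Carlo estimate over ε; the one-dimensional prototypes and the uniform δ on [0, 4] below are hypothetical choices:

```python
import random

prototypes = {"small": [1.0], "medium": [5.0], "large": [9.0]}  # hypothetical P_i

def mass_function(x, eps_samples):
    """Estimate m_x(F) = delta({eps : D_x^eps = F}) by sampling eps from delta.
    D_x^eps = {L : d(x, P_L) <= eps} is automatically nested, hence consonant."""
    counts = {}
    for eps in eps_samples:
        F = frozenset(L for L, P in prototypes.items()
                      if min(abs(x - p) for p in P) <= eps)
        counts[F] = counts.get(F, 0) + 1
    return {F: c / len(eps_samples) for F, c in counts.items()}

random.seed(0)
eps_samples = [random.uniform(0.0, 4.0) for _ in range(100_000)]  # delta = U[0, 4]
m = mass_function(3.0, eps_samples)
# d(3, small) = d(3, medium) = 2 and d(3, large) = 6, so D_x^eps is empty for
# eps < 2 and equals {small, medium} for eps >= 2: both masses come out near 0.5.
```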
Generating Dεx
[Figure: a point x and prototypes p1, . . . , p7, with three thresholds ε1 < ε2 < ε3.]
Identifying Dεx: Dε1x = ∅, Dε2x = {L1, L2}, Dε3x = {L1, L2, L3, L4}
Appropriateness as Subsets of ε
∀x ∈ Ω and ∀θ ∈ LE, I(θ, x) ⊆ [0, ∞) is defined recursively by:
∀Li ∈ LA, I(Li, x) = [d(x, Pi), ∞)
∀θ ∈ LE, I(¬θ, x) = I(θ, x)^c
∀θ, ϕ ∈ LE, I(θ ∨ ϕ, x) = I(θ, x) ∪ I(ϕ, x)
∀θ, ϕ ∈ LE, I(θ ∧ ϕ, x) = I(θ, x) ∩ I(ϕ, x)
Theorem: ∀θ ∈ LE, ∀x ∈ Ω, I(θ, x) = {ε : Dεx ∈ λ(θ)}
Corollary: ∀θ ∈ LE, ∀x ∈ Ω, µθ(x) = δ(I(θ, x))
Area under δ: Appropriateness
I(Li ∧ ¬Lj, x) = [d(x, Pi), d(x, Pj))
[Figure: the density δ(ε), with µLi∧¬Lj(x) shown as the area under δ between ε = d(x, Pi) and ε = d(x, Pj).]
Area under δ: Mass
mx(F) = δ([max{d(x, Pi) : Li ∈ F}, min{d(x, Pi) : Li ∉ F}))
[Figure: the density δ(ε) partitioned at d(x, P1), d(x, P2), d(x, P3), d(x, P4) into the areas mx(∅), mx({L1}), mx({L1, L2}), mx({L1, L2, L3}), mx({L1, L2, L3, L4}).]
Neighbourhood Representation
Let NεLi = {x ∈ Ω : d(x, Pi) ≤ ε}; more generally, Nεθ = {x : Dεx ∈ λ(θ)}
Satisfies Nεθ∧ϕ = Nεθ ∩ Nεϕ, Nεθ∨ϕ = Nεθ ∪ Nεϕ, Nε¬θ = (Nεθ)^c.
For θ ∈ LE∧,∨, Nεθ is nested (in ε); otherwise not in general.
Alternative characterisation: µθ(x) = δ({ε : x ∈ Nεθ})
Labels represent sets of points sufficiently similar to prototypes (information granules).
Example (Uniform δ)
Let Ω = R and d(x, y) = ||x − y||. Let Li = about [a, b] where a ≤ b, so that Pi = [a, b].
NεLi = [a − ε, b + ε]
Let δ be the uniform distribution on [k, r] for 0 ≤ k < r.
[Figure: the resulting trapezoidal appropriateness measure µLi(x), equal to 1 on [a − k, b + k] and falling linearly to 0 at a − r and b + r.]
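The trapezoidal shape of µLi follows directly from µLi(x) = δ([d(x, Pi), ∞)), as a short sketch shows; the particular interval and the parameters k, r below are illustrative:

```python
def mu_about_interval(x, a, b, k, r):
    """Appropriateness of 'about [a, b]' at x when delta is uniform on [k, r]:
    mu_Li(x) = P(eps >= d(x, [a, b])) for eps ~ U[k, r]."""
    d = max(a - x, x - b, 0.0)        # distance from x to the interval [a, b]
    if d <= k:
        return 1.0                    # within the core [a - k, b + k]
    if d >= r:
        return 0.0                    # outside the support [a - r, b + r]
    return (r - d) / (r - k)          # linear slope in between

# 'about [2, 4]' with k = 0.5, r = 1.5:
print(mu_about_interval(3.0, 2, 4, 0.5, 1.5))  # 1.0 (inside [a, b])
print(mu_about_interval(5.0, 2, 4, 0.5, 1.5))  # 0.5 (d = 1, halfway down the slope)
```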
Random Sets
A random set is simply a set-valued variable.
Random sets are known to be a unifying concept in uncertainty theory: e.g. DS theory, possibility theory and fuzzy set theory all have random set interpretations.
∀x ∈ Ω, Dx is a random set into 2^LA (i.e. taking sets of labels as values).
∀θ ∈ LE, Nεθ is a random set into 2^Ω (i.e. taking subsets of the underlying universe Ω as values).
Single Point Coverage: (of Dx) P(Li ∈ Dx) = µLi(x); (of Nεθ) P(x ∈ Nεθ) = µθ(x)
Imprecise Probabilities
Suppose an agent receives the information 'x is θ'. What does this tell them about x?
According to our prototype model, the agent can infer x ∈ Nεθ.
From this they can define upper and lower probabilities: for S ⊆ Ω,
P̄(S|θ) = P(Nεθ ∩ S ≠ ∅) = δθ({ε : Nεθ ∩ S ≠ ∅})
P̲(S|θ) = P(Nεθ ⊆ S) = δθ({ε : Nεθ ⊆ S})
Single point coverage: spc(x|θ) = P(x ∈ Nεθ) = δθ({ε : x ∈ Nεθ}) ∝ µθ(x)
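These upper and lower probabilities can be sketched by sampling ε; the one-dimensional label 'about [2, 3]', the set S = [3.5, 6] and the choice δθ = δ = U[0, 1] below are all illustrative assumptions:

```python
import random

# N_eps for L = 'about [2, 3]' is [2 - eps, 3 + eps], with eps ~ U[0, 1].
random.seed(1)
a, b = 2.0, 3.0
S = (3.5, 6.0)

upper = lower = 0
N = 200_000
for _ in range(N):
    eps = random.uniform(0.0, 1.0)
    lo, hi = a - eps, b + eps          # the neighbourhood N_eps
    if hi >= S[0] and lo <= S[1]:      # N_eps intersects S
        upper += 1
    if lo >= S[0] and hi <= S[1]:      # N_eps contained in S
        lower += 1

print(upper / N)  # ~0.5: N_eps meets S iff 3 + eps >= 3.5, i.e. eps >= 0.5
print(lower / N)  # 0.0: N_eps always contains [2, 3], so it is never inside S
```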
Possibility Measures
If θ ∈ LE∧,∨ then for 0 ≤ ε ≤ ε′ we have that Nεθ ⊆ Nε′θ.
In this case the resulting upper and lower probabilities are possibility and necessity measures (Dubois, Prade):
P̄(S|θ) = Pos(S|θ) = sup{µθ(x) : x ∈ S}
P̲(S|θ) = Nec(S|θ) = 1 − sup{µθ(x) : x ∈ S^c}
With properties:
Pos(S ∪ T|θ) = max(Pos(S|θ), Pos(T|θ))
Nec(S ∩ T|θ) = min(Nec(S|θ), Nec(T|θ))
Hence spc(x|θ) = µθ(x) is a possibility distribution (Zadeh).
Given a Prior...
Suppose that the agent also has knowledge in the form of a prior distribution on Ω.
Given 'x is θ' they should determine the posterior p(x|Nεθ).
But since ε is uncertain this results in second order probabilities.
If precise probabilities are required, one solution would be to take an expected value:
p(x|θ) = ∫₀^∞ p(x|Nεθ) δ(ε) dε
This satisfies p(S|θ) ∈ [P̲(S|θ), P̄(S|θ)]
Example (Fuzzy Numbers)
Let Ω = R and d(x, y) = ||x − y||. Let Li = about 2, so that Pi = {2}. In this case NεLi = [2 − ε, 2 + ε].
Let δ be the uniform distribution on [0, 1].
Let the prior p on Ω correspond to the uniform distribution on [0, 10].
[Figure: three panels on [0, 4] — the appropriateness measure µLi(x), the posterior density p(x|Li), and the envelope of distribution functions F̄(y|Li), F(y|Li), F̲(y|Li).]
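For this example the expected posterior has a simple closed form, which a Monte Carlo sketch confirms. The derivation is ours, under the stated assumptions: the prior conditioned on [2 − ε, 2 + ε] has density 1/(2ε) on that interval, so for 0 < |x − 2| < 1 integrating 1/(2ε) over ε ∈ [|x − 2|, 1] gives p(x|Li) = −ln(|x − 2|)/2.

```python
import math
import random

# Expected posterior p(x | L) = E_eps[ p(x | N_eps) ] for L = 'about 2',
# eps ~ U[0, 1], prior uniform on [0, 10].
def posterior_density(x, n=200_000, seed=2):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        eps = rng.uniform(0.0, 1.0)
        if abs(x - 2.0) <= eps:
            total += 1.0 / (2.0 * eps)   # prior conditioned on [2 - eps, 2 + eps]
    return total / n

x = 2.5
print(posterior_density(x), -math.log(abs(x - 2.0)) / 2)  # both ~0.3466
```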
Prototype-Based Rule-Learning
Joint work with Yongchuan Tang from Zhejiang University.
Given a database of pairs (~x, y) where y = f(~x), learn a set of rules to represent the mapping f.
For label Lj on y with prototypes Pj, learn a rule IF (~x is Li) THEN (y is Lj), where Pi is determined from the set of points {~x : (~x, y) ∈ DB and y ∈ Pj}.
For input ~x, µLj(y) = µLi(~x) = δ([d(~x, Pi), ∞)).
Determine BetPy(Lj) = Σ_{F : Lj ∈ F} my(F)/|F|
Evaluate ŷ = Σ_{Lj} BetPy(Lj) cj, where cj is the average of Pj.
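The prediction step (consonant mass → pignistic probability BetP → weighted average of label centres) can be sketched as follows; the appropriateness values and centres are invented, whereas in the full method the µ values would come from the input–prototype distances:

```python
def predict(mu_vals, centres):
    """Point estimate y_hat = sum_j BetP(L_j) * c_j from appropriateness values."""
    labels = sorted(mu_vals, key=mu_vals.get, reverse=True)
    vals = [mu_vals[L] for L in labels]
    # consonant mass on the nested sets {L1}, {L1,L2}, ..., {L1,...,Ln}
    m = {tuple(labels[:i + 1]):
         (vals[i] - vals[i + 1] if i + 1 < len(vals) else vals[i])
         for i in range(len(labels))}
    # BetP(L) = sum over focal sets F containing L of m(F)/|F|
    bet = {L: sum(p / len(F) for F, p in m.items() if L in F) for L in labels}
    z = sum(bet.values())  # renormalise in case mu_max < 1 (mass on the empty set)
    return sum(bet[L] / z * centres[L] for L in labels)

mu_vals = {"low": 0.2, "medium": 1.0, "high": 0.4}
centres = {"low": 1.0, "medium": 5.0, "high": 9.0}
print(predict(mu_vals, centres))  # blends the centres, ~5.4 here
```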
Sunspot Database
[Figure: RMSE on the sunspot training and test data as the width σ of the density function δ(0,σ) varies from 15 to 60, for one, two, three, four and |DBi| clusters; predicted versus actual sunspot numbers for 1925–1980, RMSE = 21.91.]
Decision Tree Models
[Figure: a linguistic decision tree with attribute nodes Dx1, Dx2, Dx3 and leaves LF1–LF7. Branches are labelled by the expressions s ∧ ¬l, s ∧ l and l ∧ ¬s (i.e. Dx = {s}, {s, l} or {l}), and each leaf carries a conditional mass assignment m(G|·) on the class given the branch taken.]
Classification of Weather-Radar Images
The use of weather radars to estimate precipitation is an increasingly important area in hydrology.
Brightband = a region of enhanced reflectivity caused by amplified echoes due to scattering of microwaves from melting snow.
Rules for Radar Classification
Attributes: Zh = reflectivity factor, Zdr = differential reflectivity, Ldr = linear depolarisation ratio, H = height of the measurement.
Rules:
(DLdr = {low, medium}) ∧ (DH = {med., high}) ∧ (DZh = {high}) ∧ (DZdr = {med., high}) → Rain: 0.998, Snow: 0.002, Brightband: 0
(DLdr = {medium}) ∧ (DH = {med., high}) ∧ (DZh = {med., high}) ∧ (DZdr = {low}) → Rain: 0.03, Snow: 0.97, Brightband: 0
(DLdr = {high}) ∧ (DH = {med.}) ∧ (DZh = {high}) ∧ (DZdr = {high}) → Rain: 0.02, Snow: 0, Brightband: 0.98
River Severn Forecasting
The River Severn is situated in the South West of England, with a catchment area that spans from the Cambrian Mountains in Mid Wales to the Bristol Channel in England.
We focus on the Upper Severn from Abermule in Powys and its tributaries, down to Buildwas in Shropshire.
The data consist of 13120 training examples from 1/1/1998 to 2/7/1999 and 2760 test examples from 8/9/2000 to 1/1/2001, recorded hourly.
Each example has 19 continuous attributes falling into two categories: station (water) level measurements and rainfall gauge measurements.
The forecasting problem is to predict the river level at Buildwas at time t + δ (δ = lead time).
Severn Catchment Map
[Map: location of permanent river level and flow measurement stations in the Severn catchment, distinguishing level measurement stations with and without telemetry and flow measurement stations with telemetry; some minor sites omitted for clarity, and tidal level sites (Avonmouth, Ilfracombe) not shown.]
36 Hours Ahead Prediction
[Figure: ULID3 prediction versus actual river level (m) at Buildwas over the test data set, and a scatter plot of predicted against actual level with the zero-error prediction line.]
Severn Rules
xAt = level at Abermule at time t, xBt = level at Buildwas at time t, xMt = level at Meifod at time t
Focal set {low}:
low ∈ DxAt ∧ low ∈ DxBt → DxBt+36 = {low}
Focal set {low, medium}:
low ∈ DxAt ∧ medium ∈ DxBt ∧ low ∈ DxMt → DxBt+36 = {low, medium}
low ∈ DxAt ∧ medium ∈ DxBt ∧ medium ∈ DxMt → DxBt+36 = {low, medium}
medium ∈ DxAt ∧ low ∈ DxBt ∧ medium ∈ DxMt → DxBt+36 = {low, medium}
Focal set {medium}:
medium ∈ DxAt ∧ medium ∈ DxBt ∧ medium ∈ DxMt → DxBt+36 = {medium}
Focal set {medium, high}:
high ∈ DxAt ∧ medium ∈ DxBt ∧ medium ∈ DxMt → DxBt+36 = {medium, high}
high ∈ DxAt ∧ medium ∈ DxBt ∧ high ∈ DxMt → DxBt+36 = {medium, high}
high ∈ DxAt ∧ high ∈ DxBt ∧ medium ∈ DxMt → DxBt+36 = {medium, high}
medium ∈ DxAt ∧ high ∈ DxBt ∧ high ∈ DxMt → DxBt+36 = {medium, high}
Focal set {high}:
high ∈ DxAt ∧ high ∈ DxBt ∧ high ∈ DxMt → DxBt+36 = {high}
Conclusions
Granular modelling aims to provide high-level linguistic (rule-based) models for complex systems.
The approach balances the requirements of predictive accuracy and model transparency.
We have proposed a random set approach to modelling imprecise labels, consistent with an epistemic theory of vagueness.
A prototype theory interpretation has also been introduced.
A number of case study applications of granular modelling have been described.
Shameless Advertising
Modelling and Reasoning with Vague Concepts by Jonathan Lawry, Springer 2006.