Abstract Argumentation
and
Interfaces to Argumentative Reasoning
Handouts
Federico Cerutti
September 2016
Contents

Contents 1
1 Dung’s AF 3
  1.1 Principles for Extension-based Semantics [BG07] 3
  1.2 Acceptability of Arguments [PV02; BG09a] 4
  1.3 (Some) Semantics [Dun95] 5
  1.4 Labelling-Based Semantics Representation [Cam06] 6
  1.5 Skepticism Relationships [BG09b] 9
  1.6 Signatures [Dun+14] 9
  1.7 Decomposability and Transparency [Bar+14] 12
  1.8 Extension-based I/O Characterisation [GLW16] 13
2 Implementations 14
  2.1 Ad Hoc Procedures 14
  2.2 Constraint Satisfaction Programming 14
  2.3 Answer Set Programming 15
  2.4 Propositional Satisfiability Problems 15
  2.5 Second-order Solver [BJT16] 23
  2.6 Which One? 23
3 Ranking-Based Semantics 28
  3.1 The Categoriser Semantics [BH01] 28
  3.2 Properties for Ranking-Based Semantics [Bon+16] 28
4 Argumentation Schemes 33
  4.1 An example: Walton et al.’s Argumentation Schemes for Practical Reasoning 33
  4.2 AS and Dialogues 34
5 Semantic Web Argumentation 38
  5.1 AIF 38
  5.2 AIF-OWL 38
6 CISpaces 43
  6.1 Introduction 43
  6.2 Intelligence Analysis 43
  6.3 Reasoning with Evidence 46
  6.4 Arguments for Sensemaking 46
  6.5 Arguments for Provenance 48
Cardiff University, 2016 Page 1
7 Natural Language Interfaces 50
  7.1 Experiments with Humans: Scenarios [CTO14] 50
  7.2 Lessons From Argument Mining [BR11] 55
Bibliography 56
1 Dung’s Argumentation Framework
Acknowledgement
This handout includes material from a number of collaborators, including
Pietro Baroni, Massimiliano Giacomin, Thomas Linsbichler, and Stefan
Woltran.
Definition 1 ([Dun95]). A Dung argumentation framework AF is a pair
⟨A, →⟩
where A is a set of arguments, and → is a binary relation on A, i.e. → ⊆ A × A. ♠
An argumentation framework has an obvious representation as a directed
graph where the nodes are arguments and the edges are drawn from
attacking to attacked arguments.
The set of attackers of an argument a1 will be denoted as a1− ≜ {a2 : a2 → a1};
the set of arguments attacked by a1 will be denoted as a1+ ≜ {a2 : a1 → a2}.
We also extend these notations to sets of arguments, i.e. given E ⊆ A,
E− ≜ {a2 | ∃a1 ∈ E, a2 → a1} and E+ ≜ {a2 | ∃a1 ∈ E, a1 → a2}.
With a little abuse of notation we define S → b ≡ ∃a ∈ S : a → b. Similarly,
b → S ≡ ∃a ∈ S : b → a.
Given Γ= ⟨A ,→⟩ and Γ′ = ⟨A ′,→′⟩, Γ∪Γ′ = ⟨A ∪A ′,→∪→′⟩.
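As a concrete illustration, an AF and the a−, a+, E−, E+ notations above can be sketched in Python (all names here are mine, not from the handout):

```python
# A Dung AF: a set of arguments and a binary attack relation on it.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}  # a attacks b, b attacks c

def attackers(x, attacks):
    """a- : the set of arguments attacking x."""
    return {s for (s, t) in attacks if t == x}

def attacked(x, attacks):
    """a+ : the set of arguments attacked by x."""
    return {t for (s, t) in attacks if s == x}

def set_attackers(E, attacks):
    """E- : arguments attacking some member of E."""
    return {s for (s, t) in attacks if t in E}

def set_attacked(E, attacks):
    """E+ : arguments attacked by some member of E."""
    return {t for (s, t) in attacks if s in E}

print(attackers("b", attacks))                     # {'a'}
print(sorted(set_attacked({"a", "b"}, attacks)))   # ['b', 'c']
```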
1.1 Principles for Extension-based Semantics [BG07]
Definition 2.+ Given an argumentation framework AF = ⟨A, →⟩, a set
S ⊆ A is D-conflict-free, denoted as D-cf(S), if and only if ∄a, b ∈ S such
that a → b. A semantics σ satisfies the D-conflict-free principle if and only
if ∀AF, ∀E ∈ Eσ(AF), E is D-conflict-free. ♠
Definition 3. Given an argumentation framework AF = ⟨A, →⟩, an argument
a ∈ A is D-acceptable w.r.t. a set S ⊆ A if and only if ∀b ∈ A :
b → a ⇒ S → b.
The function FAF : 2^A → 2^A which, given a set S ⊆ A, returns the
set of the D-acceptable arguments w.r.t. S, is called the D-characteristic
function of AF. ♠
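As a sketch, D-conflict-freeness, D-acceptability and the D-characteristic function can be computed directly from these definitions (illustrative Python; the function names are mine):

```python
def is_conflict_free(S, attacks):
    """D-cf(S): no member of S attacks another member of S."""
    return not any((a, b) in attacks for a in S for b in S)

def is_acceptable(a, S, args, attacks):
    """a is D-acceptable w.r.t. S: every attacker of a is attacked by S."""
    return all(any((s, b) in attacks for s in S)
               for b in args if (b, a) in attacks)

def F(S, args, attacks):
    """The D-characteristic function of the AF applied to S."""
    return {a for a in args if is_acceptable(a, S, args, attacks)}

args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}
print(sorted(F(set(), args, attacks)))   # ['a']: only unattacked arguments
print(sorted(F({"a"}, args, attacks)))   # ['a', 'c']: c is defended by a
```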
Definition 4. Given an argumentation framework AF = ⟨A, →⟩, a set
S ⊆ A is D-admissible (S ∈ AS(AF)) if and only if D-cf(S) and ∀a ∈ S,
a is D-acceptable w.r.t. S. The set of all the D-admissible sets of AF is
denoted as AS(AF). ♠

Dσ = {AF | Eσ(AF) ≠ ∅}
Definition 5.+ A semantics σ satisfies the D-admissibility principle if and
only if ∀AF ∈ Dσ, Eσ(AF) ⊆ AS(AF), namely ∀E ∈ Eσ(AF) it holds that:
a ∈ E ⇒ (∀b ∈ A, b → a ⇒ E → b). ♠
Definition 6. Given an argumentation framework AF = ⟨A, →⟩, a ∈ A
and S ⊆ A, we say that a is D-strongly-defended by S (denoted as
D-sd(a, S)) iff ∀b ∈ A : b → a, ∃c ∈ S \ {a} : c → b and D-sd(c, S \ {a}). ♠

Definition 7.+ A semantics σ satisfies the D-strongly admissibility principle
if and only if ∀AF ∈ Dσ, ∀E ∈ Eσ(AF) it holds that
a ∈ E ⇒ D-sd(a, E). ♠
Definition 8.+ A semantics σ satisfies the D-reinstatement principle if and
only if ∀AF ∈ Dσ, ∀E ∈ Eσ(AF), ∀a ∈ A it holds that:
(∀b ∈ A, b → a ⇒ E → b) ⇒ a ∈ E. ♠

Definition 9.+ A set of extensions E is D-I-maximal if and only if ∀E1, E2 ∈ E,
if E1 ⊆ E2 then E1 = E2. A semantics σ satisfies the D-I-maximality
principle if and only if ∀AF ∈ Dσ, Eσ(AF) is D-I-maximal. ♠
Definition 10. Given an argumentation framework AF = ⟨A, →⟩, a non-empty
set S ⊆ A is D-unattacked if and only if ∄a ∈ (A \ S) : a → S. The
set of D-unattacked sets of AF is denoted as US(AF). ♠

Definition 11. Let AF = ⟨A, →⟩ be an argumentation framework. The
restriction of AF to S ⊆ A is the argumentation framework AF↓S = ⟨S, → ∩ (S × S)⟩. ♠

Definition 12.+ A semantics σ satisfies the D-directionality principle if
and only if ∀AF = ⟨A, →⟩, ∀S ∈ US(AF), AEσ(AF, S) = Eσ(AF↓S), where
AEσ(AF, S) ≜ {(E ∩ S) | E ∈ Eσ(AF)} ⊆ 2^S. ♠
1.2 Acceptability of Arguments [PV02; BG09a]
Definition 13. Given a semantics σ and an argumentation framework
AF = ⟨A, →⟩ with AF ∈ Dσ, an argument a ∈ A is:
• skeptically justified iff ∀E ∈ Eσ(AF), a ∈ E;
• credulously justified iff ∃E ∈ Eσ(AF), a ∈ E. ♠
Definition 14. Given a semantics σ and an argumentation framework
AF = ⟨A, →⟩ with AF ∈ Dσ, an argument a ∈ A is:
• justified iff it is skeptically justified;
• defensible iff it is credulously justified but not skeptically justified;
• overruled iff it is not credulously justified. ♠
1.3 (Some) Semantics [Dun95]
Lemma 1 (Dung’s Fundamental Lemma, [Dun95, Lemma 10]). Given an
argumentation framework AF = ⟨A, →⟩, let S ⊆ A be a D-admissible set
of arguments, and a, b be arguments which are D-acceptable with respect
to S. Then:
1. S′ = S ∪ {a} is D-admissible; and
2. b is D-acceptable with respect to S′. ♣

Theorem 1 ([Dun95, Theorem 11]). Given an argumentation framework
AF = ⟨A, →⟩, the set of all D-admissible sets of ⟨A, →⟩ forms a complete
partial order with respect to set inclusion. ♣
Definition 15 (Complete Extension).+ Given an argumentation framework
AF = ⟨A, →⟩, S ⊆ A is a D-complete extension iff S is D-conflict-free
and S = FAF(S). CO denotes the complete semantics. ♠
Definition 16 (Grounded Extension).+ Given an argumentation framework
AF = ⟨A, →⟩, the grounded extension of AF is the least complete
extension of AF. GR denotes the grounded semantics. ♠

Definition 17 (Preferred Extension).+ Given an argumentation framework
AF = ⟨A, →⟩, a preferred extension of AF is a maximal (w.r.t. set
inclusion) complete extension of AF. PR denotes the preferred semantics. ♠
Definition 18. Given an argumentation framework AF = ⟨A, →⟩ and
S ⊆ A, S+ ≜ {a ∈ A | ∃b ∈ S : b → a}. ♠

Definition 19 (Stable Extension).+ Given an argumentation framework
AF = ⟨A, →⟩, S ⊆ A is a stable extension of AF iff S is a preferred
extension and S+ = A \ S. ST denotes the stable semantics. ♠
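Definitions 15–19 suggest a direct (exponential) procedure: enumerate the conflict-free fixed points of the characteristic function and then select among them. A brute-force sketch in Python, adequate only for very small AFs (the function names are mine):

```python
from itertools import combinations

def extensions(args, attacks):
    """Brute-force the complete, grounded, preferred and stable extensions."""
    args = sorted(args)

    def cf(S):  # D-conflict-free
        return not any((a, b) in attacks for a in S for b in S)

    def F(S):   # D-characteristic function
        return {a for a in args
                if all(any((s, b) in attacks for s in S)
                       for b in args if (b, a) in attacks)}

    subsets = [set(c) for r in range(len(args) + 1)
               for c in combinations(args, r)]
    complete = [S for S in subsets if cf(S) and F(S) == S]
    # the grounded extension is contained in every complete extension,
    # so the cardinality-minimal complete extension is the grounded one
    grounded = min(complete, key=len)
    preferred = [S for S in complete if not any(S < T for T in complete)]
    stable = [S for S in preferred
              if {t for (s, t) in attacks if s in S} == set(args) - S]
    return complete, grounded, preferred, stable

# a -> b -> c: the only complete extension is {a, c}, which is at once
# grounded, preferred and stable.
print(extensions({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```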
                          CO   GR   PR   ST
D-conflict-free           Yes  Yes  Yes  Yes
D-admissibility           Yes  Yes  Yes  Yes
D-strongly admissibility  No   Yes  No   No
D-reinstatement           Yes  Yes  Yes  Yes
D-I-maximality            No   Yes  Yes  Yes
D-directionality          Yes  Yes  Yes  No

Table 1.1: Satisfaction of general properties by argumentation semantics [BG07; BCG11]
Figure 1.1: Relationships among argumentation semantics
1.4 Labelling-Based Semantics Representation [Cam06]
Definition 20. Let Γ = ⟨A, →⟩ be an argumentation framework. A labelling
Lab ∈ L(Γ) is a complete labelling of Γ iff it satisfies the following
conditions for any a1 ∈ A:
• Lab(a1) = in ⇔ ∀a2 ∈ a1−, Lab(a2) = out;
• Lab(a1) = out ⇔ ∃a2 ∈ a1− : Lab(a2) = in. ♠
The grounded and preferred labelling can then be defined on the basis
of complete labellings.
Definition 21. Let Γ = ⟨A, →⟩ be an argumentation framework. A labelling
Lab ∈ L(Γ) is the grounded labelling of Γ if it is the complete labelling of Γ
minimizing the set of arguments labelled in, and it is a preferred labelling
of Γ if it is a complete labelling of Γ maximizing the set of arguments
labelled in. ♠
In order to show the connection between extensions and labellings, let15
us recall the definition of the function Ext2Lab, returning the labelling
corresponding to a D-conflict-free set of arguments S.
Definition 22. Given an AF Γ = ⟨A, →⟩ and a D-conflict-free set S ⊆ A,
the corresponding labelling Ext2Lab(S) is defined as Ext2Lab(S) ≡ Lab, where
• Lab(a1) = in ⇔ a1 ∈ S;
• Lab(a1) = out ⇔ ∃a2 ∈ S s.t. a2 → a1;
• Lab(a1) = undec ⇔ a1 ∉ S ∧ ∄a2 ∈ S s.t. a2 → a1. ♠

          σ=CO        σ=GR        σ=PR     σ=ST
EXISTSσ   trivial     trivial     trivial  NP-c
CAσ       NP-c        polynomial  NP-c     NP-c
SAσ       polynomial  polynomial  Π2p-c    coNP-c
VERσ      polynomial  polynomial  coNP-c   polynomial
NEσ       NP-c        polynomial  NP-c     NP-c

Table 1.2: Complexity of decision problems by argumentation semantics [DW09]
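Ext2Lab can be written straight from Definition 22 (a sketch; the names are mine):

```python
def ext2lab(S, args, attacks):
    """Ext2Lab: the labelling corresponding to a D-conflict-free set S."""
    lab = {}
    for a in args:
        if a in S:
            lab[a] = "in"
        elif any((b, a) in attacks for b in S):
            lab[a] = "out"     # attacked by some member of S
        else:
            lab[a] = "undec"   # outside S and not attacked by S
    return lab

args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}
print(ext2lab({"a", "c"}, sorted(args), attacks))
# {'a': 'in', 'b': 'out', 'c': 'in'}
```

Applied to the complete extension {a, c} this yields the (unique) complete labelling of the chain a → b → c, illustrating the bijection of Proposition 1 below only for conflict-free sets that are indeed complete.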
[Cam06] shows that there is a bijective correspondence between the
complete, grounded, preferred extensions and the complete, grounded,
preferred labellings, respectively.
Proposition 1. Given an AF Γ = ⟨A, →⟩, Lab is a complete (grounded,
preferred) labelling of Γ if and only if there is a complete (grounded, preferred)
extension S of Γ such that Lab = Ext2Lab(S). ♣
The set of complete labellings of Γ is denoted as LCO(Γ), the set of
preferred labellings as LPR(Γ), while LGR(Γ) denotes the set including
the grounded labelling.
Remark 1.+ To practise, try ArgTeach [DS14] at http://www-argteach.doc.ic.ac.uk/
Figure 1.2: ⪯S⊕ relation for any argumentation framework (left) and for
argumentation frameworks where stable extensions exist (right).
1.5 Skepticism Relationships [BG09b]
E1 ⪯E E2 denotes that E1 is at least as skeptical as E2.

Definition 23. Let ⪯E be a skepticism relation between sets of extensions.
The skepticism relation between argumentation semantics ⪯S is
such that for any argumentation semantics σ1 and σ2, σ1 ⪯S σ2 iff
∀AF ∈ Dσ1 ∩ Dσ2, EAF(σ1) ⪯E EAF(σ2). ♠
Definition 24. Given two sets of extensions E1 and E2 of an argumentation
framework AF:
• E1 ⪯E∩+ E2 iff ∀E2 ∈ E2, ∃E1 ∈ E1 : E1 ⊆ E2;
• E1 ⪯E∪+ E2 iff ∀E1 ∈ E1, ∃E2 ∈ E2 : E1 ⊆ E2. ♠

Lemma 2. Given two argumentation semantics σ1 and σ2, if for any
argumentation framework AF, EAF(σ1) ⊆ EAF(σ2), then σ1 ⪯E∩+ σ2 and
σ1 ⪯E∪+ σ2 (i.e. σ1 ⪯E⊕ σ2). ♣
1.6 Signatures [Dun+14]
Let A be a countably infinite domain of arguments, and
AFA = {⟨A, →⟩ | A ⊆ A, → ⊆ A × A}.

Definition 25. The signature Σσ of a semantics σ is defined as
Σσ = {σ(F) | F ∈ AFA}
(i.e. the collection of all possible sets of extensions an AF can possess
under a semantics). ♠
Given S ⊆ 2A, ArgsS = ⋃_{S∈S} S and PairsS = {⟨a, b⟩ | ∃S ∈ S s.t. {a, b} ⊆ S}.
S is called an extension-set if ArgsS is finite.
Definition 26. Let S ⊆ 2A. S is incomparable if ∀S, S′ ∈ S, S ⊆ S′ implies
S = S′. ♠
Definition 27. An extension-set S ⊆ 2A is tight if ∀S ∈ S and a ∈ ArgsS
it holds that if S ∪ {a} ∉ S then there exists a b ∈ S such that ⟨a, b⟩ ∉ PairsS. ♠

Definition 28. S ⊆ 2A is adm-closed if for each A, B ∈ S the following
holds: if ⟨a, b⟩ ∈ PairsS for each a, b ∈ A ∪ B, then also A ∪ B ∈ S. ♠
Proposition 2. For each F ∈ AFA:
• ST(F) is incomparable and tight;
• PR(F) is non-empty, incomparable and adm-closed. ♣

Theorem 2. The signatures for ST and PR are:
• ΣST = {S | S is incomparable and tight};
• ΣPR = {S ≠ ∅ | S is incomparable and adm-closed}. ♣
Consider

S = { {a, d, e}, {b, c, e}, {a, b, d} }
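Whether this S can be realised under a given semantics follows from Theorem 2, and the conditions of Definitions 26–28 can be checked mechanically; a sketch (helper names are mine):

```python
S = [{"a", "d", "e"}, {"b", "c", "e"}, {"a", "b", "d"}]
args = set().union(*S)
# PairsS: all (unordered) pairs occurring together in some element of S
pairs = {frozenset((a, b)) for E in S for a in E for b in E}

def is_incomparable(S):
    return all(not (A < B) for A in S for B in S)

def is_tight(S):
    # for every E and a: if E | {a} is not in S,
    # some b in E must fail to pair with a
    return all(E | {a} in S
               or any(frozenset((a, b)) not in pairs for b in E)
               for E in S for a in args)

def is_adm_closed(S):
    return all(A | B in S
               or not all(frozenset((x, y)) in pairs
                          for x in A | B for y in A | B)
               for A in S for B in S)

print(is_incomparable(S), is_tight(S), is_adm_closed(S))  # True False False
```

S is incomparable, but it is not tight (b pairs with each of a, d, e, yet {a, b, d, e} ∉ S) and not adm-closed ({a, d, e} ∪ {a, b, d} ∉ S), so by Theorem 2 it lies outside both ΣST and ΣPR.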
1.7 Decomposability and Transparency [Bar+14]
Definition 29. Given an argumentation framework AF = (A, →),
a labelling-based semantics σ associates with AF a subset of L(AF),
denoted as Lσ(AF). ♠
Definition 30. Given AF = (A, →) and a set Args ⊆ A, the input of Args,
denoted as Args^inp, is the set {B ∈ A \ Args | ∃A ∈ Args, (B, A) ∈ →};
the conditioning relation of Args, denoted as Args^R, is defined as
→ ∩ (Args^inp × Args). ♠
Definition 31. An argumentation framework with input is a tuple
(AF, I, L_I, R_I), including an argumentation framework AF = (A, →), a
set of arguments I such that I ∩ A = ∅, a labelling L_I ∈ L(I), and a
relation R_I ⊆ I × A. A local function assigns to any argumentation
framework with input a (possibly empty) set of labellings of AF, i.e.
F(AF, I, L_I, R_I) ∈ 2^L(AF). ♠
Definition 32. Given an argumentation framework with input
(AF, I, L_I, R_I), the standard argumentation framework w.r.t.
(AF, I, L_I, R_I) is defined as AF′ = (A ∪ I′, → ∪ R′_I), where I′ = I ∪
{A′ | A ∈ out(L_I)} and R′_I = R_I ∪ {(A′, A) | A ∈ out(L_I)} ∪ {(A, A) | A ∈
undec(L_I)}. ♠

Definition 33. Given a semantics σ, the canonical local function of σ
(also called local function of σ) is defined as Fσ(AF, I, L_I, R_I) = {Lab↓A |
Lab ∈ Lσ(AF′)}, where AF = (A, →) and AF′ is the standard argumentation
framework w.r.t. (AF, I, L_I, R_I). ♠
Definition 34. A semantics σ is complete-compatible iff the following
conditions hold:

1. For any argumentation framework AF = (A, →), every labelling L ∈
Lσ(AF) satisfies the following conditions:
   • if A ∈ A is initial (i.e. unattacked), then L(A) = in;
   • if B ∈ A and there is an initial argument A which attacks B,
     then L(B) = out;
   • if C ∈ A is self-defeating, and there are no attackers of C
     besides C itself, then L(C) = undec.

2. For any set of arguments I and any labelling L_I ∈ L(I), the
argumentation framework AF′ = (I′, →′), where I′ = I ∪ {A′ | A ∈
out(L_I)} and →′ = {(A′, A) | A ∈ out(L_I)} ∪ {(A, A) | A ∈ undec(L_I)},
admits a (unique) labelling, i.e. |Lσ(AF′)| = 1. ♠
Definition 35. A semantics σ is fully decomposable (or simply decomposable)
iff there is a local function F such that for every argumentation
framework AF = (A, →) and every partition P = {P1, ..., Pn} of A,
Lσ(AF) = U(P, AF, F), where
U(P, AF, F) ≜ {L_P1 ∪ ... ∪ L_Pn | L_Pi ∈ F(AF↓Pi, Pi^inp, (⋃_{j=1...n, j≠i} L_Pj)↓_{Pi^inp}, Pi^R)}. ♠
Definition 36. A complete-compatible semantics σ is top-down decomposable
iff for any argumentation framework AF = (A, →) and any partition
P = {P1, ..., Pn} of A, it holds that Lσ(AF) ⊆ U(P, AF, Fσ). ♠

Definition 37. A complete-compatible semantics σ is bottom-up decomposable
iff for any argumentation framework AF = (A, →) and any partition
P = {P1, ..., Pn} of A, it holds that Lσ(AF) ⊇ U(P, AF, Fσ). ♠
                           CO   ST   GR   PR
Full decomposability       Yes  Yes  No   No
Top-down decomposability   Yes  Yes  Yes  Yes
Bottom-up decomposability  Yes  Yes  No   No

Table 1.3: Decomposability properties of argumentation semantics.
1.8 Extension-based I/O Characterisation [GLW16]
Definition 38. Given input arguments I and output arguments O with
I ∩ O = ∅, an I/O-gadget is an AF F = (A, R) such that I, O ⊆ A and
I−_F = ∅ (the input arguments are unattacked in F). ♠

Definition 39. Given an I/O-gadget F = (A, R), the injection of J ⊆ I to F
is the AF F_J = (A ∪ {z}, R ∪ {(z, i) | i ∈ (I \ J)}). ♠

Definition 40. An I/O-specification consists of two sets I, O ⊆ A and a
total function f : 2^I → 2^(2^O). ♠

Definition 41. The I/O-gadget F satisfies I/O-specification f under
semantics σ iff ∀J ⊆ I : σ(F_J)|O = f(J). ♠

Theorem 3. An I/O-specification f is satisfiable under σ iff:
ST: ⊤ (no condition);
PR: ∀J ⊆ I : |f(J)| ≥ 1;
CO: ∀J ⊆ I : |f(J)| ≥ 1 ∧ ⋂f(J) ∈ f(J);
GR: ∀J ⊆ I : |f(J)| = 1. ♣
2 Implementations
Acknowledgement
This handout includes material from a number of collaborators, including
Massimiliano Giacomin, Mauro Vallati, and Stefan Woltran.

A comprehensive survey has recently been published in [Cha+15].
2.1 Ad Hoc Procedures
NAD-Alg [NDA12; NAD14]
2.2 Constraint Satisfaction Programming
A Constraint Satisfaction Problem (CSP) [BS12; RBW08] is a triple
P = ⟨X, D, C⟩ such that:
• X = ⟨x1, ..., xn⟩ is a tuple of variables;
• D = ⟨D1, ..., Dn⟩ is a tuple of domains such that ∀i, xi ranges over Di;
• C = ⟨C1, ..., Ct⟩ is a tuple of constraints, where ∀j, Cj = ⟨RSj, Sj⟩:
Sj ⊆ {x1, ..., xn} is the scope of the constraint and RSj is a relation
over the domains of the variables in Sj.

A solution to the CSP P is A = ⟨a1, ..., an⟩ where ∀i, ai ∈ Di and ∀j, RSj
holds on the projection of A onto the scope Sj. If the set of solutions is
empty, the CSP is unsatisfiable.
CONArg2 [BS12]

In [BS12], the authors propose a mapping from AFs to CSPs.
Given an AF Γ, they first create a variable for each argument, whose
domain is always {0, 1} — ∀ai ∈ A, ∃xi ∈ X such that Di = {0, 1}.
Subsequently, they describe constraints associated with the different
definitions of Dung's argumentation framework: for instance, for each
attack a1 → a2, the constraint ¬(x1 = 1 ∧ x2 = 1) enforces that the
selected set of arguments is D-conflict-free.
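The mapping can be illustrated with a minimal backtracking CSP solver (a sketch, not CONArg2 itself; all names are mine):

```python
def solve_csp(variables, domains, constraints):
    """Naive backtracking search; constraints are (scope, predicate) pairs."""
    solutions, assignment = [], {}

    def consistent():
        # check every constraint whose scope is fully assigned
        for scope, pred in constraints:
            if all(v in assignment for v in scope):
                if not pred(*(assignment[v] for v in scope)):
                    return False
        return True

    def backtrack(i):
        if i == len(variables):
            solutions.append(dict(assignment))
            return
        for value in domains[variables[i]]:
            assignment[variables[i]] = value
            if consistent():
                backtrack(i + 1)
            del assignment[variables[i]]

    backtrack(0)
    return solutions

# AF with a -> b -> c; one 0/1 variable per argument, as in the mapping.
args = ["a", "b", "c"]
attacks = [("a", "b"), ("b", "c")]
domains = {x: [0, 1] for x in args}
# D-conflict-freeness: for each attack (x, y), not both selected.
constraints = [((x, y), lambda vx, vy: not (vx == 1 and vy == 1))
               for (x, y) in attacks]
cf_sets = [{x for x in args if s[x] == 1}
           for s in solve_csp(args, domains, constraints)]
print(len(cf_sets))  # 5 conflict-free sets for the chain a -> b -> c
```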
2.3 Answer Set Programming
Answer Set Programming (ASP) [Fab13] is a declarative problem-solving
paradigm. In ASP, representation is done using a rule-based language,
while reasoning is performed using implementations of general-purpose
algorithms, referred to as ASP solvers.
AspartixM [EGW10; Dvo+11]

AspartixM [Dvo+11] expresses argumentation semantics in Answer Set
Programming (ASP): a single program is used to encode a particular
argumentation semantics, and the instance of an argumentation framework
is given as an input database. Tests for subset-maximality exploit the
metasp optimisation frontend for the ASP package gringo/claspD.
Given an AF Γ, Aspartix encodes the requirements for a “semantics”
(e.g. the D-conflict-free requirements) in an ASP program whose database
considers:

{arg(a) | a ∈ A} ∪ {defeat(a1, a2) | ⟨a1, a2⟩ ∈ →}
The following program fragments are used to check D-conflict-freeness
and to compute stable extensions, respectively [Dvo+11]:

πcf = { in(X) ← not out(X), arg(X);
        out(X) ← not in(X), arg(X);
        ← in(X), in(Y), defeat(X, Y) }.

πST = { in(X) ← not out(X), arg(X);
        out(X) ← not in(X), arg(X);
        ← in(X), in(Y), defeat(X, Y);
        defeated(X) ← in(Y), defeat(Y, X);
        ← out(X), not defeated(X) }.
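Producing the input database for such encodings is mechanical; a small sketch (the predicate names arg/defeat are those of the encoding above, the function name is mine):

```python
def af_to_asp(args, attacks):
    """Render an AF as the ASP fact database used by Aspartix-style encodings."""
    facts = [f"arg({a})." for a in sorted(args)]
    facts += [f"defeat({x},{y})." for (x, y) in sorted(attacks)]
    return "\n".join(facts)

print(af_to_asp({"a", "b"}, {("a", "b")}))
# arg(a).
# arg(b).
# defeat(a,b).
```

Concatenating this database with an encoding such as πST and feeding the result to an ASP solver (e.g. clingo) yields one answer set per stable extension.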
2.4 Propositional Satisfiability Problems
In the propositional satisfiability problem (SAT), the goal is to determine
whether a given Boolean formula is satisfiable. A variable assignment
that satisfies a formula is a solution.
In SAT, formulae are commonly expressed in Conjunctive Normal Form
(CNF). A formula in CNF is a conjunction of clauses, where clauses are
disjunctions of literals, and a literal is either positive (a variable) or
negative (the negation of a variable). If at least one of the literals in a
clause is true, then the clause is satisfied; if all clauses in the formula
are satisfied, then the formula is satisfied and a solution has been found.
PrefSAT [Cer+14b]
Requirements for a complete labelling as a CNF [Cer+14b]: for each
argument ai ∈ A, three propositional variables are considered: Ii (true
iff Lab(ai) = in), Oi (true iff Lab(ai) = out), and Ui (true iff
Lab(ai) = undec). Given |A| = k and a bijection φ : {1, ..., k} → A:
⋀_{i ∈ {1,...,k}} ( (Ii ∨ Oi ∨ Ui) ∧ (¬Ii ∨ ¬Oi) ∧ (¬Ii ∨ ¬Ui) ∧ (¬Oi ∨ ¬Ui) )   (2.1)

⋀_{i | φ(i)− = ∅} Ii   (2.2)

⋀_{i | φ(i)− ≠ ∅} ( Ii ∨ ⋁_{j | φ(j)→φ(i)} ¬Oj )   (2.3)

⋀_{i | φ(i)− ≠ ∅} ⋀_{j | φ(j)→φ(i)} ( ¬Ii ∨ Oj )   (2.4)

⋀_{i | φ(i)− ≠ ∅} ⋀_{j | φ(j)→φ(i)} ( ¬Ij ∨ Oi )   (2.5)

⋀_{i | φ(i)− ≠ ∅} ( ¬Oi ∨ ⋁_{j | φ(j)→φ(i)} Ij )   (2.6)

⋀_{i | φ(i)− ≠ ∅} ⋀_{k | φ(k)→φ(i)} ( Ui ∨ ¬Uk ∨ ⋁_{j | φ(j)→φ(i)} Ij )   (2.7)

⋀_{i | φ(i)− ≠ ∅} ( ( ⋀_{j | φ(j)→φ(i)} (¬Ui ∨ ¬Ij) ) ∧ ( ¬Ui ∨ ⋁_{j | φ(j)→φ(i)} Uj ) )   (2.8)

⋁_{i ∈ {1,...,k}} Ii   (2.9)
As noticed in [Cer+14b], the conjunction of the above formulae is re-
dundant. However, the non-redundant CNFs are not equivalent from an
empirical evaluation [Cer+14b]: the overall performance is significantly
affected by the chosen configuration pair CNF encoding–SAT solver.
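A sketch of clauses (2.1)–(2.6) — which already pin down complete labellings — in DIMACS-style integer form, with models found by brute force rather than a SAT solver (the variable numbering and names are mine):

```python
from itertools import product

def complete_cnf(args, attacks):
    """Clauses (2.1)-(2.6): variables I, O, U per argument, as signed ints."""
    idx = {a: i for i, a in enumerate(sorted(args))}
    I = lambda a: 3 * idx[a] + 1
    O = lambda a: 3 * idx[a] + 2
    U = lambda a: 3 * idx[a] + 3
    att = {a: [b for (b, c) in attacks if c == a] for a in args}
    cnf = []
    for a in args:
        cnf += [[I(a), O(a), U(a)], [-I(a), -O(a)],
                [-I(a), -U(a)], [-O(a), -U(a)]]           # (2.1) exactly one
        if not att[a]:
            cnf.append([I(a)])                            # (2.2) unattacked: in
        else:
            cnf.append([I(a)] + [-O(b) for b in att[a]])  # (2.3)
            cnf += [[-I(a), O(b)] for b in att[a]]        # (2.4)
            cnf += [[-I(b), O(a)] for b in att[a]]        # (2.5)
            cnf.append([-O(a)] + [I(b) for b in att[a]])  # (2.6)
    return cnf, idx

def models(cnf, nvars):
    """Enumerate all satisfying assignments by exhaustive search."""
    for bits in product([False, True], repeat=nvars):
        val = lambda lit: bits[abs(lit) - 1] ^ (lit < 0)
        if all(any(val(l) for l in clause) for clause in cnf):
            yield bits

args, attacks = {"a", "b", "c"}, {("a", "b"), ("b", "c")}
cnf, idx = complete_cnf(args, attacks)
labels = ["in", "out", "undec"]
for m in models(cnf, 3 * len(args)):
    lab = {a: labels[[m[3 * i], m[3 * i + 1], m[3 * i + 2]].index(True)]
           for a, i in idx.items()}
    print(lab)  # {'a': 'in', 'b': 'out', 'c': 'in'}
```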
Algorithm 1 Enumerating the D-preferred extensions of an AF
PrefSAT(Γ)
 1: Input: Γ = ⟨A, →⟩
 2: Output: Ep ⊆ 2^A
 3: Ep := ∅
 4: cnf := ΠΓ
 5: repeat
 6:   cnfdf := cnf
 7:   prefcand := ∅
 8:   repeat
 9:     lastcompfound := SatS(cnfdf)
10:     if lastcompfound ≠ ε then
11:       prefcand := lastcompfound
12:       for a1 ∈ I-ARGS(lastcompfound) do
13:         cnfdf := cnfdf ∧ I_{φ−1(a1)}
14:       end for
15:       remaining := FALSE
16:       for a1 ∈ A \ I-ARGS(lastcompfound) do
17:         remaining := remaining ∨ I_{φ−1(a1)}
18:       end for
19:       cnfdf := cnfdf ∧ remaining
20:     end if
21:   until (lastcompfound = ε ∨ I-ARGS(lastcompfound) = A)
22:   if prefcand ≠ ∅ then
23:     Ep := Ep ∪ {I-ARGS(prefcand)}
24:     oppsolution := FALSE
25:     for a1 ∈ A \ I-ARGS(prefcand) do
26:       oppsolution := oppsolution ∨ I_{φ−1(a1)}
27:     end for
28:     cnf := cnf ∧ oppsolution
29:   end if
30: until (prefcand = ∅)
31: if Ep = ∅ then
32:   Ep := {∅}
33: end if
34: return Ep
Parallel-SCCp [Cer+14a; Cer+15]
Based on the SCC-Recursiveness Schema [BGG05].
Algorithm 1 Computing D-preferred labellings of an AF
P-PREF(Γ)
1: Input: Γ = ⟨A, →⟩
2: Output: Ep ∈ 2^L(Γ)
3: return P-SCC-REC(Γ, A)

Algorithm 2 Greedy computation of base cases
GREEDY(L, C)
1: Input: L = (L1, ..., Ln := {S1n, ..., Shn}), C ⊆ A
2: Output: M = {..., (Si, Bi), ...}
3: M := ∅
4: for S ∈ ⋃_{i=1}^{n} Li do in parallel
5:   B := B-PR(Γ↓S, S ∩ C)
6:   M := M ∪ {(S, B)}
7: end for
8: return M

BOUNDCOND(Γ, Si, Lab) returns (O, I) where O = {a1 ∈ Si | ∃a2 ∈
S ∩ a1− : Lab(a2) = in} and I = {a1 ∈ Si | ∀a2 ∈ S ∩ a1−, Lab(a2) = out},
with S ≡ S1 ∪ ... ∪ Si−1.
Algorithm 3 Determining the D-grounded labelling of an AF in a set C
GROUNDED(Γ, C)
 1: Input: Γ = ⟨A, →⟩, C ⊆ A
 2: Output: (Lab, U) : U ⊆ A, Lab ∈ L_{A\U}
 3: Lab := ∅
 4: U := A
 5: repeat
 6:   initialfound := ⊥
 7:   for a1 ∈ C do
 8:     if {a2 ∈ U | a2 → a1} = ∅ then
 9:       initialfound := ⊤
10:       Lab := Lab ∪ {(a1, in)}
11:       U := U \ {a1}
12:       C := C \ {a1}
13:       for a2 ∈ (U ∩ a1+) do
14:         Lab := Lab ∪ {(a2, out)}
15:         U := U \ {a2}
16:         C := C \ {a2}
17:       end for
18:     end if
19:   end for
20: until (¬initialfound)
21: return (Lab, U)
Algorithm 4 Computing D-preferred labellings of an AF in C
P-SCC-REC(Γ, C)
 1: Input: Γ = ⟨A, →⟩, C ⊆ A
 2: Output: Ep ∈ 2^L(Γ)
 3: (Lab, U) := GROUNDED(Γ, C)
 4: Ep := {Lab}
 5: Γ := Γ↓U
 6: L := (L1 := {S11, ..., Sk1}, ..., Ln := {S1n, ..., Shn}) = SCCS-LIST(Γ)
 7: M := {..., (Si, Bi), ...} = GREEDY(L, C)
 8: for l ∈ {1, ..., n} do
 9:   El := {ES1l := (), ..., ESkl := ()}
10:   for S ∈ Ll do in parallel
11:     for Lab ∈ Ep do in parallel
12:       (O, I) := L-COND(Γ, S, Ll, Lab)
13:       if I = ∅ then
14:         ESl[Lab] = {{(a1, out) | a1 ∈ O} ∪ {(a1, undec) | a1 ∈ S \ O}}
15:       else
16:         if I = S then
17:           ESl[Lab] = B where (S, B) ∈ M
18:         else
19:           if O = ∅ then
20:             ESl[Lab] = B-PR(Γ↓S, I ∩ C)
21:           else
22:             ESl[Lab] = {{(a1, out) | a1 ∈ O}}
23:             ESl[Lab] = ESl[Lab] ⊗ P-SCC-REC(Γ↓_{S\O}, I ∩ C)
24:           end if
25:         end if
26:       end if
27:     end for
28:   end for
29:   for S ∈ Ll do
30:     E′p := ∅
31:     for Lab ∈ Ep do in parallel
32:       E′p := E′p ∪ ({Lab} ⊗ ESl[Lab])
33:     end for
34:     Ep := E′p
35:   end for
36: end for
37: return Ep
2.5 Second-order Solver [BJT16]
http://research.ics.aalto.fi/software/sat/sat-to-sat/so2grounder.shtml
Given a representation of an argumentation framework such that:
• a(X) holds iff X is an argument;
• r(X, Y) holds iff X attacks Y;
then:
• TCF = { ∄N, M | r(N, M) ∧ s(N) ∧ s(M). }

• TAD = {
    ∀N | att(N) ⇔ ( a(N) ∧ ∃M | r(M, N) ∧ s(M) ).
    ∀N | def(N) ⇔ ( a(N) ∧ ∀M | r(M, N) ⇒ att(M) ).
  }

• TFP = { TAD. ∀N | s(N) ⇔ def(N). }

• TGR = {
    TFP.
    ∄s′, att′, def′ :
      TFP[s/s′, def/def′, att/att′] ∧ ( ∀N | s′(N) ⇒ s(N) ) ∧ ( ∃N | s(N) ∧ ¬s′(N) )
  }

• TST = { TAD. ∀N | a(N) ⇒ ( s(N) ⇔ ¬att(N) ). }

• TCO = { TFP. TCF. }

• TPR = {
    TCO.
    ∄s′, att′, def′ :
      TCO[s/s′, def/def′, att/att′] ∧ ( ∀N | s(N) ⇒ s′(N) ) ∧ ( ∃N | s′(N) ∧ ¬s(N) )
  }
The unary predicate s describes the extensions under the various
semantics.
2.6 Which One?
“We need to be smart” — Holger H. Hoos, Invited Keynote Talk at ECAI 2014

Features for AFs [VCG14; CGV14]

Directed Graph (26 features)
Structure: # vertices (|A|); # edges (|→|); # vertices / # edges; # edges / # vertices; density
Degree (attackers): average; stdev; max; min
SCCs: #; average; stdev; max; min
Structure: # self-defeating arguments; # unattacked arguments; flow hierarchy; Eulerian; aperiodic
CPU-time: ...
Undirected Graph (24 features)
Structure: # edges; # vertices / # edges; # edges / # vertices; density
Degree: average; stdev; max; min
SCCs: #; average; stdev; max; min
Structure: transitivity
3-cycles: #; average; stdev; max; min
CPU-time: ...
Average CPU-time (and stdev) needed for extracting the features:

Directed Graph Features (DG)
Class       Mean   StdDev  # feat
Graph Size  0.001  0.009   5
Degree      0.003  0.009   4
SCC         0.046  0.036   5
Structure   2.304  2.868   5

Undirected Graph Features (UG)
Class       Mean   StdDev  # feat
Graph Size  0.001  0.003   4
Degree      0.002  0.004   4
SCC         0.011  0.009   5
Structure   0.799  0.684   1
Triangles   0.787  0.671   5
Best Features for Runtime Prediction [CGV14]
Determined by a greedy forward search based on the Correlation-based
Feature Selection (CFS) attribute evaluator.
Solver     B1              B2                B3
AspartixM  num. arguments  density (DG)      size max. SCC
PrefSAT    density (DG)    num. SCCs         aperiodicity
NAD-Alg    density (DG)    CPU-time density  CPU-time Eulerian
SSCp       density (DG)    num. SCCs         size max. SCC
Predicting the (log)Runtime [CGV14]

RMSE of regression (lower is better):

           B1    B2    B3    DG    UG    SCC   All
AspartixM  0.66  0.49  0.49  0.48  0.49  0.52  0.48
PrefSAT    1.39  0.93  0.93  0.89  0.92  0.94  0.89
NAD-Alg    1.48  1.47  1.47  0.77  0.57  1.61  0.55
SSCp       1.36  0.80  0.78  0.75  0.75  0.79  0.74

The RMSE over log-runtimes is defined as
√( (1/n) Σ_{i=1}^{n} (log10(ti) − log10(yi))² ),
where ti is the observed runtime and yi the predicted one.
Best Features for Classification [CGV14]
Determined by a greedy forward search based on the Correlation-based
Feature Selection (CFS) attribute evaluator.
C-B1            C-B2          C-B3
num. arguments  density (DG)  min attackers
Classification [CGV14]

Classification accuracy and per-solver precision (higher is better):

                 C-B1   C-B2   C-B3   DG     UG     SCC    All
Accuracy         48.5%  70.1%  69.9%  78.9%  79.0%  55.3%  79.5%
Prec. AspartixM  35.0%  64.6%  63.7%  74.5%  74.9%  42.2%  76.1%
Prec. PrefSAT    53.7%  67.8%  68.1%  79.6%  80.5%  60.4%  80.1%
Prec. NAD-Alg    26.5%  69.2%  69.0%  81.7%  85.1%  35.3%  86.0%
Prec. SSCp       54.3%  73.0%  72.7%  76.6%  76.8%  57.8%  77.2%
Selecting the Best Algorithm [CGV14]
Metric: Fastest (max. 1007)
AspartixM 106
NAD-Alg 170
PrefSAT 278
SSCp 453
EPMs Regression 755
EPMs Classification 788
Metric: IPC (max. 1007)
NAD-Alg 210.1
AspartixM 288.3
PrefSAT 546.7
SSCp 662.4
EPMs Regression 887.7
EPMs Classification 928.1
IPC score¹: for each AF, each system gets a score of T*/T, where T is its
execution time and T* is the best execution time among the compared
systems, or a score of 0 if it fails on that AF. Runtimes below 0.01 seconds
get by default the maximal score of 1. The IPC score thus considers, at
the same time, the runtimes and the number of solved instances.
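The IPC score can be computed as follows (a sketch; representing failures as None is my convention):

```python
def ipc_scores(runtimes):
    """IPC score per solver: T*/T on solved instances, 0 on failures (None)."""
    systems = list(runtimes)
    n = len(next(iter(runtimes.values())))  # number of AF instances
    scores = {s: 0.0 for s in systems}
    for i in range(n):
        solved = [runtimes[s][i] for s in systems if runtimes[s][i] is not None]
        if not solved:
            continue
        best = min(solved)  # T*: best runtime on this instance
        for s in systems:
            t = runtimes[s][i]
            if t is None:
                continue  # failed: contributes 0
            scores[s] += 1.0 if t < 0.01 else best / t
    return scores

# Two AFs: solver B fails on the second one.
print(ipc_scores({"A": [2.0, 1.0], "B": [1.0, None]}))  # {'A': 1.5, 'B': 1.0}
```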
¹ http://ipc.informatik.uni-freiburg.de/
3 Ranking-Based Semantics
3.1 The Categoriser Semantics [BH01]
Definition 42 ([BH01]). Let Γ = ⟨A, →⟩ be an argumentation framework.
The categoriser function Cat : A → (0, 1] is defined as:

Cat(a1) = 1                                  if a1− = ∅
Cat(a1) = 1 / (1 + Σ_{a2 ∈ a1−} Cat(a2))     otherwise
♠
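For acyclic AFs, Cat can be computed by a direct recursion; in general it can be approximated by fixed-point iteration, as in this sketch (the function name is mine):

```python
def categoriser(args, attacks, iters=1000):
    """Approximate the categoriser function Cat by fixed-point iteration."""
    att = {a: [b for (b, c) in attacks if c == a] for a in args}
    cat = {a: 1.0 for a in args}  # initial guess
    for _ in range(iters):
        # unattacked arguments keep value 1 (empty sum); others are damped
        cat = {a: 1.0 / (1.0 + sum(cat[b] for b in att[a])) for a in args}
    return cat

# b is attacked by the unattacked a: Cat(a) = 1, Cat(b) = 1/2.
c = categoriser({"a", "b"}, {("a", "b")})
print(round(c["a"], 3), round(c["b"], 3))  # 1.0 0.5
```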
3.2 Properties for Ranking-Based Semantics [Bon+16]
Preliminary notions
Definition 43. Let Γ = ⟨A, →⟩ and a1, a2 ∈ A. A path from a2 to a1,
denoted P(a2, a1), is a sequence s = ⟨b0, ..., bn⟩ of arguments such that
b0 = a1, bn = a2, and ∀i < n, ⟨bi+1, bi⟩ ∈ →. We denote by lP = n the
length of P.

A defender (resp. attacker) of a1 is an argument situated at the beginning
of an even-length (resp. odd-length) path. We denote the multiset
of defenders and attackers of a1 by Rn+(a1) = {a2 | ∃P(a2, a1) with lP ∈ 2ℕ}
and Rn−(a1) = {a2 | ∃P(a2, a1) with lP ∈ 2ℕ + 1} respectively. The direct
attackers of a1 are the arguments in R1−(a1) = a1−. An argument a1 is
defended if R2+(a1) = (a1−)− ≠ ∅.

A defence root (resp. attack root) is a non-attacked defender (resp.
attacker). We denote the multiset of defence roots and attack roots of a1
by BRn+(a1) = {a2 ∈ Rn+(a1) | a2− = ∅} and BRn−(a1) = {a2 ∈ Rn−(a1) |
a2− = ∅} respectively. A path from a2 to a1 is a defence branch (resp.
attack branch) if a2 is a defence (resp. attack) root of a1. Let us note
BR+(a1) = ⋃n BRn+(a1) and BR−(a1) = ⋃n BRn−(a1). ♠
Definition 44. A ranking-based semantics σ associates to any argumentation
framework Γ = ⟨A, →⟩ a ranking ⪰σΓ on A, where ⪰σΓ is a preorder
(a reflexive and transitive relation) on A. a1 ⪰σΓ a2 means that a1 is at
least as acceptable as a2. a1 ≻σΓ a2 iff a1 ⪰σΓ a2 and a2 ⋡σΓ a1. ♠

Definition 45. A lexicographical order between two vectors of real numbers
V = ⟨V1, ..., Vn⟩ and V′ = ⟨V′1, ..., V′n⟩ is defined as V ⪰lex V′ iff ∃i ≤ n
s.t. Vi ≥ V′i and ∀j < i, Vj = V′j. ♠
Definition 46. An isomorphism γ between two argumentation frameworks
Γ = ⟨A, →⟩ and Γ′ = ⟨A′, →′⟩ is a bijective function γ : A → A′ such
that ∀a1, a2 ∈ A, ⟨a1, a2⟩ ∈ → iff ⟨γ(a1), γ(a2)⟩ ∈ →′. With a slight
abuse of notation, we will note Γ′ = γ(Γ). ♠

Definition 47 ([AB13]). Let ⪰S be a ranking on a set of arguments A.
For any S1, S2 ⊆ A, S1 ≥S S2 is a group comparison iff there exists an
injective mapping f from S2 to S1 such that ∀a1 ∈ S2, f(a1) ⪰ a1.
S1 >S S2 is a strict group comparison iff S1 ≥S S2 and (|S2| < |S1| or
∃a1 ∈ S2, f(a1) ≻ a1). ♠
Definition 48. Let Γ = ⟨A, →⟩ and a1 ∈ A. The defence of a1 is simple iff
every defender of a1 attacks exactly one direct attacker of a1. The defence
of a1 is distributed iff every direct attacker of a1 is attacked by at most
one argument. ♠

Definition 49. Let Γ = ⟨A, →⟩, a1 ∈ A. The defence branch added to a1
is P+(a1) = ⟨A′, →′⟩, with A′ = {b0, ..., bn}, n ∈ 2ℕ, b0 = a1, A ∩ A′ = {a1},
and →′ = {⟨bi, bi−1⟩ | 1 ≤ i ≤ n}. The attack branch added to a1, denoted
P−(a1), is defined similarly except that the sequence is of odd length (i.e.
n ∈ 2ℕ + 1). ♠
Properties
Given a ranking-based semantics σ and Γ = ⟨A, →⟩, ∀a1, a2 ∈ A:
Abstraction (Abs) [AB13]. The ranking on A should be defined only
on the basis of the attacks between arguments.
Let Γ′ = ⟨A′, →′⟩. For any isomorphism γ s.t. Γ′ = γ(Γ), a1 ⪰σΓ a2 iff
γ(a1) ⪰σΓ′ γ(a2).

Independence (In) [MT08; AB13]. The ranking between two arguments
a1 and a2 should be independent of any argument that is connected
to neither a1 nor a2.
∀Γ′ = ⟨A′, →′⟩ ∈ cc(Γ)¹, ∀a1, a2 ∈ A′, a1 ⪰σΓ′ a2 ⇒ a1 ⪰σΓ a2.

Void Precedence (VP) [CL05; MT08; AB13]. A non-attacked argument
is ranked strictly higher than any attacked argument.
a1− = ∅ and a2− ≠ ∅ ⇒ a1 ≻σ a2.

Self-Contradiction (SC) [MT08]. A self-attacking argument is ranked
lower than any non self-attacking argument.
⟨a1, a1⟩ ∉ → and ⟨a2, a2⟩ ∈ → ⇒ a1 ≻σ a2.
1cc(Γ) denotes the set of connected components of an AF Γ.
Cardinality Precedence (CP) [AB13]. The greater the number of di-
rect attackers of an argument, the weaker its level of acceptability.
|a1−| < |a2−| ⇒ a1 ≻σ a2.

Quality Precedence (QP) [AB13]. The greater the acceptability of
one direct attacker of an argument, the weaker its level of acceptability.
∃a3 ∈ a2− s.t. ∀a4 ∈ a1−, a3 ≻σ a4 ⇒ a1 ≻σ a2.

Counter-Transitivity (CT) [AB13]. If the direct attackers of a2 are
at least as numerous and acceptable as those of a1, then a1 is at least as
acceptable as a2.
a2− ≥S a1− ⇒ a1 ⪰σ a2.

Strict Counter-Transitivity (SCT) [AB13]. If, in addition to CT, the
direct attackers of a2 are strictly more numerous or acceptable than those
of a1, then a1 is strictly more acceptable than a2.
a2− >S a1− ⇒ a1 ≻σ a2.
Defence Precedence (DP) [AB13]. For two arguments with the same
number of direct attackers, a defended argument is ranked strictly higher
than a non-defended argument.
|a1−| = |a2−|, {a1−}− ≠ ∅ and {a2−}− = ∅ ⇒ a1 ≻σ a2.

Distributed-Defence Precedence (DDP) [AB13]. The best defence
is when each defender attacks a distinct attacker.
If |a1−| = |a2−| and |{a1−}−| = |{a2−}−|, the defence of a1 is simple and
distributed, and the defence of a2 is simple but not distributed, then
a1 ≻σ a2.
Strict Addition of Defence Branch (⊕DB) [CL05]. Adding a defence
branch to any argument improves its ranking.
Given an isomorphism γ: if Γ∗ = Γ ∪ γ(Γ) ∪ P+(γ(a1)), then γ(a1) ≻σΓ∗ a1.

Increase of Defence Branch (↑DB) [CL05]. Increasing the length of
a defence branch of an argument degrades its ranking.
Given an isomorphism γ: if a2 ∈ BR+(a1), a2 ∉ BR−(a1) and Γ∗ = Γ ∪ γ(Γ) ∪
P+(γ(a2)), then a1 ≻σΓ∗ γ(a1).

Addition of Defence Branch (+DB) [CL05]. Adding a defence branch
to an attacked argument improves its ranking.
Given an isomorphism γ: if Γ∗ = Γ ∪ γ(Γ) ∪ P+(γ(a1)) and |a1−| ≠ 0, then
γ(a1) ≻σΓ∗ a1.
Increase of Attack Branch (↑AB) [CL05]. Increasing the length of
an attack branch of an argument improves its ranking.
Given an isomorphism γ: if a2 ∈ BR−(a1), a2 ∉ BR+(a1) and Γ∗ = Γ ∪ γ(Γ) ∪
P+(γ(a2)), then γ(a1) ≻σΓ∗ a1.

Addition of Attack Branch (+AB) [CL05]. Adding an attack branch
to any argument degrades its ranking.
Given an isomorphism γ: if Γ∗ = Γ ∪ γ(Γ) ∪ P−(γ(a1)), then a1 ≻σΓ∗ γ(a1).

Total (Tot) [Bon+16]. All pairs of arguments can be compared.
a1 ⪰σ a2 or a2 ⪰σ a1.

Non-attacked Equivalence (NaE) [Bon+16]. All non-attacked argu-
ments have the same rank.
a1− = ∅ and a2− = ∅ ⇒ a1 ≃σ a2.

Attack vs Full Defence (AvsFD) [Bon+16]. An argument without
any attack branch is ranked strictly higher than an argument attacked
only by one non-attacked argument.
Γ is acyclic, |BR−(a1)| = 0, |a2−| = 1, and |{a2−}−| = 0 ⇒ a1 ≻σ a2.
• CP incompatible with QP [AB13]
• CP incompatible with AvsFD [Bon+16]
• CP incompatible with +DB [Bon+16]
• VP incompatible with ⊕DB [Bon+16]

Table 3.1: Incompatible properties

• SCT implies VP [AB13]
• CT implies DP [AB13]
• SCT implies CT [Bon+16]
• CT implies NaE [Bon+16]
• ⊕DB implies +DB [Bon+16]

Table 3.2: Dependencies among properties
Property   Satisfied?   Comment
Abs        Yes
In         Yes
VP         Yes          Implied by SCT
DP         Yes          Implied by CT
CT         Yes          Implied by SCT
SCT        Yes
CP         No
QP         No
DDP        No
SC         No
⊕DB        No           Incompatible with VP
+AB        Yes
+DB        No
↑AB        Yes
↑DB        Yes
Tot        Yes
NaE        Yes          Implied by CT
AvsFD      No

Table 3.3: Properties satisfied by Cat [BH01]
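The first rows of Table 3.3 can be checked concretely: the categoriser of Section 3.1 assigns Cat(a) = 1/(1 + Σ over the direct attackers b of a of Cat(b)), computable by fixed-point iteration. A minimal sketch (the dictionary encoding of the AF is an assumption):

```python
def categoriser(attackers, iterations=1000, tol=1e-9):
    """Besnard & Hunter's categoriser: Cat(a) = 1 / (1 + sum of Cat(b)
    over the direct attackers b of a), via fixed-point iteration."""
    cat = {a: 1.0 for a in attackers}
    for _ in range(iterations):
        new = {a: 1.0 / (1.0 + sum(cat[b] for b in attackers[a]))
               for a in attackers}
        if max(abs(new[a] - cat[a]) for a in attackers) < tol:
            return new
        cat = new
    return cat

# a is unattacked, a attacks b, b attacks c
af = {"a": [], "b": ["a"], "c": ["b"]}
ranks = categoriser(af)
assert ranks["a"] > ranks["b"]  # Void Precedence (VP) in action
assert ranks["c"] > ranks["b"]  # a longer attack branch helps c (↑AB)
```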
4 Argumentation Schemes
Argumentation schemes [WRM08] are reasoning patterns which generate
arguments:
• deductive/inductive inferences that represent forms of common types
of arguments used in everyday discourse and in special contexts
(e.g. legal argumentation);
• neither deductive nor inductive, but defeasible, presumptive, or ab-
ductive.
Moreover, an argument satisfying a pattern may not be very strong by
itself, but may be strong enough to provide evidence to warrant rational
acceptance of its conclusion, given that its premises are acceptable.
According to Toulmin [Tou58], such an argument can be plausible and
thus accepted after a balance of considerations in an investigation or dis-
cussion that moves forward as new evidence is collected. The investiga-
tion can then proceed, even under conditions of uncertainty and lack
of knowledge, using the conclusions tentatively accepted.
4.1 An example: Walton et al.'s Argumentation Schemes for Practical Reasoning
Suppose I am deliberating with my spouse on what to do
with our pension investment fund — whether to buy stocks,
bonds or some other type of investments. We consult with a
financial adviser, an expert source of information who can
tell us what is happening in the stock market, and so forth, at
the present time [Wal97].
Premises for practical inference:

1. states that an agent ("I" or "my") has a particular goal;

2. states a means by which the agent can bring that goal about.
⟨S0, S1, . . . , Sn⟩ represents a sequence of states of affairs that can be
ordered temporally from earlier to later. A state of affairs is meant to be
like a statement, but one describing some event or occurrence that can
be brought about by an agent. It may be a human action, or it may be a
natural event.
Argumentation Schemes • AS and Dialogues
Practical Inference

Premises:
• Goal Premise: Bringing about Sn is my goal.
• Means Premise: In order to bring about Sn, I need to bring about Si.

Conclusion: Therefore, I need to bring about Si.

Critical questions:
• Other-Means Question: Are there alternative possible actions to bring about Si that could also lead to the goal?
• Best-Means Question: Is Si the best (or most favourable) of the alternatives?
• Other-Goals Question: Do I have goals other than Sn whose achievement is preferable and that should have priority?
• Possibility Question: Is it possible to bring about Si in the given circumstances?
• Side Effects Question: Would bringing about Si have known bad consequences that ought to be taken into account?
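The scheme above can be encoded as a small data structure and instantiated against the pension example; the representation below is an illustrative assumption, not Walton's formal notation:

```python
# The practical-inference scheme, with its critical questions as
# labelled attack points (structure is an assumption for illustration).
practical_inference = {
    "premises": {
        "goal": "Bringing about Sn is my goal",
        "means": "In order to bring about Sn, I need to bring about Si",
    },
    "conclusion": "Therefore, I need to bring about Si",
    "critical_questions": [
        "Other-Means", "Best-Means", "Other-Goals",
        "Possibility", "Side Effects",
    ],
}

def instantiate(scheme, bindings):
    """Fill the scheme's placeholders to produce a concrete argument."""
    def sub(text):
        for var, val in bindings.items():
            text = text.replace(var, val)
        return text
    inst = {k: sub(v) for k, v in scheme["premises"].items()}
    inst["conclusion"] = sub(scheme["conclusion"])
    return inst

arg = instantiate(practical_inference,
                  {"Sn": "being rich", "Si": "having a job"})
# arg["conclusion"] is "Therefore, I need to bring about having a job"
```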
4.2 AS and Dialogues
Dialogue for practical reasoning: all moves (propose, prefer, justify) are
coordinated in a formal deliberation dialogue that has eight stages [HMP01].
1. Opening of the deliberation dialogue, and the raising of a governing
question about what is to be done.
2. Discussion of: (a) the governing question; (b) desirable goals; (c)
any constraints on the possible actions which may be considered;
(d) perspectives by which proposals may be evaluated; and (e) any
premises (facts) relevant to this evaluation.
3. Suggesting of possible action-options appropriate to the governing
question.
4. Commenting on proposals from various perspectives.
5. Revising of: (a) the governing question, (b) goals, (c) constraints, (d)
perspectives, and/or (e) action-options in the light of the comments
presented; and the undertaking of any information-gathering or
fact-checking required for resolution.
6. Recommending an option for action, and acceptance or non-acceptance
of this recommendation by each participant.
7. Confirming acceptance of a recommended option by each partici-
pant.
8. Closing of the deliberation dialogue.
Proposals are initially made at stage 3, and then evaluated at stages
4, 5 and 6.
Especially at stage 5, much argumentation taking the form of practi-
cal reasoning would seem to be involved.
As discussed in [Wal06], there are three dialectical adequacy condi-
tions for defining the speech act of making a proposal.
The Proponent’s Requirement (Condition 1). The proponent
puts forward a statement that describes an action and says that
both proponent and respondent (or the respondent group) should
carry out this action.
The proponent is committed to carrying out that action: the state-
ment has the logical form of the conclusion of a practical inference,
and also expresses an attitude toward that statement.
The Respondent’s Requirement (Condition 2). The statement
is put forward with the aim of offering reasons of a kind that will
lead the respondent to become committed to it.
The Governing Question Requirement (Condition 3). The job
of the proponent is to overcome doubts or conflicts of opinions, while
the job of the respondent is to express them. Thus the role of the
respondent is to ask questions that cast the prudential reasonable-
ness of the action in the statement into doubt, and to mount attacks
(counter-arguments and rebuttals) against it.
Condition 3 relates to the global structure of the dialogue, whereas
conditions 1 and 2 are more localised to the part where the proposal was
made. Condition 3 relates to the global burden of proof [Wal14] and the
roles of the two parties in the dialogue as a whole.
Speech acts [MP02], like making a proposal, are seen as types of
moves in a dialogue that are governed by rules. Three basic character-
istics of any type of move have to be defined:
1. pre-conditions of the move;
2. the conditions defining the move itself;
3. the post-conditions that state the result of the move.
Preconditions
• At least two agents (proponent and opponent);

• A governing question;

• A set of statements (propositions);

• The proponent proposes the proposition to the respondent if and
only if:

1. there is a set of premises that the proponent is committed to,
and that fit the premises of the argumentation scheme for practical
reasoning;

2. the proponent is advocating these premises, that is, he is mak-
ing a claim that they are true or applicable in the case at issue;

3. there is an inference from these premises fitting the argumen-
tation scheme for practical reasoning; and

4. the proposition is the conclusion of the inference.
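The biconditional above can be sketched as a precondition check; the predicate-style strings and the function name are illustrative assumptions, not part of the formal account:

```python
def can_propose(commitments, scheme_instances, proposition):
    """Preconditions for the 'propose' move (sketch): the proponent may
    propose `proposition` iff some practical-reasoning scheme instance
    has premises the proponent is committed to and `proposition` as
    its conclusion."""
    for premises, conclusion in scheme_instances:
        if conclusion == proposition and all(p in commitments for p in premises):
            return True
    return False

# Hypothetical commitments and one scheme instance (goal + means => proposal)
commitments = {"goal(rich)", "means(job, rich)"}
schemes = [({"goal(rich)", "means(job, rich)"}, "should(job)")]
assert can_propose(commitments, schemes, "should(job)")
assert not can_propose({"goal(rich)"}, schemes, "should(job)")
```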
The Defining Conditions
The central defining condition sets out the structure of the move of
making a proposal.
The Goal Statement: We have a goal G.
The Means Statement: Bringing about p is necessary (or suffi-
cient) for us to bring about G.
Then the inference follows.
The Proposal Statement: We should (practically ought to) bring
about p.
Proposal Statement in form of AS

Premises:
• Goal Statement: We have a goal G.
• Means Statement: Bringing about p is necessary (or sufficient) for us to bring about G.

Conclusion: We should (practically ought to) bring about p.
The Post-Conditions
The central post-condition is the response condition.
The proposal must be open to critical questioning by the opponent. The
proponent should be open to answering doubts and objections correspond-
ing to any one of the five critical questions for practical reasoning, as well
as counter-proposals, and is in charge of giving reasons why her pro-
posal is better than the alternatives.
The response condition set by these critical questions helps to explain
how and why the maker of a proposal needs to be open to questioning and
to requests for justification.
5 A Semantic-Web View of Argumentation
Acknowledgement
This handout includes material from a number of collaborators, including
Chris Reed. An overview can also be found at [Bex+13].
5.1 The Argument Interchange Format [Rah+11]
[Figure: the AIF ontology as a graph. A Graph (argument network) has Nodes and Edges. Nodes specialise into Information Nodes (I-Nodes) and Scheme Nodes (S-Nodes); S-Nodes specialise into rule of inference application nodes (RA-Nodes), conflict application nodes (CA-Nodes), preference application nodes (PA-Nodes), and derived concept application nodes (e.g. defeat). S-Nodes use Schemes, contained in a Context: rule of inference schemes (logical and presumptive), conflict schemes (e.g. logical conflict), and preference schemes (logical and presumptive).]
Figure 5.1: Original AIF Ontology [Che+06; Rah+11]
5.2 An Ontology of Arguments [Rah+11]
Please download Protégé from http://protege.stanford.edu/ and the
AIF OWL version from http://www.arg.dundee.ac.uk/wp-content/
uploads/AIF.owl
Representation of the argument described in Figure 5.2
___jobArg : PracticalReasoning_Inference
fulfils(___jobArg, PracticalReasoning_Scheme)
hasGoalPlan_Premise(___jobArg, ___jobArgGoalPlan)
hasConclusion(___jobArg, ___jobArgConclusion)
hasGoal_Premise(___jobArg, ___jobArgGoal)
___jobArgConclusion : EncouragedAction_Statement
fulfils(___jobArgConclusion, EncouragedAction_Desc)
Semantic Web Argumentation • AIF-OWL
[Figure: the Practical Inference scheme, with premise descriptions "Bringing about Sn is my goal" and "In order to bring about Sn I need to bring about Si" and conclusion description "Therefore I need to bring about Si", is fulfilled by the concrete statements "Bringing about being rich is my goal" and "In order to bring about being rich I need to bring about having a job", which support the conclusion "Therefore I need to bring about having a job".]
Figure 5.2: An argument network linking instances of argument and scheme components
[Figure: two example conflict patterns. Symmetric attack: two modus ponens arguments, A1 (from r and r → p, concluding p) and A2 (from p → q and p, concluding q), attack each other through a logical negation conflict. Undercut attack: argument A3 (from r and r → p) attacks the inference step of A2 (from s → v and s, concluding v) via an undercut conflict node.]
Figure 5.3: Examples of conflicts [Rah+11, Fig. 2]
claimText (___jobArgConclusion "Therefore I need to bring about having a job")
___jobArgGoal : Goal_Statement
fulfils(___jobArgGoal, Goal_Desc)
claimText (___jobArgGoal "Bringing about being rich is my goal")
___jobArgGoalPlan : GoalPlan_Statement
fulfils(___jobArgGoalPlan, GoalPlan_Desc)
claimText (___jobArgGoalPlan "In order to bring about being rich I
need to bring about having a job")
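The assertions above are, in effect, subject-predicate-object triples. As a toy illustration (in practice one would load AIF.owl into Protégé or an OWL library), they can be represented and queried in plain Python:

```python
# The jobArg instance assertions as a toy triple store (the set-of-tuples
# representation is an assumption for illustration, not an OWL API).
triples = {
    ("jobArg", "type", "PracticalReasoning_Inference"),
    ("jobArg", "fulfils", "PracticalReasoning_Scheme"),
    ("jobArg", "hasGoal_Premise", "jobArgGoal"),
    ("jobArg", "hasGoalPlan_Premise", "jobArgGoalPlan"),
    ("jobArg", "hasConclusion", "jobArgConclusion"),
    ("jobArgConclusion", "type", "EncouragedAction_Statement"),
}

def objects(subject, predicate):
    """All objects o with (subject, predicate, o) in the store."""
    return {o for (s, p, o) in triples if (s, p) == (subject, predicate)}

assert objects("jobArg", "hasConclusion") == {"jobArgConclusion"}
```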
Relevant portion of the AIF ontology

EncouragedAction_Statement
EncouragedAction_Statement ⊑ Statement

GoalPlan_Statement
GoalPlan_Statement ⊑ Statement

Goal_Statement
Goal_Statement ⊑ Statement

I-node
I-node ≡ Statement
I-node ⊑ Node
I-node ⊑ ¬ S-node

Inference
Inference ≡ RA-node
Inference ⊑ ∃ fulfils Inference_Scheme
Inference ⊑ ≥ 1 hasPremise Statement
Inference ⊑ Scheme_Application
Inference ⊑ =1 hasConclusion (Scheme_Application ⊔ Statement)

Inference_Scheme
Inference_Scheme ⊑ Scheme ⊓ ≥ 1 hasPremise_Desc Statement_Description ⊓ =1 hasConclusion_Desc (Scheme ⊔ Statement_Description)

PracticalReasoning_Inference
PracticalReasoning_Inference ≡ Presumptive_Inference ⊓ ∃ hasConclusion EncouragedAction_Statement ⊓ ∃ hasGoalPlan_Premise GoalPlan_Statement ⊓ ∃ hasGoal_Premise Goal_Statement

RA-node
RA-node ≡ Inference
RA-node ⊑ S-node

S-node
S-node ≡ Scheme_Application
S-node ⊑ Node
S-node ⊑ ¬ I-node
Scheme
Scheme ⊑ Form
Scheme ⊑ ¬ Statement_Description

Scheme_Application
Scheme_Application ≡ S-node
Scheme_Application ⊑ ∃ fulfils Scheme
Scheme_Application ⊑ Thing
Scheme_Application ⊑ ¬ Statement

Statement
Statement ≡ NegStatement
Statement ≡ I-node
Statement ⊑ Thing
Statement ⊑ ∃ fulfils Statement_Description
Statement ⊑ ¬ Scheme_Application

Statement_Description
Statement_Description ⊑ Form
Statement_Description ⊑ ¬ Scheme

fulfils
∃ fulfils Thing ⊑ Node

hasConclusion_Desc
∃ hasConclusion_Desc Thing ⊑ Inference_Scheme

hasGoalPlan_Premise
hasGoalPlan_Premise ⊑ hasPremise

hasGoal_Premise
hasGoal_Premise ⊑ hasPremise

claimText
∃ claimText DatatypeLiteral ⊑ Statement
⊤ ⊑ ∀ claimText DatatypeString
Individuals of EncouragedAction_Desc
EncouragedAction_Desc : Statement_Description
formDescription (EncouragedAction_Desc "A should be brought about")
Individuals of GoalPlan_Desc
GoalPlan_Desc : Statement_Description
formDescription (GoalPlan_Desc "Bringing about B is the way to bring about A")

Individuals of Goal_Desc
Goal_Desc : Statement_Description
formDescription (Goal_Desc "The goal is to bring about A")

Individuals of PracticalReasoning_Scheme
PracticalReasoning_Scheme : PresumptiveInference_Scheme
hasPremise_Desc(PracticalReasoning_Scheme, Goal_Desc)
hasPremise_Desc(PracticalReasoning_Scheme, GoalPlan_Desc)
hasConclusion_Desc(PracticalReasoning_Scheme, EncouragedAction_Desc)
6 A novel synthesis: Collaborative Intelligence Spaces (CISpaces)
Acknowledgement
This handout includes material from a number of collaborators, including
Alice Toniolo and Timothy J. Norman. Main reference: [Ton+15].
6.1 Introduction
Problem
• Intelligence analysis is critical for making well-informed decisions
• Complexities in current military operations increase the amount of
information available to intelligence analysts
CISpaces (Collaborative Intelligence Spaces)
• A toolkit developed to support collaborative intelligence analysis
• CISpaces aims to improve situational understanding of evolving sit-
uations
6.2 Intelligence Analysis
Definition 50 ([DCD11]). The directed and coordinated acquisition and
analysis of information to assess capabilities, intent and opportunities for
exploitation by leaders at all levels. ♠
Fig. 6.1 summarises the Pirolli and Card Model [PC05].
Table 6.1 illustrates the problems of individual analysis and how collaborative analysis can improve it.
CISpaces • Intelligence Analysis
[Figure: the Pirolli & Card sensemaking model. External data sources feed a foraging loop (search and filter into a "shoebox", then an evidence file), which in turn feeds a sense-making loop (schematize, build case, tell story, presentation), with searches for information, evidence and support, and re-evaluation steps; stages are ordered along axes of increasing structure and effort.]
Figure 6.1: The Pirolli & Card Model [PC05]
Individual analysis:
• Scattered information & noise
• Hard to make connections
• Missing information
• Cognitive biases
• Missing expertise

Collaborative analysis:
• More effective and reliable
• Brings together different expertise and resources
• Prevents biases
Table 6.1: Individual vs. Collaborative Analysis
[Figure: a map of Kishshire, the rural area of Kish (harbour, Kish Farm, river, water pipe, aqueduct, Kish Hall Hotel), annotated with the initial information: an illness among young and elderly people in Kishshire caused by bacteria, and an unidentified illness affecting the local livestock.]
Figure 6.2: Initial information assigned to Joe
[Figure: a timeline of further events: illness among people and livestock; a water test showing a bacteria in the water supply; an answer to a POI request reporting "GERMAN" seen in Kish; and an explosion at the Kish Hall Hotel; with tests on people/livestock and POI requests for suspicious people along the way.]
Figure 6.3: Further events happening in Kish
Example of Intelligence Analysis Process
Goal: discover potential threats in Kish
Analysts: Joe, Miles and Ella
What Joe knows is summarised by Figs. 6.2 and 6.3
Main critical points and possible conclusions during the analysis:
• Causes of water contamination → waterborne/non-waterborne
bacteria;
• POI responsible for water contamination;
• Causes of hotel explosion.
CISpaces • Reasoning with Evidence
6.3 Reasoning with Evidence
• Identify what to believe happened from the claims constructed upon
information (the sensemaking process);
• Derive conclusions from data aggregated from explicitly requested
information (the crowdsourcing process);
• Assess what is credible according to the history of data manipula-
tion (the provenance reasoning process).
6.4 Arguments for Sensemaking
Formal Linkage for Semantics Computation
A CISpaces graph, WAT, can be transformed into a corresponding ASPIC-
based argumentation theory. An edge in CISpaces is represented textu-
ally as ↦, an info/claim node is written pi, and a link node is referred to
as ℓtype where type ∈ {Pro, Con}. Then [p1, . . . , pn ↦ ℓPro ↦ pφ] indicates
that the Pro link has p1, . . . , pn as incoming nodes and pφ as outgoing
node.
Definition 51. A WAT is a tuple ⟨K, AS⟩ such that AS = ⟨L, ⁻, R⟩ is con-
structed as follows:

• L is a propositional logic language, and a node corresponds to a
proposition p ∈ L. The WAT set of propositions is Lw.

• The set R is formed by rules ri ∈ R corresponding to Pro links
between nodes, such that [p1, . . . , pn ↦ ℓPro ↦ pφ] is converted to
ri : p1, . . . , pn ⇒ pφ.

• The contrariness function between elements is defined as: i) if
[p1 ↦ ℓCon ↦ p2] and [p2 ↦ ℓCon ↦ p1], then p1 and p2 are contradictory;
ii) if [p1 ↦ ℓCon ↦ p2] and p1 is the only premise of the Con link, then p1
is a contrary of p2; iii) if [p1, p3 ↦ ℓCon ↦ p2], then a rule is added
such that p1 and p3 form an argument with conclusion ph against
p2, ri : p1, p3 ⇒ ph, and ph is a contrary of p2. ♠
Definition 52. K is composed of propositions pi,
K = {pj, pi, . . . }, such that: i) if a set of rules r1, . . . , rn ∈ R forms a cycle
(for every pi that is the consequent of a rule r there exists r′ containing pi
as antecedent), then pi ∈ K if pi is an info node; ii) otherwise, pi ∈ K if pi
is not the consequent of any rule r ∈ R. ♠
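Definition 51's translation of Pro/Con links into rules and contraries can be sketched as follows; the tuple encoding of links and the generated names are assumptions for illustration, not the CISpaces implementation:

```python
def wat_to_theory(links):
    """Translate (premises, link_type, target) triples into defeasible
    rules and contrary pairs, following the three cases of Definition 51
    (a mutual pair of single-premise Con links yields two contrary
    entries, i.e. the 'contradictory' case)."""
    rules, contraries = [], []
    for i, (premises, ltype, target) in enumerate(links):
        if ltype == "Pro":
            rules.append((f"r{i}", premises, target))
        elif ltype == "Con":
            if len(premises) == 1:
                contraries.append((premises[0], target))
            else:
                # several premises: introduce a fresh conclusion p_h
                ph = f"ph{i}"
                rules.append((f"r{i}", premises, ph))
                contraries.append((ph, target))
    return rules, contraries

links = [(["p1"], "Pro", "p2"),
         (["p3"], "Con", "p2"),
         (["p4", "p5"], "Con", "p3")]
rules, contraries = wat_to_theory(links)
# ("p3", "p2") is a contrary pair; p4, p5 form an argument against p3
```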
CISpaces • Arguments for Sensemaking
An Example of Argumentation Schemes for Intelligence Analysis

Intelligence analysis broadly consists of three components: Activities
(Act), including actions performed by actors and events happening in the
world; Entities (Et), including actors as individuals or groups, and objects
such as resources; and Facts (Ft), including statements about the state of
the world regarding entities and activities.

A hypothesis in intelligence analysis is composed of activities and events
that show how the situation has evolved. The argument from cause to ef-
fect (ArgCE) forms the basis of these hypotheses. The scheme, adapted
from [WRM08], is:
Argument from cause to effect

Premises:
• Typically, if C (either a fact Fti or an activity Acti) occurs, then E (either a fact Fti or an activity Acti) will occur.
• In this case, C occurs.

Conclusion: In this case, E will occur.

Critical questions:
• CQCE1: Is there evidence for C to occur?
• CQCE2: Is there a general rule for C causing E?
• CQCE3: Is the relationship between C and E causal?
• CQCE4: Are there any exceptions to the causal rule that prevent the effect E from occurring?
• CQCE5: Has C happened before E?
• CQCE6: Is there any other C′ that caused E?
Formally:
rCE : rule(R, C, E), occur(C), before(C, E), ruletype(R, causal), noexceptions(R) ⇒ occur(E)
CISpaces • Arguments for Provenance
[Figure: the PROV core data model: Entities, Activities and Actors (Agents) linked by the relations Used, WasGeneratedBy, WasDerivedFrom, WasInformedBy, WasAssociatedWith, ActedOnBehalfOf and WasAttributedTo.]
Figure 6.4: PROV Data Model [MM13]
[Figure: a PROV graph for Joe's information, ordered from older to newer p-elements. A water sample, used by a lab water-testing activity associated with the NGO Chemical Lab, generates the claim pjID: "Bacteria contaminates local water". The claim derives from a water contamination report produced by a report-generation activity associated with an NGO lab assistant (primary source), itself informed by the monitoring of the water supply under a water-monitoring requirement; timestamps (2014-11-12 to 2014-11-14) order the activities, and pattern Pg marks the p-elements matched for the analysis goal.]
Figure 6.5: Provenance of Joe’s information
6.5 Arguments for Provenance
Provenance can be used to annotate how, where, when and by whom some
information was produced [MM13]. Figure 6.4 depicts the core model for
representing provenance, and Figure 6.5 shows an example of provenance
for the pieces of information for analyst Joe w.r.t. the water contamination
problem in Kish.
Patterns representing relevant provenance information that may war-
rant the credibility of a datum can be integrated into the analysis by ap-
plying the argument scheme for provenance (ArgPV ) [Ton+14]:
Argument Scheme for Provenance

Premises:
• Given pj about activity Acti, entity Eti, or fact Fti (ppv1)
• GP(pj) includes pattern P′m of p-entities Apv, p-activities Ppv, p-agents Agpv involved in producing pj (ppv2)
• GP(pj) infers that information pj is true (ppv3)

Conclusion: Acti/Eti/Fti in pj may plausibly be true (ppvcn)

Critical questions:
• CQPV1: Is pj consistent with other information?
• CQPV2: Is pj supported by evidence?
• CQPV3: Does GP(pj) contain p-elements that lead us not to believe pj?
• CQPV4: Is there any other p-element that should have been included in GP(pj) to infer that pj is credible?
7 Natural Language Interfaces
7.1 Experiments with Humans: Scenarios [CTO14]
Scenario 1.B
The weather forecasting service of the broadcasting com-
pany AAA says that it will rain tomorrow. Meanwhile, the
forecast service of the broadcasting company BBB says that
it will be cloudy tomorrow but that it will not rain. It is also
well known that the forecasting service of BBB is more accu-
rate than the one of AAA.
Γ1.B = ⟨S1.B, D1.B⟩, where:

S1.B:
s1 : ⇒ sAAA
s2 : ⇒ sBBB

D1.B:
r1 : sAAA ∧ ∼exAAA ⇒ rain
r2 : sBBB ∧ ∼exBBB ⇒ ¬rain
r3 : ∼exaccuracy ⇒ r1 ≺ r2
Γ1.B gives rise to the following set of arguments: A1.B = {a1 = ⟨s1, r1⟩, a2 = ⟨s2, r2⟩, a3 = ⟨r3⟩}, where a2 A1.B-defeats a1. Therefore the set of justified
arguments (which is also the unique stable extension) is {a2, a3}.
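The defeat relation above induces a Dung AF whose stable extensions can be enumerated by brute force (conflict-free sets that attack every outside argument); a minimal sketch:

```python
from itertools import combinations

def stable_extensions(args, defeats):
    """Enumerate stable extensions: conflict-free sets S such that
    every argument outside S is defeated by some member of S."""
    exts = []
    for r in range(len(args) + 1):
        for subset in combinations(args, r):
            s = set(subset)
            if any((a, b) in defeats for a in s for b in s):
                continue  # not conflict-free
            outside = set(args) - s
            if all(any((a, b) in defeats for a in s) for b in outside):
                exts.append(s)
    return exts

# Scenario 1.B: a2 defeats a1; a3 is unattacked
exts = stable_extensions(["a1", "a2", "a3"], {("a2", "a1")})
assert exts == [{"a2", "a3"}]  # the unique stable extension, as in the text
```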
Scenario 1.E
The weather forecasting service of the broadcasting com-
pany AAA says that it will rain tomorrow. Meanwhile, the
forecast service of the broadcasting company BBB says that
it will be cloudy tomorrow but that it will not rain. It is also
well known that the forecasting service of BBB is more accu-
rate than the one of AAA. However, yesterday the trustwor-
thy newspaper CCC published an article which said that BBB
has cut the resources for its weather forecasting service in the
past months, thus making it less reliable than in the past.
Γ1.E = ⟨S1.E, D1.E⟩, where S1.E = S1.B ∪ {s3 : ⇒ sCCC}, and D1.E = D1.B ∪
{r4 : sCCC ∧ ∼exCCC ⇒ cut, r5 : cut ∧ ∼excut ⇒ exaccuracy}.
Γ1.E gives rise to the following set of arguments A1.E = A1.B ∪ {a4 = ⟨s3, r4, r5⟩}. a4 is the unique justified argument, while the defensible ex-
tensions (which are also stable) are {a1, a4} and {a2, a4}.
Natural Language Interfaces • Experiments with Humans: Scenarios [CTO14]
Scenario 2.B
In a TV debate, the politician AAA argues that if Region
X becomes independent then X’s citizens will be poorer than
now. Subsequently, financial expert Dr. BBB presents a doc-
ument which scientifically shows that Region X will not be
worse off financially if it becomes independent.
Γ2.B = ⟨S2.B, D2.B⟩, where:

S2.B:
s1 : ⇒ sAAA
s2 : ⇒ sBBB
s3 : ⇒ sdoc

D2.B:
r1 : sAAA ∧ ∼exAAA ⇒ poorer
r2 : sBBB ∧ sdoc ∧ ∼exBBB ∧ ∼exdoc ⇒ ¬poorer
r3 : ∼exexpert ⇒ r1 ≺ r2
Γ2.B gives rise to the following set of arguments A2.B = {a1 = ⟨s1, r1⟩, a2 = ⟨s2, s3, r2⟩, a3 = ⟨r3⟩}, where a2 A2.B-defeats a1. Therefore the set of justified arguments is {a2, a3}.
Scenario 2.E
In a TV debate, the politician AAA argues that if Region
X becomes independent then X’s citizens will be poorer than
now. Subsequently, financial expert Dr. BBB presents a doc-15
ument; which scientifically shows that Region X will not be
worse off financially if it becomes independent. After that, the
moderator of the debate reminds BBB of more recent research
by several important economists that disputes the claims in
that document.
Γ2.E = ⟨S2.E, D2.E⟩, where S2.E = S2.B ∪ {s4 : ⇒ sresearch, s5 : sresearch ⇒ ¬sdoc}, and D2.E = D2.B.

Γ2.E gives rise to the following set of arguments A2.E = A2.B ∪ {a4 = ⟨s4, s5⟩}. Therefore, there are two stable extensions which are also the
defensible extensions: {a1, a3, a4} and {a2, a3}.
Scenario 3.B
You are planning to buy a second-hand car, and you go to
a dealership with BBB, a mechanic who has been recom-
mended to you by a friend. The salesperson AAA shows you a
car and says that it needs very little work done to it. BBB
says it will require quite a lot of work, because in the past he
had to fix several issues in a car of the same model.
Γ3.B = ⟨S3.B, D3.B⟩, where:

S3.B:
s1 : ⇒ sAAA
s2 : ⇒ sBBB

D3.B:
r1 : sAAA ∧ ∼exAAA ⇒ ¬work
r2 : sBBB ∧ ∼exBBB ⇒ work
r3 : ∼exprofessional ⇒ r1 ≺ r2
Γ3.B gives rise to the following set of arguments A3.B = {a1 = ⟨s1, r1⟩, a2 = ⟨s2, r2⟩, a3 = ⟨r3⟩}, where a2 A3.B-defeats a1. Therefore the set of justified arguments (which is also the unique stable extension) is {a2, a3}.
Scenario 3.E
You are planning to buy a second-hand car, and you go to
a dealership with BBB, a mechanic who has been recom-
mended to you by a friend. The salesperson AAA shows you a
car and says that it needs very little work done to it. BBB
says it will require quite a lot of work, because in the past he
had to fix several issues in a car of the same model. While you
are at the dealership, your friend calls you to tell you that he
knows (beyond a shadow of a doubt) that BBB made unneces-
sary repairs to his car last month.
Γ3.E = ⟨S3.E, D3.E⟩, where S3.E = S3.B ∪ {s3 : ⇒ sfriend}, and D3.E = D3.B ∪ {r4 : sfriend ∧ ∼exfriend ⇒ unnec_work, r5 : unnec_work ∧ ∼exunnec_work ⇒ exprofessional}.

Γ3.E gives rise to the following set of arguments A3.E = A3.B ∪ {a4 = ⟨s3, r4, r5⟩}. Similarly to Scenario 1.E, a4 is the only justified argument
and there are two stable extensions: {a1, a4} and {a2, a4}.
Scenario 4.B
After several dates, you would like to start a serious rela-
tionship with J but you turn to ask two close friends of yours,
AAA and BBB, for advice. You have known BBB for longer
than you have known AAA. AAA tells you that J is lovely and
you should go ahead, while BBB suggests that you should be
very cautious because J might have a hidden agenda.
Γ4.B = ⟨S4.B, D4.B⟩, where:

S4.B:
s1 : ⇒ sAAA
s2 : ⇒ sBBB

D4.B:
r1 : sAAA ∧ ∼exAAA ⇒ go
r2 : sBBB ∧ ∼exBBB ⇒ ¬go
r3 : ∼exbest_friend ⇒ r1 ≺ r2
Γ4.B gives rise to the following set of arguments A4.B = {a1 = ⟨s1, r1⟩, a2 = ⟨s2, r2⟩, a3 = ⟨r3⟩}, where a2 A4.B-defeats a1. Therefore the set of justified arguments (which is also the unique stable extension) is {a2, a3}.
Scenario 4.E
After several dates, you would like to start a serious rela-
tionship with J. but you turn to ask two friends of yours, AAA
and BBB, for advice. You have known BBB for longer than
you have known AAA. AAA tells you that J is lovely and you
should go ahead, while BBB suggests that you should be very
cautious because J might have a hidden agenda. After some
weeks, CCC, who is also a close friend of BBB, tells you that
BBB has been into you for years; BBB is too shy to tell you
about their feelings for you, but is still possessive of you.
Γ4.E = ⟨S4.E, D4.E⟩, where S4.E = S4.B ∪ {s3 : ⇒ sCCC}, and D4.E = D4.B ∪ {r4 : sCCC ∧ ∼exCCC ⇒ possessive, r5 : possessive ∧ ∼expossessive ⇒ ¬(r1 ≺ r2)}.

Γ4.E gives rise to the following set of arguments A4.E = A4.B ∪ {a4 = ⟨s3, r4, r5⟩}, with no justified arguments. The stable extensions are: {a1, a4}, {a2, a3}, {a2, a4}.
Results
Figure 7.1: Distribution of the final conclusion PA/PB/PU, comparing base cases with extended cases, in percent.
                 Base cases            Extended cases
                 PA    PB    PU        PA    PB    PU
1, weather        5.0  50.0  45.0      15.8  21.1  63.2
2, politics       5.3  63.2  31.6      21.1  10.5  68.4
3, buying car     0.0  68.2  31.8      23.8  23.8  52.4
4, romance       12.5  68.8  18.8      48.0  36.0  16.0

Table 7.1: Distribution of the final conclusion PA/PB/PU in percent, for each scenario. Shading denotes the most likely conclusions.
Figure 7.2: Distribution across three categories of justification (U1: lack of information; U2: domain-specific reasons; U3: other) for agreement with the PU position in scenarios 1.B and 3.B.
(a) Base cases vs. extended cases

            Scenario        RB†      Md∗B   RE†     Md∗E   C.D.‡
Relevance   1, weather      110.38   6.00   82.92   4.00   46.60
            2, politics     107.45   6.00   69.45   4.00   47.19
            3, buying car   118.05   6.50   67.45   4.00   44.38
            4, romance      48.34    2.00   44.40   2.00   46.57
Agreement   1, weather      116.38   6.00   87.18   4.00   46.60
            2, politics     103.34   6.00   65.05   4.00   47.19
            3, buying car   121.93   6.50   64.33   4.00   44.38
            4, romance      44.94    2.00   44.20   2.00   46.57

(b) Scenario 3.B vs. scenario 4.B

            R3.B†    Md∗3.B   R4.B†   Md∗4.B   C.D.‡
Relevance   118.05   6.50     48.34   2.00     47.79
Agreement   121.93   6.50     44.94   2.00     47.79

Table 7.2: Post-hoc analysis regarding relevance and agreement: pairwise comparison of base and extended cases (a); and between 3.B and 4.B (b). Statistically significant cases (i.e. when |Rx − Ry| > C.D.) are highlighted in grey in the original.
† Mean rank as computed with the Kruskal-Wallis test
∗ Median
‡ Critical Difference, as computed in [SC88] cited by [Fie09], with α = 0.05.
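The post-hoc procedure behind Table 7.2 can be sketched as follows: pool the ratings of two groups, rank them jointly (ties averaged), and compare the absolute difference of the groups' mean ranks against the critical difference CD = z · sqrt(N(N+1)/12 · (1/n_i + 1/n_j)) from [SC88]. The ratings below are fabricated for illustration, and z = 1.96 is hard-coded as the two-sided normal quantile for α = 0.05 with two groups; only the procedure mirrors the analysis, not the data.

```python
from math import sqrt
from statistics import median

def ranks_with_ties(pooled):
    """1-based ranks of pooled observations, ties given their average rank."""
    order = sorted(range(len(pooled)), key=lambda i: pooled[i])
    r = [0.0] * len(pooled)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and pooled[order[j + 1]] == pooled[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j + 2) / 2  # average of positions i+1 .. j+1
        i = j + 1
    return r

def mean_rank_comparison(group_x, group_y, z=1.96):
    """Mean ranks of two groups over the pooled sample, plus Siegel &
    Castellan's critical difference for their pairwise comparison."""
    n = len(group_x) + len(group_y)
    r = ranks_with_ties(list(group_x) + list(group_y))
    rx = sum(r[:len(group_x)]) / len(group_x)
    ry = sum(r[len(group_x):]) / len(group_y)
    cd = z * sqrt(n * (n + 1) / 12 * (1 / len(group_x) + 1 / len(group_y)))
    return rx, ry, cd, abs(rx - ry) > cd

# Fabricated 1-7 ratings for a base vs. an extended case -- illustration only.
base = [2, 3, 3, 4, 2, 3, 4, 3]
extended = [5, 6, 5, 4, 6, 5, 6, 5]
rx, ry, cd, significant = mean_rank_comparison(base, extended)
print(median(base), median(extended), significant)  # -> 3.0 5.0 True
```

A row of Table 7.2 then reports the two mean ranks, the two medians, and the critical difference, with shading when the comparison is significant.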
7.2 Lessons From Argument Mining: [BR11]
Bob says: Lower taxes stimulate the economy
Bob says: The government will inevitably lower the tax rate.
Wilma says: Why?
[Diagram: the locutions are linked by dialogue transitions (Asserting, Challenging, Substantiating, Arguing) anchored to an application of the argument scheme for Argument from Positive Consequences, connecting "Lower taxes stimulate the economy", "The government will inevitably lower the tax rate.", and "Bob is credible".]
Bibliography
[AB13] Leila Amgoud and Jonathan Ben-Naim. “Ranking-Based Semantics for Argumentation Frameworks”. In: Scalable Uncertainty Management: 7th International Conference, SUM 2013, Washington, DC, USA, September 16-18, 2013. Proceedings. Ed. by Weiru Liu, V. S. Subrahmanian, and Jef Wijsen. Berlin, Heidelberg: Springer, 2013, pp. 134–147.

[Bar+14] Pietro Baroni et al. “On the Input/Output behavior of argumentation frameworks”. In: Artificial Intelligence 217 (2014), pp. 144–197. URL: http://www.sciencedirect.com/science/article/pii/S0004370214001131.

[BCG11] P. Baroni, M. Caminada, and M. Giacomin. “An introduction to argumentation semantics”. In: Knowledge Engineering Review 26.4 (2011), pp. 365–410.

[Bex+13] Floris Bex et al. “Implementing the argument web”. In: Communications of the ACM 56.10 (Oct. 2013), p. 66.

[BG07] Pietro Baroni and Massimiliano Giacomin. “On principle-based evaluation of extension-based argumentation semantics”. In: Artificial Intelligence (Special issue on Argumentation in A.I.) 171.10/15 (2007), pp. 675–700.

[BG09a] Pietro Baroni and Massimiliano Giacomin. “Semantics of Abstract Argument Systems”. In: Argumentation in Artificial Intelligence. Ed. by Guillermo Simari and Iyad Rahwan. Springer US, 2009, pp. 25–44.

[BG09b] Pietro Baroni and Massimiliano Giacomin. “Skepticism relations for comparing argumentation semantics”. In: International Journal of Approximate Reasoning 50.6 (June 2009), pp. 854–866. ISSN: 0888-613X. DOI: 10.1016/j.ijar.2009.02.006.

[BGG05] Pietro Baroni, Massimiliano Giacomin, and Giovanni Guida. “SCC-recursiveness: a general schema for argumentation semantics”. In: Artificial Intelligence 168.1-2 (2005), pp. 165–210.
[BH01] Philippe Besnard and Anthony Hunter. “A logic-based theory of deductive arguments”. In: Artificial Intelligence 128 (2001), pp. 203–235. ISSN: 0004-3702. DOI: 10.1016/S0004-3702(01)00071-6.

[BJT16] Bart Bogaerts, Tomi Janhunen, and Shahab Tasharrofi. “Declarative Solver Development: Case Studies”. In: Principles of Knowledge Representation and Reasoning: Proceedings of the Fifteenth International Conference, KR 2016, Cape Town, South Africa, April 25-29, 2016. 2016, pp. 74–83. URL: http://www.aaai.org/ocs/index.php/KR/KR16/paper/view/12822.

[Bon+16] Elise Bonzon et al. “A Comparative Study of Ranking-based Semantics for Abstract Argumentation”. In: Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI’16). 2016. arXiv: 1602.01059.

[BR11] Katarzyna Budzynska and Chris Reed. Whence inference? Tech. rep. University of Dundee, 2011.

[BS12] Stefano Bistarelli and Francesco Santini. “Modeling and Solving AFs with a Constraint-Based Tool: ConArg”. In: Theory and Applications of Formal Argumentation. Vol. 7132. Springer, 2012, pp. 99–116. ISBN: 978-3-642-29183-8.

[Cam06] Martin Caminada. “On the Issue of Reinstatement in Argumentation”. In: Proceedings of the 10th European Conference on Logics in Artificial Intelligence (JELIA 2006). 2006, pp. 111–123. ISBN: 3-540-39625-X.

[Cer+14a] Federico Cerutti et al. “A SCC Recursive Meta-Algorithm for Computing Preferred Labellings in Abstract Argumentation”. In: 14th International Conference on Principles of Knowledge Representation and Reasoning. Ed. by Chitta Baral and Giuseppe De Giacomo. 2014, pp. 42–51. URL: http://www.aaai.org/ocs/index.php/KR/KR14/paper/view/7974.

[Cer+14b] Federico Cerutti et al. “Computing Preferred Extensions in Abstract Argumentation: A SAT-Based Approach”. In: TAFA 2013. Ed. by Elizabeth Black, Sanjay Modgil, and Nir Oren. Vol. 8306. Lecture Notes in Computer Science. Springer-Verlag Berlin Heidelberg, 2014, pp. 176–193. URL: http://link.springer.com/chapter/10.1007/978-3-642-54373-9_12.
[Cer+15] Federico Cerutti et al. “Exploiting Parallelism for Hard Problems in Abstract Argumentation”. In: 29th AAAI Conference - AAAI 2015. 2015, pp. 1475–1481. URL: http://www.aaai.org/ocs/index.php/AAAI/AAAI15/paper/viewFile/9451/9421.

[CGV14] Federico Cerutti, Massimiliano Giacomin, and Mauro Vallati. “Algorithm Selection for Preferred Extensions Enumeration”. In: 5th Conference on Computational Models of Argument. Ed. by Simon Parsons et al. 2014, pp. 221–232. URL: http://ebooks.iospress.nl/volumearticle/37791.

[Cha+15] Günther Charwat et al. “Methods for solving reasoning problems in abstract argumentation — A survey”. In: Artificial Intelligence 220 (Mar. 2015), pp. 28–63. ISSN: 0004-3702. DOI: 10.1016/j.artint.2014.11.008.

[Che+06] Carlos Iván Chesnevar et al. “Towards an argument interchange format”. In: The Knowledge Engineering Review 21.04 (Dec. 2006), p. 293. ISSN: 0269-8889. DOI: 10.1017/S0269888906001044.

[CL05] Claudette Cayrol and Marie-Christine Lagasquie-Schiex. “Graduality in argumentation”. In: Journal of Artificial Intelligence Research 23.1 (2005), pp. 245–297.

[CTO14] Federico Cerutti, Nava Tintarev, and Nir Oren. “Formal Arguments, Preferences, and Natural Language Interfaces to Humans: an Empirical Evaluation”. In: 21st European Conference on Artificial Intelligence. 2014, pp. 207–212. URL: http://ebooks.iospress.nl/volumearticle/36941.

[DCD11] DCDC. Understanding and Intelligence Support to Joint Operations. Tech. rep. 2011.

[DS14] Jeremie Dauphin and Claudia Schulz. “Arg Teach - A Learning Tool for Argumentation Theory”. In: 2014 IEEE 26th International Conference on Tools with Artificial Intelligence. IEEE, 2014, pp. 776–783.

[Dun+14] Paul E. Dunne et al. “Characteristics of Multiple Viewpoints in Abstract Argumentation”. In: Proceedings of the 14th Conference on Principles of Knowledge Representation and Reasoning. 2014, pp. 72–81.
[Dun95] Phan Minh Dung. “On the Acceptability of Arguments and Its Fundamental Role in Nonmonotonic Reasoning, Logic Programming, and n-Person Games”. In: Artificial Intelligence 77.2 (1995), pp. 321–357.

[Dvo+11] Wolfgang Dvořák et al. “Making Use of Advances in Answer-Set Programming for Abstract Argumentation Systems”. In: Proceedings of the 19th International Conference on Applications of Declarative Programming and Knowledge Management (INAP 2011). 2011.

[DW09] Paul E. Dunne and Michael Wooldridge. “Complexity of abstract argumentation”. In: Argumentation in AI. Ed. by I. Rahwan and G. Simari. Springer-Verlag, 2009. Chap. 5, pp. 85–104.

[EGW10] Uwe Egly, Sarah Alice Gaggl, and Stefan Woltran. “Answer-set programming encodings for argumentation frameworks”. In: Argument & Computation 1.2 (June 2010), pp. 147–177. ISSN: 1946-2166. DOI: 10.1080/19462166.2010.486479.

[Fab13] Wolfgang Faber. “Answer Set Programming”. In: Reasoning Web. Semantic Technologies for Intelligent Data Access. Vol. 8067. Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2013, pp. 162–193.

[Fie09] Andy Field. Discovering Statistics Using SPSS (Introducing Statistical Methods series). SAGE Publications Ltd, 2009. ISBN: 1847879071.

[GLW16] Massimiliano Giacomin, Thomas Linsbichler, and Stefan Woltran. “On the Functional Completeness of Argumentation Semantics”. In: Knowledge Representation and Reasoning Conference (KR). 2016.

[HMP01] D. Hitchcock, P. McBurney, and S. Parsons. “A Framework for Deliberation Dialogues”. In: Argument and Its Applications: Proceedings of the Fourth Biennial Conference of the Ontario Society for the Study of Argumentation (OSSA 2001). Ed. by H. V. Hansen et al. 2001.

[MM13] L. Moreau and P. Missier. PROV-DM: The PROV Data Model. Available at http://www.w3.org/TR/prov-dm/. Apr. 2013.

[MP02] Peter McBurney and Simon Parsons. “Games that agents play: A formal framework for dialogues between autonomous agents”. In: Journal of Logic, Language and Information 11.3 (2002), pp. 315–334. URL: http://www.springerlink.com/index/N809NP4PPR3HFTDV.pdf.
[MT08] Paul-Amaury Matt and Francesca Toni. “A Game-Theoretic Measure of Argument Strength for Abstract Argumentation”. In: 11th European Conference on Logics in Artificial Intelligence (JELIA’08). 2008, pp. 285–297.

[NAD14] Samer Nofal, Katie Atkinson, and Paul E. Dunne. “Algorithms for decision problems in argument systems under preferred semantics”. In: Artificial Intelligence 207 (2014), pp. 23–51. URL: http://www.sciencedirect.com/science/article/pii/S0004370213001161.

[NDA12] S. Nofal, P. E. Dunne, and K. Atkinson. “On Preferred Extension Enumeration in Abstract Argumentation”. In: Proceedings of the 3rd International Conference on Computational Models of Argument (COMMA 2012). 2012, pp. 205–216.

[PC05] P. Pirolli and S. Card. “The sensemaking process and leverage points for analyst technology as identified through cognitive task analysis”. In: Proceedings of the International Conference on Intelligence Analysis. 2005.

[Pra10] Henry Prakken. “An abstract framework for argumentation with structured arguments”. In: Argument & Computation 1.2 (June 2010), pp. 93–124. ISSN: 1946-2166. DOI: 10.1080/19462160903564592.

[Pre+14] Alun Preece et al. “Human-machine conversations to support multi-agency missions”. In: ACM SIGMOBILE Mobile Computing and Communications Review 18.1 (2014), pp. 75–84. ISSN: 1559-1662. DOI: 10.1145/2581555.2581568.

[PV02] Henry Prakken and Gerard Vreeswijk. “Logics for Defeasible Argumentation”. In: Handbook of Philosophical Logic 4 (2002), pp. 218–319. DOI: 10.1007/978-94-017-0456-4_3.

[Rah+11] Iyad Rahwan et al. “Representing and classifying arguments on the Semantic Web”. In: The Knowledge Engineering Review 26.04 (Nov. 2011), pp. 487–511. ISSN: 0269-8889. DOI: 10.1017/S0269888911000191.
[RBW08] Francesca Rossi, Peter van Beek, and Toby Walsh. “Chapter 4: Constraint Programming”. In: Handbook of Knowledge Representation. Ed. by Frank van Harmelen, Vladimir Lifschitz, and Bruce Porter. Vol. 3. Foundations of Artificial Intelligence. Elsevier, 2008, pp. 181–211. DOI: 10.1016/S1574-6526(07)03004-0.

[SC88] Sidney Siegel and N. John Castellan Jr. Nonparametric Statistics for The Behavioral Sciences. McGraw-Hill Humanities/Social Sciences/Languages, 1988. ISBN: 0070573573.

[Ton+14] Alice Toniolo et al. “Making Informed Decisions with Provenance and Argumentation Schemes”. In: Eleventh International Workshop on Argumentation in Multi-Agent Systems (ArgMAS 2014). 2014. URL: http://www.inf.pucrs.br/felipe.meneguzzi/download/AAMAS_14/workshops/AAMAS2014-W12/w12-11.pdf.

[Ton+15] Alice Toniolo et al. “Agent Support to Reasoning with Different Types of Evidence in Intelligence Analysis”. In: Proceedings of the 14th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2015). 2015, pp. 781–789. URL: http://aamas2015.com/en/AAMAS_2015_USB/aamas/p781.pdf.

[Tou58] S. Toulmin. The Uses of Argument. Cambridge, UK: Cambridge University Press, 1958.

[VCG14] Mauro Vallati, Federico Cerutti, and Massimiliano Giacomin. “Argumentation Frameworks Features: an Initial Study”. In: 21st European Conference on Artificial Intelligence. Ed. by T. Schaub, G. Friedrich, and B. O’Sullivan. 2014, pp. 1117–1118. URL: http://ebooks.iospress.nl/volumearticle/37148.

[Wal06] Douglas N. Walton. “How to make and defend a proposal in a deliberation dialogue”. In: Artificial Intelligence and Law 14.3 (Sept. 2006), pp. 177–239. ISSN: 0924-8463. DOI: 10.1007/s10506-006-9025-x.

[Wal14] Douglas N. Walton. Burden of Proof, Presumption and Argumentation. Cambridge University Press, 2014.

[Wal97] Douglas N. Walton. Appeal to Expert Opinion. University Park: Pennsylvania State University, 1997.

[WRM08] Douglas N. Walton, Chris Reed, and Fabrizio Macagno. Argumentation Schemes. Cambridge University Press, NY, 2008.