This article was downloaded by: [Universitat Politècnica de València] On: 27 October 2014, At: 03:23. Publisher: Taylor & Francis. Informa Ltd, Registered in England and Wales, Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK
Journal of Experimental & Theoretical Artificial Intelligence
Publication details, including instructions for authors and subscription information:
http://www.tandfonline.com/loi/teta20
A cognitive neuroscience, dual-systems approach to the sorites paradox
Leib Litman a & Mark Zelcer b
a Psychology Department, Lander College, 75–31 150th St., Flushing, NY 11367
b Department of Political Science, Brooklyn College, City University of New York, Brooklyn, NY, USA
Published online: 18 Jul 2013.
To cite this article: Leib Litman & Mark Zelcer (2013) A cognitive neuroscience, dual-systems approach to the sorites paradox, Journal of Experimental & Theoretical Artificial Intelligence, 25:3, 355-366, DOI: 10.1080/0952813X.2013.783130
To link to this article: http://dx.doi.org/10.1080/0952813X.2013.783130
Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.
This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. Terms & Conditions of access and use can be found at http://www.tandfonline.com/page/terms-and-conditions
A cognitive neuroscience, dual-systems approach to the sorites paradox
Leib Litmana and Mark Zelcerb*
aPsychology Department, Lander College, 75–31 150th St., Flushing, NY 11367; bDepartment of Political Science, Brooklyn College, City University of New York, Brooklyn, NY, USA
(Received 1 February 2012; final version received 21 January 2013)
Typical approaches to resolving the sorites paradox attempt to show, in one way or another, that the sorites argument is not paradoxical after all. However, if one can show that the sorites is not really paradoxical, the task remains of explaining why it appears to be a paradox. Our approach begins by addressing the appearance of paradox and then explores what this means for the paradox itself. We examine the sorites from the perspective of the various brain systems that are intuitively comfortable with the key features of the premises of the sorites argument. We suggest that the explicit and implicit cognitive systems are separately responsible for the initial plausibility of the categorical and inductive premises. The appearance of paradox is a function of our brain’s architecture and arises from the conflicting interactions of neurologically distinct systems.
Keywords: sorites paradox; implicit; explicit systems
Introduction
More than just a philosophical curiosity left over from ancient Greece, the sorites paradox has
genuine ramifications for formal logic, set-theoretic definitions, the analysis of natural kinds,
natural language processing and a host of core philosophical issues (Keefe & Smith, 1996).
A typical logico-semantic approach to resolving the paradox attempts to show, in one way or
another, that the sorites argument is not paradoxical after all. However, if one can show that the
sorites is not really paradoxical, then the task remains of explaining why it appears to be a paradox.
Our approach to the sorites runs in the opposite direction. We begin by addressing the appearance
of paradox and then explore what this means for the paradox itself. We examine the paradox from
the perspective of the various brain systems that are intuitively comfortable with the premises of
the sorites argument. We suggest that the explicit and implicit cognitive systems are separately
responsible for the initial plausibility of the two premises. The appearance of paradox is a function
of our brain’s architecture and arises from the conflicting interactions of these systems.
Philosophers have lately been appealing to psychological and cognitive phenomena to shed
light on philosophical problems. Dual-process theories of cognition, not unlike the one we
employ, have been used to address questions about knowledge, folk psychology and akrasia (see
Frankish, 2010), and the phenomenon of vagueness has also been studied empirically (Alxatib &
Pelletier, 2011; Serchuk, Hargreaves, & Zach, 2011). The approach we take, i.e. appealing to the
interaction of the different human cognitive systems, has been used to explain a host of problems
in behavioural economics and is the focus of a recent important book on the topic (Kahneman,
2011). We hope our discussion of a dual-system approach to a central philosophical paradox displays an example of a fruitful interaction between philosophy and the cognitive sciences.

© 2013 Taylor & Francis
*Corresponding author. Email: [email protected]
We first outline the sorites paradox and then show the role that categories play in its
structure. We next review the neurological underpinnings of category formation. Following that,
we outline a dual-process theory of cognition and the corresponding dual-process approach to
category representation. We show that what emerges from this is a picture of how the sorites
paradox is a product of these dual systems of category formation. This leads us to a resolution of
various philosophical questions about the paradox.
The sorites paradox
Natural language contains many vague words. Words such as ‘pink’, ‘tall’, ‘bald’, ‘rich’, ‘hard’,
‘bright’ and ‘table’ can all be considered canonical examples of vague predicates. They are all
vague because there are cases in which it is unclear whether or not the word can be correctly
applied; and indeed sometimes there are cases where it certainly seems that the predicate both
applies and does not apply to one and the same object. So, as the standard examples have it,
someone in the USA whose net worth is four billion US dollars is rich, and someone whose net
worth is only four US dollars is not rich. But much separates four dollars from four billion
dollars. What of individuals with a net worth of a few hundred or thousand dollars, or tens of
thousands, a million or ten millions; are they rich or not? How might one draw a line separating
the rich from the rest of us? The same problem holds for words such as ‘pink’. If you were to
look at a colour chart with thousands of colours like the ones that paint companies distribute, you
may see a colour labelled ‘pink’ and another ‘red’. But there will be a large spectrum of colours
between pink and red. Most of these colours will be judged closer to pink or to red. But there will
be many colour patches that will defy any attempt at consensus about whether that particular
patch is pink or red. (You can easily imagine how the same problem would crop up for words
such as ‘skinny’, ‘light’, ‘fast’ and ‘sour’.)
Any vague predicate can be used to set up a sorites argument, as the sorites paradox is a
product of formal logic combined with vague predicates. Given fairly standard patterns of
formal reasoning, we end up with some very counterintuitive results. We can illustrate this with
a straightforward classical formulation of the inductive sorites:
Categorical premise (CP): One grain of sand does not make a heap of sand.
Inductive premise (IP): For any number of grains of sand that do not make a heap, adding one grain to those does not make the resulting collection of sand a heap.
Paradoxical conclusion: There are no heaps of sand.
This argument can be formulated positively and negatively and can be generalised to include all
kinds of sorites series. If the argument is truth-preserving, as it appears to be, and if its premises
are true, as they appear to be, then the conclusion, that there are no heaps of sand, should be true
too. But the conclusion is false, hence the paradox.
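Run mechanically by a classical reasoner, CP and IP entail the paradoxical conclusion for any number of grains. The following sketch (our own illustration; the chain length is an arbitrary choice) makes the inferential structure explicit:

```python
# Illustrative sketch: applying CP and then IP over and over, as classical
# logic licenses, forces the conclusion "n grains never make a heap".

def sorites_chain(n: int) -> bool:
    """Return the heap-status that CP + IP jointly entail for n grains."""
    heap = False  # CP: one grain of sand does not make a heap
    for _ in range(1, n):
        # IP: adding one grain to a non-heap never yields a heap,
        # so the status can never flip from False to True.
        heap = heap or False
    return heap

# However large n is, the chain of inferences still says "not a heap":
assert sorites_chain(1_000_000) is False
```

The code does nothing but chain the two premises; that the result is absurd for large n is precisely the paradox.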
In this paper, we suggest that human categorisation lies at the heart of the sorites paradox.
Thus, a psychological approach will shed more light on the way the paradox emerges. The world
consists of an unlimited number of potentially different stimuli. Therefore, all organisms need to
segment the environment into categories by means of which physically non-identical stimuli can
be treated as members of the same class (Rosch, 1978). As originally stated by Bruner, Goodnow
and Austin (1956), ‘categorizing serves to cut down the diversity of objects and events that must
be dealt with uniquely by an organism of limited capacities’ (p. 235). Animals lower on the
phylogenetic continuum possess a sophisticated ability to learn about and categorise the stimuli
in their world (e.g. Squire, 1992a; Squire & Kandel, 1999). But with the evolutionary
development of the cortex, additional learning systems emerged allowing for greater cognitive
complexity (Squire, 1992b). Human beings are able to categorise far more flexibly than, say, the
pigeon. Yet, at the same time, experimental evidence shows that humans share many functional
characteristics of categorisation with other animals (Zentall et al., 2008). Thus, humans seem to
possess multiple mechanisms for categorising information. One human mechanism appears
to be evolutionarily conserved, while the other is more flexible and evolutionarily recent. Key to
understanding how the sorites paradox is represented or manifested in the human brain is the
interaction between these two cognitive systems. The interaction of these disparate
categorisation mechanisms gives rise to the appearance of plausibility of each step in the
sorites paradox.
Our argument proposes that the sorites argument appears so intuitively plausible because we
are the product of a specific evolutionary history. Evolution endowed us with two different
cognitive systems that function in different ways. Each generates a coherent enough set of
beliefs. One system generates the intuitive plausibility (belief) of the key feature of the first
premise of the sorites argument, and the other system generates plausible assent towards the key
feature of the second premise. Our possession and use of both systems together puts us in a
position to treat both of the premises that generate the paradox as plausible.
Let us begin with a rudimentary and evolutionarily ancient form of categorisation – operant
conditioning. In a typical operant conditioning situation (e.g. Hanson, 1959), an animal is given
a reward for responding to a specific stimulus of interest, such as a 500-nm light. The response
profile obtained in these experiments, at times, resembles a bell-shaped curve. The animal not
only responds to a 500-nm light, but also responds to wavelengths that are greater and less than
500 nm, a phenomenon known as generalisation. The probability of a response decreases as one
moves away from the optimal 500-nm stimulus and it does so in a gradual manner. This suggests
that the animal’s representation of the conditioned stimulus, even when obtained under precise
learning conditions, does not have well-defined borders. In other words, the stimulus can be said
to be represented vaguely.
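A minimal sketch of such a generalisation gradient may help here. The Gaussian shape and the width parameter below are our illustrative assumptions, not values reported in the conditioning literature cited above:

```python
import math

# Sketch of a generalisation gradient (as in Hanson-style experiments):
# response probability peaks at the trained 500 nm stimulus and falls off
# gradually. The Gaussian form and sigma = 20 nm are illustrative choices.

def response_probability(wavelength_nm: float,
                         trained_nm: float = 500.0,
                         sigma_nm: float = 20.0) -> float:
    return math.exp(-((wavelength_nm - trained_nm) ** 2) / (2 * sigma_nm ** 2))

# Graded and borderless: nearby wavelengths still elicit a (weaker) response,
# and there is no sharp cutoff anywhere along the spectrum.
assert response_probability(500) == 1.0
assert 1 > response_probability(520) > response_probability(560) > 0
```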
We will refer to these vague and borderless representations as ‘implicit’, as it is difficult to
describe or to state explicitly the necessary and sufficient conditions for when a stimulus will or
will not elicit a response. Rather, the extent to which a stimulus will elicit a response depends on
its similarity to the conditioned stimulus. But ‘similarity’ is itself a vague term and what is
critical here is that no well-defined borders exist between those stimuli that produce a response
and those that do not. A useful way of describing a response function for vaguely represented
stimuli is that it is graded and that the probability of a response lies between 0 and 1. Stimuli that
resemble the conditioned stimulus will get some response, albeit a lower one relative to the
conditioned stimulus, with the strength of the response being a function of similarity.
Because the representation of a conditioned stimulus is vague, it is of the sort that is
amenable to the sorites paradox. In other words, this representation works on the same principle
as IP: for any stimulus that produces a response, changing that stimulus by a small amount
should not completely eliminate that response.
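The parallel with IP can be made concrete. In the hedged sketch below (using the same kind of illustrative Gaussian gradient), no single one-unit change in the stimulus eliminates the response, yet the accumulated changes take it from maximal to near zero; this is exactly the structure that IP exploits:

```python
import math

# Sketch: for a graded (implicit) representation, a minimal change in the
# stimulus changes the response only marginally, the implicit analogue of IP.
# The Gaussian gradient and the one-unit step size are illustrative.

def response(w: float, trained: float = 500.0, sigma: float = 20.0) -> float:
    return math.exp(-((w - trained) ** 2) / (2 * sigma ** 2))

step = 1.0  # a "single grain" of wavelength
drops = [response(w) - response(w + step) for w in range(500, 560)]

# No single step ever eliminates the response outright ...
assert all(d < 0.05 for d in drops)
# ... yet the accumulated steps take it from maximal to near zero.
assert response(500) == 1.0 and response(560) < 0.02
```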
This is not to say that only operant conditioning can produce vague representations or that all
representations that are produced by operant conditioning must be vague. Here, we are merely
suggesting that this evolutionarily ancient categorisation process is one mechanism that is able
to produce vague representations in animals and humans alike. Below, operant conditioning will
serve as a model mechanism with which we can examine mental representations that can be
soritised, so to speak. Indeed, we will speculate that many of the vague human concepts in our
language are acquired through a similar learning process.
Though many linguistic categories are vague, many are not. We refer to these categories as
‘explicit’, because they can be easily explicated and precisely defined. If the human cognitive
system consisted only of explicit categories, it would have no vagueness. Explicit systems can
have ‘heaps’, or ‘non-heaps’, as in CP. They will not form categories that are characterised by
admission criteria that are not precisely delineated.
Given these two systems, the implicit and the explicit, let us ask the following question:
could a cognitive architecture characterised by either a purely implicit or a purely explicit design
alone formulate a sorites paradox? Clearly not. And in this lies our approach: the perception of
paradox is caused by the interaction of the explicit and the implicit categories being represented
in the same brain. Rule-following explicit systems lack the ability to form the kind of
representations that characterise vagueness. Explicit reasoning will allow categories to be
formed only on the basis of fixed rules. Rules, by definition, force one to specify the necessary
and sufficient conditions for category membership. Necessary and sufficient conditions are
formed using explicit standard logical rules. So, a purely explicit system cannot create the vague
categories that make the sorites paradox paradoxical. The purely implicit system, on the other
hand, has no way to formulate a paradox because all of its representations are vague. It does not
‘think’ in terms of yes and no, it only possesses graded representations. Indeed, if we allow for
the values in an implicit system to always be between 1 and 0, as in an idealised bell-shaped
curve, then, by definition, this system is incapable of assenting to (i.e. representing) CP, but can
only respond weakly to it. In addition, of course, the purely implicit brain does not reason
classically. Without logical rules, it simply cannot formulate a sorites paradox.
Both explicit reasoning and implicit representations are jointly necessary for the paradox to
emerge, though neither alone is sufficient. As such, the sorites paradox is the product of an
explicit system that generates non-vague representations superimposed over an implicit system
that generates vague representations. In other words, when the explicit system is forced to
manipulate predicates that were formed in a different way by a different kind of system, we may
get this kind of paradoxical consequence.
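The contrast can be formalised in a small sketch. The crisp threshold and the logistic response function below are arbitrary illustrative choices, not claims about actual neural coding; they serve only to show why neither system alone yields a paradox:

```python
import math

# Our illustrative formalisation of the core contrast: an explicit category
# is a crisp rule; an implicit category is a graded response. The threshold
# (100 grains) and the logistic shape are arbitrary assumptions.

def explicit_heap(grains: int) -> bool:
    # A purely explicit system must fix a rule. With any fixed rule,
    # IP is simply false at one point, so no paradox can be formulated.
    return grains >= 100

def implicit_heap(grains: int) -> float:
    # A purely implicit system yields only graded values strictly between
    # 0 and 1: it never categorically assents to CP, it only responds weakly.
    return 1 / (1 + math.exp(-(grains - 100) / 15))

# Explicit: CP holds, and IP fails at exactly one step (99 -> 100 grains).
assert explicit_heap(1) is False
assert explicit_heap(99) is False and explicit_heap(100) is True

# Implicit: the verdict on one grain is a low graded value, not a crisp
# False, and there is no classical logic with which to chain a sorites.
assert 0.0 < implicit_heap(1) < 0.05
```

Only a system housing both kinds of representation, as the human brain does, can find both CP and IP plausible at once.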
Of course, no human being has ever reasoned purely explicitly; only an idealised, extremely
primitive computing machine does. If humans did reason exclusively explicitly, we would have
precise definitions for all of our words and concepts, which we do not. Human cognition is
likewise not purely implicit, as we discuss further below. Thus, human cognition instantiates
both implicit and explicit representations, and it likely does so through two separate neural
systems. It is specifically the interaction of implicit and explicit systems as one single
information-processing unit (i.e. our brain) that produces the perception of paradox. But what
does it mean for such a system to exist?
Two cognitive systems
The organisation of the brain allows physiological and psychological processes to be handled by
separate systems that differ from each other both in complexity of function and in evolutionary
history. The brain is organised hierarchically, with subcortical regions being evolutionarily older
and higher cortical areas emerging more recently. For example, physiologically critical life-
sustaining functions are typically found in subcortical regions. However, more recently evolved
cortical structures are often able to impose finer control over these functions to promote more flexible
behaviour. For example, respiration is controlled by the brain stem, which under normal
circumstances exerts automatic control over breathing. Thus, we do not need to be aware of
breathing and are able to breathe in the absence of conscious control. At the same time, the brain
is able to impose top-down voluntary control over breathing, allowing us to consciously hold our
breath for prolonged periods of time when we so desire. This is an example of the way in which
higher cortical structures can interact with phylogenetically ancient subcortical mechanisms,
allowing for finer control of their function.
Many cognitive processes are similarly structured. Perception, learning, memory and, most
important for our purposes, categorisation can likewise be handled by different systems allowing
for different levels of automaticity, conscious involvement, top-down control and flexibility.
In the last few decades, a general consensus has been building within the cognitive neurosciences
in support of what is known as the dual-process theory of cognition (Frankish, 2010; Kahneman,
2011; Reber, 1993; Schacter, 1987; Squire, 1992a). According to the dual-process theory,
cognitive functions such as emotion, attention, problem solving and others can be handled by
separate cognitive mechanisms that differ from each other in fundamental ways (see Litman &
Reber, 2005). Here, we focus on two processes that can be viewed as two sides of the same coin –
learning and categorisation. A brief review of some of the basic properties that are thought to
characterise these systems will show that categorisation and learning can be handled in both an
explicit and an implicit way by two phylogenetically and functionally distinct mechanisms.
A helpful theoretical framework for thinking about the brain’s learning and categorisation
systems has been described by Reber (1993). According to this theory, the first cognitive
systems to evolve did not require conscious top-down control. Rather, information was acquired
from the environment in a bottom-up way without any top-down conscious modulation. This
system is referred to as the ‘implicit system’. The implicit system developed early on in the
history of the nervous system and once in place the explicit system, which imposes top-down
control over cognitive functions and typically requires conscious awareness, evolved on top of
the already existing implicit structures.
The most well-studied difference between the implicit and the explicit systems (sometimes
thought of as ‘system 1 and system 2’ after Stanovich & West, 1998) has been the respective
conscious accessibility of the information contained within each system (Reber, 1989; Stadler &
Frensch, 1998). But there are also other fundamental differences between them, specifically with
regard to learning and category representation. These differences might correspond to the
changes in the efficiency of cognition that developed over the course of evolution. Because the
implicit system evolved before the explicit system, its mode of operations substantially differs
from that of the explicit system, and since the emergence of the dual-process view in
psychology, one important line of investigation has been to understand the functional
differences between these systems.
In what follows, we (1) discuss the respective evolutionary ages of the category-formation
processes of the implicit and explicit learning systems; (2) explain the inaccessibility of category
boundaries to conscious inspection in implicitly learned concepts; and (3) show the relevance of
the first two points in understanding the cognitive origins of the perception of the sorites
paradox.
Let us first briefly turn to the claim that category-formation processes of the implicit learning
system are evolutionarily ancient. To do this, we need to establish a correspondence between the
way in which certain categories are learned by human beings and other animals.
Herrnstein and Loveland (1964) pioneered investigations in which pigeons displayed a high
degree of ability to respond appropriately to exemplars of a category class. This type of
perceptual learning, which involves sorting stimuli into physically similar groups such as apples
and tables, is very similar to categorisation in humans, in particular to the way in which children
acquire basic-level concepts. It has been proposed (Zentall et al., 2008) that perceptual similarity
guides the responses of non-human animals in the same way as it guides word learning in
humans. For example, Wasserman and his colleagues (Bhatt, Wasserman, Reynolds, & Knauss,
1988) developed a ‘name-game’ procedure to study the acquisition of perceptual concepts that is
analogous to a parent teaching a child to name objects. In the human version, the parent opens a
picture book, points to a picture and asks the child to name it. After a child makes a correct
response, the parent provides positive feedback. For an incorrect response, no positive feedback
is provided, and the parent may try again. After a child is unable to produce the correct response
multiple times, the parent may have to supply it. Using the ‘name-game’, researchers can train
animals to acquire similar concepts. For example, researchers trained pigeons to ‘report’
members of four different categories – cats, flowers, cars and chairs – by pecking four circular
keys surrounding a square viewing screen. Animals readily learned these categories and learned
to correctly classify each stimulus, and they did so within about the same time frame as humans.
There is now a large research literature showing that pigeons readily acquire the ability to
correctly respond to exemplars of a category class (see Zentall et al., 2008). Critically, the
animals are not simply memorising the stimuli and responding to specific memorised exemplars.
They are also able to correctly respond to novel exemplars that were not presented in the training
set, and that vary in terms of perspective, background, colour and number of relevant items
portrayed (Herrnstein, Loveland, & Cable, 1976). This shows that the animals are able to form a
concept that is sufficiently abstract and complex to allow for their responses to be generalised to
substantially novel category exemplars.
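A toy exemplar-based learner illustrates this kind of bottom-up acquisition. The feature vectors and labels below are invented for illustration, and, as in the 'name-game', no definition of either category is ever supplied to the learner:

```python
import math

# Sketch of bottom-up, "name-game"-style category learning: the learner is
# given only labelled exemplars, never a definition, and classifies novel
# stimuli by similarity. Features and values are illustrative inventions.

def similarity(a, b) -> float:
    return math.exp(-sum((x - y) ** 2 for x, y in zip(a, b)))

# Training exemplars: (features, label), e.g. crude [roundness, stem_length].
exemplars = [
    ((0.9, 0.1), "apple"), ((0.8, 0.2), "apple"),
    ((0.2, 0.9), "flower"), ((0.1, 0.8), "flower"),
]

def classify(stimulus) -> str:
    # Respond with the label of the most similar stored exemplar; the
    # category boundary is implicit in the exemplars, never stated as a rule.
    return max(exemplars, key=lambda e: similarity(stimulus, e[0]))[1]

# A novel stimulus, absent from the training set, is classified correctly:
assert classify((0.85, 0.15)) == "apple"
assert classify((0.15, 0.85)) == "flower"
```

The graded `similarity` function is what produces borderless, generalising category boundaries of the implicit kind.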
What is critical here is that an abstract representation is set up, which allows for
generalisation to new instances. As described above, generalisation sets up category boundaries
that are graded and thus allows for novel stimuli to be grouped based on similarity. Indeed, this is
a fundamental feature of implicit category learning. A key feature of this type of categorisation
is to be able to classify, based on similarity, category members that have not been previously
encountered.
The existence of perceptual category learning in animals possessing a relatively
undifferentiated cortex speaks to the phylogenetic antiquity of this mechanism. In addition,
this suggests that categories that are learned through this mechanism differ from the categories
that are formed through what is perhaps a more familiar explicit mechanism. Category formation
that occurs in situations similar to the ‘name-game’ occurs in a manner that we refer to as
bottom-up. The child, or animal, is never told what a cat or a flower is; they are given no
definition. They are not given well-defined boundary conditions that can guide their future
category membership decisions. Rather, they are presented with an exemplar and are told that it
belongs to a certain category – flower – without being told what its defining boundary
conditions are. As more and more exemplars are presented, and as the child continues to
categorise them though trial and error, more appropriate representations of that category are
formed. But in most cases, even after performance is perfected, the child still does not possess
well-defined rules that can be used to determine what makes something a member of a particular
category, and instead relies on intuitions about similarity. These similarity judgements are a key
feature of vagueness and of the representations to which IP of the sorites paradox applies. Thus, what
we are suggesting is that many of our vague linguistic predicates are learned through
evolutionarily ancient implicit mechanisms that guide associative learning in other animals.
The specific computations by which the generalisation gradient is produced by the implicit
system take place outside of conscious awareness. There is ample psychological evidence that
the implicit system learns unconsciously. For example, implicit learning mechanisms are
already functioning in infancy. Implicit processes begin to parse the world well before
the development of explicit learning mechanisms. For example, infants as young as three months
of age are surprised when they hear a sentence in their native language presented backwards
(Dehaene-Lambertz & Houston, 1998; Mehler, Jusczyk, Lambertz, & Halsted, 1988), and
four-day-old infants can already distinguish between their native language and a foreign
language (Dominey & Ramus, 2000; Nazzi, Bertoncini, & Mehler, 1998). The ability to
differentiate between the sounds of different languages suggests that even by that young age
speech is more than just a collection of random noise. Four-day-old infants have already learned
that there are patterns to linguistic sounds. They have a sense of sequential regularity in speech
and are surprised when that regularity is violated. The unfamiliar sounds of a foreign language
are registered by their brains as being out of place.
What is most relevant is that infants can do this before being able to understand language and
before knowing the rules of grammar. An entirely different knowledge system seems to be
operating here, one in which knowledge is not developed through the explicit faculty of rule
knowledge or through memorising facts. This knowledge system extracts regularities in
complex domains through a fluid interaction with the environment and where the knowledge
itself is hard to characterise.
Not all of our linguistic concepts are learned implicitly. Category formation occurs explicitly
when we are presented with a definition of a novel concept that is well specified. For example, an
American accustomed to thinking of long distances in terms of miles may one day be presented
with the concept of a kilometre. Though he/she may eventually come to use it in a less precise way,
‘kilometre’ initially has very specific category boundaries that (a) are well defined – 1000 metres,
(b) are easily verbalisable, and (c) have borders set up in a top-down definitional way. In other
words, we can be taught what a kilometre is from a teacher or a simple sentence in a dictionary.
These are all features of explicit processing. In addition, (d) the specific set of rules that are used to
define the border of this category is available for conscious inspection. Concepts that can be
specified mathematically, such as kilometre, are not the only ones that can be learned explicitly.
Other examples include concepts whose category boundaries can be specified by if–then rules.
For example, ‘mammal’ can be defined by a number of criteria including the presence of sweat
glands, hair, three middle ear bones, etc. Thus, we can be taught that if an animal possesses these
criteria then we should refer to it as a mammal, else we should not refer to it as a mammal. Indeed,
deductive reasoning is the hallmark of the explicit system, a point that will become critical below.
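Explicit categories of this kind are straightforward to render as code, since their boundary conditions just are if-then rules. The criteria below are the simplified ones from the text, not a complete biological definition:

```python
# Sketch of explicit categories: membership is decided by fully verbalisable,
# consciously inspectable rules. The mammal criteria are the simplified ones
# from the text; a real biological definition would be more involved.

def is_mammal(has_sweat_glands: bool, has_hair: bool,
              has_three_middle_ear_bones: bool) -> bool:
    # If the animal possesses these criteria, then it is a mammal; else not.
    if has_sweat_glands and has_hair and has_three_middle_ear_bones:
        return True
    return False

def metres_to_kilometres(metres: float) -> float:
    # 'Kilometre' has a precise, top-down boundary: exactly 1000 metres.
    return metres / 1000.0

assert is_mammal(True, True, True) is True
assert is_mammal(True, True, False) is False
assert metres_to_kilometres(1000) == 1.0
```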
Under normal circumstances, the explicit and the implicit category representations exist side
by side, and task and environmental demands determine which representations we use
(Kahneman, 2011; Poldrack & Packard, 2003; Reber, 1993). For example, we categorise the day
into parts, such as morning, afternoon, evening and night. Under normal circumstances, the term
‘evening’ is acquired in an implicit way similar to the ‘name-game’. Most of the time, children
are not given an exact definition of when an evening begins and when it ends. Rather they
acquire an implicit, abstract representation of this term in a bottom-up way, through an
associative learning process during which they see how this term is applied. Of course, ‘evening’
is a vague term, because the sun rises slowly and gradually makes its way across the sky until it
sets, and there is no natural border to demarcate it. Nevertheless, we can choose to define the
term ‘evening’ more precisely if finer use of this term is required. For example, when there was a
need to determine a time for rituals that were biblically mandated to be done in the evening, one
ancient definition of ‘evening’ was specified as ‘the last quarter of the daylight portion of the
day, before sunset’. With this definition, we impose an explicitly defined boundary on an already
existing implicitly represented concept.
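Imposing such an explicit boundary on an implicitly acquired concept can be sketched as follows. The rule operationalises the ancient definition quoted above; the sunrise and sunset times are illustrative:

```python
from datetime import datetime

# Sketch of imposing an explicit boundary on the implicitly acquired concept
# 'evening', following the ancient definition quoted in the text: the last
# quarter of the daylight portion of the day, before sunset. The sample
# sunrise/sunset times below are illustrative.

def is_evening(now: datetime, sunrise: datetime, sunset: datetime) -> bool:
    daylight = sunset - sunrise
    evening_start = sunset - daylight / 4  # last quarter of daylight
    return evening_start <= now < sunset

sunrise = datetime(2013, 6, 1, 6, 0)   # 06:00
sunset = datetime(2013, 6, 1, 18, 0)   # 18:00, so evening begins at 15:00

assert is_evening(datetime(2013, 6, 1, 16, 0), sunrise, sunset) is True
assert is_evening(datetime(2013, 6, 1, 12, 0), sunrise, sunset) is False
```

The vague, implicit 'evening' survives in everyday use; the crisp function exists alongside it for ritual or legal purposes that demand a boundary.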
Indeed, under normal circumstances, both implicit and explicit processes are synergistically
engaged in category formation (Poldrack & Packard, 2003), and it is only under controlled
experimental conditions that we are able to tease these processes apart. For example, recent
functional magnetic resonance imaging (fMRI) studies (Reber, Gitelman, Parrish, & Mesulam,
2003) show that categories can be formed both by the implicit and the explicit systems, that
conscious access is available for the explicit system only and that both systems are mediated by
different neuroanatomical structures.
To summarise some of the basic properties of the implicit system, we can say that the
boundaries of the categories set up by the implicit system (a) do not have well-defined borders;
(b) are not easily verbalisable; (c) cannot be described with simple if–then statements; (d) are
typically set up over extended time periods, in a bottom-up way, through continuous interaction
with and feedback from the world; and (e) are typically not accessible to conscious awareness.
These characteristics of the implicit system are critical for understanding how the sorites
paradox is represented or manifested in the human brain. Specifically, it seems clear that only
categories that are set up by the implicit system, such as ‘evening’, are susceptible to the sorites
paradox. Categories that can be set up by the explicit system, such as ‘kilometre’, on the other
hand, are not susceptible to the sorites paradox. Furthermore, one of the characteristics of
implicit categorisation is that we are not able to consciously access the specific borders of
implicitly acquired concepts. In other words, we cannot introspect into and consequently
verbalise the precise boundaries of concepts such as ‘evening’. It may be that at least under some
conditions, there is a specific place where even the smallest change in the stimulus can make the
difference in category status. It may also be that no such point exists. Either way, we cannot
verify this through introspection because of the unconscious nature of the system that is involved
in carrying out these computations. Thus, the inaccessibility of these borders to conscious
inspection contributes to their vague status and to the perception of paradox. Virtually any
semantic category created by the implicit system is susceptible to the sorites paradox when
operated on by classical logic. The subsymbolic character of these representations and the
inaccessibility of their borders to conscious inspection are the key features that make implicit
representations susceptible to IP.
Consider for a moment a world in which creatures evolved with only a purely explicit or a
purely implicit cognitive architecture. Would these creatures be able to formulate the sorites
paradox? Surely they could not, and it should now be clear why. A creature with only an explicit
system will never experience a sense of paradox because all of its semantic categories are well
defined. For this creature, a heap will be defined much as a dollar is defined in our world
(bivalently, with if–then statements). Therefore, just as our explicitly defined words are not
susceptible to the sorites, none of the categories formed by a creature with only an explicit
system will be susceptible to the paradox.
What about a creature that has only an implicit system? From its perspective, too, there will
be no perception of paradox because bivalent categories as such do not exist for the purely
implicit system.
Thus, the perception of paradox is created when an implicit category is set up with the aid of
evolutionarily older implicit mechanisms, and when the explicit system, which is a new arrival
on the evolutionary scene, attempts to look in on that category and apply explicit reasoning to it.
The explicit system looks in on and attempts to examine the graded boundaries that are set up, in
the absence of our awareness, by our implicit system. When it does so, it uses its own cognitive
representations. Thus, the sense of paradox is created from the interaction of the implicit and the
explicit systems.
Questions about the paradox
Our analysis sheds light on some of the standard philosophical questions about the sorites
paradox. First, a diagnostic question: exactly what feature of the sorites argument is responsible
for the paradox? En route to solving the problem, many philosophers diagnose it so they can
L. Litman and M. Zelcer 362
isolate the paradox-generating feature of the sorites and adjust accordingly. Some, taking what
we can call an ‘epistemic approach’ (e.g. Field, 1973; Sorensen, 2001), claim that there are
actual boundaries to all predicates, but humans are ignorant of them, and that ignorance is what
creates the paradox. Taking this approach ‘resolves’ the paradox and preserves classical logic by
claiming that there are no real vague predicates. Others (e.g. Dummett, 1975; Read, 1995;
Schiffer, 1998) blame the incoherence of the inference rules that force us to contradict ourselves.
They resolve the paradox by faulting classical inference rules and maintain the vague predicates.
Yet a third group places the blame with the indeterminacy of the truth value of the predicate,
thereby denying the second premise and allowing that the vague predicate has no determinate
extension. Each of these answers claims that the blame for the paradox lies with a different part
of the sorites argument.
On our analysis, what generates the paradox is neither an epistemic, ontological, or semantic
problem with the predicates, nor the fact that the rules of classical logic trip us up. We claim
that classical logic by itself is perfectly coherent, and that the predicates, while neither
determinate nor known, are perfectly reasonable and not innately degenerate (paradox-causing).
The paradox does not arise from any one component of the argument on its own. It arises from
the interaction of the rules and the predicates in the argument. These rules and predicates were
not ‘designed’ to interact, and the paradox is an artefact of an interaction between two cognitive
systems. Thus, we cannot blame one part or the other as both parts are independently innocent.
A second question asks if there is a way to dissolve the paradox. Besides utilitarian
approaches (e.g. Parikh, 1996) that can help us deal with the paradox without actually solving it,
there are three ways to dissolve paradoxes. The first involves showing that the conclusion of an
argument that was initially judged to be paradoxical is in fact acceptable. Philosophers such as
Unger (1979) do this. They claim that certain concatenations do not qualify as ‘legitimate’
objects. On this view, we may not claim that there are any of these illegitimate vague objects
such as heaps, bald people, rich people, perhaps any people at all. Thus, if we concede that there
are no heaps then the conclusion of the argument is true and there is no paradox.
A second approach to dissolving paradoxes involves showing that a premise or assumption
about the paradox that was initially accepted as true is indeed false. For the sorites paradox, we
find that philosophers have defended rejecting almost every part of the paradox. Some have tried
to neutralise the effect of predicate vagueness by declaring that vague predicates either are or
can all be somehow made precise (e.g. Sorensen, 2001) and we just do not know it. When we
precisify the predicates, or claim that they are not vague after all, we end up denying that they
have penumbras (examples that can be classified under two incompatible predicates), and so we
can analyse vague predicates just like sharp ones. Philosophers have recently proposed many
ways to do this (e.g. Soames, 1998).
A third approach neutralises the problem that logic contributes to the paradox by changing
the logic of the sorites argument or by rejecting some part of classical logic. One way to change
the logic is to adopt one with different rules – generally a logic that denies bivalence, perhaps fuzzy
(Zadeh, 1975). By making logic fuzzy, we do not have the problem of vague predicates because
whereas in classical logic every sentence is either true or false (i.e. 0 or 1), in fuzzy logic
sentences containing vague predicates take on an array of definite truth values between 0 and 1
to match the degree of truth of the predicate. Thus, someone may be sort of bald or fairly wealthy
(e.g. 0.24 or 0.628). Fuzzy logic does not allow us to assert categorically that someone is either
rich or not rich. So, if we can include an arbitrary degree of precision and say that someone is
rich to a certain degree, we should be able to overcome the problem of the paradox. Various
alternatives to classical logic have been employed in attempts to dissolve the paradox.
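The contrast between classical bivalence and fuzzy graded truth can be sketched as follows. The membership function and its dollar thresholds are our own illustrative assumptions, not part of any proposal cited above.

```python
# Hypothetical sketch of the fuzzy-logic move (in the spirit of Zadeh,
# 1975): 'rich' takes graded truth values in [0, 1] rather than {0, 1}.
# The dollar thresholds below are illustrative assumptions.

def rich_classical(dollars: int, threshold: int = 1_000_000) -> bool:
    # Classical, bivalent predicate: one sharp, all-or-nothing border.
    return dollars >= threshold

def rich_fuzzy(dollars: int, low: int = 100_000,
               high: int = 10_000_000) -> float:
    # Fuzzy membership: truth ramps linearly from 0 to 1 between anchors.
    if dollars <= low:
        return 0.0
    if dollars >= high:
        return 1.0
    return (dollars - low) / (high - low)

# Classical logic flips abruptly at the border...
print(rich_classical(999_999), rich_classical(1_000_000))  # False True
# ...whereas fuzzy truth changes only marginally for one more dollar,
# so no single dollar is forced to make the difference:
delta = rich_fuzzy(1_000_000) - rich_fuzzy(999_999)
print(delta < 1e-6)  # True
```

The point of the sketch is that under a graded semantics the inductive step of the sorites is nearly, but not exactly, truth-preserving, so repeated application never forces an abrupt flip.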
All the above solutions are unsatisfying as they generally require us to deny classical logic or
the existence of vagueness – both counterintuitive solutions as bad as the paradox itself. So, we
advocate an alternative approach that denies us the ability to evaluate the paradox. We claim that
humans are psychologically biased in favour of giving assent to both premises of the paradox,
making the paradox a biological artefact.
Our point is strengthened by the following consideration. We are generally averse to
abandoning either classical logic or vague predicates as part of our natural discourse. The
implausibility of precisification and fuzzy logic as resolutions to the paradox attests to the
strength of the cognitive intuitions involved in preserving both classical logic and vague
predicates. Our own explanation thus exploits (and in some sense synthesises) the strengths of
both sides.
Given that both aspects of the paradox appeal to such strong intuitions, combined with the
intractability of a solution, we do not suggest a semantic dissolution of the paradox. We clearly
cannot naturally combine the classical logic and the vague predicates – both of which humans
find intuitive. Human cognition alternates between considering sharp and vague categories and
the logic of precision and vagueness, or it synthesises the two – depending on the context.
Pragmatic considerations (perhaps along the lines of Parikh’s language-game suggestion) are
generally used to assess which solution is best in a given context. From this perspective, there is
no principled reason to play by the rules of only one or the other system. One can prioritise
which cognitive system to enlist in a given situation in which a vague category plays a role. But
that will not be the mental system of human
rationality; it is merely one of the two systems we constantly make use of. Each candidate for a
‘solution’ to the sorites paradox must make some sacrifice that will inevitably seem
counterintuitive: e.g. giving up bivalence, truth functionality, vague predicates.
Finally, though there have been some positive steps taken towards accounting for the
appearance of paradox in the sorites argument when we ask what it is about the paradox that
causes us to be perplexed when confronted with it (e.g. Raffman, 1994), our discussion goes the
farthest towards showing the real roots of our confusion. Our analysis also has the advantage of
appealing to well-studied cognitive phenomena, but, more germanely, it explains the
ineliminability of vagueness from our language and the appeal of classical logic in our
reasoning. We heavily rely on one cognitive system when we form vague categories. Our
language would be severely impoverished (or perhaps even impossible) without the vague
predicates we constantly employ. Thus, despite Russell’s (1923) protestations, they are stuck in
our languages. In addition, we also rely heavily on our explicit system for reasoning (and
occasionally sharp category formation), which is why we prefer that our logic follow classical
intuitions about truth functionality, and hence our reluctance to abandon such features of logic as
modus ponens or bivalence.
Conclusion
Our claim amounts to the following statement: the part of the brain responsible for finding vague
predicate talk plausible is one that is rooted in evolutionary antiquity – the part that is
responsible for forming categories and learning implicitly. The part of our brain responsible
for finding plausible the part of the argument that relies on logic, explicit definition and
bivalence is evolutionarily more recent and is responsible for symbolic information
processing and explicit cognition. Humans use the two systems together. The explicit system
was added on without dropping the implicit system. This allows us to generate vague categories
and apply logical rules to them. Given the way evolution works, we have no reason to expect
systems of different evolutionary ages to always function together seamlessly. The explicit
system is not designed to handle the way the implicit system constructs categories, and the
implicit system was not designed with symbolic processing and classical bivalent reasoning in
mind. However, the explicit system ‘imports’ predicates from the implicit system that it may use
in explicit reasoning. This is how we get paradox, and it is the paradoxical quality of sorites
cases that hints at the dual origins of our mental make-up.
Acknowledgements
The authors would like to thank David Rosenthal and the CUNY Cognitive Science group, the Long Island Philosophical Society and the Workshop From Cognitive Science and Psychology to an Empirically Informed Philosophy of Logic 2010 for opportunities to present earlier incarnations of this paper, and David Pereplyotchik and an anonymous referee for helpful comments.
Note
1. Email: [email protected]
References
Alxatib, S., & Pelletier, F. J. (2011). The psychology of vagueness: Borderline cases and contradictions.
Mind and Language, 26(3), 287–326.
Bhatt, R. S., Wasserman, E. A., Reynolds, W. F. Jr, & Knauss, K. S. (1988). Conceptual behavior in
pigeons: Categorization of both familiar and novel examples from four classes of natural and
artificial stimuli. Journal of Experimental Psychology: Animal Behavior Processes, 14, 219–234.
Bruner, J. S., Goodnow, J. J., & Austin, G. A. (1956). A study of thinking. New York: Wiley.
Dehaene-Lambertz, G., & Houston, D. (1998). Faster orientation latency toward native language in two-
month-old infants. Language and Speech, 41, 21–43.
Dominey, P. F., & Ramus, F. (2000). Neural network processing of natural language: I. Sensitivity to serial,
temporal, and abstract structure of language in the infant. Language and Cognitive Processes, 15(1),
87–127.
Dummett, M. (1975). Wang’s paradox. Synthese, 30(3/4), 301–324.
Field, H. (1973). Theory change and the indeterminacy of reference. Journal of Philosophy, 70(14),
462–481.
Frankish, K. (2010). Dual-process and dual-system theories of reasoning. Philosophy Compass, 5(10),
914–926.
Hanson, H. M. (1959). Effects of discrimination training on stimulus generalization. Journal of
Experimental Psychology, 58, 321–334.
Herrnstein, R. J., & Loveland, D. H. (1964). Complex visual concept in the pigeon. Science, 146, 549–551.
Herrnstein, R. J., Loveland, D. H., & Cable, C. (1976). Natural concepts in pigeons. Journal of
Experimental Psychology: Animal Behavior Processes, 2, 285–302.
Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
Keefe, R. & Smith, P. (Eds.). (1996). Vagueness: A reader. Cambridge: MIT Press.
Litman, L., & Reber, A. S. (2005). Implicit cognition and thought. In Keith J. Holyoak & Robert G.
Morrison (Eds.), The Cambridge handbook of thinking and reasoning (pp. 431–453). New York:
Cambridge University Press.
Mehler, J., Jusczyk, P., Lambertz, G., & Halsted, N. (1988). A precursor of language acquisition in young
infants. Cognition, 29(2), 143–178.
Nazzi, T., Bertoncini, J., & Mehler, J. (1998). Language discrimination by newborns: Towards an
understanding of the role of rhythm. Journal of Experimental Psychology: Human Perception and
Performance, 24, 1–11.
Parikh, R. (1996). Vague predicates and language games. Theoria-Segunda Epoca, 11(27), 97–107.
Poldrack, R. A., & Packard, M. G. (2003). Competition among multiple memory systems: Converging
evidence from animal and human studies. Neuropsychologia, 41, 245–251.
Raffman, D. (1994). Vagueness without paradox. Philosophical Review, 103, 41–74.
Read, S. (1995). Thinking about logic: An introduction to the philosophy of logic. New York: Oxford
University Press.
Reber, A. S. (1989). Implicit learning and tacit knowledge. Journal of Experimental Psychology: General,
118, 219–235.
Reber, A. S. (1993). Implicit learning and tacit knowledge. New York: Oxford University Press.
Reber, P. J., Gitelman, D. R., Parrish, T. B., & Mesulam, M. M. (2003). Dissociating explicit and implicit
category knowledge with fMRI. Journal of Cognitive Neuroscience, 15(4), 574–583.
Rosch, E. (1978). Principles of categorization. In E. Rosch & B. B. Lloyd (Eds.), Cognition and
categorization (pp. 27–48). Hillsdale, NJ: Erlbaum.
Russell, B. (1923). Vagueness. The Australasian Journal of Psychology and Philosophy, 1, 84–92.
Schacter, D. L. (1987). Implicit memory: History and current status. Journal of Experimental Psychology:
Learning, Memory, and Cognition, 13, 501–518.
Schiffer, S. (1998). Two issues of vagueness. The Monist, 81(2), 193–214.
Serchuk, P., Hargreaves, I., & Zach, R. (2011). Vagueness, logic, and use: Four experimental studies on
vagueness. Mind and Language, 26(5), 540–573.
Soames, S. (1998). Understanding truth. New York: Oxford University Press.
Sorensen, R. (2001). Vagueness and contradiction. New York: Oxford University Press.
Squire, L. R. (1992a). Memory and the hippocampus: A synthesis from findings with rats, monkeys, and
humans. Psychological Review, 99, 195–231.
Squire, L. R. (1992b). Declarative and non-declarative memory: Multiple brain systems supporting
learning and memory. Journal of Cognitive Neuroscience, 4, 232–243.
Squire, L. R., & Kandel, E. R. (1999). Memory: From mind to molecules. New York: Scientific American
Library.
Stadler, M. A., & Frensch, P. A. (1998). Handbook of implicit learning. Thousand Oaks, CA: Sage.
Stanovich, K. E., & West, R. F. (1998). Cognitive ability and variation in selection task performance.
Thinking & Reasoning, 4(3), 193–230.
Unger, P. (1979). Why there are no people. Midwest Studies in Philosophy, 5, 411–467.
Zadeh, L. (1975). Fuzzy logic and approximate reasoning. Synthese, 30, 407–428.
Zentall, T. R., Wasserman, E. A., Lazareva, O. F., Thompson, R. K. R., & Rattermann, M. J.
(2008). Concept learning in animals. Comparative Behavior and Cognition Reviews, 3, 13–45.