Third Belgrade Graduate Conference in Philosophy
Faculty of Philosophy, University of Belgrade

Book of Abstracts


Page 1: Book of Abstracts · Web viewBook of Abstracts Keywords philosophy graduate conference Last modified by Vanja Subotić Company I


Book of Abstracts


Alex Kostova
St. Kliment Ohridski University, Sofia

Truth without Representation: Pluralist Prospects for Overcoming the Analytic - Continental Divide

The idea of going beyond the analytic-continental divide is neither new nor poorly argued. Nevertheless, the divide still powerfully shapes 21st-century philosophical life. The aim of this text is to shed new light on the problem by examining the relationship between the non-representational conceptions of truth that characterize the promising contemporary neorealist movements and the prospects for actually overcoming the analytic-continental divide in 21st-century philosophy. Its task is thus to analyse how the establishment of ontological and epistemological pluralism, brought forth through such an understanding of truth, could have metaphilosophical implications when it comes to bridging the gap between traditions. In order to do this, I will discuss the following questions: (1) In what sense are the neorealist conceptions of truth – in their pro-naturalist (Ferraris) and anti-naturalist (Gabriel) versions – non-representational? (2) How do they work in a metaphilosophical context concerning bridging the gap between traditions – not only analytic and continental, but also Western and Eastern? (3) What is their relevance for a potential overcoming of the analytic-continental divide?

First, I will examine the problems that the new realist conceptions of truth are designed to solve. Arguably, there are two realist trends toward establishing a non-representationalist kind of realism: a pro-naturalist one (Ferraris) and an anti-naturalist one (Gabriel). To put it bluntly, the first seeks to establish an experience-based, anti-foundationalist conception of our access to the world, based on rejecting the possibility of a strict correspondence between the representationally independent structures of reality and thought. The second criticizes monistic accounts of reality and its cognition by rejecting representational mediation between thought and things.

Then, I will examine how these conceptions influence the discussion of the nature of the analytic-continental divide and its overcoming. The modalities of the interplay between revealing and concealing, absence and presence, appear in the phenomena of representation and ground the relations of commensurability and incommensurability between how things are and how they appear to us. On this basis, I will reveal the implications of a new realist account of knowledge that is simultaneously pluralist and fallibilist. I find this ground fruitful for arguing against the analytic-continental divide, since it centres on the problem of the relation between empirical correctness and truth.

Keywords: metaphilosophy, truth, representationalism, pluralism, analytic-continental divide


Alberto Oya
University of Girona

Is it reasonable to believe that miracles occur?

Traditionally, miracles have been defined as supernaturally caused events which are outside the scope of scientific explicability. The motivation for this definition is that it seems to preserve both God’s causal role in the occurrence of miraculous events and their alleged apologetic force: since miracles are scientifically inexplicable, they do not have a natural cause and can only be explained by appealing to a supernatural one, i.e. God. Thus, this characterization of the miraculous seems to offer the theist the possibility of constructing an ‘argument from miracles’, an argument for showing the existence of God by appealing to the occurrence of scientifically inexplicable events. The justification usually offered for claiming that an event is miraculous is grounded on the claims that we have no scientific explanation for the event and that it is adequately explained by a theistic explanation. This argument is constructed on pragmatic grounds. When faced with an event for which we have no scientific explanation, and which is adequately explained by a theistic one, the most reasonable thing to do is to conclude that it is outside the scope of scientific explicability and to stick to the theistic explanation: since we already have a successful explanation for the event, it would be unreasonable to remain without any explanation at all while hoping for a scientific explanation that may or may not become available one day (that would be, so to say, an unjustified act of faith).

The aim of this talk is to show that this argument works neither from an atheistic point of view nor from a theistic one. From an atheistic point of view, the argument fails for two reasons. First, theistic explanations are intentional explanations and, as such, they only work if we assume that the explanandum is the result of the intentional activity of a supernatural rational agent – but this is precisely the claim for which the atheist is seeking justification. Second, even conceding that the hypothesis of God's existence would provide a possible explanation for certain events which do not fit with our current scientific knowledge, this does not mean that the explanatory power of the theistic explanation overrides the ontological cost of positing the existence of a supernatural cause. Indeed, that claim seems false: when faced with an event that seems to falsify our best scientific knowledge, we have many options that are more rational than completely overturning our best scientific theories. We can decide that the event has not been correctly described, that a hidden variable is interfering, that it has been inadequately measured, and so on. This is a common strategy in the philosophy of science. If this is right, then it should be even more obvious that the high ontological cost of positing a supernatural entity such as God makes sticking to the theistic explanation unreasonable. This explains why, unless there is minimal prior plausibility for God’s existence, the most reasonable thing to do when faced with an event for which we have no scientific explanation – even if it is adequately explained by a theistic explanation – is simply to consider it an event for which we do not have a scientific explanation now, and to wait until a scientific explanation is found.
Indirectly, this is to show that miracles have no apologetic force and, consequently, that there is no possibility of constructing an argument from miracles for showing the existence of God.

The mere acceptance of God’s existence, however, does not justify this criterion for identifying miraculous events. Even if we are engaged in a theistic world-view wherein the reality of God is not questioned, we still need to show that


theistic explanations have enough explanatory power to constitute an adequate explanation of the event. The problem is that it is disputable whether the knowledge we have of God's intentions and purposes is enough to make theistic explanations workable. If we focus on the Christian tradition, the miracles reported reflect selectivity in God's interventions, which is inconsistent with His alleged all-good nature: God's interventions seem arbitrary, capricious, and hence unfair. Some (but not all) prayers are answered, some (but not all) sick people are healed, and we can find no reason to explain why God acts on some occasions but not on others: all terminally ill children deserve God's helping Hand, but not all children are healed. Some attempts have been made to explain the selectivity in the miracles reported in a way that makes them compatible with the benevolent nature of God, but these answers cannot be applied to all situations (e.g., why did God, who had intervened on other more trivial occasions, remain silent during the Holocaust?).

Since God is an all-good Being, the unfair nature of miracles shows that the notion of miracle is inconsistent with the very notion of God. The only way to explain God’s absence in a way which is consistent with His all-good nature is by claiming that God acts according to a benevolent plan which we cannot comprehend, i.e. that God had reasons for not stopping the concentration camps, and that these reasons are benevolent. This, however, is to accept our ignorance of God's intentions and purposes and hence our incapacity to recognize which events follow God's intentions and purposes, which in turn implies that, even from a theistic perspective, no theistic explanation can constitute an adequate explanation of any event.

The most reasonable thing for both the theist and the atheist to do then, is to conclude that events for which we have no scientific explanation now are in fact scientifically explicable.

Keywords: Argument from Miracles; God; Miracles; Scientific Explanation; Theistic Explanation.


Andrea Raimondi
Northwestern Italian Philosophy Consortium

Is Intentionality Sufficient for Mentality?

According to a familiar suggestion, intentionality is the mark of the mental. This suggestion is usually spelled out in terms of the intentionalist thesis (IT): intentionality is a necessary and sufficient condition for mentality. IT has been rejected by those who argue that there are non-intentional mental states, such as emotions and moods (Bordini 2017, Dretske 1995, McGinn 1996, Searle 1983) and purely qualitative states, like bodily sensations (Antony 1997, Voltolini 2014). Crane (1998, 2001), Mendelovici (2013) and Tye (1995, 2000, 2006) have sought to rebut these objections.

A less discussed anti-IT strategy consists in rejecting the thesis that intentionality suffices for mentality. Nes (2008) tries to reject this thesis, starting from Crane’s (1998; 2001) necessary and conjunctively sufficient conditions for intentionality:

(a) directedness upon an object;
(b) aspectual shape.

Nes redescribes (a) and (b) in terms of certain features of the intensionality of reports; he then shows that there are non-mental states whose reports have such features, which would make them intentional states. Nes considers and rejects a weak response, according to which one should spell out intentionality in explicitly mental terms, and a strong response, according to which one should ratchet up the necessary and conjunctively sufficient conditions for intentionality. After presenting a variety of strengthenings, I shall show that they either fail to exclude all non-mental states or are so strong that they ground new challenges to the thesis that intentionality is necessary for mentality.

Afterwards, I shall consider a third response, i.e. Crane’s (2008) reply, trying to reconstruct it rigorously. Crane argues that the relevant sense of intentionality is that of intentionality-as-representation. Following Crane, I suggest that the intentionalist might want to reformulate (a) and (b) as follows:

(a’) representativeness: the capacity of an intentional state to represent things;
(b’) perspectuality: the fact that an intentional state represents things from a certain perspective.

I shall show how Crane’s proposal seems to provide a new form of intentionalism, able to respond to Nes’s objections, for it lets us distinguish intentional states from non-mental ones (the former have the feature of representing something so-and-so, whereas the latter do not). This is shown by the fact that it is linguistically queer to attribute representative features to non-mental, e.g. dispositional, states: it sounds odd to say that solubility represents dissolving, or that fragility represents breaking.

Finally, I shall argue that Crane’s representationalist proposal is not adequate, for it is grounded on a mere terminological matter: there seems to be no relevant difference between the first account of intentionality and the new one. Even though the notion of representation makes it more difficult to say that non-mental dispositions have representational content, it is equally difficult to individuate the difference between the old notion of intentionality and the new notion of intentionality-as-representation (and this places the burden of proof on Crane). That is, it is rather hard to answer the following question: what does that notion contribute to the explanation of mental states, if anything? As long as this question remains unanswered, Crane’s representationalist proposal cannot be accepted.


Keywords: mind, intentionality, intensional contexts, representation.

Andreas Pieter de Jong
University of Manchester

“There is no Santa Claus”: On the Meinongian Treatment of Negative Existentials and Negative There Be-Sentences

In The Nonexistent (2013, pp. 144-148), Anthony Everett points out that any fictional realist account has to explain the intuition that negative existentials are true, even though fictional realists claim that there are such things. A Meinongian account of negative existentials has a straightforward answer: we have those intuitions because negative existentials are true. More specifically, negative existentials claim de re of the relevant objects that they do not exist. The Meinongian line of defence is that this account of negative existentials is descriptive of the semantics of ordinary people, as defended by Reimer (2001; 2001a). Everett levels two criticisms against this line of defence, which he says has “a rather ad hoc flavour and would need careful justification and defense” (2013, p. 144).

The first criticism is that the Meinongian cannot account for why explicit anti-Meinongians do not feel pressure to deny negative existentials pertaining to X, whilst they do feel this pressure in the case of first-order truths about X. The second criticism is that even if the Meinongian can accept negative existentials, she cannot explain why we also use Negative Existential There-Be sentences (NETBs, of the form “There is (are) no X(s)”) to express negative existentials. To support the second criticism, Everett maintains that we should interpret the quantification in NETBs as unrestricted in the absence of obvious contextual cues. Everett dismisses the Meinongian response that NETBs without contextual cues are restricted to existent objects. We take him to issue two challenges. The Why Restricted Challenge presses us to answer why NETBs are restricted to existent objects, as opposed to being unrestricted. The Why Existence Challenge presses us to explain why NETBs are restricted to existent objects, as opposed to being restricted in some other way – say, to concrete objects.

Against the first criticism, we will argue that the disparity Everett claims is exaggerated. The existence of antirealist strategies (expressivism and fictionalism) shows that not all antirealists about a subject matter have felt the need to become error theorists about it. However, Everett could argue that those attitudes result from pressure to adopt an alternative semantic construal of first-order truths about X. We therefore showcase respectable philosophers who assert first-order truths about X while being antirealists about Xs, and maintain that the claimed disparity is not as severe as Everett suggests. In addition, the historical treatment of negative existentials shows that anti-Meinongians have felt the pressure to deny negative existentials on their original semantic treatment. Everett may want the Meinongian to explain why the anti-Meinongians have opted to revise the semantic treatment of negative existentials rather than become Meinongians. The only way to do that is to attribute a mistake to the anti-Meinongian or to consider the anti-Meinongian irrational.

Against the second criticism, we will argue that both challenges can be met. To meet the Why Restricted Challenge, we cite NETBs from Moltmann (2013; 2015) that quantify over nonexistent objects, such as “there are books in this


catalogue that do not exist”. These examples suggest that the context free NETB “there are no buildings that do not exist” should be interpreted as restricted.

To meet the Why Existence Challenge, we point out that, given Meinongian characterization principles (e.g. Priest 2016 [2005], p. 83), unrestricted NETBs are trivially false. Hence, we need a restriction that makes NETBs nontrivial. As a speculative answer to the contrastive question posed by this challenge, we cite Meinong’s prejudice in favour of the actual (1960 [1904], p. 79). Pressed for a reason, we maintain that a restriction to real objects, as opposed to objects of make-believe, seems reasonable. We use a pretence-theoretical characterization of nonexistent objects, i.e. as generated by stipulated rules of a game of make-believe (Walton; Evans 1982). Such objects are subject to the will of those who produce them, and their behavior can be dictated. Existent objects, by contrast, do not adapt as easily to the will of sentient beings: we cannot dictate their behavior, for they behave independently of our stipulations. That makes them worthy of study, because we have to adapt to their behavior.

Keywords: Metaontology, Philosophy of Language, Philosophy of Fiction, Metaphysics.

Anh-Quan Nguyen
University of St Andrews

Intuitions about Past and Future Value

In asking how goods and harms should be distributed across time, many moral theories accept temporal neutrality. Temporal neutrality requires us to attach no normative significance per se to the temporal location of goods and harms – in short, we shouldn’t be time-biased (Brink 2010).

There are two common forms of time-bias: near-bias and future-bias. If an agent is near-biased, she discounts the value of an event the more distant it is from the present. If an agent is future-biased, she discounts the value of an event if it is in the past rather than the future. While it is commonly accepted that near-bias is rationally impermissible, future-bias enjoys intuitive support. Wanting bad things to be past and good things to be future seems so natural that it could be treated as a brute fact about our rationality (Dorsey 2016, Parfit 1984).

Recently, however, several authors have tried to debunk the intuitive appeal behind future-bias, arguing that the intuitive support for it is unstable: future-bias does not generalise well to non-hedonic goods and harms, or to scenarios involving people other than ourselves. Therefore, its intuitive appeal should not be treated as an argument in favour of the permissibility of future-bias (Brink 2010, Greene forthcoming, Parfit 1984). On the contrary, the instability of our intuitions may be used as evidence that future-bias is not the result of rational processes and should be marked impermissible due to its arbitrariness (Dougherty 2015, Sullivan forthcoming).

This paper argues against the admirable efforts of these authors. Firstly, it will be argued that future-bias’s intuitive appeal is not lost, because it generalises well to non-hedonic goods and harms – such as disgraces, personal relationships, and virtues or vices – as well as to scenarios involving other people. Secondly, it will be shown that future-bias also explains our attitudes towards death and provides the best and simplest explanation of why the Lucretian symmetry argument against the badness of death fails to convince us. Thirdly, drawing on scenarios provided by


Scheffler and Schiffrin (2013), this paper argues that future-bias also generalises to collective decisions about society and humanity as a whole.

In summary, the philosophers mentioned fail to debunk the intuitive appeal behind future-bias: our preference for bad things to be past and good things to be future is deeply embedded in our ways of thinking and does not suggest any arbitrariness or instability in reasoning. We may continue to treat the intuitive appeal behind future-bias as philosophical evidence – and we should not be neutral towards the past.

Keywords: Ethics, Time-Biases, Death, Value Theory

Anja Cmiljanović
University of Novi Sad

Bohr and Heisenberg: Correspondence and Uncertainty

Planck’s discovery of the quantum of action marks the beginning of quantum physics, which has led science to a fundamentally different understanding of the processes in nature. This raised the question of the possibility of objective scientific knowledge of nature, and the most important contributions to these changes came from the Copenhagen interpretation of quantum physics.

Therefore, at the center of our interest will be Niels Bohr and Werner Heisenberg, who are considered the founders of the Copenhagen interpretation. Bohr is credited with the first serious attempt to transform quantum theory into a coherent system, through the principle of correspondence. He tried to reconcile classical and quantum theory, presenting quantum theory as a “rational generalization of classical mechanics.” However, it is not entirely clear what Bohr means by “rational generalization”. Moreover, there is the question of which of three possible interpretations of the principle of correspondence is the “right one”: the frequency interpretation, the intensity interpretation, or the selection rule interpretation. Heisenberg’s merit, on the other hand, lies in the first mathematically derived scheme of quantum theory and the discovery of the principle of uncertainty. These two principles are basic principles of quantum theory, and our aim is to analyze them together with the epistemological problems that arise with them.

The main questions here are how and in what way the principle of correspondence can enable us to reconcile classical and quantum theory, and how it enables us to speak about quantum phenomena in the language of classical physics. Further, Heisenberg’s principle of indeterminacy will be considered in view of the requirement to abandon the causal way of describing processes in nature in favor of talk about the probability of certain processes and their outcomes. This raises the question of what type of scientific knowledge we can speak of in the future, as well as of the role of the observer, or subject, in cognitive processes. Analyzing these two principles and their mutual relationship, our final task will be to present a continuous change in the human cognitive relation to nature, conditioned by the development of quantum mechanics. Since Bohr and Heisenberg had the most important role in this change, its understanding is inseparable from


understanding their, at first sight, contradictory attitudes towards the sense and possibilities of scientific knowledge.
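Since the abstract turns on Heisenberg's principle of uncertainty, it may help to recall its standard formal statement, which the abstract itself does not give: for canonically conjugate quantities such as position x and momentum p, the standard deviations of measurement outcomes satisfy

```latex
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
```

where \hbar is the reduced Planck constant. It is this irreducible lower bound that motivates abandoning the causal description of processes in favor of statements about the probabilities of their outcomes.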

Keywords: principle of correspondence, principle of uncertainty, scientific knowledge, quantum theory, electrons


Anne-Kathrin Koch
University of Vienna

Etiological challenges and the status of our beliefs

“You only believe this because you have been brought up in a certain way!” is a remark that is more often the subject of private conversation than of philosophical inquiry. My talk will focus on a paper that has attempted to change this, and to give an account and an assessment of the challenge implied by “You only believe this because...”-claims – so-called “etiological challenges.” In their essay “Indoctrination anxiety and the etiology of belief”, Joshua DiPaolo and Robert Simpson argue that etiological challenges can undermine the epistemic status of their target beliefs. More specifically, they want to show that etiological challenges can do this by themselves and in a way that is specific to them, as opposed to indirectly, by serving as so-called “indirect pointers”. They are supposed to be able to do this because there are cases where the etiology of the belief itself is the problem, namely when the belief was induced via a process of indoctrination (instead of being acquired through a simple mistake). Correspondingly, the specific way in which etiological challenges can – according to DiPaolo and Simpson – undermine beliefs is by inducing indoctrination anxiety.

In my talk, I will look at DiPaolo and Simpson's argument for why we should assume that this is the case. I will attempt to give a charitable reconstruction of their argument as an inference to the best explanation. In a second part, I will then take a first step towards assessing the limits of DiPaolo and Simpson's suggestion. My main worry is the following: while I agree with DiPaolo and Simpson that their account of etiological challenges, and of how they relate to indoctrination anxiety, is highly plausible, establishing it as the most plausible account of the phenomenon in question requires further work. I will attempt to show that their argument relies on the hidden premise that theirs is the only explanation. My assessment does not do any damage to their account itself. I will close by suggesting two ways in which their account can be defended against my worries: by arguing against real competitors, or by taking the stance that their argument is to be understood normatively anyway.

Keywords: epistemology, origin of beliefs


Daniel Dancák
Faculty of Arts and Letters, Catholic University of Ružomberok

Awareness of Unawareness: Are Moral Judgments Illusory?

With the development of empirical sciences such as evolutionary biology, psychology and the neurosciences, it has become almost obligatory for metaethical theories to be empirically informed. Some believe that empirical research can settle some philosophical quarrels in this area for good. This idea came to be known as genealogical debunking. The basic point is that metaethical theories, and the normative ethical theories corresponding to them, which are too descriptively inaccurate – i.e. which do not fit well with the empirical genealogy of moral judgment – are effectively debunked. They are not thereby shown to be wrong, to be sure: a theory might be correct in saying that we would be better off arranging our moral lives according to it. However, if empirical data show that moral judgments hardly ever occur the way the theory describes or even prescribes, it is simply too demanding to be treated as merely normative, since compliance with such norms would be little more than wishful thinking.

The forthcoming contribution maps the present discussion in metaethics concerning the relationship between moral intuitions and moral reasoning, and their role in moral judgment, with respect to empirical findings. On the one hand, there is the view that moral judgments are caused and largely determined by intuitive processes. Moral reasoning is mere subsequent rationalization of an already present intuitive response, while we believe things to be the other way round; thus we are deluded about how moral judgments work. On the other hand, it has been shown that moral reasoning can have a causal impact on one's automatic responses. However, this causal influence is rare and often negligible. A more viable solution is to argue that although moral intuitions do trigger moral judgments, moral intuitions are accepted or refused on the basis of reflection. The question is whether we are aware that our moral judgments work this way.

The claim that we are not relies largely on the observation that our introspective access to our own mental processes is rather limited. This observation derives from the fact that people's reports on their mental processes are very often inaccurate or even plainly wrong. This is because, as the claim goes, since people lack conscious access to their mental processes, when an explanation is required they provide it by applying a private a priori theory about how mental processes ought to work. Yet people generally believe they do know how their mental processes work. They believe that the reasons they devise are actually responsible for their intuitive responses. Although this is often the case, there are more than a few exceptions in which people clearly know that they do not know what exactly triggered their intuition, and that they are putting together a hypothetical explanation. I want to suggest that reports on moral judgments quite generally fall into this category, since people think of and explain moral judgments in terms of justification, which is more or less independent of causation; i.e., it does not follow that they consider the justificatory reasons by which they explain their moral judgments to be causally responsible for their intuitions.

Keywords: awareness, cause, genealogical debunking, justification, reflective endorsement.


Dario Mortini
University of Barcelona

The Conceptual Priority of Knowledge

Knowledge is traditionally understood in terms of a tripartite analysis: to know a proposition p is to have a justified true belief that p. However, Gettier’s renowned counterexamples showed the inadequacy of analysing knowledge in terms of individually necessary and jointly sufficient conditions: in fact, the more conditions were added, the more putative counterexamples were found. Moreover, Linda Zagzebski (1994) even demonstrated a two-step procedure for constructing Gettier cases; consequently, the project of providing a satisfactory analysis of knowledge appeared doomed to failure.

In light of such difficulties, Timothy Williamson (2000) vindicated the priority of knowledge over belief, and also argued that knowledge, far from being the analysandum in question, is a primitive theoretical notion. According to Williamson, knowledge should be taken as the rock-bottom unexplained analysans which is prior to standard epistemological notions such as belief, justification, and evidence. Call Williamson’s view Knowledge Primitivism. My aim is twofold: firstly, I’ll contend that Knowledge Primitivism can be understood in several distinct ways, depending on the notion of priority in question; secondly, I’ll argue, pace Williamson, that the priority of knowledge ought to be taken primarily as a conceptual priority. Such a proposal has the merit of avoiding contentious metaphysical commitments.

The plan is as follows. To begin with, I distinguish between two kinds of priority: representational and metaphysical. The former deals with the concept of knowledge: it is a cognitive priority located at the level of thought and language. By contrast, the latter deals with what knowledge fundamentally is, and its place in the hierarchical structure of the world. To clarify the notion of metaphysical priority, I shall reformulate it in terms of grounding, fundamentality, and mereology.

With this distinction in hand, I’ll then show that Knowledge Primitivism is committed to both kinds of priority. I will argue that the metaphysical-priority component of Knowledge Primitivism crucially hinges on Williamson’s controversial thesis that knowledge is a sui generis mental state. My aim is to show that Knowledge Primitivism does not demand the acceptance of such a metaphysical commitment. This is good news, for the thesis that knowledge is a distinctive mental state raises a number of significant problems with respect to the two kinds of priority.

The previous discussion paves the way to a more promising version of Knowledge Primitivism. To elucidate it, I’ll introduce a novel way of understanding representational priority, based on David Chalmers’s notion of scrutability (Chalmers 2012), which provides a suitable framework for this purpose: by appealing to the notion of scrutability, it is in fact possible to formulate a useful relation of conceptual grounding. Equipped with the notion of conceptual grounding, I’ll proceed to construct the standard epistemological notions. My conclusion will thus be that knowledge is a basic building block of thought.

Keywords: epistemology, knowledge, knowledge-analysis, knowledge-first


Elise Johnson
University of Leeds

Something about Nothing: Are we thinking about Nothing in the wrong way?

Conceiving of Nothing is a notoriously difficult thing to do, and many papers by philosophers and physicists alike engage with and discuss Nothing. This is often done in several contexts: whether Something can come from Nothing, whether Nothing is simpler than Something, etc. However, it seems that our concept of Nothing is slippery and takes several different forms for different people, leading to debates as to just what Nothing is and how it interacts with our current reality.

I will explore and compare two main methods of conceiving of Nothing: the Elimination method and the Positing method. I will firstly outline the Elimination method, drawing upon the Subtraction argument and Kuhn (2013) as an example. The Elimination method is often the preferred method of conceiving of Nothing. It begins with our reality and, one by one, systematically removes each thing until we reach Nothing. I will show how this method is problematic when conceiving of Nothing by putting forward four objections: 1) the discrepancy founded upon the metaphysical relativity which the Elimination method encourages, 2) the misunderstandings that arise from the multiple Nothings the methodology generates, 3) the bias that Something is the default state of reality, and 4) the assumption that Something is of the same “sort” as Nothing. The Elimination method fails to overcome these objections, and whilst they do not fully undermine it, they do lead to creeping doubts as to whether this is the most efficient and accurate method of conceiving of Nothing.

Secondly, I will put forward the Positing method as an alternative and show how it overcomes the problems the Elimination method faces. The Positing method draws upon Haufe and Slater (2009) and Heidegger (1953, trans. R. Manheim) in order to show how the method may be used in different contexts. The method allows us to posit Nothing as an entity separate from Something, rather than reach Nothing through elimination. Furthermore, I will consider two main objections to this method: 1) that the method restricts knowledge of Nothing, and 2) the success of the Elimination method within science through the use of idealisations and models, as discussed by McMullin (1985). I offer responses in order to support the Positing method against these objections, drawing upon Cartwright (1983).

Ultimately, I argue that the Positing method is a far superior method of conceiving of absolute Nothing. The Elimination method may be useful in other contexts; in this one, however, it is inappropriate, inefficient, and confusing.

Keywords: Metaphysics, Nothing, Reality, Science, Methodology


Gaétan Bovey
University of Neuchâtel

Can ‘intrinsicality’ save the modal account of essence?
A critical response to David Denby

Kit Fine has raised important counterexamples against the modal account of essence in his influential paper ‘Essence and Modality’ (1994). Among the different answers that have been proposed in the literature to save and improve the modal account of essence, David Denby argues in ‘Essence and Intrinsicality’ (2014) that essential properties should count as intrinsic in order to address each of Fine’s objections. In the first section of this paper, I show that Denby’s solution must be abandoned, for it yields insufficient and unacceptable results. That is, Fine has highlighted a crucial asymmetry in the relation between Socrates and {Socrates}: while it is not essential for Socrates to belong to {Socrates}, it is essential for {Socrates} to contain Socrates.

The problem is that Denby must deny this latter point, because he argues that no property instantiated in virtue of an entity’s being distinct from its instances can count as intrinsic. Moreover, according to Denby’s solution, one has to deny that haecceities (e.g. the property of ‘being Socrates’) are essential to their bearers. These undesirable results are the immediate consequence of the characterization of intrinsic properties adopted by Denby – a characterization that is itself subject to serious objections (Marshall & Parsons 2001, Sider 2001).

In the second section of this paper, I propose a characterization of intrinsic properties based on the notion of metaphysical grounding. I then demonstrate not only that this new definition is not subject to the main objections raised against other accounts of intrinsicality, but also that it successfully avoids each of Fine’s counterexamples and respects the asymmetry of the relation between Socrates and {Socrates}. Furthermore, I show that, on the characterization I propose, one is not committed to claiming that haecceities are not essential to their bearers.

The final picture I offer is an elegant definition of essential properties in terms of de re modality and intrinsicality that carries few commitments and should be preferred over Denby’s account.

Keywords: essence, modality, intrinsicality, grounding.


Guido Tana
University of Edinburgh

Epistemological Dogmatism and the Problem of the Criterion

Dogmatism is an internalist stance on warrant for perceptual knowledge most recently defended by James Pryor (2000, 2004), Michael Huemer (2001, 2007, under the name Phenomenal Conservatism), Brit Brogaard (2013), and Elijah Chudnoff (2011). One of its main features lies in offering a refutation of external world skepticism: Dogmatism argues that experience has a distinctive presentational phenomenology which is able to provide immediate defeasible justification for beliefs about the external world (Pryor 2004:357, Chudnoff 2011:314). The goal of this essay is to argue that Dogmatism's anti-skeptical credentials are fundamentally undermined by its having no resources to answer the problem of the criterion, as exemplified by the Easy Knowledge objection. This problem is a fundamental one for Dogmatism, because its anti-skeptical stance consists in delivering an explanation of how defeasible warrant is possible in the first place, something which is not possible if this justificatory skepticism is left untouched. Dogmatism argues that the immediate justification experience provides does not depend on any background beliefs (Pryor 2005:204). If it seems to a subject that p, then, in the absence of relevant defeaters, the subject has some degree of justification for believing that p (Huemer 2007:30).

Experiences are not in need of further justification the way beliefs are, and we only need to heed our ordinary doxastic practice (2007:54, Pryor 2005:210). This perspective is a fundamentally neo-Moorean proposal. It suffers, however, from the same lack of dialectical bite against the external-world skeptic as G. E. Moore's own answer. Its particularist and experientialist methodology rules out skepticism from the beginning (Fumerton 2008:42), labelling it an “[epistemological] disease” (Pryor 2004:368). The two opponents appear to talk past each other about what is required for justification. However, although it is irrelevant to the traditional Cartesian worry, Dogmatism can be reassessed as a claim concerning the possibility of having any kind of justification in general, which Dogmatism localizes in experience itself (Coliva 2007:241-42). Dogmatism would therefore be oriented against the normativist justificatory doubt which undermines knowledge assertions in general. This, unfortunately, will be shown to be a false hope, one that explains precisely why Dogmatism also fails against this different skeptical threat.

In delivering an empirical answer against skepticism concerning warrant about empirical justification in general, Dogmatism runs afoul of the Problem of the Criterion: the threat that, when we are asked to unambiguously justify our possession of knowledge, the criterion we offer will beg the question. How this affects Dogmatism will be explained through reference to the Easy Knowledge objection (Cohen 2002, 2005). In rejecting the need to know the reliability of perceptual presentational experience beforehand, Dogmatism licenses a way of acquiring knowledge which is intuitively too easy. It commits itself to bootstrapping, using a perception to justify reliance on perception (Fumerton 1995:180, Cohen 2002:318), a plainly circular move and one of the usual skeptical dead-ends of the problem of the criterion.

This outcome furthermore elucidates and justifies another objection to Dogmatism, Cognitive Penetration (Markie 2005, 2006, Siegel 2012). By allowing for Easy Knowledge, Dogmatism is guilty of elevating epistemically inappropriate beliefs to the status of justified, epistemically appropriate beliefs. This unfavourable outcome could not be supported by any rational justificatory theory, exhibiting Dogmatism's inadequacy both in answering skepticism and in providing a warranted epistemic criterion for knowledge.

Keywords: Dogmatism, Skepticism, Easy Knowledge, Criterion, Perceptual Justification.

Joshua Matthan Brown
University of Birmingham

The eutaxiological argument and an infinite universe created out of nothing

It is normally taken for granted that the following two propositions are inconsistent:

(1) The universe has an initiating cause of its existence and was created out of nothing.
(2) There are an infinite number of past and future events in time and, thus, there was never a time when the universe began to exist.

As such, it is often assumed that if (2) is true, this counts as a defeater for the doctrine of creatio ex nihilo. In response, many theistic philosophers maintain that (2) is false and (1) is true. One of the most significant contemporary arguments put forth to establish the truth of (1) and the falsity of (2)—and, thus, defend creatio ex nihilo—is the kalām cosmological argument. Given recent developments in cosmology, however, this could be problematic, because a growing number of physicists advocate cosmological models that entail the truth of (2) (cf. Carroll 2012, pp. 187-89; Fakir 2000; Veneziano 2000). Should any of these models turn out to accurately describe the universe, this would render the kalām cosmological argument unsound and would seemingly count as a defeater for creatio ex nihilo. Considering this possibility, it behoves defenders of creatio ex nihilo to develop comparable arguments for the truth of (1) that are fully compatible with the truth of (2).

In this paper I endeavour to meet this challenge by advancing a novel argument for the existence of God; namely, the eutaxiological argument. Like the kalām cosmological argument, the eutaxiological argument seeks to establish the truth of (1); and, thus, provide philosophical justification for creatio ex nihilo. Unlike the kalām argument, however, the eutaxiological argument is fully compatible with the truth of (2). This, I shall argue, gives it a decided advantage over the kalām cosmological argument. In addition to proffering the eutaxiological argument as an attractive alternative to the kalām, I also dispel the common assumption that propositions (1) and (2) are inconsistent. I show that, even if the eutaxiological argument is unsound, it is logically possible to believe (1) and (2) are true without falling into contradiction.

I begin by briefly outlining the kalām cosmological argument and listing several reasons why theists might be motivated to reject it as unsound and believe (2) is true (or, at least, possibly true). After this, I introduce the eutaxiological argument for the existence of God, and offer a preliminary defence of its premises. Finally, I conclude by showing how the eutaxiological argument is amenable to the truth of (2). In so doing, I also show it is possible to believe both (1) and (2) are true without falling into contradiction.


Keywords: Eutaxiological argument, Kalam argument, Cosmology, Creatio ex nihilo, Infinite universe.


Julian Hauser
University of Edinburgh

Self-models and predictive processing – Towards a study of the boundary of the self

Accounts of the self that refer to self-models have become popular since Thomas Metzinger’s (2003) book on the topic. The theory has been further developed to give a better account of the phenomenal experience of mineness (e.g. Hohwy 2007; Schlicht 2017) as well as of the narrative aspects of the self (e.g. Hohwy and Michael forthcoming). The account is especially popular among those who work in predictive processing, the view that the brain’s fundamental task is to predict sensory input (Friston 2010; Hohwy 2013; Clark 2013). The affinity between these two accounts is due to the fact that predictive processing claims that the brain’s predictive capacity is enabled by a generative model – a model of the causal structure of the world that is used to generate predictions.

In this paper I will focus on one of the many questions still open in this new literature: What is the boundary of the self? Am I a mental entity? Brain-bound? Embodied or even extended? Andy Clark (2001) argues that body-external elements can be part of us, Hohwy thinks that the self is subject to a sharp boundary (either around the brain or the body), and Metzinger argues that there is no such thing as a self and therefore no boundary either.

A first difficulty I will tackle is to describe precisely what the question about the self’s extension is about. Roughly speaking, it could be about (1) the extension of the experienced self, (2) the extension of that which is represented by the self-model, and (3) the extension of the substrate that realises the self-model. I argue that the first question is not the one we are after; there are important aspects of the self which are unconscious and which this question does not capture. The third question is also not what we are after, at least unless we also stipulate that the self-model is the only element of the self (a claim I will argue is hard to uphold). This leaves question (2).

Answering question (2) requires an understanding of at least: (a) what is being represented by the self-model and (b) the role the self-model fulfils in the organism. Obviously (a) is important because, on the view considered, what is not represented in the self-model cannot help us determine the boundary of the self. Less self-evidently (a) also matters because the self-model might not be a veridical representation, and in that case could not be a guide to finding out what is part of the self. Point (b) matters because the self-model might not aim to accurately represent some target phenomenon but rather help bring about the state it represents.

The bulk of my paper discusses the interrelations between these two points and how they affect the search for the boundary of the self. I will tentatively argue that extended selfhood is possible and that the self-model’s representation neither aims just at accuracy nor is it purely fictional.

Keywords: philosophy of cognitive science, self, self-model, predictive processing, embedded and extended cognition


Madeleine Hyde
University of Stockholm

There are no different kinds of imagining

Imagining is an everyday propositional attitude – i.e. a mental state with a propositional content – that we can employ for various purposes. I can imagine my dream holiday, imagine what a centaur might look like, imagine a new kind of chair, or use my imagination to figure out whether it is possible to fit a sofa through my front door. What we have here looks like at least four different uses of our imagination, respectively: (i) picturing how things would be if the world better fitted my desires, (ii) entering into a fiction, (iii) creatively coming up with something novel, and (iv) working out what is possible.

Some authors have treated cases like the above as different kinds of imagining (see e.g. Currie & Ravenscroft 2002; Dorsch 2016; Williamson 2016; and Kind 2016 for an overview). I argue that such a view is mistaken: all of these candidate types of imaginative episode are structurally similar enough that we need not say that there is this or that kind of imagining, but merely observe that these are different uses of a single propositional attitude. What tells them apart, as will become apparent, concerns only the individual aim of the imaginative exercise.

My aim will be to scrutinize these four cases of imagining piecemeal, in order to see why they have been variously treated as unique kinds. In doing so, it will also come to light what these cases have in common. In having propositional content, our imaginative episode can build up a picture of a possible world via the object(s) and properties that the content gives us. In each case, our imaginative episode draws its mental imagery from past experience and ‘recycles’ it to come up with something new (i.e. currently non-actual).

This can otherwise be described as a shift in perspective from the actual to the non-actual. The shift can focus on just an object, as when we creatively try to imagine something new, or it can go from a whole actual scenario or world to a particular possible one: this happens when we imaginatively engage with a work of fiction, and also when we try to imagine what is possible given our actual circumstances, e.g. whether I can climb the tree in front of me. The difference between the last two examples concerns how close the imagined possible world is to the relevant actual world (i.e. the one that the imagining subject is situated within). Again, this difference depends upon the aim of the imaginative episode. Thus we find, by trying to pick these uses of the imagination apart, that all that really separates them, at bottom, is the aim behind them.

Keywords: imagination, kinds, propositional content


Michael Klenk
University of Utrecht

A Case for Objective Conditions for Undercutting Defeat

Suppose you are at the bookshop and see Tom Grabit, whom you know to be a notorious thief, come flying out the door, rushing away with a stack of books barely hidden under his coat. As you walk off in astonishment, believing that Tom stole the books, you meet a trustworthy friend who tells you that Tom’s identical-looking twin brother is in town. What does your friend’s testimony do to your belief that Tom stole the books? Arguably, your friend’s testimony reduces or even nullifies your epistemic justification for believing that Tom stole the books.

The phenomenon that accounts for the loss of epistemic justification for your belief about Tom is known as undercutting defeat (Chisholm, 1964; Pollock, 1970, 1995). At the most general level, defeat describes a belief’s ceasing to be epistemically appropriate (Bergmann, 2006, p. 162).

However, there is a lacuna in the current understanding of undercutting defeat (and, by extension, in our current understanding of fallibilism). The key question is: when does new information undercut a belief? We could adopt an objectivist or a subjectivist account of defeat. Let’s look at the subjectivist version. According to a subjectivist account of defeat, defeat is perspective-dependent, such that whenever you believe that new information E undercuts your belief B, your belief B is undercut. The subjectivist account of defeat is motivated by what I call the ‘primacy of the subjective’ – it takes seriously the subject’s considerations about evidence, without regard to whether or not these considerations are correct. I will show that a pure subjectivist account is untenable in light of a recent discussion by Casullo (2016): on a pure subjectivist view, defeat, and hence justification, comes too cheap.

In response, several philosophers have tried to take seriously the primacy of the subjective while adding to their account of undercutting defeat an idealised perspective-dependent condition. According to the idealised condition, if an idealised version of you would believe that new information E undercuts your belief B, your belief B is undercut. I call such accounts ‘subjectivist-idealist’ accounts of undercutting defeat. Will a subjectivist-idealist version work? No. My answer to the key question will be that a ‘subjectivist-idealist’ account of undercutting defeat fails for two reasons. First, though one might be content with describing two different concepts of defeat (applicable in different contexts, perhaps), available subjectivist-idealist accounts fail to distinguish the two concepts. They attempt to elucidate the concept of undercutting defeat, but in doing so they depend on incompatible intuitions, as I will show. Second, in philosophical debate we must rely on an objectivist notion of defeat to establish when new information defeats a position, not on when a subject takes a defeater to be a defeater. The failure of the subjectivist-idealist account means that we must be objectivists about undercutting defeat. Objectivists about undercutting defeat argue that not every believed defeater is an actual defeater (Alston, 2002; Casullo, 2016; Melis, 2016; Pollock, 1995).

Keywords: Epistemology, Epistemic Justification, Undercutting Defeat, Fallibilism.


Mustafa Efe Ates
Muğla Sıtkı Koçman University

How Harmless are Idealizations in Scientific Models?

Idealizations can briefly be described as false statements contained in a model. Every model involves at least one idealization. Hence, models involve false statements about the world. But if models involve such false statements, how can they provide reliable explanations and predictions?

Mehmet Elgin and Elliott Sober (2002) claim that some models are explanatory in spite of containing idealizations. According to them, some “models [in evolutionary biology] are explanatory despite the fact that they contain idealizations” (p. 447). The basic idea behind this view is that idealizations of this sort make little difference in the predicted outcome. “The idealizations in a causal model are harmless if correcting them wouldn’t make much difference in the predicted value of the effect variable. Harmless idealizations can be explanatory” (p. 448). Similarly, Michael Strevens (2009) suggests that idealized models are explanatory. According to Strevens, models explain by citing causal factors that make a difference to the explanandum. If giving an explanation is to give a causal story about the occurrence of a phenomenon, then we need to reveal the difference-making parts. The idealized parts of the model do not make a difference to the occurrence of a phenomenon and thus do not play a role in explanation. So the role of idealization is at best “to point to parts of the actual world that do not make a difference to the explanatory target” (p. 318).

In this paper, I will argue that the capability of giving causally relevant explanations and of providing nearly the same predictions of phenomena are both necessary but not sufficient conditions for claiming that the idealizations involved in a model are harmless. I will give two reasons for this, which are connected to each other. First, if we accept these two views, the distinction between abstraction and idealization becomes insignificant. This is because abstractions, which are commonly assumed to be redundant factors of models, also do not make much difference to a model’s explanatory or predictive capacity. However, it seems intuitive to take these two as different modelling strategies. Thus, anyone who rests on this view needs to put forward a further argument to challenge this intuition by establishing that the difference is entirely negligible. Second, if my first point is correct, a further condition should be included to identify harmless idealizations. I will suggest that this condition could be satisfied by accounting for why correcting the idealized parts of the model wouldn’t make much difference to its explanations and predictions. To put it differently, an idealization can be treated as harmless not only by showing that correcting it would be useless, but also by explaining why correcting it would be useless. In order to support my argument, I will present a case study in which this further condition is met. The example I will be using is a mathematical model of insolation proposed by Milutin Milanković. My main focus will be to show how Milanković succeeded in explaining and predicting glacial/inter-glacial periods via a harmless idealization, namely the albedo effect.

Keywords: Models, Idealization, Explanation, Prediction, Philosophy of Science


Ninni Suni
University of Helsinki

Culpability for Implicit Prejudice

This paper argues against the view that when a harmful bias, such as prejudice, is the result of widespread cultural ignorance, agents are non-culpable for the resulting injustice.

The question of culpability for implicit and automatic patterns has been under much discussion ever since studies in empirical psychology showed that even people who explicitly endorse egalitarian ideals may have implicit prejudiced associations. These associations influence one’s behavior in a way that may lead an agent to commit an injustice unintentionally. This presents a problem for standard accounts of moral responsibility, because it undermines two central conditions for responsibility: awareness and control (Saul 2013). Others have argued against this (Holroyd 2012, Washington & Kelly 2016). Still others have tried to find a middle ground, claiming that prejudiced agents are culpable, but not if they were merely unlucky to be born into a prejudiced society (Fricker 2016).

I begin with a definition of implicit bias and prejudice. Then, using a distinction first made by Jules Holroyd, I identify the question of culpability that I think is the central one: that of culpability for action influenced by implicit bias. I then present and discuss previous accounts of culpability for implicit bias in general, and for prejudice especially.

Fricker (2016) assumes a broadly attributionist view of moral responsibility, according to which agents are responsible for more than just those actions they have voluntarily chosen, as long as the actions can somehow be attributed to the agent, or reveal something about who she is as a person. Fricker traces the grounds for culpability to whether or not the mistake originated in the epistemic system of the agent. By contrast, if someone acquires a mistaken belief from a source she has good reasons to trust, she is epistemically and morally innocent of the mistake. Therefore, Fricker argues, when a harmful bias or prejudice is inherited from the community, the agent cannot be held culpable. The mistake was committed somewhere else within the epistemic community, perhaps a long time ago.

The problem with the argument is that prejudice, defined as motivated maladjustment to evidence, contains both epistemic and motivational aspects. Unlike a false belief, the motivation that underlies prejudice is part of who the agent is: a motivation is part of the emotional construct of a person, even if it originated outside her epistemic system. Therefore the roots of the problem can be traced back to the agent’s own epistemic system, and she is culpable.

The problem cannot be averted by claiming that the motivational part might not be necessary to sustain prejudice once it has been formed, and that only the epistemic component might percolate through society. A disposition to make certain kinds of judgments is as much a part of a person's epistemic constitution as a motivation is. An agent with a problematic doxastic disposition can at least be held epistemically accountable.

Keywords: ethics, moral theory, responsibility, implicit bias, prejudice.


Third Belgrade Graduate Conference in Philosophy
Faculty of Philosophy, University of Belgrade

Pavel Skigin
University of Pavia

Fact-Independence in a Lockean Theory of Global Justice

The paper explores the issue of global distributive justice within a Lockean framework in the case of substantial human expansion into space, prompted on the one hand by recent technological advancements and on the other by the meta-ethical question of a fact-independent normative theory, such as the “basic value-judgments” of Amartya Sen (1970, p. 59) and the “fact-free principles” of G. A. Cohen (2003). Specifically, it argues that since the libertarian argument is founded on the Lockean and Egalitarian Provisos, that argument would not be sound if technology allowed access to extraterrestrial resources, thus affecting the crucial issues of original ownership and justice in acquisition. When the arguments of moral philosophy take empirical facts about the world as axiomatic, fundamental changes in those facts may call for a reconsideration of current theories. Can space exploration be such a game-changer, by challenging the premise that the natural resources at humanity’s disposal are finite? The pivotal role that a limited resource supply plays in distributive justice is encapsulated by a quote from Mark Twain that has been used repeatedly in the academic debate about global distributive justice: “Buy land, son; they’re not making it anymore” (Steiner, 1998, p. 68; Casal, 2011, p. 307).

The first international space treaties (the 1967 Outer Space Treaty and the 1979 Moon Treaty) were founded upon the principle of the Common Heritage of Mankind (Pop, 2009, p. 73) and thus bear a striking similarity to the tenets of left-libertarianism. I employ this theory, as developed by Steiner, Vallentyne, and Otsuka, as a thought experiment to address the future exploitation of extraterrestrial natural resources. The prospect of human expansion into space certainly raises the question of deeper, non-contingent foundations of ethics. By exploring how the common-sense axiom of earth-bound human existence shapes distributive justice, we can throw light on the possibility of a fact-independent normative theory.

I demonstrate that in the case of substantial human expansion into space it is impossible to justify global redistribution within a libertarian framework. Specifically, I argue that since the libertarian argument is founded on the Lockean Proviso, it would not be sound if technology were to allow access to extraterrestrial resources. Such access would affect the crucial issues of original ownership and justice in acquisition, since our intuitions about distributive justice are deeply rooted in the empirical fact of the finitude of land and other natural resources and phenomena on Earth. To advance my argument, I (1) introduce different libertarian concepts of justice in acquisition based on various interpretations of the Lockean Proviso and the left-libertarian Egalitarian Proviso, and (2) explore the implications of space exploration for justifications of redistribution based on either Proviso, demonstrating that they do not hold under the assumed new conditions.

Keywords: meta-ethics, fact-free principles, distributive justice, left-libertarianism, space exploration.


Radostina Minina
Institute for the Study of Societies and Knowledge, Bulgarian Academy of Sciences

What is subjective justification?

The main focus of my research is the nature of subjective justification from an epistemological point of view. “What is subjective justification?” is a sub-question of the general epistemological problem of what justification is. In the current philosophical literature, this question has been discussed in the context of the debate between epistemological internalism and externalism.

Two prominent objections in the epistemological literature point to the existence of subjective justification as a possibly separate sort of justification: the New Evil Demon Problem (NEDP) and the Opacity Objection (OO). The NEDP reveals cases of subjective justification without objective justification, while the OO shows that there are cases of objective justification without subjective justification.

Subjective justification can be treated in terms of the possession of reasons, the possession of good reasons, or, as John Greco insightfully suggests, in terms of cognitive integration.

Based on the above cases, we may take it that there is such a thing as subjective justification. But despite the existing discussions, the nature of subjective justification remains unclear. The question of its nature per se has, to the best of my knowledge, not been thoroughly raised in epistemology. In a 1983 paper, Richard Feldman draws the distinction between subjective and objective justification by analogy with the same distinction in ethics; he imports some ideas about the notion from ethics, but concludes that these notions cannot function in the same way in epistemology.

The only full-fledged theory of the nature of subjective justification is Greco’s theory of cognitive integration. In my talk, I want to challenge the theory using a case of multiple subjective justifications within one person. I will argue that the cognitive integration theory of subjective justification proposed by John Greco does not do reliabilism the epistemological favor it is meant to do: cognitive integration can diminish one’s epistemic status as well as enhance it.

The theory of cognitive integration says that S is subjectively justified in believing p iff her belief arises from a cognitive process integrated in her cognitive character, not from a strange and fleeting process. There are well-known experiments in cognitive science (on associative and rule-based systems) showing that a person can have contradictory belief-forming processes yielding different results (presumably well integrated in her cognitive system, not strange and fleeting), so that she is subjectively justified in believing both.

I want to explore cases of multiple subjective justification in a sense stronger than the above, in which one possesses different chains of processes when playing different social roles, i.e. from the point of view of oneself as a different person, and to show that integration is not always a virtue or a knowledge-conducive condition.

Keywords: subjective justification, cognitive integration, reliabilism, virtue epistemology.


Sanja Srećković
University of Belgrade

Experiments in Cognitive Science: Reasoning in Non-linguistic Creatures

The presentation will focus on conclusions derived from experimental research on both human and animal cognition. Its main thesis will concern the kinds of experiments that can be used to gain valuable insights, as well as the kinds of contributions various experiments make to cognitive science.

With the explosion of empirical research in neuroscience, comparative psychology, and related disciplines, and an expanding tendency among philosophers to engage with these kinds of results, it has become clear that while imagination and thought experiments can contribute immensely to the cognitive science enterprise, it is often difficult to determine from the armchair whether a presumed imagined mind-related property is genuinely possible. In addition to the experimental research on human cognition, enlarging the focus of philosophy of mind to include not just humans but animals as well has a number of advantages, one of which is access to a diversity of examples beyond the human case.

Animal minds can shed light on human minds by serving as a foil for comparison. The field began to flourish when special interest was taken in animal mental representation, rationality, consciousness, perception, learning, and communication. The results of these studies have played a significant part in answering some of the central questions of cognitive science, for example the issue of the format of mental representations.

Partly because the study of language has dominated analytic philosophy throughout most of the twentieth century, mental representations were viewed from the linguistic perspective. However, when the focus of research shifts to animal cognition, the linguistic perspective often seems misleading. Many philosophers have wanted to attribute nonlinguistic representations to animals. Several studies of tool use in nonhuman primates gave results that were explainable only by imagistic representations, but evidence for the presence of nonlinguistic representations still doesn’t amount to evidence for the absence of linguistic representations.

I will focus mostly on the analysis of two studies. One study, Call's (2004) two-cups task, tested for the capability of reasoning by exclusion in great apes. Since the apes proved successful at the task, the results were taken by many commentators to indicate a capacity for deductive reasoning. However, subsequent theoretical research has produced several other interpretations of the apes’ behavior, all of which can explain the experimental results while placing fewer requirements on the apes' cognition. In order to distinguish between these interpretations, a more recent study was designed: Mody and Carey's (2016) four-cups task, an extended, more demanding version of Call’s task.

The aim of my presentation is to expose hidden assumptions behind the researchers’ analysis of the experimental data, and to show that we cannot decide among the competing explanations on the basis of the strategy they proposed. I also present possible improvements to some of the interpretations, which should enable better predictive accuracy. These improvements would place emphasis on different aspects of reasoning, and on different behavioral signatures to be tested for in future experimental research.

Keywords: Cognitive Science, Experiments, Cognition, Animal Minds.


Silvia Milano
London School of Economics and Political Science

Bayesian Beauty

Adam Elga (2000) introduced to the philosophical literature what has come to be known as the Sleeping Beauty problem. Since the publication of Elga's paper, the problem has attracted a considerable amount of attention. As Michael Titelbaum notes in a survey article on the topic, the Sleeping Beauty problem “attracts so much attention because it connects to a wide variety of unresolved issues in formal epistemology, decision theory, and the philosophy of science” (Titelbaum, 2013, p. 1003), and yet despite the simplicity of its formulation the solution has proved elusive.

Philosophical opinion on the Sleeping Beauty problem is essentially divided between two camps. One camp, the 'thirders', argues that the probability you should assign to Heads upon waking up is 1/3. It is generally accepted that this answer violates the Bayesian principles of conditionalisation and reflection, but also that it is not vulnerable to diachronic Dutch Books (see e.g. Bradley and Leitgeb, 2006). The other camp, the 'halfers', argues that the probability you should assign to Heads upon waking up is 1/2. While some versions of the halfer solution (e.g. Lewis, 2001) satisfy conditionalisation, they all seem vulnerable to diachronic Dutch Books. On this basis, some (see e.g. Bradley and Leitgeb, 2006; Briggs, 2009) have argued that halfers should not bet at even odds, and that fair betting odds can come apart from credences in cases involving self-locating beliefs, like the Sleeping Beauty problem. To compound a very perplexing state of affairs, a well-known result by Lewis (2010) (generalised by Skyrms (2009)) proves that an agent can avoid being vulnerable to diachronic Dutch Books if and only if they plan to update their beliefs via conditionalisation.

I argue that the controversy around the Sleeping Beauty problem is resolved once the problem is appropriately represented within a Bayesian framework. In the first part of the paper, I show how to construct a probability space that adequately represents the problem, satisfying three extremely plausible constraints that are fixed by the problem description. Once the problem is correctly represented, the question that we are trying to answer (i.e. What is the probability that the outcome of the coin toss is Heads, given that you wake up today?) turns out to depend on how we fix two free parameters, corresponding to what we take to be the probability that it is the first day of the experiment, given that the result of the coin toss is Heads (resp. given that the result is Tails). Two natural options for assigning values to these parameters turn out to generate the thirder and halfer answers. What emerges from the discussion of the formal representation of the problem is that the disagreement surrounding the solution should be traced back to disagreement about the prior probability assigned to these free parameters, something that should sound neither unfamiliar nor threatening to Bayesians.
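The dependence of the answer on the free parameters can be illustrated with a small sketch. This is a simplified toy calculation, not the probability space constructed in the paper; the function name, the parameterisation, and the assumption that Beauty is awake on day 1 under Heads and on both days under Tails are my own illustrative choices.

```python
# Toy sketch (not the paper's construction): compute P(Heads | awake today)
# from one free parameter, p1_heads = P(today is the first day | Heads).
# Assumptions: a fair coin; under Heads, Beauty is awake only on day 1;
# under Tails, she is awake on both days.

def p_heads_given_awake(p1_heads):
    prior = 0.5                   # fair coin: P(Heads) = P(Tails) = 1/2
    awake_given_heads = p1_heads  # awake under Heads only if it is day 1
    awake_given_tails = 1.0       # awake under Tails on either day
    return (prior * awake_given_heads /
            (prior * awake_given_heads + prior * awake_given_tails))

# Halfer-style assignment: given Heads there is only one waking day.
print(p_heads_given_awake(1.0))  # 0.5

# Thirder-style assignment: indifference between the two possible days.
print(p_heads_given_awake(0.5))  # 0.3333333333333333
```

On this toy model the second free parameter, P(first day | Tails), drops out of this particular question, but it would matter for questions such as the probability that it is the first day given that you woke up.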

Another advantage of my approach is that it allows us to meaningfully state and investigate other questions that might be put to Sleeping Beauty (including, e.g., What is the probability that the outcome of the coin toss is Heads, if this is the last day you wake up?, or What is the probability that it is the first day, given that you woke up?). The answers to all these questions can be systematically derived within my framework, once we plug in values for the parameters that are left free by the description of the problem.


Keywords: Bayesian reasoning, self-location, Sleeping Beauty problem.


Steven Canet
University of Wisconsin, Milwaukee

Maximally Contiguous Simples

Ned Markosian, in his paper “Simples”, makes two major contributions to the field of mereology: the Simple Question, which asks “What are the necessary and jointly sufficient conditions for an object’s being a simple?”, and his much-maligned Maximally Continuous View of Simples (MaxCon), which offers an answer to that question that many find uncompelling. In this paper, I offer a new answer to the Simple Question that maintains the spirit of MaxCon while escaping some of the most potent objections.

In the first section, I expose the inadequacy of MaxCon as an answer to the Simple Question, focusing especially on the Problem of Perfect Contact as developed by Kris McDaniel. It is at this point that I isolate the assumptions made by Markosian that lead to the majority of the criticisms leveled against his view. Accounts of mereological simplicity very often take for granted that (1) space is continuous and dense and (2) theories that rely on a classical understanding of physics can be easily translated into the non-classical physics of quantum mechanics with little to no substantive philosophical change. Markosian makes both assumptions, and I argue that both lead to trouble for any theory of simplicity.

The second section is dedicated to expositing my own view: the Maximally Contiguous View of Simples. I present a version of Markosian’s MaxCon that operates on a discrete tile-space like that developed by Joshua Spencer. A tile-space theory posits that space is fundamentally divided into finitely many extended portions, such that it makes no sense to talk about smaller regions. Furthermore, the theory relies heavily on the quantum nature of reality and the indeterminacy it implies. I leverage the fact that particles exist in a superposition of many states and could collapse into any one of them in order to formulate a condition on simplicity that circumvents the Problem of Perfect Contact. I argue that this account of simplicity both matches our intuitions about what makes an object simple and avoids the pitfalls that have plagued other accounts. Finally, I quell some fears that may arise regarding my overt reliance on the actual nature of the material world rather than on modal intuitions.

Keywords: Metaphysics, Mereology, Simples, Philosophy of Physics.


Stephen Evensen
Biola University

Pascal Wagering: A Reply to Morris

In “Pascalian Wagering”, Thomas Morris argues that defenders of Pascal’s wager mistakenly interpret the wager as being epistemically unconcerned. An epistemically unconcerned wager (EUW) holds that the specific probabilities assigned to theism and atheism are irrelevant. Accordingly, defenders of the wager assign atheism a probability just under 1 and theism a probability just above 0. These probabilities are then used in the standard betting equation (E), which holds that (Probability x Payoff) - Cost = Expectation. Using EUW in (E), the expectation for atheism is the product of a very high probability and a very high finite value, with no costs subtracted; the expectation for theism is the product of a very low probability and an infinite value, minus finite costs.
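The arithmetic behind (E) can be made concrete with a short sketch. The numerical values below are hypothetical illustrations of the EUW setup, not figures from Morris:

```python
# Hedged illustration of the betting equation (E):
#   (probability * payoff) - cost = expectation
# The numbers are hypothetical; the point is that any nonzero probability
# multiplied by an infinite payoff yields an infinite expectation.

def expectation(probability, payoff, cost):
    return probability * payoff - cost

# Atheism: probability just under 1, a large but finite payoff, no cost.
atheism = expectation(0.999, 10**6, 0)

# Theism: probability just above 0, an infinite payoff, a finite cost.
theism = expectation(0.001, float('inf'), 100)

print(atheism < theism)  # True: the infinite expectation dominates
```

On this epistemically unconcerned reading the comparison comes out the same however small the nonzero probability assigned to theism, which is precisely the feature Morris's fourth objection below targets.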

Morris identifies four problems with an epistemically unconcerned use of (E). First, (E) is not suited to deal with infinite values. Second, (E) is only applicable when there is a positive direct correlation between the magnitude of cost and the magnitude of probability. Third, (E) assumes that long-term success is compatible with ongoing failures. Fourth, (E) can yield an infinite expected payoff for any scenario that is not logically impossible. Morris intends to defuse these objections by restricting the wager to cases of epistemic parity. Epistemic parity refers to a situation in which a proposition and its denial are equiprobable.

First, I provide a brief exposition of why Morris includes the epistemic parity condition. Second, I argue that the wager loses most of its applicability unless Morris’ epistemic parity condition is interpreted non-rigidly. Morris claims that epistemic parity can be ascribed to a proposition and its denial if a subjective probability of ½ is assigned to each view. Interpreted rigidly, this requires that a subject S conclude that a proposition P and its denial are precisely 50-50. A non-rigid interpretation of epistemic parity would drop the requirement that S assess the evidence for P and its denial as symmetrically arranged. Rather, a non-rigid interpretation would emphasize that most people tend to look for quick doxastic closure, whether in the affirmation of, rejection of, or agnostic stance towards a proposition. Instead of going through the mental labor of assigning precise probabilities to competing propositions, people move quickly to agnosticism rather than affirming or rejecting P.

A non-rigid construal of epistemic parity may be susceptible to two objections: the subjectivity objection and the arbitrariness objection. As to the former, it may be claimed that the reasons one has for supporting both P and its denial are too subjective for equations like (E). In response, it might be noted that (E) is being pressed into service for the wager, a tool designed to maximize self-interest. Most people will only be persuaded to pursue a course of action, to bet, if they perform the calculation themselves, and for most people this calculation is likely to be imprecise. The arbitrariness objection to a non-rigid interpretation of epistemic parity is similar. Objections (1)-(4) are a posteriori reflections on why an EUW, with low probabilities used in (E), is not persuasive; but if someone finds the EUW persuasive, there is no need to revise it. In summary, a non-rigid interpretation of epistemic parity has the advantage of increasing the wager’s scope of applicability, while a rigid interpretation of epistemic parity is superior for avoiding objections (1)-(4).


Keywords: Prudential Belief, Proportionality, Epistemic Parity


Tadej Todorović
University of Maribor

The Rules of Multiple Realizability: What Constitutes a Genuine Case of Multiple Realizability?

In this paper, I will deal with the question of multiple realizability as it is known within philosophy of mind. Although perhaps somewhat neglected throughout the 20th century, the concept of multiple realizability and its empirical veracity have lately become a matter of interest to philosophers of mind. One of the pressing questions arising from the recent debates is what constitutes a genuine case of multiple realizability and what criteria we should use when judging whether something is multiply realizable. Accordingly, I deal with the question of what set of criteria one should follow when constructing a legitimate case of multiple realizability, and what kind of multiple realizability presents a serious case against psychophysical reductionism.

First, the idea of multiple realizability will be presented as conceived by its two most prominent advocates, Putnam and Fodor. Then two distinct sets of criteria for multiple realization, from opposite camps, will be juxtaposed. The first set comes from Shapiro and Polger (2016), who argue that a genuine case of multiple realizability has to meet four conditions if it is to jeopardize psychophysical reductionism. Shapiro and Polger favour a modest identity theory and are, at the very least, sceptical of the existence of genuine multiple realizability within philosophy of mind; they will therefore represent the reductionist camp. The second set of criteria comes from philosophers who argue in favour of multiple realizability and non-reductive physicalism, specifically Aizawa and Gillett (2009). Their set similarly consists of four conditions that must be met in order for a mental property to be multiply realized.

Both sets of criteria will be presented and clarified through a potential mind-body multiple realizability candidate, which both sides have appraised according to their respective criteria. Furthermore, an additional example of a clear case of multiple realizability (one from outside philosophy of mind) will run in parallel throughout the analysis, enabling us to discern which of the two sets of criteria is to be preferred when seeking a legitimate case of a multiply realizable mental state. The multiple realizability thesis, if valid, presents one of the biggest obstacles for psychophysical reductionism, but its veracity and importance are not limited to philosophy of mind. As Bickle (2008) writes: “what is at stake here should not be underemphasized: nothing less than one of the most influential arguments from late-20th century Anglo-American philosophy, one that impacts not only the philosophical mind-body problem but also the relationship between sciences addressing higher and lower levels of the universe’s organization.”

Keywords: philosophy of mind, multiple realizability, psychophysical reductionism, functionalism.


Urška Martinc
University of Maribor

The Analysis of the Examples of Natural and Biological Kinds

In this paper, we are going to analyse the problem of natural and biological kinds. The question of biological kinds has long been a matter of discussion among both philosophers and biologists. We will mostly build upon the works of Muhammad Ali Khalidi. In doing so, we wish to answer the following questions: (I) What are natural kinds? (II) What are biological kinds? (III) Are biological kinds natural kinds?

Firstly, we will try to define what ‘natural kind’ means, with the help of various authors’ works. We will look at examples of natural kinds in different sciences, such as chemistry. We will then proceed to a definition of ‘biological kind’ and inspect Mayr’s definition. The concept of ‘biological kind’ will be compared to the notions of ‘kind’ in other scientific disciplines, particularly in chemistry, and, on the basis of this comparative analysis, we will endeavour to clarify whether biological kinds are real entities.

We will try to form an analogy between examples in chemistry and examples in biology. This way, we wish to show that if natural kinds exist in chemistry, then they must exist in biology as well. The purpose of this paper is to answer the posed questions and to show, using examples, that biological kinds are natural kinds.

Chemical elements are accepted as undisputed natural kinds, yet it is also recognised that natural kinds can be the work of humans (Bird and Tobin 2017). Consider an example from chemistry: as Bird and Tobin explain, one such kind is synthesised ascorbic acid, i.e. vitamin C. This raises the question, still open for discussion, of whether chemical kinds all of whose instances are synthesised, and which are thus artificial chemical kinds, are nevertheless natural kinds (Bird and Tobin 2017). With the help of this example, we will try to form an analogy with examples in biology and thereby to show that biological kinds are natural kinds.

Keywords: natural kinds, biological kinds, Muhammad Ali Khalidi, philosophy of biology, species.


Victor Tamburini
Institut Jean Nicod

Grounding truths without a grounding relation

The notion of metaphysical dependence has recently gained prominence in metaphysics. Metaphysical dependence is a synchronic, constitutive form of dependence between worldly entities. Many contemporary writers believe that the notion can guide new work (Fine 2001; Schaffer 2009; Rosen 2010). According to a classical view of the structure of the world, reality comes in several layers. The first layer is composed of fundamental entities that constitute the "basic furniture of the world". Some less fundamental entities metaphysically depend on these fundamental entities.

Next enters the notion of grounding. Grounding is supposed to serve as our most general notion of metaphysical dependence. It is presented as entering a distinctively metaphysical kind of explanation, reflected best by the non-causal ‘because’ in natural language. The grounded (metaphysically dependent) entity is explained by the ground (more fundamental) entity. According to what is perhaps the most natural interpretation of the notion, grounding is a relation holding between facts (Rosen 2010). We will call this view grounding realism. On this view, less fundamental grounded facts metaphysically depend on more fundamental ground facts.

On the linguistic side, sentences that state what grounds what can be called grounding sentences. One approach, which I will adopt here, is to represent grounding as an operator connecting sentences; we will use the symbol ‘<’ for the operator (Fine 2012). Grounding sentences of the form ‘P < Q’ mean roughly the same as ‘Q [(non-causal) because] P’. This sentential-connective approach leaves open the possibility of semantic and philosophical interpretations of grounding sentences different from grounding realism.

The goal of this paper is to propose an interpretation of grounding sentences that constitutes an alternative to grounding realism. A case that I take to pose a challenge for all grounding theorists is presented in section (I). In view of my diagnosis for this problematic case, an interpretation of grounding sentences - sentences of the form ‘P < Q’ - that dispenses with a relation holding between facts is offered in section (II). In section (III), several types of seemingly true grounding sentences that cannot be given the interpretation of section (II) are considered. In light of this failure, a distinct interpretation for these types of sentences, and crucially one that dispenses once again with a relation between facts, is offered in section (IV). We will arrive at a disunified interpretation of grounding sentences that will be defended in the final section (V). We will notably argue that our disunified interpretation illuminates the question of the ontological status of non-fundamental entities, i.e. the question of whether they are “really real”.

Keywords: metaphysics, grounding, fundamentality, realism, anti-realism.


Viktor Vojnić
Faculty of Humanities and Social Sciences, Rijeka

Ambiguities of Negation in Natural Language

In this talk, I shall tackle three problems a speaker faces when interpreting a negated sentence: the scope of negation, the metalinguistic representation, and the preservation of presuppositions, as exemplified by Robyn Carston in the fourth chapter of Thoughts and Utterances: The Pragmatics of Explicit Communication (2002), entitled “The Pragmatics of Negation”. First I will give an account of some key points of Carston’s text which concern the most common problems of negation. The problem of scope is manifested in our interpretation of the logical connective of negation and where to place it when interpreting a negated sentence. In other words, it is uncertain whether the negation should cover the whole sentence or just the verb phrase. This distinction gives us at least two possible interpretations of a given sentence. However, another distinction can be made concerning the metalinguistic interpretation of negation: it can be interpreted as either descriptive or metarepresentational. This gives us a total of four logical possibilities when the scope distinction is combined with the representational distinction.

According to Carston, if a negated sentence can be interpreted as both wide-scope and narrow-scope, as well as both descriptive and metarepresentational, we are faced with four possible interpretations: “Since we have here two two-way distinctions, there are, in principle at least, four ways in which a given negated sentence may be understood:

(a) narrow-scope descriptive
(b) narrow-scope metarepresentational (mention)
(c) wide-scope descriptive
(d) wide-scope metarepresentational (mention)” (Carston 2002: 268)

The last problem (the preservation of presuppositions) adds to this discussion by questioning whether there is a clear referent in the negated sentence and whether a negated sentence can have a clear meaning without one. This line of argumentation was made famous by Bertrand Russell and the debate over whether the present king of France is bald. I believe a solution to the first two problems can be found by applying the Relevance Theory of Wilson and Sperber (2004), while the solution to the third problem may lie in Carnap’s renowned “Empiricism, Semantics and Ontology” (1956), and it concerns the distinction between internal and external questions, i.e. questions raised inside or outside the linguistic framework in use.

Keywords: negation, scope, presuppositions, relevance, linguistic framework.


Vlasta SikimićUniversity of Belgrade

The role of empirical data in social epistemology of science

After the shift in applied epistemology from the single-agent perspective to the examination of group knowledge acquisition, interest arose in research on multi-agent dynamics. Social epistemology of science, as part of modern epistemology, is concerned with the optimization of scientific inquiry at the group level. Scientific laboratories in many disciplines have become large and complex. In high energy physics (HEP), laboratories often have hundreds of in-house members, so the question of how to optimize such laboratories has become practically important. While the prevailing approach in social epistemology of science has been based on modelling abstract hypothetical scenarios (e.g. Borg et al. 2017, Kitcher 1993, Rosenstock et al. 2016, Zollman 2010), HEP laboratories and their organization have also been analyzed on the basis of actual data, e.g. citation metrics (Martin & Irvine 1984a, 1984b; Perović et al. 2016). The relevant project data include the number of researchers and research teams, project duration, citation impact, etc. The benefits of a data-driven approach are the unambiguous interpretation of results, predictive power, and corrective potential when it comes to real-life decisions in science. Still, analyses based on project data can be successfully applied only under specific conditions. For instance, citation metrics are field-dependent; this becomes obvious when comparing impact factors of prominent journals across disciplines. Also, different scientific fields can justify different choices of team size. Finally, it is questionable whether citation metrics are an informative parameter in a data-driven analysis of a specific field. In HEP, consensus about results is reached relatively quickly and remains stable over long periods of time (decades). The reason for this is the regular inductive behaviour of the field, which takes conservation principles as its core.
Moreover, the Formal Learning Theory approach demonstrates that stable consensus is the result of reliable pursuit (Schulte 2000). Thus, the inductive behaviour of the field guarantees the successful and meaningful applicability of data-driven analyses based on citation metrics.

Using data-mining techniques on data from the high energy physics laboratory Fermilab, Perović et al. (2016) showed that longer experiments are inefficient in comparison to shorter ones. After arguing in favour of a data-driven approach to optimization questions in HEP, I will analyse potential psychological mechanisms that can explain why shorter experiments are more efficient. Specifically, I will analyse the principle of commitment and consistency with previous beliefs, information cascades, and authoritarianism within teams in high energy physics. All these psychological phenomena can be tested using questionnaires, which should also play an important role in an empirically calibrated social epistemology of science.

Keywords: social epistemology of science, data-driven analysis, high energy physics.


Wojciech KozyraStefan Wyszynski University, Warsaw

Donald Davidson’s Theory of Rationality

Davidson conceives of rationality in terms of a desire-belief-action coherency pattern taking place within a single agent. For example, if Mary does not want to go to work because she is tired, does want to go to work because she wants to earn money, and finally, after considering all pros and cons, decides not to go and yet goes, she is to be judged irrational given her beliefs, desires and her action. She fails to satisfy a basic Davidsonian constraint on rational behaviour: the principle of continence, which prescribes, on pain of irrationality, that one follow one’s best judgment. The other important constraint that Davidson puts on the foregoing coherency pattern is that it cannot involve cases of non-rationalizing mental causality, i.e. one cannot let one’s desire be efficacious in forming a corresponding belief. For example, Mary’s desire to have a well-shaped calf (Davidson’s example), when it causes Mary’s belief that she has a well-shaped calf, plays the role of a non-rationalizing mental cause and hence, according to Davidson, constitutes a paradigmatic case of irrationality. Other constraints on rational thought and action that Davidson enumerates include the laws of the propositional calculus, basic canons of inductive reasoning, and the elasticity of beliefs under changing experiential data.

In my presentation I attempt to show the problematic nature of Davidson’s proposal. Most importantly, I argue that Davidson’s theory comes out circular (insofar as it attempts to explicate rationality by reference to an assumption about what constitutes a proper reason) and that the rational constraints he puts on behaviour are at odds with his claim that what is constitutive of rationality is the internal consistency of attitudes and actions. After all, if “… irrationality... is not a failure of someone else to believe or feel or do what we deem reasonable, but rather the failure, within a single person, of coherence or consistency in the patterns of beliefs, attitudes, emotions, intentions and actions” (D. Davidson, Problems of Rationality, Oxford 2004, p. 170), then it does not seem clear why someone’s subjectively established coherence between attitudes and actions cannot override these objective constraints. Davidson himself comes close to this problem when he acknowledges that even if I accuse my neighbour of stealing on the basis of “slender evidence”, I am rational in doing so insofar as I myself deem my evidence sufficient. In addition to these problems, I also discuss whether logical devices like the propositional calculus are able to reveal the rational (as Davidson thinks they are) rather than being themselves constrained by it.

Keywords: Davidson, rationality, irrationality, reason, principle of continence.