113 presentations, based on 48 talks
4 poster presentations, based on 2 posters


Talks

Rudolf Carnap’s Approach to the Problem of Induction

Abstract
In his work on inductive logic (starting with his 1950 Logical Foundations of Probability), Rudolf Carnap challenged the traditional view that inductive methods contradict empiricism due to their reliance on a synthetic a priori uniformity assumption. Carnap proposed a logical alternative to frequentist probability and advocated a probabilistic uniformity assumption. He aimed to categorize all probabilistic statements as analytic and as part of an inductive logic. Despite these efforts, we want to argue that Carnap’s account has gaps: it remains to be shown that a probabilistic statement about nature’s uniformity can be derived from his system. Additionally, his approach seems to require meta-probabilistic reasoning, which leads to an infinite regress, leaving his account of induction incomplete.

Presented at:

  • Research Seminar (invited), DCLPS, University of Düsseldorf, 2023-11-14.

Abductive Knowledge vs. Abductive Preference

Abstract
In Alexander Bird’s “Knowing Science” (2022), knowledge plays a dominant role in assessing the progress of science. However, one might wonder how we can make any significant step forward in science given such a high epistemic standard. Bird’s answer is twofold: First, regarding the inferential basis, he weakens the constraints on our evidence by arguing for the claim that all our knowledge is evidence (E=K). In this way, we can use whatever we know as evidence for or against our hypotheses, and so we are not confined (contra the version of empiricism he criticises) to tracing everything directly back to experience and observation. Second, he argues for using an inference method that aims at being located exactly at the intersection of ampliative and knowledge-preserving inferences. It is his form of abduction, namely inference to the only explanation (IOE) or Holmesian inference, that seems to fill this spot perfectly. In this contribution, we will argue that Bird’s justification of IOE is incomplete. In order to complete his account of knowing science, he either has to buy into some form of evidential uniqueness thesis, or he has to agree to be pushed more towards rational preference than knowledge. Since the former is implausible and the latter counters his programme of (meta)knowing science, we argue that this poses a serious dilemma for his account.

Presented at:

  • Inductive Metaphysics: Insights, Challenges and Prospects (contributed), University of Düsseldorf, 2023-08-10.
  • Rhine-Ruhr Epistemology Meeting 2023 (invited), University of Bonn, 2023-06-01.
  • Alexander Bird’s Knowing Science – Author Meets Critics (invited), University of Cologne, 2022-07-01.

From Reduction to Unification: The Case of Cultural Evolutionary Psychology

(Joint work with Karim Baraghith)

Abstract
Cultural evolutionary psychology (Heyes 2018) accounts for the cultural evolution of cognition. It is based on evolutionary psychology and cultural evolutionary theory and aims at unifying both in a synthetic attempt. In this paper, we will show that, in sharp contrast to the reductionism of classical evolutionary psychology, cultural evolutionary psychology provides a unification. As we will argue, the form of its unification is ‘evidential’, and this form is to be preferred over purely ‘structural’ unifications as performed by competing approaches such as ‘dual inheritance theory’ in the nature-culture domain. The main difference between evidential unification and structural unification is that the latter ‘merely’ creates an abstract overarching framework for the hypotheses and theories under consideration, without establishing a dependence relation between the respective kinds of evidence. Evidential unification, however, establishes a (mutual) dependence relation between different kinds of evidence and thereby brings in further explanatory power.

Presented at:

  • GAP.11: Conference of the German Society for Analytic Philosophy (GAP) (contributed), HU Berlin, 2022-09-14.
  • GWP.2022: Conference of the German Society for Philosophy of Science (GWP) (contributed), TU Berlin, 2022-08-16.

Patchwork Approaches to Concepts and Different Scales

(Joint work with Philipp Haueis)

Abstract
In philosophy of science, patchwork approaches analyse how scientists use polysemous concepts with multiple related meanings (Wilson 2006, 2017, Novick 2018, Bursten 2018, Novick & Doolittle 2021, BLINDED). These approaches model polysemous concepts as patchworks with multiple patches, i.e. scale-dependent, technique-involving, domain-specific and property-targeting uses of a word. E.g.: in the domain of gases, “temperature” involves kinetic gas theory and refers to mean kinetic energy at the scale of molecules whereas in the domain of solids, “temperature” involves restricted ensemble approaches and refers to frozen order at the scale of polymer chains (Wilson 2017). In general, a patchwork concept is legitimate when its patches include techniques that are reliable, when its domains are homogeneous, and when each patch-specific property is significant to reach an epistemic goal (BLINDED). In this talk, we extend this general work on patchwork concepts by addressing hitherto unanswered questions about the notion of scale.

Patchwork approaches include scale to account for the “tyranny of scales”, i.e. the fact that many entities display different properties or behaviors at characteristic spatial, temporal or kinetic scales (Batterman 2013, Wilson 2017, Bursten 2018). However, this literature leaves important questions unanswered: (1) When does a change of scale generate a novel meaning? (2) Besides reliability, homogeneity and significance, what specific constraint governs concepts which have multiple scale-dependent uses? (3) And how can we relate multiple scale-dependent uses rigorously to one another?

We answer (1) by claiming that a change in scale changes the meaning of a term if there are different discernible regularities about the behavior of the entities. Though not every change of scale in scientific inquiry changes the meaning of a concept, scientific concepts which change their meaning in a scale-dependent manner allow researchers to express more regularities about epistemically significant properties (construed as behaviors of entities but also as dispositions, mechanisms, or quantities).

We answer (2) by introducing the matching constraint: the precision of a technique should match the scale at which an entity displays a property of epistemic significance. This constraint further clarifies the role of techniques in investigating scale-dependent properties and links scale changes to the epistemic goals associated with a patchwork concept. Measurement techniques need to be spatially precise enough to distinguish between two entities at the same scale, and temporally/energetically precise enough to capture regularities of the entities’ behavior that researchers aim to describe, classify or explain.

To answer (3), we link the notion of scale in the patchwork literature to scales in the theory of measurement, such as the nominal, the ordinal, and the cardinal scale. This allows us to use measurement-theoretic principles, such as the construction of equivalence classes, to bridge concepts across different measurement-theoretic scales. Using “temperature” as an example, our working hypothesis is that the quantitative (the temperature of x), the comparative (x is warmer than y), and the qualitative (x is warm) level can each be construed as patches. Relating measurement-theoretic scales may thus also be subject to the above-mentioned constraints of reliability, homogeneity, significance, and matching.

Presented at:

  • GWP.2022: Conference of the German Society for Philosophy of Science (GWP) (contributed), TU Berlin, 2022-08-16.

Suppositional Reasoning. Its Logic and Causal Structure

(Joint work with Alexander Gebharter)

Abstract
Suppositions can be distinguished along two dimensions: indicative vs. subjunctive, and full vs. partial. In this paper, we propose a causal account of suppositional reasoning that naturally unifies reasoning based on all four kinds of suppositions. We also show how the rather heterogeneous update rules typically used for suppositional reasoning fall out from a single causal structure.

Presented at:

  • Logic and its Philosophy (invited), University of Düsseldorf, Online, 2022-01-14.

Ontology and Ideology Conceptually Revisited: Carving at the joints and worldly conceptual engineering

Abstract
In this talk it is argued that the Quinean distinction between the ontology and the ideology of a theory faces two main problems: the problem of arbitrariness and the problem of dependence. It is shown that the first problem can be addressed with the help of the approach of carving nature at its joints (cf. Lewis 1986 and Sider 2011). The second problem can be addressed with the help of the approach of worldly conceptual engineering (cf. Cappelen 2019). Since it is possible to revise the original distinction between ontology and ideology in a combined account, one can get rid of both problems at once.

Presented at:

  • Research Seminar of the Department of Philosophy (contributed), University of Cologne, 2021-11-02.
  • Research Seminar of the DCLPS (invited), University of Düsseldorf, 2021-10-26.

Reductionism in the Philosophy of Science and the Problem of Mental Properties

(Joint work with Maria Sekatskaya)

Abstract
Reduction in the philosophy of mind is usually understood in a very strong sense: as a complete reduction of all mental predicates to physical predicates (Fodor 1982; Kim 1993). In the early stages of logical empiricism, this type of reduction was considered to be about explicit definability/translatability of theoretical predicates with the help of empirical predicates. Typically, in philosophy of mind the accounts that do not subscribe to this type of reduction of mental concepts are classified as non-reductive accounts (Clapp 2001; Walter 2006). This gives the impression that all non-reductive accounts have something in common. In particular, non-reductive physicalists often claim that mental phenomena have a special epistemological status and therefore differ significantly from other natural phenomena. This claim is then used to justify the postulation of differences in ontology. If mental predicates cannot be explicitly defined in terms of physical predicates, then mental properties cannot be reduced to physical properties. However, the step from the failure of explicit definability of mental concepts in terms of physical concepts to proclaiming that mental phenomena are ontologically non-identical to anything physical does not appreciate the complexity of different forms of scientific reduction. In philosophy of science, explicit definability is considered the strongest, but not the only possible, form of reduction. A weaker form of reduction is that of employing bilateral reduction sentences for theoretical predicates such as dispositional terms (cf. Carnap 1936/37). But even this approach was quickly found to be untenable, for which reason a weaker constraint of reduction in terms of empirical confirmability of propositions with theoretical predicates was put forward in the classical empiricist programme (cf. Carnap 1950/62).

Although, historically speaking, logical empiricists such as Carnap and Herbert Feigl took the case of psychological theorizing as a paradigm case for discussing scientific reductions, it seems that the discussions in the philosophy of science and the philosophy of mind have diverged quite a bit and lost relevant points of interaction. In this talk, we outline a framework for better interrelating the discussions. We propose a mapping of different accounts in the philosophy of mind based on the three types of scientific reduction explained above. We argue that eliminativism as well as type- and token-identity theories of the mental are versions of reduction in the sense of explicit definability, whereas functionalism can be framed as a form of reduction with the help of bilateral reduction sentences: functional definitions of the mental are coarse-grained, similarly to dispositions in the bilateral reductive accounts in the philosophy of science. The latter fact is not very surprising: historically, early dispositionalists can also be seen as both functionalists and physicalists (Ryle 1949; Smart 1959); the controversy between functionalism and reductive physicalism arises only at a later stage (cf. Block 1978), and is argued against in contemporary approaches (Clapp 2001). Our grouping together of eliminativism, type-identity, and token-identity theories as three different versions of reduction as explicit definability is presumably more surprising, since type- and token-identity theorists are realists, and eliminativists are anti-realists about the mental. We will argue that their respective realism or anti-realism comes not from the different form of reduction employed, but from a different interpretation of the ontological consequences of explicit definability. Finally, we tentatively argue that supervenience accounts of the mental can be framed either as accounts of explicit definability or as accounts of reduction by empirical confirmability.

Presented at:

  • ECAP11: European Congress of Analytic Philosophy (contributed), ESAP, Vienna, 2023-08-25.
  • Research Colloquium (invited), University of Düsseldorf, 2023-05-15.
  • Research Colloquium (invited), University of Duisburg-Essen, 2023-01-31.
  • GAP.11: Conference of the German Society for Analytic Philosophy (GAP) (contributed), HU Berlin, 2022-09-14.
  • (Non-)Reductionism in the Metaphysics of Mind (invited), University of Salzburg, 2022-09-08.
  • GWP.2022: Conference of the German Society for Philosophy of Science (GWP) (contributed), TU Berlin, 2022-08-16.
  • Inductive Methods in Ethics, Metaphysics, and Philosophy of Mind (invited), Saint Petersburg State University, Online, 2021-10-01.

Epistemic Engineering: The interplay of meta-induction and abduction in the justification of laws of nature

(Joint work with Gerhard Schurz)

Abstract
Meta-induction is a prediction method that allows one to overcome the problem of induction by, first, re-engineering the fundamental epistemic goal of the justification of induction from reliability justifications to optimality justifications and, second, by employing the past track record of induction to provide a non-circular a posteriori justification of induction as an optimal choice for making a prediction (cf. Schurz 2019 and Feldbacher-Escamilla forthcoming). This main line of reasoning was recently contested by the claim that such an approach can serve only to justify predicting a single (the next) event, but not to justify induction as a general prediction method (cf., e.g., Sterkenburg 2022).

In this talk, we argue that the objection can be addressed with the help of a principle of cognitive coherence and a weak inductive uniformity assumption. Whereas the former principle seems to be fundamental, we argue that the latter can be justified by abductive reasoning. We indicate how abductive reasoning can in turn be justified in a “meta-abductive” way and outline what effect such an approach has on the justification of laws of nature.

Presented at:

  • New Work on Induction and Abduction (contributed), Düsseldorf, Online, 2021-09-30.

AI for a Social World — A Social World for AI

Abstract
AI is not only supposed to help tackle social problems; it is also frequently used to actually solve such problems. AI-assisted systems play an increasingly important role in the legal domain, the health sector, environmental research, public policy-making, and the like. Research in this field is extensive and diverse. In this talk, we want to argue, however, that it is also worthwhile to look in the opposite direction: How can our knowledge of the social world and its structural features help us to approach problems of AI? In particular, we will investigate how a social perspective on problems of justification helps us to address epistemic problems of machine learning theory.

Presented at:

  • Ringvorlesung (invited) of the Paracelsus Medical University Salzburg, 2021-11-02.
  • Research Seminar (invited) of the Carl Friedrich von Weizsäcker Center, University of Tübingen, 2021-04-21.

Unification and Explanation: A causal perspective

(Joint work with Alexander Gebharter)

Abstract
In this talk, we discuss two influential views of unification: mutual information unification (MIU) and common origin unification (COU). We propose a simple probabilistic measure for COU and compare it with Myrvold’s (2003, 2017) probabilistic measure for MIU. We then take a causal perspective and investigate how the two measures perform in different elementary causal structures and shed new light on the relation between unification and explanation. The upshot of this will be that causal structure is crucial for how the measures perform and how they relate to explanatory power. To account for these findings we finally add a causal constraint to our probabilistic measure for COU.

Presented at:

  • BSPS 2021: The Annual Conference of the British Society for the Philosophy of Science (contributed), BSPS, Online, 2021-07-08.
  • PSA 2020: Conference of the Philosophy of Science Association (PSA) (contributed), PSA, Baltimore, 2021-11-12.
  • ECAP10: European Congress of Analytic Philosophy (contributed), ESAP, Utrecht, 2020-08-25.

Carnap on the Mind-Body Problem and Non-Classical Reductionism

Abstract
One traditionally very influential approach to the mind-body problem is physicalism. According to some interpretations of physicalism, all mental objects, terms, or theories are reducible to physical ones. In this sense, physicalist theories are also reductionist. The most famous representative of such a reductive account in the early stages of modern philosophy of science is Rudolf Carnap, who tried to construct a framework for reductionism in his Aufbau. Although Carnap’s reductionist aspirations underwent several changes and became increasingly liberal, they were and still are considered too restrictive to properly account for the psychological realm. In this paper, we discuss the different stages of Carnap’s reductionism and argue that it can be naturally extended to a form of non-classical reductionism with increased applicability in the psychological realm.

Presented at:

  • Volitions, Intentions and Mental Causation (invited), Saint Petersburg State University, 2020-11-30.

Causal Inference in Evidence-Based Policy. A tale of three monsters and how to defeat them

(Joint work with Alexander Gebharter)

Abstract
In modern society everyone agrees that a planned policy should be backed up by evidence before implementation. But what exactly is good evidence for the efficacy of such a policy? The paradigmatic example of good evidence is the randomized controlled trial (RCT). In this lecture we discuss the dangers that come with accepting RCTs as good evidence for a planned policy against the background of recent advances in the causal modeling and automated search literature. We argue that an RCT showing that a policy worked in a specific domain, though it points at the right causal connection between the policy and the intended outcome, typically reveals only a small part of the more general causal structure. But much richer causal information is required for reliably predicting a policy’s efficacy in an intended domain. We then discuss ways in which automated causal search procedures can be used to provide such information, what, in their light, counts as good evidence for the efficacy of a planned policy, and what role is left for human decision makers.

Presented at:

  • Public Fellowship Lecture (invited), ITMO University, Saint Petersburg, 2020-10-02.

The Many Faces of Generalizing the Theory of Evolution

(Joint work with Karim Baraghith)

Abstract
Ever since proposals for generalizing the theory of natural evolution were first put forward, the aims and ambitions of both proponents and critics have differed widely. Some consider such proposals to be mere metaphors, some analogies, some aim at a real generalization and unification, and some have even proposed to work out full reductions. In this contribution it is argued that these different forms of generalizing the theory of evolution can be systematically re-framed as different approaches for transferring justification from the natural to the cultural realm, and that their differences are basically a matter of degree. With the help of such a classification it should become clearer what to expect, but also what not to expect, from the different approaches.

Presented at:

  • DGPhil.2020/21 (contributed), University of Erlangen-Nuremberg, 2021-09-08.
  • Thinking about the Cultural Evolution of Thinking (contributed), University of Düsseldorf: Düsseldorf Center for Logic and Philosophy of Science (DCLPS), 2021-01-22.
  • Science as a Public Good (contributed), Saint Petersburg State University: Russian Society for History and Philosophy of Science (RSHPS), 2020-11-28.

Abductive Conceptual Engineering

Abstract
This talk investigates the virtues of creative abductive concept formation and its application to conceptual engineering. It is shown that the virtues of the abductive framework are in line with conditions of adequacy put forward in conceptual engineering. It is argued that a widened interpretation of the Carnapian conditions, i.e. the similarity, exactness, fruitfulness, and simplicity requirements, still guides revisionary projects. It is then outlined that the latter three of the Carnapian conditions have a direct match in creative abductive concept formation, but that the similarity requirement also has relevant bearing on the issue.

Presented at:

  • CONCEPT Brown Bag Seminar (invited), University of Cologne, 2021-06-30.
  • DCLPS Research Seminar (invited), University of Düsseldorf, 2021-06-29.
  • 4th TiLPS History of Analytic Philosophy Workshop (contributed), Tilburg University, 2020-12-14.
  • Lunchtime Talk: Center for Philosophy of Science, University of Pittsburgh (invited), Pittsburgh, 2020-01-17.

Modeling Creative Abduction Bayes Net Style

(Joint work with Alexander Gebharter)

Abstract
Schurz (2008) proposed a justification of creative abduction on the basis of the Reichenbachian principle of the common cause. In this paper we take up the idea of combining creative abduction with causal principles and model instances of successful creative abduction within a Bayes net framework. We identify necessary conditions for such inferences and investigate their unificatory power. We also sketch several interesting applications of modeling creative abduction Bayesian style. In particular, we discuss use-novel predictions, confirmation, and the problem of underdetermination in the context of abductive inferences.
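
For orientation, the Reichenbachian principle mentioned above requires that a correlation between two events A and B which do not cause each other is screened off by a common cause C. A minimal LaTeX sketch of this textbook condition (background only, not the specific Bayes net model of this paper) is:

  \[ P(A \wedge B) > P(A)\,P(B), \qquad P(A \wedge B \mid C) = P(A \mid C)\,P(B \mid C), \qquad P(A \wedge B \mid \neg C) = P(A \mid \neg C)\,P(B \mid \neg C). \]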

Presented at:

  • EPSA19: Conference of the European Philosophy of Science Association (EPSA) (contributed), EPSA, Geneva, 2019-09-13.
  • CLMPST16: 16th Congress of Logic, Methodology and Philosophy of Science (contributed), DLMPST, Prague, 2019-08-09.
  • PSA 2018: Conference of the Philosophy of Science Association (PSA) (contributed), PSA, Seattle, 2018-11-02.
  • Concept Formation in the Natural and the Social Sciences (contributed), Philosophy Department at the University of Zurich, Zurich, 2018-10-19.
  • MuST 2018: Munich-Sydney-Tilburg/Turin Conference on Models of Explanation (contributed), Center for Logic, Language and Cognition (LLC), Turin, 2018-06-12.

Simplifying Simplicity

Abstract
There exists a variety of notions of simplicity, such as ontological and explanatory parsimony, the simplicity of a model measured by counting the model’s parameters, the simplicity of theories measured by counting components of normal forms of the theory’s axioms, etc. These notions are employed in a variety of approaches such as, e.g., explanation by unification, inference to the best explanation, and non-ad hoc hypothesising. Many of them rely on the assumption that the underlying notion of simplicity bears some epistemic value, which makes a relevant epistemic difference between unified vs. case-by-case explanations, simple vs. complex models, and general vs. ad hoc hypotheses. To assume such an extra epistemic value of simplicity goes hand in hand with the assumption that simplicity is truth-apt. One important way to argue for the truth-aptness of simplicity consists in drawing on constraints from the model selection literature and showing that simpler models are less prone to overfitting erroneous data than complex models. However, this strategy is based on one particular notion of simplicity, the number of a model’s parameters, and it is unclear how this notion relates to the other notions mentioned. In this paper, we argue that these notions are related to each other via structural equations. By applying an idea of Forster and Sober (1994) we show how, e.g., probabilistic axioms or laws can be reformulated as structural equations; these can then be used to assign numbers of parameters to such axioms or laws, and hence allow for applying established complexity measures that simply count the number of parameters. In this way, one can provide an exact translation manual from the number-of-parameters approach to the other notions of simplicity; and this, in turn, can be employed for transferring the epistemic value of simplicity granted in the former domain to the latter.
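
As a point of reference for the model selection strategy mentioned above, the standard Akaike information criterion penalizes a model’s fit by the number k of its adjustable parameters; this textbook formula is quoted only for orientation and is not the paper’s own measure:

  \[ \mathrm{AIC} = 2k - 2\ln\hat{L}, \]

where \hat{L} is the maximized likelihood of the model and lower AIC values indicate a better trade-off between fit and simplicity.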

Presented at:

  • CLMPST16: 16th Congress of Logic, Methodology and Philosophy of Science (contributed), DLMPST, Prague, 2019-08-08.
  • Understanding Defectiveness in the Sciences (contributed), Instituto de Investigaciones Filosóficas at Universidad Nacional Autónoma de México (UNAM), Mexico City, 2019-06-03.
  • Simplicities and Complexities (contributed), The Epistemology of the LHC, Bonn, 2019-05-22.
  • GWP.2019: Conference of the German Society for Philosophy of Science (GWP) (contributed), GWP, Cologne, 2019-02-26.

Confirmation Based on Analogical Inference. Bayes meets Jeffrey

(Joint work with Alexander Gebharter)

Abstract
Certain hypotheses cannot be directly confirmed for theoretical, practical, or moral reasons. For some of these hypotheses, however, there might be a workaround: confirmation based on analogical reasoning. In this paper we take up Dardashti et al.’s (2015) idea of analyzing confirmation based on analogical inference Bayesian style. We identify three types of confirmation by analogy and show that Dardashti et al.’s approach covers two of them. We then highlight possible problems with their model as a general approach to analogical inference and argue that these problems can be avoided by supplementing Bayesian updating with Jeffrey conditionalization.
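
For orientation, Jeffrey conditionalization, the update rule invoked at the end of the abstract, generalizes strict Bayesian updating to cases in which the evidence E is learned only to a degree less than certainty; in its standard textbook form (not necessarily the exact formulation used in the paper):

  \[ P_{\mathrm{new}}(H) = P(H \mid E)\,P_{\mathrm{new}}(E) + P(H \mid \neg E)\,P_{\mathrm{new}}(\neg E). \]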

Presented at:

  • Research Colloquium (invited), Research Group “From Perception to Belief and Back Again”, Bochum, 2021-07-06.
  • BSPS 2019: The Annual Conference of the British Society for the Philosophy of Science (contributed), BSPS, Durham, 2019-07-19.
  • Analogical Reasoning in Science and Mathematics (contributed), MCMP, Munich, 2018-10-27.
  • Issues in Medical Epistemology (contributed), CONCEPT – Cologne Center for Contemporary Epistemology and the Kantian Tradition, Cologne, 2017-12-15.

An Optimality-Argument for Equal Weighting

Abstract
There are several proposals to resolve the problem of epistemic peer disagreement that concentrate on the question of how to incorporate evidence of such a disagreement. The main positions in this field are the equal weight view, the steadfast view, and the total evidence view. In this paper we present a new argument in favour of the equal weight view. As we will show, this view results from a general approach of forming epistemic attitudes in an optimal way. In this way, the argument for equal weighting can be strengthened considerably: from reasoning via epistemic indifference to reasoning from optimality.
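
As a rough illustration of what the equal weight view recommends in the simplest case, the disagreeing peers’ credences are pooled by straight (unweighted) averaging; this is a common textbook rendering and not necessarily the exact formulation defended in the talk:

  \[ P_{\mathrm{revised}}(A) = \tfrac{1}{n}\sum_{i=1}^{n} P_i(A), \qquad\text{e.g.}\quad P_{\mathrm{revised}}(A) = \tfrac{1}{2}\bigl(P_1(A) + P_2(A)\bigr) \ \text{for two peers}. \]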

Presented at:

  • Joint Session: The Open Session (contributed), Aristotelian Society and Mind Association, Durham, 2019-07-21.
  • Social Epistemology and Joint Action in Science (contributed), University of Salzburg, Workshop organised by the Düsseldorf Centre for Logic and Philosophy of Science (DCLPS), 2014-09-04.

Meta-Abduction. Inference to the best prediction

Abstract
In this paper we provide an exact characterisation of abduction, which aims at inferring hypotheses, explanations, or theories on the basis of data. The two main factors in doing so are the likelihood of the data given the inferred hypotheses and the simplicity or unificatory power of the hypotheses. We briefly discuss an argument for the epistemic value of simplicity and unificatory power and show how explanatory inferences based on them allow for an optimality justification. Finally, we outline how abduction as an inference to the best prediction can be justified by employing the framework of meta-induction not only for the likelihood factor but also for the simplicity factor.

Presented at:

  • IACAP 2019: Conference of the International Association for Computing and Philosophy (IACAP) (contributed), Universidad Nacional Autónoma de México (UNAM), Mexico City, 2019-06-06.

Meta-, Anti-, Induction

(Joint work with Paul Thorn)

Abstract
We briefly discuss the classical problem of induction and the new riddle of anti-induction. Afterwards, we outline Gärdenfors’ (1990) main argument for resolving the new riddle, discuss the meta-inductive approach to the classical problem of induction, and strengthen Gärdenfors’ argument against anti-induction by expanding on the optimality considerations of the theory of meta-induction.

Presented at:

  • IACAP 2019: Conference of the International Association for Computing and Philosophy (IACAP) (contributed), Universidad Nacional Autónoma de México (UNAM), Mexico City, 2019-06-06.
  • Research Seminar (invited) of the Munich Center for Mathematical Philosophy (MCMP, LMU Munich), Munich, 2019-05-15.

Citizen Science and Social Responsibility

(Joint work with Alexander Christian)

Abstract
Since the mid-2000s, philosophers of science as well as scholars in science and technology studies have increasingly focused their attention on the participation of laypersons in research processes. Labels such as “citizen science” and “participatory science” (e.g. Irwin 1995, Curtis 2018) illustrate that the concepts of citizenship and participation might be essential to understanding the intricate relationships between scientists and laypersons in research settings and the varying degrees of agency laypersons participating in research processes have. One important research question in this context is whether the participation of laypersons affects the moral awareness and motivation of individual scientists and scientific communities when it comes to their professional as well as social and civic responsibilities. By participating in research processes, laypersons might contribute more than their mere workforce to the production of scientific knowledge, for instance by affecting research agendas and the design of experiments and empirical studies.
In this paper, we address the question of such an influence in two steps. First, we map the various ways in which laypersons can contribute to research processes; in order to do so, we distinguish several “parameters” relevant for such interactions, e.g. the epistemic and civic context, an underlying axiology, different agential roles, and various degrees of expertise. This allows us to draw a clear picture of several possible interactions between science and society and prepares the ground for focusing on particular types of such interactions. In a second step, we discuss whether there is historical and contemporary evidence that particular types of layperson participation in different areas of research processes foster moral awareness and the motivation to accept social responsibilities among scientists. By examining autobiographies and biographies of scientists (e.g. Archibald Cochrane) and systematically reviewing the literature on the association between changes in the moral awareness and motivation of professional agents and the participation of laypersons in research, we aim to substantiate the following thesis: an important and so far only rarely investigated benefit of citizen science is that the participation of laypersons in research processes unintendedly and subtly fosters well-ordered science. This concerns particularly the mapping of societal preferences in research agendas and the consideration of nonscientific values in restricting research methods (cf. Kitcher 2001, Kitcher 2011). Yet unlike approaches relying on institutionalized discourse situations to bridge epistemic differences between laypersons and professional agents, citizen science brings about well-ordered science through the mere participation of laypersons, their agency, and the perception of laypersons as quasi-peers.

Joint slides presented by Alexander Christian at:

  • Citizen Science: New epistemological, ethical and political challenges (contributed), Université Jean-Moulin: IDEX, Lyon, 2019-07-06.

Transcendental Deduction as Abduction

Abstract
One form of abduction consists in an inference to the best explanation. Transcendental deduction, on the other hand, is sometimes described as an inference to the only possible explanation. If there is only one possible explanation, then, for trivial reasons, it is also the best explanation. Such a link between both forms of inferences was stressed by interpreting transcendental deduction as a form of abduction (cf. Rosenberg 1975 and Vahid 2006), but also by reconstructing pragmatist abduction as a form of transcendental deduction (cf. Apel 1981 and Gava 2008). The former approach brings a pragmatist interpretation of transcendentalism with it, whereas the latter provides a transcendentalist interpretation of pragmatist abduction. In this talk, we take up the approach of framing transcendental deduction as a form of abduction. However, we also relate our approach to a truth-apt reduction of the pragmatic factors in abductive inferences. This allows us to frame transcendental deductions as abduction with no pragmatist flavour.

Presented at:

  • Return of the Kantians: Kant and Contemporary Epistemology. (contributed), CONCEPT – Cologne Center for Contemporary Epistemology and the Kantian Tradition, Cologne, 2019-05-31.
  • The Possibility of Metaphysics: Between Inductive, Analytic, and Transcendental Arguments (contributed), DCLPS, Düsseldorf, 2019-02-01.

Newton’s Abductive Methodology. A Critique on Duhem, Feyerabend, and Lakatos

Abstract
The Newtonian research program consists of the core axioms of the Principia Mathematica, a sequence of force laws and auxiliary hypotheses, and a set of methodological rules. The latter underwent several changes, and so it is sometimes claimed that, historically speaking, Newton and the Newtonians added methodological rules post constructione in order to further support their research agenda.

An argument of Duhem, Feyerabend, and Lakatos aims to provide a theoretical reason why Newton could not have come up with his theory of the Principia in accordance with his own abductive methodology: Since Newton’s starting point, Kepler’s laws, contradict the law of universal gravitation, he could not have applied the so-called method of analysis and synthesis. In this paper, this argument is examined with reference to the several editions of the Principia. Newton’s method is characterized, and the necessary general background assumptions of the argument are made explicit. Finally, the argument is criticized from a contemporary philosophy of science point of view.

Presented at:

  • Science as a Public Good (contributed), Saint Petersburg State University: Russian Society for History and Philosophy of Science (RSHPS), 2020-11-28.
  • ISRHPS 2018: Conference of the Israel Society for History and Philosophy of Science (contributed), The Van Leer Jerusalem Institute, Jerusalem, 2018-06-10.

Success-Based Inheritance in Cultural Evolution

(Joint work with Karim Baraghith)

Abstract
Generalized Darwinism suggests modelling cultural development as an evolutionary process in which traits evolve through variation, selection, and reproduction (cf. Mesoudi 2011). Although this paradigm presupposes some strong similarities between natural and cultural evolution, it also clearly allows for several dissimilarities in the models. One such dissimilarity concerns the form of inheritance: whereas in natural evolution inheritance consists of the transmission of discrete units, in the cultural realm it is common to assume that inheritance is a more or less continuous mixing of traits. The latter is sometimes also called ‘blending inheritance’. In this paper we characterize blending inheritance in detail. In order to do so, we first discuss classical models of cultural evolution and population dynamics (cf. Boyd and Richerson 1988, Schurz 2011). We then point to some problems of these models and introduce our own model, which combines relevant features of both. In it, blending inheritance is implemented as a form of success-based social learning. This allows for general results about such a variant’s fitness while avoiding the problems and restrictions of the former models.

Presented at:

  • The Generalized Theory of Evolution (contributed), DCLPS, Düsseldorf, 2018-02-01.
  • NNPS 2017: Meeting of the Nordic Network for Philosophy of Science (contributed), NNPS, Copenhagen, 2017-04-21.

Abductive Philosophy and Error

Abstract
Timothy Williamson argues that abduction should become a key method of philosophy, just as it is already a key method of the natural sciences. ‘Abduction’ is understood here as inference to the best explanation, where an explanation is better than another if it makes the evidence more plausible and is simpler. Now, it is quite clear what the epistemic value of making evidence plausible consists in. Regarding simplicity, however, it is debatable whether it bears epistemic value or not. Williamson (2016) suggests spelling out the truth-aptness of simplicity via constraints against overfitting (noisy) data containing errors. But if abduction is to be justified for applications in philosophy, it remains open what kind of error one is talking about. In this paper we try to spell out in pragmatic terms, by bibliometric means, what might be called ‘error in philosophical data’. We then apply the approach in a case study on post-Gettier investigations of knowledge.

Presented at:

  • Williamson on Abductive Philosophy (contributed), Vienna Forum for Analytic Philosophy, Vienna, 2017-10-07.

The Synchronized Aggregation of Beliefs and Probabilities

Abstract
In this paper, we connect debates concerning several doxastic systems. First, there is the debate on how to adequately bridge quantitative and qualitative systems of belief. At the centre of this discussion is the so-called Lockean thesis, according to which a proposition A is believed by an agent iff the agent’s degree of belief in A exceeds a specific threshold r>1/2. It is well known that this thesis can come into conflict with other constraints on rational belief, such as consistency and deductive closure, unless great care is taken. Leitgeb’s (2014) stability theory of belief provides an elegant means for maintaining the Lockean thesis, consistency, and deductive closure. The theory is based on the notion of P-stability: a proposition A is P-stable (for a probability function P) iff for all propositions B consistent with A: P(A|B)>1/2. This stability theory of belief also extends very nicely to other constraints on belief simpliciter and degrees of belief. So, e.g., the property of being P-stable is preserved under AGM belief revision and Bayesian updating via conditionalization (cf. Leitgeb 2013).
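
Restating the two definitions from the preceding paragraph in LaTeX form (the proviso P(B)>0 is the standard requirement for the conditional probability to be defined and is not explicit in the text above):

  \[ \text{Lockean thesis:}\quad \mathrm{Bel}(A) \iff P(A) > r, \quad r > 1/2; \]
  \[ P\text{-stability:}\quad A \text{ is } P\text{-stable} \iff P(A \mid B) > 1/2 \ \text{for all } B \text{ consistent with } A \text{ with } P(B) > 0. \]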

We expand the investigation of stability preservation to several further doxastic systems, such as theories of justification (Dutch book arguments, quantitative and qualitative), theories of higher-order evidence, and also debates concerning how to adequately aggregate qualitative belief sets, on the one hand, and degrees of belief, on the other. Regarding the latter, several constraints on opinion pooling and social choice are discussed in the literature, centering on Arrow’s (1950) impossibility results and similar results regarding qualitative beliefs shown by List and Pettit (2002). Given this debate, it is quite natural to ask whether qualitative and quantitative aggregation can be performed in a “synchronized” way. We will show some possibility as well as impossibility results regarding the constraint of stability preservation in the social context of opinion pooling and judgement aggregation.

Presented at:

  • XXIV. Deutscher Kongress für Philosophie (contributed), DGPhil, Berlin, 2017-09-25.
  • European Epistemology Network (EEN) 2016 (contributed), EHESS: Institut Jean-Nicod, CNRS, Paris, 2016-07-06.
  • GWP.2016: Conference of the German Society for Philosophy of Science (GWP) (contributed), GWP, Düsseldorf, 2016-03-09.

Probability Aggregation and Optimal Scoring

This talk received the Best Presentation Award in the Springer Lecture Notes in Computer Science series at KI2020.

Abstract
We discuss the problem of optimizing probability aggregation with the help of different scoring rules or loss measures. By this we intend to combine basic insights of probabilistic opinion pooling with optimality results of formal learning theory. The framework can be applied, e.g., to Hans Reichenbach’s best alternative approach to the problem of induction.
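
As one concrete instance of the setting sketched above (an illustrative textbook example, not necessarily the rules used in the talk), individual probabilities can be aggregated by weighted linear pooling and evaluated by the quadratic (Brier) loss:

  \[ P_{\mathrm{pool}}(A) = \sum_{i=1}^{n} w_i\,P_i(A), \quad \sum_{i=1}^{n} w_i = 1,\ w_i \ge 0; \qquad \mathrm{loss}(p, x) = (p - x)^2 \ \text{for outcome } x \in \{0,1\}. \]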

Presented at:

  • KI2020: 43rd German Conference on Artificial Intelligence (contributed), Bamberg, 2020-09-21.
  • ECAP9: European Congress of Analytic Philosophy (contributed), ESAP, Munich, 2017-08-26.
  • Joint Session: The Open Session (contributed), Aristotelian Society and Mind Association, Edinburgh, 2017-07-15.

Lockean Thesis and Non-Probabilism

Abstract
In philosophy of science and epistemology there are qualitative as well as quantitative theories of belief. A common proposal to bridge these two types of theories is the so-called Lockean thesis, according to which every proposition with a degree of belief above a specific threshold is also believed simpliciter. However, the lottery paradox as well as the preface paradox show that classical principles of quantitative belief, qualitative belief, and Lockean bridging are not jointly satisfiable in general. One way out of this problem is to modify some doxastic principles, such as conjunctive closure of belief simpliciter (the route taken by Henry Kyburg 1970). Another way out is to restrict Lockean bridging to so-called stable systems of belief (the route taken by Hannes Leitgeb 2014). However, there is also a third approach which was, to the best of our knowledge, not yet taken up by anyone regarding the Lockean thesis, namely to modify the probability calculus in order to uphold orthodox belief simpliciter and Lockean bridging. In this paper we explore which kind of probability logic (a fuzzy logic) follows from the latter two constraints.

Presented at:

  • International Conference of the Italian Society for Logic and Philosophy of Science (contributed), SILFS, Bologna, 2017-06-21.

Yet Another Argument Against Preemption

Abstract
In this presentation we provide an argument against Constantin and Grundmann’s (2017) approach to preemption by epistemic authority. We do so by specifying their formal set-up, characterising their epistemic notion of an authority and the principle of preemption, expanding the set-up by reasonable reliabilist assumptions, and finally showing that, given these assumptions, preemption turns out to be false.

Presented at:

  • The Epistemology of Expert Judgment (invited), Department of Philosophy of the University of Cologne, Cologne, 2023-10-19.
  • Believing on Authority (invited), Department of Philosophy of the University of Innsbruck, Innsbruck, 2017-05-22.

The Struggle for Epistemic Superiority in Medical Research

(Joint work with Alexander Christian)

Abstract
A wider public has recently become aware of problems with moral integrity in medical and pharmaceutical research. Clinical trials have come to be viewed more distrustfully due to incidents like the suppression of evidence in the Tamiflu® case (Payne 2012) or harms to test subjects like the death of a trial participant in France in January 2016 (Ministère-des-Affaires-Sociales, 2016-01-15). Such defects lead, among other things, to an alarming loss of trust in research generally. Simultaneously, the discussion of good scientific practice has focused on the prevention of bias and the limitation of the corruptive influence of conflicts of interest (Lieb et al. 2011).
In philosophy of science and research ethics there is a tendency to treat institutional responses to corrupting influences as pro- or reactive measures installed to secure the integrity of research. We will reconstruct recent rearrangements of epistemic authorities in medical and pharmaceutical research in terms of conflicting regimes (e.g. Gibbons et al. 1994) and their struggle for epistemic superiority. We first present recent changes in medical research, like the introduction of clinical trial registries and the codification of corresponding guidelines in medical journals. Our thesis is that the broad concept of epistemic regimes provides a fruitful framework, since it allows for the inclusion of a socio-political perspective on changing collectives in academia, industry, and public authorities.

Presented at:

  • 4S/EASST BCN-2016 (contributed), 4S/EASST, Barcelona, 2016-09-01.

An Historical and Systematic Sketch of the Debate about Values in Science

Abstract
The debate about the permissibility of value judgements in science has now lasted for more than a century. It can be divided into three phases (cf. Schurz and Carrier 2013): a first phase, in which Max Weber formulated the so-called ‘value-neutrality postulate’, according to which value judgements should be avoided in science or should at least be clearly marked as such; a second phase, which coincides with the so-called ‘Positivismusstreit’ in German sociology and in which proponents of critical theory such as Jürgen Habermas argued against critical rationalists such as Karl Popper, offering emancipatory reasons in favour of the value-ladenness of science; and finally a third phase, which took place mainly in English-speaking countries and in which new theoretical arguments in favour of the value-ladenness thesis were put forward.

In this contribution a historical and systematic sketch of the debate about values in science will be given. Then the main arguments of the third phase will be explicated. Finally, the relevance of the debate for the workshop’s topics, i.e. decisions under risk in areas of public interest such as the climate, food, and geosciences as well as medicine, will be systematically pointed out.

Presented at:

  • Workshop: Risk Assessment and Values in Science (contributed), DCLPS, Salzburg, 2015-09-02.
  • Objectivity in Science (contributed), TiLPS, Tilburg, 2015-06-10.
  • PhD-Symposium of the Austrian Society for Philosophy (ÖGP) (contributed), ÖGP, Innsbruck, 2014-12-05.
  • Tagung für Praktische Philosophie (contributed), International Research Center Salzburg, Salzburg, 2014-11-13.

Epistemic Normativity of Social Reliabilism

Abstract
Epistemological investigations of belief in philosophy differ from such investigations in psychology. While psychologists focus on the question of how real agents actually form beliefs and gather knowledge, philosophers investigate normative questions about more or less idealized agents. The problem of how to interpret this epistemic normativity led many authors to an instrumentalist point of view, claiming, as e.g. Quine (1986) did, that “normative epistemology is a branch of engineering. It is the technology of truth-seeking […] it is a matter of efficacy for an ulterior end, truth […]. The normative here, as elsewhere in engineering, becomes descriptive when the terminal parameter is expressed”. Analogously to the deontic means-end principle often used to express normative instrumentalism in ethics, several forms of epistemic consequentialism assume an epistemic means-end principle for norms of knowledge and belief of the following form: if M is an optimal means to achieve an epistemic goal G, then, since G itself is epistemically required or rationally accepted, M is also rationally acceptable. This principle has at least two components that require further clarification: (i) the concept of an epistemic goal and (ii) the concept of an optimal means to achieve such a goal. In this paper we focus on a clarification of the second concept and show how optimality results of the theory of strategy selection allow for spelling out the normative part of rationality in social reliabilism, a generalization of Hume’s approach to testimony.

Presented at:

  • GAP.9: Conference of the German Society for Analytic Philosophy (GAP) (contributed), GAP, Osnabrück, 2015-09-15.
  • SOPhiA 2015: Salzburg Conference for Young Analytic Philosophy (contributed), University of Salzburg, Salzburg, 2015-09-04.
  • JustGroningen (contributed), Department of Philosophy of the University of Groningen, Groningen, 2015-08-22.
  • The Odds for Bayesianism (contributed), University of Vienna, 2015-05-28.
  • Norms of Reasoning (invited), Department of Philosophy of the University of Bochum, Bochum, 2014-09-23.
  • ECAP8: European Congress of Analytic Philosophy (contributed), ESAP, Bucharest, 2014-08-29.
  • Research Seminar (invited) of the Department of Philosophy of the University of Bremen, Bremen, 2014-05-08.

A Conventional Foundation of Logic

Abstract
Gottlob Frege was the first to provide a comprehensive attempt at constructing mathematics from a logical foundation alone. In subsequent investigations the foundations of logic themselves were also discussed quite extensively, by providing fundamental principles for distinguishing logical truths from non-logical ones. Three main approaches can be differentiated in these investigations: Belnap’s structural rules approach (1962), Quine’s approach of substitution salva congruitate (1979), and Tarski’s invariance approach (1986). All three suggestions face problems in adequately distinguishing the logical from the non-logical vocabulary. In this paper Belnap’s approach is taken a step further by providing a foundation that is conventional only.

Presented at:

  • SLMFCE VIII: Conference of the Spanish Society for Logic, Methodology and Philosophy of Science (contributed), SLMFCE, Barcelona, 2015-07-07.
  • Conference of the Austrian Society for Philosophy (ÖGP) (contributed), ÖGP, Innsbruck, 2015-06-06.
  • Formal Methods in Science and Philosophy (contributed), Inter-University Centre Dubrovnik, Dubrovnik, 2015-03-26.

Weak Predictivism, Ad hoc Modification, and Intervention

Abstract
Interventions play an important role in causal modelling and decision theory. They allow for the discovery of causal relations as well as for figuring out one’s influence when deciding to perform a specific action. With increased interventional knowledge one gathers more fine-grained causal and decision-theoretic knowledge. On the other hand, introducing interventions into a causal system may also increase the system’s complexity; since increased complexity is correlated with decreased predictive accuracy, one seems to gain extra knowledge at the cost of decreased predictive accuracy. In this talk we analyse this relation in an Akaike information-theoretic framework. We show that for so-called structural interventions (i.e. hard or arrow-breaking interventions) the gathering of extra knowledge by interventions does not come at the cost of predictive accuracy.

Presented at:

  • Workshop with Christopher Hitchcock (invited), DCLPS, Düsseldorf, 2015-04-24.

Automatic Metaphor Interpretation

(Joint work with Laurenz Hudetz)

Abstract
Given that metaphors can be important parts of arguments and that the common methods for evaluating literal claims and arguments are not (directly) applicable to metaphorical ones, several questions arise: In which way are metaphors important? How do metaphorical premises of an argument support its conclusion? What is an adequate evaluation procedure for metaphorical claims and arguments? In this talk we will give answers especially to the first and second questions and indicate what an answer to the third question might look like. On our analysis, metaphors in arguments introduce some very general assumptions about the domain of investigation, and these general assumptions, spelled out explicitly, support the conclusion of the argument. To render our analysis more precise, we will outline an implementation of automatic metaphor recognition and interpretation with the help of structural semantics. By applying such an implementation, we aim to reduce the question of evaluating metaphorical arguments to that of evaluating literal arguments by logical or probabilistic means.

Presented at:

  • Annual Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB) (contributed), AISB at University of Kent, Canterbury, 2015-04-21.
  • SOPhiA 2013: Salzburg Conference for Young Analytic Philosophy (contributed), University of Salzburg, Salzburg, 2013-09-12.

The Gene-Meme-Analogy in Cultural Evolutionary Theory

Abstract
Daniel Dennett is one of the most prominent representatives of investigations of cultural evolution in the framework of the gene-meme analogy, set up in 1976 by Richard Dawkins. According to this analogy, one fruitful way of expanding the theory of natural evolution to the domain of cultural phenomena is simply to widen the basic concept of replicators from the domain of genes to a broader domain containing genes and so-called memes. This approach to explaining cultural development avoids the usual problems of classical approaches such as sociobiology, which tries to reduce cultural phenomena to natural phenomena in terms of gene reproduction, variation, and selection only, and which faces the problem of explaining the rapidity and complexity of cultural development. Nevertheless, one main problem of the meme approach to cultural evolution is the systematic ambiguity in the usage of the basic notions of cultural evolutionary theory. Here is a short list of the quite different meanings of the expression ‘meme’ used by the most important followers of this cultural evolutionary research programme:

* Imitable entities: Memes are all things that are capable of being imitated (Dawkins and Blackmore).
* Information: Memes are acquired information, also storable outside of the brain, as, e.g., in books and computers (Dennett).
* Brain dispositions: Memes are dispositions of the brain to store (represent) information and cause behaviour (Schurz).
* Brain software: Memes are software parts of the brain (Dawkins).
* Neuromemes: Memes are electrochemical states of multiple neurons, so-called ‘neuromemes’, i.e. a configuration in one node of a neuronal network that is able to induce the replication of its state in other nodes (Aunger).

So one may wonder how, given such a diverse understanding of the basic notion of cultural evolutionary theory, one could expect fruitful and interesting results from this theory.

In this talk a formal investigation of the gene-meme analogy shows that, although there is much diversity in the usage of this notion by Dennett et al., there is also a core meaning (namely ‘reproducibility’ and ‘adequate variability’) in these diverse usages, which allows one to subsume the different investigations under one research programme of cultural evolution. It is also indicated that the diversity of meanings of the meme concept fits the current state of establishing this analogy and that, as one main consequence of this fact, the meme approach to cultural evolution is best considered a very general and merely heuristic framework for more detailed investigations in, e.g., complexity theory.

Presented at:

  • Rudolf-Carnap-Lectures 2014 with Daniel Dennett (contributed), Department of Philosophy of the University of Bochum, Bochum, 2014-03-12.
  • Research Seminar (contributed) of the Department of Philosophy of the University of Düsseldorf, Düsseldorf, 2013-06-11.

On the Inductive Validity of Conclusions by Analogies

Abstract
Philosophy of science aims at an adequate explication of the ‘degree to which a hypothesis h is confirmed by some evidence e’. Such an explication must take into account all factors which are relevant in determining this degree; according to an ordinary understanding these include, for example, the number of instances reported in the evidence, the variety of the instances, and the amount of ‘background’ information contained in the evidence (cf. Achinstein 1963, p. 207). The first-mentioned factor is sometimes also called the ‘factor of perfect analogy’.

Ernest Nagel claimed that it is in principle impossible to construct a measure of confirmation in which a multiplicity of factors is involved, since such a measure would have to allow us to order hypotheses linearly according to their increasing confirmational influence, and given the plurality of factors involved, such a linear ordering is impossible (cf. Nagel 1939, pp. 68–70).

Rudolf Carnap replied to this critique with the claim that if one is able to provide a numerical measurement for each of these factors, then it might be possible to construct a function ‘that would take some weighted average of the values determined, and which would yield a unique number as the final degree of confirmation [… like] a professor’s task of devising an adequate numerical grading system for students in his class. The professor also considers a number of distinct ‘factors’: hour examinations, papers and the final test.’ (cf. Achinstein 1963, p. 208) With his research programme on logical probabilities Carnap tried to provide such a measure for the role analogies play in the confirmation of a hypothesis by some evidence (cf. Carnap 1950). Although he succeeded in showing that so-called ‘perfect analogies’ have confirmational power, he failed to show that something similar also holds for imperfect analogies (roughly put: in the case of an imperfect analogy there are some counterexamples alongside mainly positive instances). In a series of papers many probabilistic confirmation theorists tried to solve this problem (cf., e.g., Hesse 1964, Hesse 1966, Achinstein 1963, Niiniluoto 1981), but in the eyes of many philosophers of science their attempts turned the research programme of logical probabilities into a degenerating research programme (cf., e.g., Spohn 1981).
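
To make the quoted idea more concrete, here is a minimal sketch of a weighted-average measure of the kind the grading analogy describes; the factor names, scores, and weights are purely hypothetical and not part of Carnap’s system:

```python
# A minimal sketch, not Carnap's actual measure: it only illustrates the
# "professor's grading" idea of combining several confirmational factors
# into one number by a weighted average. All names and numbers are hypothetical.
def degree_of_confirmation(factor_scores, weights):
    """Weighted average of normalised factor scores (each assumed to lie in [0, 1])."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[f] * factor_scores[f] for f in factor_scores)

# Hypothetical scores for one hypothesis/evidence pair:
scores  = {"number_of_instances": 0.8, "variety_of_instances": 0.5, "background_info": 0.6}
weights = {"number_of_instances": 0.5, "variety_of_instances": 0.3, "background_info": 0.2}
print(degree_of_confirmation(scores, weights))  # 0.67
```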

In this talk the cornerstones, main principles and failings of this development are presented in a historical overview. An alternative justification of the inductive validity of conclusions by analogies is then outlined within the framework of non-classical (transitive) confirmation-theory.

Presented at:

  • PhD-Symposium of the Austrian Society for Philosophy (ÖGP) (contributed), ÖGP, Innsbruck, 2013-11-28.

Diversity, Meta-Induction, and the Wisdom of the Crowd

Abstract
It can be shown that some meta-inductive methods are optimal compared to competing methods inasmuch as they are, in the long run, the most successful methods in a prediction setting (cf. especially Schurz 2008). Meta-inductive methods build their predictions on competing methods, depending on those methods’ past success. Since they depend on other methods, they normally decrease the diversity or independence within a setting. However, some very important results of social epistemology show that diversity in a setting is highly relevant for the overall performance within the setting. This is the so-called ‘influence of diversity on the wisdom of a crowd’: due to diversity within a group, the group’s averaged estimation of an outcome is more accurate than the average individual estimation.

So, at first glance it seems that meta-inductive methods are valuable for their own sake, but not for the sake of the performance of a whole group of methods. For this reason Paul Thorn and Gerhard Schurz have recently investigated the influence of meta-inductive methods on the performance of a group in more detail. Since there are no general results about this influence in a broad setting, they performed simulations for quite specific settings. The main result of their argumentation and simulations is that ‘it is not generally recommendable to replace independent strategies by meta-inductive ones, but only to enrich them’ (cf. Thorn & Schurz 2012).

In this paper a complementary summary of the mentioned investigations on meta-induction and the wisdom of the crowd effect is provided. In particular it is shown that, whereas meta-inductive methods allow one to account for the traditional problem of induction by taking a step to a meta level, investigations in social epistemology, which take a similar step to a meta level by using a wisdom of the crowd effect, are able to account in a similar way for object-level problems such as the problem of how to deal with peer disagreement. In situations where both problems and solutions come together, the new problem arises of how meta-inductive methods influence the group’s performance. With the help of simulations in a setting where diversity is especially influential, we take a complementary look at this problem. Among the simulations is also a case of Paul Feyerabend’s diversity argument, which claims that progress in science is sometimes possible only via diversity in, or plurality of, theories and methods (cf. Feyerabend 1993, p. 21, p. 107). More general simulations concerning the importance of diversity, e.g. for justifying some kinds of positive discrimination or diversity in interdisciplinary research at the cost of average competence, will also be modelled in the meta-inductivistic framework and investigated in detail.
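
As a minimal illustration of the wisdom-of-the-crowd effect invoked above (not of the talk’s actual simulations), the following sketch numerically verifies the identity that the crowd’s squared error equals the average individual squared error minus the diversity (variance) of the individual estimates; all numbers are hypothetical:

```python
# A minimal sketch of the "wisdom of the crowd" effect, with hypothetical data.
import numpy as np

rng = np.random.default_rng(0)
truth = 10.0                                      # the quantity to be estimated
estimates = truth + rng.normal(0, 3, size=50)     # 50 diverse individual estimates

crowd_error = (estimates.mean() - truth) ** 2               # error of the averaged estimate
avg_individual_error = ((estimates - truth) ** 2).mean()    # average individual error
diversity = ((estimates - estimates.mean()) ** 2).mean()    # variance of the estimates

# The identity crowd_error = avg_individual_error - diversity holds exactly,
# so the group's averaged estimate is never worse than the average individual estimate.
print(crowd_error, avg_individual_error - diversity)
```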

Presented at:

  • International Conference of the Italian Society for Logic and Philosophy of Science (contributed), SILFS, Rome, 2014-06-20.
  • EPSA13: Conference of the European Philosophy of Science Association (contributed), EPSA, Helsinki, 2013-08-29.
  • BSPS 2013: The Annual Conference of the British Society for the Philosophy of Science (contributed), BSPS, Exeter, 2013-07-05.

Concept Formation and Reduction by Analogies

Abstract
In philosophy of science concept formation and reduction are usually discussed with respect to definability. In the paper at hand this discussion is expanded to an investigation of concept formation and reduction by analogies.

Analogies are frequently used in scientific explanations and descriptions. Indicators of analogical reasoning are comparative phrases like ‘similar to’, ‘likewise’ and ‘analogously’. A prototypical analogy that is often discussed in philosophy of science is the one established between the concepts of fluid physics and the concepts of electromagnetism: in order to explain some concepts of electromagnetism, interrelations between these two different areas are often stressed. So, e.g., one can describe potential difference with the help of pressure difference in a pipe filled with liquid. It will be shown that many kinds of such analogies bear some important features of contextual definitions and thereby allow one to expand the classical reductionistic framework to reductions by analogies.

With the help of a detailed investigation of some further examples, e.g. the gene-meme analogy, we hope to achieve some new clarifying insights into the conceptual and argumentative usage of analogies.

Presented at:

  • European Conference on Argumentation (contributed), Argumentation Lab, Universidade Nova de Lisboa, 2015-06-11.
  • The Role of Analogies in Argumentative Discourse (contributed), Faculty of Letters of the University of Coimbra, Coimbra, 2013-05-04.
  • Research Seminar (contributed) of Pro Scientia, Salzburg, 2013-04-10.
  • Research Seminar (invited) of the Department of Philosophy of the University of Düsseldorf, Düsseldorf, 2012-06-19.
  • PhD-Symposium of the Austrian Society for Philosophy (ÖGP) (contributed), ÖGP, Salzburg, 2012-05-18.
  • ECAP 7: European Congress of Analytic Philosophy (contributed), ESAP, Milan, 2011-09-05.

Is Mereology Ontologically Innocent? Well, it depends …

Abstract
Mereology, the theory of parts and wholes, is often taken to be an adequate framework for semantic theories and theories of metaphysics. Perhaps the best-known examples within a mereological framework are theories of spatio-temporal objects. Although it is also possible to embed such theories in a set-theoretical framework, many ontologists think that mereology is preferable to set theory with respect to this domain insofar as it seems to be, in one way or another, ontologically innocent (cf. Lewis 1991). According to this thesis the mereological fusion of some entities is nothing over and above the entities.

Recent discussions of the thesis of ontological innocence, e.g. (Yi 1999) and (Cameron 2007), come to a negative result, that is, they undermine it. In our presentation we are going to demonstrate that an adequate answer to the question whether this thesis holds relies crucially on the underlying theory of reference. There are two types of theories of reference: (a) theories of singular reference as, for example, provided in classical formal semantics, and (b) theories of plural reference. Theories of plural reference can be subdivided into (b1) theories of plural predication as, for example, established by the founder of mereology, Stanisław Leśniewski (1929), and (b2) theories of plural quantification as effectively introduced by (Boolos 1984). Our investigation will show that the thesis of the ontological innocence of mereology holds fully in the framework of (b2), partly in the framework of (b1) and not at all in the framework of (a). So our answer to the question whether mereology is ontologically innocent will be: well, it depends on your theory of reference.

Presented at:

  • International Conference of the Italian Society for Logic and Philosophy of Science (contributed), SILFS, Milan, 2012-11-20.
  • The Character of the Current Philosophy and its Methods (9th) (contributed), Slovak Academy of Sciences, Bratislava, 2012-03-02.

Popper and Feyerabend on Ad-Hoc Modifications and Confirmation

Abstract
In this contribution we are going to explicate Paul Feyerabend’s views on ad-hoc-modifications. In a first step we will provide some definitions of auxiliary terms and the term ad-hoc-modification itself. We will show that Feyerabend’s usage of this term coincides with Karl Popper’s general characterization of ad-hoc-modifications, namely as modifications of a theory that decrease in some way or another the empirical content of the theory.

In a second step we’ll reformulate a problem of such a characterization which was already posed by Adolf Grünbaum in 1976: for it can be shown that according to such a characterization no ‘repairing’ modification T2 of a falsified theory T1 (that is: T2 is a modification of T1 and for some fact e: T1 is falsified by e, but T2 is not falsified by e) is an ad-hoc-modification. Since very often theories are modified for falsificational or disconfirmational reasons, this is a very unwelcome result. But we will indicate that by a simple and plausible reformulation of the empirical content of a theory this problem of Feyerabend’s (and Popper’s) characterization can be solved.

In a third step we’ll consider Feyerabend’s discussion of examples of ad-hoc-modifications from the history of science, especially Galileo Galilei’s physical theories. We’ll show that according to Feyerabend ad-hoc-modifications are sometimes necessary for the progress of a ‘successful’ research programme. But we will also show that this position, which Feyerabend himself takes to be an opposing one, is in fact the traditional and common position within philosophy of science.

Presented at:

  • Feyerabend 2012 (contributed), Department of Philosophy of the Humboldt-University Berlin, Berlin, 2012-09-26.
  • XXII. Deutscher Kongress für Philosophie (contributed), DGPhil, Munich, 2011-09-15.
  • 9th Conference of the Austrian Society for Philosophy (ÖGP): Crossing Borders (contributed), ÖGP, Vienna, 2011-06-03.
  • Die Wiederverzauberung der Welt? Technik zwischen Aufklärung, Fortschritt, Mythos und Magie (contributed), TU Darmstadt, Darmstadt, 2010-08-20.

Intergenerational Justice and the Nonidentity-Problem

Abstract
One problem of intergenerational justice concerns the question whether the present generation bears moral responsibility for future generations and how such a responsibility can be justified. One way of justifying it is to take a person-affecting view of ethics, according to which no action is morally bad per se, but only with respect to someone (e.g. with respect to future generations). Against a person-affecting view the so-called non-identity problem is sometimes posed. This problem states that there are actions which, from a person-affecting point of view, have to be considered bad for a person, although without the execution of the action that person would not exist. In this talk I will try to show how this conflict can be solved by an exact analysis of its structure.

Presented at:

  • 35th International Wittgenstein Symposium (contributed), International Ludwig Wittgenstein Society, Kirchberg am Wechsel, 2012-08-06.

Language Dependence Redeflated

Abstract
Most of the common theories of verisimilitude are shown to be language dependent in the following sense:

An evaluation of theories is language dependent iff the evaluation leads to different results for synonymous theories. In detail: let > be a partial order for theory evaluation. Then > is language dependent iff there are theories T1, …, T4 such that T1 and T3 are synonymous, T2 and T4 are synonymous, and it holds that T1 > T2 and T4 > T3. Note that two theories are called synonymous iff they have a common definitional extension.

Miller thinks that language independence is a condition of adequacy for theory evaluation. In this talk it is argued that Miller’s use of the expression ‘T1 and T2 are synonymous’ is in a specific sense only a characterisation of the meaning of the expression ‘T1 and T2 can be extended to synonymous theories’, and that for this reason language independence in the described sense should not count as a condition of adequacy. Three reasons will be supplied for this claim:

First, an argument is sketched according to which language dependence in the Millerian sense allows one to assign different degrees of probability to synonymous theories.

Second, it is argued that Miller’s adequacy condition is implausible in some cases where true (and not only false) theories are compared with respect to verisimilitude.

Third, it is indicated how Miller’s adequacy condition can be weakened in order to avoid the mentioned problems.

Presented at:

  • BSPS 2012: The Annual Conference of the British Society for the Philosophy of Science (contributed), BSPS, Stirling, 2012-07-06.
  • Trends in Logic XI (contributed), Studia Logica, Bochum, 2012-06-04.
  • Brüche, Brücken, Ambivalenzen. Trennendes und Verbindendes in der Philosophie (contributed), TU Darmstadt, Darmstadt, 2009-09-10.

Religious Mind Identified as Collective Mind

Abstract
Sometimes the cognitive part of the human mind is modelled in a simplified way by degrees of belief. So, e.g., in philosophy of science and in formal epistemology agents are often identified with their credences in a set of claims. This way of dealing with the individual mind is currently being expanded to group identification by attempts to find adequate ways of pooling individual degrees of belief into an overall group credence or, more abstractly speaking, into a collective mind.

In this paper I will model the religious mind as such a collective mind. Religious people are therein identified with a set of degrees of belief containing religious and profane credence. So, e.g., within a religious context a person may be sure that some statement is true, whereas the same person lacks non-religious support for such a credence and hence may doubt the truth of that statement within a profane context. A first adequacy result for this identification is provided with the help of a re-interpretation of the so-called Dutch Book argument, which states that one’s degrees of belief should satisfy the axioms of probability theory. A feature of the given re-interpretation is its acceptability from a religious point of view.
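
As background for readers unfamiliar with the Dutch Book argument mentioned above, here is a minimal sketch of its classical form, not of the paper’s religious re-interpretation; the credences are purely hypothetical:

```python
# A minimal sketch of the classical Dutch Book argument: an agent whose credences
# violate the probability axioms accepts a set of bets that guarantees a loss.
# The credences below are hypothetical.
credence_A, credence_not_A = 0.7, 0.7        # violates additivity: 0.7 + 0.7 > 1

# The agent treats a bet paying 1 unit if p is true as fair at price credence(p),
# so she buys a bet on A and a bet on not-A:
cost = credence_A + credence_not_A

for A_is_true in (True, False):
    payoff = 1                               # exactly one of the two bets pays out
    print(f"A is {A_is_true}: net result = {payoff - cost:.1f}")   # -0.4 either way
```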

Since in such modellings of the human mind rationality is sometimes seen as essential for personal identity, I try in a second step to extract some desiderata for the rationality of the religious mind from results on group identification (group agency). I will show that some parts of the solutions to the so-called Discursive Dilemma are applicable to problems regarding the rationality of a person whose religious and profane credences are in disagreement.

Presented at:

  • Me, Myself, and I: Constructing and Re-constructing Identity (contributed), Classics Department of the University of Leeds, Leeds, 2012-06-06.
  • Research Seminar (contributed) of Pro Scientia, Salzburg, 2012-05-02.

A Reliabilistic Justification of the Value of Knowledge

Abstract
In this contribution the so-called Meno problem will be discussed. With respect to theories the problem is the following question: Why is it epistemologically more valuable to know a true theory than to merely believe it? A classical answer to this problem in reliabilist accounts refers to the value of the operation which is used for gathering knowledge. But there is a gap in the argumentation insofar as one is not allowed to derive from this assumption the conclusion that the result of the operation is also valuable. We are going to show a difference between true theories which are merely believed and theories which are also known to be true. This difference seems to allow one to close the mentioned gap.

Presented at:

  • 34th International Wittgenstein Symposium (contributed), International Ludwig Wittgenstein Society, Kirchberg am Wechsel, 2011-08-12.

One Dogma of Analyticism

Abstract
According to the usual distinction between syntax, semantics and pragmatics, no semantic definition of analyticity uses pragmatic concepts like observation term or theoretical term. In this contribution we are going to show that a very weak semantic conception of analyticity, which seems to be included in many other conceptions of analyticity, is inadequate. For this purpose we give a method for transforming theories with a fully synthetic empirical basis into logically equivalent theories with an analytic empirical basis. We draw the following conclusion: If any definition of analyticity is adequate at all, then it is a pragmatic one.

Presented at:

  • CLMPS14: 14th Congress of Logic, Methodology and Philosophy of Science (contributed), DLMPS, Nancy, 2011-07-20.

Formal Methods in Ethics – Exemplified in Democratized Morality

Abstract
In this talk an advantage of applying formal methods in ethics is discussed. First, the view of democratic morality is indicated, namely the nontrivial thesis that modern moral norms are increasingly justified democratically. This view raises some problems, e.g. the problem of providing adequate principles of justification. So, secondly, it is shown how formal criteria of rationality allow one to solve such problems, at least partly. This result confirms the claim that formal methods can be used fruitfully in ethics.

Presented at:

  • The Character of the Current Philosophy and its Methods (8th) (contributed), Slovak Academy of Sciences, Bratislava, 2011-03-02.

Richard Dawkins’ ‘Main Argument’ from a Philosophy of Science Point of View

(Joint work with Albert J.J. Anglberger and Stefan H. Gugerell)

Abstract
Richard Dawkins, opponent of creationism, intelligent design, and theology, is one of the key figures of modern atheism. Already in 1976 he attracted much attention with his popularizing book ‘The Selfish Gene’. In 2006 he reached, again, a broad public with his book ‘The God Delusion’. Especially the fourth chapter of the book, entitled ‘Why there almost certainly is no God’, makes clear that Dawkins’ main aim is to battle the core of intelligent design and creationism, i.e. the so-called ‘God Hypothesis’: ‘This chapter has contained the central argument of my book’ (cf. Dawkins 2006, p. 187). Dawkins furthermore claims that ‘if the argument of this chapter is accepted, the factual premise of religion – the God Hypothesis – is untenable. God almost certainly does not exist. This is the main conclusion of the book so far’ (cf. Dawkins 2006, p. 189). In this talk his main argument is investigated and criticised from a philosophy of science point of view.

Presented at:

  • Research Seminar (invited) of the Philosophical Society Salzburg, Salzburg, 2011-01-26.

Poster

An Optimality-Argument for Equal Weighting

This poster received the Best Poster Award of the GAP at GAP.10.

Abstract
Two peers have an epistemic disagreement regarding a proposition if their epistemic attitudes towards the proposition differ. The question of how to deal with such a disagreement is the problem of epistemic peer disagreement. Several proposals to resolve this problem have been put forward in the literature. Most of them mainly concentrate on the question whether, and if so to what extent, one should incorporate evidence of such a disagreement in forming an epistemic attitude towards a proposition. The classical proposal is the so-called “equal weight view”, which suggests generally incorporating such evidence by weighting it equally. At the other end of the spectrum is the so-called “steadfast view”, which suggests generally not incorporating such evidence. In between are views, such as the total evidence view, that suggest incorporating such evidence differently from case to case.

In this paper we want to present a new argument in favour of the equal weight view. A common argument for this view stems from a principle one might call the “principle of epistemic indifference”: If the epistemic attitudes of n individuals are, regarding their rational formation, epistemically indistinguishable (i.e. the individuals are epistemic peers), then each attitude should be assigned a weight of 1/n. However, as we will show, the equal weight view results from a more general approach to forming epistemic attitudes towards propositions in an optimal way. Thereby the argument for equal weighting can be massively strengthened: from reasoning via indifference to reasoning from optimality.
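
As a rough numerical illustration of the optimality idea (not the poster’s actual argument), the following sketch assumes that the peers’ estimates are statistically indistinguishable, modelled as i.i.d. noise around the truth, and checks that the equally weighted linear pool has the smallest expected squared error among weightings that sum to one; all numbers are hypothetical:

```python
# A minimal sketch, with hypothetical data: under exchangeable (i.i.d.) peer
# estimates, equal weights 1/n minimise the expected squared error of the pool.
import numpy as np

rng = np.random.default_rng(1)
truth = 0.0
n_peers, n_trials = 4, 100_000
estimates = truth + rng.normal(0, 1, size=(n_trials, n_peers))

def mse(weights):
    """Mean squared error of the weighted linear pool of the peers' estimates."""
    pooled = estimates @ np.asarray(weights)
    return np.mean((pooled - truth) ** 2)

print(mse([0.25, 0.25, 0.25, 0.25]))   # equal weights: lowest error (about 0.25)
print(mse([0.40, 0.30, 0.20, 0.10]))   # unequal weights: higher error (about 0.30)
```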

Presented at:

  • EPSA19: Conference of the European Philosophy of Science Association (EPSA), EPSA, Geneva, 2019-09-11–2019-09-14.
  • PSA 2018: Conference of the Philosophy of Science Association (PSA), PSA, Seattle, 2018-11-01–2018-11-04.
  • GAP.10: Conference of the German Society for Analytic Philosophy (GAP), GAP, Cologne, 2018-09-17–2018-09-20.

Meta-Induction as Opinion Pooling Dynamics

Abstract
Meta-induction is a method for making predictions about future events on the basis of past predictions (that is the inductive part of the method) and the success scores of epistemic agents (that is why it is called ‘meta’). Opinion pooling is a method which tries to aggregate the opinions of epistemic agents about theories (nets of beliefs) etc. into one group opinion. So, whereas theories of meta-induction are usually about beliefs in descriptions of single events, theories of opinion aggregation are usually about beliefs in whole theories; and whereas theories of opinion aggregation are usually about static opinions, theories of meta-induction are usually about dynamic opinions (i.e. changes in the opinions of an epistemic agent). Within this project we combine these approaches into a theory of opinion pooling dynamified by meta-induction. That is: we carry the dynamic part of meta-induction into the theory of opinion pooling.
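
As a loose illustration of such dynamified pooling (not the project’s own formalism), the following sketch pools agents’ predictions linearly while updating the pooling weights meta-inductively from each agent’s past predictive success; the agents, their biases, and the success score are hypothetical:

```python
# A minimal sketch, with hypothetical agents: pooling weights are updated from
# past predictive success, so the group opinion is a dynamic, success-weighted pool.
import numpy as np

rng = np.random.default_rng(2)
rounds, n_agents = 200, 3
biases = np.array([0.0, 0.5, -1.0])              # agent 0 is the most reliable
success = np.zeros(n_agents)                      # cumulative success scores

for t in range(rounds):
    event = rng.normal()                          # the quantity to be predicted
    predictions = event + biases + rng.normal(0, 0.1, size=n_agents)
    weights = (np.ones(n_agents) / n_agents if t == 0
               else success / success.sum())      # success-weighted pooling weights
    pooled_opinion = weights @ predictions        # the group's (dynamic) pooled opinion
    success += 1.0 / (1.0 + (predictions - event) ** 2)

print(weights.round(2))   # the most reliable agent ends up with the largest weight
```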

Presented at:

  • OeAW Meeting 2014: Grant Award Ceremony of the Austrian Academy of Sciences (OeAW), OeAW, Vienna, 2014-03-07.