Utrecht Philosophy of Astronomy & Cosmology Conference
The COSMO-MASTER team and UPAC group are organising a three-day conference on the philosophy and foundations of astronomy, astrophysics & cosmology, taking place in Utrecht (NL) from 3-5 September 2025. This conference follows (27-30 Aug) and comes directly after the Spacetime Matters conference in Utrecht (1-2 Sept). We welcome physicists, philosophers, and anyone else who may be interested. Registration has now closed.
Organisers: n.c.m.martens@uu.nl, a.e.ferreirodeaguiar@uu.nl, s.h.vergouwen@uu.nl
Wednesday 3 Sept
| 10:30 - 11:00 | Registration, with tea/coffee
Chair: Niels Martens |
|---|---|
| 11:00 - 12:15 | (Johns Hopkins University) Is it worth explaining our low-entropy past?
A low-entropy boundary condition at the Big Bang makes the thermodynamic arrow a contingent fact and, as many proclaim, an unusual one; adjectives like 'special', 'atypical', 'unnatural', and 'surprising' abound (Price 1996; Carroll 2023). In fact, Penrose (1989) demonstrates that, even under a very simplistic approximation of our universe's total degrees of freedom, if God were to randomly stick a pin in phase space, then the odds of locating the region containing our Big Bang would be about 1 in 10^(10^123) (see the sketch after this abstract). Does this thought experiment suggest that our low-entropy past requires further explanation? We contend that an explanation is not required, but is nevertheless allowed. Further explanation is not required insofar as a constraint like the Past Hypothesis is empirically adequate and could be part of a future, complete theory. Yet the search for such an explanation could lead to better cosmological theories. We thus explore alternatives to the Past Hypothesis, and by appealing to a methodology of abductive reasoning, we argue that some of them offer deeper explanations of our low-entropy past.
Joint work with .
|
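A rough reconstruction of how Penrose's number arises (an illustrative aside, assuming that the phase-space volume of a macrostate scales as the exponential of its Boltzmann entropy, and taking Penrose's figure of roughly 10^123 in units of k_B for the maximum attainable entropy of the observable universe):

$$
W = e^{S/k_B}, \qquad
\frac{W_{\text{Big Bang}}}{W_{\max}} \sim \frac{e^{S_{\text{BB}}/k_B}}{e^{10^{123}}} \approx 10^{-10^{123}},
\quad \text{since } S_{\text{BB}}/k_B \ll 10^{123}.
$$

At this scale the difference between base e and base 10 in the exponent is negligible, which is why the odds are standardly quoted as 1 in 10^(10^123).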
| 12:15 - 12:45 | (University of Namur) Looking for Quasi-Experimental Methods in Astrophysics
From early unaided stargazing to today's international space telescopes, observation is usually considered a central feature of astronomy. Because celestial entities cannot be physically manipulated, astronomy relies almost exclusively on observational data rather than traditional experiments (Jacquart, 2020). This alleged lack of experimental methods in astrophysics, grounded in the distinction between observation and experiment, has sparked epistemological debates regarding astronomers' practices (Hacking, 1989; Perovic, 2021).
The experimental method is commonly considered well suited to the study of causal relationships because it can test counterfactuals by controlling and manipulating experimental conditions. However, making such causal inferences is arguably not unique to experiments. Observational sciences, and astrophysics in particular, can indeed establish causal relationships between classes of objects and processes (such as, for instance, type Ia supernovae). To do so, it has been suggested that astronomers can "conduct/observe" quasi-experiments in the context of a so-called "cosmic laboratory" (Anderl, 2016), i.e. a universe so vast and diverse that all the possible and relevant initial conditions an experimenter would have considered are assumed to be already set up somewhere. On this laboratory view, astronomers must therefore use methods such as quasi-experiments to statistically sample, analyse and model the diversity of cosmic phenomena already available to them.
The present talk aims to investigate whether quasi-experiments in the context of a cosmic laboratory accurately describe the actual practice of modern astrophysics in establishing general causal relationships. To this end, a literature review of astrophysics papers was conducted in two steps using the Astrophysics Data System (ADS), a digital library portal in which major astronomy and physics publications are indexed and searchable by query (an illustrative query sketch follows this abstract). First, a full-text search among all the peer-reviewed astronomy papers included in ADS targeted the most common quasi-experimental techniques used in other fields, using queries directly related to these designs, e.g. propensity score matching or regression discontinuity. Only twelve articles were found that mentioned the use of these particular methods, suggesting that quasi-experiments are still marginal in the astronomy community. However, when the search was extended to some key elements of quasi-experimental designs, such as 'pre-test' or 'control group', the number of articles rose to roughly 1,400. These findings suggest that, while quasi-experimental approaches are not explicitly framed as such within astrophysics, their underlying principles may be embedded in the methods used to make causal inferences.
Second, a systematic review of a random sample (N=1000) of all peer-reviewed astronomy papers published in 2024 was conducted. The aim was to find papers in which researchers attempt to make causal claims in their results, to examine the underlying epistemological characteristics of the methods used, and to compare them with the literature on natural experiments and quasi-experiments. This analysis and its results will provide insights into the epistemic paths that astrophysicists follow when trying to make different forms of causal inference.
|
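As a concrete illustration of the kind of ADS full-text query described in the abstract above (a minimal sketch of my own, not the speaker's actual pipeline; the endpoint and field names follow the public ADS API as I understand it, and ADS_TOKEN is a placeholder for a personal API key):

```python
# Illustrative sketch: count refereed astronomy papers whose full text mentions
# common quasi-experimental techniques, via the ADS search API.
# Assumes an ADS API key in the ADS_TOKEN environment variable; the query fields
# (full:, property:refereed, collection:astronomy) follow the public ADS syntax
# as I recall it and should be checked against the current ADS documentation.
import os
import requests

ADS_URL = "https://api.adsabs.harvard.edu/v1/search/query"
TOKEN = os.environ["ADS_TOKEN"]

PHRASES = [
    '"propensity score matching"',
    '"regression discontinuity"',
    '"pre-test"',
    '"control group"',
]

def count_hits(phrase: str) -> int:
    """Number of refereed astronomy papers whose full text contains `phrase`."""
    params = {
        "q": f"full:{phrase} property:refereed collection:astronomy",
        "fl": "bibcode",  # only the hit count is needed, so request a minimal field
        "rows": 1,
    }
    resp = requests.get(ADS_URL, params=params,
                        headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()
    return resp.json()["response"]["numFound"]

for phrase in PHRASES:
    print(phrase, count_hits(phrase))
```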
| 12:45 - 14:00 | Lunch (see bottom of this page for options)
Chair: Ruward Mulder |
| 14:00 - 14:30 | (Tufts University & Harvard) Astronomy, Cosmology, and the Distant Past
Astrophysics and cosmology aim at understanding the state and evolution of the universe over billions of years, making them sciences of the deep past. But are they also historical sciences in some deeper sense? In this talk, I will argue that astronomy and cosmology have strong methodological and epistemic similarities with (other) historical sciences such as geology and paleontology.
I begin by considering some standard ways of characterizing historical sciences in the philosophical literature, including the emphasis on trace-based methods (Cleland 2002; Anderl 2016) and narrative explanation (Fox 2021). For example, the cosmic microwave background, sometimes called "relic radiation", is a relatively pristine trace that is informative about the early universe (at the time of recombination). Although astronomers, like other historical scientists, sometimes perform experiments that are informative about historical targets (Boyd 2023; Currie 2018), much of our knowledge in these sciences depends on what we are able to infer from their traces.
The main focus of the talk will then be on two primary case studies in which scientists seem to be dealing with some distinctly historical questions. First, in the case of the so-called "Hubble tension", there is a failure of coherence between precise measurements of H0 based on features of the early and late universe. One thing that scientists must consider in this situation is revisions to the cosmic distance ladder. I point out that this case has strong parallels with revisions of the geological time scale discussed by Bokulich (2020). Second, observed black hole populations raise questions about their evolution. Scientists have thus recently been interested in understanding black hole formation channels: the processes by which black holes form and gain mass over time. This includes explaining the evolution of observed supermassive black holes and binary black hole mergers. This case illustrates further methodological parallels with drawing inferences from, for example, the fossil record.
Overall, these cases highlight the roles of coherence testing, consilience, and epistemic iteration in handling the challenges faced by astrophysics and cosmology qua historical science. My talk will conclude with some discussion of the extent to which we should be optimistic about the philosophical insights to be gained from grouping astronomy and cosmology with other historical sciences.
|
| 14:30 - 15:00 | (University of Cincinnati) Sorting Out Cosmic Time: Missing Convergence in Cosmochronology and a Solution from Narrative-Making
A central topic of twentieth-century astronomy was measuring the age of the universe and dating its components. This so-called "cosmochronology" provides key temporal data for cosmological and galactic models. Cosmochronology involves constructing dynamical models of long-term regular celestial processes, such as the cooling of white dwarfs and the decay of isotopes, and inferring the time elapsed since those objects' creation from their observed state (a worked example of one such chronometer follows this abstract).
As a measurement practice, cosmochronology faces methodological challenges. First, Chang's (2004) circularity problem arises because constructing accurate dynamical models for time measurement conversely requires independent ways of gaining temporal data. This necessitates the cross-validation of several cosmochronological methods. In the philosophy of measurement, convergence of results is expected to mitigate this circularity: obtaining close values of the same quantity using independent methods can remove local artifacts and validate the results and methods (Woodward 2006). That said, cosmochronology's second challenge arises because different methods produce temporal data of different types or quantities. Their convergence on one quantity is therefore unachievable in principle. One cannot directly compare the age of globular clusters, the age of the galaxy in which they reside, and the duration of nucleosynthesis processes inferred from isotope abundances. Cosmochronology is also complicated by general difficulties of astronomy as a historical science, such as the sparsity of evidence and the lack of experiments (Yao 2023).
I provide a philosophical reconstruction of how cosmochronology managed to stabilize as a reliable measurement system starting from numerous discrepant results. While the conditions and rationales for numerical convergence do not apply, I argue that one methodology from the historical sciences provided a practical solution. I call it narrative triangulation. Rather than comparing measurements of the same quantity, multiple measurements of different quantities are situated along a hypothetical storyline to coarsely constrain each other. Narrative triangulation serves two functions. First, it sets up a context in which to connect and compare measurements. Astronomers construct master narratives of key turning points in the universe's history, such as the evolution of large-scale structures, waves of star formation and the development of galactic structures. Measurement results constrain each other within a narrative, and mutually compatible measurements can be identified. Certainly, coarse comparisons within a hypothetical narrative cannot warrant endorsing or discarding methods. This brings us to the second function of narrative triangulation. Instead of evaluating measurements centred on individual quantities, competing storylines serve as the unit of measurement evaluation. In the 1980s and 1990s, the cosmochronological community considered several storylines about an older or younger universe, each including mutually compatible time measurements of different celestial bodies. As measurements proliferated, each storyline absorbed compatible results and discarded marginal ones. Through this process, cosmochronology stabilized around a story that grew larger, finer-grained and increasingly coherent. This explains how the epistemic difficulty of evaluating measurements of individual quantities does not hinder the stabilization of a set of measurements at the community level.
Overall, narrative triangulation explains cosmochronology's success and offers a solution to measurement's general problem of lacking numerical convergence in practice (Stegenga 2012; Bokulich 2020).
|
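To make concrete the kind of single quantity one chronometer delivers (an illustrative aside, not part of the abstract): a radioactive-decay chronometer with half-life t_1/2 converts an initial-to-observed abundance ratio into an elapsed time,

$$
N(t) = N_0\,e^{-\lambda t}, \qquad \lambda = \frac{\ln 2}{t_{1/2}}
\;\;\Longrightarrow\;\;
t = \frac{t_{1/2}}{\ln 2}\,\ln\!\frac{N_0}{N(t)}.
$$

A thorium chronometer (half-life of about 14 Gyr) applied to an old halo star yields a stellar age, while a white-dwarf cooling model yields the age of a stellar population; the two methods deliver different quantities, which is exactly the comparison problem the abstract describes.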
| 15:00 - 15:40 | Coffee
Chair: Phillip Helbig |
| 15:40 - 16:10 | (TU Dortmund) How Disparate Models are Linked: Cosmic Messenger Particles
In astroparticle physics, cosmic rays are measured by means of particle detectors arranged into telescopes. To explain the origin of cosmic rays, the heuristic model of messenger particles is employed to bridge disparate models based on particle physics, astrophysics, cosmology, and thermodynamics. The talk discusses this heuristic model, the way in which it underpins causal stories about single events of cosmic origin detected on Earth, and the question of how the causal stories based on this model can be reconciled with the irreducibly probabilistic aspects of data taking and data analysis in astroparticle physics. To capture the explanatory features of this model, Salmon's causal-mechanical explanation of 1984 is specified in terms of his conserved-quantities approach of 1997. The talk shows how this specified model of causal explanation applies to the IceCube experiment at the South Pole and the explanation of two recent seminal results obtained by it: the signal of a neutrino traced back to a blazar as its most probable source (2018), and the measurement of neutrinos from the Galactic plane (2023). Both results are based on different versions of causal analysis and the corresponding data analysis.
The heuristic conception of messenger particles gives rise to a causal-mechanistic model of energy transfer from a cosmic source to an earth-bound particle detector or telescope. A peculiar feature of this model is that it makes it possible to combine the causal explanation of single events with the irreducibly probabilistic explanation of quantum processes and the statistical methods of data analysis required for them. The talk will discuss how this works. At first sight, the following features of data analysis in astroparticle physics seem incompatible:
(I) The causal-mechanistic model of energy transfer from a cosmic source to the detection of a signal by an earth-bound particle detector supports an individual causal story about the origin of that signal.
(II) The probabilistic methods of data analysis and the quantum processes to which they apply only support probabilistic explanations.
However, both features become compatible if one assumes an epistemic understanding of probability, as will be shown. This understanding of probability accords with the physicists' interpretation of signals in terms of confidence levels (illustrated schematically after this abstract), but also with the fact that the results of quantum measurements allow for a post-factual causal reconstruction of what has happened.
|
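A schematic version of the 'confidence level' talk mentioned in the abstract above (my simplification under assumed numbers, not IceCube's actual unbinned likelihood analysis):

```python
# Schematic significance calculation: how surprising is an apparent point-source
# excess if only atmospheric background were present? The numbers are assumed
# for illustration and are not IceCube results.
from scipy.stats import norm, poisson

mu_background = 5.0   # expected background events in the search window (assumed)
n_observed = 13       # events seen from the candidate source direction (assumed)

# One-sided p-value: probability of >= n_observed events from background alone
p_value = poisson.sf(n_observed - 1, mu_background)

# Conventional conversion to a one-sided Gaussian significance ("n sigma")
significance = norm.isf(p_value)

print(f"p-value = {p_value:.2e}, significance = {significance:.1f} sigma")
```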
| 16:10 - 17:15 | Panel: What can astronomy & cosmology do for philosophy, and vice versa?
Moderator: Antonio Ferreiro (Utrecht University)
|
| 17:15 - 18:00 | Reception
|
| 20:15 - 21:45 (coffee at 19:45) | Public Event co-organised with the Descartes Centre: Observing the Shadow - Interview with Peter Galison (Harvard) on the history, philosophy and culture of black hole imaging
Peter Galison is the Joseph Pellegrino University Professor in history of science and physics at Harvard University. He currently directs an interdisciplinary centre at Harvard, is a member of the Event Horizon Telescope collaboration, and is a founder of the next generation Event Horizon Telescope collaboration. His books include, among others, Objectivity, co-authored with Lorraine Daston. His latest feature film is available on Netflix.
This public event features a screening of a short film by Galison, after which a professor of History of Science and Physics (University of Amsterdam) will interview Peter on his experience with the (next generation) EHT collaboration, the role of history and philosophy in black hole imaging, and the connection between art, philosophy and science. Audience members will get a chance to ask questions after the interview.
Practical information
Date: Wednesday 3 September 2025
Location: Academiegebouw/University Hall - Domplein 29, Utrecht
Time: 19:45 - 20:15 coffee/tea in the Westerdijk room; 20:15 - 21:45 short movie and interview in the Belle van Zuylen room
You are all welcome, but please note there is limited space (70 people)
|
Thursday 4 Sept
| 9:00 - 9:30 | Walk-in with tea/coffee
Chair: Sanne Vergouwen |
|---|---|
| 9:30 - 10:45 | (Jagiellonian & Harvard) Between local and global in philosophy of spacetime
Philosophers often explicitly define what it means to be a local spacetime property, and then state that global properties are all those which are not local. This is somewhat unfortunate, for two reasons: first, it turns out that inequivalent definitions of local properties have been used in the foundations of physics literature, making "global spacetime property" ambiguous. Second, it leaves no place for spacetime properties of intermediate sizes, leading to a disconnect with some areas of theoretical physics. I will discuss some of the ways of defining such quasi-local properties and the sense in which reliance on them can improve our epistemic situation. In the second half of the talk I will discuss some of the philosophical reactions to quasi-local definitions of black holes. Here, I will argue that many of the reasons for resisting them are misplaced.
|
| 10:45 - 11:30 | Coffee
Chair: Antonios Papaioannou |
| 11:30 - 12:00 | (University of Arizona) The Logic of Simulation-Based Cosmology via Cosmic Rarities: A Critical (Re)Assessment
A common method in computational cosmology for testing a cosmological model, such as ΛCDM, is to look for analogs of cosmic rarities in cosmological simulations. If the target rarity is a galaxy cluster believed to consist predominantly of dark matter, a halo-finding algorithm (a halo finder) is typically employed to find its analogs. Among the popular target rarities are the Bullet Cluster and El Gordo, both colliding/merging galaxy clusters with high relative velocity that are believed to be hardly compatible with a ΛCDM cosmology. I examine the logic(s) embedded in this method by surveying two studies focusing on the Bullet Cluster. They utilize different ΛCDM simulations (based on different computer codes) but employ the same halo finder. Thompson et al. [2015] successfully identifies a few analogs and concludes that ΛCDM is not challenged by the Bullet Cluster, while Kraljic and Sarkar [2015] does not and reaches the opposite conclusion. Despite their differing identification strategies, the two studies rely on the same conditional chain to make their respective arguments:
- a conditional between the tenability of the cosmological model being tested ("T") and the reality-resemblance of the cosmological simulation based on it ("R");
- a conditional between R and the producibility of desired analogs ("P").
The arguments are, however, directionally opposite. It then seems that the logical relations between T and R, and between R and P, are determined by the halo finder's outcome, which sounds problematic (the toy enumeration after this abstract makes the asymmetry explicit). The only, rather trivial, solution is to require both conditionals to be biconditionals. That is,
- T if and only if R;
- R if and only if P.
Nonetheless, neither (bi)conditional is well established, which significantly undermines the respective conclusions. I argue that this logic is inherently flawed for the method's intended purpose, and maintain that such studies can hardly be conclusive about the cosmological model being tested. It is also unclear how the outcome, whether positive or negative, is supposed to change our credence in T, which is likely to give rise to further disagreements centred on this method. At most, such studies may yield some potentially useful insight into the behavior of the cosmological simulation in use (and the computer code that generates it) with respect to the target rarity, though the informativeness is likely to be limited. I then show the generalizability of my point by examining several other studies that focus on the Bullet Cluster and/or El Gordo and involve an alternative to ΛCDM called MOND. Not all of these studies use halo finders, but the same issue naturally arises because of their structural and methodological similarity. I also offer an alternative path for simulation-based studies in cosmology, as a guide toward new, unknown physics.
|
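The toy enumeration below (my rendering of the abstract's logical point in plain propositional terms, not the speaker's code) makes the asymmetry explicit: with mere conditionals, a negative halo-finder outcome rules out T, but a positive outcome leaves T undetermined; both inferences go through only once the conditionals are upgraded to biconditionals.

```python
# Enumerate the truth assignments to (T, R, P) compatible with the assumed logical
# links, then ask which values of T survive once the halo finder's outcome is fixed.
from itertools import product

def implies(a, b):
    return (not a) or b

def consistent(T, R, P, biconditional):
    if biconditional:
        return (T == R) and (R == P)          # T <-> R and R <-> P
    return implies(T, R) and implies(R, P)    # T -> R and R -> P

for biconditional in (False, True):
    worlds = [(T, R, P) for T, R, P in product([True, False], repeat=3)
              if consistent(T, R, P, biconditional)]
    t_if_analogs_found = {T for T, R, P in worlds if P}
    t_if_none_found = {T for T, R, P in worlds if not P}
    label = "biconditionals" if biconditional else "mere conditionals"
    print(f"{label}: T given P -> {t_if_analogs_found}; T given not-P -> {t_if_none_found}")
```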
| 12:00 - 12:30 | (University of Milan) Cosmological Simulations: New Epistemological Challenges
Models are frequently discussed in the philosophical literature in terms of approximations, idealizations, and simplifications of the target phenomena. In works such as Cartwright (1983) and Van Fraassen (1980), models are portrayed as non-denotative representations that nonetheless serve as effective tools for explanation and prediction. In this work, I argue that simulations in cosmology operate as a nested structure of "models of models": they start with a theoretical representation of a phenomenon and then incorporate additional approximations of various physical mechanisms. These approximations are introduced at different stages of the modeling process to ensure computational tractability. Each stage is shaped by underlying theoretical assumptions about the phenomena, the relationships between different models, and the constraints of computational methods. I will then discuss two epistemological problems that emerge from such a characterization of simulations in cosmology.
Cosmological simulations (for example Millennium (Boylan-Kolchin et al. 2009), Illustris (Nelson et al. 2015), EAGLE (Schaye et al. 2015)) cannot calculate the physical behavior of their components at every scale, and thus use approximations at multiple levels. For example, instead of calculating the gravitational pull between every pair of particles, gravity is simulated by combining a long-range particle-mesh approach with a short-range hierarchical tree method, allowing for an efficient calculation of gravitational contributions. At the same time, hydrodynamics approximations are used instead of individual particle dynamics to avoid solving differential equations at every point. Finally, physical processes like star formation and black hole accretion occur at scales much smaller than the resolution of the simulation, requiring empirical or semi-analytical models. That is, instead of simulating every detail of how gas clouds collapse to form stars, the simulation might rely on parameters calibrated from observations to determine how quickly stars form (a toy version of such a sub-grid recipe follows this abstract).
All these approximations define a structure of nested sub-models that will produce, for example, the traced distribution of dark matter or the evolution of large-scale structure. However, this nested structure of models (which ultimately constitutes the simulation) brings about a series of epistemological problems worth discussing. First, with respect to the whole nested structure, simulations are validated through retrodiction (Weinberg et al. 2013), that is, by reproducing known phenomena. Thus, while simulations can produce new predictions, it is not possible to validate those predictions. Second, cosmological simulations are nested from "small to big" (Beisbart, Norton 2012), carrying over opacity problems and systematics. For example, the variety of different modeling practices used to account for phenomena that are not resolved by the simulation results in the lack of a unified theoretical framework, posing a challenge to both realism and ontological commitments. In addition, different models introduce different degrees of freedom to match observations (data-driven modeling), rather than deriving numerical values from first principles. This makes cosmological simulations subject to possible underdetermination, thereby making it difficult to isolate which aspects of the overall simulation are empirically supported.
|
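A deliberately toy version of the kind of calibrated sub-grid star-formation recipe mentioned in the abstract above (my sketch; the density threshold and efficiency are illustrative placeholders, not the values of any named simulation code):

```python
# Toy sub-grid recipe: gas above a density threshold forms stars at a rate set by
# its local free-fall time, with an efficiency parameter tuned to observations
# rather than derived from first principles.
import numpy as np

G = 6.674e-8              # gravitational constant, cgs units
RHO_THRESHOLD = 1.67e-23  # ~10 hydrogen atoms per cm^3 (illustrative threshold)
EPSILON_FF = 0.01         # star-formation efficiency per free-fall time (calibrated)

def star_formation_rate_density(rho_gas: np.ndarray) -> np.ndarray:
    """Star-formation rate density (g s^-1 cm^-3) for each gas cell or particle."""
    t_ff = np.sqrt(3.0 * np.pi / (32.0 * G * rho_gas))  # local free-fall time
    return np.where(rho_gas > RHO_THRESHOLD, EPSILON_FF * rho_gas / t_ff, 0.0)

# Example: densities spanning the threshold
print(star_formation_rate_density(np.array([1e-25, 1e-23, 1e-21])))
```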
| 12:30 - 14:00 | Lunch (see bottom of this page for options)
Chair: Antonio Ferreiro |
| 14:00 - 15:15 | (University of Barcelona) Beyond precision cosmology
Cosmology has a standard model: with few parameters, now observationally determined at the 1 percent level, the model describes an impressive array of high-precision observations of the Universe near and far, early and late. This model is known as the Lambda Cold Dark Matter (ΛCDM) model. The ΛCDM model is deeply rooted in 20th-century fundamental physics. As a result, cosmology offers a powerful route to probe the many facets of fundamental physics as a whole. However, there is a big difference between modelling and understanding. Despite this resounding success, the model is ultimately phenomenological: it establishes a robust framework in which some fundamental issues remain unresolved. It is reasonable to expect that the ΛCDM model might be an effective model. The "unreasonable effectiveness" of the ΛCDM model offers challenges and opportunities. With observations becoming increasingly precise, it is reasonable to expect that "something's gotta give" and that the ΛCDM model will show some cracks. Are the practitioners equipped to recognise and deal with potential cracks?
|
| 15:15 - 15:45 | (Stockholm University) Dark Matter Realism Reconsidered
Martens (2020) has recently argued against dark matter realism (see also Vanderburgh 2014, Allzén 2021). The starting point for Martens' argument is the breakdown of scientific realism into a metaphysical, a semantic and an epistemic commitment. The metaphysical commitment is that there is a mind-independent reality; the semantic, that scientific claims have truth values; and the epistemic, that scientific claims are true. The issue, according to Martens, is that the current dark matter concept is too vague to warrant the semantic commitment. Instead, he advocates an "indefinitely suspended realism" (7) about dark matter: while dark matter realism is currently premature, it could be warranted in the future if research makes the dark matter concept more precise.
In response, Vaynberg (2024) has pointed out that the success of ΛCDM does not require any specification of dark matter's particle nature. Specifically, the success of ΛCDM depends on its positing dark matter that behaves as a collisionless fluid, is electromagnetically neutral, and is non-baryonic. These properties, argues Vaynberg, are the properties "in virtue of which [dark matter] is causally connected to the phenomena it is supposed to explain" (6), and they pick out a unique kind. Moreover, dark matter so conceived has convincingly been detected in observations of the Bullet Cluster. As such, realism about dark matter is warranted.
I find Vaynberg's argument convincing, and in line with current scientific understanding. However, it does not address the conceptual thinness of dark matter that Martens has pointed out. In order to defuse these concerns, one needs to distinguish between two different dark matter concepts at play in science, which I call astrophysical dark matter and fundamental dark matter. Astrophysical dark matter is the dark matter concept that emerges out of modern cosmological and astrophysical practice: the dark matter concept that has been detected. This is a thin common-core concept of dark matter (Antoniou 2023, De Baerdemaeker 2021, Martens 2022): it lists the properties that any candidate for dark matter's fundamental nature has to satisfy (particle or otherwise). In contrast, fundamental dark matter is any description of a dark matter candidate with fully specified properties. This means that there are many different versions of the fundamental dark matter concept, ranging from WIMPs to axions and primordial black holes.
I submit, in line with Vaynberg, that anti-realist concerns that dark matter has not been 'directly confirmed' conflate empirical confirmation of fundamental dark matter with empirical confirmation of astrophysical dark matter. Martens is right that it would be premature to be realists about fundamental dark matter. But while the former may remain elusive forever, the latter has been empirically confirmed (Weatherall 2021, Vaynberg 2024). In response to concerns about semantic thinness, I further argue that the resulting realist commitment to dark matter is more substantive than Martens recognizes: the astrophysical dark matter concept excludes certain candidate explanations of dark matter phenomenology (e.g., neutrino dark matter). It also guides further research into fundamental dark matter.
|
| 15:45 - 16:30 | Coffee
Chair: Guy Hetzroni |
| 16:30 - 17:00 | (ENS Paris) Modelling Circularity Present in Dark Matter Detection Experiments
Dark matter detection experiments face persistent challenges in the interpretation and cross-comparison of results. This paper presents an in-depth investigation into the multifaceted use of models within dark matter detection, proposing a comprehensive taxonomy that distinguishes among background theory, theoretical models, phenomenological models, experimental models, and data models. It will be argued that, while the background theory establishes abstract causal relations and mathematical constraints, it does not directly yield testable results. Rather, it provides a necessary backdrop against which more specific models are developed, and these subsequent models must be constructed with autonomous, independently sourced constraints in order to avoid circular reasoning in the interpretation of experimental data.
The paper categorizes the models into five distinct types. Background theory offers the general structural and relational conditions, but without an explicit object domain. The theoretical model, by contrast, incorporates a domain of objects that introduces additional constraints and specifies interaction properties. This is followed by the phenomenological model, which translates abstract relations into a causal narrative by incorporating the specific details of the particles under investigation. The experimental model then adapts these phenomenological constraints to the specifics of detector design and experimental setup in order to mediate between the experimental data and the theoretical framework. Finally, data models are developed from the experimental outcomes with the experimental model in mind, serving either to compare observed results with predictions or to constrain the experimental model, which is then contrasted with the phenomenological one.
A central argument of the paper is that experimental models must be methodologically independent from their theoretical and phenomenological counterparts. In dark matter detection, this independence is crucial because the detectors are tasked not only with discovering whether dark matter exists, but also with finding out the particle's mass. The dual role of the detector complicates interpretation; since similar interaction signals can be produced by particles with different masses and velocities, the experimental model is burdened with disentangling these overlapping parameter spaces. Moreover, because different experiments employ distinct target materials and methods, direct model-independent comparisons across experimental results remain elusive.
In response to these challenges, the paper also examines several alternative approaches that have been proposed in the literature. Competing models offer divergent causal explanations for the astrophysical phenomena that originally motivated the dark matter hypothesis and can accommodate the inconsistent and difficult-to-interpret results that the detectors have captured. Additionally, model-independent methodologies have been advanced to mitigate the reliance on uncertain phenomenological inputs, though these too face limitations due to residual experimental uncertainties.
The paper will conclude that the persistent lack of conclusive dark matter detection, despite substantial indirect evidence from cosmology, underscores the necessity of critically re-examining the hierarchical and autonomous nature of model construction in experimental practice. By clearly delineating the roles and dependencies of various model types, the paper provides new insights into why inconsistencies persist in the field.
|
| 17:00 - 17:30 | (University of Western Ontario) Dark Matter and MOND: A Debate About Ontology or Methodology?
While dark matter represents a crucial component of the standard model of cosmology (ΛCDM) and is needed to explain the behaviour of many astrophysical phenomena, the persistent lack of direct detection of dark matter particles has led some philosophers to question its real existence. Arguments against the dark matter hypothesis often emphasize the existence of an alternative theory of gravity, called MOND, which is intended as a genuine rival to ΛCDM, correctly accounting for galactic phenomena without the need to introduce a new unobservable entity into the theory (Milgrom's basic relation is sketched after this abstract). Arguments made in favour of MOND often emphasize its predictive superiority against the ad hoc nature of ΛCDM.
Drawing on George Smith's analysis of Newton's methodology of gravity research in the Principia, I argue that focusing exclusively on successful predictions obscures a far more complex logic of theory testing, according to which discrepancies between predictions deduced from the theory and observation do not by themselves falsify a theory of gravity. Rather, what falsifies the theory is its inability to iteratively identify "second-order phenomena", namely robust physical sources that are responsible for the deviation from the calculations.
One important requirement of this approach is that, once a second-order phenomenon is identified, it must be incorporated back into the idealized calculations of the theory and the process repeated iteratively: subtler discrepancies result, for which additional, subtler sources must be identified and clearly isolated. The proposed sources should not merely be postulated to resolve the discrepancy in question: they must lead to further detectable differences in other phenomena that can be isolated and investigated, thereby allowing the process of identifying ever subtler physical difference-making details, along with the differences they make, to continue iteratively. Discrepancies thus become an important tool for gaining epistemic access to the physical world when they lead to the continuous, iterative discovery of second-order phenomena.
The main goal of my talk is to relate Smith's logic of theory testing to the contemporary MOND/dark matter debate. Drawing on recent work by Chris Smeenk and James Weatherall, I will argue that while the inference to dark matter conforms to this logic, MOND's modification of Newton's law of gravity does not. Therefore, the prominent challenge that MOND leverages against ΛCDM consists not in a rejection of the latter's ontology but, more precisely, in a rejection of the evidential reasoning the model uses to establish functional relationships among diverse phenomena. I conclude by pointing out that, in the absence of direct evidence, the MOND/dark matter debate should focus on whether accepting dark matter into cosmological research has the potential to enable further access to regimes not susceptible to direct experimentation, rather than on the question of its real existence.
|
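For readers unfamiliar with the modification at issue, the standard Milgrom relation that MOND adds to Newtonian gravity (quoted as background, not as part of the talk) is

$$
\mu\!\left(\frac{a}{a_0}\right) a = a_N, \qquad
\mu(x)\to 1 \ (x\gg 1), \quad \mu(x)\to x \ (x\ll 1), \qquad
a_0 \approx 1.2\times 10^{-10}\ \mathrm{m\,s^{-2}},
$$

so that in the deep-MOND regime a ≈ (a_N a_0)^{1/2}, which yields asymptotically flat rotation curves without invoking unseen mass.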
| 17:30 | Group photo |
| 19:00 | Conference Dinner, Maskeradezaal |
Friday 5 Sept
| 9:30 -10:00 | Walk-in with tea/coffee
Chair: Gauvain Leconte-Chevillard |
|---|---|
| 10:00 -10:30 | (University of Rennes) Understanding and evaluating exploratory modelling in high-uncertainty contexts
Astrochemistry, like other astrosciences, studies domains where the theoretical background can be lacking and observations can be sparse and uncertain, making it difficult to constrain model development. In these high-uncertainty fields, agreement between models and observations does not always indicate a genuinely representative relationship, and models can fail to provide genuine understanding of the target system. To address these challenges, scientists have begun using a class of models and modelling approaches that are considered exploratory. Unlike traditional models that prioritise prediction and alignment with observations, exploratory models and approaches focus on deepening the understanding of the models themselves and consequently facilitating a new avenue of epistemic access to the target system. Given the scarcity of observational constraints, the way these models iteratively develop becomes a central issue in ensuring their epistemic claims remain credible. Evaluating this process, however, even under an adequacy-for-purpose view, is difficult without empirical criteria, especially when the end goal is to gain insights into the target system under consideration. This situation is exacerbated when the models rely on a significant number of estimations because of computational and experimental costs, resulting in a loss of accuracy and the oversight of potentially physically significant processes.
Despite these difficulties, exploratory models and approaches are essential for attaining knowledge in all the astronomical sciences, and understanding how they make iterative progress is therefore essential. I will discuss the justification for using these models and the context in which they are applied, characterise their key features, and consider how to tackle the question of model evaluation for these approaches. For this talk, I will focus on chemical network models of astronomical sources, as they present an interesting case study in which observations do not provide the whole picture, yet researchers can explore the model and its consequences by estimating the initial conditions and propagating them forward in time (a toy network of this kind is sketched after this abstract). I will compare two specific examples, the relatively simple though observationally inaccessible Big Bang Nucleosynthesis (BBN) network and the more complex chemical network of a protostellar disc, to highlight the differences and shared challenges in their development and evaluation.
|
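A toy network of the kind alluded to in the abstract above (my sketch; the species, rate coefficients and timescale are placeholders rather than values from any published astrochemical or BBN model):

```python
# Toy chemical network: estimate initial abundances, propagate them forward in
# time, and inspect the outcome -- the workflow the abstract describes.
import numpy as np
from scipy.integrate import solve_ivp

k1 = 1e-9   # A + B -> AB formation rate coefficient (cm^3 s^-1, illustrative)
k2 = 1e-13  # AB -> A + B destruction rate (s^-1, illustrative)

def network(t, n):
    nA, nB, nAB = n
    form = k1 * nA * nB
    destroy = k2 * nAB
    return [-form + destroy, -form + destroy, form - destroy]

n0 = [1e4, 1e4, 0.0]        # estimated initial number densities (cm^-3)
t_span = (0.0, 1e13)        # roughly 3 x 10^5 yr, in seconds
sol = solve_ivp(network, t_span, n0, method="LSODA", rtol=1e-8)

print("final abundances (A, B, AB):", sol.y[:, -1])
```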
| 10:30 -11:00 | (University of Rennes) Tuning, Ad Hoc Hypothesizing and the Fate of Astrophysical Modelling
The current state of astrophysics, at least with respect to the question of structure formation in the universe, offers an interesting perspective on the question of underdetermination, at least as it relates to domains where few constraints guide the modelling process. Understanding how structure forms, from galaxies to cosmic filaments, is a key question for discriminating between rival cosmological models, especially rival dark matter models with self-interacting, fuzzy, warm or cold dark matter. Given the enormous dynamic range of processes that must be covered and the non-linear nature of the mathematics involved, addressing this question could not be done without relying on computer simulations, which back in the 1970s included only a few hundred particles, Newtonian dynamics and numerical parameters supplementing the approximations made. As discrepancies between simulations and observations were discovered, adding a new dimension to the model became the almost automatic response, based on the assumption that these disagreements all stemmed from having neglected some eventually relevant physical process. Most often, however, the latter would be implemented through parameters constrained solely on the basis of empirical considerations, i.e., tuned to reproduce observations. Such responses to discrepancies have resulted in a brand-new kind of artificial underdetermination, whose origin ultimately reduces to fine-tuning. In this talk, I argue that this underdetermination problem is nothing but the other side (the "modelling" side) of a well-known coin, that of using ad hoc hypotheses to preserve a theory from refutation, and that in both cases escaping discrepancies between theoretical expectations and observations without in any way increasing or widening the empirical content of the theory should be forbidden. This adhocness razor not only breaks cases of artificial underdetermination, but also makes it possible to step back from the race to ever more comprehensive representations of the target system, always more complex, always at higher resolution, by offering grounds for determining what constitutes an adequate level of complexity for different modelling scenarios. The mathematical concept of an effective dimension, as defined by Puy et al. (2022), suffices to fully flesh out and implement our razor. The last part of the talk will present the significant successes that applying such a rule would have in constraining and sorting out legitimate responses to the aforementioned, so-called "Small-Scale Challenges" in astrophysics, as well as in providing much-needed guidelines for model building in contexts of high uncertainty.
|
| 11:00 - 11:40 | Coffee
Chair: Aude Corbeel |
| 11:40 - 12:10 | (University of Geneva) Solve to sort: a new classification of spacetime singularities
Spacetime singularities raise important questions within the foundations of general relativity (GR). One such question concerns their classification. While physicists and philosophers of physics agree that there are different types of singularities, they do not agree on how to classify them. The currently canonical classification by Ellis and Schmidt (1977) is based on mathematical differences between types of singular structures. An important, as much as overlooked, concern is that this categorization lacks physical significance because it fails to distinguish physically different types of singular structures (Curiel, 1999; Taub, 1979). After systematically pointing out the shortcomings of Ellis and Schmidt's classification, I propose a new classification of spacetime singularities based on how they can (or cannot) be solved.
There is a growing body of evidence in research programs in quantum gravity (QG) suggesting that spacetime singularities may be eliminated (e.g. Bojowald 2001; Mathur 2005). This supports the expectation that spacetime singularities disappear in a theory more fundamental than GR. However, not all of them need to be, or can be, solved merely by quantum-gravitational effects (Horowitz and Myers, 1995; Natsuume, 2001; Singh, 2009). After presenting the main methods and mechanisms implemented to get rid of spacetime singularities, I argue that we should use singularity resolution as a criterion for classifying spacetime singularities. I provide an example of such a classification based on the idea that different types of singularities are associated with different ways in which they can be solved.
The last section of the paper outlines the advantages of this proposal. In particular, the new classification offers a straightforward and physically relevant understanding of the differences between the various types of singularities. In this respect, my suggestion overcomes the shortcomings of the extant canonical classification. Moreover, this work engages with the debates on the nature of spacetime singularities and their fate in QG. More specifically, it draws a clear-cut distinction between spacetime singularities that are solved only by resorting to QG and other types of singularities, by highlighting how only a few features are relevant for singularity resolution in QG. This also provides new perspectives for discussing expectations of and attitudes toward spacetime singularities (Earman, 1996; Ellis et al., 2018; Crowther and Haro, 2022). I conclude by defending the suggestive idea that the key to enhancing our understanding of spacetime singularities may lie in the physics beyond GR.
|
| 12:10 - 12:40 | (University of Geneva) & (University Institute Sophia) A Taxonomy of Black Hole Ontologies Through the Firewall Paradox
Black holes are a central object of study in contemporary theoretical physics, among other reasons because they might hold important clues for the search for a quantum theory of gravity. One of the reasons why black holes are expected to have something to say about quantum gravity is the information loss paradox, which involves the apparent failure of unitarity in the context of black hole evaporation. Of particular importance in recent developments regarding the black hole information loss paradox is the firewall paradox in its manifold incarnations. The firewall paradox, at its core, concerns the incompatibility between unitarity, semiclassical gravity, and standard assumptions about entanglement in the context of black hole evaporation, and has thus far led to wide-ranging insights into the physical and conceptual structure of quantum black holes.
In this talk, our goal will be to propose an analysis of the basic conceptual structure of the firewall paradox, and to argue that, through our analysis, one can classify different solutions to the paradox in terms of certain basic assumptions about black holes that they abandon. As an upshot of this, we will show that these different approaches to black hole physics correspond to different philosophical and ontological perspectives on black holes. In particular, we will see that solutions to the firewall paradox can be classified into three different routes corresponding to: (1) regarding black holes as emergent from an underlying ordinary quantum system (what the physics literature calls the Central Dogma of black hole physics); (2) regarding black holes as nothing more than a collection of degrees of freedom sitting on a membrane at the event horizon; and (3) an eliminativist position regarding black holes, according to which they should not be part of physical ontology.
After mapping these three views to the different ways of solving the firewall paradox we will discuss how, by appropriately reformulating their basic tenets, they can be unified into a single framework to understand quantum black holes. In particular, we will show how, starting from the central dogma, views (2) and (3), appropriately reformulated, can be recovered as different limits of a single underlying quantum theory, defined by the central dogma. Hence, the quantum system defined by the central dogma gives the fundamental ontology of the black hole, while the membrane and eliminativist views provide different, derivative descriptions of the same underlying fundamental quantum system.
Overall, we take our analysis to provide a new and interesting perspective on the structure of the firewall paradox and the ontological picture afforded by its different solutions. Moreover, our work points the way forward towards a unified picture of the ontology of quantum black holes in light of recent work on firewalls.
|
| 12:40 - 14:10 | Lunch (see bottom of this page for options)
Chair: Jamee Elder |
| 14:10 - 14:40 | (University of Namur) Major Transitions in the Evolution of the Universe: The Case of the Inflation-Reheating Transition
Cosmology is often portrayed as a natural historical science (Lambert & Reisse 2008, 163; Butterfield 2014, 58; Pearce 2017). Scientific and popular accounts of our current understanding of the universe frequently include a "chronology" divided into distinct epochs or eras (Planck era, inflation epoch, matter-dominated era, etc.). Yet this periodization is not without puzzles. Some epochs overlap (e.g., the electroweak epoch and the inflationary era); some are defined by physical processes (like nucleosynthesis), others by dominant energy contributions (like radiation or matter domination); some last for mere instants, while others span eons. This raises several issues. Methodological questions include, notably: How are the boundaries of a cosmological epoch determined? Is there a unified criterion for circumscribing a cosmological era, or are eras characterized in different ways? Related ontological questions arise as well: What occurs during the transition between epochs? Are these transitions smooth and continuous, or do they involve threshold events marking clear ruptures?
This talk addresses these questions by drawing a comparison between cosmology and another natural historical science: evolutionary biology. In that field, Maynard Smith and Szathmáry (1997) introduced the concept of "Major Transitions" to describe eight pivotal events in the evolution of living beings (e.g. eukaryogenesis or the appearance of multicellular organisms): episodes that involve fundamental changes in biological structure, complexity and ecological systems. I begin by stressing important differences in how scientists study the evolution of life and the evolution of the Universe, differences that reflect the fact that cosmology is not a natural historical science in the same sense as evolutionary biology, paleontology, or geology (Cleland 2002, 2011; Currie 2024). However, I argue that the conceptual distinction between minor and major transitions can be transferred from evolutionary biology to cosmology, and that it is useful for identifying events in cosmic evolution that stand out because of the radical novelties they introduce, novelties that require new theoretical constructs and new methodologies to describe the behavior of the universe.
As a case study, I focus on one of the leading hypotheses in primordial cosmology, according to which the very early universe underwent a rapid phase of accelerated expansion known as cosmic inflation. Inflation ended approximately 10⁻³² seconds after the expansion began, through a process known as reheating, during which the inflaton field decayed into the particles of the Standard Model. I argue that this transition exemplifies what some philosophers (Humphreys 2016; Guay & Sartenaer 2016; Fletcher 2021) describe as diachronic emergence: although the particles of the Standard Model emerged on the basis of the inflaton field, they nonetheless represent something fundamentally new that dramatically changed the behavior of the universe. This makes the transition from inflation to reheating a major transition and justifies the distinction between the inflationary epoch and the cosmic eras that followed.
In conclusion, I suggest that this approach can be fruitfully applied to identify other major transitions in the chronology of the universe, and to clarify the extent to which cosmology can truly be considered a natural historical science.
|
| 14:40 -15:10 | (Monash University) Imagery, Imagination, and Understanding in Astrophysics
Scientific articles, for instance in the field of astrophysics, are often filled with a variety of images. In philosophical studies, these images are usually analyzed in terms of their function within the scientific argument presented in the article. However, not all images found in astrophysical articles are relevant to the scientific argument, which prompts the question of why they are included in the first place. Using the example of the so-called "Stellar Graveyard" plot, I argue that the work of Letitia Meynell (2017, 2018, 2020) provides a valuable description of this kind of imagery. That is, there are images used in the astrophysical literature that may not be necessary for the scientific argument being made, but that function as an aid to the visual imagination of the reader. Such aids can help with mentally visualizing certain spatial configurations and the causal relationships within them, ultimately allowing the scientist to imagine hypothetical "what-if-things-had-been-different scenarios", which Meynell (2020) relates to scientific understanding. Moreover, according to the work of Henk de Regt (2017), understanding a scientific theory entails grasping its "qualitatively characteristic consequences". The visual imagination can be a useful tool for considering the qualitative consequences of a theoretical model, effectively furthering scientific understanding through a kind of mental simulation. Throughout astrophysical articles, imagery that plays the role of aiding the imagination is prevalent (e.g. "Van den Heuvel diagrams" of binary evolution). After all, astrophysics is a historical science as opposed to an experimental one (Cleland 2002), and figuring out the astrophysical processes that have produced certain signals (e.g. gamma-ray bursts or gravitational waves) is often not straightforward. The astrophysical method, which can be described as "saving the phenomena" (e.g. Hacking 1989), often involves reconciling a theoretical model with observation, where the accuracy of the fit provides confidence in the underlying astrophysical model. The consideration of hypothetical models and scenarios (cf. Meynell 2020) is useful in this search for models that could fit observations (and therefore provide scientific explanation), which is why the imagination can play an important role not only in science in general but specifically in astrophysics. Astrophysicists might therefore choose to use figures in their articles that are not necessary for the scientific argument but are instead meant to spark the imagination of the reader, as exemplified by figures such as the Stellar Graveyard plot and Van den Heuvel diagrams.
|
| 15:10 - 15:45 | Coffee
Chair: Niels Martens |
| 15:45 - 17:00 | (Harvard University) Philosophy in Space: The Photon Ring and the Chaoticity of Spacetime
In 2019, the Event Horizon Telescope imaged the supermassive black hole M87* in the Virgo Cluster. Building on a century-long history of black hole investigations, the resulting photograph of billion-degree gas circulating outside the horizon helped lock in a community-wide acceptance of supermassive black holes as real. Within the glow of this doughnut of billion-degree plasma lies the delicate trace of the thin, bright photon ring: pure light set in orbit around the black hole. Einstein, a century ago, taught us that light traces out the geometry of spacetime; here, this ring (in fact a succession of rings) tells us everything about the black hole, not just its mass but also, and for the first time, a direct measure of its spin (the basic relation is sketched after this abstract). In this paper, I want to explore in some detail, visually, what the photon ring reveals about spacetime near the horizon, strictly speaking an edge of the universe. We see here not the chaos of Mandelbrot-like fluid flow, but chaos in the fabric of spacetime itself. I will conclude with some reflections on electrodynamics, relativity, and what we want from scientific understanding.
|
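As background to the claim that the ring encodes both mass and spin (a standard relation from black hole optics, not part of the abstract): for a non-rotating, Schwarzschild black hole the bound photon orbit and the apparent ring radius are fixed by the mass alone,

$$
r_{\mathrm{ph}} = \frac{3GM}{c^2}, \qquad
b_c = 3\sqrt{3}\,\frac{GM}{c^2} \approx 5.2\,\frac{GM}{c^2},
$$

whereas for a rotating, Kerr black hole the photon ring's diameter and shape depend on the spin and viewing angle as well, which is why a sufficiently sharp measurement of the ring constrains spin and not just mass.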