When Simplicity Gets Complicated
The role of simplicity as a virtue in theory selection is ironically complicated. Philosophers generally agree that simplicity is a virtue (though some seriously question this!), but they do not agree on how to assess it. One question, for example, is: Where does simplicity matter most?
Some philosophers think we should follow Occam’s Razor when engaging in metaphysical theorizing, meaning we should not multiply entities beyond necessity. At first glance, this seems straightforward—the more entities a theory posits, the less likely it is, right? However, there are difficulties with this view. First, there’s the matter of counting. Should we count only new kinds of entities, or do we include individual instances as well? And what about cases where one person posits just one more entity than someone else, when both already posit an enormous number of them? For example, suppose Jack posits 5,000,000 particles, whereas Jane posits only 4,999,999. It seems odd to think that Jack’s theory is significantly less simple than Jane’s. However, if Jack posits twenty-five controlling deities at the fundamental level of reality and Jane posits only one, then it does seem like Jane’s theory is simpler in a significant respect.
Jonathan Schaffer makes this point, and it is one reason he proposes replacing The Razor with The Laser, which states that we should not multiply fundamental entities beyond necessity. If you’ll excuse an extended quotation, Schaffer provides a thought experiment to motivate The Laser:
“So, in order to display how The Razor and The Laser can pull apart, imagine that Esther posits a fundamental theory with 100 types of fundamental particle. Her theory is predictively excellent and is adopted by the scientific community. Then Feng comes along and—in a moment of genius—builds on Esther’s work to discover a deeper fundamental theory with 10 types of fundamental string, which in varying combinations make up Esther’s 100 types of particle. This is intended to be a paradigm case of scientific progress in which a deeper, more unified, and more elegant theory ought to replace a shallower, less unified, and less elegant theory. Feng’s theory is evidently better in every relevant methodological respect. Yet if one counts by total entities, as per The Razor, one will get the case of Esther and Feng backward. For Esther’s total ontology is actually a proper subset of Feng’s. Feng believes in everything Esther believes in (both token-wise and type-wise): he believes in her particles, the atoms they compose, the chemicals they comprise, and the organisms they form, etc. Plus, Feng believes in more: he also believes in the strings underneath it all (he believes in these types and in their tokens). So, by the lights of The Razor, Feng’s theory is an affront to ontological economy for positing these additional strings. It is to be strongly dispreferred, all else being equal. This is obviously backward, as far as sound methodological counsel is concerned. Feng’s theory is obviously no affront to ontological economy but—when judged purely by the methodological virtues—is evidently a more economical, tighter, and more unified improvement. It is The Laser that gets this right.”1
I largely agree with Schaffer here, especially regarding his eventual “bang for buck” proposal, which suggests that one should optimize the balance between minimizing fundamental entities and maximizing derivative (non-fundamental) entities, particularly useful ones.
Still, there are concerns with Schaffer’s proposal, including the issue of overgeneration, which he addresses in his article along with other difficulties (I direct the interested reader there for his defense).
But to offer my own perspective, consider the following:
Classical theism posits that at the fundamental level of reality, there is one absolutely simple entity—God—who is omnipotent, omniscient, and perfectly good. In conjunction with traditional principles, such as the idea that the good is naturally self-diffusive and the principle of plenitude, this one entity anticipates an enormous hierarchy of being, from the lowest bits of physical reality to an entire choir of angels.
From a “bang for buck” perspective, it’s hard to see how classical theism can be beaten. However, some might argue that including things like angelic entities adds (costly) complexity. After all, wouldn’t everyone agree that adding ghosts, fairies, or pixies complicates a proposal in a way that harms its likelihood? Wouldn’t they?
Well, maybe. On the one hand, it does seem problematic to add ghosts or fairies to the mix willy-nilly. However, the reason why isn’t immediately obvious. Why is adding one more squirrel, or even one thousand more squirrels, not costly, while adding a single ghost is? Is it because a new kind is introduced? (Another complication—indeed, a notorious one—is how we define or categorize kinds: Is a new species of plant considered a new kind? What about a new fundamental particle?) Or does the issue have less to do with economy and more with explanation? This is where I think Schaffer’s emphasis on “especially useful” derivative entities comes in, highlighting the obvious point that other things are not always equal. In other words, perhaps the reason people protest at a commitment to ghosts is that they don’t see ghosts as useful or as doing any important theoretical work, or they think there is a better explanation for why people talk about such things apart from their actually existing (pranks, creaky pipes, cultural memes, etc.). But maybe that’s wrong. While I’m not particularly into ghost stories, perhaps some accounts genuinely defy explanation without ghost-like entities. If that’s the case, then we shouldn’t be so hostile to including ghosts, just as we shouldn’t be hostile to including something like the soul if it proves necessary to make sense of certain phenomena.2
Another consideration is that for ghosts to be included in a worldview without adding costly complexity, there should be reasonable justification for their existence based on the theory’s core components or other supporting evidence. Without such justification, their inclusion likely becomes an independent assumption that increases complexity in a costly way. Angels, by contrast, have long been argued to be a natural consequence of classical theism, given the assumption that God creates, and many believe there is evidence for their existence as well (though I understand some make similar claims about ghosts; I just haven’t explored those accounts—I’m open to it!). Classical theism anticipates a higher realm of purely spiritual entities that play significant theoretical roles, from explaining the unfolding of the cosmos (God delegates roles in creation) to addressing the problem of evil (not all delegates are benevolent), and more. While many will reject the necessity of these entities, others argue they are quite indispensable. The point is that assessing simplicity is challenging—extremely challenging—especially with provisos like “all else being equal,” “useful entities,” and so forth, because the issue is deeply intertwined with complex explanatory considerations.
Ultimately, these are just some of the considerations that incline me to think that what matters most, with respect to simplicity, is what is happening at the fundamental level. If nothing else, it seems clear that fundamental entities are more costly than non-fundamental entities. Perhaps one doesn’t need to go as far as Schaffer and say all non-fundamental entities are an ontological free lunch; perhaps one could say they cost far less than fundamental entities—only a penny compared to a dollar, or something like that. And it costs even less if one introduces another individual instance but not another kind. I have no idea how to quantify this rigorously, but maybe we don’t have to. Even if we say that adding just one more particle is costly, we can intuitively see that it is very, very cheap—so cheap it’s not worth fussing over, especially if some other theory already has millions of them. We can also see that one hundred fundamental entities are very costly indeed if another theory posits just one that can explain the hundred entities of the previous theory. This holds true—obviously so, I think—even when that one fundamental entity is in addition to the prior hundred, provided it offers a more unified account. (If that’s correct, then in the broader theist vs. naturalist debate, even if the theist incurs costs regarding the total number of entities, that cost is more than offset by the economy gained at the fundamental level, or so I would argue.)
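Though I just said I don’t know how to quantify this rigorously, a toy tally can at least make the penny-versus-dollar picture vivid. In the following sketch, the function and every weight are hypothetical, chosen only to encode the ordering suggested above, and are not offered as a serious metric:

```python
# A toy "weighted ontological cost." Purely illustrative: the weights are
# hypothetical and encode only an ordering, with fundamental kinds costing
# far more than extra tokens or derivative posits.

def ontological_cost(fund_kinds, fund_tokens, deriv_kinds, deriv_tokens):
    """Toy simplicity cost: a 'dollar' per fundamental kind, a pittance for the rest."""
    return (1.0 * fund_kinds        # a dollar per fundamental kind
            + 1e-6 * fund_tokens    # next to nothing per extra token
            + 0.01 * deriv_kinds    # a penny per derivative kind
            + 1e-8 * deriv_tokens)  # even less per derivative token

# Jack vs. Jane: one extra particle (a token of an existing kind) barely registers.
jack = ontological_cost(fund_kinds=1, fund_tokens=5_000_000, deriv_kinds=0, deriv_tokens=0)
jane = ontological_cost(fund_kinds=1, fund_tokens=4_999_999, deriv_kinds=0, deriv_tokens=0)
print(round(jack - jane, 6))  # ~1e-06: negligible

# Esther vs. Feng: Feng posits 10 fundamental string kinds and demotes
# Esther's 100 particle kinds to derivative status.
esther = ontological_cost(fund_kinds=100, fund_tokens=0, deriv_kinds=0, deriv_tokens=0)
feng = ontological_cost(fund_kinds=10, fund_tokens=0, deriv_kinds=100, deriv_tokens=0)
print(esther, feng)  # 100.0 vs. 11.0: Feng wins despite the larger total ontology
```

Nothing hangs on these particular numbers; any weighting in this spirit leaves Jack and Jane effectively tied while still scoring Feng’s deeper theory as far cheaper than Esther’s.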
But wait, there’s more! Here’s a consideration from Alex Pruss:
“Consider two theories.
Theory 1: There is a single asexually reproducing ancestor of all life on earth, Ag, who came into existence at time A and split into two almost genetically identical descendants, Bof and Bok, at time B, and there was no biological life before Ag.
Theory 2: There is a pair of almost genetically identical ancestors of all life on earth, Bof and Bok, who came into existence at time B, and there was no biological life before Bof and Bok.”3
As Pruss points out, Theory 2 is simpler insofar as it posits fewer entities overall, but Theory 1 has greater explanatory merit by making sense of the genetic similarity. However, Pruss contends that Theory 1 is actually virtuously simpler because, when considering simplicity, we should only count the entities or kinds not explained by the theory itself. In this case, Theory 1 is committed to just one theory-unexplained entity, whereas Theory 2 posits two. Pruss concludes that Theory 1 wins on both explanatory and simplicity grounds.
This assessment seems to map well onto our intuitions on the matter, and if that’s right, positing angels won’t hurt the theistic theory much, if at all.
To further illustrate, Pruss offers another example:
“Theory 3: A meteorite deposited some organic chemicals 4.1 billion years ago that combined to produce life.
Theory 4: A meteorite deposited some organic chemicals 3.9 billion years ago that combined to produce life.
As far as the details given, there’s no difference in complexity between the two theories. But Theory 3 commits us to many more organisms in the history of the earth—200 million years’ worth of organisms. And when we consider evolutionary theory, Theory 3 would commit us to significantly more kinds of organisms as well—over those 200 million years, surely there would be many more species. However, these added entities (or kinds) shouldn’t be counted as increasing the complexity of Theory 3 over Theory 4. Why not? The best explanation seems to be that these added entities and kinds are easily explained by the theory in question.”4
Pruss has a follow-up defense, but what’s been reported so far should be sufficient for our purposes. Again, all this suggests that what is happening at the fundamental level is most important for simplicity considerations, where something is a fundamental commitment if it is not explained by anything else within that theory.5 Some argue that something is fundamental if it is left unexplained, which is generally acceptable, I think, though I would like to leave open the possibility that something—perhaps only one thing—can explain itself fully (namely, God).
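To see Pruss’s counting rule in miniature, one might represent a theory as a mapping from each posited entity to whatever explains it within the theory (or to nothing, if the theory leaves it unexplained) and then count only the unexplained posits. This representation is my own illustrative choice, not Pruss’s:

```python
# Sketch of the counting rule: tally only the posits a theory leaves
# unexplained. The dictionary representation here is illustrative only.

def unexplained_count(theory):
    """Count posits with no within-theory explainer (marked None)."""
    return sum(1 for explainer in theory.values() if explainer is None)

# Theory 1: Ag is unexplained; Bof and Bok are explained by Ag's splitting.
theory_1 = {"Ag": None, "Bof": "Ag", "Bok": "Ag"}

# Theory 2: Bof and Bok are both left unexplained.
theory_2 = {"Bof": None, "Bok": None}

print(unexplained_count(theory_1))  # 1: simpler by this measure
print(unexplained_count(theory_2))  # 2, despite positing fewer entities overall

# The meteorite case works the same way: the extra 200 million years' worth
# of organisms under Theory 3 are explained by the theory (via evolution),
# so they never enter the unexplained tally.
```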
Finally, consider how what occurs at the fundamental level can lead one to reinterpret or reconceptualize what happens at the non-fundamental level, potentially affecting the overall simplicity of an account either positively or negatively. Suppose, for example, the naturalist posits at the fundamental level a rather large collection of physical entities with certain arbitrarily limited properties. These are brute and unexplained. In contrast, the theist posits God—just one entity, with no properties (as understood in the context of divine simplicity) and no arbitrary limits on power, knowledge, goodness, etc.—who explains what the naturalist holds as fundamental. Now, it’s true that the theist has everything the naturalist does, sort of, plus something extra. Sort of, because the matter is more complicated than that, for the following reason: the naturalist has brute, unexplained contingencies or necessities or both, not to mention all that arbitrary complexity at the fundamental level, which seems hugely problematic regarding intrinsic likelihood. The theist, on the other hand, avoids the issue of arbitrary complexity at the foundational level and escapes the commitment to brute or unexplained contingencies or necessities by positing an entity capable of explaining everything, including itself. This, traditionally, is the very concept of God: the ultimate self-explained (not self-caused) explainer of everything, who must be a very special sort of entity—simple, unbounded, and perfect (see my book for the argument on this).
I make no claim to having exhausted this complicated subject; my aim was simply to provide a few considerations on why I believe simplicity matters most at the fundamental level and why having more entities overall doesn’t necessarily lead to a vicious complication. For further thoughts on the matter, I recommend the debate book between Kenny Pearce and Graham Oppy, where they each assess which theory—theism or naturalism—is simpler according to different accounts of simplicity (ontological, theoretical, ideological). Joshua Sijuwade has a good paper on the matter; Matthew Adelson offers helpful thoughts over at his blog, as does Joshua Rasmussen. That’s probably enough links for now—I wouldn’t want things to get too complicated ; )
1. https://philpapers.org/rec/SCHWNT-2
2. Or, to offer another example: being an essence realist is probably more complicated than being an essence nominalist, but this bothers me very little, since I think essence nominalism is inadequate.
3. https://alexanderpruss.blogspot.com/2012/11/simplicity.html
4. Ibid.
5. At the risk of further complicating the simplicity debate, consider another point from Pruss, which supports the idea that simplicity is not merely a tie-breaker. While I’ve often said simplicity acts as a tie-breaker (and it does), that doesn’t mean it’s merely a tie-breaker. Pruss’s example aims to show that simplicity can be a decisive factor even when theories don’t perfectly tie in their fit to the data—assuming such a perfect tie ever actually happens. Another difficulty, of course, is the lack of any clear, universally agreed-upon standard for what it means for theories to “equally fit” the data. My hunch is that in Pruss’s competing examples, most people would perceive the theories as equally fitting the data, even if this isn’t technically true—close enough to make the point.
However, this illustration would be misleading—at least within the context of metaphysical theorizing, which is our main concern—without mentioning something like Nancy Cartwright’s view in How the Laws of Physics Lie. Cartwright suggests that scientific theories are often idealizations—simplified models that don’t exactly describe reality but make the world more manageable and comprehensible, especially for prediction and technological development. She argues that the laws of physics are effective not because they are literal truths but because they are useful tools within specific contexts. This context plausibly underscores the importance of simplicity in certain areas of scientific theorizing: simpler models are more practical and broadly applicable, even if a more complex theory might fit the data slightly better in some respects. Given that such theories are often idealizations, simplicity becomes a practical consideration, if not a necessity. This is especially true, according to Cartwright’s interpretation, since a particular scientific model presents an idealized version of a concrete situation, deliberately omitting complicating features to facilitate its use, rather than because it better corresponds to reality. Whether this commits one to an instrumentalist rather than a realist view of science is an important discussion, but definitely one for another time.
Either way, our concern extends beyond simplicity in scientific theorizing, since in metaphysical theorizing, the aim is not to create idealizations for practical use but to uncover the actual (true!) structure of reality. Metaphysics is not (contemporary) physics!