A Response to a Comment
I’ll stop being so skeptical that we can solve our resource problems by going to outer space when and if people start to colonize Antarctica.
I mean, there are lots of minerals in Antarctica! It’s an entire continent that has never been exploited by human miners before, and, IIRC, parts of Antarctica are a geological continuation of some of those fabulously mineral-rich strata in South Africa. And it’s much, much more convenient than space — the fuel cost for getting to Antarctica is pretty negligible, you don’t need to bring your own air and water with you, and there’s no health problems resulting from prolonged exposure to zero-gravity. Antarctica is cozy compared to space. So, if you’re going to argue that, Any Day Now, our Glorious Future in Space will arrive and we’ll start mining asteroids (and it better be literally any day, time is running out), you’ve got to explain why we haven’t densely settled Antarctica yet.
(P.S. Waiting until the glaciers all melt doesn’t count.)
I would like to thank Pure Americanism for this comment, and I want to discuss it in some detail. Firstly, however, I want to clear up a potential ambiguity. While the article on which I was commenting was discussing asteroid mining in the relatively short-term future (the author quotes a source saying, “It’s ‘going to happen much sooner than I think a lot of people realise,’ Lewicki insists. ‘We’re not decades away… there’s [sic] companies ready to do this now’.”), my comments are primarily concerned with a longer-term future.
In the comments from Pure Americanism above, we find, “Any Day Now, our Glorious Future in Space will arrive,” so I wanted to mention that the author of the article might be interpreted as speaking in terms of “any day now,” but I was not. In fact, in my comments I explicitly observed that well-intentioned legislation is likely to slow down the process of human expansion off the surface of Earth.
The time scales employed in futurism are important. This should go without saying, but I think a reminder is in order. Recently I wrote this to a couple of friends:
I have come to think that the spectacular failures of earlier futurism – the sort of thing now ridiculed in books like Wasn’t the Future Wonderful? – were at least in part futurists allowing themselves to be manipulated by the news cycle, which makes certain demands of relevancy upon those who embark upon a career as a public intellectual. Thus there is great pressure to make forecasts for what the world will be like in 25 years, because many people expect to live 25 years, and very little interest in predictions for 250 years, because no one expects to be around that long. Thus a lot of predictions are off by a factor of 10. There is real and substantial change in 250 years, but often very little change in 25 years.
I picked the number 25 because I had recently read an article about an article written 25 years ago about the dim, distant future of LA in 2013. This is the sort of thing that makes people laugh at futurism, because, of course, LA doesn’t look anything like the pictures that were used with the article. But take a chunk of 250 years instead of 25 years: if we think back to 250 years ago, we find a world without electricity, before the industrial revolution, and with no United States. The world of 1764 was very different from the world of 2014, and it seems likely that the world of 2264 will be at least as different from 2014.
Precisely because the world, with few exceptions, changes very little over a lifetime, it is often difficult to believe how different the world was in the past, and how different it will be in the future. Yet the features of the world that remain constant, or nearly constant, throughout the life of an individual, and therefore give the impression of permanence, are anything but permanent. What could seem more permanent than the Earth itself? And yet we know that the continents have rearranged themselves continuously during the four and a half billion years of the Earth’s existence.
With that in mind, it should not be too difficult to see that the fantastic world of science fiction, of human beings traveling throughout the solar system and the galaxy, as strange as it may seem, is no stranger or more unlikely than the strange and unlikely circumstance that has resulted in seven billion human beings confined to the surface of Earth. Our present situation is no more permanent than the ages that preceded it, when the Earth was alternately entirely covered by ice or covered in steaming carboniferous swamps. And these are appropriate comparisons, because when we think of the human future in the cosmos, we must think on geological (if not cosmological) time scales.
On these time scales, present day concerns of resource depletion do not play a major role in history. Our present concern with fossil fuels is an artifact of the early stages (i.e., the first two hundred years) of the industrial revolution. As I see it, we ought to be much more concerned about peak population than peak oil. As the industrialization of Latin America, Asia, and Africa continues, raising living standards in these regions of the world to the levels of the industrialized democracies of the West, population growth is likely to follow the trends of other industrialized regions, with growth leveling off, and eventually falling below replacement level.
I am not, then, forecasting recourse to space resources as a “solution” to present day resource depletion problems, although I do think that the industrialization of space will eventually play a role in providing resources for Earth-based populations. I have discussed this previously in Plans for asteroid mining emerge and Will future extraterrestrial history repeat terrestrial history?
The important thing to understand about resources in space is that, while the barriers to space are very high — much higher than getting to Antarctica now, and much higher than getting from Europe to the Western hemisphere in 1492 and subsequent years — once you do have a sustainable presence in space, mineral resources and energy resources are, for all practical purposes, boundless. Of course, these resources aren’t infinite (at least in this universe), but they are far more plentiful than any human population in the universe could exhaust in the foreseeable future. Space is a hostile environment, but it can be made clement to human life with these nearly endless resources. In contrast, the rich mineral strata of Antarctica are just another finite portion of the Earth’s surface, and they could be readily exhausted in a way that space-based resources could not be readily exhausted.
Going into space is not about solving our resource problems. Here on Earth we’ve gotten really good at using our resources more sparingly than in the past, and we’re going to continue to get better as technology improves. And even as the technology for resource extraction and resource consumption becomes more efficient, alternative technologies will also continue to improve, and eventually these alternatives will begin to replace non-renewable resources. The most obvious example of this is the improvement of solar electric cell efficiency. Before we have used up all our fossil fuels, we will be able to convert the global electricity grid to a decentralized system of renewables without any drop in standards of living. (Again, this is the most obvious scenario, but it is not the only scenario — cf. my post Synchrony in Energy Markets.) On the contrary, standards of living will improve in industrialized regions due to reduced pollution, and in developing regions due to inexpensive, mobile, decentralized energy technologies.
I am sure that there will be many people — perhaps most people, perhaps ninety-nine percent of humanity — who will be content to spend their time updating their Facebook profiles, taking recreational drugs made safe through technologically advanced pharmacology, and immersing themselves in virtual worlds of adventure, excitement, and sexual fantasy. But even as the world settles into a comfortable routine of stagnancy, there will still be a number of persons who want a future that is no longer to be found on the surface of Earth. Those who do not wish to live among the last men in their stationary state (of which Mill wrote admiringly, but for some of us would be a torment, cf. Addendum on Technological Unemployment), will go into space.
The same continuously improving technologies that will make us so comfortable on Earth that few will choose to leave will also eventually tip the economic calculus of going into space, so that passage into space becomes sufficiently inexpensive and convenient that it is neither prohibitively expensive nor limited to a handful of individuals in state-sponsored space programs. This might come about by gradual technical improvements, by an unexpected scientific breakthrough, by a space elevator, or all of these together with other methods we cannot now imagine.
If civilization does not permanently stagnate, and if it remains within the paradigm of industrial-technological civilization (or some successor civilization in which science, technology, and engineering play a similarly central role), then human beings will certainly go into space, even if only a minority whose interest is merely curiosity, excitement, and adventure. However, in addition to this minimal scenario of demographic minority extraterrestrialization, there are also likely to be pragmatic forms of human expansion into space that will include business, industry, and existential risk mitigation, just as there will eventually be some exploitation of the resources of space for terrestrial benefit (though not as the primary motivation for human space travel).
Simply getting into space, of course, is only the beginning. Other factors will determine how well human beings thrive in space. Pure Americanism has mentioned “health problems resulting from prolonged exposure to zero-gravity,” but this assumes that the only possible way in which human beings can live off the surface of Earth is in a micro-gravity environment like the ISS, and there is no reason to make this assumption. Other planetary bodies can be adapted for human habitation, and we can build structures in space that imitate gravity. However, it is worth noting that health will be an important factor for those leaving Earth to live elsewhere in the cosmos. Just as the individuals who go into space will be self-selected by their desire to live away from Earth, they will also be physically selected. Living in micro-gravity, low gravity, high gravity, or simulated gravity environments will be strongly selective. There may be individuals who desire to live in these other environments, but who find that they are physically incapacitated by them. But due to human genetic variability, there will be some individuals who happen to be well-adapted to differing gravity environments. Such individuals will be favored in the expansion into space and they will pass on their genetic legacy to their children. Humanity will evolve under extraterrestrial selection pressures.
For thousands of years both individuals and societies have derived hope for the future from a soteriological and eschatological conception of life and the world, and I would argue that this hope has been central to the growth and expansion of civilization, even where this hope turned out to be chimerical. The Axial Age mythologies of agrarian-ecclesiastical civilization can no longer perform this function. It is this visceral realization that traditionalist formulations of hope no longer address the human condition in the context of industrial-technological civilization that is responsible, in part, for contemporary anomie, and not the nature of industrial-technological civilization, as is commonly charged.
Certainly not for everyone, but for a few people (myself included) the hope for a future in which human possibilities are greatly expanded by a human presence in space, and the existential risk mitigation that is to be derived from self-supporting human communities off the surface of Earth, is an adequate object of hope. Whatever becomes of me personally, this is a future that I want for humanity, as it is a future of greater value than a future confined to the surface of Earth. And, as I have argued above, if even only a tiny minority desires such a future, and the scientific, technological, and industrial achievements of our civilization continue, all of this can come about while the majority of the human population does nothing, goes nowhere, and desires only their comfort and security. But that isn’t everyone.
Recently I was watching a Sam Harris video in which he said that atheism is a problematic term to employ because every religious person thinks that they have a knock-down argument against atheism. (In many talks Harris has said that we don’t need a word for people who don’t believe in religion any more than we need a word for people who don’t believe in astrology.) Harris said it’s better simply to go “under the radar” and to “destroy bad ideas” wherever they are to be found.
Harris is both right and wrong. He is right that there is no need for a term for non-theists any more than a need for a term for non-astrologers or non-Ouija board believers or non-cheese eaters. Harris is also right that there is no need for non-religious individuals to identify themselves as atheists, and that the atheist label is used to marginalize criticisms of religion.
Harris is on more problematic ground when he talks about destroying bad ideas wherever they are found. How are we to distinguish “bad” ideas from “good” ideas? And even if you can unambiguously identify a bad idea, it is not at all clear that it can be destroyed. Indeed, I would argue that the worst ideas are nearly impervious to destruction.
Let me begin by saying that there is a distinction that needs to be made between attempting to show that an argument is fallacious and attempting to show that an idea is fallacious. An argument can be shown to be fallacious by demonstrating a formal or material fallacy, i.e., that the argument makes a logical mistake or assumes premises that are false (or, at least, not confirmed). The value of an argument can be rationally assessed and discussed in that spirit, but it is not at all clear that an idea as an idea can be rationally assessed or debated on rational principles.
Now, most arguments that you encounter are likely to be based on, to grow out of, ideas, but in refuting the argument you are not refuting the idea. Almost everyone has had the experience of arguing with someone who immediately offers up a new argument for the same idea after their earlier arguments have been shown to be untenable (for those who are honest enough to recognize that their arguments can be shown to be untenable — and this is rare). After a few iterations of this it becomes clear that the “argument” was a rationalization, and the real appeal is the idea and not the argument that supposedly supports the idea.
Elsewhere I have written about the difference between perennial ideas and defunct ideas; in my last post, Time and Tide, I wrote that, “A perennial idea is never refuted.” Truth be told, people are loath to give up their ideas, perennial or not, and most are aware at some level that an idea cannot be refuted. This is not to say that all ideas are equally valuable or well-founded. They are not. An idea may be misguided or misleading, petty or pernicious, but in and of itself it is not true or false, though it will be meaningful or meaningless, valuable or worthless.
If, instead of attempting to prove that an idea is false or fallacious, you attempt to prove that an idea is meaningless or worthless, you will be doing so vis-à-vis someone who is already convinced of the meaning and value of this idea, who is in fact basing their arguments on this idea, and the likelihood that such an individual will give up their treasured idea is close to zero.
There is a small percentage of the population that is willing to listen to criticism of their fundamental assumptions as to how the world works, but the vast majority of people are either unwilling to listen to such criticism, or they are unable to understand even the possibility that the central idea around which they have constructed their life is a figment of their imagination.
While an idea cannot be refuted, it is probably true that a bad idea can be discredited. However, among the true believers in an idea, the attempt by others to discredit an idea is seen as all the more reason for the believer to demonstrate their unwavering faith; the attempt to discredit a bad idea, then, may have an unintended backlash effect that invigorates the defenders of the idea.
In Beowulf’s Old English:
se geweald hafað
sæla ond mæla; þæt is soð Metod.
In Chaucer’s Middle English:
For thogh we slepe, or wake, or rome, or ryde,
Ay fleeth the tyme, it nyl no man abyde.
And in Edmund Spenser’s modern English:
For, all that from her springs, and is ybredde,
How-euer fayre it flourish for a time,
Yet see we soone decay; and, being dead
To turne again vnto their earthly slime:
Time waits for no man, nor does it wait for ideas. But time is more kind to some ideas than to others, just as some men show the ruin of their youth earlier than others, while some gently age and look distinguished rather than decayed.
There is no more obviously outdated idea than failed futurisms of the past, and I have written about this on several occasions, since it is so easy to dismiss all attempts at futurism when we consider how wrong twentieth century predictions of the future were. I’m going to write on this again soon, but today I want to make a distinction between the kind of futurist ideas that seem painfully dated and the kind of ideas that age more gracefully.
Ideas of the future that remain unrealized are those that show their ruin early. Once the moment for a particular future passes, it passes irretrievably into the past and carries with it the stamp of the era whose vision it is. Such ideas become dated, and they are dated because they were never widely adopted and are therefore identified with the time in which they experienced their brief efflorescence.
As time passes, an idea that was never realized or widely adopted becomes less and less likely to be acted upon, and in terms of ideas of future human society that means that a failed futurist idea becomes more closely associated with the past than with the present. No one wants to show how dated and out of fashion they are by investing their hopes in the future that is already passé in the present.
Distinguished ideas that age gracefully are those that are enthusiastically adopted and undergo rapid development as a result of competition in the marketplace of ideas; these ideas become commonplace in our lives even while they continuously evolve. As a result, these ideas do not seem dated. Familiar ideas age and evolve incrementally before our eyes, so we usually don’t notice it.
Think of someone who decorates their home in a particular style that is clearly identifiable with a particular stage in the development of popular culture. They live in the home every day and don’t notice their decor becoming more faded and dated year in and year out. When they pass away and the house is sold or inherited, it feels like a time warp to walk inside because the decor is so clearly identifiable with a particular period of history.
This isn’t really the best example, however, since interior decoration doesn’t change in the way that ideas adopted by popular culture evolve, but it does illustrate the role of familiarity in the perception of datedness.
But there is an additional wrinkle — a wrinkle in time. Popular culture often fails to distinguish between trivial ideas that briefly become fashionable, are talked about by everyone, and then disappear, and ideas that really do explain some feature of the world, but not on the time scale of popular culture. When ideas of the latter kind become briefly popular and then seem to fail to explain short-term events, they drop out of popular usage, but the phenomenon they explain continues to work away in the background, even if unnoticed.
This has been the case most recently with globalization, and before that with secularization and with several other futurist ideas. Talking heads now routinely mock the idea of globalization, even as global trade flows increase and global institutions are progressively more integrated. Similarly, the rise of Islamic militancy was regarded as definitive proof of the failure of secularization theories, but recently a few scholars have been returning to secularization and reassessing the theory in light of evidence that clearly points to the growth of secularism in wealthy, industrialized countries.
I previously addressed these considerations in Confirmation and Disconfirmation in History, in which I discussed the changing currents of history in assessing whether Marxism has been validated or refuted by history. A perennial idea is never refuted, but returns time and time again; each time it seems to be discredited beyond the possibility of future resurrection, it appears again — perhaps with a different name and in a different formulation, but the same idea nevertheless.
The idea that time waits for no man is a commentary upon the transitory nature of all things of this world — sic transit gloria mundi. However, we could just as well invoke this idea to explain the eternal recurrence of perennial ideas.
There is a passage from Foucault that I have quoted many times, which is one of my favorites from his works:
“A real science recognizes and accepts its own history without feeling attacked.”
It just occurred to me today that we might say the same about the future of science:
“A real science recognizes and accepts its own future without feeling attacked.”
A pure and thorough-going complementarity between past and future would only be possible if our knowledge of past and future were symmetrical, which it is not. But it should be pretty clear that contemporary science, in so far as it glimpses future iterations of the discipline, would feel profoundly inadequate in the face of what may come out of science, when sufficiently advanced.
It is a staple of pop-culture futurism that science a hundred years from now may be as different from contemporary science as contemporary science is different from science a hundred years ago — before the confirmation of relativity, before quantum theory, before plate tectonics, before the expansion of the universe, before genetics, and so on. In other words, science a hundred years ago is barely recognizable today as science, and the same may be true a hundred years hence.
But whenever dissecting pop-culture futurism one must keep in the forefront of one’s mind what the message is to the contemporary audience, which is the real target of futurism. Some of these claims about science becoming rapidly outdated are sincere, but some are based upon an implicit non-progressivism and the contemporary equivalent of a cyclical theory of history.
Having had the misfortune of being exposed to a lot of the looniest forms of conspiracy theory present in our culture today, I can tell you that the cyclical theory of history is alive and kicking in the popular mind, and there is no more familiar idea to the listeners of late night radio programs than the idea that civilization has emerged repeatedly on Earth and achieved a high level of technological development, only to be destroyed by its own hubris.
The idea of science being outdated in the future is related to this idea of cyclical history, because cyclical history maintains at bottom that there is no progress, and this must include the claim that there is no real scientific progress either. Therefore the falsification of past science by present science, and the eventual falsification of present science by future science, points to the idea that science does not better approximate truth over time, but only revolves in a vast cycle along with the rise and fall of civilizations, with no real progress being made.
In the context of a cyclical theory of history, science could recognize its past and future without feeling attacked, because all science is equal and no science is closer to the truth than any other science, because all science is eventually falsified.
The fact that science does feel attacked by being presented with its past, which now seems perverse and unworthy of being called “science,” and would feel attacked if presented with a future iteration of itself, is, in this sense, a hopeful sign, as it suggests that real progress is made in science, and that scientists know this so well that they feel both insulted and challenged when the painful history of the follies of science is spread out before them.
It would be an interesting exercise to develop the above idea in the context of Kuhnian paradigm shifts. I leave this as an exercise to the reader.
In my last post, Pernicious Metaphysics, I referenced an earlier post, Metaphysical Fallacies, and now just today I learned that I have been anticipated by several decades in my use of the phrase “metaphysical fallacies,” which plays a prominent role in Hannah Arendt’s book The Life of the Mind.
I’ve written about Hannah Arendt previously in relation to her work on mass man, an idea developed in her The Origins of Totalitarianism, and in relation to perhaps her most famous work, Eichmann in Jerusalem, which I discussed in Historical Consciousness for its Own Sake. I’ve also skimmed several of her books, being particularly interested in On Revolution and Between Past and Future, but until today I don’t think I had ever cracked the covers of The Life of the Mind (or, if I did, it didn’t make much of an impression on me).
No matter that Arendt formulated the problematic of totalitarianism that we still use today to discuss Nazism and other forms of fascism, Arendt still has not been forgiven for writing Eichmann in Jerusalem. Having heard of the book and the controversy surrounding it, I read it. Having read it, I didn’t get why she took so much flak over it. I had to read a number of essays about its reception before I began to understand the controversy surrounding the book, which, as I said, still hasn’t gone away. There was a piece in Slate from 30 October 2009, The Evil of Banality: Troubling new revelations about Arendt and Heidegger by Ron Rosenbaum, which takes up the controversy as though decades had not passed in the meantime.
In this article it is not only new charges about Arendt’s sources that are aired, but questions about her relationship to Heidegger. Anyone who has read this or my other blog knows that I am no fan of Heidegger (cf. Ott on Heidegger and Conduct Unbecoming a Philosopher). And I, too, wonder why Arendt played the crucial role she did in rehabilitating Heidegger after the war. It certainly wasn’t naïveté, either about Heidegger or his association with the Nazis or about Heidegger’s philosophy. Arendt was not naïve. It is probably much simpler than that. Heidegger was an old friend, and Arendt forgave him. Now, the rest of us may not forgive Heidegger, but it seems incomprehensible (if not unconscionable) to say to another person that they should not forgive an old friend, no matter how undeserving.
This, however, is not what I set out to write about today, but there is a sense in which the digression on Heidegger is relevant, since in her exposition of metaphysical fallacies Arendt used Heidegger as her example of what she calls the “basic” metaphysical fallacy. Arendt took up metaphysics only to diagnose the discipline in terms of “metaphysical fallacies” — irony, perhaps? — and she wrote that, “The basic fallacy, taking precedence over all specific metaphysical fallacies, is to interpret meaning on the model of truth. The latest and in some respects most striking instance of this occurs in Heidegger’s Being and Time, which starts out by raising ‘anew the question of the meaning of Being.’ Heidegger himself, in a later interpretation of his own initial question, says explicitly: ‘“Meaning of Being” and “Truth of Being” are the same.’” (p. 15)
For Arendt, metaphysics has revealed itself as consisting only of fallacies; once we deflate or deny the fallacies, there is nothing left. Nevertheless, there is a certain value in these fallacies:
“…the only record we possess of what thinking as an activity meant to those who had chosen it as a way of life is what we could call today the ‘metaphysical fallacies.’ None of the systems, none of the doctrines transmitted to us by the great thinkers may be convincing or even plausible to modern readers; but none of them, I shall try to argue here, is arbitrary and none can be simply dismissed as sheer nonsense. On the contrary, the metaphysical fallacies contain the only clues we have to what thinking means to those who engage in it — something of great importance today and about which, oddly enough, there exist few direct utterances.”
For Arendt, all of metaphysics is Pernicious Metaphysics, and all of it fallacious — but there are lessons to be learned from these fallacies, because, while fallacious and pernicious, metaphysics is neither arbitrary nor nonsense. Metaphysics, then, is a record of valuable errors; philosophy consists, on this view, of object lessons.
This is a surprisingly positivist position to take, implying, as it does, a perfectly simple and unproblematic world hidden beneath the layers of metaphysical fallacy, waiting for us if only we can penetrate through all the fallacies and lay hold of this thing in itself which, seen in its nakedness, presents thought with no difficulties whatsoever.
Of course, we have heard this time and again from twentieth century philosophers, and I don’t want to reduce the subtlety of Arendt’s position to some schematic, positivistic denial of metaphysics. Indeed, while I do not exactly agree with Arendt, I am quite sympathetic to her position. I agree that metaphysical fallacies, when they are committed, are not arbitrary and not nonsense. They deserve our study and attention. I would maintain additionally, however, that there remains the possibility of metaphysics beyond metaphysical fallacy, which, like science as we understand it today, is never quite right, and always subject to revision, but which nevertheless, incrementally, step by painful step, more closely approximates the world the more carefully we learn to ask metaphysical questions and even to hazard answers to them.
Arendt herself takes a step in this direction in her analysis of the “basic” metaphysical fallacy committed by Heidegger. If the identification of being with meaning is a metaphysical statement, and also a metaphysical fallacy, then the assertion of the non-identity of being and meaning is also a metaphysical statement, but not a metaphysical fallacy.
I wrote above that Arendt outlines a position quite close to twentieth century positivism in its various iterations; in another sense, Arendt’s position vis-à-vis metaphysics can be likened to something much more recent: the speculative realist critique of Kantian correlationism. Here is Quentin Meillassoux on correlationism:
“…the central notion of modern philosophy since Kant seems to be that of correlation. By ‘correlation’ we mean the idea according to which we only ever have access to the correlation between thinking and being, and never to either term considered apart from the other.” Quentin Meillassoux, After Finitude: An Essay on the Necessity of Contingency, p. 5
I wrote about this previously in De-Coupling Intentionality. The speculative realists tend to be quite heavily influenced by Heidegger, so again we see (if you will forgive me) the correlation. The equation between thinking and being, characteristic of intentionality in phenomenology, is not precisely the correlation that interests Arendt, but the “basic fallacy” described above she does put in the form of an equation, the equation of meaning and truth, and she does so in the context of a work on thinking. Thus the correlationism that Arendt critiques is the correlation of meaning and truth. We could even call this a form of intentionality. Arendt’s proposed de-coupling of meaning and truth, as against Heidegger’s explicit equation of “Truth of Being” and “Meaning of Being,” is no less a metaphysical thesis than their coupling in Heidegger.
If we can substitute thinking, salva veritate, for meaning, the Husserlian correlationism of thinking and being and the Heideggerian correlationism of meaning and being are in turn correlated. In so far as thinking is meaningful in Arendt — and, as she says, “metaphysical fallacies contain the only clues we have to what thinking means to those who engage in it” — the critique of Heideggerian correlationism and Husserlian correlationism (or Kantian, if you prefer) coincide.
Post with 1 note
In my post on metaphysical fallacies I quoted W. H. Walsh’s stories that illustrated a conflict over metaphysical principles, the first involving the claim that things don’t just pass clean out of existence, and the second involving the claim that things don’t just happen for no reason at all.
The first of these metaphysical principles is the corollary of a principle famously to be found in Lucretius: ex nihilo nihil fit — from nothing, nothing comes — which implies, into nothing, nothing goes. In the twentieth century Alfred North Whitehead called this the ontological principle: “there is nothing which floats into the world from nowhere” (Process and Reality).
The second of these metaphysical principles — that things don’t happen for no reason at all — is well known as the principle of sufficient reason, which has a history in western philosophy as distinguished as that of the ontological principle. I regard the principle of sufficient reason as among the most pernicious of metaphysical principles: it has misled generations of philosophers and others into a teleological conception of the world, a conception that is only now being challenged by a non-teleological way of understanding — and not in the modern world generally, but more narrowly, in the world since industrialization — as science becomes more sophisticated and is able to squeeze all the gods out of the gaps.
The principle of sufficient reason and the teleological conception of the world that follows from its systematic application are not precisely metaphysical fallacies or metaphysical biases, but what might be called systematically misleading metaphysics, or perhaps pernicious metaphysics. In many cases, metaphysical biases are identical to principles that are central to the metaphysical systems one rejects; when a metaphysical bias is consciously adopted as a principle, it can no longer be called a bias, as it is now an explicit methodology.
It was Gilbert Ryle who first formulated the idea of systematically misleading expressions. In his concern for language, Ryle was a man of his time, exemplifying what has come to be called the “linguistic turn” in philosophy (interestingly, the linguistic turn is found in both analytical and continental philosophy). But it is not only expressions that can be systematically misleading. Expressions that are perfectly clear and not intrinsically misleading for linguistic purposes may encapsulate a systematically misleading idea, and a systematically misleading idea is what I mean when I say that the principle of sufficient reason and the ontological principle are systematically misleading metaphysics. The fact that they accord so well with our intuitions is part of the problem; if they did not, we would not have to struggle against them.
It could be argued that the principle of sufficient reason is a particular case of the ontological principle, such that things don’t happen for no reason at all because for something to happen for no reason at all would require that this event appeared out of nothingness, which violates the ontological principle. If the ontological principle is the foundation of the principle of sufficient reason, and the principle of sufficient reason is a pernicious metaphysical principle, then we should seek the origins of pernicious metaphysics in the ontological principle.
If the ontological principle is the root of all evil metaphysics, the fons et origo of a perniciously teleological conception of the world, then if we are going to get to the root of the matter we must call the ontological principle into question, whatever its intuitive standing. Does something ever come out of nothing? Can there be a creatio ex nihilo? These are extremely tendentious ways of formulating the problem; let us try to find a somewhat less tendentious way to approach this.
As the historical sciences yield an ever more detailed account of a temporal world, a world in which time is the central organizing principle, the concept of emergence is becoming ever more important. Emergence is one of the central concepts of temporal metaphysics.
The ontological principle commits us exclusively to a position of weak emergence, in which the properties observed to be emergent from complex systems are unpredicted and unexpected, that is to say, emergence is here an epistemological doctrine. It is only with strong emergence that emergence is an ontological doctrine according to which qualitatively new properties appear that are ontologically distinct from the properties of lower, less complex levels of a system.
Wherever there is strong emergence, there is ontological novelty, and wherever there is ontological novelty, an ontological threshold is passed. It could be argued that this new ontological threshold does not come from nothing because we know what preceded it, and we know the substrate from which it emerged, and we know that a new level of complexity in the substrate produced an ontological novelty; but if we insist in every case that the ontological novelty is nothing but those preceding conditions, then we are committed a priori to a reductivist position.
If we allow the possibility that there are instances both of weak and strong emergence, and we are not to insist upon reductivism in every case, then we must acknowledge that in cases of strong emergence ontological novelty violates the ontological principle; in other words, we must recognize a limitation to the ontological principle — the ontological principle is neither absolute nor unconditioned.
From a conditional ontological principle that recognizes exceptions we can derive a conception of the world in which developmental processes produce qualitatively new forms over time. In such a world of ontological development, we tremble always on the verge of ontological novelty.
Where exactly we pass over into ontological novelty is not always plain, nor should we assume that there must be a discrete point. This is a problem related to the sorites paradox. I wrote above that metaphysical biases can be identical to principles that are central to the metaphysical systems one rejects; just so, in this spirit, many philosophical theories have their origin in a shift of perspective that rechristens a paradox as a principle, and so we might speak of a sorites principle instead of a sorites paradox. An incrementalist conception of the world embraces the sorites principle and understands that fundamentally new forms, forms that are new in essence, emerge from what Alfred Russel Wallace called The Tendency of Varieties to Depart Indefinitely from the Original Type.
Some time ago in my post Finding Paley’s Watch, I began to sketch a non-teleological conception of the world. This post has been little read, and it is probably not obvious that I was suggesting something fundamentally new. I need to return to this idea to give it a fuller and more systematic exposition, and to do so in light of the discussion above of the pernicious metaphysical principles that lie behind the teleological conception of the world that has gone largely unquestioned in the history of western metaphysics. We have learned to question specific cases of teleology, and have freed large parts of science from teleological thinking, but we need to pass beyond a fragmentary and opportunistic formulation of non-teleological thought to a metaphysical non-teleology that conceives the world entire in non-teleological terms.