This is conceived as an informal and spontaneous annex to my more extensive blog, Grand Strategy: The View from Oregon

1st September 2014


Religious Experience in Industrial-Technological Civilization


In my previous post, Settled and Nomadic Religious Experiences, I made the observation that settled societies tend to better provide the cosmological and social functions of mythology, while nomadic societies tend to better provide the mystical and personal functions of mythology. (These are the functions of mythology according to Joseph Campbell.)

Today we have but one vital civilization, and that is the planetary industrial-technological civilization that is the context of all our lives. Industrial-technological civilization is a settled civilization, but it is not the settled civilization of agrarian-ecclesiastical civilization, in which religion was the organizing principle of civilization. Compared to their central role in agrarian-ecclesiastical civilization, religion and mythology have been marginalized by industrial-technological civilization, in which the central organizing principle is technical, i.e., the procedural rationality of science, law, government, regulation, and industrial management.

Almost everyone in the world today leads a settled life, but it is settled in a slightly different way than lives were settled in the previous paradigm of civilization when better than ninety percent of the population lived and worked on the land. Industrialization requires a mobile work force, so that the multi-generational households that typified settled life in the pre-industrial era have been largely abandoned, and families live spread out over a vast geographical area, and sometimes — a sign of our planetary civilization — all over the world. Thus in the context of this settled civilization, many individuals lead quasi-nomadic lives, whether putting all they own into a car and driving across the country to a new life elsewhere (semi-settled life), or regularly packing a suitcase for a business trip to the other side of the world. 

With these changes to settled life as a result of industrialization, it is to be expected that there are both continuities and discontinuities in the religious lives and religious experiences of individuals. The marginalized nomadic peoples who once existed alongside settled civilization and kept the memory of nomadic life alive are almost entirely gone. We continue to have the settled mythologies — albeit altered over time — but not the mythologies of nomadic peoples that served the mystical and personal functions of mythology.

And the social and cosmological functions were called into question by the wrenching changes wrought by industrialization. Civilization passed through this transition without collapse, and so retained the religious institutions of agrarian-ecclesiastical civilization into industrial-technological civilization, further marginalizing personal and mystical religious experience. This is another motif in Joseph Campbell’s thought that recurs repeatedly: we have preserved the myths of the past, but they no longer function for us because conditions have changed so radically.

This is not only true of Western civilization. Oddly enough, it was when I was in Kyoto, visiting the Sanjūsangen-dō temple (well worth seeing, by the way, as is all of Kyoto), that it really came home to me how religious traditions of the medieval past continue to serve as outdated symbols for a social order that no longer exists. In front of the thousand Kannon figures for which the temple is famous, there stands a line of representative figures from Japanese society. Only, they no longer represent Japanese society as it is; they are representative figures from a pre-industrial Japanese society that no longer exists, but anyone who views this assemblage immediately understands the symbolic message being communicated.

This moment of insight at Sanjūsangen-dō came before I had invested a substantial portion of my life listening to the lectures of Joseph Campbell. With this personal realization under my belt, it was natural that I would be strongly attracted to Joseph Campbell’s interpretation of the role of religious mythology in contemporary society. And I mention Japan simply because I have spent more time there than in any other country, with the exception of Norway and the US, though Japan is especially interesting because of the continuity of its traditions.

Japan is one of the most intensively industrialized societies in the world, and while the Japanese have retained many of their traditions, the changes wrought by industrialization have been as wrenching as anywhere in the world. If you go to a major Japanese temple on a major holiday, you will find that the temple resembles nothing else so much as a major Japanese railway station: crowded with people moving in all directions, surrounded by booths in which people are conducting business, and jostling to get through the crowd to reach one’s destination.

Of course, the Japanese religious experience is very different from the Western religious experience, but they have this in common: both retained the religious institutions of the pre-industrial past into the industrialized society of the present, and even as these ancient religious institutions continue to shape society, industrialized society also shapes how these religious traditions are expressed in the industrialized world.

Japan, like the West, has its contemplative orders, and many individuals dedicate their lives to the cultivation of specifically religious experience. Buddhism in particular emphasizes institutionalized forms of meditation, and has a sophisticated explanation of the forms of experience that emerge from this meditative practice. Mystical and personal religious experience still has an important place, but this is not a mass communal experience, and in so far as the experience belongs to a spiritual elite, it is not likely to be the experience that reconciles the consciousness of the ordinary individual with the experiences of the world (the mystical function of mythology) or to be that to which the ordinary individual turns in their hour of need (the personal function of mythology). Similar remarks can be made regarding Western contemplative orders, which are arguably even more marginalized in Western society.

Thus while in settled societies, and especially settled industrialized societies, the mystical and the personal functions of mythology tend to atrophy, the human need for these functions does not atrophy, and individuals seek the satisfaction of these needs through whatever channel will provide them. So the immediate, personal, engaged spiritual experience once provided by a shamanistic ritual is now provided in the context of a public spectacle. At a nightclub you have drumming (along with other forms of music that emphasize rhythm, because of its relationship to dance and the movement of the human body), dancing, and the use of psychotropic substances (alcohol, cigarettes, recreational drugs) that come together in a distinctive experience. These elements together constitute an effective algorithm of ecstasy. It should be no surprise to anyone that one of the most popular kinds of music in clubs is called “trance.” The distinctive experience produced is that of an ecstatic trance. Individuals lose themselves in the crowd, enter into a flow state, and continue until they collapse or the ritual comes to an end.

While going to a nightclub or a rock concert is not explicitly a religious duty, it has all the elements of shamanic rituals — “independent of any body of doctrine, any orthodoxy, or any ecclesiastical hierarchy,” as I wrote in Settled and Nomadic Religious Experiences. People find their rituals where they can, even if it is merely the ritual of going to Starbucks for coffee.

It seems highly unlikely, however, that individuals would turn to the experience of a club or a concert in their hour of need, when they seek guidance in life. This is perhaps the most problematic aspect of industrialized society, and the area of life in which our pluralism is most evident. Different individuals seek guidance in different ways. Some recur to ancient communal traditions, some seek comfort in their social support networks (friends and family), and some seek professional help. The professional help takes the form of psychotherapeutic intervention, and in our time the therapy industry has become remarkably sophisticated. In so far as therapy is a technical discipline it fully embodies the spirit of procedural rationality that is central to industrial-technological civilization. 

A further comment remains to be made: in the most advanced industrialized nation-states, where the developments outlined above are most in evidence, civil society has reached a remarkable degree of stability and affluence, so that far fewer individuals than in the past experience the kind of trauma that is likely to trigger an hour of need in which one’s existential milieu is called into question. The kinds of trauma that do occur are likely to be the sort that are dealt with by means of psychotherapy, which I have already discussed above.


Tagged: Sanjūsangen-dō, Kyoto, religion, industrialization, industrial-technological civilization, mysticism, religious experience, mythology, ritual


30th August 2014


Settled and Nomadic Religious Experiences

The religious modes of thought typical of settled peoples usually focus on orthodoxy, that is to say, unquestioned belief in particular propositions. A body of religious propositions to which all members of a society are expected to assent requires a priestly class to formulate these propositions and to preside over their interpretation. This priestly class is either identical with the ruling political class (in a system of caesaropapism) or it exists as a parallel hierarchy, resembling the hierarchical structure of the political class but distinct from it (as in medieval Europe).

It has been the peoples who have been settled for the longest period of time — the Semitic peoples of the Levant — who gave us the “religions of the book,” that is to say, Judaism, Christianity, and Islam, in which a written text (or texts) plays a central role in religious observance. The cataloging of dogmas in encyclopedic works of theology was the natural extension of this institutionalization of religious experience.

The religious modes of thought of nomadic peoples tend to be more orientated toward shamanic ritual practices — ritual practices that occur independent of any body of doctrine, any orthodoxy, or any ecclesiastical hierarchy. The use of mind-altering substances, drumming, and dancing (the elements of mystical experience that I have discussed previously in Algorithms of Ecstasy) are focused on personal mystical experience and not doctrinal conformity. Similarly, myths told round a campfire are likely to change slightly with each re-telling, and so naturally adapt to the changing circumstances of a nomadic people. Here it is the story and not the book that is central. The religious experience of nomadic peoples is far less likely to be mediated by institutions than the religious experience of settled peoples.

It was the Turks, a nomadic people before they settled down in the form of the Ottoman Empire, who gave us the whirling dervishes, who, unconcerned with orthodoxy and more interested in ecstasy than obedience, seek altered states of consciousness through a spinning dance. Here practices likely to induce a mystical experience have become institutionalized after the fashion of a settled society, but still retain the stamp of their shamanistic origins.

Joseph Campbell often made a fourfold distinction in the functions served by mythology, these four being, 1) the mystical, 2) the cosmological, 3) the social, and 4) the personal. The mystical function is the reconciliation of consciousness (sentience) with the world that consciousness finds; the cosmological function is an explanatory framework that accounts for the way the world is; the social function is both a descriptive and normative account of the social order; the personal function is providing a guide for the individual’s life, and being that which the individual calls upon in his hour of need. (I give these summaries from memory, without consulting any of Campbell’s texts, so please make allowances for that if the above is not clear.)

If we look at the different characteristics of religious experience in settled and nomadic societies, it seems clear that nomadic societies are much more likely to adequately fulfill the mystical and personal functions, while settled societies are more likely to adequately fulfill the cosmological and social functions. Is it possible to imagine a society in which all four functions of mythology could be experienced with equal vividness? Might there be the possibility of a society in which an organic mythology, naturally arising from the common experiences of a people, could have the personal immediacy of mystical experience on the one hand, that serves as a vital guide to life, while also incorporating some systematic reflection on the world and society? 

This isn’t really a question about religion and religious experiences, as much as it is a question about the structure of human society. Is it possible to structure a human society so that fulfillment of all the functions of a myth are possible?  

Tagged: Mevlevi Order, whirling dervish, nomadism, shamanism, ritual, institutionalization, religion, mystical experience, mysticism, Joseph Campbell, mythology


29th August 2014


Talking the Talk and Walking the Walk


When a principle is taken personally we say that a person not only “talks the talk” but also “walks the walk.” When there is a glaring gap between the talk and the walk, we take that individual for a hypocrite who says, “Do as I say, not as I do.”

But sometimes it is very difficult to determine the relation between walk and talk, or between principle and practice. Personal behavior does not always have a direct relationship to moral principles, and an indirect relationship is not always hypocrisy. Let me try to explain what I mean by this.

I was thinking about these issues recently after reading the article Put down the smart drugs – cognitive enhancement is ethically risky business, by Nicole A Vincent and Emma A. Jane.

This is one of those articles, not uncommon on the internet, where the comments section is at least as interesting as, if not more interesting than, the article itself. The above-linked article is from an online publication called “The Conversation” (I have been increasingly noticing their content recently, but almost always in the context of disagreement), and it can be said that the comments became a true conversation — the discussants were intellectually engaged and demonstrated a high degree of “idea flow” (a term I only recently learned from Cadell Last’s presentation at the IBHA conference).

In the comments on this article about cognitive enhancement drugs — also known as nootropics — one of the authors, who had argued against the use of cognitive enhancement drugs in the article, explicitly acknowledged having formerly used one of the drugs most discussed. Nicole A Vincent wrote in a comment to her own article:

Ok, so here’s my own disclosure: I have used modafinil in the past, but not these days. And for me it worked a treat. Never experienced jetlag, even after multiple 24 hour trips between The Netherlands and Australia, or Australia and the US, and I could just keep working unimpaired, which is precisely what I did. And did and did and did…

The 30 or so 200mg tablets (obtained online), which I took in half-quantity doses in the morning for three or four days after traveling to the other side of the globe, paired up with sleeping aids at night to help me get a restful sleep for a similar three to four days, lasted a year and a half, but eventually they ran out. In retrospect, I worked too much in that time. The ability to do this became a reason to not say “no” to participate in workshops or conferences. I don’t think this was a good thing.

I guess I wasn’t greatly surprised by this, but there is always a sense of cognitive dissonance when regulators reveal their own lessons learned, which led them to their current position.

If you asked me personally about my use of chemicals, I would tell you that I don’t make use of any drugs, legal or otherwise — that includes the most common forms of legalized drugs such as alcohol, tobacco, and coffee. I can easily imagine that if someone were to ask me about this, and heard my response, they might assume that I was opposed to the use of drugs, and likely that I would also be opposed to cognitive enhancement drugs. This assumption would be wrong. Although I don’t use any drugs myself, I am completely indifferent to whether others use drugs, and I have no objection whatsoever to the use of cognitive enhancement drugs.

Moreover, I would be likely to say that it will be nearly impossible to stop the use of cognitive enhancement drugs, and that their efficacy and use will slowly and gradually become the norm. Trying to regulate them will only slow down this process, not stop it, which may serve a purpose in terms of reducing (or at least managing) social tension, but one cannot pretend that any such action does not come at a cost at least equal to that of making no attempt to regulate nootropics. 

I personally don’t use cognitive enhancement drugs but in principle have no objection to them; the author of the above-linked article has personally used cognitive enhancement drugs, but is in principle opposed to them. This precise mirror image of positions not only reveals a gap between the personal and the principled, but also suggests that we are both hypocrites. 

Am I failing to walk the walk of transhumanism (at least, the thin end of the wedge of transhumanism) even while talking the talk? Am I contributing to a society in which people routinely use cognitive enhancement drugs to obtain a competitive advantage, possibly at the cost of their long-term health? Not exactly.

If others want to risk their health with unknown side-effects from drugs, whether nootropics or otherwise, I want the individual to be the judge of their own best interest. I believe that it is in my interest not to ingest any chemicals (except those present in my processed food), but if others feel otherwise, that is fine with me. In choosing not to use cognitive enhancement drugs I may suffer material consequences from my decision, as others who take these drugs are able to function at a higher level. So I am voluntarily accepting a potential handicap for avoiding a potential risk. I may fall behind. So be it.

And there is an open question whether any of the drugs now called nootropics are cognitive enhancers, or merely stimulants. One of the contributors to the online discussion, Alan W. Shorter, wrote, “…this article was not about cognitive enhancers at all; just stimulants.” One might similarly observe that the article wasn’t really about the use of cognitive enhancement drugs at all, but rather an examination of the kind of society in which we should live: a regulatory nanny state in which individuals are “nudged” (or even shoved) by government policies into behaviors approved by the state, or an open society in which individuals can make real choices about how they will live their lives.

The world is full of self-destructive individuals and self-destructive opportunities. We should not be surprised that the two often meet up. Cognitive enhancement drugs might be the occasion of further self-destructive behavior (as in the film Limitless, often cited in relation to nootropics), but they are not likely to be as destructive as alcohol, which has ruined countless lives but remains widely available. It seems we prefer the devil we know.

Tagged: cognitive enhancement, nootropics, Modafinil, smart drugs, Nicole A Vincent, Emma A. Jane, hypocrisy


29th August 2014


Legroom row diverts second US flight →

Call it a sign of the times. For the second time in one week, a US flight has been diverted and forced to land due to a conflict among passengers over reclined seats. Air marshals on US flights now have to respond to angry travelers who have lost their personal space to others, rather than responding to terrorists, which was the primary intent of their presence.

Anyone who has flown on a US economy flight in recent years knows that US airlines have been steadily reducing seat room in order to get more seats on each plane. The airlines want to get more seats on each plane in order to earn a little more revenue from each flight, because most legacy US air carriers have been running at a loss for several years, if not several decades.

Now US airlines have reduced the room in the economy cabin to the point that fights are spontaneously breaking out among passengers. This indicates that a threshold of diminishing returns is approaching. Further reductions in personal space would mean increased disorder plaguing flights, and if pursued further would eventually become unworkable. 

If we contextualize this reduced personal space on-board in the overall airline experience, which is pretty dismal, it is no surprise that fights are breaking out on flights. In the past couple of years I have flown economy class and business class and first class. Even on my most expensive flights the service was poor. First class on both US Airways and American was particularly poor; Delta and United were a little better.  

Air travel is now the default form of transportation. What buses, trains, and even ships were in the past, airplanes are today. Airliners are the buses of the skies, and airports are the bus depots of this transportation infrastructure. But you can’t exactly operate an airline like you would a bus, because you can’t pull over on the shoulder of the road and push out an unruly passenger.

Hopefully the airlines will realize that they are approaching a threshold of on-board crowding that is unsustainable, and they will pull back even if it means having to raise prices. If they fail to learn the obvious lesson, things will only get worse.

Tagged: airlines, air travel, seat room, economy class, travel, tourism, US Airways, American Airlines


23rd August 2014

Post with 1 note

Foreshortening or Anthropocentrism?

On Illusions in Time…

At the 2014 IBHA conference, futurist Joseph Voros suggested that past instances of descent (his preferred term for civilizational failure) are viewed from a foreshortened perspective, so that we see them as obvious compact events in the past, whereas from the perspective of a participant in the midst of the descent in question civilizational failure would not have been obvious (at least, as a compact, discrete event). It might even be invisible. In other words, individuals living through the decline and fall of Rome would not have known it to be such.

While formulated in many ways, this is a familiar theme. Another speaker at the IBHA conference (Ken Baskin, if I remember correctly) made a similar point when he implied that we are living in an axial age now, though we may not realize it. Our perspective from within history means that we see the past and the future differently from how we see the present, and these different perspectives result in temporal illusions that may distort an historical event by magnifying or diminishing it.

That we might already have passed the tipping point of decline or even extinction is a commonplace, and one that I have previously addressed, as when I wrote…

“If we fail to do what is necessary to perpetuate the human species and thus precipitate the end of the world indirectly by failing to do what was necessary to prevent the event, and if some alien species should examine the remains of our ill-fated species and their archaeologists reconstruct our history, they will no doubt focus on the problem of when we turned the corner from viability to non-viability. That is to say, they would want to try to understand the moment, and hence possibly also the nature, of the suicide of our species. Perhaps we have already turned that corner and do not recognize the fact; indeed, it is likely impossible that we could recognize the fact from within our history that might be obvious to an observer outside our history.”

There is, however, another way to consider this question. While our perspective from within history may make it difficult or impossible for us to see that we are in the midst of an historical development as it happens, it is just as true that our perspective from within history means that we are likely to see historical developments that are not in fact happening, and it is only as a result of our being embedded in history and interested in the outcome that we attribute a larger significance and meaning to events that have been blown out of proportion by our perspective. In short, the appearance of an historical development may be mere appearance, a result of our anthropocentrism, while the reality is that nothing of great import is happening.

The same metaphor of foreshortening applies here as well. A foreshortened perspective on events in the distant past makes them small and compact; a foreshortened perspective on events right up close to us makes them loom unnaturally large. This magnification of present events looming terrifyingly large before us is the egocentric or anthropocentric view of history, which always magnifies the present, even when the present is not particularly interesting or important historically.

The idea that we now live in a unique moment in history — something one hears all the time, and which in fact I heard repeatedly at the IBHA conference — is so patently an expression of our perspective on history that I wonder that more people don’t hear how they sound when they say things like this. I’ve been working on this problem for some time (though I have not yet arrived at any definitive formulations), so I notice it right away when I hear people talking about our temporal uniqueness.

It is interesting to note that doomsday arguments begin by assuming exactly the opposite, i.e., by assuming our temporal mediocrity, positioning the present somewhere midway between speciation and extinction. Yet I can easily imagine a devotee of the idea of the present moment as a moment of unique crisis asserting, within the doomsday argument, the uniqueness of the present moment by virtue of its centrality in history. With this observation we see that the ascription of uniqueness is not based on evidence, but is a template placed over the present for its interpretation.

Tagged: Joseph Voros, descent, civilizational failure, anthropocentrism, temporal illusion, futurism, doomsday argument, foreshortening


20th August 2014


Scaling Up Origins of Life Research

In my previous post, Time Out of Mind, I mentioned my early reaction to the Miller-Urey experiment, and how my inability in my youth to understand more than four billion years of terrestrial history made me wonder why the experiment wasn’t simply continued until something living crawled out of the experimental apparatus.

Since the time of Miller and Urey’s experiment an enormous amount of work has been done in origins of life research, both from a bottom-up perspective, involving numerous experiments that demonstrate the synthesis of organic molecules from inorganic precursors, and from a top-down perspective, most notably the molecular phylogeny of Carl Woese, which points to what the earliest common ancestor of all life on Earth may have been like (short answer: probably an extremophile).

The possibilities for the origins of life on Earth now extend far beyond Darwin’s “warm little pond,” an Oparin ocean, or Miller and Urey’s primordial atmosphere. One of the most interesting developments has been the discovery of black smokers under the oceans, where superheated water spews out from vents, and there are organisms that live off this heat and without photosynthesis (chemosynthetic thermophiles). We know now, for example, that interesting chemical reactions can also take place in ice and frozen environments. Many other possibilities have been suggested and investigated. (On this I recommend the works of Robert Hazen, both his lectures for The Great Courses and his book Genesis: The Scientific Quest for Life’s Origin.)

While my youthful naivete about running the Miller-Urey experiment until it produced something more interesting than brown sludge was misguided, it wasn’t completely misguided. I have returned to the thought occasionally in a more sophisticated form, and it is interesting to imagine what kind of science might be undertaken if the resources were available.

If origins of life research could get the funding for very large scale projects (e.g., the ISS, ITER, NIF, and the LHC) — that is to say, if origins of life research were to become big science — it would be possible to undertake a long-term, large-scale origins of life research project that could imitate the conditions of the early Earth on a much larger scale, not merely the atmospheric conditions, as were simulated in the Miller-Urey experiment, but a range of different conditions present on the early Earth at different times as it passed through its dramatic earliest formation.

Imagine, if you will, an experiment on the scale of Biosphere 2, or larger, except beginning with an enclosed and sterile environment simulating not just the primordial atmosphere, but, like Biosphere 2, having several distinct climates — an ocean-like unit, agitated to produce waves and with hydrothermal vents under it, a frozen landscape, a barren desert, and the like. In addition to particular environments, there would also need to be some kind of simulation of the large scale processes of Earth, since planetary and climatological changes have played a major role in the evolution of life on Earth. Most importantly, some way to simulate the effects of tectonic plate movement, and the exchanges between environments that occur as a result, and Milankovitch cycles, which have driven climate cycles on Earth, would be necessary.

If it were possible to build an enormous environment simulating the early Earth across a range of climates, and to run this experiment for years or, better, decades, I suspect that we would learn some interesting things that origins of life research to date has not yet discovered. Just as physicists often observe that new advances in physics come from larger machines that can reach higher energy levels, so too the historical sciences (which include geology, climatology, biology, paleontology, inter alia) will probably benefit from much larger machines and experiments that can be allowed to run for extended periods of time. With the historical sciences, it is more time that is needed, rather than more energy (although an experiment such as I described would require an enormous amount of energy, it is well within the limits of contemporary technology). Particle physics doesn’t require as much time, but for new discoveries it requires higher energy levels.

Such an experiment would have to be open-ended (like the open-ended history of Earth itself), and would, in some senses, not precisely conform to the scientific method as it is now pursued in more constrained contexts.

I am reminded of a now-famous quote from Roger Revelle:

"Human beings are now carrying out a large scale geophysical experiment of a kind that could not have happened in the past nor be reproduced in the future."

While we cannot precisely reproduce the early Earth, we could at least simulate parts of Earth’s history and its climatological and biological development, and this might allow us to model some of the consequences that Revelle suggested could not have happened before and could not be repeated. Such an undertaking would not only shed light on origins of life research, but also on our current geophysical and ecological questions that stand at the center of the contemporary debate about the direction of industrial-technological civilization. 

Tagged: origins of life, Miller-Urey experiment, Biosphere 2, big science, black smokers, hydrothermal vent, Oparin ocean, warm little pond, Roger Revelle, earth science


19th August 2014


Time Out of Mind


My interest in science was piqued at a quite early age, and I began reading articles and textbooks long before I had the knowledge or understanding to appreciate what I was reading. It was, in fact, this constant reading in science that eventually built up my background knowledge and made it possible for me to understand what I was reading, but my early reading was exploratory, and in addition I read a lot of garbage that I eventually abandoned. Science, however, proved itself to be the real deal, and so I did not abandon it.

One of the episodes of my early intellectual life that I remember was the first time that I read about the Miller-Urey experiment; I was quite young at the time, probably between 10 and 14. This is probably one of the most famous experiments in 20th century science, and it marks the beginnings of origins of life research. I was quite fascinated by this, but I asked myself, if this experiment could produce what were often called “the basic building-blocks of life,” why not leave the apparatus intact and let the experiment run until something crawls out the other end?

This demonstrates the extent to which I had no conception whatsoever of geological time, much less cosmological time. If some knowledgeable adult had wandered along to explain to me that the apparatus would have to run for several billion years, and in far less time the apparatus itself would have collapsed into dust, I might have started to understand at that time, but no one did in fact point this out to me. I had to develop my own sense of geological time over a period of many years and decades.

My lack of any grasp of geological time may also be compared to another early memory of mine. When I was still a young child, my mother took me to hear a talk by an astronomer at Clatsop Community College in Astoria, Oregon. I can still remember how the astronomer talked about the expansion of the universe, and how he compared the universe to a loaf of raisin bread. When you start, the raisins are close together in the dough, but as the dough rises and bakes all the raisins end up farther away from each other because the whole loaf has increased in size. When I heard this I felt a little sad, as it made me think that individual stars were getting farther apart, and that this would make interstellar travel more difficult. I did not understand at that time that galaxies are the basic building blocks of the universe for astronomers, and that even if galactic clusters are moving apart, that doesn’t mean that individual stars within a given galaxy are moving farther apart.
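The point the astronomer was gesturing at can be made quantitative with Hubble's law, v = H₀d: recession velocity grows with distance, so cosmic expansion only matters at intergalactic scales, while gravitationally bound systems like a single galaxy do not expand at all. A rough back-of-the-envelope sketch (the value of H₀ and the distances are approximate, and the "across one galaxy" figure is deliberately naive):

```python
# Hubble's law: recession velocity v = H0 * d.
# H0 is roughly 70 km/s per megaparsec (approximate modern value).
H0 = 70.0  # km/s per Mpc

def recession_velocity_km_s(distance_mpc):
    """Velocity at which cosmic expansion carries an UNBOUND object away."""
    return H0 * distance_mpc

# Naive figure across the Milky Way (~30 kpc = 0.03 Mpc). In reality the
# galaxy is gravitationally bound, so its stars do not recede at all.
v_across_galaxy = recession_velocity_km_s(0.03)

# A galaxy ~100 Mpc away (roughly the distance of the Coma cluster).
v_distant_galaxy = recession_velocity_km_s(100.0)

print(f"naive expansion across one galaxy: {v_across_galaxy:.1f} km/s")
print(f"recession of a galaxy ~100 Mpc away: {v_distant_galaxy:.0f} km/s")
```

Even the naive within-galaxy figure (about 2 km/s) is negligible next to stellar orbital speeds (roughly 220 km/s for the Sun around the Milky Way), and in practice it is zero, because gravity holds the galaxy together; the 7,000 km/s recession of a distant galaxy is what the raisin-bread analogy actually describes.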

My melancholy was not entirely unjustified, however. Recently I wrote about the “end of cosmology” thesis (in The End of Archaeology?), such that the expansion of the universe will leave galaxies isolated and observers unable to determine the expanding nature of the universe — this would certainly be a blow to large-scale spacefaring civilizations (under these conditions there could be no intergalactic civilizations) and thus to my wishes for a human future in the universe.

An appreciation of the scale of time necessary to understand evolution or the scale of space necessary to appreciate the expansion of galactic clusters is a matter of perspective, and this perspective is usually developed over the course of a lifetime of studying these large-scale phenomena. Is it possible to communicate these scales of space and time to the uninitiated?

This was a question that came up many times (at least implicitly) at the IBHA “big history” conference that I recently attended (cf. Day 1, Day 2, and Day 3). Given my personal experience of coming to an adequate appreciation of the scales of space and time of big history — something which, as an autodidact, I eventually came to on my own — I would have liked to have seen a more explicit treatment of this theme.

There are many pedagogical devices that have been and might be employed to give an intuitive exposition of the size of the world. Which of them are most effective, and in which circumstances? This is a concrete scientific question that could be answered with a little empirical research. While different kinds of learners would probably find different intuitive methods to be effective, we could probably come up with some good rules of thumb about what works.
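One such device is the familiar "cosmic calendar," which compresses the entire history of the universe into a single year. A minimal sketch of the scaling, assuming an age of 13.8 billion years for the universe and rounded ages for the events:

```python
# Map an age in years-before-present onto a single "cosmic year,"
# with the Big Bang at midnight on January 1 and the present at
# midnight on December 31.
AGE_OF_UNIVERSE_YEARS = 13.8e9
SECONDS_PER_YEAR = 365 * 24 * 3600

def seconds_before_midnight(years_ago):
    """How long before the end of the cosmic year an event occurs."""
    return years_ago / AGE_OF_UNIVERSE_YEARS * SECONDS_PER_YEAR

# Homo sapiens, ~300,000 years ago:
human_secs = seconds_before_midnight(3e5)
# All of recorded history, ~5,000 years:
history_secs = seconds_before_midnight(5e3)

print(f"Homo sapiens appears ~{human_secs / 60:.0f} minutes before midnight")
print(f"recorded history spans the last ~{history_secs:.0f} seconds")
```

On these assumptions the whole of the human species occupies about the last eleven minutes of the year, and all of recorded history about the last eleven seconds, which is the kind of jolt to egocentric intuitions that such devices are meant to deliver.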

What strikes me as most important (and most difficult) is simply “waking up” someone from their quasi-Kantian dogmatic slumbers and getting them to realize that there is more to the world than revealed in their immediate personal experience of space and time. Getting beyond this initial egocentric bias is the first step, but is this egocentric bias an obstacle or an opportunity? Is it best to start from the personal experience of time and history and slowly move out from there, step by step, or to seek a dramatic break with the personal and to effect a paradigm shift that places the individual within a much larger cosmological context?

It strikes me now, after writing the above, that another theme of the IBHA “big history” conference was the human need for meaning. When I wrote above about getting the individual to see that there is more to the world than their personal experience of it, I realized that much of the need for meaning is expressed in terms exactly like this, e.g., there is more to the world than what we can see. The human hunger for meaning, coupled with our cognitive biases, usually develops this feeling in a non-naturalistic direction, so the question for intellectual development becomes whether this feeling can be exapted by science and taken in a naturalistic direction, so that the individual is shown that there is indeed a great depth to the world that far transcends our personal experience, but that we can come to some knowledge of it if we will but direct our minds to the problem.

Tagged: cosmological time, geological time, time scales, big history, Clatsop Community College, end of cosmology


18th August 2014


A Rant about Operating Systems

Someone once said that if you live in a Frank Lloyd Wright house, you have to live inside Frank Lloyd Wright’s head: the sense of Wright’s thinking was so completely expressed in his architecture that his designs had a de facto normative result of forcing occupants of Wright buildings to live as Wright intended them to live, which was the way that Wright thought people ought to live. This is not necessarily the way that most people want to live.

Just so, the users of computers are forced to live inside the heads of the software engineers at Microsoft, Apple, and the makers of the increasing number of mobile platforms (which I do not use and of which I am therefore ignorant). This is not a pleasant experience.

Last Saturday night my computer simply failed to start. I punched the button like I always do, and it wouldn’t start up. It had given no indication of failure, but fail it did. So the next day I went and bought a new computer, because I have to have a computer to do my business. I did not at all want to buy a new computer, and most of all I did not want to have to endure using Windows 8, but it was a Hobson’s choice: the choice between what is offered and nothing.

So, how horrible is Windows 8? Well, it may not be as bad as Mac OS in terms of its grating juvenilism (something I previously discussed in Infantilism in the Information Age), but it is pretty much as bad as I expected — or worse. People used to talk about the “craplets” that appeared on the screens of their newly purchased computers. The Windows 8 OS is craplets on steroids: hundreds of “apps” that are supposed to be there for my convenience are foisted upon me. Already I have literally spent hours throwing away the garbage that I don’t want and don’t need, garbage that is cluttering this computer and slowing down its operation. This is incredibly irritating. I feel that Bill Gates (though he is no longer CEO of Microsoft) personally owes me several days of my life that have been lost struggling against this inane operating system.

Yes, against. Among the greatest hindrances to productivity today are the trendy ideas of software engineers that they push on an unsuspecting public. One often has the feeling that one is working against one’s computer, trying to get it to do the simple and obvious thing one wants it to do, and to keep it from trying to sell one something one doesn’t want. If network television in the past was irritating for its constant punctuation of its programming with paid advertising, the computer age in media is that much worse, as you cannot accomplish the most basic functions of your business day without being assaulted by something that is pushed at you to “help” you. Thanks. I don’t need that kind of help.

The perfect operating system would be silent and invisible, operating completely anonymously in the background, making it possible for you to do your work with the least hindrance. In other words, a perfect operating system should be like the perfect squire to a knight, as described by Jaime Lannister in Game of Thrones, when he is praising the young man who squired for him, and whom he then kills moments later:

"You knew when you were needed and when to go away. It’s a rare talent. Most of my squires, they mean well, but young men with big jobs, they tend to overdo them." (Season 2, episode 7, "A Man Without Honor")

The problem with operating systems is that the companies that design, build, market, and distribute them are never content to be merely an operating system. They want to be recognized, they want to be a brand, they want to be the focus of your attention, the belle of the ball, your universal go-to reference, or, worst of all, your best friend.

So I continue to struggle with my new computer. It tries relentlessly to force me to use my computer in the way approved by Microsoft as the “correct” way that computers today are “supposed” to be used, and I push back just as relentlessly, trying to use the computer the way I want to use it. I’m not defeated yet. Unbowed, I battle on. 

End of rant.

Tagged: Windows 8, Microsoft, operating system, computers, miserable failure, rant, craplets, crapware, juvenilism, infantilism, Jaime Lannister


17th August 2014


AI, Robotics, and the Future of Jobs →

This excellent article was brought to my attention by Siggi Becker. It lays out the key themes, the arguments that are being made now, and some informed opinion in an attempt to understand the impact of present automation trends continued into the future.

This issue is of some interest to me as I have blogged extensively about what is now called technological unemployment, including…

I have noted many times that automation and technological unemployment have been features of future studies from the Luddites through the 1970s. There were routine predictions of mass unemployment, as well as more optimistic scenarios of twenty- or even ten-hour work weeks and societies restructured around an economy of maximized abundance. (This is one of those spectacular failures of twentieth-century futurism that has made contemporary futurists more cautious and circumspect.) The economy shook off both the optimists and the pessimists and muddled through — though it did, to be sure, create many new jobs in new industries.

The assumption is that this will happen again. It has become a commonplace among contemporary economists to point out (rightly) that in all previous technological disruptions to the economy, after a “temporary phase of maladjustment” (often causing significant human anguish that is not recorded in economic statistics) the economy always goes on to create more jobs in new industries. This is beyond dispute. 

What is up for dispute, however, is whether the next wave of technological unemployment brought about by AI and automation will be like every previous historical development. Induction tells us that the future will be like the past, but we also know from historical experience that unprecedented events do occur, and it is usually just when we are getting comfortable with a high level of inductive confirmation that we are blindsided by an unexpected and unpredicted development that upends every historical precedent. On this side of the equation, we must note that we have more computer power at our disposal now than ever before in history, that this computer power and its AI capabilities are steadily increasing, and that these trends may at some point pass an unprecedented threshold.

The correct question to ask, then, is not “Will automation displace jobs?” We know that it will. The correct question is, “Will newly automated sectors of the economy contribute to job creation in new industries and in other sectors?” This, if we are going to be honest, is unknown. Historical precedent suggests that the answer is “yes,” but we cannot exclude a priori the possibility of a new phase in the development of industrial-technological civilization, however unlikely.

Tagged: AI, artificial intelligence, automation, historical precedent, induction, technological unemployment


14th August 2014


Post-Singularity Humanity


Kenneth Clark, in his Civilisation: A Personal View, said that most great movements of thought last about twenty years before they enter into terminal decline. This seems to be the case with the technological singularity.

The abstract of Vernor Vinge’s original 1993 paper on the technological singularity, “The Coming Technological Singularity,” reads as follows:

"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."

Now, in 2014, a little more than twenty years later, futurists are starting to distance themselves from the technological singularity (notwithstanding the fact that the academic publisher Springer just came out with the massive anthology Singularity Hypotheses: A Scientific and Philosophical Assessment). I saw this last week when I attended the 2014 IBHA “big history” conference, and we are seeing more articles like this: The “Singularity” is cancelled. For now.

Whether or not the technological singularity remains a useful concept at this point I will not consider, but I will ask, even if the concept remains valid and useful, what will it matter to humanity if the technological singularity comes about?

If “the human era will be ended” means that humanity will be extinct, then it’s all over for humanity except for the memory preserved of us by the machines; however, if “the human era” simply means “the era during which humanity dominated Earth,” then there remains a place in history for a marginalized humanity. What might the life of marginalized humanity be like?

Even if, in the fullness of time (a fullness brought about by the technological singularity), human being comes to seem outmoded and hopelessly limited in comparison to what is possible on an absolute scale, there will still be a small number who will identify a certain intrinsic value in the human condition, and who will therefore continue to embody the human condition as we know it today, even if “better” embodiments — enhanced embodiments — are available, and there is no stigma attached to enhancement.

Under conditions of greatly increased technology that would make human being and the human condition seem outmoded — a condition that would be facilitated by machine superintelligence — the technology would be readily available to make the classic expansionist vision of a human future in space possible. Since the human condition as it is (and not as it might be in a post-singularity era), i.e., the human condition in which some few would find intrinsic value, involves a desire for exploration, adventure, and danger, there would be those among the enthusiasts of original human nature who would want to enact this expansionist vision.

In other words, for post-singularity humanity, in a scenario that does not involve human extinction, there is no intrinsic reason to suppose that “traditional” forms of futurism could not (or would not) be realized, and the means would be readily available to do so.

I have approached this idea earlier from another angle. Some time ago in my post The Fractal Structure of Exponential Growth I wrote:

"…something like the technological singularity could occur, but it would just as rapidly disappear from our view. We would go on devoting an hour to a leisurely lunch, even while at far higher magnifications of time further revolutions of exponential growth were going on unseen by us."

The machines may leave us in the dust and go on to other things, but in doing so they may leave us with the wherewithal to accomplish some of our most optimistic visions of the future.   


Tagged: singularity, technological singularity, Vernor Vinge, superintelligence, human nature, human being, human condition, futurism