The religious modes of thought typical of settled peoples usually focus on orthodoxy, that is to say, unquestioned belief in particular propositions. A body of religious propositions to which all members of a society are expected to assent requires a priestly class to formulate these propositions and to preside over their interpretation. This priestly class is either identical with the ruling political class (in a system of caesaropapism) or it exists as a parallel hierarchy, resembling the hierarchical structure of the political class but distinct from it (as in medieval Europe).
It has been the peoples who have been settled for the longest period of time — the Semitic peoples of the Levant — who gave us the “religions of the book,” that is to say, Judaism, Christianity, and Islam, in which a written text (or texts) plays a central role in religious observance. The cataloging of dogmas in encyclopedic works of theology was the natural extension of this institutionalization of religious experience.
The religious modes of thought of nomadic peoples tend to be more oriented toward shamanic ritual practices — ritual practices that occur independent of any body of doctrine, any orthodoxy, or any ecclesiastical hierarchy. The use of mind-altering substances, drumming, and dancing (the elements of mystical experience that I have discussed previously in Algorithms of Ecstasy) is focused on personal mystical experience and not doctrinal conformity. Similarly, stories of myths told around a campfire are likely to change slightly with each re-telling, and so naturally adapt to the changing circumstances of a nomadic people. Here it is the story and not the book that is central. The religious experience of nomadic peoples is far less likely to be mediated by institutions than the religious experience of settled peoples.
It was the Turks, a nomadic people before they settled down in the form of the Ottoman Empire, who gave us the whirling dervishes: unconcerned with orthodoxy, and more interested in ecstasy than obedience, they seek altered states of consciousness through a spinning dance. Here practices likely to induce a mystical experience have become institutionalized after the fashion of a settled society, but still retain the stamp of their shamanistic origins.
Joseph Campbell often made a fourfold distinction in the functions served by mythology, these four being, 1) the mystical, 2) the cosmological, 3) the social, and 4) the personal. The mystical function is the reconciliation of consciousness (sentience) with the world that consciousness finds; the cosmological function is an explanatory framework that accounts for the way the world is; the social function is both a descriptive and normative account of the social order; the personal function is providing a guide for the individual’s life, and being that which the individual calls upon in his hour of need. (I give these summaries from memory, without consulting any of Campbell’s texts, so please make allowances for that if the above is not clear.)
If we look at the different characteristics of religious experience in settled and nomadic societies, it seems clear that nomadic societies are much more likely to adequately fulfill the mystical and personal functions, while settled societies are more likely to adequately fulfill the cosmological and social functions. Is it possible to imagine a society in which all four functions of mythology could be experienced with equal vividness? Might there be a society in which an organic mythology, naturally arising from the common experiences of a people, could have the personal immediacy of mystical experience and serve as a vital guide to life, while also incorporating some systematic reflection on the world and society?
This isn’t really a question about religion and religious experiences, as much as it is a question about the structure of human society. Is it possible to structure a human society so that the fulfillment of all the functions of myth is possible?
When a principle is taken personally we say that a person not only “talks the talk” but also “walks the walk.” When there is a glaring gap between the talk and the walk, we take that individual for a hypocrite who says, “Do as I say, not as I do.”
But sometimes it is very difficult to determine the relation between walk and talk, or between principle and practice. Personal behavior does not always have a direct relationship to moral principles, and an indirect relationship is not always hypocrisy. Let me try to explain what I mean by this.
I was thinking about these issues recently after reading the article Put down the smart drugs – cognitive enhancement is ethically risky business, by Nicole A Vincent and Emma A. Jane.
This is one of those articles, not uncommon on the internet, where the comments section is at least as interesting as, if not more interesting than, the article itself. The above-linked article is from an online publication called “The Conversation” (I have been increasingly noticing their content recently, but almost always in the context of disagreement), and it can be said that the comments became a true conversation — the discussants were intellectually engaged and demonstrated a high degree of “idea flow” (a term I only recently learned from Cadell Last’s presentation at the IBHA conference).
In the comments on this article about cognitive enhancement drugs — also known as nootropics — one of the authors, who had argued against the use of cognitive enhancement drugs in the article, explicitly acknowledged having formerly used one of the drugs most discussed. Nicole A Vincent wrote in a comment to her own article:
Ok, so here’s my own disclosure: I have used modafinil in the past, but not these days. And for me it worked a treat. Never experienced jetlag, even after multiple 24 hour trips between The Netherlands and Australia, or Australia and the US, and I could just keep working unimpaired, which is precisely what I did. And did and did and did…
The 30 or so 200mg tablets (obtained online), which I took in half-quantity doses in the morning for three or four days after traveling to the other side of the globe, paired up with sleeping aids at night to help me get a restful sleep for a similar three to four days, lasted a year and a half, but eventually they ran out. In retrospect, I worked too much in that time. The ability to do this became a reason to not say “no” to participate in workshops or conferences. I don’t think this was a good thing.
I guess I wasn’t greatly surprised by this, but there is always a sense of cognitive dissonance when regulators reveal the lessons learned that led them to their current position.
If you asked me personally about my use of chemicals, I would tell you that I don’t make use of any drugs, legal or otherwise — that includes the most common forms of legalized drugs such as alcohol, tobacco, and coffee. I can easily imagine that if someone were to ask me about this, and heard my response, they might assume that I was opposed to the use of drugs, and likely that I would also be opposed to cognitive enhancement drugs. This assumption would be wrong. Although I don’t use any drugs myself, I am completely indifferent to whether others use drugs, and I have no objection whatsoever to the use of cognitive enhancement drugs.
Moreover, I would be likely to say that it will be nearly impossible to stop the use of cognitive enhancement drugs, and that their use will gradually become the norm. Trying to regulate them will only slow down this process, not stop it, which may serve a purpose in terms of reducing (or at least managing) social tension, but one cannot pretend that any such action does not come at a cost at least equal to that of making no attempt to regulate nootropics.
I personally don’t use cognitive enhancement drugs but in principle have no objection to them; the author of the above-linked article has personally used cognitive enhancement drugs, but is in principle opposed to them. This precise mirror image of positions not only reveals a gap between the personal and the principled, but also suggests that we are both hypocrites.
Am I failing to walk the walk of transhumanism (at least, the thin end of the wedge of transhumanism) even while talking the talk? Am I contributing to a society in which people routinely use cognitive enhancement drugs to obtain a competitive advantage, possibly at a long-term cost to their health? Not exactly.
If others want to risk their health with unknown side-effects from drugs, whether nootropics or otherwise, I want the individual to be the judge of their own best interest. I believe that it is in my interest not to ingest any chemicals (except those present in my processed food), but if others feel otherwise, that is fine with me. In choosing not to use cognitive enhancement drugs I may suffer material consequences from my decision, as others who take these drugs are able to function at a higher level. So I am voluntarily accepting a potential handicap for avoiding a potential risk. I may fall behind. So be it.
And there is an open question whether any of the drugs now called nootropics are cognitive enhancers, or merely stimulants. One of the contributors to the online discussion, Alan W. Shorter, wrote, “…this article was not about cognitive enhancers at all; just stimulants.” One might similarly observe that the article wasn’t really about the use of cognitive enhancement drugs at all, but rather an examination of the kind of society in which we should live: a regulatory nanny state in which individuals are “nudged” (or even shoved) by government policies into behaviors approved by the state, or an open society in which individuals can make real choices about how they will live their lives.
The world is full of self-destructive individuals and self-destructive opportunities. We should not be surprised that the two often meet up. Cognitive enhancement drugs might be the occasion for further self-destructive behavior (as in the film Limitless, often cited in relation to nootropics), but they are not likely to be as destructive as alcohol, which has ruined countless lives but remains widely available. It seems we prefer the devil we know.
On Illusions in Time…
At the 2014 IBHA conference, futurist Joseph Voros suggested that past instances of descent (his preferred term for civilizational failure) are viewed from a foreshortened perspective, so that we see them as obvious compact events in the past, whereas from the perspective of a participant in the midst of the descent in question civilizational failure would not have been obvious (at least, as a compact, discrete event). It might even be invisible. In other words, individuals living through the decline and fall of Rome would not have known it to be such.
While formulated in many ways, this is a familiar theme. Another speaker at the IBHA conference (Ken Baskin, if I remember correctly) made a similar point when he implied that we are living in an axial age now, though we may not realize it. Our perspective from within history means that we see the past and the future differently from how we see the present, and these different perspectives result in temporal illusions that may distort an historical event by magnifying or diminishing it.
That we might already have passed the tipping point of decline or even extinction is a commonplace, and one that I have previously addressed, as when I wrote…
“If we fail to do what is necessary to perpetuate the human species and thus precipitate the end of the world indirectly by failing to do what was necessary to prevent the event, and if some alien species should examine the remains of our ill-fated species and their archaeologists reconstruct our history, they will no doubt focus on the problem of when we turned the corner from viability to non-viability. That is to say, they would want to try to understand the moment, and hence possibly also the nature, of the suicide of our species. Perhaps we have already turned that corner and do not recognize the fact; indeed, it is likely impossible that we could recognize the fact from within our history that might be obvious to an observer outside our history.”
There is, however, another way to consider this question. While our perspective from within history may make it difficult or impossible for us to see that we are in the midst of an historical development as it happens, it is just as true that our perspective from within history means that we are likely to see historical developments that are not in fact happening; it is only as a result of our being embedded in history and interested in the outcome that we attribute a larger significance and meaning to events that have been blown out of proportion by our perspective. In short, the appearance of an historical development may be mere appearance, an artifact of our anthropocentrism, while the reality is that nothing of great import is happening.
The same metaphor of foreshortening applies here as well. A foreshortened perspective on events in the distant past makes them small and compact; a foreshortened perspective on events right up close to us makes them loom unnaturally large. It is this effect of present events looming terrifyingly large before us that produces an egocentric or anthropocentric view of history, a view that always magnifies the present, even if the present is not particularly historically interesting or important.
The idea that we now live in a unique moment in history — something one hears all the time, and which in fact I heard repeatedly at the IBHA conference — is so patently an expression of our perspective on history that I wonder that more people don’t hear how they sound when they say things like this. I’ve been working on this problem for some time (though I have not yet arrived at any definitive formulations), so I notice it right away when I hear people talking about our temporal uniqueness.
It is interesting to note that doomsday arguments begin by assuming exactly the opposite, i.e., by assuming our temporal mediocrity, positioning the present somewhere midway between speciation and extinction. Yet I can easily imagine a devotee of the idea of the present moment as a moment of unique crisis asserting the uniqueness of the present moment within the doomsday argument by appealing to its centrality in history. With this observation we see that the ascription of uniqueness is not based on evidence, but is a template placed over the present for its interpretation.
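The mediocrity assumption behind doomsday arguments can be made concrete. The following Python sketch illustrates J. Richard Gott's "delta t" version of the reasoning (one version among several, and not necessarily the one any particular author has in mind): if we are equally likely to be observing at any point in our species' total lifetime, then bounding the already-elapsed fraction away from 0 and 1 yields confidence bounds on the future.

```python
def doomsday_bounds(t_past, confidence=0.95):
    """Gott-style 'delta t' bounds on remaining duration.

    Mediocrity assumption: the fraction r = t_past / t_total of the total
    lifetime already elapsed is equally likely to lie anywhere in (0, 1),
    so with the given confidence r falls in ((1-c)/2, (1+c)/2).
    """
    c = confidence
    r_low, r_high = (1 - c) / 2, (1 + c) / 2
    # t_future = t_past * (1/r - 1): a small elapsed fraction means a long
    # future, so the minimum future comes from r_high, the maximum from r_low
    return t_past * (1 / r_high - 1), t_past * (1 / r_low - 1)

# Example: taking Homo sapiens to be roughly 200,000 years old, the 95%
# interval for the species' future runs from t_past/39 to 39 * t_past.
low, high = doomsday_bounds(200_000)
```

The striking feature of the calculation is how little it assumes: no model of extinction risk at all, only the refusal to regard the present observer as specially placed, which is precisely the temporal mediocrity noted above.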
In my previous post, Time Out of Mind, I mentioned my early reaction to the Miller-Urey experiment, and how my inability in my youth to understand more than four billion years of terrestrial history made me wonder why the experiment wasn’t simply continued until something living crawled out of the experimental apparatus.
Since the time of Miller and Urey’s experiment an enormous amount of work has been done in origins of life research, both from a bottom-up perspective, involving numerous experiments that demonstrate the synthesis of organic molecules from inorganic precursors, and from a top-down perspective, most notably the molecular phylogeny of Carl Woese, which points to what the earliest common ancestor of all life on Earth may have been like (short answer: probably an extremophile).
The possibilities for the origins of life on Earth now extend far beyond Darwin’s “warm little pond,” an Oparin ocean, or Miller and Urey’s primordial atmosphere. One of the most interesting developments has been the discovery of black smokers under the oceans, where superheated water spews out from vents, and there are organisms that live off this heat and without photosynthesis (chemosynthetic thermophiles). We know now, for example, that interesting chemical reactions can also take place in ice and frozen environments. Many other possibilities have been suggested and investigated. (On this I recommend the works of Robert Hazen, both his lectures for The Great Courses and his book Genesis: The Scientific Quest for Life’s Origin.)
While my youthful naivete about running the Miller-Urey experiment until it produced something more interesting than brown sludge was misguided, it wasn’t absolutely or completely misguided. I have returned to the thought occasionally in a more sophisticated form, and it is interesting to imagine what kind of science might be undertaken if the resources were available.
If origins of life research could get the funding for very large scale projects (e.g., the ISS, ITER, NIF, and the LHC) — that is to say, if origins of life research were to become big science — it would be possible to undertake a long-term, large-scale origins of life research project that could imitate the conditions of the early Earth on a much larger scale, not merely the atmospheric conditions, as were simulated in the Miller-Urey experiment, but a range of different conditions present on the early Earth at different times as it passed through its dramatic earliest formation.
Imagine, if you will, an experiment on the scale of Biosphere 2, or larger, except beginning with an enclosed and sterile environment simulating not just the primordial atmosphere, but, like Biosphere 2, having several distinct climates — an ocean-like unit, agitated to produce waves and with hydrothermal vents under it, a frozen landscape, a barren desert, and the like. In addition to particular environments, there would also need to be some kind of simulation of the large scale processes of Earth, since planetary and climatological changes have played a major role in the evolution of life on Earth. Most importantly, some way to simulate the effects of tectonic plate movement, and the exchanges between environments that occur as a result, and Milankovitch cycles, which have driven climate cycles on Earth, would be necessary.
If it were possible to build an enormous environment simulating the early Earth across a range of climates, and to run this experiment for years or, better, decades, I suspect that we would learn some interesting things that origins of life research to date has not yet discovered. Just as physicists often observe that new advances in physics come from larger machines that can reach higher energy levels, so too the historical sciences (which include geology, climatology, biology, paleontology, inter alia) will probably benefit from much larger machines and experiments that can be allowed to run for extended periods of time. With the historical sciences, it is more time that is needed, rather than more energy (although an experiment such as I have described would require an enormous amount of energy, it is well within the limits of contemporary technology). Particle physics doesn’t require as much time, but for new discoveries it requires higher energy levels.
Such an experiment would have to be open-ended (like the open-ended history of Earth itself), and would, in some senses, not precisely conform to the scientific method as it is now pursued in more constrained contexts.
I am reminded of a now-famous quote from Roger Revelle:
"Human beings are now carrying out a large scale geophysical experiment of a kind that could not have happened in the past nor be reproduced in the future."
While we cannot precisely reproduce the early Earth, we could at least simulate parts of Earth’s history and its climatological and biological development, and this might allow us to model some of the consequences that Revelle suggested could not have happened before and could not be repeated. Such an undertaking would not only shed light on origins of life research, but also on our current geophysical and ecological questions that stand at the center of the contemporary debate about the direction of industrial-technological civilization.
My interest in science was piqued at a quite early age, and I began reading articles and textbooks long before I had the knowledge or understanding to appreciate what I was reading. It was, in fact, this constant reading in science that eventually built up my background knowledge and made it possible for me to understand what I was reading, but my early reading was exploratory, and in addition I read a lot of garbage that I eventually abandoned. Science, however, proved itself to be the real deal, and so I did not abandon it.
One of the episodes of my early intellectual life that I remember was the first time that I read about the Miller-Urey experiment; I was quite young at the time, probably between 10 and 14. This is probably one of the most famous experiments in 20th century science, and it marks the beginnings of origins of life research. I was quite fascinated by this, but I asked myself, if this experiment could produce what were often called “the basic building-blocks of life,” why not leave the apparatus intact and let the experiment run until something crawls out the other end?
This demonstrates the extent to which I had no conception whatsoever of geological time, much less cosmological time. If some knowledgeable adult had wandered along to explain to me that the apparatus would have to run for several billion years, and in far less time the apparatus itself would have collapsed into dust, I might have started to understand at that time, but no one did in fact point this out to me. I had to develop my own sense of geological time over a period of many years and decades.
My lack of any grasp of geological time may also be compared to another early memory of mine. When I was still a young child, my mother took me to hear a talk by an astronomer who spoke at Clatsop Community College in Astoria, Oregon. I can still remember how the astronomer talked about the expansion of the universe, and how he compared the universe to a loaf of raisin bread. When you start, the raisins are close together in the dough, but as the dough rises and bakes all the raisins end up farther away from each other because the whole loaf has increased in size. When I heard this I felt a little sad, as it made me think that individual stars are getting farther apart, and that would make interstellar travel more difficult. I did not understand at that time that galaxies are the basic building blocks of the universe for astronomers, and even if galactic clusters are moving apart, that doesn’t mean that individual stars within a given galaxy are moving farther apart.
My melancholy was not entirely unjustified, however. Recently I wrote about the “end of cosmology” thesis (in The End of Archaeology?), such that the expansion of the universe will leave galaxies isolated and observers unable to determine the expanding nature of the universe — this would certainly be a blow to large-scale spacefaring civilizations (under these conditions there could be no intergalactic civilizations) and thus to my wishes for a human future in the universe.
An appreciation of the scale of time necessary to understand evolution or the scale of space necessary to appreciate the expansion of galactic clusters is a matter of perspective, and this perspective is usually developed over the course of a lifetime of studying these large-scale phenomena. Is it possible to communicate these scales of space and time to the uninitiated?
This was a question that came up many times (at least implicitly) at the IBHA “big history” conference that I recently attended (cf. Day 1, Day 2, and Day 3). Given my personal experience of coming to an adequate appreciation of the scales of space and time of big history — something which, as an autodidact, I eventually came to on my own — I would have liked to have seen a more explicit treatment of this theme.
There are many pedagogical devices that have been and might be employed to give an intuitive exposition of the size of the world. Which of them are most effective, and in which circumstances? This is a concrete scientific question that could be answered with a little empirical research. While different kinds of learners would probably find different intuitive methods to be effective, we could probably come up with some good rules of thumb about what works.
What strikes me as most important (and most difficult) is simply “waking up” someone from their quasi-Kantian dogmatic slumbers and getting them to realize that there is more to the world than revealed in their immediate personal experience of space and time. Getting beyond this initial egocentric bias is the first step, but is this egocentric bias an obstacle or an opportunity? Is it best to start from the personal experience of time and history and slowly move out from there, step by step, or to seek a dramatic break with the personal and to effect a paradigm shift that places the individual within a much larger cosmological context?
It strikes me now, after writing the above, that another theme of the IBHA “big history” conference was the human need for meaning, and when I wrote above about getting the individual to see that there is more to the world than their personal experience of it, I realize that much of the need for meaning is expressed in terms exactly like this, e.g., there is more to the world than what we can see. The human hunger for meaning coupled with our cognitive biases usually develops this feeling in a non-naturalistic direction, so the question for intellectual development becomes whether this feeling can be exapted by science and taken in a naturalistic direction, so that the individual is shown that there is indeed a great depth to the world that far transcends our personal experience, but that we can come to some knowledge of it if we will but direct our minds to the problem.
Someone once said that if you live in a Frank Lloyd Wright house, you have to live inside Frank Lloyd Wright’s head: the sense of Wright’s thinking was so completely expressed in his architecture that his designs had a de facto normative result of forcing occupants of Wright buildings to live as Wright intended them to live, which was the way that Wright thought people ought to live. This is not necessarily the way that most people want to live.
Just so, the users of computers are forced to live inside the heads of software engineers at Microsoft, Apple, and the increasing number of mobile platforms (which I do not use and of which I am therefore ignorant). This is not a pleasant experience.
Last Saturday night my computer simply failed to start. I punched the button like I always do, and it wouldn’t start up. It had given no indication of failure, but fail it did. So the next day I went and bought a new computer, because I have to have a computer to do my business. I did not at all want to buy a new computer, and most of all I did not want to have to endure using Windows 8, but it was Hobson’s choice: the choice between what is offered and nothing.
So, how horrible is Windows 8? Well, it may not be as bad as Mac OS in terms of its grating juvenilism (something I previously discussed in Infantilism in the Information Age), but it is pretty much as bad as I expected — or worse. People used to talk about the “craplets” that appeared on the screens of their newly purchased computers. The Windows 8 OS is craplets on steroids: hundreds of “apps” that are supposed to be there for my convenience are foisted upon me. Already I have spent literally hours throwing away the garbage that I don’t want and don’t need that is cluttering this computer and slowing down its operation. This is incredibly irritating. I feel that Bill Gates (though he is no longer CEO of Microsoft) personally owes me several days of my life that have been lost struggling against this inane operating system.
Yes, against. Among the greatest hindrances to productivity today are the trendy ideas of software engineers that they push on an unsuspecting public. One often has the feeling that one is working against one’s computer, trying to get it to do the simple and obvious thing one wants it to do, and trying to avoid having it sell you something you don’t want. If network television in the past was irritating for its constant punctuation of its programming with paid advertising, the computer age in media is that much worse, as you cannot accomplish the most basic functions in your business day without being assaulted by something that is pushed at you to “help” you. Thanks. I don’t need that kind of help.
The perfect operating system would be silent and invisible, operating completely anonymously in the background, making it possible for you to do your work with the least hindrance. In other words, a perfect operating system should be like the perfect squire to a knight, as described by Jaime Lannister in Game of Thrones, when he is praising the young man who squired for him, and whom he then kills moments later:
"You knew when you were needed and when to go away. It’s a rare talent. Most of my squires, they mean well, but young men with big jobs, they tend to overdo them." (Season 2, episode 7, "A Man Without Honor")
The problem with operating systems is that the companies that design, build, market, and distribute them are never content to be merely an operating system. They want to be recognized, they want to be a brand, they want to be the focus of your attention, the belle of the ball, your universal go-to reference, or, worst of all, your best friend.
So I continue to struggle with my new computer. It tries relentlessly to force me to use my computer in the way approved by Microsoft as the “correct” way that computers today are “supposed” to be used, and I push back just as relentlessly, trying to use the computer the way I want to use it. I’m not defeated yet. Unbowed, I battle on.
End of rant.
Kenneth Clark, in his Civilisation: A Personal View, said that most great movements of thought last about twenty years before they enter into terminal decline. This seems to be the case with the technological singularity.
The abstract for Vernor Vinge’s original 1993 paper on the technological singularity, “The Coming Technological Singularity,” reads as follows.
"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."
Now, in 2014, a little more than twenty years later, futurists are starting to distance themselves from the technological singularity (notwithstanding the fact that the academic publisher Springer just came out with the massive anthology Singularity Hypotheses: A Scientific and Philosophical Assessment). I saw this last week when I attended the 2014 IBHA “big history” conference, and we are seeing more articles like this: The “Singularity” is cancelled. For now.
Whether or not the technological singularity remains a useful concept at this point I will not consider, but I will ask, even if the concept remains valid and useful, what will it matter to humanity if the technological singularity comes about?
If “the human era will be ended” means that humanity will be extinct, then it’s all over for humanity except for the memory preserved of us by the machines; however, if “the human era” simply means “the era during which humanity dominated Earth,” then there remains a place in history for a marginalized humanity. What might the life of marginalized humanity be like?
Even if, in the fullness of time (a fullness brought about by the technological singularity), human being comes to seem outmoded and hopelessly limited in comparison to what is possible on an absolute scale, there will still be a small number who will identify a certain intrinsic value in the human condition, and who will therefore continue to embody the human condition as we know it today, even if “better” embodiments — enhanced embodiments — are available, and there is no stigma attached to enhancement.
Under conditions of greatly increased technology that would make human being and the human condition seem outmoded — a condition that would be facilitated by machine superintelligence — the technology would be readily available that would make the classic expansionist vision of a human future in space possible. Since the human condition as it is (and not what it might be in a post-singularity era), i.e., that human condition in which some few would find intrinsic value, involves a desire for exploration, adventure, and danger, there would be those among the enthusiasts of original human nature who would want to enact this expansionist vision.
In other words, for post-singularity humanity, in a scenario that does not involve human extinction, there is no intrinsic reason to suppose that “traditional” forms of futurism could not (or would not) be realized, and the means would be readily available to do so.
I have approached this idea earlier from another angle. Some time ago in my post The Fractal Structure of Exponential Growth I wrote:
"…something like the technological singularity could occur, but it would just as rapidly disappear from our view. We would go on devoting an hour to a leisurely lunch, even while at far higher magnifications of time further revolutions of exponential growth were going on unseen by us."
The machines may leave us in the dust and go on to other things, but in doing so they may leave us with the wherewithal to accomplish some of our most optimistic visions of the future.