Tuesday, December 8, 2009

Darwin 101 for politicians

No other idea in human history has had a more profound impact on modern society than evolutionary theory, independently conceived by Charles Darwin and Alfred Russel Wallace 150 years ago. In neat, concise, transparent terms it explains the multiplicity of lifeforms that have populated Earth over billions of years, the branching of species from other species and, ultimately, the emergence of human beings. Its explanatory power is immense. Evolution occurs because of natural selection. The ever-changing environment of our planet creates conditions in which certain randomly appearing traits favor particular individuals of a species over others, who then go on to procreate more often. By procreating they pass those favorable traits to their offspring, and this, over time, changes the whole profile of the species. A new species is thus born. From humble bacteria and viruses to sophisticated, intelligent, car-driving mammals, there has not been a single case in biology that has countered evolutionary theory. On the contrary, evolution has been confirmed again and again, not only by fossil records and discoveries in geology and the past climate of Earth but, more importantly perhaps, by probing into the nexus of life, the cell itself. The discovery of DNA as the principal carrier of genetic information across generations has confirmed Darwin and Wallace beyond any doubt. We, and the apes, and the fish and the trees and every living thing, are indeed the products of natural selection, the descendants of a common unicellular great-great-grandfather that appeared on Earth some four billion years ago.
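The mechanism just described - random variation, differential reproduction, inheritance - is simple enough to simulate. Below is a minimal, purely illustrative sketch in Python; every parameter is an arbitrary assumption, not biological data. Individuals are strings of binary "traits", fitter individuals procreate more often, and the population's average fitness rises over the generations.

```python
import random

random.seed(42)

GENOME_LEN = 20      # number of binary "traits" per individual (arbitrary)
POP_SIZE = 100       # population size (arbitrary)
GENERATIONS = 50
MUTATION_RATE = 0.01 # chance a trait flips at birth (arbitrary)

def fitness(genome):
    # A trait valued 1 is "favorable" in the current environment.
    return sum(genome)

def reproduce(parent):
    # Offspring inherit the parent's traits, with occasional random mutation.
    return [(1 - g) if random.random() < MUTATION_RATE else g
            for g in parent]

# A founding population with random traits.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

initial_mean = sum(map(fitness, population)) / POP_SIZE

for _ in range(GENERATIONS):
    # Fitter individuals procreate more often: parents are sampled
    # with probability proportional to their fitness.
    weights = [fitness(ind) for ind in population]
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    population = [reproduce(p) for p in parents]

final_mean = sum(map(fitness, population)) / POP_SIZE
print(initial_mean, final_mean)  # mean fitness rises over the generations
```

Nothing here "aims" at a goal; selection acting on random variation is enough to shift the whole profile of the population.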

It is therefore not at all surprising that such a powerful idea spilled over quickly from the curious observation of amphibious lizards and garrulous birds in the Galapagos Islands and the Amazon forest, and entered the controversial realm of human society. Francis Galton, Darwin’s cousin, was the first to coin the term “eugenics” and to reinterpret evolutionary theory as political philosophy. Faced with the dilemma that altruism led to medicine being supplied to the “poor”, the “feeble-minded” and the “sick”, and therefore eased the propagation of “inferiors”, he suggested that human society should develop towards a world of “superiors” by selectively breeding the best with the best. Being a liberal, Galton proposed that active measures should seek out those among the “poor” with talent.

His proposition inspired, several decades later, the instigation of the welfare state in the UK. In the US, a virtually apartheid state at the beginning of the 20th century, eugenics was taken more literally and several States introduced forced sterilization of mentally-ill patients and others who were deemed “inferior”. Interestingly, Karl Marx was very excited about evolutionary theory. He sent Darwin a copy of his book “Das Kapital”, and wrote to his friends how evolution and class struggle seemed to go hand in hand happily. As species evolved through strife towards perfection, so would human society, ultimately arriving at a classless utopia through the uprising of the oppressed workers. And yet later on, in the Soviet Union, Darwinism fell quickly out of favor. Orthodox communists of the Stalinist era supported instead the acquisition of new genetic characteristics during one’s lifetime, which could then be passed on to one’s offspring. This idea is called “Lamarckism” (after the French naturalist Jean-Baptiste Lamarck) and it made more sense to communist visionaries who dreamt of shaping man as the ultimate altruist. Trofim Lysenko, a notorious charlatan agronomist, denounced Darwin with the full support of the Communist Party and applied the flawed Lamarckian theory in Soviet agricultural projects, with disastrous effects.

Evolutionary theory trumps all other explanations of life on Earth. Nevertheless, it is one thing to scientifically and unequivocally explain why bats have wings, or why we have five fingers on each hand instead of six, and a totally different thing to explain the variation in intelligence, or prowess, or beauty. Implying a genetic cause for such inequalities - as evolutionary theory surely does - smacks of biological determinism: i.e. that you are who you are because you were born like this, and there is nothing you can do about it. This goes against the liberal ideology of the freedom to choose and the right to prosper, as well as the socialist ideal of equality and justice. Thus, Social Darwinism, the idea that competition drives evolution in human societies, ultimately collided with almost every color in the political spectrum of the 20th century, except perhaps fascism and Nazism, which saw in the “survival of the fittest” the foundation of their totalitarian and racist ideologies. Eugenics, originally inspired by a liberal thinker in Victorian England, was corrupted and ultimately led to the crematoria of Auschwitz. That tragic event alone was enough to tarnish Social Darwinism with a terrible reputation which persists to our day.

Enter sociobiology, which many critics consider Social Darwinism in disguise. Originally conceived to explain complex behavior in the animal world, sociobiology is routinely applied to interpret just about everything, from the dot-com and real estate bubbles, to why rich men attract women more and why wealth is relative and never absolute. A new breed of economists, inspired by evolutionary thinking, takes issue with Adam Smith’s original assumption of rational players in the economy. These “behavioral economists” explain the stock markets in terms of instincts genetically inherited from our hominid ancestors who roamed the ancestral savannas of Africa. Greed, the factor that drives markets up, is seen as a battle for status amongst players who want to increase their wealth and therefore their chances of mating with a preferred member of the opposite sex. Fear, the factor that swings markets down, is seen as the result of not wanting to lose high status, which in human societies is determined, and defined, by money.

Notable sociobiologists such as E.O. Wilson and Richard Dawkins contend that sociobiology is science, not ideology. That, unlike Social Darwinism, it does not say what ought to be done but why something happens. Yes, but surely if one has an explanation for social behavior then one ought to be compelled to take action according to that scientific knowledge. If Darwinism explains, for example, murder as a way for hierarchically-low (i.e. poorer or destitute) young males to assert their dominance, then punitive measures should be restructured accordingly. Moreover, critics of sociobiology, such as the late evolutionary biologist Stephen Jay Gould, argue that traits such as gender or identity are social constructs and have nothing to do with inherited genes. It is nurture, not nature, they say, that carries the day, and therefore a socialist utopia of socio-economic equality is ultimately feasible.

The debate of nature versus nurture is bound to define political thinking in the current century too. For, if evolution provides us with a scientific insight into human nature, we must rid ourselves of ideology, whether liberal or socialist, and embrace a neutral, scientific perspective on society. The only caveat, alas so common in science, is to distinguish between cause and effect. Are we humans what we are because of our genes? Or are we what we think ourselves to be?

Tuesday, December 1, 2009

Literary Constructivism

Immanuel Kant held that there is a world of “things in themselves” but that, owing to its radical independence from human thought, it is a world we can know nothing about; thus stating the most famous version of constructivism. Thomas Kuhn took the point further by stating that the world described by science is a world partly constituted by cognition, which changes as science changes.
Constructivism as an idea becomes very obscure when one tries to determine the connection between “things in themselves”, e.g. the “reality” of, say, electrons, and the “scientific concept” of an electron. My take on the problem is that science is a human endeavor and therefore inherently and implicitly bound by our brain’s capabilities; however, we do not “imagine” the world. The world exists, it is just not catalogued.
Things become a lot less murky when we shift from science to literature. Here, the author does not pretend to understand “things in themselves”. The literary agenda differs from the scientific one because it is assumed to be human-made. Books and book-worlds are of the imagination and for the imagination. Whenever I read about a flower I know that it is not the flower “out there”, in the “real” garden, but it is an imaginary flower, a rose inside the imagination of the writer given to me to sample and behold in my own imagination. Thus, free of scientific intentions and false pretensions about realities, literature is constructivist by its own nature. The writer constructs her own world. There are no electrons in that world, only thoughts, of electrons, sometimes, perhaps – but words nevertheless.
But what is the real difference between a literary and a scientific narrative? There is much difference, one would argue. The Big Bang could be described in words, and therefore constitute a narrative in a literary sense; however, that narrative is underpinned by observation, experimentation and - most importantly perhaps - mathematization. Yes, math seems to make the big difference. The mathematical correspondence between scientific theories and the ticking of natural events is the definitive factor that appears to differentiate a scientific narrative from a literary one.
But does it, really? Can’t one “translate” maths into narratives too? I would suggest that it is possible. Even the most obscure of mathematical entities can be described in words, and indeed it must be described in words if it is to be understood. If one draws a parallel between math and music, then yes, music is beyond words, but so is everything else among the Kantian “things in themselves”, and only when music is descriptively articulated (as an “experience”) does it become part of the communal pool of ideas. Therefore, one arrives at a scientific narrative that is multi-layered and has many subplots, but starts to look more and more like a literary narrative. Could we then take the point one step further and suggest that, as literary criticism is the evolutionary force of literature, guiding it towards new directions, scientific peer review does similar wonders in determining directions of scientific research, shifting paradigms, and revolutionizing theories (i.e. narratives)?

The Idea Delusion

Recently, a postgraduate student asked me, as part of his thesis (http://vasilis-thesis.blogspot.com/), to comment on the following claims by Keynes and Galbraith.
Keynes claimed that: "The ideas of economists and political philosophers (etc), both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else.” While Galbraith argued that: "Ideas would be powerful only in a static world because they are inherently conservative."
The student’s question was: These statements were made almost 45 or more years ago. In your opinion, which of the aforementioned statements seems valid in the modern world?
My answer was: “I would definitely side with Galbraith. Of course ideas appear to be powerful and they seem to exercise enormous influence on the way we perceive and interpret the phenomena of the world. I believe that Pol Pot is a case in point, as are the hapless millions who were killed because of the political ideas he carried inside his head. But were those millions killed because of political ideology, or because this ideology "happened" to take root in Pol Pot's head, who "happened", through an amazing series of coincidences, to live long enough and become powerful enough to enact those ideas against the poor people of Cambodia? Alas, our world is not shaped by decisions but by happenstance, by the dynamic confluence of sometimes interconnected and sometimes disparate factors that lead one way or the other, by virtue of systemic forces beyond our control - or comprehension. Only with hindsight do economists and political philosophers apply ideas to the facts and develop their "theories". The feeling of "power" ascribed to their ideas is therefore an illusion, and it has been so not only today but always and forever.”

I would like to expand somewhat on my answer. Firstly, let me point out that I am referring to economic and political philosophy, restricting therefore my treatment of the term “Idea” - at least for now - to these fields.

I believe the ultimate acid test that exposes the delusion I am claiming is very simple. Economic and political theorists develop ideas which explain (or try to explain) the past. Even those theorists who claim that they speak about “today” or the “present” are in fact speaking only about the past; the recent past, but the past nevertheless. The “present” is physically and intellectually unattainable, as any Zen story will easily convince you. All economic and political theory, every “big idea” – Marxism, anarchism, liberalism, whatever - fails in various degrees when it comes to predicting the future. Why is this so? Why can’t anyone tell us how the world or the economy will be in five or ten years’ time or, in fact, tomorrow morning?

Two reasons: Firstly, economic and political theories do not have – not yet anyway – a solid scientific base in the natural sciences. Secondly, economic and political theorists do not really believe that they need such a base in order to develop and debate their ideas. They inherently accept that the economy and society are governed by non-natural forces and laws and that they are “human-made” – whatever that may mean. Therefore, they strongly contend that their theories and ideas are, or potentially could be, the driving forces of economic and social phenomena. That, for example, “believing” in market economics, “accepting” the theory and developing instruments that support it (e.g. the IMF, the World Bank, etc.) will surely transform the world into a free market economy. That invading Iraq and installing a western-type democracy will transform Iraq into a western-type democracy. These notions are delusions, of course. It should be very obvious to any rationally thinking person that human beings and societies are not controlled, “closed” environments of deterministic interactions. We are a social species of animals partaking in the evolution of the cosmos whether we want it, think it, or not.
I suggest that the roots of this “Idea Delusion” are Platonic and rest with Plato’s Republic. It is, however, about time that such a delusion was revised and that economists and sociologists began to approach the workings of society the way natural philosophers and natural scientists do. We need a new paradigm of social and economic theory, based on biology and systems theory, with predictive powers. Ideologies belong to the past.

Simulation and non-local realism

Realism is the viewpoint according to which an external reality exists independent of observation. According to Bell’s theorem, any theory based on the joint assumption of realism and locality (meaning that local events cannot be affected by actions in space-like separated regions – something Einstein would not swallow) clashes with many quantum predictions. In such cases, “spooky action at a distance” is necessarily assumed in order to explain phenomena such as quantum entanglement. This is called non-local realism. A recent paper by Simon Gröblacher et al. (An experimental test of non-local realism, Nature, Vol. 446, 19th April 2007, pp. 871-5) showed that giving up the concept of locality is not sufficient to be consistent with quantum experiments, unless certain intuitive features of realism are also abandoned. Let us see how these results may correspond to assumptions made in our simulation-based New Narrative.
I would like to take an example from cosmology, indeed the very simulation of the standard cosmological model. The logic of the simulation goes like this. Assumption 1: The universe maps onto an external reality which, somehow (i.e. via known, or unknown-as-yet, natural laws), also maps onto our coupled media of detection instruments plus consciousness. This may be called “the perceived universe”. It usually depicts an image of the cosmos, galaxies and gas clusters spreading in all directions and in all magnificence. A mathematical model is then developed, based on the prevailing cosmological theory, that aims to explain the “perceived universe”. This model runs on a computer, which is the external-reality substrate of the simulation. In other words, there is, or so we assume, a “reality” of hardware that runs our simulation (assumption 2). The result of the simulation is also an image of the cosmos. Comparing the two images, we refine the model further until they appear identical. When we achieve an identical pair of images, we conclude that our mathematical model has been a successful one, i.e. a valid description of external reality.
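The simulate-compare-refine loop just described can be sketched in a few lines of code. The toy example below is purely illustrative: the "model", its single parameter and the "observations" are all invented for the sketch, standing in for a real cosmological model and real instrument data.

```python
# Toy version of the simulate-compare-refine loop. All names are
# hypothetical stand-ins, not a real cosmological pipeline.

def observe_universe():
    # Stand-in for the "perceived universe": the data our
    # instruments record. The parameter behind it is unknown
    # to the modeller.
    TRUE_PARAMETER = 0.7
    return [TRUE_PARAMETER * x for x in range(10)]

def run_simulation(parameter):
    # Assumption 2: a hardware substrate runs the model and
    # produces its own "image" of the cosmos.
    return [parameter * x for x in range(10)]

def mismatch(image_a, image_b):
    # How different the two images are (sum of squared errors).
    return sum((a - b) ** 2 for a, b in zip(image_a, image_b))

perceived = observe_universe()
parameter = 0.0   # initial guess for the model parameter
step = 0.1

# Refine the model until the simulated image matches the perceived one.
while mismatch(run_simulation(parameter), perceived) > 1e-9:
    if (mismatch(run_simulation(parameter + step), perceived)
            < mismatch(run_simulation(parameter), perceived)):
        parameter += step   # the step improved the match: keep it
    else:
        step /= 2           # overshoot: search more finely

print(parameter)  # converges toward the parameter behind the observations
```

The loop "succeeds", yet everything it validates rests on the two assumptions above: that the observations map onto an external reality, and that a hardware "reality" runs the simulation at all.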
It is obvious that our conclusion may be potentially flawed, on the basis of our two main assumptions. Furthermore, our assumptions call upon the quantum nature of the cosmos which, as the aforementioned paper has demonstrated, seems to reject non-local realism. Thus, we are left with a revision of assumptions about realism.
These revisions happen to be inherent in the New Narrative, the most poignant revision of reality being the decoherence of Selfhood.
By deconstructing the Self, by rejecting the narcissism of psychoanalysis, by re-introducing a mystical layer of dualism into the nature of consciousness, we arrive at a world without counterfactual definiteness, a world not completely deterministic. The result may be a chorus of out-of-tune artwork, but it is also a result closer to what our best validated experiments show. If “external reality” is a wonderland of curious objects and events, then our revised “inner reality” of the New Narrative is an equally exotic place. But should we take such a phantasmagoric correspondence as a sign of progress? Should we convince ourselves that we have, at last, by entering our self-simulated world, arrived serendipitously at the Holy Grail of “reality”? Ironically perhaps, the very dynamics of decoherence prevent us from answering such questions. When determinism is thrown out of the window, all one can do is lean on the ledge and peer outside, in wonder and astonishment, at the changing view of a perplexing world beyond our wildest imagination. And that is exactly what the New Narrative tries, and forever fails, to describe.

Downloading Consciousness

Frank Tipler, in his new book “The Physics of Christianity”, makes a number of interesting – some would even say amusing – claims with regard to the “end of days”, as he sees it. I would like to focus on two of those claims, predictions in fact, which Tipler estimates will occur by the year 2050. The claims are:

1. Intelligent machines [will exist] more intelligent than humans
2. Human downloads [will exist], effectively invulnerable, and far more capable than human beings

I shall go very quickly through the first claim, suggesting that – depending on how one measures “intelligence” – computers far outsmart humans even today. The few remaining computational problems, which deal mostly with handling uncertainty and comprehending speech, I expect to be effectively sorted out sooner than the date Tipler suggests. I see no trouble with that. Artificial Intelligence (AI) is an engineering discipline solving an engineering problem, i.e. how to furnish machines with adequate intelligence to perform executive tasks in situations and/or environments where humans had better not.
The second claim, however, is truly fascinating. To suggest a human download is equivalent to suggesting the codification of a person’s self into a digital (or other, but digital should suffice) format. Once a “digital file” of a person exists, then downloading, copying, and transmitting the file are trivial problems. But should we really expect to download human consciousness by the year 2050 – or ever? There are four possible answers to this question: Yes (Tipler, or the techno-optimist), No (the absolute negativist), Don’t know (the agnostic crypto-techno-optimist), and Cannot know even if it happened (the platonic negativist).
Let us now take these four responses in turn, in the context of the loosely defined term “consciousness” as the sum total of those facets that, acting together, produce the feeling of being “somebody” in the I of each and every one of us.

1. The techno-optimist (Yes). This view treats life as an engineering problem and thus falls back on the AI premise of solving it. The big trouble with this view is that if consciousness is indeed an engineering problem (i.e. a tractable, solvable problem), then it is also very likely a hard problem indeed. “Hard” in engineering can be defined in relation to the resources one needs in order to solve a problem. Say, for example, that I would like to build a bridge from here to the moon. Perhaps I could design such a bridge, but when I get down to developing the implementation plan I will probably find out that the resources I need are simply not available. Similarly with consciousness: one may discover that in order to codify consciousness in any meaningful way one might need computing resources unavailable in this universe. This may not be as far-fetched as it sounds. For example, if we discover that the brain evokes the feeling of I by means of complex interactions between individual neurons and groups of neurons (which seems a reasonable scenario to expect), then the computational problem becomes exponentially explosive with each interaction. To dynamically “store” such a web of interactions one would need a storage capacity far exceeding the available matter in our universe. But let us not reject the techno-optimist simply on these grounds. What we know today may be overturned tomorrow. So let us for the time being keep our options open and say that the “Yes” party appears to have a chance of being proven right.
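A back-of-the-envelope calculation illustrates how quickly such a web of interactions explodes. The figures below are rough, commonly cited orders of magnitude (about 86 billion neurons in a human brain, about 10^80 atoms in the observable universe), used here only for illustration.

```python
NEURONS = 86_000_000_000          # rough estimate for a human brain
ATOMS_IN_UNIVERSE = 10 ** 80      # common order-of-magnitude figure

# Pairwise interactions alone grow quadratically with the number
# of neurons: n * (n - 1) / 2 possible pairs.
pairwise = NEURONS * (NEURONS - 1) // 2

# Interactions among arbitrary *groups* of neurons grow like 2^n
# (every subset is a potential group). How many neurons suffice for
# the number of possible groups to exceed the number of atoms in
# the observable universe?
n = 1
while 2 ** n <= ATOMS_IN_UNIVERSE:
    n += 1

print(f"{pairwise:.3e} pairwise interactions")
print(f"groups drawn from just {n} neurons outnumber the atoms in the universe")
```

On these (admittedly crude) numbers, subsets of a few hundred neurons already outnumber the atoms available to store them, which is the sense in which the techno-optimist's problem may be "hard".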

2. The absolute negativist (No). Negativists tend to see the glass half-empty. In the case of Tipler’s claim, the “No” party would suggest that the engineering problem is insurmountable. Further, they would probably take issue with the definition of consciousness, claiming that you cannot even start solving a problem which you cannot clearly define. I would say that both these arguments fall short. Engineering problems are very often ill-defined and yet solutions are found. And whether finding adequate memory to code someone’s mind is truly “impossible” is something we will have to wait and see. The negativists, in this case, may also include die-hard dualists.

3. The agnostic (crypto-techno-optimist) responds “skeptically” and is a subdivision of the techno-optimist. She is a materialist at heart but not so gung-ho as the true variety of the techno-optimist.

4. The platonic negativist (Cannot know even if it happened). Now here is a very interesting position, the true opposite of the techno-optimist. The platonic negativist refuses to buy Tipler’s claim on fundamental grounds. She claims that it is not possible to tell whether such a thing has occurred. In other words, the engineer of 2050 may be able to demonstrate the downloading of someone’s consciousness, but she, the platonic negativist, will stand up and question the truth of the demonstration. How will she do such a thing? I will have to expand on this premise – which is, in fact, the neo-dualist attack on scientific positivism. Suffice it to say that she will base her antithesis on the following: any test to confirm that someone’s consciousness has been downloaded will always be inadequate, based on Gödel’s theorem.

Of course, the very essence of the aforementioned debate, i.e. whether or not consciousness can be downloadable, lies at the core of the New Narrative with respect to the revisionist definition of humaneness. But this is a matter that needs to be further discussed.

Utopias and Dystopias

Thomas More published the book Utopia in 1516, based on the Republic of Plato. The Greek origins of the term, as well as the inspiration, are rather telling. The word means the place that does not exist. Plato, in his original work, which aims to provide a context for what he defines as “virtue” (αρετή), positions his thesis on the unattainable. True virtue can only be realized in the world of ideas “out there”, not in our coarse world of shadows. The Republic is by definition unattainable. For the minds of Plato’s contemporaries, for his listeners if you will, “utopianism” did not hold the same meaning as it did in the 16th century, or indeed in our present time. In other words, the unattainable did not imply the will to attain; the interpretation must have been more literal, the way mathematics is, or perhaps even better, geometry. There is no such thing as a perfect sphere. A perfect (platonic) sphere is unattainable, but that does not mean that having less than perfect spheres in the real world is a letdown. The Republic is a “perfect” society (at least according to its writer) in the same sense. That is exactly the difference in the narrative that I am trying to underline: the utopian of ancient times was a different animal from the utopian of modern times. The modern version is someone who has not given up on the idea of attainment. In fact, rather the contrary: the attainment of a utopia is a goal in itself. This is profoundly evident in the totalitarian dreams and nightmares of fascism, Nazism, and Marxism: the Perfect society and the Perfect man (and woman). This is a major shift in the understanding of utopias between the ancient and the modern, and I cannot stress enough the importance and repercussions of this shift. When Aldous Huxley published “Brave New World” in 1932, the foundations of a utopian/dystopian narrative had already been laid. Indeed they had replaced liberal realism – if ever such a thing existed.
Utopias and dystopias are attainable, either by action or inaction. Climate change is a case in point, as is the idealist-utopian perception of a westernized Middle East that led President Bush to invade Iraq. The New Narrative at work is aiming to fulfil its unattainable prophecies.

Post-modernist narratives as simulacra

The end of WWII may be regarded as a watershed in the political and cultural history of the West. The devastated European hinterland, the hecatombs of the Holocaust, and the radioactive emptiness of Hiroshima and Nagasaki demonstrated adequately enough that there were no limits to what human beings can do when it comes to war and systematic killing. History, the narrative of human affairs marked by recurrent outbursts of butchery, came to a sudden halt. What caused the halt were the horrors of WWII.
Since then war, and therefore history (the description of wars), has become sublimated, in a pseudo-Nietzschean sense. This pseudo-sublimation is in fact a simulation. Thus the US and the Soviet Union, instead of continuing after WWII with a nuclear war, went into “Cold War”, i.e. a non-war, a sublimation of war, a simulated war, a televised war (in Korea, Vietnam, etc.). History became a simulation of history.
The biggest consequence of this pseudo-sublimation was a collective rejection of reality, evident in culture and politics. Culture became a simulation of culture and politics a simulation of politics.
By the word simulation I mean the representation of reality in an iconic form, in order to control it. Examples are the effigies of gods or Orthodox icons: with time the object becomes an object of worship, it “attains” a holiness of its own, it becomes a “superobject”.
Naturally, as the simulation becomes better and better it begins to control its creators. The one who worships the icon gets furious at someone who does not – in this case the iconoclast receives the wrath of the icon-worshiper. Another example is simulated war games. The “adversaries” act as if they are truly fighting inside a computer-simulated environment of a war theatre. Very soon, however, the war becomes “real” in the sense that the adversaries “forget” that this is just a simulation and become emotionally involved in the process. They actually “feel” pain. Of course, they do not die when shot at, but they “die” in every other sense; and in a simulation this is just as good as dying for real.
History is not simulated through some supercomputer, of course. We are not plugged into some kind of “Matrix”. I will argue, on the definition of the New Narrative that I have already given, that this simulation is the New Narrative. What do I mean by that? The New Narrative began before WWII, but it became the dominant expression of culture soon after the fall of the Berlin Wall in November 1989. Derrida declared “Il n’y a pas de hors-texte” (“there is nothing outside the text”). I understand the sense of the “texte” as the New Narrative, the main characteristics of which are the ubiquity of television and the impact of advertising on the collective consciousness. Of course, literature, the visual arts and cinema are also part of the New Narrative. But I believe that it is television and advertising that support the pseudo-sublimation of the animal instincts in us that would in any other case have used nuclear weapons without a second thought. Of course, in order for the simulation to work violence must be there too, which explains why there is so much violence on TV, in video games, etc.
It is very interesting here to draw a parallel with the Edo period in feudal Japan, which followed centuries of bloody internal strife. During the Edo period, war was all but outlawed. The shogun kept the heads of the lord families in Edo, thereby securing their obedience. The samurai, the warrior class, suddenly out of business (or out of a way of life), developed the martial arts as a “way of the mind”, incorporating elements of Zen. At the same time the arts flourished, but as a detached approach to life. Zen is the ultimate simulation. It is nothingness. At the beginning of the 21st century western culture (and with it the rest of the world too) adheres daily to Zen-fascist slogans of the type “Just do it!” (of Nike shoes). It lives for the moment. The future is constructed through computer models that simulate climate change, orbits of threatening meteorites, the spread of epidemics – in other words, threats to survival that do not exist in the NOW. Why is that? Why is the world so afraid, when the world environment has never been more secure? Again, the reason is that history is simulated so that we retain the existence of fear, so important in order to feel anything at all, whilst at the same time we inherently trust the system to secure us from “real” mutual obliteration.
I will explore further the results of this phenomenon. Suffice it to say, for the time being, that post-humanism is the logical consequence of simulated history. Neuro-prostheses, moral relativism and the refusal of external realities (i.e. the “sinking” of minds into minds) constitute the new social reality, ever more distanced from the real. The problem, of course, is that the simulation has created such an environment that it is virtually impossible to distinguish what is real and what is not. The image has fused with the object.

Science as Religion

Science and Religion can easily be compared in the context of narrative. Both are narratives (see also “A Book for the Universe”). Religious narratives always interrelate an explanation of the world with a set of moral instructions. Within the set of moral instructions we must include ritual, which is by definition a moral instruction too. Science, since Descartes, has excluded itself from advising on rituals or good behavior and has concentrated solely on the business of explanation.
Although much doubt about the explanatory power of Science still exists amongst many of our fellow human beings, I will argue that any well-meaning person with the capability - and the courage - for rational thinking ought ultimately to accept Science as the best of all possible explanations of the world. One may even argue that one does not need to “believe” in Science in order to accept its validity, in contrast to a belief system such as Religion. I, on the contrary, will argue that acceptance of Scientific Truth and belief in Religious Truth are not as dissimilar as they appear.
Religion assumes a Higher Intelligence in trying to make sense of the cosmos; this intelligence (a “God”) may be self-conscious (“theism”) or not (“deism”); but without It there can be no complete explanation. Science, based on its methodology and the amazing and self-evident observation that the Universe is so finely tuned for life, has arrived at a very puzzling conclusion: that the Universe is either a product of blind chance (“one of many universes”) or that there is a “fifth element” at work which always “obliges” a Universe to arrive at a life-supporting version. This “fifth element” could be an as yet unknown law of nature that shapes a causelessly-created Universe into an anthropic one. The first hypothesis is supported by the “democrats” and the second one by the “aristocrats” (see “Spontaneous Dichotomy in scientific debate”).
Arguably, both scientific hypotheses require a great deal of belief, at least for now, since neither can be proved or disproved. String theory as well as loop quantum gravity are trying to suggest feasible experiments that could determine which of the two hypotheses is true; but so far neither has succeeded in doing so. One hopes that they soon will. But if Science and Religion both share a certain degree of belief, what of the moral instructions? If I am to compare the two then I should take issue with the second axis of a religious narrative too, the one which tells me how - and why - I should behave in a certain way: eat fish and not pork, for example, or face Mecca and bow five times a day, or sacrifice a cock on a full moon, or never tell lies. Lately (see also “The Book of the Universe”) it has been argued by many prominent scientists and philosophers that Science can also do exactly that: teach Humankind a moral code of self-regulation and mutual respect. Science is becoming more and more like Religion.

A story for the Universe

Modern science is a hypertext narrative describing the birth and evolution of the Universe. Its chapters interconnect in multifarious ways with the many branches of scientific enquiry – and this includes the humanities – and many of the chapters are being written even today. Many important details are still missing, but arguably most of the work has already been done. Some, the “Platonists” (see “Spontaneous dichotomy in scientific debate”), would argue that the Scientific Corpus may be totally revised in the future and that indeed we may be very near that tipping point in history. I will argue that this could conceivably happen, but that it would only affect a small part of the Book of the Universe. It will revise the understanding we have of its beginning; it might even revise the understanding we have of the origins of life; but it will not re-write the Book of the Universe. The narrative has been written and delivered; what is left to do is editorial work.
It is not therefore surprising that many contemporary philosophers and scientists, notably Richard Dawkins, Daniel Dennett and others, argue that science has already given the world an excellent explanation of just about everything. They argue, in the name of universal peace and brotherhood, that the Book of the Universe should be taught across the globe, to everyone, to children of all nations, so that scientific understanding replaces religious belief. Their argument, of which I happen to approve, is that religion and irrational belief systems in general are ideas too dangerous to permeate the nations of a technologically advanced, nuclear-armed world like ours. By contrast Science, as narrated in the Book of the Universe, unites in a rational and wondrous way all races of the planet in a common appreciation and respect of nature, so instrumental in establishing some kind of peaceful co-existence and, ultimately, survival.
What interests me about their argument is that I find in it a fine example of the interplay between literature, science and the society of the future. I will return to this point and expand on it further.

The New Narrative

I need to define what I mean by the word “Narrative”. Since the Lascaux cave paintings - or even earlier perhaps - humanity has felt the compelling need to record itself. Human social evolution has thus been intricately related to descriptions of events, personalities and, most importantly, ideas. These descriptions are narratives by definition, i.e. they follow a specific structure which reflects the way human minds understand the world. Although narratives can be both explicit and implicit, their structure is always relational: ideas, events and personalities are always related to one another and to their time. Even when they refer to things past or prophesy things of the future, narratives are always interpreted in the present; this is a very important point, which explains why different eras interpret the same narratives in different ways. Narratives are stored in Libraries. By this term I am generalizing the concept of narrative storage, which takes place in a variety of media. Media are the storehouses of narratives, and every society uses technology to improve on its media. Thus a Library can be made up of a collection of media, such as stone or clay tablets, papyri, scrolls, books, museums, architecture, etc.
Narratives are not just for show. Their role is not decorative. Narratives, once born, define society. They are the steam engine of societal progress, or regress. Because they are the cumulative repositories of ideas, narratives are the drivers of change. Whenever great civilizations fell, it was because they somehow lost their Libraries. I mean this in the literal as well as the metaphorical sense. Forgetfulness is the loss of narrative, and this is true not only of various well-documented amnesias but also of societies and cultures at large. Examples abound throughout history, but I will only mention here the case of the Mayan civilization, which collapsed as soon as the greater part of its narrative was lost.
Having established the definition of Narrative, Library and Media, let us now turn our attention to the definition of the “New Narrative”.
I will claim that the New Narrative was born out of the Internet revolution (complemented by cable TV and satellite communications) in the 1990s, and that it differs from all previous narratives of the past because its Libraries are interconnected. The New Narrative, by virtue of its genesis, created a New Hyper-Library where every other narrative that has survived the test of time is stored somewhere, as a node within a vast network of interconnectedness enabled by contemporary technology. Our society, like the societies of the past, is also defined by its narrative; in our case, our New Narrative of networked media. The most prominent example of how the New Narrative affects societal evolution is advertising. Advertising is a spontaneous synthesis of ideas derived from media archives, the synthesis providing an extension of the New Narrative. A television ad reflects what we believe about ourselves, or what we think that we believe. It compels us to consume because the New Narrative can only survive through the constant maintenance and expansion of its Library; and this can only safely occur in a liberal, free-market world. But I will need to return to the relationship between the interconnectedness of the New Narrative and liberal politics.

Literature, Science and the Society of the Future

Thomas S. Kuhn in his landmark opus “The Structure of Scientific Revolutions” surmises from the study of contemporary historiography that the big revolutions in science have been changes of World View. I will argue in this short introduction that World Views are expressed through Literary Narratives and are defined by them. In so doing, Literary Narratives compose a vision for the Future. This composition is undertaken by society and its exponents, namely intellectuals - in the case of our contemporary society, intellectuals linked through the network of media at large. They, acting as attractors in a large chaotic system, synthesize the aspirations and the fears of the present; interpret the memories of the past – which in themselves are also narratives; and ultimately generate a New Narrative, which is always a blueprint for the Future. This blueprint does not have to be consistent. An intrinsic quality of any narrative is that it may be self-contradictory. It may indeed encompass circular arguments of the Gödelian type without any danger of self-destruction. Narratives hold, like minds do, because they do not have to be logically consistent. By containing the seeds of internal dissent they continuously evolve. One may thus consider society as a work in progress, the work being historiography by hindsight and debate by aspiration. Science, as the main driver of societal evolution since the European Renaissance, is the key arbiter in this narrative process. Technology, the offspring of science, not only empowers societal change but, increasingly since the Industrial Revolution and throughout the 20th century, acts as the principal mirror upon which society reflects during the process of self-evaluation and self-redefinition. The process of the New Narrative is an interactive composition that takes place in the now.
The most explicit case of this process, where narrative connects science, technology and the Vision of the Future, is science fiction literature, or any kind of literature that fuses and integrates scientific ideas – which in our times is virtually all literature. I say “virtually all literature” because I believe that even those who make an effort not to include science and technology in their stories are implicitly partaking in the process of the New Narrative, by assuming a position of self-ascribed “innocence”. You can view them as heroines of the Marquis de Sade, partaking in the orgy by default whilst touting their unassailable virginity. Under “Literature” one should also include - apart from novels, plays and poetry - new forms of literary narrative such as films, video and installations, and, in the wider sense of narrative art, the rest of the fine arts too.

Temnothorax de Condorcet

How the collective intelligence of social animals can provide a new paradigm for ecological decision-making (notes for a lecture at Panteion University)

Marquis de Condorcet (1743-1794) was a pioneer in applying mathematics to the social sciences. His jury theorem states that if each member of a voting group is more likely than not to make a correct decision, the probability that the majority vote of the group is correct increases as the number of members of the group increases. The probability of a correct decision also rises with the amount of information available per juror: the more individual jurors know (or understand), the more likely it is that a correct decision will be reached. Democracies are thus theoretically, or potentially, better than dictatorships because they can solve problems better by means of collective decisions. However, as a recent article in The Economist correctly pointed out, human societies are prone to groupthink, which neutralizes the benefits of the jury theorem.
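Condorcet's result is easy to check numerically. The following sketch (my own illustration, not from Condorcet's original text) computes the probability that a simple majority of independent jurors, each correct with probability p, reaches the correct decision; odd jury sizes are assumed to avoid ties:

```python
from math import comb

def majority_correct(n, p):
    """Probability that a simple majority of n independent jurors,
    each correct with probability p, reaches the correct decision.
    (Use odd n to avoid ties.)"""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n // 2) + 1, n + 1))

# With p > 0.5, the majority grows more reliable as the jury grows:
for n in (1, 11, 101):
    print(n, round(majority_correct(n, 0.6), 3))
```

Note the flip side, which bears on the groupthink worry: if each juror is more likely wrong than right (p below 0.5), enlarging the group makes the majority verdict worse, not better.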

Groupthink is typically found in modern parliaments, where members vote along party lines regardless of the information available. In societies at large, groupthink often manifests as a result of brainwashing by the media. For example, once the financial crisis became headline news, terrorists “disappeared”. Terrorists and terrorism had been the mainstay of media output since 9/11, as if the world were on the brink of being blown up by a mad suicide atom-bomber. Groupthink in the western world meant that citizens conditioned their political thinking under the spectre of terrorist threat, mainly coming from so-called Islamo-fascists dreaming of resurrecting the caliphate of the Middle Ages. All that has now disappeared, replaced with a new enemy of the people: the amoral bankers. The new fear is losing everything - job, savings, home - in a black financial hole.

Along with the terrorists, global warming has also disappeared from the foreground. Who cares about the melting of Arctic ice when there is a meltdown of financial institutions? And so on.

So the question arises: how can we, the human race, deal with global problems (poverty, ecology, financial institutions, epidemics, etc.) when we are constantly swayed by the forever-trembling cyclopean eye of the media? How can we, the jurors, arrive at the best possible decisions when information is restricted, or conditioned by groupthink?

A possible answer may come from evolutionary biology. Darwinian sociobiologists studying the behaviour of social animals such as bees and ants are beginning to understand the way those creatures choose between various options. The ant species Temnothorax albipennis, studied by Nigel Franks of the University of Bristol, establishes a new nest by sharing information about the best routes amongst scouts. When a suitable place is identified the scouts begin to lead other scouts to the new site. To speed things up, the ants have developed a strategy whereby efficiency is increased by leading scouts back to the nest via the quickest route during the migration phase. Thus, by going to and fro, more scouts become familiar with the route and the speed of migration increases. This type of dynamic – termed “reverse tandem runs” by the researchers – resembles the loading of connections in a network.

Perhaps, then, the ants show us the way too. Human networks may provide the answer to overcoming groupthink and utilizing human collective intelligence and collective decision-making. There are two main characteristics of human networks: firstly, most individuals have few connections (some friends and family members only) and only a few are highly connected (the “connectors”); secondly, networks are clustered, i.e. they tend to build around specific social groups (e.g. people sharing the same profession, or hobby, etc.). If we manage to find strategies in human networks whereby connectors increase the load of connections by means of increased information, then we may be able to overcome the groupthink problem. Connectors have an obvious evolutionary motive to perform this task, namely to safeguard - or increase - their prominence and status in the network. To validate information passed to a network by the connectors, one may use the power of clustering. In clusters, peers are able to perform instant validation of information. This is apparent in readers’ commentary on news websites, as well as in wikis.
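The first characteristic - many weakly connected individuals and a handful of highly connected “connectors” - emerges naturally from a “rich get richer” growth rule. The toy sketch below (my own illustration, not taken from the ant studies) grows a network by preferential attachment, in which each newcomer links to an existing node with probability proportional to that node's degree:

```python
import random

def preferential_attachment(n_nodes, seed=42):
    """Grow a network in which each new node links to one existing
    node chosen with probability proportional to its degree."""
    random.seed(seed)
    # 'ends' holds every edge endpoint, so a uniform choice from it
    # samples existing nodes proportionally to their degree.
    ends = [0, 1]
    degree = {0: 1, 1: 1}
    for new in range(2, n_nodes):
        target = random.choice(ends)
        degree[new] = 1
        degree[target] += 1
        ends.extend([new, target])
    return degree

deg = preferential_attachment(2000)
print(f"max degree: {max(deg.values())}, "
      f"nodes with a single link: {sum(1 for d in deg.values() if d == 1)}")
```

Running this, most nodes end up with one or two links while a few accumulate dozens - exactly the connector-dominated topology described above.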

Collective decisions are complex, but so are the problems that currently face humanity. Current global institutions, such as the UN, the IMF, or the G7, are built on 20th-century ideas and technologies. We must now develop new technologies and ideas in order to tap into the collective intelligence of human networks. The democracy of the 21st century will have to be less representative and more direct, in a totally new way.

The ignorant miracle-workers

Is science the surest way of arriving at truth? Can we validate its worth beyond anyone’s doubt? Surely, the limits of knowledge were discussed ad nauseam by the ancients. Aristotle did not approve of Platonic metaphysics, but ask any string theorist what she thinks about the laws of nature and she will tell you: maths. Where is maths? Where does it reside, before expressing itself in the motion of bodies or the flow of fluids? Where does music go when the instruments stop their play? Historians of science tell us that once upon a time, in the Middle Ages, science and magic were twin sisters, Siamese twins living side by side, not forcefully separated until the time of Descartes. His definition of res extensa was followed by logical positivism a few centuries later; but no one would have given a toss if it weren’t for the Industrial Revolution. I stand firmly behind this argument: if it weren’t for the engineering miracles produced as a result of scientific discovery, science would have been little more than a pastime for gentlemen and gentle ladies of ample means and time to spare. Everyone had to bow to the miracles of science because the damn thing worked – and it did so better than prayer. Planes fly, after all, not by well-wishing (although I often see many fellow passengers pray during take-off) but by engines roaring and good wing design. But do we really know why they fly? I would argue that we do not, not really. We do have a good set of equations available, and a sound theory of aerodynamics that we teach to college students, but this corpus of descriptions sits uncomfortably on top of a vast, unwavering void of stark ignorance. At the end of the scientists’ day, what remains in the Petri dish, or the computer printout, or the spectrum of a faraway galaxy, are unanswered questions followed by more unanswered questions. Some call this a virtue.
And why not: there is certainly something akin to heroism in a person willing to face mysteries whilst remaining agnostic. Heroics apart, however, the bottom line is that working the miracles of science was, and still remains, the biggest mystery of all. The body of knowledge is riddled with holes, curious singularities upon which our notions precariously stand. I would like to give three ready examples of such “singularities”. First, the Big Bang; and of course all that follows it, which is the whole of physics. Our descriptions of the universe, mathematical as they are, should not be confused with knowledge. Secondly: Life, the origin of. How did it come about? Thirdly: the mind. If you have doubts about those three examples, let me put them another way. The litmus test of true knowledge is the power of reproduction. If I know something - truly know it, not suppose it - then I can reproduce it, nominally or otherwise. If we knew, or came to know, the nature of the Universe, of Life and of the Mind, we could easily reproduce all three of them. We do not (not “yet”, some will say, but I dispute that). What we do (re)produce are similes; or simulations. Scientists are ignorant miracle-workers performing in the circus of history while the rest of the world watches in amazement. How much longer will the show last? A good answer would be “until the miracles run out”. And then what? What will follow science? A retro-religious era perhaps?

Our post-scientific era

Counterknowledge, the corpus of pseudofactual narratives that dominates much of today’s discourse, shocks many in the scientific community. I often talk to scientists who cannot comprehend why intelligent people, some with science degrees, are so gullible that they take homeopathic remedies, read their horoscopes and believe that aliens frequently visit our planet aboard UFOs. Richard Dawkins has been prominent in forging a camp of polemical atheists who, presumably fed up with counterknowledge, have raised their intellectual arms against the resurgence of religion. Meanwhile, creationism gains ground in the west and is the dominant belief in the Muslim world, as well as among Muslims living in western countries. I am told that the President of China has been reported claiming that Chinese vessels circumnavigated the world in 1421 and established colonies in South America. Is the world going crazy? It seems to me that the world has entered a post-scientific era. The Enlightenment project, still unfinished, is on the defensive everywhere. A medieval mentality has returned, whereby belief is more important than fact, where connections and patterns between disparate things are put together in order to “prove” the most incredible things. The media, applying the only filter they care for (i.e. ratings), propagate these narratives and thus legitimize them further. The results range from the comical to the tragic. People in South Africa have been dying of AIDS because their ex-President believed that the disease is caused not by a virus but by social conditions. Scientists have a new social responsibility. They cannot hide in their labs, look the other way, delegate the issue to politicians. If they do, soon there may be no labs. If the trends of today are left unchecked, then in the not-so-distant future tax money may be diverted to building astrological observatories, and laws may be enacted that require the ritualistic blessings of "enlightened" beings in order for society to function.
Dawkins has been criticized for causing a “polarization” between science and religion. My opinion is that he has caused nothing of the kind; he has simply shown the rest of us that such a polarization already exists and that we should wake up and do something about it. The future could well be a world in possession of nuclear technology but lacking rational thinking. Imagine the Crusaders attacking medieval Jerusalem with atom bombs and you’ll get the picture. This is the definition of a nightmare.


Augmenting physical ability by making use of techno-prostheses is as instinctive to primates as the sticks that some chimpanzees use to extract termites from their nests. The whole edifice of technological civilization has been exactly that: implementing knowledge collected about natural processes in order to achieve supernatural ends. It should therefore not come as a surprise that the fusion of machine and body has become ever more prominent in the last few years. The difference is one of interface or, to be more succinct, of intimacy. It is one thing to sit in your car and drive at a hundred miles per hour and a completely different thing to be running at a hundred miles per hour using a pair of cyber-legs – or is it? I would argue that although it may “feel” different it is basically one and the same. But I guess the real issue with cyborg technology is not adding a few degrees of extra functionality to our bodies. Simply by wearing a pair of glasses and correcting my shortsightedness I have already done so. The real issue emerges when the human brain is part of the interface, when the intimacy between body and machine reaches the level of our neurons. Deep brain stimulation works miracles with patients suffering from severe symptoms of Parkinson’s disease, and yet the ethical repercussions of this “intrusion” send shivers up the backs of ordinary folk. Is this the dawn of a post-human era? Of creatures half-machine and half-human? Where the “self” is modulated by electrical currents and electrodes implanted in the brain? And what would that mean for Free Will? These are too many questions to ask at once, so let me try to unravel each one in turn, in the light of the New Narrative. A central thesis of the New Narrative is the deconstruction of Self. This is something that began in earnest with the introduction of psychoanalytic theory into mainstream culture. The “discovery of the subconscious” blew the foundations of assumed rationality sky high.
It is perhaps rather amazing that it took a century for economists to factor the human subconscious into their theories – but this is, I believe, a fine example of the permeability of the New Narrative, a subject to which I shall return. To return to my current analysis, the result of deconstructing the Self has been that cyber dreams are interpreted as horrors, in the same manner that a room of magic mirrors modulates our reflection to the extent that it becomes another “us” out there. The rationalists would have no trouble realizing that cyborg technology does not alter a thing. But we are not rationalists, not any more. We are the heroes of a narrative that self-describes our existence using a new code of ethics based on deconstruction. According to this code, we are all post-human, in the sense that our biology has been enhanced by technology - chemically, electrically, mechanically - as members of an interconnected hive called the Web, our “collective consciousness”. The questions we therefore ask are completely out of context. When we ask, for example, where “Free Will” is in the case of electrical brain modulation, we are directing the question to our past, not our present, and certainly not to our future. Thus, the question lingers on unanswered, for it is unanswerable. A better question might have been: can we modulate Free Will in order to achieve a more harmonious society?

Eugenics Reloaded

Eugenics was a liberal vision because, at the time of Sir Francis Galton, it was radical and against the Victorian class system. By going beyond the class structure, eugenics envisioned a future world of enhanced humans irrespective of class background. It was a truly egalitarian vision inspired by Darwinism and aiming for a balance between nature and nurture.
Following the destruction of the European class system after the carnage of WW1, egalitarian ideas were split between the Left focusing more on the “nurture” side of the argument and the Right corrupting the “nature” side and replacing it with “race”. Liberalism – expressed in the few remaining parliamentary democracies - found itself in the uneasy middle, a follower rather than a leader, a defender of its hijacked ideology.
The extreme Left in the Soviet Union and the extreme Right in Nazi Germany were both responsible for genocide; the former in “re-education gulags”, the latter in “concentration camps”. It was thus that eugenics got a bad name, particularly from the Nazi atrocities, which were linked to eugenics during the Nuremberg Trials. The line of defence for the Nazi criminals was that they did little more than what the Americans were doing in their own country by means of forced sterilization programs. The irony is, of course, that the Nazis, while exterminating the Jews, were aiming to destroy not an "inferior" race but an antagonistic one, a people who despite their small number had contributed immensely to European civilisation. Race was a pretext; and this is why a great number of European Christians eagerly joined the Nazis in the slaughter.
Egalitarianism was redefined by the European Left after the war as in direct opposition to eugenics – conveniently forgetting the millions that were dying in Siberia.
But the idea has refused to disappear, because it bonds with the fundamental value system of most human beings, i.e. the enhancement of our abilities. In the 21st century eugenics is no longer used as a term (in order not to elicit negative reactions), but the idea is there, alive and well, manifesting both in technologies that intervene in the genetic make-up of the unborn (“designer babies”) and in technologies that may enhance already-born humans. How many of us would refuse to become cleverer, stronger, healthier, younger and more sexually potent?

The dilemmas of enhancement
There are at least three major moral and political dilemmas that I would like to discuss. The first has to do with the control of eugenics technologies. Should one support the liberal, free-market economic model, where private companies sell the technologies to consumers? Or should one involve the State? And to what degree? The dilemma is obvious. If we follow a free-market approach we may arrive at a new class system, where the ultra-rich will be able to use the expensive technology to enhance themselves and their offspring. We may end up with a superhuman class, the “GenRich” as it is often called. If we make eugenics a state-controlled commodity, then we uneasily reproduce a totalitarian scenario for the future. One must not forget that the Nazi party was a socialist one.
The second dilemma that I would like to discuss has to do with the technologies themselves. Both pre-natal genetic interventionism and post-natal enhancement (genetic or otherwise) have merits that need to be discussed. For example, in the case of post-natal enhancement, how far down the road to becoming cyborgs do we go? Finally, the third issue for discussion would be our motivation for human enhancement. One may argue that this is obvious: self-interest. One wants to be an enhanced person because it improves one's competitiveness in the world. But it is precisely the meaning of competitiveness that needs to be discussed. On a planet heading for an environmental tipping point, competitiveness may not be the correct strategy; collaboration may. Altruism should be enhanced at the expense of selfishness. But, assuming a genetic disposition for those two social traits, how much does it matter which trait we select for? Is human behaviour governed by genetics? Or is it a result of framing the right game, as many game theorists would argue? And if so, what other reasons might we have for human enhancement? Colonizing another planet may be one of them. For example, if humans are ever going to survive on Mars they will have to change genetically; the gravity of the planet is lower and its atmosphere (even after terraforming) thinner. Is Eugenics the correct strategy for space colonization?
Synopsis for a Café Scientifique delivered in Thessaloniki


Complexity theory studies non-linear emergent phenomena whereby networked interactions produce self-organization at ever higher levels. At certain threshold values of network interactivity certain “jumps” occur – called “saltations” – and the system changes behaviour.
Despite the many advocates of complexity theory, the idea faces many obstacles and often fails to inspire those whom it should: people such as evolutionary biologists, neuroscientists, or political scientists. I believe that there are two main reasons for this. The first is cultural. Complexity theory is not being taught, at least not adequately, to young students of biology or political science. Their University departments are populated by professors who made their names and careers by following deterministic paths of thinking. As a systems engineer, I was surprised to discover the level of scepticism that complexity theory faces in scientific circles. The culture of engineering is of course different from the culture of science, which may also explain the second reason for the evident mistrust. Engineering is happy when things work. Science is only happy when there is an explanation of why things work. In this sense, complexity theory appears to be “mysterious”. It lacks a fundamental law. In the eyes of a scientist it may just be an alternative, clever mathematical way of describing something very trivial and adequately understood, for example the motion statistics of gas particles, or macroscopic quantum phenomena such as magnetization.
And yet, a fundamental law may indeed exist behind saltations: a variant of the second law of thermodynamics, yet to be discovered. If this is proven to be true, we may be able to explain, inter alia, evolution. Why did life “jump” from bacteria to unicellular eukaryotes, and then to multicellular organisms? What determined the threshold of biological complexity at which new life forms, ever more complex, could emerge?
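One concrete, well-understood example of such a threshold “jump” - not the conjectured thermodynamic law itself, merely an illustration of the kind of saltation complexity theory studies - is the sudden emergence of a giant connected component in a random network once the average number of links per node crosses 1. A minimal sketch:

```python
import random
from collections import Counter

def giant_component_fraction(n, avg_degree, seed=0):
    """Fraction of nodes in the largest connected component of an
    Erdos-Renyi random graph with n nodes and a given average degree."""
    random.seed(seed)
    p = avg_degree / (n - 1)  # per-pair link probability
    parent = list(range(n))   # union-find forest
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                parent[find(i)] = find(j)  # merge components
    sizes = Counter(find(i) for i in range(n))
    return max(sizes.values()) / n

# Below the threshold the network is fragmented; above it, one
# component suddenly spans most of the nodes.
for d in (0.5, 1.0, 2.0, 3.0):
    print(d, round(giant_component_fraction(400, d), 2))
```

The system's behaviour changes qualitatively at the threshold, not gradually - the same signature the text attributes to saltations in evolving networks.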
The work of microbiologist Carl Woese is of particular importance here. Woese sees bacteria in terms of networked communities rather than individual cells, and interprets their evolutionary history as driven by non-linear self-organization.
A worldview based on complexity opens up an entirely novel interpretation of natural phenomena. By using computer models to simulate phenomena of emergence we may be doing something a lot more: introducing into the cosmos computations that create new levels of complexity, a genesis of numbers that may lead to the re-programming of life.