new utopias for old?

Will technology provide a perfect future for the ascent of man? Or is it wishful thinking by techno-pundits who want to believe human progress is all toward a utopian state of existence?

By James Hayes, Piers Bizony, Chris Edwards


THE HISTORY OF human invention often seems precariously unplanned, yet the questing human mind remains bent on finding meaning in and for our grand schemes, however compelling the evidence for innovation via accident and randomness. Surely, human invention must be heading somewhere – otherwise, what does it all mean?

In striving to devise a meta-significance of the Internet, for example, philosophers and pundits have come up with interpretive theories that can sometimes seem cobbled together from established notions of socio-technological evolution, such as the effects of the telegraph, the car, radio and TV, or other world-changing advances.

But the rapid rate of development of the Internet, and its intrinsically open and unregulated nature, has made it a natural environment for advocates of totally free speech and uncensored expression. No wonder technology historians such as John Markoff have argued for connections between the Internet ‘consciousness’, the personal computer industry, and the societal subcultures of the 1960s.

The tech-literate generations born in that decade and after represented a significant change in the relationship between ‘ordinary people’ and computer technology. It was foreseen that the world would need more technologists to run the computer systems that were gradually automating many workplace tasks, and playing an integral role in commercial management. Universities started to make their systems available to students to learn with, on a time-share basis. As these systems became more connected, users were able to communicate with each other in prototypical virtual communities. By the end of the 1970s the means to conduct these virtual engagements from the privacy of one’s own bedroom were within sight.

Skills learned in extracurricular computer classes, and the culture of self-realisation via ‘virtual’ exploration, meant that in the 1970s and 1980s many young technologists entered the adult workplace knowing more about the technology than the incumbent ‘experts’ – and they were not backward in asserting their knowhow.

This was a major turnaround. The computer establishment had been characterised by big international computer corporations like IBM – whose technical experts wore white lab coats, and whose executives were uniformed in conservative suits and ties – and the technology they owned was viewed as oppressive rather than liberating. “Most of our generation scorned computers as the embodiment of centralised control,” wrote Californian author Stewart Brand, born in 1938. But a contingent of Brand’s generation “embraced computers, and set about transforming them into tools of liberation”.

There’s more irony in the fact that the 1960s, the decade in which IBM established itself as the epitome of clean-cut corporatism, also engendered hippies and other countercultural movements and ‘alternative’ ideologies. California was not the only US state that attracted thousands of ‘New Communalists’ seeking a utopian alternative lifestyle between 1965 and 1972; but it was also, around that time, a state becoming identified with high-tech innovation.

As Professor Fred Turner of Stanford University’s Department of Communication says in ‘From Counterculture to Cyberculture’ (2006), “New Communalists... often embraced the collaborative social practices, the celebration of technology, and the cybernetic rhetoric of mainstream military-industrial-academic research”. He adds: “Analysts of digital utopianism have dated the communitarian rhetoric surrounding the introduction of the Internet to what they have imagined to be a single, authentically revolutionary social movement that was somehow crushed or co-opted by the forces of capitalism”.

Time magazine writer Charles Cooper observed in 2005 that by rights the East Coast should have bested the West Coast in the expansion of the American computer industry: “The East Coast computing axis, which ran from just north of New York City (where IBM housed its headquarters) up to Cambridge and the Massachusetts Institute of Technology, was rich in talent, money and pedigree… But most of the groundbreaking research [in computing] was getting done in California.” Soon even IBM had set up its innovation centre in Silicon Valley.

Turn on, turn off, turn on again

It might have been expected that the commerce-driven Silicon Valley and the subcultures would grate on each other as abrasively as the tectonic plates that occasionally clash deep below the Californian soil. In fact there was a willing cross-fertilisation between the two outlooks. Historians have noted how the ‘democratisation’ of computing, brought first by the personal computer revolution of the 1980s, and then by the connectedness revolution of the 1990s, was influenced by the communal values of self-reliance and open access espoused by counterculture figureheads of the earlier decades.



The computer geeks soon had their own figureheads – emerging visionaries and techno-libertarians such as Steves Jobs and Wozniak (while not forgetting their contemporary William H Gates III).

Stewart Brand also argued that the counterculture’s scorn for centralised authority was what inspired the philosophical foundations of not only the ‘leaderless’ Internet, but also the entire personal computer revolution.

Brand was writing at the dawn of public Internet ubiquity; it’s less clear that the ‘leaderless’, decentralised ideal still holds in 2014’s world of dominant, highly-resourced online brands like Google and Amazon, around which lesser Web-based entities revolve in a kind of gravitational thrall.

The rise of the virtual world, and better insights into the effect the Internet has had – and continues to have – on most aspects of society, have encouraged a range of ‘isms’ that in many respects match those of the 1960s in their unorthodoxy.

Theories such as singularitarianism, transhumanism, extropianism, and cyber-utopianism – and even facets of cyberdelia and the so-called ‘Californian ideology’ – have been propounded over the last decade as ideological constructs. Their interpretive theories might be pseudo-scientific, but they are more than pure speculation, in that many of their premises are based on evidence drawn from the real-world application of Internet technology.

At first hearing, these species of technological utopianism sound more Gordon Moore than Thomas More, and bespeak a visionary grounding that’s based on forecasts and extrapolations of a range of technology trends, all of which are open to dispute. Most are informed by an aspiration toward making the world a better place.

So do the theories usually classed under the general heading of ‘technological utopianism’ provide anything of value for the earnest technologist looking to better understand the tools of their profession – or even the average human curious about how computer technology is affecting their lives? What, in short, are they about?

Technological singularity, or simply ‘the singularity’, is defined as a theoretical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence, radically changing civilisation, and perhaps human nature. The capabilities of such an intelligence might be hard for humans to comprehend, so the technological singularity is seen sometimes as an occurrence beyond which future developments in human history become not only unpredictable but even beyond our current powers of understanding.

How much utopian thinking evolved from Darwin (left)? Cosmologist Martin Rees (top) and futurologist Ray Kurzweil (bottom) believe technology is putting evolution on a date with destiny



Transhumanism is a cultural and intellectual movement promoting the aim of transforming the human condition fundamentally by developing – and making available – technologies to enhance human intellectual, physical, and psychological capabilities. Transhumanist thinking studies the potential benefits and hazards of emerging technologies that could overcome basic human limitations. It also addresses the ethical matters involved in developing and using such technologies. Some transhumanists predict that human beings may eventually transform themselves into beings with such greatly expanded abilities as to merit a state of being known as ‘posthuman’.

Extropianism, meanwhile, is an evolving framework of values and standards for continuously improving the human condition; extropians believe that advances in science and technology will at some future point enable humans to live indefinitely. Cyber-utopianism is a term put forward by Belarusian writer Evgeny Morozov for the belief that online communication is of its very nature emancipatory, and that the Internet innately favours the oppressed rather than the oppressor. Morozov argues that this belief is naive, and stubborn in its refusal to acknowledge its own pitfalls. He reportedly blames the former hippies for promulgating this misguided utopian belief in the 1990s.

All of these idealistic concepts are – to a degree – concerned with how computer technology and the digital existence it enables may be contributing to the eventual establishment of some kind of refined, enhanced state of human existence, in which the problems that beset us in the ‘real’ world can be alleviated or left behind altogether as we realise that the virtual world offers us a more satisfying, more ‘perfect’ mode of being than conventional physical existence.

It might sound like airy-fairy stuff to the average IT professional faced with everyday chores like pulling cables or fixing software glitches; but these frameworks of values and standards for making sense of the effect computer technology is having on everyone who uses it are at least a first step towards the meta-philosophies that, some might argue, mankind will need as technological progress starts to challenge our human capacity to comprehend its full ramifications.

Technological singularity

Around 70 years ago, the first digital computers relied on fragile vacuum tubes and punched-paper cards, but already their designers knew that they could become so much more than mere calculating machines, if only the clumsiness of 1940s hardware could be overcome by suitable electronics. Lecturing in 1951, computer scientist Alan Turing said: “It seems probable that once the machine-thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits.” At some stage therefore, Turing advised, we should “have to expect the machines to take control…”

In a conversation recalled by fellow mathematician Stanislaw Ulam in 1958, the Hungarian-American computing pioneer John von Neumann suggested that the march of technology “gives the appearance of approaching some essential singularity in the history of the race, beyond which human affairs, as we know them, cannot continue.” Then in 1965 Irving John Good, an outstanding mathematician who had worked alongside Alan Turing as a wartime code-breaker, predicted “an intelligence explosion” triggered by “an ultra-intelligent machine that designs even better machines. The first ultra-intelligent machine is the last invention that man need ever make, provided it is docile enough to control.”

If the distinction between us and our machines begins to blur, then perhaps the question of docility vanishes, because we would hardly want to destroy ourselves (would we?)… Twenty years later, at the University of California, Los Angeles (UCLA), an idea known as transhumanism took shape, just at the point when the recently-arrived personal computers were on their way to becoming commonplace. It became realistic to imagine a time, a few decades down the road, when biology and electronics might be merged, so that both brain and body could be augmented with additional powers of memory, intelligence, social connectivity, agility, and longevity. According to the transhumanist manifesto, we will become ‘better than well’.



Entrepreneur and futurist Ray Kurzweil is now working on Google Brain, a new version of the familiar search engine with the express purpose of actually understanding search queries instead of just robotically sniffing out keywords. Kurzweil is also the leading proponent of ‘The Singularity’, a term derived from von Neumann’s remark, and which Kurzweil defines as “an era during which our intelligence will become increasingly nonbiological and trillions of times more powerful than it is today”. In his many books, articles and lectures, he confidently predicts “the dawning of a new civilisation that will enable us to transcend our biological limitations and amplify our creativity”.

For Kurzweil, the point of no return has already been reached because of what he calls the Law of Accelerating Returns. Technological change is exponential, he argues. The rate of advance is itself speeding up: “So we won’t experience 100 years of progress in the 21st century – it will be more like 20,000 years of progress (at today’s rate).”
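As a rough illustration of where a figure of that order comes from (this back-of-envelope sum is ours, not Kurzweil’s published calculation): if the rate of progress doubles every decade, the progress accumulated across one century, measured in ‘year-2000 years’, is

\[ \int_{0}^{100} 2^{t/10}\,dt \;=\; \frac{10}{\ln 2}\left(2^{10}-1\right) \;\approx\; 14{,}800 \text{ years} \]

– the same order of magnitude as Kurzweil’s 20,000.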

To put Kurzweil’s argument simply, imagine a Greek philosopher from 2,000 years ago, and a clever medieval clockmaker, magically transported to the early 20th century. Both of them may well have grasped the basics of steam engines, motor cars, aircraft; even the wire telegraph might not have seemed startling. Bring them forward another hundred years, though, and the advances that mankind has experienced over just the last three decades, from deep space exploration to instant global communications, would leave them stunned, while biotechnology would be incomprehensible to them. How, then, would we citizens of the early 21st century respond if we could catch a glimpse of the technologies awaiting us in just a few decades’ time?

Kurzweil and his followers believe that a crucial turning point will be reached around the year 2030, when information technology achieves ‘genuine’ intelligence, at the same time as biotechnology enables a seamless union between us and this super-smart new technological environment. Ultimately the human-machine mind will become free to roam a universe of its own creation, uploading itself at will on to a “suitably powerful computational substrate”. We will become essentially god-like in our powers.

Scientists from other fields of research share this vision of epic change just around the corner. Eminent cosmologist Martin Rees asserts that “post-human intelligence will develop hypercomputers with the processing power to simulate living things – even entire worlds”. These worlds will become places where we can really ‘live’, as we test our ability to reshape the meaning of existence itself.

This could sound like unabashed science fiction (predicated on an attainable societal state of man-machine harmony), until we remind ourselves of what is already out there. We have artificial limbs, for instance, activated by nerve impulses. There’s Darwinian software that designs its own improved successor code. We are on the verge of wearable computing, from Google Glass to increasingly technologically-adept smartwatches (see E&T Vol 8 No 12). Already the distinction between the averagely wealthy human and their ‘hardware’ is blurring. It is also obvious that certain kinds (and perhaps all kinds) of virtual experience will become indistinguishable from ‘real’-world sense impressions, once we’ve gone beyond rectangular plasma screens and gone seriously three-dimensional.
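‘Darwinian software’ here refers to evolutionary computation, in which candidate solutions are mutated, recombined and selected by fitness, generation after generation. A minimal sketch of the idea in Python (the target, fitness function and parameters are illustrative inventions, not any specific research system):

```python
import random

TARGET = [1] * 20  # stand-in goal: evolve a genome of all ones

def fitness(genome):
    # score a genome by how many bits match the target
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # flip each bit with a small probability
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # splice two parent genomes at a random point
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

# random starting population, then repeated selection and breeding
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break  # a perfect 'design' has evolved
    parents = population[:10]  # selection: the fittest fifth survive
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(40)
    ]

print('best fitness:', fitness(population[0]), 'after', generation, 'generations')
```

Real systems that ‘design their own successors’ apply the same loop to circuit layouts or program trees rather than bit-strings; the loop itself is the Darwinian part.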

The ‘technological singularity’ concept has its critics, of course. Kurzweil and others assume a smooth, unstoppable acceleration in technological progress, regardless of resource shortages or potential limits to electronic miniaturisation, say. There is much speculative talk of quantum computers, nanorobots, and other futuristic devices for transforming our bodies and boosting our brain power. To be bluntly sceptical, aspects of the technological singularity could appear to have been conceived by clever men of a certain age, who are addicted to vitamin supplements (as Kurzweil freely admits), and who don’t want to shuffle off before they have been able to witness their predictions coming true.

Nevertheless, a report entitled ‘Converging Technologies for Improving Human Performance’, commissioned in 2003 by the US National Science Foundation (NSF), takes seriously the idea that we are approaching “a convergence of nanotechnology, biology, information systems and cognitive science that will have immense individual, societal and historical implications”. The report also predicts “fast interfaces directly between the human brain and machines, enabling new modes of interaction between people”. In short, concepts that 20 years ago seemed like geeky dreams now look perfectly plausible.

There may be some snakes lurking in this technological paradise. For instance, the NSF’s expectation of “access to collective knowledge while safeguarding privacy” seems risible in 2014. Futurologist Ian Pearson, formerly of BT, and now running his own consultancy called Futurizon, does not see the Technological Singularity (or the ‘Convergence’, or whatever we want to call it) as a remedy for all our future adversities.

“It won’t bring a utopia,” he says. “Will the future be better than today, worse, or just different? I can’t answer that question.”

But maybe we’ll find out soon enough. Vernor Vinge, formerly a professor of mathematics at San Diego State University (SDSU), has written many essays and works of fiction based on the ‘singularity’ theme. He insists that “we are on the edge of change comparable to the rise of human life on Earth”, no less, and the time to start thinking about it is right now. “For all my rampant technological optimism, sometimes I think I’d be more comfortable if I were regarding these transcendental events from 1,000 years’ remove instead of 20.”

Transhumanism

Does human evolution need a helping hand? The idea that Homo sapiens is no longer evolving does not seem controversial, and a lot of people believe it. Naturalist David Attenborough wrote in the Daily Telegraph last autumn: “We stopped natural selection as soon as we started being able to rear 95 to 99 per cent of our babies that are born.”

Inspirational computer industry figureheads such as the late Steve Jobs combined counterculture values with corporate cool



Biologists were quick to point out that natural selection is still happening and still influences human evolution: not just because only citizens of the world’s richest nations enjoy low infant mortality, but because death is not the only contributor to natural selection. Many children are simply never conceived, because their would-be parents made choices that rendered conception less likely – and this is not just about contraception, as satirised in Mike Judge’s 2006 movie ‘Idiocracy’. Women with larger numbers of children were found by Stephen Stearns and colleagues from Yale University to be on average shorter, stouter, and likely to experience a later menopause.

These natural-selection pressures are not necessarily the ones we want. Responding to Attenborough’s article, anthropologist Professor John Hawks of the University of Wisconsin wrote: “The traits that make a difference to selection are not typically the ones that matter to the health of 70-year-olds.”

That is arguably where transhumanism comes in: by providing ways of altering the body to make it less susceptible to the effects of ageing, and to evolve in a direction humans choose rather than a direction that population pressures deliver. Transhumanism has splintered into many different forms, but all assume that progress in technology will make it possible to build better humans – or, if not better, then much longer-lived ones. Whether this is an ethically desirable outcome, and whether it will lead to a better human race, remains an important but unanswered question.

Transhumanists such as utilitarian philosopher David Pearce argue that individuals will have a choice over what they will be able to upgrade, and whether to upgrade. However, we do not yet know the price of the upgrade, whether everybody will be able to afford it and whether those individual choices will benefit the rest of us.

Many transhumanists argue that superintelligence will pave the way for a more equitable, less destructive future. As smart people are not immune from making unfortunate choices, this may be one of transhumanism’s more courageous assumptions. So, if we set aside the moral questions of building better humans, is it even possible?

Looking at the direction of research, it may be a process that is hard to avoid – and, if you squint hard enough, we may already be there. Technology already extends us; it is just not part of our bodies in most cases, yet it is getting much closer, even for everyday objects. Take Google Glass, for example. The backlash against the computerised spectacles is arguably the first salvo to go mainstream in what will be a long-running debate over the merits (or otherwise) of transhumanism.

As a removable piece of eyewear, Google Glass is perhaps not obvious as a transhumanist technology, but the backlash focuses on concerns that it potentially gives wearers an advantage, such as the ability to perform face recognition on strangers, or to monitor and record a meeting covertly. If we assume that retinal implant technology will improve over time, it is not a major stretch to a point where the vision-processing technology in Google Glass is deployed, if not in the eye itself, then via a far less obvious wireless connection to a wearable computer.

For other prosthetic technologies, we are still dealing with the low reliability, performance, and robustness of electronic and mechanical systems compared with the biological. The research focus is on how far engineers can repair failing parts of the body; the mind is the next step. Researchers are trying to find ways to fight the effects of neurodegenerative diseases such as Alzheimer’s – treatments that, although they would not extend life, would reduce the proportion of life spent needing personal care.

In the autumn of 2013, US defence research agency DARPA began to talk about its brain-repair projects, one focusing on implants to treat depression, chronic pain, and post-traumatic stress disorder (PTSD) using techniques such as deep-brain stimulation. A second project, Restoring Active Memory (RAM), is an attempt to build electronic devices that can repair brain damage and reverse memory loss. Although there are clear applications for injured soldiers, spinoffs from the work, assuming it succeeds, might prove useful for dealing with the effects of dementia.

Electronics, however, has a lot of catching up to do. As with most other prosthetics, biology today does a much better job than the computer. The machine’s sole advantage for now is that it can record and store data more reliably than a group of biological neurons, which need repeated stimulation to maintain their connections and the memories those connections represent.

Technology in 2014 is faced with two problems. The first is the suitability of existing von Neumann architectures for emulating brain function. Research is continuing into neuromorphic elements that are better at behaving like neurons and synapses, although it remains unclear which parts of the biological neuron are essential to a machine that operates like a brain – and which might even be able to achieve consciousness.
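To make ‘behaving like neurons’ concrete: one common neuromorphic building block is the leaky integrate-and-fire model, in which a membrane potential leaks away over time, accumulates incoming current, and emits a spike on crossing a threshold. A minimal Python sketch (the constants are illustrative, not taken from any particular chip):

```python
def lif_neuron(currents, threshold=1.0, leak=0.95):
    """Leaky integrate-and-fire: return a spike train for a series of inputs."""
    potential = 0.0
    spikes = []
    for current in currents:
        potential = leak * potential + current  # decay, then integrate input
        if potential >= threshold:
            spikes.append(1)   # fire...
            potential = 0.0    # ...and reset the membrane
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold drip of input still fires periodically,
# because integration outpaces the leak.
print(lif_neuron([0.3] * 20))
```

Unlike a von Neumann processor fetching instructions against a clock, such elements compute by the timing of spikes – which is why they map poorly onto conventional architectures.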

The second problem is a possible slowdown in electronic integration. The two-dimensional scaling that characterised the Moore’s Law era is within sight of its limits: at some point we will simply not have enough atoms to construct a working switch beyond the final technology node. Using the third dimension to scale up is a realistic alternative, but we may not be able to obtain the same degree of cost scaling that we have had with integrated-circuit development over the past half-century.
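To put ‘not enough atoms’ in perspective, a rough worked figure (our arithmetic, using silicon’s lattice constant of roughly 0.54nm):

\[ \frac{5\,\mathrm{nm\ feature}}{0.54\,\mathrm{nm\ per\ lattice\ cell}} \;\approx\; 9 \text{ lattice cells} \]

so a 5nm-class feature spans fewer than ten repeating units of the silicon crystal, leaving very little room for further halving.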

Pro-brainers?

Russian businessman Dmitry Itskov is confident that the technology will develop. He set up the 2045 Initiative with a plan to build a complete replacement for the human brain, based on some form of electronic technology, by the middle of this century. Rather than aim just for that endpoint, the team has outlined a series of four phases that make the ultimate aim seem more tractable. By the end of 2020, for example, the 2045 Initiative expects to have ‘affordable’ robotic avatars controlled by some form of brain-computer interface, building on techniques already in place that allow limited control of prosthetic limbs using electronic implants.

The next decade would, according to the 2045 group, turn the avatar into a life-support system for the brain, followed by an accurate computer model of a conscious brain that would provide the means to transfer the mind from the biological brain to a machine emulation. As long as the computer is repairable and powered, it would offer the mind immortality.

As we still do not understand the processes of the brain, a 40-year timescale is ambitious; but assuming that research into brain-assisting implants proceeds quickly, we may encounter an increasing number of hybrid minds that are, to some degree, transhuman, if not enhanced humans. Using biology to redesign ourselves may prove to be a more viable near-term approach for long-lived transhumans.

Gerontologists such as Aubrey de Grey of the SENS Research Foundation focus on extending the life of the human body. De Grey believes that the human lifespan could be stretched to centuries using therapies that subvert the processes that cause bodies to degrade with age, and that repair the damage already done.

The idea of life extension (or youth extension, at least) has begun to capture attention from mainstream companies. Google created the firm Calico, headed by former Genentech CEO Art Levinson, to find ways to deal with what Larry Page described as the “challenge of ageing”.

As transhumanism focuses largely on the self, most branches of the philosophy do not have a ready answer as to whether future humans will make good choices for the many. But, even though the ambitious plans of mind uploading and bodily enhancement seem fanciful today, mainstream technological development is delivering technologies that our descendants, if not our future selves, will see as ‘transhuman’. *

There’s more online...
Blade Runner and technology’s past future: http://bit.ly/eandt-blade-runner
Tianjin Eco-city – blueprint for the future: http://bit.ly/eandt-blue-utopia
Quatermass 60th anniversary: a technologist’s hero: http://bit.ly/eandt-Quatermass60

‘I’m a technological optimist... There will be many enhancements for human existence as well as drawbacks. Whatever’s coming next, it won’t be hell.’ – Professor Vernor Vinge