Designer Babies: Myth or Reality

Solent People’s Theatre, Portsmouth. The performance and discussion reviewed took place on 13 March 2004.

Following the performance of Brave New World, a panel assembled to discuss developments allegedly foreseen in Huxley’s dystopian tale, specifically pre-implantation genetic diagnosis (PGD) and the ongoing furore over its use or misuse.

Juliet Tizzard, editor of BioNews, director of Progress, and a keen advocate of genetic science, went head-to-head with Josephine Quintavalle of Comment on Reproductive Ethics (CORE), an outspoken critic of the likes of IVF and cloning. For Tizzard, the state ought to extend access to reproductive technologies – including allowing parents to use the technology to have a child who can act as a donor to a sibling with a life-threatening condition. This, instead of ushering in the state-directed cloning of the Hatchery, would promote parental choice.

Quintavalle, not one to understate her case, equated such parents with slave owners. The slave’s child doesn’t exist for its own sake, she said, and nor does a child subject to PGD. Ellie Lee, lecturer in social policy and author of Abortion, Motherhood and Mental Health, countered that the Human Fertilisation and Embryology Authority (HFEA) has chosen to interpret the children’s best interests narrowly, ignoring reference to the interests of the family as a whole.

Caroline Jones, lecturer in law at the University of Southampton, sought greater clarity on the status of embryonic cells, and guidelines on how to regulate disputes if ‘things go wrong’. Yet for Lee, the overriding problem is the increasing preoccupation with parenting, and an eroding of the autonomy of family life. More regulation would only undermine this further.

Whilst there is clear blue water between the positions held by Quintavalle and Tizzard (and by implication, Ellie Lee), most people occupy the agnostic middle ground. This was made clear by a number of contributions from the floor. Perhaps we shouldn’t be rushing ahead. Perhaps we shouldn’t be having the debate at all. Then again, if it were preferable not to have a debilitating condition, surely it would be logically preferable not to bring an affected child into this world. Can we trust the authorities not to go too far?

In an impromptu and perhaps mischievous poll by the chair, Tony Gilland of the Institute of Ideas asked whether we could trust parents themselves. There was a hesitant but majority ‘yes’ vote.

Our Final Century: will the human race survive the twenty-first century?

Martin Rees, Astronomer Royal and former President of the British Association for the Advancement of Science, is (apparently without irony) the latest of science’s discontents to pen a sensationalist collection of alarmist projections.

The recurring themes of risky runaway technology, impending global catastrophe and underlying anti-humanist sentiment suggest a shared cultural template for not so dissenting voices. As with Susan Greenfield’s Tomorrow’s People, Rees engages in reckless scenario-building, moving from one to another with just enough rapidity to give the impression of imminent doom. Surely we should be worried when the Royal Society research professor at Cambridge University joins the director of the Royal Institution to condemn humanity to its inevitably sorry fate? Each is so apparently eminent after all.

But, like Greenfield, Rees is a crude technological determinist. The internet is charged with creating ‘sharper social segmentation’ and isolation. Communications technologies are inducing panic, as apparently evidenced by the anthrax scares in the US, and the UK’s foot and mouth disease epidemic. And, admittedly with some justification this time, Rees argues that new technologies will continue to allow greater ‘leverage’ to those with the darkest of designs, that is, everyone from the Unabomber to Al Qaeda. Yet none of this tells us why acts of nihilistic violence are on the increase, or why society is so atomised and fearful. ‘It is easier to conceive of extra threats than of effective antidotes’, he says. Or explanations.

Rees’s proposed solution to the supposedly escalating threats that society and the planet face is to establish criteria whereby ‘we can rule out catastrophe with a confidence level that reassures us.’ To this end he recommends potentially disastrous experiments be put to public consultation to ensure that any risks entailed fall below what is collectively deemed an ‘unacceptable threshold.’ But the technical exercises advocated by Rees will satisfy no-one, least of all his lay experts. Risk consciousness is all pervasive and not at all susceptible to cost-benefit analysis. The much-heralded democratisation of science, already instituted in the fields of biotech and reproductive technologies, introduces yet another drag factor into these already highly regulated fields of enquiry.

Indeed, such fears, as Rees recognises, tend to limit the scope for scientific endeavour. Economic short-termism and ethical considerations are symptomatic of wider trends, rather than responsible for holding back cutting-edge science, as he claims. The panicky climate in the business world and the rise of the ‘ethics committee’ is testament to our anxious times. The spectre of Monsanto as corporate ogre, and the retreat of an elite feeding off the popular resonance of reactionary lobbyists, means the social and economic potential of GM crops isn’t even debated.

In citing areas of research bereft of ‘compensatory benefit’ outside the lab, Rees suggests a profound decoupling of science and society. The notion of harnessing technical advance to effect social progress is largely absent from this book. Again, as with Tomorrow’s People, people don’t feature much except as grotesques intent on crimes against… Well, humanity (for what that’s worth). Consequently, he finds the notion of anything other than the lightest of humanity’s future footprints unimaginable. From technological to demographic determinism, an urban population explosion in the developing world fills Rees with dread. The global population, he confesses, is likely to fall post-2050 from a peak of 8 billion. Yet, he continues, the planet could sustain up to 10 billion people living in ‘capsule hotels’ Tokyo-style perhaps, on a rice-based subsistence diet.

Rees gets all nostalgic for the 4.5 billion years that preceded us when ‘nothing happened suddenly’. We are the unwelcome ‘unprecedented spasm’ gate-crashing the biodiverse party with our agriculture and incessant radio-noise, hurtling chunks of metal into orbit. He describes the search for alien life, or another ‘blue dot’, as the most significant for science since Darwin. ‘Ever since Copernicus, we have denied ourselves a central location in the universe’, he continues. But in asking, ‘Is there life on Mars?’ Rees evokes Ziggy Stardust’s alienated plea rather than the confident pioneering spirit of the Space Age. The ‘right stuff’ values of this era aren’t so much ‘antiquated’ as at odds with today’s mission-fatigue. Rees’s concern for humanity’s ‘biological or cultural legacy’ reveals an identity-crisis behind Our Final Century, and perhaps explains its broad appeal. Apparently he doesn’t do much star-gazing anymore. In fact, this is a scientist on the couch – if we are alone in the universe, he imagines, such a counter-Copernican revelation would at least ‘boost our cosmic self-esteem’.

As a consequence, Our Final Century, a book about the future, is depressingly unable to imagine a tomorrow that transcends the present in any meaningful way. Rees worries that the relentless accumulation of near negligible risks will eventually result in catastrophic consequences, perhaps within a generation or two. On the strength of the apocalyptic imaginings of two of Britain’s most distinguished scientists, I worry that the ‘cumulative impact’ of their improbable scenarios on rational debate, and already failing nerves, is far more of a threat to our best interests. If, as Rees rightly says, ‘the stakes are high in opening new worlds’, we need to rediscover the confidence that takes society beyond this generation’s survivalist ethic.

From Dystopia to Myopia: Metropolis to Blade Runner

From Metropolis to Blade Runner, representations of the city often suggest a bleak view of the future. Has the image of the city become more dystopian? Does culture provide us with an imaginary future, or does it presage the way that we will influence the real future? The film session at the Future Vision: Future Cities conference, held at the London School of Economics on 6th December 2003, looked at the changing historic visions of the city using cinematic examples from different periods. 

For Kim Newman, a novelist and film critic who spoke at the event, filmic depictions of the future are very much reflective of their times. Typically they fail at the box office but acquire cult status in retrospect. Their very downbeat projections and dark fantasies are strangely seductive. It is, I think, worth noting that films such as Alphaville, Blade Runner and Dark City (discussed below) adopt film noir-like devices to portray shadowy, brutal streets through which their lone anti-heroes prowl. This perhaps reflects a brooding cynicism pervading contemporary thought on all things urban after Fritz Lang’s classic Metropolis.

H G Wells dismissed Metropolis (1927) as a mix of ‘almost every possible foolishness, cliché, platitude, and muddlement about mechanical progress and progress in general’. The creators of the Blade Runner cityscape, on the other hand, openly acknowledged their heavy debt to Lang’s vision. Was Wells right? Xan Brooks, film editor at The Guardian (London) online, speaking at the event, described the film as a modernist representation of an ordered society, exhibiting the sense that there was a relatively uncontested view of where humanity was heading. Despite the theme of industrial conflict, I would add, there was at least a shared framework of meaning. That is lacking today.

The apparent absence of a futuristic vision on celluloid in the post-war period arguably reflected a deep pessimism in the Western cultural elite with regard to ‘progress’. The sci-fi classics of the 50s tended to substitute alien encounter for the ‘red menace’ of the Cold War. In Japan, the cultural impact of Hiroshima and Nagasaki was being exorcised in the incredible guise of Godzilla (1954), described by Stephen Barber (Projected Cities: cinema and urban space, 2002) as a ‘spectacularly mutating form engaged in a direct, irresoluble combat against the surfaces of the city’. Was this just a run-of-the-mill B-movie or was it an early example of the city as polluted landscape? Godzilla, with his ‘radioactive breath’, is the result of American nuclear testing. But the toxic lizard takes on a malignant resonance of its own in its intent on destroying Tokyo.

There seemed to be an optimistic cultural turn in the 1960s but the ambivalent attitude to technology and notions of progress seemed to persist in a modified form. In Alphaville (1965) for instance, Jean-Luc Godard presents a dystopian nightmare world hostile to individuality, love and self-expression. Godard was apparently thinking of calling it Tarzan versus IBM. The film warns of the ‘computerised horrors of the city’. The hero of the piece seeks his own reality by battling against its cold rationality and artificiality. This privileging of the emotions was a significant departure. After all, Wells’ lambasting of the sentimentality of Metropolis would be inadmissible to the advocates of the counterculture. 

Stanley Kubrick’s adaptation of Anthony Burgess’s 1962 novel, A Clockwork Orange (1971), was filmed as the counterculture gave way to punk nihilism. It unapologetically indulges us in the amorality and brutality of urban thuggery. This film seems to represent a turn away from the concerns of its dystopian predecessors with mechanical progress, the toxic city and counter-cultural idealism. How do we account for this abandonment of such grand themes, or the ‘vision thing’? Unlike the makers of the earlier films, Kubrick presented not just a bleak depiction of the future, but a near future in which both the city and its most marginal inhabitants are utterly degraded. This quintessentially British dystopia of the period (when considered alongside Derek Jarman’s anarchic Jubilee) is worth comparing with the much grander degradation of the screen adaptation of Philip K Dick’s Do Androids Dream of Electric Sheep? (1968).

Aldous Huxley dubbed Los Angeles ‘the City of Dreadful Joy’ and, in one of his post-Brave New World novels, a ‘ruinous sprawling ossuary’ subject to ‘deforestation, pollution and other acts of ecological imbecility’. In Blade Runner (1982), Ridley Scott added cheap neon, digitalized advertising hoardings and teeming streets to bring this particular LA up to date. According to Xan Brooks, the film presented a post-modern collage as opposed to the ordered cityscape of Metropolis. The old and the new coexist, he said. This is certainly, I think, in contrast to Lang’s portrayal of opposing worlds, the elite cityscape against the mechanized workers slaving below.

William Gibson was writing Neuromancer (1984) as Blade Runner opened in cinemas. He claimed not to have seen the film until well into writing his novel. However, each has been credited with initiating the cyberpunk era of science fiction. The introduction of the virtual dystopia to the genre was seemingly grafted onto the themes of urban decay and moral crisis visited in A Clockwork Orange and Blade Runner. It was as if the ‘punk’ had vacated the brutal alleyways of 70s London and the sprawl of LA to stalk cyberspace instead. But how has the dawn of ‘virtual reality’ impacted on the film city of the future?

In Dark City (1998), Alex Proyas presents a stylised metropolis, an ominous and dark dreamscape. Arguably Blade Runner still casts a shadow over these later films. Yet, like Neuromancer before it, Proyas paves the way for The Matrix trilogy in as far as it ‘depicts a world that is illusory and malleable’. For me though, Dark City is a retreat from engagement with the city as a material or social entity. The political industrial dynamic of Metropolis and the gritty urban realism of Blade Runner are shelved. Alphaville may have been anti-rational but it didn’t indulge in the mystical contortions of these films. We may associate the birth of new ageism with the 1960s but only in the 1990s (alongside Lord of the Rings, Harry Potter, et al) was it to really take hold. Why is this? 

The renowned academic Russell Jacoby has said: ‘The world stripped of anticipation turns cold and grey’. In contemporary cinema, fantasy is the antidote. From the late 90s on, there has been a marked retreat into the inner world, into childhood and away from dirty, complicated reality. This is a dramatic break from Lang’s clearly framed if simplistic depiction of the workings of a futuristic city. As a moral tale, Metropolis towers above the relativist creations that followed. Fractured, partial conceptions of the future dominate today. Indeed, Barber has noted the ‘wry abuse of, or oblivion directed at, linear narration’ in contemporary explorations of the urban.

But is this solely a cultural phenomenon? I would argue that, on the contrary, it reflects the loss of the cohering influence of the defining political projects of the 20th century. As ideological and institutional foundations have crumbled, so have our social narratives and their cultural expressions. Unlike the lead in Dark City, we have a diminished sense of self that cripples our potential to shape the world around us. The future is thus narrowed in its conception or emptied of meaning. Lang’s work is arguably impressive today because its breadth and mastery are counter to the low horizons we now set ourselves. In virtually every sphere of life, those bold enough to present ambitious visions of the future are met with cynicism. This amounts to a short-sightedness that denies the creative capacity of human agency. And, if it goes uncorrected, it will inhibit our potential to conceive a future worth realising.

Note: Thanks to Sandy Starr for advice and comments.

Tomorrow’s World

Future Vision: Future Cities, London School of Economics

‘One year ago, Tomorrow’s World was cancelled,’ announced Austin Williams, convenor of the one-day conference Future Vision: Future Cities. Indeed, as the knowing laughs from the audience suggested, even though the reference was to the BBC’s former flagship TV science programme, the implications are wider than that. As the one-day conference on attitudes to risk, urban life and the future came to a close, delegates were left to ponder where we go from here.

Martin Wright, editor in chief, Green Futures, envisioned, well, a green future. The ‘polluting, messy, noisy’ carbon-fuelled age will be replaced by quiet and clean hydrogen-fuelled technologies, he predicted. Wright described a future characterised by greater ‘connectivity’, not just in information flows, but also in the movement of populations and the transmission of diseases. Such a globalised future would mean a greater dependence on the reserves held by politically unstable states. Thus was his vision of the future undermined by a fear of increasing threats to world peace. His demands for clean technology were driven as much by his belief that terrorists would be put off attacking a benign power source as by concern for the future of the planet.

Kevin McCullagh, director of Foresight, Seymour Powell, described Wright’s depictions of a sustainable future as a ‘barrier to innovation’. Despite this, environmentalist projections have come to the fore as the West has lost its vision of the future. It is telling, he said, that we still refer to Kubrick’s 2001: A Space Odyssey, or the experimental spirit of the Sixties to conjure up a more optimistic take on what could be.

Science writer and former senior manager at the SETI Institute, Greg Klerkx was far more optimistic, describing the period from the launch of Sputnik (1957) to the Challenger disaster (1986) as the ‘first space age’. Perhaps routine space flights are still a fiction, but satellite communications are with us; extra-terrestrial mining may be a way off, but we can effectively track the use of earthly resources.

Jeremy Newton, chief executive, National Endowment for Science and Technology in the Arts (NESTA), on the other hand, argued that transport policy has been taken over by the heritage industry. Indeed the only transport innovation we allow ourselves is a ‘machine for reversing time’, the celebration of cutting-edge 19th century technology in the form of trams and bicycles! This presentation, which challenged the accepted vision of transport, was well received for its wit and whimsy. However, Williams questioned whether simply exposing the folly of a reactionary transport strategy really addresses the societal shift that now views cars as a problem, lauds the pedestrian and decries mobility. ‘Saying that we are in favour of better modes of transport is of little impact if we are unable to challenge the climate of opinion that says that walking and cycling are more responsible means of mobility.’

For Claire Fox, director, Institute of Ideas, we have a problem not only with the future, but we are also ‘profoundly alienated from the past’. The perceived side-effects of our former self-indulgences are projected into a future where human intervention can only bring unpredictable, and more importantly, undesirable outcomes. In contrast to the futurology exhibited by some taking part in the conference, this state of affairs amounts to paralysis, a stifling ‘presentism’.

But for Fox, the adoption of a futuristic outlook would not in itself change things. We can’t design ourselves out of the problem. Innovators are as likely as their ‘eco-worrier’ contemporaries to internalise the gloomy thinking that characterises our times. Ironically, she said, for all their rhetoric about saving the planet, ‘future generations will be ill-served’ by such anti-human apologists.

Whatever your concern, be it the future of technology, of humanity, or of the planet, it is hard to deny that the ‘vision thing’ is conspicuous by its absence, except in vacuous assertions of the need for ‘Blue Sky thinking.’ FV: FC was an important opportunity to explore what is at stake not only for the urban-dwellers of tomorrow, but for those of us in the here and now.

Tomorrow’s People: how 21st century technology is changing the way we think and feel

by Susan Greenfield

In Tomorrow’s People, Greenfield, renowned neuroscientist and director of the Royal Institution, indulges her literary ambitions to create a speculative dystopia owing much to Huxley.

In this updated Brave New World, she imagines a near-future when the likes of genetic modification, nanotechnology and cybernetics conspire to leave us in a ‘passive, sensory-laden state’. Our sci-fi imaginings, says Greenfield, tend to present a high-tech world in which we nevertheless remain human, our essential being unchanged. However, the intrusion of these 21st century sciences will alter our lives beyond recognition.

Increasingly, the physical world will itself become an interface of ‘tangible bits’ where we exchange CVs via the electro-conductive sweat of a handshake, and communicate via the e-broidery of our ‘softwear’. Augmented reality (AR) applications will turn us into cyborgs. Chip-embedded spectacles projecting a superimposed image onto the retina will be used to aid engineering design, for example, pinpointing sections for maintenance or manufacture. A little further off, perhaps, similar devices will allow parents to peer into an artificial womb and track their child’s development.

At home, the little ones will play with their ‘smart toys’, which mirror their development as each grapples with its environment; or amuse themselves assembling a kind of sub-atomic nanotech Lego. Meanwhile, their flexi-operative parents will ‘plug-in-and-play’, their serotonin-depleted brains episodically provoked to virtual ‘desk rage’ as performance stats are relayed to the virtual boss. They will socialise ‘remotely’, or be promiscuously lost in virtual sex role-play with a designer partner of their choice. All the while, the Hyperhouse, with its ‘electronic spine’, will teem with smart appliances, activated by bodily sensors adjusting ambience and functionality accordingly.

Beyond the not so private sphere, populations will diverge further as the uneven application of these technologies leads to ‘speciation’. In Greenfield’s most optimistic scenario, there will be no international development as such, but a wiring up of cottage industries, equipping ‘every village with an electronic library’! The spectre of apocalyptic bio- and cyber-terrorism will reign. We will wear air quality monitoring devices, keep the hi-tech equivalent of a gas mask in the bathroom cabinet, and our offices and homes will be equipped with sophisticated air-filtration systems.

It may already be apparent that Tomorrow’s People is ambivalent about the future. It is also profoundly anti-human in outlook. Post 9/11, Greenfield finds it ‘harder to regard the human element as a constant force for good’. But her pessimism seems more deep-rooted. For her, the self is a fragile expressive entity, and ‘the firewall of our sense of individuality’ is in increasing danger of being breached – by the collective, i.e. other people! In the home of the future, you’ll need a ‘real room’ retreat from the interactive noise, but will equally find the offline experience exposing and disorientating. The desire for real time stimulation will draw us to the sporting arenas and their ‘seething mass of sweaty humanity’, a frightening and distasteful prospect for Greenfield.

Greenfield’s view that ‘human nature’ has changed little since our ancestors got off all fours bears little scrutiny. If this were the case, these new technologies might indeed be unnerving. However, the shaping of every tool since the carved animal bone has also helped to shape our minds, our environments and our social organisation. Only deaf separatists would claim that cochlear implants erode the identity of their beneficiaries – but surely this example of early cybernetics is just an extension of historical precedent.

Like any good dystopian, Greenfield captures something of our lives today and projects it into the future. We are certainly living increasingly individuated lives, alienated and fearful of each other, but technology is not making us this way. Already, she notes, some of us float in and out of a virtual world, with our hands-free mobiles, oblivious to those around us. But text messaging and virtual-dating, for example, are popular because of their distancing qualities, the antithesis of what communication technologies are ostensibly for. Instead of being understood as a means to mastering our environments, technical advance can take on a threatening mystical quality, and end up mediating our anxieties.

Greenfield is anxious that ‘text-based unambiguous knowledge’ will give way to associative hypertext. But this would be a consequence of the relativisation of knowledge, a cultural phenomenon, not a technological one. Similarly, Greenfield wonders whether ‘science has made us less accountable for our actions’. Simply put, no it hasn’t. This very sense of humans being deeply vulnerable, with little agency (a sense to which she seems to subscribe), is doing this all on its own. Consequently, the erosion of the private sphere has been underway for some time, with the intrusion of the state, not IT, being the primary driver.