Mind Children: A Terrifying Future and Our Present Roadmap

Written in 1988, Hans Moravec's Mind Children is an at times breathless celebration of the march of cybernetics and an exhortation to techno-humanism. Thirty years on from its publication, it does not appear as a curious detour in thinking, nor terribly outdated in its claims. Instead its values seem to have encroached further into the culture, taking deeper root within the echelons of power in Silicon Valley and disseminating themselves through the technology used by millions of consumers. We find ourselves living in the future so predicted, part of the transition phase toward species-wide extinction. After a prolonged dark age, artificial intelligence has produced startling breakthroughs and is now being deployed in the service of mindless capitalism, threatening to eliminate whole sectors of human work, while virtual worlds grasp for ever-more-accurate replications of reality to reassure us with falsehoods as we do next to nothing to solve our looming ecological crisis. Mind Children remains an instructive introduction to the destructive thinking that underlies much of the development and investment unfolding today.

The field of cybernetics burst into public consciousness with the publication of Norbert Wiener's Cybernetics (Hayles, 1999). The book, and subsequent writings and developments in the field, did much to eliminate the distinction between animals and machines. This did not simply elevate the status of electro-mechanical machines in the discourse; it also debased the animal in the process, obscuring those aspects of it that the theory could not adequately account for. Consciousness, for example, does not factor at all in the cybernetic view, so that over time it goes from a thorny problem whose solution is unknown, to an illusion, and finally to something that does not even need to be addressed, if it was ever there to begin with. While Wiener was quite reluctant to follow through all the implications of cybernetics, still clinging to a conception of the self rooted in the old liberal humanist frame, Moravec represents a later generation of researchers, one so seemingly unconcerned with the deep implications of its philosophy and with moral questions that it would be comical if it were not also so frightening.

Moravec opens the book by proclaiming that the postbiological future of self-improving, self-replicating machines acting independently from humans is inevitable (Moravec, 1988). This claim to inevitability is perhaps used by techno-humanist proponents as much for its utility in allowing them to avoid the question of why such a future would be at all desirable as for their belief in it. Moravec sees the body and the mind as in an uneasy tension and laments that while genes are able to live on through progeny, the mind dies with the body. The postbiological world then offers an opportunity for the mind to be "freed" of the body.

In her examination of the history of cybernetics, N. Katherine Hayles finds in Norbert Wiener someone acutely uncomfortable with his body. She notes that he could not throw horseshoes in even approximately the right direction and had to abandon a career in biology early on because he was too clumsy for the lab work (Hayles, 1999). Hayles sees Wiener's disembodiment as influencing his analogical line of thinking (Hayles, 1999), and it is not difficult to imagine a similar level of self-alienation present in Moravec and underpinning his ideas. How profoundly self-alienated must one feel to speak of the mind being "freed" of the body? It does not occur to Moravec to ask whether such a thing could even be possible. He imagines the mind as by definition separable from the body and from that possibility extrapolates to a desire for separation. That thinking can serve our genes, that there can be a healthy mind-body dynamic, is alien to Moravec, who can conceive only of a master-slave relation, one where he sees the rightful roles wrongly reversed1.

On the point of the inevitability of machines becoming human-independent, Moravec does not explain why this should be the case, nor where such machines might get their values and their motives. Moravec's cybernetic paradigm allows him to see autonomy and intentionality without invoking the human element or anything even resembling it.

Moravec foresees robots with human-level intelligence becoming common within fifty years (twenty from the present day). At least in the short term, he sees people being able to earn a good income writing software for robots as the robots displace people from numerous industries. Such an optimistic view assumes that there will be a viable software market (something not at all clear when 'information must be free' is a dominant philosophy) and fails to account for what happens once all the low-hanging fruit is taken. This was seen with the iPhone's App Store shortly after its introduction, where plenty of individuals were able to score big with non-reproducible results. As the App Store became saturated, the prospects for individual software makers rapidly diminished. Moravec believes that programs will have limited lifespans (Moravec, 1988) as they become outdated and need to be replaced with newer and better ones. This ignores the issue of entrenchment, whereby it is more cost-effective to moderately update an existing program than to adopt a new one, so that whoever presents a solution that is 'good enough' early enough goes on to take a commanding lead and eventually becomes monopolistic. Moravec also sees there being room for many software developers, with multiple programs needing to be written by multiple authors (Moravec, 1988), down to the level of separate programs for separate tasks, akin to the individual processes running on a computer. While a boon for software developers, such an outcome would be a nightmare for consumers and businesses, who would have to decide between dozens of competitors for potentially hundreds of tasks. Limited human cognitive attention tends to favour large bundles, implying that the software field would instead come to be dominated by a few monolithic players.

All this criticism, though, still assumes that software even needs to be written twenty years hence. With machine learning, it is not the algorithms themselves that are valued, but the datasets on which they are trained. And while datasets are highly valued, the people who generate all that data certainly are not (Lanier, 2013). Netflix uses user behaviour to improve its recommendation engine, but users see none of the monetary reward that Netflix extracts from studying their behaviour. This is not to single out Netflix: the same behaviour is typical of every company that now uses machine learning, from Amazon to Google to Facebook.

Things start to get unnerving when Moravec writes on symbiosis and the dissolving boundary between the natural and the artificial in human-machine partnerships. My unease comes not from the dissolution of such boundaries per se, but from the revelation of the technological roadmap laid out, whose consequences we are currently living. Moravec writes that Alan Kay, the guru of the Xerox PARC group, envisaged a book-sized personal computer (called the Dynabook) with a high-resolution colour display and a radio link to a worldwide computer network, acting as secretary, mailbox, reference library, amusement centre and telephone (Moravec, 1988). We can now recognise this vision as fully realised in the modern smartphone. Moravec then introduces the idea of magic glasses, exemplified at present by virtual reality headsets, as superseding the Dynabook, and magic gloves as extending the sense of touch into the virtual realm. These ideas are presented as though their mere novelty, or the possibilities they promise, merits bringing them into existence. Yet the rise of smartphones and social media seems to have brought with it increasing alienation, anxiety and depression, and decreasing attention, awareness and capacity for critical thought. While there are things to be said for how we use technology, technology also uses us in often inescapable ways, and many of these new technologies threaten to further diminish us in ways we will not be able to resist simply by 'using them responsibly'. The tech industry appears to be barrelling forward to realise the technologies that Moravec prophesies, and just like Moravec it fails to address the why of it. Virtual reality headsets certainly allow for increased immersion in a virtual realm, but why is that desirable, or preferable to increased materiality for the virtual?
Certainly there are plenty of practical use cases for virtual reality, particularly as it pertains to three-dimensional work in product design, engineering and architecture, but it is taken for granted that to achieve this the user must be removed from the physical world and placed into the virtual, when in actuality bringing the virtual into the physical world matches our intuitions more closely2.

Further into the book, Moravec begins to imagine with barely contained excitement the possibilities as human and machine, real and virtual, become more indistinguishable. Moravec speculates that as automation takes over, forgoing automation will not be an option, as global competition enforces its innovate-or-die condition. Moravec claims that machines will grow to outclass people, but refrains from specifying in what ways (Moravec, 1988). Certainly, from the perspective of people as economic units, it is possible that machines will outclass them, but this reflects as much a dismal conception of humanity as it does the achievements of machines. Moravec then spends a curious amount of time imagining what he calls a robot bush, a set of massively recursive robot manipulators (think fingers upon fingers upon fingers), enabling extremely fine and precise manipulation. In what I believe to be a rather telling passage, Moravec asks us to imagine inhabiting such a body. Reading it, I could almost hear his overjoyed squeal about as acutely as I felt the urge to retch. The why of inhabiting such a body is never specified3.

Moravec continues, speculating on how humans might "keep up" with the accelerating pace of machines and prevent themselves from being left behind. Moravec envisions the body as an encumbrance, a limitation to be overcome and a shell to be left behind, saying that it is not enough simply to transplant the brain; the mind needs to get out of the body (Moravec, 1988). Moravec envisions digital mirrors taking over for you when you die, implying that this would enable one to cheat death. Moravec notes how in split-brain patients each brain half appears to host a fully conscious, intelligent human personality. He imagines a computer simulation inserting itself between the two hemispheres in such a subject: "After a while it begins to insert its own messages into the flow, gradually insinuating itself into your thinking, endowing you with new knowledge and new skills." (Moravec, 1988, p. 112) Moravec does not consider that in so doing the simulation may change your capacity for thought in ways that are more limiting, unwittingly optimising your brain only for the maximisation of specified outcomes.

Moravec continues his speculation, turning his gaze to the topic of body-hopping and multiple copies: "With enough widely dispersed copies, your permanent death would be highly unlikely." (Moravec, 1988, p. 112) Moravec makes clear that he takes a pattern-identity position on the question of what constitutes the self. In the pattern-identity position it is the pattern and the process that define the person, not the supporting substrate, what Moravec calls a "jellylike body". He contrasts this position with the body-identity position, whereby a person is defined by the "physical stuff". Absent is any acknowledgement that both pattern and physical substrate could be vital and interrelated in defining the person. The pattern-identity position values the relation of the individual to a network above all else, appearances over substance, and denies that any concerns of consciousness are important to defining the self. All that matters is whether a pattern can be interpreted as being human in order to qualify it for that status4. Given this position it is possible to see how Moravec can consider digital mirrors a way of cheating death, or multiple copies of oneself as making death practically nonexistent. Moravec goes on to admit that immortality is impossible, since the upgrades necessitated by the march of progress will eventually make the "old you" obsolete, saying "... personal immortality by mind transplant is a technique whose primary benefit is to temporarily coddle the sensibility and sentimentality of individual humans." (Moravec, 1988, pp. 121-122)

More enhancements await those willing to give up the sentimentality of the flesh, such as the speeding up of mental faculties to give one more time. Absent from Moravec's discussion of the idea is any recognition that a universal speed-up would result in no effective time being gained, since everyone would be working at the new pace. That one would gain time is only possible provided speed-ups are not universal, and preferably concentrated amongst an elite few. Always hiding under the surface of techno-humanism is intense societal stratification, since so many of our ambitions for self-improvement are simply about being better than our fellow humans.

Moravec conceives of immense supercivilisations containing uploaded human and animal minds, expanding outwards from the sun, converting non-life into "mind". The optimistic aspiration of techno-humanism then promises not only to destroy the entire planet (while failing to grasp or preserve its beauty) but to continue that destructive project across the cosmos. Oh joy. From the pattern-identity position, simulated people are real, and Moravec notes that it would be "fun" to resurrect all past inhabitants of the earth and run them through a simulation. Despite seeing simulants as real, Moravec offers no acknowledgement of the moral and ethical questions raised by such simulating.

Moravec's thinking represents a culmination of sorts of the cybernetic tradition. Information was dematerialised, then valued above everything else. Claude Shannon, the inventor of the modern conception of information theory, was himself very careful about where the theory could be applied, but his followers were far more reckless in their applications (Hayles, 1999). Moravec is like a child who has inherited all sorts of toys that he does not understand and, rather than investigate their uses and limitations, proceeds to stretch them further and further with delirious glee.

At times Mind Children reads like the fantasies of a disturbed teenage boy, but it is rendered important by the influence its ideas hold in the technology sector. Technological determinism is well entrenched in technology fields, with engineers working furiously to realise the "inevitable". Tech industry leaders are rarely so vocal about their contempt for embodied humans and their lack of belief in humanity, which only increases the import of Moravec's exuberance. The dizzying pace with which he jumps to ever bigger and more ambitious projects, and the sometimes contradictory nature of his underlying assumptions, all evince the lack of critical examination regulating technology development.

Footnotes


1 I suspect body alienation to be quite pervasive in modern western society. Within the master-slave paradigm, people view themselves as using or being used by their bodies, and this expresses itself in unhealthy ways: in harmful conditions like anorexia and in excessive obsessions with fitness. The love-hate dynamic feeds off itself, so it is not surprising that there is disdain for and devaluing of the flesh in parallel with its being coveted and commodified.
2 An example of what I mean would be an alternative to the HTC Vive in which the trackers placed around the room also serve as projectors, and the user wears no headset but instead experiences the world as normal while the virtual world is instantiated in the room via holography.
3 Assuming such a device were useful and economically justifiable (not to mention feasible from a control perspective), it would seem to go without saying that it would be operated by software, with perhaps only the simplest of human inputs.
4 Never mind that it is not clear that a pattern can be interpreted as such by anything other than a mind embodied in "physical stuff".

References


Hayles, N. K. (1999) How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. London: The University of Chicago Press, Ltd.

Lanier, J. (2013) Who Owns the Future? New York: Simon & Schuster.

McLuhan, M. (1964) Understanding Media: The Extensions of Man. London: Routledge & K. Paul.

Moravec, H. (1988) Mind Children: The Future of Robot and Human Intelligence. London: Harvard University Press.

Wiener, N. (1961) Cybernetics: or Control and Communication in the Animal and the Machine. 2nd edn. Cambridge, Massachusetts: The MIT Press.