And now for the fantastical...
"I think the
brain is like a program in the mind, which is like a computer, so
it's theoretically possible to copy the brain on to a computer and so
provide a form of life after death." Not a believer in the
conventional concept of an afterlife, theoretical physicist Stephen
Hawking was speaking during an appearance at the Cambridge Film
Festival in late September.
Hawking has long suffered from motor neuron disease. When diagnosed fifty years ago at the age of 21, he was told he had three years to live. His primary means of communicating with the outside world is now a computer interface. Hawking's comments at the film festival take the concept of artificial intelligence to a level previously explored primarily in works of science fiction.
Artificial intelligence has been around for a while. The Turing test, a test of a computing machine's ability to exhibit intelligent behavior indistinguishable from that of a human, dates back to 1950. The term "artificial intelligence" was first used in 1955. I worked on one form of artificial intelligence, the development of an "expert system," for a brief period back in the mid-1980s - not quite the vacuum-tube era, but almost 30 years ago. The objective was to capture in a computer program the knowledge base and decision-making ability of an expert. The program could then be used, for example, to help respond to a plant emergency or to anticipate and correct potential manufacturing problems.
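The style of expert system described above - a knowledge base of if-then rules plus a simple inference loop - can be sketched in a few lines. The plant-emergency rules here are invented purely for illustration, not drawn from any real system:

```python
# Minimal sketch of a rule-based expert system: a knowledge base of
# if-then rules and a forward-chaining inference loop.
# The rule names and facts below are hypothetical examples.

RULES = [
    # (set of conditions that must all hold, fact to conclude)
    ({"coolant_low", "temperature_rising"}, "risk_of_overheat"),
    ({"risk_of_overheat"}, "recommend_shutdown"),
]

def infer(initial_facts):
    """Forward-chain: keep applying rules until no new facts are added."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"coolant_low", "temperature_rising"}))
# the result includes "recommend_shutdown"
```

Real expert systems of that era (built in Lisp or shells like EMYCIN) added certainty factors and explanation facilities, but the core "capture the expert's rules, then chain through them" idea is the same.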
The increase in computing power has been literally exponential, but some experts say that Moore's Law ("computing power doubles every two years") will soon be coming to the end of its usefulness. We are reaching the physical limits of manufacturing silicon chips. Not to worry. The 2012 Nobel Prize in Physics was awarded to two quantum computing pioneers - Serge Haroche and David Wineland. Quantum computing, still in its early developmental stages, will make the billion-fold increase in digital computing power since the mid-1950s seem like the step from finger-counting to the abacus.
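The "billion-fold increase" follows directly from the doubling rule. A quick back-of-the-envelope check, using illustrative rather than precise industry dates:

```python
# Rough illustration of Moore's Law ("computing power doubles every two years").
# The start and end years are illustrative assumptions, not precise figures.

def moores_law_factor(start_year, end_year, doubling_period=2):
    """Cumulative growth factor if capability doubles every `doubling_period` years."""
    return 2 ** ((end_year - start_year) / doubling_period)

# From the mid-1950s to the early 2010s: 29 doublings.
factor = moores_law_factor(1955, 2013)
print(f"{factor:.2e}")  # about 5.4e8 - on the order of the billion-fold increase
```

Twenty-nine doublings gives roughly half a billion; a few more years of doubling puts it comfortably past a billion, which is why the figure is usually quoted that way.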
Ten years ago, a paper sponsored by the National Science Foundation and the Department of Commerce noted that "Four transforming tools have emerged: nanotechnology for hardware, biotechnology for dealing with living systems, information technology for communication and control, and cognition-based technologies to enhance human abilities and collective behavior." The continuing advances in computing capability and these other areas have led to speculation about what the future holds.
Computers are everywhere and are already enhancing human capabilities in areas such as communications, medicine and vehicle safety. Robots are already capable of mowing lawns and cleaning houses. They soon will be, or already are, predicting monster storms, mapping vineyards for better wine production, clearing jellyfish blooms, fighting brain tumors, taking over exhausting and dangerous mining activities, teaching Japanese calligraphy, watching over sleeping patients in nursing homes, and, of course, in the form of superstar robot the Mars Rover, exploring space.
Science fiction has envisioned even more amazing scenarios - computing machines achieving self-sufficiency, a technological singularity, and, returning to Hawking's comments, personalities downloaded into computers. Each of these poses some interesting philosophical questions.
If computers or robots achieve self-sufficiency, to what degree should they be controlled and what rights should they have? Science fiction (and science) writer Isaac Asimov's Three Laws of Robotics address the question:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
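What makes the Three Laws interesting as an engineering idea is their strict priority ordering: each law yields to the ones before it. A toy encoding of that ordering might look like the following, where the action fields are invented for illustration only:

```python
# Toy encoding of Asimov's Three Laws as a priority-ordered check.
# The dictionary keys describing an action are hypothetical, not from Asimov.

def first_violated_law(action):
    """Return the number of the first law the action violates, or None."""
    # First Law: no harm to humans, whether by action or by inaction.
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return 1
    # Second Law: obey human orders, unless obeying would break the First Law.
    if action.get("disobeys_human_order") and not action.get("obeying_would_harm_human"):
        return 2
    # Third Law: self-preservation, subordinate to the first two laws.
    if action.get("endangers_robot") and not (
        action.get("protects_human") or action.get("fulfills_order")
    ):
        return 3
    return None

print(first_violated_law({"harms_human": True}))   # 1
print(first_violated_law({"endangers_robot": True, "protects_human": True}))  # None
```

The sketch also hints at why the Laws are hard in practice: everything depends on the robot correctly labeling an action as "harms_human" in the first place, which is exactly the judgment problem the stories exploit.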
Scientists are taking the possibility of self-sufficiency seriously. A February 2009 meeting under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI) discussed the "extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards". [Ref. 1]
We have no way of predicting what a super-intelligence might actually mean for humanity and whether it would bother at all with the Zeroth Law (Asimov's later addition, which places the welfare of humanity as a whole above the other three laws). Commentators have noted that evolution "has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind." Some have gone so far as to say that one result of a super-intelligent entity might be the extinction of humanity. [Ref. 1]
Peter F. Hamilton's Commonwealth Saga and his Void Trilogy - five books set in the same fictional universe - present a world with faster than light travel, advanced post-physical civilizations, genocidal aliens, galaxy-devouring anomalies, and downloaded personalities. The novels relate events over a span of more than a thousand years and many of the main characters are present throughout this time. The lynchpins in this capability for extended life are the "memory cell" and advanced bioengineering. Every so often, an individual's memories are downloaded into a memory cell. When it's time to get young again or if the individual suffers a physical death, the memory cell is loaded into a re-fashioned body.
There are several important questions to be asked when considering these examples of advanced artificial intelligence. "Can computers or robots achieve enough self-sufficiency to become sentient?" "How would we know if they do?" "Can an individual human personality be completely downloaded into a computer?" "Would that downloaded personality be the individual?"
Which brings us to the crux of the matter - consciousness. Consciousness is one of our great unanswered questions. It is at the core of the human person and experience - I am aware that it is I who am experiencing the external world. But no experiment has ever located it within the physical brain. It seems, to me at least, that to expect consciousness to spontaneously arise in a physical object when it attains a certain amount of intelligence is an extraordinarily long shot. As to the question of downloading a human personality into a computer, historian and philosopher of science John S. Wilkins writes: "Physical differences make a difference, and ahead of time we can’t predict what will have an effect on whether silico-me will behave the same way vivo-me does. While I am sure that we will manage to simulate “me” or somebody like me (more likely Stephen Hawking than me), the simulate won’t be “me” in any deep sense. It’s either going to be a representation of me that has some dynamical properties in common with me but not all, or it’s going to be a deep simulation of me that lives a day every so many months of realtime. And even then it won’t be exactly like me..."
Nevertheless, concepts such as robot sentience, machine super-intelligences, and near immortality make for great entertainment. Science fiction, like all fiction, depends on the reader's "willing suspension of disbelief." While reading science fiction may not increase our empathy as much as reading literary fiction (Science, October 3; Slate, October 28), it does exercise our imaginations and sense of wonder. Occasionally, in the best of the genre, it allows us to think about some of the deeper questions of the universe.
Reference and Images
Ref. 1 is ask.com's wiki entry on the technological singularity. Futurist Ray Kurzweil predicts a date of 2045 for the singularity event.
Robot image is from Popular Science website.
Peter F. Hamilton collage is from his website.
Interesting Reading
For a unique look at consciousness, read neuroscientist Giulio Tononi's Phi: A Voyage from the Brain to the Soul. From the Goodreads review: "Not since Gödel, Escher, Bach has there been a book that interweaves science, art, and the imagination with such originality. This beautiful and arresting narrative will transform the way we think of ourselves and the world."