A student writing a research paper on robots requested an interview via e-mail;
here are his six questions and my responses.
* * * * *
1) What is the biggest threat of advancing robots into the future?
First, I would tend to agree
with Isaac Asimov that future robots will probably come with built-in
safeguards to prevent any sort of rebellion. The danger will come when, as
seems inevitable, their intelligence begins to exceed human intelligence. Then,
as writers like Vernor Vinge have noted, we will be completely unable to
predict what they will do, and their superior abilities will make the
achievements of humanity less and less important. Further, though there is no
reason to think that they will have any desire to enslave or oppress humans,
such super-intelligent robots will inevitably come to dominate all human
affairs, for the simple reason that, in any group or society, power naturally
tends to flow to the most intelligent being in the room.
2) What kind of positive side
effects do you think would come of advancing robots anyway?
Adopting a broader
perspective, and not worrying about the petty concerns of humanity, one can say
that more intelligent robots would be a boon to the future of the universe in
general, since they would be capable of developing theories and technology that
humans may not be able to develop. Even if humans were no longer in control,
and felt like second-class citizens, their lives would probably be better
because of their robots' accomplishments.
3) Do you think any more
rules should be added to Asimov's three rules, now that we have the internet
and more advancements?
This
is actually a matter now being debated by specialists in robotics. One person I know of who has written about extending Asimov's laws is Roger Clarke, and I recently found that his paper on the subject is now online. From a personal perspective, I would first agree
with the later Asimov that something like his Zeroth Law is needed: if a robot
faces a choice between saving one person and saving 100 people, it requires a law that would lead it to save the 100 people, even though doing so would violate the First Law by allowing that one person to die. Also, it always
seemed logical to me that Asimov's Third Law should be followed by a Fourth Law
like this: "A robot must protect the existence of other robots as long as such
protection does not conflict with the First, Second, and Third Law." That is,
we will want robots to protect human beings, but we would also want them to
protect other robots, to feel a sort of altruism toward their own kind as well as toward our own species. Finally, Asimov assumed that humans would always be humans, that robots would always be robots, and that there would never be any problem
in distinguishing between the two. But we now know that various sorts of
cyborgs, combining organic and mechanical parts, are possible and perhaps
inevitable. So, if we have a human who retains an organic brain but has that
brain directly connected to a computer that assists in her thinking, should
that person be considered a human, or a robot, in the context of the Three
Laws? Would a robot have to rescue or obey such a being as a human? Given the
choice between saving that being and saving a "real," 100-percent human
person, would the robot save the latter person, on the grounds that she is more
human than the cyborg? I'm not sure how these issues could be resolved within
the framework of Asimov's Laws, but perhaps those Laws would need to be preceded by
what Clarke might term a Meta-Law like the following: "For the purposes of
these Laws, any being primarily controlled by a human intelligence shall be
considered human, and any being primarily controlled by a mechanical
intelligence shall be considered a robot."
4) Should we build robots
that are exactly like humans, or only ones with a specific task?
Actually, I think the
original Star Wars movie provides a reasonable answer to that question.
If robots are designed to perform tasks that do not involve a lot of
interaction with human beings, like R2-D2, there is no reason to make them look
like humans, and any sort of functional design would be fine. However, if
robots are designed to regularly interact with people as servants or advisers,
it would be better to give them a human form, like C-3PO. Whether we should
extend this principle to make such robots look exactly like humans, so that
they cannot be distinguished from humans, is another question, since people
might be discomfited if they could not immediately tell whether their friend's companion is a human or a robot. In one story whose name I do not recall, all robots were required to have writing on their foreheads that identified them as
robots, to prevent this problem. Perhaps small children, who might be frightened even by something humanoid like C-3PO, would be best served by robots that looked exactly like humans, but in other cases this would not seem necessary: a general human form should be sufficient to facilitate interaction
with humans.
5) Should we create
protection robots for the elderly or important people? Or would the risk be too
high?
In fact, given that society
faces the increasing problem of elderly people living alone who have no one
there to help them in emergencies, robots might be the ideal solution: instead
of something like Life Alert, an endangered person might signal her robot to
come to her aid. Beyond such simple tasks, there is the broader question of
whether a robot would ever have the far-ranging intelligence and imagination to
function as an effective bodyguard, since members of the Secret Service, for
example, must be constantly vigilant against every sort of possible assault on
the President. Still, robots could also be provided with superior sense organs
that would allow them to see or hear threats that humans could not detect.
Perhaps truly important people could be best protected by a team of humans and
robots. As for the "risk" involved in such robots, any piece of machinery might
malfunction at any time, but as indicated above, I have little concern that such a
robot might, for example, suddenly attack a human.
6) If so, should there be an
emergency shut-down button in case of a malfunction?
From one perspective, if a protective robot suddenly went out of control and started a destructive rampage, an emergency switch to turn it off would be helpful. On
the other hand, if a robot protector could be easily disabled, that would
diminish its value as a protector, since an evildoer might find and employ that
switch to leave his intended victim vulnerable to an attack. This is a question
that might be addressed by asking another question: since human beings also
"malfunction" sometimes and do things that we don't want them to do, should all
humans have an emergency mechanism installed in their bodies that would, say,
cause them to immediately become unconscious if the button is pressed? Again,
there is a certain logic behind the idea, but some obvious problems as well.
Overall, if we are building reliable robots to perform such functions, we
should be able to build them so that there is no real danger of a ruinous
problem that would require emergency measures.
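As a purely hypothetical illustration of how one might keep an emergency stop from becoming a gift to the evildoer, the Python sketch below honors a shut-down request only if it carries a fresh, correctly signed token from the robot's owner; the key, the message format, and the thirty-second window are all invented for the example.

    import hashlib
    import hmac
    import time

    # Hypothetical sketch: the robot shuts down only for a fresh token signed with a
    # secret key held by its owner, so a stranger who finds the button cannot use it.

    OWNER_KEY = b"secret-provisioned-when-the-robot-is-set-up"   # invented for the example
    MAX_AGE_SECONDS = 30   # reject stale tokens to limit replay

    def sign_shutdown_request(key: bytes, timestamp: float) -> str:
        message = f"EMERGENCY_STOP:{timestamp}".encode()
        return hmac.new(key, message, hashlib.sha256).hexdigest()

    def authorize_shutdown(key: bytes, timestamp: float, signature: str) -> bool:
        if time.time() - timestamp > MAX_AGE_SECONDS:
            return False                                  # token is too old
        expected = sign_shutdown_request(key, timestamp)
        return hmac.compare_digest(expected, signature)   # constant-time comparison

    # The owner's controller issues a token; the robot verifies it before shutting down.
    now = time.time()
    token = sign_shutdown_request(OWNER_KEY, now)
    assert authorize_shutdown(OWNER_KEY, now, token)            # the owner can stop the robot
    assert not authorize_shutdown(OWNER_KEY, now, "forged")     # an intruder cannot

Such a scheme only relocates the problem to safeguarding the owner's key, but it does suggest that a shut-down mechanism need not be usable by anyone who happens to find it.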