9) Have you ever built anything you considered really "intelligent"?
Hugo de Garis: No. I'm living one generation too soon for that. My
kids may see the first artificially intelligent creatures towards
the end of their lives.
Richard Wheeler: I think all human artefacts (many non-human as well)
are "intelligent" in their own way; that is, they exist only in light
of "intelligent" creation and perspective. However, although I have
built things that are surprising, useful, and thought-provoking, I
would never call any of them "intelligent" - perhaps because they
were all mostly designed in the knowledge management tradition of
deterministic encoding and representation. They have, broadly, lacked
that spark of emergent alchemy, which we seem to perceive as intelligence.
10) Do you think human evolution is at an end, or will humans continue to evolve?
Hugo de Garis: Human evolution in the sense of ultra-slow Darwinian
molecular evolution of our DNA is more or less at a standstill, because
in our modern culture even the "unfit" survive. Modern technology
and high living standards keep the "weaker" members of our population
alive who would have died in harsher times. Social evolution however
is another matter and is increasing at an exponential rate, with total
human knowledge doubling every decade or so. I believe that humanity
is now at the point of creating a new form of evolution, i.e. artilectual
evolution, which will occur at electronic speeds inside artilects
of massive proportions. I suspect that this transition from biological
to artilectual evolution is inherent in nature and has probably already
occurred zillions of times in the universe. Our solar system is a
billion years younger than many others, so there have probably been
artilects around for aeons. The universe is therefore probably teeming
with artilects at various levels of development, and utterly ignoring
us. Artilects communicating with human beings would be like human
beings communicating with rocks.
Richard Wheeler: Evolution, especially in human terms, is a very difficult
topic. It is commonly heard that human evolution has shifted from
the organic (better toolmakers) to the purely inorganic (better tools)
- that technology is an evolutionary "extension" to humankind, and
as such, has taken over from Darwin. The more technologically fit
among us, it would seem, are prospering at an ever-increasing rate.
While I would not argue with this concept, I see real challenges growing
out of the fundamental unfitness of humankind to control the artefacts
it is beginning to create; surely the day will come when technology
takes over as the dominant form of life on the planet - it has been
widely proposed that this may even be nature's plan as human life
becomes increasingly untenable. I do not necessarily view this as
a bad thing, and believe it may be reasonable to assume that the mechanisms
nature uses to keep organisms in check, and to prevent them from outgrowing
or destroying their niche, may apply to humanity as well. However, I believe that
the great promise of technology and A.I. is still as tools to advance
and improve the human condition. A.I. is not unique in this respect.
We created the hammer because our hands are terrible at pounding in
nails. No one was afraid of the hammer until someone sharpened one
end. We created the calculator and modern computing devices for the
same reasons - because our minds are terrible at manipulating numbers.
Many people are looking to A.I. in a similar fashion - to create tools
to monitor and manipulate systems that we are unable to understand
ourselves - and, like all tools, A.I. is undoubtedly dangerous. The
real problem is the power a hammer, calculator, or A.I. device gives
its creator - as before: the tools are getting smarter, but we are
not.