2)
What do you think of the present state of Artificial Intelligence?
Hugo de Garis:
Not much. Humanity still doesn't have the tools to understand the
brain, so we don't know how to build true artificial intelligence.
Also, our computers are still orders of magnitude away from the brain's
capacities. I think we will have to wait another few decades for Moore's
law to give us true nanotech, which in turn will give us powerful
new tools to unravel the brain's secrets. We will see a rapid transfer
of ideas and principles from neuroscience to neuroengineering. Then
we will have true A.I. with at least human level abilities.
Richard Wheeler:
A.I., like science in general, is in its infancy - of course, this
gives us the opportunity to contribute to the very root of the field
and be there at its birth, as it were. The 20th century has left us with
an impressive legacy of human myopia, greed, and hubris, which is
strongly reflected in our artefacts and technologies. Despite how
it may seem, we really know very little about ourselves, our minds,
our planet, or our universe - and these deficiencies have been passed
on to (and have held back) A.I. since the days of Turing. [Alan Turing
(1912-1954) - one of the pioneers of computing] Curiously, it has not
been conceptual or theoretical advances that have fuelled growing
research and innovation in A.I. (even today's cutting-edge A.I. has
existed in theoretical form for 50+ years), but the availability of
fast, inexpensive PC hardware. Future advances in computational
hardware (especially evolutionary hardware) will surely bring about
similar effects. The rise of chaotic, quantum, and other natural computing
devices will undoubtedly spark a vast revolution in what we now call
A.I.
3)
What are its limitations?
Hugo de Garis:
The laws of physics, and in the early days, human imagination. I'm
a former theoretical physicist, so I like to look at the numbers coming
from calculations that show the theoretical limits of phenomena. The
answer to the above question is an example of the kind of calculation
I like thinking about. If physics says that something is possible,
then it can probably be done. Physics says massive artilects are possible,
so they will probably be built this century (given the anticipated
rate of scientific and technological progress).
Richard Wheeler:
Humankind has always wanted to free itself from the limitations of
its physical form (we even created angels to embody this desire),
and I assume that "machinekind" will necessarily share this passion.
Perhaps at the root of all things we consider "conscious" there is
a primitive awareness of (and connection with) some fundamental experience
of things interacting in the universe. Consciousness might be the
perception of an object perceiving information as it moves through
time - we do not require a physical presence to qualify this. Much
of human religion and mythology revolves around reunion with the "root
node" of universal creation, and in this way we are promised the unobtainable:
persistence. Humankind has been ever more ingenious in its attempts
to cheat impermanence - first recording our histories in written
and visual form, then in animated form, and finally sending some of
our artefacts out into space. Surely the one theme that has emerged
from the world's religions and mythologies is that a human does not need
a physical body to remain human; for this we created the spirit. Indeed,
"the letter may not need the horse to carry it". In some ways A.I.
is the next step in this process - it might eventually offer a channel
into a form of everlasting presence. Unlike humanity, A.I. has no
fundamental limitations that I can see, except perhaps that it is dependent
upon us to give it birth. A.I. will have the whole universe to explore.
4)
What is the most profound question humanity will face in the next
100 years?
Hugo de Garis:
Should we build artilects? In other words, should humanity run the
risk that these godlike creatures, once built, might decide for whatever
reason that human beings are a pest who should be destroyed? What is
at stake in this issue is the survival of the human species. Once this
issue is taken seriously by most thinking people (which is certainly
not yet the case today), the passion level will be extremely high.
What is more important to humanity than its own survival? To some,
the answer to this question will be - "the creation of artilects".
Richard Wheeler:
Probably the most profound question will be whether, as a species,
there is a way forward for us before we burn out the planet, ourselves,
and each other. Sadly, most Western eyes turn to "science" for the
answer while, as a species, we may be biologically determined to fail.
Curiously, a possible answer to these problems might be to create
powerful A.I. thinking machines, which, in the presence of our own
ignorance, may be able to help us solve some of the fundamental problems
facing humanity (we are already seeing the beginning of this trend
- weather control analysis, gene sequencing, etc.). Having exhausted
the resources of man, we are increasingly looking to summon the mind
of God.