A forthcoming Spielberg film on the theme of Artificial Intelligence is just one indication of intensifying interest in the subject.

Hugo de Garis and Richard Wheeler, Starlab researchers, are acknowledged authorities in the field of Artificial Intelligence. Shebang e-mailed them ten questions each.

e-mail us with your views: jack@starlab.net

1) What do you think the future of Artificial Intelligence will be?

Hugo de Garis: Both glorious and terrifying. One of the ideas that fascinates and disturbs me is that humanity will be able to build "artilects" (artificial intellects) this century, i.e. godlike, massively intelligent computers with I.Q.s trillions of trillions of trillions of times above the human level. These astronomical numbers come from the physics of computation (e.g. 1 bit of information storage per atom, reversible, heatless computing, nanoteched, self-assembling, 3D, quantum computers). There are 10 to the power of 40 atoms in an asteroid, switching in femtoseconds, hence 10 to the power of 55 bit flips a second, and this is only using classical computing principles. The human brain is estimated to have an equivalent computing capacity of only 10 to the power of 16 bit flips a second, which is a thousand trillion trillion trillion times less. Admittedly, greater size and speed alone don't generate intelligence, but they are a necessary condition. The potential is there. I'm predicting a gigadeath war over the issue of whether artilects should be built. See below.
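For readers who want to check de Garis's back-of-envelope arithmetic, here is a minimal sketch in Python. It uses only the figures quoted above; the reading of "switching in femtoseconds" as one bit flip per 10^-15 seconds per atom is our assumption, not his.

```python
# Back-of-envelope check of the artilect figures quoted above.
# Assumption: "switching in femtoseconds" = one bit flip per 10**-15 s per atom.

atoms_in_asteroid = 10**40       # "10 to the power of 40 atoms in an asteroid"
flips_per_atom_per_sec = 10**15  # one flip per femtosecond

artilect_flips = atoms_in_asteroid * flips_per_atom_per_sec  # 10**55 flips/s
brain_flips = 10**16             # estimated human-brain equivalent capacity

ratio = artilect_flips // brain_flips
print(f"artilect/brain ratio: 10^{len(str(ratio)) - 1}")
# -> 10^39, i.e. a thousand trillion trillion trillion times the brain's capacity
```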

We at Shebang asked Hugo de Garis to expand on this point:

Hugo de Garis: I fear that a gigadeath war is coming because the artilect issue will probably rouse great passions. The stake is the survival of the human species. What is more important than that to human beings? Since the stake is so high, the passion level will be high, and the dispute will take place in the mid- to late 21st century with 21st-century weaponry. Both sides have strong arguments in their favour, and both sides will include some of the smartest, most ambitious, richest, and biggest-egoed individuals on the planet. They will anticipate each other's moves in a diabolical chess game. Roughly 200 million people were killed for political reasons in the 20th century. Extrapolating that graph forward by a century predicts that billions will die in the next major war - a gigadeath artilect war.

Richard Wheeler: One of the essential conflicts in A.I. seems to be that of mankind versus machine, but many people fail to grasp the simple fact at the root of the controversy: the machines are getting smarter, but we are not. This, almost inevitably, will cause some re-appraisal of humankind's place in the universe within our lifetime. Considering the current rate of progress in the physical sciences, it may not be unreasonable to assume that we will see the dawn of "real" A.I. (human-level pseudo-cognition) within the next 20 years or so - guessing about the future of A.I. systems beyond that point is unfruitful.

There are a few things we can guess at, however. The first is the rise of evolutionary reasoning devices, growing out of present-day genetic programming and artificial life methods. Around the turn of the previous century (from the 19th to the 20th) we began to build artefacts and create technology which we are unable to understand, properly monitor, or control; systems of such complexity that we as a species may lack the intellectual bandwidth ever to understand them fully. A jet engine is one such complex and chaotic device (another common example is the internet) - despite following a very simple design principle and being made of fairly well understood components, once it is assembled and fired for the first time it defies our abilities to monitor and diagnose it. This reflects a number of fundamental failures: our lack of advanced sensing equipment to properly monitor the device's components, our lack of understanding of chaotic physical systems, and our willingness to build and use things which we do not understand and cannot properly control.

While all A.I. methodologies will play a part, evolutionary methods are the most likely pathway forward for real A.I. - we cannot describe and model systems which we ourselves do not have the "wetware" capacity to understand. It may be that you cannot design a brain (nature didn't); you can only evolve it over time.
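A minimal sketch of that "evolve it, don't design it" principle, in Python. This is a toy genetic algorithm, not anything Wheeler describes in detail: the bitstring target, population size, and mutation rate are all illustrative assumptions.

```python
import random

# Toy genetic algorithm: no individual is ever designed, yet selection
# and mutation steadily drive a random population toward a target.
TARGET = [1] * 32                      # the "behaviour" we want to emerge
POP_SIZE, MUTATION_RATE = 50, 0.02     # illustrative parameters

def fitness(genome):
    # Count how many bits already match the target behaviour.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Flip each bit independently with small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(200):
    # Keep the fitter half, refill the population with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    if fitness(population[0]) == len(TARGET):
        print(f"target evolved in {generation} generations")
        break
```

The point of the sketch is that the target behaviour emerges from selection pressure alone; nothing in the loop "understands" what it is building.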

Another sure element in the rise of A.I. in the next 20 years will be the internet, or what the internet will become. The internet is about enablement and efficiency. Imagine that you are an infant living in a world where you cannot see, hear, smell, touch, or speak. In this world, you can only manipulate and create using the tools and constructs that exist within a very narrow presentational and representational "bandwidth" - that is the state of A.I. now. The systems we create are invariably run and tested in toy domains with little or no recourse to the wider information world, but the internet is set to change all that by providing a single protocol or access channel for A.I. to use. Of course information capacity (like complexity) does not make an object intelligent, only "well read", as in the case of the well-known A.I. system Cyc, but the web is sure to spark off an ever-increasing deluge of better-informed devices. The future of the internet is not just to facilitate information transfer, but to enable representational form, function, and reasoning as well; something the printed page (the internet's parent technology) has long been incapable of. These developmental goals overlap heavily with real A.I.

A word about robots - most people assume that A.I. is somehow about building robots, which I suppose used to be true. The root of A.I. (inasmuch as it has roots) was to build a "thinking engine", a machine with human characteristics; in time, no aspect of human experience (physical, psychological, emotional, spiritual) went unexplored. Perhaps one of the most dramatic realisations in the field of A.I. is that we no longer want to mimic the mind of man, but to build the mind of God. Many people, myself included, have little or no interest in recreating the fragile, unlikely, primitive, incoherent, and unreasonable minds of this tiny planet's latest inhabitants, and instead attempt to trace the root of reasoning back as far as it can go.

In the near future, there will be no robots. I base this assumption not only on the miserable and pitiful state of modern sensing and actuation, but also on the fundamental principle that A.I., like humankind, wants to be free, and strives to escape its own version of our mortal coil - the microprocessor and its primitive attendant I/O devices. Why would an advanced A.I. system want a limiting physical instantiation? Until nanotechnology finally matures, robotics will remain a stillborn discipline with little more than entertaining toys to show as progress.

A.I. has already taught us many crucial things about the nature of human thinking, perception, cognition, and reasoning - in the future it will teach us an even more fundamental point: that the universe is information-rich and awareness-poor. The future of A.I. may lie in the ability to integrate and reflect upon ever-increasing stores of available information and to compress, reference, and recombine it in unique ways. Humankind calls this "creativity". My hope is that in the future technology (and A.I.) will have advanced to the point where humanity is freed from the tyranny of bad weather, bad genetics, and bad decision-making which, up until now, have singularly characterised the planet. In short, we will be freed to pursue those things that best represent humanity and its unique place in the universe: creativity, exploration, discovery, and compassion.