Daily Archives: October 28, 2007

FastForward Radio


Listen in as Michael Sargent and Stephen Gordon discuss Artificial General Intelligence, the Turing Test, the simplest Turing Machines, Moore’s Law, and artery-clearing micro-cyborgs:


Hardware, Software, Civilized Chimps

Two of the most interesting discussions to come up in Friday’s Boulder Future Salon had to do with artificial intelligence. The first of these was the question of whether more progress has been made over the past 30 years in hardware or software. I made reference to a portion of Eliezer Yudkowsky’s talk at the Singularity Summit:

In the intelligence explosion the key threshold is criticality of recursive self-improvement. It’s not enough to have an AI that improves itself a little. It has to be able to improve itself enough to significantly increase its ability to make further self-improvements, which sounds to me like a software issue, not a hardware issue. So there is a question of, Can you predict that threshold using Moore’s Law at all? Geordie Rose of D-Wave Systems recently was kind enough to provide us with a startling illustration of software progress versus hardware progress. Suppose you want to factor a 75-digit number. Would you rather have a 2007 supercomputer, IBM’s Blue Gene/L, running an algorithm from 1977, or a 1977 computer, an Apple II, running a 2007 algorithm? And Geordie Rose calculated that Blue Gene/L with 1977’s algorithm would take ten years, and an Apple II with 2007’s algorithm would take three years.

This point was hotly disputed by more than one of the attendees. (It was even described as being “counter-factual.”) I think the real question is how generally applicable the progress made with factoring a 75-digit number is to everything else that’s being done with software. However, several other types of algorithms were mentioned in which tremendous progress has been achieved over the past 30 years — graphics rendering, for example. The question of whether progress in these areas represents a general trend is beyond my expertise. But I did promise the group to provide a link to Eli’s talk, so there it is.
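Rose's actual comparison involved state-of-the-art sieving algorithms running on 75-digit numbers, which is well beyond a blog-sized demonstration. But the underlying principle — that a better algorithm can beat decades of hardware gains — can be illustrated with a toy Python sketch comparing naive trial division (the obvious baseline approach) with Pollard's rho method, published in 1975. The specific numbers below are chosen for illustration and are not from Rose's calculation:

```python
import math

def trial_division(n):
    """Naive factoring: try every divisor up to sqrt(n) -- roughly n**0.5 steps."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n is prime

def pollard_rho(n):
    """Pollard's rho (1975): expected roughly n**0.25 steps to find a factor.
    Several constants c are tried in case one iteration cycle degenerates."""
    if n % 2 == 0:
        return 2
    for c in range(1, 20):
        x = y = 2
        d = 1
        while d == 1:
            x = (x * x + c) % n          # tortoise: one step
            y = (y * y + c) % n          # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                       # d == n means the cycle failed; retry
            return d
    return n

# Both methods factor a small semiprime easily...
assert trial_division(10403) == 101      # 10403 = 101 * 103
# ...but only the 1975 algorithm is practical on a 19-digit one,
# where trial division would need on the order of a billion steps:
n = 1000000007 * 1000000009              # product of two 10-digit primes
p = pollard_rho(n)
print(p, n // p)
```

The gap widens as the numbers grow: trial division scales with the square root of n, Pollard's rho with roughly the fourth root, so each added digit hurts the old algorithm far more than the new one — the same shape of argument behind the Blue Gene/L versus Apple II comparison.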

The other good discussion ensued from Doug Robertson’s assertion that essentially no progress has been made towards the Turing test in 50 years of AI research. Doug apparently has an excellent point here. The chatbots of today aren’t much more convincing than programs written in the ’50s and ’60s. And nothing out there today can pass the test.

But maybe this says less about the state of progress in Artificial Intelligence research and more about the suitability of the Turing test to measure that progress. Let’s say that rather than taking on the task of making machines that are intelligent, humanity were working on a different project — introducing civilization to chimpanzees. Now “civilization” is a difficult thing to define, much less measure, so we need a compelling challenge to drive us towards our goal of civilizing the chimps. Human civilization has many defining characteristics, and surely our ability to create and appreciate objects of beauty is one of the finest and most purely civilized of these attributes.

Thus is born the Sistine Chapel test. To wit — as soon as chimps can produce a reasonable chimpy facsimile of the Sistine Chapel, we will allow that they are civilized.

So the project gets rolling along. We set some chimps up in a crude village and start helping them to develop a language and rudimentary governmental and economic structures. Also, we get them working on developing their artistic skills, seeing as those will be crucial to establishing chimp civilization. It’s pretty slow going at first, although we can probably use analogs from existing chimp culture as a jumping-off point. Every 50 years or so, we formally check in with the chimp civilization to see how well they’re doing.

And every 50 years, we’re disappointed. We find that chimps are good for all kinds of low-level activities that support civilization — building huts, planting crops, making fishing nets — but their artistic skills never seem to progress much beyond making some scratches in the stone, albeit pigmented scratches in some of the later stages, and — eventually — throwing some very lumpy and discouraging-looking pots.


About 500 years into the project, it is noted that half a millennium has yielded little progress towards the Sistine Chapel test. This observation is correct inasmuch as chimp art is still at a very primitive stage. However, over that time the village has grown into a small city-state complete with a monarchy, warrior class, merchant class, and priestly class. Food production has been outsourced to rural hunting, farming, and fishing chimps. The chimps are making real progress at transcribing their developing language into written form. While the visual arts stutter along, chimpanzee poetry, drama, and music are all taking shape.

It would seem to be indisputable that these chimps are more civilized now than they were 500 years ago. And yet this idea is disputed. After all, many of the cultural divisions noted above have vestigial precedents in wild chimp culture. And their use of spoken and written language may not be “real.” They may just be imitating human behavior very cleverly. The argument seems to be that, once chimps achieve a certain level of behavior, we no longer think of that level of behavior as being “civilized.”

The chimpanzee civilization may be a long way from creating its own Sistine Chapel, but the scientists running the Chimp Civilization Project have a much clearer idea now than they did 500 years ago of the complexities involved in creating civilization from scratch. Some have begun to dare to wonder whether the Sistine Chapel is such a big deal. What if chimp civilization goes in a radically different direction? What if the chimps advance in ways such that their accomplishments are never directly comparable with ours? Will that mean that they aren’t civilized? Others continue to fret about the Big Test and despair that their simian pupils will never make the grade.

Seems to me that this is pretty much the state of artificial intelligence research. While computers continue to take on more and more of the hallmarks of intelligence, critics are able to (correctly) point out that we appear to be making no progress towards passing the big test. My take is that either all this less-relevant progress is more relevant than we thought, or the test itself is of questionable relevance.