Monthly Archives: October 2007
Michael Anissimov on the Singularity
In addition to providing Michael’s fascinating insights into the meaning of the term “singularity” and reflections on the risks involved, this video captures a little of the atmosphere at the Singularity Summit 2007 opening reception. It was, as you can see, a beautiful setting. The music was playing, the wine was flowing, and everybody was talking Singularity. A little glimpse of heaven!
Predicting the Future with Math
Via GeekPress, here’s a profile of Bruce Bueno de Mesquita, whose game-theory methodology for predicting the future we discussed in a recent FastForward Radio. It’s an interesting methodology:
The elements of the model are players standing in for the real-life people who influence a negotiation or decision. At each round of the game, players make proposals to one or more of the other players and reject or accept proposals made to them. Through this process, the players learn about one another and adapt their future proposals accordingly. Each player incurs a small cost for making a proposal. Once the accepted proposals are good enough that no player is willing to go to the trouble to make another proposal, the game ends. The accepted proposals are the predicted outcome.
To accommodate the vagaries of human nature, the players are cursed with divided souls. Although all the players want to get their own preferred policies adopted, they also want personal glory. Some players are policy-wonks who care only a little about glory, while others resemble egomaniacs for whom policies are secondary. Only the players themselves know how much they care about each of those goals. An important aspect of the negotiation process is that by seeing which proposals are accepted or rejected, players are able to figure out more about how much other players care about getting their preferred policy or getting the glory.
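Reading that description as an algorithm, it is essentially an agent-based simulation: players with hidden preference weights trade proposals until nobody finds another proposal worth its cost. Here is a toy sketch of that general idea in Python. To be clear, the player names, payoff weights, 25% “nudge” rule, and acceptance test below are all invented for illustration; Bueno de Mesquita’s actual model is proprietary and surely far more elaborate.

```python
# Toy sketch of a game-theoretic negotiation forecast. This is NOT Bueno de
# Mesquita's actual model: the payoff weights, the 25% "nudge" update, and the
# majority-acceptance test below are all invented purely for illustration.
import random

class Player:
    def __init__(self, name, position, influence, ego):
        self.name = name
        self.position = position    # preferred policy, on a 0-100 scale
        self.influence = influence  # how much weight the others give this player
        self.ego = ego              # hidden: 0 = pure policy wonk, 1 = pure glory seeker

    def utility(self, outcome, proposer=None):
        """Payoff mixes policy satisfaction with 'glory' for being the proposer."""
        policy_term = -abs(self.position - outcome)   # closer to my position is better
        glory_term = 100.0 if proposer is self else 0.0
        return (1 - self.ego) * policy_term + self.ego * glory_term

PROPOSAL_COST = 1.0  # small cost for going to the trouble of making a proposal

def simulate(players, rounds=200):
    total = sum(p.influence for p in players)
    # Start from the influence-weighted mean of everyone's preferred position.
    outcome = sum(p.position * p.influence for p in players) / total

    for _ in range(rounds):
        anyone_proposed = False
        for proposer in random.sample(players, len(players)):
            # The proposer nudges the current outcome a quarter of the way
            # toward its own preferred position.
            proposal = outcome + 0.25 * (proposer.position - outcome)
            gain = proposer.utility(proposal, proposer) - proposer.utility(outcome)
            if gain <= PROPOSAL_COST:
                continue  # not worth the trouble of proposing
            anyone_proposed = True
            # The others accept or reject; acceptance is weighted by influence.
            support = sum(p.influence for p in players
                          if p.utility(proposal) >= p.utility(outcome))
            if support > total / 2:
                outcome = proposal
        if not anyone_proposed:
            break  # nobody is willing to pay the cost of another proposal
    return outcome

if __name__ == "__main__":
    # Hypothetical stakeholders in some policy dispute (all numbers made up).
    players = [
        Player("Hardliners", position=90, influence=3.0, ego=0.2),
        Player("Moderates",  position=55, influence=2.0, ego=0.5),
        Player("Reformers",  position=20, influence=1.5, ego=0.1),
    ]
    print("Predicted policy outcome: %.1f" % simulate(players))
```

Even in a toy version, the predicted outcome lands on an influence-weighted compromise rather than any one player’s ideal point, which is the basic flavor of the approach.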
Bueno de Mesquita has achieved an impressive list of correct predictions using this approach, although as Karl Hallowell recently reminded us, people in this line of work tend to play up their successes. For example, how impressive is it that he predicted that the UK would leave Hong Kong 12 years before it happened? Wasn’t their lease about to expire, anyway? On the other hand, two independent evaluations of his work (one by the CIA and one by fellow academics) have shown him to be about 90% accurate.
The earlier article about Bueno de Mesquita that I read in Good magazine mentioned that he is scrupulous about not making available certain information, such as the outcome of the 2008 Presidential election. This is interesting, in that he doesn’t mind making sweeping statements about which policies will and will not work regarding Iran:
The details of his study of negotiation options with Iran are classified, but Bueno de Mesquita says that the broad outline is that there is nothing the United States can do to prevent Iran from pursuing nuclear energy for civilian power generation. The more aggressively the U.S. responds to Iran, he says, the more likely it is that Iran will develop nuclear weapons. The upshot of the study, Bueno de Mesquita argues, is that the international community needs to find out if there is a way to monitor civilian nuclear energy projects in Iran thoroughly enough to ensure that Iran is not developing weapons.
If real, the ability to make accurate predictions about the future represents a unique form of power. How interesting that he leaves this particular matter open-ended. As described above, wouldn’t Bueno de Mesquita’s methodology have provided an outcome for the situation with Iran? It’s notable that here he talks about how things will work out if… Of course, it’s possible that he’s just being evasive because he’s not allowed to talk about the results. But I can’t help wondering whether he has seen the future, doesn’t like what he sees there, and is now trying to do something to stop it.
Would such an act represent an abuse of Bueno de Mesquita’s (hypothetical) power? The fact that he won’t give away presidential election results suggests that he doesn’t want his information used to change how things would otherwise have worked out. But that notion is absurd on its face: somebody is paying his company to make predictions (the State Department is mentioned as one of his clients), and you can be sure they are acting on the information.
So it’s possible that Bueno de Mesquita sees a very bad end coming to the Iran situation, and he is giving the above warning as a means of trying to prevent it. And that is just a little scary.
FastForward Radio

Listen in as Michael Sargent and Stephen Gordon discuss Artificial General Intelligence, the Turing Test, the simplest Turing Machines, Moore’s Law, and artery-clearing micro-cyborgs:
Hardware, Software, Civilized Chimps
Two of the most interesting discussions to come up in Friday’s Boulder Future Salon had to do with artificial intelligence. The first of these was the question of whether more progress has been made over the past 30 years in hardware or software. I made reference to a portion of Eliezer Yudkowsky’s talk at the Singularity Summit:
In the intelligence explosion the key threshold is criticality of recursive self-improvement. It’s not enough to have an AI that improves itself a little. It has to be able to improve itself enough to significantly increase its ability to make further self-improvements, which sounds to me like a software issue, not a hardware issue. So there is a question of, Can you predict that threshold using Moore’s Law at all? Geordie Rose of D-Wave Systems recently was kind enough to provide us with a startling illustration of software progress versus hardware progress. Suppose you want to factor a 75-digit number. Would you rather have a 2007 supercomputer, IBM’s Blue Gene/L, running an algorithm from 1977, or a 1977 computer, an Apple II, running a 2007 algorithm? And Geordie Rose calculated that Blue Gene/L with 1977’s algorithm would take ten years, and an Apple II with 2007’s algorithm would take three years.
This point was hotly disputed by more than one of the attendees. (It was even described as being “counter-factual.”) I think the real question is how generally applicable the progress made with factoring a 75-digit number is to everything else that’s being done with software. However, several other types of algorithms were mentioned in which tremendous progress has been achieved over the past 30 years — graphics rendering, for example. The question of whether progress in these areas represents a general trend is beyond my expertise. But I did promise the group to provide a link to Eli’s talk, so there it is.
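One way to get a feel for the quoted numbers is to separate the hardware factor from the algorithm factor. Taking the ten-years-versus-three-years comparison at face value, and assuming (my own round number, not anything from the talk) that Blue Gene/L does raw operations about a hundred million times faster than an Apple II, the improvement attributable to the algorithm alone works out to a factor of a few hundred million:

```python
# Back-of-the-envelope split of the quoted factoring comparison into a
# hardware factor and an algorithm factor. The 1e8 hardware ratio is an
# assumed round number (Apple II ~1e6 simple ops/sec, Blue Gene/L ~1e14),
# not a figure from the talk.
HARDWARE_SPEEDUP = 1e14 / 1e6          # Blue Gene/L vs. Apple II, roughly 1e8

bluegene_old_algorithm_years = 10      # quoted: 2007 hardware, 1977 algorithm
apple2_new_algorithm_years = 3         # quoted: 1977 hardware, 2007 algorithm

# Same job either way, so:
#   ops_old / bluegene_speed = 10 years   and   ops_new / apple2_speed = 3 years
# Dividing one by the other gives the speedup attributable to the algorithm:
algorithm_speedup = (HARDWARE_SPEEDUP * bluegene_old_algorithm_years
                     / apple2_new_algorithm_years)
print("Implied algorithmic speedup: about %.0e x" % algorithm_speedup)  # ~3e+08
```

On that reading, thirty years of algorithmic progress bought at least as much as thirty years of hardware progress, at least for this particular problem.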
The other good discussion ensued from Doug Robertson’s assertion that essentially no progress has been made towards the Turing test in 50 years of AI research. Doug has an excellent point here: the chatbots of today aren’t much more convincing than the programs written back in the 1960s, and nothing out there today can pass the test.
But maybe this says less about the state of progress in Artificial Intelligence research and more about the suitability of the Turing test to measure its progress. Let’s say that rather than taking on the task of making machines that are intelligent, humanity was working on a different project — introducing civilization to chimpanzees. Now “civilization” is a difficult thing to define, much less measure, so we need a compelling challenge to drive us towards our goal of civilizing the chimps. Human civilization has many defining characteristics, and surely our ability to create and appreciate objects of beauty is one of the finest and most purely civilized of these attributes.
Thus is born the Sistine Chapel test. To wit — as soon as chimps can produce a reasonable chimpy facsimile of the Sistine Chapel, we will allow that they are civilized.
So the project gets rolling along. We set some chimps up in a crude village and start helping them to develop a language and rudimentary governmental and economic structures. Also, we get them working on developing their artistic skills, seeing as those will be crucial to establishing chimp civilization. It’s pretty slow going at first, although we can probably use analogs from existing chimp culture as a jumping off point. Every 50 years or so, we formally check in with the chimp civilization to see how well they’re doing.
And every 50 years, we’re disappointed. We find that chimps are good for all kinds of low-level activities that support civilization — building huts, planting crops, making fishing nets — but their artistic skills never seem to progress much beyond making some scratches in the stone, albeit pigmented scratches in some of the later stages, and — eventually — throwing some very lumpy and discouraging-looking pots.

About 500 years into the project, it is noted that half a millennium has yielded little progress towards the Sistine Chapel test. This observation is correct inasmuch as chimp art is still at a very primitive stage. However, over that time the village has grown into a small city-state complete with a monarchy, warrior class, merchant class, and priestly class. Food production has been outsourced to rural hunting, farming, and fishing chimps. The chimps are making real progress at transcribing their developing language into written form. While the visual arts stutter along, chimpanzee poetry, drama, and music are all taking shape.
It would seem to be indisputable that these chimps are more civilized now than they were 500 years ago. And yet this idea is disputed. After all, many of the cultural divisions noted above have vestigial precedents in wild chimp culture. And their use of spoken and written language may not be “real.” They may just be imitating human behavior very cleverly. The argument seems to be that, once chimps achieve a certain level of behavior, we no longer think of that level of behavior as being “civilized.”
The chimpanzee civilization may be a long way from creating their own Sistine Chapel, but the scientists running the Chimp Civilization Project have a much clearer idea now than they did 500 years ago of the complexities involved in creating civilization from scratch. Some have begun to dare to wonder whether the Sistine Chapel is such a big deal. What if chimp civilization goes in a radically different direction? What if they advance in ways such that their accomplishments are never directly comparable with ours? Will that mean that they aren’t civilized? Others continue to fret about the Big Test and despair that their simian pupils will never make the grade.
Seems to me that this is pretty much the state of artificial intelligence research. While computers continue to take on more and more of the hallmarks of intelligence, critics are able to (correctly) point out that we appear to be making no progress towards passing the big test. My take is that either all this less-relevant progress is more relevant than we thought, or the test itself is of questionable relevance.
Boulder Future Salon Considers "Moore's Law"
Last night (Friday, October 26th), at Phil’s kind invitation, I had the distinct pleasure of attending the Boulder Future Salon’s monthly meeting and participating in a lively and far-flung consideration of the month’s selected topic: “Moore’s Law”
Boulder Future Salon Considers “Moore’s Law”
Last night (Friday, October 26th), at Phil’s kind invitation, I had the distinct pleasure of attending the Boulder Future Salon’s monthly meeting and participating in a lively and far-flung consideration of the month’s selected topic: “Moore’s Law.”
Simplest Turing Machine Is Universal
Big news from Kurzweil and from Stephen Wolfram’s blog:
Alex Smith of the University of Birmingham has won a $25,000 prize for proving that the simplest possible universal Turing machine is in fact universal, Stephen Wolfram has announced.

It has only two states and three colors, yet it can do any calculation that the computer with which you’re reading this blog entry can do. In fact, it can do any calculation that could be performed by a megacomputer consisting of your machine networked to every other machine on the planet.
Wolfram expounds:
We’ve come a long way since Alan Turing’s original 1936 universal Turing machine–taking four pages of dense notation to describe.
There were some simpler universal Turing machines constructed in the mid-1900s–the record being a 7-state, 4-color machine from 1962.
That record stood for 40 years–until in 2002 I gave a 2,5 universal machine in A New Kind of Science.
We know that no 2,2 machine can be universal. So the simplest possibility is 2,3.
And from searching the 2,985,984 possible 2,3 machines, I found a candidate. Which as of today we know actually is universal.
From our everyday experience with computers, this seems pretty surprising. After all, we’re used to computers whose CPUs have been carefully engineered, with millions of gates.
It seems bizarre that we should be able to achieve universal computation with a machine as simple as the one above–that we can find just by doing a little searching in the space of possible machines.
But that’s the new intuition that we get from NKS. That in the computational universe, phenomena like universality are actually quite common–even among systems with very simple rules.
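A side note on that 2,985,984 figure: a two-state, three-color machine has 2 × 3 = 6 possible (state, color) situations, and each one can be assigned any of 3 colors × 2 head moves × 2 states = 12 actions, giving 12^6 = 2,985,984 possible rule tables. To make “two states and three colors” concrete, here is a minimal simulator sketch in Python. The rule table below is an arbitrary pick from that space, chosen for illustration only; it is not the specific machine Wolfram conjectured, and Smith proved, to be universal.

```python
# Minimal 2-state, 3-color Turing machine simulator. The rule table below is
# an arbitrary pick from the 12^6 = 2,985,984-machine space, NOT the specific
# machine Wolfram conjectured (and Smith proved) to be universal.
from collections import defaultdict

# rule[(state, color)] = (color_to_write, head_move, next_state)
# head_move is +1 for right, -1 for left.
rule = {
    (0, 0): (1, +1, 1),
    (0, 1): (2, -1, 0),
    (0, 2): (1, -1, 1),
    (1, 0): (2, +1, 0),
    (1, 1): (2, +1, 1),
    (1, 2): (0, -1, 0),
}

def run(steps=20):
    tape = defaultdict(int)  # unbounded tape; untouched cells read as color 0
    head, state = 0, 0
    for _ in range(steps):
        color_to_write, head_move, next_state = rule[(state, tape[head])]
        tape[head] = color_to_write
        head += head_move
        state = next_state
        # Show the portion of the tape visited so far after each step.
        cells = range(min(tape), max(tape) + 1)
        print("".join(str(tape[c]) for c in cells))

if __name__ == "__main__":
    run()
```

That really is the whole machine: six rules, a head, and a tape.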
So what’s the big deal about a two-state, three-color computer? What can you do with it? Well, that’s the point. It’s a universal machine, so technically you can do anything on it. Anything.
Run Microsoft Excel?
Yep.
Guide nanobots around in your circulatory system?
Sure.
Model an uploaded version of me?
Er, I don’t see why not.
Model entire worlds?
Hmmm…
Wow, that’s a lot to be able to do with two states and three colors. But assuming that those latter two applications are possible at all, there’s no reason why they can’t be done with this machine.
Well That’s Just Ridiculous
Nolan Bushnell, the man who founded Atari, is described here as taking the position that, where video games are concerned, it’s all been downhill since Pong. Of course, he doesn’t really say any such thing, and with good reason.
Everybody knows that it’s all been downhill since QBert.
