• Karl Hallowell

    I arrived two hours late for the show. I rode in on Amtrak and got off at the wrong exit. On a Saturday, that’s major trouble. The entire downstairs part of the auditorium had filled so I ended up in the balcony. Missed Kurzweil and two other speakers, but heard everyone from Sebastian Thrun onwards.

There was a great showing, I’d say. I didn’t count how many people showed up, but the auditorium was more than half full. The balcony was sparsely populated, so those of us up there had plenty of room, but the audience had a lot of energy.

    Bill McKibben definitely had the best delivery of those I heard. Part of it was a technical advantage. He was the sole “doubting Thomas” and didn’t have true competition from a significant fraction of the speakers (who spoke either of technical points or of projects like Thrun’s “autonomous car” entry). But he also managed to avoid my pet peeve of using specialized jargon in a talk to a general audience (no “accelerating change”, “super-exponential growth”, etc.) while most of the other speakers did not. I don’t know what portion of the audience was naturally sympathetic, but McKibben was able at several points to score applause, so he did connect with the audience despite his more hostile stance toward the Singularity and Transhumanism.

    I do agree with Mike Treder’s assessment. Bill McKibben spent too much of his time bullying strawmen. As I see it, the overall level of happiness is not equivalent to whether or not society meets the needs of its citizens. So comparing US society with a more primitive society on the level of happiness ignores that the people who are really unhappy in the primitive society die of starvation, disease, injury, and other adverse conditions. Also, I don’t see that overall happiness should be a primary goal.

    So on to the speakers I actually heard. Each had twenty minutes. The conference will be podcast in a few days (“within three days” was promised), so you can compare what really happened to my interpretation of what happened.

    Sebastian Thrun had an interesting presentation about autonomous driving (computer-controlled automobiles) and Stanford’s entry into the DARPA Grand Challenge 2005. Aside from the amusing videos of the failures (mostly of Stanford’s competitors; Berkeley’s failures were particularly well represented), he described the interesting details of how the Stanford entry determined it was on a road and the difficult terrain that these vehicles had to cross in this contest.

    He made two points. First, deaths from human failure while driving automobiles are a leading cause of death in the US (I think the point would have been even stronger had he noted that global deaths from automobiles are on the order of half a million each year, and virtually all of these are due to human failure). Second, tasks that humans normally find easy (like figuring out where the road is) are extremely hard for computers right now. He claims that someone will develop a vehicle capable of negotiating city streets by 2010 and that a significant fraction (30%) of vehicles on the road will be autonomous by 2030.

    Cory Doctorow, as I recall, often writes about digital rights management (DRM). His talk rehashed that argument very well. I was a bit confused by his term “agency” at first, until I realized it was about enabling control of a technology. E.g., if you’re allowed to do a lot with a tool, then you have more control over what you can do with that tool. But in addition, it becomes possible for someone to find a new, unprohibited use for that tool. This is agency.

    Doctorow’s prime example compared CDs with DVDs; the latter are protected by DRM schemes. If you had hypothetically bought $1000 of DVDs (with music, movies, etc. on them) ten years ago, you would have the use of playing those DVDs (he later mentioned subscription models where the content can go away). You don’t really own the material, so you can’t use it in a different way. OTOH, $1000 worth of CDs bought ten years ago could now be hosted on peer-to-peer networks and stored on your iPod. The lack of DRM for CDs means that the value of those CDs increases as technology advances and people find new uses for them.

    This led to his ultimate criticism of Elsevier, a publisher of scientific journals. In the past, with paper journals, buying a journal meant you kept that journal. Elsevier’s modern subscription model means that as long as you keep paying, you have access to Elsevier’s journals, but that access can go away when you don’t. In Doctorow’s mind, this was particularly evil.

    Eric Drexler’s talk was somewhat predictable. I heard much the same five or six years ago when I last heard him talk. He discussed nanotech manufacturing, including four possible “materials” (proteins, DNA, nanotubes, and “cluster structures”) for bringing it about. He focused on “productive” nanoscale manufacturing systems, particularly how they could recursively build useful human-scale devices, and on a few hurdles and milestones for this potential technology.

    There were a couple of new points. First, politicians are interested to the point that they name nanotech specifically. For example, he quoted the President of India (as an aside, it is interesting how both China and India have highly placed leaders with strong technology backgrounds; perhaps the developed world should follow suit).

    Perhaps more important were his new caveats. First, he mentioned he’s no longer with the Foresight Institute, but rather with Nanorex, Inc., which apparently develops design software for nanotech manufacturing systems. Second, he pointedly stepped away from self-replicating nanotech.

  • Karl Hallowell

    Heh, to continue with my really long comment.

    Max More discussed “wisdom”. The idea is that there is a set of cognitive skills that aren’t traditional intelligence but (as I understand it) determine either how well you can foresee the consequences of actions or how well you interact with other beings (e.g., “social skills” for humans). He also mentioned causes of foolishness: entities have bounded awareness and can hold incorrect theories about themselves, the world, and other beings.

    A good speaker, but the content was kind of fluffy and vague. His ultimate conclusion was that building greater intelligence without greater wisdom could lead to a lot of counterproductive action (humanity has a long history of this, so the concern appears merited).

    Christine Petersen gave a much better presentation than I expected. I too heard her five or six years ago, at the same time as her ex-husband, Eric Drexler. She started by discussing the problem of what to do with the weakest members of society. Basically, they need protection of various kinds, particularly from the more ruthless. An example was that of social engineering. It’s just not that hard to extract important secrets from people who keep their password taped to the side of the monitor or who will blurt it over the phone to someone who claims to be from the IT department.

    She then veered into security, using operating system (OS) security as a model. She proceeded to discuss some common features of OS security and occasionally referred to a “secure OS” (something of an oxymoron). I gather the point was that societies and operating systems have common features: they’re frameworks for allowing multiple entities with potentially conflicting interests to do stuff using common resources. A key problem in each case is preventing someone from subverting the system to acquire access to various resources.

    For societies, this is done through laws, constitutions, contracts, balance of power, etc. As society becomes more computerized, it becomes easier for a super-intelligent entity to subvert the system. I gather one simple example would be hacking to blatantly transfer ownership of property.

    So in this near-future society, Petersen sees several things that need to be done. We need “computer security” (I assume this means a better paradigm than the current one rather than some perfect ideal), “smart contracts” (contracts described in a mathematically precise programming language so that they can be automated), and automated mutual defense systems sufficiently intelligent to take on (or perhaps just slow down) a super-intelligent adversary.
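    To make the “smart contract” idea concrete, here is a toy sketch of my own (not anything Petersen presented; the escrow scenario and all the names are my invention): when the terms of a contract are ordinary executable code, fulfillment and payout can be checked mechanically instead of argued over.

```python
# Toy "smart contract": an escrow whose terms are precise, executable rules.
# This is an illustrative sketch only; real smart-contract systems add
# signatures, ledgers, and dispute handling on top of this basic idea.
from dataclasses import dataclass

@dataclass
class Escrow:
    """Holds a buyer's payment until the seller delivers.

    The payout decision is a mechanical predicate over the contract's
    state, so no human discretion (or litigation) is needed.
    """
    price: int
    funded: bool = False
    delivered: bool = False

    def fund(self, amount: int) -> None:
        # Contract term: exactly the agreed price must be deposited.
        if amount != self.price:
            raise ValueError("deposit must equal the agreed price")
        self.funded = True

    def confirm_delivery(self) -> None:
        self.delivered = True

    def settle(self) -> str:
        # Payout rule, stated precisely enough to automate.
        if self.funded and self.delivered:
            return "release funds to seller"
        if self.funded:
            return "refund buyer"
        return "nothing to settle"

contract = Escrow(price=100)
contract.fund(100)
contract.confirm_delivery()
print(contract.settle())  # release funds to seller
```

    The point of the exercise is that every clause is unambiguous: a program, not a judge, decides whether the conditions were met.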

    The key idea is to make the use of force unprofitable. Definitely worth watching when the podcast comes up.

    I didn’t like John Smart’s talk, “Searching for the Big Picture”. As a speaker, he was as good as the rest, even gracefully soldiering through technical difficulties with the sound, but his use of jargon was particularly egregious.

    He started by discussing what he considers the key property of intelligence: its ability to efficiently use available resources and to actually “drive” future efficiency improvements, terming it “MEST-compression” (where MEST is “Mass, Energy, Space, Time”). Smart’s ideas have been discussed here before.

    I gather that he described what MEST-compression means, various ways that intelligence has improved MEST-compression, and predictions about future MEST-compression. But I really don’t know. I had trouble merely keeping up with the blizzard of buzzwords, even granting an audience of Transhumanists completely versed in the Singularity basics. For example, he spent a few seconds discussing the “energy landscape” (the graph of the energy of a system over the parameters or states of the system) and “J curves” (the graph of the interest in a developing technology as described by Kurzweil), but didn’t explain what those mean. And he name-dropped the Maslow hierarchy with no further comment. Sorry, but that’s bad form.

  • Karl Hallowell

    Eliezer Yudkowsky spoke of the “intelligence explosion” and distinguished it from other concepts of the Singularity. I approve of this sort of elaboration, since various terms in the area (in particular, “Singularity”!) are overused and vaguely defined.

    He notes that he sees no reason that a newly created AI would be malicious or friendly. AI isn’t obviously good or evil. Further, he decries limited notions of intelligence (much as Max More did earlier) and our attempts to classify intelligence. If we can’t classify people, how can we classify what will probably be a much more varied group?

    At this point he introduces the idea of “mind space” which is sort of a platonic space of all possible minds, perhaps interpretable as some sort of huge abstract list of parameters or characteristics that minds can have. Then the problem of creating a friendly AI becomes that of pulling a “good” mind out of mind space.

    The only real problem I see is how a less intelligent mind distinguishes a good mind from a bad one. We might be able to pluck the right mind from mind space, but how do we know beforehand whether we made a good choice? I gather he addresses these issues in the supplemental materials he referred to during the talk.

    Bill McKibben, the final speaker, was the oddball. Alone of the group, he brought a very pessimistic viewpoint to the Summit. As I indicated way back at the beginning, they could have picked someone considerably weaker. McKibben was very effective despite communicating through a screen.

    From way back in the balcony, McKibben’s image was very hard to make out. Even regular speakers were minuscule from that point, but he was particularly hard to make out (with only his head and upper torso visible) even though the image was physically slightly larger than the surrounding participants. I think a stronger visual cue was needed to clue in the audience sitting far away from the stage.

    McKibben spoke of “being good enough”. In his viewpoint, the quest for greater intelligence, immortality, and related goals through technology was an example of the premise that “more is better”. As he notes, this is human nature, but it wasn’t in his opinion a good idea.

    Here are some of the problems he considers. Current challenges become boring: if everyone can more or less trivially run a marathon, then that accomplishment means nothing. Death and human aging are a big part of what we are; if that is removed, then we become bland. As Mike Treder notes, he worries that technology inhibits our ability to be happy. He speaks of isolation: that technology makes us more isolated from the community and less happy as a result.

    In addition to Treder’s concerns, I found several things annoying about McKibben’s talk. First, challenges do often cease to be challenging. Tic Tac Toe and many logical puzzles cease to be challenging once you figure them out, yet our happiness doesn’t decline as a result. Instead, we find more complex and interesting puzzles to occupy us. In a similar fashion, even if marathons or other challenges become trivial, we can always find new challenges worthy of our capabilities.

    Second, immortality might turn us into shallow procrastinators with other bland characteristics, but in my opinion, that is already a big problem. I don’t see living longer as making the problem any worse. And when you toss in that we’ll actually have to deal with some long-term problems that we’d otherwise put off on a later generation, you see that it might actually help fix some of these problems. There’s also supposed to be some sort of benefit that you get from having your friends and relatives die frequently. I fail to see what that benefit is, especially when you consider that even in the uploaded society, death will still exist.

    The final thing that bugged me was his emphasis on slowing or preventing the development of certain technologies like germ line genetic modification (altering the genes that you pass on).

    Current history indicates that many new technologies have a creepiness factor that inhibits their immediate adoption (as discussed here before), but that resistance goes away with time as the population comes to terms with it (and as new creepy distractions come up). So McKibben’s emphasis on delay is like holding back the tide. It doesn’t seem to accomplish anything since the technology gets adopted anyway.

    Instead, I think a better approach is to assume that this technology (or something like it) will get out and society will have to deal with the consequences. If it leads to a breakdown in communication, unhappiness, or some other ill, then how do we deal with that? McKibben doesn’t discuss this.

    As Treder noted, Ray Kurzweil did a fair job of summarizing the points of the speakers and made some interesting comments on them (which he was scheduled to do). But then he dominated the early question-and-answer phase as well. Also, one of the questioners had a habit of talking a lot before asking a question; perhaps he got better as the Q/A session went on. I snuck out at this point.

  • Karl Hallowell

    Well, that’s my take. I hope that I abused the comments section just right and not too much. Would have been nice, if I had sat with someone I knew and could compare notes, but I did what I could.

  • ivankirigin

    I was there. It was very entertaining.

    I’ll comment somewhere else in greater depth. I also thought about a topic or two for a book or movie. Read army-of-davids into that, because I’m talking about a quick and dirty DIY for both.

    Either way, I can say a few things.

    Ray should have organized his slides better. I’m tired of his tossing far, far too many slides showing linear curves in log-log space and saying “don’t have time”. Presumably he knew he was speaking for only 30 minutes for quite some time, so speakers that toss in old slides rather than making a presentation that fits the event tend to annoy me. This is the 3rd time I’ve seen it. The rhetorical device of “having SO much more evidence but no time” is tiresome.

    Later, he was good about responding to Bill McKibben, pointing out how technology enables us to create and experience art in wonderful ways. How life is about experience of knowledge in various forms — not death or farmers markets.

    Thrun gave the most well-organized and convincing presentation. It was also the least contingent upon hard AI. Then again, this is coming from someone who wants to work on Grand Challenge III.

    Hofstadter was sublime. He is clearly a genius.

    Drexler has an exciting roadmap.

    Cory Doctorow inspired me to join the EFF. He is very, very smart and an excellent speaker. I like folks like him who don’t need slides. Yudkowsky would have been better served with around 2 slides instead of 50. Generally Yudkowsky spoke like an 8th grade forensics competitor. He was very funny and interesting also :-D

    I’m not sure why talk of nanotech is considered so futuristic. Bill McKibben is living in the tiniest echo chamber I’ve ever seen. He feels life for everyone is about what makes him happy. He also clearly hates libertarians and their silly ideas like limited government. Rather, he is a fan of the community making decisions — read “mob rule”. People like him will ban research that will make your life better. He thinks that not dying from disease is equivalent to being unable to die. Clearly he has never had a proper introduction to the face of a fast-moving bus. His talk of energy and environment is 100% incorrect, as at least a half dozen accelerating technologies will solve our energy problems. Clearly it is not “enough” — no matter how much he wants us to ride bikes.

    The idea that somehow I lose humanity because I can run marathons easily is just plain short sighted. Marathons are hard because of the will needed to push systems to their limitations. There will always be limitations. There will always be desire to push beyond them. There will always be a “wall” that needs to be overcome, whether physical or psychological. Imagine the IronMan competitions 10,000 years ago. You swim 5 miles, bike over 100 miles, and then run a marathon. My guess is that no human alive 10,000 years ago could have finished it in times anywhere near today’s best — they simply were unfit to do it, literally.

    Also, Bill totally doesn’t understand recent trends toward super-empowered individuals. I make enough money to transition to concentrated solar power, use 100X less gasoline, buy a solar water-heating system, pay extra for nuclear and windmill electricity, and ride my bike everywhere. I am also aware enough of the multiple dimensions of reasons to do it. People like me are becoming more common, not less. There is no contradiction in the fact that my long-term interests are in the singularity while my short-term actions will lead me to use 95% renewable resources.

    The conflict is artificial. I like my community. I like my farmers market. I am responsible. And, I will embrace the singularity.

    Talk of super weapons and killer AI is relevant. Any talk of my losing identity, humanity, or community is incorrect.

    I would say he was a good presenter. I think that’s because he had no slides. Come to think of it, “present-er” is a pretty good label to contrast him to futurists :-D
    The best line of his was something like “we’re going from ‘turn the other cheek’ to ‘install motorized titanium alloy jaws to chew gravel into sand’”. I just like the imagery. Yes, by the way, I would like titanium teeth as long as they were white, painless, and required less maintenance than my current set.

    Thiel, the moderator, asked some slightly interesting questions, but pushed them way too far — asking non-futurists for dates for SuperAI is a waste of time. He generally did a horrible job. A robot would have given better time signals and could have managed the Q&A better. That robot would only need to be very, very weak AI to beat him :)

    Generally, I was extremely annoyed that they must have had 80-200 submitted questions, and only took 2 — then taking a ridiculous amount of time for around 2 dumb audience questions. The guy asking for an “assignment” was clearly an idiot, but wasn’t told to be quiet — and the conference ended on that note. Written questions allow for the queries to be more thoughtful and organized. They also could have been moderated to pick some good ones. They picked two bad ones. Annoying.

    There was no talk of space, which is idiotic when talking about extinction level events. Viruses don’t matter as much if you have thousands of space stations with necessarily solid entry procedures.

    The best elevator pitch to allay fears of super intelligence was the “cheesecake fallacy” by Yudkowsky:
    -Imagine the BIGGEST cheesecake a human team could make. Pretty big, eh?
    -Now imagine the BIGGEST cheesecake a super-intelligent AI could make. Must be HUGE. Mile high. The future will be full of cheesecake!
    -The fallacy: just because an AI _can_ do something doesn’t mean it will. This applies both to killing every human and to solving every human problem.

  • Karl Hallowell

    Imagine the IronMan competitions 10,000 years ago. You swim 5 miles, bike over 100 miles, and then run a marathon. My guess is that no human alive 10,000 years ago could have finished it in times anywhere near today’s best — they simply were unfit to do it, literally.

    My take is that it would depend on what culture the human came from. There were some cultures very dependent on running long distances, rowing, or swimming, and others that were completely agriculture-based and relatively unfit (I imagine). The former probably would be fit enough that *with training* they could compete.

  • http://www.kirigin.com ivankirigin

    My point is that even without gene therapy, steroids, or machine enhancements, the capability of people is expanding.

    Health improvements and knowledge, both fundamentally part of the continued exponential increase in information based systems, are the reason our capabilities improve.

    Also, the wealth from our technology allows us to focus on trying to push the envelope of human ability.

    So the marathon runners that Bill McKibben praises as being quintessentially human in their pushing of the limits of human will are, in fact, a product of the low-speed portion of accelerating change.

    These blinders to changes that have already occurred make Bill’s point anachronistic by design. His definition of what it is to be human is accurate for this generation, but not for generations long ago, and not for generations in the future.

    To me, that means he should be ignored. More specifically, he should be included in the conversation, because he is sincere, but his policy recommendations should be ignored.