Author Archives: Phil Bowermaster

Scenes from the Summit

As I have mentioned, I did a number of interviews and captured quite a few video clips of the sessions at last week’s Singularity Summit. I recorded two of the speakers, Christine Peterson and Eliezer Yudkowsky, in their entirety and will be posting their talks along with some video montages of the 15 interviews I did with attendees. Two of the attendee interviews were with old friends of The Speculist with whom we’ve spoken before: Michael Anissimov and Alex Lightman. These interviews follow the same basic set of questions that I developed for all the attendee interviews, but there’s so much content there that I thought they could stand on their own.

So here, more or less in its entirety, is my chat with futurist and entrepreneur Alex Lightman. You might find Alex to be outspoken, perhaps even outrageous. But you certainly never have to worry that he’s holding back on you. His views are unique and thought-provoking.

Plus you gotta love a guy who, on the one hand, wants to start a religion but, on the other hand, has had it with all this “morality” talk. Now there’s a combo you don’t see every day!


UPDATE: The video is currently down while I make a quick correction. Alex’s company is called Innofone, not Innophone.

UPDATE II: The video is back.

Doomsday Machine

Eve Matelan provided us a link to this intriguing article delving into whether the Soviets ever actually built a version of the Doomsday Machine discussed in the movie Dr. Strangelove:

In Strangelove, the doomsday machine was a Soviet system that automatically detonated some 50 cobalt-jacketed hydrogen bombs pre-positioned around the planet if the doomsday system’s sensors detected a nuclear attack on Russian soil. Thus, even an accidental or (as in Strangelove) an unauthorized U.S. nuclear bomb could set off the doomsday machine bombs, releasing enough deadly cobalt fallout to make the Earth uninhabitable for the human species for 93 years. No human hand could stop the fully automated apocalypse.

An extreme fantasy, yes. But according to a new book called Doomsday Men and several papers on the subject by U.S. analysts, it may not have been merely a fantasy. According to these accounts, the Soviets built and activated a variation of a doomsday machine in the mid-’80s. And there is no evidence Putin’s Russia has deactivated the system.

Well, um, yikes.

Okay, everybody — have a great weekend!

BTW, Eve has some thoughts of her own on the Doomsday Machine and other less terrifying future-related topics. You can catch her in one of my (still-being-edited) video montages from the Singularity Summit.

Linkathon

It's a New Phil, Weeks 85 and 86

The Transformation of Desire

Charles Harper’s talk at the Singularity Summit had particular resonance for the New Phil series. Here’s the abstract from the relevant portion of his talk:

People use power to pursue ends they desire. Therefore the increase of personal power calls for the transformation of personal desire. Science, however, knows next to nothing of the transformation of desire. Monks, hermits, fasters, counterculturals – the athletes of the spirit; these are the sorts of people who work on and know about the transformation of desire. A wise approach towards the development of superintelligence probably should include serious consideration on how to transform desire so that enhanced powers are not abused to serve un-enhanced desires. The transformation of desire for humans involves what in virtue ethics is called “habitus” – the formation of habituated character through devoted, willful practice within a space of real freedom. Virtue is not a matter of either knowledge or “programming.” And it also often is not limited to only individual lives. It occurs in group contexts such as families, teams, monastic orders, communities. Also, people who engage in the transformation of desire often are involved in worship and prayer. They seek inspiration and transformative power from God. In view of such issues, what would be the “transformation of desire” for a superintelligence?

The national tendency towards obesity is a small but revealing example of what Harper is talking about. The “power” in question here is both technological and economic. We can produce and have access to far more food than we actually need to eat. I’ve been chronicling an attempt to create my own habitus. Who knew?

Next time — a long overdue (and dreaded) weigh in. Stay tuned.


Summit Day 2

More great stuff on the second day of the event.

The morning opened with Google’s Peter Norvig, who discussed the question of whether innovation has stopped. He looked at some trends that don’t appear to be accelerating — life extension and economic growth.

Next came J. Storrs Hall, who discussed the need for a revised Three Laws of Robotics. He came up with four, actually:

1. Robots shall be built according to evolutionarily stable strategies

2. Robots shall be open source

3. A robot shall be economically sentient (that is, consider the value placed on things by others)

4. Robots shall be trustworthy, loyal, helpful, friendly, courteous, kind, obedient, cheerful, thrifty, brave, clean and reverent…and shall do a good turn daily.

Josh Hall was on a panel with Peter Thiel, who took us through the recent history of economic booms and busts — pointing out that their amplitude has grown. Thiel’s take is that the booms can’t all be fake. He explains it this way:

One of them is going to be real, or the world is going to end.

Given those choices, our investment choices are somewhat different than they have been. To say the least!

[Wired is reporting this morning on Peter Thiel's speech. - Stephen]

The third member of the panel was Charles Harper from the John Templeton Foundation. He took us through three basic questions that have to be addressed:

1. What does a slug know of Mozart? And, by extension, what if AIs quickly become as far removed from us as we are from slugs?

2. How serious is the dilemma of power? The problem is that the acquisition of power seems to outpace the ability to use power safely and responsibly.

3. How important is the transformation of desire? Here Harper cites a book by Leon Kass (someone we have had our share of disagreements with on this site) entitled The Hungry Soul. Might have to check that one out.

Over lunch, we saw a presentation from Michael Lindsay of the X Prize Foundation. The Foundation is considering offering a prize for education. They’re looking at starting out by measuring algebra, reading comprehension, and second-language acquisition. Lindsay’s intent was to gather feedback from the Singularity Summit crowd. There was a good deal of push-back, particularly concerning the fact that the Foundation plans to use standardized tests to measure the results. It will be interesting to see whether an X Prize for education ever materializes.

Next came Steve Jurvetson, talking about the dichotomy of design and evolution paths to AI futures. There are strengths and weaknesses to both potential paths, with some possibility that the two approaches may converge. Take a look at this site to get an idea of how effective the evolutionary approach may prove to be.

The final panel included Eliezer Yudkowsky, Christine Peterson, and James Hughes, Executive Director of the Institute for Ethics and Emerging Technologies. I made a video recording of both Eliezer’s and Christine’s presentations in their entirety, so look for those after I edit them and get them posted. Hughes talked about the potential dangers of AGI, particularly the dangers arising from the fact that AGIs will be completely alien. He pointed out that in addition to Jurvetson’s two categories, there is a third category — emergent AI, which may be the most alien (and dangerous) of all. He noted that an emergent AI rising out of (for example) Google wouldn’t have to have human-level intelligence. Rats and cockroaches have far less than human intelligence, but they can be pretty annoying.

The day wrapped with a Q&A session with Ray Kurzweil, who was otherwise occupied with Aubrey de Grey’s SENS conference but not too busy to show up at the Singularity Summit in virtual form.

All told, it was a fantastic two days. Got to meet Speculist reader and sometimes commenter D. Vision in person, although I missed out on seeing the small Colorado contingent who were here. I’ve suggested we have a Colorado singularity get-together in the near future. I did video interviews with a number of attendees, and have video clips from virtually all of the sessions. I also have full audio transcripts of most of the sessions, but I doubt these will have much use other than personal — SIAI will be publishing all this content anyway, in much higher quality. However, I will be putting my video clips together into something not unlike what I did at the library conference last May.

So stay tuned.

The Summit So Far

A well-timed invitation to have dinner with the lovely Iveta Brigis and the — if not lovely, let’s just say personable — John Smart, along with a gaggle of Bay Area futurist acquaintances both old and new, plus Wendell Wallach from Yale (more on him later) and at least one dude from L.A. significantly slowed my blogging last night, so now I’m running to catch up. Not that I regret anything. Yummy Thai food and fascinating company. I love the Singularity Summit.

Anyhow, here’s a quick recap of yesterday’s events.

Off to a Great Start

The Singularity Summit kicked off this evening with a reception hosted by Paypal co-founder Peter Thiel. The venue was pretty nice. Here’s a rough approximation of the view from the roof/balcony.

Yeah. I’ve seen worse.

One could argue that, as of this year, the Singularity Summit has come into its own. The event made the front page of today’s San Francisco Chronicle. Singularity Institute Executive Director Tyler Emerson tells me that 1,000 attendees are expected for the conference sessions over the next two days. With the speakers they’ve lined up, I’m not surprised.

I completed five mini video interviews this evening. So far, I’ve spoken exclusively to attendees (although one speaker has agreed to do an interview with me on Sunday.) A big part of what makes this such an exciting event is the attendees. They are a very smart bunch, with some fascinating perspectives on the Singularity — and they aren’t afraid to share.

Three topics other than the Singularity that I kept hearing people talk about:

Cryonics — Some of this from Alcor folks, but not all.

Star Wars — I’m just telling you what I heard.

Burning Man — There’s apparently a lot of crossover between the Singularity crowd and the Burning Man crowd. We need to understand this phenomenon better. Next year, I’m sending Stephen and El Jefe on a road trip to see what the heck Burning Man is all about.

I hope I find some time to interview the speakers, but on the other hand — they’re all getting a chance to be heard, anyway. Hearing more from the attendees might help round out our understanding of the event.

Well, we’ll see what tomorrow has in store. (Later today, really.) I can’t wait.

Singularity Summit Countdown

The Summit Begins tomorrow…


I’ve just arrived in San Francisco, and will be attending a pre-Summit reception this evening. I plan on doing some video interviews with speakers and attendees. Here are the standard questions I plan on asking everyone:

1. Why are you at this event? What’s your interest in the Singularity?

2. Do you believe that you, personally, have a role to play in the unfolding of the Singularity? If so, what is it? If not, why not?

3. What’s one thing you wish other people understood about the Singularity?

4. Pick one of the following — your greatest hope or your greatest apprehension concerning the Singularity — and tell me about it.

Should make for some lively discussion.

Feeds, Seeds, and Gray Goo

Our old buddy Karl Gallagher steps us through some of the more entertaining scenarios featuring nanobots run amok:

A couple of the books I’ve read recently illustrated the powers and dangers of nanotechnology. One of the disputes in the field is whether molecular manufacturing can provide exponential production capabilities. MM would let us create a “nanofactory”, a machine which builds things atom by atom, capable of producing anything it has the design data for. Exponential production happens when a nanofactory can build a duplicate of itself. Then they could both duplicate themselves, until we have 4, 8, 16, 32, . . . enough nanofactories for every household in the world to have one. That would totally eliminate the world economy as we know it. If there are no limits on what the nanofactories can produce, there could be a wave of homemade WMDs that would eliminate the world as we know it.
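The doubling arithmetic in that passage is worth making concrete. Here’s a quick back-of-the-envelope sketch — assuming, purely for illustration, a round figure of two billion households worldwide — showing how few duplication cycles it would take to go from one nanofactory to one per household:

```python
import math

# Hypothetical round figure for illustration; not from the quoted article.
households = 2_000_000_000

# Each cycle, every nanofactory builds one copy of itself: 1 -> 2 -> 4 -> 8 ...
# After n cycles there are 2**n factories, so we need the smallest n
# with 2**n >= households.
doublings = math.ceil(math.log2(households))
print(doublings)  # 31
```

Thirty-one duplication cycles — if each took even a week, that’s well under a year from a single working nanofactory to global saturation, which is what makes the “exponential production” dispute so consequential.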

My favorite would have to be the “flesh-eating assemblers.” Yikes!
