Author Archives: Phil Bowermaster

It's About Damn Time Somebody Said It

I think we might need to send Norm Augustine a FastForward Radio coffee mug. And if President Obama pays attention to what Augustine is saying, and runs with it, I’m ready to send him FOUR of the coveted cups — that’s one each for the Prez, the first lady, and each of the girls (for cocoa, of course). As has been widely reported elsewhere, the Exploration Beyond Low Earth Orbit sub-group had fairly provocative things to say at the get-together of Augustine’s Human Space Flight review committee. Let’s begin with their first point:

– Human expansion into the solar system must be accepted as the key goal. Otherwise, manned spaceflight is pointless.

Absolutely. Read it again.

I believe that is one of the most beautiful statements I’ve ever seen. And it’s the first time in a loong time (if ever) that we’ve seen something so bold and visionary come from anyone acting in even a quasi-official governmental capacity.

It’s a pretty straightforward choice. Either we’re going someplace, or we’re going nowhere. Either we’re moving human influence, human activity, and human presence beyond this planet, or we’re staying here.

You either get busy living or you get busy dying.

The rest of the points are pretty exciting, too. I was particularly taken with this one:

– All options should include fuel depots.

If you wonder why this is a big deal, our friend Mr. Simberg can explain it for you a lot better than I can.

It’s About Damn Time Somebody Said It

I think we might need to send Norm Augustine a FastForward Radio coffee mug. And if President Obama pays attention to what Augustine is saying, and runs with it, I’m ready to send him FOUR of the coveted cups — that’s one each for the Prez, the first lady, and each of the girls (for cocoa, of course). As has been widely reported elsewhere, the Exploration Beyond Low Earth Orbit sub-group had fairly provocative things to say at the get-together of Augustine’s Human Space Flight review committee. Let’s begin with their first point:

/– Human expansion into the solar system must be accepted as the key goal. Otherwise, manned spaceflight is pointless.

Absolutely. Read it again.

I believe that is one of the most beautiful statements I’ve ever seen. And it’s the first time in a loong time (if ever) that we’ve seen something so bold and visionary come from anyone acting in even a quasi-official governmental capacity.

It’s a pretty straightforward choice. Either we’re going someplace, or we’re going nowhere. Either we’re moving human influence, human activity, and human presence beyond this planet, or we’re staying here.

You either get busy living or you get busy dying.

The rest of the points are pretty exciting, too. I was particularly taken with this one:

/—- All options should include fuel depots.

If you wonder why this is a big deal, our friend Mr. Simberg can explain it for you a lot better than I can.

FastForward Radio: Risks, Dystopias, and Unsettling Futures

The World Transformed, Part 6

Not all transformations are necessarily good. What risks do we face in a coming age of massive change?


Phil Bowermaster and Stephen Gordon welcome journalist and futurist Sonia Arrison, and the Lifeboat Foundation’s Philippe Van Nedervelde, to talk about the risks that accompany massive technological change. We face risks to our privacy, our safety, and our very survival. How do we take advantage of the wonderful opportunities new technologies present in the face of such risks?


Archived recording available here:

Listen to FastForward Radio... on Blog Talk Radio


About our guests:

Sonia Arrison is an author and policy analyst who has studied the impact of new technologies on society for more than a decade. She is a Senior Fellow at the California-based Pacific Research Institute, and she is on the Board of Directors of Humanity+.

Philippe Van Nedervelde is the Executive Director of the Foresight Institute in Europe. He is also the Chief Executive Officer and Founder of E-spaces, and he is the Director of the Informational Transparency Division of the Lifeboat Foundation, where he also serves on the board of directors.

Addition, Subtraction Part 2: Robocops

With the Gates-Crowley-Obama dust-up likely to sort itself out in a day or two over a pitcher of suds, I got to thinking about how and why such a controversy arose in the first place, and the role that human enhancement technology and robotics might play in mitigating such situations in the future.

So what happened? One of Cambridge, Massachusetts’ finest, a white officer, arrested a black Harvard professor for disorderly conduct after being called to the scene on suspicion that the professor was, in fact, a burglar attempting to break into his (own) home. Although there was contributory behavior from the neighbor making the 911 call and (clearly) from Professor Gates himself, most of the behavior under scrutiny is that of the arresting officer, James Crowley. It is asserted that he is guilty of:

1) Racial profiling of Gates in the first place, or

2) Overreacting to Gates’ response to the whole affair, misusing the charge of Disorderly Conduct to punish Gates either for the pseudo crime of “contempt of cop” or for being an “uppity black man,” or

3) Both of the above.

Having less than complete information on this case, I will refrain from commenting on whether either or both of those might be true (or whether anyone behaved “stupidly.”) But a lot of this comes down to what it is reasonable to expect a person to say or to think under a given set of circumstances. A good deal of the controversy hinges on what was going on inside Crowley’s head.

All of which reminds me of this idea from my recent musings on human enhancement by way of subtraction:

A great enhancement for people who want to make it in sales or show business or any number of other ventures would be the removal of the fear of rejection, along with some related forms of social anxiety. The individual who has no fear of being turned down, and who doesn’t mind asking for something any number of times, has a distinct advantage over people who shy away from being too aggressive. That person also runs the risk of being feared and despised for being so obnoxious — but then he or she wouldn’t care about that.

Cops and other first responders might benefit from enhancements that enable quick thinking and physical strength / speed. But they might also benefit from having certain tendencies suppressed, such as the desire to lash out at someone who has already surrendered or the overarching fear of physical danger (although maybe not — that’s generally a pretty useful trait). But what if we could suppress any tendency towards racial prejudice? Or what if cops could shut down a key piece of their egos before going on duty, making it unlikely that they would misapply a charge such as Disorderly Conduct because they felt personally disrespected?

Most of the controversy in a case such as Gates / Crowley would disappear, because either things wouldn’t have gone the way they did in the first place or, if they did, because there would be no doubt as to the arresting officer’s motives.

Enhanced human cops are one possible remedy to these controversies — another would be artificially intelligent robot police officers. In that world, the cops might have to be trained to avoid engaging in “bio profiling.” And civilians might protest that they are being condescended to just because the police happen to think a million times faster than they do. The idea of “robocops” might sound a little scary, and there are no doubt myriad potential problems that would arise from deploying a robotic police squad. But there would also be advantages in terms of the baggage that such officers wouldn’t be carrying around. Accusing a robot of racial profiling or having power go to its head (assuming that those truly are things outside the scope of its design) would make about as much sense as accusing my lawnmower of sexually harassing the women in my neighborhood.

Friday Videos — Apollo 11 Anniversary Edition

Harvey recommends these two blasts from the past:

I remember seeing the ads for these and BEGGING my mom to get them. I have no idea what I expected, but I was majorly disappointed. As I recall, they were like Tootsie Rolls that went down too fast.

Awesome! That is one tall alien!

Atlas Hugged

Speculist blogger Stephen Gordon has written a very interesting essay on the Atlas Shrugged phenomenon, exploring whether Ayn Rand’s novel is, in some sense, coming true in our world today. He decided to publish this piece on a different blog because of the political nature of what he wrote. But, hey, I’m linking to it from here because anything Stephen writes is worth taking the time to read.

As I have noted before, Atlas Shrugged is essentially a science fiction story. It was set in the near future at the time it was written, a world we would now consider a past-future. The plot relies on technologies that didn’t exist at the time the book was written — Rearden metal, recovery of oil from exhausted wells, the torture gizmo the bad guys use on John Galt, etc. Rand also implies that in the future the US government has been restructured, referring repeatedly to a unicameral “national legislature.”

However, in Rand’s fiction, the story does not arise from technological or historical developments. It is driven by philosophy. So a discussion of whether her novel is being realized in the world today naturally turns into a discussion of political philosophy, which is not Speculist material. It’s interesting to note, however, that Stephen describes, in addition to the political / philosophical form, an emergent, economic form of Atlas shrugging that is somewhat orthogonal to Rand’s concept. In that model, it is a different Atlas who shrugs.

Likewise, I think there are some other “Atlas Shrugs” scenarios that are well within the realm of the topics we explore here. For example Stephen writes:

Welfare recipients have two barriers between themselves and a better lifestyle. They have the first natural barrier that all people face – they have to find the energy and ambition to work harder, or get an education to work for better money. Recipients also have an artificial barrier – they would lose the largess that is making their lives fairly comfortable. A marginal improvement in their productivity could actually result in a net loss of income. So it would take a significant improvement of their productivity before they’d see any benefit to their lifestyle at all. That’s a bigger obstacle to productivity than some people can overcome. So they, quite rationally, work less than they would have otherwise.
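The “artificial barrier” Stephen describes is easy to see with toy numbers. Everything below is hypothetical — the benefit level, the phase-out rate, and the earnings threshold are illustrative assumptions, not figures from any real welfare program:

```python
# Hypothetical benefits-cliff sketch: total income is earnings plus a
# benefit that phases out dollar-for-dollar once earnings pass a
# threshold. All figures are illustrative, not from any actual program.

def net_income(earnings, benefit_max=12_000, phaseout_rate=1, threshold=8_000):
    """Earnings plus whatever benefit remains after the phase-out."""
    reduction = max(0, earnings - threshold) * phaseout_rate
    benefit = max(0, benefit_max - reduction)
    return earnings + benefit

# Raising earnings from 8,000 to 14,000 yields no net gain at all:
print(net_income(8_000))   # 20000
print(net_income(14_000))  # 20000
# Only well past the phase-out does extra work start to pay:
print(net_income(21_000))  # 21000
```

With a phase-out rate above 1, the marginal improvement would produce an outright net loss — exactly the trap the passage describes.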

Forget the politics of welfare for a moment. What’s interesting to me here is the notion that some individuals do better economically when all productive activity is outsourced. Is this a social problem that needs to be remedied or just a sneak preview of the future economic life of all of us?

Over the past couple of centuries, human economic productivity has increased in unprecedented ways, deriving from “outsourcing” of productive labor to machinery. Since the machines have, up to now, mostly needed human beings to operate them, it wasn’t always clear that outsourcing was taking place. But the machines keep getting smarter, and working their way further and further up the management ladder.

The digital revolution is only now truly being felt in the productive sectors of the economy. Things get very interesting as the complexity of intelligence embedded into the machinery of production continues to grow. We could be 10-20 years away from the equivalent of a human intelligence falling below Kurzweil’s magical $1000 price point. At that point, EVERY activity currently performed by human beings could reasonably be done more cheaply and efficiently by a computer — assuming that it is possible, legal, ethical, etc. to “buy” that capability for that price.
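The arithmetic behind that 10-20 year guess is Kurzweil-style price-performance extrapolation. The starting cost and doubling time below are assumptions chosen for illustration — nobody knows the real figures for “human-equivalent” hardware:

```python
# Back-of-envelope extrapolation: how long until hardware costing
# `cost_today` falls below a target price, if price/performance doubles
# (i.e., cost halves) every `doubling_years`? Both the starting cost and
# the doubling time are illustrative assumptions.

def years_until_affordable(cost_today, target=1_000, doubling_years=1.5):
    years = 0.0
    while cost_today > target:
        cost_today /= 2
        years += doubling_years
    return years

# If human-equivalent computing cost $1M today and halved every 18 months,
# it would cross the $1,000 mark in about 15 years:
print(years_until_affordable(1_000_000))  # 15.0
```

Under those assumptions the crossover lands squarely inside the 10-20 year window.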

At that point, Atlas might shrug again, this time when the truly productive sector of the economy excises its least contributing part: humans. You may think these are completely different circumstances, but maybe not. In situation A, producers reject the high cost of doing business brought about by taxation and stop producing for ungrateful consumers. In situation B, producers reject the burden of pretending that the (perhaps perfectly grateful) consumers need to be part of the production cycle at all.

Then what happens? We ALL become welfare recipients? For that to work very well, we might have to outsource the running of government to the machines along with everything else. If that image is too scary — and it will probably seem a lot more rational when the time comes, seeing as we will have witnessed human-or-greater artificial intelligences in action by then, rather than trying to imagine our world being run by some massive IVR system — consider the alternative in which each of us owns our own outsourced means of production in the form of a universal assembler. Either way, you have the same result: a world in which none of us has any more motivation to be “productive” than Stephen’s welfare recipients.

What do we do then? Take your pick. I’ll probably spend a lot of time blogging and podcasting, or rather the future equivalents of those activities. Also, I’ll probably do a lot of traveling — on earth, in space, and in virtual worlds. Later on, if we get tired of amusing ourselves and decide we want to become productive again, we’ll have to think about massive cognitive enhancements to enable us to operate at the level of our robot overlords. In a world run by them, true productivity will have to do with increasing knowledge, increasing capability, and creating beauty, and those activities will be occurring at a level that it is impossible for us to imagine in this world.

Most likely, those of us who want to remain productive on those terms will just join up with the machines. It will be the last chapter of the story: Atlas Embraces.

UPDATE: An anonymous reader points out that that last line should really be “Atlas Hugged.” Now how did I miss that? Thanks for the new title, reader!

Note: I need hardly mention that if you want to leave a comment about how “liberals” or “wingnuts” or the president or the obstructionist republicans are ruining the world, this is not a blog where we talk about those things. I believe Stephen is accepting comments over at The Last Pragmatist, however.

FastForward Radio — Achieving Friendly Artificial Intelligence

The World Transformed, Part 5

Are the robots going to take over?

The Matrix and Terminator movies present a nightmare world in which artificially intelligent machines pit themselves against humanity with devastating consequences. Could something like that really happen? Do we face a future in which machines that are smarter and potentially much more powerful than human beings view us in a hostile manner? Or will these machines be indifferent to us?

Or will they be our friends?


Phil Bowermaster and Stephen Gordon welcome a panel of leading thinkers on artificial intelligence to explore these issues:

In the near future, is machine intelligence going to equal or overtake human intelligence in terms of speed and capability?

If so, what can we do to make sure these new intelligences are on our side?

What are the implications of sharing our world with artificial intelligences who are as smart as (or much smarter than) we are?


Archived recording available here:

Listen to FastForward Radio... on Blog Talk Radio


About our guests:

Eliezer Yudkowsky is the world’s foremost researcher on Friendly AI and recursive self-improvement. He is a research fellow with the Singularity Institute for Artificial Intelligence.

James Hughes is the Executive Director of the Institute for Ethics and Emerging Technologies, and he is the producer and host of the weekly syndicated public affairs talk show Changesurfer Radio.

Ben Goertzel is the chief science officer and acting CEO of Novamente. He is the Director of Research for the Singularity Institute for Artificial Intelligence.

Happy Moonday!

Forty years ago today, two human beings, a couple of American guys, became the first members of our species — or, we can be reasonably sure, any species from this planet — to set foot on another world.

As “other worlds” go, we were always lucky to have the moon, so big and yellow and tempting and sitting right there in our backyard. It only makes sense that our first steps into space would culminate in a journey there and back. Apollo 11 was a proof of concept. We are also fortunate to have so many interesting planets and moons (as well as asteroids, comets, and hard-to-classify stuff flying out there in and beyond the Oort Cloud) orbiting the sun with us. Apollo 11 was the first step towards our reaching out to explore all of them…and beyond.

Sure, we can argue about whether we went to the moon the right way. Working from earth orbit (rather than lunar orbit) with reusable spacecraft would have almost certainly been a better model. If we had built the space shuttle and a space station before going to the moon, we would understand better why we have those things. Rather than being these odd anticlimactic artifacts, symbols of the long decline of our manned exploration of space, they would have been the core infrastructure for pushing outward. In that model, sub-orbital flights lead to orbital flights, leading to a permanent presence in orbit, leading to moon missions, leading to a permanent presence on the moon and, at the same time, missions deeper into space.

Of course, even had we chosen to get there using such a model, there’s no guarantee that we wouldn’t have abandoned our spacefaring ambitions soon after that first triumphant moon landing. The reason that a national deprioritization of space exploration was all but inevitable has to do with the other big argument people like to get into about that first flight to the moon: that we went there for the wrong reasons. The Commies had beaten us into space. We needed to take an unambiguous lead in the space race, show ‘em who’s boss. So we chose to go to the moon…

…which, unfortunately, meant that the whole enterprise had a one-off, stunt-like quality to it. We didn’t need a space shuttle or a space station — we needed a great big enormous white rocket with “USA” emblazoned on the side. We went with the lunar orbit model because it was cheaper and we could make it happen faster. We didn’t need to worry about infrastructure or where we would go next. There was no tomorrow — we were going to the moon!

I think the problem with arguing about how we went or why we went is that it ignores the likely alternative to going to the moon the way we did, for the reasons we did. The likely alternative is not that we would have gone a different way, for a different set of reasons. The likely alternative is that we never would have gone.

Imagine a version of the year 2009 in which an eventual trip to the moon is as ephemeral and unlikely a notion as a return to the moon (or a manned mission to Mars) is in our timeline. We should celebrate Apollo 11 whole-heartedly. It was the first step, even if the next step is going to be much later coming than many of us would have liked. And what a glorious first step it was.

Happy Moonday, all. If you’re interested in reliving the event in real-time, Ken B. suggests checking out this site.

Future Values

My previous post on enhancement by way of subtraction as well as addition — and the possible downsides that could result when people have the option to delete characteristics that we have traditionally thought of as being core to human nature — has led to some interesting commentary from reader Jeff Allbright. Jeff writes:

My message is that there is nothing more important or central to “transhumanist” thought than achieving an effective framework for the promotion of an increasing context of hierarchical, fine-grained, evolving values, promoted via methods increasingly effective, in principle, over increasing scope of interaction….

Likewise, while it’s increasingly meaningless to talk of modifications as inherently “good” or “bad”, it is increasingly meaningful, important, and urgent that we learn to effectively assess and evaluate actions, relative to our evolving values, rationally expected to promote those evolving values over increasing scope.

This puts me in mind of the discussion we had a couple years back about how Asimov’s Three Laws of Robotics might be updated to make them useful as goals or design principles for those looking to establish an ethical framework for artificial intelligence. Restated as goals, the three laws become:

1. Ensure the survival of life and intelligence.

2. Maximize the happiness, freedom, and well-being of individual sentient beings.

3. Ensure the safety of individual sentient beings.

Astute readers will note that I flipped the order of the second and third goals from my original formulation, which reflected Asimov’s order. Last time out, several readers protested that valuing safety over happiness and freedom ultimately creates a nanny state (or worse) in which freedom gets snuffed out altogether. I’m still not convinced that would necessarily be the case, but I do see the risk.

Anyway, as noted last time out, these goals work as well for human beings as robots, and something like this list might serve as the core for the “context of values” that Jeff calls for, above. Interestingly, I don’t think there is anything particularly “new” on the list. As with the Declaration of Singularity, I think we can get a pretty good idea about our future values by looking at the values we have in the present, many of which we have carried with us for a long time.

Addition, Subtraction

Talking about human augmentation the other night, we ended on a kind of down note when the subject turned to potential “augmentations” that inhibit or eliminate certain characteristics. We’re all for adding strength, mental focus, whiteness of tooth, and so forth, but what about taking things away? Generally speaking, subtraction is as good a method as addition — if not better — when it comes to achieving certain outcomes. For example, inhibiting myostatin looks like a much safer and healthier way of eliminating body fat and building muscle than the traditional “addition” approach, anabolic steroids.

For many of the changes we discussed, it’s not a matter of addition or subtraction, but a trading of one characteristic for another. RU Sirius talked about changing skin color. Even if people begin to experiment with a lot of options not provided on the original human palette, it would be kind of prissy to view such experimentation as a departure from core “humanity” rather than simply an extension of it. Even with a much more radical procedure such as a sex change operation — and we spent some time talking about the significance of coming improvements to those procedures — one state of being human is swapped for another.

But what happens if, as described in Greg Egan’s fiction and elsewhere, some people decide they don’t want the whole sex / gender thing at all? Would a completely sexless human be just as human as you or I? This one is a little bit trickier, but ultimately I think sex- and gender-free humanity would represent another extension of humanity, rather than a departure from it. They would certainly represent an unexpected variation on the human template, one that a lot of people would be uncomfortable with — but then I think there are a lot of those coming.

Then we got onto the subject of personality traits. I have suggested more than once that a great enhancement for people who want to make it in sales or show business or any number of other ventures would be the removal of the fear of rejection, along with some related forms of social anxiety. The individual who has no fear of being turned down, and who doesn’t mind asking for something any number of times, has a distinct advantage over people who shy away from being too aggressive. That person also runs the risk of being feared and despised for being so obnoxious — but then he or she wouldn’t care about that.

Is that person still human? Sure. But what happens if somebody decides to take the next step? Imagine a truly ruthless person who decides to hit the Delete key on all empathy with his or her fellow human beings. This individual has all the advantages of someone who removes fear of rejection, and then some. The lack of inhibition would go well beyond asking for things; we’re talking about someone who isn’t shy about taking things, and who doesn’t care about what happens to anyone who gets in the way.

We’re talking about a very dangerous person.

Now, surely, once we remove this trait we’re talking about a real departure from humanity, aren’t we? Well, my squishy and romanticized view of who and what we are says yes, that’s a real departure. But reality says no. PJ Manney pointed out that we already have such people among us, that they make up a small but appreciable percentage of the population. They’re called sociopaths.

So if future technologies enable people to select in favor of sociopathy, it would not represent a departure from the human template. Needless to say, I hope we don’t see too much of that. But it won’t be a wholly new subtraction; it will be one that human evolution has already tried out and allowed. This serves as a reminder that there are risks with any new technology. Those who think the big “risk” associated with human enhancement technology is that future teenagers might opt to sport dorsal fins are missing the big picture. Look at people around you, and consider all the different qualities they possess. Any of those qualities is subject to magnification.

Still, I think a lot more people will be inclined to give themselves a brain boost or a prettier singing voice or the ability to breathe underwater (and possibly even the aforementioned dorsal fin) than will want to hack away parts of themselves that are more or less universally valued. In any case, if the future means that we will have to deal with people who have had some important human features deleted, that is a way that the future will be similar to, not different from, the present.