Author Archives: Phil Bowermaster

FastForward Radio — From Sub-Human to Post-Human in Three Easy Steps!

Phil Bowermaster and Stephen Gordon discuss the future of human and machine evolution:

1. What are the challenges faced in trying to develop a human-level artificial intelligence?

2. When do humans stop being human?

3. What will be the relationship between humanity and post-human artificial intelligence?

Daunting questions with some potentially surprising answers!


Archived recording available here:

Listen to FastForward Radio... on Blog Talk Radio

FastForward Radio — Three Words

Phil Bowermaster and Stephen Gordon have three words for you about the future:

[Image: dontmissit2.jpg]

Possibilities are open to us as individuals, as a nation, and as a civilization that are unprecedented in all of human history. The only thing standing between each of us and an extraordinary future is the choices we make today and tomorrow.

There’s a wonderful future waiting for you out there. Whatever you do — don’t miss it.

Archived recording available here:

Listen to FastForward Radio... on Blog Talk Radio

An Idea for Health Care: the Mad Robot Scenario

I’m sure everyone now knows that the House of Representatives narrowly approved the health care reform bill over the weekend. Attention will now turn to the Senate, which will soon vote on its own version of the bill. If it passes, the two bills will be reconciled into a unified version to be signed by the president — who will almost certainly sign anything that makes its way through to him. I won’t get into the debate about the benefits and costs of the current bill(s), or the question of whether the approaches they suggest represent an optimal (or even desirable) way to reform health care. My real beef with the current debate is that everyone involved sets the bar far too low and assumes that whatever system we end up with will have to involve a series of win-lose scenarios, the hallmark of any zero-sum game.

These win-lose scenarios are based on conventional assumptions which, to their credit, have been correct throughout history. We assume, for example, that the cost of medical care will continue to rise. And we assume that future medical resources will be inadequate to meet all needs. And we therefore assume that someone (a person, a set of persons, or the market on its own) will ultimately make the tough decisions about who gets care and who doesn’t.

I listed those assumptions in the order that I believe they are likely to become invalid. As I wrote on this subject not too long ago:

I would guess that fewer than 20% of the problems that doctors routinely encounter account for 80% (or more) of the time they spend with patients, and that many of these would be good candidates for automating. Offloading 80% of the tasks doctors currently perform would be the equivalent of having five times as many doctors on hand to apply their expertise to the treatment and prevention of illness. The total amount of medical care available would increase geometrically. And, since the vast majority of this care would be automated, the total cost of care would plummet.
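The back-of-envelope arithmetic behind that “five times as many doctors” figure is easy to check. A minimal sketch, assuming (as the quoted passage does, without measured data) that 80% of doctor time goes to routine, automatable tasks:

```python
# Back-of-envelope check of the "five times as many doctors" claim.
# The 80/20 split is the post's assumption, not measured data.

routine_fraction = 0.80                 # share of time on automatable tasks
expert_fraction = 1 - routine_fraction  # time left today for expert care

# If the routine 80% is offloaded, each doctor's whole day becomes
# available for expert work, multiplying expert capacity by:
capacity_multiplier = round(1 / expert_fraction, 6)

print(capacity_multiplier)  # 5.0, i.e. the "five times as many doctors" figure
```

The multiplier is just the reciprocal of the time left over, so the claim stands or falls entirely on how accurate that 80% estimate turns out to be.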

Most people aren’t comfortable with the idea of automated health care. It sounds risky and dehumanizing. Of course, looking back, online banking sounded pretty risky and dehumanizing when it was first introduced. But now that many of us have been doing it for a while, we understand the difference between the simple tasks that are easily handled in the interactive environment and the more complex ones that require talking to a human being or (as a last resort) actually showing up in person at the bank. Obviously, medical care is much more complicated than banking. But information technology is much better at handling complex tasks than it was in the recent past, and in the near future it will be far more so.

A health care reform initiative that I could get excited about would be one that recognizes the incredible potential of technology, especially information technology, to make health care massively more available and less expensive. But solving the health care problem turns out to be only one good reason for pursuing cutting edge artificial intelligence research directed at automating health care.

In a recent blog post, Michael Anissimov writes about the risks involved in having sentience emerge in an AI system used for the kinds of applications that currently represent the leading edge in practical (narrow) AI research:

An AI that maximizes money for an account, optimizes traffic flow patterns, murders terrorists, and the like, might become a problem when it copies itself onto millions of computers worldwide and starts using fab labs to print out autonomous robots programmed by it. It only did this because of what you told it to do — whatever that might be. It can do that better when it has millions of copies of itself on every computer within reach. It might even decide to just hold off on the fab labs and develop full-blown molecular nanotechnology based on data sets it gains by hacking into university computers, or physics and chemistry textbooks alone. After all, an AI recently built by Cornell University researchers has already independently rediscovered the laws of physics just by watching a pendulum swing. By the time roughly human-level self-improving AIs are created, likely a decade or more from now, the infrastructure of the physical world will be even more intimately connected with the Internet, so the new baby will have plenty of options to get its goals done, and — best of all — it will be unkillable.

Once an AI with a simplistic goal system surpasses the capability of the humans around it, all bets are off. It will no longer have any reason to listen to them unless they have already programmed it to in a foolproof way, a way where it wants to listen to them because it needs to in order to fulfill its utility function.

A more basic example is an artificial intelligence that has been programmed to build certain structures on the moon, and given no other instructions. From its point of view, finding better and better ways to build more and more of these moon constructs is good, and all other considerations are irrelevant. Which means that this machine will build its moon towers right on top of the crushed bodies of the lunar colonists and never give the matter a second thought.

Yes, it’s the Mad Robot scenario. But you see, the robot isn’t really mad, although as Michael points out, it might well be — from our point of view — a complete psychopath. The robot is working in a completely sane, logical, and consistent way on a very simple set of goals without the benefit of our moral sensibilities. These sensibilities, it turns out, are hugely complex, and we won’t necessarily find an adequate way to encode them before the emergence of the first human-level AI.

Failing a truly moral AI that seeks to build towers on the moon, we want to make sure that we at least create an AI that seeks to build towers on the moon without killing anybody. That is to say, “without killing anybody” (along with “without stealing non-moon-tower-designated funds” and possibly something like “without ripping other planets apart in order to get more moon-tower materials,” to give just a couple of examples) actually becomes part of the goal. Building moon towers alone is then not enough: the AI’s utility function is satisfied if it creates a moon tower the right way, but not if it goes about it the wrong way.
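This “goal plus withouts” idea can be sketched as a toy utility function. Everything here, from the outcome fields to the all-or-nothing scoring, is an illustrative assumption rather than a real AI design; the point is simply that the prohibited side effects become part of the objective itself:

```python
# Toy sketch of a goal system where the "withouts" are built into the
# utility function rather than bolted on afterward. All field names and
# the zero-score penalty are illustrative assumptions.

def utility(outcome):
    """Score an outcome: towers built count for nothing if any
    'without' constraint is violated."""
    withouts = [
        outcome["humans_harmed"] == 0,
        outcome["non_tower_funds_spent"] == 0,
        outcome["planets_disassembled"] == 0,
    ]
    if not all(withouts):
        return 0  # violating any constraint voids all accumulated value
    return outcome["moon_towers_built"]

# The "right way" scores; the "wrong way" scores nothing, no matter
# how many towers it produced:
good = {"moon_towers_built": 10, "humans_harmed": 0,
        "non_tower_funds_spent": 0, "planets_disassembled": 0}
bad = {"moon_towers_built": 1000, "humans_harmed": 5,
       "non_tower_funds_spent": 0, "planets_disassembled": 0}

print(utility(good), utility(bad))  # 10 0
```

Note that even this toy version shows the fragility the post worries about: the system is only as safe as the list of constraints someone remembered to write down.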

So we want anti-terrorist AI systems that take out sleeper cells but understand that it’s no good if they take out Grandma’s bridge club in the process; likewise, we want trading systems that will stop at almost nothing when it comes to maximizing profits, with that “almost” including things like, say, rendering the US dollar completely worthless.

However, no matter how carefully we go about creating these complex goal systems, there is always the possibility of unintended consequences. So we have to be extremely careful.

All of which brings us back to the subject of automating health care. If we start dedicating cutting-edge AI technology towards increasing human health and making more and more kinds of treatment easier and more widely available, we will not only achieve the benefits I described earlier, we will also face the possibility that this health care AI achieves sentience and “goes mad.”

So then we would have a greater-than-human intelligence working non-stop to make itself better and better at making us better and better. Okay, granted, there’s still an awful lot that can go wrong with that scenario. But if we were to program enough “withouts” into that system, I’d say there’s a pretty good upside there, too. I’d feel a lot better about a sentient AI emerging with this set of goals than any that I can think of that are currently being addressed by real-world artificial intelligence applications.

The iPhone, Free Markets, and Alternative Energy

In a comment on our recent discussion about energy, Harvey notes that “the free market is a myth.”

This is, of course, absolutely correct.

The free market is a myth in the same way that freedom of speech is a myth and that freedom of religion is a myth. Ideally, anyone can say anything he or she wants. In reality, it’s better to avoid committing libel or shouting “fire!” in a crowded theater. Ideally, there would be no interference, government or otherwise, in one’s spiritual beliefs or the practices derived from them. In reality, religious practices can’t be used as an excuse to exploit or endanger others, or to deprive them of their freedom.

Perfect freedom of speech is likely to remain beyond our grasp, but the ideal of “freedom of speech” is a good thing even if it is a myth. It reminds us that speech should be as free as we can get it. The same is true for religion and — I believe — for markets.

Look at what happens when markets are made more free. Apple has demonstrated this very well over the past couple of years by turning the business model for mobile telephone applications on its head. Before the iPhone came along, mobile apps were a highly protected “walled garden.” The carriers and the equipment manufacturers didn’t want anybody but themselves playing with their sandbox toys.

Apple changed all that. If you want to build an app for the iPhone, you just need to follow their standards. The doors have been flung open wide, and to what result? An explosion of creativity, and an explosion of new business. And not just for Apple. The app developers benefit by being able to profit from their work, and the consumers benefit by having a device whose value is (potentially) increased with each new app downloaded — if not each new app developed.

But of course, it isn’t really a “free market.” It’s just a lot more free than what existed before — which is great. But Apple is still setting those standards and deciding who can or can’t develop an application to run on their platform.

So we might talk about an idealized, mythical free market that is completely unconstrained, but there’s a limit to how free you can get. The reality is that markets have to be regulated and that businesses often seek to protect their interests, not only through direct competition in the marketplace, but also by leveraging social and government pressures.

To address the question of energy, we’ve had no new nuclear power plants built in the US for some decades now. Who prevented that from happening? Well, the people, of course, especially environmentalists. And the government.

Anyone else?

Okay, you can call me overly suspicious, but I can’t help but imagine that the oil companies might have played a role. New players have a hard time competing in a “free market” when established players take steps to make sure that the market isn’t really all that free.

Another example: the vast majority of the money that has been pumped into biofuels in this country has gone toward corn-based ethanol — only attaching multiple hamster wheels to our vehicles’ drive trains and trying to get the little rascals to spin in unison would be a less practical approach. The farm lobby has worked tirelessly to get the government behind this non-free-market (and ultimately unworkable) approach.

We need to see an alternative energy market that is as dynamic and creative as the iPhone app market. Of course, the former would work on a time scale several orders of magnitude slower, and be supported by a group of players several orders of magnitude smaller, than the latter. But the result would be the same — a big boost in business for the alternative fuel players and a rich new set of choices for consumers.

How do we create a more level playing field to make that possible? I don’t know. But it’s possible that the government might have a role to play. The iPhone app success story is a great tribute to the free market, as are so many stories of huge business success built on the Internet. But as Stephen reminded us last week, the Internet itself was not a product of the free market. It was a government project.

There are any number of ways the government might help: research initiatives, tax incentives, push prizes. Or maybe it could simply enforce reasonable regulation on new businesses and industries, while allowing the ultimate economic good of our country — rather than pressure tactics from lobbyists — to determine which new technologies are introduced and in what time frame.

But now, I guess I’m just dreaming, eh?

FastForward Radio — More About the Future of Energy

Phil Bowermaster and Stephen Gordon continue their discussion from last week about the future of energy:

What role does energy technology play in the unfolding of the major transformations we are currently experiencing?

What are the economic impacts of choosing to change energy sources…or choosing not to change?

What are the unexpected energy solutions that might prove to be game-changers?

Tune in and find out!


Archived recording available here:

Listen to FastForward Radio... on Blog Talk Radio

Great Headline, Great Idea

Every now and then you see something that just puts a smile on your face:

Humans, Shmumans: What Mars Needs Is an Armada of Robots and Blimps

Airships may be the key component in a new robotic system for exploring the celestial bodies most likely to harbor life, like Mars and Saturn’s moon, Titan.

The dirigibles would provide regional observations and autonomous command for ground-based vehicles, while maintaining contact with orbiters.

It’d be a new role for airships, which were the wonder of the aerial world in the days before airplanes (and rockets and space shuttles).

An armada of robots and airships. On Mars! Now that’s the future, folks. The future is supposed to be interesting and fun.

Here’s my artistic take on Mars in the near future:

[Image: blimpsonMars.jpg]

But where are the robots, you ask? Piloting the blimps, of course.

Climate Change

Lovely autumn scenes from the metro Denver area. About two feet of snow so far, and now it’s really coming down. I can’t shovel fast enough to keep up.

[Image: snowhouse.jpg]

[Image: snowfence.jpg]

That drift on the fence peaks at about eye level, with heavy stuff up to shoulder level.

FastForward Radio — Energy and the Future with Brian Wang

Phil Bowermaster and Stephen Gordon welcome futurist Brian Wang back to FastForward Radio to talk about energy and the future.

  • What role does energy technology play in the unfolding of the major transformations we are currently experiencing?
  • What are the economic impacts of choosing to change energy sources…or choosing not to change?
  • What are the unexpected energy solutions that might prove to be game-changers?

Tune in and find out!


Archived recording available here:

Listen to FastForward Radio... on Blog Talk Radio






About Our Guest

Brian Wang is a futurist who blogs about all things future-related at NextBigFuture (http://nextbigfuture.com/). He is the Director of Research for the Lifeboat Foundation and a member of the Center for Responsible Nanotechnology Task Force.

[Image: Brian-Wang-sm.jpg]

Friday Videos — Greatest Song of all Time Edition

Harvey sends us this visually stunning dance remix of the classic Istanbul:

This is good, but it lacks the magic and deep meaning of the lyrics of the song.

Many years ago Harvey sent me a mix tape (sorry, kids, if you don’t know what that is — no time to explain) entitled “Tunes Harv Digs,” which really needs to be recreated as an iTunes playlist for all the world to enjoy. I rediscovered the tape some years later, when my older daughter was five or six, and she loved it, especially the They Might Be Giants cover of Istanbul.

Here’s the TMBG version, as acted out by the Tiny Toons:

Hannah and I decided that Istanbul is the greatest song of all time. It has yet to be dethroned for me — haven’t checked in with her on it lately.

However, if it has any competition at all, it comes from the epic Birdhouse in Your Soul:

If you want more TMBG, check out their TED talk:

Their last song, Alphabet of Nations, is pretty awesome.