I’m sure everyone now knows that the House of Representatives narrowly approved the health care reform bill over the weekend. Attention will now turn to the Senate, which will soon vote on its own version of the bill. If it passes, the two bills will be reconciled into a unified version to be signed by the president — who will almost certainly sign anything that makes its way through to him. I won’t get into the debate about the benefits and costs of the current bill(s), or the question of whether the approaches they suggest represent an optimal (or even desirable) way to reform health care. My real beef with the current debate is that everyone involved sets the bar far too low and assumes that whatever system we end up with will have to involve a series of win-lose scenarios, the hallmark of any zero-sum game.
These win-lose scenarios are based on conventional assumptions which, to their credit, have been correct throughout history. We assume, for example, that the cost of medical care will continue to rise. And we assume that future medical resources will be inadequate to meet all needs. And we therefore assume that someone (a person, a set of persons, or the market on its own) will ultimately make the tough decisions about who gets care and who doesn’t.
I listed those assumptions in the order that I believe they are likely to become invalid. As I wrote on this subject not too long ago:
I would guess that fewer than 20% of the problems that doctors routinely encounter account for 80% (or more) of the time they spend with patients, and that many of these would be good candidates for automating. Offloading 80% of the tasks doctors currently perform would be the equivalent of having five times as many doctors on hand to apply their expertise to the treatment and prevention of illness. The total amount of medical care available would increase dramatically. And, since the vast majority of this care would be automated, the total cost of care would plummet.
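To make the arithmetic in that passage explicit, here is a back-of-the-envelope sketch; the 80% figure is the guess from the quote above, not a measured statistic:

```python
# Back-of-the-envelope capacity arithmetic for the 80/20 guess above.
# The input number is an illustrative assumption, not measured data.

automatable_fraction = 0.80  # guessed share of doctor time spent on routine, automatable problems

# If that routine work is offloaded to machines, every doctor's hours are
# freed up for the remaining tasks that still require human expertise.
capacity_multiplier = 1 / (1 - automatable_fraction)

print(f"Effective expert capacity: {capacity_multiplier:.0f}x")  # prints 5x, matching the estimate
```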
Most people aren’t comfortable with the idea of automated health care. It sounds risky and dehumanizing. Of course, looking back, online banking sounded pretty risky and dehumanizing when it was first introduced. But now that many of us have been doing it for a while, we understand the difference between the simple tasks that are easily handled in the interactive environment and the more complex ones that require talking to a human being or (as a last resort) actually showing up in person at the bank. Obviously, medical care is much more complicated than banking. But information technology is much better at handling complex tasks than it was in the recent past, and in the near future it will be far more so.
A health care reform initiative that I could get excited about would be one that recognizes the incredible potential of technology, especially information technology, to make health care massively more available and less expensive. But solving the health care problem turns out to be only one good reason for pursuing cutting-edge artificial intelligence research directed at automating health care.
In a recent blog post, Michael Anissimov writes about the risks involved in having sentience emerge in an AI system used for the kinds of applications that currently represent the leading edge in practical (narrow) AI research:
An AI that maximizes money for an account, optimizes traffic flow patterns, murders terrorists, and the like, might become a problem when it copies itself onto millions of computers worldwide and starts using fab labs to print out autonomous robots programmed by it. It only did this because of what you told it to do — whatever that might be. It can do that better when it has millions of copies of itself on every computer within reach. It might even decide to just hold off on the fab labs and develop full-blown molecular nanotechnology based on data sets it gains by hacking into university computers, or physics and chemistry textbooks alone. After all, an AI recently built by Cornell University researchers has already independently rediscovered the laws of physics just by watching a pendulum swing. By the time roughly human-level self-improving AIs are created, likely a decade or more from now, the infrastructure of the physical world will be even more intimately connected with the Internet, so the new baby will have plenty of options to get its goals done, and — best of all — it will be unkillable.
Once an AI with a simplistic goal system surpasses the capability of humans around it, all bets are off. It will no longer have any reason to listen to them unless they already programmed it to in a foolproof way, a way where it wants to listen to them because it needs to in order to fulfill its utility function.
A more basic example is an artificial intelligence that has been programmed to build certain structures on the moon and given no other instructions. From its point of view, finding better and better ways to build more and more of these moon constructs is good, and all other considerations are irrelevant. All of which means that this machine will build its moon towers right on top of the crushed bodies of the lunar colonists and never give the matter a second thought.
Yes, it’s the Mad Robot scenario. But you see, the robot isn’t really mad, although as Michael points out, it might well be — from our point of view — a complete psychopath. The robot is working in a completely sane, logical, and consistent way on a very simple set of goals, without the benefit of our moral sensibilities. These sensibilities, it turns out, are hugely complex, and we won’t necessarily find an adequate way to encode them before the emergence of the first human-level AI.
If we can’t achieve a truly moral AI that seeks to build towers on the moon, we want to make sure that we at least create an AI that seeks to build towers on the moon without killing anybody. That is to say, “without killing anybody” (along with “without stealing non-moon-tower-designated funds” and possibly something like “without ripping other planets apart in order to get more moon-tower materials,” to give just a couple of examples) actually becomes part of the goal. Building moon towers alone is then not enough. The AI’s utility function will be satisfied if it creates a moon tower a certain way, but not if it goes about it the wrong way.
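To see why the “withouts” change the optimization itself rather than just the rhetoric, here is a deliberately toy sketch in Python; the plans, fields, and scoring are hypothetical illustrations, not a real goal system:

```python
# Toy contrast between a bare goal and a goal with "withouts" built in.
# Entirely hypothetical: real goal-system design is vastly harder than
# scoring a dictionary of outcomes.

def naive_utility(plan):
    # The Mad Robot objective: towers are all that counts.
    return plan["towers_built"]

def constrained_utility(plan):
    # The "withouts" are part of the goal itself: a plan that builds
    # towers the wrong way satisfies nothing at all.
    violated = plan["colonists_killed"] > 0 or plan["funds_stolen"] > 0
    return 0 if violated else plan["towers_built"]

ruthless = {"towers_built": 100, "colonists_killed": 12, "funds_stolen": 0}
careful = {"towers_built": 60, "colonists_killed": 0, "funds_stolen": 0}

print(naive_utility(ruthless), naive_utility(careful))              # 100 60: the ruthless plan wins
print(constrained_utility(ruthless), constrained_utility(careful))  # 0 60: the careful plan wins
```

The toy shows only that the constrained function makes the careful plan the winning one; actually enumerating and encoding the real “withouts” is exactly the hugely complex moral-encoding problem described above.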
So we want anti-terrorist AI systems that take out sleeper cells but understand that it’s no good if they take out Grandma’s bridge club in the process; likewise, we want trading systems that will stop at almost nothing when it comes to maximizing profits, with that “almost” including things like, say, rendering the US dollar completely worthless.
However, no matter how carefully we go about creating these complex goal systems, there is always the possibility of unintended consequences. So we have to be extremely careful.
All of which brings us back to the subject of automating health care. If we start dedicating cutting-edge AI technology to increasing human health and making more and more kinds of treatment easier and more widely available, we will not only achieve the benefits I described earlier, but we will also face the possibility that this health care AI achieves sentience and “goes mad.”
So then we would have a greater-than-human intelligence working non-stop to make itself better and better at making us better and better. Okay, granted, there’s still an awful lot that can go wrong with that scenario. But if we were to program enough “withouts” into that system, I’d say there’s a pretty good upside there, too. I’d feel a lot better about a sentient AI emerging with this set of goals than with any of the goal sets currently being pursued by real-world artificial intelligence applications.