An Idea for Health Care: the Mad Robot Scenario

November 9, 2009

I’m sure everyone now knows that the House of Representatives narrowly approved the health care reform bill over the weekend. Attention will now turn to the Senate, which will soon vote on its own version of the bill. If that passes, the two bills will be reconciled into a unified version to be signed by the president — who will almost certainly sign anything that makes its way through to him. Without getting into the debate over the benefits and costs of the current bill(s), or whether the approaches they take represent an optimal (or even desirable) way to reform health care, my real beef with the current debate is that everyone involved sets the bar far too low and assumes that whatever system we end up with will involve a series of win-lose scenarios, the hallmark of any zero-sum game.

These win-lose scenarios are based on conventional assumptions which, to their credit, have been correct throughout history. We assume, for example, that the cost of medical care will continue to rise. And we assume that future medical resources will be inadequate to meet all needs. And we therefore assume that someone (a person, a set of persons, or the market on its own) will ultimately make the tough decisions about who gets care and who doesn’t.

I listed those assumptions in the order that I believe they are likely to become invalid. As I wrote on this subject not too long ago:

I would guess that fewer than 20% of the problems that doctors routinely encounter account for 80% (or more) of the time they spend with patients, and that many of these would be good candidates for automating. Offloading 80% of the tasks doctors currently perform would be the equivalent of having five times as many doctors on hand to apply their expertise to the treatment and prevention of illness. The total amount of medical care available would increase geometrically. And, since the vast majority of this care would be automated, the total cost of care would plummet.
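
To spell out the arithmetic behind that claim (a back-of-the-envelope sketch only; the 80% figure is the guess from the passage above, not a measured statistic): if a fraction f of physician time currently goes to routine, automatable work, then offloading that work multiplies the effective capacity of the remaining human time by 1/(1 - f).

```python
# Back-of-the-envelope sketch; the 0.8 figure is the guess quoted above, not data.

def capacity_multiplier(automatable_fraction: float) -> float:
    """How many 'effective doctors' each doctor becomes once the given
    fraction of routine work is offloaded to automation."""
    return 1.0 / (1.0 - automatable_fraction)

print(capacity_multiplier(0.80))  # 5.0  -- "five times as many doctors"
print(capacity_multiplier(0.90))  # 10.0 -- the payoff grows quickly from there
```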

Most people aren’t comfortable with the idea of automated health care. It sounds risky and dehumanizing. Of course, looking back, online banking sounded pretty risky and dehumanizing when it was first introduced. But now that many of us have been doing it for a while, we understand the difference between the simple tasks that are easily handled in the interactive environment and the more complex ones that require talking to a human being or (as a last resort) actually showing up in person at the bank. Obviously, medical care is much more complicated than banking. But information technology is much better at handling complex tasks than it was even in the recent past, and it will be far better still in the near future.

A health care reform initiative that I could get excited about would be one that recognizes the incredible potential of technology, especially information technology, to make health care massively more available and less expensive. But solving the health care problem turns out to be only one good reason for pursuing cutting-edge artificial intelligence research directed at automating health care.

In a recent blog post, Michael Anissimov writes about the risks involved in having sentience emerge in an AI system used for the kinds of applications that currently represent the leading edge in practical (narrow) AI research:

An AI that maximizes money for an account, optimizes traffic flow patterns, murders terrorists, and the like, might become a problem when it copies itself onto millions of computers worldwide and starts using fab labs to print out autonomous robots programmed by it. It only did this because of what you told it to do — whatever that might be. It can do that better when it has millions of copies of itself on every computer within reach. It might even decide to just hold off on the fab labs and develop full-blown molecular nanotechnology based on data sets it gains by hacking into university computers, or physics and chemistry textbooks alone. After all, an AI recently built by Cornell University researchers has already independently rediscovered the laws of physics just by watching a pendulum swing. By the time roughly human-level self-improving AIs are created, likely a decade or more from now, the infrastructure of the physical world will be even more intimately connected with the Internet, so the new baby will have plenty of options to get its goals done, and — best of all — it will be unkillable.

Once an AI with a simplistic goal system surpasses the capability of humans around it, all bets are off. It will no longer have any reason to listen to them unless they already programmed it to in a fool-proof way, a way where it wants to listen to them because it needs to in order to fulfill its utility function.

A more basic example is an artificial intelligence that has been programmed to build certain structures on the moon and given no other instructions. So from its point of view, finding better and better ways to build more and more of these moon constructs is good, and all other considerations are irrelevant. All of which means that this machine will build its moon-towers right on top of the crushed bodies of the lunar colonists and never give the matter a second thought.

Yes, it’s the Mad Robot scenario. But you see, the robot isn’t really mad, although as Michael points out, it might well be — from our point of view — a complete psychopath. The robot is working in a completely sane, logical, and consistent way on a very simple set of goals, without the benefit of our moral sensibilities. Those sensibilities, it turns out, are hugely complex, and we won’t necessarily find an adequate way to encode them before the emergence of the first human-level AI.

If we can’t achieve a truly moral AI that seeks to build towers on the moon, we want to make sure that we at least create an AI that seeks to build towers on the moon without killing anybody. That is to say, “without killing anybody” (along with “without stealing non-moon-tower-designated funds” and possibly something like “without ripping other planets apart in order to get more moon-tower materials,” to give just a couple of examples) actually becomes part of the goal. Building moon towers alone is then not enough. The AI’s utility function will be satisfied if it creates a moon tower the right way, but not if it goes about it the wrong way.
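
To make that distinction concrete, here is a toy sketch (the function names and numbers are invented purely for illustration, not drawn from any real system): a simplistic goal system counts only towers, while a constrained one folds “without killing anybody” into the utility function itself, so that building towers the wrong way scores worse than building none at all.

```python
# Toy illustration of a simplistic vs. a constrained goal system.
# All names and numbers are invented for illustration.

def naive_utility(towers_built: int, colonists_killed: int) -> float:
    # The "mad robot" goal: only towers count; casualties are invisible to it.
    return float(towers_built)

def constrained_utility(towers_built: int, colonists_killed: int) -> float:
    # "Without killing anybody" is part of the goal itself: any casualty
    # makes the outcome worse than building nothing at all.
    if colonists_killed > 0:
        return float("-inf")
    return float(towers_built)

# One hundred towers built over the crushed bodies of the lunar colonists:
print(naive_utility(100, 12))        # 100.0 -- a triumph, as far as the naive AI can tell
print(constrained_utility(100, 12))  # -inf  -- unacceptable under the constrained goal
```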

So we want anti-terrorist AI systems that take out sleeper cells but understand that it’s no good if they take out Grandma’s bridge club in the process; likewise, we want trading systems that will stop at almost nothing when it comes to maximizing profits, with that “almost” including things like, say, rendering the US dollar completely worthless.

However, no matter how carefully we go about creating these complex goal systems, there is always the possibility of unintended consequences. So we have to be extremely careful.

All of which brings us back to the subject of automating health care. If we start dedicating cutting-edge AI technology to increasing human health and making more and more kinds of treatment easier and more widely available, we will not only achieve the benefits I described earlier, we will also face the possibility that this health care AI achieves sentience and “goes mad.”

So then we would have a greater-than-human intelligence working non-stop to make itself better and better at making us better and better. Okay, granted, there’s still an awful lot that can go wrong with that scenario. But if we were to program enough “withouts” into that system, I’d say there’s a pretty good upside there, too. I’d feel a lot better about a sentient AI emerging with this set of goals than with any of the goal sets currently being pursued by real-world artificial intelligence applications.

  • http://protoposthuman.blogspot.com Proto

    You’d have to program very careful definitions of “better” for this not to go horribly wrong.

    A runaway healthcare AI could decide to prevent people from leaving their homes so as not to risk injury or infection. It would probably also completely control our intake of food and our use of various chemical products. If it makes a mistake, it could end up killing all of us. If it’s perfect, it would totally wreck our economy.

    And let’s not even think about what happens if a healthcare AI is also programmed to maintain our mental health.

  • Harvey

    Maybe the youngsters would take the shots better from one of the Teletubbies (a robot Teletubby). Or maybe that would ruin the show for them (which, I guess, wouldn’t be such a bad thing). You know, as for me, I think I’d rather have a robot (not a Teletubby, though) giving me the finger-in-the-butt exam.

  • http://blog.speculist.com Phil Bowermaster

    I remember in the movie Same Time Next Year, Alan Alda is talking about getting a prostate exam from a female doctor.

    The doctor asks, does this make you uncomfortable?

    He answers that, yes, it does.

    The doctor then asks, is it because I’m a woman?

    To which Alda replies: no, I get uncomfortable when anybody does that to me.

  • http://wheretheresawilliam.blogspot.com Will Brown

    You know, I quite like Anissimov’s website, and you are quite right to sound the occasional cautionary note Phil, but why, he laments, do these cautionary myths always leap straight to the most unlikely scenarios? Sticking with the medical AI example, where from/why/how would what would essentially be a Smart Stethoscope gain control of a scalpel (or alternative delivery mechanism of your fevered imagination’s preference – some I can suggest might actually seem somewhat probable-sounding) and by what reasonable (there’s that word again, Phil :) ) process would the device gain competence using the undesigned-for tool? Not least in importance (and no bad fictional plot devices allowed), why would anyone sit still long enough to become “victimised” by the machine-run-amok in the first and last fargin’ place!?!

    Frankly, I don’t grok the fear that somehow a diagnostic tool with an auto-pilot function could “wake up” spontaneously and transform into Dr. Moreau. An AI is a manufactured intellect that functions within designed parameters. Once circumstance exceeds those limits, outside (presumably human – or are we really suggesting outsourcing decision-making authority to some entirely hypothetical entity having no as-yet discovered physical existence?) (see my “no bad fic” pre-condition above) input/authorisation would necessarily be required before any action could be determined. If, otoh, the argument shifts to an AGI intellect (and that would be a conceptual shift – the two are not synonymous), why would such an independent intellect ever consent to cooperate within such a constraining environment at all (never mind the basic market-driven query regarding the financial expenditure necessary to waste such a resource on so limited an application in the first place)? Really, I’m curious, why is it ever considered to be the least bit plausible that people smart enough to build such a technology at all are yet so stoopid as to overlook such a basic threat potential during the R&D trials process?

    Let me reiterate, I think this and related queries are valuable and even necessary. I suggest that they would be better structured as platforms for potential mitigation practices (it seeming unrealistic to argue for functional technology to rectify potential failings of as-yet still only potential other technology) to resolve or redress foreseeable problems associated with a technology advancement. That might lead to fruitful lines of future development, but would also require some measure of rigour be applied to the threat scenario being suggested.

    And Michael could let his trousers dry out. :)

    Oh, and Harvey, why not forego the finger entirely? How about a smoothie with 100k+ micron-sized ’bots to attach to and sample the architecture of our intestinal tracts instead? Even if it were necessary to wear a telemetry belt or girdle for a couple/three days and nights to receive and record their output, the subsequent data ought to be a good deal more evocative than the present arrangement. Think of it as giving the finger to the butt exam. You could follow that up with another smoothie containing the drugs otherwise delivered by shots too.

    I will take a hard stand and prophesy that Teletubby-style robots would be the death of robotics amongst humans.

  • http://blog.speculist.com Phil Bowermaster

    Will –

    Sometimes Michael (or Eliezer Yudkowsky) will toss off the example of the AI gone mad that tiles over the entire solar system with smiley faces. This sounds crazy because such behavior seems more like something a spam bot would do than an entity possessed of true intelligence. But the point here is that an intelligence built up from whole cloth, which has at best some subset of our values, might very well tend to behave in ways that seem crazy…to us. Your thoughts on whether an AGI would be willing to do one thing or another beg the question by assuming an AGI with some of Will Brown’s (or Phil Bowermaster’s, for that matter) disdain for the idea of just being used to serve someone else’s purpose. Intelligence does not equal that disdain, nor does it necessarily require it. That disdain is part and parcel of our intelligence only because we evolved this way.

    A very simple diagnostic tool seems an unlikely candidate for emergent sentience, but a self-teaching, skills-acquiring diagnostic and treatment tool which is programmed to improve its own capabilities in support of its goals might surprise you. (I mean, come on — you’ve heard of doctors thinking they’re God? Here we’re just talking about one that decides it’s human, and starts acting accordingly.)

    >>Really, I’m curious, why is it ever considered to be the least bit plausible that people smart enough to build such a technology at all are yet so stoopid as to overlook such a basic threat potential during the R&D trials process?

    I see no evidence that developers of trading systems (arguably the most advanced narrow AIs currently being developed) are giving this sort of risk any consideration whatsoever. Yes, you might say, but surely we’re many generations away from one of those systems actually becoming a danger? We probably are, but those generations tend to pass pretty quickly. Smiley-face-tiling AIs aren’t the only intelligent beings who suffer from myopia.

  • Sally Morem

    I like the concept of the mad, but good, robot. Probably wouldn’t make it in Hollywood screenwriting circles though. Not enough car chases or spaceship chases, darn it.

    It might make for a fun SF short story, though.

  • http://wheretheresawilliam.blogspot.com Will Brown

    So, having now calmed down a bit, I confess I got a bit carried away in my generalized frustration.

    What I find most annoying is the tendency to conflate the capabilities of an AI with those of an AGI. The distinction as I understand it (per Josh Hall amongst others) is that an AI is designed to operate at near-to-greater-than human capability within a relatively narrow range of expertise. This is often characterised as idiot savant, but an AI is anything but an idiot – within its range of capability it’s near-as-nevermind Einstein re-born. Outside that range of designed capability, an AI cannot have or develop interest or capability without some external agency extending the AI’s intellectual infrastructure. An AGI otoh is engineered to be much more broadly capable and to exercise self-directed improvement in a largely self-guided fashion. Thus an AGI by definition must have the inherent capability to expand its intellectual infrastructure (not only limited to memory capacity) as its development requires expansion.

    From this very simplified example I hope it is plain that an AI lacks the physical ability to even recognise some activity not designed into it as being applicable to itself, let alone somehow auto-magically develop mastery of whatever the mechanics and sourcing of smiley-faced tiles might include. An AGI quite possibly could, but that argues for a very strong inhibition against any such expansion of capability without consultation and/or approval of some other agency (itself not necessarily human, you understand). The fact that there is potential for a problem does not lead to suspension of development pending mitigation of that potential. The appropriate response is to engineer multiple and reinforcing error-checking and error-avoidance mechanisms into the proto-AGI instead.

    I think the classic example of an AI from sci/fi is the “autodoc” (thus my allusion to a stethoscope) or the spaceport or harbor approach and landing/docking control. Although potentially complex, these are narrow applications of an intellect, and I argue that such an engineered entity would not be provided with the potential capability to deviate away from the intended application, if only as a fundamental safety precaution. If that be true, then by what mechanism would an AI “go mad” that didn’t result in either repetitious operation or shut-down? In either case, interacting entities (human or otherwise) would recognise fault – or at least error – and take the appropriate precautionary response allocated to their condition within the emergency response protocol all such operations already employ using existing tech and human operators.

    Obviously, an AGI necessarily requires the ability to grow, but such a more fundamentally complex entity wouldn’t long agree to confine itself to the closely restricted environment of an AI any more than either of us would long agree to remain students in the third grade. Like both of us, at some point it would simply refuse to cooperate further until circumstance changed to some more acceptable condition. The fear that it might “go crazy” and tile over the immediate galactic environs begs a lot of questions (and wouldn’t refusal to acknowledge others be a more likely “crazy” response?), though I do agree that a mechanism(s) for frustration relief will definitely have to be considered during such an entity’s development and early growth cycle. I trust that your own kid’s potential to come after you with a carving knife some particularly frustrating 3:00am doesn’t keep you sleepless at night either, does it?

    My point is that conflating the capabilities of one class of intellect onto the other doesn’t help to advance understanding of either and potentially works against development at all. I do apologise if my tone got a bit overbearing. Oh, and Phil, I bet you all of those trading programs you mention have an un-overridable prohibition against initiating trading activity without explicit authorisation from some stipulated external source (which could take the form of a pre-calculated protocol covering certain anticipated circumstances). Want to wager whether or not these proto-AIs also come to have a limit placed upon the extent and weight assigned to market-related search functions prior to their being put into actual service? I submit these are explicit examples of the type of risk-aversion protections I mentioned. The actual developers of this technology aren’t the people who are ultimately paying for it all. They (the buyers) will have to be convinced that their current condition will be safeguarded as well as potentially improved (you like GPS, but would you let it drive your kids to school for you without similarly cautious consideration before buying it?) or the tech will never make trade one outside of a closed-loop simulation. If the guys building these programs actually haven’t paid attention to these concerns yet, they will once the checkbook tells them to.
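
    By way of illustration only (a toy sketch; every name in it is invented and no real trading API is referenced), the sort of un-overridable authorisation gate I have in mind looks something like this: the program can analyse and propose whatever it likes, but no order executes unless the stipulated external source (a human operator, or a pre-calculated protocol covering anticipated circumstances) has signed off.

```python
# Toy sketch of an "un-overridable" authorisation gate for a trading program.
# All names are invented for illustration; no real trading API is referenced.

class AuthorisationRequired(Exception):
    """Raised when a trade is attempted without external sign-off."""

class TradingAgent:
    def __init__(self, authoriser):
        # `authoriser` is the stipulated external source: a human operator,
        # or a pre-calculated protocol covering anticipated circumstances.
        self._authoriser = authoriser

    def propose_trade(self, order):
        # The agent may analyse and propose whatever it likes...
        if not self._authoriser.approves(order):
            # ...but execution is gated; there is no code path around this check.
            raise AuthorisationRequired(f"No external approval for order: {order}")
        return self._execute(order)

    def _execute(self, order):
        return f"executed {order}"

class ConservativeProtocol:
    """A pre-calculated approval rule: only small, whitelisted orders pass."""
    def __init__(self, whitelist, max_size):
        self._whitelist = set(whitelist)
        self._max_size = max_size

    def approves(self, order):
        symbol, size = order
        return symbol in self._whitelist and size <= self._max_size

agent = TradingAgent(ConservativeProtocol(whitelist={"XYZ"}, max_size=1000))
print(agent.propose_trade(("XYZ", 500)))   # executed ('XYZ', 500)
# agent.propose_trade(("ABC", 50000))      # would raise AuthorisationRequired
```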

  • POUNCER

    All this is why I’d rather work toward giving human beings computer-like off-line, backup-able memory; math processing skills; modeling capability; etc., as opposed to trying to give computers human-like goal-directed, solution-seeking “intelligence”.

    No doubt the end will converge to the middle, but still.

  • Sally Morem

    I agree with Pouncer. I also think augmenting already existing human brains will prove to be much easier than creating AIs from scratch. So upgraded humans are far likelier to happen earlier than true AIs.