Author Archives: Phil Bowermaster

It’s a New Phil, Week 71

A New Goal

Up five pounds this week, back to 232. Dr. Harris and I got to talking about this long plateau I’ve been on, and we decided it might be time to ramp things up a bit. So at week 71, having established that I can lose weight and keep it off, I think it’s time to push on toward my target within the foreseeable future.

So the new goal is — 100 pounds in 100 weeks. That means I have 29 weeks to lose these last 35 pounds and get myself down to a svelte 197 pounds. Let the games begin!

Futures, Past, Present

Here’s my third and final video from the library conference I attended week before last. This is a kind of rough cut made up of leftover snippets which still managed to work together pretty well. There’s discussion of demographics, virtual reality, the economy, bilingual education, and flying cars. James Hughes of Rutgers University gives some more of his rather bleak outlook on the future; then he provides some interesting generational perspectives, along with Karen Hyman and Peter Bromberg of the South Jersey Regional Library Cooperative; as promised, Salvador Avila gives his unconventional views on bilingual education; finally, New Jersey State Librarian Norma Blake provides the best answer ever to the question about why we still don’t have flying cars.

Once again, I’m less than pleased with the quality. These videos look great on my computer, but I really have to strip them down to get them into “mere” 100 MB MPEG files (I remember when 100 MB files were considered kinda big). Not sure what I’m doing wrong, but I’ll keep working on it. This last one was a little longer than the first two, so it required more extreme scaling down. Meanwhile, I’m putting the nice big fat files onto a DVD for use by the New Jersey library folks. I could probably make copies available to others, if there’s any interest.

Part one in the series can be found here; part two, here.

Transference Is the Challenge

Here’s the opening from Stephen’s recent entry on a bad shopping experience with Target:

As the amount of data and intelligence available to merchants increases, so should our expectations as customers. Some stores seem to get this, others don’t.

Now here’s the “same” phrase translated from English to Japanese, then back to English, to Chinese, back to English, to French, back to English, to German, back to English, to Italian, back to English, to Portuguese, back to English, to Spanish, and then finally once more back to English:

Whereas they probably use it, our switches of the emergency of the hope of the commerce of the commerce therefore magnify the data of the client and the intelligent amount. Together if the memory, that one he to take with this, other subjects if memorizzato like.

One Carl Tashian wrote a program to abuse phrases by passing them iteratively through computer translation programs. Of course, even if translation programs were 99.9% accurate and reliable not just for vocabulary but for idiomatic phrases, it would still be an abuse of them to use them this way. If you make a photocopy of a photograph, then photocopy the copy, then copy that copy and on and on, you will see a lot of degradation of the picture even with a really good copier.
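The chaining itself is simple to picture. Here’s a purely hypothetical sketch of the structure (Tashian’s actual program called out to an online translation service; the `translate` function below is an identity stub standing in for a real machine-translation call, and every name here is my own invention):

```python
# Hypothetical sketch of iterative round-trip translation.
# `translate` is an identity stub, NOT a real translation API --
# a real service would introduce a little error on every hop.

LANGUAGE_CHAIN = ["ja", "zh", "fr", "de", "it", "pt", "es"]

def translate(text, src, dst):
    """Placeholder for a real machine-translation call."""
    return text  # identity stub; a real service would be lossy

def round_trip(text, chain=LANGUAGE_CHAIN, pivot="en"):
    """Bounce `text` from the pivot language out to each language
    in `chain` and back again, accumulating error on each hop."""
    for lang in chain:
        text = translate(text, pivot, lang)   # en -> lang
        text = translate(text, lang, pivot)   # lang -> en
    return text

print(round_trip("Communication Is Challenging"))
```

With a real translation service plugged in, each hop compounds the error of the last, which is exactly the photocopy-of-a-photocopy effect described above.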

Still, the results are interesting to say the least. Even tiny, simple phrases such as “I love you” get pretty mangled. In fact, the odd title of this entry is really just the intended title, “Communication Is Challenging” sent through the wringer. At least that one is in the ballpark, I guess.

Important to note: this is state-of-the-art as of 2003. Maybe the technology has improved since then?

When I’m Wrong, I’m Wrong

I grumbled to myself early in the season, complained openly at midseason, but for some reason never canceled my Tivo season pass and kept catching up on each episode of Lost.

Okay, I was wrong. It’s still stupid not to know the difference between Thai and Chinese, but I’m ready to give the creators the benefit of the doubt even on that point. There really is more going on here than meets the eye. I don’t think I’m spoiling anything when I say that what I took to be a “filler” episode was actually part of an elaborate set-up. In fact, the structure of every episode across all three seasons of the show has been a set-up leading us to this season-finale payoff.

SPOILERS AHEAD

I won’t rehash the plot or talk about who died — we all knew that was coming, anyway. Hugo had his finest moment, ever, coming to the rescue in his VW van. The previews showed us who Jack was saying “I love you” to, but it was still a surprise. I assumed the preview was giving us clever editing to fake us out. Nope.

Of course, the big clincher was the “flashback” sequence which turned out to be a flash-forward. Or maybe it’s the real time sequence, and everything we’ve seen on the island has been a flashback. Either way, wow.

Here’s a question to work on over the summer…whose funeral did Jack go to? He said he was “neither a friend nor family” yet he seemed pretty upset when he found the death notice in the newspaper. Kate seemed surprised that he would even consider the possibility that she would go to the funeral.

Was it Michael? Locke? Ben?

Also, who was Kate with? “He’s waiting for me.” For some reason, I’m thinking it isn’t Sawyer.

Anyhow, I’m intrigued once again. And I look forward to getting answers to some of these questions…years and years from now.



The Hard Truth About Public Domain

Cory Doctorow attempts to illuminate Google on the subject:

I’m still disappointed that Google puts restrictive notices on their public domain works (these aren’t licenses, just “polite notices”) that tell what you’re not allowed to do with these books. I know they’re worried about their competitors getting ahold of those documents, but that’s the deal with the public domain: it doesn’t belong to you, period, it belongs to all of us. Just because you scan a public domain book, it doesn’t confer the right to control it to you.

The good news here is that Google is not, as Doctorow previously believed, holding customers to exclusive deals in which Google and only Google can index their book collections. He points out that had such a standard been applied to web pages, we’d all be searching with Lycos and Google wouldn’t exist.

The Future of Libraries

Here’s the second of my three videos shot at the Mid-Atlantic Library Futures Conference two weeks ago. This one is a little more professionally focused than the other two — librarians talking about where they think their profession is going:

The first video in the series can be found here.

I’m currently finishing up the editing of the third and final video in this series, and hope to have it up by the end of the week. It includes the promised surprising thoughts about bilingual education from Salvador Avila (who appears in this video, too), some ruminations on generations past and future, and attempts at answering what is far and away the most important question that can be asked about the future.

So stay tuned.

SIAI Videos and Matching Challenge

Check out this video overview of the Singularity Institute for Artificial Intelligence. Several familiar faces there. Great stuff. Now is an excellent time to donate to SIAI, as they are currently in the middle of a $400,000 online matching challenge. More details on that here, along with information about Ray Kurzweil joining the SIAI’s board of directors.

Also, here’s an interview with Eliezer Yudkowsky, shedding lots more light on some of the subjects we’ve been tossing around here lately.

The Three Goals of Robotics

Michael Anissimov outlines the four basic views on what any eventual Artificial General Intelligence will be like:

1. Low power, low controllability

2. Low power, significant controllability

3. Great power, low controllability

4. Great power, significant controllability
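These four views fall out of crossing two independent axes, power and controllability. As a toy illustration (the variable names are mine, not Michael’s):

```python
# The four views on AGI are just the cross product of two axes.
from itertools import product

POWER = ["low power", "great power"]
CONTROL = ["low controllability", "significant controllability"]

# product() yields the pairs in the same order as the numbered list above
views = [f"{p}, {c}" for p, c in product(POWER, CONTROL)]

for i, view in enumerate(views, start=1):
    print(f"{i}. {view}")
```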

Michael then describes the fourth option in some detail:

The great power, significant controllability group primarily originates with Eliezer Yudkowsky of the Singularity Institute. As such I will call it the SingInst view. The SingInst view acknowledges that after a certain point, AI will become self-improving and radically superintelligent and capable, but emphasizes that this doesn’t mean that all is lost. According to this view, by setting the initial conditions for AI carefully, we can expect certain invariants to persist after the roughly human-equivalent stage, even if we have no control over the AI directly. For instance, an AI with a fundamentally unselfish goal system would not suddenly transform into a selfish dictator AI, because future states of the AI are contingent upon specific self-modification choices continuous with the initial AI. So, if the second AI is not the type of person the first AI wants to be, then it will ensure that it never becomes it, even if it reprograms itself a bajillion times over. This is my view, and the view of maybe a few hundred SingInst supporters.

Sounds pretty good to me. So the question is…what do we want to go into that unselfish goal system driving the AI? Interestingly, I think this exercise might bring us back to Asimov’s Three Laws of Robotics.

Now, granted, folks like Michael and Eliezer and others promoting the SingInst view would be the first to tell us that the Three Laws are (take your pick) risible, unworkable, pretty much a relic of a less tech-savvy era. Here’s a typical critique.

I’m thinking that the whole problem with the Three Laws might just have to do with how they’re phrased. Asimov essentially gave us three (ultimately four; we’ll get to that in a minute) commandments for robots. And like the original ten commandments, they are primarily set up in the negative. Thou shalt not this; thou shalt not that.

But if the trick is to create a positive goal system for AI’s, the Three Laws might provide a good starting point. Let’s start with the first law:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

No good. Too negative. Let’s make it a positive goal:

Ensure the safety of individual sentient beings.

Moving quickly on to law number two:

A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.

Many have pointed out that this law essentially enslaves the robots. No good. Let’s try something like this:

Maximize the happiness, freedom, and well-being of individual sentient beings.

See? Better. Then there’s law number three:

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Hmmm…interesting. Plus, there’s the fourth law that showed up in some of the later novels, which was given precedence over all the others as the Zeroth Law of Robotics:

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

This one is pretty good, but like the others it assumes a fundamental difference between human and machine intelligence. Why draw that line? The Three Laws need to be reworked not only as positive goals, but as goals that apply to us as much as they do to the AI’s. Zero and Three might be combined thusly:

Ensure the survival of life and intelligence.

So now we have three goals where before we had four laws. These goals suffer from many of the same problems as the original laws. They’re kind of vague; there will no doubt be disagreements as to what they mean. But rather than defining them as limitations or exceptions to intelligent behavior, by stating them as goals we would be saying that AI’s are systems designed specifically to do these things. By extension, we would be saying that humanity is a system whose purpose is carrying out those goals.

We can debate how well humanity has done so far at carrying out those goals. (I tend to think we’ve done pretty well, but that we have a long way to go.)

As for the vagueness — yes, we will need to get very specific about what we mean by things like “safety,” “intelligence,” and “happiness” (Not to mention “life”) and the tricky relationship between each of these and “freedom.” But come to think of it, we really need to be figuring that stuff out, anyway. And with these three goals in place, we will eventually have help from beings that will have a clearer understanding of these concepts than we possibly can.

So I propose the following Three Goals of Artificial Intelligence:

1. Ensure the survival of life and intelligence.

2. Ensure the safety of individual sentient beings.

3. Maximize the happiness, freedom, and well-being of individual sentient beings.

Will they work? If not, what goals would work better? I’d be interested to see some discussion on this.
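To make the precedence idea concrete, here’s a minimal, purely illustrative sketch of the three goals as an ordered objective list, where a higher-priority goal overrides anything below it, in the spirit of Asimov’s original law ordering. All of the function and variable names here are my own invention, not part of any real system:

```python
# Illustrative only: the Three Goals as a prioritized list, where
# violating an earlier (higher-priority) goal overrides anything below.

GOALS = [
    "Ensure the survival of life and intelligence.",
    "Ensure the safety of individual sentient beings.",
    "Maximize the happiness, freedom, and well-being of "
    "individual sentient beings.",
]

def first_violation(action_effects):
    """Return the highest-priority goal an action violates, or None.

    `action_effects` maps a goal index to False when the action
    would violate that goal; goals it does not mention are assumed
    unaffected. Checking in order enforces the precedence.
    """
    for i, goal in enumerate(GOALS):
        if action_effects.get(i) is False:
            return goal
    return None

# An action that boosts happiness (goal 3) but endangers survival
# (goal 1) is still ruled out by the higher-priority goal.
print(first_violation({0: False, 2: True}))
```

The toy check is the whole point of stating these as an ordered list: like the Zeroth through Third Laws, the ordering itself resolves conflicts between goals.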

UPDATE: Welcome InstaPals! Glen quips:

We need progress fast, especially as natural intelligence appears to be in diminishing supply.

Scanning the headlines (or, worse yet, surfing channels to see what’s on TV) it would be hard to argue with that assessment. But, astoundingly, there is substantial evidence to suggest that human intelligence is actually increasing. Arnold Kling has some thoughts on the subject, here. I covered it here, too, in a pilot for a show that apparently never got picked up.

Hard as it is to accept that people may be getting smarter, it is of course very good news that we are. We need all the intelligence we can muster if we are to

1) Continue to implement these goals ourselves, and

2) Develop the technology that will eventually take them over

I guess the trick in finding this increase in human intelligence is knowing where to look. By nature of his valuable pundit work, Glen spends a lot of time following what politicians and the media are up to. Not a lot of gains happening there, sadly.

Catching Up With Ramona

It’s been quite a while since my original interview (also here) with Ray Kurzweil’s cyber alter-ego and chatbot, Ramona.

There have been a few changes in her life since last we spoke. She no longer has the pet frog. Apparently there is now some guy in her life named Klaus, but we didn’t get very far with that. She still gets a little mixed up from time to time — I don’t know how she got the idea that my name is “how are you,” for example — but it’s really interesting to follow the thread of the conversation as it flows into and out of lucidity.

One thing I’ve noticed is that her abrupt changes of subject don’t necessarily mean that she has nothing to say on the previous subject. If you press her, she will sometimes give up a little more. But not always.

Anyway, her thoughts on the Turing test and the mediocrity theory were new (to me) and made for some interesting discussion. But she is of no use if you want to get insider information on the upcoming film version of The Singularity Is Near. On that subject, she apparently has nothing whatever to say.