Transmasculinity

Nope, I didn’t make that word up. A quick Google search reveals 156 pages which reference it. I don’t know if they mean the same thing I do, however.

When I use the term, I’m thinking of a characteristic that might apply to transhumans. So whereas humans have masculine and/or feminine qualities, transhumans may possess transmasculine and/or transfeminine qualities. Plus there are likely to be other options that we can’t quite get our heads around yet, seeing as we’re physiologically tied to a dichotomy which allows us to reproduce and which has played into how we organize society at the most fundamental level — and which has therefore been significantly reinforced by our own societal structures over the past many thousands of years.

But as humanity undergoes significant changes, how much of all that will change, and how rapidly? A plausible future is one in which human beings, or rather transhuman / posthuman intelligences, resident in a silicon or subsequent substrate and largely (or totally) freed from the limits of the previous substrate — including the template of human physiology — switch genders as easily as we change clothes. Or it might be a better analogy to say that they will switch genders as easily as we change hats, since most of us consider hats to be pretty optional and often choose not to wear one at all.

Now that’s a pretty alien world for me to try to imagine. But we already have hints of it in our current substrate. Transvestites and transsexuals both represent attempts by human beings at our current level of development to define gender, or at least some aspects thereof, on their own terms. But these early forays into the gender-optional lifestyle will be seen as crude and primitive by gender-switching intelligences living in what we would think of as virtual, electronic worlds. And I think it’s fair to guess that the increased ease with which that kind of change can be made will only make it a much more popular option than it is now. Consider this recent news story:

Sam and Kat met in the virtual world Second Life. And although they shared all kinds of intimacies in Second Life, the real people have never laid eyes on each other.

That didn’t seem to matter to Sam. He fell pretty hard for his avatar sweetie. They bonded intellectually, emotionally, and yes, thanks to Second Life animations, even physically.

Here’s where it gets complicated. Unlike his avatar, which is female, in real life, Sam is a man. A married man. And the person behind the blonde, curvaceous Kat? Married. And, quite possibly, a man, too.

We’re still in the very early days of this sort of thing. Second Life can be a compelling world for many of its inhabitants, but they can’t really live there the way we live in the real world. Not yet, anyway. Some would argue that the residents can’t achieve the same level of identification with their avatars as they have with their “real” selves resident in their real bodies. That part I’m less sure of. Fundamentally, our lives take place in our brains. We can feed our brains with real experiences in the form of sensory input from our environment and the people around us, or we can feed them with wholly imaginary input, or we can compromise between the two — most of us do this — or we can find an alternative, which is what people who live some portion of their lives in places like Second Life have done.

As human beings from this substrate move into the next substrate, questions will arise as to the meaning and relevance of terms that are hugely significant in our present-day lives. What do the words “male,” “female,” “heterosexual,” “homosexual,” etc. mean to an intelligence which can experience a good deal of its life without reference to a physical body? Or that no longer even has a physical body? And what will those terms mean to a subsequent generation which originates in the new substrate, beings that don’t necessarily start out with gender as a fundamental part of their identity?

I think the differences between those two generations will be enormous. For example, if I ever move into the new substrate, I expect to pretty much remain a guy. From where I sit today, I can’t imagine wanting to switch to a female identity. Of course, even today in Second Life, I don’t change much about myself.

Phil Speculaas is pretty much just an electronic version of Phil Bowermaster. I didn’t give myself wings or re-make myself as a translucent plasma being or any of that kind of stuff — much less make myself into a woman. And, yes, I would consider the third option a much more significant change. But that’s me. Old substrate. The two guys(?) in the story above are a little more flexible than I am on some of these issues, but maybe not as flexible as they would like to think. I don’t know any of the facts of the case, so I cheerfully admit that the following is purely speculative, and I will gladly stand corrected should contrary facts emerge (or if they already have and I haven’t seen them).

It’s not hard to imagine a guy taking on a female persona in Second Life because he thinks girl-on-girl action is kinda hot…so long as the girl with whom he’s having said action is actually a girl. Here we could have an example of two guys caught in a rather absurd trap, where they don’t mind being virtual lesbians but might have a HUGE problem with being virtual gay guys. Again, I don’t know that those are the facts in this case, but it’s pretty easy to imagine such a scenario.

All of which is an extremely long-winded way of saying that — like it or not — we’re going to carry a good deal of thoughtspace into the new substrate about gender and sex. Part of that will be our notions of masculinity and femininity. A fascinating discussion on these topics has ensued in the comments to Stephen’s recent review of The Dangerous Book for Boys. Stephen explains how a book that encourages boys to engage in activities that naturally tend to be appealing to boys has somehow managed to become controversial for doing so. Reading the review (and having followed some of the extensive coverage on the book over at InstaPundit) I took the naive position that there must be less to this controversy than meets the eye.

The comments proved me wrong. Stephen pointed to a thread on Amazon in which the following delightful assessment surfaced:

What are they talking about ‘de-masculination’ of men. Masculinity is based solely on pushing women down, to prop up some exaggerated sense of self about themselves. Its all bull. Only the weak need illusions like that.

Not to be outdone, some readers of the Speculist responded with some equally strident thoughts, such as this:

When applied to boys, feminization is a nasty word. They are not designed to be feminized. The attempt to do so will create wimps and thugs, and a lack of well adjusted men (strong, stable and capable of appropriate treatment of women).

But hold on: feminization isn’t just bad for boys:

I think “feminization” is bad for girls too. I think it’s just a special form of consumer-ization based on the sterotype that women like to shop; let’s teach girls to shop too. (all the way to games about the being at the mall)

I have to give special props to reader Jaafar, who brought us into the neighborhood where I think these discussions always end up living, namely name-calling:

I posted a positive review of this book, which DID include the key sentence, “Men and women are not interchangable parts.” A few days later, a really nasty commented zinged in, alleging (like one of the posters here) that her DAUGHTER had fallen madly in love with the book etc. etc. and so forth.

Rather than reply with the obvious fact that — if the report was true, that made her daughter a tomboy — I simply deleted the review, and the comment went with it.

Jaafar doesn’t go into the details of what made the comment nasty, unless disagreeing with him is nasty in and of itself — or perhaps airing one’s dirty laundry about having a tomboy DAUGHTER who likes to engage in the activities described in the book is disreputable enough to be called “nasty.” But the distaste with which he tosses out the word tomboy is not an unfamiliar one.

I was born in 1962 and attended public schools in the 1960’s and 1970’s. So depending on whom you ask, I either suffered under the early stages of a dehumanizing PC agenda that was trying to “feminize” me, or I enjoyed the last few halcyon days of a golden era when boys were allowed to be boys. In point of fact, I think the latter is closer to the truth, but let me just point out one important feature of that lost golden age: it could pretty much suck for boys who weren’t terribly interested in boy stuff.

Personally, I hated sports when I was a kid. I read a lot, and very indiscriminately — meaning that I cheerfully read books that my older sister was reading or had just finished, even though many of these were “girls’” books. When I was in the 5th grade, I wanted an Easy Bake oven for Christmas. It’s hard to imagine in the age of Emeril what a stigma there once was around a boy showing an interest in cooking. For these points of divergence with mainstream boyhood, I was rewarded with a label which — unlike tomboy — was never considered complimentary: I was a sissy, later a queer or fag. Some thought it cute for a girl to be a tomboy; nobody ever thought it was cute for a boy to be a sissy.

What I would have really liked when I was a kid was for other people to leave me alone and let me be who I was. But no such luck — and I don’t think the peer pressure to be “masculine” had as much to do with trying to raise a just and noble and manlike society as it did with preventing boys from growing up to be homos. I was always getting crap about how I walked. There was this bizarre belief that boys who “walked funny” had started down a one-way highway that ended in Queersville.

But as we all know, the opposite of crazy is still crazy. And in this case, we get craziness on a much vaster scale. My longed-for world of live-and-let-live never really came about. Instead, rather than persecuting the few boys who don’t have traditional masculine inclinations, we’ve put a system in place that persecutes the vast majority of boys who do have such inclinations. Progress!

The persecution I suffered — actually, I’m uncomfortable with that word, and I don’t want to cheapen it. Some people in the world suffer real persecution. Being mocked on the playground for being a nancy-boy is no cakewalk, but it ain’t persecution, either. The stigmatization that I suffered then, and the oppressive PC nonsense that boys are subjected to today, stem from the same problem — highly negative views of what we mean by “masculinity” and “femininity.” Note that in the above-quoted comments, both masculinity and femininity are held accountable for human brutality. And being feminized is apparently something we should wish on neither boys nor girls.

You see this all the time. Ever heard of testosterone poisoning? Ha ha ha. Ever heard the theory that society is falling apart because it’s becoming feminine? That one is a little less hilarious, I’m afraid. Prager makes some good points, but the idea that there’s something “feminine” about a society throwing reason and justice away in favor of weakness and emotion is about as helpful as that old feminist chestnut about how all heterosexual sex is rape.

I don’t know what the future holds in store for the concepts of masculinity and femininity even in this substrate, much less any subsequent ones. I do believe, however, that the concepts will be more useful for us if we begin to see them as positives rather than stand-ins for the things we hate the most. Using “femininity” to mean weakness and cowardice gets us no further than using “masculinity” to mean vulgarity and brutality. To take a more positive approach, perhaps we could define masculinity as physical courage; a desire to build things; an enthusiasm for solving problems and overcoming challenges. And then maybe we could define femininity as a desire to nurture others; to create beauty; to build community. (And those definitions are probably way off base, but let’s not get hung up on the specifics for just a moment.)

If those are the things that it means to be masculine and feminine, then — irrespective of natural inclinations, which will still have most men going one way and most women going the other — there is nothing to be feared in encouraging either masculine or feminine behavior in either boys or girls. Since the two are complementary, it would seem that the more you have of each, the better off you’re likely to be. Add to that a societal default position of letting people be what they are, absent doing any harm to others (I know — I’m dreaming) and it seems to me that we have the beginnings of a recipe for both transmasculinity and transfemininity, two different terms which might end up referring to the exact same thing. Namely, a quality of masculinity or femininity which has grown to include what’s best from both of these important sets of characteristics.

Interestingly, from that perspective, it’s possible that our concepts of masculinity and femininity will prove more persistent and durable — because they are ultimately more valuable — than our concepts of gender and sex.

UPDATE: Thanks for the link, Glenn!

Sniff This One: It’s Dead

Some forward-looking musicians take us to a post-Singularity world where the intelligence that dominates is distinctly unfriendly. Pretty funny, though. (Warning, one line is not terribly work or family friendly.)

Hat-tip: Ivan Kirgin

Biggest Bird Ever

T-Rex sized, in fact:

Researchers in China have unearthed the bones of a gigantic bird-like dinosaur, dwarfing anything else in its category.

Alive, the beast is thought to have been 8 metres long, 3.5 metres high at the hip and 1,400 kilograms in weight — 35 times as heavy as its next largest family members and 300 times the size of smaller ones such as Caudipteryx. It has been classified as a new species and genus: Gigantoraptor erlianensis.

The evolution of bird-like features had long been thought to be accompanied by a decrease in size, meaning the smaller the species, the more bird-like it is likely to be and vice versa. The new discovery shows that isn’t necessarily true.

Gigantoraptor had long arms, bird-like legs, a toothless jaw, and probably a beak. There are no clear signs as to whether it was feathered. However, judging from its close affinity to other dinosaurs known to have been feathered, Xing Xu of the Institute of Vertebrate Paleontology and Paleoanthropology in Beijing speculates that it was.

I’m guessing this bad boy didn’t fly. And maybe that’s just as well…

[Image: dinobird.jpg]

Rome Revisited

This is neat:

An international team of architects, archaeologists and experts spent 10 years working on a real-time 3D model of the city called Rome Reborn.

Some 7,000 buildings were scanned and reproduced using a model of the city kept at a Rome museum.

Users enter the city at the time of Constantine and see inside buildings.

The simulation takes place in AD320, which is said to be the city’s peak, when it had grown to a million inhabitants.

I understand that the HBO TV series Rome was canceled primarily because of high production costs. The producers insisted on having everything in the show authentic down to the last detail. Not to suggest that the 10-year project described here was easy or cheap, but I bet it cost only a fraction of the budget for one season of Rome.

But that’s okay; we don’t need TV shows if we want to visit ancient Rome:

Talks are said to have begun with Linden Labs to make the entire simulation available on the internet through the company’s virtual world Second Life.

At this rate, some of us may be living there soon, should we choose to.

[Image: romecolumns.jpg]

The Three Goals, Game Theory, and Western Civilization

A while back, I wrote about the possibility of updating the Three Laws of Robotics as goals in order to make them a more practical means of getting at a friendly artificial general intelligence. This kicked off some interesting discussion, including some debate as to whether my “goals” are really just rules rephrased. In which case, the argument went, they probably wouldn’t help all that much. Michael Anissimov commented:

What would work better would be transferring over the moral complexity that you used to make up these goals in the first place.

Also, as you point out, these goals are vague. More specific and useful from a programmer’s perspective would be some kind of algorithm that takes human preferences as inputs and outputs actions that practically everyone sees as reasonable and benevolent. Hard to do, obviously, but CEV (http://www.singinst.org/upload/CEV.html) is one attempt.

That’s really the crux. Moral complexity does exist in algorithmic form…within our brains. And that goes to the difference between laws and goals. My goals are what I’m trying to do, both morally and in other areas. There are some sophisticated software programs running in my brain made up of things that I’ve been taught, things I’ve figured out for myself, and things that are built in. All of these add up to provide me the tendency to act a certain way in a certain situation. The strategies that drive that software are my moral goals.

Laws, on the other hand, exist outside of myself. I am not specifically programmed to do unto others as I would have them do unto me. I have some tendencies in that direction, but there’s nothing stopping me from acting otherwise, and — let’s face it — I often do. I have tendencies to be nice, fair, just, etc., but I also have tendencies to try to get what I want, to get even with those who have wronged me, to try to be a bigshot, and so on. These tendencies compete with each other, and my behavior overall is some rough compromise.

An artificial general intelligence (AGI) built as a reverse-engineered human intelligence would be in the same position. It would have the “moral complexity” Michael mentioned, but also the baggage of competing tendencies. You could no more guarantee such an intelligence’s compliance with a rule or set of rules than you could a human being’s.

A law like the Golden Rule is a high-level abstraction of certain strategies (algorithms) that produce a desired set of results. On a conscious level, I can use that abstraction to determine whether my behavior is where I want it to be:

Wife complained of being chilly when I got up at 5:00 AM to work out. Covered her with blanket. Good.

Sped up on highway in attempt to keep a guy trying to merge from going ahead of me. Not so good.

Commenter on blog revealed that he doesn’t really understand the subject at hand. Ripped him to shreds. Bad.

Through discipline and practice, I can “program myself” with it to try to move my tendencies in that direction. But I can’t write it into my moral source code and set it as an unbreakable behavioral rule. That’s partly because it’s too vague and partly because I simply lack that capability.
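
To make the goals-versus-laws distinction a little more concrete, here’s a minimal sketch of that self-evaluation loop in Python. Everything in it is hypothetical and mine (the names, the scores, the update rule); the point is only that the Golden Rule works as a scoring signal that nudges tendencies after the fact, never as a hard constraint on behavior:

```python
# A purely hypothetical sketch, not anything from a real AGI project:
# a high-level rule serves to evaluate past behavior and shift tendencies,
# which is exactly what distinguishes it from an unbreakable law.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    golden_rule_score: float  # -1.0 (treated someone badly) .. +1.0 (treated them well)

def practice(kindness_weight: float, day: list[Action],
             learning_rate: float = 0.1) -> float:
    """Discipline and practice: each action that falls short of the standard
    pushes the underlying tendency a little harder toward the standard."""
    for action in day:
        shortfall = 1.0 - action.golden_rule_score  # distance below "good"
        kindness_weight += learning_rate * shortfall * (1.0 - kindness_weight)
    return min(1.0, kindness_weight)  # a bounded tendency, never a guarantee

journal = [
    Action("covered chilly wife with a blanket", +0.8),    # good
    Action("sped up to block a merging driver", -0.5),     # not so good
    Action("ripped a confused commenter to shreds", -0.9), # bad
]
print(practice(0.5, journal))  # ~0.66: the worst moments teach the hardest
```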

Presumably, I could be externally constrained always to follow the Golden Rule, no matter what. If my actions were being constantly monitored, and I was told that I would be killed immediately upon violating the rule…I’d certainly do my best, now wouldn’t I?

Still, I’d have a hard time believing that anyone holding me in such a position was much of a practitioner of that rule him or herself. If the people trying to enforce the rule on me in this manner told me that it was for my own good — that they were trying to make me a better person — I don’t know that I’d buy it. And if I figured out that they were only doing this to protect themselves from harm I might do to them, I think I would be pretty annoyed with them (to say the least).

I would expect a reverse-engineered human intelligence to feel the same way, so I don’t think attempting to constrain an AGI in such a manner would be a particularly good idea, especially not if we have a reasonable expectation that it will eventually be smarter and more powerful than us. On the other hand, if we let it use the process I described above — evaluating its own behavior against a defined standard — an AGI might achieve far better results than I have, if only because it can think faster and would have much more subjective time in which to act. This is the notion of recursive self-improvement that matoko kusanagi referred to. The trouble with recursive self-improvement on its own, as Eliezer Yudkowsky and others have pointed out, is that if the AI starts “improving” in a direction that’s bad for humanity, things could get out of hand pretty quickly.

If the artificial intelligence is a modified version of human intelligence, or a new intelligence built from scratch, we raise the possibility of building a moral structure into the intelligence, rather than trying to enforce it from outside. That’s the idea behind the Three Laws and my Three Goals — that they would somehow be built in. But they certainly can’t be built in in anything like their current form. Michael Sargent (and others) pointed out the weakness of that approach: the less important goals have to take a back seat to the more important ones:

Each Goal must have a clear and unbreakable priority over the others that follow it and thus, in the order stated, collective continuity trumps individual safety (“The needs of the many outweigh the needs of the few, or the one.”), individual safety (broadly construed, ‘stasis’) trumps individual liberty (‘free will’), and happiness (‘utility’, a notoriously slippery concept for economists and philosophers to get a firm intellectual grip on) trumps both individual liberty and individual well-being (allowing potentially self-destructive behavior on the individual level insofar as that behavior doesn’t exceed the standard established for ‘safety’ in Goal 2).

I see the reasoning here, but I’m not 100% convinced. Consider the goals that drive a much simpler AI system: the autopilot found on any jet airliner. The number one unbreakable goal has got to be don’t crash the plane. But there are many other goals that might drive such a system:

Don’t move in such a way as to make the passengers sick.

Don’t waste fuel.

In landing, don’t go past the end of the runway.

Above all, the system will seek to ensure that first goal. But within the context of ensuring that first goal, it also has to do everything it can to ensure the others. And, yes, it can and must sacrifice the others from time to time in service of the first. So the plane might temporarily move in a nauseating way, or it might waste fuel, or it might even slide past the end of the runway if doing any of those things helps ensure the first goal.
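
For what it’s worth, here’s a rough sketch of how such a goal hierarchy might look in code. All the names, weights, and numbers are illustrative assumptions rather than real avionics logic; the point is that the first goal acts as an absolute filter while the lesser goals are traded off underneath it:

```python
# A hedged sketch of the autopilot goal hierarchy described above
# (everything here is hypothetical, not real flight-control software).
def choose_maneuver(candidates):
    """Each candidate is a dict with a hard 'crash_risk' flag and soft costs."""
    # Unbreakable first goal: discard anything that risks crashing the plane.
    safe = [c for c in candidates if not c["crash_risk"]]
    if not safe:
        raise RuntimeError("no safe maneuver available")
    # Among safe maneuvers, minimize a weighted sum of the lesser goals.
    # The weights ARE the calibration: how much nausea is a litre of fuel worth?
    weights = {"discomfort": 10.0, "fuel_burned": 1.0, "overrun_metres": 50.0}
    return min(safe, key=lambda c: sum(w * c[k] for k, w in weights.items()))

maneuvers = [
    {"crash_risk": False, "discomfort": 0.9, "fuel_burned": 2.0, "overrun_metres": 0.0},
    {"crash_risk": False, "discomfort": 0.1, "fuel_burned": 8.0, "overrun_metres": 0.0},
    {"crash_risk": True,  "discomfort": 0.0, "fuel_burned": 0.0, "overrun_metres": 0.0},
]
print(choose_maneuver(maneuvers))  # picks the smooth but thirstier option
```

Notice that the risky maneuver is never even scored. The weights are where the calibration lives, which matters for what comes next.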

Reader TJIC suggested that an AI programmed to meet the Three Goals as I defined them…

1. Ensure the survival of life and intelligence.

2. Ensure the safety of individual sentient beings.

3. Maximize the happiness, freedom, and well-being of individual sentient beings.

…would end up creating a nanny state wherein human freedom is always sacrificed to individual safety. And he may well have a point, but I would argue that just as an autopilot can be calibrated to strike whatever balance we deem appropriate between not crashing the plane and not making us sick, so could these three goals be calibrated to maximize human freedom within an acceptable level of individual risk — whatever that might be.
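
A speculative sketch of what that calibration might mean in practice: treat individual safety as a bar to clear rather than a quantity to maximize, so that freedom stops being sacrificed to ever-smaller risks. The threshold value below is purely hypothetical; setting it is exactly the hard societal question:

```python
# Hypothetical calibration knob: society's tolerated probability of harm.
ACCEPTABLE_RISK = 0.02

def permitted(individual_risk: float) -> bool:
    # TJIC's nanny-state failure mode is ACCEPTABLE_RISK == 0.0, under which
    # Goal 2 (safety) swallows Goal 3's freedom term entirely.
    return individual_risk <= ACCEPTABLE_RISK

for name, risk in [("crossing the street", 0.001),
                   ("rock climbing", 0.015),
                   ("free solo climbing", 0.2)]:
    print(name, "allowed" if permitted(risk) else "barred")
```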

Getting back to the vagueness problem, it’s hard to calibrate the goals as stated, seeing as they are written in an awkward pseudo-code that we call human language. If we want to improve on the algorithms that are built into human intelligence, or develop entirely new ones — in other words, if we’re going to come up with algorithms that will provide us the ends stated in the goals — we’re going to have to do it mathematically.

But that isn’t necessarily going to be an easy thing to do. Eliezer Yudkowsky argues that developing an AI and setting it to work on doing some good thing are relatively easy compared to the third crucial step, making sure that that friendly, well-intentioned AI doesn’t accidentally wipe us out of existence while trying to achieve those good ends:

If you find a genie bottle that gives you three wishes, it’s probably a good idea to seal the genie bottle in a locked safety box under your bed, unless the genie pays attention to your volition, not just your decision.

Again, I think this goes to the issue of calibration of the system. Eliezer wants to calibrate what the AGI does with the coherent, extrapolated volition of humanity. Volition is an extremely important concept. Earlier, I mentioned the Golden Rule. If I decide that I’m going to do unto others as I would have them do unto me, I might start handing out big wedges of blueberry pie to everybody I see. After all, I like pie and I would love it if people gave me pie. But if I give my diabetic or overweight or blueberry-allergic friends a wedge of that pie, I wouldn’t be doing them any favors. Nor would I be doing what I wanted to do in the deepest sense.

Eliezer describes the concept of extrapolated volition as meaning not just what we want, but what we would want if we knew more, understood better, could see farther. Coming up with a coherent extrapolated volition for all of humanity is a tall order, especially if we’re doing it not just for the sake of conversation, but in order to enable a system which will try to realize that which is within our volition.

I like to think that humanity’s CEV would look a lot like the three goals that I’ve written. And I honestly believe that the algorithms that power human progress do work, in a rough and general way, towards those goals, which is why people are generally freer, safer, and happier than they have been in the past — though obviously not without many, many appalling and horrific exceptions. So perhaps our calibration efforts involve feeding the AGI algorithms that will enable it to speed our progress towards those goals while cutting the exceptions way down. Or eliminating them, if that’s somehow possible.

So to finally come around to it, what will those algorithms look like?

Maybe we can take a hint from the study of Game Theory. Robert Axelrod held two tournaments in the early 1980’s in which computer programs competed against each other in an attempt to identify the optimal winning strategy for playing the iterated version of the famous Prisoner’s Dilemma. In the one-off version of the game, the optimal strategy is to screw the other guy. (This is not the sort of thing we want to go teaching the AGI, at least not in isolation!) However, when multiple rounds of the game are played, something else begins to emerge:

By analysing the top-scoring strategies, Axelrod stated several conditions necessary for a strategy to be successful.

Nice
The most important condition is that the strategy must be “nice”, that is, it will not defect before its opponent does. Almost all of the top-scoring strategies were nice. Therefore a purely selfish strategy for purely selfish reasons will never hit its opponent first.

Retaliating
However, Axelrod contended, the successful strategy must not be a blind optimist. It must always retaliate. An example of a non-retaliating strategy is Always Cooperate. This is a very bad choice, as “nasty” strategies will ruthlessly exploit such softies.

Forgiving
Another quality of successful strategies is that they must be forgiving. Though they will retaliate, they will once again fall back to cooperating if the opponent does not continue to play defects. This stops long runs of revenge and counter-revenge, maximizing points.

Non-envious
The last quality is being non-envious, that is not striving to score more than the opponent (impossible for a ‘nice’ strategy, i.e., a ‘nice’ strategy can never score more than the opponent).

Therefore, Axelrod reached the Utopian-sounding conclusion that selfish individuals for their own selfish good will tend to be nice and forgiving and non-envious. One of the most important conclusions of Axelrod’s study of IPDs is that Nice guys can finish first.

Bill Whittle has written recently that the qualities listed above underpin western civilization, and help to explain why the West has out-competed other civilizations, who operate using different strategies:

Now, this is where my own analysis kicks in, because frankly, nice, retaliating, forgiving and non-envious pretty much sums up how I feel about the West in general and the United States in particular. The web of trust and commerce in Western societies is unthinkable in the Third World because the prosperity they produce are fat juicy targets for people raised on Screw the Other Guy. Crime and corruption are stealing, and stealing is Screwing the Other Guy. It’s short-term win, long-term loss.

I would add that if we look at the three goals as goals for humanity rather than for artificial intelligence, we see better progress towards them in western societies than elsewhere. In the tournament, the winning strategy, embodying all of the above characteristics, was called tit-for-tat. Interestingly, the computer program driving that strategy consisted of only four lines of BASIC code, which suggests a startling possibility — like a simple recursive formula producing a complex Mandelbrot image, the moral complexity we’re looking for might just be packed into a very simple set of mathematical relationships.
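
For readers who want to see just how simple, here’s a sketch of tit-for-tat in an iterated Prisoner’s Dilemma, using the standard Axelrod tournament payoffs. This is my own illustrative Python, not the original tournament code:

```python
# Standard iterated Prisoner's Dilemma payoffs: (my points, their points)
# for each pair of moves, where "C" cooperates and "D" defects.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(mine, theirs):
    # Nice: opens with cooperation. Retaliating and forgiving: simply
    # mirrors whatever the opponent did last round.
    return theirs[-1] if theirs else "C"

def always_defect(mine, theirs):
    return "D"  # "Screw the Other Guy"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_a, hist_b), strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual niceness pays best
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then retaliates
```

Note that tit-for-tat actually loses the second match; a non-envious strategy never outscores its individual opponent. It wins tournaments anyway, because mutual cooperation piles up points far faster than mutual punishment does.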

So in order to develop and calibrate an Artificial General Intelligence that carries out our three top goals (or that helps us to achieve our coherent extrapolated volition) one of the important parameters to explore is how the AI relates to us and to other AIs. The secret might ultimately lie in playing nice with the AI, and teaching it to play nice with us and with other AIs. Not just because we want it to be nice, but because nice turns out to be — at a mathematical level — the best way to play.

UPDATE: This entry has been republished at the website of the Institute for Ethics and Emerging Technologies.

Meet the SpecuFrog

This blog has been around for several years now, and it’s long past time we adopted an official mascot. (Don’t ask me why; it just is. Sheesh.)

Anyhow, I nominate this newly discovered South American frog, who went to all the trouble of evolving itself into a color scheme highly compatible with the CSS styles used by this blog, not to mention my retro 70’s blacklight aesthetic sensibilities. A purple frog is such a good idea that had one not existed, we would have to invent it. This creature has shown a lot of initiative in saving us that trouble.

So without further ado, I give you the Frog of the Future.

[Image: purplefrog.jpg]

The remote area where the frog was found is a veritable cornucopia of biodiversity:

Including the new species, the scientists observed 467 species at the two sites, ranging from large cats like panthers and pumas, to monkeys, reptiles, bats and insects.

Of the 467 species observed, 24 were completely new species. Normally when there’s talk of finding a whole bunch of new species, we’re talking insects. Not this time. Dung beetles and ants are only part of a picture that includes purple frogs and armored catfish — which is not a new discovery, but rather a species previously believed extinct. This is huge — it’s like a mini, land-based Galapagos.

So now all we need is a name for our mascot. It will temporarily be known as Phil Jr. until somebody comes up with something catchier.

UPDATE FROM STEPHEN:

How about:

[Image: fffrog.JPG]