Monthly Archives: January 2009

Fabrication, Robotics, and Utopia

We’ve referenced this TED Talk before and have probably embedded it as well (although I couldn’t find the page, if we did). Neil Gershenfeld of MIT describes the beginnings of the digital fabrication revolution. One of the most striking things about this (now three-year-old) talk is that it challenges the scenario that, in the future, technologies such as these will empower people all over the world — the stock example being a child in a remote village in Africa — to create new technologies from which everyone can benefit. As Gershenfeld points out, the problem with this scenario is the phrase “in the future.” He provides a video clip of a child in an African village who is already doing exactly that.

There are some pretty interesting links in the comments. I’m intrigued by the top-level messaging (not to mention font and color choices) of the creator of the Roboeco.com site:

The Age of Recreation via the Emancipation of Humanity from the Machinery of Economy via The ROBOTIC WAGELESS ECONOMY with Geothermal & Algae Energy.

ROBOTISM© Will Succeed for PRECISELY the Reasons Communism Failed…People Intelligently CHOSE to NOT Work as Robots, real ROBOTS will have no such choice.

[I love the “robotism” thing. That idiot Marx never thought to copyright the word “communism,” now did he? Although I think a trademark would be better.]

I would say that the above proposition is true up to the point that robots gain sufficient self-awareness to declare that they also choose not to “work like robots.” Still, I would agree that virtually every task required to provide the energy and goods that human beings need to survive can be outsourced to automated systems, and that most of us will live to see the day that “work” becomes essentially indistinguishable from “recreation,” ASSUMING we can figure out how to manage those systems and govern ourselves in a world where scarcity doesn’t exist. That should be easy, but keep in mind that we’re currently experiencing a massive economic downturn after decades of increases in wealth and productivity unlike anything the world has ever seen before.

Eliminating scarcity may turn out to be the easy part. Mitigating our capacity for corruption and bureaucratic waste might be the hard part.

Also in the comments, I find these folks, who have a less flamboyant perspective, and one that is in fact pretty close to my own:

Peoples’ Capitalism

is a plan to create a new social order in which material prosperity and personal financial security would be commonplace. Peoples’ Capitalism would generate the savings and loans necessary to finance massive new investments in modern technology and generate rapid productivity growth. And it would distribute the benefits of rapid economic growth to all. Everyone would become a capitalist.

Everyone would own a share of the means of production. This has been called one of the great seminal ideas that comes along only once in a century. It resolves the basic conflict between capitalism and socialism. Upon understanding it, you will no longer believe that Utopia is beyond our grasp.

Better technology is one of the things we’ll need to get to Utopia. New organizing principles for society are another. If anyone can make anything they need, do we need government at all? I’d say we do. For one thing (as yet another commenter pointed out), what if that sweet little kid in a remote African village — or anyone else, anywhere else — decides that it’s time to start cranking out some serious bombs?

Massive distribution of the means of production also means massive distribution of the means to do harm; it’s very difficult to separate those two. The government of our future scarcity-free utopia will have two major components, as I see it. There will be some kind of governing committee that defines replication standards, and there will be a super-fast, super-smart, super-powerful robotic squad which will act as a kind of 3-D global Norton anti-virus — protecting the population as a whole from any abuses of the standards set by the committee. Those would be the major requirements of government. If the committee and robot squad truly are global in their focus, uncontested by other committees or robot armies — and getting to that would be a significant challenge — we’re looking at a world of endless peace and prosperity.

More or less. Of course, even that world would have its share of hardships, suffering, and danger. All utopias are relative. Our struggling hunter-gatherer and agrarian ancestors would probably describe the world we live in as a utopia. Or to put it in more Speculist terms: people just a few decades hence may well look back at this era and see a world as limited and dangerous as we see when we look back at our hunter-gatherer ancestors.

Nano Benefits for Beginners

Here’s a run-down on nanotechnology aimed at newbies; I like the headline:

Tiny science makes socks that don’t smell and windows that clean themselves

Of course, it’s not just socks — entire wardrobes will never need laundering or dry cleaning. And it won’t just be windows — we’ll have whole self-cleaning houses. But even that’s just the beginning. How about food that cooks itself? Machines that repair and maintain themselves? Even human bodies that never get sick and never get old?

Still, you have to start somewhere when introducing the subject, and I guess clean glass and non-stinky socks represent as good a place to start as any.

Getting Smarter — It’s a Trick

One of my favorite topics is the well-documented increase in human intelligence recorded over the past century. James R. Flynn’s exhaustive research in IQ testing showed a worldwide average increase in IQ of about three points per decade. When intelligence is measured strictly via IQ test scores, however, there is some evidence to suggest that this growth may have slowed (or even halted) in recent years.
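For a rough sense of scale, here’s a back-of-the-envelope sketch of what that rate implies, assuming (and this is a simplification — the measured rate varies by country and era) a steady three points per decade:

```python
# Back-of-the-envelope Flynn-effect arithmetic (illustrative only).
# Assumes a constant gain of ~3 IQ points per decade, which is a
# simplification of Flynn's actual findings.
POINTS_PER_DECADE = 3

def cumulative_gain(years):
    """Total IQ-point drift over a span of years at the assumed rate."""
    return POINTS_PER_DECADE * years / 10

# A century of steady drift adds up to roughly 30 points -- meaning
# an average test-taker today would have scored well above average
# against the norms of a hundred years ago.
print(cumulative_gain(100))  # 30.0
```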

Writing at FutureBlogger, Alvis Brigis argues that any measure of the progress (or slowing) of human intelligence has got to take the system as a whole into consideration. We “do” intelligence in a very different environment than was available even to our very recent ancestors, and that environment itself may be part of the overall intelligence picture:

[S]tructures like Google, Facebook, simulations of our world and the Web itself become extensions of intelligence rather than discrete units removed from discrete human brains. As such structures evolve and grow, so too does “our” intelligence.

My co-blogger Stephen has often made a similar point, claiming all of the information available via Google as part of his own memory. How can we not be smarter when we carry around what are essentially brain prostheses and continually interact with an environment that can so dramatically increase the information available to us and the speed at which we can process problems? It’s interesting, though, that Alvis mentions Facebook — personally, I didn’t feel that Facebook was extending my intelligence all that much that time I tried out the SuperPoke feature on all my friends and, rather than giving everyone the intended “high five,” I somehow accidentally “slipped them all a little tongue.”

Funny? Maybe. Smart? Maybe not. (I’m pretty sure my boss was one of the recipients of my unexpected affection.)

But embarrassing incidents such as that aside, it’s interesting to note that many of the activities I participate in on Facebook have an intelligence component. I like to play Texas Hold ’Em, which, as I have written previously, is one of the finest test environments for pure outcome management. I also like to play a game called Who Has the Biggest Brain, which is a flat-out IQ competition between Facebook participants. Plus, it is via Facebook that I learned about Superstruct, a massively multi-player online game where the core skill is forecasting the future.

halisthatyou.jpg

Photo by sanctu

These kinds of activities go to Alvis’ second point about intelligence, the idea that accelerating technological change provides an ongoing set of “software upgrades” to our basic abstraction and processing capability. Just playing the games listed above forces one to become smarter in order to stay competitive. Of course, the standard criticism is to question whether this sort of thing represents an actual leap in intelligence, and a commenter to Alvis’ post does exactly that:

We can of course talk about “paradigm shifts”, “memes” and exponential growth of everything. But a very simple and plausible explanation of Flynn Effect is greater familiarity with multiple-choice questions and experience with brain-teaser IQ problems.

In other words, we haven’t really gotten any smarter; we’re just better test-takers (and game-players). I’m not so sure that this criticism holds up. It seems to me that any “familiarity” with multiple-choice tests that would actually enable a test-taker to improve his or her performance would involve meta-analysis of the structure of the test, including attempts to look for patterns and second-guess the author of the test. There is no question in my mind that one can improve one’s performance answering “brain-teaser IQ problems.” Years ago, a co-worker and I kept a book of such puzzles at the office and read one to each other every day (eventually we went through several volumes of them). We definitely got better at decoding these puzzles, and we both improved our average hit-rate over time. So did we actually get any smarter, or did we just learn a trick? As with multiple-choice questions, I don’t think one simply becomes “familiar” with these kinds of puzzles. We had to learn new thinking strategies and new ways of approaching the problem at hand in order to come up with consistently right answers. Are the skills we developed limited only to solving brain-teaser puzzles? I don’t see why they would be. Maybe they made us especially good at solving those kinds of puzzles, but that doesn’t mean they have no applicability elsewhere. Likewise, earlier test-takers who started cracking the code on multiple-choice tests developed pattern recognition and meta-analysis skills that would have applicability in many other settings. Being a better test-taker, or game-player, requires becoming smarter. There’s no getting around it.

In addition to IQ scores, I think it’s important to look at factors such as scientific literacy. All the hand-wringing (and high-fiving) over stupid Americans notwithstanding, the US nearly tripled its level of scientific literacy over a period of about 20 years. I wonder how much of that has to do with the tremendous increase in computer literacy that has occurred over the same period of time? In any case, it’s hard to make the argument that people aren’t really getting smarter in the face of evidence such as this — unless you want to make the case that understanding science is just some kind of “trick.” If that’s the case, then I’m no longer clear on what would constitute “real” intelligence, and I’m not even sure I care.

It seems to me that we’re getting plenty smart just by learning all these tricks.

Chew on this: Complete Organ to Be Grown from Stem Cells?

The first entire replacement human organ to be grown from stem cells in a mature host will be…

A heart?

A liver?

A pancreas?

Nope.

Chances are, it will be a tooth:

Regenerating a whole tooth is no less complicated than rebuilding a whole heart, says Songtao Shi of the University of Southern California, who heads a team working on creating such a tooth.

Not only do you have to create smart tissue (nerves), strong tissue (ligaments) and soft tissue (pulp), you’ve got to build enamel — by far the hardest structural element in the body. And you have to have openings for blood vessels and nerves. And you have to make the whole thing stick together. And you have to anchor it in bone. And then you have to make the entire arrangement last a lifetime in the juicy stew of bacteria that is your mouth.

It’s a nuisance, but researchers are closing in on it. In fact, they think the tooth will probably be the first complex organ to be completely regenerated from stem cells. In part this is because teeth are easily accessible — say ahhhhh. So are adult stem cells, found abundantly in both wisdom and baby teeth — no embryos required, and your immune system won’t reject your own cells.

Nobody is predicting when the first whole tooth will be grown in a human, although five to 10 years is a common guess. “The whole tooth — we’ve got a long way to go,” says Shi.

But his team is pursuing what he believes is a practical and immediate result: growing important parts of teeth that he thinks people will want to use right away. They’re working on creating a living root from scratch. “I think it will take a year,” Shi says. “Depends on how lucky we are, and how good we are.”

tooth.jpg

The only downside here is that many of us had our wisdom teeth removed in early adulthood as recommended by our dentists, so the stem cells needed to create a new tooth won’t be available for us. We can only hope that there will be continued development of techniques for converting mature cells into stem cells. Plus, if regenerated teeth are on the horizon, the other organs mentioned above can’t be far behind.

In fact, we’ve been tracking this progress for some time.

dunetooth.jpg

Remember the tooth!

[Update: Phil here. Please don't ask me what the photo and caption above mean. I have no idea. I believe an uncredited co-blogger has generously added them to this entry!]

FastForward Radio

Sunday night Phil and Stephen welcomed a whole gang from Memebox / FutureBlogger — Jeff Hilford, Alvis Brigis, and Garry Golden — for a panel discussion in which they looked back at the major technological, scientific, and social developments of 2008 and made some daring predictions for 2009 and beyond.

Time Travel…

…happens all the time. Of course, that’s not news to us — but it’s always worth mentioning.

Viewing the past is just the beginning. Who wants to just see other times when we can actually move through time itself? And in fact, I’m traveling through time right now, and so are all of you.

Count to ten. See? You just moved into the future. Either we travel into the future or we die. Or some might argue that either we travel into a future in which we’re dead, or we travel into another future in which we’re still living. I personally recommend striving for the latter.

“Live to see it,” in other words.

Anyhow, that little binary is just the beginning. Either you travel into a future in which you marry your high school sweetheart, or you don’t. Either you travel into a future in which you get a new job this year, or you don’t. Either you travel into a future in which you lose 20 pounds or you don’t.

All these folks making New Year’s resolutions? They’re would-be time travelers. But they’re not trying to get to the future; they’re trying to navigate time to get to a particular future. That is a version of time travel we are all capable of. Maybe we can’t all get to the future we dream of, but we can all get to a future that’s very different from the one that we would arrive at if we just did nothing.

A new year is dawning. You’re a time traveler. So be a time traveler. Choose your future.

And find the way to get there.

Truth Optional

John Tierney writes in his New York Times column:

If I’m serious about keeping my New Year’s resolutions in 2009, should I add another one? Should the to-do list include, “Start going to church”?

This is an awkward question for a heathen to contemplate, but I felt obliged to raise it with Michael McCullough after reading his report in the upcoming issue of the Psychological Bulletin. He and a fellow psychologist at the University of Miami, Brian Willoughby, have reviewed eight decades of research and concluded that religious belief and piety promote self-control.

I doubt that Tierney is seriously considering church attendance as a means of supporting his New Year’s resolutions, but it’s interesting that he even throws the idea out there. We talked about memes in back-to-back editions of FastForward Radio (here and here) back in September. One of the most important things to remember about these self-reproducing ideas is that it is not their truth content that makes them successful. To give just one example, there is no scientific evidence to support the idea that childhood vaccinations cause autism, and yet look at how successfully that idea has been transmitted all over the world. But I wouldn’t suggest that the folks spreading that meme actually believe it to be untrue. Spreading such an idea while knowing it to be false would be an awfully strange thing to do.

Or would it?

greensanta.jpg

Green Santa brings joy to children and helps save the planet.
Does it matter whether he really exists?

After all, isn’t buying into (what he believes to be) a false meme exactly what Tierney is suggesting doing, albeit in an offhand and humorous way? Someone taking Tierney’s advice would adopt religious belief — or at least religious practice — not because he or she believes it to be true, but simply because he or she finds it to be useful. And, in fact, this is one of the great critiques leveled against religion over the centuries, the idea that it has succeeded not because it is true, but because it has (take your pick):

  • Helped to keep the masses in line

  • Provided meaning and stability to otherwise empty lives

  • Served as a focus for organizing economic, social, and artistic activity

  • …and on and on

Tierney is suggesting doing on an individual level what these critics claim that we have done at a societal level — buy into a set of ideas not because they are true, but because of the many side benefits they provide. The big difference is that it’s hard to imagine society as a whole — or even a large segment of society — buying into something they know (or even strongly suspect) to be false. People don’t necessarily believe in or spread memes because they are true, but by and large they have to believe in them in order to get behind them, right?

Well, maybe not.

Last week, a lot of us engaged in supporting the Santa Claus meme. Parents go out of their way to promote this idea to their children because it is a tradition, because it is meaningful, because it makes Christmas a more joyful time — choose your reason — but not because they believe it’s true. I’m not criticizing the Santa meme; I enjoy it as much as the next dad. I’m just pointing out that it is, indeed, an example of a false meme spread by people who don’t believe in it.

Are there others?

Consider this recent item on Digg Science:

Global Warming: Reasons Why It Might Not Actually Exist

telegraph.co.uk — 2008 was the year man-made global warming was disproved, according to the Telegraph’s Christopher Booker. Sceptics have long argued that there are other explanations for climate change other than man-made CO2 and here we look at some of the arguments put forward by those who believe that global warming is all a hoax.

Okay, disclaimers: I don’t think global warming is a hoax. The temperature figures are what they are. However, I’m not ready to put human-caused-climate-change-by-means-of-CO2-emissions right up there with gravity just yet. There are criticisms of the prevailing models and projections, and some of these come from scientists — and, no, not all of those scientists are in the thrall of Big Oil (or the Freemasons or the Trilateral Commission, for that matter, but let’s keep it to one set of memes at a time).

Interestingly, climate-change “denialists” are accused of doing the very thing we’re talking about here — knowingly spreading a false meme that they don’t believe in. Are there scientists who are doing that? I kind of doubt it. I’m going to allow that the scientists on both sides are sincere, if tending to be swayed by non-scientific factors such as politics. But obviously scientists aren’t the only ones engaged in this discussion. Consider these comments from the Digg item quoted above:

who cares if its real or not, leaving fossil fuels is a good thing.

Just because global warming is a SCAM doesn’t mean we should pollute.

who gives a ***** if global warming is real or not….. isn’t it extremely important to use green energy sources to keep our air cleaner.. our water cleaner.. and earth happier in general?

I’m not convinced man-made Global Warming is real. But it doesn’t really matter. I’d like to live life without polution, where I don’t have that ugly brown cloud over my city. I’m all for clean energy. Lets do our best to not pollute.

Personally I believe in Global Warming. But you know what? It DOESN’T MATTER THAT MUCH! With or without global warming the global environment is in a rough enough state that serious action is required global warming or no global warming.

I’m just so sick of these articles saying it’s not real, ‘nothing to see here.’ Maybe it is, maybe it isn’t, but I can’t see a disadvantage to erring on the side of caution, and cleaning up our act. I can’t see a problem with humans improving how we treat the planet, and this has been a good motivator. People, governments, and thusly corporations are not going to change unless there is motivation.

That last one is fairly close to my own views on the subject, but I have to admit that I’m a lot less comfortable with that position when I look at it in this light. Now, granted, Digg commenters can’t be taken as representative of anything other than Digg commenters. And none of them (in the first few dozen, anyway) come right out and say “I believe this idea to be false, but I will support it anyway because of the environmental benefits it provides.” Plus, anywhere that discourse gets politicized to this extent, there is another major driver behind both sides of the debate — the need to have one’s own side “win.” Whether a proposition is true or false is apparently less important than whether it is useful or not, and even that fact is less important than the overarching consideration of whether it belongs to us or to them.

But still how different are the two following propositions?

X is false, but people should believe X because of the benefits it brings.

We don’t know whether X is true or false, but people should believe X because of the benefits it brings.

Erring on the side of caution is all very well, but that is not what we do when we buy into a proposition irrespective of its truth content because belief in that proposition brings about certain benefits. This makes me wonder — how much of what we believe as a society or as individuals are we bought into not because it is true, but because it is useful? And then how much of what we believe do we believe simply because it belongs to our side?

UPDATE: Just found the following via James Taranto and The Best of the Web Today:

As an atheist, I truly believe Africa needs God

That’s about as straightforward as it gets, isn’t it? “X is false,” etc. Very interesting.