Author Archives: Phil Bowermaster

Nano Benefits for Beginners

Here’s a run-down on nanotechnology aimed at newbies; I like the headline:

Tiny science makes socks that don’t smell and windows that clean themselves

Of course, it’s not just socks — entire wardrobes will never need laundering or dry cleaning. And it won’t just be windows — we’ll have whole self-cleaning houses. But even that’s just the beginning. How about food that cooks itself? Machines that repair and maintain themselves? Even human bodies that never get sick and never get old.

Still, you have to start somewhere when introducing the subject, and I guess clean glass and non-stinky socks represent as good a place to start as any.

Getting Smarter — It’s a Trick

One of my favorite topics is the well-documented increase in human intelligence recorded over the past century. James R. Flynn’s exhaustive research in IQ testing showed a worldwide average increase in IQ of about three points per decade. When intelligence is measured strictly via IQ test scores, however, there is some evidence to suggest that this growth may have slowed (or even halted) in recent years.
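The arithmetic behind that figure is worth making explicit. Here's a minimal back-of-the-envelope sketch, assuming the roughly three-points-per-decade rate cited above holds as a constant linear trend (actual gains varied by country and by test):

```python
# Sketch of the cumulative Flynn effect: ~3 IQ points per decade,
# treated as a constant linear rate. The rate is the rough average
# cited in the text, not a precise empirical constant.

POINTS_PER_DECADE = 3

def cumulative_gain(years: int, rate: float = POINTS_PER_DECADE) -> float:
    """Total IQ-point gain over `years`, assuming a constant linear rate."""
    return rate * (years / 10)

# Over a century, the trend implies a ~30-point shift: someone scoring
# 100 against today's norms would score around 130 against the norms
# of a hundred years ago.
print(cumulative_gain(100))  # 30.0
```

A 30-point gap is two standard deviations on most IQ scales, which is why the Flynn effect is so startling if taken at face value.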

Writing at FutureBlogger, Alvis Brigis argues that any measure of the progress (or slowing) of human intelligence has got to take the system as a whole into consideration. We “do” intelligence in a very different environment than was available even to our very recent ancestors, and that environment itself may be part of the overall intelligence picture:

[S]tructures like Google, Facebook, simulations of our world and the Web itself become extensions of intelligence rather than discrete units removed from discrete human brains. As such structures evolve and grow, so too does “our” intelligence.

My co-blogger Stephen has often made a similar point, claiming all of the information available via Google as part of his own memory. How can we not be smarter when we carry around what are essentially brain prostheses and continually interact with an environment that can so dramatically increase the information available to us and the speed at which we can process problems? It’s interesting that Alvis mentions Facebook, though. Personally, I didn’t feel that Facebook was extending my intelligence all that much that time I tried out the SuperPoke feature on all my friends and, rather than giving everyone the intended “high five,” somehow accidentally “slipped them all a little tongue.”

Funny? Maybe. Smart? Maybe not. (I’m pretty sure my boss was one of the recipients of my unexpected affection.)

But embarrassing incidents such as that aside, it’s interesting to note that many of the activities I participate in on Facebook have an intelligence component. I like to play Texas Hold ’Em, which, as I have written previously, is one of the finest test environments for pure outcome management. I also like to play a game called Who Has the Biggest Brain, which is a flat-out IQ competition among Facebook participants. Plus, it is via Facebook that I learned about Superstruct, a massively multi-player online game where the core skill is forecasting the future.

halisthatyou.jpg

Photo by sanctu

These kinds of activities go to Alvis’ second point about intelligence, the idea that accelerating technological change provides an ongoing set of “software upgrades” to our basic abstraction and processing capability. Just playing the games listed above forces one to become smarter in order to stay competitive. Of course, the standard criticism is to question whether this sort of thing represents an actual leap in intelligence, and a commenter to Alvis’ post does exactly that:

We can of course talk about “paradigm shifts”, “memes” and exponential growth of everything. But a very simple and plausible explanation of Flynn Effect is greater familiarity with multiple-choice questions and experience with brain-teaser IQ problems.

In other words, we haven’t really gotten any smarter; we’re just better test-takers (and game-players). I’m not so sure that this criticism holds up. It seems to me that any “familiarity” with multiple-choice tests that would actually enable a test-taker to improve his or her performance would involve meta-analysis of the structure of the test, including attempts to look for patterns and second-guess the author of the test.

There is no question in my mind that one can improve one’s performance answering “brain-teaser IQ problems.” Years ago, a co-worker and I kept a book of such puzzles at the office and read one to each other every day (eventually we went through several volumes of them). We definitely got better at decoding these puzzles, and we both improved our average hit-rate over time. So did we actually get any smarter, or did we just learn a trick? As with multiple-choice questions, I don’t think one simply becomes “familiar” with these kinds of puzzles. We had to learn new thinking strategies and new ways of approaching the problem at hand in order to come up with consistently right answers.

Are the skills we developed limited only to solving brain-teaser puzzles? I don’t see why they would be. Maybe they made us especially good at solving those kinds of puzzles, but that doesn’t mean they have no applicability elsewhere. Likewise, test-takers who started cracking the code on multiple-choice tests developed pattern recognition and meta-analysis skills that would have applicability in many other settings. Being a better test-taker, or game-player, requires becoming smarter. There’s no getting around it.

In addition to IQ scores, I think it’s important to look at factors such as scientific literacy. All the hand-wringing (and high-fiving) over stupid Americans notwithstanding, the US nearly tripled its level of scientific literacy over a period of about 20 years. I wonder how much of that has to do with the tremendous increase in computer literacy that has occurred over the same period of time? In any case, it’s hard to make the argument that people aren’t really getting smarter in the face of evidence such as this — unless you want to make the case that understanding science is just some kind of “trick.” If that’s the case, then I’m no longer clear on what would constitute “real” intelligence, and I’m not even sure I care.

It seems to me that we’re getting plenty smart just by learning all these tricks.

Chew on this: Complete Organ to Be Grown from Stem Cells?

The first entire replacement human organ to be grown from stem cells in a mature host will be…

A heart?

A liver?

A pancreas?

Nope.

Chances are, it will be a tooth:

Regenerating a whole tooth is no less complicated than rebuilding a whole heart, says Songtao Shi of the University of Southern California, who heads a team working on creating such a tooth.

Not only do you have to create smart tissue (nerves), strong tissue (ligaments) and soft tissue (pulp), you’ve got to build enamel — by far the hardest structural element in the body. And you have to have openings for blood vessels and nerves. And you have to make the whole thing stick together. And you have to anchor it in bone. And then you have to make the entire arrangement last a lifetime in the juicy stew of bacteria that is your mouth.

It’s a nuisance, but researchers are closing in on it. In fact, they think the tooth will probably be the first complex organ to be completely regenerated from stem cells. In part this is because teeth are easily accessible — say ahhhhh. So are adult stem cells, found abundantly in both wisdom and baby teeth — no embryos required, and your immune system won’t reject your own cells.

Nobody is predicting when the first whole tooth will be grown in a human, although five to 10 years is a common guess. “The whole tooth — we’ve got a long way to go,” says Shi.

But his team is pursuing what he believes is a practical and immediate result: growing important parts of teeth that he thinks people will want to use right away. They’re working on creating a living root from scratch. “I think it will take a year,” Shi says. “Depends on how lucky we are, and how good we are.”

tooth.jpg

The only downside here is that many of us had our wisdom teeth removed in early adulthood as recommended by our dentists, so the stem cells needed to create a new tooth won’t be available for us. We can only hope that there will be continued development of techniques for converting mature cells into stem cells. Plus, if regenerated teeth are on the horizon, the other organs mentioned above can’t be far behind.

In fact, we’ve been tracking this progress for some time.

dunetooth.jpg

Remember the tooth!

[Update: Phil here. Please don't ask me what the photo and caption above mean. I have no idea. I believe an uncredited co-blogger has generously added them to this entry!]

Time Travel…

…happens all the time. Of course, that’s not news to us — but it’s always worth mentioning.

Viewing the past is just the beginning. Who wants to just see other times when we can actually move through time itself? And in fact, I’m traveling through time right now, and so are all of you.

Count to ten. See? You just moved into the future. Either we travel into the future or we die. Or some might argue that either we travel into a future in which we’re dead, or we travel into another future in which we’re still living. I personally recommend striving for the latter.

“Live to see it,” in other words.

Anyhow, that little binary is just the beginning. Either you travel into a future in which you marry your high school sweetheart, or you don’t. Either you travel into a future in which you get a new job this year, or you don’t. Either you travel into a future in which you lose 20 pounds or you don’t.

All these folks making New Year’s resolutions? They’re would-be time travelers. But they’re not trying to get to the future; they’re trying to navigate time to get to a particular future. That is a version of time travel we are all capable of. Maybe we can’t all get to the future we dream of, but we can all get to a future that’s very different from the one that we would arrive at if we just did nothing.

A new year is dawning. You’re a time traveler. So be a time traveler. Choose your future.

And find the way to get there.

Truth Optional

John Tierney writes in his New York Times column:

If I’m serious about keeping my New Year’s resolutions in 2009, should I add another one? Should the to-do list include, “Start going to church”?

This is an awkward question for a heathen to contemplate, but I felt obliged to raise it with Michael McCullough after reading his report in the upcoming issue of the Psychological Bulletin. He and a fellow psychologist at the University of Miami, Brian Willoughby, have reviewed eight decades of research and concluded that religious belief and piety promote self-control.

I doubt that Tierney is seriously considering church attendance as a means of supporting his New Year’s resolutions, but it’s interesting that he even throws the idea out there. We talked about memes in back-to-back editions of FastForward Radio (here and here) back in September. One of the most important things to remember about these self-reproducing ideas is that it is not their truth content that makes them successful. To give just one example, there is no scientific evidence to support the idea that childhood vaccinations cause autism, and yet look at how successfully that idea has been transmitted all over the world. But I wouldn’t suggest that the folks spreading that meme actually believe it to be untrue. Spreading such an idea while knowing it to be false would be an awfully strange thing to do.

Or would it?

greensanta.jpg

Green Santa brings joy to children and helps save the planet.
Does it matter whether he really exists?

After all, isn’t buying into (what he believes to be) a false meme exactly what Tierney is suggesting doing, albeit in an offhand and humorous way? Someone taking Tierney’s advice would adopt religious belief — or at least religious practice — not because he or she believes it to be true, but simply because he or she finds it to be useful. And, in fact, this is one of the great critiques leveled against religion over the centuries, the idea that it has succeeded not because it is true, but because it has (take your pick):

  • Helped to keep the masses in line

  • Provided meaning and stability to otherwise empty lives

  • Served as a focus for organizing economic, social, and artistic activity

  • …and on and on

Tierney is suggesting doing on an individual level what these critics claim that we have done at a societal level — buy into a set of ideas not because they are true, but because of the many side benefits they provide. The big difference is that it’s hard to imagine society as a whole — or even a large segment of society — buying into something they know (or even strongly suspect) to be false. People don’t necessarily believe in or spread memes because they are true, but by and large they have to believe in them in order to get behind them, right?

Well, maybe not.

Last week, a lot of us engaged in supporting the Santa Claus meme. Parents go out of their way to promote this idea to their children because it is a tradition, because it is meaningful, because it makes Christmas a more joyful time — choose your reason — but not because we believe it’s true. I’m not criticizing the Santa meme; I enjoy it as much as the next dad. I’m just pointing out that it is, indeed, an example of a false meme spread by people who don’t believe in it.

Are there others?

Consider this recent item on Digg Science:

Global Warming: Reasons Why It Might Not Actually Exist

telegraph.co.uk — 2008 was the year man-made global warming was disproved, according to the Telegraph’s Christopher Booker. Sceptics have long argued that there are other explanations for climate change other than man-made CO2 and here we look at some of the arguments put forward by those who believe that global warming is all a hoax.

Okay, disclaimers: I don’t think global warming is a hoax. The temperature figures are what they are. However, I’m not ready to put human-caused-climate-change-by-means-of-CO2-emissions right up there with gravity just yet. There are criticisms of the prevailing models and projections, and some of these come from scientists — and, no, not all of those scientists are in the thrall of Big Oil (or the Freemasons or the Trilateral Commission, for that matter, but let’s keep it to one set of memes at a time).

Interestingly, climate-change “denialists” are accused of doing the very thing we’re talking about, here — knowingly spreading a false meme that they don’t believe in. Are there scientists who are doing that? I kind of doubt it. I’m going to allow that the scientists on both sides are sincere, if tending to be swayed by non-scientific factors such as politics. But obviously scientists aren’t the only ones engaged in this discussion. Consider these comments from the Digg item quoted above:

who cares if its real or not, leaving fossil fuels is a good thing.

Just because global warming is a SCAM doesn’t mean we should pollute.

who gives a ***** if global warming is real or not….. isn’t it extremely important to use green energy sources to keep our air cleaner.. our water cleaner.. and earth happier in general?

I’m not convinced man-made Global Warming is real. But it doesn’t really matter. I’d like to live life without polution, where I don’t have that ugly brown cloud over my city. I’m all for clean energy. Lets do our best to not pollute.

Personally I believe in Global Warming. But you know what? It DOESN’T MATTER THAT MUCH! With or without global warming the global environment is in a rough enough state that serious action is required global warming or no global warming.

I’m just so sick of these articles saying it’s not real, ‘nothing to see here.’ Maybe it is, maybe it isn’t, but I can’t see a disadvantage to erring on the side of caution, and cleaning up our act. I can’t see a problem with humans improving how we treat the planet, and this has been a good motivator. People, governments, and thusly corporations are not going to change unless there is motivation.

That last one is fairly close to my own views on the subject, but I have to admit that I’m a lot less comfortable with that position when I look at it in this light. Now, granted, Digg commenters can’t be taken as representative of anything other than Digg commenters. And none of them (in the first few dozen, anyway) come right out and say “I believe this idea to be false, but I will support it anyway because of the environmental benefits it provides.” Plus, anywhere that discourse gets politicized to this extent, there is another major driver behind both sides of the debate — the need to have one’s own side “win.” Whether a proposition is true or false is apparently less important than whether it is useful or not, and even that fact is less important than the overarching consideration of whether it belongs to us or to them.

But still how different are the two following propositions?

X is false, but people should believe X because of the benefits it brings.

We don’t know whether X is true or false, but people should believe X because of the benefits it brings.

Erring on the side of caution is all very well, but that is not what we do when we buy into a proposition irrespective of its truth content because belief in that proposition brings about certain benefits. This makes me wonder — how much of what we believe as a society or as individuals are we bought into not because it is true, but because it is useful? And then how much of what we believe do we believe simply because it belongs to our side?

UPDATE: Just found the following via James Taranto and The Best of the Web Today:

As an atheist, I truly believe Africa needs God

That’s about as straightforward as it gets, isn’t it? “X is false,” etc. Very interesting.

Better All The Time #41

Let’s set the mood with a little harp music, shall we?

Hope you all enjoy this special Christmas and Babymoon edition of Better All The Time. The Specu-Wife and I are off to Hawaii for Christmas, our last getaway as a more or less independent couple (with my daughter grown and in college) before the arrival of our new baby daughter in April.

I’ll be on hiatus for a week or so. Here’s wishing you all a wonderful and joyous holiday season.

Aloha, and Mele Kalikimaka!