Monthly Archives: January 2009

Taking Less from Moore

Back in 1987 computer shoppers were conditioned to spend about $2000 for a new computer. Buying a computer was a major purchase. It was the sort of thing that you did once for your kid as he went off to college. You expected the student to use the machine all four years of college and maybe afterward.

That’s what my parents did for me as I packed up for college that year. Five years later I replaced that machine with another desktop that was much more powerful (it ran Windows!) for about $1,800.

I benefited from something I’ll call “Moore’s Dividend.” A popular corollary of Moore’s law holds that the cost of a given amount of computing power falls by half every 18 months. That meant that in 1992 I could have:

a) paid much less for the same computer power as I had originally taken to college, or

b) spent the same money ($2000) for much more computer.

I went with option B. Mostly. Options “A” and “B” are really two ends of a continuum. I was closer to the “B” side, but I also spent $200 less.
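Under the cost-halving rule quoted above, the arithmetic behind options “A” and “B” is easy to sketch. The figures below are illustrative back-of-the-envelope numbers derived from that rule, not actual 1992 prices:

```python
# Illustrative sketch of "Moore's Dividend" under the rule quoted above:
# the cost of a fixed amount of computing power halves every 18 months.

def cost_multiplier(years, halving_period_years=1.5):
    """Fraction of the original price that the same amount of
    computing power costs after `years` have elapsed."""
    return 0.5 ** (years / halving_period_years)

years = 1992 - 1987          # five years between the two purchases
m = cost_multiplier(years)   # roughly 0.1 -- about a tenth of the price

# Option A: buy the 1987 machine's power at the new, lower price.
option_a_price = 2000 * m

# Option B: spend the same $2000 and pocket the power multiple instead.
option_b_power = 1 / m

print(f"Option A: about ${option_a_price:.0f} for the 1987 machine's power")
print(f"Option B: about {option_b_power:.1f}x the 1987 power for $2000")
```

Five years is about three and a third halving periods, so the same power costs roughly a tenth of its old price, or the same money buys roughly ten times the power.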

And that’s the way it went. Every couple of years I’d buy a new machine. Each would be much more powerful but would cost about $200 less. Like most computer shoppers, I tended to take most of the Moore’s Dividend in added computing power. We did this because greater power added functionality. We also felt compelled to do it because our operating systems and software grew larger regardless of functionality. We felt we had to grow to keep up.

On January 15th The Economist had an article on the rise of the cheap sector of the computer business. Netbooks, of course, are the most obvious sign of this, but there are other indicators that computer shoppers are taking their Moore dividend in cash rather than computer power.

Even Microsoft has cottoned on: the next version of Windows is intended to do the same as the last version, Vista, but to run faster and use fewer resources. If so, it will be the first version of Windows that makes computers run faster than the previous version.

The Economist blames the recession for this, but my question is: is going small a bad thing? I’ve long had a fascination with the low end of the computer revolution. I think it’s a good thing when a kid with just a few bucks can cross the digital divide. I’m looking forward to a future landmark: the day when McDonald’s offers a McLaptop for $10 with the purchase of a Happy Meal.

I bought a netbook this Christmas for my kids. It’s no toy: it can handle word processing, spreadsheets, email, browsing, iTunes, and simple games without difficulty. I’m sure I could use it to run the switchboard for the FastForward Radio show. For many people, perhaps most, it’s all the computer they need. That’s the upside: almost everyone in the developed world can now get their hands on a “good enough” computer.

If there’s a downside, it’s this: if there were a killer app that required more computing power, many people would find that they “need” more computer. The lack of a killer app is fueling the race to the bottom more than the recession is.

Case in point – Vista underperformed in the market, in part, because it offered the normal new-OS bloat without a significant increase in functionality. Now, since the next killer app has yet to arrive, the next Windows will shave the bloat. It’s probably a smart move on Microsoft’s part, but I’m wishing we had the killer app.

And what would that be? I’ll wager it’s the moment when either AI-driven personal digital assistants or virtual worlds come of age.

2200 Will Never Come

Michael Anissimov explains:

2200 will never come. Our brains will be accelerated by a factor of millions before 2100. 2200 won’t be for millions of years.

It’s not an entry — just an offhand comment in a thread about a picture of a terraformed Mars. Gotta love Michael’s blog. The Nuclear Test is great, too.

Life in the Real World

We noted last week the possibility that our entire universe may just be a low-res 3D rendering of the real 2D universe, which exists out on the boundary of what we normally think of as “the universe.” The hologram that we live in is extremely coarse compared to the boundary universe — our “pixels” are 19 orders of magnitude greater than those used in the real universe.

I’m thinking that the real universe needs higher resolution in order to contain the same structures that our universe does, only in two dimensions rather than three. But surely four or five orders of magnitude would take care of that? That still puts the boundary universe at a resolution 15 orders of magnitude higher than what’s possible here.

So the boundary universe is potentially encoded at a level of detail 1,000,000,000,000,000 times greater than our universe. This raises some questions.
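The bookkeeping behind that figure is simple enough to write down. Both inputs come from the speculation above, not from physics: the 19-orders-of-magnitude pixel gap, and my guess of roughly four orders spent on encoding 3D structure in two dimensions.

```python
# Order-of-magnitude bookkeeping for the holographic-universe musing above.
# Both inputs are this post's own assumptions, not established physics.

pixel_gap_orders = 19        # our "pixels" vs. the boundary's, per the post
encoding_overhead_orders = 4 # guessed resolution cost of folding 3D into 2D

surplus_orders = pixel_gap_orders - encoding_overhead_orders
surplus_factor = 10 ** surplus_orders

print(f"Surplus detail: 10^{surplus_orders} = {surplus_factor:,}x")
# prints: Surplus detail: 10^15 = 1,000,000,000,000,000x
```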

– What is the boundary universe doing with all that information? Is it keeping better track of things than we are in this universe? Is information about the past available in more detailed form there?

– Are we just a projection of the boundary universe, or are we what’s going on in there? I mean — is what’s happening in there just a two-dimensional version of a guy at a keyboard, or is there some kind of uberPhil in the boundary universe writing a blog post that is 15 orders of magnitude more sophisticated than this one?

– If this (highly improbable) picture of the universe were to turn out to be true, should all metaphysical and cosmological speculations (including the ones I’m making right now) be tabled until we understand the boundary universe better?

Anyhow, that’s what goes on in my head up here in the big, grainy, blurry holographic construct that we call the universe.

upfromnothing.JPG

A Day to Remember

January 27, 1967.

Many years ago the great British explorer George Mallory, who was to die on Mount Everest, was asked why did he want to climb it. He said, “Because it is there.”

Well, space is there, and we’re going to climb it, and the moon and the planets are there, and new hopes for knowledge and peace are there. And, therefore, as we set sail we ask God’s blessing on the most hazardous and dangerous and greatest adventure on which man has ever embarked.

John F. Kennedy – September 12, 1962

apollo1patch.jpg

And, of course, tomorrow is also a day to remember:

There’s a coincidence today. On this day 390 years ago, the great explorer Sir Francis Drake died aboard ship off the coast of Panama. In his lifetime the great frontiers were the oceans, and a historian later said, ‘He lived by the sea, died on it, and was buried in it.’ Well, today we can say of the Challenger crew: Their dedication was, like Drake’s, complete.

The crew of the space shuttle Challenger honored us by the manner in which they lived their lives. We will never forget them, nor the last time we saw them, this morning, as they prepared for the journey and waved goodbye and ‘slipped the surly bonds of earth’ to ‘touch the face of God.’

Ronald Reagan – January 28, 1986

challengerpatch.jpg

Next Sunday, too:

In the skies today we saw destruction and tragedy. Yet farther than we can see, there is comfort and hope. In the words of the prophet Isaiah, “Lift your eyes and look to the heavens. Who created all these? He who brings out the starry hosts one by one and calls them each by name. Because of His great power, and mighty strength, not one of them is missing.”

The same Creator who names the stars also knows the names of the seven souls we mourn today. The crew of the shuttle Columbia did not return safely to Earth; yet we can pray that all are safely home.

May God bless the grieving families. And may — may God continue to bless America.

George W. Bush — February 1, 2003

columbiapatch.jpg

And while we’re on the subject, we should also remember these brave individuals:

Soyuz 1

Soyuz 11

sovietpatches.jpg

Unstress Your Cells

FuturePundit has the scoop on some interesting research reported in Cell Metabolism:

A new study in the January 7th issue of Cell Metabolism, a Cell Press publication, helps to explain why obese people and animals fail to respond to leptin, a hormone produced by fat that signals the brain to stop eating. What’s more, they show that two FDA-approved drugs might restore leptin sensitivity, offering a novel treatment for obesity.

“Most importantly, our study is the first success in sensitizing obese mice on a high-fat diet to leptin,” said Umut Ozcan of Harvard Medical School. “If it works in humans, it could treat obesity.”

When leptin was first discovered some 13 years ago, it led to great excitement in the field, Ozcan said. Studies showed that obese mice lacking the hormone lost weight when leptin was administered. The buzz over leptin’s potential as an obesity therapy soon waned, however, because obese animals and people don’t respond to the hormone. Efforts over the years to find drugs that act as leptin sensitizers have also failed.

A part of cells known as the endoplasmic reticulum (ER) is involved in many cellular processes including protein manufacturing, lipid and carbohydrate synthesis, and other functions. Stress in the ER appears to play a role in a metabolic disorder linked to obesity. These researchers decided that perhaps ER stress played a role in reduced response of the brain’s hypothalamus to leptin.

Leptin looked exciting, at first, but it didn’t seem to pan out. Now we know why.

So, you want to cut your appetite? Stop stressing the endoplasmic reticulum. Gosh, if only it were as easy as it sounds!

As a side benefit, cutting the ER stress might be crucial to fighting aging as well. See how all this stuff works together?

FastForward Radio — Setting the Future Agenda Part 1

Phil, Stephen, and Michael, with special guest PJ Manney, set the agenda for the future the Speculist way.


Listening Options:

Stream our latest shows:


Or:

add_to_itunes.gif

Or download MP3s for all the archived shows at:

Listen to FastForward Radio... on Blog Talk Radio


The gang started with this question: “What are we most looking forward to in the coming years?” Possibilities include:

  • The end of aging and other diseases?
  • New production technologies and the end of poverty and hunger?
  • New energy technologies?
  • Technologies to clean up the environment?
  • Artificial intelligence?
  • Artificial Virtual Worlds?
  • The REAL Space Age?

Time only permitted getting through half the list on this program. Sunday they will finish working through the list and discuss what should be on the agenda, how we prioritize the agenda items, and what dependencies exist between them.

Government Jobs

The worrying graph shown below has made the rounds in the blogosphere over the past few days.

govemanufacturing6908.jpg

Here we see how the United States now has more people employed by government than work in manufacturing and construction. Some read these numbers and see certain doom. It’s the tipping point! How can the economy possibly survive when fewer and fewer of us produce any wealth, while more and more of us get a share of what wealth is produced through labor that is not productive (at least not in the economic sense)?

To begin to answer that question, I offer the following two videos. You only need to watch a little of the first one to get the basic idea. The second one is pretty short.

So how many human beings were required to assemble a Ford Model T in the early 20th century versus a Toyota (the comments suggest possibly a Tercel) in the early 21st? It’s all about the productivity numbers. These figures are what you have to consider before bemoaning the loss of jobs in manufacturing and construction:

productivity4707.jpg
Source: Bureau of Labor Statistics

Over time, these increases in productivity mean that it takes fewer and fewer people to produce goods. So unless we’re going to produce way more than we could ever consume, it’s inevitable that more and more people will be employed in non-productive jobs. A possible endgame is that one day the robots will do all the real work, and those of us who aren’t government bureaucrats will work in the corporate world with job titles like “Director of Organizational Emphasis” or “Senior Manager, Strategic Thoughtspace.”

Or maybe we’ll drop the charade and just let the robots put us all on some kind of allowance. If everyone gets whatever they need (in the material sense) from helpful productive robots, a very different economy takes hold. In that world, the “wealthiest” individuals might be those who come up with the best ideas, or who display the most talent, or who hold the most sway with other people — or with the robots.

It’s a daunting idea, but it certainly sounds like more fun than having us all end up working for the government.

UPDATE: Yeah, but that future when the machines are capable of taking over is decades, maybe centuries away, right?

Don’t count on it.

Never Say Never

No matter who you voted for or what your expectations are for Barack Obama’s presidency, today is a great day for America. Peggy Noonan writes in the Wall Street Journal:

And this has grown old, and maybe it’s the last time to say it, history moving so fast, but there’s something we all know so well that we are perhaps forgetting to see it in the forefront. But a long-oppressed people have raised up a president. It is moving and beautiful and speaks to the unending magic and sense of justice of our country. The other day the journalist John O’Sullivan noted that 150 years after slavery, a black man stands in the place of Lincoln in the inaugural stands, and this country has proved again that anything is possible, that if we can do this we can do anything. That is a good thing to remember at a difficult time.

A lot of people thought they would never live to see this day, and were wrong. The future came faster than expected. It tends to do that, which is why I’m a little disturbed by Colin Powell’s remarks on the inauguration, in light of the celebration of MLK Day:

Even with Barack Obama’s election as President, Powell also talked about not letting King’s dream die.

“He would never rest. He would never be satisfied. He would still be beating that drum,” Powell told the crowd.

Sorry, I have to take issue with the word “never.” If Powell just meant that Dr. King would “still not be satisfied,” then I can certainly see that. But to say that he would “never” be satisfied is to argue that a satisfactory resolution of race relations in this country is not achievable, that it lies perpetually out there somewhere beyond the horizon.

On the most recent FastForward Radio, we discussed a potential coming Utopia. I argued that Utopias are achievable but that they are always relative and that they don’t seem like “Utopia” to the people who live there. The reason is that by “Utopia,” we tend to mean a future in which no more problems exist. That probably can’t happen. Completely solve any of the world’s major problems — poverty, disease, war — and you will still have a world in which problems exist. The people who live in that world, though far better off than we are, will still believe that they have difficult lives, filled with dangers and risks.

But in his “I Have a Dream” speech, Dr. King did not describe a world in which all problems are solved. He described a world in which one major, complex, ugly, and seemingly unsolvable problem was eliminated. The power of the speech is predicated on the idea that somehow, maybe, someday, the dream will come true. If he had prefaced his remarks with the words “Now, of course, none of this will ever happen, but…” how effective would that speech have been?

Believing the future we want is possible is a major contributing factor in how we bring it about. We must be careful about how we throw that word “never” around.