<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: There&#8217;s Always the Space Ark</title>
	<atom:link href="https://blog.speculist.com/scenarios/theres-always-t-1.html/feed" rel="self" type="application/rss+xml" />
	<link>https://blog.speculist.com/scenarios/theres-always-t-1.html</link>
	<description>Live to see it.</description>
	<lastBuildDate>Thu, 16 Dec 2021 08:21:00 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.6.1</generator>
	<item>
		<title>By: Jim Strickland</title>
		<link>https://blog.speculist.com/scenarios/theres-always-t-1.html#comment-417</link>
		<dc:creator>Jim Strickland</dc:creator>
		<pubDate>Mon, 25 Apr 2005 22:28:47 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=275#comment-417</guid>
		<description><![CDATA[re: robot takeover

Extraordinarily unlikely.  Yes, it probably is possible to build AIs.  Yes, it probably is possible to build human-level ones, but why would anyone bother, save for proof of concept?  For most applications a human-level (or better) AI is far more intelligence than is needed.  I do not, for example, want a car with a human intelligence that will disagree about where we should go.  I want a car with artificial dog intelligence - avoids running into things, goes home when the driver is too drunk to drive, and comes when called.  A corollary to this is that any useful human-level AI, in order to BE useful, will be subservient.  And any dunce who builds robot slaves who aren&#039;t happy and fulfilled as robot slaves (and therefore not inclined to change their status) deserves what he or she gets.

Seriously though, the idea of robots taking over is ridiculous simply for the cost.  Even with self-replicating factories, it&#039;s still easier, cheaper (and much more fun) to make more people. We, too, are a self-replicating system: our raw materials are naturally occurring, and our designs and to some extent software have had a million years&#039; debugging.

-Jim]]></description>
		<content:encoded><![CDATA[<p>re: robot takeover</p>
<p>Extraordinarily unlikely.  Yes, it probably is possible to build AIs.  Yes, it probably is possible to build human-level ones, but why would anyone bother, save for proof of concept?  For most applications a human-level (or better) AI is far more intelligence than is needed.  I do not, for example, want a car with a human intelligence that will disagree about where we should go.  I want a car with artificial dog intelligence &#8211; avoids running into things, goes home when the driver is too drunk to drive, and comes when called.  A corollary to this is that any useful human-level AI, in order to BE useful, will be subservient.  And any dunce who builds robot slaves who aren&#8217;t happy and fulfilled as robot slaves (and therefore not inclined to change their status) deserves what he or she gets.</p>
<p>Seriously though, the idea of robots taking over is ridiculous simply for the cost.  Even with self-replicating factories, it&#8217;s still easier, cheaper (and much more fun) to make more people. We, too, are a self-replicating system: our raw materials are naturally occurring, and our designs and to some extent software have had a million years&#8217; debugging.</p>
<p>-Jim</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Dave Schuler</title>
		<link>https://blog.speculist.com/scenarios/theres-always-t-1.html#comment-416</link>
		<dc:creator>Dave Schuler</dc:creator>
		<pubDate>Mon, 18 Apr 2005 18:47:09 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=275#comment-416</guid>
		<description><![CDATA[And the Earth being swallowed up by a black hole will be even less likely if &lt;a href=&quot;http://www.nature.com/news/2005/050328/full/050328-8.html&quot; rel=&quot;nofollow&quot;&gt;they don&#039;t exist&lt;/a&gt;.]]></description>
		<content:encoded><![CDATA[<p>And the Earth being swallowed up by a black hole will be even less likely if <a href="http://www.nature.com/news/2005/050328/full/050328-8.html" rel="nofollow">they don&#8217;t exist</a>.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Engineer-Poet</title>
		<link>https://blog.speculist.com/scenarios/theres-always-t-1.html#comment-415</link>
		<dc:creator>Engineer-Poet</dc:creator>
		<pubDate>Fri, 15 Apr 2005 22:34:48 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=275#comment-415</guid>
		<description><![CDATA[A civilization capable of building a Dyson sphere wouldn&#039;t care much about the fate of a single planet; it probably would have used all the planets in its system for building materials.

As I&#039;ve speculated elsewhere, a GRB caused by the merger of two neutron stars would probably be predictable from gravitational radiation.&#160; (You&#039;d have some warning of a supernova from the neutrino burst associated with the core collapse, but it would be hours and not months or years.)&#160; Now, I freely confess that I&#039;m not as up on radiation physics as I could be, but consider that causing serious planetary damage with gamma rays requires a very large amount of energy to be deposited in the atmosphere.&#160; Suppose that you stick a mass of material, perhaps only centimeters thick on average, some distance (tens of thousands to millions of miles) away from the planet in the direction of the source.&#160; This material stops some of the gamma rays, but mostly it &lt;i&gt;scatters them&lt;/i&gt;.&#160; With the photons deflected, most of the radiation goes somewhere other than the planet.

For any statistical range of scattering angles, there is a distance where only an arbitrarily small fraction of the radiation still hits the planet.&#160; Set that small enough to manage the atmospheric effects and get the positioning (timing of objects in various orbits!) of the shields just right, and the problem is solved.&#160; Also, just covering the sky with scatterers won&#039;t do; you also have to keep off-axis radiation from being scattered onto the planet.

Learning enough about the physics of neutron-star mergers and supernovae to have that timing dead-nuts on is going to be a big part of the trick.

Preventing nuclear attacks is more of an exercise in psychology than physics.&#160; Sure, it matters NOW, but in the grand scheme of the universe it&#039;s small potatoes. ;-)]]></description>
		<content:encoded><![CDATA[<p>A civilization capable of building a Dyson sphere wouldn&#8217;t care much about the fate of a single planet; it probably would have used all the planets in its system for building materials.</p>
<p>As I&#8217;ve speculated elsewhere, a GRB caused by the merger of two neutron stars would probably be predictable from gravitational radiation.&nbsp; (You&#8217;d have some warning of a supernova from the neutrino burst associated with the core collapse, but it would be hours and not months or years.)&nbsp; Now, I freely confess that I&#8217;m not as up on radiation physics as I could be, but consider that causing serious planetary damage with gamma rays requires a very large amount of energy to be deposited in the atmosphere.&nbsp; Suppose that you stick a mass of material, perhaps only centimeters thick on average, some distance (tens of thousands to millions of miles) away from the planet in the direction of the source.&nbsp; This material stops some of the gamma rays, but mostly it <i>scatters them</i>.&nbsp; With the photons deflected, most of the radiation goes somewhere other than the planet.</p>
<p>For any statistical range of scattering angles, there is a distance where only an arbitrarily small fraction of the radiation still hits the planet.&nbsp; Set that small enough to manage the atmospheric effects and get the positioning (timing of objects in various orbits!) of the shields just right, and the problem is solved.&nbsp; Also, just covering the sky with scatterers won&#8217;t do; you also have to keep off-axis radiation from being scattered onto the planet.</p>
<p>Learning enough about the physics of neutron-star mergers and supernovae to have that timing dead-nuts on is going to be a big part of the trick.</p>
<p>Preventing nuclear attacks is more of an exercise in psychology than physics.&nbsp; Sure, it matters NOW, but in the grand scheme of the universe it&#8217;s small potatoes. <img src='https://blog.speculist.com/wp-includes/images/smilies/icon_wink.gif' alt=';-)' class='wp-smiley' /> </p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Karl Hallowell</title>
		<link>https://blog.speculist.com/scenarios/theres-always-t-1.html#comment-414</link>
		<dc:creator>Karl Hallowell</dc:creator>
		<pubDate>Fri, 15 Apr 2005 21:33:36 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=275#comment-414</guid>
		<description><![CDATA[Well, I think most of the other dangers pale compared to nuclear war. IMHO, we&#039;ll long be at the stage where a war could take out most of the human race - even after we become strongly established in space. The only valid solution in the long term is diversification at the galactic level.

And what&#039;s not properly addressed here is that the response to 9/11 was at least as damaging as the attack itself, and that human civilization is overcentralized. It is vulnerable to terrorist attack, and that vulnerability is what causes the overreaction.

In the US, if a terrorist group acquires nukes and sets one off, then there really isn&#039;t any good way to protect the infrastructure. A place like Manhattan, Silicon Valley, or Houston is too concentrated. That creates the climate for overreaction.
		<content:encoded><![CDATA[<p>Well, I think most of the other dangers pale compared to nuclear war. IMHO, we&#8217;ll long be at the stage where a war could take out most of the human race &#8211; even after we become strongly established in space. The only valid solution in the long term is diversification at the galactic level.</p>
<p>And what&#8217;s not properly addressed here is that the response to 9/11 was at least as damaging as the attack itself, and that human civilization is overcentralized. It is vulnerable to terrorist attack, and that vulnerability is what causes the overreaction.</p>
<p>In the US, if a terrorist group acquires nukes and sets one off, then there really isn&#8217;t any good way to protect the infrastructure. A place like Manhattan, Silicon Valley, or Houston is too concentrated. That creates the climate for overreaction.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Kathy</title>
		<link>https://blog.speculist.com/scenarios/theres-always-t-1.html#comment-413</link>
		<dc:creator>Kathy</dc:creator>
		<pubDate>Thu, 14 Apr 2005 19:30:40 +0000</pubDate>
		<guid isPermaLink="false">http://localhost/specblog/?p=275#comment-413</guid>
		<description><![CDATA[This is a fabulous post, Phil. How do you do it?

I have missed roaming the blogosphere lately.]]></description>
		<content:encoded><![CDATA[<p>This is a fabulous post, Phil. How do you do it?</p>
<p>I have missed roaming the blogosphere lately.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
