<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>The Speculist &#187; Defining Humanity</title>
	<atom:link href="https://blog.speculist.com/category/defining_humanity/feed" rel="self" type="application/rss+xml" />
	<link>https://blog.speculist.com</link>
	<description>Live to see it.</description>
	<lastBuildDate>Thu, 25 Jul 2019 23:07:25 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.6.1</generator>
		<item>
		<title>More Thoughts on Human Augmentation</title>
		<link>https://blog.speculist.com/defining_humanity/more-thoughts-o-1.html</link>
		<comments>https://blog.speculist.com/defining_humanity/more-thoughts-o-1.html#comments</comments>
		<pubDate>Tue, 20 Nov 2007 06:43:38 +0000</pubDate>
		<dc:creator>Phil Bowermaster</dc:creator>
				<category><![CDATA[Defining Humanity]]></category>

		<guid isPermaLink="false">http://localhost/specblog/?p=1396</guid>
		<description><![CDATA[From Brian Wang, our guest on Sunday&#8217;s FastForward Radio. Additionally, Brian presents some other ideas about how we go about getting to the kind of future we&#8217;re looking for, including this analogy that he referenced on the show: I think of the Tom Hanks character in Saving Private Ryan on the opening Omaha beach sequence. [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>From <a href="http://advancednano.blogspot.com/2007/11/interviewed-by-speculistcom.html">Brian Wang</a>, our guest on Sunday&#8217;s <a href="https://www.blog.speculist.com/archives/001559.html">FastForward Radio</a>.  Additionally, Brian presents some other ideas about how we go about getting to the kind of future we&#8217;re looking for, including this analogy that he referenced on the show:<br />
<blockquote>
<p>I think of the Tom Hanks character in Saving Private Ryan on the opening Omaha beach sequence. Some soldiers mistakenly believed it was better to hide behind the steel crosses on the beach or to not creatively attack the pill boxes that had them pinned down. I think of the difficult goals of getting space colonized in a major way or conquering diseases and making significant progress against age deterioration as pill boxes that have us pinned down on a dangerous beach. Just because the time has been stretched out to decades, centuries, millennia does not mean that we are not collectively on a dangerous beach. We can and should do a lot over the next 50 years and beyond.</p></blockquote>
<p>Read the whole thing.</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.speculist.com/defining_humanity/more-thoughts-o-1.html/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The Three Goals, Game Theory, and Western Civilization</title>
		<link>https://blog.speculist.com/artificial_intelligence/the-three-goals-2.html</link>
		<comments>https://blog.speculist.com/artificial_intelligence/the-three-goals-2.html#comments</comments>
		<pubDate>Sun, 10 Jun 2007 12:46:19 +0000</pubDate>
		<dc:creator>Phil Bowermaster</dc:creator>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Defining Humanity]]></category>
		<category><![CDATA[Economics]]></category>
		<category><![CDATA[Humanity]]></category>
		<category><![CDATA[Intelligence]]></category>
		<category><![CDATA[Philosophy]]></category>

		<guid isPermaLink="false">http://localhost/specblog/?p=1211</guid>
		<description><![CDATA[A while back, I wrote about the possibility of updating the Three Laws of Robotics as goals in order to make them a more practical means of getting at a friendly artificial general intelligence. This kicked off some interesting discussion, including some debate as to whether my &#8220;goals&#8221; aren&#8217;t really just rules rephrased. In which [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>A while back, I wrote about the possibility of <a href="https://www.blog.speculist.com/archives/001300.html">updating</a> the <a href="http://en.wikipedia.org/wiki/Three_Laws_of_Robotics">Three Laws of Robotics</a> as goals in order to make them a more practical means of getting at a friendly artificial general intelligence. This kicked off some interesting discussion, including some debate as to whether my &#8220;goals&#8221; aren&#8217;t really just rules rephrased. In which case, the argument went, they probably wouldn&#8217;t help all that much. Michael Anissimov commented:<br />
<blockquote>
<p>What would work better would be transferring over the moral complexity that you used to make up these goals in the first place.</p>
<p>Also, as you point out, these goals are vague. More specific and useful from a programmer&#8217;s perspective would be some kind of algorithm that takes human preferences as inputs and outputs actions that practically everyone sees as reasonable and benevolent. Hard to do, obviously, but CEV (<a href="http://www.singinst.org/upload/CEV.html">http://www.singinst.org/upload/CEV.html</a>) is one attempt.</p></blockquote>
<p>That&#8217;s really the crux. Moral complexity <em>does</em> exist in algorithmic form&#8230;within our brains. And that goes to the difference between laws and goals. My goals are what I&#8217;m trying to do, both morally and in other areas. There are some sophisticated software programs running in my brain made up of things that I&#8217;ve been taught, things I&#8217;ve figured out for myself, and things that are built in.  All of these add up to provide me the tendency to act a certain way in a certain situation. The strategies that drive that software are my moral goals.</p>
<p>Laws, on the other hand, exist outside of myself. I am not specifically programmed to <em>do unto others as I would have them do unto me.</em> I have some tendencies in that direction, but there&#8217;s nothing stopping me from acting otherwise, and &#8212; let&#8217;s face it &#8212; I often do. I have tendencies to be nice, fair, just, etc., but I also have tendencies to try to get what I want, to get even with those who have wronged me, to try to be a bigshot, and so on. These tendencies compete with each other, and my behavior overall is some rough compromise.</p>
<p>An artificial general intelligence (AGI) built as a reverse-engineered human intelligence would be in the same position. It would have the &#8220;moral complexity&#8221; Michael mentioned, but also the baggage of competing tendencies. You could no more guarantee such an intelligence&#8217;s compliance with a rule or set of rules than you could a human being&#8217;s.</p>
<p>A law like the Golden Rule is a high-level abstraction of certain strategies (algorithms) that produce a desired set of results. On a conscious level, I can use that abstraction to determine whether my behavior is where I want it to be:<br />
<blockquote>
<p>Wife complained of being chilly when I got up at 5:00 AM to work out. Covered her with blanket. <em>Good.</em></p>
<p>Sped up on highway in an attempt to keep a guy trying to merge from going ahead of me. <em>Not so good.</em></p>
<p>Commenter on blog revealed that he doesn&#8217;t really understand the subject at hand. Ripped him to shreds. <em>Bad.</em></p></blockquote>
<p>Through discipline and practice, I can &#8220;program myself&#8221; with it to try to move my tendencies in that direction. But I can&#8217;t write it into my moral source code and set it as an unbreakable behavioral rule. That&#8217;s partly because it&#8217;s too vague and partly because I simply lack that capability. </p>
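<p>To make that process concrete, here&#8217;s a minimal sketch in Python. Everything in it &#8212; the journal entries, the grades, the size of the nudge &#8212; is invented for illustration; the point is just the loop of scoring behavior against a stated standard and feeding the result back into future tendencies:</p>
<pre><code># A toy version of "evaluating behavior against a defined standard."
# The grades and the 0.05 nudge factor are invented for illustration.

journal = [
    ("covered chilly wife with a blanket", +1),     # Good
    ("sped up to box out a merging driver", -1),    # Not so good
    ("ripped a confused commenter to shreds", -2),  # Bad
]

tendency = 0.5  # inclination to follow the Golden Rule: 0 = never, 1 = always

for action, grade in journal:
    # nudge the tendency up when behavior met the standard, down otherwise
    tendency = min(1.0, max(0.0, tendency + 0.05 * grade))
    print(f"{action}: {grade:+d} (tendency now {tendency:.2f})")
</code></pre>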
<p>Presumably, I could be externally constrained always to follow the Golden Rule, no matter what. If my actions were being constantly monitored, and I was told that I would be killed immediately upon violating the rule&#8230;I&#8217;d certainly do my best, now wouldn&#8217;t I?</p>
<p>Still, I&#8217;d have a hard time believing that anyone holding me in such a position was much of a practitioner of that rule him or herself. If the people trying to enforce the rule on me in this manner told me that it was for my own good &#8212; that they were trying to make me a better person &#8212; I don&#8217;t know that I&#8217;d buy it. And if I figured out that they were only doing this to protect themselves from harm I might do to them, I think I&#8217;d be pretty annoyed with them (to say the least).</p>
<p>I would expect a reverse-engineered human intelligence to feel the same way, so I don&#8217;t think attempting to constrain an AGI in such a manner would be a particularly good idea, especially not if we have a reasonable expectation that it will eventually be smarter and more powerful than us. On the other hand, if we let it use the process I described above &#8212; evaluating its own behavior against a defined standard &#8212; an AGI <em>might </em>achieve far better results than I have, if only because it can think faster and would have much more subjective time in which to act. This is the notion of <a href="http://quantumghosts.blogspot.com/2006/09/friendly-ai-possible-friendly.html">recursive self-improvement</a> that matoko kusanagi referred to. The trouble with recursive self-improvement on its own, as Eliezer Yudkowsky and others have pointed out, is that if the AI starts &#8220;improving&#8221; in a direction that&#8217;s bad for humanity, things could get out of hand pretty quickly.</p>
<p>If the artificial intelligence is a <em>modified</em> version of human intelligence, or a new intelligence built from scratch, we raise the possibility of building a moral structure into the intelligence, rather than trying to enforce it from outside. That&#8217;s the idea behind the Three Laws and my Three Goals &#8212; that they would somehow be built in. But they certainly can&#8217;t be built in in anything like their current form. Michael Sargent (and others) pointed out the weakness of that approach &#8212; the less important goals have to take a back seat to the more important ones:<br />
<blockquote>
<p>Each Goal must have a clear and unbreakable priority over the others that follow it and thus, in the order stated, collective continuity trumps individual safety (&#8220;The needs of the many outweigh the needs of the few, or the one.&#8221;), individual safety (broadly construed, &#8216;stasis&#8217;) trumps individual liberty (&#8216;free will&#8217;), and happiness (&#8216;utility&#8217;, a notoriously slippery concept for economists and philosophers to get a firm intellectual grip on) trumps both individual liberty and individual well-being (allowing potentially self-destructive behavior on the individual level insofar as that behavior doesn&#8217;t exceed the standard established for &#8216;safety&#8217; in Goal 2).</p></blockquote>
<p>I see the reasoning here, but I&#8217;m not 100% convinced. Consider the goals that drive a much simpler AI system &#8212; the autopilot found on any jet airliner. The number one unbreakable goal has got to be <em>don&#8217;t crash the plane.</em> But there are many other goals that might drive such a system:<br />
<blockquote>
<p><em>Don&#8217;t move in such a way as to make the passengers sick.</em></p>
<p><em>Don&#8217;t waste fuel.</em></p>
<p><em>In landing, don&#8217;t go past the end of the runway.</em></p></blockquote>
<p>Above all, the system will seek to ensure that first goal. But within the context of ensuring that first goal, it also has to do everything it can to ensure the others. And, yes, it can and must sacrifice the others from time to time in service of the first. So the plane might temporarily move in a nauseating way, or it might waste fuel, or it might even slide past the end of the runway if doing any of those things helps ensure the first goal.</p>
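<p>Purely as a sketch &#8212; the goal names and scores below are my own invention, not anything from a real autopilot &#8212; here&#8217;s what that kind of strict priority ordering might look like in Python. Comparing candidate maneuvers on a tuple of goal scores means a lower goal only ever matters when the higher goals are tied:</p>
<pre><code># A toy lexicographic goal system. Candidates are scored against goals in
# strict priority order; tuple comparison means "don't crash" always
# dominates, and fuel economy only breaks whatever ties remain.
# All names and numbers are invented for illustration.

GOALS = ["dont_crash", "dont_sicken_passengers", "dont_waste_fuel"]

candidates = [
    {"name": "smooth descent", "dont_crash": 1.0,
     "dont_sicken_passengers": 0.9, "dont_waste_fuel": 0.6},
    {"name": "steep dive", "dont_crash": 0.7,
     "dont_sicken_passengers": 0.1, "dont_waste_fuel": 0.9},
    {"name": "go around", "dont_crash": 1.0,
     "dont_sicken_passengers": 0.9, "dont_waste_fuel": 0.2},
]

def choose(actions):
    """Pick the action that best satisfies the goals in priority order."""
    return max(actions, key=lambda a: tuple(a[g] for g in GOALS))

print(choose(candidates)["name"])  # smooth descent
</code></pre>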
<p>Reader TJIC suggested that an AI programmed to meet the Three Goals as I defined them&#8230;<br />
<blockquote>
<p>    1. Ensure the survival of life and intelligence.</p>
<p>    2. Ensure the safety of individual sentient beings.</p>
<p>    3. Maximize the happiness, freedom, and well-being of individual sentient beings.</p></blockquote>
<p>&#8230;would end up creating a nanny state wherein human freedom is always sacrificed to individual safety. And he may well have a point, but I would argue that just as an autopilot can be calibrated to strike whatever we deem the appropriate balance between not crashing the plane and not making us sick, so could these three goals be calibrated to maximize human freedom within an acceptable level of individual risk &#8212; whatever that might be.</p>
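<p>The difference between an unbreakable ordering and a calibrated one is easy to sketch in code. This is illustration only &#8212; the policies, numbers, and risk-tolerance knob are all made up &#8212; but it shows safety becoming an adjustable constraint rather than an absolute trump, with freedom maximized within it:</p>
<pre><code># Sketch of the calibration idea: Goal 2 (safety) becomes an adjustable
# constraint, and Goal 3 (freedom) is maximized within it.
# Policies and numbers are invented for illustration.

RISK_TOLERANCE = 0.2  # the "acceptable level of individual risk" knob

policies = [
    {"name": "total lockdown", "risk": 0.01, "freedom": 0.1},
    {"name": "seatbelt laws", "risk": 0.10, "freedom": 0.8},
    {"name": "anything goes", "risk": 0.60, "freedom": 1.0},
]

def calibrated_choice(options, tolerance):
    """Maximize freedom among options whose risk stays within tolerance."""
    acceptable = [p for p in options if p["risk"] &lt;= tolerance]
    return max(acceptable, key=lambda p: p["freedom"])

print(calibrated_choice(policies, RISK_TOLERANCE)["name"])  # seatbelt laws
</code></pre>
<p>Dial the tolerance down far enough and you get TJIC&#8217;s nanny state; the argument is over where to set the knob, not whether the structure can express the trade-off.</p>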
<p>Getting back to the vagueness problem, it&#8217;s hard to calibrate the goals as stated, seeing as they are written in an awkward pseudo-code that we call human language. If we want to improve on the algorithms that are built into human intelligence, or develop entirely new ones &#8212; in other words, if we&#8217;re going to come up with algorithms that will provide us the ends stated in the goals &#8212; we&#8217;re going to have to do it mathematically.</p>
<p>But that isn&#8217;t necessarily going to be an easy thing to do. <a href="http://www.singinst.org/upload/CEV.html">Eliezer Yudkowsky</a> argues  that developing an AI and setting it to work on doing some good thing are relatively easy compared to the third crucial step, making sure that that friendly, well-intentioned AI doesn&#8217;t accidentally wipe us out of existence while trying to achieve those good ends:<br />
<blockquote>
<p>If you find a genie bottle that gives you three wishes, it&#8217;s probably a good idea to seal the genie bottle in a locked safety box under your bed, unless the genie pays attention to your volition, not just your decision.</p></blockquote>
<p>Again, I think this goes to the issue of calibration of the system. Eliezer wants to calibrate what the AGI does with the coherent, extrapolated volition of humanity. Volition is an extremely important concept. Earlier, I mentioned the Golden Rule. If I decide that I&#8217;m going to do unto others as I would have them do unto me, I might start handing out big wedges of blueberry pie to everybody I see. After all, I like pie and I would love it if people gave <em>me </em>pie. But if I give my diabetic or overweight or blueberry-allergic friends a wedge of that pie, I wouldn&#8217;t be doing them any favors. Nor would I be doing <em>what I wanted to do </em>in the deepest sense.</p>
<p>Eliezer describes the concept of extrapolated volition as meaning not just what we want, but what we <em>would</em> want if we knew more, understood better, could see farther. Coming up with a coherent extrapolated volition for all of humanity is a tall order, especially if we&#8217;re doing it not just for the sake of conversation, but in order to enable a system which will try to realize that which is within our volition.</p>
<p>I like to think that humanity&#8217;s CEV would look a lot like the three goals that I&#8217;ve written. And I honestly believe that the algorithms that power human progress <em>do </em>work, in a rough and general way, towards those goals, which is why people are generally freer, safer, and happier than they have been in the past &#8212; though obviously not without many, many appalling and horrific exceptions. So perhaps our calibration efforts involve feeding the AGI algorithms that will enable it to speed our progress towards those goals while cutting the exceptions way down. Or eliminating them, if that&#8217;s somehow possible.</p>
<p>So to finally come around to it, what will those algorithms look like?</p>
<p>Maybe we can take a hint from the study of Game Theory. <a href="http://en.wikipedia.org/wiki/Robert_Axelrod">Robert Axelrod</a> held two tournaments in the early 1980s in which computer programs competed against each other in an attempt to identify the optimal winning strategy for playing the iterated version of the famous <a href="http://en.wikipedia.org/wiki/Prisoner%27s_dilemma">Prisoner&#8217;s Dilemma</a>. In the one-off version of the game, the optimal strategy is to screw the other guy. (This is not the sort of thing we want to go teaching the AGI, at least not in isolation!) However, when multiple rounds of the game are played, <a href="http://en.wikipedia.org/wiki/Prisoner%27s_dilemma#The_iterated_prisoner.27s_dilemma">something else</a> begins to emerge:<br />
<blockquote>
<p>By analysing the top-scoring strategies, Axelrod stated several conditions necessary for a strategy to be successful.</p>
<p><strong>Nice</strong><br />
    The most important condition is that the strategy must be &#8220;nice&#8221;, that is, it will not defect before its opponent does. Almost all of the top-scoring strategies were nice. Therefore a purely selfish strategy for purely selfish reasons will never hit its opponent first.</p>
<p><strong>Retaliating</strong><br />
    However, Axelrod contended, the successful strategy must not be a blind optimist. It must always retaliate. An example of a non-retaliating strategy is Always Cooperate. This is a very bad choice, as &#8220;nasty&#8221; strategies will ruthlessly exploit such softies.</p>
<p><strong>Forgiving</strong><br />
    Another quality of successful strategies is that they must be forgiving. Though they will retaliate, they will once again fall back to cooperating if the opponent does not continue to defect. This stops long runs of revenge and counter-revenge, maximizing points.</p>
<p><strong>Non-envious</strong><br />
    The last quality is being non-envious, that is not striving to score more than the opponent (impossible for a &#8216;nice&#8217; strategy, i.e., a &#8216;nice&#8217; strategy can never score more than the opponent).</p>
<p>Therefore, Axelrod reached the Utopian-sounding conclusion that selfish individuals for their own selfish good will tend to be nice and forgiving and non-envious. One of the most important conclusions of Axelrod&#8217;s study of IPDs is that Nice guys can finish first.</p></blockquote>
<p><a href="http://www.ejectejecteject.com/archives/000157.html">Bill Whittle</a> has written recently that the qualities listed above underpin western civilization, and help to explain why the West has out-competed other civilizations, who operate using different strategies:<br />
<blockquote>
<p>Now, this is where my own analysis kicks in, because frankly, nice, retaliating, forgiving and non-envious pretty much sums up how I feel about the West in general and the United States in particular. The web of trust and commerce in Western societies is unthinkable in the Third World because the prosperity they produce are fat juicy targets for people raised on Screw the Other Guy. Crime and corruption are stealing, and stealing is Screwing the Other Guy. It&#8217;s short-term win, long-term loss.</p></blockquote>
<p>I would add that if we look at the three goals as goals for humanity rather than for artificial intelligence, we see better progress towards them in western societies than elsewhere. In the tournament, the winning strategy, embodying all of the above characteristics, was called <a href="http://en.wikipedia.org/wiki/Tit_for_Tat">tit-for-tat</a>. Interestingly, the computer program driving that strategy consisted of only four lines of BASIC code. That suggests a startling possibility &#8212; like a simple recursive formula producing a complex Mandelbrot image, the moral complexity we&#8217;re looking for might just be packed into a very simple set of mathematical relationships.</p>
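<p>For the curious, here&#8217;s a rough Python equivalent &#8212; a sketch using Axelrod&#8217;s standard payoff values, not his actual tournament code. Tit-for-tat itself is just the <code>tit_for_tat</code> function; everything else is scaffolding to play it against a defector:</p>
<pre><code># Tit-for-tat in an iterated Prisoner's Dilemma, sketched with Axelrod's
# standard payoffs: 3 each for mutual cooperation, 1 each for mutual
# defection, 5/0 when one side defects on a cooperator.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_moves):
    """Nice, retaliating, forgiving: cooperate first, then echo the
    opponent's previous move."""
    return opponent_moves[-1] if opponent_moves else "C"

def always_defect(opponent_moves):
    """Screw the Other Guy."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(moves_b)  # each strategy sees the other's history
        b = strategy_b(moves_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperation all the way
print(play(tit_for_tat, always_defect))  # (9, 14): stung once, then retaliates
</code></pre>
<p>Note that tit-for-tat never outscores the particular opponent it&#8217;s facing &#8212; it wins tournaments on aggregate &#8212; which is exactly the &#8220;non-envious&#8221; property described above.</p>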
<p>So in order to develop and calibrate an Artificial General Intelligence that carries out our three top goals (or that helps us to achieve our coherent extrapolated volition), one of the important parameters to explore is how the AI relates to us and to other AIs. The secret might ultimately lie in playing nice with the AI, and teaching it to play nice with us and with other AIs. Not just because we want it to be nice, but because nice turns out to be &#8212; at a mathematical level &#8212; the best way to play.</p>
<p>UPDATE: This entry has been republished at the website of the <a href="http://ieet.org/index.php/IEET/more/1751/">Institute for Ethics and Emerging Technologies</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.speculist.com/artificial_intelligence/the-three-goals-2.html/feed</wfw:commentRss>
		<slash:comments>7</slash:comments>
		</item>
		<item>
		<title>Human Rights for a Chimp</title>
		<link>https://blog.speculist.com/defining_humanity/human-rights-fo-1.html</link>
		<comments>https://blog.speculist.com/defining_humanity/human-rights-fo-1.html#comments</comments>
		<pubDate>Thu, 05 Apr 2007 07:39:50 +0000</pubDate>
		<dc:creator>Phil Bowermaster</dc:creator>
				<category><![CDATA[Defining Humanity]]></category>

		<guid isPermaLink="false">http://localhost/specblog/?p=1124</guid>
		<description><![CDATA[Via SlashDot, an interesting development in an Austrian Court: Court to rule if chimp has human rights He recognises himself in the mirror, plays hide-and-seek and breaks into fits of giggles when tickled. He is also our closest evolutionary cousin. A group of world leading primatologists argue that this is proof enough that Hiasl, a [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Via <a href="http://slashdot.org">Slashdot</a>, an interesting development in an Austrian Court:<br />
<blockquote>
<p><a href="http://observer.guardian.co.uk/world/story/0,,2047459,00.html?gusrc=rss&#038;feed=12">Court to rule if chimp has human rights</a></p>
<p>He recognises himself in the mirror, plays hide-and-seek and breaks into fits of giggles when tickled. He is also our closest evolutionary cousin.</p>
<p>A group of world leading primatologists argue that this is proof enough that Hiasl, a 26-year-old chimpanzee, deserves to be treated like a human. In a test case in Austria, campaigners are seeking to ditch the &#8216;species barrier&#8217; and have taken Hiasl&#8217;s case to court. If Hiasl is granted human status &#8211; and the rights that go with it &#8211; it will signal a victory for other primate species and unleash a wave of similar cases.</p></blockquote>
<p>Hiasl&#8217;s story is not a happy one. If he were human, everyone would agree that his rights have been repeatedly, horribly violated: stolen from his family while still a baby; shipped to a distant country, where he was to be handed over to a lab as the subject of whatever experiments were paying that month; and snatched from the sanctuary where he eventually ended up, only to be threatened once again with being sent to a lab.</p>
<p>The folks bringing this case are attempting to get human status for Hiasl so that he can be appointed a legal guardian, and be protected from any further threat of being shipped off to a lab. Presumably, his being granted human status would save all chimps in Austria (and possibly the EU) from the same fate. I&#8217;m no lawyer, but I can&#8217;t help but think that if one chimp is human, they&#8217;re all pretty much human.</p>
<p><center><img alt="chimp.jpg" src="https://www.blog.speculist.com/archives/chimp.jpg" width="292" height="279" /><br />
</center></p>
<p>It probably wouldn&#8217;t take long for gorillas and orangutans to get similar legal status. Would all the big primate houses in all of Europe&#8217;s zoos be shut down? Or would the zookeepers manage to get legal guardian status for &#8220;their&#8221; apes? Even if they did, viewing of these animals by the public would presumably be illegal. Plus, how long before less human-like, yet intelligent and beloved animals such as dogs and horses are also granted human status? And once horses have it, beef industry notwithstanding, wouldn&#8217;t cows get the same rights? Pigs are more intelligent than horses &#8212; surely they would be granted human status as well. And of course, cats.</p>
<p>Before long, we&#8217;ve got the PETA dream of all animals having recognition under the law equal to that of human beings. </p>
<p>Would that be a bad thing? Getting past the idea that it sounds kind of <em>crazy</em> &#8212; and the lifestyle change it would involve for someone like me who enjoys fishing and who loves sitting down to a big medium rare porterhouse &#8212; I can at least understand the impulse behind wanting to create that world. A world in which animals no longer suffer pain at the hands of capricious humanity doesn&#8217;t strike me as a particularly bad idea, nor as a totally <a href="https://www.blog.speculist.com/archives/001225.html">unreasonable expectation</a>. I have argued in the past that technology will soon get us to the point where we can enjoy eating real meat and wearing real leather without killing animals, and that those who like to hunt and fish will be able to experience those things virtually, again with no animals being harmed. </p>
<p>At that point, I think laws will be passed protecting animals from being killed; these will be a natural extension of our current laws preventing cruelty and mistreatment of animals. Animals don&#8217;t have to be human in order to enjoy such protections under the law.</p>
<p>Around the same time, we may see a new set of pretenders to human legal status &#8212; artificial intelligences. If chimps are granted human status by this Austrian court, it could prove significant in a subsequent legal battle over the rights of a computer with chimp-level (or greater) intelligence. After all, if Hiasl is deemed human under the law, how could a creature with even greater intelligence, greater sensitivity, and greater self-awareness be found <em>not </em>to be human?  If there is no precedent from Hiasl&#8217;s case, lawyers arguing for the AI&#8217;s rights will have to rely on a different strategy; the way forward may prove more difficult if courts tend to rally around protecting the line between humanity and other entities.</p>
<p>Which leads to another thought &#8212; how long before Transhuman Rights becomes a recognized specialty within the practice of law?</p>
<p>UPDATE: Yikes. The linked story is dated April 1. I&#8217;m having a bad week, here. Either this business of passing off April Fool&#8217;s jokes as real news has gotten totally out of hand, real life has become indistinguishable from an April Fool&#8217;s joke, or I&#8217;m just a big dope. No, I&#8217;m not asking for anyone to weigh in on the relative likelihood of those scenarios. Anyway, I&#8217;m <a href="http://news.google.com/news?ie=UTF-8&#038;oe=UTF-8&#038;aq=t&#038;rls=org.mozilla%3Aen-US%3Aofficial&#038;client=firefox-a&#038;um=1&#038;tab=wn&#038;q=hiasl&#038;btnG=Search+News">not the only one</a> being suckered if it <em>is</em> a prank.</p>
<p>UPDATE II: Plus everyone on <a href="http://science.slashdot.org/article.pl?sid=07/04/04/0031256">Slashdot</a> is fooled, should my paranoid suspicions pan out. Currently there are almost 1800 comments on this story, including at least one individual arguing that his/her cat should, indeed, be considered human under the law.</p>
<p>UPDATE III: Okay, here&#8217;s coverage from <a href="http://www.newscientist.com/blog/shortsharpscience/2007/03/great-ape-on-trial.html">March 28</a>. If it&#8217;s an April Fool&#8217;s joke, they&#8217;re breaking the rules. Now back to our serious discussion about animal and transhuman rights. The absurdity of trying to distinguish &#8220;reality&#8221; on the web we&#8217;ll save for another day.</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.speculist.com/defining_humanity/human-rights-fo-1.html/feed</wfw:commentRss>
		<slash:comments>6</slash:comments>
		</item>
	</channel>
</rss>
