<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Who-whom?</title>
	<atom:link href="http://www.gnxp.com/new/2008/10/29/who-whom/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.gnxp.com/new/2008/10/29/who-whom/</link>
	<description>Genetics</description>
	<lastBuildDate>Tue, 03 Apr 2018 05:20:42 +0000</lastBuildDate>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.8.27</generator>
	<item>
		<title>By: Hyperbole</title>
		<link>http://www.gnxp.com/new/2008/10/29/who-whom/#comment-23380</link>
		<dc:creator><![CDATA[Hyperbole]]></dc:creator>
		<pubDate>Sun, 09 Nov 2008 01:46:10 +0000</pubDate>
		<guid isPermaLink="false">#comment-23380</guid>
		<description><![CDATA[&quot;But that certain things will be perceived as desirable is one of the inevitable outcomes. Evolution directs and constrains what sort of value systems will persist across time. It defines what &#039;works&#039;, it provides the objective way to evaluate systems of evaluation without being one itself.&quot;&#160;&lt;br&gt;&#160;&lt;br&gt;Then you can&#039;t evaluate anything except in hindsight, and even then, you can&#039;t attribute the success of societies or groups to their value systems with much certainty. There is too much noise.&#160;&lt;br&gt;&#160;&lt;br&gt;Furthermore, if humanity descends into idiocracy, then it is obviously because of evolutionary forces and value-systems that don&#039;t respect intelligence too much. If that is the inevitable outcome, then that value-system is actually &quot;right&quot; according to your standards. You&#039;re trying to simultaneously argue that your personal desires are X while connecting it to general principles of objective morality based on evolution, but the result you would consider abhorrent would be &quot;right&quot; according to your morality.&#160;&lt;br&gt;&#160;&lt;br&gt;It&#039;s hard to explain, but do you see what I&#039;m trying to say? The thing is, I actually agree with you, I&#039;m going to be studying mathematical/systems biology, I would hate to have an idiocracy and would prefer mind-children. I&#039;ve just given up trying to justify my beliefs because I&#039;ve concluded that they&#039;re just based on what I think is cool and interesting; there are no high-minded philosophical principles behind it, even though I used to try and create them. That said, I think it would probably be best if we could integrate ourselves into the robot world. I&#039;d much prefer the cyborg world where we could symbiotically coexist and still have some traces of our old selves around.]]></description>
		<content:encoded><![CDATA[<p>&#8220;But that certain things will be perceived as desirable is one of the inevitable outcomes. Evolution directs and constrains what sort of value systems will persist across time. It defines what &#8216;works&#8217;, it provides the objective way to evaluate systems of evaluation without being one itself.&#8221;&nbsp;<br />&nbsp;<br />Then you can&#8217;t evaluate anything except in hindsight, and even then, you can&#8217;t attribute the success of societies or groups to their value systems with much certainty. There is too much noise.&nbsp;<br />&nbsp;<br />Furthermore, if humanity descends into idiocracy, then it is obviously because of evolutionary forces and value-systems that don&#8217;t respect intelligence too much. If that is the inevitable outcome, then that value-system is actually &#8220;right&#8221; according to your standards. You&#8217;re trying to simultaneously argue that your personal desires are X while connecting it to general principles of objective morality based on evolution, but the result you would consider abhorrent would be &#8220;right&#8221; according to your morality.&nbsp;<br />&nbsp;<br />It&#8217;s hard to explain, but do you see what I&#8217;m trying to say? The thing is, I actually agree with you, I&#8217;m going to be studying mathematical/systems biology, I would hate to have an idiocracy and would prefer mind-children. I&#8217;ve just given up trying to justify my beliefs because I&#8217;ve concluded that they&#8217;re just based on what I think is cool and interesting; there are no high-minded philosophical principles behind it, even though I used to try and create them. That said, I think it would probably be best if we could integrate ourselves into the robot world. I&#8217;d much prefer the cyborg world where we could symbiotically coexist and still have some traces of our old selves around.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Caledonian</title>
		<link>http://www.gnxp.com/new/2008/10/29/who-whom/#comment-23381</link>
		<dc:creator><![CDATA[Caledonian]]></dc:creator>
		<pubDate>Sat, 08 Nov 2008 12:57:47 +0000</pubDate>
		<guid isPermaLink="false">#comment-23381</guid>
		<description><![CDATA[But this post is about values, things being better or worse, and what&#039;s &quot;desirable&quot;. It&#039;s not about inevitable outcomes.  But that certain things will be perceived as desirable is one of the inevitable outcomes.  Evolution directs and constrains what sort of value systems will persist across time.  It defines what &#039;works&#039;, it provides the objective way to evaluate systems of evaluation without being one itself.&#160;&lt;br&gt;&quot;Mere&quot; nihilism... When you are arguing that machines will survive longer than people, and are therefore preferable  No, that is not what I&#039;m arguing.  I&#039;m saying that I perceive myself to have more in common with a world that has superintelligent machines but no humans than a world with only moronic humans.  My instincts for reproducing myself can be satisfied by entities that carry properties I value into the future, even if they&#039;re not my biological progeny.  In the Idiocracy scenario, virtually everything I value has been destroyed or ruined.  The humans that exist in that world share none of the properties I consider important parts of my identity.  I&#039;d rather see them all killed off and replaced by passionless, emotionless machines that were at least capable of reason.]]></description>
		<content:encoded><![CDATA[<p>But this post is about values, things being better or worse, and what&#8217;s &#8220;desirable&#8221;. It&#8217;s not about inevitable outcomes.  But that certain things will be perceived as desirable is one of the inevitable outcomes.  Evolution directs and constrains what sort of value systems will persist across time.  It defines what &#8216;works&#8217;, it provides the objective way to evaluate systems of evaluation without being one itself.&nbsp;<br />&#8220;Mere&#8221; nihilism&#8230; When you are arguing that machines will survive longer than people, and are therefore preferable  No, that is not what I&#8217;m arguing.  I&#8217;m saying that I perceive myself to have more in common with a world that has superintelligent machines but no humans than a world with only moronic humans.  My instincts for reproducing myself can be satisfied by entities that carry properties I value into the future, even if they&#8217;re not my biological progeny.  In the Idiocracy scenario, virtually everything I value has been destroyed or ruined.  The humans that exist in that world share none of the properties I consider important parts of my identity.  I&#8217;d rather see them all killed off and replaced by passionless, emotionless machines that were at least capable of reason.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Hyperbole</title>
		<link>http://www.gnxp.com/new/2008/10/29/who-whom/#comment-23382</link>
		<dc:creator><![CDATA[Hyperbole]]></dc:creator>
		<pubDate>Sat, 08 Nov 2008 04:28:49 +0000</pubDate>
		<guid isPermaLink="false">#comment-23382</guid>
		<description><![CDATA[&quot; You&#039;re rather missing the point. Goals are things organisms set; they&#039;re contingent and arbitrary. Evolution occurs whether we decide to pursue it or not. It IS objective, rather than &quot;objective&quot;, as you dismissively refer to it.&quot;&#160;&lt;br&gt;&#160;&lt;br&gt;But this post is about values, things being better or worse, and what&#039;s &quot;desirable&quot;. It&#039;s not about inevitable outcomes. I basically agree that the apotheosis of the nerd is exactly that, nerds wishing the world would be more like Star Trek.&#160;&lt;br&gt;&#160;&lt;br&gt;&quot; In that scenario, I suppose there would be no point at all. So? Mere nihilism isn&#039;t much of an argument.&quot;&#160;&lt;br&gt;&#160;&lt;br&gt;&quot;Mere&quot; nihilism... When you are arguing that machines will survive longer than people, and are therefore preferable, the eventual pointlessness of everything matters.&#160;&lt;br&gt;&#160;&lt;br&gt;&quot; On the contrary, refusing to apply my personal preferences - or anyone else&#039;s - is precisely what I&#039;m advocating. And what you seem to be objecting to quite strongly.&quot;&#160;&lt;br&gt;&#160;&lt;br&gt;You are doing no such thing, you stated that mind children are preferred by people, even though many people disagreed. I think &lt;i&gt;you&lt;/i&gt; prefer mind children. I think that many people convince themselves that there is something about post-humanism that is kind of transcendent and good, but if humans just die out and are replaced, then I don&#039;t see how you can justify it to others who aren&#039;t similarly motivated by the &quot;coolness&quot; of it. 
There are no reasons why it is preferable for sentience unrelated to us to exist and us not to exist, than for us to exist in a state of non-advancement.&#160;&lt;br&gt;&#160;&lt;br&gt;Consider this: If we discovered that 10,000 light years away there already were self-modifying AI that far surpassed us, and would always be ahead of our AI, then would your logic still stand, or would idiocracy then be preferable as we could indulge in hedonism to our hearts&#039; content, knowing that the knowledge is being created somewhere else by someone else&#039;s mind-children?&#160;&lt;br&gt;&#160;&lt;br&gt;I just don&#039;t see the point except that a narrow subset of people consider it cool.]]></description>
		<content:encoded><![CDATA[<p>&#8221; You&#8217;re rather missing the point. Goals are things organisms set; they&#8217;re contingent and arbitrary. Evolution occurs whether we decide to pursue it or not. It IS objective, rather than &#8220;objective&#8221;, as you dismissively refer to it.&#8221;&nbsp;<br />&nbsp;<br />But this post is about values, things being better or worse, and what&#8217;s &#8220;desirable&#8221;. It&#8217;s not about inevitable outcomes. I basically agree that the apotheosis of the nerd is exactly that, nerds wishing the world would be more like Star Trek.&nbsp;<br />&nbsp;<br />&#8221; In that scenario, I suppose there would be no point at all. So? Mere nihilism isn&#8217;t much of an argument.&#8221;&nbsp;<br />&nbsp;<br />&#8220;Mere&#8221; nihilism&#8230; When you are arguing that machines will survive longer than people, and are therefore preferable, the eventual pointlessness of everything matters.&nbsp;<br />&nbsp;<br />&#8221; On the contrary, refusing to apply my personal preferences &#8211; or anyone else&#8217;s &#8211; is precisely what I&#8217;m advocating. And what you seem to be objecting to quite strongly.&#8221;&nbsp;<br />&nbsp;<br />You are doing no such thing, you stated that mind children are preferred by people, even though many people disagreed. I think <i>you</i> prefer mind children. I think that many people convince themselves that there is something about post-humanism that is kind of transcendent and good, but if humans just die out and are replaced, then I don&#8217;t see how you can justify it to others who aren&#8217;t similarly motivated by the &#8220;coolness&#8221; of it. 
There are no reasons why it is preferable for sentience unrelated to us to exist and us not to exist, than for us to exist in a state of non-advancement.&nbsp;<br />&nbsp;<br />Consider this: If we discovered that 10,000 light years away there already were self-modifying AI that far surpassed us, and would always be ahead of our AI, then would your logic still stand, or would idiocracy then be preferable as we could indulge in hedonism to our hearts&#8217; content, knowing that the knowledge is being created somewhere else by someone else&#8217;s mind-children?&nbsp;<br />&nbsp;<br />I just don&#8217;t see the point except that a narrow subset of people consider it cool.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Caledonian</title>
		<link>http://www.gnxp.com/new/2008/10/29/who-whom/#comment-23383</link>
		<dc:creator><![CDATA[Caledonian]]></dc:creator>
		<pubDate>Wed, 05 Nov 2008 13:17:54 +0000</pubDate>
		<guid isPermaLink="false">#comment-23383</guid>
		<description><![CDATA[...And there are a near infinite number of &quot;objective&quot; metrics that are equally meaningless in the ultimate cosmic sense. What logical process leads you to believe that evolutionarily &quot;better&quot; is an obvious goal?  You&#039;re rather missing the point.  Goals are things organisms set; they&#039;re contingent and arbitrary.  Evolution occurs whether we decide to pursue it or not.  It IS objective, rather than &quot;objective&quot;, as you dismissively refer to it.&#160;&lt;br&gt;And it seems that in the event of heat death of the universe or a big crunch, mind children would fare no better... So what&#039;s the point?  In that scenario, I suppose there would be no point at all.  So?  Mere nihilism isn&#039;t much of an argument.&#160;&lt;br&gt;even if they have no such thing as art and music and sex? I don&#039;t see what&#039;s so special about that. I think you&#039;re just advocating your personal preferences.  On the contrary, refusing to apply my personal preferences - or anyone else&#039;s - is precisely what I&#039;m advocating.  And what you seem to be objecting to quite strongly.]]></description>
		<content:encoded><![CDATA[<p>&#8230;And there are a near infinite number of &#8220;objective&#8221; metrics that are equally meaningless in the ultimate cosmic sense. What logical process leads you to believe that evolutionarily &#8220;better&#8221; is an obvious goal?  You&#8217;re rather missing the point.  Goals are things organisms set; they&#8217;re contingent and arbitrary.  Evolution occurs whether we decide to pursue it or not.  It IS objective, rather than &#8220;objective&#8221;, as you dismissively refer to it.&nbsp;<br />And it seems that in the event of heat death of the universe or a big crunch, mind children would fare no better&#8230; So what&#8217;s the point?  In that scenario, I suppose there would be no point at all.  So?  Mere nihilism isn&#8217;t much of an argument.&nbsp;<br />even if they have no such thing as art and music and sex? I don&#8217;t see what&#8217;s so special about that. I think you&#8217;re just advocating your personal preferences.  On the contrary, refusing to apply my personal preferences &#8211; or anyone else&#8217;s &#8211; is precisely what I&#8217;m advocating.  And what you seem to be objecting to quite strongly.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Rafal Smigrodzki</title>
		<link>http://www.gnxp.com/new/2008/10/29/who-whom/#comment-23384</link>
		<dc:creator><![CDATA[Rafal Smigrodzki]]></dc:creator>
		<pubDate>Tue, 04 Nov 2008 19:49:53 +0000</pubDate>
		<guid isPermaLink="false">#comment-23384</guid>
		<description><![CDATA[Anders wrote:&#160;&lt;br&gt;&#160;&lt;br&gt; humanity will go posthuman not because most people want it, but because every step will be seen as practical and fun. &#160;&lt;br&gt;&#160;&lt;br&gt;### As usual, Anders (and BTW, what a small world!) I largely agree with you - our world will end not because most people want it but because those who cause TEOTWAWKI would see each step as the right thing to do. But I am so much less sanguine about what it means for us - whether anyone wants it or not, our world and our position in it will end, as soon as somebody somewhere implements a human-level self-modifying AI. I am convinced that somebody will do it, somewhere, no matter how we, individually or collectively, try to prevent it. This implies most likely swift death to all of us, nerd and jock alike. The Singularity is not &quot;nerd rapture&quot;, it&#039;s the end of all nerds.&#160;&lt;br&gt;&#160;&lt;br&gt;I wish I could share your optimism. And grats on publishing the WBE roadmap. This is our only (slim) chance of making it to the next level in this game of life.]]></description>
		<content:encoded><![CDATA[<p>Anders wrote:&nbsp;<br />&nbsp;<br /> humanity will go posthuman not because most people want it, but because every step will be seen as practical and fun. &nbsp;<br />&nbsp;<br />### As usual, Anders (and BTW, what a small world!) I largely agree with you &#8211; our world will end not because most people want it but because those who cause TEOTWAWKI would see each step as the right thing to do. But I am so much less sanguine about what it means for us &#8211; whether anyone wants it or not, our world and our position in it will end, as soon as somebody somewhere implements a human-level self-modifying AI. I am convinced that somebody will do it, somewhere, no matter how we, individually or collectively, try to prevent it. This implies most likely swift death to all of us, nerd and jock alike. The Singularity is not &#8220;nerd rapture&#8221;, it&#8217;s the end of all nerds.&nbsp;<br />&nbsp;<br />I wish I could share your optimism. And grats on publishing the WBE roadmap. This is our only (slim) chance of making it to the next level in this game of life.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Anders Sandberg</title>
		<link>http://www.gnxp.com/new/2008/10/29/who-whom/#comment-23385</link>
		<dc:creator><![CDATA[Anders Sandberg]]></dc:creator>
		<pubDate>Tue, 04 Nov 2008 05:02:05 +0000</pubDate>
		<guid isPermaLink="false">#comment-23385</guid>
		<description><![CDATA[From a utilitarian point of view, posthuman (or post-AI) beings might be able to sustain greater levels of utility than humans ever could (this is almost certain, since the space of possible minds is much bigger than the space of possible human minds). Hence the universe could be a much happier place with these beings around. &#160;&lt;br&gt;&#160;&lt;br&gt;But motivation-wise, of course much of current posthuman thinking is the apotheosis of the nerd. The people most interested in cognition enhancement are academics. A future with software intelligence has an obvious appeal to anybody who &quot;gets&quot; software. &#160;&lt;br&gt;&#160;&lt;br&gt;That does not tell us anything important, though. The Internet was invented for certain purposes but is now used for a lot more things. Cognition enhancers might be desired by a few cerebral people today, but once available people are going to find everyday uses of them. I think humanity will go posthuman not because most people want it, but because every step will be seen as practical and fun. We nerds will find that we will have the greatest success in spreading new technologies when they can fulfil human desires: the nerds may lead the way, but the funding will be mammal-controlled.]]></description>
		<content:encoded><![CDATA[<p>From a utilitarian point of view, posthuman (or post-AI) beings might be able to sustain greater levels of utility than humans ever could (this is almost certain, since the space of possible minds is much bigger than the space of possible human minds). Hence the universe could be a much happier place with these beings around. &nbsp;<br />&nbsp;<br />But motivation-wise, of course much of current posthuman thinking is the apotheosis of the nerd. The people most interested in cognition enhancement are academics. A future with software intelligence has an obvious appeal to anybody who &#8220;gets&#8221; software. &nbsp;<br />&nbsp;<br />That does not tell us anything important, though. The Internet was invented for certain purposes but is now used for a lot more things. Cognition enhancers might be desired by a few cerebral people today, but once available people are going to find everyday uses of them. I think humanity will go posthuman not because most people want it, but because every step will be seen as practical and fun. We nerds will find that we will have the greatest success in spreading new technologies when they can fulfil human desires: the nerds may lead the way, but the funding will be mammal-controlled.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Alex</title>
		<link>http://www.gnxp.com/new/2008/10/29/who-whom/#comment-23386</link>
		<dc:creator><![CDATA[Alex]]></dc:creator>
		<pubDate>Mon, 03 Nov 2008 20:45:26 +0000</pubDate>
		<guid isPermaLink="false">#comment-23386</guid>
		<description><![CDATA[I am of a similar position to Hyperbole&#039;s.&#160;&lt;br&gt;&#160;&lt;br&gt;The idea that you expressed at the end of your monologue sounds exactly like the foundation for Oryx and Crake by Margaret Atwood.]]></description>
		<content:encoded><![CDATA[<p>I am of a similar position to Hyperbole&#8217;s.&nbsp;<br />&nbsp;<br />The idea that you expressed at the end of your monologue sounds exactly like the foundation for Oryx and Crake by Margaret Atwood.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Hyperbole</title>
		<link>http://www.gnxp.com/new/2008/10/29/who-whom/#comment-23387</link>
		<dc:creator><![CDATA[Hyperbole]]></dc:creator>
		<pubDate>Sun, 02 Nov 2008 22:21:05 +0000</pubDate>
		<guid isPermaLink="false">#comment-23387</guid>
		<description><![CDATA[&quot;        You can&#039;t say one is better than the                                     &#160;&lt;br&gt;         other.&#160;&lt;br&gt;&#160;&lt;br&gt;Oh? Why not?&#160;&lt;br&gt;In any ultimate cosmic sense, &quot;better&quot; does not seem to be a meaningful concept; there is no known ultimate, objective standard of value. But evolutionarily speaking, &quot;better&quot; has a perfectly clear, objective meaning. It&#039;s an absolute standard that does not require any personal valuation to be valid.&quot;&#160;&lt;br&gt;&#160;&lt;br&gt;...And there are a near infinite number of &quot;objective&quot; metrics that are equally meaningless in the ultimate cosmic sense. What logical process leads you to believe that evolutionarily &quot;better&quot; is an obvious goal?&#160;&lt;br&gt;&#160;&lt;br&gt;Also, what is evolutionarily &quot;better&quot; is not objectively determinable except in hindsight. Who says ultra-intelligence is a positive trait?&#160;&lt;br&gt;&#160;&lt;br&gt;&quot;A moron&#039;s paradise might persist for a long, long time. But it would eventually prove a dead-end.&quot;&#160;&lt;br&gt;&#160;&lt;br&gt;And it seems that in the event of heat death of the universe or a big crunch, mind children would fare no better... So what&#039;s the point? You are hoping that more complex thoughts are thought by something, sometime in the future, even if they have no such thing as art and music and sex? I don&#039;t see what&#039;s so special about that. I think you&#039;re just advocating your personal preferences.]]></description>
		<content:encoded><![CDATA[<p>&#8221;        You can&#8217;t say one is better than the                                     &nbsp;<br />         other.&nbsp;<br />&nbsp;<br />Oh? Why not?&nbsp;<br />In any ultimate cosmic sense, &#8220;better&#8221; does not seem to be a meaningful concept; there is no known ultimate, objective standard of value. But evolutionarily speaking, &#8220;better&#8221; has a perfectly clear, objective meaning. It&#8217;s an absolute standard that does not require any personal valuation to be valid.&#8221;&nbsp;<br />&nbsp;<br />&#8230;And there are a near infinite number of &#8220;objective&#8221; metrics that are equally meaningless in the ultimate cosmic sense. What logical process leads you to believe that evolutionarily &#8220;better&#8221; is an obvious goal?&nbsp;<br />&nbsp;<br />Also, what is evolutionarily &#8220;better&#8221; is not objectively determinable except in hindsight. Who says ultra-intelligence is a positive trait?&nbsp;<br />&nbsp;<br />&#8220;A moron&#8217;s paradise might persist for a long, long time. But it would eventually prove a dead-end.&#8221;&nbsp;<br />&nbsp;<br />And it seems that in the event of heat death of the universe or a big crunch, mind children would fare no better&#8230; So what&#8217;s the point? You are hoping that more complex thoughts are thought by something, sometime in the future, even if they have no such thing as art and music and sex? I don&#8217;t see what&#8217;s so special about that. I think you&#8217;re just advocating your personal preferences.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: SFG</title>
		<link>http://www.gnxp.com/new/2008/10/29/who-whom/#comment-23388</link>
		<dc:creator><![CDATA[SFG]]></dc:creator>
		<pubDate>Sun, 02 Nov 2008 06:36:19 +0000</pubDate>
		<guid isPermaLink="false">#comment-23388</guid>
		<description><![CDATA[Figures you guys would want to create superintelligent robots instead of nerdy women who would actually want to sleep with us. ;)]]></description>
		<content:encoded><![CDATA[<p>Figures you guys would want to create superintelligent robots instead of nerdy women who would actually want to sleep with us. ;)</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Robert Hume</title>
		<link>http://www.gnxp.com/new/2008/10/29/who-whom/#comment-23389</link>
		<dc:creator><![CDATA[Robert Hume]]></dc:creator>
		<pubDate>Sat, 01 Nov 2008 18:34:36 +0000</pubDate>
		<guid isPermaLink="false">#comment-23389</guid>
		<description><![CDATA[Personally I&#039;d like to be as smart as the smartest physicists and mathematicians so that I could understand the world as well as it could be understood. Understanding implies consciousness, of course.&#160;&lt;br&gt;&#160;&lt;br&gt;I&#039;d like to live as long as possible so that I could understand history, biology, archeology, physics, math.&#160;&lt;br&gt;&#160;&lt;br&gt;I assume that everyone would choose to be like me if they were as smart as me, so it seems plausible that the most intelligent, conscious, population would be the happiest. So that&#039;s the earth I would aim for.&#160;&lt;br&gt;&#160;&lt;br&gt;Also, of course, sensual happiness, however that is to be obtained.]]></description>
		<content:encoded><![CDATA[<p>Personally I&#8217;d like to be as smart as the smartest physicists and mathematicians so that I could understand the world as well as it could be understood. Understanding implies consciousness, of course.&nbsp;<br />&nbsp;<br />I&#8217;d like to live as long as possible so that I could understand history, biology, archeology, physics, math.&nbsp;<br />&nbsp;<br />I assume that everyone would choose to be like me if they were as smart as me, so it seems plausible that the most intelligent, conscious, population would be the happiest. So that&#8217;s the earth I would aim for.&nbsp;<br />&nbsp;<br />Also, of course, sensual happiness, however that is to be obtained.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: razib</title>
		<link>http://www.gnxp.com/new/2008/10/29/who-whom/#comment-23390</link>
		<dc:creator><![CDATA[razib]]></dc:creator>
		<pubDate>Sat, 01 Nov 2008 10:16:03 +0000</pubDate>
		<guid isPermaLink="false">#comment-23390</guid>
		<description><![CDATA[neuron, no. don&#039;t read all comments.]]></description>
		<content:encoded><![CDATA[<p>neuron, no. don&#8217;t read all comments.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: neuron</title>
		<link>http://www.gnxp.com/new/2008/10/29/who-whom/#comment-23391</link>
		<dc:creator><![CDATA[neuron]]></dc:creator>
		<pubDate>Sat, 01 Nov 2008 05:04:05 +0000</pubDate>
		<guid isPermaLink="false">#comment-23391</guid>
		<description><![CDATA[Don&#039;t want to sound narcissistic here, but did you take the idea of &#039;apotheosis of the nerd&#039; from me? If so, please acknowledge it, thanks.]]></description>
		<content:encoded><![CDATA[<p>Don&#8217;t want to sound narcissistic here, but did you take the idea of &#8216;apotheosis of the nerd&#8217; from me? If so, please acknowledge it, thanks.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jason Malloy</title>
		<link>http://www.gnxp.com/new/2008/10/29/who-whom/#comment-23392</link>
		<dc:creator><![CDATA[Jason Malloy]]></dc:creator>
		<pubDate>Fri, 31 Oct 2008 21:25:55 +0000</pubDate>
		<guid isPermaLink="false">#comment-23392</guid>
		<description><![CDATA[&lt;i&gt; In the first, you suggest that selection only favoured the proximal causes of reproduction - sex and general affection towards infants. In the second, you suggest (through irony) that selection did favour the desire to have one&#039;s own offspring, as opposed to raising someone else&#039;s.&lt;/i&gt;&#160;&lt;br&gt;&#160;&lt;br&gt;Both statements were ironic. In the many biological behaviors and drives associated with having children (which certainly go beyond the two I listed), canned metaphorical sentiments are not among them. &#160;&lt;br&gt;&#160;&lt;br&gt;Caledonian was suggesting these metaphors are a &quot;strong human drive&quot; which would lead people to &lt;i&gt;desire the replacement of their own children&lt;/i&gt; by a superior intelligence, as long as that intelligence was designed by other humans.&#160;&lt;br&gt;&#160;&lt;br&gt;Not only is this wrong because it suggests a primary metaphor-driven psychological mechanism in human reproduction that doesn&#039;t exist (my invention = my children), but because it also suggests &lt;i&gt;group selection&lt;/i&gt; behaviors (&lt;i&gt;human&lt;/i&gt; invention = my children), rather than inclusive fitness behaviors (my children = my children).]]></description>
		<content:encoded><![CDATA[<p><i> In the first, you suggest that selection only favoured the proximal causes of reproduction &#8211; sex and general affection towards infants. In the second, you suggest (through irony) that selection did favour the desire to have one&#8217;s own offspring, as opposed to raising someone else&#8217;s.</i>&nbsp;<br />&nbsp;<br />Both statements were ironic. In the many biological behaviors and drives associated with having children (which certainly go beyond the two I listed), canned metaphorical sentiments are not among them. &nbsp;<br />&nbsp;<br />Caledonian was suggesting these metaphors are a &#8220;strong human drive&#8221; which would lead people to <i>desire the replacement of their own children</i> by a superior intelligence, as long as that intelligence was designed by other humans.&nbsp;<br />&nbsp;<br />Not only is this wrong because it suggests a primary metaphor-driven psychological mechanism in human reproduction that doesn&#8217;t exist (my invention = my children), but because it also suggests <i>group selection</i> behaviors (<i>human</i> invention = my children), rather than inclusive fitness behaviors (my children = my children).</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: TGGP</title>
		<link>http://www.gnxp.com/new/2008/10/29/who-whom/#comment-23393</link>
		<dc:creator><![CDATA[TGGP]]></dc:creator>
		<pubDate>Fri, 31 Oct 2008 19:51:43 +0000</pubDate>
		<guid isPermaLink="false">#comment-23393</guid>
		<description><![CDATA[I have a post on polywell fusion &lt;a href=&quot;http://entitledtoanopinion.wordpress.com/2008/10/07/clean-cheap-fusion-power/&quot;&gt;here&lt;/a&gt;.&#160;&lt;br&gt;&#160;&lt;br&gt;I think robots will remain our servants and life will be good. The world of uploaded minds might be unpleasant though, according to Hanson.]]></description>
		<content:encoded><![CDATA[<p>I have a post on polywell fusion <a href="http://entitledtoanopinion.wordpress.com/2008/10/07/clean-cheap-fusion-power/">here</a>.&nbsp;<br />&nbsp;<br />I think robots will remain our servants and life will be good. The world of uploaded minds might be unpleasant though, according to Hanson.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: kurt9</title>
		<link>http://www.gnxp.com/new/2008/10/29/who-whom/#comment-23394</link>
		<dc:creator><![CDATA[kurt9]]></dc:creator>
		<pubDate>Fri, 31 Oct 2008 12:40:05 +0000</pubDate>
		<guid isPermaLink="false">#comment-23394</guid>
		<description><![CDATA[I think much of this discussion is immaterial because I do not expect to see A.I. or uploading in the foreseeable future (next 30-40 years). However, I do expect significant advances in biotech, biomedicine, and neuro-technology in the next few decades. I think the smart people are going to use these technologies to make themselves smarter and most everyone else is not going to care that much (If some physics guy increases his IQ from, say, 135 to 160, it&#039;s unlikely that people who work in more conventional jobs are going to care at all). Intelligence increase is going to be viewed by most people as the &quot;nerdly thing&quot; to do.&#160;&lt;br&gt;&#160;&lt;br&gt;Aside from biotech developments, I do not expect much in the next 30 years. I certainly do not believe in any kind of singularity.&#160;&lt;br&gt;&#160;&lt;br&gt;However, there are a couple of wild cards on the table that even the transhumanists are not aware of. One is IEC polywell fusion, which actually has a better than even chance of turning out for real. Another one is Extended Heim Theory (EHT).]]></description>
		<content:encoded><![CDATA[<p>I think much of this discussion is immaterial because I do not expect to see A.I. or uploading in the foreseeable future (next 30-40 years). However, I do expect significant advances in biotech, biomedicine, and neuro-technology in the next few decades. I think the smart people are going to use these technologies to make themselves smarter and most everyone else is not going to care that much (If some physics guy increases his IQ from, say, 135 to 160, it&#8217;s unlikely that people who work in more conventional jobs are going to care at all). Intelligence increase is going to be viewed by most people as the &#8220;nerdly thing&#8221; to do.&nbsp;<br />&nbsp;<br />Aside from biotech developments, I do not expect much in the next 30 years. I certainly do not believe in any kind of singularity.&nbsp;<br />&nbsp;<br />However, there are a couple of wild cards on the table that even the transhumanists are not aware of. One is IEC polywell fusion, which actually has a better than even chance of turning out for real. Another one is Extended Heim Theory (EHT).</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: toto</title>
		<link>http://www.gnxp.com/new/2008/10/29/who-whom/#comment-23395</link>
		<dc:creator><![CDATA[toto]]></dc:creator>
		<pubDate>Fri, 31 Oct 2008 11:12:41 +0000</pubDate>
		<guid isPermaLink="false">#comment-23395</guid>
		<description><![CDATA[&lt;i&gt;Were they? I think they were designed to like sex and to care for cute things. &lt;/i&gt;&#160;&lt;br&gt;&#160;&lt;br&gt;&lt;i&gt;&quot;Please fill my wife with your seed, my &quot;strong drives&quot; demand your superior &quot;mind children&quot; over my own undereducated prole sperm.&quot;&lt;/i&gt;&#160;&lt;br&gt;&#160;&lt;br&gt;Is there a contradiction between the two statements? In the first, you suggest that selection only favoured the proximal causes of reproduction - sex and general affection towards infants. In the second, you suggest (through irony) that selection did favour the desire to have one&#039;s own offspring, as opposed to raising someone else&#039;s.&#160;&lt;br&gt;&#160;&lt;br&gt;Humans are smart; they can grasp (i.e. develop neural associations corresponding to) the concept of having one&#039;s own children, and therefore it doesn&#039;t seem impossible for selection to act upon it.&#160;&lt;br&gt;&#160;&lt;br&gt;&lt;i&gt;I&#039;m surprised no one mentioned Michel Houellebecq&#039;s &quot;Elementary Particles,&quot; which ends on a post-human coda.&lt;/i&gt;&#160;&lt;br&gt;&#160;&lt;br&gt;The weakest part of an otherwise haunting book. Read also his previous novel, stupidly named &quot;Whatever&quot; in English (as opposed to the original title, &quot;Extension of the Field of Struggle&quot;).]]></description>
		<content:encoded><![CDATA[<p><i>Were they? I think they were designed to like sex and to care for cute things. </i>&nbsp;<br />&nbsp;<br /><i>&#8220;Please fill my wife with your seed, my &#8220;strong drives&#8221; demand your superior &#8220;mind children&#8221; over my own undereducated prole sperm.&#8221;</i>&nbsp;<br />&nbsp;<br />Is there a contradiction between the two statements? In the first, you suggest that selection only favoured the proximal causes of reproduction &#8211; sex and general affection towards infants. In the second, you suggest (through irony) that selection did favour the desire to have one&#8217;s own offspring, as opposed to raising someone else&#8217;s.&nbsp;<br />&nbsp;<br />Humans are smart; they can grasp (i.e. develop neural associations corresponding to) the concept of having one&#8217;s own children, and therefore it doesn&#8217;t seem impossible for selection to act upon it.&nbsp;<br />&nbsp;<br /><i>I&#8217;m surprised no one mentioned Michel Houellebecq&#8217;s &#8220;Elementary Particles,&#8221; which ends on a post-human coda.</i>&nbsp;<br />&nbsp;<br />The weakest part of an otherwise haunting book. Read also his previous novel, stupidly named &#8220;Whatever&#8221; in English (as opposed to the original title, &#8220;Extension of the Field of Struggle&#8221;).</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: PA</title>
		<link>http://www.gnxp.com/new/2008/10/29/who-whom/#comment-23396</link>
		<dc:creator><![CDATA[PA]]></dc:creator>
		<pubDate>Fri, 31 Oct 2008 05:53:09 +0000</pubDate>
		<guid isPermaLink="false">#comment-23396</guid>
		<description><![CDATA[I&#039;m surprised no one mentioned Michel Houellebecq&#039;s &quot;Elementary Particles,&quot; which ends on a post-human coda.]]></description>
		<content:encoded><![CDATA[<p>I&#8217;m surprised no one mentioned Michel Houellebecq&#8217;s &#8220;Elementary Particles,&#8221; which ends on a post-human coda.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jason Malloy</title>
		<link>http://www.gnxp.com/new/2008/10/29/who-whom/#comment-23397</link>
		<dc:creator><![CDATA[Jason Malloy]]></dc:creator>
		<pubDate>Thu, 30 Oct 2008 21:57:21 +0000</pubDate>
		<guid isPermaLink="false">#comment-23397</guid>
		<description><![CDATA[&quot;Humans were designed by evolution to wish (in their original environment) to perpetuate their existence through the mechanism of children.&quot;&#160;&lt;br&gt;&#160;&lt;br&gt;Were they? I think they were designed to like sex and to care for cute things. Sans birth control I don&#039;t think abstract metaphorical goals like &quot;seeking immortality in our children&quot; were needed or easily selected for.&#160;&lt;br&gt;&#160;&lt;br&gt;&quot;There are strong drives that render Idiocracy abhorrent and mind-children desirable.&quot;&#160;&lt;br&gt;&#160;&lt;br&gt;This is nonsense. &quot;Strong drives&quot; exhibited in what behavior? What people ever welcomed their murder and replacement by a more advanced society?&#160;&lt;br&gt;&#160;&lt;br&gt;Are husbands across America seeking out graduate students?: &quot;Please fill my wife with your seed, my &quot;strong drives&quot; demand your superior &quot;mind children&quot; over my own undereducated prole sperm.&quot;]]></description>
		<content:encoded><![CDATA[<p>&#8220;Humans were designed by evolution to wish (in their original environment) to perpetuate their existence through the mechanism of children.&#8221;&nbsp;<br />&nbsp;<br />Were they? I think they were designed to like sex and to care for cute things. Sans birth control I don&#8217;t think abstract metaphorical goals like &#8220;seeking immortality in our children&#8221; were needed or easily selected for.&nbsp;<br />&nbsp;<br />&#8220;There are strong drives that render Idiocracy abhorrent and mind-children desirable.&#8221;&nbsp;<br />&nbsp;<br />This is nonsense. &#8220;Strong drives&#8221; exhibited in what behavior? What people ever welcomed their murder and replacement by a more advanced society?&nbsp;<br />&nbsp;<br />Are husbands across America seeking out graduate students?: &#8220;Please fill my wife with your seed, my &#8220;strong drives&#8221; demand your superior &#8220;mind children&#8221; over my own undereducated prole sperm.&#8221;</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Zora</title>
		<link>http://www.gnxp.com/new/2008/10/29/who-whom/#comment-23398</link>
		<dc:creator><![CDATA[Zora]]></dc:creator>
		<pubDate>Thu, 30 Oct 2008 21:24:33 +0000</pubDate>
		<guid isPermaLink="false">#comment-23398</guid>
		<description><![CDATA[Try reading Iain Banks&#039; Culture novels: billions of organic beings all watched over by machines of loving grace. The organic beings are something between pets and friends.]]></description>
		<content:encoded><![CDATA[<p>Try reading Iain Banks&#8217; Culture novels: billions of organic beings all watched over by machines of loving grace. The organic beings are something between pets and friends.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jonathean</title>
		<link>http://www.gnxp.com/new/2008/10/29/who-whom/#comment-23399</link>
		<dc:creator><![CDATA[Jonathean]]></dc:creator>
		<pubDate>Thu, 30 Oct 2008 20:21:08 +0000</pubDate>
		<guid isPermaLink="false">#comment-23399</guid>
		<description><![CDATA[Vernor Vinge and others have talked frequently about the technological singularity that is supposedly just around the corner.  The idea is that humanity and/or its machines will eventually devise ways to increase intelligence, which will in turn facilitate evolution to an even higher level of intelligence.  And so on.  Eventually we could reach a point where civilization is as incomprehensible to us as our current civilization is to a chimp. &#160;&lt;br&gt;&#160;&lt;br&gt;Personally, I think this is a bunch of hooey.  Most people have no desire to meet their super-intelligent robotic superiors. Nor do they want to increase their own intelligence.  Of course, that may be because most people are idiots. It&#039;s hard to convince an idiot that he&#039;s an idiot.]]></description>
		<content:encoded><![CDATA[<p>Vernor Vinge and others have talked frequently about the technological singularity that is supposedly just around the corner.  The idea is that humanity and/or its machines will eventually devise ways to increase intelligence, which will in turn facilitate evolution to an even higher level of intelligence.  And so on.  Eventually we could reach a point where civilization is as incomprehensible to us as our current civilization is to a chimp. &nbsp;<br />&nbsp;<br />Personally, I think this is a bunch of hooey.  Most people have no desire to meet their super-intelligent robotic superiors. Nor do they want to increase their own intelligence.  Of course, that may be because most people are idiots. It&#8217;s hard to convince an idiot that he&#8217;s an idiot.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
