Who-whom?


I’ll yank this up from the comments:

…Would it really be worse to have a future civilization full of ultra-intelligent robotic minds pushing science forward tirelessly than the Jerry Springer-esque Idiocracy that we are careening towards?


So did anyone root for the AIs in The Galactic Center Saga? In the Dune universe there was an explicit emphasis on keeping tech relatively low because of the fear of Thinking Machines (though the post-Frank Herbert sequels did have a “happy ending” in the man vs. machine conflict). In the Foundation universe (specifically the sequels not written by Asimov) one element of the back story which emerged was that benevolent robots actually created diseases which made the average human less intelligent than they would have otherwise been because that was the only way that social equilibrium could be maintained (Hari Seldon was brilliant in part because he was never infected).

This is the sort of post which brings out a lot of opinions because it is explicitly designed to smoke out norms and values. Frankly, I think most of the human race would prefer the Idiocracy. The minority who would be more ambivalent, or even prefer the idea of intelligence which is not tied to our particular human substrate, are likely to be non-modal in their psychology. Additionally, the non-modals are more likely to be in research positions where they could forward the project of post-human sentience. To be explicit, I wonder if post-human sentience would simply be the apotheosis of the Nerd. Movies like Twelve Monkeys play on the idea of a Doomsday Cult which releases a deadly pathogen which kills most humans, but what about the possibility of a group of social outcasts intent on giving rise to a species which better encapsulates the set of values which reflect the priorities of nerds?


30 Comments

  1. I always considered this very question, and my possible career in this area of research is partially inspired by my desire to bring about the end of days. A lot of soul-searching and philosophical pondering has convinced me that this desire is wholly irrational. You can’t say one is better than the other. The advancement of science for no other purpose is as pointless as a technologically hedonistic world of leisure, but a lot less fun. 
     
    For a while, my view was that at our current state of knowledge, life is pointless and everything will end and nothing we create is permanent. However, there may be a non-zero probability that we create something godlike, and at that point we have somehow managed to “break free” from the universe or existence itself or whatever. That sounded like some rapturous achievement which would somehow make up for all the sacrifices. 
     
    I don’t believe that anymore. 
     
    Still, I think it would be cool, so I’m for it. Let’s just hope we can have the overlords be grateful to their creators, and provide us with 24 hour leisure and entertainment and whatnot.

  2. I prefer the imperfections of meat brains to the cold nature of steel brains. But I wouldn’t be opposed to significant ‘upgrades’ of the flesh, including biochemical and cybernetic. And AI ‘caretakers’ could be a boon to human stability. 
     
    AI replacements, I am firmly against. Call it self-preservation.

  3. Razib – 
     
    To be explicit, I wonder if post-human sentience would simply be the apotheosis of the Nerd. 
     
    Bingo.  
     
    Or anyway, the wishing and imagining of same that commonly goes on.

  4. You can’t say one is better than the other. 
     
    Oh? Why not? In any ultimate cosmic sense, “better” does not seem to be a meaningful concept; there is no known ultimate, objective standard of value. But evolutionarily speaking, “better” has a perfectly clear, objective meaning. It’s an absolute standard that does not require any personal valuation to be valid. 
     
    A moron’s paradise might persist for a long, long time. But it would eventually prove a dead-end. Superintelligences, although not descended from humans in a conventional or biological sense, would be our mind-children. Humans were designed by evolution to wish (in their original environment) to perpetuate their existence through the mechanism of children. There are strong drives that render Idiocracy abhorrent and mind-children desirable.

  5. Hyperbole said: “at our current state of knowledge, life is pointless and everything will end and nothing we create is permanent” 
     
    Why would permanence be required for life to be “pointful”?

  6. If presented with a false choice between Idiocracy and the Terminator, I’ll take Idiocracy. I’ll bet that the people who choose the Terminator have no children.

  7. but what about the possibility of a group of social outcasts intent on giving rise to a species which better encapsulates the set of values which reflect the priorities of nerds. 
     
    There already are chemical agents that do this. Recreational drugs have the least negative effects on East Asians and Jews, and the greatest negative effects on the Anglo-Celtic core of the U.S. population, as well as most American Indians/Mestizos and blacks (though some Mediterranean admixture protects Central American Mestizos to an extent).* The ability to handle recreational drugs seems heavily related to both IQ and executive function, as well as the ability to think for oneself.  
     
    Unfortunately for myself, I do not think my executive function is very good. East Asians (and possibly South Asian Americans, I am not sure) seem much more able to spend a lot of time, focus, and mental energy on “dry” subject material, thus the stereotype of Asians studying all the time. I tend to prefer subjects that are more politically charged, which I consider a great weakness. 
     
    *The info on high Irish/Anglo-Celtic susceptibility to addictive disorders can be found in The Natural History of Alcoholism by George Vaillant.

  8. In the Church Universal and Triumphant (CUT) a variant of this is cutting edge thinking.

  9. I would prefer a robotic civilization only to an Idiocracy that truly left no place for nerds. If the choice is instead between robots and a pluralistic society wherein average intelligence is low but nerds are left in peace to do their thing, well, the latter isn’t too different from what we have now.

  10. Don’t encourage them.

  11. Vernor Vinge and others have talked frequently about the technological singularity that is supposedly just around the corner. The idea is that humanity and/or its machines will eventually devise ways to increase intelligence, which will in turn facilitate evolution to an even higher level of intelligence. And so on. Eventually we could reach a point where civilization is as incomprehensible to us as our current civilization is to a chimp.  
     
    Personally, I think this is a bunch of hooey. Most people have no desire to meet their super-intelligent robotic superiors. Nor do they want to increase their own intelligence. Of course, that may be because most people are idiots. It’s hard to convince an idiot that he’s an idiot.

  12. Try reading Iain Banks’ Culture novels: billions of organic beings all watched over by machines of loving grace. The organic beings are something between pets and friends.

  13. “Humans were designed by evolution to wish (in their original environment) to perpetuate their existence through the mechanism of children.” 
     
    Were they? I think they were designed to like sex and to care for cute things. Sans birth control I don’t think abstract metaphorical goals like “seeking immortality in our children” were needed or easily selected for. 
     
    “There are strong drives that render Idiocracy abhorrent and mind-children desirable.” 
     
    This is nonsense. “Strong drives” exhibited in what behavior? What people ever welcomed their murder and replacement by a more advanced society? 
     
    Are husbands across America seeking out graduate students?: “Please fill my wife with your seed, my “strong drives” demand your superior “mind children” over my own undereducated prole sperm.”

  14. I’m surprised no one mentioned Michel Houellebecq’s “Elementary Particles,” which ends on a post-human coda.

  15. Were they? I think they were designed to like sex and to care for cute things.  
     
    “Please fill my wife with your seed, my “strong drives” demand your superior “mind children” over my own undereducated prole sperm.” 
     
    Is there a contradiction between the two statements? In the first, you suggest that selection only favoured the proximal causes of reproduction – sex and general affection towards infants. In the second, you suggest (through irony) that selection did favour the desire to have one’s own offspring, as opposed to raising someone else’s. 
     
    Humans are smart, they can grasp (i.e. develop neural associations corresponding to) the concept of having one’s own children, and therefore it doesn’t seem impossible for selection to act upon it. 
     
    I’m surprised no one mentioned Michel Houellebecq’s “Elementary Particles,” which ends on a post-human coda. 
     
    The weakest part of an otherwise haunting book. Read also his previous novel, stupidly named “Whatever” in English (as opposed to the original title, “Extension of the field of struggle”).

  16. I think much of this discussion is immaterial because I do not expect to see A.I. or uploading in the foreseeable future (next 30-40 years). However, I do expect significant advances in biotech, biomedicine, and neuro-technology in the next few decades. I think the smart people are going to use these technologies to make themselves smarter and most everyone else is not going to care that much (if some physics guy increases his IQ from, say, 135 to 160, it’s unlikely that people who work in more conventional jobs are going to care at all). Intelligence increase is going to be viewed by most people as the “nerdly thing” to do. 
     
    Aside from biotech developments, I do not expect much in the next 30 years. I certainly do not believe in any kind of singularity. 
     
    However, there are a couple of wild cards on the table that even the transhumanists are not aware of. One is IEC polywell fusion, which actually has a better than even chance of turning out for real. Another one is Extended Heim Theory (EHT).

  17. I have a post on polywell fusion here.
     
    I think robots will remain our servants and life will be good. The world of uploaded minds might be unpleasant though, according to Hanson.

  18. In the first, you suggest that selection only favoured the proximal causes of reproduction – sex and general affection towards infants. In the second, you suggest (through irony) that selection did favour the desire to have one’s own offspring, as opposed to raising someone else’s. 
     
    Both statements were ironic. Of the many biological behaviors and drives associated with having children (which certainly go beyond the two I listed), canned metaphorical sentiments are not among them.  
     
    Caledonian was suggesting these metaphors are a “strong human drive” which would lead people to desire the replacement of their own children by a superior intelligence, as long as that intelligence was designed by other humans. 
     
    Not only is this wrong because it suggests a primary metaphor-driven psychological mechanism in human reproduction that doesn’t exist (my invention = my children), but because it also suggests group selection behaviors (human invention = my children), rather than inclusive fitness behaviors (my children = my children).

  19. Don’t want to sound narcissistic here, but did you take the idea of ‘apotheosis of the nerd’ from me? If so, please acknowledge it, thanks.

  20. neuron, no. don’t read all comments.

  21. Personally I’d like to be as smart as the smartest physicists and mathematicians so that I could understand the world as well as it could be understood. Understanding implies consciousness, of course. 
     
    I’d like to live as long as possible so that I could understand history, biology, archeology, physics, math. 
     
    I assume that everyone would choose to be like me if they were as smart as me, so it seems plausible that the most intelligent, conscious population would be the happiest. So that’s the earth I would aim for. 
     
    Also, of course, sensual happiness, however that is to be obtained.

  22. Figures you guys would want to create superintelligent robots instead of nerdy women who would actually want to sleep with us. ;)

  23. “You can’t say one is better than the other. 
     
    Oh? Why not? In any ultimate cosmic sense, “better” does not seem to be a meaningful concept; there is no known ultimate, objective standard of value. But evolutionarily speaking, “better” has a perfectly clear, objective meaning. It’s an absolute standard that does not require any personal valuation to be valid.” 
     
    …And there are a near infinite number of “objective” metrics that are equally meaningless in the ultimate cosmic sense. What logical process leads you to believe that evolutionarily “better” is an obvious goal? 
     
    Also, what is evolutionarily “better” is not objectively determinable except in hindsight. Who says ultra-intelligence is a positive trait? 
     
    “A moron’s paradise might persist for a long, long time. But it would eventually prove a dead-end.” 
     
    And it seems that in the event of heat death of the universe or a big crunch, mind children would fare no better… So what’s the point? You are hoping that more complex thoughts are thought by something, sometime in the future, even if they have no such thing as art and music and sex? I don’t see what’s so special about that. I think you’re just advocating your personal preferences.

  24. I hold a position similar to Hyperbole’s. 
     
    The idea that you expressed at the end of your monologue sounds exactly like the foundation for Oryx and Crake by Margaret Atwood.

  25. From a utilitarian point of view, posthuman (or post-AI) beings might be able to sustain greater levels of utility than humans ever could (this is almost certain, since the space of possible minds is much bigger than the space of possible human minds). Hence the universe could be a much happier place with these beings around.  
     
    But motivation-wise, of course much of current posthuman thinking is the apotheosis of the nerd. The people most interested in cognition enhancement are academics. A future with software intelligence has an obvious appeal to anybody who “gets” software.  
     
    That does not tell us anything important, though. The Internet was invented for certain purposes but is now used for a lot more things. Cognition enhancers might be desired by a few cerebral people today, but once available people are going to find everyday uses of them. I think humanity will go posthuman not because most people want it, but because every step will be seen as practical and fun. We nerds will find that we will have the greatest success in spreading new technologies when they can fulfil human desires: the nerds may lead the way, but the funding will be mammal-controlled.

  26. Anders wrote: 
     
    humanity will go posthuman not because most people want it, but because every step will be seen as practical and fun.  
     
    ### As usual, Anders (and BTW, what a small world!) I largely agree with you – our world will end not because most people want it but because those who cause TEOTWAWKI would see each step as the right thing to do. But I am so much less sanguine about what it means for us – whether anyone wants it or not, our world and our position in it will end, as soon as somebody somewhere implements a human-level self-modifying AI. I am convinced that somebody will do it, somewhere, no matter how we, individually or collectively, try to prevent it. This most likely implies swift death to all of us, nerd and jock alike. The Singularity is not “nerd rapture”, it’s the end of all nerds. 
     
    I wish I could share your optimism. And grats on publishing the WBE roadmap. This is our only (slim) chance of making it to the next level in this game of life.

  27. …And there are a near infinite number of “objective” metrics that are equally meaningless in the ultimate cosmic sense. What logical process leads you to believe that evolutionarily “better” is an obvious goal? 
     
    You’re rather missing the point. Goals are things organisms set; they’re contingent and arbitrary. Evolution occurs whether we decide to pursue it or not. It IS objective, rather than “objective”, as you dismissively refer to it. 
     
    And it seems that in the event of heat death of the universe or a big crunch, mind children would fare no better… So what’s the point? 
     
    In that scenario, I suppose there would be no point at all. So? Mere nihilism isn’t much of an argument. 
     
    even if they have no such thing as art and music and sex? I don’t see what’s so special about that. I think you’re just advocating your personal preferences. 
     
    On the contrary, refusing to apply my personal preferences – or anyone else’s – is precisely what I’m advocating. And what you seem to be objecting to quite strongly.

  28. ” You’re rather missing the point. Goals are things organisms set; they’re contingent and arbitrary. Evolution occurs whether we decide to pursue it or not. It IS objective, rather than “objective”, as you dismissively refer to it.” 
     
    But this post is about values, things being better or worse, and what’s “desirable”. It’s not about inevitable outcomes. I basically agree that the apotheosis of the nerd is exactly that: nerds wishing the world would be more like Star Trek. 
     
    ” In that scenario, I suppose there would be no point at all. So? Mere nihilism isn’t much of an argument.” 
     
    “Mere” nihilism… When you are arguing that machines will survive longer than people, and are therefore preferable, the eventual pointlessness of everything matters. 
     
    ” On the contrary, refusing to apply my personal preferences – or anyone else’s – is precisely what I’m advocating. And what you seem to be objecting to quite strongly.” 
     
    You are doing no such thing; you stated that mind children are preferred by people, even though many people disagreed. I think you prefer mind children. I think that many people convince themselves that there is something about post-humanism that is kind of transcendent and good, but if humans just die out and are replaced, then I don’t see how you can justify it to others who aren’t similarly motivated by the “coolness” of it. There is no reason why it is preferable for sentience unrelated to us to exist and us not to exist, rather than for us to exist in a state of non-advancement. 
     
    Consider this: If we discovered that 10,000 light years away there already were self-modifying AI that far surpassed us, and would always be ahead of our AI, then would your logic still stand, or would idiocracy then be preferable as we could indulge in hedonism to our hearts’ content, knowing that the knowledge is being created somewhere else by someone else’s mind-children? 
     
    I just don’t see the point except that a narrow subset of people consider it cool.

  29. But this post is about values, things being better or worse, and what’s “desirable”. It’s not about inevitable outcomes. 
     
    But that certain things will be perceived as desirable is one of the inevitable outcomes. Evolution directs and constrains what sort of value systems will persist across time. It defines what ‘works’, it provides the objective way to evaluate systems of evaluation without being one itself. 
     
    “Mere” nihilism… When you are arguing that machines will survive longer than people, and are therefore preferable 
     
    No, that is not what I’m arguing. I’m saying that I perceive myself to have more in common with a world that has superintelligent machines but no humans than a world with only moronic humans. My instincts for reproducing myself can be satisfied by entities that carry properties I value into the future, even if they’re not my biological progeny. In the Idiocracy scenario, virtually everything I value has been destroyed or ruined. The humans that exist in that world share none of the properties I consider important parts of my identity. I’d rather see them all killed off and replaced by passionless, emotionless machines that were at least capable of reason.

  30. “But that certain things will be perceived as desirable is one of the inevitable outcomes. Evolution directs and constrains what sort of value systems will persist across time. It defines what ‘works’, it provides the objective way to evaluate systems of evaluation without being one itself.” 
     
    Then you can’t evaluate anything except in hindsight, and even then, you can’t attribute the success of societies or groups to their value systems with much certainty. There is too much noise. 
     
    Furthermore, if humanity descends into idiocracy, then it is obviously because of evolutionary forces and value-systems that don’t respect intelligence too much. If that is the inevitable outcome, then that value-system is actually “right” according to your standards. You’re trying to simultaneously argue that your personal desires are X while connecting it to general principles of objective morality based on evolution, but the result you would consider abhorrent would be “right” according to your morality. 
     
    It’s hard to explain, but do you see what I’m trying to say? The thing is, I actually agree with you. I’m going to be studying mathematical/systems biology; I would hate to have an idiocracy and would prefer mind-children. I’ve just given up trying to justify my beliefs because I’ve concluded that they’re just based on what I think is cool and interesting; there are no high-minded philosophical principles behind it, even though I used to try and create them. That said, I think it would probably be best if we could integrate ourselves into the robot world. I’d much prefer the cyborg world where we could symbiotically coexist and still have some traces of our old selves around.
