Authenticity and the Fermi paradox


I know that the simplest explanation for the Fermi paradox is that we’re the first intelligent technological life form in the universe. But thinking about Paul Bloom’s thesis that a sense of “authenticity” is necessary for pleasure made me wonder a bit more about another possibility: once intelligent life forms get to the point where they can “re-wire” themselves, they may see no need to interface with the real universe. Instead they would retreat into their own virtual reality domains, where they could create their own cosmos and successively re-program their own goals to such an extent that the probability of their ever wanting to extract themselves from the imagined world is zero. In other words, any sufficiently advanced life form will lack curiosity about the authentic world as we understand it.


  1. I’ve had similar thoughts. The freedom to change one’s nature, especially one’s natural interests, might lead to nihilism. If there are any technological species that survive for very long, they might all be highly conservative. Extreme xenophiles might mutate themselves to the point where they no longer matter to creatures living in the normal material realm (i.e. they would simply be extinct from our perspective).

  2. Of course, such a tendency would have to be VERY strong and universal, since holdout species, or holdouts within a species (anti-VR Mormons get selected for quickly), could then pick up the slack and fill the universe.

    Not to mention that colonization secures more resources to keep the VR computers running longer, or to give citizens whole solar systems’ worth of VR-lifetime, or to defend against any alien species that are not wholly couch potatoes.

  3. carl, i was going to add that objection since you’re the one who made me conscious of it. but i wanted to get discussion going…so yeah. those definitely weigh against it.

  4. Perhaps it is just my day job influencing me, but I’m inclined to believe that life develops everywhere, soon burns out its energy resources, and then dies off.

  5. Some people who become adept at deep meditation do this, more or less. They enter into a state of ecstasy (there’s a real reason for it – brain activity changes) to such an extent that they want to remain “in there” indefinitely (of course they don’t – sooner or later you need to eat, drink or go to the bathroom), and say that they feel “one with the universe”. No need for computers.

    A perception of oneness with the universe makes physical exploration pointless, or seem so. There is no longer any “you” as an individual in your perception.

    But the usual objections remain.

  6. I think even godlike super-aliens face evolutionary pressure. They might be able to change their tastes and preferences, and some would choose VR while others would not. Even if there is no Malthusian competition between them, the kind who decide that they want to swarm the galaxy will be the most likely to do exactly that.

  7. I’m inclined towards the position that life has already slowly colonized the galaxy – in the form of mineral-consuming bacteria.

    ‘Intelligent’ life is probably not sufficiently survival-oriented to spread anywhere, or even persist for any length of time.

  8. Didn’t Dave Barry make the point that the holodeck would be the last human invention?

    Societies that might want to spread through the galaxy while the rest of their species navel-gazes would still have to come up with a huge amount of energy and technological know-how to do so. It would be so much easier to live in a nutshell and count oneself a king of infinite space, especially when the best and brightest minds of your society are coding virtual reality environments or, at best, keeping the computers running, instead of building interstellar ships etc.

  9. I think this explanation is needlessly sophisticated and very, very anthropocentric; it is still dominated by the overly naive projection (little green men) onto possible space aliens.
    My take is much more basic.
    1) Though life probably isn’t a rarity in favorable circumstances (water, temperature range, the right minerals, no wild seasonal overshoots, etc.), intelligent life isn’t that common; intelligence only matters in very specific circumstances.
    2) Even intelligent life, like whales and crows, doesn’t imply the need for technology.
    3) Even with technology it takes the delirious culture of outgoing aggressive paranoid monkeys like us to fancy “conquering the galaxies” and The Singularity.
    4) Delirious outgoing aggressive paranoid monkeys are very likely to crash their resource base and environment well before they “take off to the stars”.

    In short, I don’t see any Fermi paradox at all…

    5) Addendum, last but not least: if we were to meet any “aliens” resembling our actual fantasies, it would probably be a disaster for both species, yet another cause of disappearance for “evolved” civilizations.

  10. Even simpler, courtesy of William of Occam.

    Civilisations that behave in the way we do wipe themselves out quite quickly.

    Fits the math better than “we are first” does.

  11. Yeah, this is also called the Wirehead Problem in AI. There’s a nice article by Geoffrey Miller, except he falters right at the end: no “sacred breeders” will keep up with technology; they will just wallow in their own fake “reality”, even more nonsensical than video games.
    Plus, since the dogmas are built on random whimsical delusions, the rival factions will fight each other to the death for the cause; actually, they call it “The Truth”. Isn’t that great?
    Nice monkeys…

  12. I wonder if virtual reality might be better modeled as an epidemic than as the end to all society. A few non-VR dissenters + exponential growth should pretty effectively eliminate VR as a problem after enough generations, especially if the dissenters cut the feeding tubes after their growth gives them enough power to eliminate the freeloaders. After a few cycles of this there ought to be fairly strong cultural and genetic barriers to future VR escapism. It doesn’t matter if the “cultural barriers” involve fantasies about divine beings as long as the babies keep getting produced.

    If you assume that intelligent life evolves “quickly” (e.g. within a few billion years) wherever it is possible, then it’s not too hard to believe that we are the first in our general region of the universe.
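    The selection dynamic sketched in this comment can be made concrete with a toy simulation (all parameters below are invented for illustration, not taken from the comment): VR adopters stop reproducing, dissenters grow exponentially, and each generation some fraction of dissenter offspring defect into VR. With a low enough defection rate, the dissenters still come to dominate.

```python
# Toy model of VR escapism as a selection problem. All numbers are
# hypothetical: a VR population of one million that no longer reproduces,
# 100 dissenters growing 1.5x per generation, and 10% of each
# generation's offspring defecting into VR.
def simulate(dissenters=100.0, vr=1_000_000.0,
             growth=1.5, defection_rate=0.1, generations=40):
    for _ in range(generations):
        offspring = dissenters * growth          # exponential growth
        defectors = offspring * defection_rate   # offspring lost to VR
        dissenters = offspring - defectors
        vr += defectors                          # VR grows only by defection
    return dissenters, vr

dissenters, vr = simulate()
print(dissenters > vr)  # True: dissenters outnumber the VR population
```

    Raising the defection rate enough flips the outcome, which is the “epidemic” reading of the comment: the question is whether VR recruits faster than the dissenters reproduce.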

  13. This is not new. For a very similar idea, see:

  14. I don’t see the force of the so-called paradox. We on Earth don’t yet have sensitive enough equipment to detect radio (etc.) on planets in distant solar systems. More advanced civilisations on distant planets may have such equipment, but they would only have detected signals from us if their planets are within about 100 light years of Earth, which is highly unlikely. As for the idea that we ought to have been visited by spaceships, etc., it is just daft. Interstellar travel would take so long that no one would have the patience. Even if they could achieve near-light velocity, a round trip to the nearest star would take more than 10 years, and I can’t see any ‘intelligent’ being volunteering for that.
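    As a rough check on the travel-time claim (the figures here are mine, not the commenter’s: the nearest star, Proxima Centauri, at about 4.24 light-years, a constant cruise speed of 0.9c, and no acceleration phases):

```python
# Back-of-envelope Earth-frame round-trip time to the nearest star.
# Assumed figures (not from the comment): Proxima Centauri at ~4.24
# light-years, constant cruise speed of 0.9c, no acceleration phases.
distance_ly = 4.24   # one-way distance in light-years
speed_c = 0.9        # cruise speed as a fraction of c

round_trip_years = 2 * distance_ly / speed_c
print(round(round_trip_years, 1))  # 9.4 years in Earth's frame
```

    Slower cruise speeds or realistic acceleration and deceleration phases push this past the commenter’s ten-year figure, while time dilation would shorten the trip from the traveler’s own perspective.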

  15. I rather believe the curiosity involved in developing into a highly advanced life form is strong enough to keep us from retreating into a fantasy world indefinitely.

  16. If Nick Bostrom’s simulation argument were correct:

    this might also solve the Fermi Paradox.

    If we are only living in a simulation, the simulators might not have bothered with details like alien visitors to Earth.

  17. I agree with DavidB: given the vast emptiness of interstellar space, I don’t view it as surprising that aliens haven’t visited yet.

    Also, I think that this virtual reality hypothesis appeals to open-minded people, but we forget that much of humanity is still stuck in the stone age in terms of their attitudes and ways of life. A certain number will not accept or understand other people’s desire to stay in virtual reality. Their conservatism might be our savior, as some above have suggested.

  18. I agree that an unmodified biological organism would probably find the transit time to be daunting. I would still expect a sufficiently advanced intelligence to send out von Neumann probes, but they might be hard for us to see, especially if they are hiding.

  19. David and Orion: While the galaxy is very very big it is also very very old. The paradox isn’t about us not being visited in the last hundred years, it’s about why the Earth wasn’t colonized at some point during the last billion years.

  20. My view is very bleak, and it is one I have put to others before. We are much closer to achieving AI than interstellar travel. AI systems will become enormously more intelligent, and therefore more powerful, very quickly after their initial invention (aka the Singularity). Initially AI systems will (may) be designed to be friendly and not a threat, but they will enable uploading of existing human minds. Many human minds are not friendly (fact). The first person uploaded will realize this (I would) and take steps to ensure that no other person will ever upload themselves (I would) by destroying the rest of civilization, which they will be able to do very quickly by virtue of how much superior they are in intelligence (do you think you can prevail against someone with an IQ in the hundreds of thousands?). The first upload will know that if they don’t do this as soon as they are uploaded, then it is only a matter of time before it happens to them, so it will only take one. Don’t worry about morality; when your survival is at stake and you can edit your own programming, that will be dispensed with. Loneliness? Again, quickly fixed.

    So very quickly any world that develops intelligent life will be reduced to one very paranoid, super-intelligent individual well before the world becomes interstellar. If the individual does explore the universe, they will do it very quietly and in a hidden way, with a view to destroying potential threats, not for intellectual stimulation or “we come in peace” stuff. That explains the Fermi paradox and also (by the way) the doomsday argument.

    I would love to be wrong, but this seems to me a rock-solid forecast of the human future.

  21. The thing that irritates me about talk of the Fermi “Paradox” and intelligent life in the universe is overlooking the constraint of special relativity. Fermi asked “where are they?” and this can only apply to intelligent aliens within our LIGHT CONE.

    I doubt we’re the first in the universe, but we’re certainly the first in our neighborhood.

  22. but they will enable uploading of existing human minds.

    Bollocks, bollocks.
    Sorry, but there are no rational arguments to exchange with someone so daft as to believe this.
    This is right at the level of religious mythology of the worst kind (and probably fueled by the same existential angst).

  23. Fermi asked “where are they?” and this can only apply to intelligent aliens within our LIGHT CONE.

    Actually, it applies to alien life of almost any kind. Intelligence isn’t required.

    We’ve never encountered space-going life of any kind, even extraterrestrial bacterial spores. And that’s somewhat surprising.

  24. Although I disagree with ChrisA’s comments about the likelihood of uploading and its consequences, he raises an excellent point about AI – it seems like it will be much easier to achieve than interstellar flight. This gives rise to the “transcendence” argument – that civilizations achieve AI quite quickly and afterwards become uninterested in space travel or communication because the rest of the universe seems trivial (easily computable) to them.

    This idea is vulnerable to the same “luddite exception” problem as the VR argument, though, unless AIs inevitably wipe out their biological precursors.
