Dystopias don't go to heaven: My solution to the Fermi Paradox

11 1 0
                                    

You have probably heard of the Fermi Paradox: If the universe is so huge, where are all the aliens?

Many people have tried to answer this paradox in a lot of different ways. Perhaps we haven't searched hard enough, perhaps life is more rare than we ever imagined, perhaps we are among the first civilizations, perhaps we are in a cosmic zoo... And so on and on.

But I have never seen anyone propose my solution: All civilizations eventually become dystopias, and dystopias don't colonize space.

To reach this conclusion we will use fairly reasonable assumptions:

1.- Intelligent life evolves in species made of individuals:

This simply means we will not consider Hive Minds or beings that span an entire planet, like the Sea in Solaris.

2.- Intelligent beings need to work together to reach space

Humanity was intelligent for hundreds of thousands of years in which we didn't develop any complex technology. However in order for any group of intelligent beings to achieve things like space travel they need to work together.

3.- To work together intelligent beings will perform systems for collective decision making: Governments

Maybe there are intelligent beings out there which consult every member of the group before taking any decision, but that system is so inefficient they will never develop complex technology. Only intelligent beings who develop governments can develop more complex technologies.

Also it's important to point out that governments are "performed" by the individuals of a certain group. Governments are not physical things, they just exist because we act as if they exist, because we perform them. This will be relevant.

4.- Any effective government must be able to ensure its own survival

In order to enact any decision made by a government that government must be able to ensure it will continue to exist.

It is at point number 4 that the problems start, because ensuring its own existence becomes the most important goal of any government.

This is actually a problem studied in the field Artificial Intelligence, they call it "Goal Misalignment." It is worth explaining it, since its crucial for this argument.

When we create AIs we give them a "reward function." This is a mathematical function that takes as input the state of the world as the AI perceives it and returns a certain "reward" in the form of a number. Then we train the AI to increase that number, and they indeed learn to take actions that increase it.

For example if the reward function is the score in a videogame the AI will learn to play the game to superhuman levels.

The problem is that the AI may discover a strategy which increases its reward function without doing what we wanted. For example it may find glitches in the game that increase its score without needing to actually play the game.

I think this problem also arises with governments.

Individuals may create governments with any number of goals in mind: protection form rival groups, going to the moon, increasing the harvest, whatever. 

The reward function we give to governments is our own satisfaction. If we are not happy with the results we may choose to stop performing that original government, or even create a rival government to destroy the old one.

The problem is that governments will learn to hack that reward function using many different strategies.

For example, if a government cannot offer individuals the benefits they want it may try to convince them they want some other benefit, and those other benefits don't even have to be something tangible. They may be as abstract as "freedom" or "nationalism." All that matters is that individuals will agree to keep performing that government in exchange of those perceived benefits, whether real or not.

Another more common strategy is to offer a few individuals large benefits in exchange for coercing other individuals to continue performing that government. That's how you get aristocrats.

Once we think of governments as intelligent agents similar to artificial intelligences these strategies become obvious. We could probably deduce them using game theory.

The conclusion is that these are features not just of our human governments, but also of any possible government performed by individual intelligent beings.

Now we finally arrive at Dystopias.

A dystopia is the ultimate result of a government hacking the reward function of the individuals performing it.

Once a government has successfully become a dystopia it has "won the game." It can continue to use the same glitch over and over without having to play. It doesn't have to worry to please the reward functions of the people performing it, because those reward functions always give the highest result.

Think back to dystopias in literature or real life like 1984 or North Korea. Those governments convince people they are working for their benefit, and people believe it, even if they are miserable. The government has hacked our own reward functions and tricked us into accepting our own suffering.

As governments continue to exist they always feel the pull towards becoming dystopian. It is the easiest solution to the problem we gave them after all.

Maybe all governments eventually become dystopian, and then complex technology stops being developed. New technology may change the reward function, so its best to just keep everything exactly as it is.

This means dystopias can never colonize space.

Maybe the universe is full of intelligent life, but that life is doomed to spend an eternity being repressed in hells of their own making by the very system they created to improve their lives.

We always worry about artificial intelligences rebelling, but we already created one, and it has already rebelled many times. Many one day it'll win.

I only pray I am wrong.

Dystopias don't go to heaven: My solution to the Fermi ParadoxWhere stories live. Discover now