Wednesday, August 22, 2007

Bostrom's Simulation argument

A poster on Overcoming Bias notes:

Bostrom's paper has been published in a top philosophical journal and has been discussed by academics working in top philosophy departments. If you are going to dismiss the simulation argument on the grounds that it is bad philosophy, you need to explain the fact that the expert philosophical community doesn’t think so. Merely dismissing “the whole idea” as “completely retarded” simply won’t do.



Well here is the argument that everyone else should have made.

Bostrom's paper says

probability we are a sim = number of simulated people / total number of people (simulated plus real)

This rests on the following basic assumptions:

A) the indifference principle, which is
1) we have no reason to think we are a special observer,
used together with the fact that
2) we know we are a 'human'.

he expands that to be

fp : fraction of all human-level technological civilizations that survive to reach a posthuman stage
N : average number of ancestor-simulations run by a posthuman civilization
H : average number of individuals that have lived in a civilization before it reaches a posthuman stage

fsim = (fp N H) / ((fp N H) + H)

which simplifies to fsim = (fp N) / ((fp N) + 1), since H cancels.

and N = fi Ni (i.e. the fraction of posthuman civilizations interested in running ancestor-simulations, times the average number of simulations such an interested civilization runs), and he proposes that at least one of the following is approximately true:

fp ≈ 0
fi ≈ 0
fsim ≈ 1
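
As a quick illustration, here is a minimal sketch of the formula's behavior; the sample values of fp and N below are arbitrary, chosen only to show the shape of the trilemma:

# fsim = (fp N H) / ((fp N H) + H) reduces to (fp N) / ((fp N) + 1): H cancels.
def fsim(fp, n):
    return (fp * n) / (fp * n + 1)

# Arbitrary sample values, purely illustrative.
for fp, n in [(0.0, 1000.0), (0.5, 0.0), (0.5, 1000.0)]:
    print(fp, n, fsim(fp, n))
# fp ~ 0 or N ~ 0 (i.e. fi ~ 0) forces fsim ~ 0; otherwise fp*N >> 1
# forces fsim ~ 1 - which is exactly Bostrom's trilemma.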

Now I have a simple rule to add to that, which can be termed "once you get the hammer out of the toolbox you have to hit every available nail" (call it the indifference-to-nails principle).
So the indifference principle does say what Bostrom suggests, but it has something to say about a number of other things too.
So what does the indifference principle say about the remaining variables?

Let's have a go at fp.
Well, the Doomsday argument is well established. It proposes that, given we are alive now in 2007, we should not be particularly close to the beginning or the end of humanity's run (the same sort of logic that says we would be unlikely to be a real human if sims were very common).
Now that formula requires us to define a reference class - let's say we use Homo sapiens (I would want to modify this later). Bostrom makes a call on there being about 60 billion humans before us (which probably covers more than just Homo sapiens), so we will go with that because it is his position.

Now the UN suggests a rise to 9 billion people around 2050 followed by stabilization (assuming death rates keep up, I presume, so life expectancy around 67 or so). So for us to be at the middle birth, humanity would have until something like 2470, I presume. Of course it would take quite a lot longer to reach a 95% probability bound, but we will just keep the distribution in mind for the moment.
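
As a sanity check, here is that arithmetic as a minimal Python sketch (treating the UN figures and Bostrom's 60 billion as given, and ignoring the ramp-up from today's population to the 2050 plateau):

# Rough sketch of the Doomsday midpoint arithmetic above; all inputs are
# the post's own approximate figures.
births_so_far = 60e9            # Bostrom's ~60 billion humans born to date
stable_population = 9e9         # UN projection from ~2050 onwards
life_expectancy = 67            # years, so the population turns over every ~67 years

births_per_year = stable_population / life_expectancy   # ~134 million per year
# To sit at the median birth, as many people must be born after us as before us.
years_remaining = births_so_far / births_per_year        # ~447 years
print(2007 + years_remaining)   # ~2454 - the same ballpark as the ~2470 above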

Now Bostrom implies that these civilizations are running sims of themselves (which makes them maximally equivalent). But our problem then is that our humans should, statistically, have become extinct around 2470, or have reached the sort of singularity that results in no more births (or at least not enough to affect the equation); OTHERWISE they would simulate people after that period, and that would drag our likely position in time forward.

Now you can say "OK, but there is still a good chance humans would not have died out by 3000 or so" - but there is also still a chance they would have died out by 2200! In a sense, as you push out the deadline you have a higher percentage of a lower probability that we are a sim; eventually you have a massive percentage of zero.

So the question now becomes: how realistic is it that we will have total simulations by 2470? If there is a 0% chance of that, there is already roughly a 50% chance there aren't any simulations AT ALL. (OK, I'm abusing statistics a bit here, but it's one way to look at it.)
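
To make the "massive percentage of zero" point concrete, here is a rough numerical sketch. It assumes the vague-prior form of the Doomsday posterior - roughly, P(total births > n | we are birth number n_now) ≈ n_now / n - which is my assumption here, not something taken from Bostrom's paper:

# Vague-prior Doomsday posterior (an assumption):
# P(total births > n | we are birth number n_now) ~ n_now / n.
births_so_far = 60e9
births_per_year = 9e9 / 67          # stable population / life expectancy, as above

for year in (2470, 3000, 5000, 10000):
    future_births = (year - 2007) * births_per_year
    survival_prob = births_so_far / (births_so_far + future_births)
    print(year, round(survival_prob, 2))
# 2470 -> ~0.49, 3000 -> ~0.31, 10000 -> ~0.05: the later the date by which
# perfect sims arrive, the less posterior weight on anyone surviving to run them.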

One counter would be that there is a greater probability that we would live in a universe with a greater population. But if you accept that, the whole argument blows out: we start talking about near-infinite populations, no Doomsday at all, and, all of a sudden, us appearing to be amazingly close to the beginning of time. Again, the near-infinite population would apply to the real and the sim worlds equally - partly because of the way the sim is supposed to duplicate the real world, and also because the hypothetical sim and the hypothetical real person should both face the same mathematical problem.

Basically I find that it is very hard to back out of the problem with the sim argument without crippling the reason why you are proposing it.

---------

Then there are some factors that would lower the number of early-time sims:

A) there are some questions about how an ancestor-simulation would be run - I'm inclined to think you would build backwards in time: you have current data, so that would be the easiest way to run it.
B) the optimal unit of measure (to be addressed in section 2)
C) I'm inclined to think f('obviously not a sim') is a non-zero set - I have no idea how large. If that is the case, a more relevant question might be: "of the set to which we know we belong, what are the odds that we belong to the subset that is a sim?"

------------

In summary, for Psim to be very large we seem to need:

1) to start making almost perfect sims very soon (so that we can have billions of more or less perfect sims by 2500 AD or so)
and
2) to have a crash in real and sim population soon

This seems an unlikely combination, since it implies either a disaster that a civilization able to make perfect sims could not avoid down to the last man,
OR
a cessation of the creation of new real people as part of a human singularity (something like us all becoming one person), despite a desire to rerun simulations of old people. Both require, presumably, a similar information content (and thus a similar cost?), and both are 'entities' to which one would think the same sort of rules would apply - particularly since, to a posthuman, one of those entities is an exact duplicate of the old you.

In addition to the standard arguments:

3) to have a tendency to make ancestor sims as opposed to recognizably different sims (there are some reasons one might make ancestor sims, but more than non-ancestor ones?)
4) to care to make more sims than there were real people (duplicates?)
5) for N to be large
6) for near-perfect simulations to be possible and not too costly
etc., etc.
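
To see how quickly these conjoined requirements bite, here is an illustrative sketch; every probability below is invented purely for illustration - the point is only that Psim is gated by the product of all of them:

# Illustrative only: each requirement above gets a made-up probability.
requirements = {
    "near-perfect sims very soon":        0.3,
    "real/sim population crash by ~2500": 0.1,
    "preference for ancestor sims":       0.3,
    "more sims made than real people":    0.5,
    "N is large":                         0.5,
    "perfect sims possible and cheap":    0.3,
}

p_all = 1.0
for requirement, p in requirements.items():
    p_all *= p
print(p_all)  # ~0.0007: the conjunction shrinks fast even with generous inputs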

4 Comments:

Anonymous Anonymous said...

Next, does the indifference principle apply to anything else?

Well,
max number of sims = total number of bits available / number of bits needed to make a sim. (To propose we would devote all our computing power to that is of course a massive assumption!)
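
As a toy version of that bound (both figures below are invented placeholders, not estimates from Bostrom's paper):

# Toy resource bound: both numbers are invented placeholders.
bits_per_sim = 1e30   # hypothetical cost of one full ancestor-simulation
total_bits = 1e40     # hypothetical computing budget of a posthuman civilization

max_sims = total_bits / bits_per_sim
print(max_sims)  # 1e10 sims under these made-up figures; N is capped by resources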

N: well, we already know we are in a world that LOOKS like it is real - the indifference principle implies that most worlds look like they are real, which is some evidence that most worlds are real.

N (again): to use indifference again - how many sims of your own history would you run? How many sims that were not of your history would you run?

fsim: let's say we did do perfect ancestor-simulations - as I hinted at before, simulations of the current time would probably be easier and more useful. But if you do a simulation of a time when you are able to do simulations, you get an infinite regression. This puts an infinite weighting on you being a sim, but also on you being a sim in a time when people create sims. The other option, of course, is that the probability of creating sims is 0.
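
A crude way to see that weighting - the branching factor and cutoff depth below are arbitrary assumptions:

# Each sim-capable civilization runs, say, 10 sims of its own sim-capable
# era; each of those contains sim-capable people, and so on.
sims_per_level = 10   # arbitrary assumption
levels = 6            # regress cut off at a finite depth for illustration

counts = [sims_per_level ** k for k in range(levels)]  # observers per nesting depth
total = sum(counts)
print(counts[-1] / total)  # ~0.9: most observers sit at the deepest, sim-running level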

3:24 PM  
Anonymous Anonymous said...

And if we compare the f for a realistic vs a non-realistic simulation, it would seem that there are many more ways to have a non-realistic simulation. This creates an increased probability that, if we are a sim, our universe should be non-realistic (it doesn't matter how that shows itself) - i.e. the fact that it looks realistic undermines the sim hypothesis.
(The realistic/non-realistic distinction being something that Bostrom seems to have tried to dodge.)

3:30 PM  
Anonymous Anonymous said...

So if there are a near-infinite number of simulations we would probably be a simulation, but it would seem that IF we are a simulation we should be a future simulation, and probably a non-realistic simulation.
The logical conclusion is that fsim is close to 0 and that for some reason fp, fi or N (I note Bostrom rejected this prematurely) are close to 0 - or that the entire piece of logic is invalid along the lines of the normal critiques of the Doomsday argument.

3:35 PM  
Anonymous Anonymous said...

Now back to an old critique: the amount of computing power we would devote to simulations.

One can expect that entropy will still constrain humans into the future (it seems to be fundamental) - this means that we would want to achieve something with every use of energy to create information, such as simulations. Presumably a simulation would be either (1) a way to find something out, or (2) a place to house the beings themselves.

The problem with (1) is that it seems inefficient; the problem with (2) is that presumably such a being doesn't want to wipe out its own memory (part of itself!), and we don't remember not being humans.

So this implies, together with the very many more ways you can make an unrealistic sim, that

f(unrealistic sim) >> f(realistic sim)

In theory f(unrealistic) >> f(realistic) >> f(real) is possible, but it seems to be under attack from the indifference principle.

3:44 PM  
