Can black holes be a proof that we live in a simulation?

disclaimer:

First off, I am a programmer, so sorry for my bad English. This is only my opinion, I am not an expert in physics; these are just some interesting thoughts that have been trapped in my mind over the last few days and that I want to share.

Some assumptions that I am using to support my thoughts

1. the speed of light has a meaning similar to our GHz

The speed of light, c, is some sort of limit on how many things (causations) can be processed by the machines in which we live, like GHz for our computers, but not necessarily per second; we will possibly never be able to perceive how time passes in our parent universe, since our brains are simulated by the same machine(s).

2. if the holographic principle is real, it could be a smart way to build a database for simulating our universe, and the render function will naturally be recursive

The holographic principle states that the information in a volume is proportional to its surface area. This is already mind-blowing; as I understand it, it is still not proved, but for this exercise I will assume it is completely true.

As a programmer myself, this looks like a smart way to store info in a database(s): just imagine you want to render anything on the earth, you just read the info on the area that is the boundary of the earth, and start to build continents, oceans, cities.

Now you want to render the people in X cities: you can read the area of the earth, calculate the area of the city, then calculate the area of the X district or neighbourhood, and with that area calculate the area of each person.

Now you want to calculate the state of a cell in the brain of person X: just repeat the process until you get the area of the cell from the area of the brain. This will work for atoms, electrons and so on, and what is really curious about this is that it could explain why quantum mechanics (QM) is so weird, because an algorithm of this kind will have a hard time predicting the state of things at some deep level.
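The drill-down above (earth, then city, then district, then cell) is just a recursive walk from the outer boundary inward. Here is a toy sketch of that idea; `Region`, `boundary_info` and `render` are names I made up for illustration, not a real physics engine:

```python
# Toy sketch of the recursive "render from the boundary" idea.
# Each region stores its info on its surface and knows its sub-regions.

class Region:
    def __init__(self, name, boundary_info, children=None):
        self.name = name
        self.boundary_info = boundary_info   # info stored on the boundary area
        self.children = children or []       # sub-regions: city -> district -> ...

def render(region, depth=0):
    """Recursively build inner detail from boundary information."""
    rendered = [(depth, region.name)]
    for child in region.children:
        rendered += render(child, depth + 1)
    return rendered

earth = Region("earth", 1000, [
    Region("city", 100, [
        Region("district", 10, [
            Region("brain cell", 1),
        ]),
    ]),
])
```

Each call only needs the boundary info of its own level to produce the next level down, which is exactly the recursive shape the principle suggests.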

Why could black holes confirm that we live in a simulated universe?

Just imagine this little mental experiment, and if you have some IT background I hope you will easily get what I am trying to say.

You, as the programmer, are trying to simulate our universe, and you store all the information of a specific area X in an array called A.

Now you have two functions (there are probably more):

G(A), which calculates the gravitational effects of this area, and R(A), which renders that area/volume and calculates all the inner causations (the laws of physics).

If the information in A becomes massive, then R(A) will take a lot of machine resources to calculate, and all the cause-effect calculations and interactions in A could also suffer the effect of time dilation (the thing that happened in the movie Interstellar).
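That "time dilation as CPU load" intuition can be sketched in a couple of lines. This is a made-up toy model, not real relativity; the budget number and the cost rule are invented purely for illustration:

```python
# Toy model: the more information in area A, the more each causation
# tick costs, so fewer ticks of A get simulated per slice of outside
# (parent-universe) time. Time inside A appears to slow down.

def ticks_simulated(info_in_A, cpu_budget=1000):
    cost_per_tick = max(info_in_A, 1)    # each tick costs ~ its info content
    return cpu_budget // cost_per_tick   # ticks that fit in one budget slice
```

A sparse area gets many ticks per slice; a dense one gets almost none, which is the slowdown the paragraph above describes.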

Due to the recursive nature of R (see 2), there could be a theoretical limit where too much info in A will trap the CPU in an infinite loop. This actually happens a lot with recursive functions: under certain conditions they just become infinite loops, and the programmer needs to identify those conditions to avoid trapping the CPU. Knowing this, when A reaches that limit you decide not to calculate R(A) anymore, because otherwise it will just start trapping your cores in infinite loops and put the other parts of the simulation in danger of never being simulated.
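In code, that decision is just a guard clause at the top of the render function. `INFO_LIMIT` here is an arbitrary made-up threshold standing in for the black-hole limit:

```python
# Sketch of the "stop rendering past the limit" rule.
INFO_LIMIT = 1000

def R(A):
    """Render area A, unless it holds too much information."""
    if len(A) >= INFO_LIMIT:
        # Past the limit we refuse to recurse into A ever again:
        # from the outside, A has effectively become a black hole.
        return None
    # ... the expensive recursive causation calculations would go here ...
    return f"rendered {len(A)} units of info"
```

The array A still exists; the guard just makes sure no core ever descends into it again.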

This limit should depend on only two variables: the information in A (mass), and the CPU limit (the speed of light). It would be really mind-blowing if it ends up looking like the Schwarzschild radius.
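For reference, the real Schwarzschild radius is r_s = 2GM/c², and it really does depend only on the mass M and the speed of light c (G is just a fixed constant), which matches the two-variable intuition above:

```python
# The actual Schwarzschild radius: r_s = 2 * G * M / c**2.

G_CONST = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0     # speed of light, m/s

def schwarzschild_radius(mass_kg):
    return 2 * G_CONST * mass_kg / C**2

# the Sun (~1.989e30 kg) gives a radius of roughly 2.95 km
```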

Now, what happens with G(A)? Would G also take this enormous amount of CPU resources to calculate when A grows? Wrong!!! It is very easy for us as programmers to track the size or length of any array, and G only depends on the size of A, so calculating G stays really, really easy no matter the size of the array. It is not the same story for R.

This could explain why the gravity effect escapes from black holes but other things do not: everything in the area A is never rendered again, the laws of causation also stop forever, and from our point of view it becomes just a point in space with volume 0. Looking at it this way makes the nature of black holes logical and understandable to our human brains.

The array exists in memory, but it is ignored by the CPU.

Why is it easy to track the gravity? Each time the array grows you can add 1 to its size and store that in another variable, so there is really nothing complicated to calculate.
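That bookkeeping trick looks like this. `Area`, `add` and `G` are made-up names for illustration; the point is that gravity only ever reads one counter, while R would have to walk (and recurse into) every element:

```python
# Why G(A) stays cheap: maintain a size counter as the array grows,
# so gravity is an O(1) read no matter how massive A gets.

class Area:
    def __init__(self):
        self.data = []
        self.size = 0        # the "mass": maintained in O(1) per append

    def add(self, info):
        self.data.append(info)
        self.size += 1       # add 1 each time the array grows

def G(area):
    # gravitational effect depends only on the size, never on the contents
    return area.size

a = Area()
for i in range(5):
    a.add(i)
```

So even after A crosses the black-hole limit and R(A) is never called again, G(A) keeps working for free.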

tl;dr:

1. if we are in a simulation,

2. if the speed of light is the CPU limit of our universe,

3. and if the holographic principle is just a smart way to render the universe using recursive functions,

4. then black holes are just a natural consequence of those things.

Is this testable? Yes!! It probably could be.

First we need to find an algorithm that allows us to render a world recursively using the information on the area that encloses the object we want to simulate (see 2). Then we can try to find out what happens when those areas have too much information on their boundaries:

1. Will a CPU be able to simulate all the laws of causation, or will it slow down the cause-effect calculations?

2. Will the CPU just get trapped in an infinite loop after the hard limit of maximum information on that area is reached?

3. Does this limit have a formula similar to the Schwarzschild radius?

If the answers to those 3 questions are true, true, true, then it is very probable that we just live in a simulation.