The theory, which I probably misunderstand because I have a similar level of education to a macaque, states that because a simulated world would eventually develop to the point where it creates its own simulations, it’s then just a matter of probability that we are in a simulation. That is, if there’s one real world, and a zillion simulated ones, it’s more likely that we’re in a simulated world. That’s probably an oversimplification, but it’s the gist I got from listening to people talk about the theory.

But if the real world sets up a simulated world which more or less perfectly simulates itself, wouldn’t a mirror sim-within-a-sim need at least twice the original processing power/resources? How could the infinitely recursive simulations even begin to be set up unless the real meat people are constantly adding more and more hardware to their initial simulation? It would be like that cartoon (or was it a silent movie?) of a guy laying down train track struts while sitting on the cowcatcher of a moving train. Except in this case the train would be moving at close to the speed of light.

Doesn’t this fact alone disprove the entire hypothesis? If I set up a 1:1 simulation of our universe, then just sit back and watch, any attempts by my simulant people to create something that would exhaust all of my hardware would just… not work? Blue screen? Crash the system? Crunching the numbers of a 1:1 sim within a 1:1 sim would not be physically possible for a processor that can just about handle the first simulation. The simulation’s own simulated processors would still need to have their processing done by Meat World, you’re essentially just passing the CPU-buck backwards like it’s a rugby ball until it lands in the lap of the real world.

And this is just if the simulated people create ONE simulation. If 10 people in that one world decide to set up similar simulations simultaneously, the hardware for the entire sim reality would be toast overnight.
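
To make the scaling concrete, here's a toy model of that argument (the cost function and numbers are my own invention, purely illustrative): if every simulated world spawns its own 1:1 simulations, the real hardware pays for every layer at once, and the bill grows exponentially with depth.

```python
# Toy model (my own numbers): each 1:1 simulation costs at least as much
# as the universe it models, and all of it is ultimately paid for by the
# real "meat world" hardware.

def total_host_cost(base_cost: float, sims_per_world: int, depth: int) -> float:
    """Total capacity the real world must supply for a tree of nested sims.

    base_cost      -- cost of one 1:1 simulation of the host universe
    sims_per_world -- simulations each simulated world spins up
    depth          -- how many levels of nesting exist
    """
    # Level k contains sims_per_world**k simulations, each costing base_cost.
    return sum(base_cost * sims_per_world**k for k in range(1, depth + 1))

print(total_host_cost(1.0, 1, 5))   # one sim per world, 5 deep -> 5.0x base
print(total_host_cost(1.0, 10, 5))  # ten sims per world -> 111110.0x base
```

With one sim per world the cost only grows linearly with depth; with ten per world it explodes, which is the "toast overnight" scenario.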

What am I not getting about this?

Cheers!

  • doggle@lemmy.dbzer0.com · 1 month ago

    You’re thinking in terms of how we do simulations within our universe. If the universe is a simulation then the machine that is simulating it is necessarily outside of the known universe. We can’t know for sure that it has to play by the same rules of physics or even of logic and reasoning as a machine within our universe. Maybe in the upper echelon universe computers don’t need power, or they have infinite time for calculations for reasons beyond our understanding.

    • KevonLooney@lemm.ee · 1 month ago

      But that’s just a guess. It’s not necessarily true. You’re just saying “simulations might be possible, therefore they are definitely possible, therefore we are likely in a simulation”.

      That’s not logically sound. You can replace “simulation” with “God” and prove the existence of God similarly. It’s just a guess.

  • Voroxpete@sh.itjust.works · 1 month ago

    If someone has the resources to simulate a universe, they probably have the resources to simulate an arbitrarily large number of universes. This also assumes that some civilisation within the simulated universe reaches the level of technological advancement required to run a universe-level simulation of its own. We’re talking, probably, whole networks of Matrioshka brains, that sort of thing.

  • blahsay@lemmy.world · 1 month ago

    It’s simple: you cheat. In computer games we only draw the things you’re looking at; we only give the appearance of simulating the whole thing, but the ‘world’ or universe is actually very limited and you can’t visit most places. Sound familiar?
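
    That cheat is ordinary lazy evaluation; here's a minimal sketch of observer-driven world generation (class and method names are made up for illustration):

```python
import random

# Sketch of observer-driven ("lazy") world generation, the video-game
# cheat: a region doesn't exist until something looks at it, but repeat
# observations are consistent, so observers can't tell.

class LazyWorld:
    def __init__(self, seed: int):
        self.seed = seed
        self.generated = {}  # only observed regions are ever materialized

    def observe(self, region: tuple) -> int:
        if region not in self.generated:
            # Deterministic per-region seed: re-observing gives the same
            # result, hiding the fact that the region was just created.
            rng = random.Random(hash((self.seed, region)))
            self.generated[region] = rng.randrange(2**32)
        return self.generated[region]

world = LazyWorld(seed=42)
first = world.observe((0, 0))
assert world.observe((0, 0)) == first  # consistent on re-observation
print(len(world.generated))            # 1 -- the rest of the universe is free
```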

    • zbyte64@awful.systems · 1 month ago

      I don’t think you can approximate Turing-complete algorithms, though. And then you end up with a situation where the simulation is making these Turing machines out of other simulated components, so it’s even more overhead than just giving the simulated agents direct CPU time.

      • blahsay@lemmy.world · 1 month ago

        The real problems would be exponential (x^m) computational issues. A finite number of AIs running around in a finite amount of space is a linear problem. Basically, very possible.

    • JonEFive@midwest.social · 1 month ago

      The fun thing about this is that we have evidence that this is how our reality works. The double slit experiment showed that particles change their behavior when observed. (Gross oversimplification and only under very specific circumstances but still extremely fascinating.)

  • givesomefucks@lemmy.world · 1 month ago

    What am I not getting about this?

    The assumption is that the simulation runs constantly and at least as fast as real time.

    Neither needs to be true. A simulation might be a way to see what would have happened if we’d made different choices, it might be a video game, or it might be a way to generate TV shows based on “the historical past” that we consider present time.

    We might just be an experiment to see if free will exists. Start 10,000 identical simulations to run a century, and at the end compare the results, see what’s changed, and if those changes snowballed or evened out.

    And just like how video games only “draw” what’s in field of view, a simulation could run the same way, drastically cutting down resource needs.

    And “impossible levels of energy” isn’t really right. At a certain point a species can build a Dyson sphere, and once they have the first, every subsequent one is a cakewalk. It’s as close as possible to “infinite energy”; there’s no real reason to even go past one.

    Hell, it doesn’t need to be “everything” everything. Generate a solar system and as long as no one leaves, you don’t need to generate anything past it other than some lights.

  • breadsmasher@lemmy.world · 1 month ago

    It would take vast quantities of energy and resources if you were to do it real time, full time.

    As in: 1 minute inside the simulation could be 1 year outside it. Assuming we can keep tapping more energy sources and develop the technology to fully simulate a single reality, it wouldn’t necessarily have to run in real time.

    Inside the simulation, it wouldn’t make a difference

  • jj4211@lemmy.world · 30 days ago

    First, this is not really science so much as it is science-themed philosophy or maybe “religion”. That being said, to make it work:

    • We don’t have any way of knowing the true scale and “resolution” of a hypothetical higher-order universe. We think the universe is big, we think the speed of light is supremely fast, and we think the subatomic particles we measure are impossibly fine-grained. However, if we had a hypothetical simulation that is self-aware but not aware of our universe, they might conclude some slower limitation in the physics engine is supremely fast, that triangles are the fundamental atoms of the universe, and that pixels of textures represent their equivalent of subatomic particles. They might try to imagine making a simulation engine out of in-simulation assets and conclude it’s obviously impossible, without ever being able to even conceive of volumetric reality with atoms and subatomic particles and computation devices way beyond anything that could be constructed out of in-engine assets. Think about people who make ‘computers’ out of in-game mechanics and how absurdly ‘large’ and underpowered they are compared to what we would be used to. Our universe could be “minecraft” level as far as a hypothetical simulator is concerned; we have no possible frame of reference to gauge some absolute complexity of our perceived reality.

    • We don’t know how much of what we “think” is modeled is actually real. Imagine you are in the Half Life game as a miraculously self-aware NPC. You’d think about the terribly, impossibly complex physics of the experiment gone wrong. Those of us outside of that know it’s just a superficial model consisting of props to serve the narrative, but every gadget the NPC would see “in-universe” is in service of saying “yes, this thing is a real deep phenomenon, not merely some superficial flashes”. For all you know, nothing is modeled behind you in anything but the vaguest way, every microscope view is just a texture, and every piece of knowledge about the particle colliders is just “lore”. All those experiments showing impossibly complex phenomena could just be props in service of a narrative, if the point of the simulation has nothing to do with “physics” but just needs some placeholder physics to be plausible. The simulation could be five seconds old with all your memories prior to that just baked “backstory”.

    • We have no way of perceiving “true” time, it may take a day of “outside” time to execute a second of our time. We don’t even have “true” time within our observable universe, thanks to relativity being all weird.

    • Speaking of weird, this theory has appeal because of all the “weird” stuff in physics. Relativity and quantum physics are so weird. When you get down to subatomic resolution, things start kind of getting “glitchy”: we have this hard-coded limit on relative velocity, and time and length get messed up as you approach that limit. These sound like the sort of thing we’d end up with if we tried simulating a universe, so it is tempting to imagine a higher-order universe with less “weirdness”.

    • Mkengine@feddit.de · 30 days ago

      Just to spin this a bit further, if we are living in a simulation, does it have a purpose? Sometimes I ask myself if the purpose of such a simulation for humanity could be to see how long it takes from the big bang to the creation of artificial life. Maybe our purpose is to create such artificial life that can travel to the stars, because as humans we are not really fit to do that. Maybe we are a mere step on the ladder of our universe’s purpose.

      • jj4211@lemmy.world · 30 days ago

        Such a purpose would inform the constraints. If we are just “the sims” on steroids, then all the deep physics are absolutely utterly faked and we are just “shown” convincing fakery. If it’s anthropological, then similar story that the physics are just skin deep. If it’s actually modeling some physics thing, then maybe we are “observing” real stuff.

        But again, this is all just for fun. It’s not even vaguely testable, and thus not scientific despite its sciencey theme; it’s just something to ponder.

  • FeelzGoodMan420@eviltoast.org · 1 month ago

    Each level of simulation would be more intensive to run, so there’s a depth at which one more simulation can no longer be sustained. That means the number of possible sim layers is definitely finite.

    To more directly answer your question, each level of simulation would run worse to compensate for the resource decrease.
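
    A back-of-the-envelope version of that slowdown (the factor is arbitrary, purely illustrative): if each layer runs some factor slower than its host, simulating one second at depth d costs that factor to the power d in real time, which caps the usable depth very quickly.

```python
# If each simulation layer runs `slowdown` times slower than its host,
# one simulated second at nesting depth d costs slowdown**d host seconds.

def outside_seconds(slowdown: float, depth: int) -> float:
    return slowdown ** depth

print(outside_seconds(1000.0, 1))  # 1e3 host seconds per simulated second
print(outside_seconds(1000.0, 3))  # 1e9 -- roughly 32 host years per simulated second
```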

  • CarbonatedPastaSauce@lemmy.world · 1 month ago

    I’ve always thought the sysadmin of our simulation must be really pissed that we keep inventing better and better telescopes.

    The JWST probably cost him a weekend adding more nodes to the cluster.

  • Th4tGuyII@fedia.io · 1 month ago

    As others have said, our reference of time comes from our own universe’s rules.
    Ergo if rendering 1 second of our time took 10 years of their time, we wouldn’t measure 10 years, we’d measure 1 second, so we’d have no way of knowing.

    It’s worth remembering that simulation theory is, at least for now, unfalsifiable. By its nature there’s always a counterargument to any evidence against it, so it always remains a non-zero possibility, much like how most religions operate.

  • lath@lemmy.world · 1 month ago

    It’s light-based computing.

    So you make a framework, compound it into a big bang ball and then let it run. Afterwards, you analyze the imagery from start to finish or at whatever point you need to.

    Can’t interact with it though, only observe.

  • MrJameGumb@lemmy.world · 1 month ago

    I understand the point you’re making, but what if the simulation was actually not shared at all?

    Perhaps in this scenario the human brain is the only required hardware? Then there would only be one “base simulation” that is in fact just a basic set of prompts, rules, and initial visual stimulus that is then sent to each person in essence creating a whole separate simulation within each individual. Everything that happens after that is created based on how each individual reacts to the initial prompts. The main system would not have to create any new data to keep the simulation growing because the human mind would create and store all new information within itself. Each new person born would have all the additional hard drive and processing power needed to keep the simulation going for the rest of their lives.

    Just consider that if the world as we know it is just a simulation, and that simulation is all we have ever known since birth, how would you ever know if the other people are real or not? Would it even matter?

  • WolfLink@sh.itjust.works · 1 month ago

    You’ve basically hit the nail on the head. It’s pretty simple to argue, based on information theory / statistical mechanics, that a machine that runs a simulation has to support at least as many states as the thing it’s simulating, so a machine that simulates a universe as complex as its host would take up the entire host universe.
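
    The pigeonhole core of that argument can be sketched in a few lines (a toy 3-bit "universe", purely illustrative): a simulator with fewer internal states than the system it models is forced to confuse two distinct states.

```python
from itertools import product

# Pigeonhole sketch: squeeze the 8 states of a 3-bit "universe" into a
# "simulator" that can only distinguish 7 internal states. Two distinct
# universe states must collide, so a faithful 1:1 simulation is impossible.

def simulator_state(state: tuple, n_slots: int) -> int:
    """A toy simulator with only n_slots distinguishable internal states."""
    return hash(state) % n_slots

universe = list(product([0, 1], repeat=3))         # all 8 universe states
images = {simulator_state(s, 7) for s in universe}  # at most 7 distinct images

assert len(images) < len(universe)  # information was necessarily lost
print(len(universe), len(images))
```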

    It’s a fun idea but ultimately it’s not at all scientific and shouldn’t be taken seriously.