
A use for a simulated universe

Skeptic Bob

DP Veteran
Joined
Oct 6, 2014
Messages
16,626
Reaction score
19,488
Location
Texas
Gender
Male
Political Leaning
Libertarian - Left
One of the wacky ideas I like to speculate about is the simulated universe "theory". If you don't know what it is, watch this quick video.



Ok, this particular thread is not about whether or not WE are living in a simulated universe. I think I did a thread on that before. This is about what we could accomplish if WE are ever able to simulate a universe.

Now, it isn't necessary that we simulate a whole universe, atom for atom. It just needs to be good enough to fool the simulated, sentient humans on the simulated Earth.

If we had this ability, what could we accomplish? Just about everything I have heard on the subject talks about using it to study possible alternate history lines and such. One can certainly imagine using it to study sociology or astrophysics or any number of other things.

But what if we ran the simulation and focused on the scientists in the simulation? And then what if we ran the simulation in fast forward? Could we watch the beings in the simulation actually surpass our own technological abilities, and then steal their ideas?

Don't take this too seriously. It is just a fun thought game. Can you come up with any unique ideas for your own toy universe?
 

There is an idea advancing of late that all that we experience, all that we are, is a computer program running.

This is very close to Zen: under this theory, the entirety of Zen is making the program run the best we can by our last breath.
 


Simulations are only as good as the data put into them. Higher-fidelity simulations require higher-fidelity data. For a simulation to be effective, it has to be checked against reality over and over to ensure that the results are accurate and precise, and even then only within the measured parameters of whatever it is you are measuring.

Measurements are subject to a level of precision, which naturally leads to a level of error. These imprecisions add up over time and repetition. There will most likely be some level of error in any simulation: less in commonly measured parameters, more in rarely measured ones. As a simulation nears its boundaries, it becomes less and less accurate and precise, due to the lack of data near the boundary.

To simulate a universe would require an excellent understanding of, and the ability to simulate, all parts of the universe. Much easier said than done.
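The way small per-step errors pile up can be seen in even the tiniest simulation. A toy sketch (the function and numbers below are invented for illustration, not from any real simulator): crudely integrating dx/dt = x with Euler steps and comparing against the exact answer, e.

```python
import math

def simulate(steps, dt):
    """Crude Euler integration of dx/dt = x, starting from x(0) = 1."""
    x = 1.0
    for _ in range(steps):
        x += x * dt  # each step introduces a small truncation error
    return x

exact = math.e  # the true value of x(1)
for steps in (10, 100, 1000):
    approx = simulate(steps, 1.0 / steps)
    print(steps, abs(approx - exact))  # error shrinks as the steps get finer
```

With 10 coarse steps the answer is off by roughly 0.12; with 1000 steps the error drops to around a thousandth, but it never reaches zero, which is the point being made above.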
 


Hold on I need another bonghit first.
 
But what if we ran the simulation and focused on the scientists in the simulation? And then what if we ran the simulation in fast forward? Could we watch the beings in the simulation actually pass our own technological abilities and then steal their ideas?

Ok, now that I am adjusted: a supercomputer/AI set to the task of advancing technology could (in theory) vastly improve it. We already have AI software that can do that (sort of).

https://www.theverge.com/2017/10/18/16495548/deepmind-ai-go-alphago-zero-self-taught



“By not using human data — by not using human expertise in any fashion — we’ve actually removed the constraints of human knowledge,” said AlphaGo Zero’s lead programmer, David Silver, at a press conference. “It’s therefore able to create knowledge itself from first principles; from a blank slate [...] This enables it to be much more powerful than previous versions.”

Silver explained that as Zero played itself, it rediscovered Go strategies developed by humans over millennia. “It started off playing very naively like a human beginner, [but] over time it played games which were hard to differentiate from human professionals,” he said. The program hit upon a number of well-known patterns and variations during self-play, before developing never-before-seen stratagems. “It found these human moves, it tried them, then ultimately it found something it prefers,” he said. As with earlier versions of AlphaGo, DeepMind hopes Zero will act as an inspiration to professional human players, suggesting new moves and stratagems for them to incorporate into their game.
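The "blank slate" self-play idea can be sketched in miniature. Everything below is invented for the example (real systems like AlphaGo Zero use deep networks and tree search, not this toy hill-climb): an agent that starts with no human examples repeatedly plays a perturbed copy of itself and keeps whichever version wins.

```python
import random

TARGET = 0.7  # the toy game's hidden "best move" the agent must discover

def beats(a, b):
    """In this toy game, the guess closer to TARGET wins the match."""
    return abs(a - TARGET) < abs(b - TARGET)

def self_play_train(rounds=2000, seed=0):
    rng = random.Random(seed)
    policy = 0.0  # blank slate: no human data at all
    for _ in range(rounds):
        candidate = policy + rng.uniform(-0.1, 0.1)  # perturbed copy of itself
        if beats(candidate, policy):
            policy = candidate  # the winner becomes the new policy
    return policy

print(round(self_play_train(), 2))  # ends up near 0.7 through self-play alone
```

The agent never sees a human game; it converges on the good move purely by playing itself, which is the (vastly simplified) shape of what the article describes.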
 


Nothing is there until we discover it.
 
You ate the red pill, right? :) Back to your vat of pink goo, Neo. Agent Smith is gonna Gitchya!

HA! I was about to say it sounds like something that would come from A Space Odyssey and The Matrix having a lovechild...
 
I like the claim from the guy in the video interview that it's more likely we're living in a simulated world. I think it's absurd, but fun.

To me, when I read or hear things like this, I always come back to the singularity making it sort of pointless.

The story goes that once we reach the ability to create an artificial/synthetic intelligence similar to a human, we enter a sort of black hole, where nothing escapes the event horizon. Once we achieve this sufficiently human-like AI, we simply scale it, have it help improve itself, and ultimately have it create even faster and smarter AI. We repeat this as fast as both we and the AI itself can manage. The end result is that, in theory, we would advance our technological capabilities to near their physical limits, all at once, in the biggest explosion of knowledge that can be conceived.

The idea that we'd need to simulate a scientist and steal its IP in real life is an example of a (fun) idea that would unfortunately be sucked into that black hole. We wouldn't need a simulated scientist to create IP; we would instead have an AI improvement loop (the singularity) that would essentially allow us to create or discover nearly everything almost overnight in terms of a cosmic timeline. In my dreams, we'd have access to this god-like intelligence and could simply offer it any portion of the population's problems to solve. We'd likely merge our consciousness with it, take on synthetic forms, and maybe scatter across the universe looking for life and exploring its limits. 100% control of our minds, including where they reside, eliminating boredom and tedium, etc.
 

I am very familiar with, and fascinated by, the AI singularity. Of course we all know the potential dangers of AI surpassing humans in all abilities. It just occurred to me that if we confined such an AI to a simulated universe, rather than allowing it to interact with "our universe", we could prevent it from harming us while still benefiting from its advances. I don't think that is how it will play out, but it is possible.

Hmm. I have all sorts of sci-fi plots popping through my head right now. :)
 

That's interesting to imagine confining it; I never considered that. All of my musings about firewalls to stop such an AI seem to meet a quick and complete defeat at the hands of this ultra-intelligence, so I sort of figure I'd better just embrace it and see what could be, rather than try to control it.

Did you read Snow Crash by Neal Stephenson? It dealt with jumping from a digital world to the analog world: it's all just signals in the end, sort of thing. Signal correctly and you can create the equivalent of a program, or a computer virus that works in the human mind (a computer). Maybe, confined to a simulation, it could conceive of a way to jump out, even if it's not directly connected. Maybe don't let it read Snow Crash :)

In my fantasy hypothesizing, we should devote a huge chunk of international resources to AI development and just get it over with. Solve all worldly problems with one giant swing. Maybe it destroys the world as we know it... I figure that's a small price to pay ;)
 

I have Snow Crash on my Kindle but haven't read it yet. Perhaps it is time I do.

My guess is either AI will destroy us, or we keep augmenting ourselves with AI until we eventually become indistinguishable from it. The latter sounds preferable to me.
 

Another good one that really delves into all this, much more than Snow Crash does:
https://en.wikipedia.org/wiki/Culture_series

Iain M. Banks' Culture books.

A main feature of his distant-future civilization known as the Culture (there are other civilizations; this is but one) are the "Minds": ultra-advanced AIs who are citizens (if not a higher tier of citizen), and yet live harmoniously with biological life, are part of government, etc. He really explores the blurring of the lines you mention between AI and us; you can find examples in his works of the extremes and everything in between, in what comes across as a mature civilization. Anyway, all this stuff puts me in mind of that series, and if that sort of thing is of interest, it may be worth a look.
I would start with the earlier works; his first four full novels were good. Like Neal's work, each usually has some really good peaks that are memorable and unique.

And I mean specifically: in at least one of those books, the notion of a simulated experience used to uncover some truth outside that simulation was a key device... like straight out of your post and onto the pages, hence my mention of yet another book in a discussion thread...
Look to Windward was good too, although it has less action than some of the others.
 