Read "Infinite" by Jeremy Robinson. It's a sci-fi novel that explores that idea a bit.
Of course, but I'd still want to contribute to the real world. Luckily my contributions are non physical, so I could work from VR. And I'd have to log out occasionally to exercise.
Why couldn't you exercise in VR?
Unless the machine you're connected to somehow stimulates your muscles so they don't atrophy, exercise is probably one of the few things you couldn't do in VR, for the same reason you can't exercise in your dreams.
You can do the activity, of course, and it feels like working out, but it doesn't translate to physical gains in your real body.
What are you talking about? Just because you're wearing a headset doesn't mean you can't move your body. You'd need weights in the real world to use weights in VR, but even without them you could do planks, push-ups, and various other exercises on the real floor you're standing on.
I'd jump in, but I would still need a crafted experience. I find designing my own sandbox a bit dull. Remember the last season of The Good Place? It turns out infinite wish fulfillment might not be that effective at making us happy. And it certainly won't help us develop.
But if there are fun, designed experiences that are engaging and challenging to do inside this realm, sign me the fuck up.
Question though: how is time experienced on the inside? Because if our virtual experiences happen faster than real time, we could get some real-world advantages by studying and training in the virtual world.
One issue with learning and training is that you'd have the same limitations as now. You're still human, just connected to a machine, and time can't be accelerated to learn faster.
However, if we could move anywhere, change the setting to whatever we want, and create whatever we want, and it still looked real,
then that could make for something very interesting for learning and training. It wouldn't be faster, but a teacher could, for example, create a world that helps students learn better, with images, simulations, stories...
That said, it could also create problems: it wouldn't be wise to recreate wars, deaths, and other things that can be shocking, because with that level of realism it would be very hard to distinguish a simulated war or death from a real one.
Though it could be a huge benefit for training, for flying a plane for example: cheap, and no risk of breaking anything.
I question the ethics of ruling over AI subjects and the premise of "anything goes".
This is where we start getting into the realm of philosophy as it relates to science-fiction-esque "true" artificial intelligence.
Taking the post at face value, the AI persons populating your personal pocket dimension would be, for all intents and purposes, sentient artificial minds, or at least controlled by one central mind.
So does that AI deserve human rights? Do laws apply to them and to the interactions had with them? If all they know is humanity, then are they also "human"? Is this theoretically infinitely intelligent supercomputer even capable of truly understanding humanity, emotions, and life in all their facets?
I fully accept that I am getting too deep into this funny internet post, but there have been hundreds upon hundreds of books, thought experiments, and debates over this EXACT premise. Short answer: there is no answer. It's Schrödinger's morality lol
That's why I said AI that seems conscious.
What's the difference between seeming conscious and being conscious?
Consciousness means that you're capable of having a subjective experience. It feels like something to be you.
If you only seem conscious, then you can't experience anything. You might as well not exist at all.
I guess it depends on how realistic the fake consciousness is. Is it indistinguishable from real consciousness? Or would I be acutely aware that every relationship I create is fake? I mean, if we're claiming it absolutely is not real, then I'll always know that, and it kind of taints the whole idea. It makes me wonder about the whole concept. Like, if we did find a way to determine consciousness somehow, could that knowledge interfere with building an emotional relationship with an indistinguishable but fake conscious AI?
It's not fake consciousness per se, but a character that acts as if it were conscious despite the fact that it's not. A so-called "philosophical zombie."
You could have real relationships with other real people in the simulation. AI could be your barista, driver, random people in the city etc.
How do you test that? How do you know the people around you are actually conscious and don't just seem to be? If you can't experience anything, how do you fake consciousness? And is that fake consciousness really any less real than ours? I think anything that resembles consciousness well enough to fool people could be argued to be real, even if it's different from ours.
I don't think it matters in this case. I decided that they're not conscious and only seem to be, because I didn't want this thread to turn into a debate about whether it's immoral to abuse AI systems.
I think it matters a great deal! I would like to believe that not only would I not use such a system, I would actively fight to have it made illegal.
Why? That's like making it illegal to kick your Roomba.
No. I'm very certain that my Roomba is not conscious. But if we can't tell whether these people are conscious, then I don't think it's right to have this power over them. A better parallel than a Roomba would be an animal.
No. I wrote the premise myself, and I specifically said they appear conscious, not that they are conscious. I get what you're saying, but it doesn't apply here. In this specific case we know for a fact that they're not conscious. The only other conscious beings there, besides you, are the other real people in the simulation, not the AI characters.
I'm saying that "appears conscious" and "is conscious" could very well be the same thing. We don't know, so in this imaginary world I would not trust anyone who told me, "Don't worry, you can torture them, they're not actually conscious."
If we have technology that enables such virtual reality, there's a good chance we have an answer to the hard problem of consciousness as well. Again, that's why I said they appear conscious. They're programmed that way, but we know it's just an illusion.
I totally see where you're coming from, though, and I agree with you. Even if we knew for absolute certain that they're not conscious, it would still take a literal psychopath to treat them the way they do in Westworld, for example. And even if you're not morally doing anything wrong, I'd still think twice about whether I want to hang out with someone capable of acting that way. However, if you are that kind of person, I'd rather have you take out your anger and fulfill your sadistic needs on an unconscious AI than on real people.
We literally have no idea and have not figured out a good way to test this.
We do know. Consciousness is what you're experiencing now. General anesthesia, by contrast, is what non-consciousness "feels" like: nothing. By definition it cannot be experienced.
What we don't know is how to measure it. There's no way to confirm that something is or isn't conscious.
We do know. Consciousness is what you're experiencing now.
That's true from my POV, but I can't really prove it. It's kind of like the biggest "trust me bro" that we all assume is true.
Not digging into the ethics, just the ideas are fascinating.
For me it really depends on who controls it. Say Amazon becomes Skynet and creates such a seamless VR: I will never even try it out. Resisting isn't too difficult in that case.
I think it would be great to try out things that are impossible in the real world without any risks. Not necessarily crazy things, but things that just aren't available to me IRL, like designing stuff without any limits on resources or money. That way we could improve the real world by testing things in the virtual one.
It quickly turns into a philosophical question of what you really want to do in the real world and why. It doesn't make much sense to improve your real world for things that could easily be done in the virtual one. However, since your life even in the virtual world depends on your real survival, there would still be things to do.
Then there's also the fact that your virtual world would be limited by your own imagination. At some point that will get boring, even with virtual people around. It would still make sense to exchange ideas with actual people in order to expand your own virtual world.
I would jump in head first. Anything that will make me run from this reality.
I think there is a flaw in these kinds of arguments, and that is the assumption that a perfect simulation would even be desirable. Why not enhance it with abilities like teleportation, third-person views, search, and other HUD-like UI elements? I can tell from years of using Second Life that those don't ruin immersion nearly as much as VR proponents seem to think.
But I did not include such limitations.
We could, and we should. The "real world" will deteriorate socially, environmentally, and architecturally if real VR becomes popular.
Not necessarily. We might have robots and AI keeping the economy running, so human drudgery would no longer be needed.
Ready Player One, The Matrix, and maybe others, but I don't know.
It would be extremely hard to resist. Such tech may be expensive, though it could still be owned by poorer people once it decreases in cost, as it would let them escape their poverty.
Because it will mostly be companies building things like this, I mostly foresee something like Ready Player One: a giant social network/game where you can take part in plenty of different activities, which can look like the real world or not.
As for the Matrix version, where you're in a world filled with "real people"/AI, the same world but with superpowers, I'm not really sure. Do you really want powers? What would you do with them?
It's also difficult to build a world like that. Social interaction is pretty much essential for most people. Even if they don't see it directly, going out and buying something is social interaction. If those AI people aren't good, the experience would most likely be mediocre, given the objective it implies (recreating a similar world where you can do anything). Though if it's used as a game, maybe it could interest more people.
However, it would worsen many people's social isolation and break many things. This is why I'd rather see it as a social media/game universe.
Another issue with the question is that, well, no such thing exists yet, so it's difficult to even know whether it would be interesting. Would we be absorbed in it all day like the people in Ready Player One? Would companies try to control us? Make us buy things?
This seems to be based on the premise that it's distinguishable from reality. I think the concept is that the AI are indistinguishable from real people. Is the interaction any less meaningful than with other people? Plus, it says you can visit other "real" people if you want. Personally, I think some ethics come into play regarding "anything goes" and ruling over AI entities. That's part of my issue with it: if they're indistinguishable from people, are they not actually people?
Can ChatGPT be easily distinguished from a real person (if it doesn't say it's an AI)?
It's still possible, but not easy (and it's getting harder over time). That doesn't make it a person, though. We don't yet have the tech to create an entire person in AI. But suppose we did.
Your concerns may very well be a good point. But these AI humans may not be considered persons if we assume current tech, just enhanced.
Another moral issue: say there's an AI human in there, and the player falls in love with it. Is the player marrying a person or an AI? From their perspective it could very well be a person, but from anyone else's perspective it would be an AI. How should other people treat such an AI? As a person? Not as a person? How awkward would it be?
Then another one (if everything looks and feels like the real world): the AI humans in there wouldn't be considered people. Would that mean you could enslave them? Commit "crimes" (and other "bad" things) against them, since they're not considered people? If they look and act like real people, is it moral to do such things?