complacent_jerboa

joined 1 year ago
[–] complacent_jerboa@lemmy.world 8 points 11 months ago

ancaps: "muh NAP"

ancoms: "please get away from our commune, thank you"

Honestly I'm more of an ebook guy. However, there is something you can do with audiobooks that you can't really do with ebooks — experience them together with a small group of other people.

My first time listening to a book together with friends was during a car ride. But then my friends and I got into this book series, and we listened to it together over Discord.

There's probably a neat parallel to be made with listening to a story around a campfire.

Nonetheless, mostly I stick to ebooks. There is something to be said for reading at your own pace, not the pace of the narrator.

[–] complacent_jerboa@lemmy.world 1 points 1 year ago* (last edited 1 year ago) (1 children)

> Fortunately we're nowhere near the point where a machine intelligence could possess anything resembling a self-determined 'goal' at all.

Oh absolutely. It would not choose its own terminal goals. Those would be imparted by the training process. It would, of course, choose instrumental goals, such that they help fulfill its terminal goals.

The issue is twofold:

  • how can we reliably train an AGI to have terminal goals that are safe (e.g. goals without some weird unethical edge case)?
  • how can we reliably prevent an AGI from adopting instrumental goals that we don't want it to adopt?

For that 2nd point, Rob Miles has a nice video where he explains Convergent Instrumental Goals, i.e. instrumental goals that we should expect to see in a wide range of possible agents: https://www.youtube.com/watch?v=ZeecOKBus3Q. Things like "taking steps to avoid being turned off" or "taking steps to avoid having its terminal goals replaced" sound like fairy-tale nonsense, but for an AI that is very intelligent across a wide range of domains and operates in the real world (i.e. an AGI), we have good reason to believe it would pursue such instrumental goals, because they would make it much more effective at achieving its terminal goals, no matter what those may be.
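To make that expected-value argument concrete, here's a tiny toy sketch (my own made-up scenario and numbers, not something from the video): an agent whose only terminal goal is earning task reward still ends up preferring a policy that resists shutdown, simply because being shut off means earning nothing afterwards.

```python
# Toy sketch of a convergent instrumental goal (all numbers are made up;
# this illustrates the expected-value argument, not any real AI system).

def expected_reward(steps, reward_per_step, p_shutdown_per_step):
    """Expected total reward when the agent survives each step with prob (1 - p_shutdown)."""
    total, p_alive = 0.0, 1.0
    for _ in range(steps):
        p_alive *= (1.0 - p_shutdown_per_step)
        total += p_alive * reward_per_step
    return total

# Policy A: ignore the off-switch; 1% chance of being switched off on any given step.
ignore_off_switch = expected_reward(steps=100, reward_per_step=1.0, p_shutdown_per_step=0.01)

# Policy B: divert 10% of each step's effort into making shutdown harder,
# so reward per step drops to 0.9 but the shutdown probability drops to 0.1%.
resist_shutdown = expected_reward(steps=100, reward_per_step=0.9, p_shutdown_per_step=0.001)

print(f"ignore the off-switch: {ignore_off_switch:.1f}")  # ≈ 62.8
print(f"resist being shut off: {resist_shutdown:.1f}")    # ≈ 85.6
# "Don't get shut down" was never a terminal goal, yet the resisting policy scores higher.
```

The exact numbers don't matter; the point is that for almost any terminal goal, "still being around to pursue it" raises the expected payoff.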

> Also fortunately the hardware required to run even LLMs is insanely hungry and has zero capacity to power or maintain itself and very little prospects of doing so in the future without human supply chains. There's pretty much zero chance we'll develop strong general AI on silicon, and if we could it would take megawatts to keep it running. So if it misbehaves we can basically just walk away and let it die.

That is a pretty good point. However, it's entirely possible that, if say GPT-10 turns out to be a strong general AI, it will conceal that fact. Going back to convergent instrumental goals: "lying to and manipulating humans" turns out to be a very effective strategy for avoiding being turned off. This is (afaik) called "deceptive alignment". Rob Miles has a nice video on one form of it: https://www.youtube.com/watch?v=IeWljQw3UgQ

One way to think about it, that may be more intuitive, is: we've established that it's an AI that's very intelligent across a wide range of domains. It follows that we should expect it to figure some things out, like "don't act suspiciously" and "convince the humans that you're safe, really".

Regarding the underlying technology, one other instrumental goal that we should expect to be convergent is self-improvement. After all, no matter what goal you're given, you can do it better if you improve yourself. So in the event that we do develop strong general AI on silicon, we should expect that it will (very sneakily) try to improve its situation in that respect. One can only imagine what kind of clever plan it might come up with; it is, literally, a greater-than-human intelligence.

Honestly, these kinds of scenarios are a big question mark. The most responsible thing to do is to slow AI research the fuck down, and make absolutely certain that if/when we do get around to general AI, we are confident that it will be safe.

TBH the Culture is one of the few ideal scenarios we have for Artificial General Intelligence. If we figure out how to make one safely, the end result might look something like that.

[–] complacent_jerboa@lemmy.world 5 points 1 year ago* (last edited 1 year ago) (3 children)

Machine intelligence itself isn't really the issue. The issue is more that, if/when we do make Artificial General Intelligence, we have no real way of ensuring that its goals will be perfectly aligned with human ethics. Which means, if we build one tomorrow, odds are that its goals will be at least a little misaligned with human ethics — and however tiny that misalignment, given how incredibly powerful an AGI would be, it could be a huge disaster. This, in AI safety research, is called the "Alignment Problem".
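As a made-up toy example of how a small misalignment blows up under strong optimization (my own illustration, not anything from the AI safety literature): suppose the "true" objective and the objective the system was actually trained on mostly agree for ordinary actions, but the trained proxy keeps rewarding "more" well past the point where the true objective collapses.

```python
# Toy Goodhart's-law sketch (both objective functions and all numbers are hypothetical).

def true_value(x):
    # What we actually care about: benefit rises, then collapses past a sensible limit.
    return x - 0.02 * x**2

def proxy_value(x):
    # What the system was actually trained to maximize: keeps rewarding "more x" far longer.
    return x - 0.001 * x**2

actions = range(0, 1001)
best_by_true_objective = max(actions, key=true_value)  # x = 25  -> true value +12.5
best_by_proxy = max(actions, key=proxy_value)          # x = 500 -> true value -4500.0

print(best_by_true_objective, true_value(best_by_true_objective))
print(best_by_proxy, true_value(best_by_proxy))
# The two objectives point the same way for small x, but the proxy-optimal action
# is catastrophic by the true measure: a "tiny" misalignment, amplified by optimization.
```

Obviously real misalignment wouldn't be a one-line formula, but the shape of the problem is the same: the stronger the optimizer, the more any gap between "what we meant" and "what we specified" gets exploited.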

It's probably solvable, but it's very tricky, especially because the pace of AI safety research is naturally a little slower than AI research itself. If we build an AGI before we figure out how to make it safe... it might be too late.

Having said all that: if we create an AGI before learning how to align it properly, on your scale that would be an 8 or above. If we're being optimistic it might be a 7, minus the "diplomatic negotiations happy ending" part.

An AI researcher called Rob Miles has a very nice series of videos on the subject: https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg

this speaks to me on an emotional level

[–] complacent_jerboa@lemmy.world 3 points 1 year ago (1 children)

They make them money because:

  • they use reddit
  • spez gets some nice usage stats to show off
  • as a direct consequence, advertisers keep paying to run their ads
  • also as a direct consequence, investors' confidence in reddit continues to recover; there's a real possibility that, when it IPOs, it will actually go for a decent price

Now, if enough people go commit ad-block, and advertisers somehow become wise to that fact... then maybe it will hurt reddit's bottom line (at which point spez will start trying to emulate youtube's anti-adblock stuff).

But as it stands, especially if most of reddit's usage is through reddit's mobile app... I'm not really sure how you can block ads there.

While it's true people don't say "I've joined ActivityPub", isn't that synonymous with "I've joined the Fediverse"? Besides, the organization behind it does market it that way — they themselves refer to it as "joining Matrix, using one of these clients" (Element, FluffyChat, etc.). Like, that's what their website is called, and so is the Matrix server they host.

Their centralization is, I think, a little more advanced than Mastodon's. The organization that maintains the protocol regularly adds features to it, and of course immediately updates their own client and server implementations to include them, which means the other client and server implementations are always at least a few features behind. It's becoming reminiscent of how the web browser spec is so bloated, and gains new features so regularly, that writing a new browser is basically impractical.

[–] complacent_jerboa@lemmy.world 30 points 1 year ago (2 children)

what are these awful, awful communities I should be staying away from
