[-] rysiek@mstdn.social 8 points 6 months ago

@Mysteriarch @fer0n fool me once, shame on you; but go right ahead and fool me twice or thrice, why not!

[-] rysiek@mstdn.social 1 point 7 months ago

@maxprime amazing, thank you for sharing!

@nayminlwin

[-] rysiek@mstdn.social 1 point 7 months ago

@DolphinMath correct. But Vaultwarden is not the official thing. Not saying it's bad, just something to keep in mind.

[-] rysiek@mstdn.social 2 points 7 months ago* (last edited 7 months ago)

@Natanael enshittification is about power, and ATproto is designed to look decentralized but enable secondary centralization where it matters for power dynamics in the network, in a way that the Fediverse very much doesn't:
https://rys.io/en/167.html

(shameless plug, I wrote that, but it dives somewhat deep into the "why" of what I said above)

tl;dr it doesn't matter which PDS you use if everyone is still beholden to the same entity that controls the "reach" layer in BS.

@SkepticalButOpenMinded

[-] rysiek@mstdn.social 1 point 11 months ago

@Barbarian772 it matters because we have moral obligations toward intelligent beings, for example.

It also matters because that would be a truly amazing, world-changing thing if we could create intelligence out of thin air, some statistics, and a lot of data.

It's an extremely strong claim, and strong claims demand strong proof. Otherwise they are just hype and hand-waving, which all of the "ChatGPT intelligence" discourse is, in order to "maximize shareholder value".

[-] rysiek@mstdn.social 3 points 11 months ago

@Barbarian772 it was shown over and over and over again that ChatGPT lacks the capacity for abstraction, logic, understanding, self-awareness, reasoning, planning, critical thinking, and problem-solving.

That's partially because it does not have a model of the world or an ontology; it cannot *reason*. It just regurgitates text, probabilistically.

So, glad we established that!

[-] rysiek@mstdn.social 1 point 11 months ago

@CorruptBuddha well technically, since we're nit-picking, I did not make that claim, BobKerman3999 did.

And the claim was about how ChatGPT's "intelligence" can be understood through the lens of the Chinese Room thought experiment.

Then I was asked to prove that human brains don't work like Chinese rooms, and that's a *different* thing. The broader claim in all of this, of course, is that ChatGPT "is intelligent" in the same sense as humans are, and that strong claim requires strong proof.

[-] rysiek@mstdn.social 4 points 11 months ago

@Barbarian772 no, GPT is not more "intelligent" than any human being, just like a calculator is not more "intelligent" than any human being — even if it can perform certain specific operations faster.

Since you used the term "intelligent", though, what is your definition of it? Ideally one that excludes calculators but includes human beings. Without such a clear definition this is, again, just hand-waving.

I wrote about it in a bit longer form:
https://rys.io/en/165.html

[-] rysiek@mstdn.social 3 points 11 months ago

@Barbarian772 I don't have to. It's the ChatGPT people making extremely strong claims about the equivalence of ChatGPT and human intelligence. I merely demand proof of that equivalence, which they are unable to provide; instead they use rhetoric, parlor tricks, and a lot of hand-waving to divert and distract from that fact.

[-] rysiek@mstdn.social 1 point 11 months ago

@Barbarian772 so? If the cookie tastes sweet, what do I care what sweetening agent is used inside?

@BobKerman3999
