Chthonic

joined 1 year ago
[–] Chthonic@slrpnk.net 2 points 7 months ago

This makes me wanna play Thea again

[–] Chthonic@slrpnk.net 46 points 7 months ago (7 children)

That's misophobia, misophonia is when you don't like how soy paste sounds.

[–] Chthonic@slrpnk.net 11 points 7 months ago

Brilliant, very meta, love it

[–] Chthonic@slrpnk.net 5 points 8 months ago (1 children)

I like the lighting and composition but it looks a little fried, how hard did you sharpen?

[–] Chthonic@slrpnk.net 21 points 8 months ago

Wh-what are you doing, step-bother?

[–] Chthonic@slrpnk.net 3 points 9 months ago

If he were smarter and/or not a walking ego then yeah, that would have been the move. Though if he were smart he probably wouldn't be in this mess.

[–] Chthonic@slrpnk.net 27 points 9 months ago* (last edited 9 months ago) (5 children)

It's not. He never wanted to buy Twitter; he just wanted to pump and dump the stock. But because he is stupid and the plan was obvious, they sued him to make him honor the deal.

So if he just turned around and shut the company down, it would give the SEC legal grounds to argue that his intention all along was market manipulation.

[–] Chthonic@slrpnk.net 31 points 9 months ago (8 children)

My understanding is that the SEC would have fucked him if he just shut it down, because it would indicate that he never intended to buy it in the first place and instead was just trying to manipulate the stock market (which is definitely what he was doing).

[–] Chthonic@slrpnk.net 1 points 9 months ago* (last edited 9 months ago) (1 children)

They don't reason, they're stochastic parrots. Their internal mechanisms are well understood; no idea where you got the notion that the folks building these don't know how they work. It can be hard to predict or explain how an LLM produced a given output because of the huge training corpus and the statistical nature of neural nets in general.

LLMs work the same as any other net, just with massive training sets. They have no reasoning capabilities of any kind. We are naturally inclined to ascribe humanlike thought processes to them because they produce human-sounding outputs.
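The "statistical prediction, not reasoning" point can be sketched with a toy bigram model. This is a deliberately crude stand-in (the corpus and names below are made up for illustration); real LLMs learn vastly larger conditional distributions with neural nets, but the principle of sampling the next token from learned statistics is the same:

```python
import random
from collections import Counter, defaultdict

# Tiny made-up corpus; an LLM trains on billions of tokens,
# but it likewise just learns which tokens tend to follow which.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which token follows which (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev):
    # Sample in proportion to how often each token followed `prev`
    # in training -- pure statistics, no reasoning involved.
    counter = follows[prev]
    tokens = list(counter)
    weights = [counter[t] for t in tokens]
    return random.choices(tokens, weights=weights)[0]

print(next_token("the"))  # one of: cat, mat, fish
```

Chaining `next_token` calls produces fluent-looking strings the model has no understanding of, which is the whole "parrot" point.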

If you would like the perspective of real scientists instead of a "tech-bro" like me, I'd recommend Emily Bender and Timnit Gebru: experts without a vested interest in the massively overblown hype about what LLMs are actually capable of.

[–] Chthonic@slrpnk.net 13 points 9 months ago (4 children)

I work on chatbots for a big tech company. Every team is trying to use GenAI for everything. 90% of the stuff they try won't work. I have to explain that LLMs can't actually think at least three times a week. The hype train was too strong. Even calling it AI feels misleading.

That said, there are some genuinely great applications for LLMs that I've enjoyed looking into.

[–] Chthonic@slrpnk.net 3 points 10 months ago

If you're gonna link to That Scene from Spec Ops you gotta include a "Seriously Gnarly Shit Ahead" content warning or something.

[–] Chthonic@slrpnk.net 2 points 10 months ago

That may be true for warehouse employees, but the corporate offices are a toxic mess of shitty culture and dated ideas. I've never seen a tech department bleed so much underpaid talent to Amazon.

When I quit because they tried to force me back into the office mid-pandemic (August 2020) I had multiple offers for fully remote positions with twice the salary within a few weeks.

But yeah, if you are a cashier at a warehouse or whatever I hear it's a solid gig.
