vrighter

joined 1 year ago
[–] vrighter@discuss.tchncs.de 15 points 6 days ago

the problem isn't being pro-AI. It's people pulling supposed AI capabilities out of their asses without having actually looked at a single line of code. This is obvious to anyone who has coded a neural network. Yes, even to OpenAI themselves, but if they let you in on that, the money stops flowing. You simply can't get an 8-ball to give the correct answer consistently, because it's fundamentally random.

[–] vrighter@discuss.tchncs.de 5 points 6 days ago

no, stigma has nothing to do with anything. People do die from its usage (or the consequences of its usage). It is more toxic than a lot of other "harder" drugs. It can ruin lives and break up families just as well. We're talking about what it is, not how it is perceived. I am also not implying you should smoke weed instead. Everyone has their drug of choice; to each their own. But make no mistake, it's a drug nonetheless.

[–] vrighter@discuss.tchncs.de 13 points 6 days ago* (last edited 6 days ago) (15 children)

yes it is, and it doesn't work.

edit: to expand, if you're generating data, it's an estimation. The network will learn the same biases and make the same mistakes and assumptions you did when generating the data. Also, outliers won't be in the set (because you didn't know about them, so the network never sees any).
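
A quick toy sketch of what I mean (my own illustrative numbers, nothing from any real system):

```python
# Toy sketch: a model trained only on data *you* generated can only learn
# the assumptions you baked into the generator; real-world outliers that
# you didn't know about simply never appear in its training set.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 10_000)

# The real process: mostly linear, but ~1% of cases are extreme outliers.
y_real = 2.0 * x + rng.normal(0, 0.1, size=x.shape)
y_real[rng.random(x.shape) < 0.01] += 50.0      # the events you didn't know about

# Your generator: encodes your belief that the process is cleanly linear.
y_generated = 2.0 * x + rng.normal(0, 0.1, size=x.shape)

# "Train" on the generated data (a simple line fit stands in for the network).
model = np.polyfit(x, y_generated, 1)

# Evaluate against reality: the rare events were never in the training data,
# so the model has no way to account for them.
errors = np.abs(y_real - np.polyval(model, x))
print("typical error:", round(np.median(errors), 3))
print("worst error:  ", round(errors.max(), 3))
```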

[–] vrighter@discuss.tchncs.de 23 points 1 week ago (17 children)

also, what you described has already been studied. Training an LLM on its own output completely destroys it; it doesn't make it better.
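
A crude way to see the mechanism (a toy stand-in of mine, not the actual LLM studies): fit a distribution, sample from it, refit on the samples, repeat. A little of the tails gets lost every generation.

```python
# Toy stand-in for training a generative model on its own output:
# each "generation" is fit to a finite sample drawn from the previous one.
import numpy as np

rng = np.random.default_rng(0)
real_data = rng.normal(loc=0.0, scale=1.0, size=20)   # the only "real" data

mu, sigma = real_data.mean(), real_data.std()
for generation in range(1, 201):
    synthetic = rng.normal(mu, sigma, size=20)          # the model's own output
    mu, sigma = synthetic.mean(), synthetic.std()       # "retrain" on that output
    if generation % 40 == 0:
        print(f"generation {generation:3d}: sigma = {sigma:.4f}")

# The spread almost always decays toward zero instead of staying near 1.0:
# information (especially the tails) is lost a little at every step,
# and nothing ever puts it back.
```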

[–] vrighter@discuss.tchncs.de 16 points 1 week ago* (last edited 1 week ago)

The outputs of the neural network are sampled using a random process. The probability distribution is decided by the LLM; the loaded die comes after the LLM. No, it's not solvable. Not with LLMs. Not now, not ever.

[–] vrighter@discuss.tchncs.de 16 points 1 week ago* (last edited 1 week ago)

no need for that subjective stuff. The objective explanation is very simple: the output of the LLM is sampled using a random process, a loaded die with probabilities according to the LLM's output. It's as simple as that. There is literally a random element that is not part of the LLM itself, yet is required for its output to be of any use whatsoever.
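
Something like this, in toy form (my own sketch, illustrative names and numbers only):

```python
# The model only produces a probability distribution over next tokens;
# the actual random draw (the "loaded die") happens outside the model.
import numpy as np

rng = np.random.default_rng()

def sample_next_token(logits, temperature=1.0):
    """Turn model scores into probabilities, then roll a loaded die."""
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()                       # softmax: the LLM's contribution
    return rng.choice(len(probs), p=probs)     # the random draw: not the LLM

# Same distribution, repeated draws: the answer can differ every time.
logits = [2.1, 1.9, 0.3]                       # hypothetical scores for 3 tokens
print([sample_next_token(logits) for _ in range(10)])
```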

[–] vrighter@discuss.tchncs.de 56 points 1 week ago (32 children)

why did it? Because it's intrinsic to how it works. This is not a solvable problem.

[–] vrighter@discuss.tchncs.de 30 points 1 week ago (8 children)

they really are not.

[–] vrighter@discuss.tchncs.de 1 points 1 week ago

again, whoosh. You missed the part where you train me before asking the question. Then I can extrapolate, and I need very few examples, as few as one.

I'm talking from the perspective of having actually coded this stuff, not just speculating. A neural network can interpolate, but it sure as hell can't extrapolate anything that wasn't in its training data (rough sketch below).

Also, as a human, I can train myself.
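
Here's the kind of thing I mean, in toy form (my own sketch, assuming scikit-learn is installed; the numbers are illustrative):

```python
# A small network fits sin(x) reasonably well inside the range it was
# trained on, and falls apart outside it: interpolation vs extrapolation.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 2 * np.pi, 2000).reshape(-1, 1)   # training range only
y_train = np.sin(x_train).ravel()

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(x_train, y_train)

x_inside  = np.array([[1.0], [3.0], [5.0]])     # within the training range
x_outside = np.array([[8.0], [12.0], [20.0]])   # outside it

print("inside :", np.round(net.predict(x_inside), 2),
      "vs true", np.round(np.sin(x_inside).ravel(), 2))
print("outside:", np.round(net.predict(x_outside), 2),
      "vs true", np.round(np.sin(x_outside).ravel(), 2))
# Inside the range, predictions should track sin(x) fairly closely; outside it,
# they typically flatten out or drift, because nothing constrained them there.
```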

[–] vrighter@discuss.tchncs.de 1 points 1 week ago

ok, but that still entails trying random things until I find it. If I didn't already know it was a builtin, I wouldn't know to search there. The bash thing was just an example; I have learned this stuff since I encountered the problem. This is just me recollecting my experience of trying to use man.

[–] vrighter@discuss.tchncs.de 18 points 1 week ago

I've met someone employed as a dev who not only didn't know that the compiler generates an executable file, but actually spent a month trying to change the code without noticing that none of their code changes were having any effect whatsoever (because they kept running an old build of mine).

[–] vrighter@discuss.tchncs.de 1 points 1 week ago (2 children)

You point to me and tell me this is a bike. If we go around it 90 degrees and you ask me what it is, I can still tell you it's a bike, even though I don't know what one does or is used for. I need absolutely none of what you mentioned, no context at all. I only need to be able to tell that you pointed to the same object the second time, even though I'm viewing it from a slightly different angle.

You point and say "this is a bike", we walk around it, you point again and ask me "what is that?" I reply "a bike.... you've just told me!"

Neural networks simply can't do that. One won't even recognize that it's the same object unless it was specifically trained to recognize it from all angles. You're talking about a completely different thing, which I never mentioned.
