this post was submitted on 25 Jul 2023
349 points (100.0% liked)

Technology


Over the past one and a half years, Stack Overflow has lost around 50% of its traffic. This decline is similarly reflected in site usage, with approximately a 50% decrease in the number of questions and answers, as well as the number of votes these posts receive.

The charts below show usage as a 49-day moving average.


What happened?

[–] focus@lemmy.film 46 points 11 months ago (4 children)

and copilot and chatgpt give good enough answers without being unfriendly

[–] bionicjoey@lemmy.ca 47 points 11 months ago (1 children)

ChatGPT has no knowledge of the answers it gives. It is simply a text completion algorithm. It is fundamentally the same as the thing above your phone keyboard that suggests words as you type, just with much more training data.

[–] Barbarian772@feddit.de 42 points 11 months ago (2 children)

Who cares? It still gives me the answers I am looking for.

[–] bionicjoey@lemmy.ca 26 points 11 months ago (4 children)

Yeah it gives you the answers you ask it to give you. It doesn't matter if they are true or not, only if they look like the thing you're looking for.

[–] magic_lobster_party@kbin.social 13 points 11 months ago (1 children)

An incorrect answer can still be valuable. It can give some hint of where to look next.

[–] thingsiplay@kbin.social 19 points 11 months ago (5 children)

@magic_lobster_party I can't believe someone wrote that. Incorrect answers do more harm than good. If the person asking doesn't know the answer, how are they supposed to recognize that it's incorrect and know where to look for a hint?

[–] seang96@spgrn.com 8 points 11 months ago

In the context of coding it can be valuable. I created two tables in a database and asked it to write a query, and it did 90% of the job; it just used an incorrect column for a join. If you're using it for coding, you should notice what's wrong very quickly, at least if you have experience.
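As a sketch of the kind of mistake described above (the table and column names here are invented for illustration, not taken from the commenter's database), here's a join that is "90% right" but uses the wrong column, using Python's built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
INSERT INTO orders VALUES (10, 1, 9.99), (11, 2, 5.00);
""")

# The kind of query an LLM might produce: structurally fine, but joining
# on the wrong column (orders.id instead of orders.user_id).
wrong = cur.execute(
    "SELECT u.name, o.total FROM users u JOIN orders o ON u.id = o.id"
).fetchall()

# The corrected join -- obvious once you inspect the first query's output.
right = cur.execute(
    "SELECT u.name, o.total FROM users u JOIN orders o ON u.id = o.user_id"
).fetchall()

print(wrong)  # [] -- the id ranges don't overlap, so the bug is easy to spot
print(right)  # [('alice', 9.99), ('bob', 5.0)]
```

This is the experienced-user case the comment describes: the broken version doesn't fail loudly, but one glance at the empty result set tells you where to look.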

[–] CloverSi@lemmy.comfysnug.space 8 points 11 months ago

I don't know about others' experiences, but I've been completely stuck on problems I only figured out how to solve with ChatGPT. It's very forgiving when I don't know the name of something I'm trying to do, or don't know how to phrase it well, so even if the actual answer is wrong it gives me somewhere to start and clues me in to the terminology to use.

[–] psyspoop@kbin.social 4 points 11 months ago

In my experience, with both coding and the natural sciences, a slightly incorrect answer is very useful: you attempt to apply it, realize during initial testing or analysis where it's wrong, and tweak it until it's correct. That beats receiving no answer at all, or being ridiculed by internet randos.

[–] magic_lobster_party@kbin.social 4 points 11 months ago (1 children)

Google the provided solution for additional sources. When I search for solutions to problems, I often don't get the right answer directly; the provided solution may not even work for me.

But I might find other clues about the problem that aid further research. In the end I have all the clues I need to answer my question.

[–] preciouspupp@sopuli.xyz 2 points 11 months ago (1 children)

How do you Google anything when all the results are AI generated crap for generating ad revenue?

[–] magic_lobster_party@kbin.social 2 points 11 months ago

Well then I guess I have to survive with ChatGPT if the internet is so riddled with search engine optimized garbage. We’re thankfully not there yet, at least not with computer tech questions.

[–] cat@feddit.it 2 points 11 months ago (1 children)

Well, if they're referring to coding solutions, they're right: sometimes non-working code can lead to a working solution. If you know what you're doing, ofc.

[–] FaceDeer@kbin.social 2 points 11 months ago

Even if you don't know what you're doing ChatGPT can still do well if you tell it what went wrong with the suggestion it gave you. It can debug its code or realize that it made wrong assumptions about what you were asking from further context.

[–] QHC@kbin.social 9 points 11 months ago (1 children)

How is that practically different from a user perspective than answers on SO? Either way, I still have to try the suggested solutions to see if they work in my particular situation.

[–] bionicjoey@lemmy.ca 8 points 11 months ago (1 children)

At least with those, you can be reasonably confident that a single person at some point believed their answer was a coherent solution.

[–] FaceDeer@kbin.social 3 points 11 months ago (1 children)

That doesn't exactly inspire confidence.

[–] bionicjoey@lemmy.ca 3 points 11 months ago (1 children)

Better than knowing there's some possibility the answer was generated purely because that sequence of characters had the highest probability of convincing the reader it seems correct, given the sequence of characters it received as input (plus or minus a decent amount of RNG).

[–] FaceDeer@kbin.social 3 points 11 months ago (1 children)

Still debatable, IMO. Human belief is stubborn and self-justifying whereas an RNG can be rerolled as many times as needed.

[–] bionicjoey@lemmy.ca 2 points 11 months ago (1 children)

Yeah but if you keep rerolling the RNG, how do you know when a right answer gets randomly generated?

Also, my point above was that if a human believed the solution was true, it probably was true at some point. With generative language models, there's no guarantee that there's any logic to what it tells you.

[–] FaceDeer@kbin.social 2 points 11 months ago

You know when the code compiles and does what you want it to do. What's the point in asking for code if you're not going to run it? You'd be doing that with anything you got off of Stack Overflow too, presumably.

[–] Greg@lemmy.ca 8 points 11 months ago (1 children)

What point are you trying to make? LLMs are incredibly useful tools

[–] bionicjoey@lemmy.ca 5 points 11 months ago (2 children)

Yeah for generating prose, not for solving technical problems.

[–] Fingerthief@infosec.pub 4 points 11 months ago

You’ve never actually used them properly then.

[–] Greg@lemmy.ca 1 points 11 months ago* (last edited 11 months ago)

not for solving technical problems

One example is writing complex regex. A simple, well-written prompt can get you 90% of the way there. It's a huge time saver.

for generating prose

It's great at writing boilerplate code, so I can spend more of my time architecting solutions instead of typing.
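As a hypothetical illustration of the regex point above (the pattern and the log format are invented for this example), a prompt like "write a regex that parses key=value pairs where values may be double-quoted and contain spaces" can get you most of the way to something like:

```python
import re

# Match key=value pairs; the value is either a double-quoted string
# (allowing escaped characters inside) or a bare run of non-whitespace.
PAIR = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_pairs(line):
    """Return a dict of key -> value, stripping surrounding quotes."""
    return {k: v.strip('"') for k, v in PAIR.findall(line)}

line = 'level=info msg="disk almost full" disk=/dev/sda1'
print(parse_pairs(line))
# {'level': 'info', 'msg': 'disk almost full', 'disk': '/dev/sda1'}
```

The remaining 10% is exactly the part the thread is arguing about: checking the edge cases (escaped quotes, missing values) yourself rather than trusting the generated pattern blindly.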

[–] focus@lemmy.film 2 points 11 months ago

the good thing about it giving you the answer in a programming language is that it's quite simple to test if the output is what you expect. also a lot of humans give wrong answers...

[–] displaced_city_mouse@midwest.social 9 points 11 months ago (1 children)

There was a story once that said if you put an infinite number of monkeys in front of an infinite number of typewriters, they would eventually produce the works of William Shakespeare.

So far, the Internet has not shown that to be true. Example: Twitter.

Now we have an artificial monkey remixing all of that, at our request, and we're trying to find something resembling Hamlet's Soliloquy in what it tells us. What it gives you is meaningless unless you interpret it in a way that works for you -- how do you know the answer is correct if you don't test it? In other words, you have to ensure the answers it gives are what you are looking for.

In that scenario, it's just a big expensive rubber duck you are using to debug your work.

[–] FaceDeer@kbin.social 8 points 11 months ago* (last edited 11 months ago) (1 children)

There's a bunch of people telling you "ChatGPT helps me when I have coding problems." And you're responding "No it doesn't."

Your analogy is eloquent and easy to grasp and also wrong.

[–] displaced_city_mouse@midwest.social 1 points 11 months ago (1 children)

Fair point, and thank you. Let me clarify a bit.

It wasn't my intention to say ChatGPT isn't helpful. I've heard stories of people using it to great effect, but I've also heard stories of people who had it return the same non-solutions they had already found and dismissed. Just like any tool, actually...

I was just pointing out that it's functionally similar to scanning SO, tech docs, Slashdot, Reddit, and other sources looking for an answer to our question. ChatGPT doesn't have a magical source of knowledge that we collectively lack -- it just has speed and a lot of processing power. We all still have to verify the answers it gives, just like we would anything from SO.

My last sentence was rushed, not 100% accurate, and shows some of my prejudices about ChatGPT. I think ChatGPT works best when it is treated like a rubber duck -- give it your problem, ask it for input, but then use that as a prompt to spur your own learning and further discovery. Don't use it to replace your own thinking and learning.

[–] FaceDeer@kbin.social 2 points 11 months ago

Even if ChatGPT is giving exactly the same quality of answer as you can get out of Stack Overflow, it gives it to you much more quickly and pieces together multiple answers into a script you can copy and work with immediately. And it's polite when doing so, and will amend and refine its answers immediately for you if you engage it in some back-and-forth dialogue. That makes it better than Stack Overflow and not functionally similar.

I've done plenty of rubber duck programming before, and it's nothing like working with ChatGPT. The rubber duck never writes code for me. It never gives me new information that I didn't already know. Even though sometimes the information ChatGPT gives me is wrong, that's still far better than just mutely staring back at me like a rubber duck does. A rubber duck teaches me nothing.

"Verifying" the answer given by ChatGPT can be as simple as just going ahead and running it. I can't think of anything simpler than that, you're going to have to run the code eventually anyway. Even if I was the world's greatest expert on something, if I wrote some code to do a thing I would then run it to see if it worked rather than just pushing it to master and expecting everything to be fine.

This doesn't "replace your own thinking and learning" any more than copying and pasting a bit of code out of Stack Overflow does. Indeed, it's much easier to learn from ChatGPT because you can ask it "what does that line with the angle brackets do?" or "Could you add some comments to the loop explaining all the steps" or whatever and it'll immediately comply.

[–] blueson@feddit.nu 22 points 11 months ago (2 children)

I honestly believe people are way overvaluing the responses ChatGPT gives.

For a lot of boilerplating scenarios or trying to resolve some pretty standard stuff, it's good.

I had an issue a while back with QueryDSL running against an MSSQL instance, which I tried resolving by asking ChatGPT some pretty straightforward questions about the tool. Without going too much into detail, I basically got stuck in a loop where ChatGPT kept suggesting solutions that were not viable at all in QueryDSL. I pointed this out and explained why what it suggested was wrong, and it "corrected" itself by suggesting the same broken solutions.

The AI is great until whatever it was trained on doesn't cover your situation. My solution was a bit of digging on Google away, which thankfully resolved the issue. But had I been stuck with only ChatGPT, I'd still be going around in loops.

[–] CloverSi@lemmy.comfysnug.space 10 points 11 months ago (1 children)

It really doesn't work as a replacement for google/docs/forums. It's another tool in your belt, though, once you get a good feel for its limitations and use cases; I think of it more like upgraded rubber duck debugging. Bad for getting specific information and fixes, but great for getting new perspectives and/or directions to research further.

[–] blueson@feddit.nu 1 points 11 months ago

I agree! It has been a great help in those cases.

I just don't believe it can fulfill the actual need for sites like StackOverflow. It probably never will, either, unless we manage to make it learn new things without relying on sources like SO -- snapping up those obscure answers to problems without burying them in tons of broken solutions.

[–] GammaGames@beehaw.org 3 points 11 months ago

ChatGPT is great for simple questions that have been asked and answered a million times previously. I don’t see any downside to these types of questions not being posted to SO…

[–] Elw@lemmy.sdf.org 15 points 11 months ago (3 children)

Exactly this. SO is now just a repository of answers that ChatGPT and its ilk can train against. A high percentage of the questions SO users need answered have already been asked and answered. New and novel problems arise so infrequently, thanks to the way modern tech companies are structured, that an AI that can train on the existing answers and update itself periodically is all most people need anymore... (I realize that was rambling, I hope it made sense)

[–] Tolookah@discuss.tchncs.de 27 points 11 months ago (2 children)

So soon they will start responding with "this has been asked before, let's change the subject"

[–] Elw@lemmy.sdf.org 1 points 11 months ago

Exactly! It will all come full circle

[–] pglpm@lemmy.ca 3 points 11 months ago

A repository of often (or at least not seldom) outdated answers.

[–] focus@lemmy.film 3 points 11 months ago

yes! this! is chatgpt intelligent: no! does it more often than not give good enough answers to daily but somewhat obscure and specific programming questions: yes! is a person on SO intelligent: maybe. do they give good enough answers to daily but somewhat obscure and specific programming questions: mostly

It's not great for complex stuff, but for quick questions when you're stuck, the answers come quicker, without snark, and usually work.

[–] thingsiplay@kbin.social 2 points 11 months ago (1 children)

@focus Is that the reason why we get more and more AI-written articles?

[–] Elw@lemmy.sdf.org 1 points 11 months ago

No, that's because of capitalism