this post was submitted on 02 Dec 2023
156 points (85.1% liked)

Technology


Bill Gates feels generative AI has plateaued, says GPT-5 will not be any better: in an interview with the German newspaper Handelsblatt, the billionaire philanthropist shared his thoughts on artificial general intelligence, climate change, and the scope of AI in the future.

[–] grabyourmotherskeys@lemmy.world 8 points 9 months ago (2 children)

Another way to think of this is that feedback from humans will refine results. If enough people tell it that Toronto is not the capital of Canada, it will start biasing toward Ottawa, for example. I have a feeling this is behind the search engine rollout.
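The feedback idea described above can be sketched as a toy vote-counting scheme. This is purely illustrative (the `biased_answer` function and the one-vote prior are assumptions for the sketch, not how any real system aggregates feedback):

```python
from collections import Counter

def biased_answer(prior: str, feedback: list[str]) -> str:
    """Return the answer with the most combined support: the model's
    prior answer starts with one vote, and each piece of human
    feedback adds a vote for the answer it asserts."""
    votes = Counter({prior: 1})
    votes.update(feedback)
    return votes.most_common(1)[0][0]

# The model initially says Toronto; repeated corrections outweigh it.
print(biased_answer("Toronto", ["Ottawa", "Ottawa", "Ottawa"]))
```

With enough corrections, the accumulated feedback outvotes the original answer, which is the intuition behind "it will start biasing toward Ottawa."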

[–] raptir@lemdro.id 5 points 9 months ago (2 children)

ChatGPT doesn't learn like that though, does it? I thought it was "static" with its training data.

[–] grabyourmotherskeys@lemmy.world 2 points 9 months ago

I was speculating about how you can overcome hallucinations, etc., by supplying additional training data. Not specific to ChatGPT or even LLMs...

[–] HiggsBroson@lemmy.world 2 points 9 months ago (1 children)

You can finetune LLMs using smaller datasets, or with RLHF (reinforcement learning from human feedback), wherein people rate responses and the model is either "rewarded" or "penalized" based on the ratings for a given output. This retrains the LLM to produce outputs that people prefer.
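A heavily simplified sketch of that reward/penalty loop, assuming a toy model that just keeps one score per candidate response (real RLHF trains a reward model and updates network weights with an RL algorithm like PPO; none of the names below come from any real library):

```python
import math
import random

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

class ToyPreferenceModel:
    """Keeps a score per candidate response; sampling follows a
    softmax over scores, and human ratings nudge scores up or down."""

    def __init__(self, responses, lr=1.0):
        self.responses = responses
        self.scores = [0.0] * len(responses)
        self.lr = lr

    def sample(self, rng=random):
        probs = softmax(self.scores)
        return rng.choices(self.responses, weights=probs)[0]

    def rate(self, response, reward):
        # reward > 0 "rewards" the response, reward < 0 "penalizes" it
        i = self.responses.index(response)
        self.scores[i] += self.lr * reward

model = ToyPreferenceModel(["Toronto is the capital", "Ottawa is the capital"])
for _ in range(5):
    model.rate("Ottawa is the capital", +1.0)
    model.rate("Toronto is the capital", -1.0)

# After repeated human ratings, the preferred answer dominates.
print(max(zip(model.scores, model.responses))[1])
```

The point of the sketch is just the direction of the update: positive ratings raise the probability of an output being produced again, negative ratings lower it.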

[–] niisyth@lemmy.ca 2 points 9 months ago (1 children)

Active learning models. Though public exposure can easily fuck it up without adult supervision. With proper supervision, though, there's promise.
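For what "active learning with supervision" can look like in the simplest case: the model flags the items it is least confident about and routes those to a human supervisor for labeling. This uncertainty-sampling sketch is an assumption-laden toy (the `most_uncertain` helper and the example probabilities are made up for illustration):

```python
def most_uncertain(probs_by_item):
    """Uncertainty sampling: pick the item whose highest predicted
    probability is lowest, i.e. where the model is least sure,
    so a human supervisor can label it next."""
    return min(probs_by_item, key=lambda item: max(probs_by_item[item]))

predictions = {
    "capital of Canada": [0.55, 0.45],   # model is unsure
    "capital of France": [0.99, 0.01],   # model is confident
}
print(most_uncertain(predictions))
```

The supervisor's labels then go back into training, which is also where the supervisor's biases enter, as the next comment points out.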

[–] BearOfaTime@lemm.ee 2 points 9 months ago (1 children)

So it will always have the biases of the supervisors.

[–] niisyth@lemmy.ca 3 points 9 months ago

Bias is inevitable, whether it's AI or any other knowledge-based system. We just have to be cognizant of it and try to remedy it.

[–] Toes@ani.social 3 points 9 months ago

Toronto is Canadian New York. It wants to be the capital and probably should be but it doesn't speak enough French.