Bard, however, does not seem to get the answer right:
It seems like it got kind of close with "The box is both yellow and red, so it is both good and happy"... but then it falls apart afterwards.
Edit: I tried to debate with it:
Me:
Bard:
Which is interesting, to say the least; it's almost like it's looking a bit too deeply into the question lol.
Not surprised. I got access to Bard a while back and it hallucinates quite a lot more than even GPT-3.5.
Though doubling down on the wrong answer even when corrected is something I've seen GPT-4 do in some cases too. It seems like once it says something, it usually sticks to it.
Bing had no trouble
Bing is GPT-4-based, though I don't think it's the same version as ChatGPT. But either way, GPT-4 can solve these types of problems all day.