this post was submitted on 04 Oct 2023
91 points (82.3% liked)

Technology

From https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2023-10-03/Recent_research

By Tilman Bayer

A preprint titled "Do You Trust ChatGPT? -- Perceived Credibility of Human and AI-Generated Content" presents what the authors (four researchers from Mainz, Germany) call surprising and troubling findings:

"We conduct an extensive online survey with overall 606 English speaking participants and ask for their perceived credibility of text excerpts in different UI [user interface] settings (ChatGPT UI, Raw Text UI, Wikipedia UI) while also manipulating the origin of the text: either human-generated or generated by [a large language model] ("LLM-generated"). Surprisingly, our results demonstrate that regardless of the UI presentation, participants tend to attribute similar levels of credibility to the content. Furthermore, our study reveals an unsettling finding: participants perceive LLM-generated content as clearer and more engaging while on the other hand they are not identifying any differences with regards to message’s competence and trustworthiness."

The human-generated texts were taken from the lead sections of four English Wikipedia articles (Academy Awards, Canada, malware and US Senate). The LLM-generated versions were obtained from ChatGPT using the prompt *Write a dictionary article on the topic "[TITLE]". The article should have about [WORDS] words.*
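For concreteness, the study's generation setup can be sketched as a small script. The prompt template is quoted from the paper; the per-topic word counts below are illustrative placeholders, not values reported by the authors.

```python
# Sketch of the study's prompt construction. The template is quoted from
# the paper; the word counts per topic are assumptions for illustration.
PROMPT_TEMPLATE = (
    'Write a dictionary article on the topic "{title}". '
    "The article should have about {words} words."
)

def build_prompt(title: str, words: int) -> str:
    """Fill the study's prompt template for a given Wikipedia topic."""
    return PROMPT_TEMPLATE.format(title=title, words=words)

# The four Wikipedia topics used in the study; word counts are placeholders.
topics = {"Academy Awards": 160, "Canada": 150, "malware": 140, "US Senate": 150}
prompts = [build_prompt(title, words) for title, words in topics.items()]
```

Each resulting prompt would then be sent to ChatGPT to produce the LLM-generated counterpart of the corresponding Wikipedia lead section.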

The researchers report that

"[...] even if the participants know that the texts are from ChatGPT, they consider them to be as credible as human-generated and curated texts [from Wikipedia]. Furthermore, we found that the texts generated by ChatGPT are perceived as more clear and captivating by the participants than the human-generated texts. This perception was further supported by the finding that participants spent less time reading LLM-generated content while achieving comparable comprehension levels."

One caveat about these results (only indirectly acknowledged in the paper's "Limitations" section) is that the study focused on four quite popular (i.e. non-obscure) topics – Academy Awards, Canada, malware and the US Senate. Also, it sought to present only the most important information about each, in the form of a dictionary entry (as per the ChatGPT prompt) or the lead section of a Wikipedia article. It is well known that LLM output tends to have fewer errors when it draws on information that is amply present in the training data (see e.g. our previous coverage of a paper that, for this reason, called for assessing the factual accuracy of LLM output on a benchmark that specifically includes lesser-known "tail topics"). Indeed, the authors of the present paper "manually checked the LLM-generated texts for factual errors and did not find any major mistakes," which has been widely reported not to be the case for ChatGPT output in general. That said, it has similarly been claimed that Wikipedia, too, is less reliable on obscure topics.

Also, the paper used the freely available version of ChatGPT (in its 23 March 2023 revision), which is based on the GPT-3.5 model, rather than the premium "ChatGPT Plus" version, which since March 2023 has been using the more powerful GPT-4 model (as does Microsoft's free Bing chatbot). GPT-4 has been found to have a significantly lower hallucination rate than GPT-3.5.

[–] HidingCat@kbin.social 29 points 1 year ago (3 children)

Between this and the general population's preference for videos (even when they could've been a written article), I despair.

[–] teamonkey@lemm.ee 11 points 1 year ago (2 children)

Honestly the one killer use case for AI is to transcribe how-to YouTube videos into a static web page with thumbnail images.

[–] Duamerthrax@lemmy.world 3 points 1 year ago (1 children)

Is that happening? Does the AI know if the how-to is accurate?

[–] teamonkey@lemm.ee 1 points 1 year ago

I’m still waiting.

[–] HidingCat@kbin.social 1 points 1 year ago

Hah, it feels like it's fighting fire with fire. xD

[–] glad_cat@lemmy.sdf.org 8 points 1 year ago

I will reply with a ridiculously long video and a pathetic thumbnail where I open my mouth for no reason.

[–] DeadlineX@lemm.ee 1 points 1 year ago

Yeah it drives me crazy that we can’t just read something for 2 minutes to get information anymore. Now it’s all just 10 minute videos with 4 minutes of ads.