this post was submitted on 29 Jun 2023

There is huge excitement about ChatGPT and other large generative language models that produce fluent, human-like text in English and other human languages. But these models have one big drawback: their texts can be factually incorrect (hallucination) or leave out key information (omission).

In our chapter for The Oxford Handbook of Lying, we look at hallucinations, omissions, and other aspects of “lying” in computer-generated texts. We conclude that these problems are probably inevitable.

zzzzz@beehaw.org · 1 point · 1 year ago

Thank you so much for your thoughtful response. I'm sorry for not seeing it for so long! If you can believe it, I just discovered the "inbox" in my Lemmy app and am going through everything people have said to me over the past month.

This whole topic is really interesting to me. I hear what you're saying and imagine the distinctions you're drawing between these models and real brains are significant. I can't help but wonder, though, if we, as humans, might be poorly equipped to recognize the characteristics of emerging intelligence in the systems we create.

I am vaguely reminded of the Michael Crichton book The Andromeda Strain (it has been many years since I read it, granted), wherein an alien lifeform based on silicon, rather than carbon, was the central plot element. It is interesting to think that something like an alien intelligence might emerge in our own networked systems without our noticing. We are waiting for our programs to wake up and pass the Turing test. Perhaps, when they wake up, no one will even notice, because we are measuring the wrong set of things...