this post was submitted on 23 Jul 2023
50 points (90.3% liked)
Asklemmy
you are viewing a single comment's thread
That's a great question! It's something I think about a lot. This is probably gonna sound sarcastic, but I mean it genuinely: Have you asked ChatGPT (or any other LLM) that question? I'd be curious to hear what it might have to say. Of course, its first few answers are probably gonna be just generic, useless stuff, so you'll have to really drill down into details to find something useful. But you might be able to find some good ideas in there.
Here are two things that immediately came to mind:
- **Democratization of knowledge and expertise.** Think of the many people who now have access to (e.g.) a virtual doctor just because they have an internet connection. As with everything I'm going to say, this comes with the big caveat that nobody should trust LLMs unquestioningly and that they definitely hallucinate and confabulate frequently. Still, though, they can potentially provide quick diagnoses and relevant, immediate, life-saving information in situations where it's difficult or impossible to get an appointment with a doctor.
- **Handling information problems.** I heard someone say recently that because LLMs are likely to be used for spam, ads, propaganda, and other kinds of information distortions and abuses, LLMs will also be the only systems capable of combating those things. For example, if people start using LLMs to write spam emails, then LLMs will almost certainly have to become part of the spam detection process. But even in cases where information isn't being used maliciously, we still struggle with information overload. LLMs are already being used to sift through (e.g.) the daily news, pick out the top few most important articles, and summarize them for readers. Finding a signal among the noise is actually quite important for all parts of life, so augmenting our ability to do that could be very useful.
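To make the spam-detection idea concrete, here's a rough sketch of what an LLM-in-the-loop filter might look like. Everything here is hypothetical: the function names are illustrative, and the model call is stubbed with a trivial keyword heuristic so the example runs on its own. A real system would replace `classify_with_llm` with an actual model API call.

```python
def build_prompt(email_body: str) -> str:
    """Frame the email as a yes/no classification question for the model."""
    return (
        "Decide whether the following email is spam. "
        "Answer with exactly 'spam' or 'not spam'.\n\n"
        f"Email:\n{email_body}"
    )

def classify_with_llm(prompt: str) -> str:
    # Stand-in for a chat-completion call. The keyword heuristic below
    # exists only so this sketch is self-contained and runnable.
    spam_markers = ("free money", "act now", "wire transfer")
    body = prompt.lower()
    return "spam" if any(marker in body for marker in spam_markers) else "not spam"

def is_spam(email_body: str) -> bool:
    """Use the (stubbed) model's verdict as one signal in a spam filter."""
    return classify_with_llm(build_prompt(email_body)) == "spam"
```

In practice the model's verdict would likely be just one signal combined with traditional filters (sender reputation, DKIM/SPF checks, Bayesian scoring), since relying on a single LLM judgment inherits all the hallucination caveats above.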
I suspect those answers might be broader and larger-scale than what you were asking for. If so, I apologize!
It's curious to hear AI detection described as a feature, given that it's just the same machine being used 'in reverse'. That arms race will just leave humans unable to know what is real.
Yeah, I agree. Less a "feature" and more a necessary evil.