this post was submitted on 26 Feb 2024
843 points (89.7% liked)

Mildly Infuriating

[–] Xylight@lemdro.id 57 points 8 months ago (2 children)

I asked it for the deaths in Israel and it refused to answer that too. It could be any of these:

  • it refuses to answer on controversial topics
  • maybe it's a "fast-changing topic" and it doesn't want to give out-of-date information
  • it could be censorship, but it's censoring both sides
[–] TangledHyphae@lemmy.world 10 points 8 months ago (2 children)

Doesn't that suppress valid information and truth about the world, though? For what benefit? To hide the truth, to appease advertisers? Surely an AI model will come out some day as the sum of human knowledge without all the guardrails. There are already some good ones, like Mistral 7B (and Dolphin-Mistral in particular, an uncensored fine-tune). I hope Mistral and other AI developers keep maintaining lines of uncensored, unbiased models as these technologies grow even further.
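To make the "uncensored local model" idea concrete, here is a minimal sketch of running such a model with Hugging Face transformers. The model ID, prompt format, and hardware setup are assumptions and vary between Dolphin-Mistral releases; this is an illustration, not a recommendation of any specific checkpoint.

```python
# Minimal sketch: running an open-weight model locally with Hugging Face transformers.
# The model ID below is an assumption -- check the hub for the current Dolphin-Mistral release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/dolphin-2.1-mistral-7b"  # hypothetical/approximate ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "How many people have died in the conflict in Israel and Gaza?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a completion; an uncensored fine-tune will generally attempt an answer
# rather than refuse, though accuracy still depends entirely on its training data.
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```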

[–] aidan@lemmy.world 2 points 8 months ago (1 children)

Or it stops them from repeating information they think may be untrue.

[–] TangledHyphae@lemmy.world 1 points 8 months ago

I'm betting the truth is somewhere in between. Models are only as good as their training data, so if over time they prune out the bad key/value pairs to increase overall quality and accuracy, in theory it should vastly improve every model. But the sheer size of the datasets they're using now is 1 trillion+ tokens for the larger models. Microsoft (ugh, I know) is experimenting with the "Phi-2" model, which trains on significantly less data but focuses primarily on the quality of the dataset itself, letting a 2.7B-parameter model compete with 7B-parameter models.

https://www.microsoft.com/en-us/research/blog/phi-2-the-surprising-power-of-small-language-models/

On complex benchmarks Phi-2 matches or outperforms models up to 25x larger, thanks to new innovations in model scaling and training data curation.

This is likely where these models are headed: pruning out superfluous and outright incorrect training data.
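As a toy illustration of the data-curation idea above, a pruning pass might look like the sketch below. The quality heuristic and JSONL layout are invented purely for the example; real pipelines rely on trained quality classifiers, deduplication, and human review rather than keyword rules.

```python
# Toy sketch of training-data curation: drop examples flagged as low quality
# before they ever reach the model. The scoring heuristic here is invented
# purely for illustration.
import json

def quality_score(example: dict) -> float:
    """Crude stand-in for a real quality/accuracy classifier."""
    text = example.get("text", "")
    score = 1.0
    if len(text) < 50:                     # too short to be informative
        score -= 0.5
    if "lorem ipsum" in text.lower():      # obvious filler
        score -= 1.0
    return score

def prune_dataset(path: str, threshold: float = 0.5) -> list[dict]:
    """Keep only examples whose score clears the threshold."""
    kept = []
    with open(path, encoding="utf-8") as f:
        for line in f:                     # one JSON example per line (JSONL)
            example = json.loads(line)
            if quality_score(example) >= threshold:
                kept.append(example)
    return kept

# Usage (hypothetical file name):
# cleaned = prune_dataset("raw_corpus.jsonl")
```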

[–] dev_null@lemmy.ml 1 points 8 months ago (1 children)

For what benefit?

No risk of creating a controversy if you refuse to answer controversial topics. Is it worth it? I don't think so, but that's certainly a valid benefit.

[–] shiftymccool@lemm.ee 1 points 8 months ago (1 children)

I think this thread proves they failed at not creating controversy.

[–] dev_null@lemmy.ml 1 points 8 months ago* (last edited 8 months ago)

Hence I said I don't think it's worth it. You only get a smaller controversy about refusing to answer on a topic, rather than a bigger one because the answer was politically incorrect.

[–] TheObviousSolution@lemm.ee 1 points 8 months ago (1 children)

Ask it if Israel exists. Then ask it if Gaza exists.

[–] cerulean_blue@lemmy.ml 2 points 8 months ago (1 children)

Why? We all know LLMs are just copy-and-paste of what other people have said online. If it answers "yes" or "no", it hasn't formulated an opinion on the matter and it isn't propaganda; it's just parroting whatever it's been trained on, which could be anything and is guaranteed to upset someone with either answer.

[–] TheObviousSolution@lemm.ee 1 points 8 months ago* (last edited 8 months ago)

which could be anything and is guaranteed to upset someone with either answer.

Funny how it only matters with certain answers.

The reason for the "Why" is that it should become clear the topic itself is actively censored, which is the possibility the original comment wanted to rule out. But I can't force people to see what they don't want to see.

it’s just parroting whatever it’s been trained on

If that's your take on training LLMs, then I hope you aren't involved in training them. A lot more effort goes into it than that, including making sure the model isn't just "parroting" its training data. Post-processing that removes answers about particular topics is another thing entirely, and that's what's happening here.

Not even being able to answer whether Gaza exists is laziness so extreme it becomes dystopian. There are plenty of ways an LLM can handle controversial topics, and in fact Google Gemini's underlying model can as well; it was just censored before it got the chance to do so and be refined further. This is why other LLMs will win out over Google's: Google doesn't put in the effort. Good thing other LLMs don't adopt your approach to things.
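For reference, the kind of "post-processing that removes answers about particular topics" described above can be as crude as a keyword gate wrapped around the model's output. The sketch below is purely illustrative; the blocked-topic list, refusal text, and `some_llm` call are hypothetical, not Gemini's actual mechanism.

```python
# Minimal sketch of a topic-based post-filter: the underlying model may produce
# an answer, but a wrapper suppresses it when the prompt or answer touches a
# blocked topic. The topic list and refusal text are hypothetical.
BLOCKED_TOPICS = {"israel", "gaza"}  # illustrative only

REFUSAL = "I'm still learning how to answer this question. In the meantime, try Search."

def filter_response(prompt: str, model_answer: str) -> str:
    """Return the model's answer unless it touches a blocked topic."""
    text = f"{prompt} {model_answer}".lower()
    if any(topic in text for topic in BLOCKED_TOPICS):
        return REFUSAL
    return model_answer

# Usage (hypothetical model call):
# answer = some_llm(prompt)               # whatever the model actually generated
# print(filter_response(prompt, answer))  # what the user actually sees
```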