OmnipotentEntity

joined 1 year ago
[–] OmnipotentEntity@beehaw.org 4 points 2 weeks ago (1 children)

A problem that only affects newbies, huh?

Let's say that you are writing code intended to be deployed headless in the field, and it should not be allowed to exit in an uncontrolled fashion, because there are communications that need to happen with the hardware to shut it down safely. You're making an autonomous robot or something.

Using Python for this task isn't too far out of left field, because Python is one of the major languages of ROS, and it's the most common one.

Which of the following Python standard library functions can throw, and what do they throw?

bytes, hasattr, len, super, zip
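
For the curious, here is a quick sketch of one way each of those can blow up. It assumes CPython 3.10+ (for zip's strict= keyword), and the Evil class and show() helper are just scaffolding made up for this demo:

```python
# One way each of bytes, hasattr, len, super, and zip can raise.
# Assumes CPython 3.10+ for zip(strict=True); Evil and show() are demo scaffolding.

def show(label, thunk):
    """Run thunk() and report the exception type it raises, if any."""
    try:
        thunk()
        print(f"{label}: no exception")
    except Exception as exc:
        print(f"{label}: {type(exc).__name__}: {exc}")

class Evil:
    @property
    def attr(self):
        # hasattr() only swallows AttributeError; anything else propagates.
        raise KeyError("raised from inside a property getter")

    def __len__(self):
        return -1  # len() rejects negative lengths

show("bytes('abc')", lambda: bytes("abc"))           # TypeError: string needs an encoding
show("bytes(-1)", lambda: bytes(-1))                 # ValueError: negative count
show("hasattr(o, 3)", lambda: hasattr(object(), 3))  # TypeError: name must be a string
show("hasattr(Evil(), 'attr')", lambda: hasattr(Evil(), "attr"))  # KeyError leaks through
show("len(Evil())", lambda: len(Evil()))             # ValueError: __len__() < 0
show("len(3)", lambda: len(3))                       # TypeError: int has no len()
show("super()", lambda: super())                     # RuntimeError outside a class/method
show("zip(1)", lambda: zip(1))                       # TypeError: int is not iterable
show("zip([1], [], strict=True)", lambda: list(zip([1], [], strict=True)))  # ValueError: length mismatch
```

And that's before you get to things like MemoryError or KeyboardInterrupt, which can surface almost anywhere.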

[–] OmnipotentEntity@beehaw.org 4 points 4 weeks ago (1 children)

If you are taking requests, I am curious how ridiculous The Longest Journey would be.

[–] OmnipotentEntity@beehaw.org 15 points 1 month ago (1 children)

I would be impressed if they risked it. Literally half of Mongolia's population resides in its capital city, Ulaanbaatar. If a country bordering Russia were to arrest the sitting Russian president and hand him over to The Hague, there would be a non-zero possibility of a retaliatory airstrike on the capital, destroying the country's only major city and killing a significant percentage of its entire population.

[–] OmnipotentEntity@beehaw.org 1 points 1 month ago

Good effort, but I don't know if it will be particularly effective, considering Project 2025's playbook specifically covers doing end runs around staffers.

The article is stupid as hell, though.

[–] OmnipotentEntity@beehaw.org 1 points 1 month ago

No one tell OP that the ml in lemmy.ml is for Marxist Leninists.

[–] OmnipotentEntity@beehaw.org 2 points 2 months ago

Too bad you'll never receive that option from any manufacturer.

[–] OmnipotentEntity@beehaw.org 14 points 2 months ago (3 children)

> The scam is that they are actually doing the work, getting paid well

Listen. I know that there is some really shitty stuff going on in North Korea, and very real threats that their government is capable of carrying out, and it sucks for the people living there who have to do this work under threat of death.

But if you say that "the scam" is that they're doing work and receiving full pay for work done, I'm going to make fun of you. Oh no, someone outside of the West did work and was slightly less exploited by capital than usual in the process. Horror upon horror.

[–] OmnipotentEntity@beehaw.org 23 points 2 months ago

Most recently, other than Trump, George H. W. Bush lost an election as the sitting incumbent. Prior to that, it was Jimmy Carter.

The next most recent person to win the election while losing the popular vote was George W. Bush; prior to that, it was Benjamin Harrison back in 1888.

[–] OmnipotentEntity@beehaw.org 2 points 2 months ago (1 children)

Please don't tell me you, unironically, actually use the Carmack rsqrt function in the year of our Linux Desktop 2024.

Also, if you like, you can write unsafe Rust in safe Rust instead.


Abstract:

Hallucination has been widely recognized to be a significant drawback for large language models (LLMs). There have been many works that attempt to reduce the extent of hallucination. These efforts have mostly been empirical so far, which cannot answer the fundamental question whether it can be completely eliminated. In this paper, we formalize the problem and show that it is impossible to eliminate hallucination in LLMs. Specifically, we define a formal world where hallucination is defined as inconsistencies between a computable LLM and a computable ground truth function. By employing results from learning theory, we show that LLMs cannot learn all of the computable functions and will therefore always hallucinate. Since the formal world is a part of the real world which is much more complicated, hallucinations are also inevitable for real world LLMs. Furthermore, for real world LLMs constrained by provable time complexity, we describe the hallucination-prone tasks and empirically validate our claims. Finally, using the formal world framework, we discuss the possible mechanisms and efficacies of existing hallucination mitigators as well as the practical implications on the safe deployment of LLMs.
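
(A rough sketch of the flavor of argument the abstract gestures at: a diagonalization over computable models. This is an illustrative reconstruction, not the paper's actual construction or notation.)

```latex
% Illustrative reconstruction only; the notation is not taken from the paper.
Let $h_1, h_2, \dots$ be a computable enumeration of all (total, computable)
candidate LLMs, and let $s_1, s_2, \dots$ enumerate the input strings.
Define a ground-truth function $f$ by diagonalization:
\[
  f(s_i) := \text{any string with } f(s_i) \neq h_i(s_i),
  \quad \text{e.g. } h_i(s_i) \text{ with one extra symbol appended,}
\]
so $f$ is itself computable. Then
\[
  \forall i \;\, \exists s : \; h_i(s) \neq f(s),
\]
i.e.\ every LLM in the enumeration disagrees with this ground truth on at
least one input, which is the abstract's notion of hallucination.
```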


You might know the game under the name Star Control 2. It's a wonderful game that involves wandering around deep space, meeting aliens, and navigating a sprawling galaxy while trying to save the people of Earth, who are being kept under a planetary shield.
