this post was submitted on 22 Sep 2023
2 points (66.7% liked)

Technology

35126 readers
126 users here now

This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in DM before posting product reviews or ads. All such posts otherwise are subject to removal.


Rules:

1: All Lemmy rules apply

2: Do not post low effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.

5: personal rants of Big Tech CEOs like Elon Musk are unwelcome (does not include posts about their companies affecting wide range of people)

6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: crypto related posts, unless essential, are disallowed

founded 5 years ago
MODERATORS
all 29 comments
[–] nothingcorporate@lemmy.today 5 points 1 year ago (1 children)

For that thing that killed hundreds of monkeys? Yeah, sounds like a great plan.

[–] birdcat@lemmy.ml 5 points 1 year ago

"only" 15-17 monkeys, but thousands of other animals, insanely depressing. the more you read about it, the more you will start to actually believe that the death of one particular primate could indeed be beneficial for humanity ...

On several occasions over the years, Musk has told employees to imagine they had a bomb strapped to their heads in an effort to get them to move faster, according to three sources who repeatedly heard the comment. On one occasion a few years ago, Musk told employees he would trigger a "market failure" at Neuralink unless they made more progress, a comment perceived by some employees as a threat to shut down operations, according to a former staffer who heard his comment.

Five people who've worked on Neuralink's animal experiments told Reuters they had raised concerns internally. They said they had advocated for a more traditional testing approach, in which researchers would test one element at a time in an animal study and draw relevant conclusions before moving on to more animal tests. Instead, these people said, Neuralink launches tests in quick succession before fixing issues in earlier tests or drawing complete conclusions. The result: More animals overall are tested and killed, in part because the approach leads to repeated tests.

One former employee who asked management several years ago for more deliberate testing was told by a senior executive it wasn't possible given Musk's demands for speed, the employee said. Two people told Reuters they left the company over concerns about animal research.

First rule of technology: if the increase in complexity and decrease in reliability outweigh the added tangible value, don't implement it. This is why it's usually best to avoid "smart" appliances or, you know, brain implants.

[–] 601error@lemmy.ca 2 points 1 year ago

Obligatory “What could possibly go wrong? /s”

[–] Krapulaolut@sopuli.xyz 2 points 1 year ago (1 children)

I'll bet you need a lifetime subscription with that and get a blue verification mark on your forehead.

[–] Xtallll@lemmy.blahaj.zone 2 points 1 year ago

A 30 day lifetime subscription.

[–] AnonTwo@kbin.social 2 points 1 year ago (1 children)

Is there no government oversight for "Uhh no you aren't?"

Given the recent animal testing results this seems like assisted suicide

[–] Bitrot@lemmy.sdf.org 2 points 1 year ago

There was, they were not initially approved.

[–] HappyMeatbag@beehaw.org 1 points 1 year ago (1 children)

If someone besides Musk was running things, I might be excited about the potential for progress… as it stands, though, I just can’t trust the guy.

[–] phx@lemmy.ca 1 points 1 year ago

Given what is coming out about how the animal test subjects were treated, you'd be better off letting a random dentist poke at your brain.

[–] const_void@lemmy.ml 1 points 1 year ago (2 children)

Lol this guy can't even make a car that doesn't kill someone or have a bumper that doesn't fall off

[–] ivanafterall@kbin.social 1 points 1 year ago (1 children)

I shudder to think what the human equivalent of "fully autonomous driving" or a launchpad explosion looks like.

[–] RobotToaster@mander.xyz 1 points 1 year ago

Or even the human equivalent of a bumper falling off.

[–] cooljacob204@kbin.social -2 points 1 year ago (1 children)

All cars kill people. Don't Teslas have a pretty decent safety rating?

[–] const_void@lemmy.ml 0 points 1 year ago (1 children)
[–] cooljacob204@kbin.social 2 points 1 year ago

Okay, instead of posting rage bait, can you show me that more people are dying in/from Teslas than other vehicles per mile driven?

And just to be clear, I don't own a car. Nor do I care for Teslas. But you can't claim it's a dangerous car while not comparing it to the rest of the industry. Cars in general are really fucking unsafe.

[–] willybe@lemmy.ca 1 points 1 year ago

Nice move there AI. I see what you're doing.

[–] ulkesh@beehaw.org 1 points 1 year ago

I mean look on the bright side. It’s Musk’s sycophants who would line up to die for something like this.

[–] waspentalive@beehaw.org 1 points 1 year ago

Always mount a scratch test subject before testing or reconfiguring.

http://www.catb.org/jargon/html/S/scratch-monkey.html

[–] Kit@lemmy.blahaj.zone 1 points 1 year ago

I can see how a quadriplegic or someone with ALS would be excited for this trial. I hope it goes well. It could give someone who is trapped in their body a new way to communicate with their loved ones.

[–] beejjorgensen@lemmy.sdf.org 0 points 1 year ago (1 children)

Does it support full auto-think?

[–] JustJack23@slrpnk.net 1 points 1 year ago

Whoever thinks getting one is a good idea already has it enabled.

[–] nyakojiru@lemmy.dbzer0.com 0 points 1 year ago (1 children)

People would need to be forced to get this thing implanted to be competitive in the market. This sucks.

[–] pulaskiwasright@lemmy.ml 1 points 1 year ago* (last edited 1 year ago)

Is anyone up for starting a non religious Amish society?

[–] Pons_Aelius@kbin.social 0 points 1 year ago (1 children)

A good rule of thumb with computers and software is to never touch/buy an alpha/version 1.0 of any system, as it's best to let someone else sort out the major bugs.

This is the dilemma facing people trying to create wetware (brain-hardware interfaces). There will be problems, and how the hell any experiments to advance this pass an ethics board is beyond me.

[–] floofloof@lemmy.ca 1 points 1 year ago* (last edited 1 year ago) (2 children)
[–] chfour@lemm.ee 1 points 1 year ago

That is just terrifying. Imagine suddenly going back to being completely blind, and then learning no one's really out there to fix it anymore because the company behind it just went poof one day.

[–] Pons_Aelius@kbin.social 1 points 1 year ago

Holy shit that is beyond terrible. It reminds me of something William Gibson would write (tech wise) with the absurdity of Douglas Adams or Kurt Vonnegut mixed in.

And I can see this sort of thing happening again and again if this tech keeps developing over the next 50 years.

I would now revise that to: never touch any wetware interface for the next 30 years, and maybe by then it will be stable.