rho50

joined 1 year ago
[–] rho50@lemmy.nz 1 point 6 months ago (1 children)

Precisely. Many of the narrowly scoped solutions work really well, too (for what they're advertised for).

As of today though, they're nowhere near reliable enough to replace doctors, and any breakthrough on that front is very unlikely to be a language model IMO.

[–] rho50@lemmy.nz 6 points 6 months ago

Exactly. So the organisations creating and serving these models need to be clearer about the fact that they're not general purpose intelligence, and are in fact contextual language generators.

I've seen demos of the models used as actual diagnostic aids, and they're not LLMs (plus require a doctor to verify the result).

[–] rho50@lemmy.nz 27 points 6 months ago (10 children)

There are some very impressive AI/ML technologies that are already in use as part of existing medical software systems (think: a model that highlights suspicious areas on an MRI, or even suggests differential diagnoses). Further, other models have been built and demonstrated to perform extremely well on sample datasets.

Funnily enough, those systems aren't using language models 🙄

(There is Google's Med-PaLM, but I suspect it wasn't very useful in practice, which is why we haven't heard anything since the original announcement.)

[–] rho50@lemmy.nz 89 points 6 months ago (6 children)

It is quite terrifying that people think these unoriginal and inaccurate regurgitators of internet knowledge, with no concept of or heuristic for correctness... are somehow an authority on anything.

[–] rho50@lemmy.nz 25 points 6 months ago (3 children)

I know of at least one other case in my social network where GPT-4 identified a gas bubble in someone's large bowel as "likely to be an aggressive malignancy," leading to said person fully expecting they'd be dead by July, when in fact they were perfectly healthy.

These things are not ready for primetime, and certainly not capable of doing the stuff that most people think they are.

The misinformation is causing real harm.

[–] rho50@lemmy.nz 3 points 6 months ago

Ohh, my bad! I thought the person you were replying to was asking about Gitea. Yeah, Forgejo seems truly free and also looks like it has a strong governance structure that is likely to keep things that way.

[–] rho50@lemmy.nz 3 points 6 months ago (2 children)

This sadly isn't true anymore - they now have Gitea Enterprise, which contains additional features not available in the open source version.

[–] rho50@lemmy.nz 5 points 6 months ago

From here:

  • SAML
  • Branch protection for organizations
  • Dependency scanning (yes, there are other tools for this, but it's still a feature the open source version doesn't get).
  • Additional security controls for users (IP allowlisting, mandatory MFA)
  • Audit logging

[–] rho50@lemmy.nz 78 points 6 months ago (12 children)

Don't use Gitea, use Forgejo. It's a hard fork of Gitea, created after Gitea became a for-profit venture (and started gating features behind a paywall).

Codeberg has switched to Forgejo as well.

Also, there's some promising progress being made towards ActivityPub federation in Forgejo! Imagine a world where you can comment on issues and send/receive pull requests on other people's projects, all from the comfort of a small homeserver.
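For anyone wanting to try it, a minimal self-hosted setup might look like the sketch below. This assumes the official Forgejo container image on Codeberg's registry; check the Forgejo installation docs for the current image tag and recommended settings before using it.

```yaml
# Hypothetical minimal docker-compose.yml for a Forgejo homeserver.
# Image name/tag and port choices are assumptions — verify against
# the official Forgejo documentation.
services:
  forgejo:
    image: codeberg.org/forgejo/forgejo:9   # pin to a real release tag
    restart: unless-stopped
    ports:
      - "3000:3000"   # web UI
      - "2222:22"     # SSH, for git clone/push over ssh://
    volumes:
      - forgejo-data:/data   # repositories, config, and database live here
volumes:
  forgejo-data:
```

With something like this running, `git remote add origin ssh://git@yourhost:2222/you/repo.git` (hostname and repo path are placeholders) is all it takes to start pushing to your own server.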

[–] rho50@lemmy.nz 11 points 7 months ago (1 children)

I saw a job posting for Senior Software Engineer position at a large tech company (not Big Tech, but high profile and widely known) which required candidates to have “an excellent academic track record, including in high school.” A lot of these requirements feel deliberately arbitrary, and like an effort to thin the herd rather than filter for good candidates.

[–] rho50@lemmy.nz 3 points 7 months ago

Songs and albums that I’ve uploaded from my own collection have disappeared from Apple Music, despite my physically owning them on CD and Apple advertising the ability to store my CD rips in the cloud.

It’s unacceptable. I’m still on Apple Music for now, but moving my music library to Jellyfin looks more appealing by the day.

[–] rho50@lemmy.nz 2 points 7 months ago

Agreed, and it could definitely make such an assumption. The other aspect that I don’t really get is… if a superintelligent entity were to eventuate, why would it care?

We’re going to be nothing but bugs to it. It’s not likely to be of any consequence to that entity whether or not I expected/want it to exist.

The anthropomorphising going on with the AI hype is just crazy.
