this post was submitted on 14 Apr 2024
21 points (92.0% liked)

Technology


Proponents of AI and other optimists are often ready to acknowledge the numerous problems, threats, dangers, and downright murders enabled by these systems to date. But they also dismiss critique and assuage skepticism with the promise that these casualties are themselves outliers — exceptions, flukes — or, if not, that they are eminently fixable with the right methodological tweaks.

Common practices of technology development can produce this kind of naivete. Alberto Toscano calls this a “Culture of Abstraction.” He argues that logical abstraction, core to computer science and other scientific analysis, influences how we perceive real-world phenomena. This abstraction away from the particular and toward idealized representations produces and sustains apolitical conceits in science and technology. We are led to believe that if we can just “de-bias” the data and build in logical controls for “non-discrimination,” the techno-utopia will arrive, and the returns will come pouring in. The argument here is that these adverse consequences are unintended. The assumption is that the intention of algorithmic inference systems is always good — beneficial, benevolent, innovative, progressive.
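The "de-bias the data" promise is easier to evaluate with a concrete toy case. Below is a minimal sketch (hypothetical data and a made-up proxy feature, nothing from the article): the protected attribute is dropped before fitting, yet a correlated proxy feature lets an ordinary least-squares model reproduce the biased outcomes anyway.

```python
# Minimal sketch (hypothetical data): dropping a protected attribute
# does not "de-bias" a model when a correlated proxy feature remains.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (e.g., group membership); NOT shown to the model.
group = rng.integers(0, 2, size=n)

# Proxy feature strongly correlated with the group (e.g., a neighborhood
# statistic); this one IS shown to the model.
proxy = group + rng.normal(0, 0.3, size=n)

# Historical outcomes are biased against group 1.
outcome = (rng.random(n) < np.where(group == 1, 0.3, 0.7)).astype(float)

# "De-biased" training: least squares on the proxy alone, with the
# protected column removed from the feature matrix.
X = np.column_stack([np.ones(n), proxy])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
pred = X @ coef

# The model's predictions still differ sharply by protected group.
print("mean prediction, group 0:", pred[group == 0].mean().round(3))
print("mean prediction, group 1:", pred[group == 1].mean().round(3))
```

Run as written, the "de-biased" model still scores group 0 near 0.7 and group 1 near 0.3: the bias was never in the dropped column alone, but in the world the data describes.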

Stafford Beer gave us an effective analytical tool for evaluating a system without getting sidetracked by arguments about its intent rather than its real impact. The tool is called POSIWID, which stands for "The Purpose of a System Is What It Does." This analytical frame provides "a better starting point for understanding a system than a focus on designers’ or users’ intention or expectations."

[–] db2@lemmy.world 3 points 2 months ago* (last edited 2 months ago) (1 children)

Proponents of AI and other optimists are often ready to acknowledge the numerous problems, threats, dangers, and downright murders enabled by these systems to date

[tinfoil hat image]

Edit: I see from the comments this is about insurance carriers. In that case it's not tinfoil hat at all. The wording I quoted sucks, though, because it's not the AI doing it any more than it's the hammer that drives a nail sideways.

[–] JoBo@feddit.uk 2 points 2 months ago (1 children)

Where did you get insurance carriers from?

No idea what your post, before or after the edit, is trying to say. But the subject of your quoted sentence is "proponents of AI", not "AI", and the sentence is about what is enabled by AI systems. Your attempt at pedantry makes no sense.

If you're suggesting that it is possible to build an AI with none of the biases embedded in the world it learns from, you might want to read that article again because the (obvious) rebuttal is right there.

[–] db2@lemmy.world 3 points 2 months ago (1 children)

The systems didn't do anything they weren't told to do. You're correct that it says proponents, but they knew what it was doing and kept doing it because it was giving them the answers they wanted regardless of reality. The AI is still like the hammer.

[–] JoBo@feddit.uk 1 points 2 months ago (2 children)

The systems didn’t do anything they weren’t told to do.

You're thinking of the kinds of algorithms written by human beings. AI is a black box. No one knows how these models obtain their answers.

[–] db2@lemmy.world 1 points 2 months ago (1 children)

That's not how programming works.

[–] JoBo@feddit.uk 3 points 2 months ago (1 children)
[–] db2@lemmy.world 1 points 2 months ago

Sure thing bud.

[–] Womble@lemmy.world 1 points 2 months ago (1 children)

That's only true in the same sense that "no one knows how brains work": we understand the bits at the low level and can construct heuristics at a high level, but have difficulty linking the two. That is not to say human minds or neural networks are entirely unpredictable and produce functionally random outputs that can't be reasoned about.

[–] JoBo@feddit.uk 1 points 2 months ago (1 children)

I think you overestimate the amount of 'thought' going on here. (ref)

[–] Womble@lemmy.world 1 points 2 months ago* (last edited 2 months ago)

I'm not saying there is any thought going on; I'm saying a lack of mapping from low-level processes to high-level outcomes does not mean a system is entirely inscrutable.

But for reference, your link has nothing to say about the amount of thought going on; sexists have thoughts when they think women are lesser. Shit thoughts, but it's still thinking.