theluddite

joined 1 year ago
[–] theluddite@lemmy.ml 4 points 1 month ago

I would love to read an actually serious treatment of this issue and not 4 paragraphs that just say the headline but with more words.

[–] theluddite@lemmy.ml 28 points 1 month ago (5 children)

I have been predicting for well over a year now that they will both die before the election, but after the primaries, such that we can't change the ballots, and when Americans go to vote, we will be choosing between two dead guys. Everyone always asks "I wonder what happens then," and while I'm sure that there's a technical legal answer to that question, the real answer is that no one knows.

[–] theluddite@lemmy.ml 8 points 1 month ago (1 children)

Very well could be. At this point, I'm so suspicious of all these reports. It feels like trying to figure out what's happening inside a company while relying only on their ads and PR communications: The only thing that I do know for sure is that everyone involved wants more money and is full of shit.

[–] theluddite@lemmy.ml 15 points 1 month ago (8 children)

US Leads World in Credulous Reports of ‘Lagging Behind’ Russia. The American military, its allies, and the various think tanks it funds, either directly or indirectly, generate these reports to justify an ever-increasing military budget.

[–] theluddite@lemmy.ml 28 points 1 month ago* (last edited 1 month ago)

I know that this kind of actually critical perspective isn't the point of this article, but software always reflects the ideology of the power structure in which it was built. I actually covered something very similar in my most recent post, where I applied Philip Agre's analysis of the so-called Internet Revolution to the AI hype, but you can find many similar analyses all over the STS literature, or throughout Agre's work, which really ought to be required reading for anyone in software.

edit to add some recommendations: If you think of yourself as a tech person, and don't necessarily get or enjoy the humanities (for lack of a better word), I recommend starting here, where Agre discusses his own "critical awakening."

As an AI practitioner already well immersed in the literature, I had incorporated the field's taste for technical formalization so thoroughly into my own cognitive style that I literally could not read the literatures of nontechnical fields at anything beyond a popular level. The problem was not exactly that I could not understand the vocabulary, but that I insisted on trying to read everything as a narration of the workings of a mechanism. By that time much philosophy and psychology had adopted intellectual styles similar to that of AI, and so it was possible to read much that was congenial -- except that it reproduced the same technical schemata as the AI literature. I believe that this problem was not simply my own -- that it is characteristic of AI in general (and, no doubt, other technical fields as well).

[–] theluddite@lemmy.ml 14 points 1 month ago (1 children)

I've now read several of these from wheresyoured.at, and I find them to be well-researched, well-written, and very dramatic (if a little ranty), but they ultimately stop short of any structural or theoretical insight. It's right and good to document the shady people inside these shady companies ruining things, but those people are symptoms. They are people exploiting structural problems, not the root cause of our problems. The site's perspective feels like that of someone who had a good career in tech that started before, say, 2014, and is angry at the people who are taking it too far, killing the party for everyone. I'm not saying that there's anything inherently wrong with that perspective, but it's certainly a very specific one, and one that I don't particularly care for.

Even "the rot economy," which seems to be their big theoretical underpinning, has this problem. It puts at its center the agency of bad actors in venture capital becoming overly-obsessed with growth. I agree with the discussion about the fallout from that, but it's just lacking in a theory beyond "there are some shitty people being shitty."

[–] theluddite@lemmy.ml 27 points 1 month ago

I've already posted this here, but it's just perennially relevant: The Anti-Labor Propaganda Masquerading as Science.

[–] theluddite@lemmy.ml 63 points 2 months ago* (last edited 2 months ago) (2 children)

"The workplace isn't for politics" says company that exerts coercive political power to expel its (ex-)workers for disagreeing.

[–] theluddite@lemmy.ml 26 points 2 months ago (2 children)

Your comment perfectly encapsulates one of the central contradictions in modern journalism. You explain the style guide, and the need to communicate information in a consistent way, but then explain that the style guide is itself guided by business interests, not by some search for truth, clarity, or meaning.

I've been a long-time reader of FAIR.org, and I highly recommend them to anyone in this thread who can tell that something is up with journalism but has never done a deep dive into what exactly it is. Modern journalism has a very clear ideology (in the sorta Zizek sense; I'm not claiming that the journalists do it nefariously). Once you learn to see it, it's everywhere.

[–] theluddite@lemmy.ml 19 points 2 months ago* (last edited 2 months ago)

All these always do the same thing.

Researchers reduced [the task] to producing a plausible corpus of text, and then published the not-so-shocking results that the thing that is good at generating plausible text did a good job generating plausible text.

From the OP, buried deep in the methodology:

Because GPT models cannot interpret images, questions including imaging analysis, such as those related to ultrasound, electrocardiography, x-ray, magnetic resonance, computed tomography, and positron emission tomography/computed tomography imaging, were excluded.

Yet here's their conclusion:

The advancement from GPT-3.5 to GPT-4 marks a critical milestone in which LLMs achieved physician-level performance. These findings underscore the potential maturity of LLM technology, urging the medical community to explore its widespread applications.

It's literally always the same. They reduce a task such that ChatGPT can do it, then report in the headline that it can do it, with the caveats buried much later in the text.

[–] theluddite@lemmy.ml 3 points 3 months ago

The purpose of a system is what it does

According to the cybernetician, the purpose of a system is what it does. This is a basic dictum. It stands for bald fact, which makes a better starting point in seeking understanding than the familiar attributions of good intention, prejudices about expectations, moral judgment, or sheer ignorance of circumstances.

The AI is "supposed" to identify targets, but in reality, the system's purpose is to justify indiscriminate murder.

 

Because technology is not progress, and progress is not necessarily technological. The community is currently almost entirely links to theluddite.org, but we welcome all relevant discussions.

Per the FAQ, various link formats:

/c/luddite@lemmy.ml

!luddite@lemmy.ml

 

I read this article here, so I thought you'd all appreciate a follow-up. I pointed out in the comments that they were definitely wrong. I got in touch with them (which was not easy to do), and it's finally been corrected.

Editor's Note, July 26, 2023: A previous version of this article incorrectly stated that vertical farms can use up to 90 percent less energy than traditional farms. In fact, that number referred to the amount of energy one vertical farm used in comparison to other vertical farms. We’ve updated the story to reflect this change. We regret the error.
