Only an existential crisis? What about existential crises?
ALostInquirer
While Lemmy doesn't have enough people for each product category yet, have you checked out the community !buyitforlife@slrpnk.net?
There's also !recommendations@lemmy.world for broader discussion, but it hasn't gained much traction yet.
Anyways. I know you probably wanted a story that was more interesting than depressing, but that’s just one that really stuck with me from that point in my life. I don’t think that’s a normal experience for a Night Auditor to have, so I wouldn’t take my experience as a reason to dissuade anyone from taking the position, but you asked for a story, and so you got one.
Even a depressing story is interesting in its own way, so I appreciate it all the same! I can see why the experience stuck with you; it's a rough situation to find oneself in for almost all involved.
For those interested in discussing their job searches, did you know there's a !jobs@lemmy.world community? It's not terribly active at the moment, but given the discussion here, there seems to be some potential interest.
Any odd stories from that job?
Fun part is, that article cites a paper mentioning misgivings with the terminology: AI Hallucinations: A Misnomer Worth Clarifying. So at the very least I'm not alone on this.
Yeah, on further thought, and as I mention in other replies, my thoughts on this are shifting toward the real bug being how it's marketed in many cases (as a digital assistant/research aid) and, in turn, how it's used, or attempted to be used, in line with that marketing.
perception
This is the problem I take with this: there's no perception in this software. It's faulty, misapplied software when one tries to employ it to generate reliable, factual summaries and responses.
It's not a bad article, honestly; I'm just tired of journalists and academics echoing the language of businesses and their marketing. "Hallucinations" isn't an accurate term for this form of AI. These are sophisticated generative text tools, and in my opinion they lack any qualities that justify all this fluffy terminology personifying them.
Also, frankly, I think students have found a better application for large-language-model AIs than many adults, even those trying to deploy them. Students are using them to do their homework and generate their papers, which is exactly one of the basic things these tools are built to do. Too many adults are acting like these tools should be used in their present form as research aids, but their entire generative basis undermines their reliability for that. It's trying to use the wrong tool for the job.
You don't want any of the generative capacities of a large-language-model AI for research help; you'd instead want whatever text processing it may be able to do to assemble and provide accurate output.
While largely true, I was also thinking of filtering/sorting systems within specific sites (e.g. stores/archives/etc.) as well, which may result in similar junk results but fewer than with a search engine.
Tbh I didn't mean to Lemmy specifically, so much as simply off Twitter in general, preferably to a non-corporate social site. It may be naive/idealistic, but I think those most inclined to leave would be the better of the bunch, and those in between are more apt to go to another corporate site anyway (e.g. Threads).
Does it sometimes seem like commenting in high traffic online spaces feels this way too, not just Reddit?