this post was submitted on 29 Jan 2024
92 points (100.0% liked)

Technology

Generative artificial intelligence (GenAI) company Anthropic has claimed to a US court that using copyrighted content in large language model (LLM) training data counts as “fair use”.

Under US law, “fair use” permits the limited use of copyrighted material without permission, for purposes such as criticism, news reporting, teaching, and research.

In October 2023, a host of music publishers including Concord, Universal Music Group and ABKCO initiated legal action against the Amazon- and Google-backed generative AI firm Anthropic, demanding potentially millions in damages for the allegedly “systematic and widespread infringement of their copyrighted song lyrics”.

megopie@beehaw.org 5 points 9 months ago

What they have is miles from artificial general intelligence; it is not AI in even a limited sense. It is AI in the same way a mob in a video game is AI.

Their claims to be approaching it are marketing fluff at best, and abject lies at worst.

Drewelite@lemmynsfw.com 2 points 9 months ago

I think if we sit here and debate the nuances of what is or is not intelligence, we will look back on this conversation and laugh at how pedantic it was. Movies have taught us that A.I. is hyper-intelligent, conscious, self-aware, has its own objectives, etc. But corporations don't care about that. In fact, to a corporation, I'm sure the most annoying thing about intelligence right now is that it comes packaged with its own free will.

People laugh at what is being called A.I. because it's confidently wrong and "just complicated auto-complete". But ask your coworkers some questions. I bet it won't be long before they're confidently wrong about something, and when they're right, it'll probably be them parroting something they learned. Most people's jobs are things like: organize these items on those shelves, mix these ingredients and put them in a cup, get all these numbers from this website and put them in a spreadsheet, write a press release summarizing these sources.

Corporations already have the A.I. they need. You gatekeeping intelligence is just your ego protecting you from the truth: you, or someone dear to you, is already replaceable.

I think we both know that A.I. is possible; I'm saying it's inevitable, and likely already at version 1. I'm sure any version of it would require access to training data, so the ruling here would translate. The only chance the general population has of keeping up with corporations in the ability to generate economic value is to keep the production of A.I. in the public space.