submitted 10 months ago by Powderhorn@beehaw.org to c/technology@beehaw.org
[-] sapient_cogbag@infosec.pub 25 points 10 months ago

I hope not. Not a big fan of proprietary AI (local AI all the way, and I hope people leak all these models, both code and weights), but fuck copyright and fuck capitalism, which makes automation seem like a bad thing when it shouldn't be ;p nya

[-] wim@lemmy.sdf.org 16 points 10 months ago

Yes, because AI and automation will definitely not be on the side of big capital, right? Right?

Be real. The cost of building these systems means they're always going to favour the wealthy. At best, right now we're running public copies of the older and smaller models. Local AI will always run behind the state-of-the-art proprietary models, which will always be in the hands of the richest moguls and companies in the world.

[-] sapient_cogbag@infosec.pub 7 points 10 months ago

> Be real. The cost of building these systems means they're always going to favour the wealthy. At best, right now we're running public copies of the older and smaller models. Local AI will always run behind the state-of-the-art proprietary models, which will always be in the hands of the richest moguls and companies in the world.

Distribution of LoRA-style fine-tuning weights means FOSS AI systems have a long-term advantage because of compounding effects. ^.^

That is, high-quality data for smaller models and very small fine-tuning weights are accessible enough to open groups, and modular enough in the improvements they make to a given model, that the FOSS community can take even a single leak and run with it, competing effectively with proprietary groups.
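To make the size argument concrete, here's a minimal sketch of my own (not from anyone upthread) using the Hugging Face peft library; GPT-2 and the hyperparameters are arbitrary example choices:

```python
# Why LoRA adapters are so cheap to share: only the low-rank adapter
# weights are trainable, a tiny fraction of the base model.
# Assumes the `transformers` and `peft` libraries are installed.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # any small local model

config = LoraConfig(
    r=8,                        # low-rank dimension; controls adapter size
    lora_alpha=16,              # scaling factor for the adapter output
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)

model.print_trainable_parameters()
# prints roughly: trainable params: 294,912 || all params: 124,734,720
# i.e. ~0.24% of the model - only these weights need distributing.
```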

Furthermore, smaller and more efficient models that can run on lower-end hardware avoid the need to send potentially sensitive data off to AI companies, and they enable the kind of FOSS compounding effect explained above.

This doesn't just affect people who like privacy, but also companies with data-privacy requirements ^.^ - as long as the medium-sized models are "good enough" (which I think they are ;p), the compounding effects of LoRA tuning, the better data-privacy properties, and research developments toward much lower-weight-count models and more weight-efficient training mechanisms capable of inducing zero-shot learning all mean local AI can compete with the proprietary stuff. It's still early days, but it's absolutely doable even today on fairly low-end hardware, and it can only get better for the reasons above.
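For instance, a fully local setup might look like the rough sketch below (my illustration, assuming the llama-cpp-python bindings and an already-downloaded GGUF model file; the path is a placeholder). Nothing ever leaves the machine:

```python
# Minimal local-inference sketch: the model runs entirely on this machine,
# so prompts containing sensitive data are never sent to a third party.
# Assumes the `llama-cpp-python` package; the model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./models/local-model.gguf", n_ctx=2048)

out = llm("Summarise this internal report: ...", max_tokens=128)
print(out["choices"][0]["text"])
```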

Furthermore, "intellectual property" and copyright stuff have an absolutely massive and arguably even more powerful set of industries behind them. Trying to strengthen IP stuff against AI means that AI will only be available to those controlling these existing IP resources and it's unending stranglehold on technology and communication and people as a whole :/

I also think AI is forcing more and more people to re-evaluate society's relationship with work and labour. Frankly, I think that's super important, as it creates a greater chance of more radical liberation from the existing structures of capitalism and its hierarchies, and from the near-mandatory nature of work as a whole (there has already been some thinking along these lines around the concept of "bullshit jobs").

I think people should use this as an opportunity to unionise, and to push for cooperative and democratic control of orgs ^.^ - and many other things that I CBA to list out ;3

[-] hascat@programming.dev 6 points 10 months ago

No leaks necessary; there are a number of open-source LLMs available:

https://github.com/Hannibal046/Awesome-LLM#open-llm
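For example, loading one of them is straightforward (a minimal sketch of my own, assuming the transformers library; EleutherAI/pythia-410m is just one openly licensed model of the kind that list collects):

```python
# Download an openly licensed model and generate text with it locally.
# Assumes the Hugging Face `transformers` library (plus PyTorch);
# the model name is one example, not an endorsement of any entry.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "EleutherAI/pythia-410m"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tok("Open model weights mean anyone can", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=30)
print(tok.decode(output[0], skip_special_tokens=True))
```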

The key differentiator between these and proprietary offerings will always be the training data. Large amounts of high-quality data will be harder for an individual or a small team to source. If lawsuits like this one block ingestion of otherwise publicly available data, we could see a future where copyright holders charge AI builders for access to their data. If that happens, "knowledge" could become exclusive to particular AI platforms, much the same way popular shows and movies are exclusive to streaming platforms.

[-] pax@rblind.com 1 points 10 months ago

The open-source models are so bad that they give you responses out of context. Their responses are completely random.
