[-] WalnutLum@lemmy.ml 14 points 2 days ago

> The ISS is aging, and for safety’s sake, NASA intends to incinerate the immense facility around 2031. To accomplish the job, the agency will pay SpaceX up to $843 million, according to a statement released on June 26.

See you guys in 2040

[-] WalnutLum@lemmy.ml 1 points 3 days ago

ChatGPT already is multiple smaller models. Most guesses peg GPT-4 as an 8x220-billion-parameter mixture of experts, i.e. eight 220-billion-parameter models squished together.
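
For anyone curious what "mixture of experts" means in practice, here's a minimal sketch of top-k expert routing. It assumes PyTorch; the class name `SimpleMoE` and the sizes are made up for illustration, not anything from OpenAI:

```python
# Hypothetical mixture-of-experts layer: a gating network scores each expert
# per token, only the top-k experts run, and their outputs are combined
# weighted by the (softmaxed) gate scores.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleMoE(nn.Module):
    def __init__(self, dim=512, num_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.gate = nn.Linear(dim, num_experts)  # router: one score per expert
        self.top_k = top_k

    def forward(self, x):                        # x: (tokens, dim)
        scores = self.gate(x)                    # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # normalize over the chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e            # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

moe = SimpleMoE()
print(moe(torch.randn(4, 512)).shape)  # torch.Size([4, 512])
```

The appeal is that total parameter count can be huge (all experts combined) while per-token compute stays close to a single dense model, since only k experts run per token.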

[-] WalnutLum@lemmy.ml 19 points 3 days ago

Let them Fight

[-] WalnutLum@lemmy.ml 15 points 3 days ago

My one dark hope is that AI will be enough of an impetus for somebody to update the DMCA.

[-] WalnutLum@lemmy.ml 6 points 4 days ago* (last edited 4 days ago)

> pay once, get access to everything everywhere

> thinks about Elsevier

OH GOD PLEASE NO

[-] WalnutLum@lemmy.ml 4 points 5 days ago

That doesn't seem to be the same article

[-] WalnutLum@lemmy.ml 24 points 6 days ago

Turns out that whole idea of women being the primary bearers of hundreds of years of exploited reproductive labor might have had some weight to it, huh.

All that labor being redirected into "l'économie" means that, at base, you'll have fewer children.

[-] WalnutLum@lemmy.ml 11 points 6 days ago

This is interesting but I'll reserve judgement until I see comparable performance past 8 billion params.

Sub-4-billion-parameter models all seem to have roughly the same performance regardless of quantization nowadays, so it's hard to see much potential at 3 billion.

[-] WalnutLum@lemmy.ml 3 points 6 days ago

Those cost efficiencies are also at the expense of the Chinese government. The massive investment is all part of their green revolution policy package.

It's why solar cells are also incredibly cheap to produce in China, and why they're mostly sold there.

[-] WalnutLum@lemmy.ml 19 points 6 days ago

I seriously doubt the viability of this, but I'm looking forward to being proven wrong.

[-] WalnutLum@lemmy.ml 11 points 6 days ago

The OSI just published the results of some of the discussions around their upcoming Open Source AI Definition. It's worth reading to see some of the issues they're trying to work around...

https://opensource.org/blog/explaining-the-concept-of-data-information

[-] WalnutLum@lemmy.ml 60 points 2 weeks ago

There are VERY FEW fully open LLMs. Most are the equivalent of source-available in licensing, and at best they're only partially open source because all they provide is the pretrained model.

To be fully open source they need to publish both the model and the training data. The point is to be "fully reproducible", which is what makes the model trustworthy.

In that vein there's at least one project that's turning out great so far:

https://www.llm360.ai/
