Trainguyrom@reddthat.com | 14 points | 2 days ago

Short answer: they already are

Slightly longer answer: GPT models like ChatGPT grew out of an experiment in "if we train a model on shedloads of data, does it become a more powerful model?" After OpenAI made such big waves, every company started copying them, training models similar to ChatGPT rather than trying to innovate and do something different.

Even longer answer: there are tons of different AI models out there for doing tons of different things. Just look at the over one million models on Hugging Face (a company that, among other services, hosts a repository of AI models) and at all the different model types you can filter for in the left-hand sidebar.
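To make that concrete, here's a rough sketch using the huggingface_hub Python library that does the same thing as those sidebar filters; the task tags I picked are my own assumptions, not anything from the site:

```python
# Rough sketch: browse the Hugging Face hub programmatically, mirroring the
# task filters on the website's left-hand sidebar.
# Assumes `pip install huggingface_hub`; the task tags below are assumptions.
from huggingface_hub import list_models

for task in ["text-to-image", "summarization"]:
    print(f"--- most-downloaded {task} models ---")
    for model in list_models(filter=task, sort="downloads", direction=-1, limit=5):
        print(" ", model.id)
```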

Training an image generation model on research papers would probably make it a lot worse at generating pictures of cats, but training a model that you want to generate or process research papers on a corpus of existing papers would probably produce a very high-quality model for either goal.
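As a purely illustrative sketch of what that kind of domain-specific training looks like (the base model, the "papers.txt" file, and every hyperparameter here are placeholders, not a recipe anyone actually used), it's basically just continuing to train a language model on your own corpus:

```python
# Illustrative sketch of domain-specific training with the transformers library:
# keep training a small causal LM on a pile of research-paper text.
# "papers.txt", the base model, and all hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "gpt2"  # stand-in for whatever small base model you pick
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)

papers = load_dataset("text", data_files={"train": "papers.txt"})
tokenized = papers.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="paper-lm", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```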

More to your point, there are some neat, very targeted models with smaller training sets out there, like Microsoft's Phi-3, which is trained primarily on textbook-quality data.
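If you want to poke at one yourself, here's a minimal sketch assuming the microsoft/Phi-3-mini-4k-instruct checkpoint on Hugging Face and the transformers library; the generation settings are guesses, so check the model card:

```python
# Minimal sketch: run a small, targeted model locally with transformers.
# Model ID and generation settings are assumptions; see the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain photosynthesis in two sentences."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```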

As for saving the world, I'm curious what exactly you mean by that. These generative text models are great at generating text similar to their training data, and summarization models are great at summarizing text, but ultimately AI isn't going to save the world. Once the current hype cycle dies down, AI will be a better-known and more widely used technology, but ultimately it's just a tool in the toolbox.
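For instance, a summarization model really is just a one-trick tool you point at text. A quick sketch with the transformers pipeline (whichever default checkpoint it downloads is an assumption; pass model=... to pin a specific one):

```python
# Sketch: a general-purpose summarization pipeline doing the one thing it's for.
# The default checkpoint it downloads is an assumption; pass model=... to pin one.
from transformers import pipeline

summarizer = pipeline("summarization")
text = (
    "Generative text models are great at generating text similar to their "
    "training data, and summarization models are great at summarizing text. "
    "Once the current hype cycle dies down, AI will be a better known and "
    "more widely used technology, but ultimately it's just a tool in the toolbox."
)
print(summarizer(text, max_length=40, min_length=10)[0]["summary_text"])
```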

Umbrias@beehaw.org | 2 points | 2 days ago

Also, the answer to that question (shitloads of data for a better AI) is yes… but with logarithmic returns: returns that are massively underpriced relative to what they cost to generate, and whose value proposition is questionable at best.
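To put a picture on "logarithmic returns," here's a toy curve with completely made-up constants showing the usual power-law scaling story, where each extra 10x of data buys a smaller absolute improvement than the last 10x did:

```python
# Toy illustration only: loss falling roughly as a power law in dataset size,
# so each extra 10x of training data buys a smaller absolute improvement.
# The constants here are invented purely to show the shape of the curve.
for n_tokens in (1e9, 1e10, 1e11, 1e12):
    loss = 2.0 + 10.0 * n_tokens ** -0.095  # hypothetical fit, not a real scaling law
    print(f"{n_tokens:.0e} tokens -> loss ~ {loss:.3f}")
```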

intensely_human@lemm.ee | 1 point | 2 days ago

How are the “returns” measured numerically here?

greyw0lv@lemmy.ml | 1 point | 2 days ago

Hallucinations per GWh, iirc.