AI is to computer science what black magic is to science.
Seriously, what do you get after you've spent days and days training a model? An inscrutable blob that may as well be proprietary software written for an alien CPU; studying it is damn near impossible, understanding how it works would require several lifespans, and yet it works, and we trust these models and use them to get solutions to problems that would normally be impossible for computers to handle using "real" computer science. And one day this trust will bite us in the ass, not in the form of an "AI rebellion" but with every system that uses AI becoming unreliable in situations outside its training data.
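For instance (a minimal sketch with made-up data and a hypothetical model, just to illustrate the failure mode), a classifier will keep emitting confident answers on inputs far outside anything it was trained on:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: two well-separated clusters near the origin.
X_train = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y_train = np.array([0] * 100 + [1] * 100)

model = LogisticRegression().fit(X_train, y_train)

# An out-of-distribution point, nowhere near the training data.
x_ood = np.array([[50.0, -40.0]])
proba = model.predict_proba(x_ood)[0]

# The model still emits a near-certain answer; nothing in its output
# signals "I have never seen anything like this before".
print(f"prediction: {model.predict(x_ood)[0]}, confidence: {proba.max():.3f}")
```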
In my field we use AI a lot for classification tasks that would otherwise need too much manual labour. We train the model for a specific task which will stay the same for a long time, we use statistical models to validate the results, and they are correct. The model will always be used for this task, and we are happy we can use it. We do not actually care how the model did it; we are only interested in the result. Using XAI (explainable AI) we can actually get pretty far in answering the question of how the model works "mathematically", if we wanted to. But of course we cannot infer causality from it.
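A rough sketch of what that pipeline can look like (synthetic data; the dataset, model choice, and XAI technique here are illustrative assumptions, not the commenter's actual stack): train a classifier for a fixed task, validate it statistically on held-out data, then probe it with an explainability method such as permutation importance.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the real classification task.
X, y = make_classification(n_samples=2000, n_features=10,
                           n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Statistical validation on data the model never saw: accuracy plus a
# rough 95% confidence interval (normal approximation).
acc = model.score(X_test, y_test)
ci = 1.96 * np.sqrt(acc * (1 - acc) / len(y_test))
print(f"held-out accuracy: {acc:.3f} +/- {ci:.3f}")

# XAI step: permutation importance shows which features the model relies
# on, i.e. how it works "mathematically" -- but it is correlational only,
# so no causal claims follow from it.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1][:3]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```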
There are so many non-critical tasks that can be solved by AI; I see no reason why we should not use it in science until we find the real math behind the processes. The topics the media covers, like the whole debate around GPT and the huge models big tech is building, have to be discussed critically, but AI is used in so many other non-critical fields that the public is just not aware of.
AI will never be smarter than the collective human race. It is our role to train away its shortcomings so we can evolve ourselves. It's a cycle of training and learning.