this post was submitted on 18 Oct 2024
782 points (98.4% liked)
Technology
Are you under the impression that I think Tesla's approach to AI and computer vision is anything but fucking dumb? The person said a stupid and patently incorrect thing. I corrected them. Confidence values being literally baked into how most ML architectures work is unrelated to intentionally depriving your system of one of the most robust computer vision signals we can come up with right now.
Yes, but confidence values are not magic. These values are calculated based on how similar the current input is to previously observed inputs. If the type of input is unfamiliar to the model, what do you think happens? Usually there will be a category with a high enough confidence score that it gets chosen as the correct one, while being wrong. Now assume you somehow manage to not get a favorable confidence score for any decision. What happens in that case? I've never encountered this, but there are only three possible paths: 1) pick a category at random. Not good. 2) Do nothing. Not good. 3) Rerun the model on slightly newer data? That might help, but when you're driving a car, slightly newer data might be too late.
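A minimal sketch of the behavior described above, using made-up logits for a hypothetical 4-class detector: even on an unfamiliar input, one logit can dominate the others, so the top softmax score still looks confidently "certain" while being wrong.

```python
import numpy as np

def softmax(logits):
    """Convert raw model outputs (logits) into scores that sum to 1."""
    z = logits - np.max(logits)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits for classes ("car", "truck", "person", "sign").
# These numbers are invented for illustration, not from any real model.
ood_logits = np.array([2.1, 5.8, 1.9, 0.4])
probs = softmax(ood_logits)

print(probs.argmax())  # 1  -> "truck"
print(probs.max())     # ~0.95, a high "confidence" on an unfamiliar input
```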
There's plenty you could do if no label was produced with a sufficiently high confidence. These are continuous systems, so the idea of "rerunning" the model isn't that crazy, and you could pair that with an automatic decrease in speed to generate more frames, stop the whole vehicle (safely, of course), divert the path, and I'm sure an actual domain and subject matter expert--or a whole team of them--could come up with plenty more.

But while we're on the topic, it's not really right to even label these values as confidences: they're just output weights associated with the respective labels. We've sort of decided they vaguely match up to something kind of sort of approximate to confidence, but they aren't based on a ground truth the way I'm understanding your comment to imply--they derive entirely from the trained model weights and their confluence. Don't really have anywhere to go with that thought beyond the observation itself.
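A rough sketch of the kind of fallback policy described above. The thresholds, action names, and function are hypothetical placeholders; it only illustrates treating the top output weight as a routing signal (proceed, slow down and resample, or stop safely), not as a calibrated probability of being correct.

```python
from enum import Enum, auto

class Action(Enum):
    PROCEED = auto()
    SLOW_AND_RESAMPLE = auto()   # decrease speed to buy time for more frames
    SAFE_STOP = auto()           # bring the vehicle to a controlled stop

# Hypothetical thresholds; real values would come from validation, not guesswork.
ACCEPT_THRESHOLD = 0.90
RETRY_THRESHOLD = 0.60

def choose_action(max_score: float, retries_left: int) -> Action:
    """Map the top per-frame output weight to a fallback behavior."""
    if max_score >= ACCEPT_THRESHOLD:
        return Action.PROCEED
    if max_score >= RETRY_THRESHOLD and retries_left > 0:
        return Action.SLOW_AND_RESAMPLE
    return Action.SAFE_STOP

print(choose_action(0.95, retries_left=2))  # Action.PROCEED
print(choose_action(0.70, retries_left=2))  # Action.SLOW_AND_RESAMPLE
print(choose_action(0.30, retries_left=0))  # Action.SAFE_STOP
```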