
I miss the days of earlier AI image-recognition software that would emit a confidence percentage.

New LLM-related AIs are all supremely confident in every assertion, no matter how wrong.
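For anyone who hasn't seen it: the "confidence percentage" in those older classifiers typically came from running the model's raw output scores (logits) through a softmax. A minimal sketch, with made-up logit values:

```python
import numpy as np

def softmax_confidence(logits):
    """Turn raw class scores into probabilities that sum to 1."""
    exp = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return exp / exp.sum()

# Hypothetical logits for three classes; values invented for illustration.
probs = softmax_confidence(np.array([2.0, 1.0, 0.1]))
top = int(np.argmax(probs))
print(f"class {top} with {probs[top]:.0%} confidence")  # prints "class 0 with 66% confidence"
```

Worth noting that softmax outputs are often poorly calibrated, so even that percentage was never a true probability of being right.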




I don’t know what tool they used, but it was very likely not an LLM. They probably have some database of drivers’ licenses and they ran a similarity search against the surveillance footage. This poor lady happened to be the top match.

Even if it also output a score, what that score means depends on how the model was trained. And the cops might ignore it anyway.
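The failure mode with that kind of similarity search is that it always returns a top match, no matter how weak. A rough sketch of the idea (names, embeddings, and the cosine-similarity metric are all my assumptions, not anything we know about the actual tool):

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_match(probe, database):
    """Return (name, score) of the closest database entry -- even if the score is poor."""
    scored = [(name, cosine_similarity(probe, emb)) for name, emb in database.items()]
    return max(scored, key=lambda pair: pair[1])

# Invented face embeddings standing in for a drivers'-license database.
db = {
    "license_A": np.array([0.9, 0.1, 0.0]),
    "license_B": np.array([0.1, 0.8, 0.2]),
}
probe = np.array([0.2, 0.7, 0.3])  # embedding of a blurry surveillance frame
name, score = top_match(probe, db)
# Someone is always the "best" match, regardless of whether they match at all.
```

Without a threshold on that score, the top hit looks the same whether it's a genuine match or just the least-bad candidate in the database.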



