Computing has traditionally been all about math and logic. This is really all that a binary logic computer is capable of. When applied to this purpose, it can offer highly accurate results at very low cost.
Current AI is an attempt to branch out from mere calculation into decision-making. But it does so in the worst possible way: using probability and statistics (i.e., guesswork) instead of logic and reasoning. In other words, AI offers questionable results at high cost.
As this article shows, relying on guesswork is a legal liability issue waiting to happen in many (if not most) operating environments.
Heh, I wasn't suggesting that AI would actually replace decision-making. Rather, I wonder whether attempts to use AI in this way would result in such publicly embarrassing and catastrophic outcomes that software engineers might decide to organize professional guardrails around it.
I fully agree; this seems like a legal liability issue waiting to happen.
I wonder if AI / shadow IT will change that.