Your scenario seems more likely to me than the usual "omnipotent super-AI kills all humans", because technology failing is much more believable to me than technology being so perfect that it outsmarts all humans, controls everything on earth, and is impossible to defeat because everything works flawlessly.
As long as printers sometimes work and sometimes become invisible, Windows forgets which window is supposed to be on which screen after it's been asleep for a few hours, Linux trackpad drivers fail randomly, and IoT light switches need to be rebooted twice a year, I think we're a long way from a global super-AI that controls everything perfectly.
Humans failing is more likely, for example deploying it as a sufficiently lethal weapon and then losing control of it. That would be a serious immediate blow for sure, but if it led to extinction, it would be a long and excruciating one.
We do have manual fallbacks for everything critical, so unless we do something totally silly and let autonomous machines of sufficient power and numbers wage war on people, we're fine.
The potential for doom comes either from long-term consequences we ignore (see climate change, or propaganda damaging decision making at mass scale), or from extremely bad decisions where it's obvious beforehand that you should not have done it. (See nukes and other high-yield bombs, bioweapons, autonomous warfare.)