They are certainly different domains, but can you justify the claim "Self driving has a well defined, static problem space"?
One of the things that makes safety-critical applications like self-driving so hard is that they have such an abundance of low-probability, high-severity cases that it is very difficult to define and test them all.
I think it's static inasmuch as it isn't an inherently adversarial problem. The world isn't bent on thwarting self-driving cars. Bot authors are bent on subverting detection.
What I mean to convey by "well defined" is that even though the problem space of self-driving cars is enormous, the success criteria for teaching a car to drive itself are probably going to look much the same in 10 years as they do now.
The bot problem, on the other hand, changes constantly: what is the definition of an abusive bot? What is the definition of spam? The adversaries on Twitter adapt not only their tactics but also their goals.
Self-driving has a well defined, static problem space, with inputs that don't change often (how often does a new street sign come out?).
Twitter is combating distributed adversaries who are constantly adjusting their approach to evade detection.