I work for a company that makes robot arms for assistive purposes (Kinova), so I have some insight to share.
First, a voice setup with Alexa or similar can really help.
Regarding phone use, some of our users have an attachment that holds the phone close to their head so they can "click/select" with their nose (they can move their head).
Eye-tracking technology is really impressive these days (it can be as fast as using a mouse). I recently demoed a system with a Tobii sensor (https://www.tobii.com/) hooked up to a laptop; combined with appropriate software it was very impressive (it handles scrolling, keyboard shortcuts, etc. in a custom interface). I'm not sure how well these systems integrate with phones/tablets, though.
Ping me on LinkedIn if you'd like to talk more.