I may be missing something, but how would giving programmers more powerful hardware prevent incremental changes? Even if I'm using a top of the line rig, it's unlikely that anyone actually using the software I write will have that same level of technology available. I can only see this as a benefit in specific cases where the limits of modern hardware are being challenged, which are far fewer today than they once were.
The point is that you can write something that takes advantage of hardware that will be consumer grade in x years, rather than hardware that is consumer grade now. I thought parent stated that fairly clearly.
Ah, that makes sense. However, since most applications outside of video editing, gaming, etc. are not resource intensive, why shouldn't programmers just use consumer-grade hardware? An email client running on 2014's hardware and one on 2019's hardware are probably not going to be substantially different.
The "email" clients of the future are going to be dealing with multi-terabyte caches[1] with lots of high-bitrate audio and 5K+ video and images. A robust and competitive email client should be able to do real-time summarization, translation, text-to-speech, speech-to-text, and index into media files (I should be able to query for a phrase and get the relevant portion of a video or audio clip in the search results).
We have all of the algorithms now, so the hardest part is probably developing a good UI. So you need a super-fast computer so that:
(1) You can power the UI.
(2) You can rapidly iterate in response to user testing.
[1] By "cache" I mean stuff that is stored locally rather than in the cloud; I'm not talking about CPU caches which will probably stay about the same.
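To make the "index into media files" idea above concrete, here is a minimal sketch of phrase search over per-segment transcripts. Everything here is hypothetical for illustration: the clip names, the `Segment` type, and the toy `index` are made up, and the transcripts are assumed to come from an earlier speech-to-text pass over the local media cache.

```python
# Hypothetical sketch: searching a local media cache by phrase using
# per-segment transcripts (all names and data invented for illustration).

from dataclasses import dataclass

@dataclass
class Segment:
    start: float  # seconds into the clip
    end: float
    text: str     # speech-to-text output for this time span

# Toy "index": clip name -> transcript segments, assumed to be produced
# offline by a speech-to-text pass over locally cached media.
index = {
    "standup-2019-03-04.mp4": [
        Segment(0.0, 4.2, "good morning everyone"),
        Segment(4.2, 9.8, "the release is blocked on the cache rewrite"),
    ],
    "interview.wav": [
        Segment(0.0, 5.0, "tell me about the cache rewrite project"),
    ],
}

def search(phrase):
    """Return (clip, start, end) for every segment containing the phrase."""
    phrase = phrase.lower()
    return [
        (clip, seg.start, seg.end)
        for clip, segments in index.items()
        for seg in segments
        if phrase in seg.text.lower()
    ]

hits = search("cache rewrite")
# Each hit points at the relevant portion of a clip, not just the file.
```

The point of the sketch is the return shape: a query yields time ranges inside media files, not whole files, which is what lets a search result jump to the relevant portion of a video or audio clip.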
Can you point to active research in the area of interactive applications with local or multi-tier caching? Everything in the news is about "cloud". The closest I've seen to cache-oriented apps were the cloudlets at CMU, http://elijah.cs.cmu.edu.
> We have all of the algorithms now
Aren't some of those algorithms patented, e.g. speech recognition from SRI, Nuance, Apple/MS/Google, IBM, AT&T, and increasingly implemented in centralized cloud services rather than at the edge? How about lack of access to training data?
I think the point here is that we don't see this kind of research because nobody invests in the super-fast, expensive workstations that could mimic future hardware.
I don't think speech recognition algorithms are patented. At least not the Google ones, since AFAIK they use neural networks. You could train your model centrally and then ship all the neurons, weights and biases off to the individual device, keeping the training data secret.
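A rough sketch of that "train centrally, ship the weights" split, using a deliberately tiny one-neuron model so it fits in a comment. The model, the AND-gate training data, and the JSON serialization are all invented for illustration, not anyone's actual pipeline; the point is only that the client receives parameters, never the training data.

```python
# Hypothetical sketch: train server-side on secret data, ship only the
# learned parameters; the client does inference with the shipped weights.

import json
import math

def predict(params, x):
    """Sigmoid output of a single neuron with the given weights and bias."""
    z = sum(w * xi for w, xi in zip(params["weights"], x)) + params["bias"]
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=1000, lr=0.5):
    """Plain stochastic gradient descent on log loss for one neuron."""
    params = {"weights": [0.0, 0.0], "bias": 0.0}
    for _ in range(epochs):
        for x, y in data:
            err = predict(params, x) - y
            params["weights"] = [w - lr * err * xi
                                 for w, xi in zip(params["weights"], x)]
            params["bias"] -= lr * err
    return params

# --- server side: the training data never leaves this machine ---
secret_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # AND gate
shipped = json.dumps(train(secret_data))  # only this blob is distributed

# --- client side: inference only, no access to secret_data ---
model = json.loads(shipped)
print(round(predict(model, [1, 1])))  # high output -> rounds to 1
print(round(predict(model, [0, 1])))  # low output -> rounds to 0
```

Real systems ship far larger parameter blobs in binary formats rather than JSON, but the separation is the same: the weights are the product, the data stays home.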
it's a 'strike price' R&D programme. work on expensive stuff _now_ on the assumption that it will be cheap/mainstream in x years. if you waited for it to be cheap enough before starting, you'd be outgunned by the crew that started 6 months before you did. that's iterative of course; how far back you go is the decision that counts. someone once said "10x faster isn't just faster - it's different". i like that metaphor. in CS you might throw away processor cycles on a GUI at a time when everyone else is optimising the metal for a text interface to eke out a millisecond or two...
Developing bleeding-edge software takes time. So you start writing your SW on a top-end machine; in two years that hardware becomes mainstream, which might coincide with the time of your first release. Moore's law worked perfectly for you here.