Recompress the ReadableStream as it arrives to work out roughly how many compressed bytes you've consumed (fetch hands you decompressed bytes, while Content-Length is the compressed size), and use the ratio of your recompressed length to the Content-Length to get an approximate progress percentage.
Many large files that get downloaded are already compressed, so applying HTTP compression to the response is pointless and just adds overhead.
If you need progress on a text file, then don't compress it while downloading. Small text files won't really need progress reporting or compression anyway.
If you're sending a large amount of data that can be compressed, then zip it before sending it, download-with-progress (without http compression), and then unzip the file in the browser and do what you need with the contents.
I'm sure there are probably other ways to handle it too.
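The first suggestion can be sketched with the built-in CompressionStream API (available in modern browsers and Node 18+). This is a rough illustration, not code from the thread: the function names are mine, and it naively recompresses everything received so far on each chunk.

```javascript
// Sketch: approximate download progress for a gzip-encoded response by
// recompressing the decompressed bytes received so far and comparing that
// count to Content-Length (which is the *compressed* size on the wire).
// Illustrative names; assumes CompressionStream (modern browsers, Node 18+).

// How many bytes would these chunks occupy after gzip compression?
async function compressedSize(chunks) {
  const gz = new Blob(chunks).stream().pipeThrough(new CompressionStream("gzip"));
  let size = 0;
  for await (const chunk of gz) size += chunk.length;
  return size;
}

// Hypothetical consumer of a fetch() response.
async function downloadWithProgress(response, onProgress) {
  const contentLength = Number(response.headers.get("Content-Length"));
  const reader = response.body.getReader(); // yields DECOMPRESSED bytes
  const chunks = [];
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    chunks.push(value);
    // Recompressing from scratch each time is quadratic work; a production
    // version would feed one long-lived CompressionStream incrementally.
    onProgress(Math.min((await compressedSize(chunks)) / contentLength, 1));
  }
  return new Blob(chunks);
}
```

Note the estimate is only approximate: the browser's gzip level won't exactly match whatever the server used.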
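The pre-compress-then-download flow can be sketched with a counting TransformStream plus DecompressionStream. Assumptions worth flagging: the server stores the file pre-gzipped and serves it with no Content-Encoding, so Content-Length equals the bytes on the wire; and I've used gzip rather than an actual .zip archive, since gzip is what the built-in DecompressionStream handles (a real .zip would need a library). The names are illustrative.

```javascript
// Sketch: serve a pre-gzipped file WITHOUT HTTP compression, so Content-Length
// is the true wire size, report exact progress while downloading, then
// decompress client-side. Uses gzip, not a .zip archive, because that is what
// DecompressionStream supports. Names are illustrative, not from the thread.

// Count wire bytes from a Response whose Content-Length matches the payload.
async function unzipWithProgress(response, onProgress) {
  const total = Number(response.headers.get("Content-Length"));
  let received = 0;

  // Tap the byte stream before decompression to measure real progress.
  const counted = response.body.pipeThrough(new TransformStream({
    transform(chunk, controller) {
      received += chunk.length;
      onProgress(received / total); // exact, since nothing re-encodes the bytes
      controller.enqueue(chunk);
    },
  }));

  // Decompress in the browser and hand back the original payload.
  const plain = counted.pipeThrough(new DecompressionStream("gzip"));
  return new Response(plain).arrayBuffer();
}

// Hypothetical entry point; the URL would be your pre-gzipped asset.
const downloadAndUnzip = (url, onProgress) =>
  fetch(url).then((r) => unzipWithProgress(r, onProgress));
```

Unlike the recompression trick, the progress number here is exact, because every byte that arrives is counted before anything transforms it.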
You won't be able to do this if you're downloading from a CDN, which is exactly where you would host large files, for which progress reporting really matters.
Right. For example, S3 supports custom headers, as long as the header happens to start with "x-amz-meta-...", and now your progress reporting is tied to your CDN choice!
Not sure about you, but to me "XMLHttpRequest" in my request-handling code feels less dirty than "x-amz-meta-". But to each their own, I guess.