This would come at the cost of longer compression times: either multiple attempts with random shuffling, or a pre-compression file-ordering optimization pass. For resources that are compressed once and then distributed and decompressed many times, this would be quite an interesting solution.
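The "multiple attempts with random shuffling" idea can be sketched in a few lines of shell: archive the same files in several random orders, compress each, and keep the smallest result. This is only a toy illustration (it assumes GNU tar, shuf, and gzip are available; the directory and file names are made up):

```shell
# Build a toy directory with a few files to reorder
mkdir -p files
for i in 1 2 3; do seq "$i" 2000 > "files/f$i.txt"; done

best=""
for attempt in 1 2 3 4 5; do
  # Archive the files in a random order, then compress with deflate (gzip)
  ls files | shuf | tar -C files -cf - -T - | gzip -9 > try.tar.gz
  size=$(wc -c < try.tar.gz)
  # Keep the smallest result seen so far
  if [ -z "$best" ] || [ "$size" -lt "$best" ]; then
    best=$size
    cp try.tar.gz best.tar.gz
  fi
done
echo "best size: $best bytes"
```

On tiny inputs the ordering barely matters, but on archives with many similar files, placing related files next to each other keeps matches inside the compressor's window, which is where the gains would come from.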
zstd is faster and smaller. If you can choose the format, zstd beats deflate across the board, on every front except for compatibility with things that only understand deflate.
Also, if you need to use deflate for compatibility, use https://github.com/zlib-ng/zlib-ng , which is substantially faster than either zlib or gzip.
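The "smaller" half of that claim is easy to check on your own data. A minimal sketch, assuming the `gzip` and `zstd` command-line tools are installed:

```shell
# Create a compressible sample (repetitive text)
seq 1 200000 > sample.txt

# deflate at its maximum level, keeping the input file
gzip -k -9 sample.txt

# zstd at a high level, keeping the input file
zstd -q -k -19 sample.txt

# Compare the three sizes
ls -l sample.txt sample.txt.gz sample.txt.zst
```

Synthetic text like this flatters any compressor, so the real test is running the same comparison on the files you actually ship.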
I've never heard of zstd (thanks!), but it seems it's right under my nose.
"Arch Linux added support for zstd as a package compression method in October 2019 with the release of the pacman 5.2 package manager, and in January 2020 switched from xz to zstd for the packages in the official repository. Arch uses zstd -c -T0 --ultra -20 -, the size of all compressed packages combined increased by 0.8% (compared to xz), the decompression speed is 1300% faster, decompression memory increased by 50 MiB when using multiple threads, compression memory increases but scales with the number of threads used."
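For anyone decoding that invocation, the flags break down as follows. The `pkgdir` directory and output name here are placeholders, not what pacman actually uses:

```shell
# -c      write compressed output to stdout
# -T0     use one worker thread per core
# --ultra unlock compression levels above 19
# -20     compression level 20
# -       read input from stdin
mkdir -p pkgdir && echo "example file" > pkgdir/readme
tar -cf - pkgdir | zstd -c -T0 --ultra -20 - > pkg.tar.zst

# Verify the archive decompresses cleanly
zstd -t pkg.tar.zst
```

Note that `-T0` only parallelizes compression; the 1300% decompression speedup in the quote comes from zstd's single-threaded decoder simply being much faster than xz's.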