
> As but one example: media encoding is pretty close to being "embarrassingly parallel" in principle

Which part? 90% of what you're doing is context or inter-frame dependent. Video encoders that live on graphics cards today use dedicated ASIC hardware.



You can divide the video into chunks and encode the chunks in parallel. This is what Netflix does:

https://medium.com/netflix-techblog/high-quality-video-encod...

https://medium.com/netflix-techblog/dynamic-optimizer-a-perc...

This works well when you're encoding video at Netflix's scale, but it's not necessarily much help to an individual user who just wants to encode a single video.
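Rough sketch of what that chunked approach looks like in practice (Python driving ffmpeg; assumes ffmpeg/ffprobe are on PATH, a hypothetical input.mp4, and naive fixed 10-second split points, whereas the Netflix pipeline splits on shot boundaries):

    # Chunk-parallel encode: each chunk is an independent ffmpeg job,
    # and the pieces are stitched back together with the concat demuxer.
    import concurrent.futures
    import math
    import subprocess

    SRC = "input.mp4"      # hypothetical source file
    CHUNK_SECONDS = 10     # naive fixed-length chunks

    def duration(path):
        out = subprocess.check_output([
            "ffprobe", "-v", "error", "-show_entries", "format=duration",
            "-of", "default=noprint_wrappers=1:nokey=1", path])
        return float(out)

    def encode_chunk(i, start):
        dst = f"chunk_{i:04d}.mp4"
        subprocess.check_call([
            "ffmpeg", "-y", "-ss", str(start), "-t", str(CHUNK_SECONDS),
            "-i", SRC, "-c:v", "libx264", "-crf", "23",
            "-an",  # drop audio to keep the sketch simple
            dst])
        return dst

    total = duration(SRC)
    starts = [i * CHUNK_SECONDS for i in range(math.ceil(total / CHUNK_SECONDS))]

    # Chunks encode concurrently, but each worker is still a serial x264 encode inside.
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        chunks = list(pool.map(encode_chunk, range(len(starts)), starts))

    with open("chunks.txt", "w") as f:
        for c in chunks:
            f.write(f"file '{c}'\n")
    subprocess.check_call(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                           "-i", "chunks.txt", "-c", "copy", "output.mp4"])

The parallelism here is across chunks only; inside a chunk the encoder still works frame by frame.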


> You can divide the video into chunks and encode the chunks in parallel

You can do this with zlib too (zlib divides a file up into 64 KB chunks). That doesn't mean zlib is well-suited for GPUs, nor that each chunk is "embarrassingly parallel". Neither Netflix post talks about using the GPU at all.
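To make the distinction concrete, here's a toy sketch (Python, pigz-style; the file name and chunk size are just placeholders): the chunks compress in parallel across CPU cores, but each chunk is still a strictly sequential DEFLATE stream, which is why none of this maps onto a GPU.

    # Chunk-level parallelism for zlib-style compression on CPU cores.
    # Each chunk becomes an independent zlib stream (a toy container, not a valid .gz file).
    import zlib
    from concurrent.futures import ProcessPoolExecutor

    CHUNK_SIZE = 64 * 1024  # 64 KB chunks

    def compress_chunk(chunk):
        # Within a chunk, DEFLATE is sequential: every back-reference points at
        # bytes already processed, so there is no fine-grained data parallelism.
        return zlib.compress(chunk, 6)

    def compress_file(path):
        with open(path, "rb") as f:
            data = f.read()
        chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
        with ProcessPoolExecutor() as pool:
            return list(pool.map(compress_chunk, chunks))

    if __name__ == "__main__":
        parts = compress_file("input.bin")  # hypothetical input file
        print(sum(len(p) for p in parts), "compressed bytes")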


> You can divide the video into chunks and encode the chunks in parallel.

What about live encoding?




