Absolutely. LLM inference is still greenfield territory: things like overlap scheduling and JIT-compiled CUDA kernels are very recent. We're just getting started optimizing for modern LLM architectures, so cost/perf will keep improving fast.
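
To make "JIT CUDA kernels" concrete: the comment doesn't name a stack, but one common approach is Triton, which compiles a Python-decorated kernel to GPU code the first time it's launched. The sketch below is a hypothetical elementwise-add kernel, purely for illustration; it is not from the project being discussed.

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        # Each program instance handles one BLOCK_SIZE-wide slice of the tensors.
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        out = torch.empty_like(x)
        n = out.numel()
        grid = (triton.cdiv(n, 1024),)
        # First call triggers JIT compilation to a GPU kernel; later calls hit a cache.
        add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
        return out

Real inference stacks JIT-specialize fused attention/matmul kernels to the model's exact shapes and dtypes rather than a toy add, but the compile-on-first-launch flow is the same idea.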

