
Basically because of very precise hardware timings. One tick/cycle of the CPU must correspond to exactly X ticks of the GPU since the code runs very close to the hardware and depends on specific behaviors (e.g. a memory store to this bank takes 4 cycles while this other bank takes 6 cycles).

QEMU takes a higher-level approach that basically guarantees that, over a period of time, one CPU tick corresponds to X GPU ticks on average, and it does expose a fair amount of the host's memory performance characteristics, such as TLB/L1/L2 cache misses. PS2 games weren't built to account for pipeline flushes or memory stalls, for example.
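To make the contrast concrete, here's a minimal sketch of the two scheduling styles. All names (Device, run_lockstep, run_averaged, the 2:1 clock ratio, the quantum size) are illustrative assumptions, not taken from any real emulator:

```python
CPU_RATIO = 2  # assume 2 CPU cycles per GPU cycle (illustrative ratio)

class Device:
    """Toy device that just counts elapsed cycles."""
    def __init__(self):
        self.cycles = 0
    def step(self, n):
        self.cycles += n

def run_lockstep(cpu, gpu, cycles):
    """Cycle-accurate style: the GPU advances in fixed step with every
    CPU cycle, so a store that 'takes 4 cycles' lands at exactly the
    right GPU time."""
    for _ in range(cycles):
        cpu.step(1)
        if cpu.cycles % CPU_RATIO == 0:
            gpu.step(1)

def run_averaged(cpu, gpu, cycles, quantum=1024):
    """QEMU-style: run the CPU for a large quantum, then let the GPU
    catch up. Correct on average over time, but the fine-grained
    ordering inside a quantum is lost."""
    done = 0
    while done < cycles:
        n = min(quantum, cycles - done)
        cpu.step(n)
        gpu.step(n // CPU_RATIO)
        done += n
```

Both end at the same average ratio, but only the lockstep version preserves the per-cycle interleaving that timing-sensitive PS2 code depends on.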



Yep, the PS2 was a beast of interconnected timings, data transfers and cache oddities.

I still find it so funny that they figured the best way to feed the GPU was a 2560-bit-wide data bus on 4 MB of RAM. So you could go fill-rate crazy, but on a very limited data set.
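A quick back-of-envelope calculation shows why that bus is fill-rate crazy. The 150 MHz clock for the PS2's graphics chip is an assumption here (the commonly cited figure), not something stated in the comment:

```python
# Bandwidth implied by a 2560-bit bus on the embedded 4 MB of video RAM.
# clock_hz is an assumed, commonly cited ~150 MHz figure.
bus_bits = 2560
clock_hz = 150_000_000

bytes_per_cycle = bus_bits // 8             # 320 bytes moved per cycle
bandwidth = bytes_per_cycle * clock_hz      # total bytes per second
print(bandwidth / 1e9)                      # ~48 GB/s into just 4 MB
```

Tens of GB/s of bandwidth over a 4 MB working set: huge fill rate, tiny data set, exactly the trade-off described above.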





