
Compute MT is using both CPU and GPU together. To compete, any other CPU needs to use a dedicated GPU.

Also, at idle, the DRAM and SSD are powered off. Samsung 980 uses 4.5 watts of power when active. I'd guess a similar power consumption applies here.

LPDDR4x ranges from 0.6 V to 1.8 V. This should give 2-3 W per 8 GB chip at full power. Anandtech's article agrees, with ST memory adding 4.2 W of power over their other ST benchmarks.

Now, 26.8 - 4.5 - 4.2 = 18.1 W at PEAK.

For just CPU loads, we get 22.3 - 4.5 - 4.2 = 13.6 W.
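The subtraction above can be sketched out explicitly. Treat the 4.5 W SSD figure (a spec-sheet active-power number) and the 4.2 W DRAM figure (the ST memory delta Anandtech reports) as rough assumptions, not direct measurements:

```python
# Back-of-envelope estimate of SoC power, assuming the SSD and DRAM
# figures quoted above actually apply during the benchmark runs.
SSD_W = 4.5    # Samsung 980 active draw (spec sheet, assumed)
DRAM_W = 4.2   # extra ST memory power per Anandtech (assumed)

peak_wall_w = 26.8   # measured peak (CPU + GPU workload)
cpu_wall_w = 22.3    # measured CPU-only workload

cpu_gpu_estimate = peak_wall_w - SSD_W - DRAM_W  # ~18.1 W
cpu_estimate = cpu_wall_w - SSD_W - DRAM_W       # ~13.6 W

print(f"CPU+GPU: {cpu_gpu_estimate:.1f} W, CPU only: {cpu_estimate:.1f} W")
```

Of course, the whole estimate stands or falls with whether the SSD and DRAM are really drawing that much during the benchmark, which is exactly what the reply below disputes.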



I don’t think your logic is based on sound assumptions:

You can’t completely turn off DRAM at idle if it’s to retain data: it has to self-refresh continuously. Some power savings are available by running it in a power-saving mode, but you can’t get around the leakage of the DRAM cells, so the extra power used to drive the DRAM hard is not the full DRAM power usage, but a fraction of it. (LPDDR5 is nifty in that you can tell it to power down parts of the chip that aren’t currently being used.)

The SSD isn’t being hit by either workload: why would a CPU benchmark hit disk? The SSD is going to remain idle & be part of the 4W that Anandtech assigns to the system idle power drain.

Where does it say that Compute is CPU + GPU? (I’m happy to believe it, but I don’t see it in the article & they already break out the GPU power usage separately using a different benchmark.)

(It would really help if Anandtech told us what the benchmarks were - some CPU intensive benchmarks don’t hit memory at all, they just hammer a few cache lines. It’s entirely possible for a pure computational benchmark to leave most of the LPDDR5 in deep sleep!)

If by MT Compute they mean specifically the Geekbench 5 compute benchmark then that is a Metal / GPU benchmark, so I will concede this point in that case :)

Even conceding that the Compute benchmark is really a GPU benchmark, comparing a CPU benchmark load against the TDP of another processor still isn’t entirely correct, since TDP is meant to be a power ceiling (although Intel has rather breached this in recent years; AMD TDPs are more honest, I’m told). If we accept the 22.3 W of the average workload benchmark, then the TDP for the chip is still going to be more than that, and that’s the appropriate point of comparison.


LPDDR4x has various power levels depending on whether it's actively being used or just keeping data from corrupting. The latter is much less power intensive and runs at much lower voltages. You may have a point about the SSD, depending on how much read/write is happening.

Check out this comparison of the passively-cooled M1 Macbook Air vs both Intel and AMD Surface 4 machines.

https://youtu.be/7yTWGjYFiC0?t=509

TL;DR -- AMD loses 25-50% of their performance as soon as the plug is removed. Their actual power usage spikes to 40 W (for a 10-25 W U-series CPU). No matter how you cut it, peak M1 power usage is about half for similar performance.

Adding insult to injury, the M1 not only has far better performance, it's also cheaper.


Oh, the M1 is a great chip, no question!

But I think it’s maybe twice (three times tops?) as good as the competition on a power/perf basis, not the 10x some people keep claiming when they compare the 10W claimed by Apple against the 100+W TDP of similarly performing AMD/Intel CPUs.
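To make the 2-3x-vs-10x point concrete, here's a rough sketch using the measured figures from this thread rather than marketing numbers. The 40 W AMD spike is the figure from the video above; the 105 W desktop TDP is a hypothetical stand-in for the "100+W" parts people cite, not a specific chip:

```python
# Hedged perf/watt comparison, assuming roughly comparable benchmark
# scores so the ratio reduces to a power ratio. All inputs approximate.
m1_measured_w = 22.3     # Anandtech's measured CPU-workload power
amd_spike_w = 40.0       # observed U-series spike (from the video above)

m1_claimed_w = 10.0      # Apple's marketing figure
desktop_tdp_w = 105.0    # hypothetical "100+W" desktop TDP

honest_ratio = amd_spike_w / m1_measured_w  # measured vs measured, ~1.8x
naive_ratio = desktop_tdp_w / m1_claimed_w  # claim vs ceiling, ~10x

print(f"measured-vs-measured: {honest_ratio:.1f}x, claim-vs-TDP: {naive_ratio:.1f}x")
```

The gap between the two ratios is the whole argument: comparing a claimed figure against a power ceiling inflates the advantage several-fold.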

Incidentally, assuming the benchmark they’re running is Geekbench (seems likely at this point), I’ve just run a pass of both CPU & Compute GeekBench5 benchmarks with iotop running in the background & neither did any disk IO of any significance during the run. I think the peak throughput I saw was 150kb/s and that might have been something else on the desktop (I didn’t bother killing anything). Most of the time there was no IO at all.

So, if it was GeekBench they were running (seems plausible) then I think we can assume that the SSD is idle & that the Compute benchmark is really a GPU benchmark.



