This post is fantastic. I wish there was more workstation porn like this for those of us who are not into the RGB light show ripjaw hacksaw aorus elite novelty stuff that gamers are so into. Benchmarks in the community are almost universally focused on gaming performance and FPS.
I want to build an epic rig that will last a long time with professional grade hardware (with ECC memory for instance) and would love to get a lot of the bleeding-edge stuff without compromising on durability. Where do these people hang out online?
Thanks! In case you're interested in building a Threadripper Pro WX-based system like mine, AMD apparently starts selling the CPUs separately from March 2021 onwards:
Previously you could only get this CPU by buying the Lenovo ThinkStation P620. I'm pretty happy with Lenovo ThinkStations, though (I bought a P920 with dual Xeons 2.5 years ago).
My only quibble with that board is that I worry about how easily the chipset fan can be replaced. In my experience that exact type of fan will inevitably fail in a moderately dusty environment... And it doesn't look like anything you could swap in from the common industry-standard 40 mm or 60 mm 12 VDC fans that come in various thicknesses.
Fortunately you can often swap the complete heatsink and fan combo on the chipset with a different one. If the mounting method is strange one can use thermal epoxy or thermal adhesive tape.
Even LinusTechTips has some decent content for server hardware, though they stay fairly superficial. And the forum definitely has people who can help out: https://linustechtips.com/
And the thing is, depending on what metric you judge performance by, enthusiast hardware may very well outperform server hardware. For memory-sensitive workloads, for example, you can get much faster RAM in enthusiast SKUs (https://www.crucial.com/memory/ddr4/BLM2K8G51C19U4B) than you'll find in server hardware. Similarly, the HEDT SKUs out-clock the server SKUs for both Intel and AMD.
I have a Threadripper system that outperforms most of the servers I work with on a daily basis, because most of my workloads, despite being multi-threaded, are sensitive to clock speed.
No one's using "gamer NICs" for high speed networking. Top of the line "gaming" networking is 802.11ax or 10GbE. 2x200Gb/s NICs are available now.
Gaming parts are strictly single socket - software that can take advantage of >64 cores will need server hardware - either one of the giant Ampere ARM CPUs or a 2+ socket system.
If something must run in RAM and needs TB of RAM, well then it's not even a question of faster or slower. The capability only exists on server platforms.
Some workloads will benefit from the performance characteristics of consumer hardware.
Workstations and desktops are distinct market segments. The machine in the article uses a workstation platform, and the workstation processors available in that Lenovo machine clock slower than a mainstream part like the 5950X. The RDIMMs you need to get to 1 TB in that machine run much slower than the UDIMMs I linked above.
I'm with you on this, I just built a (much more modest than the article's) workstation/homelab machine a few months ago, to replace my previous one which was going on 10 years old and showing its age.
There's some folks in /r/homelab who are into this kind of thing, and I used their advice a fair bit in my build. While it is kind of mixed (there's a lot of people who build pi clusters as their homelab), there's still plenty of people who buy decommissioned "enterprise" hardware and make monstrous-for-home-use things.
Look at purchasing used enterprise hardware. You can buy a reliable X9- or X10-generation Supermicro server (rack or tower) for a couple hundred dollars.
I've been planning to do this, but enterprise hardware seems to require a completely different body of knowledge about how to purchase and maintain it, especially as a consumer.
It's not quite as trivial a barrier to entry as consumer desktops, but I suppose that's the point. Still, it would be nice if there were a guide to help me make good decisions starting out.
A downside of buying enterprise for home use is noise: the turbofan coolers are insanely loud, while consumer-grade 120 mm coolers (Noctua et al.) are nearly silent.
Another downside is power consumption at rest. A Supermicro board with two Xeons draws 80 W at minimum. Add a 10 Gbit switch and a few more peripherals and you're looking at an additional ~€80/month on your electricity bill. Year after year, that adds up to nearly €10,000 over ten years.
Of course that is nothing compared to what you’d pay at Google/Azure/AWS for the AMD machine of this news item :-)
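The back-of-the-envelope math roughly checks out, though only if the whole setup draws quite a bit more than the 80 W board idle. A sketch, where the €0.30/kWh rate and the ~365 W total draw are my assumptions, not figures from the comment above:

```python
# Back-of-the-envelope 24/7 electricity cost. The 0.30 EUR/kWh rate and
# the wattage figures are illustrative assumptions, not measured values.
def monthly_cost_eur(watts: float, eur_per_kwh: float = 0.30,
                     hours: float = 730.0) -> float:
    """Cost of running a constant load for ~one month (730 h)."""
    return watts / 1000.0 * hours * eur_per_kwh

idle_board = monthly_cost_eur(80)    # dual-Xeon board at idle
full_setup = monthly_cost_eur(365)   # board + switch + peripherals

print(f"80 W idle:   {idle_board:.0f} EUR/month")   # ~18 EUR/month
print(f"365 W total: {full_setup:.0f} EUR/month")   # ~80 EUR/month
print(f"10 years:    {full_setup * 120:.0f} EUR")   # ~9600 EUR
```

So the 80 W idle figure alone is closer to €18/month; it's the whole always-on stack that gets you to €80.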
12V-only PSUs like OEMs use, or ATX12VO, combined with a motherboard without IPMI (similar to the German Fujitsu motherboards) have significantly lower power consumption at rest — somewhere around 8–10 W without HDDs. Much better for home use, IMHO.
In the US, electricity rates are typically much cheaper than in the EU. My rate is roughly .08 €/kWh, for example, and I don't get any subsidies to convert to solar, so there's no way for panels to pay off within 15 years (beyond how long most people expect to stay in a home here). Other US states subsidize so heavily, or have rates so high, that most people have solar panels (see Hawaii, with among the highest electricity costs in the US).
Regardless of electricity cost, all that electricity usage winds up with a lot of heat in a dwelling. To help offset the energy consumption in the future I plan to use a hybrid water heater that can act as a heat pump and dehumidifier and capture the excess heat as a way to reduce energy consumption for hot water.
It’s mostly about the chassis, though: density is important with enterprise gear, and noise level is almost irrelevant, hence small chassis with small, loud fans.
I’ve got a 16-bay 3.5″ Gooxi chassis that I’ve put a Supermicro motherboard and a Xeon in.
I picked this specific NAS chassis because it has a fan wall with 3×120 mm fans, not because I need the bays.
With a few fairly cool SSDs for storage and quiet Noctua fans, it is barely a whisper.
Also - vertical rack mounting behind a closet door!
I can have a massive chassis that takes up basically no space at all. Can’t believe I didn’t figure that one out earlier...
Mostly yes, because server chassis are very compact and sometimes use proprietary connectors and fans. Still, many people have done it with good results; have a look on YouTube to see which server models are best suited for that kind of customization.
I've not been successful trying this with HPE servers. Most server fans (Foxconn/Delta) draw 2.8 amps or more.
I'm not aware of any "silent" gaming-grade fans that draw more than 0.38 amps.
And that's not even considering the CFM.
Amps * Volts is power. Power is a proxy (a moderately good one) for air movement (a mix of volume/mass at a specific [back-]pressure).
It’s not likely that a silent 2W fan will move a similar amount of air as the stock 14W fans. The enterprise gear from HPE is pretty well engineered; I’m skeptical that they over-designed the fans by a 7x factor.
Operating voltage tells you “this fan won’t burn up when you plug it in”. It doesn’t tell you “will keep the components cool”.
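To put numbers on the amp ratings mentioned above (assuming both fans run on the common 12 V rail — the comment doesn't state a voltage):

```python
# Rough power comparison of server vs. "silent" fans, assuming a 12 V
# rail for both (an assumption; actual rail voltages may differ).
def fan_power_w(amps: float, volts: float = 12.0) -> float:
    """P = I * V, in watts."""
    return amps * volts

server_fan = fan_power_w(2.8)   # typical HPE Foxconn/Delta fan rating
quiet_fan = fan_power_w(0.38)   # high end of "silent" gaming-grade fans

print(f"server fan: {server_fan:.1f} W")             # 33.6 W
print(f"quiet fan:  {quiet_fan:.2f} W")              # 4.56 W
print(f"ratio:      {server_fan / quiet_fan:.1f}x")  # 7.4x
```

That ~7x gap is the same factor the comment above is skeptical HPE would have over-designed for.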
Though I have to wonder.... would these be good gaming systems? Are there any scenarios where the perks (stupid numbers of cores, 8-channel memory, 128 PCI-E lanes, etc) would help?
Check out HardForum. Lots of very knowledgeable people there helped me mature my hardware knowledge back when I was building 4-CPU, 64-core Opteron systems. Also decent banter.
Happy to help if you want feedback. Servethehome forums are also a great resource of info and used hardware, probably the best community for your needs.