ACPI is to blame because it specifically enabled laptop manufacturers to hide platform hardware details behind a shitty firmware interface instead of documenting the hardware for OS driver developers, and it was around that time that SMI became a thing on Intel CPUs and platform firmware started using it for fan control instead of depending on the OS to do that.
ACPI is a huge part of the reason that linux works at all on x86, and in many cases is as popular as it is. If it acted more like arm boards where pretty much nothing works right, no one would use it for general purpose computing. There are literally hundreds and hundreds of drivers for PMICs, clock dividers, GPIO pin muxing/etc for the hundred or so boards that sorta work, and each one needs customization that takes months/years every time something gets tweaked on the board/etc.
ACPI allows random x86 board vendors to literally have hundreds of products themselves that not only boot linux, but a pile of other OS's because they aren't wasting their time writing drivers for every single board and voltage regulator.
So, call it shitty, but understand that it's the reason you can boot a single linux image on everything from an Atomic Pi to an HP Superdome Flex, plus the thousands of desktops/etc being customized by people in their bedrooms.
It's also largely the reason why linux works on 20-year-old PCs that no one is actually testing on anymore.
The alternative is a monoculture where the HW is basically provided by a single vendor that doesn't change much and offers a half dozen supported configurations. I would point to zSeries here because that is effectively how it works, but it also has a huge hw/firmware abstract machine that allows IBM to change the underlying HW details without having to rewrite all this low-value garbage.
> There are literally hundreds and hundreds of drivers for PMICs and voltage dividers, GPIO pin muxing/etc for the hundred or so boards that sorta work, and each one needs customization that takes months/years every time something gets tweaked on the board/etc.
There are literally tens of thousands of devices for the PC platform, and Linux has no problem supporting them when the hardware is documented. Why can't the vendor document that instead of hiding this stuff behind opaque binary blobs that can have security vulnerabilities? Whether this is ARM or x86 doesn't matter.
> ACPI OTOH allows random x86 board vendors to literally have hundreds of products themselves that not only boot linux, but a pile of other OS's because they aren't wasting their time writing drivers for every single board and voltage regulator.
I'm parsing this as "ACPI OTOH allows random x86 board vendors to develop shitty hardware, cut corners on documenting it, and not care as long as Microsoft OS's boot on it."
> they aren't wasting their time writing drivers for every single board and voltage regulator.
Document the damn registers and hardware interfaces and then someone out there will write it for them.
> It's also largely the reason why linux works on 20-year-old PCs that no one is actually testing on anymore.
Well, Linux still supports hardware that predates ACPI, like floppy drives, so this is making a connection where there is none.
> The alternative is a monoculture where the HW is basically provided by a single vendor that doesn't change much and provides a half dozen supported configurations.
Wrong. Before ACPI, for example, there were many chipset vendors--Opti, VIA, etc. ACPI didn't kill these off but there wasn't a monoculture before ACPI.
> I would point to the zseries here
So ... do we want an IBM-mainframe like monoculture where you have to depend on IBM when the hardware changes?
> allows IBM to change the underlying HW details
The job of abstracting hardware details belongs to the operating system and its drivers. Abstracting I/O into portable interfaces like open(), read(), and write(), backed by underlying drivers, is literally half the reason you have an OS in the first place. If the hardware is so different that existing I/O calls can't handle it, you need to develop new ones - that's what happened with Berkeley sockets - NICs are not block or character devices.
If the drivers are open source then they are bug-correctable and usable even well after the hardware manufacturer goes away. Embedding that logic in closed-source platform firmware makes you dependent on the platform manufacturer - a strong contributor to the Wintel monopoly.
> Wrong. Before ACPI, for example, there were many chipset vendors--Opti, VIA, etc. ACPI didn't kill these off but there wasn't a monoculture before ACPI.
That is where you're wrong: x86 PCs had BIOS and APM, which also provided minimal platform abstractions. But there was a monoculture: you either provided PC/AT HW and BIOS compatibility or your x86 machine didn't work. There were HW "standards" for everything, be that CGA/EGA/VGA or IDE controllers. Yes, you might make your own video card, but it absolutely supported those standards. Similarly with chipsets: there was closed-source early-boot firmware, but by the time the MBR was being loaded it looked like a 1980s PC/AT.
As for there being enough driver developers to fill the kernel with all these drivers: that belies a fundamental misunderstanding of how complex even those arm machines are behind the scenes. Even the ones that work likely have huge stacks of binary code running in places that aren't visible to linux/DT. You need only look at some of the open or reverse-engineered arm boards to see that. The RK3399 specs were published, what, 4 years ago at this point, and there are still rk3399 fixes landing. Should it take 5+ years from the release of a piece of hardware before linux can boot and work reliably on it?
edit: See https://en.wikipedia.org/wiki/Option_ROM for where all that binary code used to hide in the 1990s when you plugged in random "vga" boards and storage controllers.
The original idea behind BIOS was to provide drivers (and a firmware interface to them) so CP/M could run and also read the first disk block into memory so CP/M could load.
This was awesome when your platform consisted of an 8-bit CPU, a serial port or two, a printer port or two, a disk drive or two, and a text-based video display.
A ROM-resident BIOS is a poor place for drivers if
- your system has any notion of plug-and-play at all, or
- your system has expansion slots and arbitrary people you don't control might develop hardware for it,
and these two things are desirable if you want a free-ish computing platform not monopolized by one company.
Hardware interfaces are not the same as the firmware gunk ACPI foists upon you. A hardware interface is simply a way for the CPU to talk to a device outside the system; it doesn't result in the CPU running unknown code behind your back. A CPU that's cordoned off behind a peripheral interface running closed-source code is fine - what's not fine is that code running on the same CPU as my OS kernel.
> Should it take 5+ years from the release of a piece of hardware before linux can boot and work reliably on it?
I mean if the hardware manufacturer won't document their devices it's something they bring upon themselves. Embracing the open source community here would have substantial benefits unless something like market segmentation is taking place.
Option ROMs? Yeah, those are BIOS extensions - your OS isn't dealing with those ROMs once the BIOS has booted the OS, unless it's a CP/M-era operating system - like actual DOS. Except for modesetting - but that's just as much bullshit as ACPI. Document the damn registers that do the modesetting so we don't have to thunk back into 16-bit mode just to change the screen resolution.
Uh, option ROMs are still a foundation of how PCIe/etc work. So, unless you don't consider PCIe to be plug and play, option ROMs are very much a part of PnP. Ever plugged in a GPU? How do you think the BIOS/EFI/GRUB/etc display things on the screen? How about net booting off a random network adapter, or a storage controller that isn't AHCI/NVMe?
So, all that said, your mental model of how a modern machine works seems stuck on the idea that linux is the center of the machine and can access all the hardware, and that the HW looks like a 1980's PC with "registers" that directly modify HW state. That is provably false, and will continue to be as long as people want inexpensive and/or high-performance machines. The mainframe guys go on about channel processors, but that concept (using a small cpu/etc to manage a piece of hw or a communication link) is fundamental to a very large percentage of HW produced in the past couple decades.

There is code buried in pretty much every single USB device to manage the bus, as is true for nearly every storage device, where those microcontrollers manage everything from queue scheduling and signal processing on spinning media to the flash translation layers and error correction on SSDs. Then there is all the power mgmt code running to control internal bus power/frequency, cache power mgmt, etc, etc, etc. Even things you probably think are simple register-model HW devices (say XHCI or NVMe) have microcontrollers buried in them actually driving the bus and maintaining connections. Overwhelmingly, what you think of as HW registers are actually mailbox interfaces to microcontrollers running proprietary firmware.
So, as I mentioned, pretty much the entire set of HW docs for the RK3399 was released years ago, and that SoC, and the dozens of boards you can find it on, still in general won't work out of the box with a random upstream linux kernel. And when it "works", the power mgmt tends to be terrible. That is because it actually takes engineering time to make these things work well: someone has to write the drivers, device trees, etc and go through the pain of getting them merged to mainline. And it's very obvious that while sometimes there are people willing to spend their holidays and weekends making it work, those people are few and far between. And that's just for a few pieces of HW; if there were hundreds of manufacturers making variations on, say, the pinebook pro, pretty much none of them would work outside of the hacked-up debian/whatever that the manufacturer shipped on the device.
And then, say linux actually works well on it, what happens if you want to run netbsd?
> Ever plugged in a GPU? How do you think the BIOS/EFI/GRUB/etc display things on the screen?
That's because the BIOS is not an operating system--or at least used to not be. It goes back to the CP/M architecture, which is even older than the 1980s PC. Option ROMs are there for the BIOS and for single-tasking "operating systems" that use it CP/M-style. Modern operating systems don't need that layer.
I have a Guruplug (ARM platform) that boots Linux and what's in flash is U-Boot - a bootloader that loads Linux, the initrd, then it gets out of the way. The only reason why the PC platform can't work like this is because of ACPI.
> Overwhelmingly what you think of as HW registers are actually mailbox interfaces to micro controllers running proprietary firmware.
Oh I know. SATA is a communications interface (as were IDE, ATAPI, and SCSI). USB is a communications interface. NVMe is an elaborate tagged/queued communications interface. Et cetera.
I'm not sure why you are conflating the CPU-facing hardware interface (registers) with anything that is physically on the peripheral side, except for this, which I will address:
I know a CPU I don't control, running code I don't know, is on the other side of those links. That's fine--because it can't directly access my OS's RAM unless the OS allows DMA, and a bug in that firmware will probably cause a device-level issue rather than a machine-level one.
I'm not seeing the value add--other than saving some poor, poor Microsoft-aligned platform firmware developer a bit of time--of having to jump into a closed-source firmware routine to use those communications interfaces instead of letting the OS talk to them directly.
> So, as I mentioned pretty much the entire HW docs for the RK3399 were released years ago, ...
So what makes the PC platform avoid this mess is not the presence or absence of closed-source firmware running on the main CPUs, but simply the hardware platform itself being standardized, because everyone copied it from IBM. The BIOS was copied because DOS needed it, not because the hardware needed it beyond a CPU requiring ROM at its initial boot address. There were well-known addresses for each device: DMA channel 0/1, IDE channel 0/1, FDC 0/1, serial ports 0-3, parallel ports 0-3. I really want to know why we couldn't have a hardware standard for the laptop power control interface instead of APM. The industry managed to settle on PCI in reaction to IBM's attempt to grab back the platform with MCA. PCI doesn't require firmware, and its bus-scan registers are well known and standardized in hardware. So why did everyone say it was OK to hide the power-controlling hardware behind APM? DOS of course needed something like that, but you had more than a couple commercial operating systems on the platform besides it (Xenix I think, OS/2 likely still kicking a bit, NT).
Early 90's when APM started taking hold was also the time when Intel started to not publish certain things in its CPU manuals.
Device trees should be easily obtainable from device datasheets.
Overall, the real problem with the ARM boards is an economy that values time-to-market and treat-your-first-X-customers-as-beta-testers over quality. It's a separate issue that we really should be unwilling to accept unauditable code running on the same CPU as my operating system as an absolute requirement.
The option ROM frequently is required to init the hardware so the OS can see something normal. Modern PCs aren't using 16-bit INT services; rather, UEFI GOP drivers sourced from option ROMs provide the early display, without which you can't interface with the machine until the OS starts. If you want something like a phone where you can't replace the OS, that is how one goes about it. Even U-Boot has noticed that standards matter as it slowly transforms itself into yet another UEFI firmware.
I have a Guruplug too, or rather an OpenRD, because all the Guruplugs died. But that is a 1980's-PC level of simple HW: it's not SMP, and it doesn't support virtualization or any power mgmt to speak of. Compared to a modern piece of HW, the list of things it can't do is longer than the list of things it can.
Modern PCs with ACPI aren't "standardized" outside of the few interfaces used by the OS; that is my point. All that regulator, clock-controller, pin-muxing, I2C'ing, SPI'ing machinery to manage the platform is whatever the vendor put there. In PC land (and arm land, for that matter) it is all still there, only the OS doesn't have to worry about those details, because it simply asks to power something on, and it happens, and some other processor takes over picking perf profiles and idling links/etc when needed. If there were a standard PMIC, wired up in a standard way, it might make sense to attempt to use it directly, but there isn't. Instead it's a bucket of parts wired up in every way imaginable.
ACPI doesn't dictate what is on the other side; it could be an SMM trap, or a mgmt processor, or a BMC, or the function can be written entirely in AML. That is the point: it's just an OS API surface. It could just as well be a standardized pile of HW registers, but that would remove the ability to run in cases where the vendor wants to save a penny and run everything on the main core, as you suggest doing with DT. A large part of why you provide these interfaces is that having the main core wake up to fiddle with some 100KHz SPI bus to flash an LED is dumb, wastes power, and eats perf doing work that would be better handled by a small mgmt core. Your basic argument seems to be that you don't want the OS wasting cycles in the firmware, but you're perfectly OK with the OS wasting even more cycles all the time doing these functions suboptimally, in a kernel that doesn't understand the intricacies of power management, on the most power-hungry core in the system.

You seem to think ACPI is always just an SMM trap, and that isn't the case. The reason I conflate the two is that when the remote end is a BMC/etc, you're effectively just poking a mailbox to get some other piece of hardware to do the work, and those mailbox interfaces are no more standard than PMICs. So now you'd need thousands of drivers to talk to battery mgmt controllers, power/standby buttons, you name it. Instead of standardizing all that garbage, a software API was created. It's no different from OpenGL or any other standard, except that it uses a bytecoded function-call interface (ala openfirmware's forth). Do you hate OpenGL too because your game isn't fiddling fake HW registers?
And now that I point out what happens when a vendor just tosses those register maps you want over the wall, you change the topic to how they are just creating beta-level products, while refusing to acknowledge that someone has to do the work. From the perspective of a vendor, it costs a tiny fraction to ship some closed-source firmware that works across dozens of OSes and lets them redesign their hardware, versus hiring experts in a half dozen OSes to write piles of custom power mgmt code accessing dozens of drivers talking over dozens of SPI/I2C/mailbox/etc interfaces for a single machine. It's actually a little bit crazy that people in the arm space are trying this while simultaneously complaining that HW vendors create piles of patches they can't get upstream, so they build their own custom linux forks. That is the natural result of making the same claims you're making: a vendor can either spend years fighting with kernel maintainers, or fork linux and ship it to their customers with a pile of patches that only work in linux. The middle ground is where a number of them have been going; it's the rpi's proprietary mailbox interface that talks to the videocore to set the processor frequency. Multiply that mailbox by a few dozen SoC providers and you have the future of DT on arm and risc-v. In a decade or two they will end up standardizing the mailboxes and reinventing ACPI and openfirmware in order to support cross-platform mailbox interfaces.