These days, Docker isn't really "about" the running of containers (all the logic for which is encapsulated outside of the Docker project itself, in runc). Docker—the thing that Docker Inc produces—is the tooling that gets container images fed into a daemonized instance of runc.
So: the Docker Registry daemon (the thing you can push/pull images to/from, the kind that runs on Docker Hub); the `Dockerfile` format, the build logic that uses it, and the CI bots that use that build logic; the local daemon that holds a mini-registry, which builds and pulls write into; and the tooling to create, dump, and move images between all those places.
If there's a tutorial that replicates that stuff, I'd love to read it.
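To make the pieces above concrete (the image name and registry host here are made up for illustration), a `Dockerfile` is just a list of build rules that the daemon executes to produce image layers:

```dockerfile
# Start from a published base image pulled from a registry (Docker Hub here)
FROM debian:bookworm-slim

# Each instruction below becomes one cached layer in the resulting image
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
COPY ./app /opt/app
CMD ["/opt/app/run.sh"]
```

The surrounding tooling then moves the result between the places listed above: `docker build -t registry.example.com/team/app:1.0 .` writes the image into the local daemon's mini-registry, `docker push` uploads it to a remote Registry daemon, and `docker save`/`docker load` dump and restore it as a tarball.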
I dunno about that. There are multiple issues with routing and DNS that are controlled by the docker daemon. Ever tried getting ipv6 working correctly? You end up hitting a whole host of issues that lead straight back to dockerd.
People use Docker because it's easy to use and easy to get support, and it's easy to hire people who have experience with it.
Similarly, you could roll your own Dropbox solution with rsync, but good luck teaching your Mom how to do that when she calls you on the phone and she just wants to know how to sync her photos.
By the way, it's pretty difficult to write a binary package system that is reliable, distributed, easy to use, easy to troubleshoot, well supported, and easy to sell to both the engineers and the execs at a company. Docker has done that. It's not a small thing.
Eh, if it's a binary package system it's a very nice one--perhaps an historically nice one.
Unlike most package systems, most people who develop in docker containers are writing the equivalent of a package manifest (the Dockerfile), and they're doing it for things that weren't previously packaged. While there are plenty of exceptions, the standard for deployment of webapps has long been piecemeal deployments, not RPMs or whatnot. Docker's convenience features changed that.
Also, the layering system/virtual filesystems are integrated into Docker such that it yields a ton of convenience (in "extending" others' images, speeding up deployments/caching, and making huge/arbitrary changes to the filesystem highly reversible) without making users manually manage most of it. There are times when the abstraction leaks (looking at you, hard limit on the number of layers), and all of the component technologies that went into it existed before. But as I've posted elsewhere in the comments here, the big advantage of docker is that it integrates those technologies in a way that provides a simple mental model for reasoning about containers, and a very convenient/beginner-accessible toolset for packaging really complex dependencies. Even the nicest "typical" packaging tools out there (rpm/dpkg are pretty crufty, but stuff like Nix and FPM are getting quite nice) still have much rougher edges.
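The layer caching and reversibility described above show up directly in how Dockerfiles are typically ordered (the base image and file names here are illustrative):

```dockerfile
# Extending someone else's image: all of its layers are reused as-is
FROM python:3.12-slim

# Dependencies change rarely, so install them first; this layer is
# rebuilt only when requirements.txt itself changes
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt

# Application code changes often; on a typical code change, only the
# layers from here down are rebuilt, everything above comes from cache
COPY . /app
CMD ["python", "/app/main.py"]
```

Reversibility falls out of the same model: each layer is a diff over the one below it, so rolling back an arbitrary filesystem change is just re-pointing a tag at an earlier image.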
> Unlike most package systems, most people that develop in docker containers are writing the equivalent of a package manifest (Dockerfile)
Dockerfile is a set of build rules. You need build rules for RPMs and
DEBs too (unless you use FPM, which you somewhat praised later, in which
case there's no good place to put the rules and you get terrible packages
with missing metadata).

Unlike most package systems, Docker was designed by programmers for
programmers, so you don't need to learn those yucky sysadmin tools and
you can stay oblivious to how the OS actually works (even though you
should actually know the OS you're dealing with, so it's a dumb idea on
its own).
> and they're [writing Dockerfiles] for things that weren't previously packaged.
Which is progress only because programmers are finally putting their code
into packages. Exactly the same could be achieved with RPMs and DEBs, even
if you needed something that was already shipped by the distribution
(you'd just put your things on the side, just as you are doing now with
virtualenv or gems or whatever).
> While there are plenty of exceptions, the standard for deployment of webapps has long been piecemeal deployments; not RPMs or whatnot. Docker's convenience features changed that.
But Docker didn't change that by allowing anything that was previously
impossible or even difficult. Quite the contrary: it was easy, it just
required actually knowing how the OS works, which is not common knowledge
among web programmers. OS packages were ignored by programmers solely
because they were sysadmins' tools, and as such they were boring. I see no
other reason, given that Docker provides virtually no technical benefit
besides heavy, brittle magic for network configuration, so all your
packaged daemons can listen on the very same address 0.0.0.0:8888 and
still communicate with each other.
> the big advantage of docker is that it integrates those technologies in a way that provides a simple mental model for reasoning about containers
This mental model, it is what exactly? Because there's virtually no mental
model with the packages. A tarball with necessary files, that's all.
Yes, but as you said just after that, the rules in a Dockerfile are much more popular with a wide range of programmers. That might be because the Dockerfile abstraction/API is better and simpler, or because programmers like writing shell scripts but not RPM build rules, or for some other reason.
> you should actually know the OS you're dealing with, so it's a dumb idea on its own ... it was easy, it just required to actually know how the OS works, which is not a common knowledge among web programmers.
I'm really tired of this attitude. Of course you should know the OS you're dealing with. What you need to know about it depends on what you're doing with it. If I want to do kernel work, I don't need to know the best design principles for an ES7 web framework. If I want to make a website, I don't need to know how to write Apache 2 from first principles, and nor do I need to know how to manually chroot/install quotas/set up namespaces and capabilities to build a container system from scratch.
> This mental model, it is what exactly? Because there's virtually no mental model with the packages. A tarball with necessary files, that's all.
If it was just a tarball, it would be less powerful--it's the whole bunch of technically-unrelated things (resource management, networking, capabilities, namespaces, tarball-ish features, layering, dockerfile API, nice CLI with pluggable backends, standardized container interface) all unified under the abstraction of "this is a single unit, just like an RPM package". That concept is powerful exactly because it hides the fundamentals of the specific component technologies from people who don't need to know them--at least not at first.
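A rough sketch of that unification (image and network names below are hypothetical, and the flags shown are a small sample): one `docker run` invocation drives several unrelated kernel facilities through a single interface, without the user touching cgroups, capabilities, or network namespaces directly.

```shell
# Namespaced networking: a bridge with embedded DNS and NAT
docker network create appnet

docker run -d --name web \
  --memory 256m --cpus 1.5 \
  --cap-drop ALL --cap-add NET_BIND_SERVICE \
  --network appnet \
  --read-only \
  registry.example.com/team/app:1.0
# --memory/--cpus: resource management (cgroups)
# --cap-drop/--cap-add: Linux capabilities
# --network: network namespace wiring
# --read-only: the layered image filesystem stays immutable at runtime
```

The point isn't that any one of these flags is novel; it's that they all hang off the same "single unit" abstraction, so a user can apply them without knowing which kernel subsystem each one maps to.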
By saying Docker is just "package maintenance for stupid people who only play with Duplo legos", you're being ignorant of people's real needs at best, and deliberately elitist at worst. It's like saying "Dropbox is just for people who don't want to learn how file syncing over the network works--real programmers will just use curlftpfs and SVN".
> I'm really tired of this attitude. [...] What you need to know about [the OS] depends on what you're doing with it.
You want your system deployed, so you should know how to deploy. If you
don't know how most things are deployed, you're likely to create a
monstrosity that doesn't fit anything the OS can do sensibly. It's like
arguing that a web programmer doesn't need to understand the HTTP
protocol because he only works with it through half a dozen layers of
abstraction (which is how it's currently done).
> If I want to make a website, I don't need to know how to write Apache 2 from first principles, and nor do I need to know how to manually chroot/install quotas/set up namespaces and capabilities to build a container system from scratch.
Of course, because that's what sysadmins do. But you should know how to
configure said Apache. Docker hides that away behind heavy magic, which
is bound to break down for non-trivial requirements.