Hacker News

What I find curious about all the container discussion and narrative is the strange lack of context. Sure, discuss Docker, but also discuss namespaces, cgroups, overlayfs, aufs, and all the other critical enabling technologies, because that is where a lot of the major problems with containers exist and will be solved: for instance user namespaces, the fact that cgroups are not namespace-aware, and how to integrate overlayfs or aufs so they can be mounted inside user namespaces seamlessly.

Surely these projects and developers need support and focus. Otherwise it becomes mere marketing for companies that have the funds or ability to market themselves. Would we just talk about libvirt without context or understanding of KVM and Xen? How would that be useful or meaningful?

An ‘immutable container’ is nothing but launching a copy of a container, enabled by overlay file systems like aufs or overlayfs; a ‘stateless’ container is a bind mount to the host. Using words like stateless, immutable or idempotent just obscures simple underlying technologies and prevents wider understanding of core Linux technologies that need to be highlighted and supported. How is this a sustainable development model?
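To make that concrete, here is roughly what ‘launching a copy’ looks like at the filesystem level. This is only a sketch: it assumes root, and all the paths are hypothetical.

```shell
# One read-only image layer, one writable layer per "copy" (hypothetical paths)
mkdir -p /tmp/img/lower /tmp/c1/upper /tmp/c1/work /tmp/c1/merged

# The "immutable" launch: writes go to upperdir, the image layer is never touched
mount -t overlay overlay \
    -o lowerdir=/tmp/img/lower,upperdir=/tmp/c1/upper,workdir=/tmp/c1/work \
    /tmp/c1/merged

# The "stateless" part is just a bind mount back to the host:
mount --bind /srv/app/data /tmp/c1/merged/var/lib/app
```

Everything above is plain mount(8); no container runtime is involved.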

Docker chooses to run containers without an init. The big problem here is that most if not all apps you want to run in a container are not designed to work in an init-less environment: they expect daemons, services, logging and cron, and, when run beyond a single host, ssh and agents. This adds a boatload of additional complexity for users before you can even deploy your apps, and a lot of effort is expended just managing the basic process of running apps and managing their state in this context.

Contrast that with LXC containers, which have a normal init and can manage multiple processes, enabling, for instance, your VM workloads to move seamlessly to containers without any extra engineering. Any orchestration, networking or distributed storage you already use will work, obviating the need to reinvent it. That’s a huge win and a huge use case that makes deployment simple and slices away all the complexity. But if you listen to the current container narrative and the folks pushing a monoculture and container standards, it would appear there are no alternatives and that running init-less containers is the only ‘proper’ way to use containers, never mind that the additional complexity may only make sense for specific use cases.



> Docker chooses to run containers without an init.

I don't think Docker chooses one way or the other, and people do run Docker with an init: http://phusion.github.io/baseimage-docker/


Correct, Docker doesn't care what you run inside the container. You provide a command to run, and it runs it. That command may be your application server or a traditional init process which in turn will fork multiple children.
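For example (image names are illustrative; /sbin/my_init is the init shipped by the baseimage-docker project linked above):

```shell
# Docker simply execs whatever command you hand it:
docker run --rm alpine echo "echo is PID 1 here"

# ...including a full init, which then supervises everything inside:
docker run -d phusion/baseimage /sbin/my_init
```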

Docker does make it easier to follow an "application container" pattern, and that pattern avoids (with good reason) booting an entire traditional init system inside the container. But following that pattern is not mandatory. Not forcing too many patterns upon users all at once was part of the original Docker philosophy. Unfortunately that aspect was drowned in the cacophony as a few loud and passionate people interpreted Docker through the lens of their own favorite patterns.

In retrospect I wish we had been more assertive in reminding everyone to respect the philosophy behind Docker: that you can share tools with people without forcing everyone to use them in the same way as you.


Prominent Docker, Inc., employees have spent years arguing against it, and they don't prioritize bugs around that use case. Which matters.


My favourite is Alpine Linux with s6-overlay: https://github.com/just-containers/s6-overlay


Overlayfs, aufs, etc. are really irrelevant to containers. They are used in Docker because it's built around opaque binary disk images that do not compose, which I believe is a big flaw, but it doesn't have to be this way. For example, GNU Guix has a container implementation that does not use disk images at all but still achieves system-wide deduplication of software across all containers running on the host via simple bind mounts.
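A sketch of that approach, assuming root and hypothetical container paths: one read-only store on the host is bind-mounted into every container root, so nothing is copied per container.

```shell
# Same host store, visible in two container rootfs trees (hypothetical paths)
mount --bind /gnu/store /var/lib/containers/web1/rootfs/gnu/store
mount --bind /gnu/store /var/lib/containers/web2/rootfs/gnu/store
# Both containers now share one on-disk copy of every package.
```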


> LXC containers which have a normal init

Incorrect. You can do custom application (e.g. single-process) or init-based containers with LXC.


I think in discussions it's useful to deal with defaults, because there are always workarounds. For Docker the default is to run init-less containers, so you do not get a normal OS environment. For LXC the default is to run init in the container, so you get a container with a normal OS environment out of the box, more like a VM.

lxc-start by default will start the container's init, and any apps installed as services will start when the container starts. lxc-start has no option or documentation I have seen that lets users launch a single process without starting the container's init, so it would be useful to share how that works.
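For reference, the default flow being described (the container name and distro choice are arbitrary):

```shell
lxc-create -n web -t download -- -d debian -r bookworm -a amd64
lxc-start -n web                 # boots the container's own /sbin/init
lxc-attach -n web -- ps -p 1     # PID 1 inside is init, not your app
```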

With Docker you launch the app in the container directly from the host, and if the app is a daemon, for instance Nginx, you need to disable daemon mode or it won't work, as there is no init process to manage background processes. So with Nginx you would need to disable daemon mode in nginx.conf or start it with 'nginx -g "daemon off;"' in Docker. In contrast, with LXC you install Nginx and it works as it would on bare metal or in a VM. There is no need to think about how the app manages its processes.
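A minimal sketch of what that looks like in a Dockerfile (the image tag is an assumption):

```dockerfile
FROM nginx:stable
# No init in the container: nginx itself must stay in the foreground,
# because Docker treats the exit of this one process as container exit.
CMD ["nginx", "-g", "daemon off;"]
```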

To run multiple processes or daemons in Docker you need to use a shell script, or a third-party process manager like supervisord. The entire Docker ecosystem and tooling is built and defined around single-process containers.
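The supervisord route typically looks like this (file contents are an illustrative sketch, not a recommended config):

```ini
; supervisord.conf -- re-creating by hand what an init gives you for free
[supervisord]
nodaemon=true          ; supervisord itself must stay in the foreground as PID 1

[program:nginx]
command=nginx -g "daemon off;"

[program:cron]
command=cron -f        ; -f keeps cron in the foreground too
```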

A ton of deployment problems, especially for multi-process apps, daemons, databases or any apps that require cron, logging, agents and so on, originate here. It becomes a process in itself to learn how each app operates and then configure it accordingly. Your deployments also become Docker-specific: for instance, Nginx with a WordPress stack deployed on bare metal or a VM can move seamlessly to an LXC container and vice versa, because they all run normal OS environments.

This can't happen with Docker, and you need to re-engineer deployments. Why take on this extra load and create incompatibility from a non-standard OS environment, or worse, give up the init that comes for free in exchange for a third-party process manager that does the exact same thing, only with more effort and cognitive load? This doesn't make much sense unless there is an extremely clear upside and use case.


I agree, docker is half-baked.


> Docker chooses to run containers without an init. The big problem here is most if not all apps you want to run in a container are not designed to work in an init less environment and require daemons, services, logging, cron and when run beyond a single host, ssh and agents.

Does this not confuse init with a process supervisor?


A process supervisor (foreman, monit, etc.) is just one of the things an init system starts. The set of items you want to ensure are always running isn't usually the same as the set of items you want to start with the system.
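The distinction is easy to see in code. A supervisor is essentially a restart loop around one command, whereas an init additionally handles boot-time one-shots, orphan reaping and shutdown ordering. A minimal sketch of just the supervisor half (the restart cap of 3 is arbitrary):

```shell
# Restart a command when it dies, up to 3 times -- the "monit" half only.
supervise() {
  tries=0
  while [ "$tries" -lt 3 ]; do
    "$@" && return 0            # clean exit: nothing left to supervise
    tries=$((tries + 1))
    echo "restart #$tries: $*"
  done
  return 1                      # gave up after 3 crashes
}

supervise sh -c 'exit 1' || echo "gave up"   # a crashing "service"
```

An init would run a loop like this for its services, but only after it had also brought the system up, which is the part the comment above says people conflate.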





