I did something (slightly) similar via proot, called Bag [1], which I was careful not to describe as a Docker alternative: it has nothing to do with cgroups, and the CLI deviates from Docker's.
The backstory: To bypass internet censorship and deep packet inspection, I had written a proxy chain solution masquerading as plain HTML traffic. I needed it constantly running everywhere I went, but I didn't want to port it to a native Android app. I wanted to run it through termux, and at the time termux had no JDK/JRE. Proot could spawn an Arch Linux env, and there a JDK was indeed available.
The arch env within termux turned out to be generally more suitable for all tasks. Creating and destroying ephemeral envs with different setups and prooting into them to just run a single command is easily automated with a script; I named it bag.sh, a drastically smaller form of a shipping container.
Funnily enough, bag.sh also has a roadmap/todo in there, untouched for 5 years! It was written on a mobile screen, hence mostly formatted to 40-column lines to fit on the display without scrolling.
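For anyone curious what that kind of wrapper looks like, here's a minimal sketch of the idea. This is not the real bag.sh: the function names, paths, and the particular proot flags (`-r` rootfs, `-0` fake root, `-b` bind) are my assumptions about how such a tool would be glued together.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of an ephemeral-env wrapper around proot (not bag.sh).
# PROOT is overridable so the plumbing can be exercised without proot installed.
set -euo pipefail

ENVS=${ENVS:-"$HOME/.bag/envs"}   # where rootfs tarballs get unpacked

# unpack a rootfs tarball into a fresh, disposable env directory
bag_new() {
  local name="$1" tarball="$2"
  mkdir -p -- "$ENVS/$name"
  tar -C "$ENVS/$name" -xf "$tarball"
}

# run a single command inside an env via proot
bag_run() {
  local name="$1"; shift
  "${PROOT:-proot}" -r "$ENVS/$name" -0 -b /proc -b /dev "$@"
}

# throw the env away afterwards
bag_rm() { rm -rf -- "${ENVS:?}/${1:?}"; }
```

Creating, using, and destroying an env is then three calls, which is what makes the "spin one up just to run a single command" workflow scriptable.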
I guess a lot of us have stories like this. I needed to package a bunch of things into a single environment where a VM was unsuitable. I cooked up something using chroot and debootstrap, and made an installer using makeself. It created a mini Debian inside /opt which held the software and all the dependencies I needed (MySQL etc.). Worked pretty well, and the company I made this for used it until they got acquired in 2016 or so.
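A rough sketch of that kind of installer, assuming debootstrap and makeself. The target path, Debian suite, and package list here are all made up for illustration; the real thing needs root and network access.

```shell
#!/usr/bin/env bash
# Sketch of a mini-Debian-in-/opt installer (illustrative values throughout).
set -euo pipefail

TARGET=${TARGET:-/opt/miniapp}   # the self-contained mini Debian lives here
SUITE=${SUITE:-bookworm}

build_rootfs() {  # bootstrap a minimal Debian tree and add the app's deps
  debootstrap --variant=minbase "$SUITE" "$TARGET"
  chroot "$TARGET" apt-get update
  chroot "$TARGET" apt-get install -y default-mysql-server
}

package_installer() {  # wrap the finished tree as a self-extracting archive
  # makeself <dir> <output> <label> <startup script inside dir>
  makeself "$TARGET" miniapp-installer.run "mini Debian app" ./install.sh
}
```

The resulting `.run` file is a single shell archive, which is exactly the "ship one artifact, no VM" property described above.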
More generally though, implementing a crude version of a larger application is one of the best ways of learning how things work inside it. I'm a big fan of the approach.
Now I'm kinda tempted to have a go at this using distroless, probably by building a container using the already existing tools and then slurping the contents back out of it to turn into a chroot.
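The "slurping the contents back out" step is pleasantly small with the existing tools, since `docker export` flattens every layer into one tarball. A hedged sketch (the image name in the usage comment is just an example):

```shell
#!/usr/bin/env bash
# Sketch: build/pull an image with the usual tooling, then dump its
# filesystem into a plain directory you can chroot into.
set -euo pipefail

image_to_chroot() {
  local img="$1" rootfs="$2" cid
  cid=$(docker create "$img")                  # create (not run) a container
  mkdir -p -- "$rootfs"
  docker export "$cid" | tar -C "$rootfs" -x   # flatten all layers to a tree
  docker rm "$cid" >/dev/null
}

# usage (the chroot itself needs root):
#   image_to_chroot alpine:3.20 ./rootfs
#   sudo chroot ./rootfs /bin/sh
```

For a distroless image the same extraction works, you just chroot into your own binary rather than a shell.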
I managed to coax my 8" tablet turned horizontal to give me about 80x22 at a size I could actually read.
Combine that with a ~10" bluetooth keyboard that fits comfortably in my leather jacket's inside pockets and I get to leave the house without carrying a bag and still sit and write code in the back corner of a handy beer garden.
Turns out to be surprisingly productive as well, probably because there's just enough extra friction to flipping to my usual distractions compared to a laptop that I tend to just take a sip of my beer while continuing to glare at the code instead.
True! A tablet and an ext-keyboard do a much better job (Although I wrote bag.sh out of boredom while stuck in a beer-less airport waiting for a delayed flight; I hope I won't have to actively plan for this setup again)
I have a stack of Thinkpad Tablet 2 bluetooth keyboards (though it's getting a bit low, will have to find somebody on ebay or whatever that still has some again soonish) - they're capable of being paired to more than one device (I also use them with the tablet half of my Helix 2) and have a built-in stand which takes my 8" tablet just as well as it did the Tablet 2 itself back when I was using that.
So had I ended up in the same situation as you I would probably have done similar, but with an added undercurrent of "Matt, you idiot, how did you manage to forget to put the keyboard in your pocket before you left?!"
(still a neat hack on your part, mind, no question there, but I'm very much glad I've optimised my standard "leaving the house" loadout so inflicting the same thing on myself is at least *less* likely ;)
I love these. Been a fan of minimal bash stuff.
Here's a proof of concept for an intra-cluster load balancer in 40 lines of bash, done during a hackathon I organized to promote distributed infra with Docker, Mesos, etc. about a decade ago https://github.com/cell-os/metal-cell/blob/master/discovery/...
I likely lost it, but I also had a redundant and distributed colo-to-cloud transfer tool based on reverse SSH tunnels.
How simple it is to re-implement a large part of Docker (fundamentally, it is just a bit of glue code over kernel features) is the biggest problem Docker-the-company faced and still faces.
Where Docker adds real value is not (just) Docker Hub but Docker for Windows and Mac. The integrations offer a vastly superior experience to messing around with VirtualBox and Vagrant by hand (been there, done that) to get Docker running on one's development machine.
Rancher Desktop is also a viable option, and free. Many, including my workplace, moved to it after Docker's new licensing kicked in.
IMO the real magic of Docker was the Docker/OCI image format. It's a brilliant way to perform caching and distribute container images, and it's really what still differentiates the workflow from "full" VMs.
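A toy illustration of the content-addressed part of that format (not Docker's actual implementation, which also canonicalises compression and metadata): an OCI layer is essentially a tarball identified by its digest, so identical content always hashes to the same ID, and that is what makes layer caching on push/pull work.

```shell
#!/usr/bin/env bash
# Toy layer digest: pin tar ordering and metadata so the digest depends
# only on file content, the way content-addressed layers behave.
set -euo pipefail

layer_digest() {
  tar --sort=name --mtime='UTC 2020-01-01' \
      --owner=0 --group=0 --numeric-owner \
      -C "$1" -cf - . | sha256sum | awk '{print "sha256:" $1}'
}
```

Two directories with identical contents produce identical digests, so a registry (or a local daemon) can skip transferring or storing the layer twice.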
My main dev machine is Linux so I use Rancher Desktop but I also have a MacBook Pro m1 machine. Orbstack is so much better than rancher and docker desktop. I know they are a small company but hell if their product isn’t significantly more efficient and better.
Completely agree. I moved from docker desktop to rancher after an update blew away my kubernetes cluster, and then from Rancher to Orbstack due to a number of bugs that were crashing the underlying VM. Orbstack has been rock solid (aside from one annoying networking issue), and it uses significantly less battery. They’ve done a fantastic job.
Related to the image format: has anyone tried to use alternative image formats? There was a different format/filesystem for containers that leverages deduplication between images (so a node won't need to fetch yet another copy of CUDA/PyTorch).
Docker Desktop on Mac is a handicapped, underprivileged mess. Docker cli for Mac with Colima is still underprivileged, but at least you can skip the bs license and Docker's gui. On Windows you can at least use Docker on WSL which works great. Why use Docker Desktop is beyond me.
I lived through a failed attempt to migrate from Docker Desktop for Mac to an open source alternative (minikube+portainer, IIRC). A lot of test scripts developers relied on – to run parts of the integration test suite on their laptops for debugging – broke, because Docker Desktop for Mac went to a lot of effort to make macOS look like you were running Docker on Linux, whereas the open source replacement wasn't as seamless. Some of these test scripts contained Java code directly talking to the Docker daemon over its Unix domain socket, so they needed the same API implemented. Many other scripts made heavy use of the Docker CLI. After spending a lot of time on it, it was decided to just go back to Docker Desktop for Mac. The failed migration had resulted in highly paid engineers spending time pulling their hair out trying to get test scripts to work instead of actually fixing bugs and delivering new features.
Now, that was 2+ years ago, and maybe the open source alternatives have caught up since, or maybe we picked the wrong one or made some other mistake. But I'm not rushing to try it again.
I would look at Orbstack. Yes it costs money but it is pretty great.
Your situation sounds very similar to the company I work for. Orbstack has been a drop-in replacement except for one issue: any dev with IPv6 assignment on their home network has problems where pods fail external DNS lookups because they try to use IPv6, and I don't think the Orbstack k8s instance is dual stack.
There are hacks to get around it, but aside from that one issue (which I wish Orbstack would address) I couldn't find a single other problem.
Orbstack is crazy fast and way better than docker desktop overall
The reason Docker Desktop for Mac looks like you're running Docker on Linux is because... you are. It's running docker in a linux VM for you.
Similar issues in our environment, and I managed to swap everything over to Rancher Desktop fairly seamlessly as it does the exact same thing. It runs a Linux VM and if you select the "dockerd (moby)" container engine it runs a copy of docker inside of it. So you get a socket with the same docker API implemented... because it's running actual docker. docker compose and everything else work as expected.
The reason we switched is that Rancher Desktop, along with providing a convenient way to run docker, also includes a full k3s install in that same VM. So we can work on unifying some of our stack/configs on kubernetes rather than docker for local and kubernetes for everywhere else. It also opens up using upstream helm charts and things when a developer wants to deploy and try something locally.
It's also free. Open source and backed by SUSE, who also develops and maintains the k3s distribution among other stuff in this space.
> The reason Docker Desktop for Mac looks like you're running Docker on Linux is because... you are. It's running docker in a linux VM for you.
Yes, but that wasn't what I was talking about. Docker Desktop for Mac goes to a lot of trouble to hide the fact that there are two different virtual filesystems involved (Linux vs macOS) and two different networking stacks too. That means scripts which run on Docker for Linux and do stuff involving filesystem/network integration between the host and the container will often work without change on Docker Desktop for Mac. In my past experience, open source alternatives don't offer as seamless integration, they don't do as good a job of hiding the fact that there are two different virtual filesystems and networking stacks, so those kinds of scripts are less likely to work.
Don't know your use case that precisely so can't say if it does a "good job" versus just "a job", and it's been a while since I've used Docker Desktop or macos (though about half of our dev team is using Rancher Desktop on macos right now), but as far as I'm aware it's essentially identical.
FS: Rancher mounts your `/Users/$USER` folder in the VM that Docker is running in. It supports virtiofs on macos (not sure if it's used by default though). As far as I can tell, this replicates the default Docker Desktop setup.
Networking Container -> Host: Connecting to `host.docker.internal` works as expected. On the host I can listen on a port on my host (`nc -l -p 1234`) and connect from a container (`docker run -it --rm alpine nc host.docker.internal 1234`).
Networking Host -> Container: Exposed ports work as I would expect. I can run a container with an exposed port (`docker run -it --rm -p 1234:1234 alpine nc -l -p 1234`) and connect from my host (`telnet localhost 1234`). I can't connect directly onto the docker network bridge (though I'm not sure if that was ever supported on OSX?).
No skin in the game either way here, just with a bunch of people suggesting buying OrbStack (which is OSX only), figured I'd throw Rancher Desktop out as a potentially viable cross-platform alternative that's also free and OSS.
Colima is the way to work with Docker on mac nowadays. I appreciate Docker Inc folks trying to get some money, but Docker Desktop is just not worth it.
Nah, they should have prioritized building some sort of PaaS solution like Cloud Run, Render, or Fly so they could sell that to enterprises for $$$. Instead they did the half-baked Docker Swarm, which never really worked reliably, and then rapidly lost ground to k8s.
Docker was a spinoff of an internal tool used to build exactly the type of PaaS you're describing. It was like a better Heroku and I loved it, but they shut it down when they focused on commercializing Docker itself.
There's also the issue that building an effective enterprise sales organisation is a whole Thing and if you believe you can achieve profitability via a different path then the temptation to file the enterprise approach under "I have no idea how to do this and also I would rather not" is probably pretty strong.
(this is in no way a comment about what the right decision would have been, only musing on an additional reason the decision might have gone the way it did)
That's what people usually say, but they tried to do just that a few years ago and it didn't really work. Docker Inc has been doing great since shifting towards even more standardization in their container runtime and focusing on dev tooling. They became profitable when they focused on Docker Desktop and Docker Hub instead of trying to build a clunky alternative to Kubernetes or yet another cloud orchestration tool/platform.
A lot of popular, lucrative systems are 'easy' to re-implement. I thought the value was in Docker images? Or is that not how Docker is used? The only way I've used it is to import someone's virtual build setup so I could build something from years ago.
I like when repos say "not implemented yet" or "to-do" or "working on" and the last commit was years ago. Makes me feel better about not going back to the to-dos I drop throughout my code. (Not meaning to throw shade at this author, just finding it comforting.)
I think it's good. I guess it's possible for something to be simply done, and you don't always have to have a bunch of next ideas, but I generally always have next ideas.
If there are always next ideas, then by definition you must always have todos that never get done. That should actually be the normal state of every single project.
Yeah it's weird, I feel like a repo is untrustworthy if it wasn't committed to in the past year, but sometimes a project is just done. Now, in actuality, there would likely be work on my end to update it for integration with modern tools/devices, but there's a repo from 12 years ago I've been considering using. Maybe it'll just work, maybe it'll be trash.
Great point! It is not shade at all; you are trying to normalize this, which I like. For unpaid, volunteer, or hobby code, feeling a _need_ because it's public can make coding less fun or prevent people from sharing code publicly that they otherwise would.
When you start a project it's worth spending some time thinking about "non-goals", i.e. features that come to mind but that you intentionally are not going to implement. It's absolutely fine and often very helpful to have clear scope boundaries so you don't end up chasing rabbits and having projects that never feel "finished."
Lazydocker sure looks interesting, but self-promotional ads - for products in an entirely different space - in an OSS project's README.md? Seriously? At least for me it is the first time I have come across anything like this.
I'm wondering if advertising like this is even allowed under GitHub's TOS and AUP.
I wonder why Bocker makes the frontpage so often. Is Docker still that controversial even in 2024? Why don't people recognize that it actually brought something useful (mainly, software distribution and ease of "run everywhere") to the table?
Exactly. "Docker" is boring, everyone uses it, everyone knows it, no one really wants to rewrite it (on Linux) except for parochial infighting or religious license reasons.
But Linux containers[1] are actually fascinating stuff, really powerful, and (even for the Docker experts) poorly understood. The point of Bocker isn't "see how easy it is to rewrite Docker" it's "See how simple and powerful the container ecosystem is!".
[1] Also btrfs snapshots, which are used very cleverly in Bocker.
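For the curious, the shape of that btrfs trick is roughly as follows. The paths and function names here are my invention, and real use needs root plus a btrfs-backed directory; the point is that images and containers are both just subvolumes.

```shell
#!/usr/bin/env bash
# Rough shape of the Bocker snapshot trick (illustrative paths).
set -euo pipefail
BTRFS_ROOT=${BTRFS_ROOT:-/var/bocker}

import_image() {  # unpack a rootfs tarball into its own subvolume
  btrfs subvolume create "$BTRFS_ROOT/img_$1"
  tar -C "$BTRFS_ROOT/img_$1" -xf "$2"
}

new_container() {  # a container fs is an instant copy-on-write snapshot
  btrfs subvolume snapshot "$BTRFS_ROOT/img_$1" "$BTRFS_ROOT/ps_$2"
}

rm_container() {  # teardown is a single subvolume delete
  btrfs subvolume delete "$BTRFS_ROOT/ps_$1"
}
```

The snapshot is O(1) regardless of image size, which is why container creation in Bocker feels instant.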
I like doing similar tricks with zfs snapshots for containers on linux and jails on fBSD.
(this is not intended to start a filesystem argument, I'm doing it with zfs because I already know zfs and it's available on both of the OSen I care about; if you already know btrfs and are only running things that support it, clearly you should use that for the exact same reasons ;)
It hits the frontpage often because people assume that Docker is this super complex thing, but (at its most fundamental), it's actually quite elegant and understandable, which is interesting - a perfect HN story, in fact.
It's possible it's not climbing the front page to slight docker, but rather that people are seeing that docker is something useful and want to know how it works. Bocker can be an entrypoint into the technologies.
I'm bringing overlayfs to people at my company to save time on a lengthy CI process, and they are in awe at the speedup. But after demoing it to a few people, I realized they could just have used Docker (or I could have brought them Docker) instead.
A huge (32 GB) git repo full of junk and small files, and no power to change it. I created a cron job that updates and rebuilds it every 24 hours; clients can clone it instantly with an overlay instead of doing a 30-minute checkout and build each time. Not to mention the disk space savings.
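For anyone wanting to try the same trick, a sketch of the mount. The paths are examples and the mount itself needs root; the cron-refreshed checkout is the shared read-only lower layer, and every client gets a private writable upper layer on top of it.

```shell
#!/usr/bin/env bash
# Sketch: give a job a writable view of a huge prebuilt tree via overlayfs.
set -euo pipefail

mount_overlay() {
  local lower="$1" mnt="$2" scratch
  scratch=$(mktemp -d)
  mkdir -p "$scratch/upper" "$scratch/work" "$mnt"
  mount -t overlay overlay \
    -o "lowerdir=$lower,upperdir=$scratch/upper,workdir=$scratch/work" "$mnt"
}

# usage (as root): mount_overlay /srv/ci/prebuilt /tmp/job1
# writes land in the private upper dir; the 32 GB lower tree is shared by all
```

Tearing a job down is just `umount` plus deleting the scratch dir, so the big checkout is never duplicated.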
A brother from another mother: https://bastillebsd.org/ Bastille manages jails using shell with many of the same constructs you'd find in docker. I like it over other jail management software in BSD because it has so few dependencies.
I'm also quite impressed by cbsd - https://www.bsdstore.ru/en/about.html - though that's more of a 'maximum overkill' solution in spite of being a CLI/TUI driven tool.
Currently I'm going through a phase of building and managing jails with just the stuff in FreeBSD base, but that's entirely intended to only be a phase - it'll last until I have the way all of it fits together burned into my brain well enough to be confident debugging it, and then I'll stop banging rocks together and go back to using higher level tools like a sensible person :D
Absolutely, it adds a lot of value for a shell script that is about 100 LoC.
By the way, it took me a while to get why it was named Bastille: La Bastille was a fortress built to defend Paris from English attacks during the Hundred Years' War, and was later turned into a prison.
> Because most distributions do not ship a new enough version of util-linux you will probably need to grab the sources from here and compile it yourself.
Careful. The default installation prefix puts the binaries in /usr/bin, and the install will happily clobber your mount command with one that requires a library that doesn't exist. Then the next time you boot, the kernel will mount the filesystem read-only.
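One defensive way to build it without risking the system mount is to install into a private prefix and invoke the new binary explicitly. This assumes a standard autotools source tree (which util-linux uses); the prefix path is arbitrary.

```shell
#!/usr/bin/env bash
# Build util-linux into an isolated prefix so /usr/bin/mount is untouched.
# Run from an unpacked util-linux source tree.
set -euo pipefail
PREFIX="$HOME/.local/util-linux"

build_private_utillinux() {
  ./configure --prefix="$PREFIX"
  make -j"$(nproc)"
  make install        # everything lands under $PREFIX, not /usr/bin
}

# afterwards, use it explicitly:
#   "$PREFIX/bin/mount" --version
```

Since nothing in $PATH changes, a broken build can't take your boot-time mount down with it.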
Two years ago I gave a presentation on how Docker works under the hood. After trying to understand docker, moby, and containerd and how they interact, I was so happy to find Bocker. It pretty much shows how it can be done while revealing enough of the magic moves that Docker itself is actually doing. Bocker is to Docker what Penn and Teller's cups-and-balls with clear plastic cups is to magic.
> Bocker runs as root and among other things needs to make changes to your network interfaces, routing table, and firewall rules. I can make no guarantees that it won't trash your system.
Linux makes it quite hard to run "containers" as an unprivileged user. Not impossible! https://github.com/rootless-containers/rootlesskit is one approach and demonstrates much of the difficulty involved. Networking is perhaps the most problematic. Your choices are either setuid binaries (so basically less-root as opposed to root-less) or usermode networking. slirp4netns is the state of the art here as far as I know, but not without security and performance tradeoffs.
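A minimal sketch of what rootless tooling has to set up, with no setuid helpers involved. Whether this runs at all depends on the kernel permitting unprivileged user namespaces, and the slirp4netns invocation in the comment is the usermode-networking step mentioned above.

```shell
#!/usr/bin/env bash
# Rootless "container" skeleton: user + mount + pid + net namespaces.
set -euo pipefail

rootless_shell() {
  # --map-root-user: uid 0 inside maps to your real uid outside
  # --net creates an empty network namespace: loopback only, no uplink
  unshare --user --map-root-user --mount --pid --net --fork \
    sh -c 'ip link set lo up; exec sh'
}

# outbound connectivity without root comes from usermode networking, e.g.:
#   slirp4netns --configure --mtu=65520 "$child_pid" tap0
# (the tradeoff mentioned above: unprivileged, but slower than a real veth)
```

Everything a setuid helper would normally do (veth pairs, bridge attachment) is replaced by the slirp4netns userspace stack, which is where the performance cost comes from.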
Is there any Docker alternative on Mac that can utilize the MPS device in a container? ML stuff is many times slower in a container on my Mac than running outside
The issue you're running into is that to run Docker on a Mac, you have to run it in a VM. Docker is fundamentally a Linux technology, so you first have to emulate/virtualize Linux, then run the container. That's going to be slow.
There are native macOS containers, but they aren't very popular.
You still pay the VM penalty, though it's a lot less bad than it used to be, and the Arm MacBooks are fast enough that IME they now generally compare well against Intel Linux laptops even so. But it sounds like first-class GPU access (not too surprisingly) isn't there yet.
Damn, good to know! I have been gaslit by the ever-changing Docker install instructions. Of course it would be a lagging version, but I think the Docker feature set converged years ago; why would I care any more about the Docker version than, e.g., about the version of grep?
How exactly is docker (buildkit, compose, the runtime, the daemon, etc) not open source? Docker desktop isn't, but that's almost entirely unrelated to the containerization technology that it uses or that people refer to when they talk about docker.
That service agreement is for using the Docker Desktop GUI tool which isn't open source (though free to use for small businesses etc) whereas the basic docker CLI commands are all open source.
[1]: https://github.com/hkoosha/bag
FYI, I think you forgot some important quotes in your script. Try shellcheck?
> mkdir -p $(dirname "$2")
That'll handle whitespace in paths, but if you want it to handle all path characters, dirname and mkdir need "--" here too.
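Put together, the defensive version of that line looks like the following (the helper name is mine): the quotes guard whitespace, and the two `--` markers stop paths beginning with a dash from being parsed as options by dirname or mkdir.

```shell
# create the parent directory of a target path, safely
mkparent() {
  mkdir -p -- "$(dirname -- "$1")"
}
```

This is exactly the kind of thing shellcheck flags automatically, which is why it's worth running on any script like this.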
Wow
Shell Fu and others have good collections of these https://www.shell-fu.org/
shell-fu.org is great, would like to see comments of other users though
Only complaint is that my home network assigns IPv6 addresses and that fucks up external dns lookups for pods in Orbstack.
Podman-Desktop is also great b/c it now has gpu support on macOS (for the Linux container)
I could not get LocalStack to run on Podman (w/ Docker emulation), on Fedora, so had to go back to Docker.
Love to hear that :) sent you an email about the k8s IPv6 issue — should be able to get it fixed in OrbStack
Why not use Docker CE if you're on Linux?
This is common in the Bazel community.
i pay for orbstack. makes life better.
i used it for a year or so then subscribed finally the other day. it really is well worth the money.
I've heard from someone who would know that you should be using Orbstack.
I just use a Debian ARM virtual machine and am done with it (M1). If I'm going to run a VM regardless, I may as well go with a full-fledged one.
I have a feeling we work at the same company. Well, maybe not, but we went through a strikingly similar experience around the same timeframe.
A fair amount of the Docker Desktop use, on both Mac and Windows, is driven by its internal workarounds for brain-dead corporate VPNs.
Docker for Mac does run on Linux. Just a stripped down lightweight VM. It's why file I/O is complete shit. It's a network share.
Use either the cached or delegated option for the volume [1] and then even NodeJS becomes decently performant.
[1] https://tkacz.pro/docker-volumes-cached-vs-delegated/
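For compose users, the flag goes on the bind-mount spec itself. A minimal sketch (service name and paths are made up; `:cached` relaxes host-to-container consistency guarantees, which is what speeds up `node_modules`-heavy mounts on older Docker for Mac — newer versions treat these flags as no-ops):

```yaml
# docker-compose.yml fragment (hypothetical service)
services:
  app:
    image: node:18
    volumes:
      - ./src:/app/src:cached      # host writes may reach the container with a delay
      - ./dist:/app/dist:delegated # container writes may reach the host with a delay
```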
Colima is the way to work with Docker on mac nowadays. I appreciate Docker Inc folks trying to get some money, but Docker Desktop is just not worth it.
I've been using Docker CLI for Mac happily for years. What am I missing?
I just use colima on macOS, it's a far better experience. Much lighter weight.
Nah, they should have prioritized building some sort of PaaS solution like Cloud Run, Render, or Fly so they could sell that to enterprises for $$$. Instead they did the half-baked Docker Swarm, which never really worked reliably, and then rapidly lost ground to k8s.
Docker was a spinoff of an internal tool used to build exactly the type of PaaS you're describing. It was like a better Heroku and I loved it, but they shut it down when they focused on commercializing Docker itself.
That was always weird to me they opted for freemium cli instead of enterprise paas play. Maybe it was just too early
My guess is the margins were really bad for a PaaS. It's expensive to build on top of other people's clouds.
There's also the issue that building an effective enterprise sales organisation is a whole Thing and if you believe you can achieve profitability via a different path then the temptation to file the enterprise approach under "I have no idea how to do this and also I would rather not" is probably pretty strong.
(this is in no way a comment about what the right decision would have been, only musing on an additional reason the decision might have gone the way it did)
dot cloud yes?
I was surprised when they shut that down too.
That's what people usually say, but they tried to do just that a few years ago and it didn't really work. Docker Inc has been doing great since they shifted towards even more standardization in their container runtime and focused on dev tooling. They became profitable when they focused on Docker Desktop and Docker Hub instead of trying to build a clunky alternative to Kubernetes or yet another cloud orchestration tool/platform.
Didn’t they buy at least one of these? It was garbage, and no one cared.
dotCloud was actually what Docker came out of. No one cared because they didn’t prioritize it enough to make it good
Orchard was the one I was thinking of.
A lot of popular, lucrative systems are 'easy' to re-implement. I thought the value was in Docker images? Or is that not how Docker is used? The only way I've used it is to be able to import someone's virtual build setup so I could build something from years ago.
But Rancher Desktop does the same too (and is also open source).
I think Docker is really lucky that devs still think container=Docker.
Podman is in many aspects superior, while still being able to function as a drop in.
Docker for Windows and Mac are both bloated pieces of software, outperformed by Rancher Desktop and Orbstack.
Docker's only real innovation was the OCI format, which it had to give away for it to become an industry standard, and now doesn't own.
Docker on Windows can use WSL2 engine for near native performance.
Docker for Mac is just unusable. They're not really adding any value there.
Have you tried out Orbstack?
+1 on orbstack. near-perfect drop in
I like when repos say "not implemented yet" or "to-do" or "working on" and the last commit was years ago. Makes me feel better about not going back to my to-dos I drop through my code. (Not meaning to throw shade on this author, just finding it comforting)
I think it's good. I guess it's possible for something to be simply done, and you don't always have to have a bunch of next ideas, but I generally always have next ideas.
If there are always next ideas, then by definition you must always have todos that never get done. That should actually be the normal state of every single project.
I feel like most — if not all — projects are never done. Knowing when to stop is important
Yeah it's weird, I feel like a repo is untrustworthy if it wasn't committed to in the past year, but sometimes a project is just done. Now in actuality there would likely be work on my end to update it for integration with modern tools/devices, but there's a repo from 12 years ago I've been considering using. Maybe it'll just work, maybe it'll be trash.
Great point! It is not shade at all; you are trying to normalize this, which I like. For unpaid, volunteer, or hobby code, feeling a _need_ because it's public can make coding less fun, or prevent people from sharing code publicly that they otherwise would.
When you start a project it's worth spending some time thinking about "non-goals", i.e. features that come to mind but that you intentionally are not going to implement. It's absolutely fine and often very helpful to have clear scope boundaries so you don't end up chasing rabbits and having projects that never feel "finished."
Totally OK! As soon as the program does what I want and my task is complete, I stop developing. Software is not my hobby.
Surprised no one's mentioned lazydocker as a great alternative to Docker Desktop (on Linux/macOS/Windows) [1].
It's a fairly full-featured Terminal UI that has the benefit of running over ssh:
[1] https://github.com/jesseduffield/lazydocker
Literally a few days ago: https://news.ycombinator.com/item?id=42214873
Lazydocker sure looks interesting, but self-promotional ads - for products in an entirely different space - in an OSS project's README.md? Seriously? At least for me it is the first time I have come across anything like this. I'm wondering if advertising like this is even allowed under GitHub's TOS and AUP.
I wonder why Bocker makes the frontpage so often. Is Docker still that controversial even in 2024? Why don't people recognize that it actually brought something useful (mainly, software distribution and ease of "run everywhere") to the table?
It's just a learning tool to see how docker works.
Docker is just a combination of kernel tech that already exists: namespaces, cgroups, union filesystems, and probably a few others.
Exactly. "Docker" is boring, everyone uses it, everyone knows it, no one really wants to rewrite it (on Linux) except for parochial infighting or religious license reasons.
But Linux containers[1] are actually fascinating stuff, really powerful, and (even for the Docker experts) poorly understood. The point of Bocker isn't "see how easy it is to rewrite Docker" it's "See how simple and powerful the container ecosystem is!".
[1] Also btrfs snapshots, which are used very cleverly in Bocker.
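Those kernel pieces are easy to poke at directly, no docker required. As a small illustration (Linux only), the namespace memberships of any process are just files under /proc; a "container" is simply a process whose namespace IDs differ from everyone else's:

```shell
# list the namespaces this shell belongs to (pid, mnt, net, uts, ...)
ls -l /proc/self/ns
# each entry names its type and inode number, e.g. "pid:[4026531836]";
# two processes in the same namespace show the same inode
readlink /proc/self/ns/pid
```

Comparing these inodes between a process on the host and one inside a container is a quick way to see exactly which isolation docker actually set up.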
I like doing similar tricks with zfs snapshots for containers on linux and jails on fBSD.
(this is not intended to start a filesystem argument, I'm doing it with zfs because I already know zfs and it's available on both of the OSen I care about; if you already know btrfs and are only running things that support it, clearly you should use that for the exact same reasons ;)
It hits the frontpage often because people assume that Docker is this super complex thing, but (at its most fundamental), it's actually quite elegant and understandable, which is interesting - a perfect HN story, in fact.
It is kinda complex, but all the complexity is in Linux, ~none is in 'Docker'.
It's possible it's not climbing the front page to slight docker, but rather that people are seeing that docker is something useful and want to know how it works. Bocker can be an entrypoint into the technologies.
Yes, it's a wonderful little read. Besides, without volumes and port forwarding, few would ever deploy this to production.
The reason people use docker over Podman and rolling their own is because of the ecosystem and ubiquity of docker.
I'm bringing overlayfs to people at my company to save time on a lengthy CI process, and they are in awe at the speedup. But after demoing it to a few people I realized they could just use docker (or I could have brought them docker) instead.
How does overlayfs speed up CI processes?
Huge (32 GB) git repo full of junk and small files. No power to change it. Created a cron job that updates and rebuilds it every 24 hours. Clients can get a copy instantly with an overlay instead of a 30-minute checkout and build each time. Not to mention the disk space savings.
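The trick above can be sketched in a few lines. This is a hedged sketch, not the poster's actual setup: the paths are hypothetical, the cron job is assumed to leave a fully built tree in the lower directory, and the mount needs root (or a user namespace).

```shell
#!/bin/sh
# Each CI client layers a private overlay on top of the read-only nightly
# build: it sees the whole 32 GB tree instantly, and its writes land only
# in its own upper layer, so the prebuilt copy is shared by everyone.
lower=${LOWER:-/srv/bigrepo-prebuilt}   # read-only tree the cron job rebuilds
upper=$(mktemp -d)                      # this client's private writes
work=$(mktemp -d)                       # overlayfs scratch (same fs as upper)
mnt=$(mktemp -d)                        # where the client actually works

if [ "$(id -u)" -eq 0 ] && [ -d "$lower" ]; then
  mount -t overlay overlay \
    -o "lowerdir=$lower,upperdir=$upper,workdir=$work" "$mnt"
  echo "overlay ready at $mnt"
else
  echo "needs root and an existing $lower; skipping mount"
fi
```

When the job finishes, `umount` the overlay and delete the upper dir; the shared lower tree is untouched.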
I am looking for a bash script that can pull docker images (via curl) and run them via chroot.
I don't need a separate network nor process isolation.
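Something close to that can be done against the registry v2 API with curl + jq. A hedged sketch (image, tag, and paths are examples; needs curl, jq, and tar, and it skips quietly when the registry is unreachable):

```shell
#!/bin/sh
# Pull an image from Docker Hub and unpack its layers into a directory
# you can chroot into. No networking or process isolation, as requested.
image=library/alpine
tag=latest
rootfs=${ROOTFS:-./alpine-rootfs}
reg="https://registry-1.docker.io/v2/$image"
accept='application/vnd.docker.distribution.manifest.v2+json'
accept_list='application/vnd.docker.distribution.manifest.list.v2+json'

# anonymous pull token for this repository
token=$(curl -fsS --max-time 15 \
  "https://auth.docker.io/token?service=registry.docker.io&scope=repository:$image:pull" \
  2>/dev/null | jq -r '.token // empty')
if [ -z "$token" ]; then
  echo "registry unreachable; skipping download"
  exit 0
fi

manifest=$(curl -fsS -H "Authorization: Bearer $token" \
  -H "Accept: $accept" -H "Accept: $accept_list" "$reg/manifests/$tag")

# multi-arch tags return a manifest *list*; drill down to the amd64 image
digest=$(printf '%s' "$manifest" | jq -r \
  '.manifests[]? | select(.platform.architecture=="amd64") | .digest' | head -n 1)
if [ -n "$digest" ]; then
  manifest=$(curl -fsS -H "Authorization: Bearer $token" \
    -H "Accept: $accept" "$reg/manifests/$digest")
fi

# each layer is just a gzipped tarball; unpack them in order
mkdir -p "$rootfs"
for layer in $(printf '%s' "$manifest" | jq -r '.layers[].digest'); do
  curl -fsSL -H "Authorization: Bearer $token" "$reg/blobs/$layer" \
    | tar -xz -C "$rootfs"
done
echo "done; try: sudo chroot $rootfs /bin/sh"
```

Note this ignores whiteout files, so images whose upper layers delete files from lower ones won't unpack faithfully; for simple base images it's usually fine.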
A brother from another mother: https://bastillebsd.org/ Bastille manages jails using shell with many of the same constructs you'd find in docker. I like it over other jail management software in BSD because it has so few dependencies.
I'm also quite impressed by cbsd - https://www.bsdstore.ru/en/about.html - though that's more of a 'maximum overkill' solution in spite of being a CLI/TUI driven tool.
Currently I'm going through a phase of building and managing jails with just the stuff in FreeBSD base, but that's entirely intended to only be a phase - it'll last until I have the way all of it fits together burned into my brain well enough to be confident debugging it, and then I'll stop banging rocks together and go back to using higher level tools like a sensible person :D
Absolutely, it adds a lot of value for a shell script that is about 100 LoC.
By the way, it took me a while to get why it was named Bastille: La Bastille was a fortress built to defend Paris from English attacks during the Hundred Years' War, and was later turned into a prison.
> Because most distributions do not ship a new enough version of util-linux you will probably need to grab the sources from here and compile it yourself.
Careful. The default installation prefix is /usr/bin, and the install will happily clobber your mount command with one that requires a library that doesn't exist. Then next time you boot, the kernel will mount the file system read-only.
Should also be /usr/local/bin.
While not good for daily driving, this gives you an idea of what docker is and how it works.
On Linux, docker is basically fancy chroot.
On macOS/Windows/etc., docker is basically fancy chroot in a linux VM.
Fun fact: docker started as bash, then moved to python before settling on golang.
Also, in a 2013 docker meetup, someone wrote a docker clone in bash.
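On the "fancy chroot" point: much of the fanciness is reachable from a plain shell via util-linux's `unshare`. As a tiny, hedged illustration, a user namespace lets an unprivileged user play root without actually being root (requires the kernel to allow unprivileged user namespaces):

```shell
# outside: a normal unprivileged user
id -u
# inside a fresh user namespace with uid 0 mapped to our real uid:
unshare --user --map-root-user id -u   # prints 0
# this "root" only governs resources owned by the new namespace;
# it confers no extra power over the host
```

Layer on `--pid --mount --net` plus a chroot into an unpacked image, and you're most of the way to what docker does at runtime.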
People want to learn! Hopefully things like this help them.
Two years ago I gave a presentation on how docker works under the hood. After trying to understand docker, moby, and containerd and how they interact, I was so happy to find Bocker. It pretty much shows how it can be done while revealing enough of the magic moves that docker itself is actually doing. Bocker is to docker what Penn and Teller's cups-and-balls with clear plastic cups is to the original trick.
If the author happens to see this: the link to your homepage on GitHub is broken - drop the "www."
This was written in 2015, I think we can get this down to 69 lines or less in brainfuck
Practicality aside, there seems to be a lot we can learn from the implementation.
Isn’t this how Docker started?
Does it require root access to the machine I have a user account on?
Yes, from the README:
> Bocker runs as root and among other things needs to make changes to your network interfaces, routing table, and firewall rules. I can make no guarantees that it won't trash your system.
Linux makes it quite hard to run "containers" as an unprivileged user. Not impossible! https://github.com/rootless-containers/rootlesskit is one approach and demonstrates much of the difficulty involved. Networking is perhaps the most problematic. Your choices are either setuid binaries (so basically less-root as opposed to root-less) or usermode networking. slirp4netns is the state of the art here as far as I know, but not without security and performance tradeoffs.
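To make the networking gap concrete, here's a hedged one-liner (assumes util-linux `unshare` and unprivileged user namespaces enabled): creating a user+network namespace costs nothing and needs no privileges, but the new namespace starts with only a loopback device, which is exactly the hole slirp4netns plugs with its usermode TCP/IP stack.

```shell
# host view: all interfaces (/proc/net reflects the reader's netns)
cat /proc/net/dev
# inside a fresh user+net namespace: only "lo", and no way to reach
# the outside without a setuid helper or usermode networking
unshare --user --net cat /proc/net/dev
```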
Is there any Docker alternative on Mac that can utilize the MPS device in a container? ML stuff is many times slower in a container on my Mac than running outside
The issue you're running into is that to run docker on a Mac, you have to run it in a VM. Docker is fundamentally a Linux technology, so you first virtualize (or emulate) Linux, then run the container inside that. That's going to be slow.
There are native macOS containers, but they aren't very popular.
Docker can run ARM64 linux kernel, no need to emulate x86
You still pay the VM penalty, though it's a lot less bad than it used to be. And the ARM MacBooks are fast enough that IME they now generally compare well against Intel Linux laptops even so. But it sounds like first-class GPU access (not too surprisingly) isn't there yet.
Podman-Desktop can do it
Makes me wonder why docker still hasn't made it into the Ubuntu/Debian repositories. It would be such an easy net benefit.
What do you mean? It's been there for years: https://packages.debian.org/docker.io
It’s an old version, and I think it isn’t supported by Docker Inc (for the reasons mentioned in the sibling comment), but it’s there.
Damn, good to know! I'd been gaslit by the ever-changing docker install instructions. Of course it would be a lagging version, but I think the docker feature set converged years ago; why would I care any more about the docker version than, e.g., the version of grep?
The buildx/build and docker-compose vs 'docker compose' are recent updates.
(a) Docker wants to bundle vendor libraries instead of using other packages and (b) Canonical uses LXD and MicroK8s instead.
Very interesting. With how standard containerization has become, we sorely need a FOSS solution.
Don’t we have them? I only casually use containers, but what about podman, runc, systemd-nspawn, LXC etc?
If Docker isn't open enough for you, check out Podman (now with extra CNCF).
how is docker open in any way?
https://i.imgur.com/2F0JmUw.png
In this way: https://imgur.com/a/PIkm7Eb
How exactly is docker (buildkit, compose, the runtime, the daemon, etc) not open source? Docker desktop isn't, but that's almost entirely unrelated to the containerization technology that it uses or that people refer to when they talk about docker.
That service agreement is for using the Docker Desktop GUI tool which isn't open source (though free to use for small businesses etc) whereas the basic docker CLI commands are all open source.
Why not podman?
Agree. Not sure about Mac, but on Windows, Podman + WSL works well. No need for Podman Desktop either, the CLI version is fine.
Is the original docker just a script? Have they not added anything to the container story themselves?