Am I missing a tool or something? Shouldn't I be able to run my server in my IDE and proxy it into a Compose network or Kubernetes namespace, so I get my IDE tools for free? Or at least have my Docker container run in "watch" mode, where a change to one of the files the container is based on restarts the process with the new files?
Also, any 3rd-party services (a database, for example) can be handled with a docker-compose.local.yml that omits your app image and instead builds it from `.`
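Roughly like this — a minimal sketch, where the service names ("app", "db") and the postgres image are my assumptions, not from the post:

```shell
# docker-compose.local.yml: build the app from the local source tree
# instead of pulling a published image, and run 3rd-party services as images.
cat > docker-compose.local.yml <<'EOF'
services:
  app:
    build: .              # local source instead of a registry image
    environment:
      DB_HOST: db
    depends_on: [db]
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: dev
EOF
# Requires Docker; shown for illustration:
#   docker compose -f docker-compose.local.yml up --build
```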
You can easily build lean, layered Docker images with it, and deploy those to container services like any other.
But you don't have to use those containers for development. You use Nix to set up your dev env (a lot will come for free after you have your code packaged for the container).
Nixpkgs has support for most mainstream languages nowadays, with varying levels of popularity. The more popular ones will have more polished Nix integration.
Now, if you _do_ want to use the container locally, you can do that too. And it will benefit from non-fragile caching thanks to Nix.
But tbh, if you need to replicate prod precisely on your machine to do local dev, you should probably consider figuring out how to build and test your components with confidence in isolation. Local simulation of prod can be useful sometimes, but if it's your default, you can do better.
Sounds like you need to reassess how you're using and thinking about containers when doing dev locally.
Look into running a container with the stuff your app needs installed, but running a shell instead of your app directly. Then look into mounting your source directory into the container using Docker's bind mounts (or whatever container tool you're using). Then things like auto-reloading (if your app supports it) should work using inotify-tools.
And by "app" I'm referring to whatever you're developing, most likely some kind of backend server?
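The workflow above, sketched out — image name and paths are assumptions; saved as a script so the one-liner is easy to reuse:

```shell
# Write a helper that starts a dependency-loaded image with the source
# tree bind-mounted (-v) and drops into a shell instead of the app.
cat > dev-shell.sh <<'EOF'
#!/bin/sh
# "my-dev-image" is a hypothetical image with your runtime + inotify-tools.
exec docker run -it --rm -v "$PWD:/src" -w /src my-dev-image bash
EOF
chmod +x dev-shell.sh
# Inside the container, a simple watcher can restart the app on change, e.g.:
#   while inotifywait -r -e modify /src; do ./restart-app.sh; done
```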
I've only used the VS Code version, but it appears the JetBrains IDEs support the concept as well.
VS Code injects a binary into your normal development container definition to create the "bridge". Local development files can be mounted into the container environment as well if you want the container to remain ephemeral.
https://code.visualstudio.com/docs/devcontainers/containers
https://www.jetbrains.com/help/idea/connect-to-devcontainer....
https://www.jetbrains.com/help/idea/remote-development-overv...
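A minimal devcontainer definition looks something like this (the project name and base image here are my assumptions); both VS Code and the JetBrains IDEs read `.devcontainer/devcontainer.json`:

```shell
mkdir -p .devcontainer
cat > .devcontainer/devcontainer.json <<'EOF'
{
  "name": "my-project",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "mounts": [
    "source=${localWorkspaceFolder},target=/workspace,type=bind"
  ]
}
EOF
```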
Here are a few pointers:
Podman runs rootless and avoids (some of) the problems with permissions this way.
It's possible to mount the source directory (when running a container) over the place you copy it to (when building it) so you can start a container once and rebuild and test inside it while you edit outside of it.
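Concretely, something like this — image name and the /app path are assumptions, and it requires Docker to run: the image was built with `COPY . /app`, and mounting the working tree over /app at run time makes host edits visible in the running container, so you rebuild inside while editing outside.

```shell
# Mount the live source tree over the path the image's COPY step used.
docker run -it --rm -v "$PWD:/app" -w /app my-app-image bash
```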
I think that containers are a good reason to make a technical distinction between unit tests and integration tests. The former should work outside the container to facilitate quick development whereas the latter can rely on the environment the container provides. That setup saves a lot of headache for configuring paths and dependencies.
Finally, I find it very important that building the software and executing the unit tests be possible outside the container. This way you can always use your local setup, maybe after some tweaking. That tweaking is the (small) price everyone has to pay every now and then, and it keeps the build environment from going stale. Imagine developing software with a frozen tool stack packed into a container ten years ago. Because that's what happens when everyone just uses the image.
For example I have a compose with 10+ containers in it. Each container that needs to talk to another has some kind of environment property to tell it the name of that other container. So the "api" container might have a property called DB_HOST="db", "db" being the name of the db container.
Now, when developing, say, the "api" image locally, your local dev server should be configured in the same way, providing the DB_HOST property to its environment. By doing this, you can completely stop the "api" container and let the local dev server take its place, configured to talk to the other containers running in the Docker network.
This way you are maintaining the local dev server setup that we've been using for ages and not developing directly on a docker image or dependent on its build cycle, etc.
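A sketch of the swap — the service name, port, and variable names are assumptions. One caveat worth noting: the compose DNS name "db" only resolves inside the Docker network, so a dev server running on the host typically points at the port the db container publishes instead:

```shell
# Stop the containerized api so the local dev server can take its place:
#   docker compose stop api        (requires Docker; illustrative)
# Give the local dev server the same config the container would get,
# but aimed at the db container's published port:
cat > .env.local <<'EOF'
DB_HOST=localhost
DB_PORT=5432
EOF
# Then start your dev server with that env, e.g.:
#   env $(cat .env.local | xargs) ./dev-server.sh
```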
I.e., instead of building a fresh container on every code change, you only build a fresh container when your Python version changes. You start a container and then, from within it, install your Python packages. Or take it a step further: bake the dependencies into the container so it only rebuilds when the dependencies change. The production container would inherit or be downstream from this, so that prod builds contain everything and are self-contained artifacts.
Replace python with rust, golang, etc. Doesn't matter.
The key is that you will need to abstract a base image, and then fork that into the dev image and the prod/stage/deployable images.
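One way to express that fork is multi-stage targets in a single Dockerfile — a sketch, with the language, versions, and stage names as my assumptions:

```shell
cat > Dockerfile <<'EOF'
# Base: rebuilt only when the runtime version changes.
FROM python:3.12-slim AS base
WORKDIR /app

# Deps: rebuilt only when requirements.txt changes.
FROM base AS deps
COPY requirements.txt .
RUN pip install -r requirements.txt

# Dev: dependencies baked in; source gets bind-mounted at run time.
FROM deps AS dev
CMD ["bash"]

# Prod: a self-contained artifact with everything copied in.
FROM deps AS prod
COPY . .
CMD ["python", "-m", "app"]
EOF
# Requires Docker; illustrative:
#   docker build --target dev  -t myapp:dev  .
#   docker build --target prod -t myapp:prod .
```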
Nowadays I use containers for the services, like redis, postgres, etc. But the app runs locally for dev. Works fine for standard web stuff.
This is what you should be doing, and you should not be building your artifact with docker build during development. If you can help it, you don't want to compile your application inside of a container at all. Build it outside and COPY it when you're ready to ship, or use a volume during development (docker run -v)
If you cannot rebuild outside of the container, you should be able to build your build environment as an image once, then exec into the running container to rebuild there, but you should NOT be rebuilding your docker images for each compile loop. It sounds like that's where you're encountering the pain.
If you are rebuilding your docker image every time you recompile your application, you're doing it wrong.
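The build-outside-then-COPY shape, sketched for a hypothetical compiled service named "server" (names and base image are assumptions):

```shell
# Build on the host first (e.g. `go build -o server .` or `cargo build`),
# then the ship-time Dockerfile only copies the finished artifact in:
cat > Dockerfile.release <<'EOF'
FROM debian:bookworm-slim
COPY server /usr/local/bin/server
ENTRYPOINT ["/usr/local/bin/server"]
EOF
# Ship:  docker build -f Dockerfile.release -t myapp:release .
# Dev:   run the binary directly, or mount it into a running container:
#        docker run --rm -v "$PWD/server:/usr/local/bin/server" myapp:release
```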
- Install my own Git using Gitea
- Install my own Repository instead of using Docker Hub
- Install Portainer
- Configure Gitea to use workers + actions
- Write the needed YAML to build the image, upload to local registry
- Configure hook on Portainer to recreate stack if image was updated
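The "needed YAML" step might look roughly like this — a hedged sketch, since the registry address, image name, and action versions are my guesses (Gitea Actions uses GitHub Actions-compatible syntax):

```shell
mkdir -p .gitea/workflows
cat > .gitea/workflows/build.yml <<'EOF'
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push to the local registry
        run: |
          docker build -t registry.local:5000/myapp:latest .
          docker push registry.local:5000/myapp:latest
EOF
```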
Of course there is a slight delay while the image is building, but I don't have to touch anything at all; I just code, commit, and a couple of minutes later the image is up and running.
Slow build times, slower execution times, annoying keeping them updated, especially with k8s.
K8s manifest autoloading works, and IDE support is somewhat there. Not sure about build caches, should be possible I think.
Only problem is the Kustomize overlay syntax is a bit hard to grok. You can also use Helm or raw kubectl deploy commands.
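For reference, the base/overlay layout is small once you've seen it — a minimal sketch, with directory and file names as my assumptions:

```shell
mkdir -p base overlays/dev
cat > base/kustomization.yaml <<'EOF'
resources:
  - deployment.yaml        # your shared manifests live in base/
EOF
cat > overlays/dev/kustomization.yaml <<'EOF'
resources:
  - ../../base
patches:
  - path: replica-patch.yaml   # e.g. drop replicas to 1 for local dev
EOF
# Requires a cluster; illustrative:
#   kubectl apply -k overlays/dev
```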
I do this with npm scripts for "compose", "start", "stop", and "reset" for every service and tie it all together with dotenv for environment vars. Currently, I have dockerized Traefik (partially), Webpack (dev server only so far), Pocketbase, PostgreSQL, PostgREST, Swagger UI, PgTyped, and MongoDB under this and will soon also dockerize the Express-based RESTish API feature.
https://github.com/dietrich-stein/typescript-pgtyped-starter
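The npm-script wrapper amounts to something like this — a sketch where the compose file path and env file name are assumptions:

```shell
cat > package.json <<'EOF'
{
  "scripts": {
    "compose": "docker compose --env-file .env -f docker-compose.yml",
    "start": "npm run compose -- up -d",
    "stop": "npm run compose -- down",
    "reset": "npm run compose -- down -v && npm run start"
  }
}
EOF
```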
https://tilt.dev/ (no affiliation, just a happy user)
I have noticed however that systemd containers (nspawn) don't have layered images but seem to simply run against a root file system hierarchy that you put on the disk.
This seems to me much simpler than dealing with diffed layers or whatever other container solutions do.
I avoided a lot of your troubles by coding/running/debugging the main program (the app server) outside of a container and keeping only the infrastructure parts inside (db, mail, ...).
It's only at release time that I embed the server part in a container.
2. You can step into Docker containers so that you can work inside, iterating on builds and such. If you have a scripted workflow that launches a Docker image to do a build, crack it open and develop a more interactive alternative.
It lets you use your local IDE to edit the code, while the actual container runs in the cloud. You can define and create thin or full environments (any number of services) running in the cloud, so there's no load on your local machine. Full support for debugging.
disclosure: I work at bunnyshell.
It provides a nice interface for creating native, local dev environments using the Nix package manager, which is especially helpful if you or your friends struggle with the Nix language. It also lets you use your local tools with your dev environment.
Incus (and LXD) make containers work in pretty much the same way as a VM, just without the emulation overhead. You get prebuilt images with a rich standard toolkit, systemd and services, SSH, and networking that's configurable from within in familiar ways.
Use systemd in prod to contain your apps automatically on launch: chroot the app and mount only the paths it needs, with nearly everything read-only.
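In unit-file terms that's roughly this — a hedged sketch where the service name and paths are assumptions: RootDirectory= chroots the service, ProtectSystem=strict makes the filesystem read-only, and ReadWritePaths= opens only what the app needs.

```shell
cat > myapp.service <<'EOF'
[Unit]
Description=myapp, sandboxed by systemd

[Service]
ExecStart=/usr/local/bin/myapp
RootDirectory=/srv/myapp
ProtectSystem=strict
ReadWritePaths=/srv/myapp/data
PrivateTmp=yes
NoNewPrivileges=yes

[Install]
WantedBy=multi-user.target
EOF
```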
IME a monorepo is nice here. All app code and infra code live side by side, and while running the containers locally is not an ideal dev experience, it's at least accessible and enables consistency across environments.
Another thing that I would question is why would you be running containers locally so much it becomes a problem?
As you said, containers are great for shipping code; use them for it. Locally, run your code in the current environment.
You should only run a container locally if you need to debug an error in production that you suspect is related to the environment.