Do you also have to handle that? How do you do it? Is there any good software available out there to make it easy?
Setting up a new machine requires a git pull and running `nixos-rebuild switch` (or `darwin-rebuild switch` on macOS), and my machine is set up "exactly" the same as the other machine: exact configuration, exact versions of software, completely deterministic.
No more running the custom GNU Stow scripts I used to maintain. No more fragile, non-deterministic Ansible playbooks that randomly fail.
The caveat: Nix requires a time investment from the user to learn (search Hacker News posts for Nix). Its language can be a bit unusual if you haven't used ML-style languages, its paradigm is a bit different from what you probably know, and documentation is sparse, although there's more of it now, plus a good few blog posts teaching it.
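For the curious, the whole "new machine" flow sketched above fits in a few commands. This is a rough sketch, not the commenter's actual setup: the repo URL and host names are placeholders, and it assumes a flake-based configuration, which they don't specify:

```shell
# Hypothetical repo and host names; assumes a flake-based config.
git clone https://github.com/you/nix-config ~/nix-config
cd ~/nix-config

# On NixOS:
sudo nixos-rebuild switch --flake .#myhost

# On macOS with nix-darwin:
darwin-rebuild switch --flake .#mymac
```

Because the flake pins its inputs, both machines end up on the same package versions without any further steps.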
GNU Stow is a symlink farm manager which takes distinct packages of software and/or data located in separate directories on the filesystem, and makes them appear to be installed in the same place. For example, /usr/local/bin could contain symlinks to files within /usr/local/stow/emacs/bin, /usr/local/stow/perl/bin etc., and likewise recursively for any other subdirectories such as .../share, .../man, and so on.
https://www.gnu.org/software/stow/
http://brandon.invergo.net/news/2012-05-26-using-gnu-stow-to...
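The same idea works for dotfiles: keep one directory per tool and let stow link the contents into $HOME. A minimal sketch, assuming a hypothetical ~/dotfiles layout:

```shell
# Hypothetical layout: one "package" directory per tool.
#   ~/dotfiles/bash/.bashrc
#   ~/dotfiles/git/.gitconfig

cd ~/dotfiles
stow --target "$HOME" bash git    # creates ~/.bashrc -> dotfiles/bash/.bashrc, etc.
stow --delete --target "$HOME" bash   # remove the bash symlinks again
```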
For command line, all the tools I use have the equivalent of “source other/file” in their config language so all my dotfiles begin with a source line that sets up my sensible defaults, and then local changes come afterwards. For example:
    # ~/.bashrc
    source ~/.config/dotfiles/bashrc

    # Local stuff
    GITLAB_TOKEN=…
.config/dotfiles is a git repo. I have very few local overrides, there's no install step needed other than a git pull, and no mucky symlinks. The initial setup is a script on my website that I curl-pipe-to-bash on new hosts.

For my dev environment I have a script that builds a Docker image from ubuntu:jammy and installs 30 Debian packages, 10 Python packages, and two locales that apparently I can't live without. When I run it, it bind-mounts $HOME into the container and then cd's to the directory I started in. With one command I can go from ~/projects/foo on my machine to ~/projects/foo in the container with all my favourite tools.
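A Dockerfile for that kind of dev image might look roughly like this; the specific packages and locales here are illustrative guesses, not the commenter's list:

```dockerfile
# Sketch of a personal dev image; package lists are illustrative.
FROM ubuntu:jammy

RUN apt-get update && apt-get install -y \
        locales git tmux ripgrep python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Illustrative Python packages.
RUN pip3 install black ipython

# The "two locales I can't live without" (hypothetical choices).
RUN locale-gen en_US.UTF-8 en_GB.UTF-8
```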
You can bind-mount SSH_AUTH_SOCK and your X11 Unix socket too, to run a bleeding-edge copy of xeyes. Docker on an M1 Mac means the same environment on macOS as on my dev servers and desktops.
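Putting the mounts together, the launcher script boils down to something like the following. The image name is a placeholder, and the exact flags are an assumption based on the description above:

```shell
# Hypothetical launcher; "my-dev-image" is a placeholder name.
docker run --rm -it \
    -v "$HOME:$HOME" \
    -v "$SSH_AUTH_SOCK:$SSH_AUTH_SOCK" -e SSH_AUTH_SOCK \
    -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY \
    -w "$PWD" \
    my-dev-image
```

The `-w "$PWD"` is what makes ~/projects/foo on the host land you in ~/projects/foo inside the container.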
I store anything other than random downloads on my NAS so there's literally 0 difference from one machine to another.
- .dotfiles/etc/git/.gitconfig
- .dotfiles/windows/setup.cmd
- .dotfiles/linux/setup_ubuntu
- .dotfiles/macos/setup
The setup script depends on the OS, using `scoop` for Windows (https://pilabor.com/blog/2021/12/automate-windows-app-setup-...), `apt` / `dnf` / `pacman` for Linux, and `brew` for macOS.
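The per-OS dispatch in such a setup script can be sketched like this; the function name and the exact package-manager mapping are my assumptions, not the commenter's code:

```shell
#!/bin/sh
# Sketch of OS-to-package-manager dispatch (hypothetical helper).
pick_installer() {
    case "$1" in
        windows)        echo "scoop install" ;;
        ubuntu|debian)  echo "apt install" ;;
        fedora)         echo "dnf install" ;;
        arch)           echo "pacman -S" ;;
        macos)          echo "brew install" ;;
        *)              echo "unknown" ;;
    esac
}

pick_installer macos   # -> brew install
```

In practice you'd detect the OS with `uname` (or `/etc/os-release` on Linux) instead of passing it in by hand.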
On Windows I'm still working on it, but Chocolatey provides a decent starting point for getting the dev tools I need installed; I just gotta figure out a way to save and restore configs.
Also, I keep my .emacs in a publicly accessible S3 bucket. The .emacs on each machine is just a stub to pull it down and evaluate it on startup.
Dotfiles with dotbot in a git repo to manage my user configuration.
I've used a few dotfiles managers, and this one required the least amount of bootstrapping for me. Needing to install stow or yadm first is an extra step I'd rather have the computer do for me.
The ansible repo is a submodule in the dotfiles repo because I'm too lazy to pull it separately.
I tried keeping my configs in source control but it always went out of date over the years. Dev-ing on a dedicated server has been easiest.
I blogged about it here https://her.esy.fun/posts/0021-my-personal-environment-sync/...
Among my personal computers, I'm running syncthing. I sync my Unix home directory separately from my documents, pictures, etc. folders, with some carefully crafted .stignore files that largely (but not entirely) mirror my .gitignore files (where they exist). To get around the fact that .stignore files themselves aren't synced, the actual list of things to not sync is stored in a separate file and #include-ed into the .stignore file. The first time I sync my home directory, I set syncthing to read-only mode with a custom .stignore file that excludes ~/.config, ~/.cache, etc., switch to including the .stignore.common file, then switch to bi-directional operation.
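The #include trick looks roughly like this (the entries in the shared file are illustrative; syncthing's ignore-pattern syntax does support #include):

```
# ~/.stignore -- per-machine, never synced by syncthing itself
#include .stignore.common

# ~/.stignore.common -- the shared ignore list, synced like any other file
.cache
.config
*.pyc
```

Since .stignore.common is an ordinary file, it syncs across machines, while each machine's .stignore can still add local exclusions above the #include line.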
I have shell scripts that install the usual niceties on Windows, Ubuntu, macOS, and Ubuntu on Windows.
I use Firefox Sync for browser stuff, although not every extension supports cloud synchronization (and the UI for the ones that do is rather clumsy).
For Ubuntu on Windows, I've symlinked key folders to their Windows equivalents, e.g., Documents. I run syncthing both on Windows (in the background using SyncTrayzor) and on WSL (in the background using GNU screen). The WSL syncthing instance just takes care of my Unix home directory, since the Windows syncthing handles everything else.
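The WSL-side links are one-liners; the Windows username path here is an assumption (WSL mounts the C: drive under /mnt/c by default):

```shell
# Hypothetical Windows home; adjust the username as needed.
win_home="/mnt/c/Users/$USER"
ln -sfn "$win_home/Documents" "$HOME/Documents"
ln -sfn "$win_home/Pictures"  "$HOME/Pictures"
```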
For VMs, I only sync the home directory.
I used to use roaming profiles and folder redirection and NFS and GPOs and SaltStack and so on and so forth, but those things are all really, really brittle. And slow. And buggy. NFS-mounted home directories have issues common to all Unix operating systems, chief among them no reliable offline mode. Windows has the Offline Files feature, but it breaks certain file system access patterns very badly. Few Windows apps correctly distinguish among the various shell folders, so they end up putting the wrong stuff in %APPDATA% or %LOCALAPPDATA%, or worse, they put stuff in the root of %USERPROFILE%. And trying to do silent installs of Windows apps is such a pain. So treating each laptop/desktop as a standalone computer plus syncing things here and there works out much better in the end. I kind of hate it but can't fight it any more. At least I've (thus far) successfully resisted Apple/Microsoft accounts and iCloud/OneDrive/Google Drive/Dropbox/etc., although that's probably going to change once Windows 11 has successfully been forced down everyone's throat.