Current state:
- A laptop with Linux, Windows, or macOS pre-installed and integrated into Microsoft AD is provided.
- The developer needs to install software, CLI tools, Git configuration and so on by following guides in our documentation.
- Docs are frequently updated to reflect latest changes within the tooling (new version of software, different commands, ...).
- Setup time is roughly a day for less experienced developers.
Now some people are asking for automation to get a "pre-installed" laptop that only needs some sign-in credentials to get started. I'm sceptical that this works without huge effort; developers' preferences might interfere a lot with the automation, especially in the upgrade case when updates happen.
Is there any other company/your employer doing such a thing and how well does that work?
1. There's very rarely a one-size-fits-all solution, in terms of what people need. Oh sure, if everyone needs a VPN then you should install the VPN and suchlike. If there's mandatory corporate security software, add that as well. But will they need docker? Will they need virtualbox? Will they need VS Code? CLion? PyCharm? IntelliJ? Do they need Java installed? Which version of Java? Will they need Altium? SolidWorks?
2. There's constant churn. You can't just create an image once and use it unchanged for years - there are so many components involved that there'll be some change or other needed pretty much monthly.
And you can't rely on volunteers helping out to scratch an itch - everyone in a position to do that has long since got their computer set up how they like it.
What's more, a lot of the changes you end up pushing out will be unpopular. Ain't nobody going to volunteer their time to add a giant block of all-caps legalese to a login banner, or to deploy enterprise security crapware like crowdstrike.
3. Laptops are imaged by level 1 helpdesk workers. People who can troubleshoot Linux problems and maintain setup images probably aren't in helpdesk at all, and certainly aren't level 1.
So if you imagine making an image and handing it over to helpdesk to maintain - you'll find doing so isn't possible.
We pre-install the licensed corporate tools and universally used tools like np++ and Git for Windows, and let them figure out the rest. Every time we bring up baseline changes with that group it turns into bikeshed city.
My goal was to be able to install any debian-based distro and run a single script, non-interactive, that would get me up and running. All that I needed to do was to clone code I wanted to work on.
After I created the dotfiles repo the process was:
- install distro
- check out dotfiles
- run script
- do I have everything I need? (minikube, k9s, jq, yq, asdf, terraform, git config, nvim config, etc.)
- if no, then:
  - fix, commit, push
  - remove distro
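A minimal sketch of that kind of single, non-interactive bootstrap script. The package list and dotfile paths here are illustrative assumptions, not the commenter's actual setup:

```shell
#!/usr/bin/env bash
# bootstrap.sh -- hypothetical non-interactive setup for a Debian-based distro.
set -euo pipefail

install_packages() {
  # DEBIAN_FRONTEND=noninteractive keeps apt from prompting mid-run.
  export DEBIAN_FRONTEND=noninteractive
  sudo apt-get update -y
  sudo apt-get install -y git curl jq neovim
}

# Symlink a dotfile into place, backing up any pre-existing regular file.
link_dotfile() {
  local src="$1" dest="$2"
  if [ -e "$dest" ] && [ ! -L "$dest" ]; then
    mv "$dest" "$dest.bak"
  fi
  ln -sfn "$src" "$dest"
}

main() {
  install_packages
  link_dotfile "$PWD/gitconfig" "$HOME/.gitconfig"
  link_dotfile "$PWD/nvim" "$HOME/.config/nvim"
}

# Guarded so the functions can be sourced and tested without touching the system.
if [ "${RUN_BOOTSTRAP:-0}" = "1" ]; then
  main "$@"
fi
```

The guard at the bottom is what makes the "fix, commit, push, reinstall" loop cheap: you can iterate on the functions without a full run.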
Personally I dislike Git, and haven't found a git UI I like, so throw whatever you want on there. I am not going to like it, but not because I want something else.
Since you support a range of OSes I assume you would go with a cross-platform editor: IntelliJ, VSCode, or something. Both are fine, and "something" probably is too.
I like beyond compare enough that I bought a personal license for it.
Give them something that 100% works out of the box; then they can customize the 1 or 2 things they care about.
It sets .zshrc, .zshenv, and .zshenv-private for tokens etc.
It also uses Homebrew to install a bunch of packages, then creates various config files.
It works well. Developer setup time went from days to less than 2 hours (corporate VPN is slow).
Edited to add:
Workflow is:
1. Install homebrew, xcode command line tools, Ansible, and then git+credential helper.
2. Clone repo (which ensures the new dev has correct roles/groups to access the repo)
3. Run Ansible.
Snippet of the Ansible playbook:

```yaml
---
- name: Configure dev macOS
  hosts: localhost
  vars_prompt:
    - name: githubtoken
      prompt: What is your github token?
      private: false
  tasks:
    - name: Create variable from brew prefix
      ansible.builtin.command: "brew --prefix"
      register: brew_prefix
      changed_when: false

    - name: Update Homebrew
      community.general.homebrew:
        update_homebrew: true

    - name: Install GNU Coreutils
      community.general.homebrew:
        name: coreutils
        state: present
```
One of the key pieces of Dev Home is Machine Configuration via WinGet Configuration files [1], which are an upgrade from PowerShell DSC (if you've tried that for server configs), with increased support for installing software from WinGet and cloning git repositories (to a Windows Dev Drive, if you want to allow that). These config files are just YAML, so you could very easily offer starter templates while also allowing developers to customize them based on their own preferences. (As YAML, they can also be source-controlled in git themselves.)
Some sample configuration files: https://github.com/microsoft/devhome/tree/main/docs/sampleCo...
Microsoft has been iterating on Dev Home pretty rapidly and adds new features regularly (relatively recently it added support to applying the same configuration files to building new virtual machines either locally under Hyper-V or remotely with Microsoft Dev Box).
I've not yet seen a company try to use WinGet Configuration and Dev Home as a company-wide initiative, but I have seen some growing "bottom up" usage as Developers like myself adopt parts of it for our own multi-device needs or virally share YAML snippets among each other for common corporate repos and tools.
[1]: https://learn.microsoft.com/en-us/windows/package-manager/co...
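A hedged sketch of what such a starter template might look like; the schema version and package IDs are examples modeled on the public samples, so check the linked docs for the current format:

```yaml
# configuration.dsc.yaml -- hypothetical starter template
# yaml-language-server: $schema=https://aka.ms/configuration-dsc-schema/0.2
properties:
  configurationVersion: 0.2.0
  resources:
    # Install Git via WinGet
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install Git
      settings:
        id: Git.Git
        source: winget
    # Install VS Code; developers can append their own editor of choice
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install Visual Studio Code
      settings:
        id: Microsoft.VisualStudioCode
        source: winget
```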
```
# .envrc
export API_KEY="$(vault kv get -field=API_KEY secret/projectA/dev)"
```
I also wanted to experiment using `pass`[2] instead, but messing with GPG keyrings and the like is unwieldy and not too user-friendly. It's what I use for personal projects since Vault is overkill.
[0] https://github.com/asdf-vm/asdf
Every place I’ve worked at where I had some control over workforce device management, we had devices shipping from the store directly to the end user, which would get automatically set up once the user logged in with their gsuite creds.
As for an installation script - I agree that is a big effort. ROI depends on frequency of the onboarding process. 10+ people per week? It might make sense. 3-4 people per year, not so much.
Smooth onboarding is a feature, not a baseline. Great onboarding takes consistent time, effort and energy to create and maintain.
Ensure that you have a predictable time to complete the onboarding. 1 day, 3 days, 2 weeks, etc. Time to complete onboarding should be very predictable and consistent.
Use relevant measures to reduce onboarding duration. Example: our developers need to install an Oracle database on their Windows machines, and normally this step takes several hours. We found that a Docker setup for Oracle can be completed in less than an hour.
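For example (a hypothetical sketch; the image path is Oracle's public "Database Free" container image, and the password is a throwaway dev-only value):

```yaml
# docker-compose.yml -- hypothetical dev-only Oracle setup
services:
  oracle:
    image: container-registry.oracle.com/database/free:latest
    ports:
      - "1521:1521"
    environment:
      ORACLE_PWD: devpassword   # local dev only; never reuse elsewhere
    volumes:
      - oracle-data:/opt/oracle/oradata   # persist data across restarts
volumes:
  oracle-data:
```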
they can automate the setup on their own if desired
also.. https://www.jetify.com/devbox for specific project tooling
If it's like 10 or even 25, then it's not worth the extra automation. More than 50 is when the time is worth it.
Using https://github.com/bradwilson/ansible-dev-pc (public equivalent to ours), you can automate the setup of a Linux laptop or WSL2 environment under Windows. This covers a fair chunk of our dev team. Once you start thinking about making this work on macOS, it starts becoming quite a bit more work, but it's not terrible if management values the effort.
If you have people doing mobile and Windows desktop development in addition to Linux web apps, it starts to get much more complex. Maybe you can't offer a fully "pre-installed" laptop, but you can automate the time-consuming parts like a Visual Studio install.
From experience, you are going to encounter a lot of developers who struggle more than you expect with troubleshooting and automating their work. This includes experienced devs; it seems to turn on interest and curiosity rather than years of experience. It helps to create a checkpoint at onboarding where you determine the best approach for each person: who accepts an automated happy path provided by you, and who wants to handle things on their own (and perhaps fails). You can then report the general pattern of issues to management and prioritize ways of improving your happy-path build so they don't reject it.
Devs need to manage their own. We hook it into AD/AAD, EDR and Office 365.
Dev environments THEY set up, as THEY are developers. If they can't install some stuff, can they even dev?
- Step 1. Use only Linux. You avoid having to translate between operating systems, you avoid paying expensive licenses, and you make your environments repeatable across dev machines. If a dev needs software that's only available on Windows or macOS, have them run it in Docker or in a VM.
- Step 2. Define your software, programs, and user configuration in a .nix file. Create variables for user-specific stuff (usernames, email addresses, preferred software, etc.) and put them at the top of the file, so that you or the dev can easily edit it. Keep the Nix file in a Git repo, but allow your dev to maintain a separate branch. Build the Nix file into the system with `nixos-rebuild`.
- Step 3. There is no step 3.
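A minimal sketch of what the step 2 file might look like; the username and package list are placeholders, not a recommendation:

```nix
# configuration.nix -- hypothetical fragment; edit the variables at the top
{ config, pkgs, ... }:

let
  # user-specific values, kept together so each dev can edit them easily
  devUser = "alice";
  devPackages = with pkgs; [ git jq terraform neovim ];
in
{
  users.users.${devUser} = {
    isNormalUser = true;
    extraGroups = [ "wheel" "docker" ];
    packages = devPackages;
  };

  # Docker for the Windows/macOS-only software mentioned in step 1
  virtualisation.docker.enable = true;
}
```

Build it into the system with `nixos-rebuild switch`, as described above.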
More than onboarding, there is also the question of changing your laptop and having to set everything up again.
Some pieces of software that help:
- Homebrew Bundle (https://github.com/homebrew/homebrew-bundle) with a `Brewfile` stored in git and shared;
- a dump of all dotfiles in $HOME, encrypted and stored in the company's cloud (because it contains SSH private keys, the password manager DB, etc.);
- All documents are synchronized with OneDrive.
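The shared `Brewfile` is just a checked-in list of formulas and casks; a hypothetical example (package choices are illustrative):

```ruby
# Brewfile -- shared baseline; everyone runs `brew bundle` after cloning
brew "git"
brew "jq"
brew "asdf"
cask "visual-studio-code"
```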
Mostly, there isn't a one-size-fits-all, but there is a common set of packages that all devs want; IDE choice is personal, as are many other things.
https://install.doctor/ is a higher level option which uses chezmoi.
Does that not work any more, or is it just not popular? Any insights as to why not?
If you can get an image with just those two tools installed then each repo can specify what things it needs and when you cd into the repo it will install all the tools you need.
All of us are AD integrated. Some software is managed (auto updated/installed) other software is up to us to manage.
Both setups work pretty well.
The internal IT team should already have a way to configure security software and the like, but will also not want to manage a specific group setup for what is likely a small number of people.
Reference.
Here comes a somewhat long form answer that doesn't cover everything but covers enough, assuming you have a tech department (be it SWE, testers, platform etc) of at least 100 users.
For Windows: Use Intune and Company Portal, don't bind to AD, but use Azure as that is a requirement for native Windows lifecycle management.
For macOS: Use ABM and ADE with an MDM (like Jamf, Mosyle, Addigy, Kandji); they all come with the option to deploy a self-service portal. Don't bind to AD, and don't log in with Azure, especially on single-user devices. It's antithetical to lifecycle management of macOS, and doesn't help. Do escrow FV2 to the MDM. Optionally you can enable DEP, which enables smoother transitions for existing devices. Using MAIDs can help with resolving a user to a directory user but isn't a requirement.
For Linux: if someone wants Linux, they usually also have specific preferences for their productivity. Providing something in terms of "here, have a linux" is something I have not seen being very effective or efficient. I have had fleets where there was a default supported setup (Debian mostly, but some orgs defaulted to Ubuntu). There were two approaches:
1. Hands-off, you want Linux, you'll probably know what you're doing, just enroll into osquery so we know your posture.
2. Hands-on, you can pick our defaults or roll your own, but you'll get a SaltStack Minion configured and our Master will append your installed packages and amend your configuration as needed. Interaction is purely via chat for end-users (chatbot style in IRC or channel integration with Slack).
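Under that model, a formula for the hands-on path might look something like this (a hypothetical sketch; state and package names are examples, not this org's actual formulas):

```yaml
# dev-baseline.sls -- hypothetical SaltStack state applied by the Master
dev_baseline:
  pkg.installed:
    - pkgs:
      - git
      - jq
      - podman
```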
As for how this then works provisioning-wise:
Windows: you publish packages to the Company Portal. They can be of many shapes and sizes but we've found that ensuring that everything is actually a package and not some wrapped script is the way to keep it clean and functional. We use pseudo apps that pull in dependencies to automatically install everything you need. So if you're maintaining some legacy .NET WPF app we'd give you VS, various .NET versions, WSL2 (so you can use normal Git and interact with our services), and perhaps Rancher Desktop if you need to mock some endpoints locally.
macOS: you publish packages to the self-service app of choice. Generally they are all Installer packages but it's fine if their payload is mostly scripts since it's mostly Unix under the hood anyway. We use groups of presets so someone can click "configure me as if I were building microservices with Go and NodeJS". This would result in Rosetta, XCode, Homebrew, JetBrains, VSCode, UTM, Podman all being setup in the background and the user either getting a SwiftDialog about the process or just a notification after completion.
Linux: you publish formulas to Git and for hands-off people can read those, for hands-on you ask your chat system of choice to handle it for you. We use some jinja2 magic to pull in many formula configurations at once, and inside the formulas we try to support at least from-source, apt and dnf. But often we'll only have the automation for apt as that is where the majority of "I need Linux but I don't want to do it myself" lands (the default distro we deliver). We don't have zero-touch provisioning but instead ask users to log in on our enrolment portal and download a small Go binary that does the enrolment for them.
For all of them we keep the sources in Git so we don't end up needing a billion service desk employees writing arbitrary application profiles for multiple platforms. If someone wants to add support for their special flavour of the week, they can do that, and after a review it'll be available. That also includes OSQuery packs we use for cross-platform posture management. There is no need to hide this, and especially in a technology BU you will only create friction with opaque top-down management. They (we) will always win as technologists will find ways to do things that get the job done and hide them if needed.
This model is of course not the same as say, accounting users, or marketing or C-level. They get Chromebooks or a computer of their favourite brand that is essentially turned into an expensive web browser kiosk. The only legacy 'fat' managed systems that remain are offline systems that you might use to manage PLCs in the field. That's mostly older (but supported) windows versions that people don't want to ever reboot or shut down because the old school way of managing those means it takes almost 50% of the time of a job to boot and authenticate before you can start to get any work done. But that's often as required by some compliance or regulatory regime.