What do you use VMs for regularly?
I know many people use VMs for work, or to test things they develop. Makes sense.
But what else do people use them for? I want to hear about interesting or unusual things you use a VM for.
For example, I have thought of running a VM only to use git in there, to try and see if magit runs faster in a VM than on the host macOS. I have also thought of using a VM to only run a browser in there, to keep memory usage under control. Not sure any of these are good ideas, but they are interesting.
What are your ideas or actual ways you use VMs?
> VM to only run a browser in there, to keep the memory under control
For other Linux users out there — a VM is not needed for this, use a cgroup with memory limits. It's very easy to do with systemd, but can be done without it:
$ systemd-run --user --pty --property MemoryHigh=2G firefox
The kernel will prevent Firefox (including all child processes) from using more than 2 GiB of RAM by forcing it into swap. To quote systemd.resource-control(5):
> Specify the throttling limit on memory usage of the executed processes in this unit. Memory usage may go above the limit if unavoidable, but the processes are heavily slowed down and memory is taken away aggressively in such cases. This is the main mechanism to control memory usage of a unit.
If you'd rather have it OOMed, use MemoryMax=2G.
It's actually very useful for torrent clients. If you seed terabytes of data (like I do), the client quickly forces more useful data out of the page cache. Even if you have dozens of gigabytes of RAM, the machine can get pretty slow. This prevents the client from doing that.
There are lots of other interesting controllers that can put limits on disk and network I/O, CPU usage, etc.
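For example (rough sketches, adjust to taste; the device path and limits below are placeholders), CPU and block I/O can be capped the same way:

$ systemd-run --user --pty --property CPUQuota=50% --property MemoryHigh=2G firefox

$ sudo systemd-run --pty --property "IOReadBandwidthMax=/dev/sda 50M" some-batch-job

(Depending on the distro, the IO* properties may only take effect for system units, since the io controller isn't always delegated to user slices.)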
I'm using VMs for everything. Disposable, self-destructing VMs for untrusted browsing. Network VM solely for connecting to the Internet, Firewall VM for isolating the network from other parts of my system. Work VM for everything connected to work. Archive VM (with no networking) for storing important files. Banking VM for managing bank accounts. Zoom VM for isolating Zoom from the rest of my system. And so on.
All this works with a great, unified interface on Qubes OS (https://qubes-os.org). See also: https://forum.qubes-os.org/t/how-to-pitch-qubes-os/4499/15.
I’m a teacher without a tech background. My role is to schedule a high school timetable.
To do this I have been given software that is single threaded and will only run a single instance, so I have taught myself Hyper-V, and run several VMs with alternative searches simultaneously. These searches can take 12+ hours to run.
The software also runs 25% faster when allocated 2 threads maximum than when 3+ are available…
WSL2 can’t be scanned by my work’s antivirus so node projects build and run at full speed.
(2 minutes to start the project outside vs 25 seconds inside)
Does WSL2 on Windows count?
Because 2 years ago I moved back to Windows from macOS as my daily driver because of WSL2.
I get the same "modern GUI on top, Unix-like shell underneath" experience that I had with macOS, but now I have a 24-core machine with 32 GB of RAM for a third (or less) of the price of what a similar Mac would have cost me.
My main workstation runs Linux. It has a second GPU (NVIDIA RTX 2080 Super), USB 3.1 card, and an NVMe drive passed to a guest via PCIe passthrough.[1] I have a 2x2 DisplayPort 1.4 KVM to drive my monitors with the host GPU on one side, and the guest GPU on the other side. The peripherals are connected to the host through any open USB port, and the guest through the PCIe add-in card.
Audio is handled with Scream[2], mostly so I can get >65536Hz sample rate. (Really terrible things seem to happen if you try to boot a qemu guest w/ the emulated audio attached to pipewire-pulse when the DSP graph has a 96/192 kHz sample rate. I've also had latency issues in the past w/ bona fide PulseAudio and the emulated audio card.) I do all my gaming and most of my browsing inside the Windows VM, which is bridged to my usual data VLAN. The Linux host is where I do development work, which lives on a separate experimental VLAN.
Other than that I run a few LXC containers for various services needed for running the LAN. (DNS, mail, VPNs, etc.) - I just want that stuff logically separated so that they can either (a) be moved to my new workstation in 2024, or (b) if one breaks it can just be rebuilt from scratch without affecting the others. It's also nice because I can use whatever distro works best for that particular package.
[1]: https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF
[2]: https://github.com/duncanthrax/scream
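If anyone wants to replicate this, the core of the setup is just binding the guest GPU to vfio-pci before the host driver grabs it. A minimal sketch (the vendor:device IDs are placeholders, find yours with lspci -nn, and your IOMMU groups need to cooperate):

# /etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:xxxx,10de:yyyy
softdep nvidia pre: vfio-pci

Then enable intel_iommu=on (or amd_iommu=on) on the kernel command line, regenerate the initramfs, reboot, and attach the device to the guest via libvirt/virt-manager.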
I use VMs to serve Plex and the software I run around Plex to make it fun and useful. I have a few NUCs that run ESXi, and in turn, my aforementioned VMs. The last time I rebuilt the box, I was considering going back to bare metal for Plex (at least), but the ability to treat the VM as effectively ephemeral, yet backed up in case something happened, is very useful. It is also theoretically portable if I got a second one and externalized it. I also like to think that virtualizing the machines lets me better carve up the otherwise-overpowered host machine and lets it decide how best to use its resources, but I haven't actually sat down and verified that that is what happens.
I use a VM on my work machine to get around Docker Desktop licensing on macOS. I also use VMs on my work and personal machines to test out new-to-me OSes that I want to play with but don't necessarily want to run full time. And once upon a time, I used VMs heavily to write and test Chef cookbooks, but those days are mostly over for me.
I use VMs for piracy. I use a Linux VM with iptables configured to only allow connections to/from a specific VPN IP/port. This is probably overkill, but it is nice to have everything separated and isolated without having to worry about IP leaks.
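The firewall part is basically a default-deny kill switch. A rough sketch (the endpoint IP/port and tun0 are placeholders for whatever your provider uses):

$ sudo iptables -P OUTPUT DROP
$ sudo iptables -A OUTPUT -o lo -j ACCEPT
$ sudo iptables -A OUTPUT -p udp -d 203.0.113.1 --dport 51820 -j ACCEPT  # the VPN endpoint itself
$ sudo iptables -A OUTPUT -o tun0 -j ACCEPT  # everything else has to go through the tunnel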
Docker. I'd rather have a proper environment I control than whatever bastardization "Docker Desktop" is. It uses VMs in the background anyway.
Depending on your hardware, Desktop Linux may work much better in a VM on a Windows host than booting the machine directly to it. Far more stable, less screwing with configs to get everything working. You know suspend will work (in the form of saving the VM's state). You get snapshotting, and not just of the disk, which can be really handy. Easy to clone machines for complete isolation of e.g. work projects. Depending on your workflow this can be nicer than one machine with multiple accounts (see again: state-saving suspends, even across host reboots).
Even more true for the BSDs and other, even more obscure operating systems.
Not really viable on portable machines, though. Too much power use. Desktop, however, is great.
I run a Linux VM on my Windows work machine so that I can safely access my personal accounts on the web. I am allowed to access my personal email, bank accounts, etc. on my work computer, but they have a web filtering agent installed with certs that MITM everything. In the VM I'm subject to a more restrictive filter, but without the certs or agent they can't MITM.
The latest macOS doesn't support my printer out of the box anymore but it's still supported on Linux, so I just passthrough the printer and print from there. Strange times.
I've been banned from selling on eBay and I get around it by having multiple Windows 10 VMs (running on proxmox) with 4G USB dongles passed through to each machine. It keeps everything tidy with no chance of me ever making a mistake and getting found out.
I used to use one for games.
I had a GPU (even two in SLi with hacked drivers at one point) passed through to the VM and used it for Windows / games (with a Linux host).
I've abandoned it since gaming on Linux itself has turned out to be alright.
I use them for many things, but most recently I found myself using a VM to avoid installing tracking software (essentially malware) required by my employer. I don't turn it on unless someone asks questions, and otherwise can go about my business without fear of being monitored.
Job provides Windows or Mac, but I prefer to use Linux. So I installed Linux in a VM and do all my work in it. VMware Workstation is rock solid with great performance. I boot Windows, then Linux in VMware Workstation, go full screen, and never look at Windows again for the whole day.
Tl;dr: this setup is probably very close to the GPU passthrough (with a twist) many folks are already using for GPU-heavy use cases in a VM, so I'm not sure it counts as unusual.
At work, we use VMs with NIC virtual function interfaces (https://www.kernel.org/doc/html/latest/driver-api/vfio.html) from the hypervisor as the first line of validation for our product (full disclosure: I'm part of the quality and automation team at StorPool Storage). This gives us an environment that is both close to the production systems, with network hardware acceleration enabled in the VM (i.e. kernel bypass), and easy to reproduce and re-create. There are some limitations and setup quirks with different orchestrations, but they are not live-migrated anyway, so not really an issue.
At home the main usage is for isolated environments (not unusual), and recently to access a ZFS pool on an older FreeNAS/FreeBSD drive (ZFS on Linux on Ubuntu could not detect it for some reason; I haven't looked into it further). I was a bit surprised that passing through the whole storage controller with the sole SATA disk behind it (the host root OS is on an NVMe) is actually slower than attaching the disk to the VM as a virtio raw disk (i.e. /dev/sdN).
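For reference, the virtio raw-disk attachment is a one-liner on the qemu side (a sketch; /dev/sdb is a placeholder, and with libvirt it's the equivalent block-device <disk> entry):

$ qemu-system-x86_64 ... -drive file=/dev/sdb,format=raw,if=virtio,cache=none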
Running Photoshop on a Windows VM. There's so much trash Adobe pumps into their products that I don't want this anywhere near my Mac...
Running Linux on my mac. A VM is particularly useful because I can spin one up to install some crapware needed to work and then discard it.
Installing tunnels and certs needed for contract work.
Experimenting with new software environments.
I do not know if it fits the definition, but it's my first step for trying "dangerous" stuff, like debugging a macOS kernel extension or installing an exotic OS.
And of course for learning. For example you can learn a lot about operating systems if you can just run any older version at your will.
Also, you are obviously speaking of virtualization of a given hardware platform, but a lot of stuff is modeled as a "virtual machine" in the generalized sense - e.g. Python's pickle format.
Running Linux desktops on a Windows system, and vice-versa, for development, testing or simply running some program without having to move to a different room and power up the desktop system, for example.
Testing and playing with older systems (Windows 9x, Linux distros from the 90s) for fun and kicks (or to compile some stupidly old tarball of something that sounds interesting or fun but hasn't been updated in 15 years).
Building network meshes to play around with running BGP, OSPF etc. on FRRouting
I develop in an Ubuntu Multipass VM on my M1 MacBook Air. It's great -- I get all the GUI/ecosystem niceties of macOS, but work in "real" Linux. I mount my Mac home folder on the Ubuntu VM, so in a way this is kind of like "WSL for macOS" -- I sometimes even forget that I'm not actually on my Mac, it's so seamless.
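The whole setup is only a couple of commands, roughly (names and sizes are just my preferences, and flag spellings vary slightly between Multipass versions):

$ multipass launch --name dev --cpus 4 --memory 8G --disk 40G
$ multipass mount $HOME dev:/home/ubuntu/mac-home
$ multipass shell dev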
For a while I was using Parallels and/or UTM, but honestly (perhaps due to my /r/unixporn addiction), not having the option of a GUI on the Linux VM is great (well, it's technically possible with Multipass, but it's a bit involved). For work, I prefer to treat Linux as command line-only, nothing more -- "OS as IDE," that whole thing.
Since it's a VM, I'm happy to just blow it away and start over if I screw something up (though, the lack of a DE greatly simplifies and reduces the number of things that can go wrong). I have a single repo with all my dotfiles, and if I pull that down and symlink a few things, I have my old environment up and running in minutes. I've been meaning to look into nix, but haven't yet since it's known to have such a steep learning curve. For now, though, this is working great; my needs are so simple and my tools so few that I honestly don't know if the juice will be worth the squeeze.
I also tend to keep my Mac pretty clean and "unpolluted," as I generally create bespoke build environments in VMs. So even the Mac is fungible.
I've honestly never been happier with a dev machine/environment. No more weird macOS/Linux inconsistencies, hardly any maintenance overhead, aside from customizing my dotfiles (which I really enjoy -- it's like gardening, in a way). It all just works.
I have a DVD-A ripper that only runs on Windows. Because I use a Mac, I primarily use a Windows VM just to rip DVD-A disks.
(DVD-A is a variant of the DVD format where audio is lossless, compressed using MLP. (Very similar to FLAC.) In general, DVD-A is 100% obsolete because Bluray supports lossless audio over HDMI without a special player. Unfortunately, some artists still release on DVD-A for reasons I don't fully understand.)
On my bare-metal server I use VMs to isolate certain network concerns like my mail server and my VPN (WireGuard) server. Regarding WireGuard, this was necessary because the host (Debian Buster) does not (or rather: did not) support it. Furthermore, I do not want to allow the host OS (and the services running in Docker) access to my internal networks.
My VMs are managed using libvirt/virt-manager (over SSH).
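If you haven't tried it, pointing either tool at a remote host over SSH is just a connection URI (user and host are placeholders):

$ virt-manager -c qemu+ssh://user@server/system
$ virsh -c qemu+ssh://user@server/system list --all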
Malware analysis. I run Windows malware on FLARE VM with all traffic routed to REMnux, a Linux distro that emulates internet services with INetSim.
I work in industrial controls and I have probably at least a dozen different VMs for all of the various programming software packages (each brand of PLC generally has its own proprietary software stack) and versions. Some of these things really don't play nice together and have a tendency to "blow up" now and again, where the whole OS needs to be nuked and reinstalled from scratch. Some programs are a nightmare to get working right, and it's nice to be able to share working VMs with my coworkers.
IT also can't get their grubby fingers on the software inside and break anything. I also don't even have admin rights to change my network settings on my host OS, which is 100% required for the job as I need to connect to machine networks running static IPs. With VMs, I can get a USB Ethernet dongle, give it to the VM, and get network control that way.
These days I do almost nothing on my host OS.
Not strictly VMs, but I love working in remote containers, like GitHub Codespaces.
They have made my side projects much easier to work on and collaborate on.
I only use VMs to run other operating systems on the server; for all other purposes I use containers. For example, some document servers insist on running on Windows while I insist on not running Windows. The solution is found in a VM which I fire up on demand when I need to access something from that server, accessed through (web)VNC. Once done I terminate the VM. All other services run in containers, with VMs and containers managed through Proxmox:
- qm start 600 to start ELSA, the VAG document server. Connect through VNC, find whatever is needed, followed by qm shutdown 600.
- pct start 209 to start the Debian-stable build server. Connect through SSH, build whatever is needed, package and copy the result out of the container, followed by pct shutdown 209.
- pct start 208 starts the bookkeeping server, pct start 501 the backup router, etc.
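The on-demand pattern fits in a tiny shell wrapper. A rough sketch (the client invocation is a placeholder):

with_vm() {               # start a Proxmox VM, run a command against it, shut it down again
    local id=$1; shift
    qm start "$id"
    "$@"                  # e.g. your VNC or SSH client pointed at the guest
    qm shutdown "$id"
}
# usage: with_vm 600 remmina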
Testing features on Linux. Sounds dumb and obvious, but if you use a Windows laptop for work, being able to pipe into a Linux VM that is corporate approved is nice. I also use it for certain debuggers that aren't supported on my host OS. Basically it's a stand-in for the fact that I can't use Linux at work.
I recently brought up a Windows 10 VM and a couple Debian, FreeBSD, and OpenBSD VMs to test building C software on multiple platforms and toolchains. It's notoriously hard to run MSVC on non-Windows platforms, especially my M1 Mac, so having a dedicated build VM is a must for me. I use QEMU on the command line to virtualize everything. Although it was incredibly hard to get both x86_64 and aarch64 Windows reliably working (3+ days of waiting for installers and trial and error, in case you're wondering) it's worth it as now I can use a shell script to boot each VM, rsync a folder full of source files, and then run cmake to check for build errors, all from the host machine. It's kind of close to some form of CI at this point.
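The per-VM loop is nothing fancy, roughly this (a sketch; the VM names, launcher scripts, user, and sleep are placeholders for my actual setup, and the hostnames assume matching ssh config entries):

for host in debian freebsd openbsd; do           # the Windows guests get their own variant of this
    ./boot-"$host".sh &                          # placeholder per-VM qemu launcher script
    sleep 60                                     # crude wait for sshd in the guest
    rsync -a --delete src/ build@"$host":src/
    ssh build@"$host" 'cmake -S src -B build && cmake --build build'
    ssh build@"$host" 'sudo poweroff' || true
done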
Occasional gaming on my wife's work PC (in parallel while she's working on it).
VMware can provide DirectX 11 support for the guest OS, and there are many kinds of gaming-friendly remote desktop applications to play remotely from my Linux laptop.
Originally inspired by this LTT video: <https://www.youtube.com/watch?v=-Mgnwn4twZE>, except that I wanted "work" system to have 100% of all resources when "gaming" one is not used (since it's started up only once in a few months). Hence, nothing as fancy as in video: "host" OS is the work system, and "guest" OS is booted up once in a few months for occasional gaming.
I use a VM (WSL) on Windows for git. I could run git on Windows directly, but I have a whole setup of config files and an ssh agent that I already have working on Linux, so I just use that. It is a bit annoyingly slow when using it on a repo that lives on the host, though.
I help one of my co-workers out with some industrial work. It's AB Studio 5000 and Wonderware Edge. These programs are terrible. They are not backward compatible, so the standard thing every controls engineer seems to do is have 500 different VMs, one for each version of each individual program. It's bonkers.
Working on industrial equipment even with current technology is like going back 20 years. The controls software ecosystem seems to just barely be accepting version control now, instead of just directory after directory of duplicates named "XXX - v1.000 - I really deployed this one to plant XXX".
From my time back at VMware, I got into the habit of installing all my desktop apps in their own VMs. This makes switching to a new PC trivial: reinstall the OS, install VMware Desktop, move a folder of VMs. Done.
Very occasionally I need Windows for something, and then a VM would be nice, but Windows has been growing in size lately - so much that I don't want to use it anymore. So now I am trying to make do with just Linux.
Otherwise, I am using VMs for edge compute, caching and cache configuration etc. I think the caches of tomorrow are those that can be fully programmed, safely in a sandbox.
I used to work on unikernels, but I have become less interested simply due to the fact that while people find them interesting, in the end they all want Linux underneath.
I run five VMs at my house on a W2022 machine, running consumer-level hardware.
1. pfSense - the router & firewall for my house; runs an OpenVPN client
2. monitoring - a Linux box I can SSH to; port 22 is forwarded here
3. win11VM - a Windows box I can RDP to; it runs 2FA
4. winDC - a Windows domain controller for my house
5. transmission - networking forced to go over OpenVPN, for better Linux ISO sharing
I use them to reproduce bugs (both my bugs and bugs found by my customers), and give away the resulting disk images, if the software vendor has some difficulty reproducing the bug.
I use VMs for many things; most notably as a development machine for work. I have a VM running Linux on a separate PC, where I use the VS Code SSH extension to develop on that machine but run VS Code itself on macOS. The main reason for this is that running large applications makes my laptop super slow, but I'm too used to the UI. I just run the builds there, which makes performance much better, though it poses some challenges at times.
I've been nerd-sniped by an old Forth variant, and with a lot of help from here, got it working... then my retina blew out... (so pause)
It's Linux only, and it's either Ubuntu in a VM, or WSL to get it to compile.
I've tried more than once to get it to compile for Windows, where I understand things better, but no go.
I run R in a Debian VM because Gentoo emerge of R fails to compile on my machine and I cannot be bothered to figure out the problem this time.
At work we're running a self-hosted service. The issue is that the service is quite old and the client only works with similarly old OpenJDK versions, so the VM runs the client.
I'm also running android-x86[1], for a single mobile game with an x86 port
[1] https://www.android-x86.org/
I have been close to installing a Windows VM only to get proper support for the Microsoft Office suite, which is used a lot at my new job. The browser version available on Linux is just not very good. The worst offense is that it doesn't open template files at all, but the features supported in Excel are also kind of weak.
I release versions of NSIS with a VM of Windows 2000 running Visual C++ 6. It creates the smallest executables possible.
I use Google Cloud Shell (a remote VM or a Docker container, whatever); it is fast and I can try a bunch (most) of things real quick.
clone repos, docker-compose up, open web preview. It saves time and keeps local machine clean.
Also, I made a script to install a DE and noVNC, so I can browse in it and not feel guilty about the local firewall.
My HTPC/couch driver is a Windows VM living inside of Unraid. Has been since 2019. Really happy with it. Close to bare-metal performance. Using it for gaming, movies and browsing.
The VM has four CPU cores with a passed-through graphics card, sound card, and its own SSD. Just to be able to watch 4K HDR content in Kodi.
Follow up question: What do you use to automate setting up a vm, ssh keys, etc? I tried using Ansible and it was like its own DSL, yet another one to learn. At this point, anytime I need to learn a DSL, I am instantly turned off from using that thing. Is there anything better?
Very occasionally: playing old games that don't run well (or at all) on modern operating systems.
I just recently spun up 16 Linux VMs in Virtualbox to simulate industrial scales connecting to my desktop software using Python and performing measurements. I wrote several scripts to copy Python code to each machine with scp and control them using Bash with WSL2 on Windows.
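The control side was just a loop like this (a sketch; the host-only addressing, user, and script name are placeholders for what I actually used):

for i in $(seq 1 16); do
    ip="192.168.56.$((100 + i))"             # VirtualBox host-only network
    scp scale_sim.py "user@$ip:"             # push the measurement script
    ssh "user@$ip" "python3 scale_sim.py" &  # kick it off on every VM in parallel
done
wait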
I have a VM running an old Windows 7 license I have, for the sole purpose of being able to continue using Microsoft Money Sunset Edition. The only two applications I run in this VM are Money and Firefox (only to connect to my bank).
I find it a lot easier to find a Docker container with whatever service I want to use vs trying to figure out how to install it locally. Nothing super exotic, but currently I'm running OpenSearch and PostgreSQL in a VM (via Docker).
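For example, inside the VM it's just (a sketch; image tags, ports, and passwords are placeholders, and recent OpenSearch images may also want an initial admin password variable):

$ docker run -d --name pg -p 5432:5432 -e POSTGRES_PASSWORD=changeme postgres:16
$ docker run -d --name search -p 9200:9200 -e discovery.type=single-node opensearchproject/opensearch:2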
I write about a product called MAAS, a bare-metal provisioning system. I have a houseful of NUCs of different vintages to provision, but sometimes I want to try odd configurations and/or create error conditions to build troubleshooting docs. LXD VMs are better for this than NUCs, because they are easier and faster to recover when things go south.
Also, I use them to do large builds of code on OS versions other than the one I'm running on my laptop. And I use them to test packer-built custom versions of FreeBSD, RHEL, Alma, Rocky, etc., because I can control the interfaces and storage more easily when debugging my packer builds.
A pretty regular one: Some dev / analyst / whatever wants one. Nothing more, nothing less.
I've had great use of VMs when testing more boutique and unsupported software, where I need to test compatibility on various OS versions.
VMs for Windows mostly. We used to also use Vagrant a lot for development, but that's moved to Docker and then Kubernetes. Some of those on some platforms still use VMs on desktop GUIs but those are completely managed.
I use it to play Total Annihilation on LAN. I tried out all the 3rd party patches, nuking firewall/defender with DefenderControl but nothing worked except full virtualization.
I use FreeBSD for bhyve. It works pretty well on my old Lenovo X1 laptop.
Lately with docker it hasn't been as necessary, but I used to run VMs for just about every service so I don't have to worry about other apps being affected by an OS issue.
I use VMs at work, always hosted in the cloud, to work with GPUs or to deploy some app.
Outside of work, I use VMs for distro-hopping, i.e. regularly trying out new Linux distributions.
To package software for my main hosts. I can avoid installing lots of compile-time dependencies on the main hosts and avoid leaving behind the potential garbage of `make install`.
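Concretely, inside the throwaway VM it's the usual DESTDIR dance (a sketch):

$ ./configure --prefix=/usr/local
$ make
$ make DESTDIR=/tmp/pkgroot install          # nothing touches the VM's real filesystem
$ tar -C /tmp/pkgroot -czf mypkg.tar.gz .    # copy this to the main host and unpack it there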
I use VMs mainly as development machines. Back in the day that was on a local server; nowadays it's mostly in Azure/AWS, mainly to separate clients / software.
Currently, I'm not using them that much. However, I use containers intensively for some workloads like services on my UnRaid server or for development.
Mostly as a clean slate to run ansible against. That in turn sets up various services.
I could put multiple services on a single VM, but the split makes for an easier mental model.
At home I use VMs for work environments (usually Ubuntu desktop VMs). Home PC is a gaming computer. All my laptops run Ubuntu.
To run docker/k8s on machines that don't seem to run docker natively (Win 10 and Mac OS X).
Proxmox on my home server with a few VMs for different purposes. Parallels on my MacBook Air M1
Windows Sandbox to run Adobe Flash to access old data rooms.
I used a VM to host my Windows development environment.
I use VMs when reverse engineering malware.