HACKER Q&A
📣 bccdee

What's the best way to secure your workstation?


Here's a very plausible threat: the maintainer of some left-pad-style package, a dependency of a dependency, injects malware into their library. Another developer (who is broadly trustworthy) updates their own package's dependencies without auditing them properly, and the malware ends up in a VSCode plugin that you use. You open VSCode; your system is infected.

We know this sort of malware is making its way onto package repositories [1]. We know people are falling for these attacks. How do we protect ourselves against this family of threats?

[1]: https://www.theregister.com/2021/07/21/npm_malware_password/
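To make the mechanism concrete: npm executes a package's install-time lifecycle scripts ("preinstall"/"postinstall") with your full user privileges during `npm install`, so a poisoned transitive dependency gets code execution before you've run a single line of your own. One blunt, partial mitigation:

```bash
# Refuse to run install-time lifecycle scripts at all (this breaks
# packages that legitimately need a native build step):
npm config set ignore-scripts true

# Or per invocation:
npm install --ignore-scripts
```

It's only partial because it does nothing once the malicious code sits in the library itself and runs on import, which is exactly the VSCode-plugin scenario above.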

We could trust nothing beyond our base system and our browser, and refuse to use any code we don't fully audit, but this would be an impossibly austere way to live. I expect most of us, when pressed, would admit that we're trusting much more code than we would like to.

The alternative is sandboxing, using a lightweight option like firejail (which I use) or a totalizing system like Qubes OS. But these systems are awkward to use and have their own drawbacks.

What's the bar for reasonable security, in your opinion? How do you secure your workstation without living like a monk?


  👤 emerongi Accepted Answer ✓
My main OS is Fedora Silverblue, which is "an immutable desktop operating system". I install GUI software through Flatpak. For development, I run a VM (Fedora Server) and connect to it through SSH (VSCode works really nicely here). I have different VMs for different use cases, but mostly I work in just two "fat" VMs. I try to be diligent about what I install and use in the main OS as well as in the VMs.

It's not entirely safe, but I think it gets me 90% of the way to a reasonably safe workspace. If there is malware in a VM, I can nuke the VM and reset the affected credentials from my main OS (which is not infected). It's not much extra overhead: I just SSH into the VM and work as usual. I've used Qubes before and have also tried a fully Docker-based workflow (developing exclusively in containers), but there can be too many headaches with either.
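To sketch the daily loop (hostnames and paths invented; assumes VSCode's Remote-SSH extension on the host, and the exact invocation may vary):

```bash
# The code, toolchain, and every npm/pip install live in the VM; the
# host runs only the editor front-end and an SSH client.
ssh dev@devbox.local

# VSCode can open a workspace directly on the VM over SSH:
code --remote ssh-remote+dev@devbox.local /home/dev/project

# If the VM ever looks compromised: destroy it, restore a clean image,
# and rotate any credentials that were reachable from inside it.
```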


👤 hvgk
With modern software distribution infrastructure you are entirely fucked at this point. It's the number one business-critical risk from our analysis. And it's not just developer workstations: it's also stuff sneaking into production during a build cycle, waved through or missed by the participating tools that are supposed to stop it. The developer workstations aren't even a worthy target.

Imagine if someone managed to inject malware into a core Kubernetes component that gets pulled from Docker Hub or something.

The only reasonable way to prevent this is to do development work in a fully constrained environment, both from a hardware and a software perspective, and that means taking on a hell of a lot of compromises which will cripple your productivity entirely.

My efforts to investigate this pretty much led to the conclusion that you need two computers attached to different physical networks. The first has internet access and allows things like email and web access, but it has no administrative capabilities, no development tools installed, and no way of installing tools as an unprivileged user. The second is the only one you are allowed to do development work on, and it has no internet connection.

Obviously, when I proposed this, it was laughed out of the room. That is exactly what I intended to prove: you can't fix this reasonably at this point, so don't bother trying.

At least not unless you have an airgapped machine and solely write in something standardised, with no external dependencies or libraries and no possibility of pulling something from outside your trust boundary.

I hope you sleep better than I do knowing this as well.

Edit: I was actually happiest writing C in an airgapped network about 25 years ago, on a Sun machine with some manuals and some O'Reilly books on my desk. The very thought of downloading something there would have been laughed out of the room. I wonder what they do now.


👤 perlgeek
Some scattered thoughts on this:

* Don't always update to the newest version immediately. If it's compromised, give other users and the vendor time to find the problem. (A concrete sketch follows this list.)

* Try to rely on software packaged by somebody you trust.

* Reduce the potential impact of a compromise by using 2FA with something off your dev machine (like an authenticator app on your phone).
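For the first point, a minimal npm sketch (the package and version are just for illustration):

```bash
# Pin exact versions instead of floating ranges, so upgrades happen
# when you decide to make them, not whenever something new is published:
npm install --save-exact left-pad@1.3.0

# In CI and on fresh checkouts, install exactly what the lockfile records:
npm ci
```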

> What's the bar for reasonable security, in your opinion?

That really depends on your situation.

Do you think somebody might target you, specifically? Has your machine been compromised before? Do you do anything with potentially high leverage for an attacker?

If the answer is "no" to all these, I'd say don't sweat it too much beyond standard "best practices".


👤 mmaunder
I don't think this is the right question. The right question is: how do we secure our code, our data, and our supply chain? Most of our work environments span multiple hardware endpoints and virtual instances, and our data is spread across these devices and cloud services. It's a harder problem, but let's start by defining what we're protecting and then work the problem.

As a starting point, I think crowd wisdom is called for, given the size of the challenge this would be for an individual. If you see something, immediately say something. Responsible full disclosure on tight timelines. Ways to rapidly get the message in front of those impacted when action by them is needed. Build systems that avoid requiring action from those affected, without compromising freedoms.


👤 zaitanz
In general, we always start from the position that a developer machine is infected. This is part of the zero-trust approach to security: we work with defense in depth. If the developer machine isn't trustworthy, and the developer isn't trustworthy, how do we best protect our systems and client data?

As code moves from commit to production, we have multiple stage gates and steps:

- From a code perspective, we use dependency and code scanning (yarn audit, SonarCloud, SonarQube, etc.). SonarCloud has nice IDE integrations.

- Code is pushed and picked up by a pipeline, where further scans look for vulnerabilities/CVEs. If any significant ones are found, the pipeline fails (yarn audit, SonarCloud, SonarQube, the Palo Alto container scanner, Docker Bench, etc.). A minimal sketch of such a gate follows this list.

- The pipeline deploys to test and does automated checking.

- Prior to a production deployment, the pipeline must be manually approved.

- Once in production, we use further scanning and monitoring (Security Hub/Centre, Tenable, SIEM).
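The gate in the second bullet can be as simple as letting the package manager's own audit fail the job (a sketch; the severity threshold and tooling vary by shop):

```bash
#!/usr/bin/env bash
# CI stage gate: fail the build when the dependency tree contains
# known vulnerabilities at or above the chosen severity.
set -euo pipefail

npm ci                         # reproducible install from the lockfile
npm audit --audit-level=high   # nonzero exit status fails the pipeline
```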

Our developers have no direct ability to change the production systems in any way. But they can write code and commit to our Git repository as much as they want. Everything from that point on is automated (except for the manual approvals).


👤 closeparen
Some things you can do to mitigate the impact of a developer machine compromise:

a) Don't let anyone push to master. Everyone goes through code review. (A server-side sketch follows this list.)

b) Limit access to production. If you're a shop with a separate ops function, none at all. If developers do their own basic ops, give them limited and structured control surfaces: choose from among the code-reviewed builds to deploy, that sort of thing.

c) Where relatively high privilege in production is required, provide dedicated workstations just for that. These don’t need arbitrary local software or even internet access generally, just the production VPN.
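For (a), hosted platforms expose this as branch protection; on a self-hosted Git server the same policy is a pre-receive hook, roughly like this (a sketch; your review system would need its own path to update master):

```bash
#!/usr/bin/env bash
# pre-receive hook: reject any direct push to master. Git feeds the hook
# one "<old-sha> <new-sha> <refname>" line per updated ref on stdin.
while read -r oldrev newrev refname; do
  if [ "$refname" = "refs/heads/master" ]; then
    echo "Direct pushes to master are blocked; go through code review." >&2
    exit 1
  fi
done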


👤 fsflover
> How do you secure your workstation without living like a monk?

Using Qubes OS. It's really easier than you might think. The UX is amazing. Can't recommend it enough.


👤 arcastroe
I use Windows as my main OS. Windows 10 has a feature called Unified Write Filter (UWF) that essentially resets your computer on every reboot [1].

> Unified Write Filter (UWF) is an optional Windows 10 feature that helps to protect your drives by intercepting and redirecting any writes to the drive (app installations, settings changes, saved data) to a virtual overlay. The virtual overlay is a temporary location that is usually cleared during a reboot or when a guest user logs off.

When I first got my laptop, I installed a fresh, clean copy of Windows 10, installed all my commonly used applications, configured all my settings, and then enabled UWF. On every reboot it goes back to this clean snapshot, no matter what I do. And reboots are quick, too (~10 seconds).

I like this setup because I'm never worried about making changes to my computer to try them out (installing a new program, configuring obscure settings, etc). If I don't like it, I can get back to my fresh state with a simple reboot. I also like that the feature is built into the OS - there are similar third-party solutions such as "Reboot Restore RX" [2], but I don't trust these as much, and they're not as clean as UWF.

The only downside is that when you _do_ want to persist changes (such as Windows Update), you have to disable UWF, reboot, make your changes, re-enable UWF, and reboot again. But I seldom have to do this. I treat the OS as pretty much stateless and keep all my personal files on a separate BitLocker-enabled partition that isn't subject to UWF.
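If it helps anyone: the toggle lives behind the uwfmgr command-line tool, so the persist-changes dance looks roughly like this (run from an elevated prompt; check the UWF docs before relying on it):

```
rem Let writes persist, apply the updates, then re-arm the overlay:
uwfmgr filter disable
shutdown /r /t 0
rem ...after the reboot: run Windows Update, change settings, etc. ...
uwfmgr filter enable
shutdown /r /t 0
```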

[1] https://docs.microsoft.com/en-us/windows-hardware/customize/... [2] https://horizondatasys.com/reboot-restore-rx-freeware/


👤 progval
> We could trust nothing beyond our base system and our browser, and refuse to use any code we don't fully audit, but this would be an impossibly austere way to live. [...]

> The alternative is sandboxing, using a lightweight option like firejail (which I use) or a totalizing system like Qubes OS. But these systems are awkward to use and have their own drawbacks.

I am somewhere between these two options, trying to be reasonably secure without too many drawbacks: all my software is installed either from the Debian repositories, or compiled myself and run as an application-specific unprivileged user, with no access to X/Wayland when possible. (You could allow yourself to download binaries, but source makes me feel somewhat safer.)

I also run Firefox and VLC in Firejail because they are complex pieces of software that deal with lots of untrusted input, and need access to X/Wayland.
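Concretely, the day-to-day looks something like this (usernames and paths invented; firejail ships profiles for both applications):

```bash
# Complex, network-facing apps get a sandbox:
firejail firefox
firejail vlc

# Self-compiled software runs as its own unprivileged user, which never
# gets DISPLAY/WAYLAND_DISPLAY, so it has no access to X/Wayland:
sudo useradd --system --create-home --shell /usr/sbin/nologin app-foo
sudo -u app-foo -H /home/app-foo/bin/foo
```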


👤 lupinglade
Seems like a problem with VSCode.

Write more of your own code. If your app is made up of more packages and dependencies than you can audit, then you're doing it wrong.

Consider OpenBSD.


👤 rfoo
Unpopular opinion: I disable auto-update, and for every piece of software I use I at least casually look it over, or wait some days (depending on how popular it is and its "trust level" in my mind), before first use and before each update.

👤 vesche
One option would be to not run VSCode locally: https://github.com/features/codespaces

👤 openfuture
The solution I am still working on is an xyte.ch X330 ThinkPad that has been modified to remove the Bluetooth and mic, as well as to swap the wifi for an Atheros card (free software...)

It is also flashed with Heads, and I have a secret on a smartcard (USB stick) that I can use to sign my boot partition.

On the boot partition there will be a minimal system that lets me decrypt my hard drive and boot into my desired Guix system generation. The boot partition is signed, so it should never change (especially not every time you update your system configuration).
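(For anyone unfamiliar with Guix, the generations are what make that rollback story concrete:)

```bash
# Each reconfigure builds a new bootable system generation; old ones
# remain selectable from the bootloader.
guix system reconfigure /etc/config.scm
guix system list-generations
guix system roll-back        # revert to the previous generation
```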

Guix allows you to bootstrap from a minimal seed, so once I finish the software I need for the boot step, I will set this up on that laptop by bootstrapping Guix.

For me it's all about trying to get closer to the foundation, à la Precursor, rather than theatre like Qubes, where the complexity is just too much.

In the not-as-distant-as-it-may-seem future I will probably try Genode on a PinePhone or a laptop, and maybe it will be usable and robust.


👤 freedomben
Some of the easiest things you can do, regardless of OS, are to use containers when developing (so you aren't installing npm packages on your host/main system) and to not store sensitive credentials on your disk in plain text (like in `~/.aws/credentials`). Use env vars and only export them when needed.
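A sketch of both habits (assumes Docker and the `pass` password manager; names and paths are illustrative):

```bash
# Dependency installs happen in a throwaway container, not on the host:
docker run --rm -it -v "$PWD":/app -w /app node:16 npm install

# Credentials stay encrypted at rest and are exported per session:
export AWS_ACCESS_KEY_ID="$(pass show aws/access-key-id)"
export AWS_SECRET_ACCESS_KEY="$(pass show aws/secret-access-key)"
aws s3 ls   # the CLI reads the env vars; nothing lands in ~/.aws
```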

👤 p2p_astroturf
There's nothing you can do. All language library distribution systems ever made are insecure, and backed by willful ignorance.

👤 lowbloodsugar
Over a decade ago we were laughing at poor corporations getting pwned because they ran IE with ActiveX and websites could do anything; same for Outlook and Word.

And here we are in 2021 and your dev tools are doing the same thing. Good luck with that.


👤 GhettoComputers
It's easy: set up a good firewall on your router, keep an airgapped computer off the network instead of constantly connected, don't update for the sake of updating, and remove the LAN cable/WiFi card if you're really worried.

👤 cogburnd02
Have an air gap between development machines and anything attached to the internet.

Or just put your machine in a bank vault.


👤 goodpoint
Use Debian and firejail. Avoid Docker, Snap and Flatpak.

👤 atVelocet
Use Windows Sandbox, which is built in, or have the dev run a VM that only exposes the needed ports. Don't make things too complicated…

👤 tonymet
Develop in a VM.

👤 openssl
Run everything inside a VM. The next level would be developing on a live USB, Qubes OS, or bare metal for every project (screw the general-purpose computer).

At the router level you can block Tor, block VPS IP ranges, etc. You can also block the entire internet and only allow IPs from your browser history.

Besides sandboxing, you can run a firewall; I tested one against some reverse shells and it does stop them. Of course a red team can do more bad stuff.
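For the firewall piece, the property that actually stops reverse shells is default-deny egress; with nftables that's roughly the following (a sketch, adapt the allowed ports before locking yourself out):

```bash
# Outbound default-deny: a reverse shell can't phone home on a port
# you never opened.
nft add table inet egress
nft add chain inet egress out '{ type filter hook output priority 0; policy drop; }'
nft add rule inet egress out oif lo accept
nft add rule inet egress out ct state established,related accept
nft add rule inet egress out udp dport 53 accept
nft add rule inet egress out 'tcp dport { 80, 443 }' accept
```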