We know this sort of malware is making its way onto package repositories [1]. We know people are falling for these attacks. How do we protect ourselves against this family of threats?
[1]: https://www.theregister.com/2021/07/21/npm_malware_password/
We could trust nothing beyond our base system and our browser, and refuse to use any code we don't fully audit, but this would be an impossibly austere way to live. I expect most of us, when pressed, would admit that we're trusting much more code than we would like to.
The alternative is sandboxing, using a lightweight option like firejail (which I use) or a totalizing system like QubesOS. But these systems are awkward to use, and have their own drawbacks.
What's the bar for reasonable security, in your opinion? How do you secure your workstation without living like a monk?
It's not entirely safe, but I think it gets me 90% of the way to a reasonably safe workspace. If there is malware in a VM, I can nuke the VM and reset any affected credentials from my main OS (which is not infected). It's not much extra overhead: I just SSH into the VM and work as usual. I've used Qubes before and have also tried a fully Docker-based workflow (developing exclusively in containers), but both come with too many headaches.
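If the VM runs under libvirt/KVM, the nuke-and-restore step can be a one-liner. A sketch, assuming a VM named `dev` (the VM name and snapshot label are hypothetical):

```shell
# Record the clean state of the dev VM once, right after initial setup
virsh snapshot-create-as dev clean --description "fresh dev environment"

# Day to day: SSH in and work as usual
ssh dev

# If you suspect compromise: throw the VM's current state away and start clean
virsh snapshot-revert dev clean
```

Reverting is near-instant, which makes "when in doubt, roll back" a habit rather than a chore.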
Imagine if someone managed to inject malware into a core Kubernetes component that gets pulled from Docker Hub or somewhere similar.
The only reasonable way to prevent this is to do development work in a fully constrained environment, both from a hardware and a software perspective, and that means taking on a hell of a lot of compromises that will cripple your productivity entirely.
My efforts to investigate this pretty much led to the conclusion that you need two computers attached to different physical networks. The first computer has internet access and allows email and web browsing, but has no administrative capabilities, no development tools installed, and no way for an unprivileged user to install any. The second computer is the only one you are allowed to do development work on, and it has no internet connection.
Obviously, when I proposed this, it was laughed out of the room. Which is exactly what I intended to demonstrate: you can't fix this reasonably at this point, so don't bother trying.
At least not unless you have an airgapped machine and solely write in something standardised, with no external dependencies or libraries and no possibility of pulling something from outside your trust boundary.
I hope you sleep better than I do knowing this as well.
Edit: I was actually happiest writing C on an airgapped network about 25 years ago, on a Sun machine with some manuals and some O'Reilly books on my desk. The very thought of downloading something there would have been laughed out of the room. I wonder what they do now.
* Don't always update immediately to the newest version. If it's compromised, give the other users and the vendor time to find the problem.
* Try to rely on software packaged by somebody you trust.
* Reduce the potential impact of a compromise by using 2FA with a second factor kept off your dev machine (like an authenticator app on your phone).
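Taking npm as a concrete example of the first point (the same idea applies to other package managers), you can make updates a deliberate act rather than the default:

```shell
# Record exact versions in package.json instead of ^ranges
npm config set save-exact true

# Install strictly from the lockfile; never silently resolve newer versions
npm ci

# Periodically review what an update would actually change before taking it
npm outdated
```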
> What's the bar for reasonable security, in your opinion?
That really depends on your situation.
Do you think somebody might target you, specifically? Has your machine been compromised before? Do you do anything with potentially high leverage for an attacker?
If the answer is "no" to all these, I'd say don't sweat it too much beyond standard "best practices".
As a starting point I think crowd wisdom is called for given the size of the challenge this would be for an individual. If you see something, immediately say something. Responsible full disclosure on tight timelines. Ways to rapidly get the message in front of those impacted where action by them is needed. Build systems to avoid requiring action from those affected without compromising freedoms.
As code moves through to production, we have multiple stage gates and steps.
- From a code perspective, we use dependency and code scanning (yarn audit, SonarCloud, SonarQube, etc.). SonarCloud has nice IDE integrations.
- Code is pushed and picked up by a pipeline, where further scans look for vulnerabilities, CVEs, etc. If any significant ones are found, the pipeline fails (yarn audit, SonarCloud, SonarQube, the Palo Alto container scanner, Docker Bench, etc.)
- The pipeline deploys to test and does automated checking
- Prior to a production deployment, the pipeline must be manually approved.
- Once in production, we use further scanning and monitoring (Security Hub/Centre, Tenable, SIEM)
Our developers have no direct ability to change the production systems in any way, but they can write code and commit to our Git repository as much as they want. Everything from that point on is automated (except for the manual approvals).
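A minimal version of the dependency-scanning gate can be a single pipeline step. This sketch assumes yarn v1, where `yarn audit` exits non-zero when it finds known vulnerabilities:

```shell
# CI step: block the build if the dependency audit reports findings
yarn audit --groups dependencies || { echo "dependency audit failed"; exit 1; }
```

Real pipelines usually add a severity threshold so low-severity findings warn instead of failing the deploy.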
a) Don’t let anyone push to master. Everyone goes through code review.
b) Limit access to production. If you’re a shop with a separate ops function, none at all. If developers do their own basic ops, then limited and structured control surfaces for them. Choose from among the code reviewed builds to deploy, that sort of thing.
c) Where relatively high privilege in production is required, provide dedicated workstations just for that. These don’t need arbitrary local software or even internet access generally, just the production VPN.
Using Qubes OS. It's really easier than you might think. The UX is amazing. Can't recommend it enough.
> Unified Write Filter (UWF) is an optional Windows 10 feature that helps to protect your drives by intercepting and redirecting any writes to the drive (app installations, settings changes, saved data) to a virtual overlay. The virtual overlay is a temporary location that is usually cleared during a reboot or when a guest user logs off.
When I first got my laptop, I installed a fresh clean copy of Windows 10, installed all my commonly used applications, configured all my settings, and then enabled UWF. On every reboot, it goes back to this clean snapshot, no matter what I do - And reboots are quick too (~10 seconds).
I like this setup because I'm never worried about making changes to my computer to try them out (installing a new program, configuring obscure settings, etc). If I don't like it, I can get back to my fresh state with a simple reboot. I also like that the feature is built into the OS - there are similar third-party solutions such as "Reboot Restore RX" [2], but I don't trust these as much, and they're not as clean as UWF.
The only downside is when you _do_ want to persist changes to update you have to disable UWF, reboot, make your changes (such as Windows Update), enable UWF again, and reboot. But I seldom have to do this. I treat the OS as pretty stateless and keep all my personal files in a separate bitlocker-enabled partition that isn't subject to UWF.
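For reference, the servicing cycle described above maps onto a few `uwfmgr` commands, run from an elevated command prompt on a Windows machine with the UWF feature enabled:

```shell
:: Inspect the current filter state
uwfmgr get-config

:: To persist changes: disable the filter and reboot into an unprotected session
uwfmgr filter disable
shutdown /r /t 0

:: ...run Windows Update, install software, change settings...
:: then re-enable the filter and reboot back into the protected state
uwfmgr filter enable
shutdown /r /t 0
```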
[1] https://docs.microsoft.com/en-us/windows-hardware/customize/... [2] https://horizondatasys.com/reboot-restore-rx-freeware/
> The alternative is sandboxing, using a lightweight option like firejail (which I use) or a totalizing system like QubesOS. But these systems are awkward to use, and have their own drawbacks.
I am somewhere between these two options, trying to be reasonably secure without too many drawbacks: all my software is installed either from the Debian repositories, or compiled myself and run as an application-specific unprivileged user, with no access to X/Wayland when possible. (You could allow yourself to download binaries, but building from source makes me feel somewhat safer.)
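The unprivileged-user pattern looks something like this (the account and binary names are placeholders):

```shell
# Create a system account with no login shell, dedicated to one application
sudo useradd --system --shell /usr/sbin/nologin --create-home someapp

# Build as yourself, then run the result as the dedicated user
make
sudo -u someapp ./someapp
```

Even if the application is compromised, it can only touch that account's files, not your own home directory or keys.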
I also run Firefox and VLC in Firejail because they are complex pieces of software that deal with lots of untrusted input, and need access to X/Wayland.
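Two firejail invocations cover most of this; both flags are part of stock firejail:

```shell
# Firefox with a throwaway private home directory:
# nothing it writes survives the session
firejail --private firefox

# Run an untrusted build step with no network access at all
firejail --net=none make
```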
Write more of your own code. If your app is made up of more packages and dependencies than you can audit then you're doing it wrong.
Consider OpenBSD.
It is also flashed with Heads, and I have a secret on a smartcard (USB stick) that I can use to sign my boot partition.
On the boot partition there will be a minimal system that lets me decrypt my hard drive and boot into my desired guix system generation. The boot partition is signed so it should never change (especially not every time you update your system configuration).
Guix allows you to bootstrap from a minimal seed, so once I finish making the software I need for that boot step, I will set it up on the laptop by bootstrapping Guix.
For me it's all about moving closer to the foundation, à la Precursor, rather than security theatre like Qubes, where the complexity is just too much.
In the not-as-distant-as-it-may-seem future I will probably try Genode on a PinePhone or a laptop, and maybe it will be usable and robust.
And here we are in 2021 and your dev tools are doing the same thing. Good luck with that.
Or just put your machine in a bank vault.
At the router level you can block Tor, block VPS IP ranges, etc. You can also block the entire internet and allow only the IPs from your browser history.
Besides sandboxing, you can run a firewall; I tested it against some reverse shells and it does stop them. Of course, a red team can do worse things.
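The firewall part can be as simple as default-deny egress, so a reverse shell dialing out to an arbitrary port is simply dropped. A rough iptables sketch (run as root; adjust the allowed ports to your needs):

```shell
# Drop all outbound traffic by default
iptables -P OUTPUT DROP

# Allow loopback and replies to already-established connections
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow DNS plus a short list of outbound ports (SSH, HTTP, HTTPS)
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
iptables -A OUTPUT -p tcp -m multiport --dports 22,80,443 -j ACCEPT
```

This won't stop exfiltration over an allowed port (443 in particular), but it raises the bar considerably over an open-egress default.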