HACKER Q&A
📣 thrwawy74

How do you guard against ransomware? (offsite backups not possible)


I know the best mitigation for ransomware is to maintain off-site backups. I cannot do this. It is a design requirement not to have offline storage.

What are some ways you have prepared for ransomware?

I've thought about:

1) udev rules that blacklist drives until the infrequent moments I do full backups (rough sketch after this list)

2) a "USB condom" of sorts that I can remotely disconnect (sever the 5V line), to make a USB spinning HD available as needed with an authentication scheme

3) running a program that tries to detect ransomware execution (antivirus or excessive cpu usage/encryption/etc)

4) mirror it to the cloud (unworkable, considering it's 16TB)
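
For (1), what I have in mind is roughly deauthorizing the backup drive at the USB level and only re-enabling it for the backup window. Just a sketch; the vendor/product IDs, rule file name and sysfs path are placeholders:

    # /etc/udev/rules.d/99-block-backup-drive.rules  (IDs are placeholders)
    # deauthorize the backup drive whenever it appears
    ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="0bc2", ATTR{idProduct}=="231a", ATTR{authorized}="0"

    # reload rules after editing
    udevadm control --reload

    # just before a backup run, re-enable the device by hand (path is an example)
    echo 1 > /sys/bus/usb/devices/2-1/authorized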

I don't think there's a solution as good as offsite backups. Most of these are ideas that add indirection or complexity that common ransomware wouldn't be coded for.

How do you disconnect a drive without physically disconnecting it? Especially an NVMe drive rather than a USB enclosure?

What is your creative solution?


  👤 cesarb Accepted Answer ✓
> I know the best mitigation for ransomware is to maintain off-site backups. I cannot do this. It is a design requirement not to have offline storage.

The simplest solution would be to maintain "off-site" backups... where "off-site" is actually right next to it (in the same room), but in a physically separate computer. Then you can use on it something like borgbackup's append-only mode (https://borgbackup.readthedocs.io/en/stable/usage/notes.html...), so that ransomware running on other computers cannot overwrite old backups. Of course, this assumes that an attacker cannot access that computer other than through "borg serve" (it would be ideal if administration required physical console access to it), and that borgbackup doesn't have any bugs which allow bypassing the append-only restrictions in "borg serve".

(This has a drawback, explained on that page, that a periodic prune/compact on the server to free space from old backups could allow an attacker to delete data from the repository; but it can be easily worked around by taking a filesystem snapshot of the repository before doing the prune/compact. In case the attacker corrupted the backups, you just have to find a snapshot from before the corruption.)
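
For illustration, the append-only setup could look roughly like this on the backup machine; the repo path, key and the ZFS-snapshot-before-prune step are just examples, so check the borg docs for the exact options:

    # /home/backup/.ssh/authorized_keys on the backup machine: force every
    # connection from the client's key into an append-only borg serve
    command="borg serve --append-only --restrict-to-repository /srv/borg/repo",restrict ssh-ed25519 AAAA... client-backup-key

    # on the client, backups work as usual
    borg create ssh://backup@backupbox/srv/borg/repo::'{hostname}-{now}' /data

    # before any prune/compact on the server, snapshot the repo filesystem first
    # (example assumes the repo lives on a ZFS dataset)
    zfs snapshot tank/borg@pre-prune-$(date +%F)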


👤 bravetraveler
Beyond backing up to an offline machine and hoping for the best, it's tricky. I'm not sure I'd be able to cook up something that sits well, even after thinking on it for some time.

My mind goes to immutability (eg: read-only filesystems), being able to repeatably build the hosts/generate as much state as possible, and relying on basic availability/eventual stability -- basically have so much of 'it' that they may not take it all down.

To practically do what I envision would require at least a decent amount of hardware, if not engineering time/workflow changes.

I think the main benefit from this is that it opens the door to more effective administration.

You'd be able to see a bunch of audit policy denials for servers trying to write where they shouldn't. You could also routinely reload the hosts to ensure they're clean. Ideally no one system controls everything, so they have to break many barriers to truly deny you access.

By the way, this is why SELinux is so highly recommended. It captures countless different ways these kinds of things can happen. Web servers opening unexpected sockets [for reverse shells], users reading devices they have no business with, etc.

Enable it, tune the policies as needed, and then monitor it. It does a good job of protecting you; too good, in fact, which is why so many people disable it, in my opinion. Most don't realize it makes a fantastic snitch on nefarious actors.
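
For example, on a distro where SELinux is available (Fedora/RHEL family), enabling and watching it looks roughly like this; package names differ elsewhere:

    setenforce 1                                   # enforce now (as root)
    sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config   # persist across reboots
    ausearch -m AVC -ts today                      # review today's policy denials: the "snitch"
    dnf install setroubleshoot-server && sealert -a /var/log/audit/audit.log   # optional, human-readable analysis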


👤 assttoasstmgr
Offsite or offline? Because they are two different things.

You state:

> It is a design requirement not to have offline storage

You then go on to list amateurish hacks like "sever the 5V line" (which is offline storage) or "mirror it to the cloud" (which is literally an offsite backup). So your plans of action violate the supposedly unbreakable requirements you list in the first paragraph.

Why not back up to a proper tape robot which can rotate out tapes? Or did someone bid this job for $500?

If the requirements are as you state, then you have already failed because if the datacenter is destroyed by fire, flood, power surge, swarm of locusts - or disk corruption, filesystem corruption, bad RAID controller firmware, etc.... you are, in a word, fucked.

All that said, the best way to address ransomware is to prevent it from getting into your network in the first place, which usually happens via unsecured endpoints manned by idiots. Application whitelisting can help with this.
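
For what it's worth, one off-the-shelf whitelisting option on RHEL/Fedora-family systems is fapolicyd; a rough sketch, details vary by distro:

    dnf install fapolicyd
    systemctl enable --now fapolicyd                         # deny execution of anything not in the trust database
    fapolicyd-cli --file add /usr/local/bin/approved-tool     # explicitly trust a reviewed binary
    fapolicyd-cli --update                                    # tell the daemon to reload its trust database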


👤 rapjr9
If you can't store backups offsite, then maybe you can create a disaster proof area onsite. Put the backups in a high quality fire proof safe inside a large enclosure built of firebrick, which is inside a steel shell, etc. It would cost a lot but if both no-offsite and backups-for-recovery are part of the requirements that might be one way to do it. The backup vault could even be a separate building, maybe buried in the ground away from the main building, but still within an enforced security perimeter. It won't be proof against some things (direct hit by a big missile/bomb for example unless you bury it very deep), but it would be better than nothing. Each time you move disks in/out of the vault you would be risking the vault integrity, so you might need two vaults.

Also do some web research on "data diode" as a way to prevent exfiltration if you must have network connectivity to the backup drives. If only one signed piece of software can write to the backup disks and the data can only flow one direction (to the disks) then if you can trust the security of that system your backups might be relatively safe, at least from being encrypted by ransomware. Corruption of the original data would still be a problem.

This points at a big flaw in using backups for protection: if the main system is corrupted and the corruption gets into the backups, then you're compromised anyway. So a way to detect data corruption before you do a backup would be useful. If that's not possible, backups become essentially useless. You could keep very old backups forever and they might not be corrupted, but they would not be very useful because they would be so out of date.

If you design all transactions to be reversible, perhaps you can assume that after an attack you could figure out the nature of the corruption and reverse just those transactions, but I wouldn't count on that. An attacker does not have to actually encrypt your data; they can just copy it and then destroy your ability to use the data you have, and if they are sneaky about it you won't discover it until it is too late and all your current backups are also corrupted. I haven't seen this kind of attack yet, probably because simple ransomware is already so effective. If countermeasures against current ransomware become effective, ransomware may evolve.
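
As a crude illustration of "detect corruption before you back up": keep checksum manifests and diff them between runs. This only tells you that files changed, not whether the change was legitimate, and every path here is made up:

    find /data -type f -print0 | xargs -0 sha256sum | sort -k2 > /var/lib/manifests/$(date +%F).sha256
    diff /var/lib/manifests/$(date -d yesterday +%F).sha256 /var/lib/manifests/$(date +%F).sha256 \
        || echo "unexpected changes - investigate before running the backup"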

In truth, if it is important, you need a detailed security analysis of the entire system to guide the backup strategy, not some quick hack.


👤 badrabbit
1) Enforce Yubikey authentication in addition to passwords for any server management interface (including hypervisors like ESXi, which get abused by ransomware a lot, and of course SSH).

2) Restrict read, write and modify permissions with the standard Unix (DAC) permissions, but also enforce it specifically for the data files using an LSM like SELinux, AppArmor, or even the new and fancy BPF LSM.

3) Enforce module signing and secure boot to prevent kernel mode code execution (short of an exploit) and of course harden your kernel.

4) This is shoddy, but if it were me I would have a LUKS-encrypted backup partition with a detached header, where the header and key are only fetched temporarily from a remote server and kept in memory by a cron job/script that unlocks it for a backup and locks it again afterwards. That way it requires a close examination of the cron script to figure out that the partition is used for backups. Maybe even make the script itself remotely executed by a simple 'curl ...|sh' (rough sketch below).
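
A rough sketch of that idea, run from cron on the backup host; the key server, device, mount point and the rsync step are all placeholders for illustration:

    #!/bin/sh
    set -eu
    TMP=$(mktemp -d /dev/shm/luks.XXXXXX)               # tmpfs: header and key never touch disk
    scp -q keyhost:/keys/backup-header.img "$TMP/"      # fetch detached LUKS header
    scp -q keyhost:/keys/backup.key "$TMP/"             # fetch key file
    cryptsetup open --header "$TMP/backup-header.img" --key-file "$TMP/backup.key" /dev/sdb1 backupvol
    mount /dev/mapper/backupvol /mnt/backup
    rsync -a --delete /data/ /mnt/backup/               # the actual backup
    umount /mnt/backup
    cryptsetup close backupvol
    rm -rf "$TMP"                                       # drop header and key from memory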


👤 _wldu
If you are not modifying/editing the files (just reading them) and you run Linux, then do this:

1. Don't allow users to sudo.

2. chmod the files to 400 (read-only) as the normal user.

3. chattr +i the files (as root) to prevent modifications. This makes the files immutable (cannot be changed).

So if ransomware (running in the context of a normal user) gets onto the machine, it cannot encrypt the files.
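
Concretely (the filename is just an example):

    chmod 400 important.db          # read-only for the owning user
    sudo chattr +i important.db     # immutable: can't be modified or deleted until the flag is removed
    lsattr important.db             # verify the 'i' flag is set
    sudo chattr -i important.db     # required before any legitimate change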

Hope this helps.


👤 yasinaydin
All the files (and data) I have on my Linux machine are either a copy from a cloud service (i.e. git, photos, contacts) or synced to Dropbox Family (and some are even encrypted). In case of ransomware, I would just revert to a previous version of the file. I don't have any physical backups, but for some stuff I have multiple manual backups in the cloud.

As for protection, I try to follow general security guidelines: I mind all the packages I install (including dependencies like npm), I have ABP and NextDNS in my browsers, I use Arch and update it every morning, and I disable any software or hardware feature (Bluetooth, SD cards, USB boot) that I do not use, to give some examples. I have a feeling that reducing the attack surface should be enough for me.


👤 masterofmisc
Maybe a low-tech solution, but I was just wondering if you could have a 2nd server in the same room/datacentre connected to one of those timer plugs. The timer plug turns the server on at, say, 2am every day. On bootup, it connects to the main server and copies the latest backups to itself. Once done, it shuts itself down. This way, the server is only up for a small window each day, minimising the security risk. This idea has just come to me, so I have not actually done this myself, but thought I'd post it anyway!
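
If anyone wants to try it, the pull-then-poweroff part could be as simple as a boot-time job on the backup box (hostnames and paths are made up):

    # e.g. run from a oneshot systemd service or /etc/rc.local at boot
    rsync -a backupuser@mainserver:/srv/backups/ /srv/backups/$(date +%F)/
    systemctl poweroff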

👤 mercurialuser
I've been thinking about this very problem recently.

My solution for an air-gapped backup is a Linux server with no exposed ports and no daemons running.

A cron job starts a scp that copies data into a directory named as the date.

If you want, you can expand on this using ZFS with snapshots, compression and deduplication.
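
The core of it is tiny; a sketch with made-up hostnames and paths, assuming the backup box pulls from the production server:

    # cron entry on the backup server, e.g.:  30 2 * * *  /usr/local/bin/pull-backup.sh
    DEST=/backups/$(date +%F)
    mkdir -p "$DEST"
    scp -rq backupuser@prodserver:/srv/data "$DEST/"
    # optional, if /backups is a ZFS dataset: keep an immutable point-in-time copy too
    zfs snapshot tank/backups@$(date +%F)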

Or, if you have the money, buy a Data Domain or similar appliance and set it up with immutable backups (nobody can delete them) and a double password for any destructive operation. You can set it up as a tape library over Fibre Channel or with DD Boost, which are other ways to create an air gap.


👤 oriettaxx
> 4) mirror it to the cloud (unworkable, considering it's 16TB)

Are you sure? We faced almost the same scenario, but with incremental backups plus data deduplication we keep two offsite backups (with Borg and https://en.wikipedia.org/wiki/Proxmox_Backup_Server) that are updated nightly (over a cheap internet connection).

Deduplication is really a must-have for backups, plus encryption and read-only mode, of course.
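
For reference, the Borg side of this is roughly as follows; the repo URL and retention numbers are just examples:

    borg init --encryption=repokey-blake2 ssh://backup@offsite/./backups            # encrypted, deduplicating repo
    borg create --stats --compression zstd ssh://backup@offsite/./backups::'{hostname}-{now}' /data
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 ssh://backup@offsite/./backups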


👤 sandreas
Edit: I think I misinterpreted your request... you CANNOT have offline storage? Then ignore the following lines :-)

Get a server with ECC RAM and storage

Get TrueNAS

Configure an encrypted ZFS pool

Configure shares (NFS, Samba, etc.)

Configure periodic ZFS snapshots (hourly, daily, weekly, etc.); rough example commands are sketched at the end of this answer

Connect your shares from clients

Use cloud/offsite providers that support zfs send and configure replication to them

Test a disaster regularly and take notes on how to perform a restore as fast as possible

Never log in with, store, or enter server credentials or unlock keys on your clients (otherwise ransomware may access your snapshots)

If ransomware attacks you, restore your latest snapshot

That should work
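
For anyone unfamiliar, the snapshot/replication/restore steps above boil down to commands like these (pool, dataset and snapshot names are placeholders):

    zfs snapshot tank/data@auto-2024-05-02          # periodic snapshot (usually via a tool or cron)
    zfs list -t snapshot tank/data                  # see what you can roll back to
    zfs send -i tank/data@auto-2024-05-01 tank/data@auto-2024-05-02 | ssh offsite zfs receive backup/data   # incremental offsite replication
    zfs rollback tank/data@auto-2024-05-01          # disaster recovery: revert to the last clean snapshot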


👤 mikebos
Most ransomware nowadays stays hidden for a long time to try to invalidate your backups. I would just restrict access (physical and remote) to the maximum. So the datastore is just that; no other function on the server. Only allow access through an API or a well-understood protocol, think rsync, FTP, whatever. Do the backup locally to a directory/disk that is not writeable by non-root users.

👤 throwaway2056
> It is a design requirement not to have offline storage.

Just curious: why, and what kind of application or company requirement forbids offline storage?


👤 josephcsible
Set up a Raspberry Pi or something, with the SSH server running in a container (to prevent exploiting privileged processes through Unix domain sockets), under an unprivileged user, and with PR_SET_NO_NEW_PRIVS enabled. Do not have any ports exposed on the outer host, not even an SSH server on the outer host or any other means of administering it over the network. Rsync your backups to the container, and have a cron job run as root on the host that takes Btrfs snapshots of the backup drive every day. Even knowing the root password wouldn't let a remote attacker modify or delete your old backups. The only way they could is if they exploited a vulnerability in the Linux kernel itself (and an information disclosure one wouldn't be sufficient).
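
The snapshot cron job could be as small as this (paths and retention are made up):

    # run daily as root on the host, e.g.:  0 3 * * *  /usr/local/bin/snapshot-backups.sh
    btrfs subvolume snapshot -r /backups /backups/.snapshots/$(date +%F)    # read-only snapshot
    # optional: delete snapshots older than ~90 days
    find /backups/.snapshots -mindepth 1 -maxdepth 1 -mtime +90 -exec btrfs subvolume delete {} \;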

👤 stevefan1999
Make everything as stateless and immutable as possible, wherever you get the chance. This is not just copy-and-paste; it's more along the lines of ZFS/Btrfs snapshots. You can try a FUSE version to get a first taste.

👤 fsflover
How about using Qubes OS, where everything runs in its own VM by design?

👤 POPOSYS
Don't use Windows / AD. This is the only solution that works.