HACKER Q&A
📣 adezxc

How to distribute a lot of images across multiple universities


Basically, I've been given coursework at my university to evaluate and start using a distributed file system for storing large amounts of crystal diffraction images. It would need to keep multiple copies of the files distributed across servers in case one of them goes down, and it would need to scale, since the dataset is always growing. I've looked into things like LOCKSS[1] and IPFS[2], but LOCKSS seems to limit itself to storing articles, and IPFS on its own doesn't provide data reliability if one of the nodes goes down. Has anyone encountered a similar task, and what did you use for it?

[1] https://www.lockss.org/ [2] https://ipfs.tech/


  👤 zcw100 Accepted Answer ✓
IPFS does provide data reliability if you use pinning services, a private cluster, or a cooperative cluster. How IPFS works in this regard seems to be hard to communicate, and there are a lot of misunderstandings about it. Some people want IPFS to be an infinite free hard drive in the sky with automatic replication and persistence until the end of time (it is not). Then there are the people who worry that "OMG someone can just put evil content onto my machine and I have to serve it!" (they cannot).

IPFS makes it very easy to replicate content, but you don't have to replicate anything you don't want to. Resources cost money, so you either ask someone to do it for free and accept whatever reliability you get, or you pay someone and get better reliability for as long as you keep paying.
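For a handful of nodes the university controls, pinning is just an RPC call to each daemon. A rough, untested sketch against the standard Kubo (go-ipfs) HTTP RPC API — the node hostnames and the CID here are made up:

    import requests

    # HTTP RPC endpoints of IPFS (Kubo) daemons we control -- hypothetical hosts.
    NODES = [
        "http://ipfs-a.uni.example:5001",
        "http://ipfs-b.uni.example:5001",
        "http://ipfs-c.uni.example:5001",
    ]

    def pin_everywhere(cid: str) -> None:
        """Ask each of our own daemons to pin (keep a local copy of) a CID."""
        for api in NODES:
            # /api/v0/pin/add is part of the standard Kubo RPC API.
            r = requests.post(f"{api}/api/v0/pin/add",
                              params={"arg": cid}, timeout=120)
            r.raise_for_status()

    # CID of a directory of images, added beforehand with `ipfs add -r`.
    pin_everywhere("<cid-of-image-directory>")  # placeholder CID

In practice ipfs-cluster automates exactly this and lets you set a replication factor per pin, but the underlying idea is the same.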


👤 jjgreen
This is private data, right? Maybe a private BitTorrent tracker with a few nodes that "grab everything" to ensure persistence. I've never done it myself, but it might be a direction worth researching... a rough sketch of a seed node is below.
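A "grab everything" node could be as simple as a box running libtorrent against every dataset torrent. An untested sketch with the Python bindings (the torrent file and paths are placeholders):

    import time
    import libtorrent as lt

    ses = lt.session()

    # Hypothetical torrent describing one batch of diffraction images.
    info = lt.torrent_info("diffraction_batch_001.torrent")
    handle = ses.add_torrent({"ti": info, "save_path": "/srv/archive"})

    # Download once, then keep seeding forever so other nodes can replicate.
    while True:
        status = handle.status()
        print(f"{status.name}: {status.progress * 100:.1f}% "
              f"(seeding={status.is_seeding})")
        time.sleep(30)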

👤 brudgers
How much data do you have now?

How fast is it increasing?

What is your budget for hardware?

What is your budget for software?

What is your budget for development labor?

What is your budget for maintenance?

I mean the simplest thing that might work is talking to your university IT department...

...or calling AWS sales or another commercial organization specializing in these things.

The second most complicated thing you can do is to roll your own.

The most complicated thing you can do is to have someone else do it.

Good luck.


👤 hannibal529
This is a simple task with NATS JetStream object storage https://docs.nats.io/nats-concepts/jetstream/obj_store/obj_w.... Just provision a JetStream cluster and an object store bucket. If you want to span the cluster over multiple clouds with a supercluster, that’s an option as well.
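With the Python client that's only a few lines. A hedged sketch — the server URL, bucket name, and file names are made up, and the actual replica count (e.g. R3) is something you configure on the bucket's underlying stream when you set up the cluster:

    import asyncio
    import nats

    async def main():
        # Hypothetical cluster endpoint.
        nc = await nats.connect("nats://nats.uni.example:4222")
        js = nc.jetstream()

        # Create (or open) an object store bucket; with a 3+ node JetStream
        # cluster the bucket can be configured for replicated storage.
        obs = await js.create_object_store("diffraction-images")

        with open("frame_0001.cbf", "rb") as f:
            await obs.put("run42/frame_0001.cbf", f.read())

        await nc.close()

    asyncio.run(main())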

👤 DrStartup
Sounds like you’d want to set up a private, multi-org cloud storage system.

Something like MinIO (https://min.io/) or similar. There are a dozen or so open-source / commercial S3-like object storage systems out there.
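To give a rough idea of the client side with the MinIO Python SDK — endpoint, credentials, and bucket name are placeholders, and the redundancy itself is handled server-side by how the cluster is deployed (e.g. erasure coding across nodes):

    from minio import Minio

    # Hypothetical self-hosted deployment.
    client = Minio("minio.uni.example:9000",
                   access_key="ACCESS_KEY",
                   secret_key="SECRET_KEY",
                   secure=True)

    bucket = "diffraction-images"
    if not client.bucket_exists(bucket):
        client.make_bucket(bucket)

    # Upload one frame; the server spreads it across the cluster.
    client.fput_object(bucket, "2024/run42/frame_0001.cbf",
                       "/data/run42/frame_0001.cbf")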

I have a friend that does this kind of mission critical infrastructure for research universities.

DM me if you’d like.


👤 mikewarot
If you're replicating one primary file system to many secondary systems, MARS might be helpful[1]. It was developed by 1&1, who hosts my personal website, along with petabytes of other people's stuff.

[1] https://github.com/schoebel/mars


👤 rom16384
I was thinking about Syncthing (https://github.com/syncthing/syncthing), but it's a file synchronization tool, meaning every node would have a full copy, and it would propagate deletes from one node to another.

👤 Quequau
Isn't rsync designed for use cases like this?

👤 Gigachad
How much data? Why not chuck it on S3 or Dropbox?
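If the volume is manageable, the whole thing reduces to something like this with boto3 (bucket name and paths are made up; S3 already stores multiple copies, and bucket versioning or cross-region replication can be enabled for more durability):

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket; S3 replicates objects across devices/AZs by default.
    s3.upload_file("/data/run42/frame_0001.cbf",
                   "uni-diffraction-images",
                   "2024/run42/frame_0001.cbf")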

👤 toomuchtodo
BitTorrent or Ceph?

👤 hooverd
Try asking your PI?