The solution I envisioned is that some trusted third party takes my analysis script and returns the report, and that is it. I never see the underlying data and receive only a one-time token to access it.
I know it will never be 100 percent leak-proof, and there is still a level of user trust required; I realise that. But thinking conceptually, is there any existing service out there that does such a thing, or attempts to offer something similar? Or what would an alternative approach look like?
A slow-leaking ship will still sink. Attempts so far to anonymise public datasets have been terrible, turned into a garbage fire by attackers every time with minimal effort. Don't hand out false promises.
Guess you are looking for fully homomorphic encryption. It's a long-standing open problem with lots of smart people working on it, and some are doing OK at getting there.
The homomorphic encryption approach probably isn't worth the effort. There's always going to be a trade-off between doing something useful and sufficiently/securely obfuscating/anonymizing the data. So I'd recommend the local approach, with a prominent explanation of how you don't and can't see any of the data.
The stakes are lower when money, not privacy, is at risk. I have attempted to argue for years that the MathSciNet catalog of the mathematical literature should be open to all forms of machine learning and mind mapping software experiments. It remains a cash cow for the American Mathematical Society, and they're fiercely proud of its human curation by 19th century methods. Meanwhile, mathematicians continue to believe that math remains separated into tribes, with number theorists lobbying to hire their own at departmental meetings. The true connections between ideas defy these ancient categories. I see a generation of potential advances squandered by not letting third-party tools in to study MathSciNet.
The right ideas could help here. One isn't protecting individual privacy, just a cash cow. The bar is lower.
One idea would be:
1. Distribute to the data owners a base system (something that can "run" stuff on their premises). People here have mentioned browsers, but for more intensive processing this might not be enough, so think of a Docker daemon, keys for some Docker registries, etc.
2. Have a trusted "app store" (e.g. a Docker registry where images are built reproducibly from code that is inspected and certified, and then cryptographically signed).
3. Make a well-described interface for the apps to consume the data (thinking of the general use case here; if you just want to analyze Facebook info you can make an ad hoc parser...).
4. Have the data owner download the app, check its signature, configure it, and run it on their premises.
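The signing and verification in steps 2 and 4 can be sketched as follows. Real registries use asymmetric signing (e.g. Docker Content Trust or cosign); this toy uses a shared-secret HMAC over the image digest, and the key name and functions are hypothetical, just to illustrate the flow:

```python
import hashlib
import hmac

# Hypothetical shared secret; a real "app store" would use an
# asymmetric key pair so data owners only need the public key.
REGISTRY_KEY = b"app-store-signing-key"

def sign_image(image_bytes: bytes) -> tuple[str, str]:
    """App store side: publish the image digest and its signature."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    sig = hmac.new(REGISTRY_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest, sig

def verify_image(image_bytes: bytes, digest: str, sig: str) -> bool:
    """Data owner side: re-hash the download, then check the signature."""
    if hashlib.sha256(image_bytes).hexdigest() != digest:
        return False
    expected = hmac.new(REGISTRY_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

The point of step 4 is that a tampered image fails verification before it ever runs on the owner's premises.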
Things get even more interesting when the analytics need data from different non-trusting partners, so that homomorphic encryption becomes necessary.
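To make that last requirement concrete, here is a toy version of the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The 8-bit primes are for illustration only; a real deployment needs ~2048-bit primes:

```python
import math
import random

# Toy Paillier: Dec(Enc(a) * Enc(b) mod n^2) == a + b.
# Absurdly small primes, illustration only.
p, q = 17, 19
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)  # with g = n + 1, L(g^lam mod n^2) = lam

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    return ((x - 1) // n) * mu % n
```

Two non-trusting parties can each encrypt a value, and a third party can combine the ciphertexts (here, sum them) without ever seeing the plaintexts.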
There is at least one specification that aims to support all of this: https://www.internationaldataspaces.org/wp-content/uploads/2... although implementation is, so far, lagging behind.
Shoot us a note -- would love to hear more details.
[0]: https://proofzero.io
https://federated.withgoogle.com/
https://en.wikipedia.org/wiki/Federated_learning
https://github.com/poga/awesome-federated-learning
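The core of federated learning from those links can be shown in a toy federated-averaging (FedAvg) sketch: each client fits a model on its own data locally, and only the model parameters, never the raw data, go back to the server for averaging. The single-weight linear model and function names here are invented for illustration:

```python
def local_update(w: float, data, lr: float = 0.01, epochs: int = 20) -> float:
    """Client side: SGD on y ~ w * x using only this client's data."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

def fed_avg(client_datasets, rounds: int = 10) -> float:
    """Server side: broadcast w, collect locally trained copies, average."""
    w = 0.0
    for _ in range(rounds):
        local_ws = [local_update(w, data) for data in client_datasets]
        w = sum(local_ws) / len(local_ws)  # raw data never leaves clients
    return w
```

With every client's data drawn from y = 3x, the averaged weight converges toward 3 even though the server never sees a single (x, y) pair.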
Assuming the data is in a standard format, you can share your script for people to run themselves. Obviously this is fairly difficult in practice unless you can bundle everything into a client-side script on a website.
For reference, Narrator [1] does this -- it puts data into a standard format so that analyses written for one company can be run for another. I'm not suggesting you build your stuff on that platform, but it's an interesting approach that does exist.
I'm sure there's some homomorphic encryption[0] magic scheme that might let you process the data on other servers or something, but I couldn't even begin to tell you how. Really, it's just trust.