HACKER Q&A
📣 stanislavb

Someone is proxy-mirroring my website, can I do anything?


Hi Hacker News community,

I'm trying to deal with a very interesting (to me) case. Someone is proxy-mirroring all content of my website under a different domain name.

- Original: https://www.saashub.com

- Abuser/Proxy-mirror: https://sukuns.us.to

My ideas of resolution:

1) Block them by IP - That doesn't work as they are rotating the IP from which the request is coming.

2) Block them by User Agent - They are duplicating the user-agent of the person making the request to sukuns.us.to

3) Add some JavaScript to redirect to the original domain-name - They are stripping all JS.

4) Use absolute URLs everywhere - they are rewriting every occurrence of www.saashub.com to their domain name.

In other words: I'm out of ideas. Any suggestions would be highly appreciated.

p.s. What's more, Bing is indexing all of SaaSHub's content under sukuns.us.to ¯\_(ツ)_/¯. I've reported a copyright infringement, but I have a feeling it could take ages to get resolved.


  👤 santah Accepted Answer ✓
Same thing happened to me and my service (https://next-episode.net) almost 2 years ago.

I wrote an HN post about it as well: https://news.ycombinator.com/item?id=26105890, but to spare you all the irrelevant details and digging through the comments for updates, here is what worked for me. You can block all their IPs, even though they may have A LOT and can change them on each call:

1) I prepared a fake URL that no legitimate user will ever visit (like website_proxying_mine.com/search?search=proxy_mirroring_hacker_tag)

2) I loaded that URL like 30 thousand times

3) from my logs, I extracted all IPs that searched for "proxy_mirroring_hacker_tag" (which, from memory, was something like 4 or 5k unique IPs)

4) I blocked all of them
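
Step 3 is basically a log grep; a sketch in Node, assuming an nginx-style access log where the client IP is the first field:

  // pull the unique IPs that ever requested the bait URL out of the log
  const fs = require('fs');

  const ips = new Set();
  for (const line of fs.readFileSync('access.log', 'utf8').split('\n')) {
    if (line.includes('proxy_mirroring_hacker_tag')) {
      ips.add(line.split(' ')[0]); // first field = client IP in common/combined log format
    }
  }
  fs.writeFileSync('bad-ips.txt', [...ips].join('\n'));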

After doing the above, the offending domains were showing errors for 2-3 days and then they switched to something else and left me alone.

I still go back and check them every few months or so ...

P.S. My advice is to remove their URL from your post here. This will not help with search engines picking up their domain and ranking it with your content ...


👤 musabg
1. Create a fake URL endpoint, then visit that endpoint on the adversary's website. When your server gets the request, flag the IP. Do this nonstop with a script.

2. Create fake HTML elements and put unique strings inside them. You can then search for those strings in search engines to find similar fake sites on other domains.

3. Create a fake HTML element and put all the request details in it in encrypted form. Visit the adversary's website, look for that element, and flag the IP OR flag the headers.

4. Buy proxy databases, and when any user requests your webpage, check whether it's a proxy.

5. Instead of banning them, return fake content (fake titles, fake images, etc.) if a proxy is detected OR the IP is flagged.

6. Don't ban the flagged IPs. He/she will just find another one. Make them and their users angry so they give up on you.

7. Maybe write some bad words to the user in random places in the HTML when you detect flagged IPs :D The users will leave the site, which will hurt the adversary's SEO. They'll get downranked.

8. Enable image hotlinking protection. Increase the cost of proxying for them.

9. Use @document CSS to hide the content when the URL is different.

10. Send an abuse report to the hosting provider.

11. Send an abuse report to the domain provider.

12. Look at the flagged IPs and try to find the proxy provider. If you find them, send mail to them too.

Edit: more ideas sparked in my mind while I was on the toilet:

1. Create big fake CSS files (10MB etc.) and repeatedly download them through the adversary's website. This should cost them a lot of money in proxy bandwidth.

2. When you detect a proxy, return huge fake HTML files (10GB etc.). That could crash their server if they load the HTML into memory when parsing.
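
For that second edit idea, you don't actually have to push 10GB over the wire: gzip compresses zeros at roughly 1000:1, so a pre-compressed response is cheap to serve but huge to parse. A sketch in Node (`flagged` stands in for whatever IP-flagging you build from the other ideas):

  const http = require('http');
  const zlib = require('zlib');

  // ~10 MB of zeros gzips down to ~10 KB; build it once at startup and
  // serve the compressed bytes directly with Content-Encoding: gzip
  const bomb = zlib.gzipSync(Buffer.alloc(10 * 1024 * 1024));

  const flagged = new Set(['45.86.61.166']); // filled in by your own detection

  http.createServer((req, res) => {
    if (flagged.has(req.socket.remoteAddress)) {
      res.writeHead(200, { 'Content-Type': 'text/html', 'Content-Encoding': 'gzip' });
      return res.end(bomb);
    }
    // ... normal request handling ...
    res.end('ok');
  }).listen(8080);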


👤 politelemon
Add a link rel="canonical" to your pages as well; it should give search engines a hint that your domain is the legit one.

https://webmasters.stackexchange.com/questions/56326/canonic...
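
i.e. something like this in the head of each page (the path is a placeholder for whatever page is being served):

  <link rel="canonical" href="https://www.saashub.com/your-page-path">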

I noticed that the other domain is hotlinking your images, so you can disable image hotlinking by only allowing certain domains as referrers. If you block hotlinked images, the other domain will not look as good. Remember to do it for SVGs too.

https://ubiq.co/tech-blog/prevent-image-hotlinking-nginx/
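
If your stack is Node rather than nginx, the same referrer check is a few lines of middleware (a sketch assuming an existing Express `app`; adjust the asset path and domain pattern to your setup):

  // only serve images when the Referer is absent (direct visits)
  // or matches your own domain
  const OWN_SITE = /^https?:\/\/(www\.)?saashub\.com\//;

  app.use('/assets', (req, res, next) => {
    const referer = req.get('Referer');
    if (referer && !OWN_SITE.test(referer)) return res.status(403).end();
    next();
  });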

Finally, I also see they are using a CDN called Statically to host some of your assets off your domain. You can block its scraper by the user agent listed here:

http://statically.io/docs/whitelisting-statically/


👤 halifaxbeard
Set up Cloudflare on the domain and turn on “bot fight mode”.

If the TLS ciphers the client proposes for negotiation don't align with the client's User-Agent, they get a CAPTCHA.

I would suspect that whoever is doing this proxy-mirroring isn’t smart enough to ensure the TLS ciphers align with the User-Agent they’re passing through.


👤 JW_00000
What about a slightly alternative approach, where instead of trying to block the abuser, you try to make it clear to end users what the real website is? E.g. in your logo image, include the real domain name "saashub.com". Have some introduction text on your home page "Here at saashub.com, we compare SaaS products ...." When your images are hotlinked, replace them with text like "This is a fraudulent website, find us at saashub.com". Anything that can make it obvious to end users that they're on the wrong website when they visit the abuser's URL.

By the way, I've also reported the abuser as a phishing/fraud website through https://safebrowsing.google.com/safebrowsing/report_phish/?u...


👤 wpietri
One strategy tip: don't play cat and mouse. As you've demonstrated, if you change one thing, they will figure it out and change one thing. Not only does that not work, but you are training them that it's worth trying to beat your latest change.

Instead, plot a few different changes and throw them in all at once. Preferably in a way where they will have to solve all of the changes at the same time to figure out what happened and get things working again. Also, favor changes that are harder to detect. E.g., pure IP blocks are easier to detect than tarpitting and returning fake/corrupted content. The longer their feedback loops, the more likely it is that they'll just give up and go be a parasite somewhere else.


👤 ycommentator
My networking knowledge isn't great, so apologies if this is wrong. But if it's not wrong, it could help.

FIND THE IP FOR THE DOMAIN

  PS > ping sukuns.us.to
  Pinging sukuns.us.to [45.86.61.166] with 32 bytes of data:
  Reply from 45.86.61.166: bytes=32 time=319ms TTL=39
  ...
REVERSE DNS TO FIND HOST

  https://dnschecker.org/ip-whois-lookup.php?query=45.86.61.166
Apparently it's "Dedipath".

And that WHOIS lookup gives an abuse email address:

  "Abuse contact for '45.86.60.0 - 45.86.61.255' is 'abuse@dedipath.com'"
So you could try emailing that address. They may take the site down, or hopefully more than that...

👤 trinovantes
They are probably using some public cloud service, so simply banning all IPs from cloud ASNs [1] will usually be enough. The downside is that you're also banning any users on VPNs.

[1] https://github.com/brianhama/bad-asn-list
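
A dependency-free IPv4 sketch of that check, with the prefixes loaded from a list like [1] or bgp.he.net (the example prefix below is just illustrative):

  // naive IPv4 CIDR matching -- good enough for a block list
  function ipToInt(ip) {
    return ip.split('.').reduce((n, octet) => (n << 8) + Number(octet), 0) >>> 0;
  }

  function inCidr(ip, cidr) {
    const [net, bits] = cidr.split('/');
    const mask = bits === '0' ? 0 : (~0 << (32 - Number(bits))) >>> 0;
    return (ipToInt(ip) & mask) === (ipToInt(net) & mask);
  }

  const cloudPrefixes = ['45.86.60.0/22']; // example; load the real list from [1]
  const isCloud = (ip) => cloudPrefixes.some((c) => inCidr(ip, c));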


👤 dutchbrit
Warning: Don't visit the proxy mirror at work, I was redirected to xcams/adult content.

👤 sdrinf
If the host (DediPath) is not respecting DMCA notices, one other thing you can do is add the requester's IP address to every page, e.g. as a div class. If the responses are live-proxied, this will surface the cloner's front-facing IP address, and you can block that (and their ASN) specifically.
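
A sketch of that as Express middleware (assumes an existing `app`; the point is just that the stamp rides along in whatever HTML the proxy re-serves):

  // stamp the requesting IP into every HTML page; view-source on the
  // mirror then reveals the proxy's egress IP
  app.use((req, res, next) => {
    const send = res.send.bind(res);
    res.send = (body) =>
      send(typeof body === 'string'
        ? body.replace('</body>', `<div class="rid-${req.ip.replace(/[.:]/g, '-')}"></div></body>`)
        : body);
    next();
  });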

👤 mkoryak
> They are stripping all JS

Are they now?

Add `visibility: hidden` to random elements on the page, and show them with JavaScript.

OR

Are they removing _all_ JS? Have you checked every vector they'd need to strip?

You can try to do script injection _into your own site_ to see if their mirroring software is smart enough to deal with all the different xss vectors.

Bonus points: if they remove your inline event-handler attributes, add a style like

  body { display: none }
  body[onhover='the js code that they will remove'] { display: block }


👤 marginalia_nu
Look at your traffic logs and see if you can't fingerprint the scraper. Should be relatively easy since they're mirroring your entire site.

Then instead of blocking the fingerprint, poison the data. Introduce errors that are hard to detect. Maybe corrupt the URLs, or use the incorrect description or category. Be creative, but make it kind of shit.

It's easy to work around blocks. Working around poisoned data is much harder.


👤 modeless
HN probably won't like this but if they are blocking all JS you can make all content invisible with CSS and use JS to unhide it before page load finishes. Temporarily of course until these guys go away.
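
A bare-bones sketch of the simple version (the host check is a bonus: even if they ever start forwarding JS, the page stays blank on the wrong domain):

  <style>html { visibility: hidden; }</style>
  <script>
    // if this script gets stripped, the page simply stays blank
    document.addEventListener('DOMContentLoaded', function () {
      if (location.host === 'www.saashub.com') {
        document.documentElement.style.visibility = 'visible';
      }
    });
  </script>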

The nice thing about this is it can be made arbitrarily complex. For example you can make the page actually blank and fetch all the normal, real content with JS after validating the user's browser as much as you like on both client and server. That's what Cloudflare's bot shield stuff does. Since JS is Turing complete there is no shortcut that the proxy can take to avoid running your real JS if you obfuscate it enough. They would have to solve the halting problem.

What a determined adversary would do is run your code in a sandbox that spoofs the URL, so then your job becomes detecting the sandbox. But it's unlikely they would escalate to this point when there are so many other sites on the internet to copy.


👤 signaru
Some "less technical" suggestions after being inspired by other creative suggestions here:

Put brandings/personalizations/signatures in your pages that are not easy to remove automatically. Include your site URL if possible. The idea is that if a visitor sees these on a different site, it becomes obvious that the content doesn't belong there.

Write an article page about these things happening, specifically mentioning the mirroring site URLs, and see if they will also blindly mirror it.


👤 mortehu
Can you use the fact that they're proxying to prove to Bing and Google webmaster tools that you own their domain, and delist it? The verification is done by serving a file provided by Bing/Google.

👤 cph123
I used to do this to other websites (we won't go into why). One thing that may help you is to always return your HTML responses gzipped, regardless of whether the client asked for them - i.e. ignore the Accept-Encoding preferences. This makes it harder for their server to rewrite your URLs on demand, and most clients will accept gzipped responses anyway.

👤 satya71
We had a similar situation, though it was just a snapshot. We found out they were hosting using S3, and filed a DMCA request with AWS. It was taken down and hasn’t returned.

👤 adql
When someone did it to us we replaced the served content with ads for our site.

👤 someweirdperson
If they are serving all files, that should work with systems that verify you are the owner by asking you to serve a file in response to a challenge.

The copy is using ZeroSSL, which seems to use a mechanism similar to Let's Encrypt's to verify certs. Maybe you could obtain their certificate by serving the response to their challenge from your server. No idea how to proceed from there, though.

Or activate Google's webmaster tools. Maybe there's some setting like "remove from index" or "upload sitemap" that could reduce its visibility on Google.


👤 pornel
Don't block their IPs; rather, return them subtly wrong content that isn't broken at first glance. Insert typos, replace important terms, inject nonsense technobabble, make URLs point to the wrong pages, and inject off-topic SEO-spammy keywords that search engines will recognize as the SEO spam they are.

👤 pornel
If they are stripping all JS, you can make the page work only with JS enabled :/

👤 zupa-hu
You can overload their servers, as they pass any URL on to you. Just make sure their resource usage is significantly higher than yours, e.g. by serving a gzipped response that contains millions of copies of the same URL to your site. Ideally, make it not fit in their memory. Or simply make it large enough to take really long to compute.

Put it under a URL only you know, then start DoS-ing it.

Of course, that requires you to be able to serve a prepared gzipped response; it depends on your stack.


👤 Fire-Dragon-DoL
Could you prepend full URLs to your website on assets and trigger CORS requests on all of them? That would make it really annoying to proxy.

👤 zhouyisu
Maybe a logic honeypot would be good, such as an infinite paginated list with some random triggers hidden on pages with nonsense titles. When an IP hits one of these triggers, it is automatically banned.

Bots will trigger it by walking through all the pages, but a real human would not click in, since the pagination and the titles are nonsense.


👤 scarmig
This is a game of cat and mouse; although engineering approaches are fun, it's primarily an organizational/legal challenge, not a red/blue team exercise.

The first line of defense is contacting the relevant authorities. This means search engines, the hosting provider, and the owner of the domain (who may not be the abuser). Be polite and provide relevant evidence. Make it easy for them to act on it. There'll be some turnaround time and it's not always successful, but it's the best way to get a meaningful resolution to the issue.

What about in the meantime? If all the source IPs are from one ASN, just temporarily block all IPs originating from that ASN. There'll be some collateral damage, but most of your users won't be affected.


👤 tekno45
Block everyone except them and start hosting Disney content. Then give the Mouse a ring.

👤 SergeAx
Does it hurt you in any way? If not, I would just leave it alone. Google can tell a copy from the original. I tried searching for some arbitrary text from your website - there is no trace of the copycat in Google's SERP.

What struck me, though, is that the copycat website is waaaay faster than your original. If I were in your shoes, I would invest my time and effort into speeding up the site. Unlike hunting script kiddies, that will bring palpable benefits.


👤 batch12
Have you looked at filtering the traffic by ASN? You may be able to identify the provider your adversary likes to use and apply some of the controls musabg suggested to any traffic sourcing from these networks.

I have a website doing this to one of my domains. I have let it slide for now since I get value out of users that use their site too, but I have thought about packing their content with advertisements to turn the tables a bit.


👤 heartbeats
What about steganography?

If you vary subtle details of spelling, spacing, formatting, etc. based on the source IP, then you can look at one of their pages and figure out which IP it was scraped from.
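
e.g. derive the variations from a keyed hash of the source IP, so the watermark can't be guessed; each bit then picks one subtle detail to toggle (a sketch in Node):

  const crypto = require('crypto');

  // n watermark bits per IP; each bit decides one detail (double vs single
  // space, &nbsp; vs space, serial comma on/off, ...)
  function watermarkBits(ip, secret, n = 16) {
    const h = crypto.createHmac('sha256', secret).update(ip).digest();
    return Array.from({ length: n }, (_, i) => (h[i >> 3] >> (i & 7)) & 1);
  }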

Then, just add goatse to all pages requested by that IP. Alternatively, replace every other sentence with GPT-generated nonsense.

EDIT: it should be quite easy to use JS to fingerprint the scraper. The downside is that you will also block all NoScript users.


👤 achillean
The following can be done for free without an API key or Shodan account:

1. Grab the list of IPs that you've already identified and feed them through nrich (https://gitlab.com/shodan-public/nrich): "nrich bad-ips.txt"

2. See if all of the offending IPs share a common open port/ service/ provider/ hostname/ etc. Your regular visitors probably connect from IPs that don't have any open ports exposed to the Internet (or just 7547).

3. If the IPs share a fingerprint then you could lazily enrich client IPs using https://internetdb.shodan.io and block them in near real-time. You could also do the IP enrichment before returning content but then you're adding some latency (<40ms) to every page load which isn't ideal.
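
The near-real-time enrichment in step 3 is a single unauthenticated GET (a sketch, assuming Node 18+ for the global fetch):

  // InternetDB returns 404 for IPs it has no data on -- i.e. most home users
  async function looksLikeServer(ip) {
    const res = await fetch(`https://internetdb.shodan.io/${ip}`);
    if (!res.ok) return false;
    const info = await res.json();
    return (info.ports || []).length > 0; // exposed services => likely a proxy box
  }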


👤 than3
Have you considered enabling HSTS on the webserver, with dynamic endpoints and rate limiting once a flagged IP is detected - literally slowing it to a crawl?

I seem to recall someone doing something similar at one point: hosting files and setting up resources that get pulled down only by flagged IPs, such as a 300kb gzip-encoded file that tries to expand to 100TB.


👤 thenickdude
Maybe they're also proxying URLs like the HTML verification files that search engines have you upload to claim the domain as your own?

You may be able to claim their domain out from under them and then mess with search settings (e.g. In Google Search Console you can remove URLs from search results).


👤 NorwegianDude
Add the worst content imaginable to the page, but don't make it visible by default. If the site strips JS, then use CSS to only show the terrible content when it's served on that domain. You can use CSS to check the current domain based on e.g. links, as sketched below.

Extra points if you can cause legal trouble for whoever runs the site. If you're hosting rather large files, you can also hide content by default that will never be loaded on your site but will load on the other site: add a large file to your site, reference it a few thousand times with query params to ensure cache busting, and then make the browser load it all using CSS when it detects that it's running on the other site.
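
A sketch of the link trick: put an absolute-URL home link at the top of the body with the hidden notice as a later sibling. The proxy rewrites your hrefs to its own domain, which is exactly what un-hides the notice (though they may of course rewrite the stylesheet too):

  .stolen-notice { display: none; }

  /* the proxy rewrites www.saashub.com hrefs to its own domain,
     so this selector only matches on the mirror */
  a[href*="sukuns.us.to"] ~ .stolen-notice { display: block; }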


👤 tyingq
Other posts mentioned ways to detect, like:

  - "bait urls" that other crawlers won't touch
  - trigger by request volume and filter out legit crawlers
  - find something unique about the headers in their requests that you can identify them with
One additional suggestion is to not block them but rather serve them different content. Have a big pool of fake pages and randomly return that content. If they get a 200 OK and some content, they are less likely to notice that anything is wrong.

Another idea is to serve them something that you can then report as some type of violation to Google, or something (think SafeSearch) that gets their site filtered.


👤 reaktivo
Redirect via meta tag
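
i.e. something like this in the head (though they may rewrite the URL the same way they rewrite links):

  <meta http-equiv="refresh" content="0; url=https://www.saashub.com/">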


👤 mouzogu
add some CSS to mess with their URLs

a[href*="sukuns"] { font-size: 500px!important; color: LimeGreen!important; }

pretty much destroys the page. i guess eventually they would give up in the specificity battle.

probably more stuff you could do with CSS to mess with them.


👤 meltedcapacitor
Use them as a free CDN? A user page from your domain actually downloads content through them, but with your ads. (Maybe for the continents that are not your primary market.)

(Less economical if they're not caching anything.)


👤 __oh_es
Facebook had multiple approaches to keep users seeing ads (i.e. your message) on their site despite ad blockers. Could you mix a message into your content, split up across elements? Hopefully it would not affect rankings too much, but it could at least reach users. https://www.bbc.co.uk/news/technology-46508234.amp

Base64-encoding images with watermarks may also be worth a shout.

Love the zip bombing.

Long shot, but I wonder if it's possible to execute some script on their server.


👤 alexfromapex
There are ways you can fix this yourself but like all things it's way easier to just get a managed solution. CloudFlare or similar should give the necessary tools to block these types of sites.

👤 werid
.us.to subdomains come from the (dynamic) DNS provider FreeDNS: https://freedns.afraid.org/

👤 napolux
Want to have some fun?

Happened to me back in the days of blogging.

Posted an image of me mocking them on my blog. Sure enough they published it and didn't notice for a while. They stopped soon after :)


👤 rglover
You might be able to do an origin filter on the headers for requests to your backend (https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-...). Check that the X-Forwarded-For header is what you expect and, if not, block the request at the middleware level.

👤 gildas
Block all requests having "https://sukuns.us.to" as "Referer" HTTP header.

👤 nstart
Sending a mail to the hosting provider is helpful. Also, if you are looking at blocking IPs, you can try blocking an entire ASN temporarily to see how that works. It's one thing for someone to destroy a server and reimage it on the same service; it's another thing to destroy it and bring it up on a fresh provider. Currently the attacker is using DediPath, for example. Block the ASN while waiting for their abuse team to respond.

👤 0xbkt
You should be able to identify those requests by inspecting the TLS cipher. Cloudflare Workers has that value in `request.cf.tlsCipher`[0]. Keep in mind the collateral damage it may have, though.

[0] https://developers.cloudflare.com/workers/runtime-apis/reque...
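
A Worker sketch of that idea; the cipher allow-list below is illustrative, and you'd tune it by mapping the User-Agents you actually see to the ciphers they normally negotiate:

  // toy heuristic: modern browsers negotiate AEAD suites; anything claiming
  // to be a browser but negotiating something else is suspicious
  const BROWSER_CIPHERS = [
    'AEAD-AES128-GCM-SHA256',
    'AEAD-AES256-GCM-SHA384',
    'AEAD-CHACHA20-POLY1305-SHA256',
  ];

  export default {
    async fetch(request) {
      const cipher = (request.cf && request.cf.tlsCipher) || '';
      const ua = request.headers.get('User-Agent') || '';
      if (/Mozilla\//.test(ua) && !BROWSER_CIPHERS.includes(cipher)) {
        return new Response('Forbidden', { status: 403 });
      }
      return fetch(request); // otherwise pass through to the origin
    }
  };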


👤 MR4D
I just checked the offending site - it’s full of malware. I think if you report that aspect then you might get faster resolution from the search engines.

👤 NicoJuicy
Quick and easy first:

1) Add a watermark to your images when the request comes through their proxy:

"Stolen image from {url}"

2) Add a JS script that checks whether the URL differs from yours and, if so, displays a message and redirects.


👤 codeonline
When you detect their IP:

- serve images upside down
- serve images blurry

More examples here, from long long ago: http://www.ex-parrot.com/~pete/upside-down-ternet.html


👤 balls187
Um, so I clicked the second link, and was redirected to a not-safe-for-work website.

Luckily, I am at home, and my children are at school.

I have no idea what happened, or why I got redirected, but I can certainly suggest not taking up the idea to serve disgusting content (given I clicked a link that someone on HN posted, I shouldn't be subjected to that).


👤 psyfi
Fail2ban might help.

Even with IP rotation, a proxy website would probably generate more traffic than normal from those few IPs. Tweak the fail2ban vars to make it less likely to trigger on false positives (a larger number of requests / a larger window of time), but block the violating IPs for a long period, a few days for example.

I hope it helps.


👤 antonyh
One thing you can do is add a canonical to each page, which will help solve the Bing/Google issue until they realise it's there. Do it before they add one.

You're already using Cloudflare; you could try talking to their support, or just turn up the settings to make it stricter for bots.


👤 pera
*.us.to are FreeDNS subdomains; I would contact them. Additionally, you could do a whois and contact the ISP.

👤 btbuildem
Lots of good suggestions here, let me throw one more in the pot -- could you do an equivalent of a "ghost ban"?

Instead of blocking their IPs, detect if the traffic is coming from the abuser's IPs, and serve different content -- blank, irrelevant, offensive, copyright violations, etc.


👤 VanTheBrand
This will change if they switch hosting but here’s a list of all the ip prefixes for their current hosting provider.

https://bgp.he.net/AS35913#_prefixes

The IPs they switch between may all be from this pool.


👤 lofaszvanitt
Kill their ad ranking by inserting explicit words that only porn sites use into the content body (e.g. get the categories/some video titles from Pornhub). Google only shows results from adult sites when the search string also has explicit words in it.

👤 ycommentator
Lots of great ideas here. A slight variation or emphasis on some: Specifically aim to advertise your own site on the other one. While you can anyway. Free advertising to their (should be your) audience, in return for what they're doing... Seems fair!

👤 jFriedensreich
There are infinite mitigations, and it will always boil down to how much they want to do this vs. how much you want to prevent them. In the end they could render in a remote-controlled browser and use CDN or AWS IP addresses en masse. I would consider hijacking their users in subtle ways, like replacing pictures or text with obscenities or legal disclaimers. Unfortunately their motivation is selling ads to other dodgy companies, so it's unlikely you can mitigate that way. I would also invest in getting the SEO in order and having them removed from Google if possible. Lastly, there are solutions like Cloudflare Turnstile that don't impact normal users as much as in the days of captchas.

👤 jamal-kumar
Believe it or not ICANN actually takes abuse reports seriously:

https://www.icann.org/resources/pages/abuse-2014-01-29-en


👤 m1sta_
Dynamically generate all the content in the browser into a canvas element. No HTML to steal.

More simply, you could just make all the HTML links broken unless some obfuscated or server-backed algorithm is run on them. Think Google search results.


👤 frankzander
Had the same problem. They used a scraper that ran on Amazon AWS, so I blocked all Amazon AWS IPs (google for the list of IPs, and then for a script that creates NGINX rules for all of them). Works quite well.

👤 JensRantil
Depending on the nature of the access patterns, you might be able to block the proxy IPs automatically by tuning the parameters of fail2ban (if you have a server).

👤 tacone
Some years ago I let my blog domain expire, only to find out that somebody had bought it and was serving a mirror of my content plus scam ads. I reported them to their DNS provider and they were gone in 2-3 weeks.

👤 amurf
Instead of rendering server-side, render client-side. If they block JS, they get nothing. In the JS, check the hostname, and if it matches their hostname, don't render anything.

Potential downsides: SEO.


👤 someweirdperson
If direct (non-proxied) access from the search engine spiders can be identified, serve the real robots.txt; otherwise disable crawling. A meta noindex tag could be switched the same way.

👤 nickphx
What about blocking by SSL fingerprinting? Established browsers have known fingerprints derived from how the SSL request is made, the supported connection options, etc.

👤 55555
Anyone have any idea what this does? It's embedded in the copycat site's source: earlierindians.com/39faf03aa687eeefffbe787537b56e15/invoke.js

👤 MohammadAZ
I'm not going to focus on the problem here. I just want to say that I like the idea behind your website, a good source of market research. Bookmarked it.

👤 ddalex
Once upon a time, I served all the static content base64-xored with a session key on the backend, and decrypted the content on the front-end with JS.
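
The decode side of that is tiny (a sketch; the server would XOR the payload with the same per-session key before base64-encoding it):

  // decode the base64 payload, then XOR with the session key to recover it
  function xorDecode(b64, key) {
    const bytes = atob(b64);
    let out = '';
    for (let i = 0; i < bytes.length; i++) {
      out += String.fromCharCode(bytes.charCodeAt(i) ^ key.charCodeAt(i % key.length));
    }
    return out;
  }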

👤 henryackerman
I wonder what they get out of this. Injecting ads perhaps?

👤 lessname
If they copy the JavaScript too, you could add code that checks whether the domain matches and, if not, shows a blank page (or something like that).

👤 Ra8
You could maybe put your website behind a captcha? Google's reCAPTCHA works behind the scenes, so it won't affect normal users.

👤 cweagans
Just block all of the OVH IP ranges?

https://ipinfo.io/AS16276


👤 sagarpatil
What’s the motivation for someone to proxy mirror a site?

Does copied content even rank in Google? How are they driving the traffic to it?


👤 simonz05
Another option would be to use a service like Cloudflare, which offers protections against scraping and other malicious behavior. This can help prevent the proxy-mirror site from being able to access your site's content.

https://blog.cloudflare.com/introducing-scrapeshield-discove...


👤 amelius
Generate your pages from Javascript.

👤 oliv__
This made me wonder whether something similar is happening to my domain.

How would one go about finding out?


👤 khiqxj
I can't see your site - the mirror/proxy is useful and doing its job.

  Access denied - Error code 1020

  You do not have access to www.saashub.com.

  The site owner may have set restrictions that prevent you from accessing the site.

  Performance & security by Cloudflare


👤 acomjean
DMCA takedown?

They're serving your copyrighted content. Seems like what it was made for.


👤 coding123
Their domain is likely to be considered hostile, and drop in Google results.

👤 shahidkarimi
What an idea. I will do the same with some popular websites.

👤 darthrupert
Implement a reverse swearword filter for their IP.

👤 colesantiago
If you have a trademark, domain takedown always works.

👤 worthless-trash
> 3) Add some JavaScript to redirect to the original domain-name - They are stripping all JS.

Make your site only work with JS.. Easy.


👤 LinuxBender
I tried to look up their site then realized I block "us.to" locally. Since you have their site linked in this thread they are likely seeing the HN thread as a referrer in their access logs and reading this. I expect this to turn into an ongoing battle as a result, but maybe this could be a fun learning exercise for everyone here.

The current IP 45.86.61.166 is likely a compromised host [1] which tells me you are dealing with one of the gangs that create watering holes for phishing attacks and plan to use your content to lure people in. They probably have several thousand compromised hosts to play with. Since others mentioned you could change the content on your site, I would suggest adding the EICAR string [2] throughout the proxied content as well so that people using anti-malware software might block it. They are probably parking multiple phishing sites on the same compromised hosts [3].

This would also be a game of whack-a-mole but if you can find a bunch of their watering hole sites and get the certificate fingerprints and domains into a text file, give them to ZeroSSL and see if they can mass revoke them. Not many browsers validate this but it might get another set of eyes on the gang abusing their free certs.

If you have a lot of spare time on your hands, you could automate scripting the gathering of the compromised proxy hosts they are using and submit the IP, server name, domain name to the hosting provider with the subject "Host: ${IP}, ${Hostname}, compromised for phishing watering hole attacks". Only do this if you can automate it as many server providers have so many of these complaints they end up in a low priority bucket. Use the abuse@, legal@ and security@ aliases for the hosting company along with whatever they have on their abuse contact page. Send these emails from a domain you do not care about as it will get flagged as spam.

Another option would be to draft a very easy to understand email that explains what is occurring and give that to Google and Bing. Even better would be if we could get the eyes of Tavis Ormandy from Google's vulnerability research team to think of ways to break this type of plagiarized content. Perhaps ping him on Twitter and see if he is up to the challenge of solving this in a generalized way to defeat the watering holes.

I can think of a few other things that would trip up their proxies but no point in mentioning it here since the attackers are reading this.

[1] - https://www.shodan.io/host/45.86.61.166

[2] - https://www.eicar.org/download-anti-malware-testfile/

[3] - https://urlscan.io/result/af93fb90-f676-4300-838f-adc5d16b47...


👤 arnorhs
I would just contact Cloudflare via Discord. They will know what your best recourse is.

👤 3nt0py__
Well... the only reasonable thing to do is to find a host that accepts Monero as payment, rent a bare-metal server with IPMI access, encrypt the hard disk with LUKS and VeraCrypt, scan 0.0.0.0/0 for unpatched DNS servers, and start DDoSing the mirror site.