"Our cloaking technique is not designed to fool a tracker who uses similarity matching. However, we believe our cloaking technique should still be effective against Amazon Rekognition, since cloaks create a feature space separation between original and cloaked images that should result in low similarity scores between them."
I was in the process of verifying this when I found that _no such guarantee_ can be made using fawkes. I documented my experiment here: https://github.com/Shawn-Shan/fawkes/issues/125.
Using both Azure Face and AWS Rekognition, perturbed images, regardless of the cloaking settings, still scored highly similar to their originals. In a separate test, I compared a perturbed image against a different photo of the same subject, which _still resulted in a high degree of similarity_.
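For anyone who wants to reproduce the Rekognition half of this check, here is a minimal sketch (my own, not something shipped with fawkes) using boto3's `compare_faces` call. The file names and region are placeholders; if cloaking worked as claimed, the similarity between an original photo and its cloaked version should come out low.

```python
# Minimal sketch: compare two images with AWS Rekognition via boto3.
# File paths and the region are placeholders, not part of the original report.
import boto3

def max_similarity(source_path: str, target_path: str, region: str = "us-east-1") -> float:
    """Return the highest similarity score Rekognition reports between faces in two images."""
    client = boto3.client("rekognition", region_name=region)
    with open(source_path, "rb") as src, open(target_path, "rb") as tgt:
        response = client.compare_faces(
            SourceImage={"Bytes": src.read()},
            TargetImage={"Bytes": tgt.read()},
            SimilarityThreshold=0,  # report every match so low scores are visible too
        )
    matches = response.get("FaceMatches", [])
    return max((m["Similarity"] for m in matches), default=0.0)

if __name__ == "__main__":
    # Hypothetical file names: an original photo and its fawkes-cloaked version.
    print(max_similarity("original.jpg", "original_cloaked.png"))
```

In my runs, this kind of comparison kept returning similarity scores in the 90s rather than the low scores the cloaking claim would predict.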
While the README claims that "Python scripts that can test the protection effectiveness will be ready shortly," no such scripts have been provided.
According to the fawkes website, the application has been downloaded over 300k times. If the protection it claims to provide is ineffective, then there are _a lot of people_ out there with a false sense of security. The situation is made worse by the tool's apparent legitimacy: it was presented at USENIX Security 2020 and is published on PyPI.
My question is this: has anyone actually validated the efficacy of this algorithm, and can they share reproducible results? Or is fawkes yet another example of AI snake oil?