Having read the myriad AWS data protection agreements, I would feel comfortable running Bedrock-hosted models. Others may feel differently.
Code, I only run through zero-retention API accounts anyway.
The first thing to keep in mind is the illusion of transparency. You might internally know that something is wrong or exploitable, or that you've made an obvious mistake, but that's generally much less obvious to others.
The second thing to keep in mind is that we are currently in a crisis of attention. There's too much to think about and do nowadays, and there is a gigantic lack of motivated actors to act on all that information. You could consider it the dual of the illusion of transparency: the illusion of motivation. Other people, by and large, just do not give a damn, because they can't and don't have time for it.
Even a nation state, if it wanted to spy on everyone's private information, would immediately find itself with too much nonsense to sift through and not enough time to follow through even on surface-level information. Let alone leaks that actually require some sort of sophisticated synthesis over two or three disparate pieces of info.
Lastly, there's the difficulty of exploitation. You know how projects and code and stuff seem easy until you try them, and then it turns out that actually, this is taking forever and it barely works? The whole devil-in-the-details thing.
Well, that applies to exploits as well. It's easy until you try it, and then you get a Swiss cheese model of success: the attack only works when all the holes line up, and usually some random thing doesn't and your workflow breaks.
AI surveillance, by the way, barely changes any of this calculus.
Your sensitive data? Load it up, bruh.