There are the subtle changes made on the fly by smart boxes sitting between you and the outside world.
There's caching in your GCP buckets and Amazon S3 buckets.
There's caching in your frontend CDN.
There's DNS caching.
There's in-browser caching, applied differently to page content, to crypto assets like X.509 certificates, and to DNS FQDN query results.
Maybe it's just me, but it "feels" like we have a cache problem at scale:
* you can't update your own cache lifetime without a strong likelihood that somebody else's cache erodes it.
* you can't easily flush all these caches. Some of them are held, at scale, by millions of devices.
* they interact!
"There are only two hard things in Computer Science: cache invalidation and naming things." -- Phil Karlton, around 1996 or 1997.
There are "cache busting" techniques that mostly work, appending a random string as a url parameter https:example.org/path/page.html?5trf6783gf7rvf5 will _almost_ always fetch a current version of the page from origin. If you have stuff where being able to bypass the caches that are out off your control between you and your users, it's worth making sure you have a way to do that.
I always use date stamps on things like T&C and Privacy Policy pages, so the actual URL changes when the policies/legals change. Use something like https://example.org/privacy_20220204.html, and make sure you have a good way to update all your internal links and to redirect requests for old versions to the new one (or leave the old one in place with a big notice at the top saying "This policy was superseded on 4 Feb 2022 - click here for the current privacy policy").
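The date-stamp approach can be sketched as a tiny helper plus a redirect map (the URL layout and the older date below are illustrative assumptions, not anything the site actually uses):

```python
from datetime import date

def dated_policy_url(base: str, name: str, effective: date) -> str:
    """Build a date-stamped URL so the path itself changes whenever the
    policy changes - sidestepping every cache layer in between."""
    return f"{base}/{name}_{effective:%Y%m%d}.html"

current = dated_policy_url("https://example.org", "privacy", date(2022, 2, 4))

# Hypothetical redirect map: each superseded URL points at the current one.
# Your web server or framework would serve these as 301/302 redirects.
redirects = {
    dated_policy_url("https://example.org", "privacy", date(2020, 1, 15)): current,
}

print(current)  # https://example.org/privacy_20220204.html
```

Because the path is brand new on every policy change, no cache anywhere can serve a stale copy of the new URL.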