https://forsetisecurity.org/docs/latest/concepts/
The underlying design of how GCP services relate to each other is complex and definitely not available to the public. There is also a huge amount of nuance in GCP services relying on underlying Google services vs. other GCP services. Container Registry and Artifact Registry may both depend on the same underlying storage service, which isn't necessarily GCS but could be an internal Google storage service. How this is specifically managed, partitioned, and run is very hard to extract from the outside. Failure modes and scenarios are well designed and well understood internally, but not shared publicly.
If you have a very specific use case, you can approach your Google Cloud TAM/sales/customer engineer with the questions and they will be able to help you understand.
Source: Former Customer Engineer in Google Cloud for 4 years
In Nov 2020, AWS Kinesis went down for a few hours and took down a slew of other services that depended on each other (CloudWatch depends on Kinesis; EC2 Auto Scaling and Lambda depend on CloudWatch; everyone depends on EC2 and Lambda...)
It came as something of a surprise internally that a "small" component like Kinesis streams could take down so much.
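The cascade described above is just transitive closure over a dependency graph. Here's a toy sketch of that idea (the service graph below is illustrative, reconstructed from this comment, not AWS's actual internal architecture):

```python
from collections import deque

# Illustrative dependency edges only, paraphrasing the comment above;
# not a real or complete map of AWS internals.
DEPENDS_ON = {
    "cloudwatch": {"kinesis"},
    "ec2-autoscaling": {"cloudwatch"},
    "lambda": {"cloudwatch"},
}

def blast_radius(failed: str) -> set[str]:
    """Return every service transitively impacted when `failed` goes down."""
    impacted: set[str] = set()
    queue = deque([failed])
    while queue:
        svc = queue.popleft()
        for dependent, deps in DEPENDS_ON.items():
            if svc in deps and dependent not in impacted:
                impacted.add(dependent)
                queue.append(dependent)
    return impacted

print(sorted(blast_radius("kinesis")))
```

A one-hop outage in "kinesis" fans out to everything downstream of CloudWatch, which is why a "small" component can have a large blast radius.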
https://binx.io/blog/2020/10/03/how-to-find-google-cloud-pla...
However, that does not take into account GCP services being implemented behind the scenes using other GCP technologies in Google-managed projects. E.g. Cloud SQL uses Compute Engine and GCR (search "speckle umbrella"), Cloud Functions relies on Cloud Build to compile the function into a container, and AI Platform Training uses a GKE cluster internally.
You can often get hints about these things from the VPC-SC documentation, which explains on a per-service basis which APIs need to be enabled to protect the perimeter:
https://cloud.google.com/vpc-service-controls/docs/supported...
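One way to mine those docs for hints: take the per-product list of APIs that must be inside the perimeter and invert it, so you can ask "which products quietly pull in this API?" The table below is a small hand-copied sample based on the examples in this thread, not the full VPC-SC support matrix:

```python
# Sample perimeter-API table (illustrative; copy the real one from the
# VPC Service Controls supported-products docs before relying on it).
PERIMETER_APIS = {
    "Cloud Functions": {
        "cloudfunctions.googleapis.com",
        "cloudbuild.googleapis.com",
        "storage.googleapis.com",
    },
    "Cloud SQL": {
        "sqladmin.googleapis.com",
        "compute.googleapis.com",
    },
}

def dependents_of(api: str) -> list[str]:
    """Products whose perimeter config requires the given API."""
    return sorted(p for p, apis in PERIMETER_APIS.items() if api in apis)

print(dependents_of("cloudbuild.googleapis.com"))
```

E.g. seeing cloudbuild.googleapis.com show up under Cloud Functions is exactly the kind of hidden-dependency hint the docs give away.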
https://github.com/someengineering/cloudkeeper
I’ll reply in more depth since I’m on the go right now, but for now I hope the link is sufficient.