The downside is that my internal domain names are now public (e.g. you can find them by looking up issued certificates for my domain through https://crt.sh or https://developers.facebook.com/tools/ct/, or by looking up the public DNS records).
I could keep it all private if I set up my own root certificate, trusted it on all of my machines, and issued self-signed certificates. I could also set up my own DNS server and make all my machines use it. Needless to say, that's way more hassle than just making everything public and buying a domain.
Another way to keep it private is to issue a wildcard certificate through Let's Encrypt and point my DNS records to a reverse proxy which would use the certificate. This would require all network traffic to pass through the proxy, making it a single point of failure.
Have you encountered this problem before? How did you solve keeping your internal DNS private?
Personally, I have one publicly facing server on my network, and I've loaded my wildcard certificate into HAProxy, which terminates HTTPS and forwards each request to the appropriate backend server (usually based on the subdomain). Of course, I back up the configs, scripts, etc., so restoring and re-creating this setup isn't complicated.
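A setup like that can be sketched roughly like this (a minimal HAProxy example, not the commenter's actual config; hostnames, paths, and backend addresses are made up for illustration):

```
# /etc/haproxy/haproxy.cfg fragment -- illustrative only
frontend https_in
    mode http
    # HAProxy expects the PEM file to contain the cert chain and key together
    bind *:443 ssl crt /etc/haproxy/certs/wildcard.example.com.pem
    # route by the requested subdomain in the Host header
    use_backend app1 if { req.hdr(host) -i app1.example.com }
    use_backend app2 if { req.hdr(host) -i app2.example.com }
    default_backend homepage

backend app1
    mode http
    server app1 192.168.1.20:8080 check

backend app2
    mode http
    server app2 192.168.1.21:8112 check

backend homepage
    mode http
    server home 192.168.1.10:80 check
```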
I run split-horizon DNS[0] (as I manage both my internal and external DNS zones), which works just fine.
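For anyone unfamiliar: split-horizon DNS serves different answers for the same zone depending on who is asking. With BIND this is done with views; a rough sketch (networks, zone names, and file paths are illustrative, not the commenter's actual config):

```
// named.conf fragment -- illustrative only
view "internal" {
    match-clients { 192.168.1.0/24; localhost; };
    zone "example.com" {
        type master;
        file "zones/example.com.internal";  // includes RFC 1918 addresses
    };
};

view "external" {
    match-clients { any; };
    zone "example.com" {
        type master;
        file "zones/example.com.external";  // public records only
    };
};
```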
For external-facing services I use Let's Encrypt (LE) certs, and when internal services run on the same hosts, I reuse those LE certs for TLS/HTTPS.
For internal-only services where encryption is desired, I use self-signed certs. That said, in many cases the internal services don't actually handle any data that needs privacy on my internal network (e.g., Podgrab[1], Deluge-web[2]), so often I don't bother.
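Generating such a self-signed cert is a one-liner with openssl (1.1.1 or newer; the hostname is illustrative):

```shell
# self-signed cert for an internal host, valid one year;
# -addext puts the hostname in the SAN, which modern browsers require
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -keyout internal.key -out internal.crt \
  -subj "/CN=podgrab.home.lan" \
  -addext "subjectAltName=DNS:podgrab.home.lan"
```

You still have to trust the cert (or click through the warning) on each client, which is part of the hassle mentioned upthread.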
I'm not sure I'd set things up this way if I were starting in 2022 (the origins of this setup go back to the 20th century), though I probably would. It works for me, and since it's already in place, there isn't much to do except maintain and update the zones/certs.
It's not that hard or that big a deal, IMHO.
[0] https://en.wikipedia.org/wiki/Split-horizon_DNS
[1] https://github.com/akhilrex/podgrab
[2] https://github.com/MAESTROHANTER/deluge-web
Edit: Added missing references.
DNS is handled by Pi-hole, which forwards internal requests to the OPNsense box. DHCP clients are on their own subdomain.
The OPNsense ACME client doesn't refresh the cert properly (I use a short lifetime), but other than that it works nicely. All services force HTTPS, except Mealie and the "homepage" (for guest access).
In your case: get a public domain, issue a wildcard certificate, and copy it to your endpoints.
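For example, with certbot's manual DNS-01 flow (the domain is a placeholder; wildcard certs require the DNS-01 challenge, and a DNS-provider plugin can automate the TXT-record step):

```
certbot certonly --manual --preferred-challenges dns \
  -d 'example.com' -d '*.example.com'
```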
You can use both private and public IPs in A records on a public DNS server, so it's up to you which to use.
If you choose public IPs, you can use hairpin NAT on your router so that local clients can reach local resources via the public IPs.
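On a Linux-based router, hairpin NAT for one service is roughly two rules (all addresses are illustrative: LAN 192.168.1.0/24, public IP 203.0.113.5, internal server 192.168.1.10):

```
# redirect LAN clients hitting the public IP back to the internal server
iptables -t nat -A PREROUTING -s 192.168.1.0/24 -d 203.0.113.5 \
  -p tcp --dport 443 -j DNAT --to-destination 192.168.1.10
# masquerade so return traffic flows back through the router
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -d 192.168.1.10 \
  -p tcp --dport 443 -j MASQUERADE
```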
Another way is move to IPv6 which eliminates most of these problems, but, obviously you need a routed IPv6 network or maintain a tunnel to IPv6 broker/provider (eg he.net if they still provide this service).
> looking up the public DNS records
Nope, nobody will send you all your records. You need to know the exact record name to receive its value.
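That is, you can query names you already know, but authoritative servers normally refuse to enumerate the whole zone (domain and server names are placeholders):

```
# works if you know the name:
dig +short nas.example.com A

# asking for the whole zone (AXFR) is typically refused by public servers:
dig @ns1.example.com example.com AXFR
```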