HACKER Q&A
📣 Tepix

What's your solution for SSL on internal servers?


Let's Encrypt has made SSL/TLS a no-brainer for websites on the public internet. Behind home routers, it's still a tedious topic. What's your (non-enterprise) solution?


  👤 yabones Accepted Answer ✓
I have one main Nginx server that all other services sit behind, regardless of whether they're internal or external. This is the box that NAT forwards ports 80 and 443 to.

I also use only subdomains of the domains I own, even for internal stuff. This means I also run a small bind9 DNS server with minimal zones to direct traffic to the proxy inside the network, and most of the records just don't exist outside, i.e. they return NXDOMAIN from my public DNS provider.

On the nginx box, I have a snippet like this:

    allow 10.x.0.0/16;
    deny all;
Then, when I configure something as 'internal only' I just add this line to its config file:

    include /etc/nginx/private.conf;
This means that I can decouple the certificate status from the internal/external status of the site. All sites get valid certs, and most of them get 403s from outside the network.

In reality, I manage the nginx config with ansible templates, so what I really do is set a boolean "public" flag to "true" on sites I want accessible outside, everything else is private by default.
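Put together, a site definition in this scheme might look roughly like the following sketch (hostnames, cert paths, and the upstream address are illustrative, not from the original post). Note that nginx evaluates allow/deny directives in order, so the LAN allow must come before deny all:

```nginx
# /etc/nginx/private.conf -- shared allow-list snippet
#   allow 10.0.0.0/16;
#   deny  all;

server {
    listen 443 ssl;
    server_name app.internal.example.com;

    # Valid Let's Encrypt cert, same as for public sites
    ssl_certificate     /etc/letsencrypt/live/app.internal.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.internal.example.com/privkey.pem;

    # Internal-only: drop this line to make the site public
    include /etc/nginx/private.conf;

    location / {
        proxy_pass http://10.0.0.50:8080;
    }
}
```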


👤 res0nat0r
I've not used this directly, but it may be useful: https://github.com/FiloSottile/mkcert

    mkcert is a simple tool for making locally-trusted development certificates. It requires no configuration.

    Using certificates from real certificate authorities (CAs) for development can be dangerous or impossible (for hosts like example.test, localhost or 127.0.0.1), but self-signed certificates cause trust errors. Managing your own CA is the best solution, but usually involves arcane commands, specialized knowledge and manual steps.

    mkcert automatically creates and installs a local CA in the system root store, and generates locally-trusted certificates. mkcert does not automatically configure servers to use the certificates, though, that's up to you.
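Typical usage, per the mkcert README, is just two commands (the names below are placeholders):

```shell
# One-time: create a local CA and install it in the system trust store
mkcert -install

# Generate a locally-trusted cert + key for the given names
mkcert example.test "*.example.test" localhost 127.0.0.1
```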

👤 dindresto
I'm using subdomains on a domain I own and request Let's Encrypt certificates with the DNS challenge.
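With certbot, that can look like this (the Cloudflare plugin is just one example of a supported provider; the domain and credentials path are placeholders):

```shell
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d internal.example.com
```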

👤 throw0101a
DNS alias mode:

* https://dan.langille.org/2019/02/01/acme-domain-alias-mode/

* https://github.com/acmesh-official/acme.sh/wiki/DNS-alias-mo...

* https://www.eff.org/deeplinks/2018/02/technical-deep-dive-se...

You want the name "internal.example.com". In your external DNS you create a CNAME from "_acme-challenge.internal.example.com" and point it to (e.g.) "internal.example.net" or "internal.dns-auth.example.com"

When you request the certificate you specify the "dns-01" method. The issuer (e.g., LE) will go to the external DNS server for the lookup, see that it is a CNAME, follow the CNAME/alias, and do the verification at the final hostname.

So your ACME client has to do a DNS (TXT) record update, which can often be done via various APIs, e.g.:

* https://github.com/AnalogJ/lexicon
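With acme.sh, the alias flow described above looks roughly like this (a sketch based on the acme.sh wiki; the domains and the dns_cf provider hook are placeholders):

```shell
# Assumes this CNAME already exists in external DNS:
#   _acme-challenge.internal.example.com -> _acme-challenge.dns-auth.example.com
acme.sh --issue -d internal.example.com \
  --dns dns_cf \
  --challenge-alias dns-auth.example.com
```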

You can even run your own DNS server locally (in a DMZ?) if your DNS provider does not have a convenient API. There are servers written for this use case:

* https://github.com/joohoi/acme-dns

* https://github.com/joohoi/acme-dns-certbot-joohoi

* https://github.com/pawitp/acme-dns-server


👤 tqwhite
For a long time I was all fussy about having to create a security exception for self-signed certificates. One day I realized I was acting insane, as if there was some glorious principle involved. There isn't.

I trust my own (or my coworkers') certificates. It's a dev site, for heaven's sake.

Ever since, ssh-keygen all the way.


👤 api
At ZeroTier we are working on a solution for this that will implement ACME. Not ready for release quite yet but getting close. Could be used on ZeroTier networks but doesn't have to be.

👤 chenxiaolong
None of my self-hosted things are internet facing, so I use the DNS-01 challenge type. For internal DNS, I run the powerdns authoritative server. Each ACME client uses RFC2136 (TSIG) to update the _acme-challenge TXT record. I have a pdns lua policy script set up so that the only record that's allowed to be updated is the TXT record matching the TSIG name.

To allow Let's Encrypt to hit the DNS server, I run a public-facing dnsdist load balancer. It forwards the relevant TXT, CAA, DNSKEY, and NS queries to pdns and silently drops all other queries not required for ACME challenges.
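On the client side, certbot's dns-rfc2136 plugin takes a TSIG credentials file along these lines (the server address, key name, and secret here are placeholders, not values from the post):

```ini
; rfc2136.ini -- credentials for certbot's --dns-rfc2136 plugin
dns_rfc2136_server = 10.0.0.53
dns_rfc2136_name = host1-acme-key
dns_rfc2136_secret = <base64 TSIG secret>
dns_rfc2136_algorithm = HMAC-SHA512
```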

I'd prefer that the internal hostnames wouldn't be leaked in the certificate transparency logs, but given that no services are exposed to the internet, it doesn't bother me enough to look for alternatives (eg. wildcard certs).


👤 AtlasBarfed
This all comes down to a few things:

- keytool and openssl absolutely suck from a usability standpoint

- testing the pipeline of generation of keys/files/certs/stores and importing/generating/signing etc is difficult

- error messages, if you get them, are completely unhelpful, and often the errors are superficially not even ssl/security related.

Every time I do SSL, it is a 1-4 day job, and that's with StackOverflow saving my ass on translating "why this weird error means this failure".

Between the above issues, SSL on every platform, application, database, and operating system (or OS version) has different errors.

If you have a non-mainstream language, I have NO IDEA how you would get SSL up. For Python, the JVM, JavaScript, and C/C++ there are a lot of eyeballs on this.


👤 yosamino
I have a private CA for all internal hosts. It's a bit of a pain to run, but it's necessary because "internal" includes VMs that run on colocated servers, and many services do mutual authentication.

It's hyper annoying that it's barely possible, if possible at all, to install our own CA into mobile browsers so that they can access internal websites.

The alternative would be to depend on LE where necessary, but this introduces an external dependency that I would rather avoid.


👤 phsm
You can use https://github.com/joohoi/acme-dns to issue letsencrypt certificates to your internal hosts using DNS validation.

All it takes is to set up an ACME-DNS server somewhere (or just use the author's public ACME-DNS server if you don't care much), and create one CNAME record in your DNS.


👤 captn3m0
Wildcard certificate using Traefik on a secondary domain, generated using the DNS challenge. Since this gives Traefik too many permissions on my DNS (Cloudflare now has better RBAC, but I haven't switched to it), I use a secondary domain for my home server.

For the few services where I want to use my primary domain, I run dehydrated once in a while from my local setup (which uses my password manager on a script). These services do end up exposed publicly on the CT Logs, but I'm okay with that.

I proxy these services over a DO VPN, but resolve them internally to private IPs using NextDNS. There's also an internal domain for every service: service.in.example.com resolves to the internal IP, service.example.com resolves to the public IP from outside, and service.example.com resolves to the private IP within my home network.


👤 rad_gruchalski
Running an internal CA is really easy in 2022: https://gruchalski.com/posts/2020-09-09-multi-tenant-vault-p....

👤 wyuenho
Depends on how your home network is set up and how you deploy your services. If you have a public domain for your home IP, and it's the usual Docker bridge network setup with only a couple of containers, using Traefik or Caddy as a reverse proxy will suffice. They'll automatically provision TLS certs for your services with little to no effort at all. If it's something more complicated than that, such as needing a separate IP and mDNS hostname per container running on a VLAN, or some multi-cloud Kubernetes setup, you pretty much have to set up your own CA. In that case, look into mkcert and/or step-ca.
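For the simple Caddy case, the whole reverse-proxy config can be a few lines; Caddy obtains and renews the certificate by itself (the hostname and upstream port below are placeholders):

```
home.example.com {
    reverse_proxy 127.0.0.1:8080
}
```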

👤 daper
If you have your own domain, move it to one of the supported DNS providers and use the DNS challenge with ACME:

We are using certbot + Cloudflare this way. There is no HTTP request; certbot makes a temporary DNS record via the Cloudflare API to satisfy the challenge, so you can run the script anywhere. Then copy the cert to the device that needs it.

DNS providers supported by certbot:

https://community.letsencrypt.org/t/dns-providers-who-easily...


👤 sdevonoes
Good question. I'm also interested. My use case would be:

- I have an nginx server load balancing traffic between N web servers that talk to one DB. Everything is inside a VPC (I'm using DigitalOcean), and only nginx is public to the internet.

One approach I have read about is that I could terminate SSL at the nginx level and use plain HTTP between my web servers. The question would be: how secure is that? Can I (should I) trust that everything within my VPC is only accessible to me? Is terminating SSL good enough when handling, let's say, account creation and payments via Stripe?


👤 justsomehnguy
I have my own two-tier PKI.

Pro: you get to know how some apps crash (and sometimes burn) when the PKI breaks. You also learn that every other distro guy thinks he is smarter than everyone else and does PKI his own way.

Cons: see Pro.

NB: Windows PKI in an ADDS environment (in terms of distributing the RCA/ICA trust) is a walk in the park compared to everything else.

NB2: the Java keystore is a PITA.

> Behind home routers, it's still a tedious topic

Buy a domain, park it at Cloudflare/Gandi/whatever ACME-supported provider, use DNS-01, and push or pull certs to the local network.

There is no problem with the process, only with laziness and automation.


👤 stephenr
I'd already written a small (server-focused) tool to make using certs from certbot (or any ACME client, really) a bit more automated (getting the right combination of certs/key into one file, converting to alternative formats, fetching the OCSP data, syncing across machines, restarting services after they're updated, etc.).

Last year I added a 'create' mode where it sets up a self-signed root CA and issues certs using that. The other logic (convert, sync, combine) is all still the same.


👤 CaptainJustin
I asked a similar question previously: https://news.ycombinator.com/item?id=29995812

👤 bjt2n3904
EasyRSA. Install the CA on my devices, distribute the SSL certs to the devices I care about (mainly NAS, OctoPi, PiHole). The PiHole serves hostnames too, and DHCP.

👤 latch
Personally I avoid it where I can and use wireguard for all internal traffic. Hopefully someone more knowledgeable here can tell me if it's a good or bad idea.

👤 isthisfree
If you can automate DNS, create a wildcard LE cert and have a cronjob distribute it to your different places from the one place you issue it. That is what I do.

Before that I just bought one wildcard cert and used that. They can be had for less than 50 bucks, and then there's no hassle.

If I could not automate DNS and didn't have 50 bucks per year for it, I would create a small CA myself, trust it in my browsers, and issue certificates from that.


👤 ganondork
I do a lot of the same things as others with a custom domain at home with a NAT 80/443 forward to nginx, but I use https://nginxproxymanager.com/ as it gives a dead-simple hostname proxy to forward traffic to internal hosts, and will request/renew Let's Encrypt certs for them automatically.

👤 jhugo
I have a private CA (managed with cert-manager) and trust it on my systems, and issue certs for internal services from that.

👤 bravetraveler
I still pay for wildcard certs on a few domains; it's crazy - I know, I just haven't bothered to jump over to LE and add it to the list of things to mind.

For some services I'll put them behind those domains and simply use the appropriate certificate.

Generally though if this is strictly internal, my domain can issue internally trusted certificates.


👤 mikedelago
At a previous job, I used Smallstep Certificates[0] as a hosted CA, though for certificates to communicate with our Kafka clusters. It worked pretty well, and was relatively easy to set up.

[0] https://smallstep.com/docs/step-ca


👤 9wzYQbTYsAIc
I use an internal subdomain wildcard certificate (e.g., *.internal.example.com) and traditional methods of configuring ssl for internal services / sites on the internal servers.

For cases where the service or site doesn’t natively support ssl, I run a local reverse proxy with the above certificate.


👤 Cockbrand
I have only one internal server, which is coincidentally also accessible from the public Internet. I use NAT hairpinning on the external interface of my router and forward all packets on ports 22 and 443 to my server, so its TLS certificate is also valid from inside my LAN.

👤 galenguyer
I wrote this tool[1] to help me create a CA, generate certificates, and automatically renew them

https://github.com/galenguyer/hancock


👤 giaour
Private CAs are the enterprise solution, but that can get expensive or difficult to manage for a home setup.

You could get a cert for a wildcard subdomain and then use whatever private subdomains you want on your home network


👤 pcunite
I became my own CA. Install certs in the system/browser and issue keys that are 10 years out. Make everything with the CertManEX tool.

👤 acsigen
For a relatively small company I use PiHole Unbound with LE certificates and I'm routing the private subdomains to internal IPs.

👤 itgoon
Hashicorp Vault can generate certs via a web form or REST call. Easy to set up and maintain, and it's free.

👤 madduci
A private PKI using EJBCA Community Edition from PrimeKey. Pretty solid.

Another one is step-ca


👤 guilhas
Create a self-signed certificate on the server

Install it as trusted on each client
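A minimal sketch of those two steps with openssl (the hostname and IP are placeholders; `-addext` needs OpenSSL 1.1.1 or newer). Modern browsers require the name to appear in a subjectAltName, so the SAN extension matters:

```shell
# 1. On the server: create a self-signed cert (valid 10 years) plus key
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout server.key -out server.crt -days 3650 \
    -subj "/CN=nas.home.lan" \
    -addext "subjectAltName=DNS:nas.home.lan,IP:192.168.1.10"

# 2. On each client, install server.crt as trusted (Debian/Ubuntu example):
#    sudo cp server.crt /usr/local/share/ca-certificates/nas-home-lan.crt
#    sudo update-ca-certificates
```

On macOS the equivalent of step 2 is adding the cert to the Keychain; browsers with their own trust stores (e.g. Firefox) need it imported separately.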


👤 oulipo
Can you explain your use-case for this?