NGINX ingress (public vs internal)
This page proposes a maintainable NGINX “infra web service” layout for `web.core.lef` with strict separation by listener IP:

- Public traffic binds only to the VIPs `192.168.20.112`–`192.168.20.120` (forwarded 1:1 from public IPs by the firewall; see Firewall & public ingress).
- Internal traffic binds only to `192.168.20.2` (LAN/VPN).

It also implements the `docs.lef` behavior:

- `docs.lef` redirects to `main.docs.lef`
- any `*.docs.lef` hostname redirects to `main.docs.lef` unless an exact vhost exists (e.g. `pivot.docs.lef`)
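For intuition, this precedence can be sketched offline: NGINX checks exact `server_name` matches before wildcard ones, which is what lets an exact vhost like `pivot.docs.lef` escape the catch-all redirect. The `pick_vhost` helper below is a hypothetical illustration (not NGINX itself); shell `case` also tries patterns in order, so the exact name wins:

```shell
# Hypothetical sketch of NGINX server_name selection for the docs.lef zone.
# Exact names are tried before wildcards, mirroring NGINX's documented order.
pick_vhost() {
  case "$1" in
    pivot.docs.lef) echo "exact:pivot.docs.lef" ;;        # exact vhost exists
    docs.lef|*.docs.lef) echo "redirect:main.docs.lef" ;;  # catch-all redirect
    *) echo "unknown" ;;
  esac
}

pick_vhost pivot.docs.lef    # -> exact:pivot.docs.lef
pick_vhost random.docs.lef   # -> redirect:main.docs.lef
pick_vhost docs.lef          # -> redirect:main.docs.lef
```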
Proposed file layout
Target layout on the server (Debian default `/etc/nginx/`), split by audience:
/etc/nginx/
  conf.d/
    10-known-domains.conf
    20-internal-default-certs.conf   # optional (advanced; see notes)
  generate-known-domains-map.sh
  maps/
    known_domains.map
  snippets/
    acme-http01.conf
    redirect-known-domain-to-https.conf
    ssl-internal-params.conf
  sites-enabled/
    public/
      00-default.conf
      10-<public-hostname>.conf
    internal/
      00-default.conf
      05-zone-fallbacks.conf
      10-docs-redirects.conf
      10-<internal-hostname>.conf
  nginx.conf

Reference implementation (example files)
These example files live in this repo under `src/examples/nginx/web-core-lef/` and are written to be copied to `/etc/nginx/` (adjust cert paths).
# Example NGINX configuration layout for `web.core.lef` (reverse proxy / ingress).
# Intended target path on the server: `/etc/nginx/nginx.conf`.
#
# This example matches the current `web.core.lef` style (Debian 12 + brotli)
# while switching to the listener-separated include layout:
# - strict listener separation by IP (public VIPs vs internal listener)
# - explicit default servers per IP:port
# - maintainable includes (conf.d/maps/snippets/sites)
user www-data;
worker_processes auto;
worker_rlimit_nofile 8192;
pid /run/nginx.pid;
error_log /var/log/nginx/error.log warn;
# brotli dynamic modules (Debian path-friendly)
load_module modules/ngx_http_brotli_filter_module.so;
load_module modules/ngx_http_brotli_static_module.so;
events {
    worker_connections 2048;
}
http {
    include mime.types;
    default_type application/octet-stream;

    # quiet + snappy
    server_tokens off;
    tcp_nodelay on;

    # logs
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent"';
    access_log off;
    error_log /var/log/nginx/http_error.log warn;

    # io + keepalive
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 65;

    # bodies
    client_max_body_size 20m;
    client_body_buffer_size 64k;

    # --- compression (gzip + brotli) ---
    gzip on;
    gzip_comp_level 5;
    gzip_min_length 256;
    gzip_vary on;
    gzip_proxied any;
    gzip_types
        text/plain
        text/css
        text/xml
        text/javascript
        application/javascript
        application/x-javascript
        application/json
        application/ld+json
        application/xml
        application/xml+rss
        application/wasm
        image/svg+xml
        font/ttf
        font/otf
        font/woff
        font/woff2
        application/vnd.ms-fontobject
        application/vnd.api+json
        application/problem+json
        application/manifest+json;

    brotli on;
    brotli_comp_level 5;
    brotli_types
        text/plain
        text/css
        text/xml
        text/javascript
        application/javascript
        application/x-javascript
        application/json
        application/ld+json
        application/xml
        application/xml+rss
        application/wasm
        image/svg+xml
        font/ttf
        font/otf
        font/woff
        font/woff2
        application/vnd.ms-fontobject
        application/vnd.api+json
        application/problem+json
        application/manifest+json;

    add_header Vary Accept-Encoding always;

    # --- websocket support ---
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ""      close;
    }

    # --- rate limiting zones (apply in server/location as needed) ---
    limit_req_zone $binary_remote_addr zone=req_limit:10m rate=10r/s;
    limit_conn_zone $binary_remote_addr zone=per_ip:10m;

    # Maps (must load before any server blocks reference their variables).
    include /etc/nginx/conf.d/*.conf;

    # Upstreams (grouped by zone).
    include /etc/nginx/upstreams-enabled/*.conf;

    # Listener-separated vhosts.
    include /etc/nginx/sites-enabled/public/*.conf;
    include /etc/nginx/sites-enabled/internal/*.conf;
}

# Map "known" hostnames to a boolean used by the default :80 listeners.
#
# To avoid leaking internal names on public VIP listeners (and vice versa), the
# map key includes the local listener address:
# "$server_addr:$host"
#
# Format: in `maps/known_domains.map` each line is:
# <listener-ip>:<hostname> 1;
#
# Anything not listed is treated as "unknown":
# - public VIP :80 => dropped (444)
# - internal :80 => redirected to https://docs.lef/
map "$server_addr:$host" $is_known_domain {
    default 0;
    include /etc/nginx/maps/known_domains.map;
}

# Exact hostnames that are expected to be served by this NGINX instance.
#
# Keep this list in sync with vhosts under:
# - /etc/nginx/sites-enabled/public/
# - /etc/nginx/sites-enabled/internal/
#
# Notes:
# - Entries are keyed by listener IP to enforce public/internal separation.
# - Regex entries are allowed (prefix with `~`) and match the same "<ip>:<host>"
# composite key.
# - This file is included inside a `map {}` block; each entry must end with `;`.
# Public VIP `192.168.20.112` (lef.digital)
192.168.20.112:collab.lef.digital 1;
192.168.20.112:io-trg.lef.digital 1;
192.168.20.112:my.lef.digital 1;
192.168.20.112:vault.lef.digital 1;
192.168.20.112:wf.lef.digital 1;
192.168.20.112:wiki.lef.digital 1;
# Public VIP `192.168.20.113` (coragem.app)
192.168.20.113:analytics.coragem.app 1;
192.168.20.113:proxy.coragem.app 1;
192.168.20.113:registry.coragem.app 1;
192.168.20.113:report.coragem.app 1;
192.168.20.113:s3.coragem.app 1;
# Public VIP `192.168.20.117` (lef.software)
192.168.20.117:concepts.lef.software 1;
192.168.20.117:credit-hub.lef.software 1;
192.168.20.117:experience.lef.software 1;
192.168.20.117:hook.lef.software 1;
192.168.20.117:sapore.lef.software 1;
192.168.20.117:tokiocred.lef.software 1;
192.168.20.117:unimed.lef.software 1;
# Internal listener `192.168.20.2` (LAN/VPN only)
192.168.20.2:ca.app.lef 1;
192.168.20.2:docs.lef 1;
192.168.20.2:main.docs.lef 1;
192.168.20.2:pivot.dev.lef 1;
192.168.20.2:report.app.lef 1;
192.168.20.2:s3.app.lef 1;
192.168.20.2:solution.dev.lef 1;
192.168.20.2:tokio.dev.lef 1;
192.168.20.2:trainee.core.lef 1;
192.168.20.2:uptime.app.lef 1;
# Treat all docs hostnames as "known" on the internal listener so HTTP redirects
# keep the hostname (project vhosts can override the wildcard on :443).
~^192\.168\.20\.2:.*\.docs\.lef$ 1;
# TODO:
# - Add additional served hostnames as vhosts are enabled.

#!/usr/bin/env bash
# generate-known-domains-map.sh
#
# Generates `/etc/nginx/maps/known_domains.map` for the listener-separated layout.
#
# Output format matches `conf.d/10-known-domains.conf`:
# map "$server_addr:$host" $is_known_domain { include /etc/nginx/maps/known_domains.map; }
#
# Each entry is keyed by local listener IP to prevent internal-only hostnames
# being treated as "known" on public VIP listeners (and vice versa):
# <listener-ip>:<hostname> 1;
#
# Usage:
# sudo /etc/nginx/generate-known-domains-map.sh
# sudo nginx -t && sudo systemctl reload nginx
set -euo pipefail
NGINX_DIR="${NGINX_DIR:-/etc/nginx}"
SITES_ENABLED_DIR="${SITES_ENABLED_DIR:-$NGINX_DIR/sites-enabled}"
OUTPUT_FILE="${OUTPUT_FILE:-$NGINX_DIR/maps/known_domains.map}"
tmp="$(mktemp)"
trap 'rm -f "$tmp"' EXIT
mkdir -p "$(dirname "$OUTPUT_FILE")"
mapfile -t enabled_files < <(
  find -L "$SITES_ENABLED_DIR" -maxdepth 5 -type f -name "*.conf" -print 2>/dev/null | sort
)

if [[ ${#enabled_files[@]} -eq 0 ]]; then
  echo "No enabled vhost files found under: $SITES_ENABLED_DIR" >&2
  echo "Refusing to overwrite: $OUTPUT_FILE" >&2
  exit 2
fi
awk '
  function strip_trailing_semicolon(s) { sub(/;[[:space:]]*$/, "", s); return s }

  function extract_ipv4(listen_token, s) {
    s = listen_token
    s = strip_trailing_semicolon(s)
    sub(/^\[?::\]?:/, "", s)  # ignore accidental ipv6 bracket prefix
    sub(/^[^0-9]*/, "", s)
    if (s ~ /^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/) {
      sub(/:.*/, "", s)
      return s
    }
    return ""
  }

  function escape_regex(s, out, i, c) {
    out = ""
    for (i = 1; i <= length(s); i++) {
      c = substr(s, i, 1)
      if (c ~ /[.[\](){|}^$+?\\]/) out = out "\\" c
      else out = out c
    }
    return out
  }

  function ip_host_key(ip, host) { return ip ":" host }

  function record_host(ip, host) {
    if (ip == "" || host == "" || host == "_") return
    print ip_host_key(ip, host) " 1;"
  }

  function record_wildcard(ip, pattern, suffix, escaped_ip) {
    # Convert `*.example.com` into a regex map key that matches any subdomain:
    #   ~^<ip>:.*\.example\.com$
    suffix = pattern
    sub(/^\*\./, "", suffix)
    escaped_ip = escape_regex(ip)
    print "~^" escaped_ip ":.*\\." escape_regex(suffix) "$ 1;"
  }

  BEGIN {
    in_server = 0
    listen_count = 0
    server_name_count = 0
  }

  $1 == "server" && $2 == "{" {
    in_server = 1
    listen_count = 0
    server_name_count = 0
    next
  }

  in_server && $1 == "listen" {
    ip = extract_ipv4($2)
    if (ip != "") {
      listen_ips[++listen_count] = ip
    }
    next
  }

  in_server && $1 == "server_name" {
    # Collect all names on this line.
    for (i = 2; i <= NF; i++) {
      n = $i
      n = strip_trailing_semicolon(n)
      if (n != "") {
        server_names[++server_name_count] = n
      }
    }
    next
  }

  in_server && $0 ~ /^[[:space:]]*}/ {
    # Emit ip:name pairs for this server block.
    for (li = 1; li <= listen_count; li++) {
      ip = listen_ips[li]
      for (si = 1; si <= server_name_count; si++) {
        n = server_names[si]
        if (n ~ /^\*\./) record_wildcard(ip, n)
        else record_host(ip, n)
      }
    }
    in_server = 0
    next
  }
' "${enabled_files[@]}" \
  | sort -u \
  >"$tmp"
mv "$tmp" "$OUTPUT_FILE"
echo "✅ Generated known domains map at $OUTPUT_FILE"

# ACME HTTP-01 challenge handler (Certbot/acme.sh webroot).
# Include this in public :80 server blocks.
location ^~ /.well-known/acme-challenge/ {
    root /var/www/certbot;
    default_type "text/plain";
    try_files $uri =404;
    allow all;
}

# If this host is listed in `maps/known_domains.map`, redirect HTTP -> HTTPS.
# Include from inside `location / { ... }` blocks.
if ($is_known_domain) {
    return 301 https://$host$request_uri;
}

# Baseline internal TLS parameters (tune as needed).
ssl_protocols TLSv1.2 TLSv1.3;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_session_tickets off;

# Public VIP defaults (explicit per-IP listeners).
#
# All "public" vhosts MUST bind only to the EVEO VIPs:
# 192.168.20.112–192.168.20.120
#
# Unknown HTTP:
# - If host is known (maps/known_domains.map) => 301 to https://$host$request_uri
# - Else => 444 (no response)
#
# Unknown HTTPS:
# - TLS handshake is rejected (no default vhost content leakage).
server {
    listen 192.168.20.112:80 default_server;
    listen 192.168.20.113:80 default_server;
    listen 192.168.20.114:80 default_server;
    listen 192.168.20.115:80 default_server;
    listen 192.168.20.116:80 default_server;
    listen 192.168.20.117:80 default_server;
    listen 192.168.20.118:80 default_server;
    listen 192.168.20.119:80 default_server;
    listen 192.168.20.120:80 default_server;

    server_name _;

    include /etc/nginx/snippets/acme-http01.conf;

    location / {
        include /etc/nginx/snippets/redirect-known-domain-to-https.conf;
        return 444;
    }
}

server {
    listen 192.168.20.112:443 ssl http2 default_server;
    listen 192.168.20.113:443 ssl http2 default_server;
    listen 192.168.20.114:443 ssl http2 default_server;
    listen 192.168.20.115:443 ssl http2 default_server;
    listen 192.168.20.116:443 ssl http2 default_server;
    listen 192.168.20.117:443 ssl http2 default_server;
    listen 192.168.20.118:443 ssl http2 default_server;
    listen 192.168.20.119:443 ssl http2 default_server;
    listen 192.168.20.120:443 ssl http2 default_server;

    server_name _;
    ssl_reject_handshake on;
}

# Internal defaults (explicit per-IP listeners).
#
# All "internal" vhosts MUST bind only to:
# 192.168.20.2
#
# Unknown HTTP:
# - If host is known (maps/known_domains.map) => 301 to https://$host$request_uri
# - Else => 301 to https://docs.lef$request_uri
#
# Unknown HTTPS:
# - Completes TLS using an internal certificate, then redirects to docs.
server {
    listen 192.168.20.2:80 default_server;
    server_name _;

    # Internal CA issuance uses HTTP-01 via webroot on internal hostnames too.
    include /etc/nginx/snippets/acme-http01.conf;

    location / {
        include /etc/nginx/snippets/redirect-known-domain-to-https.conf;
        return 301 https://docs.lef$request_uri;
    }
}

server {
    listen 192.168.20.2:443 ssl http2 default_server;
    server_name _;

    # Internal catchall certificate for the default vhost.
    #
    # Keep this a static path so `nginx -t` can validate it and NGINX can load it
    # at reload time (master process as root). Variable-based certificate paths
    # are resolved at handshake time by workers and require the key file to be
    # readable by the worker user (typically `www-data` on Debian).
    ssl_certificate     /etc/nginx/ssl/certs/docs.lef/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/private/docs.lef/privkey.pem;
    include /etc/nginx/snippets/ssl-internal-params.conf;

    return 301 https://main.docs.lef$request_uri;
}

# Internal wildcard fallbacks (internal-only).
#
# Purpose: If someone hits an unknown hostname inside an internal DNS zone,
# complete TLS with that zone’s wildcard certificate and redirect to the docs.
#
# Exact-name vhosts (e.g. `ca.app.lef`) will override these wildcards.
server {
    listen 192.168.20.2:443 ssl http2;
    server_name *.core.lef;

    # NOTE: `ssl_certificate` must be the issued certificate/chain (not the CSR).
    ssl_certificate     /etc/nginx/ssl/core-lef-wildcard.cert.pem;
    ssl_certificate_key /etc/nginx/ssl/core-lef-wildcard.key.pem;
    include /etc/nginx/snippets/ssl-internal-params.conf;

    return 301 https://main.docs.lef$request_uri;
}

server {
    listen 192.168.20.2:443 ssl http2;
    server_name *.app.lef;

    ssl_certificate     /etc/nginx/ssl/app-lef-wildcard.cert.pem;
    ssl_certificate_key /etc/nginx/ssl/app-lef-wildcard.key.pem;
    include /etc/nginx/snippets/ssl-internal-params.conf;

    return 301 https://main.docs.lef$request_uri;
}

server {
    listen 192.168.20.2:443 ssl http2;
    server_name *.dev.lef;

    ssl_certificate     /etc/nginx/ssl/dev-lef-wildcard.cert.pem;
    ssl_certificate_key /etc/nginx/ssl/dev-lef-wildcard.key.pem;
    include /etc/nginx/snippets/ssl-internal-params.conf;

    return 301 https://main.docs.lef$request_uri;
}

server {
    listen 192.168.20.2:443 ssl http2;
    server_name *.test.lef;

    ssl_certificate     /etc/nginx/ssl/test-lef-wildcard.cert.pem;
    ssl_certificate_key /etc/nginx/ssl/test-lef-wildcard.key.pem;
    include /etc/nginx/snippets/ssl-internal-params.conf;

    return 301 https://main.docs.lef$request_uri;
}

server {
    listen 192.168.20.2:443 ssl http2;
    server_name *.container.lef;

    ssl_certificate     /etc/nginx/ssl/certs/container.lef/*.container.lef.cert.pem;
    ssl_certificate_key /etc/nginx/ssl/private/container.lef/*.container.lef.key.pem;
    include /etc/nginx/snippets/ssl-internal-params.conf;

    return 301 https://main.docs.lef$request_uri;
}

# TODO:
# - If `*.db.lef` ever needs HTTPS fallbacks, add a zone wildcard cert/key and a
#   server block here.

# docs.lef routing policy (internal-only).
#
# - `docs.lef` redirects to `main.docs.lef`
# - `*.docs.lef` redirects to `main.docs.lef` unless an exact vhost exists
# (exact names like `pivot.docs.lef` override the wildcard).
server {
    listen 192.168.20.2:443 ssl http2;
    server_name docs.lef;

    ssl_certificate     /etc/nginx/ssl/certs/docs.lef/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/private/docs.lef/privkey.pem;
    include /etc/nginx/snippets/ssl-internal-params.conf;

    return 301 https://main.docs.lef$request_uri;
}

server {
    listen 192.168.20.2:443 ssl http2;
    server_name *.docs.lef;

    ssl_certificate     /etc/nginx/ssl/certs/docs.lef/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/private/docs.lef/privkey.pem;
    include /etc/nginx/snippets/ssl-internal-params.conf;

    return 301 https://main.docs.lef$request_uri;
}

# Certificate selection for the internal `192.168.20.2:443` default server.
#
# Goal: allow the internal default vhost to complete TLS for common internal
# zones and return a redirect, without needing to create wildcard vhosts for
# every internal zone.
#
# This relies on NGINX support for variables in `ssl_certificate` /
# `ssl_certificate_key`. If your NGINX build does not support this, use a single
# internal SAN certificate for the default server instead.
map $ssl_server_name $internal_default_ssl_certificate {
    default            /etc/nginx/ssl/certs/docs.lef/fullchain.pem;

    # docs.lef (zone apex + wildcard)
    docs.lef           /etc/nginx/ssl/certs/docs.lef/fullchain.pem;
    ~\.docs\.lef$      /etc/nginx/ssl/certs/docs.lef/fullchain.pem;

    # core.lef / app.lef / dev.lef / test.lef / container.lef
    ~\.core\.lef$      /etc/nginx/ssl/certs/core.lef/wildcard.core.lef.cert.pem;
    ~\.app\.lef$       /etc/nginx/ssl/certs/app.lef/*.app.lef.cert.pem;
    ~\.dev\.lef$       /etc/nginx/ssl/certs/dev.lef/*.dev.lef.cert.pem;
    ~\.test\.lef$      /etc/nginx/ssl/certs/test.lef/*.test.lef.cert.pem;
    ~\.container\.lef$ /etc/nginx/ssl/certs/container.lef/*.container.lef.cert.pem;
}

map $ssl_server_name $internal_default_ssl_certificate_key {
    default            /etc/nginx/ssl/private/docs.lef/privkey.pem;

    # docs.lef (zone apex + wildcard)
    docs.lef           /etc/nginx/ssl/private/docs.lef/privkey.pem;
    ~\.docs\.lef$      /etc/nginx/ssl/private/docs.lef/privkey.pem;

    # core.lef / app.lef / dev.lef / test.lef / container.lef
    ~\.core\.lef$      /etc/nginx/ssl/private/core.lef/wildcard.core.lef.key.pem;
    ~\.app\.lef$       /etc/nginx/ssl/private/app.lef/*.app.lef.key.pem;
    ~\.dev\.lef$       /etc/nginx/ssl/private/dev.lef/*.dev.lef.key.pem;
    ~\.test\.lef$      /etc/nginx/ssl/private/test.lef/*.test.lef.key.pem;
    ~\.container\.lef$ /etc/nginx/ssl/private/container.lef/*.container.lef.key.pem;
}

Rationale and constraints: see NGINX ingress rationale.
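As a sanity check, the selection behavior of these maps can be mimicked offline. An NGINX `map` tries exact (string) keys first and regex keys in file order; the `cert_for_sni` helper below is a hypothetical shell mimic of that order for a few of the zones above (shell `case` patterns are also tried in order):

```shell
# Hypothetical offline mimic of the $ssl_server_name -> certificate map above.
# Patterns are checked in order, like the map's exact-then-regex evaluation.
cert_for_sni() {
  case "$1" in
    docs.lef|*.docs.lef) echo "/etc/nginx/ssl/certs/docs.lef/fullchain.pem" ;;
    *.core.lef)          echo "/etc/nginx/ssl/certs/core.lef/wildcard.core.lef.cert.pem" ;;
    *.app.lef)           echo "/etc/nginx/ssl/certs/app.lef/*.app.lef.cert.pem" ;;
    *)                   echo "/etc/nginx/ssl/certs/docs.lef/fullchain.pem" ;;  # map default
  esac
}

cert_for_sni uptime.app.lef    # -> /etc/nginx/ssl/certs/app.lef/*.app.lef.cert.pem
cert_for_sni unknown.example   # -> /etc/nginx/ssl/certs/docs.lef/fullchain.pem (default)
```

Note that an unknown SNI falls through to the `docs.lef` certificate, matching the map's `default` line.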
Migration action list (from a legacy 0.0.0.0 listener setup)
This is the safe order to move an existing config (that currently listens on `0.0.0.0:80/443`) to strict listener separation:
- Snapshot current state: `nginx -T`, `ip -br addr`, `ss -lntp | grep nginx`.
- Create the new layout: `/etc/nginx/sites-enabled/public/` and `/etc/nginx/sites-enabled/internal/`.
- Move or re-create existing vhost symlinks into the correct folder:
  - public: VIP listeners (`192.168.20.112`–`120`)
  - internal: internal listener (`192.168.20.2`)
- Replace the global `listen 80` / `listen 443` defaults with the explicit per-IP defaults from the example files.
- Update the known-domain map to use the composite key `"$server_addr:$host"` and list `<listener-ip>:<hostname>` entries.
  - If you currently generate `snippets/known_domains.map` by grepping `server_name`, replace it with the generator script shown above (it also extracts `listen <ip>`).
- Configure internal unknown HTTPS handling:
  - Required: the internal `192.168.20.2:443` `default_server` uses a static internal certificate so it can return a redirect.
  - Recommended: add per-zone wildcard fallback vhosts (`05-zone-fallbacks.conf`) for zones where you have wildcard certs (e.g. `*.core.lef`, `*.docs.lef`) so browsers see a matching certificate when a hostname is unknown-but-in-zone.
  - Optional (advanced): SNI-based cert selection (`20-internal-default-certs.conf`) only if your keys are readable by NGINX workers (security trade-off).
- Add `docs.lef` and `*.docs.lef` redirect vhosts on the internal listener.
- Validate and reload: `nginx -t && systemctl reload nginx`.
- Re-check bindings: `ss -lntp | grep nginx` (the local address should no longer be `0.0.0.0:80/443` or `[::]:80/443`).
- Run the checklist below against both public VIPs and the internal listener.
Test checklist (server-side)
Run these on a machine that can reach the relevant listener IPs.
Listener binding sanity:

sudo ss -lntp | grep nginx

Public :80 behavior (known host redirects; unknown host dropped):

curl -I http://192.168.20.112/ -H 'Host: wiki.lef.digital'
curl -v http://192.168.20.112/ -H 'Host: does-not-exist.example'

Known-domain separation (a known host on the “wrong” VIP is treated as unknown):

curl -v http://192.168.20.113/ -H 'Host: wiki.lef.digital'   # should be 444
curl -v http://192.168.20.112/ -H 'Host: proxy.coragem.app'  # should be 444
curl -v http://192.168.20.112/ -H 'Host: main.docs.lef'      # should be 444

Public ACME path is served (no redirect for the challenge location):

curl -i http://192.168.20.112/.well-known/acme-challenge/ping -H 'Host: wiki.lef.digital'

Public :443 blocks unknown SNI (handshake rejected):

openssl s_client -connect 192.168.20.112:443 -servername does-not-exist.example

Internal :80 redirects unknown to docs:

curl -I http://192.168.20.2/ -H 'Host: does-not-exist.example'

Internal :443 redirects unknown to main docs (certificate trust may require your internal CA):

curl -Ik https://does-not-exist.example/ --resolve does-not-exist.example:443:192.168.20.2

Troubleshooting: if this returns a TLS error like `tlsv1 alert internal error`, check whether the internal `192.168.20.2:443` `default_server` uses a variable-based `ssl_certificate_key` whose key file is not readable by the worker user (common on Debian with `www-data`). Check:

sudo tail -n 100 /var/log/nginx/http_error.log
sudo tail -n 100 /var/log/nginx/error.log
sudo nginx -T | sed -n '/listen 192.168.20.2:443.*default_server/,/}/p'

Internal docs.lef routing:

curl -Ik https://docs.lef/ --resolve docs.lef:443:192.168.20.2
curl -Ik https://anything.docs.lef/ --resolve anything.docs.lef:443:192.168.20.2

NGINX config validity / reload:

sudo nginx -t
sudo systemctl reload nginx
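Finally, the generator's wildcard-to-regex conversion can be previewed without touching `/etc/nginx`. This reduced extract repeats the `escape_regex`/`record_wildcard` logic from the generator script above; `emit_wildcard_key` is a hypothetical wrapper used only for this check:

```shell
# Reduced extract of the generator's wildcard handling (same escaping logic).
emit_wildcard_key() {
  awk -v ip="$1" -v pat="$2" '
    function escape_regex(s, out, i, c) {
      out = ""
      for (i = 1; i <= length(s); i++) {
        c = substr(s, i, 1)
        if (c ~ /[.[\](){|}^$+?\\]/) out = out "\\" c
        else out = out c
      }
      return out
    }
    BEGIN {
      suffix = pat
      sub(/^\*\./, "", suffix)
      print "~^" escape_regex(ip) ":.*\\." escape_regex(suffix) "$ 1;"
    }'
}

emit_wildcard_key 192.168.20.2 '*.docs.lef'
```

The output should match the hand-written `*.docs.lef` entry in `maps/known_domains.map` exactly, which makes it easy to confirm the generator and the manual map agree.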