I was sitting in a cafe in Larnaca last year, waiting for a coffee and idly poking around in packet captures from my home network. I’d just installed a new ISP connection, and I wanted to see what the default DNS behaviour looked like before I started messing with it.
Every single query. In plain text. To my ISP’s resolver.
Every website I visited. Every API endpoint my code called during development. Every smart bulb that phoned home. Every NTP sync, every certificate revocation check, every background update from every device in my house. All of it, neatly logged by my ISP, correlated with my account, timestamped to the millisecond.
Your DNS traffic is your ISP’s bread and butter. It’s a complete record of your online activity, and they get it for free because the original DNS protocol, designed in 1987, sends everything as unencrypted UDP. No authentication. No privacy. No integrity checks. Just questions and answers in plain text, traversing networks controlled by people whose business model depends on knowing what you’re doing.
I decided to fix that. On a fanless box under my desk.
tl;dr: I built a three-layer DNS architecture on my OpenBSD router: encrypted upstream transport via dnscrypt-proxy, a DNSSEC-validating cache via unbound, and 182,000+ blocked ad and tracker domains via the hagezi Pro blocklist. The firewall forces every device on the LAN through this stack, no exceptions. Here’s how it all fits together.
The architecture: three layers of defence
The mental model is a pipeline. DNS queries from devices on my LAN enter at the top and pass through three layers before an answer comes back:
LAN clients (10.20.10.0/24, 10.20.20.0/24)
│
▼
┌─────────────────────────┐
│ Layer 2: unbound │ Cache, DNSSEC validation, rebinding protection
│ 127.0.0.1:53 │ Also listens on 10.20.10.1:53, 10.20.20.1:53
│ 10.20.10.1:53 │
│ 10.20.20.1:53 │
└────────┬────────────────┘
│
▼
┌─────────────────────────┐
│ Layer 1: dnscrypt-proxy│ Encrypted transport to upstream resolvers
│ 127.0.0.1:5300 │
└────────┬────────────────┘
│
▼
┌─────────────────────────┐
│ Layer 3: blocked_names │ Ad/tracker blocking (hagezi Pro, 182k+ domains)
│ (dnscrypt-proxy plugin)│ Blocks before the query ever leaves the box
└────────┬────────────────┘
│
▼ DNSCrypt protocol (encrypted, authenticated)
CryptoStorm resolvers (cs-*)
Each layer has a specific job. Each layer is independently useful. Together, they give me encrypted transport, validated responses, and aggressive ad blocking, all enforced at the network level so no device can opt out.
Let me walk through each layer, bottom-up, starting with the encrypted transport.
Layer 1: dnscrypt-proxy, encrypting the last mile
The fundamental problem with traditional DNS is that it’s cleartext UDP. Anyone between you and your resolver (your ISP, a compromised router, a coffee shop’s access point) can read and modify your queries. DNS-over-HTTPS (DoH) and DNS-over-TLS (DoT) were designed to fix this, but I chose DNSCrypt [1] for reasons I’ll get to in a moment.
dnscrypt-proxy runs on the router, listening on 127.0.0.1:5300. Not port 5353, which is mDNS and gets blocked by my pf rules. Not port 53, which is unbound’s territory. Port 5300, a quiet corner of the loopback interface where it can do its job without stepping on anyone’s toes.
The configuration is straightforward:
listen_addresses = ['127.0.0.1:5300']
max_clients = 250
ipv4_servers = true
ipv6_servers = false
dnscrypt_servers = true
doh_servers = false
require_dnssec = true
require_nolog = true
require_nofilter = true
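Before wiring anything else to it, it’s worth sanity-checking the file. dnscrypt-proxy has a built-in config check; the path below is where the OpenBSD package puts the config, so adjust for your install:

```shell
$ dnscrypt-proxy -config /etc/dnscrypt-proxy.toml -check
```

It exits non-zero if the configuration doesn’t parse, which makes it cheap to run before any automated restart.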
Choosing upstream resolvers
This is where it gets opinionated. I’m using CryptoStorm [3] as my upstream resolver, specifically their cs-* server constellation. Not Cloudflare. Not Google. Not Quad9.
Here’s why.
Cloudflare (1.1.1.1) is fast and their engineering is genuinely excellent. But they’re a US company with a business model built on being the world’s largest reverse proxy. They see an enormous percentage of the internet’s traffic already. I don’t want to give them my DNS queries too, even if they pinky-promise not to log them. Google (8.8.8.8) is… Google. I don’t think I need to explain why I’m not sending my DNS traffic to the world’s largest advertising company. Quad9 is a non-profit and I respect their mission, but it was founded in the US and carries a complicated legal history, its 2021 move to Swiss jurisdiction notwithstanding.
CryptoStorm is independent, operates on a no-logs policy, supports the DNSCrypt protocol natively, and their resolvers provide DNSSEC validation. They’re run by people who are genuinely paranoid about privacy, which is exactly the kind of people I want running my DNS infrastructure.
I’ve pinned 12 servers by geographic proximity to Cyprus:
server_names = [
'cs-austria', 'cs-hungary', 'cs-serbia', 'cs-romania',
'cs-czech', 'cs-milan', 'cs-berlin', 'cs-poland',
'cs-london', 'cs-france', 'cs-nl', 'cs-nl2'
]
Austria, Hungary, Serbia, Romania, Czech Republic, Milan, Berlin, Poland, London, France, Netherlands: all within a reasonable RTT of Larnaca. The load-balancing strategy is p2, dnscrypt-proxy’s take on the power-of-two-choices algorithm: it keeps the server list ranked by measured latency and, for each query, picks at random between the two fastest. It’s a beautiful bit of applied probability theory, actually. Significantly better than round-robin, and the randomisation avoids the thundering-herd problem where every client piles onto the same “fastest” server simultaneously.
lb_strategy = 'p2'
lb_estimator = true
The lb_estimator flag enables continuous latency measurement, so the algorithm adapts as network conditions change. If my route to Vienna degrades, traffic shifts automatically to Budapest or Bucharest. No manual intervention.
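To see why this beats plain round-robin, here is a toy model of the selection rule in Python. This is a sketch only: the server names come from my list above, the RTT numbers are invented, and the real implementation lives inside dnscrypt-proxy.

```python
import random

def pick_server(latencies):
    """p2-style pick: rank servers by estimated latency and choose
    at random between the two fastest."""
    ranked = sorted(latencies, key=latencies.get)
    return random.choice(ranked[:2])

# Invented RTT estimates in milliseconds, for illustration only.
rtts = {'cs-serbia': 39.0, 'cs-romania': 41.0,
        'cs-austria': 48.0, 'cs-london': 62.0}

counts = {name: 0 for name in rtts}
for _ in range(10_000):
    counts[pick_server(rtts)] += 1

# Traffic splits roughly 50/50 between the two lowest-latency
# servers; the slower ones get nothing until the estimates change.
```

Because the choice is randomised between the top two, no single resolver sees the complete query stream, and a stale latency estimate on one server doesn’t strand all traffic there.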
Why DNSCrypt over DoH or DoT
DNSCrypt predates both DoH and DoT, and in some ways it’s the more honest protocol. DoH deliberately wraps DNS queries inside HTTPS to make them indistinguishable from web traffic, which is great for censorship resistance but terrible for network administrators who need to understand what’s happening on their networks. DoT uses a dedicated port (853) that’s trivially blocked by firewalls.
DNSCrypt provides what I actually want: authenticated encryption of DNS queries with cryptographic proof that the response came from the server I intended to talk to. It doesn’t try to hide inside another protocol. It doesn’t pretend to be web traffic. It just encrypts the DNS query, sends it, and verifies the response. Clean.
The protocol also supports server-side anonymisation relays, though I’m not using those yet. One thing at a time.
Layer 2: unbound, the validating cache
unbound [2] sits on top of dnscrypt-proxy and provides three critical functions: caching, DNSSEC validation, and DNS rebinding protection.
It listens on three addresses:
interface: 127.0.0.1
interface: 10.20.10.1
interface: 10.20.20.1
port: 53
The loopback address is for local services on the router itself. The two 10.20.x.1 addresses are the gateway addresses for my two LAN segments (office and IoT, kept on separate link-layer segments, but that’s Part 2 of this series). Every device on either network gets its segment’s gateway address via DHCP as its DNS server.
Forwarding mode, not recursive
This is important. unbound can operate as a full recursive resolver, walking the DNS hierarchy from root servers down. I’m NOT doing that. I’m running it in forwarding-only mode:
forward-zone:
        name: "."
        forward-addr: 127.0.0.1@5300
        forward-addr: 1.1.1.1        # fallback, plain DNS
        forward-first: no
All queries go to dnscrypt-proxy on port 5300. The 1.1.1.1 entry is a fallback, plain unencrypted Cloudflare DNS, used while dnscrypt-proxy isn’t running. This matters during boot: unbound starts before dnscrypt-proxy, and the system needs DNS to function during startup. Once dnscrypt-proxy is up, unbound’s server selection strongly favours it as by far the faster forwarder, and the fallback goes quiet. Note that forward-first: no doesn’t control forwarder ordering; it means unbound never falls back to doing its own recursion when the forwarders fail. Queries go through the forwarders or they don’t go at all.
Why forwarding mode? Two reasons. First, recursive resolution generates a LOT of outbound DNS traffic to root servers, TLD servers, and authoritative nameservers. All of that traffic would be unencrypted, because those servers don’t speak DNSCrypt. By forwarding everything through dnscrypt-proxy, the only DNS traffic leaving my network is encrypted. Second, recursive resolution is computationally expensive and slower for cold cache queries. My little APU3D2 has better things to do with its 2GB of RAM.
DNSSEC validation: defence in depth
Here’s a subtlety that took me a while to appreciate: DNSSEC validation in unbound is independent of the encryption provided by DNSCrypt. They protect against different threats.
DNSCrypt encrypts the transport between my router and CryptoStorm’s resolvers. It prevents my ISP from reading or modifying queries in transit. But it doesn’t verify that the DNS data itself is authentic. A compromised upstream resolver could return forged records, and DNSCrypt would happily encrypt the lie.
DNSSEC [5] solves this. It’s a chain of cryptographic signatures from the root zone down to the individual record. When unbound validates a DNSSEC-signed response, it’s verifying that the record was signed by the authoritative nameserver for that domain, regardless of which resolver forwarded it. Even if CryptoStorm’s infrastructure were compromised, DNSSEC validation would catch a forged response.
module-config: "validator iterator"
auto-trust-anchor-file: "/var/unbound/db/root.key"
val-clean-additional: yes
The auto-trust-anchor-file points to the DNSSEC root trust anchor, which unbound manages automatically via RFC 5011 automated trust anchor rollover. The val-clean-additional option strips unvalidated records from the additional section of DNS responses, preventing cache poisoning via unsigned glue records.
Two layers of verification. DNSCrypt ensures the transport is private. DNSSEC ensures the data is authentic. Belt and braces.
DNS rebinding protection
This one is less well-known but absolutely critical if you run any services on your local network. DNS rebinding [6] is an attack where a malicious domain initially resolves to a public IP (serving attack JavaScript), then re-resolves to a private RFC 1918 address on your network. The browser’s same-origin policy doesn’t catch it because the domain name hasn’t changed, only the IP.
unbound’s private-address directives strip RFC 1918 addresses from upstream DNS responses for public domain names:
private-address: 10.0.0.0/8
private-address: 172.16.0.0/12
private-address: 192.168.0.0/16
private-address: 169.254.0.0/16
private-address: fd00::/8
private-address: fe80::/10
If an upstream resolver returns 10.20.10.50 as the answer for evil-domain.com, unbound drops the response. Legitimate local DNS records (for my own domains) are served by local-zone directives, which bypass this check entirely. The protection only applies to records that come from upstream, which is exactly where the threat lives.
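The filter is simple enough to model. Here is a sketch of the check in Python, using the same ranges as above; this is a model of what the directives do, not unbound’s actual code:

```python
import ipaddress

# The private/link-local/ULA ranges stripped by the
# private-address directives above.
PRIVATE_NETS = [ipaddress.ip_network(n) for n in (
    '10.0.0.0/8', '172.16.0.0/12', '192.168.0.0/16',
    '169.254.0.0/16', 'fd00::/8', 'fe80::/10')]

def is_rebinding_answer(ip_text):
    """True if an upstream answer for a public name points
    into private address space and should be dropped."""
    ip = ipaddress.ip_address(ip_text)
    return any(ip in net for net in PRIVATE_NETS)

# evil-domain.com resolving to 10.20.10.50 trips the check;
# a normal public answer like 104.16.132.229 sails through.
```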
Cache tuning
The APU3D2 has 2GB of RAM, which is generous for a router but not unlimited. I’ve set conservative cache sizes:
msg-cache-size: 50m
rrset-cache-size: 100m
key-cache-size: 50m
neg-cache-size: 10m
The rule of thumb is that rrset-cache-size should be roughly double msg-cache-size, because each message cache entry references multiple RRSET entries. 50MB of message cache gives me tens of thousands of cached responses, which is more than enough for a home office. The neg-cache-size caches NXDOMAIN responses, which prevents repeated queries for domains that don’t exist (surprisingly common with certain IoT devices that probe for update servers on every boot).
Additional hardening:
harden-glue: yes
harden-dnssec-stripped: yes
harden-below-nxdomain: yes
harden-large-queries: yes
use-caps-for-id: no
harden-algo-downgrade: no
What I deliberately left out (and why)
This is the part that would trip up someone copying configs from blog posts without thinking about their specific setup. Three commonly recommended unbound hardening options are disabled, and each for a specific reason.
use-caps-for-id: no. When switched on, this option applies 0x20 encoding, a clever trick where unbound randomises the capitalisation of letters in the query name (ExAmPlE.cOm) and checks that the response matches, adding entropy against cache poisoning. The problem is that some authoritative nameservers don’t preserve case, which causes legitimate queries to fail. I’ve seen it break resolution for several domains I use regularly. Since I’m already getting DNSSEC validation AND encrypted transport, the marginal security benefit doesn’t justify the reliability cost.
harden-algo-downgrade: no. When enabled, unbound rejects DNSSEC responses that use weaker algorithms than the zone’s DS record advertises. Sounds good in theory. In practice, it breaks the DNSSEC validation chain for domains in transition between algorithm suites: when a domain is migrating from RSA to ECDSA, there’s a window where the DS and DNSKEY records reference different algorithms, and unbound rejects these responses as invalid. I’d rather have working DNSSEC with a briefly weaker algorithm than broken DNS resolution.
harden-referral-path: yes is often recommended, but it’s completely ineffective in forwarding mode. This option validates the referral path during recursive resolution, following the chain of NS records from root to authoritative. In forwarding mode, unbound never does recursive resolution, so this setting literally does nothing. Including it would give a false sense of security, which is worse than not having it.
Layer 3: ad and tracker blocking
The third layer is a blocklist of 182,000+ domains known to serve ads, trackers, telemetry beacons, and other unwanted content. I’m using the hagezi Pro blocklist [4], which I’ve found hits the sweet spot between coverage and false positives. The Pro list is aggressive enough to block the major ad networks, tracking pixels, and telemetry endpoints, but it doesn’t break web applications the way the Pro++ or Ultimate lists tend to.
For an office environment where I need things like payment processors, SaaS dashboards, and client portals to actually work, the Pro list is the right choice. If I were running a purely personal network, I’d probably step up to Pro++.
Why dnscrypt-proxy for blocking, not unbound
This is a design decision that I see people get wrong a lot. unbound CAN do domain blocking via local-zone directives, and plenty of guides show you how to convert a blocklist into unbound config syntax. The problem is that unbound requires a full reload to pick up config changes. With 182,000 entries, that reload takes several seconds, during which DNS resolution stalls.
dnscrypt-proxy’s blocked_names plugin loads the blocklist into memory separately from the main config and supports live reloading without interrupting DNS service. The list updates daily via a cron job:
#!/bin/sh
# /usr/local/sbin/update-blocklist.sh, run daily from root's crontab(5)
# Fetch to a temp file so a failed download can't truncate the live list
tmp=$(mktemp) || exit 1
ftp -o "$tmp" \
    https://raw.githubusercontent.com/hagezi/dns-blocklists/main/domains/pro.txt &&
    mv "$tmp" /var/dnscrypt-proxy/blocked-names.txt || rm -f "$tmp"
# dnscrypt-proxy picks up the change automatically, no reload needed
The blocked_names plugin configuration in dnscrypt-proxy:
[blocked_names]
blocked_names_file = '/var/dnscrypt-proxy/blocked-names.txt'
log_file = '/var/log/dnscrypt-proxy/blocked-names.log'
log_format = 'tsv'
When a query matches the blocklist, dnscrypt-proxy returns NXDOMAIN immediately. The query never reaches unbound’s cache, never hits CryptoStorm, never leaves the box. From the client’s perspective, the ad domain simply doesn’t exist.
The log file is useful for debugging false positives. When something breaks, usually a web app that requires an analytics domain to function, I can check the log, whitelist the domain, and move on.
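Whitelisting uses dnscrypt-proxy’s companion allow-list plugin. A minimal sketch, assuming the stock plugin with file locations alongside the blocklist; entries use the same pattern syntax as the blocklist, so a bare domain also covers its subdomains:

```toml
[allowed_names]
allowed_names_file = '/var/dnscrypt-proxy/allowed-names.txt'
log_file = '/var/log/dnscrypt-proxy/allowed-names.log'
log_format = 'tsv'
```

Allowed names take precedence over the blocklist, so one line per false positive is enough.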
DNS enforcement in the firewall
All of the above is useless if a device on the LAN can simply ignore my DNS servers and talk directly to Google’s 8.8.8.8 or Cloudflare’s 1.1.1.1. And plenty of devices do exactly this. Chromecast hardcodes Google DNS. Some IoT devices have their own resolver addresses baked into firmware. Android phones with Private DNS enabled will use DoT on port 853, bypassing local DNS entirely.
The firewall stops all of this. Three rules in pf.conf:
# Redirect all outbound DNS to local unbound
pass in on $lan_if proto { tcp, udp } from $lan_net to any port 53 \
    rdr-to 127.0.0.1 port 53
# Block DNS-over-TLS (port 853) — no sneaking past us
block in quick on $lan_if proto tcp from $lan_net to any port 853
# Block mDNS (port 5353) — stays on the local segment
block in quick on $lan_if proto udp from $lan_net to any port 5353
The redirection rule is the key. It doesn’t reject the packet in a way the client can detect: pf rewrites the destination to the local unbound instance on the way in and, because it keeps state, rewrites the source back on the way out. From the client’s perspective, it sent a DNS query to 8.8.8.8 and got a valid response from 8.8.8.8. It has no idea that the response actually came from my local resolver, was validated with DNSSEC, and was checked against a 182,000-domain blocklist before being served.
This is important for IoT devices that check whether they can reach their hardcoded DNS server and refuse to function if they can’t. The transparent redirection satisfies the check while still routing the query through my DNS stack.
Port 853 is blocked outright. DoT is a legitimate privacy technology, and in most contexts I’d support its use, but on my network it’s a bypass vector. If a device wants encrypted DNS, it gets it, through my infrastructure, not by tunnelling around it.
Port 5353 (mDNS) is blocked at the interface level. mDNS is a local service discovery protocol; it should never cross network boundaries. Allowing it out would leak information about devices and services on my LAN to the upstream network.
The result: every single DNS query from every device on my network, laptop, phone, IoT bulb, doesn’t matter, goes through unbound, gets DNSSEC validated, passes through the blocklist, and exits the network via encrypted DNSCrypt to CryptoStorm. No exceptions. No opt-outs.
The privacy argument
Let me be direct about what this achieves and what it doesn’t.
What my ISP can no longer see: Every DNS query from my network. They can see that I’m sending encrypted traffic to CryptoStorm’s resolvers, but they can’t see WHAT I’m querying. They know I’m using DNS. They don’t know what for.
What ad networks can no longer do: Track me via DNS-level telemetry. 182,000 domains worth of tracking infrastructure returns NXDOMAIN. No beacons fire. No tracking pixels load. No analytics JavaScript phones home.
What DNSSEC prevents: Response tampering. Nobody between CryptoStorm and the authoritative nameserver can forge a DNS response without the cryptographic signature failing validation.
What this does NOT prevent: My ISP can still see the IP addresses I connect to. They can do reverse DNS lookups, SNI inspection on TLS handshakes (unless I’m using ECH), and traffic analysis based on connection patterns. DNS encryption is one layer, not a complete solution. For full transport privacy, you’d need a VPN or Tor, which is a different post and a different threat model.
Who sees my queries now: CryptoStorm, and they operate on a no-logs policy. I can’t verify that claim, nobody can, but I trust them more than I trust my ISP, whose business model explicitly depends on monetising my browsing data. The threat model isn’t “nobody can ever see my DNS traffic.” The threat model is “my DNS traffic should be encrypted in transit, validated on arrival, and visible only to entities I’ve chosen to trust.”
That’s a reasonable position. Not paranoid. Not naive. Just… considered.
Testing the setup
How do you verify all this actually works? A few quick checks.
Test that unbound resolves and validates DNSSEC:
$ dig @127.0.0.1 cloudflare.com +dnssec +short
104.16.132.229
104.16.133.229
If DNSSEC validation fails, you’ll get a SERVFAIL. Note that +short hides the header; run the query again without it and check for the ad (authenticated data) flag, which confirms unbound validated the response.
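It’s worth testing the failure path too. The domain dnssec-failed.org publishes deliberately broken signatures for exactly this purpose, so a validating resolver has to reject it (output trimmed):

```shell
$ dig @127.0.0.1 dnssec-failed.org
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL
$ dig @127.0.0.1 dnssec-failed.org +cd +short
```

The +cd flag (checking disabled) tells unbound to skip validation. Getting an answer with +cd but SERVFAIL without it confirms the failure is DNSSEC doing its job, not a connectivity problem.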
Test that blocking works:
$ dig @127.0.0.1 ads.google.com
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN
NXDOMAIN. The ad domain doesn’t exist, as far as my network is concerned.
Test that DNS redirection works from a LAN client:
$ dig @8.8.8.8 ads.google.com
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN
The client thinks it’s talking to Google. It’s not. The firewall silently redirected the query into the local stack, where the blocklist answered NXDOMAIN before the query got anywhere near Google. The client has no idea.
That last one still makes me grin every time.
What’s next
Part 4 covers DHCP configuration and network segmentation, how I assign addresses, separate the office LAN from the IoT LAN, and make sure the lightbulbs can’t talk to the laptops. (The lightbulbs are still trying to phone home, by the way. They’re just not getting through.)
The DNS layer is the part of this build I’m most satisfied with. It’s the layer where the privacy payoff is most tangible, where the engineering decisions compound on each other, and where the difference between a thoughtful setup and a copy-pasted config is most consequential. Three layers, each solving a different problem, each independently valuable, each reinforcing the others.
I’d love to tell you the whole thing worked first time. It didn’t. There was a memorable evening involving a typo in the forward-zone stanza that sent all DNS queries into a loop between unbound and dnscrypt-proxy until the APU3D2’s two cores were both pegged at 100% and every device in the house lost DNS simultaneously. The kid, trying to load something on her phone, was… patient. Eventually.
But that’s the thing about building your own infrastructure. When it breaks, you understand WHY it broke. And when it works, you understand exactly what it’s doing and, critically, what it’s NOT doing. No black boxes. No trust assumptions you haven’t explicitly chosen.
Your ISP is still logging your DNS queries. Unless you’ve told it to stop.
References
- [1] dnscrypt-proxy: a flexible DNS proxy with support for encrypted DNS protocols
- [2] Unbound DNS resolver, NLnet Labs
- [3] CryptoStorm: encrypted DNS and VPN services
- [4] hagezi/dns-blocklists: DNS blocklists for ad and tracker blocking
- [5] RFC 4033: DNS Security Introduction and Requirements (DNSSEC)
- [6] DNS rebinding, Wikipedia
- [7] unbound.conf(5), OpenBSD manual pages
- [8] pf.conf(5), OpenBSD manual pages