A colleague looked at my pf.conf last month and said, “That’s it? I thought it’d be longer.”
I took it as a compliment. She didn’t.
She was expecting something impressive. Hundreds of lines of rules, maybe some complex queueing disciplines, an IDS integration, traffic graphs rendered in real time. What she got was about sixty lines of pf rules, two third-party packages, and a box that’s been quietly routing packets for months without anyone noticing it exists.
That’s the point, though. The most important decisions in this build aren’t the things I configured. They’re the things I chose NOT to configure. Every feature I rejected is attack surface I don’t have to defend. Every service I didn’t install is a daemon that can’t be exploited. Every line of config I didn’t write is a line that can’t contain a typo that breaks my network at 3pm while everyone in the house is trying to check their email.
This is Part 6, the final post in a six-part series about building a home firewall and router on OpenBSD. If you want to start from the beginning, Part 1 covers the hardware and network design. This post is about the decisions behind the decisions, the things I rejected and the philosophy that guided the whole build.
The decisions you don’t make
There’s a principle I keep coming back to in systems architecture: complexity is the enemy of security. Not in a fortune-cookie way. In a very literal, measurable way. Every component you add to a system increases its attack surface. Every configuration option you touch is an opportunity for a mistake. Every daemon you run is a process that needs to be patched, monitored, and understood.
OpenBSD’s developers understand this better than anyone. The project’s entire philosophy is “secure by default” [1], which means a fresh install has almost nothing running, almost nothing listening, and almost nothing that can go wrong. The smart move, most of the time, is to trust those defaults and resist the urge to “improve” them with additional complexity.
So here’s everything I considered, researched, and deliberately rejected, and why each rejection made the system stronger.
No full-disk encryption
This one surprised people. You’re building a security-focused firewall and you’re NOT encrypting the disk?
No. And here’s why.
Full-disk encryption on a gateway device requires a passphrase at boot. On the APU3D2, that means someone needs to be physically present at the serial console to type a passphrase every time the box starts up. In a data centre with an out-of-band management card, this is fine. In my home office in Larnaca, where power cuts happen several times a year, this is a recipe for my entire network going down while I’m at a cafe or, worse, off-island.
The whole point of this build is a box that survives unattended. Power goes out, power comes back, the router boots, the network works. No human required. A passphrase prompt on the serial console defeats that completely. I’d have to keep a monitor and keyboard connected to a device whose entire design philosophy is “set it and forget it.”
The trade-off is physical theft risk. If someone breaks into my house and steals this specific box (ignoring the laptops, the NAS, and everything else that’s more obviously valuable), they could read the filesystem. What would they find? pf rules, DHCP configs, and a DNS blocklist. The pf.conf tells them my network topology, which is mildly useful for a targeted attack, but if someone’s already inside my house, I’ve got bigger problems than my firewall config.
The compromise: swap is encrypted. OpenBSD supports this natively:
# sysctl vm.swapencrypt.enable=1
This is set in /etc/sysctl.conf and ensures that anything paged to swap, which could include sensitive data from running processes, is encrypted transparently. No passphrase required. No boot-time interaction. The encryption key is generated at boot and lives only in kernel memory. That covers the realistic threat (sensitive data leaking through swap) without introducing the operational fragility of full-disk encryption.
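The persistent form is a single line in /etc/sysctl.conf, which OpenBSD reads at every boot:

```
# /etc/sysctl.conf
vm.swapencrypt.enable=1
```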
Would I make a different choice for a firewall in a co-location facility? Absolutely. Context matters. For a box under my desk that needs to survive Cyprus Electricity Authority’s creative relationship with uptime, operational reliability wins.
No Suricata or Snort IDS
I looked at this for about two days. Read the docs. Looked at deployment guides. Thought about it seriously. Then closed the browser tabs.
An intrusion detection system like Suricata [2] or Snort does deep packet inspection, comparing network traffic against thousands of signature rules to detect known attack patterns. It’s genuinely useful in environments with large, diverse, partially-trusted networks: corporate offices, university campuses, shared hosting environments.
My home network is not that.
I have a domestic network with devices I control. The laptops run patched operating systems. The IoT devices are firewalled to the point where they can barely reach the internet at all (and they DEFINITELY can’t reach each other). The pf rules already block everything that isn’t explicitly permitted. The DNS layer blocks 182,000 known-bad domains before a connection is ever established.
What would Suricata add? It would inspect the traffic that’s already been filtered, looking for attack signatures in the traffic I’ve explicitly allowed through. On a network where “allowed through” means “established connections from devices I trust to destinations the DNS layer hasn’t blocked,” the hit rate would be approximately zero.
And the cost wouldn’t be zero. The GX-412TC in the APU3D2 is a quad-core 1 GHz Jaguar. It handles NAT and pf at line rate without breaking a sweat, 0.6% CPU load during normal operation. DPI is a different beast entirely. Suricata would need to reassemble TCP streams, parse application-layer protocols, and match against thousands of regex patterns in real time. On this CPU, at Gigabit speeds, that’s not going to work without dropping packets. I’d need to either throttle throughput or accept incomplete inspection, and incomplete IDS is worse than no IDS because it gives you false confidence.
Then there’s the maintenance burden. IDS signatures need constant updating. New attack patterns appear daily. False positives need tuning. Rule categories need reviewing. Someone needs to be reading the alerts and deciding which ones matter. On a corporate network with a security team, that’s their job. On my home network, that “someone” is me, and I’m not doing it at 3am because Suricata decided that a perfectly normal HTTPS connection to GitHub looks a bit suspicious.
What I run instead is softflowd [3], which exports NetFlow data. It tracks connection metadata (source, destination, ports, byte counts, duration) without inspecting packet contents. If something on my network starts making thousands of connections to unusual destinations, or if a device starts transferring unexpected volumes of data, I’ll see it in the flow data. It’s not signature-based detection. It’s anomaly-based awareness. And it runs at line rate on the GX-412TC with room to spare.
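Getting softflowd running on OpenBSD is a few rcctl commands. The interface name and collector address below are placeholders, not actual values from this build:

```
# doas rcctl enable softflowd
# doas rcctl set softflowd flags "-i em0 -n 10.20.10.5:9995"
# doas rcctl start softflowd
```

The -i flag picks the interface to observe and -n names the NetFlow collector to export to.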
No ALTQ
This one is simple, but I’m including it because I still see blog posts and forum answers recommending ALTQ for traffic shaping on OpenBSD, and they’re all wrong. Not wrong in principle, wrong in fact.
ALTQ was removed from OpenBSD’s pf in version 5.6. That was 2014. Over a decade ago. It’s gone. If you’re reading a guide that tells you to configure ALTQ on OpenBSD, that guide is at least ten years out of date, and you should be suspicious of everything else it says too.
The replacement is the queue keyword in pf.conf with HFSC (Hierarchical Fair Service Curve) scheduling [4]. The new syntax is cleaner, more expressive, and better integrated with the rest of pf. Here’s what basic traffic shaping looks like in modern OpenBSD:
queue rootq on em0 bandwidth 100M
queue std parent rootq bandwidth 80M default
queue dns parent rootq bandwidth 5M min 1M
queue ssh parent rootq bandwidth 10M min 2M
queue bulk parent rootq bandwidth 5M
That’s it. Hierarchical queues with guaranteed minimums and bandwidth sharing. No ALTQ kernel modules. No altq on stanzas. No wondering which scheduler to use because HFSC is the only option and it’s the right one.
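Queues do nothing on their own; traffic has to be assigned to them, which modern pf does with match rules and set queue. A sketch, with the interface and port choices as illustrative assumptions:

```
match out on em0 proto { tcp udp } to port 53 set queue dns
match out on em0 proto tcp to port 22 set queue ssh
match out on em0 proto tcp to port { 80 443 } set queue std
```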
I’m not actually using traffic shaping on this build because my ISP connection is fast enough that contention isn’t a problem. But if I were, this is how I’d do it. And I wouldn’t touch any guide that mentions ALTQ.
No M:Tier
M:Tier [5] is a framework for maintaining OpenBSD packages from a trusted third-party repository with additional auditing and curation. It’s a good project with a legitimate purpose, providing an extra layer of package vetting beyond the standard ports tree.
M:Tier’s value proposition increases with your package footprint. If you’re running a web server with PHP, a database, a mail server, and a dozen supporting tools, the additional curation is worth the trust you’re placing in another infrastructure provider. For two packages, it’s not. Adding M:Tier would mean trusting another signing key, another update channel, another piece of infrastructure that needs to be available when I run pkg_add. The security gain for two well-known packages doesn’t justify the additional trust surface.
Keep it simple. Two packages. Two things to update. Two things to think about.
No pfstat graphing
pfstat generates graphical charts of pf statistics (packet counts, state table sizes, bandwidth over time), rendered as JPEG images. To view them, you need a web server.
I’ll let that sink in for a moment. To get pretty graphs of your firewall’s performance, you’d need to run a web server ON the firewall.
No. Absolutely not.
Every service you run on a firewall is a potential entry point. httpd, even OpenBSD’s excellent httpd, is a network-facing daemon that parses HTTP requests from potentially hostile clients. On an interior machine, that’s fine. On the device that’s responsible for protecting every other device on your network, it’s adding attack surface for no operational benefit.
What I use instead:
$ pfctl -s info # pf statistics, states, counters
$ pfctl -s state # active state table
$ systat pf # real-time pf monitoring
$ systat ifstat # interface statistics
All of these work over SSH. All of them are read-only queries against kernel data structures. None of them require running an additional daemon. None of them listen on a port. The information is the same as what pfstat would chart, just presented as text rather than JPEG. I can live without the pretty pictures.
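If you want to watch blocked packets live rather than as counters, the standard pflog idiom works over the same SSH session, assuming your block rules carry the log keyword:

```
$ doas tcpdump -n -e -ttt -i pflog0
```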
If I genuinely needed historical graphing, I’d export the data to a monitoring system on a different machine. The firewall collects the data. Something else visualises it. Separation of concerns.
No custom DNS blocking in unbound
I covered this in Part 3, but it’s worth repeating as a design decision because it’s a mistake I see constantly.
unbound CAN do domain blocking via local-zone directives. Plenty of guides show you how to convert a blocklist into unbound config syntax. And it works. Until you need to update the list.
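For reference, the shape those guides generate looks like this (domain names are placeholders):

```
# One local-zone entry per blocked domain, times 182,000
local-zone: "ads.example.com" always_nxdomain
local-zone: "tracker.example.net" always_nxdomain
```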
unbound requires a full configuration reload to pick up changes. With 182,000 blocked domains in the config file, that reload takes several seconds, during which DNS resolution stalls for every device on the network. On a daily blocklist update schedule, that’s a daily DNS outage. Brief, but real.
dnscrypt-proxy’s blocked_names plugin loads the blocklist independently and supports live reloading without interrupting DNS service. The list updates daily via cron, dnscrypt-proxy picks up the changes automatically, and DNS resolution continues uninterrupted throughout.
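The dnscrypt-proxy side of this is two lines of TOML. The file path is a placeholder; [blocked_names] and blocked_names_file are the standard dnscrypt-proxy 2 options:

```
[blocked_names]
blocked_names_file = '/etc/dnscrypt-proxy/blocked-names.txt'
```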
There’s also a separation-of-concerns argument. unbound’s job is DNS resolution and DNSSEC validation. dnscrypt-proxy’s job is encrypted transport and content filtering. Each component does one thing. Each can be debugged, updated, and restarted independently. When something breaks, I know exactly where to look.
Rejected unbound hardening options
This section is going to get specific, because getting DNS hardening wrong can be worse than not hardening at all. I spent a fair amount of time reading the unbound documentation [6] and testing these options before deciding to disable them.
harden-referral-path: ineffective in my setup
harden-referral-path validates the referral chain during recursive resolution, following NS records from root to authoritative nameserver and checking each step. It’s a solid defence against certain cache poisoning attacks.
But I’m running unbound in forwarding-only mode. Every query goes to dnscrypt-proxy on 127.0.0.1:5300. unbound never performs recursive resolution. It never follows referral paths. This setting literally does nothing in my configuration.
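For context, forwarding-only mode is a handful of lines in unbound.conf. The port matches the dnscrypt-proxy listener, and do-not-query-localhost has to be relaxed or unbound will refuse to forward to 127.0.0.1:

```
server:
    do-not-query-localhost: no

forward-zone:
    name: "."
    forward-addr: 127.0.0.1@5300
```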
Including it would be harmless from a functionality perspective, but it would give a false sense of security, implying a check is being performed when it isn’t. I’d rather have an honest config than a reassuring one.
use-caps-for-id (0x20 encoding): breaks things
0x20 encoding is a clever trick. unbound randomises the capitalisation of letters in the DNS query name (eXaMpLe.CoM) and checks that the response preserves the same capitalisation. It adds entropy to the query, making cache poisoning harder because the attacker would need to guess the random capitalisation pattern.
The problem is that some authoritative nameservers don’t preserve case. The DNS spec says they SHOULD, but “should” isn’t “must,” and in practice several major authoritative servers normalise queries to lowercase before responding. When that happens, unbound’s capitalisation check fails and the query returns SERVFAIL.
I tested this for a week. It broke resolution for three domains I use regularly, including one payment processor. Since I’m already running DNSSEC validation AND encrypted transport to the upstream resolver, the marginal security benefit of 0x20 encoding doesn’t justify the reliability impact. Belt and braces is good. Belt and braces and a second belt that occasionally catches in the gears is not.
harden-algo-downgrade: breaks DNSSEC transition
This option rejects DNSSEC responses where the signature algorithm is weaker than what the parent zone’s DS record advertises. The idea is to prevent an attacker from downgrading a zone’s cryptographic strength.
In theory, excellent. In practice, it breaks DNSSEC validation for domains that are in the middle of migrating between algorithm suites. When a domain transitions from RSA/SHA-256 to ECDSA P-256, there’s a window where the DS record at the parent references the new algorithm but some authoritative servers are still signing with the old one. With harden-algo-downgrade enabled, unbound rejects these responses.
The alternative, working DNSSEC with a briefly weaker algorithm, is better than broken DNS. Algorithm transitions are a normal part of DNSSEC operations, and a hardening option that breaks normal operations isn’t hardening. It’s fragility.
Inter-LAN routing permitted
I mentioned this in Part 1, but it’s worth examining as a conscious security trade-off rather than a default.
The network has two LAN segments: wired (10.20.10.0/24) and WiFi (10.20.20.0/24). The purist move would be to isolate them completely, no traffic between segments, enforced at the firewall. If a device on the WiFi network gets compromised, it can’t reach anything on the wired network.
I don’t do this. The pf rules explicitly permit traffic between the two subnets.
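In pf.conf terms it’s a pair of pass rules. The macros below are illustrative, not the exact config:

```
lan_if   = "em1"
wifi_if  = "em2"
lan_net  = "10.20.10.0/24"
wifi_net = "10.20.20.0/24"

pass in on $wifi_if inet from $wifi_net to $lan_net
pass in on $lan_if  inet from $lan_net  to $wifi_net
```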
Here’s why. My daily workflow involves SSHing from a MacBook on the WiFi network to Linux desktops on the wired network. VSCode Remote SSH, terminal sessions, file transfers, Git operations, all of it crosses the LAN boundary dozens of times an hour. Blocking inter-LAN traffic would break this workflow entirely. I’d need to either put everything on one subnet (losing the segmentation benefits) or set up a VPN to my own house (adding complexity and latency to every keystroke).
For a home office with devices I own and maintain, the risk of a compromised WiFi device attacking the wired network is low. The devices on WiFi are my MacBooks and phones, all running current operating systems with full-disk encryption. The IoT devices (smart bulbs, mostly) are on the WiFi segment but are firewalled so aggressively they can barely reach the internet, let alone other devices on the LAN.
Would I make this decision on a corporate network? No. Would I make it on a network with untrusted devices or guest access? No. Would I make it in a shared flat where other people’s devices are on the WiFi? Probably not. Context matters. For my specific situation, a trusted home office with devices I control, inter-LAN routing is an acceptable trade-off for a functional workflow.
The important thing is that it’s a DECISION, not a default. I thought about it, weighed the risks, and chose convenience over isolation for this specific case. That’s different from not having considered it at all.
The philosophy behind the choices
If you look at all the rejections above, a pattern emerges. Each one follows the same logic:
Does this add security that I don’t already have? If the existing layers already cover the threat (DNSSEC + encrypted transport covering what 0x20 encoding would add), the additional measure isn’t free, it’s additional complexity for marginal gain.
Does this work in my specific configuration? A hardening option that’s ineffective in forwarding mode isn’t just useless, it’s misleading. Config files should be honest about what they do.
Can I maintain this at 3am? If a component needs constant tuning, signature updates, or expert attention to function correctly, it’s a liability on a home network where the sysadmin is also the person who needs to sleep.
Does this survive unattended operation? Anything that requires human interaction at boot time, or during routine operation, is incompatible with a device that needs to be forgotten about for months at a time.
Does the benefit justify the attack surface? Every daemon, every package, every listening port is attack surface. The benefit needs to outweigh the risk. Pretty graphs don’t outweigh a web server on a firewall.
OpenBSD’s defaults are already secure [1]. The base system runs with pledge and unveil [7] [8] on virtually every daemon. The kernel has ASLR, W^X, stack protectors. pf defaults to blocking everything. The question isn’t “what do I need to add to make this secure?” It’s “what am I adding, and am I sure it makes things better rather than worse?”
The best firewall is the one with the fewest moving parts. Not because simplicity is aesthetically pleasing (though it is), but because every moving part is a part that can break, a part that can be exploited, and a part that needs maintenance. Peter Hansteen, who literally wrote the book on PF [9], makes this point repeatedly: understand what you’re configuring, understand why, and don’t add things just because a blog post told you to.
(Including, I should note, this blog post. Think about your own context. My decisions fit my threat model. Yours might be different.)
What I’d do differently
I wrote up the initial documentation eight days into the deployment. Now, a few months in, a couple of gaps have become obvious.
WireGuard VPN for remote access. I don’t currently have VPN access to my home network. When I’m travelling, I can’t reach my home machines, can’t use my own DNS stack, can’t benefit from the ad blocking. OpenBSD has had WireGuard support in the kernel since 6.8 via wg(4), and the configuration is remarkably simple. This is the most obvious gap in the build and it’s at the top of my list.
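For the record, a wg(4) interface on OpenBSD is little more than a hostname file. Everything below is a placeholder sketch, not a working config:

```
# /etc/hostname.wg0
inet 10.99.0.1 255.255.255.0
wgkey <server-private-key>
wgport 51820
wgpeer <client-public-key> wgaip 10.99.0.2/32
up
```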
Better monitoring and alerting. I check the logs manually via SSH. That’s fine for an active check, but it means I won’t notice a problem until I go looking for one. A lightweight monitoring setup, even just a cron job that emails me if pf’s block count spikes or if a device starts making unusual connection patterns, would catch issues faster. Something like a daily summary script that parses pfctl -s info and the softflowd data and sends me the highlights. Not a full monitoring stack. Just enough to notice when something changes.
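A minimal version of that summary script might look like the following sketch. The loginterface requirement, the mail recipient, and the decision to sum only the Blocked counters are all assumptions for illustration:

```shell
#!/bin/sh
# Daily pf summary sketch, intended for root's crontab. Assumes pf.conf
# contains a "set loginterface" line so that `pfctl -s info` reports
# Passed/Blocked packet counters, and that local mail to root works.

# Sum every "Blocked" counter line (in/out, IPv4/IPv6) from pfctl output.
sum_blocked() {
    awk '/Blocked/ { total += $2 } END { print total + 0 }'
}

# Only query pf and send mail when the tools actually exist on this host.
if command -v pfctl >/dev/null 2>&1; then
    blocked=$(pfctl -s info | sum_blocked)
    printf 'pf blocked packets (cumulative): %s\n' "$blocked" |
        mail -s "pf daily summary" root
fi
```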
Automated blocklist health checks. The DNS blocklist updates daily via cron, but if the download fails silently, I’m running on a stale list and I might not notice for weeks. A check that verifies the blocklist was actually updated, maybe by checking the file’s modification time or line count, would be a sensible addition.
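A sketch of that check, using POSIX find for the age test so it stays portable; the path, thresholds, and mail recipient are invented for illustration:

```shell
#!/bin/sh
# Blocklist health check sketch. The path, thresholds, and mail recipient
# are illustrative assumptions, not values from the original build.

LIST=/etc/dnscrypt-proxy/blocked-names.txt   # hypothetical path
MIN_LINES=100000                             # sanity floor for a ~182k-entry list

# True if the file's mtime is more than $2 days old (POSIX find -mtime).
is_stale() {
    [ -n "$(find "$1" -mtime +"$2" 2>/dev/null)" ]
}

if [ -e "$LIST" ]; then
    if is_stale "$LIST" 1; then
        echo "blocklist is stale: $LIST" | mail -s "blocklist check" root
    elif [ "$(wc -l < "$LIST")" -lt "$MIN_LINES" ]; then
        echo "blocklist suspiciously short: $LIST" | mail -s "blocklist check" root
    fi
fi
```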
None of these are urgent. The system works. But “works” and “works as well as it could” are different things, and I’d rather be honest about the gaps than pretend everything is perfect.
The production reality
Here’s where we are. At the point I first documented the deployment:
- 25 days uptime (and counting, no crashes, no reboots outside planned updates)
- 194,535,164 packets passed
- 139,563 packets blocked
- 58 active DHCP leases across both LAN segments (doas grep -c lease /var/db/dhcpd.leases)
- 0.6% CPU load during normal operation
- Silent, fanless, draws 6-12W
- 2 third-party packages installed (dnscrypt-proxy, sshguard)
It’s now been running considerably longer than that, and I’ve had to think about it exactly zero times. Which is the whole point.
The ratio of passed to blocked packets tells a story: roughly 0.01% of traffic is being blocked at the packet filter level. That’s low, and it’s low because most of the blocking happens at the DNS layer, before a connection is ever established. When a device tries to reach a blocked domain, unbound returns NXDOMAIN and no packets are ever sent. The 139,000 blocked packets are the things that got past DNS but hit a pf rule: port scans from the WAN, outbound connections from IoT devices trying to reach disallowed destinations, the occasional bit of broadcast traffic that shouldn’t be crossing segment boundaries.
The 58 DHCP leases include everything: laptops, phones, smart bulbs, a printer, a NAS, a couple of Raspberry Pis that are “going to be used for a project soon” (they’ve been on that shelf for two years, don’t judge me). Every one of them gets its DNS through my stack. Every one of them is subject to the same firewall rules. No exceptions.
And the CPU load is genuinely 0.6% during normal operation. I’m not rounding down. The GX-412TC quad-core is so wildly overpowered for NAT and packet filtering that the system literally idles. The most CPU-intensive thing this box does is the daily blocklist download, and even that barely registers.
Six watts. That’s less than the LED bulb on my desk lamp. For a device that handles DNS, DHCP, NAT, stateful packet filtering, NetFlow export, encrypted DNS transport, DNSSEC validation, and ad blocking for every device in my house. I’ll take that.
The infrastructure you forget about
Here’s the thing I keep coming back to: the best infrastructure is the infrastructure you forget about. Not because you’re negligent, but because it’s so reliable that it doesn’t demand your attention. It boots itself. It routes packets. It blocks the things it should block and passes the things it should pass. It sits under my desk, makes no noise, generates no heat you can feel, and costs less to run per year than a couple of coffees.
Every device in my house is safer because of this box. My DNS queries are encrypted. My ad tracking is blocked at the network level. My firewall rules are 251 lines (including many comments) of readable pf syntax that I can audit in five minutes. My IoT devices can talk to the internet but not to each other and not to my laptops. My ISP gets encrypted DNS traffic and nothing else.
I built this because I was offended by my ISP’s data collection practices. I kept building because the engineering was genuinely enjoyable, the kind of project where each layer compounds on the last and the whole becomes more than the sum of its parts. And I wrote it up because I think more people should run their own infrastructure, not because it’s easy (it’s not trivial), but because the alternative is trusting someone whose incentives don’t align with yours.
If you want to build one of these yourself, start with Part 1. The hardware is cheap. The software is free. The documentation is this blog series plus the OpenBSD FAQ [10], which is, no exaggeration, the best operating system documentation I’ve ever read. Peter Hansteen’s writing on PF [9] is also essential reading if you want to understand not just the how but the why.
The box under my desk has been running for months now. I had to check the uptime just now because I genuinely couldn’t remember the last time I thought about it.
That’s the highest compliment I can pay a piece of infrastructure.
References
- [1] OpenBSD Security
- [2] Suricata: Open Source IDS/IPS
- [3] softflowd: flow-based network traffic analyser
- [4] OpenBSD PF User’s Guide
- [5] M:Tier: OpenBSD solutions and package maintenance
- [6] unbound.conf(5), OpenBSD manual pages
- [7] pledge(2), OpenBSD manual pages
- [8] unveil(2), OpenBSD manual pages
- [9] Peter Hansteen: Firewalling with PF
- [10] OpenBSD FAQ: System Management
- [11] PC Engines APU3D2 product page
- [12] pf.conf(5), OpenBSD manual pages
- [13] sshguard: protecting hosts from brute-force attacks
- [14] dnscrypt-proxy: flexible DNS proxy with encrypted DNS support