Last month I stood in front of my engineering team and told them that the cross-platform build we’d spent weeks perfecting, the one that output our Qt application for Windows, macOS Intel, macOS ARM, Linux x86-64, and Linux ARM64 from a single CI pipeline, was being reduced to Linux x86-64 only.
They looked at me like I’d lost my mind. And honestly, from a pure engineering perspective, I had. We’d done the hard work. The CI was beautiful. The whole team could demo from their own machines. Product could show it on their MacBooks. The CEO could run it on his Windows laptop. It was one of those rare engineering achievements where everything just worked.
And I killed it. For a library.
The build-vs-buy decision is never about the tech. The tech decision is usually obvious. The business decision is the one that keeps you up at night.
What actually happened
We’re building autonomous systems for maritime environments at SafeNav. The software runs on ships. Ships that are in service for twenty years. Hardware you can’t easily update, in environments where “just restart it” isn’t an option when you’re in the middle of the Atlantic.
A strategic partnership brought us a code library that we needed to integrate. Partnership agreement, potential investor interest, the kind of business pressure that doesn’t come with a technical specification attached. The library was Linux x86-64 only. No Windows build. No macOS. No ARM.
The technical decision was straightforward: don’t use it. It breaks our cross-platform support, it limits who can demo the product, and it constrains our deployment options. Any engineer would tell you the same thing.
The business decision was also straightforward, but in the opposite direction: use it. The partnership matters. The investment matters. The strategic alignment matters. And so the CTO who spent weeks getting the cross-platform CI working had to be the same CTO who told the team to throw most of it away.
The bit that made me want to scream
The first thing I did when I got the library was what I always do with third-party code: I built it into our stack with AddressSanitizer enabled [1]. ASAN is a memory error detector that catches buffer overflows, use-after-free bugs, memory leaks, and all the other silent killers that make C++ code unreliable in long-running systems.
The results were not pretty.
Memory leaks. Multiple. The kind that don’t crash your application in a five-minute demo but will absolutely destroy a system that needs to run continuously for months on an embedded CPU at sea. The kind that most teams would never find, because most teams integrate a third-party library by checking that it compiles, running a quick smoke test, and shipping it.
We are not building an Android dating app. We are building systems that go on ships, in safety-critical maritime environments, where the software needs to just work for years without intervention. When I find memory leaks in a third-party library that I’ve been told to integrate for strategic reasons, I can’t just shrug and file a ticket. I have to fix it, work around it, or push back upstream, and all of those take time that wasn’t in anyone’s roadmap.
This is the hidden cost of “buy” that never makes it into the decision matrix.
The cross-platform casualty
Here’s what the business decision actually cost in engineering terms:
The team lost the ability to demo. Previously, every engineer could run the application on their own machine and show it to anyone. Now only Linux x86-64 works. Product, with their MacBooks, can’t run the demo. The CEO, with his Windows laptop, can’t run the demo. I have to show up personally, on my machine, every time someone needs to see the product working.
I got pulled back into engineering. I’m the only person on the team with experience building across all platforms, so I had to get hands-on with the integration. Which means I’m back to breaking my own 40% rule, spending time as an engineer when I should be spending time as a CTO.
I became the demo person. Instead of engineering and leading, I’m now also the person who needs to stop what I’m doing and sell things, which is not my specialty. The cross-platform build wasn’t just a technical achievement, it was a force multiplier that let the whole team be ambassadors for the product. Losing it centralised everything back to me.
None of this was in the partnership agreement. None of it was discussed when the decision was made. The business saw “integrate this library” as a simple technical task. The engineering reality was months of work, lost capability, and a CTO pulled in three directions at once.
Why embedded changes everything
The build-vs-buy conversation in the SaaS world is fundamentally different from the same conversation in embedded systems. In SaaS, if a third-party service has a bug, you can swap it out next sprint. If a library causes issues, you can update it on Tuesday. Your deployment cycle is days, not decades.
Maritime embedded systems have a service life of twenty years. The hardware going on a ship today will still be running in 2045. The software I write now needs to be maintainable, debuggable, and reliable for longer than some of my engineers have been alive.
In that context, every “buy” decision carries a different weight:
Can I maintain this for twenty years? If the vendor goes bust in five years, do I have the source code? Can I patch it myself? Is there an escape route, or am I locked in?
Does this meet safety standards? Maritime environments have real certification requirements. Third-party code that hasn’t been tested to the same standard as your own code is a liability, not a shortcut.
What’s the total cost of integration? Not just the licensing fee. The engineering time to integrate, test, sanitize, and work around the library’s limitations. The ongoing cost of maintaining compatibility. The opportunity cost of the work your team isn’t doing because they’re wrestling with someone else’s code.
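One way to make that integration cost visible early is to bake the sanitizer gate into the build itself, so third-party code goes through the same ASAN scrutiny as first-party code on every CI run. A minimal CMake sketch (the option name is illustrative, not our actual build configuration):

```cmake
# Hypothetical toggle; SAFENAV_ASAN is an illustrative name.
option(SAFENAV_ASAN "Build with AddressSanitizer" OFF)

if(SAFENAV_ASAN)
  # Apply globally so statically linked third-party code is
  # instrumented alongside our own sources.
  add_compile_options(-fsanitize=address -fno-omit-frame-pointer -g)
  add_link_options(-fsanitize=address)
endif()
```

A dedicated CI job that configures with `-DSAFENAV_ASAN=ON` and runs the test suite turns "did anyone sanitize this library?" from a heroic one-off into a standing check.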
My background is in aerospace and automotive, industries where these questions are existential, not theoretical. When code fails in a car or a plane or a ship, people can get hurt. The “just ship it and iterate” mentality that works for consumer software is genuinely dangerous in these domains.
The honest trade-off
I’m not saying “always build.” That would be naive, and I’d run out of money in a month. Strategic partnerships exist for good reasons. Buying capability you can’t build is sometimes the only viable path, especially in a startup where time and headcount are your scarcest resources.
What I am saying is that the decision is almost never about the technology. The tech is usually the easy part. The hard part is everything else:
Politics. Who wants this partnership to work, and how much organisational capital are they spending on it? Can you push back, or is this a done deal that you’re being asked to execute?
Timing. Is this the right moment to take on integration risk, or are you already stretched thin? A library that would be fine to integrate in a quiet quarter becomes a disaster when your team is already at capacity.
Dependencies. What else breaks when you make this choice? The cross-platform build was a casualty I could see coming. The demo bottleneck was one I didn’t predict until it happened.
Exit strategy. If this doesn’t work out, what does backing out look like? If the answer is “six months of work to untangle,” you need to factor that into the decision, not discover it later.
What I wish I’d said in the meeting
When the partnership library was first discussed, I should have been clearer about the total cost. Not just “this is Linux only,” but “this means the following twelve things change, and here’s what each one costs us in engineering time, team capability, and my personal bandwidth.”
I’ve learned from this. The CTO’s job in a build-vs-buy discussion isn’t to make the technology decision; that’s usually obvious. It’s to translate the technology implications into business language so that the people making the business decision understand what they’re actually buying.
“We’ll lose cross-platform support” sounds manageable. “I’ll have to personally attend every demo for the next six months because nobody else can run the software” sounds expensive. Same fact, different framing. The second one gets taken seriously.
The bit that keeps me up
The memory leaks bother me more than anything else. Not because they’re hard to fix (they’re not), but because they represent a philosophy gap between consumer software and safety-critical systems.
Most teams would have integrated that library, run a quick test, seen it work, and moved on. The leaks would have shown up months later as intermittent crashes in a system running on a ship somewhere, and nobody would have connected them to the library because by then three other things would have changed too.
I caught them because I test everything with ASAN. Because I’ve spent enough years in aerospace and automotive to know that the bugs that kill you are the ones you don’t look for. Because when someone hands me a binary and says “just link against this,” my first instinct is to assume it’s broken until proven otherwise.
That instinct is expensive. It slows things down. It makes me difficult in meetings. And it’s the reason our software will still be running in twenty years when the systems built by teams who “just shipped it” are being quietly replaced.
Build vs. buy is never about the tech. It’s about what you’re willing to live with.