Reproducible builds are builds that can be reproduced byte-for-byte, given the same source input. Your initial reaction to that statement might be “Aren’t nearly all
builds ‘reproducible builds’, then? If I give my compiler a source file it will always give me the same binary,
won’t it?” It sounds simple, like something that should just be fundamentally true unless we go out
of our way to break it, but in reality it’s quite a challenge. A group of Debian developers has been
working on reproducible packages for the better part of a decade, and while they’ve made fantastic progress,
Debian still isn’t reproducible. Before we talk about why it’s
a hard problem, let’s take a minute to ponder why it’s worth that much effort.
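As a small preview of where the difficulty comes from, here’s a minimal Python sketch (not any real build tool) of one classic culprit: a build step that embeds the current time in its output. The `SOURCE_DATE_EPOCH` environment variable is a real convention from the Reproducible Builds project for pinning such timestamps; the `build` function and everything else here are hypothetical stand-ins.

```python
import hashlib
import os
import time

# Start from a clean slate in case the variable is already set.
os.environ.pop("SOURCE_DATE_EPOCH", None)

def build(source: str) -> bytes:
    # Embed a timestamp, as many real tools do by default. The
    # SOURCE_DATE_EPOCH convention lets callers pin it to a fixed value.
    stamp = os.environ.get("SOURCE_DATE_EPOCH", str(int(time.time())))
    return f"built at {stamp}: {source}".encode()

def digest(artifact: bytes) -> str:
    return hashlib.sha256(artifact).hexdigest()

source = "print('hello')"

# Two naive builds a moment apart differ byte-for-byte...
a = digest(build(source))
time.sleep(1)  # force the embedded timestamps to differ
b = digest(build(source))
print("naive builds match:", a == b)

# ...while pinning the timestamp makes the output reproducible.
os.environ["SOURCE_DATE_EPOCH"] = "0"
print("pinned builds match:", digest(build(source)) == digest(build(source)))
```

Real-world equivalents of that timestamp include compilers expanding `__DATE__`, archives recording file modification times, and filesystem ordering leaking into package contents.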
On supply chain attacks
Suppose you want to run some open-source software. One of the many benefits of open-source software is that
anyone can look at the source and, in theory, spot bugs or malicious code. Some projects even have sponsored
audits or penetration tests to affirm that the software is safe. But how do you actually deploy that software?
You’re probably not building from source; more likely you’re using a package manager to install a pre-built
version, or downloading a binary archive, or running a Docker image. How do you know whoever prepared those
binary artifacts did so from an un-doctored copy of the source? How do you know a middle-man hasn’t decided to add malware to the
binaries to make money?
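A common partial defence is publishing a checksum alongside the artifact so that tampering after the fact is detectable. A minimal Python sketch, with made-up bytes standing in for a release tarball:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    # Hash in one go here; for real downloads you'd stream the file in chunks.
    return hashlib.sha256(data).hexdigest()

# Hypothetical artifact and the checksum its publisher advertises for it.
artifact = b"example release tarball contents"
published = sha256_of(artifact)

# A doctored copy no longer matches, so the swap is detectable,
# provided the checksum itself came over a channel the middle-man
# couldn't also modify.
tampered = artifact + b" plus injected payload"
print("genuine matches:", sha256_of(artifact) == published)   # True
print("tampered matches:", sha256_of(tampered) == published)  # False
```

Of course, a checksum only tells you the artifact matches what the publisher built, not that the publisher built it from un-doctored source. That’s the gap reproducible builds close: anyone can rebuild from the audited source and check that their bytes match the published ones.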
I run a fair number of services as Docker containers. Recently, I’ve been moving away from pre-built images
pulled from Docker Hub in favour of those I’ve hand-crafted myself. If you’re thinking “that sounds like a lot of
effort”, you’re right. It also comes with a number of advantages, though, and has been a fairly fun journey.
The problems with Docker Hub and its images
Rate limits
For the last few years, I’ve been getting increasingly unhappy with Docker Hub itself. Docker-the-technology
is wonderful, but Docker-the-company has been making some rather large missteps. The biggest and most impactful
of these has been introducing “pull rate” limits. At the time of writing, if you want to just pull a public image
without logging in then you are limited to 100 pulls every 6 hours. If you log in then you’re limited to 200
pulls per 6 hours, but that limit is account-wide. Those numbers might sound generous, but I hit them repeatedly,
and there is no way to audit what is consuming them. I have various containers that may all pull images at
arbitrary times (e.g. continuous integration build agents), and the only information you get back from Docker Hub
is the number of pulls remaining.
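That remaining count arrives as HTTP response headers on manifest requests, in a form like `ratelimit-remaining: 13;w=21600` (where `w` is the window in seconds), but nothing in the response identifies which client spent the pulls. A small sketch of reading those headers, with made-up sample values:

```python
def parse_ratelimit(value: str) -> tuple[int, int]:
    """Parse a Docker Hub ratelimit header value like '100;w=21600'
    into (count, window_seconds)."""
    count_part, _, window_part = value.partition(";")
    return int(count_part), int(window_part.removeprefix("w="))

# Hypothetical headers from a HEAD request to a manifest endpoint.
headers = {
    "ratelimit-limit": "100;w=21600",
    "ratelimit-remaining": "13;w=21600",
}

limit, window = parse_ratelimit(headers["ratelimit-limit"])
remaining, _ = parse_ratelimit(headers["ratelimit-remaining"])
print(f"{remaining}/{limit} pulls left in a {window // 3600}h window")
# → 13/100 pulls left in a 6h window
```

Polling that single counter tells you how fast your allowance is draining, but not which CI agent or container drained it; that attribution is exactly what Docker Hub doesn’t provide.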
User stories are a staple of most agile methodologies. You’d be hard-pressed to find an experienced software
developer who hasn’t come across them at some point in their career. In case you haven’t, they look something
like this:
As a frequent customer,
I want to be able to browse my previous orders,
So that I can quickly re-order products.
They provide a persona (in this case “a frequent customer”), a goal (“browse my previous orders”) and a reason
(“so that I can quickly re-order products”). This fictitious user story would rank among the better ones
I’ve seen. More typically you end up with something like:
As a user,
I want to be able to login,
So that I can browse while logged in.
This doesn’t really provide a persona or any proper reasoning. It’s just a straightforward task pretending to
be a user story. If this is written in an issue then it provides no extra information over one that simply says
“Allow users to log in”. In fact, because it’s expressed so awkwardly I’d argue that it’s worse.
For the last year and a bit, I’ve been using a SteelSeries Arctis Pro Wireless Headset for
gaming and talking to friends. It’s a fine headset, but because there’s an always-on receiver, there’s no way to
tell from the desktop whether the headset itself is turned on.
Whenever I start using the headset, I set my desktop’s sound to go to the headset, and then when I stop using
the headset I set it to go back to speakers. It doesn’t take more than a second, but some days I might put the
headset on a dozen times as I’m on calls, or if it’s noisy outside, etc. That means it’s probably worth at least a few hours of my time trying to automate it.
At first, I hoped I’d be able to tell from the state of the USB device whether a headset was connected,
but nothing at all changed when I flipped it on and off. Then I went hunting for existing open-source tools that
might work with it and found that while people have reverse engineered many of the older Arctis headsets, no one
has done the same for the Pro Wireless. I finished off with a search to see if anyone had documented the wire
protocol even if there was no nice open source software to go with it; I came up short there, too. Looks like I’d
have to do it myself.
The HTC Dream, the first phone released running Android.
For the past decade I’ve exclusively used Android phones. I got the HTC Dream (aka the T-Mobile G1) shortly
after it came out, and dutifully upgraded every 1-2 years. In that timespan I used Android as the basis for my
Master’s Thesis, took a job on the Android team at Google, and eventually became a contractor specialising in
Android app development. So when I switched to using an iPhone earlier this year a few people were surprised.
The good old days
When Android was announced in 2007 – alongside the formation of the Open Handset Alliance – it was positioned
as a bastion of openness: it would be built on open standards and the operating system would be open source. At
the time iPhones were strongly coupled to iTunes and Apple was exercising strict control over what app developers
could do.