Most programming – and sysadmin – problems can be debugged in a fairly straightforward manner using logs,
print statements, educated guesses, or an actual debugger. Sometimes, though, the problem is more elusive.
There’s a wider box of tricks that can be employed in these cases, but I’ve not managed to find a nice overview of
them, so here’s mine. I’m mainly focusing on Linux and similar systems, but there tend to be alternatives
available for other operating systems or VMs if you seek them out.
tcpdump prints out descriptions of packets on a network interface. You can apply filters to limit
which packets are displayed, choose to dump the entire contents of a packet, and so forth.
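As a quick sketch of what that looks like in practice (interface names and addresses here are placeholders):

```shell
# Show only TCP traffic to or from port 443 on eth0
tcpdump -i eth0 'tcp port 443'

# -n skips reverse DNS lookups; -X dumps each packet's contents in hex
# and ASCII, which is handy when eyeballing a protocol on the wire
tcpdump -i eth0 -n -X 'host 192.0.2.10 and not port 22'
```

The filter expressions use the standard BPF syntax, so they can be combined with `and`, `or`, and `not` to narrow things down as far as you need.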
One thing that always confuses me with Docker is how exactly mounting volumes behaves. At a basic level it’s
fairly straightforward: you declare a volume in a Dockerfile, and then either explicitly mount something there
or Docker automatically creates an anonymous volume for you. Done. But it turns out there’s quite a few edge cases.
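The basic behaviour described above can be sketched like this (image and volume names are illustrative):

```shell
# A Dockerfile that declares a volume at /data
cat > Dockerfile <<'EOF'
FROM alpine:3.19
VOLUME /data
EOF

docker build -t volume-demo .

# Explicit mount: /data is backed by the named volume "appdata"
docker run --rm -v appdata:/data volume-demo ls /data

# No -v flag: Docker creates an anonymous volume for /data instead,
# visible afterwards in `docker volume ls` with a random hash for a name
docker run --rm volume-demo ls /data
```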
For the past few years I’ve been taking part in Eric Wastl’s
Advent of Code, a coding challenge that provides a 2-part problem each
day from the 1st of December through to Christmas Day. The puzzles are always interesting — especially as they
get progressively harder — and there’s an awesome community of folks that share their solutions in a huge variety of languages.
To up the ante somewhat, Shane and I usually have a little informal
competition to see who can write the most performant code. This year, though, Shane went massively overboard and
wrote an entire benchmarking suite
and webapp to measure our performance, which I took as an invitation and personal challenge to try to beat
him every single day.
For the past three years I’d used Python exclusively, as its vast standard library and awesome syntax led to
quick and elegant solutions. Unfortunately it stands no chance, at least on the earlier puzzles, of beating the
speed of Shane’s preferred language, PHP. For a while I consoled myself with the notion that once the
challenges got more complicated I’d be in with a shot, but after the third or fourth time that Shane’s solution
finished before the Python interpreter even started I decided I’d have to jump ship. I started using Nim.
DNS-over-TLS is a fairly recent specification described in RFC 7858, which enables DNS clients to communicate with servers over a
TLS (encrypted) connection instead of requests and responses being sent in plain text. I won’t ramble on about
why it’s a good thing that your ISP, government, or neighbour can’t see your DNS requests…
I use an EdgeRouter Lite from Ubiquiti Networks at
home, and recently configured it to use DNS-over-TLS for all DNS queries. Here’s how I did it.
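The excerpt cuts off before the configuration itself, but to give a flavour, here is a sketch of one common approach on EdgeOS (which is Debian-based), and not necessarily the exact method from the original post: run a local DNS-over-TLS stub resolver such as stubby on the router, then point the built-in dnsmasq forwarder at it. The interface name is a placeholder.

```shell
# Install a local stub resolver that forwards queries over TLS
# (EdgeOS is Debian-based, though enabling extra repos may be needed)
sudo apt-get install stubby

# Point the EdgeRouter's dnsmasq forwarder at the local stub resolver
configure
set service dns forwarding listen-on eth1
set service dns forwarding name-server 127.0.0.1
commit ; save
exit
```

With this in place, LAN clients keep talking plain DNS to the router, and only the router’s upstream queries travel over TLS.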
I was thinking about switching DNS providers recently, and found myself whoising random domains
and looking at their nameservers. One thing led to another and I ended up doing a survey of the nameservers of
the top 100,000 sites according to Alexa.
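The per-domain lookup behind a survey like this can be done with standard tools (the domain is a placeholder):

```shell
# Authoritative nameservers for a domain; +short trims the output
# down to just the NS records
dig +short NS example.com

# whois also reports nameservers, alongside registrar details
whois example.com | grep -i 'name server'
```

Loop that over a list of domains, normalise the nameserver hostnames to their providers, and you have the raw data for a popularity count.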
Most popular providers
The top providers by a large margin were, unsurprisingly, Cloudflare and AWS Route 53. Between them they
accounted for around 30% of the top 100k sites.
I’ve been spending some time recently setting up automated testing for our collection of Android apps and
libraries at work. We have a mixture of unit tests, integration tests, and UI tests for most projects, and
getting them all to run reliably and automatically has posed some interesting challenges.
Running tests on multiple devices using Spoon
Spoon is a tool developed by Square that handles distributing
instrumentation tests to multiple connected devices, aggregating the results, and making reports.
As part of our continuous integration we build both application and test APKs, and these are pushed to the
build server as build artefacts. A separate build job then pulls these artefacts down to a Mac Mini we have in
the office, and executes Spoon with a few arguments:
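The exact arguments aren’t shown in this excerpt, but a typical spoon-runner invocation looks something like the following (file names, version, and title are placeholders):

```shell
# spoon-runner is published by Square as a self-contained jar
java -jar spoon-runner-1.7.1-jar-with-dependencies.jar \
  --apk app-debug.apk \
  --test-apk app-debug-androidTest.apk \
  --output spoon-output \
  --title "MyApp instrumentation tests"
```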
Spoon finds all devices, deploys both APKs on them, and then begins the instrumentation tests. We use two
physical devices and an emulator to cover the form factors and API versions that are important to us; if any test
fails on any of those devices, Spoon will return an error code and the build will fail.
I recently came across a useful tool on GitHub called ssh-audit. It’s a small Python script that connects to an SSH server,
gathers a bunch of information, and then highlights any problems it has detected. The problems it reports range
from potentially weak algorithms right up to known remote code execution vulnerabilities.
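Running it is about as simple as tools get (the host is a placeholder):

```shell
# Audit the SSH server on the default port (22)
python ssh-audit.py example.com

# Or a non-default port with -p
python ssh-audit.py example.com -p 2222
```

The output lists the server’s key exchange, host key, cipher, and MAC algorithms, with warnings against anything considered weak.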
I recently noticed that I’d accidentally lost my previous GPG private key — whoops. It was on a drive that I’d
since formatted and used for a fair amount of time, so there’s no hope of getting it back (but, on the plus side,
there’s also no risk of anyone else getting their hands on it). I could have created a new one in a few seconds
and been done with it, but I decided to treat it as an exercise in doing things properly.
Background: GPG? Yubikey?
GPG, or GnuPG, is short for GNU Privacy Guard, which is a suite of
applications that provide cryptographic privacy and authentication functionality. At a basic level, it works in a
similar way to HTTPS certificates: each user has a public key which is shared widely, and a private key that is
unique to them. You can use someone else’s public key to encrypt messages so only they can see them, and use your
own private key to sign content so that others can verify it came from you.
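The encrypt/sign split described above maps directly onto the gpg command line (the recipient address and file names are placeholders):

```shell
# Encrypt a file with someone's public key, so only the holder of the
# matching private key can read it
gpg --encrypt --recipient alice@example.com message.txt

# Sign a file with your own private key, producing a detached signature
gpg --detach-sign message.txt

# Anyone with your public key can then verify the signature
gpg --verify message.txt.sig message.txt
```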
A Yubikey is a small hardware device that offers two-factor
authentication. Most Yubikey models also act as smartcards and allow you to store OpenPGP credentials on them.
One of my favourite hobbyhorses recently has been the use of HTTPS, or lack thereof. HTTPS is the thing that
makes the little padlock appear in your browser, and has existed for over 20 years. In the past, that little
padlock was the exclusive preserve of banks and other ‘high security’ establishments; over time its use has
gradually expanded to most (but not all) websites that handle user information, and the time is now right for it
to become ubiquitous.
Over the past few weeks I’ve gradually been migrating services from running in LXC containers to Docker
containers. It takes a while to get into the right mindset for Docker - thinking of containers as basically
immutable - especially when you’re coming from a background of running things without containers, or in “full”
VM-like containers. Once you’ve got your head around that, though, it opens up a lot of opportunities: Docker
doesn’t just provide a container platform, it turns software into discrete units with a defined interface.
With all of your software suddenly having a common interface, it becomes trivial to automate a lot of things
that would be tedious or complicated otherwise. You don’t need to manage port forwards because the containers
just declare their ports, for example. You can also apply labels to the application containers, and then query
the labels through Docker’s API.
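A small sketch of the label workflow just described (label keys and the image are illustrative):

```shell
# Attach labels when starting a container
docker run -d --label service=web --label env=prod nginx

# Query containers by label via the CLI, which wraps the same API
docker ps --filter "label=service=web"

# Or read a specific container's labels directly
docker inspect --format '{{json .Config.Labels}}' <container-id>
```

Because the labels travel with the container, tooling such as reverse proxies or monitoring agents can discover services by querying the Docker API rather than being configured by hand.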