I’ve been spending some time recently setting up automated testing for our collection of Android apps and
libraries at work. We have a mixture of unit tests, integration tests, and UI tests for most projects, and
getting them all to run reliably and automatically has posed some interesting challenges.
Running tests on multiple devices using Spoon
Spoon is a tool developed by Square that handles distributing
instrumentation tests to multiple connected devices, aggregating the results, and making reports.
As part of our continuous integration we build both application and test APKs, and these are pushed to the
build server as build artefacts. A separate build job then pulls these artefacts down to a Mac Mini we have in
the office, and executes Spoon with a few arguments:
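The invocation looks roughly like this (a sketch: the APK file names and jar version are placeholders, not our actual artefact names, though the flags are Spoon's standard CLI options):

```shell
# Run Spoon against every connected device; fail the build if any test fails.
java -jar spoon-runner-1.7.1-jar-with-dependencies.jar \
    --apk app-debug.apk \
    --test-apk app-debug-androidTest.apk \
    --output spoon-output \
    --fail-on-failure
```

The `--output` directory ends up containing Spoon's HTML report, which we archive as a build artefact alongside the pass/fail result.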
Spoon finds all devices, deploys both APKs on them, and then begins the instrumentation tests. We use two
physical devices and an emulator to cover the form factors and API versions that are important to us; if any
test fails on any of those devices, Spoon will return an error code and the build will fail.
Shoring up SSHd configuration
Published on Oct 18, 2016
I recently came across a useful tool on GitHub called
ssh-audit. It’s a small Python script that connects to an
SSH server, gathers a bunch of information, and then highlights any problems it has detected. The problems it
reports range from potentially weak algorithms right up to known remote code execution vulnerabilities.
This is the kind of output you get when running ssh-audit. In this particular example, I’m looking at GitHub’s
SSH server and have filtered the output to just warnings and failures:
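The command I ran was along these lines (assuming ssh-audit is on your PATH; its `-l`/`--level` option sets the minimum severity to report, which is how I limited the output to warnings and failures):

```shell
# Audit GitHub's SSH server, showing only warnings and failures.
ssh-audit -l warn github.com
```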
Creating an offline GnuPG master key with Yubikey-stored subkeys
Published on Aug 11, 2016
I recently noticed that I’d accidentally lost my previous GPG private key — whoops. It was on a drive that I’d
since formatted and used for a fair amount of time, so there’s no hope of getting it back (but, on the plus
side, there’s also no risk of anyone else getting their hands on it). I could have created a new one in a few
seconds and been done with it, but I decided to treat it as an exercise in doing things properly.
Background: GPG? Yubikey?
GPG or GnuPG is short for GNU Privacy Guard, which is a suite of
applications that provide cryptographic privacy and authentication functionality. At a basic level, it works
in a similar way to HTTPS certificates: each user has a public key which is shared widely, and a private key
that is unique to them. You can use someone else’s public key to encrypt messages so only they can see them,
and use your own private key to sign content so that others can verify it came from you.
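As a concrete sketch of those two operations on the command line: the snippet below generates a throwaway, unprotected demo key (the "Demo User" identity is made up for illustration) inside a temporary keyring, so nothing here touches your real keys.

```shell
# Use a throwaway keyring so this demo doesn't touch your real keys.
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# Generate an unprotected demo key (hypothetical identity, illustration only).
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Demo User <demo@example.com>" default default never

# Encrypt a message with the recipient's public key...
echo 'hello' > message.txt
gpg --batch --encrypt --recipient demo@example.com \
    --output message.gpg message.txt

# ...and decrypt it again with the matching private key.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --output decrypted.txt --decrypt message.gpg

# Sign the message with the private key, then verify with the public key.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --detach-sign --output message.sig message.txt
gpg --batch --verify message.sig message.txt
```

In real use the recipient's public key would have been imported from a keyserver or received directly, rather than generated locally.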
A Yubikey is a small hardware device that offers two-factor
authentication. Most Yubikey models also act as smartcards and allow you to store OpenPGP credentials on them.
Why you should be using HTTPS
Published on Jun 17, 2016
One of my favourite hobbyhorses recently has been the use of HTTPS, or lack thereof. HTTPS is the thing that
makes the little padlock appear in your browser, and has existed for over 20 years. In the past, that little
padlock was the exclusive preserve of banks and other ‘high security’ establishments; over time its use has
gradually expanded to most (but not all) websites that handle user information, and the time is now right for
it to become ubiquitous.
Why use HTTPS?
There are numerous advantages to using HTTPS, both for the users of a website and for the operator:
Privacy
The most obvious advantage is that HTTPS gives your users additional privacy. An insecure (HTTP) request can
potentially be read by anyone on the same network, or the network operators, or anyone who happens to operate
a network along the path between the user and the server.
Users on shared WiFi networks (such as those in coffee shops, hotels, or offices) are particularly vulnerable
to passive sniffing by anyone else on that network. If the network is open (as is frequently the case) then
anyone in radio range can see exactly what the user is up to.
Automatic reverse proxying with Docker and nginx
Published on May 21, 2016
Over the past few weeks I’ve gradually been migrating services from running in LXC containers to Docker
containers. It takes a while to get into the right mindset for Docker - thinking of containers as basically
immutable - especially when you’re coming from a background of running things without containers, or in “full”
VM-like containers. Once you’ve got your head around that, though, it opens up a lot of opportunities: Docker
doesn’t just provide a container platform, it turns software into discrete units with a defined interface.
With all of your software suddenly having a common interface, it becomes trivial to automate a lot of things
that would be tedious or complicated otherwise. You don’t need to manage port forwards because the containers
just declare their ports, for example. You can also apply labels to the application containers, and then query
the labels through Docker’s API.
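For example, a container can carry a label that a reverse-proxy generator consumes later (the `proxy.host` label name here is made up for illustration; the flags are standard Docker CLI):

```shell
# Start a container carrying a label for the proxy generator to pick up.
docker run -d --name web --label proxy.host=example.com nginx

# List containers that carry the label, via the Docker CLI (which talks to the API).
docker ps --filter "label=proxy.host" --format '{{.Names}}'

# Read a specific label's value back off a container.
docker inspect --format '{{ index .Config.Labels "proxy.host" }}' web
```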