More and more sites are implementing privacy-invading age checks or just completely blocking the UK thanks to
the Online Safety Act.
Protecting kids from some content online is certainly a noble goal, but the asinine guidance from
Ofcom, threats of absolutely disproportionate fines, and the stupidly broad categories of content have
resulted in companies just giving up or going through a tick-box exercise that offers very little protection
but lots of inconvenience and a complete invasion of privacy.
Instead of uploading my ID to some third-party company, I’ve taken to proxying my traffic through a server in a country that doesn’t have such stupid laws. Thankfully, Tailscale makes this really easy. I’ve discussed
how I use Tailscale before, but not really covered
app connectors. I find Tailscale’s description of these pretty confusing, but they basically amount
to automatic, DNS-based subnet routing configurations (or, to put it another way, a per-website exit node).
You can safely ignore all references to ‘SaaS apps’ in their docs.
I create a custom app connector, and give it the domains to be included:
App configuration in the Tailscale admin console
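If you’d rather manage this in the tailnet policy file than click around the admin console, the configuration looks roughly like the snippet below. This is a sketch based on my reading of Tailscale’s app connector docs, with a placeholder tag, name, and domains, so check the docs for the exact attribute shapes:

```jsonc
// Tailnet policy file (HuJSON), illustrative only.
{
  "nodeAttrs": [
    {
      // Nodes tagged tag:connector act as the app connector.
      "target": ["tag:connector"],
      "app": {
        "tailscale.com/app-connectors": [
          {
            "name": "osa-workaround",
            "connectors": ["tag:connector"],
            // Wildcards get expanded dynamically as clients resolve names.
            "domains": ["example.com", "*.example.com"]
          }
        ]
      }
    }
  ]
}
```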
Tailscale then magically resolves those domains, and has the ‘connector’ advertise routes for them. Any client
that accepts routes will start sending requests to the connector, which passes them on to the Internet at
large. Any other traffic is left alone, unlike when you use an exit node.
The special bit here is how you can specify wildcard domains. Tailscale proxies the DNS requests from clients
(so it can inject responses for nodes on your tailnet), which means it can dynamically update the routes as
you resolve new domains. I tried to set this up more manually and quickly came unstuck: despite using the same DNS servers, my server and my desktop would get different answers for the same query, because the responses vary by geography. Trying to get the full set of IPs (and keeping them updated) would have been a nightmare. Tailscale
expanding the wildcards nicely sidesteps all of that.
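If you want to see the geo-DNS problem for yourself, here’s a rough sketch in Go (the hostname and resolver are just examples, nothing to do with any particular site’s setup). Run it on two machines in different regions and the same query to the same public resolver will often come back with different addresses:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Query a specific public resolver directly, bypassing the local stub resolver.
	// Anycast plus geo-based DNS means the "same" resolver will often return
	// different IPs for the same name depending on where you run this.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "1.1.1.1:53")
		},
	}

	ips, err := r.LookupHost(context.Background(), "i.imgur.com")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	for _, ip := range ips {
		fmt.Println(ip)
	}
}
```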
At first I was simply proxying the traffic through one of my servers, but today I added a new connector for Imgur and found I was still blocked, this time for a different reason: they block not only my entire country but also a load of known datacenter IP ranges. Hmph. I fixed this by hacking up a new side project:
tsv. It’s a simple Go app that accepts traffic from the tailnet
(advertising itself as both an app connector and an exit node), and passes it on to another VPN.
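The real tsv handles route advertisement and raw traffic, but the core forwarding idea fits in a few lines. Here’s a minimal sketch (not the actual tsv code: the SOCKS5 address and the hard-coded destination are stand-ins, and it assumes a TS_AUTHKEY in the environment) of a tsnet node that relays TCP connections from the tailnet through an upstream VPN’s proxy:

```go
package main

import (
	"io"
	"log"
	"net"

	"golang.org/x/net/proxy"
	"tailscale.com/tsnet"
)

func main() {
	// Join the tailnet as a standalone node (uses TS_AUTHKEY from the environment).
	srv := &tsnet.Server{Hostname: "tsv-sketch"}
	defer srv.Close()

	// Dialer for the upstream VPN's SOCKS5 endpoint (placeholder address).
	upstream, err := proxy.SOCKS5("tcp", "10.8.0.1:1080", nil, proxy.Direct)
	if err != nil {
		log.Fatal(err)
	}

	// Accept TCP connections from other tailnet devices and relay them upstream.
	ln, err := srv.Listen("tcp", ":443")
	if err != nil {
		log.Fatal(err)
	}
	for {
		c, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go func(c net.Conn) {
			defer c.Close()
			// A real forwarder would recover the original destination from the
			// routed traffic; a fixed one is used here just to show the relay.
			out, err := upstream.Dial("tcp", "i.imgur.com:443")
			if err != nil {
				return
			}
			defer out.Close()
			go io.Copy(out, c)
			io.Copy(c, out)
		}(c)
	}
}
```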
There are lots of other ways you could accomplish this, but this makes it so all my devices can still access
services without any additional configuration. As long as Tailscale is installed, the Internet will still work
as it’s meant to, without all the nonsense. If I come across a site that doesn’t work, adding it is trivial: I
just make a new app connector in Tailscale.
Obvious disclaimer: the laws in the UK are binding on the service providers, not the end user. Doing this sort
of thing in other countries might be illegal. I don’t know; do your own research! Also all of this is a
workaround for something that should be fixed at a legislative level, but I’m not holding my breath.
I wrote before about how I’d
dropped Spotify in favour of locally stored music, but things
have advanced a bit since then. I had a few issues: Tauon would occasionally manage to lose its database, and with it all my carefully constructed playlists and song ratings, and the experience on my phone was not much fun.
I had to manually sync the music by plugging my phone into the computer, and sometimes it just refused to
mount the right partition. I don’t think there’s really a good way to debug an Apple phone not behaving
properly when connected to a Linux desktop. Then I started wanting more than one playlist synced, and trying
to find a way to make that work just broke me.
I spent a while looking at different ways of hosting the music centrally.
Plexamp gets lots of good reviews, and I’ve used Plex a fair
bit. I set about spinning up a Plex server, and just could not get it working. The server ran fine, served the
web interface, but would neither associate with my account nor run standalone. The docs were contradictory and
there was very little useful logging. After a lot of frustration, I stumbled across mentions that Plex block servers running on Hetzner. I assume that’s the cause of my issues, although I have no way to know for sure. I use Hetzner for all my
servers and other hosted services, but Plex have decided I can’t run the self-hosted software that I have a
lifetime subscription for there. What the actual fuck?
I could have probably worked around the arbitrary restriction, but I didn’t want to throw more time down the
drain. Instead, I set up Navidrome, an open source music server. It
supports the Subsonic protocol, which means you can use a whole slew of different clients with it (or even
write your own). It also means there’s a nice way to get data in and out of it programmatically, which I
recall being a bit of a fight with Plex.
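As a concrete example, pulling data out over the Subsonic API is just an HTTP call with salted-token authentication. Here’s a rough Go sketch (server URL and credentials are placeholders) that lists the playlists on a Navidrome instance:

```go
package main

import (
	"crypto/md5"
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Placeholder server and credentials.
	base := "https://music.example.com/rest"
	user, pass := "alex", "secret"

	// Subsonic auth: send a random salt and md5(password + salt) instead of the password.
	saltBytes := make([]byte, 8)
	if _, err := rand.Read(saltBytes); err != nil {
		panic(err)
	}
	salt := hex.EncodeToString(saltBytes)
	sum := md5.Sum([]byte(pass + salt))

	q := url.Values{
		"u": {user},
		"t": {hex.EncodeToString(sum[:])},
		"s": {salt},
		"v": {"1.16.1"},    // Subsonic API version
		"c": {"my-script"}, // client identifier
		"f": {"json"},
	}

	resp, err := http.Get(base + "/getPlaylists?" + q.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```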
Recently I realised that I’ve developed a self-imposed quality bar for blog posts. They need to be a certain
length, and have a certain substance to them. They need to be generally useful in some way I can’t
quite define, to some imagined future audience. They need to have images to break up the page, and Open Graph data for when they’re linked to on social media. But… maybe they don’t? Those things all make sense for longer
“article” type posts, but not so much for a personal blog.
I’ve done some fiddling so that I can make posts without all that extra stuff. We’ll see how it goes. The
posts won’t be distinguished on the site (yet?), but I have set up separate RSS feeds for just
short form posts and long form posts in case subscribers are
radically opposed to one or the other. Now I can start blogging like it’s 2005 again. But hopefully with a bit
less cringe.
I’ve been staring at this post in my editor for about five minutes. The urge to make it longer, more thorough,
more article-like is really strong. This entire paragraph is only here as a compromise with myself so I can
actually save and commit the post.
My watch. Yes, I am available for wrist modelling opportunities.
Around ten weeks ago I picked up an Apple Watch Series 10, and have been wearing it almost constantly since. It’s not
my first Apple Watch — I had a Series 5 for a bit back in 2020 — but it’s the first time I’ve actually stuck
with it. Ten weeks seems like an apt time to reflect on it.
Firstly, why did I even bother? Well, for a couple of years I’d been wearing a Xiaomi Smart Band 7, mainly to
monitor my sleep stats and set alarms that won’t wake up everyone else nearby. Its battery life was fantastic
— with notifications and other things turned off, I got about a month of use between charges — but actually
using it felt like trying to order food via the medium of interpretive dance.
My biggest gripe was the screen lock. If I didn’t have the screen locked, I’d periodically trigger it during the night when I moved around and it came into contact with my chest or leg. With the lock enabled, you had to deliberately swipe up from bottom to top to enable interaction, but that didn’t work reliably. When I wanted to adjust an alarm, I’d be stood there swiping repeatedly, trying to get it to respond. And once it was finally unlocked, the whole interface was just fiddly.
The other issue was the data quality. There were some nights when I’d been woken up, sometimes even getting up
and moving around, and it just didn’t show it in the data. If it can’t even get whether I’m asleep right, can
I trust anything else it says?
I spent a while researching the best devices for sleep tracking. The Oura ring came highly recommended, but it
was expensive and required a subscription to do anything useful. No thanks! The Apple Watch was consistently
rated pretty well, and I reasoned I could pick up a refurbished older unit. I’ve been
using an iPhone as my daily driver for a while, so it’d fit
right into my begrudged walled garden.
The Series 10 has a significant advantage, though: it charges much quicker than all the previous generations.
On a 30-minute charge, the Series 10 can go from 0 to 60%; the 9 can only make it to 40%, and my old 5 a
measly 30%. Shorter charge times mean I’m far less likely to leave it on charge and wander off without it. In
some ways the daily charging is more convenient than monthly: the wireless charger sits on my desk, and I plop
the watch on it for a little while in the evening; I don’t need to dig out the weird pogo-pin connector that
has vanished sometime in the last four weeks, then carefully arrange it so it stays attached.
How a watch maybe saved my life
One of the big features of the Apple Watch, like many other wearable devices, is health and fitness tracking.
I didn’t think much about this, beyond the sleep data I wanted, at first. I’ve never had a particularly good
relationship with exercise, but I do like some good statistics. I started going for walks more often to get
more data and see the graphs of VO2 max and HR recovery gradually inch up. That wasn’t the most profound
effect on my health, though…
Recently I’ve been on a small campaign to try to make my personal website more… personal. Little touches that make it obviously mine, not just another piece of the boring corporate dystopia
that is most of the web these days. I don’t quite want to fully regress to the Geocities era and fill the
screen with animated under construction GIFs, but I do want to capture some of that vibe.
I’d added some bits and pieces along those lines: floating images in articles now look like they’re stuck to
the page with sellotape, related post links have a wavy border that animates when you hover over them, and so
on. Next, I wanted to change the headings from a monospace font to something cursive, to resemble handwriting. Less terminal output, more handwritten letter. I couldn’t find one I liked, though. So why not
make my own? It can’t be that hard, right?
Failing to do it myself
I set out to try to make the font myself using open source tools. After doing a bit of research, it seemed
like the general approach was to create vectors of each character and then import them into a font editor.
That seems to mean either Adobe Illustrator and FontLab (if you have too much money) or Inkscape and FontForge
(if you like open source). I fall firmly into the latter category, so I grabbed my graphics tablet and opened
Inkscape.