In the past I’ve written about my attitude towards system administration. As I summarized in that post:
Your hard drive is object code; you had better be able to build it from source.
I haven’t really been practicing this lately, at least with my personal
machines. When I switched back to Arch, I tried to capture most of the
installation process in automated scripts. It wasn’t too bad; the way
the installation process is set up you can mostly just dump the commands
you end up running into a text file and call it done. There are a few
quirks like formatting disks, since
fdisk is interactive, but you can
deal with this by just prepping the input and piping it in:
```shell
sed 's/ *#.*$//' << "EOF" | fdisk /dev/sda
o # start with a blank partition table
n # new partition
...
EOF
```

The sed command strips the comments back out before fdisk sees them, which lets me annotate the input; otherwise what I'm feeding to fdisk is a bit obscure.
But, I have three machines that I use on a regular basis, and tweaking
the script to do the right thing on each of those, and not be brittle to
slight changes, got to be a bit much. I did it for two machines, and
told myself I’d get to the other one at some point. For a couple weeks I
was careful to keep my ansible config in sync with the system.
But, eventually, I ran out of steam, and started cheating a bit. I’d
install a package with
pacman -S pkgname, without recording that in
the config. I’d fudge differences between systems, rather than
accounting for them in the scripts.
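For context, keeping the config in sync meant each of those ad-hoc installs should have shown up as a task in the playbook. A sketch of what that looks like, using ansible's pacman module (the task name and package names here are just placeholders, not my actual config):

```yaml
# Hypothetical playbook fragment; assumes the community.general
# collection is installed. Package names are placeholders.
- name: Install base packages
  community.general.pacman:
    name:
      - vim
      - git
    state: present
```

The point of declaring packages this way is that the playbook, not the running system, is the record of what's installed; every `pacman -S` run by hand is drift.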
At some point, I came to wonder if this was more effort than it was worth. In my early days with Linux, when asked why I used such a bare-bones system as Arch, and wasn't it so much more work, I would point out that, for one, I could actually get Arch set up faster than Ubuntu, if only because the install image took less time to download. In other words, “it isn’t actually as much work as you think.”
So, I thought: if setting up a machine is network-bound anyway, why bother with automation at all?
There are reasons. For one, I run an open WiFi network, because I think it’s the neighborly thing to do. However, this makes it all the more important to set up my machines securely, and so it’s worth being certain that I get that part right every time.
So this is my current approach: Automate the stuff that’s worth it. Sometimes messing up is dangerous. Sometimes you hit something that is actually a fair bit of work to do manually. If you’ve got a lot of machines, doing everything manually will take too much time, and you run the risk of introducing small inconsistencies that will come back to bite you. So you probably want to automate everything to do with those machines.
I think I could design and implement a distro like I described at the end of that other post, that would make automating “everything” easy enough to be worthwhile. But that would be a lot more work than I’m likely to spend on actually dealing with my personal machines for years. For most of the systems of today, it just isn’t worth it.