Time estimation favours manual labour

sociology

The article's actual title is Software engineers are not (and should not be) technicians, but I tried to give it a better summary. (Large) companies want predictability, and predictable work is manual work, which means work doesn't get automated, leading to inefficiency. It's better to pay the one-time price of automation and then reap the benefits forever than to produce steady progress, because steady means not actually solving problems, just handling them.

Sound familiar?


par2cmdline-turbo.py

software, python, packaging

And there we go: par2cmdline-turbo.py is up:


Python packaging

software, python, packaging

I’m trying to figure out how to build Python packages that contain a binary component. There are tons of options out there; scikit-build-core is what I last used. I wanted to repurpose hugo-python-distributions to get a few more command-line utilities up on PyPI, and here’s what I’ve learned:

  1. There is no concept of cross-compilation in Python-land. See this discussion. Agriya very handsomely managed to implement an impressive build matrix for the hugo-python package, but seeing as Go cross-compiles natively, it’s a bit of a pity that it’s necessary at all. cibuildwheel offers some functionality, but that sits outside the Python build; it just preconfigures a platform using QEMU for you. It’s fantastic that it works, but it’s a rather time-consuming and cycle-wasting approach, given the option that Go already provides. I would actually like to re-use binaries from an existing build system and CI job, i.e. just download them from somewhere, which makes setting up QEMU even more ridiculous. If I could tell pip (wheel) which platform tag it should build for, I could also tell it which file to fetch. But no, nothing lets me do that.
  2. I can write Python code that does it, but how do I do that when I just learned to delete setup.py and use only pyproject.toml? I can’t! Building requires code, and code requires setup.py; it’s as simple as that. Well, I could use an external build system such as CMake and drive it with scikit-build-core, but that is overkill for something as simple as copying a file based on the target platform.
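The selection logic pip would need is trivial; here's a minimal sketch of mapping a target platform tag to a prebuilt binary to download. The tag-to-URL mapping is entirely made up — a real version would point at an existing CI artifact store.

```python
# Sketch: pick a prebuilt binary by wheel platform tag (PEP 425).
# The URLs below are hypothetical placeholders.
BINARIES = {
    "manylinux_2_17_x86_64": "https://example.org/par2-linux-amd64",
    "manylinux_2_17_aarch64": "https://example.org/par2-linux-arm64",
    "macosx_11_0_arm64": "https://example.org/par2-darwin-arm64",
    "win_amd64": "https://example.org/par2-windows-amd64.exe",
}

def binary_url(platform_tag: str) -> str:
    """Return the download URL for the given platform tag."""
    try:
        return BINARIES[platform_tag]
    except KeyError:
        raise ValueError(f"no prebuilt binary for {platform_tag!r}") from None
```

Given a `--platform-tag` style flag, this is all the "build" step would have to do before dropping the file into the wheel.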

So, setup.py it is. But I’ll write a new one from scratch for this project rather than re-use the somewhat overengineered hugo-python-distributions chain. At least I’ve refreshed my memory on those wheel tags (PEP 425: Python version-Python ABI-platform), on the fact that the x and y in manylinux_x_y_arch refer to the glibc version (PEP 600), and on the fact that I can set the whole thing by hand if I know better. And when I’m just going to download a binary instead of building it, I do know better.
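Those PEP 425 tags can be read straight off a wheel filename, which is always name-version(-build)-python-abi-platform.whl; a small illustrative parser:

```python
def parse_wheel_tags(filename: str) -> tuple[str, str, str]:
    """Split a wheel filename into its (python, abi, platform) tags (PEP 425)."""
    stem = filename.removesuffix(".whl")
    parts = stem.split("-")
    # The last three components are always the Python tag, ABI tag and
    # platform tag; name, version and the optional build number precede them.
    python_tag, abi_tag, platform_tag = parts[-3:]
    return python_tag, abi_tag, platform_tag
```

For a platform tag like manylinux_2_17_x86_64, the 2 and 17 are that glibc version floor from PEP 600.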

What would be nice is if setuptools let you specify which files (and not just deps) to include based on the platform, and let you set the platform with a flag, so I don’t actually have to be running on it.
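In the meantime you can fake that yourself in setup.py: read the target from an environment variable and hand the matching file to package_data. A sketch, where the TARGET_PLATFORM variable name and the bin/ file layout are both made up:

```python
import os

# Hypothetical mapping from a wheel platform tag to the file(s) to bundle.
_BINARIES = {
    "win_amd64": ["bin/par2.exe"],
    "macosx_11_0_arm64": ["bin/par2"],
    "manylinux_2_17_x86_64": ["bin/par2"],
}

def platform_files(target=None):
    """Files to ship for the given platform tag, defaulting to the
    (invented) TARGET_PLATFORM environment variable."""
    target = target or os.environ.get("TARGET_PLATFORM", "manylinux_2_17_x86_64")
    return _BINARIES[target]
```

In setup.py this would then be passed as setup(package_data={"par2cmdline_turbo": platform_files()}), with the build invoked as, say, TARGET_PLATFORM=win_amd64 python -m build.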

Serves me right for not simply checking in the binaries into the source tree 😁


Photo storage 2

software, hosting, server

Earlier I investigated a few pieces of software to manage a self-hosted photo library. I kinda settled on using Digikam and generating (static) HTML galleries every now and then. Someone maintains a grid comparison, similar to my earlier attempt, so I’ll use that to review my choice.


Synology and Tailscale

synology, tailscale

I bought an old Synology DS115j recently, because I had been putting off configuring an old laptop as a server for a long time, and I needed something that could host 3.5" drives. The first thing I discovered was that all my TVs can stream from the DLNA service without setup (i.e. without gathering telemetry), so that’s great! It also saves me from syncing the various USB drives I had plugged into the TVs all over the house.

Another thing I wanted it to do was host photos in a way that lets me share a link with friends and family, without relying on Big Tech services. Well, Synology has you covered, but the built-in photo app is quite heavy (and I have 100k+ photos) and requires an app. I use Resilio Sync, but that was too much for the 256 MB of RAM. No way that would work.

I prefer to generate a (static) website myself and use the Synology to host it. Well, you can! Although the docs tell you to install Web Station and then PHP or Python, you can actually leave the latter out. Web Station alone is enough to serve /web on ports 80 and 443. Just overwrite the index.html Synology puts there.

You can forward ports on your router and then access it by IP address, but that kind of exposure isn’t a great idea. I saw Tailscale available in the Synology ‘store’, and it turns out they have a Funnel service. My first thought was to use Cloudflare Tunnel (I already use Cloudflare for my websites), but the advised way to get that running is a Docker image, which again is too much for this little old thing. It turns out there is a third-party package, but I decided to go with the officially provided Tailscale package first.

It works! As of DSM 7 it requires a boot script after installation, which left a hint as to how to set up the rest:

sleep 120; /var/packages/Tailscale/target/bin/tailscale funnel 80

The sleep is there because Tailscale needs some time to restart (certainly on this machine); port 80 because the HTTPS certificate Synology uses is self-signed, which Tailscale doesn’t accept.

Lastly, the Synology didn’t show up as configurable as an exit node. A second script was needed:

/var/packages/Tailscale/target/bin/tailscale up --reset --advertise-exit-node

After this you must enable the capability on the Tailscale website and, on your other devices, select the machine as exit node. Nice: a safe tunnel home from anywhere in the world.

So far, no resource issues; RAM consumption is about 50%, idle CPU usage <5%.