Managing e-books with Calibre-web

Screenshot of the calibre-web interface

If, like me, you’ve picked up a number of e-books over the years, you may use Calibre as your e-book manager. It’s a desktop application with an optional web interface, but it has its drawbacks. The user interface is clunky, and it tries to cram lots of advanced features in – even the latest version 7 is overwhelming for new users. So, if you can forgo the desktop application, there’s an alternative called calibre-web that does the same thing in a web browser, and with a much nicer interface.

Once installed, you can migrate your existing metadata.db and e-book folders from Calibre, and calibre-web will pick up where you left off. I particularly like the ability to download metadata from sources such as Google Books, to get more complete data about each book besides its author and title. There’s a built-in e-reader, or you can use an app that supports OPDS – I used Aldiko.
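If it helps, here’s a hypothetical one-off copy of an existing library from a desktop machine to the server – the paths and hostname are placeholders, and metadata.db lives at the root of the library folder alongside the books themselves:

    # Copy the whole Calibre library (books plus metadata.db) to the server;
    # 'Calibre Library' is Calibre's default folder name on the desktop.
    rsync -av "$HOME/Calibre Library/" user@server:/srv/calibre-library/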

By far the easiest way to install it is using Docker. There’s a good image on Docker Hub; it’s maintained by a third party but recommended by calibre-web’s developers. Once installed, it doesn’t require much additional configuration.
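To give an idea, here’s a sketch of a run command for the LinuxServer.io image – which, as far as I know, is the third-party image in question – with the paths, user IDs and time zone as placeholders for your own values:

    # Run calibre-web from the LinuxServer.io image; the web interface
    # listens on port 8083. PUID/PGID should match the user that owns
    # your library folder on the host.
    docker run -d \
      --name=calibre-web \
      -e PUID=1000 \
      -e PGID=1000 \
      -e TZ=Europe/London \
      -p 8083:8083 \
      -v /path/to/config:/config \
      -v /path/to/calibre-library:/books \
      --restart unless-stopped \
      linuxserver/calibre-web:latest

Once it’s up, you point calibre-web at the metadata.db inside /books from its web interface.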

By default, calibre-web doesn’t allow uploads, but you can amend this in the Admin settings. The settings toggle is rather buried away, and it took me some time to find it. But once uploads are enabled, it allows you to completely replace the desktop Calibre app if you want to. You can also set up multiple user accounts, if you want to share your calibre-web server with others.

I have calibre-web installed on the same Raspberry Pi as my Plex and Home Assistant servers. Indeed, calibre-web essentially offers a kind of Plex for e-books, seeing as Plex doesn’t offer this itself. Unfortunately, most of my e-books were purchased through Amazon, and so are only accessible through their Kindle apps and devices. But for the handful of books that I’ve picked up through the likes of Unbound and Humble Bundle, it’s helpful to have them in one place.

Comment Spam strikes back

An illustration of a robot turning web pages into canned meat product. Generated using Bing AI Image Generator

So now that I’m blogging again, it’s the return of comment spam on my blog posts.

Comment spam has always been a problem with blogs – ever since blogs first allowed comments, spam has followed. Despite the advent of the rel="nofollow" link attribute, automated bots still crawl web sites and submit comments with links, in the hope that this will boost their rankings in search engines.

In the early days of blogging, blogs often appeared high in Google’s search engine results – by their very nature, they featured lots of links, were updated frequently, and the blogging tools of the time often produced simple HTML which was easily parsed by crawlers. So it was only natural that those wanting to manipulate search engine rankings would try to take advantage of this.

I’ve always used Akismet for spam protection, even before I switched to WordPress, and it does a pretty good job. Even so, I currently have all comments set to be manually approved by me, and last week five got through Akismet that I had to manually junk.

Humans, or AI?

These five interested me because they were more than just the usual generic platitudes about this being a ‘great post’ that ‘taught me so much about this topic’. They were all questions relating to the topic of the blog post they were left on, each with a unique name attached. However, as they all came through together and had the same link in them, it was clear that they were spam – advertising a university in Indonesia, as it happens.

Had it not been for the prominent spam link and the fact they all came in together, I might not have picked up on them being spam. Either they were actually written by a human, or someone is harnessing an AI to write comment spam now. If it’s the latter, then I wonder how much that’s costing. As many will know already, AI requires a huge amount of processing power, and whilst some services are offering free and low-cost tools, I can’t see this lasting much longer as the costs add up. But it could also just be someone being paid through services like Amazon Mechanical Turk, even though such tasks are almost certainly against their terms of service.

I think I’m a little frustrated that comment spam is still a problem even after a few years’ break from blogging. But then email spam is a problem that we still haven’t got a fix for, despite tools like SPF, DKIM and DMARC. I’m guessing people still do it because, in some small way, it does work?

Asking your friends a question every day

An illustration of a question mark appearing from a wizard's hat. Generated by Bing AI Image Generator

A couple of years ago, I asked my Facebook friends a question – what animal do you think our child wants as a pet? And as an incentive, whoever guessed correctly could nominate a charity to receive a £5 donation. The post got around 60 comments before the correct answer – a parrot – was guessed, and the £5 went to the Bradford Metropolitan Food Bank.

We didn’t buy our child a parrot as a pet – they’re expensive to buy and insure, and can out-live their humans – but it gave me an idea.

So for the whole of 2022, I asked my Facebook friends a new, unique question every day. I wrote most of these in an Excel spreadsheet over the course of Christmas 2021, and then added to it over the year. Questions were usually posted in the morning, and all got at least one comment – but some got many more.

I have around 300 friends on Facebook and so I tried to come up with questions that were inclusive, or hypothetical, so as not to exclude people. For example, not all my friends drive, so asking lots of questions about cars would exclude people. I also wanted to avoid any questions that could be triggering for people, so most were framed around positive experiences.

I suppose I was taking a leaf out of Richard Herring’s book – quite literally, as he has published several Emergency Questions books – but it’s something I enjoyed doing. It meant that I found out some more facts about my friends and got to know some of them better. It also reminded me of the really early days of blogging, with writing prompts like the Friday Five.

This year, I’ve asked the same questions, but included my answers in the posts, as I didn’t usually get a chance to answer my own questions in 2022. This has also required some re-ordering of questions, as some related to events like Easter which were on different days this year.

And for 2024? Well, I’m slowly working on some brand new questions, although I’m only up to March so far. And I keep thinking of great ideas for questions, only to find that I’ve already asked them before.

Maybe I’ll publish them as a page on here someday.

Bringing back the archives

An illustration of a phoenix rising from the ashes, with a web page. Generated by the Bing AI Image Creator

Last month, I wrote about how I had found peace with myself regarding losing over a decade’s worth of blog posts.

Well, I’ve sort-of changed my mind already: I have brought back some old posts which, until now, were only accessible via the Internet Archive’s Wayback Machine.

This doesn’t mean that all of my old posts will be reinstated – if anything, I’ll be bringing back 1-2% of them at most. My criteria are:

  • Posts which, despite being offline for about 5 years, are still linked to. I’m using the Redirection WordPress plugin to track 404 errors, which you can group by URL to see the most frequently requested missing pages.
  • Posts that still offer useful advice, or information that is otherwise not easily accessible on other web sites.
  • Posts that mark important events in my life.

So, here’s a selection of what I’ve brought back already, in chronological order:

  • Media Player Classic (January 2004). A review of a now-defunct lightweight alternative media player for Windows. VLC is probably a better option nowadays.
  • Apple Lossless Encoder (May 2004). A blog post about Apple’s then-new music format which preserves full audio quality when ripping CDs in iTunes, and how it compares to other formats like FLAC and Monkey’s Audio.
  • Knock-off Nigel (August 2008). An anti-piracy advert for TV.
  • How to migrate a Parallels virtual machine to VirtualBox (November 2008). A how-to guide for switching from Parallels Desktop to VirtualBox, which I imagine is still useful for some people.
  • Fixing high memory usage caused by mds (February 2013). A how-to guide for fixing an issue with macOS. I don’t use a Mac anymore but hopefully this is still useful to someone.
  • Baby update (November 2015). This was actually a draft version of a post that must have somehow survived in Firefox’s local storage, so I re-published it.
  • How to: fix wrong location on iPhone (January 2017). Another how-to guide that fixed an issue I was having at the time with my iPhone’s location randomly jumping around.

There’s more to come, as and when I find time to restore them. I’m also using Google Search Console to find pages that it’s expecting to work, but that result in a 404 error.

I wear glasses now

A photo of me, taken in July 2021, wearing glasses

There are a few life developments that have happened in the years since I stopped blogging regularly, and one of them was in July 2021 – I started wearing glasses.

I hadn’t noticed that my vision was deteriorating, but it was picked up at a routine eye test. I suspected that the optometrist had found I needed glasses when he tweaked the lenses and the last couple of lines on the eye chart suddenly became much clearer. Oh well, I managed 37 years without needing to wear them.

I’m fortunate that I can just wear one pair of glasses for both near and distance vision, so I don’t need to take them on and off for different tasks, or wear bi-focals. And they make a difference – as someone who uses screens all day at work, my eyes aren’t as tired at the end of the day as they were before.

Of course, July 2021 was around the time when we still needed to wear facemasks on public transport, so I got the lovely experience of my glasses steaming up.

You may also notice that I’m overdue for my next eye test, so I promise that I’ll book another one soon. I’ve contemplated getting contact lenses next time, but it depends how much my glasses cost. And I don’t mind wearing glasses too much.

Running Home Assistant in Docker and Snap

A screenshot of the Home Assistant installation instructions for Docker

So, as I mentioned a couple of weeks ago, I’ve set up Home Assistant (HA) to control the various smart devices that we have around the home. At the time, I just used a snap package, but now I’ve migrated to using Docker, and here’s why.

Firstly, there are some disadvantages of installing Home Assistant using a snap package. Namely:

  1. The snap package isn’t an official release by the Home Assistant project, and is instead built by a third party.
  2. This means that, at the time of writing, it’s a couple of releases behind the latest official release.
  3. It also means that it’s not a formally supported way of running Home Assistant, and there are fewer resources out there to help you if you’re stuck.
  4. I had issues updating previously installed custom components from HACS.

Meanwhile, there’s an official Home Assistant Docker image that is updated at the same time as new releases, and it’s mentioned in the installation guide.

So, on the whole, Docker is better for running HA than Snap. But I wanted to run HA on my Raspberry Pi 4, which runs Ubuntu Core, and that only offers snap packages. But wait… you can install Docker as a snap, and the Docker snap package is maintained by Canonical, so it’s regularly updated.

You can see where this is going. What if I install Docker using Snap, and then install Home Assistant into Docker? Well, that’s what I did, and I’m pleased to inform you that it works.

Docker on Snap, step-by-step

If you want to try this yourself, here are the steps I followed. However, please be aware that you can’t migrate a Home Assistant setup from Snap to Docker. Whilst HA does offer a backup tool, the option to restore a backup is only available on Home Assistant Operating System, and it seems that manually copying the files across won’t work either. So, if you currently use Snap, you’ll have to set up HA again from scratch afterwards. You’ll also, at the very least, need to run snap stop home-assistant-snap before you start.

  1. Install Docker. You can do this by logging into your machine using SSH and typing in snap install docker.
  2. Enable access to the Docker daemon. There’s probably a better way of doing this (see the sketch after this list), but for me, just running chmod 777 /var/run/docker.sock worked.
  3. Install Home Assistant. You’ll need to enter quite a long shell command, which is:
    docker run -d \
    --name homeassistant \
    --privileged \
    --restart=unless-stopped \
    -e TZ=MY_TIME_ZONE \
    -v /PATH_TO_YOUR_CONFIG:/config \
    --network=host \
    ghcr.io/home-assistant/home-assistant:stable

    The two placeholder values will need changing: for MY_TIME_ZONE, type in your time zone, which in my case is Europe/London, and PATH_TO_YOUR_CONFIG is the folder where you want your configuration files to live. I suggest /home/[username]/homeassistant.
  4. Grab a drink, as the installation will take a few minutes, and then open http://[your IP address]:8123 in a web browser. If it’s worked, then you’ll be presented with HA’s onboarding screen.
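As for step 2, there’s a less heavy-handed approach than chmod 777: putting your user in the docker group, which I believe is what the Docker snap’s own documentation suggests. A sketch:

    # Create the docker group, add the current user to it, then restart
    # the Docker daemon so it picks up the new group on the socket.
    sudo addgroup --system docker
    sudo adduser $USER docker
    newgrp docker
    sudo snap disable docker
    sudo snap enable docker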

Again, if you had the HA snap package installed and everything’s now working with Docker, you’ll need to uninstall any related HA snaps (like HACS, toolbox and configurator) and then home-assistant-snap itself. And then you’ll need to set up all of your devices again. The good news is that, if you decide to move your HA installation to a new machine, you can just migrate the Docker container and its config folder in future.
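For reference, the clean-up could look something like the sketch below. The snap names here are from memory and may differ on your system, so check the output of snap list first:

    # See what's installed, then remove the add-on snaps before the main one.
    snap list | grep home-assistant
    sudo snap remove home-assistant-hacs home-assistant-toolbox home-assistant-configurator
    sudo snap remove home-assistant-snap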

Wouldn’t it be better just to run Docker?

Okay, so you may be wondering why I’ve set up HA this way. After all, it would probably be easier just to install Raspberry Pi OS Lite and put Docker on that, without using Snap. Well, there’s a method to my madness:

  • I like running Ubuntu Core because it’s so minimalist. It comes with the bare minimum of software installed, which means that there’s less risk of your system being compromised if a software vulnerability is found and exploited.
  • I already have Plex running quite happily in Snap, and didn’t want to have to migrate that as well.

In other words, this was the easiest way of running HA in Docker with my current setup. And I’m happy with it – I’m running the latest version of HA and it seems to work better.

There are a couple of additional steps that I still need to complete, which are:

  • Enabling SSL/TLS for remote access
  • Enabling mDNS broadcasts for Apple HomeKit integration

I’m working on these. Home Assistant Cloud is the easiest way of setting up secure access, and I’m considering it – it’s a paid-for service, but it financially supports HA’s development and seems to be much easier than the alternatives. As for mDNS, I imagine there’ll be things I need to tweak in both Docker and Snap to get it to work.

New theme, who dis?

Screenshots of the old and new themes for the blog, side by side

I’ve deployed a new theme on the blog. If you’re reading this in your feed reader, firstly, go you, because so few people do nowadays, but also, please click through and have a look.

The theme I’m using is GeneratePress, with mostly default settings. This replaces one of the default WordPress themes that I was using before.

Why the change? Mainly page bloat: whilst the default WordPress themes are very extensible, their output includes shedloads of extra JavaScript, CSS and style tags, resulting in web pages that are bigger than they should be. Whilst I’m at no risk of exceeding the data transfer limits offered by my hosting company, it does affect the speed of the site, and not everyone has unlimited mobile data or a fast connection.

I learnt HTML at a time when it was the done thing to hand-code pages – indeed, back when I used Blogger and later Movable Type as my blogging tools, for the most part I used themes that I had written entirely myself. JavaScript was used very sparingly, and the HTML and CSS code was nice, clean and simple. So seeing the code soup being output by the default themes was off-putting.

I also think about this blog post by Terence Eden, ‘the unreasonable effectiveness of simple HTML’, where he gives an example of someone applying for housing benefit on a PlayStation Portable (PSP). This is presumably because it’s the only portable device with a web browser that she can use. But because the HTML on gov.uk is so clean and lightweight, the old, under-powered web browser on the PSP is still able to render it, and she’s able to get the information that she needs. A big, flashy web site oozing with various JavaScript frameworks, loads of tracking scripts and adverts everywhere just isn’t going to work on such an old device.

And then I saw this toot today:

I can't help but notice the new Apple laptops rate "Video Playback 22 hours, Web Browsing 15 hours" under battery life.

Congratulations web developers everywhere, it's now more computationally intense to render a webpage than video playback!

— Brad L. (@reyjrar), 5 November 2023

Web pages are getting so full of cruft that rendering them can require more processing power than video playback.

So, that’s why I’m going with a lightweight theme. It makes the web site much more accessible to more people. GeneratePress outputs lighter code that displays fast, and it offers a good balance between extensibility and speed. It won’t be for everyone, but it seems to work well for me.

Ghost Signs

You know those old painted adverts you sometimes see on the side of buildings? York, where I grew up, has a famous one for Bile Beans, thanks to its prominent location, and there’s one in Halifax too.

Historic England is building a list of these, with photos and GPS locations, and you can contribute. I’ve added one near where I live in Sowerby Bridge – it’s seen better days, but perhaps if Historic England has a list, there may be money and resources to restore some of these.

Ianvisits mentioned this last week, and it’s encouraging to have seen the list grow in the days since. It’s not just painted adverts like this that are welcome – signs for old and defunct shops can be added too.

Getting started with Home Assistant

A screenshot of Home Assistant

A recent project of mine has been to set up Home Assistant, as a way of controlling the various smart devices in our home.

From bridge to assistant

You may remember, back in February, that I had dabbled with Homebridge, a more basic tool designed to bridge devices that aren’t otherwise supported into Apple’s HomeKit universe.

I’ve ditched Homebridge, as it didn’t really do what I wanted it to do. If you want to primarily use Apple’s Home ecosystem, but have a few devices which don’t support it, then it’s great. But that doesn’t really apply to our home – although I’m an iPhone and iPad user, I no longer have a working Mac and so I use a Windows desktop, and my wife uses Android devices. Consequently, the only device that we own which natively supports HomeKit is our LG smart TV.

Home Assistant is essentially a replacement for Apple Home, Google Home, Samsung SmartThings and whatever Amazon’s Alexa provides. That means that it provides its own dashboard, and lots of possibilities for automations. But instead of your dashboard being hosted on a cloud server somewhere, it’s on a device in your own home.

Setting it up

Like with Homebridge and HOOBS, you can buy a Home Assistant hub with the software pre-installed. If you already have a device, such as a spare Raspberry Pi, then you can either install HAOS (a complete operating system based around Home Assistant) or just install Home Assistant on an existing system. I chose the latter, and now I have Home Assistant sat on the same device as my Plex Server, using Ubuntu Core and the relevant Snap package.

Once set up, Home Assistant will auto-discover some devices; it immediately found both my ADSL router and my Google Wifi hub using UPnP. You can then add devices yourself. Home Assistant supports way, way more devices than its competitors, due to its hobbyist nature. For example, there’s an IPP integration which means that you can view your printer’s status, including how much ink is left. Despite it being a ‘smart device’ of sorts, Google Home won’t show this in its app. You can also bring in web services like Google Calendar and last.fm.

Some integrations are easier to set up than others though. In most cases, one of the first instructions for setting up an integration is ‘sign up for a developer account with your device manufacturer’. Whilst the instructions are usually quite clear, you’ll find yourself spending lots of time copying and pasting OAuth keys and client secrets to be able to connect your devices. In the case of my Nest Thermostat, this included paying a non-refundable $5 USD charge to access the relevant APIs.

It should also be noted that, whilst Home Assistant does offer integration with Apple HomeKit, I’ve yet to get this to work. Which is ironic as this was the reason why I previously used HomeBridge.

Remote access

Another thing which took some trial and error to get right was enabling remote access. If you want to be able to view and control your devices when you’re out of the home, there are a few additional steps you’ll need to complete. These include:

  • Configuring port forwarding on your router
  • Setting up a dynamic DNS service

Home Assistant recommends DuckDNS, which is pretty simple and seems to work okay, but again it’s something that requires some technical know-how.

One limitation of using Home Assistant as a snap on Ubuntu Core is that you can’t use add-ons, so setting up DuckDNS meant manually editing Home Assistant’s configuration.yaml file. Indeed, some integrations require this, so it’s worth backing up this file regularly. You can, however, install a separate snap which enables the Home Assistant Community Store (HACS), and this allows you to install additional (but less well-tested) integrations. I initially couldn’t get this to work, but managed to install it literally whilst writing this paragraph.
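For what it’s worth, the manual DuckDNS setup boils down to a few lines in configuration.yaml. Here’s a sketch, assuming the YAML-configured duckdns integration – the path, subdomain and token are placeholders for your own values:

    # Append a DuckDNS block to Home Assistant's configuration.yaml,
    # then restart Home Assistant so it takes effect.
    cat >> /path/to/configuration.yaml <<'EOF'
    duckdns:
      domain: yoursubdomain
      access_token: YOUR_DUCKDNS_TOKEN
    EOF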

If you’re willing to pay, then for £6.50 per month you can get Home Assistant Cloud. As well as providing an income for Home Assistant’s developers, it offers an easier, secure remote access solution, and integrates with Google Assistant and Alexa.

Privacy matters

It should also be noted that Home Assistant has a greater focus on privacy. By hosting an IoT hub yourself, you can limit how much data your devices send to cloud servers, which may be in places like China with markedly different attitudes to privacy. Indeed, the integration with my Solax inverter (for our solar panels) connects directly to the inverter, rather than the Solax Cloud service. It’s therefore not surprising that many of the Home Assistant developer team are based in Europe.

Looking to the future, I’m hoping more of my devices will support Matter – indeed, this week, Matter 1.2 was released, adding support for devices like dishwashers. Theoretically, our existing Google Home devices can all be Matter hubs, but none of my other devices support it yet, and some may never. Home Assistant can work with Matter devices if you buy their SkyConnect dongle, and again, it will mean that more of your device communications can happen within your home rather than via the cloud. That should be faster, and better for privacy.

Overall, I’m quite happy with Home Assistant, even though it’s taken a long time to get every device added, with some trial and error along the way. I appreciate being able to see (almost) all of my devices on one dashboard, and it feels like I have more oversight and control over the smart devices in our home. I hope that, with greater Matter support, it’ll become easier for less-experienced users in future.

The times, they are upgrading

An AI generated image of a superhero emerging from a server cabinet, generated using Microsoft's Bing AI Image Creator

Hello – if you can read this, then the server upgrade worked!

I’ve wiped the previous server image (yes, I remembered to do more than one type of backup this time), and installed a freshly upgraded version of Linux. This means it’s running on Debian 12 (codenamed ‘bookworm’), and version 12 of Sympl. Sympl is a set of tools for Debian that makes managing a web server remotely a little easier, and is forked from Symbiosis which was originally developed by my hosting company Bytemark.

There were two reasons for going nuclear and starting from a fresh installation:

  1. The next version of WordPress, which will be 6.4, will have a minimum recommended PHP version of 8.1. This server was running PHP 7.3, and whilst I’m sure future versions would work up to a point, it’s a good opportunity to upgrade (there’s a quick version check after this list).
  2. I’ve had a few issues with the previous installation. The FTP server software never seemed to work correctly, and the database (MariaDB) would lock up almost every time I posted a new blog post. Hopefully, this won’t happen anymore.
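On that note, if you want to confirm what a freshly upgraded server is actually running, it’s a one-liner – Debian 12 ships PHP 8.2, comfortably above WordPress’s recommended minimum:

    # Confirm the installed PHP version; on Debian 12 this should report 8.2.x.
    php -v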

As this is a fresh WordPress installation, there may be a few things which don’t quite work yet. I’ve imported the existing blog posts and pages, and the theme is mostly the same, but I need to re-install the plugins and probably need to amend some settings. I’ll sort these issues out over the next few days.