Category Archives: Software

Replacing my Home Media Server pt. 2

In the first post in this series, I documented the process of procuring new media server hardware, installing and configuring the operating system, and getting my Drobo shares mounted with systemd and cifs.

In this post, I’ll go into some detail about setting up Plex Media Server and getting offsite backups working with Duplicati and Syncthing.

Plex Media Server

I tend to run the services that are hosted on my media server inside of Docker containers. The advantage to this approach is twofold:

  1. If anything stops working, I can just restart the container
  2. Configuration can be kept in one location, mounted by the container, and backed up to my Drobo so that I can easily recover in case of hardware failure

When setting up a new media server, I typically create a docker-compose file that lists all of the containers that I’d like to run. The Docker files that describe each container come courtesy of LinuxServer.io, which provides an excellent collection of containerized services. The documentation for each container includes a sample docker-compose entry that shows how to configure the service. Here’s the docker-compose file for linuxserver/plex:

version: "2.1"
services:
  plex:
    image: linuxserver/plex
    container_name: plex
    network_mode: host
    environment:
      - PUID=1000
      - PGID=1000
      - VERSION=docker
      - PLEX_CLAIM= #optional
    volumes:
      - /mnt/media/plex:/config
      - /mnt/media/tv:/tv
      - /mnt/media/movies:/movies
      - /mnt/media/pictures:/pictures
    restart: unless-stopped

I make use of the PUID and PGID environment variables to configure which user account this container runs as, and set it up to mount all of the media shares that Plex will serve.
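The PUID and PGID values are just the numeric user and group IDs of a host account. If you’re not sure what they are for a given user, they’re easy to look up with `id` on the command line, or with a quick Python sketch (shown with root here only because its IDs are fixed at 0 on every Linux system; substitute your own account name):

```python
import pwd

def ids_for(user: str) -> tuple[int, int]:
    """Look up the numeric (PUID, PGID) pair for a host user account."""
    entry = pwd.getpwnam(user)
    return entry.pw_uid, entry.pw_gid

print(ids_for("root"))  # -> (0, 0)
```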

Once I have added all of my services to a single docker-compose file, I use systemd to define a service that runs docker-compose up when the system boots:

[Unit]
Description=Docker Compose Services
Requires=docker.service mnt-media.mount
Wants=network-online.target
After=docker.service mnt-media.mount network-online.target

[Service]
# oneshot + RemainAfterExit keeps the unit "active" after docker-compose up -d returns
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/path/to/compose/directory
ExecStartPre=/usr/bin/docker-compose pull --quiet --parallel
ExecStart=/usr/bin/docker-compose up -d
ExecStop=/usr/bin/docker-compose down

[Install]
WantedBy=multi-user.target


This service runs at boot after the Docker daemon has started, the server has obtained an IP address, and the Drobo share that contains my media has been mounted. For more information on how I mount my media shares, see the previous post in this series. The only other part of this service that needs to be customized is the WorkingDirectory attribute, which needs to be set to the path to the directory that contains the docker-compose file.

With these files in place, I can enable my new service:

$ sudo systemctl enable docker-compose.service

reload the systemd daemon so that it picks up the new service definition:

$ sudo systemctl daemon-reload

and give it a test run:

$ sudo systemctl start docker-compose.service

At least, that’s how it’s supposed to work. When I set Plex Media Server up using this approach, it didn’t start. In fact, it didn’t even try to start because my new service required docker.service, which didn’t exist. It turns out that modern Ubuntu distributions have repackaged a bunch of common services as “Snaps”, and that those Snaps have different names than their non-Snap counterparts. In this case, docker.service had been rebranded as snap.docker.dockerd.service, which is clearly an improvement.

Lesson learned, I tweaked my service definition to use the correct identifier for the Docker daemon and tried again. This time, the service tried to run docker-compose up, but failed with a file access error:

ERROR: FileNotFoundError: [Errno 2] No such file or directory: './docker-compose.yml'

I did some digging and found that I could once again invite the “Snap” edition of Docker to The Accusing Parlor. It seems that there is some sort of incompatibility between the variant of docker-compose that the Snap includes and the version of Python that is present on my host system. The easiest solution to this problem was to scorch the earth: uninstall the Docker snap and reinstall Docker using apt.

Earth summarily scorched, my service started… well, starting, along with the Plex Docker container. Unfortunately, the actual Plex Media Server service that runs inside of the Docker container refused to come online. Every time the container started, Plex crashed while trying to initialize.

The most that I could get out of Plex was this error screen that was shown whenever I tried to load the Plex dashboard in my web browser after starting the service:

I dutifully documented my issue on the Plex forums as requested, but nobody responded. I tried switching to the plexinc/pms-docker Docker file that is officially maintained by the Plex team, but encountered the same issue.

Giving up on Docker

Frustrated, I decided to simplify the situation by installing Plex directly onto the host machine instead of trying to run it from inside of a Docker container. I downloaded the 64-bit .deb file for Ubuntu systems and installed it with dpkg.

The installer helpfully warned me that the Intel NUC platform requires a bunch of platform-specific libraries that can be obtained from Intel’s compute-runtime GitHub repo. I used the instructions on that page to install Intel Gmmlib, Intel IGC Core, Intel IGC OpenCL, Intel OpenCL, Intel OCLoc, and Intel Level Zero GPU.

Having installed those prerequisites, I re-ran the Plex installer, and it created a systemd service called plexmediaserver.service that is responsible for starting Plex Media Server at system boot. This time, everything worked as expected, and Plex came up without issue.

I never did find out why my Docker-based solution crashed on startup. In theory, the Docker container should have isolated Plex from the vagaries of the underlying hardware, making the Intel NUC prerequisites unnecessary. In practice, the fact that I was trying to poke a hole through the container to allow Plex to access the NUC’s hardware-based video transcoding acceleration capabilities may have negated that isolation.
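For what it’s worth, the usual way to poke that hole with the linuxserver images is to pass the host’s render device through to the container via a devices: entry in the compose file. Here’s a sketch of what that mapping looks like (I never confirmed whether this was the missing piece in my case):

```yaml
    # hypothetical addition to the plex service definition
    devices:
      - /dev/dri:/dev/dri   # exposes the Intel GPU (Quick Sync) to the container
```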

Either way, I had a working Plex Media Server, so I moved on.

Firewalls and Transcoder Settings

To allow clients on my home network to access Plex, I poked a hole through the firewall:

$ sudo ufw allow from &lt;home-subnet&gt; proto tcp to any port 32400

Finally, I navigated to the Transcoder Settings page in the Plex Media Server dashboard and enabled hardware acceleration. This configuration tells Plex to take advantage of the Intel Quick Sync technology that is built into my NUC, allowing it to offload transcoding tasks to the underlying hardware.


I have a good friend who runs his own home media server and NAS system. Since storage is cheap, we decided to trade storage space for offsite backups. He suggested that we use Syncthing to keep our backup folders in sync. Once again, LinuxServer.io came to the rescue. Here’s my docker-compose file:

version: "2.1"
services:
  syncthing:
    image: linuxserver/syncthing
    container_name: syncthing
    hostname: myhostnamegoeshere
    environment:
      - PUID=1001
      - PGID=1002
      - TZ=America/Toronto
    volumes:
      - /mnt/backup/config:/config
      - /mnt/backup/remote:/remote
      - /mnt/backup/local:/local
      - /mnt/backup/fileshare:/fileshare
    ports:
      - 8384:8384
      - 22000:22000/tcp
      - 22000:22000/udp
      - 21027:21027/udp
    restart: unless-stopped

I poked all of the ports through my firewall and started the container. When I brought the service online, it started a web interface on port 8384 that looked like this:

The first order of business was to set a username and password to prevent those pesky hackers from reading and changing the files on my computer. Next up, I worked my way through the section of the Syncthing docs that deals with configuring the service, exchanging device IDs with my friend along the way.

Once set up, Syncthing periodically scans any folders that I’ve opted to keep in sync with the remote device (in this case, a media server running at my friend’s house), and if it finds files that are out of sync, it transfers them to the remote device.

My friend configured his instance of Syncthing in a similar fashion, and the result is a two-way backup system that stores an offsite copy of my files at his house and vice versa.


To complete the offsite backup solution, I needed a simple way to copy the files that I want to back up over to the directory that I’ve shared via Syncthing. For this task, I chose to use Duplicati. Like Syncthing, it has been packaged into a Docker container by the LinuxServer.io team, who also provide a sample docker-compose entry that can be used to get the service running.

Once again, I poked all of the ports through my firewall and started the container. With the service up and running, I navigated to the Duplicati dashboard in my web browser and set to work configuring a backup job:

I then followed the steps in the wizard, creating a backup job that copies all of my photos from the directory that they live in to a directory that I’ve configured to share with my friend via Syncthing. The backup runs every day at 1am and automatically cleans up old backups when they are no longer relevant.

At the time of this writing, my friend has backed up 49GB of data to my home server, and I’ve sent him 105GB of photos in exchange. Thanks to Duplicati, the files on both ends are split into small chunks that are compressed and encrypted, so my data is safe from prying eyes even as it is being moved back and forth across the internet or sitting on a remote server at rest.

The entire system has been pretty much bulletproof since we set it up, automatically discovering and backing up new photos as they are added to the collection.

Wrapping Up

Jack Schofield once wrote that data doesn’t really exist unless you have at least two copies of it, and I tend to agree, at least in principle. In practice, this is the first time that I’ve taken the time to live by that rule. In addition to the remote backup that is kept on my friend’s server, I took the time to snapshot my photo collection to a USB drive that will spend the rest of its life in a safety deposit box at my bank. I intend to repeat this exercise once a year for the rest of time. Given that storage is cheap, I figure that there’s no reason not to keep redundant copies of my most irreplaceable asset: The photos that my wife and I took of my boy growing up.

Next time, we’ll continue this series by setting up Nextcloud, a self-hosted alternative to DropBox, Google Drive, iCloud, etc. I’ve got most of the work done, but have been procrastinating on the final touches. Here’s hoping that I find time to finish the project soon.

Leave a Comment

Filed under Software

Piping External Audio into Zoom

When the stay at home orders that resulted from the outbreak of the COVID-19 pandemic went into effect, the Kitchener Waterloo Amateur Radio Club (KWARC) was forced to start holding our meetings remotely.

Being a radio club and having some members who suffer from unreliable internet access at home, we were loath to move proceedings entirely to Zoom, and started holding club meetings on our VHF repeaters. In time, we realized that some of our members did not have access to a VHF radio at home or were out of range of our repeaters, and would be better served by a Zoom call.

In an effort to serve all club members equitably, we decided to combine the two technologies. Meetings would be held primarily on VHF, but we would pipe the audio from the meetings into Zoom, allowing members who couldn’t get on the air to at least listen to the proceedings.

My VHF radio, a Kenwood TM-281, tuned to local repeater VE3RCK

v1: The Hardware Based Solution

Our initial stab at a solution was hardware based. One of our club members, Patrick VA3PAF, put a spare VHF radio and his wife’s smartphone into a box, logged into the Zoom meeting on the smartphone, and used it to record the audio from the radio and send it directly into Zoom.

This approach worked well, so long as the box was far enough away from Patrick’s primary radio and other sources of interference so as not to be swamped with noise. Because it wasn’t monitored during meetings, we had a couple of problems with the phone’s battery dying or Zoom crashing that caused the audio signal to drop until Patrick could troubleshoot the problem.

v2: The Software Based Solution

In an effort to improve on the hardware-based solution, I started digging into software solutions. I realized that my primary VHF radio, a Kenwood TM-281, features a 3.5mm output jack on its back panel. I purchased a short 3.5mm male to 3.5mm male audio cable, and plugged the radio’s output into my Scarlett 2i2 audio interface. This setup allowed me to record any signal received by my radio on my computer, or to pipe that audio directly into Zoom.

My (somewhat dusty) Focusrite Scarlett 2i2 audio interface. It’s old, but an extremely reliable and versatile piece of equipment

After a little bit of testing, I realized that this setup still had a problem – it was only capable of recording audio that came out of the radio, and that audio cuts out any time I transmit. This meant that people listening on Zoom could hear everything that was happening on the repeater, except for my transmissions.

The fix for this problem was to introduce a software mixing solution. My primary computer is a Windows 10 machine, so I chose to use VB-Audio VoiceMeeter Banana, a donationware application that allows you to mix the audio from two or more devices together in software, and send the resulting signal out to some other audio device.

VoiceMeeter Banana mixing two audio signals together. Hardware Input 1 is the output from my VHF radio, while Hardware Input 2 is the microphone on my webcam

This piece of software was a total game changer for me. It allowed me to mix my webcam’s microphone in with the signal from my radio, in theory allowing the folks on Zoom to hear a perfect re-creation of what was actually happening on the repeater.

One problem remained, and that was figuring out where to send the audio to. By default, the only output devices that are available on a Windows computer are physical ones. I could send the resulting mix out to my laptop speakers, or to the output of my audio interface, but I couldn’t send it to Zoom, because Zoom is designed to listen to audio inputs.

Once again, the folks at VB-Audio came to the rescue, this time with VB-CABLE Virtual Audio Device, a software audio device that presents a virtual audio input device that is connected to a similarly named virtual audio output device via software. I could configure VoiceMeeter Banana to send the audio mix to the CABLE Input virtual device, and then tell Zoom to use the CABLE Output virtual device as a microphone.

I’ve configured Zoom to use the virtual CABLE Output audio device as a microphone, which contains the mix of my VHF radio and webcam microphone

Troubleshooting Choppy Audio

The setup described thus far worked great for the first year and a half of online KWARC meetings. One evening, I turned on my VHF radio, logged into Zoom, started the audio feed, and was immediately inundated by complaints from the folks listening on Zoom, all of whom were telling me that the audio was choppy.

I set about tweaking all of my audio settings, checking and double checking that everything was configured correctly, that none of the audio signals were being over-driven, and testing the audio signal at various points in the pipeline. After a bit of digging, I found that the issue seemed to be caused by the VB-CABLE Virtual Audio Device.

If I piped the audio from VoiceMeeter Banana out to my laptop’s speakers, the audio signal was clear as a bell. If I piped it into the CABLE Input, and monitored the corresponding CABLE Output with Zoom or recorded it with Reaper, the signal was choppy and unlistenable.

Some furious googling led me to this forum post, where the OP described the exact issue that I was having, and noted that the solution was to increase the size of the WDM Buffer.

Whenever audio is piped through a digital device or piece of software, some amount of lag is added to the signal. This lag is caused by one or more buffers – essentially queues of audio samples. The software does its best to keep some number of samples in each buffer at all times so that it can ensure smooth audio processing and output. If a buffer is bigger than it needs to be, more lag is introduced; if a buffer is too small, audio will not always be available when the output needs it, and the result will sound choppy.
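The arithmetic behind that tradeoff is straightforward: a buffer of N samples at a sample rate of R Hz holds N / R seconds of audio, which is roughly the latency it adds. A quick sketch (48 kHz is a typical Windows audio engine rate, which I’m assuming here):

```python
def buffer_latency_ms(samples: int, sample_rate_hz: int = 48_000) -> float:
    """Approximate latency (in milliseconds) added by an audio buffer."""
    return samples / sample_rate_hz * 1_000

# doubling the WDM buffer trades ~10 ms of extra lag for smoother audio
print(round(buffer_latency_ms(512), 1))   # -> 10.7
print(round(buffer_latency_ms(1024), 1))  # -> 21.3
```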

I dug into the VoiceMeeter Banana settings panel, and found that the default WDM Buffer size was 512 samples. I increased this to 1024 samples, and lo and behold, the problem was resolved!

Increasing the Buffering WDM value from 512 to 1024 solved the stuttering audio problem

Leave a Comment

Filed under Amateur Radio, Software

Replacing my Home Media Server pt. 1

One project that’s been on my to do list for quite some time now is replacing my home media server. Over the years, this machine has been migrated from one hand me down system to another, and is currently running on an old laptop that is starting to strain under the load that we put on it.

The primary duty of this machine is to run a local instance of Plex Media Server, in addition to a half dozen Docker containers that run services for everything from photo management to the various homebrew projects that I’m working on at any given time. While early iterations of the server included a RAID array for storage, more recent versions have externalized that duty to a Drobo 5N2 that simplifies the job considerably.

In this post, I’ll explain the process of setting up my replacement system. Replacing the server is a big job, so there will be at least one subsequent post that details the process of setting up Plex Media Server, NextCloud, and other useful services that I run.

Procuring the Hardware

Years ago, my wife and I ripped all of the TV series and films that we had on DVD and Bluray off to our home media server so that we could watch them as digital files. That collection has continued to grow as time goes on, and we’ve now started to add video of our son to the list of files that we want to play back from any device in the house.

As mentioned above, I use Plex Media Server to organize all of this content, and recently found out that it is capable of taking advantage of Intel Quick Sync Video, a hardware-accelerated video transcoding solution that is built into modern Intel CPUs. When using this feature, Plex offloads transcoding to the underlying hardware, dramatically lowering the amount of CPU and RAM that it needs to use to transcode video files, which in turn should increase the useful lifespan of my hardware as the size of video files that we play back continues to grow.

After a good deal of research, I settled on the Intel NUC BXNUC10i7FNHJA, an all-in-one machine that’s approximately 4″ square by 2″ tall. It contains an Intel Core i7-10710U CPU that supports Quick Sync Video, and ships with a 1TB SSD and 16GB of RAM installed.

When the machine arrived, I found that it was missing a part of the power cord.

I had an extra one kicking around, but it seemed like a strange omission to me.

When I first booted up the machine, I found that it came preinstalled with Windows 10. I had always intended to run Ubuntu Server as the OS, but figured that I may as well create a USB recovery drive with the Windows 10 license, seeing as I had already paid for it and might one day want to restore it to the hardware.

Four hours into the process of creating the recovery drive with no end in sight, I gave up on that notion, and decided to blow Windows away in favour of Ubuntu.

Installing Ubuntu Server

With the hardware ready to go, I set about getting my OS of choice installed.

I started by downloading a copy of Ubuntu Server LTS, a headless operating system that will be supported until April 2025. Because my primary PC is a Windows 10 machine, I used Powershell’s Get-FileHash command to verify the SHA-256 hash of the downloaded ISO. Finally, I converted the ISO into a bootable USB stick with an open source Windows application called Rufus.
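Get-FileHash is the Windows way; the same verification can be sketched portably in Python with hashlib. The resulting digest should match the corresponding line in the SHA256SUMS file that Ubuntu publishes alongside each ISO:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 without loading it into memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# sanity check against the well-known FIPS test vector for "abc"
assert hashlib.sha256(b"abc").hexdigest().startswith("ba7816bf")
```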

Unfortunately, every time I tried to use my newly created USB stick to install the OS, the installer crashed. Hard. After my third attempt, I decided to try a BIOS update. I found the updated firmware on Intel’s website, but it didn’t solve the problem.

After some research, I found a post on the Ubuntu support forum that suggested that I disable Intel Turbo Boost, a technology that automatically overclocks the CPU when under heavy load, so long as it is running below certain temperature and power draw thresholds. Unfortunately, this did not solve my problem.

I eventually tired of tinkering with BIOS settings and opted to try installing the Ubuntu Desktop variant of the 20.04 LTS release. This version of the OS ships with a desktop and a graphical installer that is much smarter than its Server release counterpart, and it surfaced a helpful popup that told me to deactivate RST in favour of AHCI. Having flipped that switch in the BIOS settings, I went back to the Ubuntu Server installer and it (finally) worked without issue.

Securing the System

With the operating system installed, it was time to get to work configuring and securing it. I started off by setting up a static IP address for the machine so that it would always be assigned the same address whenever it connects to our home network.

While I was playing around with the router, I configured a NameCheap Dynamic DNS hostname for our home network. I run an EdgeRouter Lite, and found some helpful instructions for configuring DDNS at the router level. Now, any traffic that goes to the subdomain that I configured will be redirected to my home IP address. In the future, I’ll be able to set up some port forwarding rules at the router that allow me to connect to the media server via SSH or to expose web interfaces for the various services that I run to any machine in the world.

Next up, I configured sshd to only accept public/private key authentication, and tightened up the ssh security configuration. I also set up a firewall (UFW), running sudo ss -ltnp to check for open ports before and after the firewall was configured. Going forward, I’ll have to explicitly poke holes through the firewall for each service that I want exposed to the network. In addition to the firewall, I set up fail2ban, a service that detects and automatically blocks brute-force login attacks against my SSH server. It can watch over other services in addition to sshd, so I may revisit its configuration at a later date.
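For reference, locking sshd down to key-only logins comes down to a handful of directives in /etc/ssh/sshd_config; this is a sketch of the relevant settings rather than my exact file:

```
# /etc/ssh/sshd_config (excerpt)
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin no
```

After changing these, restart sshd (sudo systemctl restart sshd) and confirm that you can still log in with your key from a second terminal before closing the first.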

Mounting Shared Drives

The last few iterations of the home media server have offloaded media storage duties to a Drobo 5N2. It’s a trusty NAS appliance that makes storing our files a snap. Add to that the fact that it can accept hard drives of any size, and can gracefully recover from a failed drive, and it’s a no-brainer for the home administrator. Gone are my days of cobbling together software RAID5 arrays out of scavenged drives, and I couldn’t be happier for it.

Up until now, I’ve stored everything on a single public Drobo share. One of the things that I’d like to change in this build is to split that single share up into a number of different shares, each with a particular purpose and accessible only to the users that need those files.

Since Ubuntu uses systemd to manage services, I opted to use a mount unit configuration to mount the drives at boot. Each Drobo share requires a .mount file and a corresponding .automount file in the /etc/systemd/system directory.

Here’s the .mount file for the public share that holds files that any machine connected to the network should be able to access:

  # /etc/systemd/system/mnt-media.mount
  [Unit]
  Description=Drobo Public Share

  [Mount]
  # <drobo-address> is a placeholder for the NAS's hostname or IP
  What=//<drobo-address>/Public
  Where=/mnt/media
  Type=cifs
  Options=guest,file_mode=0777,dir_mode=0777

and here’s the corresponding .automount file for that share:

  # /etc/systemd/system/mnt-media.automount
  [Unit]
  Description=Drobo Public Share

  [Automount]
  Where=/mnt/media

  [Install]
  WantedBy=multi-user.target

Together, these files cause the Drobo’s public share to be mounted at /mnt/media whenever the server boots. Because everybody can access this share, it is mounted without authentication, and all users get full read, write, and execute access to all files on it.

The .mount files for Drobo shares that require authentication to mount look very similar, except for the value of the Options key in the [Mount] section. The value of this key holds the cifs options that are specified when mounting the samba share that is exposed by the Drobo. I make use of the credentials option to pass the path of a file that holds the username and password that protect the Drobo share. This file can only be read by the root user, and the credentials in it correspond to a user account that I created on the server. Finally, I use the uid and gid cifs options to make the user account the owner of the mounted directory. Here’s an example:
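Put together, an authenticated share’s Options value ends up looking something like this (the credentials path, IDs, and modes here are placeholders, not my real values):

```
Options=credentials=/root/.drobo-credentials,uid=1001,gid=1001,file_mode=0770,dir_mode=0770
```

The credentials file itself is just two lines, username=... and password=..., owned by root with mode 0600.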


The last thing to do was to make a group called mnt, and to put all users that have the ability to access one or more Drobo shares into that group. Then, I modified the directory that I mount shares into (in my case /mnt) so that it belongs to the mnt group. You can see in the sample above that I use the cifs gid option to assign ownership of the mounted share to the mnt group, which in my case has group id 1001.

This setup was the result of much tinkering and experimentation. If you’re interested in a setup like this, I would suggest that you take a read through this post on Michl’s Tech Blog. It was extremely helpful!

In Our Next Installment

At this point, we’ve got new hardware running; an operating system installed, configured, and secured; and our file shares mounted. In my next post, I’ll document the process of getting Plex Media Server and NextCloud up and running.


Filed under Software

Resizing Images for a Digital Photo Frame

My wife recently returned to work after a year of maternity leave. I figured that she might miss being home with me and our son, so I bought her a digital photo frame for our anniversary. To seal the deal, I dug back through all of our digital photos and selected a few hundred that I felt best represent the different stages of our relationship.

The frame that I chose is pretty bare bones. After some shopping, I settled on the Aluratek ASDMPF09. It’s a 10″ frame with 4GB of internal memory and a 1024×600 pixel display.

Probably don’t buy one of these. The only redeeming thing about it is that it is incapable of connecting to the internet. God knows what a shit show that would be…

There’s not much to this device, but while researching, I found that the market leaders in this sector have gone full Internet of Shit in their offerings – Every device comes with a web service, cloud storage, and an email address. Some even require an internet connection to operate. And so I chose to stick with the low tech model in hopes of a better, more secure product, albeit with fewer bells and whistles.

What I didn’t bank on was this device’s absolute inability to rotate and resize images at display time. Here’s an example of what I mean:

The image on the left is the original. On the right, you can see the image as displayed on the digital picture frame. The colour, contrast, and pixelation are artifacts of taking a photo of the frame’s display and aren’t present in person, but the horizontal squishing is, and it looks god awful, particularly on pictures of people.

At first, I thought that the problem was the height of the image. I figured that the frame was removing horizontal lines from the image to resize it to fit on the 600px tall screen. Perhaps in doing so, it decided to remove the same number of vertical lines from the image, causing it to look unnaturally squished in the horizontal direction. That would be stupid, but also understandable.

I tried to solve for this by resizing the source image such that it had a maximum width of 1024px and a maximum height of 600px, all while respecting the aspect ratio of the original image. In practice, this meant that the resulting image was either 800x600px or 600x800px, depending on its orientation.

Unfortunately, this did not solve the problem.

After a bit of digging, I remembered that older iPhone cameras used to save time when taking photos by writing files to storage in whatever orientation the phone happened to be in when the photo was taken. To compensate, they added an EXIF attribute to the file to indicate that the photo needed to be rotated at display time. Most devices, including Windows, implicitly handle this reorientation and you never notice that it’s happening. The digital photo frame that I purchased tries and fails, leaving the image stretched in nasty ways that make it look unnatural.

We can see this EXIF re-orientation magic happening in practice by running one of the affected photos through Phil Harvey’s excellent ExifTool. It spits out all of the metadata associated with the photo, including this attribute that instructs the display device to flip the image upside down:

Orientation: Rotate 180
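That “Rotate 180” is ExifTool’s human-readable rendering of the numeric EXIF Orientation tag. As I understand the spec, the values you’re most likely to see in photos map as follows (mirrored variants omitted for brevity):

```python
# EXIF Orientation (tag 0x0112): stored value -> correction needed at display time
EXIF_ORIENTATION = {
    1: "Normal (no correction needed)",
    3: "Rotate 180",
    6: "Rotate 90 CW",
    8: "Rotate 270 CW",
}

print(EXIF_ORIENTATION[3])  # -> Rotate 180
```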

To solve the problem, I can rotate the image such that the EXIF attribute is no longer necessary, and then remove that metadata so that the digital frame does not try to modify the image on the fly at display time. I actually wrote up a solution to this problem way back in 2016 when WordPress did not properly handle the issue. If you read that post back in the day, the rest of this one is going to look eerily familiar.

Then as now, the solution is to sprinkle a little bit of ImageMagick over my photos, resizing them to the dimensions of the digital photo frame while retaining their aspect ratio, re-orienting them as necessary, and stripping any unnecessary EXIF metadata along the way. The end result is an image that the device does not have to resize or rotate at display time.

With a little bit of help from StackOverflow and the folks on the ImageMagick forums, I figured out how to do all of this in a single command:

magick.exe convert -auto-orient -strip -geometry 1024x600 input.jpg output.jpg

This operation is pretty straightforward. Let’s break it down into pieces:

  • convert: tells ImageMagick that we want to modify the input image in some way, making a copy in the process
  • -auto-orient: rotates the image according to the EXIF Orientation attribute if present, effectively undoing the iPhone’s laziness
  • -strip: Removes any unnecessary EXIF data, including the Orientation attribute that is no longer required to correctly display the image
  • -geometry widthxheight: allows us to specify the desired width and height of the output image, in this case 1024×600. By default, this option preserves the input image’s aspect ratio
  • input.jpg: is the path to the file that we want to resize
  • output.jpg: is the path to write the resized image to. Note that this operation will not modify input.jpg
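Under the hood, -geometry WxH boils down to a single scale factor: the smaller of max_width/width and max_height/height, applied to both dimensions. A quick sketch of the math:

```python
def fit_within(width: int, height: int,
               max_w: int = 1024, max_h: int = 600) -> tuple[int, int]:
    """Scale a width x height image to fit inside max_w x max_h,
    preserving the aspect ratio (and never upscaling)."""
    scale = min(max_w / width, max_h / height, 1.0)
    return round(width * scale), round(height * scale)

# a 4:3 landscape photo from a 12 MP camera
print(fit_within(4032, 3024))  # -> (800, 600)
```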

One thing that you’ll notice is that this command only resizes a single photo. Since I have an entire directory full of photos that I need to process, it would be ideal to batch them out. Unfortunately, ImageMagick’s convert utility can only operate on single files. No matter, though – I’m on Windows, so I can wrap the operation in a Powershell command that enumerates all of the files in the current directory and pipes each filename into the ImageMagick convert command for processing:

Get-ChildItem -File | Foreach {magick.exe convert -auto-orient -strip -geometry 1024x600 $_ resized\$_}

You need to run this operation from within the directory that contains all of the images that you want to process (with a resized subdirectory already created to receive the output), since the Get-ChildItem -File command lists every file in the current directory. We pipe that list into the Foreach command, which loops over every file in the list, substituting its name in for every instance of $_ in the {} braces that follow.

The result is a resized and correctly oriented copy of every image, each with its original filename, all in a directory called resized that resides within the original directory of images. One nice side effect of this operation is that the 300 or so photos that I wanted to put on the frame shrunk from 1.7GB to around 80MB. That means that I can put significantly more photos on the device than expected, which is a bonus.


Filed under Product Review, Software

Installing Ubuntu on a Raspberry Pi 400 from Windows 10

I recently picked up a Raspberry Pi 400 for my in-laws. Having gifted them many a hand-me-down laptop over the years, I was immediately struck by the simplicity of the new offering from the Raspberry Pi Foundation, and at $140 CAD, the price point couldn’t be beat.

The box that the Raspberry Pi 400 ships in, about the size of a standard shoe box.

When the Pi arrived, I continued to be impressed by the packaging. The box contains everything that you need to get started (aside from a wall outlet and an external monitor with an HDMI input), and apart from the included mouse, all components feel well made and are pleasant to use.

Setup was simple – just plug in the power cable, the monitor, and the mouse, and the machine comes to life. Like previous iterations of the Pi, the machine boots from an SD card, and it doesn’t have a hardware power switch, so it turns on just as soon as power is connected.

The entire kit set up and plugged into a spare monitor.

The SD card comes inserted into the Pi, and is flashed with Raspbian GNU/Linux 10 (buster). On first boot, it asks for some locale information and prompts you to change the password for the default pi account, after which it downloads and installs updates.

Now, my in-laws have only just started to learn basic computer skills in the past few years. I have installed Ubuntu on the laptops that we’ve given them in the past, and I wanted the new Raspberry Pi to present a familiar user interface, so I opted to purchase a 32GB SD card and flash it with Ubuntu 20.10 to ease the transition to the new machine.

The Ubuntu blog confirms that the latest release of the OS can indeed be installed on the Raspberry Pi 400, and the article links to a tutorial for flashing Ubuntu onto an SD card destined for a Raspberry Pi 4. Presumably, the internals of the two models are similar enough that the same binaries work on both.

I downloaded the Raspberry Pi Imager for Windows, launched the app, chose Ubuntu Desktop 20.10 for the Raspberry Pi 400, selected the SD card to flash, and clicked the Write button.

The Raspberry Pi Imager v1.3 for Windows, pictured writing Ubuntu Desktop 20.10 to an SD card.

One of the great things about a machine that boots from an SD card is that there’s really nothing to install. I just popped the card into the Raspberry Pi, powered it on, and it immediately booted into Ubuntu.

From there, I followed the steps on screen to configure the system, installed updates, and it was ready to go.


Filed under RaspberryPi, Software

Photo Organization Part 1: Importing Existing Files

Over the years, my wife and I have accumulated a large volume of digital photos. Nary a vacation has gone by that didn’t result in 1000+ photos of every landmark, vista, and museum artifact that we discovered during our travels.

As it turns out, the Continental Hotel from John Wick is actually a sushi bar

Unfortunately, the majority of these photos have ended up haphazardly organized in various folders on my media server, all with different directory structures and naming methodologies, which makes it difficult to lay my hands on a particular photo from some time in the past.

One of the projects that I’ve decided to tackle this year is to come up with some method by which to rein in this madness and restore some semblance of order to the realm.

I’m not sure exactly what tools or processes I will use to tackle this problem, but I figure that the best way to start is to split it up into smaller, more manageable sub-problems.

This, then, is the first in what I hope to be a series of posts about organizing my family photo collection. Subsequent posts will (hopefully) deal with a pipeline for importing new photos, establishing a reliable offsite backup solution, and maybe even experimenting with some deep learning to automatically tag photos that contain recognizable faces.

Why not use [Insert Cloud Service Here]?

“But,” I hear you protest in an exasperated tone of voice, “you could just upload all of your photos to Google Photos and let The Cloud solve this problem for you.”

While it’s true that there are a number of cloud providers that offer reasonably-priced storage solutions, and that some of them even use the metadata in your photos to impose some kind of organization solution, I have a few concerns with these products:

  1. They cost money: Nothing in life is free, at least not if you have more than a few gigabytes of it. The largest of my photo folders contains around 70GB of files, and with the recent arrival of our son, we take new photos at a heretofore unimaginable clip. I already have a media server, and storage on it is effectively free.
  2. My metadata isn’t complete/correct: Garbage in, garbage out, as they say. Most (all?) of the cloud storage solutions that I’ve seen that purport to organize your photos will do a messy job of the task if the metadata on your photos is incorrect or missing. Any tool that I use will need to correct for this problem.
  3. Google is a creep: The same is true of Facebook et al. I’d rather generate less digital data for the multinational companies that control our modern world to indiscriminately slurp up and process in hopes of getting me to click on a single ad, thank you very much. Doubly so if the storage provider in question is going to use facial recognition to tie a name to faces in my photos, especially photos of my son.

Organizing Files with Elodie

My initial inclination, when considering the problem of sorting many gigabytes of photos into a reasonably logical folder structure was to write a script that would read the EXIF data out of the file, interpret it, and use it to determine where the file belongs.

But the decrepit fire station from Ghostbusters? Actually a fire station.

In one of the better decisions that I’ve made so far this year (hey, it’s only January), I decided to take a look around GitHub to see if somebody had already written a script to do this work, and man, am I glad that I did.

As with most things that seem simple, it turns out that the task of reading EXIF data can get really complicated in a hurry. There are a bunch of different historical format differences, every camera manufacturer implements some number of custom extensions to the format, etc.

Enter Elodie

Elodie is a cutely-named tool with an even cuter mascot that uses exiftool to read the metadata out of your photos, and then sorts them into a configurable folder structure based on that data. If your camera or phone wrote GPS coordinates indicating where the photo was taken, Elodie can even query the MapQuest API to translate those coordinates into a human-readable place name that is added to the directory structure.

The documentation is comprehensive, albeit brief, but I’ll include a handful of the commands that I used in this post just to demonstrate the workflow that I ended up with.

There are basically two major operations in Elodie: import and update. The former is used to move pictures from some source directory into the target directory, reading their metadata, renaming them, and organizing them within a configurable directory structure along the way. The latter, meanwhile, is essentially used to correct mistakes that were made during the import process. It lets you correct the date, location, or album metadata for previously imported files, and appropriately re-sorts them into the directory hierarchy.

The command for importing existing files into your collection is simple:

~/elodie $ ./ import --trash --album-from-folder --destination=/mnt/media/Pictures/ /mnt/media/Pictures.old/Montreal\ Trip/

In this case, I’m importing vacation photos from /mnt/media/Pictures.old/Montreal Trip. The destination folder is /mnt/media/Pictures, and I’m using the --album-from-folder option to tell Elodie that I want it to keep all of the pictures in this batch together in a folder called Montreal Trip in the destination directory. We went on this trip in September of 2010, so the resulting folder structure looks like this:

├─ media/
│  ├─ Pictures/
│  │  ├─ 2010/
│  │  │  ├─ September/
│  │  │  │  ├─ Montreal Trip/
│  │  │  │  │  ├─ 2010-09-08_19-29-44-dsc00279.jpg
│  │  │  │  │  ├─ 2010-09-09_20-15-44-dsc00346.jpg
│  │  │  │  │  ├─ ...

There may be other pictures that were taken in September of 2010 in the September folder, but they won’t be sorted into the Montreal Trip folder unless they are marked as being a part of the Montreal Trip album.

It should be noted at this point that Elodie goes to great lengths to avoid maintaining a database of imported files. Instead, it reads metadata from and writes metadata to the image files that it is organizing. This ensures that the resulting directory structure is as cross-platform and broadly compatible as possible without locking your photo collection into a proprietary file format.

The Men in Black Headquarters at 504 Battery Drive

Elodie does offer one database-lite feature that helps to detect bit rot: The generate-db operation records a cryptographic hash of every photo in your collection into a file. Months or years down the road, you can check if any of the files in your collection have become corrupted by running the verify operation. This will recompute the hashes of all of the files in your collection, compare them against the previously recorded values, and let you know if anything has changed.
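The idea behind generate-db and verify is simple enough to sketch in a few lines of Python. To be clear, this is not Elodie’s actual implementation or on-disk format – the JSON manifest and the function names below are my own invention, just to show the shape of the feature:

```python
import hashlib
import json
from pathlib import Path

def file_hash(path: Path) -> str:
    """SHA-256 of a file's contents, read in chunks to handle large photos."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def generate_db(root: Path, db: Path) -> None:
    """Record a hash for every file under root (roughly Elodie's generate-db)."""
    hashes = {str(p.relative_to(root)): file_hash(p)
              for p in root.rglob("*") if p.is_file()}
    db.write_text(json.dumps(hashes, indent=2))

def verify(root: Path, db: Path) -> list[str]:
    """Return the files whose current hash no longer matches the recorded one."""
    recorded = json.loads(db.read_text())
    return [name for name, digest in recorded.items()
            if file_hash(root / name) != digest]
```

Run generate_db once after an import, stash the manifest somewhere safe, and a later verify call tells you exactly which files have rotted.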

One place where not having a database falls short is if a small handful of images within a directory that contains hundreds or thousands of vacation photos have incorrect or missing EXIF data. In this case, it’s possible for those photos to be written to the wrong place in the target directory structure, and if you don’t catch the problem during the import, you’re not likely to find and fix the mistake. If Elodie were to maintain a database that tracked the source and destination locations for every imported photo, these mistakes would be easy to find. This in turn means that importing a large existing photo collection with Elodie becomes a job that requires human supervision.

Here’s a brief rundown of some of the issues that I ran into while importing a large collection of existing files:

  • Missing metadata: Some photos, particularly those taken by pre-smartphone cameras, didn’t have the date on which the photo was taken in their EXIF data. When it encounters this problem, Elodie falls back to the earlier of the file’s created or modified dates. Because of the way that I managed my import, all of these files ended up in Pictures/2021/January/Unknown Location/, but if you didn’t accidentally overwrite the file created date on all of your photos as a part of your import process, Elodie may put them into an unexpected location in your target directory tree.

    If you happen to be importing an album (i.e. a bunch of photos that were taken as a part of a named event) and your target directory structure includes the album name, you can find the improperly organized photos by running find /path/to/photos -type d -name "album name" -print to find directories in your collection that have the same name as the album. Once found, you can use Elodie’s update operation to fix the problem:
    ~/elodie $ ./ update --time="2014-10-22" --album="album name" /path/to/photos
  • Incorrect metadata: In many ways, incorrect metadata is worse than missing metadata, because it causes photos to be organized into unexpected locations. As far as I can tell, the cause of this problem was a pre-smartphone camera that had its date and time set incorrectly. Remember when you used to have to set the date and time on all of your electronics? Boy, that sucked.

    You’ll know this is the problem if you are importing photos from a vacation that you took in 2012, but they’re inexplicably being put into the 2008 directory. In this case, you can use exiftool, the underlying library that Elodie uses to read and write metadata, to add or subtract some number of years, months, days, or hours to or from the dates recorded in the photos’ EXIF data.
  • Geotag lookup doesn’t appear to work: As mentioned above, Elodie has the ability to convert GPS coordinates found in the EXIF data of photos into human-readable place names by way of a call to the MapQuest API. This, of course, only works if your camera was aware of its location at the time that the photo was taken, which wasn’t really a thing in the pre-smartphone era.

    This isn’t really a problem for vacations, as I can import all photos from a trip into an album, and that album name will be used in the folder structure that Elodie creates. For example, if you import photos from a folder called Egypt using the --album-from-folder option, they’ll end up in a directory structure like this: 2012/March/Egypt/.

    It does, however, get annoying for photos that were taken closer to home with a non-GPS-aware camera. These all get sorted into a year/month/Unknown Location/ directory. I can’t find anything in the Elodie docs or source code that allows this behaviour to be changed. I would rather that these photos end up in the root of the year/month/ folder, because I think that the extra Unknown Location directory is, well, less than helpful, but I recognize that this is a matter of preference. For now, I think I’ll solve this problem by writing a quick script to move these photos as I see fit.
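The quick script in question might look something like the sketch below. It assumes the year/month/Unknown Location layout described above, and it defaults to a dry run so that you can preview the moves before committing to them:

```python
from pathlib import Path

def flatten_unknown_locations(root: Path, dry_run: bool = True) -> list[tuple[Path, Path]]:
    """Move photos out of year/month/Unknown Location/ up into year/month/."""
    moves = []
    for unknown in sorted(root.glob("*/*/Unknown Location")):
        for photo in sorted(unknown.iterdir()):
            target = unknown.parent / photo.name
            if target.exists():
                continue  # never clobber an existing file
            moves.append((photo, target))
            if not dry_run:
                photo.rename(target)
        if not dry_run and not any(unknown.iterdir()):
            unknown.rmdir()  # remove the now-empty Unknown Location folder
    return moves
```

Calling it with dry_run=True just returns the list of (source, destination) pairs; run it again with dry_run=False once the list looks right.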

What’s next?

Even with the help of a tool like Elodie, reorganizing tens of gigabytes worth of photos is not a quick task. I can hear the fans on my Drobo humming, and am only too aware that this kind of mass file migration is a great way to find the breaking point of old hard drives.

This isn’t from a movie. It’s just a cool shot looking North from the top of the Freedom Tower

Once I’m done importing all of the pictures that I already have, I’ll move on to figuring out an import pipeline for photos that I haven’t yet taken. Off the top of my head, it needs to be mostly automatic, work well with the iPhones that my wife and I use to take most photos these days, and should ideally email me the Elodie log so that I can see what photos ended up where and whether or not I need to manually correct any mistakes.

I’m hopeful that once I get around to importing photos that were taken with a smartphone that has internet time and some notion of its location, the metadata problems that I catalogued above will become the exception instead of the rule.

Once I figure out that next step, I’ll write about it here. In the meantime, go organize all of those photos that are cluttering up your media server. You’ll feel better once you do. I promise.


Filed under Software

CAT Control with N1MM+ and a Yaesu FT-450D

When participating in amateur radio contests, my logging software of choice is N1MM+. This tidy little logger is highly optimized for contesting, automatically updating its user interface to prompt the user for the information required to log a valid contact.

One major quality of life improvement for me has been rigging up N1MM+ to talk to my HF rig via CAT control. This allows the logging software to automatically transmit pre-recorded macros on my behalf at the appropriate times during a contact, which saves me from having to yell my callsign over and over again when trying to break through a pileup.

Because every radio is different, configuring N1MM+ to control your rig can be a bit of a bear. Below are the steps that I followed to get things working:

Tell N1MM+ What Kind of Radio You Have

From the main window of N1MM+, select Configure Ports, Mode Control, Winkey, etc… from the Config menu.

In the Configurer window that appears, activate the Hardware tab. Select the COM port that your radio is attached to from the Port dropdown, and the make and model of your radio from the Radio dropdown.

Next, click on the Set button under the Details header. In the window that pops up, select the baud rate of the serial connection with your radio from the Speed dropdown.

Click the OK button twice to dismiss both windows and navigate back to the main window of N1MM+.

Customize the CAT Commands that N1MM+ Sends to your Radio

From the Config menu, select Change CW/SSB/Digital Function Key Definitions > Change SSB Function Key Definitions. In the SSB Message Editor window that appears, you can edit the contents of the config file that controls the CAT commands that N1MM+ sends to your radio at different stages of the contact.

When running (i.e. sitting on a particular frequency calling CQ and waiting for other operators to call me back), I execute the VM1TX function on my Yaesu FT-450D, which transmits a pre-recorded macro that says something like “CQ Contest CQ Contest, Victor Alpha Three Juliet Foxtrot Zulu”.

To execute this command, I have to configure N1MM+ to execute the {CAT1ASC PB7;} command. CAT1ASC tells N1MM+ to send an ASCII command down the serial connection, and PB7; tells my rig to execute the VM1TX function, which transmits the pre-recorded macro.

Similarly, when searching and pouncing (i.e. tooling around the band looking for other operators who are calling CQ), I execute the VM2TX function on my radio, which transmits a different pre-recorded macro that says my callsign. I use this when answering another operator and waiting for them to acknowledge me.

Executing this command is much the same as the previous. I configure N1MM+ to execute the {CAT1ASC PB8;} command, which tells my rig to execute the VM2TX function, transmitting the pre-recorded macro.

For reference, here’s the contents of my entire SSB Message Editor config file:

# SSB Function Key File
# Edits may be necessary before using this file
# Use Ctrl+O in the program to set the Operator callsign
#   RUN Messages
F2 Exch,{OPERATOR}\CqwwExchange.wav
F3 TNX,{OPERATOR}\Thanks.wav
# Add "!" to the F5 message if you are using voicing of callsigns 
F5 His Call,
F6 Spare,
F8 Agn?,{OPERATOR}\AllAgain.wav
F9 Zone?,{OPERATOR}\ZoneQuery.wav
F10 Spare,
F11 Spare,
F12 Wipe,{WIPE}
#   S&P Messages
# "&" doubled, displays one "&" in the button label
F2 Exch,{OPERATOR}\S&PExchange.wav
F3 Spare,
# Add "!" to the F5 message if you are using voicing of callsigns 
F5 His Call,
F6 Spare,
F7 Rpt Exch,{OPERATOR}\RepeatExchange.wav
F8 Agn?,{OPERATOR}\AllAgain.wav
F9 Zone,{OPERATOR}\RepeatZone.wav
F10 Spare,
F11 Spare,
F12 Wipe,{WIPE}

Note of course that unless you also run a Yaesu FT-450D, lines 10, 13, 28, and 31 will need to be customized to send CAT commands appropriate to your radio.

Reading through the file may also suggest to you that there is more than one way to skin this cat; indeed, it is possible to configure N1MM+ to key your radio and then play a WAV file from your computer, assuming that you pipe the audio from your computer into your rig. This can be a solution for radios that don’t allow you to record voice macros, or for operators who want to use a wider range of macros than their radio supports.

Good luck and happy contesting!


Filed under Amateur Radio, Software

CAT Control from Log4OM 1.x Using hamlib

In a previous post, I wrote about using hamlib to control my Yaesu FT-450D from the Windows command line.

This time, we’ll look at integrating hamlib with Log4OM to achieve CAT control from within the logging software, primarily so that I don’t have to record the frequency whenever I enter a new QSO.

If you haven’t already installed hamlib, you’ll want to follow the instructions in the previous post. If you can run the rigctl examples in that post, you should be good to go.

Configuring Log4OM

First, we need to tell Log4OM to use hamlib. Unlike N1MM+, Log4OM does not come with the ability to talk to your radio, and it needs some help to do so.

From the Settings menu in Log4OM, choose Options, and in the dialog box that appears, select the Cat & Cluster tab. We only care about two controls on this tab:

  • Under the CAT SOFTWARE heading, choose hamlib
  • Under the CAT & Cluster heading, select the Open CAT on program start checkbox

Click on the big floppy disk icon in the bottom right corner to close the dialog box.

Back on the main screen of Log4OM, click on the icon that looks like a pair of headphones in the toolbar:

Why it’s a pair of headphones is beyond me. While I do wear headphones while using my HF radio, they don’t have anything to do with CAT control.

Anyway, clicking on that button will open the Log4OM Cat dialog box, where you can configure CAT integration with your radio. There are three settings that you want to change in this window:

  • Select the make and model of your rig from the RIG Model dropdown box. My primary HF radio is a Yaesu FT-450D, and I use the Yaesu FT-450 0.22.1 Beta | 127 profile.
  • Select the COM port that your radio is connected to. My radio typically connects to COM4, but that can change if I plug its serial cable into a different USB port
  • Choose the appropriate Baud Rate for your radio. My radio is set to 9600 baud, but this can be changed in its menu. The radio and Log4OM have to be expecting the same Baud Rate for communication to be successful

Once configured, press the Open button in the bottom right hand corner of the dialog box to test and save your settings. If the CAT Status indicator in the bottom left corner of the dialog box turns green, you can close the box.

So What’s the Point?

Once you configure CAT control, Log4OM will communicate with your radio as you use it. Most notably, the frequency that your radio is tuned to will appear in the top right-hand corner of the logger, and Log4OM will automatically record that frequency in any new QSOs that you enter.

CAT control also allows Log4OM to change your radio’s transmit/receive frequency, and to activate the radio’s transmitter. If you connect your radio’s audio input and output to your computer, this makes it possible to play pre-recorded audio snippets from the logger, which can be very useful while contesting.


Filed under Amateur Radio, Software

Using ImageMagick to Prepare Images for Upload on Windows

A few years ago, I wrote a short post detailing how to automatically rotate, resize, and strip EXIF data from the images that I upload to this website.

At the time, I was working on Linux, so the instructions in that article target Linux-based operating systems. Over the intervening years, I’ve switched back to running Windows 10 on my home laptop, so I figured it was time to update my original instructions to target my new platform of choice.

Installing ImageMagick

One of the reasons that I now run Windows at home (or at least one of the reasons why I don’t mind running Windows at home as much as I once did) is that the Windows command line experience has improved by leaps and bounds over the past few years.

For command line goodness in Windows, I run PowerShell inside of ConEmu, and I use Chocolatey to install and manage command line utilities.

Luckily ImageMagick is available for Windows, and you can install it via Chocolatey as follows:

$ choco install imagemagick.tool

Note that I’m purposefully installing the portable release here, because it includes all of the command line tools in addition to the main ImageMagick GUI application.

The installer modifies your PATH environment variable, so you’ll have to run refreshenv once it’s complete.

Modifying Images with Mogrify

As in the original tutorial, I still use mogrify to prepare my images for upload.

Start by copying the images that you want to use into a temporary folder. Mogrify is going to modify them in place, so you definitely don’t want to work against the original images.

Next, run the following command from PowerShell/cmd in the temporary folder:

mogrify -auto-orient -resize 584x438 -strip -quality 85% *.jpg

This command will re-orient and resize your images, strip any EXIF data from them, and re-encode them as JPEGs at 85% quality. The full manpage for mogrify can be found on the ImageMagick website.
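If you do this often, the copy-then-mogrify dance is easy to automate. Here’s a rough Python sketch; the staging folder name and helper functions are my own, and the mogrify flags are the ones from the command above:

```python
import shutil
import subprocess
from pathlib import Path

# The exact flags from the mogrify command above
MOGRIFY = ["mogrify", "-auto-orient", "-resize", "584x438",
           "-strip", "-quality", "85%"]

def stage_copies(source: Path) -> Path:
    """Copy the originals into a scratch folder, since mogrify edits in place."""
    staging = source / "upload"
    staging.mkdir(exist_ok=True)
    for img in source.glob("*.jpg"):
        shutil.copy2(img, staging / img.name)
    return staging

def prepare_for_upload(source: Path) -> None:
    staging = stage_copies(source)
    files = [str(p) for p in staging.glob("*.jpg")]
    subprocess.run(MOGRIFY + files, check=True)
```

Because only the copies in the upload folder are touched, the originals are never at risk.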


Filed under Software

Using hamlib to Control my Yaesu FT-450D

A couple of years ago, I became VA3JFZ after studying for and passing my amateur radio exam. Since then, I have been building out my shack (ham radio enthusiast lingo for the place where you keep your rigs, er… radios).

As my setup has matured, I’ve started to look for interesting ways to interconnect my equipment. Most amateur radio operators use software packages called loggers to keep track of who they’ve talked to when on the air. I use two different loggers: Log4OM is my everyday driver, and N1MM+ is for contesting.

Some of my recent contacts, or QSOs, as recorded in Log4OM

While contesting, I got used to N1MM+ automatically reading the frequency from my HF (that’s high frequency) radio, making for one less thing that I have to enter into the logger as I work contacts. While trying to figure out how to get Log4OM to do the same thing, I stumbled on an open source project called hamlib.

You see, while most modern rigs provide some form of CAT (that’s computer aided transceiver) control via an RS-232 serial port, every manufacturer’s radio responds to a slightly different set of commands. The goal of the hamlib project is to create a common interface that any piece of software can use to talk to all kinds of radios without having to re-implement all of their individual peculiarities.

After downloading release 3.3 and running the exe file, I added the C:\Users\Jonathan Fritz\AppData\Roaming\LogOM\hamlib directory to my PATH and opened PowerShell, where the following command started an interactive terminal:

$ rigctl --m 127 --r COM3 --serial-speed=9600

Let’s break down the arguments to this command:

  • rigctl is the name of the program, pronounced “rig control”
  • --m 127 tells rigctl that my radio is a Yaesu FT-450D
  • --r COM3 says that my radio is connected to the COM3 port; and
  • --serial-speed=9600 tells it that my radio expects serial commands at a rate of 9600 baud.

It’s worth noting that your radio might appear on a different COM port when connected to your computer via an RS-232-to-USB cable, and that you may need to adjust the baud rate of the serial connection to match the settings in your rig’s config menu.

You can find out what COM port your radio is connected to in Windows > Control Panel > Device Manager

Once you’ve started rigctl, there are a few interesting commands that you can run.

Get the frequency of the radio:

Rig command: f
Frequency: 7301000

Get the mode that the radio is in:

Rig command: m
Mode: LSB
Passband: 3000

Ok, that’s a neat party trick, but what’s the point? Well, rigctl can also be used to change your radio’s settings, and can be run in a non-interactive mode where commands are read in from a file.

Non-interactive mode

I started by writing my commands out to a file:

$ echo "M LSB 3000`r`nF 7150000" | Out-File -Encoding ascii 40m.txt

Once again, let’s break this command down:

  • echo emits whatever comes after it (in PowerShell, it’s an alias for Write-Output)
  • M LSB 3000 tells the radio to set the mode to lower sideband with a passband of 3000Hz
  • `r`n is PowerShell’s escape sequence for a Windows line break, which separates the two commands from one another (note the backticks; PowerShell doesn’t treat \r\n specially)
  • F 7150000 tells the radio to set the frequency to 7.150MHz, the middle of the 40m band
  • Out-File -Encoding ascii 40m.txt writes the string to a file called 40m.txt as plain ASCII (Windows PowerShell’s > redirection operator writes UTF-16 by default, which rigctl won’t understand)

The result is a file called 40m.txt containing two commands that will set the radio to LSB mode and set the frequency to 7.150MHz.

Now, we can execute those two commands by running this command:

$ Get-Content 40m.txt | rigctl --m 127 --r COM3 --serial-speed=9600 -

The rigctl arguments are the same ones that we used to open the interactive terminal above, with two additions:

  • - tells rigctl to read its commands from stdin, letting us pipe them in from a file
  • Get-Content 40m.txt reads the contents of the file, and the pipe feeds them into rigctl line by line (PowerShell reserves the < operator and doesn’t support Unix-style input redirection)

Running this command will set the radio’s mode and frequency, initializing it for operations on the 40m band.
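Since I’ll eventually want a preset file like this for every band I operate, the command files are easy to generate programmatically. Here’s a sketch; the 20m and 80m entries are just my own mid-band defaults, not anything rigctl prescribes:

```python
from pathlib import Path

# Mode, passband (Hz), and a mid-band frequency (Hz) for each band.
# The 40m entry matches the file built above; 20m and 80m are my own picks.
BANDS = {
    "40m": ("LSB", 3000, 7150000),
    "20m": ("USB", 3000, 14200000),
    "80m": ("LSB", 3000, 3750000),
}

def write_band_files(folder: Path) -> None:
    for band, (mode, passband, freq) in BANDS.items():
        commands = f"M {mode} {passband}\r\nF {freq}\r\n"
        # Plain ASCII with Windows line endings, since rigctl reads the
        # file byte-for-byte from stdin
        (folder / f"{band}.txt").write_bytes(commands.encode("ascii"))
```

After running it once, jumping to any band is just a matter of piping the matching file into rigctl.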

The rigctl manual contains a bunch of other really interesting commands, including the ability to activate the rig’s PTT (push to talk) switch, which could be used to write a script that puts the radio into transmit mode before playing a pre-recorded message. That sounds like a very useful feature for contesting.
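Such a script might look something like this sketch, which keys the transmitter with rigctl’s T (set PTT) command, plays a WAV file, and then unkeys. The choice of ffplay as the audio player is an assumption on my part; substitute whatever command-line player you have handy:

```python
import subprocess

def rigctl_cmd(*words: str) -> list[str]:
    """A one-shot rigctl invocation using the same connection arguments as above."""
    return ["rigctl", "--m", "127", "--r", "COM3", "--serial-speed=9600", *words]

def transmit_wav(wav_path: str) -> None:
    subprocess.run(rigctl_cmd("T", "1"), check=True)  # key the transmitter
    try:
        # ffplay blocks until playback finishes; any CLI audio player works
        subprocess.run(["ffplay", "-nodisp", "-autoexit", wav_path], check=True)
    finally:
        subprocess.run(rigctl_cmd("T", "0"), check=True)  # always unkey
```

The try/finally is the important part: whatever happens during playback, the rig gets taken out of transmit mode.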

Finally, if there’s something that your radio can do that rigctl can’t, you can always use the w command to send CAT control strings directly to the rig. The control strings for most rigs can be found on the manufacturer’s website.

73 (that’s amateur radio speak for “best regards”), and enjoy your newfound power.


Filed under Amateur Radio, Software