Arduino Types and Use Cases

Arduinos come in all shapes and sizes, and it can be intimidating to choose between them. This post should clear up some of the confusion. It is based on my experience and opinion, having spent time figuring out what works and what doesn't in different situations.

How to use Logic Level Shifters

If you need to use an Arduino that outputs 3.3v logic in a circuit that requires 5v logic signals, you can use a component known as a logic level shifter.

4-channel logic level shifter

I am using this component in circuits controlling addressable LED strips. A NodeMCU ESP8266 chip (which outputs 3.3v logic) can then be used to drive 5v logic signals to the LED strip.

This circuit requires:

  1. A logic level shifter to shift 3.3v logic signals to 5v.
  2. A 3.3v and 5v reference voltage to be applied across its pins.
  3. A DC voltage converter to derive 3.3v from whatever your power supply generates. I only found out recently that the NodeMCU (and Arduinos) have a built-in DC voltage converter: you can connect any voltage from 3.3-12V to the chip's Vin and GND pins, and the built-in converter steps down the voltage to run the chip! This is great because we do not need our own voltage converter. Using a 5v power supply, we can connect the NodeMCU's Vin to 5v, and the NodeMCU provides 3.3v on its 3.3v pin. Voltage converter built in! Awesome!

How to wire LLS

The shifter has a high voltage and a low voltage side. LV1, LV2, LV3 and LV4 take in low voltage signals that you want to convert. You can input 4 independent signals to be stepped up to 5v. HV1 to HV4 are the corresponding output pins.

LV and GND on the low voltage side require the expected input signal voltage to be applied to them. In our case, you would connect the Arduino's 3.3v pin and an adjacent GND pin to those pins, respectively.

HV and GND on the high voltage side require the "high" potential to be applied to them. In our case that is +5v and ground, respectively.

The component then converts all incoming LV signals to the supplied HV voltage.

As we are working with different voltages here, it is easy to mix them up and damage your components. Be very careful about connecting your wires correctly the first time.

How to get around not using LLS

Avoid using logic level shifters if possible. They complicate your circuit and add room for error. How? A few ways:

  • If you need to control 5V logic, get a controller that runs on 5v and uses 5v logic on its pins. That way you don't need to work with different voltages and there's no need to convert logic signals.
  • (for LED controller projects): Use RGB adapters. These adapters take a low voltage PWM signal (say, 5v) and a target voltage (12V) on one side and automatically step the signal up to the required voltage. They are essentially compact circuits that integrate the DC voltage converter and transistors, which is a lot easier than creating a custom circuit. They are made to step up the PWM signal voltage to control common anode LED strips (which run on 12V), and make for a much neater and simpler circuit than wiring your own MOSFETs.

Hosting a Jekyll Blog on Raspberry Pi (using Docker)

In an attempt to self-host this blog, I have built an ARM/RPi compatible Docker image for Jekyll. You can build it yourself by running docker build -t jekyll-rpi . or download my image from Dockerhub using docker pull danobot/jekyll-rpi.

The image is 1GB in size and any suggestions to reduce this file size would be appreciated. Leave a comment or edit this page using the link near the post title to submit your changes.
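One hedged suggestion for trimming the image (a sketch only, untested against this exact build): layers are additive, so files deleted in a later RUN still occupy space in the earlier layer. Chaining the cmake download cleanup, build, install and source-tree removal into a single RUN keeps the build artifacts out of the final image entirely:

```dockerfile
# Sketch: build, install and clean up cmake in one layer so the source
# tree and build artifacts never persist in the image
RUN tar xzf cmake-3.10.1.tar.gz && rm cmake-3.10.1.tar.gz \
  && cd cmake-3.10.1 && ./configure --prefix=/opt/cmake && make && make install \
  && cd .. && rm -rf cmake-3.10.1
```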

Dockerfile (if you want to build yourself)

FROM arm32v7/ruby:2.4.2-jessie

RUN apt-get update \
  && apt-get install -y --no-install-recommends \
    nodejs \
    python-pygments \
  && apt-get clean \
  && rm -rf /var/lib/apt/lists/
RUN wget
RUN tar xzf cmake-3.10.1.tar.gz && rm cmake-3.10.1.tar.gz

RUN cd cmake-3.10.1 && ls && ./configure --prefix=/opt/cmake  && make
RUN  cd cmake-3.10.1 &&  make install

RUN gem install \
  github-pages \
  jekyll \
  jekyll-redirect-from \
  kramdown \
  rdiscount
RUN cd /bin && ln -s /opt/cmake/bin/cmake cmake

ENV JEKYLL_ENV production

This image packages jekyll, allowing you to run jekyll build without having to set up ruby, bundler, gem and all that good Ruby stuff1 (which can get incredibly frustrating).

We can run this image using the following command:

docker run -it --name jekyll-build -v /home/pi/repos/blog:/src danobot/jekyll-rpi

We then need to run bundle install inside the container before we can use it.

docker exec jekyll-build bundle install

We can now use our image:

docker exec -e JEKYLL_ENV=production jekyll-build jekyll build


Now that we’ve got our jekyll-build container ready to build our jekyll blog, we need a way to serve the generated HTML. Jekyll stores the generated site in the _site directory. All we need to do is set up an nginx web server to serve that directory.

Create a docker-compose.yaml file with the following contents:

version: '3.3'
services:
  blog:                    # service name is arbitrary
    container_name: blog-serve
    image: lroguet/rpi-nginx:latest
    ports:
      - 80:80
    volumes:
      - ~/repos/blog/_site:/var/www/html

If you followed my previous tutorial on how to set up SSL and a reverse proxy, then add the labels, networks and expose section as explained in the linked post. This configures your reverse proxy to route internet traffic to your Jekyll webserver container.

Automated Daily Builds

The following script pulls your latest repository changes and rebuilds your _site directory. It starts up your jekyll-build container, regenerates your blog, and then shuts the container down again (as it's only required to generate the static HTML).


#!/bin/bash
cd /home/pi/repos/blog    # run from the blog repository

git pull

docker stop blog-serve
sudo rm -rf _site

docker start jekyll-build
docker exec -e JEKYLL_ENV=production jekyll-build jekyll build
docker stop jekyll-build

docker start blog-serve

You can add this script to a Cron job by typing crontab -e and adding the following line to it:

0 2 * * * /bin/bash /home/pi/repos/blog/
  1. I find setting up Ruby/Rails development environments an absolute pain in the butt. Hence the docker image. ↩︎

Automating Linux File Backups using Rsync, Bash script and Cron

This is a script that copies directories from A to B. It does not compress directories into an archive, though you are welcome to adapt your script using snippets from this post.

The following is a bash script that mounts an external storage device for you (given the /dev/sda1 device name), copies files contained in a SOURCE directory to a DESTINATION directory. You can specify files to ignore in a separate rsync-ignore.txt file. Check out this post for the various ways you can exclude files with rsync.


#!/bin/bash
# Example values; adjust these to your setup
DEVICE=/dev/sda1              # external storage device
MOUNTDIR=/mnt/backup          # mount point for the device
SOURCE=/home/pi/data/         # directory to back up
DESTINATION=$MOUNTDIR         # target directory on the device

DIR=$(ls -A $SOURCE)          # non-empty if SOURCE contains files
#echo "DIR: $DIR"

if [ -n "$DIR" ]; then

  if [ $(mount | grep -c $MOUNTDIR) != 1 ]; then
    echo "Mounting $DEVICE"
    mount -t exfat $DEVICE $MOUNTDIR || exit 1
    echo "$MOUNTDIR is now mounted"
  else
    echo "$MOUNTDIR already mounted"
  fi

  echo "Commencing copy of files"
  rsync -ahvWi --exclude-from='rsync-ignore.txt' --progress $SOURCE $DESTINATION
  umount $MOUNTDIR
else
  echo "The directory is empty."
fi

You can find out where your device is mounted by running tail -f /var/log/syslog and checking the mount location log entry when you plug in your storage device.
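The mount-detection step in the script above can also be sketched in isolation. This variant checks /proc/mounts rather than parsing the output of mount; the MOUNTDIR value here is just illustrative:

```shell
#!/bin/sh
# Check whether a directory is currently a mount point by looking it up
# in /proc/mounts (fields are space separated: device mountpoint fstype ...)
MOUNTDIR=/
if grep -qs " $MOUNTDIR " /proc/mounts; then
  echo "$MOUNTDIR is mounted"
else
  echo "$MOUNTDIR is not mounted"
fi
```

Matching on the surrounding spaces avoids false positives from mount points that merely share a prefix (e.g. /mnt vs /mnt/backup).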

rsync-ignore.txt: Add and remove file types to exclude here. This file must be in the same directory as your bash script.


Automatic execution using Crontab

Type crontab -e, and paste this:

0 2 * * * /bin/bash /home/pi/

This will run your script daily at 2 am.

Spotify Button

Wouldn’t it be great if all you had to do to save a song to your Spotify account was smack a big red buzzer button when you hear one you like? The currently playing song is added to a playlist on your Spotify account so you can listen to it again later!


It happens all too often when I have people over that there is a song playing on Spotify that I really like. In that moment it would be nice (and probably a lot of fun) to be able to smack a button on the table which instantly saves that song to a playlist. I have also been looking for an opportunity to write some backend Javascript in NodeJS and dockerize a NodeJS application server. This project presents the perfect opportunity to try these things out.



  • Small box (~10cm by 10cm) with a large (>60mm) emergency buzzer button
  • (probably) Small OLED screen to display song name and artist
  • (possibly) Battery operated
  • (optional) LED Matrix display for showing the currently playing song name and artist



  • NodeJS backend service built with Express1
  • Communicate with Spotify API to fetch song information
  • Keep track of “buzzed” songs by adding them to a playlist.
  • Respond to “button pressed” events by adding currently playing song to a playlist.
  • Send a websocket or MQTT message2 at the beginning of a new song to update the LED display.

Button Firmware

  • Send a simple “button pressed” API request to Express backend
  • provide visual feedback to show button press has been registered
  • Open a websocket or MQTT connection to backend and listen for song change messages and update the LED display when a message is received.

Backend Features

I got a little carried away with the backend and came up with a web interface with the following features (in development):

Unfortunately, I developed this with the assumption that the Spotify API would allow adding new songs to the play queue. This is not possible yet but the feature seems to be in development.

Auto DJ
Specify one or more playlists on your Spotify account and Auto DJ will add a song from those playlists every couple of songs. This is useful for mixing in the occasional “sing along” song. Just create a playlist of sing-along songs and Auto DJ will automatically insert them into the mix at bearable intervals, without duplicates!
Party Mode
Enter other Spotify usernames and let your friends add a playlist of their own to the mix! The server keeps track of played songs to ensure there are no duplicates. This keeps everyone happy while minimizing the time spent selecting music. Plus, if you hear a song you like, smack that button and it is added to a public playlist accessible to everyone.
Normal Distribution Player
Kind of like intelligent shuffle. The randomness of this shuffle mode is based on a normal distribution. We start by sorting the playlist from old to new songs. You can then specify the mean and variance of the normal distribution curve. For example, if you want to hear predominantly songs that you recently discovered, you would place the mean nearer to 100%. Say there are 100 songs in your playlist, sorted from old to new. Using the normal distribution, we can generate a series of numbers clustered around the mean of, say, 80 percent. The next song played is determined by the output of the normal distribution: if the output is 75, we play the song in position 75/100. Played songs are removed from the list, so the next song will be X% along a scale of 1 to 99, where X is the next number generated by the distribution. The benefit of the normal distribution is that you won’t be bombarded with all the latest songs (making you despise them after a while); it adds the occasional “olderish” song into the mix while playing mostly new material.
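The selection step described above can be sketched in a few lines of awk. This is a hypothetical illustration, not the actual backend code: it draws one index from a normal distribution via the Box-Muller transform and clamps it to the playlist range. The mean, sd and n values are example parameters:

```shell
#!/bin/sh
# Pick the next song position from N(mean, sd), clamped to 1..n.
# Playlist positions run 1..n, oldest to newest.
awk -v mean=80 -v sd=10 -v n=100 'BEGIN {
  srand()
  u1 = rand(); if (u1 == 0) u1 = 1e-10          # guard against log(0)
  u2 = rand()
  z = sqrt(-2 * log(u1)) * cos(2 * 3.14159265358979 * u2)   # standard normal
  idx = int(mean + sd * z + 0.5)                # scale to the playlist and round
  if (idx < 1) idx = 1
  if (idx > n) idx = n
  print idx                                     # position of the next song
}'
```

Run repeatedly, the printed positions cluster around 80 with an occasional dip into older territory, which is exactly the “mostly new, occasionally olderish” behaviour described above.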

How well this will work in practice remains to be seen.


Some photos and a video showing the prototype:

Some photos of the NodeMCU breadboard prototype with one LED and a small red button.

The video above shows the following steps:

  1. Red button is pressed
  2. Arduino makes a GET request to /recents/save to save the currently playing song.
  3. Server handles the GET request, retrieves the currently playing song, and saves it to a playlist.
  4. Server returns song info and a custom message back to the client. (The custom messages are inside jokes among my friends.)
  5. The request returns song information as well as a random message.
  6. Arduino prints those to the serial console (the prototype does not include the OLED screen) and flashes the LED to indicate success.
  7. The banger button is pressed again for the same song (whether that’s by accident or deliberately).
  8. The server returns 304 (“Not Modified”, repurposed here to signal a duplicate) along with song information and a message to display. The song is not logged again as it has already been marked a “banger” (a great song).

Here is the Arduino console output:

Connecting to ***
WiFi connected
IP address:
Attempting MQTT connection...connected
making GET request
Status code: 200
{"status":200,"message":"Whatta tune."}
Whatta tune.
Mi Gente
J Balvin
Attempting MQTT connection...connected
making GET request
Status code: 200
{"status":304,"message":"Duplicate, but we'll let it slide."}
Duplicate, but we'll let it slide.
Mi Gente
J Balvin
Attempting MQTT connection...connected


The NodeJS backend is mostly completed and you can install it by cloning the repository or starting the Docker container. Access it by going to localhost:3000. You need to authenticate with your Spotify account first by clicking on the Login menu option.

docker run -d -p 3000:3000 danobot/spotify-button
  1. Express for no particular reason other than it seems popular and lightweight. I had a look at some API route examples and it seems to be just what I am looking for. ↩︎

  2. Depends on whether I can be bothered implementing a websocket. An MQTT implementation would probably be simpler because the nature of the protocol implies pushing data messages into the network and then forgetting about them. It is not mission critical that the receiver acknowledge the receipt of new data. A websocket allows two-way communication through a dedicated connection between the browser and server. ↩︎

Setting up SSL encryption using Reverse Proxy on Raspberry Pi

As this post title suggests, this gon’ be a major headache from start to finish. This is hopefully an improvement on other tutorials and will make the process of implementing a containerized reverse proxy on Raspberry Pi easier for you.

Setting up a free domain name, Dynamic DNS (DDNS) and Port Forwarding to your Raspberry Pi

If you are thinking about running a website or blog on your Raspberry Pi, there is a real need to make your device accessible from the internet. This post shows how to set up a free, custom domain name for your router along with all the stuff needed to make it work.

The issue with this is that your internet modem is assigned a random IP address and this address can change at any given moment. The traditional way to allow remote access to your modem is using a static IP address assigned through your ISP. This can be expensive and is impossible in some cases.


Before we start, make sure you have the following:

  • An account on Dynu, a free Dynamic DNS provider with lots of configuration options
  • A free .tk domain name from Freenom (make sure you sign up for an account, we need admin access later)
  • Port forwarding enabled on your ISP account, so TCP ports can be forwarded to your modem
  • A service running on your Raspberry Pi (or other network device) that you want to make available outside of your home network


  1. Forward external modem port to Raspberry Pi
  2. Set up Dynamic DNS between your modem and the DDNS provider (Dynu)
  3. Link custom domain name to Dynu name servers
  4. (Optional) map subdomain to port

Step 1: Set up port forwarding on your modem

The instructions for this vary between modems. The objective is to forward a modem port to your Raspberry Pi port. For webservers this is usually port 80 or 8080, though it can vary depending on the application you want to make externally available. A quick Google search should tell you how to achieve this for your particular modem.

Step 2: Set up Dynamic DNS between your modem and the DDNS provider (Dynu)

Since your modem’s IP address changes arbitrarily, we need a mechanism to map a static domain name to your ever-changing IP address.

Step 2.1: Create a new DDNS service on Dynu

  1. Click on DDNS Services on Control Panel (or click here)
  2. Click on Add button on top right
  3. Enter your domain name under “Option 2” and click Add

Your DDNS service has been created and your current IP address prefilled. The screen shown below allows you to configure the service.

Control Panel for new DDNS service on Dynu

Step 2.2: Update IP address automatically from modem

Most modems these days have the ability to send IP address updates to DDNS providers, notifying them every time the modem is assigned a new IP by the ISP. Again, the instructions for this are modem specific. Dynu’s help section includes instructions for a few common modem brands. If yours is not listed, googling setting up dynamic dns on <modem> should tell you how to do it.

You need your custom domain name, your Dynu username and password, and the IP address update link (supplied by Dynu).

Step 3: Link custom domain name to Dynu name servers

Now that we have established a reliable connection between our DDNS provider and our modem, which gives us a static address for our home network, we need to link our domain name to Dynu’s DNS servers1. Doing this is relatively simple: all we need to do is tell our domain name provider to use Dynu’s name servers (listed here).

  1. Log on to your Freenom account
  2. Go to the Domains section and edit your domain
  3. Select custom name servers and plug all of Dynu’s listed name servers into the empty text fields.

Step 4: Map subdomain to specific port

If you have multiple applications running on your Raspberry Pi, each on a different port, then you will benefit from mapping a subdomain to each port. Say you have a notebook server running on port 5055. Rather than specifying the port manually when accessing your domain, you can map port 5055 to a custom subdomain. This way you can have separate subdomains for each of your services.

Control Panel for new DDNS service on Dynu

We can achieve this by clicking on the Web Redirect link, which takes us to the following page:

Add new Web redirect

Under Node name you can specify the subdomain to use for the application running on your server. Make sure you select “Port Forwarding” as the redirect type. Leave Hostname or IP empty and enter the application port to redirect to. You can optionally check the Mask/cloak URL option to hide any query strings from view, so the browser displays only the hostname in the address bar rather than the full path and query parameters. These query parameters are application specific. Cloaking URLs is highly undesirable if you want URLs to be bookmarkable by users.


  • Make sure your IP address is updated correctly. If your IP changed and your router configuration is incorrect, then the DDNS provider will not be notified of the change. This results in your domain forwarding traffic to an outdated IP address. (Check on Dynu whether the listed IP is correct.)
  • If you are experiencing downtime for unknown reasons, but your setup works perfectly at other times, it is possible this is the result of DDoS attacks. You are especially prone to these if your website is indexed by search engines; the scrapers themselves might be bombarding your server with requests. In this case you need to adjust the scraper/indexing settings.


You should now be able to access the services on your Raspberry Pi using a static domain name with subdomains for each service forwarded to a specific port on your Pi. This is a very neat setup. If you have any problems please get in touch through the comments section.

A note on security: It is important to password protect your applications and use SSL encryption wherever possible. I will cover the installation of fail2ban and certbot in a future tutorial2, which will make the setup more secure.

  1. Dynu will convert your custom domain name to IP address currently assigned to your modem. This bridges the gap of the “unknown IP address”. ↩︎

  2. fail2ban helps prevent DDoS attacks by blocking known attacker IP addresses through traffic analysis. certbot allows the automatic retrieval and validation of SSL certificates, allowing you to serve content via https. ↩︎

Automating Docker Volume Backups

Backing up production databases regularly is very important. I am self-hosting Leanote, an open-source note taking application server and that required some kind of automated daily backups.

Docker Background

Docker stores volumes in the /var/lib/docker/volumes directory. The naming convention for Docker volumes is <directory name>_<volume name>, where the directory name is the name of the directory containing the docker-compose file, and the volume name is as specified in the docker-compose file. On Linux based systems, each volume directory is directly accessible from a root account. This makes for a simple backup process.
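The naming convention can be turned into a one-liner. The project and volume names below are just the Leanote example from above, and the _data suffix is where Docker keeps the volume's actual files:

```shell
#!/bin/sh
# Build the host path of a named volume from the compose project directory
# name and the volume name; the files themselves live under _data
project=leanote   # directory containing the docker-compose file
volume=data       # volume name as declared in the docker-compose file
echo "/var/lib/docker/volumes/${project}_${volume}/_data"
```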

Automating Docker Volume Backups

To backup a volume, we can simply compress the volume directory using tar and then back up the archive file to version control1.

To set up automated backups of your important data:

  1. Copy and paste the script below to a new file in your ~/backups directory.
  2. Run git init inside your backups directory (and set up a remote link for external backups)
  3. Run crontab -e and append the following line: 0 1 * * * /bin/bash /home/pi/backups/ This runs our backup script daily at 1am.

Backup Script

#!/bin/bash
#Purpose: Backup docker container/s
#Version 1.0

items=(mongo)                           #space separated list of words. Used in file names only.
vol_names=(leanote_data)                #space separated list of volume names. Same order as items array.

DESDIR=/home/pi/backups                 # backup directory

cd $DESDIR                              # the git commands below must run inside the backups repository

TIME=`date +%m-%d-%y-%H-%M-%S`

for i in "${!items[@]}"; do
  ITEM=${items[$i]}
  SRCDIR=/var/lib/docker/volumes/${vol_names[$i]}   # volume directory to back up
  DIR=$DESDIR/$ITEM/$ITEM-$TIME.tar.gz              # archive file to create
  mkdir -p $DESDIR/$ITEM
  echo "[$i]: Backing up $ITEM (Volume: ${vol_names[$i]}) -------------------------- "
  echo "     Source:      $SRCDIR"
  echo "     Destination: $DIR"
  sudo tar -cpzf $DIR $SRCDIR
  echo "Content Listing (and integrity test):"
  tar -tzf $DIR
  git add $DIR
  git commit -m "$ITEM backup $TIME"
done

# Push all commits at the end
git push

This script compresses a given volume, moves the resulting archive to a subdirectory in backups and commits that file to version control. You can use this same script to backup multiple volumes by adding more elements to the items and vol_names arrays.
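Extending the script to multiple volumes only means keeping the two arrays aligned. A minimal sketch of that pairing (the names here are illustrative, with leanote_data from above and a hypothetical second volume):

```shell
#!/bin/bash
# items[i] labels the backup, vol_names[i] is the matching Docker volume;
# iterating over the array indices keeps the two lists in lockstep
items=(mongo redis)
vol_names=(leanote_data cache_data)

for i in "${!items[@]}"; do
  echo "${items[$i]} -> ${vol_names[$i]}"
done
```

Because the loop iterates over indices rather than values, the label and the volume name always stay paired, which is why both arrays must list their entries in the same order.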

Congratulations! You can now rest assured that your data is backed up automatically. To confirm backups work, check your git repository or your local mail server. Cron sends output logged to STDOUT to the user executing the script (pi@raspberrypi). If your Cron logs show mail delivery errors, then you need to install postfix.

Access Cron emails using the mutt command (install if unavailable). Mutt provides a simple way to check the script outputs and confirm it is working as expected.

Do not stop here! Try this script in a non-production environment and restore a backup of some test data (see next section).

Backing up to AWS

See the following modified script to back up to AWS instead. You can set up a lifecycle rule to automatically delete backups older than 31 days. Some sort of lifecycle rule is required so as not to exceed the free usage limits.

#!/bin/bash
#Purpose: Backup docker container/s
#Version 1.1

items=(influxdb)                        #space separated list of words. Item is descriptive, used in file names only.
vol_names=(influxdb)                    #space separated list of volume names. Same order as items array.

DESDIR=/home/daniel/backups/ubuntu      # backup directory

TIME=`date +%Y-%m-%d-%H-%M-%S`

pushd $DESDIR

for i in "${!items[@]}"; do
  ITEM=${items[$i]}
  FILENAME=$TIME                                        # timestamp used in the archive name
  SRCDIR=/var/lib/docker/volumes/${vol_names[$i]}       # volume being backed up
  DIR=$DESDIR/$ITEM/$ITEM-$FILENAME                     # resulting archive location
  mkdir -p $DESDIR/$ITEM
  echo "[$i]: Backing up $ITEM (Volume: ${vol_names[$i]}) -------------------------- "
  echo "     Source:      $SRCDIR"
  echo "     Destination: $DIR"

  docker run -v ${vol_names[$i]}:/volume -v $DESDIR/$ITEM:/backup --rm loomchild/volume-backup backup $ITEM-$FILENAME
done

popd

/home/daniel/.local/bin/aws s3 sync $DESDIR s3://bucket-name

Restoring a Volume Backup

Before you relax and let your backup script do its work, it is important you convince yourself that the resulting archive contains not only the correct files but that they are picked up correctly by Docker when extracted and moved back into the /var/lib/docker/volumes directory.

Run the backup script, and then use the commands below. We can extract the archive using the sudo tar -zxvf <archive> command. This reproduces the same directory structure the files were originally located in, in our case var/lib/docker/volumes/<volume name>. To restore the volume, move the <volume name> directory into /var/lib/docker/volumes.

# cd into the extracted directory structure
cd var/lib/docker/volumes
sudo mv <volume name>/ /var/lib/docker/volumes
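You can rehearse the whole round trip on throwaway data before trusting it with real volumes. The paths below are temporary stand-ins under a scratch directory, not the real /var/lib/docker/volumes:

```shell
#!/bin/bash
# Rehearse backup + restore on a fake volume tree in a scratch directory
set -e
work=$(mktemp -d)
mkdir -p $work/var/lib/docker/volumes/myvol/_data
echo "hello" > $work/var/lib/docker/volumes/myvol/_data/file.txt

# "Backup": archive with relative paths, mirroring what the backup script stores
tar -C $work -cpzf $work/backup.tar.gz var/lib/docker/volumes/myvol

# "Restore": extract elsewhere and confirm the file survived the round trip
mkdir $work/restore
tar -C $work/restore -zxf $work/backup.tar.gz
cat $work/restore/var/lib/docker/volumes/myvol/_data/file.txt   # prints "hello"
```

If the final cat prints the original contents, the archive preserves the directory structure Docker expects, and the same mv step from above will work on the real volumes directory.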
  1. This is ok, in my opinion, for small databases up to a few megabytes in size. For larger backups, a remote FTP share would be more appropriate. ↩︎