PiboyDMG running from USB drive

My wife’s PiboyDMG stopped working and wouldn’t boot. After some investigation it turned out the SD card was fried (no surprise there, SD cards are horribly unreliable), so I set it up to boot from a 128GB mini USB drive instead. I followed the same steps I would have with an SD card:

  1. Download RetroPie
  2. Install RetroPie on the USB drive using Balena Etcher
  3. Download the Piboy DMG firmware updates, copy the Piboy DMG files to the USB drive and update the config file
  4. Boot up 🙂

I just wrote my own strcat(char *, char *) :)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char *my_strcat(char *s, char *t);

int main(void){
	char *s = "Pointer powers\n";
	char *t = " activate\n";

	char *z = my_strcat(s, t);
	printf("%s", z);
	free(z);
	return 0;
}

char *my_strcat(char *s, char *t){
	//return a newly allocated string holding the concatenation of s and t
	//(a local array would be destroyed on return, so allocate on the heap)
	char *concatenation = malloc(strlen(s) + strlen(t) + 1);
	int i = 0;
	while (*s != '\0'){
		concatenation[i] = *s;
		s++;
		i++;
	}
	while (*t != '\0'){
		concatenation[i] = *t;
		t++;
		i++;
	}
	concatenation[i] = '\0';
	return concatenation;
}

Build a Raspberry Pi Kubernetes cluster

1. Get the hardware

To build this cluster you will need the following hardware parts:

  1. 1x PoE switch (link) cost EUR 118
  2. 5x 15cm Ethernet cables (link) cost EUR 2.40×5 = EUR 12
  3. 4x PoE HAT for Raspberry Pi (link) cost EUR 28.99×4 = EUR 115.96
  4. 4x Raspberry Pi 4 8 GB (link) cost EUR 81.66×4 = EUR 326.64
  5. 4x 64GB USB 3.1 pendrives (link) cost EUR 11.63×4 = EUR 46.52
  6. Spacers to stack the Raspberry Pis (link) cost EUR 9.99
  7. Total cost EUR 630
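As a quick sanity check on the list, the per-unit prices multiply out as follows (awk used purely as a calculator):

```shell
# 1 switch + 5 cables + 4 HATs + 4 Pis + 4 pendrives + spacers
awk 'BEGIN { print 118 + 5*2.4 + 4*28.99 + 4*81.66 + 4*11.63 + 9.99 }'
# prints 629.11, i.e. roughly EUR 630
```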

2. Install base infrastructure

  1. Set up the PoE HATs and stack the Raspberry Pis using the spacers.
  2. Install Raspberry Pi OS and the base configuration (link)
    1. Since the Raspberry Pi 3B you can boot directly from USB. No more IO errors from unreliable SD cards 🙂
    2. To perform a headless install, create a file called ssh in the /boot/ folder; this enables SSH so you can access your Pis remotely without needing a monitor (link)
    3. Install tmux and get this script (link) to simultaneously modify all 4 Pis
      1. sudo apt install tmux
      2. vi multi-ssh.sh
#!/bin/bash
ssh_list=( user1@server1 user2@server2 ... )
split_list=()
for ssh_entry in "${ssh_list[@]:1}"; do
    split_list+=( split-pane ssh "$ssh_entry" ';' )
done
tmux new-session ssh "${ssh_list[0]}" ';' \
    "${split_list[@]}" \
    select-layout tiled ';' \
    set-option -w synchronize-panes
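For this cluster the ssh_list placeholder would be filled in with the four Pis (hostnames raspicluster1 to raspicluster4, matching the names used later in the teardown section; adjust to your own):

```shell
ssh_list=( pi@raspicluster1 pi@raspicluster2 pi@raspicluster3 pi@raspicluster4 )
echo "${#ssh_list[@]} hosts"   # prints: 4 hosts
```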
  1. Install Docker on each Raspberry Pi (link)
sudo apt-get update && sudo apt-get upgrade
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker pi
sudo reboot

2. Install kubeadm, kubelet and kubectl on each Raspberry Pi (link)

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

3. Disable swap on each Raspberry Pi (link)

sudo dphys-swapfile swapoff && \
sudo dphys-swapfile uninstall && \
sudo systemctl disable dphys-swapfile
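After the next reboot you can confirm swap is really gone; the awk filter below just picks the swap total out of free's output (it should read 0B):

```shell
free -h | awk '/Swap/ {print $2}'
```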

4. Add cgroup parameters to /boot/cmdline.txt on each Raspberry Pi (link)

sudo vi /boot/cmdline.txt
cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
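If you prefer not to edit by hand, this sed sketch appends the flags to the end of the file's single line (cmdline.txt must remain one line; on newer images the file may live at /boot/firmware/cmdline.txt):

```shell
sudo sed -i '1 s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/' /boot/cmdline.txt
```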

5. Configure Docker to use systemd on each Raspberry Pi

sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
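If Docker fails to come back after the restart, a typo in daemon.json is the usual cause; since python3 ships with Raspberry Pi OS you can validate the JSON like this:

```shell
python3 -m json.tool /etc/docker/daemon.json
```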

3. Initialize Kubernetes Control Plane on the master node

  1. Choose one Raspberry Pi to be your master node, from which you will control the cluster; it will run the Kubernetes Control Plane. Run the commands below on the master node.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
rm -rf $HOME/.kube/
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
vi $HOME/.bashrc
# Add the below line to the end of .bashrc
export KUBECONFIG=$HOME/.kube/config

2. Set up the Kubernetes network (I am using flannel) on the master node.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sudo mkdir -p /run/flannel
cat <<EOF | sudo tee /run/flannel/subnet.env
FLANNEL_NETWORK=100.96.0.0/16
FLANNEL_SUBNET=100.96.1.1/24
FLANNEL_MTU=8973
FLANNEL_IPMASQ=true
EOF

3. Get the tokens to connect the different nodes. Run this command on the master node and take note of the output.

kubeadm token create --print-join-command

4. Add the different nodes to the cluster

On each Raspberry Pi that you want to add as a node to the cluster, run the following commands.

  1. Configure the flannel subnet file
sudo mkdir -p /run/flannel
cat <<EOF | sudo tee /run/flannel/subnet.env
FLANNEL_NETWORK=100.96.0.0/16
FLANNEL_SUBNET=100.96.1.1/24
FLANNEL_MTU=8973
FLANNEL_IPMASQ=true
EOF

2. Use the output you obtained previously from the command “kubeadm token create --print-join-command”; it will look something like this:

sudo kubeadm join 192.168.1.102:6443 --token 5aotn8.ab493943zf9zf9nm \
        --discovery-token-ca-cert-hash sha256:396a8a8b28b11c8caa8474384398493482034320947090766366bff9d1516699acde

3. If all went well you should see all your nodes ready with the command kubectl get nodes. You are now ready to create deployments, pods and services.

5. Destroy your kubernetes infrastructure

After you are done playing, or in case things stop working and you want to start from scratch, you can use the instructions below to destroy your Kubernetes infrastructure.

  1. Remove the nodes by running the commands below on your control plane. Instead of raspicluster1 to raspicluster4, use the hostnames you have set up for your machines.
kubectl drain raspicluster1 --delete-emptydir-data --force --ignore-daemonsets 
kubectl drain raspicluster2 --delete-emptydir-data --force --ignore-daemonsets 
kubectl drain raspicluster3 --delete-emptydir-data --force --ignore-daemonsets 
kubectl drain raspicluster4 --delete-emptydir-data --force --ignore-daemonsets 
kubectl delete node raspicluster1 
kubectl delete node raspicluster2 
kubectl delete node raspicluster3 
kubectl delete node raspicluster4 
kubectl get nodes
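The four drain/delete pairs above can also be written as a single loop:

```shell
for n in raspicluster1 raspicluster2 raspicluster3 raspicluster4; do
    kubectl drain "$n" --delete-emptydir-data --force --ignore-daemonsets
    kubectl delete node "$n"
done
kubectl get nodes
```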

2. SSH to each machine and run the below code

sudo rm /etc/kubernetes/kubelet.conf /etc/kubernetes/pki/ca.crt /etc/kubernetes/bootstrap-kubelet.conf
sudo kubeadm reset

Artificial intelligence tweeting Bird Feeder

Create a bird-watching artificial intelligence that runs on a Raspberry Pi and tweets short videos every time it detects a bird. You can find mine at https://twitter.com/birdfeederAI

1. Get the hardware

To build this bird watcher you will need the following hardware parts. Total cost EUR 255

  1. (optional) PoE switch (link) cost EUR 118
  2. (optional) PoE HAT for Raspberry Pi (link) cost EUR 28.99
  3. Raspberry Pi 4 8 GB (link) cost EUR 81.66
  4. Raspberry Pi camera (link) cost EUR 33.90
  5. 64GB SD card (link) cost EUR 11.99 (update: since the Raspberry Pi 3B you can boot from USB, which is faster and fails less, so better get a 64GB USB 3.1 pendrive (link) for EUR 11.63)

2. Set up base infrastructure

  1. Install Raspberry Pi OS (link)
  2. Activate the camera (link)
  3. Log into the Raspberry Pi and create a Python virtual environment with the base libraries
    1. mkdir -p /home/pi/dev/birdfeederAI
    2. cd /home/pi/dev/birdfeederAI
    3. python3 -m venv ./venv
      1. This creates the python virtual environment.
    4. source ./venv/bin/activate
      1. This activates the python virtual environment. All modules installed by pip will be installed only for this virtual environment
    5. which python
      1. Now that we have created and activated our virtual environment this should return: “/home/pi/dev/birdfeederAI/venv/bin/python”
    6. which pip
      1. Now that we have created and activated our virtual environment this should return: “/home/pi/dev/birdfeederAI/venv/bin/pip”
    7. which pip3
      1. Now that we have created and activated our virtual environment this should return: “/home/pi/dev/birdfeederAI/venv/bin/pip3”
    8. pip list
      1. this will list the python modules we have installed for our virtual environment.
    9. pip install --upgrade pip
      1. This will install the latest version of pip
    10. pip install opencv-python twython
      1. install the opencv python module to process video, twython to send tweets
  4. Fix all the missing libs
    1. sudo apt install apt-file
    2. sudo apt-file update
    3. for i in $(find /home/pi/dev/birdfeederAI/venv/lib | grep 'so$' | xargs ldd | grep "not found" | awk '{print $1;}'); do apt-file search $i | awk 'BEGIN{FS=":"};{print $1;}'; done | sort | uniq | xargs sudo apt install
      1. This will list the libraries which are not installed and install them for you. If something goes wrong look here (link)
  5. Download the pre-trained model MobileNet-SSD
    1. cd /home/pi/dev/birdfeederAI
    2. git clone https://github.com/chuanqi305/MobileNet-SSD.git
  6. Check that the base infrastructure is correctly installed
    1. raspistill -o mypicture.jpg
      1. this should create a picture from the camera and store it as mypicture.jpg
    2. raspivid -t 5000 -o myvideo.h264
      1. this should create a video from the camera and store it as myvideo.h264
    3. Copy the code below to pycamtest.py and run it with python pycamtest.py. If everything is correctly set up you should see the output of your camera in a window (press ESC to quit).
import cv2
cv2.namedWindow("TestCV2")
vc = cv2.VideoCapture(0)
if vc.isOpened():
    rval,frame = vc.read()
else:
    rval = False
while rval:
    frame = cv2.flip(frame,-1)
    cv2.imshow("TestCV2", frame)
    rval, frame = vc.read()
    key = cv2.waitKey(20)
    if key ==27:
        break
vc.release()
cv2.destroyWindow("TestCV2")

3. Set up twitter functionality

  1. Set up a twitter account
  2. Activate a twitter developer account in https://developer.twitter.com/ and generate your API keys which you will need in the next step
  3. Obtain your API keys and copy them to /home/pi/dev/birdfeederAI/auth.py
    1. cat>/home/pi/dev/birdfeederAI/auth.py
    2. consumer_key = 'puthereyourconsumerkey'
    3. consumer_secret = 'puthereyourconsumersecret'
    4. access_token = 'puthereyouraccesstoken'
    5. access_token_secret = 'puthereyouraccesstokensecret'
    6. Press Ctrl-D to end the file
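Before wiring the keys into the main program you can check that auth.py imports cleanly (the import should print nothing and exit silently if all four names are defined):

```shell
cd /home/pi/dev/birdfeederAI
python3 -c 'from auth import consumer_key, consumer_secret, access_token, access_token_secret'
```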

4. Code your birdfeeder

  1. Code your program; you can use my code to set up a headless tweeting bird-detecting camera 🙂
    1. git clone https://github.com/Rogeman/birdfeederAI.git
import numpy as np
import cv2
import random
import os
import logging
from twython import Twython
from twython import TwythonError

from auth import (
        consumer_key,
        consumer_secret,
        access_token,
        access_token_secret
)
twitter = Twython(
        consumer_key,
        consumer_secret,
        access_token,
        access_token_secret
)


confidence_thr = 0.5
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
    "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
    "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
    "sofa", "train", "tvmonitor"]
COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3))
birdfeeder_dir=os.path.dirname(os.path.abspath(__file__))
os.makedirs(birdfeeder_dir+'/log', exist_ok=True)
os.makedirs(birdfeeder_dir+'/webserver', exist_ok=True)
logging.basicConfig(filename=birdfeeder_dir+'/log/birdfeederAI.log', level=logging.DEBUG, format='%(asctime)s %(message)s')
mobilenet_dir=birdfeeder_dir+'/MobileNet-SSD/'
net = cv2.dnn.readNetFromCaffe(mobilenet_dir+ 'deploy.prototxt' , mobilenet_dir+ 'mobilenet_iter_73000.caffemodel')
blob=None

def applySSD(image):
    global blob
    mybird = False
    blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 0.007843, (300, 300), 127.5)

    # pass the blob through the network and obtain the detections and
    # predictions
    net.setInput(blob)
    detections = net.forward()

    # loop over the detections
    for i in np.arange(0, detections.shape[2]):
        # extract the confidence (i.e., probability) associated with the
        # prediction
        confidence = detections[0, 0, i, 2]

        if confidence > confidence_thr:
            idx = int(detections[0,0,i,1])
            if CLASSES[idx]=="bird":
                mybird = True
    return mybird

def birdRatio(videoName):
    totalBirdFrames = 1
    totalFrames = 1
    vc2 = cv2.VideoCapture(videoName)
    if vc2.isOpened():
        rval2,frame2 = vc2.read()
    else:
        rval2 = False

    while rval2:
        birdinFrame = applySSD(frame2)
        rval2, frame2 = vc2.read()
        if (birdinFrame):
            totalBirdFrames = totalBirdFrames + 1
        totalFrames = totalFrames + 1

    vc2.release()
    return totalBirdFrames/totalFrames

videoLength=8*60*60*1000
randomsec=random.randint(0,videoLength)


#vc = cv2.VideoCapture(birdfeeder_dir+"/birds_video.mp4")
# To record birds using your camera keep the line below; to find birds in an existing video, uncomment the line above and comment out the line below 🙂
vc = cv2.VideoCapture(0)
vc.set(cv2.CAP_PROP_POS_MSEC, randomsec)
if vc.isOpened():
    width = vc.get(cv2.CAP_PROP_FRAME_WIDTH)
    height = vc.get(cv2.CAP_PROP_FRAME_HEIGHT)
    fps = vc.get(cv2.CAP_PROP_FPS)
    fcount = vc.get(cv2.CAP_PROP_FRAME_COUNT)
else:
    logging.error('Can\'t open video')

    exit()

recording= False
framerecorded = 0
framecounter = 0
birdinFrame=False
fourcc = cv2.VideoWriter_fourcc(*'h264')
#out = cv2.VideoWriter('output.mp4',fourcc,20.0,(640,480))
out = cv2.VideoWriter(birdfeeder_dir+'/output.mp4',fourcc,fps,(int(width),int(height)))

if vc.isOpened(): # try to get the first frame
    rval, frame = vc.read()
    (h, w) = frame.shape[0] , frame.shape[1]
else:
    rval = False

logging.debug('Started main loop')
while rval:
    #You enter this loop once per frame
    rval, frame = vc.read()
    if not rval:
        break
    #comment out the below line if your camera is not mounted upside down
    frame = cv2.flip(frame,-1)
    key = cv2.waitKey(20)
    if key == 27: # exit on ESC
        break
    framecounter = framecounter + 1
    if (framecounter > 60):
    # Write frame to disk every 60 frames so we can see what the camera is seeing
        framecounter = 0
        cv2.imwrite(birdfeeder_dir+"/webserver/currentframe.jpg",frame)
    if (birdinFrame==False):
        #Check if this frame has a bird in it
        birdinFrame= applySSD(frame)
    if (birdinFrame== True and recording== False):
        #You have detected the first bird in a frame, start recording
        logging.info('Started recording video')
        recording=True
    if (recording == True):
        #write the frame to file keep track of how many frames you have saved.
        framerecorded = framerecorded + 1
        out.write(frame)
    if (framerecorded > 200):
        #after 200 frames stop recording
        logging.info('Checking recorded video')
        recording = False
        birdinFrame=False
        framerecorded = 0
        out.release()
        filename = birdfeeder_dir+"/output.mp4"
        birdsinvideo= birdRatio(filename)
        logging.debug('ratio of bird frames in video: '+str(birdsinvideo))
        if (birdsinvideo> 0.50):
            # if the recorded video has more than 50% of frames with a bird in it then tweet it
            logging.info('Tweeting bird video')
            video = open(filename,'rb')
            try:
                response = twitter.upload_video(media=video, media_type='video/mp4', media_category='tweet_video', check_progress=True)
                twitter.update_status(status='birdfeeder 0.5', media_ids=[response['media_id']])
            except TwythonError as e:
                logging.error('Twitter error:'+str(e))
            birdsinvideo=0
            video.close()
        randomsec=random.randint(0,videoLength)
        vc.set(cv2.CAP_PROP_POS_MSEC, randomsec)
        os.remove(birdfeeder_dir+'/output.mp4')
        out = cv2.VideoWriter(birdfeeder_dir+'/output.mp4',fourcc,fps,(int(width),int(height)))



vc.release()

5. Create an nginx webserver to see what the camera is seeing

  1. Install Docker
sudo apt-get update && sudo apt-get upgrade
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker pi
sudo reboot
  1. Run nginx docker container serving documents from the birdfeeder webserver folder
    1. docker run -it --rm -d -p 8080:80 --name web -v /home/pi/dev/birdfeederAI/webserver/:/usr/share/nginx/html nginx
  2. Create an index.html which refreshes every 2 seconds showing currentframe.jpg
cat > /home/pi/dev/birdfeederAI/webserver/index.html
<html>
    <head>
        <title>Birdfeeder</title>
        <meta http-equiv="refresh" content="2" />
    </head>
    <body>
    <img src="./currentframe.jpg">
    </body>
</html>

Ctrl-D

You can now point a web browser at your Raspberry Pi’s IP address on port 8080 and see what your camera is seeing.

6. Add birdfeeder service to systemd

We add birdfeeder to systemd so it starts on boot.

  1. Create a bash script that runs birdfeeder in a loop. Run it with nice so it does not hog the CPU (running at 100% for long makes the SD card unresponsive and the system unstable).
    1. vim /home/pi/dev/birdfeederAI/bin/birdfeeder.sh
    2. chmod +x /home/pi/dev/birdfeederAI/bin/birdfeeder.sh
#!/bin/bash
docker run -it --rm -d -p 8080:80 --name web -v /home/pi/dev/birdfeederAI/webserver/:/usr/share/nginx/html nginx
source /home/pi/dev/birdfeederAI/venv/bin/activate
while true
do
nice python /home/pi/dev/birdfeederAI/birdfeeder.py
done

  1. Create service file
    1. sudo vim /lib/systemd/system/birdfeeder.service
[Unit]
Description=birdfeeder service
After=multi-user.target

[Service]
Type=idle
ExecStart=/home/pi/dev/birdfeederAI/bin/birdfeeder.sh

[Install]
WantedBy=multi-user.target

  1. Grant permissions, add the service and reboot the system
    1. sudo chmod 644 /lib/systemd/system/birdfeeder.service
    2. sudo systemctl daemon-reload
    3. sudo systemctl enable birdfeeder.service
    4. sudo reboot

Arduino controlled robot arm

I enjoy tinkering around with robots and electronics. Bridging the invisible world of software with the real world of physical things.

I discovered I could glue a breadboard to the side of the robotic arm's base, and hold the Arduino board to the base with elastic bands. The Adafruit motor board left five analog pins free to use, so I could wire a switch, a potentiometer and an H-bridge into the breadboard with those five free pins. And I could replace the broken LED with a new one using my soldering iron 😀

It is now all ready and working. Every time I hold down the switch button it activates one of the five motors in turn. With the potentiometer I can make the motor run in one direction or the other. And the LED at the hand of the robot arm shines while the button is pressed.

The only pending problem is that I need to replace all the worn-out gears in the motors, as they are eroded from previous experiments (the problem with DC motors, as opposed to servo motors, is that you can't know their position, so I overextended them and wore down the gears).

Machine Learning 101

I started self-training on machine learning. I bought a copy of “Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow” and am enjoying it a lot. I have a small Moleskine notebook in which I break down the concepts one day at a time 🙂

So far I have not encountered any concept that is not trivial. That means I am progressing. In my experience, things are either trivial or impossible; our job is to break down impossible tasks until they become trivial 🙂