Adding a ks0212 relay board to the MQTT universe

Weatherstation with raspi

Adding the 4 channel relay board ks0212 to the MQTT universe

We just hacked a Trotec dehumidifier for Herwig's observatory. The idea was to additionally activate the dehumidifier when the difference between outside and inside humidity is above 10%. Normally there is a fan taking care of it, but sometimes the difference gets too high. As there is already a Raspberry Pi running in the observatory for the weather station and the flightradar24 installation, we just added the 4 channel relay board ks0212 from Keyestudio. Not touching the 220V part, we directly used the relay to “press” the TTL switch on the board for 0.5 seconds to turn the dehumidifier on and off. Here are the code snippets we used for this. The control is completely handled via MQTT.

Installing necessary programs and libraries

For the sake of simplicity we used Python and the GPIO library WiringPi. Therefore we first install the Python development parts and then the Python libraries for WiringPi and MQTT. As this is a dedicated hardware installation we don't use virtualenv and directly install the libraries as root, system wide.
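
A sketch of the installation steps (the exact package list is an assumption):

    sudo apt-get update
    sudo apt-get install python-dev python-pip
    # WiringPi bindings for GPIO access, paho-mqtt for the MQTT connection
    sudo pip install wiringpi paho-mqtt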

The python program

Again, a very simple Python script: it attaches to an MQTT server (you need to change the code, there is no config file) and subscribes itself to a certain topic. Then it waits for messages and cuts off the last part of the topic to identify the relay. The naming convention is based on the relay names printed on the ks0212 PCB. As payload you can send “on”, “off” and “press”. “press” switches the relay on for half a second in order to simulate a button press, as we need it for our dehumidifier.
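
A minimal sketch of such a script, assuming the paho-mqtt client, the WiringPi pin numbers from the table below, and a placeholder broker address and topic prefix:

    #!/usr/bin/env python
    import time

    import paho.mqtt.client as mqtt
    import wiringpi

    # relay name (as printed on the PCB) -> WiringPi pin, see pinout table
    RELAYS = {"j2": 7, "j3": 3, "j4": 22, "j5": 25}

    wiringpi.wiringPiSetup()
    for pin in RELAYS.values():
        wiringpi.pinMode(pin, 1)  # configure pin as output

    def on_message(client, userdata, msg):
        # the last part of the topic identifies the relay
        relay = msg.topic.rsplit("/", 1)[-1].lower()
        pin = RELAYS.get(relay)
        if pin is None:
            return
        payload = msg.payload.decode().lower()
        if payload == "on":
            wiringpi.digitalWrite(pin, 1)
        elif payload == "off":
            wiringpi.digitalWrite(pin, 0)
        elif payload == "press":
            # simulate a button press: on for half a second, then off
            wiringpi.digitalWrite(pin, 1)
            time.sleep(0.5)
            wiringpi.digitalWrite(pin, 0)

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("mqtt.example.org")            # placeholder broker address
    client.subscribe("observatory/relayboard/#")  # placeholder topic prefix
    client.loop_forever()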

Adding a systemd service

In order to keep the wannabe daemon up and running and also start it automatically at system start, we add this service configuration file in “/lib/systemd/system/relayboard.service“:
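
A sketch of such a unit file; the script path is an assumption:

    [Unit]
    Description=MQTT relay board daemon for the ks0212
    After=network-online.target

    [Service]
    ExecStart=/usr/bin/python /usr/local/bin/relayboard.py
    Restart=always

    [Install]
    WantedBy=multi-user.target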

Activating the service

The following lines activate the service:
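
    # reload the unit files, then enable at boot and start right away
    sudo systemctl daemon-reload
    sudo systemctl enable relayboard.service
    sudo systemctl start relayboard.service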

Checking the status can be done with:
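
    sudo systemctl status relayboard.service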

ks0212 Pinout

If you want to do some hacking with the ks0212 relay board on your own, here is the pin mapping table. I used the very cool site https://pinout.xyz/pinout/wiringpi for getting the numbers:

Relay  WiringPi  BCM  Physical pin  Link
J2     7         4    7             https://pinout.xyz/pinout/pin7_gpio4
J3     3         22   15            https://pinout.xyz/pinout/pin15_gpio22
J4     22        6    31            https://pinout.xyz/pinout/pin31_gpio6
J5     25        26   37            https://pinout.xyz/pinout/pin37_gpio26

Workload container for autoscaling test with Kubernetes

Workload

The Idea

Every now and then you want to test your installation, your server or your setup, especially when you want to test autoscaling functionality. Kubernetes has an out-of-the-box autoscaler, and the official documentation recommends a test docker container with an Apache and PHP installation. This is really great for testing a web application where you have some workload for a relatively short time frame. But I would also like to test a scenario where the workload runs for a longer time in the Kubernetes setup and generates way more CPU load than a web application. Therefore I hacked a nice docker container based on a C program load generator.

The docker container

The docker container is basically a very, very simple Flask server with only one entry point, “/”. The workload itself can be configured via two parameters:

  • percentage: how much CPU load will be generated
  • seconds: how long the workload will be active

The docker container itself uses nearly no CPU cycles while idle, as Flask is the only active Python process and it just waits for calls before starting to consume CPU cycles.

lookbusy

I use a very nice open source tool called lookbusy by Devin Carraway which consumes memory and CPU cycles based on command line parameters. Unfortunately the program has no parameter to configure the time span it should run. Therefore I call it with the Unix command timeout to terminate its execution after the given amount of seconds.
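
For example (load percentage and duration are arbitrary values):

    # generate 80% CPU load and terminate lookbusy after 60 seconds
    timeout 60 lookbusy -c 80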

The Flask python wrapper

The only program is a Python Flask one. It is very short: it only takes the GET call to its root path, checks for the two parameters and starts a thread with the subprocess. The GET call returns immediately, so it also supports long running workload simulations.
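
A minimal sketch of such a wrapper; the parameter defaults are assumptions:

    import subprocess
    import threading

    from flask import Flask, request

    app = Flask(__name__)

    def run_load(percentage, seconds):
        # timeout terminates lookbusy after the given number of seconds
        subprocess.call(["timeout", str(seconds), "lookbusy", "-c", str(percentage)])

    @app.route("/")
    def workload():
        percentage = request.args.get("percentage", "50")
        seconds = request.args.get("seconds", "60")
        # run in a background thread so the GET call returns immediately
        threading.Thread(target=run_load, args=(percentage, seconds)).start()
        return "started: %s%% cpu load for %s seconds\n" % (percentage, seconds)

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=80)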

The Dockerfile

The docker container is based on python latest (at this time 3.6.4). I put all the curl, make, install and rm calls into a single line in order to have a minimal footprint for the docker layer, as we do not need the source code any more. As Flask is the only requirement I also install it directly without a requirements.txt file. The “-u” parameter for the python call is necessary to prevent Python from buffering the output. Otherwise it can be quite disturbing to read the debug log file.
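
A sketch of the Dockerfile along those lines; the lookbusy download URL should be checked against Devin Carraway's site:

    FROM python:latest

    # download, build and install lookbusy, then remove the sources,
    # all in one layer to keep the image small
    RUN curl -L http://www.devin.com/lookbusy/download/lookbusy-1.4.tar.gz | tar xz \
        && cd lookbusy-1.4 && ./configure && make && make install \
        && cd .. && rm -rf lookbusy-1.4

    # Flask is the only requirement, so no requirements.txt
    RUN pip install flask

    COPY app.py /app.py

    # -u prevents python from buffering the output
    CMD ["python", "-u", "/app.py"]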

Building and pushing the docker container

Building and pushing it to hub.docker.com is straightforward and nothing special.
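
    # the image name is a placeholder
    docker build -t <dockerhub-user>/workload .
    docker push <dockerhub-user>/workload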

Testing it on a kubernetes cluster

I have chosen the IBM cloud to test my docker container.

Requesting a kubernetes cluster

Requesting a kubernetes cluster can be done after login with
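
    # all values are placeholders; the parameters are explained below
    bx cs cluster-create \
      --name mycluster \
      --location dal10 \
      --workers 2 \
      --kube-version 1.8.6 \
      --private-vlan <private-vlan-id> \
      --public-vlan <public-vlan-id> \
      --machine-type u2c.2x4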

This command uses the Bluemix CLI with the cluster plugin to control and configure Kubernetes on the IBM infrastructure. The parameters are:

  • --name to give your cluster a name (will be very important later on)
  • --location which datacenter to use (in this case Dallas). Use “bx cs locations” to get the possible locations for the chosen region
  • --workers how many worker nodes are requested
  • --kube-version which Kubernetes version should be used. Use “bx cs kube-versions” to get the available versions; the “(default)” suffix is not part of the parameter value.
  • --private-vlan which VLAN for the private network should be used. Use “bx cs vlans <location>” to get the available public and private VLANs
  • --public-vlan see private VLAN
  • --machine-type which kind of underlying configuration you want to use for your worker node. Use “bx cs machine-types <location>” to get the available machine types. The first number after the “.” is the number of cores, and the one after the “x” is the amount of RAM in GB.

This command takes some time (~1h) to generate the Kubernetes cluster. BTW, my Bluemix CLI docker container has all the necessary tools and also a nice script called “start_cluster.sh” to query all parameters and start a new cluster. After the cluster is up and running we can get the Kubernetes configuration with
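
    bx cs cluster-config mycluster
    # ...then run the "export KUBECONFIG=..." line the command prints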

Starting a pod and replica set

We start the pod and replica set without a yaml file because the request is very straightforward. Important here is the parameter “--requests”. Without it the autoscaler cannot measure the CPU load and never triggers.
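
    # image name is a placeholder; --requests is what the autoscaler needs
    kubectl run workload --image=<dockerhub-user>/workload --requests=cpu=200m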

Exposing the http port

Again, because the call is so simple, we directly call kubectl without a yaml file to expose port 80. We can check for the public IP with
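
    kubectl expose deployment workload --port=80 --type=LoadBalancer
    kubectl get svc workload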

In case the cloud runs out of public IP addresses and the “EXTERNAL_IP” is still pending after several minutes, we can use one of the workers' public IP addresses and the dynamically assigned port. The port is visible with “kubectl get svc” in the “PORTS” section. The syntax is, as always in docker, internalport:externalport. The workers' public IPs can be checked with
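
    # lists all workers with their public and private IP addresses
    bx cs workers mycluster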

So instead of calling our service with an official public IP address on port 80, we can use
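
    # worker public IP and node port are placeholders
    http://<worker-public-ip>:<node-port>/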

Autoscaler

Kubernetes has a built-in horizontal autoscaler which can be started with
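
    # scale between 2 and 10 pods at 50% average CPU load
    kubectl autoscale deployment workload --cpu-percent=50 --min=2 --max=10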

In this case it measures the CPU load and starts new pods when the load is over 50%. The autoscaler in this configuration never starts more than 10 and never runs fewer than 2 pods. The current measurements and parameters can be checked with
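
    kubectl get hpa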

So right now the CPU load is 0 and only one replica is running.

Loadtest

Time to call our container and start the load test. Depending on the URL, we can use curl to start the test with
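
    # service IP and parameter values are placeholders
    curl "http://<service-ip>/?percentage=80&seconds=300"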

and check the result after some time with
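
    kubectl get hpa
    kubectl get pods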

As we can see, the load increases and the autoscaler kicks in. More details can be obtained with the “kubectl proxy” command.

Deleting the kubernetes cluster

To clean up we could either delete all pods, replica sets and services, or we could delete the complete cluster with
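
    # removes the whole cluster including its workers
    bx cs cluster-rm mycluster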

Execute the Radio Meteor Observations program on macOS

MeteorLogger Screenshot

What is it about

Wolfgang Kaufmann wrote an impressive article and an even more impressive piece of software for hobby radio astronomers. I highly recommend checking out the article and playing with the software, as there are not so many radio astronomers in the community. His software is written in Python with a very clean UI. It directly connects to the computer's sound card and grabs the audio signal. Here I shortly describe what to install on macOS to get his software up and running.

Where to get it

The software can be downloaded at http://www.ars-electromagnetica.de/robs/download.html. Unfortunately it is not available in any online repo like GitHub, but the source code can be downloaded as a zip file.

Preparation

The PyAudio package needs some libraries and direct access to the OS sound system. Therefore we need to install this audio package outside of Python itself.
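
Assuming Homebrew as the package manager, the underlying PortAudio library can be installed with:

    brew install portaudio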

The necessary Python libs can be installed via pip. I recommend doing it in a virtual environment:
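
    # create and activate a virtual environment, then install PyAudio;
    # install any further packages the program imports
    python3 -m venv robs
    source robs/bin/activate
    pip install pyaudio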

That is all. After installing the Python libs the program starts right away:
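
    # the script name may differ in the downloaded zip file
    python MeteorLogger.py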

Image Recognition with Tensorflow classification on OpenWhisk

The big picture

Image classification

As described in a previous article, we (Niklas and I) are going to use Tensorflow to classify images into pre-trained categories. The previous article was about how to train a model with Tensorflow on Kubernetes. This article now describes how to use the pre-trained model, which is stored on Object Storage. Similar to the training, we will also use docker to host our program, but this time we will use OpenWhisk as a platform.

Like the first part, I also use the Google training Tensorflow for Poets. This time not the code itself, but I copied the important classification parts from their script into my own Python file.

OpenWhisk with Docker

OpenWhisk is the open source implementation of a so-called serverless computing platform. It is hosted by Apache and maintained by many companies. IBM offers OpenWhisk on their IBM Cloud, and for testing and even playing around with it, the use is free. Besides Python and JavaScript, OpenWhisk also offers the possibility to run docker containers. Internally, all Python and JavaScript code is executed on docker containers anyhow. So we will use the same official Tensorflow docker container we used to build our training docker container.

Internally OpenWhisk has three stages for docker containers. When we register a new method, the execution instruction is only stored in a database. As soon as the first call reaches OpenWhisk, the docker container is pulled from the repository, then initialised by a REST call to ‘/init’ and then executed by calling the REST interface ‘/run’. The docker container stays active, and each time the method is called only the ‘/run’ part is executed. After some time of inactivity the container is destroyed and needs to be initialised via ‘/init’ again. After even more time of inactivity even the image is removed and needs to be pulled again.

The setup

The code itself is stored on GitHub. Let's have a look first at how we build the docker container:

Dockerfile

As you can see, this Dockerfile is now really simple. It basically installs the Python requirements to access the SWIFT Object Store and starts the Python program. The Python program keeps running until the OpenWhisk system decides to stop the container.
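
A sketch of such a Dockerfile; the script name is an assumption:

    FROM tensorflow/tensorflow:latest

    # clients for the SWIFT Object Store
    RUN pip install python-swiftclient python-keystoneclient

    COPY classify.py /classify.py

    # the python program implements the /init and /run HTTP endpoints
    # OpenWhisk expects from a docker action; -u disables output buffering
    CMD ["python", "-u", "/classify.py"]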

We make heavy use of the idea of having an init and a run part in the executed code. So the Python program has two main parts. The first one is init and the second one is run. Let's have a look at the init part first, which is basically setting up the stage for the classification itself.

/init

Unfortunately it is not so easy to configure the init part in a dynamic way with parameters from outside. So for this demo we need to build the Object Store credentials into our source code. It doesn't feel right, but for a demo it is OK. In a later article I will describe how to change the flow and inject the parameters in a dynamic way. So what are we doing here?

  1. Lines 10-16 set up a connection to the Object Store, as described here.
  2. Lines 18-22 read the pre-trained Tensorflow graph directly into memory. tf is a global variable.
  3. Lines 24-26 read the labels, which are basically a string of names separated by line breaks. The labels are in the same order as the categories in the graph.

By doing all this in the init part, we only need to do it once and the run part can concentrate on classifying the images without any time consuming loading any more.
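
A hedged sketch of the init part; credentials, container and object names are placeholders:

    import swiftclient.client as swiftclient
    import tensorflow as tf

    graph = None
    labels = None

    def init():
        global graph, labels
        # 1. connect to the SWIFT Object Store
        conn = swiftclient.Connection(
            key="<password>",
            authurl="https://identity.open.softlayer.com/v3",
            auth_version="3",
            os_options={"project_id": "<projectId>",
                        "user_id": "<userId>",
                        "region_name": "<region>"})
        # 2. read the pre-trained graph directly into memory
        _, graph_bytes = conn.get_object("tensorflow", "retrained_graph.pb")
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(graph_bytes)
        graph = tf.Graph()
        with graph.as_default():
            tf.import_graph_def(graph_def, name="import")
        # 3. read the labels: one category name per line, in the same
        #    order as the categories in the graph
        _, label_bytes = conn.get_object("tensorflow", "retrained_labels.txt")
        labels = label_bytes.decode("utf-8").splitlines()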

Tensorflow image manipulation and classification

How to get the image

The image is transferred base64 encoded as part of the request (lines 24-25). Part of the dictionary is the key payload. I chose this name because Node-RED uses the same name for its most important key. Tensorflow has a function to consume base64 encoded data as well, but I could not get it to run with the image encoding I use. So I took the little extra step here of writing the image to a file and reading it back later. By directly consuming it I think we could save some milliseconds of processing time.

Transfer the image

  • Line 27 reads the image back from the file
  • Line 29 decodes the jpeg into an internal representation format
  • Line 30 casts the values to a float32 array
  • Line 31 adds a new dimension at the beginning of the array
  • Line 32 resizes the image to 224x224 to match the size of the training data
  • Line 33 normalizes the image values
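
A hedged sketch of these steps with the TF 1.x API; the file name and the normalization constants follow the Tensorflow for Poets conventions and are assumptions:

    image_data = tf.read_file("/tmp/payload.jpg")             # read image back (line 27)
    image = tf.image.decode_jpeg(image_data, channels=3)      # decode jpeg (line 29)
    image = tf.cast(image, tf.float32)                        # cast to float32 (line 30)
    image = tf.expand_dims(image, 0)                          # add batch dimension (line 31)
    image = tf.image.resize_bilinear(image, [224, 224])       # resize (line 32)
    image = tf.divide(tf.subtract(image, [128.0]), [128.0])   # normalize (line 33)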

Classify the image

  • Lines 34-35 get the input and output layers and store them in variables
  • Line 36 loads the image into Tensorflow
  • Line 39 is where the magic happens: Tensorflow processes the CNN with the input and output layers connected and consumes the Tensorflow image. Furthermore, numpy squeezes all array nesting down to a single array.
  • Line 40 has an array with probabilities for each category.
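
Continuing the sketch; the layer names follow the Tensorflow for Poets retraining script and are assumptions:

    import numpy as np

    input_layer = graph.get_operation_by_name("import/input")
    output_layer = graph.get_operation_by_name("import/final_result")
    with tf.Session() as sess:
        tensor = sess.run(image)              # materialize the preprocessed image
    with tf.Session(graph=graph) as sess:
        results = sess.run(output_layer.outputs[0],
                           {input_layer.outputs[0]: tensor})
    results = np.squeeze(results)             # one probability per category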

Map the result to labels

The missing last step is now to map the label names to the results, which is done in lines 43 and 44.
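
Continuing the sketch:

    # pair each label with its probability and sort by probability
    top = sorted(zip(labels, results), key=lambda item: item[1], reverse=True)
    print(top[:3])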

Build and deploy it in OpenWhisk

The docker container can be built with
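
    # the image name is a placeholder
    docker build -t <dockerhub-user>/tensorflow-classify .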

and pushed with
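
    docker push <dockerhub-user>/tensorflow-classify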

Run it in OpenWhisk

After configuring the command line tool wsk, the action itself can be created with
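
    # action name and image are placeholders
    wsk action create tensorflow-classify --docker <dockerhub-user>/tensorflow-classify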

For testing we need an image, base64 encoded, as a file on our local hard disk. Then we can invoke the call with
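
    # send the base64 encoded image as the "payload" parameter
    wsk action invoke tensorflow-classify --result --param payload "$(base64 < image.jpg)"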

The first execution will take up to 15 seconds because the docker container has to be pulled from docker hub and the graph has to be loaded from the Object Store. Later calls should take around 150 milliseconds of processing time. The parameter --result forces OpenWhisk to wait for the function to end and also shows you the result on your command line.

If you want to get the log file and also an exact execution time try this command:
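
    # --blocking waits for the result and returns the full activation
    # record, including the logs and the "duration" field
    wsk action invoke tensorflow-classify --blocking --param payload "$(base64 < image.jpg)"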

  • The first call results in “duration”: 3805. Your call itself took way longer, because 3805 is only the execution time of the docker container (including init), not the time it took OpenWhisk to pull the docker container from docker hub.
  • The second call results in “duration”: 156.

Build a web UI

Well, UI is nothing I can talk about. But have a look at Niklas' blog post on how to build a web UI. A test installation can be found here: https://visual-recognition-tensorflow.mybluemix.net

Accessing IBM Object Store from Python

IBM Object Store

SWIFT Object Store

IBM offers an S3-compatible Object Store as file storage. Besides S3, the storage can also be accessed via the SWIFT protocol by selecting a different deploy model. As the cost for this storage is extremely low compared to database storage, it is perfect for storing sensor data or other kinds of data for machine learning.

I use the storage, for example, to host my training data and trained models for Tensorflow. Access and payment for the Object Store are managed via IBM Cloud aka Bluemix. And as this offering is included in the Lite plan, the first 25GB are free. 🙂

As there is a problem getting the S3 credentials right now, I use the SWIFT access model. When you request the Object Store service, please make sure to select the SWIFT deploy model so you get the right access model.

Python libs

As the SWIFT protocol is part of OpenStack, the Python access client can be found at https://docs.openstack.org/python-swiftclient. Depending on the security access model, you also need the OpenStack Identity API (Keystone). Both libs are on GitHub (swiftclient and keystone) and are also available via pip:
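
    pip install python-swiftclient python-keystoneclient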

Access storage

Inside the IBM Cloud web interface you can create or read existing credentials. If your program runs on IBM Cloud (Cloud Foundry or Kubernetes), the credentials are also available via the VCAP environment variable. In both cases they look like mine here:
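
(A redacted example of the structure; all values are placeholders.)

    {
      "auth_url": "https://identity.open.softlayer.com",
      "project": "object_storage_xxxxxxxx",
      "projectId": "<projectId>",
      "region": "dallas",
      "userId": "<userId>",
      "username": "<username>",
      "password": "<password>",
      "domainId": "<domainId>",
      "domainName": "<domainName>",
      "role": "admin"
    }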

The important fields are projectId, region, userId and password. The access with Keystone via the Swift Python client looks like this:
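
    import swiftclient.client as swiftclient

    # values come from the credentials above; note the "3" for the
    # Identity API version, which also appears in the authurl
    conn = swiftclient.Connection(
        key="<password>",
        authurl="https://identity.open.softlayer.com/v3",
        auth_version="3",
        os_options={"project_id": "<projectId>",
                    "user_id": "<userId>",
                    "region_name": "<region>"})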

Important is the version information, which is also part of the authurl.

Accessing data

Objects can be read and written, and containers (aka buckets) can be read and modified as described in the documentation. For example:
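
A short sketch; container and object names are placeholders:

    # list all containers in the account
    resp_headers, containers = conn.get_account()
    for container in containers:
        print(container["name"])

    # write an object
    conn.put_object("mycontainer", "hello.txt",
                    contents=b"Hello Object Store",
                    content_type="text/plain")

    # read it back
    resp_headers, contents = conn.get_object("mycontainer", "hello.txt")
    print(contents)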