Accessing IBM Object Store from Python

IBM Object Store

IBM offers an S3-compatible Object Store as file storage. Besides S3, the storage can also be accessed via the SWIFT protocol by selecting a different deploy model. As the cost for this storage is extremely low compared to database storage, it is perfect for storing sensor data or other kinds of data for machine learning.

I use the storage, for example, to host my training data or trained models for TensorFlow. Access and payment for the Object Store are managed via IBM Cloud aka Bluemix, and as this offering is included in the Lite plan, the first 25GB are free. 🙂

As there is currently a problem with getting the S3 credentials, I use the SWIFT access model. Please make sure, when you request the Object Store service, to select the SWIFT version as the access model.

Python libs

As the SWIFT protocol is part of OpenStack, the Python access client can be found at https://docs.openstack.org/python-swiftclient. Depending on the security access model you also need the OpenStack Identity API (Keystone). Both libs are on GitHub (swiftclient and keystone) and also available via pip.

Access storage

Inside the IBM Cloud web interface you can create or read existing credentials. If your program runs on IBM Cloud (Cloud Foundry or Kubernetes), the credentials are also available via the VCAP environment variable. In both cases they look like mine here:
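
The credentials contain a set of fields similar to this (values are placeholders; the exact field list can vary):

    {
      "auth_url": "https://identity.open.softlayer.com",
      "project": "object_storage_xxxxxxxx",
      "projectId": "1234567890abcdef1234567890abcdef",
      "region": "dallas",
      "userId": "abcdef1234567890abcdef1234567890",
      "username": "admin_xxxxxxxx",
      "password": "********",
      "domainId": "fedcba0987654321fedcba0987654321",
      "domainName": "123456",
      "role": "admin"
    }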

The important pieces of information are the projectId, region, userId and password. Access with Keystone via the Swift Python client looks like this:
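
A minimal sketch of such a connection (the auth URL shown here is the public endpoint used by many IBM Object Store instances and may differ for your deployment; the placeholder values come from the credentials above):

    import swiftclient

    conn = swiftclient.Connection(
        key='<password>',
        authurl='https://identity.open.softlayer.com/v3',
        auth_version='3',
        os_options={
            'project_id': '<projectId>',
            'user_id': '<userId>',
            'region_name': '<region>',
        }
    )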

Note the auth version information, which also appears as part of the authurl.

Accessing data

Objects can be read and written, and containers (aka buckets) can be read and modified as described in the documentation. For example:
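
A few typical calls might look like this (container and file names are made up):

    # list all containers of the account
    headers, containers = conn.get_account()

    # create a container and upload an object into it
    conn.put_container('training-data')
    with open('data.csv', 'rb') as f:
        conn.put_object('training-data', 'data.csv', contents=f, content_type='text/csv')

    # download the object again
    headers, body = conn.get_object('training-data', 'data.csv')
    with open('copy.csv', 'wb') as f:
        f.write(body)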


Dev-Ops with OTA update for ESP8266

Over-the-air update (OTA) for ESP8266


Thanks to the esp8266 project on GitHub, there is a very convenient way to update an ESP over the air. Three different methods are available.

  1. The first one is via the Arduino IDE itself: the ESP opens a port and is available for firmware upload just like over a serial connection. Very convenient if you are in the same network.
  2. The second one is via HTTP upload: the ESP provides a web server to upload the bin file. In this case there is no need to be in the same network, but it is still a push and has to be done individually for each installed ESP.
  3. The third one is the most convenient way for a bigger installation base, or in case the devices are behind a firewall (as they always should be) and no remote access is possible. Here the device downloads the firmware itself via HTTP(S) from a web server somewhere on the internet.

For a complete dev-ops pipeline, from pushing to a repository to flashing a device, the third scenario is the easiest one. So we need a place to store the binary files. For convenience I use Amazon S3 to host my binary files, as Travis easily supports S3 upload. But it can be any internet platform where files can be stored and downloaded via HTTP(S). The necessary code on the Arduino side looks like this:
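
The original snippet is not reproduced here, but a minimal sketch of the idea looks roughly like this (the S3 URLs are placeholders, and the line numbers in the list below refer to the original code, not to this sketch):

    #define CURRENT_VERSION REPLACE_WITH_CURRENT_VERSION
    #define FIRMWARE_URL "http://my-bucket.s3.amazonaws.com/firmware/firmware.bin"
    #define VERSION_URL  "http://my-bucket.s3.amazonaws.com/firmware/version.txt"

    #include <ESP8266HTTPClient.h>
    #include <ESP8266httpUpdate.h>

    void checkForUpdate() {
      // load the small version file from s3
      HTTPClient http;
      http.begin(VERSION_URL);
      if (http.GET() == HTTP_CODE_OK) {
        // convert the file content into a number and compare it with the running version
        int latestVersion = http.getString().toInt();
        if (latestVersion > CURRENT_VERSION) {
          // download and flash the new firmware, the ESP reboots afterwards
          ESPhttpUpdate.update(FIRMWARE_URL);
        }
      }
      http.end();
    }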

This Arduino function can be called from time to time (at startup, or every now and then on constantly running systems) to check for a new firmware version and, if a new version is available, automatically flash it and restart.

  • Line 1 is a #define with a placeholder for the current version of the installed firmware. This placeholder is replaced in the build pipeline at Travis with an increasing number, so the compiled code contains something like 23 or 42 instead of REPLACE_WITH_CURRENT_VERSION.
  • Line 2 is the URL for a latest version of a firmware.
  • Line 3 is the URL to a file with only one line with the latest build number in it.
  • Lines 7-9 load the version file from s3.
  • Lines 12-13 convert the file into a number which can be compared with the define from line 1.
  • Line 17 is the firmware update itself. A detailed description of the ESPhttpUpdate class can be found here.

There are two ways to check whether a new version is available and only flash if there is something new. The one we use here is our own mechanism: because on s3 I can only host static files, I place the latest build number in a static file next to the firmware itself. The other way is built into ESPhttpUpdate: the update function can be called with a build number which is compared on the server, and the return code reflects whether there is a new version or not. In that case we would need a script on the server to do the check.

Get an increasing build version number

With a little bash script we can load the last build number from s3 and then increase it, in order to have the current number for our build.
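
The original script is not shown here; a sketch of the approach could look like this (bucket URL and file names are assumptions, so the line numbers in the next paragraph refer to the original script):

    #!/bin/bash
    # load the last build number from s3
    curl -s -o version.txt http://my-bucket.s3.amazonaws.com/firmware/version.txt
    # increase it by one
    VERSION=$(($(cat version.txt) + 1))
    # patch the placeholder in the source code with the new number
    sed -i "s/REPLACE_WITH_CURRENT_VERSION/$VERSION/" src/main.cpp
    # place the new number in the s3 upload folder for the polling ESPs
    echo $VERSION > upload/version.txt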

This script loads the version file (line 3), increases the number (line 4) and patches our source code file (line 11) with this number instead of REPLACE_WITH_CURRENT_VERSION. After running this script the current source code contains the latest number and also the upload folder for s3 has a new file with the newest number in order to inform the polling ESPs.

Travis config file

Travis-ci is incredibly easy to use and very reliable for continuous integration. In combination with PlatformIO it is very easy to compile Arduino code for several types of hardware. Simply configure the hardware in the platformio.ini file:
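
For the Feather Huzzah used here, a minimal platformio.ini could look like this:

    [env:huzzah]
    platform = espressif8266
    board = huzzah
    framework = arduino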

In this case we use the esp8266 feather board aka Huzzah. Just set the board to your kind of ESP.

Travis itself is configured by the .travis.yml file in the root directory of your repository on GitHub:
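
The original file is not reproduced here; a sketch of such a configuration could look like the following (bucket, script and placeholder names are assumptions, and the line numbers in the list below refer to the original file):

    language: python
    python:
      - "2.7"
    cache:
      directories:
        - "~/.platformio"
    install:
      - pip install -U platformio
    script:
      - mkdir -p upload
      - ./buildnumber.sh
      - sed -i "s/SSID_PLACEHOLDER/$WIFI_SSID/" src/main.cpp
      - sed -i "s/PASS_PLACEHOLDER/$WIFI_PASS/" src/main.cpp
      - platformio run
      - cp .pioenvs/huzzah/firmware.bin upload/firmware.bin
    deploy:
      provider: s3
      bucket: my-firmware-bucket
      access_key_id: AKIA...
      secret_access_key:
        secure: "encrypted-with-the-travis-cli"
      skip_cleanup: true
      local_dir: upload
      upload-dir: firmware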

  • Line 1: Platformio is based on python so the build environment (although the code is c++) is python for maintaining platformio.
  • Line 3: Right now platformio is only available for python 2.7 so this line gets the latest stable version of python 2.7.
  • Line 5-7: Gets the latest cache files from the last build in order to save compile time and reduce the costs for Travis. As this service is free for open source projects, it is always nice to save some money for the cool guys.
  • Line 10: Installs the latest version of platformio itself.
  • Line 13: Creates the upload directory which we will upload to s3 later on.
  • Line 14: Calls the build number increase and patch script.
  • Line 15-16: Patches the wireless lan config in case it is not handled inside the arduino code itself.
  • Line 17: Calls platformio to download all libraries and compile the arduino code itself.
  • Line 18: Platformio generates a lot of files for the linker and several other files. We only need the bin file later on, so we copy it here to the upload folder.
  • Line 20: Travis has a built-in functionality to upload files after compilation. This is the part where we upload the files to s3.
  • Line 22: Defines the s3 bucket to upload the files.
  • Line 23-26: Provides the encrypted s3 credentials. See travis documentation on how to create these lines.
  • Line 29: Defines the local folder to be uploaded. Otherwise travis will upload everything from the current run.
  • Line 30: Defines the s3 folder in the bucket where the files will be stored.

With these files in place, Travis monitors your GitHub repository and creates / uploads new firmware versions each time you push changes to your repository. The Arduino code checks for new versions and updates itself as soon as a new version is available. A complete project can be found here in my GitHub repository.


Bash script for automatic picture enhancement and upload to Watson Visual Recognition classifier

Visual Recognition
Visual Recognition Tool

pushVisualRecognition.sh

I hacked a nice script for the Watson Visual Recognition service. There is already a very helpful page available here, but many people (including me) like command line tools or scripts to automate processes. The script applies the following steps to each picture:

  1. Resize to max 500×500 pixels. Watson internally uses only ±250 pixels, so this saves a lot of upload time.
  2. Enhance the image (normalisation) for better results.
  3. Autorotate the images based on the EXIF data from your camera. Watson ignores EXIF data.

The tool expects this directory structure and reads all necessary information from it:

  • Classifiername
    • Classname
      • <more than 10 files>.jpg

The Visual Recognition key is read from the “VISUAL_KEY” environment variable.
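
For example:

    export VISUAL_KEY=<your-visual-recognition-api-key>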

How to install it

The pushVisualRecognition.sh script is part of the bluemixcli docker container as described here. It basically only needs imagemagick and zip installed, so you can also run it without the docker container and download the script directly from GitHub with this link. If you want to run it with docker, the command is:
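
Something along these lines should work (the docker image name is a placeholder; use the one from the docker hub page mentioned above):

    docker run -it --rm -e VISUAL_KEY=$VISUAL_KEY -v $(pwd):/root/host \
        <dockerhub-user>/bluemixcli /bin/bash -c "cd /root/host && pushVisualRecognition.sh"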

How to run it

Simply call pushVisualRecognition.sh in your directory, all necessary information will be retrieved from the directory structure and the environment variable.

Example

Create a directory structure like this one:
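
For example (classifier, class and file names are made up; each class folder needs more than 10 pictures):

    flowers/
    ├── roses/
    │   ├── rose01.jpg
    │   ├── ...
    │   └── rose12.jpg
    └── tulips/
        ├── tulip01.jpg
        ├── ...
        └── tulip14.jpg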

Calling pushVisualRecognition.sh will result in:

Docker container with Bluemix CLI tools

BluemixCLI on docker hub

Being a developer advocate means always playing with the latest version of tools and being on the edge. But installed programs get out of date, and so I always end up with old versions of CLI tools installed. One reason why I love cloud computing (aka other people's computers) so much is that I don't need to update the software; it is done by professionals. In order to always have the latest version of my Bluemix CLI tools at hand and be authenticated, I compiled a little docker container with my favourite command line tools: cf, bx, docker and wsk.

Getting the docker container

I published the docker container on the official docker hub, so getting it is very easy when the docker tools are installed. The following command downloads the latest version of the container and therefore the latest version of the installed cli tools. We need to run this command from time to time to make sure the latest version is available on our computer:
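
The pull command looks like this (the image name is a placeholder; use the name from the docker hub page):

    docker pull <dockerhub-user>/bluemixcli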

Get the necessary parameters

For all command line tools we need usernames, passwords and IDs. Obviously we can not hardcode them into the docker container, therefore we pass them along as command line parameters when starting the docker container.

  • Username (the same as we use to login to Bluemix)
  • Password (the same as we use to login to Bluemix)
  • Org (The Organisation we want to work in, must already be existing)
  • Space (The Space we want to work in, must already be created)
  • AccountID (This can be taken from the URL when we open “Manage Organisation” and click on the account)
  • OpenwhiskID (Individual for org and space; it can be found here: https://console.bluemix.net/openwhisk/learn/cli)

Run the container

The container can be started with docker run, passing all parameters in with -e:
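
A sketch of the call (the variable names are illustrative, and the line number mentioned in the next sentence refers to the original listing):

    docker run -it --rm \
        -e BLUEMIX_USER=me@example.com \
        -e BLUEMIX_PASSWORD=secret \
        -e BLUEMIX_ORG=myorg \
        -e BLUEMIX_SPACE=dev \
        -e BLUEMIX_ACCOUNT_ID=0123456789abcdef \
        -e OPENWHISK_ID=xyz \
        -v $(pwd):/root/host \
        <dockerhub-user>/bluemixcli /bin/bash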

Line 8 mounts the local directory inside the docker container under /root/host. So we can fire up the container and have a bash with the latest tools and our source code available.

Use the tools

Before we can use the tools we need to configure them and authenticate against Bluemix. The script “init.sh” which is located in “/root/” (our working directory) takes care of all logins and authentications.

cf

The Cloudfoundry command line tool for starting, stopping apps and connecting services.

bx

The Bluemix version of the Cloudfoundry command line tool. Including the plugin for container maintenance. By initializing this plugin we also get the credentials and settings for the docker client to use Bluemix as a docker daemon.

docker

The normal docker client with Bluemix as daemon configured.

wsk

The OpenWhisk client already authenticated.

We can configure an alias in our .bashrc, so by just typing “bxdev” we get a bash with the latest cli tools available.
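
For example (a sketch; here the -e parameters from above are collected in an env file so the credentials do not end up in the .bashrc):

    alias bxdev='docker run -it --rm --env-file ~/.bluemix.env -v $(pwd):/root/host <dockerhub-user>/bluemixcli /bin/bash'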

Draw an analytic chart directly in the picture

Debugging feature extraction of images

Histogram chart drawn on image

A big part of writing code for visual recognition in cognitive computing is to extract information, aka features, from images. For example we need to understand colour information and want to see a histogram chart of a certain RGB or HSV channel. OpenCV offers a nice way to combine image analytics and image manipulation. In combination with the matplotlib library it is relatively easy to analyse the picture, extract features and write debug information as a chart into the picture itself.

Getting a test picture

For this demo I will choose a picture from a webcam of the Observatory Friolzheim. On astrohd.de we can get a live picture taken from the observatory dome. Besides the webcam pointing at the building, there is also weather information (very helpful when mapping image features) and an AllSky cam.

Getting the software and libraries

As described in this blog article there is a nice docker container for openCV available. We can use it and just add the missing libs and tools with a minimalistic Dockerfile:
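
A sketch of such a Dockerfile, based on the victorhcm/opencv image used later in this post (the added package is just the one this example needs):

    FROM victorhcm/opencv
    RUN pip install matplotlib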

and build it with
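
a command along these lines (the tag name is arbitrary):

    docker build -t opencv-matplotlib .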

Python file to get the chart for debug visual features
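
The original script is not reproduced here; the following is a minimal sketch of the approach (file and folder names are made up, and the line numbers discussed below refer to the original script, not to this sketch):

    import matplotlib
    matplotlib.use('Agg')                 # headless backend, no display needed
    import matplotlib.pyplot as plt
    import numpy
    import cv2

    image = cv2.imread('pics/webcam.jpg') # OpenCV loads the picture as a BGR numpy array

    fig = plt.figure()
    ax = fig.add_subplot(111)             # subplot that will hold the histogram chart

    # histogram over all channels: flatten the image and count how often each value occurs
    histogram = numpy.bincount(image.ravel(), minlength=256)

    # blank out the extreme ends, reflections and pure black would distort the chart
    histogram[:10] = 0
    histogram[250:] = 0

    # smooth the histogram a little
    histogram = numpy.convolve(histogram, numpy.ones(5) / 5.0, mode='same')

    # feature: the brightness value that occurs most often
    peak = numpy.argmax(histogram)
    print('histogram peak at', peak)

    ax.plot(histogram)                    # draw the chart into the subplot

    # render the matplotlib figure and import it back as a numpy image
    fig.canvas.draw()
    chart = numpy.fromstring(fig.canvas.tostring_rgb(), dtype=numpy.uint8)
    chart = chart.reshape(fig.canvas.get_width_height()[::-1] + (3,))
    chart = cv2.cvtColor(chart, cv2.COLOR_RGB2BGR)              # matplotlib delivers RGB, openCV works in BGR
    chart = cv2.resize(chart, (image.shape[1], image.shape[0])) # match the size of the webcam picture

    # blend 70% original image with 30% chart and save the debug picture as PNG
    combined = cv2.addWeighted(image, 0.7, chart, 0.3, 0)
    cv2.imwrite('debug/webcam.png', combined)

    plt.close(fig)                        # free the figure, important when processing many images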

Line 5

This one is very important if we want to run matplotlib “headless”, meaning without a graphical display attached to the runtime, like in a docker container or on a remote server.

Line 8-10

We create a matplotlib figure with a subplot here, to which we are going to add the chart.

Line 16

As openCV uses numpy to work with images, the image itself is represented as a multidimensional array, and image.ravel() flattens it into a one-dimensional array. Bincount then simply counts how often each value occurs in that array. So with this line we create a one-dimensional array over all RGB values in the image, which is basically a histogram of the image. Doing so mixes all colours together, which is fine for this sample and a classic histogram. If we only want to see a single channel, we can extract that channel first and run the same bincount on it.
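
For example (a sketch; OpenCV stores images in BGR order, so channel 0 is the blue one):

    blue = image[:, :, 0]                                   # channel 0 = blue in BGR order
    histogram = numpy.bincount(blue.ravel(), minlength=256)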

Line 18-19

If the picture contains very shiny or very dark parts, the high and low values like <10 or >250 are disproportionately high. This can distort the histogram. These two lines just reset the histogram array at the upper and lower end to 0.

Line 21-22

These lines just smooth the histogram.

Line 23-24

Histogram without blanked parts

Argmax returns the index of the highest number in an array. This can be considered a feature of the image. For example, if we want to rank or sort our webcam pictures by time of day according to the sunlight, this feature gives us an indication. When we use this feature, lines 18-19 become very important to eliminate the reflections in a picture, for example the observation dome in the demo image. The upper picture contains the chart with those parts already blanked out; the original chart looks like the one on the right.

Line 26-27

The matplotlib chart is drawn into the subplot here. At this point the matplotlib chart and the image itself are still “stored” in different libraries.

Line 29

Here the magic happens: fig.canvas.tostring_rgb exports the chart from matplotlib into a string, and numpy.fromstring imports this string back into a numpy array, just like the picture we imported at the beginning.

Line 30

Here we reshape the newly created image to the same size as the imported image.

Line 31

OpenCV can handle all kinds of image colour representations, like RGB, HSV and so on. The fromstring importer reads RGB, but our image is in BGR, so this line just converts the colour representation.

Line 32

Here we combine both images, weighted with 70% of the original image and 30% of the chart.

Line 33

Finally we save this image in a debug folder with the same filename but, just because we can, as a PNG file.

Line 35

At the end we free the used memory. This is very important when we convert a huge amount of images.

How to use this for visual recognition

By extracting features from images, this little piece of code helps us to add the feature directly into a debug image. In our example we only used the histogram and extracted the colour with the highest count, but with the histogram at hand we can see what underlying information the sorting algorithm used. Sorting images into night, dusk/dawn and daylight can be a way of preparing pictures for further processing in neural networks. By browsing through the images in one of these sorted folders we can easily see what went wrong with a picture which does not belong in this group. A typical problem is that night and day pictures are mis-sorted because of reflections.

Optimize pictures for visual recognition with openCV and gimp

Visual Recognition

Watson result

Computer Vision or Visual Recognition is part of cognitive computing (CC) aka Artificial Intelligence. One of the main concepts is to extract information out of unstructured data. For example, you have a webcam pointing at a highway. As a human you see whether there is a traffic jam or not. For a computer it's only 640x480x3x8 (7,372,800) bits. Visual Recognition helps you to extract information out of this data, for example “this is a highway”. Out-of-the-box systems like Watson are able to tell you what you see in the picture. You can try it here: https://visual-recognition-demo.mybluemix.net. The result can be seen in the picture on the left. So Watson knows it is a highway, and even that it's a divided highway, but it does not tell you whether there is a traffic jam or even a blocked road. Fortunately Watson is always eager to learn; let us see how we can teach him what a traffic jam is. This article only focuses on the picture preparation part, not the Watson training part. See the next postings for the Watson part.

Get pictures

There are many traffic cameras all around, but I am not sure about the licence, so it is hard to use them here as a demo. But let us assume we can take pictures like this one from Wikimedia: Cars in I-70. If you live in southern Germany there are nice traffic cameras from the Strassenverkehrszentrale BaWue. Unfortunately they don't offer the pictures with the right licence for my blog. If you know a great source for traffic cameras with the right licence please let me know.

Prepare pictures for training

Visual Recognition works a little bit like magic. You give Watson 100 pictures of a traffic jam and 100 without a traffic jam, and he learns the difference. But how do we make sure he really learns “traffic jam” and not the weather or the light conditions? And furthermore only one lane, in case the camera shows both lanes? So first we need to make sure we find enough different pictures of the road with a traffic jam under different weather and light conditions. The second part can be done with OpenCV. OpenCV stands for Open Source Computer Vision and helps you to manipulate images. The idea is to mask out the parts we don't want Watson to learn, in our case the second lane and the sky. We can use GIMP to create a mask which we then apply automatically to each picture with openCV.

Gimp

GIMP-Layers

The first step is obviously to load the image in GIMP. Then open the layers dialog; it's located under Windows/Dockable Dialogs/Layers or cmd-L. Here we add a new layer and select it to paint on. Then we select the Paintbrush Tool in the tools menu and just paint the parts black we don't want Watson to learn.

Blanked-image

Then we hide the original image by pressing the eye symbol in the layer dialog. This should leave us with only the black painting we did before. This will be our mask for openCV, to be applied to all pictures. Under File/Export you can save it as mask.jpg. Make sure it is only the black mask and not the picture with the black painting.

Use openCV in docker

As openCV is quite a lot to install, we can easily use it within docker to work with our pictures. We can mount host directories inside a docker container, in this case our directory with the pictures:
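
The call could look like this (the mount point /host matches the description below):

    docker run -it --rm -v $(pwd):/host victorhcm/opencv /bin/bash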

This brings up the openCV docker container from victorhcm and opens a shell with our current directory mounted under /host. As soon as you exit the container it will be removed because of the “--rm” parameter. Don't worry, only the docker container will be deleted; everything under /host is mounted from the host system and will remain. Everything you save in other directories will be deleted.

How to mask out part of the picture

The Python program that uses openCV to mask out all pictures in a directory is then really simple:
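
The original script is not reproduced here; a minimal sketch of the idea follows (the original applies the mask with a bitwise add, while this sketch simply sets every pixel that is painted dark in the mask to black; the line numbers below refer to the original script):

    import glob
    import os
    import cv2

    # load the mask as a grayscale image, the painted parts are the dark ones
    mask = cv2.imread('mask.jpg', cv2.IMREAD_GRAYSCALE)

    for path in glob.glob('pics/*.jpg'):
        image = cv2.imread(path, cv2.IMREAD_COLOR)   # load the picture to work on in colour
        image[mask < 128] = 0                        # blank out everything painted in the mask
        cv2.imwrite(os.path.join('masked', os.path.basename(path)), image)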

Basically the program iterates through all “jpg” pictures in the subfolder “pics” and saves the masked pictures under the same name in the “masked” folder. Both directories have to exist before you start the script. In order to keep the script reduced to the important parts, I left the directory creation and checking out of this script.

Line 4

Loads the mask image as a grayscale image.

Line 8

Loads the image to work on as a colour image.

Line 9

Here the real work is done: this applies the mask with a bitwise add over all pixels. Therefore the blanked parts win and the transparent parts let the normal picture come through.

Line 10

Saves the new masked picture in the “masked” folder.

Preselect pictures

For the learning process we need to sort the pictures by hand: one bucket with traffic jam and the other with everything OK.

How to use travis-ci to check your application code and the smart contract

Travis CI – Build summary

Hyperledger projects code structure

Travis-CI is one of the most used continuous delivery platforms for open source projects. In order to check your Hyperledger blockchain application you need to check at least two different languages in the same travis-ci configuration, as your smart contract is written in Go and your application most likely in JavaScript or Python or whatever(tm).

Lets assume you have a project with:

  • Smart Contract written in Go
  • Backend Server written in Python
  • Frontend written in JavaScript (NodeJS)

If you run your Hyperledger blockchain for development purposes on Bluemix you will need to compile your code against v0.6. This leads to a problem because you can not specify a branch when you check out a repository with “go get”. Therefore we need to check out the code “by hand”. Let us assume your project looks like this:
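
For example a layout like this (directory and file names are made up):

    myproject/
    ├── chaincode/
    │   ├── chaincode.go
    │   └── chaincode_test.go
    ├── backend/
    │   ├── server.py
    │   └── server_test.py
    └── frontend/
        ├── package.json
        ├── app.js
        └── test/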

Each part of the project (smart contract, backend, frontend) is in a separate directory and has a code part and a unit test part. Without getting into details on how to run unit tests for each language, we concentrate now on the .travis.yml file.

Configure travis.yml

We have two test runs for Python (one with 2.7, the other one with the latest nightly build), one for the JavaScript part (no unit test in this example) and one for the Go part.
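
The original file is not shown here; a sketch of such a build matrix could look like this (versions and paths are assumptions, and lines 29 and 30 mentioned below refer to the original file):

    matrix:
      include:
        - language: python
          python: "2.7"
          script: cd backend && python -m unittest discover
        - language: python
          python: "nightly"
          script: cd backend && python -m unittest discover
        - language: node_js
          node_js: "node"
          script: cd frontend && npm install
        - language: go
          go: "1.7"
          script:
            - mkdir -p $GOPATH/src/github.com/hyperledger
            - git clone -b v0.6 https://github.com/hyperledger/fabric.git $GOPATH/src/github.com/hyperledger/fabric
            - cd chaincode && go build ./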

Lines 29 and 30 are important. Before we run the go build, we create the directory for hyperledger and manually check out the official hyperledger code with the v0.6 branch. After that we can build the smart contract normally by calling go build ./


BC95 Board

NB-IoT with the BC95 and Arduino

NBIothack
Hardware hacking with the mobile c-lab

Last weekend the c-base and friends team had a great time doing some hacking with Narrow Band IoT (NB-IoT) from Deutsche Telekom at the nbiot-hackathon at hub:raum. We could put our hands on the BC95 chip, a dev board and access to the Telekom test network. Besides hacking we had a great time with funny hats and our obligatory overalls.

The BC95 Board

The board we could use was the BC95-B8, which we mounted on a development board with a support controller and serial converter. Besides the board setup we also soldered a PCB with some sensors and a Teensy board to control the BC95 and the sensors.

Wiring

Pin-Connection

The dev board itself has an RS232 converter to give access via “normal” RS232 connectors. This is convenient for older laptops or desktops, but luckily they also give you access to the same UART interface at a 3.3V level. The pins are pre-soldered on a 10 pin header, so it's easy to connect this to an Arduino or Raspberry Pi via the serial connection. As you can see in the picture it is pins 1, 2 and 6. No level converter necessary.

AT Command Set
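
The full terminal log is not included here; the command sequence described below was roughly the following (module responses are omitted, so the line numbers differ from the walkthrough; the APN and the IoT core address are examples):

    AT
    AT
    AT+NRB
    AT+NBAND=8
    AT+CGDCONT=1,"IP","internet.nbiot.telekom.de"
    AT+NCDP=<iot-core-ip>
    AT+CFUN=1
    AT+CGATT=1
    AT+NPING=8.8.8.8
    AT+NSOCR=DGRAM,17,16666,1
    AT+NSOST=0,<target-ip>,16666,5,48656C6C6F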

Line 1,2

The serial protocol can be any speed at 8N1; the board auto-detects the speed at the first communication, therefore you need to send the AT command several times (normally 2) to set the speed. Normally the first AT command is answered with ERROR and the second one with OK. Make sure you get an OK before you continue.

Line 3

In order to get a clean setup we first reboot the board with “NRB”. It takes a few seconds and then comes back with OK.

Line 7

Depending on your network you need to set the band with “NBAND”. The Telekom test network is on 900 MHz so we go with 8. Other bands are 5 (850 MHz) and 20 (800 MHz).

Line 8

Depending on your setup and provider you need to set the APN with the CGDCONT command.

Line 9

Connect to the IoT Core

Line 10

Power on the module

Line 11

Connect to the network. This can take several seconds, even a minute. You can always check the connection with “at+cgatt?” and look for “+CGATT:1”. You can double check the existing connection by asking for the IP address which is assigned to you: send “at+cgpaddr=1” to get, for example, “+CGPADDR:1,10.100.0.16”.

Line 15

Ping a server to test everything is fine. (In this case the google DNS server)

Line 16

Open a UDP socket to receive answers. In our case the DGRAM and 17 are mandatory, but the port you are using (in our case 16666) is up to you.

Line 17

Send your UDP data package. The first parameter is the socket (0) we opened above. The second one is the address you want to send the data to (IP or name). The third one is the receiver's port (16666). The fourth is the amount of data you want to send (keep it below 100 bytes) and the last one is your data in hex notation. I recommend asciitohex.com to convert the string you want to send.

Arduino

This Arduino code is really a fast and ugly hack from the hackathon to send out the data. It does not listen for the AT responses or anything else. So this is only an example of how NOT to do coding, but it worked for the hackathon.
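
A rough sketch of what such a blind AT sequence can look like (not the original hackathon code; serial port, timings and the payload are assumptions):

    // Teensy hardware serial connected to the BC95 dev board
    #define BC95 Serial1

    void sendCmd(const char *cmd, unsigned long wait) {
      BC95.println(cmd);   // send the AT command, CR/LF terminated
      delay(wait);         // just wait instead of parsing the answer
    }

    void setup() {
      BC95.begin(9600);
      sendCmd("AT", 1000);                   // sent twice so the module can detect the baud rate
      sendCmd("AT", 1000);
      sendCmd("AT+NBAND=8", 1000);           // 900 MHz band for the Telekom test network
      sendCmd("AT+CFUN=1", 5000);            // power on the module
      sendCmd("AT+CGATT=1", 30000);          // attach to the network, can take a while
    }

    void loop() {
      sendCmd("AT+NSOCR=DGRAM,17,16666,1", 1000);
      // hypothetical payload "48656C6C6F" = "Hello" in hex, sent via UDP
      sendCmd("AT+NSOST=0,192.0.2.1,16666,5,48656C6C6F", 1000);
      delay(60000);
    }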

WordPress docker setup with nginx proxy

Setup a wordpress blog on docker with nginx as reverse proxy

Docker setup with wordpress, nginx and mysql containers

My friend and very experienced colleague Niklas Heidloff convinced me to also start a blog with all the geeky things I am doing all day long. In order to make it more interesting for me, I decided to host WordPress on my own instead of using wordpress as a service (WaaS?). Sylwester from Fablab.berlin pointed me to Vultr for hosting, and so here we start. The idea is to have a simple docker setup with 3 containers: one for the Nginx webserver facing the evil internet and proxying the WordPress container, which lives in the local private docker network. Doing so I can also redirect to other web services (containers) later. Luckily there are already maintained docker containers available for all 3 parts, so I only need to customize the nginx container with a Dockerfile and can use the two others right as they are.

Install Docker on Ubuntu

Either you go with a docker provider like Bluemix, or you get a virtual machine from Softlayer or any other provider. In my case I have chosen a virtual server, so I had to install docker on Ubuntu LTS, which is really easy. Basically you add a new repository entry to your apt sources and install the latest stable docker packages. There is also a script available on get.docker.com, but I don't feel comfortable executing a shell script right from the net with root access. But it's up to you:

wget -qO- https://get.docker.com/ | sh
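
If you prefer the repository route described above, the steps look roughly like this (following Docker's documentation for Ubuntu; double-check against the current docs):

    sudo apt-get update
    sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
    sudo apt-get update
    sudo apt-get install docker-ce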

Unlike the docker installation on, for example, a Mac, docker on Linux does not include docker-compose. Installing docker-compose is straightforward: the script can be downloaded from GitHub here: https://github.com/docker/compose/releases.
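
For example (the version number is just an example; pick the latest release from that page):

    sudo curl -L https://github.com/docker/compose/releases/download/1.17.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
    sudo chmod +x /usr/local/bin/docker-compose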

Docker-compose

Docker-compose takes care of a docker setup containing more than one docker container, including networking and also basic monitoring. The following file starts and builds all docker containers with nginx, mysql and wordpress. It also exports the volumes on the host file system for easy backup and persistence along docker container rebuilds, and monitors whether the docker containers are up and running.
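
The original docker-compose.yml is not reproduced here; a sketch with the same three services could look like this (passwords are placeholders, and the line numbers mentioned in the following paragraphs refer to the original file):

    version: '2'
    services:
      db:
        image: mysql:5.7
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: choose-a-strong-password
          MYSQL_DATABASE: wordpress
          MYSQL_USER: wordpress
          MYSQL_PASSWORD: another-strong-password
        volumes:
          - ./db:/var/lib/mysql
      wordpress:
        image: wordpress:latest
        restart: always
        depends_on:
          - db
        environment:
          WORDPRESS_DB_HOST: db:3306
          WORDPRESS_DB_USER: wordpress
          WORDPRESS_DB_PASSWORD: another-strong-password
          WORDPRESS_DB_NAME: wordpress
        volumes:
          - ./wordpress:/var/www/html
      nginx:
        build:
          context: .
          dockerfile: Dockerfile-nginx
        restart: always
        ports:
          - "80:80"
        depends_on:
          - wordpress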

Mysql is the first container we bring up, with environment variables for the database like username, password and database name. Line 7 takes care of saving the database files outside the docker container, so you can delete the docker container, start a new one and still have the same database up and running. Point this to wherever you want to have it; in this case in “db” under the same directory. Also make sure you come up with decent passwords.

The second container is wordpress. Same here with the host folder on line 21. Furthermore make sure you have the same user, password and db name configured as in the mysql container configuration.

The last one is nginx as the internet-facing container. You expose port 80 here. While you just specify an image for the other two, for this one you configure a Dockerfile and a build context to customize nginx for the network setup. If you only want to host static files you can add them via volume mounts, but in our case we need to configure nginx itself, so we need a customized Dockerfile as described below.

Dockerfile for nginx setup
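
A minimal version of it (as described below) might look like this:

    FROM nginx:latest
    COPY default.conf /etc/nginx/conf.d/default.conf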

This Dockerfile inherits everything from the latest nginx image and copies the default.conf file into it. See the next chapter for how to set up the config file.

Nginx config file
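
The original file is not reproduced here; a sketch of a default.conf could look like this (the server_name is a placeholder, and the line numbers in the next paragraph refer to the original file):

    server {
        listen 80;
        listen [::]:80;

        server_name blog.example.com;

        location / {
            proxy_pass http://wordpress;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }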

Lines 2 and 3 configure the ports we want to listen on; we need one for IPv4 and one for IPv6. Important is the proxy configuration in lines 8 to 15. Line 11 redirects all calls to “/” (so without a path in the URL) to the server wordpress. As we used docker-compose, docker takes care of making this address available via the internal DNS server. Lines 13-15 rewrite the http headers in order to map everything to the external URL, otherwise we would end up with auto-generated links pointing to http://wordpress

Start the System

If everything is configured and the docker-compose.yml, default.conf, Dockerfile-nginx and the folders db and wordpress are in the same folder, we can start everything from this folder with:
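
That command is:

    docker-compose up -d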

The parameter “-d” starts the setup in the background (daemon). For the very first run I would recommend using it without the “-d” parameter to see all debug messages.