Image Recognition with Tensorflow classification on OpenWhisk

The big picture

Image classification

As described in a previous article, we (Niklas and I) are going to use Tensorflow to classify images into pre-trained categories. The previous article described how to train a model with Tensorflow on Kubernetes. This article describes how to use the pre-trained model, which is stored on Object Storage. Similar to the training, we will also use Docker to host our program, but this time we will use OpenWhisk as the platform.

Like in the first part, I also use the Google training Tensorflow for Poets. This time not the code itself; instead I copied the important classification parts from their script into my Python file.

OpenWhisk with Docker

OpenWhisk is an open source implementation of a so-called serverless computing platform. It is hosted by the Apache Software Foundation and maintained by many companies. IBM offers OpenWhisk on the IBM Cloud, and for testing and playing around with it the use is free. Besides Python and JavaScript, OpenWhisk also offers the possibility to run Docker containers. Internally, all Python and JavaScript code is executed in Docker containers anyway. So we will use the same official Tensorflow Docker container we used to build our training Docker container.

Internally OpenWhisk has three stages for Docker containers. When we register a new action, the execution instruction is only stored in a database. As soon as the first call reaches OpenWhisk, the Docker container is pulled from the repository, initialised by a REST call to '/init' and then executed by calling the REST interface '/run'. The Docker container stays active, and each time the action is called only the '/run' part is executed. After some time of inactivity the container is destroyed and needs to be initialised with '/init' again. After even more time of inactivity even the image is removed and needs to be pulled again.
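Such a blackbox container only has to expose these two REST endpoints on port 8080. A minimal sketch of that contract (using Flask; this illustrates the protocol, it is not the code of this project) could look like this:

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route('/init', methods=['POST'])
    def init():
        # one-time setup (loading models, opening connections) goes here
        return jsonify({'ok': True})

    @app.route('/run', methods=['POST'])
    def run():
        # OpenWhisk delivers the action parameters under the key 'value'
        params = request.get_json(force=True).get('value', {})
        # the actual work goes here
        return jsonify({'echo': params})

    if __name__ == '__main__':
        # OpenWhisk talks to the container on port 8080
        app.run(host='0.0.0.0', port=8080)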

The setup

The code itself is stored on GitHub. Let's first have a look at how we build the Docker container:

Dockerfile
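The original Dockerfile is not reproduced here; the following is a sketch along the lines of the description below (file names and the base image tag are assumptions):

    FROM tensorflow/tensorflow:1.4.0
    # only needed to access the SWIFT Object Store
    RUN pip install python-swiftclient python-keystoneclient
    ADD classify.py /classify.py
    EXPOSE 8080
    # the program keeps serving /init and /run until OpenWhisk stops the container
    CMD ["python", "/classify.py"]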

As you can see, this Dockerfile is really simple. It basically installs the Python requirements to access the SWIFT Object Store and starts the Python program. The Python program keeps running until the OpenWhisk system decides to stop the container.

We make heavy use of the idea of having an init and a run part in the executed code. So the Python program has two main parts: the first one is init and the second one is run. Let's have a look at the init part first, which basically sets the stage for the classification itself.

/init

Unfortunately it is not so easy to configure the init part dynamically with parameters from outside. So for this demo we need to bake the Object Store credentials into our source code. It doesn't feel right, but for a demo it is ok. In a later article I will describe how to change the flow and inject the parameters dynamically. So what are we doing here?

  1. Lines 10-16 set up a connection to the Object Store as described here.
  2. Lines 18-22 read the pre-trained Tensorflow graph directly into memory; the graph is kept in a global variable.
  3. Lines 24-26 read the labels, which are basically a string of names separated by line breaks. The labels are in the same order as the categories in the graph.

By doing all this in the init part we only need to do it once, and the run part can concentrate on classifying the images without any time-consuming loading.
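A condensed sketch of such an init part (credentials, container and file names are placeholders; the real code is in the GitHub repository):

    from swiftclient.client import Connection
    import tensorflow as tf

    graph = None
    labels = []

    def init():
        global graph, labels
        # connect to the Swift Object Store (credentials baked in for the demo)
        conn = Connection(key='<password>',
                          authurl='https://identity.open.softlayer.com/v3',
                          auth_version='3',
                          os_options={'user_id': '<userId>',
                                      'project_id': '<projectId>',
                                      'region_name': '<region>'})

        # read the pre-trained graph directly into memory
        _, graph_bytes = conn.get_object('<bucket>', 'retrained_graph.pb')
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(graph_bytes)
        tf.import_graph_def(graph_def, name='')
        graph = tf.get_default_graph()

        # read the labels, one category per line, same order as in the graph
        _, label_bytes = conn.get_object('<bucket>', 'retrained_labels.txt')
        labels = [line.strip() for line in label_bytes.decode('utf-8').splitlines() if line.strip()]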

Tensorflow image manipulation and classification

How to get the image

The image is transferred base64 encoded as part of the request (lines 24-25). Part of the request dictionary is the key payload. I chose this name because Node-RED uses the same key for its main content. Tensorflow has a function to consume base64 encoded data as well, but I could not get it to run with the image encoding I use. So I took the little extra step here and write the image to a file and read it back later. By consuming it directly I think we could save some milliseconds of processing time.
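A sketch of that step (the key name payload as described above; the temporary path is just an example):

    import base64

    def write_image_from_request(params, path='/tmp/image.jpg'):
        # the base64 encoded image arrives under the key 'payload'
        with open(path, 'wb') as image_file:
            image_file.write(base64.b64decode(params['payload']))
        return path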

Transfer the image

  • Line 27 reads the image back from the file
  • Line 29 decodes the jpeg into an internal representation format
  • Line 30 casts the values to a float32 array
  • Line 31 adds a new dimension at the beginning of the array
  • Line 32 resizes the image to 224 x 224 to match the size of the training data
  • Line 33 normalizes the image values
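A sketch of these steps with the TensorFlow 1.x API (the size and normalisation constants are the usual MobileNet values and may differ from the original file):

    import tensorflow as tf

    def read_tensor_from_image_file(file_name, input_height=224, input_width=224,
                                    input_mean=128, input_std=128):
        file_reader = tf.read_file(file_name)                    # read the image back from file
        image = tf.image.decode_jpeg(file_reader, channels=3)    # decode the jpeg
        image = tf.cast(image, tf.float32)                       # cast the values to float32
        image = tf.expand_dims(image, 0)                         # add a batch dimension in front
        image = tf.image.resize_bilinear(image, [input_height, input_width])  # resize like the training data
        image = tf.divide(tf.subtract(image, [input_mean]), [input_std])      # normalize the values
        with tf.Session() as sess:
            return sess.run(image)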

Classify the image

  • Lines 34-35 get the input and output layers and store them in variables
  • Line 36 loads the image into Tensorflow
  • Line 39 is where the magic happens. Tensorflow processes the CNN with the connected input and output layers and consumes the prepared image tensor. Numpy then squeezes out all array nesting into a single array.
  • Line 40 holds an array with the probability for each category.
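A sketch of the classification itself (the layer names input and final_result are the ones a MobileNet graph retrained with Tensorflow for Poets typically uses; they may differ for other models):

    import numpy as np
    import tensorflow as tf

    def classify(image_tensor, graph):
        input_operation = graph.get_operation_by_name('input')
        output_operation = graph.get_operation_by_name('final_result')
        with tf.Session(graph=graph) as sess:
            results = sess.run(output_operation.outputs[0],
                               {input_operation.outputs[0]: image_tensor})
        # squeeze away the batch dimension -> one probability per category
        return np.squeeze(results)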

Map the result to labels

The missing last step is to map the label names to the results, which is done in lines 43 and 44.
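A sketch of that mapping, given the probability array and the label list from the sketches above:

    def map_to_labels(results, labels, top=5):
        # indices of the highest probabilities, best match first
        top_k = results.argsort()[-top:][::-1]
        return [{'label': labels[i], 'probability': float(results[i])} for i in top_k]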

Build and deploy it in OpenWhisk

The Docker container can be built and pushed to Docker Hub with commands along these lines (the image name is a placeholder; use your own repository):
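    docker build -t <dockerhub-user>/tensorflow-openwhisk-classifier .
    docker push <dockerhub-user>/tensorflow-openwhisk-classifier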

Run it in OpenWhisk

After configuring the command line tool wsk, the action itself can be created with a command along these lines (the action and image names are placeholders):
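    wsk action create tensorflow-classify --docker <dockerhub-user>/tensorflow-openwhisk-classifier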

For testing we need an image base64 encoded as a file on our local hard disk. Then we can invoke the call, for example like this (action name as created above):
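    base64 image.jpg > image.b64
    wsk action invoke tensorflow-classify --result --param payload "$(cat image.b64)"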

The first execution will take up to 15 seconds because the Docker container has to be pulled from Docker Hub and the graph has to be loaded from the Object Store. Later calls should take around 150 milliseconds of processing time. The parameter --result forces OpenWhisk to wait for the function to finish and also shows you the result on your command line.

If you want to get the log file and also an exact execution time, try a blocking invocation along these lines:
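    wsk action invoke tensorflow-classify --blocking --param payload "$(cat image.b64)"

The --blocking flag returns the whole activation record, including the "logs" and "duration" fields. Alternatively, wsk activation list and wsk activation get <activation id> show the same information for past invocations.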

  • The first call results in "duration": 3805. Your call itself took way longer, because 3805 only covers the execution of the Docker container (including init), not the time it took OpenWhisk to pull the Docker container from Docker Hub.
  • The second call results in "duration": 156.

Build a web UI

Well, UI is nothing I can talk about. But have a look at Niklas' blog post on how to build a web UI. A test installation can be found here: https://visual-recognition-tensorflow.mybluemix.net

Image Recognition with Tensorflow training on Kubernetes

The big picture

Modern Visual Recognition is done with deep neural networks (DNN). One framework to build this kind of network (and I would say the most famous one) is Tensorflow from Google. Being open source and simply awesome, it is perfect to play around with and build your own Visual Recognition system. As compute power and especially RAM increase, there is now a chance to build much more complicated networks than in the 90s, when there were only one or two hidden layers.

One architecture is the Convolutional Neural Network (CNN). The idea is very close to the structure of the brain. The basic idea is to intensively train a network on gazillions of images and let it learn features inside the many hidden layers. Only the last layer connects features to real categories. Similar to our brain, the network learns concepts and patterns, not really the picture groups.

After spending a lot of compute power to train these networks, they can easily be reused to learn new images by replacing only the last layer with a new one representing the categories to be trained. Training this network means training only the connection between the last layer and the rest of the network. This training is extremely fast (only minutes), compared to months for the complete network. The charming effect is to train only the "mapping" from features to categories. This is what we are going to do now.

Basically the development of such a system can be divided into two parts. The first part (training) is described here. For the "use", aka classification, have a look at the second part on my blog. I developed this system together with a good friend and colleague of mine, Niklas Heidloff; check out his blog and Twitter account. The described system has mainly three parts: two Docker containers described in this blog and one epic frontend described in Niklas' blog. The source code can be found on GitHub.

ImageNet

If you want to train a neural network (supervised learning) you need a lot of images in categories. Not ten or a hundred, but rather hundreds of thousands or even 15 million pictures. A wonderful source for this is ImageNet: more than 14 million pictures organized in more than 20k categories. So a perfect source to train this kind of network. Google has done the same and participated in the Large Scale Visual Recognition Challenge (ILSVRC). Not only Google but many other research institutes built networks on top of Tensorflow in order to achieve better image recognition. The outcome is a set of pre-trained models which can be used for systems like the one we want to build.

Tensorflow for poets

As always, it is best to stand on the shoulders of giants. So in our case we use the Python code developed by Google for the codelabs. In this very fascinating and content-rich online training on Tensorflow, Google developed Python code to retrain the CNN and also to use the newly trained model to classify images. Actually, the training part just uses the original code, wraps it into a Docker container and connects this container to an Object Store. So not much new work there, but a nice and handy way to use this code for your own project. I highly recommend taking the 15 minutes for the online training to learn how to use Tensorflow and Python.

MobileNet vs. Inception

As discussed, there are many trained networks available; the most famous ones are Inception and MobileNet. Inception has a much higher classification rate but also needs more compute power, both for training and for classification. While we use Kubernetes in "the cloud", the training is not a big problem. But since we want to use the classifier later on OpenWhisk, we need to take care of the RAM usage (512 MB). The Docker container can be configured to train either model, but for OpenWhisk we are limited to MobileNet.

Build your own classifier

Visual Recognition Architecture

As you can see in the picture, we need to build two containers. The left one loads the training images and the categories from an Object Store, trains the neural network and uploads the trained net back to the Object Store. This container can run on your laptop or somewhere in "the cloud". As I developed a new passion for Kubernetes, I added a small minimal yaml file to start the Docker container on a Kubernetes cluster. Not really with multiple instances, as the Python code only uses one container, but see it as some kind of "offloading" the workload.

The second container (described in the next article) runs on OpenWhisk and uses the pre-trained network downloaded from the Object Store.

Use docker / kubernetes to train your model

We use the official Tensorflow Docker container with Python support as published by Google, and the training script from Tensorflow for Poets.

Dockerfile

The Dockerfile is straightforward. We use the Tensorflow Docker image as base and install the git and zip (for unpacking the training data) packages. Then we install all necessary Python requirements. As all the Tensorflow related packages for Python are already installed, these packages are only for accessing the Object Store (see my blog article). Then we clone the official GitHub tensorflow-for-poets repository, add our execution shell script and finish with the CMD to call this script.
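A sketch of such a Dockerfile (the repository URL, tags and file names are assumptions, not necessarily the exact ones used in the project):

    FROM tensorflow/tensorflow:1.4.0
    RUN apt-get update && apt-get install -y git zip
    # only needed to access the SWIFT Object Store
    RUN pip install python-swiftclient python-keystoneclient
    RUN git clone https://github.com/googlecodelabs/tensorflow-for-poets-2 /tensorflow-for-poets-2
    ADD exec.sh /exec.sh
    RUN chmod +x /exec.sh
    CMD ["/exec.sh"]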

Execution Script

All important and sensitive parameters are configured via environment variables passed in with the Docker container call. The basic parameters that are always the same are set here: where to do the Keystone authentication and which protocol version to use for the Object Store. The swift command downloads a zip file containing all training images, with a subfolder for each category. So you need to build a folder structure like this one:
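For example (the category names are made up):

    training-data.zip
      cats/
        cat001.jpg
        cat002.jpg
        ...
      dogs/
        dog001.jpg
        dog002.jpg
        ...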

The execution script unpacks the training data and calls the retrain script from Tensorflow-for-poets. Important parameters are how_many_training_steps (can be reduced to speed things up for testing) and the architecture. As the architecture can be changed depending on how accurate the classifier has to be and how much memory is available for it, this parameter is also handed over via a command line parameter.
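Inside the execution script the retrain call looks roughly like this (the flag names are the ones of the Tensorflow for Poets retrain script; the paths and the step count are examples):

    python retrain.py \
      --image_dir=training_data \
      --how_many_training_steps=500 \
      --architecture="${TF_MODEL}" \
      --output_graph=retrained_graph.pb \
      --output_labels=retrained_labels.txt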

The image can be built and pushed to Docker Hub with commands along these lines (the image name is a placeholder; use your own repository):
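    docker build -t <dockerhub-user>/tensorflow-train .
    docker push <dockerhub-user>/tensorflow-train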

train.yml

After building the Docker container and pushing it to Docker Hub, this yaml file triggers Kubernetes to run the container with the given parameters, many of them taken from your Object Store credentials file:

  • OS_USER_ID  -> VCAP[‘userId’]
  • OS_PASSWORD -> VCAP[‘password’]
  • OS_PROJECT_ID -> VCAP[‘projectId’]
  • OS_REGION_NAME -> VCAP[‘region’]
  • OS_BUCKET_NAME -> Up to you however you called it
  • OS_FILE_NAME -> Up to you, however you called it
  • TF_MODEL -> ‘mobilenet_0.50_{imagesize}’ or ‘inception_v3’
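A minimal sketch of such a train.yml (the image name and all values are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: tensorflow-train
    spec:
      restartPolicy: Never
      containers:
      - name: tensorflow-train
        image: <dockerhub-user>/tensorflow-train
        env:
        - name: OS_USER_ID
          value: "<userId from VCAP>"
        - name: OS_PASSWORD
          value: "<password from VCAP>"
        - name: OS_PROJECT_ID
          value: "<projectId from VCAP>"
        - name: OS_REGION_NAME
          value: "<region from VCAP>"
        - name: OS_BUCKET_NAME
          value: "<your bucket>"
        - name: OS_FILE_NAME
          value: "<your zip file>"
        - name: TF_MODEL
          value: "mobilenet_0.50_224"

It can then be started with kubectl create -f train.yml.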

Use Object Store to store your trained model for later use

We decided to use Object Store to store our training data and also the re-trained network. This could be any other place as well, for example S3 on AWS or your local hard disk. Just change the Dockerfile and the exec file to download and upload your data accordingly. More details on how to use the Object Store can be found in my blog article.

 

Draw an analytic chart directly in the picture

Debugging feature extraction of images

Histogram chart drawn on image

A big part of writing code for visual recognition in cognitive computing is to extract information, aka features, from images. For example we need to understand colour information and want to see a histogram chart of a certain RGB or HSV channel. OpenCV offers a nice way to combine image analytics and image manipulation. In combination with the library matplotlib it is relatively easy to analyse the picture, extract features and write debug information as a chart into the picture itself.

Getting a test picture

For this demo I will choose a picture from a webcam of the Observatory Friolzheim. On astrohd.de we can get a live picture taken from the observatory dome. Besides the webcam pointing at the building, there is also weather information (very helpful when mapping image features) and an AllSky cam.

Getting the software and libraries

As described in this blog article there is a nice Docker container for openCV available. We can use this one and just add the missing libs and tools with a minimalistic Dockerfile:
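A sketch of such a Dockerfile, using the victorhcm/opencv image mentioned below as base and only adding matplotlib:

    FROM victorhcm/opencv
    RUN pip install matplotlib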

and build it with a command like this (the tag is just an example):
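    docker build -t opencv-matplotlib .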

Python file to get the chart for debug visual features
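The original script is not reproduced here; the following is a minimal sketch of the steps explained below, so the line numbers in the next sections refer to the original file, not to this sketch. File locations and figure handling are assumptions; in particular, this sketch resizes the finished chart to the picture size instead of sizing the figure to match.

    import os

    import cv2
    import numpy as np
    import matplotlib
    matplotlib.use('Agg')                      # headless backend, see "Line 5"
    import matplotlib.pyplot as plt


    def debug_histogram(filename, out_dir='debug'):
        fig = plt.figure()
        ax = fig.add_subplot(111)              # subplot that will hold the chart

        image = cv2.imread(filename)

        # histogram over all colour values mixed together
        histogram = np.bincount(image.ravel(), minlength=256)

        # blank out the extreme ends (very dark and very shiny parts)
        histogram[:10] = 0
        histogram[250:] = 0

        # simple smoothing
        histogram = np.convolve(histogram, np.ones(5) / 5.0, mode='same')

        # the most common brightness value, usable as a simple feature
        peak = np.argmax(histogram)

        # draw the chart into the subplot
        ax.plot(histogram)
        fig.canvas.draw()

        # export the chart as a string and re-import it as a numpy image
        chart = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8)
        width, height = fig.canvas.get_width_height()
        chart = chart.reshape(height, width, 3)

        # matplotlib delivers RGB, openCV works with BGR
        chart = cv2.cvtColor(chart, cv2.COLOR_RGB2BGR)
        chart = cv2.resize(chart, (image.shape[1], image.shape[0]))

        # combine 70% of the original picture with 30% of the chart
        combined = cv2.addWeighted(image, 0.7, chart, 0.3, 0)

        cv2.imwrite(os.path.join(out_dir, os.path.basename(filename) + '.png'), combined)

        plt.close(fig)                         # free the figure memory
        return peak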

Line 5

This one is very important if we want to run matplotlib "headless", meaning without a graphical display attached to the runtime, like in a Docker container or on a remote server.

Line 8-10

We create a matplotlib figure with a subplot here, to which we are going to add the chart.

Line 16

As openCV uses numpy to work with images, the image itself is represented as a multidimensional array, and image.ravel() flattens this into a one-dimensional array. Bincount just counts how often each value occurs in this array. So with this line we create a new one-dimensional array of all the RGB colour values in the image, which is basically a histogram of the image. By doing so we mix all colours together, which is ok for this sample or a classic histogram. However, if we want to see only one channel, we can build the histogram from that channel only, for example like this (a sketch; channel 0 is blue in openCV's BGR order):
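    histogram = np.bincount(image[:, :, 0].ravel(), minlength=256)   # one channel only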

This extracts a single channel instead of mixing all colours together.

Line 18-19

If the picture contains very shiny or very dark parts, the high and low values like <10 or >250 are disproportionately high. This can distort the histogram. These two lines just reset the histogram array to 0 at the upper and lower end.

Line 21-22

This just smooths the histogram.

Line 23-24

Histogram without blanked parts

Argmax returns the index of the highest number in an array. This index can be considered a feature of the image. For example, if we want to rank or sort our webcam pictures by daytime according to the sunlight, this feature gives us an indication. If we use this feature, lines 18-19 become very important to eliminate the reflections in a picture, for example the observation dome in the demo image. The picture above contains the chart with the ends already blanked out; the original chart looks like the one on the right.

Line 26-27

The matplotlib chart is drawn into the subplot here. At this point the matplotlib chart and the image itself still live in different libraries.

Line 29

Here the magic happens: fig.canvas.tostring_rgb exports the chart from matplotlib as a string, and numpy.fromstring imports this string back into a numpy array, just like the picture we imported at the beginning.

Line 30

Here we reshape the newly created image to the same size as the imported image.

Line 31

OpenCV can handle all kinds of different image colour representations, like RGB, HSV and so on. The fromstring importer reads RGB, but our image is in BGR, so this line just converts the colour representation.

Line 32

Here we combine both images with a weighting of 70% of the original image and 30% of the chart.

Line 33

We save this image in a debug folder with the same filename but, just because we can, as a png file.

Line 35

Finally we free the used memory. This is very important when we convert a huge amount of images.

How to use this for visual recognition

By extracting features from images, this little piece of code helps us to draw the feature directly into a debug image. In our example we only used the histogram and extracted the colour value with the highest count, but with the histogram at hand we can see what underlying information the sorting algorithm used. Sorting images into night, dusk/dawn and daylight can be a way of preparing pictures for further processing in neural networks. By browsing through the images in one of these sorting folders we can easily see what went wrong with a picture which does not belong in this group. A typical problem is that night and day pictures are missorted because of reflections.

Optimize pictures for visual recognition with openCV and gimp

Visual Recognition

Watson result

Computer Vision or Visual Recognition is part of cognitive computing (CC), aka Artificial Intelligence. One of the main concepts is to extract information out of unstructured data. For example, you have a webcam pointing at a highway. As a human you see whether there is a traffic jam or not. For a computer it is only 640x480x3x8 (7,372,800) bits. Visual Recognition helps you to extract information out of this data, for example "this is a highway". Out of the box, systems like Watson are able to tell you what you see in the picture. You can try it here: https://visual-recognition-demo.mybluemix.net. The result can be seen in the picture on the left. So Watson knows it is a highway, and even that it is a divided highway, but it does not tell you whether there is a traffic jam or even a blocked road. Fortunately Watson is always eager to learn; let us see how we can teach him what a traffic jam is. This article only focuses on the picture preparation part, not on training Watson. See the next postings for the Watson part.

Get pictures

There are many traffic cameras all around, but I am not sure about the licence, so it is hard to use them here as a demo. But let us assume we can take pictures like this one from Wikimedia: Cars in I-70. If you live in southern Germany there are nice traffic cameras from the Strassenverkehrszentrale BaWue. Unfortunately they don't offer the pictures with the right licence for my blog. If you know a great source for traffic cameras with the right licence, please let me know.

Prepare pictures for training

Visual Recognition works a little bit like magic. You give Watson 100 pictures of a traffic jam and 100 without a traffic jam, and he learns the difference. But how do we make sure he really learns the traffic jam and not the weather or the light conditions? And furthermore only one lane, in case the camera shows both lanes? First we need to make sure we find enough different pictures of the road with a traffic jam under different weather and light conditions. The second part can be done with OpenCV. OpenCV stands for Open Source Computer Vision and helps you to manipulate images. The idea is to mask out the parts we don't want Watson to learn, in our case the other lane and the sky. We can use GIMP to create a mask which we then apply automatically to each picture with openCV.

Gimp

GIMP-Layers

The first step, obviously, is to load the image in GIMP. Then open the layers dialog; it is located under Windows/Dockable Dialogs/Layers or cmd-L. Here we add a new layer and select it to paint on. Then we select the Paintbrush Tool in the tools menu and just paint black the parts we don't want Watson to learn.

Blanked-image

Then we hide the original image by pressing the eye symbol in the layers dialog. This should leave us with only the black painting we did before. This will be our mask, which openCV applies automatically to all pictures. Under File/Export you can save it as mask.jpg. Make sure it contains only the black mask and not the picture with the black painting.

Use openCV in docker

As openCV is quite a lot to install, we can simply use it within Docker to work with our pictures. We can mount host directories inside a Docker container, in this case our directory with the pictures, for example like this:
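    docker run --rm -it -v "$(pwd)":/host victorhcm/opencv /bin/bash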

This brings up the openCV Docker container from victorhcm and opens a shell with our current directory mounted under /host. As soon as you exit the container it will be removed because of the "--rm" parameter. Don't worry, only the Docker container will be deleted; everything under /host is mounted from the host system and will remain. Everything you save in other directories will be deleted.

How to mask out part of the picture

The Python program that uses openCV to mask out all pictures in a directory is then really simple:
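The original file is not reproduced here; the following is a sketch of the described program, and the line numbers in the next sections refer to the original script, not to this sketch. Depending on how GIMP exported the mask, you may have to invert it first, e.g. with cv2.bitwise_not.

    import glob
    import os

    import cv2

    # load the mask as a grayscale image ("Line 4")
    mask = cv2.imread('mask.jpg', cv2.IMREAD_GRAYSCALE)

    for filename in glob.glob(os.path.join('pics', '*.jpg')):
        # load the picture to work on as a colour image ("Line 8")
        image = cv2.imread(filename)

        # apply the mask ("Line 9"): a saturating add, bright mask pixels blank the
        # picture to white, dark mask pixels let it through unchanged
        mask_bgr = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
        masked = cv2.add(image, mask_bgr)

        # save the result under the same name in the "masked" folder ("Line 10")
        cv2.imwrite(os.path.join('masked', os.path.basename(filename)), masked)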

Basically the program iterates through all "jpg" pictures in the subfolder "pics" and saves the masked pictures under the same name in the "masked" folder. Both directories have to exist before you start the script. In order to keep the script reduced to the important parts, I left the create-and-check-directory part out of this script.

Line 4

Loads the mask image as a grayscale image.

Line 8

Loads the image to work on as a colour image.

Line 9

Here the real work is done: this applies the mask with a bitwise add of all pixels. Therefore the blank parts win and the transparent parts let the normal picture get through.

Line 10

Saves the new masked picture in the "masked" folder.

Preselect pictures

For the learning process we need to sort the pictures by hand: one bucket with "traffic jam" and the other with "ok".