Debugging feature extraction of images
A big part of writing code for visual recognition in cognitive computing is extracting information, i.e. features, from images. For example, we need to understand colour information and want to see a histogram chart of a certain RGB or HSV channel. OpenCV offers a nice way to combine image analytics and image manipulation. In combination with the matplotlib library it is relatively easy to analyse a picture, extract features and write debug information as a chart onto the picture itself.
Getting a test picture
For this demo I chose a picture from a webcam of the Observatory Friolzheim. On astrohd.de we can get a live picture taken from the observatory dome. Besides the webcam pointing at the building there is also weather information (very helpful when mapping image features) and an AllSky cam.
Getting the software and libraries
Add matplotlib (and, if your base image does not already ship them, OpenCV and numpy) to the Dockerfile:

FROM python:3
RUN pip install matplotlib opencv-python numpy

and build it with
docker build -t mathplot-opencv:latest .
Python script to draw the chart for debugging visual features
import glob

import matplotlib
matplotlib.use('Agg')  # headless backend: render charts without a display
import matplotlib.pyplot as plt
import cv2
import numpy

fig = plt.figure()
ax = fig.add_subplot(111)
for fullname in glob.glob("data/*.jpg"):
    filename = fullname.split('/')[-1]         # strip the directory
    name = filename.split('.')[0]              # strip the extension
    image = cv2.imread(fullname, cv2.IMREAD_COLOR)
    histogram = numpy.bincount(image.ravel(), minlength=256)
    histogram[:2] = 0                          # blank out the darkest values
    histogram[250:] = 0                        # blank out the brightest values
    weights = [0.3, 0.4, 0.3]
    histogram = numpy.convolve(histogram, numpy.array(weights)[::-1], 'same')
    maxindex = numpy.argmax(histogram)
    ax.plot(histogram)                         # draw the chart into the subplot
    fig.canvas.draw()                          # render the figure so it can be exported
    imga = numpy.frombuffer(fig.canvas.tostring_rgb(), dtype=numpy.uint8)
    imga = imga.reshape(fig.canvas.get_width_height()[::-1] + (3,))
    imga = cv2.cvtColor(imga, cv2.COLOR_RGB2BGR)
    imga = cv2.resize(imga, (image.shape[1], image.shape[0]))  # match the webcam image size
    imga = cv2.addWeighted(image, 0.7, imga, 0.3, 0)
    cv2.imwrite("debug/%s.png" % name, imga)   # assumes the debug/ folder exists
    ax.cla()                                   # clear the subplot for the next image
This line is very important if we want to run matplotlib "headless", meaning without a graphical display attached to the runtime, as in a Docker container or on a remote server.
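The line in question is the backend selection. As a minimal sketch, we pick the non-interactive Agg backend before pyplot is imported:

```python
import matplotlib
matplotlib.use('Agg')  # render into memory buffers only, no display required
import matplotlib.pyplot as plt

fig = plt.figure()  # works without any X server or display attached
```

Selecting the backend after pyplot has already been imported may be ignored in some matplotlib versions, which is why it comes first.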
Here we create a matplotlib figure with a subplot, to which we will add the chart.
As OpenCV uses numpy to work with images, the image itself is represented as a multidimensional array, and image.ravel() flattens it into a one-dimensional array. numpy.bincount then counts how often each value occurs in that array. So with this line we create a new one-dimensional array counting all the RGB values in the image, which is basically a histogram of the image. Doing so mixes all colour channels together, which is fine for this sample or a classic histogram. However, if we want to see only one channel, we can use
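What ravel and bincount do is easiest to see on a tiny, hypothetical single-channel "image":

```python
import numpy

# a hypothetical 2x2 single-channel image with pixel values 0, 1, 1, 2
tiny = numpy.array([[0, 1], [1, 2]], dtype=numpy.uint8)
flat = tiny.ravel()                         # flatten to [0, 1, 1, 2]
counts = numpy.bincount(flat, minlength=4)  # one 0, two 1s, one 2, no 3
print(counts)                               # [1 2 1 0]
```

With minlength=256 and a real image, counts[i] is simply the number of pixels with value i, i.e. the histogram.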
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)
to extract a single channel.
If the picture contains very shiny or very dark parts, the values at the extreme ends of the range (here below 2 and from 250 upwards) are disproportionately frequent, which can distort the histogram. These two lines simply reset the histogram array to 0 at the upper and lower ends.
This just smooths the histogram with a small moving-average kernel.
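The effect is easiest to see on a toy histogram with a single spike (with this symmetric kernel, the [::-1] reversal happens to change nothing):

```python
import numpy

spike = numpy.array([0, 4, 0, 0], dtype=float)
weights = [0.3, 0.4, 0.3]
# 'same' keeps the output the same length as the input
smoothed = numpy.convolve(spike, numpy.array(weights)[::-1], 'same')
# the spike is spread over its neighbours: [1.2, 1.6, 1.2, 0.0]
```

Smoothing like this stops a single noisy bin from winning the argmax in the next step.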
numpy.argmax returns the index of the highest number in an array, and this index can be considered a feature of the image. For example, if we want to rank or sort our webcam pictures by time of day according to the sunlight, this feature gives us an indication. When using this feature, the two lines that zero out the extreme values become very important, because they eliminate reflections in a picture, for example on the observation dome in the demo image. The upper picture contains the chart with those parts already blanked out; the original chart looks like the one on the right.
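A minimal sketch of using that feature for a day/night decision; the threshold of 128 and the toy histograms are assumptions for illustration, not values from this post:

```python
import numpy

def brightness_feature(histogram):
    # index of the most frequent pixel value:
    # dark images peak low, bright images peak high
    return int(numpy.argmax(histogram))

night = numpy.zeros(256); night[15] = 900    # hypothetical histogram peaking in the dark range
day = numpy.zeros(256); day[180] = 700       # hypothetical histogram peaking in the bright range

label = "day" if brightness_feature(day) > 128 else "night"
print(brightness_feature(night), brightness_feature(day), label)  # 15 180 day
```

Without the clipping step, a bright reflection could push the peak of a night image into the "day" range, which is exactly the missorting problem described below.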
The matplotlib chart is drawn into the subplot here. At this point, though, the chart and the image itself still live in different libraries.
Here the magic happens: fig.canvas.tostring_rgb exports the rendered chart from matplotlib as a byte string, and numpy.frombuffer (the successor of the deprecated numpy.fromstring) turns that string back into a numpy array, just like the picture we imported at the beginning.
Here we reshape the newly created flat array back into image form, using the dimensions reported by the figure canvas.
OpenCV can handle all kinds of colour representations, like RGB, HSV and so on. The canvas export is in RGB order, but our image is in BGR, so this line just converts the colour representation.
Here we blend both images with a weighting of 70% original image and 30% chart.
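The blend is a plain per-pixel weighted sum, dst = src1*0.7 + src2*0.3, which is easy to verify on a tiny array (a numpy-only sketch; cv2.addWeighted additionally saturates the result to the uint8 range):

```python
import numpy

image = numpy.full((2, 2), 100, dtype=numpy.uint8)   # stand-in for the webcam frame
chart = numpy.full((2, 2), 200, dtype=numpy.uint8)   # stand-in for the rendered chart
blended = (image * 0.7 + chart * 0.3).round().astype(numpy.uint8)
print(blended[0, 0])  # 130, i.e. 100*0.7 + 200*0.3
```

The 0 passed to addWeighted in the script is a scalar added on top of the weighted sum; here it is simply omitted.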
Finally we save the image into a debug folder with the same filename but, just because we can, as a PNG file.
Then we clear the subplot to free the used memory. This is very important when we convert a huge number of images.
How to use this for visual recognition
By extracting features from images, this little piece of code helps us draw the feature directly onto a debug image. In our example we only used the histogram and extracted the colour value with the highest count, but with the histogram at hand we can see the underlying information the sorting algorithm used. Sorting images into night, dusk/dawn and daylight can be a way of preparing pictures for further processing in neural networks. By browsing through the images in one of these sorted folders we can easily spot what went wrong in a picture that does not belong in its group. A typical problem is that night and day pictures are missorted because of reflections.
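Such a sorting step could be sketched as follows, using the argmax feature from above; the folder names and the two thresholds (60 and 150) are hypothetical choices, not values from this post:

```python
import os
import shutil

def classify(maxindex):
    # hypothetical thresholds: dark peak -> night, middle -> dusk/dawn, bright -> daylight
    if maxindex < 60:
        return "night"
    if maxindex < 150:
        return "dusk_dawn"
    return "daylight"

def sort_image(path, maxindex, outdir="sorted"):
    # copy the picture into a folder named after its class
    label = classify(maxindex)
    target = os.path.join(outdir, label)
    os.makedirs(target, exist_ok=True)
    shutil.copy(path, target)
    return label

print(classify(15), classify(100), classify(200))  # night dusk_dawn daylight
```

Browsing the resulting folders then makes misclassified frames, and with the debug chart burned in, the reason for the misclassification, immediately visible.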