Fixing the caching_sha2 problem with wordpress and mysql version 8

The problem

I am using wordpress with mysql, both in a docker installation. The procedure for my setup is described here. With version 8, mysql introduced caching_sha2 as the default password algorithm. When you use the auto update mechanism in wordpress everything is fine and wordpress still works with the native password plugin configured for the wordpress user. But if you run wordpress in a docker container and pull wordpress:latest, there has been a problem accessing the mysql database since wordpress 4.9.7: (Never thought I could use the word wordpress so many times in one sentence!)

Warning: mysqli::__construct(): Unexpected server response while doing caching_sha2 auth: 109 in Standard input code on line 22
Warning: mysqli::__construct(): MySQL server has gone away in Standard input code on line 22
Warning: mysqli::__construct(): (HY000/2006): MySQL server has gone away in Standard input code on line 22
MySQL Connection Error: (2006) MySQL server has gone away

The solution

The solution is relatively easy. You need to manually change the wordpress user from "mysql_native_password" to "caching_sha2_password". This can be done with a simple SQL statement. First stop your wordpress docker container and keep the mysql docker container running. Then execute these commands.

docker exec -it blog_wordpress_db_1 bash
mysql -u root -pREALLYEPICSECURE
ALTER USER wordpressuser IDENTIFIED WITH caching_sha2_password BY 'REALLYEPICSECURE';

Replace blog_wordpress_db_1 with your mysql docker instance name (“docker ps”), “REALLYEPICSECURE” with your root password and “wordpressuser” with your wordpress username.
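To verify which authentication plugin is actually configured, a query against the mysql.user table can help (run it inside the same mysql session; "wordpressuser" is again a placeholder for your wordpress username):

```sql
-- Show the authentication plugin configured for the wordpress user
SELECT user, host, plugin FROM mysql.user WHERE user = 'wordpressuser';
-- After the ALTER USER statement the plugin column should read caching_sha2_password
```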

That is basically all. Now you can start your wordpress:latest docker container again and it should work.


Workload container for autoscaling test with kubernetes


The Idea

Every now and then you want to test your installation, your server or your setup, especially when you want to test auto scaling functionality. Kubernetes has an out of the box auto scaler and the official documentation recommends a test docker container with an apache and php installation. This is really great for testing a web application where you have some workload for a relatively short time frame. But I would also like to test a scenario where the workload runs for a longer time in the kubernetes setup and generates way more cpu load than a web application. Therefore I hacked a nice docker container based on a c program load generator.

The docker container

The docker container is basically a very very simple Flask server with only one entry point “/”. The workload itself can be configured via two parameters:

  • percentage: how much cpu load will be generated
  • seconds: how long the workload will be active

The docker container itself uses nearly no CPU cycles when idle, as Flask is the only active python process, waiting for calls before it starts burning CPU cycles.


I use a very nice open source tool called lookbusy by Devin Carraway which consumes memory and cpu cycles based on command line parameters. Unfortunately the program has no parameter to configure the time span it should run. Therefore I wrap it in the unix command timeout to terminate its execution after the given amount of seconds.
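The timeout pattern can be tried in isolation. In the snippet below sleep stands in for lookbusy (whose path after make install is assumed to be /usr/local/bin/lookbusy), so it runs on any linux box:

```shell
# timeout kills the wrapped command after the given number of seconds
# and itself exits with code 124 when it had to terminate the child.
# With lookbusy installed the real call would be:
#   timeout 30 /usr/local/bin/lookbusy -c 80
timeout 2 sleep 10
echo "terminated with exit code $?"   # prints: terminated with exit code 124
```

Exit code 124 is how timeout signals that the child was terminated; a normal exit of the wrapped command would pass its own exit code through.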

The Flask python wrapper

import subprocess
from   threading import Thread
from   flask     import Flask, request

app = Flask(__name__)

def worker(percentage, seconds):['timeout', str(seconds), '/usr/local/bin/lookbusy', '-c', str(percentage)])

def load():
    percentage = request.args.get('percentage') if "percentage" in request.args else 50
    seconds    = request.args.get('seconds')    if "seconds"    in request.args else 10
    Thread(target=worker, args=(percentage, seconds)).start()
    return "started"

if __name__ == "__main__":'', port=80, processes=10)

The wrapper is a very short python Flask program. It takes the GET call to its root path, checks for the two parameters and starts a thread with the subprocess. The GET call returns immediately, so the wrapper also supports long running workload simulations.

The Dockerfile

FROM   python:latest
RUN    curl | tar xvz && \
       cd lookbusy-1.4 && ./configure && \
       make && make install && cd .. && rm -rf lookbusy-1.4
RUN    pip install Flask
CMD    python -u

The docker container is based on python:latest (at this time 3.6.4). I put all the curl, make, install and rm calls into a single RUN line in order to have a minimal footprint for the docker layer, as we do not need the source code any more. As Flask is the only requirement I also install it directly without a requirements.txt file. The "-u" parameter for the python call is necessary to prevent python from buffering the output. Otherwise reading the debug log can be quite confusing.

Building and pushing the docker container

docker build -t ansi/lookbusy .
docker push     ansi/lookbusy

Building and pushing it is straightforward and nothing special.

Testing it on a kubernetes cluster

I have chosen the IBM cloud to test my docker container.

Requesting a kubernetes cluster

Requesting a kubernetes cluster can be done after login with

bx cs cluster-create --name ansi-blogtest --location dal10 --workers 3 --kube-version 1.8.6 --private-vlan 1788637 --public-vlan 1788635 --machine-type b2c.4x16

This command uses the bluemix CLI with the cluster plugin to control and configure kubernetes on the IBM infrastructure. The parameters are

  • --name to give your cluster a name (will be very important later on)
  • --location which datacenter to use (in this case dallas). Use "bx cs locations" to get the possible locations for the chosen region
  • --workers how many worker nodes are requested
  • --kube-version which kubernetes version should be used. Use "bx cs kube-versions" to get the available versions. "(default)" is not part of the parameter call.
  • --private-vlan which vlan for the private network should be used. Use "bx cs vlans <location>" to get the available public and private vlans
  • --public-vlan see private vlan
  • --machine-type which kind of underlying configuration you want to use for your worker nodes. Use "bx cs machine-types <location>" to get the available machine types. The first number after the "." is the amount of cores, the one after "x" the amount of RAM in GB.

This command takes some time (~1h) to generate the kubernetes cluster. BTW my bluemix cli docker container has all the necessary tools and also a nice script to query all parameters and start a new cluster. After the cluster is up and running we can get the kubernetes configuration with

bx cs cluster-config ansi-blog
The configuration for ansi-blogtest was downloaded successfully. Export environment variables to start using Kubernetes.

export KUBECONFIG=/root/.bluemix/plugins/container-service/clusters/ansi-blog/kube-config-dal10-ansi-blog.yml

Starting a pod and replica set

kubectl run loadtest --image=ansi/lookbusy --requests=cpu=200m

We start the pod and replica set without a yaml file because the request is very straightforward. Important here is the parameter "--requests". Without it the autoscaler cannot measure the cpu load and will never trigger.
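For documentation purposes the same deployment can also be written as a manifest. This is only a sketch: it uses the current apps/v1 API (on the 1.8 cluster used here the apiVersion would have been apps/v1beta1) and the same names as the kubectl run call:

```yaml
apiVersion: apps/v1
kind: Deployment
  name: loadtest
  replicas: 1
      run: loadtest
        run: loadtest
      - name: loadtest
        image: ansi/lookbusy
            cpu: 200m   # required so the autoscaler can compute utilization
```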

Exposing the http port

kubectl expose deployment loadtest --type=LoadBalancer --name=loadtest --port=80

Again, because the call is so simple, we directly call kubectl without a yaml file to expose port 80. We can check for the public IP with

kubectl get svc
loadtest LoadBalancer <pending>   80:31277/TCP 23m

In case the cloud runs out of public IP addresses and the "EXTERNAL-IP" is still pending after several minutes, we can use one of the workers' public IP addresses together with the dynamically assigned port. The port is visible with "kubectl get svc" in the "PORT(S)" section; the syntax is internalport:nodeport. The workers' public IPs can be checked with

bx cs workers ansi-blog
ID                                               Public IP     Private IP     Machine Type       State  Status Version
kube-dal10-cr1dd768315d654d4bb4340ee8159faa17-w1 b2c.4x16.encrypted normal Ready  1.8.6_1506

So instead of calling our service with an official public IP address on port 80, we can call the worker's public IP on the dynamically assigned node port.


Starting the autoscaler

Kubernetes has a built-in horizontal autoscaler which can be started with

kubectl autoscale deployment loadtest --cpu-percent=50 --min=1 --max=10

In this case it measures the cpu load and starts new pods when the load is over 50%. With this configuration the autoscaler never starts more than 10 and never runs fewer than 1 pod. The current measurements and parameters can be checked with

kubectl get hpa
loadtest  Deployment/loadtest 0% / 50% 1       10      1        23m

So right now the cpu load is 0 and only one replica is running.
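The same autoscaler can also be expressed as a manifest. This sketch uses the stable autoscaling/v1 API with the values from the kubectl autoscale call above:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
  name: loadtest
    apiVersion: apps/v1
    kind: Deployment
    name: loadtest
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
```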


Time to call our container and start the load test. Depending on the URL we can use curl to start the test with

curl ""

and check the result after some time with

kubectl get hpa
loadtest  Deployment/loadtest 60%/50%  1       10      6        23m

As we can see, the load increases and the autoscaler kicks in. More details can be obtained with the "kubectl proxy" command.

Deleting the kubernetes cluster

To clean up we could delete all pods, replica sets and services individually, but we can also delete the complete cluster with

bx cs cluster-rm ansi-blog


Execute the Radio Meteor Observations program on macOS

MeteorLogger Screenshot

What is it about

Wolfgang Kaufmann wrote an impressive article and an even more impressive piece of software for hobby radio astronomers. I highly recommend checking out the article and playing with the software, as there are not so many radio astronomers in the community. His software is written in python with a very clean UI. It directly connects to the computer's sound card and grabs the audio signal. I shortly describe here what to install on macOS to get his software up and running.

Where to get it

The software can be downloaded on the author's homepage. Unfortunately it is not available in any online repo like github, but the source code can be downloaded as a zip file.


The PyAudio package needs some libraries and direct access to the os sound system. Therefore we need to install this audio package outside of python itself

brew install portaudio

The necessary python libs can be installed via pip. I recommend doing it in a virtual environment

mkdir ms
cd ms
virtualenv -p python3 .
. bin/activate 
pip install cycler matplotlib numpy PyAudio pyparsing \
            python-dateutil pytz scipy six tk-tools   \
            xlrd xlwt

That is all. After installing the python libs the program starts right away



Setting up SDRplay remote on a raspberry pi



I recently bought myself an SDRplay receiver to play with this technology and maybe build a ground station or meteor scatter detector. The original plan is to set up a receiver on the Motionlab roof with a raspberry pi and send the IQ data via network down to a local server to extract the interesting information. One great piece of software to work remotely with an SDR receiver is the Soapy project.

Install the raspberry pi part

Build system

Install the latest raspberry pi lite image, boot it and install the build dependencies:

sudo apt update
sudo apt upgrade
sudo apt install cmake g++ libpython-dev python-numpy swig git

Core system

The soapy ecosystem consists of three parts. The core system must be installed first.

git clone
cd SoapySDR
mkdir build
cd build
cmake ..
make -j4
sudo make install
sudo ldconfig


The SDRplay support consists of two parts: one is the proprietary binary libraries from SDRplay itself, the other is the soapy wrapper for SDRplay.

Binary Libraries

The driver can be downloaded from the SDRplay homepage. Make the downloaded installer executable:

chmod 777

The SDRplay Soapy wrapper

git clone
cd SoapySDRPlay
mkdir build
cd build
cmake ..
make -j4
sudo make install

Test the Soapy access

SoapySDRUtil --info

Soapy Server for Remote Access

git clone
cd SoapyRemote
mkdir build
cd build
cmake ../ # -DCMAKE_BUILD_TYPE=Debug
make -j4
sudo make install

Run the server

SoapySDRServer --bind

If you want to run it as a service have a look here on how to autostart stuff in linux.

Dev-Ops with OTA update for ESP8266

Over the Air update (OTA) for ESP8266


Thanks to the esp8266 project on github there is a very convenient way to update an ESP over the air. There are three different ways available.

  1. The first one is via the arduino IDE itself, where the esp opens a port and is available for firmware upload just like with a serial connection. Very convenient if you are in the same network.
  2. The second one is via http upload: the esp provides a web server to upload the bin file. In this case there is no need to be in the same network, but it is still a push and has to be done individually for each installed esp.
  3. The third one is the most convenient way for a bigger installation base, or in case the devices are behind a firewall (as they always should be) and no remote access is possible. In this case the device downloads the firmware itself via http(s) from a web server somewhere in the internet.

For a complete dev-ops pipeline from pushing to a repository to flashing a device, the third scenario is the easiest one. So we need a place to store the binary files. For convenience I use amazon s3 to host my binary files, as travis easily supports s3 upload. But it can be any internet platform where files can be stored and downloaded via http(s). The necessary code on the arduino side looks like this:

#define BUILD_VERSION        REPLACE_WITH_CURRENT_VERSION
#define ULR_FIRMWARE_BIN     ""
#define ULR_FIRMWARE_VERSION ""

void checkForNewFirmware(void){

    HTTPClient http;
    int httpCode = http.GET();

    if(httpCode == HTTP_CODE_OK) {
        String payload = http.getString();
        int newVersion = payload.toInt();

        if (BUILD_VERSION < newVersion){
            Serial.println("I need to update");
            t_httpUpdate_return ret = ESPhttpUpdate.update(ULR_FIRMWARE_BIN);

            if (ret == HTTP_UPDATE_FAILED){
                Serial.printf("HTTP_UPDATE_FAILED Error (%d): %s\n", ESPhttpUpdate.getLastError(), ESPhttpUpdate.getLastErrorString().c_str());
This arduino function can be called from time to time (at startup, or on constantly running systems every now and then) to check for a new firmware version and, in case a new version is available, automatically flash it and restart.

  • Line 1 is a #define with a placeholder for the current version of the installed firmware. This placeholder is replaced in the build pipeline at travis with an increasing number, so the compiled code contains something like 23 or 42 instead of REPLACE_WITH_CURRENT_VERSION.
  • Line 2 is the URL to the latest version of the firmware binary.
  • Line 3 is the URL to a file with only one line containing the latest build number.
  • Lines 7-9 load the version file from s3.
  • Lines 12-13 convert the file into a number which can be compared with the define from line 1.
  • Line 17 is the firmware update itself. A detailed description of the ESPhttpUpdate class can be found here.

There are two ways to check if a new version is available and only flash if there is something new. The one we use here is our own mechanism: on s3 I can only host static files, therefore I place the latest build number in a static file next to the firmware itself. The other way is built into ESPhttpUpdate: the update function can be called with a build number which will be compared on the server, and the return code reflects whether there is a new version or not. In that case we would need a script on the server to do the check.

Get an increasing build version number

With a little bash script we could load the last build number from s3 and then increase it in order to have the current number for our build.

#!/usr/bin/env bash

let oldversion=`curl`
let newversion=oldversion+1

echo "============="
echo "New Version:"
echo $newversion
echo "============="

sed -i "s/REPLACE_WITH_CURRENT_VERSION/$newversion/g" src/main.cpp

echo $newversion > upload/blanked.version

This script loads the version file (line 3), increases the number (line 4) and patches our source code file (line 11) with this number instead of REPLACE_WITH_CURRENT_VERSION. After running this script the current source code contains the latest number and also the upload folder for s3 has a new file with the newest number in order to inform the polling ESPs.
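The patching step can be tested in isolation. The snippet below writes a one-line stand-in for src/main.cpp into a temp file and applies the same substitution the script uses (GNU sed assumed for the -i flag):

```shell
tmpfile=$(mktemp)
echo '#define BUILD_VERSION REPLACE_WITH_CURRENT_VERSION' > "$tmpfile"
newversion=42
# same substitution the build script applies to src/main.cpp
sed -i "s/REPLACE_WITH_CURRENT_VERSION/$newversion/g" "$tmpfile"
cat "$tmpfile"   # prints: #define BUILD_VERSION 42
rm "$tmpfile"
```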

Travis config file

Travis-ci is incredibly easy to use and very reliable for continuous integration. In combination with platformio it is very easy to compile arduino code for several types of hardware. Simply configure the hardware in the platformio.ini file:

platform  = espressif8266
board     = huzzah
framework = arduino

In this case we use the esp8266 feather board aka Huzzah. Just set the board entry to your kind of esp.

Travis itself is configured by the .travis.yml file in the root directory of your repository on github:

language: python
- '2.7'
sudo: false
  - "~/.platformio"

- pip install -U platformio

- mkdir upload
- ./
- sed -i "s/WLANSSID/$WLANSSID/g"     src/main.cpp
- sed -i "s/WLANPASSWD/$WLANPASSWD/g" src/main.cpp
- platformio run
- cp .pioenvs/huzzah/firmware.bin upload/blanked.bin

  provider: s3
  bucket: 'feelflight'
    secure: amzqBC+rs+S860Z6ABQNAseKYL+7UgNnJGhF7jGkc6Aq/e8JmPqRSYHrEESM4S1jkOXYR5WouX04ytZqoXnrt0E625LT0+rLUGjyZ1QpGlrI5dwhOP4TagT+A90DtRI77TGf4qgnAkX+wAOufKehMKNms8jL6M64vwR5mIg3veiewZFRBtpvlkqCS55+rdWmYbFuT+UYNdq5UItJkfY1HNunafDvS1qfCTwBzoa5Yro/pyGA5cSdKDEZrJ+WfgCm03PZHVMKARm07lOcAZTstp5qQCbG8S4jE0OA8Q+AQ/mcwzB+JRHrJZdoQmpNjAsREnRDvv/Zz88V4JluPVrgk1B3mWw7tAPGnxT+N/Kwj+f455AMjsEcJ3z3YdGeJtftqYtr9kbcECWt7puPILpRhSKkAMGEPAQhOQAdLqQvfL1qZQbunexDShKkpMbpmVvyTYQXXmixoc26dB7MJpbw4UHNui3zpb5fWDHuJ3EIEvHvuoMDT2Dk2GTpStBqACrbo74Orsfah6DvEuJXXbmBIChfDufalNA5CNkjhIfBSDQpu5HE6UEylPDYcwgXwvhIl9zSXljYcH6LBP18axwheCmyeolVse3a3h9GF3tSfcJrlMshZ0oZ0WuwvLflE4ZzDWMT1XX8kgHrvaYklagwKbgltMYkq7R04kD++h32J8s=
    secure: jZiEn7PTFRrwFu8ZmDEkUGjYuKSWmP+kI6biVKaRwhcqA+WeYjKOH3r5NgR5V+xoYcZA47Qm41pl6gi71aMEU2Xil4+HUdmLM6pXLU87Q0NG974cLesccaDy2/rkADmLP/jaqN66Pavd4l2tRxGOP+p1QQRXQOccFEW6j95PdPOyzppPZc6h8yzqmxerIgDSDFQuF4pRWjEtJSPrEyw79p834wvVVahlXRJ6jrqy5X4CiqabYmaR3QuT0W9tBHHtfMfgPJBCooTxqT0uqDnOSN0wU6TmQ8ZHg9y7d4ChOWLbpHwHhOk3UrDrTllbTSr7zRjqzwW69yivZX2e0XR89X8PFcLg8jIcZxgKyIKGo+BpCnaLlVQ1dxmIrDfcComino+3ZWC4lZDLgaw/uTfcAapn1sPBNhnxed7kr7u/RkZIfdXWZn4GSO1aDAJWXMF6lC2lq1JN7FlfpyGJuvsN6FQcIrq0W8jghZ0+8AAgwOzzNPG5bY34+8R3Qtp1d4hwqkan4peF0vVfeVRldtkissmup+bQRyU7xyYUPqL7EdtjJXBXwP3ChTv/FGu2eQhjgweLHsyrkcFBeqpKwjHnG0jUSV3QQPq9hpO8mk3eSjSbM91cY9S5t2BKtIR0ALCyyAn+B40P6OiJ+4v4d0ZdyXGjL3aRyOSg4jIOl5Awa10=
  skip_cleanup: true
  acl: public_read
  local_dir: upload/
  upload-dir: firmware

  • Line 1: Platformio is based on python so the build environment (although the code is c++) is python for maintaining platformio.
  • Line 3: Right now platformio is only available for python 2.7 so this line gets the latest stable version of python 2.7.
  • Line 5-7: Gets the latest cache files from the last build in order to save compile time and reduce the costs for travis. As this service is for free for open source projects it is always nice to save some money for the cool guys.
  • Line 10: Installs the latest version of platformio itself.
  • Line 13: Creates the upload directory which we will upload to s3 later on.
  • Line 14: Calls the build number increase and patch script.
  • Line 15-16: Patches the wireless lan config in case it is not handled inside the arduino code itself.
  • Line 17: Calls platformio to download all libraries and compile the arduino code itself.
  • Line 18: Platformio generates a lot of files for the linker and several other files. We only need the bin file later on, so we copy it here to the upload folder.
  • Line 20: Travis has a built-in functionality to upload files after compilation. This is the part where we upload the files to s3.
  • Line 22: Defines the s3 bucket to upload the files.
  • Line 23-26: Provides the encrypted s3 credentials. See travis documentation on how to create these lines.
  • Line 29: Defines the local folder to be uploaded. Otherwise travis will upload everything from the current run.
  • Line 30: Defines the s3 folder in the bucket where the files will be stored.

With these files in place, travis monitors your github repository and creates / uploads new firmware versions each time you push changes to your repository. The arduino code checks for new versions and patches itself as soon as a new version is available. A complete project can be found here in my github repository.


Docker container with Bluemix CLI tools

BluemixCLI on docker hub

Being a developer advocate means always playing with the latest version of tools and being on the edge. But installed programs get out of date, and so I always end up with old versions of CLI tools installed. One reason why I love cloud (aka other people's computers) computing so much is that I don't need to update the software, it is done by professionals. In order to always have the latest version of my Bluemix CLI tools at hand, already authenticated, I compiled a little docker container with my favourite command line tools: cf, bx, docker and wsk.

Getting the docker container

I published the docker container on the official docker hub, so getting it is very easy when the docker tools are installed. This command will download the latest version of the container and therefore the latest version of the installed cli tools. We need to run it from time to time to make sure the latest version is available on our computer.

docker pull ansi/bluemixcli

Get the necessary parameters

For all command line tools we need usernames, passwords and IDs. Obviously we cannot hardcode them into the docker container, therefore we need to pass them along as command line parameters when starting the docker container.

  • Username (the same as we use to login to Bluemix)
  • Password (the same as we use to login to Bluemix)
  • Org (the organisation we want to work in, must already exist)
  • Space (the space we want to work in, must already be created)
  • AccountID (this can be caught from the URL when we open "Manage Organisation" and click on the account)
  • OpenwhiskID (individual for org and space, can be caught from the OpenWhisk settings page)

Run the container

The container can be started with docker run and passing all parameters with -e in:

docker run -it --rm                      \
-e BX_USERNAME=<Bluemix Username>        \
-e BX_PASSWORD=<Bluemix Password>        \
-e BX_ORG=<Bluemix Organisation>         \
-e BX_SPACE=<Bluemix Space>              \
-e BX_ACCOUNT_ID=<Bluemix Account ID>    \
-e WSK_AUTH=<Openwhisk Authentification> \
-v ${PWD}:/root/host                     \
ansi/bluemixcli /bin/bash

Line 8 mounts the local directory inside the docker container under /root/host. So we can fire up the container and have a bash with the latest tools and our source code available.

Use the tools

Before we can use the tools we need to configure them and authenticate against Bluemix. A script located in "/root/" (our working directory) takes care of all logins and authentications.



The Cloudfoundry command line tool for starting and stopping apps and connecting services.



The Bluemix version of the Cloudfoundry command line tool, including the plugin for container maintenance. By initializing this plugin we also get the credentials and settings for the docker client to use Bluemix as a docker daemon.



The normal docker client, with Bluemix configured as the daemon.



The OpenWhisk client, already authenticated.

We can configure an alias in our .bashrc, so by just typing "bxdev" we get a bash with the latest cli tools available.
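Such an alias could look like this in the .bashrc; the credential values here are placeholders for your own account:

```shell
# hypothetical alias; replace the -e values with your own credentials
alias bxdev='docker run -it --rm \
  -e BX_USERNAME=me@example.com -e BX_PASSWORD=secret \
  -e BX_ORG=myorg -e BX_SPACE=dev \
  -e BX_ACCOUNT_ID=abc123 -e WSK_AUTH=xyz456 \
  -v ${PWD}:/root/host ansi/bluemixcli /bin/bash'
```

The single quotes keep ${PWD} unexpanded until the alias is actually used, so the current directory is mounted at call time, not at definition time.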

How to use travis-ci to check your application code and the smart contract

Travis CI – Build summary

Hyperledger projects code structure

Travis-CI is one of the most used continuous integration platforms for open source projects. In order to check your hyperledger blockchain application you need to check at least two different languages in the same travis-ci configuration, as your smart contract is written in Go and your application most likely in Javascript or Python or whatever(tm).

Let's assume you have a project with:

  • Smart Contract written in GO
  • Backend Server written in Python
  • Frontend written in Javascript (NodeJS)

If you run your hyperledger blockchain for development purposes on Bluemix you will need to compile your code against v0.6. This leads to a problem because you cannot specify a branch when you check out a repository with "go get". Therefore we need to check out the code "by hand". Let us assume your project looks like this:




Each part of the project (smart contract, backend, frontend) is in a separate directory and has a code part and a unit test part. Without going into details on how to run unit tests for each language, we concentrate now on the .travis.yml file.

Configure travis.yml

    - language: python
      python: 2.7
        - cd python
        - python -m unittest discover -p

    - language: python
      python: nightly
        - cd python
        - python -m unittest discover -p

    - language: node_js
      node_js: 7
        - cd javascript
        - ls

    - language: go
      go: master
        - cd go
        - mkdir -p $GOPATH/src/
        - git   -C $GOPATH/src/ clone -b v0.6
        - go build ./

We have two tests for python (one with 2.7, the other with the latest nightly build), one for javascript (no unit test in this example) and the Go part.

Important is the script of the Go part: before we run go build, we create the directory for hyperledger and manually check out the official hyperledger code with the v0.6 branch. After that we can build the smart contract normally by calling go build ./