Fixing the caching_sha2 problem with wordpress and mysql version 8

The problem

I am running wordpress and mysql together in a docker installation; the procedure for my setup is described here. Since version 8, mysql uses caching_sha2 as the default password algorithm. When you use the auto update mechanism in wordpress everything is fine, and wordpress keeps working with the native password plugin configured for the wordpress user. But if you run wordpress in a docker container and pull wordpress:latest, there is a problem accessing the mysql database since wordpress 4.9.7. (Never thought I could use the word wordpress so many times in a sentence!)

The solution

The solution is relatively easy. You need to change the wordpress database user manually from "mysql_native_password" to "caching_sha2_password". This can be done with a simple SQL call. First stop your wordpress docker container and keep the mysql docker container running. Then execute these commands.
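
A minimal sketch of the call (the "%" host pattern and the NEWPASSWORD placeholder are assumptions; use the host and password your wordpress database user is actually configured with):

    docker exec -it blog_wordpress_db_1 \
      mysql -uroot -pREALLYEPICSECURE \
      -e "ALTER USER 'wordpressuser'@'%' IDENTIFIED WITH caching_sha2_password BY 'NEWPASSWORD';"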

Replace "blog_wordpress_db_1" with the name of your mysql docker container (see "docker ps"), "REALLYEPICSECURE" with your root password and "wordpressuser" with your wordpress username.

That is basically all. Now you can start your wordpress:latest docker container again and it should work.

 

Workload container for autoscaling test with kubernetes

Workload

The Idea

Every now and then you want to test your installation, your server or your setup, especially when you want to test autoscaling functionality. Kubernetes has an out-of-the-box autoscaler, and the official documentation recommends a test docker container with an apache and php installation. This is really great for testing a web application where you have some workload for a relatively short time frame. But I would also like to test a scenario where the workload runs for a longer time in the kubernetes setup and generates much more cpu load than a web application. Therefore I hacked together a small docker container based on a load generator written in C.

The docker container

The docker container is basically a very simple Flask server with only one entry point, "/". The workload itself can be configured via two parameters:

  • percentage: how much cpu load will be generated
  • seconds: how long the workload will be active

The docker container itself uses nearly no CPU cycles, as Flask is the only active python process and it simply waits for calls before it starts consuming CPU cycles.

lookbusy

I use a very nice open source tool called lookbusy by Devin Carraway which consumes memory and cpu cycles based on command line parameters. Unfortunately the program has no parameter to configure the time span it should run. Therefore I wrap it in the unix command timeout to terminate its execution after the given number of seconds.
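
Combined, the call looks roughly like this (75 percent and 60 seconds are just example values):

    # generate 75% cpu load and let timeout kill lookbusy after 60 seconds
    timeout 60 lookbusy -c 75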

The Flask python wrapper

The only program is a short python Flask one. It takes the GET call to its root path, checks for the two parameters and starts a thread that runs the subprocess. The GET call returns immediately, so long-running workload simulations are also supported.
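
A minimal sketch of such a wrapper (not the exact code of the container; the default values are assumptions):

    from flask import Flask, request
    import subprocess
    import threading

    app = Flask(__name__)

    def generate_load(percentage, seconds):
        # lookbusy has no duration parameter, so timeout terminates it after <seconds>
        subprocess.call(["timeout", str(seconds), "lookbusy", "-c", str(percentage)])

    @app.route("/")
    def workload():
        percentage = request.args.get("percentage", "50")
        seconds = request.args.get("seconds", "60")
        # run the load generator in a thread so the GET call returns immediately
        threading.Thread(target=generate_load, args=(percentage, seconds)).start()
        return "Workload started: {}% for {}s\n".format(percentage, seconds)

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=80)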

The Dockerfile

The docker container is based on python latest (at this time 3.6.4). I put all the curl, make, install and rm calls into a single line in order to have a minimal footprint for the docker layer, as we do not need the source code any more. As Flask is the only requirement I also install it directly without a requirements.txt file. The "-u" parameter for the python call is necessary to prevent python from buffering the output. Otherwise it can be quite confusing when trying to read the debug log.

Building and pushing the docker container

Building and pushing it to hub.docker.com is straightforward and nothing special.
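
For completeness, the two calls look like this (the image name is a placeholder):

    docker build -t <dockerhubuser>/workload .
    docker push <dockerhubuser>/workload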

Testing it on a kubernetes cluster

I have chosen the IBM cloud to test my docker container.

Requesting a kubernetes cluster

Requesting a kubernetes cluster can be done after login with
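a command along these lines (a sketch; all values are examples and have to be replaced with your own):

    bx cs cluster-create \
      --name mycluster \
      --location dal10 \
      --workers 3 \
      --kube-version 1.8.6 \
      --private-vlan 1788637 \
      --public-vlan 1788635 \
      --machine-type b2c.4x16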

This command uses the bluemix CLI with the cluster plugin to control and configure kubernetes on the IBM infrastructure. The parameters are

  • --name to give your cluster a name (will be very important later on)
  • --location which datacenter to use (in this case dallas). Use "bx cs locations" to get the possible locations for the chosen region
  • --workers how many worker nodes are requested
  • --kube-version which kubernetes version should be used. Use "bx cs kube-versions" to get the available versions. "(default)" is not part of the parameter call.
  • --private-vlan which vlan for the private network should be used. Use "bx cs vlans <location>" to get the available public and private vlans
  • --public-vlan see private vlan
  • --machine-type which kind of underlying configuration you want to use for your worker node. Use "bx cs machine-types <location>" to get the available machine types. The first number after the "." is the amount of cores and the one after "x" is the amount of RAM in GB.

This command takes some time (~1h) to generate the kubernetes cluster. BTW my bluemix cli docker container has all necessary tools and also a nice script called “start_cluster.sh” to query all parameters and start a new cluster. After the cluster is up and running we can get the kubernetes configuration with
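the cluster-config command (using the cluster name chosen above):

    bx cs cluster-config mycluster
    # the command prints an export statement for the downloaded config file, copy and paste it:
    export KUBECONFIG=<path printed by the command above>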

Starting a pod and replica set

We start the pod and replica set without a yaml file because the request is very straightforward. Important here is the parameter "--requests". Without it the autoscaler cannot measure the cpu load and never triggers.
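
A sketch of the call (the image name is a placeholder for the workload container built above):

    kubectl run workload --image=<dockerhubuser>/workload --port=80 --requests=cpu=200m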

Exposing the http port

Again, because the call is so simple, we directly call kubectl without a yaml file to expose port 80. We can check for the public IP with
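the service listing, for example (assuming the deployment is called "workload"):

    kubectl expose deployment workload --type=LoadBalancer --port=80
    kubectl get svc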

In case the cloud runs out of public IP addresses and the "EXTERNAL-IP" is still pending after several minutes, we can use one of the workers' public IP addresses and the dynamically assigned port. The port is visible with "kubectl get svc" in the "PORT(S)" column, in the form internalport:externalport. The workers' public IP can be checked with
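the workers command of the cluster plugin:

    bx cs workers mycluster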

So instead of calling our service on port 80 with an official public IP address, we can use
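the worker IP and the node port directly (placeholders; take the values from the two commands above):

    curl "http://<worker-public-ip>:<nodeport>/?percentage=50&seconds=60"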

Autoscaler

Kubernetes has a built-in horizontal autoscaler which can be started with
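a single kubectl call, for example:

    kubectl autoscale deployment workload --cpu-percent=50 --min=2 --max=10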

In this case it measures the cpu load and starts new pods when the load is over 50%. The autoscaler in this configuration never starts more than 10 and never fewer than 2 pods. The current measurements and parameters can be checked with
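the hpa resource:

    kubectl get hpa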

So right now the cpu load is 0 and only one replica is running.

Loadtest

Time to call our container and start the load test. Depending on the URL we can use curl to start the test with
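a call along these lines (example values; the address is the service or worker address from above):

    curl "http://<external-ip>/?percentage=80&seconds=300"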

and check the result after some time with
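the autoscaler and pod listing:

    kubectl get hpa
    kubectl get pods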

As we can see, the load increases and the autoscaler kicks in. More details can be obtained with the "kubectl proxy" command.

Deleting the kubernetes cluster

To clean up we could delete all pods, replica sets and services one by one, but we can also delete the complete cluster with
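a single command (again using the cluster name from above):

    bx cs cluster-rm mycluster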

 

Execute the Radio Meteor Observations program on mac os

MeteorLogger Screenshot

What is it about

Wolfgang Kaufmann wrote an impressive article and an even more impressive piece of software for hobby radio astronomers. I highly recommend checking out the article and playing with the software, as there are not so many radio astronomers in the community. His software is written in python with a very clean UI. It directly connects to the computer's sound card and grabs the audio signal. Here I briefly describe what to install on Mac OS to get his software up and running.

Where to get it

The software can be downloaded at http://www.ars-electromagnetica.de/robs/download.html. Unfortunately it is not available on any online repo like github but the source code can be downloaded as a zip file.

Preparation

The PyAudio package needs some libraries and direct access to the OS sound system. Therefore we need to install this audio package outside of python itself.
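
On Mac OS this is typically done with Homebrew (assuming Homebrew is installed):

    brew install portaudio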

The necessary python libs can be installed via pip. I recommend doing it in a virtual environment
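
For example (a sketch; "meteor" is just an arbitrary name for the environment; add whatever other packages the program requires):

    python3 -m venv meteor
    source meteor/bin/activate
    pip install pyaudio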

That is all. After installing the python libs the program starts right away

 

Setting up SDRplay remote on a raspberry pi

SDRplay

SDRPlay

I recently bought myself an SDRplay receiver to play with this technology and maybe build a ground station or meteor scatter detector. The original plan is to set up a receiver on the Motionlab roof with a raspberry pi and send the IQ data via the network down to a local server to extract the interesting information. One great piece of software for working remotely with an SDR receiver is the Soapy project.

Install the raspberry pi part

Build system

Install the latest raspberry pi lite version from raspberrypi.org
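
On top of the lite image we need the usual build tools to compile the Soapy components (the exact package list is a sketch):

    sudo apt-get update
    sudo apt-get install -y git cmake g++ swig python-dev python-numpy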

Core system

The soapy part consists of three parts. The core system must be installed first.
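
The build follows the usual cmake procedure (a sketch along the lines of the SoapySDR build instructions):

    git clone https://github.com/pothosware/SoapySDR.git
    cd SoapySDR
    mkdir build && cd build
    cmake ..
    make -j4
    sudo make install
    sudo ldconfig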

SDRplay

The SDRplay part consists of two pieces: one is the proprietary binary library from SDRplay itself, the other is the soapy wrapper for SDRplay.

Binary Libraries

The driver can be downloaded from the SDRplay homepage https://www.sdrplay.com/rpi2dl.php
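
The download is a self-extracting installer that has to be made executable and run (the exact file name depends on the current API version):

    chmod +x ./SDRplay_RSP_API-RPi-*.run
    sudo ./SDRplay_RSP_API-RPi-*.run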

The SDRplay Soapy wrapper
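
The wrapper is built the same way as the core system (a sketch):

    git clone https://github.com/pothosware/SoapySDRPlay.git
    cd SoapySDRPlay
    mkdir build && cd build
    cmake ..
    make
    sudo make install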

Test the Soapy access
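
SoapySDR ships a small utility that shows whether the module and the receiver are found:

    SoapySDRUtil --info
    SoapySDRUtil --probe="driver=sdrplay"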

Soapy Server for Remote Access
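
The remote access itself comes from the SoapyRemote module, which is built like the other components (a sketch):

    git clone https://github.com/pothosware/SoapyRemote.git
    cd SoapyRemote
    mkdir build && cd build
    cmake ..
    make
    sudo make install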

Run the server
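
The server binds to a network port and serves the locally attached SDR to remote clients (0.0.0.0 makes it reachable from other hosts; 55132 is the default SoapyRemote port):

    SoapySDRServer --bind="0.0.0.0:55132"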

If you want to run it as a service have a look here on how to autostart stuff in linux.

Dev-Ops with OTA update for ESP8266

Over the Air update (OTA) for ESP8266

esp

Thanks to the esp8266 project on github there is a very convenient way to update an ESP over the air. Three different ways are available.

  1. The first one is via the arduino IDE itself, where the esp opens a port and is available for firmware upload just like over a serial connection. Very convenient if you are in the same network.
  2. The second one is via http upload: the esp provides a web server to upload the bin file. In this case there is no need to be in the same network, but it is still a push and has to be done individually for each installed esp.
  3. The third one is the most convenient way for a bigger installation base, or in case the devices are behind a firewall (as they always should be) and no remote access is possible. Here the device downloads the firmware itself via http(s) from a web server somewhere in the internet.

For a complete dev-ops pipeline, from pushing to a repository to flashing a device, the third scenario is the easiest one. So we need a place to store the binary files. For convenience I use amazon s3 to host my binary files, as travis easily supports s3 uploads. But it can be any internet platform where files can be stored and downloaded via http(s). The necessary code on the arduino side looks like this:

This arduino function can be called from time to time (at startup, or every now and then on constantly running systems) to check for a new firmware version and, in case a new version is available, automatically flash it and restart.

  • Line 1 is a #define with a placeholder for the current version of the installed firmware. This placeholder is replaced in the travis build pipeline with an increasing number, so the compiled code contains something like 23 or 42 instead of REPLACE_WITH_CURRENT_VERSION.
  • Line 2 is the URL of the latest firmware binary.
  • Line 3 is the URL of a file containing only one line with the latest build number.
  • Lines 7-9 load the version file from s3.
  • Lines 12-13 convert the file into a number which can be compared with the define from line 1.
  • Line 17 is the firmware update itself. A detailed description of the ESPhttpUpdate class can be found here.

There are two ways to check whether a new version is available and only flash if there is something new. The one we use here is our own mechanism: on s3 I can only host static files, therefore I place the latest build number in a static file next to the firmware itself. The other way is built into ESPhttpUpdate: the update function can be called with a build number which is compared on the server, and the return code reflects whether there is a new version or not. In this case we would need a script on the server to check for it.

Get an increasing build version number

With a little bash script we could load the last build number from s3 and then increase it in order to have the current number for our build.

This script loads the version file (line 3), increases the number (line 4) and patches our source code file (line 11) with this number instead of REPLACE_WITH_CURRENT_VERSION. After running this script the current source code contains the latest number and also the upload folder for s3 has a new file with the newest number in order to inform the polling ESPs.

Travis config file

Travis-ci is incredibly easy to use and very reliable for continuous integration. In combination with platformio it is very easy to compile arduino code for several types of hardware. Simply configure the hardware in the platformio.ini file:
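
A minimal configuration for the board used here could look like this (a sketch):

    [env:huzzah]
    platform = espressif8266
    board = huzzah
    framework = arduino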

In this case we use the esp8266 feather board aka Huzzah. Just set the board entry to your kind of esp.

Travis itself is configured by the .travis.yml file in the root directory of your repository on github:

  • Line 1: Platformio is based on python so the build environment (although the code is c++) is python for maintaining platformio.
  • Line 3: Right now platformio is only available for python 2.7 so this line gets the latest stable version of python 2.7.
  • Lines 5-7: Get the latest cache files from the last build in order to save compile time and reduce the costs for travis. As this service is free for open source projects, it is always nice to save some money for the cool guys.
  • Line 10: Installs the latest version of platformio itself.
  • Line 13: Creates the upload directory which we will upload to s3 later on.
  • Line 14: Calls the build number increase and patch script.
  • Lines 15-16: Patch the wireless lan config in case it is not handled inside the arduino code itself.
  • Line 17: Calls platformio to download all libraries and compile the arduino code itself.
  • Line 18: Platformio generates a lot of files for the linker and several other files. We only need the bin file later on, so we copy it here to the upload folder.
  • Line 20: Travis has a built-in functionality to upload files after compilation. This is the part where we upload the files to s3.
  • Line 22: Defines the s3 bucket to upload the files.
  • Line 23-26: Provides the encrypted s3 credentials. See travis documentation on how to create these lines.
  • Line 29: Defines the local folder to be uploaded. Otherwise travis will upload everything from the current run.
  • Line 30: Defines the s3 folder in the bucket where the files will be stored.

With these files in place travis monitors your github repository and creates / uploads new firmware versions each time you push changes to your repository. The arduino code checks for new versions and patches itself as soon as a new version is available. A complete project can be found here in my github repository.

 

Docker container with Bluemix CLI tools

BluemixCLI on docker hub

Being a developer advocate means always playing with the latest version of tools and being on the edge. But installed programs get out of date, and so I always end up with old versions of CLI tools installed. One reason why I love cloud computing (aka other people's computers) so much is that I don't need to update the software, it is done by professionals. In order to always have the latest version of my Bluemix CLI tools at hand, and to be authenticated, I compiled a little docker container with my favourite command line tools: cf, bx, docker and wsk.

Getting the docker container

I published the docker container on the official docker hub. So getting it is very easy when the docker tools are installed. The following command downloads the latest version of the container and therefore the latest version of the installed cli tools. We need to run it from time to time to make sure the latest version is available on our computer.
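
The pull looks like this (the image name is a placeholder for the name used on docker hub):

    docker pull <dockerhubuser>/bluemix-cli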

Get the necessary parameters

For all command line tools we need usernames, passwords and IDs. Obviously we cannot hardcode them into the docker container, therefore we need to pass them along as command line parameters when starting it.

  • Username (the same as we use to login to Bluemix)
  • Password (the same as we use to login to Bluemix)
  • Org (the organisation we want to work in, must already exist)
  • Space (the space we want to work in, must already be created)
  • AccountID (this can be taken from the URL when we open "Manage Organisation" and click on the account)
  • OpenwhiskID (individual for org and space, can be fetched here: https://console.bluemix.net/openwhisk/learn/cli)

Run the container

The container can be started with docker run, passing all parameters in with -e:
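
A sketch of such a call (the environment variable names and the image name are assumptions; the values are examples):

    docker run -ti \
      -e BLUEMIX_USER=me@example.com \
      -e BLUEMIX_PASSWORD=SECRET \
      -e BLUEMIX_ORG=myorg \
      -e BLUEMIX_SPACE=dev \
      -e BLUEMIX_ACCOUNT_ID=0123456789 \
      -e OPENWHISK_ID=abcdef \
      -v $(pwd):/root/host \
      <dockerhubuser>/bluemix-cli bash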

Line 8 mounts the local directory inside the docker container under /root/host. So we can fire up the container and have a bash with the latest tools and our source code available.

Use the tools

Before we can use the tools we need to configure them and authenticate against Bluemix. The script “init.sh” which is located in “/root/” (our working directory) takes care of all logins and authentications.

cf

The Cloudfoundry command line tool for starting, stopping apps and connecting services.

bx

The Bluemix version of the Cloudfoundry command line tool, including the plugin for container maintenance. By initializing this plugin we also get the credentials and settings for the docker client to use Bluemix as a docker daemon.

docker

The normal docker client with Bluemix as daemon configured.

wsk

The OpenWhisk client already authenticated.

We can configure an alias in our .bashrc so that by just typing "bxdev" we get a bash with the latest cli tools available.
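
For example (a sketch reusing the run command from above; adjust the values and the image name to your setup):

    alias bxdev='docker run -ti -e BLUEMIX_USER=me@example.com -e BLUEMIX_PASSWORD=SECRET -e BLUEMIX_ORG=myorg -e BLUEMIX_SPACE=dev -e BLUEMIX_ACCOUNT_ID=0123456789 -e OPENWHISK_ID=abcdef -v $(pwd):/root/host <dockerhubuser>/bluemix-cli bash'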

How to use travis-ci to check your application code and the smart contract

Travis CI – Build summary

Hyperledger projects code structure

Travis-CI is one of the most used continuous delivery platforms for open source projects. In order to check your hyperledger blockchain application you need to cover at least two different languages in the same travis-ci configuration, as your smart contract is written in Go and your application most likely in Javascript or Python or whatever(tm).

Let's assume you have a project with:

  • Smart Contract written in GO
  • Backend Server written in Python
  • Frontend written in Javascript (NodeJS)

If you run your hyperledger blockchain for development purposes on Bluemix you will need to compile your code against v0.6. This leads to a problem, because you cannot specify a branch when you check out a repository with "go get". Therefore we need to check out the code "by hand". Let us assume your project looks like this:
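
For example, a layout along these lines (the directory names are just an assumption):

    myproject/
      chaincode/    (smart contract written in GO, plus its go tests)
      backend/      (python server, plus its unit tests)
      frontend/     (nodejs app, plus its javascript tests)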

Each part of the project (smart contract, backend, frontend) is in a separate directory and has a code part and a unit test part. Without getting into details on how to run unit tests for each language, we concentrate now on the .travis.yml file.

Configure travis.yml

We have two tests for python (one with 2.7, the other with the latest nightly build), one for the javascript part (no unit test in this example) and one for the GO part.

Important are lines 29 and 30: before we run the go build, we create the directory for hyperledger and manually check out the official hyperledger code at the v0.6 branch. After that we can build the smart contract as usual by calling go build ./
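
In shell terms, those two lines do something like this (a sketch; the paths depend on your GOPATH setup):

    mkdir -p $GOPATH/src/github.com/hyperledger
    git clone -b v0.6 https://github.com/hyperledger/fabric.git $GOPATH/src/github.com/hyperledger/fabric
    # afterwards the chaincode builds as usual
    cd chaincode && go build ./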