The Idea

Every now and then you want to test your installation, your server or your setup, especially when you want to test auto scaling functionality. Kubernetes has an out-of-the-box autoscaler, and the official documentation recommends a test Docker container with an Apache and PHP installation. This is really great for testing a web application where you have some workload for a relatively short time frame. But I would also like to test a scenario where the workload runs for a longer time in the Kubernetes setup and generates far more CPU load than a web application. Therefore I hacked together a nice Docker container based on a C load generator.

The Docker container

The Docker container is basically a very simple Flask server with a single endpoint, “/”. The workload itself can be configured via two query parameters:

  • percentage: how much CPU load will be generated
  • seconds: how long the workload will be active

The container itself consumes almost no CPU cycles while idle: Flask is the only active Python process, and it simply waits for incoming calls before generating any load.
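
A call to the running container could look like this, where <container-host> is a placeholder for wherever the container is reachable:

curl "http://<container-host>/?percentage=75&seconds=30"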

lookbusy

I use a very nice open source tool called lookbusy by Devin Carraway which consumes memory and CPU cycles based on command line parameters. Unfortunately the program has no parameter to configure the time span it should run. Therefore I wrap it in the Unix command timeout, which terminates its execution after the given number of seconds.
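
The resulting call, which the Python wrapper below executes, then looks roughly like this (30 seconds at 75% CPU):

timeout 30 /usr/local/bin/lookbusy -c 75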

The Flask Python wrapper

import subprocess
from threading import Thread
from flask import Flask, request

app = Flask(__name__)

def worker(percentage, seconds):
    # lookbusy has no duration option, so the Unix timeout command
    # terminates it after the given number of seconds
    subprocess.run(['timeout', str(seconds),
                    '/usr/local/bin/lookbusy', '-c', str(percentage)])

@app.route('/')
def load():
    # read the two query parameters, falling back to 50% for 10 seconds
    percentage = request.args.get('percentage', 50)
    seconds    = request.args.get('seconds', 10)
    # run the load generator in a background thread so the HTTP
    # call returns immediately, even for long-running simulations
    Thread(target=worker, args=(percentage, seconds)).start()
    return "started"

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80, processes=10)

The wrapper is a very short Python Flask program: it takes a GET call to its root path, checks for the two parameters and starts a thread running the subprocess. The GET call returns immediately, so the wrapper also supports long-running workload simulations.
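
A quick way to see this asynchronous behavior (again with <container-host> as a placeholder): the curl call below should return within milliseconds, while the load keeps running for 300 seconds.

time curl "http://<container-host>/?seconds=300&percentage=90"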

The Dockerfile

FROM   python:latest
# download, build and install lookbusy, then remove the sources
# again so they do not end up in the layer
RUN    curl http://www.devin.com/lookbusy/download/lookbusy-1.4.tar.gz | tar xvz && \
       cd lookbusy-1.4 && ./configure && \
       make && make install && cd .. && rm -rf lookbusy-1.4
RUN    pip install Flask
COPY   server.py server.py
EXPOSE 80
# "-u" keeps python from buffering its output
CMD    ["python", "-u", "server.py"]

The Docker container is based on python:latest. I put all the curl, make, install and rm calls into a single RUN instruction in order to keep the footprint of the Docker layer minimal, as we do not need the source code any more. As Flask is the only requirement, I also install it directly instead of using a requirements.txt file. The “-u” parameter for the python call is necessary to prevent python from buffering the output; otherwise reading the debug log can be quite irritating.

Building and pushing the Docker container

docker build -t ansi/lookbusy .
docker push     ansi/lookbusy

Building and pushing it to hub.docker.com is straightforward and nothing special.
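
Before pushing, the image can also be smoke-tested locally; a minimal sketch (host port 8080 chosen arbitrarily), with “docker stats” showing the CPU consumption live:

docker run -d -p 8080:80 ansi/lookbusy
curl "http://localhost:8080/?percentage=60&seconds=20"
docker stats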

Testing it on a Kubernetes cluster

I have chosen the IBM Cloud to test my Docker container.

Requesting a Kubernetes cluster

Requesting a Kubernetes cluster can be done after login with

bx cs cluster-create --name ansi-blog --location dal10 --workers 3 --kube-version 1.8.6 --private-vlan 1788637 --public-vlan 1788635 --machine-type b2c.4x16

This command uses the Bluemix CLI with the container-service plugin to control and configure Kubernetes on the IBM infrastructure. The parameters are

  • --name to give your cluster a name (will be very important later on)
  • --location which datacenter to use (in this case Dallas). Use “bx cs locations” to get the possible locations for the chosen region
  • --workers how many worker nodes are requested
  • --kube-version which Kubernetes version should be used. Use “bx cs kube-versions” to get the available versions; the “(default)” suffix is not part of the parameter value
  • --private-vlan which VLAN for the private network should be used. Use “bx cs vlans” to get the available public and private VLANs
  • --public-vlan see --private-vlan
  • --machine-type which kind of underlying configuration you want to use for your worker nodes. Use “bx cs machine-types” to get the available machine types. The first number after the “.” is the number of cores, the one after the “x” the amount of RAM in GB.

This command takes some time (~1h) to create the Kubernetes cluster. BTW, my Bluemix CLI Docker container has all the necessary tools and also a nice script called “start_cluster.sh” to query all parameters and start a new cluster. After the cluster is up and running we can get the Kubernetes configuration with

bx cs cluster-config ansi-blog
OK
The configuration for ansi-blog was downloaded successfully. Export environment variables to start using Kubernetes.

export KUBECONFIG=/root/.bluemix/plugins/container-service/clusters/ansi-blog/kube-config-dal10-ansi-blog.yml
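
After exporting the variable, kubectl points at the new cluster, which can be quickly verified with, for example:

kubectl get nodes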

Starting a pod and replica set

kubectl run loadtest --image=ansi/lookbusy --requests=cpu=200m

We start the pod and replica set without a yaml file because the request is very straightforward. Important here is the parameter “--requests”. Without it the autoscaler cannot measure the CPU load and will never trigger.
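
To verify that the request made it into the deployment spec, the resource section can be inspected, for example like this (it should print “200m”):

kubectl get deployment loadtest -o jsonpath='{.spec.template.spec.containers[0].resources.requests.cpu}'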

Exposing the HTTP port

kubectl expose deployment loadtest --type=LoadBalancer --name=loadtest --port=80

Again, because the call is so simple, we call kubectl directly without a yaml file to expose port 80. We can check for the public IP with

kubectl get svc
NAME     TYPE         CLUSTER-IP   EXTERNAL-IP PORT(S)      AGE
loadtest LoadBalancer 172.21.3.160 <pending>   80:31277/TCP 23m

In case the cloud runs out of public IP addresses and the “EXTERNAL-IP” is still pending after several minutes, we can use one of the workers' public IP addresses together with the dynamically assigned node port. The port is visible in the “PORT(S)” column of “kubectl get svc”; the syntax is serviceport:nodeport, so in this example port 80 of the service is reachable on port 31277 of every worker node. The workers' public IPs can be checked with

bx cs workers ansi-blog
ID                                               Public IP     Private IP     Machine Type       State  Status Version
kube-dal10-cr1dd768315d654d4bb4340ee8159faa17-w1 169.47.252.96 10.177.184.212 b2c.4x16.encrypted normal Ready  1.8.6_1506

So instead of calling our service with an official public IP address on port 80 we can use

http://169.47.252.96:31277
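
Instead of reading the node port off the “kubectl get svc” output, it can also be extracted directly; a small sketch:

kubectl get svc loadtest -o jsonpath='{.spec.ports[0].nodePort}'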

Autoscaler

Kubernetes has a built-in horizontal pod autoscaler which can be started with

kubectl autoscale deployment loadtest --cpu-percent=50 --min=1 --max=10

In this case it measures the CPU load and starts new pods when the average load across the pods rises above 50% of the requested CPU. With this configuration the autoscaler never runs more than 10 and never fewer than 1 pod. The current measurements and parameters can be checked with

kubectl get hpa
NAME      REFERENCE           TARGETS  MINPODS MAXPODS REPLICAS AGE
loadtest  Deployment/loadtest 0% / 50% 1       10      1        23m

So right now the CPU load is 0% and only one replica is running.

Loadtest

Time to call our container and start the load test. Depending on the URL we can use curl to start the test with

curl "http://169.47.252.96:31277/?seconds=1000&percentage=80"

and check the result after some time with

kubectl get hpa
NAME      REFERENCE           TARGETS  MINPODS MAXPODS REPLICAS AGE
loadtest  Deployment/loadtest 60%/50%  1       10      6        23m

As we can see, the load increases and the autoscaler kicks in. More details can be obtained with the “kubectl proxy” command.
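
To follow the scale-up (and the later scale-down) live, the autoscaler and the pods can be watched; note that “kubectl run” labels the pods with run=loadtest:

kubectl get hpa -w
kubectl get pods -l run=loadtest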

Deleting the Kubernetes cluster

To clean up, we could delete all pods, replica sets and services individually, but we can also delete the complete cluster with

bx cs cluster-rm ansi-blog
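
Alternatively, to keep the cluster and only remove the resources created above:

kubectl delete hpa loadtest
kubectl delete svc loadtest
kubectl delete deployment loadtest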