I am using WordPress with MySQL, both as Docker installations. The procedure for my setup is described here. Since MySQL updated to version 8, caching_sha2_password has been the default password algorithm. When you use the auto update mechanism in WordPress, everything is fine and WordPress still works with the native password plugin configured for the WordPress user. But if you run WordPress in a Docker container and pull wordpress:latest, there is a problem since WordPress 4.9.7 when accessing the MySQL database: (never thought I could use the word WordPress so many times in one paragraph!)
Warning: mysqli::__construct(): Unexpected server respose while doing caching_sha2 auth: 109 in Standard input code on line 22
Warning: mysqli::__construct(): MySQL server has gone away in Standard input code on line 22
Warning: mysqli::__construct(): (HY000/2006): MySQL server has gone away in Standard input code on line 22
MySQL Connection Error: (2006) MySQL server has gone away
The solution
The solution is relatively easy: you need to change the WordPress user manually from "mysql_native_password" to "caching_sha2_password". This can be done with a simple SQL call. First stop your WordPress Docker container and keep the MySQL Docker container running, then execute these commands:
docker exec -it blog_wordpress_db_1 bash
mysql -u root -pREALLYEPICSECURE
ALTER USER wordpressuser IDENTIFIED WITH caching_sha2_password BY 'REALLYEPICSECURE';
exit
exit
Replace blog_wordpress_db_1 with the name of your MySQL Docker instance (see "docker ps"), "REALLYEPICSECURE" with your root password and "wordpressuser" with your WordPress username.
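If you want to verify which authentication plugin the user is now configured with, you can check it inside the same mysql shell:
SELECT user, plugin FROM mysql.user WHERE user = 'wordpressuser';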
That is basically all. Now you can start your wordpress:latest docker container again and it should work.
I really like to plan my day in my calendar. Therefore I added a lot of external iCal feeds like Meetup, open-air cinema and of course Launch Library. In order to decide on transportation I always had the Weather Underground page open in a separate browser tab. This is very inconvenient, so I wrote a small script that gets weather predictions via API calls from Wunderground, exports an iCal feed and updates my Google calendar with weather conditions.
Wunderground
Weather Underground is (or at least was for many years) the coolest weather page on the internet. Really great UI and a wonderful API to get current weather conditions and weather predictions for the next 10 days. Furthermore (and that is why I really, really like it), users can send their own weather sensor data to the site to enhance the sensor mesh network and get a nice visualization. Unfortunately the service is losing features on a monthly basis and the page itself is also down for several hours every now and then. Very sad, but I still love it.
As I said, they have a nice API to get a weather forecast for the next 10 days on an hourly basis. OK, we can all discuss how dependable a weather prediction for a certain hour in 8 days is, but at least for the next few days it is really helpful. I am using the forecast10day and the hourly10day API endpoints to get a nicely formatted JSON document from Wunderground. If you want to run this script for your own area you need an account and an API key, as the calls are rate limited (but free).
PWS
My favorite makerspace (Motionlab.berlin) has an epic weather phalanx (as I love to call it) and sends its local weather conditions to Wunderground. Therefore, besides asking for the weather conditions in a city, I can ask for the conditions reported by a specific weather station. In our case it's the IBERLIN1705 station. Check out the current conditions here.
Forecast10day
The API call to http://api.wunderground.com/api/YOUR-API-KEY-HERE/forecast10day/q/pws:IBERLIN1705.json returns, for each of the next 10 days, information about humidity, temperature (min/max), snow, rain, wind and much more. I take these data and create one calendar entry each morning at 06:00-06:15 with summary information for the day. Especially for days beyond the 4-day boundary this daily summary is more accurate than the hourly information. Getting this information in Python is very easy:
import json
import requests

def forecast():
    # fetch the 10 day forecast for the PWS IBERLIN1705
    try:
        data = json.loads(requests.get("http://api.wunderground.com/api/YOUR-API-HERE/forecast10day/q/pws:IBERLIN1705.json").content)
    except Exception:
        print("Error in Forecast")
        return False
    for e in data['forecast']['simpleforecast']['forecastday']:
        day = e['date']['day']
        month = e['date']['month']
        year = e['date']['year']
        conditions = e['conditions']
        humidity = e['avehumidity']
        high = e['high']['celsius']
        low = e['low']['celsius']
        snow = e['snow_allday']['cm']
        rain = e['qpf_allday']['mm']
I am using requests to make the REST call and parse the "content" value with json.loads. As easy as it looks: the data variable contains the dictionary with all weather information on a silver platter (if the API is not down, which happens way too often).
Hourly10day
http://api.wunderground.com/api/YOUR-API-KEY/hourly10day/q/pws:IBERLIN1705.json contains the weather information on an hourly basis for the next 10 days, so the parsing is very similar to the forecast API call. I am especially interested in rain, snow, temperature, wind, dew point and UV index, as these are values I want to monitor and add calendar entries for when they are outside a certain range:
Wind > 23 km/h
Temperature > 30 or < -10 C
UV-Index > 4 (6 is max)
Rain and Snow in general
(Temperature – Dew point) < 3
Humidity in general is not so important and highly depends on the current temperature. But the dew point ("the atmospheric temperature (varying according to pressure and humidity) below which water droplets begin to condense and dew can form") is very interesting when you want to know whether it is getting muggy. Even when it is 10 C, a very small difference between temperature and dew point means you really feel the cold crawling into your bones. 🙂
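A minimal sketch of how these hourly checks could look (the field names such as 'hourly_forecast', 'temp', 'dewpoint', 'wspd' and 'uvi' are my assumption of the Wunderground JSON layout, so double check them against a real response):
import json
import requests

try:
    data = json.loads(requests.get("http://api.wunderground.com/api/YOUR-API-KEY/hourly10day/q/pws:IBERLIN1705.json").content)
except Exception:
    print("Error in Hourly")
else:
    for h in data['hourly_forecast']:
        temp = float(h['temp']['metric'])
        dewpoint = float(h['dewpoint']['metric'])
        wind = float(h['wspd']['metric'])
        uvi = float(h['uvi'])
        # flag hours that violate one of the thresholds above
        if wind > 23 or temp > 30 or temp < -10 or uvi > 4 or (temp - dewpoint) < 3:
            print("add a calendar entry for", h['FCTTIME']['pretty'])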
Ical
To create an iCal feed I use the icalendar library in Python. Very handy to create events and export them as an iCal (.ics) feed.
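A minimal sketch of how one event could be built (the texts, times and timezone are just placeholder values):
from datetime import datetime
import pytz
from icalendar import Calendar, Event

berlin = pytz.timezone('Europe/Berlin')
cal = Calendar()
event = Event()
event.add('summary', '14C, light rain')                  # shown in the calendar view
event.add('description', 'Humidity 80%, wind 12 km/h')   # shown in the entry details
event.add('dtstart', berlin.localize(datetime(2018, 7, 6, 6, 0)))
event.add('dtend', berlin.localize(datetime(2018, 7, 6, 6, 15)))
cal.add_component(event)
feed = cal.to_ical()                                      # bytes, ready to serve as .ics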
Summary is the text your calendar program shows in the calendar view, while description is displayed when showing the calendar entry details. "dtstart" and "dtend" mark the time range. For the timezone I use the pytz library, and "to_ical()" renders the feed. That's basically all you need to create an iCal feed.
Google
Google Calendar can import and subscribe to calendars. While import adds the calendar entries to an existing calendar once (great for concerts or public transport bookings), subscribe creates a new calendar and refreshes the feed only every 24 hours or more. This is great for long-lasting events like meetups or rocket starts, but weather predictions change several times per hour. Therefore I added a small feature to the script that actively deletes and re-creates the calendar entries. This way I can run it every 3 hours and keep the calendar up to date.
As always, Google offers nice and very handy API endpoints to manipulate the data. Besides calling the REST endpoints by hand, there are libraries for different languages. I use "googleapiclient" and "oauth2client" to access my calendar. The first step is to create a new calendar in Google, then activate the Calendar API in the developer console and create credentials for your app. The googleapiclient takes care of the OAuth dance and stores the credentials in a local file.
from googleapiclient.discovery import build
from httplib2 import Http
from oauth2client import file, client, tools
SCOPES = 'https://www.googleapis.com/auth/calendar'

def getService():
    store = file.Storage('token.json')
    creds = store.get()
    if not creds or creds.invalid:
        flow = client.flow_from_clientsecrets('credentials.json', SCOPES)
        creds = tools.run_flow(flow, store)
    return build('calendar', 'v3', http=creds.authorize(Http()))
If you call this function for the very first time, it requires the OAuth dance: basically, open a web page and grant access to your Google calendar. The secrets are stored in the token.json file and reloaded on every call.
Deleting old events
service = getService()
events_result = service.events().list(calendarId=CALENDAR_ID, maxResults=100, singleEvents=True, orderBy='startTime').execute()
events = events_result.get('items', [])
for e in events:
    service.events().delete(calendarId=CALENDAR_ID, eventId=e['id']).execute()
"getService" calls the function above to get an access object. "events().list().execute()" requests a list of the first 100 calendar entries, "events_result.get('items', [])" returns an array with all calendar entries and their details, and "service.events().delete().execute()" removes these entries.
Very similar to the delete calls, the add call gets the credentials and calls "events().insert().execute()" with a dictionary containing the detailed event information.
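A sketch of such an insert call (the event body follows the Calendar API v3 format, the concrete values are placeholders):
service = getService()
event = {
    'summary': '14C, light rain',
    'description': 'Humidity 80%, wind 12 km/h',
    'start': {'dateTime': '2018-07-06T06:00:00', 'timeZone': 'Europe/Berlin'},
    'end': {'dateTime': '2018-07-06T06:15:00', 'timeZone': 'Europe/Berlin'},
}
service.events().insert(calendarId=CALENDAR_ID, body=event).execute()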
Adding the 4 channel relay board ks0212 to the MQTT universe
We just hacked a Trotec dehumidifier for HerwigsObservatory. The idea was to additionally activate the dehumidifier when the difference between outside and inside humidity is above 10%. Normally there is a fan taking care of it, but sometimes the difference gets too high. As there is already a Raspberry Pi running in the observatory for the weather station and the flightradar24 installation, we just added the 4 channel relay board ks0212 from Keyestudio. Not touching the 220V part, we directly used a relay to "press" the TTL switch on the board for 0.5 seconds to turn the dehumidifier on and off. Here are the code snippets we used for this. The control is completely handled via MQTT.
For the sake of simplicity we used Python and the GPIO library wiringpi. Therefore we first install the Python development packages and then the Python libraries for wiringpi and MQTT. As this is a dedicated hardware installation, we don't use virtualenv and install the libraries directly as root, system wide.
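The installation could look roughly like this (package names are my assumption, adjust them to your Raspbian and Python version):
sudo apt-get install python-dev python-pip
sudo pip install wiringpi paho-mqtt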
The python program
import time
import wiringpi
import paho.mqtt.client as mqtt

def setup():
    # configure the four relay pins (WiringPi numbering) as outputs
    wiringpi.wiringPiSetup()
    wiringpi.pinMode(3, 1)
    wiringpi.pinMode(7, 1)
    wiringpi.pinMode(22, 1)
    wiringpi.pinMode(25, 1)

def short(pin):
    # simulate a button press: relay on for half a second
    switch_on(pin)
    time.sleep(0.5)
    switch_off(pin)

def switch_on(pin):
    wiringpi.digitalWrite(pin, 1)

def switch_off(pin):
    wiringpi.digitalWrite(pin, 0)

def on_connect(client, userdata, flags, rc):
    client.subscribe("sternwarte/relay/#")

def on_message(client, userdata, msg):
    # the last part of the topic selects the relay (names as printed on the pcb)
    m = msg.topic.split("/")
    pin = 0
    if m[-1] == "j3":
        pin = 3
    if m[-1] == "j2":
        pin = 7
    if m[-1] == "j4":
        pin = 22
    if m[-1] == "j5":
        pin = 25
    if pin != 0:
        payload = msg.payload.decode()
        if payload == "on":
            switch_on(pin)
        if payload == "off":
            switch_off(pin)
        if payload == "press":
            short(pin)

if __name__ == "__main__":
    setup()
    mqclient = mqtt.Client(clean_session=True)
    mqclient.on_connect = on_connect
    mqclient.on_message = on_message
    mqclient.connect("192.168.2.5", 1883, 60)
    mqclient.loop_forever()
Again, a very simple Python script: it attaches to an MQTT server (you need to change the code, there is no config) and subscribes to a certain topic. Then it waits for messages and cuts off the last part of the topic to identify the relay. The naming convention is based on the relay names printed on the ks0212 PCB. As payload you can send "on", "off" and "press". "press" switches the relay on for half a second in order to simulate a button press, as we need it for our dehumidifier.
Adding a systemd service
In order to keep the wannabe daemon up and running and also start it automatically at system start, we add this service configuration file in "/lib/systemd/system/relayboard.service":
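The unit file could look roughly like this (the path to the script and the Python binary are assumptions, adjust them to your installation):
[Unit]
Description=ks0212 relay board MQTT control
After=network-online.target

[Service]
ExecStart=/usr/bin/python /home/pi/relayboard.py
Restart=always

[Install]
WantedBy=multi-user.target
Afterwards enable and start it with "sudo systemctl enable relayboard.service" and "sudo systemctl start relayboard.service".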
If you want to do some hacking with the ks0212 relay board on your own, here is the pin mapping. I used the very cool site https://pinout.xyz/pinout/wiringpi for getting the numbers:
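Reconstructed from the pin numbers used in the script above (the BCM equivalents come from the pinout.xyz table, so double check them against your board revision):
J2 relay: WiringPi 7 (BCM 4)
J3 relay: WiringPi 3 (BCM 22)
J4 relay: WiringPi 22 (BCM 6)
J5 relay: WiringPi 25 (BCM 26)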
Every now and then you want to test your installation, your server or your setup, especially when you want to test auto scaling functionality. Kubernetes has an out of the box autoscaler, and the official documentation recommends a test Docker container with an Apache and PHP installation for this. This is really great for testing a web application where you have some workload for a relatively short time frame. But I would also like to test a scenario where the workload runs for a longer time in the Kubernetes setup and generates way more CPU load than a web application. Therefore I hacked a nice Docker container based on a C program load generator.
The docker container
The docker container is basically a very very simple Flask server with only one entry point “/”. The workload itself can be configured via two parameters:
percentage: how much CPU load will be generated
seconds: how long the workload will be active
The Docker container itself uses nearly no CPU cycles, as Flask is the only active Python process, and it just waits for calls before it starts burning CPU cycles.
lookbusy
I use a very nice open source tool called lookbusy by Devin Carraway, which consumes memory and CPU cycles based on command line parameters. Unfortunately the program has no parameter to configure the time span it should run. Therefore I call it via the unix command timeout to terminate its execution after the given number of seconds.
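For example, to generate roughly 75% CPU load for 30 seconds, the wrapped call looks like this:
timeout 30 /usr/local/bin/lookbusy -c 75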
The Flask python wrapper
import subprocess
from threading import Thread
from flask import Flask, request

app = Flask(__name__)

def worker(percentage, seconds):
    # let lookbusy burn cpu cycles and kill it after the given number of seconds
    subprocess.run(['timeout', str(seconds), '/usr/local/bin/lookbusy', '-c', str(percentage)])

@app.route('/')
def load():
    percentage = request.args.get('percentage') if "percentage" in request.args else 50
    seconds = request.args.get('seconds') if "seconds" in request.args else 10
    Thread(target=worker, args=(percentage, seconds)).start()
    return "started"

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80, processes=10)
The only program is a Python Flask one. It is very short: it takes the GET call to its root path, checks for the two parameters and starts a thread with the subprocess. The GET call returns immediately, as the script also supports long-running workload simulations.
The Dockerfile
FROM python:latest
RUN curl http://www.devin.com/lookbusy/download/lookbusy-1.4.tar.gz | tar xvz && \
cd lookbusy-1.4 && ./configure && \
make && make install && cd .. && rm -rf lookbusy-1.4
RUN pip install Flask
ADD server.py server.py
EXPOSE 80
CMD python -u server.py
The Docker container is based on python:latest (at this time 3.6.4). I put all the curl, make, install and rm calls into a single RUN line in order to have a minimal footprint for the Docker layer, as we do not need the source code any more. As Flask is the only requirement, I also install it directly without a requirements.txt file. The "-u" parameter for the python call is necessary to prevent Python from buffering the output. Otherwise it can be quite disturbing when trying to read the debug log.
The cluster itself is created with the bluemix CLI and its cluster plugin, which control and configure Kubernetes on the IBM infrastructure (a sketch of the full call follows the parameter list). The parameters are:
--name to give your cluster a name (will be very important later on)
--location which datacenter to use (in this case Dallas). Use "bx cs locations" to get the possible locations for the chosen region
--workers how many worker nodes are requested
--kube-version which Kubernetes version should be used. Use "bx cs kube-versions" to get the available versions. "(default)" is not part of the parameter call
--private-vlan which VLAN for the private network should be used. Use "bx cs vlans <location>" to get the available public and private VLANs
--public-vlan see private VLAN
--machine-type which kind of underlying configuration you want to use for your worker node. Use "bx cs machine-types <location>" to get the available machine types. The first number after the "." is the number of cores, the one after the "x" is the amount of RAM in GB
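Putting it all together, the call could look like this sketch (the VLAN IDs are placeholders, the other values match the cluster used in this post):
bx cs cluster-create --name ansi-blog --location dal10 --workers 1 --kube-version 1.8.6 --private-vlan 1502299 --public-vlan 1502175 --machine-type b2c.4x16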
This command takes some time (~1h) to create the Kubernetes cluster. BTW, my bluemix CLI Docker container has all necessary tools and also a nice script called "start_cluster.sh" to query all parameters and start a new cluster. After the cluster is up and running we can get the Kubernetes configuration with
bx cs cluster-config ansi-blog
OK
The configuration for ansi-blogtest was downloaded successfully. Export environment variables to start using Kubernetes.
export KUBECONFIG=/root/.bluemix/plugins/container-service/clusters/ansi-blog/kube-config-dal10-ansi-blog.yml
Starting a pod and replica set
kubectl run loadtest --image=ansi/lookbusy --requests=cpu=200m
We start the pod and replica set without a YAML file because the request is very straightforward. Important here is the parameter "--requests". Without it the autoscaler cannot measure the CPU load and will never trigger.
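The expose call itself could be a simple sketch like this (assuming the deployment created above and a LoadBalancer service):
kubectl expose deployment loadtest --type=LoadBalancer --port=80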
Again, because the call is so simple, we directly call kubectl without a YAML file to expose port 80. We can check for the public IP with
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
loadtest LoadBalancer 172.21.3.160 <pending> 80:31277/TCP 23m
In case the cloud runs out of public IP addresses and the "EXTERNAL-IP" is still pending after several minutes, we can use one of the workers' public IP addresses together with the dynamically assigned port. The port is visible with "kubectl get svc" in the "PORT(S)" section. The syntax is, as always in Docker, internalport:externalport. The workers' public IPs can be checked with
bx cs workers ansi-blog
ID Public IP Private IP Machine Type State Status Version
kube-dal10-cr1dd768315d654d4bb4340ee8159faa17-w1 169.47.252.96 10.177.184.212 b2c.4x16.encrypted normal Ready 1.8.6_1506
So instead of calling our service with an official public IP address on port 80, we can use
http://169.47.252.96:31277
Autoscaler
Kubernetes has a built-in horizontal pod autoscaler which can be started with
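A call matching the configuration described below could look like this (a sketch):
kubectl autoscale deployment loadtest --cpu-percent=50 --min=2 --max=10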
In this case it measures the CPU load and starts new pods when the load is over 50%. In this configuration the autoscaler never starts more than 10 and never fewer than 2 pods. The current measurements and parameters can be checked with
kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
loadtest Deployment/loadtest 0% / 50% 1 10 1 23m
So right now the cpu load is 0 and only one replica is running.
Loadtest
Time to call our container and start the load test. Depending on the URL parameters, we can use curl to start the test.
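For example, using the worker IP and node port from above together with the two parameters of the Flask wrapper (the values are just an example):
curl "http://169.47.252.96:31277/?percentage=80&seconds=300"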
I recently bought myself an SDRplay receiver to play with this technology and maybe build a ground station or a meteor scatter detector. The original plan is to set up a receiver on the Motionlab roof with a Raspberry Pi and send the IQ data via the network down to a local server to extract the interesting information. One great piece of software to work remotely with an SDR receiver is the Soapy project.
Install the raspberry pi part
Build system
Install the latest Raspberry Pi Lite image from raspberrypi.org.
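You also need git, cmake and a compiler to build the Soapy components; on Raspbian something like this should do (package names are my assumption):
sudo apt-get update
sudo apt-get install git cmake g++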
The Soapy part consists of 3 components: the core system, the SDRplay wrapper and the remote server. The core system must be installed first.
git clone https://github.com/pothosware/SoapySDR.git
cd SoapySDR
mkdir build
cd build
cmake ..
make -j4
sudo make install
sudo ldconfig
SDRplay
The SDRplay part consists of two pieces: one is the proprietary binary library from SDRplay itself, the other is the Soapy wrapper for SDRplay.
git clone https://github.com/pothosware/SoapySDRPlay.git
cd SoapySDRPlay
mkdir build
cd build
cmake ..
make -j4
sudo make install
Test the Soapy access
SoapySDRUtil --info
Soapy Server for Remote Access
git clone https://github.com/pothosware/SoapyRemote.git
cd SoapyRemote
mkdir build
cd build
cmake ../ # -DCMAKE_BUILD_TYPE=Debug
make -j4
sudo make install
Run the server
SoapySDRServer --bind
If you want to run it as a service have a look here on how to autostart stuff in linux.
IBM offers an S3 compatible Object Store as file storage. Besides S3, the storage can also be accessed via the SWIFT protocol by selecting a different deploy model. As the cost for this storage is extremely low compared to database storage, it is perfect for storing sensor data or other kinds of data for machine learning.
I use the storage, for example, to host my training data or trained models for Tensorflow. Access and payment for the Object Store are managed via IBM Cloud aka Bluemix. And as this offering is included in the Lite plan, the first 25GB are free. 🙂
As there is a problem getting the S3 credentials right now, I use the SWIFT access model. When you request the Object Store service, please make sure to select the SWIFT version in order to get the right access model.
Inside the IBM Cloud web interface you can create or read existing credentials. If your program runs on IBM Cloud (Cloud Foundry or Kubernetes), the credentials are also available via the VCAP environment variable. In both cases they look like mine here:
I hacked a nice script for the Watson Visual Recognition service. There is already a very helpful page available here, but many people (including me) like command line tools or scripts to automate processes. The script applies the following steps to each picture (see the ImageMagick sketch after the list):
Resize to max 500x500 pixels. Watson internally uses only ±250 pixels, so this saves a lot of upload time.
Enhance the image (normalization) for better results.
Autorotate the images based on the EXIF data from your camera, because Watson ignores EXIF data.
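Roughly, these three steps correspond to an ImageMagick call like this (a sketch, not the exact call used in the script):
mogrify -auto-orient -normalize -resize "500x500>" *.jpg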
The tool expects this directory structure and reads all necessary information from it:
Classifiername
  Classname
    <more than 10 files>.jpg
The Visual Recognition key is read from the “VISUAL_KEY” environment variable.
How to install it
The pushVisualRecognition.sh script is part of the bluemixcli Docker container as described here. It basically only needs imagemagick and zip installed, so you can also run it without the Docker container and download the script directly from github via this link. If you want to run it with Docker, the command is
docker run --rm -it -v ${PWD}:/root/host -e VISUAL_KEY=<add your key here> ansi/bluemixcli /bin/bash
How to run it
Simply call pushVisualRecognition.sh in your directory; all necessary information will be retrieved from the directory structure and the environment variable.
root@9a874a1af6e6:~/host/Dropbox/Apps# pushVisualRecognition.sh
Work on classifier: myclassifier/
Work on class: classa/
Work on class: classb/
Work on class: negative/
{
"classifier_id": "myclassifier_2110375920",
"name": "myclassifier",
"owner": "af63a091-ea7c-4d85-bcc6-1b62762f7dcb",
"status": "training",
"created": "2017-09-20T17:58:06.417Z",
"classes": [
{"class": "classa"},
{"class": "classb"}
]
}root@9a874a1af6e6:~/host/Dropbox/Apps#