Run your own docker registry with token-based identification behind nginx


How to build a controlled environment to distribute docker images based on user accounts

Docker Hub itself, AWS (just to name the biggest docker image hosts right now) and many more public / private registry servers are on the market. But sometimes there is a need to host your own registry for docker images. One reason can simply be because we can; another is, for example, to give individual pull / push rights for different images to different users and to control that access based on expiration dates as well.

Components and the big picture

For this setup we need several software components working together in an orchestrated way: the firewall to block all ports except 443 for HTTPS, the nginx reverse proxy to terminate the SSL connection, protect the underlying services against direct access and allow for load balancing, the docker registry to host the images and, last but not least, the docker token authenticator to identify users and grant access to images (push and/or pull) based on their rights.

With the second version of the registry protocol Docker introduced the “Docker registry authentication scheme“. It basically delegates the access control for images to an outside system and uses the bearer token mechanism to communicate. The flow to access a docker image is (a rough curl example follows the list):

  1. Docker daemon accesses the docker registry server as usual and gets a 401 Unauthorized in return with a “WWW-Authenticate” header pointing to the authentication server the registry server trusts.
  2. Docker daemon contacts the authentication server with the given URL and the user identifies against the server.
  3. The authentication server checks the access rights based on username, password, image name and access type (pull/push) and returns a bearer token signed with the private key.
  4. Docker daemon accesses the docker registry again with the bearer token and the docker image request.
  5. Docker registry server checks the bearer token based on the authentication server public key and grants access or doesn’t.
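
To make the token dance a bit more tangible, here is a rough sketch of the same flow with curl; the hostnames match this setup, while the repository name "test", the user "waldi" and the shortened token are only illustrative:

# 1. Unauthenticated request: the registry answers 401 and names the auth server it trusts
curl -i https://registry.23-5.eu/v2/
# HTTP/1.1 401 Unauthorized
# Www-Authenticate: Bearer realm="https://auth.23-5.eu/auth",service="Docker registry"

# 2./3. Ask the auth server for a bearer token for a specific repository and action
curl -s -u waldi:PASSWORD "https://auth.23-5.eu/auth?service=Docker%20registry&scope=repository:test:pull"
# {"token": "eyJhbGciOi..."}

# 4./5. Repeat the registry call with the bearer token
curl -i -H "Authorization: Bearer eyJhbGciOi..." https://registry.23-5.eu/v2/test/tags/list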

Firewall

Ubuntu ships with a very simple firewall control script called “Uncomplicated Firewall“ (ufw). The script manages the iptables configuration and lets the user open ports with a single line. If you access the server via SSH, make sure you allow SSH before you activate the firewall. I also recommend installing fail2ban to ban automated login attempts.

sudo apt update
sudo apt install -y ufw fail2ban 
sudo ufw allow ssh # only necessary when you need remote access
sudo ufw allow https
sudo ufw allow http
sudo ufw enable 
sudo ufw status

Nginx reverse proxy

We install Nginx as a docker service as well, because the update cycle is way faster compared to the distribution's software repository. The basic Nginx docker container is ready to be used and only needs the settings for http and https. Everything is handled via the https port, but we also keep http (port 80) open to redirect everything to https with a 301 (moved permanently) return code.

FROM docker.io/nginx:latest

COPY   default.conf /etc/nginx/conf.d/default.conf
COPY   ssl.conf     /etc/nginx/conf.d/ssl.conf
COPY   cert /cert 

EXPOSE 80
EXPOSE 443

This is a very simple Dockerfile that only adds the ssl certificates and the http/https configuration. We could also mount the ssl files and the configuration in the docker-compose file and leave the image plain as it is. Both options are valid, it is just a matter of taste.

server {
    listen      80;
    listen [::]:80;
    server_name registry.23-5.eu auth.23-5.eu;
    return 301 https://$host$request_uri;
}

This is the http configuration for nginx: accept everything on http and return a 301 (moved permanently) to the same host and path, just with https.

SSL configuration

The SSL configuration is a little bit more complicated as we also specify the ciphers and parameters for the encryption. As this topic is endless and very easy to screw up, I personally rely on https://cipherli.st as a configuration source.

openssl dhparam -out dhparams.pem 4096

The recommendation is to generate your own Diffie–Hellman parameters bigger than 2048 bit. This process can take a very long time. We add the resulting file together with our keys to the cert folder.

ssl_protocols              TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers  on;
ssl_dhparam                /cert/dhparams.pem;
ssl_ciphers                "ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384";
ssl_ecdh_curve             secp384r1; 
ssl_session_cache          shared:SSL:10m;
ssl_session_timeout        10m;
ssl_session_tickets        off; 
ssl_stapling               on; 
ssl_stapling_verify        on; 
resolver                   8.8.8.8 9.9.9.9 valid=300s;
resolver_timeout           5s;
add_header                 Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
add_header                 X-Frame-Options DENY;
add_header                 X-Content-Type-Options nosniff;
add_header                 X-XSS-Protection "1; mode=block";

This configuration is based on the recommendation from cipherli.st. Be aware that one part of this setup is the Strict-Transport-Security header, which can cause a lot of long-term trouble if you mess it up. This completes the basic SSL setup.

map $upstream_http_docker_distribution_api_version $docker_distribution_api_version {
  '' 'registry/2.0';
}

This mapping helps to set the right header even when Nginx has removed it because of authentication. The docker client expects this information in the http header.

server {
    listen      443 ssl http2;
    listen [::]:443 ssl http2;

    server_name auth.23-5.eu;

    ssl_certificate         /cert/auth/fullchain.pem;
    ssl_certificate_key     /cert/auth/privkey.pem;
    ssl_trusted_certificate /cert/auth/chain.pem;

    location /auth {

        proxy_read_timeout    90;
        proxy_connect_timeout 90;
        proxy_redirect        off;

        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port  443;
        proxy_set_header Host              $http_host;

        proxy_pass http://dockerauth:5001/auth;
    }
}

In this case we are running the registry and the auth server on the same virtual machine, therefore both server blocks live in the same ssl.conf file. This one is for the auth server.

server {
    listen      443 ssl http2;
    listen [::]:443 ssl http2;

    server_name  registry.23-5.eu;

    ssl_certificate         /cert/registry/fullchain.pem;
    ssl_certificate_key     /cert/registry/privkey.pem;
    ssl_trusted_certificate /cert/registry/chain.pem;

    client_max_body_size 0;
    chunked_transfer_encoding on;

    location /v2/ {

        if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
          return 404;
        }

        add_header 'Docker-Distribution-Api-Version' $docker_distribution_api_version always;

        proxy_pass http://registry:5000;
        proxy_set_header  Host              $http_host;   # required for docker client's sake
        proxy_set_header  X-Real-IP         $remote_addr; # pass on real client's IP
        proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header  X-Forwarded-Proto $scheme;
        proxy_read_timeout                  900;
    } 
}

And this is the configuration part for the registry server itself. Important here is the client_max_body_size parameter to make sure even bigger docker images get through. Older docker client versions get a 404 because they cannot be handled by the docker registry.

Let's Encrypt

The easiest way to get a certificate is by using Let's Encrypt. There are different ways to obtain a certificate; we just use a very simple one here with the standalone call. certbot opens a mini web server on port 80 to handle the authentication request on its own, therefore make sure the Nginx docker is not running.

certbot certonly -d registry.23-5.eu --standalone
certbot certonly -d auth.23-5.eu     --standalone

for i in registry auth client
do
 cp /etc/letsencrypt/live/${i}.23-5.eu/chain.pem     /root/nginx/cert/${i}/
 cp /etc/letsencrypt/live/${i}.23-5.eu/fullchain.pem /root/nginx/cert/${i}/
 cp /etc/letsencrypt/live/${i}.23-5.eu/privkey.pem   /root/nginx/cert/${i}/
done

Do the certificate request call for the auth and the registry certificate and copy the certificates and private keys to your cert folder for the docker build to pick them up. Don't forget the dhparams.pem file.
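
Let's Encrypt certificates expire after 90 days, so this dance has to be repeated regularly. A minimal renewal sketch, assuming the docker-compose setup described further down and the folder layout from above (adjust service and folder names to your setup):

docker-compose stop nginx
certbot renew
for i in registry auth
do
 cp /etc/letsencrypt/live/${i}.23-5.eu/{chain,fullchain,privkey}.pem /root/nginx/cert/${i}/
done
docker-compose build nginx
docker-compose up -d nginx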

Docker registry

Now that the server is configured and more or less secured, let's configure the docker registry server and the auth server. Docker Inc. offers a docker registry container which is relatively easy to handle and to configure.

      - REGISTRY_AUTH=token
      - REGISTRY_AUTH_TOKEN_REALM=https://auth.23-5.eu/auth
      - REGISTRY_AUTH_TOKEN_SERVICE="Docker registry"
      - REGISTRY_AUTH_TOKEN_ISSUER="Acme auth server"
      - REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE=/ssl/domain.crt

The configuration is done in the docker-compose file itself. The important information is the REALM, so the docker registry can redirect the client to the auth server, plus the issuer and the cert bundle from the referred auth server to check the bearer token later.

Docker Token Authenticator

Docker Inc. does not provide an auth server out of the box as it does with the registry itself. This is basically left to the registry provider to build their own. Luckily Cesanta stepped up and built a nicely configurable auth server to be used with the registry server. docker_auth supports different ways to store information about the users:

  • Static list of users
  • Google Sign-In
  • Github Sign-In
  • LDAP bind
  • MongoDB user collection
  • External Program (gets login parameters and returns 0 or 1)

In our case the way to go is the MongoDB user collection as we can control for each user individually who has access to which image and easily change it on the fly by modifying the user data in the DB itself.

server:  # Server settings.
  # Address to listen on.
  addr: ":5001"

token:
  issuer: "Acme auth server" # Must match issuer in the Registry config.
  expiration: 900
  certificate: "/ssl/domain.crt"
  key: "/ssl/domain.key"

mongo_auth:
  dial_info:
    addrs: ["authdb"]
    timeout: "10s"
    database: "23-5"
    username: "ansi"
    password_file: "/config/mongopass.txt"
    enabled_tls: false
  collection: "users"

acl_mongo:
  dial_info:
    addrs: ["authdb"]
    timeout: "10s"
    database: "23-5"
    username: "ansi"
    password_file: "/config/mongopass.txt"
    enabled_tls: false
  collection: "acl"
  cache_ttl: "10s"

This is the configuration file for the auth server. It has mainly 4 parts:

  • Server
    • Which port to listen on
    • Nginx handles the TLS termination, therefore, this server has no TLS handling.
  • Token
    • Use the same issuer as configured in the registry server itself and provide the certificate files for signing the bearer token.
  • Mongo_auth
    • Where the user information is stored, how to access the MongoDB, and the password, which is saved in a simple ASCII file. In our case, as we are behind a firewall in a docker network, we don't use TLS to access the MongoDB.
  • ACL_Mongo
    • Beside the user information, the AccessControlList (ACL) can also be stored in a MongoDB. Same configuration as the mongo_auth but there is a cache information as this information is stored in memory and refreshed every 10 seconds.

MongoDB

mongo --host localhost --username root --password example --authenticationDatabase admin

use 23-5

db.createUser({user: "ansi", pwd: "test", roles: ["readWrite"], mechanisms: ["SCRAM-SHA-1"]})

mongo --host localhost --username ansi --password test --authenticationDatabase 23-5

db.users.insert({
    "username" : "waldi",
    "password" : "$2y$05$hxH........Ii33Csix8hC",
    "labels" : {"full-access":["test/*"],
                "read-only-access":["prod/*"]
               }
})

db.acl.insert([
  { "seq": 10,
    "match": {"name": "${labels:full-access}"},
    "actions": ["*"],
    "comment": "full access"
  },
  { "seq": 20,
    "match": {"name": "${labels:read-only-access}"},
    "actions": ["pull"],
    "comment": "pull access"
  }
])

The MongoDB was initialized by the docker-compose file with an admin user “root“ and password “example“. We use this account to create a new database called “23-5“ and create a new user there with username “ansi“ and password “test“. This database stores all users and ACLs. The docker registry users themselves are stored with a bcrypt-hashed password and some labels. Bcrypt a password with:

sudo apt install apache2-utils
htpasswd -nB USERNAME
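
htpasswd prints the username together with the bcrypt hash; only the hash part after the colon goes into the password field of the user document. The hash below is the shortened example from above, not a real one:

$ htpasswd -nB waldi
New password:
Re-type new password:
waldi:$2y$05$hxH........Ii33Csix8hC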

Besides username and password, we can also store labels of any kind for a given user. This allows us to reference these labels in the ACLs. So in our case, the ACLs grant access to all docker images whose name matches the value stored in the full-access or read-only-access label. In our case, the user “waldi“ has full access to all docker images matching “test/*“, only read access to everything in “prod/*“ and nothing else. ACLs have a seq number defining the order in which they are processed; the first matching ACL will be used.

Labels can be combined so for example:

ACL:
{
  "match": { "name": "${labels:project}/${labels:group}-${labels:tier}" },
  "actions": [ "push", "pull" ],
  "comment": "Contrived multiple label match rule"
}
USER:
{
    "username" : "busy-guy",
    "password" : "$2y$05$B.x.......CbCGtjFl7S33aCUHNBxbq",
    "labels" : {
        "group" : [
            "web",
            "webdev"
        ],
        "project" : [
            "website",
            "api"
        ],
        "tier" : [
            "frontend",
            "backend"
        ]
    }
}

Would give push and pull access to the docker image

website/webdev-backend

These variables can be used in the ACL match (a small example follows the list):

  • ${account} the account name aka username
  • ${name} the repository name; “*“ can be used, so for example “prod/*“ gives access to “prod/server“
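
A common pattern (an assumption on my side, not part of this setup) is to give every user full access to a namespace named after their own account by using the ${account} variable in an additional ACL entry:

db.acl.insert({
  "seq": 30,
  "match": {"name": "${account}/*"},
  "actions": ["*"],
  "comment": "every user owns the namespace named after the account, e.g. waldi/backup"
})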

Generating bearer SSL key

In order to sign a bearer token we need a key pair. This can be a self-signed certificate created with openssl:

openssl req \
       -newkey rsa:4096 \
       -days 365 \
       -nodes -keyout domain.key \
       -out domain.csr \
       -subj "/C=EU/ST=Germany/L=Berlin/O=23-5/CN=auth.23-5.eu"

openssl x509 \
       -signkey domain.key \
       -in domain.csr \
       -req -days 365 -out domain.crt

openssl req \
        -x509 \
        -nodes \
        -days 365 \
        -newkey rsa:2048 \
        -keyout server.key \
        -out server.pem
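
A quick optional sanity check to confirm the generated certificate has the expected subject and validity before wiring it into the registry and the auth server:

openssl x509 -in domain.crt -noout -subject -dates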

Docker-compose

We can configure and start the auth and registry server and nginx with one docker-compose file:

version: '3'

services:

  nginx:
    restart: always
    build:
      context: nginx
    ports:
      - 80:80
      - 443:443

  mongoclient:
    image: docker.io/mongoclient/mongoclient:latest
    restart: always
    depends_on:
      - authdb
    ports:
      - 3000:3000
    environment:
      - TZ=Europe/Berlin
      - STARTUP_DELAY=1

  authdb:
    image: docker.io/mongo:4.1
    restart: always
    volumes:
      - /root/auth_db:/data/db
    environment:
      - TZ=Europe/Berlin
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=example
    ports:
      - 27017:27017
    command: --bind_ip 0.0.0.0

  dockerauth:
    image: docker.io/cesanta/docker_auth:1
    volumes:
      - /root/auth_server/config:/config:ro
      - /root/auth_server/ssl:/ssl:ro
    command: --v=2 --alsologtostderr /config/auth_config.yml
    restart: always
    environment:
      - TZ=Europe/Berlin

  registry:
    image: docker.io/registry:2
    volumes:
      - /root/auth_server/ssl:/ssl:ro
      - /root/docker_registry/data:/var/lib/registry
    restart: always
    environment:
      - TZ=Europe/Berlin
      - REGISTRY_AUTH=token
      - REGISTRY_AUTH_TOKEN_REALM=https://auth.23-5.eu/auth
      - REGISTRY_AUTH_TOKEN_SERVICE="Docker registry"
      - REGISTRY_AUTH_TOKEN_ISSUER="Acme auth server"
      - REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE=/ssl/domain.crt

I also added a mongoclient docker container to have easy access to the MongoDB server. Please be aware that this one is not secured by the nginx reverse proxy and is only meant for testing. You can also access the MongoDB from the command line:

docker exec -it root_authdb_1 mongo --host localhost --username root --password example --authenticationDatabase admin

The MongoDB docker is also called with a different command (--bind_ip 0.0.0.0) to allow access from outside of localhost.

Testing

docker-compose build 
docker-compose up -d

This builds and starts the whole setup. We have a docker registry user “waldi“ configured like this:

[{"username": "waldi",
  "password": "$2......dKOIrAn.KxCfeEn7HhePFIO",
  "labels": {"full-access": ["test", "socke*"]}
  }
]

[{"seq": 10,
  "match":{"name": "${labels:full-access}"},
  "actions":["*"],
  "comment": "full access"
 },{
  "seq": 20,
  "match":{"name": "${labels:read-only-access}"},
  "actions":["pull"],
  "comment": "pull access"
  }
]

So user “waldi“ can write and read all repositories named either “test“ or anything starting with “socke“. Let's try it.

$ docker login registry.23-5.eu
Authenticating with existing credentials...
Login Succeeded

$ docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
Status: Image is up to date for nginx:latest

$ docker tag nginx:latest registry.23-5.eu/test:latest

$ docker push registry.23-5.eu/test:latest
The push refers to repository [registry.23-5.eu/test]
fc4c9f8e7dac: Pushed 
912ed487215b: Pushed 
latest: digest: sha256:c10f4146f30fda9f40946bc114afeb1f4e867877c49283207a08ddbcf1778790 size: 948

$ docker tag nginx:latest registry.23-5.eu/socken-test:latest

$ docker push registry.23-5.eu/socken-test:latest            
The push refers to repository [registry.23-5.eu/socken-test]
fc4c9f8e7dac: Mounted from test 
912ed487215b: Mounted from test 
5dacd731af1b: Mounted from test 
latest: digest: sha256:c10f4146f30fda9f40946bc114afeb1f4e867877c49283207a08ddbcf1778790 size: 948

It works. Now let's test the negative part and check whether the push gets refused:

$ docker tag nginx:latest registry.23-5.eu/test-socke:latest 

$ docker push registry.23-5.eu/test-socke:latest     
The push refers to repository [registry.23-5.eu/test-socke]
fc4c9f8e7dac: Preparing 
912ed487215b: Preparing 
5dacd731af1b: Preparing 
denied: requested access to the resource is denied

It works! Users can be modified on the fly in the MongoDB and rights can be granted or revoked. There is one final test to check whether the Nginx setup is properly secured: https://www.ssllabs.com/ssltest/index.html.

Lindenblad Antenna for 2 Meters DIY

We need an Antenna

Lindenblad

There was a need for an antenna for our SatNOGS (Satellite Networked Open Ground Station) setup. As serious hackers there was no other option than to build one on our own. After several more or less unsuccessful experiments with different antenna types we decided to build a Lindenblad antenna for the 2 meter (144 MHz) range. We are Ronny (DL7ROX) and myself (DM1AS). There are several papers and discussions available on how to build such an antenna, most of them from AMSAT and the US in general.

So I only focus here on the “translation“ into the metric system and the DIY parts to assemble one antenna. For a very good paper and the magic background please have a look at https://www.amsat.org/wordpress/wp-content/uploads/2015/08/An-EZ-Lindenblad-Antenna-for-2-Meters2.pdf.

Dipole Dimensions

Dimension                      Length / Distance
Length of one dipole element   373 mm
Space between the dipoles      19 mm
Total length of the dipole     765 mm

In order to make your life easier and the spacing hopefully very accurate, I created this T-connector with Fusion360.


https://a360.co/2Q210Xh

and this plug


https://a360.co/2RmxR5P

The cross connection in between the 4 dipoles is the same aluminium tube with a length of 584 mm.

The wires

As we have 4 dipoles of 50 Ohm impedance in parallel and the feed line typically has 50 Ohm as well, we need to match the impedances. The solution in the paper is to use a 75 Ohm TV cable of a defined length so it matches the 200 Ohm side to the 50 Ohm of the feed line impedance.

impedance matching wire

A 584 mm
B 5 mm
C 8 mm


Put it all together

Each dipole is connected to one impedance matching wire and all 4 wires to the antenna cable. Don't forget a cable ferrite on each of the impedance matching wires, very close to the dipole side. The 4 dipoles are then mounted opposite to each other, each dipole rotated by 30 degrees clockwise against the horizon.

Measure the SWR

SWR-Lindenblad

We measured the antenna with an AA-1400 analyzer and were very proud to get such a great result: an SWR of 1 at the center frequency.

Fixing the caching_sha2 problem with wordpress and mysql version 8

The problem

I am using wordpress with mysql, both as docker installations. The procedure for my setup is described here. With version 8, mysql introduced caching_sha2 as the default password algorithm. When you use the auto update mechanism in wordpress everything is fine and wordpress still works with the native password plugin configured for the wordpress user. But if you run wordpress in a docker container and pull wordpress:latest, there has been a problem accessing the mysql database since wordpress 4.9.7: (Never thought I could use the word wordpress so many times in one sentence!)

Warning: mysqli::__construct(): Unexpected server response while doing caching_sha2 auth: 109 in Standard input code on line 22
Warning: mysqli::__construct(): MySQL server has gone away in Standard input code on line 22
Warning: mysqli::__construct(): (HY000/2006): MySQL server has gone away in Standard input code on line 22
MySQL Connection Error: (2006) MySQL server has gone away

The solution

The solution is relatively easy. You need to change the wordpress user manually from “mysql_native_password“ to “caching_sha2_password“. This can be done with a simple SQL call. First stop your wordpress docker container and keep the mysql docker container running. Then execute these commands.

docker exec -it blog_wordpress_db_1 bash
mysql -u root -pREALLYEPICSECURE
ALTER USER wordpressuser IDENTIFIED WITH caching_sha2_password BY 'REALLYEPICSECURE';
exit
exit

Replace blog_wordpress_db_1 with your mysql docker instance name (see “docker ps“), “REALLYEPICSECURE“ with your root password and “wordpressuser“ with your wordpress username.
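
If you want to verify which authentication plugin the user is currently configured with, a quick check inside the same mysql shell looks like this (the username is of course just the example from above):

SELECT user, host, plugin FROM mysql.user WHERE user = 'wordpressuser';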

That is basically all. Now you can start your wordpress:latest docker container again and it should work.


Serious weather condition in your calendar

The need

Calendar

I really like to plan my day in my calendar. Therefore I added a lot of external ical feeds like meetup, open-air cinema and of course launchlibrary. In order to decide on transportation I always had the Weather Underground page open in a separate browser tab. This is very inconvenient, so I wrote a small script that gets weather predictions via API call from wunderground, exports an ical feed and updates my google calendar with weather conditions.

Wunderground

Weather Underground is (or at least was for many years) the coolest weather page on the internet. Really great UI and a wonderful API to get current weather conditions and weather predictions for the next 10 days. Furthermore (and that is why I really really like it) users can send their own weather sensor data to the site to enhance the sensor mesh network and get a nice visualization. Unfortunately the service is losing features on a monthly basis and the page itself is down for several hours every now and then. Very sad, but I still love it.

As I said, they have a nice API to get a weather forecast for the next 10 days on an hourly basis. OK, we can all discuss how dependable a weather prediction for a certain hour in 8 days is, but at least for the next few days it is really helpful. I am using the forecast10day and the hourly10day API endpoints to get a nicely formatted JSON document from wunderground. If you want to run this script for your own area you need an account and an API key as the calls are rate limited (but free).

PWS

My favorite makerspace (Motionlab.berlin) has an epic weather phalanx (as I love to call it) and sends its local weather conditions to wunderground. Therefore, besides asking for weather conditions in a city, I can ask for the conditions reported by a certain weather station. In our case it's the IBERLIN1705 station. Check out the current conditions here.

Forecast10day

The API call to http://api.wunderground.com/api/YOUR-API-KEY-HERE/forecast10day/q/pws:IBERLIN1705.json returns, for each of the next 10 days, information about humidity, temperature (min/max), snow, rain, wind and much more. I take these data and create one calendar entry each morning at 06:00-06:15 with summary information for the day. Especially for days beyond the 4-day boundary this summary is more accurate than the hourly information. Getting this information in python is very easy:

import json
import requests

def forecast():
    try:
        data = json.loads(requests.get("http://api.wunderground.com/api/YOUR-API-HERE/forecast10day/q/pws:IBERLIN1705.json").content)
    except:
        print("Error in Forecast")
        return False

    for e in data['forecast']['simpleforecast']['forecastday']:
        day        = e['date']['day']
        month      = e['date']['month']
        year       = e['date']['year']
        conditions = e['conditions']
        humidity   = e['avehumidity']
        high       = e['high']['celsius']
        low        = e['low']['celsius']
        snow       = e['snow_allday']['cm']
        rain       = e['qpf_allday']['mm']

I am using requests to make the REST call and parse the “content“ value with json.loads. As easy as it looks. The data variable then contains a dictionary with all weather information on a silver platter (if the API is not down, which happens way too often).

Hourly10day

http://api.wunderground.com/api/YOUR-API-KEY/hourly10day/q/pws:IBERLIN1705.json contains the weather information on an hourly basis for the next 10 days, so the parsing is very similar to the forecast API call. I am especially interested here in rain, snow, temperature, wind, dew point and UV index, as these are values I want to monitor and add calendar entries for when they are outside a certain range:

  • Wind > 23 km/h
  • Temperature > 30 or < -10 C
  • UV-Index > 4 (6 is max)
  • Rain and Snow in general
  • (Temperature – Dew point) < 3

Humidity in general is not so important and highly dependent on the current temperature. But the dew point (“the atmospheric temperature (varying according to pressure and humidity) below which water droplets begin to condense and dew can form“) is very interesting when you want to know whether it is getting muggy. Even when it is 10 C, a very low difference between temperature and dew point means you really feel the cold crawling into your bones. 🙂
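
A minimal sketch of how such a threshold check could look inside the hourly loop; the function and variable names are my own for illustration and not the exact ones from the script:

def needs_alert(temp_c, dewpoint_c, wind_kmh, uv_index, rain_mm, snow_cm):
    # returns True when this hour should get its own calendar entry
    if wind_kmh > 23:                return True
    if temp_c > 30 or temp_c < -10:  return True
    if uv_index > 4:                 return True
    if rain_mm > 0 or snow_cm > 0:   return True
    if (temp_c - dewpoint_c) < 3:    return True
    return False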

Ical

To create an ical feed I use the icalendar library in python. Very handy to create events and export them as an ical feed.

from datetime  import datetime
from pytz      import timezone
from icalendar import Calendar, Event

newcal = Calendar()

event = Event()    
event.add('summary', "%s-%sC %s%% Rain:%s Snow:%s %s" % (low, high, humidity, rain, snow, conditions))
event.add('dtstart', datetime(year,month,day,6, 0,0,0,timezone('Europe/Berlin')))
event.add('dtend',   datetime(year,month,day,6,15,0,0,timezone("Europe/Berlin")))
event.add('description', DESC)

newcal.add_component(event)
return newcal.to_ical()

Summary is the text your calendar program displays in the calendar overview, while description is displayed when showing the calendar entry details. “dtstart“ and “dtend“ mark the time range. For the timezone I use the pytz library, and “to_ical()“ finally renders the feed. That's basically all you need to create an ical feed.

Google

The google calendar can import and subscribe to calendars. While import adds the calendar entries to an existing calendar once (great for concerts or public transport bookings), subscribe creates a new calendar and updates the feed only every > 24 hours. This is great for long lasting events like meetups or rocket starts, but weather predictions change several times per hour. Therefore I added a small feature to the script to actively delete and create calendar entries. So I can run it every 3 hours and keep the calendar up to date.

As always, google offers nice and very handy API endpoints to manipulate the data. Besides calling the REST endpoints by hand there are libraries for different languages. I use the “googleapiclient“ and “oauth2client“ packages to access my calendar. The first step is to create a new calendar in google, then activate the calendar API in the developer console and create an API key for your app. The googleapiclient takes care of the OAuth dance and stores the credentials in a local file.

from httplib2                  import Http
from oauth2client              import file, client, tools
from googleapiclient.discovery import build

SCOPES = 'https://www.googleapis.com/auth/calendar'  # full access to the calendar

def getService():
  store = file.Storage('token.json')
  creds = store.get()

  if not creds or creds.invalid:
    flow  = client.flow_from_clientsecrets('credentials.json', SCOPES)
    creds = tools.run_flow(flow, store)

  return build('calendar', 'v3', http=creds.authorize(Http()))

If you call this function for the very first time it requires the OAuth dance: basically open a web page and grant access to your google calendar. The secrets are stored in the token.json file and reloaded on every call.

Deleting old events

service       = getService()
events_result = service.events().list(calendarId=CALENDAR_ID, maxResults=100, singleEvents=True, orderBy='startTime').execute()
events        = events_result.get('items', [])
        
for e in events:
  service.events().delete(calendarId=CALENDAR_ID, eventId=e['id']).execute()

“getService“ calls the function above to get an access object. “events().list().execute()“ requests a list of the first 100 calendar entries, “events_result.get()“ returns an array with all calendar entries and their details, and “service.events().delete().execute()“ removes these entries.

Creating new events

ge = {
       'summary'    : '',
       'description': DESC,
       'start': {
                 'dateTime' : '',
                 'timeZone' : 'Europe/Berlin',
                },
       'end':   {
                 'dateTime' : '',
                 'timeZone' : 'Europe/Berlin',
                }
     }

ge['summary']           = "%s-%sC %s%% Rain:%s Snow:%s %s" % (low, high, humidity, rain, snow, conditions)
ge['start']['dateTime'] = '%s-%s-%sT06:00:00' % (year, month, day)
ge['end'  ]['dateTime'] = '%s-%s-%sT06:15:00' % (year, month, day)

service = getService()
service.events().insert(calendarId=CALENDAR_ID, body=ge).execute()

Very similar to the delete calls, the insert call gets the credentials and calls “events().insert().execute()“ with a dictionary containing the detailed information.

Docker container

The docker container is very simple.

FROM python:latest

RUN pip install icalendar requests Flask oauth2client google-api-python-client iso8601

ADD Exporter.py      Exporter.py
ADD credentials.json credentials.json
ADD token.json       token.json

EXPOSE 80

CMD python /Exporter.py

I am using the latest python docker container, installing some libraries with pip and copying the python file plus the credentials and token json files.

The repo

The complete source code can be found in my github repository.

The calendar for Berlin weather conditions can be found and added here.


G199 or how to 3d print a logo on existing STL files

The problem

Logo with 2 different Filaments

Sometimes you want to print your logo or some text on your 3D object with a different filament, but you only have a single head printer and don't want to spend all the time sitting next to your printer waiting for the right moment to manually pause the print and change the filament. Like the Motionlab logo in the picture. For sure you could print it separately and glue it onto the main printed part, but especially with text that's a lot of tiny parts to take care of and align. If you are lucky and have a dual print head it's not a problem, but there is also a way to do it very simply with a single print head by editing the G-Code file and adding G-Codes by hand.

The solution

There is a G-Code named G199. According to Craftware the purpose of the code is: “G199 pauses the print immediately, and moves the head to X0, Y100. (this is the command the LCD screen uses)“. So by adding this code by hand the printer stops printing and moves the head to the side. After changing the filament (and also extruding some more by hand to make sure the printer is ready) you can press “continue“ on the printer display.

Prepare the SVG file

If your logo is already an SVG you are lucky. Otherwise try to convert it to SVG and make sure it consists of connected objects. If you need some geeky stuff I can recommend Geeksvgs.

Use Fusion360 to create the STL logo file

Fusion360 insert SVG

In Fusion360 use Insert -> Insert SVG -> Select SVG File to place the SVG file on a sketch. Resize and stretch it as you like or as the dimensions dictate.

The next step is to extrude the logo into a 3D object. This can be done simply by “Stop Sketching“ and then pressing “e“ for extrude. Select everything by drawing a frame with your mouse. Unfortunately Fusion has no idea which parts of the logo should be extruded and which not. Press and hold CTRL and deselect the inner parts of the logo, for example the circle in the “o“. I recommend extruding 10 mm even if you only want to raise the logo by 4 mm.

As the single objects are not connected, Fusion creates several bodies instead of one.

Save single STL

A single STL file with all bodies included at the right positions can be exported by right-clicking the component name.

Combine both STL in your slicer

Now that we have two STL files we can load them both at the same time into our slicer (no matter which one). Position your logo at the right place, scale it and change the z axis offset according to your needs.

Combine STLs

As we extruded the logo 10 mm there is enough space to play around. Make sure at least one mm is submerged in your main body.


Manually edit the gcode to add the pause sequence

Find the right Layer

Now we need to find the right place in the G-Code itself. Our slicer can help us with the preview mode. The best layer is the second one after the main body is done and the logo starts to be printed.

Note down this layer and open the G-Code in your favorite text editor. All slicers I have used make nice comments in the code to find the right position. Search for “layer nnn“ and add the “G199“ statement.

G1 X120.290 Y97.291 E0.0407
G1 X117.699 Y96.872 E0.0884
G1 X115.696 Y96.511 E0.0685
G1 X113.803 Y94.618 E0.0901
G1 X113.662 Y94.477 F2400
G1 E-1.5000 F1800
; layer 156, Z = 39.000
; inner perimeter
G199
G1 X114.840 Y96.357 F4800
G1 Z39.000 F1000
G1 E1.5000 F1800
G1 X111.737 Y95.763 E0.1064 F2400
G1 X110.227 Y95.491 E0.0517
G1 X108.129 Y95.141 E0.0716
G1 X106.311 Y94.873 E0.0619
G1 X105.294 Y94.746 E0.0345
G1 X104.465 Y94.666 E0.0281
G1 X103.842 Y94.638 E0.0210


Print

Just print the G-Code as you always do. As soon as the printer reads and processes the G199 command it stops printing and moves the head to the side. All heating settings remain the same, so you can easily replace your filament and press “Continue“ or “GO“ on your printer's screen. Happy printing.

AstroDIY 3D printed Dobson Telescope

How to print your own Telescope

3D printed Dobson Telescope

For the last 3 months I worked together with a very good friend of mine (Herwig Diessner aka AstroHD) on a DIY project. The idea started back at the 34c3 conference in Leipzig. Taking the conference topic “tuwat!“ (do something) seriously, we decided to 3D print a real size and working telescope. And yes, we did it. Tomorrow at the “Tag der Astronomie 2018“ we will present our own 3D printed Dobson telescope.

Adding a ks0212 relay board to the mqtt universe

Weatherstation with raspi

Adding the 4 channel relay board ks0212 to the MQTT universe

We just hacked a Trotec dehumidifier for Herwig's observatory. The idea was to additionally activate the dehumidifier when the difference between outside and inside humidity is above 10%. Normally there is a fan taking care of it, but sometimes the difference gets too high. As there is already a raspberry pi running in the observatory for the weather station and the flightradar24 installation, we just added the 4 channel relay board ks0212 from keyestudio. Not touching the 220V part, we directly use a relay to “press“ the TTL switch on the dehumidifier board for 0.5 seconds to turn it on and off. Here are the code snippets we used for this. The control is completely handled via MQTT.

Installing necessary programs and libraries

sudo apt install python python-pip python-dev
sudo pip install wiringpi paho-mqtt

For the sake of simplicity we use python and the GPIO library wiringpi. Therefore we first install the python development parts and then the python libraries for wiringpi and MQTT. As this is a dedicated hardware installation we don't use virtualenv and directly install the libraries system wide as root.

The python program

import time
import wiringpi
import paho.mqtt.client as mqtt

def setup():
   wiringpi.wiringPiSetup()
   # configure the four relay pins (WiringPi numbering) as outputs
   wiringpi.pinMode(3,  1)   # relay J3
   wiringpi.pinMode(7,  1)   # relay J2
   wiringpi.pinMode(22, 1)   # relay J4
   wiringpi.pinMode(25, 1)   # relay J5

def short(pin):
    switch_on(pin)
    time.sleep(0.5)
    switch_off(pin)

def switch_on(pin):
    wiringpi.digitalWrite(pin, 1)

def switch_off(pin):
    wiringpi.digitalWrite(pin, 0)

def on_connect(client, userdata, flags, rc):
    # subscribe to all relay topics once connected
    client.subscribe("sternwarte/relay/#")

def on_message(client, userdata, msg):
    m = msg.topic.split("/")
    pin = 0
    if m[-1] == "j3": 
        pin = 3
    if m[-1] == "j2": 
        pin = 7
    if m[-1] == "j4": 
        pin = 22
    if m[-1] == "j5": 
        pin = 25
    if pin != 0:
        if msg.payload == "on":
            switch_on(pin)
        if msg.payload == "off":
            switch_off(pin)
        if msg.payload == "press":
            short(pin)

if __name__ == "__main__":
    setup()
    mqclient = mqtt.Client(clean_session=True)
    mqclient.connect("192.168.2.5", 1883, 60)
    mqclient.on_connect = on_connect
    mqclient.on_message = on_message
    mqclient.loop_forever()

Again, a very simple python script: it basically connects to an MQTT server (you need to change the IP address in the code, there is no config file) and subscribes itself to a certain topic. Then it waits for messages and cuts off the last part of the topic to identify the relay. The naming convention is based on the relay names printed on the ks0212 PCB. As payload you can send “on“, “off“ and “press“. “press“ switches the relay on for half a second in order to simulate a button press, as we need it for our dehumidifier.
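
Assuming the mosquitto command line clients are installed somewhere on the network, toggling the relays then looks for example like this (IP address and topics as used above):

mosquitto_pub -h 192.168.2.5 -t sternwarte/relay/j2 -m press  # 0.5 second pulse on relay J2
mosquitto_pub -h 192.168.2.5 -t sternwarte/relay/j4 -m on     # switch relay J4 on
mosquitto_pub -h 192.168.2.5 -t sternwarte/relay/j4 -m off    # and off again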

Adding a systemd service

In order to keep the wannabe daemon up and running and also start it automatically at system start, we add this service configuration file as “/lib/systemd/system/relayboard.service“:

#cat /lib/systemd/system/relayboard.service
[Unit]
Description=ks0212 Relay Board
After=multi-user.target

[Service]
Type=simple
ExecStart=/usr/bin/python /home/pi/ks0212.py
Restart=on-abort

[Install]
WantedBy=multi-user.target

Activating the service

The following lines activate the service:

sudo chmod 644 /lib/systemd/system/relayboard.service
sudo systemctl daemon-reload
sudo systemctl enable relayboard.service
sudo systemctl start relayboard.service

Checking the status can be done with:

sudo systemctl status relayboard.service

ks0212 Pinout

If you want to do some hacking with the ks0212 relay board on your own, here is the pin mapping table. I used the very cool site https://pinout.xyz/pinout/wiringpi to get the numbers:

Relay  WiringPi  BCM/GPIO  Physical pin  Link
J2     7         4         7             https://pinout.xyz/pinout/pin7_gpio4
J3     3         22        15            https://pinout.xyz/pinout/pin15_gpio22
J4     22        6         31            https://pinout.xyz/pinout/pin31_gpio6
J5     25        26        37            https://pinout.xyz/pinout/pin37_gpio26


Workload container for autoscaling test with kubernetes

Workload

The Idea

Every now and then you want to test your installation, your server or your setup, especially when you want to test auto scaling functionality. Kubernetes has an out of the box auto scaler and the official documentation recommends a test docker container with an apache and php installation. This is really great for testing a web application where you have some workload for a relatively short time frame. But I would also like to test a scenario where the workload runs for a longer time in the kubernetes setup and generates way more cpu load than a web application. Therefore I hacked a nice docker container based on a C program load generator.

The docker container

The docker container is basically a very very simple Flask server with only one entry point “/”. The workload itself can be configured via two parameters:

  • percentage How much cpu load will be generated
  • seconds How long will the workload be active

While idle, the docker container itself uses nearly no CPU cycles, as Flask is the only active python process and it simply waits for calls before it starts burning CPU.

lookbusy

I use a very nice open source tool called lookbusy from Devin Carraway which consumes memory and cpu cycles based on command line parameters. Unfortunately the program has no parameter to configure the time span it should run. Therefore I call it with the unix command timeout to terminate its execution after the given amount of seconds.
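
On the command line this combination looks roughly like this (80% load on each cpu for 30 seconds, the numbers are just examples):

timeout 30 lookbusy -c 80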

The Flask python wrapper

import subprocess
from   threading import Thread
from   flask     import Flask, request

app = Flask(__name__)

def worker(percentage, seconds):
    # run lookbusy with the requested cpu load and kill it after the given time
    subprocess.run(['timeout', str(seconds), '/usr/local/bin/lookbusy', '-c', str(percentage)])

@app.route('/')
def load():
    # default to 50% load for 10 seconds when no parameters are given
    percentage = request.args.get('percentage') if "percentage" in request.args else 50
    seconds    = request.args.get('seconds')    if "seconds"    in request.args else 10
    Thread(target=worker, args=(percentage, seconds)).start()
    return "started"

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80, processes=10)

The only program is a small python Flask one: it takes the GET call to its root path, checks for the two parameters and starts a thread with the subprocess call. The GET call returns immediately, so the container also supports long running workload simulations.

The Dockerfile

FROM   python:latest
RUN    curl http://www.devin.com/lookbusy/download/lookbusy-1.4.tar.gz | tar xvz && \
       cd lookbusy-1.4 && ./configure && \
       make && make install && cd .. && rm -rf lookbusy-1.4
RUN    pip install Flask
ADD    server.py server.py
EXPOSE 80
CMD    python -u server.py

The docker container is based on python:latest (at the time of writing 3.6.4). I put all the curl, make, install and rm calls into a single line in order to have a minimal footprint for the docker layer, as we do not need the source code any more. As Flask is the only requirement I also install it directly without a requirements.txt file. The “-u“ parameter for the python call is necessary to prevent python from buffering the output. Otherwise it can be quite disturbing when trying to read the debug log.

Building and pushing the docker container

docker build -t ansi/lookbusy .
docker push     ansi/lookbusy

Building and pushing it to hub.docker.com is straightforward and nothing special.

Testing it on a kubernetes cluster

I have chosen the IBM cloud to test my docker container.

Requesting a kubernetes cluster

Requesting a kubernetes cluster can be done after login with

bx cs cluster-create --name ansi-blogtest --location dal10 --workers 3 --kube-version 1.8.6 --private-vlan 1788637 --public-vlan 1788635 --machine-type b2c.4x16

This command uses the bluemix CLI with the cluster plugin to control and configure kubernetes on the IBM infrastructure. The parameters are

  • --name to give your cluster a name (will be very important later on)
  • --location which datacenter to use (in this case dallas). Use “bx cs locations“ to get the possible locations for the chosen region
  • --workers how many worker nodes are requested
  • --kube-version which kubernetes version should be used. Use “bx cs kube-versions“ to get the available versions. “(default)“ is not part of the parameter call.
  • --private-vlan which vlan for the private network should be used. Use “bx cs vlans <location>“ to get the available public and private vlans
  • --public-vlan see private vlan
  • --machine-type which kind of underlying configuration you want to use for your worker node. Use “bx cs machine-types <location>“ to get the available machine types. The first number after the “.“ is the amount of cores and the one after the “x“ the amount of RAM in GB.

This command takes some time (~1h) to generate the kubernetes cluster. BTW my bluemix cli docker container has all necessary tools and also a nice script called “start_cluster.sh” to query all parameters and start a new cluster. After the cluster is up and running we can get the kubernetes configuration with

bx cs cluster-config ansi-blog
OK
The configuration for ansi-blogtest was downloaded successfully. Export environment variables to start using Kubernetes.

export KUBECONFIG=/root/.bluemix/plugins/container-service/clusters/ansi-blog/kube-config-dal10-ansi-blog.yml

Starting a pod and replica set

kubectl run loadtest --image=ansi/lookbusy --requests=cpu=200m

We start the pod and replica set without a yaml file because the request is very straightforward. Important here is the parameter “--requests“. Without it the autoscaler cannot measure the cpu load and will never trigger.

Exposing the http port

kubectl expose deployment loadtest --type=LoadBalancer --name=loadtest --port=80

Again, because the call is so simple, we directly call kubectl without a yaml file to expose port 80. We can check for the public IP with

kubectl get svc
NAME     TYPE         CLUSTER-IP   EXTERNAL-IP PORT(S)      AGE
loadtest LoadBalancer 172.21.3.160 <pending>   80:31277/TCP 23m

In case the cloud runs out of public IP addresses and the “EXTERNAL-IP“ is still pending after several minutes, we can use one of the workers' public IP addresses together with the dynamically assigned port. The port is visible with “kubectl get svc“ in the “PORT(S)“ section; the syntax is, as always in docker, internalport:externalport. The workers' public IPs can be checked with

bx cs workers ansi-blog
ID                                               Public IP     Private IP     Machine Type       State  Status Version
kube-dal10-cr1dd768315d654d4bb4340ee8159faa17-w1 169.47.252.96 10.177.184.212 b2c.4x16.encrypted normal Ready  1.8.6_1506

So instead of calling our service with an official public IP address on port 80 we can use

http://169.47.252.96:31277

Autoscaler

Kubernetes has a built-in horizontal pod autoscaler which can be started with

kubectl autoscale deployment loadtest --cpu-percent=50 --min=1 --max=10

In this case it measures the cpu load and starts new pods when the load is over 50%. The autoscaler in this configuration never starts more than 10 and never fewer than 1 pod. The current measurements and parameters can be checked with

kubectl get hpa
NAME      REFERENCE           TARGETS  MINPODS MAXPODS REPLICAS AGE
loadtest  Deployment/loadtest 0% / 50% 1       10      1        23m

So right now the cpu load is 0 and only one replica is running.

Loadtest

Time to call our container and start the load test. With the URL determined above we can use curl to start the test with

curl "http://169.47.252.96:31277/?seconds=1000&percentage=80"

and check the result after some time with

kubectl get hpa
NAME      REFERENCE           TARGETS  MINPODS MAXPODS REPLICAS AGE
loadtest  Deployment/loadtest 60%/50%  1       10      6        23m

As we can see, the load increases and the autoscaler kicks in. More details can be obtained with the “kubectl proxy“ command.
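
To watch the additional pods coming up while the load test is running, this is also handy (the run=loadtest label is set automatically by the kubectl run command above):

kubectl get pods -l run=loadtest -w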

Deleting the kubernetes cluster

To clean up we could delete all pods, replica sets and services, but we can also delete the complete cluster with

bx cs cluster-rm ansi-blog


Execute the Radio Meteor Observations program on mac os

MeteorLogger Screenshot

What is it about

Wolfgang Kaufmann wrote an impressive article and an even more impressive piece of software for hobby radio astronomers. I highly recommend checking out the article and playing with the software, as there are not so many radio astronomers in the community. His software is written in python with a very clean UI. It directly connects to the computer's sound card and grabs the audio signal. I shortly describe here what to install on macOS to get his software up and running.

Where to get it

The software can be downloaded at http://www.ars-electromagnetica.de/robs/download.html. Unfortunately it is not available in any online repo like github, but the source code can be downloaded as a zip file.

Preparation

The PyAudio package needs some libraries and direct access to the OS sound system. Therefore we need to install this audio package outside of python itself:

brew install portaudio

The necessary python libs can be installed via pip. I recommend doing it in a virtual environment

mkdir ms
cd ms
virtualenv -p python3 .
. bin/activate 
pip install cycler matplotlib numpy PyAudio pyparsing \
            python-dateutil pytz scipy six tk-tools   \
            xlrd xlwt

That is all. After installing the python libs the program starts right away

python MeteorLogger_v1.21a.py


Setting up SDRplay remote on a raspberry pi

SDRplay

SDRPlay

I recently bought myself an SDRplay receiver to play with this technology and maybe build a ground station or meteor scatter detector. The original plan is to set up a receiver on the Motionlab roof with a raspberry pi and send the IQ data via network down to a local server to extract the interesting information there. One great piece of software to work remotely with an SDR receiver is the Soapy project.

Install the raspberry pi part

Build system

Install the latest Raspberry Pi Lite image from raspberrypi.org, then:

sudo apt update
sudo apt upgrade
sudo apt install cmake g++ libpython-dev python-numpy swig git

Core system

The Soapy setup consists of 3 parts. The core system must be installed first.

git clone https://github.com/pothosware/SoapySDR.git
cd SoapySDR
mkdir build
cd build
cmake ..
make -j4
sudo make install
sudo ldconfig

SDRplay

The SDRplay support consists of two parts: one is the proprietary binary library from SDRplay itself, the other is the Soapy wrapper for SDRplay.

Binary Libraries

The driver can be downloaded from the SDRplay homepage https://www.sdrplay.com/rpi2dl.php

chmod 777 SDRplay_RSP_API-RPi-2.11.1.run
./SDRplay_RSP_API-RPi-2.11.1.run

The SDRplay Soapy wrapper

git clone https://github.com/pothosware/SoapySDRPlay.git
cd SoapySDRPlay
mkdir build
cd build
cmake ..
make -j4
sudo make install

Test the Soapy access

SoapySDRUtil --info

Soapy Server for Remote Access

git clone https://github.com/pothosware/SoapyRemote.git
cd SoapyRemote
mkdir build
cd build
cmake ../ # -DCMAKE_BUILD_TYPE=Debug
make -j4
sudo make install

Run the server

SoapySDRServer --bind
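
From a client machine in the same network the remote receiver should then be discoverable through the SoapyRemote driver, roughly like this (the IP address of the raspberry pi is just an example):

SoapySDRUtil --find="remote=192.168.2.30"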

If you want to run it as a service have a look here on how to autostart stuff in linux.