Run your own docker registry with token-based authentication behind nginx


How to build a controlled environment for distributing docker images based on user accounts

Docker Hub itself, AWS (just to name the biggest docker hosts right now) and many more public / private repository servers are on the market. But sometimes there is a need to host your own registry for docker images. One reason can be simply because we can; another is, for example, to give individual pull / push rights for different images to different users, and to control that access based on expiration dates as well.

Components and the big picture

For this setup we need several software components working together: the firewall to block all ports except 443 for HTTPS, the nginx reverse proxy to terminate the SSL connection, protect the underlying services against direct access and allow for load balancing, the docker registry to host the images, and last but not least the docker token authenticator to identify users and grant access to images (push and/or pull) based on their rights.

With version 2 of the registry protocol, Docker introduced the “Docker registry authentication scheme“. It basically hands the access control for images to an outside system and uses the bearer token mechanism for communication. The flow to access a docker image is as follows (a code sketch follows the list):

  1. Docker daemon accesses the docker registry server as usual and gets a 401 Unauthorized in return with a “WWW-Authenticate” header pointing to the authentication server the registry server trusts.
  2. Docker daemon contacts the authentication server via the given URL and the user authenticates against that server.
  3. The authentication server checks the access rights based on username, password, image name and access type (pull/push) and returns a bearer token signed with the private key.
  4. Docker daemon accesses the docker registry again with the bearer token and the docker image request.
  5. Docker registry server checks the bearer token based on the authentication server public key and grants access or doesn’t.
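
To make the dance concrete, here is a minimal Python sketch of steps 1–4 using requests. Hostnames, the user "waldi" and the scope are assumptions, not from the original post:

```python
import requests

# 1. Hit the registry; expect a 401 with a WWW-Authenticate challenge.
resp = requests.get("https://registry.example.com/v2/")
challenge = resp.headers["WWW-Authenticate"]  # e.g. Bearer realm="...",service="..."

# Naive parsing of the challenge parameters (good enough for a sketch).
params = dict(p.split("=", 1) for p in challenge[len("Bearer "):].split(","))
realm = params["realm"].strip('"')
service = params["service"].strip('"')

# 2. + 3. Ask the auth server for a bearer token for the wanted scope.
token = requests.get(
    realm,
    params={"service": service, "scope": "repository:test/server:pull"},
    auth=("waldi", "secret"),
).json()["token"]

# 4. Retry the registry request, this time with the bearer token.
resp = requests.get(
    "https://registry.example.com/v2/test/server/tags/list",
    headers={"Authorization": "Bearer " + token},
)
print(resp.status_code, resp.json())
```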

Firewall

Ubuntu ships with a very simple firewall control script called “Uncomplicated Firewall“ (ufw). The script manages the iptables configuration and lets the user open ports with a single line. If you access the server via SSH, make sure you allow SSH before you activate the firewall. I also recommend installing fail2ban to block brute-force login attempts.
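
A minimal sketch of the ufw setup (port 80 stays open only for the HTTPS redirect and the certbot challenge):

```sh
sudo ufw allow ssh      # do this FIRST so you don't lock yourself out
sudo ufw allow 80/tcp   # HTTP, only for the 301 redirect and certbot
sudo ufw allow 443/tcp  # HTTPS
sudo ufw enable
sudo ufw status verbose
```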

Nginx reverse proxy

We install Nginx as a docker service as well, because its update cycle is much faster than that of the distribution's package repository. The basic Nginx docker image is ready to use and only needs the settings for http and https. Everything is handled via the https port, but we keep http (port 80) open to redirect everything to https with a 301 (moved permanently) return code.

This is a very simple Dockerfile to add the ssl certificates and the http/https configuration. We could also mount the certificates and configuration in the docker-compose file and leave the image plain as it is. Both options are valid; it is just a matter of taste.
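
A sketch of such a Dockerfile (paths and file names are assumptions):

```dockerfile
FROM nginx:latest
# certificates and Diffie-Hellman parameters
COPY certs/ /etc/nginx/certs/
# the http redirect and the ssl/proxy configuration
COPY http.conf ssl.conf /etc/nginx/conf.d/
```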

This is the http configuration for nginx: accept everything on http and return a 301 (moved permanently) to the same server and path, just with https.
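
A minimal version of that redirect configuration could look like this:

```nginx
# http.conf - answer every plain-http request with a permanent redirect
server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}
```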

SSL configuration

SSL configuration is a little bit more complicated, as we also specify the ciphers and parameters for the encryption. As this topic is endless and very easy to screw up, I personally rely on https://cipherli.st as a configuration source.

The recommendation is to generate your own Diffie–Hellman parameters with more than 2048 bits. This process can take a very long time. We add the resulting file, together with our keys, to the cert folder.
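
Generating the parameters is a single openssl call (the 4096-bit size is my choice here):

```sh
openssl dhparam -out certs/dhparams.pem 4096
```

The cipher settings along the lines of cipherli.st could then look like this (a sketch; always check the current recommendation before copying):

```nginx
# ssl.conf (excerpt) - protocol and cipher settings following cipherli.st
ssl_protocols TLSv1.2;
ssl_ciphers EECDH+AESGCM:EDH+AESGCM;
ssl_prefer_server_ciphers on;
ssl_ecdh_curve secp384r1;
ssl_dhparam /etc/nginx/certs/dhparams.pem;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
```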

This configuration is based on the recommendation from cipherli.st. Be aware that one part of this setup is Strict-Transport-Security, which can cause a lot of long-term trouble if you mess it up. This completes the basic SSL setup.

This mapping helps to set the right header even when nginx removes it because of the authentication handling. The docker registry needs this information in the http header.
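
A mapping along the lines of the official registry nginx recipe:

```nginx
# re-add the Docker-Distribution-Api-Version header when nginx stripped it
map $upstream_http_docker_distribution_api_version $docker_distribution_api_version {
    '' 'registry/2.0';
}
```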

In this case we are running the registry and the auth server on the same virtual machine, therefore both configurations are in the ssl.conf file. This one is for the auth server:
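
A sketch of the auth server virtual host (hostname and container name are examples):

```nginx
# ssl.conf - virtual host for the token authenticator
server {
    listen 443 ssl http2;
    server_name auth.example.com;

    ssl_certificate     /etc/nginx/certs/auth.example.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/auth.example.com/privkey.pem;

    location / {
        proxy_pass http://dockerauth:5001;   # docker_auth container, see docker-compose
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```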

And this is the configuration part for the registry server itself. Important here is the client_max_body_size parameter, which makes sure even bigger docker images get through. Older docker client versions get a 404 because the docker registry cannot handle them.
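
A sketch of the registry virtual host (again, hostname and container name are examples; the user-agent check rejecting old v1 clients follows the official recipe):

```nginx
# ssl.conf - virtual host for the registry itself
server {
    listen 443 ssl http2;
    server_name registry.example.com;

    ssl_certificate     /etc/nginx/certs/registry.example.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/registry.example.com/privkey.pem;

    # disable the body size limit so big image layers get through
    client_max_body_size 0;

    location /v2/ {
        # old docker clients only speak the v1 protocol - turn them away
        if ($http_user_agent ~ "^(docker\/1\.(3|4|5|6)|Go ).*$") {
            return 404;
        }
        proxy_pass http://registry:5000;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header Docker-Distribution-Api-Version $docker_distribution_api_version always;
        proxy_read_timeout 900;
    }
}
```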

Let's Encrypt

The easiest way to get a certificate is by using Let's Encrypt. There are different ways to receive a certificate; we use a very simple one here with the standalone call. Certbot opens a mini web server on port 80 to handle the authentication request on its own, therefore make sure the nginx docker is not running.

Do the certificate request call for the auth and the registry certificate, and copy the certificates and private keys to your cert folder for the docker build to pick them up. Don't forget the dhparams.pem file.
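
A sketch of those calls (domains and target folders are examples):

```sh
# nginx must not be running - certbot binds to port 80 itself
certbot certonly --standalone -d auth.example.com
certbot certonly --standalone -d registry.example.com

# copy certificates and keys into the cert folder for the docker build
cp /etc/letsencrypt/live/auth.example.com/fullchain.pem certs/auth.example.com/
cp /etc/letsencrypt/live/auth.example.com/privkey.pem certs/auth.example.com/
cp /etc/letsencrypt/live/registry.example.com/fullchain.pem certs/registry.example.com/
cp /etc/letsencrypt/live/registry.example.com/privkey.pem certs/registry.example.com/
```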

Docker registry

Now that the server is configured and more or less secured, let's configure the docker registry server and the auth server. Docker Inc. offers a docker registry container which is relatively easy to handle and to configure.

The configuration is done in the docker-compose file itself. The important pieces are the REALM, so the docker registry can redirect the client to the auth server, the issuer, and the cert bundle from the referred auth server to check the bearer token later.
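
The relevant excerpt of the registry service could look like this (hostnames and paths are examples; the environment variable names are the registry's standard token-auth settings):

```yaml
# docker-compose.yml (excerpt) - the registry service
registry:
  image: registry:2
  restart: always
  environment:
    REGISTRY_AUTH: token
    REGISTRY_AUTH_TOKEN_REALM: https://auth.example.com/auth
    REGISTRY_AUTH_TOKEN_SERVICE: "Docker registry"
    REGISTRY_AUTH_TOKEN_ISSUER: "Auth Service"
    REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE: /certs/bearer.pem
  volumes:
    - ./certs:/certs
    - registry-data:/var/lib/registry
```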

Docker Token Authenticator

Docker Inc. does not provide an auth server out of the box as it does with the registry itself; this is basically left to the registry providers to build their own. Luckily Cesanta stepped up and built a nice configurable auth server to be used with the registry server. docker_auth has different ways of storing information about the users:

  • Static list of users
  • Google Sign-In
  • Github Sign-In
  • LDAP bind
  • MongoDB user collection
  • External Program (gets login parameters and returns 0 or 1)

In our case the way to go is the MongoDB user collection as we can control for each user individually who has access to which image and easily change it on the fly by modifying the user data in the DB itself.

This is the configuration file for the auth server (sketched after the list). It has mainly four parts:

  • Server
    • Which port to listen on
    • Nginx handles the TLS termination, therefore, this server has no TLS handling.
  • Token
    • Use the same issuer as configured in the registry server itself and provide the certificate files for signing the bearer token.
  • Mongo_auth
    • Where the user information is stored and how to access the MongoDB; the password is saved in a simple ASCII file. In our case, as we are behind a firewall in a docker network, we don't use TLS to access the MongoDB.
  • ACL_Mongo
    • Besides the user information, the AccessControlList (ACL) can also be stored in a MongoDB. Same configuration as mongo_auth, but there is an additional cache setting, as this information is held in memory and refreshed every 10 seconds.
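
A sketch of such an auth_config.yml, following docker_auth's mongo examples (database name, user and file paths match this walkthrough; treat it as an assumption, not a verbatim copy of the original):

```yaml
server:
  addr: ":5001"              # plain HTTP - nginx terminates TLS in front of us

token:
  issuer: "Auth Service"     # must match REGISTRY_AUTH_TOKEN_ISSUER
  expiration: 900
  certificate: "/certs/bearer.pem"
  key: "/certs/bearer.key"

mongo_auth:
  dial_info:
    addrs: ["mongo"]
    timeout: "10s"
    database: "23-5"
    username: "ansi"
    password_file: "/config/mongo_password.txt"
  collection: "users"

acl_mongo:
  dial_info:
    addrs: ["mongo"]
    timeout: "10s"
    database: "23-5"
    username: "ansi"
    password_file: "/config/mongo_password.txt"
  collection: "acl"
  cache_ttl: "10s"           # ACLs are kept in memory and refreshed every 10s
```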

MongoDB

The MongoDB was initialized by the docker-compose file with an admin user “root” and password “example”. We use this account to create a new database called “23-5” and add a new user there with username “ansi” and password “test”. This database stores all users and ACLs. The docker registry users themselves are stored with a bcrypt-hashed password and some labels. Hash a password with bcrypt, for example:
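
One way to produce such a hash is htpasswd from the apache2-utils package (username and password here are examples):

```sh
htpasswd -nbB waldi secret
# prints waldi:<bcrypt hash> - store the hash in the user's password field
```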

Besides username and password, we can also store labels of any kind for a given user, which allows us to use these labels in the ACLs again. So in our case, the ACL defines which docker images with a given name a user may access (read-only or full access) based on their labels. In our case, the user “waldi” has full access to all docker images under “test/*” and only read access to everything under “prod/*”, but nothing else. ACLs have a seq number defining the order in which they are processed; the first matching ACL will be used.
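
Hypothetical documents illustrating this, in mongo shell syntax (field names follow docker_auth's mongo backend):

```js
// users collection - password is the bcrypt hash from above, labels are free-form
db.users.insert({
    "username": "waldi",
    "password": "$2y$05$...",
    "labels": { "project": ["test"] }
})

// acl collection - entries are processed ordered by seq, first match wins
db.acl.insert({
    "seq": 10,
    "match": { "account": "waldi", "name": "test/*" },
    "actions": ["push", "pull"],
    "comment": "full access to everything under test/"
})
db.acl.insert({
    "seq": 20,
    "match": { "account": "waldi", "name": "prod/*" },
    "actions": ["pull"],
    "comment": "read-only on prod images"
})
```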

Labels can be combined, so for example (a hypothetical ACL entry follows):
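
```js
// one ACL for everybody: each user may act below the projects in their labels
db.acl.insert({
    "seq": 30,
    "match": { "name": "${labels.project}/*" },
    "actions": ["push", "pull"]
})
```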

This would give push and pull access to the matching docker images.

These variables can be used in the ACL:

  • ${account} the account name aka username
  • ${name} the repository name; “*” can be used, so for example “prod/*” grants access to “prod/server”

Generating bearer SSL key

In order to sign a bearer token we need a key. This can be a self-signed key created with openssl:
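
A sketch of that call (key size, validity and subject are my choices; the file names match the configuration above):

```sh
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
    -subj "/CN=Auth Service" \
    -keyout certs/bearer.key -out certs/bearer.pem
```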

Docker-compose

We can configure and start the auth server, the registry server and nginx with one docker-compose file:
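
A sketch of the complete docker-compose.yml (image names are real; hostnames, paths and passwords are examples consistent with the snippets above):

```yaml
version: "3"

services:
  nginx:
    build: ./nginx
    ports:
      - "80:80"
      - "443:443"
    depends_on: [registry, dockerauth]

  registry:
    image: registry:2
    restart: always
    environment:
      REGISTRY_AUTH: token
      REGISTRY_AUTH_TOKEN_REALM: https://auth.example.com/auth
      REGISTRY_AUTH_TOKEN_SERVICE: "Docker registry"
      REGISTRY_AUTH_TOKEN_ISSUER: "Auth Service"
      REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE: /certs/bearer.pem
    volumes:
      - ./certs:/certs
      - registry-data:/var/lib/registry

  dockerauth:
    image: cesanta/docker_auth
    restart: always
    command: ["/config/auth_config.yml"]
    volumes:
      - ./auth:/config            # auth_config.yml and mongo_password.txt
      - ./certs:/certs

  mongo:
    image: mongo
    restart: always
    command: ["mongod", "--bind_ip", "0.0.0.0"]
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
    volumes:
      - mongo-data:/data/db

  mongoclient:
    image: mongoclient/mongoclient
    ports:
      - "3000:3000"               # testing only - not protected by nginx!

volumes:
  registry-data:
  mongo-data:
```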

I also added a mongoclient docker container to have easy access to the MongoDB server. Please be aware that this one is not secured by the nginx reverse proxy and is only for testing. You can also access the MongoDB from the command line:
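
For example (service name and credentials as in the compose sketch above):

```sh
docker-compose exec mongo mongo -u root -p example --authenticationDatabase admin
```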

The MongoDB docker container is also started with a different command to allow access from outside localhost (--bind_ip 0.0.0.0).

Testing

docker-compose up starts the setup. We have a docker registry user “waldi” with this configuration:
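
Hypothetical ACL entries matching that description could look like this:

```js
db.acl.insert({
    "seq": 10,
    "match": { "account": "waldi", "name": "test" },
    "actions": ["push", "pull"]
})
db.acl.insert({
    "seq": 11,
    "match": { "account": "waldi", "name": "socke*" },
    "actions": ["push", "pull"]
})
```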

So user “waldi“ can write and read all repositories named “test” or anything starting with “socke“. Let's try it.
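
A test run could look like this (registry hostname is an example):

```sh
docker login registry.example.com                      # log in as waldi
docker tag alpine registry.example.com/socke/alpine
docker push registry.example.com/socke/alpine          # allowed by the ACL
docker pull registry.example.com/socke/alpine
```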

It works. Now let's test the negative part and check whether the push gets refused:
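
```sh
docker tag alpine registry.example.com/prod/alpine
docker push registry.example.com/prod/alpine
# expected: denied: requested access to the resource is denied
```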

It works! Users can be modified on the fly in the MongoDB and rights granted or revoked. There is one final test to check whether nginx is secure: https://www.ssllabs.com/ssltest/index.html.

Serious weather conditions in your calendar

The need


I really like to plan my day in my calendar. Therefore I added a lot of external ical feeds like meetup, open-air cinema and, of course, launchlibrary. To decide on transportation, I always had the Weather Underground page open in a separate browser tab. This is very inconvenient, so I wrote a small script that gets weather predictions via API calls from wunderground, exports an ical feed and updates my google calendar with weather conditions.

Wunderground

Weather Underground is (or at least was for many years) the coolest weather page on the internet: a really great UI and a wonderful API to get current weather conditions and weather predictions for the next 10 days. Furthermore (and that is why I really, really like it), users can send their own weather sensor data to the site to enhance the sensor mesh network and get a nice visualization. Unfortunately, the service is losing features on a monthly basis and the page itself is down for several hours every now and then. Very sad, but I still love it.

As I said, they have a nice API to get the weather forecast for the next 10 days on an hourly basis. OK, we can all discuss how dependable a weather prediction for a certain hour in 8 days is, but at least for the next few days it is really helpful. I am using the forecast10day and the hourly10day API endpoints to get a nicely formatted JSON document from wunderground. If you want to run this script for your own area, you need an account and an API key, as the calls are restricted (but free).

PWS

My favorite makerspace (Motionlab.berlin) has an epic weather phalanx (as I love to call it) and sends local weather conditions to wunderground. Therefore, besides asking for the weather conditions in a city, I can ask for conditions measured by a certain weather reporting station. In our case it's the IBERLIN1705 station. Check out the current conditions here.

Forecast10day

The API call to http://api.wunderground.com/api/YOUR-API-KEY-HERE/forecast10day/q/pws:IBERLIN1705.json returns, for each day of the next 10 days, information about humidity, temperature (min/max), snow, rain, wind and much more. I take these data and create one calendar entry each morning at 06:00-06:15 with summary information for the day. Especially for days beyond the 4-day boundary, this summary is more accurate than the hourly information. Getting this information in python is very easy:
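
A minimal sketch of the call (the JSON structure follows the old wunderground API; the service has since been discontinued):

```python
import json
import requests

API_KEY = "YOUR-API-KEY-HERE"
url = ("http://api.wunderground.com/api/%s/forecast10day/q/pws:IBERLIN1705.json"
       % API_KEY)

response = requests.get(url)
data = json.loads(response.content)

# one entry per day for the next 10 days
for day in data["forecast"]["simpleforecast"]["forecastday"]:
    print(day["date"]["pretty"], day["high"]["celsius"], day["low"]["celsius"])
```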

I am using requests to make the REST call and parse the “content” value with json.loads. Easy as it looks. The data variable contains the dictionary with all weather information on a silver platter (if the API is not down, which happens way too often).

Hourly10day

http://api.wunderground.com/api/YOUR-API-KEY/hourly10day/q/pws:IBERLIN1705.json contains the weather information on an hourly basis for the next 10 days, so the parsing is very similar to the forecast API call. I am especially interested here in rain, snow, temperature, wind, dew point and UV index, as these are values I want to monitor and add calendar entries for when they are outside a certain range (a sketch follows the list):

  • Wind > 23 km/h
  • Temperature > 30 or < -10 C
  • UV-Index > 4 (6 is max)
  • Rain and Snow in general
  • (Temperature – Dew point) < 3
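
A sketch of those threshold checks, assuming `data` now holds the hourly10day response (field names follow the old wunderground JSON):

```python
for hour in data["hourly_forecast"]:
    temp = float(hour["temp"]["metric"])
    dewpoint = float(hour["dewpoint"]["metric"])
    wind = float(hour["wspd"]["metric"])
    uvi = float(hour["uvi"])
    rain = float(hour["qpf"]["metric"] or 0)
    snow = float(hour["snow"]["metric"] or 0)

    alerts = []
    if wind > 23:
        alerts.append("wind %.0f km/h" % wind)
    if temp > 30 or temp < -10:
        alerts.append("temperature %.0f C" % temp)
    if uvi > 4:
        alerts.append("UV index %.0f" % uvi)
    if rain > 0 or snow > 0:
        alerts.append("precipitation")
    if temp - dewpoint < 3:
        alerts.append("muggy")

    if alerts:  # only hours outside the comfort range get a calendar entry
        print(hour["FCTTIME"]["pretty"], ", ".join(alerts))
```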

Humidity in general is not so important and highly dependent on the current temperature. But the dew point (“the atmospheric temperature (varying according to pressure and humidity) below which water droplets begin to condense and dew can form”) is very interesting when you want to know if it is getting muggy. Even at 10 C, a very low difference between temperature and dew point means you really feel the cold crawling into your bones. 🙂

Ical

To create an ical feed I use the icalendar library in python. It is very handy for creating events and exporting them as an ical feed.
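
A minimal sketch of building one event (times and texts are made up):

```python
from datetime import datetime, timedelta

import pytz
from icalendar import Calendar, Event

berlin = pytz.timezone("Europe/Berlin")

cal = Calendar()
cal.add("prodid", "-//weather-calendar//example//")
cal.add("version", "2.0")

event = Event()
event.add("summary", "18 C, light rain")          # shown in the calendar overview
event.add("description", "Wind 12 km/h, UV 3")    # shown in the entry details
start = berlin.localize(datetime(2019, 6, 1, 6, 0))
event.add("dtstart", start)
event.add("dtend", start + timedelta(minutes=15))
cal.add_component(event)

# serialize the whole calendar as an ical feed
with open("weather.ics", "wb") as f:
    f.write(cal.to_ical())
```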

The summary is the text your calendar program shows in the calendar overview, while the description is displayed when showing the calendar entry details. “dtstart” and “dtend” mark the time range; for the timezone I use the pytz library. “to_ical()” serializes everything. That's basically all you need to create an ical feed.

Google

Google Calendar can import and subscribe to calendars. While import adds the calendar entries to an existing calendar once (great for concerts or public transport bookings), subscribe creates a new calendar and updates the feed only every 24 hours or more. That is fine for long-running feeds like meetups or rocket launches, but weather predictions change several times per hour. Therefore I added a small feature to the script to actively delete and recreate calendar entries, so I can run it every 3 hours and keep the calendar up to date.

As always, google offers nice and very handy API endpoints to manipulate the data. Besides calling the REST endpoints by hand, there are libraries for different languages. I use “googleapiclient” and “oauth2client” to access my calendar. The first step is to create a new calendar in google, then activate the calendar API in the developer console and create an API key for your app. The googleapiclient takes care of the OAuth dance and stores the credentials in a local file.
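
A sketch of such a helper, following the classic oauth2client flow (file names match the description below):

```python
from googleapiclient.discovery import build
from httplib2 import Http
from oauth2client import client, file, tools

SCOPES = "https://www.googleapis.com/auth/calendar"

def getService():
    # token.json holds the stored credentials after the first OAuth dance
    store = file.Storage("token.json")
    creds = store.get()
    if not creds or creds.invalid:
        # credentials.json is the client secret from the developer console
        flow = client.flow_from_clientsecrets("credentials.json", SCOPES)
        creds = tools.run_flow(flow, store)
    return build("calendar", "v3", http=creds.authorize(Http()))
```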

When you call this function the very first time, it requires the OAuth dance: basically, you open a web page and grant access to your google calendar. The secrets are stored in the token.json file and reloaded on every call.

Deleting old events

“getService” calls the function above to get an access object. “events().list().execute()” requests a list of the first 100 calendar entries, “events_result.get()” returns an array with all calendar entries and their details, and “service.events().delete().execute()” removes these entries.
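
A condensed sketch of the deletion step (the calendar ID is a placeholder):

```python
service = getService()
CALENDAR_ID = "your-calendar-id@group.calendar.google.com"

events_result = service.events().list(calendarId=CALENDAR_ID,
                                      maxResults=100).execute()
for event in events_result.get("items", []):
    service.events().delete(calendarId=CALENDAR_ID,
                            eventId=event["id"]).execute()
```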

Creating new events

Very similar to the delete calls, the add call gets the credentials and calls “events().insert().execute()” with a dictionary containing the detailed information.
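
And a matching sketch for inserting one entry (again with made-up times and texts):

```python
body = {
    "summary": "18 C, light rain",
    "description": "Wind 12 km/h, UV 3",
    "start": {"dateTime": "2019-06-01T06:00:00+02:00"},
    "end": {"dateTime": "2019-06-01T06:15:00+02:00"},
}
service.events().insert(calendarId=CALENDAR_ID, body=body).execute()
```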

Docker container

The docker container is very simple.
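
A sketch of such a Dockerfile (file names are assumptions based on the description):

```dockerfile
FROM python:latest
RUN pip install requests icalendar pytz google-api-python-client oauth2client
WORKDIR /app
COPY weather.py credentials.json token.json /app/
CMD ["python", "weather.py"]
```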

I am using the latest python docker container, installing some libraries with pip and copying the python file and the credentials and token json files.

The repo

The complete source code can be found in my github repository.

The calendar for Berlin weather conditions can be found and added here.