Lindenblad Antenna for 2 Meters DIY

We need an Antenna

Lindenblad

There was a need for an antenna for our SatNOGS station (Satellite Networked Open Ground Stations). As serious hackers there was no other option than to build one of our own. After several more or less unsuccessful experiments with different antenna types, we decided to build a Lindenblad antenna for the 2 meter (144 MHz) band. We are Ronny (DL7ROX) and myself (DM1AS). There are several papers and discussions available on how to build such an antenna, most of them from AMSAT and the US in general.

So I will only focus here on the "translation" into the metric system and the DIY parts needed to assemble one antenna. For a very good paper and the magic background please have a look at https://www.amsat.org/wordpress/wp-content/uploads/2015/08/An-EZ-Lindenblad-Antenna-for-2-Meters2.pdf.

Dipole Dimensions

Dimension                            Length / Distance
Length of one dipole element         373 mm
Gap between the two dipole elements  19 mm
Total length of the dipole           765 mm

In order to make your life easier and the spacing hopefully very accurate, I created this T-connector with Fusion360.


https://a360.co/2Q210Xh

and this plug


https://a360.co/2RmxR5P

The cross connection between the 4 dipoles is made of the same aluminium tube, with a length of 584 mm.

The wires

As we have 4 dipoles of about 50 Ohm impedance connected in parallel, while the feed line is also 50 Ohm, we need impedance matching. The solution in the paper is to use a 75 Ohm TV coax section of a defined length for each dipole, transforming each branch so that it presents 200 Ohm at the common junction; the four 200 Ohm branches in parallel then match the 50 Ohm of the feed line.
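As a quick sanity check of the parallel connection (just the arithmetic, assuming each transformed branch presents 200 Ω at the junction):

```latex
Z_{\text{feed}} = \left( 4 \cdot \frac{1}{200\,\Omega} \right)^{-1}
                = \frac{200\,\Omega}{4} = 50\,\Omega
```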

impedance matching wire
A 584 mm
B 5 mm
C 8 mm


Put it all together

Each dipole is connected to one impedance matching wire, and all 4 wires to the antenna feed line. Don't forget a cable ferrite on each of the impedance matching wires, very close to the dipole side. The 4 dipoles are then mounted opposite to each other, each dipole tilted by 30 degrees clockwise relative to the horizon.

Measure the SWR

SWR-Lindenblad

We measured the antenna with an AA-1400 analyzer and were very proud to get such a great result: an SWR of 1 at the center frequency.

Serious weather conditions in your calendar

The need

Calendar

I really like to plan my day in my calendar. Therefore I added a lot of external ical feeds like meetup, open-air cinema and, for sure, launchlibrary. In order to decide on transportation I always had the Weather Underground page open in a separate browser tab. This is very inconvenient, so I wrote a small script that gets weather predictions via API calls from wunderground, exports an ical feed and updates my google calendar with the weather conditions.

Wunderground

Weather Underground is (or at least was for many years) the coolest weather page on the internet. Really great UI and a wonderful API to get current weather conditions and weather predictions for the next 10 days. Furthermore (and that is why I really, really like it) users can send their own weather sensor data to the site to enhance the sensor mesh network and get a nice visualization. Unfortunately the service is losing features on a monthly basis and the page itself is down for several hours every now and then. Very sad, but I still love it.

As I said, they have a nice API to get the weather forecast for the next 10 days on an hourly basis. OK, we can all discuss how dependable a weather prediction for a certain hour in 8 days is, but at least for the next few days it is really helpful. I am using the forecast10day and the hourly10day API endpoints to get a nicely formatted JSON document from wunderground. If you want to run this script for your own area you need an account and an API key, as the calls are restricted (but free).

PWS

My favorite maker space (Motionlab.berlin) has an epic weather phalanx (as I love to call it) and sends local weather conditions to wunderground. Therefore, besides asking for the weather conditions in a city, I can ask for the conditions reported by a certain weather station. In our case it's the IBERLIN1705 station. Check out the current conditions here.

Forecast10day

The API call to http://api.wunderground.com/api/YOUR-API-KEY-HERE/forecast10day/q/pws:IBERLIN1705.json returns, for each day of the next 10 days, information about humidity, temperature (min/max), snow, rain, wind and much more. I take these data and create one calendar entry each morning at 06:00-06:15 with summary information for the day. Especially for days beyond the 4 day boundary this daily summary is more accurate than the hourly information. Getting this information in python is very easy:
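A minimal sketch of the call (API key as placeholder, station as above):

```python
import json

import requests

url = ("http://api.wunderground.com/api/YOUR-API-KEY-HERE"
       "/forecast10day/q/pws:IBERLIN1705.json")
response = requests.get(url)
data = json.loads(response.content)  # dict with all weather information
```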

I am using requests to make the REST call and parse the "content" value with json loads. As easy as it looks. The data var contains the dictionary with all weather information on a silver platter (if the API is not down, which happens way too often).

Hourly10day

http://api.wunderground.com/api/YOUR-API-KEY/hourly10day/q/pws:IBERLIN1705.json contains the weather information on an hourly basis for the next 10 days, so the parsing is very similar to the forecast API call. I am especially interested here in rain, snow, temperature, wind, dew point and UV index, as these are values I want to monitor and add calendar entries for when they are outside a certain range (sketched in code below the list):

  • Wind > 23 km/h
  • Temperature > 30 C or < -10 C
  • UV index > 4 (6 is max)
  • Rain and snow in general
  • (Temperature - Dew point) < 3
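A sketch of the check for one hourly entry (the field names are how I remember the wunderground hourly JSON; treat them as assumptions):

```python
def alerts(hour):
    """Collect alert strings for one hourly forecast entry."""
    temp = float(hour["temp"]["metric"])
    dew = float(hour["dewpoint"]["metric"])
    found = []
    if float(hour["wspd"]["metric"]) > 23:
        found.append("strong wind")
    if temp > 30 or temp < -10:
        found.append("extreme temperature")
    if int(hour["uvi"]) > 4:
        found.append("high UV index")
    if float(hour["qpf"]["metric"]) > 0 or float(hour["snow"]["metric"]) > 0:
        found.append("rain or snow")
    if temp - dew < 3:
        found.append("muggy or misty")
    return found
```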

Humidity in general is not so important and highly dependent on the current temperature. But the dew point ("the atmospheric temperature (varying according to pressure and humidity) below which water droplets begin to condense and dew can form") is very interesting when you want to know if it is getting muggy. Even at 10 C, a very small difference between temperature and dew point means you really feel the cold crawling into your bones. 🙂

Ical

To create an ical feed I use the icalendar library in python. Very handy to create events and export them as an ical feed.
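A minimal sketch with the icalendar library (dates and texts are placeholders):

```python
from datetime import datetime

import pytz
from icalendar import Calendar, Event

berlin = pytz.timezone("Europe/Berlin")

cal = Calendar()
cal.add("prodid", "-//weather calendar//")
cal.add("version", "2.0")

event = Event()
event.add("summary", "Rain, 12 C")                   # shown in the calendar view
event.add("description", "Rain 2 mm, wind 18 km/h")  # shown in the entry details
event.add("dtstart", berlin.localize(datetime(2018, 8, 1, 6, 0)))
event.add("dtend", berlin.localize(datetime(2018, 8, 1, 6, 15)))
cal.add_component(event)

print(cal.to_ical().decode("utf-8"))  # the complete ical feed as text
```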

"summary" is the text your calendar program shows in the calendar overview, while "description" is displayed when showing the calendar entry details. "dtstart" and "dtend" mark the time range. For the timezone I use the pytz library, and "to_ical()" renders the whole feed. That's basically all you need to create an ical feed.

Google

The google calendar can import and subscribe to calendars. While import adds the calendar entries to an existing calendar once (great for concerts or public transport bookings), subscribe creates a new calendar and refreshes the feed only about once every 24 hours or less often. This is great for long lasting events like meetups or rocket starts, but weather predictions change several times per hour. Therefore I added a small feature to the script that actively deletes and creates calendar entries. So I can run it every 3 hours and keep the calendar up to date.

As always, google offers nice and very handy API endpoints to manipulate the data. Beside calling the REST endpoints by hand there are libraries for different languages. I use "googleapiclient" and "oauth2client" to access my calendar. The first step is to create a new calendar in google, then activate the calendar API in the developer console and create an API key for your app. The googleapiclient takes care of the OAuth dance and stores the credentials in a local file.
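A sketch of the access function, following the classic google quickstart pattern (file names as mentioned in the text below):

```python
from googleapiclient.discovery import build
from httplib2 import Http
from oauth2client import client, file, tools

SCOPES = "https://www.googleapis.com/auth/calendar"

def getService():
    # token.json is created after the first OAuth dance
    store = file.Storage("token.json")
    creds = store.get()
    if not creds or creds.invalid:
        flow = client.flow_from_clientsecrets("credentials.json", SCOPES)
        creds = tools.run_flow(flow, store)
    return build("calendar", "v3", http=creds.authorize(Http()))
```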

When you call this function for the very first time it requires the OAuth dance: basically you open a webpage and grant access to your google calendar. The secrets are stored in the token.json file and reloaded on every call.

Deleting old events

"getService" calls the function above to get an access object. "events().list().execute()" requests a list of the first 100 calendar entries, "events_result.get()" returns an array with all calendar entries and their details, and "service.events().delete().execute()" removes these entries.
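A sketch of the deletion (the calendar id of the dedicated weather calendar is an assumption):

```python
def deleteAllEvents(calendar_id):
    service = getService()
    events_result = service.events().list(
        calendarId=calendar_id, maxResults=100).execute()
    for event in events_result.get("items", []):
        service.events().delete(
            calendarId=calendar_id, eventId=event["id"]).execute()
```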

Creating new events

Very similar to the delete calls: the add call gets the credentials and calls "events().insert().execute()" with a dictionary containing the detailed information.
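For example (field values are placeholders):

```python
def addEvent(service, calendar_id, summary, description, start, end):
    event = {
        "summary": summary,
        "description": description,
        "start": {"dateTime": start.isoformat(), "timeZone": "Europe/Berlin"},
        "end": {"dateTime": end.isoformat(), "timeZone": "Europe/Berlin"},
    }
    service.events().insert(calendarId=calendar_id, body=event).execute()
```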

Docker container

The docker container is very simple.

I am using the latest python docker container, installing some libraries with pip, and copying the python file and the credentials and token json files.
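A sketch of such a Dockerfile (script and file names are assumptions):

```dockerfile
FROM python:latest

RUN pip install requests icalendar pytz google-api-python-client oauth2client

WORKDIR /app
COPY weather_calendar.py credentials.json token.json /app/

CMD ["python", "weather_calendar.py"]
```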

The repo

The complete source code can be found in my github repository.

The calendar for Berlin weather conditions can be found and added here.


G199 or how to 3D print a logo on existing STL files

The problem

Logo with 2 different Filaments

Sometimes you want to print your logo or some text on your 3D object with a different filament, but you only have a single head printer and don't want to spend all the time sitting next to your printer waiting for the right moment to manually pause the print and change the filament. Like the Motionlab logo in the picture. For sure you could print it separately and glue it onto the main printed part, but especially with text that's a lot of tiny parts to take care of and align. If you are lucky and have a dual print head it's not a problem, but there is also a very simple way to do it with a single print head, by editing the G-Code file and adding G-Codes by hand.

The solution

There is a G-Code named G199. According to Craftware the purpose of the code is: "G199 pauses the print immediately, and moves the head to X0, Y100. (this is the command the LCD screen uses)". So by adding this code by hand the printer stops printing and moves the head to the side. After changing the filament (and also extruding some more by hand to make sure the printer is ready) you can press "continue" on the printer display.

Prepare the SVG file

If your logo is already an SVG you are lucky. Otherwise try to convert it to SVG and make sure it consists of connected objects. If you need some geeky stuff I can recommend Geeksvgs.

Use Fusion360 to create the STL logo file

Fusion360 insert SVG

In Fusion360 use Insert -> Insert SVG -> Select SVG File to open the SVG file on a sketch. Resize and stretch it as you like or as the dimensions dictate.

The next step is to extrude the logo to a 3D object. This can be done simply by "Stop Sketching" and then pressing "e" for extrude. Select everything by drawing a frame with your mouse. Unfortunately fusion has no idea which parts of the logo should be extruded and which not. Press and hold CTRL and deselect the inner parts of the logo, for example the circle in the "o". I recommend extruding 10 mm even if you only want to raise the logo by 4 mm.

As the single objects are not connected fusion creates several bodies instead of one.

Save single STL

A single STL file with all bodies included at the right positions can be exported by right-clicking the component name.

Combine both STL in your slicer

Now that we have two STL files we can load them both at the same time into our slicer (no matter which one). Position your logo at the right place, scale it and change the z axis offset according to your needs.

Combine STLs

As we extruded the logo 10 mm there is enough space to play around. Make sure at least one mm is submerged in your main body.


Manually edit the gcode to add the pause sequence

Find the right Layer

Now we need to find the right place in the G-Code itself. Our slicer can help us with the preview mode. The best layer is the second one after the main body is done and the logo starts to be printed.

Mark down this layer and open the G-Code in your favorite text editor. All slicers I have used make nice comments in the code to find the right position. Search for "layer nnn" and add the "G199" statement.
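The edit itself is a single added line; the comment style and layer number depend on your slicer, so this is just an example:

```gcode
;LAYER:57
G199 ; pause immediately, park the head, swap the filament here
;LAYER:58
```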


Print

Just print the G-Code as you always do. As soon as the printer reads and processes the G199 command it stops printing and moves the head to the left side. All heating settings remain the same and you can easily replace your filament and press "Continue" or "GO" on your printer's screen. Happy printing.

Adding a ks0212 relay board to the mqtt universe

Weatherstation with raspi

Adding the 4 channel relay board ks0212 to the MQTT universe

We just hacked a trotec dehumidifier for Herwigs Observatory. The idea was to additionally activate the dehumidifier when the difference between outside and inside humidity is above 10%. Normally there is a fan taking care of it, but sometimes the difference gets too high. As there is already a raspberry pi running in the observatory for the weatherstation and the flightradar24 installation, we just added the 4 channel relay board ks0212 from keyestudio. Not touching the 220V part, we directly used the relay to "press" the TTL switch on the board for 0.5 seconds to turn the dehumidifier on and off. Here are the code snippets we used for this. The control is completely handled via MQTT.

Installing necessary programs and libraries

For the sake of simplicity we used python and the GPIO library wiringpi. Therefore we first install the python development parts and then the python libraries for wiringpi and MQTT. As this is a dedicated hardware installation we don't use virtualenv and directly install the libraries system wide as root.
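Something along these lines (the package names are the usual ones, adjust to your raspbian version):

```bash
sudo apt-get update
sudo apt-get install python-dev python-pip
sudo pip install wiringpi paho-mqtt
```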

The python program

Again, a very simple python script: it basically attaches to an mqtt server (you need to change the code, there is no config) and subscribes itself to a certain topic. Then it waits for messages and cuts off the last part of the topic to identify the relay. The naming convention is based on the relay names printed on the ks0212 pcb. As payload you can send "on", "off" and "press". "press" switches the relay on for half a second in order to simulate a button press, as we need it for our dehumidifier.
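A sketch of such a script (server address and topic layout are assumptions; the pin numbers come from the pinout table at the end):

```python
import time

import paho.mqtt.client as mqtt
import wiringpi

# relay name (as printed on the pcb) -> WiringPi pin, see the table below
RELAYS = {"j2": 7, "j3": 3, "j4": 22, "j5": 25}

def on_message(client, userdata, msg):
    relay = msg.topic.split("/")[-1]        # last topic level names the relay
    pin = RELAYS.get(relay)
    if pin is None:
        return
    payload = msg.payload.decode()
    if payload == "on":
        wiringpi.digitalWrite(pin, 1)
    elif payload == "off":
        wiringpi.digitalWrite(pin, 0)
    elif payload == "press":                # simulate a 0.5 s button press
        wiringpi.digitalWrite(pin, 1)
        time.sleep(0.5)
        wiringpi.digitalWrite(pin, 0)

wiringpi.wiringPiSetup()
for pin in RELAYS.values():
    wiringpi.pinMode(pin, 1)                # 1 = output

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)           # change to your mqtt server
client.subscribe("observatory/relay/#")     # hypothetical topic layout
client.loop_forever()
```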

Adding a systemd service

In order to keep the wannabe daemon up and running and also start it automatically at system start, we add this service configuration file in "/lib/systemd/system/relayboard.service":
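A minimal unit file could look like this (the script path is an assumption):

```ini
[Unit]
Description=ks0212 relay board mqtt control
After=network-online.target

[Service]
ExecStart=/usr/bin/python /home/pi/relayboard.py
Restart=always

[Install]
WantedBy=multi-user.target
```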

Activating the service

The following lines activate the service:
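The standard systemd commands; the service name matches the unit file above:

```bash
sudo systemctl daemon-reload
sudo systemctl enable relayboard.service
sudo systemctl start relayboard.service
```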

Checking the status can be done with:
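```bash
sudo systemctl status relayboard.service
```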

ks0212 Pinout

If you want to do some hacking with the ks0212 relay board on your own, here is the pin mapping table. I used the very cool site https://pinout.xyz/pinout/wiringpi for getting the numbers:

Relay   WiringPi   BCM   Header pin   Link
J2      7          4     7            https://pinout.xyz/pinout/pin7_gpio4
J3      3          22    15           https://pinout.xyz/pinout/pin15_gpio22
J4      22         6     31           https://pinout.xyz/pinout/pin31_gpio6
J5      25         26    37           https://pinout.xyz/pinout/pin37_gpio26


Execute the Radio Meteor Observations program on macOS

MeteorLogger Screenshot

What is it about

Wolfgang Kaufmann wrote an impressive article and an even more impressive piece of software for hobby radio astronomers. I highly recommend checking out the article and playing with the software, as there are not so many radio astronomers in the community. His software is written in python with a very clean UI. It connects directly to the computer sound card and grabs the audio signal. I shortly describe here what to install on macOS to get his software up and running.

Where to get it

The software can be downloaded at http://www.ars-electromagnetica.de/robs/download.html. Unfortunately it is not available in any online repo like github, but the source code can be downloaded as a zip file.

Preparation

The PyAudio package needs some libraries and direct access to the OS sound system. Therefore we need to install this audio package outside of python itself:
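Using Homebrew (assuming you have it installed); PortAudio is the system library PyAudio wraps:

```bash
brew install portaudio
```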

The necessary python libs can be installed via pip. I recommend doing it in a virtual environment:
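For example (the exact package list depends on the MeteorLogger release; PyAudio is the essential one):

```bash
python3 -m venv meteor
source meteor/bin/activate
pip install pyaudio numpy matplotlib
```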

That is all. After installing the python libs the program starts right away.


Setting up SDRplay remote on a raspberry pi

SDRplay

SDRPlay

I recently bought myself an SDRplay receiver to play with this technology and maybe build a ground station or a meteor scatter detector. The original plan is to set up a receiver on the Motionlab roof with a raspberry pi and send the IQ data via network down to a local server to extract the interesting information. One great piece of software to work remotely with an SDR receiver is the Soapy project.

Install the raspberry pi part

Build system

Install the latest raspberry pi lite version from raspberrypi.org

Core system

The soapy part consists of 3 parts. The core system must be installed first:
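The usual cmake build from the pothosware repo (a sketch; check the Soapy wiki for the current dependency list):

```bash
sudo apt-get install git cmake g++ swig python-dev
git clone https://github.com/pothosware/SoapySDR.git
cd SoapySDR
mkdir build && cd build
cmake ..
make -j4
sudo make install
sudo ldconfig
```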

SDRplay

The SDRplay support consists of two parts: one is the proprietary binary library from SDRplay itself, the other is the soapy wrapper for SDRplay.

Binary Libraries

The driver can be downloaded from the SDRplay homepage https://www.sdrplay.com/rpi2dl.php
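The download is a self-extracting installer; the file name depends on the current API version, so treat this one as a placeholder:

```bash
wget https://www.sdrplay.com/software/SDRplay_RSP_API-RPi-2.13.1.run
chmod +x SDRplay_RSP_API-RPi-2.13.1.run
./SDRplay_RSP_API-RPi-2.13.1.run
```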

The SDRplay Soapy wrapper
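Again the standard cmake procedure, this time from the pothosware SoapySDRPlay repo:

```bash
git clone https://github.com/pothosware/SoapySDRPlay.git
cd SoapySDRPlay
mkdir build && cd build
cmake ..
make
sudo make install
```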

Test the Soapy access
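SoapySDRUtil ships with the core system and should now find the receiver:

```bash
SoapySDRUtil --info
SoapySDRUtil --probe="driver=sdrplay"
```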

Soapy Server for Remote Access

Run the server
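The remote part lives in the SoapyRemote repo; after building and installing it, the server can simply be started on the pi:

```bash
git clone https://github.com/pothosware/SoapyRemote.git
cd SoapyRemote
mkdir build && cd build
cmake ..
make
sudo make install

# start the server and announce it on the local network
SoapySDRServer --bind
```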

If you want to run it as a service have a look here on how to autostart stuff in linux.

Accessing IBM Object Store from Python

IBM Object Store

SWIFT Object Store

IBM offers an S3 compatible Object Store as file storage. Besides S3, the storage can also be accessed via the SWIFT protocol by selecting a different deploy model. As the cost for this storage is extremely low compared to database storage, it is perfect for storing sensor data or other kinds of data for machine learning.

I use the storage, for example, to host my training data and trained models for Tensorflow. Access and payment for the Object Store are managed via IBM Cloud aka Bluemix. And as this offering is included in the Lite plan, the first 25GB are free. 🙂

As there is a problem getting the S3 credentials right now, I use the SWIFT access model. When you request the Object Store service, please make sure to select the SWIFT version to get the right access model.

Python libs

As the SWIFT protocol is part of openstack, the python access client can be found at https://docs.openstack.org/python-swiftclient. Depending on the security access model you also need the openstack Identity API (keystone). Both libs are on github (swiftclient and keystone) and also available via pip:
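```bash
pip install python-swiftclient python-keystoneclient
```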

Access storage

Inside the IBM Cloud web interface you can create or read existing credentials. If your program runs on IBM Cloud (Cloudfoundry or Kubernetes) the credentials are also available via the VCAP environment variable. In both cases they look like mine here:
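Roughly like this (all values are redacted placeholders):

```json
{
  "auth_url": "https://identity.open.softlayer.com",
  "project": "object_storage_xxxxxxxx",
  "projectId": "512abc...",
  "region": "dallas",
  "userId": "c1a2b3...",
  "username": "admin_...",
  "password": "********",
  "domainId": "d1e2f3...",
  "domainName": "123456",
  "role": "admin"
}
```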

The important pieces of information are projectId, region, userId and password. The access with keystone and the swift python client looks like this:
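A sketch using a keystone session (values taken from the credentials above):

```python
import swiftclient
from keystoneauth1 import session
from keystoneauth1.identity import v3

# values from the credentials above (placeholders)
auth = v3.Password(
    auth_url="https://identity.open.softlayer.com/v3",
    user_id="c1a2b3...",
    password="********",
    project_id="512abc...",
)
keystone_session = session.Session(auth=auth)
conn = swiftclient.Connection(
    session=keystone_session,
    os_options={"region_name": "dallas"},
)
```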

The version information is important, also as part of the authurl.

Accessing data

Objects can be read and written, and containers (aka buckets) can be read and modified as described in the documentation. For example:
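A sketch (container and object names are made up):

```python
# list all containers in the account
resp_headers, containers = conn.get_account()
for container in containers:
    print(container["name"])

# write an object and read it back
conn.put_object("training-data", "hello.txt", contents=b"hello object store")
resp_headers, body = conn.get_object("training-data", "hello.txt")
print(body)
```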


Dev-Ops with OTA updates for the ESP8266

Over the Air updates (OTA) for the ESP8266

esp

Thanks to the esp8266 project on github there is a very convenient way to update an ESP over the air. There are three different options available.

  1. The first one is via the arduino IDE itself, where the esp opens a port and is available for firmware upload just like with a serial connection. Very convenient if you are in the same network.
  2. The second one is via http upload: the esp provides a web server to upload the bin file. In this case there is no need to be in the same network, but it is still a push model and has to be done individually for each installed esp.
  3. The third one is the most convenient way for a bigger installation base, or in case the devices are behind a firewall (as they always should be) and no remote access is possible. In this case the device downloads the firmware itself via http(s) from a web server somewhere on the internet.

For a complete dev-ops pipeline, from pushing to a repository to flashing a device, the third scenario is the easiest one. So we need a place to store the binary files. For convenience I use amazon s3 to host my binary files, as travis easily supports s3 uploads. But it can be any internet platform where files can be stored and downloaded via http(s). The necessary code on the arduino side looks like this:
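A sketch of that function (the s3 URLs are placeholders, the ESP8266HTTPClient and ESP8266httpUpdate headers are assumed to be included, and the listing is laid out so the line numbers match the notes below):

```cpp
#define CURRENT_VERSION REPLACE_WITH_CURRENT_VERSION
#define FIRMWARE_BIN_URL "http://s3.amazonaws.com/my-firmware-bucket/esp-firmware/firmware.bin"
#define FIRMWARE_VERSION_URL "http://s3.amazonaws.com/my-firmware-bucket/esp-firmware/version"

void checkForUpdates() {
  HTTPClient http;
  http.begin(FIRMWARE_VERSION_URL);
  http.GET();
  String payload = http.getString();
  http.end();

  int latestVersion = payload.toInt();
  if (latestVersion > CURRENT_VERSION) {
    Serial.printf("Updating from %d to %d\n", CURRENT_VERSION, latestVersion);

    // downloads the new binary, flashes it and restarts the esp
    ESPhttpUpdate.update(FIRMWARE_BIN_URL);
  }
}
```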

This arduino function can be called from time to time (at startup, or every now and then on constantly running systems) to check for a new firmware version, and in case a new version is available, automatically flash it and restart.

  • Line 1 is a #define with a placeholder for the current version of the installed firmware. This placeholder is replaced in the build pipeline at travis with an increasing number. So the compiled code has something like 23 or 42 instead of REPLACE_WITH_CURRENT_VERSION.
  • Line 2 is the URL of the latest version of the firmware.
  • Line 3 is the URL of a file with only one line containing the latest build number.
  • Lines 7-9 load the version file from s3.
  • Lines 12-13 convert the file into a number which can be compared with the define from line 1.
  • Line 17 is the firmware update itself. A detailed description of the ESPhttpUpdate class can be found here.

There are two ways to check if there is a new version available and only flash when there is something new. The one we use here is our own mechanism. I do it this way because on s3 I can only host static files, therefore I place the latest build number in a static file next to the firmware itself. The other way is built into ESPhttpUpdate: the update function can be called with a build number, which is compared on the server, and the return code reflects whether there is a new version or not. In that case we would need a script on the server to check for it.

Get an increasing build version number

With a little bash script we load the last build number from s3 and then increase it in order to have the current number for our build:
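A sketch of the script (bucket and source file names are placeholders; the line numbers match the description below):

```bash
#!/bin/bash

curl -s https://s3.amazonaws.com/my-firmware-bucket/esp-firmware/version -o version
BUILD=$(($(cat version) + 1))

echo "Building version $BUILD"

mkdir -p upload
echo $BUILD > upload/version

sed -i "s/REPLACE_WITH_CURRENT_VERSION/$BUILD/" src/main.cpp
```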

This script loads the version file (line 3), increases the number (line 4) and patches our source code file (line 11) with this number instead of REPLACE_WITH_CURRENT_VERSION. After running this script the current source code contains the latest number, and the upload folder for s3 has a new file with the newest number in order to inform the polling ESPs.

Travis config file

Travis-ci is incredibly easy to use and very reliable for continuous integration. In combination with platformio it is very easy to compile arduino code for several types of hardware. Simply configure the hardware in the platformio.ini file:
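For the Huzzah it looks like this (the environment name is up to you):

```ini
[env:huzzah]
platform = espressif8266
board = huzzah
framework = arduino
```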

In this case we use the esp8266 feather board aka Huzzah. Just set the board to your kind of esp.

Travis itself is configured by the .travis.yml file in the root directory of your repository on github:
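A sketch of such a config (bucket name, script name and the wifi placeholders are assumptions; the line numbers match the notes below):

```yaml
language: python

python: 2.7

cache:
  directories:
    - "~/.platformio"

install:
  - pip install -U platformio

script:
  - mkdir upload
  - bash increase_build_number.sh
  - sed -i "s/WIFI_SSID_PLACEHOLDER/${WIFI_SSID}/" src/main.cpp
  - sed -i "s/WIFI_PASS_PLACEHOLDER/${WIFI_PASS}/" src/main.cpp
  - platformio run
  - cp .pioenvs/huzzah/firmware.bin upload/firmware.bin

deploy:
  provider: s3
  bucket: my-firmware-bucket
  access_key_id:
    secure: "ENCRYPTED-ACCESS-KEY"
  secret_access_key:
    secure: "ENCRYPTED-SECRET-KEY"
  skip_cleanup: true
  acl: public_read
  local_dir: upload
  upload_dir: esp-firmware
```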

  • Line 1: Platformio is based on python, so the build environment (although the code is c++) is python for maintaining platformio.
  • Line 3: Right now platformio is only available for python 2.7, so this line gets the latest stable version of python 2.7.
  • Lines 5-7: Restore the cache files from the last build in order to save compile time and reduce the costs for travis. As this service is free for open source projects it is always nice to save some money for the cool guys.
  • Line 10: Installs the latest version of platformio itself.
  • Line 13: Creates the upload directory which we will upload to s3 later on.
  • Line 14: Calls the build number increase and patch script.
  • Lines 15-16: Patch the wireless lan config in case it is not handled inside the arduino code itself.
  • Line 17: Calls platformio to download all libraries and compile the arduino code itself.
  • Line 18: Platformio generates a lot of files for the linker and several other purposes. We only need the bin file later on, so we copy it here to the upload folder.
  • Line 20: Travis has a built-in functionality to upload files after compilation. This is the part where we upload the files to s3.
  • Line 22: Defines the s3 bucket to upload the files to.
  • Lines 23-26: Provide the encrypted s3 credentials. See the travis documentation on how to create these lines.
  • Line 29: Defines the local folder to be uploaded. Otherwise travis would upload everything from the current run.
  • Line 30: Defines the folder in the s3 bucket where the files will be stored.

With these files in place, travis monitors your github repository and creates and uploads new firmware versions each time you push changes. The arduino code checks for new versions and updates itself as soon as a new version is available. A complete project can be found here in my github repository.


Optimize pictures for visual recognition with openCV and gimp

Visual Recognition

Watson result

Computer Vision or Visual Recognition is part of cognitive computing (CC) aka Artificial Intelligence. One of the main concepts is to extract information out of unstructured data. For example, you have a webcam pointing at a highway. As a human you see if there is a traffic jam or not. For a computer it's only 640x480x3x8 (7,372,800) bits. Visual Recognition helps you to extract information out of this data, for example "This is a highway". Out of the box, systems like Watson are able to tell you what you see in the picture. You can try it here: https://visual-recognition-demo.mybluemix.net. The result can be seen in the picture on the left. So Watson knows it is a highway, and even that it's a divided highway, but it does not tell you whether there is a traffic jam or even a blocked road. Fortunately Watson is always eager to learn; let us see how we can teach him what a traffic jam is. This article only focuses on the picture preparation part, not on training Watson. See the next postings for the Watson part.

Get pictures

There are many traffic cameras all around, but I am not sure about the licences, so it is hard to use them here as a demo. But let us assume we can take pictures like this one from Wikimedia: Cars in I-70. If you live in southern Germany there are nice traffic cameras from the Strassenverkehrszentrale BaWue. Unfortunately they don't offer the pictures with the right licence for my blog. If you know a great source for traffic camera pictures with the right licence, please let me know.

Prepare pictures for training

Visual Recognition works a little bit like magic. You give watson 100 pictures of a traffic jam and 100 without a traffic jam, and he learns the difference. But how do we make sure he really learns the traffic jam and not the weather or the light conditions? And, furthermore, only one lane in case the camera shows both lanes? So first we need to make sure we find enough different pictures of the road with a traffic jam under different weather and light conditions. The second part can be done with OpenCV. OpenCV stands for Open Source Computer Vision and helps you to manipulate images. The idea is to mask out the parts we don't want Watson to learn: in our case the second lane and the sky. We can use GIMP to create a mask that we apply with openCV automatically to each picture.

Gimp

GIMP-Layers

The first step, obviously, is to load the image in GIMP. Then open the layers dialog; it's located under Windows/Dockable Dialogs/Layers or cmd-L. Here we add a new layer and select it to paint on. Then we select the Paintbrush Tool in the tools menu and just paint the parts black that we don't want Watson to learn.

Blanked-image

Then we hide the original image by pressing the eye symbol in the layer dialog. This should leave us with only the black painting we did before. This will be our mask for openCV, to be applied to all pictures. Under File/Export you can save it as mask.jpg. Make sure it is only the black mask and not the picture with the black painting on top.

Use openCV in docker

As openCV is quite a lot to install, we can simply use it within docker to work with our pictures. We can mount host directories inside a docker container, in this case our directory with the pictures:
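Something like this, with the victorhcm/opencv image mentioned below:

```bash
docker run -it --rm -v $(pwd):/host victorhcm/opencv /bin/bash
```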

This brings up the openCV docker container from victorhcm and opens a shell with our current directory mounted under /host. As soon as you exit the container it will be removed because of the "--rm" parameter. Don't worry, only the docker container will be deleted; everything under /host is mounted from the host system and will remain. Everything you save in other directories will be deleted.

How to mask out part of the picture

The python program that uses openCV to mask out all pictures in a directory is then really easy:
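A sketch of it, laid out so the line numbers match the explanation below (I assume the mask is applied with a bitwise AND, which matches the described behaviour):

```python
import glob
import cv2

mask = cv2.imread("mask.jpg", cv2.IMREAD_GRAYSCALE)

# "pics" and "masked" have to exist before you start the script
for filename in glob.glob("pics/*.jpg"):
    image = cv2.imread(filename, cv2.IMREAD_COLOR)
    masked = cv2.bitwise_and(image, image, mask=mask)
    cv2.imwrite(filename.replace("pics", "masked"), masked)
```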

Basically the program iterates through all "jpg" pictures in the subfolder "pics" and saves the masked pictures under the same name in the "masked" folder. Both directories have to exist before you start the script. In order to keep the script reduced to the important parts, I left the directory creation and checking out of it.

Line 4

Loads the mask image as a grayscale image.

Line 8

Loads the image to work on as a colour image.

Line 9

Here the real work is done: this applies the mask with a bitwise AND of all pixels. Therefore the black areas win, while the rest of the mask lets the normal picture through.

Line 10

Saves the new masked picture in the "masked" folder.

Preselect pictures

For the learning process we need to sort the pictures by hand: one bucket with traffic jam and the other with no traffic jam (ok).


NB-IoT with the BC95 and arduino

NBIothack
Hardware hacking with the mobile c-lab

Last weekend the c-base and friends team had a great time doing some hacking with Narrow Band IoT (NB-IoT) from Deutsche Telekom at the nbiot-hackathon at hub:raum. We could put our hands on the BC95 chip, a dev board and access to the Telekom test network. Besides hacking we had a great time with funny hats and our obligatory overalls.

The BC95 Board

The board we used was the BC95-B8, mounted on a development board with a support controller and serial converter. Besides the board setup we also soldered a PCB with some sensors and a Teensy board to control the BC95 and the sensors.

Wiring

Pin-Connection
Pin-Connection

The dev board itself has an RS232 converter to give access via "normal" RS232 connectors. This is convenient for older laptops or desktops, but luckily they also give you access to the same UART interface at 3.3V levels. The pins are pre-soldered on a 10 pin header, so it's easy to connect this to an arduino or Raspberry Pi via the serial connection. As you can see in the picture, it is pins 1, 2 and 6. No level converter necessary.

AT Command Set
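The session we used looked roughly like this (module responses abbreviated; the NCDP address and the APN are examples from the hackathon setup, so treat them as assumptions). The line numbers are referenced below:

```
AT
AT
AT+NRB

REBOOTING
OK
AT+NBAND=8
AT+CGDCONT=1,"IP","internet.nbiot.telekom.de"
AT+NCDP=172.25.102.151
AT+CFUN=1
AT+CGATT=1
OK
AT+CGPADDR=1
+CGPADDR:1,10.100.0.16
AT+NPING=8.8.8.8
AT+NSOCR=DGRAM,17,16666,1
AT+NSOST=0,8.8.8.8,16666,12,48656C6C6F20776F726C6421
```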

Line 1,2

The serial protocol can be any speed at 8N1; the board auto detects the speed at the first communication, therefore you need to send the AT command several times (normally 2) to set the speed. Normally the first AT command is answered with ERROR and the second one with OK. Make sure you get an OK before you continue.

Line 3

In order to get a clean setup we first reboot the board with "NRB". It takes some seconds and the board will come back with OK.

Line 7

Depending on your network you need to set the band with "NBAND". The Telekom test network is on 900 MHz, so we go with 8. Other bands are 5 (850 MHz) and 20 (800 MHz).

Line 8

Depending on your setup and provider you need to set the APN with the CGDCONT command.

Line 9

Connect to the IoT Core

Line 10

Power on the module

Line 11

Connect to the network. This can take several seconds, even a minute. You can always check the connection with "at+cgatt?" and look for "+CGATT:1". You can double check the existing connection by asking for the IP address which was assigned to you, by sending "at+cgpaddr=1" to get, for example, "+CGPADDR:1,10.100.0.16".

Line 15

Ping a server to test that everything is fine (in this case the google DNS server).

Line 16

Open a UDP socket to receive answers. In our case the DGRAM and 17 (the IP protocol number for UDP) are mandatory, but the port you are using (in our case 16666) is up to you.

Line 17

Send your UDP data package. The first parameter is the socket (0). The second one is the address you want to send the data to (ip or name). The third one is the receiver's port (16666). The fourth is the amount of data you want to send (keep it below 100 bytes), and the last one is your data in hex notation. I recommend asciitohex.com to convert the string you want to send.

Arduino

This arduino code is really a fast and ugly hack from the hackathon, just to send out the data. It does not listen for the AT responses or anything else. So this is only an example of how NOT to do coding, but it worked for the hackathon.
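In that spirit, a sketch of what such a blind-firing hack can look like (pins, timings, APN and target address are assumptions):

```cpp
// Quick and dirty: fire the AT sequence over Serial1 (Teensy UART wired
// to the BC95 dev board) without parsing any responses.

void sendCommand(const char *cmd) {
  Serial1.println(cmd);
  delay(1000);  // fire and forget, no response handling
}

void setup() {
  Serial1.begin(9600);
  sendCommand("AT");
  sendCommand("AT");    // second AT, the module has auto-bauded by now
  sendCommand("AT+NRB");
  delay(5000);          // give the module time to reboot
  sendCommand("AT+NBAND=8");
  sendCommand("AT+CGDCONT=1,\"IP\",\"internet.nbiot.telekom.de\"");
  sendCommand("AT+CFUN=1");
  sendCommand("AT+CGATT=1");
  delay(10000);         // attaching can take up to a minute
  sendCommand("AT+NSOCR=DGRAM,17,16666,1");
}

void loop() {
  // "Hello world!" in hex, sent as a UDP packet every minute
  sendCommand("AT+NSOST=0,8.8.8.8,16666,12,48656C6C6F20776F726C6421");
  delay(60000);
}
```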