Horus 64B: What does ALSA audio have to do with high altitude balloons???

The Amateur Radio Experimenters Group (AREG) decided to plan a winter balloon launch, something they typically don't do due to the unpredictable weather and cloud cover.

One of the ideas for the winter launch was a crossband FM repeater; the increased winds towards Victoria would mean a larger coverage area for possible interstate communication. The other (and more interesting to me) benefit was real-world testing of the webhorus and webwenet sites, which simplify decoding of the two balloon protocols. While others had used the apps and reported success, I had only ever tested them in my lab. It's a bit of a weird experience to have never used my own application in a production setting.

The first possible launch date got pushed back due to bad weather. The second launch window wasn't looking much better, and if that was cancelled it was unlikely that the launch would go ahead until much later in the year.

Predictions showing a very long drive from Adelaide into Victoria

The predictions were showing a longer distance than we could travel (legally) during the flight time, meaning we likely wouldn't be near the landing site on touchdown. The chance of recovery would be lower.

Keen to still test my software, I asked Mark whether, if I provided a small low-cost payload, we could fly something.

I rushed together the parts I had for a Wenet transmitter. Given the main payloads weren't flying, maybe I should try something a bit more experimental in nature.

As AREG is in Adelaide and I'm in Melbourne, it also meant driving the 8ish hours across. It was agreed that I would build up the Pi / transmitter, and the box, antenna and camera would be installed when I arrived - leaving only a few hours in the evening for that integration to happen before launch the next day.

I2S PCM Audio

While developing webwenet I learnt a lot more about how Wenet worked. One thing that really stood out was that Wenet receivers need to remove RS232 framing from the data before decoding the packets. This initially struck me as odd. What does RS232 have to do with this protocol?

Today most Wenet transmitters use an RFM98W, usually on some LoRa shield/package. The UART pin from the Raspberry Pi is connected to DIO2. The chip is configured for a 2-FSK direct mode, bypassing pretty much all the smarts of the chip itself. If the DIO2 pin is high it transmits one tone, and if it is low it transmits the other.

Since the UART is used to transmit the packet that means that RS232 framing is also transmitted.
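
As a quick illustration of the cost: RS232-style framing wraps every byte with one start bit (0) and one stop bit (1), with the eight data bits sent LSB first - so each 8-bit byte takes 10 bits on air. A tiny Python sketch:

def frame_byte(byte):
    bits = [0]                                   # start bit (space)
    bits += [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits += [1]                                  # stop bit (mark)
    return bits

print(frame_byte(0x55))  # [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]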

Chart showing the start and stop bits of an RS232 transmission

I had a quick look at the Raspberry Pi datasheet for the UART and didn't really see an obvious way of disabling the start and stop bits. It would be possible to bit bang the data out - but timing is important for us on RF, so I didn't explore this option.

The Raspberry Pi however has a ton of different interfaces - maybe another one would work better without the framing. My first thought was that maybe we could just bit bang a GPIO. The Pi is pretty fast, and we could use some kernel features to make that work.

My next thought was along the lines of SPI, but Droppy reminded me that the Pi has I2S output - which is IMHO the classic bit banging target.

Well before the Horus flight was even planned, I prototyped an I2S version which emulated the UART (basically adding back in the RS232 framing) to compare in the lab.

Now the proper way to implement this is probably to access the I2S interface directly; however, the easiest approach for me was to use the RaspberryPi_I2S_Master device tree driver, which adds a generic sound device. This is expected to be tied to an audio chip - but we just ignore that part. From the code point of view we import alsaaudio and generate the correct audio frames.
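
As a rough sketch of the Python side (the device name and buffer size here are placeholders, not the exact Wenet code):

import alsaaudio

# Open the I2S "sound card" that the device tree overlay exposes.
pcm = alsaaudio.PCM(alsaaudio.PCM_PLAYBACK, device="hw:0,0")
pcm.setchannels(2)
pcm.setrate(48000)
pcm.setformat(alsaaudio.PCM_FORMAT_S16_LE)
pcm.setperiodsize(1024)

# The "samples" are really our FSK bit pattern - ALSA neither knows nor cares.
pcm.write(b"\xff\xff\x00\x00" * 512)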

Some complications show up though. The Linux simple-audio-card driver seems to have a fixed set of sample rates and channels. I don't think there's a specific reason this needs to be the case, but without building a new kernel module it means a little bit of extra work. Channel-wise it doesn't matter so much - we send the samples irrespective of which channel is selected (that pin gets ignored entirely) - but we do have to factor the channels into our data rate.

So with only a handful of sample rates to choose from, how do we get the bit rate we desire? At startup we calculate the desired RF bitrate and then work out multiples of the audio sample rates.

For example - if the audio sample rate is 48000, the number of channels is 2, the audio width is 16 bits, and the desired RF baud rate is 96000, then the audio bit rate is 48000 × 2 × 16 = 1,536,000 bits per second. Dividing by the RF baud rate gives 1,536,000 / 96,000 = 16 audio bits, or 2 bytes, per RF bit. So for every “1” bit we send two bytes - 0xffff - and for every “0” bit we send 0x0000.
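
In code the expansion is tiny. A minimal sketch (the names are mine, not the actual transmitter's):

SAMPLE_RATE, CHANNELS, WIDTH, RF_BAUD = 48000, 2, 16, 96000
audio_bps = SAMPLE_RATE * CHANNELS * WIDTH    # 1,536,000 audio bits/s
bytes_per_rf_bit = audio_bps // RF_BAUD // 8  # 16 bits -> 2 bytes

def bits_to_frames(bits):
    # Each RF bit becomes a run of identical bytes at the audio rate.
    return b"".join(
        (b"\xff" if b else b"\x00") * bytes_per_rf_bit for b in bits
    )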

In testing however I noticed that my transmitter just wouldn't work when running the software. It turns out the Wenet TX software had an LED configured on the same GPIO that is used for I2S PCM out on the Pi. Annoyingly, once this GPIO had been set the Pi had to be rebooted before the I2S output would work again!

To make use of the advantages of the I2S approach I needed to write new transmitter code which didn't include the RS232 framing, and update both the existing Wenet receiver software and my webwenet code. This was a big undertaking so close to the flight deadline, but we managed to do it.

By removing the RS232 framing we remove the 20% overhead from start and stop bits (2 framing bits for every 8 data bits). This allowed us to pick a lower RF bandwidth while still being slightly faster than the original system.

Modulation testing

It's all fine doing this locally where SNR is high, but we need to make sure that both the modulator and demodulator are working correctly. Subtle errors in things like the parity code, timing mistakes or off-by-one errors won't be apparent until low SNR.

To do this Mark built some benchmarking scripts. These take a high SNR recording, then generate lower SNR samples. The lower SNR samples are fed back into the demodulator and a count of the packets decoded is taken. With the low density parity check (LDPC) code and the quality of the modem we expect a fairly sharp fall-off. The nice thing about these scripts is that they normalise for Eb/N0 (SNR per bit). This means that even though our baud rate is different we can still compare the new I2S method to the old UART method.
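
The core of that normalisation is scaling the added noise relative to the measured signal power. A sketch of the idea (not Mark's actual script) using numpy:

import numpy as np

def degrade(samples, ebno_db, bit_rate, sample_rate):
    """Add calibrated noise to complex baseband samples, treating the
    high-SNR recording's power as pure signal power."""
    ps = np.mean(np.abs(samples) ** 2)
    # Convert Eb/N0 to SNR in the full sample bandwidth: SNR = Eb/N0 * Rb/Fs
    snr = 10 ** (ebno_db / 10) * bit_rate / sample_rate
    pn = ps / snr
    noise = np.sqrt(pn / 2) * (
        np.random.randn(len(samples)) + 1j * np.random.randn(len(samples))
    )
    return samples + noise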

SNR plot with a very slow gradient instead of a sharp cut off compared to the UART method

When we run this with I2S we see a similar dB threshold where all the packets are received, however there's a bit of a weird slope. We expect that, due to the parity check, you either receive a packet or you don't, so the dB difference around that threshold point should be small. This led us to believe that the testing wasn't working right.

Now one thing that we noted when switching to I2S is that the UART method didn't include any sort of scrambling or whitening. Scrambling is used to ensure that there isn't a long run of zeroes or ones, which could cause the modem to lose timing and become unsynchronised. For the UART mode this wasn't a big concern, because the RS232 framing meant that the start and stop bits would always provide a 0 and a 1. Switching naively to I2S meant that we lost this free bit flip.

The theory I have at the moment is that the long runs of 1s or 0s in the I2S approach break the normalisation scripts. Adding in scrambling, we get a much sharper cut-off, as expected.
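
For the curious, a multiplicative (self-synchronising) scrambler is only a few lines. This sketch uses the polynomial x^17 + x^12 + 1, familiar from G3RUH packet radio modems, purely as an example - Wenet's actual choice may differ:

def scramble(bits):
    state = 0x1FFFF  # 17-bit shift register of previous output bits
    out = []
    for b in bits:
        s = b ^ ((state >> 16) & 1) ^ ((state >> 11) & 1)
        state = ((state << 1) | s) & 0x1FFFF
        out.append(s)
    return out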

Same chart but with scrambling showing a sharp cutoff

Dual mode

While I was pretty confident in the I2S approach, we wanted to be able to compare the two modes in a real-world setting. A normal person might just fly two payloads or have two transmitters. Instead, however, I decided that it was likely possible to have the software switch between the two modes.

To do this I soldered diodes to the output pins on the Pi and fed both UART and I2S to the RFM98W module. With the hardware done, we need to look at the software. The first problem is that the UART transmitter needs to disable its output when it's not active. To do this we use the break_condition attribute in pySerial. I2S luckily sits low normally, so nothing needed to be done there. Finally we need to switch between the two modes in the transmitting software - which it was never really built for. I hacked in the functionality to have a list of radio modules which are cycled, only when the radio is in an idle state, after a period of time.
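
Roughly, the switching logic looked like this (device path, timing and the idle check are illustrative placeholders):

import time
import serial

uart = serial.Serial("/dev/ttyAMA0", 115200)
modes = ["uart", "i2s"]
active = 0

def radio_idle():
    return True  # placeholder: the real code checks the transmit queue

def set_mode(mode):
    # Holding break keeps the UART TX line low, matching the I2S idle
    # state, so the diode-OR only sees the active transmitter.
    uart.break_condition = (mode != "uart")

set_mode(modes[active])
last_switch = time.monotonic()
while True:
    if radio_idle() and time.monotonic() - last_switch > 60:
        active = (active + 1) % len(modes)
        set_mode(modes[active])
        last_switch = time.monotonic()
    time.sleep(0.1)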

The result was that the radio switched between modes roughly every 1-2 pictures.

Testing deadlines

I moved my code over to the target Pi Zero 2 W, and went to plug in my PiCam only to find that the camera connector is different between the Zero and the full-size Pi. From a “launching the balloon” point of view it wasn't going to be a problem, as the camera was being supplied by Mark; however from a testing point of view it meant I wasn't going to be able to test that the camera functions. I rushed off an order to Amazon for a camera module that did have the correct cable.

In the meantime I tested without the camera module. I had some extremely weird issues: file descriptors were being opened, and after about an hour the software would crash due to reaching the ulimit. Debugging the issue, I couldn't determine what was causing it - I thought it was caused by swapping between the two radio modes, but that didn't seem to be the case.

I tried a bunch of Python debugging tools to see the cause but couldn't nail it down to any Python code. It seemed to be occurring in the picamera2 Python module - file descriptors were being leaked due to no camera being connected - but I wasn't entirely certain. To alleviate the problem I bumped up the max number of open files in ulimits, however it turns out that select is limited to 1024 file descriptors anyway - so that option didn't work.

I quickly added a shell script to check the open file descriptor count and put it into the watchdog.d config. If there were too many fds it would reboot and everything would be fine. It'll do.
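
The actual check was a shell script, but the same idea sketched in Python as a watchdog.d test looks like this (pidfile path and threshold are made up):

#!/usr/bin/env python3
import os
import sys

PIDFILE = "/run/wenet.pid"  # hypothetical pidfile location
LIMIT = 900                 # trip well before select()'s 1024 fd ceiling

pid = int(open(PIDFILE).read().strip())
fds = len(os.listdir(f"/proc/{pid}/fd"))
sys.exit(1 if fds > LIMIT else 0)  # non-zero tells watchdog to act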

Integration woes

Arriving in Adelaide I passed the payload to Mark who quickly fitted it into the box, added the camera and we fired it up.

styrofoam box with GPS, Pi and batteries

Two issues presented themselves. The GPS didn't stay locked / struggled to lock. I had seen some of these issues in my testing - we're not entirely sure why my u-blox 6 chip wasn't working correctly, as ubxtool showed correct data. The current suspicion is that the code was written for the u-blox 7 and isn't happy with some of the data. The other factor is that I was using soft serial (as the UART was taken for the I2S transmitter). After some quick debugging we opted for replacing it with a known-good USB GPS module.

More concerning was that with the new camera connected, the Pi rebooted shortly after initial boot-up. We believe this was a kernel panic, but with no swap set up to take coredumps we don't really have any logs of it. I believe it was to do with the Pi Camera 3 module, as I didn't see the reboot happen in my testing.

The reboot didn't happen often and the system started up again just fine, but it's not something I really like to see before launch. The picamera2 GitHub is full of reports of people having random issues with the library. I even saw some cases where Python crashed somewhere it practically couldn't, and I think this was also caused by the picamera2 library - my guess is that it lacks thread safety, with incorrect or missing locking.

I put in some extra error handling. We did a few tests and it seemed fairly stable, even if it did reboot. However it exposed a known limitation in webwenet. The Wenet protocol sends images with a header byte, followed by a byte for the image ID. Restarting the payload resets the image ID. When webwenet receives an image it combines all the packets for that image based on the image ID. With the image ID resetting, existing images were being updated in place rather than new ones being added in the UI. So a last-minute patch/hack was added so that users wouldn't get confused if the image ID was reset.
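
The hack amounts to keying images by more than just the on-air ID. Something along these lines (illustrative, not the actual webwenet code):

class ImageStore:
    def __init__(self):
        self.session = 0  # bumped when the payload seems to have rebooted
        self.last_id = -1
        self.packets = {}  # (session, image_id) -> list of packets

    def add(self, image_id, packet):
        if image_id < self.last_id - 16:  # counter reset: payload restarted?
            self.session += 1
            self.last_id = image_id
        else:
            self.last_id = max(self.last_id, image_id)
        self.packets.setdefault((self.session, image_id), []).append(packet)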

Launch day

Selfie with me and the payload. The payload has a SondeHub sticker, an “its heckin windy” sticker and an xssfox sticker

We arrived at the launch site. The car was prepped for receive mode and the balloon was filled. Up until now we hadn't really decided if we wanted to chase the balloon, or if we wanted to get to a higher location to have a better chance of receiving as much data as possible. We opted for the high location and would attempt recovery on our drive home the following day.

Many hands were required to handle the balloon during filling and tying off, but eventually it was filled and launched.

Balloon being pulled sideways by a gust of wind

Droppy drove while Alex and I monitored the balloon and receivers. One of the things I wanted to test with webwenet was lowering the entry requirements for receiving. What would be the minimal equipment needed to receive?

For that I went with:

  • LNA4ALL - €23 + shipping
  • RTL-SDR $33.95 USD
  • AliExpress 7 Element Yagi - $93 AUD
  • AliExpress 3dbi “short” vertical antenna - $15 AUD

The LNA4ALL was chosen as it can be powered directly from the RTL-SDR using the Bias-T option. The LNA4ALL does need a small modification to allow this.

I created a few little brackets for our car's roof rack that mounted the LNA close to the antenna.

Bracket holding the antenna and LNA on the car roof rack

The small vertical antenna is used while driving as it provides low gain. A higher gain antenna would result in a narrower beam, which isn't very useful if the balloon is high up.

Once we arrived at the high location we switched to the yagi antenna which provided a reasonably good SNR throughout the rest of the flight.

Car parked with yagi antenna mounted to the rear tyre carrier pointing at the payload

This setup worked fairly well and was pretty on par with some of the bigger setups we saw on the day.

Screenshot of webwenet being used to receive images from the payload. The payload is nearly 4km up at this point

Fun with phones

One of the fun things with webwenet and webhorus is that you can load them up in mobile phone browsers. This meant that we could hook a mobile phone up to a many-element yagi - something that just seems extremely ridiculous.

Droppy with their phone attached to a very very long yagi receiving images from the payload

Pi Camera 3 Focus issues

As images started streaming in, it was obvious that a long running issue with the Pi Camera 3 had struck us: out of focus images. The current suspicion is that the payload is moving too much for the Pi Camera 3 to obtain proper focus, which is a shame because some of these pictures would have been stunning. Regardless, I still think they look pretty good.

A picture from the payload pointing down at Mt Barker. Many buildings visible

The highest picture was recorded at 20,116m.

Mostly clouds with a slight curvature of the earth

Landing and recovery

The balloon landed over 250km away. The last reported altitude was 1,083m.

Screenshot of sondehub showing the telemetry of the last packet.

During the descent we noticed a rapid change in descent rate. This was an indication that the parachute had failed. The parachute was bright red and was going to be key to finding the payloads.

Graph showing descent rate from 7.5m/s to over 10m/s

The last reported position being so high up makes for a very large search area. Local ground winds play a huge part in the final landing location, so it was going to be a challenge to recover. Probably unlikely, given that the transmitter's batteries would now be flat, plus the uncertainty over whether the balloon still had a red parachute to spot.

Arriving at the predicted landing spot, we found farmland covering the repeating dunes. Luckily, being winter, the fields were bare. We stopped on the second dune near our suspected landing location and walked across to the next. Not even sure which field it had landed in, we kept scouting around. Part way up the next dune I spotted in the distance a red patch with two white items near it. It was a long way away, but given it was unlikely that the field would otherwise have something like that in the middle of it, I was pretty certain it was the payload.

Balloon, parachute, and two payloads in the field

Walking across the field I arrived at the landing site. It was a surprise to recover the payloads with the data we had, let alone find them so easily. I guess one advantage of winter launches is empty fields. Enjoy the time lapse created from the remaining battery on the image transmitter payload.

After bringing the payload back we opened it up and turned it back on to get a group photo before heading back home.

Group photo of Alex, Droppy and myself

The final verdict on I2S

It performed no worse than the UART version. The theoretical performance increase from the lower RF bandwidth requirement isn't actually much - to the point that it's within the testing noise. We do know that the data rate is slightly faster though. The other advantage is that it frees up the Raspberry Pi hardware UART for other tasks. It sounds like AREG will be flying I2S Wenet going forward, after the success of the dual mode payload.

Thanks

Many thanks to AREG, Mark, Droppy, Alex and all the receivers, who let me fly this payload, helped with recovery and put up with my bullshit.


Bringing the Smart Pacifier to sports

This post is going to be unapologetically weird. No one is forcing you to read this.

I don't think there are enough proper dummy spits1 in cycling. After a lot of thinking about this, I think I know why: cyclists love data.

Many cyclists have a cycling computer giving them data like:

  • Speed
  • Lap time
  • Gradient
  • Temperature
  • FTP

Then there are the additional sensors:

  • Heart rate
  • Cadence
  • Wheel speed
  • Power
  • Left/right balance
  • HRV

What does this have to do with a dummy? Well, up until recently there wasn't any way to record smart pacifier data for sports activities. In fact, there wasn't even a smart pacifier. Who's going to ride a bike while sucking on a pacifier if it can't give you any juicy data and stats?

I was extremely excited and very privileged to receive a prototype Smart Paci from Curious Inventions. The device can monitor bite strength along with 4 capacitive touch sensors on the sides, and reports over Bluetooth Low Energy.

Picture of the smart paci, white adult sized paci that’s been 3d printed.

You have a crank power meter, I have a smart paci. We are not the same.

The Smart Paci provides us with an opportunity to bring genuine dummy spits into the world of sports, but alas cycling computers and sports trackers have no native support for this new sensor.

Luckily Garmin Connect provides support for 3rd party apps and integrations through its IQ SDK. IQ apps are coded in “Monkey C”, which is weird… (I did warn you).

For example:

ByteArray objects are fixed size, numerically indexed, single dimensional, and take Numbers with a value >= -128 and <= 255 as members.

One of my frustrations with Monkey C / the IQ SDK is that you have to do a lot of the heavy lifting. For example, to send BLE messages you need to implement your own queuing system. This is especially annoying when it comes to layout. You'd think that if I have a sensor value there would be something generic I could extend to make my data field look identical to every other one - but it seems not. Want to make a chart that looks like the Garmin ones? I think you're on your own. Then you have to test the layouts on all the devices…
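
The queue itself is simple enough - it's just tedious that you have to write it. In Python-flavoured pseudocode (the real thing is Monkey C, with requestWrite standing in for Connect IQ's characteristic write call):

from collections import deque

class BleWriteQueue:
    def __init__(self, characteristic):
        self.char = characteristic
        self.pending = deque()
        self.busy = False

    def send(self, payload):
        self.pending.append(payload)
        self._pump()

    def _pump(self):
        # Only one write may be in flight at a time.
        if not self.busy and self.pending:
            self.busy = True
            self.char.requestWrite(self.pending.popleft())

    def on_write_complete(self):
        # Called by the BLE delegate when the previous write finishes.
        self.busy = False
        self._pump()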

A list of garmin devices to select from which doesn't fit in the screenshot

All this is to say, I've made (well, hacked together - it's not my proudest code) a Garmin Connect IQ data field for the Smart Paci. The field itself is very basic: it connects to the device and displays the bite sensor. It'll flash if a touch sensor is pressed (maybe later I might make one of these trigger a new lap or something?).

Smart Paci next to the Garmin Edge Explore bike computer, showing the smart paci data field

But importantly, the data is recorded into the .fit file, and when it's automatically synchronised to Garmin Connect the data is viewable like any other critical stat you find in an activity.

Garmin connect screenshot showing lots of running stats including the Smart Paci bite sensor

I took the Smart Paci out for a ride and used it for a kilometre-long segment. It was surprisingly comfortable while riding. As expected, the data field recorded how hard I bit, which I believe is an important metric when you're biting down on that steep hill or Strava segment.

Selfie of me on my bike with smart paci

It's certainly not just limited to cycling though, and works just as well for running.

Selfie of me running with the Smart Paci

The Connect IQ app/datafield is published on the Garmin App Store (and is currently featured on the homepage…) - however you will need a Smart Paci from Curious Inventions.


Limitations

  • Currently it pairs to the first Smart Paci it sees, so if you have multiple this might be a headache
  • My prototype Smart Paci generates a new MAC address on boot, so the datafield won't reconnect after a power cycle. For long rides where you might have a coffee break, remember to keep the Smart Paci on so the Garmin can reconnect to it
  • Datafield recording in Garmin doesn't provide feedback as to when/which value was recorded. This makes it a little tricky to decide what to write to the field and when. At the moment I select the max value roughly once a second (see the sketch below this list). The current Smart Paci firmware seems to only send data when it changes, so it's possible that the resulting “0” value might not get recorded. In practice I don't think this is a problem, as there's usually a changing value and you can determine on the chart if there's an actual data point, but it does have the potential to be misleading (you'll notice a long section of the same value)
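
For reference, the max-hold logic is roughly this (sketched in Python; the real datafield is Monkey C and the names are mine):

import time

class BiteSampler:
    def __init__(self):
        self.peak = 0
        self.last_write = time.monotonic()

    def on_ble_update(self, bite):
        # The firmware only notifies on change, so hold the peak between writes.
        self.peak = max(self.peak, bite)

    def tick(self, write_fit_field):
        if time.monotonic() - self.last_write >= 1.0:
            write_fit_field(self.peak)  # record into the .fit file
            self.peak = 0
            self.last_write = time.monotonic()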

  1. Apparently “dummy spit” is an Australianism. It means to have a tantrum / act like a child. A dummy is what we call a pacifier. ↩︎


Low budget running timing gates (Part 1?)

I am writing today lying on the couch, both a little bit sick and with a sore knee (best to double these things up to save on recovery time, right?). This does present me with an opportunity to write about my budget running timing gate project.

At large running, cycling, and probably countless other events, participants are usually given a bib containing their name / race number. Inside this bib is a little UHF chip/tag which transmits as it crosses a gate such as the start or finish line. The gates themselves are usually a mat that contains the antenna. Many races have multiple gates, and recently they have provided websites to monitor the progress of participants throughout the event.

Timing chip on the back of a bib

Prior to messing up my knee, then falling off my bike… again, I regularly took part in a run club near me. The run involves following the river then a short city section back to the start, totalling about 7.5km. It's a really lovely run with a small group of people. At one point we joked about people cutting one of the corners and how they should be penalised. It got me thinking - could we build a low budget gate system for fun? Even +/- 10 seconds is going to be fine for this.

So when we think about these gates there are three main components: the gate, the tag, and the server/site. I wanted this to be super low budget, so for the tag I decided to pick something that we all already had - smart watches. In this case specifically Garmin, however I think this approach could work for others as well. The idea here is to set up the watch to broadcast heart rate over Bluetooth. The gate would monitor Bluetooth signal strength and record a crossing at the point where the signal is strongest.
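
Conceptually the gate just tracks the peak RSSI per MAC address and, once a device has gone quiet, reports the time of that peak as the crossing. A sketch (the window length and names are my own):

QUIET = 30  # seconds without packets before a crossing is finalised

best = {}       # mac -> (rssi, timestamp of strongest signal)
last_seen = {}  # mac -> timestamp of most recent packet

def on_advertisement(mac, rssi, ts):
    if mac not in best or rssi > best[mac][0]:
        best[mac] = (rssi, ts)
    last_seen[mac] = ts

def finalise(now):
    for mac in [m for m, t in last_seen.items() if now - t > QUIET]:
        rssi, ts = best.pop(mac)
        del last_seen[mac]
        print(f"crossing: {mac} at {ts}")  # the real gate queues a LoRaWAN uplink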

Graph showing bluetooth SNR for two mac addresses and lap activation. The blue mac address matches up with the lap activation

Some quick tests showed that this approach was possible. The above diagram shows my signal strength (in blue) and, in yellow, when I pressed the lap button as I crossed the gate. Someone else (red) happened to be running at the same time and was accidentally included in the experiment - I guess they had heart rate transmitting turned on.

For the gate, the constraints are even harder. It needs to be installed in a public space, ideally without asking for permission, not get in the way (eg no mat), be able to be left unattended (so not expensive if stolen/lost), and have some form of live communication. My first idea here was cheap smartphones. However I had concerns about them being stolen, and about the cost of mobile plans. My next thought was using LoRaWAN devices. Originally I went with a LILYGO TTGO but switched to a T-Beam.

The T-Beam gives me GPS (good for time!), WiFi/Bluetooth, a LoRa transmitter, an 18650 battery, and a little display for diagnostics. These are used a lot by Meshtastic folk, so there are lots of 3d printed case designs as well.

TBeam transmitter cable tied to a chain fence

I wasn’t sure if I wanted to hide these or make them look like they belong there.

In terms of communication I've been using The Things Network. The Helium network would likely provide more coverage, but fuck crypto. As a user crosses through the gate it records the time (this is why GPS was handy) and MAC address. After a waiting period it'll transmit the seen Bluetooth MAC address and the time of the strongest signal. The Things Network receives this and triggers a webhook for my API.

Fun little side note here. My backend API doesn't have any real code - there's no Lambda function or container running. It's just API Gateway mapping the request straight into a DynamoDB request.

Screenshot showing API gateway mapping to DynamoDB
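
In effect the mapping template builds a DynamoDB PutItem request straight from the webhook body. The boto3 equivalent of what the mapping produces would look something like this (table and attribute names are made up):

import boto3

dynamodb = boto3.client("dynamodb")

def record_crossing(gate_eui, mac, ts):
    dynamodb.put_item(
        TableName="timing-crossings",  # hypothetical table
        Item={
            "mac": {"S": mac},
            "gate": {"S": gate_eui},
            "ts": {"N": str(ts)},
        },
    )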

Before we get too far I wanted to talk about privacy. This system doesn't really have an opt-in function, apart from turning Bluetooth on or off on your smart watch. Any Garmin MAC address detected will get forwarded over The Things Network to my backend. The Things Network traffic is encrypted, so other people sniffing around won't be able to see that data. I've had a think about how this might be improved, and the best I can come up with is a sort of registration process, where a user might sign up and register their MAC address with the service ahead of time - possibly even requiring them to register at a specific gate. Because this is just a fun little project with friends I haven't implemented any of that, but I thought it would be worth mentioning in case anyone wanted to build this into a bigger system.

4 3d printed tbeam enclosures with antennas

So I built four of these little gates. I think four is sort of the minimum you can get away with for this system. You probably need one for the start, and unless your start is also the finish, you need one for the finish. Then you need one fairly close to the start so you have an initial pace estimate (I always wondered why there was a gate really close to the start at Run Melbourne), which leaves one more for a half-way point.

Alex running in Burnley park

I haven't had much luck actually doing practical tests with these, but on the weekend I got the opportunity. Alex donated her time to help me, and I even did two laps of the test course myself.

Screenshot of the website showing a table of pace times and a map showing estimated location

So this is what the website looks like after the run. While running, the red marker indicates the estimated runner position, and lap times are updated once received. The route is programmed in via a small JSON config file like so:

{
    "name": "Burnley Park",
    "gates":{
        "eui-70b3d57ed00687e9": {
            "name": "Start / Finish",
            "lat":-37.82511038348801,
            "lng": 145.01353745417688
        },
        ....
    },
    "route": {
        "path": [
            {"gate": "eui-70b3d57ed00687e9", "distance": 0},
            {"gate": "eui-70b3d57ed00695ea", "distance": 220},
            ....
            {"gate": "eui-70b3d57ed00687e9", "distance": 273}
            
        ]
    }
}

And a GeoJSON file is used to provide the route for the map.

As you can see in the screenshot, a number of gates were missed. One of the gates had a failed battery when we went to do the test, so we could only use 3 gates. One of the gate positions didn't have any The Things Network coverage, so it never reported crossings. And just to make things worse, someone parked a car right where the start/finish gate was located. This stress-tested the algorithm used for lining up the gate crossings with the route - something that has taken a lot of thinking. Handling missing gate crossings can be a bit tricky to get right.
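
The rough shape of that algorithm: walk forward along the configured route, and for each observed crossing scan ahead to the next place that gate appears, treating anything skipped over as missed. A simplified sketch:

def align(route_path, crossings):
    """route_path: gate IDs in expected order.
    crossings: (gate_id, timestamp) pairs sorted by time."""
    i, out = 0, []
    for gate, ts in crossings:
        j = i
        while j < len(route_path) and route_path[j] != gate:
            j += 1
        if j < len(route_path):
            out.append((j, gate, ts))  # gates i..j-1 were missed
            i = j + 1
        else:
            out.append((None, gate, ts))  # can't place it: spurious or lapped
    return out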

Alex and myself running in the park

Even though not many data points were recorded in this test, it was still a good outcome showing that the system can work. The coverage issue is somewhat known: I was running a gateway recently but didn't like the overall setup, so I'm in the process of rebuilding it, which should improve coverage on that specific test track. Looking forward to trying this on a much longer track when I'm up for long runs again.