WSJT-X SuperFox Verification is flawed

WSJT-X has published a release candidate that includes a new fox mode called SuperFox, which promises a +10dB total system gain compared to the old fox mode. It also comes with a “SuperFox digital signature” in an attempt to deter dx-pedition pirates. Verification of dx-peditions is an excellent idea, and I really want to see this problem solved.

A screenshot of WSJT-X showing a VK callsign being used in SuperFox mode - freetext says “OC-001 Ultrarare DxPed”

There are a few problems with the SuperFox signature system, however. Let’s talk about them.

How do SuperFox signatures work?

The basic flow is this:

  1. Before a dx-pedition, the group running the dx-pedition applies to the Northern California DX Foundation to receive a “SuperFox key”
  2. During the dx-pedition, the fox operator configures the “SuperFox key” in their WSJT-X application
  3. When transmitting in SuperFox mode, WSJT-X encodes a verification signature into each message
  4. Hounds receive this verification signature and check that it’s valid
  5. If the message is valid, a “Verified” message is displayed in WSJT-X

Governance issues

The first problem to dive into is how dx-pedition keys are distributed. We are reliant on a single organisation to distribute keys for all dx-peditions worldwide. This poses issues if the organisation doesn’t accept your dx-pedition’s credentials or is uncontactable for some reason.

Given the global reach of this hobby it’s frustrating to see this approach taken.

What could be done better

Public/private key pairs could be generated by any dx-pedition and used within WSJT-X. The public key could be uploaded to the dx-pedition’s website, where users could check its validity. This would allow anyone to use the new SuperFox mode. WSJT-X might also want a certificate download system that automatically fetches vetted certificates.

GPL source code and amateur radio ethos

The SuperFox decoder and the signature check are packaged as binary blobs and not covered by the GPL. This means that users installing through open source channels will not be able to use SuperFox mode, as the binary blobs need to be stripped from the packages.

Additionally, the use of binary blobs may not be in line with the project’s GPL licence, but I’ll leave that debate up to the legal people.

Regardless of the legal aspects, I’m a strong believer that amateur radio should focus on using open standards, protocols and modulation schemes where possible, and this very much goes against that. Keeping the modulation and verification scheme a secret prevents innovation and limits the lifetime of the service.

What could be done better

Develop the digital signature process in public with open source code. This allows feedback and improvements. It also ensures that the project can live on.

Security by obscurity is a bad idea

I’m sure you already know this one: security by hiding the algorithm is a bad idea. You might have noticed that when I walked through how SuperFox is meant to work, there was no step where “WSJT-X downloads the latest keys” or “user inserts public key”. That’s because the only security provided by this system comes from keeping secret the algorithm used to “sign” the messages.

The good news is that it’s no longer obscure…

mwheeler@foxbook superfox_keygen % gcc -I. main.c <censored>.c
mwheeler@foxbook superfox_keygen % ./a.out N0CALL
OP0C-COPY

I spent a little bit of time looking at how the binaries worked and made my own implementation of the key generator.

A public release of this code as GPL will be available after the Jarvis Island 2024 dx-pedition.

What could be done better

Including some people knowledgeable in secure system design would be a great start. This system was very hand-crafted, without much experience behind it, and didn’t use any existing open standards for signing messages. Being amateur radio, we don’t need anything fancy. In fact, H40WA implemented a reasonably workable solution using TOTP tokens to verify that the station wasn’t a pirate. Alternatively, some basic public/private key cryptography could have helped here.

Oh no.

So what’s going on here? (Note: I’m no cryptography expert.)

The major problem with SuperFox is the system design for the digital “signature”. The scheme is symmetric, which means the receiver needs to know both the key and the process used to “sign” the message. This means the only thing stopping someone from generating their own SuperFox key is client-side security - a terrible place to be in.

Facepalming xssfox

In a future post after the Jarvis Island 2024 dx-pedition I’ll write up the steps I took in discovering how SuperFox and foxchk work. This work was done with the intention of creating open source versions of the closed binaries so that SuperFox could work on Debian. However, as the security issues were discovered, it became important to document them publicly (while keeping the details private for now) to give the developers a chance to change their approach to something more open and sustainable.

With help/love from

  • the6p4c
  • isomer
  • kitty
  • my insomnia

Longest Run Gippsland 2024

Last Sunday I took part in the Longest Run Gippsland. The Longest Run is a series of unofficial parkruns (5km) at 7 different locations, completed close to the shortest day of the year (usually the Monarch’s Birthday long weekend).

Gippsland’s Longest Run 2024 started at 7am in Warragul, then moved through Newborough, Traralgon, Churchill, the Grand Ridge Rail Trail and Koonwarra, finally finishing on the coast at Inverloch at 4:30pm. That gives about an hour to complete each run and 30-ish minutes to have a snack and travel to the next location.

While the North Melbourne Longest Run probably made more sense for me, I heard about the Longest Run Gippsland first, and the concept of doing some of the parkruns that Alex had told me about sounded super appealing. Getting up at 5am to be ready in Warragul did not seem like fun, so as Alex and Geordie would be joining in, we decided to share a hotel room in Warragul for the night. Luke decided to join us in the morning as well.

We would all run at our own paces. My plan was 6min/km - much slower than many of my recent runs. I packed several sets of running clothes, however I didn’t factor in how cold it would be on the day. I really need to get a running jacket.

Myself, Alex and Geordie in the dark ready for the first parkrun

Warragul

28:27 5:41/km 24m Ascent

Group photo of all the runners at Warragul - Probably around 30 people
After an introduction to the Longest Run and a quick briefing on the course we were off. I had actually completed this course officially once before, during Antennapalooza. The course does a lap of the main park before turning north past the ovals towards the end of the park and returning. The northern section is run twice before finishing near the start line.

I completed this course a little bit faster than I wanted but still kept it pretty easy. Afterwards some bananas and corn chips were consumed. My diet that day consisted mostly of corn chips.

Newborough

28:33 5:42/km 25m Ascent

This was a lovely course. A simple double out and back along the rail trail. The approach to the rail trail followed a lovely flowing stream that just looked so gorgeous this time of year.

Once again I ran a bit faster than I really wanted, but still felt good afterwards. This was also the first time I met Liz. We happened to be going at about the same pace and chatted throughout most of the run. While everyone at this event was friendly and supportive, Liz was next-level supportive and got me through some of the later parkruns.

Traralgon

27:26 5:29/km 20m Ascent

I’ve run sections of the Traralgon parkrun before, so I knew what to expect. This is another double out and back which follows the creek, however you don’t get much of a view of the creek due to the path placement. It’s a reasonably flat course. I certainly ran a lot faster than I should have - not sure if this was because I needed to go to the bathroom or because I was running with Alex.

Chomped on some chocolate along with some sugar-coated nuts during the short break.

Churchill

29:48 5:52/km 46m Ascent

We only just got to Churchill in time for the “official” start. It’s worth noting that as these aren’t official parkruns there are no timers, no finish funnel and no tokens - you record your own time. This means that you can start the courses early or late. Many of the people walking the courses started them early.

Churchill caught me a bit off guard. Up until this point I had been running in my very normal, not designed for running, cotton hoodie. The combination of slightly better weather and a bit more ascent on this course meant I had to remove it mid-run, and the somewhat sharp elevation changes meant that I ended up taking two short walking segments on this one.

The course starts by heading south down the park, u-turning, then heading back north towards the very top of the park. Two laps of this are completed, with the exception of the very north section. When finishing I was a bit confused to find that the finish line is a short distance away from the start on the grassy section, however I think under normal parkrun conditions this would be easy to identify.

We had lunch at this point, some wraps with tabouli, corn chips, salad, salad dressing and probably some other fillings I’ve forgotten to mention.

At this point I was feeling a little tired and there was a bit of pain in my knee. I think some of the aggravation had actually come from the driving segments.

Grand Ridge Rail Trail

32:37 6:31/km 30m Ascent

Selfie with Luke and Alex while holding up the Grand Ridge Rail Trail parkrun selfie border cardboard

This is a fun course. Well, I think it would be a fun course in the dry. It’s a single out and back with a slight downhill grade the entire way out. I decided before I even started that I should take it super easy, and walk if needed, to save enough energy and limb to complete all 7 courses. The approach I ended up with was to run the downhill, walk for about 500m from the turn-around point, then run the rest of the return. Towards the end I was feeling pretty good so I picked up the pace a bit. Felt really good.

This also marks the point where you’ve completed a half marathon worth of running - and for me that also meant the most running I’ve done in a single day.

I really want to try this course again in the dry. I felt like I spent a lot of time focusing on not twisting my ankle in the slippery mud and clay rather than enjoying the track. The other reason is that this course often only gets around 20 people attending, which means I’m in with a shot at getting in the top 10!

Only two more to go.

Koonwarra

37:55 7:22/km 34m Ascent

This is a gorgeous course. Probably my favorite of all 7. I’m not sure exactly how well I would go under normal parkrun conditions as the bridges were quite wobbly and that usually makes me feel a bit unwell, but the views were amazing.

Koonwarra parkrun is an out and back starting from the town and following the Great Southern Rail Trail. The start takes you through a tunnel under the highway, where you become surrounded by trees, eventually opening up into plains / farmland with great visibility across the bridges.

At this point I had pretty much hit my limit. I ran, walked, ran, walked, ran - walking roughly half of the 5km. I tried to keep the walking pace as fast as possible. My knee was starting to hurt a lot more. It wasn’t bad, but it also wasn’t good, and I didn’t want to push it.

Inverloch

36:53 7:15/km 17m Ascent

Looking back at the stats I find it hard to believe that Inverloch only had 17m of ascent. Every little up section felt like pain to me. I had a secret weapon though. A progress pride flag worn as a cape to celebrate pride month.

Me running in Inverloch with a progress pride flag worn as cape

I was pretty tired at this point and didn’t really understand the course at all. Luckily I had people to follow, otherwise I would have been utterly confused when I arrived back at the start line having only done 2km. The course starts in the middle and runs east, then turns around and heads back west past the start line for another 500m. Two laps make up the 5km.

I ran the first 2km eager to finish the final parkrun without walking, but quickly realised it was not going to happen and switched back to walking. With 1km to go I decided to give running another shot and was able to complete the final kilometre running. Just.

With that it was done. 7 parkruns in a single day. Even though it was only 14°C, I threw myself under the outdoor beach shower for a few minutes before changing into some fresh clothes.

I’m super happy with the outcome. I wasn’t sure if I could do all 7, but considering I was still running by the end, that was a win for me. My knee was a bit sore for a few days afterwards but it’s now good. Four days later I did a reasonable 5km run (5:16/km) followed by a 10km PB (5:24/km) on the Saturday - so I think it’s safe to say that most of my body recovered pretty quickly from what was the most running I’ve ever done in a single day.

I’m pretty sure I’ll find myself doing another Longest Run in the future, I’m just not sure which one yet.


Surviving Terraform

I pretty much give this information as a talk at any company I’ve worked with that is using, or struggling with, Terraform. None of these ideas are groundbreaking and, like the other posts in this series, they’re very opinionated, but hopefully you learn something from this post.

Don’t use input variables on the main stack*

Terraform input variables seem like the perfect solution to parameterising different environments. The problem is that you now need to manage a bunch of variables per environment. One solution is to have a .tfvars file for each environment, but then you have to remember to use the correct file for each environment.

Instead, I prefer to use a locals map. This removes the need to remember which file has to be loaded for each environment. If you are using workspace names you can do something like this:

locals {
  # Per-workspace configuration map, keyed by workspace name
  config = {
    live = {
      instance_size = "m5.large"
    }
    staging = {
      instance_size = "t2.medium"
    }
    # test doesn't override anything, so the default below applies
    test = {
    }
  }
}

output "instance_size" {
  # Fall back to a default when the workspace doesn't set a value
  value = try(local.config[terraform.workspace].instance_size, "t2.micro") # Another approach is using lookup()
}

My other advice around configuration is to try your hardest to avoid configuration options. The fewer configuration options, the less variation testing you need to perform and the less likely there will be differences between environments. If all your environments have the same instance_size, you don’t need to declare it as a configurable option. This also keeps as much of the configuration visible in the resource definition as possible, making debugging and configuration changes easier.

*except for provider secrets

The exception to this rule is provider secrets. These should never be stored in a .tfvars file or in locals; your CI/CD system should supply them securely using environment variables.
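
A minimal sketch of what that can look like (the Datadog variable is a hypothetical example - substitute whatever secrets your providers need):

# The AWS provider reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
# (or an assumed role) from the environment, which CI/CD injects at
# run time - nothing is committed to the repo.
provider "aws" {
  region = "ap-southeast-2"
}

# For providers that take secrets as arguments, declare a sensitive
# variable and have CI/CD set it via e.g. TF_VAR_datadog_api_key.
variable "datadog_api_key" {
  type      = string
  sensitive = true
}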

Keep configuration close to resources

Terraform / HCL is a domain-specific language for defining your infrastructure. Don’t try to move all your configuration into locals. When someone wants to make a change or debug a problem, they want to look at the resource block to see how a resource is configured - not follow a trail of breadcrumbs. When you start moving every configuration option into locals, you end up creating a worse version of Terraform / HCL. If you need to calculate a value that’s used across multiple resources, do that in the same file as those resources.
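
For example, a shared value computed alongside the resources that use it, in the same file (the names here are purely illustrative):

# Computed once, used by the resources below in the same file
locals {
  common_tags = {
    Service     = "billing"
    Environment = terraform.workspace
  }
}

resource "aws_sqs_queue" "jobs" {
  name_prefix = "jobs-"
  tags        = local.common_tags
}

resource "aws_sqs_queue" "jobs_dlq" {
  name_prefix = "jobs-dlq-"
  tags        = local.common_tags
}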

Avoid third party modules

I don’t think I’ve ever found a single module on registry.terraform.io that has actually saved me time in the long run. They often seem like a great idea initially, however in nearly every case the organisation has had to fork or vendor the module to add features it needs. Maintaining the forked version becomes troublesome, as you now need to update functionality in the module that you might not even be using. To make matters worse, many modules use other modules, creating dependency hell when trying to upgrade provider or terraform versions.

Modules on the registry are often either:

  • very complex, supporting so many different use cases that using the primitives directly would be easier, or
  • extremely basic, making their existence pointless.

Instead, use these modules for inspiration only.

Be aware, too, that third-party modules are a potential avenue for supply chain attacks on your terraform.

Use internal modules sparingly

Flat terraform is good terraform, but there are times where using modules makes a lot of sense. The first is when you need to create a large number of similar resources (however, consider using chained for_each’s first - see the sketch below). The second is reusable components: you might often need to spin up an ALB, an ECS task definition/service and a security group as part of your company’s usual design pattern.
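
A rough sketch of a chained for_each, with hypothetical queue names - the second resource iterates directly over the instances of the first:

locals {
  queues = {
    orders  = { retention = 345600 }
    signups = { retention = 86400 }
  }
}

resource "aws_sqs_queue" "app" {
  for_each                  = local.queues
  name_prefix               = "${each.key}-"
  message_retention_seconds = each.value.retention
}

# Chained for_each: one SSM parameter per queue instance above
resource "aws_ssm_parameter" "queue_url" {
  for_each = aws_sqs_queue.app
  name     = "/queues/${each.key}/url"
  type     = "String"
  value    = each.value.url
}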

In this case a module usually makes sense - however, also make it have purpose. Try to make the module fill in as many gaps as possible, remembering the rule above about reducing the number of variables. You might have a standard set of subnets these ALBs are always deployed to; rather than taking that as an input variable, use data sources to look up those values. If the user of the module only has to set a single name input variable, it’s a big win for users and operations teams. Fewer variables - fewer mistakes.
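
A minimal sketch of the inside of such a module, assuming your standard subnets carry a Tier = "internal" tag (the tag and names are assumptions):

variable "name" {
  type = string
}

# Fill the gap with a lookup instead of another input variable
data "aws_subnets" "internal" {
  tags = {
    Tier = "internal"
  }
}

resource "aws_lb" "this" {
  name_prefix        = substr(var.name, 0, 6) # name_prefix is capped at 6 chars
  internal           = true
  load_balancer_type = "application"
  subnets            = data.aws_subnets.internal.ids
}

A caller then only needs to supply the single name variable.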

Often people see using modules as a way of reducing the size of a stack or project, however counting lines of code like this is silly - you can separate components of your stack into separate .tf files instead. Having the least amount of nesting makes your terraform easier to debug, easier to understand and easier to write.

Layout

For a small project, a single terraform stack can work great. However, as things get larger you’ll probably want to consider breaking them apart. One sign that it might be time to break up a monolithic terraform stack is when plans start taking unbearably long to finish.

In these cases try to separate things into shared components. You might have a VPC or AWS account stack which handles a lot of the shared common infrastructure. If all your projects use a shared database server you might break that into its own stack. Then each service or micro-service might get its own stack.

The important part of this process is thinking about dependencies. In the example above, the VPC stack should be deployable on its own, while the database stack should be deployable with only the VPC. You want to make sure these dependencies flow one way. Ideally they should be soft dependencies - meaning the database stack uses data sources to look up the details it needs to perform its deployment.
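
For instance, the database stack might discover the VPC by tag rather than reading the VPC stack’s state directly (the tag values are assumptions):

# Soft dependency: look the VPC up by tag, don't read its state
data "aws_vpc" "main" {
  tags = {
    Name = "main"
  }
}

data "aws_subnets" "database" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.main.id]
  }
}

resource "aws_db_subnet_group" "main" {
  name_prefix = "db-"
  subnet_ids  = data.aws_subnets.database.ids
}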

Workspaces and backend config

Workspaces are a great way of managing different environments. If you’re thinking of having multiple environments deployed from a single workspace, please stop and reconsider - there is so much risk in that approach.

There is, however, a downside to using workspaces in terraform: workspaces share a single backend configuration. Often you’ll want to deploy test, staging and live to different AWS accounts. You could store the backend config in a single shared AWS account, but there’s an alternative: terraform backend config can be defined/overridden on the command line. This can be preferable to a shared backend configuration, as tfstate can have secret values stored in it.
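
A sketch of how that looks - an empty (partial) backend block in code, with per-environment values supplied by CI/CD at init time (the bucket names are placeholders):

# Partial backend configuration; the rest arrives at init time
terraform {
  backend "s3" {}
}

# In CI/CD, per environment:
#   terraform init \
#     -backend-config="bucket=tfstate-staging" \
#     -backend-config="key=app/terraform.tfstate" \
#     -backend-config="region=ap-southeast-2"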

Secret Management

Often an application needs access to third party services via an API key, or requires storing some other secret information. Storing these inside the git repo would be a terrible idea. In these cases I suggest creating a bunch of placeholder SSM parameters with ignore_changes enabled for the value attribute.

resource "aws_ssm_parameter" "test" {
  name  = "test"
  type  = "SecureString"
  value = "PLACEHOLDER"
  lifecycle {
    ignore_changes = [value]
  }
}

This lets terraform create all the parameters that might need configuring for an environment, and gives you a way of referencing them. An admin can then enter the AWS console and fill in the values that need setting. Be aware, however, that terraform will still put the actual secret value into the state file on the next refresh.
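
Because the parameter is managed by terraform, other resources can reference it by name or ARN without ever reading the value - for example, building an IAM policy document that grants an application read access so it resolves the secret itself at run time (role attachment omitted):

data "aws_iam_policy_document" "read_secret" {
  statement {
    actions   = ["ssm:GetParameter"]
    resources = [aws_ssm_parameter.test.arn]
  }
}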

There are terraform providers for various vaults and password managers, like 1Password, that can be used to populate the values, if your security model allows this. Alternatively, it might be suitable to source these secrets from an input variable - as long as the secret isn’t being committed to git.

Other tips

Don’t name things

If you must, use name_prefix where available. Sometimes a resource needs to be recreated to change its configuration, and most of the time the resource name must be unique. If you are using create_before_destroy, this means that you can’t create the new resource before the old one is destroyed. This is even worse for things like S3 buckets, whose names must be globally unique.
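
A small sketch of the pattern (the resource names are illustrative):

resource "aws_security_group" "app" {
  # Terraform appends a random suffix, so the replacement can exist
  # alongside the old resource during create_before_destroy
  name_prefix = "app-"

  lifecycle {
    create_before_destroy = true
  }
}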

AWS allowed_account_ids

Use allowed_account_ids if possible. This ensures that your terraform is only ever applied to the correct accounts. You can combine it with a locals map keyed by terraform workspace, to assert that each workspace maps to the AWS account ID you expect.
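
Something like this (the account IDs are placeholders):

locals {
  aws_account_ids = {
    live    = "111111111111"
    staging = "222222222222"
  }
}

provider "aws" {
  region              = "ap-southeast-2"
  # Apply fails fast if the credentials point at the wrong account
  allowed_account_ids = [local.aws_account_ids[terraform.workspace]]
}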

Modules should use git tag references

When you release a new version of an internal module, tag it with a version number and use that version in your module sources. Terraform does not lock modules to a specific commit ID, so for reliable deployments you need to do it yourself.
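
For example, pinning a module to a tag via a git source (the repository URL is hypothetical):

module "alb" {
  source = "git::https://git.example.com/infra/terraform-modules.git//alb?ref=v1.4.2"
  name   = "api"
}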

If you haven’t kept up with the latest releases

The following are useful for refactoring terraform while still using a CI/CD environment (a sketch of moved follows this list):

  • moved - move/rename resources
  • removed - remove a resource from state without destroying it
  • import - import an existing resource
  • check - assert conditions during different stages of the terraform run
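
For example, a moved block lets a rename apply cleanly through CI/CD instead of destroying and recreating the resource (the names are illustrative):

# aws_instance.app was renamed to aws_instance.web in code;
# this maps the existing state entry onto the new address
moved {
  from = aws_instance.app
  to   = aws_instance.web
}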

Try not to deprecate input variables

If you want developers to keep up to date with your terraform modules, make it easy. Try not to rename variables or change their input types. If you need to support new configuration types, try to accept both the old and new types within the module, as sketched below.
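
One way to do this, assuming a module that used to take a single subnet ID string and now takes a list:

variable "subnet_ids" {
  # "any" so both the legacy string and the new list are accepted
  type        = any
  description = "Subnet ID (legacy string) or list of subnet IDs"
}

locals {
  # Normalise: wrap a legacy string value in a single-element list
  subnet_ids = try(tolist(var.subnet_ids), [var.subnet_ids])
}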

If you really want to remove a variable, give users a warning first. You can use a check block to do this, like so:

variable "instance_id" {
  default = ""
  description = "[DEPRECATED] Do not use anymore as its been replaced with instance_name"
}

check "device" {
  assert {
    condition     = var.instance_id == ""
    error_message = "Warning instance_id variable is deprecated and should not be used. See instance_name."
  }
}

Have CI/CD do everything

I feel like this one is obvious, but just to be clear: you shouldn’t be running terraform locally. Have CI/CD run plans on PRs. Have CI/CD run terraform fmt and push the changes back to the PR. On merge, run the apply - ideally with the plan generated in the PR, if you only allow fast-forward merges.

Plan and state security

As hinted above, tfplans and state files can have secrets in them. Factor this in when deciding who has access to the state backend and who can download plans from your CI/CD system. Make sure they aren’t public.