This occurred on an AWS website (not a site hosted on AWS, but a site run by AWS). It shows that security is hard, even for a $51 billion business. This issue can occur not just on websites but also in SDKs and libraries.
📸 Erik Mclean via Unsplash
While developers have a keen nose for code smells, we operations types have a keen nose for infrastructure smells. When I opened this Git repository for the first time, it hit me: a buildspec.yml file.
The humble buildspec.yml
For those unfamiliar, buildspec.yml is used by a service called CodeBuild and defines the steps used to build a project, including running arbitrary shell commands. It’s basically remote code execution as a service.
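As a rough illustration (this is not the actual file from the repository, just a minimal sketch of the format, with hypothetical build commands), a buildspec.yml looks something like:
version: 0.2
phases:
  install:
    commands:
      - npm ci            # any shell command can go here
  build:
    commands:
      - npm run build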
The presence of this file in a repository isn’t cause for alarm, but when it’s in a public repository it certainly raises red flags. The usual concern is that someone’s committed secret credentials into this file. In this case the file was clean of credentials.
All good right? Not so fast.
📸 Lachlan Gowen via Unsplash
notices your deploy.sh
The buildspec.yml referenced a deploy.sh. This is when I verbally said “oh no”. Like before, no secrets were committed. A good start. deploy.sh contains instructions to deploy the project (aws s3 sync and the like), so we can determine that when this gets run it has access to upload to the production site.
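The actual script isn’t reproduced here, but a deploy.sh of this shape typically boils down to something like this (the bucket name and source directory are placeholders):
#!/bin/bash
target_bucket=the-production-site-bucket    # placeholder; the real value is defined in the original script
aws s3 sync ./public "s3://$target_bucket"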
📸 Nathan Anderson via Unsplash
The issue here is that the buildspec.yml and deploy.sh could be modified by a malicious user.
The pull request
However, a malicious user doesn’t have access to commit to the repository, and an admin isn’t going to merge malicious code, so this is no big deal, right? Let’s see what happens when we lodge a pull request.
Upon creation of the pull request, GitHub triggers a CodeBuild job. This is fairly common practice to make sure nothing in the pull request breaks the build. So what prevents the pull request build from deploying to production? Let’s check deploy.sh:
if[["$CODEBUILD_WEBHOOK_HEAD_REF"=="refs/heads/main"&&${CODEBUILD_SOURCE_VERSION:0:3} !="pr/"]]; then
oh no.
So deployment is purely controlled by a script that can be changed in the pull request.
📸 Scott Walsh via Unsplash
One last chance
At this stage we’ve got remote code execution in the pipeline. Apart from mining some Bitcoin, this is pretty uneventful. What about the S3 sync we mentioned earlier? It’s possible that the role granted for pull requests is the same role used for deploying to production, so let’s check it out.
I edited the shell script to have my code right at the start …
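The exact payload isn’t reproduced here, but conceptually it was something along these lines (a reconstruction, not the original code):
echo "hello from the pull request" > poc.html
aws s3 cp poc.html "s3://$target_bucket/poc.html"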
(the target_bucket value was recovered from the original deploy.sh)
… and lodged a pull request. I checked the website and sure enough my file was there. 😮
📸 Nathan Anderson via Unsplash
It doesn’t end there
It’s quite possible that the role used for deployment might have access to lots of interesting things: a private subnet, IAM admin, CloudFormation. I didn’t check further than this and submitted a disclosure report to the security team immediately.
Prevention
If you still want pull requests to trigger builds on a public repository, there are a couple of things you can do to limit the risk.
Place build scripts in a separate repo. Some build tools let you specify a separate repo to use for the build pipeline. Be careful though, as this doesn’t guarantee that the project build can’t execute commands, depending on the programming language and build tools.
For services like CodeBuild, you can use a separate IAM role for pull requests which is limited to just the build requirements; a rough sketch follows below. Make sure the build agents for PRs aren’t within a trusted network.
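One way to do this (names and account ID are made up; treat this as a sketch rather than a drop-in command) is a PR-only CodeBuild project with its own minimal service role, kept separate from the project that holds the deployment role:
aws codebuild create-project \
  --name site-pr-builds \
  --source '{"type": "GITHUB", "location": "https://github.com/example/site"}' \
  --artifacts '{"type": "NO_ARTIFACTS"}' \
  --environment '{"type": "LINUX_CONTAINER", "image": "aws/codebuild/standard:5.0", "computeType": "BUILD_GENERAL1_SMALL"}' \
  --service-role arn:aws:iam::123456789012:role/codebuild-pr-minimal
The production deploy then lives in a different project whose webhook only fires on pushes to the main branch, with the more powerful role attached there.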
The video below is streamed from AWS practically for free. This is done very similarly to the original Big Buck AWS, but using OpsWorks logs instead. This exploit is a little more useful, as you can store gigabytes of data many more times, and CORS is enabled.
(click play if video doesn’t auto play)
When you run a command in AWS OpsWorks (such as Setup or Configure), the logs are uploaded to S3 and viewed from the UI using presigned URLs. The bucket for this is opsworks-us-east-1-log - notice that it’s an AWS bucket and not your bucket!
So what we do is run a whole bunch of deployments to generate logs, and modify the OpsWorks agent’s lib/instance_agent/agent/process_command.rb file to print out the presigned URL it uses to upload logs. Once we have the presigned URLs and the logs are uploaded, we reupload whatever content we want to - in this case the MPEG files for Big Buck Bunny.
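The reupload step is then just an HTTP PUT to the captured presigned URL; roughly (the URL and file name are placeholders, and this assumes the presigned URL was generated for a PUT):
curl --upload-file segment-0.ts "https://opsworks-us-east-1-log.s3.amazonaws.com/...presigned-url-printed-by-the-agent..."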
To form this site we dump the Lambda behind an API Gateway and serve up an HLS m3u8 file. More details can be found in the original Big Buck AWS GitHub repo.
Ever wanted to host 1080p 60fps video but don’t want to pay the hosting bill? By (ab)using some of AWS’s features we can serve DVD-quality content to a million clients for ~$10 USD.
(works in IE, Edge, and Safari - to use Firefox or Chrome see the links mentioned earlier; should also work in most modern media players)
How does it work?
AWS lets us upload 75GB of Lambda functions. Each function can be 50MB in size. We simply split up the video into Lambda functions. There are a few problems with this though. First off, Lambda only accepts zip files. How do we get around this? We zip up the video.
Now we have another problem: to play the video we’d need to unzip it. That’s where we can do two little tricks. First, we zip up the file with 0% compression. This allows the original file to remain intact; the zipping process just wraps some headers around the file (like a tarball). But video players aren’t going to want to play a zip file, so that’s where the HLS standard comes to the rescue.
By using HLS we can split the file into multiple chunks and create a playlist of videos to play seamlessly together. An m3u8 HLS file looks something like this:
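(A typical ffmpeg-generated playlist; the exact durations and segment count will differ.)
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:31
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:30.000000,
bbb-0.ts
#EXTINF:30.000000,
bbb-1.ts
#EXT-X-ENDLIST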
What we do to the playlist file is make use of the EXT-X-BYTERANGE tag, which allows us to tell the client which bytes to fetch. Using this we can skip past the zip header and go straight to the actual content within the zip file. It looks something like this:
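(Reconstructed example; the length@offset values are illustrative, with the byte range skipping the zip’s local file header.)
#EXTM3U
#EXT-X-VERSION:4
#EXT-X-TARGETDURATION:31
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:30.000000,
#EXT-X-BYTERANGE:19254396@66
bbb-0.zip
#EXT-X-ENDLIST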
The last little piece is working out how to get the client to download the data from AWS Lambda. AWS lets you download your uploaded code again, but the way it lets you do this is through a presigned URL to an S3 bucket. All we need to do is call get-function and it provides the S3 presigned URL, which is valid for 10 minutes. Media players aren’t really going to understand this, so we need to make it easier for them. Remember that this isn’t your bucket, it’s Amazon’s, so they pay the $$$.
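For example, the presigned URL comes back in the Code.Location field:
aws lambda get-function --region us-east-2 --function-name bbb-0 --query 'Code.Location' --output text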
This is where most of the expense comes into this system. The easiest way I found was to create an API Gateway and a Lambda function that responds to requests for chunks with a redirect to the presigned URL. E.g. the media player says “can I have the first playlist item” to API Gateway, API Gateway fires the Lambda function, which runs get-function and returns the presigned URL back to the media player as a redirect. We also perform some caching here so that we don’t overwhelm AWS with get-function requests.
So our final m3u8 playlist looks something like this
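(Reconstructed from the description; the segment lengths are illustrative and the remaining entries are elided.)
#EXTM3U
#EXT-X-VERSION:4
#EXT-X-TARGETDURATION:31
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:30.000000,
#EXT-X-BYTERANGE:19254396@66
https://mixpj3rk5d.execute-api.us-east-2.amazonaws.com/prod/0
#EXTINF:30.000000,
#EXT-X-BYTERANGE:19254396@66
https://mixpj3rk5d.execute-api.us-east-2.amazonaws.com/prod/1
#EXT-X-ENDLIST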
and the links (https://mixpj3rk5d.execute-api.us-east-2.amazonaws.com/prod/0) end up redirecting to something that looks like this: https://awslambda-us-east-2-tasks.s3.us-east-2.amazonaws.com/snapshots/082208999166/bbb-0-7db00eaf-4c7a-4b18-b6bf-8be2424dee1d?versionId=O_...<snip>...1b749e9426408827fb5732e1d8ec305b11e2938a27a89d02e175a3c
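You can sanity-check one of the links with curl and look at the Location header of the redirect:
curl -s -o /dev/null -D - https://mixpj3rk5d.execute-api.us-east-2.amazonaws.com/prod/0 | grep -i '^location'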
Rough instructions on how to replicate with your own videos. Should you use this? Probably not.
Split up video
# create the ts files and m3u8
ffmpeg -y \
-i bbb_sunflower_1080p_60fps_normal.mp4 \
-codec copy \
-bsf:v h264_mp4toannexb \
-map 0 \
-f segment \
-segment_time 30 \
-segment_format mpegts \
-segment_list "bbb.m3u8" \
-segment_list_type m3u8 \
"bbb-%d.ts"
# zip them up so Lambda accepts them (on fish shell) - 0 compression because we want to byte-range into them
for i in (seq 0 21); zip -r -0 bbb-$i.zip bbb-$i.ts; end
# use xxd to find offsets (probably easier using zipinfo -v but this works)
# 1-9 = 66 offset
# 10+ = 67
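# e.g. (sketch) dump the first bytes of an archive; the .ts data starts right after
# the zip local file header + filename + extra field
xxd -l 96 bbb-0.zip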
# upload lambda functions
aws lambda create-function --region us-east-2 --function-name bbb-0 --runtime nodejs12.x --role "arn:aws:iam::082208999166:role/lambda_basic_execution" --handler "blah.blah" --zip-file fileb://bbb-0.zip
for i in (seq 0 21); aws lambda create-function --region us-east-2 --function-name bbb-$i --runtime nodejs12.x --role "arn:aws:iam::082208999166:role/lambda_basic_execution" --handler "blah.blah" --zip-file fileb://bbb-$i.zip; end
Create the API Gateway / Lambda function
(really this should be CloudFormation, but given I’m doing this as a one-off PoC, that’s left to the reader to build)
We need a way of getting the latest code download link. We use Lambda for this because it’s dirt cheap. In front of the Lambda we place API Gateway.
Create a Lambda function from scratch, give it IAM permissions to read Lambda functions, and upload redirect.py as the function code.
Create a new API Gateway in AWS
Create a /{proxy+} proxy method
Update Integration Request for the method to be Lambda Function and point it at the redirect Lambda
Deploy the API Gateway.
Create the m3u8
ffmpeg should have given you an m3u8; you need to modify that. Before every file we need to add the EXT-X-BYTERANGE field. The first number is the length of the file (without the zip wrapper) and the last number is the offset inside the zip (which we found with xxd earlier):
#EXT-X-BYTERANGE:19254396@66
Update the path to the file to point to your API Gateway endpoint, e.g. https://mixpj3rk5d.execute-api.us-east-2.amazonaws.com/prod/2
You can probably script this.
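A rough sketch of that scripting (in bash rather than fish; assumes GNU stat, the 66/67 offsets found with xxd, and a made-up API Gateway id):
i=0
while IFS= read -r line; do
  if [[ "$line" == bbb-*.ts ]]; then
    size=$(stat -c%s "$line")                 # length of the raw .ts segment
    offset=66; [[ $i -ge 10 ]] && offset=67   # zip header offset found with xxd
    echo "#EXT-X-BYTERANGE:${size}@${offset}"
    echo "https://abc123.execute-api.us-east-2.amazonaws.com/prod/$i"
    i=$((i+1))
  elif [[ "$line" == "#EXT-X-VERSION:"* ]]; then
    echo "#EXT-X-VERSION:4"                   # EXT-X-BYTERANGE needs protocol version 4
  else
    echo "$line"
  fi
done < bbb.m3u8 > bbb-lambda.m3u8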
Inspiration
Laurent Meyer did a great write-up about Google Drive streamers using PNG to cover their tracks.