Signed, Secure Custom S3 URLs on CloudFlare

It may sound contrived, but signing a custom S3 URL that’s served through CloudFlare is a surprisingly common and unbelievably challenging process.

If you’ve been pulling your hair out staring at cryptic AWS errors and CloudFlare timeouts, this might be your lucky day. Personally, I’ve worked through this issue several times – each time re-remembering all the nuances and work-arounds. This time I’ve decided to write down what I’ve learned.

The Problem

Say you have a bucket called “my-s3-bucket” and you want to serve that content through a custom URL “mybucket.example.com”. If the files are completely public, this is easy – you just set a CNAME from “mybucket.example.com” to “my-s3-bucket.s3.us-east-1.amazonaws.com”.
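
In DNS terms that’s a single record (using the example names above):

mybucket.example.com.  CNAME  my-s3-bucket.s3.us-east-1.amazonaws.com.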

However – if you need any of these things, you’re in for a world of pain:

  • Signed URLs – the content is not public; your server signs a URL that lets the user download the file for a short period of time
  • HTTPS – if you need CloudFlare to make secure requests to your S3 bucket AND your bucket name has any “.” characters in it, you’re in trouble.
  • CORS – the content will be loaded client-side on your website and needs CORS headers that allow it to load on your site’s origin

Frankly, the easier path is to use Amazon CloudFront instead of CloudFlare to set up these CNAMEs, provide SSL, etc. Configuring everything in AWS CloudFront is the usual AWS mess of obscure settings, but it’s still the easiest path. The biggest drawback is that CloudFront costs money – especially at scale.

Problem #1. Signed URLs

Here’s why Signed URLs are tricky on CloudFlare. You use the AWS SDK’s getSignedUrl with a custom endpoint set to “mybucket.example.com” and the result is:

https://mybucket.example.com/somefile.pdf?AWSAccessKeyId=XXXXXXXXXXXXX&Expires=1652155571&Signature=xxxxxxxxxxx
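
For reference, that URL comes from something roughly like this sketch using the AWS SDK for JavaScript v2 (the endpoint, bucket, and key are the example names from above):

const AWS = require('aws-sdk');

// Point the SDK at the custom domain. s3BucketEndpoint tells it the
// endpoint already addresses the bucket, so it won't prepend the bucket name.
const s3 = new AWS.S3({
  endpoint: 'https://mybucket.example.com',
  s3BucketEndpoint: true,
});

const signedUrl = s3.getSignedUrl('getObject', {
  Bucket: 'my-s3-bucket',
  Key: 'somefile.pdf',
  Expires: 900, // seconds until the link expires
});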

Looks perfect, but it won’t work. Why? Let’s walk through the lifecycle. That request hits CloudFlare, where you’ve already set up a CNAME for “my-s3-bucket.s3.us-east-1.amazonaws.com”. So CloudFlare makes a request to “https://my-s3-bucket.s3.us-east-1.amazonaws.com/somefile.pdf?AWSAccessKeyId=XXXXXXXXXXXXX&Expires=1652155571&Signature=xxxxxxxxxxx” to actually fetch the content. S3 compares the signature in the query string with the one it calculates itself. They don’t match. Why? Because the hostname is one of the ingredients used to create the signature. You calculated the signature using “mybucket.example.com” as the hostname; AWS calculated it using “my-s3-bucket.s3.us-east-1.amazonaws.com”.

Fixing the Hostname in the Signature

Fair enough. So you try again, this time calling “getSignedUrl” without a custom endpoint and it creates:

https://my-s3-bucket.s3.us-east-1.amazonaws.com/somefile.pdf?AWSAccessKeyId=XXXXXXXXXXXXX&Expires=1652155571&Signature=xxxxxxxxxxx

But you want the user to see your custom domain, not the S3 stuff. So you find/replace the hostname manually and come up with:

https://mybucket.example.com/somefile.pdf?AWSAccessKeyId=XXXXXXXXXXXXX&Expires=1652155571&Signature=xxxxxxxxxxx
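
In code, the find/replace is roughly this sketch – sign against the real endpoint, then swap the hostname:

const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1' });

// Sign against the real S3 hostname...
const signed = s3.getSignedUrl('getObject', {
  Bucket: 'my-s3-bucket',
  Key: 'somefile.pdf',
  Expires: 900,
});

// ...then swap in the custom domain for the user-facing URL.
const publicUrl = signed.replace(
  'my-s3-bucket.s3.us-east-1.amazonaws.com',
  'mybucket.example.com'
);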

Will it work? You’d think so, but no, it doesn’t. Turns out, S3 calculates the signature using the Host header, not the hostname in the URL. When CloudFlare forwards the request to S3, it sends your custom domain “mybucket.example.com” as the Host header even though the hostname in the URL is “my-s3-bucket.s3.us-east-1.amazonaws.com”.

The Solution

The only way S3 will be happy is if the hostname in the URL and the Host header both say “my-s3-bucket.s3.us-east-1.amazonaws.com”. Again – that’s easy in CloudFront. But CloudFlare is designed to always send the custom URL in the Host header. So you’re kinda SOL.

CloudFlare basically gives you two options if you need to mess with the request that gets forwarded to S3.

  • Page Rules – CloudFlare has a handy page rule that lets you override the Host header. Easy, right? The bad news is that it’s only available to “enterprise” customers, which means $$$
  • CloudFlare Workers – CloudFlare Workers let you write a JavaScript function that handles a request. Think of it like a Lambda, but with nearly zero maintenance. Also, it’s free (at least for now). A short worker can rewrite the Host header – see the sketch after this list. I won’t write a tutorial on workers, but when you set up a worker you list the routes it receives. In this case I’d put “mybucket.example.com/*”, meaning it gets all requests for this subdomain. If you still have a CNAME set up for “mybucket”, its target no longer matters because the worker intercepts the request first. Personally I like to point it at “unused-see-cloudflare-workers.example.com”.
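
Here’s a minimal sketch of such a worker. It rewrites the URL so that both the hostname and the Host header CloudFlare sends upstream point at the real S3 endpoint (the bucket hostname is the example from above):

addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // Rewriting the URL makes fetch() send a Host header that matches
  // the S3 hostname, which is exactly what the signature check needs.
  const url = new URL(request.url);
  url.hostname = 'my-s3-bucket.s3.us-east-1.amazonaws.com';
  return fetch(new Request(url.toString(), request));
}

With this in place, you sign against the real S3 hostname (the find/replace from Problem #1), hand the custom-domain URL to the user, and the worker quietly translates on the way through.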

Problem #2. Buckets with Dots

TLDR: Do not use dots in your bucket name

For many years, AWS S3 only allowed you to serve a custom domain if your bucket’s name exactly matched that domain. For example, “my-bucket” isn’t a hostname, so it couldn’t back a custom domain. So it became a best practice to name your bucket “my-bucket.example.com” so that it could be served at that domain.

But what about HTTPS requests? S3 has a wildcard SSL certificate for “*.s3.us-east-1.amazonaws.com” which automatically lets you serve assets via HTTPS to your users or to your CloudFlare instance. But what if you have a dot in your bucket name? [Record Scratch]

A wildcard certificate only matches a single DNS label: “my-s3-bucket.s3.us-east-1.amazonaws.com” is covered, but “my-bucket.example.com.s3.us-east-1.amazonaws.com” is not, because the dots in the bucket name span multiple labels. So if you have a dot, you basically don’t get SSL for your bucket. You have to use CloudFront. There are old path-style URLs where the bucket name goes in the path instead of the subdomain, but those are deprecated. We’ll see if that actually happens.
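
For reference, a path-style URL puts the bucket name in the path, so the hostname never contains your dots:

https://s3.us-east-1.amazonaws.com/my-bucket.example.com/somefile.pdf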

Problem #3. CORS

This actually has the same solution as problem 1, but it’s still worth understanding the root issue.

CORS headers tell a browser whether it can load the file client-side. In short, if the file’s “Access-Control-Allow-Origin” response header is set to “*” (any origin) or to the origin of the current webpage, then the browser allows the file to load.

The trouble is that S3 only sends that header when your bucket has a CORS configuration and the request includes an Origin header – and CloudFlare’s cache can end up serving a copy of the file that was stored without it. Rather than fight that, the same worker from Problem #1 can simply set the CORS headers on the response itself.
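
Here’s a sketch that extends the worker to stamp a permissive CORS header on every response (a drop-in replacement for handleRequest above, assuming “*” is acceptable for your content):

async function handleRequest(request) {
  const url = new URL(request.url);
  url.hostname = 'my-s3-bucket.s3.us-east-1.amazonaws.com';
  const response = await fetch(new Request(url.toString(), request));

  // Responses from fetch() are immutable; clone before touching headers.
  const modified = new Response(response.body, response);
  modified.headers.set('Access-Control-Allow-Origin', '*');
  return modified;
}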
