Uploading files to S3 with React Native and Ruby on Rails

Uploading files to AWS S3 using presigned URLs can be tricky; here's how I got it working.

I've recently been working on a top-secret React Native project, powered by a Rails backend deployed to Heroku.

The app allows users to upload images/videos to our backend as part of one of the flows. Originally, I was using a multipart/form-data request to upload the data + files directly to our backend, which would then upload them to S3.

This approach has a couple of main disadvantages:

  • Heroku aborts requests that take longer than 30s, which can happen when uploading many files, or large ones
  • It increases the load on your backend

This post is specifically about AWS S3, but GCP has a Signed URLs feature too.

Presigned URLs in S3 allow us to generate a signed request from our backend which we can give to the client, enabling them to upload to S3 directly. This solves both issues outlined above, with the added benefit of not exposing AWS credentials to the client. Nice.

This all sounds great in theory, so I started hacking.

Generating a Presigned URL

The first step is to generate a URL that the client can upload to.
For this to happen, we need to know a few things about what the client intends to upload:

  • filename
  • size (bytes)
  • type (mimetype)
  • checksum

This is so that S3 can verify the client is uploading what they told us they would. Otherwise, they could upload any file they wished, which may or may not be an issue in your application.

On the backend, we take those fields from the request and create an ActiveStorage::Blob from them.
Then we return a JSON object containing all the information the client needs to upload the file to S3. Example response:

{
  "data": {
    "url": "https://bucket.s3.region.amazonaws.com/etc",
    "headers": {
      "Content-Type": "image/jpeg",
      "Content-MD5": "3Tbhfs6EB0ukAPTziowN0A=="
    },
    "signed_id": "signedidoftheblob"
  }
}
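
In TypeScript terms, the contract between the app and the backend looks roughly like this (the field names are illustrative, not prescribed by Rails; match them to your own API):

interface PresignedUrlRequest {
  filename: string;
  byte_size: number;    // size in bytes
  content_type: string; // mimetype
  checksum: string;     // base64-encoded MD5 digest (more on this below!)
}

interface PresignedUrlResponse {
  data: {
    url: string;
    headers: Record<string, string>; // Content-Type and Content-MD5
    signed_id: string;               // signed id of the ActiveStorage blob
  };
}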

So far, so good.

Uploading the file to S3

I wrote some code in the mobile app to request a signed URL and got a successful response, so I started refactoring the actual upload to use the new URL. This was where the problems started.

Every single time I tried to upload a file to S3 using the presigned URL, I got one of the following errors, no matter what I tried:

  • SignatureDoesNotMatch
  • InvalidDigest
  • BadDigest

I also tried manually sending the file through Postman to rule out any library-specific issues, but still no go. At one point, I got it to create a file but it had no content - still not sure how that happened.
I was running this in my terminal (macOS) to generate a MD5 checksum of the file:

$ md5 test.jpg
MD5 (test.jpg) = a98c48553050f5d651cde8a46ee364ff  

After at least two days of head banging, I stumbled upon this post from Cloudway.
As it turns out, the checksum header (Content-MD5) needs to be a base64-encoded 128-bit MD5 digest. This was the root of my problems - I wasn't encoding the checksum as base64 before sending it to our backend, or S3.

Converting to base64 (again, macOS):

$ openssl dgst -md5 -binary test.jpg | base64
qYxIVTBQ9dZRzeikbuNk/w==  

Aha! That looks better.

I tried the whole cycle again (request a presigned URL from our backend, then send the file to that), using the new base64-encoded checksum. Voilà! That worked.

It would be nice if S3 could be a little more specific about what the issue with the request actually is (BadDigest isn't particularly helpful). Hey ho, one can dream.

App code

Here is the file upload code for the mobile app (React Native, TypeScript). The full version is in the gist linked at the end; what follows is a condensed sketch:
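
const uploadToS3 = async (
  presignedUrl: string,
  headers: Record<string, string>,
  fileUri: string,
): Promise<void> => {
  // Read the local file into a Blob (React Native's fetch supports file:// URIs)
  const blob = await (await fetch(fileUri)).blob();

  const response = await fetch(presignedUrl, {
    method: 'PUT', // presigned uploads want PUT, not POST
    headers,       // must include the Content-Type and Content-MD5 we signed
    body: blob,
  });

  if (!response.ok) {
    throw new Error(`Upload failed with status ${response.status}`);
  }
};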

Obviously, you must have already retrieved the presigned URL, headers and signed_id from your backend before hitting this code. I won't cover that here as it's pretty straightforward.

The only tricky part was getting the correct checksum of the file, so I've included that below. This sketch assumes react-native-fs for the hashing and the buffer package for the hex-to-base64 conversion:
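
import RNFS from 'react-native-fs';
import { Buffer } from 'buffer';

// S3 wants Content-MD5 to be the base64-encoded 128-bit digest, but
// react-native-fs hands back a hex string, so we re-encode it
const getChecksum = async (filepath: string): Promise<string> => {
  const hexDigest = await RNFS.hash(filepath, 'md5');
  return Buffer.from(hexDigest, 'hex').toString('base64');
};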

Next, you just loop through each file and, in parallel (see the sketch after this list):

  • retrieve a presigned URL for each (using the checksum from above, size of the file, filename and mimetype)
  • store the presigned URL, headers and signed_id somewhere
  • send a request to AWS for each file (note: it needs to be a PUT request, not POST)
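
Putting it all together, with requestPresignedUrl standing in for whatever call you make to your backend:

interface LocalFile {
  uri: string;
  name: string;
  size: number; // bytes
  type: string; // mimetype
}

// Placeholder for your own API client; it implements the
// PresignedUrlRequest/PresignedUrlResponse contract from earlier
declare function requestPresignedUrl(
  params: PresignedUrlRequest,
): Promise<PresignedUrlResponse>;

// Upload every file in parallel and collect the signed_ids for later
const uploadAll = async (files: LocalFile[]): Promise<string[]> =>
  Promise.all(
    files.map(async (file) => {
      const checksum = await getChecksum(file.uri);
      const { data } = await requestPresignedUrl({
        filename: file.name,
        byte_size: file.size,
        content_type: file.type,
        checksum,
      });
      await uploadToS3(data.url, data.headers, file.uri);
      return data.signed_id;
    }),
  );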

If you intend to create some kind of record on your server to attach the files to (in our case, it's called a Report), you need to get hold of all the signed_ids.
When you create the record on your backend, pass it the signed_ids; ActiveStorage will understand them and automatically link the Blobs we created above.

Linking the Blob(s) to the record

In the controller for creating a Report on our backend, the only extra work is permitting an attachments field on the request: an array of the signed_ids. Pass it through when creating the record and ActiveStorage will handle the rest 🥰
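
From the app, that create request might look something like this (the endpoint and the title field are made up; attachments is the part that matters):

// signedIds is the array returned by uploadAll above
const createReport = async (signedIds: string[]) =>
  fetch('https://your-backend.example.com/reports', { // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      report: {
        title: 'My report',     // whatever fields your record needs
        attachments: signedIds, // ActiveStorage links the Blobs from these
      },
    }),
  });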

That should be it!

If you're using Expo or Google Cloud Storage...

This section was added on 23rd May ‘22

If you’re using Expo, you may run into some additional issues.

As react-native-fs isn’t available for Expo, you’ll need to use the expo-file-system package instead. In particular, FileSystem.getInfoAsync is your friend (see the code snippet below), but I was stumped for a while again whilst working with urql (a GraphQL client), Expo and Google Cloud Storage (GCS). The familiar BadDigest and InvalidDigest errors surfaced again…
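
Here’s the checksum step again with expo-file-system; it’s the same hex-to-base64 dance as before, and getInfoAsync can hand you the file size at the same time. The buffer import is the same assumption as earlier:

import * as FileSystem from 'expo-file-system';
import { Buffer } from 'buffer';

// getInfoAsync can compute the MD5 digest (as hex) and the size in one call
const getFileInfo = async (uri: string) => {
  const info = await FileSystem.getInfoAsync(uri, { md5: true, size: true });
  if (!info.exists || !info.md5) {
    throw new Error(`Could not read file at ${uri}`);
  }
  return {
    size: info.size, // bytes
    checksum: Buffer.from(info.md5, 'hex').toString('base64'),
  };
};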

Finally, I was able to make it work by forcing Active Storage to use v4 GCS signed URLs. You can achieve this by setting the cache_control property of your service in storage.yml. I’m not sure why this setup doesn’t seem to work with v2, but there you go. Maybe it’ll help you.

More info on v2/v4 GCS signed URLs can be found here: https://cloud.google.com/storage/docs/access-control/signed-urls#types

Conclusion

This whole process took a lot longer than I was expecting: I'd originally budgeted a morning's work, but it ended up taking a solid two days.

Here's all the code in one go: https://gist.github.com/developius/1fa35f2192b886dfce4e7f4eaed8b923

I read every post under the sun about RN + S3 uploads, the entire Rails docs for Direct Uploads, plus a slew of other random posts too.

Update 23rd May ‘22

Félix Pignard also reached out to me with some suggested edits, a helpful link regarding the use of axios with GCS, and some sample code for GCS, axios and Expo. Cheers Félix!

Hopefully this is all helpful to someone 😁
