Colourising Video with OpenFaaS Serverless Functions

How do you deploy a serverless function to colourise videos using machine learning? Take OpenFaaS & combine it with Caffe to deploy an autoscaling service.

Introduction

In a previous post I talked about how to deploy serverless functions to an OpenFaaS cluster with ease. This post expands on that to show how we deployed a machine learning algorithm to colourise black & white videos using OpenFaaS. We presented the end result at DockerCon EU 2017 in the Community Theatre in Copenhagen.

In the beginning

After DockerCon in April of this year, I spotted this post on Mashable with some really cool examples of colourised photos from World War Two. It really hit home with me: the colourised photos bring the era to life, and colour gives you far more context than monochrome does.

I started to wonder how feasible it would be to do this in an automated way. My friend Oli Callaghan explained how it would be possible with machine learning, so we began to think about how we could actually build it.

The approaches

Custom solution?

Oli jumped right in and started developing a neural network from scratch with C++ & OpenCL. Progress was good, but we weren’t going to be able to complete the program before DockerCon. We also found the network quite slow to train, because we didn’t yet know how to optimise the learning with techniques like better loss functions. The source code for this is available on GitHub.

Even though we decided not to use this approach in the end, we learnt an awful lot about how neural networks work and the theory behind them. That knowledge is really valuable, because we can now apply it to anything else we do with machine learning.

TensorFlow?

TensorFlow is a machine learning framework developed by Google which gives you an easy interface for building machine learning models. We attempted to train our own model using TensorFlow, but ran into memory & performance issues: we simply couldn’t train the network fast enough to have it ready in time for DockerCon.

Caffe pre-trained model

The final approach was to utilise a model trained by the authors of the research we were building upon: PhD students at the University of California, Berkeley. They wrote the original network using Caffe and trained it on images from ImageNet (a big database of images from the internet). As a result it works really well, because it has been trained on a lot of data (~4.5M images).
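For a flavour of what running a pre-trained Caffe colourisation model looks like, here’s a minimal sketch based on the demo code published alongside that research. The prototxt/caffemodel file names and the blob names (data_l, class8_ab) come from that public release and are assumptions here, not necessarily our exact code:

```python
import numpy as np
import caffe
from skimage import color, io, transform

# Load the pre-trained network and its weights (released by the authors)
net = caffe.Net("colorization_deploy_v2.prototxt",
                "colorization_release_v2.caffemodel", caffe.TEST)

img = io.imread("input.jpg")
lab = color.rgb2lab(color.gray2rgb(color.rgb2gray(img)))  # greyscale -> Lab

# The network takes a resized, mean-centred lightness (L) channel...
L_rs = transform.resize(lab[:, :, 0], (224, 224))
net.blobs["data_l"].data[0, 0, :, :] = L_rs - 50

# ...and predicts the colour (a, b) channels at low resolution
net.forward()
ab = net.blobs["class8_ab"].data[0].transpose((1, 2, 0))
ab = transform.resize(ab, img.shape[:2])

# Recombine the original lightness with the predicted colour channels
out = color.lab2rgb(np.dstack((lab[:, :, :1], ab)))
io.imsave("output.jpg", (out * 255).astype(np.uint8))
```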

Function architecture

The app has three main components.

Firstly, there is a tweetlistener service deployed on Docker Swarm which simply listens for tweets matching certain criteria, saves the image into our Minio data store & calls the first function. GitHub source

The second component (colorise) is an OpenFaaS function which colourises an image from Minio & calls the tweetpic function with the filename & tweet ID of the converted image. GitHub source

Finally, the tweetpic function will retrieve the image from Minio & tweet it back to the original tweeter. GitHub source

From top to bottom, a full execution takes about 10s.
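To make the hand-off between the functions concrete, here’s a minimal sketch of what the colorise step could look like, assuming the classic OpenFaaS watchdog model where a function reads its input from stdin and writes its result to stdout. The bucket names, the payload format and the run_model.py helper are illustrative, not the project’s actual code:

```python
import os
import subprocess
import sys

import requests
from minio import Minio

GATEWAY = "http://gateway:8080"  # assumed gateway address inside the swarm

def handle(req):
    # Assume the tweetlistener passes "<filename> <tweet_id>"
    filename, tweet_id = req.strip().split()

    client = Minio("minio:9000",
                   access_key=os.environ["MINIO_ACCESS_KEY"],
                   secret_key=os.environ["MINIO_SECRET_KEY"],
                   secure=False)

    # Pull the greyscale image, colourise it, push the result back
    client.fget_object("greyscale", filename, "/tmp/" + filename)
    subprocess.run(["python", "run_model.py",  # hypothetical wrapper around
                    "/tmp/" + filename,        # the Caffe model shown earlier
                    "/tmp/colour-" + filename], check=True)
    client.fput_object("colour", filename, "/tmp/colour-" + filename)

    # Hand off to the next function in the pipeline via the gateway
    requests.post(GATEWAY + "/function/tweetpic",
                  data="{} {}".format(filename, tweet_id))

if __name__ == "__main__":
    handle(sys.stdin.read())
```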

Deployment strategy

Having decided on our method for running the network, we next needed to work out how to deploy it so we could demo it at DockerCon. It turns out that OpenFaaS (the open source framework for deploying serverless functions) makes this really simple, because it can deploy anything which will run in a container. The code for executing the network was written in Python, which made it really easy to deploy to OpenFaaS.

Having got a PoC running on a small cloud server, we started to load test it. OpenFaaS automatically scaled our functions; however, the processes were too CPU-intensive (we only had two CPUs) for us to deploy this at any kind of scale. We started running out of memory, and the CPU would block when trying to convert more than one image at a time.

To combat this issue, we got in touch with Packet, a bare metal server provider. They very kindly gave us some credit, which allowed us to deploy a much more powerful server on their infrastructure: 32GB of DDR4 memory & 4 physical cores, backed by an SSD - pretty sweet! We were able to run 16 replicas of our colourisation function without the machine even being stressed. This came in super handy when the delightful people at DockerCon hammered our service during the talk.

We chose to deploy on Docker Swarm instead of Kubernetes, simply because I have more knowledge of swarm than k8s. OpenFaaS supports both out of the box, so it’s entirely up to you.
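For reference, an OpenFaaS function like ours is described in a stack file and deployed with faas-cli. Here’s a hedged sketch of what such a file might look like - the image name, handler path and scaling bounds are illustrative, not our exact configuration:

```yaml
provider:
  name: faas
  gateway: http://localhost:8080

functions:
  colorise:
    lang: python
    handler: ./colorise
    image: example/colorise:latest   # illustrative image name
    labels:
      # assumed scaling bounds; OpenFaaS adjusts replicas between these
      # limits in response to Prometheus alerts from the gateway
      com.openfaas.scale.min: "4"
      com.openfaas.scale.max: "16"
```

Deploying (and re-deploying) is then a single faas-cli deploy -f stack.yml.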

Our DockerCon audience didn’t fail to delight us with their tweets!

Community

The OpenFaaS community has been very supportive of us, helping test, develop and inspire Oli and me to keep improving the project. Although the original idea was ours, the development and deployment has been a group effort.

In particular, we’d like to say a massive thank you to OpenFaaS’s project lead, Alex Ellis, for his continued help and support. He’s been a great mentor, giving us invaluable advice on everything from presentation guidance to OpenFaaS setup and configuration. He’s even written a neat function which normalises the images to help with the sepia effect, which has given great results.

Another shoutout must go to Eric Stoekl for showing me how to deploy an update to the bot without any downtime, which has been really useful for shipping changes without any impact on users. (TL;DR: docker service update --image newimage)

After party

We were showing our project to some pretty cool people at the EMC party at DockerCon - including Chloe Condon and Jonas Rosland. It was amazing to receive these kind words from people I respect so much.

But what about video?

Our original goal was to colourise black & white video to bring it to life. We made it work for photos, but could we make it work for videos? Yes. And as far as we know, no one has done this with machine learning before.

This works by splitting the video up into frames, dropping them into the OpenFaaS function stack, gathering up the colourised frames and stitching them back together with ffmpeg.

Colourising video frames
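As a sketch, that frame pipeline might look something like the following, assuming a variant of the function that accepts the image over HTTP rather than via Minio; the gateway URL, function name and paths are assumptions:

```python
import glob
import subprocess

import requests

GATEWAY = "http://gateway:8080"  # assumed OpenFaaS gateway address

def colourise_video(src, dst, fps=25):
    # 1. Explode the video into numbered frames with ffmpeg
    subprocess.run(["ffmpeg", "-i", src, "/tmp/frame-%05d.png"], check=True)

    # 2. Push each frame through the colourise function; OpenFaaS spreads
    #    these calls across however many replicas are running
    for frame in sorted(glob.glob("/tmp/frame-*.png")):
        with open(frame, "rb") as f:
            resp = requests.post(GATEWAY + "/function/colorise", data=f.read())
        with open(frame.replace("frame-", "colour-"), "wb") as out:
            out.write(resp.content)

    # 3. Stitch the colourised frames back together at the original frame rate
    subprocess.run(["ffmpeg", "-framerate", str(fps),
                    "-i", "/tmp/colour-%05d.png", dst], check=True)
```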

Final thoughts

We showed that it’s possible to deploy an advanced machine learning algorithm on OpenFaaS - zero configuration, zero hassle. Just set up your function, pipe the input in and pipe the output back out the other end.

We’ve also shown how easy it is to learn about machine learning using popular frameworks such as TensorFlow & Caffe. Using these frameworks, it’s very easy (although not always quick) to create complex machine learning models to suit your needs.

In the future we’d like to run the conversions on GPUs so that we can leverage the power of graphics processors to run our network even faster. We think we can decrease execution time by a factor of 100 (5s -> 50ms) by using a GPU.

We’d also like to try a recurrent network, which learns from the frames that came before the current one. This should help with the video conversion, which currently appears to flicker because some of the frames are colourised slightly differently.

Our slides are available here and you can watch our DockerCon talk below. Oli has also written a post about the ins and outs (pun intended) of machine learning, which goes over how we adapted our program for video.

Feel free to have a play with @colorisebot on Twitter

I’ve got a cool little Grafana dashboard linked up to the OpenFaaS Prometheus metrics so I can see what’s going on in real time. We’ve had quite a lot of traffic recently after being featured on The Next Web, which has led to a very busy period for the bot - it’s been colourising up to 100 images per hour on Twitter! We also got posted on Product Hunt (upvotes welcome! 🔥), which is a first for me 😁

Can @colorisebot be called viral yet?

The 20 failed tweetbacks were due to hitting Twitter’s API limits again; that has now been solved by some clever code written by Alex which requeues failed function calls - stay tuned for more info on that!

It should be noted that @colorisebot is not intended to be a replacement for professional image colourisation. We wanted to build a proof of concept using machine learning, make it highly accessible via Twitter and see how fast we could make it run.
