Amazon unveiled AWS DeepLens at re:Invent in November 2017, and we knew we’d find some interesting uses for it. We managed to get our hands on one almost six months early, running our first successful Proof of Concept on 14 June 2018 – the same day the device became generally available.
As one of the first AWS consulting partners across Europe to run a successful PoC using AWS DeepLens, we wanted to share our story with you.
What is AWS DeepLens?
AWS DeepLens is not just a video camera – it is the world’s first deep-learning-enabled developer kit. The tight integration between hardware, software and the cloud has clearly been thought through and is unique in the market. Small touches like its ability to run inference on video (and probably audio) inputs in the camera itself, without shipping the captured media to the cloud, will save you a lot of time – and could save you money too.
As part of their developer enablement, AWS have created tutorials, code, and pre-trained models designed to expand deep learning skills. AWS DeepLens integrates with Amazon Rekognition for advanced image analysis, Amazon SageMaker for training models, and with Amazon Polly to create speech-enabled projects. The device also connects securely to AWS IoT, Amazon SQS, Amazon SNS, Amazon S3, Amazon DynamoDB, and more.
AWS DeepLens is easy to customise and is fully programmable using AWS Lambda. Models trained in Amazon SageMaker can be deployed straight to the device.
Our Proof of Concept
Our customer is one of the largest catering companies in the world. They have thousands of contracted employees working at their events and were registering them with an Excel spreadsheet of names and details.
They were looking for an innovative way of recognising and registering their contractors better, and they wanted something that could integrate with Amazon SNS in the future.
They chose Claranet and AWS as the right partners to deliver this solution – we’d worked with them before, and they knew our expertise, the AWS technology, and the strength of our longstanding partnership. Their annual event was selected for the Proof of Concept because they knew automatically registering attendees by simply standing them in front of a camera would start the day with a little wow factor.
What did we actually do?
AWS DeepLens makes machine learning easy, with two simple steps: Train and deploy.
We were given sample pictures of the attendees, which were uploaded to an S3 bucket. The expectation was that when the same person stood in front of the camera, Amazon Rekognition would recognise them and welcome them to the event. Amazon Rekognition detects faces in an image or video using facial landmarks, such as the position of the eyes, and can also detect emotions (such as appearing happy or sad). When you provide an image that contains a face, Amazon Rekognition detects the face, analyses its facial attributes, and returns a percentage confidence score for the face and for each attribute it detects.
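The matching side of this can be sketched in a few lines of Python. The helper below picks the strongest match from a response shaped like Rekognition’s SearchFacesByImage output; note that `best_match` and the sample response are illustrative inventions for this sketch, not output from a real API call.

```python
# Hypothetical helper: pick the highest-confidence match from a
# Rekognition-style SearchFacesByImage response. The sample response
# below is illustrative data, not a real API result.

def best_match(response, threshold=90.0):
    """Return (external_image_id, similarity) for the strongest face
    match at or above `threshold`, or None if nothing qualifies."""
    matches = [
        (m["Face"]["ExternalImageId"], m["Similarity"])
        for m in response.get("FaceMatches", [])
        if m["Similarity"] >= threshold
    ]
    return max(matches, key=lambda m: m[1]) if matches else None


sample_response = {
    "FaceMatches": [
        {"Similarity": 99.2, "Face": {"ExternalImageId": "jane-doe"}},
        {"Similarity": 91.5, "Face": {"ExternalImageId": "john-smith"}},
    ]
}

print(best_match(sample_response))  # → ('jane-doe', 99.2)
```

In a real deployment you would call Rekognition via boto3 against a face collection indexed from the S3 bucket of sample pictures, then use a filter like this to decide whether to welcome the attendee.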
Claranet built the PoC solution in less than 10 days to ensure it was ready in time for the event. On the day, we recorded a 100% success rate in facial recognition. This obviously went down incredibly well, and created a bit of a talking point (in the right circles, of course).
Do you need AWS DeepLens to do this?
Honestly, no. But you’d probably save a lot of time and effort doing it with AWS DeepLens. Once AWS exposes the software layer that runs inside the kit, you’ll probably be able to emulate a similar system on your desktop (or a Raspberry Pi) fairly easily.
Is It Expensive?
As usual, it depends on what you’re doing. The device costs just $249 direct from Amazon, which isn’t going to break the bank, but you need to factor in the costs of the supporting AWS services (e.g. AWS Greengrass and Amazon SageMaker). AWS Greengrass starts at $1.49 per device per year, and Amazon SageMaker costs roughly 20–25% on top of the usual Amazon EC2 prices – neither is a problem on its own, but they can add up on a big project. For our PoC, consumption was so low that everything fell within the AWS Free Tier.
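As a rough back-of-envelope sketch, assuming the figures above (Greengrass at $1.49 per device per year, and the upper 25% SageMaker markup over an equivalent EC2 hourly rate – the $0.40/hour instance price is a made-up example):

```python
# Back-of-envelope cost sketch using the figures from the text.
# All prices are assumptions for illustration, not a quote.

GREENGRASS_PER_DEVICE_YEAR = 1.49   # USD per device per year
SAGEMAKER_MARKUP = 0.25             # 25% on top of EC2 (upper bound)

def yearly_greengrass_cost(devices):
    """Total yearly Greengrass cost, rounded to cents."""
    return round(devices * GREENGRASS_PER_DEVICE_YEAR, 2)

def sagemaker_hourly_rate(ec2_hourly_rate):
    """SageMaker hourly rate given the equivalent EC2 rate."""
    return round(ec2_hourly_rate * (1 + SAGEMAKER_MARKUP), 4)

# Example: 100 devices, training on an instance whose EC2 equivalent
# costs a hypothetical $0.40/hour.
print(yearly_greengrass_cost(100))   # → 149.0
print(sagemaker_hourly_rate(0.40))   # → 0.5
```

Even at a hundred devices, the per-device fees stay modest – the training hours are where a large project’s bill would grow.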
It’s worth remembering that you save time by running inference on the device instead of shipping data to the cloud and back – and that will save you money compared with alternative solutions.
The potential of DeepLens and AWS Machine Learning technologies is vast, and launching the device is just the start. Visitor Management Systems, Sentiment Analysis, Attendee Registration, Security Systems and ML project builds are already here.
Amazon are working with developers across the world to make AWS DeepLens samples ready to go without you having to write a single line of additional code. At a recent AWS Hackathon, I saw a DeepLens project that could be used by people suffering from Alzheimer’s to recognise old friends and family.
Our customer plans to build on this PoC and use the device for employee registration and event management. The data collected from events can be used to build a data lake and shed light on areas they aren’t currently looking at, like consumer behaviour. That’s when we can use AWS’s strong set of analytics tools, like Amazon Redshift, to get really clever. Alexa integration could also be coming our way, which would allow the solution to call out who it’s looking at and answer questions.
We keep hearing about Machine Vision (or Computer Vision), Deep Learning (or Machine Learning), Serverless, IoT, Edge Computing, and this device is all of these words, but also none of them. It is a first tentative step toward what a camera module will look like in five years, but it’s incredible to play with.
AWS DeepLens is proof that edge computing is here and is real. This is an exciting time to be in the industry, and I knew we could trust Amazon to help make technology like this easy and accessible to users. Claranet has the capabilities and skills to deliver ML, IoT and Big Data projects on AWS, and devices like this will give people the confidence to try. Exciting times ahead!