It’s ML-with-everything, as AWS equips builders for (AI) business
Insofar as a conference as wide-ranging as AWS re:Invent can have a focus, last week’s main message was that developers (“builders”, in CEO Andy Jassy’s parlance) need to embrace machine learning techniques in order to deliver the next generation of applications everywhere. It’s AWS’ mission to make that as accessible as possible.
It’s fair to say that, even with 42,000 delegates at AWS re:Invent, there was probably still something for everyone. AWS is such a force in the market, with services ranging from IaaS through the deep, wide, blurry platform world of PaaS and more specific ‘functional PaaS’, right up to essentially SaaS offerings like WorkDocs (plus the thousands of SaaS products offered through AWS Marketplace), that re:Invent keynote announcements see it striving to illustrate growth and innovation in all corners of its empire. The event brought the usual slew of launches in areas from containers and developer tools, through load balancers, storage, compute, databases (SQL, NoSQL, and Graph), IoT, machine learning, etc. all the way up the stack to media services, and Alexa for Business.
Cutting through the volume of announcements, there were some clear signposts to where the company will concentrate its effort over the next 12 months. 2016 was The Year of Alexa and its underpinning text-to-speech (Amazon Polly), conversational interface (Amazon Lex), and image recognition (Amazon Rekognition) services – designed to encourage a wealth of AI ‘skills’ development on the Alexa platform for Amazon’s consumer devices. This year AWS sought to in-fill its coverage of the machine learning landscape with a raft of new services, all designed to make the process of developing ML applications ‘easy’:
- Amazon SageMaker is a managed service for simplifying the process of building, training, and deploying ML models – slotting in between higher level services such as Polly, Lex, and Rekognition, and lower level ML frameworks like TensorFlow, Apache MXNet, etc.
- Amazon Rekognition Video added ‘real-time’ video analysis to AWS’ vision services.
- Language services got a boost with the launch of the Amazon Comprehend natural language processing service (which can extract entities such as people, places, brands, etc., group files by topic, and perform sentiment analysis), the Amazon Translate translation service, and the Amazon Transcribe speech-to-text service that features ML-powered auto-punctuation and promises to add support for multiple speakers.
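These language services are exposed through the AWS SDKs. As a rough sketch of what working with Comprehend looks like, the snippet below parses a DetectSentiment-style response; the live call (shown in comments) would go through boto3, but here a canned response stands in so the example runs offline – the sample scores are invented for illustration:

```python
# Sketch: handling an Amazon Comprehend DetectSentiment-style response.
# A live call would look roughly like:
#   import boto3
#   client = boto3.client("comprehend")
#   resp = client.detect_sentiment(Text=text, LanguageCode="en")
# The canned response below mimics the documented shape (a Sentiment
# label plus per-label confidence scores); the numbers are made up.

sample_response = {
    "Sentiment": "POSITIVE",
    "SentimentScore": {
        "Positive": 0.93,
        "Negative": 0.01,
        "Neutral": 0.05,
        "Mixed": 0.01,
    },
}

def dominant_sentiment(resp):
    """Return the label and score of the strongest sentiment signal."""
    scores = resp["SentimentScore"]
    label = max(scores, key=scores.get)
    return label, scores[label]

label, score = dominant_sentiment(sample_response)
print(label, score)  # → Positive 0.93
```

The same client object would also serve Comprehend’s entity extraction and topic grouping; Translate and Transcribe follow the same SDK pattern with their own clients.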
Building on last year’s announcements, which cast connected devices as the on-premises element in a hybrid cloud (AWS Greengrass bringing Lambda functions to the IoT, etc.), the company has now bolstered its presence at the ‘edge’ by bringing ML to IoT with the launch of AWS DeepLens – a programmable video camera that comes ready-made for machine learning applications, with on-board processing to run deep learning models locally, and connectivity to AWS via the new Amazon Kinesis Video Streams service (which continuously captures and stores streaming data such as video) for heftier workloads in the cloud.
So what should we make of AWS’ march to occupy the centre ground of ML / deep learning? It’s certainly not the first to offer such services, of course. Its competitors here (IBM Watson and Watson IoT Platform, Google Cloud Platform, Microsoft Azure) have boasted video recognition, sentiment analysis, transcription services, etc. for some time now – and many have been busy sealing partnerships with SaaS players to power the application of these technologies to make content management and collaboration suites work smarter (such as Box Skills, which we covered from BoxWorks a couple of months ago; and Alfresco Content Services extending processing workflows out to Amazon Rekognition).
What makes AWS’ move significant is that it brings a large and mature developer / partner ecosystem with it. Last week’s announcements increase the amount of ‘heavy lifting’ AWS is prepared to offer to entice customers to build for the ML future it sees for all businesses. It’s lowering the barrier to entry for anyone looking to power their applications with deep learning models – and making its bedrock storage and compute services stickier by doing so. The message is clear: if you’re prepared to go all-in with AWS, then you can more easily become all-in with AI.
To paraphrase Andy Jassy, the software developer is dead… long live the machine learning engineer.