INSIGHTS

Why the Future of Machine Learning is Tiny, and the Future of AI is Augmented

Issue 15 - TinyML and Augmented AI

The content and distribution of Azafran’s INSIGHTS newsletter is focused on our LP, incubator, research, investment and partner ecosystem. As we look to build a two-way dialogue benefitting our collective efforts, each month we highlight important news and our approach to the emerging intersection of deep technology and end-to-end solutions and platforms driven by voice, acoustics and imagery.

This article is one section of an entire issue of INSIGHTS. Please sign up to receive access to past and new issues as they are published.


Augmented AI

For some time, our team has been scratching our heads at the loose use of AI as a catch-all term. It serves as both the umbrella for any technology or platform using machine learning, deep tech or science and their associated components, and as the strict definition, where AI means sentient machine intelligence making decisions without human interaction.

This dynamic is confusing for the public at large, who could be watching an episode of Battlestar Galactica (Cylons = true AI) when a commercial for IBM breaks in touting how AI is used at Wimbledon to improve scoring. In the markets where AI is at work along the lines of the first definition above (i.e. not sci-fi), the same looseness creates confusion and competing perspectives and definitions.

At Azafran, our focus is not on robotics or the strict definition of AI, but rather on machine learning and deep science/tech served by the input modalities of voice, acoustics and imagery. Through this lens, we see Augmented AI playing a key role in our Fund Two and in our near- to mid-term investments through the current decade.

Augmented AI (some call it the Future of AI) combines humans and machines to reach a decision, requiring human input, hence the augmented moniker. Further, Augmented AI by design helps humans both make decisions and eliminate tasks; it does not seek to replace the human in the equation.
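
A minimal sketch of that human-in-the-loop pattern in C, purely for illustration: classify(), its stub implementation and the 0.90 threshold are all invented here, not drawn from any particular product. Predictions the machine is confident about are applied automatically; everything else is routed to a person, whose decision can be fed back to improve the model.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical model output: a label plus the model's confidence. */
    typedef struct {
        const char *label;
        double confidence; /* 0.0 .. 1.0 */
    } Prediction;

    /* Stub standing in for real inference, so the sketch runs. */
    static Prediction classify(const char *input) {
        /* Pretend short inputs are ambiguous. */
        double conf = (strlen(input) > 10) ? 0.95 : 0.60;
        Prediction p = { "suggested edit", conf };
        return p;
    }

    /* Illustrative cutoff: above it the machine acts alone,
       below it a human makes the call. */
    #define AUTO_ACCEPT_THRESHOLD 0.90

    static void handle(const char *input) {
        Prediction p = classify(input);
        if (p.confidence >= AUTO_ACCEPT_THRESHOLD) {
            printf("auto-applied: %s\n", p.label);
        } else {
            printf("routed to human review: %s (%.0f%% confident)\n",
                   p.label, 100.0 * p.confidence);
            /* The reviewer's choice can be logged as new training data,
               which is how a tool adapts to an individual user over time. */
        }
    }

    int main(void) {
        handle("a long and unambiguous input sentence");
        handle("short one");
        return 0;
    }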

Examples of Augmented AI that our readers may have experienced include Grammarly and Adobe Photoshop’s Select Subject feature. Grammarly assists writers and editors, making suggestions based on strict grammar rules, but it also learns a particular writer’s style and over time adjusts its recommendations to fit that style. Adobe’s Select Subject examines an image and then uses fairly sophisticated Augmented AI to select the part of the image that appears to be the focus of the shot, an operation that can be done by hand but is slow, tedious and error prone.

TinyML

Shifting gears to TinyML: another technology on the cusp of advancing the benefits of machine learning, in this case to the edge and, more so, into our everyday lives.

As defined by VentureBeat: “TinyML broadly encapsulates the field of machine learning technologies capable of performing on-device analytics of sensor data at extremely low power. Between hardware advancements and the TinyML community’s recent innovations in machine learning, it is now possible to run increasingly complex deep learning models directly on microcontrollers.

“A quick glance under the hood shows this is fundamentally possible because deep learning models are compute-bound, meaning their efficiency is limited by the time it takes to complete a large number of arithmetic operations. Advancements in TinyML have made it possible to run these models on existing microcontroller hardware.”
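
To make the “large number of arithmetic operations” point concrete, here is our own minimal sketch (not VentureBeat’s) of a quantized fully connected layer in C, the kind of kernel a TinyML runtime executes on a microcontroller with no floating point unit. The dimensions, weights and input are invented for illustration.

    #include <stdint.h>
    #include <stdio.h>

    #define IN_DIM  4
    #define OUT_DIM 2

    /* Quantized weights and biases, fixed at compile time
       and stored in flash. */
    static const int8_t weights[OUT_DIM][IN_DIM] = {
        { 12, -7,  3, 25 },
        { -4, 18, -9,  6 },
    };
    static const int32_t biases[OUT_DIM] = { 100, -50 };

    /* Inference reduces to multiply-accumulates: the arithmetic
       that makes these models compute-bound. */
    static void dense_int8(const int8_t in[IN_DIM], int32_t out[OUT_DIM]) {
        for (int o = 0; o < OUT_DIM; o++) {
            int32_t acc = biases[o];
            for (int i = 0; i < IN_DIM; i++) {
                acc += (int32_t)weights[o][i] * (int32_t)in[i];
            }
            out[o] = acc; /* a production kernel would rescale and saturate here */
        }
    }

    int main(void) {
        const int8_t sensor_reading[IN_DIM] = { 5, -3, 9, 1 };
        int32_t scores[OUT_DIM];
        dense_int8(sensor_reading, scores);
        printf("scores: %ld %ld\n", (long)scores[0], (long)scores[1]);
        return 0;
    }

Frameworks such as TensorFlow Lite for Microcontrollers generate far more elaborate versions of the same idea, but the core operation is exactly this kind of integer multiply-accumulate.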

Further, as noted by Azafran General Partner Martin Fisher: “The big difference between full-stack machine learning and TinyML is in the constraints. TinyML runs in a micro space, with low energy, versus a phone, desktop, server or the cloud. You have to put a battery in there, and it has to last a few years. In terms of the code, there is no hard drive for data, so the data has to be written into the code for the chip/device to truly operate and process inputs on the edge.

“In addition, that code is probably in Assembly or C, which is not a friendly environment; it is very strict. Significant size, power, data and code constraints: that about sums up why TinyML is so challenging, but the benefits will be huge.”
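
A firmware-shaped sketch of those constraints, again ours and purely illustrative: read_sensor(), run_model() and deep_sleep_ms() are hypothetical hooks stubbed out so the code compiles on a desktop. It shows the two points Martin raises: with no drive, the model ships as a const array baked into the firmware, and the loop sleeps between readings so a battery can last for years.

    #include <stdint.h>
    #include <stdio.h>

    /* Placeholder bytes standing in for a serialized model. On a real
       device this array is generated from the trained model (e.g. with
       xxd -i) and lives in flash, because there is no filesystem. */
    static const uint8_t model_data[] = { 0x54, 0x46, 0x4c, 0x33 /* ... */ };

    /* Hypothetical hardware hooks, stubbed so the sketch runs here;
       real firmware would call the chip vendor's HAL instead. */
    static int  read_sensor(void)     { return 42; }
    static int  run_model(int sample) { (void)model_data; return sample > 40; }
    static void deep_sleep_ms(int ms) { (void)ms; /* MCU powers down here */ }

    int main(void) {
        /* Duty-cycled loop: wake, sample, infer, sleep. Spending almost
           all of its life asleep is what stretches a battery to years. */
        for (int i = 0; i < 3; i++) { /* would be for(;;) on a device */
            int sample = read_sensor();
            if (run_model(sample)) {
                printf("event detected\n"); /* e.g. fire a low-power radio alert */
            }
            deep_sleep_ms(60000);
        }
        return 0;
    }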

Thinking of the use cases and applications that are already being imagined and deployed (see Use Cases on the next page), the possibilities are staggering, ranging from sensors in fields helping grow crops to sensors listening to machines for faults.

One of our favorites is a successful initiative in the African savanna (spearheaded by Paul Allen and the Allen Institute prior to his passing), where acoustic sensors driven by machine learning have all but eliminated elephant poaching in the reserves that have implemented the technology.

Returning to the challenges of developing in the TinyML space: the chip technology and infrastructure, how you create MCUs, how you define device trees, almost all of that is in China. Getting just a prototype spun up for testing can be a months-long process, with support slow and painstaking. If developers need to make changes to the firmware, for example, they are most likely dealing with poor documentation and support, and the expertise required is highly specialized (electrical engineers), presenting a talent gap on top of these other challenges.

But the payoff is going to be a game changer, and our team expects to see a lot of movement and development in this space in the next two to three years. Ending on a note looking back to Martin’s first startup, Rainbow Software, he recalled an environment similar to the one facing TinyML companies and developers today. His team built an MS-Office-like software suite (long before Office was imagined) on Commodore 64s. Everything was done by moving data in and out of registers, with no hard drive and limited memory: not the same, but evocative of what TinyML developers face now. Martin eventually sold Rainbow to IBM, the first of his many successful exits.


© Copyright 2017 - 2020, Azafran Capital Partners, Inc.