Google open-sources mobile-first computer vision models for TensorFlow

A diagram illustrates the capabilities of Google's MobileNets.
Image Credit: Google

Google is helping smartphones better recognize images without requiring massive power consumption, thanks to a new set of models the company released today. Called MobileNets, the pre-trained image recognition models let developers pick between a set of models that vary in size and accuracy to best suit what their application needs.

Right now, a lot of the machine learning inside mobile apps works by passing data off to cloud services for processing and then returning the resulting insights to users over the network. That approach can draw on very powerful computers in a data center and alleviate the processing burden on the smartphone. The drawback is that latency and privacy suffer.

By processing data on a user’s smartphone, it’s possible to return results a lot faster, and data never has to leave the phone. However, optimizing a machine learning model for use on mobile is a tall order. Eating up a bunch of battery with computationally intensive machine learning operations is no good.

Above: A table shows key statistics about the different MobileNet models Google made available on June 14, 2017.

Image Credit: Google

That’s where MobileNets come in: Google has handled all of the optimization ahead of time, so developers just need to implement a model in their application. The models range from one that uses 569 million multiply-add operations per inference to one that needs just 14 million.

In general, the more operations one of the MobileNet models uses, the higher its accuracy, at the cost of a heavier load on the device’s resources.
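As a sketch of that tradeoff, a developer could pick the most accurate variant that fits an operation budget. Only the 569 million and 14 million multiply-add endpoints come from this release; the variant names, intermediate models, and accuracy figures below are illustrative assumptions, not official benchmarks.

```python
# Hypothetical table of MobileNet variants:
# (name, multiply-adds in millions, approximate ImageNet top-1 accuracy).
# The 569M and 14M figures are from Google's announcement; the rest
# are illustrative placeholders.
MOBILENET_VARIANTS = [
    ("MobileNet_v1_1.0_224", 569, 0.71),
    ("MobileNet_v1_0.75_192", 233, 0.67),
    ("MobileNet_v1_0.5_160", 76, 0.59),
    ("MobileNet_v1_0.25_128", 14, 0.42),
]

def pick_model(max_madds_millions):
    """Return the most accurate variant within the operation budget."""
    candidates = [v for v in MOBILENET_VARIANTS
                  if v[1] <= max_madds_millions]
    if not candidates:
        raise ValueError("No MobileNet variant fits this budget")
    return max(candidates, key=lambda v: v[2])
```

For example, a budget of 100 million multiply-adds would rule out the two largest variants and select the most accurate of the remaining two.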

It’s a move by Google to capitalize on a trend of increased local machine learning processing. The news comes a month after the company revealed TensorFlow Lite, its framework for running machine learning models created using TensorFlow more efficiently on low-power Android devices.

Developers can deploy the models now using TensorFlow Mobile, a system that is designed to help with deploying models onto Android, iOS, and Raspberry Pi.

This release builds on work that Google published in a paper earlier this year.
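That paper attributes the models’ efficiency to depthwise separable convolutions, which split a standard convolution into a per-channel depthwise filter followed by a 1×1 pointwise convolution. A back-of-envelope comparison of the multiply-add counts, using the paper’s cost formulas with example layer dimensions chosen here for illustration, can be sketched as:

```python
def conv_madds(dk, m, n, df):
    """Multiply-adds for a standard dk x dk convolution:
    kernel area * input channels * output channels * output map area."""
    return dk * dk * m * n * df * df

def separable_madds(dk, m, n, df):
    """Depthwise separable convolution: a dk x dk depthwise filter per
    input channel, then a 1x1 pointwise convolution across channels."""
    depthwise = dk * dk * m * df * df
    pointwise = m * n * df * df
    return depthwise + pointwise

# Illustrative layer: 3x3 kernel, 512 input channels, 512 output
# channels, 14x14 output feature map.
standard = conv_madds(3, 512, 512, 14)
separable = separable_madds(3, 512, 512, 14)
```

For these dimensions the separable form needs roughly 8 to 9 times fewer multiply-adds, which is where most of the models’ savings come from.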
