Edge TPU Chip: Google’s Vertical Strategy to Capture Edge IoT Market

Dec 11, 2018

In our previous blog post, we highlighted the growing importance of edge computing, which complements cloud computing as we enter the intelligent IoT era. The existing incumbents (Microsoft, Amazon and others) and emerging companies (FogHorn, SAP, ClearBlade and others) are developing innovative variations of edge IoT implementations. These different approaches aim to drive the most efficient implementation of intelligence at the edge.

The two cloud giants, Amazon and Microsoft, primarily offer software and services for edge computing that integrate tightly with their existing cloud data storage and analytics platforms. Amazon's edge offering, Amazon Greengrass, launched in June 2017, extends AWS cloud capabilities to devices for local processing and analysis of data. Microsoft launched its edge platform almost a year after Greengrass; however, Azure IoT Edge is a more comprehensive and intelligent edge IoT solution.

In July 2018, Google made its foray into the edge computing realm with Cloud IoT Edge and the Edge TPU, which aim to integrate tightly with the Google Cloud Platform (GCP). The announcement of the Edge TPU has been the highlight of Google's IoT edge strategy, even though Google is a late entrant to the edge computing market compared to Amazon and Microsoft.

Edge TPU Chip

  • Google has purpose-built an ASIC chip – Edge TPU, which acts as a hardware accelerator to execute TensorFlow Lite ML inferences on mobile and embedded devices
  • This builds upon Google's in-house developed and deployed Cloud TPUs in its cloud data centers, which train ML models in the cloud
  • The Edge TPU chip executes these ML inferences at the edge and thus offers tightly integrated cloud-to-edge hardware and software infrastructure
  • The key feature of this edge offering is executing intelligence within a small footprint at high performance and low power
  • AI processing typically requires a lot of power; with the Edge TPU chip, however, Google reduced computational precision to 8 bits (down from the higher-precision arithmetic of its Cloud TPUs) to lower power requirements while offering higher computational efficiency at the edge
  • In Q4 2018, Google plans to release the Edge TPU as part of a development kit, which includes a system on module (SOM) & Base Board
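
The 8-bit precision reduction mentioned above can be illustrated with a minimal affine quantization sketch in Python. The scale/zero-point scheme shown is the standard approach for integer quantization of neural network weights; the weight values here are arbitrary examples, not real model weights:

```python
# Minimal sketch of 8-bit affine quantization, the kind of precision
# reduction the Edge TPU relies on (illustrative values, not real weights).

def quantize(values, num_bits=8):
    """Map floats to unsigned integers via a scale and zero-point."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the 8-bit representation."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]          # example float weights
q, scale, zp = quantize(weights)
recovered = dequantize(q, scale, zp)
# Each recovered value is within one quantization step of the original,
# while needing only one byte per weight instead of four or eight.
assert all(abs(w - r) <= scale for w, r in zip(weights, recovered))
```

Storing each weight in a single byte cuts memory traffic, and integer arithmetic is far cheaper in silicon than floating point, which is what makes inference at this precision viable at low power.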


  • The SOM board is removable, so the Edge TPU module can be integrated with other hardware. Specifications: NXP CPU, Google Edge TPU coprocessor (ML accelerator), 1 GB RAM, 8 GB flash memory, Wi-Fi/Bluetooth
  • In May 2018, Microsoft launched a similar effort called Project Brainwave, a hardware system based on FPGAs for running deep learning AI models in the cloud and at the edge with low latency and high performance
  • The cloud giants are developing competing specialized hardware products to efficiently meet the growing computational needs of deep learning models at the edge.

Edge TPU Accelerator

  • Google also introduced the Edge TPU Accelerator, a USB stick mainly intended to accelerate ML inference on low-power IoT platforms such as the Raspberry Pi and others
  • It will allow devices to concurrently run multiple computer vision models on high-resolution video at more than 30 frames per second
  • The device can easily be attached to boards such as the Raspberry Pi Zero or other custom boards through the mounting holes on its casing
  • It connects via USB Type-C and supports Debian Linux and Android Things
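
As a back-of-the-envelope illustration of that throughput claim, the sketch below checks whether two vision models fit inside a 30 fps frame budget. Only the 30 fps target comes from the text; the model names and per-model latencies are hypothetical:

```python
# Back-of-the-envelope latency budget for running multiple vision models
# concurrently at 30+ frames per second (latencies are hypothetical).

TARGET_FPS = 30
frame_budget_ms = 1000 / TARGET_FPS            # ~33.3 ms available per frame

# Hypothetical inference times for models run back-to-back on each frame.
model_latencies_ms = {"face_detector": 10, "dish_classifier": 12}
total_ms = sum(model_latencies_ms.values())

achievable_fps = 1000 / total_ms
print(f"Per-frame budget: {frame_budget_ms:.1f} ms, "
      f"used: {total_ms} ms, achievable: {achievable_fps:.0f} fps")
# Both models fit inside the 33 ms budget, so the pipeline sustains 30+ fps.
assert total_ms < frame_budget_ms
```

The point of the exercise: running several models "concurrently" on live video only works if the summed per-frame inference times stay under the frame interval, which is the budget a dedicated accelerator is designed to meet.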

  • Google's strategy with the launch of the Edge TPU Accelerator is to reach the many IoT developers using the Raspberry Pi and other low-power computers, helping them efficiently perform AI tasks such as image recognition and voice recognition at the edge
  • Along with the hardware, Google has also released vision and voice-recognition kits for single-board computers as part of its AIY Projects program, to capitalize on the latest voice- and camera-based application trends
  • To dive deeper and explore machine learning in action, AIY offers several pre-trained machine learning models, mainly based on the MobileNet architecture, that can run on the kits:

  • Face Detector
  • Dog/Cat/Human Detector
  • Dish Classifier
  • Image Classifier

The combination of a dedicated hardware component (Edge TPU) and an integrated software stack (Cloud IoT Edge) on top of Android Things makes Google’s Edge implementation a holistic and unique offering.

Google employed a vertically integrated strategy with the in-house development of the Edge TPU chip. Microsoft, on the other hand, with the launch of Azure Sphere and Project Brainwave, is aiming to extend its reach across the value chain by working with various OEMs and component/device manufacturers to integrate the Azure Sphere OS. Amazon, meanwhile, recently announced AWS Graviton, custom Arm-based processors used to power its cloud data centers, following Google's approach with Cloud TPUs. In the future, we foresee Amazon extending its custom chip development to edge implementations as well, as Google did with the Edge TPU.

However, Google's IoT edge solution is still in the alpha stage (available only for testing environments), and so far only a limited number of its existing clients plan to incorporate it into their current solutions. Google has a very strong foothold in the consumer market, but to penetrate enterprise verticals with its Cloud IoT Edge software, it needs to build partnerships and work closely with enterprise channels and integrators.

