
Getting started with Google Coral’s TPU USB Accelerator

In this tutorial, you will learn how to configure your Google Coral TPU USB Accelerator on the Raspberry Pi and Ubuntu. You’ll then learn how to perform classification and object detection using Google Coral’s USB Accelerator.

A couple of weeks ago, Google released “Coral”, a super fast, “no internet required” development board and USB accelerator that enables deep learning practitioners to deploy their models “on the edge” and “closer to the data”.

Using Coral, deep learning developers are no longer required to have an internet connection, meaning that the Coral TPU is fast enough to perform inference directly on the device rather than sending the image/frame to the cloud for inference and prediction.

The Google Coral comes in two flavors:

  1. A single-board computer with an onboard Edge TPU. The dev board can be thought of as an “advanced Raspberry Pi for AI” or a competitor to NVIDIA’s Jetson Nano.
  2. A USB accelerator that plugs into a device (such as a Raspberry Pi). The USB stick includes an Edge TPU built into it. Think of Google’s Coral USB Accelerator as a competitor to Intel’s Movidius NCS.

Today we’ll be focusing on the Coral USB Accelerator as it’s easier to get started with (and it fits nicely with our theme of Raspberry Pi-related posts over the past few weeks).

To learn how to configure your Google Coral USB Accelerator (and perform classification + object detection), just keep reading!

Getting started with Google Coral’s TPU USB Accelerator

Figure 1: The Google Coral TPU Accelerator adds deep learning capability to resource-constrained devices like the Raspberry Pi (source).

In this post I’ll be assuming that you have:

  • Your Google Coral USB Accelerator stick
  • A fresh install of a Debian-based Linux distribution (i.e., Raspbian, Ubuntu, etc.)
  • A basic understanding of Linux commands and file paths

If you don’t already own a Google Coral USB Accelerator, you can purchase one via Google’s official website.

I’ll be configuring the Coral USB Accelerator on Raspbian, but again, provided you have a Debian-based OS, these commands will still work.

Let’s get started!

Downloading and installing the Edge TPU runtime library

If you are using a Raspberry Pi, you first need to install feh, which is used by the Edge TPU runtime example scripts to display output images:
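
On Raspbian and Ubuntu, feh is available through apt:

    $ sudo apt-get install feh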

The next step is to download the Edge TPU runtime and Python library. The easiest way to download the package is via the command line using wget:
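
The URL below is the one from Google’s getting started guide at the time of writing; it may have changed since, so verify it against the guide if the download fails:

    $ cd ~
    $ wget https://dl.google.com/coral/edgetpu_api/edgetpu_api_latest.tar.gz -O edgetpu_api.tar.gz --trust-server-names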

Now that the TPU runtime has been downloaded, we can extract it, change directory into python-tflite-source, and then install it (notice that sudo permissions are not required):
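
Assuming the archive landed in your home directory as edgetpu_api.tar.gz (per the previous step), the install looks like this:

    $ tar xzf edgetpu_api.tar.gz
    $ cd python-tflite-source
    $ bash ./install.sh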

During the install you’ll be prompted with “Would you like to enable the maximum operating frequency?” Be careful with this setting!

According to Google’s official getting started guide, enabling this option will:

  1. Increase your inference speed…
  2. …but cause the USB Accelerator to become very hot.

If you were to touch or brush up against the USB stick, it could burn you, so be careful with it!

My personal suggestion is to select N (for “No, I don’t want maximum operating frequency”), at least for your first install. You can always increase the operating frequency later.

Secondly, it’s important to note that you need at least Python 3.5 for the Edge TPU runtime library.

You cannot use Python 2.7 or any Python 3 version below Python 3.5.

The install.sh script assumes you’re using Python 3.5, so if you’re not, you’ll need to open up the install.sh script, scroll down to the final line of the file (i.e., the setup.py call), where you’ll see this line:
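
At the time of writing, that final line looked like the following (your copy of install.sh may differ slightly):

    python3.5 setup.py develop --user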

If you’re using Python 3.6 you’ll simply need to change the Python version number on that line:
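
    python3.6 setup.py develop --user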

After that, you’ll be able to successfully run the install.sh script.

Overall, the full install process on a Raspberry Pi took just over one minute. If you’re using a more powerful system than the RPi then the install should be even faster.

Classification, object detection, and face detection using the Google Coral USB Accelerator

Now that we’ve installed the TPU runtime library, let’s put the Coral USB Accelerator to the test!

First, make sure you are in the python-tflite-source/edgetpu directory. If you followed my instructions and put python-tflite-source in your home directory then the following command will work for you:
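
    $ cd ~/python-tflite-source/edgetpu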

The next step is to download the pre-trained classification and object detection models. The full list of pre-trained models Google provides can be found here, including:

  • MobileNet V1 and V2 trained on ImageNet, iNat Insects, iNat Plants, and iNat Birds
  • Inception V1, V2, V3, and V4, all trained on ImageNet
  • MobileNet + SSD V1 and V2 trained on COCO
  • MobileNet + SSD V2 for face detection

Again, refer to this link for the full list of pre-trained models Google Coral provides.

For the sake of this tutorial, we’ll be using the following models:

  1. MobileNet V2 trained on ImageNet
  2. MobileNet + SSD V2 for face detection
  3. MobileNet + SSD V2 trained on COCO

You can use the following commands to download the models and follow along with this tutorial:
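
The filenames below come from Google’s pre-trained model page at the time of writing and may have changed, so verify them against the link above:

    $ mkdir -p ~/edgetpu_models
    $ wget -P ~/edgetpu_models https://dl.google.com/coral/canned_models/mobilenet_v2_1.0_224_quant_edgetpu.tflite
    $ wget -P ~/edgetpu_models https://dl.google.com/coral/canned_models/imagenet_labels.txt
    $ wget -P ~/edgetpu_models https://dl.google.com/coral/canned_models/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite
    $ wget -P ~/edgetpu_models https://dl.google.com/coral/canned_models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite
    $ wget -P ~/edgetpu_models https://dl.google.com/coral/canned_models/coco_labels.txt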

For convenience, I’ve included all models + example images used in this tutorial in the “Downloads” section. I would recommend using the downloads to ensure you can follow along with the guide.

Again, notice how the models are downloaded to the ~/edgetpu_models directory; that’s important as it ensures the paths used in the examples below will work out of the box for you.

Let’s start by performing a simple image classification example:
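
A sketch of the command, assuming you’re in the python-tflite-source/edgetpu directory and using the parrot.jpg test image bundled with the demos:

    $ python3 demo/classify_image.py \
        --model ~/edgetpu_models/mobilenet_v2_1.0_224_quant_edgetpu.tflite \
        --label ~/edgetpu_models/imagenet_labels.txt \
        --image test_data/parrot.jpg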

Figure 2: The Google Coral has made a deep learning classification inference on a Macaw/parrot.

As you can see, MobileNet (trained on ImageNet) has correctly labeled the image as “Macaw”, a type of parrot.

Note: If you are using a Python virtual environment (covered below) you’ll want to use python rather than python3 as the Python binary.

Now let’s try performing face detection using the Google Coral USB Accelerator:
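
Again a sketch, this time assuming the bundled face.jpg test image (the face model doesn’t require a labels file):

    $ python3 demo/object_detection.py \
        --model ~/edgetpu_models/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite \
        --input test_data/face.jpg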

Figure 3: Deep learning face detection with the Google Coral and Raspberry Pi.

Here the MobileNet + SSD face detector was able to detect all four faces in the image. That is especially impressive given the poor lighting conditions and the partially obscured face on the far right.

The next example shows how to perform object detection using a MobileNet + SSD trained on the COCO dataset:
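
Here I’m running the bundled parrot.jpg image through the COCO detector (the image path is an assumption; substitute any test image you like):

    $ python3 demo/object_detection.py \
        --model ~/edgetpu_models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite \
        --label ~/edgetpu_models/coco_labels.txt \
        --input test_data/parrot.jpg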

Figure 4: Deep learning object detection with the Raspberry Pi and Google Coral.

Notice there are three detections but just one bird in the image. Why is that?

The reason is that the object_detection.py script is not filtering on a minimum probability. You can easily modify the script to ignore detections with < 50% probability (I’ll leave that as an exercise for you, the reader, to implement).
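
As a hint, here’s a minimal sketch of such a filter, assuming the demo stores the DetectionEngine results in a list named ans (a hypothetical name); each result exposes its probability via a score attribute:

    # discard any detection below 50% confidence
    ans = [obj for obj in ans if obj.score >= 0.5]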

For fun, I decided to try an image that was not included in the example TPU runtime library demos.

Here’s an example of applying the face detector to a custom image:
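
The command is the same as before; just point --input at your own file (~/my_photo.jpg below is a stand-in for whatever image you want to test):

    $ python3 demo/object_detection.py \
        --model ~/edgetpu_models/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite \
        --input ~/my_photo.jpg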

Figure 5: Testing face detection (using my own face) with the Google Coral and Raspberry Pi.

Sure enough, my face is detected!

Lastly, here’s an example of running the MobileNet + SSD object detector on the same image:
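
Swapping in the COCO model and labels (same stand-in image path):

    $ python3 demo/object_detection.py \
        --model ~/edgetpu_models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite \
        --label ~/edgetpu_models/coco_labels.txt \
        --input ~/my_photo.jpg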

Figure 6: An example of running the MobileNet SSD object detector on the Google Coral + Raspberry Pi.

Again, we can improve the results by filtering on a minimum probability to remove the extraneous detections. Doing so would leave only two detections: person (87.89%) and dog (58.20%).

Installing the edgetpu runtime into Python virtual environments

Figure 7: Importing edgetpu in Python inside my coral virtual environment on the Raspberry Pi.

It’s a best practice to use Python virtual environments for development, and as you know, we make heavy use of Python virtual environments on the PyImageSearch blog.

Installing the edgetpu library into a Python virtual environment definitely requires a few extra steps, but it is well worth it to ensure your libraries are kept in sequestered, independent environments.

The first step is to install both virtualenv and virtualenvwrapper:
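
    $ sudo pip3 install virtualenv virtualenvwrapper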

You’ll notice that I’m using sudo here. This is super important: when installing the TPU runtime, the install.sh script created a ~/.local directory. If we try to install virtualenv and virtualenvwrapper via pip they would actually go into the ~/.local/bin directory (which is not what we want).

The solution is to use sudo with pip3 (like we did above) so virtualenv and virtualenvwrapper end up in /usr/local/bin.

The next step is to open our ~/.bashrc file in the editor of your choice (I’ll use nano):
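
    $ nano ~/.bashrc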

Then, scroll down to the bottom and insert the following lines into ~/.bashrc:
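
These are the standard virtualenvwrapper settings; the final path assumes virtualenvwrapper.sh landed in /usr/local/bin per the sudo pip3 install above:

    # virtualenv and virtualenvwrapper
    export WORKON_HOME=$HOME/.virtualenvs
    export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
    source /usr/local/bin/virtualenvwrapper.sh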

You can then reload the .bashrc using source:
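
    $ source ~/.bashrc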

We can now create our Python 3 virtual environment:
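
    $ mkvirtualenv coral -p python3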

I’m naming my virtual environment coral, but you can name it whatever you like.

Finally, sym-link the edgetpu library into your Python virtual environment:
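
A sketch of the sym-link, assuming Python 3.5 and the python-tflite-source directory in your home folder (adjust the python3.x portion of the site-packages path to match your Python version):

    $ cd ~/.virtualenvs/coral/lib/python3.5/site-packages/
    $ ln -s ~/python-tflite-source/edgetpu edgetpu
    $ cd ~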

Assuming you followed my exact instructions, your path to the edgetpu directory should match mine. If you didn’t follow my exact instructions then you’ll need to double-check and triple-check your paths.

As a sanity check, let’s try to import the edgetpu library into our Python virtual environment:
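
    $ workon coral
    $ python
    >>> import edgetpu
    >>>

If the import completes without an error, the sym-link is working.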

As you can see, everything is working and we can now execute the demo scripts above using our Python virtual environment!

What about custom models on Google’s Coral?

You’ll notice that I’m only using pre-trained deep learning models on the Google Coral in this post. What about custom models that you train yourself?

Google does provide some documentation on that, but it’s far more advanced, and far too much for me to include in this blog post.

If you’re interested in learning how to train your own custom models for Google’s Coral I would recommend you take a look at my upcoming book, Raspberry Pi for Computer Vision, where I’ll be covering the Google Coral in detail.

How do I use Google Coral’s Python runtime library in my own custom scripts?

Using the edgetpu library in conjunction with OpenCV and your own custom Python scripts is outside the scope of this post.

I’ll be covering how to use the Google Coral in your own Python scripts in a future blog post as well as in my Raspberry Pi for Computer Vision book.

Thoughts, tips, and suggestions when using Google’s TPU USB Accelerator

Overall, I really liked the Coral USB Accelerator. I thought it was super easy to configure and install, and while not all of the demos ran out of the box, with some basic knowledge of file paths I was able to get them running in a few minutes.

In the future, I would like to see the Google TPU runtime library become more compatible with Python virtual environments.

Technically, I could create a Python virtual environment and then edit the install.sh script to install into that virtual environment, but editing the install.sh script shouldn’t be a strict requirement. Instead, I’d like to see that script detect my Python binary/environment and then install for that specific Python binary.

I’ll also add that inference on the Raspberry Pi is a bit slower than what’s advertised for the Google Coral TPU Accelerator; that’s actually not a problem with the TPU Accelerator, but rather with the Raspberry Pi.

What do I imply by that?

Keep in mind that the Raspberry Pi 3B+ uses USB 2.0, but for more optimal inference speeds the Google Coral USB Accelerator recommends USB 3.

Since the RPi 3B+ doesn’t have USB 3, there’s not much we can do about that until the RPi 4 comes out. Once it does, we’ll enjoy even faster inference on the Pi using the Coral USB Accelerator.

Finally, I’ll note that once or twice during the object detection examples the Coral USB Accelerator appeared to “lock up” and wouldn’t perform inference (I think it got “stuck” trying to load the model), forcing me to ctrl + c out of the script.

Killing the script must have prevented a critical “shut down” routine from running on the Coral; any subsequent executions of the demo Python scripts would result in an error.

To fix the issue I had to unplug the Coral USB Accelerator and then plug it back in. Again, I’m not sure why that happened and I couldn’t find any documentation on the Google Coral website that referenced the issue.

Interested in using the Google Coral in your own projects?

I bet you’re just as excited about the Google Coral as I am. Along with the Movidius NCS and Jetson Nano, these devices are bringing computer vision and deep learning to resource-constrained systems such as embedded devices and the Raspberry Pi.

In my opinion, embedded CV and DL is the next big wave in the AI community. It’s so big that it may even be a tsunami. Will you be riding that wave?

To help you get your start in embedded Computer Vision and Deep Learning, I have decided to write a brand new book, Raspberry Pi for Computer Vision.

Inside this book you will learn how to:

  • Build practical, real-world computer vision applications on the Pi
  • Create computer vision and Internet of Things (IoT) projects and applications with the RPi
  • Optimize your OpenCV code and algorithms on the resource-constrained Pi
  • Perform Deep Learning on the Raspberry Pi (including using the Movidius NCS and OpenVINO toolkit)
  • Configure your Google Coral, perform image classification and object detection, and even train + deploy your own custom models to the Coral Edge TPU!
  • Utilize the NVIDIA Jetson Nano to run multiple deep neural networks on a single board, including image classification, object detection, segmentation, and more!

I’m running a Kickstarter campaign to fund the creation of the new book, and to celebrate, I’m offering 25% OFF my existing books and courses if you pre-order a copy of RPi for CV.

In fact, the Raspberry Pi for Computer Vision book is practically free if you pre-order it with Deep Learning for Computer Vision with Python or the PyImageSearch Gurus course.

The clock is ticking and these discounts won’t last. The Kickstarter pre-sale shuts down on May 10th at 10AM EDT, after which I’m taking the deals down.

Reserve your pre-sale book now and while you are there, grab another course or book at a discounted rate.

Summary

In this tutorial, you learned how to get started with the Google Coral USB Accelerator.

We started by installing the Edge TPU runtime library on your Debian-based operating system (we specifically used Raspbian for the Raspberry Pi).

After that, we learned how to run the example demo scripts included in the Edge TPU library download.

We also learned how to install the edgetpu library into a Python virtual environment (that way we can keep our packages/projects nice and tidy).

We wrapped up the tutorial by discussing some of my thoughts, feedback, and suggestions when using the Coral USB Accelerator (be sure to refer to them first if you have any questions).

I hope you enjoyed this tutorial!

To download the source code to this post, and to be notified when future tutorials are published here on PyImageSearch, just enter your email address in the form below!

Downloads: