
Keras: Feature extraction on large datasets with Deep Learning

In this tutorial, you will learn how to use Keras for feature extraction on image datasets too big to fit into memory. You’ll utilize ResNet-50 (pre-trained on ImageNet) to extract features from a large image dataset, and then use incremental learning to train a classifier on top of the extracted features.

Today is part two in our three-part series on transfer learning with Keras:

Last week we discussed how to perform transfer learning using Keras — inside that tutorial we focused primarily on transfer learning via feature extraction.

Using this method we were able to utilize CNNs to recognize classes they were never trained on!

The problem with that approach is that it assumes all of our extracted features can fit into memory — that may not always be the case!

For example, suppose we have a dataset of 50,000 images and want to utilize the ResNet-50 network for feature extraction via the final layer prior to the FC layers — that output volume would be of size 7 x 7 x 2048 = 100,352-dim.

If we had 50,000 of such 100,352-dim feature vectors (assuming 32-bit floats), then we would need a total of ~20.07GB of RAM to store the entire set of feature vectors in memory!
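Here’s a quick back-of-the-envelope check of that arithmetic in Python:

    # rough memory estimate for holding every extracted feature vector in RAM
    num_images = 50_000
    feature_dim = 7 * 7 * 2048      # = 100,352 dimensions per image
    bytes_per_value = 4             # 32-bit floats
    print(num_images * feature_dim * bytes_per_value / 1e9)  # ~20.07 GB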

Most people don’t have 20GB+ of RAM to spare in their machines, so in those situations, we need to be able to perform incremental learning and train our model on incremental subsets of the data.

The rest of today’s tutorial will show you how to do exactly that.

To learn how to utilize Keras for feature extraction on large datasets, just keep reading!

Keras: Feature extraction on large datasets with Deep Learning

In the first part of this tutorial, we’ll briefly discuss the concept of treating networks as feature extractors (which was covered in more detail in last week’s tutorial).

From there we’ll investigate the scenario in which your extracted feature dataset is too large to fit into memory — in those situations, we’ll need to apply incremental learning to our dataset.

Next, we’ll implement Python source code that can be used for:

  1. Keras feature extraction
  2. Followed by incremental learning on the extracted features

Let’s get started!

Networks as feature extractors

Figure 1: Left: The original VGG16 network architecture that outputs probabilities for each of the 1,000 ImageNet class labels. Right: Removing the FC layers from VGG16 and instead returning the final POOL layer. This output will serve as our extracted features.

When performing deep learning feature extraction, we treat the pre-trained network as an arbitrary feature extractor, allowing the input image to propagate forward, stopping at a pre-specified layer, and taking the outputs of that layer as our features.

In doing so, we can still utilize the robust, discriminative features learned by the CNN. We can also use them to recognize classes the CNN was never trained on!

An example of feature extraction via deep learning can be seen in Figure 1 at the top of this section.

Here we take the VGG16 network, allow an image to forward propagate to the final max-pooling layer (prior to the fully-connected layers), and extract the activations at that layer.

The output of the max-pooling layer has a volume shape of 7 x 7 x 512, which we flatten into a feature vector of 7 x 7 x 512 = 25,088-dim.

Given a dataset of N images, we can repeat the feature extraction process for every image in the dataset, leaving us with a total of N x 25,088-dim feature vectors.

Given these features, we can train a “standard” machine learning model (such as Logistic Regression or a Linear SVM) on them.

Note: Feature extraction via deep learning was covered in much more detail in last week’s post — refer to it if you have any questions on how feature extraction works.

What if your extracted features are too large to fit into memory?

Feature extraction via deep learning is all fine and good…

…but what happens when your extracted features are too large to fit into memory?

Keep in mind that (most implementations of, including scikit-learn) Logistic Regression and SVMs require your entire dataset to be available all at once for training (i.e., the entire dataset must fit into RAM).

That’s great, but if you have 50GB, 100GB, or even 1TB of extracted features, what are you going to do?

Most people don’t have access to machines with that much memory.

So, what do you do then?

Answer: Incremental learning (i.e., “online learning”)

Figure 2: The process of incremental learning plays a role in deep learning feature extraction on large datasets.

When your entire dataset does not fit into memory, you need to perform incremental learning (sometimes called “online learning”).

Incremental learning enables you to train your model on small subsets of the data called batches.

Using incremental learning, the training process becomes:

  1. Load a small batch of data from the dataset
  2. Train the model on the batch
  3. Repeat looping through the dataset in batches, training as we go, until we reach convergence

But wait — doesn’t that process sound familiar?

It should.

It’s exactly how we train neural networks.

Neural networks are wonderful examples of incremental learners.

And in fact, if you look at the scikit-learn documentation, you’ll find that the classification models for incremental learning are either NNs themselves or directly related to NNs (i.e., Perceptron and SGDClassifier).

Instead of using scikit-learn’s incremental learning models, we are going to implement our own neural network using Keras.

This NN will be trained on top of our extracted features from the CNN.

Our training process now becomes:

  1. Extract all features from our image dataset using a CNN.
  2. Train a simple, feedforward neural network on top of the extracted features.

The Food-5K dataset

Figure 3: The Food-5K dataset will be used for this example of deep learning feature extraction with Keras.

The dataset we’ll be using here today is the Food-5K dataset, curated by the Multimedia Signal Processing Group (MSPG) of the Swiss Federal Institute of Technology.

This dataset consists of 5,000 images, each belonging to one of two classes:

  1. Food
  2. Non-food

Our goal today is to:

  1. Utilize Keras feature extraction to extract features from the Food-5K dataset using ResNet-50 pre-trained on ImageNet.
  2. Train a simple neural network on top of these features to recognize classes the CNN was never trained to recognize.

It’s worth noting that the entire Food-5K dataset, after feature extraction, will only occupy ~2GB of RAM if loaded all at once — that’s not the point.

The point of today’s post is to show you how to use incremental learning to train a model on the extracted features.

That way, regardless of whether you’re working with 1GB of data or 100GB of data, you’ll know the exact steps to train a model on top of features extracted via deep learning.

Downloading the Food-5K dataset

To start, make sure you grab the source code for today’s tutorial using the “Downloads” section of the blog post.

Once you’ve downloaded the source code, change directory into transfer-learning-keras:
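    $ cd transfer-learning-keras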

In my experience, I’ve found downloading the Food-5K dataset to be a bit unreliable.

Therefore I’m presenting two options to download the dataset:

Option 1: Use wget in your terminal

The wget utility comes with Ubuntu and other Linux distros. On macOS, you must install it:
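For example, via Homebrew:

    $ brew install wget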

To download the Food-5K dataset, let’s use wget in our terminal:
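The FTP host and login credentials below are placeholders; substitute the values from the official Food-5K download page:

    $ wget --passive-ftp --ftp-user=<username> --ftp-password=<password> \
        ftp://<food5k-host>/Food-5K.zip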

Note: At least on macOS, I’ve found that if the wget command fails once, just run it again and the download will start.

Option 2: Use FileZilla

FileZilla is a GUI application for FTP and SCP connections. You can download it for your OS here.

Once you’ve installed and launched the application, enter the credentials:

You can then connect and download the file into the appropriate destination.

Figure 4: Downloading the Food-5K dataset using FileZilla.

The username and password combination was obtained from the official Food-5K dataset website. If the username/password combination stops working for you, check whether the dataset curators have changed the login credentials.

Once downloaded, we can go ahead and unzip the dataset (making sure you’re in the Food-5K/ directory that we previously used the cd command to move into):
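    $ unzip Food-5K.zip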

Project structure

Go ahead and navigate back to the root directory:
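    $ cd ..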

From there, we can analyze our project structure with the tree command:
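For example:

    $ tree --dirsfirst --filelimit 10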

The config.py file contains our configuration settings in Python form. Our other Python scripts will take advantage of the config.

Using our build_dataset.py script, we’ll organize and output the contents of the Food-5K/ directory into the dataset folder.

From there, the extract_features.py script will use transfer learning via feature extraction to compute feature vectors for each image. These features will be output to a CSV file.

Both build_dataset.py and extract_features.py were reviewed in detail last week; however, we’ll briefly walk through them again today.

Finally, we’ll review train.py. In this Python script, we’ll use incremental learning to train a simple neural network on the extracted features. This script differs from last week’s tutorial, so it’s where we’ll focus our energy.

Our configuration file

Let’s get started by reviewing our config.py file where we’ll store our configurations, namely the paths to our input dataset of images along with our output paths of extracted features.

Open up the config.py file and insert the following code:
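The full listing ships with the “Downloads” of this post; the sketch below captures the kind of settings it holds (the variable names are illustrative, not the verbatim file):

    # config.py -- a minimal sketch; variable names are illustrative
    import os

    # path to the original dataset directory downloaded from MSPG
    ORIG_INPUT_DATASET = "Food-5K"

    # base path to the new directory containing our re-organized images
    BASE_PATH = "dataset"

    # names of the data splits (these match the Food-5K download)
    TRAIN = "training"
    TEST = "evaluation"
    VAL = "validation"

    # class label names -- Food-5K filenames are prefixed with 0
    # (non-food) or 1 (food)
    CLASSES = ["non_food", "food"]

    # batch size for feature extraction and incremental training
    BATCH_SIZE = 32

    # path to the serialized label encoder and the directory where the
    # extracted-feature CSV files will be stored
    LE_PATH = os.path.sep.join(["output", "le.cpickle"])
    BASE_CSV_PATH = "output"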

Take the time to read through the config.py script, paying attention to the comments.

Most of the settings are related to directory and file paths which are used in the rest of our scripts.

For a full review of the configuration, be sure to refer to last week’s post.

Building the image dataset

Whenever I’m performing machine learning on a dataset (and especially with Keras/deep learning), I prefer to have my dataset in the format of:

dataset_name/class_label/example_of_class_label.jpg

Maintaining this directory structure not only keeps our dataset organized on disk but also enables us to utilize Keras’ flow_from_directory function when we get to fine-tuning later in this series of tutorials.

Since the Food-5K dataset provides pre-supplied data splits, our final directory structure will have the form:

dataset_name/split_name/class_label/example_of_class_label.jpg

Again, this step isn’t always necessary, but it is a best practice (in my opinion), and one that I suggest you follow as well.

At the very least it will give you experience writing Python code to organize images on disk.

Let’s use the build_dataset.py file to build our directory structure now:
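The full script is included with the “Downloads”; the compact sketch below captures its logic (line numbers cited in the prose refer to the full listing):

    # build_dataset.py -- a compact sketch of the script
    import shutil
    import os
    from imutils import paths
    import config

    # loop over the data splits
    for split in (config.TRAIN, config.TEST, config.VAL):
        # grab all image paths in the current split
        print("[INFO] processing '{}' split...".format(split))
        p = os.path.sep.join([config.ORIG_INPUT_DATASET, split])
        imagePaths = list(paths.list_images(p))

        # loop over the image paths
        for imagePath in imagePaths:
            # Food-5K filenames are prefixed with the class index
            # ("0_123.jpg" = non-food, "1_456.jpg" = food)
            filename = imagePath.split(os.path.sep)[-1]
            label = config.CLASSES[int(filename.split("_")[0])]

            # build the split/label output directory if needed
            dirPath = os.path.sep.join([config.BASE_PATH, split, label])
            if not os.path.exists(dirPath):
                os.makedirs(dirPath)

            # copy the image into its new home
            shutil.copy2(imagePath, os.path.sep.join([dirPath, filename]))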

After importing our packages on Lines 2-5, we proceed to loop over the training, testing, and validation splits (Line 8).

We create our split + class label directory structure (detailed above) and then populate the directories with the Food-5K images. The result is organized data that we can use for extracting features.

Let’s execute the script and review our directory structure once more.

You can use the “Downloads” section of this tutorial to download the source code — from there, open up a terminal and execute the following command:
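    $ python build_dataset.py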

After doing so, you’ll be presented with the following directory structure:

Notice that our dataset/ directory is now populated. Each subdirectory then has the following format:

split_name/class_label

With our data organized, we’re ready to move on to feature extraction.

Using Keras for deep learning feature extraction

Now that we’ve built our dataset directory structure for the project, we can:

  1. Use Keras to extract features via deep learning from each image in the dataset.
  2. Write the class labels + extracted features to disk in CSV format.

To accomplish these tasks we’ll need to implement the extract_features.py file.

This file was covered in detail in last week’s post, so we’ll only briefly review the script here as a matter of completeness:
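Here is a sketch of the script’s setup (the line numbers cited in the prose refer to the full listing from the “Downloads”):

    # extract_features.py -- a sketch of the setup
    from tensorflow.keras.applications import ResNet50
    from tensorflow.keras.applications.resnet50 import preprocess_input
    from tensorflow.keras.preprocessing.image import img_to_array, load_img
    from sklearn.preprocessing import LabelEncoder
    from imutils import paths
    import numpy as np
    import pickle
    import random
    import os
    import config

    # load ResNet-50 with the FC head chopped off; pre-trained ImageNet
    # weights are downloaded automatically
    model = ResNet50(weights="imagenet", include_top=False)
    le = None

    # make sure the output directory for the CSV files exists
    os.makedirs(config.BASE_CSV_PATH, exist_ok=True)

    # loop over the data splits
    for split in (config.TRAIN, config.TEST, config.VAL):
        # grab all image paths for the current split and shuffle them
        p = os.path.sep.join([config.BASE_PATH, split])
        imagePaths = list(paths.list_images(p))
        random.shuffle(imagePaths)

        # extract the class labels from the file paths, fitting the
        # label encoder on the first pass
        labels = [pt.split(os.path.sep)[-2] for pt in imagePaths]
        if le is None:
            le = LabelEncoder()
            le.fit(labels)

        # open the output CSV file for this split
        csvPath = os.path.sep.join([config.BASE_CSV_PATH,
            "{}.csv".format(split)])
        csv = open(csvPath, "w")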

On Line 16, ResNet is loaded while excluding the top (FC) layers. Pre-trained ImageNet weights are loaded into the network as well. Feature extraction via transfer learning is now possible using this pre-trained, headless network.

From there, we proceed to loop over the data splits on Line 20.

Inside, we grab all imagePaths for the particular split and fit our label encoder (Lines 23-39).

A CSV file is opened for writing (Lines 37-39) so that we can write our class labels and extracted features to disk.

Now that our initializations are all set, we can begin looping over images in batches:
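Continuing the sketch (inside the split loop):

        # loop over the images in batches
        for i in range(0, len(imagePaths), config.BATCH_SIZE):
            # slice out the current batch of paths and labels
            batchPaths = imagePaths[i:i + config.BATCH_SIZE]
            batchLabels = le.transform(labels[i:i + config.BATCH_SIZE])
            batchImages = []

            for imagePath in batchPaths:
                # load the image, resizing it to 224x224 pixels
                # (ResNet-50's expected input dimensions)
                image = load_img(imagePath, target_size=(224, 224))
                image = img_to_array(image)

                # add a batch dimension and preprocess the image by
                # mean subtraction using the ImageNet statistics
                image = np.expand_dims(image, axis=0)
                image = preprocess_input(image)
                batchImages.append(image)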

Each image in the batch is loaded and preprocessed. From there it is appended to batchImages.

We’ll now send the batch through ResNet to extract features:
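Continuing the sketch (still inside the batch loop):

            # stack the batch and pass it through ResNet, yielding a
            # 7x7x2048 output volume per image
            batchImages = np.vstack(batchImages)
            features = model.predict(batchImages,
                batch_size=config.BATCH_SIZE)

            # flatten each volume into a 100,352-dim feature vector
            features = features.reshape((features.shape[0], 7 * 7 * 2048))

            # write one CSV row per image: the class label first, then
            # the feature values
            for (label, vec) in zip(batchLabels, features):
                vec = ",".join([str(v) for v in vec])
                csv.write("{},{}\n".format(label, vec))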

Feature extraction for the batch takes place on Line 72. Using ResNet, our output layer has a volume size of 7 x 7 x 2,048. Treating the output as a feature vector, we simply flatten it into a list of 7 x 7 x 2,048 = 100,352-dim (Line 73).

The batch of feature vectors is then output to a CSV file, with the first entry of each row being the class label and the rest of the values making up the feature vec.

We’ll repeat this process for all batches inside each split until we finish. Finally, our label encoder is dumped to disk.
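Sketched, that wrap-up looks like:

        # close the CSV file for the current split
        csv.close()

    # once all splits are processed, serialize the label encoder to disk
    f = open(config.LE_PATH, "wb")
    f.write(pickle.dumps(le))
    f.close()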

For a more detailed, line-by-line review, refer to last week’s tutorial.


To extract features from our dataset, make sure you use the “Downloads” section of this guide to download the source code to this post.

From there, open up a terminal and execute the following command:
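    $ python extract_features.py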

On an NVIDIA K80 GPU the entire feature extraction process took 5m11s.

You can also run extract_features.py on a CPU, but it will take much longer.

After feature extraction is complete, you should have three CSV files in your output directory, one for each of our data splits, respectively:

Implementing the incremental learning training script

Finally, we are now ready to utilize incremental learning to apply transfer learning via feature extraction on large datasets.

The Python script we’re implementing in this section will be responsible for:

  1. Constructing the simple feedforward NN architecture.
  2. Implementing a CSV data generator used to yield batches of labels + feature vectors to the NN.
  3. Training the simple NN using the data generator.
  4. Evaluating the feature extractor.

Open up the train.py script and let’s get started:
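As a sketch (the exact imports in the full listing may differ slightly):

    # train.py -- a sketch of the imports
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense
    from tensorflow.keras.optimizers import SGD
    from tensorflow.keras.utils import to_categorical
    from sklearn.metrics import classification_report
    import numpy as np
    import pickle
    import os
    import config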

Lines 2-10 import our required packages. Our most notable import is Keras’ Sequential API, which we’ll use to build a simple feedforward neural network.

A few months ago I wrote a tutorial on implementing custom Keras data generators, and more specifically, yielding data from a CSV file to train a neural network with Keras.

At the time, I found that readers were a bit confused about practical applications where you would use such a generator — today is a great example of such a practical application.

Again, keep in mind that we’re assuming the entire CSV file of extracted features will not fit into memory. Therefore, we need a custom Keras generator to yield batches of labels + data to the network so it can be trained.

Let’s implement the generator now:
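Below is a sketch of the generator matching the behavior the next few paragraphs walk through (line numbers again refer to the full listing):

    def csv_feature_generator(inputPath, bs, numClasses, mode="train"):
        # open the CSV file for reading
        f = open(inputPath, "r")

        # loop indefinitely -- Keras expects its generators to never end
        while True:
            # initialize the batch of data and labels
            data = []
            labels = []

            # keep reading rows until the batch is full
            while len(data) < bs:
                row = f.readline()

                # an empty string means we hit the end of the file, so
                # reset the file pointer and read a row from the top...
                if row == "":
                    f.seek(0)
                    row = f.readline()

                    # ...unless we're evaluating, in which case we must
                    # not fill the batch from the start of the file
                    if mode == "eval":
                        break

                # the first CSV entry is the class label; the rest of
                # the row is the feature vector
                row = row.strip().split(",")
                label = to_categorical(int(row[0]), num_classes=numClasses)
                features = np.array(row[1:], dtype="float32")

                # update the batch
                data.append(features)
                labels.append(label)

            # yield the batch as a (data, labels) tuple
            yield (np.array(data), np.array(labels))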

Our csv_feature_generator accepts four parameters:

  • inputPath : The path to our input CSV file containing the extracted features.

  • bs : The batch size (or length) of each chunk of data.

  • numClasses : An integer value representing the number of classes in our data.

  • mode : Whether we are training or evaluating/testing.

On Line 14, we open our CSV file for reading.

Beginning on Line 17, we loop indefinitely, starting by initializing our data and labels (Lines 19 and 20).

From there, we’ll loop until the length of the data equals the batch size, beginning on Line 23.

We proceed by reading a line from the CSV (Line 25). Once we have the line we’ll go ahead and process it:

If the row is empty, we’ll restart at the beginning of the file (Lines 29-32). And if we are in evaluation mode, we’ll break from our loop, ensuring that we don’t fill the batch from the start of the file (Lines 38 and 39).

Assuming we are continuing on, the label and features are extracted from the row (Lines 42-45).

We then append the feature vector (features) and label to the data and labels lists, respectively, until the lists reach the specified batch size (Lines 48 and 49).

When the batch is ready, Line 52 yields the data and labels as a tuple. Python’s yield keyword is crucial to making our function operate as a generator.

Let’s continue — we have a few more steps before we’ll train the model:
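Sketched, those steps look like this:

    # load the label encoder from disk
    le = pickle.loads(open(config.LE_PATH, "rb").read())

    # derive the paths to the training, validation, and testing CSVs
    trainPath = os.path.sep.join([config.BASE_CSV_PATH,
        "{}.csv".format(config.TRAIN)])
    valPath = os.path.sep.join([config.BASE_CSV_PATH,
        "{}.csv".format(config.VAL)])
    testPath = os.path.sep.join([config.BASE_CSV_PATH,
        "{}.csv".format(config.TEST)])

    # count the rows in the training and validation CSVs so we can
    # tell Keras how many batch-sized steps make up each epoch; also
    # grab the test labels for the classification report later
    totalTrain = sum([1 for row in open(trainPath)])
    totalVal = sum([1 for row in open(valPath)])
    testLabels = [int(row.split(",")[0]) for row in open(testPath)]
    totalTest = len(testLabels)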

Our label encoder is loaded from disk on Line 54. We then derive the paths to the training, validation, and testing CSV files (Lines 58-63).

Lines 67 and 68 handle counting the number of images in the training and validation sets. With this information, we can tell the .fit_generator function how many batch_size steps there are in each epoch.

Let’s build a generator for each data split:
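Roughly:

    # construct the training, validation, and testing generators
    trainGen = csv_feature_generator(trainPath, config.BATCH_SIZE,
        len(config.CLASSES), mode="train")
    valGen = csv_feature_generator(valPath, config.BATCH_SIZE,
        len(config.CLASSES), mode="eval")
    testGen = csv_feature_generator(testPath, config.BATCH_SIZE,
        len(config.CLASSES), mode="eval")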

Lines 76-81 initialize our CSV feature generators.

We’re now ready to construct a simple neural network:
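Sketched:

    # define our simple 100352-256-16-2 feedforward architecture
    model = Sequential()
    model.add(Dense(256, input_shape=(7 * 7 * 2048,), activation="relu"))
    model.add(Dense(16, activation="relu"))
    model.add(Dense(len(config.CLASSES), activation="softmax"))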

Contrary to last week’s tutorial, where we used a Logistic Regression machine learning model, today we’ll build a simple neural network for classification.

Lines 84-87 define a simple 100352-256-16-2 feedforward neural network architecture using Keras.

How did I come up with the values of 256 and 16 for the two hidden layers?

A good rule of thumb is to take the square root of the number of nodes in the previous layer and then find the closest power of two.

In this case, the square root of 100,352 is roughly 317, and the closest power of two to 317 is 256. The square root of 256 is then 16, thus giving us our architecture definition.

Let’s go ahead and compile our model:
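Roughly (note that the decay argument belongs to the older Keras optimizer API; newer releases express the same idea with learning-rate schedules):

    # compile the model using SGD with an initial learning rate of
    # 1e-3, annealed over our 25 training epochs
    opt = SGD(learning_rate=1e-3, momentum=0.9, decay=1e-3 / 25)
    model.compile(loss="binary_crossentropy", optimizer=opt,
        metrics=["accuracy"])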

We compile our model using stochastic gradient descent (SGD) with an initial learning rate of 1e-3 (which will decay over 25 epochs).

We’re using “binary_crossentropy” for our loss function here as we only have two classes. If you have more than two classes then you should use “categorical_crossentropy”.

With our model compiled, we are now ready to train and evaluate:
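Sketched below (the original listing used .fit_generator and .predict_generator, which newer Keras folds into .fit and .predict):

    # train the network; steps_per_epoch tells Keras how many batches
    # constitute one full pass over the training data
    model.fit(
        x=trainGen,
        steps_per_epoch=totalTrain // config.BATCH_SIZE,
        validation_data=valGen,
        validation_steps=totalVal // config.BATCH_SIZE,
        epochs=25)

    # evaluate by predicting on the test set in batches, then print
    # a classification report
    predIdxs = model.predict(x=testGen,
        steps=(totalTest // config.BATCH_SIZE) + 1)
    predIdxs = np.argmax(predIdxs, axis=1)
    print(classification_report(testLabels, predIdxs,
        target_names=le.classes_))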

Lines 96-101 fit our model using our training and validation generators (trainGen and valGen). Using generators with our model allows for incremental learning.

Using incremental learning, we are no longer required to have all of our data loaded into memory at once. Instead, batches of data flow through our network, making it easy to work with massive datasets.

Of course, CSV data isn’t exactly an efficient use of space, nor is it fast. Inside Deep Learning for Computer Vision with Python, I teach how to use HDF5 for storage more efficiently.

Evaluation of the model takes place on Lines 107-109, where testGen generates our feature vectors in batches. A classification report is then printed in the terminal (Lines 110 and 111).

Keras feature extraction results

Finally, we are ready to train our simple NN on the features extracted from ResNet!

Make sure you use the “Downloads” section of this tutorial to download the source code.

From there, open up a terminal and execute the following command:
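    $ python train.py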

Training on an NVIDIA K80 took approximately ~30m. You could train on a CPU as well, but it will take considerably longer.

And as our output shows, we are able to obtain ~98-99% accuracy on the Food-5K dataset, even though ResNet-50 was never trained on food/non-food classes!

As you can see, transfer learning is a very powerful technique, enabling you to take the features extracted from CNNs and use them to recognize classes they were not trained on.

Later in this series of tutorials on transfer learning with Keras and deep learning, I’ll be showing you how to perform fine-tuning, another transfer learning technique.

What’s next — where do I learn more about transfer learning and feature extraction?

In this tutorial, you learned how to utilize a CNN to recognize class labels it was never trained on.

You also learned how to use incremental learning to accomplish this task.

Incremental learning is essential when your dataset is too large to fit into memory.

But I know that as soon as this post is published, I’m going to get emails and questions in the comments regarding:

  • “How do I classify images outside my training/testing set?”
  • “How do I load an image from disk, extract features from it using a CNN, and then classify it using the neural network?”
  • “How do I correctly preprocess my input image before classification?”

Today’s tutorial is long enough as it is. I therefore can’t include those sections of Deep Learning for Computer Vision with Python inside this post.

If you’d like to learn more about transfer learning, including:

  1. More details on the concept of transfer learning
  2. How to perform feature extraction
  3. How to fine-tune networks
  4. How to classify images outside your training/testing set using both feature extraction and fine-tuning

…then you’ll definitely want to refer to my book, Deep Learning for Computer Vision with Python.

Besides chapters on transfer learning, you’ll also find:

  • Super-practical walkthroughs that present solutions to actual, real-world image classification, object detection, and instance segmentation problems.
  • Hands-on tutorials (with lots of code) that not only show you the algorithms behind deep learning for computer vision but their implementations as well.
  • A no-nonsense teaching style that is guaranteed to help you master deep learning for image understanding and visual recognition.

To learn more about the book, and grab the table of contents + free sample chapters, just click here!

Summary

In this tutorial you learned how to:

  1. Utilize Keras for deep learning feature extraction.
  2. Perform incremental learning on the extracted features.

Using incremental learning enables us to train models on datasets too large to fit into memory.

Neural networks are a great example of incremental learners, as we can load data in batches, ensuring that the entire dataset does not have to fit into RAM at once. Using incremental learning we were able to obtain ~98% accuracy.

I would suggest using this code as a template whenever you need to use Keras for feature extraction on large datasets.

I hope you enjoyed the tutorial!

To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), just enter your email address in the form below!

Downloads: