
Image Augmentation with Keras Preprocessing Layers and tf.image

Last Updated on July 20, 2022

When we work on a machine learning problem related to images, we not only need to collect some images as training data, but we also need to employ augmentation to create variations in the images. This is especially true for more complex object recognition problems.

There are many ways to do image augmentation. You may use external libraries or write your own functions for it. There are some modules in TensorFlow and Keras for augmentation too. In this post, you will discover how we can use the Keras preprocessing layers as well as the tf.image module in TensorFlow for image augmentation.

After reading this post, you will know:

  • What the Keras preprocessing layers are, and how to use them
  • What functions the tf.image module provides for image augmentation
  • How to use augmentation together with the tf.data dataset

Let’s get started.

Image Augmentation with Keras Preprocessing Layers and tf.image.
Photo by Steven Kamenar. Some rights reserved.


This article is split into five sections; they are:

  • Getting Images
  • Visualizing the Images
  • Keras Preprocessing Layers
  • Using tf.image API for Augmentation
  • Using Preprocessing Layers in Neural Networks

Getting Images

Before we see how we can do augmentation, we need to get the images. Ultimately, we need the images to be represented as arrays, for example, in H×W×3 with 8-bit integers for the RGB pixel values. There are many ways to get the images. Some can be downloaded as a ZIP file. If you are using TensorFlow, you may get some image datasets from the tensorflow_datasets library.

In this tutorial, we are going to use the citrus leaves images, which is a small dataset of less than 100MB. It can be downloaded from tensorflow_datasets as follows:

Running this code for the first time will download the image dataset onto your computer with the following output:

The function above returns the images as a tf.data dataset object, along with the metadata. This is a classification dataset. We can print the training labels with the following:

and this prints:

If you run this code again at a later time, the downloaded images will be reused. But the other way to load the downloaded images into a tf.data dataset is to use the image_dataset_from_directory() function.

As we can see from the screen output above, the dataset is downloaded into the directory ~/tensorflow_datasets. If you look at the directory, you will see the directory structure as follows:

The directories are the labels, and the images are files stored under their corresponding directories. We can let the function read the directory recursively into a dataset:
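The sketch below is self-contained: instead of pointing at the real download directory (whose exact layout may vary), it writes a few random PNGs into a temporary label-per-folder tree and reads them back with image_dataset_from_directory(); the two folder names are stand-ins for the real labels.

```python
import os
import tempfile

import numpy as np
import tensorflow as tf

# Build a tiny stand-in for the extracted dataset: one folder per label,
# a few random PNG images in each (the real citrus leaves images live
# in a tree of the same shape).
root = tempfile.mkdtemp()
for label in ["healthy", "canker"]:
    os.makedirs(os.path.join(root, label))
    for i in range(4):
        img = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
        tf.keras.utils.save_img(os.path.join(root, label, f"{i}.png"), img)

# Read the directory recursively; subdirectory names become the labels
ds = tf.keras.utils.image_dataset_from_directory(
    root, image_size=(256, 256), batch_size=32)
print(ds.class_names)   # inferred from the folder names, sorted
```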

You may want to set batch_size=None if you do not want the dataset to be batched. Usually, we want the dataset to be batched for training a neural network model.

Visualizing the Images

It is important to visualize the augmentation result, so we can verify that it is what we want it to be. We can use matplotlib for this.

In matplotlib, we have the imshow() function to display an image. However, for the image to be displayed correctly, it should be presented as an array of 8-bit unsigned integers (uint8).

Given that we have a dataset created using image_dataset_from_directory(), we can get the first batch (of 32 images) and display a few of them using imshow(), as follows:
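A sketch of that display loop is below. To keep it self-contained, a random batch stands in for the one image_dataset_from_directory() would yield, and the class names are hypothetical; in the real code they come from next(iter(ds)) and ds.class_names.

```python
import matplotlib.pyplot as plt
import tensorflow as tf

# Stand-in for one batch from the dataset
images = tf.random.uniform((32, 256, 256, 3), maxval=255)
labels = tf.random.uniform((32,), maxval=2, dtype=tf.int32)
class_names = ["class_a", "class_b"]   # hypothetical label names

fig, axes = plt.subplots(3, 3, figsize=(6, 6))
for ax, image, label in zip(axes.flat, images, labels):
    # imshow() wants uint8 pixel values for a correct display
    ax.imshow(image.numpy().astype("uint8"))
    ax.set_title(class_names[int(label)])
    ax.axis("off")
plt.show()
```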

Here we display nine images in a grid and label each image with its corresponding classification label using ds.class_names. The images should be converted to a NumPy array in uint8 for display. This code displays an image like the following:

The complete code, from loading the images to displaying them, is as follows.

Note that if you are using tensorflow_datasets to get the images, the samples are presented as a dictionary instead of a tuple of (image, label). You should change your code slightly into the following:
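The conversion can be sketched as below; a one-element dictionary dataset stands in for the tensorflow_datasets output so the snippet runs without downloading anything.

```python
import tensorflow as tf

# Stand-in for a tensorflow_datasets sample, which is a dictionary
sample = {"image": tf.zeros((256, 256, 3), dtype=tf.uint8),
          "label": tf.constant(0)}
ds = tf.data.Dataset.from_tensors(sample)

# Convert each dictionary into the (image, label) tuple assumed elsewhere
ds = ds.map(lambda d: (d["image"], d["label"]))
image, label = next(iter(ds))
```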

For the rest of this post, we assume the dataset is created using image_dataset_from_directory(). You may need to tweak the code slightly if your dataset is created differently.

Keras Preprocessing Layers

Keras comes with many neural network layers, such as convolution layers, that we need to train. There are also layers with no parameters to train, such as flatten layers to convert an array, such as an image, into a vector.

The preprocessing layers in Keras are specifically designed to be used in the early stages of a neural network. We can use them for image preprocessing, such as to resize or rotate the image or to adjust the brightness and contrast. While the preprocessing layers are supposed to be part of a larger neural network, we can also use them as functions. Below is how we can use the resizing layer as a function to transform some images and display them side-by-side with the original:
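The layer-as-a-function idea can be sketched in a few lines (a random tensor stands in for a real image):

```python
import tensorflow as tf

# A preprocessing layer can be called directly, like a plain function
resize = tf.keras.layers.Resizing(256, 128)   # target height, width

image = tf.random.uniform((256, 256, 3), maxval=255)   # stand-in image
resized = resize(image)
print(resized.shape)   # (256, 128, 3)
```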

Our images are in 256×256 pixels, and the resizing layer will make them into 256×128 pixels. The output of the above code is as follows:

Since the resizing layer is a function itself, we can chain it to the dataset. For example,
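A sketch of that chaining, with a one-sample stand-in dataset in place of the real one:

```python
import tensorflow as tf

resize = tf.keras.layers.Resizing(256, 128)

# Stand-in dataset of (image, label) tuples
ds = tf.data.Dataset.from_tensors(
    (tf.random.uniform((256, 256, 3), maxval=255), tf.constant(1)))

# Apply the resizing layer to the image part of every sample
ds = ds.map(lambda image, label: (resize(image), label))
image, label = next(iter(ds))
```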

The dataset ds has samples in the form of (image, label). Hence we created a function that takes in such a tuple and preprocesses the image with the resizing layer. We then assigned this function as an argument to map() on the dataset. When we draw a sample from the new dataset created with the map() function, the image will be a transformed one.

There are more preprocessing layers available. Below, we demonstrate some of them.

As we saw above, we can resize the image. We can also randomly enlarge or shrink the height or width of an image. Similarly, we can zoom in or zoom out on an image. Below is an example of manipulating the image size in various ways for a maximum of 30% increase or decrease:
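A hedged sketch of those size manipulations, applied to a random stand-in batch:

```python
import tensorflow as tf

image = tf.random.uniform((1, 256, 256, 3), maxval=255)   # stand-in batch

# A factor of 0.3 means the size may change by up to 30% either way
taller = tf.keras.layers.RandomHeight(0.3)(image)
wider = tf.keras.layers.RandomWidth(0.3)(image)
zoomed = tf.keras.layers.RandomZoom(0.3)(image)   # canvas size is kept
```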

This code shows images as follows:

While we specified a fixed dimension in resizing, we have a random amount of manipulation in the other augmentations.

We can also do flipping, rotation, cropping, and geometric translation using preprocessing layers:
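These geometric layers can be sketched as follows, again on a random stand-in batch:

```python
import tensorflow as tf

image = tf.random.uniform((1, 256, 256, 3), maxval=255)   # stand-in batch

flipped = tf.keras.layers.RandomFlip("horizontal_and_vertical")(image)
rotated = tf.keras.layers.RandomRotation(0.2)(image)   # factor is a fraction of a full turn
cropped = tf.keras.layers.RandomCrop(224, 224)(image)
shifted = tf.keras.layers.RandomTranslation(0.2, 0.2)(image)
```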

This code shows the following images:

And finally, we can do augmentations on color adjustments as well:
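A sketch with the color-related layers (RandomBrightness is assumed available; it was added in TensorFlow 2.9):

```python
import tensorflow as tf

image = tf.random.uniform((1, 256, 256, 3), maxval=255)   # stand-in batch

# Factors of 0.5 allow up to a 50% random adjustment
brightened = tf.keras.layers.RandomBrightness(0.5)(image)   # TF >= 2.9
contrasted = tf.keras.layers.RandomContrast(0.5)(image)
```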

This shows the images as follows:

For completeness, below is the code to display the result of the various augmentations:

Finally, it is important to point out that most neural network models can work better if the input images are scaled. While we usually use 8-bit unsigned integers for the pixel values in an image (e.g., for display using imshow() as above), a neural network prefers the pixel values to be between 0 and 1, or between -1 and +1. This can be done with a preprocessing layer, too. Below is how we can update one of our examples above to add the scaling layer into the augmentation:
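A minimal sketch of the scaling layer on its own; the factor 1/255 maps uint8-style values into [0, 1], while Rescaling(1/127.5, offset=-1) would give [-1, +1] instead:

```python
import tensorflow as tf

# Map uint8-style pixel values from [0, 255] into [0, 1]
scale = tf.keras.layers.Rescaling(1 / 255.0)

image = tf.random.uniform((256, 256, 3), maxval=255)   # stand-in image
scaled = scale(image)
```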

Using tf.image API for Augmentation

Besides the preprocessing layers, the tf.image module also provides some functions for augmentation. Unlike the preprocessing layers, these functions are intended to be used in a user-defined function and assigned to a dataset using map(), as we saw above.

The functions provided by tf.image are not duplicates of the preprocessing layers, although there is some overlap. Below is an example of using the tf.image functions to resize and crop images:
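A sketch of those calls on a random stand-in image:

```python
import tensorflow as tf

image = tf.random.uniform((256, 256, 3), maxval=255)   # stand-in image

resized = tf.image.resize(image, (256, 128))
# crop_to_bounding_box() takes pixel offsets and a pixel-sized window...
boxed = tf.image.crop_to_bounding_box(image, 64, 64, 128, 128)
# ...while central_crop() takes the fraction of the image to keep
central = tf.image.central_crop(image, 0.5)
```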

Below is the output of the above code:

While the displayed images match what we would expect from the code, the use of tf.image functions is quite different from that of the preprocessing layers. Every tf.image function is different. Therefore, we can see that the crop_to_bounding_box() function takes pixel coordinates, but the central_crop() function assumes a fraction as its argument.

These functions are also different in the way randomness is handled. Some of these functions do not assume random behavior. Therefore, a random resize should have the exact output size generated using a random number generator separately before calling the resize function. Some other functions, such as stateless_random_crop(), can do augmentation randomly, but a pair of int32 random seeds needs to be specified explicitly.
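Both patterns can be sketched as follows (random stand-in image; seed values are arbitrary):

```python
import tensorflow as tf

image = tf.random.uniform((256, 256, 3), maxval=255)   # stand-in image

# "Random resize": draw the target size ourselves, then call resize()
size = tf.random.uniform((2,), minval=200, maxval=300, dtype=tf.int32)
resized = tf.image.resize(image, size)

# Stateless crop: all randomness comes from the explicit int32 seed pair
cropped = tf.image.stateless_random_crop(image, size=(128, 128, 3), seed=(42, 0))
```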

To continue the example, there are functions for flipping an image and extracting the Sobel edges:
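A sketch of those two operations; note that sobel_edges() wants a batched float image:

```python
import tensorflow as tf

image = tf.random.uniform((256, 256, 3), maxval=255)   # stand-in image

flipped_lr = tf.image.flip_left_right(image)
flipped_ud = tf.image.flip_up_down(image)

# sobel_edges() expects a batched float image; the last dimension of the
# result holds the two edge maps (vertical and horizontal gradients)
gray = tf.image.rgb_to_grayscale(image)[tf.newaxis, ...]
edges = tf.image.sobel_edges(gray)
print(edges.shape)   # (1, 256, 256, 1, 2)
```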

which shows the following:

And the following are the functions to manipulate the brightness, contrast, and colors:
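These adjustments can be sketched as below; the stand-in image is kept in [0, 1] since the hue and saturation functions assume that range for RGB floats:

```python
import tensorflow as tf

image = tf.random.uniform((256, 256, 3))   # stand-in image in [0, 1]

brightened = tf.image.adjust_brightness(image, delta=0.2)   # additive shift
contrasted = tf.image.adjust_contrast(image, 2.0)           # stretch around the mean
recolored = tf.image.adjust_hue(image, 0.1)                 # rotate the hue
saturated = tf.image.adjust_saturation(image, 1.5)
```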

This code shows the following:

Below is the complete code to display all of the above:

These augmentation functions should be enough for most uses. But if you have some specific ideas for augmentation, you would probably need a better image processing library. OpenCV and Pillow are common but powerful libraries that allow you to transform images better.

Using Preprocessing Layers in Neural Networks

We used the Keras preprocessing layers as functions in the examples above. But they can also be used as layers in a neural network, and doing so is trivial. Below is an example of how we can incorporate a preprocessing layer into a classification network and train it using a dataset:
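A sketch of such a network is below. The specific layer stack and the four-class softmax output are assumptions matching the citrus leaves task, not necessarily the article's exact model.

```python
import tensorflow as tf

# Augmentation and scaling layers sit at the front of the model;
# the random layers are only active while training
model = tf.keras.Sequential([
    tf.keras.Input(shape=(256, 256, 3)),
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.Rescaling(1 / 255.0),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(4, activation="softmax"),   # assumed 4 classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(ds, epochs=10) would then train it on the batched dataset
```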

Running this code gives the following output:

In the code above, we created the dataset with cache() and prefetch(). This is a performance technique to allow the dataset to prepare data asynchronously while the neural network is being trained. This would be significant if the dataset has some other augmentation assigned using the map() function.
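The pipeline setup can be sketched in one line on a small stand-in dataset:

```python
import tensorflow as tf

# Stand-in dataset; in the article this comes from image_dataset_from_directory()
ds = tf.data.Dataset.from_tensor_slices(tf.zeros((8, 4)))

# cache() keeps samples in memory after the first pass; prefetch() lets the
# pipeline prepare upcoming batches while the model trains on the current one
ds = ds.batch(4).cache().prefetch(tf.data.AUTOTUNE)
```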

You will see some improvement in accuracy if you remove the RandomFlip and RandomRotation layers, because that makes the problem easier. However, as we want the network to predict well on a wide variation of image quality and properties, using augmentation can help our resulting network become more powerful.

Further Reading

Below are the documentation pages from TensorFlow that are related to the examples above:


In this post, you have seen how we can use the tf.data dataset with image augmentation functions from Keras and TensorFlow.

Specifically, you learned:

  • How to use the preprocessing layers from Keras, both as a function and as part of a neural network
  • How to create your own image augmentation function and apply it to the dataset using the map() function
  • How to use the functions provided by the tf.image module for image augmentation



