

Create Your Own Image Classifier

During the summer of 2023, I received an AWS AI & ML Scholarship to complete Udacity's AI Programming with Python Nanodegree. The course covered NumPy, Pandas, Matplotlib, and Seaborn, along with linear algebra essentials, calculus essentials, and neural networks. There were two projects:

  1. Use a pre-trained image classifier to identify dog breeds

  2. Create your own image classifier.

 

Both projects were engaging and complicated, and I enjoyed the challenge of completing them.


The second project used PyTorch and pre-trained models from torchvision.models, with the following specifications:

  • Successfully train a new network on a dataset of images.

  • The training loss, validation loss, and validation accuracy are printed out as the network trains.

  • The training script allows users to choose from at least three different architectures available from torchvision.models (alexnet, densenet121, vgg16).

  • The user can set hyperparameters for learning rate, number of hidden units, and training epochs. The user can choose to train the model on a GPU.

  • Successfully reads in a checkpoint and rebuilds the model.

  • The trained model is used to predict the class for an input image.

  • The user provides the path to a checkpoint and an image. The user can choose the top K most likely classes to view and provide a mapping of categories to real names. The user has the option to use the GPU to calculate the predictions.
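One possible shape for the training script's command-line interface described above (a sketch using Python's argparse; the argument names and defaults are illustrative, not the original script's):

```python
import argparse

# Hypothetical CLI for a train.py script matching the spec above:
# the user picks an architecture, hyperparameters, and optionally the GPU.
def build_parser():
    parser = argparse.ArgumentParser(description="Train an image classifier")
    parser.add_argument("data_dir",
                        help="folder containing train/valid/test subfolders")
    parser.add_argument("--arch", default="vgg16",
                        choices=["alexnet", "densenet121", "vgg16"],
                        help="torchvision architecture to start from")
    parser.add_argument("--learning_rate", type=float, default=0.001)
    parser.add_argument("--hidden_units", type=int, default=512)
    parser.add_argument("--epochs", type=int, default=5)
    parser.add_argument("--gpu", action="store_true",
                        help="train on a GPU if one is available")
    return parser

# Example: parse the arguments a user might pass on the command line.
args = build_parser().parse_args(["flowers", "--arch", "densenet121",
                                  "--epochs", "5", "--gpu"])
print(args.arch, args.epochs, args.gpu)
```

Keeping the architecture choice behind a `choices=` list means an unsupported model name fails fast with a clear error instead of deep inside the training code.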

 
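The checkpoint requirement can be sketched as a save/rebuild round trip. The checkpoint keys below ("arch", "state_dict", "class_to_idx") are assumptions modelled on common PyTorch practice, not the original project code; a tiny stand-in model is used so the example runs without downloading pre-trained weights.

```python
import os
import tempfile

import torch
from torch import nn

# Save everything needed to rebuild the model later: which architecture
# was used, the trained weights, and the class-index mapping.
def save_checkpoint(model, arch, class_to_idx, path):
    torch.save({"arch": arch,
                "state_dict": model.state_dict(),
                "class_to_idx": class_to_idx}, path)

# Rebuild the same architecture, then restore the trained weights.
def load_checkpoint(path, build_model):
    ckpt = torch.load(path)
    model = build_model(ckpt["arch"])
    model.load_state_dict(ckpt["state_dict"])
    model.class_to_idx = ckpt["class_to_idx"]
    return model

# Tiny stand-in for a real torchvision model, purely for demonstration.
def build_model(arch):
    return nn.Linear(4, 2)

original = build_model("toy")
path = os.path.join(tempfile.mkdtemp(), "checkpoint.pth")
save_checkpoint(original, "toy", {"1": 0}, path)
rebuilt = load_checkpoint(path, build_model)
```

Storing the architecture name inside the checkpoint is what lets a separate predict script rebuild the right network before loading the weights.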

The network was trained on the 102 Category Flower Dataset by Maria-Elena Nilsback and Andrew Zisserman (https://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html).


The dataset of flower pictures was already divided into training, validation, and test sets. The model was trained on the training set of flower images and then evaluated on the test set. The results were impressive: after training for only 5 epochs, validation accuracy was around 86%.


Test accuracy was 86.5%.

One example test used a balloon flower (pictured below). The top prediction returned was balloon flower, and the graph below shows the top 3 classification predictions. I was really impressed by how quickly the network could be trained, especially using a GPU on a home computer, and I was also surprised by the accuracy of the classifications.
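The top-K step can be sketched with `torch.topk`. This is a hedged illustration, not the project's actual code: it assumes the model ends in log-softmax, and the `idx_to_class` / `cat_to_name` mappings (mirroring the dataset's category-to-name file) are invented for the example.

```python
import torch

# Given one image's log-probabilities, return the k most likely classes
# as (flower name, probability) pairs.
def top_k(log_probs, k, idx_to_class, cat_to_name):
    probs = torch.exp(log_probs)   # undo the log-softmax
    top_p, top_i = probs.topk(k)   # k largest probabilities and their indices
    names = [cat_to_name[idx_to_class[i.item()]] for i in top_i]
    return list(zip(names, top_p.tolist()))

# Fabricated output for a 4-class toy example (not real model output).
log_probs = torch.log(torch.tensor([0.05, 0.70, 0.15, 0.10]))
idx_to_class = {0: "1", 1: "2", 2: "3", 3: "4"}
cat_to_name = {"1": "pink primrose", "2": "balloon flower",
               "3": "canterbury bells", "4": "sweet pea"}
predictions = top_k(log_probs, 3, idx_to_class, cat_to_name)
print(predictions)
```

The two-step mapping (model index to dataset class, then class to readable name) matches how the flower dataset labels its 102 categories with numeric folder names.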


The coursework was definitely challenging, especially over Christmas when it was very hot and I would have preferred to be doing other fun activities over the break. However, I learned a lot relatively quickly and got a taste of what can be accomplished with machine learning. Lately, I have wondered whether a similar model could be used to identify plant regrowth from bushfire photos, and whether it could be incorporated into the Forest Health Project.

[Figure 1: top 3 classification predictions for the balloon flower image]