How to build an AI model in no time!

This article recaps our recent Webinar on how to quickly build an AI model!

You may have been lucky enough to catch our recent webinar, detailing how to build an AI model in no time. If you missed it, no need to worry: you can find it on our YouTube channel or here:

In this 40-minute installment, Etienne, LabelFlow’s Lead Data Scientist, and Geoffrey, LabelFlow’s co-founder and CEO, cover, well… exactly what it says on the tin.

If you don’t have 40 minutes to spare, we’ve got you! This article recaps the whole thing.

Before we start

For this webinar, the team decided to focus on building an AI model to detect electrical distribution grid equipment: typically transformers and different types of insulators.

[Figure: label types]

This type of machine learning model is particularly useful for feeding preventive maintenance software and, most importantly, for saving linesmen and technicians time.

Before getting into the specific model and parameters, let’s briefly overview the training process.

[Figure: the training process]

As can be seen above, the iterative process begins with data collection, followed by a manual labeling step. This webinar/tutorial should provide a practical example to help you set up your machine learning training pipelines, allowing you to automatically pre-label large datasets.

At LabelFlow, we know firsthand how the quality of the dataset can make or break the performance of your machine learning models. We’re committed to developing the most streamlined image labeling tool and platform to help our users curate large datasets for machine learning models, and scale their AI projects.

Your images are stored locally on your device for simplicity and security. Should you wish to collaborate, you can also upload images to LabelFlow’s secure servers, split the work with colleagues, and create workflows (sign up for beta access to the collaborative version here in early December).

Laying the groundwork for your AI model

The idea is to quickly build a model from a simple training dataset and use it as a prototype to pre-label new images. The goal isn’t to create the most accurate or sophisticated model possible, but rather to find a good compromise between the time spent building the model and the overall quality of the results.

The inspiration for this work comes from the torchvision object detection tutorial, with the minimum requirements to actually use the trained model added on top. The framework is based on PyTorch, and most of the utils are copied from the torchvision references. PyTorch is one of the most popular deep learning libraries, primarily developed by Facebook’s AI Research lab (FAIR). It is an open-source machine learning framework designed to accelerate the path from research prototyping to production deployment.

[Image: PyTorch logo]
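
To give a concrete sense of that tutorial-style setup, here is a minimal sketch of loading a torchvision detection model and adapting its head to a custom set of classes. The function name and class count below are illustrative; the repository may choose a different architecture or defaults.

    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    def build_model(num_classes):
        # Start from a Faster R-CNN pre-trained on COCO, as in the torchvision tutorial
        model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
        # Swap the classification head for one sized to our labels (+1 for the background class)
        in_features = model.roi_heads.box_predictor.cls_score.in_features
        model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
        return model

    # e.g. the 5 equipment types plus the background class
    model = build_model(num_classes=6)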

The framework's inputs and outputs are all datasets in the COCO format, which is one of the reference formats for manipulating annotated images. Should you wish to follow along, Etienne prepared a public GitHub repository for this specific example.

[Image: the GitHub repository]
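
If you have never worked with COCO before, the annotation file is a single JSON document listing images, annotations, and categories. Here is a quick way to peek at one; the file path below is an assumption, so adjust it to the actual layout of your dataset.

    import json

    # Path is an assumption; point it at the annotation file of your COCO dataset
    with open("data/sample-coco-dataset/annotations.json") as f:
        coco = json.load(f)

    print(list(coco.keys()))       # expected top-level keys: images, annotations, categories
    print(coco["categories"][:3])  # e.g. [{"id": 1, "name": "transformer"}, ...]
    print(coco["annotations"][0])  # each box is stored as "bbox": [x, y, width, height]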

Let’s get started!

Install the requirements

  • Make sure that you have Python 3.x installed; this was tested with Python 3.8.0.
  • It is recommended to create a new virtual environment to avoid interference with your global libraries; you can follow this tutorial to create one (see the setup sketch after this list). Additionally, run pip install Cython in your blank virtual environment before installing the requirements.
  • If you wish to make use of a GPU, you should make sure that the version of torch you use is compatible with your environment. Just follow the instructions on PyTorch.
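
For reference, here is a minimal setup sketch (shell commands; the environment name is illustrative), after which you can install the requirements as shown below:

    python3 -m venv labelflow-env        # create an isolated virtual environment
    source labelflow-env/bin/activate    # activate it (use labelflow-env\Scripts\activate on Windows)
    pip install Cython                   # install Cython first, as noted above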

pip install -r requirements.txt

You should then be able to run the following line without encountering any issues.

python train.py --dataset-path data/sample-coco-dataset

The device that is used for training is logged at the beginning of the training script, e.g. "Running training on device cpu" or "Running training on device cuda". This can help you make sure that your GPU is actually being used if you wish to use it.
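
That device selection typically boils down to a check like the following; this is a sketch of the idea, not necessarily the script’s exact code:

    import torch

    # Prefer a CUDA GPU when one is available, otherwise fall back to the CPU
    device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
    print(f"Running training on device {device}")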

Label your images and train a model

[Figure: schema of the labeling and training loop]

Building and training an AI model is an iterative process that begins, of course, with collecting images. In this case, electrical grid inspection images are typically captured by helicopter or drone.

Here are four aspects that will allow you to create great datasets:

High volume of data. No secret here: you need a high volume of labeled images to obtain great outputs from your machine learning model. Image augmentation can help you virtually increase this amount of data.

[Image: dataset example]

Balanced dataset. Size matters, but so do class balance and homogeneity. If you have 1,500 labels for insulators and only 5 for transformers, your model will probably fail at detecting the underrepresented class (a quick way to check this is sketched after this list).

Training pipelines and tooling in place. A training pipeline needs to be set up to reach a virtuous circle for your AI. The process could be as follows: machine learning models create detections, or labels, on raw images; human experts and labelers correct what’s wrong or inaccurate, improving the training data quality; the models are then retrained on the improved dataset, and the cycle repeats.

Collaboration. Preparing a high-quality dataset often requires multiple stakeholders and a software platform to facilitate the work. This typically involves creating and modifying labels, merging labels into a larger dataset, or even merging multiple datasets, among other actions. Data is dynamic and should be continuously updated to maintain its integrity and relevance.
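
To check the balance point above in practice, you can count labels per class directly in an exported COCO annotation file. A small sketch (the file path is an assumption, and the export itself is described just below):

    import json
    from collections import Counter

    with open("data/sample-coco-dataset/annotations.json") as f:  # adjust to your export
        coco = json.load(f)

    names = {c["id"]: c["name"] for c in coco["categories"]}
    counts = Counter(names[a["category_id"]] for a in coco["annotations"])
    for name, count in counts.most_common():
        print(f"{name}: {count}")  # a class with very few labels is a red flag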

Connect to LabelFlow, then upload and label your raw images. In this case, Etienne took 1,000 images from an electrical grid inspection and labeled them according to the 5 equipment types we’re looking for. Once you’ve finished labeling, export the dataset to COCO format, making sure you toggle the option to include the images.

[Animation: exporting a dataset from LabelFlow]

The following script will train a new model for you on the COCO dataset that you just exported, or any COCO-format dataset respecting the structure of data/sample-coco-dataset.

python train.py --dataset-path <coco-dataset-directory-path>

After each iteration, also known as an “epoch” (a full pass over the training data), the script will print evaluation metrics on a validation dataset that is split off from the original one. The model’s snapshot weights are stored after each training epoch in outputs/models/<dataset name>/epoch_<snapshot index>.pth. The losses should decrease with each new epoch, which shows the progress of the training until its completion.

[Figure: model loss]
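
The snapshot-saving step mentioned above boils down to something like the helper below; a sketch of the idea, not the repository’s exact code:

    import os
    import torch

    def save_snapshot(model, dataset_name, epoch):
        """Store the model weights after an epoch, following the path pattern above (illustrative)."""
        output_dir = os.path.join("outputs", "models", dataset_name)
        os.makedirs(output_dir, exist_ok=True)
        path = os.path.join(output_dir, f"epoch_{epoch}.pth")
        torch.save(model.state_dict(), path)
        return path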

Once the model has been trained for the first time, an evaluation step is run to assess the performance.

Make inferences and visualize them

The evaluation step is performed on a dataset separate from the training dataset. The model’s outputs are known as inferences, which in this case are AI-generated labels on the evaluation dataset. Once the model has seen each image in the training dataset, we ask it for inferences on the validation dataset and compare them with the ground truth that was labeled manually.
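
Concretely, asking a torchvision-style detector for inferences looks roughly like this (the helper name and score threshold are illustrative):

    import torch

    @torch.no_grad()
    def predict(model, images, device, score_threshold=0.5):
        """Run the trained detector on a list of image tensors and keep confident detections (a sketch)."""
        model.eval()
        outputs = model([img.to(device) for img in images])
        results = []
        for out in outputs:
            keep = out["scores"] >= score_threshold
            results.append({
                "boxes": out["boxes"][keep].cpu(),    # [x1, y1, x2, y2] in torchvision's convention
                "labels": out["labels"][keep].cpu(),
                "scores": out["scores"][keep].cpu(),
            })
        return results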

[Figure: model evaluation metrics]

As can be seen above, the evaluation step comes with some interesting analytics. The average precision measures how many of the automated detections are relevant, whereas the average recall measures how many of the relevant objects were actually detected. These ratios depend on the IoU (Intersection over Union) threshold, i.e. the amount of overlap required between an automated detection and the ground truth for the detection to be considered correct. In this case, 6 different IoU values are considered, each giving its own average precision and recall values.
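
For reference, the IoU of two boxes is simply their overlap area divided by the area of their union. A minimal sketch:

    def iou(box_a, box_b):
        """Intersection over Union of two boxes given as [x1, y1, x2, y2]."""
        x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        intersection = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - intersection
        return intersection / union if union > 0 else 0.0

    # Two partially overlapping boxes: this detection would only count as correct at low IoU thresholds
    print(iou([0, 0, 10, 10], [5, 5, 15, 15]))  # ~0.14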

[Figure: evaluation schema]

Depending on the specific use case or even label type, it could be interesting to apply different IoU values.

The following script runs your model on a COCO dataset and generates a COCO annotation file containing the inferences in outputs/inferences/<dataset name>_annotations.json. Typically, you can create a raw dataset on LabelFlow and export it as shown above to generate the correct input for the model.

python detect.py --dataset-path <coco-dataset-directory-path> --model-path <model-snapshot-path>
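
Under the hood, writing detections back out mostly means converting torchvision’s [x1, y1, x2, y2] boxes into COCO’s [x, y, width, height] records, along these lines (a sketch, not the repository’s exact code):

    def to_coco_annotations(outputs, image_ids, start_id=1):
        """Convert per-image detector outputs into COCO-style annotation dicts (illustrative)."""
        annotations, ann_id = [], start_id
        for image_id, out in zip(image_ids, outputs):
            for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
                x1, y1, x2, y2 = (float(v) for v in box)
                annotations.append({
                    "id": ann_id,
                    "image_id": image_id,
                    "category_id": int(label),
                    "bbox": [x1, y1, x2 - x1, y2 - y1],  # COCO stores [x, y, width, height]
                    "area": (x2 - x1) * (y2 - y1),
                    "score": float(score),
                    "iscrowd": 0,
                })
                ann_id += 1
        return annotations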

As mentioned previously, you can then qualitatively evaluate your prototype by importing the output annotation file into the corresponding raw dataset on LabelFlow, as shown in the schema above.

[Animation: importing inferences into LabelFlow]

Next steps

You can tune the parameters in train.py and in detect.py to optimize the performance of your prototype. If the results are satisfying, you can add more training data and switch to a more scalable and configurable framework like Detectron2.
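
If you do move to Detectron2, the equivalent starting point is a config-driven setup along these lines; the dataset names, paths, class count, and iteration budget below are illustrative, so check Detectron2’s documentation for the details:

    from detectron2 import model_zoo
    from detectron2.config import get_cfg
    from detectron2.data.datasets import register_coco_instances
    from detectron2.engine import DefaultTrainer

    # Register the exported COCO dataset (names and paths are illustrative)
    register_coco_instances("grid_train", {}, "data/my-dataset/annotations.json", "data/my-dataset/images")

    cfg = get_cfg()
    cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
    cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
    cfg.DATASETS.TRAIN = ("grid_train",)
    cfg.DATASETS.TEST = ()
    cfg.MODEL.ROI_HEADS.NUM_CLASSES = 5  # the 5 equipment types labeled earlier
    cfg.SOLVER.MAX_ITER = 3000           # tune to your dataset size

    trainer = DefaultTrainer(cfg)
    trainer.resume_or_load(resume=False)
    trainer.train()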

This webinar and tutorial should provide you with a practical example of how to quickly build an AI model. As mentioned during the webinar, there are a number of available tools to improve the performance of AI models. Fine-tuning the model’s architecture and parameters is no longer the main lever for improvement; the key to better-performing AI models is to set up training pipelines with consistent, high-quality datasets.
