What Are Computer Vision and Object Detection?

Introduction

Object detection algorithms are among the most powerful tools in machine learning and artificial intelligence. These are decision algorithms that enable computer systems to make inferences about the real world around them, as seen through a camera. Without object recognition, robots that manipulate objects, autonomous vehicles, and image classification software would be almost impossible to build.

Understanding computer vision and object detection will help you recognize more use cases for these tools, so you can apply them to innovative and useful tasks, apps, and systems.

What Is Computer Vision?

Since object detection is a family of algorithms and techniques built on computer vision, let's start by defining computer vision and setting the context for our exploration of object detection.

In the most basic terms, computer vision is a field of computer science that seeks to endow computers with the ability to extract and interpret high-level features of images and video.

What is meant by high-level features? The interpretation of high-level features mimics the way humans recognize objects. Humans recognize objects through a bottom-up information processing system: first, the edges of an object are identified and linked together to form its outline; then the details are filled in, and the brain uses those recognizable patterns to determine what the object is. "High-level" refers to the fact that the many small features which make up the image are joined together, and the object is recognized at the highest level of processing. Likewise, the goal of computer vision techniques is to give computers this same ability: to examine an image and recognize the parts of it that carry specific meanings.

So rather than returning information about an object in an image at the base level, such as information about the individual pixels that make up the image, a computer vision system can return information at a high level (a level that has meaning to humans). It can say that an object is a car, a piece of fruit, or a person. A computer can then select the appropriate course of action and carry out other instructions depending on how the object in the image was classified. For instance, if the object detection system in an autonomous vehicle recognizes an object as a car, the computer can use this information to initiate the braking system and avoid a collision.
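
To make the contrast between pixel-level and high-level information concrete, here is a minimal sketch using a pretrained torchvision classifier; it assumes PyTorch and torchvision are installed and that a local image file named street.jpg exists:

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Low-level view: the image is just a grid of pixel values.
image = Image.open("street.jpg").convert("RGB")

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
batch = preprocess(image).unsqueeze(0)  # shape (1, 3, 224, 224)

# High-level view: a pretrained classifier maps those pixels to a label.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()
with torch.no_grad():
    class_id = model(batch).argmax(dim=1).item()
print(weights.meta["categories"][class_id])  # e.g. "sports car"
```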

Object Detection Steps

As mentioned, computer vision systems carry out object detection much like humans do, starting at the lowest levels of processing and working upwards, joining features together as they go. When performing digital processing and object recognition, a computer works through the following steps, which are analogous to how humans recognize objects.

Pattern recognition

During pattern recognition, the network analyzes the entire image and finds recognizable patterns within it.
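
As a concrete illustration of pattern matching at the pixel level, here is a minimal sketch (assuming NumPy and SciPy) that convolves an image with a hand-written Sobel kernel, the same kind of operation a convolutional network learns to perform; the kernel responds strongly wherever vertical edges appear:

```python
import numpy as np
from scipy.signal import convolve2d

image = np.random.rand(64, 64)  # stand-in for a grayscale image

# A classic edge-detecting pattern: large output where pixel values
# change sharply from left to right.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

edge_response = convolve2d(image, sobel_x, mode="same")
print(edge_response.shape)  # (64, 64): edge strength at every pixel
```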

Feature extraction

After pattern recognition, the network carries out feature extraction: it takes the patterns it found and breaks them down into distinct features, keeping the patterns it judges important and ignoring the rest of the image.
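
In a convolutional network, feature extraction is performed by stacked convolution layers that turn raw pixels into progressively more abstract feature maps. A minimal PyTorch sketch (the layer sizes here are illustrative choices, not from any particular architecture):

```python
import torch
import torch.nn as nn

features = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level: edges, blobs
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid-level: textures, parts
    nn.ReLU(),
    nn.MaxPool2d(2),
)

x = torch.randn(1, 3, 64, 64)  # a dummy 64x64 RGB image
feature_maps = features(x)
print(feature_maps.shape)      # torch.Size([1, 32, 16, 16])
```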

Classification

Let’s assume the object of interest within an image is a car. During classification, the relevant features are joined together into a representation of the object. This representation is then compared against what the network knows about objects, and the various clusters of shapes and edges are used to place a label on the object. For instance, the network looks at all the features that comprise the object (headlights, doors, windows, etc.) and, based on how those features fit together, classifies it as a car.
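
The classification stage can then be a small head that joins the extracted features together and maps them onto class scores. A minimal PyTorch sketch (the three labels are illustrative):

```python
import torch
import torch.nn as nn

feature_maps = torch.randn(1, 32, 16, 16)  # stand-in for extracted features

classifier = nn.Sequential(
    nn.Flatten(),                     # join the feature maps together
    nn.Linear(32 * 16 * 16, 64),
    nn.ReLU(),
    nn.Linear(64, 3),                 # 3 class scores: car / person / fruit
)

scores = classifier(feature_maps)
print(scores.argmax(dim=1))           # index of the most likely class
```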

Algorithms Used In Object Detection

Now that we have some idea of how computer vision works, we can take a look at the kinds of algorithms used in object detection/object recognition.

What You Need: Bounding Boxes

In order for a neural network to recognize where in an image an object is, a dataset has to be created that the model can learn from. This labeled training set includes images of objects surrounded by bounding boxes: rectangles that mark where the network should look for each object. The network learns to pay attention to what is inside the bounding box, suppressing the influence of regions outside of it. A bounding box drawn by a human annotator to mark an object's true location is called a ground-truth box. (A related term, anchor box, refers to the predefined boxes of fixed sizes and aspect ratios that many detectors use as reference shapes for their predictions.)
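
As an illustration, a single labeled training example might be stored like this; the field names and coordinate format are assumptions for the sketch, since conventions vary between datasets:

```python
# One labeled training example with two ground-truth boxes.
labeled_example = {
    "image_path": "images/street_001.jpg",
    "annotations": [
        # each box: top-left corner (x, y), width, height, and a class label
        {"bbox": [120, 45, 200, 90], "label": "car"},
        {"bbox": [410, 60, 55, 130], "label": "person"},
    ],
}
```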

General Object Detection Framework

One of the most common ways to handle object detection in computer vision is with the "sliding windows" technique. This technique involves creating a small box or window, much smaller than the image itself, and passing it over the entire image. The region of the image that falls within this window is isolated and passed into a convolutional neural network to make a prediction. The window is then moved over slightly, usually by just a few pixels, and the process repeats until the entire image has been covered and passed into the neural network.
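
Here is a minimal sketch of that loop (assuming NumPy; the classify step is left as a placeholder for a trained CNN):

```python
import numpy as np

def sliding_windows(image, window=64, stride=8):
    """Yield (x, y, crop) for every window position over the image."""
    height, width = image.shape[:2]
    for y in range(0, height - window + 1, stride):
        for x in range(0, width - window + 1, stride):
            yield x, y, image[y:y + window, x:x + window]

image = np.random.rand(256, 256, 3)  # stand-in for an input photo
for x, y, crop in sliding_windows(image):
    pass  # each crop would be passed to the CNN here: label = classify(crop)
```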

When the windowed crops are passed into the network, it examines each one, and if a crop aligns with the representation of an object that the network learned from the ground-truth boxes, the object is recognized. The end result of this series of cropping parts of the image and making predictions is a set of regions that the network has classified as objects, each with a predicted bounding box.

Using the sliding windows algorithm can be computationally expensive, although there are strategies to reduce the amount of processing power it needs. Another issue with the sliding windows strategy is that the predicted bounding boxes tend to be somewhat inaccurate, since a box can only land at the positions and sizes the window happens to take as it slides.

Sliding Windows vs YOLO

The You-Only-Look-Once (YOLO) algorithm can be considered a tweaked version of sliding windows, with the benefit of being both faster and more accurate than the basic approach.

The YOLO algorithm requires that the image be divided into a grid, and it is this grid that helps the algorithm achieve higher accuracy at a faster runtime. Often, an image is divided into a 19 x 19 grid. The network receives the image and generates predictions about where the bounding boxes should be. YOLO measures the Intersection over Union (IoU), which tracks the amount of overlap between bounding boxes.
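
As a concrete reference, here is a minimal sketch of the IoU computation for two boxes given as (x1, y1, x2, y2) corner coordinates (the coordinate convention is an assumption; datasets vary):

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    # corners of the overlapping region
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175, roughly 0.143
```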

The ground-truth bounding box and the bounding box predicted by the algorithm are compared for their similarity. Predictions with low confidence are discarded, and where several predicted boxes overlap heavily, only the one with the highest probability is kept (a step known as non-max suppression). Compared to cropping many windows that are each passed into the network individually, the grid that YOLO is built on enables faster computation: the network runs on the whole image at once and produces predictions for every grid cell in a single pass.
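
The pruning step described above is commonly implemented as non-max suppression. Here is a minimal sketch that reuses the iou() helper from the previous example; the 0.5 threshold is an illustrative choice, not a fixed rule:

```python
def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """boxes: list of (x1, y1, x2, y2); scores: matching confidences.
    Returns the indices of the boxes to keep."""
    # consider boxes from most to least confident
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        # drop any remaining box that overlaps the kept box too heavily
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```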

No matter how you are using computer vision and object detection algorithms, you’ll want to hire professionals to assist you in designing datasets ready for optimal object detection. If your data hasn’t been properly prepared, your object detection network can misclassify objects, so it pays to invest in object detection specialists.