Traffic sign recognition with convolutional neural networks

A traffic sign detection system built with convolutional neural networks and OpenCV.

Previously, we implemented path detection for an autonomous vehicle. In this article, let's see how to implement a traffic sign detection system for an autonomous vehicle using deep convolutional networks and the OpenCV module.

Environment setup / Project requirements:
1) Python.
2) The OpenCV, Pickle, NumPy, Matplotlib, Keras, and scikit-learn libraries installed on your computer/laptop.
3) I'm using the Jupyter Notebook environment, but you can use any environment, including Google Colab.

Note: You can't run this code in an online Python compiler/interpreter.

Work Flow:
  1. Import the dataset and map the images to their corresponding labels.
  2. Divide the data into a training set, a validation set, and a testing set.
  3. Convert the RGB images to grayscale, resize them, and standardize them.
  4. Iterate over and pre-process all the images.
  5. Apply data augmentation to make the data more generic.
  6. Build the neural network model.
  7. Set up the camera.

Step 1: Importing libraries and data.

We import the required libraries and the data. We use the German Traffic Sign dataset, which contains 34,799 images for training and testing the model. You can download the data from here.
The dataset has 43 different classes of German traffic sign images and a CSV file of labels.
Remember, you need both the CSV file and the image dataset to complete the project.
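For reference, here is a minimal, hedged sketch of loading the images and labels; the folder name myData, the per-class sub-folder layout, and the labels.csv file name are assumptions, so adapt them to your download.

import os
import cv2
import numpy as np
import pandas as pd

# assumed layout: one sub-folder per class (0, 1, ..., 42) inside "myData",
# plus a labels.csv that maps each class number to a sign name
path = "myData"
images, classNo = [], []
for class_id in sorted(os.listdir(path), key=int):
    class_folder = os.path.join(path, class_id)
    for file_name in os.listdir(class_folder):
        img = cv2.imread(os.path.join(class_folder, file_name))
        img = cv2.resize(img, (32, 32))          # resize to 32x32 while loading
        images.append(img)
        classNo.append(int(class_id))
images = np.array(images)
classNo = np.array(classNo)
labels = pd.read_csv("labels.csv")               # class number -> sign name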


Step 2: Data-Preprocessing.

A constraint of convolutional neural networks is that they must be trained on images of fixed dimensions, for example 16x16x3 or 128x128x3; the entire dataset should share the same dimensions.
So we do some preprocessing to make sure the data is trainable. For this project, I'm resizing all images to 32x32x3.
We then convert the images to grayscale and standardize the dataset by dividing the pixel values by 255.

import cv2

def grayscale(img):
    # convert the BGR image to a single grayscale channel
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return img

def equalize(img):
    # spread out the intensity histogram to improve contrast
    img = cv2.equalizeHist(img)
    return img

def preprocessing(img):
    img = grayscale(img)
    img = equalize(img)
    img = img / 255      # scale pixel values to the range 0-1
    return img

Now we divide the data into a training set, a validation set, and a testing set, and apply the preprocessing function to each split.
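If your data is not already split, a hedged sketch using scikit-learn's train_test_split could look like this (the 80/20 ratios are example values):

from sklearn.model_selection import train_test_split

# first carve out the test set, then split validation data from what remains
X_train, X_test, y_train, y_test = train_test_split(images, classNo, test_size=0.2)
X_train, X_validation, y_train, y_validation = train_test_split(X_train, y_train, test_size=0.2)

The preprocessing function is then mapped over every image in each split: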

X_train = np.array(list(map(preprocessing, X_train))) 
X_validation = np.array(list(map(preprocessing, X_validation)))
X_test = np.array(list(map(preprocessing, X_test)))

As convolutional neural networks take three-dimensional input, we add a depth of 1 for the grayscale images.
X_train = X_train.reshape(X_train.shape[0], X_train.shape[1], X_train.shape[2], 1)
X_validation = X_validation.reshape(X_validation.shape[0], X_validation.shape[1], X_validation.shape[2], 1)
X_test = X_test.reshape(X_test.shape[0], X_test.shape[1], X_test.shape[2], 1)
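The labels also need to be one-hot encoded, since the model in Step 3 is compiled with categorical cross-entropy. A short sketch, assuming y_train, y_validation, and y_test hold integer class IDs:

from keras.utils.np_utils import to_categorical

noOfClasses = 43
y_train = to_categorical(y_train, noOfClasses)
y_validation = to_categorical(y_validation, noOfClasses)
y_test = to_categorical(y_test, noOfClasses)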

After making the images three-dimensional, we apply data augmentation to make the data more generic: the images are shifted, zoomed, sheared, and rotated by up to 10 degrees.

from keras.preprocessing.image import ImageDataGenerator

dataGen = ImageDataGenerator(width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.2,
                             shear_range=0.1, rotation_range=10)

Step 3: Neural Network model 

We will train a convolutional neural network model built using Keras.

from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.optimizers import Adam
from keras.utils.np_utils import to_categorical

def myModel():    # function name assumed
    # hyperparameters (example values; tune them for your setup)
    no_Of_Filters = 60
    size_of_Filter = (5, 5)      # kernel for the first two convolutional layers
    size_of_Filter2 = (3, 3)     # kernel for the deeper convolutional layers
    size_of_pool = (2, 2)
    no_Of_Nodes = 500
    imageDimesions = (32, 32, 3)
    noOfClasses = 43

    model = Sequential()
    model.add(Conv2D(no_Of_Filters, size_of_Filter, input_shape=(imageDimesions[0], imageDimesions[1], 1), activation='relu'))
    model.add(Conv2D(no_Of_Filters, size_of_Filter, activation='relu'))
    model.add(MaxPooling2D(pool_size=size_of_pool))
    model.add(Conv2D(no_Of_Filters // 2, size_of_Filter2, activation='relu'))
    model.add(Conv2D(no_Of_Filters // 2, size_of_Filter2, activation='relu'))
    model.add(MaxPooling2D(pool_size=size_of_pool))
    model.add(Dropout(0.5))

    model.add(Flatten())
    model.add(Dense(no_Of_Nodes, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(noOfClasses, activation='softmax'))
    model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
    return model

Sequential() arranges the layers one after another.
We use the Adam optimizer.
Conv2D builds a 2D convolutional layer.
MaxPooling2D builds a pooling layer, which reduces the dimensions of the feature maps.
Dropout randomly drops some of the neurons in that layer during training.
Flatten converts the 2D feature maps into a 1D vector.
Dense builds the fully connected layers.
ReLU (rectified linear unit) is a non-linear activation function.
Softmax is another non-linear activation function, generally used for output layers.
model.compile configures the model for training.

We can inspect the architecture with model.summary() and then train the model on batches produced by the augmentation generator.
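A minimal training sketch, assuming the myModel function and the dataGen generator defined above; the batch size and number of epochs are example values:

model = myModel()
print(model.summary())

batch_size = 50     # example value
epochs = 10         # example value
history = model.fit_generator(dataGen.flow(X_train, y_train, batch_size=batch_size),
                              steps_per_epoch=len(X_train) // batch_size,
                              epochs=epochs,
                              validation_data=(X_validation, y_validation),
                              shuffle=True)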


Step 4: Test the model accuracy and save the model.

score = model.evaluate(X_test, y_test, verbose=0)

The evaluate function returns the test loss and accuracy of the model:

Test Score: 0.1530995881077887
Test Accuracy: 0.958764367816092
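Step 5 loads the model from a pickle file named model_trained.p, so a matching save step could look like this sketch (Keras's model.save is an equally valid alternative):

import pickle

# serialize the trained model so the camera script can reload it
pickle_out = open("model_trained.p", "wb")
pickle.dump(model, pickle_out)
pickle_out.close()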

Step 5: Setup Camera
    We declare the video frame size and load the trained model from the pickle file.

import pickle

frameWidth = 640      # camera frame width
frameHeight = 480     # camera frame height
pickle_in = open("model_trained.p", "rb")
model = pickle.load(pickle_in)    # load the trained model
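The article assumes a webcam feed; a minimal OpenCV capture setup might look like this (device index 0 is an assumption, use your camera's index):

import cv2

cap = cv2.VideoCapture(0)                        # default webcam
cap.set(cv2.CAP_PROP_FRAME_WIDTH, frameWidth)    # apply the frame size declared above
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, frameHeight)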

Now, we will map the labels for each class. Our data has 43 classes in total.

def getClassName(classNo):    # helper name assumed
    if   classNo == 0: return 'Speed Limit 20 km/h'
    elif classNo == 1: return 'Speed Limit 30 km/h'
    elif classNo == 2: return 'Speed Limit 50 km/h'
    elif classNo == 3: return 'Speed Limit 60 km/h'
    elif classNo == 4: return 'Speed Limit 70 km/h'
    elif classNo == 5: return 'Speed Limit 80 km/h'
    elif classNo == 6: return 'End of Speed Limit 80 km/h'
    elif classNo == 7: return 'Speed Limit 100 km/h'
    elif classNo == 8: return 'Speed Limit 120 km/h'
    elif classNo == 9: return 'No passing'
    elif classNo == 10: return 'No passing for vehicles over 3.5 metric tons'
    elif classNo == 11: return 'Right-of-way at the next intersection'
    elif classNo == 12: return 'Priority road'
    elif classNo == 13: return 'Yield'
    elif classNo == 14: return 'Stop'
    elif classNo == 15: return 'No vehicles'
    elif classNo == 16: return 'Vehicles over 3.5 metric tons prohibited'
    elif classNo == 17: return 'No entry'
    elif classNo == 18: return 'General caution'
    elif classNo == 19: return 'Dangerous curve to the left'
    elif classNo == 20: return 'Dangerous curve to the right'
    elif classNo == 21: return 'Double curve'
    elif classNo == 22: return 'Bumpy road'
    elif classNo == 23: return 'Slippery road'
    elif classNo == 24: return 'Road narrows on the right'
    elif classNo == 25: return 'Road work'
    elif classNo == 26: return 'Traffic signals'
    elif classNo == 27: return 'Pedestrians'
    elif classNo == 28: return 'Children crossing'
    elif classNo == 29: return 'Bicycles crossing'
    elif classNo == 30: return 'Beware of ice/snow'
    elif classNo == 31: return 'Wild animals crossing'
    elif classNo == 32: return 'End of all speed and passing limits'
    elif classNo == 33: return 'Turn right ahead'
    elif classNo == 34: return 'Turn left ahead'
    elif classNo == 35: return 'Ahead only'
    elif classNo == 36: return 'Go straight or right'
    elif classNo == 37: return 'Go straight or left'
    elif classNo == 38: return 'Keep right'
    elif classNo == 39: return 'Keep left'
    elif classNo == 40: return 'Roundabout mandatory'
    elif classNo == 41: return 'End of no passing'
    elif classNo == 42: return 'End of no passing by vehicles over 3.5 metric tons'

Now, we will predict the class of the image captured by the camera.

    predictions = model.predict(img)            # class probabilities for the frame
    classIndex = model.predict_classes(img)     # index of the most likely class
    probabilityValue = np.amax(predictions)     # confidence of that prediction
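For context, here is a hedged sketch of the capture-and-predict loop these lines belong to. It reuses the cap object, the preprocessing function, and the getClassName helper defined earlier, and the threshold value is only a placeholder (see the question below):

threshold = 0.75   # placeholder; choose your own minimum probability

while True:
    success, imgOriginal = cap.read()            # grab a frame from the camera
    img = cv2.resize(imgOriginal, (32, 32))      # match the training resolution
    img = preprocessing(img)                     # grayscale, equalize, scale to 0-1
    img = img.reshape(1, 32, 32, 1)              # a batch of one grayscale image

    predictions = model.predict(img)
    classIndex = model.predict_classes(img)
    probabilityValue = np.amax(predictions)

    if probabilityValue > threshold:
        cv2.putText(imgOriginal, getClassName(int(classIndex[0])), (20, 35),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0, 0, 255), 2)
    cv2.imshow("Result", imgOriginal)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break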

Now, there is a question for you: what is the minimum probability required to classify an image into a particular class?

If you want the complete code and the answer to the above question, you can visit my GitHub by clicking here.

Sources : 

1) Sanket Doshi, "Traffic sign detection using convolutional neural network", Towards Data Science.
2) "Traffic sign detection and recognition based on convolutional neural networks", 2019 International Conference on Advances in Computing, Communication and Control, IEEE.

Expected Viva: 

1) What is the project useful for?
A) This project detects traffic signs in autonomous vehicles through the camera attached to them. It may not be enough for a full autonomous car, but it can be used for an autonomous robot like Amazon's DeepRacer.

2) Why convolutional neural networks?
A) Convolutional neural networks are the standard approach for image recognition and can extract more robust features from the given data.

3) Why resize the input images to 32x32x3?
A) Convolutional networks cannot deal with images of varying dimensions; every image in the dataset should be the same size. To ensure that, we resize the entire dataset to a fixed size, in this case 32x32x3 (the 3 is for the RGB channels).

4) Why use this model rather than other models?
A) It is perfectly fine to use other models if they are more accurate and faster than this one. In this project, we used about 39k images to reach this accuracy.
The advantages of this project are:
  1. It reaches a maximum accuracy of about 98.5%; pushing the accuracy much higher carries a high risk of overfitting.
  2. The model generalizes well because we trained on about 39k images and applied data augmentation.

5) What is data augmentation? Why do we need it?
A) Data augmentation creates modified copies of the training images (shifts, zooms, shears, and small rotations), which makes the model more general and helps prevent overfitting.

6) What is overfitting?
A) Overfitting means the model becomes confined to the particular dataset it was trained on and cannot predict new data correctly.
 
7) Why divide by 255, i.e. why standardize the image data?
A) RGB and grayscale pixel values usually range from 0 to 255, which leads to high variance in the dataset, so the neural network may not learn well. If we divide the pixel values by 255, they fall between 0 and 1 and the variance is reduced.

made with 💓 by G. Sai Dheeraj


