Posts

Showing posts from December, 2020

Object Detection Using OpenCV and Transfer Learning.

Environment setup / Project requirements:
Python
OpenCV and Matplotlib libraries need to be installed.
Jupyter Notebook / PyCharm / Atom
Note: You cannot implement this project in Google Colaboratory or in any online Python interpreter.

Workflow:
Importing the transfer learning model
Importing dnn_DetectionModel
Looping through the COCO dataset
Initializing the threshold
Testing the model on an image
Looping through the labels
Implementing on video

Step 1: Importing the transfer learning model
First, we assign the transfer learning model's weights and frozen graph to variables.
config_file = "ssd_mobilenet_v3_large_coco_2020_01_14.pbtxt"
frozen_model = "frozen_inference_graph.pb"

Step 2: Importing dnn_DetectionModel
OpenCV provides dnn_DetectionModel; after importing cv2 we can call it directly from cv2.
model = cv2.dnn_DetectionModel(frozen_model, config_file)

Step 3: Looping through the COCO dataset
Since the COCO dataset that …
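A minimal end-to-end sketch of the workflow above, assuming the two model files are in the working directory and that a labels.txt file with the COCO class names (one per line) is available; the input size, scaling values, test image name, and confidence threshold are typical choices, not taken from the excerpt.

import cv2

config_file = "ssd_mobilenet_v3_large_coco_2020_01_14.pbtxt"
frozen_model = "frozen_inference_graph.pb"

# Build the detector from the frozen graph and its config.
model = cv2.dnn_DetectionModel(frozen_model, config_file)
model.setInputSize(320, 320)              # input resolution the model expects
model.setInputScale(1.0 / 127.5)          # scale pixel values to roughly [-1, 1]
model.setInputMean((127.5, 127.5, 127.5))
model.setInputSwapRB(True)                # OpenCV loads BGR; the model expects RGB

# "labels.txt" is a hypothetical file holding the COCO class names, one per line.
with open("labels.txt") as f:
    class_labels = [line.strip() for line in f]

img = cv2.imread("test.jpg")              # "test.jpg" is a placeholder test image
class_ids, confidences, boxes = model.detect(img, confThreshold=0.5)

# Loop through the detected labels and draw each box.
for class_id, conf, box in zip(class_ids.flatten(), confidences.flatten(), boxes):
    x, y, w, h = box
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(img, class_labels[class_id - 1], (x, y - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)

cv2.imshow("detections", img)
cv2.waitKey(0)

The same model object can be reused frame by frame from cv2.VideoCapture to run the detector on video.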

Human Action Recognition with convolutional neural networks using Accelerometer Data

Environment setup / Project requirements:
Python
Numpy, Pandas, TensorFlow, Matplotlib, and sklearn libraries need to be installed.
Jupyter Notebook
Note: I don't recommend the PyCharm or Atom environments, because preprocessing and visualizing the data is harder there.

Workflow:
Importing data
Organizing data
Creating a data frame
Pre-processing data
Visualizing data using Matplotlib
Creating a time frame
Dividing training and testing data
Convolutional neural network model
Checking accuracy through a confusion matrix

Step 1: Importing data
First, we import the required modules and read the data. Since the data is in text format, we cannot read it directly with pandas; instead, we use Python's file handling.
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Flatten, Dense, Dropout, BatchNormalization
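A minimal sketch of the file-handling and time-frame steps, assuming an accelerometer log in a WISDM-style format where each line reads user,activity,timestamp,x,y,z; the file name, window size, and hop length are assumptions, not taken from the excerpt.

import numpy as np
import pandas as pd

rows = []
# "WISDM_ar_v1.1_raw.txt" is an assumed file name for the raw accelerometer log.
with open("WISDM_ar_v1.1_raw.txt") as f:
    for line in f:
        line = line.strip().rstrip(";")
        if not line:
            continue
        parts = line.split(",")
        if len(parts) == 6:                # skip malformed lines
            rows.append(parts)

df = pd.DataFrame(rows, columns=["user", "activity", "timestamp", "x", "y", "z"])
df[["x", "y", "z"]] = df[["x", "y", "z"]].astype(float)

# Slice the signal into fixed-length time frames.
frame_size = 80                            # about 4 seconds at a 20 Hz sampling rate
hop = 40
frames, labels = [], []
for start in range(0, len(df) - frame_size, hop):
    window = df.iloc[start:start + frame_size]
    frames.append(window[["x", "y", "z"]].to_numpy())
    labels.append(window["activity"].mode()[0])   # label the frame with its most common activity

X = np.stack(frames)                       # shape: (num_frames, 80, 3), ready for a 1-D CNN
y = np.array(labels)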

Chat-bot master with artificial neural networks

This project is about building a chatbot that is useful in different real-world scenarios.

Environment setup / Project requirements:
Python
Numpy, Keras, Pickle, nltk, Tkinter, JSON, and random libraries need to be installed.
Jupyter Notebook
Note: You cannot run this code in Google Colab or in an online Python interpreter.

Workflow:
Importing data
Preprocessing data
Lemmatizing and removing duplicate words
Creating training and testing data
Neural network model
Saving the model
Assigning 0 or 1 to the data
Getting a random output
GUI

Step 1: Importing data
The data used for the project is in JSON format, so we cannot use pandas to read it. We have to use Python's file handling and iterate through the file.
data_file = open('intents.json').read()
intents = json.loads(data_file)

Step 2: Preprocessing the data
We iterate over the patterns, tokenize each one with the nltk.word_tokenize() function, and append each word to a list.
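A minimal sketch of Steps 1 and 2, assuming intents.json is laid out as a top-level "intents" list whose entries each carry a "tag" and a list of "patterns"; the ignore list and the nltk downloads are assumptions, not taken from the excerpt.

import json
import nltk
from nltk.stem import WordNetLemmatizer

nltk.download("punkt")      # tokenizer models (first run only)
nltk.download("wordnet")    # lemmatizer data (first run only)

lemmatizer = WordNetLemmatizer()

data_file = open("intents.json").read()
intents = json.loads(data_file)

words, classes, documents = [], [], []
ignore = ["?", "!", ".", ","]

for intent in intents["intents"]:
    for pattern in intent["patterns"]:
        tokens = nltk.word_tokenize(pattern)          # split the pattern into words
        words.extend(tokens)
        documents.append((tokens, intent["tag"]))     # keep the words with their tag
        if intent["tag"] not in classes:
            classes.append(intent["tag"])

# Lemmatize, lowercase, and remove duplicate words.
words = sorted(set(lemmatizer.lemmatize(w.lower()) for w in words if w not in ignore))
classes = sorted(classes)

The words and classes lists are what the later bag-of-words step turns into the 0/1 training vectors.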

Traffic sign recognition with convolutional neural networks

A traffic sign detection system with convolutional neural networks and OpenCV. Previously, we implemented path detection code for an autonomous vehicle. In this article, let's see how to implement a traffic sign detection system for an autonomous vehicle using deep convolutional networks and the OpenCV module.

Environment setup / Project requirements:
1) Python.
2) OpenCV, Pickle, Numpy, Matplotlib, Keras, and Sklearn libraries need to be installed on your computer/laptop.
3) I'm using the Jupyter Notebook environment, but you can use any environment; I think you can use Google Colab.
Note: You can't run this code in an online Python compiler/interpreter.

Workflow:
Import the dataset and map the images to their corresponding labels.
Divide the data into a training set, a validation set, and a testing set.
Convert the RGB images to grayscale, resize them, and standardize them.
Iterate over and pre-process all the images.
Data augmentation to make the data more generic.
Neural network model.
Set up the camera …
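A minimal sketch of the pre-processing, splitting, and augmentation steps, assuming the images and labels have already been loaded into NumPy arrays; the random placeholder arrays, the 32x32 size, the 43-class count, and the augmentation ranges are assumptions, not taken from the excerpt.

import cv2
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Placeholder data standing in for the loaded traffic-sign images and labels.
images = np.random.randint(0, 255, (100, 32, 32, 3), dtype=np.uint8)
labels = np.random.randint(0, 43, 100)

def preprocess(img):
    img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)       # convert RGB to grayscale
    img = cv2.equalizeHist(img)                       # standardize the lighting
    return img / 255.0                                # scale pixel values to [0, 1]

X = np.array([preprocess(cv2.resize(img, (32, 32))) for img in images])
X = X.reshape(-1, 32, 32, 1)

# Divide the data into training, validation, and testing sets.
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2)

# Data augmentation to make the training data more generic.
datagen = ImageDataGenerator(width_shift_range=0.1, height_shift_range=0.1,
                             zoom_range=0.2, shear_range=0.1, rotation_range=10)
datagen.fit(X_train)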