Face recognition is a type of biometric identification that uses unique characteristics of a person’s face to verify their identity. This can be done through various methods, including analyzing the shape, size, and features of a person’s face, as well as the patterns and colors of their skin and hair.

There are various applications for face recognition technology, including security and surveillance, access control, and personal identification. It is often used in conjunction with other identification methods, such as fingerprints or iris scans, to provide a more reliable means of identification.

Face recognition systems can be designed to operate in real-time, allowing for the immediate identification of individuals as they appear in front of a camera. These systems can be used in a variety of settings, including airports, banks, and other public or private facilities where security is a concern.

While face recognition technology has the potential to improve security and efficiency in many areas, it also raises concerns about privacy and civil liberties. Some people worry that the use of face recognition technology may infringe on their right to privacy, or that it could be used to track and monitor individuals without their knowledge or consent. It is important for organizations that use face recognition technology to have clear policies in place to ensure that it is used ethically and responsibly.

Face recognition systems fall under the Computer Vision (CV) umbrella. As we’ve discussed in previous articles, CV is one of the fields of Artificial Intelligence (AI) with the most real-life applications improving users’ day-to-day lives, and as such an important branch of AI it is also one of the most sought-after fields for research and employment in the industry.

In this article we are going to show you how to implement face emotion recognition using the OpenCV library, and then how to create a neural network model with Keras that will let us expand the range of emotions we can detect on a human face.

 

Implement Face Emotion Recognition Using Python

Example 1: In this example, we first load the three cascade classifiers for detecting faces, eyes, and smiles. These classifiers are trained to detect specific features in an image, such as edges, lines, and patterns, and are useful for detecting objects in images. The cascades can be found here; download them and place them next to the script so they can be loaded by name.
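If you installed OpenCV through pip (the opencv-python package), you may not need to download the XML files at all: the package bundles the pre-trained Haar cascades and exposes their directory as cv2.data.haarcascades. Here is a minimal sketch of loading the three classifiers that way; it assumes a pip-based install, and other builds may store the files elsewhere.

import cv2

# Load the bundled Haar cascades directly from the opencv-python package
# (assumes OpenCV was installed via pip; other installs may keep the XML files elsewhere)
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_eye.xml')
smile_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_smile.xml')

# empty() returns True if a classifier failed to load, so this is a quick sanity check
assert not face_cascade.empty() and not eye_cascade.empty() and not smile_cascade.empty()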

Then, we read an input image and convert it to grayscale, as the cascade classifiers work best with grayscale images. The input image should be downloaded to your local machine; the image used in this example is shown below the code.

Next, we use the face_cascade classifier to detect faces in the image and draw rectangles around them. For each detected face, we extract the region of interest (ROI) from both the grayscale and the color image.

Then, we use the eye_cascade classifier to detect eyes in the ROI and draw rectangles around them. Finally, we use the smile_cascade classifier to detect smiles in the ROI and draw rectangles around them as well.

 

import cv2

# Load the cascade classifier for detecting faces
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# Load the cascade classifier for detecting eyes
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')

# Load the cascade classifier for detecting smiles
smile_cascade = cv2.CascadeClassifier('haarcascade_smile.xml')

# Read the input image
image = cv2.imread('image2.jpg')

# Convert the image to grayscale
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces in the image
faces = face_cascade.detectMultiScale(gray_image, 1.3, 5)

# Iterate over the faces
for (x, y, w, h) in faces:
    # Draw a rectangle around the face
    cv2.rectangle(image, (x, y), (x+w, y+h), (255, 0, 0), 2)
    roi_gray = gray_image[y:y+h, x:x+w]
    roi_color = image[y:y+h, x:x+w]

    # Detect eyes in the face
    eyes = eye_cascade.detectMultiScale(roi_gray)
    # Iterate over the eyes
    for (ex, ey, ew, eh) in eyes:
        # Draw a rectangle around the eye
        cv2.rectangle(roi_color, (ex, ey), (ex+ew, ey+eh), (0, 255, 0), 2)

    # Detect smiles in the face
    smiles = smile_cascade.detectMultiScale(roi_gray)
    # Iterate over the smiles
    for (sx, sy, sw, sh) in smiles:
        # Draw a rectangle around the smile
        cv2.rectangle(roi_color, (sx, sy), (sx+sw, sy+sh), (255, 0, 255), 2)

# Show the output image
cv2.imshow('Image', image)
cv2.waitKey(0)
cv2.destroyAllWindows()

 

The image named ‘image2.jpg’ used in this example:

 

The result of the program:

 

As you can see, the code drew a smile rectangle over one of the eyes as well (the purple rectangles mark smiles). This is tied to the quality of the cascade classifiers and of the image data, so to get better results you need higher-quality data. You can also cut down on such false positives by tuning the detection parameters, as sketched below.
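One way to make smile detection stricter, without changing anything else, is to pass explicit parameters to detectMultiScale for the smile cascade inside the face loop of Example 1. The sketch below shows the idea; the exact scaleFactor, minNeighbors, and minSize values are assumptions you would tune for your own images.

# Inside the face loop of Example 1: stricter smile detection in the face ROI.
# The parameter values below are starting points to tune, not fixed rules.
smiles = smile_cascade.detectMultiScale(
    roi_gray,
    scaleFactor=1.7,   # bigger jumps between image scales -> fewer candidate windows
    minNeighbors=22,   # require many overlapping detections before accepting a smile
    minSize=(25, 25)   # ignore regions too small to be a mouth
)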

Example 2: In this example, we first create a Keras model for emotion recognition and then load the pre-trained weights. You can find them here. Thanks and kudos to atulapra for the model and the weights.

Then, we load the cascade classifier for detecting faces, read the input image, and convert it to grayscale.

Next, we use the cascade classifier to detect faces in the image and draw rectangles around them. For each detected face, we crop it from the image, resize it to the input size of the model, and expand its dimensions.

Finally, we use the model to predict the emotion of the face, and display the predicted emotion on the image using the cv2.putText function.

 

import cv2
import numpy as np

from keras import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

# Label the emotions
EMOTIONS = ['angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral']

# Create the model
model = Sequential()

model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(48, 48, 1)))
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(7, activation='softmax'))

# Load the weights
model.load_weights('model.h5')

# Load the cascade classifier for detecting faces
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# Read the input image
image = cv2.imread('image4.jpg')

# Convert the image to grayscale
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces in the image
faces = face_cascade.detectMultiScale(gray_image, 1.3, 5)

# Iterate over the faces
for (x, y, w, h) in faces:
    # Draw a rectangle around the face
    cv2.rectangle(image, (x, y), (x + w, y + h), (255, 0, 0), 2)

    # Crop the face from the image
    face = gray_image[y:y + h, x:x + w]

    # Resize the face to the input size of the model
    face = cv2.resize(face, (48, 48))

    # Expand the dimensions of the face to (1, 48, 48, 1)
    face = np.expand_dims(face, axis=-1)
    face = np.expand_dims(face, axis=0)

    # Predict the emotion of the face
    emotion_index = np.argmax(model.predict(face))
    emotion = EMOTIONS[emotion_index]

    # Put the predicted emotion text on the image
    cv2.putText(image, emotion, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)

# Show the output image
cv2.imshow('Image', image)
cv2.waitKey(0)
cv2.destroyAllWindows()

 

Here are the results we got from the second example:

 

This is the same image we used in the first example, and as you can see the results agree: the cascades in the first example detected a smile, and here the model predicts a happy face.

The second example can distinguish a whole list of emotions, so here is the result when we feed the model an image with a sad face.
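The same model also works on a live video stream. Below is a minimal sketch of a webcam loop; it reuses model, face_cascade, and EMOTIONS from Example 2 and assumes a webcam is available as device 0.

# Real-time sketch: reuses model, face_cascade and EMOTIONS from Example 2,
# and assumes a webcam is available as device 0.
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        face = np.expand_dims(np.expand_dims(face, axis=-1), axis=0)
        emotion = EMOTIONS[np.argmax(model.predict(face))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        cv2.putText(frame, emotion, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
    cv2.imshow('Emotions', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()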

 

Conclusion 

I hope you’ve found the examples useful and interesting. You can download more of those cascade classifiers and put your own spin on the examples we’ve provided.
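If OpenCV was installed via pip, you can also list the cascade classifiers that are already bundled with the package before downloading anything. This sketch only prints the file names; each one still has to be loaded with cv2.CascadeClassifier as in the examples above.

import os
import cv2

# List the Haar cascade XML files that ship with the opencv-python package
# (assumes a pip install; other builds may store them in a different location)
for name in sorted(os.listdir(cv2.data.haarcascades)):
    if name.endswith('.xml'):
        print(name)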

If you are interested in learning more about Computer Science, we’ve created tons of examples and even a few Computer Science curriculums from different free sources like the MIT OpenCourseWare program, YouTube, and more, so you should check out the rest of our articles.

Like with every post we do, we encourage you to continue learning, trying and creating.
