
Pneumonia Diagnosis Detection with OpenCV


Pneumonia is a severe respiratory illness caused by an infection, and it can lead to life-threatening complications, particularly in at-risk groups. Diagnosing and treating pneumonia as early as possible is essential to maximize the patient’s chances of recovery. Diagnosis is not easy: it requires laboratory tools and advanced medical skill. However, we can use deep learning and computer vision to build a fast, easy tool that helps doctors detect pneumonia.

OpenCV (Open Source Computer Vision) is an open-source library for computer vision, machine learning, and image processing. It can be used to build applications that analyze images and video, such as chest X-rays. In this lesson, we will learn how to use OpenCV, together with a deep learning model, to identify pneumonia in chest X-ray images.

Install OpenCV

The first step is to install OpenCV. There are several ways to do this depending on your operating system. Here are a few popular options:

Windows: Use the pre-built binaries from the official OpenCV website.

Linux: OpenCV can be installed with your distribution’s package manager. On Ubuntu, for instance, run the following command in the terminal:

sudo apt-get install libopencv-dev

macOS: OpenCV can be installed with Homebrew. Enter the command below in the terminal:

brew install opencv

Once OpenCV is installed, you can use the following Python code to check that it is working properly:

import cv2
print(cv2.__version__)

You should see the version number displayed in the terminal if OpenCV was properly installed.
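As a further sanity check, and to see what working with X-ray images in OpenCV looks like, here is a minimal sketch that reads and resizes a chest X-ray. The file name sample_xray.jpeg is only a placeholder for any image from the dataset we download in the next step.

import cv2

# Placeholder path: any chest X-ray from the dataset works here
img = cv2.imread('sample_xray.jpeg', cv2.IMREAD_GRAYSCALE)

# Resize to the 224x224 input size we will use for the model later
img = cv2.resize(img, (224, 224))

print(img.shape)  # (224, 224)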

Download the Dataset

Next, we download the dataset that will be used to train our pneumonia detection model. In this exercise, we’ll use the Chest X-Ray Images (Pneumonia) dataset from Kaggle. The dataset contains 5,856 chest X-ray images in total, divided into two categories: pneumonia and normal.

To obtain the dataset, you must sign up for a Kaggle account and accept the dataset’s terms and conditions. Once you’ve done that, run the following command in the terminal (it requires the Kaggle CLI to be installed and configured with your API token):

kaggle datasets download -d paultimothymooney/chest-xray-pneumonia

The dataset will be downloaded as a ZIP file. Create a folder on your local machine and extract the ZIP file into it.
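If you would rather extract the archive from Python instead of a file manager, a short sketch using the standard library looks like the following; the archive name and output folder are placeholders you should adjust to your own paths.

import zipfile

# Placeholder paths: point these at the downloaded archive and the folder
# where you want the dataset to live
with zipfile.ZipFile('chest-xray-pneumonia.zip', 'r') as archive:
    archive.extractall('path/to/input/dir')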

Prepare the Data

Next, the data must be prepared for training our pneumonia detection model. To create more training samples from the existing ones, we’ll use a technique called data augmentation. Data augmentation applies random transformations to the images, such as rotation, scaling, and flipping, to produce different versions of the same picture; this improves the model’s ability to generalize and reduces overfitting.
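To get a feel for what these transformations do, here is a minimal sketch that applies a fixed rotation and a horizontal flip to one image with OpenCV. The file path is a placeholder, and the actual augmentation during training is handled by Keras’ ImageDataGenerator in the training step below.

import cv2

# Placeholder path to one image from the extracted dataset
img = cv2.imread('path/to/input/dir/chest_xray/train/NORMAL/example.jpeg')

# Rotate by 10 degrees around the image center
h, w = img.shape[:2]
rotation = cv2.getRotationMatrix2D((w / 2, h / 2), 10, 1.0)
rotated = cv2.warpAffine(img, rotation, (w, h))

# Flip horizontally
flipped = cv2.flip(img, 1)

# Save the augmented variants so you can inspect them
cv2.imwrite('rotated.jpeg', rotated)
cv2.imwrite('flipped.jpeg', flipped)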

To prepare the data, we will make two directories: one for training images and one for validation images. 80% of the images will be used for training and 20% for validation.

Here is the code to prepare the data:

import os
import shutil
import random

# Define the paths
input_dir = 'path/to/input/dir'
train_dir = 'path/to/train/dir'
val_dir = 'path/to/val/dir'

# Create the directories
os.makedirs(train_dir, exist_ok=True)
os.makedirs(val_dir, exist_ok=True)

# Get the list of images
image_paths = []
for root, dirs, files in os.walk(input_dir):
    for file in files:
        if file.endswith('.jpeg'):
            image_paths.append(os.path.join(root, file))

# Shuffle the images
random.shuffle(image_paths)

# Split 80/20 into training and validation sets
split_idx = int(0.8 * len(image_paths))
train_image_paths = image_paths[:split_idx]
val_image_paths = image_paths[split_idx:]

# Copy each image into its class subdirectory (NORMAL or PNEUMONIA) so that
# the Keras data generators can infer the labels from the folder structure
for paths, out_dir in [(train_image_paths, train_dir), (val_image_paths, val_dir)]:
    for path in paths:
        label = os.path.basename(os.path.dirname(path))
        label_dir = os.path.join(out_dir, label)
        os.makedirs(label_dir, exist_ok=True)
        shutil.copy(path, label_dir)

The final loop copies each image into the appropriate directory while preserving its class subfolder (NORMAL or PNEUMONIA), which the data generators in the next step rely on to infer labels. In this code, replace “path/to/input/dir” with the path to the directory where you extracted the dataset, and replace “path/to/train/dir” and “path/to/val/dir” with the directories where you want to keep the training and validation images, respectively.


Train the Model

Now we train the pneumonia detection model using the training images prepared in the previous step. The core of our model will be VGG16, a pre-trained convolutional neural network (CNN). VGG16 is a popular CNN architecture that was trained on a large image dataset (ImageNet) and has achieved state-of-the-art results on many image recognition tasks.

Here’s the code to train the model:

from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Define the input shape of the images
input_shape = (224, 224, 3)

# Load the VGG16 model
base_model = VGG16(weights='imagenet', include_top=False, input_shape=input_shape)

# Add a global average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)

# Add a fully connected layer
x = Dense(128, activation='relu')(x)

# Add the output layer
output = Dense(1, activation='sigmoid')(x)

# Define the model
model = Model(inputs=base_model.input, outputs=output)

# Freeze the layers of the VGG16 model
for layer in base_model.layers:
    layer.trainable = False

# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Define the data generators for training and validation
train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=10,
                                   width_shift_range=0.1,
                                   height_shift_range=0.1,
                                   shear_range=0.1,
                                   zoom_range=0.1,
                                   horizontal_flip=True,
                                   fill_mode='nearest')

val_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(train_dir,
                                                    target_size=input_shape[:2],
                                                    batch_size=32,
                                                    class_mode='binary')

val_generator = val_datagen.flow_from_directory(val_dir,
                                                target_size=input_shape[:2],
                                                batch_size=32,
                                                class_mode='binary')

# Train the model
model.fit(train_generator,
          steps_per_epoch=len(train_generator),
          epochs=10,
          validation_data=val_generator,
          validation_steps=len(val_generator))

First, we load the VGG16 model with pre-trained weights from the ImageNet dataset. On top of it we add a global average pooling layer, a fully connected layer with 128 neurons, and an output layer with a sigmoid activation function. The VGG16 layers are frozen, and the model is compiled with the Adam optimizer and binary cross-entropy loss. We then define data generators for training and validation that augment the training data and rescale pixel values to the [0, 1] range.

Finally, we train the model for 10 epochs with the fit method, using the training and validation data generators.
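Training takes a while, so it can be convenient to save the trained model and reload it later instead of retraining. This step is optional and not part of the original walkthrough, and the file name below is just an example.

from tensorflow.keras.models import load_model

# Save the architecture and weights to disk (file name is arbitrary)
model.save('pneumonia_vgg16.h5')

# Later, reload the model for evaluation or inference
model = load_model('pneumonia_vgg16.h5')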

Evaluate the Model

After training, we must evaluate the model’s performance on a test set to determine how well it generalizes to new data. We will use the dataset’s test split for this. We will also display some examples of correctly and incorrectly classified images.

Use the code below to evaluate the model and display some examples:

import numpy as np
import matplotlib.pyplot as plt

# Define the path to the test directory
test_dir = 'path/to/input/dir/chest_xray/test'

# Define the data generator for test
test_datagen = ImageDataGenerator(rescale=1./255)

test_generator = test_datagen.flow_from_directory(test_dir,
                                                  target_size=input_shape[:2],
                                                  batch_size=32,
                                                  class_mode='binary',
                                                  shuffle=False)

# Evaluate the model on the test set
loss, accuracy = model.evaluate(test_generator, steps=len(test_generator))
print(f'Test accuracy: {accuracy:.2f}')

# Get the predictions and true labels
predictions = model.predict(test_generator, steps=len(test_generator))
predictions = np.squeeze(predictions)
true_labels = test_generator.labels

# Get the image filenames
filenames = test_generator.filenames

# Find the indices of the correctly and incorrectly classified images
correct_indices = np.where((predictions >= 0.5) == true_labels)[0]
incorrect_indices = np.where((predictions >= 0.5) != true_labels)[0]

# Plot some correctly classified images
plt.figure(figsize=(10, 10))
for i, idx in enumerate(correct_indices[:9]):
    plt.subplot(3, 3, i+1)
    img = plt.imread(os.path.join(test_dir, filenames[idx]))
    plt.imshow(img, cmap='gray')
    plt.title('PNEUMONIA' if predictions[idx] >= 0.5 else 'NORMAL')
    plt.axis('off')

# Plot some incorrectly classified images (the title shows the model's
# prediction, which is wrong for these images)
plt.figure(figsize=(10, 10))
for i, idx in enumerate(incorrect_indices[:9]):
    plt.subplot(3, 3, i+1)
    img = plt.imread(os.path.join(test_dir, filenames[idx]))
    plt.imshow(img, cmap='gray')
    plt.title('PNEUMONIA' if predictions[idx] >= 0.5 else 'NORMAL')
    plt.axis('off')

plt.show()

In this code, we define a data generator for the test set and evaluate the model on it. We then get the predictions and true labels for the test set and find the indices of the correctly and incorrectly classified images. Finally, we use Matplotlib to plot some examples of correctly and incorrectly classified images.
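To bring OpenCV back into the loop, here is a sketch of how you might run the trained model on a single new X-ray, using OpenCV to read, resize, and rescale the image the same way the generators did during training. The file path is a placeholder, and the snippet assumes the model from the training step is still in memory (or has been reloaded).

import cv2
import numpy as np

# Placeholder path to a new chest X-ray
img = cv2.imread('path/to/new_xray.jpeg')

# OpenCV loads images as BGR; convert to RGB, resize to the model's input
# size, and rescale pixel values to [0, 1] as during training
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (224, 224))
img = img.astype('float32') / 255.0

# Add a batch dimension and predict the probability of pneumonia
prob = model.predict(np.expand_dims(img, axis=0))[0][0]
print('PNEUMONIA' if prob >= 0.5 else 'NORMAL', f'(probability {prob:.2f})')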

Conclusion

In this tutorial, we built a pneumonia detection model with OpenCV and TensorFlow: OpenCV for reading and preprocessing images, and TensorFlow for building, training, and testing the model. The model classified the majority of the test set’s images correctly, achieving high accuracy.

Computer vision can be a huge asset to medical diagnostics. While these models are not a replacement for trained healthcare providers, they can decrease time to diagnosis and improve diagnostic accuracy. You can see more CV medical use-cases in action here.

Sandy M
