Using CNNs to Improve Brain Tumor Segmentation Accuracy.

The power of machine learning to aid brain cancer patients!

Apurva Joshi
5 min read · Apr 13, 2020

You’re sitting outside one day with your sister, laughing and smiling while you eat ice cream. That’s the last time the two of you have a full conversation for a while. Over the next few weeks, you see her state deteriorate to the point where it’s hard for her to distinguish between shapes or even speak full sentences. The doctors confirm the worst after the MRIs come back: she has a brain tumor.

Your sister isn’t alone though: in 2020, about 23,000 adults are expected to be diagnosed with some form of brain tumor.

Magnetic resonance imaging (MRI) scans provide detailed images of the brain and are currently the most common way to find and diagnose brain tumors, like your sister’s. The process of scouring MRI scans to find a tumor is called brain tumor segmentation, and clinicians often deploy simple machine learning methods to help do this. The goal of brain tumor segmentation is to detect the location and extent of the tumor regions, namely active tumorous tissue, necrotic tissue, and edema (swelling near the tumor). This is done by identifying abnormal areas when compared to normal tissue.

Brain tumors, specifically glioblastomas, are infiltrative in nature, so their borders are hard to detect on standard MRI scans. In your sister’s case, a surgeon may accidentally take out too much of the grey matter surrounding the tumor, which could wreak havoc on the rest of her cognitive functions!

To help improve the accuracy of brain tumor segmentation methods, scientists at universities in Canada and France developed a convolutional neural network (CNN) that uses two-pathway and input cascade architectures for brain tumor segmentation. For my project (and purely for academic purposes!), I replicated the code of this model to understand how it works.

Before we jump into the actual model…

What Datasets Were Used?

  • The dataset used was the BRATS 2013 dataset, which consists of 4 modalities (different types of scans): T1, T1C, T2, and T2-FLAIR.
  • Each modality was treated as a channel and taken slice-wise, as seen below with the ground-truth label (a minimal sketch of this step follows the image).
A picture of the dataset.
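To make the channel idea concrete, here is a small sketch (my own illustration with made-up array sizes, not the authors’ preprocessing code) of how the four modalities could be stacked into channels and taken slice-wise with NumPy:

```python
import numpy as np

# Pretend each modality has been loaded as a 3-D volume (height, width, depth).
# The sizes below are illustrative placeholders, not the authors' exact dimensions.
H, W, D = 240, 240, 155
t1, t1c, t2, flair = (np.random.rand(H, W, D).astype(np.float32) for _ in range(4))
ground_truth = np.random.randint(0, 5, size=(H, W, D))  # per-voxel labels (5 classes)

# Stack the four modalities into one 4-channel volume: shape (4, H, W, D).
volume = np.stack([t1, t1c, t2, flair], axis=0)

# Take it slice-wise: each axial slice becomes a (4, H, W) multi-channel image
# paired with its (H, W) ground-truth label map.
slices = [(volume[:, :, :, z], ground_truth[:, :, z]) for z in range(D)]
print(slices[75][0].shape, slices[75][1].shape)  # (4, 240, 240) (240, 240)
```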

Now it’s time to break down the model’s architecture!

The basic CNN.

From the paper, the architecture of the standard CNN

The main aspect of a convolutional neural network is (surprise, surprise) the convolutional layer. What’s neat about CNNs is that multiple convolutional layers can be stacked on top of each other to form a deeper network.

A single convolutional layer takes a stack of input planes as input and produces some number of output planes, or feature maps. When multiple convolutional layers are stacked, each layer takes the feature maps produced by the layer before it as its input and, after passing them through its own kernels and activation function, produces a new set of feature maps.

Computing a feature map takes three steps (a minimal code sketch follows the list):

  1. Convolution of kernels (filters) over the input images
  2. Applying a non-linear activation function: in this study, the softmax function is applied at the output to turn the network’s scores into per-label probabilities. Non-linearities like this are what allow the network to learn patterns more complex than simple linear relationships.
  3. Max pooling, which downsamples the feature maps by keeping only the strongest responses, making the detection more robust to small shifts in the tumor’s position.

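To make those three steps concrete, here is a tiny sketch in PyTorch (my own choice of framework, patch size, and ReLU as a stand-in non-linearity; not the authors’ implementation):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 4, 33, 33)         # one 4-channel 33x33 input patch (illustrative size)

conv = nn.Conv2d(in_channels=4, out_channels=64, kernel_size=7)  # 1. convolve 7x7 kernels over the input
act = nn.ReLU()                       # 2. non-linear activation (ReLU here as a simple stand-in)
pool = nn.MaxPool2d(kernel_size=2)    # 3. max pooling to downsample the feature maps

feature_maps = pool(act(conv(x)))
print(feature_maps.shape)             # torch.Size([1, 64, 13, 13])
```
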
The scientists used two types of architectures to build the convolutional layers: two-pathway and cascaded.

Two-pathway architecture

A depiction of the proposed two-pathway architecture

As the name suggests, this particular architecture consists of a path with 7x7 kernels (called the local path) and a path with 13x13 kernels (called the global path). This is done to ensure the prediction of the label of a pixel is influenced by two aspects: the visual details of the region around that pixel and its larger “context”, i.e. roughly where the patch is in the brain.
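Here is a rough PyTorch sketch of that idea (the layer sizes and activations are my own assumptions for illustration, not the authors’ exact implementation):

```python
import torch
import torch.nn as nn

class TwoPathwayCNN(nn.Module):
    def __init__(self, in_channels=4, n_classes=5):
        super().__init__()
        # Local path: 7x7 kernels, sensitive to the fine visual details around a pixel.
        self.local_path = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Global path: 13x13 kernels, capturing the larger "context" of the patch.
        self.global_path = nn.Sequential(
            nn.Conv2d(in_channels, 160, kernel_size=13, padding=6), nn.ReLU(),
        )
        # Concatenate both paths and predict a label score per pixel.
        self.output = nn.Conv2d(64 + 160, n_classes, kernel_size=1)

    def forward(self, x):
        return self.output(torch.cat([self.local_path(x), self.global_path(x)], dim=1))

patch = torch.randn(1, 4, 33, 33)
print(TwoPathwayCNN()(patch).shape)   # torch.Size([1, 5, 33, 33])
```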

Cascaded architectures

You’ll notice that in the basic model, each pixel’s segmentation label is predicted separately from the others: the prediction for one pixel doesn’t take the predicted labels of its neighbors into account.

Cascaded architectures address this by connecting the output of a first CNN to the input of a second CNN through concatenation. When the first CNN’s predictions are concatenated directly onto the second CNN’s input, this is known as input concatenation.

The second type of cascaded architecture deployed in this model connected the first CNN’s output to the first hidden layer of the second CNN’s local path.

The last type of cascaded architecture deployed connected the first CNN’s output directly to the second CNN’s output layer.
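As a sketch, the first (input-concatenation) variant looks roughly like this, reusing the TwoPathwayCNN class from the earlier sketch (the shapes and class counts are my own illustrative assumptions):

```python
import torch

# The first CNN's per-pixel class probabilities are concatenated onto the raw
# input of a second CNN as extra channels ("input concatenation").
first_cnn = TwoPathwayCNN(in_channels=4, n_classes=5)
second_cnn = TwoPathwayCNN(in_channels=4 + 5, n_classes=5)

patch = torch.randn(1, 4, 33, 33)
first_out = torch.softmax(first_cnn(patch), dim=1)      # (1, 5, 33, 33)
cascaded_input = torch.cat([patch, first_out], dim=1)   # (1, 9, 33, 33)
final_out = second_cnn(cascaded_input)                  # (1, 5, 33, 33)
print(final_out.shape)
```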

Training the Model

To maximize the probability of all the labels in the training set, stochastic gradient descent was used: select a random subset of patches within each brain, compute the average negative log-probability for this mini-batch of patches, and perform a gradient descent step on the CNN’s parameters (i.e., the kernels at all layers).
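In code, one training step along those lines might look like this (a minimal sketch with made-up data, again reusing the TwoPathwayCNN sketch from above; not the authors’ training script):

```python
import torch
import torch.nn as nn

model = TwoPathwayCNN()
criterion = nn.CrossEntropyLoss()    # average negative log-probability of the true labels
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

# One mini-batch of randomly selected patches and their per-pixel labels.
patches = torch.randn(128, 4, 33, 33)
labels = torch.randint(0, 5, (128, 33, 33))

optimizer.zero_grad()
loss = criterion(model(patches), labels)  # mean negative log-probability over the mini-batch
loss.backward()                           # gradients w.r.t. the kernels at all layers
optimizer.step()                          # one gradient descent step on the CNN's parameters
```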

Brain tumors make up a relatively small area of the MRI scan in comparison to the overall brain. Even within the abnormal region, some tumor sub-classes make up only a small percentage of the pixels. To account for this class imbalance, a two-phase training approach was used.

In the first training phase, the patch dataset was constructed so that all labels were equally probable. In the second phase, only the output layer was re-trained, this time on patches drawn from a more representative (unbalanced) distribution of the labels.

This was done to get the best of both worlds: most of the capacity (the lower layers) is trained in a balanced way to account for the diversity of all the classes, while the output probabilities are calibrated correctly because the output layer is trained with the true frequencies of the data.
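A sketch of what that second phase could look like, continuing from the training snippet above (illustrative only): freeze the lower layers and re-train just the output layer on patches drawn with the true label frequencies.

```python
# Phase 2 (sketch): keep the lower layers fixed and re-train only the output layer
# on patches sampled with the dataset's true, unbalanced label frequencies.
for param in model.parameters():
    param.requires_grad = False
for param in model.output.parameters():   # 'output' is the final 1x1 conv in the sketch model
    param.requires_grad = True

optimizer = torch.optim.SGD(model.output.parameters(), lr=0.005, momentum=0.9)
# ...then run the same mini-batch loop as above, but with unbalanced patch sampling.
```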

Dropout was also used as a regularizer: by randomly “dropping” units during training, the model is discouraged from relying too heavily on any single feature and generalizes better to new scans.
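In the sketch model above, that would simply mean inserting dropout layers between the convolutions, for example:

```python
import torch.nn as nn

# Dropout randomly zeroes feature maps during training, which regularizes the
# network and discourages it from relying too heavily on any single feature.
local_path_with_dropout = nn.Sequential(
    nn.Conv2d(4, 64, kernel_size=7, padding=3), nn.ReLU(),
    nn.Dropout2d(p=0.5),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Dropout2d(p=0.5),
)
```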

The original model was run on a GPU at the universities. I chose to implement the code on the Google Colab platform, which also provides a GPU; you can view my implementation here!

What do the results show?

Overall, the results show that the CNN architecture presented in the paper can segment brain tumors from regular brain MRI scans with a high degree of accuracy!

That definitely means good news for your sister: thanks to the help of CNNs and machine learning methods as a whole, she can worry less about a surgeon accidentally cutting out too much of her brain! Maybe now, the two of you can finish that scintillating conversation about the perfect temperature (hint: 75 degrees F on a cloudy day? Superb!)

Thanks for reading my review of Brain Tumor Segmentation with Deep Neural Networks! If you’d like to talk more about use cases of ML models in the healthcare space, I’d love to get in touch! You can reach me on Linkedin or through my personal email: writetoapurva@gmail.com. Until next time!

