Technion-developed deep-learning system examines breast cancer scans better than humans

Researchers at the Technion have made it their mission to turn computers into effective pathologists’ assistants.

A 2D visualization of the image feature vectors by applying t-SNE. (photo credit: TECHNION-ISRAEL INSTITUTE OF TECHNOLOGY)

Computers can never replace physicians, but a deep-learning system developed at the Technion-Israel Institute of Technology in Haifa has been found to decipher breast cancer scans better than a human ever could.

One in eight women in Israel will be diagnosed with breast cancer at some point in her life. About 1% of cases occur in men. The prevalence of breast cancer is rising, driven in part by modern lifestyles and longer lifespans. Thankfully, treatments are becoming more effective and more personalized.

But what isn’t increasing – and is in fact decreasing – is the number of pathologists, the medical specialists who examine body tissues to provide the specific diagnosis necessary for personalized medicine.

A team of researchers at the Technion has therefore made it their mission to turn computers into effective pathologists’ assistants, simplifying and improving the human physician’s work. Their new peer-reviewed study was just published in Nature Communications under the title “Deep learning-based image analysis predicts PD-L1 status from H&E-stained histopathology images in breast cancer.”

The specific task that Dr. Gil Shamai and Amir Livne, of Prof. Ron Kimmel’s lab at the Technion’s Taub Faculty of Computer Science, set out to achieve lies within the realm of immunotherapy, which has gained prominence in recent years as an effective, sometimes even game-changing, treatment for several types of cancer.

L-R: Amir Livne, Dr. Gil Shamai and Prof. Ron Kimmel (credit: TECHNION-ISRAEL INSTITUTE OF TECHNOLOGY)

This form of therapy works by encouraging the body’s own immune system to attack the tumor. However, it must be personalized: the right medication has to be given to the patients who, based on the specific characteristics of their tumor, stand to benefit from it.

Multiple natural mechanisms prevent our immune systems from attacking our own bodies. These mechanisms are often exploited by malignant tumors to evade the immune system.

Dr. Gil Shamai (credit: TECHNION-ISRAEL INSTITUTE OF TECHNOLOGY)

One such mechanism involves the PD-L1 protein, which some tumors display. It acts as a sort of password, falsely persuading the immune system that the cancer should not be attacked.

Immunotherapy that targets PD-L1 can persuade the immune system to ignore this particular password, but it is, of course, only effective when the tumor actually expresses PD-L1.

Determining whether a patient’s tumor expresses PD-L1

It is a pathologist’s task to determine whether a patient’s tumor expresses PD-L1. To get the answer, a biopsy taken from the tumor is stained with expensive chemical markers. The process is complicated, time-consuming and at times inconsistent.

Shamai and his team took a different approach. In recent years, it has become an FDA-approved practice for biopsies to be scanned so they can be used for digital pathological analysis.

Livne, Shamai and Kimmel wanted to know whether a neural network could use these scans to make the diagnosis without the additional chemical staining process.

“They told us it couldn’t be done,” the team said. “So, of course, we had to prove them wrong.”

Neural networks are trained in a manner similar to how children learn: They are presented with multiple tagged examples. A child is shown many dogs and various other things, and from these examples forms an idea of what “dog” is.
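
To make that analogy concrete, here is a minimal, hypothetical sketch of such a training loop in PyTorch: a small network is repeatedly shown tagged examples (synthetic feature vectors standing in for images) and adjusts its weights until its answers match the tags. Nothing below, not the architecture, the data, nor the numbers, comes from the study itself.

```python
# Minimal sketch of supervised learning on tagged examples (not the authors' code).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic "examples": 512 feature vectors, each with a binary tag (0 or 1).
features = torch.randn(512, 32)
tags = (features[:, 0] > 0).long()          # toy rule the network must discover

model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(50):                      # repeatedly show the tagged examples
    optimizer.zero_grad()
    loss = loss_fn(model(features), tags)    # how wrong were the guesses?
    loss.backward()                          # learn from the mistakes
    optimizer.step()

with torch.no_grad():
    accuracy = (model(features).argmax(dim=1) == tags).float().mean().item()
print(f"training accuracy: {accuracy:.2f}")
```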

The neural network Kimmel’s team developed was presented with digital biopsy images from 3,376 patients, each tagged as either expressing or not expressing PD-L1. After preliminary validation, the network was asked to determine whether biopsy images from an additional 275 clinical trial patients were positive or negative for PD-L1.
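
The workflow described here, training on tagged cases, a preliminary validation, then a single evaluation on a separate cohort, can be sketched as follows. This is an illustration only: synthetic data and a simple scikit-learn classifier stand in for the study’s deep network and slide images, and only the cohort sizes echo the article.

```python
# Illustrative train / validate / external-test workflow (hypothetical data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for slide-level feature vectors and PD-L1 tags (0 = negative, 1 = positive).
X_dev = rng.normal(size=(3376, 64))
y_dev = (X_dev[:, 0] + 0.5 * rng.normal(size=3376) > 0).astype(int)
X_external = rng.normal(size=(275, 64))
y_external = (X_external[:, 0] + 0.5 * rng.normal(size=275) > 0).astype(int)

# The development cohort is split for training and preliminary validation.
X_train, X_val, y_train, y_val = train_test_split(
    X_dev, y_dev, test_size=0.2, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("validation AUC:", roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1]))

# Final check on the independent cohort, which played no part in development.
print("external AUC:", roc_auc_score(y_external, clf.predict_proba(X_external)[:, 1]))
```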

It performed better than expected. For 70% of the patients, it was able to confidently and correctly determine the answer. For the remaining 30%, the program could not find the visual patterns that would enable it to decide one way or the other. Interestingly, in the cases where the artificial intelligence disagreed with the human pathologist’s determination, a second test proved the AI to be right.
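
The 70/30 behavior reported here is characteristic of a model that abstains whenever its confidence falls into a gray zone. A minimal sketch of such a decision rule, with arbitrary placeholder thresholds and a made-up function name rather than anything from the study, might look like this:

```python
# Illustrative confidence-based decision rule (not the study's method).
import numpy as np

def call_pdl1(probabilities, low=0.3, high=0.7):
    """Return 'negative', 'positive', or 'indeterminate' for each case.

    `probabilities` are a model's predicted chances that each case is PD-L1
    positive; the thresholds are arbitrary placeholders. Cases in the gray
    zone are left undecided and deferred to the pathologist.
    """
    calls = np.full(probabilities.shape, "indeterminate", dtype=object)
    calls[probabilities >= high] = "positive"
    calls[probabilities <= low] = "negative"
    return calls

probs = np.array([0.05, 0.45, 0.92, 0.60, 0.81])
print(call_pdl1(probs))
# ['negative' 'indeterminate' 'positive' 'indeterminate' 'positive']
```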

“This is a momentous achievement,” Kimmel noted. “The variations that the computer found are not distinguishable to the human eye. Cells arrange themselves differently if they present PD-L1 or not, but the differences are so small that even a trained pathologist can’t confidently identify them. Now our neural network can.”

Shamai said, “It’s an amazing opportunity to bring together artificial intelligence and medicine. I love mathematics, I love developing algorithms. Being able to use my skills to help people, to advance medicine – it’s more than I expected when I started out as a computer science student.” He is now leading a team of 15 researchers, who are taking this project to the next level.

Kimmel concluded, “We expect AI to become a powerful tool in doctors’ hands. AI can assist in making or verifying a diagnosis, it can help match the treatment to the individual patient, it can offer a prognosis. I do not think it can or should replace the human doctor. But it can make some elements of doctors’ work simpler, faster and more precise.”