
How AI technologies can learn how to spot and visualize cancer

Institute of Cancer Research News, May 17, 2018

Visual messages have high impact. A picture is worth a thousand words. So it is unsurprising that the ability to image the structure and function of the human body underpins and continues to revolutionize medical practice.

Medical imaging has come a very long way in the century since the first grainy and much-celebrated x-ray image of Wilhelm Roentgen’s wife’s hand. Ian Donald’s first ultrasound showed a large ovarian cyst. A later study showed a few echoes from a barely recognizable fetus. Today, we visualize the antics of the unborn in real time and in 3D.

CT scans used to take hours to obtain a single slice through the brain, without any differentiation between structures. Today, we can accomplish that feat with far greater resolution in less than a second.

The first ‘noisy’ MRI scanning images also took hours. Nowadays, we expect images of not only the anatomy but also the blood flow, function, and molecular features of the tissue in a single comprehensive ‘multiparametric’ MRI examination.

And all this born of British invention! Ian Donald was a Scot, with his laboratory in Glasgow; Sir Godfrey Hounsfield, Nobel Laureate for his development of CT, worked at the EMI Centre in Hayes; and Sir Peter Mansfield, a physicist from Nottingham, received the Nobel Prize jointly with Paul Lauterbur for their development of clinical MRI.

How do we use medical imaging in cancer?

We use imaging during three key stages in the cancer pathway—to make the diagnosis, to deliver therapy, and to assess response to treatment. There are also various screening programs to detect and help diagnose asymptomatic disease.

The NHS breast cancer screening service, established in 1987, diagnoses half of the breast cancers found in middle-aged women. Screening for lung cancer is being trialed with low-dose CT, and although incidental nodules (false-positives) remain a huge issue, it does save lives.

For the purposes of screening, imaging can also be used in conjunction with other tests—for instance, with a fecal occult blood (FOB) test for bowel cancer screening (a pack is mailed out to the over-60s), followed by CT colonography if the FOB is positive.

However, CT colonography cannot ‘see and treat’ in the way a colonoscopy (direct visual inspection) can, so it is underused.

In ovarian cancer screening, a large trial investigated ultrasound in conjunction with a blood test—the CA125—but the positive predictive value for cancer was too low for it to be cost-effective.

Assessments that inform treatment

Once a positive diagnosis of cancer has been made, imaging is used to describe the extent (“stage”) of the disease. Evaluation of tumor size and local, regional, and distant spread must all be documented prior to embarking on treatment.

To understand how a tumor might behave, we currently amass a range of information from different types of imaging, then quantify and classify these data to extract features that describe tumor function and predict growth patterns.

Imaging is vitally important for delivering targeted therapies. CyberKnife radiotherapy techniques use CT with markers for guidance. A new machine called an MR Linac delivers radiotherapy using the superior contrast of MRI to track the tumor.

We can burn away tumors very accurately using a noninvasive technique that focuses sound waves (high-intensity focused ultrasound, or HIFU) under MRI guidance. Or, we can radiolabel drugs and assess their distribution and dose to tissues using imaging.

To assess response to treatment, it is the norm to perform multiple types of imaging at multiple time points. Numerical information extracted from these extensive datasets helps indicate whether the treatment is working. Different types of images tell us different things about the tumor. It is an awful lot of imaging and a minefield of data.

Data overload

The main challenge in imaging today is information overload. The sheer volume of imaging—not just the number of patients imaged annually in radiology departments everywhere, but the number of images (often several thousand) generated for each patient examination—leads to inattentional blindness.

A landmark study in the US, much publicized in the news, showed that when searching for lung nodules on CT scans, 20 of 24 radiologists failed to spot a picture of a gorilla embedded in the images of the lungs, even though eye-tracking showed they looked directly at it!

More worryingly, only 55% of the nodules were spotted. It is essential to find ways of dealing with the explosion of imaging data being generated. And that is before even considering the serious manpower (and woman power) crisis in radiology.

Artificial intelligence is coming

Artificial intelligence (AI, aspects of which are also referred to as machine learning, deep learning, or artificial neural networks) learns through trial and error, and its output is more robust and less variable than that of humans (although it obviously depends on the quality of the data you put in!).

How often is our momentary guilty pleasure at clicking on a desirable article online rewarded by a relentless barrage of similar items every time we access the Internet? The machine has ‘learned’ our preferences in a few clicks!

In an imaging department, this kind of ‘intelligence’ can be harnessed to automate the setting up of patient scans and the acquisition of data from them. Not rocket science, but with a huge impact not just on workflow, throughput, and costs, but also on achieving more precise comparisons and hence better diagnostics. Less is left to chance when estimating tumor response, for instance, because all the data are perfectly aligned.

For diagnostic purposes, we can prescribe the imaging features that we wish an algorithm to use to discriminate cancer. This is supervised ‘machine learning.’ Deep learning goes a step further—rather than relying on prescribed features, it adaptively learns its own from the data and from other outcome information (pathological findings, patient outcomes). Its performance in discriminating ‘normal’ from ‘abnormal’ or ‘bad’ from ‘good’ is greatly superior.
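To give a flavor of the feature-based approach, here is a minimal Python sketch using scikit-learn. The feature names and data are purely hypothetical stand-ins, not a real clinical dataset:

# Minimal sketch of supervised machine learning on prescribed imaging features.
# The features and labels below are synthetic placeholders, not real data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row stands in for [tumor_volume, texture_contrast, edge_sharpness]:
# features we prescribe in advance, not features the algorithm discovers itself.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in for biopsy-confirmed labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

A deep learning system would instead take the images themselves as input and learn which features matter, which is one reason it demands far more data.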

Automated image registration and recall systems are already finding their way into radiology departments at reasonable cost. Algorithms can cost as little as $1 per scan analyzed, and can be fully integrated with hospital information systems and the picture archiving and communication systems (PACS) used in radiology departments.
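To show what automated registration involves, here is a toy Python sketch that recovers the translation between two 2D ‘scans’ by phase correlation. Clinical registration must also handle rotation, scaling, and tissue deformation, so this illustrates the principle only:

# Toy image registration: recover the shift between two scans by phase correlation.
import numpy as np

def phase_correlation_shift(fixed, moving):
    # The normalized cross-power spectrum inverse-transforms to a sharp peak
    # at the relative translation between the two images.
    F = np.fft.fft2(fixed)
    M = np.fft.fft2(moving)
    cross_power = F * np.conj(M)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past the midpoint correspond to negative shifts (FFT wrap-around).
    if dy > fixed.shape[0] // 2:
        dy -= fixed.shape[0]
    if dx > fixed.shape[1] // 2:
        dx -= fixed.shape[1]
    return dy, dx  # rolling `moving` by (dy, dx) realigns it with `fixed`

# Synthetic example: a bright block, displaced between two acquisitions.
fixed = np.zeros((64, 64))
fixed[20:30, 20:30] = 1.0
moving = np.roll(fixed, shift=(-5, 3), axis=(0, 1))
print(phase_correlation_shift(fixed, moving))  # -> (5, -3)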

Making a step change

An evolutionary approach to AI, modeled on Darwinian natural selection, will bring bigger gains. An evolutionary-algorithm approach to diagnosis employs thousands of algorithms at the outset to come up with the best-matching answer.

The worst 50% are dumped, each of the rest is ‘mutated’ in a single way (a change to one mathematical parameter), and the process is repeated. In this way, the poorer algorithms are always discarded, and the algorithms that emerge are the ones most fit for purpose.
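Here is a minimal sketch of that select-and-mutate loop, applied to a toy task (evolving the weights of a simple linear decision rule on synthetic data, not any real diagnostic system). Survivors produce single-mutation offspring so the population size stays constant:

# Toy evolutionary algorithm: dump the worst 50%, mutate one parameter, repeat.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for imaging features (X) and confirmed diagnoses (y).
X = rng.normal(size=(300, 4))
true_w = np.array([1.0, -2.0, 0.5, 0.0])
y = (X @ true_w > 0).astype(int)

def fitness(w):
    # Diagnostic accuracy of the candidate linear rule.
    return np.mean((X @ w > 0).astype(int) == y)

population = rng.normal(size=(1000, 4))  # thousands of algorithms at the outset
for generation in range(50):
    scores = np.array([fitness(w) for w in population])
    keep = population[np.argsort(scores)[len(population) // 2:]]  # dump worst 50%
    children = keep.copy()
    idx = rng.integers(0, 4, size=len(children))   # mutate ONE parameter per child
    children[np.arange(len(children)), idx] += rng.normal(scale=0.3, size=len(children))
    population = np.vstack([keep, children])

best = max(population, key=fitness)
print("best accuracy:", fitness(best))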

But even this may not be enough. After all, there is not just a tumor but a whole patient at the end of each decision. In complex cases, where various other factors need to be considered, algorithms will fail or give the wrong answer. What we need here is amplified human intelligence.

Swarm AI

Swarm AI is becoming a boom industry for predictions, and there are several impressive studies showing its power for predicting everything from which horse will win the Epsom Derby to who will win the Oscars.

Swarm AI is not a crowd aggregate, ie, an average of the votes of all participants. It is a closed-loop feedback system: the decision of each participant is influenced by the decisions of its nearest neighbors, and the group swarms together to home in on the target.

Even if the swarm AI participants are uninitiated in the area—for example, horse racing—the collective effect is that a correct prediction can be reached.
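The closed-loop idea can be caricatured in a few lines of Python. This is a deliberately simplified consensus model, not how any commercial swarm platform actually works: each agent repeatedly revises its leanings using both its own hunch and its neighbors’ current leanings, until the group settles on an option.

# Toy swarm: agents on a ring iteratively blend their own preference with
# their neighbors' current leanings, closing the feedback loop each step.
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_options = 30, 4
priors = rng.dirichlet(np.ones(n_options), size=n_agents)  # individual hunches
leanings = priors.copy()

for step in range(100):
    # Each agent sees its two ring neighbors (a stand-in for "nearest neighbors").
    neighbor_mean = (np.roll(leanings, 1, axis=0) + np.roll(leanings, -1, axis=0)) / 2
    leanings = 0.6 * leanings + 0.2 * neighbor_mean + 0.2 * priors
    leanings /= leanings.sum(axis=1, keepdims=True)  # keep each row a distribution

print("option the group converges on:", leanings.mean(axis=0).argmax())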

Lessons from AlphaGo

Another lesson in achieving a step change comes from DeepMind’s recent coup with its algorithm AlphaGo. The ancient Chinese game of Go, with an astronomical number of possible board configurations, was successfully ‘learned’ by this algorithm, which was trained on a large corpus of professional human games.

It eventually defeated the human world champion from Korea, Lee Sedol. A subsequent iteration, AlphaGo Zero, received no human training data at all, but learned by playing against itself, starting with random moves.

This matched-opponent strategy meant it got stronger and stronger, and in a quote from DeepMind it ‘removed the constraints of human knowledge… and can create new knowledge.’
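The self-play idea can be illustrated on a trivially small game. The sketch below is nothing like AlphaGo Zero in scale or architecture; it simply teaches a shared value table to play Nim by playing against itself, starting from random moves:

# Self-play learning on Nim: players alternately take 1-3 stones; whoever takes
# the last stone wins. One shared value table improves by playing against itself.
import random
from collections import defaultdict

Q = defaultdict(float)   # Q[(stones_left, stones_taken)] -> estimated value
ALPHA, EPSILON = 0.2, 0.1

def choose(stones, greedy=False):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if not greedy and random.random() < EPSILON:
        return random.choice(moves)   # exploration keeps random moves in play
    return max(moves, key=lambda m: Q[(stones, m)])

for game in range(50000):
    stones, history = 15, []
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    # The player who made the last move won; credit alternates back up the game.
    reward = 1.0
    for state_action in reversed(history):
        Q[state_action] += ALPHA * (reward - Q[state_action])
        reward = -reward

# From 15 stones the table should learn to take 3, leaving a multiple of 4.
print("learned opening move:", choose(15, greedy=True))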

This concept of self-learning, applied to working out which combinations of imaging features are associated with genetically and clinically aggressive tumors, may in the future give us a better handle on tumor behavior.

Visualizing when treatments are activated

Another avenue for exploration is the design of therapeutic agents that can be imaged upon activation. This would allow the treatment administered to be calibrated in real time, based on the imaging response.

Examples of such agents are already emerging—a photosensitizer molecule linked to a chemotherapeutic agent is inactive by virtue of its linkage, but when the link is broken by enzymes in the tumor, the photosensitizer fluoresces.

This indicates that the link is broken, and that the chemotherapeutic drug is active. With the photosensitizer on board, photodynamic therapy can also be delivered as a treatment!

So what will imagers do?

In the future, we will have AI algorithms that seek out an abnormality, automatically position the scanner over it, and obtain relevant, detailed scans of the region of interest. They will also automatically track this region, and other suspicious areas, at the next visit.

So with computers being so clever, what role will there be for human imagers?

Well, we will be the creators and inventors of new imaging techniques. We have had PET, PET-CT, PET-MR, and maybe HyperPET, which combines metabolic imaging from two different modalities (MR and PET). But there are other exciting combinations, such as photoacoustics (‘the sound of cells’), and yet others waiting to be dreamt up. Also, it is not just about diagnostics; we need to combine diagnosis with therapeutic options and become the imaging clinicians of the future.

And there will be virtual reality to get to grips with in administering these treatments. It already exists in the surgical environment and is finding increasing use in training, intraoperative planning, and procedures with real-time input from multiple operators who are geographically far apart. A whole new world of networking and discovery!
