How Will AI Transform Healthcare?

Written by Nick Zgorski | Apr 25, 2017

More Thoughts from a Chief Data Scientist…

Not all artificial intelligence is created equal. Some systems are optimized for playing poker; others, like Siri and Alexa, are engineered for voice recognition. Depending on the problem you are trying to solve, you may need to deploy a different type of AI. The same holds true when talking about AI in healthcare: the flavor of AI you need depends on the problem you are trying to solve. So, what types of problems do we address at Prognos, and what types of AI do we use to solve them?

Prognos’ core mission is to eradicate disease by tracking and predicting it at the earliest possible time. More precisely, we are interested in developing predictive models that use our datasets to determine how likely it is that a patient will undergo a specific health event in the future. We build these models using patients’ anonymized records, and we have an exponentially growing number of them (over 8 billion by Q1 of 2017).

What does it mean to build a predictive model for a health event, and how do we do it?

Let’s start with the data. At Prognos we build predictive models using our registry, which contains not only the largest lab dataset in the country, but also the most beautiful!

What do I mean by “the most beautiful”? Isn’t it just data? Hint: think about a recording, or a picture. The quality of the data matters. The details in the data matter. Good-quality data is rich, carries precious information, and AI loves it. At Prognos we take great care of our data. We use cutting-edge warehouse systems and architect them so that the data is available, in real time and at optimal throughput rates, for enhancing our partners’ proprietary datasets. But it’s not just that. There’s yet another layer of care needed, one where AI plays a central role (since we have so much data): cleanup!

Most of our data reaches Prognos from our extensive network of partner labs. Our platform, which includes proprietary anonymization software, holds not only the largest repository of lab data in the country, but also one of the country’s largest royal messes! I’ll elaborate: lab data is quite messy from an AI perspective, but this is expected. After all, lab test results are designed to make sense to physicians (humans), not necessarily to AI (machines). For example, many fields, or entries, in our dataset are typed by people, so they are error-prone. Specifically, my name could be punched in as “Frnando”. If you saw this, you’d probably guess my actual name is “Fernando”. A machine? Not so much. To a machine, “Frnando” looks like:

“01000110011100100110111001100001011011100110010001101111”,

whereas “Fernando” appears to be:

“0100011001100101011100100110111001100001011011100110010001101111”.

Very different indeed!
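(For the curious, those strings are just each character’s ASCII code written out as 8 bits; a couple of lines of Python reproduce them.)

```python
# Reproducing the bit strings above: each character becomes its ASCII
# code written as 8 bits, so one missing 'e' shifts every byte after it.
def to_bits(name: str) -> str:
    return "".join(format(ord(c), "08b") for c in name)

print(to_bits("Frnando"))   # the 56-bit string above
print(to_bits("Fernando"))  # the 64-bit string above
```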

It is not uncommon for our data to contain typos in name fields, mismatched addresses, different codes for the same lab tests, and so on. These “impurities” within the dataset require us to deploy a first layer of AI… just to clean it up! At Prognos we call this layer of technology Pre-Processing. We most definitely need it, because nobody wants to (nor could they) correct 8 billion records by hand! We work very hard, using in-house optimized machine learning algorithms including NLP (natural language processing), to make sure our logs are the cleanest, richest, and most beautiful logs ever… in the eyes of an AI, at least. This task makes us very proud: we use one layer of AI to optimize our logs and prepare them for another layer of AI. Call that intelligence!
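To make that concrete, here is a minimal sketch (a toy, not Prognos’ production pipeline) of one such cleanup idea: although “Frnando” and “Fernando” look wildly different bit by bit, an edit-distance check shows they are a single keystroke apart.

```python
# A toy illustration of one pre-processing idea: edit distance.
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions,
    or substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete a character
                            curr[j - 1] + 1,      # insert a character
                            prev[j - 1] + cost))  # substitute a character
        prev = curr
    return prev[-1]

def looks_like(field: str, candidate: str, max_edits: int = 1) -> bool:
    """Flag two name fields as a probable match if they are
    within max_edits single-character edits of each other."""
    return levenshtein(field, candidate) <= max_edits

print(levenshtein("Frnando", "Fernando"))  # 1: a single missing 'e'
print(looks_like("Frnando", "Fernando"))   # True
```

A real pipeline would combine this kind of matching with NLP models and normalization rules, but the principle is the same: teach the machine what “close enough” means.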

Once we have “cleaned up” our logs using one type of AI, we deploy another type of AI to process the now clean, rich dataset.

The second layer of AI we deploy on our (beautiful) logs is a predictive layer: one that finds patterns at scale, a job at which a human would fail miserably. Indeed, as an example, let’s consider the following picture:

[Image: a collection of circles, each containing a number and colored either blue or yellow]

If I asked you to guess, based on the picture above, what color a circle containing the number 3 would be, what would you say?

One reasonable guess would be “yellow,” since in the picture even numbers lie inside blue circles, and odd numbers are inside yellow circles. Now imagine a similar scenario, containing 8 billion numbers inside circles painted with 8 million different colors…what could you say about coloring patterns there?

Not much, I bet… but you get the point. A machine can be used to figure this out, namely, to find patterns at a massive scale.
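As a toy illustration, with made-up data and scikit-learn standing in for any pattern-finding engine, here is a machine recovering the even-blue/odd-yellow rule and answering the circle-3 question:

```python
# Toy version of the circles puzzle (hypothetical data).
from sklearn.tree import DecisionTreeClassifier

numbers = [1, 2, 4, 5, 6, 8, 9, 10]           # circles we can see
colors  = ["yellow", "blue", "blue", "yellow",
           "blue", "blue", "yellow", "blue"]   # their observed colors

# Expose the parity bit as a feature so the learned rule generalizes
# beyond the exact numbers seen in training.
X = [[n % 2] for n in numbers]
model = DecisionTreeClassifier().fit(X, colors)

print(model.predict([[3 % 2]])[0])  # -> 'yellow', as a human would guess
```

The small trick of handing the model the parity bit rather than the raw number is worth noticing: choosing the right representation of the data is often where the human expertise goes.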

It can’t be that easy…

The catch here is that machines aren’t that great (yet) at finding patterns within datasets by themselves. They pick it up very quickly, but they have to be properly educated, or, as specialists like to call it, they need to be trained. That’s where the bottleneck is today. In order to deploy AI in a commercially viable way, you need training data, which in turn requires flesh-and-blood, real people to generate it. At Prognos we use our clinical expertise and extreme wizardry to generate vast collections of training datasets. We then use these to train our AI, as in showing it a set of people where some individuals have been flagged with a health event and some haven’t. Once the AI is calibrated properly, the machine figures out (by itself!) how likely the remaining individuals are to undergo that health event in the future. Success!
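In code, that train-then-predict loop looks roughly like this. A minimal sketch with hypothetical features and data, not our actual models:

```python
# Minimal supervised-learning sketch. The two features per patient are
# hypothetical stand-ins for lab-derived values.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Training set: rows of (hypothetical) patient features, with a label
# saying whether clinicians flagged the health event for that patient.
X_train = np.array([[5.9, 0], [7.2, 1], [8.1, 1], [4.8, 0],
                    [6.5, 0], [9.0, 1], [5.1, 0], [7.8, 1]])
y_train = np.array([0, 1, 1, 0, 0, 1, 0, 1])   # 1 = event occurred

model = LogisticRegression().fit(X_train, y_train)

# Remaining (unlabeled) patients: estimate how likely each one is
# to undergo the event in the future.
X_rest = np.array([[8.5, 1], [5.0, 0]])
print(model.predict_proba(X_rest)[:, 1])       # event probabilities
```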

It can’t be that easy, part II

Not so fast. Although the above statement is correct, it is missing a particularly important ingredient. How do we know whether the machine is predicting correctly? In other words, the AI makes predictions (check!), but are we holding it accountable for what it predicts? Can we somehow guarantee its accuracy? In the business of machine learning we can guarantee only one thing: outcomes will be quite inaccurate (at least initially)! More important than being accurate from the get-go is knowing how to measure prediction errors, and how to take those measurements into account when correcting the course. That’s where the science meets the art. By way of example, think of this as the problem of designing the autopilot function for an airplane. Any airplane will inevitably encounter side winds. Because of this, the plane will move a bit away from the planned route. But that doesn’t mean it will not reach its destination! Rather, the information about sliding sideways, off the path, is quickly ingested by the plane’s computer, and actions are taken so that a new course is planned and followed. In technical terms, a feedback loop has been established and fed into a control system. Having powerful, properly deployed AI is as much about its internal tech as it is about its ability to correct the course. In our setting, this translates into developing a system that uses a feedback loop to adjust the AI’s parameters so that its predictions increase in accuracy.
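Here is what that feedback loop might look like in miniature, with synthetic data and simple retraining standing in for whatever correction strategy a real system uses: measure the error on each fresh batch, then fold the batch in and re-fit.

```python
# Feedback-loop sketch: evaluate on new data, then correct course.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)

def fresh_batch(n=200):
    """Stand-in for newly labeled records arriving over time."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

X, y = fresh_batch()
model = LogisticRegression().fit(X, y)

for step in range(3):
    X_new, y_new = fresh_batch()                          # the "side wind"
    error = log_loss(y_new, model.predict_proba(X_new))   # measure the drift
    print(f"step {step}: log loss = {error:.3f}")
    X, y = np.vstack([X, X_new]), np.concatenate([y, y_new])
    model = LogisticRegression().fit(X, y)                # correct course
```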

Got it. Now what is all this good for?

Putting it all together: we have rich, clean data at a massive scale; we have meaningful training sets tailored to specific health events; and we have an AI engine deployed on cutting-edge infrastructure. Using these ingredients, we are able not only to represent each patient’s current health status, but also to predict the likelihood of health events for all of the 150 million+ patients in our dataset. The last step consists of linking these anonymized, patient-centric bits of information with our records of treating physicians. We end up with a vast, continuously updating dataset containing relevant health information on over 150 million patients, including their physicians.
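The linkage step itself is conceptually simple; here is a sketch with hypothetical, anonymized identifiers standing in for the real registry’s:

```python
# Linking patient-centric predictions to treating-physician records.
# All IDs and columns are hypothetical stand-ins.
import pandas as pd

predictions = pd.DataFrame({
    "patient_id":        ["p1", "p2", "p3"],
    "event_probability": [0.82, 0.11, 0.47],
})
treating_physicians = pd.DataFrame({
    "patient_id":    ["p1", "p2", "p3"],
    "physician_npi": ["1234567890", "2345678901", "3456789012"],
})

# Join on the anonymized patient key to get one continuously
# updatable, physician-aware view of the predictions.
linked = predictions.merge(treating_physicians, on="patient_id")
print(linked)
```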

We use this massive, predictive dataset to help life science, payer, and lab partners find efficiencies within their workflows. What can you use this data for? Leave us a comment by live chatting on our website or send us an email at pharma@prognos.wpengine.com!