Experts discuss a medical ecosystem where people, policies and technology interconnect, allowing doctors to administer precise treatment to one patient — the N of 1 — and then deliver it to everyone.
The old “take two aspirin and call me in the morning” cure is antithetical to today’s precision medicine approach to healthcare, which zeroes in on an individual’s unique genetic makeup, environment and lifestyle to determine an exacting treatment.
Traditional healthcare is broadly based on cohort studies that compare risk factors and outcomes for a large set of patients who share similar symptoms. Cohort studies are more credible when the “N” — the number of patients included in the study — is very large.
That’s diametrically at odds with the goal of precision medicine, which aims to find a treatment plan tailored for a particular person: “The N of 1.”
This data-driven approach to finding treatment focuses on individuals, but industry experts want advancements in precision medicine to benefit broader populations.
It used to be that research institutions and treatment facilities would hoard data to stay competitive, said Anthony Philippakis, a cardiologist and chief data officer at the Broad Institute of MIT and Harvard.
Now they can’t wait to share it.
“Data is the key,” he said. “It’s the fuel of precision medicine, making all the parts of the complex engine work together to find the right course of treatment.”
Medicine has always produced large amounts of data, from radiology images to clinical trial results. Genomics has kicked data collection to a whole new level.
“The fingerprint of disease is in people’s DNA,” said Bryce Olson, global marketing director for Health and Life Sciences at Intel — and a stage-4 prostate cancer survivor.
He said genomic sequencing — essentially the process of turning a lab sample into digital data that a computer can analyze to determine the clinically significant mutations actually driving disease — helps doctors find targeted treatment plans that are potentially more effective and less toxic.
Genome sequencing is computationally expensive, requiring enormous amounts of computing performance and data storage. Sharing and updating these large data files can be challenging.
“Even when receiving targeted treatments, eventually a person’s cancer may start to grow again,” said Olson. “That generally means cancer figured out how to evade therapy and mutated yet again. Staying on top of this through continued genome sequencing is important, it’s basic blocking and tackling.”
Finding a mutation by comparing different sequences is “like sifting through the Library of Congress to find a spelling mistake in one particular book,” Olson said.
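At its simplest, the comparison Olson describes means lining up a sequenced sample against a reference genome and flagging where they differ. The sketch below is a toy illustration of that idea only — real pipelines align billions of short reads and use statistical variant callers rather than direct string comparison, and the sequences here are made up.

```python
# Toy sketch: find point mutations by comparing a sample sequence
# against a reference sequence, position by position.
# Real variant calling is far more involved; this only shows the idea.

def find_point_mutations(reference: str, sample: str):
    """Return (position, reference_base, sample_base) for each mismatch."""
    return [
        (pos, ref_base, sample_base)
        for pos, (ref_base, sample_base) in enumerate(zip(reference, sample))
        if ref_base != sample_base
    ]

reference = "ACGTACGTACGT"
sample    = "ACGTACCTACGT"  # one substitution
print(find_point_mutations(reference, sample))  # [(6, 'G', 'C')]
```

Scaled up from twelve letters to the roughly three billion base pairs of a human genome, the “spelling mistake in the Library of Congress” analogy becomes clear.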
Technologies Helping Meet Health Data Challenges
For many hospitals and clinics, collecting and using data for precision medicine is still too expensive, but a company called QIAGEN is working with Intel to bring the cost of genome analysis — the bioinformatics done on genome sequencing data — down to as little as $22 per patient per genome.
QIAGEN’s offering pairs the Intel Scalable System Framework with QIAGEN Clinical Insight (QCI) software for genomic testing and analytics, enabling next-generation sequencing (NGS) genetic testing across a range of cancers and hereditary and rare diseases. This makes it simpler for doctors to use genomic data to guide clinical decision-making.
Even as it offers a new tool to guide treatment planning, precision healthcare can be overwhelming, especially once data sources beyond genomics are brought in to build a full picture of the patient.
It requires database technologies that can manage genomic data as well as “unstructured” data — insights scribbled in doctors’ notes or recordings that can usefully inform a structured treatment plan. Then there are personal tracking devices, from Fitbits to glucose monitors, that can also produce valuable data, especially around patient lifestyles, according to Chilmark Research analyst Brian Eastwood.
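One way systems begin to tame unstructured notes is by pulling recognizable fields out of free text. The sketch below uses simple pattern matching to illustrate the idea; the note text and field names are invented, and production systems rely on clinical natural-language processing rather than hand-written patterns.

```python
# Hedged sketch: extracting structured vitals from a free-text
# clinical note with pattern matching. Illustrative only.
import re

def extract_vitals(note: str) -> dict:
    fields = {}
    # Blood pressure written like "BP 142/91"
    bp = re.search(r"BP\s*(\d{2,3})/(\d{2,3})", note)
    if bp:
        fields["systolic"], fields["diastolic"] = map(int, bp.groups())
    # Glucose written like "glucose 118 mg/dL"
    glucose = re.search(r"glucose\s*(\d+)\s*mg/dL", note, re.IGNORECASE)
    if glucose:
        fields["glucose_mg_dl"] = int(glucose.group(1))
    return fields

note = "Pt reports fatigue. BP 142/91, glucose 118 mg/dL. Continue metformin."
print(extract_vitals(note))
# {'systolic': 142, 'diastolic': 91, 'glucose_mg_dl': 118}
```

Once fields like these are structured, they can sit in the same database alongside genomic results and device readings.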
While it’s still early days in terms of clinical applicability, Eastwood said this kind of data can help doctors prescribe more informed and realistic prevention, treatment and recovery plans.
Eastwood pointed out that information about a patient’s neighborhood living conditions can also help. A patient who lives on the sixth floor of a building without an elevator probably has a greater risk of another fall. A patient who lives in an area with no easy access to fresh food may struggle to follow a prescribed dietary plan, and a neighborhood with no sidewalks may derail an exercise regimen that calls for walking.
“Understanding those other factors will help deliver on the promise of precision medicine,” said Eastwood.
The amount of precision healthcare data generated just for one person is enormous, requiring new technologies to manage and make sense of it, according to David Holmes, an assistant professor of Biomedical Engineering at the Mayo Clinic. It will need to be analyzed and simplified quickly for many patients in order to be useful.
“You’ll need artificial intelligence to analyze [huge datasets] to determine the best-match cohort, and then make care/cost decisions in real time,” said Holmes.
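The “best-match cohort” idea Holmes describes can be pictured as a similarity search: given one patient’s characteristics, find the historical patients who look most alike. The sketch below does this with plain Euclidean distance over a handful of invented numeric features; real systems use far richer features and models.

```python
# Toy sketch of best-match cohort selection: rank historical patient
# records by similarity to a new patient. All data here is invented.
import math

def best_match_cohort(patient, records, k=2):
    """records: list of (patient_id, feature_vector). Return k closest ids."""
    ranked = sorted(records, key=lambda rec: math.dist(patient, rec[1]))
    return [patient_id for patient_id, _ in ranked[:k]]

# Hypothetical features: (age, systolic blood pressure, BMI)
records = [
    ("p1", (54, 130, 27.0)),
    ("p2", (71, 150, 31.5)),
    ("p3", (56, 128, 26.5)),
]
print(best_match_cohort((55, 129, 26.8), records))  # ['p1', 'p3']
```

In practice, features would be normalized and weighted, and the matched cohort’s outcomes would then feed the real-time care and cost decisions Holmes mentions.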
Researchers at the UK’s Medical Research Council recently showed this approach in action. Using a machine learning system that automatically learns from experience, improving its accuracy as it gathers more data, researchers could predict heart failure in individual patients. This kind of predictive work can help doctors develop proactive or preventative care plans.
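The phrase “learns from experience” can be made concrete with a minimal online model: one that updates its parameters with each new record, so predictions can sharpen as data accumulates. The sketch below trains a tiny logistic model on synthetic, made-up risk scores — it is not the Medical Research Council’s method, just an illustration of incremental learning.

```python
# Minimal sketch of a model that improves as it sees more data:
# online logistic regression updated one record at a time.
# All records, features, and labels below are synthetic.
import math

def predict(weights, x):
    """Probability of the positive class (e.g. heart failure)."""
    z = sum(w * xi for w, xi in zip(weights, x))
    return 1 / (1 + math.exp(-z))

def update(weights, x, label, lr=0.5):
    """One stochastic gradient step on a single (features, label) record."""
    p = predict(weights, x)
    return [w + lr * (label - p) * xi for w, xi in zip(weights, x)]

# Each record: ((bias, risk_score), label); repeated to simulate
# data arriving over time.
data = [((1, 0.9), 1), ((1, 0.8), 1), ((1, 0.1), 0), ((1, 0.2), 0)] * 50
weights = [0.0, 0.0]
for x, label in data:
    weights = update(weights, x, label)

print(predict(weights, (1, 0.85)) > 0.5)  # high-risk patient flagged: True
```

The key property is that `weights` keep improving as records stream in — the same reason the researchers’ system gained accuracy as it gathered more data.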
When deciding whether or not surgery is the next step for a patient, medical experts could soon turn to supercomputers that simulate potential outcomes. For example, researchers at Johns Hopkins are feeding data from CT scans, ultrasound and electrical monitors into a supercomputer to simulate a human heart.
“It reduces the [decision-making] time from hours to tens of minutes, and decreases mortality,” said Bob Rogers, chief data scientist of Analytics and Artificial Intelligence at Intel.
In an effort to make this data-intensive approach work and be accessible to more people, Olson sees the healthcare and technology industries working together to make sure hospitals and clinics can easily and quickly share large amounts of information.
“Any given hospital, if they have a rare cancer patient come through, they may not have enough data to help that person,” said Olson. “The same applies to a patient with a more frequently occurring cancer like breast, prostate, or lung. When these patients are given targeted therapies to inhibit a clinically significant mutation, what happens if their cancer ultimately progresses again?”
He said these patients now have cancers that might be driven by entirely new variants or mutated pathways that essentially make them rare as well. “In the future, cancer centers will need to work together so hospitals can share insights to help find the needles in the haystack.”
In addition to the Broad Institute, there are many efforts to tackle the healthcare data challenge, including government-driven work on the U.S. Centers for Disease Control’s BioSense platform and private member-based networks like PatientsLikeMe. The Targeted Agent and Profiling Utilization Registry (TAPUR) Study offers patients with advanced cancer across 25 clinical sites access to molecularly targeted cancer drugs. It’s a clinical trial that matches commercially available, targeted anticancer drugs with advanced cancer patients who have a potentially actionable genomic variant.
Some cover multiple conditions and diseases, while others, like the Collaborative Cancer Cloud pilot (founded by the Oregon Health and Science University and Intel), are focused on a particular area of medicine.
Data sharing is also happening at the community level. Three hospitals in Camden, NJ, for example, joined up under the banner of the Camden Coalition of Healthcare Providers, sharing data to get a city-wide picture of how violent crime affected hospital usage and healthcare costs.
Cloud-based datasets and tools that “bring the researchers to the data,” rather than the other way around, are one new approach gaining traction, said the Broad Institute’s Philippakis.
He said rather than a single platform, the data-sharing system supporting precision medicine for the masses will be a collection of data repositories, many cloud-based, with tools plugged in to help medical professionals perform powerful analyses.
Beyond advances in technology and data sharing, this system will require participation from insurance companies to help patients cover the cost. While many insurance policies cover genomic sequencing for advanced cancer patients, patients can find it challenging to get coverage for participation in a clinical trial.
“I think the onus is going to be on healthcare providers to show the clinical benefit of genomic sequencing and targeted therapies,” said Olson. “If they can show clinical outcomes that improve upon the current standard of care and allow them to see the value, reimbursement will follow.”
For experts like Philippakis, the future is bright.
“It’s breathtaking what’s changed in the last 15 years,” he said. “There are so many reasons for optimism.”