Pistoia Alliance Symposium: Bioinformatics in Support of Precision Medicine
The recent Pistoia Alliance Symposium in London focused on the role of bioinformatics in supporting precision medicine. There were some very interesting use cases, ranging from principal concepts of bioinformatics, to clinical applications, to the importance of HPC and cloud for scaling up bioinformatics applications within the clinical setting. Below are some of the take-home messages that stuck with me.
Regulatory requirements for the clinical utility of biomarkers seem to be a big concern amongst the community, followed closely by the inability to access the right data at the right time. Members of the Pistoia Alliance are working towards a proposal to align data collection and analysis strategies across the discovery and clinical domains, with the aim of supporting the clinical utility of genetic biomarkers as companion diagnostics. The CDISC consortium is doing an excellent job of applying SDTM standards to genomics data, whereby gene-related datasets can be submitted to a regulatory agency as part of a clinical trial or as a product application (e.g. a new genetic test). Initiatives such as these, focused on defining standards across the domains, are much needed if the promise of genomics is to cross over from research into the highly regulated clinical environment, whether during the drug development process or within a clinical diagnostic setting.
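To make the idea concrete, here is a minimal sketch of what a standards-aligned genomic finding might look like as a flat, submission-ready record. The variable names follow the general SDTM convention (STUDYID, USUBJID, --TESTCD, --ORRES); the specific domain code and test values below are invented for illustration and are not taken from a published CDISC implementation guide.

```python
# Hypothetical SDTM-style record for a single genomic finding.
# Variable names mimic the SDTM naming convention; the domain code
# "PF" and all values here are illustrative assumptions.
import csv
import io

finding = {
    "STUDYID": "STUDY01",       # study identifier
    "DOMAIN": "PF",             # illustrative domain code for genomic findings
    "USUBJID": "STUDY01-001",   # unique subject identifier
    "PFTESTCD": "EGFRMUT",      # short test code (hypothetical)
    "PFTEST": "EGFR Mutation Analysis",
    "PFORRES": "T790M",         # result in original terms
    "PFSPEC": "TUMOR TISSUE",   # specimen type
}

# Serialise to the flat, delimited layout typical of regulatory datasets.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=finding.keys())
writer.writeheader()
writer.writerow(finding)
print(buf.getvalue())
```

The value of this kind of tabular, controlled-vocabulary layout is that the same gene-level result can travel unchanged from a discovery database into a clinical trial submission.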
This need for standardisation is not limited to data formats alone; it also extends to the processing of genomics data, and to the need to ensure that bioinformatics pipelines are validated much as conventional clinical diagnostic software is. There is an immense amount of bioinformatic code out there, and some of it will come to form part of a gold standard for one bioinformatic pipeline or another. However, the utility of the code does not stop at writing algorithms and packaging them up. Testing the software in different settings to make sure that the outputs are consistent is an essential step towards making it ready for prime time in a regulated clinical environment. All of this is extremely resource intensive and time consuming, so the industry is thinking even more deeply about how, and whether, increased machine learning capabilities can address these challenges and help improve the process of validating and testing pipelines.
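One common pattern for the output-consistency testing described above is a frozen-baseline regression check: run the pipeline on a fixed reference input, record a checksum of the canonicalised output, and verify that every new environment reproduces it exactly. The sketch below uses a toy stand-in for a real pipeline step (the `call_variants` function is hypothetical); the validation pattern, not the caller, is the point.

```python
# Sketch of output-consistency validation for a bioinformatics pipeline:
# rerun on a fixed input and compare a checksum of the canonicalised
# output against a baseline frozen on the reference environment.
import hashlib

def call_variants(reads):
    """Stand-in for a real pipeline step: a deterministic toy 'caller'."""
    return sorted({r.upper() for r in reads if r.startswith("chr")})

def checksum(records):
    """Checksum of canonicalised output, for cross-environment comparison."""
    blob = "\n".join(records).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

BASELINE_INPUT = ["chr1:12345:A>G", "chr2:67890:C>T", "noise"]
# Frozen once on the reference environment, then checked everywhere else.
BASELINE_CHECKSUM = checksum(call_variants(BASELINE_INPUT))

def validate(pipeline, inputs, expected_checksum):
    """True if the pipeline reproduces the frozen baseline exactly."""
    return checksum(pipeline(inputs)) == expected_checksum

assert validate(call_variants, BASELINE_INPUT, BASELINE_CHECKSUM)
```

In practice the baseline input would be a curated reference sample and the canonicalisation step would handle legitimate nondeterminism (e.g. record ordering) before hashing, so that only genuine result changes break the check.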
All of this is further compounded by the challenge of scaling up bioinformatics analysis efficiently. The increasing use of HPC and cloud analytics is most definitely addressing this challenge, and the need for cloud analytics as we begin to collect ever more data seems a foregone conclusion. However, there is a delicate balance to be struck between upfront investment and the ability to scale. A decision to go with one or the other should ultimately depend on an in-depth consideration of the specific use cases, changing infrastructure needs, and the changing complexity of the data analysis!
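The upfront-versus-scale trade-off can be framed as a simple break-even calculation: how many compute-hours before an owned cluster's capital cost is recouped by its lower hourly rate? All the figures below are invented for illustration; a real decision needs the in-depth, use-case-specific analysis described above.

```python
# Back-of-the-envelope sketch of the upfront-vs-cloud trade-off.
# All cost figures are hypothetical.

def breakeven_hours(upfront_cost, onprem_hourly, cloud_hourly):
    """Compute-hours at which owned HPC becomes cheaper than cloud."""
    saving_per_hour = cloud_hourly - onprem_hourly
    if saving_per_hour <= 0:
        return float("inf")  # cloud is never dearer per hour: no break-even
    return upfront_cost / saving_per_hour

# Hypothetical numbers: $200k cluster, $1/hr to run, $3/hr cloud rate.
hours = breakeven_hours(200_000, 1.0, 3.0)
print(f"Break-even after {hours:,.0f} compute-hours")
```

If projected workloads clear the break-even point comfortably, upfront investment may pay off; if usage is spiky or uncertain, the ability to scale cloud capacity up and down tends to dominate.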
All in all, a great symposium.
Want to learn more about how to bring together translational and clinical data into a single, scalable solution? Then click here.
Published on LinkedIn on March 15, 2019