Current Genetic Medicine Reports
Volume 8, Issue 1, March 2020
Network Analysis for Complex Neurodegenerative Diseases
Claudia Manzoni, Patrick A. Lewis, Raffaele Ferrari
Purpose of Review
Biomedicine is witnessing a paradigm shift in the way complex disorders are investigated. In particular, the need to interpret big data has led to the development of pipelines that require the cooperation of different fields of expertise, including medicine, functional biology, informatics, mathematics and systems biology. This review sits at the crossroads of these disciplines and surveys recent developments in the use of graph theory (in the form of network analysis) to interpret large and heterogeneous datasets in the context of complex neurodegenerative diseases. It is aimed at a professional audience from diverse backgrounds.
Biomedicine has entered the era of big data, and this is actively changing the way we approach and perform research. The increase in the size and power of biomedical studies has led to the establishment of multi-centre, international working groups coordinating open-access platforms for data generation, storage and analysis. In particular, pipelines for data interpretation are under development, and network analysis is gaining momentum as a versatile approach to studying complex systems composed of multiple interconnected players.
We will describe the era of big data in biomedicine and survey the major freely accessible multi-omics datasets. We will then introduce the principles of graph theory and provide examples of network analysis applied to the interpretation of complex neurodegenerative disorders.
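To make the graph-theory vocabulary concrete before the examples that follow, the sketch below builds a toy undirected network and ranks its nodes by degree centrality, a basic measure used to flag hub nodes in biological networks. It is a minimal illustration only: the gene names and interactions are placeholders chosen for familiarity, not curated data.

```python
from collections import defaultdict

def build_graph(edges):
    """Represent an undirected graph as an adjacency map: node -> set of neighbours."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def degree_centrality(adj):
    """Normalised degree: the fraction of the other nodes each node is connected to."""
    n = len(adj)
    return {node: len(neighbours) / (n - 1) for node, neighbours in adj.items()}

# Hypothetical interaction edges (illustrative, not experimentally curated)
edges = [("SNCA", "PARK7"), ("SNCA", "LRRK2"), ("LRRK2", "PARK7"),
         ("LRRK2", "MAPT"), ("MAPT", "GRN")]

graph = build_graph(edges)
centrality = degree_centrality(graph)
hub = max(centrality, key=centrality.get)  # node with the most connections
```

In this toy network the hub is the node with the highest degree; in real analyses, centrality measures like this one are computed over networks with thousands of nodes to prioritise candidate disease genes.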
The research community is witnessing a very productive moment in biomedicine, with exponential growth in the amount of data generated and many initiatives under way to improve how we analyse those data, extracting biologically meaningful information that can be translated for the benefit of medical practice. Even though the computational power, the statistical approaches and the mathematics of graph theory are available, this paradigm shift in basic and applied research is still in its infancy. Levels of complexity remain to be overcome: networks are modelled as static rather than dynamic objects, whereas in the real biological context both edges and nodes can reconfigure themselves [76•], and many omics datasets still lack the cell-type-specific information that would be necessary to draw more comprehensive functional conclusions. A dedicated initiative, the Dialogue for Reverse Engineering Assessment and Methodology (DREAM) challenge (http://dreamchallenges.org), was launched in 2006 as a crowdsourcing effort in which teams from all over the world compete to develop the best-performing pipelines for compelling big-data problems in biomedicine. Analytical pipelines are being generated at a fast pace; however, they will need to stand the test of time. In particular, the next critical step will be validating in silico findings, thus developing useful functional systems to model disease and to highlight efficient endpoints for therapeutic drug intervention.