Summary
High-throughput single-cell sequencing has produced a flood of data at single-cell resolution, including spatiotemporal information and different molecular facets of a cell, known as multi-omics. Integrating these data through MultiModal Learning (MML), which combines multiple complementary views, offers great promise for understanding the spatiotemporal phenotypic evolution of a cell and its molecular regulators. However, integrating multi-omics data across space and time is a formidable computational challenge requiring radically new MML approaches.
MULTIview-CELL will infer multimodal spatiotemporal phenotypic cell trajectories by combining back-translation, to allow the unsupervised dimensionality reduction of multimodal data, with a new Optimal Transport distance, allowing the spatiotemporal pairing of cells (Aim1). MULTIview-CELL will then pinpoint the molecular regulators of such trajectories by combining new Graph Convolutional Networks with topological evolutions and Heterogeneous Multilayer Graphs, allowing the integration of graphs inferred from multimodal data (Aim2). Finally, all developed methods will be implemented in open-source software, with an emphasis on GPU-friendly scalable computations, a unique feature among existing single-cell tools (Aim3).
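Aim1 builds on optimal transport to pair cells across space and time. As a minimal illustration of the underlying idea only (the project's new OT distance is not specified here), the classic entropic Sinkhorn coupling can softly match cells from two snapshots based on feature similarity; the function name, feature matrices, and parameters below are hypothetical:

```python
import numpy as np

def sinkhorn_coupling(X, Y, reg=0.1, n_iter=200):
    """Entropic OT coupling between two cell populations (illustrative sketch).

    X: (n, d) cell features at one snapshot; Y: (m, d) features at another.
    Returns an (n, m) soft pairing matrix with uniform marginals.
    """
    n, m = len(X), len(Y)
    # Squared Euclidean cost between every pair of cells
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / reg)                      # Gibbs kernel
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):                   # Sinkhorn fixed-point iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
P = sinkhorn_coupling(rng.normal(size=(5, 3)), rng.normal(size=(7, 3)))
print(P.shape)  # (5, 7)
```

The entropic regularization `reg` trades off sharpness of the pairing against numerical stability; GPU-friendly implementations of exactly this iteration are what make OT scalable to large single-cell datasets.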
These core contributions will advance Machine Learning but, more importantly, will have profound biological implications. Applying the developed tools to cutting-edge single-cell data from muscle stem cells will generate new biological hypotheses about their heterogeneity and crosstalk, to be validated through wet-lab experiments (Transversal Tasks). In addition, by making it possible to answer longstanding questions about the spatiotemporal phenotypic evolution of a cell, MULTIview-CELL will catalyze crucial knowledge in fundamental biology and will be key to preventing disease onset and therapy resistance, thereby impacting health, society and the economy.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101115618
Start date: 01-04-2024
End date: 31-03-2029
Total budget - Public funding: 1 285 938,00 Euro - 1 285 938,00 Euro
Cordis data
Status: SIGNED
Call topic: ERC-2023-STG
Update Date: 12-03-2024