Summary
Much of the research on the foundations of graph algorithms is carried out under the assumption that the algorithm has full knowledge of the input data.
In spite of the theoretical appeal and simplicity of this setting, the assumption of full knowledge does not always hold.
Indeed, uncertainty and partial knowledge arise in many settings.
One example is when the data is very large, so that even reading it once in its entirety is infeasible and sampling is required.
Another example is when the data changes over time (e.g., social networks, where information is fluid).
A third example is when processing of the data is distributed over computation nodes, each of which has only local information.
Randomization is a powerful tool in the classic setting of graph algorithms with full knowledge, and it is often used to simplify an algorithm and to speed up its running time.
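To make this concrete, below is a minimal sketch of one classic randomized graph algorithm, Karger's random-contraction min cut; the choice of algorithm and all implementation details are illustrative assumptions and are not part of the project description.

```python
import random

def karger_min_cut(edges, n, trials=100):
    """Estimate the global min cut of an undirected multigraph by random
    edge contraction (Karger). `edges` is a list of (u, v) pairs over the
    vertices 0..n-1. A single trial finds a fixed min cut with probability
    at least 2/(n*(n-1)), so repeating boosts the success probability."""
    best = len(edges)
    for _ in range(trials):
        # Union-find over the vertices; contracting an edge merges its endpoints.
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        # Contract edges in a uniformly random order until 2 super-vertices remain.
        remaining = n
        pool = edges[:]
        random.shuffle(pool)
        for u, v in pool:
            if remaining == 2:
                break
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                remaining -= 1
        # Edges whose endpoints lie in different super-vertices cross the cut.
        cut = sum(1 for u, v in edges if find(u) != find(v))
        best = min(best, cut)
    return best

# Example: a 4-cycle has global min cut 2.
print(karger_min_cut([(0, 1), (1, 2), (2, 3), (3, 0)], n=4))
```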
However, physical computers are deterministic machines, and obtaining true randomness can be hard.
Therefore, a central line of research is focused on the derandomization of algorithms that rely on randomness.
The challenge of derandomization also arises in settings where the algorithm has some degree of uncertainty.
In fact, in many cases of uncertainty the challenge and motivation for derandomization are even stronger.
Randomization by itself adds another layer of uncertainty, because different results may be attained in different runs of the algorithm.
In addition, in many such settings randomization comes with additional assumptions on the model itself, and therefore weakens the guarantees of the algorithm.
In this proposal I will investigate the power of randomization in uncertain environments.
I will focus on two fundamental areas of graph algorithms with uncertainty.
The first area relates to dynamic algorithms and the second area concerns distributed graph algorithms.
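As an illustration of the second area, the following is a minimal sketch of a Luby-style randomized maximal-independent-set computation, simulated here sequentially; the specific algorithm and the code are illustrative assumptions, not algorithms singled out by the proposal.

```python
import random

def luby_mis(adj):
    """Simulate Luby's randomized maximal-independent-set algorithm on an
    undirected graph given as an adjacency dict {v: set(neighbours)}.
    In each synchronous round every live vertex draws a random priority and
    joins the MIS if its priority beats those of all live neighbours; chosen
    vertices and their neighbours then drop out of the graph."""
    live = set(adj)
    mis = set()
    while live:
        # Each live vertex draws an independent random priority.
        r = {v: random.random() for v in live}
        # Local winners: strictly larger priority than every live neighbour.
        winners = {v for v in live
                   if all(r[v] > r[u] for u in adj[v] if u in live)}
        mis |= winners
        # Winners and their neighbours leave the graph.
        removed = set(winners)
        for v in winners:
            removed |= adj[v] & live
        live -= removed
    return mis

# Example: a 5-cycle; every maximal independent set of C5 has size 2.
cycle = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(sorted(luby_mis(cycle)))
```

Different runs of this sketch may return different (equally valid) maximal independent sets, which illustrates the extra layer of uncertainty that randomization adds.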
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/803118
Start date: 01-10-2019
End date: 30-09-2025
Total budget - Public funding: 1 500 000,00 Euro - 1 500 000,00 Euro
Cordis data
Status: SIGNED
Call topic: ERC-2018-STG
Update Date: 27-04-2024
Geographical location(s)