Summary
Many recent advances in machine learning and computational statistics rely on algorithms that calculate derivatives. This need for derivatives has motivated the creation of domain-specific modelling languages in which every program can be differentiated automatically by the compiler, a technique known as automatic differentiation (AD). AD is typically implemented through source-code transformations, either directly or indirectly via operator overloading. These transformations become intricate in languages with expressive features such as algebraic data types and higher-order functions. Meanwhile, traditional calculus and differential geometry do not suffice to prove them correct, or even to give them meaning, because ordinary differential geometry cannot accommodate higher-order functions. Indeed, such formal correctness proofs have never been published.
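To illustrate the operator-overloading approach mentioned above, the following is a minimal Haskell sketch of forward-mode AD with dual numbers: each value carries its derivative, and the overloaded arithmetic propagates both at once. It is only an illustration under generic assumptions and does not reflect the project's actual transformations or its Accelerate-based implementation.

    -- A minimal sketch of forward-mode AD by operator overloading (illustrative only).
    -- A dual number pairs a primal value with its tangent (derivative).
    data Dual = Dual Double Double   -- Dual primal tangent
      deriving Show

    instance Num Dual where
      Dual x dx + Dual y dy = Dual (x + y) (dx + dy)
      Dual x dx - Dual y dy = Dual (x - y) (dx - dy)
      Dual x dx * Dual y dy = Dual (x * y) (x * dy + dx * y)   -- product rule
      negate (Dual x dx)    = Dual (negate x) (negate dx)
      abs    (Dual x dx)    = Dual (abs x) (dx * signum x)
      signum (Dual x _)     = Dual (signum x) 0
      fromInteger n         = Dual (fromInteger n) 0           -- constants have zero tangent

    -- Differentiate a numeric function at a point by seeding the input tangent with 1.
    diff :: (Dual -> Dual) -> Double -> Double
    diff f x = let Dual _ dx = f (Dual x 1) in dx

    -- Example: d/dx (x^3 + 2x) at x = 2 is 3*2^2 + 2 = 14.
    main :: IO ()
    main = print (diff (\x -> x * x * x + 2 * x) 2)

Here differentiation needs no change to the source of the function being differentiated: overloading the numeric operations is what effectively performs the source-code transformation, which is why the two implementation styles are closely related.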
This project will use the mathematical foundations of diffeological spaces, a conservative extension of traditional differential geometry to higher-order types, to give precisely such correctness proofs. In particular, it will define appropriate source-code transformations for both the forward-mode and reverse-mode techniques of AD on a language whose semantics is specified in diffeological spaces. It will then prove that these source-code transformations correctly implement the canonical semantic notion of differentiation given by the diffeological-space semantics. It will carry out this analysis for a higher-order language with tuples and variant types. These formal descriptions and correctness proofs of AD for expressive languages will be accompanied by closely matching implementations, built on top of the Accelerate framework for purely functional GPU programming.
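For orientation, the correctness property that such a proof establishes can be stated, at first-order types, in standard notation; the formulation below is a generic sketch (assuming amsmath/amssymb), not quoted from the project's publications.

    % Correctness of forward-mode AD at first-order types: if a program t
    % denotes a smooth map [[t]] : R^n -> R^m, then its transform D(t)
    % must denote the map pairing the primal value with the directional
    % (Jacobian-vector) derivative:
    \[
      [\![\, \mathcal{D}(t) \,]\!](x, v)
        = \bigl( [\![\, t \,]\!](x),\; \mathrm{D}[\![\, t \,]\!](x)\, v \bigr),
      \qquad x, v \in \mathbb{R}^n .
    \]
    % At higher-order types this statement has no meaning in ordinary
    % differential geometry. A diffeology supplies the missing structure:
    % a diffeological space is a set X together with a family of "plots"
    % p : U -> X (U an open subset of some R^k) that contains all constant
    % maps, is closed under precomposition with smooth maps between such
    % domains, and is closed under gluing along open covers. A map is
    % smooth exactly when it takes plots to plots:
    \[
      f : X \to Y \ \text{smooth}
      \iff
      f \circ p \ \text{is a plot of } Y \ \text{for every plot } p \ \text{of } X .
    \]

Because smooth maps between Euclidean spaces in this sense coincide with the usual ones, the diffeological semantics is a conservative extension, and the first-order correctness statement keeps its ordinary meaning while now also making sense at function types.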
More information & hyperlinks
Web resources: | https://cordis.europa.eu/project/id/895827
Start date: | 15-03-2020 |
End date: | 14-03-2022 |
Total budget - Public funding: | 187 572,48 Euro - 187 572,00 Euro |
Cordis data
Status: | TERMINATED
Call topic: | MSCA-IF-2019
Update Date: | 28-04-2024
Geographical location(s)