Summary
Unsupervised visual inference can often be performed by exploiting the internal redundancy inside a single visual datum (an image or a video). The strong repetition of patches inside a single image/video provides a powerful data-specific prior for solving a variety of vision tasks in a “blind” manner: (i) Blind in the sense that sophisticated unsupervised inferences can be made with no prior examples or training; (ii) Blind in the sense that complex ill-posed Inverse-Problems can be solved, even when the forward degradation is unknown.
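To make the notion of internal patch recurrence concrete, the following minimal, self-contained sketch (illustrative only, not the project's code) measures how well small patches of a single image recur in a coarser scale of that same image; this cross-scale recurrence is the kind of data-specific prior referred to above. All function names and parameter choices (patch size, stride, scale factor, brute-force nearest-neighbour search) are our own assumptions, chosen for clarity rather than efficiency.

# Illustrative sketch: cross-scale patch recurrence inside a single image.
# Only numpy is assumed; all parameters below are arbitrary choices for clarity.
import numpy as np

def extract_patches(img, size=5, stride=3):
    """Return all size x size patches of a 2-D grayscale image as flat vectors."""
    h, w = img.shape
    patches = []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patches.append(img[y:y + size, x:x + size].ravel())
    return np.asarray(patches, dtype=np.float64)

def downscale(img, factor=2):
    """Crude box-filter downscaling by an integer factor (keeps the sketch dependency-free)."""
    h, w = img.shape
    h, w = (h // factor) * factor, (w // factor) * factor
    img = img[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def internal_nn_distances(img, size=5):
    """For each patch of the image, distance to its nearest patch in a coarser scale of the SAME image."""
    fine = extract_patches(img, size)
    coarse = extract_patches(downscale(img), size)
    # Brute-force nearest-neighbour search between fine-scale and coarse-scale patches.
    d2 = ((fine[:, None, :] - coarse[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(axis=1))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A toy "image" with repetitive structure: a tiled pattern plus mild noise.
    tile = rng.random((8, 8))
    img = np.tile(tile, (8, 8)) + 0.01 * rng.standard_normal((64, 64))
    dists = internal_nn_distances(img)
    print("mean patch distance to coarser scale:", dists.mean())

On an image with strong internal repetition these distances are small, which is what makes "blind" inference from internal evidence alone feasible even without external training examples.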
While the above fully unsupervised approach has achieved impressive results, it relies on internal data alone and therefore cannot enjoy the “wisdom of the crowd” that Deep Learning (DL) extracts so effectively from external collections of images, yielding state-of-the-art (SOTA) results. DL, however, requires huge amounts of training data, which restricts its applicability. Moreover, some internal image-specific information, which is clearly visible, remains unexploited by today's DL methods. One such example is shown in Fig. 1.
We propose to combine the power of these two complementary approaches, unsupervised Internal Data Recurrence and Deep Learning, to obtain the best of both worlds. If successful, this will have several important outcomes, including:
• A wide range of low-level & high-level inferences (image & video).
• A continuum between Internal & External training – a platform to explore theoretical and practical tradeoffs between amount of available training data and optimal Internal-vs-External training.
• Enabling totally unsupervised DL when no training data are available.
• Enabling supervised DL with modest amounts of training data.
• New applications, disciplines and domains enabled by the unified approach.
• A platform for substantial progress in video analysis (which has so far lagged behind due to its strong reliance on exhaustive supervised training data).
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/788535
Start date: 01-05-2018
End date: 31-10-2023
Total budget: 2 466 940,00 Euro; Public funding: 2 466 940,00 Euro
Cordis data
Status: CLOSED
Call topic: ERC-2017-ADG
Update Date: 27-04-2024
Geographical location(s)