Unsupervised Domain Adaptation Within Deep Foundation Latent Spaces


[Figure: The methodology scheme.]

Prof. Plamen Angelov and Dr. Dmitry Kangin have had a paper accepted at the Workshop on Mathematical and Empirical Understanding of Foundation Models at the International Conference on Learning Representations 2024 (ICLR 2024) in Vienna. The paper is titled 'Unsupervised Domain Adaptation Within Deep Foundation Latent Spaces'.

Vision transformer-based foundation models, such as ViT or DINOv2, are designed to solve problems with little or no fine-tuning of their features. Using a prototypical-network setting, we analyse to what extent such foundation models can solve unsupervised domain adaptation without fine-tuning on either the source or the target domain. Through quantitative analysis, as well as qualitative interpretation of the decision making, we demonstrate that the suggested method improves upon existing baselines, and we showcase the limitations of such an approach that remain to be solved.
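
The general idea lends itself to a compact illustration. The sketch below is not the paper's actual code: it shows, under our own assumptions, a nearest-prototype classifier operating in the frozen latent space of a DINOv2 backbone, where class prototypes are averaged from labelled source features and unlabelled target samples are assigned by cosine similarity. The function names (embed, class_prototypes, predict) are hypothetical.

```python
# Minimal sketch: nearest-prototype classification in a frozen
# foundation-model latent space. Illustrative only; names and details
# are assumptions, not the authors' implementation.
import torch
import torch.nn.functional as F

# Frozen DINOv2 backbone (ViT-S/14); no fine-tuning on either domain.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
backbone.eval()

@torch.no_grad()
def embed(images: torch.Tensor) -> torch.Tensor:
    """Map a batch of images (B, 3, H, W) to latent vectors (B, D)."""
    return backbone(images)

@torch.no_grad()
def class_prototypes(feats: torch.Tensor, labels: torch.Tensor,
                     num_classes: int) -> torch.Tensor:
    """Mean latent vector per class, computed on labelled source data."""
    protos = torch.stack([feats[labels == c].mean(dim=0)
                          for c in range(num_classes)])
    return F.normalize(protos, dim=-1)

@torch.no_grad()
def predict(feats: torch.Tensor, protos: torch.Tensor) -> torch.Tensor:
    """Assign each unlabelled target sample to the nearest source
    prototype by cosine similarity in the shared latent space."""
    sims = F.normalize(feats, dim=-1) @ protos.T
    return sims.argmax(dim=-1)

# Hypothetical usage:
#   src_feats = embed(source_images)
#   protos = class_prototypes(src_feats, source_labels, num_classes)
#   target_preds = predict(embed(target_images), protos)
```

Because the backbone stays frozen throughout, adaptation here rests entirely on how well the foundation model's latent space already aligns the two domains, which is precisely the question the paper examines.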

This workshop aims to bring together researchers who work on developing an understanding of foundation models (FMs), through either careful experimentation or theoretical work. Rigorous characterisation of FMs can also contribute to the broader goal of mitigating undesirable behaviours. FMs are now broadly available to users, so misaligned models present real-world risks. The workshop will focus on three main aspects of FMs: pretraining, adaptation, and emergent capabilities.
