Imaging Science Thesis Defense: Transfer Learning Across Domains and Sensing Modalities
Chowdhury Sadman Jahan
Imaging Science
Rochester Institute of Technology
Register for Zoom Link Here
Abstract:
Transfer learning facilitates the training of a deep learning (DL) model with limited or no labeled data by initializing the network parameters from a similar model already trained on a different but related dataset or task. This dissertation examines two special cases of transfer learning for image classification: cross-modal supervised learning and cross-domain unsupervised adaptation. We propose a cross-modal transfer learning approach that guides the training of a DL model on Synthetic Aperture Radar (SAR) images via knowledge distillation from a DL model trained on corresponding electro-optical (EO) images. This approach also incorporates class-balanced sampling strategies and multi-stage training procedures to account for the high class imbalance encountered in a real-world SAR image dataset.
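The cross-modal distillation described above can be illustrated with a minimal NumPy sketch of the standard knowledge-distillation loss (soft teacher targets blended with hard labels). The function names, temperature, and weighting below are illustrative assumptions, not the dissertation's exact formulation:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax along the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend soft-target KL divergence (teacher -> student) with hard-label cross-entropy."""
    p_t = softmax(teacher_logits, T)   # soft targets from the (EO-trained) teacher
    p_s = softmax(student_logits, T)   # predictions from the (SAR) student
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1).mean()
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12).mean()
    # T^2 rescales soft-target gradients to match the hard-label term's magnitude
    return alpha * (T ** 2) * kl + (1 - alpha) * hard
```

A higher temperature softens the teacher's distribution so the student also learns inter-class similarity structure, which is the usual motivation for distilling across modalities.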
When models trained in one domain (the source) are deployed in a new environment (the target), they may suffer performance degradation due to the distribution shift between source and target data. Domain adaptation (DA) addresses this limitation by aligning source domain features with those extracted from the target domain. Drawing inspiration from continual learning, we refine the source-free continual unsupervised domain adaptation methods ConDA and UCL-GV, buffer-fed networks that adapt to continually incoming small batches of unlabeled target data. Our models outperform state-of-the-art (SOTA) continual DA models on both static and dynamic (gradually changing) target domains. We further introduce new synthetic aerial datasets under gradually degrading weather conditions and propose techniques to improve the training stability of continual DA methods.
Recent tools for the commercialization of DL models have raised concerns about protecting proprietary DL technologies during end-user deployment. We explore black-box domain adaptation (BBDA) to mitigate these concerns. We propose a curriculum-guided domain adaptation method, CABB, that splits the target data into clean and noisy subsets via pseudolabel distribution modeling, adapting first to the reliable, clean pseudolabels and only later to the noisy ones. Our method outperforms existing BBDA models by up to 9.3% across several popular DA datasets and is on par with white-box DA models.
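Pseudolabel distribution modeling of the kind described above is often realized by fitting a two-component Gaussian mixture to per-sample losses and treating the low-loss component as "clean". The sketch below, a tiny 1-D EM in NumPy with illustrative names and thresholds, shows the general technique under those assumptions rather than CABB's exact procedure:

```python
import numpy as np

def fit_two_gaussians(losses, iters=50):
    """Tiny 1-D EM for a two-component Gaussian mixture over per-sample losses."""
    x = np.asarray(losses, dtype=float)
    mu = np.array([x.min(), x.max()])           # init components at the extremes
    var = np.array([x.var() + 1e-6] * 2)
    pi = np.array([0.5, 0.5])                   # mixture weights
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each sample
        logp = (-0.5 * (np.log(2 * np.pi * var) + (x[:, None] - mu) ** 2 / var)
                + np.log(pi))
        logp -= logp.max(axis=1, keepdims=True)  # numerical stability
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update means, variances, and weights
        n = r.sum(axis=0) + 1e-12
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
        pi = n / len(x)
    return mu, var, pi, r

def split_clean_noisy(losses, tau=0.5):
    """Mark samples 'clean' when their posterior for the low-loss component exceeds tau."""
    mu, var, pi, r = fit_two_gaussians(losses)
    low = int(np.argmin(mu))                     # component with the smaller mean loss
    return r[:, low] > tau
```

The clean subset would then drive the early curriculum stages, with the noisy subset folded in once the model's target-domain predictions have stabilized.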
The object categories in the source and target domains may not fully overlap, and the target domain may contain samples from novel classes that are absent in the source domain. We introduce Unknown Sample Discovery (USD), a source-free open-set domain adaptation (SF-OSDA) method that also uses pseudolabel distribution modeling to separate known from unknown target samples. USD operates within a teacher-student framework, using co-training and temporal consistency between the teacher and student models to significantly reduce the error accumulation caused by imperfect known-unknown sample separation. Empirical results show that USD outperforms existing SF-OSDA methods by as much as ~20% in prediction accuracy.
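Temporal consistency in teacher-student frameworks of this kind is commonly enforced by updating the teacher as an exponential moving average (EMA) of the student, so the teacher's pseudolabels change slowly and absorb less noise. This minimal sketch assumes parameters stored as a dict of NumPy arrays and an illustrative momentum value; it is not necessarily USD's exact update rule:

```python
import numpy as np

def ema_update(teacher_params, student_params, momentum=0.999):
    """Move each teacher parameter a small step toward the student's current value."""
    return {k: momentum * teacher_params[k] + (1 - momentum) * student_params[k]
            for k in teacher_params}
```

With a momentum near 1, the teacher averages the student over many optimization steps, which is what lets it provide stable known-unknown decisions while the student trains on noisy targets.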
Intended Audience:
All are Welcome!
To request an interpreter, please visit myaccess.rit.edu
This is an RIT Only Event