We explore the problem of sparse and continuous domain adaptation through incremental learning. Our motivation is to make networks adaptive to unseen classes drawn from different domains.


The concept of catastrophic forgetting has been foundational to continual learning; however, this phenomenon is typically attributed solely to the generalization capabilities of the neural network. We hypothesize that there is a strong triangular relationship between catastrophic forgetting, generalization, and robustness.