a tiny research group on the edge of theoretical deep learning


We explore the problem of sparse and continuous domain adaptation through incremental learning. Our motivation is to make networks adaptive to unseen classes from different domains.


The concept of catastrophic forgetting has been foundational to continual learning; however, this phenomenon is usually attributed solely to the generalization capabilities of the neural network. We hypothesize that there is a strong three-way relationship between catastrophic forgetting, generalization, and robustness.


Diganta Misra

Research MSc student, MILA

Himanshu Arora

Machine Learning Engineer III, Workday

Trikay Nalamada

Undergraduate Student, Indian Institute of Technology, Guwahati

Ajay Uppili Arasanipalai

Undergraduate Student, University of Illinois at Urbana-Champaign

Alex Gu

Graduate CS Student, MIT

Ching Lam Choi

Undergraduate Student, CUHK

Javier Ideami

CEO, Ideami Studios

Jaegul Choo

Associate Professor, Graduate School of Artificial Intelligence, KAIST

Full team


In collaboration with:

Continual AI
Weights & Biases

Supported by contributions from:

OpenPOWER Foundation