Reprogrammers are robust learners

Ongoing Project

Abstract

In this work, we investigate how adversarial reprogramming [1] affects a model's robustness upon transfer. We are also interested in adding a robustness objective to the constrained optimization process of reprogramming. Finally, we study the effect of model scale on reprogramming and provide an in-depth cost breakdown comparing reprogramming with fine-tuning [2].
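As background, the reprogramming setup of Elsayed et al. [1] can be sketched as follows. The first two lines restate the standard formulation; the final objective is only an illustrative guess at how a robustness term might be folded into the constrained optimization (here, a worst-case loss over bounded input perturbations), not a finalized formulation.

```latex
% Adversarial program (Elsayed et al. [1]): W is the learned program,
% M a fixed mask, and \tilde{X} the zero-padded target-task input.
P = \tanh(W \odot M), \qquad X_{\mathrm{adv}} = \tilde{X} + P

% Standard reprogramming objective: cross-entropy under the frozen
% source model, with label mapping h, plus weight decay on W.
\min_{W} \; -\log P\!\left(h(y) \mid X_{\mathrm{adv}}\right) + \lambda \lVert W \rVert_F^2

% Illustrative robustness-augmented variant: add a worst-case term
% over input perturbations \delta bounded in \ell_\infty norm.
\min_{W} \; \max_{\lVert \delta \rVert_\infty \le \epsilon}
  -\log P\!\left(h(y) \mid X_{\mathrm{adv}} + \delta\right)
  + \lambda \lVert W \rVert_F^2
```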

References:

[1] Elsayed, Gamaleldin F., Ian Goodfellow, and Jascha Sohl-Dickstein. "Adversarial Reprogramming of Neural Networks." arXiv preprint arXiv:1806.11146 (2018).

[2] Wortsman, Mitchell, Gabriel Ilharco, Mike Li, Jong Wook Kim, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, and Ludwig Schmidt. "Robust Fine-Tuning of Zero-Shot Models." arXiv preprint arXiv:2109.01903 (2021).

Participants