RetroKD: Leveraging Past States for Regularizing Targets in Teacher-Student Learning
S. Jandial, Y. Khasbage, A. Pal, B. Krishnamurthy
Published by Association for Computing Machinery
2023
Pages: 10-18
Abstract
Several recent works show that higher-accuracy models may not be better teachers for every student, and refer to this problem as the student-teacher "knowledge gap". They further propose techniques which, as we discuss in this paper, are constrained by certain preconditions: 1) access to the teacher model/architecture, 2) retraining the teacher model, or 3) models in addition to the teacher model. Since these conditions may not hold in many settings, the applicability of such approaches is limited. In this work, we propose RetroKD, which smooths the targets of a student network by blending the student's past-state logits with those of the teacher. By doing so, we hypothesize that the resulting target is neither as hard as the teacher target nor as easy as the past student target. Such regularization on learning the parameters removes the preconditions required by other methods. An extensive set of experiments against baselines on the CIFAR-10, CIFAR-100, and TinyImageNet datasets, together with a theoretical study, supports our claim. We also performed key ablation studies, including hyperparameter sensitivity, a generalization study showing flatness of the loss landscape, and feature similarity with the teacher network. © 2023 ACM.
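To make the target-mixing idea concrete, below is a minimal PyTorch-style sketch based only on the abstract's description: the soft target is a blend of teacher logits and the student's own past-state logits. The function name retro_kd_loss, the mixing weight alpha, temperature tau, loss weight kd_weight, and the choice of which past checkpoint supplies the past-state logits are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def retro_kd_loss(student_logits, teacher_logits, past_student_logits,
                  labels, alpha=0.5, tau=4.0, kd_weight=0.9):
    """Distillation loss whose soft target blends teacher logits with the
    student's past-state logits, softening the raw teacher target.
    All hyperparameter values here are assumed for illustration."""
    # Blend teacher and past-student logits to form the regularized target.
    mixed_logits = alpha * teacher_logits + (1.0 - alpha) * past_student_logits

    # Temperature-scaled KL term against the mixed target (standard KD form).
    soft_targets = F.softmax(mixed_logits / tau, dim=-1)
    log_probs = F.log_softmax(student_logits / tau, dim=-1)
    kd = F.kl_div(log_probs, soft_targets, reduction="batchmean") * (tau ** 2)

    # Usual supervised cross-entropy on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return kd_weight * kd + (1.0 - kd_weight) * ce

if __name__ == "__main__":
    # Toy usage: a batch of 8 examples over 100 classes.
    s = torch.randn(8, 100)           # current student logits
    t = torch.randn(8, 100)           # teacher logits
    p = torch.randn(8, 100)           # logits from a past student checkpoint
    y = torch.randint(0, 100, (8,))   # ground-truth labels
    print(retro_kd_loss(s, t, p, y).item())

In a training loop, past_student_logits would come from a frozen copy of an earlier student checkpoint evaluated on the same batch; how often that checkpoint is refreshed is a design choice the abstract does not specify.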
About the journal
Journal: ACM International Conference Proceeding Series
Publisher: Association for Computing Machinery