Abstract
In this paper, we hypothesize that gradient-based meta-learning (GBML) implicitly suppresses the Hessian along the optimization trajectory in the inner loop. Based on this hypothesis, we introduce an algorithm called SHOT (Suppressing the Hessian along the Optimization Trajectory) that minimizes the distance between the parameters of the target and reference models to suppress the Hessian in the inner loop. Despite dealing with high-order terms, SHOT adds little computational overhead over the baseline model. It is agnostic to both the algorithm and architecture used in GBML, making it highly versatile and applicable to any GBML baseline. To validate the effectiveness of SHOT, we conduct empirical tests on standard few-shot learning tasks and qualitatively analyze its dynamics. We confirm our hypothesis empirically and demonstrate that SHOT outperforms the corresponding baseline.
Episodic Meta Learning
Gradient-Based Meta-Learning (GBML) solves each episode with gradient descent; that is, it adapts to each episode within a few optimization steps.
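A minimal sketch of this inner loop, assuming a MAML-style setup: starting from a meta-learned initialization, the model takes a few gradient steps on the episode's support loss. All names (`inner_loop`, `grad_fn`, the toy quadratic loss) are illustrative, not from the paper.

```python
import numpy as np

def inner_loop(theta0, grad_fn, lr=0.1, num_steps=5):
    """Adapt to one episode: a few gradient-descent steps from the
    meta-learned initialization theta0."""
    theta = np.array(theta0, dtype=float)
    for _ in range(num_steps):
        theta = theta - lr * grad_fn(theta)  # one inner-loop update
    return theta

# Toy episode: quadratic support loss L(theta) = 0.5 * ||theta - target||^2,
# whose gradient is simply theta - target.
target = np.array([1.0, -2.0])
adapted = inner_loop(np.zeros(2), lambda th: th - target, lr=0.5, num_steps=10)
```

With only ten steps, `adapted` already lands close to `target`, illustrating how GBML relies on fast adaptation rather than full convergence.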
Problem formulation
Our hypothesis: GBML suppresses the Hessian along the inner loop.
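Based on this hypothesis, the abstract describes SHOT as minimizing the distance between the parameters of the target and reference models in the inner loop. A hedged sketch of one such regularized inner-loop step, assuming an L2 parameter-distance penalty (the names `shot_step`, `lam`, and `theta_ref` are illustrative assumptions, not the paper's notation):

```python
import numpy as np

def shot_step(theta, theta_ref, grad_fn, lr=0.1, lam=0.5):
    """One inner-loop step with a parameter-distance penalty.

    The gradient of 0.5 * lam * ||theta - theta_ref||^2 with respect to
    theta is lam * (theta - theta_ref), so the update pulls the target
    model toward the reference model while following the task gradient.
    """
    penalty_grad = lam * (theta - theta_ref)
    return theta - lr * (grad_fn(theta) + penalty_grad)

# Toy check: with a zero task gradient, the step moves theta toward theta_ref.
theta = np.array([2.0, 2.0])
theta_ref = np.zeros(2)
stepped = shot_step(theta, theta_ref, lambda th: np.zeros_like(th), lr=0.1, lam=1.0)
```

Because the penalty only adds a first-order term to each update, this kind of regularizer need not significantly increase the cost of the inner loop.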
SHOT: Suppressing the Hessian along the Optimization Trajectory
Results
Citation
Acknowledgements
This work was supported by an NRF grant (2021R1A2C3006659) and IITP grants (2021-0-01343, 2022-0-00953), all funded by the Korean Government (MSIT).