A note on exact correspondences between adaptive learning algorithms and the Kalman filter
Revisiting the origins of the two main algorithms considered in the adaptive learning literature, namely Least Squares (LS) and Stochastic Gradient (SG), we find a connection between their non-recursive forms and their interpretation within a unifying state-space framework. Building on this connection, we extend the correspondence between the LS and Kalman filter recursions to a formulation of the former with time-varying gains, and we present a similar correspondence for the SG case. These correspondences hold exactly, in the sense of the computational implementation, and we discuss how they relate to the approximate correspondences previously reported in the literature.
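As background to the correspondences discussed above, the sketch below illustrates the classical exact equivalence between recursive least squares and a Kalman filter for a constant-parameter regression (identity state transition, zero process noise, unit measurement variance), which is the baseline case the note extends. This is a minimal, hedged illustration: the data, variable names, and initializations are made up for the example, and it does not reproduce the time-varying-gain or SG correspondences developed in the note itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative regression data: y_t = x_t' beta + e_t
T, k = 200, 3
beta_true = np.array([1.0, -0.5, 2.0])
X = rng.normal(size=(T, k))
y = X @ beta_true + rng.normal(scale=0.5, size=T)

# --- Recursive least squares (RLS) ---
beta_rls = np.zeros(k)
P_rls = 1e3 * np.eye(k)              # diffuse initial "uncertainty"
for t in range(T):
    x = X[t]
    err = y[t] - x @ beta_rls        # prediction error
    K = P_rls @ x / (1.0 + x @ P_rls @ x)
    beta_rls = beta_rls + K * err
    P_rls = P_rls - np.outer(K, x @ P_rls)

# --- Kalman filter for the constant-parameter state-space model ---
# state equation: beta_t = beta_{t-1}  (F = I, Q = 0)
# observation:    y_t = x_t' beta_t + e_t, with Var(e_t) = 1
beta_kf = np.zeros(k)
P_kf = 1e3 * np.eye(k)
for t in range(T):
    x = X[t]
    # prediction step is trivial here since F = I and Q = 0
    S = x @ P_kf @ x + 1.0           # innovation variance
    K = P_kf @ x / S                 # Kalman gain
    beta_kf = beta_kf + K * (y[t] - x @ beta_kf)
    P_kf = P_kf - np.outer(K, x @ P_kf)

# The two recursions coincide step by step (up to floating-point error)
print(np.max(np.abs(beta_rls - beta_kf)))
```

Running this prints a discrepancy on the order of machine precision, showing that the two recursions are the same computation under these assumptions; the note's contribution concerns extending this kind of exact, implementation-level correspondence to time-varying-gain LS and to SG.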