Abstract
This thesis studies associative memories, a broad class of machine learning models. These models offer powerful content-addressable information storage and have strong connections to human cognition, yet remain underutilized in modern machine learning research. Our research focuses on two members of this class: the Hopfield network and the Dense Associative Memory, representing the classical and contemporary study of associative memories respectively. We introduce a mathematically rigorous framework for prototype formation in the Hopfield network (Chapter 3), providing a previously missing foundation for studies of prototype learning. We also significantly extend existing research on state classification in the Hopfield network (Chapter 4), focusing on prototype classification and the generalizability of the classifier models. We propose a modification to the original Dense Associative Memory (Chapter 5) that prevents numerical instability and greatly simplifies hyperparameter selection, while remaining compatible with existing literature and theoretical results. Finally, we present the first foundational study of sequential learning in the Dense Associative Memory (Chapter 6), demonstrating previously unknown properties of the model.