December 14, 2022

Like last year, for summer 2023 we have a three-month internship open for research at Microsoft Research Redmond on compositionality in deep learning. This internship is for fundamental research: we are developing new architectures that allow deep learning models to invent their own explicitly compositionally structured (continuous) representations (a bit more information appears at the end of this message). A suitable intern would have considerable experience analyzing and adapting deep learning models so that they train and generalize well, and an interest in instantiating the abstract nature of compositional processing within neural architectures to promote robust compositional generalization. Experience designing and implementing novel neural architectures would be a significant plus.
We are excited that internships in 2023 will be on-site, for the first time in several years. Any potentially interested students should contact us directly, and may in addition apply at
Thank you for your attention to this email, and best wishes for a healthy and productive new year!
Paul & Roland
Brief description of the research project
Human learners exhibit robust generalization from relatively modest learning data because they understand that the world is strongly compositional: their representations of the world have compositional structure. At the same time, human representations are continuous vectors, which also enables strong similarity-based generalization. Just how both kinds of generalization can arise from representations with continuous compositional structure has been characterized precisely by recent mathematical results. The research project to which interns will contribute develops novel neural network architectures for models that learn to deploy continuous mechanisms for processing such representations compositionally and, ultimately, to use deep learning to invent their own data-driven continuous compositional representations.
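The announcement does not specify the project's architecture, but one classic, minimal illustration of a continuous compositional representation is the tensor product representation: each symbol (filler) and each structural position (role) is a continuous vector, a structure is encoded as a sum of filler-role outer products, and a position's filler can be recovered by contracting with that position's role vector. The sketch below is illustrative only, under the assumption of orthonormal role vectors; the names (`roles`, `fillers`) and dimensions are hypothetical, not the project's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 orthonormal role vectors, one per structural position.
# QR decomposition of a square random matrix yields an orthogonal Q, so its
# rows are orthonormal.
roles = np.linalg.qr(rng.normal(size=(4, 4)))[0]

# Continuous filler vectors for two symbols (in practice these would be learned).
fillers = {"A": rng.normal(size=6), "B": rng.normal(size=6)}

# Bind: the structure (A in position 0, B in position 1) is the sum of
# filler (x) role outer products -- a single continuous vector/tensor.
T = np.outer(fillers["A"], roles[0]) + np.outer(fillers["B"], roles[1])

# Unbind: contracting T with a role vector recovers that position's filler
# exactly, because the roles are orthonormal.
recovered = T @ roles[0]
assert np.allclose(recovered, fillers["A"])
```

Because the encoding lives in a continuous vector space, nearby filler vectors yield nearby structure encodings, which is the similarity-based generalization mentioned above, while the role-binding scheme supplies the explicit compositional structure.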
An overview of the research program to which this project contributes is presented in a recent AI Magazine article (preprint: