Cartoon videos as a test-bed for grounding spoken language in vision

Mitja Nikolaus, Afra Alishahi and Grzegorz Chrupała

A major unresolved issue for current models of spoken language acquisition is their unrealistic training data, consisting of images or videos paired with spoken descriptions of what is depicted. Such a setup guarantees a strong correlation between speech and the visual world. In the real world, the coupling between the linguistic and the visual modality is loose, and often confounded by correlations with non-semantic aspects of the speech signal. We address this shortcoming by using a dataset based on the children's cartoon Peppa Pig. We train a bi-modal architecture on the portion of the data consisting of dialog between characters, and evaluate on segments containing descriptive narrations. Despite the weak and confounded signal in this training data, our model succeeds at learning aspects of the visual semantics of spoken language.