Leveraging Emerging Similarities for Latent Space Communication

Abstract

Neural networks encode the complex structure of data manifolds in high-dimensional spaces through latent representations. Ideally, the distribution of data points in the latent space should depend only on the task, the data, the loss, and the architectural constraints. In practice, however, factors such as random weight initialization, training hyperparameters, and other sources of randomness during training can produce latent spaces that are incompatible with one another, hindering their reuse.

Notably, whenever the semantics of the data remain unchanged, a consistent phenomenon emerges: the angles between encodings are similar across distinct latent spaces. In this talk, we will delve into two empirical strategies that harness this phenomenon to enable latent communication across diverse architectures and data modalities:

  • Relative Projection: We will demonstrate how to construct a new, relative representation that is inherently invariant to these transformations (a minimal sketch follows this list).
  • Direct Transformation: We will showcase how prior knowledge about the relationship between different spaces can directly guide the translation from one space to another (see the second sketch below).
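
A minimal sketch of the first strategy, assuming the absolute encodings are available as NumPy arrays and that a small set of anchor samples has been encoded in the same space (the function name `relative_projection` is an illustrative choice):

```python
import numpy as np

def relative_projection(x: np.ndarray, anchors: np.ndarray) -> np.ndarray:
    """Re-describe each encoding by its cosine similarity to a set of anchors.

    x:       (n_samples, d) absolute encodings produced by one model
    anchors: (n_anchors, d) encodings of the anchor samples in the same space
    returns: (n_samples, n_anchors) relative representation

    Because cosine similarity depends only on angles, the result is invariant
    to rotations, reflections, and rescalings of the absolute latent space.
    """
    x_norm = x / np.linalg.norm(x, axis=1, keepdims=True)
    a_norm = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return x_norm @ a_norm.T
```

Two models that encode the same anchors can then be compared, or stitched together, directly in this shared relative space, even when their absolute encodings look entirely different.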
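
For the second strategy, one simple prior is that the two spaces are related by an orthogonal transformation (a rotation, possibly with a reflection). Below is a minimal sketch of that case, assuming both spaces share the same dimensionality and that a handful of paired samples has been encoded in both; the function names and the orthogonal Procrustes estimator are illustrative choices, not the only option:

```python
import numpy as np

def fit_orthogonal_map(src_anchors: np.ndarray, tgt_anchors: np.ndarray) -> np.ndarray:
    """Estimate an orthogonal map R such that src_anchors @ R is close to tgt_anchors.

    Closed-form solution of the orthogonal Procrustes problem via SVD,
    computed from a small set of samples encoded in both latent spaces.
    """
    u, _, vt = np.linalg.svd(src_anchors.T @ tgt_anchors, full_matrices=False)
    return u @ vt  # orthogonal map from the source space to the target space

def translate(src_encodings: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Map encodings from the source latent space into the target latent space."""
    return src_encodings @ r

# Hypothetical usage: translate model A's encodings into model B's space,
# then reuse B's decoder or classifier head on them (zero-shot stitching).
# r = fit_orthogonal_map(enc_a[anchor_idx], enc_b[anchor_idx])
# stitched_input_for_b = translate(enc_a, r)
```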

In both cases, these strategies enable efficient communication between latent spaces, bridging gaps between distinct domains, models, and modalities, and enabling zero-shot model stitching, model reuse, and latent space evaluation. This holds for both generation and classification tasks, showcasing the versatility and applicability of these strategies.

Date
Sep 13, 2023
Location
Universitat Pompeu Fabra, Spain
Luca Moschella
ELLIS Ph.D. in Computer Science

I’m excited by many fields of Computer Science and, in particular, by Artificial Intelligence.