Do neural networks dream of black cats?

Memory Snowglobe

LAST UPDATED APRIL 2025

NOTE: Please play video in the highest resolution possible!

TOOLS: Python, C#, VVVV

SKILLS: GenAI, Visual Art, ML Engineering

Intention

Memory Snowglobe is a moving-image project built from a machine’s attempt to reconstruct my life. Using a generative adversarial network trained entirely on my personal iPhone photo archive — thousands of images spanning the past decade — I created a system that tries to rebuild, remix, and reinterpret what I’ve seen, who I’ve known, and how I’ve remembered.

The project started as an experiment to understand Refik Anadol’s Unsupervised, where his lab trained StyleGAN2 on the entire MoMA collection and visualized the resulting latent space.

I used a similar model architecture, but instead of training on a large external repository, I turned inward and used my own life: mundane, fractured, often emotionally charged photos taken casually and haphazardly over time.

Model training

I preprocessed more than 7,500 photos to train StyleGAN3, a state-of-the-art generative adversarial network, and used a vision-aided training variant that ensembles pretrained vision models to improve results, given the wide variety of content in my photos.
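In rough strokes, that pass center-cropped each photo to a square and resized it to the power-of-two resolution StyleGAN3 expects. A minimal sketch (folder names and the 256-pixel target are placeholders, not the project's actual settings):

```python
# Minimal preprocessing sketch: crop each photo to a centered square,
# then resize to a power-of-two resolution. SRC, DST, and SIZE are
# illustrative placeholders, not the project's real paths or settings.
from pathlib import Path
from PIL import Image

SRC, DST, SIZE = Path("photos_raw"), Path("photos_256"), 256
DST.mkdir(exist_ok=True)

for i, path in enumerate(sorted(SRC.glob("*.jpg"))):
    img = Image.open(path).convert("RGB")
    side = min(img.size)  # largest centered square that fits the photo
    left, top = (img.width - side) // 2, (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side))
    img.resize((SIZE, SIZE), Image.LANCZOS).save(DST / f"{i:06d}.png")
```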

The model contains two networks. The first, a discriminator, has full access to the training set (my photo archive) and judges whether an image it is shown is real or fake. The second, a generator, has no access to the training set and creates fake images to fool the discriminator.

With these two game-theoretic objectives, the networks tend over time toward an equilibrium in which the generator produces images that could plausibly belong to my photo album but are wholly new reimaginings.
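As a toy illustration of that game, here is the classic GAN objective written as a single PyTorch training step. This is not StyleGAN3's actual loop (which uses a non-saturating logistic loss plus regularizers); G, D, the optimizers, and z_dim are stand-ins:

```python
# Toy GAN training step: the discriminator D learns to score real photos
# high and generated fakes low, while the generator G learns to fool D.
# G, D, opt_G, opt_D, and z_dim are placeholders, not StyleGAN3's code.
import torch
import torch.nn.functional as F

def train_step(G, D, real_images, opt_G, opt_D, z_dim=512):
    z = torch.randn(real_images.size(0), z_dim)

    # Discriminator step: real photos -> label 1, generated fakes -> label 0.
    real_logits = D(real_images)
    fake_logits = D(G(z).detach())  # detach so this step doesn't update G
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Generator step: try to make D label the fakes as real.
    fake_logits = D(G(z))
    g_loss = F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()

    return d_loss.item(), g_loss.item()
```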

On the left are outputs from late in training, which carry features of the underlying photo album but stitch together new textures, faces, and landscapes.

Latent space exploration

The features that emerged from the model were unnerving, strange, and sometimes unexpectedly funny. Faces merged, landscapes folded, and impossible objects surfaced. I rendered these into a continuous interpolation through the model's latent space, the high-dimensional manifold it uses to represent abstract features (like color, shape, or identity) that it combines to generate new images.

As the model "walks" through this embedding, it hallucinates new features from my documented life, creating new mountains, cities, and people: dreamlike transitions through false memories of my digital past, both bizarre and comforting.
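Mechanically, the walk amounts to interpolating between latent vectors and rendering a frame at each step. A minimal sketch, with the generator load left as a commented placeholder (network.pkl and the frame count are illustrative, and `legacy` comes from the NVlabs stylegan3 repo):

```python
# Sketch of a latent walk: slerp between two random latent vectors and
# render one frame per step. The generator load is a commented
# placeholder; 'network.pkl' and n_frames are illustrative values.
import numpy as np

def slerp(a, b, t):
    """Spherical interpolation between two latent vectors, staying on
    the shell where Gaussian latents concentrate."""
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

# with open('network.pkl', 'rb') as f:
#     G = legacy.load_network_pkl(f)['G_ema']   # trained generator

z_dim, n_frames = 512, 120
z0, z1 = np.random.randn(z_dim), np.random.randn(z_dim)
for i in range(n_frames):
    z = slerp(z0, z1, i / (n_frames - 1))
    # img = G(torch.from_numpy(z[None]).float(), None)  # one video frame
```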

Generative visualization

I then fed the interpolation into a real-time visual environment built in vvvv and C# using signed distance fields, creating a foggy, undulating waveform that feels less like a memory and more like peering inside one.
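The real environment runs in vvvv and C#, but the signed-distance-field idea sketches easily in Python: every point in space stores its distance to the nearest surface, and blending two fields with a smooth minimum gives the soft, fog-like merging the piece leans on. All shapes and constants below are illustrative:

```python
# SDF sketch, not the vvvv/C# shader itself: two sphere distance fields
# blended with a polynomial smooth minimum so their surfaces merge softly.
# The sphere positions, radii, and blend factor k are illustrative.
import numpy as np

def sphere_sdf(p, center, radius):
    """Signed distance from points p to a sphere: negative inside."""
    return np.linalg.norm(p - center, axis=-1) - radius

def smooth_min(d1, d2, k=0.5):
    """Polynomial smooth minimum (Inigo Quilez's formulation): blends
    two distance fields so shapes fuse instead of hard-intersecting."""
    h = np.clip(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0)
    return d2 * (1 - h) + d1 * h - k * h * (1 - h)

# Evaluate the blended field on a 2D slice of space.
xs, ys = np.meshgrid(np.linspace(-2, 2, 256), np.linspace(-2, 2, 256))
p = np.stack([xs, ys, np.zeros_like(xs)], axis=-1)
d = smooth_min(sphere_sdf(p, np.array([-0.5, 0.0, 0.0]), 0.7),
               sphere_sdf(p, np.array([0.6, 0.2, 0.0]), 0.5))
inside = d < 0  # the undulating blended surface is the d == 0 level set
```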

This piece explores computational memory, self-surveillance, and the fragility of personal archives in the age of generative media. In further work, I'd like to build it into an immersive installation, using projection, sound, and real-time viewer input to create an interactive memory environment.

I also want to add generative text, trained on my personal writing archives, to introduce fragmented narrative and deepen the emotional texture of the system.