Polymorph I was an early prototype of the system: water tanks containing submerged steel filaments, with pumps placed inside the tanks that activated once specific data thresholds were reached. The system incorporated generative models, including a fine-tuned Stable Diffusion model and a RAVE model. Cameras were directed toward the tank surfaces, registering subtle distortions, reflections, and movements within the system.
Each element within the system functions interchangeably as both input and output. The data distributed across these elements is used as prompts to generate subsequent images and sound. A key aspect was ensuring the system's responsiveness to physical processes such as vibration, movement, and electric current.
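The idea that every element is simultaneously input and output can be sketched as a shared state that each component both reads and writes. This is only an illustrative toy (the element names, update rules, and values are assumptions, not the installation's actual logic):

```python
# Toy sketch: each element reads the shared state written by the others
# and writes its own value back, so every element is both input and output.
# Names and update rules are illustrative, not the installation's logic.

def pump(state):
    # The pump reads the room light level and writes water agitation.
    state["water_motion"] = 1.0 if state["light"] > 0.5 else 0.2

def camera(state):
    # The camera reads the water's surface motion back as light data.
    state["light"] = min(1.0, 0.3 + state["water_motion"] * 0.6)

state = {"light": 0.8, "water_motion": 0.0}
for _ in range(3):      # a few passes around the loop
    pump(state)
    camera(state)
```

Run a few iterations and the two elements settle into mutual dependence: the pump's output becomes the camera's input and vice versa, which is the circulation the paragraph above describes.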
- Two rows of conductive steel hair (1) in each tank (2) react to the movement of the water. Their shifting proximities produce variable data that is fed into TouchDesigner.
- Water pumps (3) placed in each tank are triggered by changes of light in the room, which in turn affect the water's motion.
- A reflector (4), triggered by TouchDesigner, casts light over the tanks, producing dynamic light patterns that are captured by the camera (5) and fed back into the system.
- A Stable Diffusion model trained on cephalopod skin transformations and underwater light patterns is embedded in the TouchDesigner setup.
- The TouchDesigner environment triggers image generation through multiple data streams; the outputs are continuously generated and projected onto the canvas (6) as a pulsating plane whose shape, pulsation, and behaviour result from the changing conditions of the system.
- A microphone (7) picks up sound from the speakers, water tanks, and projector.
- A RAVE model generates sound based on the varying systemic input. The sound is played through the speakers (8).
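The pipeline in the list above can be sketched as one pass of a feedback loop in plain Python. In the actual setup each step would live inside TouchDesigner operators and the Stable Diffusion and RAVE models; here the sensor values, threshold, and prompt wording are all illustrative assumptions:

```python
import random  # stand-in for live sensor readings

# Illustrative threshold; the real trigger values are not documented here.
LIGHT_THRESHOLD = 0.6

def read_sensors():
    """Stand-in for the steel-hair (1), camera (5), and microphone (7) inputs."""
    return {
        "hair_proximity": random.random(),  # conductive filament data
        "room_light": random.random(),      # luminance seen by the camera
        "mic_level": random.random(),       # ambient sound level
    }

def make_prompt(sensors):
    """Map sensor data to a text prompt for the image model (assumed wording)."""
    motion = "turbulent" if sensors["hair_proximity"] > 0.5 else "calm"
    return f"cephalopod skin, {motion} underwater light patterns"

def step(sensors):
    """One pass of the loop: sensors drive the pumps (3) and the model prompts."""
    pump_on = sensors["room_light"] > LIGHT_THRESHOLD
    prompt = make_prompt(sensors)
    # In the installation the prompt would condition Stable Diffusion and the
    # audio features would condition RAVE; here we only return the decisions.
    return {"pump_on": pump_on, "prompt": prompt}

state = step(read_sensors())
print(state)
```

The point of the sketch is the circularity: the outputs of one pass (pump motion, projected image, generated sound) become the sensor readings of the next, so no single pass has a fixed input or output.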
The resulting outputs were both projected and printed onto material supports, allowing the generated imagery to re-enter the physical arrangement and take part in its ongoing modulation. As these elements circulated, slight discrepancies emerged between what was registered, processed, and materially reintroduced, producing small misalignments that carried forward into subsequent transformations.
Cameras, microphones, and conductive filaments operated as sensors, through which generated sound and light became entangled with the presence of observers and the surrounding environment. Within this configuration, the trajectories of the generative models shifted in response to these inputs, producing continuous variation in the audio-visual output.