Ever since I started VJing, I’ve strived to make a tool that creates audio and video simultaneously to deliver a unique synaesthetic output. My ambition has always been to create a versatile instrument that can be picked up to instantly create a unique audiovisual composition.
Through ongoing experimentation, I’m fine-tuning the Zenoid, trying to achieve an effortless sensory tie between audio and video.
Despite there being plenty of sound-reactive video tools out there, I’ve always found them to be “noisy” the second there’s more than one sound element playing: the moment multiple sound sources are present, the sound reaction driving the video no longer conveys the subjective, emotional human experience of the audio. While it’s easy to achieve audio/video correlation when a single kick drum controls a video parameter, getting the same sensory experience when the audio input is a complex composition is no mean feat; direct correlation of audio and video parameters doesn’t deliver an output that the brain unifies as one.
Using my experiences as a DJ and VJ, I’m gradually working towards defining a paradigm that will unify audio and video from the subjective, emotional, human experience. BPM, pitch, timbre and volume are not enough when it comes to creating emotional equivalences between auditory and visual stimuli.
My interest in audio synthesis provided me with the methodology from which to start exploring the audiovisual parallels: in the same way that a modular synthesiser starts with an oscillator, I started my visual synthesis with a visual wave — not a visual representation of the sound wave, but just “a” wave. The sound source used in the Zenoid MK 1 was a sine wave, as it is the simplest, purest waveform, and I had the means to easily create a smooth wave visually.
From these two initial audio and video seeds, parameters affecting the audio and visual waves have been paired through ongoing experimentation, creating a whole in which human perception ties both stimuli together and interprets them as one.
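As a rough illustration of this pairing idea — a minimal sketch, not the Zenoid’s actual code — one parameter set can drive both seeds at once, with the pairing ratio between audible and spatial frequency being an assumption of mine:

```python
import math

def shared_seed(freq_hz, amplitude, sample_rate=48000, width=64):
    """One parameter set drives both an audio and a visual sine seed."""
    # Audio seed: one second of a sine tone at freq_hz.
    audio = [amplitude * math.sin(2 * math.pi * freq_hz * n / sample_rate)
             for n in range(sample_rate)]
    # Visual seed: a smooth wave sampled across the screen width.
    # The spatial frequency is paired to the audible one by a
    # hypothetical ratio chosen purely for illustration.
    cycles_on_screen = freq_hz / 110.0
    visual = [amplitude * math.sin(2 * math.pi * cycles_on_screen * x / width)
              for x in range(width)]
    return audio, visual

audio, visual = shared_seed(freq_hz=220.0, amplitude=0.8)
```

Because both waves are generated from the same frequency and amplitude parameters, turning one knob changes what is heard and what is seen in lockstep.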
While this is the starting point, on its own this methodology is far from achieving a complex audio or video output. Again, the process used in audio synthesis serves as an analogy for building complexity from a basic visual seed: once the seeds are created, they are routed through a chain of audio and visual effects that create the layers of complexity in the audio and video outputs.
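The routing through an effects chain can be sketched as simple function composition. This is a generic illustration of the pattern, not the Zenoid’s implementation, and the two example effects are hypothetical:

```python
def make_chain(*effects):
    """Compose per-sample effects into a single processing chain."""
    def chain(samples):
        for effect in effects:
            samples = [effect(s) for s in samples]
        return samples
    return chain

# Hypothetical effects: a soft clip that adds harmonics, then a gain stage.
soft_clip = lambda s: max(-0.7, min(0.7, s))
gain = lambda s: s * 1.2

process = make_chain(soft_clip, gain)
out = process([0.0, 0.5, 0.9, -1.0])  # the 0.9 and -1.0 samples are clipped first
```

The same chaining pattern applies to the visual seed: each effect takes the wave produced so far and returns a transformed one, so complexity accumulates layer by layer.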