
Data Scientist Tries AI/Human Collaboration For Audio-Visual Art

“Swirls of color and images blend together as faces, scenery, objects, and architecture transform to music.” That’s how AI training company Lionbridge is describing Neural Synesthesia.

Slashdot reader shirappu explains:

Neural Synesthesia is an AI art project that creator Xander Steenbrugge calls a collaboration between man and machine. To create each piece, he feeds curated image datasets into a generative network and combines the ever-transforming results with music programmed to control the shifting visuals.
Steenbrugge describes how the music controls the visuals in an interview with Lionbridge:
I think coding for the first rendered video took over six months because I was doing it in my spare time. The biggest challenge was how to manipulate the generative adversarial network (GAN)’s latent input space using features extracted from the audio track. I wanted to create a satisfying match between visual and auditory perception for viewers.

I apply a Fourier Transform to extract time-varying frequency components from the audio. I also perform harmonic/percussive decomposition, which basically separates the melody from the rhythmic components of the track. These three signals (instantaneous frequency content, melodic energy, and beats) are then combined to manipulate the GAN's latent space, resulting in visuals that are directly controlled by the audio…
[Y]ou are not limited by your own imagination. There’s an entirely alien system that is also influencing the same space of ideas, often in unexpected and interesting ways. This leads you as a creator into areas you never would have wandered by yourself.
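The general idea Steenbrugge describes, extracting short-time spectral features from audio and using them to steer a GAN's latent input, can be sketched in a few lines. The code below is not his actual pipeline; it is a minimal stand-in using only numpy, where a crude low-band energy ratio (a hypothetical proxy for melodic content) interpolates between two fixed latent vectors:

```python
import numpy as np

def audio_to_latents(audio, n_frames=30, latent_dim=512, seed=0):
    """Illustrative sketch: derive per-frame GAN latent vectors from audio.

    Assumptions (not from the original project): two random "anchor"
    latents, and a low-frequency energy ratio as the mixing weight.
    """
    rng = np.random.default_rng(seed)
    # Two fixed anchor points in latent space; audio features move between them.
    z_a = rng.standard_normal(latent_dim)
    z_b = rng.standard_normal(latent_dim)

    latents = []
    for frame in np.array_split(audio, n_frames):
        spectrum = np.abs(np.fft.rfft(frame))       # instantaneous frequency content
        low = spectrum[: len(spectrum) // 8].sum()  # crude low-band "melodic" proxy
        t = low / (spectrum.sum() + 1e-9)           # mixing weight in [0, 1]
        latents.append((1 - t) * z_a + t * z_b)     # trajectory through latent space
    return np.stack(latents)                        # shape (n_frames, latent_dim)
```

Each row of the result would then be fed to the GAN's generator to render one video frame, so louder low-frequency content pulls the imagery toward one anchor and quieter passages toward the other.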


