Computers don't dream of electric sheep; they imagine the dulcet tones of legendary public-access painter Bob Ross. Stochastic artist and engineer Alexander Reben has produced an incredible feat of machine learning in honor of the late Ross, creating a mashup video that applies Deep Dream-like algorithms to both the video and audio tracks. The result is an utterly surreal experience that will leave you pinching yourself.
"A lot of my artwork is about the connection between technology and humanity, whether it be things that we're symbiotic with or things that are coming in the future," Reben told me during a recent interview. "I try to eek a bit more understanding from what technology is actually doing." For his latest work, dubbed "Deeply Artificial Trees", Reben sought to represent "what it would be like for an AI to watch Bob Ross on LSD."
To do so, he spent a month feeding a season's worth of audio into the WaveNet machine learning algorithm to teach the system how Ross spoke. WaveNet was originally developed to improve the quality and accuracy of sounds generated for text-to-speech systems by directly modeling the raw waveform one sample point at a time (16,000 samples per second for 16 kHz audio), rather than relying on less effective concatenative or parametric methods.
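To make that sample-by-sample idea concrete, here is a minimal PyTorch sketch of WaveNet's core building block: a stack of dilated causal convolutions that predicts a distribution over the next quantized audio sample given everything heard so far. This is only an illustration, not DeepMind's implementation; the names `TinyWaveNet` and `CausalConv1d`, along with the layer and channel counts, are assumptions for this sketch, and the published model further adds gated activations, skip connections, and conditioning inputs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Conv1d):
    """1-D convolution, left-padded so the output at time t
    depends only on inputs at times <= t."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__(in_ch, out_ch, kernel_size, dilation=dilation)
        self.left_pad = (kernel_size - 1) * dilation

    def forward(self, x):
        return super().forward(F.pad(x, (self.left_pad, 0)))

class TinyWaveNet(nn.Module):
    def __init__(self, channels=32, layers=8, classes=256):
        super().__init__()
        # Audio is quantized to 256 levels (8-bit mu-law in the paper),
        # so the model outputs a categorical distribution per sample.
        self.embed = nn.Embedding(classes, channels)
        # Dilations double each layer (1, 2, 4, ...), so the receptive
        # field grows exponentially with depth.
        self.stack = nn.ModuleList(
            CausalConv1d(channels, channels, kernel_size=2, dilation=2 ** i)
            for i in range(layers)
        )
        self.out = nn.Conv1d(channels, classes, kernel_size=1)

    def forward(self, samples):
        # samples: (batch, time) integer sample values in [0, 256)
        x = self.embed(samples).transpose(1, 2)  # (batch, channels, time)
        for conv in self.stack:
            x = torch.relu(conv(x)) + x          # simple residual connection
        return self.out(x)                       # (batch, 256, time) logits

model = TinyWaveNet()
clip = torch.randint(0, 256, (1, 16000))  # one second of 16 kHz audio
logits = model(clip)                      # per-sample predicted distributions
```

Training such a model would compare the prediction at each time step against the following sample (i.e. shift the targets by one), which is what lets the network learn, step by step, what a voice like Ross's should sound like.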
Read Andrew Tarantola's full article on Engadget.