An anonymous reader quotes a report from Ars Technica: Google researchers have developed a deep-learning system designed to help computers better identify and isolate individual voices within a noisy environment. As noted in a post on the company’s Google Research Blog this week, a team within the tech giant attempted to replicate the cocktail party effect, or the human brain’s ability to focus on one source of audio while filtering out others — just as you would while talking to a friend at a party. Google’s method uses an audio-visual model, so it is primarily focused on isolating voices in videos. The company posted a number of YouTube videos showing the tech in action.
The company says this tech works on videos with a single audio track and can isolate voices in a video algorithmically, based on who’s talking, or by having a user manually select the face of the person whose voice they want to hear. Google says the visual component here is key, as the tech watches for when a person’s mouth is moving to better identify which voices to focus on at a given point and to create more accurate individual speech tracks for the length of a video.

According to the blog post, the researchers developed this model by gathering 100,000 videos of “lectures and talks” on YouTube, extracting nearly 2,000 hours’ worth of segments from those videos featuring unobstructed speech, then mixing that audio to create a “synthetic cocktail party” with artificial background noise added. Google then trained the tech to split that mixed audio by reading the “face thumbnails” of people speaking in each video frame and a spectrogram of that video’s soundtrack. From those two inputs, the system works out which audio source belongs to which face at a given time and creates a separate speech track for each speaker. Whew.
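For readers curious about the mechanics, here is a minimal, runnable sketch of the two ideas the post describes: building a “synthetic cocktail party” by mixing clean speech with background noise, and splitting the mixture’s spectrogram into per-speaker tracks with time-frequency masks. Everything here is an illustrative assumption, not Google’s pipeline: the sample rate, the sine-wave stand-ins for speech, and the “oracle” mask computed from the clean sources are all placeholders. In the actual system, a neural network that also consumes the face thumbnails predicts the masks, which is what lets it tie each output track to a face.

```python
import numpy as np
from scipy.signal import stft, istft

SR = 16_000  # assumed sample rate (Hz)

# Stand-ins for two clean "unobstructed speech" segments plus noise
# (real training data would be waveforms extracted from YouTube videos).
t = np.linspace(0, 3.0, 3 * SR, endpoint=False)
speaker_a = 0.5 * np.sin(2 * np.pi * 220 * t)  # placeholder for speaker A
speaker_b = 0.5 * np.sin(2 * np.pi * 330 * t)  # placeholder for speaker B
noise = 0.05 * np.random.default_rng(0).standard_normal(t.shape)

# (1) Synthetic cocktail party: sum the clean sources with background noise.
# The training pair is (mixture, clean sources): the model sees the mixture
# (plus face thumbnails) and learns to recover each source.
mixture = speaker_a + speaker_b + noise

# (2) Spectrogram masking: compute spectrograms of the mixture and sources.
_, _, Z_mix = stft(mixture, fs=SR, nperseg=512)
_, _, Z_a = stft(speaker_a, fs=SR, nperseg=512)
_, _, Z_b = stft(speaker_b, fs=SR, nperseg=512)

# Oracle binary mask: assign each time-frequency cell to whichever speaker
# is louder there. A trained model would *predict* such masks from the
# mixture spectrogram and the per-frame face thumbnails instead.
mask_a = (np.abs(Z_a) > np.abs(Z_b)).astype(float)
mask_b = 1.0 - mask_a

# Apply each mask to the mixture spectrogram and invert back to waveforms:
# these are the separate per-speaker speech tracks.
_, track_a = istft(Z_mix * mask_a, fs=SR, nperseg=512)
_, track_b = istft(Z_mix * mask_b, fs=SR, nperseg=512)

print(track_a.shape, track_b.shape)
```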