Many researchers realize that mice and rats are social and chatty. They spend all day talking to each other, but what are they really saying? Many rodent vocalizations are inaudible to humans, and the existing computer programs for detecting them are flawed: they pick up extraneous noise, analyze data slowly, and rely on inflexible, rules-based algorithms to detect calls.
Two young scientists at the University of Washington School of Medicine developed a software program called DeepSqueak, which lifts this technological barrier and promotes broad adoption of rodent vocalization research.
This program takes an audio signal and transforms it into an image, or sonogram. By reframing an audio problem as a visual one, the researchers could take advantage of state-of-the-art machine vision algorithms developed for self-driving cars. DeepSqueak represents the first use of deep artificial neural networks in squeak detection.
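The audio-to-image step can be illustrated with a short sketch. This is not the authors' code; it is a minimal Python example, with hypothetical parameter values, showing how a one-dimensional ultrasonic recording becomes a two-dimensional sonogram that a vision model can process.

```python
import numpy as np
from scipy import signal

# Hypothetical parameters for illustration; ultrasonic vocalization
# recordings use high sampling rates (here 250 kHz) to capture calls
# well above the ~20 kHz limit of human hearing.
fs = 250_000                      # sampling rate in Hz
t = np.arange(0, 0.05, 1 / fs)    # 50 ms of audio

# Synthesize a stand-in "squeak": a linear frequency sweep from
# 60 kHz to 80 kHz, roughly the range of some mouse calls.
audio = signal.chirp(t, f0=60_000, f1=80_000, t1=t[-1])

# Transform the 1-D signal into a 2-D sonogram (frequency x time),
# the image-like representation a machine vision network operates on.
freqs, times, sxx = signal.spectrogram(audio, fs=fs,
                                       nperseg=512, noverlap=256)

# Log-scaling compresses the dynamic range, a common step before
# feeding a spectrogram to an image model.
sonogram = 10 * np.log10(sxx + 1e-12)
print(sonogram.shape)  # rows = frequency bins, columns = time frames
```

Once the signal is an image, detecting a call becomes the same kind of problem as finding a pedestrian in a camera frame, which is what lets object-detection networks from the self-driving-car world apply here.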
The program is highlighted in a recent paper published in Neuropsychopharmacology and was presented at Neuroscience 2018.
“DeepSqueak uses biomimetic algorithms that learn to isolate vocalizations by being given labeled examples of vocalizations and noise,” said co-author Russell Marx. Marx is a technician in the Neumaier lab, which investigates complex behaviors relating to stress and addiction. He created the program with Kevin Coffey, whose specialty is studying the psychological aspects of drugs.
So what have the researchers found out so far?