ZeroSep is a training-free approach to audio source separation: users specify the sound source they want to isolate with a natural language prompt, and the method leverages pretrained audio diffusion models to perform the separation without specialized training data or fine-tuning.
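To make the idea concrete, the sketch below illustrates one way a pretrained text-conditioned audio latent diffusion model could be repurposed for prompt-driven separation: encode the mixture into the model's latent space, partially perturb it with noise, then denoise under guidance from the text prompt so that only the described source is reconstructed. Every name in the sketch (the loader, `encode`, `add_noise`, `denoise_with_prompt`, `decode`) and the noise level and step count are illustrative placeholders, not ZeroSep's actual code or settings.

```python
# Conceptual sketch only -- not the ZeroSep implementation. All helpers below
# (load_pretrained_audio_diffusion, encode, add_noise, denoise_with_prompt,
# decode) are hypothetical stand-ins for a text-conditioned audio latent
# diffusion backbone (e.g. an AudioLDM-style model).

def separate_with_prompt(mixture_waveform, prompt, noise_level=0.6, steps=50):
    model = load_pretrained_audio_diffusion()            # frozen, pretrained; no fine-tuning
    latents = model.encode(mixture_waveform)             # waveform -> latent (e.g. mel VAE)
    noisy = model.add_noise(latents, level=noise_level)  # perturb toward the diffusion prior
    guided = model.denoise_with_prompt(noisy, prompt, steps=steps)  # text-guided reverse diffusion
    return model.decode(guided)                          # latent -> waveform of the prompted source
```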
This demo video showcases our user-friendly interface for real-world audio source separation: users isolate any sound source through natural language prompts, and ZeroSep performs the separation zero-shot, with no task-specific training. All code and demo implementations are available in our GitHub repository; an illustrative usage sketch follows the demonstration below.
ZeroSep interface demonstration: separating audio sources using natural language prompts
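For orientation, usage of the demo might look like the following. The package name, class, methods, and file names are purely illustrative assumptions; the actual entry points are documented in the GitHub repository.

```python
# Hypothetical usage sketch; the real interface is defined in the GitHub repository.
from zerosep import ZeroSep  # illustrative import, not a guaranteed package name

model = ZeroSep()                                    # wraps a frozen, pretrained audio diffusion model
mixture, sr = model.load_audio("street_scene.wav")   # any real-world recording

# Describe the target source in natural language; no task-specific training is needed.
siren = model.separate(mixture, prompt="an ambulance siren", sample_rate=sr)
model.save_audio("siren_only.wav", siren, sr)
```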
Below are comparison results on the MUSIC and AVE datasets. We compare ZeroSep (ours, training-free) against training-based methods (LASS-Net, AudioSep, and FlowSep) and another training-free method (AudioEdit). The natural language prompt used to specify each target source is shown at the beginning of each row.