Learning to Highlight Audio by Watching Movies

1University of Rochester   2The University of Maryland College Park   3Meta Reality Labs Research

CVPR 2025

tl;dr: We introduce a novel task: visually-guided acoustic highlighting, which aims to transform audio to deliver appropriate highlighting effects guided by the accompanying video.

📢 News and Resources
  • 🎉
    Feb 2025: Our paper is accepted to CVPR 2025!

Abstract

Recent years have seen a significant increase in video content creation and consumption. Crafting engaging content requires the careful curation of both visual and audio elements. While visual cue curation, through techniques like optimal viewpoint selection or post-editing, has been central to media production, its natural counterpart, audio, has not undergone equivalent advancements. This often results in a disconnect between visual and acoustic saliency. To bridge this gap, we introduce a novel task: visually-guided acoustic highlighting, which aims to transform audio to deliver appropriate highlighting effects guided by the accompanying video, ultimately creating a more harmonious audio-visual experience. We propose a flexible, transformer-based multimodal framework to solve this task. To train our model, we also introduce a new dataset---the muddy mix dataset, leveraging the meticulous audio and video crafting found in movies, which provides a form of free supervision. We develop a pseudo-data generation process to simulate poorly mixed audio, mimicking real-world scenarios through a three-step process---separation, adjustment, and remixing. Our approach consistently outperforms several baselines in both quantitative and subjective evaluation. We also systematically study the impact of different types of contextual guidance and difficulty levels of the dataset.
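To make the pseudo-data generation process concrete, below is a minimal sketch of the separation, adjustment, and remixing steps. The stem categories, random gain range, and peak normalization are illustrative assumptions for this sketch, not the paper's exact recipe, and separation is assumed to be done beforehand by an off-the-shelf source separator.

```python
import numpy as np

def make_muddy_mix(stems, rng=None):
    """Sketch of the three-step pseudo-data generation.

    `stems` is a dict of already separated waveforms, e.g.
    {"speech": ..., "music": ..., "sfx": ...} (step 1, separation, is assumed
    to be done by an off-the-shelf separator). The gain range and peak
    normalization below are illustrative choices, not the paper's settings.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Step 2: adjustment -- randomly perturb each stem's loudness so the
    # mix no longer matches the visually salient source.
    adjusted = {name: rng.uniform(0.1, 1.5) * wav for name, wav in stems.items()}

    # Step 3: remixing -- sum the perturbed stems into a poorly mixed track.
    muddy = sum(adjusted.values())

    # Keep the waveform in range; the original movie mix serves as the target.
    return muddy / (np.max(np.abs(muddy)) + 1e-8)
```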

🎥 Comparison to Other Methods

We present examples from our Muddy Mix dataset, showing the following: the poorly mixed input video, the highlighting results produced by LCE, the outputs of our VisAH model, and the original movie clips for comparison.


Example 1: Movie "No Way Out"

In the input audio, the speech is not highlighted properly; our model resolves this.
Input LCE Ours Original Movie

Example 2: Movie "Shooter"

In this audio, our model highlights the sound effect properly.
Input LCE Ours Original Movie

Example 3: Movie "The Amazing Spider-Man"

In this audio, our model highlights the speech properly.
Input LCE Ours Original Movie

Example 4: Movie "Superman III"

In this audio, our model highlights the music properly.
Input LCE Ours Original Movie

Example 5: Movie "Jurassic Park III"

In this audio, our model highlights the sound effect properly.
Input LCE Ours Original Movie


☞ Application: V2A Refinement

Our VisAH model has several potential downstream applications, one of which is refining video-to-audio (V2A) generation. We demonstrate this by feeding the audio generated by a video-to-audio model into VisAH, with the corresponding video as guidance. VisAH rebalances the audio sources to align with the video, improving audio-visual coherence. Human evaluations confirm these improvements, with listeners preferring the outputs of our model.
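A minimal sketch of this refinement loop is shown below. `v2a_model` and `visah` are placeholder objects standing in for any video-to-audio generator and our highlighting model; their `generate` and `highlight` methods are assumed interfaces used only to illustrate the data flow, not released APIs.

```python
import torch

def refine_generated_audio(video_frames: torch.Tensor, v2a_model, visah) -> torch.Tensor:
    """Use a V2A model's output as the input mix, with the video as guidance.

    `v2a_model.generate` and `visah.highlight` are hypothetical interfaces
    shown for illustration; they are not part of any released API.
    """
    with torch.no_grad():
        raw_audio = v2a_model.generate(video_frames)        # coarse generated soundtrack
        refined = visah.highlight(raw_audio, video_frames)  # rebalance sources to match the visuals
    return refined
```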


Note that the generated videos used in our experiments are sourced from the MovieGen website, which already provides high-quality content. All videos are adjusted to the same loudness level.
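For reference, loudness matching of this kind can be done with an ITU-R BS.1770 meter. The snippet below uses the pyloudnorm package and a -23 LUFS target as an example setup; the specific tool and target value are assumptions, not necessarily what was used here.

```python
import soundfile as sf
import pyloudnorm as pyln

def normalize_to_target(path_in, path_out, target_lufs=-23.0):
    """Measure integrated loudness (BS.1770) and rescale to a common target."""
    audio, rate = sf.read(path_in)
    meter = pyln.Meter(rate)                     # BS.1770 loudness meter
    loudness = meter.integrated_loudness(audio)  # current loudness in LUFS
    matched = pyln.normalize.loudness(audio, loudness, target_lufs)
    sf.write(path_out, matched, rate)
```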

MovieGen Video MovieGen Video + Our Model

(2–4 s) Our method highlights the sound of the skateboard scraping against the slabs.

Our method highlights the sound effects.

In the following, the audio is generated by Seeing-and-Hearing.

Seeing-and-Hearing Seeing-and-Hearing + Our Model

Our method highlights the sound effects.