In this blog post, we're going to take a look at vocal folds. There are a lot of good videos of vocal folds in action. However, they only show one part of the story. I want to take the opportunity to add an extra element of visualization to a video found on the internet: a spectrogram.
If you're only interested in the result, here is what we will end up with:
The first step is to find a good vocal folds video. After a bit of searching, one in particular caught my attention:
from IPython.display import YouTubeVideo
YouTubeVideo('vUF6kvh5GJQ')
We can use youtube-dl to fetch this video to disk:
[youtube] vUF6kvh5GJQ: Downloading webpage
[youtube] vUF6kvh5GJQ: Downloading video info webpage
[youtube] vUF6kvh5GJQ: Extracting video information
[youtube] vUF6kvh5GJQ: Downloading MPD manifest
WARNING: Requested formats are incompatible for merge and will be merged into mkv.
[download] Opera singer Vocal Folds-vUF6kvh5GJQ.mkv has already been downloaded and merged
Let's now extract some audio information from the video. We'll use the
moviepy library to do this.
import moviepy.editor as mpy
clip = mpy.VideoFileClip('Opera singer Vocal Folds-vUF6kvh5GJQ.mkv')
Once the clip is loaded, we can access its audio via its
clip.audio attribute, which is documented here: https://zulko.github.io/moviepy/ref/AudioClip.html. What we want is to extract the audio, plot it and superimpose it on the existing video. Through the audio clip object, we can get the sound data in array format.
sample_rate = clip.audio.fps
audio_data = clip.audio.to_soundarray()
The audio data is made of two columns, which makes sense since the audio is stereo. Let's see what the sampling rate is:
This is the classic 44100 Hz sampling rate.
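Since the spectrogram step further down averages the two stereo columns into a single mono channel, here is a minimal sketch of what that averaging does, using a made-up toy array:

```python
import numpy as np

# Toy stereo signal: 4 samples, 2 channels (left, right). Values are made up.
stereo = np.array([[0.1, 0.3],
                   [0.2, 0.4],
                   [-0.1, 0.1],
                   [0.0, 0.0]])

# Averaging across the channel axis collapses the signal to mono.
mono = stereo.mean(axis=1)
print(mono.shape)  # (4,)
print(mono[0])     # 0.2 -- the mean of the first left/right pair
```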
Let's see what the sound data looks like when it's plotted using matplotlib.
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
fig, ax = plt.subplots(figsize=(14, 4))
t = np.arange(audio_data.shape[0], dtype=float) / sample_rate
ax.plot(t, audio_data)
[<matplotlib.lines.Line2D at 0x117b75ac8>, <matplotlib.lines.Line2D at 0x10b48b4a8>]
It's not easy to make out what this actually means. Therefore, let's move on to a spectrogram visualization.
NFFT = sample_rate // 25  # integer number of samples per 40 ms window
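The choice of one 25th of the sampling rate gives an FFT window of 40 ms, which conveniently matches a 25 fps video frame rate. Note that in Python 3 a plain / returns a float, while specgram expects an integer NFFT, so integer division is safer. A quick sketch of the arithmetic:

```python
sample_rate = 44100          # the sampling rate reported above
NFFT = sample_rate // 25     # integer division: samples per window
window_seconds = NFFT / sample_rate
print(NFFT)            # 1764
print(window_seconds)  # 0.04, i.e. one 40 ms window per 25 fps video frame
```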
fig, ax = plt.subplots(figsize=(14, 4))
spectrum, freqs, time, im = ax.specgram(audio_data.mean(axis=1), NFFT=NFFT, pad_to=4096,
                                        Fs=sample_rate, noverlap=512,
                                        mode='magnitude', cmap='plasma')
fig.colorbar(im)
<matplotlib.colorbar.Colorbar at 0x117d7e588>
This is quite interesting. I guess that the upper frequencies show up as noise. This makes me wonder whether the original sound was recorded at a sample rate of 20 kHz rather than 44.1 kHz.
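A quick sanity check on that guess: by the Nyquist criterion, audio sampled at rate fs can only contain frequencies up to fs / 2, so real signal cutting off around 10 kHz would be consistent with an original ~20 kHz recording later resampled to 44.1 kHz. A sketch of the arithmetic:

```python
def nyquist(sample_rate):
    """Highest frequency representable at a given sampling rate."""
    return sample_rate / 2

print(nyquist(44100))  # 22050.0 -- what 44.1 kHz audio could contain
print(nyquist(20000))  # 10000.0 -- cutoff consistent with the spectrogram
```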
Can we use that data and render it on top of the original videoclip?
What we need to do is to lay out a matplotlib figure, extract it as an image and fuse it with the original video frames. Let's start with the standard moviepy matplotlib example and adjust it to our needs.
First, I define my dpi values (this depends on my screen, I used https://www.sven.de/dpi/ to determine this value).
my_dpi = 226.98
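matplotlib sizes figures in inches, so to get a figure whose rendered size matches the clip, the pixel dimensions are divided by the dpi. A sketch of the conversion, assuming a 320x240 clip (the frame shapes printed further down):

```python
my_dpi = 226.98
width, height = 320, 240  # assumed clip size in pixels

figsize = (width / my_dpi, height / my_dpi)  # figure size in inches
print(figsize)

# Rendered back at the same dpi, the figure comes out at the clip's pixel size:
print(round(figsize[0] * my_dpi), round(figsize[1] * my_dpi))  # 320 240
```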
Now, let's use the same pixel sizes as those of the original clip:
width, height = clip.size
width, height
We can now generate a short animation that we will try to superimpose on the existing movie:
from moviepy.editor import VideoClip
from moviepy.video.io.bindings import mplfig_to_npimage

x = np.linspace(-2, 2, 200)
duration = 2

fig, ax = plt.subplots(figsize=(width/my_dpi, height/my_dpi), dpi=my_dpi)

def make_frame(t):
    ax.clear()
    ax.plot(x, np.sinc(x**2) + np.sin(x + 2*np.pi/duration * t), lw=3)
    ax.set_ylim(-1.5, 2.5)
    return mplfig_to_npimage(fig)

animation = VideoClip(make_frame, duration=duration)
plt.close(fig)
animation.ipython_display(fps=20, loop=True, autoplay=True)
98%|█████████▊| 40/41 [00:02<00:00, 15.43it/s]
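The progress bar lines up with the requested settings: 2 seconds at 20 fps makes 40 frames (the bar shows one extra tick, presumably a final frame added by moviepy). A quick sketch of the arithmetic:

```python
duration = 2  # seconds, as set above
fps = 20      # frames per second passed to ipython_display

frames = duration * fps
print(frames)  # 40 -- matching the 40/41 progress bar
```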
Let's check the sizes of the generated images:
(240, 320, 3)
<matplotlib.image.AxesImage at 0x1182b6be0>
(240, 320, 3)
<matplotlib.image.AxesImage at 0x118a4ecc0>
Okay, everything matches, let's try to merge the two clips together.
mask_array = np.ones(clip.size[::-1]) * 0.6
mask = mpy.ImageClip(img=mask_array, ismask=True)
stack = mpy.CompositeVideoClip([clip.subclip(t_start=0, t_end=2), animation.set_mask(mask)], use_bgclip=True)
mpy.ipython_display(stack, loop=True, autoplay=True)
98%|█████████▊| 50/51 [00:02<00:00, 17.60it/s]
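Under the hood, a constant mask of 0.6 means every overlay pixel is blended with 60% opacity over the background frame. moviepy does this compositing internally; here is a per-pixel sketch of the same idea on toy grayscale "frames" (the values are made up):

```python
import numpy as np

alpha = 0.6                           # the constant mask value used above
background = np.full((2, 2), 100.0)   # toy background frame
overlay = np.full((2, 2), 200.0)      # toy overlay frame

# Constant-alpha compositing: overlay weighted by alpha, background by (1 - alpha).
blended = alpha * overlay + (1 - alpha) * background
print(blended[0, 0])  # 160.0
```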
Now that this works, let's do the final video:
fig, ax = plt.subplots(figsize=(width/my_dpi, height/my_dpi), dpi=my_dpi)

selection = np.abs(0 - time) < 1.
freq_selection = freqs < 10000
ax.pcolorfast(time[selection], freqs[freq_selection],
              20 * np.log10(spectrum[freq_selection, :][:, selection]))
ax.set_xticklabels([])
ax.set_yticklabels([])
plt.tight_layout(pad=0)

def make_specgram(t):
    ax.clear()
    selection = np.abs(t - time) < 1.
    ax.pcolorfast(time[selection], freqs[freq_selection],
                  20 * np.log10(spectrum[freq_selection, :][:, selection]))
    ax.axis('off')
    ax.set_frame_on(False)
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    return mplfig_to_npimage(fig)

specgram_animation = VideoClip(make_specgram).set_duration(clip.duration).set_audio(clip.audio)
plt.close(fig)

stack = mpy.CompositeVideoClip([clip, specgram_animation.set_mask(mask)], use_bgclip=True)
stack.subclip(t_start=30, t_end=90).write_videofile('output.mp4')
[MoviePy] >>>> Building video output.mp4
[MoviePy] Writing audio in outputTEMP_MPY_wvf_snd.mp3
100%|██████████| 1323/1323 [00:01<00:00, 995.20it/s]
[MoviePy] Done.
[MoviePy] Writing video output.mp4
100%|█████████▉| 1500/1501 [02:47<00:00, 9.31it/s]
[MoviePy] Done.
[MoviePy] >>>> Video ready: output.mp4
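One detail worth spelling out: the 20 * np.log10(spectrum) in the code above converts the magnitude spectrogram to decibels, compressing audio's huge dynamic range into something a colormap can display. As a quick check of the scale, doubling an amplitude adds about 6 dB:

```python
import numpy as np

def to_db(magnitude):
    """Convert an amplitude magnitude to decibels."""
    return 20 * np.log10(magnitude)

print(to_db(1.0))               # 0.0 dB reference
print(to_db(2.0) - to_db(1.0))  # ~6.02 dB per amplitude doubling
```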
I've uploaded the output to YouTube, which you can see below:
In this blog post, we managed to plot a spectrogram on top of an existing clip of vocal folds. This opens interesting perspectives for understanding this complex part of the singing process. To achieve it, we leveraged the power of the matplotlib and moviepy libraries in a relatively straightforward fashion and finally used YouTube to broadcast the result. It could be fun to program a YouTube bot that does this automatically and annotates existing vocal folds videos like this one.