My Summer With the Blender Video Sequencer

2024/08/21

This summer I tripped and fell into Blender’s video sequencer codebase and ended up having to fix all the bugs to escape.

As part of Google’s Summer of Code program for 2024, I was graciously mentored by Aras Pranckevičius, who helped to coordinate my involvement with the sequencer module, manage pull requests and represent me at meetings (since they were held far too early for me to wake up in time to attend from Minneapolis, UTC-5:00).

Why Video Sequencer?

I’ve used Blender on and off for nearly 7 years now and had always wanted to contribute code to the project. After seeing that Blender had been accepted for Google Summer of Code 2024, I made a leap for it.

The Blender admins had listed potential improvements on a project ideas page, which included some proposals for the video sequencer. This caught my eye, since I’ve edited quite a bit in the past, using mostly Adobe software. *screams, jeers, shrieks*

My experience with Premiere Pro has been quite unpleasant… there are bugs I run into regularly that just haven’t been fixed for years. I keep updating and nothing changes. What kind of utopia would I have to live in to be able to just go in and fix the damn problem myself? Why not this timeline?

I know for certain I’m not alone in my frustrations. When drama ensued earlier this year over changes to the Adobe Terms of Use that suggested users’ content would be forcibly extracted for use in training generative AI, even Adobe employees were unhappy with the response.

But what about other open source alternatives, like OpenShot, Shotcut, or Kdenlive? To that I’d respond that Blender stands alone with the potential to integrate an entire 3D suite into its video editor. I strongly believe in Blender’s mission and approach to designing human interfaces, so I think there’s a lot of latent power here.

In my initial proposal I set out to tackle the following three issues:

  1. The preview region should have snapping just like the timeline.
  2. Newly imported movies should have audio and video linked together, so that by default, moving one moves the other.
  3. You should be able to specify which channels you’d like to drop content into, so that the location is not as haphazard.

Although I only ended up fully delivering on points 1 and 2, and thus didn’t quite accomplish everything, in some respects I did more than I anticipated (I fixed a looot of bugs, and I’m not done yet!).

Preview Snaps!

After meeting with Aras at the beginning of the summer, we decided that my first order of business should be to get snapping working in the VSE Preview – in theory it would be simple, and wouldn’t require as much back-and-forth to determine what the UI should look like or how it should work.

I immediately began to devour the Blender snap system. In doing so, I found quite a few peculiarities and idiosyncrasies…

My first implementation for snapping in the VSE Preview mimicked the overall structure of snapping in the timeline. Roughly, the code was structured as follows:

  1. The user initiates strip movement (internally called a “sequence slide”).
  2. Information about the location of the strip is stored, including data for snap source and target points (which places to snap from, and which places to snap to).
  3. The user moves the strip around, updating the current values parameter, which corresponds to the strip’s “delta” away from its starting position.
  4. If a snap source point gets close enough to a target point, the distance between the two is added to the values parameter, updating the location of the strip (sketched below).
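
To make that concrete, here is roughly what steps 2 and 4 look like as code. This is a simplified sketch with made-up names (SnapData, closest_snap_offset, the threshold), not Blender’s actual transform code:

    #include <cmath>
    #include <optional>
    #include <vector>

    /* Hypothetical stand-in for the data gathered in step 2; not
     * Blender's actual types. */
    struct SnapData {
      std::vector<float> source_points; /* Places to snap from (edges of the moving strip). */
      std::vector<float> target_points; /* Places to snap to (other strips, preview borders). */
    };

    /* Step 4: find the smallest source-to-target distance within the
     * snapping threshold, if any source point is close enough. */
    static std::optional<float> closest_snap_offset(const SnapData &snap,
                                                    const float delta,
                                                    const float threshold)
    {
      std::optional<float> best;
      for (const float source : snap.source_points) {
        const float moved = source + delta; /* Where this point sits mid-drag. */
        for (const float target : snap.target_points) {
          const float dist = target - moved;
          if (std::abs(dist) <= threshold &&
              (!best || std::abs(dist) < std::abs(*best))) {
            best = dist;
          }
        }
      }
      return best;
    }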

Sounds fine… but when I tried using this algorithm with a first draft of preview snapping, I got really glitchy results. Strips would successfully find valid snaps, but would “bounce around” borders and never actually “snap” cleanly.

I then realized that this caused quirky behavior in the timeline, too… it turns out that the solution was to modify step 4 slightly – instead of adding the snap distance to the values delta, we needed to set it directly.
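
In terms of the sketch above, the fix is a single line. Accumulating the offset means the next update starts from an already-snapped position and applies another offset on top of it; setting the value directly makes repeated updates land in the same place. Again, these are made-up names, with values and values_final standing in for the transform system’s delta:

    const std::optional<float> offset = closest_snap_offset(snap, values[0], threshold);
    if (offset) {
      /* Before: accumulate into the persistent delta. The next update
       * then searches for snaps from an already-offset position and
       * nudges the strip again, so it bounces around the border. */
      // values[0] += *offset;

      /* After: leave the mouse-derived delta untouched and set the final
       * value directly, so repeated updates converge instead of
       * oscillating. */
      values_final[0] = values[0] + *offset;
    }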

[Before/after GIFs: strips bouncing around snap borders, then snapping cleanly after the fix]

With this out of the way, I could now finish up snapping for the VSE preview:

[GIFs: snapping in the VSE preview, with static and moving strips]

~~Linked~~ Connected Strips

Next up, I started work on a system for linking disparate strips in the sequencer. This involved prodigious amounts of study, since I wanted to make sure I understood the selection code inside and out.

But there was one problem – while “linked” is commonly used in other NLEs (non-linear editors) as the term to describe two clips that share selection status and location, this term was already used throughout Blender in different contexts.

Before any names could be put forward, I asked the Blender admins and the community at my dedicated feedback thread for thoughts on how we should name this new functionality. Gathering feedback was really valuable, and eventually we decided on naming them “connected strips.”

The pull request for connected strips gives more information about how they work, but in a nutshell, they let you edit multiple strips simultaneously with much greater ease – these could be audio and video channels from the same movie file, or multiple images/video grouped together.
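
The core mechanic is simple enough to sketch. This is a minimal illustration with hypothetical names, not the sequencer’s real types:

    #include <vector>

    /* Hypothetical types; the real implementation lives in the
     * sequencer's selection code. */
    struct Strip {
      bool selected = false;
      std::vector<Strip *> connections; /* Strips connected to this one. */
    };

    /* Selecting a strip mirrors the selection state onto everything it
     * is connected to, so connected strips are grabbed and edited
     * together. */
    static void strip_select_connected(Strip &strip, const bool select)
    {
      strip.selected = select;
      for (Strip *other : strip.connections) {
        other->selected = select;
      }
    }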

[GIFs: connected strips being selected and edited together]

This PR went through many iterations. At first, the implementation would only allow you to select individual strips using the “toggle” select option (which is exposed as Shift+Left Click by default). This proved to be quite confusing to work with if you already had strips selected, since it would deselect selected strips instead of individually selecting and focusing them.

One suggestion was to add a “cycle click” for selecting individual strips, so that if you selected a strip that was already selected, it would individually select that strip. I quickly added the functionality… only to get further feedback that argued against it.

Although the drawn-out back-and-forth was tiring at times, on the whole I found it to be really enjoyable and valuable. Getting quick prototypes out for testing proved to be really efficient and let the community quickly come to an agreement on the proposed features.

Bugs Stomped, Small Features

All things considered, I got quite a few bugs fixed. I’m very satisfied with the work I managed to accomplish right from the get-go.

It really surprised me just how easy it was to submit code and have it built into Blender. As long as I’d propose something sensible and incorporate feedback from others, the ideas would materialize like magic. There had always been a moat with lava and skeevy fire creatures separating me from “the developers”, but I had never noticed that it was scribbled into the road with chalk.

On one occasion my code didn’t manage to make it through: an attempt to fix retiming menu polling didn’t turn out as expected. It turns out that iterating over every strip in the timeline is too costly for a poll callback, which runs regularly.
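
Sketched with made-up names, the rejected version looked something like this (a poll callback runs every time the interface refreshes, so a linear walk over all strips is paid constantly):

    #include <vector>

    /* Hypothetical stand-ins, not Blender's real data structures. */
    struct Strip {
      bool retimeable = false;
    };
    struct Scene {
      std::vector<Strip> strips;
    };

    /* The rejected approach, roughly: poll callbacks run on every
     * interface redraw, so this O(number of strips) walk gets paid far
     * too often. */
    static bool retime_menu_poll(const Scene &scene)
    {
      for (const Strip &strip : scene.strips) {
        if (strip.retimeable) {
          return true;
        }
      }
      return false;
    }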

What’s Left?

Even as the summer comes to a close, my mission isn’t over. I have yet to implement the “active channels” functionality, which I intend to complete before Blender 4.3 is finalized. Beyond my deliverables, I can foresee myself continuing to contribute to the project until it’s in a powerful state. I’d like to see a future where the Blender VSE has its “2.8 moment”, experiencing mass adoption by users who are discontented with the other options.

Takeaways