Building a Linux video production workflow for the SRA22 Social Responsibility of Algorithms conference

As you might know, most of my toolchain is Linux- and open-source-based. So when it came time to produce a video with my doctoral supervisor, Dr. Liz T. Williams, for the upcoming Social Responsibility of Algorithms conference, Linux-based options were my first choice. However, the Linux video production workflow is not as smooth as it could be – and this blog post documents my trials and tribulations.

Web camera software

While web cameras on Linux these days tend to “just work”, there are no Linux counterparts to the Logitech or other vendor-specific software that the hardware usually ships with. Way back in January, when I presented at linux.conf.au, part of my tech check (big thanks, Chris and jwoithe!) involved using the qv4l2 utility to provide finer-grained control over settings such as white balance, contrast, and brightness. This handy utility also controls auto and manual zoom on cameras, and I adjusted the zoom to better fill the screen.

From memory, I didn’t have to add an apt repository for this – it’s in the Ubuntu repos by default – and can be installed with:

sudo apt install qv4l2
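As an aside, if you prefer the command line, the same controls can be inspected and set with v4l2-ctl from the v4l-utils package. This is only a sketch – the device path and the control names (and their valid ranges) vary from camera to camera, so list them first:

sudo apt install v4l-utils
# list the controls your camera exposes, with current values and ranges
v4l2-ctl -d /dev/video0 --list-ctrls
# then set the ones you care about; these control names are examples and differ between cameras
v4l2-ctl -d /dev/video0 --set-ctrl=brightness=140
v4l2-ctl -d /dev/video0 --set-ctrl=white_balance_temperature_auto=0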

Recording video and setting up a virtual background

Open Broadcaster Software

The next challenge was to record the video, using a virtual background. The problem of recording video on Linux is mostly solved these days by Open Broadcaster Software (OBS).

Background removal plugin

While this software comes with a “chroma key” plugin (for green screens), it doesn’t come with an automatic background removal plugin. I read up on what plugins were available and ended up choosing this one from Roy Shilkrot (big thank you, Roy!) –

https://github.com/royshil/obs-backgroundremoval

It was well documented, and very clear about the installation steps. It did require compiling from source, but the instructions were detailed. Because I already had CUDA installed on my machine (I have an NVIDIA RTX 2060 GPU – don’t get me started on NVIDIA and Linux drivers and how hard they are to get working, although there are signs of hope, with NVIDIA recently open sourcing their Linux drivers, albeit by moving most of the functionality into firmware), I opted to compile with GPU support. Compilation was very straightforward; however, the instructions provided for sym-linking the compiled binaries were out of date. Luckily, an Issue had already been raised for this, and I used the information in it to fix the sym-linking. This exercise gave me a deep appreciation for the effort that goes into packaging, and the complexity that it abstracts away for the end user.
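For the curious, the build followed the usual cmake pattern. The snippet below is only illustrative – the repository’s README is the source of truth, the GPU-related cmake options have changed between releases (so I’ve left them out), and the plugin directory shown is the typical user-level OBS layout rather than necessarily where your OBS looks:

# clone and build; check the project README for the current options (including GPU support)
git clone https://github.com/royshil/obs-backgroundremoval.git
cd obs-backgroundremoval
mkdir build && cd build
cmake ..
make -j"$(nproc)"
# link the built plugin into a directory OBS scans for plugins;
# this user-level path is an assumption and varies with how OBS was installed
mkdir -p ~/.config/obs-studio/plugins/obs-backgroundremoval/bin/64bit
ln -s "$(pwd)/obs-backgroundremoval.so" ~/.config/obs-studio/plugins/obs-backgroundremoval/bin/64bit/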

Machine vision algorithms

This plugin uses machine vision (which is why OpenCV is one of the package’s dependencies) to separate the outline of the subject from the background, and then uses the chroma key to fill the non-subject space with a single colour. It allows you to choose from a range of algorithms to do the subject outline, such as SINet, MODNet, mediapipe and a model tuned for selfies. I found I got the best results using mediapipe. The plugin also allows you to configure other inference settings, such as the frame interval at which inference samples are taken. Again, there is a trade-off here: if you resample the image frequently, it takes more computational power, but if you sample less frequently, the outline may not “keep up” with the movement of the subject. I chose to run inference on every frame – primarily because I had the compute power to do so 😈.

All of these computer vision (CV) algorithms are very recent – for example, SINet [0] was developed just over a year ago, and MODNet [1] is less than 6 months old; I continue to be surprised at how fast the ML space is moving, and how commoditised it has become. If you have ever used the “virtual background” feature in a videoconferencing application like Zoom or Teams, it will be using a similar algorithm. However, a lot of the implementation decisions – like how often to do inference, which algorithm to use, or the characteristics of the algorithm such as smoothing or contrast – are abstracted away from the end user in these applications. What we lose in control, we gain in ease of use.

Running a noise removal filter on the video using ffmpeg

Although I had managed to record the video, sans background, when I played it back the audio had a lot of background noise. This came from the sound of the fans on my laptop; when the GPU is under load, the fans increase their speed to dissipate the heat that is generated. The inference of the background removal algorithm was causing the fans to spin up – and to generate background noise that was captured by the microphone. Tradeoffs, tradeoffs and more tradeoffs!

Rather than re-record the video, or reduce the inference frequency of the background removal plugin, I decided to see if I could improve the audio quality by running the video through a noise reduction filter. Signal processing, and acoustic signal processing in particular, is definitely not an area I feel comfortable with, but I had a general idea of how noise reduction filters worked – by gating (cutting off) parts of the signal that fall outside set thresholds (a gating floor or gating ceiling). When I worked in videoconferencing, I’d seen a similar approach used with special digital signal processing equipment in videoconferencing-equipped spaces. Again, there’s a tradeoff – if you have a quietly spoken person and a loud, deep speaker, but use the same gating settings, you’re going to get unexpected results.

Although I had Audacity installed, I reached for the command-line tool ffmpeg. The newer versions of ffmpeg have noise reduction filters built in, and the documentation includes several command line examples. Running the video through the noise reduction filter improved the quality markedly.
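For reference, the sort of invocation I mean looks roughly like this – the filter choice and parameter values are illustrative rather than the exact settings I ended up with; afftdn’s nr (noise reduction in dB) and nf (noise floor in dB) really need tuning by ear against your own recording:

# copy the video stream untouched, and run the audio through ffmpeg's afftdn denoiser
# (parameter values are examples only and need tuning for the specific recording)
ffmpeg -i talk-raw.mp4 -c:v copy -af "afftdn=nr=12:nf=-40" talk-denoised.mp4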

Video editing, compositing and rendering in OpenShot

Next, I used OpenShot for video editing, adding the slides, and “topping and tailing” the video with the conference branding. OpenShot is a breeze to use, and although it doesn’t have a lot of fancy transitions, it’s an excellent choice for this step of the workflow. I had previously converted the PowerPoint slide deck into a PDF, then used GIMP to create a 1920 x 1080 px PNG image of each PDF page. The PNG images were then imported into OpenShot and overlaid over the video tracks.
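If you wanted to script that slide conversion step rather than doing it by hand in GIMP, pdftoppm from poppler-utils can do the same job – a minimal sketch, assuming the slide deck is already 16:9 so the pages scale cleanly to 1920 x 1080:

# render each page of the PDF as a PNG scaled to 1920x1080 (requires poppler-utils)
pdftoppm -png -scale-to-x 1920 -scale-to-y 1080 slides.pdf slide
# produces numbered PNGs (slide-1.png, slide-2.png, …) ready to import into OpenShot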

One of the branding artefacts was in a format (QuickTime MOV) that OpenShot didn’t work particularly well with, so I converted it using HandBrake.
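HandBrake did the job through its GUI; much the same conversion can be done with ffmpeg if you prefer the command line – the codec and quality choices below are assumptions, not what HandBrake used:

# re-encode a QuickTime MOV into an H.264/AAC MP4, which OpenShot is generally happier with
ffmpeg -i branding.mov -c:v libx264 -crf 18 -c:a aac branding.mp4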

Again, having a reasonable GPU helped with the rendering process; each conference video is around 15 minutes long, and OpenShot rendered them at 1080p and 30 fps in around 5 minutes 😈.

Subtitles

Subtitles are helpful for many people, and in particular they increase the accessibility of video content for folx who are hard of hearing. Several video creation tools can produce automatic subtitles. However, these are usually inaccurate at best, and at worst can semantically change the meaning of the video content. Moreover, I knew that in this video we would be using some domain-specific vocabulary (the video is all about accent bias in language technologies like speech recognition) – and that the automatic subtitles would be even less accurate.

For this task, I tried out the application Subtitle Editor, but found the interface difficult to use. I’d heard good things about Aegisub too, but in the end I just created the .srt file by hand-editing it in Atom. The .srt file format is surprisingly basic – and hand-editing was made a lot easier because we had created a script for the video beforehand. It would not work as well if you were transcribing the subtitles from the spoken audio.

Edit: I eventually figured out Subtitle Editor’s interface when I needed to move all the subtitles back by 7 seconds after topping and tailing the video; that feature alone makes the application worth installing.

The one thing that tripped me up in hand-editing subtitles was forgetting that the time format is HH:MM:SS,ms. My video was only 16 minutes long, and I erroneously placed the seconds in the minutes column. For example, I wrote 00:05:00,400 to mean 5 seconds into the video, but this is actually 5 minutes into the video. The millisecond separator – the comma – is unusual in English, but the SRT format was developed in France, which uses the comma as a decimal separator. This reminded me of Lawrence Busch’s excellent book Standards: Recipes for Reality, which articulates the path dependencies that standards create.

1
00:00:05,400 --> 00:00:12,300
[Kathy] Hi everyone, I’m Kathy Reid, a PhD candidate
at the School of Cybernetics
at Australian National University,
2
00:00:12,400 --> 00:00:22,300
and I’m recording today from the unceded lands
of the Wadawarrung people here in Geelong,
on the south-west coast of Victoria.

Conclusion

Video production for conferences is becoming more prevalent, particularly as the COVID-19 pandemic has taken many conferences online. Tools like Zoom and Camtasia, while mature, abstract away many of the underlying algorithms, and simplify their interfaces for usability by removing fine-grained controls from the user. By exploring open source tools for video production workflows, we obtain a deeper understanding of “what’s under the hood”.

To extend this work in the future, one of the capabilities I would like to explore is automation of the workflow pipeline. This may be possible for tasks such as topping and tailing videos with conference branding, but the actual composition of video and slides is still going to require human attention.

Footnotes

[0] Fan, D. P., Ji, G. P., Sun, G., Cheng, M. M., Shen, J., & Shao, L. (2020). Camouflaged object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 2777–2787).
https://github.com/DengPingFan/SINet

[1] Ke, Z., Sun, J., Li, K., Yan, Q., & Lau, R. W. (2020). MODNet: real-time trimap-free portrait matting via objective decomposition.
https://github.com/ZHKKKe/MODNet

State of my toolchain 2021

I’ve been doing a summary of the state of my toolchain now for around five years (2019, 2018, 2016). Tools, platforms and techniques evolve over time; the type of work that I do has shifted; and the environment in which that work is done has changed due to the global pandemic. Documenting my toolchain has been a useful exercise on a number of fronts; it’s made explicit what I actually use day-to-day, and, equally – what I don’t. In an era of subscription-based software, this has allowed me to make informed decisions about what to drop – such as Pomodone. It’s also helped me to identify niggles or gaps with my existing toolchain, and to deliberately search for better alternatives.

At a glance

Hardware, wearables and accessories

Software

Techniques

  • Pomodoro (no change since last report)
  • Passion Planner for planning (no change since last report)

What’s changed since the last report?

Writing workflow

Since the last report in 2019, I’ve graduated from the Masters in Applied Cybernetics at the School of Cybernetics at Australian National University, and have been accepted into the first cohort of their PhD program. This shift has meant an increased focus on in-depth, academic-style writing. To help with this, I’ve moved to a Pandoc, Atom, Zotero and LaTeX-based workflow, which I have documented separately. This workflow has been working solidly for me for about a year. Although it took about a weekend’s worth of setup time, it’s definitely saving me a lot of time.

Atom in particular is my predominant IDE, and also my key writing tool. I use it with a swathe of plugins for LaTeX, document structure, and Zotero-based academic citations. It took me a while to settle on a UI and syntax theme for Atom, but in the end I went with Atom Solarized. My strong preference is to write in Markdown, and then export to a target format such as PDF or LaTeX. Pandoc handles this beautifully, but I do have to keep a file of command line snippets handy for advanced functionality.
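As an example of what lives in that snippets file – this is roughly the shape of a Markdown-to-PDF build that resolves citations against a Zotero-exported BibTeX file. The filenames and the choice of xelatex are placeholders for whatever your setup uses, and --citeproc assumes a reasonably recent pandoc (older versions used the separate pandoc-citeproc filter):

pandoc chapter.md --citeproc --bibliography=references.bib --pdf-engine=xelatex -o chapter.pdf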

Primary machine

I had an ASUS Zenbook UX533FD – small, portable and great battery life, even with an MX150 GPU running. Unfortunately, the keyboard started to malfunction just over a year after purchase (I know, right). I gave up trying to get it repaired because I had to keep chasing my local repair shop for updates on getting a replacement. I lodged a repair request in October, and it’s now May, so I’m not holding out hope… That necessitated getting a new machine – and it was a case of getting whatever was available, given the Coronavirus pandemic.

I settled on an ASUS ROG Zephyrus G15 GA502IV. I was a little cautious, having never had an AMD Ryzen-based machine before, but I haven’t looked back. It has a Ryzen 4900-series CPU (16 hardware threads) and an NVIDIA GeForce RTX 2060 with 6GB of VRAM. It’s a powerful workhorse and is reasonably portable, if a little noisy. It gets about 3 hours’ battery life in class. Getting NVIDIA dependencies installed under Ubuntu 20.04 LTS was a little tricky – especially cuDNN – but that seems to be normal for anything NVIDIA under Linux. Because the hardware was so new, it lacked support in the 20.04 kernel, so I had to pull in experimental Wi-Fi drivers (it uses a Realtek chipset).

To be honest I was somewhat smug that my hardware was ahead of the kernel. One little niggle I still have is that the machine occasionally green screens. This has been reported with other ROG models and I suspect it’s an HDMI-under-Linux driver issue, but haven’t gone digging too far into driver diagnostics. Yet.

One idiosyncrasy of the Zephyrus G15 is that it doesn’t have a built-in web camera; for me, that was a feature. I get to choose when I do and don’t connect the web camera. And yes – I’m firmly in the web-cameras-shouldn’t-be-on-by-default camp.

Machine learning work, NVIDIA dependencies and utilities

Over the past 18 months, I’ve been doing a lot more work with machine learning, specifically in building the DeepSpeech PlayBook. Creating the PlayBook has meant training a lot of speech recognition models in order to document hyperparameters and tacit knowledge around DeepSpeech.

In particular, the DeepSpeech PlayBook uses a Docker image to abstract away Python, TensorFlow and other dependencies. However, this still requires all the NVIDIA dependencies, such as drivers and cuDNN, to be installed beforehand. NVIDIA has made this somewhat easier with the Linux CUDA installation guide, which advises on which versions to install alongside other dependencies, but it’s still tough to get all the dependencies installed correctly. In particular, the nvtop utility, which is super handy for monitoring GPU operations (such as identifying blocking I/O or other bottlenecks), had to be compiled from source. As an aside, the developer experience of getting NVIDIA dependencies installed under Linux is a major hurdle for developers; it’s something I want NVIDIA to put some effort into going forward.
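The nvtop build itself is a small cmake project; the steps below are roughly what was involved, though the dependency package names are indicative, and newer Ubuntu releases ship nvtop in the repos, which makes this unnecessary:

# dependencies and steps are indicative; check the nvtop README for your release
sudo apt install cmake libncurses5-dev libncursesw5-dev git
git clone https://github.com/Syllo/nvtop.git
mkdir -p nvtop/build && cd nvtop/build
cmake ..
make
sudo make install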

Colour customisation of the terminal with Gogh

I use Ubuntu Linux for 99% of my work now – and rarely boot into Windows. A lot of that work is based in the Linux terminal: from spinning up Docker containers for machine learning training to running Python scripts or pandoc builds. At any given time I might have 5–6 open terminals, so I needed a way to easily distinguish between them. Enter Gogh – an easy-to-install set of terminal colour profiles.

One bugbear that I still have with the Ubuntu 20.04 terminal is that the fonts that can be used with terminal profiles are restricted to only mono-spaced fonts. I haven’t been able to find where to alter this setting – or how the terminal is identifying which fonts are mono-spaced for inclusion. If you know how to alter this, let me know!

Linux variants of Microsoft software intended for Windows

ANU has adopted Microsoft primarily for communications. This means not only Outlook for mail – for which there is no good Linux alternative, so I use the web version – but also Teams and OneNote. I managed to find an excellent alternative in OneNote for Linux by @patrikx3, which is much more usable than the web version of OneNote. Teams on Linux is usable for messaging, but for videoconferencing I’ve found that I can’t use USB or Bluetooth headphones or microphones – which essentially renders it useless. Zoom is much better on Linux.

Better microphone for videoconferencing and conference presentations

As we’ve travelled through the pandemic, we’re all doing a lot more videoconferencing instead of face-to-face meetings, and the majority of conferences have gone online. I’ve recently presented at both PyCon AU 2020 and linux.conf.au 2021 on voice and speech recognition; both conferences used the VenueLess platform. I decided to upgrade my microphone for better audio quality – after all, research has shown that speakers with better audio are perceived as more trustworthy. I’ve been very happy with the Stadium USB microphone.

Taskwarrior over Pomodone for tasks

I tried Pomodone for about 6 months – and it was great for integrating tasks from multiple sources such as Trello, GitHub and GitLab. However, I found it very expensive (around $AUD 80 per year), and the Linux version suddenly stopped working. The scripting options also only support Windows and macOS, not Linux. So I didn’t renew my subscription.

Instead, I’ve moved to Taskwarrior on Paul Fenwick’s recommendation. This has some downsides – it’s a command-line utility rather than a graphical interface, and it only works on a single machine. But it’s free, and it does what I need – it prioritises the tasks that I need to complete.
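For a flavour of what that looks like day to day, here are a few representative Taskwarrior commands – the project, priority and due values are just examples, not my actual setup:

# add a task with a project, priority and due date
task add "Draft SRA22 conference video script" project:phd priority:H due:friday
# show the most urgent tasks
task next
# mark task 3 as done
task 3 done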

What hasn’t changed

Wearables and hearables

My Mobvoi TicWatch Pro is still going strong, and Google appears to be giving Wear OS some love. It’s the longest I’ve had a smart watch, and given how rugged and hardy the TicWatch has been, it will definitely be my first choice when this one reaches end of life. My Plantronics BB Pro 2 are still going strong, and I got another pair on sale as my first pair are now four years old and the battery is starting to degrade.

Quantified self

I’ve started using Sleep as Android for sleep tracking, which uses data from the TicWatch. This has been super handy for assessing the quality of sleep, and making changes such as adjusting going-to-bed times. Sleep as Android exports data to Google Drive. BeeMinder ingests that data into a goal, and keeps me accountable for getting enough sleep.

RescueTime, BeeMinder and Passion Planner are still going strong, and I don’t think I’ll be moving away from them anytime soon.

Assistant services

I still refuse to use Amazon Alexa or Google Home – and they wouldn’t work with the 5GHz-band WiFi where I am living on campus. Mycroft.AI is still my go-to for a voice assistant, but I rarely use it now because the Spotify app support for Mycroft doesn’t work anymore after Spotify blocked Mycroft from using the Spotify API.

One desktop utility that fits into the “assistant” space and that I’ve found super helpful has been GNOME extensions. I use extensions for weather, peripheral selection and random desktop background selection. Being able to easily see how hot it is outside during the Australian summer has been super handy.

Current gaps in my toolchain

I don’t really have any major gaps in my toolchain at the moment, but there are some things that could be better.

  • Visual Git Editor – I’ve been using command line Git for years now, but having a visual indicator of branches and merges is useful. I tried GitKraken, but I don’t use Git enough to justify the monthly-in-$USD price tag. The Git plugin for Atom is good enough for now.
  • Managing everything for me – I looked at Huginn a while back and it sounds really promising as a “second brain” – for monitoring news sites, Twitter etc. – but I haven’t had time to have a good play with it yet.

State of my toolchain 2019

What’s changed in the last year?

As you might be aware, I’ve been doing a writeup of my toolchain every year or so for the last couple of years (2016, 2018). There are a couple of reasons for this:

  • The type of work that I do has changed in that time, necessitating exploring different tools, and different equipment
  • And the technology that I work with continues to evolve – new models, new ways of working, and new mindsets – and our toolchains need to evolve too

This year, I’m studying a Master of Applied Cybernetics at the 3A Institute in Canberra – back to being a student; which I haven’t done for five years. Interestingly, my tools of choice 5 years ago have remained steady – Zotero for referencing, LibreOffice for writing essay type work, and Atom as my IDE of choice.

The key changes are:

  • A change in the main laptops I use
  • I’ve adopted Trello / Pomodone / RescueTime as a combination for personal productivity, with Passion Planner as a written diary / visual planner
  • My Fitbit Ionic died an inelegant death and has been replaced by the Mobvoi TicWatch Pro

Main laptop

My Asus N76 finally gave up the ghost with an unrecoverable hardware failure, including the failure of the built-in Blu-ray/DVD-ROM drive – it’s not worth repairing, and I think I’ll send it for disposal / recycling after taking 7 years’ worth of stickers off the front.

You were a Good Computer, N76. You were a Very Good Computer.

In my previous Toolchain tear-down, you would have read about my interest in System76’s Oryx Pro 3. One of my friends was selling hers (huge thanks, Pia!), and I immediately fell in love with this hard-working, nerd-first beast of a laptop. I chose to install Ubuntu 18.04 LTS rather than System76’s Pop!_OS, basically because I’m so familiar with Ubuntu and I didn’t want any additional learning curve. This machine continues to be my desk-based workhorse of choice. It’s a beautiful, solid, high-performance machine, but it’s not a good mobile choice.

Enter the ASUS VivoBook (my model is the X510UQ). I bought one of these devices for Mum, as she needed a new machine, and was so impressed with it – it has 16GB of RAM and a reasonable NVIDIA GPU (!) – that I went back to the shop and got one for myself. The mobility is so-so, with battery life of about 4 hours if the screen is reasonably dim – but then I tend to run a lot of CPU- and battery-hungry apps. It’s lightweight, has HDMI out and 3 USB ports, and the small bezel means plenty of screen space. I’ve set it up to dual boot Windows and Ubuntu, and if I’m honest it could use a much bigger SSD. That will be a holiday job.

Mobile phone

My Pixel died a couple of months ago, after the battery life suddenly dropped to less than 30 minutes following the update to Android 9 – a problem that seems to be quite widespread. I’ve been on a Pixel 3 since, primarily because it’s what JB Hi-Fi in Geelong had in stock. The camera is amazing, and I’ve finally ditched my 3.5mm audio jack headphones for Bluetooth headphones.

Wearables

My Fitbit Ionic was a beautiful device until a release of Android in around November last year, after which I could no longer pair the Ionic with the Pixel phone. Getting support for this was incredibly problematic: it was difficult and time-consuming, and Fitbit’s after-sales support was very poor. As a result, I ditched Fitbit and made the switch to Wear OS, and have been on the Mobvoi TicWatch Pro ever since. The device is too chunky for most women, but well, I’m not most women, and it fits on my giant fat wrist just fine. The battery life isn’t great, but I’ve found that the heart rate monitor is the largest drain on battery.

One gotcha with the Mobvoi TicWatch Pro is the charger. I bought two chargers with the device, and managed to “fry” – short circuit – them both by running more than 1 Amp of current through them (from a high-current charger). This is well documented on Reddit, and is pretty poor IMHO for a high-end smartwatch.

Wear OS has been an unexpectedly smooth experience; it doesn’t have the ecosystem or the integration that Fitbit has, but that’s also a positive. I can choose the apps and watch faces that best suit me, from multiple different vendors. I’ve settled on the Venom watch face in neutral colours.

A smartwatch remains a key part of my toolchain – moreso than ever.

Quantified Self

I continue to use and be very happy with RescueTime and Beeminder. I’ve been through a myriad of to-do tools in the past few years and seem to have settled on a combination of Trello and Pomodone this year. Pomodone is beautiful; it’s an Electron-based app that’s available for Linux (Woot!). I’m seriously considering upgrading to the paid version in a couple of months if it continues to prove its value.

For visual planning and diarising, I went with the Passion Planner, driven by being a full-time student again. I’ve been very happy with the model it uses – iterative goal setting and pattern-forming – and have already bought my 2020 diary. As a visual person, it gives me plenty of space to visualise, to draw and to map out plans, goals and actions. I used the medium size this year and found it marginally too small, so I have upgraded to the large size for 2020.

Headphones

No change, the Plantronics Backbeat Pro bluetooth headphones are still fantastically awesome.

Streaming Media

No change, still Spotify premium.

Input devices

No change.

Voice Assistant

No change, still the awesome Mycroft.AI

Internet of Things and Home Automation

I’m on residential college this year at Burgmann College at ANU. Their Wi-Fi network is a 5GHz-spectrum, PEAP/MSCHAPv2-authenticated beastie, and nothing much in the IoT space speaks to it, because IoT standards and security – what are they even? 🙁

It feels really weird to have to physically turn my light off now – my default behaviours have been changed by home automation.

Gaps in my toolchain and how they’ve been plugged

In the last edition of State of the Toolchain, these were my key bugbears:

  • Visual Git Editor – I’ve given up on this and learned to love the command line. In hindsight it’s been a great learning experience, and my git fluency has improved out of sight (hah!).
  • Better internet – ANU is on gig internet. *laughs in TCP/IP* I’m going to be in dire straits though if/when I have to go back to a copper-based NBN FttN service *cries in copper*.

Have I missed anything? What do you use?