Building a Linux video production workflow for the SRA22 Social Responsibility of Algorithms conference

As you might know, most of my toolchain is Linux- and open-source-based. So when it came time to produce a video with my doctoral supervisor, Dr. Liz T. Williams, for the upcoming Social Responsibility of Algorithms conference, Linux-based options were my first choice. However, the Linux video production workflow is not as smooth as it could be – and this blog post documents my trials and tribulations.

Web camera software

While web cameras on Linux these days tend to “just work”, there are no Linux counterparts to the Logitech or other vendor-specific software that the hardware usually ships with. Way back in January, when I presented at linux.conf.au, part of my tech check (big thanks, Chris and jwoithe!) involved using the qv4l2 utility to provide finer-grained control over settings such as white balance, contrast, and brightness. This handy utility also controls auto and manual zoom on cameras that support it, and I adjusted the zoom to better fill the screen.

From memory, I didn’t have to add an apt repository for this – it’s in the Ubuntu repos by default – and can be installed with:

sudo apt install qv4l2
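
If you want to script the same adjustments rather than use the qv4l2 GUI, its command-line sibling v4l2-ctl (from the v4l-utils package) can query and set the same controls. A rough sketch – the device path and control names are assumptions, so check the output of --list-ctrls for what your camera actually exposes:

sudo apt install v4l-utils
v4l2-ctl -d /dev/video0 --list-ctrls
v4l2-ctl -d /dev/video0 --set-ctrl=brightness=140,contrast=130
v4l2-ctl -d /dev/video0 --set-ctrl=zoom_absolute=150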

Recording video and setting up a virtual background

Open Broadcaster Software

The next challenge was to record the video, using a virtual background. The problem of recording video on Linux is mostly solved these days by Open Broadcaster Software (OBS).

Background removal plugin

While this software comes with a “chroma key” plugin (for green screens), it doesn’t come with an automatic background removal plugin. I read up on what plugins were available and ended up choosing this one from Roy Shilkrot (big thank you, Roy!) –

https://github.com/royshil/obs-backgroundremoval

It was well documented, and very clear about the installation steps. It did require compiling from source, but the instructions were detailed. Because I already had CUDA installed on my machine (I have an NVIDIA RTX 2060 GPU – don’t get me started on NVIDIA and Linux drivers and how hard they are to get working, although there are signs of hope, with NVIDIA recently open sourcing their Linux drivers, albeit by moving most of the functionality into firmware), I opted to compile with GPU support. Compilation was very straightforward; however, the instructions provided for sym-linking the compiled binaries were out of date. Luckily, an Issue had already been raised for this, and I used the information in the Issue to fix the sym-linking. This exercise gave me a deep appreciation for the effort that goes into packaging, and the complexity that it abstracts away for the end user.
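
For anyone attempting the same build, it followed the usual CMake pattern. This is only a sketch from memory – the repository README is the source of truth, and the GPU build needs extra CMake options that I won’t try to reproduce here:

git clone https://github.com/royshil/obs-backgroundremoval.git
cd obs-backgroundremoval
mkdir build && cd build
cmake ..
make -j$(nproc)

The final step was then sym-linking the built plugin into OBS’s plugin directory, per the Issue mentioned above.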

Machine vision algorithms

This plugin uses machine vision (which is why OpenCV is one of the package’s dependencies) to separate the outline of the subject from the background, and then uses the chroma key to fill the non-subject space with a single colour. It allows you to choose from a range of algorithms to do the subject outlining, such as SINet, MODNet, mediapipe and a model tuned for selfies. I found I got the best results using mediapipe. The plugin also allows you to configure other inference settings, such as how many frames elapse between inference samples. Again, there is a trade-off here: if you resample the image frequently, it takes more computational power, but if you sample less frequently, the outline may not “keep up” with the movement of the subject. I chose to run inference on every frame – primarily because I had the compute power to do so 😈.

All of these computer vision (CV) algorithms are very recent – for example, SINet [0] was developed just over a year ago, and MODNet [1] is less than 6 months old; I continue to be surprised at how fast the ML space is moving, and how commoditised it has become. If you have ever used the “virtual background” feature in a videoconferencing application like Zoom or Teams, it will be using a similar algorithm. However, a lot of the implementation decisions – like how often to do inference, which algorithm to use, or the characteristics of the algorithm such as smoothing or contrast – are abstracted away from the end user in these applications. What we lose in control, we gain in ease of use.

Running a noise removal filter on the video using ffmpeg

Although I had managed to do a video recording, sans background, when I played back the video, the audio had a lot of background noise. This came from the sound of the fans on my laptop; when the GPU is under load, the fans increase their speed to dissipate the heat that is generated. The inference of the background removal algorithm was causing the fans to spin up – and to generate background noise that was captured by the microphone. Tradeoffs, tradeoffs and more tradeoffs!

Rather than re-record the video, or reduce the inference frequency of the background removal plugin, I decided to see if I could improve the audio quality by running the video through a noise reduction filter. Signal processing – and acoustic signal processing in particular – is definitely not an area that I feel comfortable with, but I had a general idea of how noise reduction filters worked: by gating (cutting off) noise above or below a particular threshold (the gating floor or gating ceiling). When I worked in videoconferencing, I’d seen a similar approach used with special digital signal processing equipment in videoconferencing-equipped spaces. Again, there’s a trade-off – if you have a quietly spoken person and a loud, deep speaker, but use the same gating settings, you’re going to get unexpected results.

Although I had Audacity installed, I reached for the command-line tool ffmpeg. The newer versions of ffmpeg have noise reduction filters built in, and the documentation includes several command line examples. Running the video through the noise reduction filter improved the quality markedly.
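
For reference, a minimal sketch of this kind of command, using ffmpeg’s afftdn (FFT-based denoiser) filter and copying the video stream through untouched – the filenames and filter values here are illustrative assumptions, not the exact settings I used:

ffmpeg -i talk-recording.mp4 -c:v copy -af "afftdn=nr=12:nf=-40" talk-recording-denoised.mp4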

Video editing, compositing and rendering in OpenShot

Next, I used OpenShot to do the video editing, the addition of slides, and the “topping and tailing” with the conference branding. OpenShot is a breeze to use, and although it doesn’t have a lot of fancy transitions, it’s an excellent choice for this step of the workflow. I had previously converted the PowerPoint I was using for slides into PDF, then used GIMP to create a 1920 x 1080 px PNG image of each PDF page. The PNG images were then imported into OpenShot and overlaid over the video tracks.
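
If you’d rather not do the PDF-to-PNG step by hand in GIMP, pdftoppm from the poppler-utils package can batch it. A sketch, with assumed filenames:

sudo apt install poppler-utils
mkdir -p slides
pdftoppm -png -scale-to-x 1920 -scale-to-y 1080 slides.pdf slides/slide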

One of the branding artefacts was in a format (QuickTime MOV) that OpenShot didn’t work particularly well with, so I converted it using HandBrake.
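
ffmpeg can do the same conversion from the command line if you prefer – a sketch with assumed filenames, transcoding to H.264/AAC in an MP4 container:

ffmpeg -i branding.mov -c:v libx264 -crf 18 -c:a aac branding.mp4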

Again, having a reasonable GPU helped with the rendering process; each conference video is around 15 minutes long, and OpenShot rendered them at 1080p and 30 fps in around 5 minutes 😈.

Subtitles

Subtitles are helpful for many people, and in particular they increase the accessibility of video content for folx who are hard of hearing. Several video creation tools can produce automatic subtitles. However, these are usually inaccurate at best, and at worst can semantically change the meaning of the video content. Moreover, I knew that in this video we would be using some domain-specific vocabulary (the video is all about accent bias in language technologies like speech recognition) – and that the automatic subtitles would be even less accurate.

For this task, I tried out the application Subtitle Editor, but found the interface difficult to use. I’d heard good things about Aegisub too, but in the end I just created the .srt file by hand-editing it in Atom. The .srt file format is surprisingly basic – and hand-editing was made a lot easier because we had created a script for the video beforehand. This approach may not work as well if you are transcribing the subtitles from the spoken audio.

Edit: I eventually figured out Subtitle Editor’s interface when I needed to move all the subtitles back by 7 seconds after topping and tailing the video; this feature alone is worth installing the application for.

The one thing that tripped me up in hand-editing subtitles was forgetting that the time format is HH:MM:SS,ms. My video was only 16 minutes in length, and I erroneously placed the “seconds” in the “minutes” column. For example, I wrote 00:05:00,400 to mean 5 seconds into the video, but this is actually 5 minutes into the video. The millisecond separator – the comma – is unusual in English, but the SRT format was developed in France, where the comma is used as the decimal separator. This reminded me of Lawrence Busch‘s excellent book Standards: Recipes for Reality, which articulates the path dependencies that standards create.

1
00:00:05,400 --> 00:00:12,300
[Kathy] Hi everyone, I’m Kathy Reid, a PhD candidate
at the School of Cybernetics
at Australian National University,

2
00:00:12,400 --> 00:00:22,300
and I’m recording today from the unceded lands
of the Wadawarrung people here in Geelong,
on the south-west coast of Victoria.
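
On the subtitle-shifting edit mentioned above: ffmpeg can also apply a bulk time offset to an .srt file, as an alternative to doing it in Subtitle Editor. A sketch I haven’t tested on these exact files, delaying every cue by 7 seconds:

ffmpeg -itsoffset 7 -i subtitles.srt subtitles-shifted.srt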

Conclusion

Video production for conferences is becoming more prevalent, particularly as the COVID-19 pandemic has taken many conferences online. Tools like Zoom and Camtasia, while mature, abstract away many of the underlying algorithms, and simplify their interfaces for usability by removing fine-grained controls from the user. By exploring open source tools for video production workflows, we obtain a deeper understanding of “what’s under the hood”.

To extend this work in the future, one of the capabilities I would like to explore is automation of the workflow pipeline. This may be possible for tasks such as topping and tailing videos with conference branding, but the actual composition of video and slides is still going to require human attention.

Footnotes

[0] Fan, D. P., Ji, G. P., Sun, G., Cheng, M. M., Shen, J., & Shao, L. (2020). Camouflaged object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 2777-2787).
https://github.com/DengPingFan/SINet

[1] Ke, Z., Sun, J., Li, K., Yan, Q., & Lau, R. W. (2020). MODNet: real-time trimap-free portrait matting via objective decomposition.
https://github.com/ZHKKKe/MODNet

Setting up an academic writing workflow using Pandoc, Markdown, Zotero and LaTeX on Ubuntu Linux using Atom

This year, I started a PhD at the 3A Institute, within the College of Engineering and Computer Science at Australian National University. I came into the PhD not from a researcher or academic background, but from a career in industry as a technology practitioner. As such, my experience with formatting academic papers, for example for publication in journals, is limited. Our PhD program is hands-on; as well as identifying a specific research topic and articulating a research design, we undertake writing activities – both as academic writing practice, and to help solidify the theoretical concepts we’ve been discussing in Seminar. I needed an academic writing workflow.

For one of these writing exercises, I decided to build out a toolchain using LaTeX – the preferred typesetting tool for academic papers. This blog post documents how I approached this, and serves as a reference both for myself and others who might want to adopt a similar toolchain.

In summary, the process was:

  • Define overall goals
  • Install dependencies such as pandoc, LaTeX, and Zotero with the BetterBibtex extension for citations
  • Experiment with pandoc on the command line for generating PDF from Markdown via LaTeX

Goals of an academic writing workflow

In approaching this exercise, I had several key goals:

  • create an academic writing workflow using my preferred Atom IDE on Linux;
  • that could easily be set up and replicated on other machines if needed;
  • which would allow me to use my preferred citation editor, Zotero;
  • and which would allow me to use my preferred writing format, Markdown;
  • and which would support the LaTeX templates available for journals (and from the ANU for formatting theses)

Why not use OverLeaf?

For those who’ve worked with LaTeX before, one of the key questions you might have here is “why not just use a platform like Overleaf for your academic writing workflow, and not worry about setting up a local environment?”. Overleaf is a cloud LaTeX service that provides a collaborative editing environment and a range of LaTeX templates. However, it’s not free – plans range from USD $12–35 per month. Overleaf is based on free and open source software, and then adds a proprietary collaboration layer over the top – it abstracts away complexity – and this is what you pay a monthly fee for.

In principle, I have no issue with cloud platforms adding value to FOSS and then charging for that value, but I doubt any of the profits from Overleaf are going to folks like John MacFarlane – the author of pandoc – or Donald E. Knuth and Leslie Lamport, who created TeX and LaTeX respectively. I felt like I owed it to these folx to “dig a little under the hood” and learn some of that complexity instead of outsourcing it away.

So, to the process …

Installing pandoc

Pandoc is a free and open source tool for converting documents between different formats. It’s widely used in academia, and used extensively in publishing and writing workflows. Pandoc is written by John MacFarlane, who is a philosophy professor at UC Berkeley.

One of the other things that John is less well known for is Lucida Navajo, a font for the Navajo language, a Native American language of the Southern Athabascan family. Although the script is Latin-based, it contains a number of diacritical marks not found in other Latin-script languages.

Pandoc is available for all platforms, but because my goal here was to develop a workflow on Ubuntu Linux, I’ve only shown installation instructions for that platform.

To install pandoc, use the following command:

$ sudo apt install pandoc 

I also had to make sure that the pandoc-citeproc tool was installed; it’s not installed by default as part of pandoc itself. Again, this was a simple command:

$ sudo apt install pandoc-citeproc

Installing Zotero and the BetterBibtex extension

My next challenge was to figure out how to do citations with pandoc. In the pandoc documentation, there is a whole section on citations, and this provided some pointers. This blog post from Chris Krycho was also useful in figuring out what to use.

This involved installing the BetterBibtex extension for Zotero (which I already had installed). You can find installation instructions for the BetterBibtex extension here. It has to be downloaded as a file and then added through Zotero (not through Firefox).

BibTeX is a citation format supported by most academic journals and referencing tools; Google Scholar, for example, can export to BibTeX.

Once installed, BetterBibtex updates each Zotero reference with a citation key that can then be used as an “at-reference” in pandoc – i.e. @strengersSmartWifeWhy2020. Updating citation keys can take a few minutes – I have several thousand references stored in Zotero, and it took about 6 minutes to ensure that each reference had a BibTeX citation key.
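
To see what a citation key looks like in use, here’s a minimal smoke test – the file names are hypothetical, and it assumes the Zotero library has been exported to a .bib file (covered below):

$ cat > test.md <<'EOF'
Smart speakers raise interesting questions about domestic labour [@strengersSmartWifeWhy2020].
EOF
$ pandoc -f markdown \
  --filter=pandoc-citeproc \
  --bibliography=mylibrary.bib \
  test.md -o test.pdf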

In order to use BetterBibtex, I had to make sure that the export format for Zotero was set to Better BibTeX, and that Zotero was set to update on change.

Installing LaTeX

Next, I needed to install LaTeX for Linux. This was installed via the texlive package:

$ sudo apt install texlive

Based on this Stack Overflow post about an error message I got early on in testing, I also installed the texlive-latex-extra package.

$ sudo apt-get install texlive-latex-extra

Zotero citations package for Atom

Next, I needed to configure the Atom IDE to work with Zotero and BetterBibtex. This involved installing several plugins, including the Zotero citations package for Atom.

Once these were installed, I was ready to start experimenting with a workflow.

Experimenting with a workflow

To start with, I used a basic workflow to go from a Markdown-formatted text file to PDF. pandoc converts this to LaTeX as an interim step, and then to PDF, using the inbuilt templates from pandoc. This first step was a very basic attempt to go from Markdown to PDF, and was designed to “shake out” any issues with software installation.

The pandoc command line options I used to start with were:

$ pandoc -f markdown \
  writing.md \
  -o writing.pdf

In this example, I’m telling pandoc to expect markdown-styled input, to use the file writing.md as the input file and to write to the output file writing.pdf. pandoc infers that the output file is PDF-formatted from the .pdf extension.

Adding citations

Next, I wanted to include citations. First, I exported my Zotero citations to a .bib-formatted file, using the BetterBibtex extension. I stored this in a directory called bibliography in the same directory as my writing.md file. The command line options I used here were:

$ pandoc -f markdown \
--filter=pandoc-citeproc \
--bibliography=bibliography/blog-post-citations.bib \
writing.md \
-o writing.pdf

Note here the two additional options – the --filter option used to invoke pandoc-citeproc, and the --bibliography option used to include a BibTeX-formatted file. This worked well, and generated a plainly formatted PDF (based on the pandoc default template).

Using a yaml file for metadata

Becoming more advanced with pandoc, I decided to experiment with including a yaml file to help generate the document. The yaml file can specify metadata such as author, date and so on, which can then be substituted into the PDF file – if the intermediary LaTeX template accommodates these values. The basic LaTeX template included with pandoc includes values for author, title, date and abstract.

Here’s the yaml file I used for this example:

---
author: Kathy Reid
title: blog post on pandoc
date: December 2020
abstract: |
  This is the abstract.
...

Note that the yaml file must start with three dashes (---) and end with three periods (...).

The pandoc command line options I used to include metadata were:

$ pandoc -f markdown+yaml_metadata_block \
  --filter=pandoc-citeproc \
  --bibliography=bibliography/blog-post-citations.bib \
  writing.md metadata.yml \
  -o writing.pdf

Note here that in the -f switch an additional option, yaml_metadata_block, is given, and that the yaml file is listed after the first input file, writing.md. Because metadata.yml is passed on the command line, pandoc treats both files as input.

By using the yaml file, this automatically appended author, title, date and abstract information to the resulting PDF.

I also found that I could control the margins and paper size of the resulting PDF file by controlling these in the yaml file.

---
author: Kathy Reid
title: blog post on pandoc
date: December 2020
abstract: |
  This is the abstract.
fontsize: 12pt
papersize: a4
margin-top: 25mm
margin-right: 25mm
margin-bottom: 25mm
margin-left: 25mm
...

Adding in logging

It took a little while to get the hang of working with yaml, and I wanted a way to be able to inspect the output of the pandoc process. To do this, I added a switch to the command line option, and also piped the output of the command to a logging file.

$ pandoc -f markdown+yaml_metadata_block \
  --verbose \
  --filter=pandoc-citeproc \
  --bibliography=bibliography/blog-post-citations.bib \
  writing.md metadata.yml \
  -o writing.pdf \
  > pandoc-log.txt

The --verbose switch tells pandoc to use verbose logging, and the logging is piped to pandoc-log.txt. If the output wasn’t piped to a file, it would appear on the screen as stdout, and because it’s verbose, it’s hard to read – it’s much easier to pipe it to a file to inspect it.

Working with other LaTeX templates

Now that I had a Markdown to PDF via LaTeX workflow working reasonably well, it was time to experiment using an academic writing workflow with other templates. Many publications provide a LaTeX template for submission, such as these from the ACM, and ideally I wanted to be able to go from Markdown to a journal template using pandoc.

I’d come across other blog posts where similar goals had been attempted, but this proved significantly harder to implement than I’d anticipated.

My first attempt entailed trying to replicate the work Daniel Graziotin had done here – but I ran into several issues. After copying over the table-filter.py file from the blog post and an ACM .cls file, copying the ACM LaTeX template to default.latex in my pandoc-data directory, and running the command below, I got the following error.

$ pandoc -f markdown+yaml_metadata_block \
  --verbose \
  --data-dir=pandoc-data \
  --variable documentclass=acmart \
  --variable classname=acmlarge \
  --filter=pandoc-citeproc \
  --filter=table-filter.py \
  --bibliography=bibliography/blog-post-citations.bib \
  writing.md metadata-acm.yml \
  -o writing.pdf
Error running filter table-filter.py:
Could not find executable python

My first thought was that python was somehow aliased to an older version of python – i.e. Python 2. To verify this I ran:

$ which python

This didn’t return anything, which explained the error. From this, I assumed that pandoc was expecting that the python alias resolved to the current python. I didn’t know how to change this – for example changing the pandoc preferences to point to the right python binary. Instead, I created a symlink so that pandoc could find python3.

$ pwd
/usr/bin
$ ls | grep python 
python3 
python3.8 
python3.8-config 
python3-config 
python3-futurize 
python3-pasteurize 
x86_64-linux-gnu-python3.8-config 
x86_64-linux-gnu-python3-config 
$ sudo ln -s python3 python 
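
In hindsight, on Ubuntu 20.04 and later there’s a tidier way to get the same symlink, via a package rather than hand-linking in /usr/bin:

$ sudo apt install python-is-python3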

Installing pandocfilters package for Python

I attempted to run the pandoc command again but ran into another error.

Traceback (most recent call last):
   File "table-filter.py", line 6, in 
     import pandocfilters as pf
 ModuleNotFoundError: No module named 'pandocfilters'
 Error running filter table-filter.py:
 Filter returned error status 1

My guess here was that the Python module pandocfilters had not been installed, so I installed it through pip.

$ pip3 install pandocfilters
Collecting pandocfilters
Downloading pandocfilters-1.4.3.tar.gz (16 kB)
Building wheels for collected packages: pandocfilters
Building wheel for pandocfilters (setup.py) … done
Created wheel for pandocfilters: filename=pandocfilters-1.4.3-py3-none-any.whl size=7991 sha256=3c4445092ee0c8b00e2eab814ad69ca91d691d2567c12adbc4bcc4fb82928701
Stored in directory: /home/kathyreid/.cache/pip/wheels/fc/39/52/8d6f3cec1cca4ceb44d658427c35711b19d89dbc4914af657f
Successfully built pandocfilters
Installing collected packages: pandocfilters
Successfully installed pandocfilters-1.4.3

This again swapped one error for another.

Error producing PDF.
! LaTeX Error: Missing \begin{document}.
See the LaTeX manual or LaTeX Companion for explanation.
Type H for immediate help.
…
l.55 u

Luckily, I had set --verbose, and piped the output to a log file. I went digging through the log file to see if I could find anything useful.

Class acmart Warning: You do not have the libertine package installed. Please upgrade your TeX on input line 669.
Class acmart Warning: You do not have the zi4 package installed. Please upgrade your TeX on input line 672.
Class acmart Warning: You do not have the newtxmath package installed. Please upgrade your TeX on input line 675.

After reading through this Stack Overflow article, I concluded these were all font package issues. I used the solution given in the Stack Overflow article, which was to install another Ubuntu package:

$ sudo apt-get install texlive-fonts-extra

Again, this swapped one error message for another.

! LaTeX Error: Command `\Bbbk' already defined.
See the LaTeX manual or LaTeX Companion for explanation.
Type H for immediate help.
…
l.261 …ol{\Bbbk} {\mathord}{AMSb}{"7C}
! ==> Fatal error occurred, no output PDF file produced!
Transcript written on ./tex2pdf.-bf3aef739e05d883/input.log.
Error producing PDF.
! LaTeX Error: Command `\Bbbk' already defined.
See the LaTeX manual or LaTeX Companion for explanation.
Type H for immediate help.
…
l.261 …ol{\Bbbk} {\mathord}{AMSb}{"7C}

Another Google search, another solution from Stack Overflow, which suggested removing the LaTeX line \usepackage{amssymb} from the template. The challenge was, I wasn’t sure where the LaTeX template for pandoc was stored. Reading through this Stack Overflow post, it looked like the default LaTeX template is stored at:

~/.pandoc/templates 

But on my system, this directory didn’t exist. I created it, and used a command (shown below) to store a copy of the default LaTeX template in there. Then I was able to remove the line:

\usepackage{amssymb,amsmath}
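
From memory, dumping the default template used pandoc’s -D (--print-default-template) option, something along these lines:

$ mkdir -p ~/.pandoc/templates
$ pandoc -D latex > ~/.pandoc/templates/default.latex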

This then resulted in yet another error which required Googling.

! LaTeX Error: Missing \begin{document}.
See the LaTeX manual or LaTeX Companion for explanation.
Type H for immediate help.
…
l.54 u
sepackage{booktabs} setcopyright{acmcopyright} doi{10.475/123_4}
! ==> Fatal error occurred, no output PDF file produced!

Looking through the log file, this was generated by the parskip.sty package. More Googling revealed another incompatibility with the ACM class. I removed the parskip package from the LaTeX default template, but was continually met with similar errors.

Parting thoughts

In the end I gave up trying to get this academic writing workflow to function with an ACM template; there were too many inconsistencies and errors for it to be workable, and I’d already sunk several hours into it.

On the plus side though, I learned a lot about pandoc, how to reference with it, and how to produce simple LaTeX documents with this workflow.

What sort of academic writing workflow do you use?

linux.conf.au 2019 Christchurch – The Linux of Things

linux.conf.au 2019 went over the Tasman to New Zealand for the fourth time, to the Cantabrian university city of Christchurch. This was the first year that Christchurch had played host, and I sincerely hope it’s not the last. First, to the outstanding presentations.

NOTE: You can see all the presentations from linux.conf.au 2019 at this YouTube channel.

Open Artificial Pancreas System (OpenAPS) by Dana Lewis

See the video of Dana’s presentation here. Dana Lewis lives with Type 1 diabetes, and her refusal to accept current standards of care in diabetes management led her to collaborate widely, developing OpenAPS. OpenAPS is a system that leverages existing medical devices, and adds a layer of monitoring using open hardware and open software solutions. This presentation was outstanding on a number of levels. As a self-experimenter, Dana joins the ranks of scientists the world over who have put their own health on the line in striving for progress. Her ability to collaborate with others from disparate backgrounds and varied skillsets to make something greater than the sum of its parts is a textbook case of the open source ethos. Moreover, the results that OpenAPS achieved were remarkable: significant stabilization in blood sugars and better predictive analytics – providing better quality of life to those living with Type 1 diabetes. Dana also touched on the Open Humans project, which is aiming to have people share their medical health data publicly so that collective analysis can occur – freeing this data from the vice-like grip of medical device manufacturers. Again, we’re seeing that data itself has incredible value – sometimes more so than the devices which monitor and capture it.

Open Source Magnetic Resonance Imaging: From the community to the community by Ruben Pellicer Guridi

You can view the video of Ruben’s presentation here. Ruben Pellicer Guridi‘s talk centred on how the Open Source MRI community was founded to solve the problem of needing more MRI machines, particularly in low socio-economic areas and in developing countries. The project has attracted a community of health and allied health professionals, and has made available both open hardware and open software, with the first image from their Desktop MR software being acquired in December. Although the project is in its infancy, the implications are immediately evident: providing better public healthcare, particularly for the most vulnerable in the world.

Apathy and Arsenic: A Victorian era lesson on fighting the surveillance state by Lilly Ryan

You can view the video of Lilly’s presentation here. Lilly Ryan’s entertaining and thought-provoking talk drew parallels between our current obsession with privacy-leaking apps and data platforms and the awareness campaign around the detrimental effects of arsenic in the 1800s. Her presentation was a clarion call to resist ‘peak indifference’ and increase privacy awareness and digital literacy.

Deep Learning, not Deep Creepy by Jack Moffitt

You can view the video of Jack’s presentation here. Jack Moffitt is a Principal Research Engineer with Mozilla, and in this presentation he opened by providing an overview of deep learning. He then dug a little deeper into the dangers of deep learning, specifically the biases that are inherent in current deep learning approaches, and some of the solutions that have been trialled to address them, such as making gender and noun pairs – such as “doctor” and “man” – equidistant, so that “doctor” is equally predictive for “man” and “woman”. He then covered key ML projects from Mozilla such as Deep Speech, Common Voice and Deep Proof. This was a great corollary to the two talks I gave.

Computer Science Unplugged by Professor Tim Bell

You can view Tim’s presentation here. Part of the Open Education Miniconf, Tim‘s presentation covered how to teach computer science in a way that was fun, entertaining and accessible. The key problem that Computer Science Unplugged solves is that teachers are often afraid of CS concepts – and CS Unplugged makes teaching these concepts fun for both learners and teachers.

Go All In! By Bdale Garbee

You can view Bdale’s talk here. Bdale’s talk was a reinforcement of the power of open source collaboration, and the ideals that underpin it, with a call to “bet on” the power of the open source community.

Open source superhumans by Jon Oxer

You can view Jon’s talk here. Jon Oxer’s talk covered the power of open source hardware for assistive technologies, which are often inordinately expensive.

Other conversations

I had a great chat with Kate Stewart from the Linux Foundation about the work she’s doing in the programmatic auditing of source code licensing – her talk on the grep-ability of licenses is worth watching – and we covered metrics for communities with CHAOSS, and the tokenisation of Git commits to understand who has committed which code, specifically for unwinding dependencies and copyright.

Christchurch as a location

Christchurch was a wonderful location for linux.conf.au – the climate was perfect – we had a storm or two but it wasn’t 45 C burnination like Perth. The airport was also much bigger than I had expected and the whole area is set up for hospitality and tourism. It won’t be the last time I head to CHC!