Building a database to handle PhD interview tracking using MySQL and NocoDB

So, as folx probably know, I’m currently doing a PhD at the Australian National University’s School of Cybernetics, investigating voice data practices and what we might be able to do to change them, so that we have less biased voice data and less biased voice technology products. If you’d like to see some of the things I’ve been working on, you can check out my portfolio. Two of my research methods are related to interviews; the first tranche being shorter exploratory interviews and the second being in-depth interviews with machine learning practitioners.

Because there are many stages to interviews – identifying participants, approaching them for interviews, obtaining consent, scheduling, transcription and coding – I needed a way to manage the pipeline. My PhD cohort colleagues use a combination of AirTable and Notion, but I wanted an open source alternative (surprise!).

Identifying alternatives and choosing one to use

I did a scan of what alternatives were available simply by searching for “open source alternative to AirTable”. Some of the options I considered but discarded were:

  • BaseRow: While this is open source, and built in widely adopted frameworks such as Django and Vue.js, and available in Docker and Heroku deploys, the commercial framing behind the product is very much open core. That is, there are a lot of features that are only available in the paid / premium version. I’ve worked with open core offerings before, and I’ve found that the most useful features are usually those that are behind the paywall.
  • AppFlowy: While this looked really impressive, and the community behind it looked strong, the use of Flutter and Rust put me off – I’m not as familiar with either of them compared to Vue.js or Django. I also found the documentation really confusing – for example, to install the Linux version it said to “use the official package”, but it didn’t give the name of the official package. Not helpful. On this basis I ruled out AppFlowy.
  • DBeaver: This tool is more aimed at people who have to work with multiple databases; it provides a GUI over the top of the database, but is not designed to be a competitor to Notion or AirTable. I wanted something more graphically-focused, and with multiple layout styles (grid, card etc).

This left me with NocoDB. I kicked the tyres a bit by looking at the GitHub code, and read through the documentation to get a feel for whether it was well constructed; it was. Importantly, I was able to install it on my localhost; the ethics protocol for my research method prevented it from being hosted on a cloud platform.

Installation

Installation was a breeze. I set up a database in MySQL (also running locally), then cloned the repo with git, and used npm to install the software:

git clone https://github.com/nocodb/nocodb-seed
cd nocodb-seed
npm install
npm start

nocodb uses Node.js’s built-in HTTP server and starts the application on port 8080 by default, so to start using it you simply go to http://localhost:8080/. One slightly frustrating thing is that it does require an email address and password to log in. nocodb is a commercial company – they’ve recently done a funding raise and are hiring – and I suspect this is part of their telemetry, even for self-hosted installs. I run Pi-hole as my DNS server, however, and I don’t see any telemetry from nocodb in my block list.

Next, you need to provide nocodb with the MySQL database details that you created earlier. This creates some additional tables. nocodb then creates some base views, but at this point you are free to start creating your own.
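
For reference, if you haven’t already created the database, a minimal local MySQL setup looks something like the following (the database name, user and password are placeholders, not the values I actually used):

-- Local database and user for nocodb to connect to (placeholder names)
CREATE DATABASE interview_tracker CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER 'nocodb_user'@'localhost' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON interview_tracker.* TO 'nocodb_user'@'localhost';
FLUSH PRIVILEGES;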

Deciding what fields I needed to capture to be able to visualise my interview pipeline

Identifying what fields I needed to track was a case of trial and error. As I added new fields, or modified the datatypes of existing ones, nocodb was able to be easily re-synced with the underlying database schema. This makes nocodb ideal for prototyping database structures.

[Image: nocodb showing tables out of sync]
[Image: nocodb now in sync with the underlying tables]

In the end, I settled on the following tables and fields; a trimmed SQL sketch of how they fit together follows the field lists below.

Interviewees table

  • INTERVIEWEE_ID – a unique, auto-incrementing ID for each participant
  • REAL_NAME – the real name of my participant (and one of the reasons this is running locally and not in the cloud)
  • CODE_NAME – a code name I ascribed to each participant, as part of my Ethics Protocol
  • ROLE_ID – foreign key identifier for the ROLES table.
  • EMAIL_ADDRESS – what it says on the tin.
  • LINKEDIN_URL – I used LinkedIn to contact several participants, and this was a way of keeping track of that information.
  • HOMEPAGE_URL – the participant’s home page, if they had one. This was useful for identifying the participant’s background – part of the purposive sampling technique.
  • COUNTRY_ID – foreign key identifier for the COUNTRIES table – again used for purposive sampling.
  • HOW_IDENTIFIED – to identify whether people had been snowball sampled
  • HAS_BEEN_CONTACTED – Boolean to flag whether the participant had been contacted
  • HAS_AGREED_TO_INTERVIEW – Boolean to flag whether the participant had agreed to be interviewed
  • NO_RESPONSE_AFTER_SEVERAL_ATTEMPTS – Boolean to flag whether the participant hadn’t responded to a request to interview
  • HAS_DECLINED – Boolean to flag an explicit decline
  • INTERVIEW_SCHEDULED – Boolean to indicate a date had been scheduled with the participant
  • IS_EXPLORATORY – Boolean to indicate the interview was exploratory rather than in-depth. Having an explicit Boolean for the interview type allows me to add others if needed (a full-blown lookup table for interview types felt like overkill).
  • IS_INDEPTH – Boolean for the other type of interview I was conducting.
  • INTERVIEWEE_DESCRIPTION – descriptive information about the participant’s background. Used to help me formulate questions relevant to the participant.
  • CONSENT_RECEIVED – Boolean to flag whether the participant had provided informed consent.
  • CONSENT_URL – A space to record the file location of the consent form.
  • CONSENT_ALLOWS_PARTICIPATION – A flag relevant to a specific type of participation in my ethics protocol and consent form.
  • CONSENT_ALLOWS_IDENTIFICATION_VIA_PARTICIPANT_CODE – A flag relevant to how participants were able to elect to be identified, as part of my ethics protocol.
  • INTERVIEW_CONDUCTED – Boolean to flag that the interview had been conducted.
  • TRANSCRIPT_DONE – Boolean to flag that the transcript had been created (I used an external company for this).
  • TRANSCRIPT_URL – A space to record the file location of the transcript.
  • TRANSCRIPT_APPROVED – Boolean to indicate the participant had reviewed and approved the transcript.
  • TRANSCRIPT_APPROVED_URL – A space to record the file location of the approved transcript
  • CODING_FIRST_DONE – Boolean to indicate first pass coding done
  • CODING_FIRST_LINK – A space to record the file location of the first coding
  • CODING_SECOND_DONE – Boolean to indicate second pass coding done
  • CODING_SECOND_URL – A space to record the file location of the second coding
  • NOTES – I used this field to make notes about the participant or to flag things to follow up.
  • LAST_CONTACT – I used this date field so I could easily order interviewees to follow them up.
  • LAST_MODIFIED – This field auto-updated on update.

Countries table

  • COUNTRY_ID – Unique identifier, used as primary key and foreign key reference in the INTERVIEWEES table.
  • COUNTRY_NAME – human readable name of the country, useful for demonstrating purposive sampling.
  • LAST_MODIFIED – This field auto-updated on update.

Roles table

  • ROLE_ID – Unique identifier, used as primary key and foreign key reference in the INTERVIEWEES table.
  • ROLE_TITLE – human readable title of the role, used for purposive sampling.
  • ROLE_DESCRIPTION – descriptive information about the activities performed by the role.
  • LAST_MODIFIED – This field auto-updated on update.
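
As promised above, here is a heavily trimmed sketch of how the three tables hang together. It is not the full exported schema (that is linked in the next section), and the column types here are assumptions rather than the exact definitions I used:

-- Lookup tables referenced by INTERVIEWEES (trimmed illustration only)
CREATE TABLE ROLES (
    ROLE_ID INT AUTO_INCREMENT PRIMARY KEY,
    ROLE_TITLE VARCHAR(255),
    ROLE_DESCRIPTION TEXT,
    LAST_MODIFIED TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);

CREATE TABLE COUNTRIES (
    COUNTRY_ID INT AUTO_INCREMENT PRIMARY KEY,
    COUNTRY_NAME VARCHAR(255),
    LAST_MODIFIED TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);

-- Core pipeline table; most of the Boolean, consent and URL fields are omitted for brevity
CREATE TABLE INTERVIEWEES (
    INTERVIEWEE_ID INT AUTO_INCREMENT PRIMARY KEY,
    REAL_NAME VARCHAR(255),
    CODE_NAME VARCHAR(255),
    ROLE_ID INT,
    COUNTRY_ID INT,
    HAS_BEEN_CONTACTED BOOLEAN DEFAULT FALSE,
    HAS_AGREED_TO_INTERVIEW BOOLEAN DEFAULT FALSE,
    INTERVIEW_CONDUCTED BOOLEAN DEFAULT FALSE,
    LAST_MODIFIED TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    FOREIGN KEY (ROLE_ID) REFERENCES ROLES(ROLE_ID),
    FOREIGN KEY (COUNTRY_ID) REFERENCES COUNTRIES(COUNTRY_ID)
);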

If I were to update the database structure in the future, I would be inclined to have a “URLs” table, where the file links for things like consent forms and transcripts are stored. Having them all in one table would make it easier to do things like URL validation. This was overkill for what I needed here.

Thinking also about the interview pipeline, an interviewee’s status is currently expressed as a combination of various Boolean flags. I would have found it useful to have a summary STATUS_ID field with a human-readable descriptor of where each person is up to, as sketched below.
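
For example, a pipeline status could be derived from the existing flags with a database view along these lines (the view name and status labels are illustrative, not something I actually built):

CREATE VIEW INTERVIEW_PIPELINE_STATUS AS
SELECT
    INTERVIEWEE_ID,
    CODE_NAME,
    -- Collapse the Boolean flags into a single human-readable status
    CASE
        WHEN TRANSCRIPT_APPROVED THEN 'Transcript approved'
        WHEN INTERVIEW_CONDUCTED THEN 'Interview conducted'
        WHEN INTERVIEW_SCHEDULED THEN 'Interview scheduled'
        WHEN HAS_DECLINED OR NO_RESPONSE_AFTER_SEVERAL_ATTEMPTS THEN 'Not proceeding'
        WHEN HAS_AGREED_TO_INTERVIEW THEN 'Agreed, awaiting scheduling'
        WHEN HAS_BEEN_CONTACTED THEN 'Contacted, awaiting response'
        ELSE 'Not yet contacted'
    END AS PIPELINE_STATUS
FROM INTERVIEWEES;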

Get the SQL to replicate the database table structure

I’ve exported the table structure to SQL in case you want to use it for your own interview tracking purposes. It’s a Gist because I can’t be bothered altering my wp_options.php to allow for .sql uploads, and that’s probably a terrible idea, anyway 😉

Creating views based on field values to track the interview pipeline

Now that I had a useful table structure, I settled on some Views that helped me create and manage the interview pipeline. Views in nocodb are lenses on the underlying database: they restrict or constrain the data that is shown, so that it’s more relevant to the task at hand. This is done by showing or hiding fields, and then filtering on the fields that remain.

  • Data entry view – this was a form view where I could add new Interviewees.
  • Views for parts of the pipeline – I set up several grid views that restricted Interviewees using filters to the part of the interview pipeline they were in. These included those I had and hadn’t contacted, those who had a scheduled interview, those who hadn’t responded, as well as several views for where the interviewee was in the coding and consent pipeline.
  • At a glance view – this was a gallery view, where I could get an overview of all the potential and confirmed participants.

A limitation I encountered working with these views is that there’s no way to provide summary information – like you might with a SUM or COUNT query in SQL. Ideally I would like to be able to build a dashboard that provides statistics on how many participants are at each part of the pipeline, but I wasn’t able to do this.
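
As a workaround, because the underlying data is just MySQL, summary numbers can still be pulled with a direct query against the database; something like this, using the field names above:

SELECT
    COUNT(*)                      AS total_candidates,
    SUM(HAS_BEEN_CONTACTED)       AS contacted,
    SUM(HAS_AGREED_TO_INTERVIEW)  AS agreed_to_interview,
    SUM(INTERVIEW_CONDUCTED)      AS interviews_conducted,
    SUM(TRANSCRIPT_APPROVED)      AS transcripts_approved
FROM INTERVIEWEES;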

Updating nocodb

nocodb is under active development, and has regular updates. Updating the software proved to be incredibly easy through npm, with two commands:

Uninstall NocoDB package

npm uninstall nocodb

Install NocoDB package

npm install --save nocodb

Parting thoughts

Overall, I have been really impressed by nocodb – it’s a strong fit for my requirements in this use case: easy to prototype with, runs locally, and is simple to keep up to date. The user interface is still not perfect, and is downright clunky in places, but as an open source alternative to AirTable and Notion, it hits the spot.

Building a Linux video production workflow for the SRA22 Social Responsibility of Algorithms conference

As you might know, most of my toolchain is Linux and opensource-based. So when it came time to produce a video for the upcoming Social Responsibility of Algorithms conference, with my doctoral supervisor, Dr. Liz T. Williams, Linux-based options were my first choice. However, the Linux video production workflow is not as smooth as it could be – and this blog post documents my trials and tribulations.

Web camera software

While web cameras on Linux these days tend to “just work”, there are no Linux counterparts to the Logitech or other vendor-specific software that the hardware usually ships with. Way back in January when I presented at linux.conf.au, part of my tech check (big thanks, Chris and jwoithe!) involved using the qv4l2 utility to provide finer-grained control over settings such as white balance, contrast, and brightness. This handy utility also controls auto and manual zoom on cameras, and I adjusted the zoom to better fill the screen.

From memory, I didn’t have to add an apt repository for this – it’s in the Ubuntu repos by default – and can be installed with:

sudo apt install qv4l2

Recording video and setting up a virtual background

Open Broadcaster Software

The next challenge was to record the video, using a virtual background. The problem of recording video on Linux is mostly solved these days by Open Broadcaster Software (OBS).

Background removal plugin

While this software comes with a “chroma key” plugin (for green screens), it doesn’t come with an automatic background removal plugin. I read up on what plugins were available and ended up choosing this one from Roy Shilkrot (big thank you, Roy!) –

https://github.com/royshil/obs-backgroundremoval

It was well documented, and very clear about the installation steps. It did require compiling from source, but the instructions were detailed. Because I already had CUDA installed on my machine (I have an NVIDIA RTX 2060 GPU; don’t get me started on NVIDIA and Linux drivers and how hard they are to get working, although there are signs of hope, with NVIDIA recently open sourcing their Linux kernel drivers, albeit by moving most of the functionality into firmware), I opted to compile with GPU support. Compilation was very straightforward; however, the instructions provided for symlinking the compiled binaries were out of date. Luckily, an Issue had already been raised for this, and I used the information in it to fix the symlinking. This exercise gave me a deep appreciation for the effort that goes into packaging, and the complexity it abstracts away for the end user.
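
From memory, the build itself was the usual CMake routine, roughly as below; the exact options (including the one that enables GPU/CUDA inference) are spelled out in the plugin’s README, so treat this as a sketch rather than the canonical steps:

git clone https://github.com/royshil/obs-backgroundremoval.git
cd obs-backgroundremoval
mkdir build && cd build
cmake ..              # add the README's option for CUDA/GPU support here if you want GPU inference
make -j"$(nproc)"
# then install or symlink the built plugin into your OBS plugins directory
# (see the repo's README and the Issue mentioned above for up-to-date paths)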

Machine vision algorithms

This plugin uses machine vision (which is why OpenCV is one of the package’s dependencies) to separate the outline of the subject from the background, and then uses a chroma key to fill the non-subject space with a single colour. It lets you choose from a range of algorithms for the subject segmentation, such as SINet, MODNet, mediapipe and a model tuned for selfies. I found I got the best results using mediapipe. The plugin also allows you to configure other inference settings, such as the frame interval at which inference samples are taken. Again, there is a trade-off here: if you re-run inference frequently it takes more computational power, but if you sample less frequently the outline may not “keep up” with the movement of the subject. I chose to run inference on every frame – primarily because I had the compute power to do so 😈.

All of these computer vision (CV) algorithms are very recent – for example, SINet [0] was developed just over a year ago, and MODNet [1] is less than six months old; I continue to be surprised at how fast the ML space is moving, and how commoditised it has become. If you have ever used the “virtual background” feature in a videoconferencing application like Zoom or Teams, it will be using a similar algorithm. However, a lot of the implementation decisions – like how often to do inference, which algorithm to use, or the characteristics of the algorithm such as smoothing or contrast – are abstracted away from the end user in these applications. What we lose in control, we gain in ease of use.

Running a noise removal filter on the video using ffmpeg

Although I had managed to record the video, sans background, when I played it back the audio had a lot of background noise. This came from the sound of the fans on my laptop; when the GPU is under load, the fans increase their speed to dissipate the heat that is generated. The inference load of the background removal algorithm was causing the fans to spin up – and to generate background noise that was captured by the microphone. Tradeoffs, tradeoffs and more tradeoffs!

Rather than re-record the video, or reduce the inference frequency of the background removal plugin, I decided to see if I could improve the audio quality by running the video through a noise reduction filter. Signal processing – and acoustic signal processing in particular – is definitely not an area I feel comfortable with, but I had a general idea of how noise reduction filters work: by gating (cutting off) noise above or below a particular threshold (the gating floor or gating ceiling). When I worked in videoconferencing, I’d seen a similar approach used with dedicated digital signal processing equipment in videoconferencing-equipped spaces. Again, there’s a trade-off – if you have a quietly spoken person and a loud, deep speaker, but use the same gating settings, you’re going to get unexpected results.

Although I had Audacity installed, I reached for the command-line tool ffmpeg. Newer versions of ffmpeg have noise reduction filters built in, and the documentation includes several command line examples. Running the video through a noise reduction filter improved the audio quality markedly.
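
As an illustration (the specific filter and noise-floor value here are examples rather than my exact settings), a command along these lines copies the video stream untouched and runs only the audio through ffmpeg's afftdn (FFT denoise) filter:

# keep the video as-is, denoise the audio; tune nf (noise floor, in dB) to taste
ffmpeg -i recording_raw.mp4 -c:v copy -af afftdn=nf=-25 recording_denoised.mp4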

Video editing, compositing and rendering in OpenShot

Next, I used OpenShot to do video editing, addition of slides, and “topping and tailing” with the conference branding. OpenShot is a breeze to use, and although it doesn’t have a lot of fancy transitions, it’s an excellent choice for this step of the workflow. I previously converted the PowerPoint I was using for slides into PDF, then used GIMP to create a 1920 x 1080 px PNG image of each PDF. The PNG images were then imported into OpenShot and overlaid over the video tracks.
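
If you’d rather script that conversion step than click through GIMP, poppler’s pdftoppm can do the same thing from the command line; something along these lines (the output size and file naming are illustrative):

# render each page of the slide deck as a 1920 x 1080 PNG (slide-1.png, slide-2.png, ...)
pdftoppm -png -scale-to-x 1920 -scale-to-y 1080 slides.pdf slide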

One of the branding artefacts was in a format (QuickTime MOV) that OpenShot didn’t work particularly well with, so I converted it using HandBrake.

Again, having a reasonable GPU helped with the rendering process; each conference video is around 15 minutes long, and OpenShot rendered them at 1080p, 30 fps, in around 5 minutes 😈.

Subtitles

Subtitles are helpful for many people, and in particular they increase the accessibility of video content for folx who are hard of hearing. Several video creation tools can produce automatic subtitles. However, these are usually inaccurate at best, and at worst can semantically change the meaning of the video content. Moreover, I knew that in this video we would be using some domain-specific vocabulary (the video is all about accent bias in language technologies like speech recognition) – and that the automatic subtitles would be even less accurate.

For this task, I tried out the application Subtitle Editor, but found the interface difficult to use. I’d also heard good things about Aegisub, but in the end I created the .srt file by hand in Atom. The .srt file format is surprisingly basic – and hand-editing was made a lot easier because we had created a script for the video beforehand. It may not work as well if you are transcribing the subtitles from the spoken audio.

Edit: I eventually figured out Subtitle Editor’s interface when I needed to shift all the subtitles back by 7 seconds after topping and tailing the video; this feature alone is worth installing the application for.

The one thing that tripped me up in hand-editing subtitles was forgetting that the time format is HH:MM:SS,ms. My video was only 16 minutes long, and I erroneously placed the seconds in the minutes column. For example, I wrote 00:05:00,400 to mean 5 seconds into the video, but this actually means 5 minutes into the video. The millisecond separator – the comma – is unusual in English, but the SRT format was developed in France, where the comma is the decimal separator. This reminded me of Lawrence Busch’s excellent book Standards: Recipes for Reality, which articulates the path dependencies that standards create.

1
00:00:05,400 --> 00:00:12,300
[Kathy] Hi everyone, I’m Kathy Reid, a PhD candidate
at the School of Cybernetics
at Australian National University,
2
00:00:12,400 --> 00:00:22,300
and I’m recording today from the unceded lands
of the Wadawarrung people here in Geelong,
on the south-west coast of Victoria.

Conclusion

Video production for conferences is becoming more prevalent, particularly as the COVID-19 pandemic has taken many conferences online. Tools like Zoom and Camtasia, while mature, abstract away many of the underlying algorithms, and simplify their interfaces for usability by removing fine-grained controls from the user. By exploring open source tools for video production workflows, we obtain a deeper understanding of “what’s under the hood”.

To extend this work in the future, one of the capabilities I would like to explore is automation of the workflow pipeline. This may be possible for tasks such as topping and tailing videos with conference branding, but the actual composition of video and slides is still going to require human attention.

Footnotes

[0] Fan, D. P., Ji, G. P., Sun, G., Cheng, M. M., Shen, J., & Shao, L. (2020). Camouflaged object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 2777-2787).
https://github.com/DengPingFan/SINet

[1] Ke, Z., Sun, J., Li, K., Yan, Q., & Lau, R. W. (2020). MODNet: real-time trimap-free portrait matting via objective decomposition.
https://github.com/ZHKKKe/MODNet

State of my toolchain 2021

I’ve been doing a summary of the state of my toolchain now for around five years (2019, 2018, 2016). Tools, platforms and techniques evolve over time; the type of work that I do has shifted; and the environment in which that work is done has changed due to the global pandemic. Documenting my toolchain has been a useful exercise on a number of fronts; it’s made explicit what I actually use day-to-day, and, equally – what I don’t. In an era of subscription-based software, this has allowed me to make informed decisions about what to drop – such as Pomodone. It’s also helped me to identify niggles or gaps with my existing toolchain, and to deliberately search for better alternatives.

At a glance

Hardware, wearables and accessories

Software

Techniques

  • Pomodoro (no change since last report)
  • Passion Planner for planning (no change since last report)

What’s changed since the last report?

Writing workflow

Since the last report in 2019, I’ve graduated from a Masters in Applied Cybernetics at the School of Cybernetics at Australian National University, and was accepted into the first cohort of their PhD program. This shift has meant an increased focus on in-depth, academic-style writing. To help with this, I’ve moved to a Pandoc, Atom, Zotero and LaTeX-based workflow, which I’ve documented separately. This workflow has been working solidly for me for about a year. Although it took about a weekend’s worth of setup time, it’s definitely saving me a lot of time.

Atom in particular is my predominant IDE, and also my key writing tool. I use it with a swathe of plugins for LaTeX, document structure, and Zotero-based academic citations. It took me a while to settle on a UI and syntax theme for Atom, but in the end I went with Atom Solarized. My strong preference is to write in Markdown, and then export to a target format such as PDF or LaTeX. Pandoc handles this beautifully, but I do have to keep a file of command line snippets handy for advanced functionality.
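
As an example of the kind of snippet I keep in that file (the file names here are placeholders), turning a Markdown chapter with citations into a PDF looks something like this:

# newer pandoc releases; older ones use --filter pandoc-citeproc instead of --citeproc
pandoc chapter.md --citeproc --bibliography=references.bib -o chapter.pdf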

Primary machine

I had an ASUS Zenbook UX533FD – small, portable and with great battery life, even with an MX150 GPU running. Unfortunately, the keyboard started to malfunction just over a year after purchase (I know, right). I gave up trying to get it repaired because I had to keep chasing my local repair shop for updates on getting a replacement part. I lodged a repair request in October, and it’s now May, so I’m not holding out hope… That necessitated getting a new machine – and it was a case of getting whatever was available during the coronavirus pandemic.

I settled on an ASUS ROG Zephyrus G15 GA502IV. I was a little cautious, having never had an AMD Ryzen-based machine before, but I haven’t looked back. It has a Ryzen 4900-series CPU with 16 hardware threads, and an NVIDIA GeForce RTX 2060 with 6 GB of video RAM. It’s a powerful workhorse and is reasonably portable, if a little noisy, and it gets about 3 hours’ battery life in class. Getting NVIDIA dependencies installed under Ubuntu 20.04 LTS was a little tricky – especially cudnn – but that seems to be normal for anything NVIDIA under Linux. Because the hardware was so new, it lacked support in the 20.04 kernel, so I had to pull in experimental Wi-Fi drivers (it uses a Realtek chipset).

To be honest, I was somewhat smug that my hardware was ahead of the kernel. One little niggle I still have is that the machine occasionally green screens. This has been reported with other ROG models, and I suspect it’s an HDMI-under-Linux driver issue, but I haven’t gone digging too far into driver diagnostics. Yet.

One idiosyncrasy of the Zephyrus G15 is that it doesn’t have a built-in web camera; for me that was a feature. I get to choose when I do and don’t connect the web camera. And yes – I’m firmly in the web-cameras-shouldn’t-be-on-by-default camp.

Machine learning work, NVIDIA dependencies and utilities

Over the past 18 months, I’ve been doing a lot more work with machine learning, specifically in building the DeepSpeech PlayBook. Creating the PlayBook has meant training a lot of speech recognition models in order to document hyperparameters and tacit knowledge around DeepSpeech.

In particular, the DeepSpeech PlayBook uses a Docker image to abstract away Python, TensorFlow and other dependencies. However, this still requires all the NVIDIA dependencies, such as drivers and cudnn, to be installed beforehand. NVIDIA has made this somewhat easier with the Linux CUDA installation guide, which advises on which versions to install alongside other dependencies, but it’s still tough to get everything installed correctly. In particular, the nvtop utility, which is super handy for monitoring GPU operations (such as identifying blocking I/O or other bottlenecks), had to be compiled from source. As an aside, the experience of getting NVIDIA dependencies installed under Linux remains a major hurdle for developers; it’s something I want NVIDIA to put some effort into going forward.

Colour customisation of the terminal with Gogh

I use Ubuntu Linux for 99% of my work now – and rarely boot into Windows. A lot of that work happens in the Linux terminal: spinning up Docker containers for machine learning training, running Python scripts, or even pandoc builds. At any given time I might have 5-6 open terminals, so I needed a way to easily distinguish between them. Enter Gogh – an easy-to-install set of terminal colour profiles.

One bugbear that I still have with the Ubuntu 20.04 terminal is that the fonts that can be used with terminal profiles are restricted to only mono-spaced fonts. I haven’t been able to find where to alter this setting – or how the terminal is identifying which fonts are mono-spaced for inclusion. If you know how to alter this, let me know!

Linux variants of Microsoft software intended for Windows

ANU has adopted Microsoft primarily for communications. This means not only Outlook for mail – for which there is no good Linux client (so I use the web version) – but also the use of Teams and OneNote. I managed to find an excellent alternative in OneNote for Linux by @patrikx3, which is much more usable than the web version of OneNote. Teams on Linux is usable for messaging, but for videoconferencing I’ve found that I can’t use USB or Bluetooth headphones or microphones – which essentially renders it useless. Zoom is much better on Linux.

Better microphone for videoconferencing and conference presentations

As we’ve travelled through the pandemic, we’re all doing a lot more videoconferencing instead of face-to-face meetings, and the majority of conferences have gone online. I’ve recently presented at both PyCon AU 2020 and linux.conf.au 2021 on voice and speech recognition; both conferences used the VenueLess platform. I decided to upgrade my microphone for better audio quality – after all, research has shown that speakers with better audio are perceived as more trustworthy. I’ve been very happy with the Stadium USB microphone.

Taskwarrior over Pomodone for tasks

I tried Pomodone for about 6 months – and it was great for integrating tasks from multiple sources such as Trello, GitHub and GitLab. However, I found it very expensive (around AU$80 per year), and the Linux version suddenly stopped working. The scripting options also only support Windows and macOS, not Linux. So I didn’t renew my subscription.

Instead, I’ve moved to Taskwarrior on Paul Fenwick’s recommendation. This has some downsides – it’s a command line utility rather than a graphical interface, and it only works on a single machine. But it’s free, and it does what I need: prioritises the tasks that I need to complete.
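
Day to day, that mostly means adding tasks with a project, priority and due date, then letting Taskwarrior’s urgency ordering decide what I see first. For example (the task itself is made up):

# add a high-priority task against the PhD project, then show what to work on next
task add "Send follow-up email to interview participant" project:phd priority:H due:friday
task next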

What hasn’t changed

Wearables and hearables

My Mobvoi TicWatch Pro is still going strong, and Google appears to be giving Wear OS some love. It’s the longest I’ve kept a smart watch, and given how rugged and hardy the TicWatch has been, it will definitely be my first choice when this one reaches end of life. My Plantronics BB Pro 2 headphones are also still going strong, and I got another pair on sale, as my first pair are now four years old and the battery is starting to degrade.

Quantified self

I’ve started using Sleep as Android for sleep tracking, which uses data from the TicWatch. This has been super handy for assessing the quality of sleep, and making changes such as adjusting going-to-bed times. Sleep as Android exports data to Google Drive. BeeMinder ingests that data into a goal, and keeps me accountable for getting enough sleep.

RescueTime, BeeMinder and Passion Planner are still going strong, and I don’t think I’ll be moving away from them anytime soon.

Assistant services

I still refuse to use Amazon Alexa or Google Home – and they wouldn’t work with the 5 GHz-band WiFi where I am living on campus. Mycroft.AI is still my go-to for a voice assistant, but I rarely use it now because the Spotify app support for Mycroft no longer works, after Spotify blocked Mycroft from using the Spotify API.

One desktop utility that fits into the “assistant” space, and that I’ve found super helpful, is GNOME extensions. I use extensions for weather, peripheral selection and random desktop background selection. Being able to easily see how hot it is outside during an Australian summer has been super handy.

Current gaps in my toolchain

I don’t really have any major gaps in my toolchain at the moment, but there are some things that could be better.

  • Visual Git editor – I’ve been using command line Git for years now, but having a visual indicator of branches and merges is useful. I tried GitKraken, but I don’t use Git enough to justify the monthly US-dollar price tag. The Git plugin for Atom is good enough for now.
  • Managing everything for me – I looked at Huginn a while back and it seems really promising as a “second brain” – for monitoring news sites, Twitter and so on – but I haven’t had time to have a good play with it yet.