State of my toolchain 2018

Back in mid-2016 I did a run-down of my personal productivity stack – essentially a ‘State of my Toolchain’. After almost two years, it’s time to provide an update and see what’s changed.

Main laptop

My Asus N76 17.3″ laptop is still going strong as my main workhorse; but its days are numbered. I’ve had to rebuild a couple of times now after hard disk drive sectors have failed, so it’s a matter of time before it’s forced into retirement – but at nearly six years old, it’s had a good run.

So the question becomes – what replaces it? I’ve always been very happy with the ASUS gear I’ve had over the years, but the Zenbook range doesn’t seem to offer much in the way of high-end GPU specs – which I need for both gaming and machine learning work. On the other hand, the RoG range doesn’t seem to have good battery life, although that isn’t a major consideration for me.

Enter System76. I hadn’t heard of these guys until some of my linux.conf.au and Mycroft AI mates mentioned them, including this kick-ass video.

https://youtu.be/TcWVKqeF0MY

After doing some asking around, folks seem pretty happy with them, but the downside is that they’re costly, especially with the poor $AUD exchange rate – and then on top of that you have to pay import duty. Might have to see if the $AUD/$USD exchange rate improves.

Mobile laptop

My Asus Trio Transformer TX201LA is still going strong as a mobile laptop; the battery life isn’t great but having the Android & Linux combination on one device has come in very very handy. I’ll be hanging on to this until it dies – and then I’m very interested in one of the newer Transformer models.

Mobile phone

Two years ago I was using the LG Nexus 5X, but unfortunately it fell victim to the bootloop issue. Now I have a Pixel, and it’s brilliant. Right size, great battery life, and great Bluetooth and NFC support. And yes, I often use it with headphones.

Wearables

With Pebble being acquired by Fitbit and subsequently sunsetted, I needed to find a new smartwatch. My Fitbit Flex was also degrading, so it was a natural choice to go with the Fitbit Ionic – essentially combining two wearables – fitness tracker and watch – into one. I’ve been incredibly happy with the Ionic – I was skeptical at first, but the battery life is long – about 3-4 days – and the reminders to move are useful. The range of applications is limited, but the key feature – passing notifications from my phone to my watch – works well.

I’ve found that over time, my smartwatch is very definitely part of my toolchain – it’s no longer a nice-to-have extra – it’s a tool that I regularly check and rely on.

Quantified Self

My Fitbit records and stores a lot of data about how active I am; however, I’m still using RescueTime and Beeminder to help with day-to-day productivity and long-term goals. RescueTime gave me a great deal on a premium upgrade (big ups, guys!) and I’m using the “focus” features a lot – they block time-wasting websites like Facebook for a set period. RescueTime also continues to deliver great visualisations that help show where you’re spending your time.

My RescueTime logged time by category for 2017

Headphones

Plantronics Backbeat Go 2 Bluetooth headphones were great, but being an idiot I left them in a hotel room while travelling. I replaced them with the Jabra Rox – the magnetic earbuds are great for not losing them, however I’ve struggled to use the “wings” to get a good fit.

My Logitech H800 is still going strong. Great headphones.

I did splash out on some Plantronics Backbeat Pro bluetooth headphones that have noise cancellation for concentrated, focused work in noisy places – like co-work spaces. They’re great – 20-odd hour battery life, and they really do cancel out a lot of distracting background sound. My one niggle with them is that the ‘active off’ feature – which pauses music when you take them off – activates with movement, like walking around the house or getting up off a chair.

Streaming Media

With Pandora moving out of the Australian and New Zealand market, I needed to find another streaming music provider. Spotify was an easy choice because of their cross-device support – including a native Linux desktop app. As a bonus, Mycroft AI has a Spotify Skill – although, due to API restrictions, it’s only available to Spotify Premium accounts.

Input devices

My keyboard, graphics tablet and presentation pointers haven’t changed in two years. I did back the Sensel Morph on Kickstarter and have started using it, but because the Linux driver isn’t great (yet), it tends to work better under Windows. I’m hoping that the Linux support matures in the future.

Voice Assistant

Would I like an always-on spy listening device in my house? Hell no.

Would I like a useful voice assistant that doesn’t save what I say to sell me advertising and invade my privacy? Hell yes.

Which is one of the reasons I went to work for Mycroft AI. But I digress. As part of my role, I do a lot of testing and documenting for the Mark 1 hardware – and I have three of them around the house. They’re solid little units with microphones that are better than I expected for RPi-based devices.

One thing I did need to get for working with the Mycroft Mark 1 was a new set of Torx keys – the ones I had didn’t have a long enough handle to disassemble the Mark 1.

We also have a build of Mycroft for Raspberry Pi – Picroft – that needs a microphone and speakers. For this I got a Jabra 410 – it’s much better than I expected for a mid-range omni-directional USB microphone.

For Picroft I also need some Micro SD cards; my key learning here has been that cheap Micro SD cards will cause you pain and misery and suffering and segfaults. Don’t use cheap Micro SD cards. You’re better than that.

Internet of Things and Home Automation

My bevy of LIFX light bulbs continues to grow; I really like the range. I did have an issue with their LIFX Z light strip; one of the three strips that was delivered didn’t work, but it was covered under warranty and they shipped me a replacement. One of my favourite integrations here is with Google Home; I can turn off my bedroom light using the power of my voice.

I’ve also been hacking around with some Ruuvi tags; I want to spend more time on these, they’re pretty cool as sensors.

Software

My software stack hasn’t really changed in two years – I’m still using LibreOffice, with Firefox and Thunderbird, and Atom Editor. In particular, LibreOffice Draw is becoming my go-to tool for diagrams and process flows. Scribus, Inkscape and GIMP are still top in my toolbox too. The new version of GIMP is much smoother.

Gaps in my toolchain

Even with all these great tools, I’m still missing a few components from my overall stack.

  • Visual Git Editor – The range of visual editors for Git on Linux is limited. I tried GitKraken but didn’t like it much. GitHub for desktop doesn’t yet have an official Linux build; I tried the shiftkey fork, but couldn’t figure out how to get it installed.
  • Better internet – my internet is connected at about 6Mbps down, 1Mbps up. It’s slightly faster than two years ago. It’s usable, but very slow. If I have to download or upload a large image – which I often have to do for work – I have to plan ahead. Oh NBN. I simply don’t have the words.

 

Have I missed anything? What do you use?

Joining the Dots Data Visualisation Symposium 2017

Joining the Dots – The Art and Science of Data Visualisation came about as the brainchild of Fiona Tweedie – a business analyst and data scientist who has worked in open knowledge, open data and digital humanities for several years, after completing her PhD in humanities at the University of Sydney. At PyCon AU, Fiona identified that most of the talks on data visualisation had strong representation from STEM – science, technology, engineering and mathematics – but poorer representation from the humanities. Held at the Walter and Eliza Hall Institute, part of the broader University of Melbourne research precinct, #jtdwehi sought to address that by providing the opportunity to cross-pollinate multiple disciplines – and by all accounts it was a roaring success.

There were several excellent and engaging presentations over the course of the day, and my personal highlights are covered below.

Keynote – Professor Deb Verhoeven

Deb Verhoeven is incredibly respected in digital humanities for her creative take on visualisation and sonification – and not least of all for her untiring efforts to improve gender representation and diversity in the field. For more on this, check out her famous ‘Where are the women?’ speech at DH2015:

https://vimeo.com/144863312

Her incisive presentation covered broad ground. In particular, her exposé of “gender offenders” in Australian cinema – men who do not work with women, and choose to work exclusively with other men – denying women opportunities in the industry – was one of the most impactful data visualisations I’ve ever seen.

This is what the patriarchy looks like! – Professor Deb Verhoeven, speaking about the data visualisation of gender representation in Australian cinema

Using a technique called social network analysis, Verhoeven’s team were able to show the gender of project members and how they clustered. Words don’t do it justice.

https://twitter.com/datakid23/status/898399563559587840

You can read more about the project via this article on The Conversation.

Another thought-provoking element of Verhoeven’s keynote was the work her research team had done on sonification, as part of The Ultimate Gig Guide project. Walking us through the project, Verhoeven explained how the team had gathered data on the spread of bands across Melbourne via gig records. To add an extra degree of difficulty, many of these records were not digitised, and the data had to be gathered manually (another argument for digitisation projects – they make accessing and using data so much easier). The team then sonified the data, resulting in a sequence of notes representing the frequency of gigs and their location as distance from the Melbourne CBD. To add additional interest, a backing track was added, and the data was transposed into the C major scale. A meta gig – a gig about a gig!

Mind much blown.

You can read more about Deb Verhoeven’s academic work.

“Visualising the Australian Transport Network” by Xavier Ho, CSIRO

Xavier, an interactive data visualisation specialist with CSIRO, presented on TraNSIT – the Transport Network Strategic Investment Tool. This tool is designed to help identify and implement efficiencies in agribusiness supply chains by mapping the logistics and transport networks of different modes of transport – road, rail, air and sea. This work was amazing – not just because the data needed to be sourced from so many different repositories – another argument for open data – but because of the direct impact data visualisation could have on planning and strategy.

Xavier was a seasoned presenter, with an engaging style – an excellent speaker.

“Ungodly cocktail – visualising three editions of Raynal’s “Histoire”” by Geoff Hinchcliffe, Australian National University

I cannot honestly say that French literature is something which excites me, but Geoff Hinchcliffe’s excellent presentation brought this project – which sought to visualise the differences between editions of Raynal’s Histoire – to life. Using the ‘ungodly cocktail’ of several data visualisation tools, combined with an iterative design and development process (instead of the usual tiered and discrete ‘front end’ and ‘back end’ approach), the changes between versions were mapped and visualised, providing a narrative to explore the influence of Raynal’s writing collaborator, Diderot.

What struck me about Hinchcliffe’s approach was the remarkable work that had gone into making something so esoteric and complex so accessible and simple – the true power of data visualisation.

You can follow Hinchcliffe as @gravitron on Twitter.

Further thoughts

Throughout the day, I came to a number of conclusions:

  • There are a small number of ‘tried and true’ tools for data visualisation specialists – among them d3.js and R. Processing did not seem to have found the same traction in the datavis community, likely because its mature implementation is still Java-based, while the JavaScript implementation – more web-accessible and interactive – is not as mature. There are several Python libraries for visualisation, and Python continues to ascend in popularity across not just the sciences but increasingly the humanities – and is firmly established as a programming language of first choice. Colour choices remain important, guided by tools like ColorBrewer. Typography choices remain geared to the minimal and the sans-serif, indicating a need to have the visualisation speak for itself.
  • Interactivity is not a necessary part of every visualisation – some, such as Hinchcliffe’s, did not have a high degree of interactivity.
  • The interplay between design and development is tightly coupled – as seen with presenters having both back-end and front-end skills and ’round tripping’ between the two in their process. Data visualisation combines design, coding and statistical skills in equal measure, and the most highly sought-after practitioners will be able to work ‘full stack’.

Linux Australia expense breakdown – a data visualisation in d3.js

After learning a lot of new techniques and approaches (and gotchas) in d3.js in my last data visualisation (Geelong Regional Libraries by branch), I wanted to turn my new-found skills to Linux Australia’s end of year report. This is usually presented in a fairly dry manner at the organisation’s AGM each year, and although we have a Timeline of Events, it was time to add some visual interest to the presentation of data.

Collecting and cleaning the data

The dataset that I chose to explore was the organisation’s non-event expenses – that is, the expenditure of the organisation not utilised on specific events – items like insurance, stationery, subscriptions to online services and so on. These were readily available in the accounting software – Xero – and a small amount of data cleansing yielded a simple CSV file. The original file had a ‘long tail’ distribution – there were many data points that had only a marginal value and didn’t help in explaining the data, so I combined these into an ‘Other’ category.
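
For illustration, here’s a minimal sketch of how that ‘long tail’ fold could be done in JavaScript before handing the data to d3.js – the 5% threshold and the category/amount column names are assumptions for this sketch, not the actual cleansing steps I took:

// Sketch only: fold categories worth less than a given share of the
// total into a single 'Other' row. Column names and the threshold are
// illustrative assumptions.
function foldLongTail(rows, threshold) {
  var total = d3.sum(rows, function (d) { return +d.amount; });
  var kept = [];
  var other = 0;
  rows.forEach(function (d) {
    if (+d.amount / total < threshold) {
      other += +d.amount;   // marginal value – roll it into 'Other'
    } else {
      kept.push({ category: d.category, amount: +d.amount });
    }
  });
  if (other > 0) {
    kept.push({ category: 'Other', amount: other });
  }
  return kept;
}

// e.g. var data = foldLongTail(expenseRows, 0.05);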

Visualising the data

Using the previous annular (donut chart) visualisation as the base, I set some objectives for the visualisation:

  • The colours chosen had to match those of Linux Australia’s branding
  • The donut chart required lines and labels
  • The donut chart required markers inside each arc
  • The donut chart had to be downloadable in svg format so that it could be copied and pasted into Inkscape (which has svg as its standard save format)
Colour choice

There was much prototyping involved with colour selection. The first palette selected used shading of a base colour (#ff0000 – red), but the individual arcs were difficult to distinguish. A second attempt added many (16) colours into the palette, but they didn’t work as a colour set. I settled on a combination of three colours (red, yellow, dark grey) and shades of these, with the shading becoming less saturated the smaller the value of the arc.

For anyone interested, the colour range was defined as a d3.scaleOrdinal object, as below.

var color = d3.scaleOrdinal()
    .range([
      // base colours: yellow, red, dark grey
      '#ffc100',
      '#ff0000',
      '#393939',
      // progressively lighter, less saturated shades of the same three,
      // used for the smaller arcs
      '#ffcd33',
      '#ff3333',
      '#616161',
      '#ffda66',
      '#ff6666',
      '#888888',
      '#ffe699',
      '#ff9999',
      '#b0b0b0',
      '#fff3cc',
      '#ffcccc',
      '#fff'
    ]);
Lines and markers

I hadn’t used lines (polylines) and markers in d3.js before, and this visualisation really needed them – because the data series labels were too wordy to fit easily on the donut chart itself. There were some existing examples that were particularly useful and relevant in figuring this out.

The key learning from this exercise about svg polylines is that the polyline is essentially a series of x,y Cartesian co-ordinates – the tricky part is actually using the right circular trigonometry to calculate the correct co-ordinates. This took me right back to sin and cos basics, and I found it helpful to sketch out a diagram of where I wanted the polyline points to be before actually trying to code them in d3.js.
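
To make that concrete, here’s a rough sketch of the kind of calculation involved, using the common d3 pattern of a second, larger ‘outerArc’ purely for positioning – the radii and variable names are illustrative assumptions, not my actual code:

// Sketch: an inner arc for the donut itself and an outer arc used only
// to anchor polylines and labels. Radii here are assumed values.
var radius = 250;

var arc = d3.arc()
    .innerRadius(radius * 0.5)
    .outerRadius(radius * 0.8);

var outerArc = d3.arc()
    .innerRadius(radius * 0.9)
    .outerRadius(radius * 0.9);

// The mid-angle of a slice decides whether its label sits to the left
// or the right of the chart.
function midAngle(d) {
  return d.startAngle + (d.endAngle - d.startAngle) / 2;
}

// Three points per polyline: the centroid of the slice, the centroid of
// the outer arc, then a short horizontal run towards the chart edge.
function polylinePoints(d) {
  var pos = outerArc.centroid(d);
  pos[0] = radius * 0.95 * (midAngle(d) < Math.PI ? 1 : -1);
  return [arc.centroid(d), outerArc.centroid(d), pos];
}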

A gotcha that tripped me up for about half an hour here was that I hadn’t correctly associated the markers with the polylines – because the markers only had a class attribute, but not an id attribute. Whenever I use markers on polylines from now on, I’ll be specifying both class and id attributes.

    .attr('class', 'marker')
    .attr('id', 'marker')
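
For context, the association itself is made by pointing the polyline at the marker’s id – something along these lines (a sketch only; ‘arcs’ and polylinePoints follow the earlier illustrative snippet and are not my actual code):

// Sketch: the polyline references the marker definition by id via
// url(#marker) – a class alone gives url(...) nothing to resolve.
// 'arcs' is assumed to be the output of d3.pie() over the expense data.
svg.selectAll('polyline')
    .data(arcs)
  .enter().append('polyline')
    .attr('fill', 'none')
    .attr('stroke', '#393939')
    .attr('points', polylinePoints)
    .attr('marker-start', 'url(#marker)');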

I initially experimented with a polyline that was drawn not just from the centroid of the arc for each data point out past the outerArc, but one that also went horizontally across to the left / right margin of the svg. While I was able to achieve this eventually, I couldn’t get the horizontal spacing looking good because there were so many data points on the donut chart – this would work well with a donut chart with far fewer data points.

Markers were also generally straightforward to get right, after reading up a bit on their attributes. Again, one of the gotchas I encountered here was ensuring that the markerWidth and markerHeight attributes were large enough to contain the entire marker – for a while, the markers were getting truncated, and I couldn’t figure out why.

    .attr('markerWidth', '12')
    .attr('markerHeight', '12')
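
Putting the two gotchas together, the marker definition ends up looking roughly like this – a sketch only; the circle shape, refX/refY values and markerUnits choice are assumptions for illustration:

// Sketch: a circular marker whose 12x12 viewport is comfortably larger
// than the 4-unit-radius circle it contains, so nothing gets clipped.
svg.append('defs').append('marker')
    .attr('id', 'marker')                    // referenced by url(#marker)
    .attr('class', 'marker')
    .attr('markerUnits', 'userSpaceOnUse')   // size in user units, not stroke widths
    .attr('markerWidth', '12')
    .attr('markerHeight', '12')
    .attr('refX', 6)                         // centre the marker on the polyline point
    .attr('refY', 6)
  .append('circle')
    .attr('cx', 6)
    .attr('cy', 6)
    .attr('r', 4);
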
Labels

Once the positioning for the polylines was solved, positioning the labels was relatively straightforward, as many of the same trigonometric functions were used.
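
As a rough illustration of what that shared trigonometry looks like on the label side (reusing the assumed midAngle and outerArc helpers from the polyline sketch above):

// Sketch: labels share the polyline's outer anchor point and flip
// their text-anchor depending on which side of the chart they sit.
// 'labels' is an assumed d3 selection of the svg text elements.
labels
    .attr('transform', function (d) {
      var pos = outerArc.centroid(d);
      pos[0] = radius * 0.98 * (midAngle(d) < Math.PI ? 1 : -1);
      return 'translate(' + pos + ')';
    })
    .style('text-anchor', function (d) {
      return midAngle(d) < Math.PI ? 'start' : 'end';
    });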

The challenge I encountered here was that d3.js has no text wrapping solution built in to the package, although alternative approaches had been documented elsewhere. From what I could figure out, svg text elements don’t wrap automatically – I can’t just append text and have it word-wrap; the text has to be split manually into tspan elements.

In the end I just abbreviated a couple of the data point labels rather than sink several hours into text-wrapping approaches. It seems odd that svg provides such poor native support for text wrapping, but considering the myriad ways that text – particularly foreign-language text – can be wrapped, it’s an incredibly complex problem.

Downloadable svg

The next challenge with this visualisation was to allow the rendered svg to be downloaded – as the donut chart was intended to be part of a larger infographic. Again, I was surprised that a download function wasn’t part of the core d3.js library, but a number of third-party functions and approaches were available:

  • Example block from Miłosz Kłosowicz – ‘Download svg generated from d3‘: in this example, the svg node is converted to base-64 encoded ASCII then downloaded.
  • d3-save-svg plugin: this plugin provides a number of methods to download the svg, and convert it to a raster file format (such as PNG). This is a fork of the svg-crowbar tool, written for similar purposes by the New York Times data journalism team.

I chose to use the d3-save-svg plugin simply because of the abstraction it provided. However, I came up against a number of hurdles. When I first used the example code to try and create a download button, the download function was not being triggered. To work around this, I referenced the svg object by id:

d3_save_svg.save(d3.select('#BaseSvg').node(), config);
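
For completeness, a sketch of how such a call might be wired to a download button – the button id and filename here are illustrative assumptions; ‘#BaseSvg’ is the id used above:

// Sketch: trigger the plugin's save call from a plain button click.
d3.select('#download-button').on('click', function () {
  var config = { filename: 'linux-australia-expenses' };  // assumed plugin option
  d3_save_svg.save(d3.select('#BaseSvg').node(), config);
});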

The other hiccup with this approach was that CSS rules were not preserved in the svg download if the CSS selector had scope outside the svg object itself. For instance, I had applied basic font styling rules to the entire body selector, but in order for font styling to be preserved in the download, I had to re-specify the font styling at the svg selector level in the CSS file. This was a little frustrating, but the ease of using a function to do the download compensated for this.

Linux Australia expenses 2015-2016 infographic