My talk picks for #lca2017 – linux.conf.au

linux.conf.au 2017 heads to Hobart, where it was last held in 2009. I absolutely love Tasmania – especially its food and scenery – and am looking forward to heading over.

So, here are my talk picks – keeping in mind that I’m more devops than kernel hacker, so YMMV.

Executive Summary

  • Monday 16th – Networking breakfast, possibly some WootConf sessions and/or Open Knowledge Miniconf sessions.
  • Tuesday 17th – Law and Policy Miniconf, Community Leadership Summit
  • Wednesday 18th – Future Privacy by Michael Cordover, In Case of Emergency – Break Glass by David Bell, Handle Conflict Like a Boss by Deb Nicholson, Internet of Terrible Things by Matthew Garrett.
  • Thursday 19th – Network Protocols for IoT Devices by Jon Oxer, Compliance with the GPL by Karen Sandler and Bradley M. Kuhn, Open source and innovation by Allison Randal, and Surviving the next 30 years of open source by Karen Sandler.
  • Friday 20th – Publicly releasing government models by Audrey Lobo-Pulo

Monday 16th January

I’m keeping Monday open as much as possible, in case there are last-minute things we need to do for the Linux Australia AGM, but will definitely start the day with the Opening Reception and Networking Breakfast. A networking breakfast is an unusual choice of format for the Professional Delegates Networking Session (PDNS), but I can see some benefits to it, such as being able to initiate key relationships and talking points early in the conference. The test of course will be attendance, and availability of tasty coffee 😀

If I get a chance I’ll see some of the WootConf sessions and/or Open Knowledge Miniconf sessions (the Open Knowledge Miniconf schedule hadn’t been posted at the time of writing).

Tuesday 17th January

The highlight for me in Tuesday’s schedule is the excellent Pia Waugh talking ‘Choose your own Adventure’. This talk is based on Waugh’s upcoming book, and its philosophical foundations, macroeconomic implications and strategic global trends cover a lot of ground – ground that needs to be covered.

As of the time of writing, the schedule for the Law and Policy Miniconf hadn’t been released, but this area is of interest to me – as is the Community Leadership Summit. I’m interested to see how the Community Leadership Summit is structured this year; in 2015 it had a very unconference feel. This was appropriate for the session at the time, but IMHO what the Community Leadership Summit needs to move towards are concrete deliverables – such as say a whitepaper advising Linux Australia Council on where efforts should be targeted in the year ahead. In this way, the Summit would be able to have a tangible, clear impact.

Wednesday 18th January

I’ll probably head to Dan Callahan’s keynote on ‘Designing for failure’. It’s great to see Jonathan Corbet’s Kernel Report get top billing, but my choice here is between the ever-excellent Michael Cordover’s ‘Future Privacy’ and Cedric Bail’s coverage of ‘Enlightenment Foundation Libraries for Wearables’. Next up, I’ll be catching David Bell (Director, LCA2016) talking ‘In case of emergency – break glass – BCP, DRP and Digital Legacy’. There’s nothing compelling for me in the after-lunch session, except perhaps Josh Simmons’ ‘Building communities beyond the black stump’, but this one’s probably too entry-level for me, so it might be a case of long lunch / hallway track.

After afternoon tea, I’ll likely head to Deb Nicholson’s ‘Handle conflict like a boss’, and then Matthew Garrett’s ‘Internet of terrible things’ – because Matthew Garrett 😀

Then, it will be time for the Penguin Dinner!

Thursday 19th January

First up, I’m really looking forward to Nadia Eghbal’s ‘People before code’ keynote about the sustainability of open source projects.

Jon Oxer’s ‘Network Protocol Analysis for IoT Devices’ is really appealing, particularly given the rise and rise of IoT equipment, and the lack of standards in this space.

It might seem like a dry topic for some, but Bradley M. Kuhn and Karen Sandler from the Software Freedom Conservancy will be able to breathe life into ‘Compliance with the GPL’ if anyone can; they also bring with them considerable credibility on the topic.

After lunch, I’ll be catching Allison Randal talking on ‘Open source and innovation’ and then Karen Sandler on ‘Surviving the next 30 years of open source’. These talks are related, and speak to the narrative of how open source is weaving itself into different facets of our lives – how does open source live on when we do not?

Friday 20th January

After the keynote, I’ll be catching Audrey Lobo-Pulo on ‘Publicly releasing government models’ – this ties in with a lot of the work I’ve been doing in open data, and government open data in particular. After lunch, I’m looking forward to James Scheibner’s ‘Guide to FOSS licenses’, and to finish off the conference on a high note, the ever-erudite and visionary George Fong on ‘Defending the security and integrity of the ‘Net’. Internet Australia, of which Fong is the chair, has many values in common with Linux Australia, and I foresee the two organisations working more closely together in the future.

What are your picks for #lca2017?

Geelong Libraries by branch – a data visualisation

Introduction

Geelong Regional Library Corporation (GRLC) came on board GovHack this year, and as well as being a sponsor, opened a number of datasets for the hackathon. Being the lead organiser for GovHack, I didn’t actually get a chance to explore the open data during the competition. However, while I was going for a walk one day – as always seems to happen – an idea struck me for how the GRLC datasets could be visualised. I’d previously done some work visualising data using a d3.chord layout, and while this data wasn’t suitable for that type of layout, the concept of using annulars – donut charts – to represent and compare the datasets seemed appealing. There was only one problem – I’d never tackled anything like this before.

Challenge: accepted

Understanding what problem I was trying to solve

Of course the first question here was what problem I was trying to solve (thanks Radia Perlman for teaching me to always solve the right problem – I’ll never forget your LCA2013 keynote). Was this an exploratory data visualisation or an explanatory one? This led to formulating a problem statement:

How do the different Libraries in the Geelong region compare to each other in terms of holdings, membership, visits and other attributes?

This clearly established some parameters for the visualisation: it was going to be exploratory and comparative. It would need a way to identify each Library – likely via a colour code – and appropriate use of shapes and axes to allow for comparison. While I was tempted to use a stacked bar chart, I really wanted to dig deeper into d3.js and extend my skills in this Javascript library, so I resolved to visualise the data using circular rings.

Colour selection

The first challenge was to ensure that the colours of the visualisation were both appealing and appropriate. While this seems an unlikely starting place for a visualisation – most practitioners would opt to get the basic shape right first – for this project getting the colours right felt like the best starting point. For inspiration, I turned to the Geelong Regional Library Corporation’s Annual Report, and used the ColorZilla extension to eyedrop the key brand colours used in the report. However, this only provided about 7 colours, and I needed 17 in order to map each of the different libraries. In order to identify ‘in between’ colours, I used this nifty tool from Meyerweb, which is super-handy for calculating gradients. The colours were then used as an array for a d3.scaleOrdinal object, and mapped to each library.

var color = d3.scaleOrdinal()
    .range([
        "#59d134",
        "#4CCA62",
        "#40C28F",
        "#33bbbd",
        "#36AFC7",
        "#3b98da",
        "#427DC9",
        "#5148a6",
        "#8647A8",
        "#BC47A9",
        "#f146ab",
        "#F03E85",
        "#F0355E",
        "#f0431e",
        "#F8880F",
        "#FFCC00",
        "#ACCF1A"
      ])
    .domain([
        "Geelong",
        "Belmont",
        "Corio",
        "Geelong West",
        "Waurn Ponds",
        "Ocean Grove",
        "Newcomb",
        "Torquay",
        "Drysdale",
        "Lara",
        "Bannockburn",
        "Queenscliff",
        "Chilwell",
        "Highton",
        "Mobile Libraries",
        "Barwon Heads",
        "Western Heights College"
    ]);
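As an aside, the ‘in between’ colours could also have been derived programmatically – here’s a hedged sketch using d3 v4’s interpolators rather than the Meyerweb tool. The brandColours array and the libraryNames variable are illustrative assumptions, not the actual palette or code:

// Hypothetical sketch: build a smooth ramp through a handful of brand colours,
// then sample 17 evenly-spaced stops along it for the 17 libraries.
var brandColours = ["#59d134", "#33bbbd", "#3b98da", "#5148a6", "#f146ab", "#f0431e", "#FFCC00"];
var seventeenColours = d3.quantize(d3.interpolateRgbBasis(brandColours), 17);

var color = d3.scaleOrdinal()
    .range(seventeenColours)
    .domain(libraryNames); // the same 17 library names as above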

Annular representation of data using d3.pie

Annular representation of data - step 1
First step in annular representation

The first attempt at representing the data was … a first attempt. While I was able to create an annular representation (donut chart) from the data using d3.pie and d3.arc, the labels of the Libraries themselves weren’t positioned well. The best tutorial I’ve read on this topic by far is from data visualisation superstar, Nadieh Bremer, over on her blog, Visual Cinnamon. I decided to leave labels on the arcs as a challenge for later in the process, and instead focus on the next part of visualisation – multiple annulars in one visualisation.
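For context, the basic d3.pie + d3.arc pattern I was working with at this stage looks roughly like the sketch below – the field names (Library, Items), the radii and the svg selection are illustrative assumptions rather than the actual code:

// Hedged sketch: a single donut built from one dataset.
var pie = d3.pie()
    .value(function(d) { return d.Items; })   // size each segment by its Items count
    .sort(null);                              // keep the libraries in their original order

var donutArc = d3.arc()
    .innerRadius(160)                         // a non-zero inner radius turns the pie into a donut
    .outerRadius(180);

svg.selectAll("path")
    .data(pie(CollectionData))
    .enter()
  .append("path")
    .attr("d", donutArc)
    .attr("fill", function(d) { return color(d.data.Library); });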

Multiple annulars in one visualisation

Annular representation of data - step 2
Uh-oh!

The second challenge was to place multiple annulars – one for each dataset – within the same svg. Normally with d3.js, you create an svg object which is appended to the body element of the html document. So what happens when you place two d3.pie objects on the svg object? You guessed it! Fail! The two annulars were positioned one under the other, rather than on top of each other. I was stuck on this problem for a while, until I realised that the solution was to place different annulars on different layers within the svg object. This also gave more control over the visualisation. However, SVG doesn’t have layers as part of its definition – objects in SVG are drawn one on top of the other, with the last drawn object ‘on top’ – sometimes called the stacking order. But by creating groups within the BaseSvg, like the ones below, for shapes to be drawn within, I was able to approximate layering.

var BaseSvg = d3.select("body").append("svg")
    .attr("width", width)
    .attr("height", height)
    .append("g")
    .attr("transform", "translate(" + (width / 2 - annularXOffset) + "," + (height / 2 - annularYOffset) + ")");

/*
  Layers for each annular
*/

var CollectionLayer = BaseSvg.append('g');
var LoansLayer      = BaseSvg.append('g');
var MembersLayer    = BaseSvg.append('g');
var EventsLayer     = BaseSvg.append('g');
var VisitsLayer     = BaseSvg.append('g');
var WirelessLayer   = BaseSvg.append('g');
var InternetLayer   = BaseSvg.append('g');
var TitleLayer      = BaseSvg.append('g');
var LegendLayer     = BaseSvg.append('g');

At this point I found Scott Murray’s SVG Primer very good reading.

Annular representation of data - step 3
The annulars are now positioned concentrically

I was a step closer!

Adding in parameters for spacing and width of the annulars

Once I’d figured out how to get annulars rendering on top of each other, it was time to experiment with the size and shape of the rings. In order to do this, I tried to define a general approach to the shapes that were being built. That general approach looked a little like this (well, it was a lot more scribble).

General approach to calculating the size and proportion of multiple annulars

By being able to define a general approach, I was able to declare variables for elements such as the annular width and annular spacing, which became incredibly useful later as more annulars were added – the positioning and shape of the arcs for each annular could be calculated mathematically using these variables (see the source code for how this was done).

var annularXOffset  = 100; // how much to shift the annulars horizontally from centre
var annularYOffset  = 0; // how much to shift the annulars vertically from centre
var annularSpacing  = 26; // space between different annulars
var annularWidth    = 22; // width of each annular
var annularMargin   = 70; // margin between annulars and canvas
var padAngle        = 0.027; // amount that each segment of an annular is padded
var cornerRadius    = 4; // amount that the sectors are rounded
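As a rough sketch of how those calculations work (my own reconstruction rather than the actual source – see the repository for the real thing), each ring’s radii follow from these shared variables:

// Hedged sketch: working inward from the outermost ring.
var outermostRadius = Math.min(width, height) / 2 - annularMargin;

function annularRadii(ringIndex) {            // ringIndex 0 = outermost ring
    var outer = outermostRadius - ringIndex * (annularWidth + annularSpacing);
    return { inner: outer - annularWidth, outer: outer };
}

// For example, the third ring in could then be drawn with:
var radii = annularRadii(2);
var visitsArc = d3.arc()
    .innerRadius(radii.inner)
    .outerRadius(radii.outer)
    .padAngle(padAngle)
    .cornerRadius(cornerRadius);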

This allowed me to ‘play around’ with the size and shape of the annulars until I got something that was ‘about right’.

Annular representation of data - step 4
Annular spacing overlapped

 

Annular representation of data - step 3
Annular widths and spacing looking better

At this stage I also experimented with the padAngle of the annular arcs (also defined as a variable for easy tweaking), and with the stroke weight and colour, which was defined in CSS. Again, I took inspiration from GRLC’s corporate branding.

Placing dataset labels on the arcs

Now that I had the basic shape of the visualisation, the next challenge was to add dataset labels. This was again a major blocking point, and it took me a lot of tinkering to finally realise that the dataset labels would need to be svg text, sitting on paths created from separate arcs to those rendered by the d3.pie function. Without separate paths, the text wrapped around each arc segment in the annular – shown below. So, for each dataset, I created a new arc and path for the dataset label to be rendered on, and then appended a text element to the path. I’d never used this technique in svg before and it was an interesting learning experience.

Annular representation of data - step 6
Text on arcs is a dark art

Having sketched out a general approach again helped here, as with the addition of a few extra variables I was able to easily create new arcs for the dataset text to sit on. A few more variables to control the positioning of the dataset labels, and voila!
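Roughly, the technique looks like the sketch below – the label radius, element id and offset are illustrative assumptions rather than the actual values:

// Hedged sketch: a separate arc whose only job is to carry a dataset label.
var labelRadius = 310;                        // just outside the ring the label describes (illustrative)

var labelArc = d3.arc()
    .innerRadius(labelRadius)
    .outerRadius(labelRadius)
    .startAngle(0)
    .endAngle(Math.PI);                       // a half-circle is plenty for a short label

CollectionLayer.append("path")
    .attr("id", "collectionLabelArc")         // the text below references this id
    .attr("d", labelArc())
    .style("fill", "none");

CollectionLayer.append("text")
  .append("textPath")
    .attr("xlink:href", "#collectionLabelArc")
    .attr("startOffset", "1%")
    .text("Collection");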

Annular representation of data - step 7
Dataset labels looking good

Adding a legend

The next challenge was to add a legend to the diagram, mostly because I’d decided that the infographic would be too busy with Library labels on each data point. This again took a bit of working through, because while there’s a d3.legend plugin for constructing legends, it’s really intended for data plotted horizontally or vertically, not 7 datasets plotted on consecutive annulars. This tutorial from Zero Viscosity and this one from Competa helped me understand that a legend is really just a group of related rectangles.

var legend = LegendLayer.selectAll("g")
    .data(color.domain())
    .enter()
    .append('g')
    .attr('x', legendPlacementX)
    .attr('y', legendPlacementY)
    .attr('class', 'legend')
    .attr('transform', function(d, i) {
        return 'translate(' + (legendPlacementX + legendWidth) + ',' + (legendPlacementY + (i * legendHeight)) + ')';
});

legend.append('rect')
    .attr('width', legendWidth)
    .attr('height', legendHeight)
    .attr('class', 'legendRect')
    .style('fill', color)
    .style('stroke', legendStrokeColor);

legend.append('text')
    .attr('x', legendWidth + legendXSpacing)
    .attr('y', legendHeight - legendYSpacing)
    .attr('class', 'legendText')
    .text(function(d) { return d; });

Annular representation of data - step 8
The legend isn’t positioned correctly

Again, the positioning took a little work, but eventually I got the legend positioned well.

Annular representation of data - step 9
The legend is finally positioned well

Responsive design and data visualisation with d3.js

One of the other key challenges with this project was attempting to have a reasonably responsive design. This appears to be incredibly hard to do with d3.js. I experimented with a number of settings to aim for a more responsive layout. Originally, the narrative text was positioned in a sidebar to the right of the image, but at different screen resolutions the CSS float rendered awkwardly, so I decided to use a one column layout instead, and this worked much better at different resolutions.

Next, I experimented with using the Javascript values innerWidth and innerHeight to help set the width and height of the svg element, and also to dynamically position the legend. This gave a much better, though not perfect, rendering at different resolutions. It’s still a little hinky, particularly at smaller resolutions, but it’s an incremental improvement.
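As a rough sketch of that approach (the specific margins and offsets are mine, not the actual values):

// Hedged sketch: size the svg from the browser viewport rather than from fixed values.
var width  = Math.min(window.innerWidth - 40, 1100);   // leave a small margin, cap on large screens
var height = Math.min(window.innerHeight - 40, width); // keep the drawing roughly square

// The legend can then be placed relative to the canvas rather than at fixed coordinates.
var legendPlacementX = width / 2 - 160;
var legendPlacementY = -height / 2 + 20;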

Thinking this through more deeply, although SVG and d3.js in general are vector-based, and therefore lend themselves well to responsive design to begin with, there are a number of elements which don’t scale well at different resolutions – such as text sizes. Unless all these elements were to be made dynamic, and likely conditional on viewport and orientation, then it’s going to be challenging indeed to produce a visualisation that’s fully responsive.

Adding tooltips

While I was reasonably pleased with the progress on the project, I felt that the visualisation needed an interactive element. I considered using some sort of arc tween to show movement between data sets, but given that historical data (say for previous years) wasn’t available, this didn’t seem to be an appropriate choice.

After getting very frustrated with the lack of built-in tooltips in d3.js itself, I happened upon the d3.tip library. This was a beautifully written addition to d3.js, and although its original intent was for horizontal and vertical chart elements, it worked passably on annular segments.

Annular representation of data - step 10
Adding tooltips

Drawbacks in using d3.tip for circular imagery

One downside I found in using this library was the way in which it considers the positioning of the tooltip – this has some unpredictable, and visually unpleasant, results when data is being represented in circular form. In particular, the way that d3.tip calculates the ‘centre’ of the object that it is applied to does not translate well to arc and circular shapes. For instance, look at how the d3.tip is applied to arc segments that are large and have only small amounts of curvature – such as the Geelong arc segment for ‘Members’. I’ve had a bit of a think about how to solve this problem, and the solution involves a more optimal approach to calculating the ‘centre’ point of an arc segment.

This is beyond what I’m capable of with d3.js at the moment, but I wanted to call it out as a future enhancement and exploration.
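One possible starting point for that exploration – sketched here without having tried it – is d3’s arc.centroid(), which returns the mid-angle, mid-radius point of a segment and might give the tip a better anchor; collectionArc is an assumed name for the ring’s arc generator:

// Hedged sketch: use the arc generator's centroid as an anchor point for the tooltip.
function segmentCentre(d) {
    var c = collectionArc.centroid(d);              // [x, y] relative to the donut's centre
    return [width / 2 + c[0], height / 2 + c[1]];   // translate back into svg coordinates
}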

Adding percentage values to the tooltip with d3.nest and d3.sum

The next key challenge was to include the percentage figure, as well as the Library and data value, in the d3.tip. This was significantly more challenging than I had anticipated, and meant reading up on the d3.nest and d3.sum functions. These tutorials from Phoebe Bright and LearnJS were helpful, and Zan Armstrong’s tutorial on d3.format helped me get the precision formatting correct. After much experimentation, it turned out that summing the values of each dataset (in order to calculate percentage) was but a mere three lines of Javascript:

// Sum the Items column across every library to get the dataset total
var CollectionItemCount = d3.nest()
    .rollup(function (v) { return d3.sum(v, function(d) { return d.Items; }); })
    .entries(CollectionData);
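With the total in hand, the percentage itself is straightforward. The sketch below is illustrative rather than the actual code – it assumes a d3.tip instance and rows with Library and Items fields, as used elsewhere in this post:

// Hedged sketch: show the library, raw value and share of the dataset total in the tip.
var formatPercent = d3.format(".1%");

var collectionTip = d3.tip()
    .attr('class', 'd3-tip')
    .html(function(d) {
        // d.data is the original row behind this arc segment (d comes from d3.pie)
        return d.data.Library + ': ' + d.data.Items + ' (' +
               formatPercent(d.data.Items / CollectionItemCount) + ')';
    });

// The tip is then attached to the layer and shown/hidden on hover of the arc paths,
// e.g. CollectionLayer.call(collectionTip) with .on('mouseover', collectionTip.show).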

Concluding remarks

Data visualisation is much more challenging than I thought it would be, and the learning curve for d3.js is steep – but it’s worth it. This exercise drew on a range of technical skills, including circular trigonometry, HTML and knowledge of the DOM, CSS and Javascript, and above all the ability to ‘break a problem down’ and look at it from multiple angles (no pun intended).

The Light Clock

My Light Clock arrived on Friday, and the weekend was a great opportunity to set it up and learn more about how it worked.

I’d backed this project for two key reasons:

  • The project was run by an Australian hardware and software engineer – Chris Carter – who was recommended by colleagues in the open source community. I’m passionate about open source development, and I wanted to help back an Australian project, particularly given the success of LIFX.
  • The project was based on open hardware and open software. The base board for The Light Clock appears to be the Arduino-compatible ESP8266, which is fast becoming the go-to board for open hardware developers. The lighting is based on Adafruit’s NeoPixel range.

The box was very plain and simple, and the device itself was packed with polystyrene peanuts and bubble wrap – very secure nonetheless. The Australian adaptor was included in the box, however the lead on the device was only about 1.5m long. The Light Clock sticker on the back of the device was a nice touch, however I would have liked a separate Light Clock sticker in the box, say for a laptop. As one of the first 200 people to receive a Light Clock device, I would have welcomed a ‘Kickstarter Edition’ engraving or similar, but understandably that’s not part of a minimum viable shippable product.

First steps with #thelightclock, a @kickstarter project I backed – photo posted by @kathyreid_id_au

The short lead presented the first design and installation challenge; ostensibly this device is aimed at replacing existing analogue clocks that are wall-mounted. However, it’s rare that someone would have a general power outlet (GPO) high up on their wall, necessitating a fairly long lead run to a ground-level GPO. This may not be the case in say corporate offices, which may already have networked clocks in place, or existing infrastructure for digital signage.

Connecting to the network

The next challenge was connecting to the Light Clock and getting it onto my home wi-fi network, so that it could use NTP to keep in sync. The Light Clock correctly appeared as an advertised SSID in my Network Manager, however every attempted connection to this SSID failed. Rather than spend the time diagnosing it, I used my Nexus 5X mobile phone, running stock Android, to connect to The Light Clock SSID. This was successful on the first attempt, and I was able to join The Light Clock to my home wifi network. As expected, The Light Clock could not see my 5GHz SSID, and could only see my 2.4GHz SSID. This appears to be pretty normal for most IoT devices at the moment, but I suspect we’ll see more support for the 5GHz band over time. The setup page used to join The Light Clock to the network wasn’t responsively designed, so it was a bit fiddly on a mobile device.

Once I got the device on the network, I then went back to try and diagnose why I couldn’t connect to The Light Clock SSID via Ubuntu, and found something very interesting. The MAC address picked up by the router, shown in the image below, was:

18:fe:34:e2:14:43

however, the MAC address logged in dmesg (the kernel log) on Ubuntu was:

1a:fe:34:e2:14:43

So, I think there may be an issue with the MAC address it’s broadcasting, or with how my machine was picking up the MAC address. Interestingly, the two addresses differ only in the ‘locally administered’ bit (0x02) of the first octet, which some chipsets set on the MAC of their access point interface. Here’s a link to the dmesg logs in case anyone is curious. For the record, I’m using an Atheros network card in my ASUS N76. It’s otherwise generally pretty reliable.

How The Light Clock appears as a device on the router
How The Light Clock appears as a device on the router

Configuring The Light Clock

Configuring The Light Clock proved much easier than getting the device onto the network. You simply connect to a web interface on the device over your WiFi network and adjust the settings.

Another observation was that clear setup instructions were at thelightclock.com/setup.

/setup is becoming the default setup URL for devices such as this

The Light Clock settings screen
The Light Clock settings screen

Experimenting with colours yielded some interesting conclusions. The colour settings tended to work best when both colours – the hours colour and the minutes colour – were heavily saturated and bright. Neon-type colours – bright pinks, yellows, blues and greens – tended to work best in terms of contrast between hours and minutes. For someone whose house is pretty much all neutral shades – stones, earthy colours – finding a colour palette that was both clearly readable and resonant with the rest of the interior design was very challenging, and I couldn’t settle on a palette that met both requirements.

The blending option, when set high, tended to make the time much more difficult to read, and I settled on the lowest blending setting. The other feature that would be useful here is the ability to adjust the brightness of the hours colour and the minutes colour independently, so that, for instance, you could have very bright hours and very dull minutes. I’m not sure if this is possible with the NeoPixel hardware though. I did have a look at the source code to see if it would be an easy pull request, but I couldn’t figure out how the brightness value is added to the pixel colours.

The settings also had three slots to save different colour schemes, which is a useful UX addition; however, I would have liked more slots. In experimenting with the hour markers, I found that no hour markers at all actually made the time more readable, which was counter-intuitive.

With a little tweaking, I think this device could be integrated into other design projects, such as on canvas or with something like LilyTwinkle.

Integration with other IoT devices

One of the key drawbacks of The Light Clock is that it doesn’t appear to have any integration with other IoT devices, such as LIFX, Hue, Nest and so on. There are a number of use cases where I can see The Light Clock adding a lot of value if integrated, such as:

  • Using The Light Clock as a visual indicator of notifications, phone ringing and alerts
  • Synchronising The Light Clock as a wake-up device. Currently I use LIFX to slowly turn on my bedroom light in the morning, and I’d like The Light Clock to be synchronised with this, particularly given that it tends to work best with neon colours.
  • Integrating Light Clock control in to other apps – such as LIFX. I’m really glad that I don’t have to install yet another mobile application to control The Light Clock – because the IoT app market is already so fragmented.

I was also half expecting some sort of documented API for The Light Clock so that I could experiment with some integration myself, and although the source code is available, the device itself doesn’t appear to have a documented API or web service. From what I can tell, the settings page basically takes a bunch of GET variables and writes them to the board, so even knowing the range of GET variables would help with integrating The Light Clock with other devices.

The verdict

This is a great product for people passionate about open hardware, and who like to tinker, but it’s not yet mature enough to be a mainstream product. With some small design tweaks and attention to detail in the codebase, it would be a strong standalone product; however, its key value lies in integrating with other IoT devices to provide meaningful and valuable interactions.

I’m not sure what I’ll do with my Light Clock – I don’t have a wall-mounted GPO, or a GPO in range of where I could mount one, unless I find an adaptor with a longer lead.

List of feedback for next iteration of this product

  • Include an adaptor that has a long lead to cater for the use case where someone doesn’t have a wall-mounted GPO available, or allow this to be selected during the purchase process.
  • Alternatively, the product could be redesigned to run on batteries (wasteful) or, better, Power over Ethernet – but the same design limitation remains: just as people are unlikely to have a wall-mounted GPO available, they’re even less likely to have a wall-mounted RJ45 Ethernet port available. I suspect this will change as more and more people have networked devices on their walls, so I don’t see it as a major limitation.
    Note to self: I need to include wall-mounted GPOs and RJ45 sockets in my home renovation master plan
  • Chris Carter’s The Light Clock source code is available on GitHub, but isn’t in its own project. There also aren’t any license files for the different repositories, so I’m not sure if I’m allowed to fork it or issue pull requests.
  • In the settings, you have to manually set whether it’s daylight saving time or not. Given that the clock uses NTP for keeping network time, I’d hoped it could accommodate daylight saving automatically – though NTP only distributes UTC, so the daylight saving rule would have to come from a configured timezone or from geolocating the home wifi network, which may be difficult.
  • Have separate brightness settings for minutes and hours
  • The web interface for adjusting The Light Clock settings would benefit from being responsively designed
  • Can haz API plzkthx 😀

 

Update: trying to mount it to the wall

So, I gave mounting it to the wall a go. This was a nightmare. The two circular openings for hanging The Light Clock are flush with the back of the clock, meaning that I couldn’t mount it with cup hooks, as the hooks were too curved to snag into the openings. I also tried the Command re-usable big hooks: I assembled them with the hooks in the openings first, then tried to stick the entire lot to the wall, with no success. Definitely frustrating. Even if I had got it mounted on the wall, I would still have had a cord trailing down the wall, and the 1.5m power cord is still insufficient to reach the GPO.

Was anyone else able to mount this successfully? How did you do it?