Greg McKeown’s easy-to-read ‘Essentialism’ is a field manual – a guide for the busy manager or multi-tasker who is poor at saying no to commitments, and who erroneously believes they can do it all. Reading this book is a valuable use of time for the new manager, or for the seasoned leader whose success has bred too many competing projects.
The overarching frame of reference is that there are two types of managerial and leadership behaviour (the book conflates management and leadership) – Essentialist and Non-Essentialist – and that effectiveness is the product only of the former.
The book is well structured and each chapter clearly articulates an aspect of being ‘non-essential’ – illustrating the consequences with (at times, kitsch) anecdotes. The solution is then provided, in the form of take-away behaviours that can be practised over time.
This book would have been improved with the addition of the following artefacts:
– A wall guide or infographic contrasting ‘Essentialist’ and ‘Non-Essentialist’ behaviours, for quick reference
– A maturity model or similar allowing the fledgling leader to self-rate their behaviours
I also found the book light on solid empirical research; much of the narrative is fleshed out with anecdotes rather than evidence, which detracts from its overall credibility.
This year, linux.conf.au 2017 headed to the picturesque state of Tasmania, to Hobart’s Wrest Point convention centre, and the theme of the conference was ‘the future of open source’. My key takeaway from the conference was that:
The future will be built on trust, and trust takes many forms –
Trusting that data and systems have confidentiality, integrity and availability – traditional security
Trusting that digital experiences will be pleasant, safe and as frictionless as possible – user experience and community experience
Trusting that people will build the future that they want – agency and empowerment
This blog post is going to explore some of my picks from the conference through these lenses.
Security, privacy and integrity
Security, privacy and integrity were recurring themes of the conference.
Michael Cordover – The Future of Privacy
Michael Cordover’s talk, ‘The Future of Privacy’, was perhaps the most thought-provoking talk around privacy. Michael provided a history of privacy, underscoring how technology has shaped notions of what it means to be left alone, and what it means to have personal data remain private. In our ubiquitously-connected, always-on world, it’s becoming harder to delineate what informed consent means – given that data can be inferred by association (which is exactly how Tapad’s technology is designed). It’s also harder for people to be aware of how apps and platforms are using data – terms and conditions are hard to read, and detract from usability. Practically, it’s hard to own your own data – you essentially have to run your own services. Open systems, decentralisation, federation and non-permissive by default are Cordover’s answers to these problems – but they all pay a usability price. In Cordover’s words,
There’s no easy path forward that ordinary people can take.
David Bell – In Case of Emergency: Break Glass – BCP, DRP, & Digital Legacy
As a first-time linux.conf.au speaker, David delivered a solid presentation covering business continuity planning, disaster recovery planning and digital legacy. His focus was on ensuring that appropriate planning is done before a business interruption event occurs. He also covered personal digital legacy – an almost-unexplored topic: for example, would the people you leave behind when you die know how to access your passwords?
George Fong – The Security and Integrity of the Internet
The key takeaway from George’s talk that continued to resonate for days afterwards was:
Trust is the byproduct of integrity
Using examples such as Dirty COW and Heartbleed, Fong opined that we as an open source community need to make sure that Linux – the foundation on which the internet rests – is trustworthy. Bugs are only shallow if many eyeballs are on them, and all too often there aren’t enough eyeballs. Using the analogy of seatbelts, and how few of us would ever feel safe driving without one, he articulated how the internet is in many ways still a frontier, devoid of the strong security measures and protocols that ensure safety and integrity – and therein, trust.
Touching on another key theme of the conference – agency and empowerment – he urged the audience to grasp that they, we, the open source community are the voices of the internet. Fong encouraged us to use those voices to better educate the public on what we do – we need to promote our activities to strengthen integrity. Things are broken – and we’re not helping. It’s up to us to fix the problem.
On a side note, as the recently-elected President of Linux Australia, I’m looking forward to working with George, and recently-appointed Chair of Internet Australia, Anne Hurley, to identify how we can work collaboratively together on some of these aims – as Internet Australia and Linux Australia have some overlap in mission, values and remit.
Jon Oxer – Network Protocol Analysis for IoT Devices
Nowhere are security, privacy and integrity more pressing than in the field of the Internet of Things. There were several IoT-related talks this year, but two stood out. Firstly, Jon Oxer’s talk on Network Protocol Analysis for IoT Devices was an eye-opener into the history of the radio frequency spectrum, how some of it is unregulated, and moreover how device protocols can be reverse engineered with simple equipment and a penchant for code-breaking. Oxer showed how simple it is to launch a man-in-the-middle attack on IoT devices on the RF 422 MHz band by intercepting their transmissions, decoding their protocols and then mounting a replay attack. We definitely need better encryption in IoT.
Christopher Biggs – How to Defend Yourself from your Toaster
Christopher Biggs also gave an excellent IoT security talk – ‘How to defend yourself from your toaster’ – but tackled it from the perspective of an IoT device manufacturer or developer, clearly articulating which features and functions should be included in new IoT devices. Although he didn’t frame it as such, his talk essentially outlined a maturity model for IoT devices. For example, devices with low maturity have poor user interfaces, no provision for maintenance, and employ poor security practices – such as leaving insecure protocols like telnet enabled. He provided useful advice for improving maturity, for instance port-scanning devices to see which ports are open and what data is being transmitted. One of the key takeaways was that if you are designing an IoT device, or managing a fleet of them, you should get someone else to do the hard parts. Apple, Amazon and Google all now have SDKs available for IoT, but the drawback is that most of them are not open source.
Biggs spoke of a metric I hadn’t heard before in this space – MTT1C, mean time to first compromise – the length of time it takes for an IoT device to be compromised once it’s placed on the public internet. This got me thinking that I haven’t seen a capability maturity model for enterprise IoT anywhere – covering, for instance, the practices, support, metrics and continuous improvement that would be needed in a large organisational deployment of IoT. Perhaps this is something that the bodies in this space – the Open Connectivity Foundation, BITAG and Resin.io – will develop in time.
Dr Vanessa Teague – Election Software
Dr Vanessa Teague gave one of my favourite talks of the conference on e-voting systems, and the general problem of end to end verification. Using a number of examples of how companies have (or have not) implemented verification, she articulated a number of anomalies with current e-voting systems in NSW, which are soon to be used in both WA and Victoria. Given the recent controversy around United States elections, this talk was particularly timely, and gave rise to a number of uncomfortable questions – such as just how many votes does it take to change an election result, and possibly the course of history?
One of the most resonant points in Dr Teague’s talk was the rejection of an e-voting system – V-Vote – which had superior verification capabilities but poor user experience and usability. This touches on the second theme that emerged from #lca2017 – it is not sufficient for a product, tool or platform to be functional; it must also have form. People are persuaded by the shiny, and rather than scoffing at this – the default behaviour for much of our community – we need to recognise and respond to it.
Dr Teague was an engaging, humorous and articulate speaker, and I’d really like to hear more from her in future conference lineups.
It may be unusual to relate user experience and customer / community experience to trust, but I see it as fitting. Our experience with a task, a process, or an interaction either enhances or erodes our trust in the organisation, platform or person with whom we’re interacting.
Donna Benjamin – I am your User, why do you Hate me?
Donna Benjamin’s excellent talk aimed to bring a user experience / human-centred design perspective to open source developers by questioning some of the fundamental ‘defaults’ we tend to hold. Using project onboarding experiences as a lens to explore how we treat newcomers, she demonstrated that our actions are turning people away from open source – exactly the opposite of the effect we’re aiming for. She outlined how contributions in triage, review and testing are not valued as highly as code contributions, again presenting a barrier to increasing participation and diversity. Benjamin argued for the open source community to see users not in terms of what they can’t do – develop software – but as people, with needs and emotions.
This talk highlighted for me the lack of design thinking, human-centred design and user experience practices applied not just to open source products, but to open source communities in general. Reducing ‘friction’ – friction being the antithesis of good user experience – is something that both open source products and open source communities need to get better at.
Rikki Endsley – The proper care and feeding of communities and carnivorous plants
Rikki Endsley’s talk likewise touched on how managing communities is a complex task, often fraught with pitfalls. The key takeaway was that you can’t change everything at once – you need to change elements of the community carefully, and have the metrics in place to measure the impact of each change.
VM Brasseur – The Business of Community
VM Brasseur’s talk was a practical guide for people working inside companies to ‘sell’ support of open source projects to management. The talk was framed around three key topics – benefits, costs and implementation. Benefits such as word-of-mouth marketing, stronger brand recognition and more effective upstream support are all selling points. One of the strong points of this talk was its recognition of the in-kind, non-monetary support that business can give open source communities, such as the provision of meeting space, marketing, guidance, leadership and mentoring. In particular, Brasseur cautioned that businesses should ask the community what it needs – rather than making assumptions and providing, for instance, unwanted promotional goodies. Although implementation plans will vary across companies, Brasseur provided some generic advice, such as having clear goals and objectives for community support, setting expectations, and being transparent about the company’s intentions.
Nadia Eghbal – Consider the Maintainer (keynote)
Nadia’s keynote brought to the fore many simmering tensions within the open source community. Essentially, the burden of maintaining open source software falls to a few dedicated maintainers, who in some cases may be supporting a product with a user base of tens of thousands of users.
Eghbal set out four freedoms for open source producers and maintainers:
The freedom to decide who participates in your community
The freedom to say no to contributions or requests
The freedom to define the priorities and policies of the project
The freedom to step down or move on from a project, temporarily or permanently
Whether these freedoms are embraced and used to support open source maintainers remains to be seen.
Agency and empowerment
The third key theme reflected in the conference programme was that of agency and empowerment – making the changes that we want to see in the open source world.
Pia Waugh – Choose your own adventure
Pia Waugh kicked off this theme, delivering the first conference keynote, where she gave a retrospective on human evolution, and then extrapolated this to the future of open source, articulating how we’re likely to see a decentralisation of power in order to strengthen democracy. She went on to challenge a number of existing paradigms, calling them out as anachronisms as the world has evolved.
This talk was full of Waugh’s trademark energy and vibrancy, and was an excellent choice to open the conference.
Dr Audrey Lobo-Pulo – Publicly Releasing Government Models
Dr Audrey Lobo-Pulo’s talk extended the open data movement by advocating for the public release of government policy models as open source – the financial and economic models used to assess public policy decisions – in essence, virtual worlds in which to explore the implications of policy.
The key takeaway from her talk was that industry and business also stand to benefit greatly from the release of these models, as they could then be combined with private data in a unique public-private partnership. Lobo-Pulo also put forward the four components of government policy models, and how each contributes to the accuracy and validity of the model.
Karen M. Sandler – Surviving the Next 30 Years of Free Software
Karen’s sensitive and tactful talk recognised that, as a community, many of our pillars and key contributors are ageing, and that over the next few years we are likely to bid goodbye to many in our community. Her talk explored the different ways in which copyright can be assigned after death, and the key issues to consider – empowering us to make informed and well-founded decisions while we are in a position to do so. Few presenters could have handled this difficult topic with such aplomb, and as usual Karen’s grace, wit and wisdom shone through.
Again, linux.conf.au delivered engaging, thought-provoking and future-looking talks from a range of experienced, vibrant and wise speakers – and again it was an excellent investment of time. The diversity of speakers this year was excellent, if perhaps erring on the non-technical side.
Open source still faces a number of challenges – the ecosystem is often underfunded, maintainers are prone to burnout and we still haven’t realised that UX needs to be a key part of what we’re all about. But that’s part of the fun – we have the power to evolve just like the rest of the world.
And I can’t wait for a bit of history repeating at Sydney 2018!
After learning a lot of new techniques and approaches (and gotchas) in d3.js in my last data visualisation (Geelong Regional Libraries by branch), I wanted to turn my new-found skills to Linux Australia’s end of year report. This is usually presented in a fairly dry manner at the organisation’s AGM each year, and although we have a Timeline of Events, it was time to add some visual interest to the presentation of data.
Collecting and cleaning the data
The dataset I chose to explore was the organisation’s non-event expenses – that is, expenditure not tied to specific events: items like insurance, stationery, subscriptions to online services and so on. These were readily available in the accounting software, Xero, and a small amount of data cleansing yielded a simple CSV file. The original file had a ‘long tail’ distribution – there were many data points with only marginal values that didn’t help in explaining the data, so I combined these into an ‘other’ category.
Visualising the data
Using the previous annular (donut chart) visualisation as the base, I set some objectives for the visualisation:
The colours chosen had to match those of Linux Australia’s branding
The donut chart required lines and labels
The donut chart required markers inside each arc
The donut chart had to be downloadable in svg format so that it could be copied and pasted into Inkscape (which has svg as its standard save format)
There was much prototyping involved in colour selection. The first palette used shades of a base colour (#ff0000 – red), but the individual arcs were difficult to distinguish. A second attempt added many (16) colours to the palette, but they didn’t work as a colour set. I settled on a combination of three colours (red, yellow, dark grey) and shades of these, with the shades becoming less saturated for arcs with smaller values.
For anyone interested, the colour range was defined as a d3.scaleOrdinal object.
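A sketch of such a scale (the hex values here are placeholders, not the actual Linux Australia brand colours; a plain-JS stand-in for d3.scaleOrdinal is included so the snippet runs on its own, without d3):

```javascript
// Illustrative palette: three base colours (red, yellow, dark grey)
// followed by progressively less saturated shades for the smaller arcs.
const palette = ["#cc0000", "#e5b800", "#333333",
                 "#e06666", "#f1c232", "#999999"];

// With d3 loaded this would simply be:
//   const colour = d3.scaleOrdinal().range(palette);
// An ordinal scale remembers the order in which category names are
// first seen and assigns each one the next colour in its range,
// cycling if it runs out. A minimal stand-in:
function scaleOrdinal(range) {
  const seen = new Map();
  return key => {
    if (!seen.has(key)) seen.set(key, range[seen.size % range.length]);
    return seen.get(key);
  };
}

const colour = scaleOrdinal(palette);
console.log(colour("Insurance"));   // first category gets the first colour
console.log(colour("Stationery"));  // second category gets the second colour
console.log(colour("Insurance"));   // repeat lookups are stable
```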
I hadn’t used lines (polylines) and markers in d3.js before, and this visualisation really needed them, because the data series labels were too wordy to fit easily on the donut chart itself. Several existing examples were particularly useful and relevant in figuring this out.
The key learning from this exercise about svg polylines is that the polyline is essentially a series of x,y Cartesian co-ordinates – the tricky part is actually using the right circular trigonometry to calculate the correct co-ordinates. This took me right back to sin and cos basics, and I found it helpful to sketch out a diagram of where I wanted the polyline points to be before actually trying to code them in d3.js.
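To make the trigonometry concrete, here is a plain-JS sketch (no d3 required; the radii and arc angles are illustrative, not the values from my visualisation) of how the three points of one label polyline can be calculated:

```javascript
// d3.arc measures angles from 12 o'clock, increasing clockwise, so a
// point at a given angle and radius is:
//   x = radius * sin(angle),  y = -radius * cos(angle)
function pointAt(angle, radius) {
  return [radius * Math.sin(angle), -radius * Math.cos(angle)];
}

// Hypothetical radii and arc span.
const innerRadius = 100;   // where the line leaves the arc
const outerRadius = 130;   // the "elbow", just past the outer arc
const startAngle = Math.PI / 3, endAngle = (2 * Math.PI) / 3;
const midAngle = (startAngle + endAngle) / 2;  // bisects the arc

const start = pointAt(midAngle, innerRadius);
const elbow = pointAt(midAngle, outerRadius);

// The final point runs horizontally towards the label: rightwards for
// arcs on the right half of the donut (angle < PI), leftwards otherwise.
const side = midAngle < Math.PI ? 1 : -1;
const label = [160 * side, elbow[1]];

// Joined up, the three points form the polyline's "points" attribute.
const points = [start, elbow, label]
  .map(p => p.map(v => Math.round(v)).join(","))
  .join(" ");
console.log(points); // "100,0 130,0 160,0" for this arc
```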
A gotcha that tripped me up for about half an hour here was that I hadn’t correctly associated the markers with the polylines – because the markers only had a class attribute, but not an id attribute. Whenever I use markers on polylines from now on, I’ll be specifying both class and id attributes.
I initially experimented with a polyline that was drawn not just from the centroid of the arc for each data point out past the outerArc, but one that also went horizontally across to the left / right margin of the svg. While I was able to achieve this eventually, I couldn’t get the horizontal spacing looking good because there were so many data points on the donut chart – this would work well with a donut chart with far fewer data points.
Markers were also generally straightforward to get right, after reading up a bit on their attributes. Again, one of the gotchas I encountered here was ensuring that the markerWidth and markerHeight attributes were large enough to contain the entire marker – for a while, the markers were getting truncated, and I couldn’t figure out why.
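For illustration, a marker definition along these lines (the id, class, sizes and colour here are hypothetical) shows both gotchas at once – the marker carries an id as well as a class, and its viewport is large enough to contain the whole shape:

```xml
<defs>
  <!-- Both class and id are set: polylines reference the marker by id,
       e.g. marker-end="url(#arc-marker)". -->
  <marker id="arc-marker" class="arc-marker"
          markerWidth="8" markerHeight="8"
          refX="4" refY="4" orient="auto">
    <!-- The circle (radius 3, centred at 4,4) fits inside the 8x8
         marker viewport; a smaller viewport would truncate it. -->
    <circle cx="4" cy="4" r="3" fill="#cc0000"/>
  </marker>
</defs>
```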
Once the positioning for the polylines was solved, then positioning the labels was relatively straightforward, as many of the same trigonometric functions were used.
The challenge I encountered here was that d3.js has no text-wrapping solution built into the package, although alternative approaches had been documented elsewhere. The underlying issue is that svg itself has no automatic text wrapping: a tspan element positions a span of text, but it does not wrap it. That is, I can’t just append tspan elements to text elements and expect word-wrapping – the line breaks have to be computed manually.
Example block from Mike Bostock – ‘Wrapping long labels’: in which Mike Bostock (the creator and maintainer of d3.js) has written a custom function for wrapping text.
In the end I just abbreviated a couple of the data point labels rather than sink several hours into text-wrapping approaches. It seems odd that svg provides such poor native support for text wrapping, but considering the myriad ways that text – particularly foreign-language text – can be wrapped, it’s an incredibly complex problem.
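To illustrate why a custom function is needed, here is a greedy line-breaking sketch in plain JS. Real solutions such as Bostock’s measure rendered width with getComputedTextLength() and emit one tspan per line; the character budget used here is a stand-in so the sketch runs anywhere:

```javascript
// Greedy word wrap: add words to the current line until the next word
// would exceed the budget, then start a new line.
function wrapWords(text, maxChars) {
  const lines = [];
  let line = "";
  for (const word of text.split(/\s+/)) {
    const candidate = line ? line + " " + word : word;
    if (candidate.length > maxChars && line) {
      lines.push(line);  // budget exceeded: the word starts a new line
      line = word;
    } else {
      line = candidate;
    }
  }
  if (line) lines.push(line);
  return lines;  // in SVG, each entry would become its own tspan
}

console.log(wrapWords("Subscriptions to online services", 15));
```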
The next challenge with this visualisation was to allow the rendered svg to be downloaded – as the donut chart was intended to be part of a larger infographic. Again, I was surprised that a download function wasn’t part of the core d3.js library, but again a number of third party functions and approaches were available:
Example block from Miłosz Kłosowicz – ‘Download svg generated from d3’: in this example, the svg node is converted to base-64 encoded ASCII then downloaded.
d3-save-svg plugin: this plugin provides a number of methods to download the svg, and convert it to a raster file format (such as PNG). This is a fork of the svg-crowbar tool, written for similar purposes by the New York Times data journalism team.
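Of these, the first approach – converting the svg node to a base-64 data URI – is easy to sketch in isolation. In the snippet below, node’s Buffer stands in for the browser’s btoa() so the sketch runs anywhere, and the filename is illustrative; in the browser the markup would come from new XMLSerializer().serializeToString(svgNode):

```javascript
// Serialise the svg markup, base-64 encode it, and use the result as
// the href of a download link.
function svgToDataUri(svgMarkup) {
  const base64 = Buffer.from(svgMarkup, "utf8").toString("base64");
  return "data:image/svg+xml;base64," + base64;
}

const uri = svgToDataUri('<svg xmlns="http://www.w3.org/2000/svg"/>');
console.log(uri.startsWith("data:image/svg+xml;base64,")); // true

// Browser-side, the URI would feed a transient anchor element:
//   const a = document.createElement("a");
//   a.href = uri;
//   a.download = "expenses-donut.svg";
//   a.click();
```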
I chose the d3-save-svg plugin simply because of the abstraction it provided. However, I came up against a number of hurdles. When I first used the example code to create a download button, the download function was not being triggered; to work around this, I referenced the svg object by id instead.
The other hiccup with this approach was that CSS rules were not preserved in the svg download if the CSS selector had scope outside the svg object itself. For instance, I had applied basic font styling rules to the entire body selector, but in order for font styling to be preserved in the download, I had to re-specify the font styling at the svg selector level in the CSS file. This was a little frustrating, but the ease of using a function to do the download compensated for this.