Geelong Libraries by branch – a data visualisation

Estimated reading time: 9 minutes

Geelong Regional Library Corporation (GRLC) came on board GovHack this year, and as well as being a sponsor, opened a number of datasets for the hackathon. As the lead organiser for GovHack, I didn’t actually get a chance to explore the open data during the competition. However, while I was going for a walk one day, an idea came to me – as it always does – around how the GRLC datasets could be visualised. I’d previously done some work visualising data using a d3.chord layout, and while this data wasn’t suitable for that type of layout, the concept of using annulars – donut charts – to represent and compare the datasets seemed appealing. There was only one problem – I’d never tackled anything like this before.

Challenge: accepted

Understanding what problem I was trying to solve

Of course the first question here was what problem I was trying to solve (thanks Radia Perlman for teaching me to always solve the right problem – I’ll never forget your LCA2013 keynote). Was this an exploratory data visualisation or an explanatory one? This led to formulating a problem statement:

How do the different Libraries in the Geelong region compare to each other in terms of holdings, membership, visits and other attributes?

This clearly established some parameters for the visualisation: it was going to be exploratory, and comparative. It would need a way to identify each Library – likely via a colour code – and appropriate use of shapes and axes to allow for comparison. While I was tempted to use a stacked bar chart, I really wanted to dig deeper into d3.js and extend my skills in this Javascript library – so I resolved to visualise the data using circular rings.

Colour selection

The first challenge was to ensure that the colours of the visualisation were both appealing and appropriate. While this seems an unlikely starting place for a visualisation – with most practitioners opting to get the basic shape right first – for this project getting the colours right felt like the best starting point. For inspiration, I turned to the Geelong Regional Library Corporation’s Annual Report, and used the ColorZilla extension to eyedropper the key brand colours used in the report. However, this only provided about 7 colours, and I needed 17 in order to map each of the different libraries. In order to identify ‘in between’ colours, I used this nifty tool from Meyerweb, which is super-handy for calculating gradients. The colours were then used as an array for a d3.scaleOrdinal object, and mapped to each library.
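As an aside, the ‘in between’ shades can also be computed in code rather than with a gradient tool. Below is a minimal sketch of linearly blending two hex colours – the function names are mine, and the sample colours are placeholders rather than GRLC’s actual palette:

```javascript
// Linearly interpolate between two hex colours; t ranges over [0, 1].
function mixHex(a, b, t) {
  var pa = parseInt(a.slice(1), 16);
  var pb = parseInt(b.slice(1), 16);
  // Blend each of the red, green and blue channels separately.
  var channels = [16, 8, 0].map(function (shift) {
    var ca = (pa >> shift) & 0xff;
    var cb = (pb >> shift) & 0xff;
    return Math.round(ca + (cb - ca) * t);
  });
  return '#' + channels.map(function (c) {
    return ('0' + c.toString(16)).slice(-2);
  }).join('');
}

// Build n evenly-spaced colours spanning a gradient between two endpoints.
function gradientStops(from, to, n) {
  var stops = [];
  for (var i = 0; i < n; i++) {
    stops.push(mixHex(from, to, n === 1 ? 0 : i / (n - 1)));
  }
  return stops;
}
```

Calling gradientStops with a pair of brand colours and the number of rings needed yields the intermediate shades directly, which could then feed the d3.scaleOrdinal range.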

var color = d3.scaleOrdinal()
    .domain([
        "Geelong West",
        "Waurn Ponds",
        "Ocean Grove",
        "Mobile Libraries",
        "Barwon Heads",
        "Western Heights College"
        // ... and the remaining libraries
    ])
    .range(colours); // colours: the 17-element array derived from the brand palette

Annular representation of data using d3.pie

First step in annular representation

The first attempt at representing the data was … a first attempt. While I was able to create an annular representation (donut chart) from the data using d3.pie and d3.arc, the labels of the Libraries themselves weren’t positioned well. The best tutorial I’ve read on this topic by far is from data visualisation superstar Nadieh Bremer, over on her blog, Visual Cinnamon. I decided to leave labels on the arcs as a challenge for later in the process, and instead focus on the next part of the visualisation – multiple annulars in one visualisation.
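For readers new to d3.pie, the layout is essentially just turning values into cumulative angles around the circle. A rough sketch of the idea in plain Javascript – an illustration of the concept, not d3’s actual implementation:

```javascript
// Convert an array of values into arc segments with start/end angles,
// the way a pie/donut layout does. Angles are in radians and the
// segments together cover the full circle.
function pieAngles(values) {
  var total = values.reduce(function (sum, v) { return sum + v; }, 0);
  var angle = 0;
  return values.map(function (v) {
    var segment = {
      value: v,
      startAngle: angle,
      endAngle: angle + (v / total) * 2 * Math.PI
    };
    angle = segment.endAngle; // next segment starts where this one ends
    return segment;
  });
}
```

d3.arc then takes each start/end angle pair, plus an inner and outer radius, and generates the svg path for the ring segment.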

Multiple annulars in one visualisation

Annular representation of data - step 2

The second challenge was to place multiple annulars – one for each dataset – within the same svg. Normally with d3.js, you create an svg object which is appended to the body element of the html document. So what happens when you place two d3.pie objects on the svg object? You guessed it! Fail! The two annulars were positioned one under the other, rather than one on top of the other. I was stuck on this problem for a while, until I realised that the solution was to place different annulars on different layers within the svg object. This also gave more control over the visualisation. However, SVG doesn’t have layers as part of its definition – objects in SVG are drawn one on top of the other, with the last drawn object ‘on top’ – sometimes called stacking. But by creating groups within the BaseSvg, like the below, for shapes to be drawn within, I was able to approximate layering.

var BaseSvg = d3.select("body").append("svg")
    .attr("width", width)
    .attr("height", height)
    .attr("transform", "translate(" + (width / 2 - annularXOffset) + "," + (height / 2 - annularYOffset) + ")");

  Layers for each annular

var CollectionLayer = BaseSvg.append('g');
var LoansLayer      = BaseSvg.append('g');
var MembersLayer    = BaseSvg.append('g');
var EventsLayer     = BaseSvg.append('g');
var VisitsLayer     = BaseSvg.append('g');
var WirelessLayer   = BaseSvg.append('g');
var InternetLayer   = BaseSvg.append('g');
var TitleLayer      = BaseSvg.append('g');
var LegendLayer     = BaseSvg.append('g');

At this point I found Scott Murray’s SVG Primer very good reading.

The annulars are now positioned concentrically

I was a step closer!

Adding in parameters for spacing and width of the annulars

Once I’d figured out how to get annulars rendering on top of each other, it was time to experiment with the size and shape of the rings. In order to do this, I tried to define a general approach to the shapes that were being built. That general approach looked a little like this (well, it was a lot more scribble).

General approach to calculating size and proportion of multiple annulars

By being able to define a general approach, I was able to declare variables for elements such as the annular width and annular spacing, which became incredibly useful later as more annulars were added – the positioning and shape of the arcs for each annular could be calculated mathematically using these variables (see the source code for how this was done).

var annularXOffset  = 100; // how much to shift the annulars horizontally from centre
var annularYOffset  = 0; // how much to shift the annulars vertically from centre
var annularSpacing  = 26; // space between different annulars
var annularWidth    = 22; // width of each annular
var annularMargin   = 70; // margin between annulars and canvas
var padAngle        = 0.027; // amount that each segment of an annular is padded
var cornerRadius    = 4; // amount that the sectors are rounded

This allowed me to ‘play around’ with the size and shape of the annulars until I got something that was ‘about right’.
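To illustrate how those variables combine, here’s a sketch of the radius arithmetic, assuming rings are numbered from the outside in. The function and parameter names (annularRadii, svgSize, maxRadius) are mine, not from the project’s source:

```javascript
var annularSpacing = 26; // space between different annulars
var annularWidth   = 22; // width of each annular
var annularMargin  = 70; // margin between annulars and canvas

// Outer and inner radius for the i-th annular, counting from the outside in.
function annularRadii(i, svgSize) {
  var maxRadius = svgSize / 2 - annularMargin;
  var outerRadius = maxRadius - i * (annularWidth + annularSpacing);
  return { outerRadius: outerRadius, innerRadius: outerRadius - annularWidth };
}
```

With this in place, adding or removing a dataset, or tweaking the ring width, becomes a one-variable change rather than a re-layout of every arc.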

Annular spacing overlapped


Annular widths and spacing looking better

At this stage I also experimented with the padAngle of the annular arcs (also defined as a variable for easy tweaking), and with the stroke weight and colour, which was defined in CSS. Again, I took inspiration from GRLC’s corporate branding.

Placing dataset labels on the arcs

Now that I had the basic shape of the visualisation, the next challenge was to add dataset labels. This was again a major blocking point, and it took me a lot of tinkering to finally realise that the dataset labels would need to be svg text, sitting on paths created from separate arcs to those rendered by the d3.pie function. Without separate paths, the text wrapped around each arc segment in the annular – shown below. So, for each dataset, I created a new arc and path for the dataset label to be rendered on, and then appended a text element to the path. I’d never used this technique in svg before and it was an interesting learning experience.

Text on arcs is a dark art
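The resulting svg structure looks something like the fragment below – an invisible guide arc carries the label, and a textPath references it. The ids, geometry and label here are illustrative, not the project’s actual markup:

```xml
<!-- hidden guide arc for the label (no fill, no stroke) -->
<path id="labelArcCollection" fill="none"
      d="M -250 0 A 250 250 0 0 1 250 0" />

<!-- the dataset label, rendered along that arc -->
<text class="datasetLabel" dy="-5">
  <textPath xlink:href="#labelArcCollection" startOffset="25%">
    Collection
  </textPath>
</text>
```

The startOffset attribute is what lets you slide the label around the ring, which is handy when annulars of different radii need their labels vertically aligned.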

Having sketched out a general approach again helped here, as with the addition of a few extra variables I was able to easily create new arcs for the dataset text to sit on. A few more variables to control the positioning of the dataset labels, and voila!

Dataset labels looking good

Adding a legend

The next challenge was to add a legend to the diagram, mostly because I’d decided that the infographic would be too busy with Library labels on each data point. This again took a bit of working through, because while there’s a d3.legend plugin for constructing legends, it’s only intended for data plotted horizontally or vertically, not 7 datasets plotted on consecutive annulars. This tutorial from Zero Viscosity and this one from Competa helped me understand that a legend is really just a group of related rectangles.

var legend = LegendLayer.selectAll("g")
    .data(color.domain())
    .enter()
    .append('g')
    .attr('class', 'legend')
    .attr('transform', function(d, i) {
        return 'translate(' + (legendPlacementX + legendWidth) + ',' + (legendPlacementY + (i * legendHeight)) + ')';
    });

legend.append('rect')
    .attr('width', legendWidth)
    .attr('height', legendHeight)
    .attr('class', 'legendRect')
    .style('fill', color)
    .style('stroke', legendStrokeColor);

legend.append('text')
    .attr('x', legendWidth + legendXSpacing)
    .attr('y', legendHeight - legendYSpacing)
    .attr('class', 'legendText')
    .text(function(d) { return d; });

The legend isn’t positioned correctly

Again, the positioning took a little work, but eventually I got the legend positioned well.

The legend is finally positioned well

Responsive design and data visualisation with d3.js

One of the other key challenges with this project was attempting to have a reasonably responsive design. This appears to be incredibly hard to do with d3.js. I experimented with a number of settings to aim for a more responsive layout. Originally, the narrative text was positioned in a sidebar to the right of the image, but at different screen resolutions the CSS float rendered awkwardly, so I decided to use a one column layout instead, and this worked much better at different resolutions.

Next, I experimented with using the Javascript values innerWidth and innerHeight to help set the width and height of the svg element, and also dynamically positioned the legend. This gave a much better, though not perfect, rendering at different resolutions. It’s still a little hinkey, particularly at smaller resolutions, but it’s an incremental improvement nonetheless.
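The sizing logic boils down to a pure function of the viewport, something like the sketch below. The clamping bounds are illustrative, not the values the visualisation actually uses:

```javascript
// Derive svg dimensions from the viewport, clamped so the visualisation
// neither collapses below a readable size nor sprawls on large monitors.
function svgDimensions(viewportWidth, viewportHeight) {
  var width  = Math.max(600, Math.min(viewportWidth - 40, 1200));
  var height = Math.max(600, Math.min(viewportHeight - 40, 1000));
  return { width: width, height: height };
}

// In the browser this would be wired up as:
//   var dims = svgDimensions(window.innerWidth, window.innerHeight);
// and re-run from a window 'resize' listener.
```

Keeping the calculation in one pure function also makes it easy to test away from the browser.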

Thinking this through more deeply, although SVG and d3.js in general are vector-based, and therefore lend themselves well to responsive design to begin with, there are a number of elements which don’t scale well at different resolutions – such as text sizes. Unless all these elements were to be made dynamic, and likely conditional on viewport and orientation, then it’s going to be challenging indeed to produce a visualisation that’s fully responsive.

Adding tooltips

While I was reasonably pleased with the progress on the project, I felt that the visualisation needed an interactive element. I considered using some sort of arc tween to show movement between data sets, but given that historical data (say for previous years) wasn’t available, this didn’t seem to be an appropriate choice.

After getting very frustrated with the lack of built in tooltips in d3.js itself, I happened upon the d3.tip library. This was a beautifully written addition to d3.js, and although its original intent was for horizontal and vertical chart elements, it worked passably on annular segments.

Adding tooltips

Drawbacks in using d3.tip for circular imagery

One downside I found in using this library was the way in which it considers the positioning of the tooltip – this has some unpredictable, and visually unpleasant, results when data is being represented in circular format. In particular, the way that d3.tip calculates the ‘centre’ of the object that it is applied to does not translate well to arc and circular shapes. For instance, look at how the d3.tip is applied to arc segments that are large and have only small amounts of curvature – such as the Geelong arc segment for ‘Members’. I’ve had a bit of a think about how to solve this problem, and the solution involves a more optimal approach to calculating the ‘centre’ point of an arc segment.

This is beyond what I’m capable of with d3.js, but I wanted to call it out as a future enhancement and exploration.
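For the record, one possible approach is to take the angular and radial midpoints of the segment and convert from polar to Cartesian coordinates, using d3’s convention of angle zero at twelve o’clock, increasing clockwise. This is essentially what d3.arc’s centroid method computes; the sketch below is an illustration rather than a drop-in fix for d3.tip:

```javascript
// Mid-point of an arc segment in Cartesian coordinates, relative to the
// annular's centre. Angles in radians, measured like d3.pie: 0 at twelve
// o'clock, increasing clockwise.
function arcSegmentCentre(startAngle, endAngle, innerRadius, outerRadius) {
  var midAngle  = (startAngle + endAngle) / 2;
  var midRadius = (innerRadius + outerRadius) / 2;
  return {
    x: Math.sin(midAngle) * midRadius,
    y: -Math.cos(midAngle) * midRadius
  };
}
```

Anchoring the tooltip at this point, rather than at the bounding box d3.tip derives, would keep it over the visible ink of the segment even for long, shallow arcs.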

Adding percentage values to the tooltip with d3.nest and d3.sum

The next key challenge was to include the percentage figure, as well as the Library and data value, in the d3.tip. This was significantly more challenging than I had anticipated, and meant reading up on the d3.nest and d3.sum functions. These tutorials from Phoebe Bright and LearnJS were helpful, and Zan Armstrong’s tutorial on d3.format helped me get the precision formatting correct. After much experimentation, it turned out that summing the values of each dataset (in order to calculate percentage) was a mere three lines of Javascript:

var CollectionItemCount = d3.nest()
    .rollup(function (v) { return d3.sum(v, function (d) { return d.Items; }); })
    .entries(CollectionData); // CollectionData: the rows of the Collection dataset
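For comparison, the equivalent without d3, with a made-up two-row dataset standing in for the real Collection data:

```javascript
// Sum the Items field across a dataset, then express one row as a
// percentage of that total. The sample rows are invented for illustration.
function percentageOfTotal(rows, row) {
  var total = rows.reduce(function (sum, r) { return sum + r.Items; }, 0);
  return (row.Items / total) * 100;
}

var sample = [
  { Library: 'Geelong',     Items: 75 },
  { Library: 'Ocean Grove', Items: 25 }
];
```

The d3.nest version earns its keep once you have keyed groupings as well as a grand total; for a single flat sum the reduce is all there is to it.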

Concluding remarks

Data visualisation is much more challenging than I thought it would be, and the learning curve for d3.js is steep – but it’s worth it. This exercise drew on a range of technical skills, including circular trigonometry, HTML and knowledge of the DOM, CSS and Javascript, and above all the ability to ‘break a problem down’ and look at it from multiple angles (no pun intended).

Australian Internet Governance Forum 2016

Estimated reading time: 8 minutes

The Australian Internet Governance Forum – #auigf – was held at the Park Hyatt, Melbourne, October 11th-12th, 2016. This was the first time I’d had an opportunity to attend the #auigf, and I wasn’t sure what to expect. Internet users are a diverse cohort – and auDA, the regulator for the .au namespace and the body which auspices #auigf, classifies members into a supply class – those providing internet services – and a demand class – those consuming them.

My first impression was one of surprise. The #auigf theme for the forum was ‘a focus on a competitive digital future for Australia’  – and given the significant influence that digital technology, policy and communities will play in an era of digital disruption, I couldn’t help but wonder why more key players weren’t passionate about driving the future of the internet in Australia.


Stuart Benjamin, Chairman of auDA

The regulator has been the subject of criticism in recent years, particularly around its engagement and consultation practices, and long-serving CEO Chris Disspain left the organisation in March, being replaced by former Liberal state parliamentarian, Cameron Boardman. This #auigf was therefore a symbolic opportunity for Boardman to signal to stakeholders the organisation’s new focus. auDA chairman Stuart Benjamin in his opening address tackled this head on, outlining a renewed focus on stakeholder engagement, particularly in the area of building international partnerships, and relatedly, cybersecurity. He framed this strategic shift as auDA ‘growing up’ – moving from adolescence into maturity. In particular he flagged a shift from reactive approaches to domain administration to more proactive approaches, underpinned by stronger relationships, renewed processes and systems, and more innovative thinking. Linking board performance as critical to the success of the organisation, he introduced new Board Directors, Michaella Richards and Dr Leonie Walsh. Continuing the theme of advancing women in the organisation, Benjamin congratulated lawyer Rachael Falk on her appointment as Director of Technology, Security and Strategy, a newly created role tasked with catalysing auDA’s new directions. Acknowledging that auDA needs to win back the trust of the community it serves, Benjamin emphasised higher expectations of auDA – both externally from stakeholders and driven internally by the organisation itself, announcing he will be “seeking a lot more”.

Prof Paul Cornish, former Professor of International Security at Chatham House and independent consultant and author

Prof Cornish outlined how auDA is heading towards a more international posture and developing a number of partnerships. His main argument was that the future of the internet – and the digital economy – needs to be secured. Cybersecurity needs to evolve as the internet does, using a capability maturity model.

Cybersecurity Plenary – Chaired by Rachael Falk, with Alistair MacGibbon, Laura Bell, Prof Chris Leckie, Simon Raik-Allen, Craig McDonald

Rachael Falk opened by drawing attention to the National Cyber Security Strategy, urging attendees to become familiar with it. The discussion quickly turned to why there wasn’t more focus on cyber security, and Prof Cornish had a very incisive response – “interest follows money”. Money is starting to flow to cyber security, and interest will follow. Prof Leckie outlined challenges getting cyber security research from the lab into mainstream commercialisation. Researchers are challenged by the rate of change – for example, hypothetical attacks are quickly becoming reality. Academia is also confronted by getting business and industry to recognise the threat that cyber security presents. The other challenge is getting boards to recognise that cyber security is many different problems – which need many solutions. This is overwhelming for small businesses who “just want it to work”.

One of the best insights on the plenary came from Laura Bell – @lady_nerd on Twitter – who recounted the example of big corporations acquiring smaller firms – who may have a very different security posture, thus putting the larger corporation at risk.

The plenary used the term “happy clickers” to denote people who click on phishing emails without critically assessing their validity. This was the first time I’d heard that term, but it captures the psychological state accurately. Interestingly, there was discussion around how people who are disengaged in their roles are more likely to be ‘happy clickers’ – because the phishing email represents a welcome distraction – another reason to ensure positive employee engagement.

Another very interesting discussion thread in this plenary was the paradox of cyber privacy – people share personal information freely with services like Google and Facebook, but resent government intrusion, as seen recently with the census. This may come down to the compulsion element – it’s about giving information freely versus being compelled to disclose. There’s an element here for government design of online services – another job for the DTO! – around information design. Imagine a census that was voluntary rather than mandatory, but got people to participate because of the social good involved. I think it would be a much more positive process.

This led into a discussion around corporate use of data – and whether consumers understand the value of their own data – essentially we’re trading our data for ‘free products’. For many online services we have to consent to data disclosure to get access to the service, but in the background there’s data matching going on – there’s a ‘creep factor’. The link was drawn from ‘creep factor’ behaviour to brand value – trust and transparency are linked to the public’s view of the brand.

Key takeaway: The pub test for data use – “is it creepy?” If so, don’t do it.

This plenary also covered the practice of ‘hacking back’ – where individuals or businesses use information security counter-measures to retaliate. The consensus in the room was that this is a poor response, largely because identifying the aggressor is so difficult. The group also highlighted that Australia has an offensive cyber capability – again linking cyber security to an international, nation-state based context. The lack of a standard response protocol for dealing with hacking incidents was also covered – many businesses are afraid of disclosing and are reluctant to do so, but having a standard response protocol would allow businesses to respond in a mature way.

In summary, cyber security is hard – there are many layers and issues to consider, there’s a lack of general awareness in business and industry, the field is rapidly changing, and there are no defined response protocols for business to use.

Women in STEM Plenary – Dr Rowan Brookes, Renee Noble, Dr Catherine Lang, Dr Leonie Walsh, Luan Heimlich

Dr Brookes introduced the plenary with an apology for not being able to include more women of colour and from the LGBTQI spectrum, particularly on Ada Lovelace Day. The key themes of needing to address systemic issues and create a pipeline for women in STEM were prevalent throughout the conversation.

What struck me first up with this plenary was the range of initiatives, groups and organisations that are working to further women in STEM, and I wondered whether this fragmentation is actually a disservice – so many voices have less volume.

Key takeaway: Are there too many women-in-STEM groups that are too fragmented? Do we need an Australian ecosystem map of women in STEM / ICT?

Luan Heimlich opened the plenary by asking the audience who young girls look up to, and was met with responses of pop stars, sports celebrities and models. Not a science or technology role model in sight! She followed up by questioning whether these role models are going to solve the problems of tomorrow – digital disruption, climate change and public health – and left the audience to ponder the gap.

Dr Leonie Walsh covered efforts to help encourage early to mid career researchers to further their careers, noting that it’s difficult for women to step out of their careers to have a family – as this often puts them several years behind. She also noted that employers are looking for candidates with more well rounded skills, and her program provides exposure to work environments. Dr Catherine Lang highlighted the influence of pre-service teachers in promoting STEM. Another key thread in this discussion was that professions are socially constructed, and that this can be changed – but it’s an uphill battle because ICT careers are not even on the radar as a career choice for young women.

While programs are having localised success, there are still major gaps at a systemic level, and better consistency and co-ordination is required at a national level.

Behavioural insights panel – Kirstan Corban, Dr Alex Gyani, Christian Stenta, Helen Sharpley

This panel was a series of vignettes centred around how behavioural insights had led to social change. The standout piece was by Alex Gyani, who ran the audience through examples of where minor changes had a major impact – using a framework of:

  • Easy – interventions should be easy for people, but this is hard to do
  • Attractive – the intervention has to be attractive for people
  • Timely – try something, see if it works – don’t be caught in analysis paralysis
  • Social – social norms are a powerful influencer for change

A key concept from Gyani’s talk was the concept of cognitive budget – we have so many choices to make every day we need to think critically about choice architecture.

The other three speakers, from health and government, highlighted case studies that showcased design thinking, co-design, and approaches to difficult problems.

Key takeaway – minor changes can make a big impact

Internet of Things Plenary – Pablo Hinojosa, Matthew Pryor, Phil Goebel, Lorraine Tighe, Dr Kate Auty

Hinojosa opened proceedings by outlining how the internet has reached 3.5 billion users – half of them in Asia – and there are now twice as many internet-connected devices as there are people. We’re on the cusp of a revolution.

Matthew Pryor outlined the use of IoT in agriculture and agribusiness, and emphasised how IoT helps with decision making. He highlighted how it’s hard to scale infrastructure in regional and rural areas – and questioned whether we should be investing in networks that connect people, devices, or both. He gave the example that as soon as farmers leave the farmhouse, they have no internet – they need to go back to the farmhouse to make better decisions, and this reduces their ability to deliver economic benefit. We need to consider the principle of universal access as we build out infrastructure.

Phil Goebel used the Disneyland Magic Band example to highlight how IoT has taken a purely physical experience and used connectivity to enhance that – leading to “augmented experience”. For example, the band allows Disney to know where the longest queues are, how the park is being used, what facilities are important for which demographics – very granular marketing data. He outlined that there are multiple users of the data – different actors in the ecosystem – administration, marketers and the users themselves – using the data gathered by wearables for different purposes. He flagged the issue that there are no guidelines around how the data is being used – for instance is it being sold on – we need to consider transparency.

Lorraine Tighe is the Smart City and Innovation Manager at the City of Melbourne, and outlined how vendors she meets present the IoT as a silver bullet. She outlined the use cases for IoT in smart cities, including parking sensors – to reduce traffic that is searching for a car park – leading to traffic efficiencies. She positioned local government at the coalface of the community, and bringing the community along on the journey – using the City Lab as a vehicle to test and prototype solutions. As part of this, the City of Melbourne made the decision to go open by default with their data, encouraging smart people to co-create with the City.


Dr Kate Auty spoke on projects like RedMap and Atlas of Living Australia providing citizen scientists with tools to protect biodiversity. She related how ‘super science’ projects like AURIN and NECTAR are important for understanding how cities work.

Scott Seely had the quote of the panel, though:



In summary, the #auigf reflected many of the contemporary themes of digital society. Digital disruption and digital society are changing at a rapid pace, and we have a dearth of tools, approaches, standards and response protocols to handle them. We need to start by clearly defining the problems we’re trying to solve, and then tackle them with newer approaches such as design thinking, co-creation and open data. Many of the problems we’re trying to solve require national and international co-operation to build ecosystems, standards and agreed approaches – and the #auigf is a good starting point.


State of my toolchain 2016

Estimated reading time: 8 minutes

In July, I transitioned from a 16-year career in digital and IT with a regional university to setting up my own digital consultancy. This meant that I no longer had a Managed Operating Environment (MOE) to rely on, and instead had to build my own toolchain. Both to document this toolchain, and to provide a snapshot to compare to in the future, this post articulates the equipment, software and utilities I use, from hardware up the stack.


I have three main devices:

  • Asus N76 17.3″ laptop – not really a portable device, but a beast of a work machine. I’ve had this since January 2013, and it hasn’t let me down yet. It has 16GB of RAM and a quad-core Intel(R) Core(TM) i7-3630QM CPU @ 2.40GHz – 8 threads with hyperthreading – and it basically needs its own power station to run. This machine is a joy to own. It speeds through GIMP and video processing operations, and has plenty of grunt for some of the data visualisation (Processing) work that I do. The NVIDIA graphics are beautiful. The only upgrade in this baby’s near term future is to swap out the spinning rust HDDs (x2) with some solid state goodness.
  • Asus Trio Transformer TX201LA – a portable device, useful for taking on trains and to meetings. I’ve had this for around 18 months now, and while it’s a solid little portable device, it does have some downsides. This is a dual operating system device – the screen, which is a touchscreen, and detaches, runs stock Android (which hasn’t had an update since 4.2.2 – disappointing), while I’ve got the base configured via Grub to dual boot Win10 and Ubuntu 16.04 LTS. Switching between the mobile OS and desktop OS is generally seamless, but I’ve had some glitches switching between Ubuntu and Android – in ASUS’ defence, they did tell me that Linux wasn’t supported on this device, and of course you all knew what my response to that was, didn’t you? Challenge: accepted. The hardware on this device is a little less grunty than I’d like – 4GB RAM and an Intel® Core™ i7-4500U processor. It just isn’t enough RAM, and I have to pretty much limit myself to running 3-4 apps at a time, and less than 10 Firefox tabs. But, that said, I *do* like the convenience of having the Android device as well – and the screen is a joy to work with. One little niggle is that VGA / HDMI out are via mini display port – and only a VGA adaptor was provided in the box. I’ll have to get a mini display port to HDMI adapter at some stage, as the world embraces digital video out. For the meantime, I’ll have to party like it’s 1999 with VGA.
  • LG Nexus 5X – my mobile phone. Purchased in January 2016, it’s running stock Android Marshmallow, and I’ve been super happy with how fast Android OTA updates ship to this device. For non-RAM-intensive operations it’s pretty snappy, and the quality of the camera is fantastic. The battery life is pretty good compared to my old Nexus 4, and I can usually go a full day on a charge, if I’m not Ingressing. This device has some pretty major downsides though. The USB-C charging cable is frustrating, given everything else I own charges on micro USB, so I’ve had to shell out for new cables. The RAM on this device just isn’t enough for its processor, and I’m constantly experiencing lag on operations, making for a frustrating user experience. The camera is buggy as hell, and there’s more than once I’ve taken a great shot, only to find it hasn’t been saved. I’ll be looking for a different model next time, but I can’t justify replacing this at the moment – it’s only around 8 months old.

My hardware overview wouldn’t be complete without these other useful peripherals:


The two key wearables I have are the Pebble Time and Fitbit. As Pebble Time’s GPS and fitness tracking capabilities increase, I’m expecting to be able to decom my Fitbit. I can’t imagine living without the Pebble now – it’s a great wearable device. The battery life is pretty good – 3-4 days, and the charging connector is robust – unlike my poor experiences with the Fitbit – both with the device battery itself degrading over time, and having been through 5-6 chargers in 3 years. I’ve Kickstarted the Pebble Core, and can’t wait to see where this product line goes next.


At the operating system level, both my laptops dual boot Windows 10 and Ubuntu 16.04 LTS, with my preference being to use Ubuntu if possible. This generally works well, but there are some document types that I can’t access readily on Ubuntu – such as Microsoft Project. Luckily, most of the work I do these days is web-based. I still need Windows for gaming, because not all the titles I play are delivered via Steam – with the key one being The Secret World. Total addict 🙂

Office productivity

  • LibreOffice – my office suite of choice is LibreOffice. OpenOffice is pretty much dead, and a key driver of that was its time under Oracle’s umbrella. Open source communities don’t want to be owned by large corporates who purchase things, like, oh I don’t know, MySQL, simply to gain market share rather than subscribing to the open source ethos.
  • Firefox – my browser of choice. Yes, I know it’s slower. Yes, I know it’s a memory hog. But it’s Firefox for me. I really like the Sync feature, meaning that the plugins and addons that I have on one installation automatically download on another – very useful when you’re running essentially four machines. My favourite and most used extensions would have to be LeetKey, Awesome Screenshot, Zotero, ColorZilla and of course Web Developer tools.
  • Thunderbird – I run Thunderbird with a bunch of extensions like Enigmail, Lightning (with a Google Calendar integration for scheduling) and Send Later – so that if I write a bunch of emails at 2 am, they actually send at a more humane hour.
  • Zotero – I use Zotero, and its LibreOffice plugin, for referencing. It’s beautiful. And open source.
  • Slack – Slack is the new killer app. I use it everywhere, on all the things. The integrations it has are so incredibly useful. In particular, I use an integration called Tomato Bot for Pomodoro-style productivity.
  • Xero – Yes, I have a paid account to Xero for accounting and bookkeeping. It’s lovely and simple.
  • Trello – For all the project management goodness. I got some free months of Trello Gold, and I’ve let it lapse, but will probably buy it again. It’s US$5 per month and has great integration with Slack. Again, if there were an open source alternative I’d give it a go, but, well, there just isn’t.
  • GitHub and Git – If your office is about digital and technology, then GitHub is an office productivity tool! I use Git from the command line, because it’s just easier than running another application on top of everything else.
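
A typical command-line session looks something like this (the repository path and commit message here are hypothetical, just to sketch the workflow):

```shell
# Create a hypothetical repository and make a first commit
mkdir -p /tmp/demo-repo
cd /tmp/demo-repo
git init -q
git config user.email "demo@example.com"   # local identity for this example only
git config user.name "Demo User"
echo "# Demo" > README.md
git add README.md                          # stage the new file
git commit -q -m "Initial commit"          # record it in history
git log --oneline                          # shows the single commit
```

From there it’s just `git push` to get the work up to GitHub – no extra application needed.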

Social media and radio

  • Hootsuite – Yes, I have a paid account to Hootsuite. There just isn’t a comparable open source alternative on the market yet. It has some limitations – such as a lack of strong integration with newer social media platforms like Instagram and Snapchat – but you can’t go past it for managing multiple Facebook pages or Twitter accounts at once.
  • Pandora – I stream with Pandora, but I really, really, really miss Rdio.

Quantified self

Over the years, I’ve found a lot of value in running a few quantified self applications to get a better idea of how I’m spending my time – after all, making a problem visible is the first step toward a solution.

  • RescueTime – the visualisations are beautiful, and it runs on every device I have, including Linux. It provides great insights, and makes really clear when I’ve been slacking off and not doing enough productive work. One of the features I appreciate most is being able to set your own categorisations. For example, Ingress in my RescueTime is categorised as neutral – yes, it’s a game, but I only play it when I’m walking, so that’s something I’m aiming to do more of.
  • Beeminder – this nifty little app puts a sting in the tail of goals – and charges you money if you don’t stick with strong habits. I’ve found it’s started to help change my behaviour and build some better habits, such as more sleep and more steps. It has a huge range of integrations with other tools such as RescueTime and Fitbit.

Coding, data visualisation and other nerdery

  • Atom Editor – this is my editor of choice, again because it works on both Windows and Linux. The only downside is that plugins – I run many – have to be individually installed. If Atom had something like Firefox Sync, it would be a killer product. It’s so much lighter than Eclipse and other Java-based editors I’ve used in the past.
  • D3.js – this is my go-to JavaScript visualisation library. V4 has some pitfalls – namely syntax changes since v3 – but it’s still a beautiful visualisation library.
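To illustrate those v3-to-v4 syntax changes: the scale constructors were renamed from a namespaced to a flat, camelCased form. A minimal sketch, assuming d3 v4 is loaded on the page (the colours and domain values here are hypothetical):

```javascript
// d3 v3: the ordinal scale lived under the d3.scale namespace
// var colour = d3.scale.ordinal().range(["#1b9e77", "#d95f02"]);

// d3 v4: the same scale is a flat constructor, d3.scaleOrdinal
var colour = d3.scaleOrdinal()
    .domain(["Library A", "Library B"])   // hypothetical category names
    .range(["#1b9e77", "#d95f02"]);       // one brand colour per category

colour("Library A");   // returns the colour mapped to that category
```

Code written against v3 fails with "d3.scale is undefined" under v4, so these renames are the first thing to check when upgrading.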
  • Processing – I’ve used Processing a little bit, but I’m frustrated that it’s Java-based. Processing.js is a library that attempts to replicate the Java-based Processing, but the functionality is not yet fully equivalent – particularly for file manipulation operations. The concept behind Processing – data visualisation for designers, not programmers – is sound, but I feel that they’ve made an architectural faux pas by not going Javascript right from the start. I haven’t really gotten in to R or Python yet, but I can see that on the horizon.

Graphics, typography and design

  • Scribus – in the past year I’ve had to do quite a few posters, thank you certificates and so on – and Scribus has been my go-to tool. The user interface is a little awkward in places, but it provides around 60% of the functionality of desktop publishing tools like QuarkXPress and InDesign – for free.
  • Inkscape and GIMP – my go-to tools for vector and raster work respectively. Although, I have started to experiment a little with Krita lately. One of the things I’ve found a little frustrating with both Inkscape and GIMP is the limited range of palettes that they ship with, so I started writing some of my own.
  • TypeCatcher – for loading Google fonts onto Linux.

Next steps

Thin client computing seems to be taking off in a big way – virtualised desktops are all the rage at the moment – but I don’t think they would work for me, primarily because I tend to work in low-bandwidth situations. My home internet is 4-5 Mbps, and my 4G dongle gets about the same, but is pre-paid, so data is expensive. For now, I’ll have to manage my own desktop environment!

What do you think? Are these choices reasonable? Are there components in the stack that should be replaced? Appreciate your feedback 😀