My talk picks for #lca2017 – linux.conf.au

linux.conf.au 2017 heads to Hobart, where it was last held in 2009. I absolutely love Tasmania – especially its food and scenery – and am looking forward to heading over.

So, here are my talk picks – keeping in mind that I’m more devops than kernel hacker, so YMMV.

Executive Summary

  • Monday 16th – Networking breakfast, possibly some WootConf sessions and/or Open Knowledge Miniconf sessions.
  • Tuesday 17th – Law and policy Miniconf, Community Leadership Summit
  • Wednesday 18th – Future Privacy by Michael Cordover, In Case of Emergency – Break Glass by David Bell, Handle Conflict Like a Boss by Deb Nicholson, Internet of Terrible Things by Matthew Garrett.
  • Thursday 19th – Network Protocols for IoT Devices by Jon Oxer, Compliance with the GPL by Karen Sandler and Bradley M. Kuhn, Open source and innovation by Allison Randal and Surviving the next 30 years of open source by Karen Sandler.
  • Friday 20th – Publicly releasing government models by Audrey Lobo-Pulo

Monday 16th January

I’m keeping Monday open as much as possible, in case there are last minute things we need to do for the Linux Australia AGM, but will definitely start the day with the Opening Reception and Networking Breakfast. A networking breakfast is an unusual choice of format for the Professional Delegates Networking Session (PDNS), but I can see some benefits to it, such as being able to initiate key relationships and talking points early in the conference. The test, of course, will be attendance – and the availability of tasty coffee 😀

If I get a chance I’ll see some of the WootConf sessions and/or Open Knowledge Miniconf sessions (the Open Knowledge Miniconf schedule hadn’t been posted at the time of writing).

Tuesday 17th January

The highlight for me in Tuesday’s schedule is the excellent Pia Waugh talking ‘Choose your own Adventure’. This talk is based on Waugh’s upcoming book, and its philosophical foundations, macroeconomic implications and strategic global trends cover a lot of ground – ground that needs to be covered.

As of the time of writing, the schedule for the Law and Policy Miniconf hadn’t been released, but this area is of interest to me – as is the Community Leadership Summit. I’m interested to see how the Community Leadership Summit is structured this year; in 2015 it had a very unconference feel. This was appropriate at the time, but IMHO the Community Leadership Summit needs to move towards concrete deliverables – such as, say, a whitepaper advising the Linux Australia Council on where efforts should be targeted in the year ahead. In this way, the Summit would have a tangible, clear impact.

Wednesday 18th January

I’ll probably head to Dan Callahan’s keynote on ‘Designing for failure’. It’s great to see Jonathan Corbet’s Kernel Report get top billing, but my choice here is between the ever-excellent Michael Cordover’s ‘Future Privacy’ and Cedric Bail’s coverage of ‘Enlightenment Foundation Libraries for Wearables’. Next up, I’ll be catching David Bell (Director, LCA2016) talking ‘In case of emergency – break glass – BCP, DRP and Digital Legacy’. There’s nothing compelling for me in the after-lunch session, except perhaps Josh Simmons’ ‘Building communities beyond the black stump’, but this one’s probably too entry-level for me, so it might be a case of long lunch / hallway track.

After afternoon tea, I’ll likely head to Deb Nicholson’s ‘Handle conflict like a boss’, and then Matthew Garrett’s ‘Internet of terrible things’ – because Matthew Garrett 😀

Then, it will be time for the Penguin Dinner!

Thursday 19th January

First up, I’m really looking forward to Nadia Eghbal’s ‘People before code’ keynote about the sustainability of open source projects.

Jon Oxer’s ‘Network Protocol Analysis for IoT Devices’ is really appealing, particularly given the rise and rise of IoT equipment, and the lack of standards in this space.

It might seem like a dry topic for some, but Bradley M. Kuhn and Karen Sandler from the Software Freedom Conservancy will be able to breathe life into ‘Compliance with the GPL’ if anyone can; they also bring with them considerable credibility on the topic.

After lunch, I’ll be catching Allison Randal talking on ‘Open source and innovation’ and then Karen Sandler on ‘Surviving the next 30 years of open source’. These talks are related, and speak to the narrative of how open source is evolving into different facets of our lives – how does open source live on when we do not?

Friday 20th January

After the keynote, I’ll be catching Audrey Lobo-Pulo on ‘Publicly releasing government models’ – this ties in with a lot of the work I’ve been doing in open data, and government open data in particular. After lunch, I’m looking forward to James Scheibner’s ‘Guide to FOSS licenses’, and to finish off the conference on a high note, the ever-erudite and visionary George Fong on ‘Defending the security and integrity of the ’Net’. Internet Australia, of which Fong is the chair, has many values in common with Linux Australia, and I foresee the two organisations working more closely together in the future.

What are your picks for #lca2017?

Geelong Libraries by branch – a data visualisation

At a glance

[Embedded visualisation: Geelong Libraries by branch]

Introduction

Geelong Regional Libraries Corporation (GRLC) came on board GovHack this year and, as well as being a sponsor, opened a number of datasets for the hackathon. Being the lead organiser for GovHack, I didn’t actually get a chance to explore the open data during the competition. However, while I was going for a walk one day, an idea came to me – as they always do – about how the GRLC datasets could be visualised. I’d previously done some work visualising data using a d3.chord layout, and while this data wasn’t suitable for that type of layout, the concept of using annulars – donut charts – to represent and compare the datasets seemed appealing. There was only one problem: I’d never tackled anything like this before.

Challenge: accepted

Understanding what problem I was trying to solve

Of course the first question here was what problem I was trying to solve (thanks Radia Perlman for teaching me to always solve the right problem – I’ll never forget your LCA2013 keynote). Was this an exploratory data visualisation or an explanatory one? This led to formulating a problem statement:

How do the different Libraries in the Geelong region compare to each other in terms of holdings, membership, visits and other attributes?

This clearly established some parameters for the visualisation: it was going to be exploratory, and comparative. It would need a way to identify each Library – likely via colour coding – and appropriate use of shapes and axes to allow for comparison. While I was tempted to use a stacked bar chart, I really wanted to dig deeper into d3.js and extend my skills in this Javascript library – so I resolved to visualise the data using circular rings.

Colour selection

The first challenge was to ensure that the colours of the visualisation were both appealing and appropriate. While this seems an unlikely starting place for a visualisation – most practitioners opt to get the basic shape right first – for this project getting the colours right felt like the best starting point. For inspiration, I turned to the Geelong Regional Library Corporation’s Annual Report, and used the ColorZilla extension to eyedropper the key brand colours used in the report. However, this only provided about seven colours, and I needed 17 in order to map each of the different libraries. To identify ‘in between’ colours, I used this nifty tool from Meyerweb, which is super-handy for calculating gradients. The colours were then used as an array for a d3.scaleOrdinal object, and mapped to each library.

var color = d3.scaleOrdinal()
    .range([
        "#59d134",
        "#4CCA62",
        "#40C28F",
        "#33bbbd",
        "#36AFC7",
        "#3b98da",
        "#427DC9",
        "#5148a6",
        "#8647A8",
        "#BC47A9",
        "#f146ab",
        "#F03E85",
        "#F0355E",
        "#f0431e",
        "#F8880F",
        "#FFCC00",
        "#ACCF1A"
      ])
    .domain([
        "Geelong",
        "Belmont",
        "Corio",
        "Geelong West",
        "Waurn Ponds",
        "Ocean Grove",
        "Newcomb",
        "Torquay",
        "Drysdale",
        "Lara",
        "Bannockburn",
        "Queenscliff",
        "Chilwell",
        "Highton",
        "Mobile Libraries",
        "Barwon Heads",
        "Western Heights College"
    ]);
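
Incidentally, d3 can also generate the ‘in between’ colours programmatically. Here’s a minimal sketch of that alternative – the brandColours values and the libraryNames variable are mine for illustration, not from the actual source:

var brandColours = ["#59d134", "#33bbbd", "#3b98da",
                    "#5148a6", "#f146ab", "#f0431e", "#FFCC00"];

// sample 17 evenly spaced colours along a smooth curve through the brand colours
var palette = d3.quantize(d3.interpolateRgbBasis(brandColours), 17);

var color = d3.scaleOrdinal()
    .range(palette)
    .domain(libraryNames); // the 17 library names, as in the block above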

Annular representation of data using d3.pie

[Image: first step in annular representation]

The first attempt at representing the data was … a first attempt. While I was able to create an annular representation (donut chart) from the data using d3.pie and d3.arc, the labels of the Libraries themselves weren’t positioned well. The best tutorial I’ve read on this topic by far is from data visualisation superstar Nadieh Bremer, over on her blog, Visual Cinnamon. I decided to leave labels on the arcs as a challenge for later in the process, and instead focus on the next challenge – multiple annulars in one visualisation.
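
For reference, the basic donut pattern looks something like this – a minimal sketch, where the radii, the width and height variables, and the Library / Items field names are my assumptions about the data rather than the actual source:

var arc = d3.arc()
    .innerRadius(180)   // a non-zero inner radius turns the pie into a donut
    .outerRadius(202);

var pie = d3.pie()
    .value(function(d) { return d.Items; })
    .sort(null);        // keep the libraries in data order, not sorted by value

d3.select("svg").append("g")
    .attr("transform", "translate(" + width / 2 + "," + height / 2 + ")")
  .selectAll("path")
    .data(pie(CollectionData))
    .enter().append("path")
    .attr("d", arc)
    .style("fill", function(d) { return color(d.data.Library); });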

Multiple annulars in one visualisation

[Image: uh-oh! The two annulars render one below the other]

The second challenge was to place multiple annulars – one for each dataset – within the same svg. Normally with d3.js, you create an svg object which is appended to the body element of the html document. So what happens when you place two d3.pie objects on the svg object? You guessed it! Fail! The two annulars were positioned one under the other, rather than on top of each other. I was stuck on this problem for a while, until I realised that the solution was to place different annulars on different layers within the svg object. This also gave more control over the visualisation. However, SVG doesn’t have layers as part of its definition – objects in SVG are drawn one on top of the other, with the last-drawn object ‘on top’ – sometimes called stacking. But by creating groups within the BaseSvg, like the code below, for shapes to be drawn within, I was able to approximate layering.

var BaseSvg = d3.select("body").append("svg")
    .attr("width", width)
    .attr("height", height)
    .append("g")
    .attr("transform", "translate(" + (width / 2 - annularXOffset) + "," + (height / 2 - annularYOffset) + ")");

/*
  Layers for each annular
*/

var CollectionLayer = BaseSvg.append('g');
var LoansLayer      = BaseSvg.append('g');
var MembersLayer    = BaseSvg.append('g');
var EventsLayer     = BaseSvg.append('g');
var VisitsLayer     = BaseSvg.append('g');
var WirelessLayer   = BaseSvg.append('g');
var InternetLayer   = BaseSvg.append('g');
var TitleLayer      = BaseSvg.append('g');
var LegendLayer     = BaseSvg.append('g');

At this point I found Scott Murray’s SVG Primer very good reading.

[Image: the annulars are now positioned concentrically]

I was a step closer!

Adding in parameters for spacing and width of the annulars

Once I’d figured out how to get annulars rendering on top of each other, it was time to experiment with the size and shape of the rings. In order to do this, I tried to define a general approach to the shapes that were being built. That general approach looked a little like this (well, it was a lot more scribble).

[Image: general approach to calculating size and proportion of multiple annulars]

By being able to define a general approach, I was able to declare variables for elements such as the annular width and annular spacing, which became incredibly useful later as more annulars were added – the positioning and shape of the arcs for each annular could be calculated mathematically using these variables (see the source code for how this was done).

var annularXOffset  = 100; // how much to shift the annulars horizontally from centre
var annularYOffset  = 0; // how much to shift the annulars vertically from centre
var annularSpacing  = 26; // space between different annulars
var annularWidth    = 22; // width of each annular
var annularMargin   = 70; // margin between annulars and canvas
var padAngle        = 0.027; // amount that each segment of an annular is padded
var cornerRadius    = 4; // amount that the sectors are rounded

This allowed me to ‘play around’ with the size and shape of the annulars until I got something that was ‘about right’.
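
As a sketch of what that arithmetic can look like – the ringArc helper and its internals are my reconstruction, not the actual source – each ring’s arc generator can be derived from those variables:

// hypothetical helper: arc generator for ring i (0 = outermost annular)
function ringArc(i) {
    var outerRadius = (Math.min(width, height) / 2) - annularMargin
                    - i * (annularWidth + annularSpacing);
    return d3.arc()
        .innerRadius(outerRadius - annularWidth)
        .outerRadius(outerRadius)
        .padAngle(padAngle)
        .cornerRadius(cornerRadius);
}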

[Image: annular spacing overlapped]

[Image: annular widths and spacing looking better]

At this stage I also experimented with the padAngle of the annular arcs (also defined as a variable for easy tweaking), and with the stroke weight and colour, which was defined in CSS. Again, I took inspiration from GRLC’s corporate branding.

Placing dataset labels on the arcs

Now that I had the basic shape of the visualisation, the next challenge was to add dataset labels. This was again a major blocking point, and it took a lot of tinkering to finally realise that the dataset labels would need to be svg text, sitting on paths created from arcs separate from those rendered by the d3.pie function. Without separate paths, the text wrapped around each arc segment in the annular – shown below. So, for each dataset, I created a new arc and path for the dataset label to be rendered on, and then appended a text element to the path. I’d never used this technique in svg before, and it was an interesting learning experience.

[Image: text on arcs is a dark art – labels wrapping around each arc segment]

Having sketched out a general approach again helped here, as with the addition of a few extra variables I was able to easily create new arcs for the dataset text to sit on. A few more variables to control the positioning of the dataset labels, and voila!

[Image: dataset labels looking good]
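
In outline, the technique looks like the following sketch – the labelRadius value, the element id and the label text are mine for illustration:

// an invisible arc, separate from those d3.pie renders, to carry the label
var labelArc = d3.arc()
    .innerRadius(labelRadius)
    .outerRadius(labelRadius);

CollectionLayer.append("path")
    .attr("id", "collectionLabelArc")
    .attr("d", labelArc({ startAngle: 0, endAngle: Math.PI })) // top half of the ring
    .style("fill", "none");

CollectionLayer.append("text")
  .append("textPath")
    .attr("xlink:href", "#collectionLabelArc") // the text follows the invisible arc
    .text("Collection");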

Adding a legend

The next challenge was to add a legend to the diagram, mostly because I’d decided that the infographic would be too busy with Library labels on each data point. This again took a bit of working through, because while the d3.legend plugin exists for constructing legends, it’s only intended for data plotted horizontally or vertically, not seven datasets plotted on consecutive annulars. This tutorial from Zero Viscosity and this one from Competa helped me understand that a legend is really just a group of related rectangles.

var legend = LegendLayer.selectAll("g")
    .data(color.domain())
    .enter()
    .append('g')
    .attr('class', 'legend')
    // x and y attributes have no effect on g elements, so the transform
    // below does all of the positioning for each legend row
    .attr('transform', function(d, i) {
        return 'translate(' + (legendPlacementX + legendWidth) + ',' + (legendPlacementY + (i * legendHeight)) + ')';
});

legend.append('rect')
    .attr('width', legendWidth)
    .attr('height', legendHeight)
    .attr('class', 'legendRect')
    .style('fill', color)
    .style('stroke', legendStrokeColor);

legend.append('text')
    .attr('x', legendWidth + legendXSpacing)
    .attr('y', legendHeight - legendYSpacing)
    .attr('class', 'legendText')
    .text(function(d) { return d; });

[Image: the legend isn’t positioned correctly]

Again, the positioning took a little work, but eventually I got the legend positioned well.

[Image: the legend is finally positioned well]

Responsive design and data visualisation with d3.js

One of the other key challenges with this project was attempting to have a reasonably responsive design. This appears to be incredibly hard to do with d3.js. I experimented with a number of settings to aim for a more responsive layout. Originally, the narrative text was positioned in a sidebar to the right of the image, but at different screen resolutions the CSS float rendered awkwardly, so I switched to a one-column layout, which worked much better across resolutions.

Next, I experimented with using the Javascript window.innerWidth and window.innerHeight values to help set the width and height of the svg element, and also dynamically positioned the legend. This gave a much better, though not perfect, rendering at different resolutions. It’s still a little hinky, particularly at smaller resolutions, but it’s an incremental improvement.
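
A rough sketch of that approach – the margin value and the empty resize handler are mine:

// size the svg from the viewport rather than from hard-coded values
var width  = window.innerWidth  - 40; // hypothetical margin
var height = window.innerHeight - 40;

var BaseSvg = d3.select("body").append("svg")
    .attr("width", width)
    .attr("height", height);

// re-render when the window changes size
window.addEventListener("resize", function() {
    // re-read window.innerWidth / innerHeight and redraw here
});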

Thinking this through more deeply, although SVG and d3.js in general are vector-based, and therefore lend themselves well to responsive design to begin with, there are a number of elements which don’t scale well at different resolutions – such as text sizes. Unless all these elements were to be made dynamic, and likely conditional on viewport and orientation, then it’s going to be challenging indeed to produce a visualisation that’s fully responsive.

Adding tooltips

While I was reasonably pleased with the progress on the project, I felt that the visualisation needed an interactive element. I considered using some sort of arc tween to show movement between data sets, but given that historical data (say for previous years) wasn’t available, this didn’t seem to be an appropriate choice.

After getting very frustrated with the lack of built-in tooltips in d3.js itself, I happened upon the d3.tip library. This was a beautifully written addition to d3.js, and although its original intent was for horizontal and vertical chart elements, it worked passably on annular segments.

[Image: adding tooltips]

Drawbacks in using d3.tip for circular imagery

One downside I found in using this library was the way it considers the positioning of the tooltip – this has some unpredictable and visually unpleasant results when data is represented in circular form. In particular, the way that d3.tip calculates the ‘centre’ of the object it is applied to does not translate well to arcs and circular shapes. For instance, look at how the d3.tip is applied to arc segments that are large and have only small amounts of curvature – such as the Geelong arc segment for ‘Members’. I’ve had a bit of a think about how to solve this problem, and the solution involves a better approach to calculating the ‘centre’ point of an arc segment.

This is beyond what I’m capable of with d3.js just now, but I wanted to call it out as a future enhancement and area for exploration.
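
For what it’s worth, d3’s own arc.centroid(d) – which returns the point midway between the inner and outer radii, halfway along the segment’s angle – might be a starting point. A hypothetical sketch, using a plain absolutely-positioned HTML div (#tooltip) instead of d3.tip, with CollectionArc being the arc generator for that ring:

CollectionLayer.selectAll("path")
    .on("mouseover", function(d) {
        var c = CollectionArc.centroid(d); // [x, y] relative to the annulars' centre
        d3.select("#tooltip")
            .style("left", (width / 2 + c[0]) + "px")
            .style("top",  (height / 2 + c[1]) + "px")
            .style("opacity", 1);
    })
    .on("mouseout", function() {
        d3.select("#tooltip").style("opacity", 0);
    });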

Adding percentage values to the tooltip with d3.nest and d3.sum

The next key challenge was to include the percentage figure, as well as the Library and data value, in the d3.tip. This was significantly more challenging than I had anticipated, and meant reading up on the d3.nest and d3.sum functions. These tutorials from Phoebe Bright and LearnJS were helpful, and Zan Armstrong’s tutorial on d3.format helped me get the precision formatting correct. After much experimentation, it turned out that summing the values of each dataset (in order to calculate the percentage) took a mere three lines of Javascript:

var CollectionItemCount = d3.nest()
    // with no .key(), the rollup is applied to the entire array,
    // so this returns the grand total of Items across all libraries
    .rollup(function (v) { return d3.sum(v, function(d) { return d.Items; }); })
    .entries(CollectionData);
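
From there, the tooltip percentage is a single division, formatted with d3.format – a sketch, with the Library field name assumed:

var formatPercent = d3.format(".1%"); // e.g. "12.3%"

tip.html(function(d) {
    return d.data.Library + ": " + d.data.Items + " items ("
        + formatPercent(d.data.Items / CollectionItemCount) + ")";
});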

Concluding remarks

Data visualisation is much more challenging than I thought it would be, and the learning curve for d3.js is steep – but it’s worth it. This exercise drew on a range of technical skills, including circular trigonometry, HTML and knowledge of the DOM, CSS and Javascript, and above all the ability to ‘break a problem down’ and look at it from multiple angles (no pun intended).


Australian Internet Governance Forum 2016

The Australian Internet Governance Forum – #auigf – was held at the Park Hyatt, Melbourne, on October 11th-12th, 2016. This was the first time I’d had an opportunity to attend the #auigf, and I wasn’t sure what to expect. Internet users are a diverse cohort, and auDA – the regulator for the .au namespace, and the body which auspices the #auigf – classifies its members into a supply class (those providing internet services) and a demand class (those consuming them).

My first impression was one of surprise. The #auigf theme for the forum was ‘a focus on a competitive digital future for Australia’ – and given the significant role that digital technology, policy and communities will play in an era of digital disruption, I couldn’t help but wonder why more key players weren’t passionate about driving the future of the internet in Australia.

Stuart Benjamin, Chairman of auDA

The regulator has been the subject of criticism in recent years, particularly around its engagement and consultation practices, and long-serving CEO Chris Disspain left the organisation in March, replaced by former Liberal state parliamentarian Cameron Boardman. This #auigf was therefore a symbolic opportunity for Boardman to signal the organisation’s new focus to stakeholders. auDA chairman Stuart Benjamin tackled this head on in his opening address, outlining a renewed focus on stakeholder engagement, particularly in the area of building international partnerships, and relatedly, cybersecurity. He framed this strategic shift as auDA ‘growing up’ – moving from adolescence into maturity. In particular he flagged a shift from reactive approaches to domain administration to more proactive approaches, underpinned by stronger relationships, renewed processes and systems, and more innovative thinking.

Framing board performance as critical to the success of the organisation, he introduced new Board Directors, Michaella Richards and Dr Leonie Walsh. Continuing the theme of advancing women in the organisation, Benjamin congratulated lawyer Rachael Falk on her appointment as Director of Technology, Security and Strategy, a newly created role tasked with catalysing auDA’s new directions. Acknowledging that auDA needs to win back the trust of the community it serves, Benjamin emphasised higher expectations of auDA – both externally from stakeholders and internally from the organisation itself – announcing he will be “seeking a lot more”.

Prof Paul Cornish, former Professor of International Security at Chatham House, independent consultant and author

Prof Cornish outlined how auDA is heading towards a more international posture and developing a number of partnerships. His main argument was that the future of the internet – and the digital economy – needs to be secured. Cybersecurity needs to evolve as the internet does, using a capability maturity model.

Cybersecurity Plenary – Chaired by Rachael Falk, with Alistair MacGibbon, Laura Bell, Prof Chris Leckie, Simon Raik-Allen, Craig McDonald

Rachael Falk opened by drawing attention to the National Cyber Security Strategy, urging attendees to become familiar with it. The discussion quickly turned to why there isn’t more focus on cyber security, and Prof Cornish had a very incisive response: “interest follows money”. Money is starting to flow to cyber security, and interest will follow. Prof Leckie outlined the challenges of getting cyber security research out of the lab and into mainstream commercialisation. Researchers are challenged by the rate of change – for example, hypothetical attacks are quickly becoming reality. Academia also struggles to get business and industry to recognise the scale of the cyber security threat. The other challenge is getting boards to recognise that cyber security is many different problems – which need many solutions. This is overwhelming for small businesses, who “just want it to work”.

One of the best insights of the plenary came from Laura Bell – @lady_nerd on Twitter – who recounted the example of big corporations acquiring smaller firms that may have a very different security posture, thus putting the larger corporation at risk.

The plenary used the term “happy clickers” to denote people who click on phishing emails without critically assessing their validity. This was the first time I’d heard the term, but it captures the psychological state accurately. Interestingly, there was discussion around how people who are disengaged in their roles are more likely to be ‘happy clickers’ – because the phishing email represents a welcome distraction – another reason to ensure positive employee engagement.

Another very interesting discussion thread in this plenary was the privacy paradox – people share personal information freely with services like Google and Facebook, but resent government intrusion, as seen recently with the census. This may come down to the element of compulsion – it’s about giving information freely versus being compelled to disclose it. There’s an element here for government design of online services – another job for the DTO! – around information design. Imagine a census that was voluntary rather than mandatory, but got people to participate because of the social good involved. I think it would be a much more positive process.

This led into a discussion around corporate use of data – and whether consumers understand the value of their own data; essentially we’re trading our data for ‘free’ products. For many online services we have to consent to data disclosure to get access to the service, but in the background there’s data matching going on – there’s a ‘creep factor’. The link was drawn from ‘creep factor’ behaviour to brand value – trust and transparency are linked to the public’s view of the brand.

Key takeaway: The pub test for data use – “is it creepy?” If so, don’t do it.

This plenary also covered the practice of ‘hacking back’ – where individuals or businesses use information security counter-measures to retaliate. The consensus in the room was that this is a poor response, largely because identifying the aggressor is so difficult. The group also highlighted that Australia has an offensive cyber capability – again linking cyber security to an international, nation-state context. The lack of a standard response protocol for dealing with hacking incidents was also covered – many businesses are afraid of disclosure and reluctant to act, but having a standard response protocol would allow businesses to respond in a mature way.

In summary, cyber security is hard – there are many layers and issues to consider, there’s a lack of general awareness in business and industry, the field is changing rapidly, and there are no defined response protocols for businesses to use.

Women in STEM Plenary – Dr Rowan Brookes, Renee Noble, Dr Catherine Lang, Dr Leonie Walsh, Luan Heimlich

Dr Brookes introduced the plenary with an apology for not being able to include more women of colour and from the LGBTQI spectrum, particularly on Ada Lovelace Day. The key themes of needing to address systemic issues and create a pipeline for women in STEM were prevalent throughout the conversation.

What struck me first up with this plenary was the range of initiatives, groups and organisations that are working to further women in STEM, and I wondered whether this fragmentation is actually a disservice – so many voices have less volume.

Key takeaway: Are there too many women-in-STEM groups, too fragmented? Do we need an Australian ecosystem map of women in STEM / ICT?

Luan Heimlich opened the plenary by asking the audience who young girls look up to; the responses were pop stars, sports celebrities and models – not a science or technology role model in sight! She followed up by questioning whether these role models are going to solve the problems of tomorrow – digital disruption, climate change and public health – and left the audience to ponder the gap.

Dr Leonie Walsh covered efforts to help encourage early to mid career researchers to further their careers, noting that it’s difficult for women to step out of their careers to have a family – as this often puts them several years behind. She also noted that employers are looking for candidates with more well rounded skills, and her program provides exposure to work environments. Dr Catherine Lang highlighted the influence of pre-service teachers in promoting STEM. Another key thread in this discussion was that professions are socially constructed, and that this can be changed – but it’s an uphill battle because ICT careers are not even on the radar as a career choice for young women.

While programs are having localised success, there are still major gaps at a systemic level, and better consistency and co-ordination is required at a national level.

Behavioural insights panel – Kirstan Corban, Dr Alex Gyani, Christian Stenta, Helen Sharpley

This panel was a series of vignettes centred on how behavioural insights had led to social change. The standout piece was by Alex Gyani, who ran the audience through examples where minor changes had a major impact, using the following framework:

  • Easy – interventions should be easy for people, but this is hard to do
  • Attractive – the intervention has to be attractive for people
  • Timely – try something, see if it works – don’t be caught in analysis paralysis
  • Social – social norms are a powerful influencer for change

A key concept from Gyani’s talk was that of a cognitive budget – we have so many choices to make every day that we need to think critically about choice architecture.

The other three speakers, from health and government, highlighted case studies that showcased design thinking, co-design, and approaches to difficult problems.

Key takeaway: minor changes can make a big impact

Internet of Things Plenary – Pablo Hinojosa, Matthew Pryor, Phil Goebel, Lorraine Tighe, Dr Kate Auty

Hinojosa opened proceedings by outlining how the internet has reached 3.5 billion users – half of them in Asia – and that there are now twice as many internet-connected devices as there are people. We’re on the cusp of a revolution.

Matthew Pryor outlined the use of IoT in agriculture and agribusiness, and emphasised how IoT helps with decision making. He highlighted how hard it is to scale infrastructure in regional and rural areas, and questioned whether we should be investing in networks that connect people, devices, or both. He gave the example that as soon as farmers leave the farmhouse they have no internet – they need to go back to the farmhouse to make better decisions, and this reduces their ability to deliver economic benefit. We need to consider the principle of universal access as we build out infrastructure.

Phil Goebel used the Disneyland Magic Band example to highlight how IoT has taken a purely physical experience and used connectivity to enhance it – leading to “augmented experience”. For example, the band allows Disney to know where the longest queues are, how the park is being used, and which facilities are important for which demographics – very granular marketing data. He outlined that there are multiple users of the data – different actors in the ecosystem: administrators, marketers and the users themselves – using the data gathered by wearables for different purposes. He flagged the issue that there are no guidelines around how the data is used – for instance, whether it is sold on – so we need to consider transparency.

Lorraine Tighe is the Smart City and Innovation Manager at the City of Melbourne, and outlined how the vendors she meets present the IoT as a silver bullet. She outlined the use cases for IoT in smart cities, including parking sensors to reduce the traffic generated by searching for a car park – leading to traffic efficiencies. She positioned local government as being at the coalface of the community, bringing the community along on the journey and using the City Lab as a vehicle to test and prototype solutions. As part of this, the City of Melbourne made the decision to go open by default with their data, encouraging smart people to co-create with the City.

Dr Kate Auty spoke on projects like RedMap and the Atlas of Living Australia, which provide citizen scientists with tools to protect biodiversity. She related how ‘super science’ projects like AURIN and NeCTAR are important for understanding how cities work.

Scott Seely had the quote of the panel, though.

Conclusions

In summary, the #auigf reflected many of the contemporary themes of digital society. Digital disruption and digital society are changing at a rapid pace, and we have a dearth of tools, approaches, standards and response protocols to handle them. We need to start by clearly defining the problems we’re trying to solve, and then tackle them with newer approaches such as design thinking, co-creation and open data. Many of these problems require national and international co-operation to build ecosystems, standards and agreed approaches – and the #auigf is a good starting point.
