Book review: Technically wrong: Sexist apps, biased algorithms and other threats of toxic tech by Sara Wachter-Boettcher

A must-read for anyone who designs digital experiences and doesn’t want to be an inadvertent dude-bro.

Against a backdrop of increasingly ubiquitous technology, with every online interaction forcing us to expose parts of ourselves, Sara Wachter-Boettcher weaves a challenging narrative with ease. With ease, but not easily. Many of the topics covered are confronting, holding a lens to our internalised “blind spots, biases and outright ethical blunders”.

As Wachter-Boettcher is at pains to highlight, all of this is not intentional – but the result of a lack of critical evaluation, thought and reflection on the consequences of seemingly minor technical design and development decisions. Over time, these compound to create systemic barriers to technology use and employment – feelings of dissonance for ethnic and gender minorities, increased frustration for those whose characteristics don’t fit the personas the product was designed for, the invisibility of role models of diverse races and genders – and reinforcement that technology is the domain of rich, white, young men.

The examples that frame the narrative are disarming in their simplicity. The high school graduand whose Latino/Caucasian hyphenated surname doesn’t fit into the form field. The person of mixed racial heritage who can’t choose just one box to check on a form. The person who is gender non-conforming and doesn’t fit the binary polarisation of ‘Male’ or ‘Female’. Beware, these are not edge cases! The most powerful take-away for me personally from this text is that in design practice, edge cases are not the minority. They exist to make us recognise the diversity of the user base that we design for.

Think “stress cases” not “edge cases”. If your design doesn’t cater for stress cases, it’s not a good design.

While we may have technical coding standards and best practices that help our technical outputs be of high quality, as an industry and as a professional discipline we have a long way to go in doing the same for user experience outputs. There are a finite number of ways to write a syntactically correct PHP function. Give me 100 form designers, and I will give you 100 different forms that provide 100 different user experiences. And at least some of those 100 users will be left without “delight” – a nebulous buzzword for rating the success (or otherwise) of digital experiences.

Wachter-Boettcher takes precise aim at another seemingly innocuous technical detail – application defaults – exposing their (at best) benign, and, at times, malignant utilisation to manipulate users into freely submitting their personal data. It is designing not for delight, but for deception.

“Default settings can be helpful or deceptive, thoughtful or frustrating. But they’re never neutral.”

Here the clarion call for action is not aimed at technology developers themselves, but at users, urging us to be more careful, more critical, and more vocal about how applications interact with us.

Artificial intelligence and big data do not escape scrutiny. Wachter-Boettcher illustrates how algorithms can be inequitable – targeting or ignoring whole cohorts of people, depending on the (unquestioned) assumptions built into machine learning models. Big data is retrospective, but not necessarily predictive. Just because a dataset showed a pattern in the past does not mean that that pattern will hold true in the future. Yet governments, corporations and other large institutions are basing major policies and entire practice areas on algorithms that remain opaque. And while responsibility for decision making might be delegated to machines, accountability for how those decisions are made cannot be.

The parting thought of this book is that good intentions aren’t enough. The implications and cascading consequences of seemingly minor design and development decisions need to be thought through, critically evaluated, and handled with grace, dignity and maturity. That will be delightful!

Joining the Dots Data Visualisation Symposium 2017

Joining the Dots – The Art and Science of Data Visualisation came about as the brainchild of Fiona Tweedie – a business analyst and data scientist who has worked in open knowledge, open data and digital humanities for several years, after completing her PhD in humanities at the University of Sydney. At PyCon AU, Fiona identified that most of the talks on data visualisation had strong representation from STEM – science, technology, engineering and mathematics – but poorer representation from the humanities. Held at the Walter and Eliza Hall Institute, part of the broader University of Melbourne research precinct, #jtdwehi sought to address that by providing the opportunity to cross-pollinate multiple disciplines – and by all accounts it was a roaring success.

There were several excellent and engaging presentations over the course of the day, and my personal highlights are covered below.

Keynote – Professor Deb Verhoeven

Deb Verhoeven is incredibly respected in digital humanities for her creative take on visualisation and sonification – and not least of all for her untiring efforts to improve gender representation and diversity in the digital humanities – for more on this, check out her famous ‘Where are the women?’ speech at DH2015.

Her incisive presentation covered broad ground. In particular, her exposé of “gender offenders” in Australian cinema – men who do not work with women, and choose to work exclusively with other men – denying women opportunities in the industry – was one of the most impactful data visualisations I’ve ever seen.

“This is what the patriarchy looks like!” – Professor Deb Verhoeven, on visualising gender representation in Australian cinema

Using a technique called social network analysis, Verhoeven’s team were able to show the gender of project members and how they clustered. Words don’t do it justice.
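To give a flavour of the idea behind the analysis, here is a minimal sketch in Python: given a collaboration graph and a gender label per person, flag men whose collaborators are exclusively male. All names and data here are invented for illustration – Verhoeven’s team worked with far richer film-industry data and full social network analysis tooling.

```python
# Sketch of the "gender offenders" idea: men who have never worked
# with a woman. The graph and gender labels are invented examples.

collaborations = {            # adjacency list: person -> collaborators
    "alan": ["bob", "carl"],
    "bob":  ["alan", "dana"],
    "carl": ["alan"],
    "dana": ["bob"],
}
gender = {"alan": "m", "bob": "m", "carl": "m", "dana": "f"}

def men_with_male_only_networks(graph, gender):
    """Return men whose collaborators are all men."""
    return sorted(
        person
        for person, partners in graph.items()
        if gender[person] == "m"
        and partners
        and all(gender[p] == "m" for p in partners)
    )

print(men_with_male_only_networks(collaborations, gender))
# → ['alan', 'carl']  (bob has worked with dana, so he is not flagged)
```

In the real analysis, clustering these male-only subnetworks visually is what makes the pattern so striking.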

You can read more about the project via this article on The Conversation.

Another thought-provoking element of Verhoeven’s keynote was the work her research team had done on sonification, as part of The Ultimate Gig Guide project. Walking us through the project, Verhoeven explained how the team had gathered data on the spread of bands across Melbourne via gig records. To add an extra degree of difficulty, many of these records were not digitised, and the data had to be gathered manually (another argument for digitisation projects – it makes accessing and using data so much easier). The team then sonified the data, resulting in a sequence of notes representing the frequency of gigs, with their location encoded as distance from the Melbourne CBD. To add additional interest, a backing track was added, and the data was transposed into the C major scale. A meta gig – a gig about a gig!
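As a toy illustration of that kind of mapping – data values onto pitches of a major scale – here is a sketch, assuming a deliberately simple scheme where each value indexes into the C major scale. The mapping and the sample data are invented; the project’s actual sonification is far more sophisticated.

```python
# Toy sonification: map data values (say, gig counts per location)
# onto notes of the C major scale, wrapping past the octave.
# The scheme and data are invented for illustration.

C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def sonify(values):
    """Map each value onto a scale degree of C major."""
    return [C_MAJOR[v % len(C_MAJOR)] for v in values]

gig_counts = [0, 2, 4, 7, 9]   # hypothetical gigs per suburb
print(sonify(gig_counts))       # → ['C', 'E', 'G', 'C', 'E']
```

Constraining the output to one scale is what lets a backing track sit underneath the data without clashing.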

Mind much blown.

You can read more about Deb Verhoeven’s academic work.

“Visualising the Australian Transport Network” by Xavier Ho, CSIRO

Xavier, an interactive data visualisation specialist with CSIRO, presented on TraNSIT – the Transport Network Strategic Investment Tool. This tool is designed to help identify and implement efficiencies in agribusiness supply chains by mapping the logistics and transport networks of different modes of transport – road, rail, air and sea. This work was amazing – not just because the data needed to be sourced from so many different repositories – another argument for open data  – but because of the direct impact data visualisation could have on planning and strategy.

Xavier was a seasoned presenter, with an engaging style – an excellent speaker.

“Ungodly cocktail – visualising three editions of Raynal’s “Histoire”” by Geoff Hinchcliffe, Australian National University

I cannot honestly say that French literature is something which excites me, but Geoff Hinchcliffe’s excellent presentation brought this project – which sought to visualise the differences between editions of Raynal’s Histoire – to life. Using the ‘ungodly cocktail’ of several data visualisation tools, combined with an iterative design and development process (instead of the usual tiered and discrete ‘front end’ and ‘back end’ approach), the changes between versions were mapped and visualised, providing a narrative to explore the influence of Raynal’s collaborator in writing, Diderot.

What struck me about Hinchcliffe’s approach was the remarkable work that had gone into making something so esoteric and complex so accessible and simple – the true power of data visualisation.

You can follow Hinchcliffe as @gravitron on Twitter.

Further thoughts

Throughout the day, I came to a number of conclusions:

  • There are a small number of ‘tried and true’ tools for data visualisation specialists – among them d3.js and R. Processing does not seem to have found the same traction in the datavis community, likely because its mature implementation is still Java-based, while the JavaScript implementation – and therefore the more web-accessible and interactive one – is not as mature. There are several Python libraries for visualisation, and Python continues to ascend in popularity across not just the sciences but increasingly the humanities – it is firmly established as a programming language of first choice. Colour choices remain important, guided by tools like ColorBrewer. Typography choices remain geared to the minimal and the sans-serif, indicating a need to have the visualisation speak for itself.
  • Interactivity is not a necessary part of every visualisation – with some visualisations such as Hinchcliffe’s not having a high degree of interactivity.
  • Design and development are tightly coupled in this field – as seen with presenters having both back-end and front-end skills and the ability to ‘round trip’ between them – data visualisation combines design, coding and statistical skills in equal measure, and the most sought-after practitioners will be able to work ‘full stack’.
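On the colour point above: a qualitative ColorBrewer palette such as Set2 can be applied even without a plotting library. A sketch, with the Set2 hex values as published at colorbrewer2.org; the category names are invented.

```python
# Assign each category a colour from a qualitative ColorBrewer palette
# (Set2), cycling if there are more categories than colours.
from itertools import cycle

SET2 = ["#66c2a5", "#fc8d62", "#8da0cb", "#e78ac4",
        "#a6d854", "#ffd92f", "#e5c494", "#b3b3b3"]

def colour_map(categories, palette=SET2):
    """Pair each category with a palette colour, reusing colours as needed."""
    return dict(zip(categories, cycle(palette)))

genres = ["drama", "comedy", "documentary"]  # hypothetical categories
print(colour_map(genres))
```

Qualitative palettes like this are designed so no category looks more “important” than another – a small decision with real consequences for how a chart is read.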



VALATechCamp 2017: Libraries as the vanguard of digital literacy

Libraries have played a vital role in shaping society for thousands of years. From the Library of Alexandria, which preserved ancient knowledge, to the United States Library of Congress, which facilitates access to political and historical records, to the Geelong Library and Heritage Centre, which actively engages a post-industrial community – the role of the library is evolving.

Coupled with the rise of digital technology, the move of services to e-delivery models and new personal threats in the form of cyber-attacks, libraries are now standing as the vanguard of the most important skill our citizens will have in the 21st century: digital literacy. It is fitting then that VALA – an independent not-for-profit organisation that aims to promote the understanding of information technology within libraries and the broader information sector – sought this year to run an event at the intersection of GLAM – galleries, libraries, archives and museums – and technology.

Enter: #VALATechCamp

The event aimed to help librarians and associated professions ‘level up’ on emerging technology, and inspire people to continue their technical learning journeys. By all accounts, it was a strong success.

Talk highlights

There were several excellent presentations at #VALATechCamp, and these were my personal highlights.

Ingrid Mason (AARNet) – Infrastructure, research and innovation as components of digital literacy

Ingrid’s presentation was pitched perfectly for the audience, and challenged us to think about the concept of infrastructure literacy as a subset of digital literacy – the ability to understand how pipes, and bytes, and bits all fit together, thus providing a Rosetta Stone for the seemingly arcane languages used by technologists. This has touchpoints with initiatives such as Skills for the Information Age (SFIA), a rubric of technical and related skills for professional development in technically-driven organisations. Reflecting on this further, however, we almost need something like ‘SFIA for ordinary people’ – some form of syllabus that imparts basic digital and infrastructure literacy. Ingrid’s talk was very well received by the audience, due both to her conceptualisation of the topic and the empathy and warmth with which she delivered it.

Athina Mavromataki (Melbourne Library Service) – Defence Against the Dark Arts – Cryptoparties in Libraries

Athina’s engaging and energetic presentation highlighted again the role that libraries play in imparting digital literacy, as she recounted her experience of delivering knowledge on personal privacy and encryption by running Cryptoparties at the Melbourne Library Service. It was refreshing to hear from someone who is self-admittedly not a “techie” – and about the challenges of explaining concepts like privacy and encryption to people who have only a basic understanding of computers. Again, this made me reflect on the digital divide and digital inclusion – digital literacy is now required because of the push toward e-services by government and other service providers, but personal digital literacy – the ability to safeguard one’s own privacy in a digital environment – is not emphasised. Thus, the digital divide not only disadvantages people because of the barriers it creates in accessing services, it also entrenches disadvantage because only the skilled will be able to protect themselves against new threats.

Natasha Simons (Australian National Data Service) – Unpacking persistent identifiers for research

Natasha, who is renowned for her work on the DOI – Digital Object Identifier – system in Australia, presented on the different schemes for persistent identifiers for research artefacts – journal articles, datasets, and ‘grey literature’. The challenges here mirror those of other archiving and referencing systems – what happens if the auspicing organisation no longer exists? From an open data viewpoint, what struck me was how many different competing standards there are – some auspiced by government, some by corporate interests such as publishing houses, and others by NGO bodies reliant on member funding to operate. As an international community, we still have a long way to go in negotiating, adhering to, and nurturing international open standards – but with someone of the calibre of Simons in the mix, there’s strong hope for the future.
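As a concrete illustration of one such scheme: a DOI always takes the form `10.<registrant>/<suffix>` (for example, 10.1000/182). A rough syntactic check – shape only, not an actual resolution against the DOI system – can be sketched in a few lines; the regex follows Crossref’s commonly recommended pattern.

```python
# Rough syntactic check for a DOI (Digital Object Identifier).
# A DOI has the form "10.<registrant>/<suffix>", e.g. 10.1000/182.
# This validates the shape only; it does not resolve the identifier.
import re

DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(identifier):
    """Return True if the string has the 10.<prefix>/<suffix> shape."""
    return bool(DOI_PATTERN.match(identifier))

print(looks_like_doi("10.1000/182"))   # → True
print(looks_like_doi("not-a-doi"))     # → False
```

The persistence question Natasha raised sits one layer up from syntax: the string stays valid forever, but someone has to keep the resolver pointing at the artefact.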

Linux Australia Diversity Scholarship

With thanks to my colleague, Sae Ra Germaine, Linux Australia was able to partner with VALA to provide a Diversity Scholarship. As with many other areas of life – employment, social mobility, access to education and healthcare – Indigenous, regional and remote Australians have poorer digital literacy and participation in STEM – science, technology, engineering and mathematics – than the non-Indigenous and urban population [1]. A diversity scholarship is a small step towards providing additional opportunities to help address the digital divide. Through a rigorous selection process, Wiradjuri man Nathan Sentance, Project Officer at the Australian Museum, was selected as this year’s Diversity Scholarship recipient. Nathan will be able to share his learnings from #VALATechCamp with his broader community.



Concluding thoughts

In conclusion, I was left with the impression that libraries and the broader GLAM sector are realising that while their core remit – that of preserving, facilitating access to, and engaging communities around knowledge – remains valid and pertinent, the ways in which those services are delivered are rapidly changing, concomitant with the wave of digital transformation. Just as libraries of yore helped citizens become literate, librarians are now the vanguard of digital literacy, and events like #VALATechCamp are providing a sorely-needed arsenal.

Further information

#VALATechCamp on Twitter

#VALATechCamp programme


[1] Chubb, I. (2014). Science, Technology, Engineering and Mathematics: Australia’s Future. Office of the Chief Scientist, Commonwealth of Australia.