Setting up an academic writing workflow using Pandoc, Markdown, Zotero and LaTeX on Ubuntu Linux using Atom

This year, I started a PhD at the 3A Institute, within the College of Engineering and Computer Science at Australian National University. I came into the PhD not from a researcher or academic background, but from a career in industry as a technology practitioner. As such, my experience with formatting academic papers, for example for publication in journals, is limited. Our PhD program is hands-on; as well as identifying a specific research topic and articulating a research design, we undertake writing activities – both as academic writing practice, and to help solidify the theoretical concepts we’ve been discussing in Seminar. I needed an academic writing workflow.

For one of these writing exercises, I decided to build out a toolchain using LaTeX – the preferred typesetting tool for academic papers. This blog post documents how I approached this, and serves as a reference both for myself and others who might want to adopt a similar toolchain.

In summary, the process was:

  • Define overall goals
  • Install dependencies: pandoc, LaTeX, and Zotero with the BetterBibTeX extension for citations
  • Experiment with pandoc on the command line for generating PDF from Markdown via LaTeX

Goals of an academic writing workflow

In approaching this exercise, I had several key goals:

  • create an academic writing workflow using my preferred Atom IDE on Linux;
  • that could easily be set up and replicated on other machines if needed;
  • which would allow me to use my preferred citation editor, Zotero;
  • and which would allow me to use my preferred writing format, Markdown;
  • and which would support the LaTeX templates available for journals (and from the ANU for formatting theses)

Why not use OverLeaf?

For those who’ve worked with LaTeX before, one of the key questions you might have here is “why not just use a platform like Overleaf for your academic writing workflow and not worry about setting up a local environment?”. Overleaf is a cloud LaTeX service that provides a collaborative editing environment and a range of LaTeX templates. However, it’s not free – plans range from USD $12–35 per month. Overleaf is based on free and open source software, with a proprietary collaboration layer added over the top – it abstracts away complexity – and this is what you pay a monthly fee for.

In principle, I have no issue with cloud platforms adding value to FOSS, and then charging for that value, but I doubt any of the profits from Overleaf are going to folks like John MacFarlane – the author of pandoc – or Donald E. Knuth and Leslie Lamport – who created TeX and LaTeX respectively. I felt like I owed it to these folx to “dig a little under the hood” and learn some of that complexity instead of outsourcing it away.

So, to the process …

Installing pandoc

Pandoc is a free and open source tool for converting documents between different formats. It’s widely used in academia, as well as in publishing and writing workflows. Pandoc is written by John MacFarlane, who is a philosophy professor at UC Berkeley.

One of the other things John is less well known for is Lucida Navajo, a font for the Navajo language, a Native American language of the Southern Athabascan family. Although it’s based on the Latin script, it contains a number of diacritical marks not found in other Latin-script languages.

Pandoc is available for all platforms, but because my goal here was to develop a workflow on Ubuntu Linux, I’ve only shown installation instructions for that platform.

To install pandoc, use the following command:

$ sudo apt install pandoc 

I also had to make sure that the pandoc-citeproc tool was installed; it’s not installed by default as part of pandoc itself. Again, this was a simple command:

$ sudo apt install pandoc-citeproc

Installing Zotero and the BetterBibTeX extension

My next challenge was to figure out how to do citations with pandoc. In the pandoc documentation, there is a whole section on citations, and this provided some pointers. This blog post from Chris Krycho was also useful in figuring out what to use.

This involved installing the BetterBibTeX extension for Zotero (which I already had installed). You can find installation instructions for the BetterBibTeX extension here. It has to be downloaded as a file and then added through Zotero (not through Firefox).

BibTeX is a citation format supported by most academic journals and referencing tools. Google Scholar, for example, exports to BibTeX.

Once installed, BetterBibTeX updates each Zotero reference with a citation key that can then be used as an “at-reference” in pandoc – i.e. @strengersSmartWifeWhy2020. Updating citation keys can take a few minutes – I have several thousand references stored in Zotero and it took about 6 minutes to ensure that each reference had a BibTeX citation key.
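
For example, a sentence in writing.md might cite that reference in either of the following ways (a minimal sketch using the key above – pandoc-citeproc replaces the key with a formatted citation and adds the entry to the bibliography):

Voice assistants are often gendered by design [@strengersSmartWifeWhy2020].
As @strengersSmartWifeWhy2020 argue, the “smart wife” is a design pattern.

The square-bracketed form produces a parenthetical citation, while the bare @-form produces an in-text citation.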

In order to use BetterBibTeX, I had to make sure that the export format for Zotero was set to Better BibTeX, and that Zotero updated citation keys on change.

Installing LaTeX

Next, I needed to install LaTeX for Linux. This was installed via the texlive package:

$ sudo apt install texlive

Based on this Stack Overflow post about an error message I got early on in testing, I also installed the texlive-latex-extra package:

$ sudo apt-get install texlive-latex-extra

Zotero citations package for Atom

Next, I needed to configure the Atom IDE to work with Zotero and BetterBibTeX. This involved installing several Atom packages, including the Zotero citations package.

Once these were installed, I was ready to start experimenting with a workflow.

Experimenting with a workflow

To start with, I used a basic workflow to go from a Markdown-formatted text file to PDF. pandoc converts this to LaTeX as an intermediate step, and then to PDF, using the built-in templates from pandoc. This first step was a very basic attempt to go from Markdown to PDF, and was designed to “shake out” any issues with software installation.

The pandoc command line options I used to start with were:

pandoc -f markdown \
writing.md \
-o writing.pdf

In this example, I’m telling pandoc to expect markdown-styled input, to use the file writing.md as the input file and to write to the output file writing.pdf. pandoc infers that the output file is PDF-formatted from the .pdf extension.
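
The same inference works for other extensions, which is handy for debugging: for example, writing to a .tex file lets you inspect the intermediate LaTeX that pandoc generates before it is turned into a PDF:

$ pandoc -f markdown writing.md -o writing.tex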

Adding citations

Next, I wanted to include citations. First, I exported my Zotero citations to a .bib-formatted file, using the BetterBibtex extension. I stored this in a directory called bibliography in the same directory as my writing.md file. The command line options I used here were:

$ pandoc -f markdown \
--filter=pandoc-citeproc \
--bibliography=bibliography/blog-post-citations.bib \
writing.md \
-o writing.pdf

Note here the two additional options – the --filter option used to invoke pandoc-citeproc, and the --bibliography option to include a BibTeX-formatted file. This worked well, and generated a plainly formatted PDF (based on the pandoc default template).

Using a yaml file for metadata

Becoming more advanced with pandoc, I decided to experiment with including a yaml file to help generate the document. The yaml file can specify metadata such as author, date and so on, which can then be substituted into the PDF file – if the intermediary LaTeX template accommodates these values. The basic LaTeX template included with pandoc includes values for author, title, date and abstract.

Here’s the yaml file I used for this example:

---
author: Kathy Reid
title: blog post on pandoc
date: December 2020
abstract: |
  This is the abstract.
...

Note that the yaml file must start with three dashes --- and end with three periods ...

The pandoc command line options I used to include metadata were:

$ pandoc -f markdown+yaml_metadata_block \
  --filter=pandoc-citeproc \
  --bibliography=bibliography/blog-post-citations.bib \
  writing.md metadata.yml \
  -o writing.pdf

Note here that in the -f switch an additional option for yaml_metadata_block is given, and that the yaml file is listed after the first input file, writing.md. By adding the metadata.yml file in the command line, pandoc considers them both to be input files.

By using the yaml file, this automatically appended author, title, date and abstract information to the resulting PDF.
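
As an aside, more recent versions of pandoc also offer a --metadata-file option, which keeps the metadata out of the list of input files. A sketch of the same command using it (assuming your pandoc version is new enough to support the option):

$ pandoc -f markdown \
  --filter=pandoc-citeproc \
  --bibliography=bibliography/blog-post-citations.bib \
  --metadata-file=metadata.yml \
  writing.md \
  -o writing.pdf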

I also found that I could control the margins and paper size of the resulting PDF file by controlling these in the yaml file.

---
author: Kathy Reid
title: blog post on pandoc
date: December 2020
abstract: |
  This is the abstract.
fontsize: 12pt
papersize: a4
margin-top: 25mm
margin-right: 25mm
margin-bottom: 25mm
margin-left: 25mm
...

Adding in logging

It took a little while to get the hang of working with yaml, and I wanted a way to inspect the output of the pandoc process. To do this, I added a --verbose switch to the command line, and piped the output of the command to a log file.

$ pandoc -f markdown+yaml_metadata_block \
  --verbose \
  --filter=pandoc-citeproc \
  --bibliography=bibliography/blog-post-citations.bib \
  writing.md metadata.yml \
  -o writing.pdf \
  > pandoc-log.txt

The --verbose switch tells pandoc to use verbose logging, and the logging is piped to pandoc-log.txt. If the output wasn’t piped to a file, it would appear on the screen as stdout, and because it’s verbose, it’s hard to read – it’s much easier to pipe it to a file to inspect it.
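
One caveat: some of pandoc’s verbose messages are written to stderr rather than stdout, so if the log file turns out emptier than expected, redirecting both streams should capture everything. A sketch of the same command with only the redirect changed:

$ pandoc -f markdown+yaml_metadata_block \
  --verbose \
  --filter=pandoc-citeproc \
  --bibliography=bibliography/blog-post-citations.bib \
  writing.md metadata.yml \
  -o writing.pdf \
  > pandoc-log.txt 2>&1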

Working with other LaTeX templates

Now that I had a Markdown to PDF via LaTeX workflow working reasonably well, it was time to experiment using an academic writing workflow with other templates. Many publications provide a LaTeX template for submission, such as these from the ACM, and ideally I wanted to be able to go from Markdown to a journal template using pandoc.

I’d come across other blog posts where similar goals had been attempted, but this proved significantly harder to implement than I’d anticipated.

My first attempt entailed trying to replicate the work Daniel Graziotin had done here – but I ran into several issues. After copying over the table-filter.py file from the blog post and an ACM .cls file, copying the ACM template to default.latex in my pandoc-data directory, and running the command below, I got the following error.

$ pandoc -f markdown+yaml_metadata_block \
  --verbose \
  --data-dir=pandoc-data \
  --variable documentclass=acmart \
  --variable classname=acmlarge \
  --filter=pandoc-citeproc \
  --filter=table-filter.py \
  --bibliography=bibliography/blog-post-citations.bib \
  writing.md metadata-acm.yml \
  -o writing.pdf
Error running filter table-filter.py:
Could not find executable python

My first thought was that python was somehow aliased to an older version of python – i.e. Python 2. To verify this I ran:

$ which python

This didn’t return anything, which explained the error. From this, I assumed that pandoc was expecting that the python alias resolved to the current python. I didn’t know how to change this – for example changing the pandoc preferences to point to the right python binary. Instead, I created a symlink so that pandoc could find python3.

$ pwd
/usr/bin
$ ls | grep python 
python3 
python3.8 
python3.8-config 
python3-config 
python3-futurize 
python3-pasteurize 
x86_64-linux-gnu-python3.8-config 
x86_64-linux-gnu-python3-config 
$ sudo ln -s python3 python 
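
As an aside, on Ubuntu 20.04 and later the same python → python3 mapping is available as a package, which avoids hand-crafting symlinks in /usr/bin (assuming the package exists on your release):

$ sudo apt install python-is-python3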

Installing pandocfilters package for Python

I attempted to run the pandoc command again but ran into another error.

Traceback (most recent call last):
  File "table-filter.py", line 6, in <module>
    import pandocfilters as pf
ModuleNotFoundError: No module named 'pandocfilters'
Error running filter table-filter.py:
Filter returned error status 1

My guess here was that the pandocfilters Python module had not been installed, so I installed it through pip.

$ pip3 install pandocfilters
Collecting pandocfilters
Downloading pandocfilters-1.4.3.tar.gz (16 kB)
Building wheels for collected packages: pandocfilters
Building wheel for pandocfilters (setup.py) … done
Created wheel for pandocfilters: filename=pandocfilters-1.4.3-py3-none-any.whl size=7991 sha256=3c4445092ee0c8b00e2eab814ad69ca91d691d2567c12adbc4bcc4fb82928701
Stored in directory: /home/kathyreid/.cache/pip/wheels/fc/39/52/8d6f3cec1cca4ceb44d658427c35711b19d89dbc4914af657f
Successfully built pandocfilters
Installing collected packages: pandocfilters
Successfully installed pandocfilters-1.4.3
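
For context, a filter like table-filter.py is just a short Python script that pandoc pipes its JSON document tree through. The sketch below is a minimal illustration of the pandocfilters API, not the actual table-filter.py from Daniel’s post – it simply converts emphasised text to bold:

#!/usr/bin/env python3
# Minimal pandoc filter sketch (illustration only, not table-filter.py).
# pandoc sends its JSON-encoded document AST to this script's stdin and
# reads the transformed AST back from stdout when passed via --filter.
from pandocfilters import toJSONFilter, Strong

def emph_to_strong(key, value, format, meta):
    # Each AST element arrives as (key, value); returning a new element
    # replaces it, returning None leaves the element unchanged.
    if key == 'Emph':
        return Strong(value)

if __name__ == '__main__':
    toJSONFilter(emph_to_strong)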

Installing the module again just swapped one error for another.

Error producing PDF.
! LaTeX Error: Missing \begin{document}.
See the LaTeX manual or LaTeX Companion for explanation.
Type H for immediate help.
…
l.55 u

Luckily, I had set --verbose and piped the output to a log file. I went digging through the log file to see if I could find anything useful.

Class acmart Warning: You do not have the libertine package installed. Please upgrade your TeX on input line 669.
Class acmart Warning: You do not have the zi4 package installed. Please upgrade your TeX on input line 672.
Class acmart Warning: You do not have the newtxmath package installed. Please upgrade your TeX on input line 675.

After reading through this Stack Overflow article, I concluded these were all font issues. I used the solution it gave, which was to install another Ubuntu package:

$ sudo apt-get install texlive-fonts-extra

Again, this swapped one error message for another.

! LaTeX Error: Command `\Bbbk' already defined.
See the LaTeX manual or LaTeX Companion for explanation.
Type H for immediate help.
l.261 …ol{\Bbbk} {\mathord}{AMSb}{"7C}
! ==> Fatal error occurred, no output PDF file produced!
Transcript written on ./tex2pdf.-bf3aef739e05d883/input.log.
Error producing PDF.
! LaTeX Error: Command `\Bbbk' already defined.
See the LaTeX manual or LaTeX Companion for explanation.
Type H for immediate help.
l.261 …ol{\Bbbk} {\mathord}{AMSb}{"7C}

Another Google search, another solution from Stack Overflow – this one suggested removing the line \usepackage{amssymb} from the template. The challenge was that I wasn’t sure where the LaTeX template for pandoc was stored. Reading through this Stack Overflow post, it looked like the default LaTeX template is stored at:

~/.pandoc/templates 

But on my system, this directory didn’t exist, so I created it and stored a copy of pandoc’s default LaTeX template in there.
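
The commands were along these lines (a sketch – pandoc’s -D / --print-default-template option writes its built-in template for a given output format to stdout):

# a sketch: dump pandoc's built-in LaTeX template into the user templates directory
$ mkdir -p ~/.pandoc/templates
$ pandoc -D latex > ~/.pandoc/templates/default.latex

With a local copy of the template in place, I was able to remove the offending line: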

\usepackage{amssymb,amsmath}

This then resulted in yet another error which required Googling.

! LaTeX Error: Missing \begin{document}.
See the LaTeX manual or LaTeX Companion for explanation.
Type H for immediate help.
…
l.54 u
sepackage{booktabs} setcopyright{acmcopyright} doi{10.475/123_4}
! ==> Fatal error occurred, no output PDF file produced!

Looking through the log file, I could see this error was generated by the parskip.sty package. More Googling revealed another incompatibility with the ACM class. I removed the parskip package from the default LaTeX template, but was continually met with similar errors.

Parting thoughts

In the end I gave up trying to get this academic writing workflow to function with an ACM template; there were too many inconsistencies and errors for it to be workable, and I’d already sunk several hours into it.

On the plus side though, I learned a lot about pandoc, how to reference with it, and how to produce simple LaTeX documents with this workflow.

What sort of academic writing workflow do you use?

linux.conf.au 2019 Christchurch – The Linux of Things

This year, linux.conf.au 2019 went over the Tasman to New Zealand for the fourth time, to the Cantabrian university city of Christchurch. This was the first year that Christchurch had played host, and I sincerely hope it’s not the last.

First, to the outstanding presentations.

NOTE: You can see all the presentations from linux.conf.au 2019 at this YouTube channel

Open Artificial Pancreas System (OpenAPS) by Dana Lewis

See the video of Dana’s presentation here

Dana Lewis lives with Type 1 diabetes, and her refusal to accept current standards of care with diabetes management led her to collaborate widely, developing OpenAPS. OpenAPS is a system that leverages existing medical devices, and adds a layer of monitoring using open hardware and open software solutions.

This presentation was outstanding on a number of levels.

As a self-experimenter, Dana joins the ranks of scientists the world over who have put their own health on the line in the drive for progress. Her ability to collaborate with others from disparate backgrounds and varied skillsets to make something greater than the sum of its parts is a textbook case of the open source ethos. Moreover, the results that OpenAPS achieved were remarkable: significant stabilization in blood sugars and better predictive analytics – providing better quality of life to those living with Type 1 diabetes.

Dana also touched on the Open Humans project, which aims to have people share their medical health data publicly so that collective analysis can occur – prising this data from the vice-like grip of medical device manufacturers. Again, we’re seeing that data itself has incredible value – sometimes more so than the devices which monitor and capture that data.

Open Source Magnetic Resonance Imaging: From the community to the community by Ruben Pellicer Guridi

You can view the video of Ruben’s presentation here

Ruben Pellicer Guridi‘s talk centred on how the Open Source MRI community was founded to address the problem of needing more MRI machines, particularly in low socio-economic areas and in developing countries. The project has attracted a community of health and allied health professionals, and has made available both open hardware and open software, with the first image from their Desktop MR software acquired in December.

Although the project is in its infancy, the implications are immediately evident; providing better public healthcare, particularly for the most vulnerable in the world.

Apathy and Arsenic: A Victorian era lesson on fighting the surveillance state by Lilly Ryan

You can view the video of Lilly’s presentation here

Lilly Ryan’s entertaining and thought-provoking talk drew parallels between our current obsession with privacy-leaking apps and data platforms and the awareness campaign around the detrimental effects of arsenic in the 1800s. Her presentation was a clarion call to resist ‘peak indifference’ and increase privacy awareness and digital literacy.

Deep Learning, not Deep Creepy by Jack Moffitt

You can view the video of Jack’s presentation here

Jack Moffitt is a Principal Research Engineer with Mozilla, and in this presentation he opened by providing an overview of Deep Learning. He then dug a little deeper into the dangers of deep learning, specifically the biases inherent in current deep learning approaches, and some of the solutions that have been trialled to address them – for example, making gender and noun pairs such as “doctor” and “man” equidistant, so that “doctor” is equally predictive for “man” and “woman”.

He then covered the key ML projects from Mozilla such as Deep Speech, Common Voice and Deep Proof.

This was a great corollary to the two talks I gave.

Computer Science Unplugged by Professor Tim Bell

You can view Tim’s presentation here

Part of the Open Education Miniconf, Tim‘s presentation covered how to teach computer science in a way that was fun, entertaining and accessible. The key problem that Computer Science Unplugged solves is that teachers are often afraid of CS concepts – and CS Unplugged makes teaching these concepts fun for both learners and teachers.

Go All In! By Bdale Garbee

You can view Bdale’s talk here

Bdale’s talk was a reinforcement of the power of open source collaboration, and the ideals that underpin it, with a call to “bet on” the power of the open source community.

Open source superhumans by Jon Oxer

You can view Jon’s talk here

Jon Oxer’s talk covered the power of open source hardware for assistive technologies, which are often inordinately expensive.

Other conversations

I had a great chat with Kate Stewart from the Linux Foundation about the work she’s doing in the programmatic audit of source code licensing space – her talk on the grep-ability of licenses is worth watching – and we covered metrics for communities with CHAOSS, and the tokenisation of Git commits to understand who has committed which code, specifically for unwinding dependencies and copyright.

Christchurch as a location

Christchurch was a wonderful location for linux.conf.au – the climate was perfect – we had a storm or two but it wasn’t 45 C burnination like Perth. The airport was also much bigger than I had expected and the whole area is set up for hospitality and tourism. It won’t be the last time I head to CHC!

linux.conf.au 2018 Sydney – A little bit of history repeating

This year, linux.conf.au 2018 headed back to Sydney, where it hasn’t been held since 2007. This year I skipped quite a few sessions due to having Linux Australia duties and tasks to do, and because the heat and humidity were exhausting. Thankfully, the videos by Next Day Video were released very quickly, so I’m spending “Week 2” of linux.conf.au catching up!

On reflection, several themes came through.

  • Volunteers, volunteering and volunteer labour – There are several free software and opensource organisations across the world, and they’re all vying for volunteer contributions. Moreover, the volunteer base itself is ageing; we’re getting older and having children and families and other family responsibilities – we simply don’t have the time to contribute that we once did. At the other end of the demographic curve, younger people don’t have the same passion and ‘fire in the belly’ for free and open source software. In one sense, that’s a product of the success of the free and open source software movement – because it’s been normalised; but on the other hand this leaves us with a gap in the ‘compelling-reasons-to-join-a-free-and-open-source-project’ list. As a concrete example, during the opening of linux.conf.au, no less than three organisations – Open Source Initiative, Free Software Foundation, and Code Club Australia – did a shoutout for volunteers. At the same time, Linux Australia – the auspicing body of the conference – had fewer nominations to its board than open vacancies. I want to be clear: The Organisers and Volunteers of linux.conf.au did a phenomenal job. They were dedicated, professional, resilient and awe-inspiring. As individuals, and as a conference team, amazing. Systemically though, open source has some major issues to address to avoid burnout, and worse, resentment.
  • Infrastructure-as-code continues to gain maturity – As more and more devices become internet-connected, and we’re managing more and more devices, we need better orchestration. We’re seeing this manifest in container-all-the-things, in MQTT for unified messaging and in our approach to IoT hardware and open hardware. Standards however remain a barrier to interoperability and greater maturity in code-based orchestration, as outlined brilliantly by Kathy Giori.
  • Open source touches many disciplines – the range of Miniconfs available this year sent a strong and undeniable message – free and open source software, hardware and practices are touching many disciplines. Art, genomics, games, galleries, libraries and museums (GLAM) – Linux and open source touch each of these in fundamental ways. Personally, I’m delighted to see this cross-pollination happening in our communities. Together, we do better.

On communities, volunteering and volunteer labour

“A division of labour in free software” – Molly de Blanc, Free Software Foundation

Molly’s talk used the results of different surveys of open source communities to show visually that labour in free software is gendered and ageist, and that these schisms also apply to what is considered technical and non-technical work. The implication of these findings is that these patterns will be repeated unless there is intervening action, such as having quotas on leadership boards. Importantly, anecdotal data shows that we still value technical work over important non-technical work; people still justify their non-technical contributions to an open source project by emphasising the technical contributions they do make.

This resonated strongly with me; as the leader of an organisation that turns over around $AUD 1 million a year – Linux Australia – there are a number of skills I need to have – budgeting, strategic communications, strategic and operational management – and of course, the ability to be an efficient administrator. None of these are technical skills; yet, as the leader of a technical organisation I am expected to have a strong grasp of technology issues. Even in a non-technical role, you’re not allowed to be non-technical.

https://youtu.be/6NDB2VFYlfg

“Dealing with Contributor Overload” – Holden Karau

Holden Karau is a core contributor to the Apache Spark project, and her war stories and guidance were learned the hard way – when the project became so big that contributors were significantly overloaded. She provided a number of strong pieces of guidance for dealing with contributor overload, including:

  • Developing a contributor pipeline to allow users of the project to become contributors, and in time, core committers
  • Not ‘raising the bar’ for changes and requests, because doing so has very unattractive downsides, such as making the contribution pipeline harder to enter and paradoxically increasing the contributor workload through more questions and requests for assistance.
  • The power of having clear roadmaps which make it clear what the core project is, and is not going to do, so that people can either start their own project, or plan around it. The Roadmap also helps guide contributions, and show how smaller tasks contribute to larger milestones.
  • Focussing on committer productivity – such as better tools to merge changes, making it easier to review changes, and more tests – can have significant long term dividends. Imagine what a 1% productivity increase would mean across say 10-20 committers? 50 committers? 100 committers?
  • Creating safe spaces to ask questions and contribute without being mocked – people who feel safe to fail are going to commit more.

https://youtu.be/BempWfBkvs8

 

“Burning Down the Castle” – Daniel Vetter, Intel (previous graphics kernel maintainer)

Daniel’s talk was an eye-opener. As a former graphics kernel maintainer, Daniel has seen a whole range of poor behaviours that contribute to maintainer burn-out, rage-quitting and other unproductive outcomes. His talk advocates for a kinder, gentler approach to maintaining a technically elite community.

 

https://www.youtube.com/watch?v=BB0luXmuo3g

 

“Mirror, mirror on the wall: testing Conway’s Law in open source communities” – Lindsay Holmwood

Lindsay provided an outline of Conway’s law of organisational communication patterns, and the concept of mirroring – the mapping between the organisational structure and the supporting technical structures for communication. Strong mirroring leads to strong ownership – you are led to the actors who own a system. Drawing on the empirical literature on organisational development, he explained how organisations try to solve the problem of communication using different structural strategies. But mirroring works poorly in unstable environments – those undergoing radical change and innovation – which has led to the rise of structures like guilds. These theories were then applied to open source to show that shifts away from the ‘core’ of an open source project can indicate a decline in the project itself. This creates a need to build a pipeline – again the pipeline – of people moving closer to the core in their contributions.

This talk was intense – but the key takeaway was that the way we design organisational structures has a significant impact on organisational outputs and long term organisation success. This is of particular importance for projects that are scaling up significantly; poor choices during scale up will lead to poor productivity later in the project’s lifecycle.

https://www.youtube.com/watch?v=xYkh1sAu0UM

 

Orchestrate all the things. With code.

“MQTT as a Unified Message Bus for Infrastructure Services” – Matthew Treinish

This was an excellent talk by Matt Treinish, who outlined the reasons behind the design of MQTT, which was originally designed for sensor telemetry. He went on to show that the broker supports different levels of quality of service. An excellent introduction to how MQTT can be used as a unified messaging bus – as used in FireHose.

https://www.youtube.com/watch?v=y6xN6S407Xc

 

“What does the buyout of @arduino mean for #openhardware?” – Kathy Giori, IoT at Mozilla

I was truly disappointed not to be able to make it to Kathy’s presentation, as it came about partly because of a tweet I’d sent out to #lcapapers in mid-2017 – one which Kathy gave me a shout-out for. Thank you, and apologies for not being there in person.

Giori provided an overview of the corporate history of Arduino and how it’s now consolidated under one company, lamenting the drawn-out legal process that led to this point.

She went on to outline some of the challenges in licensing for open hardware, and how manufacturers are being cheated by lower-quality knock-offs – with those same manufacturers then expecting the original author of the open hardware / open software to provide ongoing support. This led to a discussion on the different levels of openness in open hardware, and the pros and cons of each.

Concluding the talk, Kathy provided an overview of the Mozilla Web of Things project, which is attempting to bring some standardisation and streamlining to the very fragmented IoT and open hardware space. There are competing standards and competing platforms, and the piece I hadn’t realised was that this fragmentation is actually inflating costs for consumers. Because individual companies need to make hubs and supporting infrastructure for “their” range of IoT hardware, each endpoint device – light bulb, sensor, thermostat and so on – is quite expensive. Mozilla is seeking to bring stronger interoperability to this space by creating the ‘Web of Things’:

“The “Web of Things” (WoT) is the idea of taking the lessons learned from the World Wide Web and applying them to IoT. It’s about creating a decentralized Internet of Things by giving Things URLs on the web to make them linkable and discoverable, and defining a standard data model and APIs to make them interoperable.”

If anyone can drive this, Mozilla can, but my personal feeling is that they’re going to come up against significant corporate interests in doing so – at a time when their own corporate mis-steps (Mr Robot, anyone?) have significantly backfired. I live in hope.

https://www.youtube.com/watch?v=x2ltqoAqJbY

Cross-pollination, because together we do better

“The Future of Art” by J Rosenbaum

This was the mind-blowing talk of #lca2018 for me personally. Academic and artist J Rosenbaum took us through their research, which sits at the intersection of machine learning, neural networks and the production of art.

J’s talk started with an overview of machine learning art projects, such as Botnik and the work of Janelle Shane, and moved on to underscoring the collaboration between human and machine in generative art.

The future is not man versus machine  – the future of art is man with machine.

https://www.youtube.com/watch?v=lTT2mq692JQ

 

“The Knitting Printer” by Sarah Spencer

Again a brilliant intersectional talk, this time by Melbourne-based hobbyist and knitter Sarah Spencer, in which she provided an introduction to knitting machines and a breakdown of how she reverse-engineered a hack for a 32-bit knitting machine so she could get images from her computer onto the machine.

Massive respect, @chixor.

https://www.youtube.com/watch?v=Y6k15pdFTsA

 

“Wearing access: a story about open collections, a sewing machine and the nation’s secrets” – Bonnie Wildie

Bonnie’s talk, from the OpenGLAM Miniconf, was very much a hidden gem of the conference. She talked about the concept of redaction art, created from files that have been redacted – and remixed. Bonnie even turned the redaction art into a dress, which opened up a conversation on the politics and power of what we wear. Dress and costume become media for subversion. Much awesome.

https://www.youtube.com/watch?v=XhTzE67HrhE