# Setting up an academic writing workflow using Pandoc, Markdown, Zotero and LaTeX on Ubuntu Linux using Atom

This year, I started a PhD at the 3A Institute, within the College of Engineering and Computer Science at Australian National University. I came into the PhD not from a researcher or academic background, but from a career in industry as a technology practitioner. As such, my experience with formatting academic papers, for example for publication in journals, is limited. Our PhD program is hands-on; as well as identifying a specific research topic and articulating a research design, we undertake writing activities – both as academic writing practice, and to help solidify the theoretical concepts we’ve been discussing in Seminar. I needed an academic writing workflow.

For one of these writing exercises, I decided to build out a toolchain using LaTeX – the preferred typesetting tool for academic papers. This blog post documents how I approached this, and serves as a reference both for myself and others who might want to adopt a similar toolchain.

In summary, the process was:

• Define overall goals
• Install dependencies: pandoc, Zotero and the BetterBibtex extension (for citations), and LaTeX
• Experiment with pandoc on the command line for generating PDF from Markdown via LaTeX

## Goals of an academic writing workflow

In approaching this exercise, I had several key goals:

• create an academic writing workflow using my preferred Atom IDE on Linux;
• that could easily be set up and replicated on other machines if needed;
• which would allow me to use my preferred citation editor, Zotero;
• and which would allow me to use my preferred writing format, Markdown;
• and which would support the LaTeX templates available for journals (and from the ANU for formatting theses)

For those who’ve worked with LaTeX before, one of the key questions you might have here is “why not just use a platform like Overleaf for your academic writing workflow, and not worry about setting up a local environment?”. Overleaf is a cloud LaTeX service that provides a collaborative editing environment and a range of LaTeX templates. However, it’s not free – plans range from USD $12–35 per month. Overleaf is based on free and open source software, and adds a proprietary collaboration layer over the top – it abstracts away complexity – and this is what you pay a monthly fee for. In principle, I have no issue with cloud platforms adding value to FOSS and then charging for that value, but I doubt any of the profits from Overleaf are going to folks like John MacFarlane – the author of pandoc – or Donald E. Knuth and Leslie Lamport, who contributed the early work on TeX and LaTeX respectively. I felt like I owed it to these folx to “dig a little under the hood” and learn some of that complexity instead of outsourcing it away. So, to the process …

## Installing pandoc

Pandoc is a free and open source tool for converting documents between different formats. It’s widely used in academia, and used extensively in publishing and writing workflows. Pandoc is written by John MacFarlane, a philosophy professor at UC Berkeley. One of the things John is less known for is Lucida Navajo, a font for the Navajo language, a Native American language of the Southern Athabascan family. Although it’s based on the Latin script, it contains a number of diacritical marks not found in other Latin-script languages.

Pandoc is available for all platforms, but because my goal here was to develop a workflow on Ubuntu Linux, I’ve only shown installation instructions for that platform. To install pandoc, use the following command:

$ sudo apt install pandoc

I also had to make sure that the pandoc-citeproc tool was installed; it’s not installed by default as part of pandoc itself. Again, this was a simple command:

$ sudo apt install pandoc-citeproc

## Installing Zotero and the BetterBibtex extension

My next challenge was to figure out how to do citations with pandoc. The pandoc documentation has a whole section on citations, which provided some pointers, and this blog post from Chris Krycho was also useful in figuring out what to use. The approach involved installing the BetterBibtex extension for Zotero (which I already had installed). You can find installation instructions for the BetterBibtex extension here; it has to be downloaded as a file and then added through Zotero (not through Firefox).

BibTex is a citation standard supported by most academic journals and referencing tools – Google Scholar exports to BibTex. Once installed, BetterBibtex updates each Zotero reference with a citation key that can then be used as an “at-reference” in pandoc – e.g. @strengersSmartWifeWhy2020. Updating citation keys can take a few minutes – I have several thousand references stored in Zotero, and it took about six minutes to ensure that each reference had a BibTex citation key. In order to use BetterBibtex, I had to make sure that Zotero’s export format was set to Better BibTex, and that Zotero updated the export on change.

## Installing LaTeX

Next, I needed to install LaTeX for Linux. This was installed via the texlive package:

$ sudo apt install texlive

Based on a Stack Overflow answer to an error message I hit early on in testing, I also installed the texlive-latex-extra package.

$ sudo apt-get install texlive-latex-extra

## Zotero citations package for Atom

Next, I needed to configure the Atom IDE to work with Zotero and BetterBibtex, which involved installing several Atom plugins. Once these were installed, I was ready to start experimenting with a workflow.

## Experimenting with a workflow

To start with, I used a basic workflow to go from a Markdown-formatted text file to PDF. pandoc converts the Markdown to LaTeX as an interim step, and then to PDF, using the inbuilt templates from pandoc. This first step was a very basic attempt to go from pandoc to PDF, designed to “shake out” any issues with the software installation. The pandoc command line options I used to start with were:

$ pandoc -f markdown \
  writing.md \
  -o writing.pdf

In this example, I’m telling pandoc to expect Markdown-styled input, to use the file writing.md as the input file, and to write to the output file writing.pdf. pandoc infers that the output file is PDF-formatted from the .pdf extension.

### Adding citations

Next, I wanted to include citations. First, I exported my Zotero citations to a .bib-formatted file, using the BetterBibtex extension. I stored this in a directory called bibliography, in the same directory as my writing.md file. The command line options I used here were:

$ pandoc -f markdown \
  --filter=pandoc-citeproc \
  --bibliography=bibliography/blog-post-citations.bib \
  writing.md \
  -o writing.pdf

Note here the two additional options – the --filter option used to invoke pandoc-citeproc, and the --bibliography option to include a BibTex-formatted file. This worked well, and generated a plainly formatted PDF (based on pandoc’s default template).
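For context, this is what an at-reference looks like in the Markdown source – the citation key is the BetterBibtex one mentioned earlier, though the sentences themselves are invented purely for illustration:

```markdown
Smart home assistants are increasingly given feminised personas
[@strengersSmartWifeWhy2020].

As @strengersSmartWifeWhy2020 note, this is a deliberate design choice.
```

When run through pandoc-citeproc, the bracketed form renders as a parenthetical citation, the bare form as an in-text one, and both add the full reference to the generated bibliography.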

### Using a yaml file for metadata

Becoming more advanced with pandoc, I decided to experiment with including a yaml file to help generate the document. The yaml file can specify metadata such as author, date and so on, which can then be substituted into the PDF file – if the intermediary LaTeX template accommodates these values. The basic LaTeX template included with pandoc includes values for author, title, date and abstract.

Here’s the yaml file I used for this example:

---
author: Kathy Reid
title: blog post on pandoc
date: December 2020
abstract: |
  This is the abstract.

...

Note that the yaml file must start with three dashes (---) and end with three periods (...).

The pandoc command line options I used to include metadata were:

$ pandoc -f markdown+yaml_metadata_block \
  --filter=pandoc-citeproc \
  --bibliography=bibliography/blog-post-citations.bib \
  writing.md metadata.yml \
  -o writing.pdf

Note here that the -f switch has an additional option, yaml_metadata_block, and that the yaml file is listed after the first input file, writing.md. By adding the metadata.yml file on the command line, pandoc considers them both to be input files. Using the yaml file automatically added the author, title, date and abstract information to the resulting PDF. I also found that I could control the margins and paper size of the resulting PDF by setting these in the yaml file:

---
author: Kathy Reid
title: blog post on pandoc
date: December 2020
abstract: |
  This is the abstract.
fontsize: 12pt
papersize: a4
margin-top: 25mm
margin-right: 25mm
margin-bottom: 25mm
margin-left: 25mm
...

### Adding in logging

It took a little while to get the hang of working with yaml, and I wanted a way to inspect the output of the pandoc process. To do this, I added a switch to the command line, and piped the output of the command to a log file.

$ pandoc -f markdown+yaml_metadata_block \
  --verbose \
  --filter=pandoc-citeproc \
  --bibliography=bibliography/blog-post-citations.bib \
  writing.md metadata.yml \
  -o writing.pdf \
  > pandoc-log.txt 2>&1

The --verbose switch tells pandoc to use verbose logging, and the log output is redirected to pandoc-log.txt. If it weren’t redirected to a file, it would appear on the screen, and because it’s verbose, it’s hard to read – it’s much easier to capture it in a file and inspect it there.

### Working with other LaTeX templates

Now that I had a Markdown to PDF via LaTeX workflow working reasonably well, it was time to experiment using an academic writing workflow with other templates. Many publications provide a LaTeX template for submission, such as these from the ACM, and ideally I wanted to be able to go from Markdown to a journal template using pandoc.

I’d come across other blog posts where similar goals had been attempted, but this proved significantly harder to implement than I’d anticipated.

My first attempt entailed trying to replicate the work Daniel Graziotin had done here – but I ran into several issues. After copying over the table-filter.py file from the blog post and an ACM .cls file, copying the ACM pdf file to default.pdf in my pandoc-data directory, and running the command below, I got the following error.

$ pandoc -f markdown+yaml_metadata_block \
  --verbose \
  --data-dir=pandoc-data \
  --variable documentclass=acmart \
  --variable classname=acmlarge \
  --filter=pandoc-citeproc \
  --filter=table-filter.py \
  --bibliography=bibliography/blog-post-citations.bib \
  writing.md metadata-acm.yml \
  -o writing.pdf

Error running filter table-filter.py:
Could not find executable python

My first thought was that python was somehow aliased to an older version of python – i.e. python 2. To verify this, I ran:

$ which python

This didn’t return anything, which explained the error: pandoc was expecting a python alias that resolved to the current python. I didn’t know how to change this within pandoc – for example, by changing a pandoc preference to point at the right python binary – so instead, I created a symlink so that pandoc could find python3.

$ pwd
/usr/bin
$ ls | grep python
python3
python3.8
python3.8-config
python3-config
python3-futurize
python3-pasteurize
x86_64-linux-gnu-python3.8-config
x86_64-linux-gnu-python3-config
$ sudo ln -s python3 python

### Installing the pandocfilters package for Python

I attempted to run the pandoc command again, but ran into another error.

Traceback (most recent call last):
  File "table-filter.py", line 6, in <module>
    import pandocfilters as pf
ModuleNotFoundError: No module named 'pandocfilters'
Error running filter table-filter.py:
Filter returned error status 1

My guess here was that the python module pandocfilters had not been installed via pip, so I installed it:

$ pip3 install pandocfilters
Collecting pandocfilters
  Downloading pandocfilters-1.4.3.tar.gz (16 kB)
Building wheels for collected packages: pandocfilters
  Building wheel for pandocfilters (setup.py) … done
  Created wheel for pandocfilters: filename=pandocfilters-1.4.3-py3-none-any.whl size=7991 sha256=3c4445092ee0c8b00e2eab814ad69ca91d691d2567c12adbc4bcc4fb82928701
  Stored in directory: /home/kathyreid/.cache/pip/wheels/fc/39/52/8d6f3cec1cca4ceb44d658427c35711b19d89dbc4914af657f
Successfully built pandocfilters
Installing collected packages: pandocfilters
Successfully installed pandocfilters-1.4.3
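As an aside, these filter errors make more sense once you know what a pandoc filter actually is: a program that reads pandoc’s document tree as JSON on stdin and writes a (possibly modified) tree back to stdout. The sketch below is my own plain-Python illustration of that pattern – it is not the code from table-filter.py, and the emphasise_strong action is a made-up example:

```python
import json
import sys


def walk(node, action):
    """Recursively apply `action` to every dict node in a pandoc JSON tree."""
    if isinstance(node, list):
        return [walk(item, action) for item in node]
    if isinstance(node, dict):
        replacement = action(node)
        if replacement is not None:  # the action returned a substitute node
            node = replacement
        return {key: walk(value, action) for key, value in node.items()}
    return node  # plain strings and numbers pass through untouched


def emphasise_strong(node):
    """Example action: rewrite Strong (bold) inline elements as Emph (italics)."""
    if node.get("t") == "Strong":
        return {"t": "Emph", "c": node["c"]}
    return None  # None means "leave this node unchanged"


def main():
    # pandoc pipes the document tree in via stdin and reads the result on stdout
    doc = json.load(sys.stdin)
    json.dump(walk(doc, emphasise_strong), sys.stdout)

# A real filter script would finish by calling: main()
```

Saved as a script and passed with --filter=, pandoc runs this between parsing and writing. The pandocfilters module’s toJSONFilter helper packages up the same stdin/stdout walk, so that filters like table-filter.py only have to define the action.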

Installing pandocfilters again swapped one error for another.

Error producing PDF.
! LaTeX Error: Missing \begin{document}.
See the LaTeX manual or LaTeX Companion for explanation.
Type H for immediate help.
…
l.55 u

Luckily, I had set --verbose and piped the output to a log file, so I went digging through the log to see if I could find anything useful.

Class acmart Warning: You do not have the libertine package installed. Please upgrade your TeX on input line 669.
Class acmart Warning: You do not have the zi4 package installed. Please upgrade your TeX on input line 672.
Class acmart Warning: You do not have the newtxmath package installed. Please upgrade your TeX on input line 675.

After reading through this Stack Overflow article, these all looked like font issues. I used the solution given in the Stack Overflow article, which was to install another Ubuntu package:

$ sudo apt-get install texlive-fonts-extra

Again, this swapped one error message for another.

! LaTeX Error: Command \Bbbk already defined.
See the LaTeX manual or LaTeX Companion for explanation.
Type H for immediate help.
…
l.261 …ol{\Bbbk} {\mathord}{AMSb}{"7C}
! ==> Fatal error occurred, no output PDF file produced!
Transcript written on ./tex2pdf.-bf3aef739e05d883/input.log.
Error producing PDF.

Another Google search, another solution from Stack Overflow – this one suggesting the removal of the LaTeX line \usepackage{amssymb} from the template. The challenge was, I wasn’t sure where pandoc’s LaTeX template was stored. Reading through this Stack Overflow post, it looked like the default LaTeX template is stored at:

~/.pandoc/templates

But on my system, this directory didn’t exist. I created it, and stored a copy of pandoc’s default LaTeX template there (pandoc -D latex > ~/.pandoc/templates/default.latex). Then I was able to remove the offending line.

\usepackage{amssymb,amsmath}

This then resulted in yet another error which required Googling.

! LaTeX Error: Missing \begin{document}.
See the LaTeX manual or LaTeX Companion for explanation.
Type H for immediate help.
…
l.54 u
! ==> Fatal error occurred, no output PDF file produced!

Looking through the log file, I found that this error was generated by the parskip.sty package. More Googling revealed another incompatibility with the ACM class. I removed the parskip package from the default LaTeX template, but was continually met with similar errors.

## Parting thoughts

In the end I gave up trying to get this academic writing workflow to function with an ACM template; there were too many inconsistencies and errors for it to be workable, and I’d already sunk several hours into it.

On the plus side though, I learned a lot about pandoc, how to reference with it, and how to produce simple LaTeX documents with this workflow.

What sort of academic writing workflow do you use?

# linux.conf.au 2020 – Who’s watching?

Yes, it’s July, and linux.conf.au 2020 #lca2020 happened over six months ago – but for everyone, the last few months in The Time of ‘Rona have been a little strange, so better late than never!

## Location

This year, linux.conf.au headed to the Gold Coast – a location that it’s never been to before. That in itself is a major accomplishment by Joel Addison, Ben Stevens and the linux.conf.au 2020 #lca2020 team. linux.conf.au, and other open source events in the Linux Australia stable such as PyConAU and DrupalSouth, are entirely volunteer-run. Those volunteer communities tend to spring up geographically – and due to population and surrounding ecosystems this tends to happen more frequently in large capital cities. For example, if we look at the Startup Genome ecosystem reports, only three cities in Australia are significant enough to make the global map – Melbourne, Sydney and Brisbane – frequent locations for linux.conf.au.

As a location, the Gold Coast was outstanding (n=9). There were plentiful accommodation options to suit a range of budgets, nearby food options, plenty of outstanding venues for conference events like the Penguin Dinner and PDNS, and a beach a couple of blocks away. The Gold Coast Convention and Exhibition Centre was an excellent venue for the conference. It had a mix of room sizes, comfortable seating, breakout and quiet space, organiser space and enough “space” in general for the 600-ish delegates. I can’t remember whether Wi-Fi this year was venue-provided, or whether AARNet deployed this as they often do – but I do remember Wi-Fi being strong, fast and reliable. A convention centre is always a tough choice for LCA – the additional venue costs compared to, say, a University, are a major proportion of the conference budget, so to justify that the venue really has to deliver, and for this delegate, this one did.

## Community

The Welcome to Country was given by the Yugambeh people of the Gold Coast. A Welcome to Country is now part of the LCA tradition; however I also know from experience how difficult it can be to arrange – when we tried to arrange a Welcome to Country for LCA2016 we were unable to get in contact with the Traditional Owners of our region.

Another beautiful community memory I have of #lca2020 was this gorgeous rainbow “Yarn Chicken” pin gifted to me by Keith Packard – also an avid knitter (for anyone of a similar purl-suasion, there is an LCA knitters group on Ravelry). Keith’s sister owns Island Wools in PDX, and if you would like a pin you can get one from there (‘Rona permitting).

## Notable presentations

### Donna Benjamin – Who’s watching?

You can see Donna’s keynote here on YouTube

Using the experience of her grandparents escaping Nazi Germany as a departure point, and her Dad’s ASIO file – created for such hideous crimes as advocating for Indigenous people to be able to vote (applause), Donna posed some very uncomfortable questions to the audience.

We are surveillance arms dealers for the persuasion industry. Are we accountable for the tools we make?

In a nuanced, multi-dimensional talk about the benefits and drawbacks of surveillance technology, Donna took intent as her index point, outlining how intent is the key differentiator in whether technologies contribute to collective good or collective evil. What is the intent of our actions? How does that intent change over time?

Drawing trajectories and threads from the past, she painted some clear trajectories for the future, and outlined the key actions we as a community can take to shape the world in ways we want to see – collective privacy, and collective efforts to hold others accountable for the ways in which they use technology.

Donna left me with a clear and resounding resonance.

The fight isn’t over, and our work is not yet done.

### A/Prof Vanessa Teague – Who cares about Democracy? The sorry state of Australia’s election software and what we can do about it

You can see this presentation on YouTube

Vanessa’s keynote was a state-of-the-landscape talk which outlined the cryptographic deficiencies of several of the e-election software models being deployed across the country. Using mathematical proofs of cryptography – something the deeply technical LCA audience was at ease with – she showed how flaws in implementation imperil not just the integrity of elections, but our democratic processes themselves.

Vanessa is someone I admire greatly.

Her personal integrity is something I deeply respect – she recently resigned from the University of Melbourne shortly after LCA, after the Department of Health pressured the University over her (and colleagues’) research findings that supposedly anonymous and de-identified health records were re-identifiable. Her work in assiduously interrogating the COVIDSafe app and, again, identifying flaws in its implementation, makes her a vanguard of privacy, digital rights, and of building systems that can be validated.

### Open Education Miniconf Keynote – The Who of CSIRAC – Roland Gesthuizen, Gillian Kidman, Hazel Tan, Caroline Pham

You can see this presentation on YouTube

This talk was one of my favourites from the conference, and provided a history of CSIRAC – Australia’s first programmable computer. The talk drew through-lines from HAL, 2001: A Space Odyssey and Memory Alpha, to the NASA space program and rope memory, to the women who first occupied the skilled role of “computers”, doing astronomical trajectory calculations. It went on to outline key milestones in the history of CSIRAC – being switched on for the first time in 1949, using the electricity of a small town, how it adopted technology from the telegraph, and the jacquard loom. It highlighted the small obstacles the team had to overcome to reach larger goals – which on reflection appears to be a recurring theme in technological development – the need for horizontal axis storage, and the problem of digital decay and bit error.

The talk went on to explore the role of Trevor Pearcey in CSIRAC’s development, his contribution to Australian computation, and his role in opening up CSIRAC “to the people” and furthering public understanding of its capabilities – virtually unknown at the time. Issues such as trust and faith in technology were examined: one failed program on CSIRAC meant that people would be wary of using it a second time – something that echoes through our use of technology today. The talk highlighted the culture that surrounded CSIRAC – one of tinkering, of playing and exploring, of creativity.

What would have happened if the research assistants were required to submit a project plan? You need freedom and the space to explore; and from this can emerge unexpected and unanticipated benefits.

My takeaways from this talk were that understanding our technical history, and the challenges that have been faced, and overcome, help us to understand how the technologies of today emerge, evolve, and go to scale – and provide lessons on how we can shape those trajectories. We have a role to play in ensuring students are creators of technology, not just consumers of technology.

### Christopher Biggs – The Awful Design of Everyday Things

You can see this presentation on YouTube

Drawing links between design, documentation and technology, Christopher took us on an entertaining, insightful and challenging tour-de-force of design fail. He challenged us to improve our human-centred design skills, because:

Documentation is required when the design has failed

Drawing from Asimov’s laws of robotics, he put forward rules for human-centred design of technology:

• Machines must be beautiful (or invisible)
• Machines must co-operate for the benefit of humanity
• Machines must communicate, and obey instructions
• Machines must be as simple and reliable as possible

Extrapolating these to the internet of things, he provided principles for design:

• discoverability – how does the user discover how to use the interface?
• test on beginners – how does someone without context use the interface or product? Watch people, what do people expect?
• feedback – how does the interface provide useful feedback?
• affordances – what are the affordances of the interface? How does the user know this?
• completion – how does the user know they’ve completed their task?

A house is a machine to live in – and we need to be friends with the machines.

### Joshua Simmons – Open Source Citizenship

You can see this presentation on YouTube

Josh’s presentation focused on the ways in which companies and large organisations can be good open source “citizens”. As citizens, we have a duty to the society and communities of which we are a part – and from which we benefit, and companies that profit from open source software have similar obligations.

He outlined practical ways in which businesses can support open source, and in doing so, support the technical foundations on which their profits are generated, including:

• understanding the technical dependencies of their products, and supporting the components on their stacks. It’s this contribution that helps the communities maintaining those projects to continue to do so. Open source is part of a business supply chain – and if you don’t want part of that supply chain to vanish, then it needs to be supported.
• sending people to conferences, and paying for travel, as conferences themselves – such as linux.conf.au – often provide a revenue stream to open source organisations.
• encouraging universities to give students credit for contributing to FLOSS projects – as this is analogous to “paid” work in industry.

### Jussi Pakkanen – Fonts and Math

You can see this presentation on YouTube

From the Creative Arts Miniconf, I really appreciated Jussi’s presentation as an amateur font designer, working through some of the approaches in font design. One of the approaches is to design each glyph in the alphabet individually, which is time consuming.

This led Donald Knuth, in 1977, to mimic the way that a person draws with a nib – the shape of the pen – by defining the strokes of the pen mathematically. This information can then be used to generate the glyphs of the alphabet, using a set of linear equations.

The work of Knuth has been extended in projects such as Metafont and TeX.

My key takeaway from this talk was that it sits at the intersection of the mathematical and the artistic – maths has an inherent beauty to it, in the curves of its equations. It is by combining the artistic and the mathematical that we can design beautiful, re-usable, extendable, scalable fonts.

## Thank you to all the Volunteers, Core Team and Sponsors

I know how hard it is to deliver an outstanding LCA – I’ve done it twice. It’s a huge amount of work, for a long time – planning for an LCA can take 12-18 months – and in that time other priorities can slip – like family and relationships. A huge, huge thank you to the whole #lca2020 team for your outstanding efforts, dedication and contribution not only to Australia’s open source community, but to open source efforts worldwide.

## Call for Volunteers for #lca2021 – linux.conf.au goes online

Following the announcement earlier this year that linux.conf.au 2021, originally set to be in Canberra, would be postponed to 2022 due to the coronavirus pandemic, and that linux.conf.au 2021 would be an online event, the Call for Volunteers has now opened. Being a Volunteer at linux.conf.au is a significant commitment, but is also a great way to meet new people, and get experience in many areas that might complement your career path, such as project management, team leading and people management, media and marketing, audio visual, logistics and event co-ordination.

# My talk picks for #lca2020 – Who’s Watching?

Wow! It’s that time of year when linux.conf.au has come around, and this year, for the first time, it’s in the stunning Gold Coast, 13th-17th January 2020. After having a read through the schedule, I’ve made a plan for which talks I’d like to see.

### Monday and Tuesday – Miniconfs

Monday and Tuesday of conf are Miniconfs – essentially special interest groups in different areas of open source. The schedules aren’t all up for Miniconfs at the time of writing, but on Monday I’ll probably be somewhere between Creative Arts, Documentation and Sysadmin. On Tuesday I’ll be between GO Glam (top work, Sae Ra Germaine and Hugh Rundle!) and Identity, Privacy and Security (likewise, Ben Dechrai!). In particular, I’d like to hear William Brown’s talk on the psychology of multi-factor authentication. I heard William talk at PyconAU lightning talks earlier this year and he’s an excellent presenter.

Monday night is the Linux Australia Annual General Meeting, which I’d like to attend. Voting is open for the elections, if you’re a member. You’re not a member? You should be. It’s free as in beer, and free as in freedom.

### Wednesday – Main conf

I’ll be giving my talk on Wednesday about SenseBreast – a mastectomy prosthetic that was developed as a student project as part of the Masters in Applied Cybernetics at the 3A Institute earlier in the year. Apart from that, I’d like to see Karen Sandler’s talk on how to understand the intentions of others and build stronger communities. Karen is amazing, and I always get something out of her talks.

Keith’s talk on his new Snek Python-based language for embedded devices will be great for the Pythonistas, but I’m also worried that the fragmentation in this space – Micropython is a clear leader – will actually make adoption of Python on embedded devices and micro-controllers harder. I also don’t want to miss Daniel McCarthy’s talk on building hexapod robots – I know Daniel from the two years I spent leading GovHack Geelong and he has an incredible mind.

Dr Peter Chubb is always an excellent presenter, and his talk on electronics from household components promises to be entertaining. David Tulloh is always a leader in the “crack track” of LCA (his Linux microwave talk from Geelong 2016 is unmissable), and I had considered going to his presentation on KiCAD PCB-drafting software, but I think it will be over my head. Those in the HPC space might want to consider seeing Hugh Blemings’ talk about the OpenPower stack and ecosystem, which focuses on open hardware for HPC, which is a relatively new development.

### Thursday – Main conf

On Thursday, I want to see Marissa Takahashi’s talk on an ethical data infrastructure, and Christopher Biggs’ talk on privacy-preserving IoT, but they’re scheduled against each other.

I definitely want to catch Opal Symes’ talk on collecting information with care. She has a rich, and challenging narrative to share and we all stand to learn a lot from her journey. Nicola Nye’s talk seeks to challenge the prevailing opinion that capitalism and ethics can’t co-exist, and I want to hear more on this.

### Friday – Main conf

On Friday, I’ll be giving a tutorial on Scribus, and will probably need a break in the morning, but I do want to catch mnot’s talk on securing internet protocols, and the work that’s left to be done in this space – particularly given how antiquated TCP/IP is now, and that Australia doesn’t yet have good penetration of IPv6.