The first day of LinuxCon Europe 2012 went brilliantly. After a short hop from Edinburgh via London City, I landed in Barcelona and had a great night’s rest, if a little jetlagged, and woke up early and raring to go. The only bugbear with the hotel room (which was spacious and well appointed) was that the hotel internet was as slow as a wet week.
Conference registration was very smooth, with my badge printed on the spot based on my emailed registration number. The Rego Desk staff were also very welcoming and friendly – always a good omen for what a conference is going to be like – and easily recognisable from their (black) t-shirts.
Speaking of t-shirts, I nearly died when they had one in my size! Oh yes, I am liking this conference. It’s OK, no t-shirtgate this time around 🙂 Of course, it will look better when spiced up with some Arduino LilyPad goodness.
Next, it was time to plan my schedule for the day, which was made much easier by the mobile-friendly schedule planning tools. One of the key gaps when we ran linux.conf.au earlier in the year was the lack of mobile support in ZooKeepr, the conference software we used.
The projection and screen displays were top notch, particularly in the keynote venue. Audio and acoustics were great, and speakers could easily be heard. Apparently the event was being live streamed – I didn’t check, but the audio was certainly good enough for live streaming.
The one thing I didn’t see but expected to was more women – Leslie Hawthorn was the only lady I recognised on Day 1, but maybe there are more coming later in the week. After the 10-15% female registration at linux.conf.au, it did seem a bit unusual. The lack of female attendance also drew attention on Twitter – not in an unwelcoming way, just observational.
There were big names amongst the sponsors this year too – with Intel and Qualcomm as major sponsors. In a move that also surprised me at Open Source Systems, Microsoft were again gold sponsors. This seems to fit with their cloud strategy – with Azure becoming a key product in their portfolio.
Introduction – Jim Zemlin
Executive Director of the Linux Foundation, Jim Zemlin officially opened proceedings, remarking on the current trend of collaborative open development, and invited attendees to read the Linux Foundation’s latest IDC whitepaper on Linux and the open cloud (http://www.linuxfoundation.org/publications/linux-foundation). He also welcomed new and upgrading members of the Linux Foundation, as well as the Automotive Linux Initiative – lamenting that he would have ordered a Linux BMW already, if only he could get his wife to agree. Zemlin concluded by encouraging the audience to vote for Linux in the ‘biggest social impact’ category of the 2012 TechCrunch Awards – the ‘Crunchies’.
Zemlin also used one of the breaks to recognise the outstanding achievements of long time Linux advocate Mr Masahiro Date, who is soon to retire from Fujitsu.
Advancing the User Experience – Mark Shuttleworth
Mark Shuttleworth, founder of Canonical and a luminary in the Ubuntu community, made the case for ‘connecting the dots’ for the entire user experience across the whole Linux platform – thus making developers, with a plethora of choice at their fingertips, more comfortable on Linux, and making it their preferred option.
Shuttleworth explained that in order to do this, the ‘operational friction’ of using Linux needs to be reduced – it needs to be a smooth, seamless experience, and help developers and sysadmins manage the increasing complexity they are faced with. Indeed, he described a key failure of Linux – and a possible reason for its delayed dominance – as its failure to produce clarity of user experience.
Reducing operational friction can be achieved by putting effort into user experience and by using design thinking – the philosophy behind Canonical’s Juju offering. Based on the concept of ‘charms’, Juju attempts to distill applications, and knowledge about them, into the cloud – encapsulating and reusing heterogeneous application components.
Shuttleworth went on to describe the convergence of all devices to one platform. Clients will be doing less processing, which means that the heavy processing will be done in the data centre. The cost of VMs is decreasing, and the cost of desktop PC management is increasing. We are facing a thin client/thick cloud world. His vision is that there will be a “common version of Linux on every device across the institution”.
Mostly Sunny – Dave Engberg
This presentation by Engberg, Evernote’s Chief Technology Officer, was one of my personal highlights. He went through a detailed business case and cost comparison of why they have chosen to host their own 400-500 boxen in a seemingly contrarian manner to current best practice. As he explained, it all comes down to
“What is the cloud good at, and is this what you need?”
Engberg worked through several slides comparing the performance features of their in-house setup with what it would cost to replicate that capability using cloud solutions. The application characteristics for Evernote – large storage, and heavy use of metadata and Lucene search indexing – drive their highest costs. They rarely see high CPU spikes, and so purchasing cloud services doesn’t meet their needs.
Their total cost in-house was around $90.5k per month, compared with going to cloud services at a cost of $182.5k – $284.3k per month. Of course, as Engberg noted, your organisation has to factor labour costs and risk into the equation – and make the decision that is right for your organisation. Essentially, his point was that you should be using the cloud unless you can justify not doing so – and in Evernote’s case, they had.
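Engberg’s figures make the break-even arithmetic easy to check. A minimal sketch using the monthly totals quoted above (the helper function is my own illustration, not Evernote’s actual cost model):

```python
# Monthly figures quoted in the talk, in US dollars.
IN_HOUSE = 90_500
CLOUD_LOW, CLOUD_HIGH = 182_500, 284_300

def monthly_saving(in_house, cloud_low, cloud_high):
    """Range of monthly savings from staying in-house versus the cloud."""
    return cloud_low - in_house, cloud_high - in_house

low, high = monthly_saving(IN_HOUSE, CLOUD_LOW, CLOUD_HIGH)
# Staying in-house saves roughly $92k to $194k per month,
# before labour costs and risk are factored in.
```

Labour and risk sit outside these numbers, which is exactly why Engberg stressed that each organisation has to run its own version of this calculation.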
To me this presentation was an excellent use of metrics, and a great example of why it makes good business sense to have a solid understanding of total cost of ownership of services – so that you can make informed strategic decisions about procurement and sourcing.
Xen in the cloud – Lars Kurth
Kurth introduced this topic by providing a brief history of the Xen Project and explaining the two key hypervisor architectures – the first being the bare metal hypervisor, such as VMware ESX, with higher security; the second being hosted, running within the host environment. He explained that the Xen hypervisor uses a Dom0 kernel – essentially the hypervisor boots the Dom0 kernel, and thus there is an advantage in re-using the Linux kernel. However, Xen is not in the kernel itself – but everything you need to run Xen is, and Xen packages are in most distros.
He went on to explain the concept of disaggregation and the driver/stub/service domain model where applications are deprivileged and isolated. He also explained the concept of paravirtualisation, and the virtualisation spectrum of options from fully virtualised to paravirtualised.
I fell asleep from jetlag at this point and had a siesta 🙂
Metrics for open source projects – Dawn Foster
“The right metrics for my project are not necessarily the right metrics for your project”
Foster opened her talk by explaining that metrics are useful for a wide range of reasons – who contributes, where, what they’re interested in, to help recognise contributors and so on. The open source community uses a whole range of different tools, and these can be measured in different ways. However, as she pointed out, it’s important that you know from the start what the objectives of your project are. Do you wish to grow your number of contributors? Do you want to resolve a number of bugs, or squash a number of outstanding ones? You need to know what you want to measure and why – how will it drive action in your open source project?
Some of the tools she covered included
- mlstats – a command line tool for mail which can analyse mailing list data, content and top contributors
- Google Groups – which sadly has no API and had to be manually scraped
- IRC stats – where a range of tools such as irssistats, pisg and superseriousstats were used depending on log file format
- gitdm – for measuring git contributions
- trac – for bugs
- graphite – for visualisation
- gather – for collecting data
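As a flavour of what these tools compute, here is a minimal sketch (with invented sample data) of the per-contributor commit tally that a tool like gitdm derives from git log output – a starting point for the “who contributes, and how much” questions Foster described:

```python
from collections import Counter

def commits_per_author(log_lines):
    """Count commits per author from 'Author: Name <email>' lines,
    the kind of per-contributor tally tools like gitdm produce."""
    counts = Counter()
    for line in log_lines:
        if line.startswith("Author: "):
            counts[line[len("Author: "):].strip()] += 1
    return counts

# Invented sample data standing in for real `git log` output.
sample = [
    "Author: Alice <alice@example.org>",
    "Author: Bob <bob@example.org>",
    "Author: Alice <alice@example.org>",
]
top_author, top_count = commits_per_author(sample).most_common(1)[0]
```

The raw count is only useful once you know why you are measuring it – exactly Foster’s point about tying metrics back to project objectives.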
The path to open source virtualisation – Adam Jollans
Jollans’ talk opened with the top 3 external factors that CEOs believe impact their business – they were
- Big data – massive amounts of data being processed
- Dispersed mobile workforce
- Inefficient use of capacity, with around 85% server idle time
The key technologies he saw for innovation were
- Analytics – where tools such as Hadoop have a role to play
- Virtualisation – where tools such as KVM, Xen etc have a major role. Virtualisation is the foundation of the cloud.
- The Cloud – where tools such as OpenStack etc are playing major parts
Essentially, Jollans was arguing that the major problems CEOs face today are solved with Linux.
In regard to virtualisation specifically, he demonstrated that many companies are now running more than one hypervisor in their environment, with the key reasons for multiple hypervisors being technical reasons between solutions, and the cost factor of having multiple installations of expensive hypervisors such as VMWare/ESX.
He explained that KVM (the Kernel Virtual Machine) plugs into Linux (it’s a type 1, bare metal hypervisor), and that QEMU provides I/O virtualisation. Historically, virtualisation has only been worthwhile on very big machines – and he explained that IBM has been investing heavily in virtualisation for many years, with scalability and security being the key focus points.
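On a Linux host, KVM shows up as the /dev/kvm character device, which is what QEMU opens to get hardware-assisted virtualisation rather than pure emulation. A hedged sketch of a simple availability check (my own illustration, not anything Jollans presented):

```python
import os

def kvm_available(dev_path="/dev/kvm"):
    """True if the KVM device node exists and is readable/writable,
    i.e. QEMU could use hardware-assisted virtualisation here."""
    return os.path.exists(dev_path) and os.access(dev_path, os.R_OK | os.W_OK)

# On a host without KVM (or inside many containers) this returns False.
```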
Jollans went on to explain that IBM has founded the Open Virtualisation Alliance for promoting open source virtualisation.
He also presented use cases for KVM, including:
- Linux server consolidation – Linux server consolidation is not as widespread as Windows server consolidation (yet), and KVM is an easy way to achieve this
- Hypervisor diversity – many businesses are choosing to have hypervisor diversity to suit different requirements
- Virtual desktops – VDI is on the increase and needs high levels of memory scalability.
Linux at the forefront – Brian Stevens
Stevens, the Chief Technology Officer and Vice-President of Worldwide Engineering for Red Hat, opened by advocating that Red Hat had achieved ’20 years of disruption’ – with the customer value of Linux driving wider adoption. He outlined five principles on which Red Hat’s business model was founded:
- Invest in the advancement of Linux
- Enablement – hardware capability
- Facilitate fast upstream development
- Ecosystem – it’s not about Linux, it’s about the ecosystem
- Boring – enable hardware upgrades without churning the application stack
With this model, Red Hat eclipsed $1 billion in revenue last year, but Stevens insists that open source is not a business model; rather it is
“the best development model on the planet – modular innovation which can be consumed incrementally”
Stevens went on to show that Linux is at the heart of most of the hottest technologies at the moment – and that if you build your application on Linux it will run anywhere.
He highlighted Red Hat’s OpenShift offering and how it supports some of today’s challenges – such as constantly growing data, and data which is less structured than it used to be. To underscore this, his presentation was done in reveal.js and hosted on OpenShift.
Kernel report – Jonathan Corbet
Corbet highlighted that over 400 employers contribute to the Linux kernel, but that over time voluntary contributions have been dropping. One reason is that skilled volunteer contributors are readily hired; as a community, we may need to encourage newer contributors to join the fold.
Mobile and embedded participation is also growing rapidly, mirroring developments in the wider technology sphere. Linux is also leading networking developments, and ARM architecture is starting to dominate. Security efforts, once neglected, are now being revamped.
UEFI secure boot is still a challenge; solutions do now exist for Linux, but only as long as Microsoft continues to be co-operative. This is a cause for concern – Microsoft could easily withdraw their support, and this issue should be watched.
On the filesystem front, ext4 is still the key workhorse, but btrfs is starting to mature and stabilise.
On the gap side of things, regression tracking is now a key gap area and something which Corbet encouraged the community to become involved in.