Ian Hobson is the Technical Principal for DSP & Devices at Ableton, where he works on the effects and instruments included with Live. Ian spoke to me about life at Ableton, the advantages and disadvantages of developing plugins for a DAW, and more about his talk at ADC on Rust, a programming language some think will eventually supersede C++ as the go-to language for real-time audio. (Interview by Joshua Hodge)

You’re English! How did you get over to Berlin, and where did you study?

I moved over to Berlin to work at Ableton in 2011. I studied for an MSc in Digital Music Processing at the Centre for Digital Music at Queen Mary in London, and while I was there I applied to work as a software developer at Ableton, and I've been there ever since!

Before Queen Mary, I had a couple of other software development jobs, and before that I studied for a BA in Music Technology at De Montfort University in Leicester.

I imagine you must have seen a lot of changes at the company, because as a music producer I’ve seen it grow exponentially in popularity over that same period of time.

I joined Ableton during the development of Live 9, and since then we’ve grown a fair amount, from 130ish up to somewhere around 350. At the same time we’ve branched out with other products and offerings like Push, Link, the Making Music book, the Learning website, and the Loop event. The company’s developed into a position where we can make contributions to the music-making world in addition to Live, so there's been a fair amount of organisational change that's gone into making this work, and it's been really interesting and exciting to be part of it.

One thing that you’ve touched on that’s important is the culture in a tech company. At Roli, there are initiatives in place that help to create a sense that we are a team and in this together. I’m curious about how life is working at Ableton…

We have a pretty clear sense of purpose, in that it's generally accepted that helping people to make music is a decent thing to be spending time on! What really helps us is that working towards our goals is balanced with a focus on the people in the company. Culturally there's an emphasis on staying healthy and minimising stress. I've never had pressure to work weekends or cancel holidays or anything like that, which I'm grateful for. Any pressure to work beyond usual hours is self-imposed, and unusual, although I can imagine that this wasn't always the case in the early days at Ableton. These days I would describe it as a post-startup culture which focuses on people, hiring carefully and considerately, while maintaining a healthy work/life balance. It’s also nice to be able to contribute to something that I care deeply about.

I guess there are both advantages and disadvantages to creating a device in Ableton. One advantage would be that you don’t have to optimize the device for use in multiple DAWs, but I guess a disadvantage would be that all your devices are designed to fit within the plugin space at the bottom (though you do have an option for a foldout).

Yes, there are advantages from not having to support multiple hosts, but like you say we have pretty tight constraints to work within when making devices for Live. There's the constraint you mentioned of building a UI in a tight space, although now that screen sizes are much larger than in the old days, we're a bit freer to take up more space.

We also have the challenge of maintaining a consistent design language and UX across all of our devices. Devices can have unique elements or behaviours (e.g. the delay visualisation in Echo or the modulation matrix in Wavetable) but they need to be introduced with care, and it's no exaggeration to say that every pixel we touch is considered thoroughly!

There are also the constraints of having the device behave well as a part of Live. Devices need to load extremely quickly, and performance needs to be as stable as possible. Also, ensuring that we maintain our principle of backwards-compatibility — any set from a previous version of Live (as far back as version 1) is able to open in Live 10 and sound the same — takes quite a bit of thought!

As a user of Ableton, it’s great to speak to someone that has contributed to Wavetable, one of the new devices in Ableton. At first, I wasn’t sure that there was more that could be brought to the table with so many wavetable synths on the market today (like Serum and Massive). I was really surprised about how easily I was able to create new timbres from the synth, and felt like every time I thought I’d found everything, I was discovering a new dial to turn that would take me into another dimension. What were some of your thoughts when working on Wavetable?

That's great to hear! We wanted to present wavetable synthesis in a way that fit naturally with Live and Push, and for us that meant providing a lot of flexibility and sonic power with (hopefully) an intuitive workflow. Finding a balance between power and accessibility isn't easy to achieve, but I think we found a good approach of starting with the basics and only adding complexity when really necessary.

I'm really proud to have been part of the team that produced Wavetable. Everyone involved contributed brilliantly while solving some tough problems in all areas of the device’s UX, UI, engineering, testing, project management, etc. In terms of the development process, Wavetable existed in a few incarnations as prototypes for some time before we had the chance to go into production, which meant there was a decent amount of research and discovery work to pull ideas from while working on the final thing. This gave us the confidence to follow a pattern of starting with the minimal basics of the synth and then continuously delivering features one by one, deciding where the biggest weaknesses or opportunities were, and then making the next change.

This flexibility in our approach allowed us to look for opportunities along the way to be creative within our constraints; some of my favourite features in Wavetable only came along once we had features in place, and then someone would find a great solution for a problem that we wouldn't have considered before.

You can read more about our thinking here

Ableton was originally a Max device, right?

Live itself wasn't, although many of the ideas in Live were inspired by Max/MSP tools that were made and used by Monolake before Ableton got started. Many of the devices in Live were originally prototyped in Max, and that's still the case today; we used Max for Live for prototyping during Live 10’s development and we’ll be using it more in the future.

When Ableton purchased Cycling '74 (who develop Max/MSP), I thought that maybe there would be an integration where Ableton would really try to lower the barrier to entry into audio programming. Is that something that Ableton are looking to do?

I think for many musicians who are interested in making their own tools, flow-based graphical programming is an intuitive way of working, so Max is a great way to get into music and audio programming. Max is also highly extensible (you can work in pretty much any programming language within the Max environment via externals), so you can break away from the graphical approach if you need.

I can't speak to what Ableton or Cycling '74 are planning for the future, but in Live 10 we made Max for Live more accessible by bundling the Max runtime with the Live installer. The Cycling '74 team also did great work in making devices load more quickly, so Max for Live devices feel more like native devices to use now. I’d love to see it become easier for curious developers to get started in building their own tools. I’m excited to see what happens with Max for Live in the future!

I’ve heard that Ableton use their own framework, is this true?

Ableton rolled their own frameworks in the early days for pretty much the whole of Live, as did many software companies that started out in the 90s. Over the years various parts have been replaced with third party libraries, and modernisation efforts have led to the core Live framework being somewhat unfairly called ‘ALF’, for ‘Ableton Legacy Framework’, but it would be more fair to call it the ‘Ableton Live Framework’! Many of the components are pretty decent and well thought through, so it's not such a bad legacy to have.

Do you have a team there whose sole job is to maintain and extend the framework itself?

There isn't a single framework team, but sometimes you’ll get a team that specialises in one area for a while, like the audio engine or the data model, and then they'll make some improvements in that area. In general the approach is that a team will work on a feature, and then incorporate some framework improvements while they're active in an area of the code. The development team is growing, so to make sure this approach scales well into the future we've introduced cross-team ‘chapters’ made up of developers who ensure that a particular framework evolves properly over time, and is ready for the product needs of the future.

Your talk at ADC is about Rust, which is a language that I’ve heard a lot about! I heard that it can be useful for real-time audio development, and that syntactically it’s not too different to C++. Can you tell me more, without giving away the talk itself, of course!

Well there's not so much to give away really! I'll be giving an overview of the Rust language from the perspective of an audio developer, which should hopefully be of interest to some of the ADC attendees.

The reason Rust is potentially interesting for audio developers is that it's a new language which stands as one of only a few viable alternatives to C or C++ for real-time audio processing. It offers a similar mix to C++ of low-level power with higher-level abstractions, without garbage collection or other features that might interrupt your audio callback.
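As a small illustration of that mix of high-level abstraction and low-level control, here's a minimal Rust sketch of applying gain to a block of samples. The names are purely illustrative (not from any Ableton code): the loop uses a high-level iterator, but compiles down to plain in-place arithmetic with no allocation or garbage collection in the processing path.

```rust
// Apply a gain factor to a block of samples in place.
// `&mut [f32]` is a mutable slice: a pointer and length, no heap allocation.
fn apply_gain(block: &mut [f32], gain: f32) {
    for sample in block.iter_mut() {
        *sample *= gain;
    }
}

fn main() {
    let mut block = [0.5_f32, -0.25, 1.0];
    apply_gain(&mut block, 0.5);
    // All of these gain values are exactly representable in binary floating point.
    assert_eq!(block, [0.25, -0.125, 0.5]);
    println!("{:?}", block);
}
```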

Its major departure from C/C++ is its focus on safety when accessing memory. There are whole classes of programming errors that are unfortunately common in C/C++ applications that you simply can't express in Rust (unless you're explicitly declaring that you're doing something ‘unsafe’), so for that reason alone it’s worthy of study.
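A quick sketch of what that safety looks like in practice, using illustrative names rather than anything from the talk: Rust's ownership rules mean that passing a buffer by value *moves* it, and any later use of the moved-from value is rejected at compile time rather than becoming a use-after-free at runtime.

```rust
// Takes ownership of the buffer; it is freed when this function returns.
fn process(buffer: Vec<f32>) -> f32 {
    // Return the peak sample value.
    buffer.iter().cloned().fold(0.0_f32, f32::max)
}

fn main() {
    // A heap-allocated buffer, e.g. a block of audio samples.
    let samples: Vec<f32> = vec![0.0, 0.25, 0.5];

    // Ownership of `samples` moves into `process` here.
    let peak = process(samples);

    // Uncommenting the next line is a compile-time error ("borrow of
    // moved value") — the class of bug C++ only catches at runtime, if at all:
    // println!("{:?}", samples);

    assert!((peak - 0.5).abs() < 1e-6);
    println!("peak: {}", peak);
}
```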

You're right that syntactically it borrows a lot from C++, although its unique properties mean that you often have to think about your program structure a bit differently, but in a good way! My main goal with the talk is to simply give people an overview of the language so that they know how to explore it. While it's unlikely that many people in the audience will be in a position to use Rust in their daily work any time soon, I think there's a lot to learn and take away from it when working in C++. For me it's really interesting, and I'm looking forward to talking about it!

Thanks for all your time! It was a pleasure meeting and talking to you!

Likewise! Thanks for the chat, great to talk with you!