Can you elaborate on your project with Thom Yorke at the ISM Hexadome? How will the visuals for the upcoming solo tour differ from this?

The project with Thom Yorke at the Hexadome was one in which I used my software to create both the visuals and the sound. I used it very much in the way I’d originally had in mind when I first created it. The installation was essentially a flight through 3D audiovisual landscapes that Thom and I had created, with six huge screens all around the audience and 54 speakers, so the sound would come from exactly the direction where you’d see it.

The upcoming Thom Yorke tour, however, is completely different: Thom and Nigel create and perform all the music with their electronics, while I take care of the visual side without having any influence on the sounds. Since sound and visuals are technically completely independent in this setup, I’m totally free to do anything I want and go wild visually, without worrying about sonic consequences. Generally this means I take it all in a more painterly and often psychedelic direction, where colours and shapes blend seamlessly into each other.

You’re performing the visuals and lighting live – how will you be preparing for this and how will it be achieved?

Preparation initially happens by creating visual presets within my software that somehow feel right with the music. I basically just sit at home and listen to tracks while I fool around with visual elements until it feels like some sort of symbiotic relationship is appearing. After this period we’ll have a week of rehearsals in the UK where we take all the elements and merge them into a bigger audiovisual story, a proper set. During this time I try to get a feel for the directions the music and the tracks can take in the live context, and I make sure I’ll have easy and intuitive access to all the controls and parameters I need to change in order to improvise on any turns and changes our set may make in the moment. This means I pay great attention to defining which visual consequences specific slider movements or button presses have. I basically define where I need the freedom to improvise and change things on the spot, and which parts I’d rather leave to the automated presets. It’s about choosing exactly where I want to focus my mental energy during the gig and how far off the rails I need to be able to go.

How do you keep the musician and music in mind when creating your visuals?

It could seem like a bit of a contradiction: on the one hand there’s the musician and the music, on the other hand there’s me and my ideas, and the goal is to somehow merge the two. But that’s not really how it works, especially since I only work with people whose music I really love. When I’m really into someone’s music there is no real distinction between myself and what they do: as I listen to their music, I can strongly feel it living inside of me; it becomes a part of me, so all of my expressions will necessarily also be expressions of the music as it flows through me. Obviously that doesn’t mean the musician will always totally agree with what I’m making. But our interpretations will almost always be very compatible, and their criticism often helps me express my perception of their music even more closely.