Redhoot Oboemonger has just put out his second release on iTunes and Spotify. Not only that, but he has shared all 7 tracks from the album as Renoise playthroughs on YouTube (awesome!), and he is also an accomplished visual artist.

Granularity (video edit) from redhoot on Vimeo.

Hello there Redhoot. Can you tell us about that artist name of yours?

"Redhoot Oboemonger". I've used a lot of different names over the years, but for some reason this one has stuck with me for the last couple of years.

And when did you start producing music?

I have an older brother who got into the demoscene in the late 80s/early 90s. He got me hooked on ScreamTracker2 (the text-mode one) when I was around 9-10. I always wanted to play with instruments and synths, but as a kid you can't really afford anything. ScreamTracker3 came along just around the time my paper-boy job bought me my first proper soundcard: the Advanced Gravis Ultrasound. Then came FT2, Buzz and Renoise. These days it's mostly Renoise for the main work, but with Reaktor, SuperCollider and PureData thrown into the mix to help out.

Can you tell us how you approach a new song?

Do you have an idea before you start, or do you experiment until something interesting catches your attention?

I love building procedural and fractal systems, for both my visual art and my music, then finding moments, sounds or sequences in them that I can develop into a track. Sometimes it starts with just samples that inspire me; other times it comes from one of the generative sequencers I've built that feed Renoise. I've also built some really simple Lua scripts for Renoise to help me do rapid prototyping of ideas based on simple probability triggering. Phrases have been really cool in this regard, for abstracting some of these workflows.
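Redhoot's actual Lua scripts aren't shown here, but the "simple probability triggering" idea he describes can be illustrated with a minimal, language-agnostic sketch (written in Python rather than Renoise's Lua API; the function name and step layout are illustrative assumptions): each step in a pattern fires only with some per-step probability, so every run of the generator yields a different variation of the same groove.

```python
import random

def probability_pattern(probabilities, seed=None):
    """Return a list of booleans: step i triggers with probability probabilities[i]."""
    rng = random.Random(seed)  # seedable for repeatable prototypes
    return [rng.random() < p for p in probabilities]

# A 16-step pattern: near-certain triggers on downbeats, sparse elsewhere.
probs = [0.9 if i % 4 == 0 else 0.25 for i in range(16)]
pattern = probability_pattern(probs, seed=7)
print(pattern)
```

Mapping the resulting booleans onto note events in a pattern editor (or a Renoise phrase) is then a small step; the probabilities become the "formula" of the groove rather than the notes themselves.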

But in the end, I think how I start a track has the same answer as how I finish one. It's mostly just happy accidents that I like and then take to completion. I have about 60+ hours of music kicking around that never got finished. Just by sheer volume, SOMETHING has to be decent, right?

Starting out with trackers at such an early age was probably an advantage. So what do you think of the theory that we settle in our musical taste in the early teenage years? Or, to put it differently, which artists have you discovered since those early years that inspired you to rethink what music is, or could be?

There weren't really any alternatives to trackers when it came to music software for us on 286/386 PCs. My brother got hold of a bunch of floppies with all these unnamed great Amiga mods on them, and it made me obsess over the idea of using the computer to make music. The idea that I had not just the song, but the entire formula behind it, right there in my bedroom was amazing to me. I would definitely say that post-pop 10-year-old me suddenly got very heavily influenced by the Amiga mods. Travolta/Spaceballs, Zodiak/Cascada and Purple Motion/FC were all on my Walkman at some point.

Later in the 90s, as my interest in how music and sound are made grew more esoteric, I have to give credit to Lassi Nikko (DUNE/Brothomstates) for opening my eyes to how far you can take trackers, mixing musical styles and sound design through minimalism. Other artists I was discovering at the time, like Aphex Twin and Squarepusher (this was around 95?-97?), were also pivotal, but Lassi made his mods available for anyone to look at, and seeing the inner workings of his songs made it even more enticing.

Trackers in general tend not to focus on the graphical representation of sound, which in turn makes you focus even harder on the sounds and the music. They abstract the composition into a kind of advanced-looking spreadsheet where you're not distracted by moving bars, sample waveforms, or instrumental representations like piano rolls and frequency analyzers. The notes, the composition and the structure are at the center, where everything is a first-class citizen. This is the same reason I love SuperCollider and other programming tools for creating sound: you don't sit in front of your screen and look at the sound, you just sit and listen.

I appreciate and enjoy a lot of different music, from jazz to pop to weird noise performances. But when it comes to making music myself I just like to make something I haven't heard or made before.

Your sound is very complex and layered. Do you use a lot of processing on the sounds? The videos you've shared seem to reveal that the source samples are very "hot".

I love the Renoise sampler for how much you can mangle a sound, but I also generate a lot of my sounds with granular samplers I've built in Reaktor and SuperCollider, then tweak and finesse them in Renoise for use in the compositions. I have a pretty terrible sample library, but if you have some minor DSP skills you can turn that limited dataset into an infinite resource. And by that design, some of the sounds come out as "happy accidents". Sometimes that means a recording comes out a bit messed up/hot/with wrong phases, but if I like it, I like it, and I use it regardless. In the context of Renoise, where I make the actual tracks, the sounds usually work for me.
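His Reaktor and SuperCollider instruments aren't published here, but the core granular idea behind "turning a limited dataset into an infinite resource" is compact enough to sketch. The following is a minimal textbook-style granulator in pure Python (not Redhoot's actual patches; all names and parameter values are illustrative): it reads short Hann-windowed grains from random positions in a source buffer and overlap-adds them at random positions in the output, so one small sample yields endless different "clouds".

```python
import math
import random

def granulate(source, grain_len=256, n_grains=200, out_len=8192, seed=None):
    """Overlap-add Hann-windowed grains read from random spots in `source`."""
    rng = random.Random(seed)
    # Hann window smooths each grain's edges to avoid clicks.
    window = [0.5 - 0.5 * math.cos(2 * math.pi * i / (grain_len - 1))
              for i in range(grain_len)]
    out = [0.0] * out_len
    for _ in range(n_grains):
        src = rng.randrange(0, len(source) - grain_len)  # read position in the source
        dst = rng.randrange(0, out_len - grain_len)      # write position in the output
        for i in range(grain_len):
            out[dst + i] += source[src + i] * window[i]
    return out

# Granulate one second of a 440 Hz sine at a notional 8 kHz sample rate.
tone = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
cloud = granulate(tone, seed=3)
```

Varying grain length, density, and how the read position moves through the source is what separates a frozen drone from a stuttering texture; the "happy accidents" he mentions tend to fall out of exactly these parameters.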

OK, it's sort of a tradition that these interviews also turn to Renoise, and what you'd like to see from it in the future. So, if you could name just one or two features, what would that (they) be?

My unrealistic wish would be a modular framework for building DSP instruments, samplers and effects; I would love to see a nodal approach to not just routing but low-level access to sound manipulation. It's a very large undertaking to develop something like this, so I've got my fetish satisfied elsewhere for the time being. But there's something to be said for having a framework that is almost entirely user-driven from a content point of view. I would argue that Reaktor earns its money's worth from its user content alone, despite the lackluster software updates. This has given some software a very long lifespan.

On a more realistic level, 3.0 brought so many new things that I'm still overwhelmed when it comes to exploring phrases and the new instrument possibilities. It's definitely made tracking a whole lot faster. That said, sample handling is at the heart of Renoise, so I'd love to see some more advanced options for pitch, time and other granular (prehehe) DSP functions. I know proper pitch and time warping are very complex procedures if you're going for quality, so as far as feature requests go, I'd be happy to just see envelopes smaller than 25ms so I can roll my own crappy resynthesis phrases for now. (ed: this just settled the headline)

That said, I think I could spend half an eternity just exploring the possibilities with the current features.

You mentioned having quite a lot of music in the redhoot vault. Are we going to have to wait another decade for your third album to arrive?

I’m hoping to do more frequent but smaller releases. I do a lot of visuals (https://www.instagram.com/redhoot_/ - NSFW) that tie into the music, so by doing smaller EPs there's room for even more experimentation in various other formats.

Redhoot Oboemonger’s new album is out now

On iTunes

On Spotify

The whole album as Renoise playthroughs