Towards the end of 2015 we were faced with a dilemma. No sooner had we released the flowkey iPad app than the requests started streaming in for an Android version. “Don’t you know that Android has the majority market share?” “You do know that iPads are for mindless hipsters, right?” “Don’t you love us anymore??” “Pleeease??”

The message was clear, but the problem was big: with Android's large market share comes a host of issues that iOS development simply doesn't have to deal with: different screen resolutions, widely varying performance characteristics even on new high-end devices, and even fundamentally different processor architectures on same-generation devices. As a small development team (there were four of us at the time of writing), the prospect of building an Android app on an unfamiliar stack to cover all of those bases in a reasonable amount of time, all while continuing to develop, debug and improve the browser and iPad versions, could have seemed impossible.

What we did have, however, were two blessings: good fortune, in the form of our CTO's early decision to go hybrid (via Meteor), and good preparation, from the huge effort we had put into optimising the iPad version to run smoothly even on a prehistoric (in computer years) iPad 2.

The core feature of flowkey is what we simply call “the player”. You can watch and listen to the song or excerpt you are trying to learn, slow it down, make loops and – importantly – learn in “wait mode”.

Wait Mode gives you as a learner the opportunity to get comfortable with finding the notes in a song at your own pace. The video and sheet music play up until a note or chord, then pause and wait for you to play the right notes on your real piano or keyboard before continuing to the next note or chord.
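Conceptually, Wait Mode reduces to a small matching step: hold the playback position until the notes currently pressed cover the expected chord. A minimal sketch of that logic (the function names and MIDI-number note representation are illustrative assumptions, not flowkey's actual implementation):

```javascript
// Wait Mode core: only advance when every expected note is pressed.
// Notes are modelled as MIDI numbers (60 = middle C) purely for
// illustration; this is not flowkey's actual data format.
function chordSatisfied(expectedNotes, pressedNotes) {
  const pressed = new Set(pressedNotes);
  return expectedNotes.every((note) => pressed.has(note));
}

// Step through a song one chord at a time as input arrives.
function nextPosition(song, position, pressedNotes) {
  if (position >= song.length) return position; // song finished
  return chordSatisfied(song[position], pressedNotes)
    ? position + 1 // right notes played: move to the next chord
    : position;    // keep waiting
}
```

Extra pressed notes are simply ignored here; whether wrong notes should block progress is a product decision rather than a technical one.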

In the browser, the pitch detection works via JavaScript’s Web Audio API, together with getUserMedia() to access the device’s microphone. But this is where the hybrid dream starts to fall apart: the iPad’s browser, Safari, and its programmatic counterparts UIWebView and WKWebView do have an implementation of the Web Audio API, but not of getUserMedia(). This restriction makes it impossible to access the microphone from the browser context, and puts one of our core features out of reach for hybrid use.
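For context, the browser-side pipeline is: getUserMedia() delivers microphone samples into the Web Audio graph, and the pitch analysis itself is ordinary JavaScript over those sample buffers. The detector below is a deliberately naive autocorrelation sketch (the post doesn't say which algorithm flowkey actually uses), but it shows the kind of per-buffer work involved:

```javascript
// Naive autocorrelation pitch detector over a mono Float32 sample buffer.
// Illustrative only; not flowkey's actual algorithm.
function detectPitch(samples, sampleRate) {
  const minFreq = 60;   // roughly the low end of useful piano pitches
  const maxFreq = 2000; // and the high end
  const minLag = Math.floor(sampleRate / maxFreq);
  const maxLag = Math.floor(sampleRate / minFreq);
  let bestLag = -1;
  let bestCorr = 0;
  // The lag whose shifted copy correlates best with the signal
  // corresponds to the fundamental period.
  for (let lag = minLag; lag <= maxLag; lag++) {
    let corr = 0;
    for (let i = 0; i + lag < samples.length; i++) {
      corr += samples[i] * samples[i + lag];
    }
    if (corr > bestCorr) {
      bestCorr = corr;
      bestLag = lag;
    }
  }
  return bestLag > 0 ? sampleRate / bestLag : 0;
}

// In a browser that supports it (e.g. Chrome), the wiring would be roughly:
//   const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
//   const ctx = new AudioContext();
//   const source = ctx.createMediaStreamSource(stream);
//   ...then feed sample buffers from the audio graph into detectPitch().
// The getUserMedia() call is exactly what UIWebView/WKWebView lacked.
```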

At first we tried to get around this by using native code to inject microphone audio buffers into the WebView for processing in our existing JavaScript code. It worked, but only barely: there was significant lag and a noticeable performance impact even on the latest-model iPad Air. Not wanting to settle for a second-rate experience, we made a decision that would directly shape the future of our Android app as well: we decided to rewrite our pitch detection routines in Swift.
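To see why that bridge was slow, consider what the JavaScript receiving side has to do. A native-to-WebView call can only carry strings, so every audio buffer pays for serialisation, transport and decoding. A sketch of a hypothetical receiving function (the names and the base64 encoding are assumptions for illustration; the post doesn't describe the actual bridge):

```javascript
// Hypothetical receiving side of a native-to-WebView audio bridge.
// Native code would call something along the lines of:
//   webView.evaluateJavaScript("onNativeAudioBuffer('" + base64 + "')", ...)
// so each buffer is base64-encoded, shipped as a string, then decoded
// here, and that per-buffer overhead at audio rates is where the lag
// and CPU cost come from.
function decodeAudioBuffer(base64) {
  const binary = atob(base64); // base64 -> byte string
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) bytes[i] = binary.charCodeAt(i);
  // Reinterpret the raw bytes as 32-bit float samples.
  return new Float32Array(bytes.buffer);
}
```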

Swift is an absolute pleasure to use. It is fun to write concise, beautiful and performant code that reads well and is easy to reason about and debug. Two years after starting with Swift I still very much have the same sentiment about it as the one expressed by Dan Kim about Kotlin here.

The Swift version of our pitch detection worked. It was extremely performant and provided a way better experience than the Web Audio API ever could have, with significantly less latency and higher accuracy. And there was much rejoicing.