The design of the violin, like that of all instruments, is a product of an evolution shaped by the demands of its players. Its exact proportions, lengths, and various details were iteratively tweaked over the years until we arrived at its modern form. The particular manner in which one plays it maximizes the player's expressive power: the precise, small movements of the fingers are concentrated on manipulating pitch, while the strong, broad movements of the shoulders, arms, and torso control the tone of the piece, ranging from the delicately soft to the powerful. The player's auditory, spatial, tactile-kinesthetic, and visual centers are all put to work to control it.

It almost goes without saying that the violin is the perfect interface for the kind of music you can produce with it — the two concepts are inseparable. The sound that emanates from it is part of the interface. When a musician plays a violin, it becomes a part of them. There is an expressive link of super-dense information flowing between the player and the instrument.

So why am I writing so much about violins in a post about VR? Well, I argue that the expressive bandwidth between our minds and our computers is currently a tiny trickle of information compared to the superhighway of feedback a musician exchanges with their instrument. That isn’t to say the potential isn’t there. The comparison isn’t really fair either, since the violin has a much longer history, and is a much more focused experience compared to the broad space of computable things.

And we’ve made some great strides in linguistic, auditory, and 2D visual interfaces since the early days of punch cards and terminals. But when it comes to harnessing the power of our kinetic, tactile, locomotive, and 3D spatial reasoning, we’ve mostly been rehashing the same concepts for a long time. The violin finished its search a while ago, but we have only just begun the quest to find the “violin” of human-computer interaction.