[Edit by Cassius: This thread was started in response to my asking about major and minor keys, which came up in the discussion of Romanze in Moll (the "Romance in Minor Key" movie). I asked: Nate, if you get a chance to glance at this thread: Can you explain to a non-musician like me what a "minor key" is, and how it is musically able to evoke sadness, as opposed to a major key? I will look this up on Wikipedia, but I would be interested in your comment.]





Yes! So, to dive into this, I'd like to talk about two different creative arenas.





First, we have an immediate phenomenology of music: what is music, and how do we experience music?





Second, we need to explore the cultural environment in which structures like "major" and "minor" arise (because they are not, themselves, universal). Furthermore, I'll discuss "major" and "minor" specifically, to explain why those two structures (of many) are the most useful examples for non-musicians to cite when describing how human emotion corresponds with sound waves.





First, what is music? Music is a storytelling art in which music-listeners accept sound as the medium through which the story is told; jumping deeper, sound is the reverberation of mechanical energy and, physically, that mechanical energy propagates as a sine wave. So, phenomenologically, music, as we experience it, is the story our minds spin when the mind anticipates patterns in the sine waves of mechanical energy (captured by the fleshy satellite dishes on either side of our cranium). Most of the time, we assume music to be an artificially-generated (i.e. intentional) composition––this is not always true, for Nature, itself, is inherently musical. The parts of our brain that register auditory impulses are simply looking for periodic (regularly patterned) sound waves. While most sounds we hear in nature are aperiodic (irregularly patterned) sound waves (which we technically refer to as "noise"), that does not mean that natural patterns do not exist. For example, consider the "Wow! signal" [https://en.wikipedia.org/wiki/Wow!_signal ]––which, in this case, deals with electromagnetic, not mechanical, waveforms, but it still demonstrates the point: the mind starts writing stories when it begins anticipating patterns, regardless of whether or not those patterns were intentionally generated. To summarize, music is the story that our minds spin, according to the patterns it interprets and anticipates from sound waves.
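The periodic-versus-aperiodic distinction can be put in numerical terms. Below is a minimal Python sketch (my own illustration; the 44.1 kHz sample rate and the 441 Hz test tone are arbitrary choices, picked so that one cycle is a whole number of samples): a pure sine tone repeats itself exactly, cycle after cycle, while random noise never does.

```python
import math
import random

SAMPLE_RATE = 44_100  # samples per second; an arbitrary choice for this sketch

def sine_tone(freq_hz, n_samples):
    """A periodic signal: a pure sine wave at freq_hz."""
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n_samples)]

def noise(n_samples, seed=0):
    """An aperiodic signal: uniformly random samples ("noise")."""
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n_samples)]

def max_period_error(signal, period_samples):
    """Largest mismatch between the signal and itself one period later.

    Near zero means the signal repeats with that period; large means it doesn't.
    """
    return max(abs(signal[i] - signal[i + period_samples])
               for i in range(len(signal) - period_samples))

# A 441 Hz tone at 44,100 samples/s repeats every 100 samples exactly.
tone = sine_tone(441, 2000)
print(max_period_error(tone, 100))        # ~0: periodic
print(max_period_error(noise(2000), 100)) # large: aperiodic
```

The "anticipating patterns" the brain does is, of course, far richer than this one check, but the raw distinction it starts from is this one.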





Next, let's explore the perceived structures of music. Starting a few levels of scale above atoms, let's first acknowledge that our ears (the hosts of our internal auditorium) will only identify mechanical energy that vibrates between 20 and 20,000 Hz. That's the full sonic spectrum with which we have to paint. But we don't use that full spectrum––the full spectrum sometimes sounds like 'Waves Crashing On Rocks' or 'Volcanic Explosions', a lot of musical colors (notes) that, together, just create dissatisfying messes of mutually-indistinguishable farts. Herein, the musician's job is to select a few musical colors (notes) that most adequately express the acoustic picture they are trying to audibly paint. Like the colors of the rainbow, which reduce the visible spectrum of electromagnetic radiation vibrating between 430 and 770 THz to "Roy G. Biv", we identify the audible spectrum of sound by symbolic qualia. For example, the mind of a painter does not mathematically register light at 430 THz, but it does artistically know precisely what deep red looks like. Similarly, the mind of a musician does not mathematically register sound at 440 Hz, but we know exactly how 'Middle A' sounds. The qualities we use to express anticipatory patterns of mechanical energy (the C note), as with light (the color Red), correspond with cultural-linguistic symbols. So when we're talking about "major" and "minor", we need to discuss them within the system we call modern "Western" music theory, and its antecedents.
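To make the "symbolic qualia" point concrete, here is a small Python sketch (my own illustration, assuming the modern conventions of A = 440 Hz and twelve equal-tempered notes per octave) that converts a raw frequency into the cultural-linguistic symbol a musician would use for it:

```python
import math

# The twelve note names of modern Western theory, counted upward from A.
NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

A_REFERENCE = 440.0  # Hz; the 'A' the post uses as its reference pitch

def nearest_note(freq_hz):
    """Name the equal-tempered note nearest to a given frequency.

    In equal temperament each semitone multiplies frequency by 2**(1/12),
    so the distance in semitones from the reference A is 12 * log2(f / 440).
    """
    semitones = round(12 * math.log2(freq_hz / A_REFERENCE))
    return NOTE_NAMES[semitones % 12]

print(nearest_note(440.0))    # A
print(nearest_note(880.0))    # A -- doubling the frequency keeps the name
print(nearest_note(261.63))   # C (middle C is roughly 261.63 Hz)
```

Note how 440 Hz and 880 Hz get the same symbol: the octave relationship the next paragraph attributes to Pythagoras is baked right into the naming system.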





Once upon a time, Pythagoras realized that you can "double" the frequency (highness or lowness––pitch) of a plucked string by halving its length. In modern language, an example would be middle 'A': it works out mathematically that 880 Hz is the 'A' immediately above the middle 'A' at 440 Hz. Pythagoras certainly loved numbers, which is where we derive the flexible number of '12' notes per set of repeating values (12 is divisible by 1, 2, 3, 4, and 6, and that was ... I don't know ... a source of arousal for Pythagoras? He based his entire music theory on the ratio 3:2, which deserves a thread all on its own, but that's getting off-topic). The original Hz for each note was based on an explicit, mathematical ratio ... without delving into the volumes of information that describe the evolution of tuning, and the history of tones in Western music, let's just conclude that, by the 18th century, musicians were using the standard tuning that we use today, because, earlier, purely ratio-based tunings would lead to ... sounds that aren't pleasing to contemporary ears (as unusual as I'm sure contemporary music would seem to ancient ears). I'm bringing up the following because we're Epicureans, and this provides some philosophical context for the history of music: in terms of metaphysics, Pythagoras freaked out when he realized that the very aesthetically pleasing number '2' does not have a rational square root; similarly, he rejected certain pitches that could not be defined by ratios of pure integers. This led to an attempt, for centuries, by philosophers to harmonize number theory, music theory, humor theory, and celestial science––so we get weird ideas like the Celestial Spheres, and the Perfect Forms of the Heavens that correspond with ratios which sound is capable of audibly expressing. That is just an example of how the ancient Greek search for 'ideal forms' can generate mathematical ideals that may not be subjectively pleasing (at least, not to many of us).
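The tuning problem lurking behind all of this can be shown with a few lines of arithmetic. The sketch below (my own illustration, not part of the original post) stacks twelve pure 3:2 fifths and compares the result with seven exact 2:1 octaves. Ideally they would land on the same note; they don't, and the small mismatch is the famous "Pythagorean comma". Equal temperament is the compromise that spreads this error evenly across all twelve notes, flattening each fifth just slightly so the circle closes:

```python
# Stack twelve "perfect" 3:2 fifths: ideally this lands on the starting
# note seven octaves up, but it overshoots slightly.
pythagorean = (3 / 2) ** 12   # ~129.746
octaves = 2 ** 7              # 128 exactly

# The mismatch between the two is the Pythagorean comma (~1.36% sharp).
comma = pythagorean / octaves  # ~1.0136

# Equal temperament instead divides the octave into 12 identical ratios
# of 2**(1/12), so a fifth (7 semitones) becomes 2**(7/12)...
tempered_fifth = 2 ** (7 / 12)  # ~1.4983

# ...which is just barely flatter than the pure 3:2 = 1.5 fifth,
# and twelve of them close the circle exactly: (2**(7/12))**12 == 2**7.
print(comma, tempered_fifth)
```

So "purely ratio-based tunings" sound fine in one key but drift audibly as you move around the circle of fifths; the standardized tuning of the 18th century onward trades tiny, uniform impurities for the freedom to play in any key.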





There's this brilliant episode of Star Trek: Voyager that beautifully demonstrates this: a planet of non-musical humanoids accidentally hear the ship's doctor sing an operatic piece. They are inspired by the music––utterly inspired. The inspiration echoes throughout the planet, and many of the alien beings begin attempting to emulate the operatic voice they so loved. Now, while these beings had never developed the subjective art of sound we call music, they did have an advanced understanding of number theory, so they could only comfortably interface with human music through an intentional analysis of mathematics (like good old Pythagoras). Twenty minutes of plot or so later, the doctor becomes dismayed to find that he is no longer a planetary celebrity: local musicians have––according to their own tastes––surpassed the doctor's operatic baritone. The doctor is hurt, but respectfully agrees to attend a performance to which he has been invited. He sits with other crewmates, and they listen with anticipation ... and, to the surprise of their anticipatory minds, the alien opera sounds like abysmal trash. Rather than making the subjective switch that Renaissance and Modern artists made, the aliens took a cue from Pythagoras, and employed advanced differential equations to determine which notes would be sung, and in which order they would be arranged. To the crew, it sounded like a computer generating tones according to a string of prime numbers, which, though being intentionally-composed, periodic sound waves (i.e. music), had no ability to tell humans a story––it just came off as a brown fart.
What I want to convey with this example is that the aliens most certainly had "a specific musical structure that corresponds to the subjective experience of pain" as well as "a specific musical structure that corresponds to the subjective experience of pleasure", but they weren't the same physical structures as "major" and "minor", which technically do not even have relevance to all human populations––only to those that can interface with the Western musical tradition as it has developed since the 18th century.