

For a detailed summary of the project, consult the project's NIME conference paper, ICHEP outreach paper, and CHI extended abstract. A tiny subset of collision data from the ATLAS detector is generated and streamed in real time into a sonification engine built atop Pure Data, Python, Ableton, and Icecast. The sonification engine takes data from the collision event, scales and shifts the data (to ensure that the output is in the audible frequency range), and maps the data to different musical scales. From there, a MIDI stream triggers sound samples according to the geometry and energy of the event. The composer has a lot of flexibility in how the collision data is streamed with respect to the geometry of the event, and can choose whether or not to discretize the data into beats. The data currently being streamed includes energy deposits from the hadronic endcap calorimeter (HEC) and the liquid argon (LAr) calorimeter, as well as energy from particle tracks and location information from a muon trigger (RPC). You can learn more about the different layers of the ATLAS detector here. Each event currently produces roughly 30 seconds of unique audio. Some composers have gone further, building their own patches in Max/MSP or their own web apps using the processed data streaming out of Python. This platform gives physics enthusiasts a new way to experience real-time ATLAS collision data, and musical composers a new way to play with it. It also provokes questions about how audio might serve a more practical role in data analysis.
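The exact transport between Python and Pure Data is not described here, but one common approach is Pd's [netreceive] object, which accepts FUDI messages (whitespace-separated atoms terminated by a semicolon) over a socket. A minimal sketch, in which the port number and message layout are illustrative assumptions rather than the project's actual configuration:

```python
import socket

def fudi_message(values):
    """Format values as a FUDI message: atoms separated by spaces,
    terminated by a semicolon (the format Pd's [netreceive] expects)."""
    return " ".join(str(v) for v in values) + ";\n"

def send_to_pd(values, host="127.0.0.1", port=3000):
    """Stream one set of event quantities to a running Pd patch.
    The port here is a placeholder, not the project's real setting."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(fudi_message(values).encode("ascii"))

# e.g. one hypothetical calorimeter deposit as (eta, energy in GeV):
# send_to_pd([0.42, 57.3])
```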



Audio is an effective tool for communicating information. A smoke detector warns of potential fire, and a stethoscope helps a medical doctor assess the rhythms of a patient's heart. Audio is also used when monitoring physics experiments to convey the status and quality of the equipment and data; for example, an audio cue alerts control room workers when the particle beams in the Large Hadron Collider are dumped. Sonification has developed into an active area of research (see, e.g., the ICAD conference). As technology advances, more complicated and subtle information can be communicated using audio, even in real time. The Quantizer platform allows researchers to sonify and communicate more complex particle physics information to its listeners. The strength of the platform lies in how easily users can compose aesthetically pleasing music. This gives composers the chance to learn about the physics in the ATLAS data by creatively making use of some of its underlying patterns. Those patterns, if highlighted by the composer in the audio, can in turn teach listeners about the physics properties of the collision event.







Artist Statement: Ewan Hill (House and Suitar Samba streams)

Composing a musical piece that plays by itself for hours on end without any musician interaction or intervention is a slightly complicated goal; you cannot as easily force separate verses, a chorus, and solos. With these two pieces I focused on music styles where large amounts of repetition are common and embraced. There is no beginning, no climax, and no end, just the simple variations of the journey.

Only four parts of the ATLAS detector's data were used to compose the House piece. The guitars are predominantly controlled by two parts of the ATLAS calorimeters. Their rhythms are human composed, but the musical notes are mostly determined by the data. The freestyle organs come from the track and RPC data. The minor percussive elements are not dependent on the data but help to fill out the sound. I invite a drummer to overlay a more interesting drum track, to play along with the data from the world's highest-energy particle accelerator.

One of the major differences between these two pieces and other sonifications is the imposition of timing, and especially the imposition of a beat. Many sonifications let the data determine as many aspects of the music as possible, and this often results in music structures that sound very random. There is actually a huge amount of timing structure in the LHC and ATLAS data, imposed by the experimenters, so I have imposed some timing structure too. This timing also takes sonifications from a world of experimental sounds to more mainstream styles of music.

The ATLAS particle physics data contain a lot of patterns, but the data can seem random to the untrained eye or ear. When composing the House piece I used the non-obvious physics patterns in the data to produce some patterns in the music. For example, the most probable energies are more closely aligned with the root notes of the musical scale. This means that the music will more likely return to a pleasing home-base note at the end of the bar rather than constantly leaving the listener hanging. Only a small number of physics patterns in the data have been exploited in these two pieces, and I look forward to pushing that further in future compositions.

Partial details on how the data is turned into music for the House stream:

Approximate detector locations:

Pink: Tracking detector

Light green: LAr Calorimeter

Dark green: HEC Calorimeter

Purple: RPC

Grey: Other detector components that are not currently being used for sonification

The following four quantities determine the musical note frequencies: HEC energy, LAr energy, RPC eta, track momentum.

The different types of data are played by different instruments (using Ableton Live instruments):

RPC = Funky organ, Organ5 Slow decay

Tracks = Funky organ, Organ5 Slow decay

LAr = Grand piano

HEC = Electric1 basic bass, Please rise for Jimi, Woofer loving bass, strangler (guitar)

A tom drum is also played but is not connected in any way to the data.

step 1: Clean and filter the data. Only use the parts of the event that have high enough energy and momentum and that are in the central region of the detector.

step 2: Filter the data even further by requiring the data to lie between specific minimum and maximum values (e.g. enforce a minimum momentum on each track). This ensures that the data can be properly scaled for sonification. These are very loose selections, to avoid imposing too many restrictions on the composer.

step 3: Split the detector up into bins of "pseudo-rapidity" (eta), a special angle used in particle physics and partially depicted in the above gif. Associate all of the data to a nearby eta bin. The musical notes will be played in the order defined by the eta positions, going from negative eta to positive eta.

step 4: Stream all of the data associated with a single eta bin from Python to Pure Data. Do this for each eta bin.
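Steps 3 and 4 amount to histogramming the event in eta and then walking the bins in order. A minimal sketch, where the eta range and bin count are illustrative assumptions (the actual binning used by the stream is not specified here):

```python
def eta_bin_index(eta, eta_min=-2.5, eta_max=2.5, n_bins=16):
    """Map a pseudo-rapidity value to a bin index. The range and bin
    count are placeholders, not the stream's real configuration."""
    if not (eta_min <= eta < eta_max):
        return None  # outside the sonified central region
    width = (eta_max - eta_min) / n_bins
    return int((eta - eta_min) // width)

def order_by_eta(hits):
    """Group (eta, value) pairs into eta bins and return them in
    playing order, from negative eta to positive eta (step 3)."""
    bins = {}
    for eta, value in hits:
        idx = eta_bin_index(eta)
        if idx is not None:
            bins.setdefault(idx, []).append(value)
    return [bins[i] for i in sorted(bins)]  # one list per occupied bin
```

Each inner list would then be streamed to Pure Data as one eta bin's worth of data (step 4).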

step 5: Apply a finer selection to the data based on the energies and positions. These selections can be changed in real time by the composer, in a more DJ-like fashion.

step 6: Scale the data from their allowed ranges into the allowed MIDI note ranges. For example, a track's momentum could be required to be between 20 GeV and 1000 GeV. The track momenta are scaled into the MIDI note range of 60 to 90.

MIDI note ranges:

RPC: 60-96
Track: 60-90
HEC: 48-60
LAr: 36-48

(Middle C on a piano is 60)
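Step 6 is a straightforward linear map from a physics range to a MIDI range; a minimal sketch using the track numbers above (the clamping behavior and linear, rather than logarithmic, scaling are assumptions):

```python
def scale_to_midi(value, in_min, in_max, note_min, note_max):
    """Linearly map a physics quantity from its allowed range into a
    MIDI note range (step 6). Values are clamped to the input range."""
    value = min(max(value, in_min), in_max)
    frac = (value - in_min) / (in_max - in_min)
    return note_min + frac * (note_max - note_min)

# A 510 GeV track lands halfway through the 60-90 track range:
# scale_to_midi(510.0, 20.0, 1000.0, 60, 90) == 75.0
```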

step 7: Set the duration of the notes and determine whether each note is allowed to be played on this beat, as set by the composer's beat structure.

Musical note durations:

HEC: 1/2 beat
LAr: 1/2 beat
RPC: 3 s = almost 3 bars (exact duration depends on the selected instrument)
Track: 2 s = almost 2 bars (exact duration depends on the selected instrument)

The musical beats for the LAr, HEC, and the drums are forced to follow fixed patterns (depicted in the LAr Beats, HEC Beats, and Drum Beats figures).
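The beat gating in step 7 can be sketched as a boolean mask over the bar; the 16-step pattern below is a hypothetical placeholder, not one of the actual patterns from the figures:

```python
def gate_note(beat_index, pattern):
    """Return True if the composer's beat structure allows a note on
    this beat (step 7). `pattern` is one bar of on/off flags."""
    return bool(pattern[beat_index % len(pattern)])

# Hypothetical pattern: allow a note on every other 16th-note slot.
example_pattern = [1, 0] * 8
# gate_note(0, example_pattern) -> True
# gate_note(1, example_pattern) -> False
```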

step 8: Round the MIDI note value to the nearest allowed MIDI note value as defined by the scale.

MIDI note scale: 0, 3, 5, 7, 10, 12 [equivalent to the musical note scale C, Eb, F, G, Bb, C]
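Step 8 snaps a continuous MIDI value onto the scale above while preserving the octave; a minimal sketch (the tie-breaking toward the lower scale degree is an assumption about the actual patch):

```python
SCALE = [0, 3, 5, 7, 10, 12]  # C, Eb, F, G, Bb, C within one octave

def snap_to_scale(midi_note, scale=SCALE):
    """Round a MIDI note to the nearest allowed scale note (step 8).
    The octave is preserved; ties go to the lower scale degree."""
    octave, degree = divmod(midi_note, 12)
    nearest = min(scale, key=lambda s: abs(s - degree))
    return 12 * octave + nearest

# snap_to_scale(76) -> 75 (degree 4 snaps down to Eb in that octave)
```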

step 9: Output the musical notes as sound using the instruments listed above.

Note that between events, the HEC and LAr (and drums) will continue the same rhythm while simply repeating the last music note that was played. This helps transition from one event to the next in a more pleasing manner.

Artist Statement: Evan Lynch (Cosmic stream)

This work attempts to provide a musical metaphor for the collisions and subsequent energy showers that happen in the ATLAS detector, using spatial cues to reflect the geometry of each event. Each collision event begins with the colliding protons approaching from the left and right. As particles initially carrying all of the energy of the reaction, they approach with a sound like a truck. At the moment of collision, when they meet in the center, a wavefront of sound begins, made of filtered noise and choir reflecting the total energy of the event and the missing momentum. On top of this ambient sound, each layer of the detector chimes in from the center outward. First, particle tracks warble across the space. Then, calorimeter energy deposits register around the space. Finally, bright plinks emanating from the periphery reflect muon detection by the RPC.

____________

Technical Overview



step 1: Physics data is selected based on desired properties of the data (see the House stream explanation for details).

step 2: The first few seconds of audio are driven by a quantity that probes the total amount of energy measured in the event, as well as a quantity that indirectly probes the amount of undetected energy. These data properties are streamed first and are used to control certain characteristics of a choir/filtered-noise wavefront audible at the beginning of each event.

step 3: Next, the remaining physics information is streamed from the innermost detector layer to the outermost layer (first tracks, then calorimeter energy, then muon candidate hits, using a 5:2:2 time ratio for the total 25 seconds of audio for the event). For each layer, physics information is streamed purely with respect to an angular coordinate. This means that the timing of the data is entirely controlled by the detector geometry. There is no beat structure imposed, in contrast to the other two live streams.
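The 5:2:2 split of the 25-second window works out to roughly 13.9 s for the tracks and about 5.6 s each for the calorimeter and muon layers; a minimal sketch of the arithmetic:

```python
def layer_durations(total_seconds=25.0, ratio=(5, 2, 2)):
    """Split the per-event audio window among the detector layers
    (tracks, calorimeter, muons) according to the 5:2:2 time ratio."""
    whole = sum(ratio)
    return [total_seconds * part / whole for part in ratio]

# layer_durations() -> [~13.89, ~5.56, ~5.56] seconds
```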

step 3a: The track data is spatialized, meaning that the audio is distributed between the two speakers to reflect the directions the particles are traveling. (Note that the starting position of each track is manually shifted by the composer to create the auditory effect of the tracks passing through the listener's head.) This effect is more easily audible when listening to a version of the stream with a higher number of channels/speakers. Since only some geometric coordinates are available in the data, information about the track momentum is used to calculate the duration of each track's auditory representation.
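One common way to distribute an angle over two speakers is constant-power panning; the use of the azimuthal angle phi and this particular pan law are illustrative assumptions, not details taken from the Cosmic stream itself:

```python
import math

def stereo_gains(phi):
    """Constant-power pan: map an azimuthal angle phi (radians) to
    (left, right) speaker gains, with phi = 0 panned to center. The
    mapping of detector angle to pan position is an assumption."""
    pan = math.sin(phi)                  # -1 (full left) .. +1 (full right)
    angle = (pan + 1.0) * math.pi / 4.0  # 0 .. pi/2 across the stereo field
    return math.cos(angle), math.sin(angle)

# At phi = 0 both gains are cos(pi/4) ~ 0.707, so total power stays 1.
```

Constant-power panning keeps the perceived loudness steady as a track sweeps across the field, which suits the "warbling across the space" effect described above.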

step 3b: Next, the calorimeter energy deposits trigger sounds timed by their location in the detector.

step 3c: Finally, the muon detector hits trigger sounds timed by their location in the detector.

____________