Conference Schedule / Archive


All times are CEST = UTC+2


Day 0 - Wednesday, May/8

Workshops & Events

19:00

LAC Exhibition Vernissage

(misc event)

The installation program of the LAC will be officially opened on the day preceding the conference.

21:00

Unofficial Opening Dinner

(misc event)

For those of you who are already in Graz on Wednesday evening, we will have dinner somewhere in town.

Day 1 - Thursday, May/9

Main Track

(Hall i7)

10:00

Conference Welcome

10:20

Keynote

11:00

netpd - A Collaborative Realtime Networked Music Making Environment written in Pure Data

This paper presents netpd, a framework written in Pure Data (Pd) for making music collaboratively and in real time. Users join by connecting to a central server in order to have a session together (not unlike a jam session in jazz) and load self-written or pre-existing instruments (Pd patches) to play with. The framework keeps the clients' state synchronized at all times by exchanging control messages through the server. The protocol in use is fully based on OSC.
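The shared-state idea behind such a framework can be sketched in a few lines. This is a hypothetical Python illustration, not netpd's actual protocol or code (netpd itself is written in Pd): a central hub relays OSC-style control messages to every connected client and replays the last known state to late joiners, so all clients converge on the same instrument settings.

```python
# Hypothetical sketch of server-relayed state synchronization.
# Addresses and client names are invented for illustration.

class Hub:
    def __init__(self):
        self.clients = []
        self.state = {}          # last value per OSC-style address

    def connect(self, client):
        self.clients.append(client)
        for address, args in self.state.items():   # replay state to newcomer
            client.receive(address, args)

    def send(self, address, args):
        self.state[address] = args
        for c in self.clients:                     # relay to everyone
            c.receive(address, args)

class Client:
    def __init__(self, name):
        self.name, self.params = name, {}

    def receive(self, address, args):
        self.params[address] = args

hub = Hub()
a, b = Client("alice"), Client("bob")
hub.connect(a)
hub.connect(b)
hub.send("/instr/synth1/cutoff", (880.0,))   # a parameter change is broadcast
late = Client("carol")
hub.connect(late)                            # a late joiner gets the state replayed
```

Replaying the stored state on connect is what lets a client join mid-session and still render the same instruments as everyone else.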

11:40

Byzantium in Bing: Live Virtual Acoustics Employing Free Software

A Linux-based system for live auralization is described, and its use in recreating the reverberant acoustics of Hagia Sophia, Istanbul, for a Byzantine chant concert in the recently inaugurated Bing Concert Hall is detailed. The system employed 24 QSC full range loudspeakers and six subwoofers strategically placed about the hall, and used Countryman B2D hypercardioid microphones affixed to the singers' heads to provide dry, individual vocal signals. The vocals were processed by a custom-built Linux-based computer running Ardour2, jconvolver, jack, SuperCollider and Ambisonics among other free software to generate loudspeaker signals that, when imprinted with the acoustics of Bing, provided the wet portion of the Hagia Sophia simulation.
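The wet/dry structure of such a live auralization can be illustrated with a toy convolution. This is a pure-Python sketch of the general idea only; the actual system used jconvolver with measured impulse responses, and the wet gain here is an invented parameter.

```python
# Illustrative wet/dry auralization: dry voice + (dry * room IR).
# Signals are plain Python lists; values are toy numbers.

def convolve(dry, ir):
    """Direct-form convolution: each IR tap echoes the dry signal."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(ir):
            out[i + j] += x * h
    return out

def auralize(dry, ir, wet_gain=0.5):
    wet = convolve(dry, ir)
    # mix: the close-miked dry voice plus the simulated reverberant field
    padded = dry + [0.0] * (len(ir) - 1)
    return [d + wet_gain * w for d, w in zip(padded, wet)]

voice = [1.0, 0.0, 0.0, 0.0]     # an impulse standing in for a dry vocal
ir = [1.0, 0.5, 0.25]            # toy 3-tap "room" response
print(auralize(voice, ir))       # -> [1.5, 0.25, 0.125, 0.0, 0.0, 0.0]
```

In the real system the IR is seconds long and the convolution runs block-wise in the frequency domain, but the mix of dry signal and convolved wet signal is the same structure.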

12:20

Combining granular synthesis with frequency modulation

Both granular synthesis and frequency modulation are well-established and very flexible synthesis techniques. This paper investigates different ways of combining the two. It describes the rules governing the spectra that emerge from the combination, compares it to similar synthesis techniques, and suggests some aesthetic perspectives on the matter.
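One straightforward combination, offered here purely as an illustration and not necessarily the paper's method, is to make each grain a short windowed burst of FM: the grain's spectrum then carries FM sidebands at fc ± k·fm, while the grain envelope and onset pattern supply the granular texture.

```python
import math

# Illustrative "FM grains": parameters and function names are invented.
SR = 44100

def fm_grain(fc, fm, index, dur):
    """One Hann-windowed grain of simple two-oscillator FM."""
    n = int(SR * dur)
    grain = []
    for i in range(n):
        t = i / SR
        window = 0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1))  # Hann
        sample = math.sin(2 * math.pi * fc * t
                          + index * math.sin(2 * math.pi * fm * t))
        grain.append(window * sample)
    return grain

def grain_cloud(onsets, fc, fm, index, dur, total_dur):
    """Overlap-add FM grains at the given onset times (seconds)."""
    out = [0.0] * int(SR * total_dur)
    for onset in onsets:
        start = int(SR * onset)
        for i, s in enumerate(fm_grain(fc, fm, index, dur)):
            if start + i < len(out):
                out[start + i] += s
    return out

cloud = grain_cloud([0.0, 0.03, 0.07], fc=440.0, fm=110.0,
                    index=2.0, dur=0.05, total_dur=0.2)
```

Varying the modulation index per grain changes each grain's bandwidth, which is one of the interactions between the two techniques the abstract alludes to.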

14:30

SuperCollider IDE: A Dedicated Integrated Development Environment for SuperCollider

SuperCollider IDE is a new cross-platform integrated development environment for SuperCollider. It unifies user experience across platforms and brings improvements and new features in comparison with previous coding environments, making SuperCollider easier to begin with for new users, easier to teach for teachers, and more efficient to work with for experienced users. We present an overview and evaluation of its features, and explain motivations from the point of view of user experience.

15:10

An Approach to Live Algorithmic Composition using Conductive

Algorithmic composition can be done as a live performance using live coding tools. An example approach to such performances is described. Using the Conductive library for the Haskell programming language in conjunction with some external tools, samples are triggered according to interonset interval patterns generated at a variety of densities. Automatic movement through those density levels is accomplished through a specialized data structure, which is also used to time-vary other parameter values. The performer manages the state of the above items, and audio is finally output through effects.

16:10

MorphOSC - A Toolkit for Building Sound Control GUIs with Preset Interpolation in Processing

MorphOSC is a new toolkit for building graphical user interfaces for the control of sound using morphing between parameter presets. It uses the multidimensional interpolation space paradigm seen in some other systems, but hitherto unavailable as open-source software in the form presented here. The software is delivered as a class library for the Processing Development Environment and is cross-platform for desktop computers and Android mobile devices.

This paper positions the new library within the context of similar software, introduces the main features of the initial code release and details future work on the project.

16:50

Design of an audio oscilloscope application

This paper documents some aspects of the design of zita-scope, an audio oscilloscope application for GNU/Linux. It is designed to permit accurate display of, and measurements on, audio waveforms captured from any source via the JACK audio server. Topics covered include performance requirements, an analysis of some problems that need to be considered, and an overview of the implementation structure. The software will be available at the time this paper is presented at the 2013 Linux Audio Conference in Graz.

Workshops & Events

11:00

Music in expression -- A DSP based compositional methodology

Workshop

This workshop aims to provide hands-on experience with, as well as an in-depth practical demonstration of, the compositional techniques outlined in the author's previous paper, "Music in expression -- A DSP based compositional methodology". The original paper was presented as part of the Pure Data convention in Weimar, 2011.

Additionally, the workshop will include new findings and methods which the author has since developed. Moreover, although the workshop will be taught in Pure Data, users of other DSP programming languages are welcome and encouraged to participate, to further test the adaptability and universality of the proposed compositional methodology.

14:30

Lilypond: High-Quality Music Notation for Everyone

Workshop

In this workshop, we will present Lilypond as a viable alternative to commercial notation packages (Sibelius, Finale) for composers and musicologists alike. We will explain Lilypond's editing paradigm, in which music is not entered graphically but as plain text, which is then translated into the actual sheet music. While this non-WYSIWYG editing style can initially seem complex and takes time to get used to, it offers many advantages (concerning, for example, typesetting quality) that we intend to demonstrate in detail. After this workshop, participants should feel confident to get started with Lilypond in order to typeset their next small-to-medium composition or musicological paper. They should regard it as natural to consult the documentation and participate in the Lilypond community to overcome any problems that they might face as part of this process. We will use the cross-platform Lilypond editor 'Frescobaldi' (http://frescobaldi.org) as the main tool in the workshop.

We plan to cover the following subjects:

- Showcase of some examples

- Comparison with commercial notation packages (typesetting quality, license situation)

- Installation on different operating systems

- Introduction to the HTML help system and realtime support (mailing list, IRC)

- Editing paradigm, basic syntax

- Workflow suggestions for preparing larger scores and individual parts

- Conversion to and from other formats (e.g., .sib and .etf)

- Addressing specific questions by workshop participants ("How do I...?")

- Preparing musicological documents with lilypond-book and LaTeX or the OOoLilyPond plugin for OpenOffice

- Alternative editors ('LilyPondTool' jEdit plugin, Denemo, Rosegarden, Musescore)

16:10

Qstuff*: past present and future

Workshop

The proposed workshop will present a bit of eschatology and a hands-on demonstration of the most important aspects of the Qstuff* software package, with a particular focus on the Qtractor audio/MIDI multi-track sequencer project. Audition, discussion, and assistance from attending developers, composers and users in general are the main avenues for working out how the Qstuff* suite can be improved to best fit each user's purposes.

18:00

Midvinterblot

Concert

Midvinterblot. Divertimento for eight loudspeakers.

8'18", eight channels.

Midvinterblot (Midwinter sacrifice) is an algorithmic composition based on the Japanese collaborative poetry form renku ("linked verses"). Traditionally, renku is an often humorous poem written by a group of poets taking turns to write the verses, each verse being a response to the previous one, in accordance with a somewhat complicated set of rules.

In this piece, these ideas are explored in a musical context. Three SuperCollider poets borrow their voices from a recorded western transverse flute. The recorded sound material is processed using granulation, delays and filters, to a large extent controlled by pentatonic scales.

The piece was written using SuperCollider and scvim.

18:00

Endphase 20

Concert

Each Endphase is a unique, conceptually defined composition created in a collaborative environment, which assumes its final form in real time through improvisation. Once an "endpoint" (Endphase) has been reached, that individual performance is never repeated.

For LAC 2013, we present an Endphase based on the recent events involving Aaron Swartz (1) and his efforts in the realm of freedom of information. The indictment from Swartz's trial (2) was recorded and then grammatically and semantically analyzed. Treating each word as an individual element, this recorded text is (re)composed live to form the work, using specific algorithmic processes controlled in real time in an Ambisonic space. In addition to commenting on the need for commitment to free culture, the piece also problematizes the discrepancy between an æsthetic appreciation of the work's electroacoustic music and the harsh reality of the textual content. "Intent to defraud" is among the accusations brought against Swartz during his 2011 trial.

1) http://en.wikipedia.org/wiki/Aaron_Swartz

2) http://archive.org/details/UsaV.AaronSwartz-CriminalDocument2



18:00

Mesmo que depois

Concert

18:00

Waterfall Music

Concert

Waterfall Music, commissioned by the Society for Arts and Technology [SAT], is a logical follow-up to Handel's Water Music. However, instead of being performed on a barge navigating the Thames, Waterfall Music flows along fiber optics and other similarly suitable materials, between the [SAT] in Montreal and IEM. In the performance, "waterfall" also designates the particular technique employed for synchronizing musical events, providing performers an opportunity to merge voices, dance, improvised music, digital signal processing, virtual worlds and digital video effects in a collective creation from which synchronicity may emerge once in a while.

Melatab members are: at the Cube: Michal Seta (guitar & electronics), at the SAT (Montreal): Nicolas Bouillot (bass), Emmanuel Durand (live rendering), Alexandre Quessy (dance) and Zack Settel (thumb piano)

18:00

Cancelled:

Phoebe the Amoeba

Concert

This piece of music is a sketch. It was produced in a serial manner on a conventional electric guitar, spiffed up using Ambi panning in Ardour, and printed in UHJ stereo for this demo.

Seeking perfection in improvisation is superfluous. It is what it is. Yet how sweet is the search for perfection. I try to practise the sketch - not the music.

In this project we look at some questions:

Is there improvisation independent of the timeline?

How can improvisation incorporate information about the room?

How do you prepare yourself for improvisation?



The music will be performed live on a hexaphonic guitar and a few MIDI controllers and rendered in third-order Ambisonics.

21:00

Erklaer' mir. Liebe

Concert

The presented work gives a preview of the music for a one-woman theater piece developed in collaboration with the author Katrin Lindig. The theater piece questions the discourses of gender and the human body. Similarly, the music reflects the perception of inner and outer sound, drawing on discourses that range from the individual sonic perception of the body to subcultural dancefloor music.

21:00

Second Sense

Concert

We use a hand-made installation to create sound and visuals. Li Chi built it to work like a turntable. She uses Processing to make the visuals, and Yen Chi uses Pure Data to make the sound. Although this is our first collaboration, we try to match our work as closely as possible.

21:00

Bernt Isak Wærstad - Solo performance

Concert

The music is free improvised, but with a focus on musical form and exploration of timbres. It could perhaps be thought of as real time composition and sound sculpting. The Csound based Hadron Particle Synthesizer (an open source granular synthesis effect and synthesizer - www.partikkelaudio.com) is a key element in the timbral exploration of sonic textures. Combined with more traditional guitar effect pedals and digital effects, it widely expands the sonic palette of the guitar.

21:00

Restlichtverstärker

Concert

Restlichtverstärker is a Berlin music duo of Servando Barreiro and Malte Steiner, working together since 2011. They created a complex step-sequencing and sound-synthesis patch for Pure Data and perform with two laptops synchronized via OSC.

21:00

Cancelled:

((-_-))

Concert

Live AV session with Pd patches driven by unconventional generative patterns and sequencers oriented to deconstructed bodily movements, with the following ingredients:

...soundscapes extracted using genetic sampling techniques

...granulated particles from microphone feeds

...immersive sub-bass lines

...DIY/DIWO devices extracting electromagnetic waves translated to sound

...bent keyboards and old toys from the garbage that sound more distorted than a MetalZone

...visuals merging the limits between body empowerment, abstraction, glitch and other ('post')pornographies

...and above all audio/visual/wet distortions.



((-_-)) is a metaidentity working with FLOSS tools. The people involved are a team of collaborators who work with the DIY/DIWO/DIT paradigm, bringing research methods into creative programming around the following tags:

#Glitch #BioHacking #Generativity #Codes #BioPolitics #Body #DataTranslations #Sonifications #Perception #Resonances #Probability #Deconstruction #Empowerment



21:00

Cancelled:

chdh - egregore

Concert

Definition

"Egrégore" denotes an energy produced by the desires of many individuals working toward a common goal. This is the starting point of this audiovisual performance, which aims to exploit group-movement phenomena. Complex and expressive behaviors are generated and controlled by a computer and transcribed into sound and image. A crowd of particles deploys itself, reorganizes, and blends into living structures that are more or less coherent, evolving from chaotic movement toward a cohesive group. This project is a continuation of chdh's work on audiovisual instruments, but aims to radicalize the search.

Collective Behaviors

Group behavior is an interesting field of research due to the great complexity of the evolutions and forms generated by the interaction of subjects in a crowd. A crowd of identical elements can create surprising shapes and movements. This is the case in some schools of fish or flocks of birds: each element adapts its speed and position based on those of its neighbors. From these elementary movements arise organized structures whose diversity reflects the behavior they are based on. A global shape is created by the sum of individual wills: noisy lines or clouds turn into fractals born of chaotic dynamics. All these elements can share a common purpose, be opposed in different subgroups, or evolve independently.

Objectives

Egregore seeks to capitalize on the experience gained since 2002 along the main axes of chdh's work: physical modeling and behavior synthesis applied to the creation of audiovisual instruments. In egregore, there is only one composed form, a macro-structure born of a crowd of micro-elements that create a visual and sound space. Under the action of the two instrumentalists, this audiovisual form evolves and mutates. In one continuous gesture, the elements evolve from chaotic movement to a consistent organism.

Day 2 - Friday, May/10

Main Track

(Hall i7)

10:00

Ambisonics plug-in suite for production and performance usage

Ambisonics is a technique for the spatialization of sound sources within a spherical loudspeaker arrangement. This paper presents a suite of Ambisonics processors compatible with most standard DAW plug-in formats on Linux, Mac OS X and Windows. Usability considerations led to user-interface features and automation possibilities not available in other surround panning plug-ins. The encoder plug-in may be connected to a central program for visualisation and remote control, displaying the current position and audio level of every single track in the DAW. To enable monitoring of Ambisonics content without an extensive loudspeaker setup, binaural decoders for headphone playback have been implemented.
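At the heart of any Ambisonic encoder is a spherical-harmonic panning law. As a minimal sketch (the textbook first-order B-format encoding under one common convention, with the W channel attenuated by 1/√2 — not the plug-in suite's actual code), encoding a mono source at a given direction reduces to four gains:

```python
import math

# First-order Ambisonic (B-format) encoding sketch, one common convention.
def encode_first_order(sample, azimuth, elevation):
    """Encode a mono sample at (azimuth, elevation); angles in radians."""
    w = sample / math.sqrt(2.0)                        # omnidirectional
    x = sample * math.cos(azimuth) * math.cos(elevation)
    y = sample * math.sin(azimuth) * math.cos(elevation)
    z = sample * math.sin(elevation)
    return w, x, y, z

# A source straight ahead (azimuth 0, elevation 0) excites only W and X:
w, x, y, z = encode_first_order(1.0, 0.0, 0.0)
print(round(w, 4), round(x, 4), round(y, 4), round(z, 4))  # 0.7071 1.0 0.0 0.0
```

Automating a track's panner then just means automating the azimuth/elevation arguments per block, which is why position automation maps so naturally onto DAW automation lanes.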

10:40

The Rationale behind Rationale: Designing a Sequencer for Unlimited Just Intonation

This article presents some of the considerations that went into determining how the Rationale Just Intonation sequencer should work. Various special problems that came about because of the number and nature of usable tones are discussed, as well as reasons for eschewing other existing notation systems. Programming specifics are ignored in favor of questions of interface design.

11:40

Chino -- a framework for scripted meta-applications

This paper presents Chino, a framework for creating meta-applications from Linux audio and MIDI tools. It provides command line options to create or open sessions, a runtime user interface for adding, restarting or removing applications, and a hand-editable file format to which sessions are saved. Graphviz can optionally be used to display the layout of a session.

Chino itself is a Bash script that provides only generic functionality; users can create presets to implement what is desired for their use cases. Presets are prototypes for sessions, and multiple sessions can be derived from one preset.

A preset is made up of a number of applications, each defined as a program together with its usage. For every application, the preset contains required application files and a library file that, via variables and functions, defines how the program is to be started and interconnected. Defining applications together with their connections results in dependencies, which are tied via user defined port-groups. In this paper, we will explain the architecture of Chino and take a look at some implications and limitations of this session management model.

12:20

[Lightning talk] OpenGL UIs for LV2 plugins

"new unprecedented LV2 tech, use of the otherwise unused graphics card in your audio-box..."

12:40

[Lightning talk] Drum Gizmo demo

13:00

[Lightning talk] Laborejo - Music Notation Workshop: feature highlights

A demonstration of the program "Laborejo -- Music Notation Workshop". In 15 minutes we will see some of its more interesting features, such as note generation, redundancy fighting, "performance signatures", advanced exporting and, briefly, scripting.

14:30

Csound6: old code renewed

This paper describes the current status of development of a new major version of Csound. We begin by introducing the software and its historical significance. We then detail the important aspects of Csound 5 and the motivation for version 6. Following this, we discuss the changes to the software that have already been implemented. The final section explores the expected developments prior to the first release and the planned additions that will come on stream in later updates of the system.

15:10

Live music programming in Haskell

We aim to compose algorithmic music interactively with multiple participants. To this end we have developed an interpreter for a sub-language of the non-strict functional programming language Haskell that allows the program to be modified during its execution. Our system can be used both for musical live coding and for demonstrating and teaching functional programming.

16:10

ipyclam, empowering CLAM with Python

This paper introduces ipyclam, a new way of manipulating networks in CLAM (C++ Library for Audio and Music) using the Python language. This extends the power of the framework in many ways: exploring and manipulating live processing networks via interactive Python shells, or extending visual prototyping in CLAM by adding elaborate application logic and user interfaces with PyQt/PySide. By redefining its back-end layer, the described Python API, ipyclam, can be reused to control other patching-based systems such as JACK or gAlan.

16:50

Music for Programmers (MFP): A Dataflow Patching Language

MFP is a graphical dataflow patching language in the tradition of Max/MSP and Pure Data. It expands on its predecessors by integration of higher-level language constructs from Python, including a variety of data types and operations and the widespread use of the Python evaluator. A new lexical scoping system, a layers approach to building logical code blocks, and a UI optimized for keyboard control are also featured.

Workshops & Events

10:00

Workshop on Making Embedded Musical Instruments and Embedded Installations using Satellite CCRMA

Workshop

This workshop focuses on teaching participants how to make embedded musical instruments and embedded installations using Satellite CCRMA. By the close of the three-hour workshop, each participant will complete a new self-contained instrument or installation using a take-home kit, which is about twice the size of a deck of cards. Beginning and intermediate participants will benefit primarily from being led through a series of basic exercises in using the kit, while advanced participants may be most interested in discussing how to extend the kits.

Satellite CCRMA is currently based on the powerful Raspberry Pi embedded Linux board, which executes floating-point instructions natively at 700 MHz. Participants will be shown how to run Pure Data (Pd) on the board, but participants are welcome to explore other software available on the Satellite CCRMA memory image. Additional topics include Arduino, Firmata, pico projectors, open-source hardware, SuperCollider, and more.

10:00

IOSONO

Workshop

IOSONO develops its multichannel audio interfaces for the control of sound field reproduction under Linux. In the workshop, we will demonstrate their user interface for creating spatial audio scenes using different audio rendering methods, such as amplitude panning, wave field synthesis, and an Ambisonic panning we were able to develop for IOSONO. We start from scratch and show how to specify the loudspeaker locations and how to equalize them automatically during setup, using the user interfaces provided by IOSONO. Audio scene editing is done remotely from the rendering machine by a VST plugin in Nuendo that specifies the locations of the different audio tracks in space via OSC commands in real time. We will arrange a small spatial audio composition together and perhaps try to control the audio objects via OSC from other interfaces.

14:30

AVB Linux Stack Workshop

Workshop

This workshop and discussion panel is intended to get a working concept for an IEEE 802.1AVB (Audio Video Bridging) Linux stack. AVB is an audio and video (AV) network streaming infrastructure, designed to provide realtime transmission and admission control of AV streams on the DLL/MAC layer (OSI layer 2). To enable Linux computers to stream AV using an AVB infrastructure, an AVB Linux stack is needed.

14:30

Linux/Ardour in a Recording Studio

Workshop

Currently the Digital Audio Workstation (DAW) in the SPSC Recording Studio is a MS Windows based machine. We want to demonstrate that it is entirely possible to achieve the same workflow and level of integration of the mixing console and the DAW using free software (Linux, Ardour).

We plan to provide pre-recorded audio tracks. The workshop itself will consist of an overview of the Linux DAW and its integration into the recording studio, followed by the mixing of the tracks, during which participants will be able to get hands-on experience with our setup.

If possible, we intend to provide the opportunity of recording overdubs to the existing tracks in order to complete the demonstration of free software in a professional recording studio environment.

16:10

Using Python in CsoundQt

Workshop

Since version 0.7, CsoundQt offers not only a built-in Python interpreter, but grants this interpreter direct access to all levels of usage. The user is able to:

- generate and modify Csound code

- generate and modify CsoundQt's widgets

- run Csound instances including live coding

- build his or her own Graphical User Interfaces.

This workshop will show and discuss examples of all these features. The ideal participant will have some knowledge of both Csound and Python, and have CsoundQt installed.

20:00

Feedback performance

Concert

The piece is an improvised performance exploring the resonant characteristics of the performance space. This is done by using shotgun microphones to pick up sound precisely from specific spots in the room and processing the signal with a slowly reacting feedback-reducer algorithm implemented in Csound. The sound from Csound is fed to speakers in the room, creating the potential for a traditional audio feedback loop. Feedback is controlled by the software process, and the performer moves the microphones to "play" different resonant spots found in the performance space. One could say that the piece is related to the feedback-based installations of Agostino Di Scipio.
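The general idea of a slowly reacting feedback reducer can be sketched as follows. This toy Python version is purely illustrative (the piece uses a Csound implementation whose details are not given here): it watches the magnitude spectrum of each input block and, when one bin keeps dominating block after block, slowly pulls a gain for that bin down.

```python
import cmath
import math

# Toy spectral feedback reducer; block size, patience and step are invented.
N = 64

def dft_mags(block):
    """Magnitudes of the first N/2 DFT bins (naive DFT, fine for a sketch)."""
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n, x in enumerate(block))) for k in range(N // 2)]

class FeedbackReducer:
    def __init__(self, patience=3, step=0.1):
        self.gains = [1.0] * (N // 2)     # per-bin correction gains
        self.last_peak, self.count = None, 0
        self.patience, self.step = patience, step

    def process_block(self, block):
        mags = dft_mags(block)
        peak = max(range(len(mags)), key=lambda k: mags[k])
        self.count = self.count + 1 if peak == self.last_peak else 1
        self.last_peak = peak
        if self.count >= self.patience:   # persistent peak: likely feedback
            self.gains[peak] = max(0.0, self.gains[peak] - self.step)
        return self.gains

# A pure tone at bin 8 stands in for a ringing feedback frequency:
tone = [math.sin(2 * math.pi * 8 * n / N) for n in range(N)]
fr = FeedbackReducer()
for _ in range(5):
    gains = fr.process_block(tone)
print(round(gains[8], 2))   # the gain at the ringing bin has ramped down: 0.7
```

The "slowly reacting" quality comes from the patience threshold and the small gain step: musical transients pass untouched, while a sustained ring is gradually attenuated.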

20:00

Hrafntinnusker

Concert

The name "Hrafntinnusker" ("Raven-stone-scissor") refers to a volcanically active region, more precisely a mountain, in Iceland. The elevation Hrafntinnusker is exceptionally rich in obsidian, a kind of glassy solidified lava. On the way to that area, the Laugavegur trail leads through an area in which the local mountains seem to have mastered the technique of granular synthesis: grained rhyolite rock is superimposed in light and dark distribution patterns by apparently stochastic laws, creating, from an artistic point of view, an extraordinarily sophisticated landscape.

The structure and formation processes, but also the material and the mood, of this landscape formed the starting point of the piece "Hrafntinnusker". Granularly generated structures are distributed on tilted planes in space and undergo a continuous transformation process. The sounds used are vaguely reminiscent of hard materials such as stone, glass and metal.

This piece was composed entirely in Csound and blue, using Cmask as an external sound object within blue. The composition consists of these external sound objects, each of them producing a cloud of sound grains according to stochastic distributions. The location of each sound is likewise generated stochastically. Some parameters were generated using blue's automation capabilities. The whole information was then fed into my Csound code for spatialisation. The result is a third-order Ambisonic encoded piece, which may be decoded in versatile ways for several possible loudspeaker layouts.

20:00

Biomimesis II

Concert

Custom-made software generates a dense auditory stream based on "nature-inspired sound design". These sounds have not been composed, nor do they follow musical laws. Rather, they simulate a complex soundscape where sounds, in John Cage's words, "live their own lives".

The result is a hybrid between real and virtual, in which sounds seem recognizable and familiar, evoking personal associations and cultural projections, yet at the same time alien, otherworldly and unidentifiable, blurring the boundaries between what we consider natural and artificial.

Listeners can "zoom" in or out on the various elements that make up the whole piece, segregating or integrating them, discovering new details or subtle changes.

20:00

Miškas

Concert

Miškas means forest in Lithuanian.

The piece was created during a residency in Druskininkai, Lithuania. The sound material is made up of field recordings from the forests in Druskininkai. The sounds have been processed in SuperCollider using filters with varying bandwidths and tuned according to different scales, among others a scale based on the golden mean, like the one John Chowning used in Stria.

20:00

Parallaxis: For Four Instruments and Electronics

Concert

The score for Parallaxis is in four parts and can be played on any four melodic instruments. Multiple quartets may perform simultaneously under the direction of one or more directors.

The music notation is presented to the performers through the author's web-based score playback system. This allows performers to interact with the score and to make decisions about the preparation of a "version" before a performance. There is also the possibility of live interventions during a performance by a "Director/Conductor".

The score adopts a "scrolling score" paradigm; combined with graphic notation, this allows the score to be interpreted by musicians without strict "classical" training. The individually controllable "parts" of the scrolling score allow a high degree of coordinated ensemble playing (complex polyrhythms etc.) regardless of the musicians' experience in such ensemble styles.

20:00

Velvet Skin, Heart of Steel

Concert

The newly inaugurated Bing Concert Hall at Stanford University may appear to be just sonic velvet, gracefully covered with multiple overlapping sinusoidal curves carved from warm resonant wood, its sails and cloud ceiling caressing all sounds produced on stage into enveloping beauty, but at its core the Bing Concert Hall is made of steel. I had a chance to bang on the steel beams as they were waiting on the ground before construction began. Sleeping steel, biding its time of hidden glory. I also climbed on top of the "cloud" ceiling after construction was finished, recorder in hand, and spent one hour getting sounds out of anything that could be banged or scraped or bowed. Some of those sounds are included in this collage and short etude, a prelude for a longer piece. Included are metal doors banging in asymmetrical rhythms, steel pipes and beams of all shapes and sizes, big ventilation fans left over after construction, and much more. The sonic materials were coaxed into musical form using Bill Schottstaedt's s7 Scheme interpreter and CLM.

20:00

Ominous - Incarnated sound sculpture (Xth Sense technology)

Concert

Ominous (OMN) is a sculpture of incarnated sound. The piece was commissioned on occasion of the finals of the 5th Live Electronic Music Project Competition, organized by the European Conference of Promoters of New Music (ECPNM).

The performance embodies, before the audience, the metaphor of an invisible and unknown object enclosed in my hands. This is made of malleable sonic matter. Similarly to a mime, I model the object in the empty space by means of whole-body gestures. By using my visceral, new musical instrument "Xth Sense", the bioacoustic sound produced by the contractions of my muscle tissues is amplified, digitally processed, and played back through nine loudspeakers. The natural sound of my muscles and its virtual counterpart blend together into an unstable sonic object. This oscillates between a state of high density and one of violent release. As the listeners imagine the object's shape by following my gesture, the sonic stimuli induce a perceptual coupling. The listeners see through sound the sculpture which their sight cannot perceive.

20:00

Soundscape mit Cage und Joyce

Concert

John Cage's The Wonderful Widow of Eighteen Springs is the point of departure for this composition. It puts the listener inside the piano (many pianos) and explores its sounds and dimensions. The opening vocal phrase transforms immediately into instrumental sound. The sound material was recorded on the acoustic instrument especially for this composition, then transformed and spatialized in SuperCollider.

Day 3 - Saturday, May/11

Main Track

(Hall i7)

10:00

A Pure Data toolkit for real-time synthesis of ATS spectral data

This paper presents software development and research in the field of digital audio synthesis of spectral data using the Pure Data environment (Miller Puckette et al.) and the ATS spectral analysis technique (by Juan Pampin [6]). The ATS technique produces spectral data using a deterministic-plus-stochastic representation. The focus is on the methods by which such data may be read and synthesized in real time using several Pure Data externals developed by the author and others, as well as on the audio synthesis strategies involved. All the software involved in this development is GNU-licensed and runs under Linux.
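ATS's deterministic-plus-stochastic model represents each analysis frame as sinusoidal partials plus a noise residual. A minimal, stdlib-only Python sketch of that resynthesis idea (not the actual API of the Pure Data externals; the frame size, partial list and noise scaling here are illustrative assumptions):

```python
import math
import random

def resynth_frame(partials, noise_energy, sr=44100, n=64, seed=0):
    """Resynthesize one analysis frame as a sum of sinusoidal partials
    (the deterministic part) plus scaled uniform noise (a crude
    stand-in for the stochastic residual)."""
    rng = random.Random(seed)   # fixed seed keeps the sketch reproducible
    out = []
    for i in range(n):
        t = i / sr
        det = sum(amp * math.sin(2 * math.pi * freq * t)
                  for freq, amp in partials)
        sto = noise_energy * (2 * rng.random() - 1)
        out.append(det + sto)
    return out

# two partials at 440 Hz and 880 Hz plus a little noise
frame = resynth_frame([(440.0, 0.5), (880.0, 0.25)], noise_energy=0.05)
```

In a real-time context the partial frequencies and amplitudes would be interpolated between successive frames rather than held constant as here.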

10:40

Multi-Channel Noise/Echo Reduction in PulseAudio on Embedded Linux

Ambient noise and acoustic echo reduction are indispensable signal processing steps in a hands-free audio communication system. Taking the signals from multiple microphones into account can help to more effectively reduce disturbing noise and echo. This paper outlines the design and implementation of a multi-channel noise reduction and echo cancellation module integrated in the PulseAudio sound system. We discuss requirements, trade-offs and results obtained from an embedded Linux platform.

11:40

Lyapunov Space of Coupled FM Oscillators

I consider two coupled oscillators, each modulating the other's frequency. This system is governed by four parameters: the base frequency and modulation index for each oscillator. For some parameter values the system becomes unstable. I use the Lyapunov exponent to measure the instability. I generate images of the parameter space, implementing the number crunching on graphics hardware using OpenGL. I link the mouse position over the displayed image to realtime audio output, creating an audio-visual browser for the 4D parameter space.
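Such an exponent can be estimated numerically with Benettin's method: run a perturbed copy of the trajectory alongside the reference, renormalize the separation at every step, and average the logarithmic growth. A minimal sketch under an assumed discretization of the coupled oscillators (the paper's exact update rule and GPU implementation are not reproduced here):

```python
import math

def lyapunov(f1, f2, i1, i2, steps=2000, dt=1/4410, eps=1e-6):
    """Estimate the largest Lyapunov exponent (per second) of two
    mutually frequency-modulating sine oscillators by tracking the
    divergence of a perturbed copy of the trajectory."""
    p1 = p2 = 0.0            # reference phases
    q1, q2 = eps, 0.0        # perturbed copy, offset by eps
    acc = 0.0
    for _ in range(steps):
        # each oscillator's instantaneous frequency is its base
        # frequency plus the other's output times a modulation index
        p1, p2 = (p1 + 2 * math.pi * (f1 + i1 * math.sin(p2)) * dt,
                  p2 + 2 * math.pi * (f2 + i2 * math.sin(p1)) * dt)
        q1, q2 = (q1 + 2 * math.pi * (f1 + i1 * math.sin(q2)) * dt,
                  q2 + 2 * math.pi * (f2 + i2 * math.sin(q1)) * dt)
        d = math.hypot(p1 - q1, p2 - q2) or eps
        acc += math.log(d / eps)
        # rescale the perturbed trajectory back to distance eps
        q1 = p1 + (q1 - p1) * eps / d
        q2 = p2 + (q2 - p2) * eps / d
    return acc / (steps * dt)

lam = lyapunov(220.0, 330.0, 0.0, 0.0)  # no modulation: exponent ~ 0
```

Sweeping two of the four parameters with this function, pixel by pixel, yields the kind of parameter-space image the paper renders on the GPU.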

11:40

Qstuff*: part2

12:20

Production and Application of Room Impulse Responses for Multichannel Setups using FLOSS Tools

We present the outcomes of a series of room impulse response (IR) measurements. We have recorded binaural, Ambisonic-encoded and regular stereo/mono IRs of multichannel loudspeaker arrays in various concert halls in Austria, Northern Ireland, Germany and New Zealand. The resulting IRs and accompanying documentation have been made publicly available on a website for composers to use in the production and documentation of their multichannel pieces. The paper also discusses several custom-written shell scripts and extensions to the Aliki and Jconvolver software packages, which we have developed for the production of the presented IRs.
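Using such an IR for reverberation amounts to convolving the dry signal with it. Jconvolver does this with partitioned FFT convolution for efficiency, but the underlying operation can be sketched directly (a plain-Python illustration, not the Jconvolver API):

```python
def convolve(signal, ir):
    """Direct-form convolution of a dry signal with an impulse
    response: every input sample triggers a scaled copy of the IR."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for n, x in enumerate(signal):
        for k, h in enumerate(ir):
            out[n + k] += x * h
    return out

# a unit impulse through an IR simply reproduces the IR
wet = convolve([1.0, 0.0], [0.5, 0.25, 0.125])
```

For a multichannel loudspeaker array one such convolution runs per source/speaker pair, which is why the FFT-based approach matters in practice.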

14:30

Pitch-class Set design in SuperCollider

Pitch-class set theory and its extensions constitute an important basis for mastering multi-layered atonal composition. The SuperCollider environment offers significant possibilities for applying this technique in the creation of abstract pitch-class designs that may be used as part of more complex algorithmic composition developments. This paper presents pcslib-sc, a quark (library) for pitch-class set design in SuperCollider, and a use case demonstrating its musical relevance.
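As background (this sketches the underlying theory in plain Python, not the pcslib-sc quark's actual API), two basic pitch-class operations are transposition mod 12 and reduction to normal order:

```python
def transpose(pcs, n):
    """Transpose a pitch-class set by n semitones (mod 12)."""
    return sorted((p + n) % 12 for p in pcs)

def normal_form(pcs):
    """Most compact rotation of a pitch-class set: smallest span from
    first to last element, ties broken by packing toward the left."""
    pcs = sorted(set(pcs))
    rotations = [pcs[i:] + pcs[:i] for i in range(len(pcs))]
    def key(rot):
        # span first, then successive intervals measured from the first pc
        return [(rot[j] - rot[0]) % 12 for j in range(len(rot) - 1, 0, -1)]
    return min(rotations, key=key)

major_triad = normal_form([7, 0, 4])   # C major triad in normal order
shifted = transpose([0, 4, 7], 2)      # the same set up a whole tone
```

From such primitives, set classes, interval vectors and the other tools of the theory can be built up.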

15:10

Experiments with dynamic convolution techniques in live performance

This article discusses dynamic convolution techniques motivated by the musical exploration of inter-processing between performers in improvised electroacoustic music. After covering some basic challenges with convolution as a live performance tool, we present experimental work that enables dynamic updates of impulse responses and parametric control of the convolution process. An audio plugin implemented in the open-source software Csound and Cabbage integrates the experimental work in a single convolver.

16:10

Creating LV2 Plugins with Faust

The faust-lv2 project aims to provide a complete set of LV2 plugin architectures for the Faust programming language. It currently implements generic audio and MIDI plugins with some interesting features such as Faust MIDI controller mapping, polyphonic instruments with automatic voice allocation and support for the MIDI tuning standard. You can use these architectures to quickly turn Faust programs into working LV2 audio effects and instrument plugins, ready to be run with LV2-capable DAWs such as Ardour and Qtractor. The plugin architectures and some helper scripts are now also available in the Faust distribution, and the Faust online compiler supports these as well.

16:50

[Lightning Talk] Towards a live-electronic setup with a sensor-reed saxophone and Csound

This paper presents a setup to pick up saxophone reed vibrations directly, in an attempt to monitor the saxophone signal without risky feedback loops despite drastic dynamic manipulations. We prepare synthetic saxophone reeds with strain-gauge sensors and propose a circuit to connect the sensor reed to a line-level soundcard input. Furthermore, we discuss possible open-source software to emulate classic stompbox effects. Finally, we present a Csound instrument design that allows on-the-fly signal routing between multiple effects during a live performance.

17:10

[Lightning talk] MOD - An LV2 host and processor at your feet

MOD is a Linux-based LV2 plugin processor and controller. Musicians access it via Bluetooth and set up their pedalboards by making internal digital connections between audio sources, plugins and audio outputs. After a pedalboard set is saved, it can be shared with other users on the MOD social network. The software components are Open Source, which means you can also run it on any Linux machine, not only on MOD hardware. The presentation aims to introduce the device to the community and discuss how its development may interact with plugin development and with the development of the LV2 standard itself.

17:30

Closing Ceremony and Group Photo

Workshops & Events

10:00

Using your electric guitar with Linux

Workshop

Almost 20 years of guitar playing and over 10 years of Linux experience: one day the two just had to come together. With the advent of guitarix, a virtual guitar amplifier for Linux, this became reality, and coupled with the modularity of the Linux audio ecosystem a whole plethora of possibilities became accessible. In this workshop I will show the current possibilities for a guitarist with Linux audio in a hands-on, live setting.

11:40

Take it or Fake it - Spatial miking techniques and post-production spatialisation tricks for recordists

Workshop

A workshop on stereo miking techniques for musicians and recording amateurs, to explore ways to capture a natural ambience if the room is good, or, under adverse acoustic conditions, to obtain a dry enough recording so that a convincing spatial impression can be faked in post-production.

Participants are encouraged to bring their own laptop and headphones with Ardour3 or another DAW of their choice and a selection of generic signal processing plugins pre-installed. After some introductory remarks on various microphone techniques and a plenary demo over loudspeakers, I will distribute example audio snippets for everybody to play with and dissect.

Participants are also encouraged to bring problem recordings of their own for us to discuss, where they attempted to record and/or synthesize spaces and found the result unsatisfactory.

14:30

Extended View Toolkit - Video projection mapping with Linux/Pd

Workshop

The workshop will enable participants to work with the Extended View Toolkit, a multi-platform solution for projection mapping with scene-based control and automation features, based on Puredata/GEM. The toolkit is a set of abstractions for combining multiple video or image sources into a panoramic image and for projection-mapping setups with multiple projectors or projection environments with challenging geometric forms.

16:10

Building distributed graph of live audio/video/data streaming with switcher/shmdata, puredata and your application

Workshop

Switcher is a new modular streaming engine for telepresence applications. It relies heavily on shmdata, a library enabling real-time sharing of data flows between applications; shmdata can share audio, video and any standard or custom data buffer. Switcher and shmdata are based on the GStreamer framework and have already been used for a 3D telepresence installation where remote participants were captured in real time as point clouds with lip-synchronized audio.

The workshop will introduce switcher/shmdata and the Puredata shmsrc~ and shmsink~ externals. Participants will learn how to build a networked graph involving multichannel audio (with synchronized video and data) where each node in the graph can process audio with Puredata.

Switcher and the shmdata library are developed by Nicolas Bouillot at the Society for Arts and Technology [SAT] in Montreal, Canada, and are supported by funds from the Ministère du Développement Économique, de l'Innovation et de l'Exportation du Québec.

20:00

Android drummers - interactive sound game for participants with android phones or tablets, conductor

Concert

"Android drummers" is a project created for performing in schools or elsewhere, aimed at youngsters roughly 12 to 20 years old. The goal of this sound game is to let teenagers experience that music is not only beat but, first and foremost, sound. To take part in the piece one needs an Android phone or tablet with an app that provides a very primitive drum machine. The user can choose how many beats are played in a measure, how many subdivisions each beat has, how regular the beats are, and so on. The device makes some sound so that the user can hear the result of their actions. The app reports every local beat and action via OSC messages to a central computer connected to the PA. The computer plays back all the sounds together and slowly shifts the overall sound further and further away from normal drum beats. The players follow some simple commands from the conductor (the author), which help control the overall form of the piece.

The sound synthesis is written and realized in Csound (the app in Eclipse), installed on a computer running openSUSE 12.1.

To perform the piece, at least 8 to 15 (or more) people from the audience must have an Android device and install the app (preferably beforehand), OR have a laptop with Csound and CsoundQt installed and run the enclosed file drumclient2.csd there. The csd file and Android app are enclosed with the submission.
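The phone-to-computer reports described above travel as OSC messages over UDP. A minimal stdlib-only encoder for an OSC message with int32 arguments; note that the "/beat" address and argument layout are hypothetical illustrations, not taken from the piece:

```python
import struct

def osc_message(address, *ints):
    """Encode a minimal OSC message carrying int32 arguments only.
    OSC strings are null-terminated and zero-padded to a multiple
    of four bytes; int32s are big-endian."""
    def pad(b):
        return b + b'\x00' * (4 - len(b) % 4)
    msg = pad(address.encode())                    # address pattern
    msg += pad((',' + 'i' * len(ints)).encode())   # type tag string
    for v in ints:
        msg += struct.pack('>i', v)                # int32 arguments
    return msg

# a beat report: hypothetical beat index 3, subdivision 1
packet = osc_message('/beat', 3, 1)
# could then be sent with socket.sendto(packet, (host, port))
```

On the Csound side, the matching messages would be received with the OSClisten opcode.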

20:00

Improvisation

Concert

To do live coding, I use Conductive, a library I have written in the Haskell programming language, together with hsc3 (the Haskell bindings to the SuperCollider synthesis engine), on top of a standard Linux audio system (ALSA and JACK) with Patchage for routing.

The interaction method is loading prepared code, editing that code, and entering new code in the vim text editor. The code is sent to the interpreter of the Glasgow Haskell Compiler (GHCi) using tmux (a terminal multiplexer) and a plugin for vim (tslime).

The content of that interaction involves managing multiple concurrent processes that spawn events. The code being edited or written during the performance controls things such as:

- the number of concurrent processes running at a given time

- what kinds of events those processes are spawning

- the rate and rhythm of event spawning

- the setting and adjustment of time-variable parameters

20:00

The Infinite Repeat

Concert

A musician with over 20 years of experience and a computer with Linux. That's what it boils down to. The result: conventional, decent songwriting, different sounding because of the choice not to walk the well-trodden paths and because of an autodidactic background, an outspoken personal taste and an open-minded worldview.

20:00

Improvisation

Concert

'Improvisation' is an improvisation exploring physical and digital feedback, as well as "wrong" usage of audio streaming via the UDP protocol, taking advantage of the distortion created when streaming locally with very small block sizes. It results from a new set exploring the relationship between software and hardware, using a BeagleBoard controlled by an Arduino as a compact computer-and-controller bundle, plus a small amplifier, a mixing console and a contact microphone.

The combination of the contact microphone with the amplifier is used for physical feedback, which gives tactile feedback much as an acoustic instrument does: very small changes in the position of the microphone and in the pressure applied to it cause big timbre changes. The sound is also transmitted locally and fed back from the receiver to the transmitter, exploiting the 8-bit format of the transmitting object class in order to create harsh noise.
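Part of the harshness comes from requantization: reducing audio to 8-bit resolution introduces audible quantization distortion. A quick sketch of that degradation (the exact format handled by the streaming objects may differ):

```python
def quantize_8bit(samples):
    """Crush floating-point audio in [-1, 1] down to 8-bit resolution
    (256 levels) and back, modelling the loss an 8-bit transmit
    format imposes on the signal."""
    levels = 256
    out = []
    for x in samples:
        x = max(-1.0, min(1.0, x))                  # hard clip first
        q = round((x + 1.0) / 2.0 * (levels - 1))   # map to 0..255
        out.append(q / (levels - 1) * 2.0 - 1.0)    # back to [-1, 1]
    return out

crushed = quantize_8bit([0.3, -0.3, 1.5])  # 1.5 clips to full scale
```

The error each sample picks up is bounded by half a quantization step, roughly 1/255 of full scale, which at speaker level is clearly audible as grit.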

20:00

Superdirt² - Live Performance

Concert

Superdirt² - fascinating electro beats mixed with virtuosically performed cello for a danceability never achieved before! With Ras Tilo on synthesizers and Käpt'n Dirt on cello, it offers a musical experience situated between drum'n'bass, jungle, dub, dubstep and far beyond... All the music is created and performed using GNU/Linux and released under a Creative Commons license.

20:00

Cancelled:

MUTE

Concert

Synthesthesia is a workshop and software developed by artist members of Perte de Signal in which participants are invited to create visual audio compositions with simple materials like white paper and a black background. The shapes are detected by a camera and control audio parameters such as pitch, amplitude and spectrum.

In MUTE, Quessy plays with Synthesthesia to generate intense noise and glitch music.

24:00

[LinuxSoundNight] ClaudiusMaximus

Concert

Open JACK session at the Linux Sound Night.

24:00

[LinuxSoundNight] Blankest Slate

Concert

Open JACK session at the Linux Sound Night.

24:00

[LinuxSoundNight] untitled

Concert

Open JACK session at the Linux Sound Night.

24:00

[LinuxSoundNight] zynadds*bfx

Concert

Open JACK session at the Linux Sound Night.

Day 4 - Sunday, May/12

Main Track

(Hall i7)

11:00

Excursion

The final event of the conference will be a trip to the beautiful south-eastern Styrian countryside, renowned for its vineyards and pumpkin seed oil...

Daily Events / Exhibitions

Installations & Listening sessions

The following pieces are presented as installations or part of a loop-playlist in the on-the-air listening room. They are accessible on each day of the conference during opening hours (14:00 - 19:00 or by prior appointment); except for the "On-the-air Listening Room" which will be broadcast by Radio Helsinki at irregular intervals, starting on Tuesday 7th of May 10:00-12:00.

cs2

cs2 is one in a series of pieces examining chaotic systems. The visual system involves a series of relatively simple calculations, but using sound to push the system to its extremes causes unexpected behaviours, oscillations and the collapse of some of the visual structures. This series of works pushes sonic and visual structures to the point of collapse and explores the aesthetic consequences of doing so.

Vitreous Intermission

This is a self-generating sound installation with adjustable pauses between the sound events. It can be put anywhere, preferably in a space where people work or take a break. You only need a computer and ordinary external computer speakers.

"Adjustable" means finding the volume level and pause durations that make this installation part of a space: nearly forgotten, but not completely.

Cancelled:

3d Audio Glocke

It is a portable third-order full-space Ambisonics audio system designed for a single listener. So far it is designed only to reproduce compositions. The idea is to develop a full-space system that is low-cost and easy to use. The isolation of the listener is a reference to the common isolation of the modern individual listening to music and sound: headphones, single workplaces, computer games, solitary settings for watching movies, etc. This evolution towards an isolated individual in real space listening to music and sound increases the importance of the virtual space.

The realized works are the author's own compositions, some in co-production with Bernhard Rietbrock, as well as one remix of a composition by T. Reznor & A. Ross, who offer the multitrack for free use. The primary objective of the compositional decisions is to explore the full space.

PhoneMI

PhoneMI is an old analog phone equipped with an Arduino board connected to buttons, potentiometers and sensors for user interaction. The real-time synthesis engine is Pure Data, running on a Linux laptop and communicating with the Arduino board over USB. Additionally, the phone has two contact mics to input "live" audio for processing.

augen-auf-schlag

This interactive sound-and-light installation is a cooperation between the composer and sound artist Thomas Gerwin and the media artist Wolfgang Spahn. It is conceived as an instrument for generating and playing sounds and liquid colors.

Three specially designed and developed projectors are filled with a mixture of colored liquids and ferrofluid. In each projector these liquids can be activated and stimulated with four magnetic coils.

Opposite the projectors stands a MIDI drum pad, which visitors can play. A Pure Data patch generates sounds and, via three controllers, also triggers the magnetic coils (depending on the note and dynamics of the drum pad). Three of the pads each stand for one of the three RGB colors; the other four pads stand for four directions: above, below, left and right. If you hit one of the pads, the corresponding liquids in the projectors react. Between the projectors and the drum pad hangs a pane of glass with a round projection screen, on which the three monochrome projections are mixed into one full-colour projection.

Cancelled:

The Echo Coats

The "Echo Coats" are sound-driven, nostalgically designed garments that provide a means for women to playfully and sonically intervene in public spaces. The Andante Coat teases the world around its wearer by uttering sensual cosmetic titles, originally meant to tempt her own purchasing power. And at the attack of a boot heel on the pavement, the Staccato Coat releases machine sounds from its shoulders to urge people to get out of her way.

The technology of the coats integrates mini-speakers, headset microphones and iPods. The iPods run RjDj, a reactive music application that combines live environmental sound from the headset microphone with sound programming on the iPod. The coats employ these mics as touch sensors and sound detectors to trigger playback. Mini-speakers embedded in the exterior of the coats replace the headphones, making what was previously personal now public.

Live and direct

Live and direct is an audio-visual installation using video footage of the character Max Headroom (from the TV series of the same name) to resynthesize live news radio broadcasts in real time.

Album: X Marks The Spot

Concert

We, the electronic music band "XBloome" from Vienna, have produced "X marks the spot": an album made exclusively with Free Software from beginning to end.

Objectus

Concert

Camphor and you

Concert

This piece, "camphor and you", is a mid-tempo electronic song, with heavy analog influences and a somewhat hazy sound about it. It was composed in Renoise on Linux with a selection of both open-source and proprietary plugins.

The In-Tune Sonata

Concert

Movement 1: Sonata

Movement 2: Theme and Variations

Movement 3: Waltz

Movement 4: Rondo



The In-Tune Sonata was created using the Rationale Just Intonation sequencer. It is written in extended Just Intonation, meaning it uses many frequencies that are not available in equal temperament. The synthesis was realized with Csound.
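By way of illustration (Rationale's internal representation is not assumed here), just-intonation intervals are exact frequency ratios, so most of the resulting pitches fall between the twelve equal-tempered ones:

```python
from fractions import Fraction

def ji_frequency(base_hz, ratio):
    """Frequency of a just-intonation interval above a base pitch,
    kept exact as a Fraction."""
    return base_hz * Fraction(ratio)

# a just major third (5/4) above A 440 vs. the equal-tempered third
just = float(ji_frequency(440, '5/4'))   # exactly 550 Hz
tempered = 440 * 2 ** (4 / 12)           # about 554.37 Hz
```

The roughly 14-cent gap between those two thirds is exactly the kind of pitch that equal temperament cannot express but extended JI uses freely.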

Sounding Cinema - remixing the internet archive

Concert

soundingcinema.net is a collection of radio stations using the audio from films as its source.

soundingcinema scrapes films from the Internet Archive based on genre; the audio is then separated from these films and streamed as "film on radio". There are separate streams for each genre: Film Noir, Horror, SciFi, Comedy, etc. There is also the soundingcinema.net meta-channel, which creates randomised montage soundscapes by merging multiple audio sources into constantly changing waves of "genrescapes".

soundingcinema.net is dedicated to remixing creative commons and public domain material and does so through the sole use of free and open source software tools on a free and open operating system.

soundingcinema.net exploits the latest tools and standards in the world of FLOSS audio streaming. The streams are assembled and encoded using the stream scripting language Liquidsoap and are distributed via its own Icecast server. soundingcinema.net streams in open codecs, including multichannel Vorbis, and now also streams using the new Opus codec.

Wanda & Nova deViator: RESISTANCE

Concert

RESISTANCE started as a clubbing derivative of the multimedia concert performance Frozen Images. It is a move towards the transcendence of a dancefloor, especially of its potential for conscious and deliberate bodily resistance to the force of rationality. It uses a language of electronic rhythms, repetitive patterns of metallic melody, violent funk and carefully crafted dynamics of suspense, peaks and minimalism. Away from the spectacle, it reaches closer to sweaty bodies in darkness.

Thematically and formally it speaks about sexuality at the root of human motivation: a sonic, textual and visual space that poses questions about how we relate to each other and to the world. What is the emotional architecture of this process of relating? When does power turn into domination, and under what circumstances does a loving person desire their own submission? How much fake sugar is needed to cover up one's depression, and how powerful are the visual strategies of the ideologies of capitalist commodification, such that one stops seeing another human being as they are? A hybrid performance of electronic music somewhere between electropunk, trip-hop and breaks.

Acerca del gesto como índice de la materia

Concert

The work explores gesture in the sound field and its influence on our perception, maintaining an even balance between the assembly and the choice of sounds. It belongs to the series Audio, compositions in which the relationship between recorded music and the concert hall explores the field itself. Intended for reproduction on any medium, the work is complete without the physical presence of the composer.

The schedule is a general guideline; there is no guarantee that events will take place at the announced time slots.