Conference Schedule


All times are CEST = UTC+2


Day 1 - Thursday, May/1

Main Track (ZKM_Media Theater)

Miscellaneous

10:00

Conference Welcome

Miscellaneous

10:30

Keynote

Interfaces and Hardware

11:45

A TouchOSC MIDI Bridge for Linux

Mobile applications such as hexler's TouchOSC offer a cheap and convenient alternative to traditional controller hardware for computer music programs. TouchOSC is available for Android and iOS devices and supports both OSC and MIDI, two widespread standards for transmitting control data between computer music applications. On the host side, the TouchOSC MIDI Bridge is required for MIDI support; unfortunately, it is proprietary software and only available for Mac and Windows systems. This paper presents pd-touchosc, a library of Pd externals which aims to bring most of the functionality of the TouchOSC MIDI Bridge to Linux.
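As a flavour of what the OSC side of such a bridge deals with, here is a minimal sketch (Python standard library only) of encoding an OSC message by hand; the fader address is merely illustrative of TouchOSC's default layouts and is not taken from pd-touchosc itself:

```python
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC message with float32 arguments.

    OSC strings are NUL-terminated and padded to a 4-byte boundary;
    numeric arguments are big-endian.
    """
    def pad(s):
        b = s.encode("ascii") + b"\x00"
        return b + b"\x00" * (-len(b) % 4)

    typetags = "," + "f" * len(floats)
    msg = pad(address) + pad(typetags)
    for f in floats:
        msg += struct.pack(">f", f)
    return msg

# A fader value as a TouchOSC layout might send it:
packet = osc_message("/1/fader1", 0.75)
```

Such a packet would normally be sent over UDP to the port configured in the TouchOSC app.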

Tools to make Tools

14:00

LV2 Atoms: A Data Model for Real-Time Audio Plugins

This paper introduces the LV2 Atom extension, a simple yet powerful data model designed for advanced control of audio plugins or other real-time applications. At the most basic level, an atom is a standard header followed by a sequence of bytes. A standard type model can be used for representing structured data which is meaningful across projects. Atoms are currently used by several projects for various applications including state persistence, time synchronisation, and network-transparent plugin control. Atoms are intended to form the basis of future standard protocols to increase the power of host:plugin, plugin:plugin, and UI:plugin interfaces.
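The header mentioned above is tiny: two 32-bit fields, the body size and a type URID, as defined by the extension's `LV2_Atom` struct. A rough sketch of serializing one follows; the URID value is hypothetical, since real hosts map type URIs to integers at run time:

```python
import struct

def make_atom(type_urid, body):
    """Serialize an atom: a header of two 32-bit fields in host byte
    order (body size in bytes, then a type URID), followed by the body.
    """
    header = struct.pack("=II", len(body), type_urid)
    return header + body

# An atom holding a single float body.  The URID 7 is invented for the
# example; a host would map a URI like atom:Float to some integer.
FLOAT_URID = 7
atom = make_atom(FLOAT_URID, struct.pack("=f", 440.0))
```

The fixed, self-describing header is what lets hosts route atoms between plugins and UIs without understanding every body type.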

Tools to make Tools

14:45

Muditulib, a multi-dimensional tuning library

The "Muditulib" library is introduced and explained. Muditulib is mainly a collection of C header files containing functions written for the purpose of tuning tonal music within the diatonic scale. This scale and the library's functions, along with pitch representation systems, are explained either in detail or briefly with reference to other literature. A music-theoretical background is useful, though not necessary. Alongside the core header files, an implementation for Pure Data is published. Developers are encouraged to write implementations for other synthesizers, music production platforms or any other link in the chain of the tonal music production workflow.
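Muditulib itself is a set of C headers; as a hedged illustration of what tuning within the diatonic scale involves, the sketch below derives frequencies from textbook 5-limit just-intonation ratios (not necessarily the ratios or the API Muditulib uses):

```python
from fractions import Fraction

# Standard 5-limit just-intonation ratios for the diatonic major scale
# (textbook values; not claimed to be Muditulib's defaults).
JUST_MAJOR = [Fraction(1, 1), Fraction(9, 8), Fraction(5, 4),
              Fraction(4, 3), Fraction(3, 2), Fraction(5, 3),
              Fraction(15, 8)]

def just_frequency(tonic_hz, degree):
    """Frequency of a 0-based scale degree, folding octaves."""
    octave, step = divmod(degree, 7)
    return tonic_hz * float(JUST_MAJOR[step]) * 2 ** octave

# A just major third above C4 (~261.63 Hz):
third = just_frequency(261.63, 2)
```

Working with exact ratios rather than equal-tempered cents is precisely what makes pure intervals such as the 3/2 fifth possible.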

Tools to make Tools

15:30

towards message based audio systems

The deployment of distributed audio systems in the context of computer music and audio installations is explored here to expand on previous work on complex dynamic audio networks. The idea of message-based audio systems, as opposed to stream-based ones, is investigated, with applications shown for spatial audio systems and a computer music ensemble. As a basis, the transmission of audio messages via Open Sound Control (OSC), "Audio over OSC", is demonstrated and explored in an implementation using embedded devices, showing interactions with combined audio signals sent as messages over the network.
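To make the stream/message contrast concrete, here is a purely conceptual sketch of packing a block of samples into a single OSC message as a blob; the real Audio over OSC wire format is more elaborate, and the address used here is invented:

```python
import struct

def audio_blob_message(samples, seqno):
    """Pack a block of float32 samples as one OSC message.

    Conceptual only: audio travels as discrete, self-contained
    messages (sequence number + sample blob) rather than a stream.
    """
    def pad(b):
        return b + b"\x00" * (-len(b) % 4)

    address = pad(b"/aoo/data\x00")   # illustrative address, not AoO's
    typetags = pad(b",ib\x00")        # int32 seqno + blob
    blob = b"".join(struct.pack(">f", s) for s in samples)
    return (address + typetags + struct.pack(">i", seqno)
            + struct.pack(">i", len(blob)) + pad(blob))

packet = audio_blob_message([0.0, 0.5, -0.5, 1.0], seqno=1)
```

Because every message carries its own sequence number, receivers can reorder, drop or duplicate blocks, which is what distinguishes this model from a continuous stream.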



Interfaces and Hardware

16:30

The JamBerry - A Stand-Alone Device for Networked Music Performance Based on the Raspberry Pi

Today's public Internet availability and capabilities allow manifold applications in the field of multimedia that were not possible a few years ago. One emerging application is the so-called Networked Music Performance: the online, low-latency interaction of musicians. This work proposes a stand-alone device for that specific purpose, based on a Raspberry Pi running a Linux-based operating system.

Interfaces and Hardware

17:15

Case Study: Building an Out Of The Box Raspberry Pi Modular Synthesizer

The idea is simple and obvious: take some Raspberry Pi computing units, each as a reusable synthesizer module. Connect them via a network. Connect a notebook or PC to control and monitor them. Start playing on your virtual analog modular synthesizer.

However, is existing Linux audio software sufficiently mature to implement this vision out of the box? We investigate how far we get in building such a synthesizer, which existing software to choose, what limits we hit, and what features still need to be implemented to make this vision a reality.



Workshops & Events

Workshop

15:45

DrumGizmo Drumkit Creation Workshop

Workshop

DrumGizmo is an open source drum machine.

In this workshop the participants will be guided through the recording of a drum kit using different artifacts brought to the workshop by the participants.

The recordings will then be processed using the DrumGizmo editor and finally be played live using a MIDI drum pad setup.

The intention of this workshop is to illustrate to the participants how easy and fun it is to create a drumkit.

It will also be the first time the DrumGizmo editor is showcased in public.

Kubus Concert

20:00

Through the space of crying

Concert

The project is based on tin (the chemical element) as a conceptual starting material and on the analysis of the sound of "tin cry". TIN META-SONIFICATION SYNTH is a piece of software written in SuperCollider based on two sonifications: the first is derived from the physical-chemical characteristics of tin, the second from its atomic number and atomic radius. The variation of pressure and temperature controls in real time the values of density, sound velocity, state of matter, and boiling and melting points, which drive a first synth.

The atomic radius and atomic number are the basis of an additive synthesis complex, modulated in frequency, amplitude and phase. The generated sound is spatialized in first-order Ambisonics (ATK, the Ambisonic Toolkit), according to the theory of atomic orbitals.

The resulting composition/improvisation is a journey through the sonic dimensions of tin; it creates a "bond" between the real sound of "tin cry" and an imaginary soundscape.

Kubus Concert

20:00

The Complete Series of Kecapi (2012-2013)

Concert

This is a version of the complete series of Kecapi I, II, III (2012-2013), which I recomposed for the Gaudeamus Muziekdag (Gaudeamus Jonge Componistenbal) at Rasa theatre, Utrecht, on 25 January 2014.

Kecapi is a series of electroacoustic compositions begun in October 2012 and since developed into three different versions, including the complete series finished in January 2014. The sound material of Kecapi is essentially manipulated recordings of the kecapi instrument, made in Jogjakarta.

Each version of Kecapi has a different approach and concept.

Kecapi I was selected to be premiered in the Sound Gallery during WOCMAT 2012, Taiwan.

Kecapi II received its world premiere during the WOCMAT 2013 concert and was selected as a finalist for the Taiwan International Electroacoustic Music Award.

Kecapi III was premiered at the Behind The Score concert at Codarts Rotterdam Conservatorium in 2013.



Kubus Concert

20:00

Chiral - for Piano & Electronics

Concert

The piece was written for Rei Nakamura and her Movement2Sound - Sound2Movement project. It was premiered during the IMATRONIK 2013 festival in the Piano+ concerts. The piece uses an autonomous Pure Data patch that generates all the electroacoustic sounds. It was composed using the PSClib Pure Data library developed at UNQ, Argentina, and many community-made abstractions and externals/libraries.

Kubus Concert

20:00

Cancelled:

Music for Unfinished Body

Concert

If we look at the human body closely and consider its visceral processes of self-regulation and perception, we soon realise that the body is an incomplete object. It is incomplete because it is constantly shifting between one state and another, attempting, as it does routinely, to keep itself intact and to make sense of the surrounding world. From this viewpoint, the human body is unfinished: always adapting, changing, mutating and evolving. With this work, I wanted to create a musical performance that is, like the human body, incomplete, unfinished, emergent.

Music for Unfinished Body (Marco Donnarumma, in progress) is a physical performance of emergent music. The player wears two different types of biosensors, over 4 input channels. One pair of sensors captures the muscles' acoustic energy using the Xth Sense; the other registers their electrical tension using custom hardware. A dedicated software extracts a set of features from each of the channels. The feature set is a computational model of the high-level characteristics of the player's movement, namely tension, strength, complexity, and activation. Rather than mapping the features to sound control parameters, the feature set is fed to an unsupervised machine learning algorithm. The algorithm does not merely classify the performer's physical gestures; it continuously models the variations between one gesture and the next, learning in real time, one gesture after the other. The algorithm evolves by looking for differences and emergent behaviours of the performer's muscle tissues. It looks at the unfinished movements and attempts to foresee what the performer will do next. The algorithm then creates gesture-to-sound mappings according to the recurrent patterns in the gesture variations, and uses that information to modulate a music system based on iterative scanned wave synthesis. One sine wave at a time, the performer builds an increasingly complex sonic world which cannot be planned beforehand, but rather emerges through the entanglement of physicality, effort, musical intention and computational modelling.

Kubus Concert

20:00

Haar

Concert

Decomposing sound into particles, sensitivity and masking effects in auditory perception, and friction in bowed instruments are the themes intertwined in this composition and signalled in its title: Haar, as in Pferdehaar, Haarzelle, Alfréd Haar.

The sound actuation model is the bowed instrument's, yet it has been implemented with 50-70 cm long, thick, black human hairs gently rubbed against a moving-magnet phono cartridge cantilever. The sonic characterization has afterwards been dramatically emphasized by means of a granular composition environment programmed in sclang. In this implementation, envelope, pitch, spatialization, indexing and further grain controls have been imposed by perceptual feature descriptors extracted from the very same sounds, resulting in an anamorphic and multidimensional micro-editing process.

Kubus Concert

20:00

sys_m1

Concert

sys_m1 is an eight-minute electroacoustic composition realized using systemic, a system I constructed for real-time composition, performance and sound spatialisation controlled via a physics-based visual environment. In systemic, physics-based algorithms govern the behaviour of objects in a visual system, and the movement of those visual objects controls the spatialisation, via vector-based amplitude panning, of corresponding sound objects over an 8-channel circular speaker configuration.

sys_m1 is composed from a number of recordings taken from systemic. The sonic material is a combination of pre-composed sound objects and real-time synthesized sound. By utilizing a physics-based visual system to control the spatialisation of sound, I am effectively removing decision-making from the spatialisation process.
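The vector-based amplitude panning used above reduces, for a horizontal speaker pair, to solving a small linear system and normalizing the result (Pulkki's formulation). The sketch below is generic 2-D VBAP, not code from the systemic environment itself:

```python
import math

def vbap_pair_gains(source_deg, spk1_deg, spk2_deg):
    """2-D VBAP gains for a source between two speakers.

    Solves g1*l1 + g2*l2 = p for unit vectors l1, l2 (speakers) and
    p (source direction), then normalizes so g1^2 + g2^2 = 1.
    """
    def unit(deg):
        r = math.radians(deg)
        return (math.cos(r), math.sin(r))

    (x1, y1), (x2, y2) = unit(spk1_deg), unit(spk2_deg)
    px, py = unit(source_deg)
    det = x1 * y2 - x2 * y1           # assumes a non-degenerate pair
    g1 = (px * y2 - py * x2) / det
    g2 = (py * x1 - px * y1) / det
    norm = math.hypot(g1, g2)
    return g1 / norm, g2 / norm

# Source exactly halfway between speakers at -45 and +45 degrees:
g1, g2 = vbap_pair_gains(0.0, -45.0, 45.0)
```

Over an 8-channel ring, a full implementation would first pick the speaker pair enclosing the source direction and apply these gains to that pair only.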

Kubus Concert

20:00

Spaces

Concert

Spaces explores the relation of two different sound objects in three different spaces. The inspiration to this piece stems from the mathematical theory of a "topological space", a structure that allows one to define notions such as connectedness, continuity, inside, outside, openness and closedness. The composition develops a path from the pure abstract space through the inside of an acoustically closed room into an outside scenery.

The composition is written completely in SuperCollider and runs on any up to date computer. It features aleatoric elements, so each performance differs in a subtle way from the other. The overall structure is fixed.

Day 2 - Friday, May/2

Main Track (ZKM_Media Theater)

Live Coding

10:00

Experimenting with a Generalized Rhythmic Density Function for Live Coding

A previously implemented realtime algorithmic composition system with a live coding interface had rhythm functions which produced stylistically limited output and lacked flexibility. Through a cleaner separation between the generation of base rhythmic figures and the generation of variations at various rhythmic densities, flexibility was gained. These functions were generalized to make a greater variety of output possible. As examples, L-systems were implemented, as well as the use of ratios for generating variations at different rhythmic densities. This increased flexibility should enable the use of various standard algorithmic composition techniques and the development of new ones.
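As an illustration of how an L-system can generate variations of a base figure at different rhythmic densities, here is a toy sketch; the rewrite rules are invented for the example and are not the paper's:

```python
# A toy L-system for rhythmic figures: "x" is an onset, "." a rest.
# Rewriting a base figure at successive depths yields variations of
# the same figure at increasing subdivision (rules are illustrative).
RULES = {"x": "x.x", ".": ".."}

def expand(figure, depth):
    """Apply the rewrite rules `depth` times to a figure string."""
    for _ in range(depth):
        figure = "".join(RULES[c] for c in figure)
    return figure

def density(figure):
    """Fraction of pulses that carry an onset."""
    return figure.count("x") / len(figure)

base = "x."
variation = expand(base, 2)   # a denser elaboration of the base figure
```

Separating the base figure from the rewriting step mirrors the paper's split between figure generation and density variation: swapping the rule set changes the style without touching the base material.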



Live Coding

10:45

Live-Coding-DJing with Mixxx and SuperCollider

This paper describes a suggestion for a modified dance-music DJ performance, based on common DJing techniques enriched with live coding moments, either alongside records or unaccompanied, instead of only reproducing pre-made tracks. That way, all the different possibilities offered by live coding are combined with more commercial tracks, promoting live coding while maintaining the dance-music atmosphere and opening up more improvisation possibilities for the DJ. It is also an easy and fun way to start learning programming and live coding.

All software involved is open source, and the workflow is based on the author's own. The primary intention here is to stimulate DJs to try live coding, and at the same time to help promote live coding to audiences other than experimental music enthusiasts.



Lightning Talk

11:30

Cancelled:

Integrated Audio DSP Development With FAUST, JACK, FOSSIL, Octave, MHWaveEdit, (and BASH)

We use FAUST for rapid prototyping of DSP routines which are later adapted in VHDL for hardware implementation.

Special focus here is to integrate preset management of the audio plugins with source code management (SCM). This is crucial because often during development certain presets require a corresponding source code version to sound as intended.

We use the FAUST jackgtk architecture to compile a stand-alone JACK application which saves its current state as a config file. A console UI then takes care of managing these config files and makes the necessary connections to other applications.

Second, the SCM is integrated into the compile scripts and preset management, freeing the developer from typing commit messages and automatically linking each commit to the newly created presets in a comprehensive way in the background.

Last but not least, analysis and sound-testing utilities are knit together with compilation and preset management, including playing back test sounds through the plugin, inspecting the block diagram or samples in text format, and viewing impulse and frequency responses.



Lightning Talk

11:30

DISTRHO Plugin Framework

This presentation gives a brief overview and demonstration of the DISTRHO Plugin Framework (DPF).

DPF is designed to make development of new plugins an easy and enjoyable task. It allows developers to create plugins with custom UIs using a simple C++ API. The framework facilitates exporting various different plugin formats (e.g. LV2 and VST) from the same code-base.

Lightning Talk

11:40

Cancelled:

MicroFlo: Visual programming for microcontrollers

When sound artists want to create new input devices or interactions, they often reach for their Arduino.

A popular set of tools is Firmata+PureData (or Max MSP), interacting with the microcontroller through visual dataflow programming. This requires a computer to run the custom program, and limits programs to what the Firmata firmware can do.

With MicroFlo one can visually program the microcontroller itself, making it possible to exploit the small size, low power consumption, and real-time capabilities of the microcontroller - opening up additional applications.

Lightning Talk

11:40

Audio Signal Visualization and Measurement

The authors offer an introductory walk-through of professional audio signal measurement and visualisation.

The presentation focuses on the SiSco.lv2 (Simple Audio Signal Oscilloscope) and the Meters.lv2 (Audio Level Meters) LV2 plugins, which have been developed since August 2013. The plugin bundle is a superset built upon existing tools with added novel GUIs (e.g. ebur128, jmeters, ...), and features new meter types and visualisations unprecedented on GNU/Linux (e.g. true-peak, phase-wheel, ...). Various meter types are demonstrated and the motivation for using them explained. The accompanying documentation provides an overview of instrumentation tools and measurement standards in general, emphasising the requirement for a reliable and standardised way to measure signals.

The talk is aimed at developers who validate DSP during development, as well as sound-engineers who mix and master according to commercial constraints.
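The simplest of the meter types discussed - sample-peak and RMS - amount to a few lines of arithmetic; the sketch below is generic metering maths, not code from the Meters.lv2 bundle:

```python
import math

def peak_dbfs(samples):
    """Digital sample-peak level in dBFS.  (True-peak meters go
    further and oversample to estimate peaks between samples.)"""
    peak = max(abs(s) for s in samples)
    return -math.inf if peak == 0 else 20 * math.log10(peak)

def rms_dbfs(samples):
    """RMS level in dBFS, the basis of most loudness-style meters."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return -math.inf if rms == 0 else 20 * math.log10(rms)

# One full cycle of a full-scale sine: peak 0 dBFS, RMS about -3 dBFS.
sine = [math.sin(2 * math.pi * k / 64) for k in range(64)]
```

The gap between the two readings for the same signal is exactly why standards such as EBU R128 specify which measurement a compliant meter must display.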



Lightning Talk

11:50

Exploiting Coloured Hearing for Research on Acoustic Perception

Coloured hearing is a form of synaesthesia in which acoustic stimuli are co-perceived as visual effects. In contrast to acquired or induced synaesthesia, the genuine form is thought to originate very early in life and not to change significantly over time. Therefore, finding correlations between acoustic stimuli and visual effects in genuine coloured hearing, and evaluating their visual significance even in adults, gives hints as to which acoustic properties have been significant for an individual since early life. With this snapshot of significance taken from many individuals, we hope to better identify the border between congenital perception and perception habits created by cultural influence. Knowledge about this border is essential for the composition of contemporary music, as it marks the limit beyond which musical parameters exceed trainable perception and thus become irrelevant as compositional means. So far, for a preparatory case study, we have developed a set of sounds, tested it on a single individual with genuine coloured hearing, and present first results.



Lightning Talk

12:00

Using KMI's SoftStep Foot Controller with Linux

KMI's SoftStep Foot Controller is a very compact yet incredibly powerful foot controller. Roughly the size and weight of an average USB keyboard, it features 44 pressure sensors grouped in 14 "keys", allowing for new and creative ways to control computer-based live music performances. Unfortunately, to get the full power of the device, you need to use KMI's software which, in addition to being rather unintuitive, is only compatible with Windows and Mac OS.

This lightning talk will present FooCoCo (the Foot Controller Controller), a python/pyo-based project that intends to unleash the full power of the SoftStep (possibly allowing even more than KMI's original software) in a FOSS, Linux-compatible way. The project is in the early stages of development and welcomes ideas and contributions!

Lightning Talk

12:10

Meet MOD!

Last year we showed a working MOD prototype at LAC. After a year of further development, the MOD Quadra is stable, has seen several improvements, and is being used by the adventurous musicians who believed in the concept enough to spend more than a few bucks to acquire a unit from the first batch. We are grateful to all the plugin developers who provided such diversity and quality of sound processing and made our product possible.

The goal of this lightning talk is, first, to show the latest version. After that, we would like to invite the Linux audio community to discuss ways in which our business could support further Free Software plugin development for the Linux platform. Investors' eyes shine at the idea of a plugin marketplace that would allow recurrent revenue from each device sold, and we are now looking for ways to do that while encouraging Free Software models.

Licensing

14:00

Field Report on the OpenAV Release System

This paper discusses the OpenAV release system, a new release system with, at its core, a balance between release date and financial support. The release system works by creating the software, announcing it, and releasing it after a waiting time. If money is donated to the project, the waiting time is reduced, which in turn results in an accelerated release.

This paper details the process of the OpenAV release system and discusses it in relation to other release systems. Finally, the author draws on the experience gained by OpenAV Productions.

Audio and Web

14:45

Csound on the web

This paper reports on two approaches to providing general-purpose audio programming support for web applications based on Csound. It reviews the current state of web audio development and discusses some previous attempts at this. We then introduce a JavaScript version of Csound created using the Emscripten compiler, and discuss its features and limitations. Complementing this, we look at a Native Client implementation of Csound, which is a fully-functional version of Csound running in the Chrome and Chromium browsers.

Audio and Web

15:30

BeaqleJS: HTML5 and JavaScript based Framework for the Subjective Evaluation of Audio Quality

Subjective listening tests are an essential tool for the evaluation and comparison of audio processing algorithms. In this paper we introduce BeaqleJS, a framework based on HTML5 and JavaScript to run listening tests in any modern web browser. This makes it easy to distribute the test and to reach a significant amount of participants in combination with simple configuration and good expandability.

Audio and Web

16:30

Providing Music Notation Services over Internet

The GUIDO project gathers a textual format for music representation, a rendering engine operating on this format, and a library providing high-level support for all the services related to the GUIDO format and its graphic rendering. The project now includes an HTTP server that allows users to access the musical-score-related functions in the API of the GUIDOEngine library via uniform resource identifiers (URIs). This article summarizes the core tenets of the REST architecture on which the GUIDO server is based, going on to explain how the server ports a C/C++ API to the web. It concludes with several examples as well as a discussion of how the REST architecture is well suited to a web API that serves as a wrapper for another API.



Audio and Web

17:15

From Faust to Web Audio: Compiling Faust to JavaScript using Emscripten

The Web Audio API is a platform for doing audio synthesis in the browser. Currently it has a number of natively compiled audio nodes capable of doing advanced synthesis. One of the available nodes, the "JavaScriptNode", allows individuals to create their own custom unit generators in pure JavaScript. The Faust project, developed at Grame CNCM, consists of both a language and a compiler, and allows individuals to deploy a signal processor to various languages and platforms. This paper examines a technology stack that allows Faust to be compiled to highly optimized JavaScript unit generators that synthesize sound using the Web Audio API.

Workshops & Events

Workshop

10:00

Groovin' high: Remixing music in full-sphere surround

Workshop

In this 90-minute workshop, we will take a recent track by Gabbe, look into its composition and production techniques, and then take it to the third dimension using Ambisonics.

After a short introduction to the technical side of with-height surround production, we will focus on its musical application.

Spatialisation is just another production tool which, above all else, must support the song, so the mixing process will begin with an analysis of the musical material.

While we indulge in full 3D playback in this workshop, the spatial ideas and concepts discussed here will be applicable to horizontal surround and good old stereo as well.

We will demonstrate the downward compatibility of full-sphere Ambisonics by showing you automated downmixes for 5.1, 4.0 and stereo, and discuss their advantages and limitations.

Our workstation of choice will be Ardour3. The target audience is just about everybody. We will assume basic knowledge of the fundamentals of music production and gracefully sidestep Ambisonic theory.

Workshop

11:30

Workshop on blue-environment for higher order Ambisonic spatialisation and spatial granular synthesis in Csound

Workshop

A hands-on workshop on the concept of my environment for score generation and sound production for 3rd-order Ambisonic spatialisation using Csound and blue. This environment finally makes the production of spatial music nearly as easy as conventional stereo production. An overview of its features will be given, as well as a description of how to operate it.

My latest piece is taken as an example to demonstrate what can be done with the code, and how a piece of higher-order Ambisonic music featuring spatial granular synthesis and spatial algorithmic composition becomes possible with this environment.

Participants may bring their own computers with blue and Csound installed to experiment with the code themselves.

The whole environment is arranged as a blue-project, freely available for download.



Workshop

14:00

Exploring the Zirkonium MK2 Toolkit for Sound Spatialization

Workshop

To control the ZKM Klangdom, the Institute for Music and Acoustics (IMA) is developing the free software Zirkonium for spatial composition. It is designed as a standalone application for Apple OS X and handles multichannel sound files or live audio. In 2012 the IMA started reengineering the system, taking into account the experience of its staff and guest composers. The result is a more stringent, modular, client-server-based toolkit which includes a hybrid spatialisation server, a trajectory editor and an application for creating speaker setups as its core components.

In the workshop the participants will first be introduced to the basic features of Zirkonium and to exemplary case studies. Afterwards the participants can form groups to explore a feature of their choice. Finally, the results and impressions of working with the software will be discussed with a view to extending Zirkonium with Linux-based modules.

Workshop

14:00

A realtime synthesizer controlled by singing and beatboxing

Workshop

A realtime instrument was implemented in pd-extended that translates singing, beatboxing, and both simultaneously, into a wide range of melodic and percussive synthetic sounds. Analysis, synthesis and processing algorithms were selected and integrated with a focus on expressive and reliable voice control, with an intuitive correlation between input and output.

The workshop will be an interactive combination of demo, explanation and brainstorming on future ideas and their implementation. The participants of the workshop decide where the focus will be. An improvised live performance with the instrument will be given on May/3 during the Sound Night, at 22:00 on the balcony.

Workshop

15:15

project

Workshop

A workshop on the concepts and makings of project "droning", a long-term ambient project made entirely with GNU/Linux.

Workshop

16:30

OpenAV Workshop

Workshop

Participants of the OpenAV workshop will learn about the set of OpenAV Productions tools. Software used during the workshop will include Luppp, Fabla, Sorcer, ArtyFX, and (at time of writing) unannounced programs.

Lounge Concert

22:00

Cancelled:

G.I.A.S.O - Great International Audio Streaming Orchestra

Concert

An international online orchestra developed by APO33 whose goal is to create a place for networked performance.

"Great International Audio Streaming Orchestra" uses a bidirectional multiplex platform to perform and mix different audio sources via streaming. Over the course of the performance, streams (web transmissions) are re-made in the local space using a system based on mixing multiple audio streams through spatial diffusion. GIASO creates a distributed orchestra, where musicians and composers become virtual entities that emerge from a global community of nodes -- audio explorers and performers' networks.

Lounge Concert

22:00

"Random Noise" - Concert for Sound Column Four Hands

Concert

Two players give a concert in a competitive manner. They put and rearrange colored shapes and symbols on an advertising column that slowly rotates. The surface of the column is scanned, and a computer program renders the shapes and symbols into sound, as they move under a virtual playhead cursor that is projected onto the column.

Since the players compete in an uncoordinated fashion rather than cooperate, the overall picture grows wildly. Both players struggle to dominate the system by putting as much information as possible onto the column. As their competition finally descends into chaos, the overall informational content approaches zero, resulting in random noise.



Lounge Concert

22:00

Vowelscape 1.0

Concert

Vowelscape 1.0 is a collaborative audiovisual performance by Bruno Ruviaro (Santa Clara University) and Carr Wilkerson (CCRMA/Stanford). Strangled robotic voices and flickering letters are some of the building blocks of this study on the poetic resonances of isolated vowels.

Lounge Concert

22:00

Live Performance

Concert

We are two musicians based in Ljubljana working on different projects that involve free software. We recently decided to form a duo: Mauricio Valdés is a professional composer and Jure Pohleven is a PhD biochemist. We work with free software in order to perform live improvised music. We are starting our project by playing together and getting to know each other's musical ideas. This performance is one of the musical outputs we are working on, alongside some draft ideas about how to join and link our fields (biochemistry and music). At the moment we are reaching out to the free software scene around Linux, and this conference seems like the right place to do so.

Lounge Concert

22:00

Panela de Pressão

Concert

Panela de Pressão is an improvisation over the network with Bruno Ruviaro and Juan-Pablo Caceres. Juan-Pablo will be playing live from Santiago, Chile. The two musicians started playing together in 2004 when they first met in the United States. After a long hiatus, the duo finally resumed playing last year, now mostly through network performances using JackTrip. "Panela de pressão" means "pressure cooker" in Portuguese.

Lounge Concert

22:00

intervention:coaction

Concert

The project is a live, audiovisual, beat-and-noise-based performance work. The intention is to create a symbiotic system in which live decision-making by the performer impacts both the audio and visual components of the work, but in which the audio and visual components can also interact with one another, causing behaviours that are not directly controlled by the performer. There is also an element of chaotic behaviour built into the system, causing unpredictable audio and visual outcomes.

Lounge Concert

22:00

Elektronengehirn: concert reqPZ

Concert

Audiovisual electroacoustic concert by Malte Steiner's project Elektronengehirn.

The input of piezo contact mics is taken and analyzed with Pure Data on a Linux laptop, controlling both sound and graphics. The piezos are used somewhere between percussion triggers and pickups: in some parts the piezo sound is used directly, in others only as control data for synthesis.

Day 3 - Saturday, May/3

Main Track (ZKM_Media Theater)

SurSound and Ambisonics

10:00

The Ambisonic Decoder Toolbox

We present extensions to the Ambisonic Decoder Toolbox to efficiently design periphonic decoders for non-uniform speaker arrays such as hemispherical domes and multilevel rings. These techniques include AllRAD, Spherical Slepian function-based, and modified mode-matching decoders. We also describe a new backend for the toolbox that writes out full-featured decoders in the Faust DSP specification language, which can then be compiled into a variety of plugin formats. Informal listening tests and performance measurements indicate that these decoders work well for speaker arrays that are difficult to handle with conventional design techniques. Additionally, the computation is relatively quick and more reliable compared to the non-linear optimization techniques used previously.
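At its core, any of these designs ends up as a gain matrix mapping B-format channels to speaker feeds. The sketch below is not ADT output: it is a basic first-order, horizontal-only, mode-matching-style design with hypothetical speaker angles, intended only to show the shape of the decoding step.

```python
# Illustrative only: ambisonic decoding as a matrix multiply.
# A decoder designed by a toolbox boils down to a gain matrix with
# one row of gains per speaker, applied to the B-format channels.

import math

def design_basic_decoder(speaker_azimuths_deg):
    """Basic first-order horizontal decoder: one row [W, X, Y] of
    gains per speaker (mode-matching-style, hypothetical layout)."""
    rows = []
    for az in speaker_azimuths_deg:
        a = math.radians(az)
        rows.append([1.0 / math.sqrt(2), math.cos(a), math.sin(a)])
    return rows

def decode(decoder, bformat_sample):
    """Apply the decoder matrix to one B-format sample [W, X, Y]."""
    return [sum(g * s for g, s in zip(row, bformat_sample)) for row in decoder]

# A source encoded hard-left (azimuth 90 deg): W = 1/sqrt(2), X = 0, Y = 1.
dec = design_basic_decoder([45, 135, 225, 315])
feeds = decode(dec, [1 / math.sqrt(2), 0.0, 1.0])
# The two left speakers (45 and 135 deg) receive the strongest feeds.
```

The non-uniform-array techniques in the paper (AllRAD, Slepian-based) differ in how the matrix is derived, not in how it is applied.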

SurSound and Ambisonics

10:45

WiLMA - a Wireless Large-scale Microphone Array

Everyday situations are rich in numerous acoustic events emerging from different origins.

Such acoustic scenes may comprise discussions of our fellow human beings, chirping birds, cars, cyclists, and many more. So far, no recording or scene analysis technique for this rich and dynamically changing acoustic environment exists, though it would be needed in order to document or actively shape an acoustic scene.

We know customised techniques for recording symphony orchestras with a static cast, but none that automatically readjusts to scenes with varying content.

Thus, a new recording technique that analyses the signal content, the position and the activity of all sources in a scene, is required.

We present WiLMA, a wireless large-scale microphone array: a mobile infrastructure for investigating new recording and analysis techniques.

Music Programming

14:00

Processes in real-time computer music

The historical origin of currently used programming models for doing real-time computer music is examined, with an eye toward a critical re-thinking given today’s computing environment, which is much different from what prevailed when some major design decisions were made. In particular, why are we tempted to use a process or thread model? We can provide no simple answer, despite their wide use in real-time software.

Music Programming

14:45

FAUSTLIVE: Just-In-Time Faust Compiler... and much more

FaustLive is a standalone just-in-time Faust compiler. It tries to bring together the convenience of a standalone interpreted language with the efficiency of a compiled language. Based on libfaust, a library that provides a full in-memory compilation chain, FaustLive doesn't require any external tool (compiler, linker, etc.) to translate Faust source code into binary executable code.

Thanks to this technology FaustLive provides several advanced features. For example, it is possible to replace the behavior of a running Faust application on the fly, without any sound interruption. It is also possible to migrate a running application from one machine to another, etc.



Music Programming

15:45

OpenMusic on Linux

We present a recent port of the OpenMusic computer-aided composition environment to Linux.

The text gives a brief presentation of OpenMusic and typical use-cases of the environment. We also present a short history of its development, and mention previous attempts at porting it to Linux.

The main technical challenges involved with developing the current Linux port are discussed, as well as solutions to these. We end the paper by outlining some possible areas for future work.

Music Programming

16:30

Radium: A Music Editor Inspired by the Music Tracker

Radium is a new type of music editor inspired by the music tracker. Radium's interface differs from the classical music tracker interface by using graphical elements instead of text and by allowing musical events anywhere within a tracker line.

Chapter 1: The classical music tracker interface and how Radium differs from it. Chapter 2: Related software. Chapter 3: Radium Features. a) The Editor; b) The Modular Mixer; c) Instruments and Audio Effects; d) Instrument configuration; e) Common Music Notation. Chapter 4: Implementation details. a) Painting the Editor; b) Smooth Scrolling; c) Embedding Pure Data; d) Collecting Memory Garbage in C and C++.



Miscellaneous

17:15

Closing Ceremony

Poster Presentations

Poster Session

11:30

Mephisto: an Open Source WIFI OSC Controller for Faust Applications

Poster Presentation

Mephisto is a small battery powered open source Arduino based device. Up to five sensors can be connected to it using simple 1/8" stereo audio jacks. The output of each sensor is digitized and converted to OSC messages that can be streamed on a WIFI network to control the parameters of any Faust generated app.
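To illustrate what "digitized and converted to OSC messages" means on the wire, here is a minimal pure-Python encoder for an OSC message with one float argument. The address pattern `/mephisto/sensor/0` is a hypothetical example, not the device's documented namespace.

```python
# Sketch: how one sensor reading becomes an OSC packet for UDP
# transmission. The address "/mephisto/sensor/0" is an invented
# example, not Mephisto's actual OSC namespace.

import struct

def osc_pad(b: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_message(address: str, value: float) -> bytes:
    """Encode an OSC message with a single float32 argument."""
    return (osc_pad(address.encode("ascii"))
            + osc_pad(b",f")                 # type tag string: one float
            + struct.pack(">f", value))      # big-endian float32

packet = osc_message("/mephisto/sensor/0", 0.5)
# The packet could now be sent with socket.sendto() to the OSC port
# of a Faust-generated app.
```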

Poster Session

11:30

Linux as a Low-Latency, Rock-Stable Live Performance System

Poster Presentation

At Les Chemins de Traverse we've been using Linux on stage for live looping and live sound processing in dozens of gigs for the last four years.

Although most audio-oriented Linux distributions focus more on studio setups than live performance, our experience shows that it is possible to build reliable, very low-latency, highly customized live music systems - even using ridiculously old computers that are generally thought to be completely inadequate for any kind of audio processing.

We will present our live music system - based on existing software (jack, SooperLooper, rakarrack, ...), custom software (made with ChucK, python/pyo, ...) and a bunch of "glue" scripts (bash, python, ...). We will also discuss our experience about the strengths and weaknesses of the Linux audio ecosystem for live use on stage.

Poster Session

11:30

Extending the Faust VST Architecture with Polyphony, Portamento and Pitch Bend

Poster Presentation

We introduce the vsti-poly.cpp architecture for the Faust programming language. It provides several features that are important for practical use of Faust-generated VSTi synthesizers. We focus on the VST architecture as one that has been used traditionally and is supported by many popular tools, and add several important features: polyphony, note history and pitch-bend support. These features take Faust-generated VST instruments a step forward in terms of generating plugins that could be used in Digital Audio Workstations (DAW) for real-world music production.

Poster Session

11:30

Routing Open Sound Control messages via vanilla JACK to build low-latency event translator/filter chains and map unconventional controller data to musical events

Poster Presentation

Although MIDI is an adequate tool to serve terminal sinks with musical events, it is not the ideal choice as a primary transport layer for more complex and/or expressive event streams such as multi-touch, gesture or motion-based controller data. Open Sound Control (OSC) is a better candidate and generally more adaptable for such unconventional music controllers.

Once the OSC event stream arrives on our Linux box, we would like to be able to handle arbitrary events analogously to audio and MIDI streams, i.e. with low latency, sample accuracy and dynamic routing within the JACK audio connection kit. Currently, OSC bundles may be scheduled and carry a time stamp for future dispatch; intra-host routing of OSC messages via UDP/TCP sockets is non-real-time and introduces unnecessary latency; and each terminal OSC server eventually needs to relate OSC time stamps to JACK sample time, which can introduce considerable jitter.

A least-effort approach is to inject and route raw OSC messages directly via JACK MIDI ports running on an unaltered, vanilla JACK server. From a user's perspective, JACK MIDI (contrary to its naming) can be used as a general-purpose stateless event system, as it is indifferent to the form of transported events as long as they reside inside the JACK graph.

Sample-accurate, low-latency OSC routing via vanilla JACK thus enables us to design specialized event filter chains to map unconventional controller data to musical events or build smart translators to e.g. bridge between MIDI and OSC, for both of which we bring experimental implementations.
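The timestamp problem mentioned above can be sketched in a few lines: an OSC time tag (NTP format, seconds since 1900 plus a 32-bit fraction) must be mapped to a frame offset inside the current JACK period. The function names and the clamping policy below are illustrative assumptions, not JACK API calls.

```python
# Sketch of relating OSC time tags to sample time. Hypothetical
# helper names; real code would query the JACK server for the
# period start time and sample rate.

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def osc_timetag_to_seconds(timetag: int) -> float:
    """Split a 64-bit OSC time tag into Unix-epoch seconds."""
    seconds = (timetag >> 32) - NTP_EPOCH_OFFSET
    fraction = (timetag & 0xFFFFFFFF) / 2**32
    return seconds + fraction

def dispatch_frame(event_time_s, period_start_s, sample_rate, period_frames):
    """Frame offset inside the period at which to dispatch the event,
    clamped so late events fire immediately rather than being dropped."""
    offset = round((event_time_s - period_start_s) * sample_rate)
    return max(0, min(offset, period_frames - 1))

# An event 1.5 ms into a 256-frame period at 48 kHz lands on frame 72.
frame = dispatch_frame(10.0015, 10.0, 48000, 256)
```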

Poster Session

11:30

Latency Performance for Real-Time Audio on BeagleBone Black

Poster Presentation

In this paper we present a set of tests aimed at evaluating the responsiveness of a BeagleBone Black board in real-time interactive audio applications. The default Angstrom Linux distribution was tested without modifying the underlying kernel. Latency measurements and audio quality were compared across the combination of different audio interfaces and audio synthesis models. Data analysis shows that the board is generally characterised by a remarkably high responsiveness; most of the tested configurations are affected by less than 7ms of latency and under-run activity proved to be contained using the correct optimisation techniques.
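The paper's measurement rig is not reproduced here, but round-trip latency tests of this kind typically locate a played test signal in the recorded loopback. A minimal sketch with synthetic signals, finding the delay as the lag that maximizes cross-correlation:

```python
# Sketch of a round-trip latency measurement: play a click, record
# the loopback, and find the delay in samples. Signals are synthetic;
# this is not the test setup used in the paper.

def xcorr_delay(sent, received):
    """Lag (in samples) maximizing the cross-correlation."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(received) - len(sent) + 1):
        val = sum(s * received[lag + i] for i, s in enumerate(sent))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

sent = [0.0, 1.0, 0.5, -0.3]
received = [0.0] * 100 + sent + [0.0] * 20   # 100 samples of delay
delay = xcorr_delay(sent, received)
# At 44.1 kHz, 100 samples would be about 2.27 ms of round-trip latency.
```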

Poster Session

11:30

Music Feature Extraction and Clustering with Hadoop and MARSYAS

Poster Presentation

Many examples of large-scale music data mining use pre-extracted audio features in textual format as their input, such as the Million Song Dataset. But these datasets must originally have been somehow extracted from the raw audio, and this process needs to become more scalable to handle larger datasets more efficiently. This project uses Apache Hadoop to scale MARSYAS (Music Analysis, Retrieval and Synthesis for Audio Signals) into the cloud. Additional work has been done towards a recommender system, using Apache Mahout for scalable fuzzy k-means clustering of the extracted audio features.
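Mahout's distributed implementation aside, the heart of fuzzy k-means is the soft membership rule: each point belongs to every cluster to some degree. A minimal pure-Python sketch for 1-D feature values (the formula is the standard one; the data is invented):

```python
# Fuzzy k-means membership, u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1)),
# sketched for scalar features. Not Mahout code.

def memberships(point, centers, m=2.0):
    """Degree of membership of `point` in each cluster center."""
    dists = [abs(point - c) for c in centers]
    if any(d == 0 for d in dists):            # point sits on a center
        return [1.0 if d == 0 else 0.0 for d in dists]
    return [1.0 / sum((dists[i] / dists[k]) ** (2 / (m - 1))
                      for k in range(len(centers)))
            for i in range(len(centers))]

u = memberships(0.25, [0.0, 1.0])
# The point is closer to center 0, so its membership there dominates;
# memberships always sum to 1.
```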

Workshops & Events

Workshop

10:00

Qstuff* past, present, future and beyond

Workshop

The proposed workshop is a continuation of the previous ones presented at LAC2013@IEM-Graz, as a hands-on discussion and demonstration of the most important as well as the more esoteric aspects of the Qstuff* software suite. Mainly focused on the Qtractor audio/MIDI multi-track sequencer project; attending developers, composers and users in general are invited to voice and discuss their concerns about how the Qstuff* suite of audio/MIDI applications, and Qtractor in particular, may apply, evolve and improve to fit each one's purposes better.

Workshop

14:00

Audio measurements using free software and some simple hardware

Workshop

Many users of audio software at some point face a problem that requires understanding things like signal-to-noise ratio, dynamic range, operating levels, etc. For example, someone making recordings of acoustic instruments may want them to be less noisy. Does he or she need a better microphone, or another preamp? What does it mean if a microphone has an S/N ratio of X dB? At the same time, the published specifications of most hardware are incomplete at best, and deliberately misleading at worst.

The workshop has two major aims: to provide a theoretical introduction that enables the participants to interpret and correctly understand audio specs, and to show how to measure and verify the real performance of some hardware using free software.
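As a taste of the theory part: a spec like "S/N ratio of 94 dB" is a logarithmic restatement of an amplitude ratio. A small sketch of the conversion in both directions:

```python
# Decibels vs. amplitude ratios, the arithmetic behind audio specs.

import math

def db_from_ratio(ratio):
    """Amplitude ratio -> decibels (20 * log10)."""
    return 20 * math.log10(ratio)

def ratio_from_db(db):
    """Decibels -> amplitude ratio."""
    return 10 ** (db / 20)

# A 94 dB S/N ratio means the signal amplitude is roughly 50,000
# times the noise floor:
r = ratio_from_db(94)
```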



Kubus Concert

20:00

Xaev1uox

Concert

The piece works with physical modelling and was finished in January 2014. Xaev1uox was made with SuperCollider on Fedora.

Kubus Concert

20:00

Out of the Fridge

Concert

Out of the Fridge was composed as a ballet insertion for a new staging of Christoph Willibald Gluck's opera "Il Parnaso Confuso", premiered at the Schönbrunner Schlosstheater in Vienna in 2011.

Alienated sounds from a refrigerator, such as the fridge's buzzing, shaking ice cubes or the clicking noise inside the freezer, are the main source material for the piece. (In the ballet, the refrigerator was an important part of the scenery.) All sound processing was realised with the language Csound.

The first part of the work is about constructing and deconstructing, with a clear harmonic structure and some rhythmic elements. The second part dissolves the harmonic structure, it has more confusion and a processed collocation of the material.

Kubus Concert

20:00

Divertimento de Cocina

Concert

Divertimento de Cocina ("Kitchen Divertimento" in English) stages several "kitchen scenes", with sounds and rhythms layered, controlled and triggered by a live performer. Kitchen utensils are mixed, transformed and orchestrated in real-time through a LaunchPad controller and a custom set of SuperCollider classes and programs. Initially extremely simple rhythms get progressively more complicated as they are layered together in increasingly thick textures in the opening section of the piece. While the performer walks you through different soundscapes, the initial rhythms form the backbone and guide for the rest of the piece. The SuperCollider program also spatializes all sounds under the control of the performer in a 3D soundscape that can be diffused through an arbitrary number of speakers (the original soundstream is internally generated in Ambisonics, with at least 3rd-order periphonic resolution).



Kubus Concert

20:00

rooms without walls

Concert

"rooms without walls" was composed in 2012 for a 4 x 4 array of loudspeakers built at the Platz der Weltausstellung (Expo 2000) in Hannover, Germany.

The arrangement of the 16 speakers / light steles, in its sculptural appearance in Hannover, is strictly geometric. In an abstract way it recalls the geometric spatial divisions of baroque gardens, such as can still be found today in the Royal Gardens of Hannover.

In the composition "rooms without walls", the corners of the grid define 14 overlapping square areas of different size and position: the great square formed by the entire array contains 4 medium and 9 small squares. These 14 "rooms" are implemented with a spatial audio technique (Ambisonics) as sound rooms, so that it is possible to place different sounds in each room and move them in circular orbits simultaneously.

The purely electronically generated sound material (Csound) on which this composition is based has been designed, in its spectro-morphological development and structure, to contrast with the existing sound of the public space. The sound events relate to each other through both contrast and similarity, in varying development. 3rd-order Ambisonic spatialization was done with Csound.

Due to the unique setting of 4 x 4 loudspeakers in the city of Hannover, an 8-channel concert version of the piece is played.

Kubus Concert

20:00

Music

Concert

A thin light behind the fog for electronic sounds.

Kubus Concert

20:00

Cancelled:

Improvisation

Concert

Free improvisation with Bernardo Barros (electronics, SuperCollider on Linux), Mário del Nunzio (electric guitar) and guests.

Sound Night

22:00

Cancelled:

ENTROPIE

Concert

ENTROPIE is a noise and projection performance by Wolfgang Spahn.

Both sound and projection are based on different analogue and digital machines developed by the artist. Each system simultaneously generates structured noise as well as abstract light patterns.

The invention of moving pictures went along with an artificial separation of sound and visuals. ENTROPIE merges them again: it makes the data stream of a digital projector audible and gives an audio-visual presentation of the electromagnetic fields of coils and motors.

Sound Night

22:00

Algorave Improvisation

Concert

This performance of improvised programming generates algorave, danceable percussive music emphasizing generative rhythms. The rapidly changing algorithmic bass music is intended to stimulate dancing.

Using a custom live coding system called Conductive with the Vim text editor and GHCi, the Haskell language interpreter, Bell manages multiple concurrent processes to trigger a SuperCollider-based software sampler loaded with thousands of audio samples. At least two methods of rhythm pattern generation will be used: stochastic methods and L-systems. Patterns from both are then processed to generate variations with higher and lower density, which are then chosen at will during the performance. The performance also involves programming to control other parameters. The programming activity is projected for the audience to see. That output has been refined for greater clarity for the programmer and audience about the operations being performed. The performance is 100 percent Linux!
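The actual system is written in Haskell around Conductive, but the L-system method of rhythm generation mentioned above can be sketched in a few lines. The rewrite rules here are invented for illustration; "x" marks a hit and "." a rest.

```python
# Toy L-system rhythm generator, in the spirit of the performance's
# pattern generation (the real system is Haskell/Conductive; these
# rules are made up for illustration).

def expand(axiom, rules, generations):
    """Rewrite every symbol in parallel, once per generation."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(c, c) for c in s)
    return s

rules = {"x": "x.x", ".": ".."}     # hit -> hit/rest/hit, rest -> two rests
pattern = expand("x", rules, 3)
# The result is a self-similar string of hits and rests that could be
# mapped onto sampler triggers, with denser or sparser variants made
# by changing the rules or generation count.
```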



Sound Night

22:00

Tiny Boats - Burn in the Sun

Concert

A song composed by two people, recorded, edited, mixed, and mastered in Linux with Harrison Mixbus at Art City Sound in Springville, UT.

Sound Night

22:00

Locum Meum

Concert

A new remake of an old piece I originally recorded in 1997, now produced with a new DAW and VST plug-ins. Melodic, danceable music: the piece combines phrases and patterns from dance and house music with a melodic synthetic choir and orchestral passages.

Sound Night

22:00

Unsound Scientist: Selected Works

Concert

Unsound Scientist: Selected Works is a collection of about 10 pieces, composed mostly during 2013. It represents the learning process of using Linux tools for music production and compositional exploration in general. All works were made entirely with Linux software, mainly LMMS and Ardour along with ZynAddSubFX, amsynth, Hydrogen, TAL Noisemaker, Qtractor, Phasex, synthv1 and other tools packaged with the KXStudio distribution. Live instrumentation along with software and hardware synthesizers was used as well.

Sound Night

22:00

Self-luminous2 -Unbalance

Concert

Self-luminous 2 is a little different from the first one, because the control method has become easier to take into the show. I added an electronic compass to my instrument, so when I move it in different directions, accidental sounds (for example, a sound suddenly disappearing, or other sounds covering it) come from the speakers. Sometimes it is "dangerous". Part of my performance is improvised: when I play, it is interesting that the performance is intervened in violently by the "self".

My instrument was built by Pure Data and Arduino.

Sound Night

22:00

Superdirt²

Concert

Superdirt² - fascinating electro beats mixed with virtuosically performed cello lines, resulting in a never-before-achieved danceability! With Ras Tilo on synthesizers and Käpt'n Dirt on cello, it provides a musical experience situated between drum'n'bass, jungle, dub, dubstep and far beyond...

Sound Night

22:00

The Infinite Repeat

Concert

A musician with over 20 years of experience and a computer with Linux. That's what it boils down to. The result: conventional, decent songwriting with an eclectic tinge, born of the choice not to walk the trodden paths, coupled with an autodidactic background, an outspoken personal taste and an open-minded world-view.

Sound Night

22:00

Cancelled:

Against All That Was: live performance stereo PA

Concert

The system mainly deals with machine listening, complex audio/control signal routing, high-level data mapping, and various wave-shaping techniques using the computer. Instead of aligning and layering musical "events" in a linear fashion or by a repetition process (Reich), or based on an arbitrary random process (Cage), Jae Ho Youn works on a "cause and effect" mechanism to implement in his compositional practice. He is especially inspired by the Buddhist notion of causality (pratityasamutpada): everything arises in dependence upon multiple causes and conditions; nothing exists as a singular, independent entity.

Such a system, at least in its first version, has been developed using computer programming and DSP techniques, allowing Jae Ho Youn to write various "units" that generate events depending on their parameters while actively communicating with other units. Everything modifies everything in such an environment, and as the complexity grows, the result becomes highly unpredictable.

He expects to develop the whole environment far enough that it becomes "self-sustainable".

"Against All That Was" is the title of the event, given by the composer himself, trying to reject everything, including and especially his own past compositional practice, which was based on aesthetic/stylistic decisions and worked by "making parts, then assembling them"...

Sound Night

22:00

The WOP Machine

Concert

One man, 160 oscillators. An improvised live performance with a realtime synthesizer controlled by singing and beat-boxing, the subject of the workshop on May/1 14:45 in the Workshop-space.

Sound Night

22:00

Cancelled:

visinin - modern electronic club music

Concert

Live performances based around monome hardware using renoise and ardour.

Sound Night

22:00

Cancelled:

dots

Concert

Sound Night

22:00

Turbosampler

Concert

Turbosampler is an audio-video synchronized mashup: hard sound, freeze, mathematics, stolen loops, freestyle, improvisation, realtime generated compositions. Heavy noise / sweet disco.

Day 4 - Sunday, May/4

11:00

Excursion

Daily Events / Exhibitions


Art installations are exhibited at the media art space on the ZKM_Music Balcony.

The exhibition is open from 14:00 to 18:00 every day of the conference.

indusium

indusium is a work-in-progress; an audiovisual exploration of additive compositional process. Groups of material are added to and extended by step, the visuals reflecting the changing timbral colours caused through unexpected crossovers in pitch and rhythm.

Lighterature reading - luminoacoustic installation

Lighterature Reading: Chapter 12 is an ambient audio/visual luminoacoustic installation. Chapter 12 consists of nine solar panels that convert light and video projection into sound images.

The composition is seventeen minutes long and repeats in a loop. Its duration was chosen as the ideal length of one side of a twelve-inch vinyl LP record.

Besides the basic composition, which is fully programmed by the authors, the work also includes audience intervention with different types of hand lamps, which visitors can choose at the entrance. Each visitor who wishes to intervene in the work can also enter a personal email address on the computer near the entrance to receive a snapshot of the composition with his or her intervention as an audio recording via email. The snapshot is three minutes long, the ideal duration of one side of a seven-inch vinyl single.

CHIMAERA - the poly-magneto-phonic theremin

The Chimaera is a touch-less, expressive, polyphonic electronic music controller based on magnetic field sensing. An array of linear hall-effect sensors, together with the vicinity above them, makes up a continuous two-dimensional interaction space. The sensors are excited by neodymium magnets worn on the fingers. The device continuously tracks the position and vicinity of all magnets present along the sensor array and produces event signals accordingly. It is a kind of mixed analog/digital offspring of the theremin and trautonium. These general-purpose event signals are transmitted via Open Sound Control to a Linux host running SuperCollider, translated into musical events and rendered to audio according to ever-morphing mappings in response to the visitor's input dynamics.

Visitors are not only free to interact with the instrument, they are also encouraged to hook up their own notebooks as NetJack2 slaves to intercept and process the original audio/MIDI/OSC data streams.
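How a discrete sensor strip yields a continuous position can be sketched with a weighted centroid over sensor magnitudes; this is a common technique for such arrays, not the Chimaera's firmware, and the readings below are made-up values.

```python
# Sketch: continuous position and vicinity from a discrete array of
# magnetic-field readings, via a weighted centroid. Invented data,
# not Chimaera device output.

def track(readings):
    """Return (position, vicinity): centroid sensor index (fractional)
    and total field magnitude, or (None, 0.0) with no magnet present."""
    total = sum(readings)
    if total == 0:
        return None, 0.0
    position = sum(i * r for i, r in enumerate(readings)) / total
    return position, total

# A magnet hovering between sensors 2 and 3, slightly nearer sensor 2:
pos, mag = track([0.0, 0.1, 0.8, 0.6, 0.1, 0.0])
# `pos` falls between the two strongest sensors, at sub-sensor
# resolution; `mag` grows as the magnet approaches the strip.
```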

Cancelled:

Septic v1.0

One at a time, visitors kneel down on a pedestal embedded with a computer, resting their head on the pedestal. The computer is filled with digital viruses gathered from the net. The viruses are transduced into infrasound and low-frequency oscillations that mechanically resonate the visitor's bones and skull by means of high-power skeletal resonance. As the viruses spread inside the visitor's body, the unique nature of the raw data they are composed of alters the rhythm of the body's internal organs.

"Septic" was commissioned by Transmediale / Art Hack Day for the recent 2014 exhibition at the Haus der Kulturen der Welt, Berlin.

The schedule is a general guideline; there is no guarantee that events will take place at the announced timeslot.