This program is current as of 3 September 2018 and is subject to change.
The KISS2018 events will take place at the orange-flagged venues shown in the map below:
Click on the name of a presentation to view the program notes or abstract for that presentation.
Thursday 6 September 2018
What do you do when your collaborator dies? Death, like birth, is fundamentally an altering state. We go from Here…. to…. well…. somewHere. With Tesla Circles the Moon, Ella Gant and Ben Salzman choose to honor the collaborative legacies of samNella (Ella Gant and Sam Pellman) and Pellman/Salzman (Sam Pellman and Ben Salzman), by altering our relationship as we launch the new team salzmaNella. We are simultaneously a celebration of Sam Pellman and all that he altered during his brief orbit on Earth, and an alteration of the next generation of collaborative possibilities for two of those he left behind. In this virtual reality piece, Sam, Ella, and Ben orbit space together in an endless loop of processed sounds and images. Sam Pellman is driving, and he would approve of this message.
A fantasy of our solar system’s formation, and a tree’s altered states from creative chaos. Limited performance time will allow only the Kyma, poetry, and 3D video shown in the link below.
3D animator and artist, with work in toy design, broadcast commercials, app design, and games.
https://www.stlmag.com/health/anna-lum/
https://wetransfer.com/downloads/935c0dc2ac21ede7bbe964a2ea395eeb20180725153252/f8e4b4069c9b7048c1e52c9574b0963420180725153252/62e8cc
Piano, Kyma, video
This new composition will incorporate spoken text from Kathleen Dean Moore’s Annenberg Scholar talk at Principia College. Her words, used by permission, address the immediate need to solve climate change as a moral imperative. With the grace of a writer, she builds a compelling argument that it is critical we act now. Her words are urgent and powerful. The composition will draw upon excerpts from her talk, which will be performed live as samples in Kyma using a Wacom pen controller. The piano and electronics will be improvised in tandem with the showing of an eight-minute film, in a direct cinema style, of the coal-fired power plant that stands on the opposite bank of the Mississippi River across from the college.
Kathleen Dean Moore is an author, moral philosopher, and environmental advocate. She has said, “For the sake of the beautiful, innocent children of all species, I stand against the corporate plunder of the planet.” Her books include Great Tide Rising: Finding Clarity and Moral Courage to Confront Climate Change and Moral Ground: Ethical Action for a Planet in Peril, among many others. It is a privilege for me to be given permission to use her words to help communicate the importance of these issues. https://www.riverwalking.com
“Nature is beautiful, when it imitates art.”
LUC FERRY, The New Ecological Order
After living all our lives in cities large and small, the pervasive absence of noise pollution was the first thing we noticed when we moved to The Southwest (… that and the light, and how it emerged and receded over endless space). In our urban realities, we had been incessantly surrounded by sonic distractions. Silence was hard, if not impossible, to come by. In The Southwest… Silence rose to meet us as a compositional element. Standing in the open air, listening to the night, gazing at the stars has been as profound as a deep listening state.
The more you listen to the silence, the more it reveals the things that you had not noticed. Sound particles, distant sounds, insect dances, air moving leaves, grass, tree branches, and animal calls galore. All such compositional elements emerge into soundscapes; they emerge from the quiet and into focus, into sonic frame. In true Audio Vérité sense, Nature is a unitary phenomenon, a deeply interwoven organism in which every part contains every other part within itself.
“Soundwalks Into Midnight” takes environmental sound recordings as its starting point, its base material. Mostly empty spaces (sometimes very large ones) will be sonically captured. These recordings will be organized as sound events. Performers will follow a score to interact with these elements through gestures (using several controllers) and thereby create unique events. Synchronically, Kyma will create human/environment feedback loops (aka “Echo-systems”) between sound events, resulting in a morphology of compositional forms.
“Soundwalks Into Midnight” is, in essence, an instance of environmental sound art. Performers play sounds as instruments in a methodology that is nothing less than a dialogue between human being and Nature. Kyma is both a conductor triggering sequences and an agitator responding to performers and sometimes distorting their articulations. This piece directly connects musical systems and vital energy changes in the real world. The listener thus comes to understand the composition as a shared process of making music in the world.
Origin is an interactive composition based on audio recordings of various Chinese percussion instruments. Using the pen and touch controls of a Wacom tablet to control the sound-producing algorithms in the Kyma creation environment, my musical ideas unfold as musical journeys that are both dramatic and nuanced as the sonic material develops.
The Lighted Windows
This is the third composition of a children’s trilogy.
Story 1: The Ocean Thief
Story 2: The Beautiful Feather
Story 3: The Lighted Windows
Unhappy at home, a young girl escapes to walk her street at night and wonder about the different lives being lived behind the lighted windows.
The Lighted Windows is about longing, imagining, and how one sees one’s life.
Friday 7 September 2018
TBD
Tower of Voices is a National Memorial commemorating the 40 passengers and crew of United Flight 93, which was hijacked and crashed in Shanksville, Pa., on Sept. 11, 2001. The 93-foot-tall chime structure is unlike any other in the world, and includes 40 massive chimes representing the 40 people killed. The project is currently under construction and will be dedicated at the September 11 ceremony in 2018. Sam Pellman created the pitch design of the wind chimes for the memorial. With Sam’s passing in 2017, Jon Bellona and Ben Salzman will talk about their late mentor’s work and vision for Tower of Voices, including how Kyma played a role. Tower of Voices will be one of the most important works of Sam’s legacy. http://bit.ly/tower-of-voices
The Lighted Windows is the third in a trilogy of children’s story-compositions. The first story-composition, The Ocean Thief, premiered at the KISS 2016 Conference in Leicester, England. The second story-composition, The Beautiful Feather, premiered at the SEAMUS 2018 Conference in Eugene, Oregon. This third and final story, The Lighted Windows, is produced in similar fashion as the other two, integrating spoken word with sound manipulation.
Having recently modulated from a university academic working in London to a freelance sound designer living on the Isle of Man, Charlie will present a creative discussion centered on using Kyma to alter the state of original, environmentally influenced recordings, transmogrifying them into artistically and technically intriguing pieces. The source recordings draw from natural, mechanical, and human sources.
– How important is the starting point & source material when considering the final outcome?
– Can Charlie get out of his comfort zone and illuminate & explore more personally uncharted Kyma territory?
– How will his new surroundings and mindset impact the process and destination?
Roughly 5–10 mins of audio, 12–15 mins of slides, and 5 mins for questions.
This presentation details the inner workings of the artist’s real-time quadraphonic looping, processing, and spatialization environment which was designed and deployed with Kyma with respect to the spatialized intricacies of the quadraphonic output of the Buchla 200e modular music system. MIDI control via Lemur software and dedicated MIDI hardware allows the performer to integrate Kyma’s real-time environment and the Buchla as part of a single and organic hybrid instrument, a large musically immersive ecosystem, if you will.
An exploration of hypnotic techniques and syntactical ambiguity to produce altered states with ambient music.
Baion draws its inspiration from several distinct sources including northern Japanese shamanism, noise music, isochronic tones, and one-button games.
Baion is performed on “The Catalyst,” a custom interface that facilitates both performer and audience transition into an altered state. Kyma and Unity communicate via bidirectional OSC, enabling interactive, adaptive audio and visuals.
Like many young couples, Peter and Margaret live in a small, cramped apartment, have fun together, eat and sleep, fight and argue, and try to communicate. Unlike most couples, their apartment is filled with water and Peter is a dolphin!
Based on the groundbreaking research of Dr. John C. Lilly, this operetta explores both the sonic and visual aspects of his 1965 dolphin cohabitation experiments conducted in St. Thomas, as well as other striking details from Dr. Lilly’s lifelong research into altered states.
The production will be an interdisciplinary dolphin-operatic presentation, including synchronized projected video, moving lights, and the extensive use of KYMA for musical performances and for synthesizing dolphin vocalizations and water sounds.
A shadow’s shadow depends on the shadow, just as the shadow depends on the man who casts it. Free will does not exist, no matter how much we believe that it does.
The music applies layers of Markov chains, symbolizing layers of the conscious and subconscious. These chains influence the sound synthesis and the control interface between the live performer and Kyma.
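The layered-chain idea can be sketched in a few lines. The following is an illustrative Python sketch only: the state names, transition probabilities, and two-layer structure are assumptions, not the piece's actual implementation.

```python
import random

def step(chain, state):
    """Advance a first-order Markov chain one step.
    `chain` maps each state to a list of (next_state, probability) pairs."""
    r, cumulative = random.random(), 0.0
    for nxt, p in chain[state]:
        cumulative += p
        if r < cumulative:
            return nxt
    return chain[state][-1][0]

# Two hypothetical layers: a slow "subconscious" chain chooses a mode,
# and a "conscious" chain (one per mode) chooses a control register.
subconscious = {"calm":  [("calm", 0.8), ("tense", 0.2)],
                "tense": [("tense", 0.6), ("calm", 0.4)]}
conscious = {"calm":  {"low": [("low", 0.7), ("mid", 0.3)],
                       "mid": [("low", 0.5), ("mid", 0.5)]},
             "tense": {"low": [("mid", 0.9), ("low", 0.1)],
                       "mid": [("mid", 0.5), ("low", 0.5)]}}

mode, register = "calm", "low"
trajectory = []
for _ in range(16):                 # each value could drive a Kyma parameter
    mode = step(subconscious, mode)
    register = step(conscious[mode], register)
    trajectory.append((mode, register))
```

In a setup like this, the subconscious layer biases which conscious transition table is active, so the surface-level control stream inherits slower structure from below.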
We will demonstrate how we integrate KYMA and the Pacarana into our Cinebrain Hypervisor for experimental show control in our performance.
I will discuss the connection of shamanism and Japanese music, particularly focusing on the tradition of Tsugaru Shamisen and the contemporary practice of avant-garde musician Keiji Haino. I’ll explain how these ideas influenced my new work, Baion (and my work from KISS2013, Shin no Shin), and how I used Kyma as a conduit for these traditional ideas (showing specific Kyma Sounds and Timeline techniques).
An exploration of hypnotic techniques, specifically ambiguity, to induce altered states in audiences.
Sharing my experiments with different ways to apply Markov chains to sound synthesis and control interfaces.
Powerstation, for piano, Kyma, and video, presents live improvisation methods in a composition focused on environmental issues, using excerpts, by permission, from Kathleen Dean Moore’s talk, “The World in our Hands: The Environmental Emergencies are a Call to Imagine.”
A Timeline with over 15 Sounds, in series and parallel, is controlled by a Wii remote and Nunchuk, coordinating the action of the video and poem.
N/A
The Kyma performance system I have been creating uses a 3U “Eurorack” modular system, along with a CV-to-MIDI converter and a DC-coupled audio interface (for signals that shift from DC to audio rate). One of the advantages of combining a Eurorack synthesis system with a Pacarana is that, like the earlier Moog and Serge modular synthesizers, and like Kyma itself, every input is capable of receiving both control- and audio-rate voltages. It is possible to create complex feedback systems in which sounds are received, analyzed, processed, split, output, and fed back into the Eurorack system as an audio/control signal with very little delay. Kyma is therefore ideal for prototyping designs that can be fully integrated into the Eurorack system. My presentation will focus on potential use cases, the benefits of both systems, how they may be used together, and a detailed examination of the various means by which both systems can be incorporated.
Fantasies of the Mind is an interactive composition for piano and real-time processing in the Kyma sound specification environment. Fantasies of the Mind is not a normal piano and electronics composition in any ordinary sense. Instead of a performer “playing the piano” in a traditional manner, a performer “pours the sounds of the piano into the mouth of Kyma,” which digests and transforms the piano sounds to create a new sonic tapestry that is stretched across time and scattered throughout the multi-channel listening environment. Some of the sonic transformations that Kyma executes are pre-determined, while others produce indeterminate results. This means that the performer must respond in an interactive way to the musical actions of Kyma, thus making each performance a new sonic experience.
The title derives from the fact that the overtone structure of the clarinet has only odd harmonics (frequencies that are 1, 3, 5, 7, etc. times the fundamental pitch played by the performer). It shares this characteristic with square waves and triangle waves, so the “ecosystem” of this performance is dominated by odd harmonics throughout. All sounds in the accompaniment have some connection to this sonic signature or to non-traditional clarinet noises. Among the techniques utilized are synthesized triangle and square waves, filtering, granular synthesis, live processing, sampling, and physical modeling. The Euclidean rhythmic engine in the final section of the piece consists of bass clarinet samples that I recorded in my first-ever encounter with the instrument as a “performer,” having had absolutely no prior experience playing any reed instrument.
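The odd-harmonic signature described above can be illustrated with a few lines of additive synthesis. This sketch is not part of the piece; it simply sums odd partials only, where 1/n weights approximate a square wave and alternating 1/n² weights approximate a triangle wave.

```python
import math

def odd_harmonic_wave(freq, t, partials=8, triangle=False):
    """Sum only odd harmonics (n = 1, 3, 5, ...), as in a clarinet spectrum:
    1/n weights give a square-ish wave; alternating 1/n^2 weights give a
    triangle-ish wave."""
    total = 0.0
    for k in range(partials):
        n = 2 * k + 1
        amp = ((-1) ** k) / n ** 2 if triangle else 1.0 / n
        total += amp * math.sin(2 * math.pi * n * freq * t)
    return total

# Sample one period of a 1 Hz "square-ish" wave at 64 points:
samples = [odd_harmonic_wave(1.0, i / 64) for i in range(64)]
```

Because every partial is an odd multiple of the fundamental, both waveforms share the clarinet's spectral "ecosystem" even though their amplitude weightings differ.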
This solo excerpt from “Ghosts in the Uncanny Valley II” is drawn from a ~40-minute composition for improvising acoustic quartet and live electronics programmed in Kyma. The electronics analyze, manipulate, and expand upon the sounds of the acoustic instruments in real time, creating an electronically “extended” (and altered) quartet. The piece builds upon the composer’s earlier “Ghosts in the Uncanny Valley I (2015),” which encompassed an acoustic quartet composition that was transformed into a fully electronic work using a Serge modular system.
Turritopsis dohrnii, also known as the “Immortal Jellyfish,” is a small, biologically immortal species. It is capable of reverting to a younger version of itself under certain environmental circumstances after having reached adulthood. It does this through a process called transdifferentiation, in which the jellyfish literally alters the state of its cells and reprograms them into new ones. This idea of reversion and transdifferentiation is the key concept applied in the composition.
Fue Sho is for solo flute and Kyma. The work draws on the Japanese musical tradition of Gagaku for its compositional framework, specifically the Manzairaku. Gagaku, and especially Manzairaku, displays rich timbral qualities: shimmering layers of simultaneously complex and yet simple tones form an ancient yet post-modern music. The Sho provides the harmonic infrastructure for the Manzairaku, using a specific scale that reveals several chords. This composition, Fue Sho, is written for flute (Fue) with the aid of live electronic processing. Fue Sho exhibits rich timbral fields characteristic of the traditional use of the Sho to produce finely nuanced multiphonic progressions.
Saturday 8 September 2018
Is there an ecology of sound? Designing algorithms to produce sound is not just a concrete, mathematical, and engineering task; it is simultaneously finely nuanced work that produces the temporal and timbral characteristics fundamental to how we receive sounds, to their emotional power, and to the creation of spatial perception and of our place within the rich, enveloping sound world produced. The sounding material has mass, density, texture, velocity, acceleration, surface, viscosity, and so on: material properties that define the sensation of our reception of sound and music but are not fully encapsulated in traditional music education or the strictures of music theory.
Kyma is itself an ecosystem, fed from outside by data, by touch, by context, and producing rich, multilayered sonic experiences. The “recombinant synthesis” tag points us clearly to a dynamic space of evolution, re-synthesis, context, and interplay.
This talk will unpack some of these ideas and illustrate them with works that range from sound installation, to live performance with processing of acoustic instruments, to the research of the Acoustic Ecology Lab at Arizona State University, to a new work I have been developing at IRCAM this year titled Future Perfect, which implements sound spatialization techniques across the audience’s smartphones in addition to ambisonic spatial audio reproduction and a virtual-reality visual world.
Pure randomness is noise. Pure sameness is silence. Somewhere in the vast between is music. Cryptography is designed to transform information into noise. Noise is not interesting. Maybe the information is also boring. What if we deliberately weakened crypto to transform simple information? Could it get us to a place between silence and noise? I explore a modified and simplified version of an American WWII cryptographic device (the M-209, a purely mechanical device, unlike the more famous German electro-mechanical Enigma machine). Different methods of building the state machine and interacting with it will be explored.
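A pin-wheel state machine of this family can be sketched compactly. The snippet below is a deliberately simplified toy, not Robert's actual sequencer: the real M-209 routes active pins through a lug cage, whereas here we simply count them, and the wheel sizes and pin settings are illustrative assumptions. Coprime wheel lengths give a long period before the sequence repeats.

```python
import random

# Toy pin-wheel sequencer loosely inspired by the M-209's stepping wheels.
WHEEL_SIZES = (17, 19, 21)          # coprime: period = 17 * 19 * 21 = 6783

def make_wheels(seed=42):
    """Randomly set each pin active (1) or inactive (0)."""
    rng = random.Random(seed)
    return [[rng.randint(0, 1) for _ in range(size)] for size in WHEEL_SIZES]

def sequence(wheels, steps):
    """Yield, per step, the number of active pins under the read position.
    All wheels advance by one position each step."""
    for t in range(steps):
        yield sum(wheel[t % len(wheel)] for wheel in wheels)

wheels = make_wheels()
values = list(sequence(wheels, 32))  # e.g. map each value to a scale degree
```

The output hovers between silence (all pins off) and noise (random pins), which is exactly the "weakened crypto" middle ground the talk describes: structured enough to hear, irregular enough to surprise.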
I will provide an overview of how “Favorable Odds” evolved from an informal jam session with all live processing into a more structured formalized composition for Andrea Cheeseman. I will show some of the Kyma sounds that I made to create the immersive “clarinet-y” ecosystem for the solo clarinetist to float above. And I’ll close with a demo of the Euclidean rhythmic engine that powers the final section of the piece.
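A Euclidean rhythm distributes a number of onsets as evenly as possible across a cycle. This sketch is an illustration only (the piece's actual Kyma implementation is not shown here); it uses the standard modular formulation, which yields a rotation of the Bjorklund pattern:

```python
def euclid(pulses, steps):
    """Return a list of 0/1 onsets with `pulses` hits spread as evenly as
    possible over `steps` slots (one rotation of the Euclidean rhythm)."""
    return [1 if (i * pulses) % steps < pulses else 0 for i in range(steps)]

def as_string(pattern):
    """Render a pattern as x (onset) and . (rest) for quick inspection."""
    return "".join("x" if hit else "." for hit in pattern)

print(as_string(euclid(3, 8)))    # x..x..x.  (the Cuban tresillo)
print(as_string(euclid(5, 16)))
```

Feeding the resulting 0/1 stream to a sample trigger is one straightforward way such an engine can drive a rhythmic section.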
What’s New in Kyma
Between July and December of this year, I recorded myself playing a single repeated note on the flute, once a day (most days). Some days I repeated it a lot, and some just a few times; long, short, loud, quiet, etc. all were left up to my mood or creative whim. I recorded these notes in a lot of different environments and many different times of day, sometimes with guitar pedal distortion.
I intend to make some sort of interactive array for live performance using these recorded notes. This array will interact with a pitch-tracking environment I’ve developed with the help of Carla Scaletti. The audience will be allowed/encouraged to trigger samples from this virtual flute calendar, which will correspond to very short musical excerpts that I can then perform, which in turn will interact with the magic Kyma pitch-tracking stuff I DON’T ACTUALLY KNOW HOW KYMA WORKS BUT I’LL FIGURE IT OUT.
To add an extra layer of musical meringue to this Baked Alaska of flutiness, I’d like to create responsive virtual environments for the performance; in other words, as time “passes” and more flute-days are selected, the sonic environment changes to suggest different virtual environments (distance, reflectiveness, wateriness?).
The project consists of blending, mixing, and morphing different samples with a disruptive geometric multitouch interface that interpolates multiple sound parameters (states) in Kyma. Clearly, the relation to the conference’s theme is that we use a process whereby sound emerges when altering (blending, mixing, and morphing) simple, small pieces of samples.
This piece is an electro-acoustic environment according to the definition “surroundings or conditions in which a person, animal, or plant lives or operates”. Each element is given a path to influence and be influenced by any other element in the electro-acoustic system.
My interface is a repurposed (altered state) Gametrak. I created this interface, the Tirare, for live musical performance.
Robert will be using a Kyma system based on the cryptographic sequencer (state machine) described in his talk, and Ilker will be accompanying/leading.
My presentation will focus on all aspects of selecting the Gametrak, identifying its components, connecting to a microcomputer, and generating MIDI (and MPE). I will talk about the physical challenges, affordances, and possibilities.
To complete the live demonstration, the concepts behind the state-based controller will be further explained. The controller is based on geometric shapes and the interactions between them. Each shape is linked to a static state of control (a preset) defined by the values of controllable parameters in Kyma. Multi-touch manipulation of these geometric shapes then allows the player to dynamically interpolate and create new states of control.
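Dynamic interpolation between preset states can be sketched as a weighted blend of parameter dictionaries. This is an illustrative sketch only: the parameter names and the way touch positions become weights are assumptions, not the controller's actual design.

```python
def blend(presets, weights):
    """Interpolate between named parameter presets.
    presets: {name: {param: value}}; weights: {name: weight}, positive sum."""
    total = sum(weights.values())
    out = {}
    for name, w in weights.items():
        for param, value in presets[name].items():
            out[param] = out.get(param, 0.0) + value * w / total
    return out

# Two hypothetical shape presets; a touch 75% of the way toward "triangle":
shapes = {"circle":   {"freq": 220.0, "cutoff": 800.0},
          "triangle": {"freq": 440.0, "cutoff": 2400.0}}
state = blend(shapes, {"circle": 0.25, "triangle": 0.75})
# state["freq"] is 385.0; state["cutoff"] is 2000.0
```

Because the blend is computed continuously as the touch moves, every intermediate position defines a new, playable state of control rather than a simple preset switch.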
I will discuss how I magically made an interesting quasi-feedback pitch-tracking-and-reproducing live performance environment, and how I intend to use this environment along with filters and delay and some sort of responsive tracking of selections made to shape and reproduce virtual spaces, which will themselves determine the structure of the overall piece.
I will describe the network between a plant, a metal resonator, a wooden resonator, an analog signal processor, and Kyma’s digital signal processing. I will show the connections between each element in the piece and describe how each element influences, and is influenced by, the others. I will also describe myself as an element of the environment, explaining which musical gestures I play in the piece and why I chose them.
Utilizing the burgeoning technology of Playtronica, contact microphones, rhythmic impulse responses, and various synthesis techniques, ‘The Endless Wastes of Samsara’ is a sound design cycle that both progresses through and revisits states of consciousness on the path to enlightenment, by means of both musical and abstract sounds.
An explanation of how Kyma’s frequency and amplitude followers can be used to trigger, shape, and confound video by sending MIDI information to VJ software.
Many times in an airplane descending to land, I have felt the effort of the plane cutting through the atmosphere—an ocean of air, surrounding our planet, with all of us living on its ocean floor. This feeling hit home when we went on a family road trip to the Grand Canyon for the first time in 2015. Its immensity can’t be captured in a photo; you really have to see it in person. The gulf of empty air before me, echoing from the rock walls, left me speechless. My friend and brilliant trombonist, Ken Thompkins, had asked me to write a piece for him: standing there at the top of a cliff, I realized that synthesized reverberation effects, combined with the majesty of the trombone, would be ideal to capture this vast emptiness. But as the title says, it’s not really empty: it’s an ocean, we can feel it around us, it keeps us alive.
Hearing Corwin Hall uses sound sources from the third floor of two adjoining, now-demolished buildings, resulting in a radical altering of the acoustic space of that location since the time of recording. It is fascinating to look up at the open space where the third floor used to be, twenty feet above the ground, and imagine the former building structure and its acoustic signature, as well as the mics, cables, and people present during the recording. Corwin and Larimore Halls, formerly on the campus of the University of North Dakota and demolished in May of 2018, were originally constructed in 1909 as part of the all-female, Methodist-Church-affiliated Wesley College, which was later absorbed by UND. For a period of time Corwin Hall housed the Music Department, and adjoining Larimore Hall served as a women’s dormitory.

Hearing Corwin Hall uses source sounds from a recording made on March 13, 2018, after the buildings were abandoned but before they were demolished. The recording took place on the third floor, which included the Corwin recital hall. Eight microphones were placed in a variety of locations. During the recording the composer presented an informal “memorial service” that included a keyboard-synthesizer performance of five hymns from the Methodist hymnal, after which attendees were encouraged to wander the third floor and engage with the microphones in order to capture the acoustics of the space.

Hearing Corwin Hall takes digital data from these sound sources and applies it to time-based parameters in a live-processing environment, in order to symbolize the fleeting nature of time and objects as well as to highlight the acoustic signature of the space. Using time in this manner creates an altered state in contradiction to the usual passing of time. Video projection of historical and recent photos, as well as photos of the demolition and post-existence of the buildings, accompanies the music.
This piece is an exploration between natural and synthesized sound. The natural sounds are samples of many different animal growls, roars, calls, etc., and the synthesized sounds are sounds of computers, processors, early analog synthesizers, etc. The sounds are combined in the Tau editor and are then altered and mapped onto a Wacom tablet. The philosophy of the piece is unity vs. separation, and moving between those different states seamlessly. The pen of the Wacom tablet is consistently altering the states of each sample, making them extremely different from the rest and then uniting them so they are almost the same sound. The sounds move through a spherical eight-channel array, making the experience of changing between different states more powerful. This piece explores the symbiotic relationship between natural and synthesized sounds.
Improvising an agile soundscape at the finest level of form, live vocals provide input to a graph of audio effects featuring looping, granular sampling, and synthesis through adjacent channels of a timeline. The vocal style is mantric in character, varying between ambient drone and rhythmic effect, breath-noise and higher-frequency ambience. A state of presence is invoked through the sonic qualities of breath, responding in focused meditation upon an I Ching symbol associated with the region of 3D musical space currently being explored through gestural controls.
Establishing a meso-level of form via real-time navigation between aesthetic analogues of yin and yang, rhythmic and amorphous grain clouds are shaped through density control, timbre control and envelope sequencing. These are triggered and controlled through the player’s 3D navigation of an aesthetically-ordered space of event sequences evolved through the current epoch of evolution.
For unfolding a larger sense of form during the performance, two channels of the timeline are prepared with eight clips of audio each, for use in performance-time access and blending. Analogous to the major changes of mood or song within a traditional live music set, a new epoch of evolution is triggered at occasional points during the performance, repopulating the 3D control space with fresh musical material.
The evolutionary component was developed in Swift 4 using Playground-based development. In a running Swift process, the triggering of each new epoch of evolution results in a new round of data, which is then sent to Kyma in bulk via OSC, populating a collection of step sequencers with events and other objects with parameter settings.
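The bulk transfer can be illustrated with a minimal OSC 1.0 message encoder. This is a sketch only, in pure Python rather than the piece's Swift: the address pattern, host, and port are assumptions, not the work's actual OSC namespace.

```python
import socket
import struct

def osc_string(s):
    """OSC 1.0 string: ASCII bytes, NUL-terminated, padded to a multiple of 4."""
    b = s.encode("ascii")
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address, *args):
    """Encode an OSC message whose arguments are all float32."""
    tags = "," + "f" * len(args)
    payload = b"".join(struct.pack(">f", a) for a in args)
    return osc_string(address) + osc_string(tags) + payload

# Hypothetical: one step-sequencer row sent as a single message over UDP.
packet = osc_message("/stepseq/1/values", 0.0, 0.25, 0.5, 1.0)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 8000))   # replace host with the receiver's address
```

Packing a whole row of values into one message, rather than one message per event, keeps the per-epoch repopulation burst compact.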
Parallel processes of grammatical evolution (GE) supply input arguments to an octad of parametric grammars, resulting in eight freshly evolved populations whose individual data members are particular derivation paths through their octant’s grammar.
Each of these eight populations is assigned to a unique corner of a virtual unit cube along with a three-bit vector describing its location in virtual space.
These eight binary vectors are isomorphic to the eight trigrams of the Chinese classic “I Ching,” a particular partition of which provides the initial materials for each of the eight parametric grammars. As a result of the evolutionary process, the evolved members of each corresponding population describe particular variations on how to remix these initial materials via selection, recombination, and mapping.
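The trigram-to-corner correspondence can be sketched directly. In this illustrative snippet (the trigram ordering convention and any mapping details in the piece itself are assumptions), each trigram's three lines form a 3-bit vector that doubles as a cube corner, and a 3D control position selects the nearest one:

```python
# Binary (Fu Xi) ordering of the eight trigrams, index 0b000 = Earth (Kun)
# through 0b111 = Heaven (Qian); each bit stands for one yin/yang line.
TRIGRAMS = ["Earth", "Thunder", "Water", "Lake",
            "Mountain", "Fire", "Wind", "Heaven"]

corners = {name: ((i >> 2) & 1, (i >> 1) & 1, i & 1)
           for i, name in enumerate(TRIGRAMS)}

def nearest_trigram(x, y, z):
    """Return the trigram whose cube corner is closest to a 3D control point."""
    return min(corners, key=lambda name: sum((p - b) ** 2
               for p, b in zip((x, y, z), corners[name])))
```

Under this scheme a gestural controller moving through the unit cube continuously re-weights which population's material is nearest at hand.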
A binary classifier applied to each population divides the individuals into two subgroups per population, yin and yang, according to the amount of repetition vs. change within the sequence data for each individual.
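One plausible form for such a classifier scores the fraction of adjacent events that differ; the actual measure and threshold used in the piece are not specified, so this is a hedged sketch only:

```python
def yin_yang(events):
    """Classify a sequence as 'yin' (more repetition) or 'yang' (more change)
    by the fraction of adjacent event pairs that differ."""
    pairs = max(len(events) - 1, 1)
    changes = sum(a != b for a, b in zip(events, events[1:]))
    return "yang" if changes / pairs > 0.5 else "yin"
```

Applied across a population, a measure like this splits the individuals into the two subgroups that are toggled between during performance.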
Selected members from each of the eight populations are superimposed upon the virtual unit cube at regular intervals, enabling the evolutionary results to be explored in real time via the 3D control source. Within each population, the subgroups corresponding to yin & yang are also toggled between, selecting material from either as currently available for play, effectively controlling the amount of repetition vs. change during the performance.
This performance relates to the theme of “Altered States” in two related ways, akin to the metaphysical concepts of immanence and transcendence:
An expression of immanence is characterized through the vocal qualities of breath responding to perceived changes in the sensorium interface between outer and inner, environment and symbol.
An analogue of transcendence unfolds in musical time through the live exploration of a virtual cube of space and its mappings, between the archetypes of eight semiotic states expressed through aesthetically representative musical event sequences. Placed upon the cube in a state-like form, these musical materials are arranged like charts in an atlas, made navigable through the gestural expression of paths blending between them in real time during performance.
The secondary theme of “Ecosystems” is addressed through the use of an evolutionary process evolving parallel populations of new materials on demand, reordered and classified according to some objectively-measured qualities. These qualities have emerged ecologically through the evolutionary processes of selection, mutation, and recombination working in parallel within each of the eight divergent populations.
Chaos and Calm.
The message of this music is simple and profound – “get outdoors”.
The performance is outdoors to draw attention to the natural environment around us and to highlight to the participants their relationship to the natural environment in that moment.
This music is not just inspired by nature but it is a music of nature. It is situated in nature — both affecting it and being affected by it.
We are fascinated by the contrast between chaos and calm: how the chaos of the natural environment can induce calm in the individual. This is the first “altered state” that inspires the piece: the transition within ourselves from chaos to calm, from an indoor, synthetic mindset to an outdoor, organic one. Nature is brought into relief by contrasting it against synthetic electronic sounds, while at the same time knocking the synthetic edges off the technology to meet in an analog middle ground. We do this through feedback. All sounds are produced through analog, over-the-air feedback, which is by its nature messy, organic, and chaotic. No oscillators or samples are used; no sound originates in Kyma. Instead, Kyma is used to shape the sound of the environment.
Feedback is itself a state. That howling, constant tone we associate with microphonic feedback is a steady, stable, and ultimately boring state. The act of the performance is to keep the sound dynamic, actively balancing it out of that stable state by shaping it with Kyma and the movement of the microphone. With this instrument we can transition through a landscape that ranges from droning and chaotic to harmonic and percussive. We alter the pristine, high-quality DSP state of Kyma into a grungy electro-mechanical sound.
The performance uses microphones, speakers, wires and performers situated in the surrounding ecosystem. Ideally the performance is in the round, with the audience within the circle of speakers and able to move within that space. The physical presence of the audience will affect the sound itself. The performers at the centre will shape and guide the music through the motion of the tuned microphone tubes, as directed by perceived shifts in the environment: wind, sun, clouds, colours and sounds.
It is through the contrast of the tree and DSP that we want to highlight the natural ecosystem and make us more acutely aware of our place within it. We want to create a music that celebrates and connects us to the outdoors. Our ultimate goal is not just to continuously alter the state of a complex feedback system, but to alter the mental state of our audience: to bring our consciousness to our relationship with the outdoor ecosystem.
The piece takes its score from nature. It ends as it begins: in the middle, having taken us on a journey into the outdoors.
Sunday 9 September 2018
Among the chief objectives of traditional Western musical notation are the indication of musical content and of performance directives in a composition. Historically, three primary strategies have been used in traditional Western musical scores: 1) graphical, 2) symbolic, and 3) textual. The techniques used to achieve these objectives have evolved over time and have most recently migrated to computer-related technologies. As you might expect, these techniques have related principally to note-based compositions. The notational needs of sound-based compositions are somewhat different, but related. The discussion will focus on how these historical strategies can be implemented within a computer-centric environment to assist composers and performers in the creation and performance of sound-based music.
EEG signals are an interesting source of sound control and generation. Following up on methods presented and a composition performed at KISS2013 (Brussels: Wetware Fantasy #1), this talk provides a background on EEG signals and their use as controllers, routed via OSC into Kyma. I will also explore entrainment and biofeedback applications that enable the induction of altered states. There is no Wetware Fantasy #2 on the program, since I am not quite ready for that yet. Nonetheless, I hope this review of my working status will be interesting and informative.
What happens when a visual artist creates a drawing in real time on a Wacom tablet and the pen data is used to control an interactive Kyma Timeline? Does the artist keep to her original drawing plan while the Kyma sounds unfold? Or does the artist respond to Kyma and alter the drawing plan (which then alters Kyma)? As the artist learns the larger ecosystem environment, does she even anticipate future sound results, which then affect her drawing plan? Does the artist watch the projected image (as the Wacom surface itself is blank)? What is the dividing line between sonification and a dual artistic creation? Who interacts with whom, and what affects what?
This presentation will first demonstrate the technical issues that surround this project, including the difficulties of using the Wacom pen’s output data to simultaneously drive a drawing program and Kyma. The aesthetic issues listed above will also be discussed, with solutions presented in both the visual and aural domains. These two areas are precursors to a future live concert performance environment, which will also be discussed. The two authors will demonstrate aspects of their own artistic realms and the interaction between the two, with both successes and remaining challenges presented as a work-in-progress report.
Visual artist Marianne Bickett enjoys blind contour drawing and gesture sketching, techniques that lend themselves to real-time performance. In this context, the process of drawing is the composition. Composer Brian Belet uses Kyma as a live performance system in addition to a sound design research platform. For each artist, the spontaneity of gestures in performance, based on a preconceived macro plan, is the primary interest of this collaboration. For this project, a Kyma TimeLine is constructed to accept input data from the Wacom tablet, using the pen’s X, Y, Z, and tilt motions to control various parameters. The visual artist creates a drawing on the Wacom tablet (projected onto a screen), and the pen data is routed to Kyma. What emerges is a performance art work, linked in both concrete and abstract ways, that takes shape over a set period of time in concert.
Building on my work as presented in Kyma Confused: Chaos and the Problem of Time at KISS 2012, this presentation demonstrates techniques employed in Hearing Corwin Hall that use audio data to alter time. In Kyma Confused, the focus was on using data from an audio file to alter its own time. This piece focuses primarily on applying one audio file’s data to alter the time of another audio file. The aural results are similar, but more easily controlled in the latter scenario.
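As a rough illustration of the idea (a pure-Python sketch with invented names, not the actual Kyma implementation), the amplitude envelope of one audio file can be turned into a time-varying playback rate for another:

```python
def amplitude_envelope(samples, frame_size):
    """Mean absolute amplitude per frame of the control file."""
    return [sum(abs(s) for s in samples[i:i + frame_size]) / frame_size
            for i in range(0, len(samples), frame_size)]

def warp_time(target, envelope, frame_size=64, base_rate=1.0, depth=1.0):
    """Resample `target` with a playback rate modulated by `envelope`.

    Louder frames in the control file push the rate above `base_rate`,
    so the target moves through time faster where the control file is
    loud, and slower where it is quiet.
    """
    out, pos = [], 0.0
    for env in envelope:
        rate = base_rate + depth * env
        for _ in range(frame_size):          # one output frame per control frame
            if pos >= len(target):
                return out
            out.append(target[int(pos)])
            pos += rate
    return out
```

With a silent control file the target plays back unaltered; as the control file's envelope rises, the target is pushed through time faster, which is one simple way one file's data can alter another file's time.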
This will be an inquiry-led presentation, guided by questions from the audience. However, we will provide some direction to steer the discussion toward “how” we did it rather than “why”. As we will have Kyma, a projector, and our equipment on hand, it will be an ideal opportunity to show what we did and how we did it. The “why” questions are better answered over dinner.
What we think you may find interesting are the details of how we take Kyma outside, and how it works with, and compares to, the other technologies we take into the wild.
This presentation is about how the main ideas (reversion and transdifferentiation) are applied in the composition and Kyma sound design.
Parallel processes of grammatical evolution (GE) supply input arguments to an octad of parametric grammars, yielding eight freshly evolved populations whose individual data members are particular derivation paths through their respective grammars.*
The resulting derivation path is a function whose parameters have been completely filled in by a genotype-mapping process. A resulting artifact, or phenotype, is created by simply calling this function. This results in a data sequence of musically-interpretable values and events for each individual.
Each of these eight populations is assigned to a unique corner of a virtual unit cube along with a three-bit vector describing its location in virtual space.
These eight binary vectors are isomorphic to the eight trigrams of the Chinese classic “I Ching”, a particular partition of which provides the initial materials to each of the eight parametric grammars. A result of the evolutionary process, the evolved members of each corresponding population describe particular variations on how to remix these initial materials via selection, recombination, and mapping.
A binary classifier applied to each population divides the individuals into two subgroups per population, yin and yang, according to the amount of repetition vs. change within the sequence data for each individual’s phenotype.
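A minimal sketch of such a classifier (an illustrative thresholding measure, not necessarily the author's actual one): score each individual's event sequence by how often adjacent events differ, then split the population at a threshold.

```python
def change_ratio(seq):
    """Fraction of adjacent pairs that differ: 0 = pure repetition, 1 = constant change."""
    if len(seq) < 2:
        return 0.0
    changes = sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    return changes / (len(seq) - 1)

def classify_yin_yang(population, threshold=0.5):
    """Split a population of event sequences into 'yin' (repetitive) and 'yang' (changeful)."""
    yin  = [s for s in population if change_ratio(s) <  threshold]
    yang = [s for s in population if change_ratio(s) >= threshold]
    return yin, yang
```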
Selected members from each of eight populations are superimposed upon the virtual unit cube at regular intervals, enabling the evolutionary results to be explored in real time via a 3D control source. Within each population, the subgroups corresponding to yin & yang are also toggled between, selecting material from either as currently available for play, effectively controlling the amount of repetition vs. change during the performance.
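One plausible way to realize this kind of blending (a sketch, not necessarily the piece's actual mapping) is trilinear interpolation: the 3D control position inside the unit cube yields eight weights, one per corner/trigram, that always sum to 1.

```python
from itertools import product

def corner_weights(x, y, z):
    """Trilinear blend weights for the 8 corners of the unit cube.

    Each corner is keyed by its three-bit vector (bx, by, bz); a
    corner's weight grows as the control point approaches it.
    """
    weights = {}
    for bx, by, bz in product((0, 1), repeat=3):
        wx = x if bx else 1 - x
        wy = y if by else 1 - y
        wz = z if bz else 1 - z
        weights[(bx, by, bz)] = wx * wy * wz
    return weights
```

At a corner, that corner's material sounds alone; at the cube's centre all eight populations contribute equally, so a gestural path through the cube traces a continuous blend between the eight archetypes.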
The evolutionary component was developed in Swift 4 using Playground-based development. In a running Swift process, the triggering of each new epoch of evolution results in a new round of data, which is then sent to Kyma in bulk via OSC, populating a collection of step sequencers with events and other objects with parameter settings. The sequencer design is based on a variation of Cristian Vogel’s Global Sequencer, available from NELabs.
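Since the data travels as OSC, a minimal sketch of how a single OSC message is laid out on the wire may be helpful (this follows the OSC 1.0 specification; the address `/seq` is just an example, not the piece's actual namespace):

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC spec."""
    return b + b'\x00' * (4 - len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode one OSC message with int and float arguments (big-endian)."""
    tags, payload = ',', b''
    for a in args:
        if isinstance(a, float):
            tags += 'f'
            payload += struct.pack('>f', a)
        elif isinstance(a, int):
            tags += 'i'
            payload += struct.pack('>i', a)
        else:
            raise TypeError('only int/float shown in this sketch')
    return osc_pad(address.encode()) + osc_pad(tags.encode()) + payload
```

The resulting bytes would be sent to Kyma over UDP; in practice a library such as the Swift or Python OSC bindings handles this encoding.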
To unfold a larger sense of form during the performance, two channels of a timeline were prepared with eight clips of audio each, for use in performance-time access and blending. Analogous to major changes of mood or song, a new epoch of evolution is triggered at occasional points during the performance, repopulating the eight contexts with fresh musical material.
Establishing a meso-level of form via real-time navigation between aesthetic analogues of yin and yang, rhythmic and amorphous grain clouds are shaped through density control, timbre control, and envelope sequencing. These are triggered and controlled through the player’s 3D navigation of the aesthetically ordered space of event sequences evolved during the current epoch of evolution.
Improvising an agile soundscape at the finest level of form, live vocals provide input to a graph of audio effects featuring looping, granular sampling, and synthesis through adjacent channels of the timeline. The vocal style is conceived to be mantric in character, varying between ambient drone and rhythmic effect, breath noise, and higher-frequency ambience. A state of presence is invoked through these sonic qualities of breath, responding in focused meditation upon a selected I Ching symbol associated with the area of the 3D space being explored.
This project relates to the theme of “Altered States” in two related ways, akin to the metaphysical concepts of immanence and transcendence.
An expression of immanence is characterized through the vocal qualities of breath responding to perceived changes in the sensorium interface between outer and inner, environment and symbol.
An analogue of transcendence unfolds in musical time through the live exploration of the virtual cube of space and its mappings, between the archetypes of eight semiotic states expressed through their current aesthetically representative musical event sequences. Placed upon the cube in a state-like form, these musical materials are arranged like charts in an atlas, made navigable through the gestural expression of paths blending between them in real time during performance.
The additional theme of “Ecosystems” is addressed through the use of an evolutionary process evolving parallel populations of new materials on demand, ordered and classified according to some objectively-measured qualities. These qualities have emerged ecologically through the evolutionary processes of selection, mutation, and recombination working in parallel within each of the eight divergent populations.
* Each data member of a population consists of a (genotype, phenotype) pair. The genotype is a coded specification that produces a specific artifact, the phenotype, when run through a mapping process. Simply a list of integers, the genotype supplies input to the associated parametric grammar. Passing through consecutive levels of the grammar, each integer in turn fills in the next required parameter slot or determines the next available rule selection, systematically resolving into a particular derivation path through the grammar.
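As a toy illustration of the genotype-mapping the footnote describes (a hypothetical miniature grammar, not the one used in the piece), each integer in the genotype selects the next rule choice, modulo the number of alternatives for the current non-terminal:

```python
# A toy grammar: each non-terminal maps to a list of alternative expansions.
GRAMMAR = {
    'seq':   [['note', 'seq'], ['note']],
    'note':  [['pitch'], ['rest']],
    'pitch': [['C'], ['E'], ['G']],
    'rest':  [['-']],
}

def derive(genotype, symbol='seq', max_steps=100):
    """Map a genotype (list of ints) to a phenotype via grammar derivation.

    Each gene picks one alternative, modulo the number of choices for
    the current non-terminal; genes are consumed left to right and
    reused cyclically if the derivation needs more choices.
    """
    out, stack, g = [], [symbol], 0
    while stack and max_steps:
        max_steps -= 1
        sym = stack.pop(0)
        if sym not in GRAMMAR:
            out.append(sym)          # terminal symbol: emit it
            continue
        choices = GRAMMAR[sym]
        pick = choices[genotype[g % len(genotype)] % len(choices)]
        g += 1
        stack = pick + stack         # expand leftmost non-terminal
    return out
```

Here the derivation path is fully determined by the integer list, so evolving genotypes evolves phenotypes; real GE systems differ mainly in the grammar and in how unused or wrapped genes are handled.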
Predators and Prey: minimal adaptive rhythms and shapes.
A community of competing Euclidean rhythms (each with a limited life cycle), while escaping from “bass predators”, hunts a flock of “chords and bleeps” prey that randomly walk through a quadraphonic ecosystem haunted by the bass predators. The characteristics of their walk are related to their pitch and timbral content, and capturing a prey extends their life cycle. At the same time, they can be killed by the bass predators. The longer they live, the higher their probability of replicating: a genetic algorithm controls their asexual reproduction and mutation.
All the elements of the sonic ecosystem send OSC messages to a Processing sketch, which renders a minimalist visualization of the activity of the ecosystem, in which the sounds become essential geometric shapes.
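The Euclidean rhythms mentioned above can be generated by distributing a number of pulses as evenly as possible over a cycle of steps; a common bucket-accumulator sketch (not necessarily the author's implementation):

```python
def euclidean_rhythm(pulses, steps):
    """Distribute `pulses` onsets as evenly as possible over `steps` slots
    (a Euclidean rhythm), returned as a list of 0s and 1s."""
    pattern, bucket = [], 0
    for _ in range(steps):
        bucket += pulses
        if bucket >= steps:          # accumulator overflows: place an onset
            bucket -= steps
            pattern.append(1)
        else:
            pattern.append(0)
    return pattern
```

For example, 3 pulses over 8 steps yields a rotation of the familiar tresillo pattern; many traditional rhythms arise this way.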
Program Notes:
The beautiful trees that I see on my evening runs inspire this piece. The way they move in the wind makes me imagine a magical forest in which trees can speak to each other, along with other magical creatures. The material for the composition is created from sonified wind data. I transform the sound by combining custom waveforms created from this sonified data with various pitched percussive sounds. Using Symbolic Sound’s Kyma, I control the sound in real time via data streams from a Kinect controller.
Inspired by trees and created out of sonified wind data, “Wind in the Forest” evokes the beautiful temperate rainforest of tall, powerful, green trees. The Santa Cruz Mountains, with their moist coastal ecosystem, are home to many beautiful trees, such as coastal redwoods, coastal Douglas firs, and California black oaks.
unFamiliar is a musical journey inspired by the familiar yet exotic imagery produced by scanning electron microscopes. Source material for this piece was derived from recording everyday environments using an ambisonic microphone, producing a full-sphere surround sound recording. In this way, the natural spatialization and timbral qualities of the sound have been preserved, while the combination of the Game-Trak entertainment controller and Symbolic Sound’s Kyma system allow the performer to explore, dissect, and narrate a soundscape of microsonic detail.
Coming from Universidad San Francisco de Quito (Quito, Ecuador), Nelson Garcia and Gabriel Montufar present En Garde (a contemporary duel). The territory belonging to Contemporary Music is a place of constant competition and struggle for power. There is a constant dispute over what is (and is not) considered Contemporary Music in different cities and scenes; a constant combat. The different gestures, positions, movements, and sounds produced by the fencers during combat are mapped in real time to control a series of musical parameters. There is also a vocal choir, singing live and reading the combat and its movements like a score. Borrowing a strategy from contextual art, the choral group, as well as the fencers and referee, are selected in situ in each place the work is to be performed.
I will discuss the implementation of an evolutionary algorithm to build a fictional living ecosystem, in which sounds are born, move, hunt, replicate, and die within the four corners of the listening space.
Inspired by John Holland’s “Adaptation in Natural and Artificial Systems” and Daniel Shiffman’s “The Nature of Code”, I have constructed a Multigrid with a community of sounds evolving in real time, assembled from many SoundToGlobalControllers, Replicators, and MultichannelPans.
I will illustrate the usefulness of the ArrayToGlobalControllers and WireFrames classes (developed by NeverEngineLabs) for processing large arrays of values, and the use of an OrderedCollection filled with Random elements as a pool for the mutated genotypes of the sounds.
Finally, I will talk about the probabilistic methods involved in the replication and adaptation of the creatures of this sonic ecosystem, and briefly discuss random number generation and operations on arrays in Smalltalk.
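A Python transliteration of the kind of probabilistic replication described above (the Kyma/Smalltalk details differ; the names and the fitness normalization here are illustrative):

```python
import random

def maybe_replicate(creature, fitness, mutation_pool, p_max=0.9, rng=random):
    """Asexual reproduction with mutation.

    The longer a creature has lived (higher `fitness`, normalized 0..1),
    the more likely it replicates; one gene of the copy is then swapped
    for a value drawn from the mutation pool.
    """
    if rng.random() >= p_max * fitness:
        return None                       # no offspring this cycle
    child = list(creature)
    child[rng.randrange(len(child))] = rng.choice(mutation_pool)
    return child
```

In the actual piece the genotype values would map to musical parameters (pitch, timbre, spatial walk), and the mutation pool would play the role of the Random-filled OrderedCollection mentioned above.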
I would like to share with fellow KISS members how I created the composition based on wind data that I obtained from scientists working on the Pacific Ocean. The interface selection, data mapping, performative actions, and sound design were all used to complement the wind source, which is the basis for this composition.
Coming from Universidad San Francisco de Quito (Quito, Ecuador), Nelson Garcia and Gabriel Montufar present En Garde (a contemporary duel). Fencers duel; their movements and breath are transformed into intricate sounds that live on and off stage, which in turn alter the fencers’ own states for combat. A choir reacts to this, creating an even more complex array of sounds that lives within itself and continues as the match continues. This desktop demo goes into the programming of the piece and how the system enables the piece to be performed.
A compositional overview of unFamiliar, a piece for Game-Trak and Kyma. This presentation will focus on strategies for mapping the performative gestures of the Game-Trak to variables in Kyma’s TauPlayer, utilizing ambisonic recordings taken with a modified Zoom H2n recorder.
After recording Ken Thompkins’s solo trombone part, making sounds in Kyma, and putting everything together in Logic, I’ve made performance versions for both Max and Kyma; this presentation will talk about these different approaches.
During this open lab, I will dissect how I built my piece, Animal Tech, and discuss the following topics: Multi-channel possibilities within Kyma, the Tau editor, Wacom tablet control, and exploration vs response as a method of composition.
Kyma provides multiple frequency and amplitude followers that send MIDI information to VJ software, enabling my voice, as both an auditory event and a data controller, to perform as an extension of my body that overlaps with, shapes, and drives the video itself: an uncanny space that promotes an altered state.
Encouraged by real-time processing and infinite recombination in Kyma, ‘The Endless Wastes of Samsara’ is a sonic exploration and expression of being in various states of consciousness, repeatedly, on the path to enlightenment. Using both live inputs and Kyma synthesis, sounds will be revisited, resynthesized, and thus realized in altered states.
A Strange Diversion is a real-time composition for two synthesis systems: Stephen Ruppenthal performing on Buchla Music Easel & 200 System analog synthesizers and Brian Belet performing on the Kyma digital sound design system. The Buchla and Kyma instruments exist as independent sound-producing systems, but here they are linked into one large dynamic system on several levels. While each instrument creates its own soundscape, each accepts the output of the other as input for additional processing. The two become four: Buchla, Kyma, Buchla processed by Kyma, and Kyma processed by Buchla. The two performers improvise within pre-composed gesture and time structures, adapting to each other and to the sounds they hear. Aspects of unpredictability are built into the Buchla patches and Kyma algorithms, and both performers are able to further shape the soundscape via real-time control of the instruments. The ensemble ecosystem is distributive, adaptive, open-structured, self-organizing, subject to disturbances and attempts at recovery, and just way too much fun! The composition’s title is an homage to Allen Strange (1943–2008), a good friend and mentor to both Ruppenthal and Belet.
The Haunted Shores of the Sea of Sleep consists of sounds generated by a hybrid digital/analog synthesis system that are processed by Kyma patches, which are, in turn, modulated by control voltage from the modular system. The title is derived from William Hope Hodgson’s 1908 speculative fiction/horror novel The House on the Borderland. The novel’s protagonist witnesses the acceleration of time and the eventual death of the solar system. While composing this piece, I was inspired by several passages from the end of the novel such as
“Gradually, as time fled, I began to feel the chill of a great winter. Then, I remembered that, with the sun dying, the cold must be, necessarily, extraordinarily intense. Slowly, slowly, as the aeons slipped into eternity, the earth sank into a heavier and redder gloom. The dull flame in the firmament took on a deeper tint, very somber and turbid.”
Through using Kyma, I became fascinated with the way in which sounds provide the potential for uncanny alteration. By pairing Kyma with a modular synthesizer, it is possible to create reconfigurable performance systems for real-time manipulation of fixed sounds, impulse responses, and feedback networks. I am particularly interested in shaping spaces through unrealistic and impossible changes. In keeping with the theme of Hodgson’s work, The Sea of Sleep will feature shifting timescales and continually altering reverbs to evoke the sense of spaces reformed by the passage of time.
The piece is linked to the concept of altered states both literally (it is a system focused on the continual and observable alteration of system states) and metaphorically (through its connection to Hodgson’s novel). Although The House on the Borderland predates psychedelia by half a century, the novel prefigures much of the imagery and many of the concepts later associated with Altered States. Reading the novel for the first time, I was immediately struck by the connections to Ken Russell’s film and Paddy Chayefsky’s novel, both of which I experienced years before reading Hodgson. This piece connects to a larger project, Repairer of Reputations, focused on creating sounds from speculative futures that never arrived (i.e., ‘outdated’ science-fiction narratives).
For its inputs, the performance system will use 6 channels of audio into Kyma via the audio interface, 2 channels of control voltage, and 16 channels of CV-to-MIDI conversion. The outputs will be 4 channels of balanced, line-level audio to the house speaker system, along with 2 channels of control voltage and 4 channels of MIDI-to-CV/gate into the Eurorack system.
Generative and iterative organic electro-sonic systems from the altar of the Buchla 200e abound and surround as unfound sound alters the state of mind of the hive’s eye. A darkness casts its light in the vast sonic space that our ears delight.
Blonda is a collaboration between musicologist Madison Heying and composer David Kant. This performance is an extension of their earlier work, in which they used homemade circuitry including non-linear oscillator networks, touch-sensitive feedback boxes, and sound reactive lights, to explore noise and unstable circuitry. In this performance, Blonda’s analog circuitry is processed with Kyma, enabling further examination of stable and unstable states.