Interim Presentations – Where the projects are and feedback – Part Three

Create an immersive audio-visual experience

Dave Smith

My project has gone through a number of changes since mid-May. I initially intended to produce an audio-visual fixed media piece to explore movement and transition in sound. I ran with this idea for a few weeks and built some sketches around this theme. However, I felt that this linear approach was perhaps too simplistic and was lacking in scope, so I have decided to explore this theme within the context of an interactive environment.

Accordingly, I have been building an audio-visual environment with FMOD and Unity. The idea is to offer the participant a chance to play with the sonic environment based on the choices they make within it. The most obvious choice is where to go (in this case, where to direct the first-person controller). Although this may seem like a rudimentary and self-evident thing to point out, I think there is potential for some really interesting results just within the bounds of this simple variable. Since sounds are placed within a (virtual) space, a particular route through the space can influence fundamental properties of the composition such as what sounds are heard, when they are heard, for how long and their relative levels in the mix.
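To make that concrete: a toy sketch of the idea in plain Python (positions and the attenuation law are invented for illustration, not my actual FMOD/Unity setup). Each source attenuates with distance, so the route taken determines the relative levels from moment to moment.

```python
# Toy sketch: relative levels in the mix depend on the route taken.
# Positions and the attenuation law are invented, not the FMOD setup.
import math

sources = {                       # name -> (x, y) position in the space
    "dark_monolith": (0.0, 0.0),
    "light_monolith": (10.0, 0.0),
    "drone_bed": (5.0, 8.0),
}

def gain(listener, source, min_dist=1.0):
    """Inverse-distance attenuation, clamped so gain never exceeds 1."""
    d = math.dist(listener, source)
    return min(1.0, min_dist / max(d, 1e-6))

# Two routes through the same space give two different 'mixes':
for route in ([(0, 0), (2, 1), (5, 4)], [(10, 0), (8, 2), (5, 4)]):
    print([{n: round(gain(p, pos), 2) for n, pos in sources.items()}
           for p in route])
```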

Below is a brief sketch of an environment I'm working on at the moment – dark and ominous, which I think works quite well. One thing I've been trying to do is to construct sounds in such a way that moving between their relative locations creates a feeling of gradual transition and movement, with intermingling textures of sound. I want the sonic environment to flow with little in the way of obvious boundaries between one sound and the next. Here, I've attached sounds to each of the dark and light 'monoliths'. You can move around and explore the ways in which sound transitions from place to place. Lots of work is needed, obviously, although I'm quite happy with the way the dark monolith sounds create a kind of broad, composite texture while at the same time providing individual areas of interest. I intend to work on more interesting ways of triggering effects when you move through the monoliths, randomise things a little, and generally build a bigger environment.

 

To an extent, I've also tried to segregate frequency bands when creating sounds and placing them around the environment. For example, I want bass-heavy sounds to have more range and form a kind of foundation upon which the 'short-range' mid and treble sounds can sit. This way I have a bit more control over the general structure of the sonic environment, rather than having bass/mid/treble frequencies 'locked' in individual sound files.
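In code terms this amounts to giving each band its own maximum audible distance, something like the following (numbers invented for illustration, not my actual attenuation settings):

```python
# Sketch: per-band attenuation ranges, so bass underpins the whole space
# while mid/treble sounds only speak at close range. Values are invented.
BAND_RANGE = {"bass": 40.0, "mid": 12.0, "treble": 6.0}  # max distance (m)

def band_gain(distance, band, min_dist=1.0):
    """Linear rolloff from full level at min_dist down to silence at
    the band's maximum range."""
    max_dist = BAND_RANGE[band]
    if distance >= max_dist:
        return 0.0
    return min(1.0, (max_dist - distance) / (max_dist - min_dist))

# At 10 m the bass foundation is still present but mid/treble have faded:
print({b: round(band_gain(10.0, b), 2) for b in BAND_RANGE})
# -> {'bass': 0.77, 'mid': 0.18, 'treble': 0.0}
```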

Problems: Mainly technical at the moment – FMOD and Unity can be an unstable combination to work with, and a few things have hindered workflow considerably. For example, Unity has a habit of randomly freezing my laptop (a Windows 10 issue, I suspect), requiring a reboot. In addition, tweaking parameters in FMOD requires rebuilding master banks, which then need re-importing to Unity to test whether the tweak sounds any good. This takes a lot of time (and patience). Perhaps I am doing something wrong here… any technical advice would be much appreciated! Another issue I managed to trace to overuse of the FMOD convolution reverb, which was presumably asking too much of the CPU and creating glitches in the sound. However, I liked the sound of the convolution reverb too much, so I applied the convolution offline and fed the rendered sounds into FMOD instead. I suppose sometimes it's about a compromise between what you want and what is technically feasible.
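For anyone interested, the offline convolution step itself is straightforward; something like this generic Python/SciPy sketch (the filenames are placeholders):

```python
# Sketch: rendering convolution reverb offline instead of in FMOD.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("monolith_dry.wav")    # source sound
ir, sr_ir = sf.read("cavern_ir.wav")     # impulse response
assert sr == sr_ir, "resample the IR to match the source first"

if dry.ndim > 1:                         # mix to mono for simplicity
    dry = dry.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)

wet = fftconvolve(dry, ir)               # the offline convolution reverb
wet /= np.max(np.abs(wet))               # normalise to avoid clipping
sf.write("monolith_wet.wav", wet, sr)    # feed the render into FMOD
```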

Any comments, suggestions or constructive criticism would be great.

Here are just some of the comments on Dave's project; for the full comment section click here

Owen –

Thanks Dave. Is there a title for the project yet? It seems to me you have a range of possibilities in front of you and having your themes developed will help pin down what you do next. As it stands there's scope to develop more variety into the sound world, but the degree to which you pursue that, and how you do it, depends largely on what it is you want people to do / have done to them. Is this an explorable composition, for instance, or an instrument? What do such categories imply for the ways in which you might curtail / encourage certain behaviours and possibilities? Are there features of using 3D worlds as interfaces for things like this that seem to dictate aspects of the sonic form? At the moment everything is quite slow and gliding: how would one make a jump cut in this kind of compositional process? Can you trace out particular kinds of overall shape in the environment? Can you do an A-B-A? A sonata form? A rondo? If you were to perform this, what would be the lower / upper limits for a workable duration?

tinpark –

Thanks for your post Dave and for the demo example. I don't like to pass you on to Jules WRT the FMOD stuff as he's so busy, but I'm a year out of date with it at the moment and haven't caught up this summer yet. If Jules doesn't respond in time, maybe we can resolve some of these issues as a group. Andrea has been making progress in spite of FMOD/Unity too… If you're still in the experimental stages of this (sounds like you're past that now), you might consider sending commands from Unity to Max and bypassing FMOD altogether.
I'm drawn to you considering this an explorable composition, rather than an instrument. For me, this kind of work brings sound design and composition together in meaningful ways. You have the formal concerns of shape and experience to deal with (composition), but are designing these as "ranges" rather than shaped absolutes. If you're composing with shape, terrain, object, speed, how can you parameterise these? What are you designing with – spectra, pace, rhythm? What kinds of compositional models are you interested in borrowing from, and what formal shapes work well in these kinds of environments or might be explored further because of this environment? How do we listen/hear if we're also playing, as opposed to consuming in a still listening capacity? Do we begin to participate in the music differently, and how does this affect the short- to mid-term form and structure? Can a piece of music like this be performed or can we only play it?
Get your title and chapter list nailed down this week, make sure that you promise what you've been doing, and enjoy making your piece/pieces. What's the piece about/for? Why does it exist? Not just to be droney and dark, right? It's droney and dark for a reason, isn't it…?

 

Exploring the potential of percussive impulses as a gestural means of parameter control for live electronics

Angus Stewart

Current progress:

At the moment my most universal means of parameter control is based on a 4-way switch system, where one of 4 possible sound analysis values can be output:

Current value: This is the most obviously transparent and linear of the four. Parameters will be set immediately to the current value.

Mean value: Another relatively linear means of parameter control, inspired by one of the basic propositions of free improvisation in Frederick Rzewski's Little Bangs, that "the past determines the present." This results in less transparent changes (especially as the amount of data received by the mean object grows), but there is also a reset function so the user can regain a more immediate sense of control.

Differential between current and mean values: This provides a more subtle and less linear means of control by simply subtracting the current value from the mean. I have found this to be most effective when used with frequency-orientated processing. It is made especially interesting by the way that materiality-orientated extended techniques (Chris Corsano (www.youtube.com/watch?v=2dIpdqH22kY), Mark Guiliana (www.youtube.com/watch?v=qDQ7eHNufNs)) can break the system, rendering it less linear by emphasising the offset irregularly and at extremes.

Stored sequences that move between the previous two values: This function stores the previous two current values, then repeatedly moves between them (line object) in the amount of time between the two values changing (timer object). This duration is randomly varied slightly (by a maximum of 50 ms) with each repetition (drunk object) in order to give the loop a quality that is constantly, subtly changing. Multiple loops can be set up on multiple parameters to create multi-metered textures. A rough code sketch of these four modes follows below.
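(This is a Python stand-in for the Max logic; in the patch itself, mean, line, timer and drunk do the real work, so the names and defaults here are purely illustrative.)

```python
# Sketch of the 4-way switch: one incoming analysis value, four ways out.
import random

class FourWaySwitch:
    def __init__(self):
        self.history = []           # analysis values received so far
        self.prev_two = [0.0, 0.0]  # last two values (for the loop mode)

    def receive(self, value):
        self.history.append(value)
        self.prev_two = [self.prev_two[-1], value]

    def current(self):      # mode 1: set parameters to the value directly
        return self.history[-1]

    def mean(self):         # mode 2: "the past determines the present"
        return sum(self.history) / len(self.history)

    def reset_mean(self):   # forget the past, regain immediate control
        self.history = self.history[-1:]

    def differential(self): # mode 3: current value subtracted from mean
        return self.mean() - self.current()

    def loop(self, base_ms):
        # mode 4: bounce between the two stored values; the timer-measured
        # duration drifts by up to 50 ms per repetition (drunk-style walk)
        a, b = self.prev_two
        dur = base_ms
        while True:
            dur = max(1.0, dur + random.uniform(-50.0, 50.0))
            yield a, dur    # ramp target + ramp time (as line would take)
            a, b = b, a
```

The generator in mode 4 only decides targets and times; in the patch the line object performs the actual ramping.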

vimeo.com/173440385 password is: sd

(apologies for the lack of audio example, I’ve been away from my drum kit for the last week. One will follow shortly.)

In terms of parameters that these values can be sent to, I am considering these in three main categories: time-based processing (Delay Rate, Tremolo Rate, Line loop parameters, Reverb Time), frequency-based processing (Pitch Shift, Ring Mod Frequency, Filter Frequency) and dynamics-based processing (Overdrive Intensity, Tremolo Envelope, Delay Feedback). At present my patch analyses both global and drum-specific frequency, dynamics and envelope length, as well as durations of time between impulses from drum to drum. With these obviously frequency-, amplitude- and duration-focused player and processing parameters, it is possible to set up some relatively direct and transparent relationships between the acoustic material produced by the player and the patch's electronic response.
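Put schematically, this routing is just a table from analysis value to destination parameters. A hypothetical sketch (the parameter names and ranges are my shorthand here, not the patch's actual labels):

```python
# Sketch: routing one analysis value into a category of parameters.
# Names, categories and scaling are invented shorthand.
ROUTING = {
    "time":      ["delay_rate_ms", "tremolo_rate_hz", "line_loop_ms",
                  "reverb_time_s"],
    "frequency": ["pitch_shift_st", "ringmod_freq_hz", "filter_freq_hz"],
    "dynamics":  ["overdrive_amount", "tremolo_env_depth", "delay_feedback"],
}

def route(category, value, ranges):
    """Scale a 0-1 analysis value into each destination's own range."""
    out = {}
    for p in ROUTING[category]:
        lo, hi = ranges[p]
        out[p] = lo + value * (hi - lo)
    return out

# e.g. a bright hit (value near 1) mapped across the frequency parameters:
print(route("frequency", 0.8, {"pitch_shift_st": (-12, 12),
                               "ringmod_freq_hz": (20, 2000),
                               "filter_freq_hz": (100, 8000)}))
```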

Problems and how I plan to fix them:

Whilst some of my previously mentioned successes in creating a sense of transparency between audience and user have been effective, I feel that I have somewhat neglected the more non-linear aspects that I initially set out to incorporate into my patching. One effective way of restoring these, I feel, would be to blur the boundaries between the three categories of processing:

  • Something I discovered very recently and would like to experiment with is the Squid Axon module. I feel this could work effectively as a means of merging the aesthetic properties of dynamic modulation (tremolo) and frequency modulation (ring mod.), thus blurring the properties of frequency- and time-based amplitude modulation (all of which go back to oscillation anyway). This could potentially not work at all, but I feel that if the resultant waveform could be smoothed a bit to eliminate popping, the signal could work effectively as an amplitude controller (see the sketch after this list).
  • Hz-to-millisecond conversion and vice versa. I have already incorporated millisecond-to-Hz conversion into my tremolo module's tap tempo. I feel that converting milliseconds to Hz in a delay could work effectively as a means of giving the delay's wet signal a more pitched quality. I have been working on a module that combines a reverb with a bitcrusher, in an attempt to imitate the 'crush' mode of the Old Blood Noise Endeavors Black Star Pad Reverb guitar effects pedal (www.youtube.com/watch?v=XPiCxpEXpe0 – lots of talking in this one, sorry), and feel this conversion could be implemented in that module effectively.
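The conversions themselves are trivial, and the smoothing idea can be caricatured in a few lines. A hypothetical Python sketch follows (not the actual module; the smoothing coefficient is a guess):

```python
# Sketch: ms <-> Hz conversion, plus a ring-mod-style oscillator smoothed
# into an amplitude (tremolo) controller. All parameters are illustrative.
import numpy as np

def ms_to_hz(ms):
    return 1000.0 / ms   # e.g. a 250 ms tap-tempo interval -> 4 Hz

def hz_to_ms(hz):
    return 1000.0 / hz   # e.g. 440 Hz -> ~2.27 ms delay time

def smoothed_ring_tremolo(signal, rate_hz, sr=44100, smooth=0.99):
    """Rectified oscillator used as a tremolo envelope; a one-pole
    lowpass takes the pops out of the hard zero-crossings."""
    t = np.arange(len(signal)) / sr
    raw = np.abs(np.sin(2 * np.pi * rate_hz * t))
    env = np.empty_like(raw)
    acc = raw[0]
    for i, x in enumerate(raw):        # one-pole smoothing filter
        acc = smooth * acc + (1.0 - smooth) * x
        env[i] = acc
    return signal * env
```

At low rates this behaves as a tremolo; pushed up towards audio rate, the same multiplication tips into ring-mod territory, which is exactly the blurring in question.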

When performing and practising with the patch, I've found that I have struggled particularly with longer-form improvisations. This is largely because transitions between combinations of modules are currently somewhat difficult. Being able to make these transitions is something I feel is essential in performance situations, as it gives the player a larger palette of sounds at their disposal. I feel that some additional means of control are necessary, and I would therefore like to explore the potential of using a single expression pedal alongside a pattr object to switch between a number of preset signal path configurations.
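As a first pass, the pedal-to-preset logic might be as simple as dividing the pedal's travel into zones (a hypothetical sketch; in Max the presets themselves would live in pattr, and the contents below are invented):

```python
# Sketch: one expression pedal (0-127) selecting between stored signal
# path presets, pattr-style. Preset contents are invented placeholders.
PRESETS = [
    {"delay": True,  "ringmod": False, "tremolo": True},
    {"delay": True,  "ringmod": True,  "tremolo": False},
    {"delay": False, "ringmod": True,  "tremolo": True},
    {"delay": False, "ringmod": False, "tremolo": False},  # dry
]

def preset_for(pedal_value, n=len(PRESETS)):
    """Divide the pedal's travel into n zones, one per preset."""
    index = min(pedal_value * n // 128, n - 1)
    return index, PRESETS[index]

print(preset_for(0))    # heel down -> preset 0
print(preset_for(127))  # toe down  -> preset 3
```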

Here are just some of the comments on Angus' project; for the full comment section click here

awrwsound –

I like the sound of the experiments with snares you've done previously and I'd be interested to see how your patch can work with different snare types. How would a ringier snare sound compared to something a bit more woody?

Aside from that, it'd be interesting to see how the system reacts to what the player's doing and how the player then reacts to that. It could be almost like a drum jam with a machine, rather than a performance aid. I'm definitely hoping there's a chance to see it in action soon.

Seek could be handy as well, which could dip into a buffer and pull out certain bits. The regularity of this could be dictated by the player and controlled using tempo/random. Could seem super basic but might be interesting to see in use. Could also be useless info as there's a very good chance that I've completely misinterpreted everything and am just typing garbage.

Looking forward to hearing more!

Caleb James Abbott –

Hey Angus!
I wanted to reiterate some of my thoughts on your performance the other night, which might help you understand how I approach what you’re doing, which may or may not be useful. Nonetheless, here I go! I really enjoyed your performance. As you know I am also a percussionist, so I can appreciate your project from several perspectives.
It's something I've brought up in my feedback to Mike and Matt as well, and I think it's worth noting here too: I don't think we should forget about the performative aspects of what we do, beyond what the computer processes. You mentioned finding it difficult to generate long-form pieces (not that this is the answer to your issue, but I think it's one helpful consideration) and I found I wanted to hear you play more, without the tool. I'll elaborate. I think audiences will see it in a similar way as well – performer first and tool second. I realise this project is about the development of a tool, but I feel that your performance can benefit from gradually bringing the tool in and out, to accompany your playing. This doesn't mean that the tool can't take over and become the focus, but I wonder whether thinking more about your relationship to it (the tool) might enhance your performances.
I look forward to more of this! And to trying it at some point too!

How are techniques such as diegesis, hyperreality and anempathetic sound used as thematic tools in the creation of sound designs?

Corin Ashfield

For my summer project I've been looking at how techniques/features such as diegesis/non-diegesis, anempathetic sound and hyperreality can be used as thematic tools in the construction of sound designs, and how they can be used to evoke conceptual resonance and establish temporalisation. Originally, I was planning on researching how selective use of sound and tones can affect the theme of a piece, and creating a range of examples consisting of a short film for which I would construct six or seven sound designs to prove my point before analysing them further. However, after extensively researching the field of audio-vision through a plethora of reading material, I realised how broad and diverse this area of research is and decided that I needed to broaden my scope of study to include, alongside the aforementioned areas, subcategories such as dubbing, rhetorical effects and synchronisation.

Alongside my extensive research into the area, I have been experimenting with creating sound designs (often rough and needing refinement) communicating a certain area of research I’m particularly interested in at a given time. Many examples can be found in my blog posts.

Below is a link to an example of a video I created in an attempt to effectively display anempathetic sound. This is an area that has particularly interested me, as I have frequently come across the question "If anempathetic sound works to effectively convey/emphasise a mood on-screen, can it truly be called anempathetic?". To try to convey anempathetic sound effectively, I created a three-part sound design: one part that worked with the picture and two that were completely unrelated – vimeo.com/172330971

I admit I have become somewhat obsessed with researching these areas and analysing examples I find, and one of the main successes of my project has been that I have considerably improved my understanding of a range of areas in sound design and have introduced myself to areas that I had not even thought of before (i.e. temporalisation). One hang-up has been the practical side of my project, as I have created a range of videos but nothing that I feel is substantial enough to submit as part of the final hand-in. I am therefore working on finding videos long/substantial enough for me to effectively display a number of the areas that I have been researching, while also working on a photo montage that I can temporalise and use to communicate conceptual resonance through sound.

Here are just some of the comments on Corin's project; for the full comment section click here

Owen –

Thanks Corin. How intact do you feel your themes are at the moment? Anempathy seems to be your main focus here. Are you still probing your ideas about hyperreality etc.? Meanwhile, there are certainly some interesting questions raised by your work so far. The most urgent, I think, is where the division is to be found between anempathetic sound and something just arbitrarily laid over a clip. One aspect of this is that the anempathetic effect only stands much chance of taking hold where empathetic sound design has already established a context, i.e. it works in juxtaposition to its temporal surroundings. Another aspect may concern the resonance (for want of a better word) of how anempathetic materials are presented, e.g. the common combination of brutal violence with the everyday (such as light music). A third could be to do with playfulness around the diegetic boundary. I think you could benefit from letting some writing and analysis guide your practical work here. How, really, does anempathetic sound seem to be doing its work in some canonical examples? Ditto for your other concerns with hyperreality etc. Can you identify principles along the lines I've suggested? Do these seem to work in practice?

tinpark –

Thanks Corin, I'm intrigued by your work and delighted that your research is opening up new approaches for you. I'm also fascinated by the film(s) you've made. It may be that you don't even need to make a longer video, but rather several other short experiments that pepper and illustrate a serious discussion of the areas you've been working in. To that end, it's worth really concentrating on the shape of your writing and seeing what design work is needed to complete your quest as it stands. How are the title and chapter list coming on? Does what you've put down create an opportunity to design more sounds? I like the idea of making a photo montage, but not if it's an extra thing that dilutes the depth you've been getting to with the video above. Finding a longer film online might be a way to move forwards, but are there any collaborators from the Film classes we know who might have access to something? The film curators from SFM, Carys and Devin, know loads of short film makers; is it worth asking them for some suggestions or contacts?

Mike Parr-Burman

My project is based around a live performance system that interacts and improvises along with its audio surroundings, using rehashed fragments of that input as the source for its sonic output.

 

obsessions

My main obsession while working through this project so far has been to find a way of exploring the space between live music systems that aim for autonomy and independence from the human musician (in the vein of George Lewis's Voyager and Adam Linson's Odessa), and those conceived as extensions of the instrument, under the 'control' of their human performer (collectively grouped under the banner of effects processors).

On a conceptual and political level, these two ideas are practically opposites. At the most basic technical level though:

(external sound input —> processing of this in some way —> output sound derived from processes)

they could also be seen as technically very similar, part of the same broad family of sound tools.

This observation has convinced me that there might be something worthwhile in attempting to deconstruct the autonomous/instrument opposition in a live performance system.

 

problems #1

One of the main problems I've had with this 'obsession' is that the autonomy/subservience of a given system is to a large degree a conceptual construct, embedded in design choices that are quite distinct from each other and do not easily combine.

The following represents my attempts at trying to bridge the gap – while seeking also not to camp too firmly in either field. I’d be interested to hear your feelings/critiques of how well this comes across…

 

successes #1

An early process-sketch that I am fond of involved using concatenative synthesis to replace the live sound of my solo improvisations with grains from a soundbank made from the same live stream. Sounds below: the top one is just the 'wet' concatenation; the bottom is a different improvisation, wet+dry:

[Audio player: 'wet' concatenation]

[Audio player: wet+dry improvisation]

The thing I liked most about this setup is the way almost all of the sonic detail and intention in what I try to put out on the guitar gets swapped out and re-purposed in a way that rips up and disembodies my intentions while reflecting the same gestures and shapes. I've found that, despite its simplicity, this in itself can be a pretty fruitful 'interactive' system for solo improvising, as the transformations keep twisting and pushing you around, cutting off paths and creating new ones.
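The core of the process is simple enough to caricature in a few lines (a Python sketch; my actual patch differs in grain size, features and matching, so treat these as placeholders):

```python
# Caricature of the setup: grains from the live stream are banked, and
# each new input grain is answered by its nearest banked neighbour.
# Grain size and features (RMS + spectral centroid) are placeholders.
import numpy as np

def features(grain, sr=44100):
    spec = np.abs(np.fft.rfft(grain))
    freqs = np.fft.rfftfreq(len(grain), 1.0 / sr)
    centroid = (freqs * spec).sum() / (spec.sum() + 1e-9)
    rms = np.sqrt((grain ** 2).mean())
    return np.array([rms * 100.0, centroid / 1000.0])  # crude scaling

class Concatenator:
    def __init__(self):
        self.bank, self.feats = [], []

    def process(self, grain):
        """Bank the incoming grain; output the closest grain banked so
        far (silence until the bank has something in it)."""
        f = features(grain)
        if self.bank:
            dists = [np.linalg.norm(f - g) for g in self.feats]
            out = self.bank[int(np.argmin(dists))]
        else:
            out = np.zeros_like(grain)
        self.bank.append(grain)
        self.feats.append(f)
        return out
```

The point is only the shape of the loop: bank everything, answer each frame with its nearest banked neighbour, so the output always reflects the input's gestures while never being the input itself.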

 

successes #2 + problems #2

Growing out of the experiments above, I have built a more independent improvising agent that still uses concatenative synthesis of the live input as its voice, but which generates its own distinct phrasing and gestures rather than mirroring the input directly.
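Schematically, the difference from the mirroring setup is in who decides when a grain sounds; something like this (hypothetical, not the agent's actual decision code):

```python
# Sketch: instead of answering every input grain (mirroring), the agent
# schedules its own onsets from a random-walking density parameter and
# only draws its *material* from the live-input bank. Numbers invented.
import random

def phrase_onsets(duration_s, start_density=1.5):
    """Generate onset times; density (events/sec) random-walks, so the
    agent phrases in bursts and lulls of its own making."""
    t, density, onsets = 0.0, start_density, []
    while t < duration_s:
        t += random.expovariate(density)      # next inter-onset gap
        density = min(8.0, max(0.2, density + random.uniform(-0.4, 0.4)))
        onsets.append(round(t, 2))
    return onsets

print(phrase_onsets(10))  # the agent's own gesture, not the input's
```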

This incarnation had its first test run in a small live improvisation last week:

[Audio player: first live test run]

I have mixed feelings about this performance. The voice tended towards using samples of the strongest attacks, while generally ignoring more textural sounds. This made its gestural content fairly predictable and somewhat unsatisfying for me to interact with.

On the positive side, however, as a first outing and proof of concept, it could certainly have gone a lot worse – it worked. My immediate goal now is to see what tweaking can be done to improve the behaviour, and to try and coax some unpredictability, belligerence and subtlety out of it…

Here are just some of the comments on Mike's project; for the full comment section click here

Owen –

Thanks Mike. This hazy territory between co-player and instrument suggests a rich seam of themes to focus on. The idea of mirroring (or not) certainly suggests itself as one to explore. How can the possibilities be broken down here? You've got slavish repetition at one pole, and complete inversion at the other (but this kind of comes back on itself, I guess). My instinct is to guess that it's in the timing of the machine's utterances that the most profound differences will emerge (Ivan Illich once said something along the lines of 'in order to master a language one must learn its pauses'). Have you read my PhD thesis? I riff on some of the same ideas you're playing with here. Technically you might be almost there, and much of the remaining work could be in actually learning the possibilities of what you have. Can you have multiple corpora in a piece that the machine uses for its voice? This might give the possibility of making some richer forms, if you introduce some distinctions about where you put certain samples as you play.

Martin Parker –

I'm enjoying listening to what you've made, Mike, while commenting, so forgive me if I change tack halfway through a sentence. Based on the aura that I'm in now (first file), you have an issue with playback speed, which eclipses by some way what you're able to play physically. Slow her down and stop her stuttering and see what she says. What if her speed was glacial by comparison with yours? A sympathetic electronic voice is not a still one, but it doesn't have to be a fast one.
Structurally (file 2), I feel like she's making you play differently, so you're getting to that co-player place you wanted to be in and making form with it. I also agree (with Owen) that your reflections on merging autonomous systems with effects processors are timely.
The so-called live electronics world grew up, in part, in reaction to the limitations of liveness and presence that reveal themselves when recorded music gets played on stage, but the environment we're in now is also an inevitable outcome of technological development; everything has sped up. Are we still playing electroacoustic music, just with close-to-real-time tools?
The challenges of liveness and presence in music making also have a lot to do with human-to-human communication, and I'd be really interested to read a section in your essay about how you as a human feel a part of the instruments you play and design, and the extent to which they are structures you climb over and explore, taking different routes each time. What of the audience? Where are they as receivers of the information you provide in sound and gesture?
