Sound Designers Exhibition

Today the Sound Designers, as well as other ECA courses, are displaying their projects at the Edinburgh College of Art.


The exhibition runs from Wednesday 24th to Saturday 27th of August, between 11am and 6pm. Free and open to all.

The exhibition will be held in the Sculpture Court at ECA and will display the work of Sound Design, Acoustics and Music Technology, and Digital Composition and Performance. Saturday the 27th of August will feature performances as well as exhibitions. They are free to attend and you can find tickets here.

Check it out and see the work of some of the most creative and talented students at ECA.

For more information on the event, check out the Facebook event here: www.facebook.com/events/315043795553332/?notif_t=plan_reminder&notif_id=1472029141499820

ECA Main Building and Hunter Building, 74 Lauriston Pl, Edinburgh EH3 9DF


Interim Presentations – Where the projects are and feedback – Part Three

Create an immersive audio-visual experience

Dave Smith

My project has gone through a number of changes since mid-May. I initially intended to produce an audio-visual fixed media piece to explore movement and transition in sound. I ran with this idea for a few weeks and built some sketches around this theme. However, I felt that this linear approach was perhaps too simplistic and was lacking in scope, so I have decided to explore this theme within the context of an interactive environment.

Accordingly, I have been building an audio-visual environment with FMOD and Unity. The idea is to offer the participant a chance to play with the sonic environment based on the choices they make within it. The most obvious choice is where to go (in this case, where to direct the first-person controller). Although this may seem like a rudimentary and self-evident thing to point out, I think there is potential for some really interesting results just within the bounds of this simple variable. Since sounds are placed within a (virtual) space, a particular route through the space can influence fundamental properties of the composition such as what sounds are heard, when they are heard, for how long and their relative levels in the mix.

Below is a brief sketch of an environment I’m working on at the moment – dark and ominous, which I think works quite well. One thing I’ve been trying to do is to construct sounds in such a way that moving between their relative locations creates a feeling of gradual transition and movement with intermingling textures of sound. I want the sonic environment to flow with little in the way of obvious boundaries between one sound and the next. Here, I’ve attached sounds to each of the dark and light ‘monoliths’. You can move around and explore the ways in which sound transitions from place to place. Lots of work needed obviously, although I’m quite happy with the way the dark monolith sounds create a kind of broad, composite texture while at the same time providing individual areas of interest. I intend to work on more interesting ways of triggering effects when you move through the monoliths, randomise things a little, and generally build a bigger environment.

 

To an extent, I’ve also tried to segregate frequency bands when creating sounds and placing them around the environment. For example, I want bass heavy sounds to have more range and form a kind of foundation upon which the ‘short-range’ mid and treble sounds can sit. This way I have a bit more control over the general structure of the sonic environment rather than having bass/mid/treble frequencies ‘locked’ in individual sound files.
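
To make the idea concrete, here is a minimal, tool-agnostic sketch in Python (not the actual FMOD/Unity setup – the source names, positions and ranges are invented) of how distance-based gains, with a larger range for the bass layer, shape the mix along a route:

```python
import math

# Hypothetical sound sources: position (x, z) and a maximum audible range.
# Bass-heavy sounds get a larger range so they act as a 'foundation' layer.
sources = [
    {"name": "dark_monolith_bass", "pos": (0.0, 0.0),  "max_range": 40.0},
    {"name": "light_monolith_mid", "pos": (10.0, 5.0), "max_range": 12.0},
    {"name": "shimmer_treble",     "pos": (-6.0, 8.0), "max_range": 8.0},
]

def gain_at(listener_pos, source):
    """Linear roll-off from full level at the source to silence at max_range."""
    dx = listener_pos[0] - source["pos"][0]
    dz = listener_pos[1] - source["pos"][1]
    distance = math.hypot(dx, dz)
    return max(0.0, 1.0 - distance / source["max_range"])

# A 'route' through the space is just a sequence of listener positions;
# each position yields a different relative mix of the sources.
route = [(0.0, 0.0), (5.0, 2.0), (10.0, 5.0)]
for pos in route:
    mix = {s["name"]: round(gain_at(pos, s), 2) for s in sources}
    print(pos, mix)
```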

Problems: Mainly technical at the moment – FMOD and Unity can be an unstable combination to work with and a few things have hindered workflow considerably. For example, Unity has a habit of randomly freezing my laptop (Windows 10 issue I suspect), requiring a reboot. In addition, tweaking parameters in FMOD requires rebuilding master banks which then need re-importing to Unity to test whether the tweak sounds any good. This takes a lot of time (and patience). Perhaps I am doing something wrong here…any technical advice would be much appreciated! Another issue I managed to trace to overuse of FMOD’s convolution reverb, which was presumably asking too much of the CPU and creating glitches in the sound. However, I liked the sound of the convolution reverb too much so I applied the convolution offline and fed the rendered sounds into FMOD instead. I suppose sometimes it’s about a compromise between what you want and what is technically feasible.
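
For anyone curious about the offline-render workaround, here is a minimal sketch of the idea in Python with SciPy (the file names are placeholders and this is not the workflow actually used – it just shows the “convolve offline, then import the rendered file as a plain asset” principle):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical file names: a dry source sound and an impulse response.
rate, dry = wavfile.read("dry_monolith.wav")
ir_rate, ir = wavfile.read("cathedral_ir.wav")
assert rate == ir_rate, "resample first if the sample rates differ"

# Work in float, mono, for simplicity.
dry = dry.astype(np.float64)
ir = ir.astype(np.float64)
if dry.ndim > 1:
    dry = dry.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)

# Offline convolution reverb: convolve the dry sound with the impulse response.
wet = fftconvolve(dry, ir)
wet /= np.max(np.abs(wet))  # normalise to avoid clipping

# Write the rendered 'wet' file, ready to be imported into FMOD as a plain asset.
wavfile.write("wet_monolith.wav", rate, (wet * 32767).astype(np.int16))
```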

Any comments, suggestions or constructive criticism would be great.

Here are just some of the comments on Dave’s project; for the full comment section, click here.

Owen –

Thanks Dave. Is there a title for the project yet? It seems to me you have a range of possibilities in front of you and having your themes developed will help pin down what you do next. As it stands there’s scope to develop more variety into the sound world, but the degree to which you pursue that, and how you do it, depends largely on what it is you want people to do / have done to them. Is this an explorable composition, for instance, or an instrument? What do such categories imply for the ways in which you might curtail / encourage certain behaviours and possibilities? Are there features of using 3D worlds as interfaces for things like this that seem to dictate aspects of the sonic form? At the moment everything is quite slow and gliding: how would one make a jump cut in this kind of compositional process? Can you trace out particular kinds of overall shape in the environment? Can you do an A-B-A? A sonata form? A rondo? If you were to perform this, what would be the lower / upper limits for a workable duration?

tinpark –

Thanks for your post Dave and for the demo example. I don’t like to pass you on to Jules WRT the FMOD stuff as he’s so busy, but I’m a year out of date with it at the moment and haven’t caught up this summer yet. If Jules doesn’t respond in time, maybe we can resolve some of these issues as a group. Andrea has been making progress in spite of FMOD/Unity too… If you’re still in the experimental stages of this (sounds like you’re past that now), you might consider sending commands from Unity to Max and bypassing FMOD altogether.
I’m drawn to you considering this an explorable composition, rather than an instrument. For me, this kind of work brings sound design and composition together in meaningful ways. You have the formal concerns of shape and experience to deal with (composition), but are designing these as “ranges” rather than shaped absolutes. If you’re composing with shape, terrain, object, speed, how can you parameterise these? What are you designing with – spectra, pace, rhythm? What kinds of compositional models are you interested in borrowing from, and what formal shapes work well in these kinds of environments or might be explored further because of this environment? How do we listen/hear if you’re also playing, as opposed to consuming in a still listening capacity? Do we begin to participate in the music differently, and how does this affect the short- to mid-term form and structure? Can a piece of music like this be performed or can we only play it?
Get your title and chapter list nailed down this week, make sure that you promise what you’ve been doing, and enjoy making your piece/pieces. What’s the piece about/for? Why does it exist? Not just to be droney and dark, right? It’s droney and dark for a reason, isn’t it…?

 

Exploring the potential of percussive impulses as a gestural means of parameter control for live electronics

Angus Stewart

Current progress:

At the moment my most universal means of parameter control is based on a 4-way switch system, where one of 4 possible sound analysis values can be output:

Current value: This is the most obviously transparent and linear of the four. Parameters will be set immediately to the current value.

Mean value: Another relatively linear means of parameter control, inspired by one of the basic propositions of free improvisation in Frederic Rzewski’s Little Bangs, stating that “the past determines the present.” This results in less transparent changes (especially as the amount of data received by the mean object expands) but also has a reset function so the user can gain a more immediate sense of control.

Differential between current and mean values: This provides a more subtle and less linear means of control by simply subtracting the current value from the mean. I have found this to be most effective when used with frequency-orientated processing. It is made especially interesting in the way that materiality-orientated extended techniques (Chris Corsano (www.youtube.com/watch?v=2dIpdqH22kY), Mark Guiliana (www.youtube.com/watch?v=qDQ7eHNufNs)) can break the system, rendering it less linear by emphasizing the offset irregularly and at extremes.

Stored sequences that move between the previous two values: This function stores the previous two current values, and then repeatedly moves between the two (line object) in the amount of time between the two values changing (timer object). This duration of time is randomly changed slightly (by a maximum of 50 ms) with each repetition (drunk object) in order to give it a looping quality that is constantly changing subtly. Multiple loops can be set up on multiple parameters in order to create multi-metered textures.
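
For readers without the patch in front of them, here is a heavily simplified Python sketch of the 4-way switch. The loop mode is reduced to drifting between the two stored values; the timed line/timer/drunk behaviour described above is not modelled, and the numbers are invented:

```python
import random
from collections import deque

class FourWaySwitch:
    """Toy model of the 4-way output switch, assuming a stream of analysis
    values (e.g. amplitude readings per drum hit) arriving one at a time."""

    def __init__(self):
        self.history = []                 # values since the last reset (for the mean)
        self.previous = deque(maxlen=2)   # the last two values (for the loop mode)
        self.mode = "current"             # "current" | "mean" | "diff" | "loop"

    def reset_mean(self):
        self.history = []

    def update(self, value):
        self.history.append(value)
        self.previous.append(value)

    def output(self):
        if self.mode == "mean":
            return sum(self.history) / len(self.history)
        if self.mode == "diff":
            mean = sum(self.history) / len(self.history)
            return self.history[-1] - mean
        if self.mode == "loop" and len(self.previous) == 2:
            # Drift between the two stored values, nudged by a small random
            # amount to stand in for the drunk-object jitter.
            a, b = self.previous
            return random.uniform(min(a, b), max(a, b)) + random.uniform(-0.05, 0.05)
        return self.history[-1]            # "current" mode (and fallback)

switch = FourWaySwitch()
for hit in [0.2, 0.8, 0.5, 0.9]:
    switch.update(hit)
switch.mode = "diff"
print(switch.output())  # current value minus the running mean
```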

vimeo.com/173440385 password is: sd

(apologies for the lack of audio example, I’ve been away from my drum kit for the last week. One will follow shortly.)

In terms of parameters that these values can be sent to, I am considering these in three main categories: time-based processing (Delay Rate, Tremolo Rate, Line loop parameters, Reverb time), frequency-based processing (Pitch Shift, Ring mod Frequency, Filter Frequency) and dynamics-based processing (Overdrive intensity, Tremolo envelope, Delay feedback). At present my patch analyses both global and drum-specific frequency, dynamics and envelope length, as well as durations of time between impulses from drum to drum. With these frequency-, amplitude- and duration-focused player and processing parameters, it is possible to set up some relatively direct and transparent relationships between the acoustic material produced by the player and the patch’s electronic response.

Problems and how I plan to fix them:

Whilst I have had some success in creating a sense of transparency between audience and user, I feel that I have somewhat neglected the more non-linear aspects that I initially set out to incorporate into my patching. One way that I feel could be an effective means of doing this is to blur the boundaries between these three categories of processing:

  • Something I discovered very recently and would like to experiment with is the Squid Axon module. I feel this could work effectively as a means of merging the aesthetic properties of dynamic modulation (tremolo) and frequency modulation (ring mod.) and thus blurring the properties of frequency and time-based amplitude modulation (all of which go back to oscillation anyway.) This is something that could potentially not work at all, but I feel that if the resultant waveform could be smoothed a bit to eliminate popping, the signal could work effectively as an amplitude controller.
  • Hz to millisecond conversion and vice versa (see the sketch after this list). I have already incorporated millisecond to Hz conversion into my tremolo module’s tap tempo. I feel that converting milliseconds to Hz in a delay could work effectively as a means of giving the delay’s wet signal a more pitched quality. I have been working on a module that combines a reverb with a bitcrusher, in an attempt to imitate the effects of the Old Blood Noise Endeavors Black Star Pad Reverb guitar effects pedal’s ‘crush’ mode (www.youtube.com/watch?v=XPiCxpEXpe0; lots of talking in this one, sorry), and feel this could be implemented into this module effectively.
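
The conversion itself is just the reciprocal relationship between period and frequency; a quick sketch, assuming times in milliseconds:

```python
def ms_to_hz(period_ms):
    """One cycle every period_ms milliseconds -> frequency in Hz."""
    return 1000.0 / period_ms

def hz_to_ms(freq_hz):
    """Frequency in Hz -> period (or delay time) in milliseconds."""
    return 1000.0 / freq_hz

# e.g. a tap tempo of 500 ms maps to a 2 Hz tremolo rate,
# and a 440 Hz 'pitch' maps to a ~2.27 ms delay time,
# short enough that the delayed wet signal starts to sound pitched.
print(ms_to_hz(500))   # 2.0
print(hz_to_ms(440))   # 2.272727...
```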

When performing and practicing with the patch, I’ve found that I have struggled particularly with longer-form improvisations. This is largely because transitions between combinations of modules are currently somewhat difficult. Being able to make these transitions is something that I feel is essential in performance situations, as it gives the player a larger palette of sounds at their disposal. I feel that some additional means of control are necessary, and would therefore like to explore the potential of using a single expression pedal alongside a pattr object to switch between a number of preset signal path configurations.
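
One possible shape for the pedal-to-preset idea – a sketch only, with invented preset names and values; the real patch would hold these in pattr/pattrstorage rather than Python:

```python
# Assume the expression pedal arrives as a 0-127 MIDI CC value.
PRESETS = [
    {"name": "dry-ish",      "delay_ms": 120, "ringmod_hz": 0,   "drive": 0.1},
    {"name": "metallic",     "delay_ms": 35,  "ringmod_hz": 220, "drive": 0.3},
    {"name": "wash",         "delay_ms": 480, "ringmod_hz": 60,  "drive": 0.6},
    {"name": "broken radio", "delay_ms": 8,   "ringmod_hz": 900, "drive": 0.9},
]

def preset_for_pedal(cc_value):
    """Map the pedal's 0-127 range onto evenly sized zones, one per preset."""
    index = min(len(PRESETS) - 1, cc_value * len(PRESETS) // 128)
    return PRESETS[index]

for cc in (0, 40, 80, 127):
    print(cc, preset_for_pedal(cc)["name"])
```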

Here are just some of the comments on Angus’ project; for the full comment section, click here.

awrwsound –

I like the sound of the experiments with snares you’ve done previously and I’d be interested to see how your patch can work with different snare types. How would a ringier snare sound compared to something a bit more woody?

Aside from that, it’d be interesting to see how the system reacts to what the player’s doing and how the player then reacts to that. It could be almost like a drum jam with a machine, rather than a performance aid. I’m definitely hoping there’s a chance to see it in action soon.

Seek could be handy as well, which could dip into a buffer and pull out certain bits. The regularity of this could be dictated to the player and controlled using tempo/random. Could seem super basic but might be interesting to see in use. Could also be useless info as there’s a very good chance that I’ve completely misinterpreted everything and am just typing garbage.

Looking forward to hearing more!

Caleb James Abbott –

Hey Angus!
I wanted to reiterate some of my thoughts on your performance the other night, which might help you understand how I approach what you’re doing, which may or may not be useful. Nonetheless, here I go! I really enjoyed your performance. As you know I am also a percussionist, so I can appreciate your project from several perspectives.
It’s something I’ve brought up in my feedback to Mike and Matt as well, and I think it’s worth noting here too. I don’t think we should forget about the performative aspects of what we do, beyond what the computer processes. You mentioned finding it difficult to generate long form pieces (not that this is the answer to your issue, but I think it’s one helpful consideration) and I found I wanted to hear you play more, without the tool. I’ll elaborate. I think audiences will see it in a similar way as well – performer first and tool second. I realize this project is about development of a tool, but I feel that your performance can benefit from gradually bringing the tool in and out, to accompany your performance. This doesn’t mean that the tool can’t take over and become the focus, but I wonder if you thought more about your relationship to it (the tool) and how that would enhance your performances.
I look forward to more of this! And to trying it at some point too!

How are techniques such as Diegesis, Hyperreality and Anempathetic Sound used as Thematic tools in the creation of sound designs?

Corin Ashfield

For my summer project I’ve been looking at how techniques/features such as diegesis/non-diegesis, anempathetic sound and hyperreality can be used as thematic tools in the construction of sound designs and how they can be used to evoke conceptual resonance and establish temporalization. Originally, I was planning on researching how selective use of sound and tones can affect the theme of a piece and creating a range of examples consisting of a short film for which I would construct 6 or 7 sound designs to prove my point before analysing them further. However, after extensively researching the field of audio-vision through a plethora of reading material I realised how broad and diverse this area of research is and decided that I needed to broaden my scope of study to include, alongside the aforementioned areas, subcategories such as dubbing, rhetorical effects and synchronisation.

Alongside my extensive research into the area, I have been experimenting with creating sound designs (often rough and needing refinement) communicating a certain area of research I’m particularly interested in at a given time. Many examples can be found in my blog posts.

Below is a link to an example of a video I created in an attempt to effectively display anempathetic sound. This is an area that I have been particularly interested in, as I have frequently come across the question “If anempathetic sound works to effectively convey/emphasise a mood on-screen, can it truly be called anempathetic?”. To try and effectively convey anempathetic sound I created a three-part sound design, one that worked with the picture and two that were completely unrelated: vimeo.com/172330971

I admit I have become somewhat obsessed with researching these areas and analysing examples I find, and one of the main successes of my project has been that I have considerably improved my understanding of a range of areas in sound design and have introduced myself to areas that I had not even thought of before (i.e. temporalisation). One hang-up has been the practical side of my project, as I have created a range of videos but nothing that I feel is substantial enough to submit as part of the final hand-in. I am therefore working on finding videos long/substantial enough for me to effectively display a number of the areas that I have been researching, while also working on a photo montage that I may temporalise or give conceptual resonance to through sound.

Here are just some of the comments on Corin’s project; for the full comment section, click here.

Owen –

Thanks Corin. How intact do you feel that your themes are at the moment? Anempathy seems to be your main focus here. Are you still probing your ideas about hyperreality etc.? Meanwhile, there are certainly some interesting questions raised by your work so far. The most urgent, I think, is where the division is to be found between anempathetic sound and something just arbitrarily laid over a clip. One aspect of this is that the anempathetic effect only stands much chance of taking hold in the context of empathetic sound design having already established a context, i.e. it works in juxtaposition to its temporal surroundings. Another aspect may concern the resonance (for want of a better word) of how anempathetic materials are presented, e.g. the common combination of brutal violence with the everyday (such as light music). A third could be to do with playfulness around the diegetic boundary. I think you could benefit from letting some writing and analysis guide your practical work here. How, really, does anempathetic sound seem to be doing its work in some canonical examples? Ditto for your other concerns with hyperreality etc. Can you identify principles along the lines I’ve suggested? Do these seem to work in practice?

tinpark –

Thanks Corin, I’m intrigued by your work and delighted that your research is opening up new approaches for you. I’m also fascinated by the film(s) you’ve made. It may be that you don’t even need to make a longer video, but several other short experiments that pepper and illustrate a serious discussion of the areas you’ve been working in. To that end, it’s worth really concentrating on the shape of your writing and seeing what design work is needed to complete your quest as it stands. How’s the title and chapter list coming on? Does what you’ve put down create an opportunity to design more sounds? I like the idea of making a photo montage, but not if that’s an extra thing that dilutes the depth you’ve been getting to with the video above. Finding a longer film online might be a way to move forwards, but are there any collaborators from the Film classes that we know who might have access to something? The film curators from SFM, Carys and Devin, know loads of short film makers; is it worth asking them for some suggestions or contacts?

Mike Parr-Burman

My project is based around a live performance system that interacts and improvises along with its audio surroundings, using rehashed fragments of that input as the source for its sonic output.

 

obsessions

My main obsession while working through this project so far has been to find a way of exploring the space between live music systems that aim for autonomy and independence from the human musician (in the vein of George Lewis’s Voyager, and Adam Linson’s Odessa), and those conceived as extensions of the instrument, under the ‘control’ of their human performer (collectively grouped under the banner of effects processors).

On a conceptual and political level, these two ideas are practically opposites. At the most basic technical level though:

(external sound input —> processing of this in some way —> output sound derived from processes)

they could also be seen as technically very similar, part of the same broad family of sound tools.

This observation has convinced me that there might be something worthwhile in attempting to deconstruct the autonomous/instrument opposition in a live performance system.

 

problems #1

One of the main problems I’ve had with this ‘obsession’ is that the autonomy/subservience of a given system is to a large degree a conceptual construct, embedded in design choices that are quite distinct from each other and do not easily combine.

The following represents my attempts at trying to bridge the gap – while seeking also not to camp too firmly in either field. I’d be interested to hear your feelings/critiques of how well this comes across…

 

successes #1

An early process-sketch that I am fond of involved using concatenative synthesis to replace the live sound of my solo improvisations with grains from a soundbank made from the same live stream. Sound below: the top one is just the ‘wet’ concatenation, the bottom is a different improvisation, wet + dry:

[audio: the ‘wet’ concatenation]

[audio: a different improvisation, wet + dry]

The thing I liked the most about this setup is the way almost all of the sonic detail and intention in what I try to put out on the guitar gets swapped out and re-purposed in a way that rips up and disembodies my intentions, while reflecting the same gestures and shapes. I’ve found that despite its simplicity this in itself can be a pretty fruitful ‘interactive’ system for solo improvising, as the transformations keep twisting and pushing you around, cutting off paths and creating new ones.
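
For anyone unfamiliar with the technique, here is a toy Python sketch of the underlying idea: replace each incoming grain with the closest-matching grain from a bank built out of the same stream. It matches on RMS loudness only, as a stand-in for the richer descriptor matching a real concatenative tool would use, and is not the actual implementation:

```python
import numpy as np

GRAIN = 1024          # grain length in samples
bank = []             # list of (feature, grain) pairs built from the live stream

def feature(grain):
    return float(np.sqrt(np.mean(grain ** 2)))  # RMS loudness

def process(block):
    """Split a block into grains; output the nearest-matching stored grain."""
    out = []
    for start in range(0, len(block) - GRAIN + 1, GRAIN):
        grain = block[start:start + GRAIN]
        f = feature(grain)
        if bank:
            # pick the stored grain whose feature is closest to the live one
            _, match = min(bank, key=lambda pair: abs(pair[0] - f))
            out.append(match)
        else:
            out.append(grain)
        bank.append((f, grain.copy()))  # the live stream keeps feeding the bank
    return np.concatenate(out) if out else np.zeros(0)

# Fake 'live input': two half-second blocks of noise at 44.1 kHz.
for _ in range(2):
    block = np.random.randn(22050) * 0.1
    wet = process(block)
    print(len(bank), "grains in the bank,", len(wet), "samples out")
```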

 

successes #2 + problems #2

Growing out of the experiments above, I have built a more independent improvising agent that still uses concatenative synthesis of the live input as its voice, but which generates its own distinct phrasing and gestures, rather than mirroring the input directly.

This incarnation had its first test run in a small live improvisation last week:

[audio: first live test of the improvising agent]

I have mixed feelings about this performance. The voice tended towards using samples of the strongest attacks, while generally ignoring more textural sounds. This made its gestural content fairly predictable and somewhat unsatisfying for me to interact with.

On the positive side however, as a first outing and proof of concept, it could certainly have gone a lot worse – it worked. My immediate goal now is to see what tweaking can be done to improve the behaviour and to try and coax some unpredictability, belligerence and subtlety out of it…

Here are just some of the comments on Mike’s project; for the full comment section, click here.

Owen –

Thanks Mike. This hazy territory between co-player and instrument suggests a rich seam of themes to focus on. The idea of mirroring (or not) certainly suggests itself as one to explore. How can the possibilities be broken down here? You’ve got slavish repetition at one pole, and complete inversion at the other (but this kind of comes back on itself, I guess). My instinct is to guess that it’s in the timing of the machine’s utterances that the most profound differences will emerge (Ivan Illich once said something along the lines of ‘in order to master a language one must learn its pauses’). Have you read my PhD thesis? I riff on some of the same ideas you’re playing with here. Technically you might be almost there, and much of the remaining work could be in actually learning the possibilities of what you have. Can you have multiple corpora in a piece that the machine uses for its voice? This might give the possibility of making some richer forms, if you introduce some distinctions about where you put certain samples as you play.

Martin Parker –

I’m enjoying listening to what you’ve made Mike while commenting so forgive me if I change tack half way through a sentence. Based on the aura that I’m in now (first file) you have an issue with playback speed which eclipses by some way what you’re able to play physically. Slow her down and stop her stuttering and see what she says. What if her speed was glacial by comparison with yours? A sympathetic electronic voice is not a still one, but it doesn’t have to be a fast one.
Structurally (file 2), I feel like I believe that she’s making you play differently, so you’re getting to that co-player place you wanted to be in and making form with it. I’m also agreeing (with Owen) that your reflections on merging autonomous systems with effects processors are timely.
The so-called live electronics world grew up, in part, in reaction to the limitations of liveness and presence that reveal themselves when recorded music gets played on stage, but the environment we’re in now is also an inevitable outcome of technological development; everything has sped up. Are we still playing electroacoustic music, just with close-to-real-time tools?
The challenges of liveness and presence in music making also have a lot to do with human-to-human communication, and I’d be really interested to read a section in your essay about how you as a human feel a part of the instruments you play and design, and the extent to which they are structures you climb over and explore, taking different routes each time. What of the audience, where are they as receivers of the information you provide in sound and gesture?

Interim Presentations – Where the projects are and feedback – Part Two

Acheron Crossing: a practical study of narration through dynamic and fixed spatialisation

Andrea Trinci

My project aims to explore the potential of spatialization, in particular its capability to create rich and immersive ambiences. The storyboard will draw from the classical myth of the Acheron as the threshold between the world of the living and the world of the damned. This setting was chosen from both a practical and a narrative point of view. It allows me to create a completely dark space (possibilities are that the user is a damned soul or that the river is underground), which is a key element in focusing the narration on sound alone; it also refers to something commonly known, yet gives me a lot of freedom from a design perspective, as the myth is open to interpretation. The idea is to create an immersive VR environment that manages to unfold a story only through sounds.

I am currently developing the backbone of the scenes I’m missing. I have almost 4 out of 8 scenes fully working (but still not at the level I want them to be). The last 4 scenes are very passive so it won’t take long to develop them.

Once I’m done with Unity I can focus more on the audio refinement. The spatialization is working but I want to put my efforts in refining parameters further (i.e. distance or surround) within FMOD through filters and ambiences.

Obsessions: I spent almost a month trying to figure out how to create a 3D audio environment for the purpose of this project. Bouncing between toolkits and plugins for cinematic VR, everything was so confusing and without a proper manual to explain the workflow, I almost risked changing the entire project out of frustration. In the end I managed to use the Oculus toolkit properly and started doing the designs.

Problems: Even with everything working, there’s still a major bug that limits my design possibilities: the attenuation curve for the FMOD event is fixed, and if I disable the standard one, the event does not play in Unity. Scripting is definitely not my strong point; therefore, designing all the interactions of the scenes takes me longer than it should. Another problem is that directionality is very subjective and dependent on knowledge of the source, so I’ll need a lot of testing from people who have never listened to my project to achieve something realistic because, as it stands, it is tuned for myself and I’ve been working on this project for months. Lastly, the biggest problem I’m facing is the lack of a definite vision of the final shape of the project: I don’t know if I want to focus on narrative or on spatialization. What I’m doing for now is just the surface of this project, which I perceive as soulless, with no clear direction.

Successes: Once set up properly, spatialization works really well. It’s a weird feeling that you’re not used to having in an interactive experience, as it emulates reality almost perfectly. Working with such a new topic can often be really frustrating, but when you manage to achieve something that works it feels immensely rewarding. It is a project that I would gladly show to people, and this is certainly symptomatic of a project that for me is worth working on, as I’m always very critical of my work.

I’ll attach a video that shows the first scene in action:

 

I’d like to have some criticisms/suggestions from other people, so feel free to comment on this post, and thanks in advance.

Here are just some of the comments on Andrea’s project; for the full comment section, click here.

Owen –

Thanks Andrea, good to see that things are coming together. At this stage I think it’s going to be very important to develop your thematic ideas to help guide the rest of the practical work. In particular, my feeling is that fewer scenes with their sound fully developed will make a stronger submission, with more to talk about, than risking many scenes and running out of time. So now is a good moment to reflect on what we have and think about how it’s sonically populated. How much sound do you need to inhabit the scene to make it feel suitably hellish? How can some fitting textures be developed that give the impression of crowdedness without using up too many resources? What sense of materiality do you wish to convey, given the virtual acousma you’ve adopted? Are there solid surfaces (the reverb implies that there are)? What of the floor? What possibilities exist for the voice? Can its intimacy be modulated? Can the transformations be varied to keep us, as ‘players’, unbalanced and suitably fearful?

davesmithsound –

Hi Andrea, I really enjoyed listening to your walkthrough video. The spatialization works very well. I’m wondering whether there is a “goal” for the player in this environment? Is there something they have to do within the story or are they passive observers? You were questioning whether you should focus more on narrative or spatialization – I am tempted to suggest narrative, since you have chosen to base your project on a mythical underworld which has so much potential for telling a story through sound. I know this is relatively early days, but I was hungry for more layers of sound in the ambience, suggestive of things to explore in the distance perhaps. Since there is a horror element here, perhaps you could use sounds that are ambiguous (particularly in the distance), as I think that’s an effective way to build tension.

I was also wondering what you meant about the FMOD attenuation curve thing – do you mean you tried disabling the “distance attenuation” on the Event Macro tab but the event doesn’t play? Did you add a distance parameter? Try adding the inbuilt distance parameter as normal, then set the Event Macro distance attenuation to “off” (and set the min and max distance to whatever you need). When I do that, the event still plays in Unity and I can use automation on the master volume to get a custom distance curve. Perhaps you mean something different though?

Matt Harold –

Hell(o)

The localisation is great and it seems worth the effort of getting it right! Is it possible to monitor the person’s head movements? E.g. if they spend a lot of time looking in one direction then the ‘scene’ will move onwards so there is a sense of response. A subtle sound could exist which needs to be centrally focused on by the user, and after they ‘center’ it for a few seconds a transition occurs? Also, where do you want to take your audience? I once created a piece based on the Greek rivers that led to hell, and the Acheron was specifically related to sorrow as opposed to lament, fire, forgetfulness or hate, so maybe it’s worth thinking about the kind of emotion you want to portray in contrast to other hellish emotions in order to fully explore a specific version of hell. Or, as it is a river, you could construct it as a journey by boat where you can look at the scenes around you on the shore or within the river.

 

spawning and swarming: sounding the expanded audio input

Caleb Abbott

For my final project, I am developing a vocal processing tool for live performance. To listen to samples of the tool go here. I recommend the following recordings to give a sense of where I’m at.

  1. scale/spawn/swarm/presence
  2. live 24/06/2016
  3. pace & body

Spawning & swarming is, in essence, the way I have come to describe the process for the tool. Essentially, I treat spawn/swarm as the way in which the sounds are transmitted, collected, and used within the tool. The tool is broken down into five parameters. They are:

  1. body
  2. presence
  3. scale
  4. pace
  5. weight

The final submission will contain:

  • video documentation of the parameters of the tool
  • live performance
  • 4 recordings of performances
  • the code – Max/MSP
  • the report

Obsessions:

I have been actively coding, recording, researching, and blogging (documenting) about the tool for just under two months. The focus has been heavily placed on the development of the tool. In the next few weeks I will be ending this section of it and moving more towards the report writing, the final presentation/performance, and rehearsing with the tool. There appears to be no shortage of time that could be spent on development.

Problems:

Initially, I felt compelled to create a generalized tool which could be used by anyone, for anything sound-related. The difficulty of this project, so far, has been in accomplishing that. That isn’t to say the tool, in its current form, isn’t learnable; it’s just not user-friendly. I feel this is because I have customized every aspect of it to my performance habits. This isn’t entirely bad. The tool was intended, either way, to be functional for my use first, and secondly for the use of others. I may altogether abandon the idea of a commercial version for this project.

In addition to the latter, finding a suitable device to perform the patch with has also been tricky. I have settled on temporary usage of the Korg nanoKONTROL2, but will be upgrading to the Livid CNTRL:R for the last stage of development.

Successes:

On June 24, 2016, three others from the cohort (Mike, Matt, and Angus) and I did a scratch performance at the Alison House Atrium to showcase our work in progress. This was the first live demonstration of the project and it reflects where I am at with the code, my approach to performing, and my aesthetic and compositional ideas to date. The primary purpose of this experience was to see how the tools would act in a live setting, how they would sound together amplified (beyond headphones), and to gain a bit of insight into how an audience would engage with the piece. In general, and given the positive feedback, I feel this was a successful and useful experience.

Lastly, I am happy and encouraged when I work on this project. I think this is probably the most successful aspect I can highlight. I feel I am pushing my abilities as it pertains to Max/MSP, recording, and performing, and that I’m challenging my own comforts.

Mike Fowler


I have been exploring the use of optical electronics in systems for sound design. I was interested in this field as I have some practical experience of working with electronics, and wanted to apply this in order to create unique sound designs, installations, and performances that expose correlations between what we see and what we hear. Early experiments involved the use of light sensor circuits using light dependent resistors (LDRs), and two main methods of application became apparent.

The first method, which I will call the AC method, is where output from a light sensitive circuit is passed into an audio amplifier via an AC coupling capacitor. This capacitor removes any DC component from the signal, while allowing any fluctuations (the AC or audio component) to pass, and subsequently be amplified and manipulated as with any other audio signal. This method provides a clear sonic representation of the light hitting the sensor, revealing a hidden soundscape of our local environments. LED and LCD lights found in everything from CD players to bike lights create all kinds of interesting bleeps and drones, as do digital projectors and most other digitally controlled light sources. I created a simple ‘light microphone’ using a solar cell connected to a 3.5mm audio jack. Most audio recorders have an AC coupling capacitor in the preamp, so the solar cell can be connected directly to the recorder.

[Figure: simple LDR circuit]

[Figure: even simpler solar cell ‘light microphone’]

The second method is the DC method. Here there is no coupling capacitor present and we can use the DC output from a light sensor circuit as a control voltage (CV). This CV signal can be used in limitless ways to control audio (or anything else).  I have been using both AC and DC methods with an analogue modular synthesiser to create some unique light-modulated timbres and drones.
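
To make the DC method concrete, here is a small numeric sketch of an LDR voltage divider feeding a control voltage. The supply voltage and component values are illustrative only, not the ones used in the circuits above:

```python
# Illustrative values: a 10 kΩ fixed resistor in series with an LDR whose
# resistance swings from ~200 kΩ (dark) down to ~1 kΩ (bright), fed from 5 V.
V_SUPPLY = 5.0
R_FIXED = 10_000.0

def divider_out(r_ldr):
    """Voltage across the fixed resistor in a simple LDR voltage divider."""
    return V_SUPPLY * R_FIXED / (R_FIXED + r_ldr)

def to_cv(voltage, cv_max=5.0):
    """Clamp the divider output into a 0..cv_max control voltage."""
    return max(0.0, min(cv_max, voltage))

for r_ldr in (200_000, 50_000, 10_000, 1_000):   # dark -> bright
    v = divider_out(r_ldr)
    print(f"LDR {r_ldr:>7} ohm -> {v:4.2f} V -> CV {to_cv(v):4.2f} V")
```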

For control of light input I have been using digital and analogue projectors, both of which have their own unique interactions with the sensors. The digital projector’s output can be controlled using sound as an input signal in order to create a novel autonomous feedback system that can be interacted with.

I am asking myself: what is the reason behind this research? What is the critical angle?

At this juncture, I would like to change direction slightly with the project, to help answer the questions above. I aim to apply my previous experience in the development and delivery of educational workshops to design a cross-disciplinary workshop where participants can build a solar-powered noise-making device. In addition to building the device, participants would learn some fundamental aspects of sound and science, while being introduced to skills such as soldering and circuit building. The device would retain the direct correlation between sound and light that I have been exploring so far, taking it a step further by being light dependent for power and input simultaneously.

Prototype to follow soon…

Here are just some of the comments on Mike’s project; for the full comment section, click here.

Nikita Gaidakov –

I enjoyed your video. There’s something quite impressive about this rather simple but unique and effective setup – the interaction of how the board looks graphically, your stripped-down use of a projector as an exciter, and the sonic results. It strikes me that your core piece is a very simple idea – the transduction of light into sound – and a very simple technology, the basic elements of which can be recombined in a number of permutations, scales, performance and installation situations. I don’t know how the teachers feel, but to me that is substantive enough to be your piece: the prism of different configurations that result from one jumping-off point. Your themes: DIY, accessibility, recombination, simplicity, and the raw interaction of light and sound. So, personally I would encourage you not to be hung up on producing a single final product, but rather an installation which showcases this prismaticism. Invent many combinations, keep them all basic but punchy, contrasting in scale, agency, interaction, positive/negative (what affects what, who plays what, by means of adding or subtracting? etc.)

Martin Parker –

Mike, I’m delighted by the prototype you’ve submitted, and unnerved (slightly) by the fact that you’re shifting focus, however, it tunes in with what you’ve been doing this summer elsewhere and also with a long-term direction in your career. So, if you’re proposing to submit an educational workshop, which models of workshop design are you following? How’s the discourse on participatory arts? What’s going on in the world of coding and education? How’s the open source community faring these days? What’s the point of education workshops if there is no infrastructure to sustain development after the workshop? How do you build communities around hacking and making with electronics? Are they sustainable? Should they be? What of the sound? Do we need any more glitchy rickety flakey electronic devices making noise? if so WHY do we need them, what’s essential about their sound and our relationship to it as makers (and parents/audience/friends)? OK, some broad questions here to draw attention to the fact that you could focus this in any way, but you can’t go in all ways. I’m drawn in particular to the sound-related flakey stuff and how important such sounds might be to society. To dig into this, you’ll find Attali’s ‘Noise’ useful: Attali, J. (1985). Noise: The Political Economy of Music. Minneapolis: University of Minnesota Press. Check out the last chapter at least.

Interim Presentations – Where the projects are and feedback – Part One

The interim presentations have wrapped up; all our students presented their projects so far and allowed others to have a look and give some feedback.

 

Narrative within the Live Environment

Matt Harold

My trumpet audio is passed into a harmoniser. This allows me to define what chord the (monophonic) signal will be pitch-shifted to, which has led me down a musical/compositional route. I can select different chords and inversions using a Launchpad. My ideas pertain to self-accompaniment through the use of freezing to capture moments of energy, and I want the system to be able both to match my live playing energy and to contrast it autonomously.
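
Under the hood, selecting a chord amounts to choosing a set of pitch-shift ratios for the harmoniser voices. A quick, simplified sketch (the chord shapes are generic examples, not necessarily the ones mapped to the Launchpad, and shifts are taken relative to the input pitch):

```python
# Equal-tempered pitch-shift ratios: ratio = 2 ** (semitones / 12)
CHORDS = {
    "major":     [0, 4, 7],
    "minor 7th": [0, 3, 7, 10],
    "sus4":      [0, 5, 7],
}

def shift_ratios(chord_name):
    """Playback-rate ratios for each chord voice, relative to the input pitch."""
    return [2 ** (semitones / 12) for semitones in CHORDS[chord_name]]

print([round(r, 3) for r in shift_ratios("minor 7th")])   # [1.0, 1.189, 1.498, 1.782]
```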

I have been using a patch which records live audio information into a jitter matrix. I can then scroll through this audio (using a foot pedal) or use the ‘drunk’ object to move between frames. The frames are ‘frozen’ indefinitely so there are no moments of silence (before the audio is sent elsewhere).
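
A toy Python model of the frame-scrolling idea (frames are just labels here; in the patch they are frames of audio data in a jitter matrix, and the drunk object does the wandering):

```python
import random

frames = [f"frame_{i}" for i in range(64)]   # the recorded buffer
position = 0

def drunk_step(pos, max_step=4, n_frames=64):
    """Move at most max_step frames in either direction, like Max's drunk object."""
    pos += random.randint(-max_step, max_step)
    return max(0, min(n_frames - 1, pos))

# Each 'tick' outputs a frozen frame; no silence, because a frame always plays.
for tick in range(8):
    position = drunk_step(position)
    print(tick, frames[position])
```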


The audio output from this is sent to a set of bandpass filters which split the audio into sub, low-mid, high-mid and high frequencies. Each band’s volume is controlled by a randomly generated envelope, resized based on a tap tempo that the user can trigger on command.

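A sketch of how such per-band envelopes might be generated and stretched to the tapped duration (the values are invented; the real patch does this per filter band in Max):

```python
import random

BANDS = ["sub", "low-mid", "high-mid", "high"]

def random_envelope(duration_ms, points=4):
    """Return (time_ms, level) breakpoints spread over the tapped duration."""
    times = sorted(random.uniform(0, duration_ms) for _ in range(points))
    return [(round(t), round(random.random(), 2)) for t in times]

tap_tempo_ms = 750    # e.g. the user tapped ~80 bpm
envelopes = {band: random_envelope(tap_tempo_ms) for band in BANDS}
for band, env in envelopes.items():
    print(band, env)
```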

It feels like my system does not make enough choices by itself and I would like it to guide the piece more, perhaps by using probability gates that are influenced by decisions on the fly, but I am not sure how to implement these in my patch. Within the compositional process I am also wondering about specific ideas for pieces that would relate to ‘utterance’ somehow, e.g. pieces with rules based on language syntax – taking the attack from one utterance which transitions into the sustain of a different utterance. Or possibly with political allusions; utterances being silenced, as a vague example. Here is a recent improvisation performed with the system (to download): we.tl/Q7PNECwffj
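
For what it’s worth, one possible sketch of a probability gate “influenced by decisions on the fly” – purely a guess at an approach, not something the patch currently does – where recent loud input nudges the gate’s open probability:

```python
import random

class AdaptiveGate:
    def __init__(self, base_probability=0.3):
        self.probability = base_probability
        self.recent_hits = 0

    def note_input(self, loudness, threshold=0.5):
        """Count loud events; busier playing slowly raises the gate's probability."""
        if loudness > threshold:
            self.recent_hits += 1
        # Nudge the probability towards a level set by recent activity.
        target = min(0.9, 0.2 + 0.1 * self.recent_hits)
        self.probability += 0.25 * (target - self.probability)

    def open(self):
        return random.random() < self.probability

gate = AdaptiveGate()
for loudness in (0.2, 0.7, 0.9, 0.1, 0.8):
    gate.note_input(loudness)
    print(round(gate.probability, 2), gate.open())
```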

Here are just some of the comments on Matt’s project; for the full comment section, click here.
Tinpark –

Going forwards, I’d recommend that you try to make a form that you know already exists and works well before you get on stage. Use a timer to work through different sections and get used to how long the different sections are, by practicing. If you don’t like the rigid feel of fixed length sections, you can randomly set section lengths when the piece starts so that they’re different each time.

Scaling down from the large form, what about the shape and content of each section? Could this also be shaped in advance and then played through on stage?

Finally on the smallest scale, what might you be able to do with the sound coming in at any moment to modulate how the system sounds? For example, an envelope follower (peakamp~) could be mapped to different parameters and different parameter widths as the performance moves on – this would be relatively easy to implement with some good use of presets.
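
Something like this, roughly (a sketch only: the section ranges are made up, and peakamp~ would supply the peak value in the actual patch):

```python
# A running peak follower mapped into a parameter range whose width changes per section.
SECTION_RANGES = {
    "intro":  (200, 400),    # e.g. filter cutoff stays low and narrow
    "middle": (200, 3000),   # wider range: playing louder opens things up more
    "ending": (1000, 1200),  # narrow and high
}

def follow_peak(previous_peak, sample_level, decay=0.95):
    """Crude peak follower: jump up to new peaks, decay slowly otherwise."""
    return max(sample_level, previous_peak * decay)

def map_to_param(peak, section):
    low, high = SECTION_RANGES[section]
    return low + peak * (high - low)   # assumes peak is normalised 0..1

peak = 0.0
for level, section in [(0.1, "intro"), (0.8, "middle"), (0.3, "middle"), (0.05, "ending")]:
    peak = follow_peak(peak, level)
    print(section, round(peak, 2), round(map_to_param(peak, section), 1))
```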

Owen –

Developing more autonomy would indeed be a way to grow the formal possibilities you have with this system. It’ll probably take less than you think, e.g. you can trigger processes that don’t listen at all, so forcing you to adapt; you can introduce a predictive element that conditions what the machine does on the basis of what it guesses you’re about to do rather than tracking your input directly.

Ferdinando Valsecchi –

Hi Matt, I am really enjoying your project so far. Since you raised some concerns about the automatic choice-making of your system, I would like to give you some suggestions and places you can look into, and then you can decide whether they could be useful to you or not. For my project I’m using Markov chains of probabilities in order to determine the pitches which are imposed on the feedback loops. To implement them, I followed this blog, which was very helpful, and I think you can find other things in there as well => www.algorithmiccomposer.com/2010/05/algorithmic-composition-markov-chains.html. Apart from the direct and rough implementation of the above method (like creating your own MIDI file which will contain only two notes that you will use to either open or close a gate), maybe you can find it helpful in order to see how Max parses probability-based situations.
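
Roughly, a first-order Markov chain looks something like this (the states and transition probabilities are made up for illustration – they could be two pitches, or an open/closed gate):

```python
import random

TRANSITIONS = {
    "open":   {"open": 0.6, "closed": 0.4},
    "closed": {"open": 0.2, "closed": 0.8},
}

def next_state(current):
    """Pick the next state according to the current state's transition row."""
    roll = random.random()
    cumulative = 0.0
    for state, probability in TRANSITIONS[current].items():
        cumulative += probability
        if roll < cumulative:
            return state
    return current   # fallback for floating-point edge cases

state = "closed"
sequence = []
for _ in range(12):
    state = next_state(state)
    sequence.append(state)
print(sequence)
```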

angusestewart –

Hi Matt. Last week I saw a guy called Ben Neill performing with a seriously modified trumpet (it had three bells, it was pretty crazy and actually really split the audience down the middle) – have you heard his stuff before? The impression I’ve gotten from the material I’ve heard so far is that your main focus is on extending the trumpet to a point where it is capable of implying harmony. This isn’t very techy, but it might be worth reading some of the writings about John Coltrane in his late career, when he became obsessed with being able to play multiple notes on the sax (I’ve got a pretty hefty bibliography on that from my undergrad dissertation which I can dig out and send your way if you want?), as that can provide a kind of acoustic benchmark of capabilities (although I realize a trumpet is a totally different beast) which you can move off from.

 

IESEI

Ferdinando Valsecchi

I’m writing this post to give an update on my current state of development and present my work to my fellow Sound Design classmates and tutors.

For anyone who is wondering about the name IESEI, it stands for Interactive Eco-Systemic Electronic Instrument. My project started with the intention of creating a system which could extract a musical soundscape out of the ambient sounds that surround us every day. That meant creating something which is able to self-manage and evolve, but is still capable of reacting to any changes in the audio source.

To me a key point was for the system to create the soundscape in real-time. Therefore, I opted to use MaxMSP, creating several engines which would mostly rely on feedback-loops to create the soundscapes (derived from the sound-source) and then to impose pitches controlled by different algorithms (either random or probability-based).

A more in-depth look into the Max patch, and how it evolved, can be found in other posts, up to its current form which can be found here.

Lately, I also created a Max4Live version of my system, breaking it into four different engines. I did this in order to have more control on the single parts of the system and create new possibilities for the signal-path (sending audio from one engine to another, stacking several engines in series etc.).

As my final hand-in, I plan on providing several examples of what this system can do. I would like to create an example of the usage on a short movie, one on a music-related application and one on a sound-walk. In fact, the very initial idea spawned from the concept of creating a musical soundscape that could be manipulated by other apps (like maps and navigation apps) in order to direct people differently from speech-based or radar-like navigation. Moreover, if I have enough time, I’d like to test the system for a performance and/or an installation.

As of now, I just finished the first-mentioned example, involving a short movie. It may not be in its final form yet, but I would like to ask everybody involved to comment on it. I would also like everyone to continue checking the blog for further developments and examples on other sources.

I include 5 videos: the original short movie, the one with the soundscapes created by my system, and three screen-grabs of the system generating the audio in real time. In order to create the final version I used a combination of all three passes, choosing the most appropriate sounds out of the three versions for each scene. The way I envisioned it, my project is not intended as a complete substitute for composers/sound designers. However, it could be useful to inspire professionals and facilitate their work.

Unfortunately, due to contest regulations, I cannot make these videos public. The password to view the videos can be found in the email I sent.

  • Film with the Original Audio
  • Final version of the system generating musical ambiences for the short movie
  • Combination of the three different versions (video above)
  • Version 1
  • Version 2

Here are just some of the comments on Ferdinando’s project; for the full comment section, click here.

Martin Parker –

Thanks for all of the examples Ferdinando, these are great to have. It’s impressive that you’ve made an algorithmic system work for audio design for film and I’d like to advise that this is where you concentrate your remaining energy and time, avoiding the installation idea. If you feel a performance with the system would be a good way to show it, then go for it; there is value in doing something like this in a live situation as you get to learn a great deal about your system and its limits on stage. Also, a live performance to film can help to liberate the film from the obvious interpretation and open up other meanings, meanings that can emerge and fall away in different ways on each encounter. However, a performance might also be a red herring as, from what I’ve seen and heard above, you can keep your focus around the audio-visual challenges and opportunities of working in this way.

Paul Meikle –

Hey man, you’ve done a good job of explaining your idea here; I really enjoyed listening to and viewing the various versions of your program. First off, I really love your idea and I think the results from it so far are pretty fantastic – when you initially explained your idea to me a few months ago I kind of imagined this sort of outcome and for it to have the kind of aesthetic that it does. Just like you have done here with the film examples, I think it would be really cool to see and hear how it works with outside environments; it could be cool to have a GoPro or something, film yourself walking around and, just like this, have a dry and a wet version. Another idea that popped into my head when I was looking through this would be to have different user-controlled ‘modes’. A lot of the sounds being produced through your patch come through quite deep, drone-y and almost dark. Maybe it could be cool to have some kind of control where you could flick through different types of moods or emotional states that control the type of sound produced. I think this could be especially good for the idea you had before, that someone would be using this to create soundscapes or compositions from the outside environment. It’s kind of the same idea where we would select music of a particular characteristic or emotional state depending on where you’re walking, time of day, time of year, weather etc… Anyway, nice job, good luck.

Caleb Abbott –

Hi! I think back to when we were sitting and you were first showing me your patch. I had no idea what it was doing, and was pretty impressed when you told me. I thought it was really enjoyable and pleasant to experience. The dilemma of what this can and should do seems to be the main focus, and in agreement with Martin and Owen, I think you should focus or limit it. The film examples are very interesting and, like Matt articulated well, it’s a very intriguing mix between sound design and composition, which (in my opinion) is kinda where you sit, personally. I don’t know where the limitations of this type of programming will lead you or where else it could go in the future (probably a silly thing to say), but I would think a bit about that. Meaning, perhaps you should generate another series of different perspectives of the tool and find where it becomes repetitive, boring, too similar, etc. I would consider its limitations, and how to best work with them. It’s an issue I’m sure most of us are addressing, but it’s an important one. You want to get as much out of this project as possible. As Owen and Martin expressed, in regards to performing with the tool, I can see why it might be a red herring, but perhaps you could record the performance and remix the pieces using your tool to see if it can be stretched in that direction as well. Anyways, keep it up!

Sonic Association

Paul Meikle

Throughout this project I have explored the alphabet, experimenting with ways of creating new sounds for individual letters based upon their characteristics: font, shape, connecting points and so on. The purpose of this is to investigate how a person may respond upon hearing these new manufactured sounds, sounds which are completely unassociated with the spoken version of each letter. I want to provide an immersive, interactive walk-through where people can come and experience both letter and audio.

So far I have explored multiple avenues for sonifying letters, and as the project has progressed I have opted to create sounds for 26 naturally occurring letters. This means I will go around and photograph different letters I find on the street: it could be a letter etched onto a bus shelter, graffiti, or a convenient crack in the pavement. I will then create individual sound designs for each instance and try to capture the ‘character’ of each letter – the situation it was found in, its shape, its texture, material etc. I hope that these small sound designs will tell a story about each letter. The benefit of going out and finding these letters myself is that each letter will be completely different from the last and will not necessarily abide by any particular font, making for rich material to create sounds for.

One difficulty I’ve had so far is finding other projects to reference, take inspiration from, or cite in the dissertation. I’ve also been thinking that, on the writing side of things, it may be useful to look into game or film sound and investigate what makes a character’s sounds within these areas convincing and successful. This could be useful as my project has turned in a direction where I’m basically creating characters and accompanying sound. Any encouragement or recommendations in this area would be welcomed.

Here are just some of the comments on Paul’s project; for the full comment section, click here.

koolmatt6 –

So the sounds are inspired by a combination of the letter itself as well as the style that it is presented in? Could be worth thinking about the way that the letter happened to appear in the picture, like if someone dropped the banana skin then maybe the sound could be infused with the act as well. Unless you are solely aiming to concentrate on the visual aesthetic, in which case perhaps you could analyse ways that people may perceive a picture and play with ways that it could be interpreted. I remember you thinking about the possibility of stringing letters together. It could be interesting to look at prosody, the so-called ‘musicality of language’, which isn’t necessarily melodic or beat driven. Maybe you could introduce short sound designs for words that make a story through the letter pictures which constitute the word, or something along those lines.

tinparkagram –

Finally, plenty of work out there on letter design and sound. First look at Jules Rawlinson’s work on graffiti and sound design. Skr1bl’s a good one. You might be interested in what’s happening in children’s education and early learning: www.letters-and-sounds.com/

empeeby –

Nice work, I’m enjoying the act of trying to match up the sonic and visual shapes and symmetries while going through what you’ve posted here. Related to what Martin has mentioned, one of the things that strikes me most about the visual representations here is the sense of location, texture and atmosphere they all have. Have you considered/had any success using manipulated samples in the sound designs to reflect this? My immediate impression is that this could be an effective way of transposing the resonances + stories behind each letter into the sound realm – with some spectral processing you could merge your synthesised gestures with some real-world grit?
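
(Purely as an illustration of this suggestion, not anything from Paul’s actual project, here is a minimal Python/NumPy sketch of one crude way to merge a synthesised gesture with a field recording, keeping the recording’s magnitudes and the gesture’s phases. All names are made up, and a usable version would work frame-by-frame with overlap-add rather than on the whole file at once.)

```python
import numpy as np

def spectral_merge(synth: np.ndarray, field: np.ndarray) -> np.ndarray:
    """Impose the spectral colour of a field recording onto a synthesised
    gesture: take the recording's magnitudes and the gesture's phases.
    Single-frame sketch for illustration only."""
    n = min(len(synth), len(field))
    synth_spec = np.fft.rfft(synth[:n])
    field_spec = np.fft.rfft(field[:n])
    merged = np.abs(field_spec) * np.exp(1j * np.angle(synth_spec))
    return np.fft.irfft(merged, n)
```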

A Study into Turning Brain Activity Into Generative Sound

Alex Williamson

So my final project is based around creating an accessible, easily set up system that can take a person’s brain activity from a simple EEG monitor, separate the different wave types and use the real-time information to control sound parameters, creating what could be a performance, an installation, or something to sell on to GCHQ for $$$. A video of the current state of the project is below, and previous blog posts may give a bit of info about the context of the project.

Here you get a basic idea of what’s going on and a look at how the parameters and sounds change based on the incoming data. The sound is likely to change, and the sound samples (rain & noise) are definitely going to change; what you hear there is just a placeholder for now.

HOW’S IT LOOKING?

Things have gone from a state of “not progressing at all” to “actually, this might work” in a short period of time. I was having problems scaling the signals coming in from the EEG headset, as they could be quite erratic and jump around a lot. In the end it came down to just sitting, letting it run, seeing what the maximum and minimum values were over time and using those to scale the signal. I still need to introduce a cut-off for anomalies and maybe have a look at refining the data so it fits within a more controllable range of values.
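
As an illustration only (the real work is happening in a Max/MSP patch, which isn’t shown here), the scale-and-clamp idea above could be sketched in Python roughly like this, with placeholder bounds standing in for the values actually measured from the headset:

```python
# Placeholder bounds - in practice these come from letting the headset
# run and noting the minimum and maximum values it actually produces.
OBSERVED_MIN = 1_000.0
OBSERVED_MAX = 1_000_000.0

def scale_band(raw_value: float) -> float:
    """Map a raw band reading onto 0..1, clamping anomalous spikes
    that fall outside the observed range."""
    clamped = max(OBSERVED_MIN, min(raw_value, OBSERVED_MAX))
    return (clamped - OBSERVED_MIN) / (OBSERVED_MAX - OBSERVED_MIN)
```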

The sound is very rudimentary at the moment and I will be spending a lot of the remaining time actually getting it sounding better. The reson~ object seemed like a good idea at the beginning, but it’s become very expensive computationally and might have to be refined. The tones are working, but the result is a lot darker than I wanted, so I will work on making it sound less ominous, which will involve tweaking the tables that decide what the notes are going to be. I’m going to expand on this in the next blog post, to be posted tomorrow afternoon.

PROBLEMS

The main problem that you can see in the video is that I’ve accidentally patched things the wrong way round, so things speed up when they should be slowing down and vice versa.

Another problem is that the headset has connectivity issues and quite a large delay, which seems to be inherent in how the headset works. To counteract this, I’ve been aggregating the incoming data and basing the output on an average of the last ten readings from the device. In the video above, you can see this in play at 02:51. This helps with the erratic readings that the device can sometimes put out. After changing the project fairly recently, I’m somewhat against the clock when it comes to finishing it on time. For this reason I’m generally going to focus on making it sound right, now that the inputs have been tamed.
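
The ten-reading averaging described above, again sketched in Python rather than taken from the patch itself, with names of my own choosing:

```python
from collections import deque

class RollingAverage:
    """Smooth erratic headset readings by averaging the last n values."""

    def __init__(self, n: int = 10):
        self.window = deque(maxlen=n)

    def update(self, reading: float) -> float:
        self.window.append(reading)
        return sum(self.window) / len(self.window)

# smoother = RollingAverage(10)
# output_value = smoother.update(latest_reading)
```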

Also, the headset is very particular about how it is placed on the user’s head, so if it does end up being used as part of an installation, or something to be used by people who haven’t worked with the device before, it may need somebody on hand to help set it up correctly. For a performance, this should not be too much of a problem.

SUCCESSES 

My main successes have come from the logic side of things, rather than the sound. Getting the headset to work in Max/MSP in the first place was a massive trial of patience, and understanding the data coming out of it was another. I’m at a stage now where I can understand what is happening when I’m getting signals that range from a thousand to a million, and I have been able to scale them accordingly.

At 01:47 of the Vimeo video above, you can see how I’ve been taking the values from the headset data and converting them into signals that can control audio commands and parameters. Being able to do that across the board, and to look at the balance of the data to see whether the brain is alert or more relaxed, is essential for controlling the sound. From here on out, though, I feel it is the sound that needs to be sorted, which is maybe what I’m more passionate about than the number crunching anyway.
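
One common way to express that alert-versus-relaxed balance, though not necessarily the method used in the patch, is a simple ratio between the faster and slower wave bands. A rough Python sketch, with hypothetical band values as inputs:

```python
def alertness(alpha: float, beta: float, eps: float = 1e-9) -> float:
    """Crude alert-vs-relaxed index in 0..1: values nearer 1 suggest more
    beta (alert) activity, values nearer 0 more alpha (relaxed) activity."""
    return beta / (alpha + beta + eps)

# The index can then drive any sound parameter, e.g. a filter cutoff:
# cutoff_hz = 200.0 + alertness(alpha_level, beta_level) * 4000.0
```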

OBSESSIONS & GOALS

I guess my main obsession from here is to make the sound less basic and start looking at how to make it more complex, polyphonic and easier to listen to. I’d like a greater change in the sound between different states so it represents those changes better. I have been exploring Markov chains so that the sound can be more complex and have better direction; they could also contribute to changing scales and give the overall sound a more interesting tonality. I’ve also been doing some work with delay lines and would like to incorporate those as well, to replace the gen~ stuff that’s in there at the moment (so it would be my own work instead).
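
For readers unfamiliar with the idea, a first-order Markov chain of this kind just picks the next note based on weighted probabilities attached to the current one. The transition weights below are arbitrary examples, not values from the project; a short Python sketch:

```python
import random

# First-order Markov chain over a small set of scale degrees.
# Each entry lists (next_degree, probability) pairs for the current degree.
TRANSITIONS = {
    0: [(2, 0.4), (4, 0.3), (7, 0.2), (0, 0.1)],
    2: [(0, 0.3), (4, 0.4), (5, 0.3)],
    4: [(2, 0.3), (5, 0.3), (7, 0.4)],
    5: [(4, 0.5), (7, 0.5)],
    7: [(0, 0.4), (5, 0.3), (7, 0.3)],
}

def next_degree(current: int) -> int:
    """Pick the next scale degree according to the transition weights."""
    choices, weights = zip(*TRANSITIONS[current])
    return random.choices(choices, weights=weights, k=1)[0]

# degree = 0
# melody = []
# for _ in range(8):
#     degree = next_degree(degree)
#     melody.append(degree)
```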

I would like to incorporate Jitter as well, to provide visual feedback that will give the project more personality. Basically, I’m just going to crack on with getting it sounding good.

Despite not sleeping much and burning the candle at both ends, I’m actually kind of enjoying working on the project, in what is probably a weirdly sick way. I’m definitely pushing myself in patching and synthesis terms and would love to take what I’ve learned from this project and use it in future work.

Please email any feedback to s1345245@sms.ed.ac.uk

Here are just some of the comments on Alex’s project; for the full comment section click here

andreatrinciblog –

Really interesting project. One thing I noticed is that this is similar to what I worked on for the DMSP project. That was data sonification as well, and I encountered pretty much the same setup and problems when getting volatile inputs from an external source and sonifying them.

What I did to solve the problem was to assign the parameters you are receiving to a specific set of sound parameters. Of course, this is not something I can help you with; it’s up to you how you link those, but I can tell you that there are multiple ways of doing it. I’ll link you to my blog post where I explained how I intended to link things: dmsp.digital.eca.ed.ac.uk/blog/artscience2016/2016/02/16/sonification-submission-1/

An incredibly useful source for this topic is “The Sonification Handbook” by Hermann, Hunt & Neuhoff. It really helped me understand how to use the parameters I was receiving properly.

tinpark –

Thanks Alex, I’m very pleased to see and hear that you’re making some progress with the patching side of things and that you’ve got some discernible results from your efforts so far.

For me, the main questions are not so much to do with the sound until you’ve resolved some of the bigger things about this project that so far seem to have been overlooked, such as why a brain sensor might be useful, valuable or interesting in a sound context, and of course (if we are to believe all that is claimed about sound’s influence on our cognitive processes) what happens when you play a sound to a brain that’s controlling that sound?

If you can dig into these questions a bit, it might tell you the kinds of sound and sound systems you need to develop next.

Caleb Abbott –

Very interesting project. I was wondering what you were working on, and the video is very helpful and gets me into it. This is a difficult zone for me as I don’t know much about the technology, but I can kinda imagine what it could be used for and where it could be enhanced. Have you considered video games with generative music? Meaning, is it possible to take this data and have it somehow reflect the soundtrack of a game? As the game becomes more stressful or calm, perhaps that is reflected in the EEG and in turn feeds back into the game (an interesting feedback loop). I realise you mention installations as a possible use.
I think there are several really good (commercial) uses for this, especially as we move further into user-led technologies, and this kind of prototype could push some types of games further. Thanks!

Final Projects Performance – 15th of July 2016

13567214_10101520448502026_3533720904326043592_n

This Friday (15/07/16) there will be a second performance from our sound designers. It will showcase the progress so far of those experimenting with projects for live performance. Each student’s blog gives a great insight into their work, but for those using their projects for live performance, this is the best way to see what they are bringing to the table.

Hope to see you all there

Welcome to the blog of the MSc Sound Design students!

This blog was created to give an overview of the Sound Designers’ progress on their summer projects, starting from the beginning and seeing them through to the end.

It is an opportunity to explore the ins and outs of what it is to take on projects such as these and how they become what they are. There are also links to every student’s individual blog for more information and depth on the projects that interest you the most.

Please see our Bio’s tab for more information on the students involved: who they are and what they wish to accomplish during this time.