The interim presentations have wrapped up: all of our students showed where their projects have got to so far and gave the others a chance to have a look and offer some feedback.
Narrative within the Live Environment
Matt Harold
My trumpet audio is passed into a harmoniser, which lets me define the chord that the (monophonic) signal will be pitch-shifted to; this has led me down a musical/compositional route. I can select different chords and inversions using a Launchpad. My ideas centre on self-accompaniment: using freezing to capture moments of energy, with a system that can both match the energy of my live playing and act autonomously to contrast it.
I have been using a patch which records live audio into a Jitter matrix. I can then scroll through this audio (using a foot pedal) or use the ‘drunk’ object to move between frames. The frames are ‘frozen’ indefinitely so there are no moments of silence (before the audio is sent elsewhere).
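As a rough illustration of the ‘drunk’ movement between frames (the frame range and step size below are just examples, not the patch’s actual values):

```python
import random

def drunk_step(position, max_step, lo, hi):
    """One 'drunk'-style step: move at most max_step frames in either
    direction, staying inside the recorded range [lo, hi]."""
    position += random.randint(-max_step, max_step)
    return max(lo, min(position, hi))

# Hypothetical usage: wander through 100 frozen frames.
frame = 50
for _ in range(10):
    frame = drunk_step(frame, max_step=3, lo=0, hi=99)
```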

The audio output from this is sent through a set of bandpass filters that split the signal into sub, low-mid, high-mid and high bands. Each band’s volume is controlled by a randomly generated envelope, resized according to a tap tempo that the user can trigger on command.
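To sketch the envelope idea outside Max (the band names, breakpoint count and tap interval below are placeholders), each band gets its own random envelope stretched to the tapped interval:

```python
import random

def random_envelope(duration_s, n_points=6):
    """Random breakpoint envelope spanning one tapped interval:
    (time, level) pairs that start and end at silence."""
    times = sorted(random.uniform(0.0, duration_s) for _ in range(n_points - 2))
    inner = [(t, random.random()) for t in times]
    return [(0.0, 0.0)] + inner + [(duration_s, 0.0)]

tap_interval = 0.75  # seconds between the last two taps (example value only)
bands = ["sub", "low mid", "high mid", "high"]
envelopes = {band: random_envelope(tap_interval) for band in bands}
```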

It feels like my system does not make enough choices by itself, and I would like it to guide the piece more, perhaps by using probability gates that are influenced by decisions made on the fly, but I am not sure how to implement these in my patch. Within the compositional process I am also wondering about specific ideas for pieces that would relate to ‘utterance’ somehow, e.g. pieces with rules based on language syntax, such as taking the attack from one utterance and transitioning into the sustain of a different utterance. There could also be political allusions, utterances being silenced, as a vague example. Here is a recent improvisation performed with the system (to download): we.tl/Q7PNECwffj
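Sketching the idea in Python rather than in the patch (the starting probability and drift amount are arbitrary), a gate whose odds drift with its own recent decisions might look like this:

```python
import random

class ProbabilityGate:
    """A gate whose odds drift with its own recent decisions, so earlier
    choices influence later ones."""
    def __init__(self, p=0.5, drift=0.05):
        self.p = p          # current probability of letting an event through
        self.drift = drift  # how much each decision nudges that probability

    def fire(self):
        passed = random.random() < self.p
        # Passing makes the gate a little less likely to pass next time,
        # blocking makes it a little more likely, so the output stays varied.
        self.p += -self.drift if passed else self.drift
        self.p = min(max(self.p, 0.05), 0.95)
        return passed
```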
Going forwards, I’d recommend that you work out a form you know already exists and works well before you get on stage. Use a timer to work through the different sections and, by practising, get used to how long each one lasts. If you don’t like the rigid feel of fixed-length sections, you can randomise the section lengths when the piece starts so that they’re different each time.
Scaling down from the large form, what about the shape and content of each section? Could this also be shaped in advance and then played through on stage?
Finally on the smallest scale, what might you be able to do with the sound coming in at any moment to modulate how the system sounds? For example, an envelope follower (peakamp~) could be mapped to different parameters and different parameter widths as the performance moves on – this would be relatively easy to implement with some good use of presets.
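As a minimal sketch of that mapping, assuming a simple windowed peak follower and a per-preset output range (the window size and ranges here are hypothetical):

```python
def peak_follower(samples, window=512):
    """Crude peak-amplitude follower: report the peak of each window of
    samples, similar in spirit to peakamp~ reporting at a fixed interval."""
    return [max(abs(s) for s in samples[i:i + window])
            for i in range(0, len(samples), window)]

def map_to_parameter(env_value, lo, hi):
    """Map a 0..1 envelope value into a parameter range; changing lo/hi
    per preset changes the 'width' of the modulation."""
    return lo + env_value * (hi - lo)
```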
Owen –
Developing more autonomy would indeed be a way to grow the formal possibilities you have with this system. It’ll probably take less than you think: for example, you can trigger processes that don’t listen at all, forcing you to adapt, or you can introduce a predictive element that conditions what the machine does on what it guesses you’re about to do rather than tracking your input directly.
Ferdinando Valsecchi –
Hi Matt, I am really enjoying your project so far. Since you raised some concerns about your system’s automatic choice-making, I would like to give you some suggestions and places you can look into, so you can decide whether they are useful to you or not. For my project I’m using Markov chains of probabilities to determine the pitches which are imposed on the feedback loops. To implement them, I followed this blog, which was very helpful, and I think you can find other things in there as well => www.algorithmiccomposer.com/2010/05/algorithmic-composition-markov-chains.html. Apart from a direct and rough implementation of the above method (like creating your own MIDI file containing only two notes, which you then use to either open or close a gate), you may find it helpful for seeing how Max handles probability-based situations.
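To give the flavour, a tiny first-order Markov chain in Python might look like this (the pitches and weights below are made up for illustration, not taken from my patch):

```python
import random

# Hypothetical transition table: each pitch maps to (next pitch, weight) pairs.
transitions = {
    "C": [("E", 0.5), ("G", 0.3), ("C", 0.2)],
    "E": [("G", 0.6), ("C", 0.4)],
    "G": [("C", 0.7), ("E", 0.3)],
}

def next_pitch(current):
    """Pick the next pitch according to the weighted transition table."""
    pitches, weights = zip(*transitions[current])
    return random.choices(pitches, weights=weights, k=1)[0]

pitch = "C"
sequence = []
for _ in range(8):
    pitch = next_pitch(pitch)
    sequence.append(pitch)
```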
angusestewart –
Hi Matt. Last week I saw a guy called Ben Neill performing with a seriously modified trumpet (it had three bells; it was pretty crazy and really split the audience down the middle). Have you heard his stuff before? The impression I’ve got from the material I’ve heard so far is that your main focus is on extending the trumpet to the point where it is capable of implying harmony. This isn’t very techy, but it might be worth reading some of the writings about John Coltrane in his late career, when he became obsessed with being able to play multiple notes on the sax (I’ve got a pretty hefty bibliography on that from my undergrad dissertation which I can dig out and send your way if you want), as that can provide a kind of acoustic benchmark of capabilities (although I realise a trumpet is a totally different beast) from which you can move off.
Ferdinando Valsecchi
I’m writing this post to give an update on my current state of development and present my work to my fellow Sound Design classmates and tutors.
For anyone who is wondering about the name IESEI, it stands for Interactive Eco-Systemic Electronic Instrument. My project started with the intention of creating a system which could extract a musical soundscape from the ambient sounds that surround us every day. That meant creating something able to self-manage and evolve, yet still capable of reacting to any changes in the audio source.
To me a key point was for the system to create the soundscape in real time. I therefore opted to use Max/MSP, creating several engines which mostly rely on feedback loops to create the soundscapes (derived from the sound source) and then impose pitches controlled by different algorithms (either random or probability-based).
A more in-depth look into the Max patch, and how it evolved, can be found in other posts, up to its current form which can be found here.
Lately, I also created a Max4Live version of my system, breaking it into four different engines. I did this in order to have more control over the individual parts of the system and to open up new possibilities for the signal path (sending audio from one engine to another, stacking several engines in series, etc.).
For my final hand-in, I plan on providing several examples of what this system can do: one of its use on a short film, one in a music-related application and one on a sound walk. In fact, the very initial idea spawned from the concept of creating a musical soundscape that could be manipulated by other apps (like maps and navigation apps) in order to direct people differently from speech-based or radar-like navigation. Moreover, if I have enough time, I’d like to test the system in a performance and/or an installation.
As of now, I have just finished the first of these examples, involving a short film. It may not be in its final form yet, but I would like to ask everybody involved to comment on it. I would also encourage everyone to keep checking the blog for further developments and examples with other sources.
I include five videos: the original short film, the version with the soundscapes created by my system, and three screen-grabs of the system generating the audio in real time. To create the final version I used a combination of all three passes, choosing the most appropriate sounds out of the three versions for each scene. As I envision it, my project is not intended as a complete substitute for composers/sound designers; rather, it could be useful for inspiring professionals and facilitating their work.
Unfortunately, due to contest regulations, I cannot make these videos public. The password to view the videos can be found in the email I sent.

Martin Parker –
Thanks for all of the examples Ferdinando, these are great to have. It’s impressive that you’ve made an algorithmic system work for audio design for film, and I’d advise that this is where you concentrate your remaining energy and time, avoiding the installation idea. If you feel a performance with the system would be a good way to show it, then go for it; there is value in doing something like this in a live situation, as you learn a great deal about your system and its limits on stage, and a live performance to film can help to liberate the film from the obvious interpretation and open up other meanings, meanings that can emerge and fall away in different ways on each encounter. However, a performance might also be a red herring: from what I’ve seen and heard above, you could keep your focus on the audio-visual challenges and opportunities of working in this way.
Paul Meikle
Throughout this project I have explored the alphabet, experimenting with ways of creating new sounds for individual letters based upon their characteristics: font, shape, connecting points and so on. The purpose of this is to investigate how a person may respond upon hearing these newly manufactured sounds, sounds which are completely unassociated with the spoken version of each letter. I want to provide an immersive, interactive walk-through where people can come and experience both letter and audio.
So far I have explored multiple avenues for sonifying letters, and as the project has progressed I have opted to create sounds for 26 naturally occurring letters. This means I will go around and photograph different letters I find on the street; it could be a letter etched onto a bus shelter, graffiti or a convenient crack in the pavement. I will then create an individual sound design for each instance, trying to capture the ‘character’ of each letter: the situation it was found in, its shape, its texture, its material and so on. I hope that these small sound designs will tell a story about each letter. The benefit of going out and finding these letters myself is that each letter will be completely different from the last and will not necessarily abide by any particular font, making for rich material to create sounds for.
One difficulty I’ve had so far is finding other projects to reference, take inspiration from or cite in the dissertation. On the writing side of things, I’ve also been thinking it may be useful to look into game or film sound and investigate what makes a character’s sounds in those areas convincing and successful; this could be helpful as my project has turned in a direction where I’m essentially creating characters and accompanying sound. Any encouragement or recommendations in this area would be welcomed.
Here are just some of the comments on Paul’s project; for the full comment section click here.
koolmatt6 –
So the sounds are inspired by a combination of the letter itself as well as the style it is presented in? It could be worth thinking about the way the letter happened to appear in the picture: if someone dropped the banana skin, then maybe the sound could be infused with that act as well. Unless you are solely aiming to concentrate on the visual aesthetic, in which case perhaps you could analyse ways that people may perceive a picture and play with the ways it could be interpreted. I remember you thinking about the possibility of stringing letters together. It could be interesting to look at prosody, the so-called ‘musicality of language’, which isn’t necessarily melodic or beat-driven. Maybe you could introduce short sound designs for words that tell a story through the letter pictures which constitute the word, or something along those lines.
tinparkagram –
Finally, there is plenty of work out there on letter design and sound. First, look at Jules Rawlinson’s work on graffiti and sound design; Skr1bl’s a good one. You might also be interested in what’s happening in children’s education and early learning: www.letters-and-sounds.com/
empeeby –
Nice work, I’m enjoying the act of trying to match up the sonic and visual shapes and symmetries while going through what you’ve posted here. In a related way to what Martin has mentioned, one of the things that strikes me most about the visual representations here is the sense of location, texture and atmosphere they all have. Have you considered/had any success using manipulated samples in the sound designs to reflect this? My immediate impression is that this could be an effective way of transposing the resonances + stories behind each letter into the sound realm – with some spectral processing you could merge your synthesised gestures with some real world grit?
A Study into Turning Brain Activity Into Generative Sound
Alex Williamson
So my final project is based around creating an accessible, easily set-up system that can take a person’s brain activity from a simple EEG monitor, separate the different wave types and use the real-time information to control sound parameters, creating what could be a performance or an installation (or something to sell on to GCHQ for $$$). A video of the current state of the project is below, and previous blog posts may give a bit of information about the context of the project.
Here you get a basic idea of what’s going on and a look at how the parameters and sounds change based on the incoming data. The sound is likely to change and the sound samples (rain & noise) are definitely going to change, what you hear there is just a placeholder for now.
HOW’S IT LOOKING?
Things have gone from a state of “not progressing at all” to “actually, this might work” in a short period of time. I was having problems scaling the signals coming in from the EEG headset, as they could be quite erratic and jump around a lot. In the end it came down to just sitting down, letting it run, seeing what the maximum and minimum values were over time and using those to scale it. I still need to introduce a cut-off for anomalies and maybe look at refining the data so it fits within a more controllable set of values.
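A rough sketch of that scaling step, in Python rather than Max (the clip bounds are placeholder values, not the ones I will actually use):

```python
class RunningScaler:
    """Track the minimum and maximum seen so far and rescale new readings
    into 0..1, clipping obvious outliers first so one spike doesn't
    wreck the range."""
    def __init__(self, clip_lo=0.0, clip_hi=1_000_000.0):  # placeholder bounds
        self.lo = None
        self.hi = None
        self.clip_lo = clip_lo
        self.clip_hi = clip_hi

    def scale(self, value):
        value = max(self.clip_lo, min(value, self.clip_hi))
        self.lo = value if self.lo is None else min(self.lo, value)
        self.hi = value if self.hi is None else max(self.hi, value)
        if self.hi == self.lo:
            return 0.0
        return (value - self.lo) / (self.hi - self.lo)
```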
The sound is very rudimentary at the moment and I will be spending a lot of the remaining time actually getting it to sound better. The reson~ object seemed like a good idea at the beginning, but it has become very expensive computationally and might have to be refined. The tones are working, but the result is a lot darker than I wanted, and I will work on making it sound less ominous, which will involve tweaking the tables that decide which notes are going to be played. I’m going to expand on this in the next blog post, to be posted tomorrow afternoon.
PROBLEMS
The main problem that you can see in the video is that I’ve accidentally patched things the wrong way round, so things speed up when they should be slowing down and vice versa.
Another problem is that the headset has connectivity issues and quite a large delay, which seems to be inherent to how the headset works. To counteract this, I’ve been aggregating the incoming data and basing the output on an average of the last ten readings from the device. In the video above, you can see this in play at 02:51. It helps with the erratic readings that the device can sometimes put out. After changing the project fairly recently, I’m somewhat against the clock when it comes to finishing it on time. For this reason I’m generally going to focus on making it sound right, now that the inputs have been tamed.
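The averaging itself is straightforward; sketched in Python, with the same window of ten readings described above:

```python
from collections import deque

class MovingAverage:
    """Smooth erratic headset readings by averaging the last n values."""
    def __init__(self, n=10):
        self.values = deque(maxlen=n)

    def update(self, reading):
        self.values.append(reading)
        return sum(self.values) / len(self.values)
```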
Also, the headset is very sensitive to how it is placed on the user’s head, so if it does end up being used as part of an installation, or something to be used by people who haven’t worked with the device before, it may need somebody on hand to help set it up correctly. For a performance, this should not be too much of a problem.
SUCCESSES
My main successes have come from the logic side of things, rather than the sound. Getting the headset to work in Max/MSP in the first place was a massive trial of patience, and understanding the data coming out of it was another. I’m at a stage now where I can understand what is happening when I get signals that range from a thousand to a million, and I have been able to scale them accordingly.
At 01:47 of the Vimeo video above, you can see how I’ve been taking the values from the headset data and converting them into signals that can control audio commands and parameters. Being able to do that across the board, and to look at the balance of the data and see whether the brain is alert or more relaxed, is essential for controlling the sound. From here on out, though, I feel it is the sound that needs to be sorted out, which is maybe what I’m more passionate about than number crunching.
OBSESSIONS & GOALS
I guess my main obsession from here is to make the sound less basic and start looking at how to make it more complex, polyphonic and easier to listen to. I’d like a greater change in the sound between different states so that it represents those changes better. I have been exploring Markov chains so that the sound can be more complex and have better direction; they could also contribute to changing scales and giving the overall sound a more interesting tonality. I’ve also been doing some work with delay lines and would like to incorporate this as well, to replace the gen~ stuff that’s in there at the moment (and it would be my own work instead).
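To illustrate the general idea of a feedback delay line (this is a plain-Python sketch, not the actual gen~ or Max code, and the parameter values are arbitrary):

```python
def feedback_delay(samples, delay_samples, feedback=0.5, mix=0.5):
    """Minimal feedback delay: each output mixes the dry sample with a
    delayed copy, and the delayed copy is fed back into the buffer."""
    buffer = [0.0] * delay_samples
    out = []
    for i, x in enumerate(samples):
        delayed = buffer[i % delay_samples]       # sample from delay_samples ago
        buffer[i % delay_samples] = x + delayed * feedback
        out.append(x * (1.0 - mix) + delayed * mix)
    return out
```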
I would also like to incorporate Jitter, to provide visual feedback that will give the project more personality. Basically, I’m just going to crack on with getting it sounding good.
Despite not sleeping much and burning the candle at both ends, I’m actually kind of enjoying working on the project, in what is probably a weirdly sick way. I’m definitely pushing myself in patching and synthesis terms and would love to take what I’ve learned from this project and use it in future work.
Please email any feedback to s1345245@sms.ed.ac.uk
Here are just some of the comments on Alex’s project; for the full comment section click here.
andreatrinciblog –
Really interesting project. One thing I noticed is that this is similar to what I worked on for the DMSP project, which was also data sonification, and I encountered pretty much the same setup and problems when getting volatile inputs from an external source and sonifying them.
What I did to solve the problem was to assign each parameter you are receiving to a specific set of sound parameters. Of course, how you link them is up to you, not something I can decide for you, but I can tell you that there are multiple ways of doing it. I’ll link my blog post where I explained how I intended to link things: dmsp.digital.eca.ed.ac.uk/blog/artscience2016/2016/02/16/sonification-submission-1/
An incredibly useful source for this topic is “The Sonification Handbook” by Hermann, Hunt & Neuhoff. It really helped me understand how to use the parameters I was receiving properly.
tinpark –
Thanks Alex, I’m very pleased to see and hear that you’re making some progress with the patching side of things and that you’ve got some discernible results from your efforts so far.
For me, the main questions are not so much to do with the sound until you’ve resolved some of the bigger things about this project that so far seem to have been overlooked: why might a brain sensor be useful, valuable or interesting in a sound context, and of course (if we are to believe all that is claimed about sound’s influence on our cognitive processes) what happens when you play a sound to a brain that is controlling that sound?
If you can dig into these questions a bit, it might tell you the kinds of sound and sound systems you need to develop next.
Caleb Abbott –
Very interesting project; I was wondering what you were working on. The video is very helpful and gets me into it. This is a difficult area for me, as I don’t know much about the technology, but I can roughly imagine what it could be used for and where it could be enhanced. Have you considered video games with generative music? That is, would it be possible to take this data and have it shape the soundtrack of a game? As the game becomes more stressful or calm, perhaps that is reflected in the EEG, which in turn feeds back into the game (an interesting feedback loop). I realise you mention installations as a possible use.
I think there are several really good (commercial) uses for this, especially as we move further into user-led technologies, and this kind of prototype could push some types of games further. Thanks!