{"id":53,"date":"2016-07-21T17:05:59","date_gmt":"2016-07-21T17:05:59","guid":{"rendered":"http:\/\/digital.eca.ed.ac.uk\/finalprojects\/?p=53"},"modified":"2016-07-21T17:08:56","modified_gmt":"2016-07-21T17:08:56","slug":"interim-presentations-where-the-projects-are-and-feedback-part-three","status":"publish","type":"post","link":"https:\/\/digital.eca.ed.ac.uk\/finalprojects\/2016\/07\/21\/interim-presentations-where-the-projects-are-and-feedback-part-three\/","title":{"rendered":"Interim Presentations &#8211; Where the projects are and feedback &#8211; Part Three"},"content":{"rendered":"<p style=\"text-align: center\"><a href=\"https:\/\/davesmithsound.wordpress.com\/\"><strong>Create an immersive audio-visual experience<\/strong><\/a><\/p>\n<p style=\"text-align: center\"><strong>Dave Smith<\/strong><\/p>\n<p>My project has gone through a number of changes since mid-May. I initially intended to produce an audio-visual fixed media piece to explore movement and transition in sound. I ran with this idea for a few weeks and built some sketches around this theme. However, I felt that this linear approach was perhaps too simplistic and was lacking in scope, so I have decided to explore this theme within the context of an interactive environment.<\/p>\n<p>Accordingly, I have been building an audio-visual environment with FMOD and Unity. The idea is to offer the participant a chance to play with the sonic environment based on the choices they make within it. The most obvious choice is where to go (in this case, where to direct the first-person controller). Although this may seem like a rudimentary and self-evident thing to point out, I think there is potential for some really interesting results just within the bounds of this simple variable. Since sounds are placed within a (virtual) space, a particular route through the space can influence fundamental properties of the composition such as what sounds are heard, when they are heard, for how long and their relative levels in the mix.<\/p>\n<p>Below is a brief sketch of an environment I\u2019m working on at the moment \u2013 dark and ominous which I think works quite well. One thing I\u2019ve been trying to do is to construct sounds in such a way that moving between their relative locations creates a feeling of gradual transition and movement with intermingling textures of sound. I want the sonic environment to flow with little in the way of obvious boundaries between one sound and the next. Here, I\u2019ve attached sounds to each of the dark and light \u2018monoliths\u2019. You can move around and explore the ways in which sound transitions from place to place. Lots of work needed obviously, although I\u2019m quite happy with the way the dark monolith sounds create a kind of broad, composite texture while at the same time providing individual areas of interest. I intend to work on more interesting ways of triggering effects when you move through the monoliths, randomise things a little, and generally build a bigger environment.<\/p>\n<div class=\"jetpack-video-wrapper\"><\/div>\n<p>&nbsp;<\/p>\n<p>To an extent, I\u2019ve also tried to segregate frequency bands when creating sounds and placing them around the environment. For example, I want bass heavy sounds to have more range and form a kind of foundation upon which the \u2018short-range\u2019 mid and treble sounds can sit. 
Problems: mainly technical at the moment – FMOD and Unity can be an unstable combination to work with, and a few things have hindered workflow considerably. For example, Unity has a habit of randomly freezing my laptop (a Windows 10 issue, I suspect), requiring a reboot. In addition, tweaking parameters in FMOD requires rebuilding master banks, which then need re-importing into Unity to test whether the tweak sounds any good. This takes a lot of time (and patience). Perhaps I am doing something wrong here… any technical advice would be much appreciated! Another issue I managed to trace to overuse of the FMOD convolution reverb, which was presumably asking too much of the CPU and creating glitches in the sound. However, I liked the sound of the convolution reverb too much, so I applied the convolution offline and fed the rendered sounds into FMOD instead. I suppose sometimes it's about a compromise between what you want and what is technically feasible.

Any comments, suggestions or constructive criticism would be great.
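For reference, the offline convolution workaround described above can be as simple as rendering the impulse response onto each source file before import. A sketch assuming `scipy` and `soundfile`; the file names are hypothetical:

```python
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("monolith_dry.wav")   # hypothetical source file (mono)
ir, sr_ir = sf.read("hall_ir.wav")      # impulse response (mono)
assert sr == sr_ir, "resample the IR first if the rates differ"

# Convolve once offline instead of per-block at runtime, then normalise
# so the render does not clip on import. For stereo files, convolve each
# channel separately.
wet = fftconvolve(dry, ir)
wet /= np.max(np.abs(wet)) + 1e-12
sf.write("monolith_wet.wav", wet, sr)
```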
**Here are just some of the comments on Dave's project; for the full comment section click [here](https://davesmithsound.wordpress.com/2016/07/05/project-interim-review/).**

Owen –

> Thanks Dave. Is there a title for the project yet? It seems to me you have a range of possibilities in front of you, and having your themes developed will help pin down what you do next. As it stands there's scope to develop more variety into the sound world, but the degree to which you pursue that, and how you do it, depends largely on what it is you want people to do / have done to them. Is this an explorable composition, for instance, or an instrument? What do such categories imply for the ways in which you might curtail / encourage certain behaviours and possibilities? Are there features of using 3D worlds as interfaces for things like this that seem to dictate aspects of the sonic form? At the moment everything is quite slow and gliding: how would one make a jump cut in this kind of compositional process? Can you trace out particular kinds of overall shape in the environment? Can you do an A-B-A? A sonata form? A rondo? If you were to perform this, what would be the lower / upper limits for a workable duration?

tinpark –

> Thanks for your post Dave and for the demo example. I don't like to pass you on to Jules WRT the FMOD stuff as he's so busy, but I'm a year out of date with it at the moment and haven't caught up this summer yet. If Jules doesn't respond in time, maybe we can resolve some of these issues as a group. Andrea has been making progress in spite of FMOD/Unity too… If you're still in the experimental stages of this (it sounds like you're past that now), you might consider sending commands from Unity to Max and bypassing FMOD altogether.
> I'm drawn to you considering this an explorable composition, rather than an instrument. For me, this kind of work brings sound design and composition together in meaningful ways. You have the formal concerns of shape and experience to deal with (composition), but are designing these as "ranges" rather than shaped absolutes. If you're composing with shape, terrain, object and speed, how can you parameterise these? What are you designing with – spectra, pace, rhythm? What kinds of compositional models are you interested in borrowing from, and what formal shapes work well in these kinds of environments, or might be explored further because of this environment? How do we listen/hear if we're also playing, as opposed to consuming in a still listening capacity? Do we begin to participate in the music differently, and how does this affect the short- to mid-term form and structure? Can a piece of music like this be performed, or can we only play it?
> Get your title and chapter list nailed down this week, make sure that you promise what you've been doing, and enjoy making your piece/pieces. What's the piece about/for? Why does it exist? Not just to be droney and dark, right? It's droney and dark for a reason, isn't it…?

## [Exploring the potential of percussive impulses as a gestural means of parameter control for live electronics](https://angussblog.wordpress.com/)

**Angus Stewart**

Current progress:

At the moment my most universal means of parameter control is based on a 4-way switch system, where one of four possible sound-analysis values can be output:

- Current value – the most obviously transparent and linear of the four. Parameters are set immediately to the current value.
- Mean value – another relatively linear means of parameter control, inspired by one of the basic propositions of free improvisation in Frederic Rzewski's Little Bangs, that "the past determines the present." This results in less transparent changes (especially as the amount of data received by the mean object expands), but also has a reset function so the user can regain a more immediate sense of control.
- Differential between current and mean values – this provides a more subtle and less linear means of control by simply subtracting the current value from the mean. I have found it most effective when used with frequency-orientated processing. It is made especially interesting by the way materiality-orientated extended techniques (Chris Corsano: www.youtube.com/watch?v=2dIpdqH22kY, Mark Guiliana: www.youtube.com/watch?v=qDQ7eHNufNs) can break the system, rendering it less linear by emphasising the offset irregularly and at extremes.
- Stored sequences that move between the previous two values – this function stores the previous two current values, then repeatedly moves between them (line object) in the amount of time between the two values changing (timer object). This duration is randomly varied slightly (by a maximum of 50 ms) with each repetition (drunk object), giving it a looping quality that is constantly changing subtly. Multiple loops can be set up on multiple parameters to create multi-metered textures.
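For what it's worth, here is a minimal, language-agnostic sketch of that 4-way system as I read it. This is plain Python for illustration only – the actual implementation is a Max patch built from mean, line, timer and drunk objects – and the class and method names are hypothetical:

```python
import random

class FourWaySelector:
    """One analysis stream in, one of four derived control values out."""

    def __init__(self):
        self.history = []      # values seen since the last reset
        self.prev = 0.0        # value before the current one
        self.curr = 0.0        # most recent value

    def update(self, value: float):
        """Call once per analysed impulse."""
        self.prev, self.curr = self.curr, value
        self.history.append(value)

    def reset(self):
        """The mean's reset function: regain a more immediate sense of control."""
        self.history = []

    def current(self) -> float:      # mode 1: transparent, linear
        return self.curr

    def mean(self) -> float:         # mode 2: "the past determines the present"
        return sum(self.history) / max(len(self.history), 1)

    def diff(self) -> float:         # mode 3: current subtracted from the mean
        return self.mean() - self.curr

    def loop(self, steps: int = 16):  # mode 4: stored sequence
        """Repeating ramp between the previous two values (line object);
        the real patch jitters each pass by up to 50 ms (drunk object)."""
        while True:
            for i in range(steps):
                yield self.prev + (self.curr - self.prev) * i / steps
            steps = max(4, steps + random.randint(-2, 2))  # crude drift stand-in
```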
[vimeo.com/173440385](https://vimeo.com/173440385) – password is: sd

(Apologies for the lack of an audio example; I've been away from my drum kit for the last week. One will follow shortly.)

In terms of parameters that these values can be sent to, I am considering three main categories: time-based processing (delay rate, tremolo rate, line-loop parameters, reverb time), frequency-based processing (pitch shift, ring-mod frequency, filter frequency) and dynamics-based processing (overdrive intensity, tremolo envelope, delay feedback). At present my patch analyses both global and drum-specific frequency, dynamics and envelope length, as well as the durations between impulses from drum to drum. With these frequency-, amplitude- and duration-focused player and processing parameters, it is possible to set up some relatively direct and transparent relationships between the acoustic material produced by the player and the patch's electronic response.

Problems and how I plan to fix them:

Whilst some of my previously mentioned successes in creating a sense of transparency between audience and user have been effective, I feel that I have somewhat neglected the more non-linear aspects that I initially set out to incorporate into my patching. One way of doing this that I feel could be effective is to blur the boundaries between the three categories of processing:

- Something I discovered very recently and would like to experiment with is the Squid Axon module. I feel this could work effectively as a means of merging the aesthetic properties of dynamic modulation (tremolo) and frequency modulation (ring mod.), and thus blurring the properties of frequency- and time-based amplitude modulation (all of which come back to oscillation anyway). This could potentially not work at all, but I feel that if the resultant waveform could be smoothed a little to eliminate popping, the signal could work effectively as an amplitude controller.
- Hz-to-millisecond conversion and vice versa (see the conversion sketch at the end of this section). I have already incorporated millisecond-to-Hz conversion into my tremolo module's tap tempo. I feel that converting milliseconds to Hz in a delay could work effectively as a means of giving the delay's wet signal a more pitched quality. I have also been working on a module that combines a reverb with a bitcrusher, in an attempt to imitate the 'crush' mode of the Old Blood Noise Endeavors Black Star Pad Reverb guitar pedal (www.youtube.com/watch?v=XPiCxpEXpe0 – lots of talking in this one, sorry), and feel this could be implemented into that module effectively.

When performing and practising with the patch, I've found that I struggle particularly with longer-form improvisations. This is largely because transitions between combinations of modules are currently somewhat difficult, and being able to make them fluently is essential in performance situations, as it gives the player a larger palette of sounds at their disposal. I feel that some additional means of control are necessary, and would therefore like to explore the potential of using a single expression pedal alongside a pattr object to switch between a number of preset signal-path configurations.
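The conversion mentioned in the second bullet above is simple reciprocal arithmetic; a small sketch of it, with the delay-as-pitch intuition worked through (illustrative values only):

```python
def ms_to_hz(ms: float) -> float:
    """Period in milliseconds -> frequency in Hz (e.g. tap tempo -> LFO rate)."""
    return 1000.0 / ms

def hz_to_ms(hz: float) -> float:
    """Frequency in Hz -> period in milliseconds (e.g. pitch -> delay time)."""
    return 1000.0 / hz

# A delay time short enough to sit in the audible frequency range reads as
# pitch: a ~2.27 ms delay with heavy feedback resonates at roughly A4.
print(hz_to_ms(440.0))   # ~2.27 ms
print(ms_to_hz(500.0))   # a 500 ms tap gives a 2 Hz tremolo
```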
**Here are just some of the comments on Angus' project; for the full comment section click [here](https://angussblog.wordpress.com/2016/07/05/progress-report-problems-and-some-future-goals/).**

awrwsound –

> I like the sound of the experiments with snares you've done previously and I'd be interested to see how your patch works with different snare types. How would a ringier snare sound compared to something a bit more woody?
> Besides that, it'd be interesting to see how the system reacts to what the player's doing and how the player then reacts to that. It could be almost like a drum jam with a machine, rather than a performance aid. I'm definitely hoping there's a chance to see it in action soon.
> Seek could be handy as well, which could dip into a buffer and pull out certain bits. The regularity of this could be dictated by the player and controlled using tempo/random. Could seem super basic but might be interesting to see in use. Could also be useless info, as there's a very good chance that I've completely misinterpreted everything and am just typing garbage.
> Looking forward to hearing more!

Caleb James Abbott –

> Hey Angus!
> I wanted to reiterate some of my thoughts on your performance the other night, which might help you understand how I approach what you're doing, which may or may not be useful. Nonetheless, here I go! I really enjoyed your performance. As you know, I am also a percussionist, so I can appreciate your project from several perspectives.
> It's something I've brought up in my feedback to Mike and Matt as well, and I think it's worth noting here too. I don't think we should forget about the performative aspects of what we do, beyond what the computer processes. You mentioned finding it difficult to generate long-form pieces (not that this is the answer to your issue, but I think it's one helpful consideration), and I found I wanted to hear you play more, without the tool. I'll elaborate. I think audiences will see it in a similar way as well – performer first and tool second. I realize this project is about the development of a tool, but I feel that your performance can benefit from gradually bringing the tool in and out, to accompany your playing. This doesn't mean that the tool can't take over and become the focus, but I wonder if you might think more about your relationship to it (the tool) and how that would enhance your performances.
> I look forward to more of this! And to trying it at some point too!
## [How are techniques such as diegesis, hyperreality and anempathetic sound used as thematic tools in the creation of sound designs?](https://corinashfieldsummerproject.wordpress.com)

**Corin Ashfield**

For my summer project I've been looking at how techniques/features such as diegesis/non-diegesis, anempathetic sound and hyperreality can be used as thematic tools in the construction of sound designs, and how they can be used to evoke conceptual resonance and establish temporalisation. Originally, I was planning on researching how selective use of sound and tones can affect the theme of a piece, and creating a range of examples consisting of a short film for which I would construct six or seven sound designs to prove my point before analysing them further. However, after extensively researching the field of audio-vision through a plethora of reading material, I realised how broad and diverse this area of research is and decided that I needed to broaden my scope of study to include, alongside the aforementioned areas, subcategories such as dubbing, rhetorical effects and synchronisation.

Alongside this research, I have been experimenting with creating sound designs (often rough and needing refinement) that communicate whichever area of research I'm particularly interested in at a given time. Many examples can be found in my blog posts.

Below is a link to a video I created in an attempt to effectively display anempathetic sound. This is an area I have been particularly interested in, as I have frequently come across the question "If anempathetic sound works to effectively convey/emphasise a mood on-screen, can it truly be called anempathetic?". To try to convey anempathetic sound I created a three-part sound design: one part that worked with the picture and two that were completely unrelated – [vimeo.com/172330971](https://vimeo.com/172330971)

I admit I have become somewhat obsessed with researching these areas and analysing examples I find, and one of the main successes of my project has been that I have considerably improved my understanding of a range of areas in sound design and introduced myself to areas that I had not even thought of before (i.e. temporalisation). One hang-up has been the practical side of my project: I have created a range of videos, but nothing that I feel is substantial enough to submit as part of the final hand-in. I am therefore working on finding videos long and substantial enough for me to effectively display a number of the areas I have been researching, while also working on a photo montage that I may temporalise, and communicate conceptual resonance through, with sound.

**Here are just some of the comments on Corin's project; for the full comment section click [here](https://corinashfieldsummerproject.wordpress.com/2016/07/05/progress-report/).**

Owen –

> Thanks Corin. How intact do you feel your themes are at the moment? Anempathy seems to be your main focus here. Are you still probing your ideas about hyperreality etc.?
> Meanwhile, there are certainly some interesting questions raised by your work so far. The most urgent, I think, is where the division is to be found between anempathetic sound and something just arbitrarily laid over a clip. One aspect of this is that the anempathetic effect only stands much chance of taking hold in a context that empathetic sound design has already established, i.e. it works in juxtaposition to its temporal surroundings. Another aspect may concern the resonance (for want of a better word) of how anempathetic materials are presented, e.g. the common combination of brutal violence with the everyday (such as light music). A third could be to do with playfulness around the diegetic boundary. I think you could benefit from letting some writing and analysis guide your practical work here. How, really, does anempathetic sound seem to do its work in some canonical examples? Ditto for your other concerns with hyperreality etc. Can you identify principles along the lines I've suggested? Do these seem to work in practice?

tinpark –

> Thanks Corin, I'm intrigued by your work and delighted that your research is opening up new approaches for you. I'm also fascinated by the film(s) you've made. It may be that you don't even need to make a longer video, but several other short experiments that pepper and illustrate a serious discussion of the areas you've been working in. To that end, it's worth really concentrating on the shape of your writing and seeing what design work is needed to complete your quest as it stands. How's the title and chapter list coming on? Does what you've put down create an opportunity to design more sounds? I like the idea of making a photo montage, but not if that's an extra thing that dilutes the depth you've been getting to with the video above. Finding a longer film online might be a way to move forwards, but are there any collaborators from the film classes that we know who might have access to something? The film curators from SFM, Carys and Devin, know loads of short-film makers; is it worth asking them for some suggestions or contacts?

## [Mike Parr-Burman](http://blog.mikepb.net/category/msc-project/)

My project is based around a live performance system that interacts and improvises along with its audio surroundings, using rehashed fragments of that input as the source for its sonic output.

**obsessions**

My main obsession while working through this project so far has been to find a way of exploring the space between live music systems that aim for autonomy and independence from the human musician (in the vein of George Lewis's *Voyager* and Adam Linson's *Odessa*), and those conceived as extensions of the instrument, under the 'control' of their human performer (collectively grouped under the banner of effects processors).

On a conceptual and political level, these two ideas are practically opposites.
At the most basic technical level, though:

(external sound input -> processing of this in some way -> output sound derived from processes)

they could also be seen as technically very similar – part of the same broad family of sound tools.

This observation has convinced me that there might be something worthwhile in attempting to deconstruct the autonomous/instrument opposition in a live performance system.

**problems #1**

One of the main problems I've had with this 'obsession' is that the autonomy/subservience of a given system is to a large degree a conceptual construct, embedded in design choices that are quite distinct from each other and do not easily combine.

The following represents my attempts at trying to bridge the gap – while seeking also not to camp too firmly in either field. I'd be interested to hear your feelings/critiques of how well this comes across…

**successes #1**

An early process-sketch that I am fond of involved using concatenative synthesis to replace the live sound of my solo improvisations with grains from a soundbank made from the same live stream. Sound here; the top one is just the 'wet' concatenation, the bottom is a different improvisation, wet+dry:

[Audio: wet concatenation]

[Audio: wet+dry improvisation]

The thing I liked the most about this setup is the way almost all of the sonic details and intention in what I try to put out on the guitar get swapped out and re-purposed in a way that rips up and disembodies my intentions, while reflecting the same gestures and shapes. I've found that, despite its simplicity, this in itself can be a pretty fruitful 'interactive' system for solo improvising, as the transformations keep twisting and pushing you around, cutting off paths and creating new ones.
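To make the process-sketch concrete, here is a toy version of the idea – a minimal numpy sketch under my own naming, not Mike's actual patch. Each incoming analysis frame is replaced by the stored grain whose crude descriptor (spectral centroid and RMS) is nearest:

```python
import numpy as np

def features(grain, sr=44100):
    """Crude descriptor of one grain: spectral centroid and RMS level."""
    mag = np.abs(np.fft.rfft(grain))
    freqs = np.fft.rfftfreq(len(grain), 1.0 / sr)
    centroid = (freqs * mag).sum() / (mag.sum() + 1e-12)
    rms = np.sqrt((grain ** 2).mean())
    return np.array([centroid, rms])

class Concatenator:
    """Grain bank grown from the live stream; each incoming frame is
    swapped for its nearest stored grain."""
    def __init__(self):
        self.bank, self.feats = [], []

    def feed(self, grain):
        self.bank.append(grain)
        self.feats.append(features(grain))

    def match(self, grain):
        f = features(grain)
        d = [np.linalg.norm((f - g) / (np.abs(g) + 1e-12)) for g in self.feats]
        return self.bank[int(np.argmin(d))]
```

Feeding each live frame into the bank and playing back `match(frame)` in place of the dry signal gives the 'wet' concatenation described above.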
**successes #2 + problems #2**

Growing out of the experiments above, I have built a more independent improvising agent that still uses concatenative synthesis of the live input as its voice, but which generates its own distinct phrasing and gestures rather than mirroring the input directly.

This incarnation had its first test run in a small live improvisation last week:

[Audio: first live test of the agent]

I have mixed feelings about this performance. The voice tended towards using samples of the strongest attacks, while generally ignoring more textural sounds. This made its gestural content fairly predictable and somewhat unsatisfying for me to interact with.

On the positive side, however, as a first outing and proof of concept it could certainly have gone a lot worse – it *worked*. My immediate goal now is to see what tweaking can be done to improve the behaviour, and to try to coax some unpredictability, belligerence and subtlety out of it…
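For what it's worth, one hypothetical tweak for coaxing out unpredictability – my suggestion, not something the system currently does – is to sample among candidate grains probabilistically rather than always taking the single nearest match, so that quieter, more textural grains occasionally win over the strongest attacks:

```python
import numpy as np

def choose_grain(distances, temperature=1.0, rng=None):
    """Softmax sampling over match distances: temperature -> 0 recovers
    nearest-match behaviour; higher values let weaker matches through."""
    rng = rng or np.random.default_rng()
    d = np.asarray(distances, dtype=float)
    weights = np.exp(-d / max(temperature, 1e-9))
    weights /= weights.sum()
    return int(rng.choice(len(d), p=weights))
```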
**Here are just some of the comments on Mike's project; for the full comment section click [here](http://blog.mikepb.net/msc-project/interim-how-it-stands/).**

Owen –

> Thanks Mike. This hazy territory between co-player and instrument suggests a rich seam of themes to focus on. The idea of mirroring (or not) certainly suggests itself as one to explore. How can the possibilities be broken down here? You've got slavish repetition at one pole and complete inversion at the other (but this kind of comes back on itself, I guess). My instinct is to guess that it's in the timing of the machine's utterances that the most profound differences will emerge (Ivan Illich once said something along the lines of 'in order to master a language one must learn its pauses'). Have you read my PhD thesis? I riff on some of the same ideas you're playing with here. Technically you might be almost there, and much of the remaining work could be in actually learning the possibilities of what you have. Can you have multiple corpora in a piece that the machine uses for its voice? This might give the possibility of making some richer forms, if you introduce some distinctions about where you put certain samples as you play.

Martin Parker –

> I'm enjoying listening to what you've made, Mike, while commenting, so forgive me if I change tack halfway through a sentence. Based on the aura that I'm in now (first file), you have an issue with playback speed, which eclipses by some way what you're able to play physically. Slow her down and stop her stuttering and see what she says. What if her speed was glacial by comparison with yours? A sympathetic electronic voice is not a still one, but it doesn't have to be a fast one.
> Structurally (file 2), I feel like I believe that she's making you play differently, so you're getting to that co-player place you wanted to be in and making form with it. I'm also agreeing (with Owen) that your reflections on merging autonomous systems with effects processors are timely.
> The so-called live electronics world grew up, in part, in reaction to the limitations of liveness and presence that reveal themselves when recorded music gets played on stage, but the environment we're in now is also an inevitable outcome of technological development; everything has sped up. Are we still playing electroacoustic music, just with close-to-real-time tools?
> The challenges of liveness and presence in music making also have a lot to do with human-to-human communication, and I'd be really interested to read a section in your essay about how you as a human feel a part of the instruments you play and design, and the extent to which they are structures you climb over and explore, taking different routes each time. What of the audience: where are they as receivers of the information you provide in sound and gesture?