{"id":48,"date":"2016-07-21T16:47:33","date_gmt":"2016-07-21T16:47:33","guid":{"rendered":"http:\/\/digital.eca.ed.ac.uk\/finalprojects\/?p=48"},"modified":"2016-07-21T16:47:33","modified_gmt":"2016-07-21T16:47:33","slug":"interim-presentations-where-the-projects-are-and-feedback-part-two","status":"publish","type":"post","link":"https:\/\/digital.eca.ed.ac.uk\/finalprojects\/2016\/07\/21\/interim-presentations-where-the-projects-are-and-feedback-part-two\/","title":{"rendered":"Interim Presentations &#8211; Where the projects are and feedback &#8211; Part Two"},"content":{"rendered":"<h2 class=\"p1\" style=\"text-align: center\"><a href=\"https:\/\/andreatrinciblog.wordpress.com\/\">Acheron Crossing: a practical study of narration through dynamic and fixed spatialisation<\/a><\/h2>\n<p style=\"text-align: center\"><strong>Andrea Trinci<\/strong><\/p>\n<p>My project wants to explore the potentials of spatialization, in particular its capabilities to create rich and immersive ambiences.\u00a0The storyboard with draw from the classical myth of the Acheron as threshold between the world of the living and the world of the damned. This set is chosen from both a practical and narrative point of view. It allows me to create a completely dark space (possibilities are that the user is the damned soul or the river is underground) which is a key element to focus on the narration only through sound and it will refer to something commonly known but it will give me a lot of freedom from a design prospective as the myth is open to interpretations. The Idea is to create an immersive VR environment that manages to unfold a story only through sounds.<\/p>\n<p>I am currently developing the backbone of the scenes I\u2019m missing. I have almost 4 out of 8 scenes fully working (but still not at the level I want them to be). the last 4 scenes are very passive so it won\u2019t take long to develop them.<\/p>\n<p>Once I\u2019m done with Unity I can focus more on the audio refinement. 
The spatialization is working, but I want to put my efforts into\u00a0refining parameters further (e.g. distance or surround) within FMOD through filters and ambiences.<\/p>\n<p>Obsessions: I spent almost a month trying to figure out how to create a 3D audio environment for the purpose of this project. Bouncing between toolkits and plugins for cinematic VR was so confusing that, without a proper manual to explain the workflow, I came close to changing\u00a0the entire project out of frustration. In the end I managed to use the Oculus toolkit properly and started on the designs.<\/p>\n<p>Problems:\u00a0Even with everything working, there\u2019s still a major bug that limits my design possibilities: the attenuation curve for an FMOD event is fixed, and if I disable the standard one, the event does not play in Unity. Scripting is definitely not my strong point, so designing all the interactions of the scenes takes me longer than it should. Another problem is that directionality is very subjective and depends on familiarity with the source, so to achieve something realistic I\u2019ll need a lot of testing with people who have never listened to my project; as it stands, it is tuned for myself, and I\u2019ve been working on it for months. Lastly, the biggest problem I\u2019m facing is the lack of a definite vision of the final shape of the project: I don\u2019t know if I want to focus on narrative or on spatialization. What I\u2019m doing for now is just the surface of a project that I perceive as soulless, with no clear direction.<\/p>\n<p>Successes: Once set up properly, spatialization works really\u00a0well. It\u2019s a weird feeling that you\u2019re not used to having in an interactive experience, as it emulates reality almost perfectly.\u00a0Working with such a new topic can often be really frustrating, but when you manage to achieve something that works it feels immensely rewarding. 
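As an aside on the attenuation problem described above: the kind of custom distance curve being worked around is simple to state numerically. The sketch below is ordinary Python, not FMOD or Unity code, and the min/max distances are made-up illustrative values: gain holds at full level inside a minimum distance, falls off with the classic inverse-distance law, and freezes at a maximum distance.

```python
# A minimal sketch (not FMOD's implementation) of a custom distance
# attenuation curve: inverse-distance rolloff between two clamp points.
def attenuation(distance, min_dist=1.0, max_dist=20.0):
    """Return a 0..1 gain for a source at `distance` metres (hypothetical units)."""
    d = max(min(distance, max_dist), min_dist)  # clamp into the rolloff range
    return min_dist / d  # inverse-distance law: full gain at min_dist

for d in (0.5, 1.0, 5.0, 20.0, 50.0):
    print(f"{d:>5} m -> gain {attenuation(d):.3f}")
```

FMOD's curve editor expresses the same idea graphically via volume automation; the point here is only what such a curve computes.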
It is a project that I would gladly show to people, and since I\u2019m always very critical of my work, that tells me it is worth working on.<\/p>\n<p>I\u2019ll attach a video that shows the first scene in action:<\/p>\n<div class=\"jetpack-video-wrapper\"><span class=\"embed-youtube\"><\/span><\/div>\n<p>&nbsp;<\/p>\n<p>I\u2019d like to have some criticisms\/suggestions from other people, so feel free to comment on this post; thanks in advance.<\/p>\n<p><strong>Here are just some of the comments on Andrea\u2019s project; for the full comment section click<a href=\"https:\/\/andreatrinciblog.wordpress.com\/2016\/07\/04\/the-project-so-far\/\"> here<\/a><\/strong><\/p>\n<p>Owen &#8211;<\/p>\n<blockquote><p>Thanks Andrea, good to see that things are coming together. At this stage I think it\u2019s going to be very important to develop your thematic ideas to help guide the rest of the practical work. In particular, my feeling is that fewer scenes with their sound fully developed will make a stronger submission, with more to talk about, than risking many scenes and running out of time. So now is a good moment to reflect on what we have and think about how it\u2019s sonically populated. How much sound do you need to inhabit the scene to make it feel suitably hellish? How can some fitting textures be developed that give the impression of crowdedness without using up too many resources? What sense of materiality do you wish to convey, given the virtual acousma you\u2019ve adopted? Are there solid surfaces (the reverb implies that there are)? What of the floor? What possibilities exist for the voice? Can its intimacy be modulated? Can the transformations be varied to keep us, as \u2018players\u2019, unbalanced and suitably fearful?<\/p><\/blockquote>\n<p>davesmithsound &#8211;<\/p>\n<blockquote><p>Hi Andrea, I really enjoyed listening to your walkthrough video. The spatialization works very well. 
I\u2019m wondering whether there is a \u201cgoal\u201d for the player in this environment? Is there something they have to do within the story or are they passive observers? You were questioning whether you should focus more on narrative or spatialization \u2013 I am tempted to suggest narrative, since you have chosen to base your project on the mythical underworld, which has so much potential for telling a story through sound. I know it\u2019s relatively early days, but I was hungry for more layers of sound in the ambience, suggestive of things to explore in the distance perhaps. Since there is a horror element here, perhaps you could use sounds that are ambiguous (particularly in the distance) as I think that\u2019s an effective way to build tension.<\/p>\n<p>I was also wondering what you meant about the FMOD attenuation curve thing \u2013 do you mean you tried disabling the \u201cdistance attenuation\u201d on the Event Macro tab but the event doesn\u2019t play? Did you add a distance parameter? Try adding the inbuilt distance parameter as normal, then set the Event Macro distance attenuation to \u201coff\u201d (and set the min and max distance to whatever you need). When I do that, the event still plays in Unity and I can use automation on the master volume to get a custom distance curve. Perhaps you mean something different though?<\/p><\/blockquote>\n<p>Matt Harold &#8211;<\/p>\n<blockquote><p>Hell(o)<\/p>\n<p>The localisation is great and it seems worth the effort of getting it right! Is it possible to monitor the person\u2019s head movements? e.g. if they spend a lot of time looking in one direction, then the \u2018scene\u2019 will move onwards, so there is a sense of response. A subtle sound could exist which needs to be centrally focused on by the user, and after they \u2018center\u2019 it for a few seconds a transition occurs? Also, where do you want to take your audience? 
I once created a piece based on the Greek rivers that led to hell, and the Acheron was specifically related to sorrow as opposed to lament, fire, forgetfulness or hate, so maybe it\u2019s worth thinking about the kind of emotion you want to portray in contrast to other hellish emotions in order to fully explore a specific version of hell. Or, as it is a river, you could construct it as a journey by boat where you can look at the scenes around you on shore or within the river.<\/p>\n<p>&nbsp;<\/p><\/blockquote>\n<p style=\"text-align: center\"><a href=\"http:\/\/mscproject2016.tumblr.com\/post\/147090199720\/state-of-the-project\"><b>spawning and swarming: sounding the expanded audio input<\/b><\/a><\/p>\n<p style=\"text-align: center\"><strong>Caleb Abbott<\/strong><\/p>\n<p>For my final project, I am developing a vocal processing tool for live performance. To listen to samples of the tool go <a href=\"http:\/\/t.umblr.com\/redirect?z=https%3A%2F%2Fsoundcloud.com%2Fcalebjamesabbott&amp;t=YzE2OWJlZDlkZDAzYTY2MGU1OTEwZWM3NWFmN2VmMjY5ZTZiYzBmOCxzQThMZ2ZzTw%3D%3D\" target=\"_blank\"><b><i>here<\/i><\/b><\/a>. 
I recommend the following recordings to give a sense of where I\u2019m at.<\/p>\n<ol>\n<li><a href=\"http:\/\/t.umblr.com\/redirect?z=https%3A%2F%2Fsoundcloud.com%2Fcalebjamesabbott%2Fscalespawnswarmpresence&amp;t=Mzk5ZGFhY2FkMjBmYjhlM2RlNzY2OWI3ZjVmMTU5Y2U5ZjlkNjI2NCxzQThMZ2ZzTw%3D%3D\" target=\"_blank\">scale\/spawn\/swarm\/presence<\/a><\/li>\n<li><a href=\"http:\/\/t.umblr.com\/redirect?z=https%3A%2F%2Fsoundcloud.com%2Fcalebjamesabbott%2Flive-24062016&amp;t=NGU0Y2IyYzQ1NTAzM2ZiMDJjNGVkMjY0MDE0M2YzNzk4N2FjMTYxNyxzQThMZ2ZzTw%3D%3D\" target=\"_blank\">live 24\/06\/2016<\/a><\/li>\n<li><a href=\"http:\/\/t.umblr.com\/redirect?z=https%3A%2F%2Fsoundcloud.com%2Fcalebjamesabbott%2Fswarmpacebody&amp;t=YTRiM2YwOTgwMGUyMjAwOWQ1YTViNjFiZTVlZGM5MmUxYmJlMjU5YyxzQThMZ2ZzTw%3D%3D\" target=\"_blank\">pace &amp; body<\/a><\/li>\n<\/ol>\n<p><i>Spawning &amp; swarming<\/i> is, in essence, the way I have come to describe the process behind the tool. Essentially, I treat <i>spawn\/swarm<\/i> as the way in which the sounds are transmitted, collected, and used within the tool. The tool is broken down into five parameters. They are:<\/p>\n<ol>\n<li>body<\/li>\n<li>presence<\/li>\n<li>scale<\/li>\n<li>pace<\/li>\n<li>weight<\/li>\n<\/ol>\n<p><b>The final submission will contain:<\/b><\/p>\n<ul>\n<li>video documentation of the parameters of the tool<\/li>\n<li>live performance<\/li>\n<li>4 recordings of performances<\/li>\n<li>the code &#8211; Max\/MSP<\/li>\n<li>the report<\/li>\n<\/ul>\n<p><b>Obsessions:<\/b><\/p>\n<p>I have been actively coding, recording, researching, and blogging (documenting) about the tool for just under two months. The focus has been heavily placed on the development of the tool. In the next few weeks I will be ending this stage of it and moving more towards the report writing, the final presentation\/performance, and rehearsing with the tool. 
There appears to be no shortage of time that could be spent on development.<\/p>\n<p><b>Problems:<\/b><\/p>\n<p>Initially, I felt compelled to create a generalized tool which could be used by anyone, for anything sound-related. The difficulty of this project, so far, is in accomplishing that. That isn\u2019t to say the tool, in its current form, isn\u2019t learnable; it\u2019s just not user-friendly. I feel this is because I have customized every aspect of it to my performance habits. This isn\u2019t entirely bad. The tool was intended, either way, to be functional for my use first, and secondly for the use of others. I may altogether abandon the idea of a commercial version for this project.<\/p>\n<p>In addition, finding a suitable device to perform the patch with has also been tricky. I have settled on temporary usage of the <i>Korg nanokontrol2<\/i>, but will be upgrading to the <i>Livid cntrl: r <\/i>for the last stage of development.<\/p>\n<p><b>Successes:<\/b><\/p>\n<p>On June 24, 2016, three others from the cohort (Mike, Matt, and Angus) and I did a scratch performance at Alison House Atrium to showcase our work in progress. This was the first live demonstration of the project, and it reflects where I am with the code, my approach to performing, and my aesthetic and compositional ideas to date. The primary purpose of this experience was to see how the tools would act in a live setting, how they would sound together amplified (beyond headphones), and to gain a bit of insight into how the audience would engage with the piece. In general, and given the positive feedback, I feel this was a successful and useful experience.<\/p>\n<p>Lastly, I am happy and encouraged when I work on this project. I think this is probably the most successful aspect I can highlight. 
I feel I am pushing my abilities in Max\/MSP, recording, and performing, and that I\u2019m challenging my own comforts.<\/p>\n<div class=\"tags\"><a href=\"http:\/\/mscproject2016.tumblr.com\/tagged\/research\">#RESEARCH<\/a><\/div>\n<div class=\"tags\"><\/div>\n<div class=\"tags\" style=\"text-align: center\"><a href=\"http:\/\/mikefowler.co.uk\/final-project\/\"><strong>An investigation into the application of light-reactive systems in<span class=\"Apple-converted-space\">\u00a0<\/span>sound art<\/strong><\/a><\/div>\n<div class=\"tags\" style=\"text-align: center\"><\/div>\n<div class=\"tags\" style=\"text-align: center\"><strong>Mike Fowler<\/strong><\/div>\n<div class=\"tags\" style=\"text-align: center\"><\/div>\n<div class=\"tags\" style=\"text-align: center\">\n<p><a href=\"http:\/\/mikefowler.co.uk\/wp-content\/uploads\/2016\/06\/ohp-1-of-1-e1466782123400.jpg\" rel=\"attachment wp-att-219\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-large wp-image-219 alignleft\" src=\"http:\/\/mikefowler.co.uk\/wp-content\/uploads\/2016\/06\/ohp-1-of-1-e1466782123400-1024x700.jpg\" alt=\"ohp (1 of 1)\" width=\"840\" height=\"574\" \/><\/a><\/p>\n<p style=\"text-align: left\">I have been exploring the use of optical electronics in systems for sound design.\u00a0 I was interested in this field as I have some practical experience of working with electronics, and wanted to apply this in order to create unique sound designs, installations, and performances that expose correlations between what we see and what we hear. Early experiments involved the use of light sensor circuits using light-dependent resistors (LDRs), and two main methods of application became apparent.<\/p>\n<p style=\"text-align: left\">The first method, which I will call the AC method, is where output from a light-sensitive circuit is passed into an audio amplifier via an AC coupling capacitor. 
This capacitor removes any DC component from the signal, while allowing any fluctuations (the AC or audio component) to pass, and subsequently be amplified and manipulated as with any other audio signal. This method provides a clear sonic representation of the light hitting the sensor, revealing a hidden soundscape of our local environments.\u00a0\u00a0 LED and LCD lights found in everything from CD players to bike lights create all kinds of interesting bleeps and drones, as do digital projectors and most other digitally controlled light sources. I created a simple \u2018light microphone\u2019 using a solar cell connected to a 3.5mm audio jack. Most audio recorders have an AC coupling capacitor in the preamp so the solar cell can be connected directly to the recorder.<\/p>\n<figure id=\"attachment_172\" class=\"wp-caption aligncenter\"><a href=\"http:\/\/mikefowler.co.uk\/wp-content\/uploads\/2016\/03\/VDiv1.png\" rel=\"attachment wp-att-172\"><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-172 size-full\" src=\"http:\/\/mikefowler.co.uk\/wp-content\/uploads\/2016\/03\/VDiv1.png\" alt=\"VDiv1\" width=\"181\" height=\"298\" \/><\/a><figcaption class=\"wp-caption-text\">Simple LDR circuit<\/figcaption><\/figure>\n<p style=\"text-align: left\">\n<figure id=\"attachment_213\" class=\"wp-caption aligncenter\"><a href=\"http:\/\/mikefowler.co.uk\/wp-content\/uploads\/2016\/06\/solarmic-1-of-1.jpg\" rel=\"attachment wp-att-213\"><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-213\" src=\"http:\/\/mikefowler.co.uk\/wp-content\/uploads\/2016\/06\/solarmic-1-of-1-662x1024.jpg\" alt=\"solarmic (1 of 1)\" width=\"367\" height=\"568\" \/><\/a><figcaption class=\"wp-caption-text\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 even simpler solar cell \u2018light microphone\u2019<\/figcaption><\/figure>\n<p style=\"text-align: left\">The second method is the DC method. 
Here there is no coupling capacitor present, and we can use the DC output from a light sensor circuit as a control voltage (CV). This CV signal can be used in limitless ways to control audio (or anything else).\u00a0 I have been using both AC and DC methods with an analogue modular synthesiser to create some unique light-modulated timbres and drones.<\/p>\n<p style=\"text-align: left\">For control of the light input I have been using digital and analogue projectors, both of which have their own unique interactions with the sensors. The digital projector\u2019s output can be controlled using sound as an input signal in order to create a novel autonomous feedback system that can be interacted with.<\/p>\n<p style=\"text-align: left\">\n<p style=\"text-align: left\">I am asking myself: what is the reason behind this research? What is the critical angle?<\/p>\n<p style=\"text-align: left\">At this juncture, I would like to change direction slightly with the project, to help answer the questions above. I aim to apply my previous experience in the development and delivery of educational workshops to design a cross-disciplinary workshop where participants can build a solar-powered noise-making device. 
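The AC/DC distinction described above can be illustrated numerically. Assuming simple one-pole filters as a rough stand-in (this is plain Python, not a model of the actual analogue circuit), the DC method keeps the slow-moving average of the sensor signal as a control voltage, while the AC method keeps only what remains after that average is removed:

```python
# Sketch of the two sensing methods as first-order filters (assumption:
# one-pole approximations, not the real LDR/capacitor circuit).
def dc_method(samples, alpha=0.99):
    """Low-pass the sensor signal into a slowly varying control voltage."""
    cv, out = samples[0], []
    for x in samples:
        cv = alpha * cv + (1 - alpha) * x  # smooth toward the input
        out.append(cv)
    return out

def ac_method(samples, alpha=0.99):
    """AC-couple: strip the DC component, keep only the fluctuations."""
    return [x - cv for x, cv in zip(samples, dc_method(samples, alpha))]

# A steady light level (pure DC) survives the DC path but vanishes on the AC path.
steady = [2.5] * 1000
print(dc_method(steady)[-1])                   # 2.5
print(max(abs(v) for v in ac_method(steady)))  # 0.0
```

This is exactly why the coupling capacitor matters before an audio amplifier: a constant light level carries no audio, only an offset, whereas its fluctuations are the signal.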
In addition to building the device, participants would learn some fundamental aspects of sound and science, while picking up skills such as soldering and circuit building.\u00a0 The device would retain the direct correlation between sound and light that I have been exploring so far, taking it a step further by being light-dependent for power and input simultaneously.<\/p>\n<p style=\"text-align: left\">Prototype to follow soon\u2026<\/p>\n<p style=\"text-align: left\"><strong>Here are just some of the comments on Mike\u2019s project; for the full comment section click <a href=\"http:\/\/mikefowler.co.uk\/final-project\/interim-project-post\/\">here<\/a><\/strong><\/p>\n<p style=\"text-align: left\">Nikita Gaidakov &#8211;<\/p>\n<blockquote>\n<p style=\"text-align: left\">I enjoyed your video. There\u2019s something quite impressive about this rather simple but unique and effective setup \u2013 the interaction of how the board looks graphically, your stripped-down use of a projector as an exciter, and the sonic results. It strikes me that your core piece is a very simple idea \u2013 the transduction of light into sound \u2013 and a very simple technology, the basic elements of which can be recombined in a number of permutations, scales, performance and installation situations. I don\u2019t know how the teachers feel, but to me that is substantive enough to be your piece: the prism of different configurations that result from one jumping-off point. Your themes: DIY, accessibility, recombination, simplicity, and the raw interaction of light and sound. So, personally, I would encourage you not to get hung up on producing a single final product, but rather to make an installation which showcases this prismaticism. Invent many combinations, keep them all basic but punchy, contrasting in scale, agency, interaction, positive\/negative (what affects what, who plays what, by means of adding or subtracting? 
etc.)<\/p>\n<\/blockquote>\n<p style=\"text-align: left\">Martin Parker &#8211;<\/p>\n<blockquote>\n<p style=\"text-align: left\">Mike, I\u2019m delighted by the prototype you\u2019ve submitted, and unnerved (slightly) by the fact that you\u2019re shifting focus; however, it tunes in with what you\u2019ve been doing this summer elsewhere and also with a long-term direction in your career. So, if you\u2019re proposing to submit an educational workshop, which models of workshop design are you following? How\u2019s the discourse on participatory arts? What\u2019s going on in the world of coding and education? How\u2019s the open source community faring these days? What\u2019s the point of educational workshops if there is no infrastructure to sustain development after the workshop? How do you build communities around hacking and making with electronics? Are they sustainable? Should they be? What of the sound? Do we need any more glitchy, rickety, flakey electronic devices making noise? If so, WHY do we need them \u2013 what\u2019s essential about their sound and our relationship to it as makers (and parents\/audience\/friends)? OK, some broad questions here to draw attention to the fact that you could focus this in any way, but you can\u2019t go in all ways. I\u2019m drawn in particular to the sound-related flakey stuff and how important such sounds might be to society. To dig into this, you\u2019ll find Attali\u2019s \u2018Noise\u2019 useful: Attali, J. (1985). Noise: The Political Economy of Music. Minneapolis: University of Minnesota Press. 
Check out the last chapter at least.<\/p>\n<\/blockquote>\n<\/div>\n<div class=\"tags\" style=\"text-align: center\"><\/div>\n<div class=\"tags\" style=\"text-align: center\"><\/div>\n","protected":false},"excerpt":{"rendered":"<p>Acheron Crossing: a practical study of narration through dynamic and fixed spatialisation Andrea Trinci My project wants to explore the potentials of spatialization, in particular its capabilities to create rich and immersive ambiences.\u00a0The storyboard with draw from the classical myth &hellip; <a href=\"https:\/\/digital.eca.ed.ac.uk\/finalprojects\/2016\/07\/21\/interim-presentations-where-the-projects-are-and-feedback-part-two\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":174,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"acf":[],"_links":{"self":[{"href":"https:\/\/digital.eca.ed.ac.uk\/finalprojects\/wp-json\/wp\/v2\/posts\/48"}],"collection":[{"href":"https:\/\/digital.eca.ed.ac.uk\/finalprojects\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/digital.eca.ed.ac.uk\/finalprojects\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/digital.eca.ed.ac.uk\/finalprojects\/wp-json\/wp\/v2\/users\/174"}],"replies":[{"embeddable":true,"href":"https:\/\/digital.eca.ed.ac.uk\/finalprojects\/wp-json\/wp\/v2\/comments?post=48"}],"version-history":[{"count":1,"href":"https:\/\/digital.eca.ed.ac.uk\/finalprojects\/wp-json\/wp\/v2\/posts\/48\/revisions"}],"predecessor-version":[{"id":52,"href":"https:\/\/digital.eca.ed.ac.uk\/finalprojects\/wp-json\/wp\/v2\/posts\/48\/revisions\/52"}],"wp:attachment":[{"href":"https:\/\/digital.eca.ed.ac.uk\/finalprojects\/wp-json\/wp\/v2\/media?parent=48"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/digital.eca.ed.ac.uk\/finalprojects\/wp-json\/wp\/v2\/categories?post=48"},{"taxo
nomy":"post_tag","embeddable":true,"href":"https:\/\/digital.eca.ed.ac.uk\/finalprojects\/wp-json\/wp\/v2\/tags?post=48"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}