Acheron Crossing: a practical study of narration through dynamic and fixed spatialisation
Andrea Trinci
My project explores the potential of spatialization, in particular its capacity to create rich and immersive ambiences. The storyboard will draw from the classical myth of the Acheron as a threshold between the world of the living and the world of the damned. This setting was chosen for both practical and narrative reasons. It allows me to create a completely dark space (perhaps the user is a damned soul, or the river is underground), which is key to carrying the narration through sound alone, and it refers to something commonly known while still leaving me a lot of freedom from a design perspective, as the myth is open to interpretation. The idea is to create an immersive VR environment that unfolds a story through sound alone.
I am currently developing the backbone of the scenes I’m missing. I have almost 4 of the 8 scenes fully working (though still not at the level I want them to be). The last 4 scenes are very passive, so they won’t take long to develop.
Once I’m done with Unity I can focus more on audio refinement. The spatialization is working, but I want to put my effort into refining parameters further (e.g. distance and surround) within FMOD through filters and ambiences.
Obsessions: I spent almost a month trying to figure out how to create a 3D audio environment for this project. Bouncing between toolkits and plugins for cinematic VR was confusing, and without a proper manual explaining the workflow I almost risked changing the entire project out of frustration. In the end I managed to use the Oculus toolkit properly and started on the sound design.
Problems: Even with everything working, there’s still a major bug that limits my design possibilities: the attenuation curve for the FMOD event is fixed, and if I disable the standard one, the event does not play in Unity. Scripting is definitely not my strong point, so designing all the interactions in the scenes takes me longer than it should. Another problem is that directionality is very subjective and depends on familiarity with the source, so I’ll need a lot of testing with people who have never listened to my project to achieve something realistic; as it stands, it is tuned for me, and I’ve been working on it for months. Lastly, the biggest problem I’m facing is the lack of a definite vision of the final shape of the project: I don’t know whether to focus on narrative or on spatialization. What I’m doing for now is just the surface of a project that I perceive as soulless, with no clear direction.
Successes: Once set up properly, the spatialization works really well. It’s a strange feeling that you’re not used to having in an interactive experience, as it emulates reality almost perfectly. Working with such a new topic can often be really frustrating, but when you manage to achieve something that works it feels immensely rewarding. This is a project I would gladly show to people, and that is certainly symptomatic of a project worth working on, as I’m always very critical of my own work.
I’ll attach a video that shows the first scene in action:
I’d like some criticism and suggestions from other people, so feel free to comment on this post. Thanks in advance.
Here are just some of the comments on Andrea’s project; for the full comment section click here
Owen –
Thanks Andrea, good to see that things are coming together. At this stage I think it’s going to be very important to develop your thematic ideas to help guide the rest of the practical work. In particular, my feeling is that fewer scenes with their sound fully developed will make a stronger submission, with more to talk about, than risking many scenes and running out of time. So now is a good moment to reflect on what we have and think about how it’s sonically populated. How much sound do you need to inhabit the scene to make it feel suitably hellish? How can some fitting textures be developed that give the impression of crowdedness without using up too many resources? What sense of materiality do you wish to convey, given the virtual acousma you’ve adopted? Are there solid surfaces (the reverb implies that there are)? What of the floor? What possibilities exist for the voice? Can its intimacy be modulated? Can the transformations be varied to keep us, as ‘players’, unbalanced and suitably fearful?
davesmithsound –
Hi Andrea, I really enjoyed listening to your walkthrough video. The spatialization works very well. I’m wondering whether there is a “goal” for the player in this environment? Is there something they have to do within the story, or are they passive observers? You were questioning whether you should focus more on narrative or spatialization – I am tempted to suggest narrative, since you have chosen to base your project on the mythical underworld, which has so much potential for telling a story through sound. I know this is relatively early days, but I was hungry for more layers of sound in the ambience, suggestive of things to explore in the distance perhaps. Since there is a horror element here, perhaps you could use sounds that are ambiguous (particularly in the distance), as I think that’s an effective way to build tension.
I was also wondering what you meant about the FMOD attenuation curve thing – do you mean you tried disabling the “distance attenuation” on the Event Macro tab but the event doesn’t play? Did you add a distance parameter? Try adding the inbuilt distance parameter as normal, then set the Event Macro distance attenuation to “off” (and set the min and max distance to whatever you need). When I do that, the event still plays in Unity and I can use automation on the master volume to get a custom distance curve. Perhaps you mean something different though?
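The workaround above keeps the attenuation entirely inside FMOD Studio (built-in distance parameter plus automation on the master volume), so no Unity scripting is needed. To make the automation shape concrete, here is a small framework-agnostic sketch of the kind of curve you might draw on that master-volume automation lane. The function name, default distances, and rolloff exponent are illustrative assumptions, not part of FMOD’s API:

```python
def custom_attenuation(distance, min_dist=1.0, max_dist=30.0, rolloff=2.0):
    """Gain multiplier (0..1) for a hand-drawn style distance curve.

    Full volume inside min_dist, silence beyond max_dist, and a
    power-law falloff in between: rolloff=1.0 is linear, while
    higher values drop faster close to the listener.
    """
    if distance <= min_dist:
        return 1.0
    if distance >= max_dist:
        return 0.0
    # Normalise the distance to 0..1 between the two limits.
    t = (distance - min_dist) / (max_dist - min_dist)
    return (1.0 - t) ** rolloff
```

Plotting this for a few rolloff values and then tracing the shape you prefer onto the FMOD automation curve is one way to get a repeatable, tunable falloff instead of the fixed built-in one.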
Matt Harold –
Hell(o)
The localisation is great and it seems worth the effort of getting it right! Is it possible to monitor the person’s head movements? E.g. if they spend a lot of time looking in one direction, then the ‘scene’ moves onwards, so there is a sense of response. A subtle sound could exist that the user needs to focus on centrally; after they ‘centre’ it for a few seconds, a transition occurs? Also, where do you want to take your audience? I once created a piece based on the Greek rivers that led to hell, and the Acheron was specifically related to sorrow, as opposed to lament, fire, forgetfulness or hate, so maybe it’s worth thinking about the kind of emotion you want to portray, in contrast to the other hellish emotions, in order to fully explore a specific version of hell. Or, as it is a river, you could construct it as a journey by boat where you can look at the scenes around you on the shore or within the river.
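The gaze-dwell idea above is straightforward to prototype. A sketch of the core logic follows: keep a timer that accumulates while the listener’s forward vector stays within a small cone around the direction to the hidden sound, reset it when they look away, and fire the transition once the dwell time is reached. The class name, angle, and dwell time are illustrative; in Unity the forward vector would come from the HMD camera transform each frame, and `update` would be called from `Update()` with the frame delta time:

```python
import math

class GazeDwellTrigger:
    """Fires once the listener keeps a target near the centre of
    their view for `dwell_seconds` in a row."""

    def __init__(self, dwell_seconds=2.0, max_angle_deg=15.0):
        self.dwell_seconds = dwell_seconds
        # Comparing dot products against cos(angle) avoids an acos per frame.
        self.cos_threshold = math.cos(math.radians(max_angle_deg))
        self.elapsed = 0.0
        self.fired = False

    def update(self, forward, to_target, dt):
        """forward and to_target are unit 3-vectors; dt is seconds."""
        dot = sum(f * t for f, t in zip(forward, to_target))
        # Accumulate while gazing at the target, reset when looking away.
        self.elapsed = self.elapsed + dt if dot >= self.cos_threshold else 0.0
        if not self.fired and self.elapsed >= self.dwell_seconds:
            self.fired = True
            return True  # start the scene transition here
        return False
```

Because only the *direction* to the sound is compared, this works acousmatically: the user never sees the source, they only centre it by ear.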
spawning and swarming: sounding the expanded audio input
Caleb Abbott
For my final project, I am developing a vocal processing tool for live performance. To listen to samples of the tool go here. I recommend the following recordings to give a sense of where I’m at.
Spawning & swarming is, in essence, how I have come to describe the process of the tool: I treat spawn/swarm as the way in which sounds are transmitted, collected, and used within it. The tool is broken down into five parameters:
- body
- presence
- scale
- pace
- weight
The final submission will contain:
- video documentation of the parameters of the tool
- live performance
- 4 recordings of performances
- the code – Max/MSP
- the report
Obsessions:
I have been actively coding, recording, researching, and blogging (documenting) about the tool for just under two months. The focus has been heavily placed on the development of the tool. In the next few weeks I will be bringing this phase to a close and moving towards report writing, the final presentation/performance, and rehearsing with the tool. There appears to be no shortage of time that could be spent on development.
Problems:
Initially, I felt compelled to create a generalized tool which could be used by anyone, for anything sound related. The difficulty of this project, so far, is in accomplishing that. That isn’t to say the tool, in its current form, isn’t learnable, it’s just not user friendly. I feel this is because I have customized every aspect of it to my performance habits. This isn’t entirely bad. The tool was intended, either way, to be functional for my use first, and secondly for the use of others. I may altogether abandon the idea of a commercial version for this project.
On top of that, finding a suitable device to perform the patch with has also been tricky. I have settled on temporary use of the Korg nanokontrol2, but will be upgrading to the Livid cntrl: r for the last stage of development.
Successes:
On June 24, 2016, three others from the cohort (Mike, Matt, and Angus) and I gave a scratch performance at the Alison House Atrium to showcase our work in progress. This was the first live demonstration of the project, and it reflects where I am with the code, my approach to performing, and my aesthetic and compositional ideas to date. The primary purpose was to see how the tools would act in a live setting, how they would sound together amplified (beyond headphones), and to gain some insight into how an audience would engage with the piece. In general, and given the positive feedback, I feel this was a successful and useful experience.
Lastly, I am happy and encouraged when I work on this project, and I think this is probably the most successful aspect I can highlight. I feel I am pushing my abilities in Max/MSP, recording, and performing, and challenging my own comforts.