Panel discussion – Scaling to fit and coping with detail

Sound is data. Alterations in the structure of data can create events of audible and emotional value. Delayed and modulated samples create filters; an array of samples creates microseconds of a wavetable; milliseconds and seconds of samples create distinguishable sound events. Yet a single sample can affect an event that lasts seconds.
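The first claim above can be sketched in a few lines (illustrative code, not from the panel): summing a signal with a copy of itself delayed by a handful of samples forms a feedforward comb filter, which cancels frequencies at odd multiples of sr / (2 * delay).

```python
import math

def comb_filter(x, delay):
    """Mix each sample with a copy delayed by `delay` samples:
    y[n] = x[n] + x[n - delay]."""
    return [s + (x[n - delay] if n >= delay else 0.0)
            for n, s in enumerate(x)]

sr = 8000                  # sample rate in Hz (an assumed value)
delay = 4                  # first notch at sr / (2 * delay) = 1000 Hz
notch = sr / (2 * delay)

# A 1 kHz sine sits exactly on the filter's first notch...
x = [math.sin(2 * math.pi * notch * n / sr) for n in range(sr)]
y = comb_filter(x, delay)

# ...so once the delay line is full, the tone cancels almost exactly.
residual = max(abs(v) for v in y[delay:])
```

A delay of a few samples is all it takes: the delayed copy arrives half a cycle late at the notch frequency and cancels the original.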

Sound is scalable. A stream of numbers can be sonified as noise, transposed to drive oscillators, modelled to trick the ear, mapped across a game to trigger sounds, or translated into MIDI notes to create a composition. A sound event can be analysed and translated into colours on a screen, mined for patterns, or re-synthesised to take on a new form.
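One of the mappings named above can be sketched concretely (the scale, range, and function name are illustrative assumptions, not tied to any panellist's work): translating a raw stream of numbers into MIDI note numbers, so arbitrary data becomes a playable melody.

```python
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]    # semitone offsets of a major scale

def data_to_midi(values, low=48, octaves=3):
    """Scale arbitrary values onto C-major MIDI notes starting at `low`."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0          # avoid dividing by zero on flat data
    degrees = len(C_MAJOR) * octaves
    notes = []
    for v in values:
        # Map the value onto one of `degrees` scale steps...
        step = int((v - lo) / span * (degrees - 1))
        # ...then convert the step into an octave and a scale degree.
        octave, degree = divmod(step, len(C_MAJOR))
        notes.append(low + 12 * octave + C_MAJOR[degree])
    return notes

print(data_to_midi([1, 5, 9, 3, 7]))   # → [48, 65, 83, 57, 74]
```

Any monotone stream of data rises through the scale; any oscillating stream becomes a melodic contour.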

Sound has detail. The data that makes up an audible event means different things to a musician, sound designer, geologist, physicist, acoustician, film editor or computer scientist.

Data is scalable and detail is subjective. This panel will discuss the effect of sound across different windows of time – from samples to seconds – and how it can be altered and structured to create value.

This panel is chaired by Varun Nair and will feature the following panellists:


Garry Taylor: 
After 10 years in the music industry as a musician and recording engineer, Garry started his video game career at X-Com developer Mythos Games.

He joined Sony Computer Entertainment Europe’s London Studio in 2001 as a sound designer, where he worked on the The Getaway, EyeToy and Gran Turismo franchises.
Now based in Cambridge, he is currently Audio Director for Sony Worldwide Studios’ Creative Services Group, responsible for audio development across Sony’s London, Guerrilla Cambridge and Evolution Studios.

 


Andy Farnell: 
Andy Farnell is a computer scientist from the UK specialising in audio DSP and synthesis.

A pioneer of procedural audio and author of the MIT Press textbook “Designing Sound”, Andy is a visiting professor at several European institutions and a consultant to game and audio technology companies. He is also an enthusiastic advocate and hacker of free and open-source software, who believes in educational opportunities and access to enabling tools and knowledge for all.

 

Jamie Bullock: Jamie Bullock is co-founder of Integra Lab, a music technology research centre based at Birmingham Conservatoire, where he works as Senior Researcher in Interactive Music Technology. His work focuses on improving digital technology for musicians by raising standards in interface design and user experience. He holds a PhD in applied audio feature extraction and has published widely on humane and sustainable systems for interactive music production. He is a regular invited speaker at events such as Music Tech Fest and the Sound Software Workshop. His current projects include LibXtract, a widely used library for audio analysis, and Integra Live, a new software environment for composing and performing live electronic music.

Ben Gillet: Ben is the founder and ‘Head Camel’ at Camel Audio, an Edinburgh-based company creating virtual instrument and effects plugins for musicians. Camel Audio’s products have won numerous awards and are used by leading film music composers such as Danny Elfman and Hans Zimmer and electronic music producers such as Orbital and BT.

Ben’s focus has been on both the design and implementation of synthesisers that include re-synthesis techniques. He has spent the majority of the last eight years working on Alchemy, which includes additive, phase-vocoder and granular-based re-synthesis, and he is currently working on improved re-synthesis and morphing algorithms for version two. He also has an interest in touch-based gestures for musical instruments, having designed Alchemy Mobile for the iOS platform, which has been installed nearly one million times. Ben is also on the lookout for freelance sound designers to join the Alchemy v2 sound design team and is looking to hire a great full-time C++ programmer.

Register to attend – www.wisd-day-01-pm.eventbrite.com