{"id":992,"date":"2019-09-15T15:37:59","date_gmt":"2019-09-15T15:37:59","guid":{"rendered":"http:\/\/digital.eca.ed.ac.uk\/sonicstructures\/?p=992"},"modified":"2019-09-16T12:36:15","modified_gmt":"2019-09-16T12:36:15","slug":"01-introduction-to-the-course-and-your-starter-patch","status":"publish","type":"post","link":"https:\/\/digital.eca.ed.ac.uk\/sonicstructures\/2019\/09\/01-introduction-to-the-course-and-your-starter-patch\/","title":{"rendered":"01 &#8211; Introduction to the course and your starter patch"},"content":{"rendered":"<h3><\/h3>\n<h3><a href=\"http:\/\/digital.eca.ed.ac.uk\/sonicstructures\/files\/2018\/09\/howFirstPatchEndedUp.png\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-large wp-image-998\" src=\"http:\/\/digital.eca.ed.ac.uk\/sonicstructures\/files\/2018\/09\/howFirstPatchEndedUp-1024x823.png\" alt=\"how the first patch ended up looking\" width=\"605\" height=\"486\" srcset=\"https:\/\/digital.eca.ed.ac.uk\/sonicstructures\/files\/2018\/09\/howFirstPatchEndedUp-1024x823.png 1024w, https:\/\/digital.eca.ed.ac.uk\/sonicstructures\/files\/2018\/09\/howFirstPatchEndedUp-300x241.png 300w, https:\/\/digital.eca.ed.ac.uk\/sonicstructures\/files\/2018\/09\/howFirstPatchEndedUp-768x617.png 768w, https:\/\/digital.eca.ed.ac.uk\/sonicstructures\/files\/2018\/09\/howFirstPatchEndedUp-100x80.png 100w, https:\/\/digital.eca.ed.ac.uk\/sonicstructures\/files\/2018\/09\/howFirstPatchEndedUp-150x121.png 150w, https:\/\/digital.eca.ed.ac.uk\/sonicstructures\/files\/2018\/09\/howFirstPatchEndedUp-200x161.png 200w, https:\/\/digital.eca.ed.ac.uk\/sonicstructures\/files\/2018\/09\/howFirstPatchEndedUp-450x362.png 450w, https:\/\/digital.eca.ed.ac.uk\/sonicstructures\/files\/2018\/09\/howFirstPatchEndedUp-600x482.png 600w, https:\/\/digital.eca.ed.ac.uk\/sonicstructures\/files\/2018\/09\/howFirstPatchEndedUp-900x723.png 900w, https:\/\/digital.eca.ed.ac.uk\/sonicstructures\/files\/2018\/09\/howFirstPatchEndedUp.png 1668w\" 
sizes=\"(max-width: 605px) 100vw, 605px\" \/><\/a>Introducing programming languages for sound, performance and composition<\/h3>\n<p>We know that the work of the sound designer or composer is not bound to the manipulation of pre-recorded sound, or the synthesis of new sound. The art and craft of sound design and composition is not limited to fixed-media contexts (such as film or electroacoustic composition) either. Contemporary computer games offer a popular genre where the sound and music structures aspire to be dynamic, context-sensitive, malleable and adaptable. There are many other contexts where sound is (or could be) designed to adjust itself depending on when, where and how it is heard or needed. Live performance, electronic composition and free improvisation, multimedia situations, online media, Human-Computer Interfaces (HCI), sound art, museum exhibits, GPS-enabled podcasts, and a range of industrial audio (in-car systems, alarms, mobile phones, sound masking, sound reinforcement, sound therapy, and auditory interfaces of almost any kind) constitute just a short list.<\/p>\n<p>Professional sound designers and composers can contribute significantly to these areas because such people are not only technically skilled in the fundamentals of digital audio, but also understand <i>how<\/i> sound can carry information to potential listeners. Importantly, they&#8217;re also sensitive to <i>what<\/i> sounds good.<\/p>\n<p>All of the above suggests that the sounds themselves are only <i>part<\/i> of the design process. The <b><i>system<\/i><\/b> that generates, manipulates, adjusts and presents these sounds is a significant concern for the sound designer and composer too. 
Thanks to software tools like <a href=\"http:\/\/www.cycling74.com\">MaxMSP<\/a>, <a href=\"http:\/\/puredata.info\/\">Pure Data<\/a> and <a href=\"http:\/\/supercollider.sourceforge.net\/\">SuperCollider<\/a>, and the increasing ubiquity of computing power, sonic artists can rapidly prototype and test the <i>behaviours and principles<\/i> of reactive and interactive audio <b>as well as<\/b> the sounds themselves. Audio design is now about imagining how sound might adapt to unimagined contexts and conceiving the <b>shifting parameters<\/b> that control the sounding result.<\/p>\n<p>Given this (extra?) responsibility for the <i>system<\/i> that generates and manipulates sound to fit its context, sonic artists must ask of their work, &#8220;what is the sound supposed to be doing now?&#8221;, then evaluate &#8220;is it doing that?&#8221; and, if not, persuade the system to change its parameters in a well-designed and carefully thought-out way such that the sonic consequences of these changes are meaningful.<\/p>\n<h3>Next steps<\/h3>\n<p>In order to make the most of this &#8216;new&#8217; frontier, we need to develop skills in audio programming. Of course, a highly desirable way to do this is to learn how to devise C++ DSP code for audio applications that can be ported and embedded into any piece of software. However, most artists on this course are not audio programmers &#8211; yet &#8211; and there are tools\/languages out there that can help you learn what DSP is and rapidly prototype sound processing ideas without needing to learn how to do this in C++. 
If you&#8217;re sufficiently excited by this, you can eventually learn C++, but until then MaxMSP and Pure Data will be sufficient for learning the key concepts behind computer-based sound, devising the behaviours your sound should have, and even testing them out in real-world contexts very easily.<\/p>\n<p>What&#8217;s key to learning a new language is having some work to actually do. Of course, working through the tutorials for MaxMSP is going to be very helpful in terms of learning how the language works and what it can do, but these tutorials will also need to be studied in tandem with the development of your project work. Knowing what you really want to do and what you don&#8217;t need to do is key here.<\/p>\n<h3>MaxMSP<\/h3>\n<p>Max\/MSP is a real-time, graphical, object-oriented programming language.<br \/>\nMax was named after Max Mathews, the computer music pioneer whose Music program was written at Bell Labs in 1957; Music III (1960) to Music V (1968) established the &#8220;<a href=\"https:\/\/en.wikipedia.org\/wiki\/Unit_generator\">unit generator<\/a>&#8221; paradigm that Max still embodies.<\/p>\n<p>Although it can be used for many purposes, Max\/MSP is a music programming language.<br \/>\nAs the name would suggest, there are two parts to Max\/MSP: Max is the MIDI programming part and MSP is the Digital Signal Processing (DSP) part (Max Signal Processing).<\/p>\n<p>Max was conceived and written by Miller Puckette in the mid-1980s whilst he was working at IRCAM.<\/p>\n<p>It was first taken up by IRCAM but then commercially released by Opcode (the makers of the OMS system) in 1990.<\/p>\n<p>Since 1999 it has been owned by Cycling &#8217;74 (<a href=\"http:\/\/www.cycling74.com\" class=\"autohyperlink\">www.cycling74.com<\/a>), a company started by David Zicarelli, one of the original developers of Max.<\/p>\n<p>In addition, Max\/MSP can be extended by writing &#8220;external objects&#8221; in the programming language C (or even Java 
and JavaScript), so it is very flexible indeed.<\/p>\n<p>This, along with its user-friendliness (i.e. ease of programming), has established Max as possibly the most successful music programming language of all time.<\/p>\n<p>Max is pretty much universal, i.e. it is used in many, many universities and studios, by musicians of every genre from all over the world.<\/p>\n<h3>Pure Data<\/h3>\n<p>Like Max, <a href=\"https:\/\/puredata.info\/\">PD is a data flow programming language<\/a>.<br \/>\nPD was developed by Miller Puckette around the time when Opcode took Max over.<br \/>\nImportantly, PD is broadly open-source and therefore has a wide developer base; numerous developers have contributed to furthering its development.<\/p>\n<p>The key differences between MaxMSP and PD relate to how the interfaces look. PD <a href=\"https:\/\/puredata.info\/downloads\/pd-extended\">was extended<\/a> to include graphics libraries and other toys, but that version (Pd-extended) is no longer supported.<\/p>\n<p>Its open-source nature means that it has been ported to the likes of Android and iOS via libPD, and the <a href=\"https:\/\/github.com\/enzienaudio\/hvcc\">Heavy Compiler<\/a> can take uploaded PD patches and provide code for Wwise, HTML5, Unity, JavaScript and other systems.<\/p>\n<h3>Faust<\/h3>\n<p>Faust is another extraordinary technical achievement, whose aim is to support the development of complex DSP processing and implement it as highly optimised code that can be executed almost anywhere, on any platform.<\/p>\n<p>It is developed by an excellent team of programmers and is open-source. The language is somewhat convoluted, but you can very quickly make effective sound software and preview its effect within a browser: <a href=\"https:\/\/faust.grame.fr\/\" class=\"autohyperlink\">faust.grame.fr\/<\/a>.<\/p>\n<h3>MIDI<\/h3>\n<p>The MIDI standard was first proposed by Dave Smith in 1981; the MIDI Specification 1.0 was released in 1983. 
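The basic vocabulary the specification defines &#8211; note-on messages, velocity, channels, and the 0&#8211;127 pitch scale with middle C = 60 &#8211; can be sketched in a few lines of Python. This is an illustrative sketch only; the helper names are my own, not part of any MIDI library:

```python
def note_on(note, velocity, channel=0):
    """Build a 3-byte MIDI note-on message: status byte 0x90 + channel (0-15),
    then 7-bit note number and 7-bit velocity."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def is_note_off(message):
    """Per the MIDI spec, a note-on with velocity 0 is treated as a note-off."""
    status = message[0] & 0xF0
    return status == 0x80 or (status == 0x90 and message[2] == 0)

def midi_to_hz(note):
    """Convert a MIDI note number to frequency in Hz
    (equal temperament, A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

msg = note_on(60, 100)               # middle C at moderate velocity
print(msg.hex())                     # "903c64"
print(is_note_off(note_on(60, 0)))   # True: velocity 0 acts as note-off
print(round(midi_to_hz(60), 2))      # 261.63 (middle C)
```

Note that the message carries no sample data at all, only the control values; the receiving synthesiser decides how note 60 actually sounds.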
The most important distinction between MIDI and other sound-related formats is that MIDI code does not contain sample data; rather, it is a control language that specifies, at its most basic level, when, how loud, and for how long musical notes happen. How these notes then sound depends on the hardware used to create sonic output.<\/p>\n<p>The acronym stands for &#8220;Musical Instrument Digital Interface&#8221;. MIDI consists of both a hardware connection and a set of agreed computer codes.<\/p>\n<p>The standard tries to be comprehensive, but not all manufacturers implement everything; documentation usually contains a MIDI implementation chart so that you know what controller and note values are mapped to hardware functions.<\/p>\n<h3>MIDI Terminology<\/h3>\n<ul>\n<li>MIDI Interface<br \/>\nThe physical connection<\/li>\n<li>MIDI Device<br \/>\nAnything that sends or receives MIDI data<\/li>\n<li>MIDI Controller<br \/>\nA device which only SENDS data<\/li>\n<li>MIDI Sound Source<br \/>\nA synthesiser, sampler, or other audio device controlled by MIDI<\/li>\n<li>MIDI Channel<br \/>\nData sent on up to 16 separate channels<\/li>\n<li>Note On<br \/>\nA code that starts (or, with velocity 0, stops) a MIDI note event<\/li>\n<li>MIDI Pitch<br \/>\nThe tempered scale numbered from 0&#8211;127, where middle C = 60<\/li>\n<li>Velocity<br \/>\nThe speed with which a controller (e.g. a key on a keyboard) is pressed; usually translated into loudness<br \/>\nNB A note-on message with velocity 0 is effectively a note off<\/li>\n<li>Continuous Controller<br \/>\nA continuous stream of MIDI information from e.g. a pitch-bend wheel<\/li>\n<li>Program Change Event<br \/>\nA code which tells a MIDI device to change its internal setup, e.g. synth voice bank<\/li>\n<\/ul>\n<h3>Introduction to Digital Signal Processing<\/h3>\n<p>Digital Signal Processing (DSP) refers to processing (but not necessarily changing!) 
a signal through digital means.<br \/>\nThe origins of DSP are in electrical engineering.<\/p>\n<p>The signal in analogue circuits was a continuous electrical signal. A digital signal is discrete, i.e. non-continuous: a stream of numbers &#8211; in the case of digital audio, a stream of sample values &#8211; as we are probably already familiar with from working with ProTools and other such hard-disk recording systems.<\/p>\n<p>DSP of musical signals was originally carried out in non-real-time, as the first computer systems weren&#8217;t fast enough.<br \/>\nToday, however, real-time DSP is possible for all but the most demanding applications.<\/p>\n<p>The main uses of real-time audio processing in software are: convenient audio processing in the studio, i.e. as replacements for older hardware (digital or analogue), e.g. Cubase, Logic, Nuendo, Pro Tools, Reaktor etc.; and live performance, e.g. Ableton Live, Max\/MSP, SuperCollider etc.<\/p>\n<h3>MSP<\/h3>\n<p>The DSP extension to Max is called MSP (&#8220;Max Signal Processing&#8221; or &#8220;Miller S. Puckette&#8221;?).<\/p>\n<p>It developed out of projects from IRCAM (ISPW) and from Miller Puckette (PD).<br \/>\nAll MSP objects have the ~ (tilde) extension to their name, e.g. cycle~.<br \/>\nObjects which have signal inputs or outputs are connected using striped patch cords, so it&#8217;s easy to see what takes a signal and what doesn&#8217;t.<br \/>\nSome basic operations with MSP: recording and playback of soundfiles; real-time sampling; variable speed\/position sample playback; MIDI (or other) control of audio synthesis (additive, subtractive, FM etc.) and sample playback; other signal processing including filtering, FFT analysis (+resynthesis), granular synthesis etc.<br \/>\nSome real-time interfaces that could control MSP and PD: MIDI keyboards and controllers; the Lemur multitouch control surface; a Wacom graphics tablet; touch- and pressure-sensitive pads; games controllers e.g. 
joysticks, as well as pitch, timbre, and volume sensors (perhaps themselves real-time audio MSP programmes).<\/p>\n<h3>Technical Terms<\/h3>\n<p>We are probably already familiar with most of these, but let us revisit them briefly:<\/p>\n<p><strong>Sampling Rate:<\/strong> During recording, the number of times per second (frequency) at which the Analogue-to-Digital Convertor (ADC) measures the amplitude of the sound wave and generates one sample. In general, the higher the sampling rate, the more accurately the waveform is represented. CD quality is 44100Hz (44.1kHz), but the latest systems can sample at up to 384kHz.<\/p>\n<p><strong>Quantisation \/ Sample Size \/ Word Size \/ Bit Depth:<\/strong> The number of bits allocated to the storage of one sample. The more bits, the higher the sound quality: the number of discrete steps available to measure the amplitude increases as more bits become available. CD quality is 16-bit (65,536 steps); the latest technology offers 24-bit integer (16,777,216 steps) or even 32-bit floating point (4,294,967,296 steps). Compare this to the 8-bit (uncompressed) files found (or once found) on the Internet (256 steps). Quantisation directly affects the dynamic range of a digital system, at approx. 6dB per bit (= 96dB for 16-bit, 144dB for 24-bit systems). Quantisation noise is the signal distortion that creeps into digital playback of low-amplitude signals because the signal is constantly represented in the lowest few bits of the samples.<\/p>\n<p><strong>Nyquist Frequency:<\/strong> From Harry Nyquist, Bell Labs researcher, 1928:<\/p>\n<p>&#8220;For any given deformation of the received signal, the transmitted frequency range must be increased in direct proportion to the signaling speed\u2026 The conclusion is that the frequency band is directly proportional to the speed.&#8221;<\/p>\n<p>The Nyquist Frequency is the theoretical frequency limit of a digital audio system, usually defined as half the sampling rate, but in practice a little less than half. 
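The Nyquist limit can be illustrated numerically. The following plain-Python sketch (no audio libraries; it assumes the CD sampling rate) shows that a sine above the Nyquist frequency yields exactly the same sample values as a lower-frequency alias, which is why such frequencies must be filtered out before sampling:

```python
import math

fs = 44100            # CD sampling rate; Nyquist frequency = fs / 2 = 22050 Hz
f_low = 1000.0        # a frequency well below Nyquist
f_high = fs + f_low   # a frequency above Nyquist (45100 Hz)

# Sample both sines at fs: the high frequency produces identical sample
# values, i.e. it "aliases" down to 1000 Hz and the two signals are
# indistinguishable once sampled.
n = range(16)
low = [math.sin(2 * math.pi * f_low * i / fs) for i in n]
high = [math.sin(2 * math.pi * f_high * i / fs) for i in n]

print(all(abs(a - b) < 1e-6 for a, b in zip(low, high)))  # True
```

Because 45100 Hz advances the sine by a whole cycle plus 1000/44100 of a cycle per sample, only the fractional part survives sampling, so the recorded stream is identical to a 1000 Hz tone.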
Hence the CD sampling rate of 44.1kHz, which can represent frequencies up to about 20kHz, the average upper limit of human hearing.<\/p>\n<p>Read page 2 of this for a perfectly clear explanation: <a href=\"http:\/\/www.rctn.org\/bruno\/npb261\/aliasing.pdf\" class=\"autohyperlink\">www.rctn.org\/bruno\/npb261\/aliasing.pdf<\/a><\/p>\n<h3>Buffers and Latency<\/h3>\n<p>The term real-time is quite misleading: nothing happens instantly; audio has to be buffered. Running at 44.1kHz, the buffer latency for I\/O or processing is as follows:<\/p>\n<p>1024 samples = 23.22ms<\/p>\n<p>512 samples = 11.61ms<\/p>\n<p>256 samples = 5.8ms<\/p>\n<p>128 samples = 2.9ms<\/p>\n<p>64 samples = 1.45ms<\/p>\n<p>As well as buffering latency, there is usually a separate control rate for DSP. In MSP the usual scenario is:<\/p>\n<p>audio rate = 44.1kHz<\/p>\n<p>buffer latency = 128&#8211;512 samples for real-time sound processing<\/p>\n<p>control rate = 1kHz (DSP scheduling rate or tick)<\/p>\n<p>samples\/tick = 44.1<\/p>\n<p>Some of the above is borrowed from notes written by Michael Edwards.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introducing programming languages for sound, performance and composition We know that the work of the sound designer or composer is not bound to the manipulation of pre-recorded sound, or the synthesis of new sound. 
The art and craft of sound &hellip;<\/p>\n<p class=\"read-more\"> <a class=\"more-link\" href=\"https:\/\/digital.eca.ed.ac.uk\/sonicstructures\/2019\/09\/01-introduction-to-the-course-and-your-starter-patch\/\"> <span class=\"screen-reader-text\">01 &#8211; Introduction to the course and your starter patch<\/span> Read More &raquo;<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[5],"tags":[],"acf":[],"_links":{"self":[{"href":"https:\/\/digital.eca.ed.ac.uk\/sonicstructures\/wp-json\/wp\/v2\/posts\/992"}],"collection":[{"href":"https:\/\/digital.eca.ed.ac.uk\/sonicstructures\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/digital.eca.ed.ac.uk\/sonicstructures\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/digital.eca.ed.ac.uk\/sonicstructures\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/digital.eca.ed.ac.uk\/sonicstructures\/wp-json\/wp\/v2\/comments?post=992"}],"version-history":[{"count":6,"href":"https:\/\/digital.eca.ed.ac.uk\/sonicstructures\/wp-json\/wp\/v2\/posts\/992\/revisions"}],"predecessor-version":[{"id":1199,"href":"https:\/\/digital.eca.ed.ac.uk\/sonicstructures\/wp-json\/wp\/v2\/posts\/992\/revisions\/1199"}],"wp:attachment":[{"href":"https:\/\/digital.eca.ed.ac.uk\/sonicstructures\/wp-json\/wp\/v2\/media?parent=992"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/digital.eca.ed.ac.uk\/sonicstructures\/wp-json\/wp\/v2\/categories?post=992"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/digital.eca.ed.ac.uk\/sonicstructures\/wp-json\/wp\/v2\/tags?post=992"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}