#203      34 min 27 sec
How we respond to music: Cultural construct or hardwired into the brain?

Behavioral scientist and musician Assoc Prof Neil McLachlan brings a scientific understanding of sound to his research on our emotional responses to music. Presented by Dr Shane Huntington.

"It probably started with Helmholtz as a reductive idea: that if we could understand how the brain hears a pure tone, we could add up lots of pure tones, we’d get complex tones, and we’d understand the processing of complex sounds. But the brain didn’t evolve to hear pure tones. It evolved to find the source and the meaning of complex sounds in complex environments. So it’s a pattern recogniser." -- Assoc Prof Neil McLachlan




Associate Professor Neil McLachlan

Assoc Prof Neil McLachlan has extensive professional experience spanning the performing arts and music, installation art and design, psychological modelling, and acoustic and aerospace design and engineering, with patents and publications in all these fields. He is currently an Associate Professor in Acoustic and Auditory Modelling in the Department of Psychology at the University of Melbourne. He has over 40 refereed publications and invited presentations, and his work has been described in Nature and featured in New Scientist.

Dr McLachlan is a director of Australian Bell and was a designer of the Federation Bells project in which the world's first harmonic bell was invented. He is currently prototyping new musical instrument designs and collaborating with psychologists and biosignal engineers on the measurement of human responses to sound as a means of establishing acoustic design criteria for musical and architectural applications. He has undertaken extensive research on machine sensing that has now found application in bio-informatics and has 5 ongoing international patent applications relating to the harmonic bell, military aerospace design, and bio-informatics systems.

Publications

More from Assoc Prof McLachlan on instrument design, and music and auditory neuroscience

Music, Mind & Wellbeing

Credits

Host: Dr Shane Huntington
Producers: Eric van Bemmel, Kelvin Param
Associate Producer: Dr Dyani Lewis
Audio Engineer: Gavin Nebauer
Voiceover: Nerissa Hannink
Series Creators: Kelvin Param & Eric van Bemmel



VOICEOVER 
Welcome to Up Close, the research talk show from the University of Melbourne, Australia.

SHANE HUNTINGTON
I'm Shane Huntington.  Thanks for joining us.  As we all know, the sounds that we hear from different musical instruments can vary greatly.  Less obvious are the precise workings of these instruments and the physics behind how they produce various notes.  In some cases, the explanation is relatively simple, whilst with others, modelling the sounds produced can be an intensive process.

Surprisingly, some of the simplest instruments, such as a bell, can be significantly more difficult to model than seemingly more complex string and wind instruments.  Detailed modelling, design and tuning of instruments such as the bell require an intricate understanding of acoustics.  Additionally, to what extent are our emotional responses to music hardwired into the brain, and to what extent are they learned culturally?

Today on Up Close, we are joined by musician, acoustics expert and auditory modeller, Associate Professor Neil McLachlan, from the Melbourne School of Psychological Sciences here at the University of Melbourne.  Welcome to Up Close, Neil.

NEIL MCLACHLAN
Thank you.

SHANE HUNTINGTON
Let’s start by talking about a typical musical instrument like a violin.  How does such an instrument go about producing a sound?

NEIL MCLACHLAN
Well, take something even simpler, say, a guitar, where you simply pluck the string.  When you pluck a string you push it away from its neutral equilibrium state, where it’s just lying under tension, and then it bounces back.  In that bounce back you’re actually sending an elastic wave down the string; it hits the other end of the string and bounces back.  At certain frequencies, based on how long it takes for the impulse to travel up and down the string, you’ll get additive reflections, just like you have to push a swing at the time it comes back to you.  That’s an additive reflection or, in other words, resonance.

So when you set a string in motion, the tension wave runs down the string, bounces back, meets itself again in the middle and sets up a resonant wave, or a standing wave.  We’re all quite familiar with these sorts of things happening in the real world.  That would be what we call the fundamental frequency of the string, the lowest frequency of the string.  When there’s a single wave, the string is moving the most at its centre, but you also get standing waves set up at half the length of the string, so you’ve got two maxima of vibration with what we call a node in the middle, where the string is see-sawing around a central point that, in fact, doesn’t move.  That’s called a harmonic.

And so when you pluck a string, you will cause it to vibrate in a harmonic series: many overtones which are integer multiples of the fundamental frequency.  So if the fundamental was 100 hertz, there’d be 200, 300, 400 hertz and so on.  It happens with any string, it happens with air columns, it happens with any one-dimensional vibrating system.

The problem with strings is that they slice through the air very easily, so if you just took a guitar string, tensioned it up and plucked it, you wouldn’t hear very much at all.
So what we do is put the end of the string on a soundboard.  A piano has a big wooden soundboard, so the motion of the string leaks a little into the soundboard, and the soundboard’s got a big surface area to push the air.  If you’ve got a very powerfully tensioned instrument like a piano, it can actually force a large soundboard to move and push a lot of air, and so the instrument’s loud.

And then to boost an instrument like a guitar even further, you have a volume of air captured underneath the soundboard, and the air itself starts to resonate.  That further enhances the coupling of the mechanical vibration to motion in the air.
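
(An aside for the numerically minded: the ideal-string harmonic series Neil describes can be sketched in a few lines of Python.  The length, tension and mass-per-metre values below are invented purely for illustration; they are chosen so the fundamental comes out at 100 hertz, matching his example.)

```python
import math

def string_harmonics(length_m, tension_n, mass_per_m, n_harmonics=5):
    """Frequencies of an ideal string's harmonic series: f_n = n/(2L) * sqrt(T/mu)."""
    f1 = math.sqrt(tension_n / mass_per_m) / (2 * length_m)
    return [n * f1 for n in range(1, n_harmonics + 1)]

# Hypothetical string: 0.5 m long, 100 N tension, 10 g/m.
# These values give a 100 Hz fundamental, so the overtones fall
# at the integer multiples Neil mentions: 200, 300, 400 Hz...
freqs = string_harmonics(length_m=0.5, tension_n=100.0, mass_per_m=0.01)
```

The same integer-multiple pattern holds for any one-dimensional resonator, which is why the formula only needs the length and the wave speed along the string.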

SHANE HUNTINGTON
So, Neil, how does that version, the string version, compare to instruments that are based on columns of air, say woodwind instruments or some of the brass instruments?

NEIL MCLACHLAN
It’s pretty much the same idea: a travelling wave moving in one dimension, so it reflects off either end of the air column.

SHANE HUNTINGTON
We use some of these terms like hertz.  Can you just clarify what we mean by this?

NEIL MCLACHLAN
When we take a recording, what happens is that a microphone responds to changes of pressure in the air and creates voltages.  When you look at a waveform on a computer recording, for example, you’ll see the voltage going up and down through time.  That also has a length, but in this case it’s a length of time, and we call that the period of the wave.  If the period is one second, then the wave will repeat every second, so we’d say that that is one hertz, or one cycle per second.

So waves have a frequency (cycles per second, hertz), they have a period, which is how long it takes for the wave to repeat, and they have a wavelength, which is their physical dimension in the air.
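
(The three quantities Neil lists are tied together by two one-line relations: period is the reciprocal of frequency, and wavelength is the wave speed divided by frequency.  The 343 m/s figure below is the approximate speed of sound in air at room temperature, assumed here for illustration.)

```python
def wave_properties(frequency_hz, speed_m_s=343.0):
    """Period (s) and wavelength (m) for a given frequency.
    343 m/s is the speed of sound in air at roughly 20 degrees C."""
    return {
        "period_s": 1.0 / frequency_hz,        # time for one cycle
        "wavelength_m": speed_m_s / frequency_hz,  # physical length of one cycle in air
    }

props = wave_properties(100.0)  # a 100 Hz tone: 0.01 s period, ~3.43 m wavelength
```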

SHANE HUNTINGTON
Neil, how do percussion instruments compare to the woodwind and some of the air based instruments or the string instruments we talked about?

NEIL MCLACHLAN
Percussion instruments are stiff objects in their own right, so you don’t have to put them under tension, and they vibrate in more than one dimension, in two or three dimensions.  Take a bell especially, where you’ve got the shape of an open cup.  It can vibrate around its rim.  You can have standing waves going around the rim, you can have standing waves propagating up and down the height of the bell, and all sorts of combinations of those things.

So if you go back to what I was saying about a standing wave being reflections in the structure, you can imagine that in all the different dimensions of a bell there are many different possible frequencies that those standing waves could have.  So if you pick up a saucepan and hit it, goodness knows what sort of frequencies you’ll get out of it.

SHANE HUNTINGTON
Now with this level of complexity it makes it somewhat hard to incorporate some of these instruments into orchestras.  Can you talk a bit more about what requirements there are for an instrument to end up in an orchestra relative to these complexities in the way they produce sound?

NEIL MCLACHLAN
Okay, now I'm going to get a little controversial, because if I was talking to you five years ago, I would have said that a one-dimensional vibration produces harmonics, that our brains are hardwired to hear a sound with a harmonic series as having a pitch at its lowest, fundamental frequency, and that only sounds with a harmonic series could have pitch.  That’s the established theory about pitch that’s been around since the mid 20th century in particular.

But what really stands out to contradict that is that there are rich musical traditions, especially in South East Asia, where they use gongs and metal-keyed instruments.  It’s called the gamelan, and gamelan makers can tune gongs and keys and all the other instruments to within a couple of hertz of each other at their fundamental frequency, but they don’t have harmonic overtones.  They have inharmonic overtones.

So what we now propose happens is that the brain is plastic.  There have been lots of books written about how the brain changes itself, and what we would argue is that the brain can learn to take a timbre, like a particular gong, and associate the pitch of that timbre with its lowest frequency.  It’s a learned mechanism.  That’s what we think is happening in Indonesia, and that’s what we think happens for people who play carillons, instruments built from church bells in Europe, with a long tradition of making musical instruments out of bells.  Bells don’t have harmonic overtones, and carillonneurs are a very small population of people who are very happy with the sound of bells.  Not many other people are.

SHANE HUNTINGTON
Neil, how does the term pitch relate to the term frequency in physics?

NEIL MCLACHLAN
Well, for a simple sound, a pure tone, it’s the same thing.  Pitch is an auditory dimension.  It’s a psychological thing, whereas frequency is a physical, measured thing.  And what I would now argue, on the basis of our recent research, is that pitch is a very specific loudness; it’s the loudness of a sound at a very specific frequency.  A sound may comprise many harmonics, but we attribute the loudness to the fundamental frequency, for example.

SHANE HUNTINGTON
Neil, when we listen to an orchestra and we hear a particular note that we perceive, are we just hearing one particular wavelength there, or is it more complex than that?

NEIL MCLACHLAN
It’s very rare that the whole orchestra plays one note but, say, let’s take the violins.  They will, hopefully, all be playing the same pitch, but you’ll hear all of the overtones of every instrument.  So what you’ll hear is a blurring, a shimmering of all the harmonics of all the violins, but when you’re familiar with that sound, you’ll associate it with one pitch.  That’s a learnt thing, and this is an extraordinarily controversial new proposition: that pitch is actually learnt and is not fundamental to human auditory perception.

SHANE HUNTINGTON
So does this mean that after we learn to do this we’re incapable of separating out the harmonics when we hear them?

NEIL MCLACHLAN
No, that takes learning, too.  In fact, you’ll find enormous individual differences in the way people listen to a sound, and you can prompt a musician to listen to the harmonics of the sound, in which case they will apply a different processing method.  They’ll actually change gears in the brain and go, all right, I’m not going to just associate all that sound with a simple pitch, I’m going to listen out for the harmonics.

That has been talked about in the literature as analytical listening, as opposed to holistic listening.

SHANE HUNTINGTON
We’ve been talking a bit about harmonics which is a physics term.  What about the term harmony?  What does this mean in the context of these sounds?

NEIL MCLACHLAN
Well, here we go into a little bit of history, because if we go right back to Pythagoras, one of the first theorems of physics was that if you take two strings equally tensioned, and one is two-thirds the length of the other, the shorter one will produce a frequency ratio of 3 to 2, because frequency is exactly inversely proportional to the length of the string.  That ratio of 3 to 2 is what we call, in Western music, a perfect fifth.  So Pythagorean tuning was to take a fifth of a fifth of a fifth, and so on.

From that idea that you could create a harmony from a simple number ratio, two-thirds, 3 to 2, Arabic mathematicians and lute makers started to look at other ratios, 4 to 3, 5 to 4 and so on, and we had what was called just tuning.  In Mozart’s time, many of the intervals were in these perfect number ratios, and it created the idea of the harmony of the spheres.  In the classical period, people took Pythagorean ideas, which were a kind of numerology, and thought that God’s mind was expressed in these simple number ratios because they produced harmony.

So all our tuning systems were based on these simple number ratios, except there were anomalies.  One of the great anomalies was the Devil’s Tritone, as it was called, which is exactly half the octave.  We call it a diminished fifth in music.  It’s six semitones out of the 12, and it produces what everyone calls a dissonance, a famous dissonant sound.  It perplexed this simple idea of simple numeric relationships.

So in the mid 19th century, the 1860s, the famous psychophysicist Helmholtz, who really discovered harmonics in sounds and also did a lot of physiology on the ear, came up with the idea that when two sounds occur close together in frequency they start to beat against each other.
Anyone who’s tuned a guitar string can hear the sound starting to flutter, wa-wa-wa-wa, and then as they get the strings closer to tune the fluttering slows down until it disappears.  We call that beating, or destructive interference, or comodulation; there are lots of words for it.

His idea was that if you have two harmonic tones, two guitar strings, and they’re slightly out of tune, not at a perfect 3 to 2 ratio, then the harmonics of those sounds will start to comodulate against each other, beat with each other, and you’ll get roughness in the sound.  This is the roughness theory of dissonance: if we have tunings that are slightly away from these simple number ratios, like 3 to 2, then the harmonics will start to interfere with each other.

That’s the theory of dissonance that’s been with us since the mid 19th century.  It’s the common explanation for dissonance.
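
(Helmholtz’s beating argument is easy to put into numbers.  The sketch below uses a hypothetical 200 Hz string: a just fifth at exactly 3 to 2 has harmonics that coincide, so no beating, while an equal-tempered fifth misses the ratio slightly and its harmonics beat slowly.  The tritone, half the octave, works out to the square root of 2, an irrational ratio, which is why it sat so badly with the simple-number-ratio picture.)

```python
# Two "guitar strings" tuned a fifth apart; 200 Hz is an arbitrary example.
f_low = 200.0
f_high_just = f_low * 3 / 2               # exact 3:2 ratio -> 300 Hz
f_high_tempered = f_low * 2 ** (7 / 12)   # equal-tempered fifth, ~299.66 Hz

# Compare the harmonics Helmholtz worried about:
# 3rd harmonic of the low string vs 2nd harmonic of the high string.
beat_just = abs(3 * f_low - 2 * f_high_just)          # coincide exactly: no beating
beat_tempered = abs(3 * f_low - 2 * f_high_tempered)  # slow beating, under 1 Hz

# The Devil's Tritone: six semitones is exactly half the octave,
# a frequency ratio of sqrt(2) -- not a ratio of small whole numbers.
tritone_ratio = 2 ** (6 / 12)
```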

SHANE HUNTINGTON
The term dissonance itself, meaning?

NEIL MCLACHLAN
Meaning that negative response you might have to some sounds.  They can be difficult to listen to.  It often gets termed simply as roughness, and then people say the rough sounds are bad, although we love rough-sounding saxophones and we love rock and roll, too.  There are a lot of problems with this definition of dissonance.  In fact, I have trouble saying what dissonance actually is to you.

SHANE HUNTINGTON
This is Up Close, coming to you from the University of Melbourne, Australia.  I'm Shane Huntington, and in this episode we’re talking about the science of music with Associate Professor Neil McLachlan.

Neil, how does our brain work in concert with our ears to perceive certain musical notes and chords?

NEIL MCLACHLAN
That’s a big question.  In 2010, Sarah Wilson and I published a new model of the auditory system; we call it the object-attribute model.  In that model we propose that recognition mechanisms are the most important aspect of sound processing for an animal, because an animal first needs to be able to recognise whether a sound is coming from a predator or a prey.  You’re not going to wait until the end of the lion’s roar before you start running.

So recognition mechanisms, in an evolutionary sense, were probably the first thing to evolve, and they need to happen quickly.  That turns around the whole idea of the sequence of processing.  It probably started with Helmholtz as a reductive idea: that if we could understand how the brain hears a pure tone, we could add up lots of pure tones, we’d get complex tones, and we’d understand the processing of complex sounds.  But the brain didn’t evolve to hear pure tones.  It evolved to find the source and the meaning of complex sounds in complex environments.  So it’s a pattern recogniser.

So what we propose is that, as I said, the brain first recognises the timbre and associates the timbre with an identity.  Now, I know I’ve introduced a new word, timbre.  If we take the current standard definition in the American standards, timbre is those aspects of a sound that make it different from another sound that has the same pitch and loudness.  In other words, it’s a negative definition.  It’s saying we don’t know what timbre is, but it’s not pitch or loudness.

What I would say is that timbre is the quality by which we recognise a sound.  When we describe sounds to each other we say it sounds like a trumpet or a car horn or a door closing.  That’s because timbre and recognition are integral.  Timbre, then, is really just the pattern of auditory nerve responses as they emerge in time.
For a physicist, you’d say it’s a spectro-temporal pattern of a soundwave, and those patterns become identified with voices, phonemes, car doors and so on.  So we form long-term memory templates for those patterns.  There are some sounds you are so familiar with that you can almost recall them in your mind with what appears to be, at least to you, perfect clarity.

Given that we have a long-term memory for sounds, we can apply that template to a new sound when we hear it.  We can always be listening out for our name, for example; we have a template for our name.  The “cocktail party effect” is this famous problem of how we hear our name across a crowded room.  It’s because we have an accurate template for it, and as we’re listening to the stream of bubble and babble of speech, suddenly something matches our name and we pull it out from the mixture.

It also means that if we have a template for harmonic sounds, then we can use that template to find the pitch of a sound.  So our recognition mechanisms can prime and adapt the rest of our feature extraction mechanisms.  Recognition happens first; pitch, loudness and location all happen later.
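
(The template idea can be illustrated with a toy pattern-matcher.  This is not the object-attribute model itself, just a generic normalised-correlation sketch: a stored “template” is slid along a noisy stream, and the best match pops out, the way a name surfaces from cocktail-party babble.  All the signals here are synthetic random vectors.)

```python
import numpy as np

def template_match(stream, template):
    """Slide a stored template along a feature stream and return the
    offset and normalised-correlation score of the best match."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    best_pos, best_score = -1, -np.inf
    for i in range(len(stream) - len(template) + 1):
        w = stream[i:i + len(template)]
        w = (w - w.mean()) / (w.std() + 1e-12)
        score = float(np.dot(w, t)) / len(template)  # Pearson-style correlation
        if score > best_score:
            best_pos, best_score = i, score
    return best_pos, best_score

# Synthetic "babble" with a known pattern (our "name") buried inside it.
rng = np.random.default_rng(0)
name = rng.normal(size=20)
babble = rng.normal(size=100)
babble[40:60] += 3 * name           # plant the pattern at offset 40
pos, score = template_match(babble, name)
```

The matcher locates the planted pattern because the template correlates strongly only where the pattern actually occurs, which is the essence of recognition-first processing: you need the stored template before the stream means anything.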

SHANE HUNTINGTON
Now, Neil, we’re going to play a couple of sounds, a dissonant and non-dissonant chord, and I guess we have to take into account here that Up Close is listened to by tens of thousands of people from many different cultures across the world.  So I'm going to ask you to basically respond to each of these sounds and describe how they will be heard by our various listeners.

NEIL MCLACHLAN
Okay.

[Dissonant sound]

SHANE HUNTINGTON
First of all, the dissonant sound.  That was the diminished fifth that you mentioned earlier, Neil.

NEIL MCLACHLAN
Now, to answer your question, we did a big study in our laboratory.  We got non-musicians, we got amateur musicians and we got professional musicians, and we played them diminished fifths and all of those chords.  The non-musicians didn’t really care; they didn’t think that a diminished fifth was particularly dissonant.  The musicians, they hated it.

Let’s play a perfect fifth.

[Perfect fifth sound]

NEIL MCLACHLAN
Not very much different.  They’re only a semitone apart in their interval, but musicians found that particularly consonant and, remarkably, the non-musicians didn’t really distinguish very much between them at all, which flies in the face of all theories, Helmholtz’s theory and every other theory of consonance and dissonance.

Now, perfect fifths are used extensively throughout music; diminished fifths are very rarely used.  So musicians are not as familiar with that timbre, and they’re not able to pull out the two pitches, not able to accurately find the pitch.  That’s what we tested.  We asked them to find the pitch, we asked them to tell us how familiar the chord was and how dissonant the chord was, and we found all three of those measures were highly correlated.

SHANE HUNTINGTON
You mentioned that these are learned responses, so is it possible to get these same musicians to learn to like these particular chords?

NEIL MCLACHLAN
Yes.  In fact, it’s well known that people’s musical taste changes, and so the more you become familiar with diminished fifths, the more you come to like them.  That explains what happened in 20th century music when we started getting tone-row music and atonal music.  You had overtrained musicians.  We are constantly looking for novelty, because there’s nothing worse than listening to music when you know exactly what’s going to happen.  So music needs to be familiar enough for us to be able to process it, but have interesting tricks, interesting unexpected things happening.

Once you’re completely familiar with the structures of tonal music, the only thing left is to create atonal music, and if you become familiar with atonal music you don’t find the diminished fifth at all dissonant.  And jazz musicians love the diminished fifth.

Another interesting aspect of that question is whether we can train non-musicians to like the diminished fifth.  So in 2011 we asked 20 non-musicians to learn to find the pitches of a set of two-pitched chords.  These were simple pure tones; there were no harmonics.  They were very simple pure-tone chords, and the participants took a computer home, and for 10 days they heard the chords, tried to find the two pitches in them, and got feedback from the computer telling them how far away they were, and so on.  When they came back, we asked them which chords, out of all 12 possible two-pitched chords, they liked and didn’t like.  Of course, what we found was that the chords they’d learnt to find the pitch of were the chords they found consonant, and the chords they hadn’t been learning they found dissonant.  And this was completely irrelevant to the tuning of the chords.

It shows how over 10 days the brain can adapt to learn to recognise the timbre of these chords, to find the pitch, and then it finds those sounds pleasant.

SHANE HUNTINGTON
Neil, you’ve been working quite extensively on bells in particular.  Why is there the interest in what seems to be a very simple musical instrument?

NEIL MCLACHLAN
As we mentioned earlier, bells are complicated three-dimensional vibrating objects, and it wasn’t until we developed computational methods in the late 20th century, the 1980s and onwards, that we could predict how the complex shapes of a bell would vibrate.  So they’d been in the too-hard basket to really design.  Bell design really started in the mid 18th century, when people started trying to use bells in musical instruments, in carillons in Europe, and it also became possible to put a bell on a lathe and remove the metal in a consistent and measurable way.

When they started doing that, the best they could do with a European bell was to tune the overtones to some of the notes of the Western scale.  So Western church bells and carillon bells have a minor third interval in them, a three-semitone interval.  We’ve all got used to that sound and come to love it, and that’s fine, except that when you play a major third chord with these bells you get a clashing semitone.  For most people, one, they can’t really find the pitch of the bell, because there’s ambiguity in the tuning compared to a harmonic series, compared to a flute or a violin; and two, they can’t recognise the chords that the bells make.  Carillonneurs can, however, because they’ve become accustomed to it.

So we looked at using finite element analysis methods to predict how bells will vibrate, and then we used artificial intelligence algorithms to search the design space, because you can change the shape of a bell in a million ways, and we had to systematically go through all the ways you could change the shape of a bell to find the geometries that could produce a harmonic series.  Eventually we did do that, and we produced the Federation Bells here in Melbourne, which are the first bells to have harmonic overtones.  So they’re the first bells that ordinary listeners raised on Western music can find the pitch of and hear as harmonious.

SHANE HUNTINGTON
I'm Shane Huntington, and my guest today is Associate Professor Neil McLachlan.  We’re talking about the physics of musical instruments and how we perceive sounds, here on Up Close, coming to you from the University of Melbourne, Australia.

Now, I want to talk to you a bit about finite element analysis, because the bell is, as you described, a very complex beast compared to something like a string that is simply attached at two ends, which from a physics perspective is a very simple object.

NEIL MCLACHLAN
Yes.

SHANE HUNTINGTON
Why do you have to use this particular type of analysis and how does it go about helping you to solve the problem with the bell?

NEIL MCLACHLAN
Okay, as we were saying earlier, the flexural vibrations of a bell can go in all sorts of directions; they can go around the circumference and up and down the wall.  What finite element analysis does is break the complex shape of a bell up into little finite elements, which are like little bricks, and each of those bricks has a very simple geometry.  We can write the equations of motion, equations that describe how a simple brick will flex, and if we add up all those equations we can compute how a complex shape will vibrate.

With computing power increasing dramatically in the 1990s, it became possible to do that on a desktop computer, and now you could probably do it on an iPhone.

The beauty of this was that you could go through lots of geometries quickly without having to actually make the bell.  That gave us the opportunity to sequentially search through the design space of bells, look at how all of their overtones behaved for different geometrical properties, and understand those relationships.  Once we’d understood those relationships we could home in on a possible tuning system.
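
(The assemble-and-solve principle Neil describes can be shown in one dimension.  The sketch below is not a bell model: it discretises a taut string into lumped masses and springs, assembles a stiffness matrix from the per-element equations, and solves the eigenvalue problem for the natural frequencies.  A real bell analysis does exactly this, but in 3D with far more elaborate elements.  The tension and mass values are invented so the fundamental lands near 50 Hz.)

```python
import numpy as np

def chain_mode_frequencies(n_elements=100, length=1.0,
                           tension=100.0, mass_per_m=0.01, n_modes=4):
    """Finite-element-style sketch: discretise a fixed-fixed string into
    small elements, assemble mass and stiffness, solve for mode frequencies."""
    n = n_elements - 1                 # interior nodes (both ends clamped)
    h = length / n_elements
    k = tension / h                    # spring constant of each element
    m = mass_per_m * h                 # lumped mass at each node
    K = np.zeros((n, n))               # assembled stiffness matrix
    for i in range(n):
        K[i, i] = 2 * k
        if i > 0:
            K[i, i - 1] = K[i - 1, i] = -k
    # K x = omega^2 M x with M = m * I, so eigenvalues of K/m are omega^2.
    omega2 = np.sort(np.linalg.eigvalsh(K)) / m
    return np.sqrt(omega2[:n_modes]) / (2 * np.pi)

modes = chain_mode_frequencies()  # close to the analytic 50, 100, 150, 200 Hz
```

The payoff is the one Neil points to: once the geometry is reduced to matrices, you can sweep through candidate shapes numerically instead of casting a bell for each guess.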

SHANE HUNTINGTON
You have a variety of options for how you go about tuning a bell.  When you do this analysis you can change the bell’s size, its length, its width, its girth, all of these things.  What did you focus on when you were making these particular bells in order to get these tones?  Was there a particular aspect of the bell’s construction that gave you the best results?

NEIL MCLACHLAN
With the help of finite element analysis, I basically started with a cylinder with a capped end and increased the cone angle, so I went all the way from a cylinder to a disc.  I broke the geometry down into some simple properties: the cone angle; the wall taper, so whether it was thick at the top and got thinner, or the other way around; curvature, so whether it was convex or concave; and wall thickness in proportion to the length of the bell.  They were the four main properties.

And what I found, searching across all the possible values of those properties, was that when you had a cone angle of around 45 degrees and a taper going down, so the wall got thinner towards the rim, which is the exact opposite of a church bell, the lowest frequencies of the bell were all circumferential.  That means the standing waves were just occurring around the rim of the bell, which is almost like a one-dimensional vibration in a loop, so the first overtone was approximately twice the frequency of the fundamental, then the second, third and fourth and so on.  So they were an almost harmonic series.

The other types of modes, with the standing waves going up and down, were above the circumferential modes, so they were outside the range of the modes we were tuning.  If any of those modes became mixed in with the circumferential modes, it was impossible to tune a harmonic bell, because they would be in the way of a simple number series.  You’d have one, two, 2.5, three.
You couldn’t do anything about that mode at 2.5.  So having this simple geometry of a truncated cone, tapering towards the rim, made it possible to tune the circumferential modes into a harmonic series by finely tuning the wall thickness, the profile of the bell.

SHANE HUNTINGTON
Now, when you strike one of these bells, and we’ll hear one in a moment, does the sound change in time as the various harmonics start to lose energy and, as it were, bleed out of the bell?

NEIL MCLACHLAN
Absolutely.  The longest wavelength lasts the longest, so the lowest frequency lasts the longest.  Not all of the vibratory modes of the bell are tuned; that’s impossible, there are thousands of them, but you might hear maybe 10 modes, just for a very, very short time, depending on how hard the mallet is.  If you hit it with a piece of steel then you’re going to put high, very impulsive energy into the bell, which will cause the higher frequencies to start vibrating more than the lower frequencies, but if you hit it with a padded mallet, the impulse will be slow and it will put energy into the lower frequencies.

So even the mallet design is quite a complex thing.  But because the high frequencies go through lots of cycles, they damp quickly.  So the higher frequencies damp out quickly and the lower frequencies sustain.
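
(The evolving bell sound Neil describes, higher partials dying away while the low ones sustain, is just a sum of exponentially decaying sinusoids.  The sketch below synthesises such a tone; the 200 Hz fundamental, the 1/n amplitudes and the decay law are all invented for illustration, not measured from a real bell.)

```python
import numpy as np

def bell_tone(partials, duration=2.0, rate=44100):
    """Sum of decaying sinusoids: each (frequency, amplitude) partial
    decays at a rate proportional to its frequency, so high partials
    fade first and the fundamental sustains. Illustrative decay law only."""
    t = np.linspace(0, duration, int(duration * rate), endpoint=False)
    tone = np.zeros_like(t)
    for freq, amp in partials:
        decay = np.exp(-t * freq / 500.0)   # higher frequency -> faster damping
        tone += amp * decay * np.sin(2 * np.pi * freq * t)
    return tone / np.max(np.abs(tone))      # normalise to full scale

# A "harmonic bell": overtones at integer multiples of a 200 Hz fundamental.
tone = bell_tone([(200 * n, 1.0 / n) for n in range(1, 6)])
```

Writing `tone` out as audio (for example with the standard-library `wave` module) gives a strike that starts bright and relaxes toward a near-pure fundamental, matching the behaviour described above.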

SHANE HUNTINGTON
We’re about to hear a couple of bells.  I think we’ll start with one that you didn’t design, the standard church bell, so that we can get a feeling for the richness of the sound.

NEIL MCLACHLAN
And the minor third that’s in there.  

[Church bell sound]

NEIL MCLACHLAN
Can you hear that ringing overtone, and some ambiguity about what the pitch is meant to be?

SHANE HUNTINGTON
Now we’ll hear one of your bells.

[Federation bells sound]

SHANE HUNTINGTON
That’s quite different.

NEIL MCLACHLAN
You can still hear the overtones, but the pitch is unambiguous for a Western listener, and that’s the important thing.  Here’s part of the problem: what context are we listening with?  When you hear an isolated tone like that, you tend to listen analytically and say, I can hear overtones there, and then you’re not quite sure what relationship they have.  But when you hear it in a musical context there’s no way you can process it as analytically as that.  You have to listen holistically, and so then it’s easy to hear the notes and the chords.

SHANE HUNTINGTON
Now, just before we finish up, you are working on other instruments that I guess have similar complexity to the bell.  Tell us which ones you’re focusing on at the moment?

NEIL MCLACHLAN
We’re also still working on bells, because the Federation Bells are bronze cast and then machined and hand tuned, terribly expensive things.  So we’re looking for ways to make bells, gongs and keyed instruments that have harmonic overtones, so that they can work in Western music, but are cheap to make.  We’re making aluminium bells from cheap metal; we’re making gongs like the ones you’d find in gamelan orchestras in Indonesia, but made from sheet steel; and we’re using a new patented technology, which we developed here at the University [of Melbourne] in the Department of Mechanical Engineering, where we add residual stress to a neutral structure to tune the overtones.  It’s just like tensioning a guitar string in one dimension, except we’re tensioning the surface of the gong in three dimensions, and we can tune different modes by putting little dimples into the surface.

So we had to build a robot to dimple the gongs accurately to get them in tune.  We’re also making what we call metallophones (the vibraphone that jazz musicians use is a form of metallophone), except our keys are harmonically tuned, and they’re designed so they can simply be laser cut from sheet metal.  So all of the tuning is in the shape of the key.  We’ve designed an ensemble of instruments that can be mass produced, and we’re hoping to get them into schools in 2013 so that kids can have a whole orchestra, a classroom ensemble.  We use African and Indonesian musical systems, which are rhythmic cycles: fun to play, easy to learn.  You don’t have to learn the fine motor skills of a violin before you make a good sound; you just walk up and start hitting the instruments.  Kids absolutely love it.

But it takes them directly into playing ensemble music, and directly into creating and composing real music from the first day they approach the instrument.
So all of that agony of learning music is taken away by this method, and increasingly we’re seeing research which shows that music is really important in training the brain, in emotional resilience, and in forming good social networks and empathy.  We’re really keen to reintroduce music to the Western world, because as it stands at the moment, only about three per cent of us make music.

SHANE HUNTINGTON
Neil, do you see a future where things like your bells will one day be in the orchestra?

NEIL MCLACHLAN
Yeah.  We also have a lot of interest from professional musicians, so we’ll be making instruments that suit the classroom and also instruments that suit the stage.  Because of the harmonic overtones they can play harmonies that we recognise, so we fully expect that there’ll be a new percussion chamber ensemble, and that chamber ensemble will become a new part of the Western orchestra.

SHANE HUNTINGTON
Associate Professor Neil McLachlan, thank you for talking to us today and we’re just going to go out on some sounds that you’ve created.  Tell us a bit about what they’ll be.

NEIL MCLACHLAN
We’re going to hear one of our new aluminium bells and a little aluminium metalophone.  You’ll hear how they create harmony.

[Sound of Aluminium bells and aluminium metalophone]

SHANE HUNTINGTON
Associate Professor Neil McLachlan from the Melbourne School of Psychological Sciences, the University of Melbourne, thank you for being our guest today on Up Close and talking to us about the intricacies of musical instruments and how we perceive sound.

NEIL MCLACHLAN
Thank you.

SHANE HUNTINGTON
Relevant links, a full transcript and more info on this episode can be found at our website at upclose.unimelb.edu.au.  Up Close is a production of the University of Melbourne, Australia.  This episode was recorded on 14 June 2012.  Our producers for this episode were Kelvin Param and Eric van Bemmel; Associate Producer, Dyani Lewis; Audio Engineer, Gavin Nebauer.  Up Close is created by Eric van Bemmel and Kelvin Param.  I'm Shane Huntington.  Until next time, good bye.

VOICEOVER
You've been listening to Up Close. We're also on Twitter and Facebook. For more info, visit upclose.unimelb.edu.au, copyright 2012, The University of Melbourne.

