So now we understand a bit more about what sound is, and how we measure it. But just as important, if not more important, is how we actually perceive it. So I want to talk a little bit more about how we perceive sound, and why that's relevant.

As I've already said, sounds have a frequency, which is the number of times per second that we have these compressions and expansions, or rarefactions. That frequency is perceived as what we think of as musical pitch. If the frequency is high, then the notes on the piano are going to be higher; if the frequency is low, the notes on a piano are going to be lower. But frequency also shapes what we call the spectral composition of a sound, such as its brightness. If a sound is very, very bright, then it has more high frequencies. If a sound is very dull, then it has fewer high frequencies. And this is what we mean by the spectral composition. The word spectral refers to spectra, and spectra are really just combinations of waveforms that make up a sound. All sounds are made up of combinations of waveforms that act together to create the characteristics of that sound. So spectral characteristics are among the most important things about sound. You can tell if an object is made of wood or metal based on how bright it sounds. Try it yourself at home: tap on a piece of wood, and you'll notice it sounds quite dull. Now tap on a piece of metal, particularly if you use a sharp object, and you'll notice it's much brighter. It has more high frequencies.

So the human ear can hear sounds that range from 20 cycles per second, that's 20 oscillations every second, up to 20,000 cycles per second, which sounds like quite a lot, and it is. If you're getting a bit old, you might not be able to hear sounds all the way up at 20,000 cycles per second; your top end will probably finish at around about fifteen or sixteen thousand. And this is used to great effect by restaurant owners who want to keep young people away from their restaurants.
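The idea that brightness comes from how strongly the high-frequency components are present can be sketched in a few lines of Python. This is just an illustration, not an acoustics library: the sample rate, the `rolloff` parameter, and the spectral-centroid measure are hypothetical stand-ins for "how quickly the harmonics die away" and "perceived brightness".

```python
import math

SAMPLE_RATE = 8000          # samples per second (an assumed figure)
DURATION = 0.1              # seconds of audio to generate

def harmonic_tone(f0, rolloff, n_harmonics=10):
    """A sound built as a combination of waveforms: harmonics of f0,
    each one `rolloff` times quieter than the last.
    Small rolloff -> dull, large rolloff -> bright."""
    n = int(SAMPLE_RATE * DURATION)
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        s = sum((rolloff ** k) * math.sin(2 * math.pi * f0 * (k + 1) * t)
                for k in range(n_harmonics))
        samples.append(s)
    return samples

def spectral_centroid(f0, rolloff, n_harmonics=10):
    """Amplitude-weighted mean frequency of the harmonics,
    a rough stand-in for perceived brightness (in Hz)."""
    freqs = [f0 * (k + 1) for k in range(n_harmonics)]
    amps = [rolloff ** k for k in range(n_harmonics)]
    return sum(f * a for f, a in zip(freqs, amps)) / sum(amps)

dull = spectral_centroid(220, 0.3)    # harmonics die away quickly
bright = spectral_centroid(220, 0.9)  # harmonics persist up the spectrum
print(round(dull), round(bright))     # the brighter tone has a higher centroid
```

Tapping wood versus metal is, loosely, the difference between the first call and the second: the metal's harmonics ring on, pushing the spectral balance upward.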
They put high-frequency speakers outside the restaurants that only young people can hear, and young people get very uncomfortable when they stand outside. I'm not lying about this; it's a common method of social engineering.

But measuring these cycles, how do we do that? Well, I've already explained what they are, but the term I've not introduced yet is Hertz. Hertz is the official nomenclature for CPS, cycles per second.

Another characteristic we've learned about is amplitude, which is basically how loud something is. It can also tell us how far away a sound might be. Sounds with a high amplitude are louder, and if a sound is loud, we might also assume that it's close to us; if a sound is quiet, we might assume it's further away. We measure acoustic signals in decibels. I'm not going to go into detail about how decibels are structured, or why we use them, but you should be aware that there are lots of different decibel scales. It's not like a centimeter, where there's just one centimeter; there are at least five or six different types of dB. When we measure acoustic signals, we often use something called dB(A), an A-weighted decibel. An A-weighted dB is very different from the dB scale used in digital audio. Suffice to say, if you're looking for a number for the loudest sound human beings can bear, it's around about 140 dB.

So another very, very important characteristic of sound is direction. We understand what that means, but how do we actually perceive direction? Well, this is very important, particularly with respect to virtual reality, because research that has been done on acoustics for many, many years is now crucial to the way that all sound works in virtual reality environments. We perceive what direction sounds are coming from through a combination of time differences between the ears, volume differences between the ears, and also frequency differences between the ears.
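To give a feel for why decibels are convenient, here is the standard conversion from an amplitude ratio to dB (20 times the base-10 log of the ratio); the distances and figures are purely illustrative.

```python
import math

def amplitude_to_db(amplitude, reference=1.0):
    """Convert an amplitude ratio to decibels: dB = 20 * log10(a / ref)."""
    return 20 * math.log10(amplitude / reference)

# Doubling the amplitude adds about 6 dB:
print(round(amplitude_to_db(2.0), 1))   # about 6.0

def level_drop(d1, d2):
    """Change in level (dB) moving from distance d1 to d2, assuming
    amplitude falls off as 1/distance in open space."""
    return amplitude_to_db(d1 / d2)

# Doubling your distance from a source loses about 6 dB:
print(round(level_drop(1.0, 2.0), 1))   # about -6.0
```

That logarithmic compression is the point of the scale: it turns the enormous amplitude range between a whisper and that 140 dB pain threshold into manageable numbers.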
So this is quite a complex picture, and it's very different to the way things used to work in audio. I'll talk in a bit more detail about that, but the most important thing to remember is that these time differences are very powerful, and help us to locate sound in really complex ways.

So direction, particularly in VR, is often about the time difference between when a sound is heard by one ear and then by the other. The difference between the sound reaching your right ear and your left ear, for example, tells you a lot about where the sound is coming from. Because, as we know, sound travels at roughly 330 meters per second in air, and your brain knows this too; it's very, very deeply encoded in your brain. So this time difference between any single sound being detected at one ear and then the other is used by the brain to estimate direction.

Now, there are times when it doesn't necessarily work that well. So let's say that this ear is receiving a signal from this direction, and it's getting to the other ear much later. That difference in time is decoded by my brain, and because the sound arrives here first, it tells me the sound is more or less over here. The problem is that there's what we call a cone of confusion, which means that the same may be true if the sound were coming from over here: the time difference between this point and this point may be the same. This cone of confusion can be handled by also evaluating the frequency differences of a particular sound based on where it's coming from. And that's because of the shape of our ears. The shape of our ears means that the tone of the sound, its quality, for example what I was talking about before with wood or metal, will change depending on whether the sound is coming from in front of us or behind us. There are lots of other things, and it's quite a complex picture, but that just gives you an idea of what your VR environments are actually doing behind the scenes.
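The time-difference idea, and the cone of confusion, can be sketched in a toy calculation. The 18 cm ear spacing and the simple "extra path = spacing times sin(azimuth)" approximation are assumptions for illustration; real head models are more involved.

```python
import math

SPEED_OF_SOUND = 330.0   # m/s in air, roughly, as in the text
EAR_SPACING = 0.18       # meters between the ears (an assumed figure)

def interaural_time_difference(azimuth_deg):
    """Extra travel time to the far ear for a distant source.
    0 degrees = straight ahead, 90 = directly to one side."""
    extra_path = EAR_SPACING * math.sin(math.radians(azimuth_deg))
    return extra_path / SPEED_OF_SOUND   # seconds

print(f"{interaural_time_difference(90) * 1e6:.0f} microseconds")  # about 545

# The cone of confusion: a source 30 degrees in front and one at
# 150 degrees (the mirrored position behind the shoulder) produce
# the same delay, so time alone can't tell front from back.
front = interaural_time_difference(30)
behind = interaural_time_difference(150)
print(abs(front - behind) < 1e-9)
```

Half a millisecond at most: that is the tiny window the brain decodes, which is why VR audio engines have to model it so precisely.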
We'll do a bit more about this in a moment. I think what's very important to realize about interaural time difference is that although it's very powerful, it's not the whole story. It also doesn't work well for low frequencies. We tend to be very, very bad at understanding where low-frequency sounds are coming from, unless they're very close to us. When sounds are happening at some distance and they're very low, it gets very, very tough for us, for human beings, to localize them. That's partly because of their wavelength, which is very long (the wavelength being the distance the sound travels in one cycle), and partly because of reverberation.

So another really crucial part of how we decode direction in virtual reality environments is interaural amplitude differences. Now, interaural amplitude differences have been used in audio for a lot longer than interaural time differences. All an amplitude difference really means is that the sound is louder in one ear than in the other, because it's closer to one ear than the other. This forms part of the mechanism by which we hear sounds generally, and this process is modeled when we play sounds in virtual environments. So the sense of direction really comes from that very small change in amplitude between the two ears, and frequency has less of an impact on this than you might think. What this means is that it's harder for the brain to decode direction signals from certain frequency bands, as I've already said, but the amplitude can also modulate this effect. As I've already said, if something is very close, maybe you're better at understanding where it's coming from, but that's also because when it's close, it's louder. So these variations in amplitude can have a big impact on how well frequency can help us to decode position.

So this diagram explains more or less what I've been saying.
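To see why nearby sounds produce a bigger amplitude difference between the ears, here's a toy calculation assuming only 1/distance spreading and a source directly to one side. It deliberately ignores head shadow, which in reality adds a much larger, frequency-dependent difference.

```python
import math

EAR_SPACING = 0.18  # meters between the ears (an assumed figure)

def interaural_level_difference(source_distance):
    """Level difference in dB for a source directly to the right,
    using only 1/distance spreading: near ear at d, far ear at
    d + ear spacing. Positive = louder in the near ear."""
    near = 1.0 / source_distance
    far = 1.0 / (source_distance + EAR_SPACING)
    return 20 * math.log10(near / far)

print(round(interaural_level_difference(0.3), 1))   # 30 cm away: about 4.1 dB
print(round(interaural_level_difference(5.0), 1))   # 5 m away: about 0.3 dB
```

The difference shrinks fast with distance, which matches the point above: close sounds are easier to place partly because the amplitude cue between the ears is so much stronger.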
You can see quite clearly that there's a speaker here emitting sound, and the two black lines, these two black lines here, represent the length of time that it takes for the sound to reach each ear. This is going to have an impact on how loud the sound is when it gets to each ear, and also on what the time difference is between the two ears, because of the speed of sound, and also because of the way that amplitude dies down as sound propagates through space. And that's basically how the ear makes decisions, or helps us make decisions, about where sound is coming from.
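What the diagram describes, two path lengths giving both a delay and a level difference at each ear, can be put together in one small sketch. The positions, the ear spacing, and the plain 1/distance amplitude model are all assumptions for illustration, not a real VR spatializer.

```python
import math

SPEED_OF_SOUND = 330.0  # m/s in air, roughly, as in the text

def ear_signal(source, ear):
    """Arrival time (s) and relative amplitude at one ear, from the
    distance alone: delay = d / c, amplitude = 1 / d.
    Positions are (x, y) in meters."""
    d = math.dist(source, ear)
    return d / SPEED_OF_SOUND, 1.0 / d

left_ear, right_ear = (-0.09, 0.0), (0.09, 0.0)
source = (2.0, 1.0)   # the speaker, off to the listener's right

t_left, a_left = ear_signal(source, left_ear)
t_right, a_right = ear_signal(source, right_ear)
print(f"ITD: {(t_left - t_right) * 1e6:.0f} microseconds, "
      f"ILD: {20 * math.log10(a_right / a_left):.2f} dB")
```

The right ear, being closer, hears the sound both sooner and slightly louder, and it's that pair of differences the brain (and the VR audio engine imitating it) decodes into a direction.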