Any sound can be written as a sum of sinusoidal functions. Sinusoidal functions include the sine and cosine functions; more generally, a sinusoidal function can include a phase shift. A cosine wave is just a sine wave with a phase shift, and for an individual sinusoidal function a phase shift is equivalent to a time delay.
If you have a complex sustained sound, such as from a musical instrument, then you can describe the sound by looking at its frequency components -- that is, think of the wave as a sum of sinusoidal functions and look at each of those sinusoids separately. Writing out that series of sinusoidal functions gives the "Fourier Series" describing the sound. A time delay produces a specific phase change for each of the sinusoids, yet a sustained sound with a time delay does not sound any different than it does without one.
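As a sketch of that relationship in Python (the 220 Hz fundamental and the harmonic amplitudes here are made up for illustration), a time delay of Δt shifts the phase of the n-th harmonic by 2·π·n·f0·Δt radians:

```python
import numpy as np

# Hypothetical sustained tone: a Fourier series of three harmonics
# of a 220 Hz fundamental, with arbitrarily chosen amplitudes.
sample_rate = 8000
t = np.arange(sample_rate) / sample_rate      # one second of time points
f0 = 220.0
amps = [1.0, 0.5, 0.25]

def tone(delay=0.0):
    """Sum of harmonics; delaying by `delay` seconds shifts the
    n-th harmonic's phase by 2*pi*n*f0*delay radians."""
    return sum(a * np.sin(2 * np.pi * (n + 1) * f0 * (t - delay))
               for n, a in enumerate(amps))

delayed = tone(delay=0.001)                   # tone delayed by 1 ms

# Rebuild the delayed tone as the original with per-harmonic phase shifts:
phases = [2 * np.pi * (n + 1) * f0 * 0.001 for n in range(3)]
rebuilt = sum(a * np.sin(2 * np.pi * (n + 1) * f0 * t - p)
              for (n, a), p in zip(enumerate(amps), phases))

print(np.allclose(delayed, rebuilt))          # the two constructions agree
```

The two ways of writing the delayed tone are identical term by term, which is exactly the statement that a time delay is a specific set of phase shifts.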
Mathematically you can add phase shifts individually to each of the sinusoidal functions. When you do this, the waveform you would see, for example on an oscilloscope, can change dramatically. What is most interesting, though, is that what it sounds like to your ear does not change. That result holds only for sustained tones, and only if the sounds are not too loud. When sounds are loud, non-linear effects in the electronics and/or in hearing can make a difference.
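A minimal sketch of that effect (again with a made-up 220 Hz tone): giving each harmonic a random phase changes the waveform substantially, yet the amplitude spectrum -- what the ear responds to for sustained tones -- is unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
sample_rate = 8000
t = np.arange(sample_rate) / sample_rate
f0 = 220.0
amps = [1.0, 0.5, 0.25, 0.125]

def tone(phases):
    # Sum of harmonics of f0, each with its own phase shift.
    return sum(a * np.sin(2 * np.pi * (n + 1) * f0 * t + p)
               for (n, a), p in zip(enumerate(amps), phases))

wave_a = tone([0.0] * 4)                       # all phases zero
wave_b = tone(rng.uniform(0, 2 * np.pi, 4))    # random phase per harmonic

# The waveforms differ noticeably point by point...
print(np.max(np.abs(wave_a - wave_b)))
# ...but the amplitude spectra are identical:
print(np.allclose(np.abs(np.fft.rfft(wave_a)),
                  np.abs(np.fft.rfft(wave_b))))
```

Because the harmonics fall exactly on FFT bin frequencies here, the two magnitude spectra match to floating-point precision even though the waveforms look nothing alike.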
If you have time-changing signals (such as speech, or the attack at the beginning and the decay at the end of a sustained note) then adding random phase shifts can change the sound. If you play a recording of singing backwards, it doesn't sound the same at all. Playing music backwards is equivalent to a special set of phase shifts.
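A quick numerical check of that last claim (using noise as a stand-in for any real recorded signal): reversing a real signal in time leaves every component amplitude unchanged and alters only the phases, so "backwards" really is just a particular set of phase shifts:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1024)     # any real signal; noise as a stand-in
reversed_x = x[::-1]              # the same signal "played backwards"

spec_fwd = np.fft.rfft(x)
spec_rev = np.fft.rfft(reversed_x)

# Same amplitude for every frequency component...
print(np.allclose(np.abs(spec_fwd), np.abs(spec_rev)))
# ...only the phases differ (each phase is negated, plus a shift term),
# yet the reversed signal can sound completely different.
```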
Here is an example of two sounds, produced using a computer program, which use the exact same sinusoidal functions with the exact same amplitudes, but some of them have had a phase shift applied (i.e., they were delayed or advanced in time relative to the other sinusoidal functions). The graph shows what the total waveforms look like. You can see they are very different. You can listen to the two sound files to compare how they sound. As long as you keep the volume low enough so that non-linear effects (in the electronics or in your hearing) are not important, most listeners hear them as not just similar, but identical.
|Sound 1 (wav)|
|Sound 2 (wav)|
Hence, when comparing two sustained, periodic sounds, the waveform is probably not very informative. One can remove the phase shifts from recorded sounds to make a fairer comparison. Such a "phase-neutral" comparison is equivalent to comparing the amplitudes of the spectral components. That is one reason that spectra are so useful when comparing two signals -- they contain the most important information. Another equivalent comparison is through the "autocorrelation" of the signals -- a way of comparing a signal to a time-delayed version of itself.
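The autocorrelation comparison can be sketched numerically. By the Wiener-Khinchin theorem, the autocorrelation of a signal is the inverse Fourier transform of its power spectrum, so it depends only on the component amplitudes, not their phases (the tone and phase values below are made up for illustration):

```python
import numpy as np

sample_rate = 8000
t = np.arange(sample_rate) / sample_rate
f0 = 220.0
amps = [1.0, 0.5, 0.25]

def tone(phases):
    return sum(a * np.sin(2 * np.pi * (n + 1) * f0 * t + p)
               for (n, a), p in zip(enumerate(amps), phases))

wave_a = tone([0.0, 0.0, 0.0])
wave_b = tone([1.0, 2.5, 4.0])    # arbitrary phase shifts per harmonic

def autocorr(x):
    # Wiener-Khinchin: autocorrelation = inverse FFT of the power spectrum,
    # which discards all phase information.
    return np.fft.irfft(np.abs(np.fft.rfft(x)) ** 2)

# Different waveforms, identical autocorrelations:
print(np.allclose(autocorr(wave_a), autocorr(wave_b)))
```

This is why the autocorrelation, like the amplitude spectrum, gives a "phase-neutral" comparison between two sustained sounds.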
Questions/comments to: suits @ mtu.edu.
There are no pop-ups or ads of any kind on these pages. If you are seeing them, they are being added by a third party without the consent of the author.