What's aliasing, and what does it sound like?


Veracohr


Aliasing

 

Aliasing is a term generally used in the field of digital signal processing. When an analog signal is digitized, any component of the signal that is above one-half the sampling or digitizing frequency will be 'aliased.' This frequency limit is known as the Nyquist frequency.

 

When a digitized signal is analyzed, often by Fourier analysis, the power contained in the frequencies above the Nyquist frequency is added to lower frequency components. In fact, they are indistinguishable from those lower frequency components; hence the term 'aliasing.'

 

The aliased signal will appear at a predictable frequency in the Fourier spectrum. For example, given a sampling frequency of 200Hz (Nyquist frequency = 100Hz), a digitized 101Hz signal will appear at 99Hz, while a 200Hz signal will appear at 0Hz or DC. A 201Hz signal will look like a 1Hz signal, and so on.

 

Analog signals are usually low-pass filtered to remove most or all of the components above the Nyquist frequency in order to avoid aliasing.

Aliasing relates to any digital audio, not just VA synths. The above excerpt is a good summary of what it is. It can best be heard by finding an old 8- or 12-bit sampler and recording something with a lot of high harmonic content, like finger cymbals, a triangle, or glockenspiel. Set the sample rate to something low, like 22.05kHz, and sample the source; play back the note and you'll hear a low tone or group of tones under the source note. This is aliasing: the sampling rate isn't high enough to capture the highest frequencies of the source, so they get "folded back" into the sound as lower harmonics.
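The fold-back arithmetic in the excerpt is easy to verify numerically. A minimal Python/numpy sketch (the 200Hz sample rate and test tones are just the figures from the example above):

```python
import numpy as np

fs = 200.0                      # sampling rate (Hz); Nyquist = 100 Hz
t = np.arange(0, 2.0, 1 / fs)   # two seconds of samples

for f in (101.0, 201.0):        # tones above the Nyquist frequency
    x = np.sin(2 * np.pi * f * t)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    print(f"{f} Hz tone -> peak at {freqs[np.argmax(spectrum)]:.1f} Hz")
# Expected: the 101 Hz tone aliases to 99 Hz, the 201 Hz tone to 1 Hz,
# exactly the fold-back the excerpt describes.
```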

Originally posted by pete psingpy:

I can understand how the term applies to sample-based synths. But I don't understand how it relates to VA synths.

One way would be if you generated a signal that has components above the Nyquist frequency of the VA synth. So suppose you modulate a signal so that the resulting side bands are too high in frequency. What is going to happen?

 

I would have to run the actual math to work this out. This is tricky because you have to use wavelets to get a real answer.

 

The Nyquist theorem ignores the fact that Heisenberg's uncertainty principle also applies here. The shorter a sound the less sure you are of its frequency. Or even its harmonic makeup - there are an (uncountable) infinity of possible sounds that would produce a given short signal.

 

This is the real reason 96kHz recording is desirable - transients reproduce more accurately.

 

So just running a long-duration Fourier analysis does not cut it. Unfortunately I have not seen, or else failed to understand, how exactly one should do the analysis in order to mimic what the ear actually hears. Probably you need to produce an "uncertainty" noise measure.


Analysis isn't really necessary to detect aliasing. You can often hear it on older VAs by playing a very high note and using the pitchbender. You'll hear the note bend in one direction while the "alias tone" bends in the other. You'll hear it even more obviously in a mid-90s rompler. In extreme cases, it can generate an almost ring-modulator type effect.

 

If it's inaudible, who really cares if there's aliasing? If it's audible, it's now up to you to determine whether you can make musical use of it or not.

 

Originally posted by pete psingpy:

I can understand how the term applies to sample-based synths. But I don't understand how it relates to VA synths.



Originally posted by pete psingpy:

I can understand how the term applies to sample-based synths. But I don't understand how it relates to VA synths.

Okay, let's see if I can explain it then. You understand aliasing when doing a/d conversion, correct? If you don't, understanding how it relates to VAs isn't going to make sense either.

 

You understand Fourier, right? Basically, any complex wave is made up of a bunch of sine waves. So take a triangle wave, for instance. It has the fundamental frequency and then a bunch of harmonics, some of which may reach up to, say, 100kHz or so. Now let's say that you're synthesizing that triangle wave at 48kHz. The moment you compute that wave as samples, you're going to have harmonics that are above your Nyquist frequency. These frequencies get folded over the Nyquist frequency just like they do if you were using an a/d converter. That creates your aliasing.
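For anyone who wants to see that fold-over happen, here's a minimal numpy/scipy sketch of a naively computed triangle wave; the note frequency and rate are arbitrary assumptions, not anyone's actual synth engine:

```python
import numpy as np
from scipy.signal import sawtooth

fs = 48_000                     # internal synthesis rate
f0 = 2_500                      # note frequency; odd harmonics at 2.5k, 7.5k, ...
n = np.arange(fs)               # one second of samples
tri = sawtooth(2 * np.pi * f0 * n / fs, width=0.5)  # ideal triangle, sampled naively

spec = np.abs(np.fft.rfft(tri))
freqs = np.fft.rfftfreq(len(tri), 1 / fs)
print(freqs[spec > spec.max() * 0.001][:12])
# Odd multiples of 2,500 Hz are the real triangle harmonics.  The extra
# lines (500, 4500, 5500, ... Hz) are harmonics above Nyquist (24 kHz)
# folded back down -- inharmonic energy that is the aliasing itself.
```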


Originally posted by Byrdman:

The Nyquist theorem ignores the fact that Heisenberg's uncertainty principle also applies here. The shorter a sound the less sure you are of its frequency. Or even its harmonic makeup - there are an (uncountable) infinity of possible sounds that would produce a given short signal.

 

This is the real reason 96kHz recording is desirable - transients reproduce more accurately.

Heisenberg's principle has to do with sub-atomic particles, not audio.

 

You don't seem to understand that we're not dealing with infinitely small frequencies when dealing with audio. We're dealing with known frequencies with a known speed. There is a limit to how fast these frequencies can be, so you're not going to have some transients that are missed or messed up by sampling them at a slower rate.


Originally posted by azrix:

...

But since the sound is being generated in software, wouldn't it be relatively easy to run it through a final brickwall filter algorithm that prevents any frequencies over the Nyquist limit from going to the d/a converter? I have to admit, this has always confused me too about aliasing in VA synths. I must be over-simplifying or something.

Originally posted by azrix:

...

You don't seem to understand that we're not dealing with infinitely small frequencies when dealing with audio. We're dealing with known frequencies with a known speed. There is a limit to how fast these frequencies can be, so you're not going to have some transients that are missed or messed up by sampling them at a slower rate.

That is only true to a certain point. You also have to remember that those finite frequencies can have an infinite number of phase positions so transients can be shifted. Tune an analog synth so that one osc is dead on 440, and the second osc is 440.001. You can hear the effect on the sound as the waves become out of phase with each other. The Nyquist frequency makes no allowances for this effect. While a sampling frequency of 44,000 can conceivably sample a 22,000 wave without aliasing as long as it has no harmonics, it also sucks the life out of the sound. That is why a sampling rate of 96K is the standard for digital recording, and expect that rate to increase as computer power grows to handle higher rates.

 

Robert



Originally posted by azrix:

...

I think I understand this. Regardless of whether the sound source is sampled or generated some other way, you will get aliasing on d/a conversion.

 

Does this mean that every VA synth aliases? Unless the frequency were very high, i.e. > 44.1kHz?


Originally posted by azrix:

...

Actually not - you will find this discussed in books on wavelets. Turns out Heisenberg's uncertainty principle applies. Of course, Planck's constant does not apply.

 

Let me attempt to explain:

 

Even though the frequencies are limited (by the filter if nothing else), the time locations of signals are not. So, simple case - consider a signal that is exactly on the Nyquist frequency. So there are two samples per cycle. Let's suppose the signal is precisely aligned with the sample times (which we shall assume are instantaneous). So you get a nice fat signal. Now move the signal 90 degrees. Result - no sampled signal at all.
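That Nyquist-edge case is easy to check directly - and it's exactly why the sampling theorem demands frequencies strictly below half the sample rate. A minimal numpy sketch (the rates are arbitrary assumptions):

```python
import numpy as np

fs = 48_000
f = fs / 2                       # a tone sitting exactly on the Nyquist frequency
n = np.arange(8)                 # a few sample instants

aligned = np.cos(2 * np.pi * f * n / fs)              # peaks land on the samples
shifted = np.cos(2 * np.pi * f * n / fs + np.pi / 2)  # same tone, moved 90 degrees

print(aligned.round(3))   # [ 1. -1.  1. -1. ...]  -> full-amplitude signal
print(shifted.round(3))   # all (numerically) zero  -> the tone vanishes
```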

 

OK, so that is artificial, so now move down from the Nyquist frequency a bit and window the signal. So now you've got a signal whose phase is slowly changing relative to the sampling clock.

 

Depending on how the phase of the signal in the window is aligned with the sampling, you will get different sample values. As long as the window is shorter than the time it takes for the phase to march round in phase against the clock, you are not going to get your original signal back.

 

By making the signal and the clock rate have no common denominator you can make that as long as you like, but clearly the amount of difference approaches zero as the sampling time increases - that is how Heisenberg's uncertainty principle gets in on the act.

 

As soon as you add position information, in other words non-steady signals, Nyquist's theorem is too optimistic.

 

There is a whole space of signals that gets projected down to a subspace. You are going from a continuous basis (Fourier transform) to a discrete one (Fourier series). Noise figures are measured only in the Fourier series subspace - the noise introduced by the projection is ignored. So noise figures for CD format are overstated. The question is - by how much? This question only makes sense when you add localisation in time; otherwise you can drive the projection noise to zero by making the sample time arbitrarily long. Heisenberg again.


Originally posted by Byrdman:

...

:freak: That didn't make much sense to me. I think you're trying to argue that digital sampling can mess up or not account for the phase of a signal? If this is what you're saying, that's not true. Once your signal has been filtered, sampling it doesn't mess with phase, and d/a conversion can reconstruct the filtered signal exactly, frequency and phase.
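That claim is checkable: for a tone safely below Nyquist, ideal (sinc) reconstruction recovers the value between the sample instants, phase included. A minimal sketch of Whittaker-Shannon interpolation - the tone, phase, and test instant here are arbitrary assumptions:

```python
import numpy as np

fs = 48_000.0
f, phase = 10_000.0, 1.234                 # arbitrary tone well below Nyquist
n = np.arange(256)
x = np.sin(2 * np.pi * f * n / fs + phase) # the samples we actually kept

def reconstruct(t):
    """Whittaker-Shannon: a sinc centered on every sample, summed."""
    return np.sum(x * np.sinc(fs * t - n))

t = 100.37 / fs                            # an instant between two samples
print(reconstruct(t))
print(np.sin(2 * np.pi * f * t + phase))   # matches closely; the tiny residual
                                           # comes from truncating the sinc sum,
                                           # not from anything lost by sampling
```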

Originally posted by Richard Whitehouse:

But since the sound is being generated in software, wouldn't it be relatively easy to run it through a final brickwall filter algorithm that prevents any frequencies over the Nyquist limit from going to the d/a converter? I have to admit, this has always confused me too about aliasing in VA synths. I must be over-simplifying or something.

Well, you'd need to upsample it first, then filter and downsample. This does take some DSP resources that some companies might rather put into making a better filter or more effects.
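A minimal sketch of that upsample/filter/downsample idea using scipy (the 8x factor and sawtooth are illustrative assumptions): render the naive oscillator at a much higher internal rate so its aliases fold around a far higher Nyquist point, then low-pass and decimate back to the output rate.

```python
import numpy as np
from scipy.signal import sawtooth, resample_poly

fs_out = 48_000
over = 8                              # oversampling factor
fs_int = fs_out * over                # internal synthesis rate (384 kHz)

f0 = 2_500
n = np.arange(fs_int)                 # one second at the internal rate
naive = sawtooth(2 * np.pi * f0 * n / fs_int)   # still a naive oscillator...

# ...but its fold-over now happens around fs_int/2 = 192 kHz, far above
# the audio band.  resample_poly low-pass filters and decimates in one
# step, removing most would-be aliases before they reach the output rate.
clean = resample_poly(naive, up=1, down=over)
print(len(clean), "samples at", fs_out, "Hz")
```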

Originally posted by pete psingpy:

I think I understand this. Regardless of whether the sound source is sampled or generated some other way, you will get aliasing on d/a conversion.

 

Does this mean that every VA synth aliases? Unless the frequency were very high, i.e. > 44.1kHz?

Well, you don't HAVE to have aliasing. There are lots of VAs that have anti-aliased oscillators. And you shouldn't have aliasing with sampled material, but if you start pitch shifting (playing higher notes) those samples it's easy to get aliasing.

 

I was just trying to explain why some oscillators may alias. Some programmers may take shortcuts with this or may not have the experience to fix it. A true triangle or square wave isn't that hard to program if you don't care about aliasing. Making it anti-aliased is a bit harder and takes more CPU/DSP power. It's a trade-off.
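One straightforward (if CPU-hungry) way to build the anti-aliased oscillator he's describing is additive: sum only the harmonics that fit below Nyquist. A minimal sketch for a band-limited square wave - the note and rate are arbitrary assumptions:

```python
import numpy as np

def bl_square(f0, fs, seconds=1.0):
    """Band-limited square wave: odd harmonics at amplitude 1/k, but
    only the ones that sit below the Nyquist frequency."""
    t = np.arange(int(fs * seconds)) / fs
    out = np.zeros_like(t)
    k = 1
    while k * f0 < fs / 2:           # stop before crossing Nyquist
        out += np.sin(2 * np.pi * k * f0 * t) / k
        k += 2                       # square waves use odd harmonics only
    return out * (4 / np.pi)

sq = bl_square(2_500, 48_000)        # alias-free by construction -- but each
                                     # voice now costs one sine per harmonic,
                                     # which is exactly the trade-off he means
```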


Originally posted by Rabid:

That is only true to a certain point. You also have to remember that those finite frequencies can have an infinite number of phase positions so transients can be shifted. Tune an analog synth so that one osc is dead on 440, and the second osc is 440.001. You can hear the effect on the sound as the waves become out of phase with each other. The Nyquist frequency makes no allowances for this effect. While a sampling frequency of 44,000 can conceivably sample a 22,000 wave without aliasing as long as it has no harmonics, it also sucks the life out of the sound. That is why a sampling rate of 96K is the standard for digital recording, and expect that rate to increase as computer power grows to handle higher rates.

Those infinite phase positions are accounted for with Nyquist. Phase differences may be easier to synthesize with a higher sample rate, but that has nothing to do with a/d conversion. Any effect phase would have in your analog synth example would be captured and recreated when converted.

 

96K is the standard for recording because it sells product. On good converters, the difference between 48K and 96K is negligible to non-existent. I don't expect sampling rate to go beyond 96K. There's no reason. Developers need to figure out how to make things sound better as CPU power increases, not deal with increasingly higher sample rates.


Originally posted by azrix:

Originally posted by Byrdman:

...

:freak: That didn't make much sense to me. I think you're trying to argue that digital sampling can mess up or not account for the phase of a signal? If this is what you're saying, that's not true. Once your signal has been filtered, sampling it doesn't mess with phase, and d/a conversion can reconstruct the filtered signal exactly, frequency and phase.
Made perfect sense to me, Azrix.

 

What you're missing is the filter part.

 

You're still not getting your original sound back. Why? Because of that filter.


Originally posted by azrix:

Those infinite phase positions are accounted for with Nyquist.

Only as far as prevention of aliasing is concerned. I got a bit off topic and was addressing the quality of sound you get when restricting yourself to the sampling frequency that is deemed OK by Nyquist. Nyquist does not address the quality of the sound when it comes to out-of-phase waveforms. Sampling a 22,000 frequency at a rate of 44,000 is fine, but sample a frequency of 21,999 and it does not give you the same quality.

 

Capturing a cityscape with a digital camera will yield a nice picture. Use the same camera to capture the expanse of a clear blue sky that fades from deep to light blue and you will see distortion. A 10-megapixel camera will not capture a good picture of the sky if the color depth cannot give you enough hues of blue to make a smooth transition. While some argue that current cameras are fine and that anything with more color depth or pixel depth is just a marketing ploy, others can see the difference. It is the same with digital recording. 44K may be fine for hip hop or rock, but try listening to a soft piano piece with good mics that pick up the natural reverb in the room. Hey, try adjusting the settings in Reaktor to 96K when rendering a synth part and listen to how much smoother the sound is. More "pixels" and less grain.

 

Robert



Originally posted by Rabid:

...

Actually, Nyquist states that a signal has to be sampled at more than 2x its highest frequency. So your 44,000Hz sample rate with a 22,000Hz signal wouldn't actually work.

 

Okay, can we agree that 20Hz to 20kHz is the normal range of human hearing? If so, we can move on.

 

I realize the digital camera analogy may seem to make sense, but the two are not comparable. More "pixels" (a higher sample rate) with regard to digital audio will not give you any more useful information within the audible spectrum (20Hz to 20kHz). All it does is give you the ability to capture frequencies above 20kHz. You can't hear these frequencies anyway, so why capture them?

 

Talking about Reaktor is talking about synthesis. Synthesis and conversion are two related but different things. Raising the sample rate in Reaktor lessens the aliasing, which would explain why it sounds smoother.


Originally posted by Griffinator:

Made perfect sense to me, Azrix.

 

What you're missing is the filter part.

 

You're still not getting your original sound back. Why? Because of that filter.

No, you're not getting the original signal back. But there really isn't any solid evidence that a person can tell the difference between a filtered and non-filtered signal when the filter is outside the audible spectrum. So, if you can't hear the difference, what does it matter?

If you cannot hear the difference between audio recorded at 44K and at 96K, and if you refuse to believe anything that you cannot hear, then this discussion can only be an argument. All I can say is that I can hear the difference and I am not alone. It has more to do with graininess than aliasing, and maybe someday you will recognize that difference. It will be interesting to return to this thread in two years as digital recording continues to evolve and more scientists continue to address theories on sampling rates. After all, this theory was devised before digital recording could test it. As people hear the samples recorded at different rates they are given reason to study the audible differences and refine standards. I do acknowledge that aliasing is the more prominent and recognizable form of distortion, but there are other things that are affected by the sampling rate. It is not aliasing that makes a Reaktor instrument rendered at 96K sound fuller than the same instrument rendered at 44K. But again, if you cannot hear the difference then there is no reason to continue. So yes, let's move on.

 

Robert



Originally posted by Rabid:

...

Rabid, have your ears been tested using a double-blind test method?


Originally posted by Rabid:

...

I'm not talking about aliasing. I stopped talking about aliasing when we started talking about converters.

 

Theories on sample rate have already been addressed. There is no evidence to suggest that humans can tell the difference between signals that have frequencies higher than 20kHz and those that don't. That's all higher sample rates are guaranteed to give you: the ability to capture frequencies above 20kHz. By most accounts 44.1kHz may not be the ideal sample rate - something closer to 60kHz would be - but that doesn't stop 44.1 from being perfectly usable, and 60kHz is still a far cry from 96kHz.

 

I'm not saying you're not hearing a difference, and I'm not telling you not to use these higher sample rates; I'm just asking you to realize that sample rate isn't the reason they sound different. There are many other things affecting converter performance, even at different sample rates within the same converter.

 

But that's converters. Synthesis and certain effects, such as distortion and certain kinds of pitch shifting, are helped along greatly by higher sample rates. But by using upsample and downsample converters, you can achieve the desirable elements of a higher sample rate with these effects without wasting resources running the entire signal chain at those higher rates. Like I said before, synthesis and conversion are different things when talking about digital audio and need to be treated as such.
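The effects case works the same way as the oscillator case above: a nonlinearity like a clipper creates new harmonics, so you oversample around just that stage. A minimal scipy sketch - the waveform, drive amount, and 4x factor are assumptions:

```python
import numpy as np
from scipy.signal import resample_poly

def oversampled_clip(x, over=4, drive=8.0):
    """Upsample, apply a nonlinear clipper, then filter/decimate back down."""
    up = resample_poly(x, over, 1)            # interpolate to over * fs
    distorted = np.tanh(drive * up)           # nonlinearity generates harmonics
    return resample_poly(distorted, 1, over)  # low-pass + decimate back to fs

fs = 48_000
t = np.arange(fs) / fs
out = oversampled_clip(np.sin(2 * np.pi * 5_000 * t))  # far fewer fold-overs
                                                       # than clipping at fs
```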


It's been mentioned elsewhere but bears emphasis: Nyquist etc. are THEORIES, not rules. They describe observed behavior and attempt to create mathematical formulas to explain that behavior - theories do not govern behavior. As such, they will necessarily be tested again and again, and perhaps revised or even discarded as observation gets more refined.

 

About hearing: The average human has a hearing range of approximately 20Hz-20kHz. Just as many people do not hear that full range, there are likely many people who can hear lower and/or higher frequencies. We also don't know the full extent of other means of perception. Consider this: most of us feel frequencies at and below 20Hz even if we don't hear them. What's to prevent some (or many) of us from 'sensing' frequencies above 20k? Perhaps this begins to explain the 'presence' of a live symphony performance which, regardless of miking & recording & sonic positioning/imaging techniques, consistently eludes capture.

 

The auditory system does not exist independently of other body systems - they work in concert with each other. The scientists who have isolated that particular body part and proclaimed "it only has these abilities" miss the overall picture.

 

Originally posted by azrix:

...



It's been mentioned elsewhere but bears emphasis: Nyquist etc. are THEORIES, not rules. They describe observed behavior and attempt to create mathematical formulas to explain that behavior - theories do not govern behavior. As such, they will necessarily be tested again and again, and perhaps revised or even discarded as observation gets more refined.

It may be a theorem, but if you sample below the Nyquist rate, alias frequencies (the difference between the sample frequency and the frequencies being sampled) are created. I may have my head up my ass on this as far as audio is concerned, but the effect is readily apparent in video and other forms of imaging. I see no reason why the same shouldn't be true with audio, since it's simply lower sampling and signal frequencies.

 

Damn, I thought I could stay out of the geek fight. :freak:


Originally posted by JimmieWannaB:

...

I don't think anyone is questioning the lack of quality when using a sample rate below Nyquist, but rather whether you continue to get audio improvement above Nyquist. At least, that is my point. Phase distortion only approaches nil as the sampling rate approaches infinity. Sort of like Moiré patterns in pixelated pictures.

 

Robert



Originally posted by Rabid:

...

Phase distortion only approaches nil as the sampling rate approaches infinity. Sort of like Moiré patterns in pixelated pictures.

Nyquist isn't a theory. It's a theorem.

 

Theorem - 1. [n] an idea accepted as a demonstrable truth, 2. [n] a proposition deducible from basic postulates

 

Nyquist does not cause phase distortion. Modern converters do not cause phase distortion. Modern converters do not alias to any appreciable degree. Aliasing is a byproduct of improper filtering, not of Nyquist in general.

 

I will grant that there may be other means of perception, but if that is true, please find one study that shows that people could tell the difference between audio with and without content above 20kHz. To me, if there were other senses involved this should be readily apparent. Most symphony orchestras get up above 100dB on loud passages. You can most definitely feel that. Same with a loud guitar. There are no high frequencies, but you can still feel it.


All frequencies present in a sound interact harmonically; even those that are not recognized by the ear modify the audible ones. If the system output is cut at 22.5kHz you'll never hear the same sound as in reality; you simply miss a portion of the signal. I think that one of the reasons a well-mic'ed sound is always better than a direct one is not only electrical, but also that in the air around the mic, all the unrecordable frequencies have the ability to affect the recordable ones, shaping them, so the perceived result of the recording is much closer to reality.

 

The VA thing is different in practice, but if the missing frequencies affect the naturalness of reproduced sound, imagine what degradation they, or worse, wrongly generated ones, can bring to the timbral quality of synthesized waveforms. This is not only a digital issue; in true analog you also have better or worse circuitry.


Aliasing is a byproduct of improper filtering, not of Nyquist in general.
Again, I'm speaking from video not audio experience, but aliasing is the product of ignoring Nyquist. Yes, filtering is involved, but it's to meet Nyquist. In video, one of the uses of filtering is to ensure that your highest frequency is less than one half of your sampling frequency. This is termed anti-aliasing or Nyquist filtering in video. Since a frequency is a frequency is a frequency, I have to assume that the same holds true in audio.
Link to comment
Share on other sites

Gotta be careful of those ideas which are generally accepted as demonstrable truths. Until we learned how to interpret what we were seeing in the skies, the idea accepted as demonstrable truth was that the Sun revolved round the Earth.

 

I'm not arguing against Nyquist at all. I'm merely cautioning that it remains to be fully tested.

 



Since we're talking about sampling theory, I thought I'd share my first high-sampling-rate success story. I've always believed the Nyquist theorem, which says that you need a sampling rate of twice the highest frequency you want to reproduce, so 44.1kHz for a 20k range of hearing. However, I have been beta testing our FW-1884 computer interface, which goes up to 96k. I decided I would run mine at 88.2kHz, because it easily SR converts down to 44.1 and it's oddball enough that I might find bugs that others miss.

 

The other day I had switched it back to 44.1 and forgotten about it. I pulled up a sound on my Super Jupiter, a big sawtooth with the filter wide open, and played a few chords. I checked whether I was getting distortion somewhere, then remembered that the unit was set to 44.1. I switched it back to 88.2 and got the sound I remembered. The high-frequency sizzle for each note of the chord was more attached to that note, where at 44.1 it was just a big noise. I couldn't pick out which harmonics went with each note at 44.1 -- it was just noise -- but at 88.2 I could hear it more plainly.

 

So, chalk one more up for true analog synths. I don't know of any VAs running at over 48kHz, so you wouldn't hear this kind of detail from a digital synth.

 

And, going back to your original question, check out "Principles of Digital Audio" by Ken Pohlmann. It's the definitive book on digital audio theory, and plainly explains aliasing.

 

-jl
