
Sampling Theory


Mikorist

Recommended Comments

Credit: Dr. Nyquist discovered the sampling theorem, one of technology’s fundamental building blocks. Dr. Nyquist received a PhD in Physics from Yale University. He discovered his sampling theory while working for Bell Labs, and was highly respected by Claude Shannon, the father of information theory.

Nyquist Sampling Theorem: a sampled waveform contains ALL the information, without any distortion, when the sampling rate exceeds twice the highest frequency contained in the sampled waveform.
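
The theorem's "all the information" claim can be checked numerically. Below is a minimal Python sketch (my illustration, not part of the article) that samples a 1 kHz sine at 8 kHz, well above twice the signal frequency, and reconstructs the value halfway between two samples by ideal (Whittaker-Shannon) sinc interpolation. The "unsampled" value comes back, to within the error of truncating the infinite sinc sum.

```python
# Illustrative sketch, not from the article: sinc reconstruction of a
# properly sampled sine recovers values BETWEEN the sample instants.
import numpy as np

fs = 8000.0                          # sampling rate, Hz (> 2 x 1 kHz)
f = 1000.0                           # signal frequency, Hz
n = np.arange(-2000, 2000)           # sample indices
samples = np.sin(2 * np.pi * f * n / fs)

def reconstruct(t):
    """Truncated Whittaker-Shannon interpolation at time t (seconds)."""
    return np.sum(samples * np.sinc(fs * t - n))

# Evaluate halfway between two samples: the 'in between' value is recovered
t_mid = 10.5 / fs
error = abs(reconstruct(t_mid) - np.sin(2 * np.pi * f * t_mid))
```

The only requirement is that the signal is band-limited below half the sampling rate; nothing about the waveform between samples is lost.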

While this article offers a general explanation of sampling, the author's motivation is to help dispel the widespread misconceptions regarding sampling of audio at a rate of 192KHz. This misconception, propagated by industry salesmen, is built on false premises, contrary to the fundamental theories that made digital communication and processing possible.

The notion that more is better may appeal to one's common sense. Presented with analogies such as more pixels for better video, or a faster clock to speed up computers, one may be misled to believe that faster sampling will yield better resolution and detail. The analogies are wrong. The great value offered by Nyquist's theorem is the realization that we have ALL the information, with 100% of the detail and no distortion, without the burden of "extra fast" sampling. Nyquist pointed out that the sampling rate needs only to exceed twice the signal bandwidth.

What is the audio bandwidth? Research shows that musical instruments may produce energy above 20KHz, but there is little sound energy above 40KHz. Most microphones do not pick up sound much over 20KHz. Human hearing rarely exceeds 20KHz, and certainly does not reach 40KHz. The above suggests that 88.2 or 96KHz would be overkill. In fact, all the objections regarding audio sampling at 44.1KHz (including the arguments relating to pre-ringing of an FIR filter) are long gone by increasing the sampling rate to about 60KHz.

Sampling at 192KHz produces larger files, requiring more storage space and slowing down transmission, and it places a huge burden on the required computational processing speed. There is also a tradeoff between speed and accuracy: conversion at 100MHz yields around 8 bits, conversion at 1MHz may yield near 16 bits, and as we approach 50-60KHz we get near 24 bits. Speed-related inaccuracies are due to real circuit considerations, such as charging capacitors, amplifier settling and more. Slowing down improves accuracy.

So if going as fast as, say, 88.2 or 96KHz is already faster than the optimal rate, how can we explain the need for 192KHz sampling? Some have tried to present it as a benefit due to a narrower impulse response, implying either "better ability to locate a sonic impulse in space" or "more analog-like behavior". Such claims show a complete lack of understanding of signal theory fundamentals. We talk about bandwidth when addressing frequency content; we talk about impulse response when dealing with the time domain. Yet they are one and the same. An argument in favor of a microsecond impulse is an argument for a megahertz audio system. There is no need for such a system. The most exceptional human ear is far from being able to respond to frequencies above 40KHz. That is the reason musical instruments, microphones and speakers are designed to accommodate realistic audio bandwidth, not megahertz bandwidth.

Audio sample rate is the rate of the audio data. Such data may be generated by an AD converter, received and played by a DA converter, or even altered by a sample rate converter. Much confusion regarding sample rates stems from the fact that some localized processes happen at much faster rates than the data rate. For example, the front ends of most modern AD converters (the modulator section) work at rates between 64 and 512 times faster than a basic 44.1 or 48KHz system. This is 16 to 128 times faster than 192KHz. Such speedy operation yields only a few bits. This high-speed, low-bit intermediary outcome is followed by a process called decimation, slowing down the rate in exchange for more bits. There is a tradeoff between speed and accuracy. The localized converter circuit (few bits at MHz speeds) is followed by a decimation circuit, yielding the required bits at the final sample rate. Both the overall system data rate and the increased processing rate at specific locations (an intermediary step towards the final rate) are often referred to as "sample rate".
The reader is encouraged to make a distinction between the audio sample rate (which is the rate of the audio data) and other sample rates (such as the sample rate of an AD converter's input stage or an oversampling DA's output stage).
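
As a rough illustration of that modulator-plus-decimation idea, here is a deliberately simplified Python sketch (my own toy model, not any real converter's architecture): a crude dithered 1-bit quantizer running at a high oversampling ratio, followed by averaging down to the output rate. Averaging many coarse high-rate samples trades rate for resolution, which is the speed/accuracy exchange the paragraph describes.

```python
# Toy model (assumed, not the article's circuit): high rate + few bits,
# then decimation -> lower rate + many bits.
import numpy as np

rng = np.random.default_rng(0)
osr = 256                      # oversampling ratio (real ADCs: ~64-512)
n_out = 2000                   # output (audio-rate) samples
x = 0.3                        # a constant input level to digitize

# High-rate, low-resolution stage: 1-bit decisions against a dithered
# threshold; each individual sample carries almost no resolution.
dither = rng.uniform(-1, 1, size=(n_out, osr))
bits = np.where(x > dither, 1.0, -1.0)

# Decimation stage: average each group of osr one-bit samples into one
# output sample, recovering a many-bit estimate at the lower rate.
y = bits.mean(axis=1)

err_decimated = abs(y.mean() - x)    # far smaller than the 1-bit step
```

Real converters use noise-shaping modulators and proper decimation filters rather than plain averaging, but the rate-for-resolution tradeoff is the same.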

Record at 192KHz, then process down to 44.1KHz?

There are reports of better sound with higher sampling rates. No doubt the folks that like the "sound of a 192KHz" converter hear something. Clearly it has nothing to do with more bandwidth: the instruments make next to no sound at 96KHz, the microphones don't respond to it, the speakers don't produce it, and the ear cannot hear it.

Moreover, we hear reports that "some of that special quality captured at 192KHz is retained when down-sampling to 44.1KHz". Such reports neglect the fact that 44.1KHz sampled material cannot contain audio above 22.05KHz. Some claim that 192K is closer to audio tape. Yet that same tape, which typically contains "only" 20KHz of audio, gets converted to digital by a 192K AD, then stripped of all possible content above 22KHz (down-sampled to CD).

"If you hear it, there is something there" is an artistic statement. If you like it and want to use it, go ahead. But whatever you hear is not due to energy above the audio band; all of it is contained within the "lower band". It could be a certain type of distortion that sounds good to you. Can it be that someone made a really good 192KHz device, and even after down-sampling it has fewer distortions? Not likely. The same converter architecture can be optimized for slower rates, and with more time to process it should be more accurate (fewer distortions).

The danger here is that people who hear something they like may associate better sound with faster sampling, wider bandwidth and higher accuracy. This indirectly implies that lower rates are inferior. Whatever one hears on a 192KHz system can be introduced into a 96KHz system, and much of it into lower sampling rates. That includes any distortions associated with 192KHz gear, much of which is due to insufficient time to achieve the level of accuracy of slower sampling. There is an inescapable tradeoff between faster sampling on one hand, and loss of accuracy, increased data size and much additional processing on the other.
AD converter designers cannot generate 20 bits at MHz speeds, yet they often utilize a circuit yielding a few bits at MHz speeds as a step towards making many bits at lower speeds. The compromise between speed and accuracy is a permanent engineering and scientific reality. Sampling audio signals at 192KHz is about 3 times faster than the optimal rate. It compromises the accuracy, which ends up as audio distortion.

While there is no upside to operation at excessive speeds, there are further disadvantages:
1. The increased speed generates a larger amount of data (impacting data storage and data transmission speed requirements).
2. Operating at 192KHz causes a very significant increase in the required processing power, resulting in very costly gear and/or further compromise in audio quality.

The optimal sample rate should be largely based on the required signal bandwidth. Audio industry salesmen have been promoting faster-than-optimal rates. The promotion of such ideas is based on the fallacy that faster rates yield more accuracy and/or more detail. Whether motivated by profit or ignorance, the promoters, leading the industry in the wrong direction, are stating the opposite of what is true.

Full text available at: http://www.lavryengi...ling_Theory.pdf



How come ultra-fast computers with lots of memory and plenty of MHz are good for recording music, yet when listening, i.e. playing back from a computer, people resort to tricks like slowing down the processor, removing surplus memory modules, and so on? I have read various texts on tweaking a PC for music listening, disabling cores in the BIOS, etc. It's not 100% related to the sample rate, but still...


It's Alive! Ultrasonic Spectra Isn't So Ultra Anymore

Andrew Hon
ashon_at_uclink.berkeley.edu
www.ocf.berkeley.edu/~ashon


When the consumer audio Compact Disc (CD) was introduced in the early 1980s, the Red Book CD format specified 16 bit word lengths and 44.1 kHz sample rates. This specification was sufficient for 96 dB of dynamic range and a frequency response of up to 22 kHz (Nyquist principle). The limit of 22 kHz was based on consensus among audio engineers that the human ear can only hear up to at most 20 kHz. Limitations of the technology - of data density and of Digital-to-Analog conversion circuitry - undoubtedly also influenced this decision.
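
Both Red Book numbers follow from simple arithmetic (a quick check of my own, not from the article): dynamic range from the word length, bandwidth from the Nyquist principle.

```python
# Quick check of the Red Book figures quoted above.
import math

bits = 16
fs = 44100                                      # Red Book sample rate, Hz

dynamic_range_db = 20 * math.log10(2 ** bits)   # ~96.3 dB for 16 bits
nyquist_hz = fs / 2                             # 22050 Hz bandwidth limit
```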

Early CD players sounded horrible. Part of the reason for their sub-audiophile performance was the engineering learning curve manifested with any new technology - Digital-to-Analog Converters (DACs) that could only extract 14 bits of the 16, and poor implementation of filtering followed the conversion stage. Now though, over a decade later, the audio industry can be considered to have mastered the technology, with a great number of very good-sounding yet inexpensive CD players available. Still, some audiophiles claim that the very best CD players on absolute terms still do not compare with the very best of LP (record) players. In addition, one point of agreement among audiophiles is that there has yet to be a CD implementation that gives a convincing sonic illusion, in particular of a full orchestra. What gives?

One possible reason for CD's lack of ultimate fidelity is that it simply cannot encode the frequency response present in real world acoustics. Work done by James Boyk of Caltech (incidentally my EE/Music professor from when I was a student there, and the primary motivating force for my being involved with audiophilia) has documented this over-20 kHz spectral content in various musical instruments. After accounting for confounding factors, Boyk concluded that there is indeed acoustic energy extending as high as 100 kHz and perhaps beyond, limited only by his analyzing equipment.

Certain instruments more than others exhibit ultrasonic energy, with percussive instruments and in particular the cymbal having 40% of its energy in >20 kHz frequencies. This finding in part accounts for the cymbal's frequent mention in audiophile literature as a benchmark for audio systems' high frequency performance - whether or not the system can capture the unique "hiss" of a real-life cymbal. Needless to say, not very many systems can come close to portraying a cymbal naturally; most consumer systems produce something closer to white noise.

Additional work supporting James Boyk's findings was done by John Atkinson, editor of Stereophile magazine, the preeminent audiophile journal. In his October 2000 editorial he describes spectral analyses of audio recordings, all of which demonstrate more or less activity above 20 kHz. An interesting finding he reports is that it is not just acoustic instruments that exhibit ultrasonic activity - the electric guitar in bluegrass music, where intentional feedback produces rampant clipping and the characteristic electric guitar sound, also results in spectral content extending above 20 kHz. Furthermore, Atkinson noticed that even old analog recordings from the '60s and earlier have captured this ultrasonic content.

All this ultrasonic energy is all well and good, one argues impatiently, but what about the long-accepted 20 kHz limit of the human ear? Sounds above 12 kHz, even, are relatively indistinguishable and are lumped together as simply "high frequencies". The first response is that we may have to rethink our dogma of the hard perception limit at 20 kHz.

Recent work by Tsutomu Oohashi et al., published in June of 2000 in the Journal of Neurophysiology, shows that the brain may in fact be registering over-20 (or 22) kHz spectral energy. Titled "Inaudible High-Frequency Sounds Affect Brain Activity: Hypersonic Effect", their paper discusses their finding that sounds containing High Frequency Components (HFCs) above the audible range significantly affect the brain activity of listeners. They used the gamelan music of Bali, which is extremely rich in HFCs with a nonstationary structure, as a natural sound source, and divided it into two components: an audible low-frequency component (LFC) below 22 kHz and an HFC above 22 kHz. Brain electrical activity and regional cerebral blood flow (rCBF) were measured as markers of neuronal activity while subjects were exposed to sounds with various combinations of LFCs and HFCs.

The experimenters found that while subjects could not recognize (i.e. perceive in the common sense of the word) HFC when presented alone, their brain activity altered significantly when they were presented with music containing HFC in addition to LFC as compared to LFC alone. Psychological evaluation indicated that the subjects felt the sound containing an HFC to be more pleasant than the same sound lacking an HFC. These results suggest the existence of a previously unrecognized response to complex sound containing particular types of high frequencies above the audible range. Oohashi et al. term this phenomenon the "hypersonic effect."

One conclusion this research suggests is that the method used to determine the limit of human hearing is imperfect. The standard "report" method of psychology has been criticized (e.g. by UCB Professor Richard Ivry) as not being an accurate measure of internal representation. Specifically, accessing an internal state for verbal report may result in information being discarded, as is commonly the case with any sort of attention-evaluation-selection-action cognitive pathway. What may have happened with the original research on the 20 kHz hearing limit, in keeping with Oohashi's recent findings, is that the ear/brain system does register high frequency content, but only as a complement to low frequency (audible) content, and not strongly enough to be consciously reported. The effects of HFCs are subtle but not inconsequential.

As an aside, another criticism of standard methodology may be warranted. Great debate has raged in the audio community over subtle effects in amplifier quality, cable differences, and even mechanical resonance effects, with boundaries being drawn between "subjectivists" and "objectivists". A staple of objectivist argument has been the double-blind test (DBT) or ABX test. Under DBT or ABX conditions many self-proclaimed golden-ear (i.e. sensitive to these subtle differences) audiophiles have failed to identify differences to any significant statistical degree. Nevertheless, over the past twenty to thirty years, the threshold of criteria for accepted high fidelity audio characteristics has steadily been decreasing. Nowadays not many respected audiophiles would claim that there are no differences between the above-mentioned amplifiers (tube versus solid state), interconnect or speaker cables, and to a lesser degree in electro-mechanical resonance interactions, mainly with properly mechanically damped electrical components. Jon Risch, a respected audiophile on the Internet with rigorous engineering principles, has suggested objective mechanisms for many of these subjectively-perceived differences. More importantly, he has thoroughly denounced standard DBT and ABX tests as inaccurate measurements of perception. Most forms of these tests, being rigid and timed, put undue psychological stress on the subject, resulting in a worsening of apparent perceptual abilities. It could be that the original tests that determined the supposed 20 kHz hearing limit were confounded by these effects.

A second explanation, one that does not necessarily have to refute the 20 kHz hearing limit, entails engineering details slightly beyond the scope of this class. A well-respected high fidelity digital audio company, dCS, has published a white paper describing the engineering issues involved with reproducing high-sample-rate versus standard-sample-rate material. Due to what is called the Gibbs phenomenon, the sharp anti-aliasing filtering needed to limit standard-rate material to a 22 kHz band, as necessitated by the Nyquist theorem, results in a ringing transient response. The energy contained in this transient ringing "smears" or "defocuses" the sound, impairing the ability to localise sounds.
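
The pre-ringing dCS describes can be seen directly in the impulse response of an idealized brick-wall low-pass filter. The following Python sketch (my illustration, not dCS's analysis) shows that the sinc-shaped response is nonzero before time zero, so part of a transient's energy is smeared earlier in time than the transient itself.

```python
# Illustrative sketch: an ideal brick-wall low-pass has a sinc impulse
# response, which rings on BOTH sides of n = 0 (pre- and post-ringing).
import numpy as np

fs = 44100.0                       # sample rate, Hz
fc = 20000.0                       # brick-wall cutoff, Hz
n = np.arange(-50, 51)             # taps around the impulse

# Ideal low-pass impulse response (sampled sinc)
h = 2 * fc / fs * np.sinc(2 * fc * n / fs)

# Energy arriving BEFORE the impulse: nonzero -> pre-ringing exists
pre_ringing_energy = np.sum(h[n < 0] ** 2)
total_energy = np.sum(h ** 2)
```

Whether this pre-ringing is audible is exactly what the white paper debates; the sketch only shows that the ringing is a mathematical property of sharp filters, not a defect of any particular implementation.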

Higher sample rates mitigate this problem. dCS produces an ultra high-end upsampler and DAC that converts standard 16 bit/44.1 kHz CD material to interpolated 24 bits at 192 kHz, improving the sound by all subjective audiophile criteria: air, soundstage, imaging, ease. Given that there is no real information being added to the signal, the engineering explanation dCS offers gains credibility.

One could even argue that the dCS explanation supersedes the neuroimaging and EEG work done by Oohashi et al., because the playback equipment used by Oohashi et al. may be subject to the same engineering limitations, which may in fact be responsible for the results they found.

Along the same lines, reader feedback to John Atkinson's editorial on high frequency spectral content took aim at the analysis of transients (from which much of the HFC is derived). The reader states that "spectral-content analysis shows the flaws inherent in concluding that the necessary frequency bandwidth and sampling rates of audio systems can be determined simply by analyzing the frequency response of the human ear. Because the Fourier Transform isn't valid for those dynamic, transient musical sounds and resulting signals, the assumption simply isn't so." He goes on to praise the merits of analogue-only LP systems, that, not being subject to invalid Fourier Transform analysis, always had to have frequency responses much higher than the 20 kHz limit of human hearing. FT analysis is only one way to look at acoustic waveforms, and like with all modes of perception, it carries along its own assumptions, some of which may not be applicable to every circumstance.

A consensus seems to be emerging from this discussion: whatever the cause, a higher-than-CD bandwidth would be beneficial to the ultimate fidelity of sound reproduction, due to the requirements of transient signals. These transients may be exhibiting high frequency spectral energy, or they may merely be an artifact of attempting to apply Fourier Transform frequency analyses to mere impulses. Regardless, in terms of the engineering criteria involved, achieving higher frequency-response bandwidth in digital recordings is a Good Thing™.

The two major new digital formats, DVD-Audio (Digital Versatile Disc-Audio) and SACD (Super Audio CD), both provide substantially improved frequency bandwidths, though from differing engineering approaches. Sony's proprietary SACD format uses the Direct Stream Digital (DSD) format, which samples analogue material roughly 2.8 million times per second, though in 1-bit increments. Despite recent debate at the 109th AES meeting (2000) about the true nature of SACD, this completely different paradigm for digital audio recording does away with the anti-alias filters needed for PCM (CD and DVD-A) analogue waveform reconstruction. Preliminary reports in the audiophile community indicate that SACD has a natural quality of sound that DVD-A has yet to demonstrate. Ironically, one of the descriptions of SACD, this radically new digital format, is that it sounds "like analogue", meaning like LPs, ancient technology. (LPs, i.e. vinyl records, are associated with smooth, relaxing presentations that, while sometimes not as impressive per audiophile standards, nevertheless offer perfectly enjoyable music. The same is not always true, and in fact is seldom true, for CDs.)

If DVD-A is unable to achieve the same sublime description of "naturalness" as SACD does, especially within the next year when truly high-end implementations of DVD-A are released, then one may be tempted to give credence to the explanation offered by dCS. It may be that PCM D-to-A reconstruction is inherently flawed, that one will never be able to escape the Gibbs phenomenon manifest in reconstructing transient signals, no matter how high the sample rate. I believe it is the hope of the high-end members of the DVD-A consortium that a sufficiently high sample rate (perhaps 192 kHz) will mitigate this problem.

In any case, SACD seems to have gained a foothold in the market, and DVD-A will be practically guaranteed success if only via market piggy-backing on the success of video DVD, so the audiophile's dream for high resolution digital audio will inevitably be fulfilled. To what degree and by which format, not to mention within what time frame the dream will be fulfilled is yet to be determined, but one estimates that the next year in high-fidelity audio reproduction will be truly exciting.

RESOURCES

John Atkinson, "What's Going On Up There?" October 2000 Stereophile http://www.stereophi...rchives.cgi?282

James Boyk, "There's Life Above 20 Kilohertz! A Survey of Musical Instrument Spectra to 102.4 KHz" http://www.cco.calte...tra/spectra.htm

Tsutomu Oohashi, Emi Nishina, Norie Kawai, Yoshitaka Fuwamoto, Hiroshi Imai, "High-Frequency Sound Above the Audible Range Affects Brain Electric Activity and Sound Perception", Audio Engineering Society preprint No. 3207 (91st convention, New York City). http://jn.physiology...tract/83/6/3548

Personal communication with Jon Risch, web page http://www.geocities.com/jonrisch/

dCS White Papers, "A Suggested Explanation For (Some Of The) Audible Differences Between High Sample Rate and Conventional Sample Rate Audio Material" http://www.dcsltd.co.uk/papers.htm

dCS White Papers, "Effects in High Sample Rate Audio Material" http://www.dcsltd.co.uk/papers.htm

Digital and Hi-Rez Digital Forums at the AudioAsylum, http://www.audioasylum.com



http://www.ocf.berke...Ultrasonics.htm

Here are some thoughts on the influence of the audio sample rate on the... audio. Are you one of those who think 96 khz (DVD-A) is more precise than 44.1 khz (CD)? Well, it's not; in fact, 44.1 khz is as precise as analog... Find out why.

Before going into the issue, let me let you know that I work at 96 khz, or 48 khz (enough) or above (192 khz for fun, as I write), but not because 44.1 khz is less precise. The higher sample rate tends to compensate for a worse A/D conversion stage; ergo, it's a cheaper way of making good converters, at the expense of more storage and CPU.

More ahead... In English, as much as possible.

Fourier and Nyquist

Nyquist tells us that to accurately capture a sampled wave, you must use a sampling frequency greater than twice the highest frequency you wish to sample. With 44.1 khz you can sample signals of up to 22.05 khz bandwidth. Fourier tells us that every wave is but an addition of sine waves at different frequencies and amplitudes. So, at any given point, what you sample is an addition of sine waves.
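
Fourier's point is easy to verify numerically. Here is a toy Python sketch (my example, not the author's): build a waveform as a sum of two sines, then read exactly those frequencies and amplitudes back out of the FFT.

```python
# Toy demonstration: a waveform is a sum of sines, and the FFT recovers
# the component frequencies and amplitudes.
import numpy as np

fs = 1000                                   # toy rate: 1000 samples over 1 s
t = np.arange(fs) / fs                      # one second of time
wave = 1.0 * np.sin(2*np.pi*50*t) + 0.5 * np.sin(2*np.pi*120*t)

# Normalize rfft so each bin reads out the sine's amplitude directly
spectrum = np.abs(np.fft.rfft(wave)) / (fs / 2)

# With a 1 s window, bin k corresponds to k Hz: check bins 50 and 120
amp_50, amp_120 = spectrum[50], spectrum[120]
```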

Ok, now let's introduce and examine false argument #1 towards 96 khz being more precise than 44.1 khz.

Maximum frequency

Each sample is a "photo" of the sound wave position in time. A sound is but a "wave" that wobbles up and down at a certain frequency and we hear it. Many will say that between a 44.1 khz sample and the next one there will be a lot of "unsampled" variations. Agreed. But now here's an issue...

All the things "in between" the 44.1 khz samples HAVE to be signals at a frequency HIGHER than 22.05 khz, hence, we don't care anyway! (Nor do our ears, for that matter.)

Precision

So here's a shocking truth: the precision across 20 hz-20 khz of a signal sampled at 44.1 khz, at 96 khz, or even kept analog, is EXACTLY the same. All are equally precise within the bandwidth below the Nyquist limit.

Why 96 khz then?

Now things get complex, but first: it's cheaper to produce a more accurate sampler/converter using 96 khz, whereas achieving the same degree of fidelity at 44.1 khz would require a more expensive design.

But I digress. As I said, a sound wave is a wobbling signal that goes up and down. This up/down can vary up to 22.05 thousand times per second and still be sampled accurately at a 44.1 khz rate. But allow me to try...

Aliasing

Now imagine you have a signal that is running at 30 khz... The signal is a wobbling sound that goes up and down 30 thousand times per second... And this is a problem. Why?

Those 30 thousand wobbles per second are bound to be sampled erroneously by a 44.1 khz rate. It won't catch the 30 khz wave, but it will catch parts of it that come out as an audible tone at roughly 14 khz (44.1 - 30 = 14.1 khz): a whistle, a "chirping". This is known as aliasing. The only cure IS to STOP frequencies higher than 22.05 khz from ever getting into the sampled signal to begin with, using a brick wall filter, and HERE is where digital has its major handicap, and here is where the PRECISION starts to fail.
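
That aliasing arithmetic can be shown directly: a 30 khz cosine sampled at 44.1 khz produces, sample for sample, exactly the same numbers as a 14.1 khz cosine (44.1 - 30). A small Python sketch (my illustration, not the author's):

```python
# Aliasing demo: once sampled at 44.1 kHz, a 30 kHz tone is
# indistinguishable from a 14.1 kHz tone.
import numpy as np

fs = 44100.0
n = np.arange(1000)                              # sample indices
tone_30k   = np.cos(2 * np.pi * 30000.0 * n / fs)
alias_14k1 = np.cos(2 * np.pi * 14100.0 * n / fs)

# The two sampled sequences are identical (up to floating-point error)
max_difference = np.max(np.abs(tone_30k - alias_14k1))
```

No after-the-fact processing can separate the two, which is why the anti-alias filter must act before sampling.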

Brick wall

These brick wall filters are what make 44.1 khz signals lack the fidelity that 96 khz can provide. The "brick wall" filters are not true brick walls but rather progressive filters that induce a lot of distortion, like phase shifts that can go up to 1000 degrees (unless they are really expensive)... At 96 khz, the brick wall is set at 48 khz (not 22.05 khz) and its effects (distortion) are NOT audible, hence why it's better (hence, cheaper).

Digital age

With sophisticated oversampling and digital filters, brick wall filters are not as distorting as they used to be, so, in an ironic twist, 44.1 khz is starting to sound as perfect as 96 khz, and cheaper (96 khz is more expensive in the long run: more CPU, more storage, and so on).

Psycho Acoustics

Nonetheless, ultrasonic material DOES alter audible material (this much is known for sure: if you modulate two ultrasonic beams against each other, difference-frequency audio waves come out of nowhere, or should I say, out of the interception point). Ultrasonic waves may produce "beats" that carry down into audible frequencies, coloring the sound.

But this rarely matters for sonic music material (that is, the importance of having ultrasound recorded is rare). An example is an orchestra: when you capture the sound from far away, the ultrasonic content has already interacted between the instruments, so you get an accurate picture even at 44.1 khz (the ultrasonic sounds did their interaction work before hitting the microphone).

But, for the sake of argument, if you close-mic the instruments and later mix them at 44.1 khz, the ultrasonic part that causes the coloring "beats" will be missing, so no "coloring" will exist, as opposed to a 96 khz sample/mix, where you have the ultrasonic info and it WILL color the mix with audible results, assuming your sampling did not cut ultrasonic frequencies to begin with (many converters do, even at 96 khz).

Although in the end, mind you, if you down-sample that 96 khz mix (the "beats" having done their work) to 44.1 khz, it will sound the same. So we are back to 44.1 khz being enough, again.

So why go 96 khz?

Simple, or is it?

At 96 khz there is much more bandwidth for plug-ins to work in; if they're poorly engineered, it may make a difference. The real achievable difference from high sample rates is in re-pitching or non-linear processes like compression (and I mean really heavy compression, like 20:1).

The number of bits (24 vs 16) is actually the most important thing regarding fidelity: spreading rounding errors across such a vast space, they won't make a big difference in distortion (the more bits, the better). Time-stretching and pitch-shifting will benefit a lot from native 96 khz samples, but the rest won't one bit, and I mean volume control, EQ, reverb, and so on...
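
The "bits matter more than rate" point can be put in numbers with the standard quantization-SNR rule of thumb for a full-scale sine, 6.02 dB per bit plus 1.76 dB (a textbook formula, not the author's own):

```python
# Theoretical quantization SNR for an N-bit full-scale sine:
# SNR = 6.02*N + 1.76 dB (standard rule of thumb).
def quantization_snr_db(bits):
    return 6.02 * bits + 1.76

snr_16 = quantization_snr_db(16)   # ~98.1 dB
snr_24 = quantization_snr_db(24)   # ~146.2 dB
extra_db = snr_24 - snr_16         # ~48 dB gained from 8 extra bits
```

Those 48 dB dwarf anything a higher sample rate can contribute inside the audible band, which is the author's point.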

Does DVD-A make sense then?

As far as the listening experience goes, the 24 bits make more of a difference than the 96 khz. But as said above, having native ultrasonic content alters the sound (although you can record that "altering" at 44.1 khz), and it will alter it according to your room acoustics IF you can play it back (and it was sampled/mixed) at 96 khz.

96 khz will sound more analog (actually it feels more than it sounds), but it's a very subtle effect, and it only shows with very good mixes done while preserving the full 48 khz audio bandwidth. Today that's still hard to do, since most mics don't go beyond 20 khz, as with most synths (44.1 khz based) and guitar rigs (typically 15 khz bandwidth).

I believe in, and advise, always using 96 khz for the "cheaper and better" reason when it comes to converters (though expensive in computing resources), BUT only take it seriously if your gear goes up that high (48 khz) when capturing sound (instruments producing ultrasonic components, captured by ultrasonic-capable mics/devices).

Please mind that this is a rather "simplistic" explanation, and there are other mathematical and physical phenomena I've not talked about, but they are not really that relevant and are more technically oriented (limits of analog-to-digital converters and path-stage electronics, to name one). 44.1 khz is enough.

If you don't use A/D converters at all, for example if you only use sound generators and virtual instruments, you can use ASIO at 48 khz instead of ASIO2 at 96 khz; it saves you CPU, and the end result is exactly the same when it comes to the audio quality of the render. If it seems it is not (with some rendering synths, not samplers), that's because of internal reasons, not the sample rate. Typically 96 khz sounds warmer and fatter, but that's usually just the "octave higher" effect: try hitting the higher octave at 48 khz and it will probably sound similar or the same. Less than 96 khz sounds cold and thin (the octave-lower effect), but you may want the latter...

So, unless you're writing some tune for dogs and bats, or working on ultrasonic experiences, I'd stick to 44.1 khz with no second thoughts. Just don't up-sample it and sell it as 96 khz; I really don't like that. My advanced-resolution DVD-A content is mostly based on content sampled/processed at native 96 khz resolution or higher (and 24 bits and higher), so you get that added value that can't be put on 44.1 khz/16 bit. Or can it?

© Álvaro M. Rocha

Alvaro M. Rocha. IT and audio engineer, composer, producer, performer: a professional musician.

Article featured at: http://www.alvaromro...vs-441-khz.html

