The Low Frequency Content Thread (films, games, music, etc)


maxmercy


I guess the only way to find out would be to use a higher-distortion sub system to play back scenes at the same level, push the drivers to a specific distortion point, and then see if you can hear a difference.  For full-range speakers, even on complex content, I find it easy to tell when THD gets above 10%, hence my <5% rule of thumb.   I know for a fact that scenes my lone THT played sounded different than when I had two of them sharing the load at the same volume level....but the FR was better for the dual THTs....so not apples to apples.  I'm pretty sure if I had eight 12" drivers having to move further to duplicate the same stuff the eight 15"s do, I would be able to tell the difference, and that difference would be entirely distortion related due to excursion requirements, but I am not willing to conduct this experiment....maybe at a GTG?  Set up two systems, equivalent FR, one taxed more than the other for equal SPL, and try to do blind testing with some of the classic scenes?

 

JSS


Although I'm sure there are a few effects that consist of a single sine at 'x' Hz < 20 Hz, they are rare and I haven't experienced any to date.

 

Single sines may be rare, but sine sweeps appear to be used quite frequently.  The distortion measurements I reviewed were obtained using sine sweeps anyway.  Distortion results from non-linearity of the driver parameters such as motor strength, inductance, or suspension compliance.  If the non-linearity is smooth and the input signal consists of a single sine wave (or sine sweep), the distortion will be purely harmonic.  If the input consists of multiple sine waves, then Inter-Modulation Distortion (IMD) results.  IMD contains both harmonic and anharmonic overtones.  The anharmonic overtones from IMD resulting from two simultaneous sine waves are likely to be even more audible than the harmonic overtones from a single sine wave.  Both THD and IMD arise for the same reason: smooth non-linearity.  This means that if a tone contributes high THD in isolation, it will also contribute high IMD when played with another tone.
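To make that concrete, here's a minimal numpy sketch, with an arbitrary cubic standing in for a real driver's smooth non-linearity: one tone through it yields only harmonics, while two tones through the same non-linearity also yield anharmonic IM products.

```python
# Minimal sketch: the same smooth non-linearity gives pure (odd) harmonics
# for one tone, but harmonic + anharmonic IM products for two tones.
import numpy as np

fs = 1000                              # sample rate (Hz), ample for LF work
t = np.arange(fs * 10) / fs            # 10 s of signal -> 0.1 Hz resolution

def nonlinear(x):
    # arbitrary smooth, memoryless non-linearity (stand-in for a driver)
    return x - 0.2 * x ** 3

def components(x):
    # frequencies of all spectral components within 40 dB of the strongest
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    f = np.fft.rfftfreq(len(x), 1 / fs)
    return sorted(set(np.round(f[spec > spec.max() / 100]).astype(int)))

print(components(nonlinear(np.sin(2 * np.pi * 10 * t))))
# -> [10, 30]: odd harmonics only
print(components(nonlinear(np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 13 * t))))
# -> [7, 10, 13, 16, 30, 33, 36, 39]: 7, 16, 33, 36 Hz are anharmonic IMD
```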

 

The important things to remember in anticipating the audibility of harmonic distortion in-room are 1) the effect of room gain on harmonic distortion as a percentage, and 2) the masking of harmonic distortion by the sound design itself.

 

A good example is the recent EOT opening scene effect. The 10 Hz fundamental is simultaneous with a 3HD tone at 30 Hz at -10 dB, or 31.6% harmonic distortion, a 5HD tone at 50 Hz at -20 dB, or 10% harmonic distortion, etc.

 

If, as an example, your sub generates 20% 2HD at 10 Hz and you have typical +15 dB room gain at 10 Hz and +5 dB at 20 Hz, the harmonic distortion drops as a percentage to around 6%, with odd-order harmonics being completely masked by the design of the effect. I submit that it is impossible to audibly detect that.
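Running the numbers from that example (values taken straight from the paragraph above):

```python
# The arithmetic from the example above: the 10 Hz fundamental gains +15 dB
# of room gain while its 2nd harmonic at 20 Hz gains only +5 dB, so the
# distortion ratio improves by the 10 dB difference.
thd_anechoic = 0.20                       # 20% 2HD without the room
net_gain_db = 15 - 5                      # gain at 10 Hz minus gain at 20 Hz
thd_in_room = thd_anechoic / 10 ** (net_gain_db / 20)
print(f"{thd_in_room:.1%}")               # ~6.3%, near the figure quoted above
```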

 

Adam recently posted his speclab cap of that scene mic'd at the seats, and it looks like he's around 10% 2HD at 20 Hz from the 10 Hz fundamental, but that's running the subs at 5-10 dB above reference level, and I still question whether anyone could audibly detect that amount of distortion. At reference level, the distortion would be <5% and absolutely inaudible.

 

Back in the day it was the Irene scene from BHD, and when comparing the mic'd-at-the-seats version to the looped version, it was easy to see the distortion drop to <5% with room gain, with odd-order harmonics completely masked by the fundamental structure of the effect.

 

Yes, playing a single sine makes it simpler to detect audible harmonic distortion, but there are no pure single-frequency sine waves in nature or in sound effects. In fact, as a general rule, ULF sound effects contain an incredibly wide array of frequencies. Neither is it possible to know exactly what the original version of a starship going to warp should sound like vs the final version presented to the seats by your system.

 

Realize that these 2m ground-plane distortion measurements are done with constant voltage input.  Even with constant voltage, sealed subs show increasing THD with decreasing frequency.  This means that even if you have a perfectly sealed room with 12 dB/octave room gain to cancel the sealed roll-off, there will still be a general trend of distortion increasing with decreasing frequency for the same in-room SPL.  This trend appears to be consistent with just about every sealed system measured on data-bass.
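To illustrate the premise being granted here, a small sketch, assuming an idealized 2nd-order sealed roll-off and textbook 12 dB/octave pressure-vessel room gain below an assumed transition frequency; the corner values are made up:

```python
# Illustration: an idealized sealed (2nd-order high-pass) roll-off falls at
# the same 12 dB/octave that textbook pressure-vessel room gain rises below
# its transition frequency, so the sum stays roughly flat in-room.
# fc, f_room, and Q are made-up example values.
import numpy as np

fc, f_room, Q = 30.0, 25.0, 0.707          # sealed corner, room transition (Hz)
f = np.array([5.0, 10.0, 20.0, 40.0])

s = 1j * f / fc                            # normalized complex frequency
sealed_db = 20 * np.log10(np.abs(s**2 / (s**2 + s / Q + 1)))
room_db = np.maximum(0.0, 40 * np.log10(f_room / f))   # +12 dB/oct below f_room

for fi, total in zip(f, sealed_db + room_db):
    print(f"{fi:5.1f} Hz: {total:+5.1f} dB")           # stays within a few dB
```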

 

Another trend I see is that as drive voltage is decreased from that required for peak output levels, the distortion doesn't decrease as quickly for low frequencies as it does for higher frequencies.  In fact, it looks like the distortion may approach a floor of 10% or more for most woofers.  Here, I'm specifically talking about performance well below 15 Hz.  Unfortunately, the data-bass sweeps don't go lower than 10 Hz or below -15 dB from the peak output voltage, so I can only guess at what happens at lower levels.  On the other hand, it doesn't matter as much how the woofer performs much below -15 dB from its peak, since the audibility threshold of ULF itself is so high.

 

One last trend I see is that the distortion at the bottom end is mostly 3rd or otherwise odd order in nature.  This increases the likelihood of audibility, and it also means the 2nd harmonic is not a good measure of overall distortion.  I'm not certain of this, but I think these trends arise because the influence of the suspension increases as frequency decreases, and the suspension is the hardest part to make linear.  It is the relative symmetry of the suspension with respect to in-vs-out strokes that makes it dominated by odd harmonics.

 

The example you give in EOT is an odd special case.  Although multiple frequencies are present, they all share 10 Hz as their fundamental.  As a consequence, the IMD overtones are also harmonics of 10 Hz.  In this case, I would not be surprised if the 20 Hz and 40 Hz overtones are substantially stronger than they would be with 10 Hz playing alone, because of the IM character of the distortion.  On the other hand, the source material has such strong odd harmonics that it's impossible to see how much 3rd HD the sub is producing.  That passage might sound okay, but another passage with 10 Hz at the same level but lacking that strong 3rd harmonic might still sound like the EOT effect, even though it's supposed to sound much cleaner.

 

The bottom line is that, based on many years of listening and measuring tests, it would be a tragic mistake to filter out the bottom half of soundtrack effects over the possible audibility of added harmonics projected from ground-plane test results of a single-driver version of a subwoofer, without very specific tests done in-room with actual program source.

 

I don't doubt the substantially enhanced experience offered by a system capable of high output ULF.  You must realize, however, that the cost for me will exceed what I've spent for all the rest of my equipment, and I already get very clean reference level performance down to 20 Hz in an open living room.  Sound quality is very important to me, and I only want to spend that money once if possible.

 

As a rough figure, I believe most people can detect 5% THD in music.  In the upper ULF region, the ear's sensitivity has a very steep slope, and as a consequence, the distortion overtones are likely to be much more audible than they would be if the distortion occurred with full-range music content.  As such, the audibility threshold may be much lower for "musical" ULF.  By putting "musical" in quotes, I'm noting the fact that actual ULF sound effects are probably more likely than music to consist of odd-order or otherwise noisy overtones.  It appears that this feature of sound effects masks a lot of the distortion produced by subs in the spectrograms.  What's not clear from these spectrograms is whether the distortion causes significant phase alterations and whether these may be audible.  Also, for particular passages that are especially heavy on noise (very common), the spectrogram may make it difficult to spot flaws that may actually be audible in reality.  The distortion noise will have a lot more structure and potentially more acoustic coloration than the pink noise that masks its appearance in the spectrogram.


All enclosures fall prey to their native responses in 2pi space.  I would at least build one sealed enclosure and see how it measures in-room (to see the room gain profile, as room gain will cut down on those harmonics seen in 2pi).  Now that REW can do distortion calculations with a regular sweep, it should be quite easy to see if you could do sealed, or if a quasi-IB or LLT will allow you the SPL/distortion needed and still hit the <10 Hz goal.

 

You are absolutely correct about harmonics when playing back ULF.  They can easily be detected, and the fact that you are taking it into account in the planning phases is quite good.

 

JSS

If and when the time comes, I will do just that.  Even now, I can measure and roughly extrapolate using the equipment I have.  Things get rough below my 12 Hz mode, and I don't see any evidence of room gain in the 8-10 Hz range.  A quick calculation suggests that even if my room were perfectly sealed, I would need eight 18"s to get enough displacement to hit the room gain limit with enough output to matter.  In the long term, I hope to add on to my house and build a custom sealed room.  I should just save my money to get that build sooner.
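For anyone wanting to reproduce that kind of quick calculation, here's a rough sketch: peak half-space SPL at 1 m from total volume displacement. The driver figures (Sd, Xmax) are illustrative assumptions for a long-throw 18", not any particular driver's specs, and room gain would come on top of this.

```python
# Rough displacement math: peak half-space SPL at 1 m from total volume
# displacement Vd of a (small relative to wavelength) piston array.
# Sd and Xmax below are assumed example values, not measured specs.
import math

rho0, p_ref = 1.204, 20e-6                 # air density (kg/m^3), 20 uPa ref
Sd, xmax, n_drivers = 0.119, 0.030, 8      # per-driver Sd (m^2), Xmax (m)
f = 10.0                                   # frequency of interest (Hz)

vd = Sd * xmax * n_drivers                 # total peak volume displacement (m^3)
p_pk = rho0 * (2 * math.pi * f) ** 2 * vd / (2 * math.pi * 1.0)  # half-space, 1 m
spl = 20 * math.log10(p_pk / math.sqrt(2) / p_ref)
print(f"Vd = {vd * 1000:.1f} L -> ~{spl:.0f} dB at {f:.0f} Hz, before room gain")
```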


The curious thing, IMO, is that all of the complex sound effects show no harmonic distortion in my scrutiny of the digital v mic'd versions in speclab. It's only the extended and relatively very high-level sine-wave effects like EOT and Irene that show harmonics that don't belong.

 

For example, comparing the Red Death dragon crash in HTTYD, digital v mic'd, I saw zero harmonics of the big 3-5 Hz hit, a portion that should have shown harmonics that don't belong if there had been any 2HD. Yet I saw 3% 2HD of the 6 Hz fundamental in Irene at reference level.

 

Although I find that result curious and worthy of examination, if anyone is ever gonna try to convince me that they will detect harmonics that don't belong at 10% THD ('T' for TOTAL) in effects like those found in the MWB we all crave...

I saw this post before I finished my reply to your previous post and had to think for a while.  Note my comments there that it's very possible that the noise in the sound effect is making it harder to identify the numerous noise-like (but still structured) IMD overtones occupying the same space in the spectrogram.  I argued that such distortion may be audible over the noise present in the soundtrack.  Whether this audibility significantly impacts the listening experience is another question.  It may not matter much in that particular scene and scenes like it.

 

On the other hand, when sound effects call for more subtlety, the lower distortion might make a big difference.  To improve on the example I gave a couple posts above about a 10 Hz tone that gradually increases in level, let's say that it's not a pure tone but a bit of band-passed noise in the 7-12 Hz range.  With distortion harmonics present, the listener may notice the components above 20 Hz before perceiving the ULF noise itself; without them, the effect is far more dramatic.  I hear similar effects frequently on well-done soundtracks.  I call it "sneaky" bass because it's often very startling.  I hear this kind of bass in real life when something very, very powerful approaches in the distance.  For example, the larger helicopters that fly over my house, with huge slow-moving rotors having a fundamental well below 10 Hz, can induce quite a bit of excitement when I perceive that intense ULF while the helicopter is too far away for me to hear the rest of the sound.


I guess the only way to find out would be to use a higher-distortion sub system to play back scenes at the same level, push the drivers to a specific distortion point, and then see if you can hear a difference.  For full-range speakers, even on complex content, I find it easy to tell when THD gets above 10%, hence my <5% rule of thumb.   I know for a fact that scenes my lone THT played sounded different than when I had two of them sharing the load at the same volume level....but the FR was better for the dual THTs....so not apples to apples.  I'm pretty sure if I had eight 12" drivers having to move further to duplicate the same stuff the eight 15"s do, I would be able to tell the difference, and that difference would be entirely distortion related due to excursion requirements, but I am not willing to conduct this experiment....maybe at a GTG?  Set up two systems, equivalent FR, one taxed more than the other for equal SPL, and try to do blind testing with some of the classic scenes?

 

JSS

I think blind testing could be quite revealing.  Too bad it would be very hard to set up correctly.  I think you'd have to set up each system with the woofers in the same locations, which could be quite difficult with large arrays.

 

Another worthwhile experiment would be to see how much bi-amping or tri-amping can improve the experience.  Set up an array for reference-level ULF, then add additional subs that only need to be capable of reference playback for non-ULF.  Then try running all the bass through the ULF array versus switching in a crossover between the ULF array and the non-ULF subs.  It's important to make sure the ULF array has plenty of headroom so that the headroom advantage provided by the bi-amped configuration doesn't come into play.  The bi-amped configuration should reduce IMD considerably for full-bandwidth effects.  How audible is this distortion reduction?
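As a sketch of what that crossover could look like digitally (the 30 Hz corner, the 48 kHz rate, and the LR4 topology are my assumptions for illustration, not a recommendation):

```python
# One way the crossover in that experiment could be realized: a digital
# 4th-order Linkwitz-Riley split (two cascaded 2nd-order Butterworths per
# leg) sending the lowest band to the ULF array and the rest elsewhere.
import numpy as np
from scipy.signal import butter, sosfilt

fs, fc = 48000, 30.0

def lr4(kind):
    # LR4 = a 2nd-order Butterworth applied twice (squared magnitude)
    sos = butter(2, fc, kind, fs=fs, output='sos')
    return np.vstack([sos, sos])

lfe = np.random.randn(fs)                  # stand-in for one second of LFE
ulf_leg = sosfilt(lr4('lowpass'), lfe)     # to the ULF array
mid_leg = sosfilt(lr4('highpass'), lfe)    # to the non-ULF subs
# The legs sum back flat in magnitude (LR4 legs are in phase at crossover),
# while each driver group sees a narrower bandwidth, which is what cuts IMD.
```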

 

Going by my subjective experience, the addition of dedicated mid-bass woofers made a major improvement to my sound.  What I'm not sure about is how much of that improvement came from the slight increase in headroom for wide-band effects, how much from the ability to optimize room locations for each frequency range, and how much from the reduction of IMD.  My impression is that the reduced IMD mattered a lot, and I noticed much more improvement in the lower octaves than I was expecting.  Noise-heavy sounds like rocket launches with heavy content in the teens and 20s sounded both a lot more powerful and more natural at the same time.


Flight of the Phoenix:

 

Level - 5 Stars (114.39 dB composite)

Extension - 5 Stars (8 Hz)

Dynamics - 5 Stars (28.14 dB)

Execution - 5 Stars - This is a classic and well-done LFE film. Many demos done to this track.

 

Overall - 5 Stars - Only the second film to earn a 5 Star overall rating. This thing is a beast.

 

Recommendation - Buy. Even if just for demo material. The film is OK to good, the bass excellent.

 

 

JSS

Finally got around to watching Flight of the Phoenix on the DIY setup, although still pre-miniDSP - better than I remember and the added extension of going sealed has improved things greatly.  And I don't remember there being a Massive Attack track in there! 

 

Film is still cheese-tastic, though  ;)


If there are pure sine sweeps as LFE in various soundtracks, can you post the speclab of one here, as I'm not aware of any?

 

Some years back, Axiom conducted a listening test focused on what the threshold is for audibility in the subwoofer range. The results were surprising to most and caused a lot of howls and jeers.

 

Link: http://www.axiomaudio.com/distortion

 

Conclusion notes:

 

The Results

While it has been recognized for years that human hearing is not very sensitive to low bass frequencies, which must be reproduced with much more power and intensity in order to be heard, what these results show is that our detection threshold for “noise” (made up of harmonically related and non-harmonically related test tones) is practically non-existent at low frequencies. (The “noise” test tones are noise in the sense that they are not musically related to tones commonly found in musical instruments.) In fact, the “noise” tones at 20 Hz and 40 Hz had to be increased to levels louder than the music itself before we even noticed them. Put another way, our ability to hear the test frequency “noise” tones at frequencies of 40 Hz and below is extremely crude. Indeed, the results show we are virtually deaf to these distortions at those frequencies. Even in the mid-bass at 280 Hz and lower, the “noise” can be around -14 dB (20% distortion), about half as loud as the music itself, before we hear it.

Conclusion

Axiom's tests of a wide range of male and female listeners of various ages with normal hearing showed that low-frequency distortion from a subwoofer or wide-range speaker with music signals is undetectable until it reaches gross levels approaching or exceeding the music playback levels. Only in the midrange does our hearing threshold for distortion detection become more acute. For detecting distortion at levels of less than 10%, the test frequencies had to be greater than 500 Hz. At 40 Hz, listeners accepted 100% distortion before they complained. The noise test tones had to reach 8,000 Hz and above before 1% distortion became audible, such is the masking effect of music. Anecdotal reports of listeners' ability to hear low frequency distortion with music programming are unsupported by the Axiom tests, at least until the distortion meets or exceeds the actual music playback level. These results indicate that the “where” of distortion—at what frequency it occurs—is at least as important as the “how much” or overall level of distortion. For the designer, this presents an interesting paradox to beware of: Audible distortion may increase if distortion is lowered at the price of raising its occurrence frequency.

 

I'm positive that I've conducted the largest body of comparison spectrograph measurements ever made between the signal off the disc and the output mic'd at the seats from the subwoofers. As I've mentioned before, curiously, there is no detectable harmonic distortion found in any of those results, with the exception of sine-wave-based effects.  Here are some examples that should show harmonics on even cursory examination, if they were present:

 

Plane Crash Scene in WOTW:

 

[spectrogram image]

 

Red Death Crash Scene from HTTYD:

 

[spectrogram image]

 

Neither THD nor IMD tests have ever been conducted using program source. But, I know this: if I input a 5 Hz sine wave @ 0 dBFS, like the low-end burst shown in the HTTYD scene, I'll clearly see the 2HD at 10 Hz in speclab, whereas none shows up when the entire effect plays. That phenomenon needs to be examined, IMO.

 

I've scoured dozens of these comparisons using different SL settings over the years and have never found a calculable trace of harmonic distortion.

 

Regarding the EOT effect, it really appears to be a sine wave that was sent through an effects processor. Otherwise, it makes no sense. But, it follows a similar pattern of effect creation in that it consists of a fundamental with odd-order overtones, like the chopper blades in Irene and others.


Aural Masking may have a lot to do with what Axiom found.  But I always look upon any of their conclusions with a critical eye as they are some of the pioneers of limiting subs below a certain frequency to avoid distortion and over-excursion, and their premier center channel speaker design for years (maybe still today) was/is a comb-filtering masterpiece, with tweeters nearly a foot apart.  But then again, what are the concepts of  'multichannel stereo' and 'line arrays' if not comb-filtering nirvana?

 

I tend to have a 'worst case scenario' approach.  If my system can handle the worst possible scenario and reproduce what is on the disc with little 'extra' (my threshold for 'extra' is 5%, based on long hours of listening to both tones and program material), then I am ready for anything, and I am pretty sure I will experience what is on the disc without any coloration from the playback system.  I will grant that playback material will ALWAYS be less stringent than the worst case scenario (though so far, there seem to be some sound designers/mixers out there ready to wreck some speakers; the EoT intro and HTTYD are a kind of underground 'proof' of this).

 

The good part about ULF is that below 10 Hz, harmonic distortion becomes harder to detect because of our own ears' frequency response.  You need 3HD+ at significant levels to be able to pick up distortion at fundamentals below 7-8 Hz, which makes LTs a very good idea that low, as the added driver excursion still nets you a 'distortion-free' system to the ears.  My own 5% threshold rises below 12-15 Hz, simply due to my ears' limitations.  Clean 15-20 Hz is some of the hardest stuff to achieve at high levels, due both to ear response and to house-rattling.

 

Just like anything else, there are thresholds of what is acceptable.  If I had slightly looser thresholds, my system would easily be a 'reference level capable' system for 90% of all audio tracks.  But due to my thresholds, it simply cannot be, as I prepare for playback of BEQ tracks.  Some of those can be demanding.  But most will have so much aural masking that THD below 20 Hz will go unnoticed.  I will definitely grant you that.  TF2 BEQ is a good example.  It uses <20 Hz to 'reinforce' the midbass/MF/HF that is happening, to tremendous auditory and tactile effect.

 

JSS 


If there are pure sine sweeps as LFE in various soundtracks, can you post the speclab of one here, as I'm not aware of any?

 

I looked around here for some examples, but I gave up pretty quickly.  It makes perfect sense after thinking about it.  No one is likely to mix straight ULF without harmonics into a track.  Why bother when 99.9% of listeners (likely including the mixers themselves on their own systems) don't hear anything?  Admittedly, this realization is slightly disappointing to me because it means almost all the ULF in movie LFE is subharmonic or subordinate to fundamentals at higher frequencies, due to loudness differences.  Perhaps that's reasonable.  I'm sure some real-life sounds have loudness dominated by ULF, but these may be limited to things like sonic booms, meteors, and volcanic eruptions at a great distance.  Such sounds literally blow out windows and probably don't belong in our home theaters.

 

Some years back, Axiom conducted a listening test focused on what the threshold is for audibility in the subwoofer range. The results were surprising to most and caused a lot of howls and jeers.

 

Link: http://www.axiomaudio.com/distortion

 

I read this, hoping to learn something counter-intuitive.  Unfortunately, all I learned is that their study is severely flawed.  First of all, they chose musical passages with very little dynamic range and likely high distortion (for loudness and aesthetics).  To the extent that the distortion in the source material masks distortion in the speakers, this is probably a best case.  The applicability of this study is inherently limited to source material with these qualities.

 

A more fundamental issue is that the study only tests the audibility of pure sine tones.  Speakers don't generally produce single pure sine tones as distortion products.  Pure sine tones are almost always harder to hear than noise at the same SPL.  Moreover, the noise that speakers produce is more structured than true noise and may be even more audible.  The study defines distortion as the sound level of a sine tone divided by the average playback level of the music, converted to a percentage.  This measure of distortion is completely useless outside the context of the musical selections because it's based on average playback level.  Their distortion figures at 20 Hz and 40 Hz are totally meaningless because it's unlikely the track has any bass low enough to produce distortion overtones at those frequencies.  If it did, the average playback level would be a lot higher, making the percentages lower for the same level of distortion tone.  This would make the audibility thresholds look a lot lower!
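For reference, the quoted figures convert between dB and percentage like this; the second line shows how strongly the result depends on the chosen reference level, which is the crux of the objection:

```python
# Converting the quoted figures: a tone 14 dB below the reference level is
# "20% distortion" by this definition. The second line shows what the same
# tone scores against a reference only 10 dB hotter.
tone_re_avg_db = -14
print(f"{10 ** (tone_re_avg_db / 20):.0%}")           # -14 dB -> ~20%
print(f"{10 ** ((tone_re_avg_db - 10) / 20):.1%}")    # same tone -> ~6.3%
```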

 

A much better study would be to actually simulate the distortion produced by a speaker by analyzing the source track, synthesizing the distortion, and then mixing it into the track before playback.  It would still need to be repeated with a variety of source material to be meaningful.
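A minimal sketch of what that simulation might look like, assuming a memoryless cubic non-linearity as a crude stand-in for real driver behavior; the file names and "THD-like" levels are placeholders, and a real study would model a measured driver instead:

```python
# Sketch of the proposed test: derive a distortion-only residual from the
# source itself, then mix it back in at controlled levels for listening.
import numpy as np
from scipy.io import wavfile

rate, x = wavfile.read("source.wav")           # hypothetical mono test track
x = x.astype(np.float64) / np.abs(x).max()

residual = (x - 0.15 * x ** 3) - x             # distortion products only

for d_pct in (2, 5, 10, 20):                   # target "THD-like" levels
    gain = (d_pct / 100) * np.sqrt(np.mean(x ** 2) / np.mean(residual ** 2))
    y = x + gain * residual
    y /= np.abs(y).max()                       # peak-normalize for safety
    wavfile.write(f"test_{d_pct}pct.wav", rate, y.astype(np.float32))
```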

 

I'm positive that I've conducted the largest body of comparison spectrograph measurements ever made between the signal off the disc and the output mic'd at the seats from the subwoofers. As I've mentioned before, curiously, there is no detectable harmonic distortion found in any of those results, with the exception of sine-wave-based effects.  Here are some examples that should show harmonics on even cursory examination, if they were present:

 

Plane Crash Scene in WOTW:

 

[spectrogram image]

 

Red Death Crash Scene from HTTYD:

 

[spectrogram image]

 

Neither THD nor IMD tests have ever been conducted using program source. But, I know this: if I input a 5 Hz sine wave @ 0 dBFS, like the low-end burst shown in the HTTYD scene, I'll clearly see the 2HD at 10 Hz in speclab, whereas none shows up when the entire effect plays. That phenomenon needs to be examined, IMO.

 

I've scoured dozens of these comparisons using different SL settings over the years and have never found a calculable trace of harmonic distortion.

 

Regarding the EOT effect, it really appears to be a sine wave that was sent through an effects processor. Otherwise, it makes no sense. But, it follows a similar pattern of effect creation in that it consists of a fundamental with odd-order overtones, like the chopper blades in Irene and others.

 

I have no way of telling, one way or the other, how much distortion is present in your system from the spectrograms you posted.  They actually look quite different to me, but I'm assuming the differences I see are mostly due to room response and maybe some limiting in the case of HTTYD.  Measuring actual distortion with live source material may be quite difficult.  One approach that could work would be to deconvolve the mic capture with the room impulse response, measured at a low enough level that distortion can be neglected.  Then, deconvolve this result with the source itself.  In theory it should work, but I imagine it might be messy to do in practice.
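One way to arrange that computation, sketched in the frequency domain; time alignment, windowing, and noise handling are all glossed over here, and they're exactly where the mess would come from:

```python
# Model the capture as Y = H*X + D (room response H times source X, plus
# distortion D), estimate H from a low-level measurement, and take the
# residual as the distortion.
import numpy as np

def distortion_residual(y_mic, x_src, h_room):
    n = len(y_mic)
    Y = np.fft.rfft(y_mic, n)
    X = np.fft.rfft(x_src, n)
    H = np.fft.rfft(h_room, n)         # room IR measured at distortion-free level
    return np.fft.irfft(Y - H * X, n)  # whatever linear playback can't explain
```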

 

As for why distortion does not appear in spectrograms, I can propose a few reasons.  It must be emphasized that distortion results from non-linearity of the subwoofer's mechanical components, among other things.  Nothing about the signal changes the physics of the situation.  Without some kind of specialized correction, non-linearity will lead to distortion.  Why would this not appear in a spectrogram comparison?

  • If there's content at the same frequency as the distortion overtone and it's much higher in level than the distortion, then the distortion won't contribute any significant change.  This is what happens with the 30 Hz overtone on the 10 Hz fundamental in the EoT effect.
  • If there's content at the same frequency as the distortion overtone and it's similar in level, then the spectrogram will show the result of combining them, and the result depends on their relative phase.  They could interfere constructively or destructively, causing anything from a doubling of the signal amplitude to complete cancellation.  The result may be identical to the source except with distorted phase that may not appear on the spectrogram.
  • As I stated before, non-linearity causes HD with sine tones but anharmonic IMD when multiple tones are present.  With two tones of different frequency playing at the same time, IMD manifests as tones appearing at frequencies that are multiples of one frequency added to or subtracted from the other (enumerated in the sketch after this list).  I believe IMD is typically distributed over a greater number of overtones than HD.  With a complex sound, the IMD overtones may be spread all over the place and very hard to visually discern from noise present in the content itself.  That said, just because one can't see it in the spectrogram doesn't mean one can't hear a difference.  That's not to say that one will, either.
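As referenced above, here's a small sketch enumerating the low-order IM products of just two tones; the frequencies and the order limit are arbitrary examples:

```python
# Low-order intermodulation products |m*f1 +/- n*f2| for two tones. With
# complex content the set grows combinatorially, smearing into noise-like hash.
f1, f2, max_order = 10.0, 13.0, 3          # example tones, orders up to 3rd

products = set()
for m in range(max_order + 1):
    for n in range(max_order + 1):
        if 0 < m + n <= max_order:
            products.add(abs(m * f1 + n * f2))
            products.add(abs(m * f1 - n * f2))
print(sorted(products))
# -> 3, 7, 10, 13, 16, 20, 23, 26, 30, 33, 36, 39 Hz from two tones alone
```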

At this point, I think you've made a good case for why moderate distortion levels on ULF are unlikely to matter.  To the extent that most LFE contains strong noise and/or overtones that may mask distortion produced by a sub, you may very well be right, but I remain unconvinced.  The difference may only be apparent on some effects (i.e., ambient sound effects rather than big hits).  It may be too subtle to be important in the grand scheme of things.  I do have to wonder, though, and until adequate listening tests have been carried out, we will probably have to agree to disagree.


I have no way of telling, one way or the other, how much distortion is present in your system from the spectrograms you posted.  They actually look quite different to me, but I'm assuming the differences I see are mostly due to room response and maybe some limiting in the case of HTTYD.  Measuring actual distortion with live source material may be quite difficult.  One approach that could work would be to deconvolve the mic capture with the room impulse response, measured at a low enough level that distortion can be neglected.  Then, deconvolve this result with the source itself.  In theory it should work, but I imagine it might be messy to do in practice.

 

As for why distortion does not appear in spectrograms, I can propose a few reasons.  It must be emphasized that distortion results from non-linearity of the subwoofer's mechanical components, among other things.  Nothing about the signal changes the physics of the situation.  Without some kind of specialized correction, non-linearity will lead to distortion.  Why would this not appear in a spectrogram comparison?

 

Actually, harmonics appear in SpecLab the same as everything else the mic picks up does, including noise down to -60 dB. My point is not that SL doesn't detect it, but rather that it simply isn't generated by the subs when the signal is complex vs steady state.

 

[spectrogram image]

 

Frequency response non-linearity explains the differences you see in the comparisons, as you guessed. But there are no differences in the area focused on in the HTTYD scene (they are virtually identical), and yet there is an obvious difference in the Irene scene, which clearly shows 2HD that does not belong there (and 3HD is obviously masked by the effect).

 

In both effects, the fundamental is the same, the system is the same, the levels are the same, the mic position is the same, the room is the same... in fact, only the input signal is different.

 

I've found this phenomenon to hold true in countless comparisons of complex effects vs Irene, and now EOT.

 

My own theory is that this is related to IMD in a way no one has suspected. In fact, when I finally post the theory, backed by measurements, I have no doubt about its reception by pretty much everyone. I may conclude that my theory is unfounded and just accept the phenomenon as a good thing and leave it at that.

 

Ilkka concluded that IMD exactly tracked THD, so he eliminated the spectral contamination portion of his battery of tests as redundant. I tried to persuade him otherwise, but Ilk rarely followed my suggestions, and it was he who had to do the extra work or not, so I let it go. But I have used the spectral contamination data from the tests he did conduct and posted the results of using the same drivers in 2 of the cases.

 

This is just a curious result in my data. But rest assured, if there is no harmonic distortion in the spectrograph, then there is none being generated. The ACO Pacific rig is accurate and SL graphs what is being fed into it without discrimination.


Actually, harmonics appear in SpecLab the same as everything else the mic picks up does, including noise down to -60 dB. My point is not that SL doesn't detect it, but rather that it simply isn't generated by the subs when the signal is complex vs steady state.

--- cut (see original post) ---

 

Frequency response non-linearity explains the differences you see in the comparisons, as you guessed. But there are no differences in the area focused on in the HTTYD scene (they are virtually identical), and yet there is an obvious difference in the Irene scene, which clearly shows 2HD that does not belong there (and 3HD is obviously masked by the effect).

 

In both effects, the fundamental is the same, the system is the same, the levels are the same, the mic position is the same, the room is the same... in fact, only the input signal is different.

 

I've found this phenomenon to hold true in countless comparisons of complex effects vs Irene, and now EOT.

 

My own theory is that this is related to IMD in a way no one has suspected. In fact, when I finally post the theory, backed by measurements, I have no doubt about its reception by pretty much everyone. I may conclude that my theory is unfounded and just accept the phenomenon as a good thing and leave it at that.

 

Ilkka concluded that IMD exactly tracked THD, so he eliminated the spectral contamination portion of his battery of tests as redundant. I tried to persuade him otherwise, but Ilk rarely followed my suggestions, and it was he who had to do the extra work or not, so I let it go. But I have used the spectral contamination data from the tests he did conduct and posted the results of using the same drivers in 2 of the cases.

 

This is just a curious result in my data. But rest assured, if there is no harmonic distortion in the spectrograph, then there is none being generated. The ACO Pacific rig is accurate and SL graphs what is being fed into it without discrimination.

I'm not saying that your system is not capturing the distortion produced by the sub.  As long as the distortion is above the measurement noise floor, it should be captured by the measurement system.  What I'm saying is that you may not easily see that distortion when viewing the captured data as a spectrogram, without further processing to remove the room influence.

 

What I suspect is happening with the more complex signals is that the IMD energy is being spread out across many different frequencies that have little to do with the harmonics of the frequencies involved.  If the signal contains more low-frequency content or is in general more complex, then we can expect a greater number of possible IMD overtones.  What's not clear is how much distortion energy is present and how it gets distributed among these possible overtones.

 

What I think may be happening is that with complex and noisy input signals, the distortion is being distributed across a wide range of frequencies where it basically blends in with the noise that's already there.  You may not be able to see this on the spectrogram, especially since the color map is logarithmic in level (i.e., in dB).  Even if it's actually there, it might be confused with variation in the room response instead.  If you don't see it in the spectrogram, there's a good chance you don't hear it either, but that's not 100% certain.

 

This is an interesting and complex problem.  I wonder if your results could be reproduced in a speaker simulator.  At least with a simulator we might be able to control for things like room influence and background noise.  A simulator could also be useful for synthesizing distortion for audibility tests.  I guess I better put that on my long list of things to experiment with.

 

Will you post your theory here when you are ready?  Or will you post it elsewhere?


Bosso shows the digital spec graph first and then his in-room version to show how close it is.  If there is no distortion in the digital signal, then, if the system is accurate, there won't be any in the room measurement.  The same goes if there is distortion in the digital signal; that would get reproduced as well.  If IMD is added by his system and masked by other frequencies, it would show up as a louder-than-digital signal.


I am not sure what the topic was, but I have a question for anybody who might be able to answer. I am running dual subwoofers: an SVS PB12-Ultra with an 800 watt amp and a Klipsch SW-450 pushing 200 watts. Those are RMS figures; the peaks are 2300 and 450. When the phase on the Klipsch is at 0, the bass below 40 Hz is much stronger with both subs on, but the bass above 40 Hz is down with both subs. When the phase is at 180, everything is reversed: below 40 is down and above 40 is up. The SVS has a variable phase control; it can be set anywhere from 0 to 180 in increments of 1. Should I experiment with the SVS phase? The sub weighs 135 lbs and the control is in the back, hard to get to. The SVS also has extensive bass management with the use of a mic and software called Room EQ Wizard; this is a last resort because of the time factor. My e-mail is 1981mp@suddenlink.net if anybody has any suggestions.


TMNT has a very good sweep in it towards the end. Would love to see it graphed. Thanks

I'll wait for measurements to confirm, but I think TMNT has a steep 30 Hz filter.  What I heard was loud but definitely not low.  Even Godzilla had a better bottom end than this one.  I guess that makes this another loudness war victim.


The Loudness War: Reference Level is Already Dead

 

At several points in the past, I've raised the question of whether "0" (calibrated according to Dolby's theatrical standards) is really the correct reference playback level for Blu-ray soundtracks.  I discussed my belief that Blu-ray releases are typically mixed at a level lower than theatrical reference and argued that this may unnecessarily cause considerable damage to the sound, including loss of transient resolution, clipping (sometimes severe), and application of high-pass bass filtering to raise the maximum level of the rest of the sound.  My running hypothesis has been that this damage is typically done in the process of "remixing for the home".  Some here have countered that playback at "0" is too loud for home releases because small rooms sound louder at the same SPL compared with large rooms.  This is an important detail to keep in mind; however, even after adjusting the level trim down by an appropriate amount (or perhaps better, calibrating against an EQ curve that slopes down toward high frequencies), "0" is still too loud for almost all mixes.  What's going on here?

 

I think I'm getting closer to some answers.  I came across a Gear Slutz thread with some interesting discussion between industry people about loudness in movies:

 

https://www.gearslutz.com/board/post-production-forum/768373-cinema-playback-levels-mixing-again.html

 

My take-aways from it are as follows:

  • Most movie theaters do not play back at reference level.  The number that do is likely very small.
  • Most movie theaters likely play most films at or under -5 dB.  Some theaters may play at -10 dB or even lower, especially in Europe.  Note that the Dolby system used by most theaters labels reference level "7", with the scale being rather nonlinear.  The paper I link to below describes the scale well.
  • Many theatrical mixes are being done at levels below reference level and are louder to compensate.  This means that much of the damage I have been blaming on the "home remix" process may be happening to the original theatrical tracks in many cases.
  • Until recently, the fact that Dolby requires monitoring of the print master mixes (for the old reels) at "7" has helped limit the loudness of mixes, because the engineers must sit in the room at that level.  Unfortunately, with DCP becoming widespread, many houses are choosing to save money by skipping the print mastering process.  This means they may never receive guidance from Dolby on monitoring standards, and they may never hear their mix played at the standard reference level.
  • Because movie theaters usually run with limited staff, playback levels are often only adjusted downward in response to customer complaints of loudness.  One loud movie or even one loud trailer can potentially lead to a complaint and a lowered playback level that adversely affects playback of films mixed at higher levels.  This in turn leads to a downward spiral, a loudness war.
  • Blu-ray mixes are all over the map; sometimes the Blu-ray gets its own mix; sometimes it gets the theatrical mix; sometimes the Blu-ray, TV, and video-on-demand mixes are one and the same.
  • Many audio professionals are frustrated by having to try to convince directors to accept a mix that plays too quietly in most theaters in order to abide by standards.  Many directors refuse to compromise for a variety of reasons.
  • Responding to an actual medically documented case of hearing damage in a movie theater (showing "Inception", of all things), the Belgian government is moving (or already has moved) to draft limits on loudness in movie theaters.  There is some speculation that similar legislation may arise in other parts of the EU.
  • Interest appears to be building to add loudness measurement metadata to DCP, using a method like the R128 standard developed for TV in the EU, and to update playback hardware to allow this information to be used in some useful way, such as enforcing an upper bound on loudness.

Anyway, there's a lot of stuff here to chew on.  It does look like the loudness war in movies has been raging for quite a while, perhaps since digital formats came into being and the Christopher Nolans of the world found ways to abuse all that extra headroom.  Perhaps the industry is taking note now because the situation appears to be taking a turn for the worse as of late.  I expect it will keep getting worse unless and until the industry can come up with a good standard, like R128 for TV in the EU.

 

I found a paper that discusses the issue in detail and outlines a possible solution along these lines:

 

http://www.grimmaudio.com/site/assets/files/1088/cinema_loudness_aes_rome_2013.pdf

 

They give an example of a production in which the TV release actually has more dynamic range than the theatrical one.  They propose using the same K-weighting scale used by R128 to measure loudness in LUFS.  Interestingly, a wide variety of material mixed at theatrical reference level appears to have K-weighted loudness of around -27 to -29 LUFS.   The paper suggests having most theaters limit average playback level to -27 LUFS, with up to -6 LUFS on short passages.  Note that LUFS is relative to the full-scale level of a single channel.  I believe the standard band-limited calibration pink noise at -27 LUFS is around 80 dBA.
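For anyone who wants to check their own material against those numbers, K-weighted integrated loudness per ITU-R BS.1770 can be measured with, e.g., the third-party pyloudnorm package; "mix.wav" here is a hypothetical file name:

```python
# Measuring the K-weighted integrated loudness the paper works in, using
# pyloudnorm (an ITU-R BS.1770 meter).
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("mix.wav")            # float samples in [-1, 1]
meter = pyln.Meter(rate)                   # BS.1770 K-weighted meter
print(f"{meter.integrated_loudness(data):.1f} LUFS")
# The paper's reference-level material lands around -27 to -29 LUFS;
# pyln.normalize.loudness(data, src_lufs, -27.0) would rescale a mix to -27.
```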

 

Anyway, I hope others find this interesting.


I'll wait for measurements to confirm, but I think TMNT has a steep 30 Hz filter.  What I heard was loud but definitely not low.  Even Godzilla had a better bottom end than this one.  I guess that makes this another loudness war victim.

 

Besides the somewhat lacking extension in TMNT, I thought the bass execution was excellent in relation to the on-screen action and tone of the film. The audio in general sounded great to my ears, but I am curious to see it graphed, as it definitely does not have the extension of the better bass films. VERY curious to check out the 3D now!

 

Curious to see that bass sweep as well, which I think was during the jeep/avalanche scene? Again, not the deepest, but very impressive and effective in conjunction with the scene, I thought.

