Everything posted by SME

  1. Yeah. Thor was definitely mixed way too hot. It inspired me to buy an amp because I thought my AVR was clipping. Nope. Of course, I still don't regret the external amp. Anyway, I posted my comments about Interstellar in its thread. Those familiar with my opinions will be surprised to learn that I liked this soundtrack a lot. I did not find it to be excessively loud, nor did I have any issues with the dialog. I did hear a lot of clipping, but much of it seemed to be on purpose, with the rest arising from a headroom compromise with the powerful score. I do wonder how much room acoustics play into perceived loudness in this movie. I did not turn this one down compared to most movies. I would think the IMAX should have been large enough to play this at true reference level without the need for hearing protection. Maybe the theater JSS went to was still a bit too small? Or maybe their system was overloaded and distorting.
  2. So the LieMax theaters auto-recal themselves now? I heard this was being talked about, but I didn't know it had been implemented already. I believe LieMax essentially uses the same technology as Audyssey MultEQ XT32. I can't speak for XT32, but my experiences with the XT product have been mixed. This is very interesting to me, because I find many movies just sound too loud at levels that are "supposed" to be reasonable. One possibility I argued for in a previous post is that many directors purposely have their films mixed at lower levels under the (likely correct) assumption that most theaters will be playing back at below the proper reference level for the space. It's also possible that this is done for some Blu-ray releases for similar reasons. Unfortunately, there's no way to know when that's the case unless someone from the sound team actually says as much on a forum somewhere.
Another factor which may play a part is how soundtracks translate between rooms. There are a lot of reasons why soundtracks don't translate very gracefully, having a lot to do with psychoacoustics and differences in loudness perception for sounds of varying duration. Roughly speaking, smaller rooms have lower reverb times and contribute less room reverb to the overall sound, so transients will actually measure louder on such a system calibrated to the same steady-state pink noise SPL as one in a much larger room. In recent publications, Dolby and others have argued for adjusting the playback level in the mixing room according to its size to improve translation. Whereas 85 dBC is recommended for typical theaters, residential-sized spaces should probably aim for closer to 79-82 dB or less. I believe 76 dBC is now recommended for near-field monitoring, and less still for headphone listening. The need for this kind of correction is well documented in a variety of other places, including by mixers on the Gearslutz forum who are frequently disappointed by how puny the mix they prepared in a bedroom at 85 dBC sounds on the big screen at the film festival calibrated to the same 85 dBC.
The trouble with room size and translation is that they don't correlate exactly, and furthermore, playback level is probably not the only thing that needs to be adjusted to get things right. The frequency response target curve likely also needs to be adapted to the decay characteristics of the room. In the past, I've argued that a theatrical mix should be the "standard" that we all calibrate our systems to reproduce, but I now understand better that in practice this will be difficult for us to achieve. Both playback level and frequency response target curve can only be adjusted to *compromise* for the psychoacoustic differences. In a small room, the ratio of SPL for transients to SPL for sustained sounds will still be higher than in larger reverberant spaces, so the best one can often do when reproducing a dub-stage-mixed track in a small room leaves transients that bite a little too much and sustained sounds that are still a bit weak. Therefore, there is a big benefit to home listeners when a theatrical mix is re-done with a more intimate monitoring setup than the dub stage. Unfortunately, I have strong reason to believe that bass levels only need to drop by about half as much as the mid/high playback levels to retain the tonal balance; hence, by following the usual monitoring level recommendations, a home "remix" may gain clipping due to the effective loss of headroom for bass.
Getting back to LieMax and their auto-calibration: the 85 dBC standard has been around for a long time, and in that time, I believe newly constructed theaters have tended to have less live acoustics than the theaters they replaced. As a consequence, these theaters may also reproduce tracks too loudly when playing them back at reference level. Hence, there's ample reason for the theater operator to turn things down a bit, and ideally boost the bass up a bit to restore the tonal balance. Until Audyssey or whoever can better model loudness in-room and make "0" on the dial actually sound like reference, we're likely to still have the problem of levels being off. While likely rarer these days, I reckon there are theaters at the other extreme for which even 85 dBC isn't enough.
For what it's worth, I am now running with a fair amount of room treatment, and I'm no longer using a significant high frequency roll-off. The treatments (particularly the diffusers) cleaned up my upper-mids and highs dramatically, so they sound much clearer and less harsh, and the balance without the HF roll-off sounds better. My recent analysis suggests that I am actually listening near-field insofar as my ratio of transient to sustained SPL is concerned, particularly from 400 Hz or so on up. Below that, with bass trapping, I'm still probably pretty close to near-field. That means I should in theory listen at "-9" from reference. In reality, I bump my bass response, starting around 250 Hz and ending about 5 dB up at 20 Hz, to keep the bass balanced. So with mids/highs at "-9" from reference, I'm actually at "-4" from reference in the bass. With this calibration and my room sounding amazingly good, I'm still surprised by how much my playback levels vary from film to film. If I treat "0" as 76 dBC instead of 85 dBC for mids and highs (about average for me), then my playback level, chosen based on subjective dialog level, varies about -9/+4 dB! By the way, it is well documented that if you put relatively naive listeners in control of a nice system in a proper room and tell them to set the level of dialog to where they can hear it best, most will set it within about 2 dB of one another. We don't need objective measurements to tell us that relative film loudness is all over the map and that there's a lot more to it than just room size and translation issues.
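In case it helps to see the arithmetic, here is a minimal sketch (Python/numpy; the break points are just the numbers from my own setup above, not any standard) of the offset curve I'm describing: mids/highs trimmed to -9 dB from reference, with a bass shelf that starts rising around 250 Hz and tops out about 5 dB up at 20 Hz, leaving the deep bass around -4 dB from reference.

```python
import numpy as np

def target_offset_db(freq_hz, trim_db=-9.0, shelf_start_hz=250.0,
                     shelf_max_boost_db=5.0, shelf_corner_hz=20.0):
    """Playback offset relative to theatrical reference, in dB.

    Above shelf_start_hz the offset is just the master trim.
    Below it, the offset ramps up (linearly in log-frequency)
    until it reaches trim_db + shelf_max_boost_db at shelf_corner_hz.
    These break points are illustrative, not a standard.
    """
    f = np.asarray(freq_hz, dtype=float)
    # Fraction of the way from shelf_start_hz down to shelf_corner_hz, in log-f.
    frac = (np.log10(shelf_start_hz) - np.log10(f)) / (
        np.log10(shelf_start_hz) - np.log10(shelf_corner_hz))
    frac = np.clip(frac, 0.0, 1.0)
    return trim_db + shelf_max_boost_db * frac

freqs = np.array([20, 40, 80, 160, 250, 1000, 4000])
for f, g in zip(freqs, target_offset_db(freqs)):
    print(f"{f:5d} Hz -> {g:+.1f} dB re: reference")
```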
  3. From your first sentence, I would hazard a guess that the clipping was a part of that philosophy. I also disagree. I don't think digital clipping/limiting sounds like anything else but digital clipping/limiting. When my ears are really being blasted with too much sound, the distortions are completely different. The soundtracks that seem most realistically loud to me employ clever level management. If they can keep the average level at 78 dB or so, they have > 30 dB to work with at reference level (more in the bass) for transients that really get your attention. For louder scenes, it makes sense for the mixer to start out maybe ~10 dB higher (leaving plenty of headroom so nothing sounds forced) and then slowly back the fader down over time. Most of the time we don't even notice this gradual drop, especially with much higher SPL transient sounds to reinforce the action on-screen. The movie "Star Trek" does this very well in my opinion. Listen carefully and you'll realize that the loudness is very well moderated throughout, yet the big effects leave your jaw on the floor. On a capable system, it sounds fantastic at "0". I also don't think that realism is any excuse for muddy dialog. In all kinds of movies, plenty of dialog is recorded on location (about as realistic as you can get) and it still turns out just fine. "Whiplash" is a recent film that comes to mind where most dialog was realistic in this sense. Did anyone have trouble understanding what was being said in that movie? Maybe the real story is that Nolan is half deaf and wants us to hear it like he hears it.
  4. I want to request measurements for "The Secret of Kells" and "The Song of the Sea". Both are visually stunning Irish-themed animated films with excellent audio as well. "The Secret of Kells" has bass that might score 3 stars for level, with solid dynamics and a couple of effects that sound like they go deep. This film has been a part of my library (and my reference material for testing, due to its demanding upper-mid content) for a while now. It was "The Song of the Sea" that I just watched tonight. TSotS sounds like it has lower levels (1 or 2 stars?) and is mixed with more headroom (turn it up!), but there seems to be more bass content than in the first. This is remarkable because the movie is so very laid back and yet is chock full of LF effects. Everything from automobile engines to wind and water effects is rendered with a natural complement of low frequencies. As one who appreciates subtlety (I don't always need to be "pounded" by the bass), I have to say this is one of my favorite "bass movies" in a while.
  5. I thought it dug a lot deeper, right off the bat, when I saw it in the theater, and then I second guessed myself...
  6. I hope "HTTYD 2" is worthwhile. I clearly heard content down to 20 Hz throughout, even though it was all low level. The score had a few heavier bass hits, so it may be hard to get a BEQ with a good tonal balance that doesn't make the score disproportionately heavy-handed.
  7. The PvA is not enough information to objectively compare bass in movies like EoT and B:LA. If EoT has more wide-band content, it may be perceived as more powerful than B:LA despite the PvAs, at least on a system with smooth frequency response. I have a hunch that wide-band content sounds less impressive on systems with messy frequency response, for various reasons.
  8. On the subject of movies with both insane bass and terrible execution, has anyone watched "The Neverending Story"? It was a relatively low budget kids movie made in 1984. I believe the one I saw was a Blu-ray re-release. Right away, I could tell something was weird about the sound. The bass line in the cheesy 80s pop song at the start was enormously slow and heavy. By the time the movie got going, I had it figured out: they took the soundtrack and rammed it through a subharmonic synthesizer. Presumably they had neither a clue nor the proper monitoring equipment to know what they were doing. There's nothing like hearing huge bass hits accompany rather mundane scenes. It's just a cheesy kids movie, not an action flick. Oh yeah, the subsonic analog synthesizer that pops up in the score at various points is quite a thrill!
  9. Crazy! How does "Ragnorok" compare to "Kon Tiki" in terms of sound design? I loved the bass in "Kon Tiki" in the storm scene, but after that, the bass was just overly gratuitous. I commented to my wife that it sounded like a guy at the console just shoved the 20 Hz fader up to max. Overall, it was great for a full body massage, but aside from the storm, I don't feel the bass execution was particularly good. Is "Ragnorok" better in this regard?
  10. I just watched "Boxtrolls". I really loved the movie and soundtrack. Bass is pretty low level but has dynamics and is extended at least to 20 Hz. It may go lower. A relatively high playback level is probably warranted. The dialog is top-notch, as are both the sound design and environmental effects. This is another film whose theatrical release was in Atmos, and it clearly benefited in the sound design process. I love the sound of Atmos ... on my 5.4 system!
  11. The relative audibility of ULF does depend on the MVL. For a steeply filtered film like PR, a high sub playback level is not likely to make any ULF audible, but for films like TF and EQ, it can and does make a big difference. My reason for bringing PR into the discussion wasn't to suggest that turning the sub up will bring in missing ULF but to point out that with a powerful tool (BEQ), it is possible to extract ULF that others here have found to be very satisfying. You might argue that BEQ is a more invasive tool for tailoring the sound than increasing the sub level, but I disagree. Increasing the sub level throws the upper and mid bass out of balance, which in turn makes the bass sound slower and more bloated. Due to masking effects, it can actually make some wide-band bass sounds weaker. The sub level bump also alters the response around the crossover, though this may not be for the worse if the crossover blend was not optimal to begin with. BEQ, on the other hand, is surgical: it can bring up the level of ULF without throwing the mid and upper bass balance out of whack. Of course, BEQ requires additional equipment to apply versus running subs hot, so in that sense it is a less simple adjustment. What I'm getting at, though, is that even PR has the content; it's just not possible to recover it by merely running the sub level higher. If we're willing to give more points to movies that sound better with the sub level spiked, then why not include movies that BEQ well too? Or we can just drop it and be happy that we have a system that is nice and simple, even if a bit unreliable at times.
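To make the "surgical vs. broadband" distinction concrete, here's a rough numpy sketch. The specific numbers (a +6 dB sub trim, and a BEQ-style correction that approximately inverts an assumed 2nd-order Butterworth high-pass at 25 Hz) are my own illustrative assumptions, not anyone's actual filter or calibration:

```python
import numpy as np

freqs = np.array([10, 15, 20, 25, 30, 40, 60, 80, 120])  # Hz

# A flat sub trim lifts everything the sub reproduces:
# ULF, mid-bass, and the crossover region alike.
sub_trim_db = np.full_like(freqs, 6.0, dtype=float)

# A BEQ-style correction approximately inverts an assumed 2nd-order
# Butterworth high-pass at 25 Hz, capped at +15 dB of boost, so it
# only acts where the filter attenuated the content.
fc, order, cap_db = 25.0, 2, 15.0
hp_loss_db = -10.0 * np.log10((freqs / fc) ** (2 * order) /
                              (1.0 + (freqs / fc) ** (2 * order)))
beq_db = np.minimum(hp_loss_db, cap_db)

for f, a, b in zip(freqs, sub_trim_db, beq_db):
    print(f"{f:4.0f} Hz: sub trim {a:+.1f} dB | BEQ {b:+.1f} dB")
```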
  12. I think that most of the time when the system errs on extension, it tends to give a higher rating than is deserved. "The Equalizer" (TE) seems to be a rare case of a movie that got a worse extension rating than it probably deserves. I have given a lot of thought to what a "better" system would look like, and I have to say that the problem of objective measurement does not admit a single optimal solution. A lot depends on preferences too. For example, one possible way to improve the extension rating is to take into account equal loudness contours. Instead of choosing the -10 dB point, we might instead look for the lowest frequency that is represented in the soundtrack above the threshold of audibility (see the sketch at the end of this post). Ignore for the moment the issues of psychoacoustic masking of ULF content by higher frequency content and the fact that distortion introduced by vibrating room surfaces may make ULF audible that wouldn't otherwise be. Bossobass reported excellent bass for TE when listening (presumably at theatrical reference) with his subs at +6 dB. Without a doubt, he's hearing more ULF (potentially a lot more) than he would have if he ran his subs calibrated equal to his mains. Does this mean that he overrates the bass in TE? I don't think there's a straightforward answer there at all, because we all listen differently! Even in the studio, as we've found out, tracks are rarely monitored with enough low-end capability to hear everything that's present, and we have reason to suspect that much monitoring happens at lower-than-reference playback levels in order to compensate for the reduced playback levels used in common theaters. Now, one could point out that if Bossobass plays "Pacific Rim" (PR) with subs at +6 dB, it's unlikely he will hear nearly as much ULF as he did in "The Equalizer". PR has a steep filter compared to the modest roll-off seen in TE, so arguably TE should have a much better extension rating. On the other hand, PR with BEQ gets rave reviews, so given that PR has good-sounding ULF content underneath a steep filter, perhaps it should get a good extension rating too? As a thought experiment, what happens if you cancel the 20 Hz filter and boost the sub by 30 dB on HTTYD2? It might sound awesome! Clearly (looking at the PvA) there's a nice 10 Hz hit buried in there, just waiting to be EQed and boosted out. Or maybe it'll wreck the score and make it sound bad. The point here is that it's impossible to make an objective measurement system that satisfies all of our subjective qualifications. I for one mostly ignore the star ratings these days and go straight for the PvA and the scene spectrograms. Even then I can't judge until I've viewed the film, because my chosen playback level depends on the overall loudness of the film. I also don't boost my sub level, because it screws up the crossover region and the upper bass (100-200 Hz). Indeed, I have found that upper bass response has a considerable impact on how real-world bass sound effects are heard and felt, even many of those with very low fundamentals. So merely having a flat sub response with low single-digit extension does not necessarily yield a better experience compared to a system that is smooth in the upper bass octave. Incidentally, a gradually sloping house curve that is applied to the mains and subs together will likely perform much better than a sub-only boost, provided your mains chain has the headroom for the extra upper bass. What we all need to avoid doing is reading the "stars" ratings as value judgments.
Aside from the "execution" category, all the stars represent is the outcome of an objective measurement process. Any such process will inevitably be flawed from a subjective standpoint, and the particular flaws will vary depending on the subject! The only change I would consider at this point is to give more weight to the execution category. The trouble is that opinions vary widely depending on the capabilities and listening choices of each voter's system. A movie that's "a 29 Hz drone fest" for one listener is "a top pick" for another. And there's the larger audience with 30 Hz ported subs who always rate high-level titles like TF4 and "Godzilla" as 5 stars. Should we restrict voting to "elite" users whose systems have been qualified by an expert board as "ULF-capable enough"? That would surely exclude my opinion, even though I frequently agree with the "elites" in my execution assessments, and I can only imagine the pissing wars arising over whether flat from 3-20 Hz is more important than flat from 100-200 Hz. (For what it's worth, I'm putting my attention on the latter for the time being.) Alas, in the end, I don't think we'll ever be able to fairly rate the bass in films, or even come close. I'm just happy that there's so much great data posted here that I can use to relate to what I hear in the movies I watch, as well as to do useful things like estimate how much headroom my system needs. Big props to those who are volunteering their time to do and post these analyses!
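Here's the sketch I mentioned above for the "lowest audible frequency" idea. Everything in it is illustrative: the PvA numbers are made up, and the threshold values are only rough ISO 226-style hearing-threshold figures for 20 Hz and up.

```python
import numpy as np

# Approximate free-field hearing threshold (dB SPL), ISO 226-style values.
# These are ballpark figures for illustration only.
thr_freq = np.array([20, 25, 31.5, 40, 50, 63, 80, 100])
thr_spl  = np.array([78, 69, 60,   51, 44, 38, 32, 27])

def lowest_audible_hz(pva_freq, pva_spl):
    """Lowest frequency whose PvA level clears the hearing threshold.

    pva_spl is assumed to already be expressed as in-room SPL at the
    chosen playback level. Returns None if nothing clears the threshold.
    """
    thr = np.interp(pva_freq, thr_freq, thr_spl)   # interpolate threshold
    audible = np.asarray(pva_spl) >= thr
    return pva_freq[np.argmax(audible)] if audible.any() else None

# Hypothetical PvA: strong 30-40 Hz content, steeply filtered below 25 Hz.
pva_freq = np.array([20, 25, 31.5, 40, 50, 63, 80, 100])
pva_spl  = np.array([55, 70, 95,   100, 98, 95, 92, 90])

print("Lowest audible frequency:", lowest_audible_hz(pva_freq, pva_spl), "Hz")
```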
  13. What's with all these recent mixes that hump at 40 Hz? I guess big hits at 40 Hz sound more impressive than big hits at 30 Hz, and it's all about loudness now?
  14. I just watched a trailer for the horror movie "The Babadook". It sounded like it had a pretty heavy low end. Hopefully that bass finds its way into the Blu-ray release. The effects I heard in the trailer were brilliant. By all appearances, this is actually a decent movie to watch, too.
  15. About the TMNT track, I must say it did have a strong positive point. The Atmos version of the track sounded like Atmos lite on my 5.1 system. I heard some very nice pans as well as phantom images from above. Thus far, I've been a strong skeptic of Atmos, in part because Dolby seems dodgy about releasing enough technical data to evaluate it. I reckoned that they were either going to move back to a lossy coding (and combat awareness of this change via heavy duty marketing), or they were going to go with a mix-down instead of all the objects. It's looking like the former. That said, they may embed some kind of metadata that lets them extract objects from the mix losslessly. Or maybe their multichannel upmixer just sounds good enough that they can lie and no one will notice. In any case, I look forward to hearing more material in this format and hope that more directors exercise discretion in their mixes so that there's enough headroom to mix it all down to 7.1 without having to crush all the dynamics out. I noticed no clipping in TMNT, but it sounded compressed to hell. So I guess it's down to the content now.
  16. So I guess TMNT does have some low stuff. I thought I heard some at the very beginning and a few odd moments later on, but other than that, most of the content seemed very 30-40 Hz heavy early in the movie with the mid-bass getting heavier later on. I misjudged it to be steeply filtered at 30 Hz based on what I thought I heard in the heavier bass sweeps. I assumed I would dislike this movie and love the soundtrack, as was the case with Transformers. Unfortunately, I thought the soundtrack was only slightly better than the film, which was atrociously bad in my opinion.
  17. Aren't the new LieMax theaters all calibrated with Audyssey XT32? It does a great job, except that it suffers from many quirks that require an attentive engineer to fix. I wonder if anyone tweaks the target curve and/or delay settings post-calibration to suit the room and speakers? Without that, there is likely little hope for uniformity.
  18. The Loudness War: Reference Level is Already Dead
At several points in the past, I've raised the question of whether "0" (calibrated according to Dolby's theatrical standards) is really the correct reference playback level for Blu-ray soundtracks. I discussed my belief that Blu-ray releases are typically mixed at a level lower than theatrical reference and argued that this may unnecessarily cause considerable damage to the sound, including loss of transient resolution, clipping (sometimes severe), and application of high-pass bass filtering to raise the maximum level of the rest of the sound. My running hypothesis has been that this damage is typically done in the process of "remixing for the home". Some here have countered that playback at "0" is too loud for home releases because small rooms sound louder at the same SPL compared with large rooms. This is an important detail to keep in mind; however, even after adjusting the level trim down by an appropriate amount (or perhaps better, calibrating against an EQ curve that slopes down toward high frequencies), "0" is still too loud for almost all mixes. What's going on here? I think I'm getting closer to some answers. I came across a Gearslutz thread with some interesting discussion between industry people about loudness in movies: https://www.gearslutz.com/board/post-production-forum/768373-cinema-playback-levels-mixing-again.html
My take-aways from it are as follows:
- Most movie theaters do not play back at reference level. The number that do is likely very small. Most theaters likely play most films at or under -5 dB, and some may play at -10 dB or even lower, especially in Europe. Note that the Dolby system used by most theaters labels reference level "7"; the paper I link to below describes the scale well.
- Many theatrical mixes are being done at levels below reference and are made louder to compensate. This means that much of the damage I have been blaming on the "home remix" process may be happening to the original theatrical tracks in many cases.
- Until recently, the fact that Dolby requires monitoring of the print master mixes (for the old film reels) at "7" has helped limit the loudness of mixes, because the engineers must sit in the room at that level. Unfortunately, with DCP becoming widespread, many houses are choosing to save money by skipping the print mastering process. This means they may never receive guidance from Dolby on standards for monitoring, and they may never hear their mix played at the standard reference level.
- Because movie theaters usually run with limited staff, playback levels are often only adjusted downward in response to customer complaints about loudness. One loud movie or even one loud trailer can potentially lead to a complaint and a lowered playback level that adversely affects playback of films mixed at higher levels. This in turn leads to a downward spiral: a loudness war.
- Blu-ray mixes are all over the map; sometimes the Blu-ray gets its own mix, sometimes it gets the theatrical mix, and sometimes the Blu-ray, TV, and video-on-demand mixes are one and the same.
- Many audio professionals are frustrated by having to try to convince directors to accept a mix that plays too quietly in most theaters in order to abide by standards. Many directors refuse to compromise, for a variety of reasons.
Responding to an actual, medically documented case of hearing damage in a movie theater (presenting "Inception", of all things), the Belgian government is moving (or already has moved) to draft actual limits on loudness in movie theaters. There is some speculation that similar legislation may arise in other parts of the EU. Interest appears to be building in adding loudness measurement metadata to DCPs, using a method like the R128 standard developed for TV in the EU, and in updating playback hardware to allow this information to be used in some useful way, such as enforcing an upper bound on loudness.
Anyway, there's a lot of stuff here to chew on. It does look like the loudness war in movies has been raging for quite a while, perhaps since the time digital formats came into being and the Christopher Nolans of the world found ways to abuse all that extra headroom. Perhaps the industry is taking note now because the situation appears to be taking a turn for the worse as of late. I expect it will keep getting worse unless and until the industry can come up with a good standard like R128. I found a paper that discusses the issue in detail and outlines a possible solution along these lines: http://www.grimmaudio.com/site/assets/files/1088/cinema_loudness_aes_rome_2013.pdf
They give an example of a production in which the TV release actually has more dynamic range than the theatrical one. They propose using the same K-weighting scale used by R128 to measure loudness in LUFS. Interestingly, a wide variety of material mixed at theatrical reference level appears to have K-weighted loudness measures of around -27 to -29 LUFS. The paper suggests having most theaters limit average playback level to -27 LUFS, with up to -6 LUFS on short passages. Note that LUFS is relative to the full-scale level of a single channel. I believe the standard band-limited calibration pink noise at -27 LUFS is around 80 dBA. Anyway, I hope others find this interesting.
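If anyone wants to play with the K-weighted numbers themselves, the open-source pyloudnorm package implements the BS.1770 K-weighted integrated loudness measure that R128 and the paper build on. A minimal sketch (the file name is just a placeholder for whatever decoded audio you have on hand):

```python
import soundfile as sf      # pip install soundfile
import pyloudnorm as pyln   # pip install pyloudnorm

# Load a decoded soundtrack excerpt (e.g., a WAV export of a scene).
data, rate = sf.read("soundtrack_excerpt.wav")  # placeholder file name

# BS.1770 K-weighted meter; integrated loudness comes back in LUFS.
meter = pyln.Meter(rate)
loudness = meter.integrated_loudness(data)

print(f"Integrated loudness: {loudness:.1f} LUFS")
# For comparison, the paper reports roughly -27 to -29 LUFS for material
# mixed at theatrical reference level.
```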
  19. I'll wait for measurements to confirm, but I think TMNT has a steep 30 Hz filter. What I heard was loud but definitely not low. Even Godzilla had a better bottom end than this one. I guess that makes this another loudness war victim.
  20. The issue with using the NanoAVR for EQing your sub is that it alters the contents of the channels in the soundtrack, before any processing or bass management has been done. This means that you cannot EQ the sub or any of the speakers directly with the NanoAVR. You must instead EQ them indirectly by altering the content in the soundtrack channels. This limitation is essentially complementary to the issue of using a MiniDSP to apply BEQ. With a MiniDSP, you can't make BEQ changes to the soundtrack channels directly. These are two different tools for two different tasks. That doesn't mean you can't try, but the results are not likely to be as good as when using the right tool.
  21. For EQing your sub response as opposed to EQing film content, you want to keep the MiniDSP because its filters apply to the full sub signal after bass management.
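A toy sketch of the signal chain may make the distinction in the last two posts clearer. Everything here is schematic (the filters, crossover frequency, and channel set are stand-ins I made up, not anyone's actual settings): a NanoAVR-style EQ acts on the soundtrack channels before bass management, while a MiniDSP-style EQ acts on the summed sub signal after it.

```python
import numpy as np
from scipy import signal

fs = 48000
t = np.arange(fs) / fs
# Toy "soundtrack": one main channel and an LFE channel.
channels = {
    "FL":  0.3 * np.sin(2 * np.pi * 500 * t) + 0.2 * np.sin(2 * np.pi * 40 * t),
    "LFE": 0.5 * np.sin(2 * np.pi * 25 * t),
}

def channel_eq(chans, sos):
    """NanoAVR-style: filter the soundtrack channels themselves (pre bass management)."""
    return {name: signal.sosfilt(sos, x) for name, x in chans.items()}

def bass_management(chans, xover_hz=80):
    """Sum LFE (+10 dB) with bass redirected from the other channels."""
    sos_lp = signal.butter(4, xover_hz, btype="low", fs=fs, output="sos")
    redirected = sum(signal.sosfilt(sos_lp, x)
                     for name, x in chans.items() if name != "LFE")
    return redirected + chans["LFE"] * 10 ** (10 / 20)

def sub_eq(sub_signal, sos):
    """MiniDSP-style: filter the single sub feed (post bass management)."""
    return signal.sosfilt(sos, sub_signal)

# A stand-in EQ filter (a simple low-pass here, purely for illustration).
sos_demo = signal.butter(2, 200, btype="low", fs=fs, output="sos")

# BEQ-like use: alter the channels first, then bass-manage.
sub_a = bass_management(channel_eq(channels, sos_demo))
# Sub-EQ use: bass-manage first, then filter only the sub feed.
sub_b = sub_eq(bass_management(channels), sos_demo)

print("RMS, channel-EQ path:", np.sqrt(np.mean(sub_a ** 2)))
print("RMS, sub-EQ path:    ", np.sqrt(np.mean(sub_b ** 2)))
```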
  22. I'm not saying that your system is not capturing the distortion produced by the sub. As long as the distortion is above the measurement noise floor, it should be captured by the measurement system. What I'm saying is that you may not easily see that distortion when viewing the captured data as a spectrogram, without further processing to remove the room influence. What I suspect is happening with the more complex signals is that the IMD energy is being spread out across many different frequencies that have little to do with the harmonics of the frequencies involved. If the signal contains more low frequency content or is in general more complex, then we can expect a greater number of possible IMD overtones. What's not clear is how much distortion energy is present and how it gets distributed among these possible overtones. What I think may be happening is that with complex and noisy input signals, the distortion is being distributed across a wide range of frequencies where it basically blends in with the noise that's already there. You may not be able to see this on the spectrogram, especially since the color map is logarithmic in level (i.e., in dB). Even if it's actually there, it might be mistaken for variation in the room response instead. If you don't see it in the spectrogram, there's a good chance you don't hear it either, but that's not 100% certain. This is an interesting and complex problem. I wonder if your results could be reproduced in a speaker simulator. At least with a simulator we might be able to control for things like room influence and background noise. A simulator could also be useful for synthesizing distortion for audibility tests. I guess I'd better put that on my long list of things to experiment with. Will you post your theory here when you are ready? Or will you post it elsewhere?
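Along the lines of the simulator idea, here's a rough sketch (Python with scipy; the 3rd-order polynomial nonlinearity and 10-80 Hz noise band are arbitrary stand-ins, not a model of any real driver) that distorts a band-limited noise signal and compares spectrograms of the clean and distorted versions, to get a feel for how hard spread-out IMD is to spot on a dB-scaled plot:

```python
import numpy as np
from scipy import signal

fs = 1000                       # Hz; enough bandwidth for bass-band products
rng = np.random.default_rng(0)
t = np.arange(20 * fs) / fs     # 20 seconds of signal

# Band-limited "LFE-like" noise, 10-80 Hz.
sos = signal.butter(4, [10, 80], btype="bandpass", fs=fs, output="sos")
clean = signal.sosfilt(sos, rng.standard_normal(t.size))
clean /= np.max(np.abs(clean))

# Memoryless nonlinearity as a crude stand-in for driver misbehavior.
distorted = clean + 0.05 * clean**2 + 0.03 * clean**3

def spec_db(x):
    f, tt, Sxx = signal.spectrogram(x, fs=fs, nperseg=2048, noverlap=1024)
    return f, tt, 10 * np.log10(Sxx + 1e-12)

f, tt, S_clean = spec_db(clean)
_, _, S_dist = spec_db(distorted)

# Within the noise band itself, the per-bin changes are small next to the tens
# of dB a spectrogram color map spans: roughly the "blends into the noise" effect.
band = (f >= 10) & (f <= 80)
diff = np.abs(S_dist[band] - S_clean[band])
print(f"in-band median |difference|: {np.median(diff):.2f} dB, "
      f"max: {np.max(diff):.2f} dB")
```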
  23. I looked around here for some examples, but I gave up pretty quickly. It makes perfect sense after thinking about it. No one is likely to mix straight ULF without harmonics into a track. Why bother, when 99.9% of listeners (likely including the mixers themselves on their own systems) won't hear anything? Admittedly, this realization is slightly disappointing to me because it means almost all the ULF in movie LFE is subharmonic or subordinate to fundamentals at higher frequencies, due to loudness differences. Perhaps that's reasonable. I'm sure some real-life sounds have loudness dominated by ULF, but these may be limited to things like sonic booms, meteors, and volcanic eruptions at a great distance. Such sounds literally blow out windows and probably don't belong in our home theaters.
I read this hoping to learn something counter-intuitive. Unfortunately, all I learned is that their study is severely flawed. First of all, they chose musical passages with very little dynamic range and likely high distortion (for loudness and aesthetics). To the extent that the distortion in the source material masks distortion in the speakers, this is probably a best case. The applicability of this study is inherently limited to source material with these qualities. A more fundamental issue is that the study only tests the audibility of pure sine tones. Speakers don't generally produce pure sine tones as distortion products. Pure sine tones are almost always harder to hear than noise at the same SPL. Moreover, the noise that speakers produce is more structured than true noise and may be even more audible. The study defines distortion as the sound level of a sine tone divided by the average playback level of the music, converted to a percentage. This measure of distortion is completely useless outside the context of the musical selections because it's based on average playback level. Their distortion figures at 20 Hz and 40 Hz are totally meaningless because it's unlikely the track has any bass low enough to produce distortion overtones at those frequencies. If it did, the average playback level would be a lot higher, making the percentages lower for the same level of distortion tone. This would make the audibility thresholds look a lot lower! A much better study would be to actually simulate the distortion produced by a speaker by analyzing the source track, synthesizing the distortion, and then mixing it into the track before playback. It would still need to be repeated with a variety of source material to be meaningful.
I have no way of telling, one way or the other, how much distortion is present in your system from the spectrograms you posted. They actually look quite different to me, but I'm assuming the differences I see are mostly due to room response and maybe some limiting in the case of HTTYD. Measuring actual distortion with live source material may be quite difficult. One approach that could work would be to deconvolve the mic capture with the room impulse response, measured at a low enough level that distortion can be neglected, and then deconvolve this result with the source itself. In theory it should work, but I imagine it might be messy to do in practice. As for why distortion does not appear in spectrograms, I can propose a few reasons. It must be emphasized that distortion results from non-linearity of the subwoofer's mechanical components, among other things. Nothing about the signal changes the physics of the situation.
Without some kind of specialized correction, non-linearity will lead to distortion. Why would this not appear in a spectrogram comparison? If there's content at the same frequency as the distortion overtone and it's much higher in level than the distortion, then the distortion won't contribute any significant change. This is what happens with the 30 Hz overtone on the 10 Hz fundamental in the EoT effect. If there's content at the same frequency as the distortion overtone and it's similar in level, then the spectrogram will show the result of combining them, and the result depends on their relative phase. They could interfere constructively or destructively, causing anything from a doubling of the signal level to cancellation of the signal, or anywhere in between. The result may be identical to the source except with distorted phase, which may not appear on the spectrogram. As I stated before, non-linearity causes HD with sine tones but causes anharmonic IMD when multiple tones are present. With two tones of different frequencies playing at the same time, IMD manifests as tones appearing at frequencies that are multiples of one frequency added to or subtracted from the other. I believe IMD is typically distributed over a greater number of overtones than HD. With a complex sound, the IMD overtones may be spread all over the place and very hard to visually discern from noise present in the content itself. That said, just because one can't see it in the spectrogram doesn't mean one can't hear a difference. That's not to say that one will, either. At this point, I think you've made a good case for why moderate distortion levels on ULF are unlikely to matter. To the extent that most LFE content contains strong noise and/or overtones that may mask distortion produced by a sub, you may very well be right, but I remain unconvinced. The difference may only be apparent on some effects (i.e., ambient sound effects rather than big hits). It may be too subtle to be important in the grand scheme of things. I do have to wonder, though, and until adequate listening tests have been carried out, we will probably have to agree to disagree.
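To put numbers on the "multiples of one frequency added to or subtracted from the other" point, here's a small sketch. The 10 Hz and 25 Hz tones and the polynomial nonlinearity are arbitrary choices for illustration; it enumerates the low-order intermodulation frequencies and checks them against an FFT of the distorted two-tone signal:

```python
import numpy as np

f1, f2 = 10.0, 25.0            # two arbitrary bass tones, Hz
fs = 1000
t = np.arange(30 * fs) / fs    # 30 s for fine frequency resolution

# Clean two-tone signal and a memoryless nonlinearity (arbitrary coefficients).
x = 0.5 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)
y = x + 0.1 * x**2 + 0.05 * x**3

# Low-order intermodulation (and harmonic) products: |m*f1 + n*f2|.
order = 3
products = sorted({abs(m * f1 + n * f2)
                   for m in range(-order, order + 1)
                   for n in range(-order, order + 1)
                   if 0 < abs(m) + abs(n) <= order})

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), d=1 / fs)

print("predicted product (Hz) : FFT level (dB, arbitrary reference)")
for p in products:
    level = 20 * np.log10(spectrum[np.argmin(np.abs(freqs - p))] + 1e-12)
    print(f"{p:6.1f} : {level:6.1f}")
```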
  24. I think blind testing could be quite revealing. Too bad it would be very hard to set up correctly. I think you'd have to try to set up each system with the woofers in the same locations, which could be quite difficult with large arrays. Another worthwhile experiment would be to see how much bi-amping or tri-amping can improve the experience. Set up an array for reference-level ULF, then add additional subs that only need to be capable of reference playback for non-ULF. Then try running all the bass through the ULF array versus switching in a crossover between the ULF array and the non-ULF subs. It's important to make sure the ULF array has plenty of headroom so that the headroom advantage provided by the bi-amped configuration doesn't come into play. The bi-amped configuration should reduce IMD considerably for full-bandwidth effects. How audible is this distortion reduction? Going by my subjective experience, the addition of dedicated mid-bass woofers made a major improvement to my sound. What I'm not sure about is how much of that improvement had to do with the slight increase in headroom for wide-band effects and the ability to optimize the room locations for reproduction of each frequency range, as opposed to the reduction in IMD. Going on my subjective impression, I think the reduced IMD mattered a lot, and I noticed a lot more improvement in the lower octaves than I was expecting. Noise-heavy sounds like rocket launches, with heavy content in the teens and 20s, sounded both a lot more powerful and more natural at the same time.
  25. I saw this post before I finished my reply to your previous post and had to think for a while. Note my comments there that it's very possible that the noise in the sound effect is making it harder to identify the numerous noise-like (but still structured) IMD overtones occupying the same space in the spectrogram. I argued that such distortion may be audible over the noise present in the soundtrack. Whether this audibility significantly impacts the listening experience is another question. It may not matter much in that particular scene and scenes like it. On the other hand, when sound effects call for more subtlety, lower distortion might make a big difference. To improve on the example I gave a couple posts above about a 10 Hz tone that gradually increases in level, let's say that it's not a pure tone but a bit of band-passed noise in the 7-12 Hz range. With distortion harmonics present, the listener may notice the components above 20 Hz before hearing the ULF noise itself; without them, the effect is more dramatic. I hear similar effects frequently in well-done soundtracks. I call it "sneaky" bass because it's often very startling. I hear this kind of bass in real life when something very, very powerful approaches in the distance. For example, the larger helicopters that fly over my house, with huge slow-moving rotors having a fundamental well below 10 Hz, can induce quite a bit of excitement when I perceive that intense ULF while the helicopter is still too far away for me to hear the rest of the sound.
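For anyone who wants to hear the kind of effect I mean, here's a rough way to synthesize it. All the parameters are my own assumptions (7-12 Hz band, a 15-second linear swell, a crude polynomial distortion term for the comparison version): band-passed ULF noise that fades in slowly, written out as clean and distorted WAV files.

```python
import numpy as np
from scipy import signal
from scipy.io import wavfile

fs_lo, fs_out, dur = 1000, 48000, 15.0   # design at a low rate, upsample for playback
t = np.arange(int(fs_lo * dur)) / fs_lo
rng = np.random.default_rng(1)

# 7-12 Hz band-passed noise with a slow linear fade-in: the "sneaky" ULF swell.
sos = signal.butter(4, [7, 12], btype="bandpass", fs=fs_lo, output="sos")
ulf = signal.sosfilt(sos, rng.standard_normal(t.size))
ulf = ulf / np.max(np.abs(ulf)) * (t / dur)

# Crude distortion term for comparison; its products land above 20 Hz.
distorted = ulf + 0.05 * ulf**2 + 0.03 * ulf**3

for name, x in [("sneaky_bass_clean.wav", ulf),
                ("sneaky_bass_distorted.wav", distorted)]:
    y = signal.resample_poly(x, fs_out, fs_lo)       # upsample 1 kHz -> 48 kHz
    wavfile.write(name, fs_out, (0.8 * y / np.max(np.abs(y))).astype(np.float32))
```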