Everything posted by SME

  1. SME

    The Bass EQ for Movies Thread

    I watched this one today. The movie itself was kind of funky, but I appreciated it for what it was and would watch it again. Spectacular video and audio on this one. The center channel dialog track sounded a bit full in a lot of places, but otherwise, the sound was excellent. (At some later date, I might try to re-EQ the dialog to fix it up.) The Atmos surround effects were some of the best I've experienced, despite my sitting outside the MLP on a 5.1 system. The sub bass effects sounded more distinctive and somehow more imaginative than in a lot of movies. I was often surprised by how laid back the mix was. It seems to fit the feel of the movie. I watched at "-4" and probably could have gone higher if my subs weren't already at their limits. (If only I had the floor space and the money, I could easily make use of 8 x UH-21" in here.) Anyway, thanks for the BEQ on this one. The restored ULF seemed to contribute a lot to the sound effects, and not just heft. In fact, most of the bass in the movie seemed quite tight and transient, except where it was obviously not supposed to be (some scenes with music). I've noticed before that restored ULF can actually make the sound seem tighter and more precise.
  2. This thread is more about full-range content than bass, but it is content-related, so I think it works best here. In the future, I may post this somewhere on AVSForum, but for now I want to keep it to a limited audience. As I've mentioned in the main LF Content thread, the X-curve calibration standard in cinema causes two major problems:

      • Tonal balance that deviates substantially from neutral, from what is typically used (informally) for music production, and from what sounds good on a home system that is optimized for music.
      • Inconsistent calibration between different dub-stages and cinemas.

    As I also noted, many UHDBR/BD/DVD releases these days have high quality home remixes that fix most of these tonal balance problems. This is true for most recent Disney releases (including, e.g., the new "Star Wars" and much recent Pixar and Marvel material). However, much legacy content, as well as lesser quality home remixes, features no re-EQ and retains the inverse-X-curve signature.

    The effect of X-curve calibration is to attenuate both the high frequencies, via the -3 dB/octave slope in power response, and the low frequencies. The latter arises from forcing a flat power-averaged response even though virtually all speakers have a significant drop in directivity at low frequencies, and what absorption is present in typical dub-stage / cinema rooms is also less effective at low frequencies. As a consequence of the altered tonal balance, most mixes are likely adjusted to sound good in the dub-stage during the re-recording mix, with highs and lows boosted to compensate. The resulting mixes, in addition to translating unreliably between theaters, sound less than optimal when played back on a home system.

    The auditory symptoms are mixed. I find it easiest to hear the problems in the dialog. Sometimes only one of the excess highs or the excess lows is audible in the unaltered track because that boost dominates.
In some cinema mixes, for example, the dialog comes across as very bright. In others, it comes across as very boomy. Sometimes the dialog seems relatively balanced in terms of high vs. low, but with the mid-range relatively depressed, intelligibility often still suffers. Dialog is both much easier to understand and much more enjoyable to listen to when it's presented neutrally.

Unfortunately, the required correction varies between tracks, for both of the above reasons. Mixers don't necessarily attempt to defeat the X-curve alterations in any systematic way. Instead, they "turn various knobs" and listen until they are satisfied with the result, so the ideal filters to reverse their changes may vary between mixes. And because the X-curve calibration method isn't even consistent between dub-stages, EQ adjustments that give good sound in one dub-stage may not work well in another. In fact, there's evidence that X-curve calibration doesn't even achieve consistency between the left and right vs. center screen channels vs. surround channels in the same dub-stage. The situation is a big stinking mess for sure.

Nevertheless, even if the adverse effects of the X-curve standard on the mix cannot be perfectly reversed, it's possible with some rudimentary EQ to improve the sound quality of cinema mixes considerably. Now that I've finally achieved a stable, reliable audio reference in my own sound system, I've been giving attention to this problem. In this thread, I hope to document some of the candidate corrections that I've applied to improve the sound quality of various movies. I would encourage anyone with the required capabilities to give these a try and share feedback. Implementing these requires the ability to apply various biquad EQ filters, such as high and low shelves and peaking EQs, ideally to the streams *before* bass management.
Though for my first pass, I'm applying the filters identically to all channels, so it should work fine to apply them after bass management as well. One issue I imagine most people will have is a limited number of free filter slots. The more filters used, the better the quality of correction that's possible, but I will try to limit the filters to what's actually needed.

Edit: I posted a candidate correction for "Wonder Woman". Sweet!
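For anyone implementing these corrections in software rather than on DSP hardware, the shelf filters discussed above can be computed from the standard Audio EQ Cookbook (RBJ) formulas. Here's a minimal Python sketch; the function names and the example gain/frequency are mine, purely for illustration, and not a recommended correction for any particular film:

```python
import cmath
import math

def low_shelf(fs, f0, gain_db, S=1.0):
    """RBJ Audio EQ Cookbook low-shelf biquad, normalized so a0 == 1."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    c = math.cos(w0)
    alpha = math.sin(w0) / 2.0 * math.sqrt((A + 1.0 / A) * (1.0 / S - 1.0) + 2.0)
    sa = 2.0 * math.sqrt(A) * alpha
    b0 = A * ((A + 1) - (A - 1) * c + sa)
    b1 = 2 * A * ((A - 1) - (A + 1) * c)
    b2 = A * ((A + 1) - (A - 1) * c - sa)
    a0 = (A + 1) + (A - 1) * c + sa
    a1 = -2 * ((A - 1) + (A + 1) * c)
    a2 = (A + 1) + (A - 1) * c - sa
    return [b / a0 for b in (b0, b1, b2)], [1.0, a1 / a0, a2 / a0]

def magnitude_db(b, a, f, fs):
    """Evaluate the biquad's magnitude response |H(e^jw)| in dB at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return 20.0 * math.log10(abs(num / den))
```

A first-pass inverse-X-curve correction would typically pair a low shelf like this with a complementary high shelf; the high-shelf and peaking-EQ coefficients in the same cookbook follow the same pattern.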
  3. SME

    X-curve compensation re-EQ

    Yep, I am absolutely aware of these things. Almost no speaker sounds right with a ruler-flat on-axis response. Of course, that doesn't mean that manufacturers are tweaking their speakers on the basis of actual power response measurements, which are difficult to do correctly. I'd bet that almost every speaker design gets tweaked at least once, based solely on subjective listening tests, before being finalized. My guess is that most speakers start with fairly flat on-axis responses as a baseline and then get tweaked from there.

There was a time when more speakers were designed specifically for flat power. As I understand it, there was a kind of rivalry between people who believed flat on-axis was optimal and those who believed flat power was best. JBL was a big proponent of the former. Allison was a big proponent of the latter. Allison did some remarkable work designing speakers for placement against walls, trying to optimize passive signal shaping to maintain nearly flat power under those conditions. I grew up listening to one of Allison's designs, which I remember fondly.

In the end, flat on-axis essentially won in listener preference experiments. However, flat power may have lost in part because such speakers almost always had up-sloping first-arrival responses, for the reasons you note above. Or maybe it's not about the up-sloping first-arrival response but the *lack of slope in the power response*? Personally, I'm pretty certain first arrival is still very important. Otherwise, one would expect horns to sound rather dark compared to cone-and-dome speakers with the same on-axis FR, which is definitely not the case. In any case, I have no doubt in my mind at this point that power response counts for a lot.
  4. SME

    X-curve compensation re-EQ

    Let's first assume that I have access to some future version of the tools I've created to do this sort of thing. The tools would probably take the place of REW for measurement but could export WAV files that one could import into REW. These tools use measurement data to obtain accurate estimates of first-arrival and in-room power response, which they use to suggest optimal correction curves. From there, the problem would be to generate filters that could be implemented on more modest DSP hardware than I'm using now. Some of the more recent MiniDSP products look pretty capable, offering FIRs along with a greater number of biquads than earlier models. With mathematical optimization for the biquads, it may be possible to implement some pretty good corrections on the more capable MiniDSPs. A complication is the fact that so many different MiniDSP hardware configurations are possible, each with somewhat different constraints. Of course, my tools are not currently available for wider use, and I don't know when they will be or how affordable they'll be.

So what would I do without those tools? That's a tough call. An approach similar to Lyngdorf's for estimating power response may work OK, but there are many details to be mindful of. Randomizing the measurement locations in 3D space is probably critical, and I'd opt for more than 6 measurement locations if possible. The different measurements should probably be level-matched before averaging. Then there is the task of smoothing the power response while keeping first arrival approximately flat, and it's important to prioritize fixing low-Q features over high-Q features of the same magnitude. Contrary to common belief, low-Q features are more audible than high-Q features. To get this right requires smoothing, and unfortunately, REW lacks power smoothing, which I believe is the best kind to use for this. Most of the other stuff can probably be done in REW in some way or another.
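Since power smoothing keeps coming up: what I mean is fractional-octave smoothing that averages *power* (|H|²) across the window rather than averaging dB values. A toy Python version (the interface and the sliding-window choice are my own assumptions; a real implementation would want proper edge handling) shows the idea:

```python
import math

def power_smooth_db(freqs, mags_db, frac=3):
    """Fractional-octave smoothing that averages power (|H|^2), not dB
    values, over a sliding 1/frac-octave window centered on each point."""
    half = 2.0 ** (1.0 / (2.0 * frac))   # half-window frequency ratio
    out = []
    for f in freqs:
        lo, hi = f / half, f * half
        band = [10.0 ** (m / 10.0)       # dB -> linear power
                for fi, m in zip(freqs, mags_db) if lo <= fi <= hi]
        out.append(10.0 * math.log10(sum(band) / len(band)))
    return out
```

Note that power averaging weights peaks more heavily than dips, unlike dB averaging, which is exactly what you want when the goal is to avoid over-cutting narrow nulls.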
What to do without a good power estimate? At that point, falling back on existing advice is best. Start with speakers that have an excellent native power (and on-axis) response. Place them far from walls or install them flush-mounted, if possible. If not, some absorption on the part(s) of the wall(s) closest to them may or may not help. EQ can be experimented with, but it's hard to know where and how much to apply.

Modal resonances may be a relatively easy problem to treat with EQ. The peaks tend to stand out in the in-room measurements, and they can be particularly annoying to listen to. Be sure to confirm they are actually modal resonances by looking at several measurements throughout the room, and be willing to experiment to get the right EQ filter. In my "optimized" system, the modal resonances are only partially suppressed in most of the in-room measurements.

Unfortunately, none of this helps with the lower-Q stuff, including broad bass shape, that's critical to getting the best sound. One can try to optimize broad shape by ear, which is what I did up until I applied my latest tools, which finally gave me a better result than I could get by ear. Sorry that I can't give a better answer than this right now.
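The level-match-then-average recipe described above can be sketched in a few lines. This assumes each measurement is a list of dB magnitudes on a shared frequency grid; the 500 Hz to 2 kHz matching band is just my illustrative choice:

```python
import math

def level_match(freqs, mags_db, lo=500.0, hi=2000.0):
    """Shift each measurement so its mean level over a reference band is
    0 dB, so mic-position level differences don't bias the average."""
    matched = []
    for m in mags_db:
        band = [v for f, v in zip(freqs, m) if lo <= f <= hi]
        offset = sum(band) / len(band)
        matched.append([v - offset for v in m])
    return matched

def power_average_db(mags_db):
    """Power (RMS) average of several measurements, returned in dB."""
    n = len(mags_db)
    return [10.0 * math.log10(sum(10.0 ** (m[i] / 10.0) for m in mags_db) / n)
            for i in range(len(mags_db[0]))]
```

With measurements taken at randomized positions in 3D space, `power_average_db(level_match(freqs, measurements))` gives a crude in-room power estimate; the caveats above about measurement count and smoothing still apply.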
  5. SME

    X-curve compensation re-EQ

    I would say the patent gives a lot of details, even if it does not give a lot of specifics. It's way more informative than the marketing literature, at least. The gist of the method(s) described by the patent is as follows:

Fundamentally, EQ is optimized so that a measurement, or an averaged cluster of measurements, at/around the listening position is made flat. However, the measurement/average is potentially smoothed and filtered before the correction is computed, and the correction is subject to constraints. No additional information is given about the smoothing. That's too bad, because the particulars of the smoothing potentially have a big impact on the quality of the result. They do describe performing measurements at 1/12th-octave resolution, which is pretty "meh" as far as these things go. That's the good news. The bad news is that the measurements rely on pure-tone sine waves, so the results will have a lot of uncertainty when used to estimate the shape of the whole spectrum.

Before the correction, the response is pre-conditioned using filters. The filters are derived from idealized models of normal and expected in-room frequency response and/or power response. They essentially take the place of a non-flat EQ target curve by reshaping the listening position response (or average) before calculation of the EQ parameters. The models discussed in the patent include: a high frequency directivity roll-off (expected more in power response than at the listening position), low frequency room gain, and a low frequency high-pass / roll-off. The parameters for the pre-conditioning filters may be calculated from the measurement itself (HF and LF roll-off models) or may be based on foreknowledge of certain characteristics (bass room gain model). The end result is very similar to developing and applying a target curve that is customized to the room and speakers, and possibly to listener preference.
The constraints are derived by taking additional measurements around the room and averaging them. This average is intended to be a power response estimate. (Based on my experience, the quality of this estimate is probably very crude.) The power average is also pre-conditioned with filters, just like the listening position measurement (or average) was. The pre-conditioned power average is then inverted, and the inverse is used to develop upper and lower bounds on the EQ. The idea here is to prevent the EQ from doing anything too extreme (in either direction) to the power response.

===

There are many likely benefits to their approach. The pre-conditioning filters effectively customize the target curve for the room and speaker. I'm a big critic of EQing to a one-size-fits-all target curve, so this is a big plus to me. Their models are probably over-simplified, but I would guess that they make a big improvement over one-size-fits-all. My guess is that they stumbled upon this approach when trying to figure out how to re-EQ speakers close to walls. The use of a power response estimate to develop EQ constraints is also a big plus. It is a solution to the "hidden resonances" problem I described above.

However, I don't believe theirs is a very good solution, for a few reasons. The worst part is that their measurements rely on pure sine-wave test tones. In anechoic measurements, pure tones tend to be OK because there are no acoustic effects (except those involving the speaker itself). The impulse responses aren't very long, except for high-Q low frequency resonances, which are fairly rare in competently designed speakers. In-room, it's a totally different story. Acoustic effects will effectively contribute a lot of uncertainty to the measured values, because a measurement at a single narrow frequency is a poor statistical guess about what the FR is doing in a fuzzy region around that frequency, which is the kind of information that's needed to do the correction.
Their tech note indicated that pink noise was rejected as a test signal because it didn't "reach deep enough into the impulse response". I'm pretty sure that's wrong. The pure tones certainly engage the late parts of the impulse response, but they don't reveal them to the measurement system any better than pink noise does. The most likely reason they chose pure tones over pink noise was that the signal-to-noise ratio was better. Unfortunately for them, the "insight-to-signal" ratio using pure tones is not so good. Moving to REW-style log sine sweeps and applying appropriate smoothing would probably lead to a big improvement in the performance of their system.

I would argue that they are also worrying too much about the listening position measurements. Practically everybody assumes that in-room frequency response will tell them something about what they'll hear in that position, but the reality is that things are a lot more complicated than that. The information is there, but the primitive analytical techniques in widespread use don't do an especially good job of recovering it. Power response is a major key to the puzzle, but it's not obvious how to determine speaker power response using in-room measurements. As I said, the approach taken in this example is quite crude, and would still be crude even if the measurement methods were improved.

To follow up on my comments about room vs. speaker correction: the best "room correction", by a long shot, is the space between your ears. Listeners are extraordinarily well-adapted to listening in environments with early reflections, and experimental evidence suggests that listeners *hear better* in the presence of early reflections. As such, the goal is not to correct the room but to correct the speaker for the room and let the brain do its thing. The big problem is that no one has a particularly good perceptual model to apply to in-room IR/FR measurements to assess what listeners will actually hear.
Almost every product relies on EQing the in-room response to some kind of simplified target, and each such product adds its own twist to try to set itself apart, and to make it not actually suck. The use of a power response estimate to constrain the EQ is a very good idea, but the methods described in the patent for doing so are rather obvious and primitive, IMO. Still, I don't doubt that it sounds pretty good, especially compared to other room EQ systems. It's possible that they have improved their analytical methods some beyond the patent. However, the tech note suggests that they are still using the primitive pure-tone measurements, and I believe that holds it back considerably.
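To make the constraint idea concrete, here is a toy sketch of my loose reading of the patent's scheme: invert the (pre-conditioned) listening-position response, but clamp the correction so it stays within some margin of the inverse of the power average. Everything here, including the margin value and the array interface, is my own guess at an illustration, not the patent's actual math:

```python
def constrained_eq_db(lp_db, power_avg_db, margin_db=3.0):
    """Flatten the listening-position (LP) response, but keep the
    correction within margin_db of the inverted power average, so the
    EQ can't do anything too extreme to the power response."""
    eq = []
    for lp, pw in zip(lp_db, power_avg_db):
        raw = -lp                                   # naive LP inversion
        lo, hi = -pw - margin_db, -pw + margin_db   # bounds from inverse power
        eq.append(min(max(raw, lo), hi))
    return eq
```

For example, a +6 dB peak seen only at the listening position (flat in the power average) would get at most a -3 dB cut here, on the reasoning that the rest of it is positional rather than a real speaker/room power problem.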
  6. SME

    X-curve compensation re-EQ

    Thanks for the reference. Since I had not heard of RoomPerfect, I decided to visit their web site to try to learn more about their product and the marketing language. The main web page was sparse, but I found a bit more info in this article: http://lyngdorf.com/news-what-is-room-correction/

To their credit, they seem to dedicate effort to improving in-room speaker power response. However, as best as I can tell from their description, their methods are nowhere near sufficient to obtain an unbiased measurement of in-room power. There is another, potentially larger problem: here they seem to be implying that one of their goals is to preserve the (presumably desirable) unique sound signature of the customer's speakers. This ties in with the marketing of the product as a "room correction" product rather than a "speaker correction" or "EQ optimization" product. So how exactly do they achieve this goal? How do they distinguish between the speaker's "specific and desirable product performance" and the room's degrading influence, and only correct for the latter? They don't explain how, and I very much doubt they have any way of making this distinction in the first place.

I believe the industry of "room correction" products has a kind of dirty secret: to the extent these products work at all, it's mostly about speaker correction, not room correction. I can offer two possible reasons why the technology is marketed this way. First is simply ignorance. Room effects cause the vast majority of variation within in-room frequency response measurements, leading naive engineers to erroneously conclude that the room is overwhelmingly to blame for poor sound quality. Second, "room correction" probably sells much better than "speaker correction". Most audiophiles don't want to be told that their speakers (possibly costing 5 or 6 figures) are flawed. It's much easier to point to the huge variations in in-room response measurements (+/-20 dB !!!) and sell people on the idea that it's *their room that is holding back their speakers*.

I realize I have emphasized room-dependent problems in my discussion above, but my focus is more general. IMO, the best description for my approach is "in-room speaker EQ optimization". It is intended to take the place of two activities which are typically performed separately: voicing and crossover design (usually performed using anechoic measurements) and so-called "room correction". I personally couldn't care less about preserving the "unique sound signature" of particular speakers or other "audiophile" products. I just want the best sound possible from my system.
  7. SME

    X-curve compensation re-EQ

    "I do not follow... How can one correct something that is inherently designed into a speaker (power response) if it does not depend on speaker location? Or are you talking about correcting power response depending on the speaker's location in the room? Is this why incredible amounts of headroom are needed?"

Allow me to clarify my statement above: the speaker's *total acoustic power output* response does not depend on the location of the listener. It does, however, depend on the location of the speaker. With that said, there's no reason not to correct power response issues that occur in the native (anechoic) response of the speaker. In fact, when analyzing in-room measurements, it's not really practical to distinguish issues that depend on speaker location from those that don't. Very few speakers have ideal native (anechoic) power response either. Even if the mid/woofer drivers measure very cleanly on an I.B. and the cabinet doesn't have any panel resonances, the cabinet shape still contributes variations.

In practice, placement and treatment options are almost always limited and serve as only partial solutions. Often these options also involve compromises. For example, most speakers sound their best in a room when aimed a certain direction, but this makes a flush-mounted installation difficult, especially with overheads and surrounds. Also, any absorption removes valuable reflected sound energy as a side effect. (This assumes small rooms, where decay time reduction is rarely necessary except at the modal resonances.) Even in the most ideal circumstances, there is likely to be benefit from DSP if it's done well.

That leads to an interesting question: to what extent is it possible to work around acoustical problems using DSP? The answer obviously depends on the capabilities of the DSP and the quality of the algorithms used to compute the filters. So what if one uses the best possible DSP capabilities and algorithms?
Well, no one can really answer that question, because the best possible algorithms probably haven't been invented yet, and the best available algorithms may not be very good. Conventional wisdom says that DSP cannot fix most acoustical problems and can only optimize sound at one location, or perhaps compromise for a handful of locations. This reasoning is entirely valid from the view that the goal of DSP correction is to fix "problems" in the in-room response measurement. The name of the company that produces the Dirac Live software refers directly to the goal of most room EQ systems: to achieve an in-room impulse response measurement that looks more like a Dirac delta, which corresponds to a perfectly flat frequency response.

But what if all this conventional wisdom is wrong? Empirical evidence suggests that anechoic chambers make terrible listening rooms, yet they come closest to achieving an ideal Dirac delta. OTOH, 99.9% of the listening we do in real life is in rooms with significant reflections. Maybe the reflections don't harm sound quality at all. Maybe the real "problem" with reflections is that they confound our ability to measure and correct the speaker itself. Or that they help reveal the less-than-ideal power response of the many speakers that otherwise look great when measured only on-axis.

Anyway, my recent experience suggests that DSP (done well) *can mostly work around* the kinds of acoustical problems that affect subjective sound quality, and not just for a single listener location. The filters do usually require extra headroom to implement. Therefore, changing speaker placement or installing absorptive treatments may be beneficial in conjunction with DSP to reduce the amount of boost required. Some boosts will still likely be required to overcome limitations of the speaker itself.

Edit: The text editor widget glitched and wouldn't let me add another paragraph to my response!
One caveat to add to all of the above is that a human listener can only ever *estimate* the power response of the source from the available information. For the most part, these estimates can be remarkably accurate, perhaps excepting particularly pathological rooms or listener locations very close to untreated boundaries. However, the accuracy of the estimates does deteriorate for bass in small rooms, where the sound field becomes highly structured. As such, the conventional wisdom that one can "only optimize sound at one location or perhaps compromise for a handful of locations" does apply to bass to an extent, particularly below roughly 150 Hz depending on room size. However, this is not nearly as bad as one would expect by looking at in-room FR measurements. The subjective variance is still much less than what is observed directly in the measurements.

Several strategies may be used to reduce subjective variation of bass. One example we're all familiar with is to place subs in multiple room locations. I suspect this may be beneficial even if the placements are not optimized, e.g., to cancel modes. Of course, independent filters on multiple subs have the potential to do even better. When I did this before, I was optimizing in-room frequency response, which didn't sound nearly as good as I'd hoped. I expect that optimizing for a superior objective will deliver much better subjective results.
  8. SME

    X-curve compensation re-EQ

    "I can see this, by correcting response at one location, you create problems and ringing at others."

Yes, but this is true *not just at other locations* but also at the location where the single measurement was taken. The ear and brain use information from reflections to hear through most acoustic effects that are particular to a single location.

"How can 'precise' correction of a reflection be 'corrected' for many locations? The peaks and dips will occur at different freqs depending on location from the speaker."

A *reflection* cannot be corrected for multiple locations using DSP, and often one should probably not try to "correct" reflections, because they actually facilitate the hearing process. (Note: some reflections may still be degrading at some frequencies.) To oversimplify just a little: what we wish to correct is the effect of one or more *boundaries* (among the many other things) on the speaker's *total acoustic power output* response, which does not depend on room location.

Yes. Your anecdote was puzzling to me for a while, but not anymore. It makes perfect sense now. Of course, that doesn't mean that EQ (even high-"Q") can't improve sound. Rather, the problem is that we misinterpret smoothed FR graphs, which don't really show us what listeners will hear.
  9. SME

    X-curve compensation re-EQ

    Yes and more, including SBIR effects. The method I use doesn't really care about what causes the features it sees. Of course, it can only correct problems that are linear and not strongly dependent on room location, even though it still tries. Bass problems do tend to be more local than mid and high frequency problems, but they are nowhere near as local as one would expect by looking at standard frequency response measurements. The ears and brain are remarkably good at listening "through the room" for the sound produced by the actual speaker(s)/sub(s), and this appears to be true even for very low frequencies, well down into the sub-bass range.

My process also avoids creating new resonances, which appears to be a major problem with most if not all other room EQ systems, including earlier iterations of my own. Fundamentally, an *optimal* in-room frequency response at any location *will not be smooth or flat* unless the room is anechoic, and an anechoic room sounds bad. Every room EQ system I know of tries to "correct" in-room frequency response and/or impulse response in some manner, which I have found is completely misguided. It ignores the fact that listeners hear through most localized acoustic effects from diffraction, reflections, and so on. My experience suggests that *this is true for the full range of frequencies*, including low frequencies where modal effects may be strong.

A floor bounce alone will contribute substantial ripple to the upper-bass / low-mid-range part of a single in-room frequency response measurement, and attempting to correct this, even using short-time and/or frequency-dependent windows, will just add new audible resonances to the speakers' sound. Performing correction based on multiple spatial measurements can reduce the negative effects, but the particular choice of locations still biases the aggregate in some way or another and leads to the creation of new resonances.
OTOH, the floor boundary (and any others that are nearby) alters the acoustic impedance adjacent to the speaker, affecting its acoustic power output sensitivity/efficiency vs. frequency. This has a global impact on the sound produced, and *precise* correction of these effects is beneficial. Modal resonances have similar acoustic loading effects, and again the idea is to correct their impact on the speaker/sub output, *not* the effect on response at any particular location.
  10. SME

    X-curve compensation re-EQ

    It's time for me to update things here. Following on the "hidden resonances" insight I had, I developed a novel and vastly superior room and speaker EQ algorithm. This took a long time and is still a work in progress. However, I recently reached a substantial milestone, having implemented and fine-tuned the latest iteration on all 5.1 channels. I had to develop and code some custom algorithms to get to the point I'm at now. I've been having a lot of fun, doing real, serious math for the first time in a while.

Previously, I was relying on estimates of first-arrival response using FDWs applied to measurements at different spatial locations, as inspired by Dr. Toole and Harman's work on polar, anechoic-chamber measurements of speakers. My new approach is totally different and all but abandons the use of FDW entirely. It's not that FDW was a poor choice of approach. It actually worked very well for me compared to other room EQ systems I've heard, and it offered a theoretically compelling solution to the problem of choosing an optimal target curve to fit frequency response to. However, my new method sounds so much better. My recent experiences have totally changed the way I understand audio in terms of frequency response, whether in-room or "on the wire". My new approach is based on a completely new theory of audio perception, one which I developed to try to reconcile my accumulated knowledge and experience with sound. There's a lot of stuff that seemed weird before that now makes a lot more sense.

In any case, I spent some time today reviewing a number of movies to find out how they sound now. I watched several scenes from "Wonder Woman" and "Star Trek", both movies that I attempted to "re-master" and posted tentative re-EQ for. Needless to say, I didn't even bother trying to apply any re-EQ, and both films sounded *excellent*. That's not to say that they are perfect.
In fact, I can definitely still hear the increased emphasis on bass and/or treble in most of these mixes compared to most music. However, in my recent viewings, these imbalances were *far less objectionable*. What was happening before is that the broadband accentuation of bass and/or treble on the tracks was accentuating nasty resonances in my playback system in those ranges. It was not the overall level of low frequency sound in the dialog but rather the finer-scale resonances in the low frequencies that were causing the upward masking / mud. Likewise, much of the ear discomfort and downward masking apparently caused by the excess of high frequencies was actually caused by finer-scale resonances there as well. Because my new EQ approach minimizes those degrading resonances, there is nothing for the broadband bass and treble boosts to accentuate. This mostly eliminates the masking problems I was having, and I can hear the mid-range quite clearly throughout.

As such, I have much less personal motivation to re-master cinema tracks in the first place. In a way, that's unfortunate, because I still think that good re-EQ would help the tracks sound better on a wider variety of systems. At the same time, I can see the extra bass and/or treble being preferred by those whose systems can render it cleanly enough not to kill the mid-range.

There is also the philosophical question of director's intent. Chances are very high that my presentation of, e.g., "Wonder Woman" sounded better in my home theater than it did on the dub-stage, but the director has never heard my system. Would she approve? I mean, her intent for a highly bass-focused and physical sound presentation is discussed in the articles linked above. IIRC, her mixer also talks about getting the most bass out, given the constraints of a 120 dB SPL (informal?) budget. My system as configured probably blew way past that.
I'd have to either measure (using a better mic than I own) or analyze the soundtrack data to determine where I peaked, but I don't doubt I pushed near 130 dB SPL. If the director had 130 dB SPL instead of 120 dB SPL to work with, would she have used it? Would the presentation be closer to my own? I'd certainly like to think so. Anyway, I will be viewing a lot of movies over the next few weeks / months and may decide to try to re-master stuff at that point, but it's definitely not a priority for me anymore. It probably makes more sense for me to focus on developing my speaker/room EQ tech further because that made all the difference in the world.
  11. SME

    The Bass EQ for Movies Thread

    Thanks for your comments. I realize I will need to do the UHD upgrade sooner rather than later. It's a top priority when I have the income for it again. At the same time, Netflix by mail doesn't offer the discs, which is a drag given that buying them outright tends to cost a lot more too.

This was an unusually quiet track, so I'm surprised to learn that the Atmos track had an even higher crest factor. Wow! Even at "0" (or -2 vs. 85 dBC reference after dialnorm), the level seemed quite moderate on the DTS track. I assume I didn't exceed 128 dB WCS. Even if I did, I don't have that specific limitation in my signal chain. As such, I don't apply the static gains in the corrections and have no need to compensate using my MV. My headroom is effectively unlimited until the output stage to the amps (i.e., before the DAC), for which I have ~3 dB more than the signal that clips the amps with no load. On heavy infra content, the amps clip a fair bit lower than that, so in most cases, my headroom is closer to ~6 dB above the clipping point of the amps. This means that I can easily exceed 128 dB WCS with content above 20 Hz, but below there I run out of headroom a lot earlier.

I've seen a fair few instances of heavy infra content clipping the amps, depending on my level choice and other parameters (HTTYD crash, TIH boxing / body slam?, a couple points in SW:TFA+BEQ), but this is the first time I've seen clipping on the digital output stage. That suggests to me that there were insane amounts of infrasonic content in those scenes, which doesn't exactly line up with what I see in the PvA. At the same time, you seem to indicate that the PvAs appeared very similar between the two tracks, apart from different overall levels. Is it possible that, even though the aggregate PvAs looked similar, the distribution of low frequency energy between the different channels was different between the two tracks?
So maybe in the DTS track, the LFE was filtered a lot less than in the Atmos track, but the LCRS were filtered a lot more? If so, I can see how this could lead to a track that's much hotter in the infra after applying this BEQ. I guess I'll have to do my own analysis when I have the capability.
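To make the headroom arithmetic above concrete, here's a minimal sketch of the dB-vs-voltage relationship; the voltage figures are hypothetical stand-ins, not measurements of my actual chain:

```python
import math

def db_ratio(v_out_max: float, v_clip: float) -> float:
    """Headroom in dB between the output stage's maximum voltage and the
    voltage that just clips the amplifier downstream."""
    return 20 * math.log10(v_out_max / v_clip)

# Hypothetical figures: output stage swings 20 V peak, amp clips at 14 V input-referred.
print(round(db_ratio(20.0, 14.0), 1))  # ~3.1 dB of extra headroom
```

The same function gives the ~6 dB figure when the amp's effective clipping voltage sags under heavy infra loads.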
  12. Sorry if I wasn't clear, but it's specifically *not* the bass extension that's the issue. The walls and ceiling provide plenty of boundary gain below a certain point. All my speakers are close to walls and have plenty of headroom below 150 Hz or so. Where the surrounds struggle is in the 200-900 Hz range. The Volt 10LX at 95 dB is a very typical example of what's widely available: it's designed to offer bass extension that I don't need at the cost of sensitivity that I do need. Going from 92 to 95 dB sensitivity is not a significant upgrade. I'm aiming for the high 90s (or even 100 dB) in a 10" or 12" size, and I'm perfectly willing to accept significant LF roll-off and excursion limitations to get there. I'm curious what Erich comes up with, but the trend I see with most coaxials is that the larger ones just play lower and don't gain much if any sensitivity. Meanwhile, the vast majority of consumer speakers are just plain pathetic.

As I develop my DSP optimization methods and learn as I go along, my opinions continue to evolve as to which speaker requirements are most important. In my view, key among these is high sensitivity/efficiency, especially in the low mid-range above roughly 150-200 Hz. Optimized DSP can fix a lot of speaker and room problems (even if most *current* products on the market don't do it well), but this solution requires EQ boost. Almost every speaker in every system has significant problems in the upper bass / low mid-range due to baffle effects and boundary interactions that require EQ boost to improve on, and anyone who likes a bass-heavy sound needs more output in that range to properly transition to the subs, or else quality suffers considerably.
  13. I may have been overly paranoid, and I don't think I had the visual output indication (Motu 16A) that I do now. However, movies rarely have continuous, droning, high SPL sounds that are sustained for *a few minutes*, especially going to the surrounds. (The sub range is a different story.) Also, there's a big difference between playing very hot content for a few seconds vs. a few minutes. If you look at the average level for surrounds (not including sub bass) over one-minute intervals across an entire soundtrack, I doubt you'll ever see anything higher than 90 dB per speaker, and I doubt you'll even see higher than 80 dB per speaker except in climactic scenes where the score is playing very loud.

With that said, yeah, I do want/need better surrounds soon, but more for instantaneous output capability than for long-term output capability. My current speakers are rated at 92 dB/2.83V/2pi/1m, but I'm fairly sure I'm clipping them from time to time with a ~300W/channel amp. (I can now watch the peak / RMS level of the signals going to them.) It's only on instantaneous peaks, and I don't hear any obvious distortion. Part of the problem is that they experience substantial boundary interference in the low mids, being installed near a wall-ceiling corner. If I don't correct for this, they sound very thin and impart their thinness to the rest of the sound-stage when playing multichannel content, including movie scores. With my latest optimization methods, I am able to correct this problem very precisely, but the correction does involve a lot of EQ boost. The end result is well worth it, despite the potential for occasional clipping. For 99.9% of the time, the system sounds way better than it would without it. Still, I'd like to have a proper amount of headroom. FWIW, I'm pretty sure a lot of other systems have similar problems, and they most certainly apply to overhead speakers as well as surrounds.
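As a rough sanity check on the clipping suspicion: sensitivity plus the amp's voltage gain over 2.83 V puts a ceiling on per-speaker peak SPL at 1 m. A minimal sketch, assuming an 8-ohm nominal load (the impedance is my assumption, and this ignores power compression and any EQ boost):

```python
import math

def peak_spl(sensitivity_db: float, amp_watts: float, nominal_ohms: float) -> float:
    """Rough on-axis peak SPL at 1 m: sensitivity (dB/2.83V) plus the dB
    increase from driving the speaker at the amp's clipping voltage.
    Ignores power compression, impedance swings, and room gain."""
    v_max = math.sqrt(amp_watts * nominal_ohms)
    return sensitivity_db + 20 * math.log10(v_max / 2.83)

# Figures from the post: 92 dB/2.83V speaker on a ~300 W/ch amp (8 ohm assumed).
print(round(peak_spl(92.0, 300.0, 8.0), 1))  # ~116.8 dB, before EQ boost eats into it
```

A few dB of low-mid EQ boost comes straight off that ceiling, which is why instantaneous clipping on peaks is plausible even at moderate average levels.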
It has definitely affected my perspective on how to design multichannel systems. Surrounds and overheads are almost always placed in or near walls and/or ceilings. Where flush-mounting against a large, rigid surface is possible, there is less likelihood of a problem. However, for various reasons, this is often not possible. For example:

- It may not be possible or desirable to cut holes in the wall or ceiling.
- The room design may not allow a large rigid surface at the chosen placement location.
- It may not be possible to aim a speaker to achieve audience coverage goals when flush-mounted.

For these and probably other reasons, surrounds and overheads are installed on or near the wall rather than in it, perhaps in the vast majority of cases. With such placement, the sound of the speakers will be degraded without correction. The needed correction usually involves a lot of EQ boost to counteract suck-outs in the low mids. At the same time, the nearby boundaries will usually interfere constructively in the bass range. As such, when EQ/room correction DSP is intended to be used to achieve good performance, a good surround/ceiling speaker should have as high a sensitivity as possible. Excursion is not as important because most of the boost will be applied where excursion is low to begin with.

Unfortunately, this limits the options considerably. Typical consumer surround speakers use small, medium-sensitivity drivers and a ported enclosure to get bass extension. These are seriously deficient in the crucial low-mid area. OTOH, small, high-sensitivity drivers tend to lack too much in excursion and power handling. The implication is that good surrounds need bigger drivers, albeit light-weight pro-style drivers that have high sensitivity and modest excursion capability at the expense of bass extension. I had been planning on building new surrounds with 2 x 6.5" AE drivers and an SEOS horn, but I have other ideas now.
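For anyone wondering why near-boundary placement lands the damage squarely in the low mids: the reflection from a wall at distance d arrives 180 degrees out of phase at f = c / (4d). A quick sketch (the distances are hypothetical illustrations, not measurements of my room):

```python
# Quarter-wavelength boundary cancellation: the first suck-out from a single
# nearby boundary lands at f = c / (4 * d) for a speaker at distance d.
SPEED_OF_SOUND = 343.0  # m/s at roughly 20 C

def notch_frequency(distance_m: float) -> float:
    return SPEED_OF_SOUND / (4.0 * distance_m)

for d in (0.2, 0.4, 0.8):
    print(f"{d} m from boundary -> first notch near {notch_frequency(d):.0f} Hz")
```

Typical on-wall mounting distances put that first notch right in the 200-900 Hz range where the surrounds need the output, which is exactly where the EQ boost then has to go.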
In the long term, I want to build fully digital-driven arrays, but I have a lot of learning to do before I can do those. In the short term, I'm leaning toward a pro-style coaxial. I've seen some glowing recommendations for a few particular co-axials, for example from Radian. Unfortunately, most of the coaxials I've looked at, even the pro-style ones, appear to be designed to have significant bass extension I don't need at the cost of sensitivity. Most of the Radian coaxials just aren't as sensitive as I'd like, even at 12". IIRC, I saw one I liked from B&C that had a fairly shallow mounting depth and a woofer with a super strong Nd motor, providing like ~98-99 dB/1W in a 10". That's what I'm talking about! I think they also published polar response measurement data, which is almost unheard of in the industry. Once I have some expendable income again, I'll probably move on something.
  14. SME

    Ricci's Skhorn Subwoofer & Files

    In practice, how much does CMS actually vary in different manufacturing samples (assuming the same batch of parts)? I know we see lots of variation when we attempt to do measurements, but that's not necessarily the same thing. The value obtained may depend a lot on the details of the measurement, and the value is known to be quite sensitive to temperature and humidity. Both of these kinds of things should be expected to affect both drivers equally.
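One way to put a Cms spread in perspective: free-air resonance goes as 1/sqrt(Cms), so even a sizable compliance difference moves Fs only modestly. A quick sketch with made-up driver values (not any particular Skhorn driver):

```python
import math

def fs_hz(mms_g: float, cms_mm_per_n: float) -> float:
    """Free-air resonance from moving mass (grams) and compliance (mm/N):
    Fs = 1 / (2 * pi * sqrt(Mms * Cms))."""
    mms = mms_g / 1000.0         # kg
    cms = cms_mm_per_n / 1000.0  # m/N
    return 1.0 / (2.0 * math.pi * math.sqrt(mms * cms))

# Hypothetical driver: 200 g moving mass, 0.15 mm/N compliance.
base = fs_hz(200.0, 0.15)
soft = fs_hz(200.0, 0.15 * 1.10)  # 10% softer suspension
print(round(base, 1), round(soft, 1))  # ~29.1 vs ~27.7 Hz
```

So a 10% unit-to-unit Cms difference shifts Fs by only about 5%, which is part of why measured small-signal parameter scatter can look worse than it really is.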
  15. SME

    Ricci's Skhorn Subwoofer & Files

    I don't see how a pair of drivers in series could cause a problem. Three or more drivers, yes, but only two? Anyway, I installed a 3/4" wall between chambers in my D.O. sealed sub because I was paranoid about local acoustic loading effects and wanted the option to possibly run separate signals to each end at some point in the future. That pic looks great! I'm sure your neighbors love you.
  16. SME

    The Bass EQ for Movies Thread

    Do you know how similar the two tracks really are? I tested this out on the DTS track (no UHD here yet), and I'm seeing (post EQ) levels that are hotter than I'd expect, even for BEQ. One of the effects (SPOILER: select text to reveal <<< the destruction of Leia's ship >>>) clips not just the amps but my digital processor, which is pretty extreme. Note that my digital domain headroom is not based on 7.1 WCS but rather is a few dB higher than that required to drive the amp to max voltage. The PvA doesn't look *that* crazy. Are these just examples of extremely ULF-heavy broadband effects? Overall, the effects also seem a bit bottom heavy, and some of the ambiance and tension ULF seems rather overkill. Since I don't really know how the tracks compare and can't measure them right now, I'm thinking of proportionally scaling back the correction until I have enough headroom and/or the bottom sounds a bit more balanced again. Perhaps it's another case, as with "Thor: Ragnarok", in which the two tracks have substantially different bass profiles?

Edit: I dialed back the BEQ to 75% (in terms of dB) and also made some minor broad shape adjustments (> 100 Hz), and the mid-bass slam is back and proper. I presume the latter changes were more important, but this BEQ was just too much for my system. Even at 75%, the aforementioned scene as well as (SPOILER: select text to reveal <<< Phasma falling into the flames >>>) still clip the amps at my chosen playback level. There's still plenty of infra power all over the place. Very nice! The overall level on the track is lower than typical, and I find it to be completely comfortable at MV "0" (equivalent to "-2" after dialnorm). The mixer must have liked less loudness, or else his monitors sounded overly loud to him. Either way, the soundtrack quality is superb, easily better than TFA.

Edit2: I watched this all the way through with the DTS 7.1 track at MV "0" (-2 with dialnorm) with guests and 75% of the correction applied.
There was tremendous infra, but it honestly seemed a bit repetitive with almost every effect being very bottom heavy. The bigger on-screen events were rather disappointing because the effects were the same or weaker than for many other lesser events. Since I still don't know how close the PvA on the DTS track is to the Atmos track, I have no idea if my track was corrected as intended. Maybe the DTS track is ramped more aggressively or rolls off less in the ULF. The most impressive effect was the latter of the two scenes mentioned above that clips my amp, which unfortunately seemed only vaguely connected to anything obviously big on-screen. I don't know what frequency the effect focused on (mid teens?), but it was downright violent. I felt like my body was being tossed around like a jet flying through turbulence or something.
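For anyone curious what "dialed back the BEQ to 75% (in terms of dB)" means in practice: scale each filter's gain by 0.75 and leave frequencies and Qs alone. A toy sketch (the filter values below are invented for illustration, not the actual BEQ for this film):

```python
# Scale a BEQ filter set "in dB": multiply each gain by the factor,
# leaving center frequency and Q untouched.
def scale_beq(filters, factor):
    return [{**f, "gain_db": f["gain_db"] * factor} for f in filters]

# Hypothetical filter list (low shelf + peaking filter).
beq = [
    {"type": "LS", "freq": 18.0, "q": 0.71, "gain_db": 8.0},
    {"type": "PK", "freq": 32.0, "q": 1.20, "gain_db": -3.0},
]
print(scale_beq(beq, 0.75))  # gains become 6.0 and -2.25 dB
```

Because gains are in dB, a 75% scale is not 75% of the linear boost; an 8 dB shelf at full strength becomes a 6 dB shelf, i.e., about 80% of the voltage gain.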
  17. I recall "Ex Machina" had a loud droning score toward the end and the DTS:X mix sent the score almost entirely to my surrounds. I turned things down a bit because I was worried about long term power effects.
  18. SME

    Raw data for just 1 system

    $ du -hcs acoustic-measurements
    8.8G acoustic-measurements

Bummer. No bragging rights for me. That also includes a lot of simulation output.
  19. SME

    Woofer for 40-250Hz?

    Ouch. It'd probably be a good idea to hook it up ASAP and check for coil rub, being that the tolerances are very tight on those. I'm a little confused as to where you plan to use the AE vs. FaitalPro drivers. I thought you just needed one type of driver to cover 40-250 Hz?
  20. I got a conversion done. It's not really pro quality as I didn't do any dithering, but it should compensate for the mic cal down to 4 Hz: https://drive.google.com/file/d/1EylLR1mXkmaCI0mIjwQgPi7MkWVy0D1s/view?usp=sharing Anyway, the recording could benefit from some editing. It was hard to tell in a wave viewer where the good stuff is. It also has a couple of nasty pops, which I believe may be caused by EMI from the lightning flashes. I'm pretty novice with Audacity and didn't have any luck getting rid of them using its click removal plugin. Anyway, there's definitely thunder with very solid bottom in the recording. Oddly, the most impressive bass event doesn't seem to be caused by thunder. There's a lot of weird bass noise starting roughly around 56:30, and then a big ker-chunk right at 57:48. It's nearly -3 dBFS peak in the compensated version and almost entirely bottom end. Did you bump the mic or something?
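For anyone wanting to roll their own cal compensation like the one above, the basic idea is to interpolate the cal curve onto the FFT bins and apply the inverse gain. A rough sketch (no windowing or dithering, so not pro quality either; the cal curve here is a made-up flat -6 dB example):

```python
import numpy as np

def apply_mic_cal(signal, sample_rate, cal_freqs, cal_db):
    """Compensate a recording for a mic's cal curve by applying the inverse
    response in the frequency domain. cal_freqs/cal_db come from the cal
    file; bins outside the cal range are clamped to the end values."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    gain_db = np.interp(freqs, cal_freqs, cal_db)  # mic's deviation at each bin
    spectrum *= 10.0 ** (-gain_db / 20.0)          # invert the deviation
    return np.fft.irfft(spectrum, n=len(signal))

# Toy check: a mic that reads 6 dB low everywhere gets boosted back by 6 dB.
sr = 48000
t = np.arange(sr) / sr
recorded = 0.5 * np.sin(2 * np.pi * 10 * t)  # 10 Hz tone as captured
fixed = apply_mic_cal(recorded, sr, [1.0, 20000.0], [-6.0, -6.0])
print(round(fixed.max() / recorded.max(), 2))  # ~2.0 (i.e., +6 dB)
```

A real cal file would supply many (frequency, dB) points instead of the two used here, and a production tool would also taper the inverse gain outside the mic's trusted range.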
  21. Do you have a narrow band cal file for your mic? If you can send me that and the audio, I can create a compensated version of the recording.
  22. SME

    Horn length extension on Othorns?

    LOL! If it's too loud, you're too old! It's like the equivalent of an SPL car in a listening room. But seriously, it's hard to argue with this point. If he actually ran the bass so hot as to use all the Othorn capability for movies, what's the point of having the Terraforms? Or are you trying to talk him into buying another 12 of them?
  23. A 45% on Rotten Tomatoes isn't, like, "Conan: The Barbarian (remake)" bad. And it looks like it's not terribly long either. Something like 103 minutes of constant wind noise?
  24. SME

    Horn length extension on Othorns?

    I see (now). Some details in the photo are hard to make out, what with all those huge boxes in the way. 😉 I think this is key. Models in Hornresp and whatnot are helpful to understand what happens in an idealized 2pi space, but when you stuff 8 Othorns into a tiny cinder block room whose dimensions are smaller than most of the wavelengths of interest, much of that goes out the window. How much footprint does that well and sump pump take up?

I don't know if there is space for this, but what happens if you arrange them into two groups of four with all units firing into the front wall? Each group is two units wide and two units high, and the width between the two groups is twice the width between each group and each side-wall. I'd also invert the unit on top so the mouths are closer to the ceiling, and maybe put them up on platforms to get them closer to the ceiling for better symmetry. Is there room enough in the left corner for that to work? I'm guessing the room is maybe 12 feet wide? So I guess that means 12" between the side wall and each group and 24" between the two stacks. Maybe that's too close. Or perhaps it works if there is enough distance to the front wall?

Another option may be to cluster all eight (or just six?) subs in the center, still firing everything toward the front wall but forcing all sound to exit along the side-walls. In this configuration, the cabinets could perhaps be angled to create expansions in order to reduce high order resonances and possibly boost output even more. Obviously, both of these options will work much better if the ceiling and right wall gaps are closed. Anyway, just throwing out more ideas here. I know if I had that kind of appetite for bass and a dedicated room, I'd probably just build my system into the structure itself.
  25. SME

    Sundown ZV4 18D2 - sealed enclosure

    A lower crossover can be easier to integrate because the wavelengths are longer, so interference is more likely to be constructive. However, every situation is different, and there are pros and cons to different approaches. I wasn't sure how low your mains could go. Some other guy on here with horns was pretty much set on using 60 Hz, and I think I might have gotten you two confused. If your subs play well together to higher frequencies, it can often be helpful to cross above 80 Hz. I cross at 100 Hz FWIW, and in fact I currently have MBMs behind my sofa that contribute up to about 150 Hz on the FL/FR channels with optimized DSP to make sure everything plays together in phase. Bass localization is not an issue at all, but I'll probably change this configuration at some point in the future.
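The wavelength point is easy to quantify: sources within roughly a quarter wavelength of each other sum mostly in phase. A quick sketch of the numbers at common crossover frequencies:

```python
# Wavelength and quarter-wavelength at typical crossover frequencies.
# Sources spaced well inside the quarter-wave distance tend to sum
# constructively, which is why lower crossovers are more forgiving.
SPEED_OF_SOUND = 343.0  # m/s

for f in (60, 80, 100, 150):
    wavelength = SPEED_OF_SOUND / f
    print(f"{f} Hz: wavelength {wavelength:.2f} m, quarter-wave {wavelength / 4:.2f} m")
```

At 60 Hz the quarter wavelength is about 1.4 m, while at 150 Hz it shrinks to under 0.6 m, which is why integrating an MBM that high requires the kind of careful phase optimization described above.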