Everything posted by SME

  1. Oh, I see. I guess I've always interpreted "power compression" to refer to *any* kind of compression that occurs in the speaker. Maybe that's not really the correct usage of the term. What I meant was output compression in general.
  2. Ideally, dialnorm ensures similar dialog loudness at the same master volume, regardless of the dynamics or crest factor of the particular mix. The way it's supposed to work is that the final soundtrack is measured for loudness (which takes spectral balance factors into account, to an extent) using a standardized method, for example LKFS. Then the dialnorm offset is set based on where the loudness falls relative to a reference value, which I believe is -31 LKFS. So soundtracks measuring -31 LKFS, -27 LKFS, and -24 LKFS should have dialnorm offsets of 0 dB, -4 dB, and -7 dB respectively, and they should all sound about equally loud when played at the same master volume, even though the last example at -24 LKFS is probably a lot less dynamic than the first. Of course, all this assumes consistency between titles in both the loudness measurement method and the setting of the metadata on the soundtracks, which still doesn't happen. In the old days of DVDs, the DD tracks on them very often had a "-4 dB" offset, and I believe this was because that was the default value. (Some titles still came with other values.) For BD, a lot of tracks are DTS-HD, and those encoders probably default to a "0" offset. The Dolby TrueHD tracks are more likely to use a non-zero offset, but I believe this is less consistent than it was for DVD DD tracks. So in practice, inconsistent use of the dialnorm offset parameter achieves the opposite of its intended effect.
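The offset arithmetic described above can be sketched in a few lines. This is a minimal illustration assuming the -31 LKFS reference; it's not how any particular encoder computes the value:

```python
# Minimal sketch of the dialnorm offset arithmetic, assuming a -31 LKFS reference.
DIALNORM_REFERENCE_LKFS = -31.0

def dialnorm_offset_db(measured_lkfs: float) -> float:
    """Attenuation (dB) needed to bring a mix down to the reference loudness."""
    return DIALNORM_REFERENCE_LKFS - measured_lkfs

for lkfs in (-31.0, -27.0, -24.0):
    print(f"{lkfs:.0f} LKFS -> dialnorm offset {dialnorm_offset_db(lkfs):+.0f} dB")
```

Run as-is, this reproduces the 0 dB, -4 dB, and -7 dB offsets from the examples above.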
  3. Air flow is very non-linear versus velocity, and losses increase very quickly at high velocities. A rough rule of thumb is that output compresses by about 1 dB for every 10 m/s of vent velocity (depending somewhat on the flow area and overall shape of the passage), but there is also a saturation point, maybe in the 30-50 m/s range (again depending on details), where output stops increasing altogether. Often, by the time you hear chuffing, you're already at that point.
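A rough numeric sketch of that rule of thumb. The 1 dB per 10 m/s slope and the 40 m/s saturation point are illustrative assumptions taken from the ranges above, not measured values:

```python
import math

def port_velocity_m_s(volume_velocity_m3_s: float, port_area_m2: float) -> float:
    """Average air velocity in the vent for a given volume velocity."""
    return volume_velocity_m3_s / port_area_m2

def estimated_compression_db(velocity_m_s: float, saturation_m_s: float = 40.0) -> float:
    """Rule of thumb: ~1 dB per 10 m/s, clamped at an assumed saturation velocity."""
    return min(velocity_m_s, saturation_m_s) / 10.0

# Hypothetical example: a single 4" (0.1016 m) diameter vent
area = math.pi * (0.1016 / 2) ** 2     # ~0.0081 m^2
v = port_velocity_m_s(0.10, area)      # ~12 m/s at 0.10 m^3/s of air flow
print(f"{v:.1f} m/s -> ~{estimated_compression_db(v):.1f} dB compression")
```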
  4. The problem with too much vent velocity is not just chuffing but also power compression, and substantial compression can set in well before the chuffing becomes audible.
  5. It's like the age-old Information Technology help desk joke (based on a true story): "Have you tried turning it off and on again?" Except for movie soundtracks it's: "Have you tried adjusting the volume control to the loudness you want?" An entire Loudness War has been fought over catering to the whims of the volume-control-challenged masses. Of course it doesn't help people when the soundtracks aren't the slightest bit consistent in their setting of the dialnorm metadata. So it's like the worst of both worlds.
  6. Hi @peniku8. That's not really necessary unless you really want to do it yourself. I expect that DC and ULF noise can be introduced in an analog tape device under the right circumstances but that it's often prevented or minimized by the presence of blocking capacitors somewhere between the tape output and digital input. But wait, capacitors work essentially like a filter, one which may not cut the noise as much as a desired BEQ boosts it. OTOH, running a whole 7.1+ channel mix through a tape machine with only 2 channels might not work well. At the least it'd be tedious (de-mux, do 4 runs, time-align, and re-mux) but any wow and flutter are really going to screw up the time alignment between channel pairs. Perhaps an even better explanation is that it's simply a digital plugin in the mastering chain that's contributing the noise. Maybe it's a tape machine simulation plug-in. And here again, it might include a built-in filter to reduce the DC noise but which cuts the noise less than the desired BEQ boosts it. You're right that we'll probably never know for certain, but it's clear to me now that this noise could have easily been introduced (probably unintentionally) into the "mastering" chain. If that were the case, then the Atmos mix was probably not derived from the DTS mix but is a fork of an earlier version of that mix. Perhaps they are independent home mixes derived from the same original cinema mix?
  7. My "Like" is for this part of your post. Incompetent is definitely the wrong word and wrong concept, really. Cinema mixers are generally very competent, because if they aren't, they don't get new contracts. But competence really refers to the ability to fulfill a particular job function, which isn't to say that they know everything there is to know about the craft. In fact, there is probably a lot about this craft that is as yet unknown and a lot that is "known" that's actually just wrong. The issue I have with some of the recent Marvel releases is dynamics crushed to the point that the result is less dynamic than an old analog TV program, and doing so in a way that breaks a lot of the artistic integrity of the presentation. It's one thing if a particular scene isn't as BIG as it would have been without dynamics reduction, but another entirely when the dialog in a heated argument gets quieter just because more than one person is talking at once. Or using peak limiting in a way that snuffs transient sounds out of existence. For the most part though, I think mixers (and the other sound people) do a decent job considering the ridiculous time pressure they endure and the complexity of these soundtracks. And this is especially true given the spectral balance problems resulting from X-curve calibration. It's just tragic that this quirk affects the creation of the art in the ways it does. If they had neutral systems and the cinemas were neutral too, I think it would have a huge positive impact on the quality of the overall art. It'd definitely bring us closer to the performers, for better or worse. Exactly. There's no way the DTS and Atmos tracks were each created from scratch. One is likely derivative of the other, or at least derived from a similar original. So the fact that one has this noise and the other doesn't is puzzling. I know you can apply de-noising to a complete mix, but I doubt anyone does that unless they are trying to restore and re-master old content or something.
Maybe that's it, and I just need to think more flexibly. I think @maxmercy sees ULF noise (or really "DC noise", under 3-5 Hz) quite a bit in movie soundtracks, more often in older tracks. However, keep in mind that this noise isn't really a problem except when doing BEQ, because the BEQ boosts the noise along with the desired content. At least some of the time, the "noise" may simply be due to tracks with the DC noise not being filtered any more steeply than the tracks with desired ULF content. Curiously though, some soundtracks have this noise all over and others don't have it at all. It's totally hit-or-miss.

As a separate response to this overall discussion, I would caution people not to read too much into the PvA data when trying to understand why different soundtracks sound different. Certainly differences in spectral balance are possible, and seemingly minor spectral balance differences (like 1 dB) can actually have a major impact on the perceived sound. Perception is very relative in terms of what's happening at different frequencies, and you can't really see what is happening, spectrally, with the individual effects by looking at the PvA. Keep in mind too that dynamics processing might be quite different between tracks, which likely explains why the PvAs are not exactly consistent in the fine details. Yes, both tracks (and the cinema track too, if it's different) are likely seeing plenty of dynamics processing, which can affect how the sound is perceived. Also, literal dynamics is only one parameter that affects perceived dynamics. Spectral balance effects (e.g. momentary shifts in broad spectral balance and "saturation" effects) can give very different impressions of dynamics even with the SPL pegged to the same number. And the consequences of these differences may be expressed differently on different systems.
So the situation is way more complicated than can be depicted with a PvA or even spectrograms, though sometimes these tools reveal interesting things. They are useful tools, but they can only explain so much.
  8. My wild guess is that the "noise" was contributed by a mastering plugin, maybe a subharmonic synth which was different or configured differently for the DTS vs. Atmos mix. Let me turn your question around. If the noise was actually deleted from the Atmos track rather than merely filtered, how and why was this done? One possible answer to the "how" is a de-noising plugin, but my understanding is that such plugins are not fire-and-forget: the results need to be actively monitored and settings tweaked to get the best results, which these engineers probably could not do for the lowest frequencies. Maybe they just ran it blindly? In which case, I'd expect more destruction of non-noise content in the de-noised version. But why? Why would they apply a de-noising plugin just for stuff at the very, very bottom of the spectrum? If one is worried about that noise "causing problems" in playback devices, a simple high-pass filter will do just fine to prevent that, and in fact both tracks are HPFed already. I guess one possibility is that de-noising was applied to the entire track, and the removal of extreme LF noise happened as a consequence. Still, even with carefully hand-tuned settings, this de-noising is likely to collaterally damage some of the original content, and I don't see how it would help the soundtrack in any way, except maybe in some of the dialog recordings, which can and do come with unwanted hiss sometimes. Hmm, maybe the de-noising *is* selectively applied to dialog samples with hiss, and maybe the ULF noise is part of *those* samples. (I really doubt that though, given that almost all dialog probably gets high-passed at 60-120 Hz or so.) I know that on the video side, de-noising is pretty standard for home release (and probably cinema too), where it's used to try to scrub out film grain, and in this application, it also tends to degrade the actual content and can contribute new artifacts.
So maybe the Atmos track has been de-noised, in which case it's possible that the DTS version retains more content and has fewer artifacts, making it the "better track" if one is not bothered by the noise. Either way, the fact that such noise appears on one track and not the other poses interesting questions. Assessment of sound quality via purely objective means is probably very difficult if not impossible. That's a big reason why humans are involved in the mix process, and also why it's essential for the humans doing the work to listen with a monitoring system that's as neutral and accurate as possible. It may be possible for a machine learning algorithm, trained using listener preferences, to provide an assessment of sound quality. However, I would not expect this to work consistently well for a number of reasons. It might also be possible to write an algorithm specifically to detect loss of information due to lossy encoding at reduced bit-rate, but this is a very specific case. And furthermore, quantifying how much information was lost does not tell us the impact of that loss on sound quality. The subjective sound quality impact of these losses could be estimated using the same psychoacoustic models used for the lossy encoding itself, but this is still just educated guessing and for what purpose? Most media can be obtained in a lossless format or at least a lossy format with a high enough bit-rate that the subjective impact is going to be very subtle. I did recently see evidence that some, maybe all Dolby AC3 (i.e. "Dolby Digital") encoded tracks are low-pass filtered at 20 kHz, and for a lot of listeners and systems, this could have as much or more impact on the sound quality as the lossy encoding. So I guess LPFing at the top is one thing that can be objectively assessed.
  9. Glad your wife didn't veto your purchase after what she had to "suffer" through, lol. Keep in mind that a subwoofer in a car gets a lot louder than it does in a house, where there's a lot more space to fill. If building a vented sub, the tuning frequency is very important because the sub won't play much content below the tuning frequency. So at 29 Hz, you'll be missing out on most of the content below that. If you want to *just* get to 20 Hz, maybe aim for a tune of around 22 Hz. Also, I think a 4" diameter port is a bit small. You need enough area to avoid chuffing and compression at high output. How much you need depends on what the overall design looks like, but for a 15" woofer like that, I think you'll want at least 2 x 4" pipes. Do you want to tune lower? The trade-off is that you need to make the vent longer or the box larger, keeping the vent area the same. Working from the numbers you posted, a pair of 4" x 20" pipes ought to get you a tune around 22.5 Hz. Actually, it'll be lower for that particular cabinet if the pipe exit is near the wall, because the wall effectively extends it. Alternatively, you can make the cabinet larger, and this will also help boost the output around the tuning frequency. You may want to look at simulations though to see what you're ultimately likely to end up with. If you do decide to use one or more 4" pipes, this product is really hard to beat if it works for your cabinet design. The flares help a lot with improving performance at high output. They are also very easy to install. If you decide to use them, I recommend using the formula included in the manual to calculate the length, because it takes the flares into account properly. And also keep in mind again that if the exit is near the wall, it'll tune lower than expected. (Don't let them exit less than ~3-4" from the back so you don't constrict the flow there too much.)
I think the XLS 1002 amp is a good choice as it includes DSP which you need for a vented design to apply a high pass filter to protect the woofer from frequencies it can't play.
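For anyone wanting to sanity-check tuning numbers like these, here's a sketch of the standard Helmholtz vent-tuning estimate. The 160 L net box volume is a hypothetical stand-in (the actual cabinet volume wasn't stated); the port dimensions match the 2 x 4" x 20" example, and the end correction assumes one flanged and one free end:

```python
import math

def vent_tuning_hz(box_volume_m3: float, port_radius_m: float,
                   port_length_m: float, n_ports: int = 1,
                   c: float = 343.0) -> float:
    """Helmholtz resonance of n identical cylindrical vents on one box."""
    area = n_ports * math.pi * port_radius_m ** 2
    eff_length = port_length_m + 1.463 * port_radius_m   # end correction
    return (c / (2 * math.pi)) * math.sqrt(area / (box_volume_m3 * eff_length))

fb = vent_tuning_hz(box_volume_m3=0.160,   # hypothetical 160 L net volume
                    port_radius_m=0.0508,  # 4" diameter pipe
                    port_length_m=0.508,   # 20" long
                    n_ports=2)
print(f"estimated tune: {fb:.1f} Hz")
```

With these assumed numbers it comes out near the ~22.5 Hz figure mentioned above; note the formula doesn't capture the extra lowering effect of a nearby wall.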
  10. I'll just point out that just because they graph nearly the same doesn't mean they'll sound the same. Even changes that appear negligible in the graphs could be audibly quite significant. People may disagree on which sounds better, not just for purely subjective reasons, but also depending on the characteristics of their system. As a question though. Do the two look nearly identical below the filter cut-off too? The reason I ask is because it's interesting that the DTS track has more ULF noise. Think about it. A filter doesn't *remove* noise like this, right? It just suppresses content at those frequencies, unless the attenuation is extreme. That suggests that the ULF noise got added to the DTS track somehow, at a point "downstream" from whatever the starting point for the Atmos track was. Could it be that the ULF noise is from a subharmonic synth in the mastering chain that was configured differently for the DTS vs. Atmos? I'm sure there are other possibilities, but most seem to suggest that the tracks are more different than the graphs suggest. Any comments on how the dynamics sound? I hope they've dialed back the aggressiveness of the loudness normalization (or whatever it is). That together with the (IMHO) lackluster sound design really detracted from a lot of the recent Marvel movies for me.
  11. Thanks for the suggestions which I will keep in mind for when it's time. The tech is definitely not ready yet, and there are a few major things I need to accomplish before I'm even ready to test in other rooms. Hopefully I'll be a lot further along in a few more months. I already have a design for a test/demo speaker that I have all the parts for except the waveguides, which I may regret not buying earlier if they are covered by tariff now. I did get a bit carried away over the summer.
  12. Thanks for the encouragement @peniku8! I live in the USA and anticipate working locally or traveling as necessary. I'm not willing to work remote, at least until the technology is more mature and I have more experience with a variety of different rooms. And no, a mic is definitely not a substitute for being physically present. I actually need to walk the whole room during my evaluation.
  13. I see it's been about two years since my last update. My hardware configuration hasn't really changed in that time. I still have all the drivers I was going to use to build new MBMs, but I'm now not even sure I need them. Building them is a very low priority, and the amp I bought to power them is likely to get used to power my "demo" speaker system instead. OTOH, the DSP configuration has been modified heavily. In other threads here, I've hinted about my discovery of a novel method for optimizing low frequencies. I've made substantial progress on this and also on optimizing high frequencies, which seemed to benefit from more attention after I'd managed to drastically reduce the muddying effect of low frequency problems. Lately, my attention has returned to low frequencies, this time dealing with sensitivity to physical/environmental changes, such as the precise location of MBMs and absorber panels vs. where they were when I measured. Just this week, I finally implemented the first algorithmically optimized low frequency configuration. I expected some improvement below 40 Hz, but was amazed by how much the increase in precision provided by computer vs. hand optimization improved the sound from the mid-range on down to the very bottom. I watched some of the scenes from "Ready Player One" using the @maxmercy BEQ. It's hard to describe the experience. Despite pushing into the 120s of dB, the bass never trampled the mids and highs, which came through clearly even on the weightiest of sound effects, yet the bass itself contributed intense physicality to some of the sounds. The slam was impressive, not just because it was there but because it was *everywhere* and in a wide variety of different flavors, rather than being a one-note-ish thump as is often the case with PA systems. I also didn't notice any house shaking at all, but I don't know if this will be the case with other movies.
As a kind of ironic conclusion to this thread, I figured out that I didn't need 4 MBMs with independent DSP to get "perfect bass response" at all my seats, yet the title seems to be a reasonable description of what I experience now with my optimized configuration. What I mean is that the bass almost feels like it's coming from inside my own body, and this sensation follows me around the room, even when I'm well outside the "calibrated" listening area. This is similar to experiences I've had with superb quality bass systems outdoors, but I'm experiencing this indoors throughout a room that's not especially large nor heavily outfitted with absorption. The low frequency sound in general seems to be completely untainted by the room, and the acoustics of the recordings (whether natural or synthetic) come through with remarkable clarity. Unlike those outdoor systems, I am able to take full advantage of room gain and hit high SPL down to much lower frequencies. Part of the reason I became so quiet about my recent work on my system is that I am seriously thinking about seeking commercial application for my technology. My confidence in this regard has been growing over the last year or so. I'm now fairly confident (i.e. > 50% chance) that I will go into business, in some form or another, with this technology. I haven't worked out the details yet, but I have some ideas. I'm likely to start small with custom / bespoke installs. These could be for ultra high-end home theater or perhaps for mixing / mastering rooms. These early jobs could fund further research into adapting my methods (or developing new ones, where necessary) for cars as well as potentially larger rooms (cinemas?) and outdoor environments. Admittedly, I'm shying away from doing any kind of consumer product because I don't know if I will be able to make my tech work reliably under those circumstances. I don't know if I can really make it "idiot proof" enough, but I can potentially research that too. 
So with that said, I'll try not to self-advertise too much in these threads. Thus far, I haven't really meant to. I'm just passionate about this subject and am having a very hard time "keeping this great sound to myself". Maybe I should just record my system and post it on YouTube? Seriously though, from a marketing standpoint I've already lost. Just about everything positive I'd like to claim about my own sound has already been claimed repeatedly for other products that, IMO, don't live up to the hype. So perhaps my best approach is just not to *say* anything and let my systems "speak" for me. That probably means starting small and growing very slowly, which isn't necessarily bad.
  14. Noise at low SPL

    From your description, I'd be more inclined to suspect something going on with the electronics. It's almost as if there is noise in the signal chain, but a muting circuit stops the noise until sound starts playing. Then at higher levels, the music is loud enough that the noise is drowned out. What kind of electronics are you using? This is a bit of a stretch, but it's possible that what you're experiencing is caused by numerical precision issues in the DSP, which would explain why it's more pronounced at lower signal levels.
  15. Thanks for the interesting discussion!
  16. I hear you and understand totally. Like, you can't just go to YouTube to see a video of what "20 Hz at 105 dB" sounds like. And part of making the best of the budget is to avoid either over-buying or under-buying for your (as yet unknown) bass desires. Being able to visit SI in person should help a lot with that. Good luck!
  17. Also, the duration of SPL averaging (e.g. "fast" or "slow" on a meter) is important, especially when assessing high crest factor content. I think 100 dBA long-term is very loud. For "lively" critical monitoring (what I'd use for mixing and mastering if I did these things), I end up around 79 dBA (long-term average). So something like 85 dBA is really blasting, and I do develop alterations in subjective hearing following extended exposure to sound at that level and beyond. Every extra 3 dB or so dramatically hastens the onset and lengthens the subsequent recovery time of these alterations. Much beyond 90 dBA for more than 5 minutes, and my hearing is off for most of the rest of the day. Apart from such hearing changes being a warning sign for potential long-term hearing damage, these alterations notably degrade my hearing acuity, causing everything I hear to sound less good, so I avoid them. I also avoid demoing content at such excessive levels, at least until the end of a session, because it will likely adversely impact the listener's own experience of my sound. Why would I want that when I'm trying to show off? Now if long-term averages are reasonable and we're talking about brief short-term 100 dBA peaks on drum hits or something, that is not so insane, especially if the reproduction is clean. Indeed, even though I've heard some sound engineers claim that peaks beyond "X dBA" (for some number X, often in the 90s) are uncomfortable and should be reduced using peak-limiting, I doubt there is really a universal transition point. It depends a lot on the sound. I suspect what these engineers are identifying is the SPL at which the limiter they are using sounds unacceptably harsh to them. (It could also be the threshold at which their monitors start distorting heavily.) That's a totally different thing. Many limiters sound harsh to me at any SPL.
A big gripe I have with a lot of stuff that Disney/Marvel have put out recently, apart from the loudness normalization that harms the presentation of the program, is the use of very harsh sounding limiters that have a strong upper-mid/presence edge. (Example: glass shattering sounds on these titles can sound like total crap.) This probably yields a more "dynamic" sound with less peak output on middling systems, but on a high quality system it just sounds like a system that's being driven into distortion. Yuck! I also see mention of screaming crowds being able to push SPL very high, and I can see how this can be a big problem. I've never witnessed this myself, but I talked to a punk guy who said he's been at shows where the performers basically told the audience to STFU so that the performance could be heard over the noise. I'm not sure this is a viable option for every band, but it ought to be kept in mind.
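To illustrate why the averaging duration matters so much for high-crest-factor content, here's a sketch of exponential "fast" (125 ms) vs. "slow" (1 s) time weighting in the style of sound level meters; the burst signal is purely illustrative:

```python
import math

def time_weighted_db(squared_pressure, sample_rate, tau_s):
    """Exponentially averaged mean-square level in dB re full scale."""
    alpha = 1.0 - math.exp(-1.0 / (sample_rate * tau_s))
    avg, out = 0.0, []
    for p2 in squared_pressure:
        avg += alpha * (p2 - avg)           # one-pole exponential averager
        out.append(10 * math.log10(max(avg, 1e-12)))
    return out

sr = 1000
signal = [1.0] * 100 + [0.0] * 900          # 100 ms full-scale burst, then silence
fast = time_weighted_db(signal, sr, 0.125)  # "fast": 125 ms time constant
slow = time_weighted_db(signal, sr, 1.0)    # "slow": 1 s time constant
# "fast" climbs much closer to the burst's true level than "slow" does
print(f"fast max: {max(fast):.1f} dB, slow max: {max(slow):.1f} dB")
```

Same signal, roughly 8 dB difference between the two meter readings, which is why quoting "100 dBA" without stating the averaging time says very little.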
  18. Yes. I must have missed something. Is there a 2 ohm IPAL? Anyway, the NSW6021 should be easier to drive with common amps.
  19. There's also the new Eminence driver --- probably a better option for those in the USA.
  20. That sure looks weird! (Both curves.) I have no idea how they got like that, but this is my best guess: they applied a steep LPF around 13-14 kHz to everything (on the dub-stage?) but then later added synthetic top-end content (for home release?). And the extra content doesn't start until ~16-17 kHz, leaving that weird notch in place. As for the differences, the blue curve clearly has a very steep filter applied before 20 kHz, but the BD was allowed to extend beyond. It's certainly possible that the filter on the blue curve came from the encoder itself, but that would probably have been applied as a "pre-encoding" step rather than as an artifact of the lossy compression. One reason to brick-wall LPF before encoding is to try to reduce the entropy (i.e. information that cannot be easily predicted by identifying redundancy) in a range that's (presumably) less audibly important, so that the encoder has more bits to use for encoding the rest. It probably only helps marginally though, as there isn't a whole lot of room between 20 and 24 kHz even with "white" / linear frequency weighting. Along similar lines: I was a very early adopter of MP3. A major factor in sound quality with MP3 (and probably most other lossy codecs) is the quality of the encoder itself. The decoder just has to follow a recipe to turn the compressed bits back into sound. The encoder has to decide what to discard in order to free up bits. When I migrated to LAME, arguably the best encoder available, I found a nice guide suggesting options to use to get the best performance (i.e. sound quality vs. bitrate). The guide recommended using the LPF option to brick-wall everything above 19.5 kHz. It seemed perfectly reasonable, and I didn't notice anything "wrong" at the time. But that was long before I had any kind of decent speakers or any of the knowledge I have now, and finally now, on this system, I hear a characteristic resonance common to all those old MP3 files.
Thankfully it's fairly inoffensive most of the time but it's definitely degrading. OTOH, I also have a lot of MP3s I downloaded, streamed (back when "streaming" was basically pirate Internet radio), or obtained from other people which are encoded at lower bit rates than I used (e.g. 128 kbit fixed vs. 192 kbit ABR). Those sound much worse, but I still listen to them because I like the music and can't easily replace them with lossless stuff. My old MP3s also still sound better than whatever YouTube is using these days.
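As a rough numeric aside on the "not much room between 20 and 24 kHz" point: at a 48 kHz sample rate the spectrum extends to 24 kHz, so a 20 kHz brick-wall only discards the top sliver of the linearly spaced frequency range:

```python
# Illustrative arithmetic only: fraction of a 48 kHz recording's linear
# frequency range that a 20 kHz brick-wall LPF discards.
nyquist_khz = 48.0 / 2      # spectrum extends to 24 kHz at 48 kHz sampling
cutoff_khz = 20.0           # brick-wall LPF point
fraction_discarded = (nyquist_khz - cutoff_khz) / nyquist_khz
print(f"{fraction_discarded:.1%} of the linear-frequency range sits above the cutoff")
```

About a sixth of the linear range, and perceptually far less than that, which is why the bit savings are marginal.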
  21. I don't know for certain, but I believe most likely the "extra" treble content on the BD is synthesized using a plug-in. I believe such plug-ins are applied in the post-processing step of all kinds of audio, both music and movies, to "sweeten" the sound. I believe they work by adding content at higher harmonics of the existing content, mostly above 10-12 kHz. They may also identify and artificially extend transients. It's basically a way to boost the perceived spectral balance at the top end without boosting background hiss and nasty microphone resonances. It often sounds better than the "real thing" that it takes the place of. On a reference quality system, the added treble is likely to improve the perceived sound quality, but it could make sound quality worse on systems that do not reproduce the top-end with adequate accuracy. Lots of speakers have problematic resonances way up high that could be excited by the extra content and impart harshness, graininess, haziness, or excess brightness to the sound. I personally had this problem on my system up until the last 6-9 months or so, after I optimized my HF response very tightly. (Alas, it's still not tight enough.) Also, listener differences may apply here too, given that it mostly concerns content at the very top. Though I would not recommend counting on hearing tests using sine wave signals to decide whether top-end content applies to you or not. I hear sine waves to 17 kHz, but I'm fairly certain I hear contributions of frequencies up to at least 22 kHz in actual content. I can think of a few reasons this may be true. Probably most important is that what we hear depends on the result of a lot of high-level perceptual processing that is highly adapted to the properties of real-life sound and real-life acoustics. For example, a tone at some fundamental frequency is almost always accompanied by higher harmonic overtones.
A pure tone at, say, 19 kHz is unlikely to occur in isolation in real life, so our brains are likely to judge such a tone as spurious and filter it out from perception. However, if that same tone occurs simultaneously with sine components at 3800, 7600, 11400, and 15200 Hz (all harmonically related to the 3800 Hz fundamental), then the brain might deem that provisional 19 kHz stimulus to be reliable and might incorporate it into perception. The highest frequencies affect apparent brightness, which relates very closely to perceived depth/distance, because higher frequencies naturally decay more quickly as the sound passes through air. FWIW, I strongly suspect the same thing happens with ULF. I suspect hearing thresholds for ULF may be lower in the presence of higher harmonic content. However, there is a major variable here (really no different from the UHF) in that perception is likely to depend a lot on how accurately the higher frequency bass is being reproduced. Given that the room both alters the innate bass response of speakers/subs and confounds efforts to fix those errors, very few systems reproduce the higher bass frequencies accurately enough to fully benefit from the ULF. Likewise, very few if any HF transducers (without DSP optimization) can reproduce the upper mids and treble accurately enough to allow the listener to benefit from the UHF content. Sorry this got a bit long, and yes, BEQ for LOTR would be quite exciting! I'm hoping to soon implement what ought to be a major accuracy improvement in my under-40 Hz response, so will hopefully be able to make even better judgments about bass quality soon.
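The harmonic-stack example above can be sketched as a short synthesis snippet. The 1/n amplitude rolloff and the sample rate are my own illustrative assumptions, not part of the argument:

```python
import math

SAMPLE_RATE = 48000
FUNDAMENTAL_HZ = 3800.0
harmonics = [FUNDAMENTAL_HZ * n for n in range(1, 6)]   # 3800 ... 19000 Hz

def sample(t_s: float) -> float:
    """One sample of the harmonic stack at time t (seconds)."""
    return sum(math.sin(2 * math.pi * f * t_s) / n
               for n, f in enumerate(harmonics, start=1))

# 100 ms of the stack; dropping the lower partners would leave the 19 kHz
# component isolated, which (per the argument above) the brain is more
# likely to discard as spurious
tone = [sample(i / SAMPLE_RATE) for i in range(SAMPLE_RATE // 10)]
```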
  22. Hey @jay michael. If using the same amp when running everything together, definitely get the same drivers. Otherwise, it's not nearly as important.
  23. FWIW, for the Denon and Marantz AVRs and processors I know of, if you have bass management enabled, they cannot pass a WCS signal at master volume "0" (or 80 for you) unless the subwoofer trim on the unit is set to the lowest setting of "-12". This assumes Audyssey is turned off. If Audyssey is turned on, it's still possible to clip. If you don't play higher than "70", then you should be OK with the trim as high as "-2" without Audyssey. With Audyssey, you might still want to go with "-12" because Audyssey can boost (IIRC) by up to 9 dB. These days I use my Denon without bass management and without running Audyssey. I do it in my own processor downstream. Hence, I can get away with putting my sub trim at "-2" in theory. (The LFE channel alone needs -10 dB compared to LFE+LRCS.) In practice I have mine set to "-7" and mains set to "+3", as this lets me go as high as +5 dB vs. reference, in case I need that.
  24. I think it means "no longer available". I don't have an answer to the question though.
  25. While I for the most part agree with you, I would ask then: why does bass so often end up different on different versions of a mix? If we presume that the mixes are being mostly let through untouched, why does the sub balance get touched so often? Do these mixers frequently identify undesirable bass (whether too much or too little) as a down-mix artifact? Oh yes! My HT system is definitely at its best, so far. And I find that this is a good defense against crummy content. Too little bass? I'm still feeling and enjoying it. Too much bass? It's heavy-handed but doesn't pull attention away from the presentation. The perfectionism I exhibit above has led me to realize that if I find a soundtrack annoying in some way, it's almost always partly a problem on my end. That's great, because it means it's worth striving for better. I don't have to give up because "most soundtracks suck and it's not worth more investment of time and effort". To connect this with the above though, I suspect mixers aren't getting good bass if they are so inclined to make changes to it. That's unfortunate, and hopefully that will change for the better some day. One last thing about movie sound. I'm finding these days that I often enjoy the sound in lower budget movies more, because they are likely to do a lot more recording on set rather than green-screened. My system is so transparent now that it's distracting when the acoustics in the sound don't match what's on-screen. I also have to give credit to the sound designers of many animations, who clearly go through painstaking effort to make the acoustic spaces believable. And then there are many nine-figure-budget visual effects bonanzas with sound not much better than TV programs. This will really shock you, but it turns out that some entire movies are really quite terrible too. And yes, bad movies often get accompanied by bad soundtracks. Anyone remember "Mortal Engines"? There was some messed up stuff in that one!