The Low Frequency Content Thread (films, games, music, etc)


maxmercy


OK right, this is with no amplitude offset and just fiddling with the colours.

 

[Attached spectrogram screenshots: taegukgi.jpg, taegukgi-2.jpg, taegukgi-3.jpg]

 

Does that look correct now? I put the wav on my gdrive (in case that helps get the setting right)

 

The scene in question is a rather visceral artillery bombardment. It is from the DVD btw, this one - http://www.play.com/DVD/DVD/4-/656483/-/Product.html

 

 

Your levels are peaking at -35dB, which means your color intensity setting is way too high.

 

Problem is, I don't see that setting adjustment on your posted pic.

 

[Attached screenshot: Spectrum Lab display settings]

 

See the posted pic and notice the display options at the top. There is a setting for minimum and maximum frequencies, and below that are settings for various adjustments, including the last two on the right, which change the color intensity without having to shift your color scale.

 

If you click the FREQ tab it will display the settings in my posted pic.

 

Then, click the last 2 '^' 'V' icons and your color intensity will change accordingly.

 

Then, go to your OPTIONS tab and select SPECTRUM (2) and change the RANGE to -60dB to +5dB.

 

You should then find the right combination of intensity (noted above) and output level from your source so that the waveform graph (right side of the SL display) shows a better level, i.e. a larger waveform magnitude.


Thanks, I will go back and review against what i have.

 

I am slightly confused though. What is the point of the exercise here? I was working on the assumption that it is to adjust the colour scheme so that levels are shown relative to the peak levels for that track. The reason being that this gives full detail of the relative spectral content of the track. Is this wrong? If so, what is the point?

 

The latest images were created by not using the up/down arrows but instead clicking on the frequency range under the colours and adjusting the amplitude range there, as I thought that was what maxmercy was advising.

 

Fundamentally, though, I don't know what difference this makes. It seems to be just an FFT analysis with a user-selected colour scheme to represent the z-axis (amplitude) within some arbitrary user-selected min/max range.
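As a minimal sketch of that idea (assuming numpy, scipy and matplotlib are available, the clip is a mono 16-bit WAV, and the file name is made up; it illustrates the concept rather than the exact Spectrum Lab settings): a spectrogram is just a windowed FFT with a user-chosen dB range mapped onto a colour scale.

```python
# A spectrogram is just a windowed FFT with a user-chosen dB range mapped
# onto a colour scale.  File name is hypothetical; assumes mono 16-bit PCM.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

fs, x = wavfile.read('taegukgi.wav')                  # hypothetical file name
x = x.astype(np.float64) / np.iinfo(np.int16).max     # normalise so 0 dBFS = 1.0

# Long FFT = fine frequency resolution (good for bass), coarse time resolution.
f, t, mag = spectrogram(x, fs, nperseg=32768, noverlap=24576, mode='magnitude')
mag_db = 20 * np.log10(mag + 1e-12)

plt.pcolormesh(t, f, mag_db, vmin=-60, vmax=5, cmap='inferno')  # z-axis range: -60..+5 dB
plt.ylim(0, 120)                                                # bass region only
plt.colorbar(label='dB')
plt.xlabel('Time (s)')
plt.ylabel('Frequency (Hz)')
plt.show()
```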



 

Correct.  You have to use the legend to derive much out of any FFT, and it helps to know the FFT integration time to know how much 'smear' will occur, making longer effects appear hotter.  I tried to standardize to the max available signal level with summed 7.1 content.  That means very little pink in practice, but a few titles come close.  But you can use the sliders to make the graph look different to suit your tastes.  I just check the legend to make sure I am seeing what I think I am seeing, and also look at the signal bar.  0dBFS summed (that means 128dB peak, 125dB RMS sine wave) should use up the entire signal bar.  Reference Level is a tall order.  For most rooms, unless very well treated, it is quite loud.
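As a rough check on those numbers, here is a back-of-the-envelope calculation, assuming the usual home-theatre calibration (a full-scale sine at 105 dB SPL per main channel, 115 dB SPL for the LFE) and perfectly coherent, in-phase summation of all eight channels:

```python
# Rough arithmetic behind the "128 dB peak / 125 dB RMS" figure for summed 7.1.
# Assumptions: 0 dBFS sine = 105 dB SPL per main channel, LFE carried 10 dB hot
# (115 dB SPL), and all channels summing coherently (in phase).
import math

def coherent_sum(levels_db):
    """Coherent (pressure) sum of SPL levels given in dB."""
    return 20 * math.log10(sum(10 ** (l / 20) for l in levels_db))

rms = coherent_sum([105] * 7 + [115])   # 7 mains + LFE
peak = rms + 3                          # a sine's peak sits 3 dB above its RMS

print(f"summed RMS  ~ {rms:.1f} dB SPL")    # ~125 dB
print(f"summed peak ~ {peak:.1f} dB SPL")   # ~128 dB
```

With those assumptions, the coherent sum of seven mains plus the LFE lands near 125 dB RMS, and the sine's 3 dB crest factor puts the peak near 128 dB.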

 

I also do all of my measurements digitally now, no signal chain, just analyzing the data on the disc.

 

 

JSS



 

 

Yes, to all of the above, but...

 

The levels are adjusted the way I mentioned above (rough calibration) and the way Nube mentioned above (offset adjustment in the spectrum display options).

 

If the waveform graph and bar graph are irrelevant to you, you should turn the waveform and bar graphs off because the level is too low to show any detail on the waveform graph and the bar graph shows grossly incorrect data.

 

Also, you have selected FFT options that sacrifice spectral detail in favor of time-related detail.
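For reference, that trade-off is set by the FFT length: at a given sample rate, finer frequency bins always mean a longer analysis window, and vice versa. A tiny sketch, assuming a 48 kHz track:

```python
# Time/frequency trade-off of the FFT: a longer FFT gives finer frequency bins
# but a longer window that smears short events in time.  Assumes fs = 48 kHz.
fs = 48000
for nfft in (2048, 8192, 32768, 131072):
    freq_res = fs / nfft      # width of one frequency bin, Hz
    time_res = nfft / fs      # length of the analysis window, s
    print(f"FFT {nfft:>6}: {freq_res:7.2f} Hz per bin, {time_res * 1000:7.1f} ms window")
```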


Having watched the movie (Godzilla) twice in a movie theater (ex-IMAX 15/70, now with Dolby 6.1) when it was released, I do not remember the audio mix sounding as harsh, muddy or compressed; it sounds like a different mix was used for the home release.

 

Thanks.  This seems to provide some evidence (albeit anecdotal) that the soundtrack was damaged during production of the "at home" version.  I believe that a remix or remaster for home release is done more often than not these days, and I have a strong hunch that a lot of clipping is introduced in this process.  I have several earlier posts about reference level along with the speculation that many discs ship with hotter tracks, intended for playback at lower levels.  In that discussion it became clear to me how fragile a concept "reference level" is when consideration is given to variance between rooms and systems.  I was pointed to industry-specific recommendations that lower monitoring levels be used in smaller rooms.  In this particular example, a table was given specifying the monitoring level to use for a particular range of room sizes.  My assumption here is that the "at home" soundtracks are often monitored at lower playback levels, in accordance with these recommendations.  If someone with additional knowledge knows otherwise, please speak up!

 

Here's how I propose this might happen.  Note, this is still speculative:  The "at home" soundtrack is monitored at a lower playback level.  I think commonly used adjustments include -3, -6, and -10 dB.  At this lower playback level, the loudness is closer to that of the theatrical presentation than at "0 dB".  However, the bass is very weak.  Why?  My thinking is that when comparing rooms, bass loudness does not vary as much as mid-range and especially treble loudness.  I wouldn't be surprised if room size is irrelevant for bass loudness under 100 Hz.  So when the monitoring level is reduced by 6 dB, guess what happens?  The bass gets boosted to repair the tonal balance and restore the awesomeness of the big bass scenes.  Where the track contained signal approaching digital full scale, clipping is very likely to occur.
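To make the headroom problem concrete, here is a toy example with purely illustrative numbers (not taken from any actual soundtrack): a bass passage already sitting near full scale simply cannot absorb a 6 dB boost without clipping.

```python
# Toy illustration: a bass tone already near 0 dBFS cannot take a +6 dB
# "restore the bass" boost without clipping.  Numbers are illustrative only.
import numpy as np

fs = 48000
t = np.arange(fs * 2) / fs
x = 10 ** (-1 / 20) * np.sin(2 * np.pi * 30 * t)   # 30 Hz tone at -1 dBFS

boosted = x * 10 ** (6 / 20)                       # +6 dB low-end boost
clipped_fraction = np.mean(np.abs(boosted) > 1.0)  # what fixed-point full scale cannot hold

print("peak after boost: %+.1f dBFS" % (20 * np.log10(np.max(np.abs(boosted)))))
print("samples clipped : %.1f %%" % (100 * clipped_fraction))
```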

 

I reckon that in many if not most instances, the clipping is noticed, and different strategies are employed to remedy it.  One option may be to redirect some bass from a busy front channel to LFE.  This essentially preserves the signal, but it may not always work and can be labor intensive.  Another remedy may be to install a compressor or soft limiter.  This limits dynamics but can give a very good result if done skillfully.  Naturally, better results often require more labor.  Still another may be to filter out infrasonics, because they eat up a lot of headroom and it is assumed (probably correctly) that the vast majority of listeners won't hear them anyway.  I'm sad to say it, but this remedy is likely very easy to implement.  Being low-hanging fruit, I wouldn't be surprised if this is how ULF sometimes goes missing from the "at-home" mix, even though it probably gets stripped from many theatrical soundtracks too.

 

In the worst cases, I imagine some productions may be too rushed to fix the clipping completely, or the monitoring system may be too poor to reveal it.  The egregious cases like "Star Trek: Into Darkness" and "Godzilla" may involve mixes that were very loud to begin with being monitored at close to -10 dB.  If the sub bass is to be boosted 10 dB, a lot of work will be needed to make it sound clean, and if they're rushed, they might just throw a subsonic filter on it and hope for the best.

 

If I had any influence, I would encourage studios to do their "at home" mastering at "0" and to reduce the level of the soundtrack itself in order to adapt it to their listening environment.  Since almost all changes to the sound would involve reductions instead of boosts, there would be little chance of running out of headroom and clipping.  Ideally, they would also use an EQ curve optimized for their room, but I'm not aware of any suitable model or standard that specifies what target curve to use for any particular set of circumstances.  In reality, the correct target curve for a room depends on a lot more than just room size.  If bass loudness may be assumed to not vary much between rooms, then it makes sense to calibrate bass responses the same and then shape the rest of the target curve using bass @ 0 dB as a reference.  Then ideally, all rooms with optimized target curves and calibrated bass will sound similarly loud at "0".

 

Lacking such a model or standard to go by myself, I recently spent a period of weeks making small adjustments to my own EQ target curve and listening to achieve a balance that "sounds right".  While I'm not yet convinced I have it right where I want it, I am very happy with the curve I am using now, in which the treble is reduced 5 dB relative to the bass.  It sounds much more balanced and natural than a flat response ever did.  I am calibrated so that I get theatrical level bass at "0".  If I got the curve right and if bass loudness doesn't change between rooms, then theatrical tracks should sound just right for me at "0".  Thus far, most of the Blu-rays I've watched sounded "right" at "-3" or "-6".  I believe this is entirely consistent with the idea that Blu-rays are monitored at lower playback levels and then re-mastered to sound equally loud to the theatrical mix, and that this change almost always requires a bass boost on the soundtrack.

 

So there's my largely unsupported argument.  I wish there were a way to get better insight into this.  Clipping affects the listening experience for more than just those with high-performance gear.  I fear that even if studios were to recognize the headroom benefit of mixing at "0" all the time, they would be reluctant to do so for the same reason CDs often get compressed to death: they fear customers complaining about the "weak soundtrack" on the new release, all because it's not as loud as their other flicks when played at the same "volume setting".  There may be justification for this.  I recall that "Elysium" was criticized for having weak dynamics, when in fact the opposite was true.  What was really going on is that people were equating maximum loudness with dynamics.  By virtue of being so dynamic, "Elysium" needed a higher playback level to reach the same loudness.  Unfortunately, 25 years of highly compressed popular music seems to have altered cultural preferences to be more in favor of a louder, more compressed sound.  For those of us who appreciate realism in audio, I certainly hope this trend turns around soon.


If the waveform graph and bar graph are irrelevant to you, you should turn the waveform and bar graphs off because the level is too low to show any detail on the waveform graph and the bar graph shows grossly incorrect data.

Why is the bar graph showing grossly incorrect data?

 

Is there any practical impact, when replaying captured data, of adding an offset of x vs. reducing the amplitude range by x?


+1 on Elysium; even though it's not one of my favourites, I agree with the comments. I also place Dredd in a similar category. That film kicks ass but is definitely not as loud as other films I've watched. Turn it up and it's brilliant.

Say whaaaaaat? :blink:

 

Do you have the US BD? (7.1 DTS, I think?)

 

IIRC the UK BD is 5.1 and, IMO, it's ridiculously loud...  -6 on the opening sequence feels like I've been physically assaulted :wacko:  yet -6 on Oblivion is just a little bit too loud during speech but fine otherwise.


It is clipped. This is not something I would mislead anyone about.  I zoomed in on many instances on many channels and there were flat tops a-plenty.  Is it the worst case I have ever seen?  No.  Just 1-2dB of attenuation across the board would have probably prevented all of it.  But someone should be at least glancing at the final product to see if it hits 0dBFS, and if it does so regularly, to do something about it before it is pressed onto a disc.  This is apparently not the case, given the frequency with which I see clipping in soundtracks.
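For anyone who wants to check a track themselves, a crude way to spot flat tops like these is to count runs of consecutive samples pinned at (or within a hair of) digital full scale. A minimal sketch, assuming one channel has already been decoded to a 16-bit WAV (the file name is made up):

```python
# Crude flat-top detector: flag runs of consecutive samples stuck at full scale.
# 'channel.wav' is a hypothetical decoded channel; assumes 16-bit PCM.
import numpy as np
from scipy.io import wavfile

fs, x = wavfile.read('channel.wav')
x = x.astype(np.float64) / np.iinfo(np.int16).max

near_full_scale = np.abs(x) > 0.999   # samples essentially at 0 dBFS

runs, count = [], 0
for flag in near_full_scale:
    if flag:
        count += 1
    elif count:
        runs.append(count)
        count = 0
if count:
    runs.append(count)

flat_tops = [r for r in runs if r >= 4]   # 4+ pinned samples in a row looks like a flat top
print(f"{len(flat_tops)} suspected flat-topped events, "
      f"longest run {max(flat_tops, default=0)} samples")
```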

 

JSS


My guess is that the problem arises when we sum the bass into a single SW channel, although it would help if those measuring clipped waveforms mentioned the method used. Soundtracks are mixed with no bass management (summing).

 

Measurements of the analog SW out using various discs yield surprising results. Summing signals from the satellite channels and the LFE channel can cause huge spikes in voltage and clipping in the signal chain, so I imagine the same is likely in the digital realm. The SW output, after all, has finite headroom, so the summing aspect needs to be taken into consideration during production.

 

Just a thought.
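As a quick sketch of why the summed SW feed needs so much headroom (illustrative arithmetic only; it assumes bass management adds the redirected satellite bass to the LFE at its +10 dB in-band gain, all in phase):

```python
# Worst-case headroom needed by a bass-managed subwoofer feed: LFE (+10 dB
# in-band gain) plus the redirected low end of every satellite, all in phase.
# Illustrative arithmetic only.
import math

def db_to_lin(db):
    return 10 ** (db / 20)

n_sats = 7                     # 7.1 layout
lfe_gain_db = 10               # LFE is carried 10 dB hot

# Every channel holding the same in-phase 0 dBFS bass tone:
summed = n_sats * db_to_lin(0) + db_to_lin(lfe_gain_db)
print("summed SW level: %+.1f dB above a single 0 dBFS channel"
      % (20 * math.log10(summed)))   # ~ +20 dB of required headroom
```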


I know for a fact my Sherbourn clips a 'worst case scenario' track I created with tonebursts all encoded at 0dBFS from 1.5Hz to 200Hz.  It clips terribly unless the signal is attenuated prior to the AVR.  I am using a miniDSP nanoAVR, but am having issues with 1080p24 passthrough.
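For anyone who wants to build a similar torture track, here is a sketch of the idea; the exact frequencies, burst lengths, and file name are guesses for illustration, not the actual test file:

```python
# Sketch of a 'worst case scenario' test file: a series of tonebursts, every
# one at 0 dBFS, stepping from 1.5 Hz up to 200 Hz.  Frequencies, burst
# lengths and file name are guesses for illustration.
import numpy as np
from scipy.io import wavfile

fs = 48000
freqs = [1.5, 3, 5, 10, 20, 31.5, 50, 80, 125, 200]

pieces = []
for f in freqs:
    cycles = max(4, int(f))                   # at least a few cycles per burst
    t = np.arange(int(fs * cycles / f)) / fs
    burst = np.sin(2 * np.pi * f * t)         # full-scale (0 dBFS) toneburst
    pieces += [burst, np.zeros(fs)]           # 1 s of silence between bursts

signal = np.concatenate(pieces)
wavfile.write('worst_case_bursts.wav', fs, (signal * 32767).astype(np.int16))
```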

 

I think this varies by brand; some probably have enough headroom in place.

 

The clipping in Godzilla is encoded in every channel.  By using the BassEQ process in the BassEQ thread, the shelving filters turn the square waves into smoother-sounding sawtooth-type waves, the presentation is much improved, and it restores the bottom end that a giant monster film deserves.  Anyone with the DSP capability to do so should try it.

 

JSS 



 

Good point, some processors will be fine, though.

I checked this now on a Marantz and all channels at 0dB seem fine.


For your Nano to work, make sure the 24Hz option in your Blu-ray player is turned off. Once I turn 24Hz off, my Nano works awesome.

 

Yes, but then you are dealing with interlacing artifacts and artifacts from turning a 24Hz presentation into a 60Hz one... they are not integer multiples, so judder will be introduced.
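The arithmetic behind the judder, for anyone curious (a minimal sketch):

```python
# Why 24 fps into a 60 Hz output judders: 60/24 = 2.5, so frames cannot all be
# repeated the same number of times; the usual result is a 3,2,3,2,... cadence.
fps_in, hz_out = 24, 60
print("exact ratio:", hz_out / fps_in)       # 2.5 -- not an integer

repeats = {}
for n in range(hz_out):                      # one second of output refreshes
    frame = int(n * fps_in / hz_out)         # source frame shown on this refresh
    repeats[frame] = repeats.get(frame, 0) + 1

print("repeat pattern:", [repeats[f] for f in sorted(repeats)][:8])   # [3, 2, 3, 2, ...]
```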

 

JSS


An important point about clipping in playback equipment deserves mention.  While we often think of clipping as occurring in the power amplifier, clipping can occur just about anywhere in the signal chain.

 


 

Provided that the DSP arithmetic is handled properly, clipping should not occur during the bass management process in the processor/AVR.  However, depending on the playback level and the amount of gain on the equipment downstream, the summed signal may be enough to clip the output DAC and/or the pre-amp output.  This is absolutely the case on my Denon 3313CI AVR, which appears to implement the level trims in the digital domain.  In order to maximize my bass headroom, I adjust the gains on my subs until the correct level trim for calibrated playback is as low as possible.  I actually have my level trim at about -9.5, which is +2.5 above the minimum, so I can decrease it a tiny bit if need be.  I forget how much bass headroom that actually gets me, but IIRC it's near 120 dB rms.  Someone running with the sub trim near "0.0" would hit clipping at around 110 dB of output, which is quite low by sub standards!
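A rough way to see how much headroom the trim buys, assuming the trim is applied in the digital domain (as described above for the Denon) and the sub's own gain is turned up to compensate, so every dB of negative trim becomes a dB of digital headroom; the 110 dB baseline is the figure quoted above, and the rest just follows from it:

```python
# Rough model: with a digital-domain channel trim, every dB the sub trim sits
# below 0.0 (with the sub's analog gain raised to compensate) is a dB of
# reclaimed digital headroom.  The 110 dB clipping point at 0.0 trim is an
# estimate from the post above, not a measured spec.
def max_clean_output(trim_db, clip_at_zero_trim_db=110):
    """Approximate max sub output (dB SPL) before the DAC/pre-out clips."""
    return clip_at_zero_trim_db - trim_db    # negative trim raises the ceiling

for trim in (0.0, -6.0, -9.5, -12.0):
    print(f"sub trim {trim:+5.1f} dB -> clips near {max_clean_output(trim):.0f} dB")
```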



 

 

Yes, well, the devil's always in the details.

 

Most would reason that a SW channel trim at '0' is optimum, so that's where we set it to take measurements. The term "running the sub hot" also implies setting the sub trim to a positive number, not a negative one. That goes for the channel trims as well as the master volume level, which is an unknown on many AVRs.

 

The Oppo, for example, has a 0-100 scale on its master volume when using the analog outs (using the Oppo as a pre), and it's not a dB scale, so 0dBFS is arbitrary, and once you get above 95, the output goes unpredictably nuts regarding scale/voltage.

 

It's like roll-off. There is no standard, and most hardware roll-off is unknown, certainly not a standard published spec.

 

In the case of the Sherbourn pre we measured at Adam's, it seemed as though the output was gated, making it impossible to do a loopback measurement because the 'gate' wouldn't open until the sweep hit 20 Hz or so. That made the result look as though there was a brick-wall filter at 20 Hz. Still not sure what the hell went on there, but the point is that all hardware is different in many respects, and those differences have to be understood before measurements even begin.

 

 

Good point, some processors will be fine, though.

I checked this now on a Marantz and all channels at 0dB seem fine.

 

How exactly did you check the Marantz?



 

I only listened to it; if there is excessive clipping, it will be audible.

 

However, given the above post mentioning that trim level can affect headroom, a more extensive analysis should be done to accurately determine what headroom is available at different trim levels.

This is quite easy to do; you only need to monitor the signal out from the processor before it enters the subwoofer DSP.

 

I find headroom issues to be the largest problem with ordinary AVRs/processors; they have too high a noise floor, and the trim levels are not sufficient for speaker systems with large dynamics.

