SME

Time to kill the myth that "flat" bass is "correct" bass.

65 posts in this topic

I'd like to throw in one more observation about the final mix of any track. We've already talked about how varying rooms, playback systems, playback levels, and their unknown responses can affect the final mix. Another thing to consider is the hard ceiling of digital formats, which in my opinion has a large impact on modern mixing and the overall balance of music. In theory, you would simply lower the overall level of the various tracks to maintain the headroom needed for the loudest signals in the hottest track in the mix, and the listener would just turn the playback system up a bit more. In practice, this is very rarely done anymore. If your "dynamic" or alternatively mixed track comes on the radio between two tracks that are heavily compressed and densely mixed right up against a limiter, as modern tracks are, yours will be less loud and will seem weak in comparison. We all know louder is preferred by most. The artist will not be happy that their tracks are quieter, and ultimately this will affect your reputation as a mix engineer and your ability to earn a living.

Also, the majority of people are listening to content on tiny phone speakers, Bluetooth docks, TV speakers, PC sub/sat systems, or perhaps a soundbar. The best playback system many people have access to is their factory car stereo, which in most cases leaves a lot to be desired. Mixes have to be made to sound clear and loud on these kinds of miniature, response-limited devices.
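To make the loudness ceiling point concrete, here's a toy sketch (assuming NumPy; the signals are synthetic sine tones standing in for real tracks, and the envelope and clipping choices are mine): both versions peak at the same 0 dBFS ceiling, but the one slammed into a limiter measures far hotter in RMS terms, which is why it sounds "louder" next to a dynamic mix.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs  # one second of audio

# "Dynamic" master: brief full-scale peaks over a much quieter body.
envelope = np.where(t < 0.05, 1.0, 0.25)
dynamic = envelope * np.sin(2 * np.pi * 220 * t)

# "Loudness war" master: driven 12 dB into a brickwall limiter
# (modeled crudely here as hard clipping at full scale).
limited = np.clip(4.0 * np.sin(2 * np.pi * 220 * t), -1.0, 1.0)

def rms_db(x):
    """RMS level in dB relative to digital full scale."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# Both signals peak at 0 dBFS, yet the limited one measures far hotter.
loudness_gap_db = rms_db(limited) - rms_db(dynamic)
```

With these toy signals the gap works out to roughly 12 dB, even though neither track exceeds the format's ceiling.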

 

Most of the above is in reference to recorded music, but much of it applies to motion picture soundtracks as well.


 

You totally missed the point.  The point is that the bass instrument *usually* has strong harmonic content, which has a big impact on the sound.  Do you ever just plug your electric bass directly into a subwoofer when you play?

 

This is waaaaay off topic from the OP. I'm not interested in the rabbit hole of educating a non-musician as to what my bass sounds like or can sound like when played through my HT subwoofer system. The reasons should go without saying but if they don't in this case c'est la vie.

 

You already educated us about the electric bass (at least with regard to its range of fundamental frequencies) without even asking for permission.  And what I said is absolutely on topic.  Here I've re-quoted what you responded to.

 

When I speak of calibrating tonal balance, I'm talking about the full range of frequencies, including those typically reproduced by a subwoofer.  I don't see any need to make a distinction between the frequencies reproduced by speakers vs. subs.  For all I care, you could be running huge full-range speakers without subs.  That's not to minimize or ignore the problems that arise with blending subs with the mains channels, but I ignore that here because I'm more interested in discussing the goal than the method of achieving it.  There are other, more subtle issues that can lead to inconsistencies in reproduction of soundtracks where bass management is involved, but I'm also ignoring these for now.

 

Dave, if you want to discuss calibrating subs but ignore everything above the sub/main crossover region of 100 Hz (as seems to be your preference), then I suggest you pursue that interest in another thread.

 

Ground plane measurements can approximate close mic... yes. Sometimes ground plane measurements contain non-trivial errors, as shown in my previous post. How many times that fact is discussed isn't relevant. Comparing performance is also not relevant to the discussion.

 

All measurement methods are prone to errors if done improperly or if care isn't taken to achieve consistency.  Without knowing the reason that those errors crept into Ilkka's measurements, I would guess they are due to a mic calibration problem, which of course has *nothing* to do with ground plane measurements vs. other methods.

 

Reflections are responsible for the peak at 110 Hz? You mean reflections that create a standing wave or RT60 reflections? And, you don't hear the +7dB peak, but you hear the individual reflections that produce it? I'm not misreading, I just don't get what you're trying to say. Please provide some... any... evidence/data when you make proclamations. It makes for a much less frustrating discussion.

 

To your first two questions, the answer is neither.  My room response is not dominated by standing waves around that frequency, nor does it develop a diffuse field, which would be necessary to speak of RT60.

 

I'm suggesting that the ear "hears" the first arrival and reflections as separate acoustic events.  Higher level processing in the brain correlates these events and fuses them into a single event, which is what is actually perceived.  What is perceived is the sound from the speaker occurring within an acoustic space.  The perception of the temporal and tonal characteristics of the sound is dominated by the first arrival.  The early reflections make only minor contribution to the temporal and tonal characteristics, but they can impact perception of spatial characteristics of the sound and listening room.

 

I responded to your thread based on your OP, in which you declared that it's sad that any authority would suggest the best calibration for a recorded-source playback system is a flat response. You declared it, right here and now, to be bullshit with no disclaimer. You support that absurd declaration by mentioning "numerous blind listening studies conducted by Harman," though you don't define numerous, nor do you cite any of them.

 

It *is* sad.  I'm sure most who recommend as much mean well, but flat in-room response just doesn't sound good most of the time.  Worse still, when production/mastering systems are calibrated this way, the tonal balance of the mix becomes skewed.  The calibration method skews the tonal balance of the system to try to correct for mostly inaudible room characteristics, and then the mix reflects the inverse of those characteristics.  This is confirmed to be the case with cinema dub stages, where everyone calibrates the in-room 1/3rd-octave binned power response (at the MLP only) to the same in-room X-curve target.

 

Apparently I didn't need to cite any studies because you are already familiar with much of Toole / Harman's work.  Though you also dedicated most of the space of an additional post to deriding Harman's use of listening tests to try to figure out what sounds good.

 

I've said it about a thousand times or more: the equal loudness curves are built into all commercially available recorded material. I have been a participant in enough of those sessions and processes over the past half century, beginning at age 13, to assure you that no producer has ever mixed content flat assuming that SME or anyone else would re-mix the product, post-production, using a one-size-fits-all calibration adjustment. For various reasons, that material may end up anywhere on the quality scale, which exposes the flaw in such an approach.

 

I'm not talking about equal loudness curves, except in so far as almost all ULF content is inaudible without boost or excessive playback levels.  (That's another point that's already been beat to death on these forums.)

 

Otherwise, I don't even know what you are trying to say here.  I can't remix anything post-production without having access to the original tracks.  I *can* and *do* effectively re-master content these days.  I've learned to identify tonal balance issues by ear and make improvements with EQ.  This process is absolutely *not* "one-size fits all".  However, the vast majority of music sounds better with some bass boost somewhere in the measured smoothed in-room frequency response.  With cinema, things are all over the map, and cinema content is more likely to need aggressive EQ to clean up.

 

Think about it. A producer is deduced to have radically different hearing than some random collection of listeners, to the point of requiring post-production processing to get the mix right. And THAT conclusion isn't bullshit?

 

Nope.  The problem isn't with the hearing of the engineer.  The problem is that the references are insufficiently defined.  Even if every mastering engineer had a ruler-flat 1/6th-octave smoothed in-room response, the subjective tonal balance of each of their systems could vary dramatically.  This is not far off from what is done in cinema (albeit using a 1/3rd-octave RTA and the X-curve target), and this leads to major inconsistency between mixes.  In the music world, where EQ to flat in-room response is much less likely to be done, inconsistencies are still present but are more minor.
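For anyone unfamiliar with the jargon, "1/6th-octave smoothed" just means averaging the measured response over a sliding window one sixth of an octave wide. A minimal sketch of fractional-octave smoothing (assuming NumPy; the function name and the choice to average in the power domain are mine, and real measurement software may differ in detail):

```python
import numpy as np

def fractional_octave_smooth(freqs, mag_db, fraction=6):
    """Smooth a magnitude response (dB) with a sliding 1/fraction-octave
    window. freqs and mag_db are matching 1-D arrays."""
    smoothed = np.empty_like(mag_db)
    half_bw = 2.0 ** (1.0 / (2 * fraction))  # half-window width as a freq ratio
    for i, f in enumerate(freqs):
        band = (freqs >= f / half_bw) & (freqs <= f * half_bw)
        # Average in the power domain, then convert back to dB.
        smoothed[i] = 10.0 * np.log10(np.mean(10.0 ** (mag_db[band] / 10.0)))
    return smoothed
```

A flat response stays flat, while narrow peaks and dips get averaged down, which is exactly why heavily smoothed curves can hide audible problems.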

 

For the record, I don't have any music program that requires a +5 dB boost in the bass region to correct for a thin sound. That doesn't mean there is no such program; it just means that I don't keep poorly produced source material in my collection.

 

For what it's worth, I rarely stray beyond +4 dB.  When I do, the center frequency of the shelf is almost always up pretty high, at like 250 Hz or above.  As a point of interest, for music that needs the shelf moved up higher like that, I usually hear the thinness a lot more on sounds / instruments whose fundamental frequencies are above the center frequency.  Harshness on women's voices can be a good clue, but so can any kind of ascending sequence of notes in which the timbre obviously shifts from a full to a thin sound over the sequence.  This might involve piano, keyboard, violin, saxophone, or any number of others.
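For anyone who wants to experiment with this kind of bass shelf, here's a sketch using the standard low-shelf formulas from the widely used RBJ "Audio EQ Cookbook" (the function name and defaults are my own; this is one common formulation, not a claim about what anyone in this thread actually runs):

```python
import math

def low_shelf_biquad(fs, f0, gain_db, S=1.0):
    """RBJ 'Audio EQ Cookbook' low-shelf biquad.
    Returns unnormalized coefficients (b0, b1, b2, a0, a1, a2)."""
    A = 10.0 ** (gain_db / 40.0)          # shelf amplitude
    w0 = 2.0 * math.pi * f0 / fs          # center frequency in rad/sample
    alpha = math.sin(w0) / 2.0 * math.sqrt((A + 1/A) * (1/S - 1) + 2)
    cosw, sqA = math.cos(w0), math.sqrt(A)
    b0 = A * ((A + 1) - (A - 1) * cosw + 2 * sqA * alpha)
    b1 = 2 * A * ((A - 1) - (A + 1) * cosw)
    b2 = A * ((A + 1) - (A - 1) * cosw - 2 * sqA * alpha)
    a0 = (A + 1) + (A - 1) * cosw + 2 * sqA * alpha
    a1 = -2 * ((A - 1) + (A + 1) * cosw)
    a2 = (A + 1) + (A - 1) * cosw - 2 * sqA * alpha
    return b0, b1, b2, a0, a1, a2
```

By construction, the filter applies the full gain at DC and 0 dB at Nyquist, so a +4 dB shelf centered at 250 Hz lifts the bass while leaving the treble untouched.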

 

I have a lot of recordings that are of very high quality and sound great, provided that I get the bass shelf center frequency and gain set optimally.  If I did not have the capability to adjust these things, as is the case for the vast majority of listeners even on "top of the line" systems, I would regard a much narrower subset of those mixes as being high quality.

 

But anyway, to your point about not needing a bass boost with most content.  It's not clear to me how you calibrate your system and whether you use EQ with your speakers.  How do you level match your subs to your mains?  I also don't know what kind of speakers you use, how far from boundaries they are, and how far away from them you sit.  These details matter a lot.  Depending on your setup, you may even benefit from some attenuation in the bass for music playback.  This can happen if the speakers are designed for placement far away from walls, but yours are placed too close to them.  Your Raptor subs will also likely interact with the room differently from your speakers.  They are likely to offer a lot more directivity and may actually sound better run with a little less gain than a typical level match would suggest.  But this is all speculation without more info.
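As an aside on level matching, one rough approach is to compare band-averaged levels: average the sub's measured response over its passband and the mains over a mid-band reference, then trim the difference. A sketch (assuming NumPy; the band edges and helper name are illustrative choices, not a standard):

```python
import numpy as np

def level_match_trim(freqs, sub_db, mains_db,
                     sub_band=(40.0, 80.0), mains_band=(500.0, 2000.0)):
    """Suggested sub gain trim (dB): mains mid-band average minus
    the sub's band average. Band edges are illustrative."""
    def band_avg(db, lo, hi):
        mask = (freqs >= lo) & (freqs <= hi)
        # Average in the power domain, then back to dB.
        return 10.0 * np.log10(np.mean(10.0 ** (db[mask] / 10.0)))
    return band_avg(mains_db, *mains_band) - band_avg(sub_db, *sub_band)
```

For example, if the mains average 75 dB through the mids and the sub averages 70 dB over 40–80 Hz, the suggested trim is +5 dB. Whether that ends up sounding right is exactly the kind of judgment call discussed above.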

 

If one falls prey to Harman (or any other of the thousands of "listening studies") conclusions and calibrates to some distorted bias, then every recording played back on that system will show that bias. This is proven on the production side. Mix on a system other than flat and get a result that has too much bass or too little bass, which is what happens in reality. The proper method is indeed to calibrate flat and "season to taste" after that on a disc-to-disc basis. What others preferred during a listening test positively changes in those test subjects over time with the evolution of hardware and software, room construction differences, age, and preference adjustment. Why you or anyone would think that he/she should conform to such a metric is beyond me, but calling a flat calibration bullshit is just not the way to be taken seriously.

 

I'm citing Harman to give credit where it is due, but in fact, I reached my conclusions mostly on my own.  My overall approach is also quite different from Harman's.  Harman aims to create the best sounding speakers and headphones.  I aim to achieve the best sound I can using EQ.  Because my speakers use an active crossover, the EQ is required to provide the voicing that would be done with passive components instead.  Where our approaches overlap is with regard to the importance of flat anechoic response and smooth off-axis response.  They use anechoic chamber measurements to optimize their speakers, and "flat" is the design target.  My approach relies entirely on in-room measurements, and regards "flat anechoic response" as merely the imperfect reference for music mastering.  Neither of our approaches leads to a smoothed, in-room frequency response that's flat, in most cases.

 

I'm getting tired of repeating this, but you have to get this through your head: calibration to flat in-room response does not lead to consistent sound between mix environments.  It's not even close.  SMPTE pretty much trashed this notion in their recent studies of dubbing stages and theaters.  Flat anechoic response of the speakers gets you a lot closer, but it still allows inconsistency in the low mids and bass.  To the best of my knowledge, there is no validated solution to this problem other than to kill 100% of the low-mid and bass reflections and calibrate to flat in-room response, and that solution is not realistic.  Even the best rooms likely see at least a few dB of gain from reflections.  I might have a solution to that problem, but it needs time to develop and will eventually need listener studies to validate.

 

The thing is, you can't claim that any calibration method will lead to consistent results *without* listening tests to validate that the method works.  The best listening studies are the ones done by Harman that indicate flat anechoic response is best and not flat power response or flat in-room response.

 

I think you are very confused about what listener preference really means here.  If one is doing a mix, one often has a lot of latitude as to how loud the bass instruments sound compared to the rest of the mix.  The easiest thing to do is just make the bass instruments louder than everything else.  There may be a point at which masking of higher frequency content becomes a problem, but there are a lot of tricks to compensate for this that can be applied during the mix stage.  However, once the mix is done, most of the preferences for the sound itself are locked in.  If you want the kick drum to hit harder but it shares bandwidth with the bass, then you can't do what you want to the kick drum without messing up the sound of the bass.  Indeed, about the best thing you can do with EQ in the mastering and playback stages is to *improve the audibility of the content that's in the mix*.

 

In most cases, any content that is in a mix is meant to be heard.  There are certainly exceptions, like unintended noise (including ULF) or clipping that the mixer does not hear, but if we are talking about particular musical passages or voices, the goal is definitely to be able to hear everything in the mix.  Most of this work happens in the mix stage, but there is the translation problem, in which the tonal balance of the mix monitoring system doesn't match the tonal balance of the playback system.  The mastering engineer attempts to bridge this gap by making adjustments to the overall sound while monitoring on a (usually) higher quality "reference" system.  The window of what actually allows for all the content to be heard is quite narrow.  Once you deviate from the ideal by more than say +/-2 dB, you're likely to encounter significantly more masking, which means that musically relevant content becomes difficult or impossible to hear.  That narrow window doesn't offer much flexibility at all for either a mastering engineer or a home listener to "season the sound to taste" without damaging the integrity of the mix.  As such, a good mastering engineer can do most or all of this work without even consulting the artist or mixer: spend some time listening to the music and tweaking the tonal balance until confident of knowing what is in the mix and having reset the tonal balance to maximize audibility of that content on the reference system.

 

Therein lies a key to better reproduction on playback systems as well, and this is where listener preference among playback systems comes into play.  The preferred sound is the one that allows the listener to hear the most detail.  Transients sound better and have more impact with better balance as well.  The more evenly a transient activates the various critical bands of the ear at the same time, the stronger it will seem.  A good sounding (and feeling) kick drum doesn't just hit at 30 Hz or 60 Hz.  It hits across a relatively wide bandwidth, even as it may also ring around a few resonant frequencies.


Here is a link to a Harman study about target curves. Hopefully it will work.

 

I'm not a big fan of this study for a couple reasons.

 

First of all, I'm skeptical that you can give untrained listeners access to crude tone controls and expect them to set them optimally, even if "optimal" is judged by their preferences.  As an analogy, imagine letting people without culinary training prepare "the best cake", giving them say over how much sugar and salt to use in the recipe but not the other ingredients.  Without training and experience, they won't have a good grasp of how these two flavors interact with one another and with the rest of the recipe.  For example, more salt can paradoxically increase sweetness.  And what if the other ingredients are not in ideal balance?  What if there isn't enough butter in the recipe?  How should this problem be compensated for?  The same issue arises in this study where the baseline is "flat in-room response", which is almost certain to create imbalances in the mid-range.  So how is an untrained listener supposed to strike a proper balance between treble and bass when the mid-range is out of whack?  Indeed, there's a saying among mixers: "get the mid-range right and everything else will fall into place".

 

The other issue I have is with the whole idea of trying to find a one-size-fits-all smoothed, in-room response target curve for speaker responses.  Such a curve undoubtedly varies with the room and speaker characteristics.  IMO, these differences will have more impact on the measured in-room response than on the subjective sound of anechoically flat speakers.  I'm OK with them looking for an ideal target curve for headphones, because at least there are no room effects to alter the measured responses of the headphones in ways that are largely inaudible to the listener.

 

Indeed, I believe as long as Harman and others continue to pursue an optimal in-room response target, they will not succeed in solving the translation problem.


I think we should have a data-bass bake-off for the best cake. Please send all entries to me and I will make a definitive and concrete supposition of who will win this bake-off and therefore have the theoretical absolute best cake on the internets.


I think we should have a data-bass bake-off for the best cake. Please send all entries to me and I will make a definitive and concrete supposition of who will win this bake-off and therefore have the theoretical absolute best cake on the internets.

 

Yes. Of course we will need a full chemical analysis of each cake and readings of the brain's electrical activity while eating them, so we can log the peak enjoyment experienced.


But then won't someone be tempted to have a 'Lone Survivor' helping of cake and throw up......twice?

 

JSS


Seriously trying to figure out how to combine Lone Survivor with a Devil's Food chocolate cake.

 

Hmm...


Yes. Of course we will need a full chemical analysis of each cake and readings of the brain's electrical activity while eating them, so we can log the peak enjoyment experienced.

What do you think is more indicative of a cake's performance: burst taste testing or long-term taste testing? 


I can confirm that it will be 4-layer with chocolate ganache glaze.

 

Well, hopefully the ganache glaze will help with the inherently higher inductance that the 4-layer cakes suffer from.


What do you think is more indicative of a cake's performance: burst taste testing or long-term taste testing? 

Both are important to the overall cake experience, neither one can be discounted. 

 

An excellent burst taste test result can be countered by poor long-term taste performance.

 

An appropriate testing and scoring methodology would consider both.

