
EP won't get out of the way of the bass (or vice versa)...


Sundown


 

Hey all,

 

I'm working on some ethereal mood music for a documentary, and I'm struggling to get the mix right between a synthetic EP and a synthetic bass.

 

The bottom end is always the hardest to nail down (especially in small rooms), and while I have some bass traps, I'm in the process of acquiring/building more.

 

But I don't think this is a standing-wave issue... I just can't seem to find an approach that lets both parts breathe without overlap. I've checked the mix in multiple environments, so it's not just the room.

 

I'm using the EP as a pad, and it's just a simple progression of descending fifths in C minor. I already modified the part I'm playing to avoid any note overlap with the bass, but it's not enough to let both be heard. The bass is a simple thumpy patch from my D-20 (I recorded it mono and panned it straight up the center), and the EP is a synthetic tone from my XV-3080. Originally it was recorded stereo, but I have since re-recorded it mono and panned it left of center (about 5/8 of the way toward hard left). I also slapped a 64th-note delay on it, panned to the right, to widen the sound a bit (no feedback).

 

Both the D-20 bass and the XV have a bit of software compression (Waves Renaissance), though the D-20 is compressed much more than the EP. I try not to compress synths at all, but even with proper level setting, this little D-20 patch needed some "oomph".

 

I've likewise rolled off the highs of the bass near 800 Hz with a low-pass filter in Cubase's EQ section, and I've rolled off the bottom end of the EP around 200 Hz (and boosted 1 kHz, 2 kHz, and 4 kHz by a dB or less to try and help it cut). But so far it's still a muddy mess, and the EP won't climb out in front of the bass to find a home.
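For anyone curious what such a gentle peaking boost looks like numerically, here's a sketch using the well-known Audio EQ Cookbook (RBJ) peaking biquad. The sample rate and settings below are illustrative only; Cubase's EQ internals will differ.

```python
import cmath
import math

def peaking_biquad(f0, gain_db, q, fs):
    """Audio EQ Cookbook (RBJ) peaking-EQ coefficients ([b0,b1,b2], [a0,a1,a2])."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return b, a

def magnitude_at(b, a, f, fs):
    """Magnitude response of the biquad at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)
    return abs((b[0] + b[1] * z + b[2] * z * z) /
               (a[0] + a[1] * z + a[2] * z * z))

# A 1 dB boost at 2 kHz (Q = 1, 44.1 kHz) peaks at exactly +1 dB:
b, a = peaking_biquad(2000.0, 1.0, 1.0, 44100.0)
boost_db = 20 * math.log10(magnitude_at(b, a, 2000.0, 44100.0))
```

The point of the math: a dB or less of boost is a very subtle shelf in the response, which is why it may not be enough on its own to pull the EP forward.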

 

If you have any tips for EQ'ing these two instruments, I'd love to hear them. I chose the D-20 bass patch because it's very simple, and it has a fingered, electric quality to it. It gives the track a little bit of bottom end, without the detuned, overly fat sound that my XV tends to generate for basses. An analog or VA patch might work, but I'd really hate to switch patches at this point. I'm convinced that some revised EQ or compression strategy might help. Modifying the parts wouldn't work too well either, as there is a percolating synth line that forms the backbone of the track about an octave higher. If I raise the EP into that octave, there will be frequencies clashing all over the place.

 

Any tips are appreciated -- Thanks in advance.

 

Todd

 

Sundown

 

Working on: The Jupiter Bluff; Driven Away

Main axes: Kawai MP11 and Kurz PC361

DAW Platform: Cubase


 

 

Invariably, when I have been in this situation, changing the patch and/or the voicing of the chords has been the answer. If you're not keen to change the bass tone, which I can understand, try changing the EP instead. EPs can have a massive frequency range and are often tricky to place in a mix IMO.

 

Try getting rid of the delay and the compression also. Sometimes you begin processing thinking you're improving a mix, and after a time you get lost. BTW, why did you record the EP in mono? For ambient music my pad or EP would normally be in stereo. It's just more goodier...

 

If you are totally hooked on the sounds and MUST have them both, I would try rewriting the parts, overdubbing either the bass or the EP part while LISTENING to the other. Whilst you're playing, listen for the clashing problem and try to avoid it (much like you'd do if you were playing in a live band).

 

If you're still convinced that the parts are irreplaceable, you could try using a sidechain on a multiband compressor over the EP channel, triggered by the bass: when the bass plays, it would compress, for example, the midrange of your EP part.
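A broadband version of that sidechain idea can be sketched in a few lines of Python (simplified and illustrative only: a real multiband setup would split the EP into bands and duck just the midrange, and every threshold and ratio here is made up):

```python
# Simplified, broadband sketch of the sidechain idea (a real multiband
# compressor would duck only the EP's midrange; all numbers are made up).
def envelope(signal, attack=0.5, release=0.05):
    """One-pole peak follower: rises quickly, falls back slowly."""
    env, out = 0.0, []
    for x in signal:
        coeff = attack if abs(x) > env else release
        env += coeff * (abs(x) - env)
        out.append(env)
    return out

def duck(ep, bass, threshold=0.3, ratio=4.0):
    """Reduce EP gain while the bass envelope sits above the threshold."""
    out = []
    for sample, env in zip(ep, envelope(bass)):
        if env > threshold:
            over = env - threshold
            gain = (threshold + over / ratio) / env  # static hard-knee curve
        else:
            gain = 1.0
        out.append(sample * gain)
    return out
```

In a DAW this is just a compressor on the EP channel whose sidechain (key) input is fed from the bass track.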

 

Usually it's easier to replace dodgy parts, though. I've wasted hundreds of hours mucking with parts that would simply NEVER sit with each other. My 2 cents.

 

 

We are all slaves to our brain chemistry!

 


Thanks Mike - That helps a lot...

 

I think I'll try a different EP tone. As much as I like this particular sound, the mix simply has to glue together.

 

The reason I switched to mono was to try and get better control over the stereo spectrum, as well as thin out the part a bit. It was an approach I learned from The Mixing Engineer's Handbook. One of the engineers would drop a channel and then delay the resultant mono signal and pan it a bit to get the proper width. It was his belief that many synths were too wide, and I can relate to that on many levels. In my case, I didn't drop a channel, but I instead yanked the RH cable, which forces the Roland to output all audio through the LH jack. But in doing so I think I made the problem worse.

 

It's a fairly dense mix, and I used a combination of mono and stereo for the various parts. If it's a simple VA-type part, I'll generally go mono and then use delay/reverb and panning to place it in the stereo spectrum. If it's a patch where I'm relying on the onboard effects (and it's thin enough), I'll generally capture it in stereo. I would guess that roughly a third of the parts are mono.

 

Thanks again for the help, and I'll report back if I can get it to sit properly.

 

Todd

 



I think the bass would still have valuable information in the 1000 to 2000 Hz range, especially in the attack. I'd relax that LPF, and then, when you are happy with the bass, tweak the EP with parametrics to suit. I second the sidechain idea for gluing purposes also.

 

best,

 

Jerry


Hi Todd,

I couldn't tell if you were recording direct or mic'ing an amp, though you did mention standing waves...

 

Anyway, here is an article about recording bass. I've had some success mic'ing the bass amp and going direct at the same time.

 

Bass Recording Tips

 

Hope it helps, and good luck!

 

Regards,

Joe


Quoting Todd:

"The reason I switched to mono was to try and get better control over the stereo spectrum, as well as thin out the part a bit. It was an approach I learned from The Mixing Engineer's Handbook. One of the engineers would drop a channel and then delay the resultant mono signal and pan it a bit to get the proper width. It was his belief that many synths were too wide, and I can relate to that on many levels."

 

This is very interesting, nice. I am a mono fan myself, but I've never dealt with my keyboards in this fashion. I will give it a try.

 

Good luck with the mix and I second posting MP3!


 


Sometimes with pianos and EPs if you boost their attack transient (either with an outboard parametric or with the internal peaking filters on the XV) you can make them pop out of the mix better. Then you can reduce the level of that part so it doesn't fight the rest of the track (in this case your bass). Might be worth a try...

I agree with Mike.

 

Be careful with that trick from the handbook, and be sure the mix sounds good when summed to mono. That trick causes coloration due to phase cancellation. IMHO, it's a terrible trick for acoustic instruments, but it's often OK for epiano and electric guitar, where a bit of coloration isn't an issue.

 

IMHO, you should do most of your mixing in mono (even if the tracks are recorded in lovely stereo) and get that working BEFORE fiddling with any stereo. I'm a *huge* advocate of stereo, so this isn't any anti-stereo stump thumping.

 

It's just that in most cases, a mix needs to sound good in mono, so it'll sound good when heard from the next room or any number of other situations, including crappy living room stereo setups.

 

Furthermore, it's *harder* to make a good crisp clear mix with good separation in mono, and that's exactly why you want to be forced to do it.

 

When I'm adding FX, I'll flip to stereo and add the FX to taste. But then back to mono monitoring to get the parts to fit well together. When mono sounds great, then fine tune the stereo, and double-check in mono.

 

You'll end up with a far clearer, cleaner, listenable mix.

 

BTW, if you can't find an EP patch that works, then the problem is in the arrangement, not the patches or scooping.


This is why this forum rocks... I post a detailed question, and in less than 24 hours there are several replies, all of which offer great advice. I'm really grateful...

 

To Joe P, the bass is just a simple electric bass patch from my D-20. I recorded it direct, and the only reason I mentioned standing waves was for mixing/monitoring. When the room is deficient, it can be very difficult to make good EQ decisions, and my room still needs some work. Four more bass traps are going in this weekend, and more will follow. I have several already, but a 15' x 12' x 8' room has lousy ratios and bad standing waves near 70/80 Hz. Getting the bottom end right requires a lot of trial and error right now.
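Those 70/80 Hz problems line up with what basic axial-mode arithmetic predicts for a 15' x 12' x 8' room. A rough sketch (real rooms also have tangential and oblique modes, and the speed of sound here assumes roughly 20 °C):

```python
SPEED_OF_SOUND = 343.0  # m/s at ~20 degrees C (an assumption)
FT_TO_M = 0.3048

def axial_modes(length_ft, count=3):
    """First few axial standing-wave frequencies (Hz) for one room dimension."""
    length_m = length_ft * FT_TO_M
    return [n * SPEED_OF_SOUND / (2 * length_m) for n in range(1, count + 1)]

modes = {dim: [round(f, 1) for f in axial_modes(dim)] for dim in (15, 12, 8)}
# The 8 ft height's first mode (~70 Hz) and the 15 ft length's second
# mode (~75 Hz) stack up right around the reported trouble spot.
```

That clustering of two axial modes within a few Hz of each other is exactly the kind of thing that makes low-end EQ decisions in that room unreliable.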

 

Unfortunately, sidechaining isn't an option, as I only have Cubase SX 2.0. I will ultimately upgrade to Cubase 6 and WaveLab 7, but that's going to be a costly upgrade (all new hardware and software). I may even try to find Cubase 5.x, as it could be two years before I can make the 64-bit leap.

 

I'm going to try some different patches, dial back the EQ and compression, and get the bass right first. I don't have a lot of gear, but there are still a lot of patches in what I have (not to mention what I can program on my own). I can find a new tone that blends better.

 

I wish I could post MP3, but for this particular project I can't at the moment. Depending on how it turns out, perhaps I'll be able to at a later date.

 

Thanks again for all the help, and I'll post back once I find the right sound solution.

 

Cheers -

 

Todd

 



 

Hey guys,

 

I just wanted to post a quick update...

 

I took everyone's advice, and I've been working hard on this all day. I was starting to lose hope (a lot of EP patches I tried were just not the color or tone I was looking for), and then I hit a lucky break...

 

I took the ROM-26 patch from my hardware W/S ("Tine Piano"), and used that for the EP chords (with a stereo recording). It was better than before, but still not what I was looking for. I really don't want to emphasize the tine sound... I want the EP body. So I decided to isolate the three tones that make up that patch, and I was able to solo the body, get rid of the tine attack transients, and then perform some basic envelope adjustments to soften the attack on the body a bit more and reduce the high frequency content (with the W/S LPF). I got rid of all onboard effects, and now I'm re-applying some ambience with my UAD EMT-140 plate and applying a bit of chorus with the DM-1 module from the UAD channel strip. There is no compression on the EP track whatsoever...

 

I also took the D-20 bass patch (mono audio track) and slammed the crap out of it with the 1176SE plug-in (UAD). The attack is relatively fast with a 12:1 ratio (and about 12 dB of gain reduction), which gives it some "oomph", as well as a little bit of grit. I took Tusker's advice and raised the LPF cutoff frequency to allow the attack to snap a bit, which helps distinguish it from the mellow EP. And I took Jeff's advice and I'm checking the mix constantly in mono. When I do that, the basic tone and balance of the track sounds OK, but some of the ambience disappears (e.g. the stereo delay effects and some of the reverb tails). I don't know how common this is, or if you should hear consistent delay feedback and reverb tails in both stereo and mono. If so, I've got work to do...
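As a sanity check on those 1176-style numbers, a static hard-knee compressor curve shows how far over the threshold a 12:1 ratio has to be driven to land around 12 dB of gain reduction (the level values below are hypothetical, just to illustrate the arithmetic):

```python
def gain_reduction_db(input_db, threshold_db, ratio):
    """Static hard-knee curve: above threshold the output rises at 1/ratio
    the input rate, so the reduction is over - over/ratio (returned as dB cut)."""
    over = max(0.0, input_db - threshold_db)
    return over - over / ratio

# A peak hitting ~13 dB over the threshold at 12:1 gives roughly 12 dB of GR
# (the -5 dB input and -18.1 dB threshold are hypothetical values):
gr = gain_reduction_db(-5.0, -18.1, 12.0)
```

In other words, at high ratios the gain reduction meter reads almost exactly how far the signal exceeds the threshold, which is why 12 dB of GR at 12:1 qualifies as "slamming" it.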

 

To make a long story short, I'm not out of the woods yet, but it's getting much better. Will this be my signature mix? Probably not... But I've got a deadline to meet, and it will certainly suffice. Enough that I can say, "yeah, it sounds pretty good after all..."

 

Tomorrow may be another story (mixing/engineering and home mastering can change a lot of variables). But so far, I'm pleased with the changes.

 

I'll post back once it's finished... Thanks again.

 

Todd

 



Stereo reverb sticks out more than mono, because it tickles our mind's acoustic location circuitry.

 

This is a good thing, IMHO. It means that we can use less reverb to get a nice stereo image, that won't collapse to quite so much mush in mono.

 

If you're losing too much reverb in mono, consider using a mono reverb rather than a stereo one (or using both).

 

If by "stereo delay" you mean delaying one side a bit to build a stereo image, when you collapse to mono that will cause an EQ difference due to phase addition & cancellation (similar to comb filtering). The length of the delay governs the coloration. IMHO, this change from a spatial effect to a coloration effect when collapsing to mono can be good or bad.
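The coloration from summing the dry signal with a one-sided delayed copy follows directly from the math: the mono sum behaves like a comb filter whose null spacing is set by the delay length. A quick sketch (the sample rate and delay are arbitrary illustrative values):

```python
import cmath
import math

def mono_sum_magnitude(freq_hz, delay_samples, sample_rate):
    """Magnitude of (dry + delayed copy) at one frequency: |1 + e^{-jwD}|."""
    w = 2 * math.pi * freq_hz / sample_rate
    return abs(1 + cmath.exp(-1j * w * delay_samples))

fs, D = 44100, 20
first_null = fs / (2 * D)   # deep cancellation at ~1102.5 Hz
first_peak = fs / D         # full addition (2x) at ~2205 Hz
# Nulls repeat every fs/D Hz above the first, giving the comb shape.
```

A longer delay pushes the first null lower and packs the nulls closer together, which is why the delay time governs how the coloration sounds.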

 

There is a trick to avoid phase cancellation when collapsing to mono, but it comes with a side effect. It's called "Mid-Side" technique, and I use the mda-vst "Image" plugin to do it (simple free one from http://mda-vst.com if that's still there).

 

Basically, instead of sending clean to left and delay to right, you send clean plus delay to left and clean minus delay to right. When summed to mono, the delay part cancels out totally. (If you're using a plugin intended for stereo imaging by delays for this, the plugin might be working this way by default.)
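The clean-plus-delay / clean-minus-delay idea can be verified numerically (toy sample values and an integer-sample delay, for simplicity):

```python
# Toy demonstration of the "clean + delay / clean - delay" trick described
# above, using made-up sample values and a simple integer-sample delay.
def delayed(signal, samples):
    """Delay by prepending zeros, keeping the original length."""
    return [0.0] * samples + signal[: len(signal) - samples]

def ms_widen(dry, delay_samples):
    """Left gets dry plus the delayed copy; right gets dry minus it."""
    wet = delayed(dry, delay_samples)
    left = [d + w for d, w in zip(dry, wet)]
    right = [d - w for d, w in zip(dry, wet)]
    return left, right

dry = [0.5, -0.25, 1.0, 0.0, -0.75, 0.3]
left, right = ms_widen(dry, 2)

# Summed to mono, the delayed component cancels exactly, leaving 2x dry:
mono = [l + r for l, r in zip(left, right)]
```

The 2x level in the mono sum just reflects summing two channels; the important part is that the delay term vanishes completely, which is the whole point of the trick.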

 

The side effect is that instead of getting the coloration when summed to mono, you get the coloration in each side. Folks in the middle won't hear coloration; they'll hear both sides and their brains will interpret it as phase-based location information. But folks next to one side or the other will hear the coloration. As above, this is either good or bad depending on whether the original source tolerates coloration. IMHO, electric guitar and epiano do; voice and acoustic piano/guitar do not.

 

BTW, if you're using a stereo reverb, it might also be using the Mid-Side technique. Most stereo reverbs work by doing reverb independently on the left and right sides, or on a mono signal, by adjusting the reverb params a little between left and right. That's the model I was assuming above. But some reverbs (especially older hardware reverbs) just use the Mid-Side technique to create a nice stereo image ... and one that disappears completely in mono (assuming you don't apply any subsequent nonlinear FX to the reverb returns).


Maybe we should have a thread titled "The Recording Room"?

 

Good idea? :idea:

 

I don't have the guts to start it myself..... :blush::)

 

Or maybe we should make a pit-stop over at the "Project Studio" forum to discuss such matters? :idea:

 

I swear to you, there are folks that occasionally lurk over there that have some smarts with these kinds of things... ;)


 

Thanks Jeff...

 

The delays in question are used for ambience. The times are sync'd to tempo and are sometimes as fast as 16th notes, or as slow as quarter notes, depending on the part. They aren't choruses or being used for pseudo-stereo; I stopped doing that immediately after starting this thread. Usually I'll pan the delay to the other side of the spectrum so that the repeats are more audible, or I'll pan it just inboard or outboard of the original signal. And I've noticed that in mono, sometimes those repeats aren't as audible.

 

Thanks again for the tips on mid/side. I've heard the term before, but I wasn't sure what it was or how it worked.

 

I'll be working on this project again tomorrow, and hopefully I'll have it wrapped by Monday. Getting the bottom end right is a bear. Synths are great, but I really like acoustic and electric instruments better. I just don't have a real bassist handy.

 

Thanks again.

 

Todd

 

 



 

If you're hearing the muddiness in different rooms and systems, I doubt adding bass traps will help. It's probably the recording itself.

 

This may or may not help. I've found that a LOT of synths (sample based) have very exaggerated bass EQ on many sounds. Check the edit mode of your EP and Bass programs to see how the equalization is set. Also check the EQ parameters in the synths' effect sections. Sometimes those EQs are also boosted way high. If they are, try lowering the bass EQ down to zero boost. Hopefully that will work.

 

If not, I'd guess attenuating more of the EP's low EQ would help. Try attenuating by 6 dB or more, until it fits in the mix with the bass guitar patch. Maybe even try a higher cutoff if you can.

 

The next thing I would try is to maybe play the EP part in a higher register, or play a different part altogether.

 

Is the bass part playing single notes, or chords?

 

 

 

 


 

Hi Odyssian... Great observation (about onboard EQ). The D-20 doesn't have onboard EQ, but it might explain why so many of the bass presets on the XV are dissatisfying.

 

The reason I think bass traps will help is for making decisions. It's hard to EQ and set relative levels when there are peaks and nulls in the room. If I can smooth out the room, I can make better judgments against a reference recording.

 

I can't really move the parts (it would result in clash elsewhere), but I might be able to use a slightly different voicing. But I'm pretty close now... A bit of hard work with panning and EQ and I think I'll nail it.

 

But to answer your question, the bass is just a single note. The EP is actually only two notes. It's a descending progression of fifths, where (for example) C2 will be the bass note, G2 will be the bottom of the EP chord, and C3 will be the top of the EP chord. As I descend to the next chord, Bb1 becomes the bass note, and F2 and Bb2 become the EP notes.
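Plugging that voicing into equal-tempered pitch math shows how crowded the low end is (a quick illustrative sketch, assuming A4 = 440 Hz):

```python
# Illustrative equal-tempered pitch math (A4 = 440 Hz assumed).
NOTE_INDEX = {'C': 0, 'Db': 1, 'D': 2, 'Eb': 3, 'E': 4, 'F': 5,
              'Gb': 6, 'G': 7, 'Ab': 8, 'A': 9, 'Bb': 10, 'B': 11}

def note_freq(name, octave, a4=440.0):
    """Frequency of a note, using MIDI-style octave numbers (C4 = middle C)."""
    midi = 12 * (octave + 1) + NOTE_INDEX[name]
    return a4 * 2.0 ** ((midi - 69) / 12)

# First chord of the progression: bass on C2, EP on G2 and C3.
chord = {n + str(o): note_freq(n, o) for n, o in [('C', 2), ('G', 2), ('C', 3)]}
# C2 ~ 65.4 Hz, G2 ~ 98.0 Hz, C3 ~ 130.8 Hz: every EP fundamental lives
# well below the ~200 Hz rolloff mentioned earlier, in bass territory.
```

With the whole voicing packed inside roughly an octave above the bass note, the EP's fundamentals and the bass's low harmonics are fighting for the same narrow band, which is consistent with the mud described at the top of the thread.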

 

 

Sundown

 



Another thing I would like to suggest is that you start A/B'ing your mix with a commercial mix of similar material. You will immediately hear differences (if you haven't already been doing this, or you aren't a top-flight mixer).

 

The first time I did this I had to turn my vocals up 6 dB, my snare down 3 dB, and my bass down 6 dB. It was a real eye-opener. You just get so used to listening to "your" sound that you lose context with what ACTUALLY sounds pro. Try it!


 


...and I forgot to mention it is a great way to counteract standing wave problems, etc. You simply judge your mix against a close commercial mix. It will sound a bit wonky in your room too; just kind of copy it. The first time I "nailed" a mix, I A/B'ed my tune (an R&B female singer thing) with "Jenny from the Block"! Did it in a crappy enough room, with some nice gear but nothing stellar, and it sounded 90-95% as good as J.Lo, at least to me.


 


 

Hi Mike,

 

I do listen to reference tracks, and it indeed helps. On a side note, there is a fun exercise I play with friends, but I'll share that on another day...

 

The one challenge with mixing to a reference is the difference between a mastered track and a mix. I do my own mastering (or at least I try, just like many home musicians), but a finished mix is not going to have the punch or level that a mastered track will. I recently read an article by one of Avid's top engineers (on the Pro Tools side), and one of his biggest recommendations was to avoid trying to get the mastered sound while mixing. In other words, leave 6 to 12 dB of headroom below full scale (dBFS) so that the mastering tools and mastering personnel can do their work.

 

To compensate, I can obviously bump the monitoring level for the A/B compare. But I also keep a high quality limiter and buss compressor in my master section, though I leave them bypassed. By doing this, I can quickly turn them on to get some idea of what limiting and buss compression is going to do to the overall balance and sound, but I'm not applying limiting and compression too early in the development. It's a compromise, but it's the best I can do right now.

 

Thanks again for the feedback.

 

Sundown

 



Using the limiter & master bus compression that way is a good idea.

 

Bass traps hardly work, and there's a better way: near-field monitoring.

 

The reason bass traps don't work is that the pure ideally perfect bass trap is equivalent to a hole in the wall of the same frontal area. So, if you build a monster bass trap with a 1m x 1m front, it's no better than opening a big window (except that you can move it, which can help). Traps generally won't fix a room, though used judiciously (and with incredible attention to detail) they can fix some problems sometimes.

 

The secret to near-field monitoring is to be closer to your speakers than to the walls, so that most of what you hear is direct rather than reflected. Make sure your monitors are well away from any walls! If not, you won't be hearing proper EQ, especially in the low end.

 

Are you doing any Fletcher-Munson compensation (loudness compensation)? I've never done that scientifically, but it might help. No doubt there are compensation plugins, and you'll need an SPL meter to calibrate.

 

But I understand your frustration about getting the bass right. I relied on comparison monitoring for that (and didn't do nearly enough, and the bass in my mixes leaves much to be desired). While near-field monitoring (in the middle of as big a room as possible) helps, it seems to me that the louder I turn it up, the less the near-field helps. I'm not sure that's technically correct, but it's how it seems to me. As though I get more standing waves set up at higher volumes (which sounds silly when I say it!)

 

I don't know WHAT someone might have meant by "leaving 6 to 12 dBFS of headroom". That's total idiocy.

 

First, you should produce a 32-bit floating point mix, and master from that. If you do that, the FS level is mostly irrelevant, because adding gain to bring it up to whatever level is desired for mastering is nearly lossless (just rounding error for one multiply using 32-bit floating point math, with a 24-bit mantissa, so it's what, -140 dB?)
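The rounding-error claim is easy to check by simulating single-precision math with Python's struct module (a sketch; the sample and gain values are made up, and real DAWs do this natively):

```python
import math
import struct

def f32(x):
    """Round a Python float (double precision) to the nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

sample = f32(0.123456789)     # an arbitrary 32-bit sample value
gain = f32(3.981)             # roughly +12 dB of makeup gain (illustrative)
exact = sample * gain         # double-precision reference product
single = f32(sample * gain)   # the same multiply rounded back to 32-bit
rel_err = abs(single - exact) / abs(exact)
err_db = 20 * math.log10(rel_err) if rel_err else float('-inf')
# The relative error is bounded by 2**-24, i.e. around -144 dB: inaudible.
```

The 24-bit mantissa bounds the relative rounding error of one multiply at 2^-24, which is where the roughly -140 dB figure comes from.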

 

However, many mastering houses are too ignorant to realize this and require 24-bit mixes. Why waste any bits at the top? Yeah, it's still small potatoes, but ... what? You're supposed to leave bits unused? For what purpose? To "leave room"? What does that even mean?

 

There is so much complete nonsense bandied about in digital audio it's a riot.

Link to comment
Share on other sites

Archived

This topic is now archived and is closed to further replies.
