
Please explain mono/stereo patches, mixing, monitoring, etc


Recommended Posts

It's hard to find good, straightforward info online, so I figured I'd ask the experts here.


There are so many "moving parts" in the chain from audio source to audience, and I am sure, as I contemplate my signal chains for a setup, I misunderstand what's going on.


1. I've got a keyboard (in this case, Casio PX5s, but it could be anything else). This keyboard plays back sounds that might or might not be programmed to send different audio to the left and right outputs of the keyboard (maybe via stereo effects, or maybe it's baked into the patch in a different way? Not sure here).


2. The board itself has two outputs, for left and right. One of them, however, does 'double duty' as a mono out. This "sums" both the left and right channels into one. Is this just a mechanical adding of the signals, or is there some other processing/logic that happens? Also, I've heard people say things like "summing a stereo signal leads to phase-related problems." What does this mean?


3. Let's say I take the stereo outs and route them through a mixer (like my old Mackie). If I pan those two channels to the same side, let's say the left, will the main left output be now giving me the same perceived audio as if I had taken the mono out from my Casio?


4. Is there a difference between panning the two halves of a stereo signal to center and listening to just the left output, versus panning both halves to the left and listening to just the left output? Is there a volume difference? Anything else?


I've somehow managed to get this far without thinking too hard about this, but now that I do, I'm realizing how much I don't actually understand what's happening. :)




Also, I've heard people say things like "summing a stereo signal leads to phase-related problems." What does this mean?


If two signals are simply added together, a peak in one can combine with a trough in the other, leading to a cancellation. Or two peaks can combine to boost the signal at that moment in time. And so on, through endless combinations. All of this distorts the original frequency spectrum.
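You can see that addition in numbers. A minimal sketch (Python, standard library only, hypothetical 440 Hz test tones) of what happens when two identical sines are summed in phase versus 180 degrees out of phase:

```python
import math

def sine(freq_hz, phase_rad, sample_rate=48000, n=48000):
    """One second of a sine wave at the given frequency and starting phase."""
    return [math.sin(2 * math.pi * freq_hz * t / sample_rate + phase_rad)
            for t in range(n)]

def peak(signal):
    """Peak absolute level of a signal."""
    return max(abs(s) for s in signal)

left = sine(440, 0.0)

# In phase: peaks line up with peaks, so the sum is twice as loud.
in_phase = [l + r for l, r in zip(left, sine(440, 0.0))]
print(round(peak(in_phase), 3))      # 2.0

# 180 degrees out of phase: every peak meets a trough -- near-total cancellation.
out_of_phase = [l + r for l, r in zip(left, sine(440, math.pi))]
print(round(peak(out_of_phase), 3))  # 0.0
```

Real program material contains many frequencies at many different phase offsets between L and R, so a mono sum boosts some of them and cancels others, which is exactly the spectrum distortion described above.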


Generally speaking, mixing is mixing regardless of how it's accomplished. Mixing your L+R signals by panning both to the same side (within your keyboard) is the same as bringing them separately into an outboard mixer and combining them there. Many keyboards allow you to use one of the output jacks (most typically the left one) to get a summed L+R signal, assuming there is no plug inserted into the other (right) jack. In this case the mixing is done passively in the output electronics, but the result is the same (aside from possible level differences).
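As a sketch of what that summing amounts to (Python for illustration; real output stages do this with resistors, and the exact pad, if any, varies by device, so the 0.5 factor here is an assumption):

```python
def sum_to_mono(left, right, pad=0.5):
    """Combine L and R samples into one mono signal.
    The pad (here 0.5, i.e. -6 dB, an assumed value) models the level
    drop a passive summing circuit can introduce so that two
    full-scale channels can't clip the output."""
    return [(l + r) * pad for l, r in zip(left, right)]

# Identical material on both sides comes out at its original level;
# material present on only one side comes out quieter.
print(sum_to_mono([0.8], [0.8]))  # [0.8]
print(sum_to_mono([0.8], [0.0]))  # [0.4]
```

This is also why the "possible level differences" caveat matters: the same L+R content can land at different levels depending on where (and with what pad) the summing happens.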


Many Nord keyboards have a "mono" button that sums the L&R signals into a mono signal. It is said to do something else in the process to improve mono summing of the stereo signal, but I have no idea as to what that might be.

DISCLAIMER - professionally affiliated with Fulcrum Acoustic www.fulcrum-acoustic.com

That's already WAY deeper thinking than I have done on the issue, and it's one I've dealt with since I've been gigging. Not that understanding stuff is bad :) But you probably won't find answers that are universal, though you might find this stuff out for your specific gear.


In short: hook things up and try it, and go with what sounds best. That is the true and only answer that matters. Your keyboard's manual might say to use the mono out, but if you get better results using both outputs into a mixer and summing there, then that's the right approach (for you).


If, like me and many others, you are running into a mono PA with mono monitoring, you are mainly listening to what happens to your stereo patches. I've sometimes had to go in and edit patches on various keyboards to make them more mono-friendly; my Motif was notorious in that regard. Generally it's the effects that cause issues, and editing just those has sometimes made a big difference.


My thinking has always been -- once you make a decision to go stereo -- you're sort of committed through the entire signal chain: stage amplification, monitoring and FOH.


The primary reason appears above: voices in stereo sound different than voices in mono, and the difference can be enough to make you want to optimize for one or the other.


Some will swear that mono is better, others stereo. That's not my point here, really -- just that it's usually not workable to mix and match.

Want to make your band better?  Check out "A Guide To Starting (Or Improving!) Your Own Local Band"



I don't know anyone who actually LIKES mono, it's just that it's easier and often the lowest common denominator (PAs) are mono :D


I also wouldn't mix and match, if only because I like to monitor the same signal that I'm sending to FOH.


1. I've got a keyboard (in this case, Casio PX5s, but it could be anything else).


I'm just going to keep my comments confined to the PX5s. If you are playing a PX5s piano through a mono system, select a mono piano, because the stereo ones don't sum as well. All the other sounds are fine because they start as mono samples (if memory serves). It's been a while since I had mine, so FWIW.


At the risk of "tl;dr", here goes: the "S" word has certainly ignited a few threads here. Regarding summing, I don't think it matters where it's done: in the synth, at the outputs, at your mixer, etc. An L&R pair should sound the same summed any way, as long as the phase relationships are the same and other variables (level, EQ, etc.) remain constant. Though I have no knowledge of a specific example, I suppose it's possible that one channel might get phase-flipped or shifted somehow in a mixer or somewhere else, and that would affect the sound if summed. Phase cancellation is, IMO, the #1 reason stereo sounds summed to mono sound bad. The "stereo-ness" of a sound can be either baked into the samples or created by effects that add ambience, like reverb, chorus, or Leslie.


To me, the question is: what kind of sounds are you looking to use, and how do you want to hear them? With acoustic instruments like piano, strings, real brass, etc., these are sounds we grew up listening to acoustically, so with our binaural hearing we're used to hearing them in a space; in other words, there's a certain amount of ambience involved, even if the sounds originate at one "point." For me, hearing these kinds of sounds come out of a single point source, i.e. mono, doesn't make it. I need stereo, but that's just me; I know there might be others with a different opinion! OTOH, give me the sound of a plain Rhodes sample through a Fender Twin amp sim and I can play that in mono all day, because that's "normal" for that sound. The bottom line, again FOR ME, is that almost any sound I use is gonna have some ambience in it, either straight-up reverb or chorus/Leslie, etc. I like the feeling of space and hearing these sounds around me. YMMV.


Heh. For sure, I don't intend this thread to open the "which is better/more true" debate of mono vs. stereo. It's more of a practical question of what happens when stereo signal is converted to mono, and does the sound get "summed" differently depending on where you do it (in my DAW vs. using the mono out of a board vs. panning in my mixer vs . . . )?


Bill H., can you elaborate on what you mean when you say they "don't sum well", and how or why that might be different than other stereo sources that do sum well?




I actually had a whole paragraph talking about phase cancellations and how they could affect the sound, but my post was already too long. Shoulda left that in and deleted the other stuff! Yeah, we don't need another go-around on which is better. The answer is, of course, BOTH! :)

Keep in mind that a mixer will treat a stereo input differently than separate left and right signals on 2 channels.


Mono channels have pan, so in this case you would pan the left channel left and the right channel right. If you panned them both to center, it would sum them to mono.


A stereo input channel technically has balance, not pan. When you turn it to the left, it really just turns down the right channel, and vice versa... it doesn't do any summing.
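The pan-versus-balance distinction can be sketched in a few lines. This uses simple linear gain laws purely for illustration (real mixers typically use a constant-power pan law, and the sample values here are made up):

```python
def pan_mono(sample, pan):
    """Pan a mono channel onto a stereo bus.
    pan in [-1.0, 1.0]: -1 = hard left, 0 = center, +1 = hard right.
    Both bus sides can carry the signal, so two channels panned the
    same way get added together on the bus."""
    return sample * (1 - pan) / 2, sample * (1 + pan) / 2

def balance_stereo(left, right, balance):
    """Balance control on a stereo channel: turning left only turns
    down the right side (and vice versa). Nothing is summed."""
    if balance < 0:
        return left, right * (1 + balance)    # duck the right channel
    return left * (1 - balance), right        # duck the left channel

# Keyboard L/R brought in on two mono channels, both panned center:
# the bus adds them -- that's a mono sum.
kl, kr = 0.8, -0.5
bus_left = pan_mono(kl, 0)[0] + pan_mono(kr, 0)[0]
print(round(bus_left, 3))  # 0.15, i.e. (0.8 - 0.5) / 2

# Same signals on a stereo channel with balance hard left:
# the right side is simply muted; the left is untouched, no summing.
print(balance_stereo(kl, kr, -1.0))
```

So panning both mono channels to one side gives you a genuine L+R sum on that side, while a stereo channel's balance knob never combines the two.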



Acoustic/Electric stringed instruments ranging from 4 to 230 strings, hammered, picked, fingered, slapped, and plucked. Analog and Digital Electronic instruments, reeds, and throat/mouth.


.....It's more of a practical question of what happens when stereo signal is converted to mono, and does the sound get "summed" differently depending on where you do it (in my DAW vs. using the mono out of a board vs. panning in my mixer vs . . . )?.....


I believe the critical variable is whether the original sound source was designed to sum well to mono, as opposed to which piece of technical gear is doing the summing.


As a gross oversimplification --


We perceive stereo sounds by means of slight phase difference/delay in the sounds. Our brain does the math, knowing the speed of sound and how far apart our ears are.


Keyboards, not being real pianos, have to do the same basic thing, so they offset the signals coming out of the left/right speakers to fool our brains.


When these offset signals are combined back into one channel, there are "phasing issues".


Drop a pebble into a pool of water, and observe the waveform.


Now drop two pebbles into the pool, near each other. Observe what happens when their waveforms meet. That is a "phasing issue". It doesn't look like the original waveform, and it doesn't sound like the original waveform.


Mono is like having only one pebble.


Stereo into two speakers is like having two pebbles and two pools of water.


Stereo into one speaker is like having two pebbles and one pool of water. The further apart the pebbles, the crazier the waveform. Unless they are totally different (way far apart).
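The "two pebbles, one pool" case can be put in numbers: summing a signal with a slightly delayed copy of itself cancels some frequencies and boosts others (a comb filter). A small sketch, assuming a hypothetical 1 ms offset between the two channels:

```python
import math

def summed_gain(freq_hz, delay_s):
    """Gain at one frequency when a signal is summed with a copy of
    itself delayed by delay_s seconds:
    |1 + e^(-j*2*pi*f*d)| = 2*|cos(pi*f*d)|."""
    return 2 * abs(math.cos(math.pi * freq_hz * delay_s))

delay = 0.001  # 1 ms between "left" and "right"
for f in (250, 500, 1000):
    print(f, round(summed_gain(f, delay), 3))
# 250 Hz comes through boosted (~1.414), 500 Hz cancels completely (0.0),
# and 1000 Hz doubles (2.0): notches and peaks marching up the spectrum.
```

That frequency-dependent pattern of notches and peaks is why a summed stereo patch can sound thin or hollow rather than just quieter.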



Hammond: L111, M100, M3, BC, CV, Franken CV, A100, D152, C3, B3

Leslie: 710, 760, 51C, 147, 145, 122, 22H, 31H

Yamaha: CP4, DGX-620, DX7II-FD-E!, PF85, DX9

Roland: VR-09, RD-800



Harmonizer's post above is correct, and it's the principal answer.


Explaining exactly what happens when L & R stereo signals are "summed" into a mono signal would require going fairly deep into technical stuff, math including vector analysis, and so forth.


Suffice it to say that a LOT of how well a particular keyboard sums a particular sample (of whatever) depends heavily on just how the sample itself was generated, and on exactly how the company chose to do the summing internally. Some companies seem to pay more attention than others to how their samples are generated, so one company may get better results than another. Among the things in play here is whether the samples were originally made from the acoustic viewpoint of a person playing the instrument (up close) or from the viewpoint of an audience (L & R reversed, and not nearly as close).


Another thing that greatly affects condensing a stereo signal to mono is the effects chain (either internal to the instrument, or external effects). Example: A Hammond Organ with a tone cabinet (no Leslie) is basically a mono source to begin with. However, add a Leslie cabinet or simulator, and there are perceived stereo aspects (not really quite true stereo, but this is how our ears perceive it). A lot of clones (imitators of the original Hammond tone wheel organ) don't really handle this well, which results in their Leslie emulation sounding quite considerably better if rendered in stereo than if summed to mono.


Some keyboards try to get around this by having some specific patches that are identified as mono. If done properly, this means that particular patch was mono from the start, so that it sounds the same if played back through two separated speakers whether being stereo fed or mono fed (and one problem here is that the effects chain may still be optimized for stereo).


Specific answer on phasing: I will illustrate this simply. Take a pure sine wave signal, like a Hammond with only the 8' drawbar pulled out, or an acoustic flute that is not being overblown. ANYWHERE in the signal chain, there may be differences in processing that leave the R signal either in phase with the L signal or 180 degrees out of phase with it. If both signals are in phase, the result to the ear is that the two signals add, so it is still a sine wave, but louder. If they are exactly the same amplitude and out of phase, the result is almost complete cancellation of the sound.


To make things even more complex, if a stereo system is being used in a typical venue, only a small percentage of the audience will be in the "sweet spot," the area where both L & R signals are perceived at the same levels and similar phases. Due to acoustics (audio bouncing off walls, floors, ceilings, etc.), most of the audience will hear one side considerably louder than the other, destroying the stereo image. There can also be partial phase peaking and cancellation, which is frequency-dependent, so there are very noticeable differences if one, say, plays single notes up and down the scale. Wes's post is a very good illustration of this (posted while I was writing mine).


Howard Grand|Hamm SK1-73|Kurz PC2|PC2X|PC3|PC3X|PC361; QSC K10's

HP DAW|Epi Les Paul & LP 5-str bass|iPad mini2

"Now faith is the substance of things hoped for, the evidence of things not seen."




This topic is now archived and is closed to further replies.
