

Mike Rivers

Member
  • Posts

    1,007
  • Joined

  • Last visited

Posts posted by Mike Rivers

  1.  

    Wow! There are plenty of interesting articles on your website! Almost all the things I wanted to know about audio!

    Thanks a lot!

     

    Glad to hear that you found what you wanted to know. The reason those articles are there is that most are about fundamental principles that apply to both analog and digital systems. The period when I was writing them (many were articles for a series I ran in Recording Magazine) was kind of the transition period, and the idea was to help the people who started out with computer audio, never having had to connect two things together with a cable, realize that the basics are the basics, whether analog or digital, and they remain pertinent today.

     

    Craig has a lot of more practical stuff on his web site - how to use the things you've got - and it's well written and presented. There's room for both old school and new school here without being nostalgic or radical.

  2. At present I own a 2014 Focusrite Scarlett 2i4 (I suppose it is a 1st generation 2i4), which is connected to a mid-2012 MacBook Pro. I use it as an input/output for my two keyboards: a digital piano (Yamaha P45) connected via USB to the MacBook and an Arturia KeyLab 61 MkII connected via MIDI to the Focusrite. For software I am using MainStage 3.5, since I mainly perform live gigs, but I sometimes record music in my home studio with Logic Pro X, so I also use the two mic/line inputs for voice and guitar.

     

    The Scarlett 2i4 has not done a bad job so far, but my main concern about it is the apparently insufficient output signal. Unfortunately I am not (yet) a super expert in theoretical audio concepts, so I will try to explain the problem in simple words: it seems like the interface is not loud enough. When on stage, I usually connect the audio interface to the mixer through a TS jack cable, set the Monitor knob to "max" and the direct monitor knob all the way to the "Playback" side (this last point is still not completely clear to me...).

    Although the Monitor knob is set to max (the master level in MainStage is at 0 dB), it seems that the signal arriving at the mixer is weak. Is the so-called "Maximum Output Level" responsible for this?

     

    Yes, that's exactly the problem. The science behind this is called "gain structure" and, in short, it means that the output level of a source should be reasonably matched to its destination. You have the knobs set correctly, but the mixer input that you're feeding from the Focusrite's output doesn't have enough gain to get the output to the speaker up to the level that you want. There's an article on gain structure on my web site that you might want to peruse.

     

    Looking at the specs for the Scarlett 2i4, I see that the maximum output using the unbalanced outputs (TS cable) is pretty wimpy. If your mixer inputs are balanced and you use Outputs 1 and 2 with TRS cables, you'll get a few more dB out of it.
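    To put rough numbers on the mismatch, here's a small sketch. The levels below are the generic "consumer" -10 dBV and "pro" +4 dBu nominals, not the 2i4's published figures:

```python
import math

def dbu_to_volts(dbu):
    """0 dBu is referenced to 0.775 V RMS."""
    return 0.775 * 10 ** (dbu / 20)

def dbv_to_volts(dbv):
    """0 dBV is referenced to 1.0 V RMS."""
    return 1.0 * 10 ** (dbv / 20)

# "Pro" balanced nominal level vs. "consumer" unbalanced nominal level
pro = dbu_to_volts(4)         # ~1.228 V RMS
consumer = dbv_to_volts(-10)  # ~0.316 V RMS
gap_db = 20 * math.log10(pro / consumer)
print(f"+4 dBu = {pro:.3f} V, -10 dBV = {consumer:.3f} V, gap = {gap_db:.1f} dB")
```

    That roughly 12 dB gap between nominal levels is the sort of shortfall a mixer input has to make up with its own gain, which is why a wimpy unbalanced output can leave you short.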

     

    So, first of all, I would like to know if I am making some mistake connecting or setting the interface that results in a weak output signal.

    No, you're connecting it correctly, though as I said, if your mixer's line inputs are TRS, try swapping out the cable.

     

    Secondly, since in any case I would like to upgrade my audio interface, I would like to ask for advice on the purchase. The main features should be: USB connection (is USB3 significantly better than USB2?), 2 line/mic inputs and 2 outputs, and MIDI 5-pin I/O sockets. Furthermore, the new interface should be generally better and possibly louder than the Scarlett 2i4 I currently own.

     

    The Scarlett is a good interface, but it's one where they cut costs, and not providing a "pro" output level is one way they did it. I had a similar Behringer interface and was annoyed with its low output level (for the same reason as the Focusrite). When I wanted a 4-input interface, I chose a TASCAM US4x4HR, and one reason for my choice was the proper output level. You might want to take a look at it.

     

    As far as the difference between USB3 and USB2, at the number of channels you're working with, USB2 is fine, though I think you'll find that most interfaces on today's market have USB3 or USB-C. If you were doing 48 channels at 96 kHz you'd want USB3, but at standard sample rates (44.1, 48 kHz) USB2 is fine. The best part about knowing this is that if your computer doesn't have USB3 ports, you can use USB2 comfortably. The connector is the same, and most interfaces that have USB-C include a cable to go between that and USB 2/3.
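    The arithmetic behind that claim is simple enough to sketch - the raw PCM payload is just channels x sample rate x bit depth. The figures below assume 24-bit samples and ignore protocol overhead:

```python
def stream_mbps(channels, sample_rate_hz, bit_depth):
    """Raw PCM payload rate in megabits per second (no protocol overhead)."""
    return channels * sample_rate_hz * bit_depth / 1e6

USB2_SIGNALING_MBPS = 480  # high-speed signaling rate; usable throughput is lower

print(f"2 ch @ 48 kHz/24-bit:  {stream_mbps(2, 48_000, 24):.3f} Mbps")
print(f"48 ch @ 96 kHz/24-bit: {stream_mbps(48, 96_000, 24):.1f} Mbps")
```

    A stereo stream at 48 kHz is a tiny fraction of USB2's raw rate, while 48 channels at 96 kHz starts eating a meaningful chunk of the practically available bandwidth - which is where USB3 earns its keep.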

  3. [This was an old time string band, in the 1960s, the Foggy Bottom Stump Jumpers] - we were hired to play for a convention of forest firefighters and were told that they liked folk music, so we dug out a few Kingston Trio songs that we could do old-time style. During the dinner break (these folks know how to eat, and they share their food with the hired help) one of the women there asked if we could sing the Smokey The Bear song. Of course we didn't know it, but our fiddle player asked if she could sing a bit of it. She did, and wrote down some words, and by the time we finished dessert, we had a fiddle, guitar, mandolin, and banjo arrangement of Smokey The Bear to open our second set.
  4.  

    You may have noticed one of the differences in your notes. Or, it could very well be a noisy example. My CAD D-82 has lower output than my dynamic or condenser mics. That means it needs more preamp gain and, in the end, more noise - partly because the signal-to-noise ratio of the mic itself is lower and most preamps are at their best set to lower gain - they do begin to hum a bit if you have to turn them up.

     

    Noise is noise, but hum is a special kind of noise. In my not terribly complete tests, in the same environment, I was seeing a few dB more 60 Hz and 180 Hz in the noise spectrum with the Samar than with the Cascade mics. Programs like iZotope RX are good at getting rid of power-line-frequency hum, but I shouldn't need to buy a fairly expensive program in order to feel comfortable using the mic.
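    For what it's worth, the core DSP for notching out power-line hum isn't exotic. Here's a sketch using a textbook (RBJ cookbook) notch biquad at 60 Hz, run on a synthetic hum tone rather than anything measured from these mics:

```python
import math

def notch_coeffs(f0, fs, q=10.0):
    """RBJ-cookbook notch biquad; returns (b, a) with a0 normalized to 1."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = [1 / a0, -2 * math.cos(w0) / a0, 1 / a0]
    a = [1.0, -2 * math.cos(w0) / a0, (1 - alpha) / a0]
    return b, a

def biquad(samples, b, a):
    """Direct form I biquad filter."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

fs = 48_000
b, a = notch_coeffs(60.0, fs)
hum = [math.sin(2 * math.pi * 60 * i / fs) for i in range(fs)]  # 1 s of 60 Hz
out = biquad(hum, b, a)
# RMS of the second half, after the filter settles: the 60 Hz tone is gone
rms = math.sqrt(sum(s * s for s in out[fs // 2:]) / (fs // 2))
print(f"residual after notch: {rms:.5f} RMS (input was ~0.707 RMS)")
```

    A real de-hum tool also has to chase the harmonics (180 Hz, 300 Hz, ...) and track a drifting line frequency, which is where something like RX earns its price.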

     

    As I think I mentioned, the problem isn't horrid, but if it's noticeable, it's a problem. I've gotta write to Mark this week so I can stop hinting that the Samar A95 might have an EMI problem. I hope it really doesn't.

  5. For me, a mic is a mic is a mic. There's nothing uniquely special about ribbon mics other than that they need more gain than most other types of mics in order to get the same record level. My first ribbon mics were Beyer M260s because I was looking for something different than SM57s and RE-11s (never could afford -15s) and a trusted engineer friend suggested the Beyers. I found myself using the M260s most of the time when I'd reach for an SM57, not because I thought they sounded better, but they were different and sounded very good for whatever I used them on. When the ribbons died on one of my M260s and factory service was outrageously expensive, I asked Stephen Sank if he could replace the ribbon. He did, but not with a Beyer "piston" style ribbon, but with a 77DX style, and doggone if it no longer sounded like a Beyer, but sounded very much like an RCA.

     

    I got a really good deal on a pair of Beyer M160s, quite a different mic, and also one that sounded great wherever I put it. I particularly like it on fiddle since its directivity pattern is fairly tight, but about 2 feet above the fiddle, it captures the full sound of the instrument and provides a little (but not much) isolation. When looking for an inexpensive mic to recommend to someone who just wanted to use a ribbon mic on banjo, I picked up a Cascade Fathead and that was yet another ribbon mic that sounded different than the others in the closet, and, yes, excellent on both a bright bluegrass banjo and a plunky old time banjo. The Fathead was the first ribbon mic I owned that had a "classic ribbon figure-8/null" pattern. Wish I'd bought a pair of them.

     

    While I'd love to have a few Royers or AEAs, or Clouds hanging around here, I just don't do enough paying work to justify the cost. But when Samar showed a $400 ribbon mic at last year's AES show, I bit, mostly because I like talking with Mark Fouxman (the maker) and I had faith that he would make the best darn sounding ribbon mic that he could for the price. Like all the rest of my ribbon mics, I really liked how it sounded wherever I used it, even as my mic for Zoom sessions. However, I've been a bit disappointed with the fact that it seemed to be more prone to hum pickup than other mics I have, and for quiet instruments like finger style acoustic guitar and fretted dulcimer, I need to get it closer than where it sounds good in order to have an acceptable signal-to-hum ratio. I really need to send this one back to Mark and see if it's what he considers normal or if there's a missing or broken ground wire or could use some magnetic shielding where there isn't any. But with brass or even over the strings of a piano, it's excellent. I just like my mics to be more universal than this.

     

    So, yeah, I use condenser mics and dynamic mics and ribbon mics and occasionally, still, a hot-rodded Radio Shack PZM. No culture bias here.

  6.  

    Points well taken about compressors, but the fact remains -- and this throws your comments above into sharp relief -- that most beginners use them in DAWs, with the unbelievably over-the-top settings that would never work in a hardware compressor, to ride levels. This is, to use the technical terminology: crappy.

     

    Sad how we've evolved. I was flipping through a survey of plug-ins in one of the mags recently, and it seems that no matter what the basic purpose of the plug-in is - compressor, equalizer, preamp simulator, guitar amp simulator - everything has a "drive" control, or the description made mention of sufficient input gain to overdrive the device. I can understand it in an amp simulator because ever since people have been plugging guitars into amplifiers, distortion has been part of the sound. But how many distorted stages do you need in your signal chain? I suppose the argument is that every subtle addition makes a difference in what comes out, and by careful adjustment, stacking, and bad taste, you can arrive at exactly the sound you want - or decide, after some time fiddling around, that NOW you have the sound that will put your record on the charts.

     

    I'm glad that I record acoustic instruments for people who want the recording to sound like it was played on their instruments. It's so much easier that way - just find the right place for the mic, set the gain for a good level safe from clipping, and go.

  7. I started a new subject here since the "mics you think are good" discussion drifted this way.

     

    This sort of volume riding is precisely what compressors were invented to circumvent, since mixers of the day weren't automated. Nowadays, compressors tend to be primarily valued for their sonic qualities, since level riding is much easier than it used to be. But to do it well takes a lot of patience.

     

    Compressors don't do a very good job of level riding. There are dynamic processors that attempt to do that and are more successful on some sources than others - better on spoken word than music. Generally they have a slower attack and longer release time than a general purpose compressor.
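    The attack/release distinction is easy to see in code. Here's a minimal peak-follower sketch (not any particular product's detector) - the only difference between "fast" and "slow" behavior is which smoothing coefficient gets used on each sample:

```python
import math

def envelope_follower(samples, sr, attack_ms=10.0, release_ms=200.0):
    """One-pole peak follower: a fast coefficient when the level rises,
    a slow one when it falls."""
    a_att = math.exp(-1.0 / (sr * attack_ms / 1000.0))
    a_rel = math.exp(-1.0 / (sr * release_ms / 1000.0))
    env, out = 0.0, []
    for x in samples:
        level = abs(x)
        coeff = a_att if level > env else a_rel
        env = coeff * env + (1.0 - coeff) * level
        out.append(env)
    return out

# A 10 ms full-scale burst followed by 100 ms of silence:
# the envelope rises quickly, then sags slowly.
sr = 48_000
burst = [1.0] * 480 + [0.0] * 4800
env = envelope_follower(burst, sr)
```

    With a 10 ms attack and 200 ms release, a burst makes the envelope jump within about 10 ms and then decay gently - the slower-attack, longer-release behavior that suits spoken word is the same loop with bigger time constants.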

     

    Compressors were invented back in the days of one microphone, and other than for gain reduction of only 2-3 dB, they aren't very useful for smoothing out a wild singer. Manual gain riding works much better, but as you say, it takes patience and skill. Some people get pretty good at drawing a volume envelope, and for the occasional howl, "clip gain" works better than I expect it to.

     

    True that these days compressors are used more for the sound of an overdriven amplifier or transformer, or to use adjustable attack and release controls to modify the attack of the incoming signal - and there are special products that aren't called compressors (but really are, at heart) which are designed specifically as transient re-shapers. I think SPL started that.

  8. Try to record this noise. Your DAW acts like a storage oscilloscope, and if there's actually something coming from the mic and getting through the interface, you'll see spikes in the audio waveform.

     

    Another trick to isolate the interface or preamp from the mic is to make a dummy mic. Connect a 150 ohm resistor between pins 2 and 3 of an XLR male plug. If you see the noise with the mic connected and it goes away when replaced by the dummy, then the noise is coming from the mic. If the noise continues with the dummy, then it's coming from further downstream.
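    The 150 ohm dummy is handy because its theoretical noise floor is known - it's just resistor thermal (Johnson) noise, so anything much above roughly -131 dBu over a 20 kHz bandwidth must be coming from the preamp or from pickup. A quick sketch of that number:

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def thermal_noise_volts(resistance_ohms, bandwidth_hz, temp_k=290.0):
    """RMS Johnson-Nyquist noise voltage: sqrt(4*k*T*R*B)."""
    return math.sqrt(4 * K_BOLTZMANN * temp_k * resistance_ohms * bandwidth_hz)

vn = thermal_noise_volts(150, 20_000)
dbu = 20 * math.log10(vn / 0.775)  # 0 dBu = 0.775 V RMS
print(f"150-ohm dummy noise floor: {vn * 1e6:.3f} uV RMS = {dbu:.1f} dBu")
```

    That roughly -131 dBu figure is the familiar reference point for preamp EIN specs, which is exactly why a 150 ohm source resistor makes a good dummy mic.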

     

    I discovered a weirdness with a PreSonus interface where, with a low-frequency steady tone going in, the meters would start bouncing up and down. There was no recorded or monitored audio that followed the meter indication; apparently it was a problem with aliasing in whatever was driving the meters. A friend discovered it when he was playing alignment tones from his tape deck and happened to glance at the interface's meters.

     

    https://www.youtube.com/channel/UCCMxM3m3lEwZnxNmhrm2b2A

  9. Glad All Over by the Dave Clark Five is another story. That was released in the early days of the Beatles, back when the men in white coats dictated levels.

     

    Glad All Over is pretty distorted, all of it. Somebody was pushing it well into the red intentionally to get a "sound" and it worked, at least for that one song.

     

    James Brown records were always distorted. It was a (conscious) product of the meter on the Ampex 300 recorder hitting the pin constantly and saturating the Scotch 201 tape. They knew the sound they wanted to get and that's how they got it.

     

    THIS IS NOT "THE SOUND OF TAPE" by any means. It's creative engineers presenting the sound of a powerful singer.

  10. I listen to my records occasionally. I never got rid of the records, turntables, or preamp/receivers with an RIAA equalized input. Just drop the needle and drop my butt on the couch for 20 minutes or so. Clicks and pops don't bother me, it's all part of the experience - not that I'd miss them if they were gone. There's a lot of good software to clean up noisy records, but then you'll end up with a WAV file, and where's the fun in that?

     

    I have a theory as to why some people prefer vinyl - it has nothing to do with the vinyl itself, but with the limitations that vinyl forced on mastering. You couldn't have caricatures of sound the way you can with how (some) people master for digital.

     

    What a great way of putting it - "caricatures of sound." In the vinyl days (pre-Beatles, anyway), engineers didn't use a compressor to "warm up" a sound or otherwise introduce some form of distortion that's a side-effect of the box intended to control dynamics. Tracking had the goal of sounding like it did in the studio, and mastering was to make sure the grooves would fit on the record. A little bit of tweaking at the mastering stage wasn't uncommon, but it was because the mastering houses had the best reverbs, the best equalizers, and the best limiters. But these tools were used for making small changes, not creating a sound that the engineer/producer never thought of, or only dreamed about having the gadget that did that.

     

    Today, sounds that never came out of an instrument go into a mix that then needs a lot of help afterward to sort out all the junk. So that's what mastering is today. Back then, it was a trick to fit 25 minutes of music with dynamics onto a side of an LP, and that's what you hear when you play the phonograph record.

  11.  

    I understand there's some new technology called "stone tablets," with an estimated shelf life of several millennia. But it's probably just some Apple fanboi rumor.

     

    It worked for Moses, so they say. Predicted lifetime was better than the paints that cavemen used.

     

    When I was working for the Naval Oceanographic Office, we had a very early version of GPS aboard the survey ships. The CPU was a DEC PDP-8, and data I/O was with punched mylar tape. It doesn't absorb moisture, a big plus aboard ship, and the program was small enough so that it only took a few minutes to load.

  12. Hopefully the electrolytic capacitors are ok... Very decent distortion specs for such an old board, hard to get a similar new one for SH price. I don't know what the EQs sound like, and the frequency range seems most adapted for early digital work.

     

    On the positive side, the Mackie 8-bus was a brilliant design and an ideal mate for the low-end studio with a TASCAM or Fostex 1/2" 8-track analog or Alesis or TASCAM 8-track modular digital multitrack recorder. With an 8-bus and three ADATs, you could have a fully functional and well thought out - both for tracking and mixing - studio with all the expected features, and decent sound.

     

    An 8-bus that doesn't have ribbon cable problems by now will probably stay fine in that respect, but like any gear of that age, it will take replacing all of the electrolytics used as coupling capacitors to bring it back to its original sonic performance. It's a tedious job, but one worth doing if you want to keep using the console. Switches and pots do indeed require cleaning, but that's considered routine maintenance for any device that has them.

     

    and

     

    Or add membrane switches, and bring it [potential trouble points] back down to 52%

     

    It seems that every manufacturer building low-mid priced hardware (and some building top tier hardware) somehow managed to use custom switches and pots in their manufacturing. Instead of using standard off-the-shelf parts and designing circuit boards and panels to accommodate them, they designed (primarily) panels to fit the features and functions into the space that Marketing told them couldn't be any bigger (gotta fit on the dining room table, you know), and had the mechanical parts built to those size constraints. Consequently, 30 years later, it's impossible to get exact replacements and, in most cases, darn near impossible to make substitutions for currently available parts.

  13. The new room is shared as a laundry room, but it is big enough for both functions. I also moved the acoustic panels. The new room is much higher (and a bit bigger) than the old one. It also sounds more lively. I really like it. But there is a peak/standing wave at about 44 Hz that is quite dominant, and I would like to get rid of it.

     

    The size of the room is 3.65 m (L) x 2.95 m (W) x 2.20-2.90 m (H) (the ceiling is sloped).

     

    Mike or others, do you have any suggestions for improvement further?

     

    44 Hz is really low for acoustic panels. You need something like a Helmholtz resonator, which is really not as hard to build as it is to spell. But maybe you can solve the problem another way, by moving the listening position, or by putting a diffusor where you'd put a trap to spread the reflection out to where it won't interfere with what you want to hear.

     

    I didn't run the numbers, but have you? Does 44 Hz jibe with one of the room dimensions? Have you checked out Ethan Winer's acoustics articles? Among other things, he has a room mode calculator there that's pretty slick. There are more articles on the RealTraps web site (Ethan's former company).
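    For the record, the axial-mode arithmetic is just f = n*c/(2L). A quick sketch with the posted dimensions (assuming c = 343 m/s; temperature and as-built dimensions will shift the numbers a bit):

```python
SPEED_OF_SOUND = 343.0  # m/s at roughly 20 C

def axial_modes(length_m, max_hz=120.0):
    """Axial-mode frequencies f_n = n*c/(2L) for one room dimension, up to max_hz."""
    modes, n = [], 1
    while True:
        f = n * SPEED_OF_SOUND / (2 * length_m)
        if f > max_hz:
            return modes
        modes.append(round(f, 1))
        n += 1

for name, dim in [("length", 3.65), ("width", 2.95), ("height, low end", 2.20)]:
    print(f"{name} ({dim} m): {axial_modes(dim)} Hz")
```

    The 3.65 m length puts its first axial mode near 47 Hz - close enough to the reported 44 Hz peak to be the prime suspect.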

  14. A really long time ago I worked as a product manager at Mackie, and one thing we came up with was a "standalone console center section", as similar products were starting to be seen from different manufacturers.

     

    Since adjusting monitoring level was the most used control, I pushed to make that knob really big. Then we named the product the "Big Knob".

     

    I thought it was a catchy name, and then my fellow product manager, who hailed from across the pond, explained to me what a "knob" was in his native English English.

     

    Hi, Dan. I didn't know you were practically a neighbor.

     

    Names are indeed tricky, but you have to use common sense. Just like a record label wouldn't be confused with a computer manufacturer, the thing you turn to make an adjustment isn't likely to be confused with a body part - or taking the slang to the second level, a person who is in need of an adjustment. ;)

  15. I was in the Netherlands several years back, where if you were an artist you could basically have subsistence-level living from the government (don't know if that's still the case). The object was to encourage the arts, and hopefully, the artists would be able to parlay their exposure into a gig.

     

    Back when I was looking for ways to dodge the draft (Vietnam War era), one possibility was to get a Masters degree in folklore and learn enough French to get around, then go to Canada and tell them that I wanted to collect and document songs. There seemed to be quite a bit of interest in bringing Canadian folklore out to the general public. A few years later, I met someone who did just such a gig, only he was collecting poetry from a native tribe.

     

    But then I learned that I was safe from the draft because I was too nearsighted. The rest is history.

  16. I recall a similar faux pas years ago with another vehicle that was marketed in South America. I don't remember which specific vehicle, but after poor sales it was eventually uncovered that in the native language the model name translated to "no go".

     

     

    That was the Chevrolet Nova. That was also an urban legend. In Spanish, "nova" and "no va" are pronounced differently and, while "no va" means "doesn't go", it isn't good language usage. And "nova" in Spanish means the same thing as it does in English - like a nova star.

  17. @Mike - I have NEVER seen myself analyzed like that, nor have I really thought about my approach...I just blindly do what I do :) But it was fascinating to see myself through someone else's eyes (especially someone with analytical abilities).

     

    I hope you weren't offended. But, really, the thing that stuck with me from the interview was that you look at what you're working on - whether it's a piece of music or a piece of studio equipment - with an eye toward what it really does, what you need, and how the two relate. And your writing follows your workflow - whether you're reviewing a product or designing something new - you clearly describe what's happening under the hood that explains what you hear (or don't hear) when evaluating the result. That's really important to me, and while I'm not a designer (though I still claim firsties on the "monitor controller"), when I'm evaluating a tool, I look for things that I can measure and relate those to what I hear.

     

     

    Apologies for going off track here, but we both have some useful points related to what I was impressed with in this interview. But it's a big world out there. ;)

     

     

    As to tape, ah yes...when DAT came out, I liked to remind people that analog tape with Dolby SR had better specs :) It's the contrarian in me, I guess.

     

    And specs (a specialty of mine) need to be taken as just data points; what matters is their relevance to the end result of the project. Specs be damned - when I first heard a solo piano recorded digitally, I was ready to take it home. No flutter, no tape hiss, really solid low notes. Distortion characteristics differ between tape (with or without noise reduction) and digital, but distortion is present with both. For me, the value of those "improvements" with digital recording made for a better final product than the "better specs" tape. And I saw that digital editing - even the kludge of editing PCM-F1 digital recordings between two VCRs - meant that I could make a better product faster, particularly since nearly all of my recording projects were (and still are) musicians playing live, with "fixes" done by editing rather than overdubbing - though I've been guilty, sometimes too often, of doing punch-ins on analog, and later digital, media.

     

    Does the difference between analog and digital intermodulation distortion characteristics matter? Sure, but that will get (and has gotten) better. The tradeoffs, for me, tilted the scale toward digital. But for those who could afford Ampex ATR-100 tape decks (and could maintain them), or Nagras for field recording, I wouldn't try to change their minds if that's what they preferred to use. I did plenty of analog tape editing when that's what the project needed, but I appreciated the tangible benefits of digital editing - though admittedly I was a late adopter in my own setup because of cost. If I had a project on tape that required a lot of editing, I'd find someone with a digital editing system that was good enough not to change the audio in a harmful way and work on that. What's important to me (and you) is getting better end results.

     

    True, there were some pretty dicey digital recording systems over the years, and most of them were enthusiastically adopted as they came along. F-1, DAT, ADAT and DTRS . . . and some systems that recorded directly to CD-ROM - all systems that had better or not-so-better conversion between digital and analog and back, but all of which had predictable media problems. Hard drives took a while to catch on because of cost - I came close to buying a Turtle Beach 56K system (the poor man's Sound Tools), but when a hard drive that would barely hold a CD's worth of edited material cost $1,000, that was too expensive a "reel of tape" for me, so I stuck with analog. I had the knowledge, experience, and test equipment to keep analog recorders working as well as they could, and it was frequent practice around here to take a DAT out in the field and, as soon as I got home, transfer it to tape. Today, if someone offered me a freshly overhauled ATR-100 for $100, I'd probably take it, but I doubt it would get any use other than to play tapes. And, thanking my lucky stars and a lot of study by archivists and chemists, I've been lucky enough to have encountered only one seriously damaged tape that couldn't be fully played (though most of it could).

     

    I do believe it had more or less reached the zenith of its performance when digital came along, but it was the variability that drove me nuts. You could never have a copy that sounded as good as the original, never bounce without degradation, never do a backup in the sense we can do it today, never have the performance not change from one week to the next.

     

    I suppose the "it" to which you're referring here is tape. Your observations are absolutely correct, and are well supported. But changes, while noticeable, are - or at least with good equipment should be - in the "who cares?" category. Many felt that this in itself was very important, hence the adoption of digital recording ASAP. To the end listener, none of that matters - what he hears, with smart and sensible production, is something better than the original performance. One pro-digital argument comes from historians and archivists who are interested in what it sounded like at the microphone - it's important to preserve an nth-order harmonic from an instrument even though it might not be heard by a listener.

     

    Taken in isolation, there's nothing wrong with tape. But digital can be part of a system, which is a huge deal for me.

     

    Absolutely!

     

    I do feel that tape is a signal processor, and it just happens to create effects that people like. Whether people like those effects because they became accustomed to hearing it in the music is debatable.

     

    That's how we see it today, and we can debate whether passing a recording through a real analog recorder and through a "tape simulator" plug-in is best. But there are so many ways - both with analog and digital processors - for making a recording sound better in the ear of the producer/engineer. That's their job.

     

    Yet I think if only digital had existed, tape had never existed, and someone from Waves came up with a plug-in they called a "pleasurizer" that had the same effect as tape, they'd sell a bunch. It does do a certain "something" to the sound.

     

    And that's their job. Will the impetus to make a first generation recording sound "better" be with us forever? Sure. It's what we do.
