

Tusker

MPN Advisory Board

Posts posted by Tusker

  1. On 5/2/2024 at 12:11 PM, rockit31 said:

    Yeah, both boards can be split into zones just for one song. The MX isn't as robust as the RD-2000 and can only have 2 zones, while the RD can have 8. I have been diving into the manual since last night and am learning that the RD is a feature-rich board!


    You are on the right track. 👍

     

    As others have said, “RD as master keyboard” is the most bulletproof way of building your set list without the complexity of an additional device.
     

    Practice three related but separate skills:

     

    1. Connecting your two boards with MIDI cables (i.e. using MIDI).
    - Use this connection for a variety of purposes, such as playing sounds on the other keyboard and also selecting sounds on it.


    2. Layering and zoning on both boards.
    - Explore their multi-timbral capabilities and develop your personal sonic preferences. Which sounds do you prefer on which board? (BTW, the MX should give you 16 different simultaneous timbres in performance mode, no?)

     

    3. Using the RD as a master keyboard to pull up a particular sound or sound combination on the MX.

     

    - Use this ability so that when you hit one button on the RD, the sounds get called up on both boards and the next song is ready to go (see the sketch after this list).

    - The only cable you need is one from the MIDI out of the RD to the MIDI in of the MX.

     

    Generally, you need to be comfortable with skills 1 and 2 to get the most out of skill 3. Hope this helps. Keep us posted on your adventures. 
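
    If you like to poke at things from a computer first, here is a minimal Python sketch (using the mido library) of what that "one button" recall usually boils down to on the wire: a Bank Select MSB/LSB pair followed by a Program Change. The port name and the bank/program numbers below are placeholders, not the RD's or MX's real values; those live in your gear's MIDI implementation charts.

```python
# Minimal sketch of a "one button" sound recall over MIDI, using mido.
# Port name and bank/program numbers are placeholders -- substitute the
# values from your own keyboard's MIDI implementation chart.
import mido

PORT_NAME = "RD-2000 USB MIDI"  # hypothetical; list real ports with mido.get_output_names()

with mido.open_output(PORT_NAME) as out:
    out.send(mido.Message('control_change', channel=0, control=0, value=63))  # Bank Select MSB (placeholder)
    out.send(mido.Message('control_change', channel=0, control=32, value=0))  # Bank Select LSB (placeholder)
    out.send(mido.Message('program_change', channel=0, program=5))            # Program Change (placeholder)
```

    Seeing the three messages spelled out like this can make the RD's program-change settings feel less mysterious when you set up skill 3.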

2. I too hope they stay with a high-end offering, and they likely will, but the future is going to look different.

     

    Increasingly higher quality audio will be created on increasingly lower cost devices. The same forces that created and then killed the ADAT home studio are now taking aim at the tower workstation. Brilliant mixes will be created simply. The mix engineer will go the way of the typist.

  3. 33 minutes ago, ElmerJFudd said:

    I prefer this to udio.com where the machine writes the arrangement based on text prompts and lyrics.  While that might be a great approach for beginners or someone with just a bit of knowledge about how music is made - anyone who is deep into it would prefer tools that assist with getting great results without removing them from being in control along the way.  

     

    100%. For me, Drummer is a great creative tool.

  4. 1 hour ago, Docbop said:

    Apple's site says the session player was trained on 1,000 hours of bass players. I sure hope it did a deep study on James Jamerson, Chuck Rainey and Ron Carter. Okay, okay, and it has to have some Bootsy, baby.

     

    I also wish they would go deep at some point, but initially the 1,000 hours seem to be spread across eight players:

     

    "Bass Player was trained in collaboration with today’s best bass players using advanced AI and sampling technologies. Users can choose from eight different Bass Players and guide their performance with controls for complexity and intensity, while leveraging advanced parameters for slides, mutes, dead notes, and pickup hits."

     

    If the Drummer plugin is any indication, they will spread the goodies amongst various genres. Drummer has seven genres and a few dozen drummers. I imagine they gave some thought to how well the styles will mesh together between Drummer, Bass Player and Keyboard Player. This is likely the small beginning of a larger library to come. I imagine Guitar Player is not far behind.

     

    Wouldn't it be insanely great if they created a Session Players ecosystem like the App Store, allowing some of our favorite heroes to make unique contributions? AI Twiddly Bits! 💪

  5. On 5/6/2024 at 2:00 AM, AROIOS said:

    I'm kinda curious what a 6 oscillator mini can do that 2 minis layered cannot achieve.

     

    Me too. We may have to wait and see.

     

    We live in such a richness of musical options that we can create similar outcomes in multiple ways. Stokely, for example, was able to make a stock Logic instrument sound like a very well-regarded Prophet 5 emulation by using effects. Keith Emerson figured out how to perform some of his iconic GX1 brass parts with digital synths like the TX816.

     

    So what do we get with six oscillators ahead of the one filter? More power? More options, because you will of course have more coverage of the overtone series? Perhaps.

     

    Hans Zimmer and Kevin Schroeder are careful digital luthiers. They could easily have added the versatile wavetable oscillator from Synapse's Dune 3 synth, but instead we see six Moog-like oscillators. Hans has a history of getting particular analog features into synths. He persuaded Urs Heckmann to add Polymoog-like resonators to Zebra HZ. Similarly, he has persuaded Kevin to add the Moog Modular Filter Bank to Legend HZ.

     

    This is the joint work of one of the greatest composers of synth soundtracks and one of the most well respected modern synth developers. I am excited to see what the beast can do.

     

     

    • Like 3
6. It's a thought-provoking question and not easily answered.

     

    The natural world is full of reverb, so there is no question the average human being is going to want some. They might get it from the room they are sitting in, but they still want it. None of us train our ears in anechoic chambers. Richness comes from interactions between source and space.

     

    I tend to be a two-stage sound designer, shaping the synth before the effects. This habit is probably left over from interacting with old hardware synths, which didn't have reverb. Very rarely, in a modular or semi-modular synth, I will place a reverb or delay module further upstream inside the patch and modulate the delay time with an LFO or an envelope. You can get some nice smearing effects that way. But that's rare. I prefer to use external reverbs and delays rather than the ones in the softsynth, which is why I look for an effects kill switch as I open a new synth. In Legend HZ it's conveniently there at the top right.
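
    For anyone curious what that smearing is mechanically, here is a rough numpy sketch under loose assumptions: a single short delay tap whose time is swept by a slow LFO and read with linear interpolation. Real synth delay modules add feedback and filtering, but the moving read point is the essence.

```python
# Rough sketch of an LFO-modulated delay "smear": one delay tap whose
# time sweeps between 2 ms and 10 ms, read with linear interpolation.
import numpy as np

sr = 44100
t = np.arange(sr * 2) / sr                              # two seconds
dry = 0.3 * np.sign(np.sin(2 * np.pi * 110 * t))        # stand-in source: a quiet square wave

lfo = 0.5 * (1 + np.sin(2 * np.pi * 0.3 * t))           # slow LFO, 0..1
delay = (0.002 + 0.008 * lfo) * sr                      # delay time in samples

read = np.clip(np.arange(len(dry)) - delay, 0, len(dry) - 1)
lo = np.floor(read).astype(int)
hi = np.minimum(lo + 1, len(dry) - 1)
frac = read - lo
wet = dry[lo] * (1 - frac) + dry[hi] * frac             # interpolated, pitch-wobbling tap
out = dry + 0.5 * wet                                   # mix dry and smeared copies
```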

     

    As you said, because so many modern effects are good, it gets harder to tell the great synths from the OK ones. That's why I look at the way the synth changes as you tweak it. If it's responsive, it feels like a musical instrument, whether a patch came drenched in effects or not.

     

    Some of us are creating music to be played over headphones or other "artificial" sonic spaces. Others are performing in live venues which have their own "natural" sonic signature. A good synth should accommodate both workflows. 

    • Like 1
  7. I own or have dealt with most soft synth emulations of the Minimoog. Most of them nail the timbre of the components and it's not hard to find a musically adequate Moog sound. So why push the envelope?

     

    Where the synths sometimes fall down is how the components interact. How does filter resonance change as the oscillators get louder in the mixer?  There is an infinite variety of nooks and crannies when the interactions are modeled well. It makes the synth satisfying to tweak. This looks like one of those synths. Simon Le Grec is twisting knobs on presets and the changes feel satisfying.

     

     

    • Like 1
8. Hans Zimmer is under-appreciated for his ability to get good developers to do great work. He worked with Urs Heckmann to turn Zebra into Zebra HZ when scoring The Dark Knight. Now, partnering with Kevin Schroeder, Hans turns the Legend into Legend HZ. Legend HZ raises the bar for everyone. Definitely on my buy list. My copy of Diva is feeling nervous. 😅

    • Like 3
  9. 11 hours ago, ksoper said:

    In the meantime, until these programs can pass Adam Neely's musical Turing test, proceed as planned.


    At the risk of derailing this thread, may I please mention that your album Miutronics is a complete inspiration to me? It's a testament to the human imagination. I am blown away!
     

    Congratulations! 🎈 

    • Love 1
  10. 4 hours ago, ElmerJFudd said:

    It might be to one’s benefit to include Human Made or No AI in the liner notes and artwork.  
     

    The industry and governments need to get their shit together quickly on new copyright and IP rules.  What’s their plan to compensate all the copyrighted digital material that was fed to the robot? 

     

    Great point on copyright and IP law. "Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should." (Ian Malcolm, Jurassic Park)

     

    However, I feel that labels can be a clumsy tool. I admit they seem to work very well with food safety; it just feels different with art and music. Maybe I am a hippie, ha ha. The two labels below left a bad taste with me. Others may feel differently.

     

     

    [two attached images: the labels referenced above]

    • Like 1
  11. 4 hours ago, ElmerJFudd said:

    I’m guessing we’re looking at a further devaluing of recorded music/digital music, especially in genres where AI is already excelling.  
     

    Possibly an increase in value of live human music - especially of the acoustic nature.  

     

    This seems very logical to me, and it could be that improvised music (jazz) and group interactions (piano bars, choirs, etc.) will rise in perceived value. The human connection and the artifact are not the same.

     

    It's interesting to me that we have wonderful new kinds of wallpaper these days, but the fine-art market has never been stronger.

  12. 3 hours ago, analogholic said:

    Hmmm... my "favorite"... I wonder if they get remotely close to the punch, fatness and grit of the original.


    Yes, PFG is vital.

    If you run it through a similar signal chain (preamp, tape, mixing board, sometimes amplification), you can expect very similar results. All of those stages add punch, fatness and grit.

     

    I run my CR-78 emulation through such a chain and it works for me. The sound source is pretty clean; the chain adds the PFG. YMMV.
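
    To make the idea concrete, here is a toy Python/numpy sketch of that kind of chain under very loose assumptions: three gain-staged soft-clip passes standing in for preamp, tape and console. It is nobody's model of real hardware, just an illustration of how each stage adds a little punch, fatness and grit.

```python
# Toy "PFG chain": a few soft-clipping gain stages in series.
# Drive amounts are arbitrary starting points, not hardware models.
import numpy as np

def stage(x, drive):
    return np.tanh(drive * x) / np.tanh(drive)   # normalised soft clip

def pfg_chain(x):
    x = stage(x, 1.5)   # "preamp"
    x = stage(x, 2.5)   # "tape"
    x = stage(x, 1.2)   # "console bus"
    return x

# Example: run a clean drum-machine-style click through the chain
t = np.arange(44100) / 44100
clean = np.exp(-40 * t) * np.sin(2 * np.pi * 800 * t)
thick = pfg_chain(clean)
```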

    • Like 1
13. I have been using large language models to write some documents. They are hackneyed writers, but hard-working research assistants. They don't replace me, or Google, or that subscription to the online research library.

     

    I could imagine an AI composer's assistant that checks to see if your new riff is original, finds similar motifs in the pantheon of songwriting, and outlines five ways you could develop that riff. You would still be the curator of your art.

     

    But AI will grow ...

  14. A typical early recipe for a drone synth bass had three ingredients:

     

    - Two or three hard-to-tune Moog 921B oscillators, gently rubbing against each other and then drifting into ...

    - a Moog CP3 mixer, which added its own unique saturation to accentuate the rubbing and reinforce the fundamental frequency, and then

    - floating out through a fairly open Ladder filter, which would allow the gentle fluctuations to be heard.

     

    You could add additional flavors to this basic recipe. Today, anything that adds random fluctuation through some kind of saturation and filtering will get you close. You want a lot of harmonics so saw waves are your friends.

     

    Personally, I use a Toy Box Audio Atomic Oscillator in Reaktor, which frequency-modulates the individual partials of a saw oscillator to create a lot of frequencies. Also, as you probably know, Zebra HZ has a number of the Dark Knight drone patches that you mentioned. They are a great jumping-off point. Howard Scarr is brilliant.
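
    If it helps to see the recipe in one place, here is a minimal numpy sketch under rough assumptions: three drifting, detuned saws into gentle saturation into a fairly open one-pole lowpass. The drift depth, drive and cutoff are guesses to taste, and the sine-based drift is only a stand-in for real oscillator instability.

```python
# Minimal drone-bass sketch: drifting detuned saws -> soft saturation -> open lowpass.
import numpy as np

sr, dur, f0 = 44100, 8.0, 55.0
n = int(sr * dur)
t = np.arange(n) / sr

def drifting_saw(freq, detune_cents, drift_cents=3.0, drift_hz=0.2):
    # slow pseudo-random pitch drift, a stand-in for wobbly 921B tuning
    drift = drift_cents * np.sin(2 * np.pi * drift_hz * t + np.random.uniform(0, 6.28))
    f = freq * 2 ** ((detune_cents + drift) / 1200)
    phase = np.cumsum(f) / sr
    return 2 * (phase % 1.0) - 1.0

mix = drifting_saw(f0, -7) + drifting_saw(f0, 0) + drifting_saw(f0, 9)
mix = np.tanh(1.5 * mix / 3)                    # very loosely, the "CP3" thickening

cutoff = 2000.0                                 # fairly open, so the beating stays audible
a = np.exp(-2 * np.pi * cutoff / sr)
out, y = np.zeros(n), 0.0
for i in range(n):
    y = (1 - a) * mix[i] + a * y                # one-pole lowpass
    out[i] = y
```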

    • Love 1
  15. Mike, this is brilliant and I am so glad you mentioned this.

     

    If you don’t mind my asking, what aspects of your Logic workflow do you find the Elgato most useful for?

     

    Switching editors, running scripts, transport control, switching active track, something else? 

     

    TIA 🙏
     

     

  16. 12 hours ago, jazzpiano88 said:

    Fortunately Gordon Lightfoot saved us. 

     

    But Dua Lipa? Harry Styles? Justin Timberlake? Lady Gaga? Seems like there is plenty of Disco around. 

     

    What will ubiquity and true audience control do to streamed music? It will devalue it. But wallpaper did not kill paintings.

     

    What will be valued then? Magic in live music. Yay! Artists who can take risks with an audience and take them somewhere special. Musicians like Bobby McFerrin. And Vulfpeck. And ...

    • Like 1
  17. 9 hours ago, J.F.N. said:

    For sure the reason round robin was invented, adding a way for samplers to represent real world sounds in a less static way...

     

    Great analogy. 👍

     

    In synth land, variation can be created with traditional LFOs and envelopes, but people are also inventing new tools. U-he's Zebra, for example, has four modulation mappers which provide 128 random values before restarting. You could think of them as 128 round robins. Once you direct those four to different parts of the synth architecture, your sound is very alive.
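
    As a toy illustration of the idea (not Zebra's actual implementation), imagine a table of 128 random values that you step through once per note and apply as a small offset to some parameter:

```python
# Toy "128 round robins": a fixed table of random values, one step per note,
# applied as a small offset to a made-up filter cutoff.
import random

mapper = [random.uniform(-1.0, 1.0) for _ in range(128)]
step = 0

def cutoff_for_next_note(base_hz=1200.0, depth_hz=150.0):
    global step
    offset = mapper[step % 128] * depth_hz   # a different value every note, repeating after 128
    step += 1
    return base_hz + offset
```

    Point a few tables like that at different destinations and every note lands a little differently, which is exactly the round-robin effect.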

    • Like 1
    • Cool 1
  18. 15 hours ago, CEB said:

    I can’t play without any verb.  It feels too raw. I use just enough that it feels okay.

     

    Same here. I need it but a smidge goes a long way.

     

    I'll add a distinction here that could help: which reverb component you use matters. I put the early reflections in to place the sound, but the room tail is set at zero. I let the tail arise from the room the jazz or pop band is performing in. This way my pianos aren't slamming into eardrums directly out of the speakers, yet there is little to no trade-off with mud, because early reflections don't create much mud. Room tails are the mud culprits.

     

    A delay can be used instead of a reverb to emulate those early reflections. The human ear desires spatial information and recoils from a completely dry sound.
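
    In case it helps to see how little is needed, here is a bare-bones numpy sketch of "early reflections only": a handful of short, quiet delay taps added to the dry signal, with no feedback and no tail. The tap times and gains are invented, just enough to place the sound in a space.

```python
# Bare-bones early reflections: a few short, quiet taps, no tail, no feedback.
import numpy as np

def early_reflections(dry, sr=44100):
    taps_ms = [11, 17, 23, 31]         # a few milliseconds apart, like nearby surfaces
    gains   = [0.25, 0.2, 0.15, 0.1]   # well below the dry level
    out = np.copy(dry)
    for ms, g in zip(taps_ms, gains):
        d = int(sr * ms / 1000)
        out[d:] += g * dry[:-d]        # no diffusion network, so no mud and no tail
    return out
```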

     

    My default reverb is Seventh Heaven, which is not a convolution reverb based on an authentic impulse response (IR); it's a model constructed from delay networks. That's because I don't think it matters how "authentic" the early reflections are. You can mix and match. I do. Seventh Heaven has a knob for the Early/Late reflection mix (see below), which I set 100% to early, because I only want to place the sound.

     

    [screenshot: Seventh Heaven's Early/Late reflection mix control]

     

    Pianos could be placed further forward, pads further back. That can be accomplished through reverb volume, not the Early/Late mix. All sounds get only early reflections, because the room we are performing in will provide enough (or too much!) tail.

     

    One exception to this approach is a very atmospheric space-music situation, where you can add your own tails on top of the room tails. Then nobody cares, because they don't feel they are in a real room; they are in an imagined space that an Eventide Blackhole, Valhalla SuperMassive or Strymon BigSky is generating.

     

    Beat Kauffman's video illustrates these ideas. Hope this helps.

    • Like 1
    • Thanks 1
  19. 1 hour ago, CyberGene said:

    I’m all for people having fun and feeling good, as I said previously, and there’s a lot to love in a nice vintage analog synth, however expensive it is. But I have no delusions regarding its sound. Software models sound as good as analog nowadays.

     

     

    Very well said. There's a vintage car show here in the summer. One of my neighbors has a Silver Ghost from the twenties which is his pride and joy. When the weather is nice, he brings it out. The kids in town love to toot the rubber-ball horn. I stick around when that happens; it's a great sound. At the car show, he is one of the stars, and rightly so. There's a maintenance shop here in town which helps keep his car running, year after year. For them, it's a labor of love.

     

    Similarly, I reckon old analog vintage synths will become more and more prized inside smaller and smaller milieus. There's no doubting their immense capabilities and character, some from the design, some from the aging of the circuits. Every succeeding generation owes the early synth-makers a debt of gratitude: for the imagination, the courage, the example. That's why I love to play the old synths when I can.

     

    Can I replicate their sound? Never to 100%, to be honest. But what I love is the music, and modern instruments have a ton of character too, if you are willing to put it in. In my experience, the key ingredients for character are imperfection and eccentricity. Today, if I am imitating a drum loop as it might have been performed with an Akai S900, I might want to crush it down to 12 bits. To add some crustiness. To get some character. If I am imitating vintage analog oscillators on a modern analog synth, I've been known to send tiny amounts of white noise into the pitch modulation CV of the synth. To create some jitter. To get some character.
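
    For the curious, both character tricks are easy to sketch in numpy, with the caveat that real S900 crust also comes from its converters and sample-rate reduction, and real analog jitter is messier than white noise:

```python
# Two cheap character tricks: 12-bit quantisation and a whisper of pitch jitter.
import numpy as np

def crush_to_12_bits(x):
    # quantise a -1..1 signal to 4096 levels for S900-ish crust
    steps = 2 ** 12 / 2 - 1
    return np.round(np.clip(x, -1, 1) * steps) / steps

def jittered_pitch(base_hz, n, depth_cents=2.0):
    # tiny random pitch wobble, like white noise into a pitch CV input
    noise_cents = depth_cents * np.random.randn(n)
    return base_hz * 2 ** (noise_cents / 1200)
```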

    • Like 3
  20. 12 minutes ago, Stokely said:

    Eh, I doubt even hardcore "in the boxers" (like me) would ever hate on a room full of vintage synths :)  

     

    👏  For sure, we've got to get past the idea of hate on one side or the other. It's about mutual love and respect. I use both types of tools. Admittedly I am more in the box these days. Still, digital synth aficionados like me can respect the vintage synth makers, players, restorers and collectors. We are kinfolk.

     

    I'll be in Orlando in the middle of May. Excitedly hoping to spend a free day at Joe Rivers's museum again. It's been a while. Hoping he's still active. 🤞 👍

    • Like 1