
Nathanael_I

Everything posted by Nathanael_I

  1. Nice work. I can see why you replied that it is kind of "Dave Smith"-like, in the sense that the control values all yield fairly useful results. Clearly a very capable machine, and it would take a bit to really internalize it all.
  2. You are making great progress! Studio builds are so time intensive. I've done two. It is worth it. But so much work.
  3. I'm with KuruP. I'm getting tons of stuff done with all the extra time. I'm used to working from home, so I now lose zero time to commuting. I've cleaned and re-organized the whole studio (which also means re-cabling). I really dialed in my bass and how it interacts with the Pre/EQ/compressor in a channel strip, I'm writing lots, and more. Having all the extra time has me spending as much of it as I can on music. I'm just about to start a project gathering a bunch of new synth sounds. I find that is an activity I need to do not in a "playing" or "writing" mode, and this is a perfect time to do it. I want to have each sound saved in the memory of the board that made it, and also as a playable sample in UVI Falcon. Then, even if the board goes away, I still have that sound in my arsenal. I've been wanting to do this for a while, and the project is now ON!
  4. Yeah, that's it - that's what was in the back of my head writing this. I didn't even bother to put the name in because I don't know that it was ever even released. Audio is so low bandwidth compared to video that it kind of just rides for free now. In a Windows 95 world, a real-time OS probably seemed like a very obvious need for things to progress. Moore's Law did the work, however, and quickly made it so that even though nothing was real-time, everything was fast enough for most uses that the distinction has long since become unnecessary.

     Much of music making is done by essentially non-technical people. Computer literate, for sure, but not "code enabled". One would think that a developer could release software as an ISO image, running some massively stripped-out version of Linux where there is little to interrupt the CPU. But most wouldn't dedicate a machine to music, rendering the idea moot. So, "unicorns and rainbows, and all". Here, I use Win10 and OS X for music making, both quite successfully. So it is another one of those things where there is a technically "better" architecture that in the real world just doesn't amount to much.

     I suspect that in the next two years, the major DAWs will all figure out how to take advantage of the new massive core counts that are happening. Now that we aren't going to get faster clocks, we are going to get more cores. I'm on an i9-9900K (8C/16T) right now. The new Threadripper stuff has so many cores that DAWBench won't even run correctly, let alone something like Cubase! That will all get worked out.

     There is a rumor that the optional $2,000 Afterburner video accelerator card for the new Mac Pro is an FPGA platform, and that therefore someone could code audio algorithms into it à la UAD. But how many musicians buy an $8-15k computer and then put another $2k into that card just to avoid buying a UAD card or HDX, both of which have quite a few other benefits? What I know is that it is a great time to be making music - the tech, the price points, the quality have never been better.
  5. Very cool! This is good, Dr. Mike. Your review is clearly pulling out enough for me to put the pieces together on what the instrument is about. Looking forward to whatever else you have time to share on this one. Thanks for the hard work - I know it is time consuming. (But a great way to learn the instrument by going through it all....)
  6. Thanks for the oscillator text and videos. Very helpful. I think that the Hydrasynth is part of a new vanguard of synths that advances the art, specifically advancing the purpose of an oscillator.

     In vintage analog synthesis, the waveforms are very simple and, unless you detune them, use an LFO to create PWM, etc., very static. In fact, they are "perfect". Too perfect. They lack any complexity that our ear associates with natural sounds. In the real world, things that vibrate do so in complex ways that simple envelopes only kind of get at. The envelopes do track the general volume profile of a given sound. But what is missing is that the sound is modulating in many ways that the envelope doesn't suggest. Each harmonic is ringing and falling silent in its own ways, sometimes being amplified by resonance or damped by nodal points in the instrument. Plucked strings may start out vibrating vertically and then switch to horizontal as they lose energy, and other strange and wonderful phenomena.

     I have noticed that newer instruments are innovating in the oscillator section. The Bowen Solaris does this. There are many, many other waves, and you can scan through them with envelopes, LFOs, etc. There are "rotors" that do a circular vector synthesis with waves, ring mod and more - all under advanced modulation matrices. The Schmidt 8-voice has a very non-traditional oscillator section that, again, is built to create dynamic complexity at the oscillator itself, long before any filtering. The NonLinear Labs C15 has very powerful oscillators and resonators to shape the sound, often without any traditional "filtration". The Korg Wavestate offers a different take. The Hydrasynth has complex oscillators in its own way. There are modular synth oscillators that also perform these feats. It is a trend that is gathering speed and momentum in newer instruments.

     It is a different way of thinking about subtractive synthesis. It says that interesting timbres can be created just by dynamically varying the synthesized wave and having the harmonic content rise and fall over time without (or at least before) filtration. In my experience, this is quite welcome and does a lot to improve the musical quality of the sound. A searing triangle lead is what it is, but it is not nuanced, and velocity/keyboard tracking only does so much.

     I think it is very clever that they have grouped similar waves so that one can get "related" variation. Under the right envelope, I'm sure interesting things are possible. I know I've certainly worked on the Solaris with scanning backwards and forwards through wavetables at different rates. If arbitrary multi-point envelopes are available (8-point on the Solaris), even more possibilities occur that can add a lot of interest to sounds. It is also my experience that subtle changes can do a lot. It isn't necessary to mix a bell wave with a trombone wave to find interest. Small shifts in timbral balance under the right modulation work a treat. It seems to me that the Hydrasynth is built to do this. You don't want all the things on "full volume"... you want to bring them in and out in a musical way. Powerful stuff. (A rough sketch of this wave-scanning idea follows below.)
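Since the post above leans on the idea of scanning or morphing between related waves under an envelope, here is a minimal Python/NumPy sketch of that technique. It is purely illustrative: the two waves, the morph envelope, and all parameter values are assumptions, not anything taken from the Hydrasynth or Solaris engines.

```python
import numpy as np

SR = 48000                     # sample rate
N = 2048                       # single-cycle wavetable length
t = np.arange(N) / N

# Two "related" waves: a plain triangle and a slightly brighter variant.
tri = 2 * np.abs(2 * (t - np.floor(t + 0.5))) - 1
bright = tri + 0.15 * np.sin(2 * np.pi * 5 * t)   # add a touch of 5th harmonic

def render(freq=110.0, seconds=2.0):
    """Morph from tri to bright and back under a slow one-shot scan envelope."""
    n = int(SR * seconds)
    phase = np.cumsum(np.full(n, freq * N / SR)) % N        # wavetable read position
    idx = phase.astype(int)
    scan = np.sin(np.pi * np.linspace(0.0, 1.0, n)) ** 2    # rises, then falls
    return (1.0 - scan) * tri[idx] + scan * bright[idx]

audio = render()   # harmonic content shifts over time with no filter involved
```

Swapping `scan` for a velocity- or aftertouch-driven source, or a multi-point envelope, is exactly the kind of small, "related wave" shift the post describes.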
  7. Oh, I plan to keep going, but with the text of the thread able to hold all the techy details, I can keep videos focused on hands-on stuff. What do you mean by "accessing the expression"? I am not sure I follow what you'd like me to dive into here. Do you mean modulations as a form of expression, or playing technique, or...?

     I am trying to get at how much work it is to get the sound engine to respond to playing nuance. Do you have to be very careful about how you assign modulation to get musically useful results, or is it like a Dave Smith instrument where you almost can't make it sound bad because the parameter values are so well chosen? Are sounds expressive with just the poly-AT, or are you also finding that you want to modulate lots of other things to improve the expression? What are those things, and what works? I'm on the Osmose waiting list, but if I wasn't, I suspect this machine would be very interesting. It sounds great.
  8. Actually, your example illustrates the point. Your DAW actually uses many more threads than 144, but this doesn't make it a massively parallel, GPU-ready workload. In your example you have 127 tracks, 16 busses, and presumably a master fader (stereo, 5.1, whatever). But it isn't 144 discrete workloads. Each of the 127 tracks can be computed in a single, un-correlated thread, as you suggest. But what of the busses? A bus thread can't process data until it gets valid samples from every single track feeding it. The master fader can't run until every bus feeding it has given it processed samples. The whole "massively parallel" thing breaks down very quickly into a small number of threads, any one of which can be the "long pole in the tent" that everything else waits for. (There's a rough sketch of this dependency structure below.)

     The whole DAW can only go as fast as it can get every thread serviced in the exact right order and still keep the sound-card buffers full. A single "slow" thread can cause the whole thing to gap. This is why you can get "cracks and pops" while your CPU is running 48% utilization in absolute processing terms. It just can't switch between all the threads any faster, even though it could handle more data on every clock cycle. Audio is real-time. I can't just give it 5 min of data to do at once. I have to give it one slice at a time, at a constant data rate. I don't know if you use MainStage, but you used to see the limits of "one thread per track" quite clearly there. A big plugin could end up crackling just due to needing more than one core could offer in real-time performance.

     Now, in practice? Your modern CPU will laugh off the 127 tracks, 16 busses and master fader on a 4-8 core machine unless you do some massive processing or have the busses nested into each other in complex ways. Your machine runs thousands of threads many, many times every second. Modern CPUs are astonishingly powerful compared to audio bandwidth. I think this is Mike Rivers' point (and a larger point on adequacy). Most people don't want a dedicated music computer, even if it would be "better for audio". They do want the one device they paid for to handle audio too. And it does, for most people and most uses. For some, adding a UAD card solves any other issues, and for the very few, Avid's HDX handles the rest. The truth is that DAWs can't even use all the cores of a modern Threadripper chip. Most DAWs top out at 24-32 cores. You can see this in the latest benchmarks over at ScanPro.

     The other thing that is problematic for GPU processing of audio is the memory latency. GPUs work for games because many of the textures and such (the raw materials) are loaded into the 6-12GB of very fast RAM inside the GPU. Again, audio doesn't work this way. You can't pre-load the audio stream. It has to be computed in real-time. So the CPU would have to constantly feed data to the GPU only to buffer it and get it back later. It is easier to just process it on the CPU, since it isn't "that much" data.

     So, there is a reason that no one uses the GPU. It isn't well suited for the task, and audio processing is "good enough" on a non-real-time CPU for all but extreme corner cases. This comes up on the composers' forum at vi-control about every other year. But the software engineers have known about GPUs for a very long time. If they were useful, they would have been used ages ago. Engineers love efficiency. They are also very pragmatic about "good enough", and we have more than "good enough" in general purpose CPUs.
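To make the bus/master dependency concrete, here is a minimal sketch of the structure being described. The track counts, grouping, and "processing" are placeholders; no real DAW schedules work exactly like this, but the two synchronization points are the part that matters.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

BUFFER = 256            # samples per audio callback; smaller buffer = tighter deadline
TRACKS, BUSSES = 127, 16

def process_track(i):
    # stand-in for one track's plugin chain producing one buffer of samples
    return np.random.randn(BUFFER) * 0.01

def process_bus(track_blocks):
    # cannot start until *every* source track has delivered valid samples
    return sum(track_blocks)

with ThreadPoolExecutor() as pool:
    for _ in range(4):                                         # a few callbacks
        tracks = list(pool.map(process_track, range(TRACKS)))  # the parallel part
        groups = [tracks[i::BUSSES] for i in range(BUSSES)]    # arbitrary routing
        busses = [process_bus(g) for g in groups]              # sync point #1
        master = sum(busses)                                   # sync point #2
        # "master" must be ready before the sound card's buffer runs dry;
        # one slow track or bus stalls everything behind it.
```

The per-track work fans out nicely, but every buffer still funnels through the same serial steps, which is why absolute CPU headroom and real-time headroom are not the same thing.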
  9. Wow, the sound design is coming along very well indeed. Very much looking forward to giving the Osmose people the rest of the money!
  10. Every time I think about getting an analog console, I end up right where MLB talks about in the video: it's all about input, and then straight to converters and digital. I love the intentionality of his live rooms and the thought behind it. You can tell it's his eighth room. Pure analog headphone monitoring. The live room isn't big - it's RIGHT; the vocal booth plays way above its weight class and does multiple duties, with high ceiling retained. It's a very flexible, modern production space capable of working at the highest levels. And then you hear about the kinds of projects being worked on in the space and realize that the big studios left really are the last dinosaurs roaming the earth. This is not a lot of square feet, but it is totally focused on exactly what matters for anything outside a full orchestral session, which is already handled by others. Great space, great design, great hang, great times. Thanks for sharing.
  11. Wow. I have a lot of gear. I don't have anything from InMusic's portfolio. Clearly not pursuing the spaces they support or markets they serve! I do remember when Roli gave me the FXpansion synth plugins as part of my "authorized software". I never even opened them, and just deleted them after a while. I was already drowning in soft synths, and more VA stuff was just a giant yawn. I'm sure that others find them very useful, and rightfully so. My comment is not about the quality of their work, just that it was free, and I treated it like an unwanted "free gift". Roli's own Equator is quite nice.
  12. In a world of "Why of course I like riding my flying unicorn!", we would all want there to be an industry-standard real-time OS dedicated to music. Every DAW maker would run on this one OS, and it would work on any Intel platform. If we had this, something like the UAD accelerators would all go away almost instantly, and UAD would port their code to this OS as plugins. We would be drowning in CPU. Software synth manufacturers could radically improve their code given all the extra cycles freed up. No one would worry about audio interface latency for tracking. Drivers would have minimal buffering. But you are an industry insider. What do you think are the odds that all the DAW and plugin manufacturers would agree on spending a couple of years to make this OS? Or that one would, and then freely share it with others? And that they would take it up without modification, so that no one would struggle with compatibility? Windows and OS X are ubiquitous, "free" from a developer standpoint, and good enough for 99% of all use cases. Audio processing beyond the realm of human sensory perception has been real for well over a decade, probably two, so it just isn't a "problem" that is likely to get any attention. But if it did, all musicians who work with digital audio would rejoice.
  13. Audio is actually very low bandwidth for modern CPUs. It is more that our modern OSes distract them to death. Merging's DAW hacks Windows to leave several cores completely and totally alone, letting them schedule audio processing far more efficiently. My studio runs on Dante. Every Dante interface card is really just an FPGA and supporting memory, hardware, etc. The PCIe version runs 128ch of 96kHz audio with sub-1ms latency - why? It's a fixed-latency, real-time device. Of course, by the time my 8C/16T Intel CPU gets involved, even though it is WAY more powerful, it has to take 3-5 times longer to run those tracks just because it is distracted by thousands of other threads that all want attention. (A quick latency arithmetic sketch follows below.)

      Sadly, music is too small a space to demand a purpose-built real-time OS. If we could have a true real-time OS for our DAW computers, everyone would think a modern computer was many times more powerful than we experience. The truth is that we use computers that do audio incidentally. They are far more likely to be optimized for graphical performance, hence the long history of audio interfaces with DSP mixers, FPGAs, etc. It doesn't solve the CPU problem, but it at least gives an environment where audio is a first-class citizen and is treated appropriately. UAD is a master at marketing this, obviously, but so is RME with TotalMix, etc.

      All the live mixing desks are switching from DSP to FPGAs - that's why there are now all kinds of emulation plugins available for them. Notice there's always a fixed amount of them? Why? Because it's hardware - there are only so many gates that can be programmed for that use; the rest are already in use. But the latency is constant - it doesn't cost more gates to keep performance. That's very cool, and how we want audio to work.

      Avid HDX is the right architectural approach for audio - they just botched the license model in the transition from HD, and then computers got "mostly good enough for most things" to the point where few actually NEED fixed, essentially zero-latency audio processing. This is still a big deal if you need to push record on 80 armed tracks at an orchestral session. But for anyone working at home one instrument at a time? Meaningless. And so, HDX has largely been sidelined, and Avid has pretty much equal performance on the Native side for most people and most uses. UAD offered branded emulations on hardware that worked with any DAW and took Avid's market outside of film and professional audio (which is a very small scene, actually). The whole world of amateur, hobby, prosumer, gigging, self-producing people (which includes some very big, very profitable acts) works just fine on a UAD accelerator.
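For a sense of scale on the "sub-1ms" figure, here is a tiny arithmetic sketch. The buffer sizes are my own illustrative choices, not Merging's or Audinate's published specs: one-way buffering latency is just buffered samples divided by the sample rate.

```python
SAMPLE_RATE = 96_000  # Hz

def buffer_latency_ms(samples: int, sample_rate: int = SAMPLE_RATE) -> float:
    """One-way latency, in milliseconds, contributed by a buffer of this depth."""
    return samples / sample_rate * 1000.0

# A small fixed hardware buffer vs. typical OS/driver buffer sizes.
for samples in (32, 64, 256, 1024):
    print(f"{samples:>5} samples @ {SAMPLE_RATE} Hz -> {buffer_latency_ms(samples):.2f} ms")

#    32 samples -> ~0.33 ms, constant by design in a fixed-latency device
#  1024 samples -> ~10.67 ms per buffer, and a general-purpose OS may still
#                  need that much headroom just to keep every thread serviced
```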
  14. Yes. Audio is real-time and processing must be sequential. This means that if I put an EQ before the compressor, then the audio bit stream has to pass through the EQ before going into the compressor code. GPUs are not designed for this, despite being massively powerful. GPUs are designed for massively parallel workloads. Think about a typical game - I have to paint an entire 4K screen, using thousands of texture files, lighting FX, etc. But every pixel can be calculated independently of any other pixel. So, if I can throw a couple thousand processor cores at it, so much the better - I can finish sooner. GPUs are optimized for a condition exactly opposite of audio. Audio doesn't parallelize this way. Your DAW's master fader is running on a single core, no matter how many you have. It has to. (A rough sketch of a serial channel chain is below.)

      If you want to play with this, put lots of complicated bussing and routing in a project. Watch the real-time CPU performance meter go up, even when you aren't doing anything. Then take all the complex routing out and bypass your plugins, and watch the real-time CPU get real skinny. If you want to have a super low latency tracking setup, disable all plugins, eliminate the bussing, and I bet you can run a smaller buffer. Then load up your full mix template. You may have to increase your buffer to keep from getting dropouts. This is the real-time requirement for audio at work inside a non-real-time OS (Windows, OS X, Linux).

      In audio projects, even big orchestral mockups rarely drive more than 50% absolute CPU usage. But it is quite possible to run out of real-time assets on a CPU that is only running at 50% actual utilization. Audio has to work at a constant rate (the sample rate). Almost nothing else in the computer works that way. So when the computer has to switch back and forth rapidly from thing to thing, for our purposes, it can only work for audio as long as it can keep the audio buffers full. This is always much less than 100% of CPU power.

      GPUs work great for graphics. But it isn't industry oversight that they aren't used for audio. If anyone could use them, UAD would have been put under years ago, or would have stopped making DSP hardware. DSP and FPGAs work for audio because they are constant-latency devices. When the audio path is burned into silicon, the whole chip runs real-time. This is why Avid and others have been so committed to the idea. DSP is GREAT for audio. DSP is not nearly as powerful as a modern CPU, but it doesn't get distracted all the time. That's why live sound mixing desks all use FPGAs. They are programmable and re-configurable, but they are fixed latency once loaded. All the analogies between audio and video (or photography) really just aren't very true. Audio is quite different.
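As a companion to the EQ-before-compressor point, here is a minimal, purely illustrative sketch of a serial insert chain processed block by block. The "EQ" and "compressor" are crude stand-ins rather than real DSP, but the ordering constraint is the point: stage two cannot touch a sample until stage one has produced it.

```python
import numpy as np

SR, BLOCK = 48000, 256   # sample rate and buffer size

def eq(block, gain=1.2):
    # stand-in for a real filter stage
    return block * gain

def compressor(block, threshold=0.5, ratio=4.0):
    # crude hard-knee compressor: reduce anything over the threshold
    out = block.copy()
    over = np.abs(out) > threshold
    out[over] = np.sign(out[over]) * (threshold + (np.abs(out[over]) - threshold) / ratio)
    return out

def channel_callback(block):
    # the order is fixed and inherently serial: the EQ feeds the compressor
    return compressor(eq(block))

# one second of a sine, processed one buffer at a time at a constant rate
audio = np.sin(2 * np.pi * 440 * np.arange(SR) / SR)
processed = np.concatenate([channel_callback(audio[i:i + BLOCK])
                            for i in range(0, SR, BLOCK)])
```

Each buffer has a hard deadline. You can hand independent tracks to many cores, but within one channel the chain above runs in order, every time, which is exactly what a GPU's massively parallel model is not built for.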
  15. A few years ago I bought a brand new Kronos-88 for $2600. I don't know why anyone would pay that much for a used one.
  16. It was also very useful to learn about the Innerclock Systems method for synchronization. Worth the time to dig into.
  17. Thanks for posting these. I'd encourage you to keep going. A lot of instrument videos are rambling, and repetitive. You are keeping them organized and to the point. I'm looking forward to hearing your take on accessing the expression and how easy/hard it is to get it, and how easy/hard the programming is to take full advantage of it.
  18. I enjoyed this. Thanks for sharing. I just installed my first patch panel and now can route out to the modular, guitar pedals, etc. from the DAW or any synth.... I appreciated the thought he put into his patching/routing/normalling. That is the "workflow" of a studio, and it greatly affects ease of use.
  19. Things will always be priced at some point above the cost of construction and enterprise profitability, based on the market's willingness and ability to pay. The number of people who need, want, and will pay for professional keyboard instruments is low compared to any consumer electronics. The issue for me is how it feels as an instrument. I hope that things like the Osmose, decreasing CNC cost, etc. will enable someone to innovate in keyboard expression, and in the linkage to the sound engine.

      Things like the Kronos are amazing - so versatile, they will do anything. But it doesn't feel like an instrument. It feels like a computer, but is remarkably complete for a huge range of potential use cases. There's a reason they don't need to make it better - it is already past the point of adequacy for most commercial music making situations. The NonLinear Labs C15 feels like an instrument, however - it responds well enough that you forget it. If anything, I'd happily see the price go up if it made for better playing (and not for piano, but synth). I've pretty much only bought flagship synths with Fatar's best synth action, good wheels, etc. I am very much looking forward to the Osmose and its potential for expression on a standard keybed.

      Somewhere between powerful digital getting less expensive and keyboard innovation, we should be able to move beyond simple mod matrices to the kind of nuanced, multi-factor timbral changes that acoustic instruments offer. The Yamaha Montage and the C15 share the idea of "master controls" that are mapped to multiple parameters simultaneously in subtle ways. It clearly points to the usefulness of deeper, more sensitive modulation tied to playing controls and surfaces. Somewhere here is the potential for real innovation as an instrument (vs. a sound source). The trick is getting the right product manager who insists that the complexity is all managed by the instrument, not the end user. The manufacturer should find the sweet spot and optimize for it. But this is risky compared to making a general music timbre generator that "does everything".

      With CNC and digital bits getting cheaper, sooner or later someone is going to surprise us all with something that none of the big companies have the vision to do. It won't have to be cheap, and may need not to be. But many synths are not played. They are sequenced with many simultaneous mod sources to achieve the rich results the sound engines are capable of. So maybe they will keep getting less expensive - à la the Korg Wavestate. I'm sure the keyboard is "not awesome" - but for $800 it has huge power. That UI though... sigh... so cumbersome.
  20. My Kawai RX-7 is the best piano I've owned and a dream come true to have a long grand piano in the studio. The Nord Grand is very satisfying to me for a MIDI controller/desk piano (I don't use the internal sounds - but love playing the VSL Synchron Steinway samples on it). For synths, I have a vast preference for Fatar's premium TP-8S action. All of my best synths have this action and great wheels/joysticks. I will pay for good controls - I want an instrument, not a sound generator.
  21. I do all monitor/headphone mixes on a digital mixer (Allen & Heath SQ-5). I am very familiar with the X32/M32 mixers. Honestly, they are just as fast and immediate as an analog console - with WAY more control. They work conceptually just like a standard desk, and have dedicated knobs for the main things. Select the channel, and then all the knobs are right there. You think, "I want to adjust EQ for the bass", and by the time you are done thinking that, you've hit the bass channel select button and it's up on the screen already. It doesn't break flow for me. I never feel like, "oh, I'm doing tech". But I have spent a LOT of time mixing on digital mixers in a live setting. The extra control is really nice if you know how to use it. I know that IEM mixes can be absolutely CD quality and excellent. Necessary? Maybe not, but sure nice to be playing in that environment!
  22. The mixer-as-interface (or interface-as-mixer) is a fantastic thing. A drummer friend of mine uses a Behringer rack mixer exactly that way - drummers need a LOT of channels, and that is $$$. You can't get more capability for less $$. And it sounds fine.
  23. The new Nightwish triple album arrived two days ago. Great album! Trans-Siberian Orchestra are the ones who get virtuoso keyboards, guitar, violin, bass, drums, strings, horns, all the everything going, and everyone has a good time, from kids to senior citizens. Those shows are amazing. They are also through-composed, so that keeps a lid on too many notes with not enough to say. They aren't "metal" enough for the growling crowd. They aren't "out" enough for prog, but they put on a great show, and give arenas of average folk fairly complex material to digest. There's always a story, and the live show is super well produced.
  24. While I have worked as a professional network engineer and held bunches of Cisco certifications, I never cared a bit for the small technical differences between Ethernet audio transport systems. At anything less than massive scale, I don't see it mattering for any typical studio. If you are wiring a college campus music facility or something, then yes, one might have technical needs that define the solution. I just went for the fattest, most developed part of the market. Live sound is essentially all moving to Dante in the new products - it "just works". Sure, they all have legacy proprietary solutions, but I wouldn't buy any of them. I have a separate Ethernet switch and just plug the Dante stuff into it - no configuration. It is all 169.254.x.x auto-configured IP addresses. "It Just Works". Because all Dante devices use a chipset from Audinate, there are ZERO compatibility issues between vendors. I changed out from a Midas M32 with Dante to the Allen & Heath SQ-5 with Dante, re-patched the channels in the Dante matrix, and was back in business. This ease of switching things out is probably a corollary of the "simple cabling". Getting stuff to talk is as easy as MIDI.

      I am with you on USB cables. I found the UFX to be cable sensitive once I was using all the channels and full ADAT expansion. No 15' cables worked. One 10' did; the 6' seemed most consistent.

      I grew up fantasizing about the big mix rooms with $1M SSL consoles on the cover. But those rooms can't even stay open anymore. Every year, I think, "Maybe I should get a console" and double my converter inputs.... It lasts about a month. I always end up in the same place - I work exclusively with digital audio. Everything is digital from right after the mic pre to the Genelec speakers, and I end up just choosing to optimize my existing workflow and tools. I am lurking around Avid's Dock + S1 - it looks like the best option for hardware-assisted mixing, but I'm pretty used to using the mouse at this point....

      But getting back to interfaces - audio is essentially a solved problem. Gear is readily available that exceeds the limits of human perception. The analog input electronics and clocking do improve to a point, but once at the mid-priced professional solutions like Focusrite, UAD, Avid, Lynx, Apogee, and lots of others, there isn't really anything to choose except ports, interfaces, modularity, inclusion of digital mixers, onboard FX, etc. There is a very high-end group of interfaces from Merging, DAD, Prism, etc. at about 3-4x the price of the standard professional interfaces. I do have a Sonosax SX-R4+ recorder (that I am eagerly awaiting the Dante card for) that is in this upper class. It is the best recording quality that I have personally experienced. Professional rechargeable batteries provide power, and therefore it has ZERO mains contamination. It is like a Leica camera - few controls, almost no features or options, but the essentials are all done as well as they can be done.

      Is there a difference in musical meaning? No. Are the files extra nice with extra nice microphones and then played back on extra nice speakers? Oh yes. But there is no difference in musical meaning - that comes from the playing. That has been my maturing over the last several years. "Is there a difference in musical meaning?" Synths, pianos, drums, interfaces, plugins.... There's lots of "different", and I do have my preferences, which I honor, but try not to obsess over. The filter of "musical meaning" has been helpful to me.

      There is definitely a point of professional excellence beyond which it is all personal preference - there isn't a difference in musical meaning. We might all disagree a little about what that is, but no one listening to our music would ever know.
  25. There may also be an uptick in people's general concern for their health. There can be a downside medically to being on a respirator, even after recovery. We may have a lot of people with a fresh appreciation that their life was saved, but also sobered by a new fragility that they cannot undo. I do hope that you are right, Craig, and that we see a lot more people doing the calculation on what is truly important and what that means.

      I am in white collar America, and I know many who are thrilled with the time back from soul-less commutes, and the joy of having dinner every night with the people they love. I've worked out of my home (and occasionally an office) for 25 years, and productivity is the least of the concerns. Silicon Valley is adjusting well. It is a small sample, and one that is generally well suited to WFH, but most companies still have most employees report to an office every day. It is really sales that has been long-term WFH as a matter of course. Companies figured out that it didn't make sense to desk sales forces that call on customers way back in 2000. But our young business development people (age 23-27) really miss the office. They are generally outgoing people, and often live alone, renting a room, or with roommates. They prefer to work in the office by a big margin. Our engineers? They are probably thrilled if they can get the quiet and isolation they need.

      That last bit is the most important bit - do you have a space that you can keep as focus space, so work isn't taking up your kitchen or bedroom? We've always picked a place to live that had an extra room, converted a garage - something to make sure that exists. But that is not easy for everyone, yet it makes working from home far more pleasant.