The current environment of regular livestreamed performances has me wondering about best practices (and any unique solutions others have come up with) for blending software instruments with analog-domain sounds being captured into a computer. If my audio path is microphone/DI > interface > livestreaming software, I'm not sure what makes the most sense for incorporating a controller and a software instrument.
Maybe this is a no-brainer on Windows; as a Mac user, getting internal sound from one application to another requires a third-party virtual audio device (it used to be Soundflower -- since that was abandoned, I believe the usual options now are BlackHole, Loopback, or iShowU Audio Capture). My instinct was not to mess around with splitting processing power between audio/video capture/streaming and sound generation. It's something I've never had to consider as someone who has spent a lot of years using computers to create and capture music but is a relative newcomer to using them for performance.
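For what it's worth, here's a minimal sketch of what that single-computer routing boils down to once a virtual device is installed: the softsynth points its output at the virtual device, and something downstream (normally the streaming app itself, shown here as a bare Python loop using the sounddevice library just to make the plumbing visible) picks it up and hands it to the hardware interface. The device names are placeholders for whatever is on your system.

```python
# Minimal sketch, assuming a virtual audio device (e.g. BlackHole) is
# installed and the softsynth is pointed at it. Device names below are
# placeholders -- run `python -m sounddevice` to list yours.
import sounddevice as sd

VIRTUAL_IN = "BlackHole 2ch"   # the softsynth's output lands here
HARDWARE_OUT = "Scarlett 2i2"  # audio interface feeding monitors/stream

def passthrough(indata, outdata, frames, time, status):
    if status:
        print(status)          # surface dropouts rather than hiding them
    outdata[:] = indata        # copy the virtual input straight through

# Full-duplex stream: read from the virtual device, write to the interface.
with sd.Stream(device=(VIRTUAL_IN, HARDWARE_OUT), channels=2,
               callback=passthrough):
    input("Routing softsynth audio... press Enter to stop.")
```

In practice you never write this yourself -- OBS or whatever just lists the virtual device as another input -- but it illustrates that the routing itself is cheap; the real CPU cost is the synth's own DSP, which is what had me worried about splitting resources.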
I also wonder about this in a recording studio setting. My studio workflow usually starts by recording a group of musicians playing live together (remember when that was a thing?), then overdubbing additional parts one by one to flesh things out. Most of the computer-generated sounds I use in my live rig would either be replaced by an analog instrument in the studio or would fall into the "overdub" category anyway, so I would perform the overdub in a Logic session with the software instrument as a plug-in. But as I add more and more softsynths to my live rig, I'm wondering how I would capture one of those instruments during a live tracking session with a rhythm section.
Do any of you record a live band along with software instruments in the same DAW, capturing audio and MIDI simultaneously? Do you use separate computers for audio generation and for recording, and capture the software instrument as audio rather than MIDI?
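Just to make the question concrete, here's a rough sketch of the "one computer, capture both" version stripped of any DAW: record the interface's audio to disk while logging the controller's MIDI to a file in parallel. It assumes Python with the sounddevice, soundfile, and mido libraries (plus python-rtmidi as mido's backend); the file names, sample rate, tempo, and default MIDI port are all placeholders, and a DAW obviously does all of this for you.

```python
# Hypothetical sketch: capture band audio and controller MIDI at once.
# All names/values are placeholders, not a recommendation.
import time as clock

import mido
import sounddevice as sd
import soundfile as sf

SAMPLE_RATE = 48000
TEMPO = mido.bpm2tempo(120)    # tempo only affects MIDI tick math

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

with sf.SoundFile("band_take.wav", mode="w",
                  samplerate=SAMPLE_RATE, channels=2) as audio:

    def on_audio(indata, frames, time, status):
        if status:
            print(status)      # report xruns instead of failing silently
        audio.write(indata)    # stream interface audio straight to disk

    with sd.InputStream(samplerate=SAMPLE_RATE, channels=2,
                        callback=on_audio), \
         mido.open_input() as midi_in:   # first available controller port
        print("Recording... Ctrl-C to stop.")
        last = clock.monotonic()
        try:
            for msg in midi_in:          # blocks until a message arrives
                now = clock.monotonic()
                # MIDI files store delta times in ticks, so convert the
                # wall-clock gap between messages before appending.
                msg.time = round(mido.second2tick(
                    now - last, mid.ticks_per_beat, TEMPO))
                last = now
                track.append(msg)
        except KeyboardInterrupt:
            pass

mid.save("synth_take.mid")
```

Even this toy version shows the tradeoff I'm asking about: the MIDI log lets you re-voice the part later, while the audio capture is what actually has to stay locked to the band.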
Not necessarily looking for a specific solution for myself -- more the philosophies and practices of others.