Theo Verelst
Posted September 18, 2020

Suppose you have the perfect piano, organ, string ensemble, big synthesizer, and so on, and want to put them in an instrument or piece of software, with all kinds of parameters in your digitization of these sounds. Maybe you can change the envelope, the timbre, the amount of resonance and reverberation sensitivity, the warmth, amplitude/power-limitation effects like sound board and Leslie/chorus, and even Equal Loudness Curve based (pre-)transformations, so that the result sounds good whether someone plays your recording/sequence really softly, really loudly, or anywhere in between.

Now presume the parameters interfere with each other, so that changing one parameter will either make other sound elements sound as though their parameters had been changed, or open up more sound space than is normally covered, through unrealistic parameter combinations.

The first of two major options, given all this, would be to limit the output of the workstation to realistic sounds only, and sort of squash illogical or physically incorrect parameter combinations into somewhere within the sound spectrum of existing sounds, using some mapping of the front-panel parameters, maybe with AI techniques.

Another option would be to take for granted that unintended sound settings will lead to unrealistic sound coming out of the virtual instrument, and that only certain adjustments of the various sound-tuning options lead to the sounds you'd expect, while the rest become "synthetic".

I suppose which of these main options gets chosen depends on the marketing of the instrument: is it primarily a high-grade imitation instrument, or are people going to have fun with all kinds of strangely constructed sounds and smile at the chipmunk elements?

T
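Just to make the two options concrete: here's a minimal Python sketch of what the "squash into realistic territory" versus "let anything through" policies could look like at the parameter level. The parameter names and the "plausible" ranges are purely my own illustrative assumptions, not any real instrument's API; a real design would need a far richer model of which combinations are physically sensible.

```python
# Hypothetical sketch: two policies for handling front-panel parameter
# combinations in a virtual instrument. Names and ranges are assumptions
# made up for illustration only.

# Assumed "physically plausible" band for each parameter (0.0-1.0 scale).
REALISTIC_RANGES = {
    "resonance": (0.0, 0.7),
    "warmth":    (0.2, 1.0),
    "reverb":    (0.0, 0.5),
}

def squash_to_realistic(params):
    """Option 1: clamp each parameter into its plausible range, so every
    front-panel setting maps onto some existing-sounding patch."""
    out = {}
    for name, value in params.items():
        lo, hi = REALISTIC_RANGES[name]
        out[name] = min(max(value, lo), hi)
    return out

def pass_through(params):
    """Option 2: accept any combination as-is; out-of-range settings are
    simply allowed to produce 'synthetic' rather than imitative sounds."""
    return dict(params)

# Example: an overdriven resonance and an implausibly cold warmth setting
# get pulled back into realistic territory by option 1, but survive
# untouched under option 2.
panel = {"resonance": 0.9, "warmth": 0.1, "reverb": 0.3}
print(squash_to_realistic(panel))  # resonance -> 0.7, warmth -> 0.2
print(pass_through(panel))         # unchanged
```

A smarter version of option 1 would project onto a learned manifold of realistic parameter combinations (the "AI techniques" mentioned above) rather than clamping each axis independently, since per-axis clamping ignores the interference between parameters that motivates the whole question.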