
Have synths reached a plateau?


dansouth


Originally posted by Dave Bryce:

Carlo -

 

I went to respond to your post, and I hit the Edit button instead, and wiped out your thoughtful reply.

 

:eek: :rolleyes: :eek:

 

I'm a complete idiot...my sincerest apologies...I suppose I'll just have to send you my Chroma now...

 

dB

 

Aaaarrgghh!!

 

*Sigh* - g a s p - (gosh) :eek: :eek:

 

It was a really *passionate* post, you know...

(did I save it somewhere? - N O )

 

OK never mind, I'll take the Chroma...

An Andromeda will do, too. :biggrin: :biggrin:

 

Carlo

Originally posted by Dave Bryce:

I want to know why synth players are so obsessed with this need for technological innovation.

 

I'm not obsessed with innovation, but I am obsessed with quality. That's why on the CD that I'm working on I use very few sounds from my PCM-based synths. I get far better results with the GigaSampler libraries that I own. The low A note on the East West Steinway B for GigaSampler exceeds 16MB. Most PCM synths try to emulate the entire multi-velocity piano in less. This translates to very audible differences.
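To put rough numbers on that memory comparison, here's a quick sketch (assuming 16-bit stereo at 44.1 kHz; the ~8 MB ROM-piano figure is my assumption for a typical PCM synth of the era, not a quoted spec):

```python
# Rough arithmetic behind the sample-memory comparison.
# Assumed format: 16-bit stereo at 44.1 kHz, i.e. 4 bytes per sample frame.
SAMPLE_RATE = 44_100       # frames per second
BYTES_PER_FRAME = 2 * 2    # 2 bytes per sample x 2 channels

def seconds_of_audio(size_bytes: int) -> float:
    """How many seconds of audio fit in the given number of bytes."""
    return size_bytes / (SAMPLE_RATE * BYTES_PER_FRAME)

# One 16 MB GigaSampler note: over 90 seconds of audio for a single key,
# enough for several long, velocity-switched layers.
one_note = seconds_of_audio(16 * 1024 * 1024)

# An assumed ~8 MB of ROM for an entire 88-key multi-velocity PCM piano
# works out to well under a second of audio per key on average.
per_key = seconds_of_audio(8 * 1024 * 1024) / 88

print(f"{one_note:.0f} s for one GigaSampler note")
print(f"{per_key:.2f} s per key for the ROM piano")
```

That roughly two-orders-of-magnitude gap per key is the "very audible difference" being described.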

 

How come no other type of instrument (that I can think of right now, anyway) has people who play it in constant search for newer and better versions? When was the last time anyone did something new to the sax? Or violin? Or guitar? Okay - the Steinberger, maybe... :wink:

 

I don't place the blame for this on the players but rather on the manufacturers. They are the ones who can't seem to get it right the first time. They're the ones who treat their customers like computer buyers: buy this one today knowing that it will be replaced in 12-24 months by something with twice the presets, twice the ROM, twice the sequencer capacity. How come they can't get it right the first time? We're up to the Nord 3, correct? Will there be a Nord 4? 5? How come the B3 emulators didn't get it right the first time? Hammond introduced the B3 in the 1930s (I believe), and it was essentially the same instrument when production stopped decades later. But Hammond/Suzuki, VOCE, Roland, and Korg have all produced multiple versions of their Hammond clones. We didn't ask for these. We just want one drop-dead-perfect emulation of a B3; we'll buy it, and keep buying it. And the PCM-based stuff - will the revisions ever end?

 

 

Do you have a really discerning ear when it comes to piano samples? Get a piano!!!! Uprights fit into the same space as a digital piano. Really want that realistic piano sound for a critical recording? Try booking space at a recording studio that has a nice grand. It will end up costing you much less time and money than trying to find a sample that comes close. Actually, I don't think any sample comes close...too many possible tonal variations and complex harmonic interactions...also, it's muuuuuuch more fun to play a real piano - something about the feeling of a ton of resonating metal and wood...

 

Here's where you think I'm really twisted. I have a beautiful Steinway B in the other room. I've tried recording it on many occasions. I'm still not satisfied. The difference between what I hear while sitting in front of it and what the microphones pick up is huge. I'm still working to get the right sound. In the meantime I'm actually quite happy with (and prefer) my GigaSampler Steinway with the Altiverb Bosendorfer case resonance added, in most cases.

 

The reason that I never bought a VL1 was that I said to myself..."Okay, I can spend a bunch of time learning the performance idioms of playing sax on a keyboard...using three wheels, four pedals and a breath controller...or I can just spend $50 and hire a sax player when I need one". The latter option just seemed to make more sense. I just recorded a tune that needed horns - I hired a sax player, and called up a gorgeous blonde trombone player that I know, and filled in the rest with synths. It sounds exactly like I wanted it to sound. Plus, I got to play with other people...kinda part of the charm of the whole music thing for me...


 

Here's my take on the VL1. I've spent as much time as anyone on the planet refining sax/trumpet/flute sounds for that instrument. I have hundreds upon hundreds of variations of the basic tenor sax model. During the time when I was doing this I listened to horn players exclusively. I went years never buying a piano player CD. But this in-depth listening to the sound of the horn (which I was trying to capture in my programming of the VL1) and the note choice/phrasing (which I was trying to emulate in my playing) took me to a new level. It got me to think outside the box in terms of creating melodies and soloing. I often compose melodies on the piano, but I use the VL1 to quite literally breathe life into them. I don't believe there is any synthesizer that's as expressive to play as the VL1. It is like a real instrument in that every phrase you play is slightly different. With all of the controllers available to impact the sound, it's virtually impossible to impart each of them in the exact same manner every time. And the controllers are always used to control "musical" aspects of the sound, not increasing X over Y or introducing some Vox-organ-sounding vibrato. I think the VL1 is one of the few synthesizers that can be legitimately called a musical instrument.

 

I agree with you with regards to incorporating other musicians into your music. Based on a suggestion on this forum, I recently enlisted NY Horns to do all of my trumpet and sax parts. While they cost significantly more than $50/song, I'm thrilled with the results. Moving forward I will most likely consider all of my own horn parts to be only scratch horns. But the NY Horns guys also seemed excited to work with my stuff. They really listened to my VL1 scratch horns, which provided very specific musical direction as to what I wanted. Sometimes they would bring in their own ideas, but most of the time they stuck closely to the script that I laid out using the VL1. They were excited because most of the time, dealing with keyboard players, they get melodic ideas that take a lot of effort to turn into good-sounding horn phrases. I don't think they would have been able to capture the mood that I was trying to get, nor would I have come up with the melody in the first place, had it not been for the VL1. Also, this allowed their horn parts to fit like a glove into my existing music. Very little had to be re-done. And it was all done long distance, via the internet.

 

You want a live-vs-Memorex comparison? Well, here it is: http://www.purgatorycreek.com/july_4th.html

 

My response to the original poster's question is yes, I certainly think hardware synths have reached a plateau. But I feel that the quality I can achieve using the computer is far above that hardware plateau. They still make sense for those who gig, but I don't. As far as I'm concerned, they are fraught with compromise, and the convenience of quickly pulling up a "string orchestra" preset ain't worth the hit in quality. For me the most boring place in the world is the keyboard section at Guitar Center. It's downright depressing.

 

Gosh this sounds negative.

 

Busch.


Originally posted by marino:

*Sigh* - g a s p - (gosh) :eek: :eek:

 

It was a really *passionate* post, you know...

(did I save it somewhere? - N O )

 

Yeah, I know. I suck. It was a good post. I was in such a hurry to agree with you that I didn't look where I was clicking.

 

Damn damn damn - I hate that I have the ability to do that...it just feels awful.

 

Any possibility that you can try and remember some of it? I wanna agree with you again! C'mon, Carlo - you can do it...I promise not to erase it this time... :rolleyes:

 

dB

:snax:

 

:keys:==> David Bryce Music • Funky Young Monks <==:rawk:

 

Professional Affiliations: Royer Labs • Music Player Network


Originally posted by burningbusch:

You want a live vs memorex? Well here it is: http://www.purgatorycreek.com/july_4th.html

 

The new version sounds kick-ass. I see what you mean about them working off your performance, and the nuances that you worked up on the VL1. Outstanding.

 

I cited the unbelievable job that you did playing the VL1 in some of my above posts, as I'm sure that you noticed. I'm sure that you know that everyone who heard it was blown away by the quality of your performance.

 

Oh, BTW - if you tell me that they're both samples, you're a dead man... :wink:

 

dB



OK - I'll try to write down my thoughts once again. Maybe a condensed version...

 

I don't think synths have fulfilled their promise yet. What I want is an expressive instrument. I don't care about having the ultimate sampled guitar or the perfect clarinet. I *do* use imitative synthesis, mainly for my TV work, and I've done a lot of it in the past; I'm good enough at it to get the job done (yes, I have a VL1, sampled libraries, etc.).

But I never try to imitate other instruments in my own music. I like to play with others, and the keyboard doesn't have to stick to the clichés of playing cheap imitations of acoustic instruments, or sweep-filter farts and Moog basses. I like my instrument to be in the same expressive league with acoustic ones.

 

Samples can be used, but they are static, and while you can work inside that staticity (is that a word?), they don't lend themselves well to being shaped while you play. The best-sounding synth in my rig is probably the VL1, but it's built, again, with imitation in mind. I'd like a synth that *I* can model, not a "modeling" one. I have a specific technique in mind, but that deserves a thread of its own.. coming soon. :smile:

 

And, I want a keyboard on it! That's what I play. I can play what I hear on a keyboard; I can't on a guitar or sax. Just give me a breath controller, a ribbon, wheels, pedals... What else do you need? Only, it takes time to master these things. Other instrumentalists spend years learning to control their breath, bow pressure, etc. We are so spoiled that we refuse to really study our tools. Remember the first time you tried aftertouch?

 

Now the big one: I am appalled to see how many keyboardists (on this thread, at least) look at perfect imitation of acoustic instruments as their final goal. It's a losing game, guys. These things are there already, and we're given the chance to create something different, so let's do it! And your expressive gestures don't have to imitate those of a violin, guitar or trumpet. The problem is, we have to invent our instrument/sound first, then give it life by refining our playing. It's difficult; but if we don't do it, music will suffer, IMO. People's tastes are cheapened enough already, and autopilot music is so pervasive, it's frightening. Don't get me wrong, I like a robot feel in music sometimes, but when it's the only choice, something's not right. The synthesizer is very involved in all this... Maybe what we need is a generation of "expressive synth players" (and the right instruments, of course).

 

Gosh, that's more an *expanded* version maybe... Forgive my sermonizing, but this is really dear to me... I've been waiting a long time for this kind of electronic instrument, and I'm getting older... :smile:

 

4:00 AM - Time to get some sleep

 

marino


OK - I'll try to write down my thoughts once again.

 

Mille grazie!

 

What I want is an expressive instrument. I don't care about having the ultimate sampled guitar or the perfect clarinet. I *do* use imitative synthesis, mainly for my TV work, and I've done a lot of it in the past; I'm good enough at it to get the job done (yes, I have a VL1, sampled libraries, etc.). But I never try to imitate other instruments in my own music. I like to play with others, and the keyboard doesn't have to stick to the clichés of playing cheap imitations of acoustic instruments, or sweep-filter farts and Moog basses. I like my instrument to be in the same expressive league with acoustic ones.

 

I agree with this point. This sort of reasoning is what leads me to lean so heavily on my PPG Wave, Prophet VS and Wavestation - they are very distinctive sounding instruments, and in their uniqueness lie methods of expression that my ROMplers can never get anywhere near.

 

This is also why I love the VAST synths. There are so many possibilities with this engine - I never get tired of exploring its depths. Sure, its ROM set is old and tired...doesn't matter - I can warp the waveforms so that they are unrecognizable, and get nuances out of it that I cannot get anywhere near with anything else that I own. Besides, I can always load it up with new waveforms, and then use the VAST engine on them.

 

I am appalled to see how many keyboardists (on this thread, at least) look at perfect imitation of acoustic instruments as their final goal. It's a losing game, guys. These things are there already, and we're given the chance to create something different, so let's do it!

 

Very well said.

 

Thank you.

 

dB



Originally posted by marino:

Now the big one: I am appalled to see how many keyboardists (on this thread, at least) look at perfect imitation of acoustic instruments as their final goal.

 

Boy, it's a good thing I'm not a keyboardist. I'd hate to think that anyone finds my ideas appalling. :wink:

 

I may have given the wrong impression with this thread. I'm not suggesting that keyboards can't go any further in their ability to create amazing sounds. I'm just saying that I haven't seen a huge leap forward in the past few years. Today's instruments are improved versions of technology released in the '80s (sample playback) and the early '90s (modeling). I suspect that another "leap" is ahead (maybe soon) and I'm trying to inspire a discussion of the possibilities that that leap may bring.

 

I apologize to anyone who took this to mean that I find the current state of the art insufficient. I don't. Anyone who has frequented this forum has heard me praise the Triton and certain soft synths profusely. Today's tools are better than ever, and I thank the Good Lord every day for allowing me to live in this time and to enjoy the benefits of these wonderful tools. But the future WILL bring new innovations, and I will welcome them. To get the discussion rolling, I posted some of my ideas as to what types of innovations might be useful. This doesn't mean that I regard current synths as worthless.

 

Dave made a good point - modeling in its current incarnation is problematic because the keyboard is not a sufficient interface. Instead of investing a lot of time mastering a new interface (e.g. a breath controller), you might as well master the instrument you're trying to model. But this argument falls short when you consider the power of computers to manage real-time controllers. The manufacturer that produces the next VL1-type synth should supply users with controller templates that approximate how expert players play the modeled instrument. Glissandos, slides, double stops, pizzicato, double tonguing, legato, staccato, sforzando, etc., etc. Ask a hundred players to play a major scale on a tenor sax, and you'll get a hundred different-sounding scales. This kind of nuance COULD be controlled by today's computers via today's sequencers, but no one is spearheading the effort to assemble libraries of controller data for instrument models. Admittedly, this would be a herculean task, but think of the possibilities once someone pulls it off even in a limited way.
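A sketch of what such a controller-template library might look like; the articulation names, curve shapes, and function here are all invented for illustration, not any manufacturer's format:

```python
# Hypothetical "articulation template" library: each template is a normalized
# breath-controller curve (time 0..1, value 0..127) that a sequencer could
# stretch over any note and emit as MIDI CC events.
ARTICULATIONS = {
    # (time_fraction, cc_value) breakpoints; shapes invented for illustration
    "sforzando": [(0.0, 127), (0.15, 60), (1.0, 70)],  # hard attack, back off
    "swell":     [(0.0, 10), (0.7, 110), (1.0, 120)],  # gradual crescendo
    "staccato":  [(0.0, 100), (0.3, 100), (0.35, 0), (1.0, 0)],
}

def render_cc(articulation: str, start: float, duration: float, steps: int = 20):
    """Expand a template into (time, value) controller events across one note,
    linearly interpolating between breakpoints."""
    points = ARTICULATIONS[articulation]
    events = []
    for i in range(steps + 1):
        t = i / steps  # 0.0 .. 1.0, exact at the endpoints
        for (t0, v0), (t1, v1) in zip(points, points[1:]):
            if t0 <= t <= t1:
                frac = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
                events.append((start + t * duration, round(v0 + frac * (v1 - v0))))
                break
    return events

# A two-second swell starting at time 0: CC values climb from 10 to 120.
swell = render_cc("swell", start=0.0, duration=2.0)
```

The same notes rendered through a different template would phrase differently, which is the point of the proposal: the nuance lives in the template library, not in the raw note data.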

 

Don't dismay, synthesists! Of course you can use these techniques to come up with new sounds. Play a trumpet with a bow. Play a flute with a plectrum. Adjust your Moog filter to decay like a Steinway. The possibilities are amazing, but products like this are not coming to market at this time. I hope that someday they will, because I think that composers and arrangers and producers and players will be able to do wonderful things with this technology.

 

Please don't take this as an indictment of the currently available technology, which is nothing short of a dream come true.

 

 

This message has been edited by dansouth@yahoo.com on 09-07-2001 at 12:38 AM


I'm nicely surprised with all of your comments here, gentlemen.

 

 

I love to fool my friends with a good acoustic guitar sample in silly things like playing live "Dust in the Wind" or sequenced in the middle of a Flamenco tune, using the technique David Bryce mentioned, like "strumming" without using the DIGITAR...

 

I really like those lead-guitar-like sounds in VA synths, or even the ones in Yamaha's S80 or Korg's Triton.

 

... but even while I'm happy with them, I'm always surprised by how good my guitar player is and how quickly he can amaze me with his mastery of his 6-string Ibanez.

 

Dave's point is valid when he mentions that, in order to capture an idea, mastering an instrument is not necessary. Many of our DEMOS surely were made like this, and that's valid.

 

However, to get a FINAL result, having such new instruments (I liked the "Drum Machine / Guitar Machine" thing...) that could actually replace a guitar would be absolutely nice, so we could work faster... Agreed. Not because I hate other musicians (drummers included). Just because it would be terrific, and absolutely the real next big thing in synthesis.

 

Sorry to say, under that statement, synths have really reached a plateau.

 

However, taking all that technology to emulate a guitar seems so weird to me... I'm sure VAST or VA technologies are still not fully explored and used to their maximum. We should try (and surely we can, if we really dig deep) something NEW using current technology. I mean synth engines, not alternate controllers. Can any of you say you're using 100% of your synth's features?

 

We're still far from replacing the actual sound of acoustic instruments (Depeche Mode call them "organic instruments", since an electric guitar is electric) with synths. Even worse... if we could get those sounds from a keyboard (or a synth engine, better said), there would still be no substitute for mastering the emulated instrument.

 

And once again, there will be the question of whether we really need to master those instruments or hire someone who specializes in playing them... say you get this 100% real acoustic guitar synth engine... do you know how to play it just like Al Di Meola would?

 

I'm still amazed at how well Busch masters his VL1... awesome guy... He's a good example of what I meant... I guess we'd all love to master a virtual instrument like him... NOW try to master every REAL emulated guitar, oboe, violin... even drums for a jazz / prog-rock tune... it would be nice... but then, perhaps learning those new (and still utopian) instruments could be as hard as learning to play the real stuff...

 

 

I'm still trying to figure out how to get better sounds from Reaktor and my old Ensoniq SQ1... Then, I have GIGAPIANO, but I absolutely cannot play like Chick Corea or Bartók... :biggrin: ... how's that for an example?

 

I'd love to have all of the new (even the utopian) technology right here, right now. It always helps to make better music if you know how to use your stuff...

 

... There will never be a replacement for virtuosity. Great equipment in the wrong hands will always sound like crap...

 

------------------

Gus Lozada

 

Moderador de:

MusicPlayer.com/NuestroForo

"La voz en Español en Música y Tecnología"

 

Gus TraX @musicplayer.com

Músico, Productor, Ingeniero, Tecnólogo

Senior Product Manager, América Latina y Caribe - PreSonus

at Fender Musical Instruments Company

 

Instagram: guslozada

Facebook: Lozada - Música y Tecnología

 

www.guslozada.com


Originally posted by nursers:

Speaking of replacing guitar players, I went to a Dave Matthews Band concert about a month ago, and the keyboard player played the most amazing guitar solo - first time it has taken me ages to work out who was playing it; usually you can pick a keyboard-played guitar solo at the drop of a hat.

 

So it can be done :smile:

 

 

For me, it was 3:15 into "Burning My Soul" by Dream Theater. You know it's a great emulation of a guitar solo when all the guitar mags are transcribing a keyboard solo... :smile:


Originally posted by GusTraX:

...Great equipment in the wrong hands will always sound like crap...

 

Gus, this IS the sentence! I think I must nail it to a wall in my home studio and... stop making music (while trying to get all the great equipment I can afford) :biggrin:

 

 

 

This message has been edited by Gulliver on 09-07-2001 at 05:59 AM

I am back.

OK, I've seen some nice rants here... :smile:

 

I've got one thing to say:

 

MIDI has reached a plateau, not the synths.

 

There are two major things that bother me about MIDI.

 

1. It's a realtime protocol. Does anyone play a reversed hi-hat in real time? Does anyone program reversed hi-hats in MIDI? No, we have to use the audio channels for that. Why? Because the thing we want to say to the reversed-hi-hat player isn't "start playing" and "stop". We want to say "stop playing 4 beats from now".

 

2. It's the standard protocol for VST synths as well, which puts heavy restrictions on how you can control a synth, even when there is no need for a MIDI cable.
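The reversed-hi-hat point can be made concrete with a toy look-ahead scheduler (a minimal sketch, not any real sequencer's API): the note-on time must be computed backwards from the musical endpoint, which is exactly the kind of statement a "start now" / "stop now" realtime protocol cannot carry.

```python
# Why reversed samples fight a realtime protocol: a reversed hi-hat must
# *end* on the beat, so its note-on has to fire sample_beats early.
# "Start now" / "stop now" messages can't express that; a look-ahead
# scheduler with the whole timeline in view can simply compute it.

def schedule_reversed_hit(target_beat: float, sample_beats: float):
    """Return (note_on_beat, note_off_beat) so the reversed sample's
    crescendo lands exactly on target_beat."""
    note_on = target_beat - sample_beats  # requires knowing the future
    if note_on < 0:
        raise ValueError("not enough look-ahead before the target beat")
    return (note_on, target_beat)

# Land a 1.5-beat reversed hi-hat on beat 4: the note-on fires at beat 2.5.
on, off = schedule_reversed_hit(target_beat=4.0, sample_beats=1.5)
```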


Originally posted by nursers:

So it can be done

One of the big problems with sampled guitars was not hearing a release sound effect...the sound of the finger releasing the string at the end of a sustained note, whether it be a short or longer sustained note. Software-based samplers such as GigaSampler, and now the latest version of HALion (1.1), do support release-trigger samples. Guitar sampling that takes advantage of this release-trigger feature sounds like a guitar instead of sounding more like a harpsichord. A few guitar sample libraries produced in recent times take advantage of these features in the Giga and HALion formats.
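The release-trigger idea can be sketched as a toy sampler voice (a hypothetical structure for illustration; this is not GigaSampler's or HALion's actual engine): on note-off, instead of simply fading the sustain sample, the voice fires a separate release sample, scaled by how loud the note still is.

```python
# Toy release-trigger voice. On note-off it fires a separate "release"
# sample (the finger-off-string noise) scaled by the note's remaining
# envelope level, so short and long notes release naturally.

class Voice:
    def __init__(self, sustain_sample: str, release_sample: str):
        self.sustain_sample = sustain_sample
        self.release_sample = release_sample
        self.env_level = 0.0

    def note_on(self, velocity: int):
        self.env_level = velocity / 127.0
        return ("play", self.sustain_sample, self.env_level)

    def note_off(self, decay_per_beat: float, beats_held: float):
        # Simplified linear decay while the note was held
        level = max(0.0, self.env_level - decay_per_beat * beats_held)
        self.env_level = 0.0
        # The release sample plays at the level the note had when released,
        # so a long-faded note gets a quieter finger-release noise.
        return ("play", self.release_sample, level)

v = Voice("gtr_A3_sustain.wav", "gtr_A3_release.wav")
v.note_on(velocity=127)
event = v.note_off(decay_per_beat=0.1, beats_held=4.0)  # level is about 0.6
```

Without that second trigger, every note ends in the sample's baked-in fade, which is the harpsichord-like artifact described above.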

 

You can go to the MP3 page on my web site and hear some examples of guitar samples which have been produced, using these release trigger features.

 

These same release trigger features are also important characteristics to capture on most any acoustic instrument samples.

 

Kip

Bardstown Audio

www.bardstownaudio.com


Here's one of the areas where modeling can really make a difference. I have a feeling that modeling string release will provide a much more lifelike experience than slapping a 'release' sample on a fading string sample....

 

Someone else mentioned that modeling is an old technology. It seems to me that, unlike sampling, the potential of modeling hasn't really begun to be fulfilled. But I'll take the bait here and ask the obvious Q: what would be a technology that could make a leap past modeling for us?

 

Originally posted by Bardstown Audio:

One of the big problems with sampled guitars was not hearing a release sound effect.....the sound of the finger releasing the string at the end of a sustained note, whether it be a short or longer sustained note. Software based samplers such as Giga Sampler, and now the latest version of HALion 1.1 do support release trigger samples. With this type of guitar sampling which takes advantage of this release trigger feature, guitar samples sound like guitars instead of sounding more like a harpsichord.

I used to think I was Libertarian. Until I saw their platform; now I know I'm no more Libertarian than I am RepubliCrat or neoCON or Liberal or Socialist.

 

This ain't no track meet; this is football.


Great thread. I'm one of the few who got to read Marino's original post on the topic before Dave inadvertently "censored" it, and let me tell you, it was inspired and illuminating. Sorry.

 

On the subject of modelling, isn't it interesting that *guitar players* have gobbled it up shamelessly with the Line 6, Johnson, and Roland products, and now a host of imitators. Why is it that electric guitarists, those infamous aficionados and purists of string, metal, and tube, are the ones supporting the advancement of modelling? Hell, we have "real" infinitely expressive instruments full of unpredictabilities and erratic behaviors, amps that change in essential sound over the course of a set, instruments that evolve over years, and yet we (some of us) welcome the arrival of a practical, predictable, and controllable approach to tone.

 

If you've played with a POD for any length of time, you will most likely be left feeling impressed but just a little saddened by it for the same reason we gripe about samples--a deadening consistency and "staticity" (thanks Marino) both to its sound and, most importantly, in the way it responds to your attack and articulation. A levelling of expressiveness.

 

So my brother, a music professor, composer, and studio owner in the midwest, visited me recently and laid his hands on a POD for the first time. I had to imagine he was dubious. The man has a very impressive and eclectic guitar and amp collection, not world class but all of the major tone monsters are represented, and he knows how to use them. Let's put it this way -- he feels the need to have two different kinds of Dobros on hand at all times. So he plugs into the POD, monitoring through my Mackie mixer and Tannoy Reveals. He dials up a high-gain Mesa Boogie model and trots out his Eric Johnson-type licks. Meanwhile, I'm sitting there apologizing for the static and somewhat unsatisfactory response of the POD. A look of contempt comes over his face. He strikes one monster distorted power chord, looks at me while it evolves and decays, and says, "do you have *any* idea how difficult it is to get that sound on tape with the real thing?"

 

Don't know my point exactly--in the guitar world, modelling isn't about pushing the sonic envelope; it's about a practical and efficient alternative, much the same way, I think, that the B4 is so well received. Not the real thing, but a real time and hassle saver. This valuation of acceptable realism over exploration is driving the development of modelling.

 

John

Check out the Sweet Clementines CD at bandcamp

Originally posted by Magpel:

Don't know my point exactly--in the guitar world, modelling isn't about pushing the sonic enevelope; it's about a practical and efficient alternative, much the same way, I think, that the B4 is so well received. Not the real things but a real time and hassle saver. This valuation of acceptable realism over exploration is driving the development of modelling.

 

John

 

John (and Gus) - I agree wholeheartedly with your sentiments. Every tool, every convenience lets us accomplish more, or accomplish similar results with less effort. A "guitar machine" or a "woodwind section machine" would open new possibilities for lots of musicians. No machine will ever replace the real thing, but that's no reason to shy away from better approximations.

 

Gus is absolutely right about the pitfalls of tools in the wrong hands. Tools only magnify our abilities; they don't transplant the abilities of others into us. Take any instrument as an example. More people play that instrument badly than play it well. But the hordes of hackers don't matter to you when you're listening to a virtuoso performance. The fact that music technology COULD be used to make crappy music is not an argument against developing that technology in the first place.


Lots of good thoughts in this thread. Here and there it reminds me of the lament we used to hear about jazz in the '60s and '70s. Old-line jazzers would say, "There's been nothing new in jazz since Coltrane." If you said, "What about Weather Report?" they'd say, "That's not jazz!" "What about Keith Jarrett, then?" "That's not jazz!" They wanted to hear something new, but only within boundaries that they could understand.

 

The next big thing in synthesis has been around for years, in one form and another, but people don't see it, because it would require them to think (if you'll forgive the cliche) outside the box.

 

Don Buchla's Thunder controller was a big thing. Jim Johnson's Tunesmith was a big thing. Max was (and is!) a big thing. Ditto for Kyma. Right now I'm working with NI Reaktor 3.0, and it's a very big thing indeed. What it isn't is a shortcut to a more realistic acoustic guitar or piano emulation.

 

Another point: The revolution will not be televised. By which I mean, the next big thing will not be user-friendly. Not in its first incarnation. No more than Emerson's Moog modular was user-friendly. The people who get in on the ground floor of the next big thing will be those who are freak enough to burn a lot of evenings and weekends figuring out how to make music with it.

 

By the time it becomes user-friendly, it ain't new any longer -- everybody's got one.

 

--Jim Aikin


Herbie Hancock said something that I'll never forget.

 

Back around 1990, Graham Nash hosted a one hour cable television show that interviewed legendary musicians and featured live sets (VH1, A&E, I don't recall). Herbie Hancock was one of the guests, and when asked to compare the acoustic piano to the synthesizer, his response was "Synthesizers are still babies".

 

His point was that you can't compare the development of an instrument several hundred years old to an instrument that has only existed for decades.

 

As for me, I don't think we've seen the tip of the iceberg. The most exciting things haven't been imagined yet, and probably can't be comprehended.

 

Can you imagine what would happen if you took a modern digital sampler back in time? Can you imagine showing Rick Wakeman, Emo, or any other keyboard hero what an S6000 can do? They would think you were a frickin' alien (nevermind that you traveled back in time...you get my point).

 

I DO think the current crop of synths has reached a stalemate, but I don't think it will last for long. Virtual synths and soaring processor speeds will take care of that. We just need to break out of the rut of the traditional DCO-DCF-DCA-LFO thinking. When you're a hammer, every problem looks like a nail. We need to dream about the sound, and then decide how to get there.

 

Rick Wakeman also said something memorable in his 1989 interview with Keyboard (during the ABWH sessions). He said something like, "We have now heard all the sounds that can be synthesized and sampled. It's a matter of things getting smaller, faster, and better." Well, this was a year before the Korg Wavestation turned sample playback on its head.

 

It just goes to show, you can't predict the future.

 

Wiggum

 

 

This message has been edited by Wiggum on 09-07-2001 at 10:16 PM


Originally posted by dansouth@yahoo.com:

The manufacturer that produces the next VL1-type synth should supply users with controller templates that approximate how expert players play the modeled instrument. Glissandos, slides, double stops, pizzicato, double tonguing, legato, staccato, sforzando, etc., etc. Ask a hundred players to play a major scale on a tenor sax, and you'll get a hundred different-sounding scales. This kind of nuance COULD be controlled by today's computers via today's sequencers, but no one is spearheading the effort to assemble libraries of controller data for instrument models. Admittedly, this would be a herculean task, but think of the possibilities once someone pulls it off even in a limited way.

 

Don't dismay, synthesists! Of course you can use these techniques to come up with new sounds. Play a trumpet with a bow. Play a flute with a plectrum. Adjust your Moog filter to decay like a Steinway. The possibilities are amazing, but products like this are not coming to market at this time. I hope that someday they will, because I think that composers and arrangers and producers and players will be able to do wonderful things with this technology.

 

This message has been edited by dansouth@yahoo.com on 09-07-2001 at 12:38 AM

 

 

Much of what you're describing can be done on the VL1 today. Because it is modeling a real instrument, it responds to input as accurately as the model can. There is no need for templates. It does the most wonderful legato, emulating a horn's legato (just keep applying breath pressure as you change notes). If you want staccato, play it and the instrument will respond accordingly. If you want a swell, just gradually increase breath pressure. Trumpet rips are easy. Everything from the throat of the player, through the lips, the mouthpiece, and the body of the horn, to the bell can be programmed and controlled. Want the scale to sound different every time? Increase damping, which will cause the scale to go slightly out of tune based on input pressure. And yes, you can do things like pair a trumpet mouthpiece with a flute body.
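To make the breath-driven legato idea concrete, here's a toy sketch in Python. It's a guess at the behavior, not Yamaha's actual algorithm: if breath pressure (MIDI CC#2) is still applied when a new note arrives, the transition is treated as a slur rather than a fresh attack. The threshold value is invented for illustration.

```python
# Toy illustration of breath-driven legato on a wind-style synth:
# if breath pressure (MIDI CC2) is still applied when a new note
# arrives, treat the transition as slurred rather than re-attacked.
# This is a guess at the behavior, not Yamaha's actual algorithm.

LEGATO_THRESHOLD = 10  # CC2 value above which breath counts as "on"

def classify_transitions(events):
    """events: list of ("breath", cc_value) or ("note_on", pitch) tuples.
    Returns a list of ("attack" | "slur", pitch), one per note-on."""
    breath = 0
    sounding = False
    out = []
    for kind, value in events:
        if kind == "breath":
            breath = value
            if breath <= LEGATO_THRESHOLD:
                sounding = False  # breath released: next note re-attacks
        elif kind == "note_on":
            if sounding and breath > LEGATO_THRESHOLD:
                out.append(("slur", value))
            else:
                out.append(("attack", value))
            sounding = True
    return out

# Keep blowing through the note change -> the second note slurs.
print(classify_transitions([
    ("breath", 80), ("note_on", 60), ("note_on", 62),
    ("breath", 0), ("breath", 90), ("note_on", 64),
]))
# [('attack', 60), ('slur', 62), ('attack', 64)]
```

The point of the sketch is that no per-note "legato switch" is needed; the articulation falls out of the performance gesture itself, which is exactly what the VL1's model-based approach offers.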

 

In the sampler world, GigaSampler has something called dimensions, which can act like the templates you're describing. They are typically set up using very low or high keys on the keyboard as triggers (notes that are out of range of the original instrument). C-1 might be normal, C#-1 legato, D-1 staccato, etc. You can then move instantly between any of these different phrasings, either in real time or in your sequencer. Each dimension is really a complete sampling of the instrument using the particular phrasing/effect. The newly released Garritan String Library is probably the ultimate example of this. http://www.garritan.com/articulations.html
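The keyswitch idea can be sketched in a few lines of Python. The note numbers and articulation names below are illustrative only, not GigaSampler's actual internal mapping: out-of-range notes silently select a layer, and in-range notes play with whatever layer is current.

```python
# Hypothetical keyswitch router: out-of-range notes select an
# articulation layer; in-range notes play with the current one.
# (Note numbers and articulation names are illustrative only.)

KEYSWITCHES = {
    0: "normal",    # C-1
    1: "legato",    # C#-1
    2: "staccato",  # D-1
}

def route_notes(events):
    """events: list of MIDI note numbers in playback order.
    Returns (note, articulation) pairs for the playable notes."""
    current = "normal"
    out = []
    for note in events:
        if note in KEYSWITCHES:
            current = KEYSWITCHES[note]  # switch layer, make no sound
        else:
            out.append((note, current))
    return out

# Switch to legato, play two notes, switch to staccato, play one.
print(route_notes([1, 60, 62, 2, 64]))
# [(60, 'legato'), (62, 'legato'), (64, 'staccato')]
```

Because the keyswitches are ordinary notes, they can be recorded and edited in any sequencer like any other event, which is what makes the scheme so practical.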

 

Busch


Parts of this thread seem to have drifted into an emulated-vs.-real debate. I view modern music creation like modern filmmaking. Directors employ all sorts of techniques and technologies to create their product. Who cares if the house being blown up is really a miniature model or computer generated, if the explosion sound is Foley, or if the actor running from the scene is a stunt man? These are all emulations, if you will. But if we're drawn into the scene emotionally and are not distracted by the "emulations," then the scene is said to have worked. I feel the same way about musical emulations. If the listener is not distracted by them and is emotionally drawn into the piece, then it's valid.

 

I am glad Jan Hammer didn't just hire a guitar player. It certainly would have been easier. I've always gotten a kick out of great emulations. Hans Zimmer typically does a complete mock-up of his film scores using samplers, which is almost always replaced with a real orchestra. But I remember reading in Keyboard that on one particular film there were two sections where the samplers sounded so good they weren't replaced, and he challenged anyone to pick out the real from the fake. Moving forward with my music, I know I will be using more real musicians. But that's for me, and I don't feel others who don't have the budget or access to real musicians should feel that their music is somehow less valid because they rely entirely on synths.

 

And thanks to those who paid me those nice compliments.

 

Busch


Back to normal mode...

 

Originally posted by burningbusch:

Much of what you're describing can be done on the VL1 today. Because it is modeling a real instrument, it responds to input as accurately as the model can. There is no need for templates. It does the most wonderful legato, emulating a horn's legato (just keep applying breath pressure as you change notes). If you want staccato, play it and the instrument will respond accordingly. If you want a swell, just gradually increase breath pressure. Trumpet rips are easy. Everything from the throat of the player, through the lips, the mouthpiece, and the body of the horn, to the bell can be programmed and controlled. Want the scale to sound different every time? Increase damping, which will cause the scale to go slightly out of tune based on input pressure. And yes, you can do things like pair a trumpet mouthpiece with a flute body.

 

A few thoughts.

 

1. The VL1 is no longer available.

 

2. A better VL synth could be built today for less money than the original five grand, but Yamaha chooses to repackage the technology in a watered-down format. This is the same thing they did with the DX7. Will they never learn?

 

3. A VL-X synth that would support multiple voices and multiple models simultaneously would allow the modeling of ensembles in real time. This could be done with a VL1 and a multitrack recorder, but it's a lot less convenient, because you can't build arrangements in real time.

 

4. Suppose our new eight-voice VL-X allows us to model a Dixieland band using the following models: trumpet, clarinet, guitar, piano, bass, snare, hi-hat, cymbal. No problem, just like normal multitimbral synths. You record some rough tracks into the sequencer. Here's where it gets interesting.

 

Each sequencer track is run through a processor (not in real time) that analyzes the notes and phrases and simple controller information and combines that with a "performance model" - not a sound model - to create performance data for the specific instrument in a specific style. For instance, you can run the trumpet part through a Louis Armstrong phrasing model (one type of performance model) on the way to a standard trumpet sound model. The phrasing model approximates how the player might have phrased the notes - air pressure, slurs, tonguing style, valve speed, etc. The resulting data can be edited via controller editors in your sequencer.

 

You can replace the phrasing model with other players (Roy Eldridge, Miles Davis, some modern studio cat). You can replace the sound model with other models (Dizzy trumpet, flugelhorn) - or even alternate instruments (trombone, French horn). Once you get a basic combination that sounds about right, you can edit to your heart's content. You can blend in phrases that you play in real time for feel or spontaneity. The possibilities are staggering.

 

It would take a while to develop libraries of useful phrasing models, but the effort could be shared by opening up model generation to users. People all over would soon be posting models on their web sites. This "performance modeling" approach has many possibilities. Today's technology should be good enough to start the ball rolling, even if a polished product is years away.
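As a thought experiment, the offline pass described above might look something like this. Every name and number here (PhrasingModel, the swing and breath values) is invented purely for illustration; no such product or API exists.

```python
# Sketch of the proposed "performance modeling" pipeline: an offline
# pass that combines raw note data with a phrasing model to produce
# per-note performance data. All names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Note:
    pitch: int       # MIDI note number
    start: float     # position in beats
    duration: float  # length in beats

@dataclass
class PhrasingModel:
    """A player's tendencies, reduced to a couple of toy parameters."""
    name: str
    swing: float          # how late off-beat eighths are pushed (beats)
    breath_attack: float  # initial breath-pressure value, 0.0-1.0

def apply_phrasing(notes, model):
    """Offline pass: turn raw notes plus a phrasing model into
    performance data (here, just adjusted timing and breath pressure)."""
    performed = []
    for n in notes:
        start = n.start
        # Push off-beat eighth notes later to create a swing feel.
        if (n.start * 2) % 2 == 1:
            start += model.swing
        performed.append({
            "pitch": n.pitch,
            "start": round(start, 3),
            "breath": model.breath_attack,
        })
    return performed

# Swap in a different PhrasingModel and the same raw notes come out
# phrased differently -- the core of the idea described above.
armstrong = PhrasingModel("Louis Armstrong (toy)", swing=0.08, breath_attack=0.9)
line = [Note(60, 0.0, 0.5), Note(62, 0.5, 0.5)]
print(apply_phrasing(line, armstrong))
```

A real system would of course emit far richer controller data (slurs, tonguing, valve timing), but the shape is the same: notes in, performance data out, with the phrasing model as a swappable parameter.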


Archived

This topic is now archived and is closed to further replies.
