
Sound designer technical standards


Theo Verelst


When I was into serious synthesizers, some time before the "workstations" like the (Korg) M1 came out, I was an (early) university EE student, and I never found it easy to get the exciting, well-made sounds that were in the factory or third-party sound banks. Some here, I've read, come from an education background where Moogs and other early synths were part of the curriculum, but in my case the background I had in analog synthesis was more in electronics, i.e. I would imagine the circuits and the currents running through them in connection with the sound they produce.

 

In university I learned about the basics of signal processing, frequency analysis, etc., which I won't bore the lot of you with, but it struck me that the programs in the synths I had (Korg Poly-800 and DW-8000, Yamaha DX7/TX802, Bit One, among others) were quite complex and often not easy to understand, whereas a ROMpler is a bit more straightforward (sample, some sound shaping, envelope, effects). So how does a polyphonic synth "get" a good working brass patch or some rudimentary e-piano, how can an FM "algorithm" be persuaded to sound a bit like a piano, and on top of that, how does the result work well through effects and standard studio reverb, for instance?

 

I suppose I should have owned a deep Moog or a Prophet to learn more about the basics at stake in that sort of programming, but I'm sure that, for instance, the ROM banks of the DX7 cannot be easily approached by some programmer dude trying out sounds and shades of harmonic feel and playing with the envelopes. At some point there is signal processing or circuit knowledge, or however you want to call it, involved in arriving at those special sounds some synths can be good at.

 

Would you say you apply deep forms of sound knowledge to make the LFOs make chords sound great, or something like that, or would you say you believe more in the "samples + some messing about" theory?

 

T.V.





Being an EE with a passion for analog synths, my Memorymoog was the machine where I learned a lot of sound design. I will say that you'll learn a lot more from an intuitive interface. Even with a one-knob-per-function remote editor, I don't think there is ANY intuitive interface to an FM synth, because it is so unpredictable.

 

It didn't hurt that I started out with a PAiA modular, either.


So how does a polyphonic synth "get" a good working brass patch or some rudimentary e-piano, how can an FM "algorithm" be persuaded to sound a bit like a piano, and on top of that, how does the result work well through effects and standard studio reverb, for instance?

 

...

 

Would you say you apply deep forms of sound knowledge to make the LFOs make chords sound great, or something like that, or would you say you believe more in the "samples + some messing about" theory?

 

Theo, I think it's a combination of talent and training. In the early days of synthesis, people codified the synthesis techniques slowly, from FM plucks to simple FM pianos, to the burp at the beginning of a brass sound. Messing around helped expand the frontier of what is possible. Now there is a sizable body of knowledge. Someone armed with a couple of acoustics reference books, a few classes, a re-synthesizer (e.g. iZotope Iris) and a relatively open-architecture synth (e.g. Reaktor) can develop huge sound design skills within a couple of years. I find the most significant challenge is similar to a challenge we face as musicians too: teaching ourselves how to listen. There are classes, but it's a personal journey as well.


And he/she still doesn't understand even the basics of the sampling issues they're constantly battling (or, worse, gives up and proclaims bugs are features), while the interesting part of the original synthesizer idea gets no attention: the natural and great sounds of all kinds of differential equations.

 

T.


I am quite impressed with how much good sound design is happening without resorting to higher math, actually. The newer tools are quite intuitive (if you'll pardon the YouTube sound quality):

 

[video:youtube]0YD3n5LYaYQ

 

 

I could do those bells he is extracting using FM techniques on an open-architecture synth, but he'd be finished and having stroopwafels and espresso with his friends while I was still setting up the FM relationships with (virtual) patch cords. Especially when samples are involved, a visceral understanding of sound is useful.
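For anyone curious what those FM bells mean concretely, here's a minimal sketch (plain Python; the function name and all constants are my own for illustration, not taken from any particular synth) of the classic two-operator recipe: an inharmonic carrier-to-modulator ratio plus an exponentially decaying modulation index, which is roughly what I'd be patching up with those virtual cords:

```python
import math

def fm_bell(freq=220.0, ratio=1.4, index0=10.0, dur=2.0, sr=44100):
    """Two-operator FM bell sketch: the non-integer carrier:modulator
    ratio gives an inharmonic (metallic) spectrum, and the modulation
    index decaying along with the amplitude makes the strike darken
    as it rings out."""
    n = int(dur * sr)
    out = []
    for i in range(n):
        t = i / sr
        env = math.exp(-3.0 * t)      # shared amplitude/brightness decay
        index = index0 * env          # modulation index fades with the note
        mod = math.sin(2 * math.pi * freq * ratio * t)
        out.append(env * math.sin(2 * math.pi * freq * t + index * mod))
    return out

samples = fm_bell()                   # two seconds of mono audio at 44.1 kHz
```

Write `samples` to a WAV file or plot its spectrogram and you'll see the inharmonic partials sliding down in level as the index decays.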

 

If you are interested in using the math anyway, there are some lovely new tools ...

 

[video:youtube]CiT117i6YnE

 

 

... we are so lucky to have so many options. :2thu:


Tools are important too.

 

Some synths just refuse to do certain sounds.

 

Some synths are one-trick ponies but a hell of a trick that nothing else does.

 

I've been in this business since the late 70s and have yet to find a system that does it all. My Andromeda does a lot, but not everything.


It has always interested me to see how far you can push doing more with less. To Theo's point, some of the presets on my first analogs (Korg Polysix, Roland Jupiter-6) baffled me at first because they were not your typical subtractive sawtooth synth brass/bass/strings type sounds. They'd get creative using FM, or filter ringing at high resonance, to create bell-like sounds and other tones you wouldn't expect. I'd say that was some of my earliest appreciation for digging into all the possibilities of analog without resorting to samples (mainly because I didn't have a keyboard with samples at the time), and that's carried on with me today. In the Kronos, instead of scrolling through all the samples, often my first instinct is to go to one of the VA engines and start tweaking. I wish I were better at FM. I never owned a DX7. Instead, I had a CZ-101 and a VZ-8M, which I found to be much more intuitive.

Dan

 

Acoustic/Electric stringed instruments ranging from 4 to 230 strings, hammered, picked, fingered, slapped, and plucked. Analog and Digital Electronic instruments, reeds, and throat/mouth.


Yeah, sure, FM was a nice complex sound generator to tame a bit. Of course, it was also digital in the DX7, with a quite heavy industrial processor for the time and two custom VLSI chips, prepared to sound right by some cunning researchers. The digital parts in synths make for more power, but also for harder-to-understand starting points for getting good sounds from all those parameters!

 

T.


I suppose I should have owned a deep Moog or a Prophet to learn more about the basics at stake in that sort of programming, but I'm sure that, for instance, the ROM banks of the DX7 cannot be easily approached by some programmer dude trying out sounds and shades of harmonic feel and playing with the envelopes. At some point there is signal processing or circuit knowledge, or however you want to call it, involved in arriving at those special sounds some synths can be good at.

 

Would you say you apply deep forms of sound knowledge to make the LFOs make chords sound great, or something like that, or would you say you believe more in the "samples + some messing about" theory?

 

T.V.

 

Now here's a topic from Theo which I actually understand and know a thing or two about!

 

Lots of great musicians don't read music and don't know music theory. Synthesis and sound design are similar. One can become proficient with a _lot_ of practice, trial and error, and happy accidental "wow, that sounds cool!" moments, without any technical background or knowledge.

 

That may seem a cheap and vague answer, so here's how I'd break down the skills.

 

#1 is auditory memory, being able to hear and remember sound characteristics (pitch, timbre, behavior)

#2 is developing a 'vocabulary' of cause and effect, i.e. what a given change of input to the sound design/synthesis system yields as an output ("so that's what happens when I do this")

#3 is the ability to reduce the sound you hear with your ears or in your mind into the components of your 'vocabulary' so it can then be created.

 

Having the time to practice, twiddle, experiment and gain experience with these three skills will let you create a lot of cool stuff without ever having any understanding of 'why' you got the sound you wanted. Kind of like street smarts vs book smarts, practical vs academic.

 

Knowing the intricacies of the technical and theoretical background of synthesis is, IMHO, probably only #4 on the list of skills. That said, having the technical and theoretical knowledge can be _extremely_ useful in that it gives you a better starting point to get where you want to go, and can let you more quickly home in on characteristics that aren't in your sonic vocabulary. So it definitely helps you be a better synthesist/sound designer.

 

For me, I had tons of hours of practical experience before I put in all the hours of learning the technical and theoretical academics.

 

Theo, specifically to your DX7 comment: I was that "dude" you mentioned. I deconstructed every factory preset and cartridge patch available for it, with every spare moment I had, for the first six months I owned my DX7. I was just as committed to that as others are committed to musical practice. I had a pretty thorough handle on what it could do before I studied Chowning's research. Note the first sentence of this interview -- all about the ears!

 

[video:youtube]

 

Manny

People assume timbre is a strict progression of input to harmonics, but actually, from a non-linear, non-subjective viewpoint, it's more like a big ball of wibbly-wobbly, timbrally-wimbrally... stuff

 


Manny, that was an extremely good analogy to music, I think you hit the nail on the head.

Dan

 

Acoustic/Electric stringed instruments ranging from 4 to 230 strings, hammered, picked, fingered, slapped, and plucked. Analog and Digital Electronic instruments, reeds, and throat/mouth.


Now here's a topic from Theo which I actually understand and know a thing or two about!

Indeed, Manny, that's an understatement if I ever heard one! It's a very deep observation of yours that the ears are often the limitation, and I concur completely. It's the first thing I tried to say to Theo, though not nearly as eloquently ...

 

I find the most significant challenge is similar to a challenge we face as musicians too: teaching ourselves how to listen. There are classes, but it's a personal journey as well.

I hope your advice helps Theo...

 

My DX7 story has four phases.

 

1) I didn't dive in at first. I used it in conjunction with a JX-8P, which I loved to program. Cross-mod and hard sync on the JX were much more rewarding than the DX. On the DX, I played the presets. :roll:

 

2) Several years later, however, a miracle happened to the DX, for me. A dear fellow Nord user, Wout Blommers, wrote software to translate ALL DX7 patches to the Nord Modular environment. The sound was the same. The interface? The size of a computer monitor. :love:

 

Here's the famous DX7 harmonica (as played by Bill Livsey on Tina Turner's "What's Love Got to Do with It") on the Nord Modular, converted by Wout's software (only audio-rate patch cords are shown for simplicity):

 

http://www.cim.mcgill.ca/~clark/nordmodularbook/images/harmonica01.jpg

 

3) I dived into the world of the DX7 completely. There are 9400 patches (many are similar) using the 32 DX7 algorithms in the archive, and I feel like I learned from them all. It's ironic how much I mis-under-estimated that beautiful synth when I had it. Brian Eno's music had told me there was wonderful stuff to learn, but I was too busy trying (and failing) to be Chick Corea in the early 80s. ;)

 

4) Later, I learned that while DX by itself is cool, it plays really nicely with other things. In an open-architecture synth, you can modify those DX patches in ways the original DX synths wouldn't allow: change waveforms, add dozens more operators, change modulation linkages (the algorithms), or even combine DX sounds in parallel or series with other DSPs. To this day I find myself combining DX with additive synthesis, subtractive synthesis and sample manipulation. The perceptual wall between synthesis approaches is broken, at least for me. As Don Henley knew, I can't go back.
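As a rough illustration of what "changing the modulation linkages" amounts to, here's a hypothetical sketch (plain Python; the `render` helper and all the names are mine, for illustration only) where a DX-style algorithm is just a data table of operators and their routing. Rearranging the `mods` lists is literally rearranging the algorithm:

```python
import math

def render(ops, base_freq, dur=0.01, sr=44100):
    """Evaluate a phase-modulation operator graph sample by sample.
    ops: dict name -> {"ratio": float, "level": float, "mods": [names]}
    Operators must be listed so modulators come before their carriers
    (Python dicts preserve insertion order)."""
    n = int(dur * sr)
    out = []
    for i in range(n):
        t = i / sr
        val = {}
        for name, op in ops.items():
            phase = 2 * math.pi * base_freq * op["ratio"] * t
            phase += sum(val[m] for m in op["mods"])  # phase modulation
            val[name] = op["level"] * math.sin(phase)
        out.append(val["carrier"])
    return out

# Serial routing: op2 phase-modulates the carrier (classic 2-op FM).
serial = {
    "op2":     {"ratio": 2.0, "level": 3.0, "mods": []},
    "carrier": {"ratio": 1.0, "level": 1.0, "mods": ["op2"]},
}
samples = render(serial, 440.0)
```

Setting `"mods": []` on the carrier and summing both operators instead would give the parallel (additive) variant; on an open-architecture synth you can keep going from there, adding operators the original hardware never had.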

 

>>>>>>>>

 

Theo, to this part of your question ...

 

I'm sure that, for instance, the ROM banks of the DX7 cannot be easily approached by some programmer dude trying out sounds and shades of harmonic feel and playing with the envelopes. At some point there is signal processing or circuit knowledge, or however you want to call it, involved in arriving at those special sounds some synths can be good at.

 

... you would likely predict my response: experimentation and instruction are both important to the development of the ear. For example, if someone is starting out in linear FM, spending 8 minutes with a video like this ...

 

[video:youtube]ziFv00PegJg

 

... is going to make the next 8 hours of experimentation that much more fun and productive. :2thu:
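To give a taste of why a little instruction pays off: linear FM is actually predictable on paper, even if it never feels that way at the panel. With modulation index I, the sidebands at f_c ± k·f_m have relative amplitudes given by the Bessel functions J_k(I). A small sketch (plain Python; the helper names are mine) that computes them from the standard series expansion:

```python
import math

def bessel_j(n, x, terms=30):
    """Bessel function J_n(x) via its power-series expansion."""
    s = 0.0
    for k in range(terms):
        s += ((-1) ** k / (math.factorial(k) * math.factorial(n + k))) \
             * (x / 2) ** (2 * k + n)
    return s

def fm_sidebands(index, n_max=6):
    """Relative amplitude of the carrier (k=0) and the first n_max
    sideband pairs of a linear-FM tone with the given modulation index."""
    return {k: bessel_j(k, index) for k in range(n_max + 1)}

spectrum = fm_sidebands(2.0)   # e.g. index 2: carrier ~0.22, 1st pair ~0.58
```

So at index 2 the first sideband pair is actually louder than the carrier, which is exactly the kind of non-obvious behavior that makes FM feel unpredictable until you've seen the math once.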

 

Great to hear from you, Manny. Good synthesis and peace to everyone!! :cheers:


 

#1 is auditory memory, being able to hear and remember sound characteristics (pitch, timbre, behavior)

#2 is developing a 'vocabulary' of cause and effect, i.e. what a given change of input to the sound design/synthesis system yields as an output ("so that's what happens when I do this")

#3 is the ability to reduce the sound you hear with your ears or in your mind into the components of your 'vocabulary' so it can then be created.

 

 

This is spot on. That best describes the process for most of my sound programming work.

 

Regarding background and training: Regardless of your specific background, it's always helpful to surround yourself with individuals who possess skills in a variety of areas.

 

The variety of skills/backgrounds was and still is one of my favorite things about working with Kurzweil (full time for years, now sometimes as a contractor).

 

In the soundware department, most of us had a musical and/or audio engineering background. Many of us had worked on big-name rock tours, TV, film, and theater, as players, sound engineers and/or producers. But there was one guy in soundware with an EE degree and another with a good comp-sci background.

 

In the hardware dept, most of the guys had EE degrees. In fact, one of the big guys at Kurz when I was there, Hal Chamberlin, literally "wrote the book" on the use of microprocessors in keyboards - all the big analog synth makers in the 80s read it and took a page from it.

http://www.amazon.com/Musical-Applications-Microprocessors-Hal-Chamberlin/dp/0810457687

 

But... they also had one hardware guy with a music degree from Dartmouth. Very handy.

 

The software department was made up mostly of guys with a computer science background. Several were from MIT (one from the Media Lab), some had worked for plug-in companies, and some came from computer giants like Digital. But one of the main software guys was a music major - in fact, he was my roommate at Berklee. If you like the PC3's cascade mode, FX chains and the double Leslie, this is the guy to thank - a film scoring major.

 

Also, most of the hardware and software guys were avid hobbyists, if not full-blown pro musicians, in their spare time. In this kind of environment, learning from each other was hugely important and incredibly helpful.

 

It brings to mind my own step #4 for the list at the top of the post: "reducing the components of what you hear into your own vocabulary, then translating again into the vocabulary best suited for hardware and software engineers."

 

It was also helpful to have a mix of younger and older cats working together, combining experience and expertise with energy and fresh new ideas.

 

One thing I've noticed: not all keyboard companies maintain a full-time staff of musicians and audio experts. At some of these companies, I've seen decisions being made by EE and software types that really ought to be made by musicians and audio engineers. Invariably, the sound and/or feature set suffers as a result. In my experience, the companies that consult with working musicians and audio experts always produce the best results.

 


Well, it's always an interesting subject, of course; I suppose there's a whole army of people out there who'd want to become a famous sound designer. Once, when I had made a certain type of piano edit on the PC3, there were like 10,000 people checking my YouTube, which sure is fun, but I think my own interest is in the standardization and proper application of the studio norms probably present in signal processing designs since S. Wonder and R. Kurzweil's early designs. There are reasons for that which would take longer than a post to explain; besides, it appears that these norms are present in all the A-grade recordings I've checked (which is a little under 10k tracks of CD quality or higher: Blu-ray, high-resolution audio).

 

And, maybe in a little smaller font here: some EEs aren't necessarily in touch with all the basic theories when they're into other projects like digital audio software, or, as in Hal Chamberlin's excellent and interesting book (though more focused on analog/digital combinations like Moog's '60s and D. Smith's '70s designs), it is sometimes hard to understand which parts of the signal path are done in certain ways and why (like the analog filters in the latest Prophets!).

 

There's a lot of stuff in the PC3s that I don't like because it, as it were, abuses certain signal processing facilities for purposes other than what they'd be good at. There's also sometimes analysis of the sound and the notes being played, which influences the resulting audio; sometimes, as in some of the "note refusal" examples I've documented (and had verified not to come from my unit alone), there's even a sort of commenting on the playing choices being made in the machine, where all I'd want is for it to work logically and consistently.

 

T.V.


Great observation, Theo.

 

When you speak of sound design, are you thinking specifically of designing patches for synths, or are you referring to it in a broader sense, the design of sound for the performance arts such as film, music and theater?

 

Jerry


Once, when I had made a certain type of piano edit on the PC3, there were like 10,000 people checking my YouTube, which sure is fun, ...

 

Sorry, but I haven't listened to any acoustic "piano edit" sound which sounded significantly better than the Kurzweil PC3 stock piano sounds, be it yours or creations from other "sound designers".

 

... but I think my own interest is in the standardization and proper application of the studio norms ...

 

Please explain what the "studio norm" in regard to an acoustic piano sound is, edited or not.

 

... probably present in signal processing designs since S. Wonder and R. Kurzweil's early designs. There are reasons for that which would take longer than a post to explain; besides, it appears that these norms are present in all the A-grade recordings I've checked

 

What "A grade recordings" are you talking about? Examples please.

What do you consider to be a B-grade recording, or C, D, E or F?

What is a "signal processing design since Stevie Wonder and Ray Kurzweil's early designs"?

 

IIRC, Stevie Wonder started using Kurzweil with the Kurzweil 250 (and possibly the MidiBoard too), and that single-layer sample sound design was cool decades ago, but today it's last-in-line s##t.

 

There's a lot of stuff in the PCs ...

 

Are you talking about a (your?) PC computer or the PC3 series of Kurzweil keyboards?

 

... where all I'd want is that it would logically and consistently work.

 

Well, if you mean your Kurzweil PC3 keyboard, notify support.

 

All I want is for it to sound good and be usable in context, regardless of how it's technically realized.

 

When it sounds good, is playable and fits the task, it's perfect.

 

A.C.


Archived

This topic is now archived and is closed to further replies.
