Question for Nord Piano and/or Nord Stage owners regarding the acoustic piano sounds



8 hours ago, AnotherScott said:

 a new interpolated waveform that is "part way between" the two waveforms. Like the way you can morph one picture into another. 

That's kind of my point. Let's take "part way between" to be halfway for this example. If I draw waveform A, and waveform B, and draw another waveform C that is half way between A and B at every point, you get a graph like this (A in blue, B in orange, C in green).

 

C is just (A+B)/2, i.e. a cross-fade of A and B in equal measure. 

 

Clearly sample interpolation is something more sophisticated than this. Does anyone know how it's done?

 

Cheers, Mike.

waveforms.png
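(Mike's claim here is easy to check numerically. The following is just an illustrative sketch in stdlib Python; the two waveform shapes are made up for the example, not taken from any real instrument.)

```python
import math

N = 64  # samples in one cycle
A = [math.sin(2 * math.pi * n / N) for n in range(N)]        # waveform A: a sine
B = [math.sin(2 * math.pi * n / N) ** 3 for n in range(N)]   # waveform B: a rounder, odd-harmonic shape

# C drawn "halfway between A and B at every point"...
C = [(a + b) / 2 for a, b in zip(A, B)]
# ...is sample-for-sample the same signal as an equal-measure mix of A and B:
mix = [0.5 * a + 0.5 * b for a, b in zip(A, B)]

print(max(abs(c - m) for c, m in zip(C, mix)))  # 0.0
```

In other words, pointwise "interpolation" and a 50/50 crossfade are literally the same arithmetic, which is exactly the puzzle being raised.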




I have an NS2 (same piano library as the NS3l) and I cannot notice sample transitions with velocity. I can hear sample transitions in the MODX8 acoustic piano and in the Yamaha P80 samples (especially the Rhodes sound in the P80). I have not done any spectrogram plots to see whether there are noticeable changes across velocity levels, but that would be one simple way to investigate it.


It always helps to simplify the problem.  I am reaching back well over 40 years to a class I barely paid attention to at the time, so you have been warned as to the accuracy here, imagined or otherwise. 

 

Waveform A is a pure sine wave, 440 Hz, maybe 85 dB.  Waveform B is also a sine wave, 440 Hz, maybe 86 dB but with more pronounced harmonics.  

 

If you wanted to get from A to B in a defined period of time, you'd redraw Waveform A as a sequence of more pronounced waveforms (amplitude) as you got to B.  Plus, you'd gradually start introducing the new harmonic components that weren't there for A, as well as subtracting any that aren't in B.  Maybe call them Waveform A, A', A'', A''', A'''', A''''', A'''''' until you got to Waveform B.

 

Each version of A would be a quantized step across a mathematical distance, a bit less like the one before it, a bit more like the one ahead of it, much like morphing between two graphic images.

 

On the other side, new MIDI note information comes in, maybe going from velocity 27 (waveform A) to velocity 28 (waveform B).  You'd probably have a short window to smooth the adjustment, maybe 5-10 ms or similar?

 

From there, I think it was a sequence of FFT transforms or similar.  I remember thinking "gee, that's cool" and moving on with my college life.

 

Anyone got something better in their rusty memory banks?
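(The stepped A, A', A'', ... idea above can be sketched in a few lines. This is only an illustration of per-sample linear interpolation between two made-up waveforms, not how any particular instrument actually does it.)

```python
import math

N = 64
A = [math.sin(2 * math.pi * n / N) for n in range(N)]                      # "pure" wave
B = [math.sin(2 * math.pi * n / N) + 0.3 * math.sin(6 * math.pi * n / N)   # adds a 3rd harmonic
     for n in range(N)]

STEPS = 8  # A, A', A'', ..., B
morph = [[(1 - t) * a + t * b for a, b in zip(A, B)]
         for t in [s / (STEPS - 1) for s in range(STEPS)]]

# Each step is a bit less like A and a bit more like B; the harmonic
# that A lacks is introduced gradually, step by step.
assert morph[0] == A and morph[-1] == B
```

In a real engine, the 5-10 ms smoothing window mentioned above would set how quickly you walk through these steps.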

Want to make your band better?  Check out "A Guide To Starting (Or Improving!) Your Own Local Band"

 


3 hours ago, stoken6 said:

C is just (A+B)/2, i.e. a cross-fade of A and B in equal measure. 

I am absolutely no expert in this, but I think your diagram is accurate, to my understanding. Except I would not call it a cross-fade. AFAIK, a cross-fade means both waveforms are playing. In your diagram, though, with an output equal strictly to waveform C, at no point are two waveforms playing.

 

41 minutes ago, cphollis said:

It always helps to simplify the problem.  I am reaching back well over 40 years to a class I barely paid attention to at the time, so you have been warned as to the accuracy here, imagined or otherwise. 

 

Waveform A is a pure sine wave, 440 Hz, maybe 85 dB.  Waveform B is also a sine wave, 440 Hz, maybe 86 dB but with more pronounced harmonics. 

.. except that sine waves don't have harmonics. ;-)


Maybe this is the best place for a shameless plug! Our now not-so-new new video at https://youtu.be/3ZRC3b4p4EI is a 40 minute adaptation of T. S. Eliot's "Prufrock" - check it out! And hopefully I'll have something new here this year. ;-)


I have a love/hate thing with Nord acoustic pianos, as I've said in other threads. My biggest issue is the attack. It ruined my experience when I owned one. I think they sound great in a mix.

 

Once the transitions between samples are talked out, I would like to hear comments on the attack.  I am not sure why I dislike it; maybe someone can explain it to me, or tell me why it doesn't bother you and you think it is fine.

AvantGrand N2 | ES520 | Gallien-Krueger MK & MP | https://soundcloud.com/pete36251


1 hour ago, AnotherScott said:

I am absolutely no expert in this, but I think your diagram is accurate, to my understanding. Except I would not call it a cross-fade. AFAIK, a cross-fade means both waveforms are playing. In your diagram, though, with an output equal strictly to waveform C, at no point are two waveforms playing.

Except that two waveforms are playing. In my graph, C is precisely A+B, reduced in amplitude by 50%. The sound of C is precisely the sound of A and B together (and reduced in volume).

 

2 hours ago, cphollis said:

I think it was a sequence of FFT transforms or similar.

That is very different from the sample interpolation I did in my graph. You would be interpolating between harmonics, not between samples. That's what I was trying to get at when I posted "A more intelligent (but complex, and computationally intensive) approach is to derive the samples' harmonic content through Fourier transform, and interpolate between the harmonics" in my earlier post.

 

Cheers, Mike.


1 hour ago, 16251 said:

I have a love/hate thing with Nord acoustic pianos, as I've said in other threads. My biggest issue is the attack. It ruined my experience when I owned one. I think they sound great in a mix.

 

Once the transitions between samples are talked out, I would like to hear comments on the attack.  I am not sure why I dislike it; maybe someone can explain it to me, or tell me why it doesn't bother you and you think it is fine.

 

I think lots of boards have an attack that is too percussive, falling off too much after the initial attack. I've theorized that maybe they're sampling too close to the hammers (somewhere your ears would never be, but which might sound particularly percussive), and/or they are manipulating the sample to keep the "pre-loop" section small to conserve memory, and dropping the volume more quickly between the attack and the loop point creates an exaggeration of the initial attack effect.

 

While Nord is not entirely immune from this, I think they suffer from it less than many other boards. Though a related phenomenon on the Nord, to me, is that they don't have a soft enough pp sample; it sounds too much like a louder sample just played more quietly. Specifically, it feels like you're getting the sharper attack of a more loudly played (harder-struck) note when you're trying to play quiet notes.

 

2 minutes ago, stoken6 said:

Except that two waveforms are playing. In my graph, C is precisely A+B, reduced in amplitude by 50%. The sound of C is precisely the sound of A and B together (and reduced in volume).

 

Ah, then I misunderstood your graph. I thought you were generating a single waveform (C) by analyzing the content of (A) and (B) and generating the point in between. Conceptually, I believe that yields a different result than playing the two waveforms (A) and (B) together at reduced volume. Getting back to my morphing pictures analogy, imagine an animated morph that goes from a square to a circle. At the beginning, you see a square; at the end you see a circle. In the middle, you see some other shape, but you never see the square and the circle simultaneously. It's not that one is fading out as the other is fading in (cross-fading), it's that new shapes are being generated by interpolating the points in between the starting shape and the ending shape.


47 minutes ago, AnotherScott said:

I thought you were generating a single waveform (C) by analyzing the content of (A) and (B) and generating the point in between. Conceptually, I believe that yields a different result than playing the two waveforms (A) and (B) together at reduced volume. 

It doesn't - it actually yields the same result (superposition property of waves).

 

That's why I don't know what sample interpolation is (other than crossfading).

 

Cheers, Mike.

 

 


2 hours ago, AnotherScott said:

 

I think lots of boards have an attack that is too percussive, falling off too much after the initial attack. I've theorized that maybe they're sampling too close to the hammers (somewhere your ears would never be, but which might sound particularly percussive), and/or they are manipulating the sample to keep the "pre-loop" section small to conserve memory, and dropping the volume more quickly between the attack and the loop point creates an exaggeration of the initial attack effect.

 

Wondering if there is a delay (the time it takes for the hammer to reach the strings after the key is struck) built in to the attack of digital pianos?

 

Playing pp doesn't seem to be a priority, but I agree that it should be.

 

I would rather pay more for a proper sample that didn't cut corners. I am not talking VSTs.


Speaking rather than yelling. Seems to me this is what immediately separates the men from the boys in terms of slab pianos. We call it finger-to-ear connection or similar. But it's really off-putting when you're trying to say something gentle and the digital's still yelling.


4 hours ago, stoken6 said:

That is very different from the sample interpolation I did in my graph. You would be interpolating between harmonics, not between samples. That's what I was trying to get at when I posted "A more intelligent (but complex, and computationally intensive) approach is to derive the samples' harmonic content through Fourier transform, and interpolate between the harmonics" in my earlier post.

Yes, that seems a better method than crossfading. Given that the samples are known, the FT, interpolation and IFT do not have to be done in real time; they could be pre-computed and stored in the sample/file format for rendering in real time.
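(A toy sketch of that offline pipeline, stdlib-only with a naive DFT. The "borrow the stronger bin's phase" policy here is my own simplification to make the sketch concrete, not something any real engine is known to use.)

```python
import cmath, math

def dft(x):
    """Naive DFT of one single-cycle table (toy-sized, O(N^2))."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    N = len(X)
    return [(sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N).real
            for n in range(N)]

def halfway_spectrum(Xa, Xb):
    """Average harmonic magnitudes bin by bin; take phase from the stronger bin."""
    out = []
    for a, b in zip(Xa, Xb):
        mag = (abs(a) + abs(b)) / 2
        phase = cmath.phase(a if abs(a) >= abs(b) else b)
        out.append(cmath.rect(mag, phase))
    return out

N = 32
A = [math.sin(2 * math.pi * n / N) for n in range(N)]                      # fundamental only
B = [math.sin(2 * math.pi * n / N) + 0.5 * math.sin(4 * math.pi * n / N)   # plus a 2nd harmonic
     for n in range(N)]

table = idft(halfway_spectrum(dft(A), dft(B)))  # precomputed offline, played back live

# The in-between table carries the 2nd harmonic at half of B's level:
mag2 = abs(dft(table)[2]) * 2 / N
print(round(mag2, 3))  # 0.25
```

Note this differs from a plain crossfade only where the two samples' phases disagree; for phase-aligned harmonics the two approaches coincide.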


@stoken6 I'm also not an expert on this, but I do have a bit of a math background and find it interesting. There are different kinds of interpolation formulas, and (A+B)/2 would be a simple one. Really, any method that gives a point between A and B would technically be an interpolation. With audio, though, you can also consider how the points move over time, and treat that as a curve over time that you want to interpolate between. Think of 3D waveform visualizations that look like a mountain landscape. So, depending on the type of interpolation, it can sound more like a simple cross-fade or like something more sophisticated.
 

I tried a quick google search and found this useful thread with interesting links:

 

https://modwiggler.com/forum/viewtopic.php?t=239984

 

 

@AnotherScott In response to your points, you can just add the amplitudes of the waveforms together. That's the principle behind additive synthesis. Our eardrums receive a single combined waveform, much like an oscilloscope does. Our brains do all the separating of distinct "sounds" based on how distinct "features" in the waveform change over time, as well as on placement in space (unless you are listening to a mono recording).

Your analogy with two overlapping images doesn't quite apply, because it compares two different types of things (audio vs. visual), and at two different levels. With visual data, the waveforms we are sensing are light waves, perceived as coloured points of light. We can make distinctions even in static images because our eyes can simultaneously sense many distinct points of light. With our ears, each eardrum receives one combined waveform, and our brains have to decode and separate out what were originally separate waveforms from different sources. Brains can only do that by hearing the waveform change over time. The sonic equivalent of a snapshot image would be a single cycle of the waveform; we couldn't discern anything from it. We need to hear sounds "moving" to be able to make distinctions. Your analogy would maybe hold up if we could see sound waveforms moving in real-world space with different shapes, colours, etc. It's a whole other level of information.


 


Approaching this from a different angle, and removing the analogy to anything visual...

 

Imagine you have a recording of a flute playing a sustained note, and a violin playing the same sustained note, and you overdubbed one over the other, and you listened to a playback of that recording. The resulting "combined" waveform, I think, would still sound like a violin and a flute playing the same note simultaneously, whereas I thought that a system that "morphed" or "interpolated" a new waveform "between" the flute waveform and the violin waveform (as opposed to combining them) would create, not the sound of a flute and violin playing together, but rather some other sound that is not distinctly identifiable as either a flute or a violin, though having characteristics of each. But maybe I'm all wrong about this.


4 hours ago, AnotherScott said:

Approaching this from a different angle, and removing the analogy to anything visual...

 

Imagine you have a recording of a flute playing a sustained note, and a violin playing the same sustained note, and you overdubbed one over the other, and you listened to a playback of that recording. The resulting "combined" waveform, I think, would still sound like a violin and a flute playing the same note simultaneously, whereas I thought that a system that "morphed" or "interpolated" a new waveform "between" the flute waveform and the violin waveform (as opposed to combining them) would create, not the sound of a flute and violin playing together, but rather some other sound that is not distinctly identifiable as either a flute or a violin, though having characteristics of each. But maybe I'm all wrong about this.

 

I think you're basically right, but it's complicated by the fact that we're talking about a waveform that is changing over time, so we have to think of it more in 3D. And we need to consider how some aspects of sounds blend together more easily than others, just as a simple sine wave standing in for a sustained woodwind sound will very easily blend into other sounds if it is static. But the attack portion of most sounds is a lot more dynamic and unique, with fast transient harmonics. Organ drawbars/stops are an interesting exercise in additive synthesis of static waveforms, and in how simple components (especially sine waves) sometimes blend together and other times stay somewhat distinct, depending on the relationships of the frequencies and amplitudes.

 

Like I said in my previous reply, there are different kinds of interpolation. Mixing two waveforms (simple additive math) may technically be one kind even though it may not be the kind we normally think of when speaking of synths using some kind of mathematical interpolation or morphing algorithm. The final result will be somewhat different and we may need to see an animated visualization of how the different types of interpolated waveforms change over time to get a better sense of the difference. Sounds like an interesting student project. :) 

 

 


23 hours ago, AnotherScott said:

 

I think lots of boards have an attack that is too percussive, falling off too much after the initial attack. I've theorized that maybe they're sampling too close to the hammers (somewhere your ears would never be, but which might sound particularly percussive), and/or they are manipulating the sample to keep the "pre-loop" section small to conserve memory, and dropping the volume more quickly between the attack and the loop point creates an exaggeration of the initial attack effect.

 

While Nord is not entirely immune from this, I think they suffer from it less than many other boards. Though a related phenomenon on the Nord, to me, is that they don't have a soft enough pp sample; it sounds too much like a louder sample just played more quietly. Specifically, it feels like you're getting the sharper attack of a more loudly played (harder-struck) note when you're trying to play quiet notes.

 

 

 

 

Definitely agree. I wrestled with the Kurzweil Forte's piano for close to two years; I just could not adjust the attack and decay to get a satisfying fingers-to-ears connection. But this can be a subjective issue, as our playing sensitivities vary. When I played a PC4 recently, the velocity vs. attack/decay seemed improved, especially with the 9-ft. German Grand.

 

Regarding the above, Nord piano sounds vary.  While I avoid a couple in my Stage 3 76 that have stronger attacks and shorter decays, some of the pianos have worked well for all-purpose band gigs.  And a well-tweaked version of the Grand Imperial sample works fine for solo piano as well as rock/pop; no velocity-switching concerns there. But that's been improving steadily for most DP tones for at least the past decade. I started noticing a satisfying fingers-to-ears connection with my Yamaha S90 ES, which was around 2006.  While I've used various Rolands for piano, and dig the SN variations, Yamaha piano tones fit like a super-comfortable and reliable pair of hiking boots.  My YC88 has no discernible problems with velocity, and the envelopes are highly pianistic; top-notch feel and tone. And I agree, Scott, that the Nord piano's pp sample is suspect. So for more piano-focused work I definitely go to the YC88.

 


'Someday, we'll look back on these days and laugh; likely a maniacal laugh from our padded cells, but a laugh nonetheless' - Mr. Boffo.

 

We need a barfing cat emoticon!


15 hours ago, funkyhammond said:

I tried a quick google search and found this useful thread with interesting links:

 

https://modwiggler.com/forum/viewtopic.php?t=239984

Thanks @funkyhammond, that link really focuses on two approaches: 1. simple interpolation (crossfading), 2. precalculated wavetables. (There's a brief discussion of spline/second-order interpolation, which doesn't really go anywhere.)

 

I've tried to illustrate the difference in the following graph.

- A (solid blue) is a 20% pulse wave

- B (dotted orange) is a square wave (50% pulse)

You would expect the "interpolation" between these two to be a 35% pulse wave. But if you follow "simplistic" sample interpolation, you get a funny staircase wave (dotted green).

 

Cheers, Mike.

 

 

waveforms 2.png
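(The staircase in the graph is easy to reproduce numerically. Sketch only; pulse levels of ±1 are assumed.)

```python
N = 100
A = [1 if n < 20 else -1 for n in range(N)]  # 20% pulse
B = [1 if n < 50 else -1 for n in range(N)]  # 50% pulse (square)

C = [(a + b) / 2 for a, b in zip(A, B)]

# A genuine 35% pulse would have only two levels; the sample-for-sample
# average has three: +1 where both are high, 0 where only the square is
# high, and -1 where both are low.
print(sorted(set(C)))  # [-1.0, 0.0, 1.0]
```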


4 minutes ago, allan_evett said:

 

Definitely agree. I wrestled with the Kurzweil Forte's piano for close to two years; I just could not adjust the attack and decay to get a satisfying fingers-to-ears connection. But this can be a subjective issue, as our playing sensitivities vary. When I played a PC4 recently, the velocity vs. attack/decay seemed improved, especially with the 9-ft. German Grand.

 

Regarding the above, Nord piano sounds vary.  While I avoid a couple in my Stage 3 76 that have stronger attacks and shorter decays, some of the pianos have worked well for all-purpose band gigs.  And a well-tweaked version of the Grand Imperial sample works fine for solo piano as well as rock/pop; no velocity-switching concerns there. But that's been improving steadily for most DP tones for at least the past decade. I started noticing a satisfying fingers-to-ears connection with my Yamaha S90 ES, which was around 2006.  While I've used various Rolands for piano, and dig the SN variations, Yamaha piano tones fit like a super-comfortable and reliable pair of hiking boots.  My YC88 has no discernible problems with velocity, and the envelopes are highly pianistic; top-notch feel and tone.

 

Thanks for chiming in. IMO, getting the attack right is more important than the samples blending. Out of all the DPs I've owned, I never thought about the samples except for one: the Rhodes sound on the Yamaha P80. It made it almost impossible to play hard. I never loved the P80 or P90, but at the time they were a new breed of light, compact 88-note weighted-action boards.


All these examples of blending, adding or interpolating two waveforms into one seem to ignore a basic and fundamental aspect - phase.

 

I can take two identical waveforms and have them be 180 degrees out of phase with each other. Blend (add, interpolate, whatever you want to call it) those and what do you get? That's right: nada.

 

To me, this points up the fundamental issue about making these "seamless" crossfades work. I'm no digital audio expert so this is only a guess, but my guess is that there's a lot of math involved in making these kinds of things work and sound correct.

 

I believe something like this was done years ago in a "brute force" method with a solo violin library (forget which one). A single note-on would trigger every layer's samples simultaneously and you'd use an expression pedal to crossfade between the layers as they all sounded. I remember reading that the devs had to do some prep work, digitally manipulating the samples to be in phase with each other. This seems obvious; as you crossfade from one layer to the next, phase diffs would cause varying frequency cancellations and additions and probably sound very unnatural.
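(The 180-degree case takes only a couple of lines to confirm. Sine waves are chosen purely for illustration.)

```python
import math

N = 64
A = [math.sin(2 * math.pi * n / N) for n in range(N)]
B = [math.sin(2 * math.pi * n / N + math.pi) for n in range(N)]  # identical wave, inverted

blend = [(a + b) / 2 for a, b in zip(A, B)]
peak = max(abs(x) for x in blend)
print(peak < 1e-12)  # True: complete cancellation, i.e. nada
```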


16 minutes ago, Reezekeys said:

I can take two identical waveforms and have them be 180 degrees out of phase with each other. Blend (add, interpolate, whatever you want to call it) those and what do you get? That's right: nada.

I was thinking of making my example two sine waves out of phase. A "harmonic interpolation" of the two should sound identical to either one (perhaps partially phase-shifted?). But "sample interpolation" (crossfade, add, blend etc), not so much.

 

Cheers, Mike.
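(A quick sketch of that comparison for the same two out-of-phase sines, stdlib Python with a naive DFT; this only illustrates the principle, not any instrument's actual method.)

```python
import cmath, math

def harmonic_amplitude(x, k):
    """Amplitude of harmonic k of one cycle x (naive DFT, normalized)."""
    N = len(x)
    return abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                   for n in range(N))) * 2 / N

N = 64
A = [math.sin(2 * math.pi * n / N) for n in range(N)]
B = [math.sin(2 * math.pi * n / N + math.pi) for n in range(N)]  # inverted copy

# Sample interpolation (crossfade) cancels the fundamental entirely...
crossfade = [(a + b) / 2 for a, b in zip(A, B)]
print(round(harmonic_amplitude(crossfade, 1), 3))  # 0.0

# ...but interpolating between the harmonic magnitudes leaves it intact,
# since both waves have an identical magnitude spectrum.
halfway = (harmonic_amplitude(A, 1) + harmonic_amplitude(B, 1)) / 2
print(round(halfway, 3))  # 1.0
```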


3 hours ago, stoken6 said:

Thanks @funkyhammond, that link really focuses on two approaches: 1. simple interpolation (crossfading), 2. precalculated wavetables. (There's a brief discussion of spline/second-order interpolation, which doesn't really go anywhere.)

 

I've tried to illustrate the difference in the following graph.

- A (solid blue) is a 20% pulse wave

- B (dotted orange) is a square wave (50% pulse)

You would expect the "interpolation" between these two to be a 35% pulse wave. But if you follow "simplistic" sample interpolation, you get a funny staircase wave (dotted green).

 

I'm not sure why you would expect a different % pulse wave. That's just a convenient way we label the shapes of pulse waves. What does the Fourier transform or spectrogram of the two original pulse waves look like? If I think of it as molecules in air jumping forward and back, I visualize the two combined waves as a stepped movement like the result in your graph. But I'm sure you could come up with some math using the Fourier expansion formulas for pulse waves that would interpolate a new percentage pulse, if that's what you were going for.

(EDIT: And I'm guessing there are more efficient ways to determine duty cycle and interpolate between them).

 

Some other interesting reading about the PPG and Waldorf Microwave which may relate more to "morphing" techniques:

 

https://gearspace.com/board/electronic-music-instruments-and-electronic-music-production/1100425-microwave-1-vs-ppg-6.html

 

 


1 hour ago, Reezekeys said:

All these examples of blending, adding or interpolating two waveforms into one seem to ignore a basic and fundamental aspect - phase.

 

I can take two identical waveforms and have them be 180 degrees out of phase with each other. Blend (add, interpolate, whatever you want to call it) those and what do you get? That's right: nada.

 

To me, this points up the fundamental issue about making these "seamless" crossfades work. I'm no digital audio expert so this is only a guess, but my guess is that there's a lot of math involved in making these kinds of things work and sound correct.

 

 

That's a good point about phase issues. The discussion did diverge into why simply mixing two very different sounds can produce something different from using an interpolation/morphing technique. I'm guessing that the interpolation used for velocity layers has more to do with things like correcting phase issues than with "morphing". When the two sounds are almost identical to begin with, "morphing" is not really what you're thinking about.

