


Latency!


JeffLearman


Let's unclutter other threads and argue about it here. :-)

 

Latency is the time between playing a note and hearing it. Here are the contributing sources of delay, for the case of a MIDI instrument. Delay sources of a microsecond or less (e.g., electrical signal transmission) are ignored.

 

1) keyboard scanning and generating a MIDI "note-on" message

2) (sometimes) transmission serialization delay of about 1 ms, if the MIDI message is sent over a MIDI cable.

3) MIDI input driver latency

4) Note assignment delay

5) softsynth processing delay ?

6) Audio output buffering delays

7) sound propagation in the air

 

The last of these is debatable as belonging in the same list as the others, since it comes with audio cues (room reverberation) that our brains are used to associating with sound at a distance. So it's not the same, but it is still a delay that affects performance. If you put your monitor 100 feet away, trust me, you'll have trouble playing in time, unless you're used to playing instruments with inherent delays, like some pipe organs and church bells.

 

Audio output buffering latency is generally the most important one. However, all components add up to the total and should be considered.
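To put rough numbers on the tally, here's a minimal Python sketch; every figure in it is an illustrative assumption, not a measurement, and item 7 is left out since it depends on speaker distance (roughly 0.88 ms per foot):

```python
# Hypothetical latency budget for a MIDI keyboard driving a softsynth.
# Every figure is an illustrative assumption, not a measurement.
SAMPLE_RATE = 44100  # Hz

sources_ms = {
    "1) keyboard scan + note-on": 0.5,   # assumed well under 1 ms
    "2) MIDI cable serialization": 1.0,  # ~1 ms for a 3-byte message
    "3) MIDI input driver": 0.5,         # assumed; varies by system
    "4) note assignment": 0.0,           # zero unless at the polyphony limit
    "5) softsynth processing": 0.0,      # ideally zero (see below)
    "6) output buffering": 2 * 128 / SAMPLE_RATE * 1000,  # two 128-sample buffers
}

total = sum(sources_ms.values())
for name, ms in sources_ms.items():
    print(f"{name:30} {ms:5.2f} ms")
print(f"{'total':30} {total:5.2f} ms")
```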

 

More coming :-)

 


How to measure latency?

 

To measure the latency of a MIDI-driven sound module (e.g., computer, rack module, MIDI keyboard played from its MIDI in), the lab standard is to feed the MIDI signal and audio signal to a storage scope (oscilloscope) and measure the time difference from the END of the MIDI message to the beginning of the audio output.

 

Of course, this measures only the MIDI sound module, and doesn't work for measuring the latency of a keyboard in local mode.

 

Here's a way to measure total latency for your setup, which works for any electronic keyboard instrument. It includes all sources except the sound in the air (which we generally aren't interested in anyway). All you need is a DAW that can record a mic and the audio output.

 

Stick a mic next to a key. Record the mic and the analog audio output from the keyboard/computer/module. Whack the key with your knuckle. Load the recording into your DAW, set its timeline to read in milliseconds, and measure the gap between the knuckle hit and the start of the note.

 

Note that the sound of your knuckle will be a bit ahead of the time the keyboard registers a note, but I doubt it's significant.

 

This gives you the total latency in whatever gear you're testing. It's likely to be a ms or three longer than the numbers your DAW might give you for your soundcard setup, since those only include the audio output buffering latency.
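If you'd rather not eyeball the DAW timeline, the same measurement can be scripted. Here's a minimal sketch, assuming you've exported the take as a 16-bit stereo WAV with the knuckle mic on the left and the keyboard output on the right; the file name and the 5%-of-peak threshold are arbitrary assumptions:

```python
import wave
import numpy as np

# Load a 16-bit stereo WAV: left = mic next to the key, right = audio out.
with wave.open("latency_test.wav", "rb") as w:
    rate = w.getframerate()
    raw = w.readframes(w.getnframes())
samples = np.frombuffer(raw, dtype=np.int16).reshape(-1, 2).astype(np.float64)

def onset(channel, threshold_ratio=0.05):
    """Index of the first sample exceeding a fraction of the channel's peak."""
    env = np.abs(channel)
    return int(np.argmax(env > threshold_ratio * env.max()))

knuckle = onset(samples[:, 0])
note = onset(samples[:, 1])
print(f"total latency: {(note - knuckle) / rate * 1000:.1f} ms")
```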


Here are some handy numbers:

 

Sound travels 1 foot in about 1 ms (actually 0.88 ms, at room temperature, sea level, normal humidity, with a full moon ... ;-)

 

At a 44.1 kHz sample rate, 64 samples is about 1.5 ms. So, increasing your buffering by 64 samples is roughly equivalent to moving your monitor speaker a foot and a half farther away, in terms of delay.
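Those two conversions are handy enough to script; a quick sketch:

```python
SAMPLE_RATE = 44100       # Hz
SOUND_MS_PER_FOOT = 0.88  # at room temperature, sea level, etc.

def buffer_ms(samples, rate=SAMPLE_RATE):
    """Duration of a buffer, in milliseconds."""
    return samples / rate * 1000

def equivalent_feet(ms):
    """Speaker distance with the same delay in air."""
    return ms / SOUND_MS_PER_FOOT

for n in (64, 128, 256, 512):
    ms = buffer_ms(n)
    print(f"{n:4d} samples = {ms:4.1f} ms = about {equivalent_feet(ms):4.1f} ft of air")
```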

 

While moving the speaker isn't the same experience as changing internal latency (due to natural reverberations in the room, and the fact that you know the speaker has moved), it can be used as a gauge for comparison purposes.


Back to the sources of delay:

 

1) keyboard scanning and generating a MIDI "note-on" message

 

This should be well under 1 ms for any good design. Hopefully it can be ignored.

 

2) (sometimes) transmission serialization delay of about 1 ms, if the MIDI message is sent over a MIDI cable.

 

This applies ONLY when the MIDI is sent on a MIDI cable. It doesn't apply to MIDI over USB, where it's USB all the way from keyboard to sound source. It usually does not apply to a keyboard played in local mode, though some manufacturer might put an actual MIDI transmission stage in there.

 

If you play 8 notes at the same time, the first note is delayed 1 ms, the 2nd 2 ms, etc., up to 8 ms for the last note.
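That 1 ms figure falls straight out of the MIDI spec: 31,250 baud, 10 bits on the wire per byte (8 data bits plus start and stop), 3 bytes per note-on. A quick sketch of the chord arithmetic (running status, which can trim later notes to 2 bytes, is ignored here):

```python
MIDI_BAUD = 31250   # bits per second, per the MIDI 1.0 spec
BITS_PER_BYTE = 10  # 8 data bits + start bit + stop bit
NOTE_ON_BYTES = 3   # status, note number, velocity

ms_per_note = NOTE_ON_BYTES * BITS_PER_BYTE / MIDI_BAUD * 1000
print(f"one note-on: {ms_per_note:.2f} ms")  # ~0.96 ms

# An 8-note chord goes out one note at a time:
for i in range(1, 9):
    print(f"note {i} finishes arriving after ~{i * ms_per_note:.1f} ms")
```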

 

3) MIDI input driver latency

 

This is a question mark. I expect it to be relatively small, under 1 ms, but it could vary greatly depending on driver software quality, the type of MIDI input (USB vs. FireWire vs. hardware MIDI), and system issues (bus latencies, deferred procedure calls, etc.).

 

4) Note assignment delay

 

If you only notice latency when lots of notes are sustained, this is likely to be the cause. Whenever a sound generator receives a "note on" event and the number of notes playing is already at the polyphony limit, it has to decide which note to stop playing. There are a lot of possible algorithms, and each has its pros and cons. Some sacrifice the smoothness of the stolen note's ending to guarantee a rapid response; some sacrifice a little response time on *every* note to keep the response time consistent when over the limit.
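As an illustration (one possible policy, not any particular instrument's algorithm), here's a minimal sketch of steal-the-oldest-note:

```python
from collections import OrderedDict

class VoicePool:
    """Toy note assignment: steal the oldest sounding note at the limit."""
    def __init__(self, polyphony=8):
        self.polyphony = polyphony
        self.active = OrderedDict()  # note number -> voice id, oldest first

    def note_on(self, note, voice_id):
        if len(self.active) >= self.polyphony:
            stolen, _ = self.active.popitem(last=False)  # oldest entry
            # A real synth may fade the stolen voice out over a few ms
            # before reusing it; that's where extra response time creeps in.
            print(f"stealing the voice playing note {stolen}")
        self.active[note] = voice_id

pool = VoicePool(polyphony=2)
for i, note in enumerate((60, 64, 67)):  # the third note forces a steal
    pool.note_on(note, i)
```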

 

5) softsynth processing delay ?

 

There are softsynths that have this problem, notably the MS Wavetable synth, which nobody should use for any serious purpose. In general, though, this should be zero due to the way hosts and softsynths are designed: the host presents the softsynth with a buffer, and the synth fills it. The host is responsible for sending a full buffer to the soundcard by the time it's due (roughly speaking, when the previous buffer's samples have been converted to audio).

 

If the softsynth takes too long to fill its buffer, you don't get latency, you get dropouts, which can sound different on different systems but always sound nasty (sudden silence, zippering, or other odd things).
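In code terms, the host/synth contract described above looks roughly like this minimal sketch (the names are made up; real hosts use ASIO or Core Audio callbacks rather than an explicit call like this):

```python
import numpy as np

SAMPLE_RATE = 44100
BUFFER_SIZE = 128  # samples; the host's choice

class SilentSynth:
    """Stand-in for a plugin: fills whatever buffer the host hands it."""
    def process(self, buffer):
        buffer[:] = 0.0  # a real synth renders its active voices here

def host_cycle(synth):
    """One host cycle: hand the synth an empty buffer, get it back full."""
    buffer = np.zeros(BUFFER_SIZE, dtype=np.float32)
    synth.process(buffer)  # if this takes longer than BUFFER_SIZE/SAMPLE_RATE
    return buffer          # seconds, the driver runs dry: a dropout, not latency

# The host repeats host_cycle() once per buffer period (~2.9 ms here) and
# posts each full buffer to the audio driver before it's due.
block = host_cycle(SilentSynth())
```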

 


Don't forget my favorite points:

 

1b) Keyboard scanning itself often happens in a pattern, for instance in batches of 8 keys in parallel, and the scan rate (scans per second) is usually not fast enough to allow phase-accurate playing for anything but low notes. So let's say you want to go ting-de-ding ting-de-ding on a standard A note: for the sake of good boogie-woogie or rock playing, you'd want to be able to start the tone at a multiple of its own wavelength, or not, and those differences are clearly audible, especially with reverberation.

 

2b) On a MIDI cable, every note played costs about 1 ms, so if you put down your 10 fingers, there's easily going to be more than a 16th-note-triplet difference between the first and the last note in the chord.

 

2c) Light travels at about 300,000 km per second, and electricity in wires at about 2/3rds of that speed, IIRC, so a MIDI cable a mile long would add what, 5 microseconds of extra delay? On top of milliseconds per note of serialization time, that's negligible.

 

Which brings us to probably the most important variable, apart from having the choice to get almost zero delay:

 

1), 3), 4), 5), and 6) must in the ideal case be EXACTLY the *same delay* at any point in time, no matter the rhythm, the number of notes being played, or any (MIDI) clock in the system, so that at least ADAPTING to the delay is easy and playing the machine is pleasant.

 

Lots of software does tricks with inserting notes into the output buffer, which is actually pretty horrible, even though on average it looks like there's a nice small delay.

 

With USB MIDI, the interface timing is much faster than 1 ms per note, and programming on Linux I found it's possible to use timestamping of events, which can be used to make a quite good fixed delay.

 

Theo.

 


So, have you tried playing through a speaker at further and further distances? Do you get the same feeling of being disconnected from the sound as you get when playing with high latency? I don't; it feels distant, but different. So the question is: is there a difference between how you perceive a sound generated in a room and the delay you feel from pressing a key until it sounds?

 

Does a big old church organ have terrible latency??

 

 


6) Audio output buffering delays

 

This is the big one, since it's the one we have a knob for, and since it's usually the biggest single contributor.

 

Why might we need big buffers? As we know, we need to increase buffering when we hear dropouts or zippering as we play, and that usually fixes it. Why?

 

The reason is that something is preventing the CPU from giving its attention to the audio software for some amount of time, and so the host gets behind in delivering buffers to the audio driver. The following assumes we're talking about a softsynth on a computer, but the same principles apply to other architectures.

 

The minimum number of buffers is 2; let's assume that case. The audio driver holds one buffer while it's busy converting the samples into voltages. When it's done, the other buffer better be there and ready.

 

Meanwhile, the host runs the plugin and gets a buffer's worth of samples. When it's ready, it posts it to the audio driver and waits for the driver to give back the previous ("used" or "empty") buffer.

 

Let's say the buffers are 64 samples. At 44.1 kHz, that's about 1.5 ms worth of audio data per buffer. Now let's say the host and plugin take about 0.5 ms to do their processing. That leaves 1 ms of idle time for them to take a smoke break between buffers.

 

However, let's say something happens in the system that needs attention, like a timer interrupt (oops, you left your calendar running!) or a packet from the internet, or a MIDI message. Fine -- that's what that 1 ms of "smoke break" time allows. No smoke break for the processor, but that's OK as long as the interruption doesn't take more than the available 1 ms to handle.

 

If it takes longer, the audio driver is left with no output data, and strange things happen. To avoid that problem, increase the audio buffer size (or number of buffers) and voila, the CPU has more time to catch up if it loses a ms or two doing other things. There's a safety net because the host has given the audio driver more to chew on, so it can be left alone longer in case of an interruption.

 

That comes at a cost: more latency. If we've given the driver twice as much audio data, and a new note comes in, it'll take twice as long waiting in line in the audio output "queue" for that note's sound to come out.
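A minimal sketch putting numbers on that trade-off, reusing the assumed 0.5 ms of host-plus-plugin processing time from above:

```python
SAMPLE_RATE = 44100
PROCESSING_MS = 0.5  # assumed time for host + plugin to fill one buffer

def buffer_stats(buffer_samples, num_buffers=2):
    buffer_ms = buffer_samples / SAMPLE_RATE * 1000
    headroom_ms = buffer_ms - PROCESSING_MS  # the "smoke break" per buffer
    latency_ms = num_buffers * buffer_ms     # audio queued ahead of a new note
    return buffer_ms, headroom_ms, latency_ms

for n in (64, 128, 256):
    b, h, l = buffer_stats(n)
    print(f"{n:3d} samples: {b:4.1f} ms/buffer, "
          f"{h:4.1f} ms headroom, {l:4.1f} ms output latency")
```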

 

Of course, the above is a simplification. I only mentioned one cause for dropouts: the CPU being busy. But there are others, such as PCI bus latency: the audio hardware is trying to tell the driver something, but some other piece of hardware has grabbed the I/O bus and won't let go.

 

And then there are Deferred Procedure Calls, which I don't know much about, but that's in the "CPU busy doing other stuff" category.

 


Hmm, and there are processing pipelines in the CPU which preferably run over quite a large number of samples to remain efficient. Also, there's memory caching and the filling of local registers in the CPU, which can be used more efficiently when they don't need to switch context too often. So for PCs and Macs (also having Intels, and in fact running a variation of X and Unix), you can probably run more plugins and/or have more notes of polyphony when increasing buffer size.

So, have you tried playing through a speaker at further and further distances? Do you get the same feeling of being disconnected from the sound as you get when playing with high latency? I don't; it feels distant, but different. So the question is: is there a difference between how you perceive a sound generated in a room and the delay you feel from pressing a key until it sounds?

 

Does a big old church organ have terrible latency??

A friend who was an experienced musician in a number of ways mentioned this. I was recording him on Rhodes back in the bad old days of tape. He played admirably; I still have it. He monitored in headphones. When he was done, I turned on the speakers, and on playing a note I noticed his monitor feed had been coming off the playback head (!)

 

He said it hadn't bothered him because he'd learned to deal with stuff like that before, and I *think* one of the examples was low notes on organs, which take time for the air column to get moving (though in that case the delays are pitch-dependent). Or maybe all notes are delayed, with old mechanical air-driven ones.

 

I'll defer to folks who really know that stuff.

 

Back to your 1st question: I know it doesn't affect me at all when the difference is just a matter of a couple feet or a few ms latency. I get annoyed at around 20 ms total latency, and it doesn't much matter to me whether it's 10 ms in the computer and 10 feet in the air, or 20 ms in headphones.

 

I'm not a good data point on latency, though. My timing leaves much to be desired, and I'm pretty latency-tolerant.

 

I'd love to do a blind study to see what latencies people really can detect. Maybe at some KC hang we could line up a number of test subjects ... ;-)


Playing organ in a big church is a trip. Some of the pipes may be in rooms close to you, say 20 to 30 feet away, some may be just above you in exposed casework, and some may be all the way at the back of the sanctuary, 100 feet away.

 

The choir will be singing with the organ as you hear it, since they typically sit close to the organ. The congregation will be a half second or more behind, since they hear it delayed.

 

If you listen to the congregation, you are doomed.

Moe

---

 


The psycho matter may have to do with the tendency of software makers to suggest a certain delay in their samples and sample processing, combined with binaural reverberation cues and the lack of inter-tone and chord cues that most normal (natural) instruments create. Organs, of course, will not change a bit in terms of their mechanical delays over the duration of a song, so once you've heard each note once, the reverb-enhanced delay experience (hearing the waves interact and bounce) is clear and generally doesn't change. Also, the air columns and the interacting waves in the organ machinery follow normal power and wave rules, so that works naturally, just like the soundboard on a piano and the size of the strings and enclosure have (complicated but natural) effects on the sound. Speakers placed in a space will create their own wave patterns, and unless they're very natural sounding, that may be as annoying as placing a piano in the wrong corner of a space.

 

Also interesting to note is that a digital-to-analog converter (except possibly in old or degenerate cases at very high sample rates) always has what in EE is called a reconstruction filter, which can add on the order of milliseconds or more of delay. To convert samples to analog signals when the sample frequency is around CD rate, it can theoretically take up to a significant fraction of a second to do a serious job of "perfect signal reconstruction", and that's theory which is hard to get around. So in a way, the longer the DA converter takes, the better the waves may come out with respect to the space's reverberation...

 

 


I've played small gigs without monitors, hearing myself through the same system as the audience. A bit difficult with dynamics, but still possible...

 

Interestingly, we played a gig with Cantaloop in an awfully dry room; all mistakes were heard. I recorded that gig a couple of meters away using a little Tascam pocket recorder. Even though I felt and heard all the small flaws from where I sat, I could barely hear them in the recording...

 

Sorry, left the subject a bit... I think it would be really interesting to test my latency sensitivity. In Cubase, or using Kontakt, I always feel the difference down to what the computer says is somewhere between 1.5-3 ms.

 


Hmm, and there are processing pipelines in the CPU which preferably run over quite a large number of samples to remain efficient. Also, there's memory caching and the filling of local registers in the CPU, which can be used more efficiently when they don't need to switch context too often. So for PCs and Macs (also having Intels, and in fact running a variation of X and Unix), you can probably run more plugins and/or have more notes of polyphony when increasing buffer size.
Right -- architecture matters.

 

As for pipelining, there are two kinds going on. One is normal processor pipelining (which I know a lot more about than I wish I did, having programmed pipelined packet processors in high-speed internet routers). That just means that while the processor is still finishing one instruction, it's already starting a later one, and has one or two or three more at intermediate stages as well. It's beside the point, other than that good pipelining increases CPU power without having to raise the CPU clock rate.

 

Another way they raise CPU power without increasing clock rate is to have more CPU cores. Each core can be running a different "thread" in the code. For hosts, this means that each plugin can run in a different thread, allowing more of them to work at the same time. That also helps a lot in avoiding the problem where the CPU is busy elsewhere.

 

Another big factor, which Al Coda can testify to much better than I can, is the various levels of memory cache and front-side bus speed, which determine whether the CPU is sitting there waiting for data. That's especially important for large sample sets, versus something like Arturia's minimoog, which clamors for CPU power to emulate lots of oscillators but doesn't need much fresh memory.

 


I still say there is a psychological difference between having a speaker right next to you and getting 6 - 7 ms of delay due to latency, and just having a non-latent sound emanating from a speaker 10 feet away.

 

Our brains are programmed to decode acoustic cues and clues and so the speaker sounds more "natural" even though it is farther away because we expect the sound to be delayed a bit. We don't expect the sound to be delayed when it's right next to us. The difference may be scientifically negligible, but our brains know something is wrong.


Given the sampling issue, I'm pretty sure it is also quite audible (certainly for the mid and high frequencies, and more audibly when there's reverberation) to take a Rhodes, a miked piano, or a non-digital organ and put any digital delay somewhere in the signal path...

 

I'm sure that with an additional unnatural delay, unless you play in an acoustically dead room, more adaptation is needed than might be desirable, and that there's also a connection with the way the sound is heard over the speakers. I mean, a nice PA sound path with quite some delay may work well enough, whereas a few milliseconds of extra delay on a natural instrument may instantly feel unnatural to a player unused to the idea. Of course, when it's an instrument with feedback issues, the whole feedback sound changes when there's a different-than-normal delay, and therefore all kinds of cues work differently...

 

About the processor: there's context switching, thread scheduling, and the unknown timing of UI and network activity. There's quite some pipelining in the Intels, and conditional branching brings some unpredictability; with the floating-point and integer computation pipelines per core, and even more with the heavy SSE-style parallel instructions (like for FFTs), there's pipeline latency and quite some context-switching overhead.

 

 


The real trip playing a pipe organ in a large church is that the latency of the lower-pitched sounds (pedals and low on the manuals) VARIES depending on the construction and length of the original pipe (or wooden sounding device).

 

Especially noticeable on 16' and 32' pedal stops: the note does not begin to sound until the air pressure (which is introduced at the bottom) reaches the top. This results in a delay that is over a second for some of the low pitches.

 

One of the challenges for the organist is to play such notes in ADVANCE by just the right amount so that they sound at the proper time.

 

BTW - on the Rodgers electronic large church organs, there are individual adjustments for volume (to compensate for standing waves in the room) and for delay. Current instruments use special software on a notebook computer connected to the organ for voicing.

Howard Grand|Hamm SK1-73|Kurz PC2|PC2X|PC3|PC3X|PC361; QSC K10's

HP DAW|Epi Les Paul & LP 5-str bass|iPad mini2

"Now faith is the substance of things hoped for, the evidence of things not seen."

Jim


 

Most important questions:

 

Are we testers or musicians?

 

Does it make sense to test the specific rigs of individual musicians, which might be simple ones like one keyboard (which?) in local mode and/or triggering one module (which?), or complex ones like several keyboards triggering several or lots of modules (how? MIDI daisy-chaining, or with the help of matrix switchers or other devices?) and/or triggering computer-based units running virtual instruments (natively or using DSP cards)?

 

Will the results of these tests change anything technically ?

 

Is USB MIDI really faster than "a MIDI cable" ?

(Theo,- are you sure ?)

 

Doing tests like these would be an endless task w/ endless individual results.

Each keyboard behaves differently as a MIDI master, and every hardware MIDI module does the same as a MIDI slave.

There are too many factors.

MIDI is always the same speed, as measured by baud rate.

Physical MIDI (5-pin DIN) handles 16 MIDI channels,- USB handles up to 64.

USB has more bandwidth, but not more speed for MIDI.

Every USB 1.1 interface is fast enough for MIDI.

I don't want to search all the links again,- but there's some info on the web reporting USB MIDI to be slower than 5-pin DIN MIDI,- not in general, but it happens.

 

http://en.wikipedia.org/wiki/Musical_Instrument_Digital_Interface

 

USB

>>>

Some comparisons done in the early part of the 2000s showed USB to be slightly slower with higher latency,[7] and this is still the case today. Despite the latency and jitter disadvantages, MIDI over USB is increasingly common on musical instruments.

<<<

 

As for digital or digitally controlled hardware synths and modules, the brand and type of microprocessor used inside, and the way it handles MIDI I/O, make big differences too.

 

One example is the old Roland D50/D550, which behaves slowly, especially if you play chords.

The German company Musitronics developed not only a memory expansion, but also a speed kit for the D50/550,- a piggyback board speeding up the machine by 4 times.

Because of the increase in speed, the LFOs and envelopes became 4x faster too, so the OS/firmware had to be changed to slow these down again. It worked! I have it in my D550 and it's much faster triggering, even when I play chords,- it's very audible and you can feel it.

 

IMO, the musician's interest is,- does he feel comfortable or not while playing his gear.

If yes,- no prob.

 

There are several ways to feel relatively comfortable as a result,- maybe playing only one keyboard of your choice and/or keeping the rig as small as possible, or investigating and finding the components for a large rig which fit best.

 

I myself, I'm old enough and had/have the luck to select from a somewhat larger amount of current gear and gear from the past, and I'm far away from arguing old gear is better than new in regard to MIDI.

There were and are pieces of gear behaving slow,- but all my serial and parallel port MIDI interfaces definitely behave better than my USB MIDI interfaces,- and my 2 Sycologic MIDI matrix switchers and my MIOC processor MIDItemp PMM88E are tight and fast toys up to today,- all physical MIDI and sometimes long cable runs,- but within the MIDI specs.

 

Playing a masterkeyboard into my USB MIDI interface, connected to a computer and triggering VSTs, is the slowest,- and I'm able to compare it directly by playing the same keyboard into another parallel-port MIDI interface connected to another, much slower PC, or connected to the same machine.

The difference is definitely caused by the connection and drivers,- both drivers are DirectMusic b.t.w., but from different 3rd parties.

The behaviour of specific plugins comes on top of that.

 

Testing one of my rackmount PCs becomes very interesting:

I use it w/ a Creamware SCOPE card (15 DSP chips), handling all the audio I/O and MIDI routing for Scope devices, but also the audio out for VSTs, which I use simultaneously, as well as Reason 4.

MIDI I/O handling/routing for VST and Reason is done by a DirectMusic MIDI driver and one of the LPT port MIDI interfaces.

The LPT port interface is pretty fast,- faster than USB, but the MIDI In of the Scope card is by far the fastest I'm able to compare w/ a computer solution.

 

Test your USB MIDI interfaces w/ MIDI Test software and w/ your computers.

http://miditest.earthvegaconnection.com/

If you own several interfaces and computers,- you probably will be surprised.

 

Unfortunately, you cannot use it for your masterkeyboards,- but it might be interesting to see how keyboards behave in a loopback test MIDI-wise: keyboard in "local off" and a short MIDI cable from MIDI Out to MIDI In,- pressing keys plays the internal tone generator, and you measure MIDI latency and jitter inside the machine at the related pins,- not the audio out as a result.

It can be that your MIDI interface/computer/software combo behaves as well as it can, and there are issues in the 1st device in the chain already.

 

A.C.

 

 

 

 


I still say there is a psychological difference between having a speaker right next to you and getting 6 - 7 ms of delay due to latency, and just having a non-latent sound emanating from a speaker 10 feet away.

 

Our brains are programmed to decode acoustic cues and clues and so the speaker sounds more "natural" even though it is farther away because we expect the sound to be delayed a bit. We don't expect the sound to be delayed when it's right next to us. The difference may be scientifically negligible, but our brains know something is wrong.

Sure, there's a difference. No doubt about it. But in an anechoic room, there wouldn't be. I bet the difference isn't so big, and I also doubt we can really detect a 1.5 ms difference, unless it's on the very edge of where we can detect at all (which I think is a bit higher, but I'm guessing).

 

I'm really dying to do a blind test. Too bad we can't do it remotely, like we could for 16-bit vs. 24-bit comparisons.


Uggh, my head hurts. I have some latency issues to figure out with Logic and my Evolver, and I don't like it one bit. No, sir, I don't like it. It's keeping me from producing my super-awesome electronic music. I should have been a drummer. ;)

Latency is the time between playing a note and hearing it.

I thought latency was that portion of time before a woman decides to take a pregnancy test.

Maybe this is the best place for a shameless plug! Our now not-so-new new video at https://youtu.be/3ZRC3b4p4EI is a 40 minute adaptation of T. S. Eliot's "Prufrock" - check it out! And hopefully I'll have something new here this year. ;-)


Cool, I look forward to trying that MIDI loopback test. It puts an upper bound on input latency. If input and output are equal, it would be half the reported values.

 

Ooops,- really ?

Which kind of loopback test you want to try ?

Testing a USB MIDI interface connected to a computer w/ MIDI Test is easy,- but test all the I/O ports in every combination too,- if there are several.

 

Testing a hardware MIDI keyboard would be much more complex, because you possibly don't have the test equipment, the schematics, and the service manual, and you'd need a lot of knowledge of how the machine works,- e.g., how and where (pins) the processor receives a keyboard trigger internally, processes this into outgoing MIDI, and transmits (pins) the MIDI stream to MIDI Out,- MIDI will be received by MIDI In (the loop), and the processor translates the MIDI note number and velocity value to produce a voice.

As soon as the voice is electronically active, you'd have the result for ONE MIDI channel and for only ONE note.

Then come intervals, chords, and continuous controller data in addition, and all of it might change the behaviour.

Only ONE MIDI channel up to now ...

But,- the instrument is complex,- it's multitimbral, receives on 16 MIDI channels, and so on ...

Maybe priority routines chime in if the machine is operated at its limits.

Forget it,- it's wasted time and nitpicking,- you don't know what the designers had in mind when they realized the product.

 

All keyboards and modules I owned and own were/are playable in a real world situation,- the computer stuff is a different story.

In the other threads, we discussed the extremes.

Usage of standard computer components vs custom chip/DSP in digital hardware musical instruments, not which masterkeyboards and MIDI modules come w/ the fastest MIDI I/O system.

Manufacturers have a marketing problem: they know they can sell a hi-end keyboard instrument to an end consumer for a price of 3 - 4 grand; that's the limit, and there are only rare exceptions.

It would be possible to design and produce faster gear,- if there were enough customers who want/need that and are willing to pay much more.

 

The truth is,- most want a complete studio in a box for a few hundred bucks.

 

A.C.

 

 

 


I thought latency was that portion of time before a woman decides to take a pregnancy test.
:laugh:

 

Cool, I look forward to trying that MIDI loopback test. It puts an upper bound on input latency. If input and output are equal, it would be half the reported values.

 

Ooops,- really ?

Sorry, I misstated that. If input and output latency are equal, then *each* is half the total. However, what's the likelihood they're equal? Hard to tell!
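In other words (a trivial sketch; the 50/50 split is pure assumption, as just noted):

```python
def split_loopback(round_trip_ms, output_share=0.5):
    """Apportion a measured loopback round trip between input and output.
    output_share=0.5 assumes they're equal, which we can't actually verify."""
    output_ms = round_trip_ms * output_share
    return round_trip_ms - output_ms, output_ms

in_ms, out_ms = split_loopback(6.0)  # hypothetical 6 ms round-trip reading
print(f"input ~ {in_ms:.1f} ms, output ~ {out_ms:.1f} ms "
      f"(each bounded above by the full 6.0 ms)")
```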

 

Which kind of loopback test you want to try ?
The one you linked: http://miditest.earthvegaconnection.com/

My MIDI-out to my MIDI-in, on my computer. (I have two different ones, so I can try each, or mix and match if there's a big disparity and maybe learn something, but they're both M-Audio USB.)

 

Regarding note assignment, as I said, it only kicks in when you hit the polyphony limit. In my experience, some instruments (hardware or plugins) handle it more gracefully than others. Someone else posted that with their software piano they could feel a problem, and given that it happened only when playing lots of notes with the pedal down, most likely it was the note assignment algorithm, and definitely not audio latency as was assumed.


I still say there is a psychological difference between having a speaker right next to you and getting 6 - 7 ms of delay due to latency, and just having a non-latent sound emanating from a speaker 10 feet away.

 

Our brains are programmed to decode acoustic cues and clues and so the speaker sounds more "natural" even though it is farther away because we expect the sound to be delayed a bit. We don't expect the sound to be delayed when it's right next to us. The difference may be scientifically negligible, but our brains know something is wrong.

Sure, there's a difference. No doubt about it. But in an anechoic room, there wouldn't be. I bet the difference isn't so big, and I also doubt we can really detect a 1.5 ms difference, unless it's on the very edge of where we can detect at all (which I think is a bit higher, but I'm guessing).

 

I'm really dying to do a blind test. Too bad we can't do it remotely, like we could for 16-bit vs. 24-bit comparisons.

 

I guess the next question then is when is your only monitor speaker ever 10 feet or more away from you in a real-life situation? And why should it be?

 

I don't think we should have to deal with these kinds of latencies in our hardware, especially since we never had to before. Using a computer is a different matter, right now. It will get better. But frankly I'd be pissed if I paid $3k+ for a modern, top of the line workstation and it had that kind of latency.


I missed this earlier.

 

1), 3), 4), 5), and 6) must in the ideal case be EXACTLY the *same delay* at any point in time, no matter the rhythm, the number of notes being played, or any (MIDI) clock in the system, so that at least ADAPTING to the delay is easy and playing the machine is pleasant.
Yes, but unfortunately, note assignment delay isn't constant on all implementations. #2 is at least totally predictable.

 

At what BPM & time sig would a 16th note triplet take 10 ms? The way I figure it, at 120 BPM, a quarter note is .5 sec, so:

 

8th = 250 ms

16th = 125 ms

32nd = 62.5 ms

64th = 31 ms

128th = 15 ms

 

so a 128th note triplet is 10 ms, if I understand correctly. (3 128th notes = 1 64th, just as 3 quarter note triplets = 1 half note, right? I'm no sight-reader.)
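Here's that arithmetic as a quick sketch; it confirms the 128th-note-triplet figure:

```python
def note_ms(bpm, division, triplet=False):
    """Duration of one note at a given division (4 = quarter, 128 = 128th)."""
    quarter_ms = 60000 / bpm
    ms = quarter_ms * 4 / division
    return ms * 2 / 3 if triplet else ms  # a triplet fits 3 notes in 2 slots

for d in (8, 16, 32, 64, 128):
    print(f"{d:3d}th = {note_ms(120, d):6.2f} ms")
print(f"128th triplet = {note_ms(120, 128, triplet=True):.1f} ms")  # ~10.4 ms
```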

 

With USB MIDI, the interface timing is much faster than 1 ms per note, and programming on Linux I found it's possible to use timestamping of events, which can be used to make a quite good fixed delay.
Pure USB-to-USB would be 1 ms less delay than MIDI-to-USB, since the USB delays on the receiving end are the same, but there's no 1 ms MIDI transmission serialization delay (the USB serialization delay is on the order of microseconds). However, whether that's better than pure MIDI-to-MIDI isn't established, and would probably vary a lot depending on the specific gear. Check out Al's link to the MIDI test -- the round-trip latencies are a lot higher than I would have guessed!

I guess the next question then is when is your only monitor speaker ever 10 feet or more away from you in a real-life situation? And why should it be?

 

I don't think we should have to deal with these kinds of latencies in our hardware, especially since we never had to before. Using a computer is a different matter, right now. It will get better. But frankly I'd be pissed if I paid $3k+ for a modern, top of the line workstation and it had that kind of latency.

What I said is that moving the speaker can be used for comparison purposes (with caveats). I also said that if a 64-sample difference bothers you, move the speaker 20" closer.

 

But in general, you're right that moving the speaker isn't a useful real-world compensation. Whether a 10 ms latency is appropriate in the marketplace at that price, well, the market will decide, and you have every right to your decision in that regard.

 

Meanwhile, I'd love to see what's actually detectable using scientific methods and a range of keyboard players. I know that I can't tell under 10 ms, but I'm not the most discriminating player by a long shot.


I don't think we should have to deal with these kinds of latencies in our hardware, especially since we never had to before.

 

Couldn't say it better!

And,- the technology needed to create complex, powerful, and responsive instruments exists.

 

Using a computer is a different matter, right now. It will get better.

 

It is already very good if you look for the right technology.

Synth-wise, and for FX, mixing & mastering, the best gear paired w/ stock computer technology is, in my opinion, Sonic Core.

http://sonic-core.net/joomla.soniccore/index.php

 

There are some goodies missing that we need, the bread & butter stuff like electromagnetic pianos, acoustic pianos, better organ emulation, and sample streaming,- but I think there's more to come, because developers are waiting for the release of the SDK 5.x package, which hopefully will arrive soon.

Well,- actually there's the option to use the Scope ASIO device/driver, mix/blend your favourite NI Kontakt instruments and assorted VSTis into the Scope mixer, and use a multi-channel AD as inputs for your hardware instruments' audio outs,- avoiding having to set up a hardware mixer.

The converter connects directly to the card/rack device,- so ASIO for your outboard would be in the game only for recording,- not gigging.

Only the few VSTis would deal w/ ASIO output latency and Windows MIDI, but possibly you might be able to avoid that too, if you use 2 hardware keyboards you already own for the organ and piano stuff.

 

But frankly I'd be pissed if I paid $3k+ for a modern, top of the line workstation and it had that kind of latency.

 

Exactly !

 

A.C.


Testing it:

 

Here's what I'd like to do. Have a setup where the player gets to play whatever they want. At a number of different settings, they get a minute to play and then settings are changed.

 

Each time, the player gets to rate the experience, probably in 3 or 4 categories something like this:

 

1) no detectable latency

2) detectable latency

3) annoying latency

4) completely unacceptable

 

At first, I'd tell the player I'm ramping the latency up by say 1.5 ms each time, and then back down again, telling the player what the current latency is. (Total round trip as measured, not just the buffering settings.)

 

Then a sequence of random latencies, from the minimum up to say the 30 ms range (which I would guess even the least sensitive among us would find annoying).

 

We'd be able to tease a few interesting figures from the results.

 

Since this can only be done using VSTi instruments, I'd probably pick a single one for all tests, and I realize that whatever I chose, it wouldn't be ideal for all players. Perhaps I could have two trials for each player, one with piano and the other with NIB4, letting the players adjust NIB4 however they want.
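For what it's worth, here's a minimal sketch of how the randomized block of trials could be generated. The 1.5 ms step and the ~30 ms ceiling are the figures proposed above; the 3 ms floor and everything else are assumptions:

```python
import random

MIN_MS, MAX_MS, STEP_MS = 3.0, 30.0, 1.5  # floor is an assumed minimum
RATINGS = ["no detectable latency", "detectable latency",
           "annoying latency", "completely unacceptable"]

def make_trials(seed=None):
    """Shuffled list of latency settings, one one-minute trial each."""
    count = int((MAX_MS - MIN_MS) / STEP_MS) + 1
    levels = [MIN_MS + i * STEP_MS for i in range(count)]
    random.Random(seed).shuffle(levels)
    return levels

for latency in make_trials(seed=42):
    print(f"play 1 minute at {latency:.1f} ms total, then rate: {RATINGS}")
```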

