MIDI 2.0: will there ever be any products?



On 2/26/2024 at 9:28 AM, JohnG11 said:

SynMike, 30 notes with identical timing? To what purpose?

The microprocessors in any MIDI instrument or PC will play them back sequentially. They are serial devices.

No electronic instrument on this planet can play back even 2 truly simultaneous notes, let alone 30.

(Correction: it can play back 30 notes by starting them one at a time, albeit very quickly with a modern processor.)

They're played in the order in which they appear in the file or over the interface.

The processor in, say, a keyboard, scans the keys and plays them back one after the other in the order in which it 'sees' them.

They may appear to be simultaneous, but they're not.

 

Maybe 30 notes with identical timing is a bad example, not very real-world. (I've played trumpet in an orchestra. LOL)

 

However, simultaneous events, even 30 Note On messages, are possible. It does not have to be serial. Today's synthesizers process each note and render it in serial fashion, but they do not have to.

 

In MIDI 2.0, you could send a bunch of messages all sharing a timestamp. The receiving device will generally get these in a single packet over USB, Ethernet, etc. The receiver could calculate the resulting sound of the 30 simultaneous notes in the packet and play that rendered sound, rather than rendering each note individually. It's what a DAW does with multiple tracks of audio - sync/phase is reproducible to audio clock rate. Will future synthesizers do that? I don't know. If processors are many times faster in 20 years, why not?
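The idea can be sketched in a few lines (a toy illustration, not anything from the spec): notes that share a timestamp all start at sample 0 of the same buffer, so the result is simultaneous at audio resolution even though the code loops over the notes serially.

```python
import math

SAMPLE_RATE = 48000

def render_chord(freqs, duration_s=0.01):
    """Mix several notes that share one timestamp into a single buffer.
    Every note starts at sample 0 -- truly simultaneous at audio rate,
    even though this loop visits the notes one at a time."""
    n = int(SAMPLE_RATE * duration_s)
    buf = [0.0] * n
    for f in freqs:
        for i in range(n):
            buf[i] += math.sin(2 * math.pi * f * i / SAMPLE_RATE) / len(freqs)
    return buf

chord = render_chord([261.63, 329.63, 392.00])  # C major triad, one buffer
```

Serial processing and simultaneous rendering aren't in conflict: the loop order disappears once the samples are summed.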

 

But perhaps as Craig pointed out above, the point is to reproduce exactly what you played, when you played it. That includes simultaneous events. Repeatable, identical reproduction is useful. MIDI 2.0 defines far better support for simultaneous events than MIDI 1.0 does.

Mike Kent

- Chairman of MIDI 2.0 Working Group

- MIDI Association Executive Board

- Co-Author of USB Device Class Definition for MIDI Devices 1.0 and 2.0

 




4 minutes ago, SynMike said:

It does not have to be serial.

 

MIDI and USB are both inherently serial, though. In fact, USB stands for "Universal Serial Bus." Yes, USB has packets, but each packet is sent serially. The amount of data per packet has increased over time; I believe USB 3 transfers up to 1024 bytes per packet. Though different parts arriving in different packets can be a cause of jitter.

Maybe this is the best place for a shameless plug! Our now not-so-new new video at https://youtu.be/3ZRC3b4p4EI is a 40 minute adaptation of T. S. Eliot's "Prufrock" - check it out! And hopefully I'll have something new here this year. ;-)


This has been such a great thread, I've read multiple articles on MIDI 2.0 but never grasped it as well as I have reading this thread. I also sweat when remembering Roland SE-01's filter stepping because of the limitations of MIDI and buying a big dumb CV knob dongle just to get smooth filter movement.


6 hours ago, techristian said:

Whatever becomes the established system will dominate. It doesn't matter which one is better.

As someone who wrote a bit of MIDI software, I don't want a whole bunch of new things to implement in my software.

 

Well, the established system is just plain MIDI. MIDI 1.0 and MIDI 2.0 are not competing systems, MIDI 2.0 simply adds more options to MIDI 1.0. You can implement as much or as little as you want, the same way that not all MIDI 1.0 gear implements all the features of MIDI 1.0.

 

Think of it this way: Studio One added Dolby Atmos to version 6.5. But that doesn't mean you need to use Atmos or mix with it. You can ignore it completely, or get totally into it...like any new feature that any gear adds. 


6 hours ago, AnotherScott said:

 

MIDI and USB are both inherently serial, though. In fact, USB stands for "Universal Serial Bus." Yes, USB has packets, but each packet is sent serially. The amount of data per packet has increased over time, I believe USB 3 transfers up to 1024 bytes per packet. Though different parts arriving in different packets can be a cause of jitter.

Multiple MIDI 2.0 messages can fit into a single USB Packet. So they arrive at the Receiver at the same time. Of course, they maintain their serial order, which is a fundamental rule for MIDI. But all the messages in the single USB packet can use a shared timestamp so the receiver can treat them as simultaneous, not spread across time in a serial manner. Multiple USB Packets can also share a timestamp if you can't fit everything into one USB packet.

 

Stepping: Even with a relatively low resolution, you don't always hear the steps. A good implementation of MIDI 1.0 7-bit data will apply smoothing, adding smaller, interpolated steps between the steps of the incoming MIDI messages so the change sounds smooth. Lots of musical devices do this. Of course, it is probably better to have higher resolution in the first place. But even with MIDI 2.0, smoothing will probably still be used.
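The smoothing idea can be sketched like this (illustrative only; real firmware typically slews the parameter a little each audio block rather than per message):

```python
def smooth_cc(prev, target, substeps=8):
    """Interpolate between two successive 7-bit CC values so the audible
    parameter glides instead of jumping. Each incoming step of the
    controller becomes several smaller internal steps."""
    return [prev + (target - prev) * k / substeps for k in range(1, substeps + 1)]

# A filter-cutoff CC moving from 64 to 72: eight small moves, not one jump.
steps = smooth_cc(64, 72)
```

With enough substeps the 7-bit staircase becomes inaudible for slow moves; fast moves are where higher source resolution still wins.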

MIDI 2.0 messages generally use 32-bit values. That's over 4 billion steps. But it doesn't make sense, from a processing standpoint or when considering available bandwidth, to send 4 billion messages when adjusting a volume knob from 0-100%. So senders will send less than 32 bits' worth of data. Some properties might be served with 8 bits of data, and I think almost all properties will be fine with 10, 12, 14, or 16 bits. The 32-bit potential might seem like a waste at times, for most properties. But modern systems often use 32-bit data storage regardless of the source data, so the format is friendly to modern memory and processing.

 

Mike.


 


22 hours ago, AnotherScott said:

This is similar to how early drum machines were "too perfect" and then they programmed humanization into them. But that doesn't stop some people from wanting to use the mechanical vs. human sounding "player."

I actually don't like humanizing (randomizing). 

Playing drums in real time puts approximately the same amount and direction of 'error' consistently. 

I might rush or delay the backbeat the same amount every measure.

I might play the bass a bit ahead or behind the kick drum to emphasize either the pitch or the percussiveness.

I might play the start of the guitar strum before the beat so it peaks with the beat.

I might hit the crash a fraction of a second early to emphasize it.

I might play the hand drums a little behind the beat so I sound like a Puerto Rican or Cuban salsa drummer.


And so on. It's not random, it's intentional.

 

21 hours ago, Anderton said:

Basically, I think what you're describing is  "stretch tuning" for guitar instead of piano.

 

That's a good insight, I never thought of it like that.

Is there an official name for tuning the guitar like that? (Guitar is my 7th instrument).



Notes ♫


Bob "Notes" Norton

Owner, Norton Music http://www.nortonmusic.com

Style and Fake disks for Band-in-a-Box

The Sophisticats http://www.s-cats.com >^. .^< >^. .^<


11 hours ago, SynMike said:

Multiple MIDI 2.0 messages can fit into a single USB Packet. So they arrive at the Receiver at the same time. Of course, they maintain their serial order, which is a fundamental rule for MIDI. But all the messages in the single USB packet can use a shared timestamp so the receiver can treat them as simultaneous, not spread across time in a serial manner. Multiple USB Packets can also share a timestamp if you can't fit everything into one USB packet.

 

Timestamps make sense for sequencing, I'm not sure they are relevant for real time performance?

 

Related... if a compound MIDI event is split over two packets, the elements presumably can't be kept perfectly in sync unless the data in the first packet is delayed until the second packet is received. Is this simply a non-issue as USB Itself has gotten faster?

 

11 hours ago, SynMike said:

Stepping: Even with a relatively low resolution, you don't always hear the steps. A good implementation of MIDI 1.0 7-bit data will apply smoothing, adding smaller, interpolated steps between the steps of the incoming MIDI messages so the change sounds smooth. Lots of musical devices do this. 

 

I didn't mean to imply that stepping is a constant issue in MIDI 1.0, sure you don't always hear steps. But CHarrell's reference to the Roland SE-02 above is a real-world example of a problem MIDI 2.0 could have solved, that its filter cutoff resolution was stepped due to the 128 available values between min and max, and you were able to buy a non-MIDI (CV) filter cutoff control to get around that. Are you saying that Roland (or co-developer Studio Electronics) could have avoided the problem if they had implemented some kind of smoothing algorithm?

 

2 hours ago, Notes_Norton said:

I actually don't like humanizing (randomizing). 

Playing drums in real time puts approximately the same amount and direction of 'error' consistently. 

I might rush or delay the backbeat the same amount every measure.

I might play the bass a bit ahead or behind the kick drum to emphasize either the pitch or the percussiveness.

I might play the start of the guitar strum before the beat so it peaks with the beat.

I might hit the crash a fraction of a second early to emphasize it.

I might play the hand drums a little behind the beat so I sound like a Puerto Rican or Cuban salsa drummer.


And so on. It's not random, it's intentional.

 

Right. As Craig said, "I think the goal is to reproduce exactly what you played, when you played it." I think the issue of possibly wanting to humanize too much perfection comes into play (as I said) when the line was not played by human hands to begin with. Or if, as it happens, you actually did play two notes simultaneously (or, more specifically, less than 1 ms apart), or played five notes together that were not each at least a millisecond away from any other note you played, etc. MIDI 1.0 would space the notes 1 ms apart; MIDI 2.0 would maintain whatever timing you had played. MIDI 2.0 is not going to create simultaneous input from input that was not simultaneous to begin with, but (as I understand it) it will better preserve the timings you played (i.e. not having to force a minimum of 1 ms between events).
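For reference, the roughly-1-ms spacing in MIDI 1.0 falls straight out of the 5-pin DIN line rate: 31250 baud, 10 bits per byte on the wire (start + 8 data + stop), 3 bytes per Note On.

```python
# Why MIDI 1.0 can't put two Note Ons closer than ~1 ms on a DIN cable.
baud = 31250          # MIDI 1.0 serial rate
bits_per_byte = 10    # start bit + 8 data bits + stop bit
note_on_bytes = 3     # status + note number + velocity

ms_per_note_on = note_on_bytes * bits_per_byte / baud * 1000  # ~0.96 ms
```

So "simultaneous" chords on the old transport were always smeared across about a millisecond per note; the 2.0 shared-timestamp mechanism is what removes that floor.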


2 hours ago, AnotherScott said:

Are you saying that Roland (or co-developer Studio Electronics) could have avoided the problem if they had implemented some kind of smoothing algorithm?

 

I'd mentioned the Panasonic DA7 mixer, which used interpolation to smooth the fader action. Some other digital mixers at the time didn't do that. So, I don't know how easy or difficult it is to do a smoothing algorithm. Mike would have a better idea about that.


4 hours ago, AnotherScott said:

Timestamps make sense for sequencing, I'm not sure they are relevant for real time performance?

 

Yes, they are. A controller device could capture your live playing and send data with MIDI 2.0 JR Timestamps. If the controller knows that there is a 367.25ms difference between 2 notes that you played and sends those notes with JR Timestamps, then the receiver can play those 2 notes 367.25ms apart, regardless of any timing jitter introduced by a transport.
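The receiver-side logic can be sketched as follows. The numbers and the fixed latency budget are hypothetical, and the units are milliseconds for readability (JR Timestamps actually use their own tick resolution): the receiver delays everything by a small fixed amount, then fires strictly by timestamp, so per-message transport jitter no longer affects the gaps between notes.

```python
LATENCY_BUDGET_MS = 3.0  # hypothetical small fixed delay, traded for accuracy

def playback_times(events):
    """events: (jr_timestamp_ms, arrival_ms) per message. Play each message
    at its timestamp plus a fixed budget; the jitter each message suffered
    in transit (arrival_ms - jr_timestamp_ms) drops out entirely."""
    return [ts + LATENCY_BUDGET_MS for ts, _arrival in events]

# Two notes played 367.25 ms apart; the second arrived with extra jitter.
times = playback_times([(0.0, 1.1), (367.25, 369.9)])
gap = times[1] - times[0]  # back to exactly 367.25 ms
```

The cost is that the budget must exceed the worst-case jitter, which is the latency tradeoff described below.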

But to be honest, I don't think any devices will implement JR Timestamps in the next couple of years. MIDI 2.0 is a huge specification (actually a set of specifications) and will take some time for the various features to be adopted across the many manufacturers. I predict that JR Timestamps might start to be implemented in new products 3-5 years from now (but predicting the future is very hard so there's a big chance I will be proven wrong).

 

4 hours ago, AnotherScott said:

Related... if a compound MIDI event is split over two packets, the elements presumably can't be kept perfectly in sync unless the data in the first packet is delayed until the second packet is received. Is this simply a non-issue as USB Itself has gotten faster?

 

Topics intertwined. Using timestamps requires adding latency. It's a timing tradeoff: you can have delivery of each message as fast as possible, or you can add a bit of latency and have timing as accurate as possible. You can't have your cake and eat it too. Transports are quite fast, so latency keeps dropping. If you are using JR Timestamps, you are likely to apply a bit of latency so multiple USB packets received in a single USB frame can be treated as simultaneous. Without JR Timestamps, I still expect MIDI 2.0 timing to be better than MIDI 1.0 on USB. Most MIDI devices use the "Full Speed" defined in USB 1.1, which can theoretically send up to 152 MIDI 2.0 Channel Voice Messages per millisecond (the time of a single MIDI 1.0 message on 5-pin DIN). Devices which use the "High Speed" defined in USB 2.0 can theoretically send 6656 MIDI 2.0 Channel Voice Messages per millisecond. I say "theoretically" because USB itself is just one component, and systems have other limitations.
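Those per-millisecond figures can be sanity-checked from the USB bulk-transfer budgets (assuming the usual limits of 19 bulk packets of 64 bytes per Full Speed frame, and 13 bulk packets of 512 bytes per High Speed microframe, with a MIDI 2.0 Channel Voice Message being one 64-bit, 8-byte Universal MIDI Packet):

```python
# Back-of-envelope check of the 152 and 6656 messages-per-ms figures.
msg_bytes = 8  # a MIDI 2.0 Channel Voice Message is a 64-bit UMP

# USB 1.1 Full Speed: up to 19 bulk packets of 64 bytes per 1 ms frame.
full_speed_per_ms = 19 * 64 // msg_bytes        # -> 152

# USB 2.0 High Speed: up to 13 bulk packets of 512 bytes per 125 us
# microframe, and 8 microframes per millisecond.
high_speed_per_ms = 13 * 512 * 8 // msg_bytes   # -> 6656
```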

 

Early testing of the forthcoming MIDI 2.0 network specification shows extremely low latency over a local (the devices in your studio) network. I think networking will provide better performance than USB within a couple of years (when operating systems add support and devices start to add it).

 

4 hours ago, AnotherScott said:

I didn't mean to imply that stepping is a constant issue in MIDI 1.0, sure you don't always hear steps. But CHarrell's reference to the Roland SE-02 above is a real-world example of a problem MIDI 2.0 could have solved, that its filter cutoff resolution was stepped due to the 128 available values between min and max, and you were able to buy a non-MIDI (CV) filter cutoff control to get around that. Are you saying that Roland (or co-developer Studio Electronics) could have avoided the problem if they had implemented some kind of smoothing algorithm?

 

Roland implements smoothing algorithms in many products. So do many other manufacturers. But it's not feasible/possible in every product.

 

4 hours ago, AnotherScott said:

MIDI 2.0 is not going to create simultaneous input from input that was not simultaneous to begin with, but (as I understand it) will better preserve the timings you played with (i.e. not having to force minimum 1 ms between events).

 

Right. More accurate and repeatable reproduction is the end goal.

 

Mike.


 


21 hours ago, AnotherScott said:

Right. As Craig said, "I think the goal is to reproduce exactly what you played, when you played it." I think the issue of possibly wanting to humanize too much perfection comes into play (as I said) when the line was not played by human hands to begin with. Or if, as it happens, you actually did play two notes simultaneously (or, more specifically, less than 1 ms apart), or played five notes together that were not each at least a millisecond away from any other note you played, etc. MIDI 1.0 would space the notes 1 ms apart; MIDI 2.0 would maintain whatever timing you had played. MIDI 2.0 is not going to create simultaneous input from input that was not simultaneous to begin with, but (as I understand it) it will better preserve the timings you played (i.e. not having to force a minimum of 1 ms between events).

 

That's a valid point.

But the musicians in a band or a symphony orchestra can hit a note at exactly the same time, and it still will not reach the listener together. The distance to the ear and the speed of sound are the factors.

How many ms difference will it take for the drummer and the cello player to get to the ears of someone in the audience?

I don't have the time to figure out the difference, I have an early gig today. But when playing wind synth as opposed to saxophone on stage, I don't notice any difference. Perhaps I subconsciously compensate, or perhaps the human ear can't detect it.
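A rough answer to the question above: sound covers about 343 m/s at room temperature, so each metre of extra distance adds roughly 2.9 ms of delay (temperature and distances here are assumptions for illustration).

```python
SPEED_OF_SOUND = 343.0  # m/s, assuming roughly room temperature

def delay_ms(metres):
    """Acoustic travel time for a given distance, in milliseconds."""
    return metres / SPEED_OF_SOUND * 1000

# If the drummer sits 3 m further from a listener than the cellist,
# the drummer's hit arrives ~8.7 ms later even when played dead together.
extra = delay_ms(3.0)
```

That is comfortably above the 1 ms granularity being discussed, which supports the point that room acoustics dwarf MIDI 1.0's serialization smear for a seated audience.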

Notes ♫


1 hour ago, Notes_Norton said:

But the musicians in a band or a symphony orchestra can hit a note at exactly the same time, but they will not reach the listener together. The distance to the ear and the speed of sound are the factors.

How many ms difference will it take for the drummer and the cello player to get to the ears of someone in the audience?

 

There are plenty of real-world variables in how an audience may hear something, which may create a perception that is different from the performers' intent. I don't think that negates the value of not having MIDI introduce more of them.

 

Also, not everything about MIDI is based on emulating the behavior of multiple people playing actual acoustic instruments in real space.


[RANT]

I don't know!

Nobody here seems to understand how processors work.

I started writing programs in the late sixties and was recruited by a mainframe manufacturer in '72. I was there for twenty years. Through modems, ISDN, LANs and their many different formats, and then DSL and fibre networks.

I've worked on mainframe operating systems (rewriting data communications parts in machine code), DOS, OS/2, Windows 2 onwards.

 

So ... I believe I do know a tiny bit about how things work in this respect: hardware, processors, networks, protocols (from 2780/3270, CSMA/CD, Q.921/.931, to GSM), etc.

I'm no expert, very far from it, when it comes to using DAW software. But I do understand the code and how it's processed.

I worked in data comms/tele comms and lastly satellite comms from 1970 until 2002, most of it from the computer aspect, but also writing and delivering courses worldwide for a satellite comms company (InMarSat).

 

When a packet of data arrives at the USB interface, it has to be examined by the operating system to understand what the contents are.

It then gets passed on by the operating system to the appropriate program, which examines the first byte to decide what message it is, then takes the appropriate action.

The DAW software then e.g. passes the data to a VSTi to generate a note and then goes back to look at the next message in the packet and so on.

With the best will in the world, a modern system, even with a 4 GHz octa-core processor, still deals with one message at a time.

So even two notes timestamped with the same time will be started one at a time. The gap between them may be too small to hear.

But the notes are still processed SERIALLY.

 

There's no guarantee that notes originating at a music keyboard played simultaneously will occupy just one USB packet.

They may, they may not.

 

But anyway, just like a QWERTY keyboard, an electronic music keyboard is scanned from one end to the other many times per second, and notes are accumulated sequentially from the scan and transferred to a buffer in digital format. This is another serial process.

Even if you physically play two notes simultaneously (an analogue process) they'll each be turned to MIDI Note On events (whether 1.0 or 2.0) one at a time by the processor chip in the keyboard and placed in the transmission buffer, again serially.
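The scan-order behaviour described above can be sketched as a toy matrix scan (a hypothetical 8x8 layout; real keyboards also debounce and, for velocity, time two contacts per key):

```python
def scan_matrix(read_row):
    """One pass over an 8x8 key matrix: rows are strobed one at a time and
    the columns read back, so even keys pressed 'at once' are discovered
    in scan order. read_row(r) returns an 8-bit column bitmask for row r."""
    events = []
    for row in range(8):
        cols = read_row(row)
        for col in range(8):
            if cols & (1 << col):
                events.append(row * 8 + col)  # key number, in scan order
    return events

# Two keys held simultaneously still come out of the scan serially.
pressed = scan_matrix(lambda r: {0: 0b1000, 2: 0b10000}.get(r, 0))
```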

 

Even if you devised a system where all the keys were constantly monitored and triggered an interrupt as they were played they would still be processed serially by the code running in the processor.

 

Sorry, getting exasperated!

[/RANT]

 

Akai EWI 4000s, Yamaha VL70m, Yamaha AN1x, Casio PX560, Yamaha MU1000XG+PLGs-DX,AN,VL.

 


3 hours ago, JohnG11 said:

Even if you devised a system where all the keys were constantly monitored and triggered an interrupt as they were played they would still be processed serially by the code running in the processor.

 

Sorry, getting exasperated!

[/RANT]

 


Ok so then, are we not at a time and place where each key could be embedded with a CPU? With only one stream of info, who cares if a CPU does serial processing? Each key is independently triggering its own sound module. Just need an 88-channel mixer. 🤪

 

More to the point regarding new features of midi 2.0, the improved communication within a studio or stage rig is going to be wondrous.

 

And I am intrigued by the various cables which now support MIDI communication. An Ethernet bus seems like an added cost, nice as an option, but I really prefer to stay away from too many adapters, to keep vintage gear a lively part of the conversation.
 

Wondering what happens with SysEx?

 

PEACE

_
_
_

 

 

When musical machines communicate, we had better listen…

http://youtube.com/@ecoutezpourentendre


 

On 2/29/2024 at 9:12 AM, JohnG11 said:

[RANT] Even if you devised a system where all the keys were constantly monitored and triggered an interrupt as they were played they would still be processed serially by the code running in the processor.

[/RANT]

Notes/Events may be processed serially, but that doesn't mean they can't be rendered simultaneously.

 

On 2/29/2024 at 1:04 PM, Thethirdapple said:

Wondering what happens with SysEx?

 

In MIDI 2.0? It continues to exist like in MIDI 1.0. It is slightly transformed into a packet-based transmission in the "Universal MIDI Packet Format" but the core data remains the same, allowing for backward compatibility. In fact, MIDI-CI is a set of SysEx messages that enact a lot of the MIDI 2.0 functions.

 

Then we added something new called System Exclusive 8 (or SysEx8), which is the same concept but allows 8 bits per byte instead of the 7-bit data format of MIDI 1.0. Like the older SysEx, it can be used by manufacturers, tagged with their own ID, for their custom purposes, or for standardized functions that might be defined in the future. The limitation is that SysEx8 is not backward compatible like 7-bit SysEx.
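To make the difference concrete: classic SysEx has to smuggle 8-bit data through 7-bit bytes, while SysEx8 carries payload bytes unchanged. The naive two-for-one packing below is illustrative only; real MIDI 1.0 dump formats each define their own (usually denser) packing.

```python
def to_7bit(data):
    """Split 8-bit bytes into 7-bit-safe values so they can travel inside
    classic SysEx, which reserves the top bit for status bytes. SysEx8
    skips this step entirely: payload bytes go through as-is.
    Illustrative 2-for-1 scheme, not any standard's actual packing."""
    out = []
    for b in data:
        out.append(b >> 7)     # the stripped high bit
        out.append(b & 0x7F)   # the low seven bits
    return out

packed = to_7bit([0x80, 0x7F, 0xFF])  # 3 payload bytes become 6 wire bytes
```

The doubling (or whatever ratio a real packing achieves) is exactly the overhead SysEx8 eliminates.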

 

Thanks,

Mike.



 


Well, this is certainly good news: Microsoft just released Developer Preview 5, which includes an updated USB driver, app-to-app MIDI support, loopback MIDI, a preview of the MIDI Settings app, an updated MIDI Console app, and more. Progress!


I'm rewiring my studio room. I've finally decided to be honest with myself and admit that the day of the MOTU MIDI TimePiece AV USB is over. I hadn't actually used it in quite a while even before I wired up this room a year or so ago, but there I was filling my cable ducts with MIDI cables. I've realized for a while now that my "infrastructure" just wasn't up to par with how I'm actually doing things. I'd installed the cable ducts to keep from having cables all over the floor, and now... I've got cables all over the floor.

 

So I've removed the MTP, and a couple of other place holders, from my big rack and am placing the cables I actually need in my cable ducts. I won't sell these units though, I've learned from past experience I may regret that.



What's old is new...

 

Midi is a wondrous thing, and I thank all those involved in making it a reality for us in our musical meanderings.

 

The 2.0 logo is sweet and struck a familiar chord today. Dug through a ton of old footage from '88-'89 of placing a tiny mirror on a speaker's diaphragm while pointing a red laser. Seeing music is awesome! Oh yeah, playing "with electricity" on the Juno6 became like sculpting sound on a wall sized oscilloscope.  tabular times indeed...



On 3/26/2024 at 4:35 PM, Thethirdapple said:

Midi is a wondrous thing, and I thank all those involved in making it a reality for us in our musical meanderings.

 

The fact that an entire industry - even fierce competitors - could get behind something that would benefit the consumer should be an embarrassment to other industries. And, the fact that our industry continues to do it four decades later is mind-boggling.

