MIDI 2.0: will there ever be any products?



Deliberately contentious post. Please ignore if it upsets you.

 

[RANT]

Well, here we are FOUR years after the announcement of the wondrous, incredible, all-singing-and-dancing solution to all our problems, MIDI 2.0, and still not a product in sight (that I know of).

 

Whoops! Totally forgot about the Roland A88 MkII.

With that I can alter keyboard velocity (quote) "You can play MIDI 2.0 compatible software sound generators with rich expressiveness." (end quote) Hmmm.

To what resolution? It doesn't say. I wonder if it's like so many pitch bend controllers that have a 7-bit resolution (or less) but pretend to be 14-bit?

 

And Assignable control [1]–[8] (quote) "You can make smooth changes to parameters such as filters and resonance on software sound generators that are compatible with MIDI 2.0." (end quote)

But ... The USB driver setting is disabled when this instrument is in MIDI 2.0 mode (the “GENERIC” setting is always used). Whatever that means.

Sound generators compatible with MIDI 2.0? Are there any yet? Hmmm again.

 

What about MIDI-CI and all the rest of the v2.0 specs? No mention that I can see.

 

It's all gone SO quiet.

Will there ever be any MIDI 2.0 products on the market? (Not that I'm in the market for any, I just don't have that kind of budget anymore.)

[/RANT]

 

JG

Akai EWI 4000s, Yamaha VL70m, Yamaha AN1x, Casio PX560, Yamaha MU1000XG+PLGs-DX,AN,VL.

 




This is a long process. The MIDI Association updated their MIDI 2.0 spec in June of 2023. Apple and Microsoft only recently added MIDI 2.0 drivers and APIs to their operating systems, and both have been showing up at tech events to explain their work over the last 8 months. MS's MIDI 2.0 API is now public. Yamaha funded MIDI Workbench tools to prototype MIDI 2.0 products. Roland and Korg got controllers out for early adopters to experiment with (A88mkII and Keystage). Yamaha has already publicly stated that the Montage M will support MIDI 2.0. There are many developers prototyping, but the MI hardware industry is not terribly fast; it takes quite a while from conception to a purchasable product. I think we take for granted how miraculous it was for MIDI 1.0 to be adopted as a standard.

 

Why would we want MIDI 2.0?  There's a nice long list of advances and improvements.  These 4 are most interesting to me. 

1. Bi-directional communication between devices allows for real-time feedback. 

2. Much higher resolution: 32-bit vs. 7-bit

3. Property exchange - when you connect a MIDI 2.0 instrument to a compatible device, they can automatically configure themselves and optimize the communication settings, simplifying the setup process.

4. Backwards compatibility 
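For anyone curious about point 2, a quick Python sketch of what those bit depths actually buy you (the step counts are just arithmetic; in MIDI 2.0 velocity is 16-bit and most continuous controllers are 32-bit):

```python
# Comparing controller step counts in MIDI 1.0 vs. MIDI 2.0.
# MIDI 1.0 data bytes carry 7 bits; MIDI 2.0 velocity is 16-bit
# and most controllers in the new protocol are 32-bit.

def step_count(bits: int) -> int:
    """Number of distinct values a controller of the given bit depth can send."""
    return 2 ** bits

midi1_cc = step_count(7)         # 128 values
midi2_velocity = step_count(16)  # 65,536 values
midi2_cc = step_count(32)        # ~4.29 billion values

print(midi1_cc, midi2_velocity, midi2_cc)
```

That jump from 128 to 65,536 steps is what makes smooth fades and filter sweeps possible without stair-stepping.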

 

 


Yamaha CP88, Casio PX-560


I probably won't live long enough to see keyboard manufacturers embedding wireless protocols into their master controllers.  The backside of mine always looks like a retro cable jungle.  Four outputs, three external controllers, MIDI, two power cords, etc.  Maybe I could use red cables and go for a techno look.

Want to make your band better?  Check out "A Guide To Starting (Or Improving!) Your Own Local Band"

 


18 hours ago, JohnG11 said:

Will there ever be any MIDI 2.0 products on the market?

 

As ElmerJFudd points out, MIDI is progressing. Check out this post from my NAMM show report. There's also a lot of background work on profiles and such.

 

Remember, it took MIDI 1.0 40 years to reach where it is now, with show control, MPE, MIDI over USB, etc. Frankly, there's no urgent need to push 2.0 out into the world, today's MIDI works just fine. MIDI 2.0 isn't like flicking a switch and going from 1.0 to 2.0, it's more like a crossfade. For example, at some point, DAWs will accommodate it. Apple already has MIDI 2.0 built in and the Windows API is public. When DAWs can work with 2.0, then it will be worth getting a 2.0 controller. 

 

Also, I was President of the MIDI Association in 2020 and 2021, so I can attest from personal experience that those two years were essentially lost due to covid. It put the brakes on the process. 

 

14 hours ago, ElmerJFudd said:

3. Property exchange - when you connect a MIDI 2.0 instrument to a compatible device, they can automatically configure themselves and optimize the communication settings, simplifying the setup process.

 

This is what excites me the most about MIDI 2.0. There's a section in my Studio One tips book about integrating Komplete Kontrol with Studio One. It's a lengthy, tedious process. I look forward to the day when Komplete Kontrol says "hey, who are you?," Studio One says "I'm Studio One," and Komplete Kontrol configures itself.


4 hours ago, cphollis said:

I probably won't live long enough to see keyboard manufacturers embedding wireless protocols into their master controllers.  The backside of mine always looks like a retro cable jungle.  Four outputs, three external controllers, MIDI, two power cords, etc.  Maybe I could use red cables and go for a techno look.

Korg, Roland, Kawai, and CME offer some controllers and digital pianos with Bluetooth audio and MIDI. Bluetooth 5 was a nice improvement over earlier versions, but I don't know if it has the range and reliability we'd expect in professional settings. Wireless DMX like the ShowBaby similarly uses the 2400-2480 MHz band, but it has a range of 300 feet and averages 4 ms latency.



Bluetooth works for wireless MIDI, but not wireless audio (without significant latency). There are some (non-BT) wireless audio options, but there are drawbacks of either expense, sound quality, or interference (or some combination thereof). CME is working on something there, though... https://www.cme-pro.com/iwa-instant-wireless-audio-by-cme/

Maybe this is the best place for a shameless plug! Our now not-so-new new video at https://youtu.be/3ZRC3b4p4EI is a 40 minute adaptation of T. S. Eliot's "Prufrock" - check it out! And hopefully I'll have something new here this year. ;-)


I mentally dismiss the debate because it IS in its infancy. Craig referring to it as a cross-fade sounds exactly right. I think some of the mini-complaint arises because it's so new, we have yet to settle on what it means sonically. The Osmose is just beginning to sink in, and it's not unfair to say that Expressive E is the first to devise an approach that presents like an instrument rather than an oddity. It'll take a couple of generations to refine what it CAN do so people can see why they'd want to go there creatively. Poly AT is pretty amazing as it is.

 "I like that rapper with the bullet in his nose!"
 "Yeah, Bulletnose! One sneeze and the whole place goes up!"
       ~ "King of the Hill"


MIDI 2.0 has a bit of a chicken/egg problem. Most products need to be cross-platform. MacOS and iOS both have good support for MIDI 2.0. Microsoft is just releasing developer builds for hardware and software testing and product development work, so now DAWs can start adding MIDI 2.0. MIDI 2.0 is expected for users sometime this year (Microsoft has not announced a release schedule). More companies are now starting to develop their next generation of products that will include MIDI 2.0. So as pointed out by ElmerJFudd and Craig, there will be a transition starting over the next few years.

 

Wireless still has limitations today, especially for audio. The MIDI Association is developing a network specification for MIDI 1.0/2.0. The initial target is wired Ethernet. We have not started testing wireless implementations, but Wi-Fi support is possible. But low-latency wireless digital audio is a problem; some latency is required to have dependable delivery. I'd love to see the industry adopt more networking. We could reduce cabling to a single Cat5 delivering audio, MIDI, and power.


Mike Kent

- Chairman of MIDI 2.0 Working Group

- MIDI Association Executive Board

- Co-Author of USB Device Class Definition for MIDI Devices 1.0 and 2.0

 


11 hours ago, David Emm said:

I mentally dismiss the debate because it IS in its infancy. Craig referring to it as a cross-fade sounds exactly right. I think some of the mini-complaint arises because its so new, we have yet to settle on what it means sonically. The Osmose is just beginning to sink in and its not unfair to say that Expressive E is the first to devise an approach that presents like an instrument rather than an oddity. It'll take a couple of generations to refine what it CAN do so people can see why they'd want to go there creatively. Poly AT is pretty amazing as it is.

 

But the Osmose Expressive E is all MIDI 1.0, and if that can allow us to be more expressive ... then why MIDI 2.0?

 

I readily admit that the 5-pin DIN MIDI interface is way too slow for our current needs, but that's not a justification for a totally rewritten set of standards, just for a very low-latency, higher-speed connectivity mechanism.

IMV there are fundamental flaws in all current packet-switched mechanisms for transmitting data for playing music. (I worked all my career in data communications, both terrestrial and satellite, starting with 300 bps modems. Now retired.)

 

The need for higher-resolution controllers?

The ear IS highly sensitive to changes in frequency.

Experiments done in the 1970s showed that the ear/brain can detect a frequency change of 1.8 Hz from a note at 1000 Hz.

That's why Pitch Bend is 14 bit rather than 7. (>16,000 in/decrements)
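Putting numbers on that pitch argument (a quick Python check, assuming the commonly used but not universal ±2 semitone bend range):

```python
import math

# The 1.8 Hz JND at 1 kHz quoted above, expressed in cents.
jnd_cents = 1200 * math.log2(1001.8 / 1000.0)  # ~3.1 cents

# Pitch bend step size over an assumed +/-2 semitone range (400 cents total).
bend_range_cents = 400
step_7bit = bend_range_cents / 2 ** 7    # ~3.1 cents per step: right at the JND
step_14bit = bend_range_cents / 2 ** 14  # ~0.024 cents per step: far below it

print(round(jnd_cents, 2), round(step_7bit, 3), round(step_14bit, 4))
```

So a 7-bit bend lands each step right at the edge of audibility, while 14-bit steps are two orders of magnitude below it, which is exactly why pitch bend got the extra bits.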

 

The ear is nowhere near that sensitive to changes in loudness, hence 127 levels are not too bad for representing volume.

Another bit might suffice, i.e. 8 bits' 256 levels might be sufficient. 16 bits (>65,000 levels) is overkill. The ear can't hear that level of 'expressive' change.

Sure, a good pianist can play at probably thousands of different velocities, but can we hear it? Experiments say not.

A single decibel is roughly the smallest change the human ear can detect, so a range of 127 dB should be sufficient, shouldn't it?

(Dave Smith got it right the first time.)
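The loudness arithmetic is worth spelling out, because the step size depends entirely on the span those 127 values are mapped across (the mapping is up to the receiving instrument; the spans below are illustrative assumptions, not anything from a spec):

```python
# Step size in dB when 127 velocity levels are spread across a given span.
def db_per_step(span_db: float, levels: int = 127) -> float:
    """dB change per increment, assuming a linear-in-dB mapping."""
    return span_db / (levels - 1)

print(round(db_per_step(127), 3))  # ~1 dB per step over the full 127 dB span
print(round(db_per_step(40), 3))   # ~0.32 dB per step over a typical pp-to-ff span
```

If an instrument maps velocity over only a musical 40 dB or so, each of the 127 steps is already around a third of a decibel, which is part of why 7-bit velocity has held up as well as it has.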

 

Anyway, my EWI and VL70m will probably outlast my time left on the planet.

 

JG


 


1 hour ago, JohnG11 said:

But the Osmose Expressive E is all MIDI 1.0, and if that can allow us to be more expressive ... then why MIDI 2.0?

With MIDI 1.0, the only way the Osmose is able to provide that fully independent control over each note (e.g. for pitch bend) is to put each note on its own MIDI channel, which means max playable polyphony of the instrument with that control implemented is 16 (or 15 if using a master channel). MIDI 2.0 removes that limitation, by permitting more independent controls for individual notes on the same MIDI channel, so you could have Osmose-like control on an instrument that wasn't limited to 16 notes total. (Of course that doesn't mean that every MIDI 2.0 device will have such capabilities, but the point is that a MIDI 2.0 device can have such a capability.)
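That channel-rotation limit is easy to see in a toy sketch of an MPE-style allocator (class and method names are made up for illustration; channel 1 reserved as master, channels 2-16 as members, per the usual MPE layout):

```python
# Illustrative sketch: an MPE-style note-per-channel allocator. With MIDI 1.0,
# each note that needs independent pitch bend must own a whole channel, so
# polyphony with that control caps at 15 member channels.

class MpeAllocator:
    def __init__(self, member_channels=range(2, 17)):  # channels 2..16
        self.free = list(member_channels)
        self.held = {}  # note number -> channel

    def note_on(self, note: int):
        if not self.free:
            return None  # out of channels: no independent bend for this note
        ch = self.free.pop(0)
        self.held[note] = ch
        return ch

    def note_off(self, note: int):
        self.free.append(self.held.pop(note))

alloc = MpeAllocator()
channels = [alloc.note_on(60 + i) for i in range(16)]
print(channels[14])  # 16: the 15th note takes the last free channel
print(channels[15])  # None: a 16th simultaneous note can't be allocated
```

MIDI 2.0's per-note controllers remove that ceiling: every note on one channel can carry its own bend, so no allocator gymnastics are needed.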

 

The basic advantages of 2.0 are described in this article by our host...

 

https://www.sweetwater.com/insync/midi-2-0-what-actually-matters-for-musicians/

 




5 hours ago, JohnG11 said:

I readily admit that the 5-pin DIN MIDI interface is way too slow for our current needs, but that's not a justification for a totally rewritten set of standards, just for a very low-latency, higher-speed connectivity mechanism.

 

The MIDI 1.0 standard has not been rewritten, but supplemented. That's why it's backward compatible. If people want to make gear that's compatible with 1.0 in the future, there aren't any problems with doing so. 

 

5 hours ago, JohnG11 said:

The need for higher-resolution controllers?

The ear IS highly sensitive to changes in frequency.

 

Yes, but that also applies to filters. If you turn the lowpass cutoff knob on a Moog Voyager, it sounds smoother than if you do it on, say, an Arturia emulation of a Moog filter. It's one of the things that's always bothered me about MIDI: audible stair-stepping. It also happens with digital mixers, unless they do interpolation. For example, the Panasonic DA7 sounded smooth because it interpolated the mixer faders into 1,024 steps. This made a major improvement (although you could still hear "zipper noise" with low-frequency sine waves toward the bottom of the fader's travel).

 

Another example is I have yet to hear a digital synthesizer that does hard sync as smoothly as an analog one. Maybe there's one out there, but I haven't heard it.

 

5 hours ago, JohnG11 said:

A single deciBel is roughly the smallest change the human ear can detect, So a range of 127dB should be sufficient shouldn't it?

 

I know that supposedly 1 dB is the smallest change the human ear can detect, but as someone who mixes and masters, I regularly make 0.5 dB and even smaller changes. They absolutely are audible, especially with mastering - a 0.5 dB of change on a master is the equivalent of changing every track in a multitrack recording by 0.5 dB. 

 

The problem isn't a 127 dB range, it's the resolution of the steps within that range. You could have 4 bits slice a range of 127 dB, but the transition between steps would be obvious. Consider this: even with the CD's 16-bit/65,000+ step resolution, at the lower end of the dynamic range the quantization noise from a lack of resolution is sufficiently severe that dithering was devised to mask the problem.

 

As to 16 bits of resolution vs. just adding another bit, I suspect (but don't know for certain) that it's a matter of practicality. Computers are used to handling 8-, 16-, 32-, 64-, etc., bit words. I suspect if a controller was changed from, say, 8 bits to 9 bits, it would end up getting expressed in a 16-bit word anyway, with 7 of the bits being written as 0. I assume that's also how backward compatibility works with MIDI 2.0: if it gets an 8-bit word and wants to make it 16-bit to go into something that's MIDI 2.0, it just writes zeroes for the bits that aren't used. But I'm not a coder, that's just a guess.
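The widening guessed at above can be sketched in a few lines. One wrinkle: plain zero-padding never reaches full scale, which is why bit-repetition schemes exist. (The actual MIDI 2.0 translation rules are more involved than this; the code just illustrates the two basic ideas.)

```python
# Two common ways to widen a 7-bit MIDI 1.0 value into 16 bits.

def upscale_zero_pad(v7: int) -> int:
    """Shift left and zero-fill: 0..127 maps to 0..65024, never full scale."""
    return v7 << 9

def upscale_bit_repeat(v7: int) -> int:
    """Shift left, then repeat the source bits into the low end,
    so 127 maps to 65535 (full scale) and 0 maps to 0."""
    v16 = v7 << 9
    return v16 | (v16 >> 7) | (v16 >> 14)

print(upscale_zero_pad(127))    # 65024: falls short of 65535
print(upscale_bit_repeat(127))  # 65535: full scale maps to full scale
print(upscale_bit_repeat(0))    # 0
```

Either way, the backward-compatibility story is as described: a narrow value slots into a wider word without losing information.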

 

The whole goal of MIDI 2.0 was to have a spec that can evolve for the next 40 years, like MIDI 1.0 did, but without obsoleting MIDI 1.0. So, all MIDI 2.0 does is add extra capabilities. The past has shown that when extra capabilities are included, they're used eventually - like MPE, Show Control, MIDI over USB, etc. Like MIDI 1.0, you can use as much or as little of the MIDI 2.0 spec as you want/need. Think how many decades it took for hardware in general to catch up with polyphonic aftertouch. The capability was always there, it just wasn't used until the technology could accommodate it.


23 hours ago, David Emm said:

It'll take a couple of generations to refine what it CAN do so people can see why they'd want to go there creatively.

Right. We can predict some of what MIDI 2.0 will deliver. But the real result will be revealed in what innovators do with MIDI 2.0 in future generations of products.

 

There are two main areas of expansion that MIDI 2.0 introduces:

1. Bidirectional negotiations - easier connections with less manual configuration by users.

2. New protocol and data format which greatly expands resolution and the range of messages available.


Imagine buying a new synthesizer and connecting it to your DAW, which has never seen that model before. The DAW might use MIDI 2.0 to auto-generate a custom patch editor by getting the requirements from the synth itself. Imagine never having to map controllers for your next plugin: the DAW configures the keyboard to send the controllers that are needed.

 

Resolution? Great. Absolutely a huge update. But IMO high resolution is not the most interesting part of the new protocol and data format. New messages that convey more information are more exciting to me. A message that tells you the current chord. Knowing that a sequence performed by _name-of-artist-here_ is intended to be played on a piano with a certain velocity curve. Having articulation information inside a Note On (like we have velocity now). Having jitter reduction timestamps to deliver timing accuracy equivalent to sound travelling 2 cm. Being able to send 30 notes with identical timing. Per-note controllers for increased expression. Precise pitch for every note when you want that. And there's room to define a million more new messages.
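For what it's worth, that "2 cm" figure checks out as roughly 58 microseconds of air travel; a quick back-of-envelope check in Python (speed of sound assumed at 343 m/s, room temperature):

```python
# How long does sound take to cross 2 cm of air?
SPEED_OF_SOUND = 343.0  # m/s, assumed room-temperature value

delay_us = 0.02 / SPEED_OF_SOUND * 1e6  # microseconds
print(round(delay_us, 1))  # ~58.3 microseconds
```

In other words, timestamped timing at that accuracy is tighter than the difference you'd hear from leaning a couple of centimetres closer to a speaker.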

 

But just like the designers of MIDI 1.0 could only foresee a fraction of what MIDI would do in 2024, we cannot know all that is coming in 2030 and 2040. MIDI 1.0 had limitations that hindered forward progress. Perhaps MIDI 2.0 even comes 20 years late. For the continued expansion of MIDI, we needed a new environment that pushed past the MIDI 1.0 limitations. I'm excited to see what will come.

 

Mike.

 

 



 


Oh yeah, I forgot to reiterate in that post: the new MIDI 2.0 network specification is coming soon.

So far 9 MIDI Association member companies have working prototypes.

I sure hope more developers adopt it quickly.

Mike.


 


On 2/25/2024 at 12:29 AM, SynMike said:

Wireless still has limitations today, especially for audio. The MIDI Association is developing a network specification for MIDI 1.0/2.0. The initial target is wired Ethernet. We have not started testing wireless implementations, but Wi-Fi support is possible. But low-latency wireless digital audio is a problem; some latency is required to have dependable delivery. I'd love to see the industry adopt more networking. We could reduce cabling to a single Cat5 delivering audio, MIDI, and power.

 

Certainly a Cat5 Ethernet connection looks like the favourite to me for a high-speed solution, but it would be good for MIDI 1 too.


 


Before MIDI was introduced, I was having dinner with Dave Rossum of E-Mu. He was resistant to MIDI because he felt Ethernet would be a better solution (remember, this was the early 80s). However, he also realized that the cost of implementing it at that time would be a non-starter for most manufacturers, and MIDI's success would be 100% based on universal adoption.


3 minutes ago, Anderton said:

Before MIDI was introduced, I was having dinner with Dave Rossum of E-Mu. He was resistant to MIDI because he felt Ethernet would be a better solution (remember, this was the early 80s). However, he also realized that the cost of implementing it at that time would be a non-starter for most manufacturers, and MIDI's success would be 100% based on universal adoption.

 

Ethernet has come a long, long way since the early eighties. If I remember correctly, I was working with Ungermann-Bass equipment back then, and it was all horribly expensive and a nightmare to implement. Cat 5, for wired connections, is extremely cost effective, and inserting a Gigabit Ethernet chip in a device is probably cheaper than an RS232 chip was back then.


 


Bidirectional negotiations were happening 20 years ago, just connect both MIDI In and Out.

'Taint rocket science. I know; I worked for Inmarsat for a while back in the early noughties, writing courses for them and delivering them worldwide.

 

Re the problem of using a different channel for each note on a MIDI port:

Multiple ports were used long ago. If you wanted a harpsichord using a non-ET temperament, you used one port for that and a separate port for other instruments.

The Yamaha MU128 as long ago as 1999 had four MIDI ports.

 

Craig, re backwards compatibility: can MIDI 2 use the DIN interface? I thought it couldn't.

So my old Cheetah MS6 couldn't be used with MIDI 2 gear could it? What about my TX81z, or my AN1x, or my Kenton Plugstation with its 4 PLG cards?


 


Craig, Re hearing stepping when you move a fader.

What you're describing is faulty electronic design, not a fault with MIDI 1.

If you want a smooth transition between increments when you move e.g. a fader, you design the equipment to provide it.

Just as when you play an acoustic instrument and don't want each note tongued you play legato or use portamento for a slide.

The electronics have to be designed to do a smooth transition rather than a click. No need to design a complicated new protocol to carry smaller 'clicks'.

 

SynMike, 30 notes with identical timing? To what purpose?

The microprocessors in any MIDI instrument or PC will play them back sequentially. They are serial devices.

No electronic instrument on this planet can play back 30 truly simultaneous notes; it can't even do 2.

(Modification: it can play back 30 notes starting them one at a time, albeit very quickly using a modern processor.)

They're played in the order in which they appear in the file or over the interface.

The processor in, say, a keyboard, scans the keys and plays them back one after the other in the order in which it 'sees' them.

They may appear to be simultaneous, but they're not.


 


On 2/24/2024 at 8:42 AM, Notes_Norton said:

Perhaps we didn't need 2.0? 

That was said, half in jest. I should have put a winking emoji after it like this — ;)

Actually, I'd love the higher resolution. 
 

Will it give us greater dynamics? When gigging, sometimes I feel like I would like louder louds and softer softs.

 

Wireless, I'm not really needing. I play scores of different venues.  I remember using a wireless mic while gigging at a yacht club. Unknown to me, the house wireless mic was using the same channel, until partly through the dinner set, someone used the house mic in the room across the hall.

I have about 15 hardware 5 pin DIN synths, and I'm glad it'll be backwards compatible.

My MIDI needs are simple:

  1. For gigging, I need sound modules for my Wind MIDI controller. The Yamaha VL70m with physical modeling synthesis is my favorite. My very old TX81z gets to speak up almost every gig at least for a few songs.
  2. Mrs. Notes needs a synth for her Buchla Thunder Tactile MIDI controller.
  3. I need to be able to make the backing tracks for my duo. I tried buying some, but I spent so much time adapting and/or fixing them, it's quicker to do them myself.

I don't see why I would need MIDI 2.0, but I'm certainly not against it.

The one thing you have to say about good old-fashioned MIDI is that they did a pretty good job of it, for it to be so universally adopted, last for so many decades, and still be a major, universally used tool.

As long as the progress doesn't make what I'm doing obsolete (therefore expensive), I'm all for it.

 

Insights and incites by Notes ♫

 

Bob "Notes" Norton

Owner, Norton Music http://www.nortonmusic.com

Style and Fake disks for Band-in-a-Box

The Sophisticats http://www.s-cats.com >^. .^< >^. .^<


Greater dynamics?

That again depends not upon MIDI (whether 1 or 2) but on the device interpreting the MIDI command it receives.

The MIDI note receiving device that then creates the sound interprets it in the way it's been designed to.

e.g. Note On, Velocity 64 = 60 dB; Note On, Velocity 100 = 90 dB; or whatever.

MIDI 2 will give, if used, smaller steps between each of the current MIDI 1 'steps'.

 

 

My understanding, having read the MIDI 2 specs four years ago, is that none of our old gear will have a clue what is happening with MIDI 2 commands.

I also got the idea (could be wrong) that MIDI 2 devices won't be using the DIN interface because it's not bi-directional.

So they won't physically connect to the TX81z or the VL70m, both of which use the 5-pin DIN only.

But I wait to be corrected on this.

 

Theoretically my ancient Yammy MU1000 and my Roland SC8850 could connect, but they'd need some sort of firmware upgrade to understand the MIDI 2 commands.

Now I wonder whether Yamaha and Roland are going to dig out the code and reprogramme 20-plus-year-old equipment? Hmmm!

 

Hope people are enjoying this debate, I'm deliberately playing Devil's Advocate. (But I do remain to be convinced.)

 

JG


 


2 hours ago, JohnG11 said:

My understanding, having read the MIDI 2 specs four years ago, is that none of our old gear will have a clue what is happening with MIDI 2 commands.

 

The way it works is that MIDI 2.0 gear queries gear that's attached and basically asks "Do you speak MIDI 2.0?" If yes, they converse with MIDI 2.0. If the MIDI 2.0 gear doesn't get an answer, it speaks MIDI 1.0.

 

I explain it like this. I speak English. If I learn Italian, it doesn't mean I've forgotten how to speak English. I can speak Italian to people who speak Italian, and English to people who speak English. All I need to do is ask them first whether they speak Italian or English.
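That language analogy maps onto a tiny sketch; in reality the question is asked via MIDI-CI discovery messages, and the function below is just a made-up illustration of the fallback logic:

```python
# Toy sketch of MIDI 2.0 negotiation: query the attached device and
# fall back to MIDI 1.0 if it doesn't answer. (Hypothetical function;
# the real mechanism is the MIDI-CI discovery exchange.)

def negotiate(peer_replies_midi2: bool) -> str:
    """Return the protocol the link will use after discovery."""
    if peer_replies_midi2:
        return "MIDI 2.0"
    return "MIDI 1.0"  # no reply to the discovery query: assume legacy gear

print(negotiate(True))   # MIDI 2.0
print(negotiate(False))  # MIDI 1.0
```

The key design point is that silence is a valid answer: old gear never has to know the question was asked.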


20 hours ago, JohnG11 said:

 

Ethernet has come a long, long way since the early eighties. 

20 hours ago, JohnG11 said:

Bidirectional negotiations were happening 20 years ago, just connect both MIDI In and Out.

...

Re the problem of using a different channel for each note on a MIDI port.

Multiple ports were used long ago...The Yamaha MU128 as long ago as 1999 had four MIDI ports.

 

Yes, there have been ways to do some MIDI 2.0 things all along, but without being part of the standardized spec, there may have been no assurance that different manufacturers would implement them the same way; methods may not have been designed to adapt well to future scenarios without re-inventing the wheel; there may have been additional costs; or the implementations may have been more limited. So yeah, MIDI over Ethernet itself isn't a first, but being able to pretty easily adapt the transport design to any future protocol probably is. In and Out provided a kind of bidirectionality, but there was no protocol that would permit two boards connected via In+Out to query each other. These days, we're lucky to have a single 5-pin port on a board; you can pretty much forget about having four, even if you were fine with a massive octopus of cabling.

 

20 hours ago, JohnG11 said:

can MIDI 2 use the DIN interface? I thought it couldn't.

So my old Cheetah MS6 couldn't be used with MIDI 2 gear could it? What about my TX81z, or my AN1x, or my Kenton Plugstation with its 4 PLG cards?

 

Craig already answered the second part of that... there's no problem mixing MIDI 1 and MIDI 2 gear. But to the first part of that, there's also no reason MIDI 2 can't itself use 5-pin DIN connections.

 

20 hours ago, JohnG11 said:

Craig, Re hearing stepping when you move a fader.

What you're describing is faulty electronic design, not a fault with MIDI 1.

If you want a smooth transition between increments when you move e.g. a fader, you design the equipment to provide it.

 

The stepping was a limitation of MIDI 1. For example, in a standard MIDI 1 implementation, there are only 128 values for filter cutoff frequency. No matter the quality of the fader or pot design, it can only alter the cutoff frequency in 128 increments total from fully closed to fully open, and that's not enough for entirely inaudible transitions from one to the next; you will hear stepping. The 2.0 protocol provides the finer steps.
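The size of those 128 increments is easy to quantify. Assuming an illustrative exponential cutoff mapping from 20 Hz to 20 kHz (real synths vary), each controller step jumps the cutoff by nearly a semitone:

```python
import math

# Step size when 128 controller values sweep a filter cutoff exponentially
# from 20 Hz to 20 kHz (illustrative mapping; real synths differ).
lo, hi, steps = 20.0, 20000.0, 128

ratio_per_step = (hi / lo) ** (1 / (steps - 1))
cents_per_step = 1200 * math.log2(ratio_per_step)
print(round(cents_per_step))  # ~94 cents: nearly a semitone jump per step
```

Against the roughly 3-cent pitch sensitivity discussed earlier in the thread, a ~94-cent jump per step is why the sweep sounds like stairs, not a slope.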

 

20 hours ago, JohnG11 said:

SynMike, 30 notes with identical timing? To what purpose?

The microprocessors in any MIDI instrument or PC will play them back sequentially. They are serial devices.

No electronic instrument on this planet can play back 30 simultaneous notes, let alone 2.

(Modification: it can play back 30 notes starting them one at a time, albeit very quickly using a modern processor.)

 

Yes, the modification "very quickly" makes the point. 30 notes played 1 millisecond apart (standard MIDI 1 spec) will be audibly different from 30 notes played "almost" simultaneously (as serially close to each other as the processor is capable of, which to human ears, should essentially be perceived as simultaneous).
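The arithmetic behind that "1 millisecond apart" figure: on 5-pin DIN, each byte is 10 bits on the wire (start + 8 data + stop) at 31,250 baud, and a Note On is 3 bytes:

```python
# MIDI 1.0 DIN serial timing for a 3-byte Note On message.
BAUD = 31250          # bits per second on a 5-pin DIN connection
BITS_PER_BYTE = 10    # start bit + 8 data bits + stop bit
NOTE_ON_BYTES = 3     # status + note number + velocity

ms_per_note_on = NOTE_ON_BYTES * BITS_PER_BYTE / BAUD * 1000
spread_30_notes = 29 * ms_per_note_on  # first-to-last gap across 30 notes

print(round(ms_per_note_on, 2))   # 0.96 ms per Note On
print(round(spread_30_notes, 1))  # ~27.8 ms from first note to last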



20 hours ago, JohnG11 said:

That again depends not upon MIDI (whether 1 or 2) but on the device interpreting the MIDI command it receives.

The MIDI note receiving device that then creates the sound interprets it in the way it's been designed to.

e.g. Note On, Velocity 64 = 60 dB; Note On, Velocity 100 = 90 dB; or whatever.

MIDI 2 will give, if used, smaller steps between each of the current MIDI 1 'steps'.

That doesn't grant my only wish. 

But then my current MIDI devices won't be affected by MIDI 2 anyway. At least they will work as usual, and that's a very good thing. Other than louder louds and softer softs I have no other item in my wish-list. 

I'm sure others will love the additional capabilities of MIDI 2, and that's a very good thing, too.

But for me, I can't see an advantage (yet). 

As long as I can play my sound modules using Wind, Tactile, and Keyboard controllers and record MIDI into a sequencer/DAW to make my backing tracks, I'm happy.

Now, once it gets implemented, if I see an advantage for me, personally, I'll jump on it.

Notes ♫
 



2 hours ago, AnotherScott said:

Yes, the modification "very quickly" makes the point. 30 notes played 1 millisecond apart (standard MIDI 1 spec) will be audibly different from 30 notes played "almost" simultaneously (as serially close to each other as the processor is capable of, which to human ears, should essentially be perceived as simultaneous).

Would that make it too perfect? (Curious question, not an argument.)

I've played in symphonic and marching bands while in school, 20 piece swing jazz bands, 7 piece rock bands, and a duo. As hard as we tried, we didn't all articulate at exactly the same time.

Real people don't always hit the notes exactly together. Guitarists don't strum all the strings at the same time. Some instruments actually have to anticipate the articulation point to sound right.

I don't know if it's possible for musicians playing 30 notes at the same time to play them exactly at the same time.

<semi-related>

Sometimes an improvement gives you something but takes away something else.

I hear it on old recordings from when guitarists used their ears to tune their guitars: tune the 6th string, then use the 5th/4th-fret method to tune the others. The high and low E strings were never quite in tune, but they blended better. Tuning with an electronic tuner makes them sound different. For many songs, I prefer the semi-just-intonation of the old guitar tunings.

I wonder if hitting 30 notes at the same time will end up making a difference?

Just thinking out loud here...

 



1 hour ago, Notes_Norton said:

Would that make it too perfect? (Curious question, not an argument.)

Yes, I almost added something to that effect. 😉 Certainly that level of (near) perfect synchronicity would make it less "realistic" -- then the question becomes whether that is always the goal. A "beyond-human" level of synchronization is not necessarily always undesirable.

 

But yes, I could certainly imagine the addition of a "humanizing" parameter that would shift the timings of the parts for playback. If the goal is to create an ensemble instrumentation sound that mimics the playing of humans, you'd want that slop. And it could be better to program in the parameters of the desired sloppiness than to just have it that the 30th "simultaneous" note will be played 30 ms after the first.

 

This is similar to how early drum machines were "too perfect" and then they programmed humanization into them. But that doesn't stop some people from wanting to use the mechanical vs. human sounding "player."
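A "humanizing" parameter like the one described above is simple to sketch. This is just a toy version (the function name and jitter range are made up for illustration) - real DAWs offer far richer controls such as swing, velocity variation, and shaped rather than uniform randomness:

```python
import random

def humanize(note_times_ms, max_jitter_ms=8.0, seed=None):
    """Shift each note time by a small random offset to mimic human slop.

    Toy sketch: uniform jitter within +/- max_jitter_ms. A seed makes
    the result reproducible, which a DAW would want for repeat playback.
    """
    rng = random.Random(seed)
    return [t + rng.uniform(-max_jitter_ms, max_jitter_ms)
            for t in note_times_ms]

# A machine-perfect 4-note chord, all at t=0, becomes slightly staggered:
print(humanize([0.0, 0.0, 0.0, 0.0], seed=42))
```

The point is that the sloppiness becomes a dialed-in choice rather than an accident of serial transmission.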

Maybe this is the best place for a shameless plug! Our now not-so-new new video at https://youtu.be/3ZRC3b4p4EI is a 40 minute adaptation of T. S. Eliot's "Prufrock" - check it out! And hopefully I'll have something new here this year. ;-)


1 hour ago, Notes_Norton said:

I hear old recordings when the guitarists used their ear to tune their guitars. Tune the 6th string and use the 5th/4th fret to tune the others. The high and low E strings were never quite in tune, but they blended better. Tuning them with a tuner makes them sound different. For many songs, I prefer the semi-just-intonation of the old guitar tunings.

 

Basically, I think what you're describing is "stretch tuning" for guitar instead of piano. The issue is that the 12th root of 2 is an irrational number, so with equal-tempered music there are always going to be off-pitch errors. Perhaps one reason you don't hear sixth chords too much is that they're the most out-of-tune-sounding (at least to my ears). But we have choices as to how to distribute the errors among the notes of a scale.
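The errors are easy to quantify in cents (1200 cents per octave). Comparing a few just-intonation ratios against their equal-tempered equivalents shows the fifth is nearly dead-on while the major third and major sixth are well over a tenth of a semitone off - which fits the observation about sixth chords:

```python
import math

def cents(ratio):
    """Size of a frequency ratio in cents (1200 cents = one octave)."""
    return 1200 * math.log2(ratio)

# Just-intonation ratios and their equal-tempered semitone counts
intervals = {"major 3rd": (5/4, 4), "perfect 5th": (3/2, 7), "major 6th": (5/3, 9)}

errors = {}
for name, (ratio, semis) in intervals.items():
    errors[name] = semis * 100 - cents(ratio)  # ET minus just, in cents
    print(f"{name}: equal temperament is {errors[name]:+.1f} cents from just")
```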

 

On a related subject, one of the aspects of machine learning that really interests me would be creating a keyboard where you could play seamlessly in just intonation. It would know the key, know the intervals, and micro-adjust the tuning on the fly. I don't think just intonation is a "fringe" thing; it can sound very cool. Hearing just-intoned intervals and chords is like the difference between a laser and a flashlight - the focus is so much tighter. It's just not practical in a world of equal-tempered instruments.
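The non-ML core of such a keyboard is small once the key is known: look up the just ratio for each scale degree and emit a micro-adjustment (MIDI 2.0's per-note pitch would be a natural fit). This is a hypothetical sketch - the function, ratio table, and "leave chromatic notes alone" policy are all made up for illustration; the hard part the post describes is detecting the key and handling modulations:

```python
import math

# Hypothetical "adaptive just intonation" sketch: given a known tonic,
# map each major-scale degree to a just ratio and report the offset
# (in cents) from equal temperament that a smart keyboard would apply.
JUST_RATIOS = {0: 1/1, 2: 9/8, 4: 5/4, 5: 4/3, 7: 3/2, 9: 5/3, 11: 15/8}

def retune_cents(midi_note, tonic=60):
    degree = (midi_note - tonic) % 12
    if degree not in JUST_RATIOS:
        return 0.0  # leave chromatic notes untouched in this toy version
    just_cents = 1200 * math.log2(JUST_RATIOS[degree])
    return just_cents - degree * 100  # offset from equal temperament

# C major triad over middle C: root, third, fifth
for n in (60, 64, 67):
    print(n, f"{retune_cents(n):+.1f} cents")
```

Flatten the third ~14 cents, sharpen the fifth ~2, and the triad "locks" the way the laser-vs-flashlight comparison suggests.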


13 minutes ago, Anderton said:

I think the goal is to reproduce exactly what you played, when you played it. So if your timing is a little humanized, then it would be preserved as you played it, without more jitter being added. 

Yes, I think that's typically the ideal. The issue of a possibility of "too much perfection," then, would come up when you have tracks that weren't played by human hand to begin with; or where a single human gesture is responsible for multiple sounds (e.g. you're playing a single-note line that is, itself, triggering multiple sounds on different channels).


