
Has Universal Audio lost it? (Luna)



Well, today I updated my iMac to Big Sur and in the process had to uninstall and reinstall my Apollo Console software. During the process UA was pushing the free Luna DAW. Rather than just clicking OK, I decided to research a bit. Wow, I will be passing on the "free" offering. Why? ...

 

1. You have to have your UA hardware connected to run it. Okay. I can accept that. I have to connect that hardware to use the many UA plugins I have bought over the years. But...

2. You have to have iLok. What? They are already using the UA hardware as the account identifier; it is what protects the plugins. Why in the world do they also require iLok?

3. The much-talked-about (by UA) Neve summing system is $300. It is bad enough that there is a bit of double-dipping on plugins, but having to pay $300 for the Neve system when I already purchased a bunch of the Neve plugins is over the top.

 

So, is anyone using Luna? How do you find it? Am I being too grouchy about this? I am a UA fan, owning a quad card, two external Apollo devices, and a BUNCH of plugins acquired over the years. But this new path worries me.

This post edited for speling.

My Sweetwater Gear Exchange Page



I am hearing rumblings that UA may be transitioning away from the hardware business model in the mid-term. From what I've heard, the chip supply constraints are killing their primary business right now, and needing what is effectively a hardware dongle to run a software plugin is just not a justifiable model for the 2020s.

 

So they might be accelerating things. Incorporating iLok support into their software might be a sign of things to come.

"The Angels of Libra are in the European vanguard of the [retro soul] movement" (Bill Buckley, Soul and Jazz and Funk)

The Drawbars | off jazz organ trio


1. Yeah, you gotta use their hardware, presumably because you can't take advantage of what Luna offers without it.

2. I suppose the iLok is to reduce the chances of Luna owners "sharing" plug-ins.

3. I don't know what they put into designing the Neve summing system...it may not have been as simple as just combining existing Neve plug-ins.

 

I really haven't heard a lot of reactions about Luna...not sure how it's faring in the Real World. But also, it's so new. Look at the difference between, say, Studio One 1.0 and Studio One 5.0. So, we'll see where this goes...


They already have a system for preventing sharing of plug-ins: the hardware identification of their accelerators. My plug-ins will not run if I don't have one of those accelerators connected to the computer. That is why the iLok thing is confusing. Luna is supposedly accelerated by that hardware, so why the duplication? Coming late to the game, they are going to have to give people a reason to move to yet another DAW.

 

What I really worry about is this being a sign that they have reached a saturation point with most of their customers. I, for one, have not bought a plug-in in two years. I don't need anything else. The new releases are not any better than previous products, and they are going to have to find a source of income. I worry that introducing a DAW is their method of finding a way to sell the plug-ins all over again in a new system, eventually leading to abandonment of the current system.

This post edited for speling.

My Sweetwater Gear Exchange Page


I try to remain positive. I hadn't even considered any "nefarious" rationale for Luna, though their collaboration with Apple made me wonder why they wanted to jump into the DAW side of things. And by "collaboration" I merely mean that they seem to work pretty closely with Apple to keep their stuff current. It didn't take them terribly long to update their software for M1 and Big Sur, separately.

 

Might I suggest you check out UA's forums for reactions to Luna? I haven't gone deep over there, but in passing I've seen comments covering the gamut from "it looks good" to "I'll be sticking with my current DAW for now." I don't know what the proportions are at all.

 

I hope analogika's rumblings are wrong. I think UA's hardware is really cool, and we need those interfaces to get sound into and better sound out of our computers. I'm listening to an online concert right now through my studio monitors via my Arrow (Apollo Solo).

"I'm so crazy, I don't know this is impossible! Hoo hoo!" - Daffy Duck

 

"The good news is that once you start piano you never have to worry about getting laid again. More time to practice!" - MOI


What I really worry about is this being a sign that they have reached a saturation point with most of their customers. I, for one, have not bought a plug-in in two years. I don't need anything else.

 

Well, then I guess you have nothing to worry about :)

 

Seriously, I was thinking about this today as I looked over a publication with product reviews. I really don't feel like I need anything. Of course there are things I want, but will they help me write a better chorus? No. I do want a guitar rack so I don't need to take guitars in and out of cases when I want to use them, and I wouldn't mind a few more solid-state external drives. But realistically, I can make any sound I want to realize any musical ideas I have.

 

When you think of all the great music that was made by one musician with a piano or guitar, you realize what's really important.

 

I've been, well, shocked at how well my eBook of Studio One Tips is doing at Sweetwater. Frankly, I think a lot of average Studio One owners would rather buy an eBook that helps them use what they already have than add more options to what they already have. I think perhaps one reason why the PreSonus Sphere thing has been successful is that they also offer exclusive videos, master classes, and workspaces. Helix keeps updating, and that's why I'm doing a Helix tips book.

 

UA has a lot of pro-level users. If they really want to find a new source of income, they might be better served selling courses on how to USE their plug-ins to create successful mixes.


...

 

... If they really want to find a new source of income, they might be better served selling courses on how to USE their plug-ins to create successful mixes.

 

I think you are absolutely correct. How to use compressors. How to get the most out of a specific compressor. When to use channel EQs. Listening for anomalies and how to correct them. There is so much they could cover, and they are a respected source of information that could draw a customer base into training. But their website is only adding to the confusion. When I go to the plug-ins page, there are Luna-only plug-ins mixed in with the regular UA plug-ins. You start to wonder why you would ever buy a Luna-only plug-in. They have a nice-looking Minimoog plug-in now that is Luna-only. That knocks me out. I'll never buy something that I cannot use in Ableton Live, and on both platforms.

This post edited for speling.

My Sweetwater Gear Exchange Page


I really like the UA stuff but have become fairly removed from it at this point. I am still clinging to an old 2008 17" MBP because I still have the UA ExpressCard thingie and plug-ins, but I haven't used any of that in many years and can't even remember the last time I logged into their website. When I wanted a new interface for our live act I strongly considered a UA Apollo unit, but the lack of MIDI ultimately was a deal-breaker for that purpose.

 

Just yesterday I received my new MBP with Logic pre-installed and I'm looking forward to trying that again. I had Logic Pro 9 on the aforementioned computer and then the OS update to 10.13 killed it. I was fairly torqued off about that for a long time! I consider Digital Performer my main DAW, primarily because it's so well designed for live use, but I also enjoyed using Logic. For obvious reasons I've never worked with Luna, but it would need robust MIDI functionality for me to even be interested.


For obvious reasons I've never worked with Luna but it would need robust MIDI functionality for me to even be interested.

 

It looks like there's basic MIDI editing functionality, but it doesn't appear to have features like note articulations, MPE editing, a chord track, notation, etc.


They're suffering from mission creep. They probably have some Marketing Guy with a piece of paper that says he should be advising the company, telling them they're not profiting as much as they *could*, instead of worrying about solidifying their base product(s).

 

I would think they see the same writing on the wall that Avid sees. We'll look back at this era of DAWs as the time when they were still getting UI ergonomics and processes together. This is the DAW equivalent of the age when consoles had giant knobs for faders, geometrically angled-up panels, giant swaths of blank surface area, and switches for panning. Things you'd never see even in a budget mixer these days.

 

People seem bent on making more and more arcane shortcuts to try to "make music" faster and faster, but really what is going to happen over the next 5 years is a shakeout of basic UI integration. Performance-wise, I'm sure we're all at the threshold of not being hamstrung by processing power anymore, and with great free alternatives to just about any possible plugin available, we're at diminishing returns. The DAW that is most flexible in process is going to win.

 

I'm going to say this again: the final frontier for companies like UA is going to be machine learning and a.i. I only know of one plugin that truly uses ML/a.i. in a forward-looking manner, but I see many companies touting "a.i." that I'm pretty sure are not really exploiting modern advances in it, much less using it to the extent it's going to be used in the end. Eventually there will be a new generation of plugins that will make everything we're using now seem "old".

 

It will also change the way we make music forever.

 

The structural hierarchy of the way these companies are run will probably prevent them from taking advantage of things like what I'm talking about. There will be new little companies coming out of nowhere with software that people have *no awareness* can exist right now, software whose description can't even be understood by most people today. When these new-generation plugins happen, in short order "somebody" will integrate the new processes that arise from them into a DAW, and it will make what we see now look like a lot of flailing in a stagnant pond. A few companies could sidestep this by working on a DAW that integrates those processes from the ground up; they'd be able to offer a product that gets to an end result faster and more consistently than any other DAW. But they'd have to commit to it, and dump a lot of resources into "prototyping".

 

I'd be on the lookout for a plugin-equivalent of the paradigm shift Line 6 pulled off in the '90s. Apple might/could do it. Microsoft. Trying to make another hardware-dongled DAW probably isn't the future any more than the covert oil investor hoping he can sell hydrogen to put in cars one day.

Guitar Lessons in Augusta Georgia: www.chipmcdonald.com

Eccentric blog: https://chipmcdonaldblog.blogspot.com/

 

/ "big ass windbag" - Bruce Swedien


 

UA has a lot of pro-level users. If they really want to find a new source of income, they might be better served selling courses on how to USE their plug-ins to create successful mixes.

 

What they should be doing is using their access to pro-level users to help train ML processes to automate tailoring "guitar, bass, drums, vocals" channels. The future will be a.i. processes automatically creating successful mixes.
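
To put a toy shape on that idea: one naive reading of "train on pro users' channels" is just averaging the channel-strip settings pros land on for each instrument class and serving the averages as starting points. A minimal sketch, with the labels, parameter names, and numbers all invented for illustration:

```python
import numpy as np

def learn_presets(sessions):
    """sessions: (label, settings) pairs harvested from pro mixes."""
    by_label = {}
    for label, settings in sessions:
        by_label.setdefault(label, []).append(settings)
    # "Training" here is just per-class averaging of each parameter.
    return {
        label: {k: float(np.mean([s[k] for s in group])) for k in group[0]}
        for label, group in by_label.items()
    }

# Hypothetical corpus; a real one would hold thousands of labeled channels.
corpus = [
    ("bass",  {"hpf_hz": 30.0,  "comp_ratio": 4.0}),
    ("bass",  {"hpf_hz": 40.0,  "comp_ratio": 6.0}),
    ("vocal", {"hpf_hz": 100.0, "comp_ratio": 3.0}),
]
presets = learn_presets(corpus)
# presets["bass"] -> {"hpf_hz": 35.0, "comp_ratio": 5.0}
```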

 

There will be a year when the headline topic will be "a.i. mixing: are we there yet?", the next year it will be "which a.i. DAW mix is the best?", and the year after that it will be over. It would be nice if some of the classic "hero" engineers could work with a company to train ML models. It would go faster, and have a more instantly artistically-valid result, I think.

Guitar Lessons in Augusta Georgia: www.chipmcdonald.com

Eccentric blog: https://chipmcdonaldblog.blogspot.com/

 

/ "big ass windbag" - Bruce Swedien


They're suffering from mission creep. ... I'd be on the lookout for a plugin-equivalent of the paradigm shift Line 6 pulled off in the '90s. Apple might/could do it. Microsoft. Trying to make another hardware-dongled DAW probably isn't the future any more than the covert oil investor hoping he can sell hydrogen to put in cars one day.

 

Great post and spot-on.

You left out Adobe, but they are already integrating AI into Photoshop; even the latest Elements version has it, and it is seeping into Premiere.

They had to; somebody would eat their lunch if they didn't.

They are not that far into audio (yet) but it could happen.

 

Some of the outliers may beat them to it. Fine with me; version 11.5 of my chosen DAW - Waveform - is at 11.5.19 now and still has severe latency problems with its new audio engine.

I went back to version 10 for now until they suss that out, but it should never have happened in the first place.

It is a good interface, but there is always room for improvement, and for ideas I will never have, implemented by people who can out-think me in their sleep.

It took a chunk of my life to get here and I am still not sure where "here" is.

I'm going to say this again: the final frontier for companies like UA is going to be machine learning and a.i. ... It will also change the way we make music forever.

 

This is something I keep emphasizing in my future-oriented keynotes and sessions. One example I give is comping, where the machine learns what you think is a good take and assembles complete parts. For example, I've noticed I never keep a phrase where I lose pitch or level at the end because I've run out of breath. There are certain phrasings I prefer as well. A machine could pick up on these over time. It could also pick "good" phonemes to replace "botched" ones, like Adobe's VoCo proof-of-concept program.
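
To make that concrete, here's a minimal sketch of one such preference signal - end-of-phrase level sag - feeding a tiny keep/discard model. The feature and the model are illustrations, not anything any DAW actually ships:

```python
import numpy as np

def end_of_phrase_sag(take: np.ndarray, tail_fraction: float = 0.15) -> float:
    """Ratio of RMS level in the last part of a take to the whole take.
    Values well below 1.0 suggest the singer ran out of breath."""
    rms = lambda x: float(np.sqrt(np.mean(x ** 2) + 1e-12))
    tail = take[int(len(take) * (1 - tail_fraction)):]
    return rms(tail) / rms(take)

def fit_keep_model(X: np.ndarray, kept: np.ndarray, lr=0.5, steps=1000):
    """Tiny logistic regression via gradient descent: features -> P(keep).
    A real system would add pitch drift, timing, and timbre features."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = p - kept                      # gradient of the log-loss
        w -= lr * X.T @ grad / len(kept)
        b -= lr * float(grad.mean())
    return w, b

# Stand-in takes: one steady, one that fades out at the end of the phrase.
steady = np.random.randn(44100)
fading = np.random.randn(44100) * np.linspace(1.0, 0.2, 44100)
X = np.array([[end_of_phrase_sag(t)] for t in (steady, fading)])
kept = np.array([1.0, 0.0])                  # the producer's past decisions
w, b = fit_keep_model(X, kept)               # model now prefers steady takes
```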

 

What they should be doing is using their access to pro-level users to help train ML processes to automate tailoring "guitar, bass, drums, vocals" channels. The future will be a.i. processes automatically creating successful mixes.

 

I'm not sold on that idea - there's already too much "look and feel" of music. If I see one more guy on YouTube selling MIDI sequences so "you can write a hit song in TWO MINUTES!," I'm going to scream. I don't want to sound like Chris Lord-Alge or whatever; I want to sound like me, and find things that are unique to how I make music. If the AI wants to analyze my preferences and make what I do faster and more efficient, that's fine. But I don't want "Famous Engineer XYZ's Guitar Sound" superimposed on what I'm playing.


 

You left out Adobe, but they are already integrating AI into Photoshop...

 

Yeah - but AFAIK they're not doing anything like that with Cool Edit... wait... I forget what CEP turned into.

 

But the same process. Most have no idea what is going on in a.i. research - what GPT-3, TensorFlow, PyTorch, et al. are.

 

They are not that far into audio (yet) but it could happen.

 

 

Yeah, they could possibly do it on the code bones of CEP I suppose.

 

Some of the outliers may beat them to it.

 

The ML/a.i. community isn't going to wait around. There is a true ML guitar amp that is a grass-roots GitHub project; it's rough around the edges, but usable.
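
For anyone wondering what an "ML guitar amp" actually does: the usual recipe in those projects is to train a small network on paired clean-DI/amp recordings until it learns the amp's nonlinearity. A toy PyTorch version, with the architecture and sizes guessed for illustration (this is not that project's actual code):

```python
import torch
import torch.nn as nn

class TinyAmpModel(nn.Module):
    """Map a clean DI signal to the miked-amp signal, sample by sample."""
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):            # x: (batch, samples, 1)
        h, _ = self.rnn(x)
        return self.out(h)           # predicted amp output per sample

model = TinyAmpModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# In a real project these are aligned recordings of the same performance;
# here the "amp" is just a soft-clipping stand-in so the example runs.
di = torch.randn(8, 2048, 1)
amp = torch.tanh(3.0 * di)

for _ in range(200):
    loss = nn.functional.mse_loss(model(di), amp)
    opt.zero_grad()
    loss.backward()
    opt.step()
```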

 

Or actually, Nvidia could develop tools like what I'm talking about. A video game development engine might be the first place where what I'm talking about appears.

Guitar Lessons in Augusta Georgia: www.chipmcdonald.com

Eccentric blog: https://chipmcdonaldblog.blogspot.com/

 

/ "big ass windbag" - Bruce Swedien


There are certain phrasings I prefer as well. A machine could pick up on these over time. It could also pick "good" phonemes to replace "botched" ones, like Adobe's VoCo proof-of-concept program.

 

What can eventually happen is that you make a model of your voice in idealized conditions, and that gets "weighted" against adversarially-trained models of idealized singers. You'll have a fader that blends from your voice to the model. Which is where the training resources come in: the effort put into a program like EmVoice, applied to an adversarial network, could create a system whereby the input is reduced to little more than text (like Adobe's VoCo demo), timing, rhythmic inflection, and melody. Probably with less effort than was put into EmVoice.
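
Mathematically the "fader" part is the simple bit, whatever the models behind it turn out to be: interpolate between two voice embeddings that condition a synthesis network. Everything named in this sketch is hypothetical:

```python
import numpy as np

def blend_voice(my_embedding: np.ndarray,
                ideal_embedding: np.ndarray,
                fader: float) -> np.ndarray:
    """fader = 0.0 -> entirely you; fader = 1.0 -> the idealized model."""
    fader = float(np.clip(fader, 0.0, 1.0))
    return (1.0 - fader) * my_embedding + fader * ideal_embedding

def synthesize(embedding, text, timing, melody):
    """Stand-in for the synthesis network that would render the vocal."""
    raise NotImplementedError("hypothetical model, not a real API")

me = np.random.randn(256)      # pretend: embedding learned from my recordings
ideal = np.random.randn(256)   # pretend: the "Generic Great Singer" model
halfway = blend_voice(me, ideal, 0.5)
```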

 

It will allow people to make any singer sound like any other singer at some point, with practically no effort. The only problem right now is having the will to go to the trouble, and having both the right training data set AND the knowledge of how to approach it.

 

 

What will also happen at first is that some company will come out with an adversarial model of a Generic Great Singer, and it will be one step better than the EmVoice approach - but it will mean endless variations of something that sounds like Celine Dion, I would imagine. Eventually it will be able to train on any morsel of a sample, and we'll have all singers available at our fingertips...

 

But first someone needs to get started with making an easy user interface, and doing a musically useful model - instead of Obama and David Attenborough.

 

I'm not sold on that idea - there's already too much "look and feel" of music. If I see one more guy on YouTube selling MIDI sequences so "you can write a hit song in TWO MINUTES!" ...

 

Yeah, that's horrible. I know people who call themselves "musicians" because they cobble things together from phone apps where they press a button to select a pre-canned melody or even a song fragment.

 

This is going to be much worse. Anything that goes in can be made to sound like anything that comes out. Literally. Stylistically. Right now, a.i. curation should probably be considered a form of "making music", IMO. It's very coarse, but what I'm waiting for is finer controls: I want to pat my hands on my desk and have 20 drum performances come out that sound like John Bonham...

 

Which sounds far fetched, but it's not.

 

" I'm going to scream. I don't want to sound like Chris Lord-Alge or whatever, I want to sound like me,

 

I do too, but 10 years from now "music" will be diluted by a.i.-created music. It's going to change what being a "musician" is.

 

 

... and find things that are unique to how I make music. If the AI wants to analyze my preferences and make what I do faster and more efficient, that's fine. But I don't want "Famous Engineer XYZ's Guitar Sound" superimposed on what I'm playing.

 

 

I want what gets me to the end easiest. What I am describing is no different than any other signal-processing decision, although it's not signal processing. If I can't get a canned drum sound close enough to what I hear in my head - which might be Hal Blaine, or Art Blakey's ride from a certain record - I would love to have a slider that lets me do that fast. And if I want a Foo Fighters guitar sound NOW, no waiting - that too. I don't have to keep it, or do anything else to it, or alter its incarnation - but if I've got a guitar sound in mind that is halfway between Mark Knopfler on "Down to the Waterline" and Eric Clapton on "Son and Sylvia" - something that occurs to me *instantly*, right now - if I can move some knobs to get that in a few minutes, that would be fantastic.

 

And my voice is horrible. I've fought with EmVoice to do some songs, and while it's interesting to see people's reactions to it - "who is singing?" - it's only "sort of" easier than working with a real singer. Having something that could instantly morph the inflections to an iconic style wouldn't negate the artistic pursuit, because I could then go into "producer mode" that much harder - and more effectively.

 

I view it as liberating. I can see a point in the future, finally, where I will be able to quickly get things out of my head, sounding the way I want, without the battle. The downside will be that people with no musical training or knowledge will be able to make something that sounds "professional", with a glint of The Real Thing; but I can't do anything about that.

 

There will be plugins that completely morph performances into particular styles. Effectively that's already a thing now; it's scary that they can take a song in one genre and have it converted to another. Crude and bandwidth limited for now, but it won't always be that way.

Guitar Lessons in Augusta Georgia: www.chipmcdonald.com

Eccentric blog: https://chipmcdonaldblog.blogspot.com/

 

/ "big ass windbag" - Bruce Swedien


I want what gets me to the end easiest. ... if I've got a guitar sound in mind that is halfway between Mark Knopfler on "Down to the Waterline" and Eric Clapton on "Son and Sylvia" - something that occurs to me *instantly*, right now - if I can move some knobs to get that in a few minutes, that would be fantastic.

 

For me, that would take the fun out of recording. I like the "what if I..." part more than the "how do I..." part.

 

And my voice is horrible.

 

So is Bob Dylan's, LOL. Maybe you could just do a Deep Fake thing with his voice. :)

 

I think what you're describing would be great for home recording fans, but there will still be people who want the experience of live music. Unless the deep fake stuff could be done in real time (although I'm sure at some point it would be possible), I think it would be hard to pull off live for now.

 

I also think that people will still crave sounds that are new & different. I don't know if you could have fed American pop music fading in and out from Florida radio stations into AI, and have Bob Marley doing reggae come out the other end. If every drummer sounds like John Bonham, listeners will never know the experience of hearing "When the Levee Breaks" for the first time...or what might be an original twist on drumming that no one has heard before.

 

The reality is I'm addicted to music, so I'm always looking for a new and more powerful high :) Probably not everyone feels the same way.


 

I want what gets me to the end easiest. ... if I've got a guitar sound in mind that is halfway between Mark Knopfler on "Down to the Waterline" and Eric Clapton on "Son and Sylvia" - something that occurs to me *instantly*, right now - if I can move some knobs to get that in a few minutes, that would be fantastic.

 

Oh, hell no. I mean, maybe if I am recording a cover band, but otherwise, no. I want to sound like ME. I want musicians to sound like THEM. I want the idea of exploring sounds. What is the point of recording anymore if we move a slider to get the "Hal Blaine" sound? Damn, if you think pop music sounds the same now, wait 'til this drops.


 

... The ML/a.i. community isn't going to wait around. There is a true ML guitar amp that is a grass-roots GitHub project; it's rough around the edges, but usable. Or actually, Nvidia could develop tools like what I'm talking about. A video game development engine might be the first place where what I'm talking about appears.

 

Nvidia has had robots in R&D for some time now that can watch what a human does and learn the procedure just by watching. They don't program them; the robots learn.

The implications for the future of manufacturing are not conducive to human labor!!!! The manufacturing of the future will be robots running 3D printers and assembling products.

It took a chunk of my life to get here and I am still not sure where "here" is.

Nvidia has had robots in R&D for some time now that can watch what a human does and learn the procedure just by watching. They don't program them; the robots learn.

The implications for the future of manufacturing are not conducive to human labor!!!! The manufacturing of the future will be robots running 3D printers and assembling products.

 

Under the current economic system, how will out-of-work people afford to buy them?

 

There are a lot of great disturbances on the horizon. We can either leverage them into something really cool, or watch humanity dissolve in a sea of antagonism.

 

Not to put too optimistic a spin on things :), but I suspect there are some life-or-death situations in humanity's future as a whole. It will be a pass/fail test.


Under the current economic system, how will out-of-work people afford to buy them? ... Not to put too optimistic a spin on things :), but I suspect there are some life-or-death situations in humanity's future as a whole. It will be a pass/fail test.

 

Bingo!!!!

This is the problem with increased automation, and I fear it will be a startling revelation for some bean counters who look at numbers but have no idea of reality.

We are living in a consumer-based economy. If people do not have money to spend, producing products at a lower cost isn't going to improve anything, especially if it eliminates jobs.

This is where the push for a guaranteed income that some politicians are spouting comes from: a vision of the future where everything is cheap and nobody can buy any of it.

 

It's not "artificial stupidity" it's genuine stupidity at a higher level than intelligent people can comprehend.

It took a chunk of my life to get here and I am still not sure where "here" is.

For me, that would take the fun out of recording. I like the "what if I..." part more than the "how do I..." part.

 

That's fun, but people are making "preset decisions" all the time and just don't think of it that way: using a 57 on snare, a 121 on a guitar cab, using a Fender amp for a clean sound... a bass guitar for "bass". There is variability, but there will be with a.i. plugins as well.

 

 

 

I think what you're describing would be great for home recording fans, but there will still be people who want the experience of live music.

 

But what is "live music" today? "Tracks" and tuned vocals - if there are actual vocals, or even someone that can really sing on pitch? It's going to get more diffuse.

 

There are 2 aspects: a.i. to get *sounds*, versus a.i. to "create" an output. I want a faster way of getting a sound. But it's also going to mean crazy things like amp presets that do automatic vibrato, auto chord voicings, and auto-tune. A "live band" can be people who basically don't know anything about music presenting themselves as "a band". But we're already there from a studio point of view now.

 

The real question will be whether humans will be able to discern the difference.

 

 

Unless the deep fake stuff could be done in real time (although I'm sure at some point it would be possible), I think it would be hard to pull off live for now.

 

 

Modulate A.I. - real time voice transformation

 

I also think that people will still crave sounds that are new & different.

 

The problem will be *an avalanche of new and different*.

 

Are you aware there is a 24-hour non-stop streaming YouTube channel of a neural network that continuously makes death metal?

 

 

The other Dadabots offerings are... interesting. The non-stop "lo-fi classic metal radio" is pretty crude, but again only because of limited resources. Presently people are only applying the technology to data sets trained as a whole, which is why you can't make out the words in the lyrics; a GPT-3 lyric generator applied on a multitrack level would fix that. The limited bandwidth would be fixed with training done on a discrete-instrument level, which is where a UA, Nvidia, or Microsoft/Apple would come in.

 

Are you aware of OpenAI's Jukebox project?

 

Listen to this starting around 1:53. AM-radio bandwidth, but I think it could pass a "clock radio in a doctor's office" test:

 

A.I. Jukebox

 

The A.I. Jukebox stuff literally frightens me. It reduces human creation to pure data and recreates it. It's like listening to an AM radio from another dimension. The "curated" examples are entertaining, but if you listen to some of the "continuations," sometimes it's uncanny what is there. It's in a wild, feral state right now, but it won't be like that for very long. There will be an app that will let people concatenate elements - like you can do already if you have Python programming ability - to make music that has never existed before. Not algorithmic variations, but something that will pass a Turing test for realism.

 

I don't know if you could have fed American pop music fading in and out from Florida radio stations into AI and have Bob Marley doing reggae come out the other end. If every drummer sounds like John Bonham, listeners will never know the experience of hearing "When the Levee Breaks" for the first time... or what might be an original twist on drumming that no one has heard before.

 

There should be a differentiation made, or rather a term needs to arise to explain the following difference; I suppose there will be such a term eventually.

 

Music that has been created by humans, but the sound has been modified by a.i.

Music that has been created by a.i. from scratch.

 

Because it's difficult to talk about one without referencing the other, or sounding like you're talking about the other concept when you're not. Maybe "a.i.-novel music" vs. "a.i.-massaged sound". Then intersections: I don't want John Bonham beats copied from Led Zeppelin tracks, any more than I want a cut-and-pasted MIDI track. Would I like to do a project with Dave Grohl playing drums? Yes. Would I tell him "Dave, absolutely do nothing that John Bonham has done or would do"? No. But it's going to be there in his playing in some aspect.

 

The reality is I'm addicted to music, so I'm always looking for a new and more powerful high :) Probably not everyone feels the same way.

 

Well, I am, too. But tools that make music sound better, faster - I'm all for those. From the perspective of a one-mic Sun Studios old-school way of recording rock music, multitracked music has a "sound" that doesn't exist in reality in front of you. The choice to use that technology is an arbitrary one, just like choosing to use a.i.-massaged sound. It can still be post-mangled, or unique combinations chosen, or particular models created. Or reduced in potency - a mastering plugin that has been trained on CLA recordings doesn't have to make the result sound like each instrument is super-hyped; it could just imply the EQ habits on the 2-mix, not the instruments. That sort of thing is what people are doing anyhow when they try to emulate the approach of a particular mixer/engineer.
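
The deterministic ancestor of that already exists as matching EQ: average the spectrum of the reference material, compare it to your mix, and apply the smoothed difference to the 2-mix. A bare-bones numpy sketch (a trained model would presumably learn such curves instead of computing them):

```python
import numpy as np

def avg_spectrum(tracks, n_fft=4096):
    """Average magnitude spectrum over a list of mono float arrays."""
    specs = []
    for t in tracks:
        frames = t[: len(t) // n_fft * n_fft].reshape(-1, n_fft)
        specs.append(np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0))
    return np.mean(specs, axis=0)

def matching_curve(reference_tracks, mix, n_fft=4096, smooth=31):
    """Per-bin gain nudging the mix's average spectrum toward the references'."""
    ref = avg_spectrum(reference_tracks, n_fft)
    own = avg_spectrum([mix], n_fft)
    curve = ref / (own + 1e-9)
    kernel = np.ones(smooth) / smooth    # crude smoothing keeps the EQ broad
    return np.convolve(curve, kernel, mode="same")

def apply_curve(mix, curve, n_fft=4096):
    """Filter the mix frame by frame (no overlap-add; fine for a sketch only)."""
    frames = mix[: len(mix) // n_fft * n_fft].reshape(-1, n_fft)
    spec = np.fft.rfft(frames, axis=1) * curve
    return np.fft.irfft(spec, n=n_fft, axis=1).ravel()
```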

Guitar Lessons in Augusta Georgia: www.chipmcdonald.com

Eccentric blog: https://chipmcdonaldblog.blogspot.com/

 

/ "big ass windbag" - Bruce Swedien


 

I think what you're describing would be great for home recording fans, but there will still be people who want the experience of live music.

 

But what is "live music" today? "Tracks" and tuned vocals - if there are actual vocals, or even someone that can really sing on pitch? It's going to get more diffuse.

 

I am not here to stop technology. I know the results will speak for themselves, and the results will be all over the map: some fantastic music and lots of crap - same as it ever was.

 

Live music today in Bellingham is people singing live into mics, with probably some reverb and maybe a bit of EQ. We don't always get to choose our speakers, so compensation of some sort may be needed for best results. Everybody I've seen is playing their own instruments, good, bad or indifferent. Real drummers playing real drums, real bassists playing real basses, etc.

 

I don't want 60 Boss pedals on the floor in front of me (a 2-button footswitch is bad enough!!!!), but I sure love being able to choose the ones I want for the sounds I want to make; my Katana amp allows me to do that. There are other systems - a friend just got a Headrush. He uses it to create the tones he likes and then he plays his guitar.

 

I can think of one performer up here who almost never plays out and uses a looper to build all of her music. It's still her voice, but heavily processed. The toys ARE fun, but it's all pretty human up here. If I didn't have all those effects in the amp, I'd go back to playing on a 2-channel amp and it would be just as much fun.

 

If it's different in Atlanta, I'm not sure if I want to hear it or not. If the human element is first and foremost, then probably - although everybody has the right to suck, here, there and everywhere...

It took a chunk of my life to get here and I am still not sure where "here" is.

Another rambling thought.

 

I agree with Chip's line of thinking that we already choose what are essentially presets. A Telecaster is a preset, but listening to all the different sounds different players get out of it pretty much renders the choice meaningless from a musical standpoint (although if I had to choose a guitar based on beating my way out of some crazy bar scene with it, the Tele would be high on my list and I would not consider a Gibson SG).

 

Lots of plugins have presets. Most of the time, I just surf those and choose one. If the plugin has a ton of knobs I always choose the presets or find a different plugin. I hate 2d knobs, they are stupid. Sliders please.

 

Now, what happens if you drop 3 plugins onto a track and choose presets for all of them? Is that a preset too, a custom setting, or maybe both?

 

I don't find the prospect of AI plugins off-putting, people will mess them up big-time and maybe somebody will come up with something new. That's my kind of fun!

It took a chunk of my life to get here and I am still not sure where "here" is.

I hate 2d knobs, they are stupid. Sliders please.

 

And...why are we beholden to hardware, anyway? Several years ago I came up with a virtual console design that used crescent-shaped sliders, with mute/solo and other parameters located inside the crescent. It gave much higher resolution when moving faders. I presented it to a company and they said none of their customers would accept it. I also said "why do you just imitate clip LEDs? Why not have the entire fader flash red?" They thought I was totally disconnected from reality. Which may be true, but...
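
The resolution claim is easy to put numbers on: for the same panel height, an arc gives more pointer travel than a straight fader, so each pixel of movement covers fewer dB. A back-of-the-envelope sketch with made-up dimensions, not the actual design described above:

```python
import math

db_range = 60.0                    # fader covers -60 dB .. 0 dB
panel_height = 100.0               # straight fader travel, in pixels

# A 120-degree arc whose chord spans the same 100 px of panel height:
sweep = math.radians(120.0)
radius = panel_height / (2.0 * math.sin(sweep / 2.0))   # ~57.7 px
arc_length = radius * sweep                             # ~120.9 px

print(db_range / panel_height)     # 0.60 dB per pixel of travel (straight)
print(db_range / arc_length)       # ~0.50 dB per pixel of travel (crescent)

def crescent_to_db(angle, sweep=sweep, db_range=db_range):
    """Map pointer angle along the arc (0..sweep radians) to gain in dB."""
    frac = min(max(angle / sweep, 0.0), 1.0)
    return -db_range * (1.0 - frac)    # bottom of arc -> -60 dB, top -> 0 dB
```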


And...why are we beholden to hardware, anyway? Several years ago I came up with a virtual console design that used crescent-shaped sliders... They thought I was totally disconnected from reality. Which may be true, but...

 

Yeah, I REALLY don't get it, at all. My AmpliTube copy of an Ampeg SVT is mostly a graphic of the amp, which has nothing to do with how it sounds or how you adjust it. There are a few tiny knobs on one panel, and I think there is even an image of an input jack - even though it doesn't do anything. The "switches" are in the same places as on the amp, and they are 3-way switches. Sure, it looks cool.

 

But as an interface it is a total fail. It could be much simpler, provide much larger controls in the same space, and be faster and more accurate to dial in. Almost all "guitar amp" sims I see are just like it.

I make the assumption that others are making the assumption that guitarists are stupid and stuck in their ways.

But I don't believe it. I think we'd like better interfaces.

It took a chunk of my life to get here and I am still not sure where "here" is.

My AmpliTube copy of an Ampeg SVT is mostly a graphic of the amp, which has nothing to do with how it sounds or how you adjust it. ... But as an interface it is a total fail. ... I think we'd like better interfaces.

 

High-end restaurants place a lot of importance on presentation, because as they say, "first you eat with your eyes." I think there's a lot of that going on. Consider the people who think Ableton Live is boring, and looks like a microwave oven control panel. I think it looks functional, clean, and uncluttered...and I also suspect that not spending too much time on eye candy might be one reason why the audio engine is rock solid.

 

There's probably no one-size-fits-all solution. Helix Native is a good example of an amp sim that eschews eye candy... then again, it was constrained to have the same look and feel as the hardware, so it may have just been dumb luck that all of Helix Native's bells and whistles are in the functionality. On the other hand, sometimes the skeuomorphic world can be inspirational. There needs to be a balance... maybe it's as simple as coming up with a couple of different skins, begging everyone to turn on analytics, and seeing which skin wins.

 

I remember when Sonar changed the interface look, the user base rebelled in a "who moved my cheese" moment, coupled with the first implementation not being fully baked. I think over time people realized that overall, it was an improvement. As to AmpliTube, remember that it's from an Italian company. Italy has always been a fashion center on multiple levels, so I assume that's a major influence. It might even be in their DNA. Getting them to take the art out may be a non-starter for people who live in the country that brought us Ferraris, the Sistine Chapel, and Sophia Loren :)


Nice post, Craig! I still believe you can combine style and function to create things that are beautiful to look at and beautiful to use.

 

I don't see the front panel of an Ampeg SVT as being any sort of artistic accomplishment, it's a boring looking box full of tubes. What makes it fun is when you turn it up a bit.

 

And I can't fault the tone of the plugin.

 

Maybe Italians (and everybody else!) could consider trying to create a new "best thing to use" instead of just documenting specimens from the past that are revered? Plugins have the potential to write a new book.

It took a chunk of my life to get here and I am still not sure where "here" is.
