
Has Universal Audio lost it? (Luna)



Maybe Italians (and everybody else!) could consider trying to create a new "best thing to use" instead of just documenting specimens from the past that are revered? Plugins have the potential to write a new book.

 

I think Logic blazed some trails in terms of plug-in looks, didn't seem to hurt its adoption.



 

I have Eventide's Physion and Micro-Pitch and they are both very attractive and very functional, with great presets and sliders to dial things in. Not only can it be done, it HAS been done.

It took a chunk of my life to get here and I am still not sure where "here" is.

Live music today in Bellingham is people singing live into mics

 

But see, this is because presently there is still something of a rudimentary expectation of what "being a musician" is.

 

This technology will provide the illusion that *everybody* can think of themselves as "a musician". The reflex is to expect some sort of philosophical regulation of that; but I've met people who think of auto-tune as just an either/or option, with playing actual instruments being optional. When digital consoles allow a plugin that imbues completely natural vibrato on a singer, as well as tonal/timbral quality that mimics Whoever Famous Singer - and guitar amps with the same capability, electronic drum kits that quantize on the fly to grids that perfectly replicate Famous Grooves - "everybody" will think they're a musician.
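The "either/or" pitch correction being described boils down to one operation: snap a detected frequency to the nearest equal-tempered semitone. A minimal sketch of that core idea (real correctors track pitch over time and re-synthesize the audio; this only shows the frequency math, and all names are illustrative):

```python
import math

A4 = 440.0  # reference pitch in Hz

def snap_to_semitone(freq_hz):
    """Snap a detected frequency to the nearest equal-tempered semitone."""
    if freq_hz <= 0:
        raise ValueError("frequency must be positive")
    # distance from A4 in semitones, rounded to the nearest whole step
    semitones = round(12 * math.log2(freq_hz / A4))
    return A4 * 2 ** (semitones / 12)

# a slightly flat A (435 Hz) gets pulled up to 440 Hz
print(snap_to_semitone(435.0))  # → 440.0
```

With the rounding hard like this you get the "robot vibrato" artifact: a natural wobble across the semitone boundary slams between two fixed pitches instead of gliding.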

 

Like I said, I know people that call themselves "musicians" because they make some choices in a phone app and call it a "song". A.i. is going to let anybody make something that sounds like music, if they want to bother. People watching other people onstage "making music" this way won't have any awareness of the divide between the technology and actually being a musician. Those who even sort of know will insist it's no different, just like being a DJ these days is "being a musician".

 

 

Real drummers playing real drums, real bassists playing real basses, etc.

 

The church music scene: what amounts to some pretty big productions, involving digital boards, MIDI'd tracks, lights, sometimes electronic drum kits, and of course the guitar players have Enormous Variations on the Edge pedal boards. Any church that has a drummer that is "marginal" will buy the Latest $500 Roland or Yamaha - or Line6, or Nuevo Corporation - drum brain that fixes the guy's flammed beats, asynchronous push/pull, flubbed drum fills, sketchy press rolls, maybe adds ghost strokes... with a sound that comes out of the box as a 2-mix that replicates 50 different Established Classic Drum Sounds. In 5 years, if covid doesn't wreck everything, that will be reality in every church. The same will go for the guitar player, bass player, and singer.
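The "quantize on the fly" trick those drum brains do can be sketched as snapping hit timestamps toward a tempo grid. A toy version, where the `strength` parameter is a hypothetical stand-in for the partial-quantize settings such boxes offer:

```python
def quantize(hits, bpm, grid=0.25, strength=1.0):
    """Snap hit times (in seconds) toward a rhythmic grid.

    grid is in quarter notes (0.25 = sixteenth notes); strength=1.0 is a
    hard quantize, lower values pull hits only part of the way - roughly
    what "partial quantize" settings on drum modules do.
    """
    step = 60.0 / bpm * grid  # grid spacing in seconds
    out = []
    for t in hits:
        target = round(t / step) * step  # nearest grid line
        out.append(t + (target - t) * strength)
    return out

# at 120 BPM a sixteenth-note grid is 0.125 s apart:
# a hit rushed to 0.30 s snaps back to the grid line at 0.25 s
print(quantize([0.30], 120))
```

A real unit does this with lookahead on a live audio stream, which is much harder than post-hoc timestamp math, but the musical effect is the same: the push/pull is gone.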

 

The *real* musician band will sound rough around the edges to the Average Median Audience. Will they appreciate the talent of a really good, *real* singer? *They already don't* - they're only listening to the timbral balance and "authentic" delivery of a single phrase. Singers only need to be able to *sell a style* today; if their pitch is sketchy, tone kinda all over the place after the first syllable, or whatever else that wouldn't pass muster 30 years ago - people *don't hear it anymore*. A.I.-enhanced "performances" will be the expected norm, and audiences will lose the ability to discern what's real from fake, just as they already have.

 

Katana amp allows me to do that.

 

There will be a Perfect Katana in a few years.

 

 

I can think of one performer up here who almost never plays out and uses a looper to build all of her music. It's still her voice but heavily processed, the toys ARE fun but it's all pretty human up here.

 

I know a guy in Nashville that has always built his solo gig around looping his guitar parts between a pair of Line6 delay pedals. He has it down pretty effortlessly/seamlessly, and I'm sure 99% of the people in the audience have no idea how he's doing what he does. In a sense it's kind of a waste because of that - he really may as well be singing to tracks - and I'm sure some people assume that IS what he's doing.

 

2 things:

 

1) people in the audience these days are not as musically astute as they once were.

2) technology has made it worse, because it's raised expectations to an unrealistic level as to what something should sound like "live" these days.

 

 

I played at a small theatre with a Beatles band once. I tried to use a Variax > Floor Pod setup to mimic each song (never accurately IMO; out-of-control logistics in this band...). Anyhow, I would do things like the sitar parts with the Variax, backwards sounds with a DL4, etc. BUT, after we played "And Your Bird Can Sing" (with the "play both parts in harmony" fun lick), I turned around at the finish - and because the crowd was SO close to the front of the stage, I heard two guys yelling at each other in the front row: "it's the pedalboard thing he's using, it makes all the parts come out right", "yeah, I hear that". Because they could see me tap-dancing on my pedalboard, and obviously a "guitar" doesn't sound like a sitar or make backwards sounds, then even when I'm literally playing something technical, *it's not actually me playing* to them.

 

People will just accept a.i. the same as they have autotune and gridding in DAWs. Those guys mentioned above *were already in the near-a.i. future*.

 

If I didn't have all those effects in the amp, I'd go back to playing on a 2 channel amp and it would be just as much fun.

 

I really wish I'd just used a 2 channel amp in the aforementioned situation....

 

 

If it's different in Atlanta I'm not sure if I want to hear it or not. If the human element is first and foremost, then probably - although everybody has the right to suck, here, there and everywhere...

 

I don't know about any scenes now, I have no idea what's going on in Atlanta these days. The R&B/hip-hop scene has always been huge there, and I see no pushback to using any a.i. "cheats" in a box there, but I don't even know if they have any other real "scenes" anymore?

 

It looks to me like All Scenes have been sublimated in the south east into something that can be summed up by saying

 

"Music Played by People Wearing a Trucker Hat, and Maybe With a Beard".

 

...and "covers" that are simplified and have all of their rhythms changed into "jam band/Dead shuffled-strumming". That is apparently what the Augusta music scene has degraded into over the past year: young people willing to get covid in bars/clubs, playing 3-chord songs, singing warbly about lives they've never led.

 

 

 

... and they're smushing all of the songs into the same tempo range? (I just watched a local YouTube channel with a bunch of local "artists".) It's like they're trying to make everything match "been asleep in the tent at Bonnaroo": up-tempo songs are slowed down, slow ones sped up, into a sort of gelatinous struma-lumma trucker cap "arrrrrrrhhh ruhhhggg" presentation. It's like "early 2000s Athens jam band" is colliding with "late 2010s 'Americana' faux-country".

 

There will be a plugin for that.

Guitar Lessons in Augusta Georgia: www.chipmcdonald.com

Eccentric blog: https://chipmcdonaldblog.blogspot.com/

 

/ "big ass windbag" - Bruce Swedien


I agree with Chip's line of thinking that we already choose what are essentially presets.

 

I don't find the prospect of AI plugins off-putting, people will mess them up big-time and maybe somebody will come up with something new. That's my kind of fun!

 

You will still have creative variability with sound.

 

For instance, with the ML amp plugin that's out now, you have to use your own cab IR. And you can preamp it with whatever you want, of course.

 

What I want isn't a restriction in choice of, say, the cab or miking, but a quick way of "Staying Within Accepted Parameters". Which is what everybody does; you decide to roll out the low end, fix the guitar if it's too spikey, whatever. If I can move a slider that fades in "Terry Manning guitar channel eq results" in my situation, I prefer that. I have a problem with establishing a *baseline* - I don't have ATC monitors in a nice room, or a 2nd engineer to move a Royer around on a perfectly maintained Bassman on 7 while I listen through a Neve. I'm in my own Uncanny Valley of potential that theoretically can be nearly spot on to anything, IMO; but the time expenditure and coarseness of the "laboratory" make it a time sink, and prone to error.
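The "move a slider that fades in a known-good EQ curve" idea is, at its simplest, a macro control interpolating per-band gains between your current settings and a stored target curve. A minimal sketch - the band names and target values here are invented for illustration, NOT an actual Terry Manning setting:

```python
def blend_eq(current, target, amount):
    """Macro slider: linearly interpolate per-band gains (dB) between
    the current EQ settings and a stored target curve.

    amount runs 0.0 (untouched) to 1.0 (full target curve).
    """
    return {band: current[band] + (target[band] - current[band]) * amount
            for band in current}

flat_eq = {"low": 0.0, "mid": 0.0, "high": 0.0}
# an invented example curve - not any engineer's real settings
target_eq = {"low": -3.0, "mid": 1.5, "high": 4.0}

print(blend_eq(flat_eq, target_eq, 0.5))  # → {'low': -1.5, 'mid': 0.75, 'high': 2.0}
```

A plugin doing this "for real" would interpolate filter coefficients or match a measured spectrum rather than static band gains, but the one-slider user experience is the same.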

 

People will default to *hardware* presets. Which I suppose is the Cool Thing one does if you're CLA, JJP or some such: have the blue 1176 picked out, sitting there just for that One Thing. Still, a decision has to be made about setting it and when to use it; it won't be any different with the "DAW-centric" a.i. plugins. It will just be curated a lot better than a preset is today.

 

In an ideal artistic world all initial conditions would be met, and everything would be like a Brian Eno/Daniel Lanois experiment fest, but most people are doing "preset combination" trial and error on some level from an audio engineering standpoint IMO.


I hate 2d knobs, they are stupid. Sliders please.

Oh yes!

 

Why isn't the pot moving! Oh, it's not up/down, it's left/right. Or vice-versa. Grrrrrrr

 

The thing that bugs me, and I used to harass Justin to get Reaper to implement, is that knobs in software are not "damped". Like mouse acceleration: if I "pull" on a fader fast but then start slowing down, *scale the resolution higher* for more control.

 

Or alternately - there is zero reason that once you "touch" a control, its "surface" can't pop out to be huge (or the size you want), increasing the "throw" of the control for more accuracy. It's bizarre that developers keep adding teeny-tiny knobs/faders, or the ability for a control to scale down, so that THEN you've got to micro-manage the throw of the control AS IF IT'S A REAL HARDWARE POT THAT TINY.

 

 

I want faders and knobs that pop out, grow, when you touch them. I don't want to have to hold down Ctrl, or try to move my trackball by breathing on it to get 1 dB.

 

 

OR BETTER YET,

 

EVERYTHING SHOULD DEFAULT TO 0 (or revert) WHEN YOU DOUBLE CLICK!!!!!

 

"Oh wait, I've changed my mind" (proceeds to waste 20 seconds trying to set something back to 0).

 

How much of my life have I wasted "resetting back to zero"???????

 

 

AHRGRHhhh...
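Both behaviors being asked for here - drag resolution that scales with pointer speed, and double-click-to-default - are simple to model. A toy sketch; the speed threshold and dB-per-pixel values are arbitrary assumptions, not any DAW's actual numbers:

```python
class Fader:
    """Toy gain fader: drag resolution scales with pointer speed, and a
    double-click snaps back to the default value."""

    def __init__(self, value=0.0, default=0.0, lo=-60.0, hi=12.0):
        self.value, self.default = value, default
        self.lo, self.hi = lo, hi

    def drag(self, delta_px, speed_px_per_s):
        # coarse when the pointer moves fast, fine when it slows down
        db_per_px = 0.5 if speed_px_per_s >= 50 else 0.05
        self.value = min(self.hi, max(self.lo, self.value + delta_px * db_per_px))
        return self.value

    def double_click(self):
        # "reset back to zero" without fighting the mouse
        self.value = self.default
        return self.value

f = Fader()
f.drag(10, 200)          # fast drag: +5.0 dB
f.drag(10, 10)           # same distance, slow drag: only +0.5 dB more
print(f.double_click())  # → 0.0
```

A production version would ramp the sensitivity continuously rather than using a hard threshold, but this is the whole "damped knob" idea in a dozen lines.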


^^ Interesting post Chip!

 

Pop music by its very nature is "faddish" and changes constantly. With somewhere between 2 and 3 thousand gigs behind me, and at 66 years old, I am not planning on slogging it out in the trenches much longer.

I play for real, every time. I never "phone it in" and I don't really use too much of the weird stuff on the Katana; I hate stuff on the floor for one thing, and the 50 only has 4 presets available, which is why I switched from the Katana 100 - which had 8 presets, and you could switch the footswitch into "pedalboard mode" and turn your various effects off and on within those 8 settings. Thinking about all that shit I could do on the floor distracts my focus on playing music, hate it.

 

I like it simple - 1 "cleanish" tone for decorating, 1 much less cleanish tone for solos - those are used 80% of the time. I have a gritty rotary speaker sound and a "Gilmourish" lead tone with long delay and plate reverb. That's the other 20%.

 

No pitch shifters or harmonizers, no reverse delays, just straight up rock and roll (and country) guitar tones. I have to play to get anything to happen. People do still notice that I am up there live and in person, actually playing my guitar.

 

I don't think that will ever die or go away even long after I am gone. AI will be another phase and then, from "out of nowhere" - "Authenticity" will become a thing again, sort of like the Rockabilly resurgence a good while back. Motown will never go out of style, that fills the dance floor every time, young, old and in-between.

 

Years ago, I tried to learn to play the guitar fills for And Your Bird Can Sing on an acoustic guitar. I failed. Beautiful stuff, though. I'm certain the Beatles never played it live - at least not like the record, but probably not ever, period.

I don't think a harmonizer could get it right either, AI or not. It would have to be programmed; might as well pre-record it as you suggested.

 

The irony is that AI can only think of "thoughts" it's been given and it does not have the same profound emotional influences that humans are subject to worldwide. The variety of responses we humans have to sounds, colors, movement is barely understood in terms that could possibly be programmed and make no mistake, we are dependent on programmers for AI at this point. Human emotion is paramount in all music, yes we like to shake our ass but we like to shake our ass to music that makes us feel like shaking our ass. Some people will get the move groove going to just about anything, others want what they want.

 

We were playing a gig once and a lovely couple, dressed to the nines and dancing off to the side with sophisticated duo moves, came up to us and the lady said "Do you know any of that 'not jumping around' music?" We were not feeding her emotional responses for the dancing she wanted to do. We tried; I knew a waltz so we gave them that at least.

 

If/when AI becomes independently capable of making all its own decisions, will they be emotional decisions or simply based on capacity? What is it about a samba beat that makes humans shake their ass? AI won't know; that is a deeply human emotional response.

 

Machine learning can only go so far and then it's "Daisy, D a i s y, D a i s y, D a i s y . . ."


But getting back to Luna...remember that it's still early days. Who knows what UA has up its sleeves?

 

As long as UA has been around, I'm sure they are up to something.

I don't own any UA gear (yet) and I haven't checked out Luna.

 

So I don't really have much to say about them at this point except that it is a given that there will be changes.


One differentiator is that Luna is Mac-only. I don't know if there will be a Windows version, but I tend to doubt it. Thunderbolt is such a Mac thing that even though it works on Windows, the variables in the Windows environment would probably be a support problem. I have Thunderbolt on my Windows machine, but the computer was integrated by PC Audio Labs, so Thunderbolt actually works :) I'm not sure all Windows machines would be as solid, whereas you know the Mac will be.

I ordered my last Windows DAW from Sweetwater and had them put in a Thunderbolt card. It works well enough that I never did move over my PCI UA card.

 

Thought about trying Cubase. In all these years I've read about it but never tried it. Used Cakewalk since version 1, and now stick to Live, Logic and Reason. Was thinking about getting the Cubase crossgrade just for fun until I read that it requires a dongle, purchased separately. ... No thanks. Cakewalk spoiled me with its painless copy protection. I paid them back by buying every other version upgrade from Cakewalk 1 to the last Sonar. That was a lot of upgrades over the years.

This post edited for speling.

My Sweetwater Gear Exchange Page


I ordered my last Windows DAW from Sweetwater and had them put in a Thunderbolt card. It works well enough that I never did move over my PCI UA card.

 

Having a Windows computer done by a company that integrates computers specifically for music and/or video is always a good move. It can also take care of weird requests, like I still wanted a FireWire port (yes, FireWire) to run my UA Satellite.

 

Thought about trying Cubase. In all these years I've read about it but never tried it. Used Cakewalk since version 1, and now stick to Live, Logic and Reason. Was thinking about getting the Cubase crossgrade just for fun until I read that it requires a dongle, purchased separately. ... No thanks. Cakewalk spoiled me with its painless copy protection. I paid them back by buying every other version upgrade from Cakewalk 1 to the last Sonar. That was a lot of upgrades over the years.

 

IMHO Cubase is still the leader of the pack for MIDI implementation. If you're doing orchestration, it's almost essential.

 

I'm a huge fan of Live; it's the only program I use for live performance. I was really big on Reason for a while, but between Komplete, IK Total Studio, and the Arturia collection, I pretty much don't need any other virtual instruments. My main DAW these days is Studio One; it's the fastest one I've found for putting songs together, and I've become dependent on the Harmonic Editing feature. I also like that it doesn't try to be all things to all people, but does what it does really well.


I don't really use too much of the weird stuff on the Katana, I hate stuff on the floor for one thing and the 50 only has 4 presets available,

 

 

Well, it's a Roland product. I think they're pursuing a particular type of Product Satori in features/controls/display; it's a very "Groundhog Day in 1992" user experience. I have a 50 that I got for use back in the old days, when I actually used the office I pay rent for to teach guitar lessons. The ridiculous, "Roland" thing is that 4 presets would be enough for my purposes, except the "2 banks, 2 presets", blinky-LED x 2 buttons scheme is... Product Satori Spartan.

 

Would it have been THAT much more expensive to just have 4 buttons and 2 more LEDs? It's like the battle between the Line6 Spider "here's everything at once" kitchen sink approach and the Roland "it's there if you want it, but you must EARN admittance!" approach.

 

 

Thinking about all that shit I could do on the floor distracts my focus on playing music, hate it.

 

 

Yamaha almost had it right with the THR line, but their dealer distribution path is peculiar.

 

 

I like it simple - 1 "cleanish" tone for decorating, 1 much less cleanish tone for solos -

 

 

So Fender decided that was the way to go with their digital line - except they expect you to pay a premium for what should be cheaper BECAUSE it's digital. I would guess all of these companies are deciding against doing anything that might remotely be construed as "copying" another's approach, to their detriment. As if Microsoft had never gone graphical with a mouse because "Apple does that". It's really interesting how diverse all of these companies have made their amps, with so little Venn diagram overlap in user experience....

 

 

Years ago, I tried to learn to play the guitar fills for And Your Bird Can Sing on an acoustic guitar. I failed. Beautiful stuff though I'm certain the Beatles never played it live, at least not like the record but probably not ever period.

 

I don't think it's particularly difficult; it just has Great Chain Reaction Catastrophe Potential. More fun when your drummer is taking great advantage of the free Red Bull.

 

 

The irony is that AI can only think of "thoughts" it's been given

 

I would say in great part, humans are not any different.

 

The variety of responses we humans have to sounds, colors, movement is barely understood

 

A.i. will allow for the potential to mine our responses to combinatory experiences in ways we will not be able to comprehend. Adversarial neural nets and machine learning are not pre-programmed responses. What may seem like indefinable aspects to us is buried in the data set, and can be brought out by a.i. It *seems* like emotional response should be something that *requires* human experience to elicit, but the fact that it can be encapsulated in "art" means that whatever subset of combinations triggers us, a.i. can manipulate. It's maybe the most frightening thing about it IMO; there will be things discovered by a.i. that we can't imagine about ourselves.

 

"Understanding" of the data set is not required for using it. There are supreme dangers to a.i. that human bias makes us blind to. The notion that it won't be able to make convincing music, that will elicit an emotional response, is one. It doesn't have to know or care about context.

 

 

What is it about a samba beat that makes humans shake their ass? AI won't know, that is a deeply human emotional response.

 

Machine learning can only go so far and then it's "Daisy, D a i s y, D a i s y, D a i s y . . ."

 

 

I would tactfully suggest you are responding from a bias fallacy and outdated tropes about a.i. Generative adversarial networks, machine learning, and other a.i. approaches are at a nascent stage right now, but are already doing things that are not only completely unexpected, but *beyond our understanding of how they're doing it*. The thought that it can never be "as good as us" is a reaction that will probably be our downfall.

 

But maybe UA can utilize some of these techniques in LUNA and save us?

 

/

sorry Craig


I would tactfully suggest you are responding from a bias fallacy and outdated tropes about a.i. Generative adversarial networks, machine learning, and other a.i. approaches are at a nascent stage right now, but are already doing things that are not only completely unexpected, but *beyond our understanding of how they're doing it*. The thought that it can never be "as good as us" is a reaction that will probably be our downfall.

 

"As good as us," or "better than us," is not necessarily the same as "as desirable as us." People watch humans play chess tournaments, but there's not a big demand for people to watch computers playing chess with each other. Even in science-fiction movies where giant robots battle each other, humans (or aliens) are at the controls. If not, the robots could just analyze each other's programming before the fight starts, and based on that analysis, one could concede defeat without having to actually fight :)

 

I have no doubt that AI will be able to generate worthwhile music. But it doesn't follow that as a result, humans won't be able to generate worthwhile music, or that AI will inherently generate better music.

 

If you've been following my writings elsewhere, I'm totally up for machine learning (why should I have to do the boring stuff?), and clearly see the potential in AI. But I see AI more like a partner than a conqueror.

 

Besides, there's nothing new about being able to press a button, and have a machine generate better music than I do - it's called a CD player, playing the Brandenburgs :)


... I was really big on Reason for a while but ...

 

I'll tell you what keeps me coming back to Reason. They have some wonderful instruments that create complex arpeggios, both monophonic and chordal. I don't know anything else that lets you get this deep or complex. The drum pattern sequencer is also really nice, with the logic, locks and other ways to vary a pattern.
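The core of a pattern-based arpeggiator like the ones described is just expanding a held chord into a note sequence. A toy sketch in that spirit (function and parameter names are invented; notes are MIDI note numbers):

```python
def arpeggiate(chord, pattern="up", octaves=1, steps=8):
    """Expand a held chord (MIDI note numbers) into an arpeggio line."""
    notes = sorted(chord)
    # build the note pool across the requested octave range
    pool = [n + 12 * o for o in range(octaves) for n in notes]
    if pattern == "down":
        pool = pool[::-1]
    elif pattern == "updown":
        pool = pool + pool[-2:0:-1]  # bounce back without repeating the ends
    # cycle through the pool for the requested number of steps
    return [pool[i % len(pool)] for i in range(steps)]

# C major triad, pattern "up": C E G C E G C E
print(arpeggiate([60, 64, 67]))  # → [60, 64, 67, 60, 64, 67, 60, 64]
```

What makes Reason's devices (or any good arpeggiator) feel deep is everything layered on top of this: per-step gates, velocity curves, probability locks, rhythmic masks - each just another transform over this basic note stream.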



Too much work to quote the quoted quotes, etc so I will just copy and paste what I want and put your posts in "quotes". :)

 

"The ridiculous, "Roland" thing is that 4 presets would be enough for my purposes, except the "2 banks, 2 presets", blinky-LED x 2 buttons scheme is... Product Satori Spartan."

I like having only two switches. I've set it up so that the cleanish tone and the rotary speaker tone can be switched by changing banks. The same is true for the lead tones, just switch the bank switch on the left and you get the other lead tone.

It's working fine for me, I hardly ever want to hit two switches to get anywhere. It's simple, I don't have to think about it.

 

You can go in the other direction like the Peavey Sanpera 1, where 4 footswitches can give you 3 presets (on a modern Vypyr or 4 on the original 1, not sure what that's about) or you can press both top switches and switch banks (there are 4 banks of 3 presets each), OR you can press both bottom switches at once and have a looper (which I did once by accident and it was funny but it sucked) AND you can use the pedal as volume pedal or as a wah pedal. But wait, there's more - if you chose a preset then that same button becomes the tap tempo button for the delay (if you are using it).

 

I like having the wah but I'll take the two footswitches instead of the 4 with all the options. Makes my tiny brain hurt on stage. At home I don't care, I am just goofing off anyway.

 

" I don't think it's particularly difficult; it just has Great Chain Reaction Catastrophe Potential. More fun when your drummer is taking great advantage of the free Red Bull. "

I could probably play it pretty easily slow and smooth. Playing it at the tempo of the record and smooth is another topic. I'm pretty certain it's an overdub, not done at once with one guitar. Maybe George's tribute to Les Paul?

 

" I would say in great part, humans are not any different."

Perhaps not, taken as a whole. I am a lyricist and I enjoy interesting lyrics. How many metaphors do you think AI will invent on the fly? It may get filled up with all the obvious ones that boring songwriters use again and again, same old same old. I am simply skeptical of the potential in that regard, pop music isn't particularly complex or difficult to write but lyrics or poetry may well be a different challenge. Not because it is so complex, but because ideas pop into my own head that have no real reason to be there but they turn out to be why I sound like me.

Witty? Will AI be witty? Or will it just try to be witty? It won't laugh at its own "jokes" at least. If it does, I will gently lower it into a tub of warm water and see if it's still laughing.

 

" A.i. will allow for the potential to mine our responses to combinatory experiences in ways we will not be able to comprehend."

This is more like what I expect to happen. Hopefully it then elbows me in the ribs, laughs hysterically and says over and over "Didn't you get it?"

Ummm...no, I didn't... lol...

 

" I would tactfully suggest you are responding from a bias fallacy and outdated tropes about a.i."

No real need for tact, is there? Have you read The Mind of a Mnemonist?

https://www.hup.harvard.edu/catalog.php?isbn=9780674576223&content=reviews

It is a case study of a person who remembered EVERYTHING. The author observed this individual and attempted to describe his plight.

It's been decades but as I recall this person would see the color red and remember every other time he saw the color red and everything that happened that time and in the end they would be so overwhelmed by the onslaught of memories that they could not function.

 

That's more along the lines of what I predict will happen to AI. It won't have shortcomings because it cannot and does not "know" enough, it will simply have more information than it knows what to do with and the decisions it makes will be based on an insurmountable plethora of data that cannot be parsed intelligently.

 

So, to make it "work" they will dumb it down. Just like radio, dishwashers and jelly donuts.

And, weirdos will have fully functional AI systems at home and glory in the vastness of the infinite sentence that attempts to go full circle spewing incomprehensible scads of accurate data.

It took a chunk of my life to get here and I am still not sure where "here" is.

[Reason] has some wonderful instruments that create complex arpeggios, both monophonic and chords. I don't know anything else that lets you get this deep or complex.

 

Some programs have pretty cool MIDI arpeggiation effects, but I wish more would. Cakewalk had the MFX standard, and a lot of companies followed it at first. But then digital audio came out, everyone fell in love with that, and MIDI effects trended toward becoming fenced-off sections of programs.
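At heart, that kind of MIDI effect is just a transformation on note events. As a rough illustration only (the function name, pattern names, and tick values below are hypothetical, not the API of Reason, MFX, or any particular plug-in), an arpeggiator can be sketched as a routine that expands a held chord into a timed sequence of notes:

```python
# Minimal arpeggiator sketch: expand a held chord (MIDI note numbers)
# into (tick, note) events. Illustrative only; real plug-ins also handle
# gate length, velocity, host-tempo sync, and note-off events.

def arpeggiate(chord, pattern="up", steps=8, step_ticks=120):
    """Return a list of (tick, note) pairs cycling through the chord."""
    notes = sorted(chord)                   # "up" = ascending pitch order
    if pattern == "down":
        notes = notes[::-1]                 # descending
    elif pattern == "updown":
        notes = notes + notes[-2:0:-1]      # up, then back down, no repeated turnaround notes
    return [(i * step_ticks, notes[i % len(notes)]) for i in range(steps)]

# C major triad (MIDI 60, 64, 67), ascending, eight steps of 120 ticks
# (16th notes at 480 PPQN)
events = arpeggiate([60, 64, 67], pattern="up", steps=8, step_ticks=120)
```

Everything beyond this is elaboration - per-step velocity curves, octave spans, rhythmic grids - which is where the deep Reason-style arpeggiators earn their keep.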


...it will simply have more information than it knows what to do with and the decisions it makes will be based on an insurmountable plethora of data that cannot be parsed intelligently.

 

Sounds like me trying to figure out if I want to watch anything on Netflix.


 

"As good as us," or "better than us," is not necessarily the same as "as desirable as us."

 

 

That's the same difference with the subjective interpretation of music. If it's not desirable it's not good. I still claim there will be a time when a.i. produced music will be "invisible" as not being created by humans, and there will be a scary moment when it makes a literal hit song. It will be able to concoct music that creates an emotional response; maybe scarily so.

 

 

People watch humans play chess tournaments, but there's not a big demand for people to watch computers playing chess with each other.

 

People watch *people* play tournaments, the chess program isn't meant to supplant the visual stimulation.

 

Eliciting emotion is not magic; it only seems that way to us. But consider how willingly people anthropomorphize objects. One of the most disturbing things I've seen lately is people seemingly having emotional responses to what they *know* are robots - robots not even particularly far into the uncanny valley. Visual cues may seem a more straightforward way to elicit an emotional response than music, but either way it's a presentation of data to a sensory organ. I wouldn't worry about whether a.i. will be able to make convincingly emotional music, but about whether it comes up with a series of sounds that literally "flips a switch" in our brain and creates an immediate chemical response that exceeds what we expect.

 

 

But it doesn't follow that as a result, humans won't be able to generate worthwhile music, or that AI will inherently generate better music.

 

I'm the human that wants a.i. *tools*, not ideas. My original claim was that it will muddy the water of "what is good" by bulk, through enabling non-talent to make "something" or through purely generative novel creation.

 

I see AI more like a partner than a conqueror.

 

 

I'm not saying it will be a "conqueror". It can be dangerous without any deliberate intent - that's the main problem. There are endless ways it could accidentally destroy us, ways we can't even imagine.

 

The Fermi Paradox might be explained by civilizations never surviving this point in their development of a.i.

 

 

Besides, there's nothing new about being able to press a button, and have a machine generate better music than I do - it's called a CD player, playing the Brandenburgs :)

 

 

We can still make novel combinations. And allow the coarseness of fallibility and chaos to iterate in a way unique to us. And (possibly) free to critique as we choose.

 

My part-husky dog likes to howl when he hears an ambulance siren. I comprehend this as "it's a reaction that huskies, and wolves, do in response to a roughly similar melismatic sound". Do I really fathom his perception of the experience? No. We still have that - presuming "this" isn't a sim.

Guitar Lessons in Augusta Georgia: www.chipmcdonald.com

Eccentric blog: https://chipmcdonaldblog.blogspot.com/

 

/ "big ass windbag" - Bruce Swedien


 

No real need for tact, is there? Have you read The Mind of a Mnemonist?

 

 

I haven't, that sounds interesting, I'll have to (pun intended) remember to read it. Marilu Henner syndrome.

 

 

That's more along the lines of what I predict will happen to AI. It won't have shortcomings because it cannot and does not "know" enough, it will simply have more information than it knows what to do with and the decisions it makes will be based on an insurmountable plethora of data that cannot be parsed intelligently.

 

Except that presumes it won't have the technology to supersede those limitations. There is no functional reason why that would be; that's thinking in a human-centric fashion. Just as a human can't approach chess, or worse, Go, as well as an a.i. can, there is no reason that doesn't extend to everything. Computational and storage limitations will not be an issue at some point, which is why we should be afraid.


 

"As good as us," or "better than us," is not necessarily the same as "as desirable as us."

 

That's the same difference with the subjective interpretation of music. If it's not desirable it's not good.

 

I'm not equating desirable with good, one way or the other...desirable is its own thing. For example, you might like the color blue. So you can go buy a car and there are cars in silver, champagne, red, etc. They're all equally good, but you find the blue one more desirable.

 

I still claim there will be a time when a.i. produced music will be "invisible" as not being created by humans, and there will be a scary moment when it makes a literal hit song. It will be able to concoct music that creates an emotional response; maybe scarily so.

 

I think that's entirely possible as well. But we're not there yet, and by the time we are, the tools humans use to make music will have also progressed. It's only recently that keyboards have moved beyond being two-dimensional devices, and that synthesizer keys have become more than just on-off switches.


 


That's more along the lines of what I predict will happen to AI. It won't have shortcomings because it cannot and does not "know" enough, it will simply have more information than it knows what to do with and the decisions it makes will be based on an insurmountable plethora of data that cannot be parsed intelligently.

 

Except that presumes it won't have the technology to supersede those limitations. There is no functional reason why that would be; that's thinking in a human-centric fashion. Just as a human can't approach chess, or worse, Go, as well as an a.i. can, there is no reason that doesn't extend to everything. Computational and storage limitations will not be an issue at some point, which is why we should be afraid.

 

OK, it's difficult to explain what is in my mind, but I will say this and let it go for now. If and when I hear an AI song (no human involvement) written and sung at this level, I will be impressed and concede that AI has the upper edge. Until then, do we need more "Hey Mickey, oh Mickey" or other hits redux? That's a much different level of creativity, and it's where the $$$ is. A big part of where AI will be steered is about $$$ rather than real creativity; that is inevitable.

 


A big part of where AI will be steered is about $$$ rather than real creativity; that is inevitable.

 

And if it follows the path of other new media introductions, the first big bux will come from p*rn.


Of course, another thought slowly developed in my substandard, human brain.

 

If music is strictly a math problem, then yes, AI will excel. I truly feel sorry for anybody who thinks music is only a math problem; music goes deep into human emotion, and the variations are endless, many of them subtle.

We love the sound of women's voices because our mothers held us and sang to us.

 

Can AI turn that feeling into a logic/math problem and solve it or duplicate it? By its very nature, AI may be able to synthesize an "emotional" experience, but in the end it's all ones and zeros.

It may be useful for speeding up some functions that we find tedious; that is helpful.

 

I never say never but I want to see tears, I want to know that AI is sad and ashamed of itself. I want AI to laugh, I want it to love.

 

But I haven't the imagination to conjure up anything but ones and zeros.

 

A tool? Yes.

Able to replicate human emotion? Perhaps in some shallow, ultimately meaningless way.

More creatively uplifting than humanity? Somewhere in the middle, never at the top.

Meh (Yes Chip, I am confined to the limiting thoughts of a human. Have a look in the mirror my friend, what do you see?)


...I want to see tears, I want to know that AI is sad and ashamed of itself. I want AI to laugh, I want it to love.

 

Sounds like your post confirms my musings in the post immediately above yours :)


...I want to see tears, I want to know that AI is sad and ashamed of itself. I want AI to laugh, I want it to love.

 

Sounds like your post confirms my musings in the post immediately above yours :)

 

I am not certain of anything, but I suspect we agree on more than we disagree with regard to this particular topic.

Time will tell the story.

 

I do agree that pron will be a target for using the tools available with AI. My thoughts above have more to do with the profound differences between the human experience and something that does not manifest in the material world.

AI can't have any kind of real experience because it is not "real". It won't meet somebody and become attracted to them, it won't have a couple of shots of whiskey and smoke a bowl, it won't drive too fast, it won't laugh hysterically at something stupid, and so on ad infinitum. Sure, it can "compose" music based on the preferences it is exposed to; it may even become good at it. But what will it write for lyrics? I see music as a vehicle for lyrics. Instrumental music is fun sometimes, but we've only had one Miles Davis in our lifetime, and I can't believe AI will be able to show that same level of focused expression, since it simply cannot feel the experience that humans live. You can input all the data you want; let me cut off your foot, now what?

 

I may be engrossed in other stories and not pay much attention, if it becomes something it will be unavoidable.


I agree about lyrics. Now, it may be that the paradigm of the vocals being the single most important element in pop music will fade away, but I tend to doubt it.

 

Where AI could really shine is doing soundtracks, if given the right guidance ("need something suspenseful here, segueing into relief that the galaxy didn't get blown up after all").


I agree about lyrics. Now, it may be that the paradigm of the vocals being the single most important element in pop music will fade away, but I tend to doubt it.

 

Where AI could really shine is doing soundtracks, if given the right guidance ("need something suspenseful here, segueing into relief that the galaxy didn't get blown up after all").

 

Stories put to music are part of human existence; no culture exists that does not have them.

 

Movie soundtracks are an example of stories put to music, but the production process has also separated the story from the music. I agree that AI will do a great job there in some cases, less so in others, and it would be impossible for movies like O Brother, Where Art Thou?. Same as it ever was.

 

For every solution, there is a problem.


I am imagining a hellscape future with AI training on Temp music to avoid paying another sync license ever.

 

But my real theory about Luna is that it's a pincer action, with Apple Logic on the other side, shaving market share from Pro Tools to support an eventual private-equity placement.


  • 3 months later...

CORRECTION -

 

You do NOT need an iLok dongle. You need an iLok account. I have Luna open right now without an iLok. It works via the cloud. The only possible downside of cloud licensing is if the cloud goes down (I hear that happens with Avid occasionally, for example; not sure if it happens with UA, or if it's an iLok-wide issue).


Hmmm...good to know. Isn't that effectively the equivalent of zero down time, assuming the cloud stays up?

 

I love Native Access - all my authorizations are in the cloud, I don't have to deal with copy protection, there are regular updates, etc. Works for me.

