Chip McDonald

Everything posted by Chip McDonald

  1. I know of 2 "big artists" I can't name that are now out of commission because they or their organizations are infected, after having been out touring. No doubt people in their audiences have been infected as well. Maybe nobody has died or been intubated because of it, that I'm aware of? VACCINES SAVE YOUR LIFE AND KEEP YOU OUT OF THE HOSPITAL, unlike the unvaccinated jerks that have crippled our medical system in the U.S. THEY DON'T ABSOLVE YOU OF RESPONSIBILITY. You can still be infected and infect others. WEARING A MASK HELPS but isn't perfect. It also doesn't absolve you of responsibility. YOU HAVE NO WAY OF KNOWING, unless you took a PCR test the morning of a gig, whether you're infected or not. 5-15 seconds of exposure is enough for delta. As easy as standing behind someone in line at the bar, or having someone shout in your ear at the gig. Concerts with bleacher seats, sitting in front of the infected person behind you. Our hospitals are full locally. If something happens to me, my wife or my father, I'm not sure there is an open ER within hours of us. Kids are intubated. Grocery stores are having supply chain problems again. UPS/delivery is starting to have problems. I'd suggest the military is having problems kept under wraps. Being reckless, in superficial denial of what's really taking place, is NOT helping the economy in the long run and is certainly making things worse. Ignorance is bliss, but this isn't February 2019 anymore: YOU KNOW THE REALITY.
  2. He would gain nothing from their real estate footprint. I could see a "Sweetwater East" (Nashville), "Sweetwater West" (L.A.), "Sweetwater North" (New York). Contrarily, it would be a cheap way for Amazon to have a "small general goods" storefront: there is no reason to do brick-and-mortar without trying to go bottom dollar. Continual Optimization Past Utility wrecks everything in the 21st century.
  3. Interesting, it has a V-I vibe and bass line like the song "Rock Your Baby" by George McCrae, which would have been a hit a bit before then. Sounds Arabic; the melody has a curious qawwali inflection on what is kind of like a Moroccan melody? The western hybridization in Bollywood music is interesting in how they'll "dress it up". Add half a measure, drop a beat, do oddball structures, have the guitar do a polyrhythm to the percussion, or add a quirky microtonal gliss on the melody. Always interesting when I have a native Indian/Pakistani student that is likewise between two cultures, tying it together.
  4. I haven't, that sounds interesting, I'll have to (pun intended) remember to read it. Marilu Henner syndrome. Except that presumes it won't have the technology to supersede those limitations. There is no functional reason why that would be. That's thinking in a human-centric fashion; just as a human can't approach understanding chess, or worse, go, as well as a.i. does, there is no reason that doesn't extend to everything. Computational and storage limitations will not be an issue at some point, which is why we should be afraid.
  5. That's the same difference with the subjective interpretation of music. If it's not desirable it's not good. I still claim there will be a time when a.i.-produced music will be "invisible" as not being created by humans, and there will be a scary moment when it makes a literal hit song. It will be able to concoct music that creates an emotional response; maybe scarily so. People watch *people* play tournaments; the chess program isn't meant to supplant the visual stimulation. Eliciting emotion is not magic, it only seems that way to us. But consider how willingly people anthropomorphize objects. One of the most disturbing things I've seen lately is people seemingly having emotional responses to what they *know* are robots - robots not even particularly far into the uncanny valley. Visual cues to elicit an emotional response may seem more straightforward than music, but it's a presentation of data to a sensory organ. I wouldn't worry about whether a.i. will be able to make convincing emotional-response-creating music, but whether it comes up with a series of sounds that literally "flips a switch" in our brain, creates an immediate chemical response that exceeds what we expect. I'm the human that wants a.i. *tools*, not ideas. My original claim was that it will muddy the water of "what is good" by bulk, through enabling non-talent to make "something" or through purely generative novel creation. I'm not saying it will be a "conqueror". It can be dangerous without deliberation - the main problem. There are endless ways it can accidentally destroy us, ways we can't even imagine. The Fermi Paradox might be explained by civilizations never surviving this point in their development of a.i. We can still make novel combinations. And allow the coarseness of fallibility and chaos to iterate in a way unique to us. And (possibly) be free to critique as we choose. My part-husky dog likes to howl when he hears an ambulance siren.
I comprehend this as "it's a reaction that huskies, and wolves, do in response to a roughly similar melismatic sound". Do I really fathom his perception of the experience? No. We still have that - presuming "this" isn't a sim.
  6. The problem there is that to get to EDM you have to cross a bridge of "pop rock music tied to counter culture". STAND BY FOR CRAZY CHIP STATEMENT: [align:center] I say the Beatles on Ed Sullivan was probably the most important event in music history.[/align] Nothing else is referenced by so many as being so specifically the EXACT reason they made music. In conjunction with the revolutionary change in societal attitudes, which in large part simply started with what otherwise would have been a limited phenomenon in London - the "insanity" of MEN WITH LONG HAIR - the combination of counter culture tied to an approach to music had a potency unmatched. We'd maybe have had some sort of rockabilly progression, and offshoots that may have cross-pollinated with musicals and soundtracks. But the Beatles set an example that influenced so many artists - and *people* - to set off in diversionary paths that I don't see how the terrain of "music as we know it today" would have been as elaborate and diverse, and certainly the technology and production techniques would be more basic. Hmm. Actually, thinking into this, what would have happened is "rock music" would have proceeded in the more staid manner it set out on initially, but *a form of punk may have still happened*. Musically, the Sex Pistols might still have come about in almost the same form; the basic harmonic ingredients would still exist, the instruments and amps, drums. Mid-'60s recording technology would not have been moved forward much by a less energized pop music industry. While I was too young to experience the explosion of the Ed Sullivan show, when I was a little kid I *do* remember the Sex Pistols making the news for a while - and the reaction, the starkness of the punk scene contrasted against the White Urbane Conservative, which without a doubt would still have existed in a non-Beatles timeline. In essence, without the Beatles, the Sex Pistols could have been the next most revolutionary musical event.
There were other non-Beatles prerequisite watershed moments, though - but without the counter culture movement I don't think they would have mattered. Nascent Kraftwerk was amazingly ahead of their time - I think they could have happened without the Beatles on the planet, but without the acceptance provided by the evolving counter culture movement I don't think they would have gained any acceptance at all. Unlike the Sex Pistols, there wasn't any societal rebellion, innate sex appeal, or youth culture aspect that would have jump-started something based around them. Or maybe there was an angle there relative to German culture at the time that could have made them spring forth from Germany like the Beatles from England? Would Brian Wilson be the same? I don't think Hendrix would have, but possibly? Genre-inspiring, culture-inspiring change? I think pretty much everything else requires the watershed of "saw the Beatles on Ed Sullivan" or being surrounded by people spurred on by that, and a youth culture with their innovation leading the way.
  7. Well, it's a Roland product. I think they're pursuing a particular type of Product Satori in features/controls/display; they're a very "Groundhog Day in 1992" user experience. I have a 50 that I got for use back in the old days, when I actually used the office I pay rent for to teach guitar lessons. The ridiculous, "Roland" thing is that 4 presets would be enough for my purposes, except the "2 banks x 2 presets", blinky-LED x 2 buttons scheme is... Product Satori Spartan. Would it have been THAT much more expensive to just have 4 buttons and 2 more LEDs? It's like the battle between the Line 6 Spider "here's everything at once" kitchen sink approach and the Roland "it's there if you want it, but you must EARN admittance!" approach. Yamaha almost had it right with the THR line, but their dealer distribution path is peculiar. So Fender decided that was the way to go with their digital line - except they expect you to pay a premium for what should be cheaper BECAUSE it's digital. I would guess all of these companies are deciding against doing anything that might remotely be construed as "copying" another's approach, to their detriment. As if Microsoft had never gone graphical with a mouse because "Apple does that". It's really interesting how diverse all of these companies have made their amps, with such little Venn diagram overlap in user experience.... I don't think it's particularly difficult, it just has Great Chain Reaction Catastrophe Potential. More fun when your drummer is taking great advantage of the free Red Bull. I would say in great part, humans are not any different. A.i. will allow for the potential to mine our responses to combinatory experiences in ways we will not be able to comprehend. Adversarial neural nets and machine learning are not pre-programmed responses. What may seem like indefinable aspects to us is buried in the data set, and can be brought out by a.i.
It *seems* like emotional response should be something that *requires* human experience to elicit, but the fact that it can be encapsulated in "art" means that whatever the subset of combinations is that triggers us - a.i. can manipulate it. It's maybe the most frightening thing about it IMO; there will be things discovered by a.i. that we can't imagine about ourselves. "Understanding" of the data set is not required for using it. There are supreme dangers to a.i. that human bias makes us blind to. The notion that it won't be able to make convincing music that elicits an emotional response is one. It doesn't have to know or care about context. I would tactfully suggest you are responding from a bias fallacy, and outdated tropes about a.i. Generative adversarial networks, machine learning, and other a.i. approaches are in a nascent stage right now but already doing things not only completely unexpected, but *beyond the understanding of how it's doing it*. The thought that it can never be "as good as us" is a reaction that will probably be our downfall. But maybe UA can utilize some of these techniques in LUNA and save us? / sorry Craig
  8. For singular transactions. For multiple people buying *their own copy* of a digital item, it becomes harder. Someone needs to incorporate steganography into the process, so that rights management to a *copy* can be sold. This would allow online concerts to be a thing that has sole unique value to the "concert goers", which would allow for very large, tiered promotion to occur à la Pay Per View, but with ownership rights. An artist, like the Rolling Stones, could announce a series of performances over a week, and sell them in limited quantities. Without needing multiple venues, they could offer different performances in different locations that would be unique, owned by the ticket purchasers. But there would have to be a way of tracing copied video, and blockchain won't help with that. Blockchain only kicks in when there is a "social stimulation push"; there has to be a large number of people involved. People want it to be "build it and they will come" when it's "build it and they might come, if they think it's a big deal for some reason". Unique signed video steganography would change everything for artists if someone would bother to do it.
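The copy-tracing idea above can be sketched as a naive least-significant-bit watermark: each sold copy gets a buyer-specific tag hidden in its pixel data, so a leaked copy can be traced back to the purchaser. This is only an illustration of the principle - the function names are made up, and plain LSB embedding would not survive re-encoding or compression the way a production watermark must.

```python
import hashlib

def embed_watermark(frame_bytes: bytearray, buyer_id: str) -> bytearray:
    """Hide a buyer-specific 64-bit tag in the LSBs of a frame.

    'frame_bytes' stands in for raw pixel data; 'buyer_id' is whatever
    identifier the seller ties to the transaction. Illustrative only.
    """
    tag = hashlib.sha256(buyer_id.encode()).digest()[:8]   # 64-bit tag
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    out = bytearray(frame_bytes)                           # don't mutate input
    for pos, bit in enumerate(bits):
        out[pos] = (out[pos] & 0xFE) | bit                 # overwrite the LSB
    return out

def extract_watermark(frame_bytes: bytes) -> bytes:
    """Recover the 64-bit tag from the LSBs of the first 64 bytes."""
    bits = [b & 1 for b in frame_bytes[:64]]
    tag = bytearray()
    for i in range(0, 64, 8):
        byte = 0
        for j, bit in enumerate(bits[i:i + 8]):
            byte |= bit << j                               # LSB-first packing
        tag.append(byte)
    return bytes(tag)
```

Extracting the tag from a bootleg and matching it against the sales database is what would tie the copy back to the credit card that bought it.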
  9. We don't need to wait for VR. People aren't thinking about the way "broadcasting" online can work, dynamically. An online performance has so much potential to be *more* than a concert situation. There can be great mystique. A synthesis of the morphology of cinema with a musical performance, in an almost "dawn of MTV" way - people just haven't realized that yet. People are thinking too much along the lines of "a live performance on a stage in front of a camera", when the "stage" *doesn't have to be a stage*. We're in an uncanny valley equivalent to the dawn of television, the Milton Berle era of presentation when everything initially was thought of as "showing vaudeville on a tiny screen". Artists could be selling exclusive performances online - the only hindrance being bootlegging, simple copying of the performance, which could be negated by a unique steganography-based encryption stamp in the video stream that would tie the copy back to the credit card that paid for a performance viewing. You'd then have the potential for selling limited "seats", making the performance unique and potentially more valuable depending on the "seats" sold. "Pay Per View", but through an à la carte encrypted version of Skype/Zoom/etc. I was sort of looking forward to the online performance thing becoming more popular; it started getting traction for small artists in the middle of last year - when there was still some of the original sentiment of "we're in this together, we have to defeat the pandemic". So much creative potential, because of the novelty of setting that can be presented - it only needs a uniquely encrypted steganography scheme in order to work.
  10. It just occurred to me. They're gaming Spotify. It shows up as "latest release"; from a functional standpoint, media owners will all start doing this in order to keep their subjects "up front" and not stagnant. It's the equivalent of coming out with Yet Another Box Set on Spotify. For acts that have an established catalog it makes sense, since you can't "un-release" an established record to "re-release" it to get it in front of people.
  11. Oh yes! Why isn't the pot moving! Oh, it's not up/down, it's left/right. Or vice-versa. Grrrrrrr. The thing that bugs me, and I used to harass Justin to get Reaper to implement, is that knobs in software are not "damped". Like you can adjust a mouse: if I "pull" on a fader fast but then start slowing down, *scale the resolution higher* for more control. Or alternately - there is zero reason that once you "touch" a control, its "surface" can't pop out to be huge (or the size you want), increasing the "throw" of the control for more accuracy. It's bizarre developers keep adding teeny tiny knobs/faders, OR the ability for a control to scale down, and THEN you've got to micro-manage the throw of the control AS IF IT'S A REAL HARDWARE POT THAT TINY. I want faders and knobs that pop out, grow, when you touch them. I don't want to have to hold down ctrl, or try to move my trackball by breathing on it to get 1 dB. OR WORSE YET, EVERYTHING SHOULD DEFAULT TO 0 (or revert) WHEN YOU DOUBLE CLICK!!!!! "Oh wait, I've changed my mind" (proceeds to waste 20 seconds trying to set something back to 0). How much of my life have I wasted "resetting back to zero"??????? AHRGRHhhh...
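The "damped" knob idea above is simple to express: map pointer movement to a parameter change, but scale the change down when the drag is slow, and snap to a default on double-click. A minimal sketch, with made-up function names and arbitrary threshold values (no real DAW's API is being described here):

```python
def damped_delta(pixel_delta: float, speed: float,
                 fine_threshold: float = 5.0, fine_scale: float = 0.2) -> float:
    """Map a mouse drag to a fader change, damped at low speeds.

    'speed' is pointer velocity in pixels per event; below
    'fine_threshold' the movement is scaled down by 'fine_scale' so
    slow drags give fine-grained control. Values are illustrative.
    """
    if speed < fine_threshold:
        return pixel_delta * fine_scale   # slow drag -> fine resolution
    return pixel_delta                    # fast drag -> coarse 1:1 tracking

def on_double_click(current: float, default: float = 0.0) -> float:
    """Double-click resets the control to its default ('back to zero')."""
    return default
```

The same speed signal could just as easily drive the "pop-out" behavior - enlarge the control's hit area while a slow drag is in progress, which lengthens its effective throw.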
  12. You will still have creative variability with sound. For instance, with the ML amp plugin that's out now, you have to use your own cab IR. And you can preamp it with whatever you want, of course. What I want isn't a restriction in choice of, say, the cab or miking, but a quick way of "Staying Within Accepted Parameters". Which is what everybody does; you decide to roll out the low end, fix the guitar if it's too spiky, whatever. If I can move a slider that fades in "Terry Manning guitar channel EQ results" in my situation, I prefer that. I have a problem with establishing a *baseline* - I don't have ATC monitors in a nice room, or a 2nd engineer to move a Royer around on a perfectly maintained Bassman on 7 while I listen through a Neve. I'm in my own Uncanny Valley of potential that theoretically can be nearly spot on to anything, IMO; but the time expenditure and coarseness of the "laboratory" is a time sink, and prone to error. People will default to *hardware* presets. Which I suppose is the Cool Thing one does if you're CLA, JJP or some such, and have the blue 1176 picked out, sitting there just for that One Thing. Still, a decision has to be made for setting it, when to use it; it won't be any different with the "DAW-centric" a.i. plugins. It will just be curated a lot better than a preset is today. In an ideal artistic world all initial conditions would be met, and everything would be like a Brian Eno/Daniel Lanois experiment fest, but most people are doing "preset combination" trial and error on some level from an audio engineering standpoint IMO.
  13. But see, this is because presently there is still something of a rudimentary expectation of what "being a musician" is. This technology will provide the illusion that *everybody* can think of themselves as "a musician". The reflex is to expect some sort of philosophical regulation of that; but I've met people that think of auto-tune as just an either/or aspect, playing actual instruments being optional. When digital consoles allow a plugin that imbues completely natural vibrato on a singer, as well as tonal/timbral quality that mimics Whoever Famous Singer - and guitar amps with the same capability, electronic drum kits that quantize on the fly to grids that perfectly replicate Famous Grooves - "everybody" will think they're a musician. Like I said, I know people that call themselves "musicians" because they make some choices in a phone app and call it a "song". A.i. is going to let anybody make something that sounds like music, if they want to bother. People watching other people onstage "making music" this way won't have any awareness of the divide between the technology and actually being a musician. For those that even sort of know, they'll insist it's no different, just like being a DJ these days is "being a musician". The church music scene: what amounts to some pretty big productions, involving digital boards, MIDI'd tracks, lights, sometimes electronic drum kits, and of course the guitar players have Enormous Variations on the Edge pedal boards. Any church that has a drummer that is "marginal" will buy the Latest $500 Roland or Yamaha - or Line 6, or Nuevo Corporation drum brain that fixes the guy's flammed beats, asynchronous push/pull, flubbed drum fills, sketchy press rolls, maybe adds ghost strokes.... with a sound that comes out of the box as a 2-mix that replicates 50 different Established Classic Drum Sounds. In 5 years, if covid doesn't wreck everything, that will be reality in every church.
The same will go for the guitar player, bass player, and singer. The *real* musician band will sound rough around the edges to the Average Median Audience. Will they appreciate the talent of a really good, *real* singer? *They already don't*; they're *only listening to the timbral balance and "authentic" delivery of a single phrase*. They only need to be able to *sell a style* today; if their pitch is sketchy, tone kinda all over the place after the first syllable, or whatever else that wouldn't pass muster 30 years ago - people *don't hear it anymore*. A.I.-enhanced "performances" will be the expected norm, and audiences will lose the ability to discern what's real from fake, just as they already have. There will be a Perfect Katana in a few years. I know a guy in Nashville that has always built his solo gig around looping his guitar parts between a pair of Line 6 delay pedals. He has it down pretty effortlessly/seamlessly, and I'm sure 99% of the people in the audience have no idea how he's doing what he does. In a sense it's kind of a waste because of that - he really may as well be singing to tracks - I'm sure there are some people that assume that IS what he's doing. 2 things: 1) people in the audience these days are not as musically astute as they once were. 2) technology has made it worse, because it's raised expectations to an unrealistic level as to what something should sound like "live" these days. I played at a small theatre with a Beatles band once. I tried to make use of a Variax>Floor Pod setup to mimic each song (which I never did accurately IMO; out of control logistics in this band...). Anyhow, I would do things like the sitar parts with the Variax, backwards sounds with a DL4, etc.
BUT, after we played "And Your Bird Can Sing" (with the "play both parts in harmony" fun lick) I turn around at the finish and, because the crowd is SO up close to the front of the stage, I hear these 2 guys yelling at each other in the front row: "it's the pedalboard thing he's using, it makes all the parts come out right", "yeah, I hear that". Because they could see me tap-dancing on my pedal board, and obviously a "guitar" doesn't sound like a sitar, or make backwards sounds - when I'm literally playing something technical, *it's not actually me playing* to them. People will just accept a.i. the same as they have autotune and gridding in DAWs. To those guys mentioned above, *they were already in the near a.i. future*. I really wish I'd just used a 2 channel amp in the aforementioned situation.... I don't know about any scenes now, I have no idea what's going on in Atlanta these days. The R&B/hip hop scene has always been huge there, and I see no pushback to using any a.i. "cheats" in a box there, but I don't even know if they have any other real "scenes" anymore? It looks to me like All Scenes have been sublimated in the south east into something that can be summed up by saying "Music Played by People Wearing a Trucker Hat, and Maybe With a Beard". ...and "covers" that are simplified and have all of the rhythms changed into "jam band/Dead shuffled-strumming". That is apparently what the Augusta music scene has degraded into over the past year: young people willing to get covid in bars/clubs playing 3 chord songs, singing warbly about lives they've never led. ... and they're smushing all of the songs into the same tempo range? (I just watched a local YouTube channel with a bunch of local "artists"). It's like they're trying to make everything match "been asleep in the tent at Bonnaroo": up tempo songs are slowed down, slow ones sped up, into a sort of gelatinous struma-lumma trucker cap "arrrrrrrhhh ruhhhggg" presentation.
It's like "early 2000s Athens jam band" is colliding with "late 2010s "Americana" faux-country". There will be a plugin for that.
  14. It looks like Tennessee is around 1 in 100 prevalence? A little higher for Georgia, way higher for Florida. You can't responsibly have a gig or go to one when you know there is at least someone present infected, and it's an opportunity for them to infect more. For certain demographics it's probably much worse than that, but then the "sophisticated" Obama party appears to "maybe have infected some folks" - based on Martha's Vineyard local numbers, where vaccination is 90%. So making educated guesses on an event being "safe" based on the nature of the attendees doesn't work. Plotted against the UK curve we have at least 2 more months. On the other hand, they had mask mandates the whole time and a better vaccinated populace. At the rate the U.S. is getting infected, the "sociopath natural vaccination" herd immunity experiment should kick in around mid December. I predict we'll see a sudden droop in the curve around October. I also predict that it might plunge in December, but it's going to bottom out around 10-15,000 a day again. 1) If it goes down after an incubation cycle at the 10,000 mark - say 3 weeks - we should know by late January if herd immunity is happening. The downside being everyone will again act as if it's over; natural immunity going away in a few months, combined with vaccinations waning, will mean, once again, it will tick up around March 2022. 2) If by October it *levels out*, and doesn't go down - *it will again be an indicator there is another player involved*: lambda (thanks, Florida and Texas). The fact that the original strain tail-out (it was "adaptive release" or "opto release" when it should have been 1176...) wasn't linear obviously indicated delta was rising. That it took weeks for epidemiologists to see that is baffling; the medical community talking heads have become so inculcated in the Fear of Lawsuits that they presume their "job" now is just to comment on what has already happened.
When they should be conservative and THINKING AHEAD to warn the populace. Regardless, if it doesn't fall off linearly, as it's rising now, we'll know lambda is underway. If lambda catches on, with little to no efficacy from vaccination or natural immunity, 2022 will shut all of the idiots up - one way or another. There are no safe/responsible gatherings until possibly late January IF best outcome-scenario #1 happens. But of course, just like with vaccinations, if it starts going down in October, everyone will screw it up: *if you use a fall off in the infection rate as an excuse to "reopen", "go back to normalcy", start doing gigs - YOU are wrecking our future*. Because without it burning out below 10,000 infections a day, giving it an opportunity to spread - which is what you're doing if you go to any gathering - is both giving it another chance to replicate and MUTATE, and the lambda variant a chance to get a foothold. [align:center] WE MUST RESIST THE URGE TO GO TO MASS GATHERINGS UNTIL IT'S OVER! [/align] Lambda will WIPE US OUT if it catches on. Don't contribute to wrecking the future. / bonus: covid infection results in what amounts to brain damage. Covid infection long term Alzheimer-like effects
  15. That's fun, but people are making "preset decisions" all the time and just don't think of it that way: using a 57 on snare, a 121 on guitar cab, using a Fender amp for a clean sound... a bass guitar for "bass". There is variability, but there will be with a.i. plugins as well. But what is "live music" today? "Tracks" and tuned vocals - if there are actual vocals, or even someone that can really sing on pitch? It's going to get more diffuse. There are 2 aspects: a.i. to get *sounds*, versus a.i. to "create" an output. I want a faster way of getting a sound. But it's also going to mean crazy things like amp presets that do automatic vibrato, auto chord voicings, auto-tune. A "live band" can be people that basically don't know anything about music that are presenting themselves as "a band". But we're already there from a studio point of view now. The real question will be whether humans will be able to discern the difference. Modulate A.I. - real time voice transformation. The problem will be *an avalanche of new and different*. Are you aware there is a 24-hour non-stop streaming YouTube channel of a neural network that continuously makes death metal? The other Dadabots offerings are.... interesting. The non-stop "lo-fi classic metal radio" is pretty crude, but again only because of limited resources. Presently people are only applying the technology to data sets trained as a whole, which is why you can't make out the words in the lyrics; but a GPT-3 lyric generator applied on a multitrack level would fix that. The limited bandwidth would be fixed with training done on a discrete instrument level, which is where a UA, NVIDIA, or Microsoft/Apple would come in. Are you aware of the OpenAI "Jukebox" project? Listen to this starting around 1:53. A.M. radio bandwidth, but I think it could pass a "clock radio doctor's office" test: A.I. Jukebox. The A.I. Jukebox stuff literally frightens me. It reduces human creation to pure data and recreates it.
It's like listening to an a.m. radio from another dimension. The "curated" examples are entertaining, but if you listen to some of the "continuations", sometimes it's uncanny what is there. It's in a wild, feral state right now, but it won't be like that for very long. There will be an app that will let people concatenate elements - like you can do already if you have Python programming ability - to make music that has never existed before. Not algorithmic variations, but something that will pass a Turing test for realism. There should be a differentiation made, or rather a term needs to arise to explain the following difference; I suppose there will be such a term eventually. Music that has been created by humans, but the sound has been modified by a.i. Music that has been created by a.i. from scratch. Because it's difficult to talk about one without referencing the other, or sounding like you're talking about the other concept when you're not. Maybe "A.i. novel music" vs. "A.i. massaged sound". Then intersections; I don't want John Bonham beats copied from Led Zeppelin tracks, any more than just cutting and pasting a MIDI track. Would I like to do a project with Dave Grohl playing drums? Yes; would I tell him "Dave, absolutely do nothing that John Bonham has or would do"? No. But it's going to be there in his playing in some aspect. Well, I am, too. But the tools to make music sound better, faster - I'm all for them. From the perspective of a one-mic Sun Studios old school way of recording rock music, multitracked music has a "sound" that doesn't exist in reality in front of you. The choice of using the technology is an arbitrary one, just as choosing to use a.i.-massaged sound is. It can still be post-mangled, or unique combinations chosen, or particular models created.
Or reduced in potency - a mastering plugin that has been trained on CLA recordings doesn't have to make the result sound like each instrument is super-hyped; it could just imply the EQ habits on the 2 mix, not the instruments. That sort of thing is what people are doing anyhow, when they try to emulate the approach of a particular mixer/engineer.
  16. Me too, like the way some musicians do cover versions that give a totally different twist to a song. I was hoping for more from the Beatles archives. I'm not particularly interested in hearing what George Martin's son does with a mix, versus Michael Brauer. I want to hear Michael Brauer mix the Beatles. I want to hear Brendan O'Brien mix Led Zeppelin. Or Electric Ladyland. I want to hear Trent Reznor mix _Computer World_ by Kraftwerk. Or _The Wall_. Or Richard Dodd remix _Synchronicity_ by the Police, with no reverb. I want the original Z.Z. Top without reverb. The original mix of Fiona Apple's _Extraordinary Machine_; there is no precious reason to only offer one version of something that is ephemeral in the form of electrons. Not just milquetoast remixes of unknown provenance. Or worse yet, borderline Uncanny Valley things like the Beatles situation where "wait a second.... the hat isn't as loud as it was on the original?". Or Jeff Lynne's "let's see how close we can remake a perfect recording" with _Out of the Blue_, or Sharon Osbourne/whoever retracking the drums and bass on _Blizzard of Ozz_ (which made me think I was going crazy one day showing a Randy Rhoads solo to somebody and thinking I'd totally imagined the bass and drums NOT being what I was hearing...). Could you have made a composite mix that sliced out maybe quarter note chunks out of each?
  17. What can eventually happen is that you would make a model of your voice in idealized conditions, and that would be "weighted" against adversarial-set models of idealized singers. You'll have a fader that blends from your voice to the model. Which is where the training resources come in; the effort put into a program like EmVoice, applied to an adversarial network, can create a system whereby the input is almost reduced to just being parameterized as text (like Adobe's Voco demo), timing, rhythmic inflection, melody. Probably with less effort than was put into EmVoice. It will allow people to make any singer sound like any other singer at some point, with practically no effort. The only problem right now is having the will to go to the trouble and having both the right training data set AND knowledge of how to approach that. What will also happen at first is that some company will come out with an adversarial model of a Generic Great Singer, and it will be one step better than the EmVoice approach - but it will mean endless variations of something that sounds like Celine Dion, I would imagine. Eventually it will be able to train on any morsel of a sample and we'll have all singers available at our fingertips... But first someone needs to get started with making an easy user interface, and doing a musically useful model - instead of Obama and David Attenborough. Yeah, that's horrible. I know people that call themselves "musicians" because they cobble things together from phone apps where they press a button to select a pre-canned melody or even song fragment. This is going to be much worse. Anything that goes in can be made to sound like anything that goes out. Literally. Stylistically. Right now, a.i. curation should probably be considered a form of "making music" IMO. It's very coarse, but what I'm waiting for is finer controls: I want to pat my hands on my desk and have 20 drum performances come out that sound like John Bonham....
Which sounds far-fetched, but it's not. I do too, but 10 years from now "music" will be diluted by a.i.-created music. It's going to change what being a "musician" is. I want what gets me to the end easiest. What I am describing is no different than any other signal processing decision, although it's not signal processing. If I can't get a canned drum sound close enough to what I hear in my head - which might be Hal Blaine, or Art Blakey's ride from a certain record - I would love to have a slider that lets me do that fast. And if I want a Foo Fighters guitar sound NOW, no waiting - that too. I don't have to keep it, or do anything else to it, or alter its incarnation - but if I've got a guitar sound in mind that is halfway between Mark Knopfler on "Down to the Waterline" and Eric Clapton on "Son and Sylvia" - something that occurs to me *instantly*, right now - if I can move some knobs to get that in a few minutes, that would be fantastic. And my voice is horrible. I've fought with EmVoice to do some songs, and while it's interesting to see people's reactions to it - "who is singing?" - it's only "sort of" easier than working with a real singer. Having something that could instantly morph the inflections to an iconic style wouldn't negate the artistic pursuit, because I could then go into "producer mode" that much harder - and more effectively. I view it as liberating. I can see a point in the future, finally, where I will be able to quickly get things out of my head, sounding as I want them to, without the battle. The downside will be that people with no musical training or knowledge will be able to make something that sounds "professional", with a glint of The Real Thing; but I can't do anything about that. There will be plugins that completely morph performances into particular styles. Effectively that's already a thing now; it's scary that they can take a song in one genre and have it converted to another.
Crude and bandwidth limited for now, but it won't always be that way.
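The "fader that blends from your voice to the model" idea above can be sketched very simply. This is a toy numpy illustration under my own assumptions - `identity_fader` is a made-up name, the arrays are random stand-ins, and a real voice-morphing system would interpolate learned embeddings rather than raw spectral frames - but the control concept is the same:

```python
import numpy as np

def identity_fader(source_frames, model_frames, fader):
    """Blend source spectral frames toward a target-voice model's frames.
    fader=0.0 keeps the original voice; 1.0 is fully the model."""
    fader = np.clip(fader, 0.0, 1.0)
    return (1.0 - fader) * source_frames + fader * model_frames

# Toy "spectral envelopes": rows = time frames, cols = frequency bins.
src = np.random.rand(100, 80)   # your voice, analyzed
tgt = np.random.rand(100, 80)   # the idealized-singer model's output
half = identity_fader(src, tgt, 0.5)  # halfway between the two voices
```

The point is just that "how much of the model" becomes one continuous parameter the user rides, like any other fader.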
  18. Yeah, they could possibly do it on the code bones of CEP, I suppose. The ML/a.i. community isn't going to wait around. There is a true ML guitar amp that is a grass-roots GitHub project; it's rough around the edges, but usable. Or actually, Nvidia could develop tools like what I'm talking about. A video game development engine might be the first place what I'm talking about appears.
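For a rough idea of what those grass-roots ML amp projects are doing conceptually: they train a network on paired dry/amped recordings so it learns the amp's nonlinear input-to-output mapping. Below is a deliberately tiny, untrained numpy stand-in (a short FIR filter into a tanh soft-clip; `TinyAmpModel` and its weights are hypothetical, not from any actual project) just to show the shape of the idea:

```python
import numpy as np

class TinyAmpModel:
    """Minimal black-box amp sketch: a short filter followed by a tanh
    nonlinearity. Real projects use WaveNet-style or recurrent networks
    with weights learned from amp recordings; these taps are made up."""
    def __init__(self, taps):
        self.taps = np.asarray(taps, dtype=float)

    def process(self, x):
        # Filter the dry signal, then soft-clip it like a driven amp stage.
        pre = np.convolve(x, self.taps, mode="same")
        return np.tanh(3.0 * pre)

amp = TinyAmpModel([0.5, 0.3, 0.2])
dry = np.sin(2 * np.pi * 110 * np.arange(0, 0.01, 1 / 44100))  # 10 ms of A2
wet = amp.process(dry)
```

In the real projects the filter-plus-clip stage is replaced by a trained network, but the interface is the same: samples in, "amped" samples out.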
  19. Sounds fine. Does it sound "better"? Middle is more compressed? Acoustic sounds like it has a delicate de-esser on it above 3k (someone feels Natural Acoustic Guitar has too much transient information.....?). Middle has a forward low mid to it, and the mix compression has a slower attack and shorter release - it's meant to "pump" on the beat, I suppose? Horn is now in the middle, super up front. Vocal dry, up front. Sides are tamped down. I prefer the old mix. I think in a thick production like this, ultra-dry, up-front vocals sound detached from the song. And I like the way the horns bloom dynamically on the chorus on the sides - it's a response to the vocal melody, and by having it restrained it is less emotional. Among other things. I'm curious to hear Certain Name Engineers remix things with an artistically different approach, but this is kind of like George Lucas color-correcting _Star Wars_ Ep. IV so that all of the outdoor scenes have the same color temperature. /$.10
  20. What they should be doing is using access to pro-level users to help them train ML processes that automate tailoring "guitar, bass, drums, vocals" channels. The future will be a.i. processes automatically creating successful mixes. There will be a year when the headline topic is "a.i. mixing: are we there yet?", the next year it will be "which a.i. DAW mix is the best?", and then the next it will be over. It would be nice if some of the classic "hero" engineers could work with a company to train ML models. It would go faster, and have a more instantly artistically-valid result, I think.
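To make the "train on engineers' decisions" idea concrete, here's a minimal sketch under my own assumptions: learn a map from per-channel audio features to the fader setting an engineer chose, given a corpus of reference mixes. Everything here is synthetic (the features, the "engineer taste" weights, the `suggest_gain` helper), and a linear fit is a gross simplification of what a real ML mixing system would use - it's only the supervised-learning shape of the problem:

```python
import numpy as np

# Pretend corpus: 200 channels, each described by 3 features
# (say RMS level, spectral centroid, crest factor - all synthetic here).
rng = np.random.default_rng(0)
features = rng.random((200, 3))
true_w = np.array([-6.0, 2.0, 1.5])            # hidden "engineer taste"
gains_db = features @ true_w + rng.normal(0, 0.1, 200)  # observed fader moves

# Ordinary least squares: find w minimizing ||features @ w - gains_db||^2.
w, *_ = np.linalg.lstsq(features, gains_db, rcond=None)

def suggest_gain(channel_features):
    """Predict a fader setting (dB) for a new channel from its features."""
    return float(np.asarray(channel_features) @ w)
```

Swap the linear model for a deep network and the toy features for real channel analysis, and this is roughly the training loop a company with "hero engineer" data could run.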
  21. They're suffering from mission creep. Probably have some Marketing Guy with a piece of paper that says he should be advising the company, telling them they're not profiting as much as they *could* - instead of worrying about solidifying their base product(s). I would think they see the same writing on the wall that Avid sees. We'll look back at this era of DAWs as the time when they were still getting UI ergonomics and processes together. This is the DAW equivalent of the age when consoles had giant knobs for faders, geometrically angled-up panels, giant swaths of blank surface area, switches for panning - things you'd never see even in a budget mixer these days. People seem bent on making more and more arcane shortcuts to try to "make music" faster and faster, but really what is going to happen over the next 5 years is a shakeout of basic UI integration. Performance-wise I'm sure we're all at the threshold of not being hamstrung by processing power anymore, and with great free alternatives to just about any possible plugin available, we're at diminishing returns. The DAW that is most flexible in process is going to win. I'm going to say this again: the final frontier for companies like UA is going to be machine learning/a.i. I only know of one plugin that truly uses ML/a.i. in a forward-looking manner, but I see many companies touting "a.i." that I'm pretty sure are not really exploiting modern advances in it, much less using it to the extent it's going to be used in the end. There will be a new generation of plugins eventually that will make everything we're using now seem "old". It will also change the way we make music forever. The structural hierarchy of the way these companies are run will probably prevent them from taking advantage of things as I'm talking about. There will be new little companies coming out of nowhere that will have software that people have *no awareness* can exist right now.
Whose description can't even be understood by most people today. When these new-generation plugins happen, in short order "somebody" will integrate the new processes that arise from them into a DAW, and it will make what we see now look like a lot of flagellating in a stagnant pond. A few companies could sidestep this by working on a DAW that integrates those processes from the ground up; they'd be able to offer a product that gets to an end result faster and more consistently than any other DAW. But they'd have to commit to it, and dump a lot of resources into "prototyping". I'd be on the lookout for a plugin equivalent of the paradigm shift Line6 pulled off in the 90s. Apple might/could do it. Microsoft. Trying to make another hardware-dongled DAW probably isn't the future, any more than the covert oil investor hoping he can sell hydrogen to put in cars one day.
  22. Useless. If you're vaccinated you can still be infected and transmit it to others. ... which should be against the law, effectively no different than identity theft. Thanks to clubs, restaurants, and other public places being open, we're going to get to find out all about lambda. It's "pretty definite". Not that it matters; it's already definite vaccines are not as effective against delta - for vaccinations older than 6 months, basically not effective at all. Some people here said that months ago. And I'm going to say it again: eventually nations will realize the sooner they eradicate it, the sooner their economy is back to 100%. And the only way to do that is with a real lockdown that is longer than the duration of the course of the virus, 28 days minimum. The smart nations will do it, against the political uproar it will cause for a month - like what they originally did in Australia before the idiots wrecked it - and eventually most nations will follow. Because the Paris Orleans study shows vax and natural immunity efficacy is limited in duration, AND the variants will continue to evade. There is no other way out. We're wasting time and lives. It won't ever be normal now. We should have locked down back in February until everyone was vaccinated. It was the only obviously prudent and logical thing to do, and we didn't do it. There is zero reason to think it's ever going to get better now; immunity will never catch up to the variants, thanks to people continuing to allow it to replicate. No it's not. Continue to do them and go to them and you're helping to prolong it. The Paris Orleans study shows immunity, whether artificial or natural (natural not really being very effective), wanes in 6 months. This never stops unless people stop making excuses to go around other people - "I'm vaccinated", "I'm wearing a mask", "they're wearing a mask", "we're outdoors" - none of that matters at this point.
We're just treading water until people get tired enough of it, or finally smart enough to be scared of it, to actually do something real. Going to shows right now is making it worse.
  23. I have friends that are doing fly dates right now. It makes zero sense. 5-15 seconds of exposure, maybe less, is all that's needed. A case in Australia was traced to having been infected just by *walking by someone* - shown by a security camera in a grocery store. There are 2 studies done in the U.K. that show soccer matches are super-spreader events, not just because of people sitting in front of other people, but because of standing in line to get food, and being inside public transportation (which was shown to be a vector by Japanese studies last year). Studies that show effectiveness against hospitalization and death are not the same as protection against infection and transmission. Studies show being vaccinated has significantly reduced effectiveness against *infection* with delta, and transmission reduced by maybe only 25%, or lower. So even if you're vaccinated and you don't care if you are infected - people that don't wear masks are still VERY contagious; masks reduce that, obviously. With delta having exposure times in seconds, there is no way you're safe being near any stranger in public. It's a fact. Which leaves people who have the attitude of "I'm vaccinated, so if I get infected I'm 'protected', like Dr. Fauci and Walensky say". The problem being LONG COVID IS REAL. Doctors are able to identify people who have had covid *just by looking at PET scans of their brains, MRIs of organs*. It looks like Alzheimer's. The previous figure of 30% of infections having some kind of organ damage is probably going to be revised upward a lot. If for nothing else, you don't want to get covid because, I guarantee you, at some point in the future *your insurance rate will go up if you've had it*. Because the reality is there isn't such a thing as a "mild" infection; just because you think you feel fine doesn't mean you really are. As I wrote before, I've some experience in being able to "evaluate" post-covid people in guitar lessons.
In most of them I'd claim there's cognitive decline. From the guy I have to remind what he was literally just doing 15 seconds ago - who has lost the ability to remember how to name chords (he'll have trouble remembering what an open C chord is... then he'll call an E major a C major... he'll say "oh yeah, hahaha") - to the woman I had who had to quit because she literally couldn't focus on anything for longer than a few seconds; she could play chords, strum in time, but would just go blank on what to play next, or make the same mistake over and over... and realize it, laugh about it. She quit, told me "I can't seem to do this anymore". That's terrifying to *me*, but maybe not to other people? YMMV I guess? The vaccines keep you from DYING. They DON'T keep you from getting infected and transmitting it to others. How bad you get sick has nothing to do with long-term effects - there may actually be an inverse relationship. My wife (professional statistician) says the stats imply it's going to be a syndrome like herpes, fibromyalgia, lupus or Epstein-Barr: once you get it, you're always going to have it. Like shingles, you'll have lifelong flare-ups to deal with. Except, unlike shingles, it will be your brain, heart, liver or lungs malfunctioning. Have fun at the gig....
  24. I remember when the sky wasn't sodium orange. I can't really see the stars from most places around town anymore. I remember when you couldn't read the text in a book outside at night. You don't even need headlights to drive around at night anymore.
  25. That, or "this will make you wish for a return of the Taylor Swift Years!".