
1 hour ago, Dave Bryce said:

There is no doubt in my mind that the “stereo” mix you hear when you toggle the “ATMOS” effect off in Apple Music is way different than if you listen to the original stereo mix on Airpods with and without Spatialize turned on.  I’d much rather do the latter.

 

dB

 

I do agree wholeheartedly with this point. Just like when we get a stereo remix of a classic album, it ALL needs to be clearly labeled; we consumers need to know exactly what we are getting, and we need to be offered the choice.


Editor - RECORDING Magazine


Quick note, if I may:

 

I’ve noticed that there are folks who don’t understand the difference between ATMOS and Spatial Audio. I suspect there may be more folks than not who don’t even know there is a difference.

 

Basically, ATMOS “is a surround sound technology that creates an immersive, three-dimensional audio experience. So in effect, it is an audio format that allows producers to place different audio sources in a 3-D soundscape around them,” where Spatial Audio is an Apple protocol that works with standard stereo files using the accelerometer and gyroscope in Airpods, and “is an active modulation system that will change based on your head movements and the position of the listening device.”


Here’s a good explanation.

 

dB


:snax:

 

:keys:==> David Bryce Music • Funky Young Monks <==:rawk:

 

Professional Affiliations: Royer Labs • Music Player Network


Almost every engineer I have talked to who is mixing in Spatial Audio (using it as a blanket term, which it may not be) does the mixing in Dolby Atmos (the only way to currently create "spatial audio," I think) and then has that same mix rendered/translated/converted for binaural playback, which is essentially what Apple, Sony 360 and others use in some branded form or another.

Editor - RECORDING Magazine


3 hours ago, Paul Vnuk Jr. said:

Almost every engineer I have talked to who is mixing in Spatial Audio (using it as a blanket term, which it may not be) does the mixing in Dolby Atmos (the only way to currently create "spatial audio," I think)


I think so, too.

 

3 hours ago, Paul Vnuk Jr. said:

and then has that same mix rendered/translated/converted for binaural playback, which is essentially what Apple, Sony 360 and others use in some branded form or another.

 

I’m under the impression that ATMOS is a surround mixing protocol which requires a physical ATMOS surround system to be played back correctly.  You actually have to have the speakers physically around and above your head.  The configuration can vary…but at the end of the day, a physical surround system is required.


Spatial can only be heard on certain Apple devices with a pair of stereo “speakers” (in headphones).  It’s basically a CODEC.  An ATMOS mix is converted down to the two speakers; then, the Spatial algorithm interacts with the gyroscope and accelerometers in the Apple headphones to approximate the ATMOS mix as well as the stereo headphones can physically do that…but it doesn’t sound the same as listening to an ATMOS mix on an ATMOS system.  

 

I believe it’s not possible to hear Spatial on anything other than a stereo source with Apple’s electronics and CODEC…whereas an ATMOS mix will work properly with pretty much any ATMOS surround system.  The Spatial CODEC also sounds interesting - to me, anyway - when applied to conventional stereo mixes (with or without the “Head Tracking” engaged…although I way prefer it without the Head Tracking).

 

I could be wrong.  While I am not an expert, I am a surround enthusiast…and I do believe this is correct.  If it isn’t, I wanna know more…. 🤔

 

dB


:snax:

 

:keys:==> David Bryce Music • Funky Young Monks <==:rawk:

 

Professional Affiliations: Royer Labs • Music Player Network


1. Fully agree with Craig that headphones are 99% of how consumers will access immersive content (which will be movies/games more than pure music, most likely). 

2. Widespread availability of quality HRTF (head-related transfer function) data and mapping to one's headphones of choice will be critical to quality and ultimate success

3. I suspect the current push on top mixers to do ATMOS is that industry leaders have decided this is the direction, and so need to get it rolling and let people learn.

4. Why?  Format re-sell?  No.  People already don't buy music.  It isn't CDs all over again. 

        a. I bet it's for easier sync licensing.  If you want to place the song in an ad or media setting, that setting is going to be ATMOS soon - just like everything has had 5.1 for a long time.  Read the Netflix audio specs.  They want content that will have as long an asset life as possible.  High spec video and high spec audio.

        b.  No one wants their track to be lame and small when it shows up in an algorithmic playlist.  Most people aren't listening to albums, or even making their own lists.  Songs just "show up".  It's like the loudness wars.  Let's call it the spatialization wars.  Mixers will figure it out.  So will consumers.  Over time. 

5. Every pro who hears it in a good setting raves.  There is something actually useful there.  And everyone wants it to be true because it holds the promise of being more than a little better.  But it will take time.  Stereo didn't win over mono for a long time.  But it did.  And this isn't 5.1 (it isn't about rooms at all).  It isn't about radio. 

6. We live in a video-driven world, not an audio-driven one.  Video is going immersive. Games are going immersive.  Games are 9x larger than movies by revenue, which in turn is larger than music.... If game audio is immersive, then a whole lot of band/music content needs to be immersive.  It's a primary listening environment for many.  If movies are immersive, and people are primarily gaming and watching visual content, won't music that is not also immersive feel out of place, small and narrow?  Like the difference between mono and stereo? 


7. Another driver would be the move toward a metaverse of some kind.  Virtual reality/augmented reality has a LOT of $$ going after it.  I totally get that it is early days, but those environments are all envisioned as complete immersion experiences.  Billions and billions of dollars are at stake.  Mandating spatial audio is very low-hanging fruit.  Games and other projects already capture all their sound design with high-order ambisonic mics for the sound effects.  An immersive virtual world needs immersive audio. 

 

This and the games probably mean nothing to most here.  But these things are very big and will be very big to a lot of people who didn't grow up listening to live bands in bars, or playing in said bands. 


Nathanael_I's posts seem reality based and well-founded. 

Craig is spot on that headphones will be the primary choice for listeners, and it seems likely to me that they will be primary for recording engineers as well. There are too many of us recording in our homes now, using nearfields and headphones. 

 

I'd be fine staying with stereo but I don't want to be left behind and there's no feasible way to put together a full Atmos system in an appropriate room at this point in time (assuming I am probably not going to win the lottery!). 

 

It's going to take more than headphones: new or adapted connectivity will be required to run all those discrete channels, headphone amplifiers will become much more complex, and DAWs have some catching up to do (although I suspect most are working on that). I think it will be wireless; there will need to be time alignment, but as long as everything comes at you at once it should work out for mixing. Tracking could be a separate problem; studio rats may need an additional mode that allows tracking to continue as it is now. 

 

I think transducer technology is up to the challenge of multi-channel headphones; I don't see any other way to do it properly that could be in common usage. 

I guess I'll wait for the really smart people to suss it all out... 

It took a chunk of my life to get here and I am still not sure where "here" is.

18 hours ago, Dave Bryce said:

where Spatial Audio is an Apple protocol that works with standard stereo files using the accelerometer and gyroscope in Airpods, and “is an active modulation system that will change based on your head movements and the position of the listening device.”

This would be an interesting twist on the debate, but I'm not 100% sure it's true. Apple seems to say, here, that Spatial Audio only works on a Dolby Atmos source. To use Spatial Audio, you have to have a Dolby Atmos mix of the music. Can you point us at a link that shows how to listen to Spatial Audio with a standard, old-school stereo mix as the source?

 

Also in that Apple doc I linked to is a link to a page which seems to say the 3D head tracking is not an intrinsic part of Spatial Audio, but is a subset. I.e. you can have Spatial Audio without head tracking, but you can't have head tracking without Spatial Audio. 


1 hour ago, dmitch57 said:

This would be an interesting twist on the debate, but I'm not 100% sure it's true. Apple seems to say, here, that Spatial Audio only works on a Dolby Atmos source. To use Spatial Audio, you have to have a Dolby Atmos mix of the music. Can you point us at a link that shows how to listen to Spatial Audio with a standard, old-school stereo mix as the source?

 

Try it. :idk:

Put on your Airpods, play a stereo track from your library, and switch Spatial on - both with and without Head Tracking engaged - and then back off.  You'll totally hear the effect.  It's not subtle.  

 

The Head Tracking really is kind of annoying, IMO.  Easy to confuse it, especially if you move your head quickly. 🙄

Now, if they're defining "Spatial" as their approximation of ATMOS from a pair of stereo headphones...then yes, it would make sense that it requires an ATMOS file; however, if that's the case, why not set the feedback dialogue on the iPhone/device controlling the Airpods to say something other than SPATIAL when it's engaged with a conventional stereo file, to avoid confusion? 🤔


 

Quote

 

Also in that Apple doc I linked to is a link to a page which seems to say the 3D head tracking is not an intrinsic part of Spatial Audio, but is a subset. I.e. you can have Spatial Audio without head tracking, but you can't have head tracking without Spatial Audio. 

 


That I fully agree with, as mentioned above.

 

dB

:snax:

 

:keys:==> David Bryce Music • Funky Young Monks <==:rawk:

 

Professional Affiliations: Royer Labs • Music Player Network


1 hour ago, Dave Bryce said:

Put on your Airpods, play a stereo track from your library, and switch Spatial on - both with and without Head Tracking engaged - and then back off.  You'll totally hear the effect.  It's not subtle.  

That, I cannot do. I'm not gonna set up Apple Music here, for reasons far beyond the scope of this discussion. 🙂

 

This would not be the first, or last, time that Apple has been less than forthcoming about how a significant customer-facing feature is supposed to work. We mere users may never know what Spatial Audio actually does, or even what it's supposed to do, on a technical level. 


1 hour ago, dmitch57 said:

That, I cannot do. I'm not gonna set up Apple Music here, for reasons far beyond the scope of this discussion. 🙂

 

You don’t need to turn on Apple Music. You can do it using tunes from the stereo files in your own library stored on your phone.

 

I also resisted signing up for Apple Music until recently; but, as a surround enthusiast, I actually had to hear what was going on with Spatial Audio, so I bit the bullet.

 

 

1 hour ago, dmitch57 said:

 

This would not be the first, or last, time that Apple has been less than forthcoming about how a significant customer-facing feature is supposed to work. We mere users may never know what Spatial Audio actually does, or even what it's supposed to do, on a technical level. 

      :yeahthat: 😒

 

dB

:snax:

 

:keys:==> David Bryce Music • Funky Young Monks <==:rawk:

 

Professional Affiliations: Royer Labs • Music Player Network


50 minutes ago, Dave Bryce said:

You don’t need to turn on Apple Music. You can do it using tunes from the stereo files in your own library stored on your phone.

How? Apple says "If you subscribe to Apple Music, you can listen to select songs in Spatial Audio with Dolby Atmos." I don't subscribe. My phone does not have an Audio or Spatial option in its music settings.


Live music is still performed and experienced as basically a stereo experience.  The listener sits still or stands in one place, with the instruments spread out left to right in front of the listener.  It's a convention, sure, not an absolute of some sort.  If we all grew up in a culture where music was performed by placing the audience in the middle and the performers spread out in a surrounding circle, then a surround-type field of sources would sound normal and essential I suppose for serious listeners.   And surely music composition would be different in some way to exploit the physical layout which would become in itself another cultural convention.

 

On the other hand, movies obviously try to recreate a 3D experience since "life" is 3D.  

 

I do like stereo systems that do a good job of creating the illusion of a wide stereo field, with instruments you can pick out from left to right.  But most popular music just doesn't need to provide that sort of experience.  Classical, sure.  Jazz, to an extent.  

 

The one music genre that to my mind could really benefit from surround is - the clue is in the name - ambient music.  But that's a tiny genre in terms of listeners.  Too bad.  I could use me some immersive, submersive, engulfing, surrounding ambient goodness to give the sauna bath treatment to my soul from time to time.

 

nat

 


1 hour ago, dmitch57 said:

How? Apple says "If you subscribe to Apple Music, you can listen to select songs in Spatial Audio with Dolby Atmos." I don't subscribe. My phone does not have an Audio or Spatial option in its music settings.

 

If you have Airpods and they’re connected to your iPhone, the volume bar in the drop-down Control Center menu of your phone (on my iPhone, it pulls down from the upper right-hand corner) is where that’s handled.  

 

It will show that it’s connected to Airpods.  Press and hold the volume bar, and the options to turn Spatial on and off/toggle fixed vs. Head Tracking will come up, along with the controls to switch from Noise Cancelling to Transparent.

 

dB

:snax:

 

:keys:==> David Bryce Music • Funky Young Monks <==:rawk:

 

Professional Affiliations: Royer Labs • Music Player Network


20 hours ago, KuruPrionz said:

 

 

It's going to take more than headphones: new or adapted connectivity will be required to run all those discrete channels, headphone amplifiers will become much more complex, and DAWs have some catching up to do (although I suspect most are working on that). I think it will be wireless; there will need to be time alignment, but as long as everything comes at you at once it should work out for mixing. Tracking could be a separate problem; studio rats may need an additional mode that allows tracking to continue as it is now. 

 

I think transducer technology is up to the challenge of multi-channel headphones; I don't see any other way to do it properly that could be in common usage. 

I guess I'll wait for the really smart people to suss it all out... 

The good news is that it won’t be that complicated.  Just like we don’t need multiple ears to hear all around us, we don’t need super complex headphones to make the sound. We don’t need “5.1 headphones” arrayed around our ears. Why not?  Super awesome question!

 

The answer is in how we hear.  Hearing is a brain function, processed from a sensor we call an ear. Sound hitting our ears is very complex. It has already gone through the air, bounced off things, including our own torso, head and even the folds of our ears called pinnae.  The human brain is so good at learning and pattern matching that when you were just an infant, it sorted out the relationship between all these things so that when a given stimulus occurs, you understand where it is coming from in 3D space, its timbre, volume, and so much more. Speakers work because the sound hits our bodies in a natural way… we just need a lot of them to approximate the complex incidences of the real world. 
 

Let’s first consider over-the-ear headphones.  They direct sound right into your ears, but at least your pinnae are still involved.  Remember, your brain knows best how to process sound that hits them. But there is no sound bouncing off your torso, your neck or head. Because each ear is fed separately, the brain misses all kinds of correlation that it gets in the natural world. Attempts to use crossfeed simulate some of this experience, but they can’t replace it. So the spatialization is small - only in our heads. It’s the best our brain can do given the reduction in information.
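
For the curious, here is a bare-bones crossfeed sketch in Python/NumPy (purely illustrative, not any particular product's algorithm; real crossfeed circuits also low-pass and shape the crossed signal):

    import numpy as np

    def crossfeed(left, right, sr=44100, bleed_db=-6.0, delay_ms=0.3):
        # Bleed a delayed, attenuated copy of each channel into the opposite
        # ear, roughly imitating how a speaker on one side also reaches the
        # far ear slightly later and slightly quieter.
        g = 10 ** (bleed_db / 20.0)        # level of the crossed signal
        d = int(sr * delay_ms / 1000.0)    # rough interaural delay in samples

        def delayed(x):
            return np.concatenate([np.zeros(d), x])[: len(x)]

        return left + g * delayed(right), right + g * delayed(left)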

 

In-ear monitors are far worse. They remove the pinnae as well, pumping sound directly into our ear canals. The sound is very much “in our heads” and many performers need an audience mic to not feel isolated. It’s a very unnatural experience.  I’m still not giving mine up….

 

The needed solution is a head-related transfer function (HRTF).  To have our brains think they are in a natural sound field, they need the exact same stimulus they get in free space. But how is this possible, when we are all different shapes and sizes and no two humans have the same pinnae?  Most here would be conceptually familiar with DSP-based room correction in monitor speakers.  A measurement is taken at the listening position, and then the DSP code works out how to alter the output of the speaker such that it produces the correct signal at the ear.  Magic!  But it works. We need the same thing for ears, except it’s way more complex due to biological diversity. That is where personalized HRTFs come in.
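
In rough code terms, that room-correction step is an inverse filter. A minimal sketch, assuming the measured impulse response is already captured as a NumPy array (commercial systems add smoothing, minimum-phase handling and boost limits on top of this):

    import numpy as np

    def correction_filter(measured_ir, n_fft=8192, reg=1e-3):
        # Regularized inverse of the measured speaker/room response, so that
        # speaker * filter is roughly flat at the listening position. The
        # regularization keeps deep nulls from being boosted toward infinity.
        H = np.fft.rfft(measured_ir, n_fft)
        return np.fft.irfft(np.conj(H) / (np.abs(H) ** 2 + reg), n_fft)

    def corrected_playback(signal, measured_ir):
        return np.convolve(signal, correction_filter(measured_ir), mode="same")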

 

A personalized HRTF is the mathematical description of how your torso, head and pinnae affect sound.  If it is accurate, then powerful DSP software can alter the headphone or in ear signal so that it is modified in the way that your head, torso and pinnae modify sound.  When this happens, your brain understands the input as a normal signal, and you hear natural (that is “immersive”) sound.  The goal is not some new experience called immersive.  It’s to literally make recorded sound and natural sound be indistinguishable!  Sorcery!  Yes!
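
In code terms, applying an HRTF is just convolution: take a dry mono source and run it through the pair of head-related impulse responses measured for one direction. A minimal sketch (the HRIR arrays are assumed to come from a measured, personalized or generic dataset; none is bundled here):

    import numpy as np
    from scipy.signal import fftconvolve

    def render_binaural(mono, hrir_left, hrir_right):
        # Convolving the dry source with the left- and right-ear impulse
        # responses for one direction imprints that direction's timing,
        # level and spectral (pinna/torso) cues onto a 2-channel signal.
        # Assumes both HRIRs are the same length, as in typical datasets.
        return np.stack([fftconvolve(mono, hrir_left),
                         fftconvolve(mono, hrir_right)], axis=-1)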


So, where are we now and where are we going?

 

If you have played with Apple Music, you will know that you did NOT upload a personal HRTF.  You are using a generic one that Apple chose.  This is why it “kind of works” instead of “completely works”. Your brain really is very used to your head, torso and pinnae!  Without that data, it isn’t “real” to you, and can’t be.

 

So, can you get a personal HRTF and load it up?  Not in Apple Music today, but in time you will.  If you nerd out on this, you can get it done today, outside consumer software.  It involves pictures of your ears, fancy analysis, and then software that knows what to do with this. It is all computationally intensive. But it will all get sorted out in consumer-friendly ways in a few years. 
 

This means that people will be able to mix fully “immersive” content on headphones powered by normal headphone amps.  The rooms full of speakers are a hack for not being able to model your personal physiology.

 

Ultimately, immersive sound is not a gimmick like we experience in Apple Music today.  When personal HRTF data is common, and used with known headphones, our brains will not experience an effect; they will just experience normal ambient sound.  Only we can manipulate it!  Virtual reality for the auditory sense.  So real that it isn’t a goofy effect where you have to sit in a certain place.  It’s just “normal” sound as your personal brain understands it!


So, the super spendy mix rooms with many speakers are the “early adopter” implementation of immersive sound.  You have to pay if you want to simulate complex sound by having lots of sound sources.  But it isn’t the only solution that yields an identical result to the brain.  Personal HRTFs let you take a simple input signal and alter it so that the brain sees the world in the complex way it is used to.  And this latter way is completely affordable, consumer-friendly and works even on a commuter train.  
 

Soon, part of setting up a new phone will be having someone take a picture of each side of our head/torso.  This will be sent to the cloud and fancy math done.  A personal HRTF will be loaded into our device, and it will always feed us “natural” or “immersive” sound.  In cheap consumer earbuds. This is why it is nothing like 5.1 in the past. It solves the real, physical problems in another way.  
 

Most mixers will work this way too.  There will always be wealthy and high end places that can afford the room based solution, but all music will be able to be immersive because everyone will hear it that way.  
 

That said, remember that immersive audio is a production concern, not a creative one.  In other words, musicians don’t have to care one bit.  Producers and engineers will learn to adjust miking, mixing and production technique. But the monitoring layer is not a long-term concern.  It will be 99% headphone based.  Better headphones will sound better and be more pleasurable.  They will be more expensive, same as ever.  But everyone with a cell phone will have immersive audio that fully works in a few years.  


So will speaker companies try to sell you lots of speakers?  Sure!  It’s one way to get more immersive sound: have more sources so that the sound hits our ears and body in a way that is more natural.  The more speakers you have, the better it gets!  7.1.2 is a MINIMUM spec for creating Atmos content!  Hollywood dub stages often have 35-50 speakers, or more!  That is not physically possible in even the best music production rooms. 
 

Why did Focal start making headphones?  Neumann?  Because long term, speakers will not be the immersive answer. Will they fully monetize speakers as long as they can? Of course!  But they are already preparing their customers to see them as a source of high quality headphones as well.  Why?  They know everything I just explained and are preparing their businesses to profit now and in the future, with no loss of profit. All the people that spent $1000 on speakers and thought it was expensive will spend that on headphones in the next several years because they will NOT suddenly spend $30k on speakers…

 

If you need to work in Atmos today to feed your family, you must buy speakers. If you need to have multiple people gather around an Avid S6 to mix, you need lots of speakers. But in just a few years, anyone who just wants to listen to or mix immersive content will do so on headphones.  This is why the big mixers are mostly saying privately not to invest if you don’t have to.  The rooms are expensive, and will soon be unnecessary unless you are doing large-scale, collaborative, commercial production. 
 

For this forum, “expensive” headphones and a personal HRTF are the path to pursue.  


Interesting stuff, Nathanael!

I follow everything pretty well but the sticking point for me is headphones with a single transducer in each ear pod. 

I haven't listened to Apple Music and it's possible I never will. With 47+ years of gigging and 3,000+ gigs behind me, I'm not much for listening to music other than projects I'm working on at the moment. Realistically, I listen to maybe a song a week or so unless I am out and about, and then most of the time I ignore it to the extent possible. I prefer to hear music from speakers and some distance away; when I listen, I'm not really immersing in the audio so much as taking in the story and the singer's melodic inflections. 

 

Sound in the real world is directional; it comes from all around us - 360 degrees times 360 degrees? Something like that. 

I've been in canyons when thunder pealed forth, and it was impossible to know where it was coming from - reflections masked the origin. 

I get what you are saying about the importance of our bodies and of our particular shape. One night at a gig, a few listeners came in and they brought a friend who was profoundly deaf. We had her sit on the bass amp and she commenced to grooving along with the music; she could "hear" through her gluteus maximus and join the fun. 

 

Low frequencies are more omnidirectional, but most of us use EQ and harmonic distortion on the bass tracks we record so that our friends who listen to music on their cell phones can hear the bass line through the tiny "weefer" in their phone. Musically speaking, the bass in the mix has a direction attached to it. The transient of a kick is often what defines it in the real world. 
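
The simplest version of that trick is soft saturation: generate upper harmonics from the bass and blend them back in, so a tiny speaker can reproduce the harmonics and the ear infers the missing fundamental. A toy sketch, assuming a bass track normalized to roughly ±1.0 (real exciter plugins band-split first and filter the generated harmonics):

    import numpy as np

    def bass_excite(bass, drive=4.0, mix=0.3):
        # Soft-clip to create upper harmonics, then blend with the dry bass.
        harmonics = np.tanh(drive * bass) / np.tanh(drive)
        return (1.0 - mix) * bass + mix * harmonics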

 

I think there will need to be a way to make sound from all directions, literally. I don't see that being possible with a single pair of transducers coming directly into your ears from the sides. You can simulate the center by having identical signals in each transducer, but that just gives you left/center/right. I'm not following exactly how you will put a crow cawing on the phone line 30 feet ahead of you, 30 feet above you and 10 feet to your left, while simultaneously providing the sound of your shoes crunching gravel left and right as you walk towards the crow, with the direction the cawing is coming from shifting the closer you get. Two-transducer headphones aren't going to create that with any accuracy, and I don't see how scanning the shape of my body will change the sense of direction front to back and top to bottom without a capable system of reproduction. Thoughts?

It took a chunk of my life to get here and I am still not sure where "here" is.

I'm no audio engineer, but my vision for the headphones would be a pair of cans with a center transducer optimized for low-frequency response and sized to allow a beveled "ring" around the circumference, with 4 mid/high-frequency transducers in that ring - front/back/top/bottom. That would provide actual audio information, and blending the mid/high transducers on both sides could provide location information. That is a total of 10 channels of information. Time/latency/phasing problems could be solved in the transmitter; these would need to be wireless. 

 

Tracking could just be done in stereo as it always has been; that would solve the latency problems that 10 channels of output might cause. 

A simple toggle on the DAW could make the change. We already have Stereo/Mono switching on our DAWs (at least I do), so adding the extra feature would take a bit of coding but could be done. 

It took a chunk of my life to get here and I am still not sure where "here" is.

It's very simple.  Your perception of sound is entirely filtered through signals received by your physical body, primarily your ears.  Not the number of sound sources.  You can either use lots of transducers to try to simulate all the various reflections, sound sources, etc., or you can give the body the stimulus it already knows how to process.  Atmos rooms with many speakers do the former.  Head-related transfer functions do the latter.  HRTFs do depend on being personalized to work extremely well. 

 

This is not new science. People have been working on it for over a dozen years.  It's just hard, and the math is computationally expensive.  But that's OK.  DSP in speakers was too expensive 15 years ago.  Now it isn't.  JBL has been working on headphone perception for way over a decade.  Look up Floyd Toole and his co-workers.  Brilliant work.  People all over the world are building on it, collaborating and advancing.  Everything in the current iPhone is based on it.  A lot of $$ and effort is going into this. 

 

As Arthur C. Clarke put it, any sufficiently advanced technology is indistinguishable from magic. 

 

There are still people who don't believe the Nyquist theorem is true, or think it somehow magically doesn't apply to them, but that hasn't stopped everyone else from getting on with digital audio.  I'm not asking you to take it on faith, but it's likely going to take more effort than reading a forum post.

 

The crow on the phone line is just a signal with a certain amplitude, frequency, etc.  Its "distance" and other attributes are all things your brain computes based on your life experience processing auditory signals through your body.  It has nothing to do with the number of devices repeating that sound. Your brain has no idea how signals originate.  It just processes what arrives at your eardrums.   If your brain gets the right stimulus, it will conclude there is a crow doing its thing however far away.   Two headphones can certainly do this.  The illusion of stereo works for more than one instrument and uses two speakers.  
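
To make that "cues, not sources" point concrete, here is the textbook version of two of those cues - interaural time and level difference - in Python. It is a deliberately crude sketch with assumed constants; a real HRTF adds the pinna/torso filtering that supplies the up/down and front/back information this leaves out:

    import numpy as np

    HEAD_RADIUS = 0.0875    # meters, an assumed average head radius
    SPEED_OF_SOUND = 343.0  # meters per second

    def itd_seconds(azimuth_deg):
        # Woodworth approximation of interaural time difference for a
        # source at the given azimuth in the horizontal plane.
        az = np.radians(abs(azimuth_deg))
        return (HEAD_RADIUS / SPEED_OF_SOUND) * (az + np.sin(az))

    def place_left_right(mono, azimuth_deg, sr=48000, max_ild_db=6.0):
        # Crude placement: delay and attenuate only the far ear.
        delay = int(round(itd_seconds(azimuth_deg) * sr))
        far_gain = 10 ** (-max_ild_db *
                          abs(np.sin(np.radians(azimuth_deg))) / 20.0)
        near = mono
        far = np.concatenate([np.zeros(delay), mono * far_gain])[: len(mono)]
        # Positive azimuth = source on the right, so the right ear is "near".
        return (far, near) if azimuth_deg > 0 else (near, far)  # (L, R)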

 

Simple binaural recording played back through any headphones you have handy offers freaky levels of realism - particularly if you put small microphones in your own pinnae to do the recording.  You can experiment with super cheap little electret condenser mics.  People have done this for decades.  Your pinnae are very special.... and so are everyone else's! 

 

The best news is that none of us have to understand the actual math in order for it to still work.  Unless you are an RF engineer, you don't actually know much about the RF signals that make the phone go - but it still works a treat.  It's a deep hole.  I've given the surface treatment here.  Much more is available, but you'll have to dig, and you'll need a pretty decent math/science background the deeper you want to go. 

 

Search HRTF and you'll be off to the races. 

4 hours ago, Nathanael_I said:

Why did Focal start making headphones?  Neumann?  Because long term, speakers will not be the immersive answer. Will they fully monetize speakers as long as they can? Of course!  But they are already preparing their customers to see them as a source of high quality headphones as well.  Why?  They know everything I just explained and are preparing their businesses to profit now and in the future, with no loss of profit. All the people that spent $1000 on speakers and thought it was expensive will spend that on headphones in the next several years because they will NOT suddenly spend $30k on speakers…

 

I have no doubt that headphones will be able to do the job; they already are in some ways. I've been very impressed with what Waves has been doing to create an immersive experience with headphones. Sure, it's not the same as physical surround...but it sure isn't stereo.


https://slatedigital.com/virtual-recording-studio/

 

This is Slate Digital's version of this idea.  How do you make headphones sound like a space you are not physically in?  It's exactly what we are talking about.  You calculate the difference between a set of headphones and a measurement of the real room, and put your best guess as to an HRTF in there... that's why it's "not fully real", but "better than stereo". 
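
Conceptually (and only as a rough sketch, since Slate's actual processing is not public), this kind of monitor emulation amounts to convolving each virtual speaker feed with a binaural room impulse response (BRIR) measured from that speaker position in the reference room to each of the listener's ears, then summing the results into the headphone feed:

    import numpy as np
    from scipy.signal import fftconvolve

    def virtual_control_room(speaker_feeds, brirs):
        # speaker_feeds: {"L": mono array, "R": mono array, ...}
        # brirs: {"L": (left_ear_ir, right_ear_ir), ...}, measured or modeled
        # from each virtual speaker position to each ear in the target room.
        length = max(len(feed) + max(len(brirs[name][0]), len(brirs[name][1])) - 1
                     for name, feed in speaker_feeds.items())
        out = np.zeros((length, 2))
        for name, feed in speaker_feeds.items():
            ir_l, ir_r = brirs[name]
            yl, yr = fftconvolve(feed, ir_l), fftconvolve(feed, ir_r)
            out[: len(yl), 0] += yl
            out[: len(yr), 1] += yr
        return out  # 2-channel "virtual room" signal for the headphones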

 

There are people already mixing on these "not fully baked" solutions.  I think it's easy to see why.  Having a big, acoustically treated room with great speakers is EXPENSIVE.  And not even a reality for many people.  If they can work inside a "good space" without doing all the expensive room/speaker/treatment things, it helps everything they do.  And it's a fraction of the cost.  That's exactly what gets people to investigate new tech and then make the leap - it's better than what they had.

 

Me, I've got a great room, with great speakers and treatment.  I don't "need" the plugin, but it will take me $35-50k to do Atmos really "right" at the level of the rest of the room.  Would I like that to take $4k for a set of world-class headphones and be done?  It's my version of the exact same decision....

 

All the signs are present for a space that is about to transform.  Interestingly, we could also posit that immersive audio and its complete dependence on digital audio will help analog signal processing gear slide further into a niche concern.  That's probably another thread, but I do think it's an inevitable consequence. The more you do digitally, the less sense it makes to go out of box to analog and back again.


OK, I think we are all on the same page in that we love the idea of "immersive" audio.

I think we agree that an ATMOS system combining a great room and great speakers is out of reach for most of us.

 

We know that there are thousands of small home "bedroom" studios spanning our globe and that all sorts of interesting stuff is now possible for the common person (it didn't used to be this way, and it really hasn't been that long). Some of us have a foot firmly planted in both worlds, digital and analog. I play guitar; that is an analog instrument whether acoustic or electric. There are analog-to-digital solutions for electric guitar, but I haven't seen any reviews that indicate this technology has been perfected yet. The efforts go back decades, although purely ITB attempts are fairly recent. I also sing, and all of my microphones are analog. It is what it is. 

 

Obviously, a headphone solution is ideal for most of us, including all the little "hole in the wall" studios like mine that are just for myself and the occasional talented friend. 

Even if you had a complete and wondrous ATMOS room, a great headphone system would be a welcome addition. 

 

Craig has stated his opinions of Apple's Spatial Audio, which does not seem to be the answer for serious audio work but may suffice for those who have been listening to just about anything you can name for decades. Craig has also described his experience with the Waves plugin as something other than stereo. I trust his reviews; he's been at this a LONG time and is a well-respected reviewer in every sense of the word. 

 

It doesn't seem to me that we are "there" yet and honestly, I don't think we are even that close. I admire the optimism and I respect the science, but headphones with 2 transducers, one for each ear, are going to provide sound waves from left-side and right-side directions only. You can "tweedle" that, but you won't get full front and back, up and down, side to side from a single pair of transducers. 

 

It occurs to me that some of us with smaller rooms don't need the Full Monty either; most of the speakers will just be for midrange and treble reproduction. They don't need to be the large high-powered speakers that a commercial studio requires; they could be pretty small and still fill the room with sound. That could bring the cost of "real" ATMOS down considerably. 

 

 

On 12/13/2022 at 5:33 AM, Nathanael_I said:

 

 

The needed solution is a head-related transfer function (HRTF).  To have our brains think they are in a natural sound field, they need the exact same stimulus they get in free space. But how is this possible, when we are all different shapes and sizes and no two humans have the same pinnae?  Most here would be conceptually familiar with DSP-based room correction in monitor speakers.  A measurement is taken at the listening position, and then the DSP code works out how to alter the output of the speaker such that it produces the correct signal at the ear.  Magic!  But it works. We need the same thing for ears, except it’s way more complex due to biological diversity. That is where personalized HRTFs come in.

 

A personalized HRTF is the mathematical description of how your torso, head and pinnae affect sound.  If it is accurate, then powerful DSP software can alter the headphone or in ear signal so that it is modified in the way that your head, torso and pinnae modify sound.  When this happens, your brain understands the input as a normal signal, and you hear natural (that is “immersive”) sound.  The goal is not some new experience called immersive.  It’s to literally make recorded sound and natural sound be indistinguishable!  Sorcery!  Yes!

This sounds wonderful and all, but I haven't seen an explanation as to how a sound can be produced that is only coming from behind you, or above/below you, using a pair of transducers that are aiming directly at your ears. To me, an immersive sound will come from all directions = 360 degrees x 360 degrees. 

That is how natural sound is. Sitting home right now, I could hear the train down the hill as it crossed from left to right. I have all the windows and doors closed, but the horn on the train and the sound of the wheels on the rails came from across a span of a half mile or more, from one side to the other and well in front of and below me, about a quarter of a mile away. 

It's sprinkling; I hear the drops on the roof above me at the same time, and the car coming down the hill on Taylor, which is to my left. This is to say nothing of the stairs I just walked down, which creaked below me and which I heard with both my ears and my feet. That's fully immersive, and I am not expecting to "hear things with my feet". 

 

With headphones having 2 transducers, how will scanning my HRTF specifically allow me to have a legitimate sense of space in the same way?

I totally understand the ATMOS multi-speaker idea; from that you should be able to get realistic sounds simulating the train, the rain, the vehicle descending from behind, etc. 

Oh, and there's a dog that barks every time somebody walks by; I can tell exactly where it lives even though it's a block away, downhill and to the right. That could be "placed" in a multi-transducer array, but I just don't see it happening with only 2 transducers. So far, it hasn't. 

 

I will add that there are probably other efforts afoot that are not being publicized. Apple put their Spatial Audio on the market and people are probably buying enough of it to pay the developers to continue the quest. JBL? My DuckDuckGo search for "JBL Atmos" or "JBL Immersive" brought up blank pages (I will be the first to admit that my internet is temporarily substandard). Another entity that might be working on the headphones idea is Zuckerberg; he is obsessed with his "Metaverse" and surround sound headphones would be a part of that beyond any doubt. I'm sure there are others; whoever can bring the real deal to market first will do well, at least until it gets copied in Third World countries. 

 

Something is going to happen, I don't doubt it, and I look forward to finding out what it is. I'll wait; once I see consistently good reviews of some system - whoever brings it to market - then I will consider it. On the other hand, if I win the lottery I'll just get a great room with a full ATMOS setup and y'all are invited to the "opening party"!!! 😊

It took a chunk of my life to get here and I am still not sure where "here" is.

Thanks Nathanael!

I appreciate having the Audio University link; that video was interesting, and there are some links in the description that I pasted into a document on my desktop. 

 

The first video is a sales pitch only and sadly, she stumbles at 37 seconds when she proclaims that "listening to standard stereo headphones, ALL of the spatial audio cues are lost." The left-to-center-to-right spatial cues are still fully intact, as they have always been. 

 

I do get that we are confined to YouTube's audio for these presentations and it is not possible to properly demonstrate what is being done. 

That said, I'm not one to rush out and acquire new tech when it first arrives. These concepts will be refined, patents will be filed, new ideas will come, and eventually this will be a mature technology. My DAW (Waveform Pro 12.1.8) doesn't have ATMOS mixing features yet. The update to v. 13 is due mid-January; it probably won't include ATMOS, but it might. If not, I think Logic Pro is a good choice for an ATMOS-capable mixing platform - thoughts?

 

It took a chunk of my life to get here and I am still not sure where "here" is.

1 hour ago, Nathanael_I said:

Nuendo and ProTools have the deepest support for Atmos at present. 

And the prices to go with! Digital Performer also has an Atmos mixer. I think that covers the current 4 Atmos-capable DAWs, but I don't think it will stay that way.

 

I'm not looking to go deep; that's somebody else's job. I'd love to avoid learning any new DAW at this point; maybe I'll skate if this ramps up. 

I'm only working on my own music and, at this point, one friend's stuff - he's got some quality songs. I have zero plans to start a recording business; it's part of what I want to do - for me. 

 

I'm a songwriter/musician who wants to stay at least somewhat current, and I have noted that the powers that be would prefer that songs have versions mixed for immersive sound in the event that there is a placement that calls for it. I get that reality is mostly people listening on substandard stereo and mono "systems". I also get that the reality is that you want to find media placement - Spotify is not IT! A friend put out an album and got 20,000 streams on Spotify; if those were just song streams, she brought in about $96 at the rate they pay per stream, and if they were streams of the entire album then more, but still not much. I can do that well gigging for a weekend. 

 

I've been to back deck parties where the music was a cell phone dropped into a drink cup for more resonance. 😘

 

Tonight while trimming down my plugins again, I noticed I have 5 Apple plugins with interesting names. 4 of them are panners:

Sound Field Panner

Spherical Head Panner

Vector Panner

HRTF Panner

 

And one of them is a "mixer":

Spatial Mixer

 

They probably all suck but since I have them already (part of the OS), I might as well give them a spin for shits and giggles. 

Worst case they may provide a basis for comparison. 

It took a chunk of my life to get here and I am still not sure where "here" is.

Thanks for taking so much time to respond so thoroughly, Nathanael. Greatly appreciated, and very informative.  I share Kuru’s skepticism on the HRTF/two-speaker headphones thing, though…including that we’re not close.

 

The technology will have to improve significantly from where it is now before I’ll believe that headphones can convincingly recreate the effect and experience I get from the speakers around my listening environment - not only from their physical position in relation to my head position, but also from how they interact and reflect with the walls in my space based on where the speakers are placed, as well as the makeup of the speakers themselves (driver materials/configuration of same, cabinet size/technology, etc.).  I very carefully chose certain makes and models of speakers, put quite a lot of thought into where they were positioned in relation to my room and what’s in it, then fuzz measured and swept the room.  The placement of the subwoofer and its subsequent ability to deliver ideal propagation and reflection of low frequencies is especially critical for me.  I just love it when the bass is right - an entire art form there. 🤓

 

You see, I’m one of those folks who believe your ears are only part of the hearing experience.   I think your body also plays a role, especially with the lows…and there simply isn’t a way for headphones to address that.

 

Right now though, the technology most certainly is not there.  I’ll be quite interested to see if it can be made to work better, with open-back as well as closed phones.

 

FWIW, I’m a big fan of all sorts of headphones. In addition to owning the usual suspects (AKGs, Audio-Technica, Ultrasone)  I have a couple different models of Audezes - LCD-X and LCD-MX4 as well as a pair of their in-ear LCD-i3.  The MX4s are my babies. 😎

 

dB


:snax:

 

:keys:==> David Bryce Music • Funky Young Monks <==:rawk:

 

Professional Affiliations: Royer Labs • Music Player Network

