I don't know either. I know that if I'm recording a band's worth of tracks, nothing needs to be louder than -15 or -18 dBFS to make the mix work. If I'm working with dozens of tracks of orchestral samples, this is definitely true! I'm turning things down! The more sounds you add, the louder the sum gets. Apply some compression and the mix can be as loud as you want. Digital gain is essentially free from an artifact perspective, so if it needs to be louder later, that's easy. When mixing live I'll let the drum transients hit -12 dBFS because it helps the drummer get more level in the monitor mix.
In my studio, my monitoring is calibrated so that -20 dBFS produces 78 dB SPL at the mix position. Uncompressed mixes sound amazing: things like high-end acoustic recordings of pianos, acoustic guitars, etc. It's easy to tell how much compression is on a track, because I have to adjust the monitor gain to compensate. I got the idea from mastering engineer Bob Katz and have worked this way for years.
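The nice thing about a fixed monitor calibration is that the digital meter maps directly to acoustic level in the room. A minimal sketch of that mapping, assuming the -20 dBFS = 78 dB SPL reference point described above (the function name and defaults are my own illustration, not anything from Bob Katz's writing):

```python
def dbfs_to_spl(dbfs, ref_dbfs=-20.0, ref_spl=78.0):
    """Estimate playback SPL for a signal peaking at `dbfs`, assuming a
    fixed monitor gain calibrated so `ref_dbfs` plays back at `ref_spl`.
    Hypothetical helper for illustration only."""
    return ref_spl + (dbfs - ref_dbfs)

print(dbfs_to_spl(-20.0))  # 78.0 -> the calibration point itself
print(dbfs_to_spl(0.0))    # 98.0 -> a full-scale peak at the mix position
```

With the monitor gain fixed, a heavily compressed track sits closer to full scale and simply plays back louder, which is why the amount of compression is immediately audible.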
I suspect those tracks came from someone trying to get "color" out of their mic preamps without the ability to attenuate the signal back down. (A great argument for an inline console: those small-fader paths solve this problem neatly.) Most digital converters top out at +18 dBu or maybe +24 dBu, while both classic and modern high-end preamps from Neve and others can put out +28 dBu. If you wind them up to get the "color", the signal will be hotter than most audio interfaces can accept. Unless there's attenuation handy, I could see this being the cause. But I'm just speculating.
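The arithmetic behind that mismatch is simple enough to sketch. Assuming a converter whose 0 dBFS point corresponds to its maximum input in dBu (the function below is my own hypothetical helper, not any real interface's spec):

```python
def dbfs_at_converter(signal_dbu, converter_max_dbu):
    """Level in dBFS that a `signal_dbu` analog signal hits on a converter
    whose 0 dBFS full-scale point is `converter_max_dbu`. Anything above
    0 dBFS clips. Hypothetical helper for illustration only."""
    return signal_dbu - converter_max_dbu

# Preamp wound up to +28 dBu into typical converter inputs:
print(dbfs_at_converter(28.0, 24.0))  # +4.0 dBFS -> clipped
print(dbfs_at_converter(28.0, 18.0))  # +10.0 dBFS -> clipped even harder
```

So even against the most forgiving common converter spec, a fully driven preamp overshoots by several dB, and without a pad or fader in between, the only place that overshoot can go is into digital clipping.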