A lot of people are wondering how the apps that claim to let you play together online in real time, even over long distances, actually work. There are two answers to this.
They don't. They use a cheat technique to work around the fact that the laws of physics and the plumbing of the Internet make true real-time long-distance collaboration effectively impossible.
The speed of light is about 300,000 km/second. That works out to 300 km (just under 200 miles) per millisecond of latency one way. For practical collaboration, what matters is the round trip, so call it 150 km (about 100 miles) per millisecond.
Most musicians can sorta kinda stay together with latency of 20 ms or less (much less is preferred). That means you can stay sloppily together with someone 2000 miles away... assuming that you have some sort of magical direct connection that goes straight from your studio to theirs.
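If you want to sanity-check the arithmetic in the last two paragraphs, it fits in a few lines (this is just the calculation from the post, not anybody's app code, and the variable names are my own):

```python
# Back-of-the-envelope latency math. Pure arithmetic -- nothing app-specific.

SPEED_OF_LIGHT_KM_PER_S = 300_000

km_per_ms_one_way = SPEED_OF_LIGHT_KM_PER_S / 1000   # 300 km per millisecond
km_per_ms_round_trip = km_per_ms_one_way / 2         # 150 km per millisecond

# Farthest partner you could theoretically play with inside a 20 ms
# round-trip budget -- and that's assuming a perfect, direct connection:
max_distance_km = 20 * km_per_ms_round_trip
print(max_distance_km)  # -> 3000.0 (just under 2000 miles)
```

Note that this is the hard physical ceiling; real network paths only get worse from here.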
In reality, audio has to pass through this thing called the Internet, which is a very, very, VERY complex network that is routing a couple-three other streams while it's trying to deal with yours. Routing hops, queuing, and buffering multiply latency far beyond the speed-of-light minimum; it's not unusual for a person listening to an audio stream on the other side of the planet to hear it up to 30 seconds late.
So how does this "real time collaboration" software work? By working WITH latency rather than fighting it. The basic idea is that a central server measures the latency to each participant, takes the worst case, and rounds it up to a whole number of measures at a preset tempo the players agree on before the session begins. It then delays each incoming stream so that every player hears what the others are doing, offset by that number of measures.
Where this gets fun and weird (or patently unusable!) is that each player hears what they are playing in real time, but they're playing against what the other players were playing, say, two measures ago. It's like Rashomon, but with audio. No one in the group hears or plays to the same thing, because all of the relative delays are different... and what the server records and stores will be different still!
If you're in a single key (as in some Indian music and many styles of electronica), this is less of a limitation than you might think, and leads to some remarkably good results. I have done two albums this way and will certainly do more; it takes an incredible amount of focus and listening skill, but when it works, it's magical.
The first software to work this way was NINJAM, which has its own starter thread right here. It's been around for well over a decade and is built into the Reaper DAW. A newer take on the idea is JamKazam, which has a starter thread here.
And there are newer options like Jammr, which I don't know anything about.
I realize that folks who are used to working within complex chord progressions and song structures will find this absolutely useless, but at least now you know where the concept comes from, and what's behind the claims of realtime long-distance collaboration. Other methods aren't realtime, but they'll be much more useful to conventional players, taking the form of an overdub session rather than a jam. This is the idea behind BandLab and the like.
NB: Irena has some great information with more technical depth in this thread over in KC.
Hope this helps,