Make server movement independent of packet rate / latency fluctuation


You have to accept one of:

  1. extrapolation – you “keep moving” the remote entity on the local screen after it “reaches the known position” – remote clients will seem responsive, but will “overshoot.”
  2. interpolation with delay – you interpolate between received known-good positions, but doing so takes as much time as the predicted delay until the next packet, plus some de-jitter time
  3. command delay – you don't show the results of a command until you know that everyone has received the same commands for the same timestamp. (This is the “input synchronous” / “deterministic simulation” system typically used in an RTS.)

It sounds like your “buffer” is essentially an implementation of 2.

If you want to see an illustration and some code about 1, try https://github.com/jwatte/EPIC
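For 2, a minimal sketch of interpolation with delay on the client might look something like the following; the Snapshot layout, the 2D fields, and the particular clamping are assumptions for illustration, not anything taken from EPIC or the post above.

#include <algorithm>
#include <cstddef>
#include <deque>

struct Snapshot { double time; float x, y; };   // assumed: position stamped with server time

// Render the remote entity at (now - delay), interpolating between the two
// snapshots that bracket that time. 'delay' should cover one send interval
// plus a de-jitter margin; that margin is exactly the latency cost of 2.
// Assumes snapshot timestamps are strictly increasing.
Snapshot sample(const std::deque<Snapshot>& buf, double now, double delay)
{
    const double t = now - delay;
    if (buf.empty()) return { t, 0.0f, 0.0f };          // nothing received yet
    for (std::size_t i = 1; i < buf.size(); ++i) {
        if (buf[i].time >= t) {
            const Snapshot& a = buf[i - 1];
            const Snapshot& b = buf[i];
            const double k = std::clamp((t - a.time) / (b.time - a.time), 0.0, 1.0);
            return { t, float(a.x + (b.x - a.x) * k), float(a.y + (b.y - a.y) * k) };
        }
    }
    return buf.back();   // ran out of data: hold the newest snapshot (or switch to 1 and extrapolate)
}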

enum Bool { True, False, FileNotFound };

Client-side latency is still better than server latency.

It looks to me like the problem is that your server is reacting instantly to inputs as it receives them. This means everyone's jitter and latency is baked into the server-side simulation, which then gets broadcast out to the clients. You want to eliminate this so that the server is processing inputs at roughly the same pace as the clients send them.

Some sort of buffer on the server is the right answer, though usually it's a bit more sophisticated than simply queuing the inputs; it's more of a short ‘time delay’ that allows every server-side update to receive an input from all clients even when there is some degree of jitter involved.
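To make that concrete, here is a rough sketch of one way to do it; the Input layout and the idea of clients stamping each input with the tick it's meant for are assumptions about the protocol, not something from the original post.

#include <iterator>
#include <map>

struct Input { int tick; float move_x, move_y; };   // assumed per-tick input payload

// Per-client buffer of inputs keyed by the tick they were issued for. The
// server runs its own fixed-rate loop and only consumes the input matching
// the tick it is currently simulating, so arrival jitter no longer decides
// when an input takes effect, only whether it arrived in time.
class InputQueue
{
    std::map<int, Input> by_tick;
public:
    void store(const Input& in) { by_tick[in.tick] = in; }

    // Returns true and fills 'out' if an input for 'server_tick' has arrived.
    bool take(int server_tick, Input& out)
    {
        auto it = by_tick.find(server_tick);
        if (it == by_tick.end()) return false;            // late or lost: reuse the last input, or wait
        out = it->second;
        by_tick.erase(by_tick.begin(), std::next(it));    // discard this tick and anything older
        return true;
    }
};

The ‘short time delay’ then comes from the server simulating tick N only slightly after the clients sent their inputs for N, so that under normal jitter the input is already sitting in the queue.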

@kylotan that seems to be the problem! I implemented a simple buffer and now it actually runs fine; the only thing is the buffer adds way too much latency. For example:

  • User has a 100ms RTT (50ms one-way)
  • Client side lerp buffer 100ms
  • Server side buffer 3 ticks

100ms RTT + 100ms client buffer + 100ms server buffer (3 × 33ms at 30Hz) ≈ 300ms delay!

So even if the player has a very healthy connection (50ms one-way), they will see the game state 300ms in the past! Is this normal?


The server side buffer shouldn't really be an arbitrary amount like 3 ticks - it should just be whatever is necessary to ensure that jitter doesn't mean you ‘miss’ an update. Under good network conditions and a system where clients send some information redundantly (to make up for potential packet loss or delay) you might not even need a whole tick for that. But it's best to measure with some real-world tests.

Similarly, the client-side interpolation buffer shouldn't need to be an arbitrary 100ms - it just needs to be long enough that you almost always have enough data to interpolate between, again with jitter taken into account. e.g. If you send at 30Hz, the client only needs a 33.4ms buffer in perfect network conditions to be sure of having 2 states to interpolate between. You don't have perfect conditions so 33.4 is too short. But a buffer of 100ms is preparing for at least 3 updates being lost or late, which is arguably too much. If you're losing that many packets, you're probably in bad network conditions and are going to lose the 4th one as well, so you're accepting a lot of latency for little real gain.
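If it helps, one hedged way to turn such measurements into a buffer length is to track recent snapshot inter-arrival gaps and size the delay from a high percentile of them instead of a fixed 100ms; the percentile and the fallback below are just starting points, not recommendations from this thread.

#include <algorithm>
#include <cstddef>
#include <vector>

// Suggest an interpolation delay (in seconds) from recently measured gaps
// between received snapshots: one nominal send interval plus whatever extra
// the 95th-percentile gap shows as jitter.
double suggest_delay(std::vector<double> gaps, double send_interval)
{
    if (gaps.empty()) return 2.0 * send_interval;   // no data yet: be a bit conservative
    std::sort(gaps.begin(), gaps.end());
    const std::size_t idx = static_cast<std::size_t>(0.95 * (gaps.size() - 1));
    const double jitter = std::max(0.0, gaps[idx] - send_interval);
    return send_interval + jitter;
}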

I'd probably start by dialing that back to 50ms, as well as ensuring you have a robust strategy for extrapolation when there isn't enough data, and a robust strategy for recovering from an extrapolated position to a ‘correctly-interpolated’ position (snapping instantly is fine in the short term). And, again, doing some real-world measurements to see whether this value needs adjusting. It can be adjusted at runtime on a per-player basis, as well.
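For the recovery step, one common option (not claiming it's what any particular engine does) is to blend the displayed position back toward the correctly-interpolated one over a short window and only snap when the error is large; both constants in this sketch are placeholders.

#include <cmath>

struct Vec2 { float x, y; };

// Ease the displayed position toward the interpolated target each frame.
// Small errors dissolve smoothly; large ones snap so the entity doesn't
// visibly slide across the map.
Vec2 recover(Vec2 shown, Vec2 target, float dt)
{
    const float dx = target.x - shown.x;
    const float dy = target.y - shown.y;
    const float snap_dist = 2.0f;                        // assumed: beyond this, just snap
    if (dx * dx + dy * dy > snap_dist * snap_dist) return target;
    const float k = 1.0f - std::exp(-10.0f * dt);        // assumed rate: ~63% of the error closed per 0.1s
    return { shown.x + dx * k, shown.y + dy * k };
}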

It's worth noting that, in your original example, the player doesn't actually see the game state 300ms in the past. They see it 150ms in the past - 50ms of latency plus 100ms of buffer. Even if another player's information took an extra 150ms to get processed by the server, that information is not strictly ‘the game state’ until the server has processed it. Until then, it's merely that other player's prediction.

Typical human reaction times are well over 150ms and the buffers mentioned above can usually be reduced somewhat so it's not necessarily a problem, especially when lag compensation is used to resolve interactions. Racing games are a bit of an outlier as you don't want every player to see a world state that puts them in the lead, only to be disappointed at the finish line! So games like that will usually do more extrapolation to try and guess where the other entities will be. I've worked on games where different objects had different buffer lengths, depending on how acceptable extrapolation was for that object.
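That per-object idea can be as simple as a lookup of interpolation delay by object category; the categories and values here are invented for illustration, not from any shipped game.

#include <unordered_map>

enum class NetClass { Vehicle, Projectile, Pickup };

// Assumed tuning table: objects where extrapolation looks acceptable get a
// short buffer (more extrapolation, less delay); others get a longer one.
const std::unordered_map<NetClass, double> interp_delay_seconds = {
    { NetClass::Vehicle,    0.050 },   // racing-style object: favour responsiveness
    { NetClass::Projectile, 0.033 },
    { NetClass::Pickup,     0.100 },   // mostly static: favour smoothness
};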

This is the idea I got:

// Rough idea: hold the last few received inputs/snapshots and keep them
// ordered so they can be consumed at a fixed rate instead of on arrival.
class Buffer
{
    const int buffer_size{10};   // how many ticks' worth to hold
public:
    void sort();                 // order buffered entries by tick / sequence number
};

razcodes said:
then not get packets for 123ms, and then get a packet again and move, this will translate to choppiness

Before you start a scientific consultation about such a minuscule thing: run the same movement code on the client, sync back once in a while, and adjust the client if the difference is too big compared to a threshold.
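In other words, client-side prediction with a threshold-based correction; a tiny sketch of that idea, with the threshold and blend factor as placeholders.

#include <cmath>

struct State { float x, y; };

// Run the same movement code locally, then compare against the state the
// server sends back and correct only when the error becomes noticeable.
State reconcile(State predicted, State server, float threshold)
{
    const float dx = server.x - predicted.x;
    const float dy = server.y - predicted.y;
    if (std::sqrt(dx * dx + dy * dy) <= threshold)
        return predicted;                                // close enough: keep the local result
    // too far off: pull the client halfway back toward the server's state
    return { predicted.x + dx * 0.5f, predicted.y + dy * 0.5f };
}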

