Engine interpolation settings
My aim with these interpolation settings is to reduce visual player and item jitter at 100ms ping while keeping the game as accurate as possible. It's worth noting that I'm on gigabit symmetric fibre with 5ms ping to the nearest exchange and a gaming computer rebuilt in 2023, so your mileage may vary.
All of these numbers are measured in "ticks". Tribes 2 runs at ~31 ticks per second, with a single tick lasting 32ms (0.032 seconds). This information is based on old source code rather than assembly, so the descriptions may be inaccurate - corrections welcome!
My settings are "0 30 3 0".
maxLatencyTicks
Personal opinion: Leave at 0 and 'feel out' the latency.
Pros: Turning it up may reduce the need to lead your shots, may correct very simple player movements.
Cons: Unpredictable player movements warp more, higher inaccuracy at higher latency.
Description: Setting maxLatencyTicks higher than 0 enables client-side player position guessing. The guess is applied when you receive position data from the server, and it's simple: move the newly received player position ahead along its velocity delta (the change in speed and direction). Your server roundtrip time is a factor in the guess, so maxLatencyTicks does little if your latency is very low. The upper limit is your latency, so setting this much higher than 10 doesn't seem to do much.
maxLatencyTicks for a given ping
50ms / 64 = 0 - no correction applied
100ms / 64 = 1 - one tick of correction
250ms / 64 = 3 - three ticks
1000ms / 64 = 15 - a whole 15 ticks of correction - good luck hitting anyone!
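If it helps, here's that arithmetic as a tiny sketch. I'm assuming the correction is just half the roundtrip divided by the 32ms tick length, clamped by the preference - the names and the clamp are mine, not pulled from the engine:

```cpp
#include <algorithm>
#include <cstdio>
#include <initializer_list>

const float TICK_MS = 32.0f; // one simulation tick

// Hypothetical helper: how many filler ticks a given ping would produce.
int latencyCorrectionTicks(float pingMs, int maxLatencyTicks) {
    // Half the roundtrip is the one-way delay; divide by the tick length.
    int ticks = static_cast<int>((pingMs / 2.0f) / TICK_MS);
    return std::min(ticks, maxLatencyTicks);
}

int main() {
    for (float ping : {50.0f, 100.0f, 250.0f, 1000.0f})
        std::printf("%5.0fms ping -> %2d tick(s) of correction\n",
                    ping, latencyCorrectionTicks(ping, 30));
}
```

Dividing the full roundtrip by 64 and dividing the one-way delay by 32 are the same thing, which is why the list above uses /64.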
maxPredictionTicks
Personal opinion: The default of 30 is about as high as you'd want.
Pros: Smoother looking player movement, less player freezing.
Cons: High latencies trade freezing for warping - laggy players may warp more but freeze less.
Description: Setting maxPredictionTicks higher than 0 enables client-side player position prediction. Unlike maxLatencyTicks, maxPredictionTicks is only used when server data is unavailable. The other player's position is predicted from their last known velocity delta until new data arrives or maxPredictionTicks is exceeded. Setting this too low will result in players freezing until new movement data is received. Setting this too high may make it harder to tell when you're experiencing severe packet loss.
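As a rough sketch of what that might look like - plain dead reckoning on the last known velocity, with every name here being illustrative rather than engine code:

```cpp
#include <cstdio>

struct PlayerState {
    float pos[3];
    float vel[3]; // units per tick, from the last server update
};

// Advance a remote player one tick with no fresh server data.
// Returns false once maxPredictionTicks is exhausted (the player freezes).
bool predictTick(PlayerState& p, int& ticksSinceUpdate, int maxPredictionTicks) {
    if (ticksSinceUpdate >= maxPredictionTicks)
        return false; // hold position until the server speaks again
    for (int i = 0; i < 3; ++i)
        p.pos[i] += p.vel[i];
    ++ticksSinceUpdate;
    return true;
}

int main() {
    PlayerState p{{0, 0, 0}, {0.5f, 0, 0}};
    int ticksSinceUpdate = 0;
    while (predictTick(p, ticksSinceUpdate, 30)) {}
    std::printf("froze at x=%.1f after %d ticks\n", p.pos[0], ticksSinceUpdate);
}
```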
maxWarpTicks
Personal opinion: 3 seems best, 2 works, 1 is a bit jittery.
Pros: Lower numbers = higher accuracy. Higher numbers = smoother player warping.
Cons: Lower numbers = corpse shaking due to floating point inaccuracy. Higher numbers = less accurate player vectors.
Description: Setting maxWarpTicks higher than 0.0 enables player warp smoothing. This setting determines how many ticks a position warp will be smoothed across when the client is behind the server. Visual jitter caused by engine issues (reduced floating point accuracy in network packets) is reduced by smoothing player position updates over a number of frames. Setting this number higher than 7 while minWarpTicks is small will cause players to start gliding between their movement inputs. Try setting it to 30 on a bot-only server with high ping and you'll see what I mean.
minWarpTicks
Personal opinion: 0 or the default of 0.5 look fine.
Pros: Lower numbers = faster warp smoothing.
Cons: Higher numbers = delayed smoothing, more jitter, and reduced effectiveness of maxWarpTicks.
Description: Setting minWarpTicks enables a threshold that must be passed before player warp smoothing kicks in. I don't see the point of setting it above zero, but someone at Dynamix chose 0.5, so there's probably a reason I'm unaware of.
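To make the min/max relationship a bit more concrete, here's a hedged sketch of how the two thresholds might pick a smoothing period. The units and the clamping are my assumptions, not confirmed engine behaviour:

```cpp
#include <algorithm>
#include <cstdio>
#include <initializer_list>

// warpTicks: the size of the correction, expressed in simulation ticks.
// Small errors snap within one tick; larger ones spread over up to
// maxWarpTicks so the player slides instead of teleporting.
float chooseWarpPeriod(float warpTicks, float minWarpTicks, float maxWarpTicks) {
    if (warpTicks < minWarpTicks)
        return 1.0f;                          // below threshold: snap quickly
    return std::min(warpTicks, maxWarpTicks); // otherwise smooth, capped
}

int main() {
    // Using my "min 0.5, max 3" settings:
    for (float w : {0.2f, 1.0f, 2.5f, 8.0f})
        std::printf("correction of %.1f ticks -> smoothed over %.1f tick(s)\n",
                    w, chooseWarpPeriod(w, 0.5f, 3.0f));
}
```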
Warning
Don't set both maxWarpTicks and minWarpTicks to 0.0! This seems to partially disable player position warping, resulting in frozen players glitching around the map.
Conclusion
Dynamix engineers struck a near-perfect balance between hiding game engine inadequacies and keeping the game responsive. The networking model means that connections with packet loss will experience some warping as the server corrects positions. Kinda hard to escape that sort of thing anyway.
edit: whoops, I meant to post this in General Discussion... oh well, too late now, no delete functionality
Comments
PlayT2 Discord has looked into this topic a little bit, so I was inspired to write some additional thoughts. I'm also going to revisit the min/max warp ticks because they seem to have a weird relationship.
Impurities / complications
All of these settings imagine that the server and client send and receive data approximately every 32 milliseconds, or once per tick.
Instead of sending and receiving 31 packets per second, Tribes 2 usually sends and receives at around 22 packets per second. I have a custom client that can reach nearly 33 packets a second, but most servers don't do much with the extra data. As a result, most of these settings affect your regular gameplay because packets are often missing.
More or less. If it helps you conceptualize it better, the networking model is built around three concepts: extrapolation, prediction, and interpolation. The client is running a simulation in parallel with the server, and given relatively sparse, delayed data replication, the attempt is to get it "close enough" to the server and make it look natural. While it can look accurate, a client rarely sees exactly where a moving object is on the server - hence why accusations of cheating have always been so prevalent: what you see is not necessarily the truth.
Extrapolation is the "guess" part. In principle, what you know about the world around you is always behind the ground truth of the server by at least the latency from it to you. Because the simulation is deterministic - everything that moves in the game follows a hard set of physical rules and will reproduce the same result given the same input - it's expected that a client given all the details on an object can accurately (give or take deviation due to precision loss) extrapolate where it will be after X milliseconds.
Due to how the game works, this is generally always occurring by one tick regardless of the settings you define: in the event loop iteration following an update from the server, you'll calculate the next tick advance and continue the sim from there until the server provides more data. If you get lucky with simulation tick timings lining up, there's a sweet spot: if you're processing a packet less than ~32ms after it was created on the server (so a ping under roughly ~64ms), you'll get about as close to ideal extrapolation as a connection to a remote server allows.
There lies the rub, though: clients connected to a remote server never know precisely when an object was at a given point. There's no mechanism to synchronize timings, and this is really the only fundamental flaw in the design. Although your client is perfectly capable of simulating the exact path of an unpiloted Shrike, for example, not knowing exactly when it began a spiralling descent means that when it's updated by the server it'll try to correct to a slightly different point in time.
The maxLatencyTicks preference, if set (and I probably wouldn't set it at all with a ping below ~96ms), will "fast-forward" one or more "filler" physics ticks based on the continued current trajectory; an attempt to catch up to where the server is now, based only on where it was half your ping ago. If you imagine dropping two balls off a building, one dropped 2 seconds (or in this case, something like 64ms) after the other, this could be thought of as a way to instantly fast-forward the second ball when dropped so that they hit the ground at the same time. For items this can work well enough: unless additional forces act upon the object, you know its properties and where it's going, you can calculate any collisions with static objects locally, and in most cases it'll end up closer to where it's supposed to be. For players, however, it gets a little janky, as these filler ticks don't give consideration to active inputs upon the object; they just continue moving it like any other rigid body, which will invariably drift and require correction back to the controlled course...
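The ball analogy as a toy sketch, using simple fixed-step gravity integration - this is not the engine's physics, just the shape of the idea:

```cpp
#include <cstdio>

struct Ball { float y, vy; };

// One 32ms physics step under gravity.
void tick(Ball& b, float dt) {
    b.vy -= 9.8f * dt;
    b.y  += b.vy * dt;
}

int main() {
    const float dt = 0.032f;
    Ball local{100.0f, 0.0f};
    // The server says the ball was at y=100 half a ping ago; run two
    // "filler" ticks (~64ms) so the local copy catches up to "now".
    for (int i = 0; i < 2; ++i)
        tick(local, dt);
    std::printf("after catch-up: y=%.3f, vy=%.3f\n", local.y, local.vy);
}
```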
"Prediction" is in attempt to address that last point applies to players and vehicles controlled by a remote client or AI, being the only dynamic objects you can't reliably estimate the behaviour of due to constant external influence. For high priority targets you know what their inputs were *fairly* recently -- in best case their latency plus up to ~32ms plus your latency... in worst case... considerably worse, more on this later -- and the "prediction" is that they will be maintaining those same inputs until you hear otherwise (or for about a second by default). Obviously this isn't going to be accurate for long if there's a big latency gap to cross (e.g. if you're 100ms+ behind the server and someone slowed a high forward momentum instantly after their input to the previous packet), but for nearby players in view you're nominally getting another update every ~32ms and can reasonably use it in combination with other data to smoothly approach more or less to where it's expected to be at the present time. For distant or lower priority players where data is more sparse (particularly if there's a lot going on nearby), this will tend to be leaned on much more heavily.
Interpolation specifically takes place where the client simulation is diverging from the server: if the "present" extrapolation/prediction you'd made with data from the previously received packet ends up different from updated info in a new packet, the game will try to correct towards the latter. When the client sim runs, it'll "warp" by a certain distance over a period of time to reach the correction. If the client and server are very close (the distance/rate it needs to travel being under the minWarpTicks threshold), it'll apply the move within the period of one simulation tick; otherwise it'll stretch it out, shifting by that offset over time up to the period of maxWarpTicks. Every update from the server for these objects will of course override the previous delta adjustment for these longer interpolation periods, so you effectively get a compensated middle arc between what the client believes will happen and what the server last said has happened.
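A small sketch of that override behaviour, under the assumption that each update simply replaces the outstanding warp offset rather than stacking on top of it:

```cpp
#include <cstdio>

struct Warp {
    float offset;  // remaining distance to correct
    float perTick; // amount applied each simulation tick
};

// A new server packet re-bases the correction entirely.
void serverUpdate(Warp& w, float newError, float maxWarpTicks) {
    w.offset  = newError;                // discard the old delta
    w.perTick = newError / maxWarpTicks; // respread over the warp window
}

// Each sim tick shifts the object by one step of the current warp.
void simTick(Warp& w) {
    float step = (w.offset > w.perTick) ? w.perTick : w.offset;
    w.offset -= step;
}

int main() {
    Warp w{0, 0};
    serverUpdate(w, 3.0f, 3.0f); // 3-unit error spread over 3 ticks
    simTick(w);                  // 2 units left...
    serverUpdate(w, 1.0f, 3.0f); // ...but a new packet re-bases mid-warp
    std::printf("remaining offset: %.3f (old delta overridden)\n", w.offset);
}
```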
This ties into the sub-tick interpolation always occurring on the client: because the client's render refresh rate is much higher than the simulation tick rate, the client always calculates current object states at an intermediate interpolated point between the last known and the next simulation tick, essentially "rewinding" to backstep from where it's expected to be. When you have a warp delta, it graduates across a number of sub-tick render frames between where the local simulation is going and wherever the server most recently said it was going.
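That's the classic fixed-timestep render interpolation pattern; a minimal sketch, not engine code:

```cpp
#include <cstdio>
#include <initializer_list>

struct State { float x; };

float lerp(float a, float b, float t) { return a + (b - a) * t; }

int main() {
    // Two consecutive 32ms simulation states for some object.
    State prev{0.0f}, curr{1.0f};
    // A 60Hz renderer draws roughly twice per tick, at varying offsets
    // into the tick; alpha is the fraction of the tick already elapsed.
    for (float alpha : {0.25f, 0.75f})
        std::printf("render at alpha %.2f -> x=%.3f\n",
                    alpha, lerp(prev.x, curr.x, alpha));
}
```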
Now, to the "more on this" part of timings: not every server is going to be processing ticks (much less delivering packets) when it's expected to. With standard servers it's a crapshoot with excessive sleep periods after client moves are received, wildly incorrect timings, and the luck of the draw on whether the timing aligns for you to be allowed to receive info from the latest sim tick (or in fact whether your shot was registered for it) line up depending on tick boundaries and negotiated rate limits. There's by default a minimum period to time events, in addition to barriers on how often certain things are allowed to run at all -- such as the rate limits on net packets to clients being restricted not only by whether the server had processed a simulation tick, but how recently each client had received an update, creating situations in which some clients would get their info on the current server state much later than others, by a tick period or more. This is compounded by the fact that these rates are negotiated by the client, and many of the same issues exist on the client side: submitting a reduced number of packets from the client side whether due to incorrect net settings or simple timing issues puts you at a significant disadvantage because your moves may end up not being processed until later server ticks.
As you may be aware, the physics optimally run with perfect timing if simulation ticks occur at 31.25ms intervals (32Hz) rather than 32ms - that is, sticking to the compiled tick duration values, each "virtual" time increment fed into the simulation represents 0.9765625 real milliseconds, i.e. 1.024 sim intervals equal 1 real millisecond. Neither public servers nor clients have ever run at the full rate, so it's largely academic, but it plays into disparities in simulation behaviour and timing precision issues. Treating one time unit as 1ms (ticks at 32ms, 31.25Hz) is fine: it's the best available with standard counters (and is expected by several pieces of code where it shouldn't be in use), it lines up well with current clients, and the distinction is too small to make a difference in play - but it's still not strictly correct: the simulation runs at an effective 0.9765625 time scale. The default GetTickCount based counter, however, more so on modern Windows, is absolute chaos and the root cause of unpleasant-feeling server pacing. Because an unadjusted process will only interrupt at 15.625ms intervals, and the counter has an integer resolution anywhere between 10ms and 16ms, in practical terms it has very unstable pacing and skips processing around every third tick.
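To show why that's so destructive, here's a toy model where the process only wakes on 15.625ms interrupt boundaries and reschedules each tick 32ms after the previous one actually ran. That reschedule policy is my simplifying assumption, not a claim about the real loop, but it illustrates the pacing collapse:

```cpp
#include <cstdio>

int main() {
    const double quantum = 15.625; // default Windows interrupt period
    const double tickMs  = 32.0;   // desired simulation tick length
    double now = 0.0, lastFire = 0.0;
    int ticks = 0;
    while (now < 500.0) {
        now += quantum;                 // next timer interrupt
        if (now - lastFire >= tickMs) { // a full tick period has elapsed
            std::printf("tick %2d at %7.3fms (gap %.3fms)\n",
                        ++ticks, now, now - lastFire);
            lastFire = now;
        }
    }
    std::printf("%d ticks in 500ms (ideal: %d)\n", ticks, (int)(500 / tickMs));
}
```

Every gap comes out at 46.875ms (three interrupt periods) because two periods is 31.25ms, just shy of 32ms, so the loop always waits one more. You get 10 ticks where 15 should fit - roughly where "skips around every third tick" comes from, and not far off the ~22 packets per second figure mentioned earlier, though I'm only speculating at that connection.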
I've thrown this little chart together for quick illustrative purposes, so it's not a comprehensive reflection of the behaviour in every case; the timing/spacing in particular is largely approximation to show why one server may feel different from another, and we'll call the client a ~2012-era integrated-GPU system running with 60Hz vsync on and a ~64ms ping. The blue lines indicate a trigger sent to the server and the response time back to the client... that being, when you might see a shot register after clicking your mouse. The older experimental code running on the Discord pub for a number of years has been a bit of a hybrid: a 32ms fixed timer with 1ms event loop wake iterations and a 1ms sleep period -- it's never fully idle for long because it wakes to poll winsock (the wine interface, rather) if at least 1ms has passed since the start of the last iteration. Doing this isn't necessary in the ideal case, but it also does so because there was an interest in sending more data faster: i.e. if the server was unable to fit everything in one packet and there's an allowable increased rate, using some rewritten packet delivery logic, it has an opportunity to send the rest before the next simulation tick. Not sure this part has been in use recently.
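For reference, a rough sketch of that hybrid loop shape - purely illustrative, with the polling and simulation calls as hypothetical placeholders:

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    const auto tick = std::chrono::milliseconds(32); // fixed sim timer
    auto nextTick = clock::now();
    for (int ticks = 0; ticks < 5;) {
        // Wake roughly every 1ms to service the network, never idling long.
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
        // pollNetwork();  // hypothetical: check winsock each wakeup,
        //                 // flush any leftover packet data early
        if (clock::now() >= nextTick) {
            // advanceSimulation();  // hypothetical fixed-step tick
            nextTick += tick;
            std::printf("tick %d\n", ++ticks);
        }
    }
}
```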