Multiplayer Game Programming: Architecting Networked Games

Reviewed on , book by Josh Glazer

I've been toying with some collaborative web features, and I thought skimming a book on networking in games was likely to teach me something interesting. In particular, I was interested in what techniques games use to deal with latency, reliability, and desync between clients. There are chapters on low-level network details (what's a socket, what's TCP/IP, what's Ethernet), which I skimmed faster to get to the game-specific content. There's an interesting chapter on UDP vs TCP for games. It boils down to this: if you need to vary the quality of service per object (some updates must arrive, others can be dropped or superseded) and you have strong real-time requirements, use something based on UDP, which describes most games. Otherwise, entertain TCP (turn-based games would do fine here).
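To make the UDP side concrete, here's a minimal sketch (my own illustration, not code from the book) of sending game state as unreliable datagrams tagged with a sequence number, so the receiver can simply discard anything older than what it has already applied. All names, ports, and the payload layout are made up for the example.

```python
import socket
import struct

# Hypothetical example: position updates sent as unreliable UDP datagrams,
# each tagged with a sequence number so stale or duplicate packets can be dropped.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_state(seq, x, y, addr=("127.0.0.1", 9999)):
    # Pack sequence number + position into a fixed-size binary payload.
    sock.sendto(struct.pack("!Iff", seq, x, y), addr)

def handle_packet(data, last_seq):
    seq, x, y = struct.unpack("!Iff", data)
    if seq <= last_seq:
        return last_seq, None   # older than what we've seen: ignore it
    return seq, (x, y)          # newer state: apply it
```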

It’s quite possible for clients to somehow get out of sync. Perhaps a bug placed an object slightly differently on two machines. In general, in my experience, when you send a lot of incremental state, you will eventually get out of sync unless there’s a mechanism to directly prevent it (e.g. a log offset). If this isn’t accounted for, a butterfly effect can easily take hold, especially in an RTS game. A rock is placed slightly differently, and in one game players run around it or take cover behind it, but not in the other. A player loses 10 health on one client, but not on another. To combat this, a hash of the game state is continuously exchanged between client and server. If the hashes don’t match, some action is taken: in some cases you’re just kicked out of the game, in others you might be sent a known-good state.
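A sketch of the idea (my own illustration, not the book's code): each side hashes the sync-relevant fields of the game state in a deterministic order and compares digests every so often. The field names and quantization are hypothetical.

```python
import hashlib
import struct

def state_hash(entities):
    # Hash the sync-relevant fields of every entity in a deterministic order.
    # Positions are quantized so tiny float differences don't change the digest.
    h = hashlib.sha256()
    for eid in sorted(entities):
        e = entities[eid]
        h.update(struct.pack("!Iiii", eid, int(e["x"] * 100), int(e["y"] * 100), e["health"]))
    return h.hexdigest()

# Client sends its digest alongside the tick number; if the server's digest for
# the same tick differs, it can kick the client or resend a known-good snapshot.
client = {1: {"x": 3.0, "y": 4.0, "health": 90}}
server = {1: {"x": 3.0, "y": 4.0, "health": 100}}   # desynced health
print(state_hash(client) == state_hash(server))      # False -> take corrective action
```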

What about lag? Most games use a generous amount of interpolation (smoothing the transition between two known-good states) and extrapolation (predicting the future from current events) to deal with it. Some of the first networked games did none of this and relied on synchronizing with the server on every step (e.g. Quake). Others use a lockstep, turn-based model, like RTS games such as Age of Empires or StarCraft, with 100-200ms turns.
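As a toy illustration of interpolation (my sketch, not the book's code): a remote player is rendered by blending between the last two snapshots received from the server, with the renderer running slightly in the past so it always has two snapshots to blend between. The snapshot format here is made up.

```python
def lerp(a, b, t):
    # Linear interpolation between two values, t in [0, 1].
    return a + (b - a) * t

def interpolated_position(prev_snap, next_snap, render_time):
    # prev_snap/next_snap are (timestamp, x, y) tuples received from the server.
    t0, x0, y0 = prev_snap
    t1, x1, y1 = next_snap
    t = (render_time - t0) / (t1 - t0)
    t = max(0.0, min(1.0, t))
    return lerp(x0, x1, t), lerp(y0, y1, t)

# e.g. snapshots 100 ms apart, rendering halfway between them -> roughly (0.5, 0.0)
print(interpolated_position((0.0, 0.0, 0.0), (0.1, 1.0, 0.0), 0.05))
```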

What I was more interested in are the games that sync in real time (they seem the most complicated), such as shooters, since that's most applicable to real-time collaboration. They only receive updates from the server every so often (each arriving roughly half a round-trip time after it was sent), so between those updates they smooth movement through interpolation; otherwise the game would look jumpy. Clients may also extrapolate: if someone is running in some direction at some speed, your client may assume they keep doing that (and correct it if the next server update says they stopped before your client got there). In this model the server is the source of truth, and the clients keep guessing what might happen so they don't constantly snap to a very different frame whenever the server sends an update. These techniques are very common today, and seem to be well supported by modern game engines.
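A rough sketch of the extrapolation side (often called dead reckoning), with a gentle correction when the server contradicts the guess; again this is my own illustration with made-up numbers, not anything from the book.

```python
def extrapolate(last_pos, velocity, dt):
    # Assume the player keeps moving at their last known velocity.
    return (last_pos[0] + velocity[0] * dt, last_pos[1] + velocity[1] * dt)

def correct(predicted, authoritative, blend=0.2):
    # When a server update disagrees with the prediction, ease toward it
    # over a few frames instead of snapping, so the correction is less visible.
    return (predicted[0] + (authoritative[0] - predicted[0]) * blend,
            predicted[1] + (authoritative[1] - predicted[1]) * blend)

pos = extrapolate((10.0, 5.0), (2.0, 0.0), 0.05)   # guess 50 ms ahead
pos = correct(pos, (10.05, 5.0))                   # server says they mostly stopped
print(pos)
```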

A technique I found particularly fascinating is “rewinding.” If you have another player in your sniper scope and click, you expect to hit them. However, in the 20-50ms before the shot reaches the server, the other player may have ducked (which you might not have had a chance to see). A technique pioneered by Valve’s Source engine is to favour the shooter in this case: rewind the state on the server to the shooter’s point in time, verify the hit there, and then log an event for the shot. This can be frustrating for the victim, who had already ducked in the interim.
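A simplified sketch of the rewinding idea (my illustration, not the Source engine's actual implementation): the server keeps a short history of each player's position per tick, rewinds the target to the time the shooter saw when validating a hit, then records the result. The class and parameter names are hypothetical.

```python
from collections import deque

class PositionHistory:
    """Server-side ring buffer of (timestamp, position) samples for one player."""
    def __init__(self, max_samples=64):
        self.samples = deque(maxlen=max_samples)

    def record(self, timestamp, position):
        self.samples.append((timestamp, position))

    def position_at(self, timestamp):
        # Return the most recent recorded position at or before `timestamp`.
        best = self.samples[0][1]
        for t, pos in self.samples:
            if t > timestamp:
                break
            best = pos
        return best

def validate_hit(history, shot_time, aim_point, hit_radius=0.5):
    # Rewind the target to where the shooter saw them when they fired.
    target = history.position_at(shot_time)
    dx, dy = aim_point[0] - target[0], aim_point[1] - target[1]
    return dx * dx + dy * dy <= hit_radius * hit_radius

history = PositionHistory()
history.record(1.00, (5.0, 5.0))   # where the shooter saw the victim
history.record(1.03, (5.0, 3.0))   # victim ducked by the time the shot arrived
print(validate_hit(history, shot_time=1.00, aim_point=(5.0, 5.0)))  # True: shooter favoured
```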

The few chapters I read in detail on the tips and tricks were interesting, but I think if I were someone about to write a networked game, I'd be disappointed. There are still a lot of questions in my mind about how all this works and how various edge cases are addressed. I'd love to see demos of games with and without the various strategies to really understand how they work, and to get an intuitive sense for how different amounts of latency affect the game. The writing style was very approachable; kudos to the author for that.