Lately I’ve been thinking a lot about implementing the networking part of Sylphis3D. I must say it can be a brain-melting procedure. There are so many things to take into account that you can easily find yourself on dead-end paths.
When designing a networking architecture you are basically trying to optimize for the following parameters:
- Low latency
- Small bandwidth requirements (think 56 Kbps modems)
- Transmission reliability and network fault tolerance
- Non-intrusive integration with the rest of the engine
The first two in the list are closely related: the more data you push through a constrained link, the higher the latency climbs. This means the networking subsystem should use some form of compression. The most appropriate compression method for games is so-called delta compression. This is a high-level data compression scheme, since it requires very high-level information about the data. So you immediately have your networking code interfering with the parts of the engine that I would really like to leave alone. On top of that you can add some regular compression at the end (like gzip), but that will probably gain you very little.
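The core idea of delta compression can be sketched in a few lines. This is a hypothetical illustration (the entity state as a Python dict, the field names invented), not the engine's actual format: only the fields that changed since a baseline the receiver already has go on the wire.

```python
# Sketch of delta compression: instead of sending the full entity state
# every update, send only the fields that differ from a baseline state
# the receiver is known to have.

def delta_encode(baseline, current):
    """Return only the fields of `current` that changed since `baseline`."""
    return {k: v for k, v in current.items() if baseline.get(k) != v}

def delta_decode(baseline, delta):
    """Reconstruct the full state by applying the delta to the baseline."""
    state = dict(baseline)
    state.update(delta)
    return state

baseline = {"x": 10.0, "y": 5.0, "health": 100, "weapon": "shotgun"}
current  = {"x": 12.5, "y": 5.0, "health": 100, "weapon": "shotgun"}

delta = delta_encode(baseline, current)   # only {"x": 12.5} goes on the wire
assert delta_decode(baseline, delta) == current
```

Notice the "high-level information" requirement: the compressor has to know what an entity state looks like, which is exactly why it leaks into the rest of the engine.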
Since packet-switched networks are very unreliable, you also need to implement some sort of transmission control. You can’t get away with just using TCP; you need more control. All TCP can give you is a continuous stream of bytes (and an extra big header on each packet). With TCP a single lost packet can stall all your other packets until that packet arrives. In a game you probably don’t care about an old update packet if there are newer packets in the queue. So you end up with the connectionless UDP protocol, and the connection is implemented on top of that. This connection should support unreliable, reliable, sequenced and unsequenced streams.
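The sequenced-but-unreliable case is the simplest to sketch. Here is a minimal illustration of the idea (the header layout and flag values are hypothetical, not Sylphis3D's wire format): each packet carries a sequence number, and the receiver simply drops anything older than the newest packet it has already seen.

```python
# Sketch of a sequenced, unreliable channel on top of UDP: stale updates
# are discarded instead of stalling the stream the way TCP would.

import struct

HEADER = struct.Struct("!IB")   # 4-byte sequence number, 1-byte channel flags
FLAG_RELIABLE = 0x01            # hypothetical flag for a reliable stream

def pack_packet(sequence, flags, payload):
    return HEADER.pack(sequence, flags) + payload

class SequencedReceiver:
    def __init__(self):
        self.latest = -1

    def receive(self, packet):
        """Return the payload, or None if the packet is out of date."""
        sequence, flags = HEADER.unpack_from(packet)
        if sequence <= self.latest:
            return None          # stale update: a newer state already arrived
        self.latest = sequence
        return packet[HEADER.size:]

rx = SequencedReceiver()
assert rx.receive(pack_packet(2, 0, b"pos=12")) == b"pos=12"
assert rx.receive(pack_packet(1, 0, b"pos=10")) is None  # old packet dropped
```

The reliable streams would additionally resend until acknowledged; the point of rolling your own connection is choosing that policy per stream instead of getting TCP's one-size-fits-all behavior.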
The good part starts when you try to implement all of the above together. For example: it is nice not to care about lost packets when you only need the latest update, but what happens when you delta-compress your packets? You need some ack system even for unreliable packets. The two ends of the wire should know what the other end knows and send diffs relative to that. This means that both sides have to keep many old states of entities…
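A rough sketch of what that interaction forces on the sender, assuming the dict-based delta scheme from above (the class and its bookkeeping are illustrative, not the engine's design): every sent state is remembered until the receiver acknowledges one, and each new delta is computed against the newest acknowledged baseline.

```python
# Sketch of delta compression against the last acknowledged state: the
# sender must keep a history of old states, because it can only diff
# against a baseline the receiver has confirmed it received.

class DeltaSender:
    def __init__(self):
        self.history = {}     # sequence -> full state sent with that number
        self.acked = None     # newest sequence the receiver confirmed
        self.sequence = 0

    def send(self, state):
        self.sequence += 1
        self.history[self.sequence] = dict(state)
        baseline = self.history.get(self.acked, {})
        delta = {k: v for k, v in state.items() if baseline.get(k) != v}
        return self.sequence, self.acked, delta

    def on_ack(self, sequence):
        self.acked = sequence
        # States older than the acked baseline can finally be forgotten.
        self.history = {s: st for s, st in self.history.items() if s >= sequence}

sender = DeltaSender()
seq, base, delta = sender.send({"x": 1, "y": 2})
assert delta == {"x": 1, "y": 2}   # no acked baseline yet: send full state
sender.on_ack(seq)
seq2, base2, delta2 = sender.send({"x": 1, "y": 3})
assert delta2 == {"y": 3}          # only what changed since the ack
```

The memory cost is visible right in the `history` dict: until acks come back, old states pile up, which is exactly the mess the paragraph above describes.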
As for the network topology: since I’m talking about first-person shooters, it is going to be client/server. I’m not going to consider peer-to-peer at all.
Actually, this is not the first time I have dealt with this problem. My previous 3D engine, ITT (this page is from the web archive, since the original page no longer exists), supported a client/server networking model. I remember that it took a lot of hard work to get it working. What I think makes it so hard is not the combination of the above. It is the fact that we are used to making the server completely authoritative over the game. This is for security against cheating. It requires that the client basically sends the user's commands to the server; the server updates your state and sends it back to you. This would be fine in a perfect world with no latency. But in the real world it means the user sees his actions happen a round-trip time later.
So client-side prediction comes into play and, in combination with delta compression etc., makes the whole thing a mess.
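The prediction idea itself can be sketched simply, even if debugging it is anything but simple. This is an illustrative toy (one-dimensional movement, invented names), not ITT's or Sylphis3D's code: the client applies its inputs immediately, remembers them, and when a late authoritative server state arrives it rewinds to it and replays every input the server has not processed yet.

```python
# Sketch of client-side prediction with server reconciliation.

def apply_input(position, move):
    # Stand-in for the real movement simulation.
    return position + move

class PredictingClient:
    def __init__(self):
        self.position = 0.0
        self.pending = []        # (input_sequence, move) not yet confirmed

    def local_move(self, sequence, move):
        self.pending.append((sequence, move))
        self.position = apply_input(self.position, move)   # predict now

    def on_server_state(self, last_processed, server_position):
        # Drop inputs the server already applied, rewind to the
        # authoritative state, then replay the still-unconfirmed inputs.
        self.pending = [(s, m) for s, m in self.pending if s > last_processed]
        self.position = server_position
        for _, move in self.pending:
            self.position = apply_input(self.position, move)

client = PredictingClient()
client.local_move(1, 2.0)
client.local_move(2, 3.0)
assert client.position == 5.0
# Server confirms input 1 only; replaying input 2 keeps movement smooth.
client.on_server_state(1, 2.0)
assert client.position == 5.0
```

When prediction and server simulation disagree even slightly, the replay produces exactly the tiny jumps described below, which is why they are so hard to track down.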
I remember trying to debug the prediction system of ITT. That was the hardest debugging process I have ever gone through. I have programmed kernel drivers and schedulers for operating systems, but nothing compares to that! You stick your face to the monitor trying to catch a possible tiny jump or break in the smooth movement of the player, and you find yourself asking: what was that? Was that the prediction code or something else? And I had no journaling in ITT.
Anyway, I’m not really enthusiastic about doing it all again for Sylphis3D. As far as I can see, all this complexity exists only to make it hard to modify the client, to stop things like making the player run faster.
Why not make the client authoritative over the player's movement and have the server check whether he is cheating? The server can act like a referee: it allows you to make your moves, but if you cheat you are kicked…
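A minimal sketch of that referee idea, under obviously simplified assumptions (one-dimensional movement, an invented `MAX_SPEED` constant, kick policy left out): the client reports its own position, and the server only sanity-checks that the move was physically plausible.

```python
# Sketch of server-as-referee validation: accept a client-reported move
# only if it could have been produced by a legal, un-cheated client.

MAX_SPEED = 320.0   # hypothetical maximum legal speed, units per second

def validate_move(old_pos, new_pos, dt):
    """Return True if the reported move is within the speed limit."""
    distance = abs(new_pos - old_pos)
    return distance <= MAX_SPEED * dt

assert validate_move(0.0, 3.0, 0.01)        # 300 u/s: plausible, allowed
assert not validate_move(0.0, 10.0, 0.01)   # 1000 u/s: speed hack, kick him
```

The appeal is that the expensive machinery (prediction, reconciliation, replay) disappears; the open question is whether cheap checks like this can catch enough kinds of cheating.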