Reliability and Flow Control

Introduction

Hi, I’m Glenn Fiedler and welcome to the fourth article in my series Networking for Game Programmers.

In the previous article, we added our own concept of a virtual connection on top of UDP.

Now we’re going to add reliability and flow control to our connection.

This is by far the most complicated aspect of low-level game networking, so this is going to be a pretty intense article. Strap in and let’s go! :)

The Problem with TCP

Those of you familiar with TCP know that it already has its own concept of connection, reliability and flow control, so why are we rewriting our own mini version of TCP on top of UDP?

The issue is that multiplayer action games rely on a steady stream of packets sent at rates of 10 to 30 packets per second, and for the most part, the data contained in these packets is so time sensitive that only the most recent data is useful. This includes data such as player inputs, the position, orientation and velocity of each player character, and the state of physics objects in the world.

The problem with TCP is that it abstracts data delivery as a reliable ordered stream. Because of this, if a packet is lost, TCP has to stop and wait for that packet to be resent. This interrupts the steady stream of packets, because more recent packets must wait in a queue until the resent packet arrives, so that packets can be presented in order.

What we need is a different type of reliability. Instead of having all data treated as a reliable ordered stream, we want to send packets at a steady rate and get notified when packets are received by the other computer. This allows time sensitive data to get through without waiting for resent packets, while letting us make our own decision about how to handle packet loss at the application level.

It is not possible to implement a reliability system with these properties using TCP, so we have no choice but to roll our own on top of UDP.

Unfortunately, reliability is not the only thing we must rewrite. This is because TCP also provides flow control, so that it can dynamically scale the rate of data sent to suit the properties of the connection. For example, TCP sends less data over a 28.8k modem than it would over a T1 line, and it is able to do so without knowing in advance what sort of connection it is!

Sequence Numbers

Now back to reliability!

The goal of our reliability system is simple: we want to know which packets arrive at the other side of the connection.

First we need a way to identify packets.

What if we added the concept of a “packet id”? Let’s make it an integer value. We could start it at zero, then increase the number by one with each packet we send. The first packet we send would be packet 0, and the 100th packet sent would be packet 99.

This is actually quite a common technique. It’s even used in TCP! These packet ids are called sequence numbers. While we’re not going to implement reliability exactly as TCP does, it makes sense to use the same terminology, so we’ll call them sequence numbers from now on.

Since UDP does not guarantee the order of packets, the 100th packet received is not necessarily the 100th packet sent. It follows that we need to insert the sequence number somewhere in the packet, so that the computer at the other side of the connection knows which packet it is.

We already have a simple packet header for the virtual connection from the previous article, so we’ll just add the sequence number in the header like this:

    [uint protocol id]
    [uint sequence]
    (packet data...)

Now when the other computer receives a packet, it knows its sequence number according to the computer that sent it.
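
To make this concrete, here is a minimal sketch of how such a header might be written in front of the packet payload. The byte-order helper and function names here are my own assumptions, not code from the example source:

// hypothetical helper: write a 32 bit unsigned integer in big endian byte order
void write_uint32( unsigned char * data, unsigned int value )
{
    data[0] = (unsigned char) ( value >> 24 );
    data[1] = (unsigned char) ( ( value >> 16 ) & 0xFF );
    data[2] = (unsigned char) ( ( value >> 8 ) & 0xFF );
    data[3] = (unsigned char) ( value & 0xFF );
}

// write protocol id and sequence in front of the payload, returning the header size
int write_header( unsigned char * packet, unsigned int protocol_id, unsigned int sequence )
{
    write_uint32( packet, protocol_id );
    write_uint32( packet + 4, sequence );
    return 8;
}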

Acks

Now that we can identify packets using sequence numbers, the next step is to let the other side of the connection know which packets we receive.

Logically this is quite simple: we just need to take note of the sequence number of each packet we receive, and send those sequence numbers back to the computer that sent them.

Because we are sending packets continuously between both machines, we can just add the ack to the packet header, just like we did with the sequence number:

    [uint protocol id]
    [uint sequence]
    [uint ack]
    (packet data...)

Our general approach is as follows, with a code sketch after the list:

  • Each time we send a packet we increase the local sequence number
  • When we receive a packet, we check the sequence number of the packet against the sequence number of the most recently received packet, called the remote sequence number. If the packet is more recent, we update the remote sequence to be equal to the sequence number of the packet.
  • When we compose packet headers, the local sequence becomes the sequence number of the packet, and the remote sequence becomes the ack.
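
Here is a minimal sketch of that bookkeeping, assuming a simple per-connection state struct. The names are my own, and the naive greater-than comparison glosses over sequence number wrap-around, which we handle later in this article:

// per-connection sequence number state (names are assumptions, not from the example source)
struct ReliabilityState
{
    unsigned int local_sequence;      // sequence number of the next packet we send
    unsigned int remote_sequence;     // most recent sequence number received
};

// called when composing an outgoing packet header
void on_packet_send( ReliabilityState & state, unsigned int & sequence, unsigned int & ack )
{
    sequence = state.local_sequence++;    // stamp the packet, then advance
    ack = state.remote_sequence;          // ack the most recent packet received
}

// called when a packet arrives with the given sequence number
void on_packet_receive( ReliabilityState & state, unsigned int sequence )
{
    if ( sequence > state.remote_sequence )    // naive compare: see wrap-around section below
        state.remote_sequence = sequence;
}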

This simple ack system works provided that one packet comes in for each packet we send out.

But what if packets clump up such that two packets arrive before we send a packet? We only have space for one ack per-packet, so what do we do?

Now consider the case where one side of the connection is sending packets at a faster rate. If the client sends 30 packets per-second, and the server only sends 10 packets per-second, we need at least 3 acks included in each packet sent from the server.

Let’s make it even more complex! What if the packet containing the ack is lost? The computer that sent the original packet would think the packet got lost, even though it was actually received!

It seems like we need to make our reliability system… more reliable!

Reliable Acks

Here is where we diverge from TCP.

What TCP does is maintain a sliding window, where the ack sent is the sequence number of the next packet it expects to receive, in order. If TCP does not receive an ack for a given packet, it stops and resends the packet with that sequence number. This is exactly the behavior we want to avoid!

So in our reliability system, we never resend a packet with a given sequence number. We send packet n exactly once, then we send n+1, n+2 and so on. We never stop and resend packet n if it was lost; we leave it up to the application to compose a new packet containing the data that was lost, if necessary, and this packet gets sent with a new sequence number.

Because we’re doing things differently from TCP, it’s now possible to have holes in the set of packets we ack, so it is no longer sufficient to just state the sequence number of the most recent packet we have received.

We need to include multiple acks per-packet.

How many acks do we need?

As mentioned previously, we have the case where one side of the connection sends packets faster than the other. Let’s assume that the worst case is one side sending no less than 10 packets per-second, while the other sends no more than 30. In this case, the average number of acks we’ll need per-packet is 3, but if packets clump up a bit, we could need more. Let’s say 6-10 in the worst case.

What about acks that don’t get through because the packet containing the ack is lost?

To solve this, we’re going to use a classic networking strategy of using redundancy to defeat packet loss!

Let’s include 33 acks per-packet, and not just up to 33, but always 33. So for any given ack we redundantly send it up to 32 additional times, just in case one packet with the ack doesn’t get through!

But how can we possibly fit 33 acks in a packet? At 4 bytes per-ack that’s 132 bytes!

The trick is to represent the 32 previous acks before “ack” using a bitfield, like this:

    [uint protocol id]
    [uint sequence]
    [uint ack]
    [uint ack bitfield]
    (packet data...)

We define “ack bitfield” such that each bit corresponds to acks of the 32 sequence numbers before “ack”. So let’s say “ack” is 100. If the first bit of “ack bitfield” is set, then the packet also includes an ack for packet 99. If the second bit is set, then packet 98 is acked. This goes all the way down to the 32nd bit for packet 68.
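
As a rough sketch, building the ack bitfield from the queue of recently received sequence numbers might look like this. The std::set is my own choice for clarity (the example source uses different containers), and sequence wrap-around is ignored here:

#include <set>

// build the ack bitfield for the outgoing header. "received" holds the sequence
// numbers of recently received packets, "ack" is the remote sequence number.
unsigned int build_ack_bitfield( const std::set<unsigned int> & received, unsigned int ack )
{
    unsigned int bitfield = 0;
    for ( unsigned int n = 1; n <= 32; ++n )
    {
        if ( ack >= n && received.count( ack - n ) )
            bitfield |= 1u << ( n - 1 );      // bit n-1 acks sequence number "ack - n"
    }
    return bitfield;
}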

Our adjusted algorithm looks like this, with a sketch of the receive side ack scan after the list:

  • Each time we send a packet we increase the local sequence number
  • When we receive a packet, we check the sequence number of the packet against the remote sequence number. If the packet sequence is more recent, we update the remote sequence number.
  • When we compose packet headers, the local sequence becomes the sequence number of the packet, and the remote sequence becomes the ack. The ack bitfield is calculated by looking into a queue of up to 33 packets, containing sequence numbers in the range [remote sequence - 32, remote sequence]. We set bit n (in [1,32]) in ack bits to 1 if the sequence number remote sequence – n is in the received queue.
  • Additionally, when a packet is received, ack bitfield is scanned and if bit n is set, then we acknowledge sequence number packet sequence – n, if it has not been acked already.
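
The receive side scan over the ack bitfield might look like the following sketch, where acknowledge_packet is a hypothetical application callback that ignores sequence numbers which have already been acked, and wrap-around is again ignored for clarity:

// process the ack and ack bitfield from an incoming packet header
void process_acks( unsigned int ack, unsigned int ack_bitfield,
                   void (*acknowledge_packet)( unsigned int ) )
{
    acknowledge_packet( ack );                    // the most recent packet, acked directly
    for ( unsigned int n = 1; n <= 32; ++n )
    {
        if ( ack_bitfield & ( 1u << ( n - 1 ) ) )
            acknowledge_packet( ack - n );        // bit n-1 acks sequence "ack - n"
    }
}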

With this improved algorithm, you would have to lose 100% of packets for more than a second to stop an ack getting through. And of course, it easily handles different send rates and clumped up packet receives.

Detecting Lost Packets

Now that we know what packets are received by the other side of the connection, how do we detect packet loss?

The trick here is to flip it around: if we don’t get an ack for a packet within a certain amount of time, we consider that packet lost.

Given that we are sending at no more than 30 packets per second, and we are redundantly sending acks roughly 30 times, if you don’t get an ack for a packet within one second, it is very likely that the packet was lost.

So we are playing a bit of a trick here: we can know 100% for sure which packets got through, but we can only be reasonably certain about the set of packets that didn’t arrive.

The implication of this is that any data which you resend using this reliability technique needs to have its own message id so that if you receive it multiple times, you can discard it. This can be done at the application level.
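
Here is a sketch of this timeout based loss detection over a queue of sent packets. The data structure and the on_packet_lost callback are my own assumptions; the example source tracks this differently:

#include <vector>

void on_packet_lost( unsigned int sequence );    // hypothetical application callback

struct SentPacket
{
    unsigned int sequence;
    float time_sent;      // local time when the packet was sent, in seconds
    bool acked;
};

// walk the sent queue: anything not acked within "timeout" seconds is presumed lost
void detect_lost_packets( std::vector<SentPacket> & sent, float current_time, float timeout = 1.0f )
{
    for ( size_t i = 0; i < sent.size(); )
    {
        if ( sent[i].acked )
        {
            sent.erase( sent.begin() + i );       // acked: no longer needs tracking
        }
        else if ( current_time - sent[i].time_sent > timeout )
        {
            // presumed lost: let the application decide whether to resend the
            // data inside a *new* packet with a new sequence number
            on_packet_lost( sent[i].sequence );
            sent.erase( sent.begin() + i );
        }
        else
            ++i;
    }
}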

Handling Sequence Number Wrap-Around

No discussion of sequence numbers and acks would be complete without coverage of sequence number wrap around!

Sequence numbers and acks are 32 bit unsigned integers, so they can represent numbers in the range [0,4294967295]. That’s a very high number! So high that if you sent 30 packets per-second, it would take over four and a half years for the sequence number to wrap back around to zero.

But perhaps you want to save some bandwidth so you shorten your sequence numbers and acks to 16 bit integers. You save 4 bytes per-packet, but now they wrap around in only half an hour!

So how do we handle this wrap around case?

The trick is to realize that if the current sequence number is already very high, and the next sequence number that comes in is very low, then you must have wrapped around. So even though the new sequence number is numerically lower than the current sequence value, it actually represents a more recent packet.

For example, let’s say we encoded sequence numbers in one byte (not recommended btw. :)), then they would wrap around after 255 like this:

    ... 252, 253, 254, 255, 0, 1, 2, 3, ...

To handle this case we need a new function that is aware of the fact that sequence numbers wrap around to zero after 255, so that 0, 1, 2, 3 are considered more recent than 255. Otherwise, our reliability system stops working after you receive packet 255.

Here it is:

bool sequence_more_recent( unsigned int s1, 
                           unsigned int s2, 
                           unsigned int max )
{
    return ( ( s1 > s2 ) && ( s1 - s2 <= max/2 ) ) 
               ||
           ( ( s2 > s1 ) && ( s2 - s1 > max/2 ) );
}

This function works by comparing the two numbers and their difference. If their difference is less than half the maximum sequence number value, they must be close together, so we just check if one is greater than the other, as usual. However, if they are far apart, with a difference greater than half the max sequence value, we paradoxically consider the incoming sequence number more recent if it is less than the current sequence number.

This last bit is what handles the wrap around of sequence numbers transparently, so 0, 1, 2 are considered more recent than 255.

So simple and elegant!

Make sure you include this in any sequence number processing you do!
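
As a quick sanity check, here is how the function behaves near the wrap-around point for the one byte example above (a hypothetical test snippet, assuming the function defined earlier):

#include <cassert>

int main()
{
    const unsigned int max = 255;                       // one byte sequence numbers
    assert(  sequence_more_recent( 254, 253, max ) );   // ordinary case: 254 is newer than 253
    assert(  sequence_more_recent( 2, 254, max ) );     // wrapped around: 2 is newer than 254
    assert( !sequence_more_recent( 254, 2, max ) );     // and 254 is older than 2
    return 0;
}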

Flow Control

While we have solved reliability, there is still the question of flow control. TCP provides flow control and congestion avoidance as part of the protocol, bundled with its reliability, but UDP has no flow control whatsoever!

If we just send packets without some sort of flow control, we risk flooding the connection and inducing severe latency (2 seconds plus!) as routers between us and the other computer become congested and buffer up packets. This happens because routers try very hard to deliver all the packets we send, and therefore tend to buffer up packets in a queue before they consider dropping them.

While it would be nice if we could tell the routers that our packets are time sensitive and should be dropped instead of buffered if the router is overloaded, we can’t really do this without rewriting the software for all routers in the world!

So instead, we need to focus on what we can actually do, which is to avoid flooding the connection in the first place.

The way to do this is to implement our own basic flow control. And I stress basic! Just like reliability, we have no hope of coming up with something as general and robust as TCP’s implementation on the first try, so let’s keep it as simple as possible.

Measuring Round Trip Time

Since the whole point of flow control is to avoid flooding the connection and increasing round trip time (RTT), it makes sense that the most important metric as to whether or not we are flooding our connection is the RTT itself.

We need a way to measure the RTT of our connection.

Here is the basic technique, with a sketch of the smoothing step after the list:

  • For each packet we send, we add an entry to a queue containing the sequence number of the packet and the time it was sent.
  • Each time we receive an ack, we look up this entry and note the difference in local time between the time we received the ack and the time we sent the packet. This is the RTT for that packet.
  • Because the arrival of packets varies with network jitter, we need to smooth this value to provide something meaningful, so each time we obtain a new RTT we move a percentage of the distance between our current RTT and the packet RTT. 10% seems to work well for me in practice. This is called an exponentially smoothed moving average, and it has the effect of smoothing out noise in the RTT with a low pass filter.
  • To ensure that the sent queue doesn’t grow forever, we discard packets once they have exceeded some maximum expected RTT. As discussed in the previous section on reliability, it is exceptionally likely that any packet not acked within a second was lost, so one second is a good value for this maximum RTT.
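
The smoothing step itself is tiny. A sketch, using the 10% factor mentioned in the list above:

// move 10% of the way from the current smoothed rtt toward the latest measurement.
// this acts as a simple low pass filter over the noisy per-packet rtt values.
void update_rtt( float & smoothed_rtt, float packet_rtt )
{
    const float tightness = 0.10f;
    smoothed_rtt += ( packet_rtt - smoothed_rtt ) * tightness;
}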

Now that we have RTT, we can use it as a metric to drive our flow control. If RTT gets too large we send data less frequently; if it’s within acceptable range we can try sending data more frequently.

Simple Binary Flow Control

As discussed before, let’s not get greedy; we’ll implement a very basic flow control with two modes, good and bad. I call it simple binary flow control.

Let’s assume you send packets of a certain size, say 256 bytes. You would like to send these packets 30 times a second, but if conditions are bad, you can drop down to 10 times a second.

So 256 byte packets 30 times a second is around 64kbits/sec, and 10 times a second is roughly 20kbit/sec. There isn’t a broadband network connection in the world that can’t handle at least 20kbit/sec, so we’ll move forward with this assumption. Unlike TCP which is entirely general for any device with any amount of send/recv bandwidth, we’re going to assume a minimum supported bandwidth for devices involved in our connections.

So the basic idea is this. When network conditions are “good” we send 30 packets per-second, and when network conditions are “bad” we drop to 10 packets per-second.

Of course, you can define “good” and “bad” however you like, but I’ve gotten good results considering only RTT. For example if RTT exceeds some threshold (say 250ms) then you know you are probably flooding the connection. Of course, this assumes that nobody would normally exceed 250ms under non-flooding conditions, which is reasonable given our broadband requirement.

How do you switch between good and bad? The algorithm I like to use operates as follows, with a code sketch after the list:

  • If you are currently in good mode, and conditions become bad, immediately drop to bad mode
  • If you are in bad mode, and conditions have been good for a specific length of time ‘t’, then return to good mode
  • To avoid rapid toggling between good and bad mode, if you drop from good mode to bad in under 10 seconds, double the amount of time ‘t’ before bad mode goes back to good. Clamp this at some maximum, say 60 seconds.
  • To avoid punishing good connections when they have short periods of bad behavior, for each 10 seconds the connection is in good mode, halve the time ‘t’ before bad mode goes back to good. Clamp this at some minimum like 1 second.
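
Here is one possible sketch of that mode switching logic. The structure, the names and the initial value of ‘t’ are my own assumptions rather than the example source:

#include <algorithm>

struct FlowControl
{
    bool good_mode = true;
    float time_in_good_mode = 0.0f;        // time since we last entered good mode
    float halve_timer = 0.0f;              // accumulates good time toward halving 't'
    float good_conditions_time = 0.0f;     // in bad mode: how long conditions have been good
    float return_to_good_delay = 4.0f;     // 't' in seconds (initial value is an assumption)
};

void update_flow_control( FlowControl & fc, float dt, float rtt, float rtt_threshold = 0.25f )
{
    const bool conditions_good = ( rtt < rtt_threshold );

    if ( fc.good_mode )
    {
        if ( !conditions_good )
        {
            // immediately drop to bad mode; if good mode lasted under 10 seconds,
            // double 't' to discourage rapid toggling (clamped at 60 seconds)
            if ( fc.time_in_good_mode < 10.0f )
                fc.return_to_good_delay = std::min( fc.return_to_good_delay * 2.0f, 60.0f );
            fc.good_mode = false;
            fc.good_conditions_time = 0.0f;
            return;
        }
        fc.time_in_good_mode += dt;
        fc.halve_timer += dt;
        if ( fc.halve_timer >= 10.0f )
        {
            // each 10 seconds spent in good mode halves 't' (clamped at 1 second)
            fc.return_to_good_delay = std::max( fc.return_to_good_delay * 0.5f, 1.0f );
            fc.halve_timer = 0.0f;
        }
    }
    else if ( conditions_good )
    {
        fc.good_conditions_time += dt;
        if ( fc.good_conditions_time >= fc.return_to_good_delay )
        {
            fc.good_mode = true;              // conditions held good for 't' seconds
            fc.time_in_good_mode = 0.0f;
            fc.halve_timer = 0.0f;
        }
    }
    else
    {
        fc.good_conditions_time = 0.0f;       // conditions must be good continuously
    }
}

float send_rate( const FlowControl & fc ) { return fc.good_mode ? 30.0f : 10.0f; }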

With this algorithm you will rapidly respond to bad conditions and drop your send rate to 10 packets per-second, avoiding flooding of the connection. You’ll also conservatively try out good mode, and persist in sending packets at the higher rate of 30 packets per-second while network conditions are good.

Of course, you can implement much more sophisticated algorithms. Packet loss percentage can be taken into account as a metric, and even the amount of network jitter (time variance in packet acks), not just RTT.

You can also get much more greedy with flow control, and attempt to discover when you can send data at a much higher bandwidth (eg. LAN), but you have to be very careful! With increased greediness comes more risk that you’ll flood the connection!

Conclusion

Our new reliability system lets us send a steady stream of packets and notifies us which packets are received. From this we can infer lost packets, and resend data that didn’t get through if necessary.

On top of this we have a simple flow control system that alternates between sending packets 10 times a second and 30 times a second, depending on network conditions, so we don’t flood the connection.

There are lots of implementation details too specific to mention in this article, so make sure you check out the example source code to see how it is all implemented. I can assure you it is easy to read and well structured for your benefit!

That’s it for reliability and flow control, probably the most complicated aspect of low-level networking.


Next: Debugging Multiplayer Games

111 thoughts on “Reliability and Flow Control”

  1. I have not seen any good information about making online games and the technology behind it.
    You should write a book or an ebook.
    I would surely buy it.

  2. Good stuff. I like the use of ACK vectors (bitfields). And the use of RTT to detect bad networking conditions.

    It would be nice if you spelled out a bit more why you don’t need “real” congestion avoidance (read: TCP-friendly congestion avoidance). There isn’t anything wrong with your approach, but I’m half-scared somebody could come away from this not realising why TCP dials back its rate, and then go ahead with a fixed-rate algorithm with a much higher rate than yours… That might be a bit bad!

    A minor point on the term “flow control”: in TCP, this term is purely used for the flow control enforced with the receiver’s advertised window. This prevents you sending data faster than the receiver can read it. “Congestion control” (sometimes “congestion avoidance”) is used to describe the algorithm that handles response to packet loss (or, in TCP Vegas, changes in RTT).

  3. yes it would be very bad for people to think this simple algorithm is suitable for any rate!

    it *only* applies when you have two rates, which are both known good (you know the transport can sustain it), and you oscillate between the two to handle adverse conditions

    this is typical of the xbox 360 TCR you need to pass at ship, where you need to show that you can run at a connection throttled to 64kbit/sec

    what i do in this case is have two modes, something close to 64kbit/sec and something well below like 20kbit/sec – key point here is to be conservative with bandwidth usage whenever possible and stay well shy of the theoretical limit of the transport, the whole point is to minimize latency, not to maximize throughput

    cheers

    ps. yeah my terminology is a bit wrong – sorry about that, thats just an artifact of the path i took to learn this stuff, which is a bit backwards :)

  4. also, what do you think about supplementing this algorithm with a packet loss based metric? what would be the effect? can you think of any pathological cases that would come about from a purely RTT based metric?

    any links to more advanced congestion avoidance techniques you can share?

    thanks

    An RTT-based metric is a good idea, but there are probably some pathological cases you can hit. Sometimes there isn’t actually much buffering between the user and the server you are talking to, so it may be that if you use a fixed threshold like 250ms, a user might never reach this RTT yet still experience high packet loss (more often than not, I suspect this depends on the last hop from the ISP to the user where there is a bottleneck, and then it comes down to the ISP policy). On the other side of the equation are people in remote countries who have a normal RTT of 200-250ms (this used to be the case when I was an undergrad in New Zealand!) Other than that, using RTT to measure networking conditions is good. TCP Vegas uses RTT measurements to modify its rate, which works quite well. It does still retain the packet-loss based bit of TCP though; this is an example of an algorithm that uses both metrics effectively. Things like Vegas use more fine grained measurements to try and estimate when RTT increases to give an idea of congestion in the network.

    I think adding a packet loss metric into your algorithm would be a good idea and might help avoid pathological cases. It seems reasonable to think that if either the latency increases past some threshold or the packet loss increases past some threshold, the networking conditions are “bad”.

    As far as references go, have you heard of DCCP? It is the IETF’s attempt at making a generic protocol for things like games and voip. The wikipedia entry has more links: http://en.wikipedia.org/wiki/DCCP … To be perfectly honest with you though, I suspect your simple but tuned algorithm will be a lot more effective and a heck of a lot easier to implement.

    See in particular TCP friendly rate control which is probably the most applicable bit to this discussion: http://www.faqs.org/rfcs/rfc3448.html

    On the TCP side there is the aforementioned Vegas, again wikipedia has reasonable links: http://en.wikipedia.org/wiki/Tcp_vegas … There are a bunch of other TCP variants, some of which try and use RTT as well, you can spend years studying them and writing a PhD on them (I can testify on that account), I suspect reading anything beyond TCP Vegas will be diminishing returns and not all that interesting.

  6. i’m sure DCCP has some merit, i’ve always thought that the weakness of my simple algorithm is that it handles poor conditions by scaling back *after* they occur

    when the poor conditions are caused by me flooding the connection, a much better solution would be to avoid letting me flood in the first place.

    it would be much better if each router preferred dropping packets to buffering them. it could do this if it had knowledge that all the packets are time sensitive.

    and of course, this is a great reason why my algorithm should have a packet loss metric too, because if any router behaved like this, the RTT metric would simply stop working :)

    further to this, if each packet was tagged with a time, the router could actually implement some sort of jitter buffering, adding some latency but making sure that packets maintain their nice even spacing apart as it passes them on.

    minimizing jitter is actually pretty desirable for games. of course, i can do this jitter buffering at the receiving side, but jitter can vary wildly. my theory is that if each router did jitter buffering, it would be more stable and packets would arrive with less latency overall than a user level solution

    cheers

    What you’re saying about reacting after you cause problems is exactly the way TCP works of course, because it’s the only way to get any information about the network, which is otherwise a complete black box. So um, it seems like a perfectly reasonable thing to do! The difference is that TCP attempts to adapt to conditions and find a happy medium (to some degree), where your algorithm is a bit less dynamic I suppose.

    I think mostly what you’re referring to in routers is some form of QoS; which we’re unlikely to see set up on the Internet really. There are lots of RFCs and things about how endpoints should mark packets with latency sensitive vs. bulk throughput and routers should do something intelligent with this, but unfortunately that just isn’t the case…

    Jitter is an interesting one though. I guess there really isn’t anything that can be done about that without different scheduling policies in Internet routers, something that is ~impossible to change :)

  8. yes exactly the whole point is if it was NOT a black box, but had some interesting stuff done at the per-router level for real time packet delivery and QoS we could get away from being reactive

    a pipe dream i know! :)

  9. Another issue to keep in mind if you add loss-detection to your algorithm is that wireless links often exhibit loss that’s unrelated to congestion. So any game running over 802.11 may experience this kind of packet loss.

  10. Hello. Thank you for an interesting article. I downloaded the source code and tried to compile it with Visual Studio, but when I do so I get a lot of “expected constant expression”, “cannot allocate an array of constant size 0” and “‘packet’ : unknown size” errors in net.h, where it declares an array with, for example, “size+4” bytes. What is the problem?

    1. yes sorry it does not support win32 at this point, you’ll need to adjust it to get it to work — i don’t have access to win32 at home, and haven’t had time to port everything over yet, cheers

    2. btw. the problem is that GCC lets me dynamically allocate an array from a non-const variable, but in visual c++ you need to use “alloca” (stack alloc) to do the same thing — so that is the fix you need to do in the various places you get this error, be careful about sizeof(array) for alloca arrays though, you need to patch that up too – cheers

  11. Thanks for the answer. I like GCC (I use Linux when I can), but I want my game to compile on Windows. I will try to use alloca.

    Btw, have you used SDL_net, and what do you think about that?

    1. yep. should be pretty easy to port over – sorry i have not had time to do it myself, mostly because i just don’t have a win32 box at home (mac user :))

      should take you just a few hours to port over, the alloca issue is the only “big” issue that requires real work. the rest would just be platform difference whack-a-mole type issues

      also, i’ve never heard of SDL_net so i don’t really have an opinion of that

      stuff that i’ve looked at that is pretty good: RakNet and OpenTNL

      I would not recommend basing any production code off my stuff btw. it is just there for learning purposes, for example if you look at the implementation of the reliability, you would want to rewrite that not using STL, and instead using a proper sliding window – the current implementation is very inefficient

      and of course, my code does not handle NAT-Punchthrough, RakNet and OpenTNL however will do that for you

      cheers

    1. Casting to boolean before testing seems to help:
      #define check(n) if (!(bool)(n)) { printf("check failed"); exit(1); }
      For whatever reason…

  12. ”The first packet we send would be packet 0, and the 100th packet sent is packet 99.” Actually it is not correct :) When our virtual connection has been broken, we need to guarantee that packets with the same packet id won’t still exist in the network. So for the initial sequence number it is much better to take the n lower bits of a local timer. This is exactly what TCP does.

    For RTT estimation we can use the well-known Jacobson algorithm:
    RTT = 7/8*RTT + (1-7/8)*MeasuredRTT (where MeasuredRTT is the current RTT obtained from the receiving packet)

    then for deviation estimation we can use:
    Div = 7/8*Div + (1-7/8)*abs(RTT - MeasuredRTT)

    and finally:
    retransmit_timeout = 4*Div + RTT

    This refreshing of the RTT estimation should not be done for retransmitted packets (see Karn’s algorithm).

    1. yes, the sequence number randomization should be done to make the connection more robust. also, some protocol id should be used along with join negotiation to avoid trivial IP spoofing attacks — these were going to be covered in a future article, but just to make it absolutely clear do NOT use the code from this article in production systems, it is meant as a learning aid only. it is not hardened against malicious attacks, and does not support NAT-Punchthrough

      cheers all

    2. btw. the RTT and DEV algorithms are both just exponentially smoothed averages. I actually use them in the article source code, however I use different weights tuned to the specific domain (256 byte packets, 30pps)

      also, i don’t actually ever resend packets – so the last RTT estimation point is moot. instead, only the subset of data which needs to be sent reliably is resent, inside the next outgoing packet

      packets always go out 30 times per-second, in order to minimize latency for multiplayer action games (eg. FPS type games)

      so the situation here is quite a bit different from TCP, good points though, thanks!

  13. yep, agree. Want to note that i found this algorithm useful for estimating the quality of a user connection, no need to calculate average and jitter the traditional way :)

  14. First of all, I’d like to congratulate you on your articles. They are simple and straightforward, and yet contain many powerful ideas. Very well thought out and written.

    Aside from that, I see two minor problems with your article.

    1. In this part:
    “But perhaps you want to save some bandwidth so you shorten your sequence numbers and acks to 16 bit integers. You save 4 bytes per-packet…”

    Shouldn’t it be 2 bytes-per-packet? Or are you assuming on changing the ack bitmask also?

    2. There’s a minor typo at the end of: “So 256 byte packets 30 times a second is around 64kbits/sec, and 10 times a second is roughly 20kbit/set.”
    20kbit/sec

    Also regarding that, in Brazil, it is very common to get latencies of 250ms+ in internet connections, I don’t know what your standards are, but I wonder how that might influence your RTT calculations.

    Either way, very nice set of articles! Keep up the good work.
    Cheers!

    1. Thanks! Ok I’ve updated the article to fix the typo, now the 16 bit sequence:

      The packet header looks like this: [sequence][ack][ack bitfield]

      Each value in the header is 4 bytes, so it is 12 bytes in total

      If you reduce your sequence numbers to 16 bit, then you reduce both sequence and ack to 2 bytes each

      This gives you a saving of 4 bytes per-packet, ack bitfield remains at 32 bits, but from what you are telling me, you may want a larger ack bitfield if you live in brazil ;)

      cheers

  15. Hi Glenn,
    First, I would like to thank you (as so many others have) on your wonderful articles. They are very well done.

    Second, I have been trying to use what I have learned in your articles to create my own Managed (and reliable) UDP Socket class. I have not copied and pasted, but re-written a lot of your example code to do this. In my small example test case that just sends “keep alive” packets I am getting fantastic results (~35ms rtt on LAN, ~70-100ms rtt in a real world internet server/local client setup). However, when I use my Managed UDP socket class in my real server, things get bad. I have seen it take as long as 2 seconds for a packet sent by the client to be recognized by the server in the real world setup.

    I have two questions…
    1) Could a child process (forked and exec) using only UDP (and a pipe to communicate with its parent process) be messed up by its parent that is running TCP? (The architecture here is that the parent process spawns new game servers as they fill up and new ones are needed. TCP on the parent is used to accept new clients and tell them what game server to connect to. The clients then use UDP to communicate with the game server they have been told to connect to.)

    2) Are there any other pitfalls I should be aware of when trying to implement your examples that could be causing this extreme lag in communication in my production code?

    Thank you,
    ~Eric

    1. Thanks!

      Happy the articles helped you out.

      Regarding your problem: it is difficult to know what is going on without more information, but you should first check the amount of bandwidth that you are sending – if you flood the connection it is typical for routers in between to add a lot of lag as they attempt to buffer all of your packets really deep, before finally giving up and discarding them when the buffer overflows. I’ve seen lags from 2-10 secs in this case. The solution is to *BACK OFF* what you are sending, hence the binary flow control technique in this article, assuming you have some minimum bandwidth which you know must be supported across all players in your game.

      Assuming you are sending at some send rate like 20-30Hz, and you are under 64kbit/sec — i’d be surprised to see any response around 2 sec between two computers over broadband connections.

      So to diagnose this, try first reducing the amount you send – if this still fails, then there may be some error in your implementation or your specific network code setup, so try running example code, like the flow control sample from this article, across the connection that shows this problem, and let me know what you see

      cheers

      1. Also I should make it very clear that the code provided in these articles is not production quality code. It’s for learning purposes only. Feel free to use it, adapt it, rewrite it or do whatever you like with it – just be aware that I had to spend approximately 3 months cleaning it up to get it production ready when I started work at Sony.

        The most significant problem with the code example for this article is the reliability system implementation choice of using std::list. This is quite honestly borderline retarded. I just wanted to get it done quickly so I used the list, but actually the algorithm is simpler to code and more efficient when you just use a proper fixed size sliding window instead. On top of this, I think there may be a few other minor bugs, so take care!

        cheers

  16. “The implication of this is that any data which you resend using this reliability technique needs to have its own message id so that if you receive it multiple times, you can discard it. This can be done at the application level.”

    What exactly do you mean by “message id”?

        1. pretty much yeah, and of course you may use this id to also re-order events.

          I guess the main point I’m making is that unlike TCP which is basically, here is a stream of reliable-ordered data, like a file — I tend to prefer a system where for critical time-sensitive events, I may transmit (and retransmit) these events repeatedly until they are acked, therefore I find that I add message sequence #s to these events, to facilitate this.

          Also note that this means that a given event may be sent in multiple packet sequence numbers, and that due to the way the sequence # increases with each packet sent, and the way the ack system works — it is possible to know for sure if a packet got through, but impossible to know for sure that a packet *did not*. This is quite different from how TCP does sequence #s and acks…

  17. First off, I’d like to apologize for my English. I’m not a native speaker.

    I’m wondering about something after reading your (excellent) article. You calculate the RTT by measuring the delay between sending a packet and receiving an ack for it. This is fine, but because we’re sending packets at a fixed rate this will introduce a delay in our sending (and thus receiving) of that ack. Is this something I should adjust the RTT for or shouldn’t I care about this? I’m wondering because I’m new to network and game programming, so I have no idea if the RTT will be used for anything else besides flow control.

    Any advice would be appreciated.

    Also, I’d like to thank you for writing these articles and I hope you’ll continue writing them. I’ll be sure to donate to you once I get a paypal account up and running :)

    1. You are correct that the framerate and the packet send rate will contribute to the RTT measurement.

      You can choose to accept this RTT measurement as some sort of “maximal” RTT incorporating frame aliasing, or you could receive packets on a separate thread and timestamp the packets with the actual time of receive. This way you could get the network RTT without frame aliasing.

      Both are actually useful, for example the frame aliased RTT is actually the best measurement about how long it takes for a packet to go to the other computer and actually get back to your game code which processes it. Network RTT is the best indicator of the actual quality of the connection.

      Note that many people actually write dedicated servers that immediately reply to the packet sent, vs. sending the reply on the next frame, to reduce latency even further (if the server updates at 60Hz, then you have another 16.67ms of delay added to the RTT…)

      cheers

  18. I am a bit unclear about the topic of flow control. In your first article you link to an article on the effect of TCP traffic on UDP packet loss. The authors come to the conclusion that reducing the transmission rate of UDP packets will actually lead to even greater packet loss of UDP packets (see figure 12 in that article). Having a consistently high UDP transmission rate will actually affect the TCP flow control synchronization in favour of the UDP packets. As I understand it, a high UDP transmission rate will force the TCP flow control to kick in earlier, freeing up bandwidth again faster/longer.

    So basically, every time all synched TCP streams throttle their transmission rate because no bandwidth was available any more, suddenly a lot of bandwidth is available and unused until the TCP flow control mechanism slowly increases the transmission rate(s) again. Instead of joining in the fun, and losing packets when the bottleneck node buffer is full while otherwise having a low transmission rate, why not just lose packets when the bottleneck node buffer is full and have a high transmission rate otherwise?

    Also, there is no way to know instantly that a packet was dropped (as you point out :P) So once you can finally react to the filled up bottleneck node buffer, it will most likely, or at least possibly, have been freed up already. The worst case scenario would be the UDP flow control mechanism being in sync with the TCP flow control mechanism, therefore sending loads of stuff when the buffer is full and sending nothing when bandwidth is available. Again, considering the oscillating character of TCP synchronizing, I do believe it is best not to bother controlling the flow of the UDP packets at all.

    Another point the authors made was that smaller UDP packets are not as affected by TCP synchronization as bigger packets. Therefore I assume that instead of implementing a flow control mechanism, the use of small packets and a high transmission rate is favourable – for applications sending a lot of real time data anyway (VoIP, etc.) I doubt whether the amount of data sent by a real time FPS will allow a high enough transmission rate though – and I am pretty sure that the transmission rate of MMORPGs is not enough to combat TCP sync :P

    1. The point of this article is simply that if you detect congestion, it is best to throttle back, otherwise you risk inducing further latency and packet loss. You’ll see this behavior if you run your game through a network simulator and restrict the bandwidth to say 64kbit/sec and send more than that — routers tend to buffer up packets vs. discarding them.

      Yes the interaction between TCP and UDP is extremely complicated. I just added that to the end of an article as an interesting point to consider. Hopefully people understand that yes, other traffic sent over the network can influence the behavior of your game. I’m not proposing the technique described in this article as a *solution* for that particular problem of UDP vs. TCP flows.

      cheers

  19. Hi, that looks pretty powerful: http://udt.sourceforge.net/
    - both Datagram and stream mode.
    - partial reliability (TTL – lifetime of a packet before it is discarded, choice between out-of-order and in-order mode)
    All this over UDP.
    - award winning algorithms for congestion control, and flow control.
    - open source – BSD license
    - interface is pretty the same as normal sockets, easy.

    1. Looks pretty cool. I still think I can beat it for time sensitive data (eg. lightweight game protocol) but I’d certainly use that if I had to do bulk transfers over UDP vs. rolling my own.

  20. Now lets move on to something a bit less intense! In the next article I show how to extend our simple connection to support more than two machines, in client/server or peer-to-peer topology.

    Glenn, I love your articles.
    But will you ever write about the “client/server or peer-to-peer topology”?
    Cannot wait to read about it!!!

    Thanks

  21. There is a link for a “next article” but it just takes us back to the listing of articles. Is there a follow-up article to this one yet?

    Thanks.

  22. Hello! I have written my own protocol taking into consideration your articles (btw, great articles). It’s obvious that everything regarding lag/loss/resend/ack/special ack is resolved in the shortest time possible if the client and server both send a message (reliable or unreliable) each frame.
    Is this a bad practice in terms of internet usage? If everybody makes their game send 25 tiny (avg 100 bytes) messages/second around the world (client in China and server in Canada), is this SPAM?

    1. 100 byte packets at 25-30 per-second is roughly 20-30kbps, that should be no problem — assuming that is total bandwidth and not per-player of course. I’d recommend 64kbps up and down, up to 256kbps if you want to have the maximum audience. treat 64kbps as minspec.

  23. I think there’s a bug in the reliability system class.
    The constructor has only one parameter (max_sequence) but it should have two (max_sequence and rtt_maximum), because you do
    something like this:
    this->rtt_maximum = rtt_maximum;
    Since there’s no rtt_maximum parameter submitted, that line turns out to be a NOP like:
    a = a;

    Am I missing something?

      1. Bugs? Your articles+code are great! Thank you for sharing them!

        I downloaded the other samples (Node mesh, Lan Matchmaking and Real world networking) and I’m trying to figure out how to write a server/client resending lost packets…

  24. First I would like to show my gratitude for all the time you spent on these articles and code for your readers! You are amazing dude :D

    Under “Reliable Acks” section you were talking about this:

    “So in our reliability system, we never resend a packet with a given sequence number. We sequence n exactly once, then we send n+1, n+2 and so on. We never stop and resend packet n if it was lost, we leave it up to the application to compose a new packet containing the data that was lost, if necessary, and this packet gets sent with a new sequence number.”

    Let’s say a client sends a chat message to the server. The server sends out the chat message to everyone including the client. For the client who sent the message, this reply packet should contain the ack. But what happens if the server’s packet is dropped? The client would resend the message to the server after a timeout occurred. This message would have a new seq_num, so the server would send out the chat message to everyone a second time.

    I think what I got out of the article is that this algorithm assumes a constant packet flow back and forth. Which means that even if the server’s packet with the ack was dropped, the next packet to be sent will likely arrive, and it too will contain the ack. However, there are times in all games where there is not much network traffic. (Waiting in a lobby for the next game to start.) In these cases would you need to send a keepalive packet?
    How would you determine when to send keepalives? You would need to send a keepalive before the timeout for the chat message. There is a possibility of the keepalive being dropped as well. Which means that the chat message timeout might occur…

    in the “Detecting Lost Packets” section you seemed to hint at a workaround for this:

    “The implication of this is that any data which you resend using this reliability technique needs to have its own message id so that if you receive it multiple times, you can discard it. This can be done at the application level.”

    Are the message id and packet seq_num two separate fields? i.e. [seq_num,ack,ack_bits,msg_id,chat_msg]. So you receive the same message again, but you can now check the msg_id against previous message ids and not send the chat message again? Is this a replacement for timeouts or would both be needed?

    Sorry this is so long winded but I have been thinking very hard how to implement my own code.

    1. thanks and yes this article assumes a constant flow in both directions of 30pps. if you don’t have a constant flow you’ll need to do something more like what TCP does – cheers

  25. The article mentions TCP maintaining a sliding window and waiting for missing packets being the sort of behavior we want to avoid. In a few comments you told that your code would be much more efficient if it was implemented using a proper fixed size sliding window.

    Could you elaborate a little more on when it is good or bad to use a sliding window?

    1. Sorry i meant two different things with a sliding window. You’ll notice that the source code for this article uses a std::list for the acks, pending acks etc. This is incredibly inefficient. A much better data structure to use is a rolling fixed size queue, eg. a 256 size queue indexed by sequence number mod 256.

      When I said that TCP uses a sliding window vs. what is done in this article, I meant only to highlight a few differences between what is presented in this article compared with TCP sequence numbers and acks.

      This article assumes a constant bidirectional rate of packets being sent. This is why the sequence/ack/ackbits header works. It breaks down if you don’t have a constant stream of packets.

      Because of the constant flow assumption, this algorithm keeps sending packets even when the other side is not responding. The idea is that any lost data can be resent in a new packet (with a new sequence number). This is different from TCP, which will resend a specific packet with the same sequence number; our approach gives much finer-grained control to the application over how it chooses to resend lost data (if at all).

      The cost of this flexibility is that it is possible to “lose” acks. In other words for a packet to get through, but for the sender to not receive the corresponding ack! Making the ack bitfield wider lessens the chance of this occurring, and theoretically you could dynamically widen the ack bits in the header to make losing acks impossible, assuming some finite timeout value, but in general if you have about 2 seconds of redundant acks, in my experience that is enough for average network conditions (so consider widening the ack bitfield in this article to 64, IMO)

      In summary, the algorithm presented in this article is designed for constant flows and does not have any congestion avoidance or flow control built in. It’s useful for situations when you can make assumptions about minimum bandwidth and maximum latency (eg. console games, PC broadband games). Also, you need to be aware that it’s possible, although rare, for the sender to not receive an ack for packets that actually got through, resulting in redundantly resending data (and an additional bandwidth cost per-unit of data sent, in order to filter out duplicate events etc.)

      cheers

      1. I should also note that in my own code (forked from this article, and unfortunately not publicly available) I have found and fixed a subtle bug related to the ack bitfield.

        As presented in this article, the ack bitfield starts with ack – 1, but in reality the ack bitfield should start at ack. Otherwise, it’s possible for the first few packets to be sent out indicating acks where nothing was really acked at all, which can mess up reliable event queues driven by this system.

        In other words, if you have a 16|16|32 header as described in this article, you can really only represent 32 acks per-packet, not 33.

        cheers

  26. great article!

    btw. in case your language of choice doesn’t support unsigned types (e.g. java/scala) or you simply try to avoid them (http://anteru.net/2010/05/17/736/):

    static boolean isMoreRecentInSequence( int previous, int current)
    {
    return ( current > previous && current-previous >= 0 ) ||
    ( current < previous && previous-current < 0 );
    }

    1. Umm…. No. You might as well say

      return current > previous

      Which doesn’t handle the case of integer wrap around. Reread glenn’s article more closely.

  27. Hey!

    Very interesting article. I implemented this in my game, but I am facing some limitations with your system.
    Here it is:
    How would you go about it if you have to transfer, let’s say, a file? Everything is unordered, so here is what I thought:
    In the header add a field [nextSequence] which would be the next sequence to wait for if any, or sequence – 1 if none. It would work out pretty well if I was sure that the sequence would arrive, but it might not.
    Therefore I had another idea, which would be to use a TCP-like window: every package (collection of packets) would have a unique window id and a window order, thus allowing me to reconstruct the package even if one packet is lost.

    Any thoughts on that?

    Thanks!

    1. Sounds fine. I do the same with a reliable ordered event system built on top of this reliability — i give each event a uint16_t rolling id, and resend it until a packet containing it is acked. on the receiver side I do re-ordering based on event id. cheers

  28. If someone could explain the use for this, it would be much appreciated.
    - “Additionally, when a packet is received, ack bitfield is scanned and if bit n is set, then we acknowledge sequence number packet sequence – n, if it has not been acked already.”.
    As I understand it, you are looping through all the bits in the ack bitfield, checking if they are set. If a bit is set, it indicates the client has received that packet (we know which packet it is, by packetSequenceNumber = ack – n). If I am correct, then does this act as a reliability system, so that if a bit is not set, we should resend the packet?

    1. it does act as a reliability system, in that it tells you if a packet was received. i don’t recommend resending the packet, instead i prefer to send a new packet (with a new sequence number) containing any reliable data that was in the dropped packet.

      in other words, keep including the reliable data in new packets until one of them is acked

      yes this is different to how TCP does it — cheers

  29. Hi Glenn,

    As countless others have said, much respect re your superb ability to instruct others. (Your articles on timing remain the best around.)

    I want to ask: You repeatedly mention that you would not consider this production ready code, and at one point you mention this approach being “non-hardened”. In your professional capacity, do you personally do the security work on the protocols you develop? I’m writing a smaller game and so I’m not too concerned with there being security breaches, so long as I can detect them, log them and fix them later.

    What points would you suggest require greater security here, aside from the IP-spoofing you mentioned? Also how did you go about learning the security side — was it mainly through trial and error? And do you think you’ll ever write any articles on this aspect?

    Thanks.

    1. OK so the first example of not hardening is that the packets have no encryption to stop sniffing or man in the middle attacks. On console platforms this is usually built in, but if you are on the PC you’d want at least to add this yourself manually or use RakNet, OpenTNL or similar.

      The other problem is that the “Virtual Connection over UDP” example used the IP address as the session. You should generate some random guid and use that instead; I think there is a mention of this technique in the Quake 3 Network Model article by Brian Hook. It also helps with esoteric NAT issues where the IP/port can get shuffled automatically (mid-connection).

      Another standard thing to do which is not done here is to start with a random session id. This again provides a bit better safety against man in the middle attacks / spoofing and is a good practice anyway.

      Another problem with the code is the reliability and flow control uses a very inefficient algorithm based around std::list. You can do much better with a sliding window implementation or ringbuffer. It’s got much better cache behavior.

      That’s all I can think of. cheers

      1. Also to learn the security side the best way I’ve found is to consider how you would attack your own protocol. Think as if you were an attacker. It’s a very interesting POV. You actually know much more about your protocol than the attacker, so it’s reasonably easy to spot weaknesses.

        For example, if you know the IP of another player (you always will in P2P games…) you could spoof a packet sent to another computer so it looks like it comes from that player’s IP, but with a really high sequence number. This would cause all packets from that machine to be considered old, and that player would time out. Now consider: how can you fix this? OK, let’s add a random guid session id instead of the IP as the session, and it’s much harder to spoof packets.

        As for books on security, sorry I don’t have any to recommend. I just work this out as I go :)

  30. Hi Glenn,

    Thanks for tutorials, they have been very helpful for me.

    I intend to start working with a team of friends on our first multiplayer online game. I read all the articles of the series Networking for Game Programmers, and my only question is: how do I change the code in the last example to make the server handle multiple clients, each with its own IP address, sequence, ack, and ack_bits?

    Also, you mentioned in the comments that, for security reasons, a PC programmer should use RakNet or OpenTNL. Does OpenTNL use UDP? And, does it provide reliability and flow control?

    1. Hi! It’s not for security reasons, but so that it can do NAT punch-through for you — eg. connect P2P through firewalls. It’s hard to code that yourself and error prone; using a library like RakNet or OpenTNL to do that for you is wise. Also, look at “ENet”

      Good luck!

  31. Hi Glenn

    This topic looks very interesting. If I was to use a network engine such as Lidgren 3 (C#) which supports ordered/unordered RUDP, would the above be necessary? I get that sequencing would be useful, especially when following the Tribes model for adding priority on what data should be sent first. Sorry, just trying to wrap my head around all these topics. I wish there was a simple way to discuss prediction, extrapolation (lerping/smoothing) and lag all at the same time… Let us already assume the internet sucks as always.

    Another thing that concerns me is dealing with collisions, or how to go about creating simple colliders before diving right into network physics. I find lots of libs out there that discuss physics integrated with a specific engine: OpenGL, DirectX, Ogre, SDL, XNA, Unity, whatever. They all have their own way of dealing with geometry. I would like a simple, engine-agnostic view prior to choosing graphics, as the server will be headless in most cases anyway.

  32. Hi Glenn!

    With an RTS game, likely turn-based, do I need a non-blocking socket?

    I’m also thinking about a receive thread and a process thread, eg:
    recvThread just receives, parses the header, then stores the packet in a queue
    procThread then checks and processes the data in the queue

    My English is not very good, so please forgive that.
    And thank you so much, your articles are really very helpful.

    1. Game genre and blocking sockets are two different things. A blocking socket means that if you call “recvfrom” it will block until a packet is received. Even with turn-based games you don’t want that. Usually we only use blocking sockets when they are on a separate send/recv packet thread. cheers
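
      For reference, a sketch of putting a UDP socket into non-blocking mode, along the lines of the socket code from earlier in this series (the PLATFORM_* macros come from Net.h):

         // after creating and binding the UDP socket, switch it to
         // non-blocking mode so recvfrom returns immediately with no
         // data instead of stalling the game loop

         #if PLATFORM == PLATFORM_MAC || PLATFORM == PLATFORM_UNIX

             int nonBlocking = 1;
             if ( fcntl( handle, F_SETFL, O_NONBLOCK, nonBlocking ) == -1 )
             {
                 printf( "failed to set non-blocking socket\n" );
                 return false;
             }

         #elif PLATFORM == PLATFORM_WINDOWS

             DWORD nonBlocking = 1;
             if ( ioctlsocket( handle, FIONBIO, &nonBlocking ) != 0 )
             {
                 printf( "failed to set non-blocking socket\n" );
                 return false;
             }

         #endif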

        1. Hi Glenn!
          Two more questions:
          - With multiple clients, should I keep a list on the server carrying each client’s sequence, ack and ack_bits, or do you have a better idea?
          - And on the server side, you said: “But what if packets clump up such that two packets arrive before we send a packet”. Did you mean the server receives all the packets that have arrived and then sends back to the client? My question is: why not receive one packet, reply to the client immediately, then receive and send again…? What is the difference?

          Hope you can fully understand :D

          1. Yes you should. The code here just describes a connection between two clients. You’ll need all of this state running on both sides of each connection if you have more than 2 players (see the sketch after this thread). cheers

          2. And the clump simply means that packets don’t all arrive evenly spaced, even if you sent them that way. Sometimes you’ll get two packets in one frame, and in some frames you’ll get no packets, etc…
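
            A minimal sketch of that per-client bookkeeping, assuming the FlowControl and ReliabilitySystem classes from the example code, and that Address can serve as a map key (it may need a comparison operator added):

               #include <map>

               // one set of connection state per remote address, instead of
               // the single global FlowControl / ReliabilitySystem pair
               struct ClientState
               {
                   FlowControl       flowControl;         // per-client send rate
                   ReliabilitySystem reliability;         // per-client sequence/ack/ack_bits
                   float             timeoutAccumulator;  // per-client timeout tracking
               };

               std::map<Address, ClientState> clients;

               // on receive: look up (or lazily create) the sender's state,
               // then run the usual per-connection processing on it alone
               void OnPacketReceived( const Address & from /*, header fields... */ )
               {
                   ClientState & client = clients[from];   // creates state for new clients
                   client.timeoutAccumulator = 0.0f;
                   // ...process sequence / ack / ack_bits with client.reliability...
               }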

  33. Great article, but I’m a little confused by your section on RTT:

    “Each time we receive an ack, we look up this entry and note the difference in local time between the time we receive the ack, and the time we sent the packet. This is the RTT time for that packet.
    Because the arrival of packets varies with network jitter, we need to smooth this value to provide something meaningful, so each time we obtain a new RTT we move a percentage of the distance between our current RTT and the packet RTT. 10% seems to work well for me in practice. This is called an exponentially smoothed moving average, and it has the effect of smoothing out noise in the RTT with a low pass filter.”

    What do you mean by “current RTT” in the statement “each time we obtain a new RTT we move a percentage of the distance between our current RTT and the packet RTT”. Does that mean total RTT for all packets ever sent? I was reading about smoothed moving averages from this link (http://www.fourmilab.ch/hackdiet/www/subsubsection1_4_1_0_8_3.html) since I was not accustomed to them, and was wondering if I simply followed the last formula, with M_d being the RTT of the current packet, A_d-1 being the calculated RTT of the previous packet, and A_d being the currently calculated RTT.

    Is this a correct or an incorrect assumption?

    1. Yes, it’s a smoothed moving average. The calculation favors more recent RTT values over older ones, according to the tightness factor (eg. how rapidly the smoothed RTT approaches the instantaneous RTT).
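
      In code the update is a one-liner; the “current RTT” is just the running smoothed value carried between updates, not a total over all packets. A sketch with the 10% factor from the article:

         // exponentially smoothed moving average: each new sample moves
         // the smoothed rtt 10% of the way toward the rtt measured for
         // the packet that was just acked
         rtt += ( packetRTT - rtt ) * 0.10f;

      This matches the last formula in the link you posted, with M_d as the packet RTT, A_d-1 as the previous smoothed value, and a tightness factor of 0.1.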

      cheers

  34. I’ve got a problem, please give me some suggestions. (I need reliable and ordered delivery.)
    This is my client and server using your example code (I use Java):
    - client_A -> (sends to) server: “attack user_B”
    + sends 10 or 30 packets in one second (per your flow control algorithm) with the same sequence number.
    - server: receives a packet, processes the header, then sends back “OK” or “NOT OK” to the client (it processes only one packet and ignores the others; at 30 packets per second the server is all but guaranteed to receive at least one packet even if the connection is very bad).
    Here is what confuses me: if the response from server to client goes missing, the ack bitfield tells the client that the server has already received its packet, but the client doesn’t know what the “response command” from the server was.
    So if the client sends again with a new sequence number, the server will think this packet must be a new command. How do you fix this?
    Can you understand? :D

  35. Hello, great articles and good to see you are so active in responding to comments!

    I am implementing a protocol with the advice of your articles. For now, if I get a packet and discover from the acks that one has gone missing, I send a new packet with a new sequence number as per the article. However, I am wondering if you have any advice on a particular corner case…

    If I receive an ack that indicates a lost packet, I send another one with the same data. If in the meantime the lost packet actually arrives, then the destination will get two copies of the packet; however, since the IDs are different, there is no way to pick up on the error. This doesn’t seem like it would be super rare, because it would happen whenever two packets arrive out of order and a packet is sent in between receiving the two. (Then again, I’m not sure how often packets arrive out of order in practice.)

    Should I add a second ID which is the message ID, and use this to trim duplicates, or is there a better way? Packets marked for guaranteed delivery often should not be executed twice, so I don’t feel like I can ignore the problem. A non-game-breaking example might be a line of chat appearing twice, but other packets executed twice might seriously destabilize the game.

    1. Another, much more common example I just thought of: you receive an ack with a missing hole and resend the lost packet, but before the new packet arrives another ack is received that still has the hole… so the sender will believe it was lost again and send yet another packet.

      1. Actually sorry, never mind that last comment, I can’t delete replies :( I only resend a packet once anyway, and from there just wait to detect whether THAT one is lost.

        1. Hey there. This particular algorithm is for situations where you are sending a steady stream of packets, say 30 a second each way, and you want to piggyback some reliability on that stream for a small subset of your data.

          The technique described here is *not* suitable if you are manually “resending” packets, or if you are not sending packets all the time and only sending them when you want to send events.

          If you want something like that you’d be better off using TCP, or RUDP.

          cheers

          1. My problem is still relevant to your algorithm, since packets can still be received out of order and the issue arises when a packet is sent between those two receives. However, I think I missed this paragraph:

            “The implication of this is that any data which you resend using this reliability technique needs to have its own message id so that if you receive it multiple times, you can discard it. This can be done at the application level.”

            Which describes exactly what I was thinking I would have to do to resolve this problem.

  36. OK, got it. Yes, I do that: I don’t call them packets, I call them events (or messages). These are resent over and over until acked, and I use a rolling 16-bit eventId to determine which message is which.

    This is necessary because the ack system guarantees only that if you receive an ack, the packet was received (and any events in that packet…). It does not guarantee that when you see no ack for packet n, the packet was NOT received… you can miss the ack even though the packet arrived, and then resend the event multiple times unnecessarily, so you need the event id to filter out already-received events.
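
    A minimal sketch of that filter, assuming a rolling 16-bit event id (all names here are illustrative):

       #include <stdint.h>
       #include <string.h>

       // remembers recently processed event ids in a circular window so
       // that unnecessary resends of already-received events are dropped
       const int WindowSize = 1024;

       class EventFilter
       {
       public:
           EventFilter() { memset( seen, 0, sizeof( seen ) ); }

           // returns true the first time an id is accepted, false on duplicates
           bool Accept( uint16_t eventId )
           {
               const int index = eventId % WindowSize;
               if ( seen[index] && ids[index] == eventId )
                   return false;             // duplicate resend: discard
               seen[index] = true;
               ids[index] = eventId;         // newer ids overwrite stale slots
               return true;
           }

       private:
           bool     seen[WindowSize];
           uint16_t ids[WindowSize];
       };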

    cheers

  37. Probably a noob question, but…
    I’m using UDP for a local WiFi application between Linuxes (Angstrom, Ubuntu), and here’s my problem:
    The client sends “incomplete data” packets every once in a while. When the server reads data, the packets stacked up in the socket give 400+ bytes to read, and the server gets kind of lost among these stacks of incomplete packets (how do you identify packet boundaries when they stack up?)

    Altogether I have a latency of about 1s \o/ Maybe I’m missing something here.

    I haven’t investigated the separate thread for WiFi (blocking socket), but it seems like a good idea.

    1. Generally, when you overflow the available bandwidth, the router will buffer up packets in an attempt to avoid loss, so when you see latency of about 1s you can be pretty sure you are sending too much data.

      cheers

  38. BTW, a minor optimization for this subroutine, relying on unsigned wraparound (assuming max_sequence is the full range of the type, as in the example code):

    inline bool sequence_more_recent( unsigned int s1, unsigned int s2, unsigned int max_sequence )
    {
        // s1 is more recent when it is ahead of s2 by no more than half
        // the sequence space (unsigned subtraction wraps around)
        return ( s1 != s2 ) && ( s1 - s2 <= max_sequence / 2 );
    }

  39. Hey,
    I feel these tutorials are really helpful, and the techniques described here will let me eventually build solid networking for my game. Thanks a lot for these tutorials! So far I haven’t found any better.
    Btw, maybe I’m dumb, but I can’t make this example work. I haven’t had any issues with the earlier lessons, but here I understand that the two files, Example.cpp and Net.h, serve as both server and client? So I ran the project as usual via Microsoft Visual C++ and got a working server listening for incoming connections. But after I type: Gaffer_UDP_reliable_connection.exe 127.0.0.1
    in my Windows command line to create a client program, I can’t connect to the server. The messages I get in the client program are:
    start connection on port 30001
    client connecting to 127.0.0.1:30000
    flow control initialized
    connect timed out (after a while)
    connection failed
    stop connection
    It’s a bit hard for me to find the issue, I just can’t think of anything. I would be very grateful for help!

  40. Haha, I managed to notice that myself :P Anyway, I made a really stupid mistake… I was deleting dynamically allocated memory for packets too early in one place. Thanks again for these tutorials! Keep up the good work!

  41. Hey Glenn,
    I just wanted to ask you a question in the hope that you’ll have enough free time to answer ^^. Well, I’ve finally managed to modify the code from this tutorial so it supports more than one client connected to the server. I’ve also split the code into separate projects: client and server.
    The main thing I did was add 2 additional objects to the Address class: FlowControl and ReliabilitySystem. Earlier there was only 1 FlowControl object in the main loop and 1 ReliabilitySystem object in the ReliableConnection class. I deleted them both, and now every function in the FlowControl and ReliabilitySystem classes is executed for every client connected to the server. (I have a vector that stores all the addresses and ports connected to the server, and for every Address object there I run all the FlowControl and ReliabilitySystem functions that had to run in the original lesson’s code.)
    Is it normal that when 4 clients are connected to the server, the server CPU usage reaches as much as 5%? Or might I be doing something superfluous? I feel it’s normal, because the methods used here are great but consume some CPU as well. Just wanted to make sure.
    PS. All the clients and the server are currently run on localhost on 1 PC.

    1. This code is not production ready; for example, much of the reliability system uses std::list, which is a poor choice.

      In order to get the code from these articles working and scaling in a real production system, I would recommend you rewrite it to not use linked lists, and instead use circular buffers indexed by sequence % BufferSize (see the sketch below).

      This is quite a lot of work and is quite difficult. It took me a long time to get right.
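
      A minimal sketch of that replacement, assuming 16-bit sequence numbers; the per-packet entry contents are illustrative:

         #include <stdint.h>
         #include <stddef.h>

         const int BufferSize = 256;

         // illustrative per-packet bookkeeping; the real system stores
         // whatever the reliability and flow control logic needs
         struct PacketData
         {
             bool  acked;
             float sendTime;
             int   size;
         };

         class PacketBuffer
         {
         public:
             PacketBuffer() { for ( int i = 0; i < BufferSize; ++i ) sequence[i] = 0xFFFFFFFF; }

             // O(1) insert: the slot for a sequence is sequence % BufferSize,
             // and stale entries are overwritten as the sequence advances
             PacketData & Insert( uint16_t seq )
             {
                 const int index = seq % BufferSize;
                 sequence[index] = seq;
                 return entries[index];
             }

             // O(1) lookup; returns NULL if the slot holds another sequence
             PacketData * Find( uint16_t seq )
             {
                 const int index = seq % BufferSize;
                 return sequence[index] == seq ? &entries[index] : NULL;
             }

         private:
             uint32_t   sequence[BufferSize];   // which sequence occupies each slot
             PacketData entries[BufferSize];
         };

      Unlike the std::list version, inserts and lookups never walk a list, and the entries sit contiguously in memory, which is where the cache behavior win comes from.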

  42. What happened to the mention of how to deal with multiple clients on the server?
    I am interested in how you would go about distributing your virtual connections across multiple threads, or how you would go about balancing your load over multiple threads.

    Thanks for the post, I have found it most helpful.

    1. Distributing read and write packets per-node across different threads is a great idea. Everything can run in parallel.

      1. I would simply determine the refresh rate of the display and sync the update to it, then use a variable dt equal to 1/fps.

        This is the simplest and best possible way, as long as you always make 60fps!

    1. Yes. Add a sequence number per packet and use the reliability system to resend packets until acked, then re-order based on sequence on the receiving side (a sketch follows below).
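
      A minimal sketch of the receive side, assuming 16-bit sequence numbers and an application-defined delivery function:

         #include <stdint.h>
         #include <map>
         #include <vector>

         // buffers messages that arrive out of order and hands them to
         // the application strictly in sequence order; duplicates of
         // already-delivered sequences should be filtered before this point
         class OrderedReceiver
         {
         public:
             OrderedReceiver() : next( 0 ) {}

             void OnMessageReceived( uint16_t sequence, const std::vector<unsigned char> & message )
             {
                 pending[sequence] = message;
                 // deliver the consecutive run starting at the next expected sequence
                 std::map< uint16_t, std::vector<unsigned char> >::iterator it;
                 while ( ( it = pending.find( next ) ) != pending.end() )
                 {
                     Deliver( it->second );
                     pending.erase( it );
                     ++next;                // uint16_t wraps naturally at 65536
                 }
             }

         private:
             void Deliver( const std::vector<unsigned char> & message ) { /* hand off to the game */ }

             uint16_t next;
             std::map< uint16_t, std::vector<unsigned char> > pending;
         };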

  43. Hi Glenn,

    Fantastic article, thanks. I’m applying these techniques to a real-time streaming app.

    In practice, what UDP packet size did you find works well? In some cases I need to send quite a lot of data and I’m unsure how to break it up. I’m running some experiments to see what works best, but I’m curious what you’ve found from experience.

    1. My last game, God of War: Ascension, shipped with max 256-byte packets, but we also sent them at a high rate (up to 60pps).

      You can probably get away with larger packets, eg. 1k or so, without going over MTU; hell, that’s probably quite conservative these days.
