In this article we’re going to explore how AAA multiplayer games like first person shooters read and write packets. We’ll start with text-based formats, then move on to hand-coded binary formats and bitpacking.
At the end of this article and the next, you should understand exactly how to implement your own packet read and write functions the same way the pros do.
Consider a web server. It listens for requests, does some work asynchronously and sends responses back to clients. It’s stateless and generally not real-time, although a fast response time is great. Web servers are most often IO bound.
Game servers are different. They’re a headless version of the game running in the cloud. As such they are stateful and CPU bound. The traffic patterns are different too. Instead of infrequent request/response from tens of thousands of clients, a game server has far fewer clients, but processes a continuous stream of input packets sent from each client 60 times per-second, and broadcasts out the state of the world to clients 10, 20 or even 60 times per-second.
And this state is huge. Thousands of objects with hundreds of properties each. Game network programmers spend a lot of their time optimizing exactly how this state is sent over the network with crazy bit-packing tricks, hand-coded binary formats and delta encoding.
What would happen if we just encoded this world state as XML?
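Something like this hypothetical sketch, with object and property names invented purely for illustration:

```
<world_update world_time="65.0">
  <object id="1" class="player">
    <property name="position" type="vec3f" value="(0,0,0)"/>
    <property name="orientation" type="quat4f" value="(1,0,0,0)"/>
    <property name="health" type="int" value="100"/>
  </object>
  <object id="2" class="grenade">
    <property name="position" type="vec3f" value="(10,2,0)"/>
    <property name="timer" type="float" value="3.5"/>
  </object>
</world_update>
```

Every value drags along its tag, attribute names and type, repeated for every object, every time the world state is sent.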
Pretty verbose… it’s hard to see how this would be practical for a large world.
JSON is a bit more compact:
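A hypothetical sketch of the same idea (names invented for illustration):

```
{
  "world_time": 65.0,
  "objects": [
    {
      "id": 1,
      "class": "player",
      "properties": {
        "position": { "type": "vec3f", "value": [0,0,0] },
        "orientation": { "type": "quat4f", "value": [1,0,0,0] },
        "health": { "type": "int", "value": 100 }
      }
    }
  ]
}
```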
But it still suffers from the same problem: the description of the data is larger than the data itself. What if instead of fully describing the world state in each packet, we split it up into two parts?
A schema that describes the set of object classes and properties per-class, sent only once when a client connects to the server.
Data sent rapidly from server to client, which is encoded relative to the schema.
The schema could look something like this:
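Here’s a hypothetical sketch — the syntax, class names and property names are all invented for illustration:

```
class player
{
    property position    vec3f   range (-2048,2048)
    property orientation quat4f
    property health      int     range (0,100)
}

class grenade
{
    property position    vec3f   range (-2048,2048)
    property timer       float   range (0,5)
}
```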
The schema is quite big, but that’s beside the point. It’s sent only once, and now the client knows the set of classes in the game world and the number, name, type and range of properties per-class.
With this knowledge we can make the rapidly sent portion of the world state much more compact:
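With the schema known on both sides, each object only needs its id, class and property values in schema order — a hypothetical sketch:

```
{
  "objects": [
    [1, "player", [0,0,0], [1,0,0,0], 100],
    [2, "grenade", [10,2,0], 3.5]
  ]
}
```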
And we can compress it even further by switching to a custom text format:
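Dropping the JSON punctuation entirely — a hypothetical sketch, one object per line:

```
1 player 0 0 0 1 0 0 0 100
2 grenade 10 2 0 3.5
```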
As you can see, it’s much more about what you don’t send than what you do.
The Inefficiencies of Text
We’ve made good progress on our text format so far, moving from a highly attributed stream that fully describes the data (more description than actual data) to an unattributed text format that’s an order of magnitude more efficient.
But there are inherent inefficiencies when using text format for packets:
We are most often sending data in the range A-Z, a-z and 0-9, plus a few other symbols. This wastes the remainder of the 0-255 range for each character sent. From an information theory standpoint, this is an inefficient encoding.
The text representation of integer values is, in the general case, much less efficient than the binary format. For example, in text format the worst case unsigned 32 bit integer 4294967295 takes 10 bytes, but in binary format it takes just four.
In text, even the smallest numbers in the 0-9 range require at least one byte, but in binary, small values like 0, 11, 31, 100 can be sent with fewer than 8 bits if we know their range ahead of time.
If an integer value is negative, you have to spend a whole byte on '-' to indicate that.
Floating point numbers waste one byte specifying the decimal point.
The text representations of numerical values are variable length: “5”, “12345”, “3.141593”. Because of this we need to spend one byte on a separator after each value so we know when it ends.
Newlines '\n’ or some other separator are required to distinguish between the set of variables belonging to one object and the next. When you have thousands of objects, this really adds up.
In short, if we wish to optimize any further, it’s necessary to switch to a binary format.
Switching to a Binary Format
In the web world there are some really great libraries that read and write binary formats like BJSON, Protocol Buffers, Flatbuffers, Thrift, Cap’n Proto and MsgPack.
In many cases, these libraries are a great fit for building your game network protocol. But in the fast-paced world of first person shooters, where efficiency is paramount, a hand-tuned binary protocol is still the gold standard.
Game servers are completely different beasts. The client and server are almost always written in the same language (C++), and versioning is much simpler: if a client with an incompatible version tries to connect, that connection is simply rejected. There’s no need for compatibility across different versions.
So if you don’t need versioning and you don’t need cross-language support what are the benefits for these libraries? Convenience. Ease of use. Not needing to worry about creating, testing and debugging your own binary format.
But this convenience is offset by the fact that these libraries are less efficient and less flexible than a binary protocol we can roll ourselves. So while I encourage you to evaluate these libraries and see if they suit your needs, for the rest of this article, we’re going to move forward with a custom binary protocol.
Getting Started with a Binary Format
One option for creating a custom binary protocol is to use the in-memory format of your data structures in C/C++ as the over-the-wire format. People often start here, so although I don’t recommend this approach, let’s explore it for a while before we poke holes in it.
First define the set of packets, typically as a union of structs:
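A sketch of what this might look like — the packet contents are invented for illustration:

```cpp
#include <cassert>
#include <cstdint>

enum PacketType
{
    PACKET_A,
    PACKET_B,
    PACKET_C
};

struct PacketA
{
    int x, y, z;
};

struct PacketB
{
    int numElements;
    int elements[32];
};

struct PacketC
{
    bool x;
    short y;
    int z;
};

// One union covers all packet types; the packet type byte written
// before it tells the receiver which member is valid.
union PacketUnion
{
    PacketA a;
    PacketB b;
    PacketC c;
};
```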
When writing the packet, set the first byte in the packet to the packet type number (0, 1 or 2). Then depending on the packet type, memcpy the appropriate union struct into the packet. On read do the reverse: read in the first byte, then according to the packet type, copy the packet data to the corresponding struct.
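A sketch of this naive approach for one packet type (struct and function names invented for illustration) — this is exactly the version we’re about to poke holes in, so don’t ship it:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

struct PacketA
{
    int x, y, z;
};

// Naive write: first byte is the packet type, then the raw struct bytes.
int WritePacketA( const PacketA & packet, uint8_t * buffer )
{
    buffer[0] = 0;                                      // packet type 0 = PacketA
    memcpy( buffer + 1, &packet, sizeof( packet ) );    // raw in-memory copy
    return 1 + (int) sizeof( packet );
}

// Naive read: check the type byte, then copy the raw bytes back out.
bool ReadPacketA( const uint8_t * buffer, PacketA & packet )
{
    if ( buffer[0] != 0 )
        return false;
    memcpy( &packet, buffer + 1, sizeof( packet ) );
    return true;
}
```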
It couldn’t get simpler. So why do most games avoid this approach?
The first reason is that different compilers and platforms provide different packing of structs. If you go this route you’ll spend a lot of time with #pragma pack trying to make sure that different compilers and different platforms lay out the structures in memory exactly the same way.
The next one is endianness. Most computers are little endian these days, but historically some architectures like PowerPC were big endian. If you need to support communication between little endian and big endian machines, the memcpy-the-struct-in-and-out-of-the-packet approach simply won’t work. At minimum you need to write a function that swaps bytes between host and network byte order on read and write for each variable in your struct.
There are other issues as well. If a struct contains pointers you can’t just serialize that value over the network and expect a valid pointer on the other side. Also, if you have variable sized structures, such as an array of 32 elements, but most of the time it’s empty or only has a few elements, it’s wasteful to always send the array at worst case size. A better approach would support a variable length encoding that only sends the actual number of elements in the array.
But ultimately, what really drives a stake into the heart of this approach is security. It’s a massive security risk to take data coming in over the network and trust it, and that’s exactly what you do if you just copy a block of memory sent over the network into your struct. Wheee! What if somebody constructs a malicious PacketB and sends it to you with numElements = 0xFFFFFFFF?
You should, no you must, at minimum do some sort of per-field checking that values are in range vs. blindly accepting what is sent to you. This is why the memcpy struct approach is rarely used in professional games.
Read and Write Functions
The next level of sophistication is read and write functions per-packet.
Start with the following simple operations:
These operate on a structure which keeps track of the current position:
The write integer function looks something like this:
And the read integer function looks like this:
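Here’s a sketch covering all of the above: the buffer structure plus the integer read/write pair. Writing the bytes explicitly in a fixed order (little-endian here) sidesteps both struct packing and host endianness; an implementation could equally memcpy and byte-swap between host and network order:

```cpp
#include <cassert>
#include <cstdint>

// Keeps track of where we are in the packet.
struct Buffer
{
    uint8_t * data;     // pointer to the buffer data
    int size;           // size of the buffer (bytes)
    int index;          // index of the next byte to be read/written
};

// Write a 32 bit integer in a fixed byte order (little-endian),
// so both machines agree regardless of host endianness.
void WriteInteger( Buffer & buffer, uint32_t value )
{
    assert( buffer.index + 4 <= buffer.size );
    buffer.data[buffer.index+0] = (uint8_t) ( value & 0xFF );
    buffer.data[buffer.index+1] = (uint8_t) ( ( value >> 8 ) & 0xFF );
    buffer.data[buffer.index+2] = (uint8_t) ( ( value >> 16 ) & 0xFF );
    buffer.data[buffer.index+3] = (uint8_t) ( value >> 24 );
    buffer.index += 4;
}

uint32_t ReadInteger( Buffer & buffer )
{
    assert( buffer.index + 4 <= buffer.size );
    uint32_t value = (uint32_t) buffer.data[buffer.index+0]
                   | ( (uint32_t) buffer.data[buffer.index+1] << 8 )
                   | ( (uint32_t) buffer.data[buffer.index+2] << 16 )
                   | ( (uint32_t) buffer.data[buffer.index+3] << 24 );
    buffer.index += 4;
    return value;
}
```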
Now, instead of copying across packet data in and out of structs, we implement read and write functions for each packet type:
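A sketch of what the per-packet functions might look like (packet contents invented for illustration; the minimal buffer and integer read/write are repeated here so the sketch stands alone):

```cpp
#include <cassert>
#include <cstdint>

struct Buffer { uint8_t * data; int size; int index; };

void WriteInteger( Buffer & buffer, uint32_t value )
{
    assert( buffer.index + 4 <= buffer.size );
    for ( int i = 0; i < 4; ++i )
        buffer.data[buffer.index++] = (uint8_t) ( value >> ( 8 * i ) );
}

uint32_t ReadInteger( Buffer & buffer )
{
    assert( buffer.index + 4 <= buffer.size );
    uint32_t value = 0;
    for ( int i = 0; i < 4; ++i )
        value |= (uint32_t) buffer.data[buffer.index++] << ( 8 * i );
    return value;
}

// Read and write functions per packet type, instead of memcpy.
struct PacketA
{
    int x, y, z;

    void Write( Buffer & buffer ) const
    {
        WriteInteger( buffer, (uint32_t) x );
        WriteInteger( buffer, (uint32_t) y );
        WriteInteger( buffer, (uint32_t) z );
    }

    void Read( Buffer & buffer )
    {
        x = (int) ReadInteger( buffer );
        y = (int) ReadInteger( buffer );
        z = (int) ReadInteger( buffer );
    }
};
```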
When reading and writing packets, start the packet with a byte specifying the packet type via ReadByte/WriteByte, then according to the packet type, call the read/write on the corresponding packet struct in the union.
Now we have a system that allows machines with different endianness to communicate and supports variable length encoding of elements.
If we have a value in the range [0,1000], we really only need 10 bits to represent all possible values. Wouldn’t it be nice if we could write just 10 bits, instead of rounding up to 16? And what about boolean values? It would be nice to send these as one bit instead of 8!
One way to implement this is to manually organize your C++ structures into packed integers with bitfields and union tricks, such as grouping all bools together into one integer type via bitfield and serializing them as a group. But this is tedious and error prone and there’s no guarantee that different C++ compilers pack bitfields in memory exactly the same way.
A much more flexible way that trades a small amount of CPU on packet read and write for convenience is a bitpacker. This is code that reads and writes non-multiples of 8 bits to a buffer.
Many people write bitpackers that work at the byte level. This means they flush bytes to memory as they are filled. This is simpler to code, but the ideal is to read and write words at a time, because modern machines are optimized to work this way instead of farting across a buffer at byte level like it’s 1985.
If you want to write 32 bits at a time, you’ll need a scratch word twice that size, eg. uint64_t. The reason is that you need the top half for overflow. For example, if you have just written a value 30 bits long into the scratch buffer, then write another value that is 10 bits long you need somewhere to store 30 + 10 = 40 bits.
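So the bit writer needs state along these lines (names match the walkthrough that follows):

```cpp
#include <cstdint>

struct BitWriter
{
    uint64_t scratch;       // 64 bit scratch: twice the 32 bit write size, so writes can overflow
    int scratch_bits;       // number of bits currently held in scratch
    int word_index;         // index of the next 32 bit word to flush to the buffer
    uint32_t * buffer;      // output buffer, written one 32 bit word at a time
};
```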
When we start writing with the bitpacker, all these variables are cleared to zero except buffer, which points to the start of the packet we are writing to. Because we’re accessing the packet data at word level, not byte level, make sure packet buffer lengths are a multiple of 4 bytes.
Let’s say we want to write 3 bits followed by 10 bits, then 24. Our goal is to pack this tightly in the scratch buffer and flush that out to memory, 32 bits at a time. Note that 3 + 10 + 24 = 37. We have to handle the case where the total number of bits doesn’t evenly divide into 32. This is actually the common case.
At the first step, write the 3 bits to scratch like this:
scratch_bits is now 3.
Next, write 10 bits:
scratch_bits is now 13 (3+10).
Next write 24 bits:
scratch_bits is now 37 (3+10+24). We’re straddling the 32 bit word boundary in our 64 bit scratch variable and have 5 bits in the upper 32 bits (overflow). Flush the lower 32 bits of scratch to memory, advance word_index by one, shift scratch right by 32 and subtract 32 from scratch_bits.
scratch now looks like this:
We’ve finished writing bits but we still have data in scratch that’s not flushed to memory. For this data to be included in the packet we need to make sure to flush any remaining bits in scratch to memory at the end of writing.
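Putting the steps above together, a sketch of the write path might look like this. It’s not production code: a real implementation would also bounds-check the buffer, and convert each flushed word to little-endian byte order on big-endian hosts:

```cpp
#include <cassert>
#include <cstdint>

struct BitWriter
{
    uint64_t scratch;
    int scratch_bits;
    int word_index;
    uint32_t * buffer;
};

// Write the low 'bits' bits of value into scratch, flushing a full
// 32 bit word to the buffer whenever one is available.
void WriteBits( BitWriter & writer, uint32_t value, int bits )
{
    assert( bits > 0 && bits <= 32 );
    uint64_t masked = (uint64_t) value & ( ( 1ULL << bits ) - 1 );
    writer.scratch |= masked << writer.scratch_bits;
    writer.scratch_bits += bits;
    if ( writer.scratch_bits >= 32 )
    {
        writer.buffer[writer.word_index++] = (uint32_t) writer.scratch;  // low 32 bits
        writer.scratch >>= 32;
        writer.scratch_bits -= 32;
    }
}

// Flush any bits remaining in scratch to memory at the end of writing.
void FlushBits( BitWriter & writer )
{
    if ( writer.scratch_bits > 0 )
    {
        writer.buffer[writer.word_index++] = (uint32_t) writer.scratch;
        writer.scratch = 0;
        writer.scratch_bits = 0;
    }
}
```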
When we flush a word to memory it is converted to little endian byte order. To see why this is important consider what happens if we flush bytes to memory in big endian order:
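Label the five data bytes A through E, with A the first byte written (bits 0-7 of the first word) and E the last. Flushing each word most-significant-byte first lays memory out like this:

```
byte index:  0    1    2    3    4    5    6    7
big endian:  D    C    B    A    0    0    0    E
```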
Since we fill bits in the word from right to left, the last byte in the packet, E, actually ends up on the right. If we send this buffer in a packet of 5 bytes (the actual amount of data we have to send), the last byte of the packet is 0 instead of E. Ouch!
But when we write to memory in little endian order, bytes are reversed back out in memory like this:
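With the same labels (A the first byte written, E the last):

```
byte index:     0    1    2    3    4    5    6    7
little endian:  A    B    C    D    E    0    0    0
```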
And we can write 5 bytes to the network with E at the end, right where it belongs. Et voilà!
To read the bitpacked data, start with the buffer sent over the network:
The bit reader has the following state:
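```cpp
#include <cstdint>

struct BitReader
{
    uint64_t scratch;       // bits read from the buffer but not yet handed to the user
    int scratch_bits;       // number of bits currently held in scratch
    int total_bits;         // size of the packet in bits (bytes * 8)
    int num_bits_read;      // number of bits read so far
    int word_index;         // index of the next 32 bit word to read from the buffer
    uint32_t * buffer;      // input buffer, read one 32 bit word at a time
};
```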
To start, all variables are cleared to zero except total_bits, which is set to the size of the packet in bytes * 8, and buffer, which points to the start of the packet.
The user requests a read of 3 bits. Since scratch_bits is zero, it’s time to read in the first word. Read in the word to scratch, shifted left by scratch_bits (0). Add 32 to scratch_bits.
The value of scratch is now:
Read off the low 3 bits, giving the expected value of:
Shift scratch to the right 3 bits and subtract 3 from scratch_bits:
Read off another 10 bits in the same way, giving the expected value of:
Scratch now looks like:
The next read asks for 24 bits but scratch_bits is only 19 (=32-10-3).
It’s time to read in the next word. Shift the word in memory left by scratch_bits (19) and or it on top of scratch.
Now we have all the bits necessary for z in scratch:
Read off 24 bits and shift scratch right by 24. scratch is now all zeros.
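Putting the read path together — a sketch matching the walkthrough above (a real implementation would fail gracefully on overrun instead of asserting):

```cpp
#include <cassert>
#include <cstdint>

struct BitReader
{
    uint64_t scratch;
    int scratch_bits;
    int total_bits;
    int num_bits_read;
    int word_index;
    uint32_t * buffer;
};

// Read the next 'bits' bits, pulling a 32 bit word in from the buffer
// whenever scratch runs low.
uint32_t ReadBits( BitReader & reader, int bits )
{
    assert( bits > 0 && bits <= 32 );
    assert( reader.num_bits_read + bits <= reader.total_bits );
    if ( reader.scratch_bits < bits )
    {
        // Shift the next word left by scratch_bits and or it on top of scratch.
        reader.scratch |= (uint64_t) reader.buffer[reader.word_index++] << reader.scratch_bits;
        reader.scratch_bits += 32;
    }
    uint32_t value = (uint32_t) ( reader.scratch & ( ( 1ULL << bits ) - 1 ) );
    reader.scratch >>= bits;
    reader.scratch_bits -= bits;
    reader.num_bits_read += bits;
    return value;
}
```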
Reading and writing integer values into a packet by specifying the number of bits to read/write is not the most user friendly option.
Consider this example:
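A sketch of the trap. Here WrittenBits is an invented stand-in for a real bitpacker write — it just returns the bits that would survive on the wire:

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for a real bitpacker write: only the low 'bits' bits are sent.
uint32_t WrittenBits( uint32_t value, int bits )
{
    return value & ( ( 1u << bits ) - 1 );
}

const int MaxElements = 32;

// The fragile pattern: the bit count 6 is written out by hand,
// separately from MaxElements.
uint32_t WriteNumElements( int numElements )
{
    return WrittenBits( (uint32_t) numElements, 6 );
}
```

While MaxElements is 32 this works, but nothing ties the 6 back to MaxElements.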
This code looks fine at first glance, but let’s assume that some time later you, or somebody else on your team, increases MaxElements from 32 to 200 but forgets to update the number of bits to 8. Now the high bits of numElements are silently truncated on send. It’s pretty hard to track something like this down after the fact.
The simplest option is to just turn it around and define the maximum number of elements in terms of the number of bits sent:
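For example:

```cpp
#include <cassert>

// Define the maximum number of elements from the bit count,
// not the other way around.
const int MaxElementBits = 7;
const int MaxElements = ( 1 << MaxElementBits ) - 1;
```

Now every value that fits in MaxElementBits bits is a valid element count.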
Another option is to get fancy and work out the number of bits required at compile time:
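A sketch using a constexpr function (bits_required is an invented name; the same effect can be had with template metaprogramming on older compilers):

```cpp
#include <cassert>
#include <cstdint>

// Number of bits needed to represent all values in [min,max],
// evaluated at compile time.
constexpr int bits_required( uint32_t min, uint32_t max )
{
    int bits = 0;
    for ( uint32_t range = max - min; range > 0; range >>= 1 )
        bits++;
    return bits;
}

const int MaxElements = 32;
const int MaxElementBits = bits_required( 0, MaxElements );

static_assert( MaxElementBits == 6, "values in [0,32] need 6 bits" );
```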
Now you can’t mess up the number of bits, and you can specify non-power-of-two maximum values and everything works out.
But be careful when array sizes aren’t a power of two! In the example above MaxElements is 32, so MaxElementBits is 6. This seems fine because all values in [0,32] fit in 6 bits. The problem is that there are additional values within 6 bits that are outside our array bounds: [33,63]. An attacker can use this to construct a malicious packet that corrupts memory!
This leads to the inescapable conclusion that it’s not enough to just specify the number of bits required when reading and writing a value, we must also check that it is within the valid range: [min,max]. This way if a value is outside of the expected range we can detect that and abort read.
I used to implement this using C++ exceptions, but when I profiled, I found it to be incredibly slow. In my experience, it’s much faster to take one of two approaches: set a flag on the bit reader that it should abort, or return false from read functions on failure. But now, in order to be completely safe on read, you must check for error on every read operation.
If you miss any of these checks, you expose yourself to buffer overflows and infinite loops when reading packets. Clearly you don’t want this to be a manual step when writing a packet read function. You want it to be automatic.
NEXT ARTICLE: Serialization Strategies
Glenn Fiedler is the founder and CEO of Network Next. Network Next is fixing the internet for games by creating a marketplace for premium network transit.