Goodies for Hoodies: TCP Timestamps

The Picts were a tribal culture in northern Scotland that history has relegated to the realm of myth and enigmatic legend. Largely forgotten, the Picts fought off the military superiority of Rome’s army and built a sophisticated civilization of their own before disappearing from history. These were a people dismissed by the advanced thinkers of the day as unimportant and trivial in their capabilities, only to rise up unexpectedly to great effect. What does this have to do with security? Nothing really, unless your data center is staffed by Roman centurions on horseback. If that’s you, then I am ripe with envy. But all levity aside: as it was with the Picts, so it is today with the humble and unassuming TCP Timestamp.

What is a TCP Timestamp?

If you’ve ever run a vulnerability scan, you’ve probably seen a low- or informational-severity finding associated with TCP Timestamp responses. The recommendation is always to disable them, but rarely is any background information provided. What are those little timestamps doing there, and who really cares anyway? Like the Picts, much lies below the surface, and we dismiss them at our peril. Before we can get into the ramifications of this misunderstood protocol option, we must understand the mechanics behind TCP Timestamps and what they actually are.

At its core, TCP is a stateful, reliable means of sending and receiving data over IP. For reliable communication to take place, there must be bidirectional communication between the sending and receiving nodes so that, in basic terms, the sender knows the target system received the communication correctly, and the receiving node has confidence that the message it received was correct. To that end, TCP communications are session-based, and the two nodes use features of the protocol as a framework to manage the reliability of communications. This involves things like resets, SYN-ACKs, retransmissions, and the like that you’ve probably seen in any number of network captures. TCP was designed to communicate reliably over any transmission medium at any speed; it provides the same level of communications integrity over dial-up as it does on a LAN.

It is important to understand that TCP was originally designed to overcome the challenges of unreliable communication channels. Not much thought was given to channels that were exceptionally fast and reliable.

It seems counterintuitive, but because TCP is stateful and tracks every segment by sequence number, it can break down over high-bandwidth connections. It sounds crazy, I know, but let’s look at how this might happen.

Grab your pocket protector here, folks. We have to get a little nerdy.

If you asked President Trump about packet loss on TCP communications, he might respond, “It’s bad, very, very, very bad.” And he would be correct. But packet loss does occur for any number of reasons, and TCP maintains reliability by using selective acknowledgments to tell the sending node which TCP segments are queued on the receiving node and which segments it is still waiting for. These segments are, like everything else in network communications, numbered from a finite pool of sequence numbers. That value occupies a 32-bit space, and every segment exists within the confines of a Maximum Segment Lifetime (MSL), which is enforced at the IP level by something we’re more familiar with: the Time to Live (TTL). The MSL is usually adjusted based on the transfer rate, so that faster speeds have smaller MSLs. This works pretty well until we introduce things like fiber optics. The bandwidth on a fiber connection can be so high that a TCP session can exhaust all of its sequence numbers and still have segments queued up in the same connection, leading to sequence number reuse, aka TCP wrapping. This causes problems.

In short, as things get faster, using timeout intervals to manage reliability becomes more error-prone.

The number of sequence numbers cannot exceed the 32-bit maximum of 4,294,967,295, so as transmission speed increases, MSL values must get shorter to compensate. With enough bandwidth, they can shrink to the point that they are no longer able to provide message integrity. If only there were a way to identify whether packets were dropped based on actual timing rather than a sequence number. But how?
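To make that concrete, here’s a quick back-of-the-envelope sketch (my own illustration, not anything from the spec) showing how long a single connection can run before its 32-bit sequence space wraps at various link speeds. Sequence numbers count bytes, so the math is just 2^32 divided by throughput in bytes per second.

```python
# Back-of-the-envelope sketch: how long a single TCP connection can run
# before exhausting its 32-bit sequence space at various link speeds.
# Sequence numbers count bytes, so wrap time = 2**32 bytes / throughput.

SEQ_SPACE = 2 ** 32  # total distinct sequence numbers (bytes)

link_speeds_bps = {
    "56 Kbps dial-up": 56_000,
    "100 Mbps Fast Ethernet": 100_000_000,
    "1 Gbps": 1_000_000_000,
    "10 Gbps fiber": 10_000_000_000,
}

for name, bps in link_speeds_bps.items():
    seconds_to_wrap = SEQ_SPACE / (bps / 8)  # convert bits/s to bytes/s
    print(f"{name:>25}: sequence space wraps in ~{seconds_to_wrap:,.1f} s")
```

At 10 Gbps the sequence space wraps in roughly 3.4 seconds, comfortably shorter than the traditional two-minute MSL, so a delayed segment from an earlier “lap” around the sequence space could still be alive when its number comes up again.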

Behold the mighty TCP Timestamp!

TCP Timestamps are an important component of reliable high-speed communications because they keep TCP from stumbling over its own sequence numbers! Officially, this benefit is referred to adoringly as “PAWS,” or Protection Against Wrapped Sequence Numbers. PAWS operates within the confines of a single TCP connection under the assumption that the TCP timestamp value increases predictably over time. If a segment arrives with an older timestamp than the one expected, it’s discarded. In doing so, PAWS protects against sequence numbers being reused in the same connection. It’s worth pointing out that there are a lot of weird exceptions and math around how that actually takes place, but for purposes of a blog post, we’ll steer clear of that rabbit hole. It should be clear at this point that TCP Timestamps serve a purpose in network communications, and that disabling them as a standard practice is a perilous endeavor.
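For the curious, here is a deliberately simplified sketch of the comparison at the heart of PAWS. The names (`ts_recent`, `paws_accept`) are mine, and the real rules in RFC 7323 also cover idle connections and when to update the remembered timestamp, so treat this as an illustration rather than the actual algorithm.

```python
# A minimal, simplified sketch of the PAWS idea -- not a kernel implementation.
# We assume a per-connection variable (ts_recent) holding the most recent
# in-window timestamp value seen from the peer.

def ts_newer_or_equal(a: int, b: int) -> bool:
    """Compare two 32-bit timestamps using wrap-aware (modular) arithmetic."""
    return ((a - b) & 0xFFFFFFFF) < 0x80000000

def paws_accept(segment_tsval: int, ts_recent: int) -> bool:
    """Return True if the segment passes the PAWS check, False to discard it."""
    return ts_newer_or_equal(segment_tsval, ts_recent)

# Example: a late-arriving segment from a previous "wrap" of the sequence
# space carries an older timestamp and is quietly dropped.
ts_recent = 1_000_500
print(paws_accept(1_000_600, ts_recent))  # True  -- timestamp moved forward
print(paws_accept(999_900, ts_recent))    # False -- older timestamp, discard
```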

Still awake? Good! Now we can talk about security!

Strictly speaking, TCP Timestamps are no more a security risk than the TCP protocol itself. Why, then, are they subject to all the bad press and mob calls for their disablement? The security concerns arise from the underlying mechanisms used to populate the values within the timestamp option itself. As the name suggests, the timestamp makes use of a virtual “timestamp clock” in the sender’s operating system. This clock must approximate real-time measurements in order to remain useful for round-trip time (RTT) measurement. There is no requirement for the timestamp clock to match the system clock, but it often maps to it for ease of design. After all, why build a new clock when you can derive its values from the one you already have?

This leads to the one thing for which we hackers profess a deep and undying love: unintended and predictable behavior. Am I right?

By measuring multiple timestamp replies, we can determine the clock frequency of the target system. The clock frequency is how many “ticks” the timestamp clock increments per unit of real time. For example, if we measure 5 timestamp replies, each 1 second apart, and each timestamp value increases by 100 with each reply, we can infer that the clock increments 100 ticks for every second of real time processed by the system clock. Since most (not all!!) clocks start at 0, we can compute the approximate uptime of the system. Using the above example, if the timestamp value is 60,000, we know that every 100 ticks of that value equate to one second of real time, so we can assume that 600 seconds have elapsed since the clock was started. In layman’s terms, the system was rebooted 10 minutes ago. We’re using small whole numbers here for purposes of illustration, but you get the idea.
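If you want to play along at home, a toy sketch of that arithmetic might look like the following. The samples are invented, and a real tool would pull the TSval field out of live captures (and cope with jitter and timestamp randomization), but the math is the same as in the example above.

```python
# Illustrative sketch only: given a handful of (capture_time, TSval) samples,
# estimate the target's timestamp clock frequency and, assuming the clock
# started at zero on boot, its uptime. Sample values are made up.

samples = [
    # (local capture time in seconds, remote TSval from the TCP option)
    (0.0, 60_000),
    (1.0, 60_100),
    (2.0, 60_200),
    (3.0, 60_300),
    (4.0, 60_400),
]

# Ticks per second = change in TSval / change in real time (here: 100 Hz).
elapsed = samples[-1][0] - samples[0][0]
ticks = samples[-1][1] - samples[0][1]
hz = ticks / elapsed

# If the clock began at zero, uptime is simply TSval / frequency.
uptime_seconds = samples[-1][1] / hz
print(f"Estimated clock frequency: {hz:.0f} Hz")
print(f"Estimated uptime: {uptime_seconds / 60:.1f} minutes")  # ~10 minutes
```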

In the real world, accurately fingerprinting the system will help with establishing timestamp validity, since the timestamp clock frequencies of common operating systems are documented.

System uptime by itself is an arbitrary value and doesn’t give away much on its own. But consider that patch cycles almost always include mandatory reboots. By surveying these values over time, one might be able to determine patching intervals by correlating reboot times across systems. What if you surveyed the same IP address multiple times and received different but consistently disparate values in the timestamp responses? That might allow you to identify systems sitting behind a NAT or a load balancer, and may even allow you to draw conclusions about the load balancing configuration itself, as sketched below. Suppose you were testing a customer for susceptibility to DoS attacks. You may be able to use timestamps to determine with certainty whether the target system was actually knocked over, or whether you were just shunned by the IPS.
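Here’s a speculative sketch of that NAT/load-balancer trick. It assumes you’ve already estimated the tick rate and gathered (receive time, TSval) observations by some other means; the idea is simply that each backend host implies its own “boot epoch” (receive time minus TSval converted to seconds), and distinct clusters of boot epochs suggest distinct machines answering for the same IP.

```python
# Speculative sketch: count likely distinct hosts behind one IP by clustering
# the implied "boot epochs" of observed TCP timestamps. Assumes a known tick
# rate (HZ); the observation values below are invented for illustration.

from collections import Counter

HZ = 100  # assumed timestamp clock frequency of the targets

observations = [
    # (local receive time in seconds since epoch, remote TSval)
    (1_700_000_000.0, 123_400),
    (1_700_000_010.0, 124_400),    # same host: boot epoch lines up
    (1_700_000_005.0, 9_000_500),  # much larger TSval -> different boot epoch
    (1_700_000_015.0, 9_001_500),
]

def boot_epoch(recv_time: float, tsval: int, hz: float = HZ) -> float:
    """Estimate when the remote timestamp clock started, in local epoch time."""
    return recv_time - tsval / hz

# Round to the nearest 10 seconds to absorb jitter, then count the clusters.
clusters = Counter(round(boot_epoch(t, ts) / 10) for t, ts in observations)
print(f"Likely distinct hosts behind this IP: {len(clusters)}")
```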

TCP Timestamps grant the hacker insight into a given system’s operational state, and how we use that information is limited only by our imagination.

But to dismiss their presence as a low-severity security finding that just needs to be remediated is inappropriate, and it may do more harm than good. Whether to use TCP Timestamps should be determined by operational requirements, not by blanket assumptions about their importance. Are you listening, vulnerability scanners? So next time you find yourself knee-deep in informational findings from a vulnerability scan, put your ear close to a network cable. It’s possible you may hear the faint battle cry of a forgotten hero. Want to keep reading? Learn the difference between vulnerability scans and penetration tests, more about Vulnerability Management, and how a penetration test can help you understand which risks are most relevant to your network.