
TCP & Akamai Integration

Blog post created by Akshay Ranganath on Dec 3, 2014

tl;dr
By default, Akamai's TCP keepalive probes are sent only after a very long idle period. If your load balancer's PCONN (idle connection) timeout is shorter than some of your long-running business transactions, increase the PCONN timeout. If that's not possible, let Akamai know about the long transactions so that we can explore options.

 

Over the course of the last 2-3 weeks, we were involved in a very interesting case that taught me a lot about TCP and how it interacts with Akamai. I felt it would be interesting for other customers and Akamai Professional Services team members as well.


Customer Case

There was a customer transaction that took close to 8 minutes to complete. The symptom was a timeout at exactly the same point (5 minutes) every time, which would not change regardless of the TCP settings at Akamai. The only client from which the request seemed to work was curl. With this working sample, we ran a number of Wireshark traces and figured out that the origin load balancer was the culprit.

 

Root Cause

When working with customer origins, Akamai edge servers typically rely on 3 timeout parameters:

  • Connect Timeout: The amount of time the Edge server waits before declaring that it can't establish a TCP connection with the origin server.
  • HTTP Read Timeout: The amount of time allowed to elapse between the Edge server sending the last byte of the request and receiving the first byte of the response. A higher timeout is generally required for reporting/analytics-style applications where the origin has to perform some heavy-lifting to generate the response.
  • PCONN Idle Timeout: The amount of time an existing open TCP connection can sit idle before it is closed. PCONN is the mechanism for keeping a persistent TCP connection between a client and server so that the same connection can be reused for multiple HTTP requests, speeding up responses. Think of it as stuffing more data into a single pipe rather than laying a new pipe between the end points each time a request is made. (See image 1, and the short sketch that follows it.)

[Image 1: reusing a single persistent TCP connection for multiple HTTP requests]
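To make the persistent-connection idea concrete, here is a minimal Python sketch (illustrative only; the hostname and paths are placeholders, not part of this case) that sends two HTTP requests over a single reused TCP connection, which is exactly what a PCONN does between the Edge server and the origin:

    import http.client

    # One TCP connection ("pipe") is opened and reused for both requests,
    # avoiding a second TCP handshake.
    conn = http.client.HTTPConnection("www.example.com", 80, timeout=10)
    for path in ("/", "/index.html"):
        conn.request("GET", path)
        resp = conn.getresponse()
        body = resp.read()  # the response must be fully read before the connection can be reused
        print(path, resp.status, len(body))
    conn.close()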

 

In an enterprise, the issue of PCONN gets more complex. The connection from a client is usually terminated at the load balancer, which then establishes another connection to the application server. In this situation, there may be a case where the client makes a request and the application server is still processing the response, but the load balancer thinks the connection is idle and kills the TCP connection. That was exactly our situation. (See image 2)

[Image 2: the load balancer closing an "idle" connection while the application server is still processing]

Here's a Wireshark trace of the scenario:

[Wireshark trace of the failing scenario]

Solution

Two possible solutions exist in such a situation:

  1. Increase the PCONN idle timeout on the load balancer
  2. Ensure that the TCP connection is not kept idle

 

Option 1 is generally the more feasible. With modern load balancers, it is possible to configure higher timeouts per domain name. However, if the IT infrastructure is outsourced or the issue is with a QA-type setup, this option may not be acceptable.

 

Option 2 forces the client to signal the load balancer that the connection is not idle. This is accomplished with TCP keepalive packets: each probe resets the idle timer, keeping the connection open and giving the server sufficient time to respond. In a CDN environment, however, this is not always feasible, since it may not be possible to tweak TCP parameters on a per-customer or per-URL basis. It is possible with Akamai to tweak these parameters, but they are considered special metadata tags and require approvals from Engineering teams to ensure they do not impact our distributed platform of servers.

[Image 3: TCP keepalive probes]
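At the socket level, Option 2 boils down to enabling TCP keepalive on the client side of the connection. A minimal Python sketch, assuming a hypothetical origin hostname behind the load balancer:

    import socket

    # "origin.example.com" is a placeholder for the origin behind the load balancer.
    sock = socket.create_connection(("origin.example.com", 443), timeout=10)

    # Ask the kernel to send keepalive probes when the connection goes idle;
    # each acknowledged probe crosses the load balancer and resets its idle timer.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)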

 

Tweaking the TCP Keep-Alive probes for option 2

TCP keepalive probes are governed by 3 main parameters. From The Linux Documentation Project page (http://www.tldp.org/HOWTO/html_single/TCP-Keepalive-HOWTO/#usingkeepalive), here are the details:

tcp_keepalive_time
 the interval between the last data packet sent (simple ACKs are not considered data) and the first keepalive probe; after the connection is marked to need keepalive, this counter is not used any further

tcp_keepalive_intvl
 the interval between subsequential keepalive probes, regardless of what the connection has exchanged in the meantime

tcp_keepalive_probes
 the number of unacknowledged probes to send before considering the connection dead and notifying the application layer


Essentially, "tcp_keepalive_time" is the time after which the first probe is sent. If it is, say, 120s, the first TCP keepalive probe will be sent 2 minutes after the last byte of the request is sent to the server. "tcp_keepalive_intvl" is how long the client waits for an acknowledgement of a probe before sending the next one; an acknowledgement indicates that the other party (in our case, the load balancer) is still listening and has the connection open. Finally, "tcp_keepalive_probes" is the number of probes we can afford to lose before we declare the connection dead and close it by sending a TCP RST packet.
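For illustration, here is how the same three knobs can be set per socket in Python on Linux. This is a sketch only: TCP_KEEPIDLE, TCP_KEEPINTVL and TCP_KEEPCNT are the Linux-specific socket options corresponding to the sysctls above, the hostname and the probe count of 3 are assumptions, and the 120s values anticipate the fix described below.

    import socket

    sock = socket.create_connection(("origin.example.com", 443), timeout=10)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

    # Per-socket equivalents of the system-wide sysctls quoted above.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 120)   # tcp_keepalive_time
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 120)  # tcp_keepalive_intvl
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)      # tcp_keepalive_probes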

In our solution, we reduced "tcp_keepalive_time" to 120s and set "tcp_keepalive_intvl" to 120s as well. Since both values were well below the 5 minute idle timeout of the customer's load balancer, the TCP connection was kept open and our requests succeeded.
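As a quick sanity check, here is the arithmetic behind the fix (the roughly 8 minute transaction and the 5 minute idle limit come from the case above; the rest follows from the tuned values):

    keepalive_time  = 120   # seconds of idle time before the first probe
    keepalive_intvl = 120   # seconds between subsequent probes
    lb_idle_timeout = 300   # the load balancer's 5 minute idle limit
    transaction     = 480   # the roughly 8 minute origin transaction

    # Probes go out at 120s, 240s, 360s and 480s after the request is sent.
    probes = list(range(keepalive_time, transaction + 1, keepalive_intvl))
    print("probe times:", probes)

    # The connection survives as long as no silent gap exceeds the idle limit.
    longest_gap = max(keepalive_time, keepalive_intvl)
    print("longest idle gap:", longest_gap, "-> connection stays open:", longest_gap < lb_idle_timeout)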

 

Here's the trace after the parameters were tweaked:

[Wireshark trace after the keepalive parameters were tweaked]

Outcomes