O'Reilly's Performance Optimizations in a Cloud-Centric World

The advantages of CDNs are obvious: most of the time, users should be served content from destinations close to them. CDNs are also typically set up for high-traffic usage, so a good CDN will address issues of both bandwidth and latency.

Using CDNs for dynamic content

The caching capability of CDNs is only really useful for static content (dynamic content is by its nature less cacheable), though modern CDNs are doing their best to change that with technology such as Edge Side Includes (ESI). Despite this, there is still an advantage to serving dynamic content via a CDN: it reduces the impact of TCP slow-start. The negative impact of slow-start increases as the latency of the connection increases. CDNs maintain open HTTP connections to your server, meaning that only rarely do they have to go through the slow-start process. Using a CDN therefore means that even for dynamic content, slow-start is completed only over the short round trip between client and CDN, while communication between the CDN and your server is carried out over an existing open connection.

TCP Slow-Start

Slow-start is a core part of the TCP standard; it's there to minimize network congestion and ensure that transmissions are made at a speed appropriate for the available bandwidth. A side effect, however, is that newly established connections have much higher latency than they theoretically need to. Slow-start, as its name suggests, starts a transfer slowly and then builds up speed as it becomes apparent that the network can handle it. After the initial connection is made and a handshake completed, the server sends a small number of packets; the client receives and acknowledges them, and the server can then send two packets for every packet successfully acknowledged. This allows for exponential growth until the capacity of the network is determined. As a result, an initial request to a server involves more round trips than are actually necessary.
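To make the ESI idea concrete, the following sketch shows how a page can be split into a cacheable shell and a dynamic fragment that the edge server fetches separately. The fragment URL and content are hypothetical, not taken from any particular CDN's documentation:

```html
<html>
  <body>
    <!-- This shell is static and can be cached at the edge for a long TTL. -->
    <h1>Product Catalog</h1>

    <!-- The edge server resolves this include per request (or on a much
         shorter TTL), so only the dynamic fragment bypasses the cache.
         "/fragments/cart-summary" is an illustrative, made-up endpoint. -->
    <esi:include src="/fragments/cart-summary" onerror="continue"/>
  </body>
</html>
```

The `onerror="continue"` attribute tells the edge to serve the rest of the page even if the fragment fetch fails, a common pattern for non-critical dynamic content.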
For example, a 20k request that could easily be served in one round trip will take four round trips on an initial connection to a server.
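The four-round-trip figure can be checked with a rough back-of-the-envelope model of slow-start, assuming a classic initial window of two segments, a 1460-byte MSS, and no losses. Real stacks differ (for instance, RFC 6928 raises the initial window to ten segments), so this is an illustrative model, not a description of any particular implementation:

```python
import math

def round_trips(payload_bytes, mss=1460, initcwnd=2):
    """Model round trips needed under idealized TCP slow-start.

    Assumes the congestion window doubles every round trip (each
    acknowledged segment allows two more to be sent) and that no
    packets are lost. Real TCP stacks vary, e.g. RFC 6928 permits
    an initial window of 10 segments.
    """
    segments = math.ceil(payload_bytes / mss)
    cwnd, sent, rtts = initcwnd, 0, 0
    while sent < segments:
        sent += cwnd   # send a full window this round trip
        cwnd *= 2      # window doubles once it is acknowledged
        rtts += 1
    return rtts

# A 20k response on a cold connection with a 2-segment initial window:
print(round_trips(20 * 1024))                 # 4 round trips
# On a warm connection whose window has already grown, it fits in one:
print(round_trips(20 * 1024, initcwnd=15))    # 1 round trip
```

This is exactly why keeping connections warm between the CDN and the origin pays off: the expensive early round trips are paid once, not on every request.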
