Dr. Hossein Eslambolchi
WAN compression is the term for a collection of technologies that reduce the traffic carried over WANs. WAN traffic compression draws on an arsenal of technologies, including data compression, caching, and protocol optimization. Each is key to driving next-generation WAN traffic compression, especially as deep packet inspection becomes more prominent.
WAN TRAFFIC COMPRESSION:
- Enterprises have been looking for ways to reduce WAN costs and improve WAN performance for several decades. Legacy data compression techniques are useful for text data but are less valuable for heterogeneous data, including web traffic, media objects, the data used by enterprise applications and previously compressed files.
- During the last few years, WAN traffic compression technologies have matured and many start-ups offer WAN compression products.
- Compression standards that simplify interoperability have not been developed, so WAN compression equipment from the same vendor must be used at each WAN endpoint.
- Sophisticated bandwidth management, especially for SoIP traffic, is already being included in at least one vendor’s products.
Data compression technology has been in use for more than 30 years. The goal of data compression is to reduce the number of bits it takes to store or transmit information.
Compression requires that redundant information or source redundancy be identified and removed. The degree of compression attained depends on the amount of redundancy found in the source, and how efficiently redundant elements can be extracted by the compression device.
The Lempel-Ziv algorithm and Huffman encoding are the most common data compression algorithms in use today. The Lempel-Ziv algorithm looks for recurring byte sequences: it searches, at the byte level, for the longest string that matches data it has already seen, then replaces each recurrence of that string with a short reference giving the string's length and location.
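The search-and-reference idea can be sketched in a few lines. The following is a toy LZ77-style encoder, not any vendor's implementation: it slides over the input, finds the longest match against already-seen bytes, and emits (offset, length, next-byte) tokens.

```python
def lz77_compress(data: bytes, window: int = 4096):
    """Toy LZ77 encoder: emit (offset, length, next_byte) tuples.
    A match points back 'offset' bytes into the already-seen window."""
    i, out = 0, []
    while i < len(data):
        best_len, best_off = 0, 0
        # search the window for the longest string matching the lookahead
        for j in range(max(0, i - window), i):
            length = 0
            while (i + length < len(data)
                   and data[j + length] == data[i + length]
                   and length < 255):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        nxt = data[i + best_len] if i + best_len < len(data) else None
        out.append((best_off, best_len, nxt))
        i += best_len + 1
    return out

def lz77_decompress(tokens) -> bytes:
    """Rebuild the original bytes by copying each referenced string."""
    buf = bytearray()
    for off, length, nxt in tokens:
        for _ in range(length):       # byte-by-byte copy handles overlaps
            buf.append(buf[-off])
        if nxt is not None:
            buf.append(nxt)
    return bytes(buf)
```

Repetitive inputs (recurring headers, repeated words) produce long matches and therefore few tokens; random data produces almost none, which is why compressibility depends on source redundancy.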
The ASCII representation used for English (and other European languages) uses eight bits to represent each clear-text character. The Huffman coding scheme assigns short codes to symbols that occur frequently, and longer codes to those that occur less frequently. This results in a reduction in the overall number of bits used to represent text.
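The frequency-to-code-length mapping can be illustrated with a minimal sketch built on Python's standard heap, assuming simple per-character frequencies (real coders also have to transmit or agree on the code table):

```python
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict:
    """Build a Huffman code table: frequent symbols get shorter codes."""
    freq = Counter(text)
    # heap entries: (frequency, tiebreaker, {symbol: code-so-far})
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate one-symbol input
        return {sym: "0" for sym in heap[0][2]}
    tick = len(heap)
    while len(heap) > 1:
        # merge the two least-frequent subtrees; prefix their codes
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tick, merged))
        tick += 1
    return heap[0][2]
```

For a string like "mississippi", the frequent letters "i" and "s" receive codes no longer than the rare "m", so the encoded text needs far fewer than the eight bits per character that ASCII uses.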
These two coding schemes were designed for compressing different types of text.
Huffman coding is more efficient for traffic with well-known source characteristics and an understanding of recurring strings. The Lempel-Ziv algorithm is more effective for a wider variety of sources and is more widely used than Huffman coding. It does not require the user to understand the recurring string scheme of a document.
Today’s data compression technology is based on the Lempel-Ziv algorithm, Huffman coding, and their derivatives.
Other compression algorithms have been developed; arithmetic coding and dictionary-based compression are two examples, though they are less commonly used than Huffman coding and the Lempel-Ziv algorithm. Some proprietary vendor coding is also used to compress data. Note that the data compression algorithms mentioned are lossless techniques: decompressing the compressed data yields the exact original data. This differs from lossy compression, which is typically used to compress audio, images, and video. When lossy compression is used, the data obtained when compressed data are decompressed differ from the original data, generally containing far less information.
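The lossless property is easy to demonstrate with Python's standard zlib module, whose DEFLATE format combines the two techniques discussed above (Lempel-Ziv matching plus Huffman coding); the sample payload below is purely illustrative.

```python
import zlib

# a redundant, text-like payload, similar to repeated protocol headers
payload = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 50

compressed = zlib.compress(payload, level=9)

# lossless: decompression returns the exact original bytes
assert zlib.decompress(compressed) == payload

ratio = len(payload) / len(compressed)   # compression ratio achieved
```

A lossy audio or image codec offers no such guarantee; it trades exact reconstruction for much smaller output.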
An architecture that embraces modular hardware (e.g., plug-in cards) could allow an optimization engine to be introduced into the data flow without overburdening the routing engine.
Start-up appliances scan strings of data at the byte level, instead of at the packet level. Their compression techniques identify and take advantage of variable sized repeating patterns anywhere in a data stream, even across multiple packets, applications, or sessions. For example, traffic flows frequently carry packets with recurring data elements, such as a common header or a payload element.
These new techniques adapt to the patterns that occur most frequently in the data and assign to each pattern a unique label. The redundancies that waste network resources are stripped from the data stream before they are transmitted. Techniques from molecular biology used by one vendor to analyze DNA sequences are being applied to identify patterns in data streams and to drive WAN compression.
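The pattern-labeling idea can be sketched as follows. This is a simplified model, not any vendor's product: it fingerprints fixed-size chunks of each packet and replaces chunks it has seen before, even in earlier packets, with a short label; real appliances use variable-size, content-defined patterns and keep both endpoints' tables synchronized.

```python
import hashlib

def dedup_stream(packets, chunk=64, table=None):
    """Sketch of byte-level redundancy elimination across packets:
    a repeated chunk is replaced by an 8-byte fingerprint label that
    the far-end device can expand from its own copy of the table."""
    table = table if table is not None else {}
    out = []
    for pkt in packets:
        tokens = []
        for i in range(0, len(pkt), chunk):
            piece = pkt[i:i + chunk]
            fp = hashlib.sha1(piece).digest()[:8]
            if fp in table:
                tokens.append(("ref", fp))      # seen before: send label only
            else:
                table[fp] = piece
                tokens.append(("raw", piece))   # first occurrence: send bytes
        out.append(tokens)
    return out, table

def rehydrate(tokens, table):
    """Far-end reconstruction of one packet from labels and raw chunks."""
    return b"".join(table[t[1]] if t[0] == "ref" else t[1] for t in tokens)
```

A second packet that repeats an earlier packet's header or payload element is reduced to a handful of 8-byte labels, which is where the WAN savings come from.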
Another technology employed for WAN compression is caching. This is different from web caching. Web caching captures pre-compressed static objects and delivers them to users from caching servers distributed throughout the Internet. Web caching solutions are effective on the Internet because the Internet carries a tremendous amount of web traffic.
However, enterprise network traffic is quite different. Although web applications are present, they represent only a small fraction of all traffic. Other traffic is devoted to a wide range of applications, including services from SAP, PeopleSoft and Siebel; ERP and CRM; and hundreds of other applications. Traffic may result from file transfers, program downloads, the transmission of media objects, streaming media, and even VoIP. To employ caching for WAN compression, new approaches to caching are necessary.
Protocol optimization is another area addressed by WAN compression. For example, headers and header-style information in packets — sequence numbers, protocol identifiers, and checksums — can be reduced. Packets may be aggregated to reduce the overhead required to send many short packets across a WAN. Multiple short packets are assembled into a single packet. The rate at which packets are sent across WAN links can be managed to ensure that packets are not dropped due to congestion. This eliminates retransmissions that waste WAN bandwidth and increase end-to-end latency. Also, when latency is high, packets that might be lost on a WAN link can be automatically recovered to reduce the number of end-to-end TCP retries.
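Of the optimizations above, packet aggregation is the simplest to illustrate. The following is a minimal sketch under assumed framing (a 2-byte length prefix per packet and an illustrative 1400-byte WAN frame size), not a description of any specific product:

```python
def aggregate(packets, mtu=1400):
    """Pack many short packets into fewer WAN frames. Each packet is
    prefixed with a 2-byte length so the far end can split frames apart.
    (Sketch only: a packet larger than the MTU gets its own frame.)"""
    frames, current = [], b""
    for pkt in packets:
        record = len(pkt).to_bytes(2, "big") + pkt
        if current and len(current) + len(record) > mtu:
            frames.append(current)
            current = b""
        current += record
    if current:
        frames.append(current)
    return frames

def split(frame):
    """Far-end disaggregation: recover the original packets."""
    pkts, i = [], 0
    while i < len(frame):
        n = int.from_bytes(frame[i:i + 2], "big")
        pkts.append(frame[i + 2:i + 2 + n])
        i += 2 + n
    return pkts
```

One hundred 60-byte packets collapse into a handful of full frames, so the per-packet WAN overhead (headers, per-packet processing) is paid far fewer times.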
Vendors have also employed proprietary schemes to deal with high-latency links, such as satellite and intercontinental links. TCP/IP is particularly "chatty"; locally spoofing acknowledgements (ACKs), caching flows, and tuning data buffers and window sizes can improve performance radically.
Companies that offer WAN compression often add other capabilities to their products, including bandwidth management capabilities and quality of service (QoS) features. Note that no WAN compression standards exist to simplify interoperability. Consequently, equipment from the same WAN compression vendor must be used at each WAN endpoint. Interoperability among different vendors' equipment in the near future is unlikely. Most of these vendors, including many of the start-up players, have patented technologies that they believe differentiate them from their competition.
Corporate WAN performance can be improved by implementing WAN compression technologies, either by adding WAN compression CPE or by integrating these technologies in other network elements.
Before adding CPE, it is important to test how much WAN traffic compression can be achieved and whether the introduction of this equipment adds unacceptable latency to applications. Mechanisms for managing and administering WAN compression CPE will also be important.
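A pre-deployment test of this kind can be approximated offline. The sketch below, which stands in for whatever compression the CPE actually applies, uses zlib on representative traffic samples to estimate both the achievable ratio and the processing latency added per message:

```python
import time
import zlib

def evaluate(samples, level=6):
    """Rough pre-deployment check: compute the aggregate compression
    ratio and the average per-message compression time for a list of
    representative traffic samples (bytes objects)."""
    total_in = total_out = elapsed = 0.0
    for data in samples:
        t0 = time.perf_counter()
        out = zlib.compress(data, level)
        elapsed += time.perf_counter() - t0
        total_in += len(data)
        total_out += len(out)
    return total_in / total_out, elapsed / len(samples)
```

Running this against captures of actual application traffic (rather than synthetic text) gives a realistic picture, since web objects, media, and pre-compressed files will show much lower ratios than plain text.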
All the elements of WAN compression, including data compression, LAN protocol optimization, TCP window optimization, and caching, will be required for an effective overall solution.