Thursday 12 May 2016

Traffic Management Building Blocks

The configuration of a NetScaler is typically built up with a series of virtual entities that serve as building blocks for traffic management. The building block approach helps separate traffic flows.
Virtual entities are abstractions, typically representing IP addresses, ports, and protocol handlers
for processing traffic. Clients access applications and resources through these virtual entities. The most commonly used entities are vservers and services. Vservers represent groups of servers in a
server farm or remote network, and services represent specific applications on each server.

Most features and traffic settings are enabled through virtual entities. For example, you can
configure a NetScaler to compress all server responses to a client that is connected to the server
farm through a particular vserver. To configure the NetScaler for a particular environment, you need
to identify the appropriate features and then choose the right mix of virtual entities to deliver them.
Most features are delivered through a cascade of virtual entities that are bound to each other. In
this case, the virtual entities are like blocks being assembled into the final structure of a delivered
application. You can add, remove, modify, bind, enable, and disable the virtual entities to configure

the features. The following figure shows the concepts covered in this section.
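
On the command line, each of those operations corresponds to a verb. The following is a minimal sketch, assuming a standard NetScaler CLI session; the entity names and addresses are examples only, not part of any particular configuration:

    # Create two virtual entities (the building blocks)
    add service svc_app1 192.0.2.10 HTTP 80
    add lb vserver vs_app HTTP 203.0.113.10 80

    # Bind them together so the vserver delivers the service
    bind lb vserver vs_app svc_app1

    # Modify, disable, or remove an entity without rebuilding the rest of the structure
    set lb vserver vs_app -lbMethod LEASTCONNECTION
    disable service svc_app1
    rm service svc_app1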

A Simple Load Balancing Configuration

In the example shown in the following figure, the NetScaler is configured to function as a load
balancer. For this configuration, you need to configure virtual entities specific to load balancing and
bind them in a specific order. As a load balancer, a NetScaler distributes client requests across
several servers and thus optimizes the utilization of resources.

The basic building blocks of a typical load balancing configuration are services and load balancing
vservers. The services represent the applications on the servers. The vservers abstract the servers
by providing a single IP address to which the clients connect. To ensure that client requests are
sent to a server, you need to bind each service to a vserver. That is, you must create services for
every server and bind the services to a vserver. Clients use the VIP (the vserver's virtual IP address) to connect to the NetScaler. When
the NetScaler receives client requests on the VIP, it sends them to a server determined by the load
balancing algorithm. Load balancing uses a virtual entity called a monitor to track whether a specific
configured service (server plus application) is available to receive requests.
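
On the command line, the configuration described above might look like the following sketch. The names and addresses are examples, and the built-in http monitor is assumed to be suitable for the application:

    # Enable the load balancing feature
    enable ns feature LB

    # One service per server (server plus application)
    add service svc_web1 192.0.2.10 HTTP 80
    add service svc_web2 192.0.2.11 HTTP 80

    # The vserver owns the VIP that clients connect to
    add lb vserver vs_web HTTP 203.0.113.10 80

    # Bind every service to the vserver
    bind lb vserver vs_web svc_web1
    bind lb vserver vs_web svc_web2

    # Track whether each service can accept requests
    bind service svc_web1 -monitorName http
    bind service svc_web2 -monitorName http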

In addition to configuring the load balancing algorithm, you can configure several parameters that
affect the behavior and performance of the load balancing configuration. For example, you can
configure the vserver to maintain persistence based on source IP address. The NetScaler then
directs all requests from any specific IP address to the same server.
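
As a sketch, source IP persistence is set on the vserver from the previous example; the ten-minute timeout is an arbitrary illustration:

    # Send all requests from the same client IP address to the same server
    set lb vserver vs_web -persistenceType SOURCEIP -timeout 10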

Understanding Policies and Expressions

A policy defines specific details of traffic filtering and management on a NetScaler. It consists of two
parts: the expression and the action. The expression defines the types of requests that the policy
matches. The action tells the NetScaler what to do when a request matches the expression. As
an example, the expression might be to match a specific URL pattern to a type of security attack,
with the action being to drop or reset the connection. Each policy has a priority, and the priorities
determine the order in which the policies are evaluated.
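
A responder policy is one concrete example of this expression-and-action pairing. The following sketch uses an illustrative rule and policy name, matching the URL-pattern example above and dropping the connection:

    # Enable the responder feature, one of several policy-driven features
    enable ns feature RESPONDER

    # Expression: match a suspicious URL pattern; action: DROP the request
    add responder policy pol_block_cmd "HTTP.REQ.URL.CONTAINS(\"cmd.exe\")" DROP

    # The priority (100) positions the policy in the evaluation order
    bind responder global pol_block_cmd 100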

When a NetScaler receives traffic, the appropriate policy list determines how to process the traffic.
Each policy on the list contains one or more expressions, which together define the criteria that a
connection must meet to match the policy.

For all policy types except Rewrite policies, a NetScaler implements only the first policy that a request matches, not any additional policies that it might also match. For Rewrite policies, the NetScaler evaluates the policies in order and, in the case of multiple matches, performs the
associated actions in that order. Policy priority is important for getting the results you want.
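
To illustrate, the following sketch binds two hypothetical rewrite policies at priorities 100 and 200. Because both are Rewrite policies, a request that matches both rules has both actions applied, in that priority order:

    # Enable the rewrite feature
    enable ns feature REWRITE

    # Policy 1: remove an internal header from incoming requests
    add rewrite action act_strip_dbg delete_http_header X-Debug
    add rewrite policy pol_strip_dbg "HTTP.REQ.HEADER(\"X-Debug\").EXISTS" act_strip_dbg

    # Policy 2: tag every request with the name of the appliance
    add rewrite action act_add_via insert_http_header Via "\"ns01\""
    add rewrite policy pol_add_via HTTP.REQ.IS_VALID act_add_via

    # The lower priority number is evaluated, and applied, first
    bind rewrite global pol_strip_dbg 100
    bind rewrite global pol_add_via 200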

Accelerating Load Balanced Traffic by Using Compression

Compression is a popular means of optimizing bandwidth usage, and all modern web browsers
support compressed data. If you enable the AppCompress feature, the Citrix NetScaler intercepts
requests from clients and determines whether the client can accept compressed content. After
receiving the HTTP response from the server, the NetScaler examines the content to determine
whether it is compressible. If the content is compressible, the NetScaler compresses it, modifies
the response header to indicate the type of compression performed, and forwards the compressed
content to the client.
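
Concretely, the negotiation rests on two standard HTTP headers, shown here purely as an illustration rather than a capture from a live system:

    Accept-Encoding: gzip, deflate      (client request: what the client can accept)
    Content-Encoding: gzip              (forwarded response: how the body was compressed)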

NetScaler compression is a policy-based feature. A policy filters requests and responses to identify
responses to be compressed, and specifies the type of compression to apply to each response. The NetScaler provides several built-in policies to compress common MIME types such as text/html, text/plain, text/xml, text/css, text/rtf, application/msword, application/vnd.ms-excel, and application/vnd.ms-powerpoint.

You can also create custom policies. The NetScaler does not compress MIME types that are already compressed or opaque, such as application/octet-stream, binary, and bytes, or compressed image formats such as GIF and JPEG.
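
A custom policy follows the same expression-and-action pattern as the built-in ones. The following sketch, with an illustrative name and rule, compresses JSON responses with gzip:

    # Compress responses whose Content-Type indicates JSON
    add cmp policy pol_cmp_json -rule "HTTP.RES.HEADER(\"Content-Type\").CONTAINS(\"application/json\")" -resAction GZIP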

To configure compression, you must enable it globally and on each service that will provide responses that you want compressed. If you have configured vservers for load balancing or content
switching, you should bind the policies to the vservers. Otherwise, the policies apply to all traffic that
passes through the NetScaler.
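
Pulling these steps together, a sketch of the compression configuration for the load balancing example might look like this; the policy name carries over from the previous sketch, and the -type RESPONSE qualifier on the vserver binding is an assumption:

    # Enable the feature globally
    enable ns feature CMP

    # Enable compression on each service whose responses may be compressed
    set service svc_web1 -CMP YES
    set service svc_web2 -CMP YES

    # Either bind the policy to the vserver so it applies only to that traffic...
    bind lb vserver vs_web -policyName pol_cmp_json -priority 100 -type RESPONSE

    # ...or bind it globally so it applies to all traffic through the appliance
    bind cmp global pol_cmp_json -priority 100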
