Azure Load Balancer is a Layer-4 (TCP, UDP) load balancer that distributes incoming traffic among the healthy service instances in cloud services or virtual machines defined in a load-balanced set.
The distribution algorithm used is a 5-tuple hash (source IP, source port, destination IP, destination port, protocol type) that maps traffic to the available servers. It provides stickiness only within a transport session: packets in the same TCP or UDP session are directed to the same datacenter IP (DIP) instance behind the load-balanced endpoint. When the client closes and re-opens the connection, or starts a new session from the same source IP, the source port changes and the traffic may go to a different DIP endpoint.
We have introduced a new distribution mode called Source IP Affinity (also known as session affinity or client IP affinity). Azure Load Balancer can be configured to use a 2-tuple (source IP, destination IP) or 3-tuple (source IP, destination IP, protocol) hash to map traffic to the available servers. With Source IP affinity, connections initiated from the same client computer go to the same DIP endpoint.
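To illustrate the difference, here is a minimal PowerShell sketch. It is not Azure's actual hashing algorithm; the DIP addresses and tuple values are made up. It only shows why a 5-tuple hash can send new sessions from the same client to different DIPs, while a 2-tuple (source IP, destination IP) hash always picks the same one:

# Hypothetical DIPs behind the load-balanced endpoint
$dips = @("10.0.0.4", "10.0.0.5", "10.0.0.6")

function Select-Dip([string[]] $tuple) {
    # Hash the tuple fields and map the result onto the DIP list
    $md5   = [System.Security.Cryptography.MD5]::Create()
    $bytes = [System.Text.Encoding]::UTF8.GetBytes($tuple -join "|")
    $hash  = [System.BitConverter]::ToUInt32($md5.ComputeHash($bytes), 0)
    $dips[$hash % $dips.Count]
}

# 5-tuple: a new session from the same client uses a new source port, so it may hash to a different DIP
Select-Dip @("203.0.113.10", "50001", "65.52.0.1", "80", "tcp")
Select-Dip @("203.0.113.10", "50002", "65.52.0.1", "80", "tcp")

# 2-tuple (source IP, destination IP): every session from this client maps to the same DIP
Select-Dip @("203.0.113.10", "65.52.0.1")
Select-Dip @("203.0.113.10", "65.52.0.1")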
Source IP affinity solves an incompatibility between the Azure Load Balancer and RD Gateway: you can now build an RD Gateway farm in a single cloud service.
Another usage scenario is media upload, where the data upload happens over UDP while the control plane is maintained over TCP:
- A client first initiates a TCP session to the load-balanced public address and is directed to a specific DIP; this channel is kept active to monitor connection health.
- A new UDP session from the same client computer is then initiated to the same load-balanced public endpoint. The expectation is that this connection is directed to the same DIP endpoint as the previous TCP connection, so that the media upload can run at high throughput while the control channel is maintained over TCP.
Note that if the load-balanced set changes (a virtual machine is removed or added), the distribution of client requests is recomputed, and you cannot depend on new connections from existing client sessions ending up at the same server. Additionally, using the Source IP affinity distribution mode may cause an unequal distribution of traffic; for example, clients running behind a proxy may all appear to the load balancer as a single client.
Scenarios
- Configure load balancer distribution for an endpoint on a Virtual Machine via PowerShell or the Service Management API.
- Configure load balancer distribution for your load-balanced endpoint sets via PowerShell or the Service Management API.
- Configure load balancer distribution for your Web/Worker roles via the service model.
PowerShell examples
Make sure to download and install the latest Azure PowerShell module (October 2014 release).
Add an Azure endpoint to a Virtual Machine and set load balancer distribution mode
Get-AzureVM -ServiceName "mySvc" -Name "MyVM1" | Add-AzureEndpoint -Name "HttpIn" -Protocol "tcp" -PublicPort 80 -LocalPort 8080 -LoadBalancerDistribution "sourceIP" | Update-AzureVM
LoadBalancerDistribution can be set to sourceIP for 2-tuple (source IP, destination IP) load balancing, sourceIPProtocol for 3-tuple (source IP, destination IP, protocol) load balancing, or none for the default 5-tuple behavior.
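To change the mode on an endpoint that already exists, the same setting can be applied through Set-AzureEndpoint. This is a sketch: it assumes your installed Azure PowerShell version exposes -LoadBalancerDistribution on that cmdlet, and the service, VM, and endpoint names are placeholders:

Get-AzureVM -ServiceName "mySvc" -Name "MyVM1" | Set-AzureEndpoint -Name "HttpIn" -Protocol "tcp" -LocalPort 8080 -PublicPort 80 -LoadBalancerDistribution "sourceIPProtocol" | Update-AzureVM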
Retrieve an endpoint load balancer distribution mode configuration
PS C:\> Get-AzureVM -ServiceName "MyService" -Name "MyVM" | Get-AzureEndpoint
VERBOSE: 6:43:50 PM - Completed Operation: Get Deployment
LBSetName                : MyLoadBalancedSet
LocalPort                : 80
Name                     : HTTP
Port                     : 80
Protocol                 : tcp
Vip                      : 65.52.xxx.xxx
ProbePath                :
ProbePort                : 80
ProbeProtocol            : tcp
ProbeIntervalInSeconds   : 15
ProbeTimeoutInSeconds    : 31
EnableDirectServerReturn : False
Acl                      : {}
InternalLoadBalancerName :
IdleTimeoutInMinutes     : 15
LoadBalancerDistribution : sourceIP
If the LoadBalancerDistribution element is not present, the Azure Load Balancer uses the default 5-tuple algorithm.
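For example, to read just the distribution mode of a single endpoint (service, VM, and endpoint names are placeholders), you can select the property from the Get-AzureEndpoint output:

(Get-AzureVM -ServiceName "MyService" -Name "MyVM" | Get-AzureEndpoint -Name "HTTP").LoadBalancerDistribution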
Set the Distribution mode on a load balanced endpoint set
If endpoints are part of a load-balanced endpoint set, the distribution mode must be set on the load-balanced endpoint set itself rather than on the individual endpoints.
Set-AzureLoadBalancedEndpoint -ServiceName "MyService" -LBSetName "LBSet1" -Protocol tcp -LocalPort 80 -ProbeProtocolTCP -ProbePort 8080 -LoadBalancerDistribution "sourceIP"
Cloud Service example
You can use the Azure SDK for .NET 2.5 (to be released in November) to update your Cloud Service.
Endpoint settings for Cloud Services are configured in the service definition file (.csdef). Updating the load balancer distribution mode for a Cloud Services deployment requires a deployment upgrade.
Here is an example of .csdef changes for endpoint settings:
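A minimal sketch of what the relevant .csdef fragment might look like; the role name, endpoint name, and ports are placeholders, and the loadBalancerDistribution attribute is assumed to be available with the SDK 2.5 schema:

<WorkerRole name="worker-role-name" vmsize="Small">
  <Endpoints>
    <!-- loadBalancerDistribution controls the distribution mode for this endpoint -->
    <InputEndpoint name="HttpIn" protocol="tcp" localPort="8080" port="80" loadBalancerDistribution="sourceIP" />
  </Endpoints>
</WorkerRole>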
API example
You can configure the load balancer distribution mode using the Service Management API.
Make sure the x-ms-version header is set to 2014-09-01 or higher.
Update the configuration of the specified load-balanced set in a deployment
Request example
POST https://management.core.windows.net/<subscription-id>/services/hostedservices/<cloudservice-name>/deployments/<deployment-name>?comp=UpdateLbSet
x-ms-version: 2014-09-01
Content-Type: application/xml

<?xml version="1.0" encoding="utf-8"?>
<LoadBalancedEndpointList xmlns="http://schemas.microsoft.com/windowsazure" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <InputEndpoint>
    <LoadBalancedEndpointSetName>endpoint-set-name</LoadBalancedEndpointSetName>
    <LocalPort>local-port-number</LocalPort>
    <Port>external-port-number</Port>
    <LoadBalancerProbe>
      <Port>port-assigned-to-probe</Port>
      <Protocol>probe-protocol</Protocol>
      <IntervalInSeconds>interval-of-probe</IntervalInSeconds>
      <TimeoutInSeconds>timeout-for-probe</TimeoutInSeconds>
    </LoadBalancerProbe>
    <Protocol>endpoint-protocol</Protocol>
    <EnableDirectServerReturn>enable-direct-server-return</EnableDirectServerReturn>
    <IdleTimeoutInMinutes>idle-time-out</IdleTimeoutInMinutes>
    <LoadBalancerDistribution>sourceIP</LoadBalancerDistribution>
  </InputEndpoint>
</LoadBalancedEndpointList>
The value of LoadBalancerDistribution can be sourceIP for 2-tuple affinity, sourceIPProtocol for 3-tuple affinity, or none for no affinity (the default 5-tuple behavior).
Response
HTTP/1.1 202 Accepted
Cache-Control: no-cache
Content-Length: 0
Server: 1.0.6198.146 (rd_rdfe_stable.141015-1306) Microsoft-HTTPAPI/2.0
x-ms-servedbyregion: ussouth2
x-ms-request-id: 9c7bda3e67c621a6b57096323069f7af
Date: Thu, 16 Oct 2014 22:49:21 GMT
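As a rough PowerShell sketch of calling this operation directly: the subscription ID, service and deployment names, and certificate thumbprint are placeholders, and $body is assumed to hold the LoadBalancedEndpointList payload shown in the request example.

# Placeholder subscription, service, and deployment identifiers
$subscriptionId = "<subscription-id>"
$uri  = "https://management.core.windows.net/$subscriptionId/services/hostedservices/<cloudservice-name>/deployments/<deployment-name>?comp=UpdateLbSet"
# Management certificate already installed in the local certificate store
$cert = Get-Item "Cert:\CurrentUser\My\<management-certificate-thumbprint>"
# Send the UpdateLbSet request with the required x-ms-version header
Invoke-RestMethod -Method Post -Uri $uri -Certificate $cert -ContentType "application/xml" -Headers @{ "x-ms-version" = "2014-09-01" } -Body $body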