Azure Load Balancer: ( from AZ-103 Trainer Book )
The Azure Load Balancer delivers high availability and network performance to your applications. The
load balancer distributes inbound traffic to backend resources using load balancing rules and health
probes.
● Load balancing rules determine how traffic is distributed to the backend.
● Health probes ensure the resources in the backend are healthy.
The Load Balancer can be used for inbound as well as outbound scenarios and scales up to millions of
TCP and UDP application flows.
Keep this diagram in mind since it covers the four components that must be configured for your load
balancer: Frontend IP configuration, Backend pools, Health probes, and Load balancing rules.
For more information:
Load Balancer documentation – https://docs.microsoft.com/en-us/azure/load-balancer/
Public Load Balancer
There are two types of load balancers: public and internal.
A public load balancer maps the public IP address and port number of incoming traffic to the private IP
address and port number of the VM, and vice versa for the response traffic from the VM. By applying
load-balancing rules, you can distribute specific types of traffic across multiple VMs or services. For
example, you can spread the load of incoming web request traffic across multiple web servers.
The following figure shows internet clients sending webpage requests to the public IP address of a web
app on TCP port 80. Azure Load Balancer distributes the requests across the three VMs in the load-balanced
backend pool.
254 Module 8 Network Traffic Management
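The public load balancer described above could be provisioned with the Azure CLI. The following is a minimal sketch, assuming an existing resource group; all resource names (myResourceGroup, myPublicIP, myPublicLB, and so on) are placeholders, not values from this course.

```shell
# Sketch: create a Standard public load balancer with a public frontend IP
# and an empty backend pool. Resource names are placeholders.
az network public-ip create \
  --resource-group myResourceGroup \
  --name myPublicIP \
  --sku Standard

az network lb create \
  --resource-group myResourceGroup \
  --name myPublicLB \
  --sku Standard \
  --public-ip-address myPublicIP \
  --frontend-ip-name myFrontEnd \
  --backend-pool-name myBackEndPool
```

Load balancing rules, health probes, and backend pool members are added to this load balancer in separate steps, covered later in this topic.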
Internal Load Balancer
An internal load balancer directs traffic only to resources that are inside a virtual network or that use a
VPN to access Azure infrastructure. Frontend IP addresses and virtual networks are never directly exposed
to an internet endpoint. Internal line-of-business applications run in Azure and are accessed from within
Azure or from on-premises resources. For example, an internal load balancer could receive database
requests that need to be distributed to backend SQL servers.
An internal load balancer enables the following types of load balancing:
● Within a virtual network. Load balancing from VMs in the virtual network to a set of VMs that reside
within the same virtual network.
● For a cross-premises virtual network. Load balancing from on-premises computers to a set of VMs
that reside within the same virtual network.
● For multi-tier applications. Load balancing for internet-facing multi-tier applications where the
backend tiers are not internet-facing. The backend tiers require traffic load-balancing from the web
tier.
● For line-of-business applications. Load balancing for line-of-business applications that are hosted in
Azure without additional load balancer hardware or software. This scenario includes on-premises
servers that are in the set of computers whose traffic is load-balanced.
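An internal load balancer is created the same way as a public one, except that its frontend is a private IP address in a virtual network subnet. Here is a minimal CLI sketch; the resource names and the private IP address are placeholders, not values from this course.

```shell
# Sketch: create an internal load balancer whose frontend is a private IP
# inside an existing virtual network. Names and addresses are placeholders.
az network lb create \
  --resource-group myResourceGroup \
  --name myInternalLB \
  --sku Standard \
  --vnet-name myVNet \
  --subnet myBackendSubnet \
  --private-ip-address 10.0.1.4 \
  --frontend-ip-name myIntFrontEnd \
  --backend-pool-name myIntBackEndPool
```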
A public load balancer could be placed in front of the internal load balancer to create a multi-tier
application.
When you create an Azure Load Balancer, you will select the type (Internal or Public) of load balancer.
You will also select the SKU. The load balancer supports both Basic and Standard SKUs, each differing in
scenario scale, features, and pricing. The Standard Load Balancer is the newer Load Balancer product with
an expanded and more granular feature set; it is a superset of the Basic Load Balancer.
Here is some general information about the SKUs.
● SKUs are not mutable. You may not change the SKU of an existing resource.
● A standalone virtual machine resource, availability set resource, or virtual machine scale set resource
can reference one SKU, never both.
● A Load Balancer rule cannot span two virtual networks. Frontends and their related backend instances
must be in the same virtual network.
● There is no charge for the Basic load balancer. The Standard load balancer is charged based on the
number of rules and the amount of data processed.
● Load Balancer frontends are not accessible across global virtual network peering.
New designs and architectures should consider using Standard Load Balancer.
Backend Pools
To distribute traffic, a backend address pool contains the IP addresses of the virtual NICs that are
connected to the load balancer.
How you configure the backend pool depends on whether you are using the Standard or Basic SKU.
Backend pool endpoints:
● Standard SKU. Any VMs in a single virtual network, including a blend of VMs, availability sets, and
VM scale sets.
● Basic SKU. VMs in a single availability set or VM scale set.
Backend pools are configured from the Backend Pool blade. For the Standard SKU you can connect to an
Availability set, single virtual machine, or a virtual machine scale set.
In the Standard SKU you can have up to 1000 instances in the backend pool. In the Basic SKU you can
have up to 100 instances.
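Besides the Backend Pool blade, pool membership can also be managed from the CLI by adding a VM's NIC IP configuration to the pool. A minimal sketch, with placeholder resource names:

```shell
# Sketch: add a VM's NIC IP configuration to an existing backend pool.
# All resource names are placeholders.
az network nic ip-config address-pool add \
  --resource-group myResourceGroup \
  --nic-name myVM1Nic \
  --ip-config-name ipconfig1 \
  --lb-name myPublicLB \
  --address-pool myBackEndPool
```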
Load Balancer Rules
A load balancer rule is used to define how traffic is distributed to the backend pool. The rule maps a
given frontend IP and port combination to a set of backend IP address and port combinations. Before you
create the rule, the frontend, backend pool, and health probe should already be configured. Here is a rule
that passes frontend TCP connections to a set of backend SQL (port 1433) servers. The rule uses a health
probe that checks on TCP 1433.
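The SQL rule just described could be built from the CLI in two steps: create the health probe, then create the rule that references it. This is a sketch with placeholder resource names:

```shell
# Sketch: a TCP 1433 health probe, and a load balancing rule that forwards
# frontend TCP 1433 to the backend pool. Names are placeholders.
az network lb probe create \
  --resource-group myResourceGroup \
  --lb-name myInternalLB \
  --name mySQLProbe \
  --protocol tcp \
  --port 1433

az network lb rule create \
  --resource-group myResourceGroup \
  --lb-name myInternalLB \
  --name mySQLRule \
  --protocol tcp \
  --frontend-port 1433 \
  --backend-port 1433 \
  --frontend-ip-name myIntFrontEnd \
  --backend-pool-name myIntBackEndPool \
  --probe-name mySQLProbe
```

If the probe fails for a backend instance, the load balancer stops sending new flows to that instance until the probe succeeds again.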
Load balancing rules can be used in combination with NAT rules. For example, you could use NAT from
the load balancer’s public address to TCP 3389 on a specific virtual machine. This allows remote desktop
access from outside of Azure. Notice in this case, the NAT rule is explicitly attached to a VM (or network
interface) to complete the path to the target; whereas a Load Balancing rule need not be.
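The remote desktop scenario above shows the key difference in practice: unlike a load balancing rule, an inbound NAT rule must be attached to a specific VM's network interface. A CLI sketch, with placeholder names and an arbitrary frontend port:

```shell
# Sketch: an inbound NAT rule mapping frontend port 50001 to TCP 3389 on
# one VM, then attaching it to that VM's NIC. Names are placeholders.
az network lb inbound-nat-rule create \
  --resource-group myResourceGroup \
  --lb-name myPublicLB \
  --name myRDPRule \
  --protocol tcp \
  --frontend-port 50001 \
  --backend-port 3389 \
  --frontend-ip-name myFrontEnd

az network nic ip-config inbound-nat-rule add \
  --resource-group myResourceGroup \
  --nic-name myVM1Nic \
  --ip-config-name ipconfig1 \
  --lb-name myPublicLB \
  --inbound-nat-rule myRDPRule
```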
Do you understand the difference between load balancing rules and NAT rules? Remember, this
approach should only be used when you need connectivity from the internet. Most normal communication
would occur over on-premises-to-Azure connections such as site-to-site VPN and ExpressRoute.