
Cloud Computing INFS3208
Recap
• Cloud Delivery Models
• Cloud Deployment Models
• Cloud-Enabling Technologies
– Broadband Networks and Internet Architecture
– Virtualisation Technology (VT)
– Data Centre Technology
– Web Technology
– Multitenant Technology
• Goals and Benefits
• Risks and Challenges
• Cloud-based Applications in the World

Outline
• Networking and Virtual Private Cloud
• Load Balancing
– What & Why Load Balancing
– Algorithms
– LB in Cloud Architecture
– LB in Distributed Systems
– LB in Network Communications
– LB in Cloud Product
• Cloud Architecture
– Workload Distribution Architecture & Resource Pooling Architecture
– Dynamic Scalability Architecture & Elastic Resource Capacity Architecture
– Service Load Balancing Architecture & Cloud Bursting Architecture
– Elastic Disk Provisioning Architecture & Redundant Storage Architecture
• Advanced Cloud Architecture

Course Overview – Lectures
Lecture 1: Introduction
Lecture 2: Adv. topics & appl.
Lecture 3: Networks & Load Balancing
Lecture 4: VT: Docker I
Lecture 5: VT: Docker II
Lecture 6: VT: Docker III
Lecture 7: DBs in Cloud Computing
Lecture 8: DFS
Lecture 9: Hadoop & MapReduce
Lectures 10–11: Orchestration, …
Lecture 12: Security & Privacy
Lecture 13: Revision
(Topic areas: Concepts, Storage, Computation, Others)
Announcements:
• More GCP Coupons available with a new link (updated)
• No teaching activities on Wednesday (Ekka)
• Release A1 on Friday 13/8 (due on Friday 3/9, 3 weeks)
• No medical certificates needed for extensions or deferred exams until 31 August (possibly to be extended) – students can use a statement of circumstances

Cloud Networking
https://en.wikipedia.org/wiki/Domain_Name_System

Virtual Private Cloud (VPC)
Cloud Deployment models:
• Public Cloud (e.g. AWS, GCP) vs. Private Cloud (UQCloud)
• Human Resource department vs. Finance department in one company
A virtual private cloud (VPC) is a virtualized private cloud hosted within a public cloud (e.g. GCP, AWS) for an organization.
Advantages of a VPC: better security + all the benefits of a public cloud
(Diagram: HR Management Services and Finance Services hosted together in a public cloud vs. in a separate private cloud.)

Virtual Private Cloud (VPC)
The key technologies for isolating a VPC from the rest of the public cloud are:
• Subnets:
– A subnet (a range of IP addresses) is reserved (not available to everyone) within the network – for private use.
– In a VPC, cloud providers will allocate private IP addresses (not accessible via the public Internet).
• VLAN (Virtual Local Area Network):
– A VLAN is a virtual LAN and it’s used to partition a network.
• VPN:
– A virtual private network (VPN) uses encryption to create a private network.
– VPN traffic passes through publicly shared Internet infrastructure – routers, switches, etc.
• NAT (Network Address Translation):
– NAT matches private IP addresses to a public IP address for connections with the public Internet.
– With NAT, a public-facing website or application could run in a VPC.
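As a minimal illustration in Python, a VPC subnet is just a reserved range of private IP addresses; the CIDR range and addresses below are made up for the example, not GCP defaults:

import ipaddress

# A VPC subnet is a reserved range of private IP addresses (RFC 1918).
subnet = ipaddress.ip_network("10.152.0.0/20")   # hypothetical subnet of a VPC
vm_ip = ipaddress.ip_address("10.152.3.17")      # a VM's internal address

print(vm_ip in subnet)          # True: the VM sits inside the subnet
print(subnet.num_addresses)     # 4096 addresses in a /20
print(vm_ip.is_private)         # True: not routable on the public Internet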

Regions and Zones
• Cloud Providers organize IT resources by regions and zones
• Availability Regions
– the geographic locations of the data centres, e.g. China, North America, Southeast Asia, East Asia, Europe, Middle East, etc.
– a collection of zones
• Availability Zones
– a specific location to run resources
– one or more discrete data centres with redundancy within a Region
– multiple zones are interconnected with encryption
• Prices of IT resources in different zones and regions could be very different!
https://aws.amazon.com/about-aws/global-infrastructure/regions_az/?nc1=h_ls

Subnets and VPC in GCP and AWS
Subnets and VPC in GCP and AWS are differently organized:
• A VPC in GCP is global (traffic is routed between regions automatically), but a VPC in AWS is regional (cross-region connectivity needs a VPC peering setup);
• Subnets in GCP are regional (spanning the zones of a region), but subnets in AWS are confined to a single zone (cross-zone connectivity needs a routing setup)
(Diagram: a global GCP VPC network with Subnets 1–3 spread across zones of Regions 1 and 2, vs. AWS VPC A and VPC B, each confined to one region with subnets tied to individual zones.)

Demo – Create VPC Network in GCP

Virtual Private Cloud (VPC)
(Diagram: a global VPC network with Subnet 1 (vm1) in a South-east Asia zone and Subnet 2 (vm2) in a Europe zone, connected via a VPN gateway over the Internet to an on-premises network.)

Routes and Firewall Rules
• Routes define paths for packets leaving instances.
• Routes in Google Cloud are divided into two categories:
– system-generated and custom.
• Firewall rules aim to protect your VPCs.
• Firewall rules apply to both outgoing (egress) and incoming (ingress) traffic in the network.
• Firewall rules control traffic even if it is entirely within the network.
• In GCP, every VPC network has implied firewall rules;
– two implied IPv4 firewall rules,
– two implied IPv6 firewall rules.
– the implied egress rules allow most egress traffic, and the implied ingress rules deny all ingress traffic.
– you cannot delete the implied rules, but you can override them with your own rules.
• To monitor which firewall rule allowed or denied a particular connection, see Firewall Rules Logging.

Other Networking Products
• Load Balancing
• Cloud DNS
• Cloud CDN
• Cloud NAT
• Traffic Director
• Service Directory
• Cloud Domains
• Private Service Connect
• And more…
Have some quizzes in Ripple!


Load Balancing
Picture: https://avinetworks.com/docs/17.2/aws-reference-architecture/

Example: Three-Tier Client/Server Network
Electronic Commerce, Sixth Edition

Load Balancing
What is load balancing?
• Load balancing improves the distribution of workloads across multiple computing resources, such as computers, a computer cluster, network links, central processing units, or disk drives.
• Load balancing aims to optimize resource use, maximize throughput, minimize response time, and avoid overload of any single resource.
• Using multiple components with load balancing instead of a single component may increase reliability and availability
through redundancy.
• Load balancing usually involves dedicated software or hardware, such as a multilayer switch or a Domain Name System server process.
Why load balance?
• Improve resource utilization
• Improve system performance
• Improve energy efficiency

Load Balancing Problems
Server computing capacity problem – thin clients push too many applications onto the server.
Single-point data storage problem – a single data resource is (unexpectedly) demanded by an overwhelming number of clients, i.e., a single data item is requested by many users.
Traffic problem – a destination web server is visited by too many clients.
Storage and traffic problem – a server needs to maintain far too many incoming (upstream) or outgoing (downstream) data streams for file exchanges.
Network congestion problem – the demand for the (web) services exceeds the server's capacity.
Dynamic change of client demands – the clients' demands for services are unpredictable and may change dramatically.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.429.996&rep=rep1&type=pdf
(Capacity of cloud resources: computing, storing, networking.)
Load balancing is the process of finding overloaded nodes and then transferring the extra load to other nodes.


Load Balancing Algorithm – Round-Robin
Round-robin load balancing is one of the simplest methods for distributing client requests across a group of servers.
Going down the list of servers in the group, the round-robin load balancer forwards a client request to each server in turn.
When it reaches the end of the list, the load balancer loops back and goes down the list again (sends the next request to the first listed server, the one after that to the second server, and so on).
Singh, Navpreet, and Kanwalvir Singh Dhindsa. “Load Balancing in Cloud Computing Environment: A Comparative Study of Service Models and Scheduling Algorithms.” International Journal of Advanced Networking and Applications 8, no. 6 (2017): 3246.
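A minimal Python sketch of round-robin dispatch (server names are placeholders):

from itertools import cycle

servers = ["server-1", "server-2"]        # the group of backend servers
next_server = cycle(servers)              # endless round-robin iterator

def forward(request_id):
    # Forward each incoming request to the next server in turn,
    # looping back to the first server after the last one.
    return f"request {request_id} -> {next(next_server)}"

for i in range(1, 7):
    print(forward(i))    # 1 -> server-1, 2 -> server-2, 3 -> server-1, ...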

Load Balancing Algorithm – Round-Robin
• Round-robin load balancing is not suitable for every case:
– when the hardware specifications of the nodes differ, round-robin can overload the weaker nodes and leave the cluster imbalanced.
• Round-robin is best for clusters consisting of servers with identical specs.
(Diagram: multiple clients load balanced across two servers with different specs – 4 vCore / 4 GB RAM vs. 2 vCore / 2 GB RAM.)

Load Balancing Algorithm – Weighted Round Robin
Weighted Round Robin load balancing is similar to the Round Robin (cyclic distribution).
The node with the higher specs will be apportioned a greater number of requests.
(Diagram: two servers – 4 vCore / 4 GB RAM with weight 2, and 2 vCore / 2 GB RAM with weight 1 – serving multiple clients.)
Set up the load balancer with assigned “weights” to each node according to hardware specs.
Higher specs, higher weight.
For example, if Server 1’s capacity is 2x more than Server 2’s, then you can assign Server 1 a weight of 2 and Server 2 a weight of 1.
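A minimal Python sketch of that weighting (weights and server names are illustrative):

from itertools import cycle

# (server, weight): the higher-spec node gets proportionally more requests.
weighted = [("server-1", 2),   # 4 vCore / 4 GB RAM
            ("server-2", 1)]   # 2 vCore / 2 GB RAM

# Expand by weight and cycle: server-1, server-1, server-2, server-1, ...
schedule = cycle([name for name, w in weighted for _ in range(w)])

for request_id in range(1, 7):
    print(f"request {request_id} -> {next(schedule)}")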

Load Balancing Algorithm – Least Connections
Identical hardware specs, but clients occupy the servers for different durations, e.g. clients connecting to Server 2 stay connected much longer than those connecting to Server 1.
Congestion on Server 2 makes its resources run out faster.
Example: clients 1 and 3 have already disconnected, while clients 2, 4, 5, and 6 are still connected.
The Least Connections algorithm considers the number of current connections each server has when load balancing.
Fewer connections means higher priority for the next assignment.
For example, when using the Least Connections algorithm, client 6 will be directed to Server 1 instead of Server 2.
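A minimal Python sketch of the selection rule (connection counts are illustrative):

# Active connection count per server (illustrative starting state).
connections = {"server-1": 1, "server-2": 2}

def assign(client):
    # Pick the server with the fewest active connections.
    target = min(connections, key=connections.get)
    connections[target] += 1
    return f"{client} -> {target}"

def disconnect(server):
    connections[server] -= 1

print(assign("client-6"))   # client-6 -> server-1 (fewer active connections)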

Load Balancing Algorithm – Weighted Least Connections
• The Weighted Least Connections algorithm applies a “weight” component based on the computing capacities of each server.
• Similar to Weighted Round Robin, set up a weight for each server.
• When directing an access request, a load balancer now considers two things:
– the weights of each server
– the number of clients currently connected to each server.
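A minimal Python sketch, picking the server with the lowest connections-to-weight ratio (all numbers are illustrative):

# Weighted Least Connections: pick the server with the lowest
# active-connections-to-weight ratio.
servers = {
    "server-1": {"weight": 2, "connections": 3},   # 4 vCore / 4 GB RAM
    "server-2": {"weight": 1, "connections": 2},   # 2 vCore / 2 GB RAM
}

def pick(servers):
    return min(servers, key=lambda s: servers[s]["connections"] / servers[s]["weight"])

print(pick(servers))   # server-1: ratio 3/2 = 1.5 beats server-2's 2/1 = 2.0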

Load Balancing Algorithm – Random
• As its name implies, this algorithm matches clients and servers at random, i.e. using an underlying random number generator.
• In cases where the load balancer receives a large number of requests, a Random algorithm distributes the requests roughly evenly across the nodes.
• Like Round Robin, the Random algorithm is suitable for clusters consisting of nodes with similar configurations (CPU, RAM, etc.).

Other Cloud Load Balancing Algorithms
• Agent-based adaptive load balancing
• Chained failover load balancing
• Weighted response time load balancing
• Source IP hashing load balancing
• Layer 4-7 load balancing
• Etc.
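Of these, source IP hashing is easy to sketch in Python: the same client IP is always hashed to the same server, which helps with session stickiness (server names are placeholders):

import hashlib

servers = ["server-1", "server-2", "server-3"]

def pick_by_source_ip(client_ip):
    # Hash the client's IP address and map it onto the server list;
    # the same IP always lands on the same server.
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

print(pick_by_source_ip("203.0.113.7"))   # deterministic for this client IP
print(pick_by_source_ip("203.0.113.7"))   # same server again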


Load Balanced Virtual Server Instances Architecture

Load Balanced Virtual Server Instances Architecture
• The load balanced virtual server instances architecture establishes a capacity watchdog system
– dynamically calculates virtual server instances and associated workloads,
– distributes the processing across available physical server hosts
• The capacity watchdog system has
– a usage monitor: tracks physical and virtual server usage and reports any significant fluctuations to the capacity planner
– live VM migration program
– a capacity planner: is responsible for dynamically calculating physical
server computing capacities against virtual server capacity requirements.
• The hypervisor cluster architecture provides the foundation of load-balanced virtual server architecture.
• Policies and thresholds are defined for the capacity watchdog monitor (2), which compares physical server capacities with virtual server processing (3).
• The capacity watchdog monitor reports an over-utilization to the VIM (4).

Load Balanced Virtual Server Instances Architecture
• The VIM signals the load balancer to redistribute the workload based on pre-defined thresholds (5).
• The load balancer initiates the live VM migration program to move the virtual servers (6).
• Live VM migration moves the selected virtual servers from one physical host to another (7).

Load Balanced Virtual Server Instances Architecture
• The workload is balanced across the physical servers in the cluster (8).
• The capacity watchdog continues to monitor the workload and resource consumption (9).
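A minimal Python sketch of the capacity watchdog logic described above, under assumed thresholds and host capacities (host and VM names and all numbers are illustrative, not part of the reference architecture):

OVER_UTILISATION_THRESHOLD = 0.80   # assumed policy threshold

# vCPU load of each virtual server, grouped by physical host (illustrative).
hosts = {
    "host-A": {"capacity": 16, "vms": {"vm-1": 6, "vm-2": 7}},
    "host-B": {"capacity": 16, "vms": {"vm-3": 3}},
}

def utilisation(host):
    return sum(host["vms"].values()) / host["capacity"]

def rebalance(hosts):
    # Capacity watchdog: when a host is over-utilised, live-migrate its
    # smallest virtual server to the least-utilised host.
    for name, host in hosts.items():
        if utilisation(host) > OVER_UTILISATION_THRESHOLD:
            vm = min(host["vms"], key=host["vms"].get)
            target = min(hosts, key=lambda h: utilisation(hosts[h]))
            if target != name:
                hosts[target]["vms"][vm] = host["vms"].pop(vm)
                print(f"live-migrating {vm}: {name} -> {target}")

rebalance(hosts)   # host-A is at 13/16 = 81%, so vm-1 moves to host-B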


Hadoop Distributed File System
• Apache Hadoop is a collection of open-source software utilities for dealing with big-data problems; its distributed file system was described by Shvachko et al. in 2010.
• The core of Apache Hadoop consists of a storage part, known as the Hadoop Distributed File System (HDFS), and a processing part, the MapReduce programming model.
– Hadoop splits files into large blocks and distributes them across nodes in a cluster.
– It then transfers packaged code onto the nodes to process the data in parallel.
– It takes advantage of data locality,
– processing data faster and more efficiently than a conventional supercomputer architecture could.
Shvachko, K., H. Kuang, S. Radia, and R. Chansler. “The Hadoop Distributed File System.” In MSST, vol. 10, pp. 1–10. 2010.

Design Motivations (similar with GFS)
• Many inexpensive commodity hardware and failures are very common
• Many big files: millions of files, ranging from MBs to GBs
• Two types of reads
– Large streaming reads
– Small random reads
• Once written, files are seldom modified
– Random writes are supported but do not have to be efficient
• High sustained bandwidth is more important than low latency

HDFS – Architecture Overview
NameNode
• Maintains meta-data in RAM
• maintains the namespace tree and the mapping of file blocks to DataNodes
DataNode
• Store data and replicas
• Send heartbeats to NameNode
• receives maintenance commands from the NameNode indirectly (in replies to heartbeats).
– replicate blocks to other nodes;
– remove local block replicas;
– re-register or to shut down the node;
– send an immediate block report.

HDFS – Architecture Overview
Master/Slave architecture
(Diagram: an HDFS client interacts with the NameNode, which holds the fsImage and is backed by a Secondary NameNode, and with DataNodes that store blocks on their local disks; replication, balancing, and heartbeat traffic flows between the NameNode and the DataNodes.)

HDFS Block Placement Policy – built-in load balancing
• The default HDFS block placement policy provides a tradeoff between minimizing the write cost and maximizing data reliability, availability, and aggregate read bandwidth.
• When a new block is created, the policy is as follows (a simplified sketch follows below):
– HDFS places the first replica on the node where the writer is located,
– the second and the third replicas on two different nodes in a different rack,
– and the rest are placed on random nodes, with the restrictions that
 no DataNode contains more than one replica of any block, and
 no rack contains more than two replicas of the same block, provided there are sufficient racks on the cluster.
• The default policy takes no account of DataNode utilisation.
• In [1][2], disk utilisation is considered when balancing the load.
[1]. Fan, Kai, et al. “An adaptive feedback load balancing algorithm in HDFS.” 2013 5th International Conference on Intelligent Networking and Collaborative Systems. IEEE, 2013.
[2]. Lin, Chi-Yi, et al. “A load-balancing algorithm for Hadoop Distributed File System.” 2015 18th International Conference on Network-Based Information Systems. IEEE, 2015.
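A simplified Python sketch of the default placement rule for a replication factor of 3 (node and rack names are illustrative; the real NameNode logic handles many more constraints):

import random

# Rack membership of DataNodes (illustrative names).
racks = {"dn1": "rack-A", "dn2": "rack-A",
         "dn3": "rack-B", "dn4": "rack-B", "dn5": "rack-B"}

def place_block(writer_node, replication=3):
    replicas = [writer_node]                       # 1st replica: the writer's node
    # 2nd and 3rd replicas: two different nodes in one remote rack.
    remote = [n for n, r in racks.items() if r != racks[writer_node]]
    remote_rack = racks[random.choice(remote)]
    replicas += random.sample([n for n in remote if racks[n] == remote_rack], 2)
    # Any further replicas: random nodes, at most one replica per node
    # and at most two replicas of the block per rack.
    while len(replicas) < replication:
        candidate = random.choice(list(racks))
        same_rack = sum(1 for n in replicas if racks[n] == racks[candidate])
        if candidate not in replicas and same_rack < 2:
            replicas.append(candidate)
    return replicas

print(place_block("dn1"))   # e.g. ['dn1', 'dn4', 'dn3']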

Load Balancing in NginX
The following load balancing mechanisms (or methods) are supported in nginx:
round-robin — requests to the application servers are distributed in a round-robin fashion,
least-connected — next request is assigned to the server with the least number of active connections,
ip-hash — a hash-function is used to determine what server should be selected for the next request (based on the client’s IP address).
(Screenshots: round-robin, weighted round-robin, and least-connection setups in the nginx configuration file.)


Network Model
OSI (Open Systems Interconnect) Model:
• is a conceptual model that characterizes and standardizes the communication functions of a telecommunication system without regard to its underlying internal structure and technology.
• Its goal is the interoperability of diverse communication systems using standard communication protocols.
• The model partitions a communication system into 7 abstraction layers.
(Examples: HTTP, SMTP, POP3, FTP, etc. at the application layer; TCP/UDP at the transport layer.)

Packet Encapsulation
OSI (Open Systems Interconnect) Model:
• Data needs to be encapsulated before being physically transferred to another location.
• Each OSI layer encapsulates the data by prepending its own header.
• For a Layer-n load balancer, a higher n means more of the encapsulated packet has to be parsed.
(Frame layout: 22-byte frame header, 20-byte IP header, 20-byte TCP header, 64 to 1500 bytes of payload, and a 4-byte trailer.)

HTTPs (Layer 7) Load Balancing
(Example URL map: requests for mycompany.com/upload/videos are routed to one backend service, while mycompany.com/* is routed to another.)
TCP vs HTTP(S) Load Balancing. https://medium.com/martinomburajr/distributed-computing-tcp-vs-http-s-load-balancing-7b3e9efc6167
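A minimal Python sketch of the Layer-7 idea, routing on the request path (backend names are placeholders):

# Path-based (Layer-7) routing: the balancer parses the HTTP request line
# and picks a backend pool by URL prefix.
routes = {
    "/upload/videos": ["video-backend-1", "video-backend-2"],
    "/":              ["web-backend-1", "web-backend-2"],
}

def route(path):
    for prefix, pool in routes.items():    # most specific prefix listed first
        if path.startswith(prefix):
            return pool[hash(path) % len(pool)]

print(route("/upload/videos/cat.mp4"))   # -> one of the video backends
print(route("/index.html"))              # -> one of the web backends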

TCP (Layer 4) Load Balancing
TCP vs HTTP(S) Load Balancing. https://medium.com/martinomburajr/distributed-computing-tcp-vs-http-s-load-balancing-7b3e9efc6167

Differences between Layer 4 and Layer 7 Load Balancing
Layer 4 LB (TCP) vs. Layer 7 LB (HTTPS):
• Layer: Transport Layer vs. Application Layer
• Packet manipulation: No vs. Yes
• SSL traffic: No vs. Yes
• Logging & monitoring: Not suitable vs. Yes
• Implementation: Dedicated hardware vs. Typically software
• Throughput speed: Fast vs. Relatively lower
TCP vs HTTP(S) Load Balancing. https://medium.com/martinomburajr/distributed-computing-tcp-vs-http-s-load-balancing-7b3e9efc6167


Load Balancing in GCP – Overview
Worldwide autoscaling and load balancing
• Scale your applications on Compute Engine from small to big.
• Distribute your load-balanced compute resources in single or multiple regions
– close to your users, to meet your high-availability requirements.
• Put your resources behind a single IP (IP Anycast technology).
– a single anycast IP front-ends all your backend instances in regions around the world.
– It provides cross-region load balancing
 E.g. automatic multi-region failover, which gently moves traffic in fractions if backends become unhealthy.
– In contrast to DNS-based global load balancing solutions, Cloud Load Balancing reacts instantaneously to changes in users, traffic, network, backend health, and other related conditions.
*IP Anycast is a networking technique that allows multiple machines to share the same IP address.

Cloud Load Balancing
Software-defined load balancing
• a fully distributed, software-defined, managed service for all traffic.
• It is not an instance- or device-based solution, so you won't be locked into physical load balancing infrastructure.
• Multiple traffic types are supported: HTTP(S), TCP/SSL, and UDP.
Over one million queries per second
• the same frontend-serving infrastructure that powers Google.
• supports 1 million+ queries per second with consistent high performance and low latency.
• Traffic enters Cloud Load Balancing through 80+ distinct global load balancing locations, maximizing the distance traveled on Google’s fast private network backbone.
Seamless autoscaling
• Cloud Load Balancing can scale seamlessly and automatically when users and traffic grow

Cloud Load Balancing
Internal load balancing
• without load balancer exposure to the internet.
• GCP internal load balancing is architected
using Andromeda.
• VPN supported for clients.
Support for cutting-edge protocols
• includes support for the latest application delivery protocols.
– E.g. It supports HTTP/2 with gRPC when connecting to backends.
What is HTTP/2?
It is the second major version of HTTP, designed to make applications faster, simpler, and more robust.

External and Internal Load Balancing
GCP’s load balancers can be divided into external and internal load balancers.
• External load balancers distribute traffic coming from the internet to your GCP network.
• Internal load balancers distribute traffic within your GCP network.
https://cloud.google.com/load-balancing/docs/load-balancing-overview

Hybrid Load Balancing Example on GCP
Traffic from users in San Francisco, Iowa, and Singapore is directed to an external load balancer, which distributes that traffic to different regions in a GCP network.
An internal load balancer then distributes traffic between the us-central1-a and us-central1-b zones.
https://cloud.google.com/load-balancing/docs/load-balancing-overview

HTTP(S) Load Balancing Setup
Cross-region load balancing
https://cloud.google.com/load-balancing/docs/load-balancing-overview
Content-based load balancing

No Free Lunch
• To have such load balancers, you need to set them up manually.
• The load balancing services are not free.
Example: one month of vanilla load balancing with 5 forwarding rules
– Rule cost: 24 hours × 30 days × $0.025 = $18
– Ingress cost: 1024 GB × 30 days × $0.008/GB = $245.76
– Total cost: $18 + $245.76 = $263.76
Have some quizzes in Ripple! https://cloud.google.com/load-balancing/docs/load-balancing-overview


1. Workload Distribution Architecture
• IT resources can be horizontally scaled via the addition of one or more identical IT resources
• A load balancer provides runtime logic capable of evenly distributing the workload among the available IT resources.
• The resulting workload distribution architecture reduces both IT resource over-utilization and under- utilization to an extent dependent upon the sophistication of the load balancing algorithms and runtime logic.

1. Workload Distribution Architecture
• This workload distribution architecture can be applied to any IT resource, including:
– distributed virtual servers,
– cloud storage devices,
– and cloud services.
• Load balancing systems applied to specific IT resources usually produce specialized variations:
– the service load balancing architecture
– the load balanced virtual server architecture
– the load balanced virtual switches architecture

1. Workload Distribution Architecture
In addition to the base load balancer mechanism, and the virtual server and cloud storage device mechanisms to which load balancing can be applied, the following mechanisms can also be part of this cloud architecture:
• Audit Monitor – When distributing runtime workloads, the type and geographical location of the IT resources that process the data can determine whether monitoring is necessary to fulfill legal and regulatory requirements.
• Cloud Usage Monitor – Various monitors can be involved to carry out runtime workload tracking and data processing.
• Hypervisor – Workloads between hypervisors and the virtual servers that they host may require distribution.
• Logical Network Perimeter – The logical network perimeter isolates cloud consumer network boundaries in
relation to how and where workloads are distributed.
• Resource Cluster – Clustered IT resources in active/inactive mode are commonly used to support workload balancing between different cluster nodes.
• Resource Replication – This mechanism can generate new instances of virtualized IT resources in response to runtime workload distribution demands.

2. Resource Pooling Architecture
• A resource pooling architecture is based on the use of one or more resource pools.
• identical IT resources are grouped and maintained by a system that automatically ensures that they
remain synchronized.
• Common examples of resource pools:
– Physical server pools are composed of networked servers that have been installed with operating systems and other necessary programs and/or applications and are ready for immediate use.
– Virtual server pools are usually configured using one of several available templates chosen by the cloud consumer during provisioning.
 For example, a cloud consumer can set up a pool of mid-tier Windows servers with 4 GB of RAM or a pool of low-tier Ubuntu servers with 2 GB of RAM.
– Storage pools, or cloud storage device pools, consist of file-based or block-based storage structures that contain empty and/or filled cloud storage devices.

2. Resource Pooling Architecture
• Common examples of resource pools:
– Network pools (or interconnect pools) are composed of different preconfigured network connectivity devices.
 For example, a pool of virtual firewall devices or physical network switches can be created for redundant connectivity, load balancing, or link aggregation.
– CPU pools are ready to be allocated to virtual servers, and are typically broken down into individual processing cores.
– Pools of physical RAM can be used in newly provisioned physical servers or to vertically scale physical servers.
• Dedicated pools can be created for each type of IT resource and individual pools can be grouped into a larger pool, in which case each individual pool becomes a sub-pool

2. Resource Pooling Architecture
• Resource pools can become highly complex, with multiple pools created for specific cloud consumers or applications.
• A hierarchical structure can be established to form parent, sibling, and nested pools in order to facilitate the organization of diverse resource pooling requirements
• Pools B and C are sibling pools that are taken from the larger Pool A, which has been allocated to a cloud consumer. This is an alternative to taking the IT resources for Pool B and Pool C from a general reserve of IT resources that is shared throughout the cloud.

2. Resource Pooling Architecture
• In the nested pool model, larger pools are divided into smaller pools that individually group the same type of IT resources together.
• Nested pools can be used to assign resource pools to different departments (development and deployment) or groups in the same cloud consumer organization.
• Nested Pools A.1 and Pool A.2 are comprised of the same IT resources as Pool A, but in different quantities.
• Nested pools are typically used to provision cloud services that need to be rapidly instantiated using the same type of IT resources with the same configuration settings.
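A minimal Python sketch of carving nested pools out of a parent pool (resource quantities are illustrative):

# Pool A is a parent pool; Pools A.1 and A.2 are nested pools carved out of it.
pool_a = {"cpu_cores": 64, "ram_gb": 256, "storage_gb": 10_000}

def nested_pool(parent, fraction):
    # Take a fraction of whatever the parent pool currently holds
    # and deduct it from the parent.
    child = {k: int(v * fraction) for k, v in parent.items()}
    for k, v in child.items():
        parent[k] -= v
    return child

pool_a1 = nested_pool(pool_a, 0.25)   # e.g. for the development department
pool_a2 = nested_pool(pool_a, 1.0)    # the remainder, e.g. for deployment
print(pool_a1, pool_a2, pool_a)       # same resource types, different quantities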

2. Resource Pooling Architecture
The commonly pooled mechanisms are cloud storage devices and virtual servers
The following mechanisms can also be part of this cloud architecture:
• Audit Monitor – This mechanism monitors resource pool usage to ensure compliance with privacy and regulation requirements, especially when pools contain cloud storage devices or data loaded into memory.
• Cloud Usage Monitor – Various cloud usage monitors are involved in the runtime tracking and synchronization that are required by the pooled IT resources and any underlying management systems.
• Hypervisor – The hypervisor mechanism is responsible for providing virtual servers with access to resource pools, in addition to hosting the virtual servers and sometimes the resource pools themselves.

2. Resource Pooling Architecture
• Logical Network Perimeter – The logical network perimeter is used to logically organize and isolate resource pools.
• Pay-Per-Use Monitor – The pay-per-use monitor collects usage and billing information on how individual cloud consumers are allocated and use IT resources from various pools.
• Remote Administration System – This mechanism is commonly used to interface with backend systems and programs in order to provide resource pool administration features via a front-end portal.
• Resource Management System – The resource management system mechanism supplies cloud consumers with the tools and permission management options for administering resource pools.
• Resource Replication – This mechanism is used to generate new instances of IT resources for resource pools.

3. Dynamic Scalability Architecture
• The dynamic scalability architecture is based on a system of pre-defined scaling conditions.
• If the conditions are satisfied, the dynamic allocation of IT resources from resource pools will be triggered.
• Dynamic allocation enables variable utilization as dictated by usage demand fluctuations, since unnecessary IT resources are efficiently reclaimed without requiring manual interaction.
• The automated scaling listener is configured with workload thresholds.
• This mechanism can be provided with logic that determines how many additional IT resources can be dynamically provided, based on the terms of a given cloud consumer’s provisioning contract.

3. Dynamic Scalability Architecture
The following types of dynamic scaling are commonly used:
• Dynamic Horizontal Scaling
– IT resource instances are scaled out to handle fluctuating workloads.
– The automatic scaling listener monitors requests and signals resource replication to initiate IT
resource duplication, as per requirements and permissions.
• Dynamic Vertical Scaling
– IT resource instances are scaled up and down when there is a need to adjust the processing capacity of a single IT resource.
– E.g., increase CPU cores or RAM for a seriously overloaded virtual server.
• Dynamic Relocation
– The IT resource is re-located to a host with more capacity.
– E.g., a database may need to be moved from an HDD storage device with 100 MB/s read/write I/O capacity to an SSD-based storage device with 1000 MB/s I/O capacity.
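Returning to dynamic horizontal scaling above, a minimal Python sketch of the decision an automated scaling listener might make (the per-instance threshold is an assumed value, not part of the architecture):

MAX_REQUESTS_PER_INSTANCE = 100    # assumed workload threshold per instance

def instances_needed(current_requests, current_instances):
    # Automated scaling listener: if the workload exceeds what the current
    # instances can handle, ask the resource replication mechanism for more.
    required = -(-current_requests // MAX_REQUESTS_PER_INSTANCE)  # ceiling division
    return max(required, current_instances)   # this sketch only scales out

print(instances_needed(current_requests=350, current_instances=2))   # -> 4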

3. Dynamic Scalability Architecture
The following illustrates the process of dynamic horizontal scaling:
(1). Cloud service consumers are sending requests to a cloud service.
(2). The automated scaling listener monitors the cloud service to determine if predefined capacity thresholds are being exceeded.

3. Dynamic Scalability Architecture
(3). The number of requests coming from cloud service consumers increases.
(4). The workload exceeds the performance thresholds. The automated scaling listener determines the next course of action based on a predefined scaling policy
(5). If the cloud service implementation is deemed eligible for additional scaling, the automated scaling listener initiates the scaling process.

3. Dynamic Scalability Architecture
(6) The automated scaling listener sends a signal to the resource replication mechanism
(7) More instances of the cloud service will be generated.
(8) Once the increased workload has been accommodated, the automated scaling listener resumes monitoring, retracting and adding IT resources as required.

3. Dynamic Scalability Architecture
The dynamic scalability architecture can be applied to a range of IT resources (e.g. virtual servers and cloud storage devices).
Besides the core automated scaling listener and resource replication mechanisms, the following mechanisms can also be used in this form of cloud architecture:
• Cloud Usage Monitor – Specialized cloud usage monitors can track runtime usage in response to dynamic fluctuations caused by this architecture.
• Hypervisor – The hypervisor is invoked by a dynamic scalability system to create or remove virtual server instances, or to be scaled itself.
• Pay-Per-Use Monitor – The pay-per-use monitor is engaged to collect usage cost information in response to the scaling of IT resources.

4. Elastic Resource Capacity Architecture
• The elastic resource capacity architecture is primarily related to the dynamic provisioning of virtual servers, using a system that allocates and reclaims CPUs and RAM in immediate response to the fluctuating processing requirements of hosted IT resources
• Example:
(1) Cloud service consumers send requests to a cloud service.
(2) An automated scaling listener monitors the requests.
(3) An intelligent automation engine script is deployed with workflow logic.
(4) The script can notify the resource pool using allocation requests.

4. Elastic Resource Capacity Architecture
(5) Cloud service consumer requests increase
(6) The automated scaling listener signals the intelligent automation engine to execute the script.
(7) The script runs the workflow logic that signals the hypervisor to allocate more IT resources from the resource pools.
(8) The hypervisor allocates additional CPU and RAM to the virtual server, enabling the increased workload to be handled.
Note: Resource pools interact with the hypervisor to retrieve and return CPU and RAM resources at runtime.
Comparison between ERCA and DSA?

5. Service Load Balancing Architecture
Service load balancing architecture
• a specialized variation of the workload distribution architecture
• is geared specifically for scaling cloud service implementations.
• Redundant deployments of cloud services are created, with a load balancing system added to dynamically distribute workloads.
• The duplicate cloud service implementations are organized into a resource pool,
• The load balancer is positioned as either an external or built-in component to allow the host
servers to balance the workloads themselves.
• Depending on the anticipated workload and processing capacity of host server environments, multiple instances of each cloud service implementation can be generated as part of a resource pool that responds to fluctuating request volumes more efficiently.

5. Service Load Balancing Architecture
• The load balancer can be positioned independent of the cloud services and their host servers (external).
• The load balancer intercepts messages sent by cloud service consumers (1) and forwards them to the virtual servers so that the workload processing is horizontally scaled (2).

5. Service Load Balancing Architecture
• The load balancer can be positioned built-in as part of the application or server’s environment.
• In the built-in case, a primary server with the load balancing logic can communicate with neighbouring servers to balance the workload.
• Example:
– (1) Cloud service consumer requests are sent to
Cloud Service A on Virtual Server A.
– (2) The cloud service implementation (primary server – Virtual Server A) includes built-in load balancing logic.
– The logic is capable of distributing requests to the neighbouring Cloud Service A implementations on Virtual Servers B and C (2), which are redundant.

5. Service Load Balancing Architecture
The service load balancing architecture can involve the following mechanisms in addition to the load balancer:
• Cloud Usage Monitor – Cloud usage monitors may be involved with monitoring cloud service instances and their respective IT resource consumption levels, as well as various runtime monitoring and usage data collection tasks.
• Resource Cluster – Active-active cluster groups are incorporated in this architecture to help balance workloads across different members of the cluster.
• Resource Replication – The resource replication mechanism is utilized to generate cloud service implementations in support of load balancing requirements.

6. Cloud Bursting Architecture
• The cloud bursting architecture establishes a form of dynamic scaling that scales or “bursts out” on- premise IT resources into a cloud whenever predefined capacity thresholds have been reached.
• The corresponding cloud-based IT resources are redundantly pre-deployed but remain inactive until cloud bursting occurs.
• After they are no longer required, the cloud-based IT resources are released and the architecture “bursts in” back to the on-premise environment.
• Cloud bursting is a flexible scaling architecture that provides cloud consumers with the option of using cloud-based IT resources only to meet higher usage demands.
• The foundation of this architectural model is based on the automated scaling listener and resource replication mechanisms.
– Automated scaling listener determines when to redirect requests to cloud-based IT resources
– Resource replication is used to maintain synchronicity between on-premise and cloud-based IT resources in relation to state information

6. Cloud Bursting Architecture
• An automated scaling listener monitors the usage of on-premise Service A
• For requests from customer A and B, on-premise hardware and software can sufficiently provide Service A.
• When more requests come in, such as a request from Consumer C to access Service A, the cloud bursting architecture directs the request to the redundant implementation of Service A in the cloud (Cloud Service A).
• Need to predefine Service A’s usage threshold
• Need to deploy the redundant service in the cloud.
• A resource replication system is used to keep state management databases synchronized.
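A minimal Python sketch of the burst-out decision described above (the on-premise capacity value is assumed):

ON_PREMISE_CAPACITY = 2     # assumed: concurrent requests the on-premise host can serve
active_on_premise = 0

def route_request(consumer):
    # Burst out: overflow requests go to the redundant cloud copy of Service A.
    global active_on_premise
    if active_on_premise < ON_PREMISE_CAPACITY:
        active_on_premise += 1
        return f"{consumer} -> on-premise Service A"
    return f"{consumer} -> Cloud Service A (burst out)"

for consumer in ["Consumer A", "Consumer B", "Consumer C"]:
    print(route_request(consumer))   # A and B stay on-premise, C bursts to the cloud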

7. Elastic Disk Provisioning Architecture
• Cloud consumers are commonly charged for cloud- based storage space based on fixed-disk storage allocation.
• Example:
– a virtual server with three hard disks, each with a capacity of 150 GB.
– After OS installation, usage is 0 out of total of 450 GB of disk space.
– Because the 450 GB is allocated to the virtual server by the cloud provider, the consumer will be charged for 450 GB no matter how much has been actually used.

7. Elastic Disk Provisioning Architecture
• The elastic disk provisioning architecture establishes a dynamic storage provisioning system that ensures that the cloud consumer is granularly billed for the exact amount of storage that it actually uses.
• Example:
– a virtual server with three hard disks, each with a capacity of 150 GB.
– The total disk space is 450 GB; this 450 GB is set as the maximum disk usage allowed for this virtual server, but actual usage is 0 GB.
– No physical disk space has been reserved or allocated yet.
– Because the allocated disk space is equal to the actual used space (0 GB at the moment), the cloud consumer is not charged for any disk space usage
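A minimal Python sketch contrasting the two billing models (the per-GB price is hypothetical):

# Fixed-disk vs. elastic (thin) provisioning: what the consumer is billed for.
allocated_gb = 450            # three 150 GB virtual disks
used_gb = 18                  # space actually consumed so far
price_per_gb_month = 0.04     # hypothetical price per GB per month

fixed_bill = allocated_gb * price_per_gb_month     # billed on the full allocation
elastic_bill = used_gb * price_per_gb_month        # billed only on actual usage

print(round(fixed_bill, 2), round(elastic_bill, 2))   # 18.0 vs 0.72 for the month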

8. Redundant Storage Architecture
• Cloud storage devices are occasionally subject to failure and disruptions that are caused by network connectivity issues, controller or general hardware failure, or security breaches.
• The redundant storage architecture introduces a secondary duplicate cloud storage device as part of a failover system that synchronizes its data with the data in the primary cloud storage device.
• A storage service gateway diverts cloud consumer requests to the secondary device whenever the primary device fails
• This cloud architecture primarily relies on a storage replication system that keeps the primary cloud storage device synchronized with its duplicate secondary cloud storage devices


Advanced Cloud Architectures
1. Load Balanced Virtual Server Instances Architecture
2. Cloud Balancing Architecture
3. Hypervisor Clustering Architecture
4. Non-Disruptive Service Relocation Architecture
5. Zero Downtime Architecture
6. Resource Reservation Architecture
7. Dynamic Failure Detection and Recovery Architecture
8. Bare-Metal Provisioning Architecture
9. Rapid Provisioning Architecture
10. Storage Workload Management Architecture

References
1. Erl, Thomas, Zaigham Mahmood, and Ricardo Puttini. Cloud Computing: Concepts, Technology & Architecture. Prentice Hall, 2013.
2. https://www.jscape.com/blog/load-balancing-algorithms
3. https://en.wikipedia.org/wiki/Load_balancing_(computing)
4. What Is Layer 4 Load Balancing? https://www.nginx.com/resources/glossary/layer-4-load-balancing/
5. What Is Layer 7 Load Balancing? https://www.nginx.com/resources/glossary/layer-7-load-balancing/
6. (2013) A Load Balancing Model based on Cloud Partitioning for the Public Cloud, Tsinghua Science and Technology, ISSN 1007-0214, 04/12, pp. 34–39, Vol. 18, No. 1.
7. Cardellini, V., Colajanni, M., & Yu, P. S. (1999) Dynamic load balancing on web-server systems, IEEE Internet Computing, 3(3), 28–39. DOI: 10.1109/4236.769420.
8. (2014) Load Balancing in Cloud Computing, Proc. of Int. Conf. on Recent Trends in Information, Telecommunication and Computing, ITC, pp. 374–381.
9. http://www.javatpoint.com/cloud-computing-tutorial
10. https://blog.newrelic.com/2016/04/07/importance-local-load-balancing/
11. https://www.cloudflare.com/en-au/learning/cloud/what-is-a-virtual-private-cloud
