Load Balanced Virtual Server Instances Architecture Essay


Abstract

Mechanisms such as load balancers and failover system monitors are used in the cloud to minimize vulnerabilities of cloud-based services. In addition, they contribute to the scaling capacity of cloud environments. With these mechanisms in place, it is possible to achieve seamless performance and reliable data management.

Introduction

Cloud-based services share several vulnerabilities with their physical counterparts. To minimize the risks, mechanisms such as load balancers and failover system monitors are used. The following paper describes the architectures used in a solution developed by Innovartus for hosting a Role Player service.

Summary

Innovartus is a service provider aiming to enter the market with a product called the Role Player cloud service. The service is expected to leverage the opportunities offered by a cloud-balancing architecture. The company leases two cloud-based environments. The specifications of the clouds are not provided; however, it is known that one of the clouds is hosted regionally whereas the other is operated by a global provider.

The decision is made to implement multiple redundant instances of the Role Player cloud service on both clouds and to give service consumers access through a load balancer. The load balancer is deployed on the global provider’s cloud environment. Each cloud environment is also equipped with a separate automated scaling listener instance that allocates resources according to the volume of incoming requests. Finally, two separate failover system monitors are used to detect failed instances and route requests to functioning service implementations.
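As a rough illustration, the deployment described above can be sketched in Python as a simple configuration; all identifiers and the number of instances per cloud are assumptions introduced for illustration rather than details taken from the case.

    # Illustrative sketch of the topology described above; all names and
    # instance counts are hypothetical assumptions.
    ROLE_PLAYER_TOPOLOGY = {
        "entry_point": "lb-global",  # consumer requests enter via the load balancer
        "clouds": {
            "regional-cloud": {
                "scaling_listener": "listener-regional",
                "failover_monitor": "monitor-regional",
                "service_instances": ["role-player-r1", "role-player-r2"],
            },
            "global-cloud": {
                "load_balancer": "lb-global",  # deployed on the global provider's cloud
                "scaling_listener": "listener-global",
                "failover_monitor": "monitor-global",
                "service_instances": ["role-player-g1", "role-player-g2"],
            },
        },
    }

    for cloud, spec in ROLE_PLAYER_TOPOLOGY["clouds"].items():
        print(cloud, "hosts", len(spec["service_instances"]), "redundant instances")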

Architecture Description

The first main component of the solution developed by Innovartus is the ready-made environment used to operate the Role Player service. A ready-made environment is a collection of components that enables cloud consumers to develop, deploy, and manage services. In the case of Innovartus, each cloud has a separate ready-made environment responsible for the deployment of replicated instances of the service.

The second architecture component is a load balancer. This element is a specific implementation of a workload distribution architecture. Its main purpose is to distribute requests across redundant implementations of a cloud service, thus ensuring its scalability (Hsiao, Chung, Shen, & Chao, 2013). In the case at hand, the requests from consumers are routed through a load-balancing service agent deployed on the global cloud environment. The agent manages the load by routing some of the requests to the second cloud. In addition, it triggers the replication of the service when the operational capacity of the existing instances is considered insufficient.
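A minimal sketch of how such a load-balancing agent might behave is given below. The least-loaded routing rule, the capacity metric, and the replication threshold are assumptions chosen for illustration; the case description does not specify the algorithm used.

    # Minimal load-balancing agent sketch (assumed behaviour): routes each
    # request to the least-loaded instance across both clouds and replicates
    # the service when overall capacity looks insufficient.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ServiceInstance:
        name: str
        cloud: str
        capacity: int            # maximum concurrent requests (assumed metric)
        active_requests: int = 0

        def utilization(self) -> float:
            return self.active_requests / self.capacity

    @dataclass
    class LoadBalancer:
        instances: List[ServiceInstance] = field(default_factory=list)
        replicate_above: float = 0.8   # assumed replication threshold

        def route(self, request_id: str) -> ServiceInstance:
            # Pick the instance with the lowest utilization, on either cloud.
            target = min(self.instances, key=lambda i: i.utilization())
            target.active_requests += 1
            if all(i.utilization() > self.replicate_above for i in self.instances):
                self.replicate(target.cloud)
            return target

        def replicate(self, cloud: str) -> None:
            # Add another redundant instance on the given cloud.
            name = f"role-player-{cloud}-{len(self.instances) + 1}"
            self.instances.append(ServiceInstance(name, cloud, capacity=100))

    lb = LoadBalancer([
        ServiceInstance("role-player-r1", "regional", capacity=100),
        ServiceInstance("role-player-g1", "global", capacity=100),
    ])
    print(lb.route("req-001").name)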

The third key element of the architecture is an automated scaling listener. This component is responsible for managing fluctuating workload volumes. Scaling listeners operate in two dimensions: vertical scaling, in which the properties of a single IT resource are adjusted (e.g., by allocating additional resources to it), and horizontal scaling, in which requests are re-routed to available duplicated services (Lorido-Botran, Miguel-Alonso, & Lozano, 2014).

In the case of Innovartus, each cloud environment is equipped with a separate listener. Service consumer requests arrive at the listener agent after being distributed by the load balancer’s algorithm. The listeners then measure the workload and allocate resources accordingly. Considering the presence of several redundant instances of the service, it is reasonable to assume that both automated scaling listeners perform horizontal scaling.
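The difference between the two scaling dimensions, and the horizontal behaviour attributed to the Innovartus listeners, can be sketched as follows; the thresholds and resource figures are illustrative assumptions rather than details from the case.

    # Sketch of an automated scaling listener (assumed behaviour).
    # Vertical scaling adjusts a single instance's resources; horizontal
    # scaling adds duplicated instances and re-routes requests to them.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Instance:
        name: str
        cpu_units: int
        load: float  # fraction of capacity currently in use

    class ScalingListener:
        def __init__(self, instances: List[Instance], mode: str = "horizontal"):
            self.instances = instances
            self.mode = mode  # Innovartus most likely relies on "horizontal"

        def check(self) -> None:
            avg_load = sum(i.load for i in self.instances) / len(self.instances)
            if avg_load <= 0.75:          # assumed scale-out threshold
                return
            if self.mode == "vertical":
                # Allocate additional resources to the busiest instance.
                busiest = max(self.instances, key=lambda i: i.load)
                busiest.cpu_units *= 2
            else:
                # Horizontal: add another duplicated instance to share requests.
                self.instances.append(
                    Instance(f"role-player-{len(self.instances) + 1}", cpu_units=4, load=0.0)
                )

    listener = ScalingListener([Instance("role-player-1", 4, 0.9)])
    listener.check()
    print([i.name for i in listener.instances])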

The fourth and final element of the architecture used for the solution is the failover system monitor. The primary purpose of a failover system is to increase the reliability of a service by detecting failures in service implementations and taking measures to mitigate their consequences. Two types of failover configurations are used in cloud-based services. The first is an active-active configuration, which requires several instances of a service to be deployed on the cloud (Minhas et al., 2013). Once the monitor detects a malfunction in one of the instances, it communicates this information to the load-balancing agent, which stops routing requests to the failed instance until its functionality is restored.
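A simplified sketch of an active-active failover monitor cooperating with a load-balancing agent is shown below; the health-report mechanism and the method names are assumptions made for illustration.

    # Sketch of an active-active failover system monitor (assumed design):
    # every instance serves traffic; when one fails a health check, the
    # monitor tells the load balancer to stop routing to it until it recovers.
    from typing import Dict, List, Set

    class LoadBalancerAgent:
        def __init__(self, instances: List[str]):
            self.active: Set[str] = set(instances)

        def suspend(self, instance: str) -> None:
            self.active.discard(instance)

        def restore(self, instance: str) -> None:
            self.active.add(instance)

    class ActiveActiveMonitor:
        def __init__(self, balancer: LoadBalancerAgent):
            self.balancer = balancer

        def on_health_report(self, report: Dict[str, bool]) -> None:
            # report maps instance name -> whether its last heartbeat succeeded
            for instance, healthy in report.items():
                if healthy:
                    self.balancer.restore(instance)
                else:
                    self.balancer.suspend(instance)

    lb = LoadBalancerAgent(["role-player-1", "role-player-2", "role-player-3"])
    monitor = ActiveActiveMonitor(lb)
    monitor.on_health_report({"role-player-2": False})
    print(sorted(lb.active))   # role-player-2 no longer receives requests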

The second type is an active-passive configuration, which also relies on redundant instances of a service. In this case, all traffic is routed to a single service implementation monitored by the failover system. Once that instance starts malfunctioning, the load balancer redirects the traffic to the available redundant instance. Upon its restoration, the first instance remains on standby as the redundant copy. In the case at hand, both clouds host several service instances that receive traffic simultaneously. Therefore, it is likely that an active-active failover configuration is used by Innovartus.
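For contrast, the active-passive behaviour can be sketched as follows; the class and method names are illustrative assumptions rather than details of the Innovartus solution, which, as argued above, more plausibly uses the active-active configuration.

    # Contrast sketch: active-passive failover (assumed design). Only the
    # primary instance receives traffic; on failure the standby takes over,
    # and the repaired instance returns as the new standby.
    class ActivePassiveFailover:
        def __init__(self, primary: str, standby: str):
            self.primary = primary
            self.standby = standby

        def route(self, request_id: str) -> str:
            # All traffic goes to the single monitored primary instance.
            return self.primary

        def on_primary_failure(self) -> None:
            # Redirect traffic to the redundant instance.
            self.primary, self.standby = self.standby, self.primary

        def on_instance_restored(self, instance: str) -> None:
            # The restored instance waits in standby as the redundant copy.
            self.standby = instance

    fo = ActivePassiveFailover(primary="role-player-1", standby="role-player-2")
    fo.on_primary_failure()
    print(fo.route("req-042"))   # now served by role-player-2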

Commercial Vendor Products

Load balancers are provided by several commercial vendors both as stand-alone solutions and as elements of enterprise-scale products. An example of the former is Kemp Technologies, whose load balancer features several enhancements and incorporates additional functions such as content management, security monitoring, and access control. An example of the latter is Nutanix Enterprise Cloud, whose enterprise solutions include both load balancers and failover systems.

Conclusion

Service availability is a major quality concern. From this perspective, the use of load balancers and failover system monitors in the system designed by Innovartus contributes significantly to a positive consumer experience. With these mechanisms in place, it is possible to achieve seamless performance and reliable data management.

References

Hsiao, H. C., Chung, H. Y., Shen, H., & Chao, Y. C. (2013). Load rebalancing for distributed file systems in clouds. IEEE Transactions on Parallel and Distributed Systems, 24(5), 951-962.

Lorido-Botran, T., Miguel-Alonso, J., & Lozano, J. A. (2014). A review of auto-scaling techniques for elastic applications in cloud environments. Journal of Grid Computing, 12(4), 559-592.

Minhas, U. F., Rajagopalan, S., Cully, B., Aboulnaga, A., Salem, K., & Warfield, A. (2013). Transparent high availability for database systems. The VLDB Journal, 22(1), 29-45.
