A consumer's reliance on several unrelated cloud services can introduce restrictions and create difficulties. To address this issue, this paper proposes implementing a failover mechanism and a state management database. The solution makes the cloud service consumer less vulnerable to potential setbacks in the system.
In recent years, the popularity of cloud-based services has resulted in their widespread adoption by both public and private entities. This has led to a situation where some cloud consumers rely on several services. While being potentially advantageous, such a setup can also introduce restrictions and create difficulties. The following paper describes a solution for a cloud service consumer dealing with two services of varying levels of quality.
In order to proceed to the development of a viable solution, it is first necessary to identify the issues responsible for the disruption of performance. According to the case, cloud consumer A interacts with two separate and unrelated cloud services. The first one is hosted on a cloud owned by a single organization. However, it is accessed by a large number of cloud services, and its data sources are shared by several departments in the company.
Thus, in order to maintain the desired level of performance, the cloud service enforces an access limit of one request per day. The second service is situated on a community cloud shared by several organizations. Due to the absence of a centralized governing body and the poor quality of service, cloud B is much more prone to failure. As a result, requests made to the second cloud require more time to process. In addition, the availability rating of the second service is 84.53%, which is considerably lower than the industry standard of 99% (Zheng, Wu, Zhang, Lyu, & Wang, 2013).
The process of retrieving the necessary data by consumer A consists of two sequential steps. First, the data is requested from service X hosted on cloud A. Second, it is sent to service Y hosted on cloud B, where it is processed, with the outcome indicated by a success or failure response. The order cannot be reversed because the request to service Y depends on the data retrieved from service X. Thus, two major potential risks can be identified. First, it is possible that a request to service Y returns an error response as a result of its low availability. Second, in a situation where the data obtained from service X loses its value by the time of the negative response, it would be impossible to repeat the process due to the access restrictions of cloud A.
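The interaction of the two risks can be illustrated with a minimal sketch. All names here (`service_x`, `service_y`, `AccessLimitError`) are hypothetical; the once-per-day quota and the 84.53% availability figure come from the case description, with cloud B's failures simulated by a random draw:

```python
import random

class AccessLimitError(Exception):
    """Raised when the once-per-day quota on cloud A is exhausted."""

def make_service_x(daily_limit=1):
    # Service X on cloud A: enforces the one-request-per-day limit.
    calls = {"count": 0}
    def service_x():
        if calls["count"] >= daily_limit:
            raise AccessLimitError("cloud A quota exhausted")
        calls["count"] += 1
        return {"payload": "data from X"}
    return service_x

def service_y(data, availability=0.8453):
    # Service Y on cloud B: may fail due to its low availability.
    if random.random() > availability:
        return {"status": "failure"}
    return {"status": "success", "processed": data["payload"].upper()}

def retrieve(service_x):
    data = service_x()       # step 1: fetch from cloud A
    return service_y(data)   # step 2: process on cloud B

svc_x = make_service_x()
first = retrieve(svc_x)      # may succeed or fail, depending on cloud B
# If the first attempt failed, a retry is blocked by cloud A's quota:
try:
    retrieve(svc_x)
except AccessLimitError:
    pass  # the stale result cannot be refreshed until the next day
```

The sketch shows why the two problems compound: a failure on cloud B cannot simply be retried, because cloud A's quota has already been consumed.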
The solution to the issue at hand should consist of two components. First, a failover system should be implemented on cloud B. A failover system is a safety measure that is used to increase the reliability and responsiveness of cloud-based resources by establishing redundant implementations of services (Chang, Tsai, & Chen, 2014). The approach is typically used in situations where a single point of failure can disrupt the entire process.
The core principle of a failover system is the establishment of redundant instances of a service that can be utilized upon detection of errors or a decline in availability. In the case of cloud B, which is used by several organizations, an active-active setup is recommended. Thus, several instances of an IT resource should be implemented in the cloud. The instances are expected to work in synchronicity, receiving a proportion of workload from a load balancer. In a scenario where one of the instances stops responding, the failure is detected by a monitoring mechanism. Upon receiving the information, the load balancer removes the faulty instance from the schedule, and the tasks are assigned to the instances that remain operational.
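The active-active arrangement described above can be sketched as follows. The class names and the round-robin scheduling policy are illustrative assumptions, not part of the case; the point is only the interplay between the monitor (the health check) and the load balancer removing a faulty instance from the schedule:

```python
class Instance:
    """One redundant implementation of service Y on cloud B."""
    def __init__(self, name):
        self.name = name
        self.healthy = True  # flipped by the monitoring mechanism

    def handle(self, request):
        return f"{self.name} processed {request}"

class LoadBalancer:
    """Distributes workload across healthy instances (round-robin)."""
    def __init__(self, instances):
        self.instances = list(instances)
        self._cursor = 0

    def dispatch(self, request):
        # Monitoring step: drop instances flagged as failed, so they
        # are removed from the schedule before work is assigned.
        self.instances = [i for i in self.instances if i.healthy]
        if not self.instances:
            raise RuntimeError("no healthy instances remain")
        inst = self.instances[self._cursor % len(self.instances)]
        self._cursor += 1
        return inst.handle(request)

lb = LoadBalancer([Instance("y1"), Instance("y2"), Instance("y3")])
lb.dispatch("req-1")              # served by one of the three instances
lb.instances[1].healthy = False   # monitor detects a failed instance
lb.dispatch("req-2")              # served by a remaining healthy instance
```

Because all instances are active and share the load, removing a failed one degrades capacity but not availability, which is the property the case requires of cloud B.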
The second part of the solution is the implementation of a state management database on cloud A. A state management database is a technology used for the temporary storage of the state of selected processes. Traditionally, application states are cached in system memory, which is a superior approach in terms of access speed. However, long-running activities that typically consume a significant amount of resources may require an occasional offload of data to increase the scalability of the service (Schulte, Janiesch, Venugopal, Weber, & Hoenisch, 2015).
In the case at hand, the database can be configured to store the data retrieved from the service on cloud A after it is delivered back to the consumer. The proposed mechanism is as follows: the cloud service keeps its state offloaded until a request from the consumer is received, at which point the state is loaded into run-time memory. The service then responds by sending back the requested data, at which point its task is completed, and the state is offloaded back to storage.
Alternatively, the process can remain partially active throughout the day and offload only the part containing the state of the system during the response. In both cases, it becomes possible for the cloud service consumer to obtain the necessary information from storage rather than the cloud, thus removing the access restriction.
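A minimal sketch of the offload/load cycle, using SQLite as a stand-in for the state management database (the `StateStore` name, the schema, and the JSON encoding are all illustrative assumptions):

```python
import json
import sqlite3

class StateStore:
    """Durable storage for offloaded service state on cloud A."""
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS state (key TEXT PRIMARY KEY, value TEXT)")

    def offload(self, key, state):
        # Persist the state so run-time memory can be released.
        self.db.execute("INSERT OR REPLACE INTO state VALUES (?, ?)",
                        (key, json.dumps(state)))
        self.db.commit()

    def load(self, key):
        # Bring the state back into run-time memory on request.
        row = self.db.execute(
            "SELECT value FROM state WHERE key = ?", (key,)).fetchone()
        return json.loads(row[0]) if row else None

store = StateStore()
response = {"payload": "data from X"}    # result of the daily request
store.offload("consumer-A", response)    # state offloaded after delivery
# Later the same day, the consumer re-reads from storage, not the cloud,
# so cloud A's one-request-per-day limit is never triggered again:
cached = store.load("consumer-A")
```

Subsequent reads hit the database rather than service X, which is what removes the practical effect of the access restriction.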
As can be seen, the described solution addresses both areas of concern. The failover mechanism implemented on cloud B can greatly reduce the risk of a failed request, whereas the state management database eliminates the practical effect of cloud A's access restrictions. As a result, the cloud service consumer will be less prone to potential setbacks in the system.
Chang, B. R., Tsai, H. F., & Chen, C. M. (2014). High-performed virtualization services for in-cloud enterprise resource planning system. Journal of Information Hiding and Multimedia Signal Processing, 5(4), 614-624.
Schulte, S., Janiesch, C., Venugopal, S., Weber, I., & Hoenisch, P. (2015). Elastic business process management: State of the art and open challenges for BPM in the cloud. Future Generation Computer Systems, 46, 36-50.
Zheng, Z., Wu, X., Zhang, Y., Lyu, M. R., & Wang, J. (2013). QoS ranking prediction for cloud services. IEEE Transactions on Parallel and Distributed Systems, 24(6), 1213-1222.