In this article, we explore the Noisy Neighbor problem and how the Throttling pattern (a cloud design pattern) can be used to resolve it.
Table of contents:
- Description of the issue: Noisy Neighbor
- Resolving the issue: Throttling Pattern
- Actions clients can take
- Actions service providers can take
- Putting everything together
Description of the issue: Noisy Neighbor
A noisy neighbor is a co-tenant on shared cloud infrastructure that monopolizes bandwidth, disk I/O, CPU, or other resources, causing other tenants' performance to suffer significantly.
Suppose we have just launched a web application, and it has grown popular. Thousands of clients send thousands of requests per second to the application's front-end web service, and everything works fine. Then, all of a sudden, one or more clients begin sending far more requests than before. This can happen for a variety of reasons: a client is a well-known web business that just experienced a surge in traffic; engineers started a load test against the service; or a rogue client is attempting to DDoS our service. All of these circumstances can contribute to a "noisy neighbor problem," which occurs when one client uses too many shared resources on a service host, such as CPU, memory, disk, or network I/O. As a result, other clients of our application encounter increased latency or a higher rate of failed requests.
Resolving the issue: Throttling Pattern
Incorporating a rate-limiting mechanism (also known as throttling) is one technique to alleviate the "noisy neighbor problem."
Throttling is a critical design pattern that helps govern the flow of data into target activities.
In the Throttling pattern, also known as Rate Limiting, a throttle is placed in front of a target service or process to regulate the rate at which data flows into the target. Throttling ensures that the flow of data delivered to a destination can be processed at a reasonable speed; if the target becomes overwhelmed, the throttle slows or even stops calls to it.
Throttling limits the number of requests a client can make within a given period of time. Whether and how to throttle an application is an architectural decision that affects the overall design of a system. Because it is difficult to retrofit throttling into a system after it has been built, it should be considered early in the application design process. Requests that exceed the limit are either rejected immediately or delayed.
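As a concrete illustration, here is a minimal sketch of one common throttling strategy, the token bucket. The class name, rates, and capacities below are illustrative, not taken from any particular library:

```python
import time

class TokenBucket:
    """Allow up to `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to the time elapsed since the last call.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1      # spend one token on this request
            return True
        return False              # over the limit: reject (or delay) the request

bucket = TokenBucket(rate=1, capacity=10)
results = [bucket.allow() for _ in range(15)]
```

With these numbers, the first 10 back-to-back calls drain the burst capacity and are accepted; the remaining calls are rejected until tokens refill at one per second.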
Actions clients can take
If available, purchase reserved capacity. For example, purchase reserved throughput when using Cosmos DB, and create separate circuits for performance-sensitive scenarios when using ExpressRoute. Throughput is, in general, the rate at which something is produced or processed; in the context of communication networks such as Ethernet or packet radio, network throughput is the rate of successful message delivery over a communication channel.
Upgrade to a single-tenant instance of the service, or to a higher service tier with better isolation assurances. For example, upgrade to the premium tier when using Service Bus, and choose a standard or premium tier cache when using Azure Cache for Redis.
Azure API Management is a private, single-tenant cloud solution that operates on the Microsoft Azure platform. This creates a safe, isolated environment with a set of resources dedicated to it, improving performance and privacy. It also provides predictable performance, allows for governance, and reduces the "noisy neighbor" problem that multi-tenant systems are known for.
Make sure your application implements client-side throttling so that it does not send excessive requests to the service.
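One simple way to throttle on the client side is to space outgoing calls so they never exceed a chosen rate, delaying rather than dropping requests. A minimal sketch, with an illustrative decorator name and placeholder service call:

```python
import time
from functools import wraps

def throttled(max_per_second: float):
    """Decorator that sleeps as needed so calls never exceed the given rate."""
    min_interval = 1.0 / max_per_second

    def decorate(func):
        last_call = [0.0]  # mutable closure state: time of the previous call

        @wraps(func)
        def wrapper(*args, **kwargs):
            wait = last_call[0] + min_interval - time.monotonic()
            if wait > 0:
                time.sleep(wait)  # delay the request instead of rejecting it
            last_call[0] = time.monotonic()
            return func(*args, **kwargs)

        return wrapper
    return decorate

@throttled(max_per_second=2)
def call_service(payload):
    # Placeholder for the real network call.
    return f"sent {payload}"
```

Calling `call_service` in a tight loop now spaces the calls at least 0.5 seconds apart, so the service never sees more than two requests per second from this client.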
Actions service providers can take
Monitor your system's resource utilization, both overall and per tenant. Configure alerts to detect spikes in resource utilization and, where possible, automate the mitigation of known issues by scaling up or down.
To keep a single tenant from overloading the system and reducing the capacity available to others, apply resource governance. This might take the form of quota enforcement, or of the Throttling or Rate Limiting patterns.
Consider provisioning more infrastructure. This may mean scaling up by upgrading parts of your solution, or scaling out by provisioning new shards if you use the Sharding pattern, or new stamps if you use the Deployment Stamps pattern.
Allow tenants to purchase pre-provisioned or reserved capacity. This gives tenants additional assurance that your solution can handle their workload effectively.
Consider the following options for smoothing out resource usage:
If you host multiple instances of your service, consider rebalancing tenants between instances or stamps. Distributing tenants with predictable and comparable usage patterns across many stamps can smooth out their consumption peaks.
Consider whether you have any non-time-sensitive background processes or resource-intensive tasks. Run them asynchronously during off-peak hours to keep your peak resource capacity available for time-sensitive work.
Consider whether the services you build on provide settings to help you deal with noisy neighbors. For example, consider applying pod resource limits when using Kubernetes, and leveraging the built-in governance features when using Service Fabric.
If relevant, consider limiting the operations that tenants can execute. For example, prevent tenants from undertaking actions that would execute very large database queries. This reduces the likelihood of tenants adopting behaviors that have a detrimental impact on other tenants.
Consider implementing a Quality of Service (QoS) system, if applicable. When you use QoS, you prioritize some operations or workloads over others. By incorporating QoS into your design and architecture, you can ensure that high-priority processes take precedence when resources are limited.
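A QoS scheme can be sketched with a priority queue, where latency-sensitive work is dequeued before background work. The task names and priority values below are illustrative:

```python
import heapq

class QosQueue:
    """Dispense high-priority work first; a lower number means a higher priority."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserves FIFO order within a priority level

    def submit(self, priority: int, task: str):
        heapq.heappush(self._heap, (priority, self._seq, task))
        self._seq += 1

    def next_task(self):
        # Pop the highest-priority (lowest-numbered) task, or None if empty.
        return heapq.heappop(self._heap)[2] if self._heap else None

q = QosQueue()
q.submit(2, "nightly report")     # background work, lowest priority
q.submit(0, "checkout request")   # latency-sensitive, highest priority
q.submit(1, "cache refresh")
order = [q.next_task(), q.next_task(), q.next_task()]
# order -> ["checkout request", "cache refresh", "nightly report"]
```

Under resource pressure, workers that pull from such a queue naturally spend their limited capacity on the high-priority operations first, which is the essence of QoS.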
Putting everything together
The Throttling pattern's primary advantage is that it allows a system to regulate both internal and external traffic that might jeopardize the system's capacity to operate in a safe and predictable way. Unanticipated bursts of call activity do occur, sometimes by mistake and sometimes by malice. The Throttling pattern softens the blow of such bursts.
Throttling on the client side limits the number of calls made to commercial services that charge based on usage. The Throttling pattern can therefore also serve as a cost-control strategy.
Using the Throttling pattern on the server side protects a system from DDoS attacks by malicious actors. Internal throttling also protects the system against unplanned bursts of activity caused by other internal processes and services. The disadvantage is that throttling may slow the system down, affecting overall performance. As a result, the Throttling pattern is frequently used alongside the Circuit Breaker pattern.
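To show how the two patterns combine, here is a minimal circuit breaker sketch that stops calling a failing target for a cooldown period. The class name, threshold, and cooldown values are illustrative:

```python
import time

class CircuitBreaker:
    """After `threshold` consecutive failures, reject calls for `cooldown` seconds."""

    def __init__(self, threshold: int = 3, cooldown: float = 30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed (calls allowed)

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open: call rejected without reaching the target")
            self.opened_at = None  # cooldown elapsed: allow a trial call
            self.failures = 0
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # any success resets the failure count
        return result
```

Where a throttle limits how fast callers may reach a healthy target, the breaker stops calls to an unhealthy one entirely, so the two complement each other in front of the same service.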
With this article at OpenGenus, you must have a complete idea of the Noisy Neighbor problem and the Throttling pattern.