In this article, we cover system design: the process of defining a system's components, architecture, and interfaces to satisfy end-user needs. Reliability, Scalability, Availability, and Maintainability are the key factors to weigh when making design decisions.
Table of contents:
- System Design: Reliability
- System Design: Availability
- System Design: Scalability
- Load Balancer
- Caching for Systems
- MapReduce for Distributed Memory
- Components of a System
- Design Considerations for a Web App System
- Front-end architecture
- What is the purpose of system design?
System Design: Reliability
A system is considered reliable when it can fulfill the needs of the end user. While creating a system, you intend it to provide a certain set of features and services; if it can perform all of those functions without failing, you may consider it reliable.
A system that can continue to operate correctly in the presence of problems is called a fault-tolerant system. A fault is an error that occurs in a specific component of the system; the occurrence of a fault does not guarantee that the system as a whole will fail.
A failure occurs when the system is unable to operate as intended and can no longer deliver certain services to end users.
Although Design for Reliability (DFR) is not a new concept, it has gained a lot of traction in recent years.
Weibull Analysis and Life Data Analysis are terms familiar to every reliability specialist, and for many people these methodologies have become almost synonymous with achieving high reliability. In reality, while life data analysis is a crucial part of the puzzle, it is not enough on its own to produce reliable products. A strong reliability program and the development of reliable products require a multitude of actions: strategic vision, good planning, adequate organizational resource allocation, and the integration and institutionalization of reliability practices into development projects are all required to meet the organization's reliability goals.
Design for Reliability, on the other hand, is more detailed than these broad concepts. It is, in fact, a procedure. DFR, in particular, refers to the entire set of tools that support product and process design (typically from early in the concept stage to product obsolescence) in order to ensure that customer expectations for reliability are fully met throughout the product's life cycle while maintaining low overall life-cycle costs. To put it another way, DFR is a systematic, streamlined, concurrent engineering approach that incorporates reliability engineering throughout the whole development cycle. It makes use of a variety of reliability engineering tools, as well as a thorough grasp of when and how to apply them throughout the design cycle. This process explains the entire deployment order that a company must follow in order to design reliability into its products. It includes a number of tools and procedures.
What is the significance of DFR?
Why should a company invest in deploying a DFR process? The answer is straightforward: warranty costs and customer satisfaction. Field failures are extremely expensive. One widely publicized example is the Xbox hardware failures, which cost Microsoft more than a billion dollars in warranty payments.
Managing the Design of a Concurrent System
Companies and government organizations have adopted concurrent engineering, also known as simultaneous engineering, as a technique of increasing production and lowering costs. Several teams inside an organization work on various parts of design and development at the same time.
A concurrent system's success is dependent on well-designed hardware, adaptable software that manages the hardware, and a well-defined marketing strategy. Hardware and software must have flexible architectures in order to react to changing marketing requirements.
The Concurrent Design approach aims to achieve the following goals:
Reducing the time it takes to build a product. Because the teams communicate constantly, members influence each other's work early. Without good communication, misunderstandings arise that must be corrected later in the development process, and these corrections (change loops or engineering change orders) take a long time. Effective communication therefore reduces development lead time (time-to-market).
Quality has improved. A higher quality product or service is achieved since the team members consider all factors at the same time.
Cost-cutting. Because the development process takes less time, costs are lower.
It reduces the time it takes to design a product as well as the time it takes to get it to market, resulting in increased production and lower costs.
System Design: Availability
Availability is a feature of a system that seeks to maintain an agreed-upon level of operational performance, often known as uptime. In order to service the user's requests, a system must offer high availability.
The required degree of availability differs from system to system. If you are creating a social media application, extreme availability is not a strict requirement: a few seconds of delay is tolerable, and viewing your favorite celebrity's Instagram post 5 to 10 seconds late is not a problem. However, if you are developing a system for hospitals, data centers, or banking, you must guarantee that it is constantly available, because a service interruption could result in significant loss.
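Availability is commonly quoted as a percentage of uptime. A quick Python sketch (with illustrative numbers) shows how it is computed and how little downtime a "five nines" target allows:

```python
def availability(uptime_seconds: float, downtime_seconds: float) -> float:
    """Availability as the fraction of total time the system was up."""
    total = uptime_seconds + downtime_seconds
    return uptime_seconds / total

# "Five nines" (99.999%) availability allows about 5.26 minutes of downtime per year.
seconds_per_year = 365 * 24 * 3600
allowed_downtime_min = seconds_per_year * (1 - 0.99999) / 60
print(round(allowed_downtime_min, 2))  # 5.26
```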
To ensure the availability of your system, follow these principles:
A single point of failure in your system should not exist. In general, your system shouldn't rely on a single service to handle all of its requests. Because if that service goes down, your entire system might be endangered, and you could lose access to it.
Detect failures as they occur, and fix them promptly.
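The two principles above can be sketched in Python: a client that health-checks a list of replicas and fails over past any that are down. The replica names and the `is_healthy` callable are illustrative stand-ins, not a real API:

```python
def pick_healthy_server(servers, is_healthy):
    """Return the first server that passes a health check, skipping failed ones.

    `servers` is a list of replica addresses; `is_healthy` is a callable
    (e.g. an HTTP ping) -- both are hypothetical stand-ins for illustration.
    """
    for server in servers:
        if is_healthy(server):
            return server
    raise RuntimeError("no healthy replica available")

# Example: replica-1 is down, so traffic fails over to replica-2.
down = {"replica-1"}
server = pick_healthy_server(["replica-1", "replica-2"], lambda s: s not in down)
print(server)  # replica-2
```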
System Design: Scalability
The capacity of a system to cope with rising demand is referred to as scalability. When designing the system, keep in mind the load it will be subjected to. If you have to build a system for load X, it is recommended that you design it for 10X and test it for 100X.
There may be times when your system is subjected to an increased load. If you are developing an e-commerce application, you might expect a surge in traffic during a Flash Sale or when a new product is released for sale. In that situation, your system should be intelligent enough to handle the increased load effectively, making it scalable.
To ensure scalability, you should be able to calculate the load that your system will face. The Load on the System is described by several factors:
- The number of requests that your system receives and processes each day.
- The total number of database calls performed by your system.
- The number of Cache Hit or Miss queries sent to your system.
- The number of users currently logged in to your system.
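The first factor lends itself to a back-of-envelope estimate. This sketch assumes a hypothetical 100 million requests per day and a 2x peak-traffic factor:

```python
def peak_requests_per_second(requests_per_day: int, peak_factor: float = 2.0) -> float:
    """Average requests per second from a daily total, scaled by a peak factor."""
    average = requests_per_day / 86_400  # seconds in a day
    return average * peak_factor

# 100 million requests/day averages ~1157 RPS; a 2x peak means ~2315 RPS.
print(round(peak_requests_per_second(100_000_000)))  # 2315
```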
It is challenging to forecast the future growth of a website, and we cannot anticipate how long a website will keep its popularity and growth. As a result, we must dynamically scale our database, and database sharding is the approach that can do so.
When a single database server design is used, any application's performance degrades as the number of users increases. The network bandwidth begins to saturate, and read and write requests grow slower. You'll eventually run out of space on your hard drive. By spreading the data over numerous computers, database sharding solves all of these problems.
If there is an outage in a sharded architecture, just a subset of the shards will be unavailable. All of the other shards will continue to operate normally, and the full program will remain accessible to users.
A query in a sharded database must traverse fewer rows, resulting in a faster response time.
Scaling out, or horizontal scaling, is made easier by sharding a database. Horizontal scaling involves adding more machines to the network and distributing the load among these units to allow for faster processing and response. This has several advantages.
Because your system has parallel routes, you can do more work at the same time and handle more user requests, especially when writing data. You may also load balance web servers that access shards via distinct network channels, which are handled by different CPUs, and process work using separate RAM caches or disk IO paths.
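As a minimal sketch of how a sharded system might route a key to a shard, the example below uses simple hash-modulo sharding. Note that changing the shard count remaps most keys under this scheme, which is one reason consistent hashing is often preferred in practice:

```python
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    """Route a key to a shard by hashing it (simple modulo sharding)."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

# Each key lands deterministically on one of 4 shards.
print(shard_for("user:42", 4))
```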
Load Balancer
If many servers are available, each incoming request must be routed to one of them, and we must ensure that each server receives a roughly equal number of requests: the load must be spread uniformly across all servers.
The Load Balancer is the component in charge of spreading these incoming requests evenly among the servers. A load balancer is a layer that sits between the user's requests and the system's many servers.
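Round-robin is one of the simplest strategies a load balancer can use to spread requests evenly. This is a minimal sketch with hypothetical server names, not any particular product's implementation:

```python
import itertools

class RoundRobinBalancer:
    """Distribute incoming requests evenly across servers, one at a time."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)  # endless rotation over the pool

    def next_server(self):
        """Return the server that should handle the next request."""
        return next(self._cycle)

lb = RoundRobinBalancer(["app-1", "app-2", "app-3"])
print([lb.next_server() for _ in range(4)])  # ['app-1', 'app-2', 'app-3', 'app-1']
```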
Where Do Load Balancers Normally Go?
- Between the client application/user and the server
- Between the application and cache servers
- Between the database servers and the cache servers
Load Balancers: Different Types
Load balancing can be accomplished in three ways:
- Client-side software load balancers
As the name implies, the client application holds all the load-balancing logic (e.g., a mobile phone app). The client is given a list of web/application servers to communicate with; it selects the first one on the list and requests data from it. If a persistent failure occurs (after a specified number of retries) and the server becomes unavailable, the client discards that server and selects another from the list. This is one of the most cost-effective approaches to load balancing.
- Load Balancers in Software for Services
These load balancers are software components that accept a set of requests and route them according to defined criteria. They offer far more versatility because they can be installed on any standard device (e.g., a Windows or Linux machine). They are also less expensive since, unlike hardware load balancers, there is no physical device to buy or maintain. You can use an off-the-shelf software load balancer or write your own bespoke software (for example, to load balance Active Directory queries in Microsoft Office 365).
- Load Balancers in Hardware
Here, as the name implies, a physical appliance distributes traffic among a cluster of network servers. These hardware load balancers (HLBs), often referred to as Layer 4-7 routers, can handle all kinds of HTTP, HTTPS, TCP, and UDP traffic. To the outside world, an HLB presents a virtual server address; when a client application sends a request, the HLB uses bi-directional network address translation (NAT) to route the connection to the most appropriate real server. HLBs can manage large volumes of traffic, but they come at a high cost and offer limited flexibility.
HLBs continually perform health checks on each server to ensure it is operating correctly; if a server fails to give the appropriate response, the HLB promptly stops sending it traffic. Because hardware load balancers are costly to purchase and set up, many service providers employ them only as the first point of entry for user requests, then redirect traffic behind the infrastructure wall to internal software load balancers.
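The client-side strategy described earlier (try a server, retry on failure, discard it after persistent failure) might look like the sketch below. The `send` function and server names are illustrative stand-ins, not a real client library:

```python
def request_with_failover(servers, send, max_retries=3):
    """Try each server in order; drop one after `max_retries` failed attempts.

    `send` is a hypothetical stand-in for the client's network call; it
    should return a response or raise ConnectionError on failure.
    """
    remaining = list(servers)
    while remaining:
        server = remaining[0]
        for _ in range(max_retries):
            try:
                return send(server)
            except ConnectionError:
                continue  # transient failure: retry the same server
        remaining.pop(0)  # persistent failure: discard and try the next server
    raise RuntimeError("all servers unavailable")

# Example: server-a always fails, so the client falls back to server-b.
def fake_send(server):
    if server == "server-a":
        raise ConnectionError
    return f"ok from {server}"

print(request_with_failover(["server-a", "server-b"], fake_send))  # ok from server-b
```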
Caching for Systems
Caching is one of the most important aspects of system design; practically every system employs it in some form. It is a technique for improving the performance of a system.
A cache is a high-speed data storage layer that stores a subset of data, typically transient, so that future requests for that data are served faster than from the data's primary storage location. Caching allows you to efficiently reuse previously retrieved or computed data.
Caching is an important part of any system's performance: it ensures low latency and high throughput. Caches can be kept at any level of the system, but placing them near the front end helps return requested data quickly. Caching improves efficiency, but it comes with drawbacks: the staleness of cached data must be considered when constructing a system.
Caching is a simple way to increase speed when data is essentially static; a cache can be difficult to deploy for frequently modified content. Caching also lets the system make better use of its resources.
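As an illustration, an in-memory cache with least-recently-used (LRU) eviction is a common way to implement the caches described below. This is a minimal sketch with hypothetical keys, not any particular system's implementation:

```python
from collections import OrderedDict

class LRUCache:
    """Tiny in-memory cache: serves repeat requests without hitting disk/DB."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None  # cache miss: caller fetches from primary storage
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict the least recently used entry

cache = LRUCache(capacity=2)
cache.put("/user/1", {"name": "Ada"})
print(cache.get("/user/1"))  # {'name': 'Ada'} -- served from memory
print(cache.get("/user/2"))  # None -- miss: fetch from disk, then cache it
```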
- Cache on the Application Server
Suppose a web application has a single web server node. An in-memory cache can be implemented alongside the application server: the response to a user's request is saved in this cache and returned from there whenever the same request arrives again. For a new request, data is fetched from disk, returned, and then placed in the cache for subsequent requests. Placing the cache on the request-layer node enables local storage of response data.
- Cache that is distributed
Each node in a distributed cache holds a portion of the total cache space, and each request is routed to the appropriate node using consistent hashing.
- Global Cache
You'll have a single cache area, as the name implies, and all nodes will utilize it. This single cache space will receive all requests.
- CDN (Content Delivery Network): a network of edge servers that caches content geographically close to users, so static assets are served from the nearest edge location instead of the origin server.
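The consistent hashing used by the distributed cache above can be sketched as a hash ring with virtual nodes; adding or removing a node then remaps only a small fraction of keys, unlike modulo hashing. The node names here are hypothetical:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Map keys to cache nodes so that adding/removing a node only remaps
    a small fraction of keys (unlike modulo hashing)."""

    def __init__(self, nodes, replicas=100):
        self._ring = []  # sorted (hash, node) points on the ring
        for node in nodes:
            for i in range(replicas):  # virtual nodes smooth the distribution
                self._ring.append((self._hash(f"{node}#{i}"), node))
        self._ring.sort()
        self._hashes = [h for h, _ in self._ring]

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        """Walk clockwise from the key's hash to the next node on the ring."""
        idx = bisect.bisect(self._hashes, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["cache-1", "cache-2", "cache-3"])
# The same key always routes to the same node:
print(ring.node_for("user:42") == ring.node_for("user:42"))  # True
```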
MapReduce for Distributed Memory
The quantity of data created by humans is quickly increasing every year as a result of the introduction of new technology, gadgets, and communication channels such as social networking sites.
The Traditional Enterprise Approach
In this approach, an enterprise uses a single powerful computer to store and process massive amounts of data, with storage handled by database vendors such as Oracle or IBM. The user interacts with the application, which in turn takes care of data storage and processing.
This strategy is suitable for applications whose data volume fits within what typical database servers can support, up to the limits of the processor. When dealing with huge volumes of scalable data, however, pushing everything through a single database server becomes a bottleneck.
Google used the MapReduce algorithm to address this challenge. This method breaks the work down into little chunks, distributes them to a number of computers, and gathers the results, which are then combined to make the result dataset.
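The classic word-count example illustrates the MapReduce idea in plain Python: map emits (word, 1) pairs for each input chunk, and reduce sums the counts per word. In a real cluster the chunks would be processed on different machines; here two "workers" are simulated in one process:

```python
from collections import defaultdict

def map_phase(document: str):
    """Map: emit a (word, 1) pair for every word in a chunk of input."""
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    """Reduce: sum the counts for each word across all mapped chunks."""
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

# Two chunks, as if split across two worker machines, then combined.
chunks = ["the cat sat", "the dog sat"]
mapped = [pair for chunk in chunks for pair in map_phase(chunk)]
print(reduce_phase(mapped))  # {'the': 2, 'cat': 1, 'sat': 2, 'dog': 1}
```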
Components of a System
A design system is a set of reusable components that may be built together to create any number of applications, governed by defined rules.
Below is a list of some of the most commonly used components.
The Avatar component displays a user's profile photo, initials, or a fallback icon to represent them.
The Button component is used to conduct a specific action or event, such as submitting a form, starting a dialog, canceling an activity, or deleting something.
A Card is a content container that is both versatile and extendable. It has header and footer settings, as well as a lot of information, contextual background colors, and a lot of display possibilities.
Breadcrumbs (also known as a 'breadcrumb trail') are a type of navigation that consists of a list of links organized in either a hierarchical or chronological order. Their goal is to assist visitors in keeping track of their location by displaying the current page's location in one of the following contexts:
- the current page's categories;
- the site's structural hierarchy of pages.
A dynamic table is a table that shows rows of data with built-in pagination, sorting, and reordering capabilities.
A checkbox is a type of input control that allows the user to choose one or more alternatives from a list of possibilities.
A toggle is used to display or switch between enabled and disabled states.
Users can utilize search bars to find relevant material on your website or application. Most sites, especially those with a lot of material, have search bars as a standard feature.
In this design system, search bars use placeholder text containing the word "Search".
A progress indicator gives a visual indication of how far an activity has progressed.
Design Considerations for a Web App System
The following factors must be considered while building a system design for a web application.
- Security (XSS, CORS, Clickjacking, etc.)
- Using a CDN. A content delivery network (CDN) is a network of geographically dispersed servers that deliver websites and other web content based on the user's location, the origin of the page, and the location of a content delivery server. The closer the CDN server is to the user geographically, the faster the content is delivered. CDNs speed up content delivery for high-traffic sites and sites with global reach, and help absorb large spikes in traffic.
- Support for offline users / progressive enhancement
- Asynchronous loading of assets (lazy-loading items)
- Minimizing network requests (HTTP/2, bundling/sprites, etc.)
- Server-side rendering
It's critical that all of our features are global-ready. At the most basic level, this means every feature shares the same qualities:
- Unicode support, including surrogate pairs
- Locale and culture awareness
- Support for worldwide standards as needed
- Support for different input methods, including Input Method Editors (IMEs)
- Complex-script awareness, including mirroring
- Font independence (fonts may be customized per language, with font fallback support, etc.)
- Pluggable (MUI-aware)
Front-end architecture
Front-end architecture is a set of tools and methods aimed at improving the quality of our front-end code while enabling a more efficient and sustainable workflow. A front-end developer's audience is the website user, whereas a front-end architect's audience is the developers themselves.
Front-end Architecture Working Components
- JS frameworks/organization/performance optimization techniques
- Asset Delivery — Front-end Ops
- Onboarding Docs
- Styleguide/Pattern Library
- Architecture Diagrams (code flow, tool chain)
- CSS/Sass Code standards and organization
- Object-Oriented approach (how objects break down and get put back together)
- Git Workflow
- Dependency Management (npm, Bundler, Bower)
- Build Systems (Grunt/Gulp)
- Deploy Process
- Continuous Integration (Travis CI, Jenkins)
- Performance Testing
- Visual Regression
- Unit Testing
- End-to-End Testing
What is the purpose of system design?
We need system design because we want the entire system to be built in such a way that it can:
- Scale easily: add a new machine or increase the capacity of an existing one as needed, as the number of users on our site grows.
- Avoid downtime: requests to the server should not fail.
- Keep latency low: the API should respond quickly.
- Tolerate hardware failure: servers should be replicated so the system recovers quickly with minimal downtime.
- Stay consistent: data should sync across the many servers of the same kind.
With this article at OpenGenus, you should now have a complete idea of how to design a system like Facebook, Google Search, and others.