In this article, we shall be looking into the system design and architecture of Uber, a ride-sharing application that caters to millions of customers worldwide.
Table of Contents
- Introduction
- Defining System Constraints
- Defining User Roles
- The Frontend
- The Backend
- Conclusion
Let us get started with the system design of Uber.
Introduction
Uber is a popular ride-sharing application that connects riders (people who want to go places) to drivers (people who can take them places). Uber functions as an intermediary that connects both parties for their mutual benefit.
Riders on the application set up a pick-up point and destination from a map and then request an Uber driver. Riders can select from a choice of cars. They are given a fare estimate, an estimated time of arrival and also shown the route the driver will take. At the end of a trip, payment is automatically handled on the app.
The above interaction is simple and straightforward thanks to the beauty of abstraction. When we pull back the curtain, we can truly appreciate the sheer amount of engineering that has gone into building the application.
When Uber first began, it was a large monolithic application written mostly in Python. It eventually switched to a Service-Oriented Architecture (SOA) in order to scale. There are two main types of users on Uber: riders and drivers.
Defining System Constraints
If we want to design the application, we have to understand the constraints we will face. Although constraints will differ based on time of day or location, we are going to make some key assumptions.
- We have 300M customers and 1M drivers in the system
- We have an average of 15M rides per day, with roughly 1M daily active drivers
- All active drivers will notify their current location every three seconds.
- Once a customer puts in a ride request, the system will contact drivers in real-time.
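To get a feel for what these numbers imply, here is a rough back-of-the-envelope estimate of the location-update traffic alone (the per-update payload size is an assumption):

```python
# Rough estimate of location-update load, based on the assumptions above.
active_drivers = 1_000_000      # assume roughly all 1M drivers are online at peak
update_interval_s = 3           # each driver reports its location every 3 seconds
bytes_per_update = 100          # assumed payload: driver id + lat/lng + a little metadata

updates_per_second = active_drivers / update_interval_s
ingress_mb_per_second = updates_per_second * bytes_per_update / 1e6

print(f"~{updates_per_second:,.0f} location updates/sec")          # ~333,333 updates/sec
print(f"~{ingress_mb_per_second:.1f} MB/sec of location ingress")  # ~33.3 MB/sec
```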
Defining User Roles
- Riders should always be able to see available drivers near them.
- Riders can request a ride by entering their destination.
- Riders should be able to see the ETA and fare estimates of potential and ongoing rides.
- Drivers have to constantly notify the backend of their availability and location.
- Nearby drivers also need to be notified when customers need to be picked up.
- Once a ride is accepted, both the driver and customer must constantly see the other’s current location for the duration of the trip.
- When a trip has been completed, the driver completes the ride and is now available for another customer.
The Frontend
The Uber app is written in Java for Android and Objective-C for iOS. There are two versions of the app, one meant for riders and one for drivers. The driver app constantly sends location and other metric information to a supply service (more on this later), and it constantly receives ride requests from the backend. The rider app similarly sends information to a demand service and in return gets information from the backend. Because of this bi-directional flow of data, we cannot rely on a plain HTTP request-response model as our communication protocol; instead, we use the WebSocket protocol to convey information between the clients and the backend servers.
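As a minimal sketch of what this looks like on the driver side, the loop below keeps one persistent WebSocket connection open, pushes a location update every three seconds and listens for ride requests on the same connection. It uses the third-party `websockets` library; the endpoint URL, message format and coordinates are made up for illustration and are not Uber's actual client protocol.

```python
# Minimal driver-side WebSocket loop (hypothetical endpoint and message format).
import asyncio
import json
import websockets

async def driver_loop(driver_id: str):
    async with websockets.connect("wss://example.com/driver") as ws:  # hypothetical endpoint
        async def send_location():
            while True:
                # In a real app the coordinates would come from the phone's GPS.
                await ws.send(json.dumps({"type": "location", "driver_id": driver_id,
                                          "lat": 6.5244, "lng": 3.3792}))
                await asyncio.sleep(3)  # matches the 3-second reporting interval

        async def receive_requests():
            async for message in ws:  # ride requests pushed by the backend
                print("ride request:", json.loads(message))

        await asyncio.gather(send_location(), receive_requests())

# asyncio.run(driver_loop("driver-42"))
```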
The Backend
Uber incorporates a lot of technology in its backend. We will be focusing on the technologies employed for the core functionality of Uber, which is connecting riders to drivers.
Geo-Spatial Indexing
Uber deals with a lot of location data, so it has to find a good way of representing and storing it. Using just raw latitude and longitude coordinates won't cut it, so Uber makes use of Google's S2 library. The S2 library creates a hierarchical decomposition of the Earth's sphere into cells of relatively small size (roughly 4 km by 4 km), and each of these cells is uniquely indexed. This allows Uber to easily store and shard location data in a distributed manner. The S2 library can also give you the cells that cover a particular region: given a location and a radius, it returns all the cells that fall in that circular region.
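As an illustration, here is how the cell-id and covering ideas might look with `s2sphere`, an open-source Python port of S2. The library choice, coordinates and cell level are assumptions for the sketch, not Uber's actual configuration.

```python
# Sketch of S2-style cell indexing with s2sphere (illustrative values).
import s2sphere

# Index a single driver location: the leaf cell id, truncated to a coarser level,
# can serve as the shard key for that location.
driver = s2sphere.LatLng.from_degrees(6.5244, 3.3792)
cell_id = s2sphere.CellId.from_lat_lng(driver).parent(12)  # level 12 cells are a few km across
print("driver cell:", cell_id.to_token())

# Cover a rectangular region around a rider with cells, to find which shards to query.
region = s2sphere.LatLngRect.from_point_pair(
    s2sphere.LatLng.from_degrees(6.45, 3.30),
    s2sphere.LatLng.from_degrees(6.55, 3.40),
)
coverer = s2sphere.RegionCoverer()
coverer.min_level = coverer.max_level = 12
covering = coverer.get_covering(region)
print("cells to query:", [c.to_token() for c in covering])
```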
Under the hood, the S2 library starts by projecting the points of the sphere onto a cube, and each face of the cube carries a quad-tree into which the sphere points are projected. The quad-tree is the tree data structure that forms the backbone of the S2 library. As the name implies, every node in a quad-tree that is not a leaf has exactly four children. Quad-trees represent two-dimensional space by recursively subdividing it into four quadrants; the children represent the Northeast, Northwest, Southeast and Southwest quadrants. Most systems involving geo-location make use of a quadtree, or a similar spatial index, either explicitly or implicitly.
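To make the structure concrete, here is a minimal point quadtree sketch. It is purely illustrative: S2's quad-trees live on the faces of the cube described above, and the capacity-based splitting rule here is an assumption of the sketch.

```python
# Minimal point quadtree: recursively split 2-D space into four quadrants
# (NE, NW, SE, SW) and store points in the leaves.
class QuadTree:
    CAPACITY = 4  # max points per leaf before splitting

    def __init__(self, x0, y0, x1, y1):
        self.bounds = (x0, y0, x1, y1)
        self.points = []
        self.children = None  # becomes [NE, NW, SE, SW] after a split

    def insert(self, x, y):
        x0, y0, x1, y1 = self.bounds
        if not (x0 <= x < x1 and y0 <= y < y1):
            return False                      # point lies outside this node
        if self.children is None:
            if len(self.points) < self.CAPACITY:
                self.points.append((x, y))
                return True
            self._split()
        return any(child.insert(x, y) for child in self.children)

    def _split(self):
        x0, y0, x1, y1 = self.bounds
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        self.children = [QuadTree(mx, my, x1, y1),   # NE
                         QuadTree(x0, my, mx, y1),   # NW
                         QuadTree(mx, y0, x1, my),   # SE
                         QuadTree(x0, y0, mx, my)]   # SW
        for px, py in self.points:           # push existing points down to the children
            any(child.insert(px, py) for child in self.children)
        self.points = []
```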
DISCO
DISCO, short for Dispatch Optimization, is the core component of the Uber backend. It is made up of the supply service, the demand service and the logic that matches demand to supply. The entirety of DISCO is written in JavaScript.
How does data get to DISCO?
Using the driver app as an illustration: location data is sent to a web application gateway that acts as a firewall and passes the data on to a load balancer. The load balancer forwards it to a Kafka API that consumes it and publishes it to Kafka, which functions as a data hub and eventually delivers the location data to DISCO. A WebSocket server also establishes a connection between a client and DISCO; it waits for any changes in DISCO and publishes them to the client.
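A sketch of the ingestion step, publishing a driver's location update to a Kafka topic. It uses the kafka-python client; the broker address and topic name are made-up placeholders, not Uber's actual configuration.

```python
# Publish driver location updates to Kafka so downstream consumers (e.g. DISCO,
# analytics) can read them from the same stream.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                     # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_location(driver_id: str, lat: float, lng: float):
    producer.send("driver-locations", {"driver_id": driver_id, "lat": lat, "lng": lng})

publish_location("driver-42", 6.5244, 3.3792)
producer.flush()
```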
The Supply Service
The Supply Service tracks cars using geolocation. Every active driver sends their geolocation to the server every few seconds. The service keeps state machines for all of the supply in memory. It also tracks vehicle attributes such as the number of seats, the type of vehicle, the presence of a child seat, and whether a wheelchair can fit.
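A sketch of how such an in-memory supply record and its state machine might look. The field names and states are assumptions for illustration, not Uber's actual schema.

```python
# Hypothetical in-memory supply record: location, a simple state machine and the
# vehicle attributes that matter for matching.
from dataclasses import dataclass

DRIVER_STATES = {"offline", "available", "en_route_to_pickup", "on_trip"}

@dataclass
class Supply:
    driver_id: str
    lat: float = 0.0
    lng: float = 0.0
    state: str = "offline"
    seats: int = 4
    vehicle_type: str = "sedan"
    child_seat: bool = False
    wheelchair_accessible: bool = False

    def update_location(self, lat: float, lng: float):
        self.lat, self.lng = lat, lng

    def transition(self, new_state: str):
        assert new_state in DRIVER_STATES, f"unknown state {new_state}"
        self.state = new_state

# All supply is held in memory, keyed by driver id.
supply_index: dict[str, Supply] = {}
```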
The Demand Service
The Demand Service tracks the GPS location of the rider when a ride is requested. It also tracks the requirements of the order, for example whether the rider requires a small or a big car. These demand requirements must then be matched against the supply inventory.
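Continuing the sketch, demand requirements could be checked against a driver's attributes roughly like this. It reuses the hypothetical `Supply` record from the previous snippet and is a simplification of "matching demand against supply inventory".

```python
# Hypothetical demand record and a requirement check against a supply record.
from dataclasses import dataclass

@dataclass
class Demand:
    rider_id: str
    lat: float
    lng: float
    seats_needed: int = 1
    needs_child_seat: bool = False
    needs_wheelchair_access: bool = False

def satisfies(demand: Demand, driver: "Supply") -> bool:
    return (driver.state == "available"
            and driver.seats >= demand.seats_needed
            and (driver.child_seat or not demand.needs_child_seat)
            and (driver.wheelchair_accessible or not demand.needs_wheelchair_access))
```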
Matching demand to supply
Here we have hundreds of dispatch servers that collectively keep track of Google's S2 geographical cells for all of the drivers in a distributed manner. For example, if we had only 5 servers and 20 cells, each server would be responsible for storing four of these cells and for performing work on them. To scale this, Uber makes use of the Ringpop framework. Ringpop is a consistent hash ring combined with the SWIM gossip protocol. Breaking that statement down: a consistent hash ring is a way of evenly distributing work (in this case, the geolocation cells) across machines independent of the actual number of machines, and the SWIM gossip protocol means that a machine or node shares some information with a subset of its peers, which in turn share it with their peers until all the nodes in the cluster have the information. Ringpop is essential to Uber's very large and highly distributed system as it allows for easy scaling, i.e. extra servers can be spun up with no problem.
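Below is a minimal consistent hash ring to illustrate how cells can be assigned to dispatch servers so that adding or removing a server only moves a small fraction of the cells. The SWIM membership part of Ringpop is omitted, and the server and cell names are placeholders.

```python
# Minimal consistent hash ring: map cell ids to dispatch servers.
import bisect
import hashlib

class HashRing:
    def __init__(self, servers, vnodes=100):
        self.ring = []  # sorted list of (hash, server)
        for server in servers:
            for i in range(vnodes):  # virtual nodes smooth out the distribution
                self.ring.append((self._hash(f"{server}#{i}"), server))
        self.ring.sort()
        self._keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def owner(self, cell_id: str) -> str:
        # A cell is owned by the first server clockwise from its hash position.
        idx = bisect.bisect(self._keys, self._hash(cell_id)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["dispatch-1", "dispatch-2", "dispatch-3", "dispatch-4", "dispatch-5"])
print(ring.owner("89c25a31"))  # which dispatch server stores this (hypothetical) cell id
```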
ETA service
The ETA service takes in a customer's position and the current positions of drivers and then uses the road network to compute an ETA for each of them. It not only calculates ETAs for available drivers but also takes into consideration drivers who are currently on a trip that will soon end near the customer's position. It also factors in a whole lot of other things, like turn costs, U-turn costs and traffic conditions, and uses each of these to calculate an approximate ETA for each driver. The ETA service is distributed and runs on hundreds of servers.
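A very rough sketch of how these factors could be combined. It builds on the hypothetical `Supply` record above, assumes a `dropoff` attribute for drivers who are on a trip, and stubs out real road-network routing with a straight-line estimate and made-up penalty values.

```python
# Toy ETA model: drivers on a trip are considered from their drop-off point, and
# route time is padded with turn and traffic costs.
import math

def haversine_km(a, b):
    # Straight-line (great-circle) distance; a real system would query a road graph.
    lat1, lng1, lat2, lng2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lng2 - lng1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def route_time_seconds(origin, destination, turns=0, u_turns=0, traffic_factor=1.0):
    base = haversine_km(origin, destination) / 30 * 3600  # assume ~30 km/h average speed
    return base * traffic_factor + turns * 10 + u_turns * 30  # made-up penalties

def eta_seconds(driver, rider_location):
    if driver.state == "on_trip":
        # The driver only becomes available at the drop-off point of the current trip.
        to_dropoff = route_time_seconds((driver.lat, driver.lng), driver.dropoff)
        return to_dropoff + route_time_seconds(driver.dropoff, rider_location)
    return route_time_seconds((driver.lat, driver.lng), rider_location)
```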
Rating and Ranking
Customers can rate drivers after each ride, and Uber in turn uses these ratings and other factors to rank drivers. Each dispatch node, in addition to the geo-spatial location of each driver, also stores meta information such as the driver's rating among customers. Each node then ranks the drivers it has access to according to ETA and rating and returns the top candidates. The supply service then aggregates the top drivers sent by each node and picks the overall best from among them.
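A sketch of the per-node ranking and the final aggregation. It reuses `eta_seconds` from the previous snippet, assumes a `rating` attribute on the driver record, and the scoring weight is made up.

```python
# Each node scores the drivers it owns by ETA and rating and returns its top few;
# the per-node results are then merged into an overall top list.
import heapq

def score(driver, rider_location):
    # Lower is better: mostly ETA, with a small bonus for highly rated drivers.
    return eta_seconds(driver, rider_location) - 60 * driver.rating

def top_candidates_on_node(drivers_on_node, rider_location, k=5):
    return heapq.nsmallest(k, drivers_on_node, key=lambda d: score(d, rider_location))

def best_overall(per_node_results, rider_location, k=5):
    merged = [d for node_result in per_node_results for d in node_result]
    return heapq.nsmallest(k, merged, key=lambda d: score(d, rider_location))
```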
How Notifications are handled
Uber has hundreds of WebSocket servers that establish persistent connections between both kinds of clients and the rest of the backend. When a customer requests a ride, a connection is established between the client and a WebSocket server; the WebSocket server then transmits the request to the demand service and also establishes a connection with it.
The demand service gets the rider's geolocation grid id and sends it over to the dispatch server/node responsible for storing it. That node, using Google's S2 library, draws a radius around the location and fetches a list of all the grid ids that fall in the region. Then, using RPC calls (with membership spread via the gossip protocol), it communicates with all the nodes that store a grid id in the list. Those nodes send all the available driver locations in their cells to the ETA service, which calculates the closest drivers. This information is sent back to the supply service, which forwards it to the WebSocket servers holding connections with the drivers, and these in turn notify the selected drivers about the ride request.
Once a driver has accepted a ride request, the customer is notified via their own persistent WebSocket connection.
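Pulling the pieces above together, the request flow described in the last three paragraphs might look roughly like this. Every function passed in is a stand-in for a real service call, and `best_overall` comes from the ranking sketch earlier.

```python
# Glue sketch of the dispatch flow: cover the area, query the owning nodes,
# rank the candidates and notify the top drivers.
def handle_ride_request(demand, ring, cells_around, drivers_in_cell, notify, k=5):
    # 1. Cells covering the area around the rider (e.g. via an S2 region covering).
    cells = cells_around(demand.lat, demand.lng)
    # 2. Ask the dispatch node that owns each cell for its available drivers.
    per_node = {}
    for cell in cells:
        node = ring.owner(cell)
        per_node.setdefault(node, []).extend(drivers_in_cell(node, cell))
    # 3. Rank candidates by ETA and rating, then notify the best ones over WebSockets.
    candidates = best_overall(per_node.values(), (demand.lat, demand.lng), k)
    for driver in candidates:
        notify(driver.driver_id, demand)
    return candidates
```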
Database
Uber uses Schemaless, Riak and Cassandra. Each one serves a different purpose. Schemaless is an in-house NoSQL database built on top of MySQL. It is used for long-term data storage. Riak and Cassandra are also NoSQL databases that are used to meet low-latency and high-availability demands. Uber also employs a Hadoop warehouse for the distributed storage and analytics of complex data.
They also use Redis for both caching and queuing. Twemproxy provides scalability of the caching layer without sacrificing cache hit rate via its consistent hashing algorithm. Celery workers process async workflow operations using those Redis instances.
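As a small illustration of the queuing side, here is what an async workflow step handled by a Celery worker with Redis as the broker might look like. The broker URL and the task itself are illustrative assumptions; sending a trip receipt is just one plausible example of work taken off the request path.

```python
# Hypothetical Celery task processed asynchronously by a worker, with Redis as broker.
from celery import Celery

app = Celery("uber_tasks", broker="redis://localhost:6379/0")

@app.task
def send_trip_receipt(rider_id: str, trip_id: str, amount: float):
    # Runs on a worker after the trip ends, off the request path.
    print(f"emailing receipt for trip {trip_id} to rider {rider_id}: ${amount:.2f}")

# send_trip_receipt.delay("rider-7", "trip-123", 12.50)
```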
Fault Tolerance and Replication
The dispatch servers' use of the Ringpop framework, as explained above, ensures a high degree of fault tolerance among the servers responsible for handling dispatching: if any number of the server nodes fail, the workload can be quickly redistributed among the remaining available servers.
Uber also employs redundant replica servers for all of its other services, which can quickly take charge if a primary server fails. It also has a backup data centre and makes use of drivers' phones as a sort of distributed backup guarding against data-centre failures.
Conclusion
We have looked at how Uber, one of the largest ride-sharing applications, is designed. We have also peeked into some of the technology that Uber employs and how the various components work together to give us a clear and intuitive user experience when we use Uber's services.