How Do Large-Scale Systems Deal With Millions of Users & Terabytes of Data?

In today’s world, modern Software-as-a-Service (SaaS) solutions are usually designed to serve massive global audiences with tens of millions of daily users. But how do these systems handle huge numbers of transactions in near real-time? How do they process such massive volumes of data, where do they store it, and how do they retrieve it efficiently?

In this post, we’ll explore some of the key technologies and architectures that power these systems behind the scenes and let them deliver this level of performance seamlessly.

The Basics

Before diving in, it’s worth covering a few fundamental concepts that appear throughout the architecture of such systems.

Clustering and Load Balancing

Clusters composed of tens, hundreds, or thousands of nodes provide high availability for (sub)systems and microservices, both in terms of fault tolerance (if a server becomes unresponsive or suffers a hardware failure, others take over) and of load balancing (incoming traffic is directed to separate nodes according to the load balancer’s distribution algorithm).
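To make the distribution algorithms concrete, here is a minimal Python sketch of two common strategies, round robin and least connections. The node names and the request loop are purely illustrative, not a real proxy:

import itertools

nodes = ["node-1", "node-2", "node-3"]

# Round robin: cycle through the nodes in a fixed order.
round_robin = itertools.cycle(nodes)

# Least connections: pick the node currently serving the fewest requests.
active_connections = {node: 0 for node in nodes}

def pick_least_connections() -> str:
    return min(active_connections, key=active_connections.get)

for i in range(5):
    rr_node = next(round_robin)
    lc_node = pick_least_connections()
    active_connections[lc_node] += 1  # this request is assigned here
    print(f"request {i}: round-robin -> {rr_node}, least-connections -> {lc_node}")

Real load balancers offer many more strategies (weighted, IP-hash, latency-based), but they all reduce to the same idea: a deterministic rule that spreads incoming traffic across healthy nodes.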

Containerization and Orchestration

In modern deployments, it’s common to bundle application code along with all its dependencies, libraries, configurations, environment variables, etc. into single packages called containers. Containers share the host operating system’s kernel, each running as an isolated process with its own filesystem and resources. Sharing the host’s OS makes them more performant and lightweight than virtual machines (VMs). Another great benefit is that they enable seamless deployments, which we’ll see later on.

Orchestration, on the other hand, means managing the execution and lifecycle of the containerized workloads and services: monitoring their load and availability, and automatically taking action when needed to preserve the system’s availability and performance at all times.
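As a toy illustration (not a real orchestrator API), the following Python sketch mimics the reconciliation loop at the heart of orchestration: compare the desired number of healthy containers with reality and act on the difference. The container names and the health check are stand-ins:

import random

DESIRED_REPLICAS = 3
running = ["app-1", "app-2", "app-3"]

def is_healthy(container: str) -> bool:
    # Stand-in for a real liveness probe (HTTP ping, exec check, etc.).
    return random.random() > 0.2

def reconcile() -> None:
    global running
    running = [c for c in running if is_healthy(c)]  # drop failed containers
    while len(running) < DESIRED_REPLICAS:           # replace them
        new_name = f"app-{random.randint(100, 999)}"
        running.append(new_name)
        print(f"started replacement container {new_name}")

for _ in range(3):
    reconcile()
print("running:", running)

Systems like Kubernetes run exactly this kind of loop continuously, which is what lets them self-heal without human intervention.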

Auto-scaling

Whether through horizontal scaling (spawning/terminating server instances or containers) or vertical scaling (adding/removing resources on existing servers), the underlying components of such systems are orchestrated to scale automatically with demand and to stay available and healthy under traffic spikes. Horizontal scaling is usually preferred, as it makes auto-scaling seamless and does not require restarting any service. Nodes are automatically added to the cluster as long as certain criteria are met (e.g. the servers’ average load over the last few minutes).
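As a sketch of such a criterion, this hypothetical Python function derives a cluster’s desired node count from its average load over recent samples. The thresholds, limits, and sample data are made-up values for illustration:

from statistics import mean

SCALE_OUT_ABOVE = 75   # % average load that triggers adding a node
SCALE_IN_BELOW = 30    # % average load that triggers removing a node
MIN_NODES, MAX_NODES = 2, 20

def desired_node_count(current_nodes: int, recent_load_pct: list) -> int:
    avg = mean(recent_load_pct)
    if avg > SCALE_OUT_ABOVE and current_nodes < MAX_NODES:
        return current_nodes + 1
    if avg < SCALE_IN_BELOW and current_nodes > MIN_NODES:
        return current_nodes - 1
    return current_nodes

print(desired_node_count(4, [82.0, 91.5, 78.3]))  # -> 5 (scale out)
print(desired_node_count(4, [22.0, 18.4, 25.1]))  # -> 3 (scale in)

The minimum and maximum bounds matter in practice: they keep a misbehaving metric from scaling the cluster to zero or running the cloud bill to infinity.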

Cloud Computing

Of course, all this orchestration of auto-scaling clusters can hardly exist without capable computing infrastructure. Cloud computing platforms, such as Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure, offer scalable computing capacity that is programmable via APIs, command-line tools, and SDKs.

The Technologies

With these fundamental technologies and concepts in mind, we can now dig a little deeper into the design of global-scale SaaS platforms.

APIs

First of all, writing efficient APIs is critical. APIs are the public-facing gateways through which users (via web and mobile apps) communicate with the platform. With call volumes that can reach billions per day, or even within a few hours, the need for APIs that respond extremely fast becomes obvious. In each API call, every single byte, in network transmission, server memory, and storage alike, accumulates to gigabytes per hour and can impact the overall performance of the platform, not to mention cloud infrastructure costs. Two key points follow directly: keep request and response payloads minimal (every byte counts at this scale), and execute work asynchronously wherever possible, as we’ll see in more detail later.
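As a minimal sketch of the “respond fast, defer the rest” pattern, the following Python snippet acknowledges the call immediately and hands the heavy lifting to an in-process queue that stands in for a real message broker. The handler name and payload are illustrative:

import asyncio
import json

async def main() -> None:
    work_queue: asyncio.Queue = asyncio.Queue()

    async def handle_send_message(request_body: str) -> dict:
        message = json.loads(request_body)    # keep payloads small
        await work_queue.put(message)         # defer persistence and fan-out
        return {"status": "accepted"}         # acknowledge immediately

    async def worker() -> None:
        while True:
            message = await work_queue.get()
            # Stand-in for the real work: persist the message, notify recipient.
            print("processing:", message["text"])
            work_queue.task_done()

    worker_task = asyncio.create_task(worker())
    print(await handle_send_message('{"to": "bob", "text": "hi"}'))
    await work_queue.join()  # wait for the deferred work to finish
    worker_task.cancel()

asyncio.run(main())

The client gets its acknowledgment in microseconds, while the expensive work happens off the request path.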

Databases & Big Data

Two main questions arise for anyone interested in understanding the internals of such systems: how are these enormous volumes of data stored, and how can they be retrieved quickly, especially when serving millions of concurrent client connections from all around the globe?

The answer to both questions is database sharding. Sharding is a method where a database is horizontally partitioned, i.e. divided into several subsets (called shards) that are distributed over multiple instances or cluster nodes. The key point is the logic used to decide which of the dozens of available shards each piece of data is placed on. That logic is driven by the “shard key”: a column or combination of columns in a table (much like a primary key) that determines where a row is stored and where it is fetched from. Including geographical information (e.g. a continent or country code) in the shard key makes it easy to distribute data across servers that reside close to that geographic region, greatly improving latency.
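As a minimal sketch (with made-up region names and shard counts), the following Python function picks a shard from a compound shard key: the geographic prefix keeps a user’s rows on shards in their region, while hashing the user id spreads users evenly within it:

import hashlib

SHARDS_PER_REGION = {"EU": 8, "US": 8, "APAC": 4}

def pick_shard(region: str, user_id: str) -> str:
    # Stable hash, so the same user always maps to the same shard.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    shard_index = int(digest, 16) % len(range(SHARDS_PER_REGION[region]))
    return f"{region.lower()}-shard-{shard_index}"

print(pick_shard("EU", "user-12345"))   # e.g. eu-shard-3
print(pick_shard("US", "user-12345"))   # same user id, US shard pool

The stability of the mapping is what matters: any application server, anywhere, can compute which shard holds a row without asking a central coordinator.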

But the benefits of database sharding go well beyond network latency. Different users fetch data that resides on different shards (servers or cluster nodes), which enables parallel processing and achieves higher overall throughput, lower load per node, and capacity that grows with the number of shards.

Still, for some workloads database sharding alone is not enough, especially when it comes to OLAP, real-time analytics, data science, and machine learning. For those cases, big data solutions (which leverage distributed processing and storage by design) are better suited for building data pipelines that transform, aggregate, and analyze the input data, even in near real-time and even from live data-streaming sources.

Message Queues & Event Streaming

We previously mentioned that APIs should execute work asynchronously to the extent that this is possible (and applicable). For example, in a social media messaging app, when a user chats with another user, the sender doesn’t care whether their message was saved or processed successfully on the server side; that’s a technical detail. They do care about when their message was received and when it was read by the recipient. So why block a server thread for extra milliseconds during an API call by processing the call’s business logic synchronously, when it isn’t critical for responding to the client app? In such cases, it’s fine to respond immediately with just an acknowledgment that the call was successful, meaning the message was successfully queued for processing.

In all these cases where interaction with the server APIs is asynchronous by nature, asynchronous execution helps avoid bottlenecks on server resources (the OS imposes soft and hard limits, such as the maximum number of open file handles per process, which covers pipes, streams, and open sockets, at least on Linux), and it allows the client app and the server to resume communication whenever an action or update on the server side requires it. More importantly, it enables parallel execution by other hosts dedicated to specific jobs.

A common way to design this kind of asynchronous architecture is through message queues or event streaming. In these subsystems, one or more producers (such as the various endpoints of the server APIs) enqueue messages or publish new events, for instance “send a welcome email to that user” once a new user signs up to the platform. Consumer job workers run indefinitely, waiting for those events (tasks) to arrive; once they receive one, they process it and, where required, acknowledge its completion in a defined way.
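Here is a minimal in-process Python sketch of this producer/consumer pattern, using the “welcome email” event above as the example. A real system would use a message broker (RabbitMQ, SQS, Kafka, etc.) rather than an in-memory queue:

import queue
import threading

events: queue.Queue = queue.Queue()

def api_signup_endpoint(email: str) -> None:
    # Producer: the API enqueues the event and returns immediately.
    events.put({"type": "send_welcome_email", "to": email})

def email_worker() -> None:
    # Consumer: runs indefinitely, processing events as they arrive.
    while True:
        event = events.get()
        if event is None:          # shutdown signal
            break
        print(f"sending welcome email to {event['to']}")
        events.task_done()         # acknowledge completion

worker = threading.Thread(target=email_worker)
worker.start()
api_signup_endpoint("new.user@example.com")
events.join()      # wait until the event is acknowledged
events.put(None)   # stop the worker
worker.join()

Because the producer and consumer only share the queue, either side can be scaled out, replaced, or restarted independently of the other.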

Event streaming is similar to message queuing, with the key difference that events are appended to a durable log that multiple consumers can read independently and even replay, whereas a queued message is typically removed once a consumer has processed it.
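A tiny Python contrast of the two models (service names are illustrative): a queue read is destructive, while log consumers each track their own offset, so the same event can be read by many consumers and re-read at any time:

from collections import deque

# Message queue: destructive reads.
mq = deque(["msg-1", "msg-2"])
print(mq.popleft())   # "msg-1" is gone from the queue after this

# Event log: append-only, with per-consumer offsets.
event_log = ["evt-1", "evt-2", "evt-3"]
offsets = {"billing-service": 0, "analytics-service": 0}

def consume(consumer: str) -> str:
    event = event_log[offsets[consumer]]
    offsets[consumer] += 1     # each consumer advances independently
    return event

print(consume("billing-service"))    # evt-1
print(consume("analytics-service"))  # evt-1 again: same event, other consumer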

In our case, event streaming and message queues are essential for horizontal scaling (tens to thousands of nodes can process specific types of events in parallel). Building consumer workers also brings the opportunity to decouple and isolate independent components of the platform: instead of a monolithic architecture, we get independently scalable (and deployable) microservices.

DevOps & Automations

The increased complexity of these systems doesn’t stop at the implementation phase. Everything related to them, even after launch, is just as complex. From monitoring the usage and availability of the dozens of APIs and microservices that comprise the platform, spread over hundreds or thousands of nodes, to deploying updates to production, nothing here is a trivial task.

To begin with, for logging and monitoring, these systems use log aggregation platforms, which are themselves horizontally scalable and highly available. The aggregators are designed for petabyte scale and high throughput, and they provide the means to process logs arriving simultaneously from multiple sources, create alerts, and build metrics in real time. The powerful query engines they bundle make it possible to dive into petabytes of log records to find specific entries or to visualize log data over specific time ranges.

Data analytics is another area that needs special attention: at this scale, SaaS platforms rely on real-time data analytics systems optimized for high concurrency and low latency. These solutions outperform traditional reporting and BI systems and can generate real-time dashboards and metrics at scale, analyze data in batch or in real time, trigger alerts, and perform other user-defined actions.

But one of the most challenging parts of DevOps is implementing a continuous integration, delivery, and deployment (CI/CD) mechanism to automate the reliable rollout of updates and fixes to the production environment.

CI/CD systems usually start by monitoring engineers’ commits on specific branches of the source code repository and triggering the automated execution of the unit, integration, and UX tests defined in the CI/CD pipeline (the more tests, often several thousand, the less likely a bug slips into production). If all tests pass, the pipeline moves from the CI stage to the CD stage. CD in fact has a dual meaning. In the pipeline it first denotes the Delivery step, which builds a validated, production-ready release and registers it in the repository. Then comes the Deployment step, which automates rolling the freshly built release out to the live environment.
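As a toy illustration of that flow, here is a Python sketch of a sequential pipeline where each stage only runs if the previous one passed. The shell commands are placeholders; real pipelines live in the CI/CD tool’s own configuration (GitHub Actions, GitLab CI, Jenkins, etc.):

import subprocess

STAGES = [
    ("test", "echo running unit, integration and UX tests"),
    ("build", "echo building production container image"),    # Delivery
    ("deploy", "echo rolling out via the orchestrator"),      # Deployment
]

for name, command in STAGES:
    result = subprocess.run(command, shell=True)
    if result.returncode != 0:
        print(f"stage '{name}' failed; aborting pipeline")
        break
    print(f"stage '{name}' passed")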

Of course, automating deployment is not trivial. Thanks to the containerized architecture of these systems, however, it is at least easier to automate packaging the updated application components into new containers and rolling them out through the orchestration platform.

Epilogue

In this post, we have only uncovered the tip of the iceberg, giving a rough overview of some of the most critical concerns these systems take very seriously: scaling, data storage and processing, API performance, event streaming/queuing, orchestration, monitoring, and automated deployment.

It’s true that building systems to serve massive audiences requires a team of skilled software engineers with many years of experience, a generous development timeframe, and a budget that can easily climb to six or seven figures in dollars. However, if a SaaS solution needs to reach that scale, it is likely worth the investment.

Nowadays, there are many solutions for every piece of a SaaS system, each designed to solve a specific range of problems. But every SaaS has its own requirements, and it takes in-depth knowledge of these technologies and their differences to see which ones are truly right for a given project. Selecting a technology that later turns out not to be the best fit can cost dearly in budget and time-to-market.

Our team has the expertise to advise on such decisions and to help keep projects’ development on the right track.

We also develop global-scale SaaS solutions on behalf of start-ups and enterprises, reliably delivering turn-key systems that are ready for launch. By handing over a well-designed, well-tested, and well-documented system that’s ready to scale to your own engineering team, and then nurturing the project with our continuous advice and support, we not only help reduce development budgets but also remove much of the risk you would otherwise carry.
