Chapter 1. Introduction to Pivotal GemFire In-Memory Data Grid and Apache Geode
Memory Is the New Disk
Prior to 2002, memory was considered expensive and disks were considered cheap, and networks were slower still. We stored the data we needed to access on disk and archived historical information on tape.
Since then, continual advances in hardware and networking and a huge reduction in the price of RAM have given rise to memory clusters. Around the same time that memory prices fell, GemFire was invented, making it possible to use memory as we previously used disk. It also allowed us to run atomic, consistent, isolated, and durable (ACID) transactions in memory, just as in a database. This made it possible to use memory as the system of record, not just as a “side cache,” increasing reliability.
What Is Pivotal GemFire?
Is it a database? Is it a cache? The answer is “yes” to both of those questions, but it is much more than that. GemFire is a combined data and compute grid with distributed database capabilities, highly available parallel message queues, continuous availability, and an event-driven architecture that is linearly scalable with a super-efficient data serialization protocol. Today, we call this combination of features an in-memory data grid (IMDG).
Memory access is orders of magnitude faster than the disk-based access that was traditionally used for data stores. The GemFire IMDG can be scaled dynamically, with no downtime, as data size requirements increase. It is a key–value object store rather than a relational database. It provides high availability for data stored in it with synchronous replication of data across members, failover, self-healing, and automated rebalancing. It can provide durability of its in-memory data to persistent storage and supports extremely high performance. It provides multisite data management with either an active–active or active–passive topology keeping multiple datacenters eventually consistent with one another.
Increased access to the internet and mobile data has accelerated the evolution of cloud computing. The sheer number of requests from users and apps, along with all of the data they generate, will continue to grow. Apps must scale out to handle not only the growth of data but also the number of concurrent requests. Apps that cannot scale out become so slow that they either stop working or customers move on to another app that can better serve their requests.
A traditional web tier behind a load balancer allows applications to scale horizontally on commodity hardware. But where is the data kept? Usually in a single database. As data volumes grow, the database quickly becomes the new bottleneck. The network becomes a bottleneck, too, as clients transport large amounts of data across it in order to operate on it. GemFire solves both problems. First, the data is spread horizontally across the servers in the grid, taking advantage of the compute, memory, and storage of all of them. Second, GemFire removes the network bottleneck by colocating application code with the data. Don’t send the data to the code; it is much faster to send the code to the data and return just the result.
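To make “send the code to the data” concrete, here is a minimal sketch of a GemFire function, written against the org.apache.geode Java API that GemFire 9 and later share with Apache Geode. The “Orders” region name, the order-total calculation, and the value type are illustrative assumptions; the point is that each server computes over only the data it hosts and ships back just the result.

import org.apache.geode.cache.Region;
import org.apache.geode.cache.execute.Function;
import org.apache.geode.cache.execute.FunctionContext;
import org.apache.geode.cache.execute.RegionFunctionContext;
import org.apache.geode.cache.partition.PartitionRegionHelper;

public class OrderTotalFunction implements Function<Void> {
  @Override
  public void execute(FunctionContext<Void> context) {
    RegionFunctionContext rfc = (RegionFunctionContext) context;
    // Operate only on the entries hosted by this member; no entries cross the network.
    Region<String, Double> localOrders = PartitionRegionHelper.getLocalDataForContext(rfc);
    double total = localOrders.values().stream().mapToDouble(Double::doubleValue).sum();
    context.getResultSender().lastResult(total); // only the result travels back
  }

  @Override
  public boolean hasResult() {
    return true;
  }

  @Override
  public String getId() {
    return "OrderTotalFunction";
  }
}

A client would invoke it with FunctionService.onRegion(orders).execute(new OrderTotalFunction()) (or by its ID once the class is registered on the servers) and then aggregate the per-server totals from the returned ResultCollector.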
What Is Apache Geode?
When Pivotal embarked on an open source data strategy, we contributed the core of the GemFire codebase to the Apache Software Foundation where it is known as the Apache Geode top-level project. Except for some commercial extensions that we discuss later, the bits are mostly the same, but GemFire is the enterprise version supported by Pivotal.
What Problems Are Solved by an IMDG?
There are two major problems solved by IMDGs. The first is the need for independently scalable application infrastructure and data infrastructure. The second is the need for ultra-high-speed data access in modern apps. Traditional disk-based data systems, such as relational database management systems, were historically the backbone of data-driven applications, but they often cause concurrency and latency problems: every request that waits on disk access costs milliseconds. If you’re an online retailer with thousands of online customers, each requesting information on multiple products from multiple vendors, those milliseconds add up to seconds of wait time, and impatient users will go to another website for their purchases.
Real GemFire Use Cases
The need for ultra-high-speed data access in modern applications is what drives enterprises to move to IMDGs. Let’s take a look at some real customer use cases for GemFire’s IMDG.
Transaction Processing
Transportation reservation systems are often subject to extreme spikes in demand at particular times of year. During the Chinese New Year, for instance, one-sixth of the population of the earth travels on the China Rail System over the course of just a few days. The introduction of GemFire into the company’s web and e-ticketing system made it possible to handle holiday traffic of 15,000 tickets sold per minute, 1.4 billion page views per day, and 40,000 visits per second. This kind of sudden increase in volume for a few days a year is one of the most difficult kinds of spikes to manage.
Similarly, Indian Railways sees huge spikes at particular times of day, such as 10 A.M., when discount tickets go on sale. At these times, demand can exceed the ability of almost any non-memory-based system to respond in a timely fashion. Indian Railways suffered serious performance degradation when more than 40,000 users would log on to its electronic ticketing system to book next-day travel; it would often take users up to 15 minutes to book a ticket, and their connections would frequently time out. The IT team at Indian Railways brought in the GemFire IMDG to handle this extreme workload. The improved concurrency management and consistently low latency of GemFire increased the maximum ticket sale rate from 2,000 tickets per minute to 10,000 per minute and allowed the system to accommodate up to 120,000 concurrent user sessions. Average response time dropped to less than one second, and more than 50% of responses now occur in less than half a second. The GemFire cluster is deployed behind the application server tier, with write-behind to a database tier to ensure the transactions are persisted.
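The write-behind step is handled in GemFire by an asynchronous event queue whose listener drains batches of region updates to the database. The following is a hedged sketch only; the bookings table, its columns, and the injected JDBC DataSource are illustrative assumptions, not details of the Indian Railways system.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;
import javax.sql.DataSource;
import org.apache.geode.cache.asyncqueue.AsyncEvent;
import org.apache.geode.cache.asyncqueue.AsyncEventListener;

public class BookingWriteBehindListener implements AsyncEventListener {
  private final DataSource dataSource; // JDBC connection pool (assumed to be provided)

  public BookingWriteBehindListener(DataSource dataSource) {
    this.dataSource = dataSource;
  }

  @Override
  public boolean processEvents(List<AsyncEvent> events) {
    // GemFire delivers events in configurable batches; write each batch in one round trip.
    try (Connection conn = dataSource.getConnection();
         PreparedStatement stmt =
             conn.prepareStatement("INSERT INTO bookings (id, payload) VALUES (?, ?)")) {
      for (AsyncEvent event : events) {
        stmt.setObject(1, event.getKey());
        stmt.setObject(2, event.getDeserializedValue());
        stmt.addBatch();
      }
      stmt.executeBatch();
      return true;   // the batch is removed from the queue only after success
    } catch (Exception e) {
      return false;  // GemFire keeps the batch and retries
    }
  }

  @Override
  public void close() { }
}

The listener is attached to an asynchronous event queue, which is in turn attached to the Region, so ticket writes complete as soon as the in-memory copies are updated and the database catches up in the background.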
High-Speed Data Ingest and the Internet of Things
Increasingly, automobiles, manufacturing processes, turbines, and heavy-duty machinery are instrumented with myriad sensors. Disk-centric technologies such as databases cannot ingest new data quickly enough to respond to sensor readings in subsecond time. For example, certain combinations of pressure, temperature, and observed faults can indicate that a manufacturing process is going awry. Operators or automated systems must intervene quickly to prevent serious loss of material or property.
For situations like these, disk-centric technologies are simply too slow. In-memory techniques are the only option that can deliver the required performance. The sensor data flows into GemFire where it is scored according to a model produced by the data science team in the analytical database. In addition, GemFire batches and pushes the new data into the analytical database where it can be used to further refine the analytic processes.
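As a hedged illustration of reacting to sensor data the moment it lands in memory, the sketch below uses a cache listener on a hypothetical “SensorReadings” region whose values are simple pressure/temperature pairs; the fixed thresholds stand in for the model produced by the data science team.

import org.apache.geode.cache.EntryEvent;
import org.apache.geode.cache.util.CacheListenerAdapter;

public class SensorScoringListener extends CacheListenerAdapter<String, double[]> {
  // Illustrative thresholds; a real deployment would apply the model trained
  // in the analytical database, as described above.
  private static final double MAX_PRESSURE = 950.0;
  private static final double MAX_TEMPERATURE = 130.0;

  @Override
  public void afterCreate(EntryEvent<String, double[]> event) {
    double[] reading = event.getNewValue(); // assumed layout: [pressure, temperature]
    if (reading[0] > MAX_PRESSURE && reading[1] > MAX_TEMPERATURE) {
      // React in memory, in subsecond time, before the data is batched out to the
      // analytical database.
      System.out.println("ALERT: sensor " + event.getKey() + " outside safe range");
    }
  }
}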
Offloading from Existing Systems/Caching
The increase in travel aggregator sites on the internet has placed a large burden on traditional travel providers for rapid information about availability and rates. The aggregator sites frequently give preference to enterprises that respond first. Traditionally, relational database systems were used to report this information. As the load grew due to the requests from the aggregators, response time to requests from the travel providers’ own websites and customer agents became unacceptable. One of these travel providers installed GemFire as a caching layer in front of its database, enabling much quicker delivery of information to the aggregators as well as offloading work from its transactional reservations system.
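One common way to put GemFire in front of an existing database is a read-through CacheLoader: on a cache miss, GemFire calls the loader, stores the returned value in the Region, and serves later requests from memory. This sketch assumes a rates table and a JDBC DataSource; neither is part of the travel provider’s actual schema.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;
import org.apache.geode.cache.CacheLoader;
import org.apache.geode.cache.CacheLoaderException;
import org.apache.geode.cache.LoaderHelper;

public class RateCacheLoader implements CacheLoader<String, Double> {
  private final DataSource dataSource; // JDBC pool for the reservations database (assumed)

  public RateCacheLoader(DataSource dataSource) {
    this.dataSource = dataSource;
  }

  @Override
  public Double load(LoaderHelper<String, Double> helper) throws CacheLoaderException {
    // Called only on a cache miss; the returned value is stored in the region.
    try (Connection conn = dataSource.getConnection();
         PreparedStatement stmt =
             conn.prepareStatement("SELECT rate FROM rates WHERE rate_code = ?")) {
      stmt.setString(1, helper.getKey());
      try (ResultSet rs = stmt.executeQuery()) {
        return rs.next() ? rs.getDouble("rate") : null;
      }
    } catch (Exception e) {
      throw new CacheLoaderException("Rate lookup failed", e);
    }
  }

  @Override
  public void close() { }
}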
Event Processing
Credit card companies must react to fraudulent use and other misuse of a card in real time. GemFire’s ability to store the results of the complex decision rules that determine whether transactions should be declined means that, when code and data are colocated, scoring routines can execute in milliseconds or less. Continuous content-based queries allow GemFire to immediately push notifications about card rejections to interested parties. Reliable write-behind saves the data for further use by downstream systems.
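The continuous content-based queries mentioned here are exposed through GemFire’s continuous query (CQ) API. The following client-side sketch assumes a “Transactions” region whose values expose a status field and a locator running on localhost:10334; matching events are pushed to the listener as they happen.

import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.query.CqAttributesFactory;
import org.apache.geode.cache.query.CqEvent;
import org.apache.geode.cache.query.CqListener;
import org.apache.geode.cache.query.CqQuery;
import org.apache.geode.cache.query.QueryService;

public class DeclineNotifier {
  public static void main(String[] args) throws Exception {
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334)
        .setPoolSubscriptionEnabled(true)   // required for server-to-client event push
        .create();

    CqAttributesFactory attrs = new CqAttributesFactory();
    attrs.addCqListener(new CqListener() {
      @Override
      public void onEvent(CqEvent event) {
        // Fired the moment a transaction matching the query is created or updated.
        System.out.println("Declined card event for key " + event.getKey());
      }

      @Override
      public void onError(CqEvent event) {
        // A server-side query error; log and keep the CQ running.
      }

      @Override
      public void close() { }
    });

    QueryService queryService = cache.getQueryService();
    CqQuery cq = queryService.newCq("DeclinedTransactions",
        "SELECT * FROM /Transactions t WHERE t.status = 'DECLINED'", attrs.create());
    cq.execute(); // results now stream to the listener until the CQ is closed
  }
}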
Microservices Enabler
Modern microservice architectures need speedy responses for data requests and coordination. Because a basic tenet of microservice architectures is that the services themselves are stateless, they need a separate data tier in which to store their state. They require that data to be both highly available and horizontally scalable as usage of the services increases. The GemFire IMDG provides exactly the horizontal scalability and fault tolerance that satisfy those requirements. Microservices-based systems can benefit greatly from the insertion of GemFire caches at appropriate places in the architecture.
IMDG Architectural Issues and How GemFire Addresses Them
IMDGs bring a set of architectural considerations that must be addressed. They range from simple things like horizontal scale to complicated things like ensuring that there are no single points of failure anywhere in the system. Here’s how GemFire deals with these issues.
Horizontal Scale
Horizontal scale is defined as the ability to gain additional capacity or performance by adding more nodes to an existing cluster. GemFire is able to scale horizontally without any downtime or interruption of service. Simply start some more servers and GemFire will automatically rebalance its workload across the resized cluster.
Coordination
GemFire, being an IMDG, is by definition a distributed system: a cluster of members spread across a set of servers working together to solve a common problem. Every distributed system needs a mechanism for coordinating membership and determining the status of cluster nodes. In GemFire, the Membership Coordinator role is normally assumed by the eldest member, typically the first Locator that was started. We discuss this in more detail in Chapter 2.
Organizing Data
GemFire stores data in a structure somewhat analogous to a database table; in GemFire, we call that structure a Region. You can think of a Region as one giant concurrent map that spans the nodes in the GemFire cluster. Data is stored in the form of keys and values, and keys must be unique within a given Region.
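Here is a minimal client sketch of the Region-as-map idea, assuming a “Customers” region already exists on the servers and a locator runs on localhost:10334 (both illustrative):

import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class RegionBasics {
  public static void main(String[] args) {
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334)
        .create();

    // PROXY means the client holds no local copy; every operation goes to the grid.
    Region<String, String> customers = cache
        .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
        .create("Customers");

    customers.put("cust-42", "Aparna Rao");       // keys must be unique per Region
    System.out.println(customers.get("cust-42")); // reads like a java.util.Map
    cache.close();
  }
}

A Region in fact extends java.util.concurrent.ConcurrentMap, so familiar map operations such as putIfAbsent and replace work against the distributed data.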
High Availability
GemFire replicates the data stored in Regions in such a way that primary copies and backup copies are always stored on separate servers. Every server is primary for some data and backup for other data. This is the first level of redundancy that GemFire provides to prevent data loss when a single server fails.
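For illustration, a server-side member might configure that redundancy as follows; this is a hedged sketch assuming an embedded GemFire/Geode server member and an illustrative “Customers” region. PARTITION_REDUNDANT keeps one backup copy of every bucket, and GemFire places primary and backup on different members.

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;

public class RedundantRegionServer {
  public static void main(String[] args) {
    // Joins the cluster as a peer member (locator addresses would normally be supplied).
    Cache cache = new CacheFactory().create();

    // One redundant copy per bucket; primary and backup always live on separate servers.
    Region<String, String> customers = cache
        .<String, String>createRegionFactory(RegionShortcut.PARTITION_REDUNDANT)
        .create("Customers");
  }
}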
Persistence
There is a common misconception that IMDGs do not have a persistence model. What happens if a node fails along with its backup copy? Do we lose all of the data? No: you can configure GemFire Regions to store their data not only in memory but also on a durable store such as an internal hard drive or external storage. As mentioned a moment ago, GemFire is commonly used to provide high availability for your data. To guarantee that failure of a single disk drive doesn’t cause data loss, GemFire employs a shared-nothing persistence architecture: each server has its own persistent store on a separate disk drive, ensuring that the primary and backup copies of your data are kept on separate storage devices so that there is no single point of failure at the storage layer.
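As a hedged sketch of that shared-nothing model, a server member can give a Region both a redundant in-memory copy and a persistent copy on its own local disk. The disk directory path and region name below are illustrative assumptions.

import java.io.File;
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;

public class PersistentRegionServer {
  public static void main(String[] args) {
    Cache cache = new CacheFactory().create();

    // Each member writes to its own local directory: no shared storage, no single
    // point of failure at the storage layer.
    cache.createDiskStoreFactory()
        .setDiskDirs(new File[] { new File("/data/gemfire/server1") })
        .create("customerStore");

    // Redundant and persistent: a backup copy in memory plus a copy on local disk.
    Region<String, String> customers = cache
        .<String, String>createRegionFactory(RegionShortcut.PARTITION_REDUNDANT_PERSISTENT)
        .setDiskStoreName("customerStore")
        .create("Customers");
  }
}

When members restart, GemFire recovers the Region’s data from these local disk stores.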