Apache Ignite Native Persistence

Overview

Ignite Native Persistence is a distributed, ACID and SQL-compliant disk store that transparently integrates with Ignite's Durable Memory as an optional disk layer, storing data and indexes on SSD, Flash, 3D XPoint, and other types of non-volatile storage.

With Ignite Persistence enabled, you no longer need to keep all the data and indexes in memory or warm them up after a node or cluster restart, because the Durable Memory is tightly coupled with the persistence and treats it as secondary storage. This means that if a subset of data or an index is missing from RAM, the Durable Memory will read it from disk.

As shown in the picture below, Ignite Native Persistence always stores a superset of the data on disk, and a subset of the data in RAM based on the RAM capacity. For example, if there are 30 entries and RAM has the capacity to store only 20, then all 30 will be on disk and 20 of them will also be in RAM (based on the eviction policy configured).

Also, it is worth mentioning that, as in the pure in-memory use case, every individual cluster node persists only the subset of data and indexes for which the node is either primary or backup.

Ignite Native vs 3rd Party Persistence

Apache Ignite Native Persistence has the following advantages over 3rd party stores (RDBMS, NoSQL, Hadoop) that can be used as an alternative persistence layer for an Apache Ignite cluster:

  • Ability to execute SQL queries over data that is both in memory and on disk, which means Apache Ignite can be used as a memory-optimized distributed SQL database.
  • No need to fit all the data and indexes in memory. Ignite Persistence stores the superset of data on disk and only the most frequently used subset in memory.
  • Instantaneous cluster restarts. If the whole cluster goes down, there is no need to warm up the memory by preloading data from Ignite Persistence. The cluster becomes fully operational once all the cluster nodes are interconnected.
  • Data and indexes are stored in a similar format both in memory and on disk, which helps avoid expensive transformations while data sets are moved between memory and disk.
  • The ability to create full and incremental cluster snapshots by plugging in 3rd party solutions.

Usage

To enable the Ignite Native Persistence, pass an instance of PersistentStoreConfiguration to a cluster node configuration:

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <!-- Enabling Apache Ignite Native Persistence. -->
  <property name="persistentStoreConfiguration">
    <bean class="org.apache.ignite.configuration.PersistentStoreConfiguration"/>
  </property>

  <!-- Additional settings. -->
</bean>

The same store can be enabled programmatically:

// Apache Ignite node configuration.
IgniteConfiguration cfg = new IgniteConfiguration();

// Native Persistence configuration.
PersistentStoreConfiguration psCfg = new PersistentStoreConfiguration();

// Enabling the Persistent Store.
cfg.setPersistentStoreConfiguration(psCfg);
        
// Additional parameters.

Once the persistence is enabled, all the data as well as indexes will be stored both in memory and on disk across all the cluster nodes. The picture below depicts the structure of the persistence at the file system level of an individual cluster node:

Ignite Native Persistence Structure on the File System

First, there will be a unique directory for every cache deployed on the node. The picture above shows at least two caches (Cache_A and Cache_B) whose data and indexes are maintained by the node.

Second, for every partition for which this node is either primary or backup, Ignite Native Persistence creates a dedicated file on the file system. For instance, the node in the picture above is responsible for partitions 1, 10, and 564. The indexes are stored in one file per cache.

Finally, there are files and directories related to write-ahead log activities, which are explained below.

Next, when Ignite sees that the store is enabled, it moves the cluster from active to inactive state, making sure that applications cannot modify the data until allowed. This is done to avoid situations where the cluster is being restarted and applications start modifying data that may be persisted on nodes that have not been brought back up yet. The general practice here is to wait until all the nodes have joined the cluster and then call Ignite.active(true) from any node or application, moving the cluster to the active state.

// Start a cluster node with the persistence-enabled configuration.
Ignite ignite = Ignition.start(cfg);

// Activating the cluster once all the cluster nodes are up and running.
ignite.active(true);

Ignite Native Persistence Root Path

By default, all the data is persisted in the Apache Ignite working directory (${IGNITE_HOME}/work). Use PersistentStoreConfiguration.setPersistentStorePath(...) method to change the default directory.
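For illustration, here is a minimal sketch of changing the storage root programmatically; the path /ssd/ignite/persistence is an arbitrary example, not a recommended location:

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.PersistentStoreConfiguration;

IgniteConfiguration cfg = new IgniteConfiguration();

PersistentStoreConfiguration psCfg = new PersistentStoreConfiguration();

// '/ssd/ignite/persistence' is an arbitrary example path.
psCfg.setPersistentStorePath("/ssd/ignite/persistence");

cfg.setPersistentStoreConfiguration(psCfg);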

As explained above, with the Ignite Persistence enabled you no longer need to fit all the data in RAM. The disk will store all the data and indexes you have, while a subset of them will be kept in RAM. This is beneficial when you have limited physical memory resources or wish to store and query historical data in Apache Ignite.

Taking this into account, if a page is not found in RAM, then the durable memory will request it from the persistence. The subset of data that is to be stored in the off-heap memory is defined by eviction policies you use for memory regions. Also, pages of backup partitions will be evicted from RAM first, giving more space to pages of primary partitions on the node.
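As a rough sketch of capping the in-memory subset (assuming the Ignite 2.1/2.2-era memory API: MemoryConfiguration, MemoryPolicyConfiguration, and CacheConfiguration.setMemoryPolicyName; the region name, 4 GB cap, and cache name are arbitrary examples):

import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.MemoryConfiguration;
import org.apache.ignite.configuration.MemoryPolicyConfiguration;

IgniteConfiguration cfg = new IgniteConfiguration();

// A memory region capped at 4 GB; pages that do not fit stay on disk
// and are read back into RAM on demand.
MemoryPolicyConfiguration plc = new MemoryPolicyConfiguration();
plc.setName("4GB_Region");
plc.setMaxSize(4L * 1024 * 1024 * 1024);

MemoryConfiguration memCfg = new MemoryConfiguration();
memCfg.setMemoryPolicies(plc);
cfg.setMemoryConfiguration(memCfg);

// Bind a cache to the capped region ('myCache' is a hypothetical name).
CacheConfiguration<Long, String> cacheCfg = new CacheConfiguration<>("myCache");
cacheCfg.setMemoryPolicyName("4GB_Region");
cfg.setCacheConfiguration(cacheCfg);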

Write-Ahead Log

Ignite Native Persistence creates and maintains a dedicated file for every partition for which a node is either primary or backup. However, when a page is updated in physical memory, the update is not written directly to the respective partition file because that could affect performance dramatically. Instead, it is appended to the tail of the Apache Ignite node's write-ahead log (WAL).

The purpose of the WAL is to propagate updates to disk in the fastest way possible and to provide a recovery mechanism for scenarios where a single node or the whole cluster goes down. It is worth mentioning that, relying on the content of the WAL, a cluster can always be recovered to the latest successfully committed transaction in case of a crash or restart.

More Details on WAL

Refer to the WAL section on the Ignite Native Persistence Architecture page to learn more about WAL implementation details in Apache Ignite.

The whole WAL is split into several files, called segments, that are filled out sequentially. Once the 1st segment is full, its content is copied to the WAL archive and kept there for the time defined by the PersistentStoreConfiguration.walHistorySize setting. While the 1st segment's content is being copied to the archive, the 2nd segment is treated as the active WAL file and accepts all the updates coming from the application side. By default, 10 such segments are created and used. To change this number, use the PersistentStoreConfiguration.setWalSegments(...) setting.

Below you can find additional WAL-related settings. Refer to PersistentStoreConfiguration for the full list of available configuration parameters:

Parameter Name | Description | Default Value
setWalStorePath(...) | Sets the path to the directory where the WAL is stored. If the path is relative, it is resolved relative to the Ignite work directory. | ${IGNITE_HOME}/work
setWalSegments(...) | Sets the number of WAL segments to work with. For performance reasons, the whole WAL is split into files of fixed length called segments. | 10
setWalSegmentSize(...) | Sets the size of a WAL segment. | 64 MB
setWalHistorySize(...) | Sets the total number of checkpoints to keep in the WAL history. Refer to the checkpointing section below to learn more about that technique. | 20
setWalArchivePath(...) | Sets the path to the WAL archive directory. Every WAL segment is fully copied to this directory before it can be reused for WAL purposes. | ${IGNITE_HOME}/work
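For illustration, a minimal sketch that applies these WAL settings programmatically; all values and paths below are arbitrary examples, not tuning recommendations:

import org.apache.ignite.configuration.PersistentStoreConfiguration;

PersistentStoreConfiguration psCfg = new PersistentStoreConfiguration();

// Keep the active WAL and its archive on example paths.
psCfg.setWalStorePath("/wal");
psCfg.setWalArchivePath("/wal/archive");

// 20 segments of 128 MB each (example values; defaults are 10 and 64 MB).
psCfg.setWalSegments(20);
psCfg.setWalSegmentSize(128 * 1024 * 1024);

// Keep 30 checkpoints worth of WAL history (default is 20).
psCfg.setWalHistorySize(30);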

Checkpointing

The WAL is an essential part of the Ignite Persistence. Its role is to:

  • Persist updates on disk in the fastest way possible, by appending a value record to the end of the file.
  • Recover the cluster to a consistent state in case of a restart or crash.

However, the WAL file constantly grows by its nature, and it would take significant time to recover the cluster by replaying the WAL from head to tail. To mitigate this, the Durable Memory and Ignite Persistence support a checkpointing process.

Checkpointing is the process of copying dirty pages from RAM to partition files on disk. A dirty page is a page that was updated in RAM but was not written to the respective partition file on disk (an update was just appended to the WAL).

This process helps to use disk space frugally by keeping pages in their most up-to-date state on disk, and makes it possible to remove outdated WAL segments (files) from the WAL archive.

Let's review an execution flow of a simple update operation, shown in the picture below:

WAL and Checkpointing

  1. Once a cluster node receives an update request, it will look up the data page in RAM where the value is to be inserted or updated. The page will be updated and marked as dirty.
  2. The update is appended to the tail of the WAL.
  3. The node sends an acknowledgment to the update initiator confirming the success of the operation.
  4. Checkpointing is triggered periodically, depending on the frequency set in your Native Persistence configuration and other parameters. The dirty pages are copied from RAM to disk and written to the respective partition files.

See the table below for how checkpointing-related parameters can be adjusted depending on your application requirements. Refer to the PersistentStoreConfiguration class in the Apache Ignite code for the full list of available configuration parameters:

Parameter Name | Description | Default Value
setCheckpointingFrequency(...) | Sets the checkpointing frequency, which is the minimal interval at which dirty pages are written to the Ignite Native Persistence. Checkpointing may be triggered more frequently if the update rate is high. | 3 minutes
setCheckpointingPageBufferSize(...) | Sets the amount of memory allocated for the checkpointing temporary buffer. The buffer is used to create temporary copies of pages that are being written to disk and updated in parallel while checkpointing is in progress. | 256 MB
setCheckpointingThreads(...) | Sets the number of threads used for checkpointing. | 1
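As a hedged example, the parameters above can be adjusted as follows; the values are arbitrary illustrations rather than recommendations:

import org.apache.ignite.configuration.PersistentStoreConfiguration;

PersistentStoreConfiguration psCfg = new PersistentStoreConfiguration();

// Run checkpointing at least every 5 minutes (default is 3 minutes).
psCfg.setCheckpointingFrequency(5 * 60 * 1000);

// Allocate a 512 MB buffer for copies of pages updated during a checkpoint.
psCfg.setCheckpointingPageBufferSize(512L * 1024 * 1024);

// Write dirty pages to partition files with 4 threads (default is 1).
psCfg.setCheckpointingThreads(4);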

More Details on Checkpointing

Refer to the checkpointing section on the Ignite Native Persistence Architecture page to learn more about checkpointing implementation details in Apache Ignite.

Transactional Guarantees

Ignite Native Persistence is an ACID-compliant distributed store. Every transactional update that comes to the store is appended to the WAL first, and the update is uniquely identified by an ID. This means that a cluster can always be recovered to the latest successfully committed transaction or atomic update in the event of a crash or restart.
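For illustration, a minimal transactional update sketch; the cache name accounts and its TRANSACTIONAL atomicity configuration are assumptions made for this example:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.transactions.Transaction;

// Obtain a started node instance.
Ignite ignite = Ignition.ignite();

// 'accounts' is a hypothetical cache configured with TRANSACTIONAL atomicity.
IgniteCache<Long, Double> accounts = ignite.cache("accounts");

try (Transaction tx = ignite.transactions().txStart()) {
    // Both updates are appended to the WAL before the transaction commits.
    accounts.put(1L, 750.0);
    accounts.put(2L, 250.0);

    tx.commit();
}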

SQL Support

Ignite Native Persistence allows using Apache Ignite as a distributed SQL database that is fully ANSI-99 SQL compliant.

There is no need to have all the data in memory to run SQL queries across the cluster; Apache Ignite can execute them over data that is both in memory and on disk. Moreover, there is no need to preload data from the Ignite Native Persistence into memory after a cluster restart: you can run SQL queries as soon as the cluster is up and running.
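As a sketch, assuming a hypothetical queryable Person cache, a SQL query can be issued right after activation regardless of whether the matching pages are in RAM or only on disk:

import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

Ignite ignite = Ignition.ignite();

// 'Person' is a hypothetical cache with a queryable Person type.
IgniteCache<?, ?> cache = ignite.cache("Person");

// The query runs over the whole data set; pages missing from RAM are read from disk.
SqlFieldsQuery qry = new SqlFieldsQuery("SELECT name FROM Person WHERE age > ?").setArgs(30);

List<List<?>> rows = cache.query(qry).getAll();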

Ignite Persistence Internals

This documentation provides a high-level overview of the Ignite Native Persistence. For more technical details, refer to the Ignite Native Persistence Architecture documentation referenced in the WAL and checkpointing sections above.

Performance Tips

Performance suggestions are listed in the Durable Memory Tuning section of the documentation.

Example

To see how the Ignite Native Persistence can be used in practice, try this example that is available on GitHub and delivered with every Apache Ignite distribution.
