

Apache Ignite Native Persistence

Overview

Ignite native persistence is a distributed, ACID- and SQL-compliant disk store that transparently integrates with Ignite's durable memory. The persistence is optional and can be turned on and off; when turned off, Ignite acts as a pure in-memory store.

With native persistence enabled, Ignite always stores the full data set on disk and keeps as much of it as possible in RAM, depending on the capacity of the latter. For example, if there are 100 entries and RAM has the capacity to store only 20, then all 100 will be stored on disk and only 20 will be cached in RAM for better performance.

Note that, just as in the pure in-memory case, when persistence is turned on every individual cluster node persists only a subset of the data: only the partitions for which the node is either primary or backup. Collectively, the whole cluster contains the full data set.

The native persistence has the following characteristics that distinguish it from 3rd party databases that can be used as an alternative persistence layer in Ignite:

  • SQL queries over the full data set that spans both memory and disk. This means that Apache Ignite can be used as a memory-centric distributed SQL database.
  • No need to have all the data and indexes in memory. Ignite persistence allows storing the full data set on disk while keeping only the most frequently used subset in memory.
  • Instantaneous cluster restarts. If the whole cluster goes down, there is no need to warm up the memory by preloading data from the Ignite Persistence. The cluster becomes fully operational once all the cluster nodes are interconnected with each other.
  • Data and indexes are stored in a similar format both in memory and on disk, which helps avoid expensive transformations when moving data between memory and disk.
  • An ability to create full and incremental cluster snapshots by plugging in 3rd party solutions.

Usage

To enable the native persistence, pass an instance of PersistentStoreConfiguration to a cluster node configuration:

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <!-- Enabling Apache Ignite Native Persistence. -->
  <property name="persistentStoreConfiguration">
    <bean class="org.apache.ignite.configuration.PersistentStoreConfiguration"/>
  </property>

  <!-- Additional setting. -->
 
</bean>
// Apache Ignite node configuration.
IgniteConfiguration cfg = new IgniteConfiguration();

// Native Persistence configuration.
PersistentStoreConfiguration psCfg = new PersistentStoreConfiguration();

// Enabling the Persistent Store.
cfg.setPersistentStoreConfiguration(psCfg);
        
//Additional parameters.

Once the persistence is enabled, all the data as well as indexes will be stored both in memory and on disk across all the cluster nodes. The picture below depicts the structure of the persistence at the file system level of an individual cluster node:

Ignite Native Persistence Structure on the File System

Firstly, there will be a unique directory for every cache deployed on the node. From the picture above, we can see that there are at least two caches (Cache_A and Cache_B) whose data and indexes are maintained by the node.

Secondly, for every partition for which this node is either a primary or backup, the persistence creates a dedicated file on the file system. For instance, the node from the picture above is responsible for partitions 1, 10 and 564. The indexes are stored in a single file per cache.

Finally, there are files and directories related to write-ahead log activity, which is explained below.

Logical Caches and Partition Files

If Cache_A and Cache_B belonged to the same cache group, there would be only a single directory with partition files shared by both caches. Learn more from the logical caches documentation.
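
As an illustration, here is a minimal sketch (the cache and group names are illustrative) of assigning two caches to the same cache group so that their partition files end up in a single shared directory:

IgniteConfiguration cfg = new IgniteConfiguration();

// Both caches share the same cache group and, therefore, the same
// set of partition files on disk (names below are illustrative).
CacheConfiguration<Integer, String> cacheACfg = new CacheConfiguration<>("Cache_A");
cacheACfg.setGroupName("Group_1");

CacheConfiguration<Integer, String> cacheBCfg = new CacheConfiguration<>("Cache_B");
cacheBCfg.setGroupName("Group_1");

cfg.setCacheConfiguration(cacheACfg, cacheBCfg);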

Next, when Ignite detects that persistence is enabled, it keeps the cluster in an inactive state, making sure that applications cannot modify the data until this is explicitly allowed. This is done to avoid situations where the cluster is being restarted and applications start modifying data that may be persisted on nodes that have not been brought back up yet. So, the general practice is to wait until all the nodes have joined the cluster and then call Ignite.active(true) from any node or application, moving the cluster to the active state.

Ignite ignite = ...;
            
// Activating the cluster once all the cluster nodes are up and running.
ignite.active(true);

Ignite Native Persistence Root Path

By default, all the data is persisted in the Apache Ignite working directory (${IGNITE_HOME}/work). Use the PersistentStoreConfiguration.setPersistentStorePath(...) method to change the default directory.
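
For example, a minimal sketch (the directory path is illustrative) of redirecting the persistence files to a dedicated drive:

IgniteConfiguration cfg = new IgniteConfiguration();

PersistentStoreConfiguration psCfg = new PersistentStoreConfiguration();

// Store persistence files on a dedicated drive (the path is illustrative).
psCfg.setPersistentStorePath("/ssd/ignite/persistence");

cfg.setPersistentStoreConfiguration(psCfg);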

As explained above, with Ignite persistence enabled you no longer need to fit all the data in RAM. The disk will store all the data and indexes you have, while a subset of them will be cached in RAM.

Considering this, if a page is not found in RAM, then the durable memory will request it from the persistence. The subset of data that is to be stored in RAM is managed by eviction policies.

Write-Ahead Log

Ignite persistence creates and maintains a dedicated file for every partition for which a node is either primary or backup. However, when a page is updated in RAM, the update is not written directly to the respective partition file because that would affect performance dramatically. Instead, the update is appended to the tail of a write-ahead log (WAL).

The purpose of the WAL is to propagate updates to disk in the fastest way possible and to provide a recovery mechanism for scenarios where a single node or the whole cluster goes down. It is worth mentioning that, relying on the content of the WAL, a cluster can always be recovered to the latest successfully committed transaction in case of a crash or restart.

More Details on WAL

Refer to the WAL section on the Ignite Native Persistence Architecture page to learn more about WAL implementation details in Apache Ignite.

The whole WAL is split into several files, called segments, that are filled sequentially. Once the 1st segment is full, its content is copied to the WAL archive and kept there for the time defined by the PersistentStoreConfiguration.walHistorySize setting. While the 1st segment's content is being copied to the archive, the 2nd segment is treated as the active WAL file and accepts all the updates coming from the application side. By default, 10 such segments are created and used. To change this number, use the PersistentStoreConfiguration.setWalSegments(...) setting.

Below you can find additional WAL-related settings. Refer to PersistentStoreConfiguration for the full list of available configuration parameters:

  • setWalStorePath(...) — Sets a path to the directory where the WAL is stored. If this path is relative, it is resolved relative to the Ignite work directory. Default: ${IGNITE_HOME}/work
  • setWalSegments(...) — Sets the number of WAL segments to work with. For performance reasons, the whole WAL is split into files of fixed length called segments. Default: 10
  • setWalSegmentSize(...) — Sets the size of a WAL segment. Default: 64 MB
  • setWalHistorySize(...) — Sets the total number of checkpoints to keep in the WAL history. Refer to the checkpointing section below to learn more about this technique. Default: 20
  • setWalArchivePath(...) — Sets a path to the WAL archive directory. Every WAL segment will be fully copied to this directory before it can be reused for WAL purposes. Default: ${IGNITE_HOME}/work
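
The snippet below is a minimal sketch (all values and paths are illustrative) of adjusting these WAL settings programmatically:

IgniteConfiguration cfg = new IgniteConfiguration();

PersistentStoreConfiguration psCfg = new PersistentStoreConfiguration();

// WAL-related settings (values and paths below are illustrative).
psCfg.setWalSegments(10);                            // Number of active WAL segments.
psCfg.setWalSegmentSize(64 * 1024 * 1024);           // WAL segment size in bytes (64 MB).
psCfg.setWalHistorySize(20);                         // Checkpoints to keep in the WAL history.
psCfg.setWalStorePath("/ssd/ignite/wal");            // WAL directory.
psCfg.setWalArchivePath("/ssd/ignite/wal/archive");  // WAL archive directory.

cfg.setPersistentStoreConfiguration(psCfg);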

WAL Modes

Depending on the WAL mode set, Ignite provides the following consistency guarantees:

  • DEFAULT — The changes are guaranteed to be persisted to disk for every atomic write or transactional commit. Consistency guarantee: data updates are never lost and survive any OS or process crash, as well as power failures.
  • LOG_ONLY — The changes are guaranteed to be flushed to the OS buffer cache for every atomic write or transactional commit. Consistency guarantee: data updates survive only a process crash.
  • BACKGROUND — The changes are flushed to the OS buffer cache periodically. Consistency guarantee: recent data updates may get lost in case of a process crash or other outages.
  • NONE — WAL is disabled. The changes are persisted only during the checkpointing process or a graceful node shutdown (use Ignite#active(false) to shut down a node gracefully). Consistency guarantee: none. If an Ignite node is terminated abruptly in this mode, it is very likely that the data stored on disk will be corrupted and the persistence directory will need to be cleared before the node is restarted.

The following example shows how to set the WAL mode:

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <!-- Enabling Apache Ignite Native Persistence with the WAL mode set. -->
  <property name="persistentStoreConfiguration">
    <bean class="org.apache.ignite.configuration.PersistentStoreConfiguration">
      <property name="walMode" value="LOG_ONLY"/>
    </bean>
  </property>

  <!-- Additional setting. -->

</bean>
// Apache Ignite node configuration.
IgniteConfiguration cfg = new IgniteConfiguration();

// Native Persistence configuration.
PersistentStoreConfiguration psCfg = new PersistentStoreConfiguration();

// Set WAL Mode
psCfg.setWalMode(WALMode.LOG_ONLY);

// Enabling the Persistent Store.
cfg.setPersistentStoreConfiguration(psCfg);
        
//Additional parameters.

Checkpointing

Due to its nature, the WAL constantly grows, and it would take significant time to recover the cluster by replaying the WAL from head to tail. To mitigate this, Ignite introduces a checkpointing process.

The checkpointing is the process of copying dirty pages from RAM to partition files on disk. A dirty page is a page that was updated in RAM but was not written to a respective partition file (an update was just appended to the WAL).

This process helps utilize disk space frugally by keeping pages in their most up-to-date state on disk, and it allows outdated WAL segments (files) to be removed from the WAL archive.

Let's review an execution flow of a simple update operation, shown in the picture below:

WAL and Checkpointing

  1. Once a cluster node receives an update request, it will look up the data page in RAM where the value is to be inserted or updated. The page will be updated and marked as dirty.
  2. The update is appended to the tail of the WAL.
  3. The node sends an acknowledgment to the update initiator confirming the success of the operation.
  4. Checkpointing is triggered periodically, depending on the frequency set in your configuration or other parameters. The dirty pages are copied from RAM to the respective partition files on disk.

See below for how checkpointing-related parameters can be adjusted depending on your application requirements. Refer to the PersistentStoreConfiguration JavaDocs for the full list of configuration parameters:

  • setCheckpointingFrequency(...) — Sets the checkpointing frequency, which is the minimal interval at which dirty pages are written to the Ignite Native Persistence. If the update rate is high, checkpointing is triggered more frequently. Default: 3 minutes
  • setCheckpointingPageBufferSize(...) — Sets the amount of memory allocated for the checkpointing temporary buffer. This buffer is used to create temporary copies of pages that are being written to disk while they are updated in parallel during the checkpointing process. Default: 256 MB
  • setCheckpointingThreads(...) — Sets the number of threads to use for checkpointing purposes. Default: 1
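
The snippet below is a minimal sketch (the values are illustrative) of tuning these checkpointing parameters:

IgniteConfiguration cfg = new IgniteConfiguration();

PersistentStoreConfiguration psCfg = new PersistentStoreConfiguration();

// Checkpointing settings (values below are illustrative).
psCfg.setCheckpointingFrequency(3 * 60 * 1000);           // Checkpoint at least every 3 minutes.
psCfg.setCheckpointingPageBufferSize(256L * 1024 * 1024); // 256 MB temporary page buffer.
psCfg.setCheckpointingThreads(1);                         // Number of checkpointing threads.

cfg.setPersistentStoreConfiguration(psCfg);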

More Details on Checkpointing

Refer to the checkpointing section on Ignite Persistence Architecture page to learn more about checkpointing implementation details in Apache Ignite.

Transactional Guarantees

Ignite native persistence is an ACID-compliant distributed store. Every transactional update that comes to the store is appended to the WAL first, and each update is uniquely identified by an ID. This means that a cluster can always be recovered to the latest successfully committed transaction or atomic update in the event of a crash or restart.
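
As an illustration, here is a minimal sketch (the cache name and data are illustrative) of a transactional update; the commit is appended to the WAL before the operation is acknowledged, which is what allows it to be replayed after a crash or restart:

Ignite ignite = ...; // A node started with persistence enabled and the cluster activated.

// A transactional cache (the cache name is illustrative).
IgniteCache<Integer, String> cache = ignite.getOrCreateCache(
    new CacheConfiguration<Integer, String>("txCache")
        .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL));

// The commit below is recorded in the WAL before the call returns.
try (Transaction tx = ignite.transactions().txStart()) {
    cache.put(1, "Hello");
    cache.put(2, "World");

    tx.commit();
}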

SQL Support

Ignite native persistence allows using Apache Ignite as a distributed SQL database.

There is no need to have all the data in memory to run SQL queries across the cluster; Apache Ignite is able to execute them over the data that resides both in memory and on disk. Moreover, there is no need to preload data from the persistence into memory after a cluster restart: you can run SQL queries as soon as the cluster is up and running.
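
For example, here is a minimal sketch (the cache, table, and column names are illustrative) of running a SQL query right after a restart, without preloading anything into RAM:

Ignite ignite = ...; // An activated cluster with persistence enabled.

// A cache with SQL-enabled Person objects (names below are illustrative).
IgniteCache<Long, Person> cache = ignite.cache("Person");

// The query runs over the whole data set, whether it is in RAM or only on disk.
SqlFieldsQuery qry = new SqlFieldsQuery(
    "SELECT name, salary FROM Person WHERE salary > ?").setArgs(50_000);

for (List<?> row : cache.query(qry).getAll())
    System.out.println(row.get(0) + ": " + row.get(1));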

Ignite Persistence Internals

This documentation provides a high-level overview of the Ignite persistence. If you're curious to get more technical details, refer to these documents:

Performance Tips

The performance suggestions are listed in the Durable Memory Tuning documentation section.

Example

To see how the Ignite native persistence can be used in practice, try this example that is available on GitHub and delivered with every Apache Ignite distribution.
