Apache Ignite

GridGain Developer Hub - Apache Ignite™

Welcome to the Apache Ignite developer hub run by GridGain. Here you'll find comprehensive guides and documentation to help you start working with Apache Ignite as quickly as possible, as well as support if you get stuck.

 

GridGain also provides the Community Edition, a distribution of Apache Ignite made available by GridGain. It is the fastest and easiest way to get started with Apache Ignite. The Community Edition is generally more stable than the release available from the Apache Ignite website and may contain extra bug fixes and features that have not yet made it into the release on the Apache website.

 


JDBC Driver

Connect to Ignite using standard JDBC driver.

JDBC Connection

Ignite is shipped with a JDBC driver that allows you to retrieve distributed data from caches using standard SQL queries, and to update that data with DML statements such as INSERT, UPDATE, and DELETE, directly through the JDBC API.

In Ignite, the JDBC connection URL has the following pattern:

jdbc:ignite:cfg://[<params>@]<config_url>
  • <config_url> is required and represents any valid URL that points to an Ignite configuration file for an Ignite client node. The JDBC driver starts this client node internally when it establishes a connection with the cluster, and forwards SQL queries sent by the user application to the cluster through it.
  • <params> is optional and has the following format:
param1=value1:param2=value2:...:paramN=valueN

The following parameters are supported:

| Properties | Description | Default |
| --- | --- | --- |
| cache | Cache name. If it is not defined, the default cache is used. Note that the cache name is case sensitive. | |
| nodeId | ID of the node where the query will be executed. Useful for querying through local caches. | |
| local | Execute the query only on the local node. Use this parameter together with nodeId to limit the data set to the specified node. | false |
| collocated | Flag used for optimization purposes. Whenever Ignite executes a distributed query, it sends sub-queries to individual cluster members. If you know in advance that the elements of your query selection are collocated on the same node, Ignite can make significant performance and network optimizations. | false |
| distributedJoins | Allows the use of distributed joins for non-collocated data. | false |
| streaming | Turns on bulk data load mode via INSERT statements for this connection. Refer to the Streaming Mode section for more details. | false |
| streamingAllowOverwrite | Tells Ignite to overwrite values for existing keys on duplication instead of skipping them. Refer to the Streaming Mode section for more details. | false |
| streamingFlushFrequency | Timeout, in milliseconds, that the data streamer uses to flush data. By default, data is flushed on connection close. Refer to the Streaming Mode section for more details. | 0 |
| streamingPerNodeBufferSize | Data streamer's per-node buffer size. Refer to the Streaming Mode section for more details. | 1024 |
| streamingPerNodeParallelOperations | Data streamer's per-node parallel operations number. Refer to the Streaming Mode section for more details. | 16 |
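Putting the URL pattern and the parameters together, a connection string can be assembled as a plain string. A minimal sketch, in which the cache name personCache and the config path are hypothetical examples:

```java
// Sketch: assembling an Ignite JDBC URL from optional parameters and a config URL.
// The cache name "personCache" and the config path are hypothetical examples.
public class IgniteJdbcUrl {
    public static String build(String configUrl, String... params) {
        String prefix = "jdbc:ignite:cfg://";
        // Parameters are colon-separated and followed by '@'; omit both when absent.
        return params.length == 0
            ? prefix + configUrl
            : prefix + String.join(":", params) + "@" + configUrl;
    }

    public static void main(String[] args) {
        System.out.println(build("file:///etc/config/ignite-jdbc.xml",
            "cache=personCache", "distributedJoins=true"));
        // jdbc:ignite:cfg://cache=personCache:distributedJoins=true@file:///etc/config/ignite-jdbc.xml
    }
}
```

The resulting string is what you would pass to DriverManager.getConnection().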

Presently, the JDBC driver requires several JARs on the classpath of your application or SQL tool: add all the JARs from the {apache_ignite_release}\libs folder and from its ignite-indexing and ignite-spring subfolders.

Client vs Server Nodes

By default, all Ignite nodes are started as server nodes, and client mode needs to be explicitly enabled. However, regardless of the configuration, the JDBC driver always starts a node in client mode. See Clients and Servers section for details.

Cross-Cache Queries

The cache that the driver is connected to is treated as the default schema. To query across multiple caches, the Cross-Cache Query functionality can be used.
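As a sketch, a cross-cache query references a table in another cache by using that cache's (case-sensitive) name, in double quotes, as the schema. The cache name orgCache and the column names below are hypothetical:

```java
// Sketch: cross-cache SQL references tables in other caches by quoting the
// cache name as the schema. "orgCache" and the column names are hypothetical.
public class CrossCacheSql {
    public static String qualify(String cacheName, String table) {
        return "\"" + cacheName + "\"." + table;
    }

    public static void main(String[] args) {
        // Join Person (in the connected/default cache) with Organization in orgCache.
        String sql = "SELECT p.name, o.name FROM Person p, "
            + qualify("orgCache", "Organization") + " o WHERE p.orgId = o._key";
        System.out.println(sql);
    }
}
```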

Joins and Collocation

Just like with Cache SQL Queries used from IgniteCache API, joins on PARTITIONED caches will work correctly only if joined objects are stored in collocated mode. Refer to Affinity Collocation for more details.

Replicated vs Partitioned Caches

Queries on REPLICATED caches will run directly only on one node, while queries on PARTITIONED caches are distributed across all cache nodes.

Streaming Mode

You can load data into an Ignite cluster in streaming mode (bulk mode) using the JDBC driver. In this mode, the driver instantiates IgniteDataStreamer internally and feeds data to it. To activate this mode, add the streaming parameter, set to true, to the JDBC connection string:

// Register JDBC driver.
Class.forName("org.apache.ignite.IgniteJdbcDriver");
 
// Opening connection in the streaming mode.
Connection conn = DriverManager.getConnection("jdbc:ignite:cfg://streaming=true@file:///etc/config/ignite-jdbc.xml");

Presently, streaming mode is supported only for INSERT operations, which makes it a good fit when you need to preload data into a cache quickly. The JDBC driver defines multiple connection parameters that affect the behavior of streaming mode; they are listed in the parameters table above.

The parameters cover almost all settings of a general IgniteDataStreamer and allow you to fine-tune the streamer according to your needs. Please refer to the Data Streamers section of the Ignite docs for more information on how to configure the streamer.
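Several streaming parameters can be combined in one connection string; as in the general URL pattern above, they are colon-separated. A sketch, in which the specific values (a 2048-entry buffer, a 500 ms flush) are arbitrary examples:

```java
import java.util.StringJoiner;

// Sketch: combining several streaming parameters in one Ignite JDBC URL.
// The values (2048-entry buffer, 500 ms flush) are arbitrary examples.
public class StreamingUrl {
    public static String build(String configUrl) {
        StringJoiner params = new StringJoiner(":");
        params.add("streaming=true");
        params.add("streamingAllowOverwrite=true");      // overwrite existing keys
        params.add("streamingPerNodeBufferSize=2048");   // per-node buffer size
        params.add("streamingFlushFrequency=500");       // flush every 500 ms
        return "jdbc:ignite:cfg://" + params + "@" + configUrl;
    }

    public static void main(String[] args) {
        System.out.println(build("file:///etc/config/ignite-jdbc.xml"));
    }
}
```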

Time Based Flushing

By default, the data is flushed when either a connection is closed or streamingPerNodeBufferSize is met. If you need to flush the data in a timely manner, then adjust the streamingFlushFrequency parameter.

// Register JDBC driver.
Class.forName("org.apache.ignite.IgniteJdbcDriver");
 
// Opening a connection in streaming mode with time-based flushing enabled.
// Note that multiple parameters are separated by colons, per the URL pattern above.
Connection conn = DriverManager.getConnection("jdbc:ignite:cfg://streaming=true:streamingFlushFrequency=1000@file:///etc/config/ignite-jdbc.xml");

PreparedStatement stmt = conn.prepareStatement(
  "INSERT INTO Person(_key, name, age) VALUES(CAST(? as BIGINT), ?, ?)");

// Adding the data.
for (int i = 1; i < 100000; i++) {
    // Inserting a Person; the key parameter is cast to BIGINT by the query.
    stmt.setInt(1, i);
    stmt.setString(2, "John Smith");
    stmt.setInt(3, 25);

    stmt.execute();
}

conn.close();

// Beyond this point, all data is guaranteed to be flushed into the cache.

Example

The Ignite JDBC driver automatically fetches only those fields that you actually need from the objects stored in the cache. Say you have a Person class declared like this:

import org.apache.ignite.cache.query.annotations.QuerySqlField;

public class Person {
    @QuerySqlField
    private String name;
 
    @QuerySqlField
    private int age;
 
    // Getters and setters.
    ...
}

If you have instances of this class in a cache, you can query individual fields (name, age or both) via the standard JDBC API, like so:

// Register JDBC driver.
Class.forName("org.apache.ignite.IgniteJdbcDriver");
 
// Open JDBC connection (cache name is not specified, which means that we use default cache).
Connection conn = DriverManager.getConnection("jdbc:ignite:cfg://file:///etc/config/ignite-jdbc.xml");
 
// Query names of all people.
ResultSet rs = conn.createStatement().executeQuery("select name from Person");
 
while (rs.next()) {
    String name = rs.getString(1);
    ...
}
 
// Query people with specific age using prepared statement.
PreparedStatement stmt = conn.prepareStatement("select name, age from Person where age = ?");
 
stmt.setInt(1, 30);
 
rs = stmt.executeQuery();
 
while (rs.next()) {
    String name = rs.getString("name");
    int age = rs.getInt("age");
    ...
}

Moreover, you can modify the data using DML statements.

INSERT

// Insert a Person with a Long key.
PreparedStatement stmt = conn.prepareStatement("INSERT INTO Person(_key, name, age) VALUES(CAST(? as BIGINT), ?, ?)");
 
stmt.setInt(1, 1);
stmt.setString(2, "John Smith");
stmt.setInt(3, 25);

stmt.execute();

MERGE

// Merge a Person with a Long key.
PreparedStatement stmt = conn.prepareStatement("MERGE INTO Person(_key, name, age) VALUES(CAST(? as BIGINT), ?, ?)");
 
stmt.setInt(1, 1);
stmt.setString(2, "John Smith");
stmt.setInt(3, 25);
 
stmt.executeUpdate();

UPDATE

// Update a Person.
conn.createStatement().
  executeUpdate("UPDATE Person SET age = age + 1 WHERE age = 25");

DELETE

conn.createStatement().execute("DELETE FROM Person WHERE age = 25");

A minimal version of the ignite-jdbc.xml configuration file might look like the one below:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="clientMode" value="true"/>

        <property name="peerClassLoadingEnabled" value="true"/>

        <!-- Configure TCP discovery SPI to provide list of initial nodes. -->
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"/>
                </property>
            </bean>
        </property>
    </bean>
</beans>

Hostname Based JDBC Connection

In Apache Ignite versions prior to 1.4, the JDBC connection URL had the following pattern:

jdbc:ignite://<hostname>:<port>/<cache_name>

You can still use this driver with the current Apache Ignite version if it is more convenient for you. See the following documentation for details.

To use this driver from an application or SQL tool, add {apache_ignite_release}\libs\ignite-core-{version}.jar to the classpath.
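A legacy-format URL can likewise be assembled as a plain string. In this sketch the hostname, port, and cache name are placeholder values; check the driver documentation for the port your node actually listens on:

```java
// Sketch: building a legacy (pre-1.4 style) Ignite JDBC URL.
// The hostname, port, and cache name below are placeholder values.
public class LegacyIgniteJdbcUrl {
    public static String build(String host, int port, String cacheName) {
        return "jdbc:ignite://" + host + ":" + port + "/" + cacheName;
    }

    public static void main(String[] args) {
        System.out.println(build("localhost", 11211, "personCache"));
        // jdbc:ignite://localhost:11211/personCache
    }
}
```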
