Ignite's data loading and streaming capabilities allow ingesting both large finite data sets and continuous streaming data sources into the cluster in a scalable and fault-tolerant way. The rate at which data can be injected into Ignite can easily exceed millions of events per second on a moderately sized cluster.
Data loading into Ignite from a third-party database is covered on the dedicated data loading page, which also explains how to preload initial data into the cluster when Ignite native persistence is enabled.
- Client nodes inject finite or continuous streams of data into Ignite caches using Ignite Data Streamers.
- Data is automatically partitioned and spread evenly between Ignite data nodes.
- Streamed data can be concurrently processed directly on the Ignite data nodes in a collocated fashion.
- Clients can also perform concurrent SQL queries on the streamed data.
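The collocated-processing step above can be sketched with a `StreamReceiver`; `StreamTransformer` is a convenience receiver that applies an entry processor on the data node that owns each key. This is a minimal sketch, assuming a running Ignite cluster; the cache name `wordCounts` and the word-counting logic are illustrative.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;
import org.apache.ignite.stream.StreamTransformer;

public class CollocatedCounter {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Illustrative cache; in production this would be pre-configured.
            ignite.getOrCreateCache("wordCounts");

            try (IgniteDataStreamer<String, Long> streamer = ignite.dataStreamer("wordCounts")) {
                // Allow updates to existing keys so the transformer can accumulate counts.
                streamer.allowOverwrite(true);

                // The transformer runs on the data node that owns each key (collocated),
                // so no data moves across the network for the update itself.
                streamer.receiver(StreamTransformer.from((entry, args2) -> {
                    Long count = entry.getValue();
                    entry.setValue(count == null ? 1L : count + 1);
                    return null;
                }));

                for (String word : "a b a c a b".split(" "))
                    streamer.addData(word, 1L);
            } // close() flushes any buffered entries
        }
    }
}
```

Because the receiver executes where the data lives, this pattern scales with the number of data nodes rather than with the client's capacity.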
Data streamers are defined by the IgniteDataStreamer API and are built to inject large amounts of continuous data into Ignite caches. Data streamers are scalable and fault-tolerant and provide at-least-once delivery semantics for all the data streamed into Ignite.
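Basic use of the IgniteDataStreamer API looks roughly like the following sketch, assuming a running Ignite node; the cache name `events` and the buffer size are illustrative, not prescribed defaults.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerLoad {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Illustrative cache; normally defined in the cluster configuration.
            ignite.getOrCreateCache("events");

            try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("events")) {
                // Entries are buffered per data node and shipped in batches,
                // which is what makes the streamer fast for bulk ingestion.
                streamer.perNodeBufferSize(1024);

                for (int i = 0; i < 1_000_000; i++)
                    streamer.addData(i, "event-" + i);
            } // close() flushes the remaining buffered entries to the cluster
        }
    }
}
```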
You can use the full set of Ignite data indexing capabilities, together with Ignite SQL, TEXT, and predicate-based cache queries, to query the streaming data.
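As a hedged sketch of querying streamed data with Ignite SQL: the cache name `words`, the indexed types, and the sample entries below are all illustrative. `SqlFieldsQuery` is the standard Ignite cache-query class; SQL must be enabled on the cache (here via `setIndexedTypes`).

```java
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class QueryStreamed {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Enable SQL over (Integer, String) pairs for this illustrative cache.
            CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("words");
            cfg.setIndexedTypes(Integer.class, String.class);
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);

            try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("words")) {
                streamer.addData(1, "alpha");
                streamer.addData(2, "beta");
            } // entries are flushed on close

            // SQL runs concurrently with streaming; here it filters on the value column.
            List<List<?>> rows = cache.query(
                new SqlFieldsQuery("SELECT _key, _val FROM String WHERE _val LIKE 'a%'")).getAll();
            rows.forEach(System.out::println);
        }
    }
}
```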
Apache Ignite integrates with major streaming technologies and frameworks, such as Kafka, Camel, Storm, and JMS, to bring even more advanced streaming capabilities to Ignite-based architectures.