Integration with YARN supports scheduling and running Apache Ignite nodes in a YARN cluster.
YARN is a resource negotiator that provides a general runtime environment with the essentials you need to deploy, run, and manage distributed applications. Its resource management and isolation help you get the most out of your servers.
For information about YARN, refer to http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html
Deploying an Apache Ignite cluster typically involves downloading the Apache Ignite distribution, changing configuration settings, and starting up the nodes. The integration with YARN lets you avoid these manual steps: the Ignite YARN application greatly simplifies cluster deployment. The application consists of the following components:
- Client: downloads the Ignite distribution, puts the necessary resources into HDFS, creates the context required for launching the task, and launches the ApplicationMaster process.
- ApplicationMaster: once registration succeeds, requests resources from the ResourceManager for use by Apache Ignite nodes, and maintains the Ignite cluster at the desired total resource level (CPU, memory, etc.).
- Container: the entity that runs an Ignite node on the slave hosts.
The Ignite Application requires that YARN and the Hadoop cluster are configured and running. For information on how to set up the cluster please refer to: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html
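Before deploying, it is worth confirming that HDFS and YARN are actually up and reachable from the machine you will launch from. A minimal sanity check, assuming the `hdfs` and `yarn` CLIs are on your PATH:

```shell
# Verify that HDFS is up and responding.
hdfs dfsadmin -report | head -n 5

# Verify that NodeManagers have registered with the ResourceManager.
yarn node -list -all
```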
- Download Apache Ignite.
- Configure the properties file. Update any parameters you would like to change. See the Configuration section below.
# The number of nodes in the cluster.
IGNITE_NODE_COUNT=2
# The number of CPU cores for each Apache Ignite node.
IGNITE_RUN_CPU_PER_NODE=1
# The number of megabytes of RAM for each Apache Ignite node.
IGNITE_MEMORY_PER_NODE=2048
# The version of Ignite which will be run on the nodes.
IGNITE_VERSION=2.7.0
# URL where the Ignite distribution can be downloaded from.
IGNITE_URL=http://mirror.linux-ia64.org/apache/ignite/2.7.0/apache-ignite-2.7.0-bin.zip
# You can also provide a path to an unzipped Ignite distribution instead of the URL:
# IGNITE_PATH=/ignite/apache-ignite-2.7.0-bin
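The configuration step can be scripted; a minimal sketch that writes the settings above to a `cluster.properties` file (the file name used when running the application):

```shell
# Write a minimal cluster.properties for the Ignite YARN application.
cat > cluster.properties <<'EOF'
IGNITE_NODE_COUNT=2
IGNITE_RUN_CPU_PER_NODE=1
IGNITE_MEMORY_PER_NODE=2048
IGNITE_VERSION=2.7.0
IGNITE_URL=http://mirror.linux-ia64.org/apache/ignite/2.7.0/apache-ignite-2.7.0-bin.zip
EOF

# Show what was written.
cat cluster.properties
```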
- Run the application.
yarn jar ignite-yarn-<ignite-version>.jar ./ignite-yarn-<ignite-version>.jar cluster.properties
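With a concrete version the invocation looks like this (version 2.7.0 and running from the directory containing the jar and the properties file are assumptions):

```shell
# Submit the Ignite YARN application. The first jar argument is executed
# by the yarn client; the second is the path the application uses to upload
# itself to HDFS; the last argument is the cluster configuration.
yarn jar ignite-yarn-2.7.0.jar ./ignite-yarn-2.7.0.jar cluster.properties
```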
- To make sure that the application deployed correctly, open the YARN console at http://&lt;resource-manager-host&gt;:8088/cluster. If everything is working as expected, you will see the Ignite application in the list of running applications.
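The same check can be done from the command line with the YARN CLI (a sketch; the output format varies by Hadoop version):

```shell
# List running YARN applications; the Ignite application should appear here.
yarn application -list -appStates RUNNING
```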
- Retrieve logs from the browser. To look through the Ignite logs, click Logs for any of the containers.
- Click stdout to get the stdout logs and stderr to get the stderr logs.
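Container logs can also be fetched without the browser, assuming log aggregation is enabled and `<application-id>` is the ID shown by the YARN console or `yarn application -list`:

```shell
# Fetch aggregated stdout/stderr logs for all containers of the application.
yarn logs -applicationId <application-id>
```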
All configuration is handled through environment variables or the properties file. The following configuration parameters are optional.
- The HDFS path to the Apache Ignite config file.
- The directory which will be used for saving the Apache Ignite distribution.
- The HDFS directory which will be used for saving the Apache Ignite distribution.
- The HDFS path to libs which will be added to the classpath.
- IGNITE_MEMORY_PER_NODE: The number of megabytes of RAM for each Apache Ignite node. This is the size of the Java heap, and includes on-heap caching if it is used.
- IGNITE_MEMORY_OVERHEAD_PER_NODE: The amount of memory necessary for all data regions, with padding for JVM native overhead, interned strings, etc. This setting should always be adjusted for nodes that store data, not just perform pure computations. The memory requested from YARN for a container running an Ignite node is the sum of IGNITE_MEMORY_PER_NODE and IGNITE_MEMORY_OVERHEAD_PER_NODE. The default is IGNITE_MEMORY_PER_NODE * 0.10, with a minimum of 384 MB.
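As a worked example of the default: with IGNITE_MEMORY_PER_NODE=2048, ten percent is about 204 MB, which is below the 384 MB floor, so the overhead defaults to 384 MB and YARN is asked for 2048 + 384 = 2432 MB per container. A sketch of that arithmetic:

```shell
# Default overhead: max(IGNITE_MEMORY_PER_NODE / 10, 384) megabytes.
IGNITE_MEMORY_PER_NODE=2048
OVERHEAD=$((IGNITE_MEMORY_PER_NODE / 10))
if [ "$OVERHEAD" -lt 384 ]; then OVERHEAD=384; fi
CONTAINER_MEMORY=$((IGNITE_MEMORY_PER_NODE + OVERHEAD))
echo "Requested per container: ${CONTAINER_MEMORY} MB"
```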
- The constraint on slave hosts.
- IGNITE_NODE_COUNT: The number of nodes in the cluster.
- IGNITE_RUN_CPU_PER_NODE: The number of CPU cores for each Apache Ignite node.
- IGNITE_VERSION: The version of Ignite which will be run on the nodes.
- IGNITE_PATH: The HDFS path to the Apache Ignite build. This property can be useful when the YARN cluster cannot download the distribution from an external URL.
- IGNITE_URL: The URL from which the Ignite binary distribution is downloaded. As of version 2.7, either IGNITE_PATH or IGNITE_URL must be set.
- Additional JVM options.