
Multilayer perceptron

Overview

Multilayer Perceptron (MLP) is the basic form of neural network. It consists of one input layer and zero or more transformation layers. Each transformation layer depends on the previous layer in the following way:

x_{i+1} = σ_{i}(w_{i} · x_{i} + b_{i})

In the above equation, the dot operator (·) is the dot product of two vectors, functions denoted by σ are called activators, vectors denoted by w are called weights, and vectors denoted by b are called biases. Each transformation layer has associated weights, an activator, and optionally biases. The set of all weights and biases of an MLP is the set of MLP parameters.
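To make the formula concrete, below is a small worked example of a single transformation layer. The concrete numbers, the layer sizes, and the choice of the sigmoid activator are assumptions made up for this illustration:

// A worked example of one transformation layer: x_next = σ(w · x + b).
// All numbers, sizes and the sigmoid activator are illustrative assumptions.
public class LayerSketch {
    public static void main(String[] args) {
        double[] x = {0.5, -1.0};     // Output of the previous layer (input vector).
        double[][] w = {{0.3, -0.2}}; // Weights: one row per neuron of this layer.
        double[] b = {0.1};           // Biases: one per neuron of this layer.

        double[] next = new double[w.length];
        for (int i = 0; i < w.length; i++) {
            double dot = b[i];
            for (int j = 0; j < x.length; j++)
                dot += w[i][j] * x[j];               // Dot product w · x plus bias.
            next[i] = 1.0 / (1.0 + Math.exp(-dot));  // Sigmoid activator.
        }

        // next[0] = σ(0.3 * 0.5 + (-0.2) * (-1.0) + 0.1) = σ(0.45) ≈ 0.61.
        System.out.println(next[0]);
    }
}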

Model

In the case of a neural network, the model is represented by the MultilayerPerceptron class. It allows you to make a prediction for a given vector of features in the following way:

// A model is obtained as a result of training (see the Trainer section below).
MultilayerPerceptron mlp = ...

// Make a prediction for the given vectors of features.
Matrix prediction = mlp.apply(coordinates);

The model is a fully independent object: after training, it can be saved, serialized, and restored.
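For example, a trained model can be persisted with standard Java serialization. The sketch below assumes that MultilayerPerceptron is java.io.Serializable and uses an arbitrary file name:

import java.io.*;

// Save the trained model (assumes MultilayerPerceptron is Serializable;
// the file name mlp.ser is arbitrary).
try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("mlp.ser"))) {
    out.writeObject(mlp);
}

// Later: restore the model and use it for predictions as before.
MultilayerPerceptron restored;
try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("mlp.ser"))) {
    restored = (MultilayerPerceptron)in.readObject();
}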

Trainer

One of the most popular ways of doing supervised model training is batch training. In this approach, training is done in iterations: during each iteration we extract a subpart (batch) of labeled data (data consisting of inputs of the approximated function and the corresponding values of this function, often called the 'ground truth') and update the model parameters using this subpart. Updates are made so as to minimize the loss function on the batches.
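Conceptually, one round of batch training looks like the sketch below; Dataset, Batch, Params and Loss are hypothetical types used only to illustrate the idea, not part of the Apache Ignite API:

// Conceptual sketch of batch training; all types here are hypothetical.
Params params = Params.initial(seed);

for (int i = 0; i < maxIterations; i++) {
    Batch batch = dataset.nextBatch(batchSize);      // Extract a batch of labeled data.
    Vector grad = loss.gradient(params, batch);      // Gradient of the loss on this batch.
    params = params.minus(grad.times(learningRate)); // Step against the gradient.
}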

Apache Ignite's MLPTrainer is used for distributed batch training, which works in a map-reduce fashion. Each iteration (let's call it a global iteration) consists of several parallel iterations, which in turn consist of several local steps. Each local iteration is executed by its own worker and performs the specified number of local steps (called the synchronization period) to compute its update of the model parameters. Then all updates are accumulated on the node that started the training and transformed into a global update, which is sent back to all workers. This process continues until the stop criterion is reached.
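Schematically, a single global iteration can be pictured as follows. This is a hypothetical sketch, not the actual MLPTrainer internals; in the real API the accumulation and transformation of updates are supplied via the UpdatesStrategy (for example, SimpleGDParameterUpdate::sumLocal and SimpleGDParameterUpdate::avg in the code below):

// Hypothetical sketch of one global iteration (not actual MLPTrainer code).
while (!stopCriterionReached()) {
    // Map phase: every worker performs `syncPeriod` local steps on its
    // partition of the data and produces a local parameter update.
    List<Update> localUpdates = new ArrayList<>();
    for (Worker worker : workers)
        localUpdates.add(worker.runLocalSteps(globalParams, syncPeriod));

    // Reduce phase: local updates are accumulated on the node that started
    // the training and transformed into a single global update.
    Update globalUpdate = transform(accumulate(localUpdates));

    // The global update is applied and sent back to all workers.
    globalParams = globalParams.plus(globalUpdate);
}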

MLPTrainer can be parameterized by the neural network architecture, the loss function, the update strategy (SGD, RProp, or Nesterov), the maximum number of iterations, the batch size, the number of local iterations, and the random seed.

// Define a layered architecture.
MLPArchitecture arch = new MLPArchitecture(2).
    withAddedLayer(10, true, Activators.RELU).
    withAddedLayer(1, false, Activators.SIGMOID);

// Define a neural network trainer.
MLPTrainer<SimpleGDParameterUpdate> trainer = new MLPTrainer<>(
    arch,
    LossFunctions.MSE,
    new UpdatesStrategy<>(
        new SimpleGDUpdateCalculator(0.1),
        SimpleGDParameterUpdate::sumLocal,
        SimpleGDParameterUpdate::avg
    ),
    3000,   // Max iterations.
    4,      // Batch size.
    50,     // Local iterations.
    123L    // Random seed.
);

// Train model.
MultilayerPerceptron mlp = trainer.fit(
    ignite,
    upstreamCache,
    (k, pnt) -> pnt.coordinates,
    (k, pnt) -> pnt.label
);

// Make a prediction. 
Matrix prediction = mlp.apply(coordinates); 
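The same trainer can also be used without an Ignite cluster, on local data stored in a Java map. In this case fit accepts the map and the number of dataset partitions instead of the Ignite instance and the cache: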

// Train model.
MultilayerPerceptron mlp = trainer.fit(
    upstreamMap,
    10,          // Number of partitions.
    (k, pnt) -> pnt.coordinates,
    (k, pnt) -> pnt.label
);

// Make a prediction. 
Matrix prediction = mlp.apply(coordinates); 

Examples

To see how Deep Learning can be used in practice, try this example, which is available on GitHub and delivered with every Apache Ignite distribution:

Multilayer perceptron

