Neural network architecture

These functions provide the functionality needed to define the neural architecture of an autoencoder by connecting layers of units; a short usage sketch closes this section.

input()

Create an input layer

dense()

Create a fully-connected neural layer

variational_block()

Create a variational block of layers

conv()

Create a convolutional layer

output()

Create an output layer

dropout()

Dropout layer

layer_keras()

Custom layer from Keras

`+`(<ruta_network>) c(<ruta_network>)

Add layers to a network/Join networks

`[`(<ruta_network>)

Access subnetworks of a network

plot(<ruta_network>)

Draw a neural network

new_layer()

Layer wrapper constructor

new_network()

Sequential network constructor

as_network()

Coercion to ruta_network

encoding_index()

Get the index of the encoding
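
For illustration, the constructors above can be chained with `+` to describe an architecture. This is a minimal sketch assuming the common `dense(units, activation)` calling convention; check each help page for the full argument lists.

```r
library(ruta)

# Sketch: a symmetric network with a 36-unit encoding
net <- input() +
  dense(256, "relu") +
  dense(36, "tanh") +
  dense(256, "relu") +
  output("sigmoid")

net[2]               # access a subnetwork (here, the first dense layer)
encoding_index(net)  # position of the encoding layer within the network
plot(net)            # draw the architecture
```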

Autoencoder and variants

These functions create and customize autoencoder learners; an example follows the list.

autoencoder()

Create an autoencoder learner

autoencoder_contractive()

Create a contractive autoencoder

autoencoder_denoising()

Create a denoising autoencoder

autoencoder_robust()

Create a robust autoencoder

autoencoder_sparse()

Sparse autoencoder

autoencoder_variational()

Build a variational autoencoder

add_weight_decay()

Add weight decay to any autoencoder

weight_decay()

Weight decay

make_contractive()

Add contractive behavior to any autoencoder

make_denoising()

Add denoising behavior to any autoencoder

make_robust()

Add robust behavior to any autoencoder

make_sparse()

Add sparsity regularization to an autoencoder

is_contractive()

Detect whether an autoencoder is contractive

is_denoising()

Detect whether an autoencoder is denoising

is_robust()

Detect whether an autoencoder is robust

is_sparse()

Detect whether an autoencoder is sparse

is_variational()

Detect whether an autoencoder is variational

sparsity()

Sparsity regularization

new_autoencoder()

Create an autoencoder learner
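
As a rough sketch of how these constructors combine, the snippet below builds a basic learner and two variants. Default arguments are assumed for `autoencoder_robust()` and `make_sparse()`; consult their help pages for the tunable parameters.

```r
library(ruta)

net <- input() + dense(36, "tanh") + output("sigmoid")

# Basic autoencoder learner
basic <- autoencoder(net, loss = "binary_crossentropy")

# Variants (default arguments assumed)
robust <- autoencoder_robust(net)
sparse <- make_sparse(basic)

is_robust(robust)   # TRUE
is_sparse(sparse)   # TRUE
```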

Loss functions

These functions define the objective functions that an autoencoder may optimize. In addition to these, any loss defined in Keras (such as "binary_crossentropy" or "mean_squared_error") may be used by name. An example closes this section.

contraction()

Contractive loss

correntropy()

Correntropy loss

loss_variational()

Variational loss

as_loss()

Coercion to ruta_loss
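
A loss can be given either as a Keras loss name or as one of the Ruta loss objects above. The sketch below assumes `correntropy()` has usable defaults:

```r
library(ruta)

net <- input() + dense(36, "tanh") + output("sigmoid")

# Keras loss referenced by name
ae_mse <- autoencoder(net, loss = "mean_squared_error")

# Ruta loss object (robust reconstruction error)
ae_corr <- autoencoder(net, loss = correntropy())
```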

Model training

The following functions train an autoencoder on input data; a minimal example follows the list.

autoencode()

Automatically compute an encoding of a data matrix

apply_filter()

Apply filters

configure()

Configure a learner object with the associated Keras objects

train()

Train a learner object with data

is_trained()

Detect trained models
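
A minimal training sketch, assuming a numeric matrix with one instance per row and features in [0, 1], and that `train()` accepts an `epochs` argument in the usual Keras sense:

```r
library(ruta)

# Toy data: numeric matrix, one instance per row, values in [0, 1]
x <- matrix(runif(1000 * 8), nrow = 1000)

learner <- autoencoder(input() + dense(2, "tanh") + output("sigmoid"),
                       loss = "mean_squared_error")

model <- train(learner, x, epochs = 20)
is_trained(model)   # TRUE once training has completed
```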

Model evaluation

Evaluation metrics for trained models.

evaluate_mean_squared_error() evaluate_mean_absolute_error() evaluate_binary_crossentropy() evaluate_binary_accuracy() evaluate_kullback_leibler_divergence()

Evaluation metrics

evaluation_metric()

Custom evaluation metrics
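
The metrics above take a trained learner and a data matrix. A small end-to-end sketch with toy random data, default arguments assumed:

```r
library(ruta)

x_train <- matrix(runif(800 * 8), nrow = 800)
x_test  <- matrix(runif(200 * 8), nrow = 200)

learner <- autoencoder(input() + dense(2, "tanh") + output("sigmoid"),
                       loss = "mean_squared_error")
model <- train(learner, x_train, epochs = 10)

# Reconstruction error on held-out data
evaluate_mean_squared_error(model, x_test)
evaluate_mean_absolute_error(model, x_test)
```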

Tasks for trained models

Once an autoencoder has been trained, the following functions map data from the input space into the latent space and vice versa; an example follows the list.

encode()

Retrieve encoding of data

decode()

Retrieve decoding of encoded data

reconstruct() predict(<ruta_autoencoder>)

Retrieve reconstructions for input data

generate()

Generate samples from a generative model

save_as() load_from()

Save and load Ruta models
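
A rough sketch of the typical round trip, under the same toy-data assumptions as the earlier examples:

```r
library(ruta)

x <- matrix(runif(1000 * 8), nrow = 1000)
model <- train(autoencoder(input() + dense(2, "tanh") + output("sigmoid"),
                           loss = "mean_squared_error"),
               x, epochs = 10)

codes  <- encode(model, x)        # project data onto the latent space
approx <- decode(model, codes)    # map codes back to the input space
recons <- reconstruct(model, x)   # encode and decode in one step
```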

Noise generators

These objects act as input filters that inject noise into the training inputs when fitting denoising autoencoders; a short example closes the section.

noise()

Noise generator

noise_cauchy()

Additive Cauchy noise

noise_gaussian()

Additive Gaussian noise

noise_ones()

Filter to add ones noise

noise_saltpepper()

Filter to add salt-and-pepper noise

noise_zeros()

Filter to add zero noise
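
A sketch of how noise enters a denoising autoencoder. The `noise_type = "gaussian"` argument and the defaults of `noise_gaussian()` are assumptions to verify against the help pages:

```r
library(ruta)

net <- input() + dense(4, "tanh") + output("sigmoid")

# Denoising autoencoder; "gaussian" is an assumed noise_type option
dae <- autoencoder_denoising(net, loss = "mean_squared_error",
                             noise_type = "gaussian")
is_denoising(dae)   # TRUE

# A noise filter can also be constructed explicitly (defaults assumed)
gaussian_filter <- noise_gaussian()
```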

Keras conversions

These are internal functions which convert Ruta wrapper objects into Keras objects and functions.

to_keras()

Convert a Ruta object onto Keras objects and functions

to_keras(<ruta_autoencoder>) to_keras(<ruta_autoencoder_variational>)

Extract Keras models from an autoencoder wrapper

to_keras(<ruta_filter>)

Get a Keras generator from a data filter

to_keras(<ruta_layer_input>) to_keras(<ruta_layer_dense>) to_keras(<ruta_layer_conv>) to_keras(<ruta_layer_custom>)

Convert Ruta layers onto Keras layers

to_keras(<ruta_layer_variational>)

Obtain a Keras block of layers for the variational autoencoder

to_keras(<ruta_loss_contraction>) to_keras(<ruta_loss_correntropy>) to_keras(<ruta_loss_variational>) to_keras(<ruta_loss_named>)

Obtain a Keras loss

to_keras(<ruta_network>)

Build a Keras network

to_keras(<ruta_sparsity>)

Translate sparsity regularization to Keras regularizer

to_keras(<ruta_weight_decay>)

Obtain a Keras weight decay

Other methods

Some methods for R generics.

print(<ruta_autoencoder>) print(<ruta_loss_named>) print(<ruta_loss>) print(<ruta_network>)

Inspect Ruta objects