PyTorch Models

To allow more flexibility in the use of the neural network models, these are directly accessible as torch.nn.Module objects through the .model extension of each package, for example:

>>> from cdt.causality.graph.model import CGNN_model

to import the CGNN PyTorch model. The available models are the following:

  • CGNN

  • SAM

  • NCC

  • GNN

  • FSGNN

CGNN

class cdt.causality.graph.model.CGNN_model(adj_matrix, batch_size, nh=20, device=None, confounding=False, initial_graph=None, **kwargs)[source]

Class defining the CGNN model.

Parameters
  • adj_matrix (numpy.array) – Adjacency matrix of the graph to evaluate

  • batch_size (int) – Minibatch size. ~500 is recommended

  • nh (int) – number of hidden units in the hidden layers

  • device (str) – Device on which the computation is to be run

  • confounding (bool) – Enables the confounding variant

  • initial_graph (numpy.array) – Initial graph in the confounding case.

forward()[source]

Generate data following the topological order of the graph; outputs a batch of generated data of size batch_size.

Returns

Generated data

Return type

torch.Tensor

run(dataset, train_epochs=1000, test_epochs=1000, verbose=None, idx=0, lr=0.01, dataloader_workers=0, **kwargs)[source]

Run the CGNN on a given graph.

Parameters
  • dataset (torch.utils.data.Dataset) – True Data, on the same device as the model.

  • train_epochs (int) – number of train epochs

  • test_epochs (int) – number of test epochs

  • verbose (bool) – verbosity of the model

  • idx (int) – Index (for printing purposes)

  • lr (float) – learning rate of the model

  • dataloader_workers (int) – Number of workers for dataset loading

Returns

Average score of the graph on test_epochs epochs

Return type

float
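
A minimal illustrative sketch (not taken from the package documentation): instantiating CGNN_model on a two-variable candidate DAG and drawing one batch of generated data. The toy adjacency matrix, shapes and variable names are assumptions; only the constructor and forward() signatures above are relied upon.

>>> import numpy as np
>>> from cdt.causality.graph.model import CGNN_model
>>> adj = np.array([[0, 1],
...                 [0, 0]])         # candidate DAG: variable 0 causes variable 1
>>> model = CGNN_model(adj, batch_size=500, nh=20)
>>> generated = model()              # calls forward(): one batch generated in topological order
>>> generated.shape                  # expected: (batch_size, number of variables)
>>> # Training and scoring against observed data goes through run(); see its
>>> # documented signature above (dataset, train_epochs, test_epochs, lr, ...).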

SAM

class cdt.causality.graph.model.SAM_generators(data_shape, nh, skeleton=None, cat_sizes=None, linear=False, numberHiddenLayersG=1)[source]

Ensemble of all the generators.

forward(data, adj_matrix, drawn_neurons=None)[source]

Forward through all the generators.

class cdt.causality.graph.model.SAM_discriminator(nfeatures, dnh, numberHiddenLayersD=2, mask=None)[source]

SAM discriminator.

forward(input, obs_data=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
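
A brief construction-only sketch: SAM couples these two modules adversarially, the generators producing samples of each variable given a candidate structure and the discriminator distinguishing generated from observed data. The training loop is deliberately omitted; shapes and names below are illustrative assumptions, only the constructor signatures above are taken from the documentation.

>>> import torch
>>> from cdt.causality.graph.model import SAM_generators, SAM_discriminator
>>> data = torch.randn(256, 4)                       # toy observational data, 4 variables
>>> generators = SAM_generators(data.shape, nh=20)   # ensemble: one generator per variable
>>> discriminator = SAM_discriminator(nfeatures=4, dnh=20)
>>> # During training, generators(data, adj_matrix) produces generated variables
>>> # that the discriminator then scores against the observed data (see forward() above).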

NCC

class cdt.causality.pairwise.model.NCC_model(n_hiddens=20, kernel_size=3)[source]

NCC model structure.

Parameters
  • n_hiddens (int) – Number of hidden features

  • kernel_size (int) – Kernel size of the convolutions

forward(x)[source]

Passing data through the network.

Parameters

x (torch.Tensor) – 2d tensor containing both (x, y) variables

Returns

output of NCC

Return type

torch.Tensor
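
A small illustrative sketch: a single (x, y) pair passed through an untrained NCC_model. The (batch, 2, n_points) input layout is an assumption made to match the convolutional layers; only the constructor arguments above are documented.

>>> import torch
>>> from cdt.causality.pairwise.model import NCC_model
>>> model = NCC_model(n_hiddens=20, kernel_size=3)
>>> pair = torch.randn(1, 2, 500)    # assumed layout: 1 pair, its 2 variables, 500 samples each
>>> out = model(pair)                # raw NCC output for the pair (the model is untrained here)
>>> out.shape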

GNN

class cdt.causality.pairwise.model.GNN_model(batch_size, nh=20, lr=0.01, train_epochs=1000, test_epochs=1000, idx=0, verbose=None, dataloader_workers=0, **kwargs)[source]

Torch model for the GNN structure.

Parameters
  • batch_size (int) – Size of the batches fed to the model

  • nh (int) – Number of hidden units in the hidden layer

  • lr (float) – Learning rate of the Model

  • train_epochs (int) – Number of train epochs

  • test_epochs (int) – Number of test epochs

  • idx (int) – Index (for printing purposes)

  • verbose (bool) – Verbosity of the model

  • dataloader_workers (int) – Number of workers for dataset loading

  • device (str) – Device on which the algorithm is going to be run

forward(x)[source]

Pass data through the net structure.

Parameters

x (torch.Tensor) – input data, shape (:, 1)

Returns

Output of the shallow net

Return type

torch.Tensor

run(dataset)[source]

Run the GNN on a pair x,y of FloatTensor data.

Parameters

dataset (torch.utils.data.Dataset) – True data; First element is the cause

Returns

Score of the configuration

Return type

torch.Tensor
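
A hedged usage sketch: scoring the direction x -> y on toy data. Wrapping the pair in a torch.utils.data.TensorDataset is an assumption (run() only documents that it expects a Dataset whose first element is the cause); in practice this score is compared with that of a second model run on the reversed pair to decide the direction.

>>> import torch
>>> from torch.utils.data import TensorDataset
>>> from cdt.causality.pairwise.model import GNN_model
>>> x = torch.randn(500, 1)                      # candidate cause
>>> y = x ** 2 + 0.1 * torch.randn(500, 1)       # candidate effect (toy mechanism)
>>> model = GNN_model(batch_size=500, nh=20, lr=0.01,
...                   train_epochs=100, test_epochs=100)
>>> score_xy = model.run(TensorDataset(x, y))    # assumed Dataset wrapping; cause comes first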

FSGNN

class cdt.independence.graph.model.FSGNN_model(sizes, dropout=0.0, activation_function=<class 'torch.nn.modules.activation.ReLU'>)[source]

Variant of CGNN for feature selection.

Parameters
  • sizes (list) – Size of the neural network layers

  • dropout (float) – Dropout rate of the neural connections

  • activation_function (torch.nn.Module) – Activation function of the network

forward(x)[source]

Forward pass in the network.

Parameters

x (torch.Tensor) – input data

Returns

output of the network

Return type

torch.Tensor

train(dataset, lr=0.01, l1=0.1, batch_size=-1, train_epochs=1000, test_epochs=1000, device=None, verbose=None, dataloader_workers=0)[source]

Train the network and output the scores of the features.

Parameters
  • dataset (torch.utils.data.Dataset) – Original data

  • lr (float) – Learning rate

  • l1 (float) – Coefficient of the L1 regularization

  • batch_size (int) – Batch size of the model, defaults to the dataset size.

  • train_epochs (int) – Number of train epochs

  • test_epochs (int) – Number of test epochs

  • device (str) – Device on which the computation is to be run

  • verbose (bool) – Verbosity of the model

  • dataloader_workers (int) – Number of workers for dataset loading

Returns

feature selection scores for each feature.

Return type

list
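
A minimal sketch of feature selection with this class, with assumptions flagged: the (features, target) pair is wrapped in a torch.utils.data.TensorDataset and the input layer is sized to the number of features plus one (leaving room for a noise input, as in the CGNN generators); both choices are illustrative, only the signatures above are documented.

>>> import torch
>>> from torch.utils.data import TensorDataset
>>> from cdt.independence.graph.model import FSGNN_model
>>> features = torch.randn(500, 5)                         # toy candidate features
>>> target = features[:, [0]] + 0.1 * torch.randn(500, 1)  # toy target driven by feature 0
>>> model = FSGNN_model(sizes=[features.shape[1] + 1, 20, 1], dropout=0.0)
>>> scores = model.train(TensorDataset(features, target),
...                      lr=0.01, l1=0.1, train_epochs=100, test_epochs=100)
>>> scores                                                 # one relevance score per feature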