Convolution Layers
k3_node.layers.AGNNConv

Bases: `MessagePassing`

Implementation of the Attention-based Graph Neural Network (AGNN) layer.
Parameters:
Name | Description | Default |
---|---|---|
`trainable` | Whether to learn the scaling factor `beta`. | `True` |
`aggregate` | Aggregation function to use (one of `'sum'`, `'mean'`, `'max'`). | `'sum'` |
`activation` | Activation function to use. | `None` |
`**kwargs` | Additional arguments to pass to the superclass. | `{}` |
Source code in k3_node/layers/conv/agnn_conv.py
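Example (a minimal sketch; the constructor arguments come from the table above, while the `(x, edge_index)` call convention is an assumption based on the `MessagePassing` base class and should be verified against `agnn_conv.py`):

```python
import numpy as np
from k3_node.layers import AGNNConv

x = np.random.rand(4, 8).astype("float32")         # 4 nodes, 8 features per node
edge_index = np.array([[0, 1, 2, 3],
                       [1, 2, 3, 0]])               # assumed COO edge list, shape (2, n_edges)

layer = AGNNConv(trainable=True, aggregate="sum")   # documented constructor arguments
out = layer(x, edge_index)                          # assumed call convention
```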
k3_node.layers.APPNPConv

Bases: `Conv`

Implementation of the Approximate Personalized Propagation of Neural Predictions (APPNP) layer.
Parameters:
Name | Description | Default |
---|---|---|
`channels` | The number of output channels. | *required* |
`alpha` | The teleport probability. | `0.2` |
`propagations` | The number of propagation steps. | `1` |
`mlp_hidden` | A list of hidden channels for the MLP. | `None` |
`mlp_activation` | The activation function to use in the MLP. | `'relu'` |
`dropout_rate` | The dropout rate for the MLP. | `0.0` |
`activation` | The activation function to use in the layer. | `None` |
`use_bias` | Whether to add a bias to the linear transformation. | `True` |
`kernel_initializer` | Initializer for the kernel weights matrix. | `'glorot_uniform'` |
`bias_initializer` | Initializer for the bias vector. | `'zeros'` |
`kernel_regularizer` | Regularizer for the kernel weights matrix. | `None` |
`bias_regularizer` | Regularizer for the bias vector. | `None` |
`activity_regularizer` | Regularizer for the output. | `None` |
`kernel_constraint` | Constraint for the kernel weights matrix. | `None` |
`bias_constraint` | Constraint for the bias vector. | `None` |
`**kwargs` | Additional keyword arguments. | `{}` |
Source code in k3_node/layers/conv/appnp_conv.py
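Example (a hedged sketch; the `[x, a]` features-plus-adjacency input convention is assumed from the `Conv` base class and should be checked against `appnp_conv.py`):

```python
import numpy as np
from k3_node.layers import APPNPConv

x = np.random.rand(5, 16).astype("float32")   # node features (n_nodes, n_features)
a = np.eye(5, dtype="float32")                # toy dense adjacency matrix (n_nodes, n_nodes)

layer = APPNPConv(channels=32, alpha=0.2, propagations=2, mlp_hidden=[64])
out = layer([x, a])                           # assumed (features, adjacency) inputs
```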
k3_node.layers.ARMAConv

Bases: `Conv`

Implementation of the Auto-Regressive Moving Average (ARMA) graph convolutional layer.
Parameters:
Name | Description | Default |
---|---|---|
`channels` | The number of output channels. | *required* |
`order` | The order of the ARMA filter. | `1` |
`iterations` | The number of iterations to perform. | `1` |
`share_weights` | Whether to share the weights across iterations. | `False` |
`gcn_activation` | The activation function to use in the internal GCN steps. | `'relu'` |
`dropout_rate` | The dropout rate. | `0.0` |
`activation` | The activation function to use in the layer. | `None` |
`use_bias` | Whether to add a bias to the linear transformation. | `True` |
`kernel_initializer` | Initializer for the kernel weights matrix. | `'glorot_uniform'` |
`bias_initializer` | Initializer for the bias vector. | `'zeros'` |
`kernel_regularizer` | Regularizer for the kernel weights matrix. | `None` |
`bias_regularizer` | Regularizer for the bias vector. | `None` |
`activity_regularizer` | Regularizer for the output. | `None` |
`kernel_constraint` | Constraint for the kernel weights matrix. | `None` |
`bias_constraint` | Constraint for the bias vector. | `None` |
`**kwargs` | Additional keyword arguments. | `{}` |
Source code in k3_node/layers/conv/arma_conv.py
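Example (a hedged sketch showing the filter-specific arguments; the `[x, a]` input convention is assumed from the `Conv` base class, see `arma_conv.py`):

```python
import numpy as np
from k3_node.layers import ARMAConv

x = np.random.rand(5, 16).astype("float32")   # node features
a = np.eye(5, dtype="float32")                # toy adjacency matrix

# Two parallel ARMA filters, each refined for three iterations with shared weights.
layer = ARMAConv(channels=32, order=2, iterations=3, share_weights=True)
out = layer([x, a])                           # assumed (features, adjacency) inputs
```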
k3_node.layers.CrystalConv

Bases: `MessagePassing`

Implementation of the Crystal Graph Convolutional Neural Network (CGCNN) layer.
Parameters:
Name | Description | Default |
---|---|---|
`aggregate` | Aggregation function to use (one of `'sum'`, `'mean'`, `'max'`). | `'sum'` |
`activation` | Activation function to use. | `None` |
`use_bias` | Whether to add a bias to the linear transformation. | `True` |
`kernel_initializer` | Initializer for the kernel weights matrix. | `'glorot_uniform'` |
`bias_initializer` | Initializer for the bias vector. | `'zeros'` |
`kernel_regularizer` | Regularizer for the kernel weights matrix. | `None` |
`bias_regularizer` | Regularizer for the bias vector. | `None` |
`activity_regularizer` | Regularizer for the output. | `None` |
`kernel_constraint` | Constraint for the kernel weights matrix. | `None` |
`bias_constraint` | Constraint for the bias vector. | `None` |
`**kwargs` | Additional arguments to pass to the superclass. | `{}` |
Source code in k3_node/layers/conv/crystal_conv.py
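Example (construction only, using the documented arguments; CGCNN-style message passing typically also consumes edge features, so check `crystal_conv.py` for the exact inputs expected by the layer's call):

```python
from k3_node.layers import CrystalConv

# Build the layer with the documented constructor arguments.
layer = CrystalConv(aggregate="mean", activation="relu")
```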
k3_node.layers.DiffusionConv

Bases: `Conv`

Implementation of the Diffusion Convolutional Neural Network (DCNN) layer.
Parameters:
Name | Description | Default |
---|---|---|
`channels` | The number of output channels. | *required* |
`K` | The number of diffusion steps. | `6` |
`activation` | Activation function to use. | `'tanh'` |
`kernel_initializer` | Initializer for the kernel weights matrix. | `'glorot_uniform'` |
`kernel_regularizer` | Regularizer for the kernel weights matrix. | `None` |
`kernel_constraint` | Constraint for the kernel weights matrix. | `None` |
`**kwargs` | Additional arguments to pass to the superclass. | `{}` |
Source code in k3_node/layers/conv/diffusion_conv.py
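Example (a hedged sketch; the `[x, a]` input convention is assumed from the `Conv` base class, see `diffusion_conv.py`):

```python
import numpy as np
from k3_node.layers import DiffusionConv

x = np.random.rand(5, 16).astype("float32")   # node features
a = np.eye(5, dtype="float32")                # toy adjacency matrix

layer = DiffusionConv(channels=32, K=4, activation="tanh")   # four diffusion steps
out = layer([x, a])                           # assumed (features, adjacency) inputs
```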
k3_node.layers.GatedGraphConv

Bases: `MessagePassing`

Implementation of the Gated Graph Convolution (GGC) layer.
Parameters:
Name | Description | Default |
---|---|---|
`channels` | The number of output channels. | *required* |
`n_layers` | The number of GGC layers to stack. | *required* |
`activation` | Activation function to use. | `None` |
`use_bias` | Whether to add a bias to the linear transformation. | `True` |
`kernel_initializer` | Initializer for the kernel weights matrix. | `'glorot_uniform'` |
`bias_initializer` | Initializer for the bias vector. | `'zeros'` |
`kernel_regularizer` | Regularizer for the kernel weights matrix. | `None` |
`bias_regularizer` | Regularizer for the bias vector. | `None` |
`activity_regularizer` | Regularizer for the output. | `None` |
`kernel_constraint` | Constraint for the kernel weights matrix. | `None` |
`bias_constraint` | Constraint for the bias vector. | `None` |
`**kwargs` | Additional arguments to pass to the superclass. | `{}` |
Source code in k3_node/layers/conv/gated_graph_conv.py
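Example (a hedged sketch; the `(x, edge_index)` call convention is assumed from the `MessagePassing` base class and should be verified against `gated_graph_conv.py`):

```python
import numpy as np
from k3_node.layers import GatedGraphConv

x = np.random.rand(4, 8).astype("float32")    # node features
edge_index = np.array([[0, 1, 2, 3],
                       [1, 2, 3, 0]])          # assumed COO edge list

layer = GatedGraphConv(channels=16, n_layers=3)
out = layer(x, edge_index)                    # assumed call convention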
k3_node.layers.GraphConvolution

Bases: `Layer`

Implementation of the Graph Convolutional Network (GCN) layer.
Parameters:
Name | Description | Default |
---|---|---|
`units` | Positive integer, dimensionality of the output space. | *required* |
`activation` | Activation function to use. | `None` |
`use_bias` | Whether to add a bias to the linear transformation. | `True` |
`final_layer` | Deprecated, use `tf.gather` or `GatherIndices` instead. | `None` |
`input_dim` | Deprecated, use … | `None` |
`kernel_initializer` | Initializer for the kernel weights matrix. | `'glorot_uniform'` |
`kernel_regularizer` | Regularizer for the kernel weights matrix. | `None` |
`kernel_constraint` | Constraint for the kernel weights matrix. | `None` |
`bias_initializer` | Initializer for the bias vector. | `'zeros'` |
`bias_regularizer` | Regularizer for the bias vector. | `None` |
`bias_constraint` | Constraint for the bias vector. | `None` |
`**kwargs` | Additional arguments to pass to the superclass. | `{}` |
Source code in k3_node/layers/conv/gcn.py
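Example (a hedged sketch; the batched `[x, a]` input convention shown here follows the StellarGraph-style layers this implementation derives from and is an assumption — see `gcn.py` for the exact signature):

```python
import numpy as np
from k3_node.layers import GraphConvolution

x = np.random.rand(1, 5, 16).astype("float32")   # (batch, n_nodes, n_features)
a = np.eye(5, dtype="float32")[None, ...]        # (batch, n_nodes, n_nodes), assumed pre-normalized

layer = GraphConvolution(units=32, activation="relu")
out = layer([x, a])                              # assumed input convention
```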
k3_node.layers.GeneralConv

Bases: `MessagePassing`

Implementation of the General Graph Convolution layer.
Parameters:
Name | Description | Default |
---|---|---|
`channels` | The number of output channels. | `256` |
`batch_norm` | Whether to use batch normalization. | `True` |
`dropout` | The dropout rate. | `0.0` |
`aggregate` | Aggregation function to use (one of `'sum'`, `'mean'`, `'max'`). | `'sum'` |
`activation` | Activation function to use. | `'prelu'` |
`use_bias` | Whether to add a bias to the linear transformation. | `True` |
`kernel_initializer` | Initializer for the kernel weights matrix. | `'glorot_uniform'` |
`bias_initializer` | Initializer for the bias vector. | `'zeros'` |
`kernel_regularizer` | Regularizer for the kernel weights matrix. | `None` |
`bias_regularizer` | Regularizer for the bias vector. | `None` |
`activity_regularizer` | Regularizer for the output. | `None` |
`kernel_constraint` | Constraint for the kernel weights matrix. | `None` |
`bias_constraint` | Constraint for the bias vector. | `None` |
`**kwargs` | Additional arguments to pass to the superclass. | `{}` |
Source code in k3_node/layers/conv/general_conv.py
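Example (a hedged sketch; the `(x, edge_index)` call convention is assumed from the `MessagePassing` base class, see `general_conv.py`):

```python
import numpy as np
from k3_node.layers import GeneralConv

x = np.random.rand(4, 8).astype("float32")    # node features
edge_index = np.array([[0, 1, 2, 3],
                       [1, 2, 3, 0]])          # assumed COO edge list

layer = GeneralConv(channels=128, batch_norm=True, dropout=0.1, activation="prelu")
out = layer(x, edge_index)                    # assumed call convention
```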
k3_node.layers.GINConv

Bases: `MessagePassing`

Implementation of the Graph Isomorphism Network (GIN) layer.
Parameters:
Name | Description | Default |
---|---|---|
`channels` | The number of output channels. | *required* |
`epsilon` | The epsilon parameter of the GIN aggregation. | `None` |
`mlp_hidden` | A list of hidden channels for the MLP. | `None` |
`mlp_activation` | The activation function to use in the MLP. | `'relu'` |
`mlp_batchnorm` | Whether to use batch normalization in the MLP. | `True` |
`aggregate` | Aggregation function to use (one of `'sum'`, `'mean'`, `'max'`). | `'sum'` |
`activation` | Activation function to use. | `None` |
`use_bias` | Whether to add a bias to the linear transformation. | `True` |
`kernel_initializer` | Initializer for the kernel weights matrix. | `'glorot_uniform'` |
`bias_initializer` | Initializer for the bias vector. | `'zeros'` |
`kernel_regularizer` | Regularizer for the kernel weights matrix. | `None` |
`bias_regularizer` | Regularizer for the bias vector. | `None` |
`activity_regularizer` | Regularizer for the output. | `None` |
`kernel_constraint` | Constraint for the kernel weights matrix. | `None` |
`bias_constraint` | Constraint for the bias vector. | `None` |
`**kwargs` | Additional arguments to pass to the superclass. | `{}` |
Source code in k3_node/layers/conv/gin_conv.py
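Example (a hedged sketch; the `(x, edge_index)` call convention is assumed from the `MessagePassing` base class, see `gin_conv.py`):

```python
import numpy as np
from k3_node.layers import GINConv

x = np.random.rand(4, 8).astype("float32")    # node features
edge_index = np.array([[0, 1, 2, 3],
                       [1, 2, 3, 0]])          # assumed COO edge list

# A two-layer MLP transforms the aggregated messages.
layer = GINConv(channels=32, mlp_hidden=[64, 64], mlp_activation="relu")
out = layer(x, edge_index)                    # assumed call convention
```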
k3_node.layers.GraphAttention

Bases: `Layer`

Implementation of the Graph Attention (GAT) layer.
Parameters:
Name | Description | Default |
---|---|---|
`units` | Positive integer, dimensionality of the output space. | *required* |
`attn_heads` | Positive integer, number of attention heads. | `1` |
`attn_heads_reduction` | Method for reducing the attention heads' outputs (one of `'concat'`, `'average'`). | `'concat'` |
`in_dropout_rate` | Dropout rate applied to the input (node features). | `0.0` |
`attn_dropout_rate` | Dropout rate applied to the attention coefficients. | `0.0` |
`activation` | Activation function to use. | `'relu'` |
`use_bias` | Whether to add a bias to the linear transformation. | `True` |
`final_layer` | Deprecated, use `tf.gather` or `GatherIndices` instead. | `None` |
`saliency_map_support` | Whether to support saliency map calculations. | `False` |
`kernel_initializer` | Initializer for the kernel weights matrix. | `'glorot_uniform'` |
`kernel_regularizer` | Regularizer for the kernel weights matrix. | `None` |
`kernel_constraint` | Constraint for the kernel weights matrix. | `None` |
`bias_initializer` | Initializer for the bias vector. | `'zeros'` |
`bias_regularizer` | Regularizer for the bias vector. | `None` |
`bias_constraint` | Constraint for the bias vector. | `None` |
`attn_kernel_initializer` | Initializer for the attention kernel weights matrix. | `'glorot_uniform'` |
`attn_kernel_regularizer` | Regularizer for the attention kernel weights matrix. | `None` |
`attn_kernel_constraint` | Constraint for the attention kernel weights matrix. | `None` |
`**kwargs` | Additional arguments to pass to the superclass. | `{}` |
Source code in k3_node/layers/conv/graph_attention.py
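Example (a hedged sketch; the batched `[x, a]` input convention follows the StellarGraph-style layers this implementation derives from and is an assumption — see `graph_attention.py`):

```python
import numpy as np
from k3_node.layers import GraphAttention

x = np.random.rand(1, 5, 16).astype("float32")   # (batch, n_nodes, n_features)
a = np.eye(5, dtype="float32")[None, ...]        # (batch, n_nodes, n_nodes)

# Four attention heads, concatenated -> 4 * 8 = 32 output features per node.
layer = GraphAttention(units=8, attn_heads=4, attn_heads_reduction="concat",
                       attn_dropout_rate=0.1)
out = layer([x, a])                              # assumed input convention
```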
k3_node.layers.conv.MessagePassing

Bases: `Layer`

Base class for message-passing layers.
Parameters:
Name | Description | Default |
---|---|---|
`aggregate` | Aggregation function to use (one of `'sum'`, `'mean'`, `'max'`). | `'sum'` |
`**kwargs` | Additional arguments to pass to the superclass. | `{}` |
Source code in k3_node/layers/conv/message_passing.py
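Example (a subclassing sketch; the `message`/`get_sources` hooks shown here are assumed from the Spektral-style message-passing API that this base class appears to mirror — verify the hook names in `message_passing.py`):

```python
from k3_node.layers.conv import MessagePassing

class MeanNeighbors(MessagePassing):
    """Toy layer: averages the raw features of each node's neighbors."""

    def __init__(self, **kwargs):
        super().__init__(aggregate="mean", **kwargs)

    def message(self, x, **kwargs):
        # Assumed hook: gather source-node features for every edge; these
        # per-edge messages are then reduced by the chosen `aggregate`.
        return self.get_sources(x)
```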
k3_node.layers.PPNPPropagation

Bases: `Layer`

Implementation of the Personalized Propagation of Neural Predictions (PPNP) propagation layer.
Parameters:
Name | Description | Default |
---|---|---|
`units` | Positive integer, dimensionality of the output space. | *required* |
`final_layer` | Deprecated, use `tf.gather` or `GatherIndices` instead. | `None` |
`input_dim` | Deprecated, use … | `None` |
`**kwargs` | Additional arguments to pass to the superclass. | `{}` |
Source code in k3_node/layers/conv/ppnp.py
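Example (a hedged sketch; the batched `[x, a]` input, with `a` a pre-computed propagation matrix, is an assumption carried over from the StellarGraph-style PPNP layer — see `ppnp.py`):

```python
import numpy as np
from k3_node.layers import PPNPPropagation

x = np.random.rand(1, 5, 16).astype("float32")   # (batch, n_nodes, n_features)
a = np.eye(5, dtype="float32")[None, ...]        # assumed pre-computed propagation matrix

layer = PPNPPropagation(units=16)
out = layer([x, a])                              # assumed input convention
```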
k3_node.layers.SAGEConv

Bases: `Layer`

Implementation of the GraphSAGE layer.
Parameters:
Name | Description | Default |
---|---|---|
`out_channels` | The number of output channels. | *required* |
`normalize` | Whether to normalize the output. | `False` |
`bias` | Whether to add a bias to the linear transformation. | `True` |
Source code in k3_node/layers/conv/sage_conv.py
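Example (a hedged sketch; the `(x, edge_index)` call convention is an assumption based on the PyG-style GraphSAGE formulation, see `sage_conv.py`):

```python
import numpy as np
from k3_node.layers import SAGEConv

x = np.random.rand(4, 8).astype("float32")    # node features
edge_index = np.array([[0, 1, 2, 3],
                       [1, 2, 3, 0]])          # assumed COO edge list

layer = SAGEConv(out_channels=32, normalize=True)
out = layer(x, edge_index)                    # assumed call convention
```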