Akida Execution Engine API¶
- akida.__version__¶
Returns the current version of the akida module.
Model¶
- class akida.Model(filename=None, serialized_buffer=None, layers=None, backend=BackendType.Software)[source]¶
An Akida neural Model, represented as a hierarchy of layers.
The Model class is the main interface to Akida. It allows you to create an empty Model, a Model template from a YAML file, or a full Model from a serialized file. It provides methods to instantiate, train, test and save models.
- Parameters
filename (str, optional) – path of the YAML file containing the model architecture, or a serialized Model. If None, an empty sequential model will be created, or filled with the layers in the layers parameter.
serialized_buffer (bytes, optional) – binary buffer containing a serialized Model.
layers (
list
, optional) – list of layers that will be copied to the new model. If the list does not start with an input layer, it will be added automatically.backend (
BackendType
, optional) – backend to run the model on.
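As an illustration, here is a minimal sketch of the three construction paths; the file name and all layer parameters are placeholder values:

```python
from akida import Model, InputData, FullyConnected

# 1. Empty sequential model, to be filled later with add().
model = Model()

# 2. Model built from a list of layers; an input layer is prepended
#    automatically if the list does not start with one.
model = Model(layers=[
    InputData(input_width=32, input_height=32, input_channels=1),
    FullyConnected(num_neurons=10),
])

# 3. Model restored from a serialized file (hypothetical path).
model = Model(filename="my_model.fbz")
```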
Methods:
add(self, layer, inbound_layers) – Add a layer to the current model.
add_classes(num_add_classes) – Adds classes to the last layer of the model.
compile(self, num_weights, num_classes, …) – Prepare the internal parameters of the last layer of the model for training.
evaluate(inputs) – Evaluates a set of images or events through the model.
fit(inputs[, input_labels]) – Trains the model on a set of images or events.
forward(inputs) – Forwards a set of images or events through the model.
get_layer(*args, **kwargs) – Overloaded function.
get_layer_count(self) – The number of layers.
get_statistics() – Get statistics by layer for this network.
pop_layer(self) – Remove the last layer of the current model.
predict(inputs[, num_classes]) – Returns the model class predictions.
save(self, arg0) – Saves the entire model configuration (all layers and weights) to a file on disk.
summary() – Prints a string summary of the model.
to_buffer(self) – Serializes the entire model configuration (all layers and weights) to a bytes buffer.
Attributes:
backend – The backend the model is running on.
input_dims – The model input dimensions (width, height, features).
layers – The list of layers in the current model.
metrics – The model metrics.
output_dims – The model output dimensions (width, height, features).
- add(self: akida.core.ModelBase, layer: akida::Layer, inbound_layers: List[akida::Layer] = []) → None¶
Add a layer to the current model.
A list of inbound layers can optionally be specified. These layers must already be included in the model. If no inbound layer is specified and the layer is not the first layer in the model, the last included layer will be used as the inbound layer.
- Parameters
layer (one of the available layers) – layer instance to be added to the model
inbound_layers (a list of Layer) – an optional list of inbound layers
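A short sketch of sequential use of add; the layer name and dimensions are arbitrary:

```python
from akida import Model, InputData, FullyConnected

model = Model()
model.add(InputData(input_width=32, input_height=32, input_channels=1))
# No inbound_layers given: the previously added layer is used as inbound.
model.add(FullyConnected(num_neurons=10, name="classifier"))
```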
- add_classes(num_add_classes)[source]¶
Adds classes to the last layer of the model.
A model with a compiled last layer is ready to learn using the Akida built-in learning algorithm. This function allows you to add new classes (i.e. new neurons) to the last layer, keeping the previously learned neurons.
- Parameters
num_add_classes (int) – number of classes to add to the last layer
- Raises
RuntimeError – if the last layer is not compiled
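For example (a sketch; the parameter values are illustrative, and the last layer must have been compiled beforehand):

```python
# Prepare the last layer for learning with 10 initial classes.
model.compile(num_weights=32, num_classes=10)
# ... train with model.fit() ...
# Later, extend the last layer with 5 new classes while keeping
# the neurons already learned for the first 10.
model.add_classes(5)
```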
- property backend¶
The backend the model is running on.
- compile(self: akida.core.ModelBase, num_weights: int, num_classes: int = 1, initial_plasticity: float = 1.0, learning_competition: float = 0.0, min_plasticity: float = 0.1, plasticity_decay: float = 0.25) → None¶
Prepare the internal parameters of the last layer of the model for training.
- Parameters
num_weights (int) – number of connections for each neuron.
num_classes (int, optional) – number of classes when running in a ‘labeled mode’.
initial_plasticity (float, optional) – defines how easily the weights will change when learning occurs.
learning_competition (float, optional) – controls competition between neurons.
min_plasticity (float, optional) – defines the minimum level to which plasticity will decay.
plasticity_decay (float, optional) – defines the decay of plasticity with each learning step.
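A hedged sketch of preparing a model for the built-in learning algorithm; the values below are illustrative, not recommended settings:

```python
model.compile(
    num_weights=32,            # connections per neuron
    num_classes=10,            # for 'labeled mode' training
    initial_plasticity=1.0,    # weights change freely at first
    learning_competition=0.2,  # some competition between neurons
    min_plasticity=0.1,        # floor for plasticity decay
    plasticity_decay=0.25,     # plasticity decay per learning step
)
```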
- evaluate(inputs)[source]¶
Evaluates a set of images or events through the model.
Forwards an input tensor through the model and returns a float array.
It applies ONLY to models without an activation on the last layer. The output values are obtained from the model discrete potentials by applying a shift and a scale.
The expected input tensor dimensions are:
n, representing the number of frames or samples,
w, representing the width,
h, representing the height,
c, representing the channel, or more generally the feature.
If the inputs are events, the input shape must be (n, w, h, c), but if the inputs are images (numpy array), their shape must be (n, h, w, c).
Note: only grayscale (c=1) or RGB (c=3) images (arrays) are supported.
- Parameters
inputs (numpy.ndarray) – a (n, w, h, c) numpy.ndarray.
- Returns
a float array of shape (n, w, h, c).
- Return type
numpy.ndarray
- Raises
TypeError – if the input is not a numpy.ndarray.
RuntimeError – if the model last layer has an activation.
ValueError – if the input doesn’t match the required shape, format, or if the model only has an InputData layer.
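For instance, assuming a model whose last layer has activations disabled (the shapes below are hypothetical):

```python
import numpy as np

# 10 grayscale 28x28 images, shape (n, h, w, c).
images = np.random.randint(0, 256, size=(10, 28, 28, 1), dtype=np.uint8)
potentials = model.evaluate(images)  # float numpy.ndarray
```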
- fit(inputs, input_labels=None)[source]¶
Trains the model on a set of images or events.
Trains the model with the specified input tensor (numpy array).
The expected input tensor dimensions are:
n, representing the number of frames or samples,
w, representing the width,
h, representing the height,
c, representing the channel, or more generally the feature.
If the inputs are events, the input shape must be (n, w, h, c), but if the inputs are images, their shape must be (n, h, w, c).
Note: only grayscale (c=1) or RGB (c=3) images (arrays) are supported.
If activations are enabled for the last layer, the output is a uint8 tensor.
If activations are disabled for the last layer, the output is an int32 tensor.
- Parameters
inputs (numpy.ndarray) – a numpy.ndarray.
input_labels (list(int), optional) – input labels. Must have one label per input, or a single label for all inputs. If a label exceeds the defined number of classes, the input will be discarded. (Default value = None).
- Returns
a numpy array of shape (n, out_w, out_h, out_c).
- Raises
TypeError – if the input is not a numpy.ndarray.
ValueError – if the input doesn’t match the required shape, format, etc.
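A minimal sketch, assuming the last layer was compiled for learning; the shapes and labels are placeholders:

```python
import numpy as np

images = np.random.randint(0, 256, size=(4, 28, 28, 1), dtype=np.uint8)
labels = [0, 1, 2, 3]  # one label per input
outputs = model.fit(images, input_labels=labels)
```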
- forward(inputs)[source]¶
Forwards a set of images or events through the model.
Forwards an input tensor through the model and returns an output tensor.
The expected input tensor dimensions are:
n, representing the number of frames or samples,
w, representing the width,
h, representing the height,
c, representing the channel, or more generally the feature.
If the inputs are events, the input shape must be (n, w, h, c), but if the inputs are images, their shape must be (n, h, w, c).
Note: only grayscale (c=1) or RGB (c=3) images (arrays) are supported.
If activations are enabled for the last layer, the output is a uint8 tensor.
If activations are disabled for the last layer, the output is an int32 tensor.
- Parameters
inputs (numpy.ndarray) – a numpy.ndarray.
- Returns
a numpy array of shape (n, out_w, out_h, out_c).
- Raises
TypeError – if the input is not a numpy.ndarray.
ValueError – if the inputs don’t match the required shape, format, etc.
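For example (shapes are placeholders; unlike fit, no learning takes place):

```python
import numpy as np

images = np.random.randint(0, 256, size=(10, 28, 28, 1), dtype=np.uint8)
outputs = model.forward(images)
# uint8 if the last layer has activations enabled, int32 otherwise.
```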
- get_layer(*args, **kwargs)¶
Overloaded function.
get_layer(self: akida.core.ModelBase, layer_name: str) -> akida::Layer
Get a reference to a specific layer.
This method allows a deeper introspection of the model by providing access to the underlying layers.
- param layer_name
name of the layer to retrieve
- type layer_name
str
- return
a Layer
get_layer(self: akida.core.ModelBase, layer_index: int) -> akida::Layer
Get a reference to a specific layer.
This method allows a deeper introspection of the model by providing access to the underlying layers.
- param layer_index
index of the layer to retrieve
- type layer_index
int
- return
a Layer
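Both overloads in a sketch (the layer name is hypothetical):

```python
layer_by_name = model.get_layer("classifier")  # by name
layer_by_index = model.get_layer(0)            # by zero-based index
```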
- get_layer_count(self: akida.core.ModelBase) → int¶
The number of layers.
- get_statistics()[source]¶
Get statistics by layer for this network.
- Returns
LayerStatistics indexed by layer_name.
- Return type
a dictionary of LayerStatistics objects
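A small sketch of iterating the returned dictionary; the attribute used is documented under LayerStatistics below:

```python
stats = model.get_statistics()  # dict of LayerStatistics keyed by layer name
for layer_name, layer_stats in stats.items():
    print(layer_name, layer_stats.output_sparsity)
```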
- property input_dims¶
The model input dimensions (width, height, features).
- property layers¶
The list of layers in the current model.
- property metrics¶
The model metrics.
- property output_dims¶
The model output dimensions (width, height, features).
- pop_layer(self: akida.core.ModelBase) → None¶
Remove the last layer of the current model.
- predict(inputs, num_classes=None)[source]¶
Returns the model class predictions.
Forwards an input tensor (images or events) through the model and computes predictions based on the neuron id. If the number of output neurons is greater than the number of classes, the neurons are automatically assigned to a class by dividing their id by the number of classes.
The expected input tensor dimensions are:
n, representing the number of frames or samples,
w, representing the width,
h, representing the height,
c, representing the channel, or more generally the feature.
If the inputs are events, the input shape must be (n, w, h, c), but if the inputs are images their shape must be (n, h, w, c).
Note: only grayscale (c=1) or RGB (c=3) images (arrays) are supported.
Note that the predictions are based on the activation values of the last layer: for most use cases, you may want to disable activations for that layer (i.e. setting activations_enabled=False) to get better accuracy.
- Parameters
inputs (numpy.ndarray) – a numpy.ndarray.
num_classes (int, optional) – optional parameter (defaults to the number of neurons in the last layer).
- Returns
an array of shape (n).
- Return type
numpy.ndarray
- Raises
TypeError – if the input is not a numpy.ndarray.
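A sketch; the shapes are placeholders and the model is assumed to end with a layer whose activations are disabled, as recommended above:

```python
import numpy as np

images = np.random.randint(0, 256, size=(10, 28, 28, 1), dtype=np.uint8)
predictions = model.predict(images, num_classes=10)  # shape (10,)
```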
- save(self: akida.core.ModelBase, arg0: str) → None¶
Saves the entire model configuration (all layers and weights) to a file on disk.
- Parameters
model_file (str) – full path of the serialized model (.fbz file).
- summary()[source]¶
Prints a string summary of the model.
This method prints a summary of the model with details for every layer:
name and type in the first column
output shape
kernel shape
If any layers have unsupervised learning enabled, they will be listed with these details:
name of layer
number of incoming connections
number of weights per neuron
It also reports the input shape, the backend type and version.
- to_buffer(self: akida.core.ModelBase) → bytes¶
Serializes the entire model configuration (all layers and weights) to a bytes buffer.
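The two serialization paths round-trip with the Model constructor; a sketch (the file name is a placeholder):

```python
import akida

# File round-trip.
model.save("my_model.fbz")
restored = akida.Model(filename="my_model.fbz")

# In-memory round-trip.
buf = model.to_buffer()
restored = akida.Model(serialized_buffer=buf)
```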
Layer¶
- class akida.Layer¶
Methods:
get_learning_histogram() – Returns a histogram of learning percentages.
get_variable(name) – Get the value of a layer variable.
get_variable_names() – Get the list of variable names for this layer.
set_variable(name, values) – Set the value of a layer variable.
Attributes:
inbounds – The layer inbound layers.
input_bits – The layer input bits.
input_dims – The layer input dimensions (width, height, channels).
learning – The layer learning parameters set.
name – The layer name.
output_dims – The layer output dimensions (width, height, features).
parameters – The layer parameters set.
variables – The layer trainable variables.
- get_learning_histogram()¶
Returns a histogram of learning percentages.
Returns a list of learning percentages and the associated number of neurons.
- Returns
a (n,2) numpy.ndarray containing the learning percentages and the number of neurons.
- Return type
numpy.ndarray
- get_variable(name)¶
Get the value of a layer variable.
Layer variables are named entities representing the weights or thresholds used during inference:
Weights variables are typically integer arrays of shape: (width, height, features/channels, num_neurons) row-major (‘C’).
Threshold variables are typically integer or float arrays of shape: (num_neurons).
- Parameters
name (str) – the variable name.
- Returns
an array containing the variable.
- Return type
numpy.ndarray
- get_variable_names()¶
Get the list of variable names for this layer.
- Returns
a list of variable names.
- property inbounds¶
The layer inbound layers.
- property input_bits¶
The layer input bits.
- property input_dims¶
The layer input dimensions (width, height, channels).
- property learning¶
The layer learning parameters set.
- property name¶
The layer name.
- property output_dims¶
The layer output dimensions (width, height, features).
- property parameters¶
The layer parameters set.
- set_variable(name, values)¶
Set the value of a layer variable.
Layer variables are named entities representing the weights or thresholds used during inference:
Weights variables are typically integer arrays of shape:
(num_neurons, features/channels, height, width) col-major ordered (‘F’)
or equivalently:
(width, height, features/channels, num_neurons) row-major (‘C’).
Threshold variables are typically integer or float arrays of shape: (num_neurons).
- Parameters
name (str) – the variable name.
values (numpy.ndarray) – a numpy.ndarray containing the variable values.
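A sketch of reading and writing a variable; the layer name and the 'weights' variable name are assumptions, so check get_variable_names() first:

```python
import numpy as np

layer = model.get_layer("classifier")    # hypothetical layer name
print(layer.get_variable_names())        # the actual variable names

weights = layer.get_variable("weights")  # assuming a 'weights' variable
layer.set_variable("weights", np.ones_like(weights))
```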
- property variables¶
The layer trainable variables.
LayerStatistics¶
- class akida.LayerStatistics(layer, nb_samples=0, nb_activations=0)[source]¶
Container attached to an akida.Model and an akida.Layer that allows you to retrieve layer statistics (average input and output sparsity, number of operations, number of possible spikes, row sparsity).
Attributes:
layer_name – Get the name of the corresponding layer.
output_sparsity – Get average output sparsity for the layer.
possible_spikes – Get possible spikes for the layer.
row_sparsity – Get kernel row sparsity.
- property layer_name¶
Get the name of the corresponding layer.
- Returns
the layer name.
- Return type
str
- property output_sparsity¶
Get average output sparsity for the layer.
- Returns
the average output sparsity value.
- Return type
float
- property possible_spikes¶
Get possible spikes for the layer.
- Returns
the possible spike amount value.
- Return type
int
- property row_sparsity¶
Get kernel row sparsity.
Computes row sparsity for the kernel weights.
- Returns
the kernel row sparsity value.
- Return type
float
InputData¶
- class akida.InputData(input_width, input_height, input_channels, input_bits=4, name='')[source]¶
This is the general purpose input layer. It takes events in a simple address-event data format; that is, each event is characterized by a trio of values giving x, y and channel values.
Regarding the input dimension values, note that AEE expects inputs with zero-based indexing, i.e., if input_width is defined as 12, then the model expects all input events to have x-values in the range 0–11.
Where possible:
The x and y dimensions should be used for discretely-sampled continuous domains such as space (e.g., images) or time-series (e.g., an audio signal).
The c dimension should be used for ‘category indices’, where there is no particular relationship between neighboring values.
The input dimension values are used for:
Error checking – input events are checked and if any fall outside the defined input range, then the whole set of events sent on that processing call is rejected. An error will also be generated if the defined values are larger than the true input dimensions.
Configuring the input and output dimensions of subsequent layers in the model.
- Parameters
input_width (int) – input width.
input_height (int) – input height.
input_channels (int) – size of the third input dimension.
input_bits (int) – input bitwidth.
name (str, optional) – name of the layer.
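For example, an event-based model sketch (all dimensions are arbitrary):

```python
from akida import Model, InputData, FullyConnected

model = Model(layers=[
    # Events must satisfy 0 <= x < 32, 0 <= y < 32, 0 <= channel < 2.
    InputData(input_width=32, input_height=32, input_channels=2),
    FullyConnected(num_neurons=16),
])
```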
InputConvolutional¶
- class akida.InputConvolutional(input_width, input_height, input_channels, kernel_width, kernel_height, num_neurons, name='', convolution_mode=ConvolutionMode.Same, stride_x=1, stride_y=1, weights_bits=1, pooling_width=-1, pooling_height=-1, pooling_type=PoolingType.NoPooling, pooling_stride_x=-1, pooling_stride_y=-1, activations_enabled=True, threshold_fire=0, threshold_fire_step=1, threshold_fire_bits=1, padding_value=0)[source]¶
The InputConvolutional layer is an image-specific input layer. It is used if images are sent directly to AEE without using the event-generating method. If the user applies their own event-generating method, the resulting events should be sent to an InputData layer instead.
The InputConvolutional layer accepts images in 8-bit pixels, either grayscale or RGB. Images are converted to events using a combination of convolution kernels, activation thresholds and winner-take-all (WTA) policies. Note that since the layer input is dense, expect approximately one event per pixel – fewer if there are large contrast-free regions in the image, such as with the MNIST dataset.
Note that this format is not appropriate for neuromorphic camera type input, whose data is natively event-based and should be sent to an InputData type input layer.
- Parameters
input_width (int) – input width.
input_height (int) – input height.
input_channels (int) – number of channels of the input image.
kernel_width (int) – convolutional kernel width.
kernel_height (int) – convolutional kernel height.
num_neurons (int) – number of neurons (filters).
name (str, optional) – name of the layer.
convolution_mode (ConvolutionMode, optional) – type of convolution.
stride_x (int, optional) – convolution stride X.
stride_y (int, optional) – convolution stride Y.
weights_bits (int, optional) – number of bits used to quantize weights.
pooling_width (int, optional) – pooling window width. If set to -1 it will be global.
pooling_height (int, optional) – pooling window height. If set to -1 it will be global.
pooling_type (PoolingType, optional) – pooling type (None, Max or Average).
pooling_stride_x (int, optional) – pooling stride on x dimension.
pooling_stride_y (int, optional) – pooling stride on y dimension.
activations_enabled (bool, optional) – enable or disable activation function.
threshold_fire (int, optional) – threshold for neurons to fire or generate an event.
threshold_fire_step (float, optional) – length of the potential quantization intervals.
threshold_fire_bits (int, optional) – number of bits used to quantize the neuron response.
padding_value (int, optional) – value used when padding.
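A sketch of an image input layer with max pooling; all values are illustrative:

```python
from akida import InputConvolutional, ConvolutionMode, PoolingType

layer = InputConvolutional(
    input_width=28, input_height=28, input_channels=1,  # grayscale
    kernel_width=3, kernel_height=3, num_neurons=16,
    convolution_mode=ConvolutionMode.Same,
    pooling_width=2, pooling_height=2,
    pooling_type=PoolingType.Max,
)
```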
FullyConnected¶
- class akida.FullyConnected(num_neurons, name='', weights_bits=1, activations_enabled=True, threshold_fire=0, threshold_fire_step=1, threshold_fire_bits=1)[source]¶
This is used for most processing purposes, since any neuron in the layer can be connected to any input channel.
Outputs are returned from FullyConnected layers as a list of events, that is, as a triplet of x, y and feature values. However, FullyConnected models by definition have no intrinsic spatial organization. Thus, all output events have x and y values of zero with only the f value being meaningful – corresponding to the index of the event-generating neuron. Note that each neuron can only generate a single event for each packet of inputs processed.
- Parameters
num_neurons (int) – number of neurons (filters).
name (str, optional) – name of the layer.
weights_bits (int, optional) – number of bits used to quantize weights.
activations_enabled (bool, optional) – enable or disable activation function.
threshold_fire (int, optional) – threshold for neurons to fire or generate an event.
threshold_fire_step (float, optional) – length of the potential quantization intervals.
threshold_fire_bits (int, optional) – number of bits used to quantize the neuron response.
Convolutional¶
- class akida.Convolutional(kernel_width, kernel_height, num_neurons, name='', convolution_mode=ConvolutionMode.Same, stride_x=1, stride_y=1, weights_bits=1, pooling_width=-1, pooling_height=-1, pooling_type=PoolingType.NoPooling, pooling_stride_x=-1, pooling_stride_y=-1, activations_enabled=True, threshold_fire=0, threshold_fire_step=1, threshold_fire_bits=1)[source]¶
Convolutional or “weight-sharing” layers are commonly used in visual processing. However, the convolution operation is extremely useful in any domain where translational invariance is required – that is, where localized patterns may be of interest regardless of absolute position within the input. The convolution implemented here is typical of that used in visual processing, i.e., it is a 2D convolution (across the x- and y-dimensions), but applied to a 3D input with a 3D filter. No convolution occurs across the third dimension; events from input feature 1 only interact with connections to input feature 1 – likewise for input feature 2 and so on. Typically, the input feature is the identity of the event-emitting neuron in the previous layer.
Outputs are returned from convolutional layers as a list of events, that is, as a triplet of x, y and feature (neuron index) values. Note that for a single packet processed, each neuron can only generate a single event at a given location, but can generate events at multiple different locations and that multiple neurons may all generate events at a single location.
- Parameters
kernel_width (int) – convolutional kernel width.
kernel_height (int) – convolutional kernel height.
num_neurons (int) – number of neurons (filters).
name (str, optional) – name of the layer.
convolution_mode (ConvolutionMode, optional) – type of convolution.
stride_x (int, optional) – convolution stride X.
stride_y (int, optional) – convolution stride Y.
weights_bits (int, optional) – number of bits used to quantize weights.
pooling_width (int, optional) – pooling window width. If set to -1 it will be global.
pooling_height (int, optional) – pooling window height. If set to -1 it will be global.
pooling_type (PoolingType, optional) – pooling type (None, Max or Average).
pooling_stride_x (int, optional) – pooling stride on x dimension.
pooling_stride_y (int, optional) – pooling stride on y dimension.
activations_enabled (bool, optional) – enable or disable activation function.
threshold_fire (int, optional) – threshold for neurons to fire or generate an event.
threshold_fire_step (float, optional) – length of the potential quantization intervals.
threshold_fire_bits (int, optional) – number of bits used to quantize the neuron response.
SeparableConvolutional¶
- class akida.SeparableConvolutional(kernel_width, kernel_height, num_neurons, name='', convolution_mode=ConvolutionMode.Same, stride_x=1, stride_y=1, weights_bits=2, pooling_width=-1, pooling_height=-1, pooling_type=PoolingType.NoPooling, pooling_stride_x=-1, pooling_stride_y=-1, activations_enabled=True, threshold_fire=0, threshold_fire_step=1, threshold_fire_bits=1)[source]¶
Separable convolutions consist of first performing a depthwise spatial convolution (which acts on each input channel separately) followed by a pointwise convolution that mixes together the resulting output channels. Intuitively, separable convolutions can be understood as a way to factorize a convolution kernel into two smaller kernels, thus decreasing the number of computations required to evaluate the output potentials. The SeparableConvolutional layer can also integrate a final pooling operation to reduce its spatial output dimensions.
- Parameters
kernel_width (int) – convolutional kernel width.
kernel_height (int) – convolutional kernel height.
num_neurons (int) – number of pointwise neurons.
name (str, optional) – name of the layer.
convolution_mode (ConvolutionMode, optional) – type of convolution.
stride_x (int, optional) – convolution stride X.
stride_y (int, optional) – convolution stride Y.
weights_bits (int, optional) – number of bits used to quantize weights.
pooling_width (int, optional) – pooling window width. If set to -1 it will be global.
pooling_height (int, optional) – pooling window height. If set to -1 it will be global.
pooling_type (PoolingType, optional) – pooling type (None, Max or Average).
pooling_stride_x (int, optional) – pooling stride on x dimension.
pooling_stride_y (int, optional) – pooling stride on y dimension.
activations_enabled (bool, optional) – enable or disable activation function.
threshold_fire (int, optional) – threshold for neurons to fire or generate an event.
threshold_fire_step (float, optional) – length of the potential quantization intervals.
threshold_fire_bits (int, optional) – number of bits used to quantize the neuron response.
Concat¶
- class akida.Concat(name='', activations_enabled=True, threshold_fire=0, threshold_fire_step=1, threshold_fire_bits=1)[source]¶
Concatenates its inputs along the last dimension.
It takes as input a list of tensors, all of the same shape except for the last dimension, and returns a single tensor that is the concatenation of all inputs.
It accepts as inputs either potentials or activations.
It can perform an activation on the concatenated output with its own set of activation parameters and variables.
- Parameters
name (str, optional) – name of the layer.
activations_enabled (bool, optional) – enable or disable activation function.
threshold_fire (int, optional) – threshold for neurons to fire or generate an event.
threshold_fire_step (float, optional) – length of the potential quantization intervals.
threshold_fire_bits (int, optional) – number of bits used to quantize the neuron response.
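A hedged sketch of a branching topology ending in a Concat; the names are arbitrary, and whether a given pair of branches is a valid Concat input also depends on layer compatibility rules not described here:

```python
from akida import Model, InputData, FullyConnected, Concat

model = Model()
model.add(InputData(input_width=32, input_height=32, input_channels=1,
                    name="input"))
inp = model.get_layer("input")
# Two parallel branches fed by the same input layer.
model.add(FullyConnected(num_neurons=8, name="branch_a"), inbound_layers=[inp])
model.add(FullyConnected(num_neurons=8, name="branch_b"), inbound_layers=[inp])
# Concatenate both branches along the feature dimension.
model.add(Concat(name="concat"),
          inbound_layers=[model.get_layer("branch_a"),
                          model.get_layer("branch_b")])
```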
Dense¶
- class akida.Dense¶
Attributes:
shape – Returns the shape of this tensor.
size – Returns the size of this tensor.
type – Returns the type of this tensor.
Methods:
to_numpy(self) – Converts the tensor to a numpy.ndarray object.
- property shape¶
Returns the shape of this tensor.
- property size¶
Returns the size of this tensor.
- to_numpy(self: akida.core.Tensor) → array¶
Converts the tensor to a numpy.ndarray object.
- Returns
a numpy.ndarray
- property type¶
Returns the type of this tensor.
Backend¶
- class akida.BackendType¶
Members:
Software
Hardware
Hybrid
- akida.has_backend(backend: akida.core.BackendType) → bool¶
Checks if a given backend type is available.
- Parameters
backend (BackendType) – the backend to check
- Returns
a bool
- akida.backends() → Dict[akida.core.BackendType, akida.core.Backend]¶
Returns the full list of available backends.
- Returns
a dictionary mapping each available BackendType to its Backend
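For instance:

```python
import akida
from akida import BackendType

print(akida.__version__)  # module version string
if akida.has_backend(BackendType.Hardware):
    print("Hardware backend available")
for backend_type, backend in akida.backends().items():
    print(backend_type, backend)
```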
ConvolutionMode¶
- class akida.ConvolutionMode¶
Sets the effective padding of the input for convolution, thereby determining the output dimensions. Naming conventions are the same as Keras/TensorFlow.
Members:
Valid : No padding
Same : Padded so that output size is input size divided by the stride
Full : Padded so that convolution is computed at each point of overlap
PoolingType¶
- class akida.PoolingType¶
The pooling type.
Members:
NoPooling : No pooling applied
Max : Maximum pixel value is selected
Average : Average pixel value is selected
LearningType¶
- class akida.LearningType¶
The learning type.
Members:
NoLearning : Learning is disabled, inference-only mode
AkidaUnsupervised : Built-in unsupervised learning rules
Compatibility¶
- akida.compatibility.model_hardware_incompatibilities(model, nsoc_version=None)[source]¶
Checks a model's compatibility with hardware.
This method performs parameter value checking for hardware compatibility and returns incompatibility messages when needed.
- Parameters
model (Model) – the Model to check for hardware compatibility.
nsoc_version (NsocVersion, optional) – the NSoC version to check.
- Returns
a list of str containing the hardware incompatibilities of the model. The list is empty if the model is hardware compatible.
- akida.compatibility.create_from_model(model, nsoc_version=None)[source]¶
Tries to create a HW compatible model from an incompatible one.
Tries to create a HW compatible model from an incompatible one, using SW workarounds for known limitations. It returns a converted model that is not guaranteed to be HW compatible, depending on whether workarounds have been found.
- Parameters
model (Model) – a Model object to convert.
nsoc_version (NsocVersion, optional) – version of the NSoC.
- Returns
a new Model with no guarantee that it is HW compatible.
- Return type
Model
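A sketch combining both helpers; NsocVersion.v1 is used purely for illustration:

```python
from akida import NsocVersion
from akida.compatibility import (create_from_model,
                                 model_hardware_incompatibilities)

issues = model_hardware_incompatibilities(model, nsoc_version=NsocVersion.v1)
if issues:
    print("\n".join(issues))
    # Apply known software workarounds; the result is still not
    # guaranteed to be hardware compatible.
    model = create_from_model(model, nsoc_version=NsocVersion.v1)
```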
- class akida.NsocVersion¶
Members:
Unknown
v1