Akida Execution Engine API

akida.__version__

Returns the current version of the akida module.

Model

class akida.Model(filename=None, layers=None, backend=BackendType.Software)

An Akida neural Model, represented as a hierarchy of layers.

The Model class is the main interface to Akida.

It provides methods to instantiate, train, test and save models.

Methods

add(self, layer, inbound_layers)

Add a layer to the current model.

add_classes(num_add_classes)

Adds classes to the last layer of the model.

compile(self, num_weights, num_classes, …)

Prepares the internal parameters of the last layer of the model for training.

evaluate(input)

Evaluates a set of images or events through the model.

fit(input[, input_labels])

Trains the model on a set of images or events.

forward(input)

Forwards a set of images or events through the model.

get_layer(*args, **kwargs)

Overloaded function.

get_layer_count(self)

Returns the number of layers.

get_layer_statistics(layer)

Get the LayerStatistics object attached to the specified layer.

get_observer(layer)

Get the Observer object attached to the specified layer.

get_statistics()

Get statistics by layer for this network.

pop_layer(self)

Remove the last layer of the current model.

predict(input[, num_classes])

Returns the model class predictions.

save(self, model_file)

Saves the full model configuration (all layers and weights) to a file on disk.

summary()

Prints a string summary of the model.

Attributes

backend

The backend the model is running on.

layers

Get the list of layers in the current model.

output_dims

The model output dimensions (width, height, features).

__init__(filename=None, layers=None, backend=BackendType.Software)

Creates an empty Model, a Model template from a YAML file, or a full Model from a serialized file.

Parameters
  • filename (str, optional) – path of the YAML file containing the model architecture, or a serialized Model. If None, an empty sequential model will be created, or filled with the layers in the layers parameter.

  • layers (list, optional) – list of layers that will be copied to the new model. If the list does not start with an input layer, it will be added automatically.

  • backend (BackendType, optional) – backend to run the model on.
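
A minimal sketch of the three construction modes, based on the signature above (layer sizes and the file name are illustrative):

    import akida

    # Empty sequential model; layers are added later with add()
    model = akida.Model()

    # Model built from a list of layers; an input layer is prepended
    # automatically if the list does not start with one
    model = akida.Model(layers=[
        akida.InputData(input_width=32, input_height=32, input_channels=1),
        akida.FullyConnected(num_neurons=10),
    ])

    # Model restored from a serialized file
    model = akida.Model(filename="my_model.fbz")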

add(self: akida.core.ModelBase, layer: akida::Layer, inbound_layers: List[akida::Layer] = []) → None

Add a layer to the current model.

A list of inbound layers can optionally be specified. These layers must already be included in the model. If no inbound layer is specified and the layer is not the first layer in the model, the last included layer will be used as the inbound layer.

Parameters
  • layer (one of the available layers) – layer instance to be added to the model

  • inbound_layers (a list of Layer) – an optional list of inbound layers
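
For instance, a small model can be assembled incrementally; since no inbound layers are given, each layer is wired to the previously added one (a sketch with illustrative dimensions):

    import akida

    model = akida.Model()
    model.add(akida.InputData(input_width=32, input_height=32, input_channels=1))
    # No inbound_layers specified: connected to the last included layer (InputData)
    model.add(akida.FullyConnected(num_neurons=10))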

add_classes(num_add_classes)

Adds classes to the last layer of the model.

A model with a compiled last layer is ready to learn using the Akida built-in learning algorithm. This function adds new classes (i.e. new neurons) to the last layer while keeping the previously learned neurons.

Parameters

num_add_classes (int) – number of classes to add to the last layer

Raises

RuntimeError – if the last layer is not compiled

property backend

The backend the model is running on.

compile(self: akida.core.ModelBase, num_weights: int, num_classes: int = 1, initial_plasticity: float = 1.0, learning_competition: float = 0.0, min_plasticity: float = 0.1, plasticity_decay: float = 0.25) → None

Prepares the internal parameters of the last layer of the model for training.

Parameters
  • num_weights (int) – number of connections for each neuron.

  • num_classes (int, optional) – number of classes when running in a ‘labeled mode’.

  • initial_plasticity (float, optional) – defines how easily the weights will change when learning occurs.

  • learning_competition (float, optional) – controls competition between neurons.

  • min_plasticity (float, optional) – defines the minimum level to which plasticity will decay.

  • plasticity_decay (float, optional) – defines the decay of plasticity with each learning step.
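
A hedged sketch of preparing the last layer for the built-in learning algorithm; the parameter values below are illustrative, not recommendations:

    # Prepare the last layer of model for Akida built-in learning
    model.compile(num_weights=100, num_classes=10)

    # Once compiled, extra classes (i.e. new neurons) can be added
    # without discarding previously learned ones
    model.add_classes(num_add_classes=5)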

evaluate(input)

Evaluates a set of images or events through the model.

Forwards an input tensor through the model and returns a float array.

It applies ONLY to models without an activation on the last layer. The output values are obtained from the model discrete potentials by applying a shift and a scale.

The expected input tensor dimensions are:

  • n, representing the number of frames or samples,

  • w, representing the width,

  • h, representing the height,

  • c, representing the channel, or more generally the feature.

If the inputs are events, the input shape must be (n, w, h, c), but if the inputs are images (numpy array), their shape must be (n, h, w, c).

Note: only grayscale (c=1) or RGB (c=3) images (arrays) are supported.

The output tensor shape is always (n, out_w, out_h, out_c).

Parameters

input (Sparse or numpy.ndarray) – a (n, w, h, c) Sparse or a (n, h, w, c) numpy.ndarray

Returns

a float array of shape (n, out_w, out_h, out_c).

Return type

numpy.ndarray

Raises
  • TypeError – if the input doesn’t have the correct type (Sparse, numpy.ndarray).

  • RuntimeError – if the model last layer has an activation.

  • ValueError – if the input doesn’t match the required shape, format, or if the model only has an InputData layer.
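
A minimal sketch, assuming a model whose last layer has activations disabled (otherwise evaluate raises a RuntimeError); the data is illustrative:

    import numpy as np

    # A batch of 4 grayscale 32x32 images; image arrays use the (n, h, w, c) layout
    images = np.random.randint(0, 255, size=(4, 32, 32, 1), dtype=np.uint8)

    potentials = model.evaluate(images)
    print(potentials.shape)  # (n, out_w, out_h, out_c)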

fit(input, input_labels=None)

Trains the model on a set of images or events.

Trains the model with the specified input tensor.

The expected input tensor dimensions are:

  • n, representing the number of frames or samples,

  • w, representing the width,

  • h, representing the height,

  • c, representing the channel, or more generally the feature.

If the inputs are events, the input shape must be (n, w, h, c), but if the inputs are images (numpy array), their shape must be (n, h, w, c).

Note: only grayscale (c=1) or RGB (c=3) images (arrays) are supported.

If activations are enabled for the last layer, the output tensor is a Sparse object.

If activations are disabled for the last layer, the output tensor is a numpy array.

The output tensor shape is always (n, out_w, out_h, out_c).

Parameters
  • input (Sparse or numpy.ndarray) – a (n, w, h, c) Sparse or a (n, h, w, c) numpy.ndarray

  • input_labels (list(int), optional) – input labels. Must have one label per input, or a single label for all inputs. If a label exceeds the defined number of classes, the input will be discarded. (Default value = None).

Returns

a numpy array of shape (n, out_w, out_h, out_c).

Raises
  • TypeError – if the input doesn’t have the correct type (Sparse, numpy.ndarray).

  • ValueError – if the input doesn’t match the required shape, format, etc.
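
A minimal training sketch, assuming the last layer was compiled beforehand (data and labels are illustrative):

    import numpy as np

    images = np.random.randint(0, 255, size=(4, 32, 32, 1), dtype=np.uint8)
    labels = [0, 1, 2, 3]  # one label per input (or a single shared label)

    outputs = model.fit(images, input_labels=labels)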

forward(input)

Forwards a set of images or events through the model.

Forwards an input tensor through the model and returns an output tensor.

The expected input tensor dimensions are:

  • n, representing the number of frames or samples,

  • w, representing the width,

  • h, representing the height,

  • c, representing the channel, or more generally the feature.

If the inputs are events, the input shape must be (n, w, h, c), but if the inputs are images (numpy array), their shape must be (n, h, w, c).

Note: only grayscale (c=1) or RGB (c=3) images (arrays) are supported.

If activations are enabled for the last layer, the output tensor is a Sparse object.

If activations are disabled for the last layer, the output tensor is a numpy array.

The output tensor shape is always (n, out_w, out_h, out_c).

Parameters

input (Sparse or numpy.ndarray) – a (n, w, h, c) Sparse or a (n, h, w, c) numpy.ndarray

Returns

a numpy array of shape (n, out_w, out_h, out_c).

Raises
  • TypeError – if the input doesn’t have the correct type (Sparse, numpy.ndarray).

  • ValueError – if the input doesn’t match the required shape, format, etc.
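
A minimal inference sketch; the output type depends on the last layer's activation setting, as described above:

    outputs = model.forward(images)  # images as in the fit() example

    # Sparse if activations are enabled on the last layer; convert if needed
    if isinstance(outputs, akida.Sparse):
        outputs = outputs.to_numpy()
    print(outputs.shape)  # (n, out_w, out_h, out_c)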

get_layer(*args, **kwargs)

Overloaded function.

  1. get_layer(self: akida.core.ModelBase, layer_name: str) -> akida::Layer

    Get a reference to a specific layer.

    This method allows a deeper introspection of the model by providing access to the underlying layers.

    Parameters

    layer_name (str) – name of the layer to retrieve

    Returns

    a Layer

  2. get_layer(self: akida.core.ModelBase, layer_index: int) -> akida::Layer

    Get a reference to a specific layer.

    This method allows a deeper introspection of the model by providing access to the underlying layers.

    Parameters

    layer_index (int) – index of the layer to retrieve

    Returns

    a Layer
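
Both overloads in a short sketch:

    first = model.get_layer(0)           # by index
    same = model.get_layer(first.name)   # by name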

get_layer_count(self: akida.core.ModelBase) → int

Returns the number of layers.

get_layer_statistics(layer)

Get the LayerStatistics object attached to the specified layer.

LayerStatistics are containers attached to an akida.Layer that allow retrieving layer statistics (average sparsity, number of operations, number of possible spikes, row sparsity).

Parameters

layer (Layer) – layer where you want to obtain the LayerStatistics object.

Returns

a LayerStatistics object.

get_observer(layer)

Get the Observer object attached to the specified layer.

Observers are containers attached to a Layer that allow retrieving layer output spikes and potentials.

Parameters

layer (Layer) – the layer you want to observe.

Returns

the observer attached to the layer.

Return type

Observer

get_statistics()

Get statistics by layer for this network.

Returns

a dictionary of LayerStatistics objects indexed by layer name.

Return type

dict
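
A short sketch of iterating over the returned dictionary:

    stats = model.get_statistics()
    for layer_name, layer_stats in stats.items():
        print(layer_name, layer_stats.output_sparsity)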

property layers

Get the list of layers in the current model.

property output_dims

The model output dimensions (width, height, features).

pop_layer(self: akida.core.ModelBase) → None

Remove the last layer of the current model.

predict(input, num_classes=None)

Returns the model class predictions.

Forwards an input tensor (images or events) through the model and computes predictions based on the neuron id. If the number of output neurons is greater than the number of classes, the neurons are automatically assigned to a class by dividing their id by the number of classes.

The expected input tensor dimensions are:

  • n, representing the number of frames or samples,

  • w, representing the width,

  • h, representing the height,

  • c, representing the channel, or more generally the feature.

If the inputs are events, the input shape must be (n, w, h, c), but if the inputs are images (numpy array), their shape must be (n, h, w, c).

Note: only grayscale (c=1) or RGB (c=3) images (arrays) are supported.

Note that the predictions are based on the activation values of the last layer: for most use cases, you may want to disable activations for that layer (i.e. setting activations_enabled=False) to get better accuracy.

Parameters
  • input (Sparse or numpy.ndarray) – a (n, w, h, c) Sparse or a (n, h, w, c) numpy.ndarray

  • num_classes (int, optional) – number of classes (defaults to the number of neurons in the last layer).

Returns

an array of shape (n).

Return type

numpy.ndarray
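
A minimal sketch, reusing the images array from the fit() example and assuming a 10-class model:

    predictions = model.predict(images, num_classes=10)
    print(predictions.shape)  # (n,): one class index per input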

save(self: akida.core.ModelBase, model_file: str) → None

Saves the full model configuration (all layers and weights) to a file on disk.

Parameters

model_file (str) – full path of the serialized model. If this path has “.fbz” extension, the file will also be compressed.
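
A round-trip sketch (the file name is illustrative):

    model.save("my_model.fbz")  # compressed, because of the .fbz extension
    restored = akida.Model(filename="my_model.fbz")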

summary()

Prints a string summary of the model.

This method prints a summary of the model with details for every layer:

  • name and type in the first column

  • output shape

  • kernel shape

If any layers have unsupervised learning enabled, they are listed with these details:

  • name of layer

  • number of incoming connections

  • number of weights per neuron

It also reports the input shape, the backend type and its version.

Layer

class akida.Layer

Methods

get_learning_histogram()

Returns a histogram of learning percentages.

get_variable(name)

Get the value of a layer variable.

get_variable_names()

Get the list of variable names for this layer.

set_variable(name, values)

Set the value of a layer variable.

Attributes

input_dims

The layer input dimensions (width, height, channels).

learning

The layer learning parameters set.

name

The layer name.

output_dims

The layer output dimensions (width, height, features).

parameters

The layer parameters set.

variables

The layer trainable variables.

get_learning_histogram()

Returns a histogram of learning percentages.

Returns a list of learning percentages and the associated number of neurons.

Returns

a (n,2) numpy.ndarray containing the learning percentages and the number of neurons.

Return type

numpy.ndarray

get_variable(name)

Get the value of a layer variable.

Layer variables are named entities representing the weights or thresholds used during inference:

  • Weights variables are typically integer arrays of shape: (width, height, features/channels, num_neurons) row-major (‘C’).

  • Threshold variables are typically integer or float arrays of shape: (num_neurons).

Parameters

name (str) – the variable name.

Returns

an array containing the variable.

Return type

numpy.ndarray

get_variable_names()

Get the list of variable names for this layer.

Returns

a list of variable names.

property input_dims

The layer input dimensions (width, height, channels).

property learning

The layer learning parameters set.

property name

The layer name.

property output_dims

The layer output dimensions (width, height, features).

property parameters

The layer parameters set.

set_variable(name, values)

Set the value of a layer variable.

Layer variables are named entities representing the weights or thresholds used during inference:

  • Weights variables are typically integer arrays of shape (num_neurons, features/channels, height, width), col-major ordered (‘F’), or equivalently (width, height, features/channels, num_neurons), row-major (‘C’).

  • Threshold variables are typically integer or float arrays of shape: (num_neurons).

Parameters
  • name (str) – the variable name.

  • values (numpy.ndarray) – a numpy.ndarray containing the variable values.
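
A short sketch of reading and writing a variable; the variable name "weights" is illustrative and should be taken from get_variable_names():

    layer = model.get_layer(model.get_layer_count() - 1)
    print(layer.get_variable_names())        # e.g. ['weights', ...]

    weights = layer.get_variable("weights")  # (w, h, c, num_neurons), 'C' order
    layer.set_variable("weights", weights)   # write the values back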

property variables

The layer trainable variables.

LayerStatistics

class akida.LayerStatistics(model, layer, prev_layer=None)

Container attached to an akida.Model and an akida.Layer that allows retrieving layer statistics (average input and output sparsity, number of operations, number of possible spikes, row sparsity).

Attributes

input_sparsity

Get average input sparsity for the layer.

layer_name

Get the name of the corresponding layer.

ops

Get average number of inference operations per sample.

output_sparsity

Get average output sparsity for the layer.

possible_spikes

Get possible spikes for the layer.

row_sparsity

Get kernel row sparsity.

property input_sparsity

Get average input sparsity for the layer.

Returns

the average sparsity value.

Return type

float

property layer_name

Get the name of the corresponding layer.

Returns

the layer name.

Return type

str

property ops

Get average number of inference operations per sample.

Returns

the number of operations per sample.

Return type

int

property output_sparsity

Get average output sparsity for the layer.

Returns

the average output sparsity value.

Return type

float

property possible_spikes

Get possible spikes for the layer.

Returns

the possible spike amount value.

Return type

int

property row_sparsity

Get kernel row sparsity.

Computes the row sparsity of the kernel weights.

Returns

the kernel row sparsity value.

Return type

float

Observer

class akida.Observer(model, layer)

Container attached to a Model that allows retrieving output spikes and potentials for a given layer.

Methods

clear()

Clear spikes and potentials lists.

Attributes

potentials

Get generated potentials.

spikes

Get generated spikes.

clear()

Clear spikes and potentials lists.

property potentials

Get generated potentials.

Returns a dictionary of potentials generated by the attached layer.

Returns

a dictionary of numpy.ndarray objects of shape (w, h, c).

property spikes

Get generated spikes.

Returns a dictionary of spikes generated by the attached layer indexed by their source id.

Returns

a dictionary of Sparse objects of shape (w, h, c).
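
A minimal observation sketch, assuming the observer is retrieved before running inference so that spikes and potentials are recorded:

    layer = model.get_layer(0)
    observer = model.get_observer(layer)

    model.forward(images)

    print(observer.spikes.keys())      # Sparse outputs, indexed by source id
    print(observer.potentials.keys())  # numpy.ndarray potentials
    observer.clear()                   # reset the recorded lists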

InputData

class akida.InputData(input_width, input_height, input_channels, name='')

This is the general purpose input layer. It takes events in a simple address-event data format; that is, each event is characterized by a trio of values giving x, y and channel values.

Regarding the input dimension values, note that AEE expects inputs with zero-based indexing, i.e., if input_width is defined as 12, then the model expects all input events to have x-values in the range 0–11.

Where possible:

  • The x and y dimensions should be used for discretely-sampled continuous domains such as space (e.g., images) or time-series (e.g., an audio signal).

  • The c dimension should be used for ‘category indices’, where there is no particular relationship between neighboring values.

The input dimension values are used for:

  • Error checking – input events are checked and if any fall outside the defined input range, then the whole set of events sent on that processing call is rejected. An error will also be generated if the defined values are larger than the true input dimensions.

  • Configuring the input and output dimensions of subsequent layers in the model.

__init__(input_width, input_height, input_channels, name='')

Create an InputData layer from a name and parameters.

Parameters
  • input_width (int) – input width.

  • input_height (int) – input height.

  • input_channels (int) – size of the third input dimension.

  • name (str, optional) – name of the layer.
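
A minimal sketch illustrating the zero-based ranges discussed above:

    input_layer = akida.InputData(input_width=12, input_height=12, input_channels=2)
    # Events sent to this layer must have x, y values in 0..11
    # and channel values in 0..1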

InputConvolutional

class akida.InputConvolutional(input_width, input_height, input_channels, kernel_width, kernel_height, num_neurons, name='', convolution_mode=ConvolutionMode.Same, stride_x=1, stride_y=1, weights_bits=1, pooling_width=-1, pooling_height=-1, pooling_type=PoolingType.NoPooling, pooling_stride_x=-1, pooling_stride_y=-1, activations_enabled=True, threshold_fire=0, threshold_fire_step=1, threshold_fire_bits=1, padding_value=0)

The InputConvolutional layer is an image-specific input layer.

It is used if images are sent directly to AEE without using the event-generating method. If the user applies their own event-generating method, the resulting events should be sent to an InputData type layer instead.

The InputConvolutional layer accepts images in 8-bit pixels, either grayscale or RGB. Images are converted to events using a combination of convolution kernels, activation thresholds and winner-take-all (WTA) policies. Note that since the layer input is dense, expect approximately one event per pixel – fewer if there are large contrast-free regions in the image, such as with the MNIST dataset.

Note that this format is not appropriate for neuromorphic camera type input, whose data is natively event-based and should be sent to an InputData type input layer.

__init__(input_width, input_height, input_channels, kernel_width, kernel_height, num_neurons, name='', convolution_mode=ConvolutionMode.Same, stride_x=1, stride_y=1, weights_bits=1, pooling_width=-1, pooling_height=-1, pooling_type=PoolingType.NoPooling, pooling_stride_x=-1, pooling_stride_y=-1, activations_enabled=True, threshold_fire=0, threshold_fire_step=1, threshold_fire_bits=1, padding_value=0)

Create an InputConvolutional layer from a name and parameters.

Parameters
  • input_width (int) – input width.

  • input_height (int) – input height.

  • input_channels (int) – number of channels of the input image.

  • kernel_width (int) – convolutional kernel width.

  • kernel_height (int) – convolutional kernel height.

  • num_neurons (int) – number of neurons (filters).

  • name (str, optional) – name of the layer.

  • convolution_mode (ConvolutionMode, optional) – type of convolution.

  • stride_x (int, optional) – convolution stride X.

  • stride_y (int, optional) – convolution stride Y.

  • weights_bits (int, optional) – number of bits used to quantize weights.

  • pooling_width (int, optional) – pooling window width. If set to -1 it will be global.

  • pooling_height (int, optional) – pooling window height. If set to -1 it will be global.

  • pooling_type (PoolingType, optional) – pooling type (None, Max or Average).

  • pooling_stride_x (int, optional) – pooling stride on x dimension.

  • pooling_stride_y (int, optional) – pooling stride on y dimension.

  • activations_enabled (bool, optional) – enable or disable activation function.

  • threshold_fire (int, optional) – threshold for neurons to fire or generate an event.

  • threshold_fire_step (float, optional) – length of the potential quantization intervals.

  • threshold_fire_bits (int, optional) – number of bits used to quantize the neuron response.

  • padding_value (int, optional) – value used when padding
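
A hedged construction sketch with MNIST-like dimensions (all values illustrative):

    layer = akida.InputConvolutional(
        input_width=28, input_height=28, input_channels=1,
        kernel_width=3, kernel_height=3, num_neurons=16,
        convolution_mode=akida.ConvolutionMode.Same,
        pooling_type=akida.PoolingType.Max,
        pooling_width=2, pooling_height=2)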

FullyConnected

class akida.FullyConnected(num_neurons, name='', weights_bits=1, activations_enabled=True, threshold_fire=0, threshold_fire_step=1, threshold_fire_bits=1)

This is used for most processing purposes, since any neuron in the layer can be connected to any input channel.

Outputs are returned from FullyConnected layers as a list of events, that is, as a triplet of x, y and feature values. However, FullyConnected layers by definition have no intrinsic spatial organization. Thus, all output events have x and y values of zero, with only the f value being meaningful, corresponding to the index of the event-generating neuron. Note that each neuron can only generate a single event for each packet of inputs processed.

__init__(num_neurons, name='', weights_bits=1, activations_enabled=True, threshold_fire=0, threshold_fire_step=1, threshold_fire_bits=1)

Create a FullyConnected layer from a name and parameters.

Parameters
  • num_neurons (int) – number of neurons (filters).

  • name (str, optional) – name of the layer.

  • weights_bits (int, optional) – number of bits used to quantize weights.

  • activations_enabled (bool, optional) – enable or disable activation function.

  • threshold_fire (int, optional) – threshold for neurons to fire or generate an event.

  • threshold_fire_step (float, optional) – length of the potential quantization intervals.

  • threshold_fire_bits (int, optional) – number of bits used to quantize the neuron response.
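
For example, a classification head with activations disabled, so that evaluate() can read its potentials (values illustrative):

    head = akida.FullyConnected(num_neurons=10, weights_bits=2,
                                activations_enabled=False)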

Convolutional

class akida.Convolutional(kernel_width, kernel_height, num_neurons, name='', convolution_mode=ConvolutionMode.Same, stride_x=1, stride_y=1, weights_bits=1, pooling_width=-1, pooling_height=-1, pooling_type=PoolingType.NoPooling, pooling_stride_x=-1, pooling_stride_y=-1, activations_enabled=True, threshold_fire=0, threshold_fire_step=1, threshold_fire_bits=1)

Convolutional or “weight-sharing” layers are commonly used in visual processing. However, the convolution operation is extremely useful in any domain where translational invariance is required – that is, where localized patterns may be of interest regardless of absolute position within the input. The convolution implemented here is typical of that used in visual processing, i.e., it is a 2D convolution (across the x- and y-dimensions), but a 3D input with a 3D filter. No convolution occurs across the third dimension; events from input feature 1 only interact with connections to input feature 1 – likewise for input feature 2 and so on. Typically, the input feature is the identity of the event-emitting neuron in the previous layer.

Outputs are returned from convolutional layers as a list of events, that is, as a triplet of x, y and feature (neuron index) values. Note that for a single packet processed, each neuron can only generate a single event at a given location, but can generate events at multiple different locations and that multiple neurons may all generate events at a single location.

__init__(kernel_width, kernel_height, num_neurons, name='', convolution_mode=ConvolutionMode.Same, stride_x=1, stride_y=1, weights_bits=1, pooling_width=-1, pooling_height=-1, pooling_type=PoolingType.NoPooling, pooling_stride_x=-1, pooling_stride_y=-1, activations_enabled=True, threshold_fire=0, threshold_fire_step=1, threshold_fire_bits=1)

Create a Convolutional layer from a name and parameters.

Parameters
  • kernel_width (int) – convolutional kernel width.

  • kernel_height (int) – convolutional kernel height.

  • num_neurons (int) – number of neurons (filters).

  • name (str, optional) – name of the layer.

  • convolution_mode (ConvolutionMode, optional) – type of convolution.

  • stride_x (int, optional) – convolution stride X.

  • stride_y (int, optional) – convolution stride Y.

  • weights_bits (int, optional) – number of bits used to quantize weights.

  • pooling_width (int, optional) – pooling window width. If set to -1 it will be global.

  • pooling_height (int, optional) – pooling window height. If set to -1 it will be global.

  • pooling_type (PoolingType, optional) – pooling type (None, Max or Average).

  • pooling_stride_x (int, optional) – pooling stride on x dimension.

  • pooling_stride_y (int, optional) – pooling stride on y dimension.

  • activations_enabled (bool, optional) – enable or disable activation function.

  • threshold_fire (int, optional) – threshold for neurons to fire or generate an event.

  • threshold_fire_step (float, optional) – length of the potential quantization intervals.

  • threshold_fire_bits (int, optional) – number of bits used to quantize the neuron response.
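
A construction sketch with illustrative values; per the parameter notes above, pooling_width=-1 requests global pooling:

    conv = akida.Convolutional(kernel_width=3, kernel_height=3, num_neurons=32,
                               pooling_type=akida.PoolingType.Average,
                               pooling_width=-1, pooling_height=-1)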

SeparableConvolutional

class akida.SeparableConvolutional(kernel_width, kernel_height, num_neurons, name='', convolution_mode=ConvolutionMode.Same, stride_x=1, stride_y=1, weights_bits=2, pooling_width=-1, pooling_height=-1, pooling_type=PoolingType.NoPooling, pooling_stride_x=-1, pooling_stride_y=-1, activations_enabled=True, threshold_fire=0, threshold_fire_step=1, threshold_fire_bits=1)

Separable convolutions consist of first performing a depthwise spatial convolution (which acts on each input channel separately) followed by a pointwise convolution which mixes together the resulting output channels. Intuitively, separable convolutions can be understood as a way to factorize a convolution kernel into two smaller kernels, thus decreasing the number of computations required to evaluate the output potentials. The SeparableConvolutional layer can also integrate a final pooling operation to reduce its spatial output dimensions.

__init__(kernel_width, kernel_height, num_neurons, name='', convolution_mode=ConvolutionMode.Same, stride_x=1, stride_y=1, weights_bits=2, pooling_width=-1, pooling_height=-1, pooling_type=PoolingType.NoPooling, pooling_stride_x=-1, pooling_stride_y=-1, activations_enabled=True, threshold_fire=0, threshold_fire_step=1, threshold_fire_bits=1)

Create a SeparableConvolutional layer from a name and parameters.

Parameters
  • kernel_width (int) – convolutional kernel width.

  • kernel_height (int) – convolutional kernel height.

  • num_neurons (int) – number of pointwise neurons.

  • name (str, optional) – name of the layer.

  • convolution_mode (ConvolutionMode, optional) – type of convolution.

  • stride_x (int, optional) – convolution stride X.

  • stride_y (int, optional) – convolution stride Y.

  • weights_bits (int, optional) – number of bits used to quantize weights.

  • pooling_width (int, optional) – pooling window width. If set to -1 it will be global.

  • pooling_height (int, optional) – pooling window height. If set to -1 it will be global.

  • pooling_type (PoolingType, optional) – pooling type (None, Max or Average).

  • pooling_stride_x (int, optional) – pooling stride on x dimension.

  • pooling_stride_y (int, optional) – pooling stride on y dimension.

  • activations_enabled (bool, optional) – enable or disable activation function.

  • threshold_fire (int, optional) – threshold for neurons to fire or generate an event.

  • threshold_fire_step (float, optional) – length of the potential quantization intervals.

  • threshold_fire_bits (int, optional) – number of bits used to quantize the neuron response.
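
A construction sketch mirroring the Convolutional example (illustrative values; note the different weights_bits default):

    sep_conv = akida.SeparableConvolutional(
        kernel_width=3, kernel_height=3, num_neurons=64,
        pooling_type=akida.PoolingType.Max,
        pooling_width=2, pooling_height=2)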

Dense

class akida.Dense

Attributes

shape

Returns the shape of this tensor.

size

Returns the size of this tensor.

type

Returns the type of this tensor.

Methods

to_numpy(self)

Converts the tensor to a numpy.ndarray object.

property shape

Returns the shape of this tensor.

property size

Returns the size of this tensor.

to_numpy(self: akida.core.Tensor) → array

Converts the tensor to a numpy.ndarray object.

Returns

a numpy.ndarray

property type

Returns the type of this tensor.

Sparse

class akida.Sparse

Methods

chip(self, dimension, coord)

Returns a Sparse sliced at the given coord, with the requested dimension removed.

slice(self, mask)

Returns a Sparse containing only the events matching the specified mask.

sort(self, arg0)

Sorts the sparse according to the specified dimension order.

to_dense(self)

Converts the tensor to a numpy.ndarray object (alias of to_numpy).

to_numpy(self)

Converts the tensor to a numpy.ndarray object.

Attributes

nnz

Returns the number of nonzero elements.

shape

Returns the shape of this tensor.

size

Returns the size of this tensor.

sparsity

Returns the sparsity of this tensor.

type

Returns the type of this tensor.

chip(self: akida.core.Sparse, dimension: int, coord: int) → akida.core.Sparse

Returns a Sparse sliced at the given coord, with the requested dimension removed.

Parameters
  • dimension (int) – dimension to remove

  • coord (int) – coordinate to select in the dimension to remove

Returns

a Sparse with one less dimension

property nnz

Returns the number of nonzero elements.

property shape

Returns the shape of this tensor.

property size

Returns the size of this tensor.

slice(self: akida.core.Sparse, mask: List[int]) → akida.core.Sparse

Returns a Sparse containing only the events matching the specified mask.

Parameters

mask (list) – shape mask to apply, -1 means ‘select all’

Returns

a Sparse with the same shape

sort(self: akida.core.Sparse, arg0: List[int]) → None

Sorts the sparse according to the specified dimension order.

Parameters

dim_sort_order (list[int]) – Specifies which dimensions to compare first, second, etc. All dimensions must be specified.
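
A short sketch combining the methods above, assuming events is a 4D Sparse of shape (n, w, h, c) obtained from inference:

    events = model.forward(images)            # Sparse, shape (n, w, h, c)

    sample = events.chip(0, 0)                # drop dim 0 at coord 0 -> (w, h, c)
    feature0 = events.slice([-1, -1, -1, 0])  # keep only events with c == 0
    events.sort([0, 3, 2, 1])                 # in-place sort by n, then c, h, w
    dense = events.to_numpy()                 # dense numpy view of the events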

property sparsity

Returns the sparsity of this tensor.

to_dense(self: akida.core.Tensor) → array

Converts the tensor to a numpy.ndarray object (alias of to_numpy).

Returns

a numpy.ndarray

to_numpy(self: akida.core.Tensor) → array

Converts the tensor to a numpy.ndarray object.

Returns

a numpy.ndarray

property type

Returns the type of this tensor.

coords_to_sparse

akida.coords_to_sparse(coords, shape)

Converts a list of 3D or 4D event coordinates to a Sparse input.

Event coordinates should contain:

  • an optional index corresponding to the frame or sample,

  • a first spatial coordinate (typically x, the pixel column),

  • a second spatial coordinate (typically y, the pixel line),

  • a feature index representing the spike (starting from index zero)

The output Sparse will have a shape of (n, w, h, c), where:

  • n is the number of frames or samples,

  • w is the size of the first spatial dimension (typically, the width),

  • h is the size of the second spatial dimension (typically, the height),

  • c is the size of the last dimension (typically, the channel or feature).

Event values are automatically set to 1.

Parameters
  • coords (numpy.ndarray) – a (n, 3) or (n, 4) array of coordinates.

  • shape (tuple[int]) – the 3 or 4 dimensions of the input space.

Returns

the events corresponding to the specified coordinates.

Return type

Sparse
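
A minimal sketch using 4D coordinates (sample, x, y, feature):

    import numpy as np
    import akida

    coords = np.array([[0, 0, 1, 0],
                       [0, 2, 3, 1],
                       [1, 4, 4, 0]])
    events = akida.coords_to_sparse(coords, shape=(2, 5, 5, 2))
    print(events.shape)  # (2, 5, 5, 2); all event values are set to 1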

dense_to_sparse

akida.dense_to_sparse(in_array)

Converts a hollow (mostly zero) dense array to a Sparse input.

The input array will simply be converted to a list of events corresponding to its active (non-zero) coordinates.

The input array must have a (w, h, c) or (n, w, h, c) shape, where:

  • n is the number of samples,

  • w is the size of the first spatial dimension (typically, the width),

  • h is the size of the second spatial dimension (typically, the height),

  • c is the size of the last dimension (typically, the channel or feature).

The output Sparse will have a shape of (n, w, h, c), with n = 1 if the input array only has three dimensions.

Parameters

in_array (numpy.ndarray) – a dense array of shape (w, h, c) or (n, w, h, c).

Returns

the events corresponding to non-null values.

Return type

Sparse
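
A minimal sketch with a 3D input array:

    import numpy as np
    import akida

    arr = np.zeros((5, 5, 2), dtype=np.uint8)
    arr[1, 2, 0] = 1
    arr[3, 4, 1] = 1

    events = akida.dense_to_sparse(arr)
    print(events.shape)  # (1, 5, 5, 2): n = 1 for a 3D input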

packetize

akida.packetize(events, shape, packet_size)

Converts a list of 3D coordinates to a 4-dimensional Sparse input.

This function converts a numpy array of event coordinates to a Sparse object where the event coordinates are grouped according to the specified packet size.

3D event coordinates should contain:

  • a first spatial coordinate (typically x, the pixel column),

  • a second spatial coordinate (typically y, the pixel line),

  • a feature index representing the spike (starting from index zero)

The output Sparse will have a shape of (n, w, h, c), where:

  • n is the number of packets,

  • w is the size of the first spatial dimension (typically, the width),

  • h is the size of the second spatial dimension (typically, the height),

  • c is the size of the last dimension (typically, the channel or feature).

Event values are automatically set to 1.

Parameters
  • events (numpy.ndarray) – a (n, 3) array of input coordinates.

  • shape (tuple[int]) – the three dimensions of the input space.

  • packet_size (int) – the number of events per packet.

Returns

the (n, w, h, c) events corresponding to the coordinates.

Return type

Sparse
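
A minimal sketch grouping four events into packets of two:

    import numpy as np
    import akida

    coords = np.array([[1, 0, 0],   # (x, y, feature)
                       [2, 3, 1],
                       [4, 4, 0],
                       [1, 1, 1]])
    events = akida.packetize(coords, shape=(5, 5, 2), packet_size=2)
    print(events.shape)  # (2, 5, 5, 2): two packets of two events each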

Backend

class akida.BackendType

Members:

Software

Hardware

Hybrid

akida.has_backend(backend: akida.core.BackendType) → bool

Checks if a given backend type is available.

Parameters

backend (BackendType) – the backend to check

Returns

a bool

akida.backends() → Dict[akida.core.BackendType, akida.core.Backend]

Returns the available backends.

Returns

a dictionary mapping each available BackendType to its Backend
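
A short sketch of backend selection at model creation:

    import akida

    backend = (akida.BackendType.Hardware
               if akida.has_backend(akida.BackendType.Hardware)
               else akida.BackendType.Software)
    model = akida.Model(backend=backend)

    print(akida.backends().keys())  # the available BackendType values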

ConvolutionMode

class akida.ConvolutionMode

Sets the effective padding of the input for convolution, thereby determining the output dimensions. Naming conventions are the same as Keras/TensorFlow.

Members:

Valid : No padding

Same : Padded so that output size is input size divided by the stride

Full : Padded so that convolution is computed at each point of overlap

PoolingType

class akida.PoolingType

The pooling type

Members:

NoPooling : No pooling applied

Max : Maximum pixel value is selected

Average : Average pixel value is computed

LearningType

class akida.LearningType

The learning type

Members:

NoLearning : Learning is disabled, inference-only mode

AkidaUnsupervised : Built-in unsupervised learning rules

Compatibility

akida.compatibility.model_hardware_incompatibilities(model, nsoc_version=None)

Checks a model's compatibility with hardware.

This method performs parameter value checks for hardware compatibility and returns incompatibility messages when needed.

Parameters
  • model (Model) – the Model to check for hardware compatibility

  • nsoc_version (NsocVersion, optional) – the NSoC version to check

Returns

a list of str containing the hardware incompatibilities of the model. The list is empty if the model is hardware compatible.

akida.compatibility.create_from_model(model, nsoc_version=None)

Tries to create a hardware-compatible model from an incompatible one.

It applies software workarounds for known limitations and returns a converted model that is not guaranteed to be hardware compatible, depending on whether workarounds were found.

Parameters
  • model (Model) – a Model object to convert

  • nsoc_version (NsocVersion, optional) – version of the NSoC

Returns

a new Model, with no guarantee that it is hardware compatible.

Return type

Model
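
A hedged sketch of a typical compatibility check (assuming the compatibility submodule is importable as shown):

    from akida import compatibility

    issues = compatibility.model_hardware_incompatibilities(model)
    if issues:
        print("\n".join(issues))
        # Attempt software workarounds for the known limitations;
        # the result is still not guaranteed to be hardware compatible
        model = compatibility.create_from_model(model)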

class akida.NsocVersion

Members:

Unknown

v1