Akida Execution Engine API

akida.__version__

Returns the current version of the akida module.

Model

class akida.Model(filename=None, layers=None)[source]

An Akida neural Model, represented as a hierarchy of layers.

The Model class is the main interface to Akida and allows:

  • to create an empty Model to which you can add layers programmatically using the sequential API,

  • to reload a full Model from a serialized file or a memory buffer,

  • to create a new Model from a list of layers taken from an existing Model.

It provides methods to instantiate, train, test and save models.

Parameters
  • filename (str, optional) – path to the serialized Model. If None, an empty sequential model will be created, or filled with the layers in the layers parameter.

  • serialized_buffer (bytes, optional) – binary buffer containing a serialized Model.

  • layers (list, optional) – list of layers that will be copied to the new model. If the list does not start with an input layer, it will be added automatically.
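
The following is a minimal sketch of this sequential workflow; the layer types, shapes and file name are illustrative placeholders, not requirements.

    from akida import Model, InputData, FullyConnected

    # Build an empty sequential model and add layers programmatically
    model = Model()
    model.add(InputData(input_shape=(32, 32, 1), input_bits=4))
    model.add(FullyConnected(units=10, name="fc1"))

    # Serialize to disk, then reload through the filename parameter
    model.save("example_model.fbz")
    reloaded = Model(filename="example_model.fbz")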

Methods:

add(self, layer, inbound_layers)

Add a layer to the current model.

add_classes(num_add_classes)

Adds classes to the last layer of the model.

compile(self, num_weights[, num_classes, ...])

Prepare the internal parameters of the last layer of the model for training

evaluate(inputs)

Evaluates a set of images or events through the model.

fit(inputs[, input_labels])

Trains a set of images or events through the model.

forward(inputs)

Forwards a set of images or events through the model.

get_layer(*args, **kwargs)

Overloaded function.

get_layer_count(self)

The number of layers.

map(self, device, hw_only)

Map the model to a Device using a target backend.

pop_layer(self)

Remove the last layer of the current model.

predict(inputs[, num_classes])

Returns the model class predictions.

save(self, arg0)

Saves all the model configuration (all layers and weights) to a file on disk.

summary()

Prints a string summary of the model.

to_buffer(self)

Serializes all the model configuration (all layers and weights) to a bytes buffer.

Attributes:

device

The device the Model is mapped to (or None)

hw_device

Internal method to get the hardware device the Model is mapped to (or None)

input_shape

The model input dimensions (width, height, features).

layers

Get a list of layers in the current model.

metrics

The model metrics.

output_shape

The model output dimensions (width, height, features).

sequences

The list of layer sequences in the Model

statistics

Get statistics by sequence for this model.

add(self: akida.core.ModelBase, layer: akida::Layer, inbound_layers: List[akida::Layer] = []) None

Add a layer to the current model.

A list of inbound layers can optionally be specified. These layers must already be included in the model. If no inbound layer is specified and the layer is not the first layer in the model, the last included layer will be used as the inbound layer.

Parameters
  • layer (one of the available layers) – layer instance to be added to the model

  • inbound_layers (a list of Layer) – an optional list of inbound layers
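
A short sketch of wiring a layer to an explicit inbound layer (the index and layer names are placeholders):

    # The inbound layer must already belong to the model
    first = model.get_layer(0)
    model.add(FullyConnected(units=32, name="fc2"), inbound_layers=[first])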

add_classes(num_add_classes)[source]

Adds classes to the last layer of the model.

A model with a compiled last layer is ready to learn using the Akida built-in learning algorithm. This function adds new classes (i.e. new neurons) to the last layer while keeping the previously learned neurons.

Parameters

num_add_classes (int) – number of classes to add to the last layer

Raises

RuntimeError – if the last layer is not compiled

compile(self: akida.core.ModelBase, num_weights: int, num_classes: int = 1, initial_plasticity: float = 1.0, learning_competition: float = 0.0, min_plasticity: float = 0.10000000149011612, plasticity_decay: float = 0.25) None

Prepare the internal parameters of the last layer of the model for training

Parameters
  • num_weights (int) – number of connections for each neuron.

  • num_classes (int, optional) – number of classes when running in a ‘labeled mode’.

  • initial_plasticity (float, optional) – defines how easily the weights will change when learning occurs.

  • learning_competition (float, optional) – controls competition between neurons.

  • min_plasticity (float, optional) – defines the minimum level to which plasticity will decay.

  • plasticity_decay (float, optional) – defines the decay of plasticity with each learning step.
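
A hedged sketch of preparing the last layer for the built-in learning algorithm; num_weights and num_classes are illustrative values, the remaining parameters keep their documented defaults:

    # Prepare the last layer for Akida built-in learning
    model.compile(num_weights=64, num_classes=10)
    # After learning, extra classes can be appended with model.add_classes(...)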

property device

The device the Model is mapped to (or None)

evaluate(inputs)[source]

Evaluates a set of images or events through the model.

Forwards an input tensor through the model and returns a float array.

It applies ONLY to models without an activation on the last layer. The output values are obtained from the model discrete potentials by applying a shift and a scale.

The expected input tensor dimensions are:

  • n, representing the number of frames or samples,

  • w, representing the width,

  • h, representing the height,

  • c, representing the channel, or more generally the feature.

If the inputs are events, the input shape must be (n, w, h, c), but if the inputs are images (numpy array), their shape must be (n, h, w, c).

Note: only grayscale (c=1) or RGB (c=3) images (arrays) are supported.

Parameters

inputs (numpy.ndarray) – a (n, w, h, c) numpy.ndarray

Returns

a float array of shape (n, w, h, c).

Return type

numpy.ndarray

Raises
  • TypeError – if the input is not a numpy.ndarray.

  • RuntimeError – if the model last layer has an activation.

  • ValueError – if the input doesn’t match the required shape, format, or if the model only has an InputData layer.
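
For example, a sketch assuming model is an already defined Akida model whose last layer has its activation disabled:

    import numpy as np

    # A batch of 10 grayscale 32x32 images in (n, h, w, c) order
    images = np.random.randint(0, 256, size=(10, 32, 32, 1), dtype=np.uint8)
    potentials = model.evaluate(images)   # float numpy.ndarray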

fit(inputs, input_labels=None)[source]

Trains a set of images or events through the model.

Trains the model with the specified input tensor (numpy array).

The expected input tensor dimensions are:

  • n, representing the number of frames or samples,

  • w, representing the width,

  • h, representing the height,

  • c, representing the channel, or more generally the feature.

If the inputs are events, the input shape must be (n, w, h, c), but if the inputs are images, their shape must be (n, h, w, c).

Note: only grayscale (c=1) or RGB (c=3) images (arrays) are supported.

If activations are enabled for the last layer, the output is a uint8 tensor.

If activations are disabled for the last layer, the output is an int32 tensor.

Parameters
  • inputs (numpy.ndarray) – a numpy.ndarray

  • input_labels (list(int), optional) – input labels. Must have one label per input, or a single label for all inputs. If a label exceeds the defined number of classes, the input will be discarded. (Default value = None).

Returns

a numpy array of shape (n, out_w, out_h, out_c).

Raises
  • TypeError – if the input is not a numpy.ndarray.

  • ValueError – if the input doesn’t match the required shape, format, etc.
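
A short sketch of built-in learning on a labelled batch; train_images and train_labels are placeholders, and the last layer is assumed to have been compiled beforehand:

    # One label per input sample; labels exceeding the defined number of classes
    # cause the corresponding inputs to be discarded, as noted above
    outputs = model.fit(train_images, input_labels=train_labels)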

forward(inputs)[source]

Forwards a set of images or events through the model.

Forwards an input tensor through the model and returns an output tensor.

The expected input tensor dimensions are:

  • n, representing the number of frames or samples,

  • w, representing the width,

  • h, representing the height,

  • c, representing the channel, or more generally the feature.

If the inputs are events, the input shape must be (n, w, h, c), but if the inputs are images, their shape must be (n, h, w, c).

Note: only grayscale (c=1) or RGB (c=3) images (arrays) are supported.

If activations are enabled for the last layer, the output is a uint8 tensor.

If activations are disabled for the last layer, the output is an int32 tensor.

Parameters

inputs (numpy.ndarray) – a numpy.ndarray

Returns

a numpy array of shape (n, out_w, out_h, out_c).

Raises
  • TypeError – if the input is not a numpy.ndarray.

  • ValueError – if the inputs don’t match the required shape, format, etc.
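
For example, a sketch where images is a uint8 batch prepared as described above:

    spikes = model.forward(images)   # uint8 if the last layer has an activation, int32 otherwise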

get_layer(*args, **kwargs)

Overloaded function.

  1. get_layer(self: akida.core.ModelBase, layer_name: str) -> akida::Layer

    Get a reference to a specific layer.

    This method allows a deeper introspection of the model by providing access to the underlying layers.

    param layer_name

    name of the layer to retrieve

    type layer_name

    str

    return

    a Layer

  2. get_layer(self: akida.core.ModelBase, layer_index: int) -> akida::Layer

    Get a reference to a specific layer.

    This method allows a deeper introspection of the model by providing access to the underlying layers.

    param layer_index

    index of the layer to retrieve

    type layer_index

    int

    return

    a Layer
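
Both overloads in use (the layer name "fc1" is a placeholder):

    first_layer = model.get_layer(0)    # by index
    fc = model.get_layer("fc1")         # by name
    print(fc.name, fc.input_dims, fc.output_dims)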

get_layer_count(self: akida.core.ModelBase) int

The number of layers.

property hw_device

Internal method to get the hardware device the Model is mapped to (or None)

property input_shape

The model input dimensions (width, height, features).

property layers

Get a list of layers in the current model.

map(self: akida.core.ModelBase, device: akida::Device, hw_only: bool = False) None

Map the model to a Device using a target backend.

This method tries to map a Model to the specified Device, implicitly identifying one or more layer sequences that are mapped individually on the Device Mesh.

An optional hw_only parameter can be specified to force the mapping strategy to use only one hardware sequence, thus reducing software intervention on the inference.

Parameters
  • device (Device) – the target Device or None

  • hw_only (bool) – when true, the model should be mapped in one sequence
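
A hedged sketch of mapping a model onto the first detected hardware device:

    import akida

    devices = akida.devices()
    if devices:
        # Force a single hardware sequence to minimize software intervention
        model.map(devices[0], hw_only=True)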

property metrics

The model metrics.

property output_shape

The model output dimensions (width, height, features).

pop_layer(self: akida.core.ModelBase) None

Remove the last layer of the current model.

predict(inputs, num_classes=None)[source]

Returns the model class predictions.

Forwards an input tensor (images or events) through the model and computes predictions based on the neuron id. If the number of output neurons is greater than the number of classes, the neurons are automatically assigned to a class by dividing their id by the number of classes.

The expected input tensor dimensions are:

  • n, representing the number of frames or samples,

  • w, representing the width,

  • h, representing the height,

  • c, representing the channel, or more generally the feature.

If the inputs are events, the input shape must be (n, w, h, c), but if the inputs are images their shape must be (n, h, w, c).

Note: only grayscale (c=1) or RGB (c=3) images (arrays) are supported.

Note that the predictions are based on the activation values of the last layer: for most use cases, you may want to disable activations for that layer (i.e. setting activation=False) to get better accuracy.

Parameters
  • inputs (numpy.ndarray) – a numpy.ndarray

  • num_classes (int, optional) – optional parameter (defaults to the number of neurons in the last layer).

Returns

an array of shape (n).

Return type

numpy.ndarray

Raises

TypeError – if the input is not a numpy.ndarray.
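
For example, a sketch for a 10-class model whose last layer has activation disabled, as recommended above:

    predictions = model.predict(images, num_classes=10)   # shape (n,) array of class indices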

save(self: akida.core.ModelBase, arg0: str) None

Saves all the model configuration (all layers and weights) to a file on disk.

Parameters

model_file (str) – full path of the serialized model (.fbz file).

property sequences

The list of layer sequences in the Model

property statistics

Get statistics by sequence for this model.

Returns

SequenceStatistics indexed by name.

Return type

dict

summary()[source]

Prints a string summary of the model.

This method prints a summary of the model with details for every layer, grouped by sequences:

  • name and type in the first column

  • output shape

  • kernel shape

If there is any layer with unsupervised learning enabled, it will list them, with these details:

  • name of layer

  • number of incoming connections

  • number of weights per neuron

to_buffer(self: akida.core.ModelBase) bytes

Serializes all the model configuration (all layers and weights) to a bytes buffer.
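
A short sketch of a buffer round trip, using the serialized_buffer constructor parameter documented above:

    buffer = model.to_buffer()
    restored = akida.Model(serialized_buffer=buffer)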

Layer

class akida.Layer

Methods:

get_learning_histogram()

Returns a histogram of learning percentages.

get_variable(name)

Get the value of a layer variable.

get_variable_names()

Get the list of variable names for this layer.

set_variable(name, values)

Set the value of a layer variable.

Attributes:

inbounds

The layer inbound layers.

input_bits

The layer input bits.

input_dims

The layer input dimensions (width, height, channels).

learning

The layer learning parameters set.

mapping

The layer hardware mapping.

name

The layer name.

output_dims

The layer output dimensions (width, height, features).

parameters

The layer parameters set.

variables

The layer trainable variables.

get_learning_histogram()

Returns a histogram of learning percentages.

Returns a list of learning percentages and the associated number of neurons.

Returns

a (n,2) numpy.ndarray containing the learning percentages and the number of neurons.

Return type

numpy.ndarray

get_variable(name)

Get the value of a layer variable.

Layer variables are named entities representing the weights or thresholds used during inference:

  • Weights variables are typically integer arrays of shape: (width, height, features/channels, num_neurons) row-major (‘C’).

  • Threshold variables are typically integer or float arrays of shape: (num_neurons).

Parameters

name (str) – the variable name.

Returns

an array containing the variable.

Return type

numpy.ndarray

get_variable_names()

Get the list of variable names for this layer.

Returns

a list of variable names.

property inbounds

The layer inbound layers.

property input_bits

The layer input bits.

property input_dims

The layer input dimensions (width, height, channels).

property learning

The layer learning parameters set.

property mapping

The layer hardware mapping.

property name

The layer name.

property output_dims

The layer output dimensions (width, height, features).

property parameters

The layer parameters set.

set_variable(name, values)

Set the value of a layer variable.

Layer variables are named entities representing the weights or thresholds used during inference:

  • Weights variables are typically integer arrays of shape (num_neurons, features/channels, height, width) col-major ordered (‘F’), or equivalently (width, height, features/channels, num_neurons) row-major (‘C’).

  • Threshold variables are typically integer or float arrays of shape: (num_neurons).

Parameters
  • name (str) – the variable name.

  • values (numpy.ndarray) – a numpy.ndarray containing the variable values.
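
A hedged sketch of reading and writing a variable; the variable name "weights" and the layer name "fc1" are assumptions, use get_variable_names() to check what the layer actually exposes:

    layer = model.get_layer("fc1")
    print(layer.get_variable_names())
    w = layer.get_variable("weights")     # numpy.ndarray
    layer.set_variable("weights", w)      # values must keep the variable’s shape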

property variables

The layer trainable variables.

Sparsity

akida.evaluate_sparsity(model, inputs)[source]

Evaluate the sparsity of a Model on a set of inputs

Parameters
  • model (Model) – the model to evaluate

  • inputs (numpy.ndarray) – a numpy.ndarray

Returns

a dictionary of float sparsity values indexed by layers
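
For example, a sketch where images is a batch prepared as for Model.forward:

    sparsity = akida.evaluate_sparsity(model, images)
    for layer, value in sparsity.items():
        print(layer.name, value)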

InputData

class akida.InputData(input_shape, input_bits=4, name='')[source]

This is the general purpose input layer. It takes events in a simple address-event data format; that is, each event is characterized by a trio of x, y and channel values.

Regarding the input dimension values, note that AEE expects inputs with zero-based indexing, i.e., if input_width is defined as 12, then the model expects all input events to have x-values in the range 0–11.

Where possible:

  • The x and y dimensions should be used for discretely-sampled continuous domains such as space (e.g., images) or time-series (e.g., an audio signal).

  • The c dimension should be used for ‘category indices’, where there is no particular relationship between neighboring values.

The input dimension values are used for:

  • Error checking – input events are checked and if any fall outside the defined input range, then the whole set of events sent on that processing call is rejected. An error will also be generated if the defined values are larger than the true input dimensions.

  • Configuring the input and output dimensions of subsequent layers in the model.

Parameters
  • input_shape (tuple) – the 3D input shape.

  • input_bits (int) – input bitwidth.

  • name (str, optional) – name of the layer.

InputConvolutional

class akida.InputConvolutional(input_shape, kernel_size, filters, name='', padding=<Padding.Same: 1>, kernel_stride=(1, 1), weights_bits=1, pool_size=(-1, -1), pool_type=<PoolType.NoPooling: 0>, pool_stride=(-1, -1), activation=True, threshold=0, act_step=1, act_bits=1, padding_value=0)[source]

The InputConvolutional layer is an image-specific input layer.

It is used if images are sent directly to AEE without using the event-generating method. If the user applies their own event-generating method, the resulting events should be sent to an InputData type layer instead.

The InputConvolutional layer accepts images in 8-bit pixels, either grayscale or RGB. Images are converted to events using a combination of convolution kernels, activation thresholds and winner-take-all (WTA) policies. Note that since the layer input is dense, expect approximately one event per pixel – fewer if there are large contrast-free regions in the image, such as with the MNIST dataset.

Note that this format is not appropriate for neuromorphic camera-type input, whose data is natively event-based and should be sent to an InputData type input layer.

Parameters
  • input_shape (tuple) – the 3D input shape.

  • kernel_size (list) – list of 2 integers representing the spatial dimensions of the convolutional kernel.

  • filters (int) – number of filters.

  • name (str, optional) – name of the layer.

  • padding (Padding, optional) – type of convolution.

  • kernel_stride (tuple, optional) – tuple of integers representing the convolution stride (X, Y).

  • weights_bits (int, optional) – number of bits used to quantize weights.

  • pool_size (list, optional) – list of 2 integers, representing the window size over which to take the maximum or the average (depending on pool_type parameter).

  • pool_type (PoolType, optional) – pooling type (None, Max or Average).

  • pool_stride (list, optional) – list of 2 integers representing the stride dimensions.

  • activation (bool, optional) – enable or disable activation function.

  • threshold (int, optional) – threshold for neurons to fire or generate an event.

  • act_step (float, optional) – length of the potential quantization intervals.

  • act_bits (int, optional) – number of bits used to quantize the neuron response.

  • padding_value (int, optional) – value used when padding.
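
A hedged construction sketch; all values are illustrative:

    from akida import InputConvolutional, Padding, PoolType

    input_conv = InputConvolutional(input_shape=(32, 32, 3),
                                    kernel_size=(3, 3),
                                    filters=16,
                                    padding=Padding.Same,
                                    pool_type=PoolType.Max,
                                    pool_size=(2, 2),
                                    act_bits=4)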

FullyConnected

class akida.FullyConnected(units, name='', weights_bits=1, activation=True, threshold=0, act_step=1, act_bits=1)[source]

This is used for most processing purposes, since any neuron in the layer can be connected to any input channel.

Outputs are returned from FullyConnected layers as a list of events, that is, as a triplet of x, y and feature values. However, FullyConnected models by definition have no intrinsic spatial organization. Thus, all output events have x and y values of zero with only the f value being meaningful – corresponding to the index of the event-generating neuron. Note that each neuron can only generate a single event for each packet of inputs processed.

Parameters
  • units (int) – number of units.

  • name (str, optional) – name of the layer.

  • weights_bits (int, optional) – number of bits used to quantize weights.

  • activation (bool, optional) – enable or disable activation function.

  • threshold (int, optional) – threshold for neurons to fire or generate an event.

  • act_step (float, optional) – length of the potential quantization intervals.

  • act_bits (int, optional) – number of bits used to quantize the neuron response.

Convolutional

class akida.Convolutional(kernel_size, filters, name='', padding=<Padding.Same: 1>, kernel_stride=(1, 1), weights_bits=1, pool_size=(-1, -1), pool_type=<PoolType.NoPooling: 0>, pool_stride=(-1, -1), activation=True, threshold=0, act_step=1, act_bits=1)[source]

Convolutional or “weight-sharing” layers are commonly used in visual processing. However, the convolution operation is extremely useful in any domain where translational invariance is required – that is, where localized patterns may be of interest regardless of absolute position within the input. The convolution implemented here is typical of that used in visual processing, i.e., it is a 2D convolution (across the x- and y-dimensions), but a 3D input with a 3D filter. No convolution occurs across the third dimension; events from input feature 1 only interact with connections to input feature 1 – likewise for input feature 2 and so on. Typically, the input feature is the identity of the event-emitting neuron in the previous layer.

Outputs are returned from convolutional layers as a list of events, that is, as a triplet of x, y and feature (neuron index) values. Note that for a single packet processed, each neuron can only generate a single event at a given location, but can generate events at multiple different locations and that multiple neurons may all generate events at a single location.

Parameters
  • kernel_size (list) – list of 2 integers representing the spatial dimensions of the convolutional kernel.

  • filters (int) – number of filters.

  • name (str, optional) – name of the layer.

  • padding (Padding, optional) – type of convolution.

  • kernel_stride (list, optional) – list of 2 integers representing the convolution stride (X, Y).

  • weights_bits (int, optional) – number of bits used to quantize weights.

  • pool_size (list, optional) – list of 2 integers, representing the window size over which to take the maximum or the average (depending on pool_type parameter).

  • pool_type (PoolType, optional) – pooling type (None, Max or Average).

  • pool_stride (list, optional) – list of 2 integers representing the stride dimensions.

  • activation (bool, optional) – enable or disable activation function.

  • threshold (int, optional) – threshold for neurons to fire or generate an event.

  • act_step (float, optional) – length of the potential quantization intervals.

  • act_bits (int, optional) – number of bits used to quantize the neuron response.

SeparableConvolutional

class akida.SeparableConvolutional(kernel_size, filters, name='', padding=<Padding.Same: 1>, kernel_stride=(1, 1), weights_bits=2, pool_size=(-1, -1), pool_type=<PoolType.NoPooling: 0>, pool_stride=(-1, -1), activation=True, threshold=0, act_step=1, act_bits=1)[source]

Separable convolutions consist of first performing a depthwise spatial convolution (which acts on each input channel separately) followed by a pointwise convolution which mixes together the resulting output channels. Intuitively, separable convolutions can be understood as a way to factorize a convolution kernel into two smaller kernels, thus decreasing the number of computations required to evaluate the output potentials. The SeparableConvolutional layer can also integrate a final pooling operation to reduce its spatial output dimensions.

Parameters
  • kernel_size (list) – list of 2 integers representing the spatial dimensions of the convolutional kernel.

  • filters (int) – number of pointwise filters.

  • name (str, optional) – name of the layer.

  • padding (Padding, optional) – type of convolution.

  • kernel_stride (list, optional) – list of 2 integers representing the convolution stride (X, Y).

  • weights_bits (int, optional) – number of bits used to quantize weights.

  • pool_size (list, optional) – list of 2 integers, representing the window size over which to take the maximum or the average (depending on pool_type parameter).

  • pool_type (PoolType, optional) – pooling type (None, Max or Average).

  • pool_stride (list, optional) – list of 2 integers representing the stride dimensions.

  • activation (bool, optional) – enable or disable activation function.

  • threshold (int, optional) – threshold for neurons to fire or generate an event.

  • act_step (float, optional) – length of the potential quantization intervals.

  • act_bits (int, optional) – number of bits used to quantize the neuron response.

Concat

class akida.Concat(name='', activation=True, threshold=0, act_step=1, act_bits=1)[source]

Concatenates its inputs along the last dimension

It takes as input a list of tensors, all of the same shape except for the last dimension, and returns a single tensor that is the concatenation of all inputs.

It accepts as inputs either potentials or activations.

It can perform an activation on the concatenated output with its own set of activation parameters and variables.

Parameters
  • name (str, optional) – name of the layer.

  • activation (bool, optional) – enable or disable activation function.

  • threshold (int, optional) – threshold for neurons to fire or generate an event.

  • act_step (float, optional) – length of the potential quantization intervals.

  • act_bits (int, optional) – number of bits used to quantize the neuron response.
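
A hedged sketch of concatenating two branches; branch_a and branch_b are placeholder layer objects already present in the model:

    from akida import Concat

    concat = Concat(name="merge", activation=False)
    model.add(concat, inbound_layers=[branch_a, branch_b])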

BackendType

class akida.BackendType

Members:

Software

Hardware

Hybrid

Padding

class akida.Padding

Sets the effective padding of the input for convolution, thereby determining the output dimensions. Naming conventions are the same as in Keras/TensorFlow.

Members:

Valid : No padding

Same : Padded so that output size is input size divided by the stride

PoolType

class akida.PoolType

The pooling type

Members:

NoPooling : No pooling applied

Max : Maximum pixel value is selected

Average : Average pixel value is selected

LearningType

class akida.LearningType

The learning type

Members:

NoLearning : Learning is disabled, inference-only mode

AkidaUnsupervised : Built-in unsupervised learning rules

HwVersion

class akida.HwVersion

Attributes:

major_rev

The hardware major revision

minor_rev

The hardware minor revision

product_id

The hardware product identifier

vendor_id

The hardware vendor identifier

property major_rev

The hardware major revision

property minor_rev

The hardware minor revision

property product_id

The hardware product identifier

property vendor_id

The hardware vendor identifier

Compatibility

akida.compatibility.create_from_model(model, hw_version=None)[source]

Tries to create a HW compatible model from an incompatible one

Tries to create a HW compatible model from an incompatible one, using SW workarounds for known limitations. It returns a converted model that is not guaranteed to be HW compatible, depending on whether workarounds have been found.

Parameters
  • model (Model) – a Model object to convert

  • hw_version (HwVersion, optional) – version of the Hardware

Returns

a new Model with no guarantee that it is HW compatible.

Return type

Model
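
For example, a sketch of attempting a conversion and then checking whether the result maps onto a virtual device:

    import akida

    converted = akida.compatibility.create_from_model(model)
    converted.map(akida.AKD1000())   # check whether the converted model actually maps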

Device

class akida.Device

Attributes:

desc

Returns the Device description

mesh

The device Mesh layout

version

The device hardware version.

property desc

Returns the Device description

Returns

a string describing the Device

property mesh

The device Mesh layout

property version

The device hardware version.

akida.devices() List[akida.core.HardwareDevice]

Returns the full list of available hardware devices

Returns

list of Device

akida.AKD1000(hw_version=BC.00.000.002)[source]

Returns a virtual device for an AKD1000 NSoC.

This function returns a virtual device for Brainchip’s AKD1000 NSoC.

Parameters
  • hw_version (HwVersion, optional) – optional parameter (defaults to the NSoC_v2 hardware revision).

Returns

a virtual device.

Return type

Device
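
A sketch of checking hardware compatibility without physical hardware:

    virtual_device = akida.AKD1000()   # NSoC_v2 revision by default
    model.map(virtual_device)
    model.summary()                    # the summary groups layers by mapped sequence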

akida.TwoNodesIP()[source]

Returns a virtual device for a two-node Akida IP.

Returns

a virtual device.

Return type

Device

HWDevice

class akida.HardwareDevice

Methods:

evaluate(self, arg0)

Processes inputs on a programmed device, returns a float array.

forward(self, arg0)

Processes inputs on a programmed device.

read_clock_counter(self)

Reads the DMA clock counter value

toggle_clock_counter(self, arg0)

Turn the DMA clock counter on or off

unprogram(self)

Attributes:

learn_enabled

Property that enables/disables learning on current program (if possible).

memory

The device memory usage and top usage (in bytes)

pipeline

Property to enable/disable input pipeline.

program

Property that retrieves current program or programs a device using a serialized program bytes object.

soc

The SocDriver interface used by the device, or None if the device is not a SoC

evaluate(self: akida.core.HardwareDevice, arg0: numpy.ndarray[numpy.uint8]) numpy.ndarray

Processes inputs on a programmed device, returns a float array.

Parameters

inputs (numpy.ndarray) – a numpy.ndarray with shape matching the current program

Returns

a numpy.ndarray with float outputs from the device

forward(self: akida.core.HardwareDevice, arg0: numpy.ndarray[numpy.uint8]) numpy.ndarray

Processes inputs on a programmed device.

Parameters

inputs (numpy.ndarray) – a numpy.ndarray with shape matching the current program

Returns

a numpy.ndarray with outputs from the device

property learn_enabled

Property that enables/disables learning on current program (if possible).

property memory

The device memory usage and top usage (in bytes)

property pipeline

Property to enable/disable input pipeline.

property program

Property that retrieves current program or programs a device using a serialized program bytes object.

read_clock_counter(self: akida.core.HardwareDevice) int

Reads the DMA clock counter value

property soc

The SocDriver interface used by the device, or None if the device is not a SoC

toggle_clock_counter(self: akida.core.HardwareDevice, arg0: bool) None

Turn the DMA clock counter on or off

unprogram(self: akida.core.HardwareDevice) None

SocDriver

class akida.core.SocDriver

Attributes:

power_measurement_enabled

Power measurement is off by default.

power_meter

Power meter associated to the SoC.

property power_measurement_enabled

Power measurement is off by default. Toggle it on to get power information in the statistics or when calling PowerMeter.events().
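
A hedged sketch of enabling power measurement; device is a placeholder HardwareDevice:

    soc = device.soc                   # None if the device is not a SoC
    if soc is not None:
        soc.power_measurement_enabled = True
        # ... run inference on the device ...
        latest = soc.power_meter.latest_measure()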

property power_meter

Power meter associated to the SoC.

Sequence

class akida.Sequence

Attributes:

backend

The backend type for this Sequence.

name

The name of the sequence

passes

Get the list of passes in this sequence.

program

Get the hardware program for this sequence.

property backend

The backend type for this Sequence.

property name

The name of the sequence

property passes

Get the list of passes in this sequence.

property program

Get the hardware program for this sequence.

Returns None if the Sequence is not compatible with the selected Device.

Returns

a bytes buffer or None

NP

class akida.NP.Mesh

Attributes:

dma_conf

DMA configuration endpoint

dma_event

DMA event endpoint

nps

Neural processors

property dma_conf

DMA configuration endpoint

property dma_event

DMA event endpoint

property nps

Neural processors

class akida.NP.Info

Attributes:

ident

NP identifier

types

NP supported types

property ident

NP identifier

property types

NP supported types

class akida.NP.Ident

Attributes:

col

NP column number

id

NP id

row

NP row number

property col

NP column number

property id

NP id

property row

NP row number

soc

class akida.core.soc.ClockMode

Clock mode configuration

Members:

Performance

Economy

LowPower

akida.core.soc.get_clock_mode() akida.core.soc.ClockMode

Returns the clock mode of the currently connected SoC

akida.core.soc.set_clock_mode(arg0: akida.core.soc.ClockMode) None

Sets the clock mode of the currently connected SoC

PowerMeter

class akida.PowerMeter

Gives access to power measurements.

When power measurements are enabled for a specific device, this object stores them as a list of PowerEvent objects. The events list cannot exceed a predefined size: when it is full, older events are replaced by newer events.

Methods:

events(self)

Retrieve all pending events

latest_measure(self)

Get the latest power measure

events(self: akida.core.PowerMeter) List[akida.core.PowerEvent]

Retrieve all pending events

latest_measure(self: akida.core.PowerMeter) object

Get the latest power measure

class akida.PowerEvent

A timestamped power measurement.

Each PowerEvent contains a voltage value in µV and a current value in mA. The power in mW can be obtained as: voltage * current / 10^6.
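
For instance, converting an event to milliwatts per the formula above:

    def power_mw(event):
        # voltage (µV) times current (mA) is in units of 1e-6 mW, hence the division
        return event.voltage * event.current / 1e6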

Attributes:

current

Current value in mA

ts

Timestamp of the event

voltage

Voltage value in µV

property current

Current value in mA

property ts

Timestamp of the event

property voltage

Voltage value in µV