CNN2SNN Toolkit API

quantize

cnn2snn.quantize(model, weight_quantization=0, activ_quantization=0, input_weight_quantization=None)

Converts a standard sequential Keras model to a CNN2SNN Keras quantized model, compatible for Akida conversion.

This function returns a Keras model where the standard neural layers (Conv2D, SeparableConv2D, Dense) and the ReLU activations are replaced with CNN2SNN quantized layers (QuantizedConv2D, QuantizedSeparableConv2D, QuantizedDense, ActivationDiscreteRelu).

Several transformations are applied to the model:

  • the order of MaxPool and BatchNormalization layers is inverted so that BatchNormalization always happens first,

  • the batch normalization layers are folded into the preceding neural layers.

This new model can either be converted directly to Akida, or first retrained for a few epochs to recover any accuracy loss.

Parameters
  • model (tf.keras.Model) – a standard Keras model

  • weight_quantization (int) –

    sets all weights in the model to have a particular quantization bitwidth except for the weights in the first layer.

    • ‘0’ implements floating point 32-bit weights.

    • ‘2’ through ‘8’ implement n-bit weights, where n is from 2 to 8 bits.

  • activ_quantization (int) –

    sets all activations in the model to have a particular activation quantization bitwidth.

    • ‘0’ implements floating point 32-bit activations.

    • ‘1’ through ‘8’ implement n-bit activations, where n is from 1 to 8 bits.

  • input_weight_quantization (int) –

    sets weight quantization in the first layer. Defaults to weight_quantization value.

    • ‘None’ uses the same bitwidth as the other weights.

    • ‘0’ implements floating point 32-bit weights.

    • ‘2’ through ‘8’ implement n-bit weights, where n is from 2 to 8 bits.

Returns

a quantized Keras model

Return type

tf.keras.Model

quantize_layer

cnn2snn.quantize_layer(model, target_layer, bitwidth)

Converts a specific layer to a quantized version with the given bitwidth.

convert

cnn2snn.convert(model, file_path=None, input_scaling=(1.0, 0), input_is_sparse=False)

Simple function to convert a Keras model to an Akida one.

These steps are performed:

  1. Merge each depthwise conv layer followed by a conv layer into a single separable_conv layer.

  2. Generate an Akida model based on that model.

  3. Convert weights from the Keras model to Akida.

Note

The relationship between Keras and Akida inputs is: input_akida = alpha * input_keras + beta

Parameters
  • model (tf.keras.Model) – a tf.keras model

  • file_path (str, optional) – destination for the akida model. (Default value = None)

  • input_scaling (2 elements tuple, optional) – value of the input scaling. (Default value = (1.0,0))

  • input_is_sparse (bool, optional) – if True, input will be an InputData layer, otherwise it will be InputConvolutional. (Default value = False)

Returns

an Akida model.

A detailed description of the input_scaling parameter is given in the user guide.

check_model_compatibility

cnn2snn.check_model_compatibility(model_keras, input_is_sparse)

Checks if a Keras quantized model is compatible for cnn2snn conversion.

This function doesn’t convert the Keras quantized model to an Akida model but only checks if the model is compatible. The checks are performed at two different levels:

  1. Some checks are done when the Keras model is scanned, during the generation of the model map.

  2. Other checks are then done based on the model map.

Note that this function doesn’t check if the quantization bitwidths (weights or activations) are supported by the Akida Execution Engine or by the Akida NSoC.

1. How to build a compatible Keras quantized model?

The following lines give details and constraints on how to build a Keras model compatible for the conversion to an Akida model.

2. General information about layers

An Akida layer must be seen as a block of Keras layers starting with a processing layer (QuantizedConv2D, QuantizedSeparableConv2D, QuantizedDense). All blocks of Keras layers except the last block must have exactly one quantized activation layer (ActivationDiscreteRelu). Other optional layers can be present in a block such as a pooling layer or a batch normalization layer. Here are all the supported Keras layers for an Akida-compatible model:

  • Processing layers:

    • cnn2snn.QuantizedConv2D

    • cnn2snn.QuantizedSeparableConv2D

    • cnn2snn.QuantizedDense

  • Activation layers:

    • cnn2snn.ActivationDiscreteRelu

    • any increasing activation function (e.g. softmax or sigmoid) set as the last layer, only for the last block of layers. This layer must derive from tf.keras.layers.Activation, and it will be removed during the conversion to an Akida-compatible model.

  • Pooling layers:

    • MaxPool2D

    • GlobalAvgPool2D

  • BatchNormalization

  • Dropout

  • Flatten

  • Input

  • Reshape

Example of a block of Keras layers:

   -------------------
   | QuantizedConv2D |
   -------------------
           ||
           \/
 ----------------------
 | BatchNormalization |
 ----------------------
           ||
           \/
      -------------
      | MaxPool2D |
      -------------
           ||
           \/
--------------------------
| ActivationDiscreteRelu |
--------------------------

3. Constraints about inputs

An Akida model can accept two types of inputs: sparse events or 8-bit images. Whatever the input type, the Keras inputs must respect the following relation:

input_akida = scale * input_keras + shift

where the Akida inputs must be positive integers, the input scale must be a float value and the input shift must be an integer. In other words, scale * input_keras must be integers.

Depending on the input type:

  • if the inputs are events (sparse), the first layer of the Keras model can be any quantized processing layer. The input shift must be zero.

  • if the inputs are images, the first layer must be a QuantizedConv2D layer.
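The input relation can be sketched in plain Python for the common case of 8-bit images normalized to [0, 1] in Keras (the 1/255 preprocessing below is an assumed example, not mandated by the toolkit):

```python
# Sketch of input_akida = scale * input_keras + shift for 8-bit images.
# Assumed preprocessing on the Keras side: input_keras = raw_uint8 / 255.
scale, shift = 255.0, 0

def to_akida(input_keras):
    """Recover the positive integer Akida input from a Keras input value."""
    v = scale * input_keras + shift
    # The relation only holds if the result is a positive integer.
    assert v >= 0 and abs(v - round(v)) < 1e-9, "must be a positive integer"
    return int(round(v))
```

For sparse event inputs, the same relation applies with `shift` forced to zero.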

4. Constraints about layers’ parameters

To be Akida-compatible, the Keras layers must observe the following rules:

  • all layers with the ‘data_format’ parameter must be ‘channels_last’

  • all processing quantized layers and ActivationDiscreteRelu must have a valid quantization bitwidth

  • a QuantizedDense layer must have an input shape of (N,) or (1, 1, N)

  • a BatchNormalization layer must have ‘axis’ set to -1 (default)

  • a BatchNormalization layer cannot have negative gammas

  • Reshape layers can only be used to transform a tensor of shape (N,) to a tensor of shape (1, 1, N), and vice-versa

  • only one pooling layer can be used in each block

  • a MaxPool2D layer must have the same ‘padding’ as the corresponding processing quantized layer

5. Constraints about the order of layers

To be Akida-compatible, the order of Keras layers must observe the following rules:

  • a block of Keras layers must start with a processing quantized layer

  • where present, a BatchNormalization/GlobalAvgPool2D layer must be placed before the activation

  • a Flatten layer can only be used before a QuantizedDense layer

  • an Activation layer other than ActivationDiscreteRelu can only be used in the last layer

Parameters
  • model (tf.keras model) – the model to parse.

  • input_is_sparse (bool) – if True, input will be an InputData layer, otherwise it will be InputConvolutional.

WeightQuantizer

class cnn2snn.WeightQuantizer(*args, **kwargs)

A uniform quantizer.

Quantizes the specified weights into 2^bitwidth-1 values centered on zero. E.g. with bitwidth = 4, 15 quantization levels: from -7 * qstep to 7 * qstep with qstep being the quantization step. The quantization step is defined by:

qstep = threshold * std(W) / max_value

with max_value being 2^(bitwidth-1) - 1. E.g with bitwidth = 4, max_value = 7.

All values below -threshold * std(W) or above threshold * std(W) are automatically assigned to the min (resp. max) quantized value.
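A minimal pure-Python sketch of this quantization rule (a simplification, not the actual cnn2snn implementation, which operates on TensorFlow tensors):

```python
import statistics

def uniform_quantize(weights, threshold=3, bitwidth=4):
    """Sketch of the uniform rule above: qstep = threshold * std(W) / max_value."""
    max_value = 2 ** (bitwidth - 1) - 1            # e.g. 7 for bitwidth = 4
    qstep = threshold * statistics.pstdev(weights) / max_value
    # Round each weight to the nearest level, clipping to [-max_value, max_value].
    return [qstep * max(-max_value, min(max_value, round(w / qstep)))
            for w in weights]

levels = uniform_quantize([0.1, -0.25, 0.5, 2.0])
```

Each returned value is an integer multiple of `qstep`, and outliers beyond the threshold are saturated to the extreme levels.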

__init__(threshold=3, bitwidth=4)

Creates a Weights quantizer for the specified bitwidth.

Parameters
  • threshold (integer) – the standard deviation multiplier used to exclude outliers.

  • bitwidth (integer) – the quantizer bitwidth defining the number of quantized values.

Methods:

get_config()

Returns the config of the layer.

quantize(w)

Quantizes the specified weights Tensor.

scale_factor(w)

Evaluates the scale factor for the specified weights Tensor.

get_config()

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Returns

Python dictionary.

quantize(w)

Quantizes the specified weights Tensor.

Parameters

w (tensorflow.Tensor) – the weights Tensor to quantize.

Returns

a Tensor of quantized weights.

Return type

tensorflow.Tensor

scale_factor(w)

Evaluates the scale factor for the specified weights Tensor.

Parameters

w (tensorflow.Tensor) – the weights Tensor to quantize.

Returns

a Tensor containing a single scalar value.

Return type

tensorflow.Tensor

TrainableWeightQuantizer

class cnn2snn.TrainableWeightQuantizer(*args, **kwargs)

A trainable weight quantizer.

Quantizes the specified weights into 2^bitwidth-1 values centered on zero. E.g. with bitwidth = 4, 15 quantization levels: from -7 * qstep to 7 * qstep with qstep being the quantization step. The quantization step is defined by:

qstep = threshold * std(W) / max_value

with:

  • max_value being 2^(bitwidth-1) - 1. E.g with bitwidth = 4, max_value = 7.

  • threshold a trainable parameter whose initial value can be specified.

All values below -threshold * std(W) or above threshold * std(W) are automatically assigned to the min (resp. max) quantized value.

This is the trainable version of the WeightQuantizer class.

__init__(threshold=3, bitwidth=4, **kwargs)

Creates a trainable weights quantizer for the specified bitwidth.

Parameters
  • threshold (integer) – the initial value of the standard deviation multiplier used to exclude outliers.

  • bitwidth (integer) – the quantizer bitwidth defining the number of quantized values.

Methods:

get_config()

Returns the config of the layer.

quantize(w)

Quantizes the specified weights Tensor.

scale_factor(w)

Evaluates the scale factor for the specified weights Tensor.

get_config()

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Returns

Python dictionary.

quantize(w)

Quantizes the specified weights Tensor.

Parameters

w (tensorflow.Tensor) – the weights Tensor to quantize.

Returns

a Tensor of quantized weights.

Return type

tensorflow.Tensor

scale_factor(w)

Evaluates the scale factor for the specified weights Tensor.

Parameters

w (tensorflow.Tensor) – the weights Tensor to quantize.

Returns

a Tensor containing a single scalar value.

Return type

tensorflow.Tensor

MaxQuantizer

class cnn2snn.MaxQuantizer(*args, **kwargs)

A quantizer that relies on maximum range.

Quantizes the specified weights into 2^bitwidth-1 values centered on zero. E.g. with bitwidth = 4, 15 quantization levels: from -7 * qstep to 7 * qstep with qstep being the quantization step. The quantization step is defined by:

qstep = max_range / max_value

with:

  • max_range = max(abs(W))

  • max_value = 2^(bitwidth-1) - 1. E.g with bitwidth = 4, max_value = 7.
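A pure-Python sketch of this max-range rule (a simplification of the real TensorFlow implementation):

```python
def max_quantize(weights, bitwidth=4):
    """Sketch of the MaxQuantizer rule above: qstep = max_range / max_value."""
    max_value = 2 ** (bitwidth - 1) - 1    # 7 for bitwidth = 4
    max_range = max(abs(w) for w in weights)
    qstep = max_range / max_value
    # No clipping is needed: max(abs(W)) maps exactly to max_value * qstep.
    return [qstep * round(w / qstep) for w in weights]

q = max_quantize([0.7, -0.35, 0.1])
```

Unlike the standard deviation based quantizer, the largest-magnitude weight is always representable exactly.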

__init__(bitwidth=4)

Creates a Max quantizer for the specified bitwidth.

Parameters

bitwidth (integer) – the quantizer bitwidth defining the number of quantized values.

Methods:

get_config()

Returns the config of the layer.

quantize(w)

Quantizes the specified weights Tensor.

scale_factor(w)

Evaluates the scale factor for the specified weights Tensor.

get_config()

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Returns

Python dictionary.

quantize(w)

Quantizes the specified weights Tensor.

Parameters

w (tensorflow.Tensor) – the weights Tensor to quantize.

Returns

a Tensor of quantized weights.

Return type

tensorflow.Tensor

scale_factor(w)

Evaluates the scale factor for the specified weights Tensor.

Parameters

w (tensorflow.Tensor) – the weights Tensor to quantize.

Returns

a Tensor containing a single scalar value.

Return type

tensorflow.Tensor

MaxPerAxisQuantizer

class cnn2snn.MaxPerAxisQuantizer(*args, **kwargs)

A quantizer that relies on maximum range per axis.

Quantizes the specified weights into 2^bitwidth-1 values centered on zero. E.g. with bitwidth = 4, 15 quantization levels: from -7 * qstep to 7 * qstep with qstep being the quantization step. The quantization step is defined by:

qstep = max_range / max_value

with:

  • max_range = max(abs(W))

  • max_value = 2^(bitwidth-1) - 1. E.g with bitwidth = 4, max_value = 7.

This is an evolution of the MaxQuantizer that defines the max_range per axis.

The last dimension is used as axis, meaning that the scaling factor is a vector with as many values as “filters”, or “neurons”.

Note: for a DepthwiseConv2D layer that has a single filter, this quantizer is strictly equivalent to the MaxQuantizer.
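A pure-Python sketch of the per-axis variant (channels are the outer list here purely for readability; the real quantizer works along the last tensor dimension):

```python
def max_per_axis_quantize(w, bitwidth=4):
    """Sketch of the per-axis rule above: one max_range per output channel.

    `w` is a list of per-channel weight lists in this simplified sketch.
    """
    max_value = 2 ** (bitwidth - 1) - 1
    out = []
    for channel in w:
        max_range = max(abs(x) for x in channel)
        qstep = max_range / max_value          # one quantization step per channel
        out.append([qstep * round(x / qstep) for x in channel])
    return out

# Two channels with very different ranges each get their own step.
q = max_per_axis_quantize([[0.7, 0.1], [0.07, 0.01]])
```

With a single global range, the small-magnitude channel would collapse to very few levels; the per-axis step preserves its resolution.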

__init__(bitwidth=4)

Creates a Max quantizer for the specified bitwidth.

Parameters

bitwidth (integer) – the quantizer bitwidth defining the number of quantized values.

WeightFloat

class cnn2snn.WeightFloat(*args, **kwargs)

A pass-through quantizer that performs no quantization; it can be used for floating-point training.

__init__()

Creates a float (non-quantizing) weights quantizer. It takes no parameters.

Methods:

get_config()

Returns the config of the layer.

quantize(w)

Quantizes the specified weights Tensor.

scale_factor(w)

Evaluates the scale factor for the specified weights Tensor.

get_config()

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Returns

Python dictionary.

quantize(w)

Quantizes the specified weights Tensor.

Parameters

w (tensorflow.Tensor) – the weights Tensor to quantize.

Returns

a Tensor of quantized weights.

Return type

tensorflow.Tensor

scale_factor(w)

Evaluates the scale factor for the specified weights Tensor.

Parameters

w (tensorflow.Tensor) – the weights Tensor to quantize.

Returns

a Tensor containing a single scalar value.

Return type

tensorflow.Tensor

QuantizedConv2D

class cnn2snn.QuantizedConv2D(*args, **kwargs)

A quantization-aware Keras convolutional layer.

Inherits from Keras Conv2D layer, applying a quantization on weights during the forward pass.

__init__(filters, kernel_size, strides=(1, 1), padding='valid', use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, kernel_constraint=None, bias_constraint=None, quantizer=<cnn2snn.quantization_ops.WeightFloat object>, **kwargs)

Creates a quantization-aware convolutional layer.

Parameters
  • filters (integer) – the number of filters.

  • kernel_size (tuple of integer) – the kernel spatial dimensions.

  • strides (integer, or tuple of integers, optional) – strides of the convolution along height and width.

  • padding (str, optional) – one of ‘valid’ or ‘same’.

  • use_bias (boolean, optional) – whether the layer uses a bias vector.

  • kernel_initializer (str, or a tf.keras.initializer, optional) – initializer for the weights matrix.

  • bias_initializer (str, or a tf.keras.initializer, optional) – initializer for the bias vector.

  • kernel_regularizer (str, or a tf.keras.regularizer, optional) – regularization applied to the weights.

  • bias_regularizer (str, or a tf.keras.regularizer, optional) – regularization applied to the bias.

  • kernel_constraint (str, or a tf.keras.constraint, optional) – constraint applied to the weights.

  • bias_constraint (str, or a tf.keras.constraint, optional) – constraint applied to the bias.

  • quantizer (cnn2snn.WeightQuantizer, optional) – the quantizer to apply during the forward pass.

Methods:

call(inputs)

Evaluates input Tensor.

get_config()

Returns the config of the layer.

call(inputs)

Evaluates input Tensor.

This applies the quantization on weights, then evaluates the input Tensor and produces the output Tensor.

Parameters

inputs (tensorflow.Tensor) – input Tensor.

Returns

output Tensor.

Return type

tensorflow.Tensor

get_config()

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Returns

Python dictionary.

QuantizedDepthwiseConv2D

class cnn2snn.QuantizedDepthwiseConv2D(*args, **kwargs)

A quantization-aware Keras depthwise convolutional layer.

Inherits from Keras DepthwiseConv2D layer, applying a quantization on weights during the forward pass.

__init__(kernel_size, strides=(1, 1), padding='valid', use_bias=True, depthwise_initializer='glorot_uniform', bias_initializer='zeros', depthwise_regularizer=None, bias_regularizer=None, depthwise_constraint=None, bias_constraint=None, quantizer=<cnn2snn.quantization_ops.WeightFloat object>, **kwargs)

Creates a quantization-aware depthwise convolutional layer.

Parameters
  • kernel_size (a tuple of integer) – the kernel spatial dimensions.

  • strides (integer, or tuple of integers, optional) – strides of the convolution along height and width.

  • padding (str, optional) – One of ‘valid’ or ‘same’.

  • use_bias (boolean, optional) – whether the layer uses a bias vector.

  • depthwise_initializer (str, or a tf.keras.initializer, optional) – initializer for the weights matrix.

  • bias_initializer (str, or a tf.keras.initializer, optional) – initializer for the bias vector.

  • depthwise_regularizer (str, or a tf.keras.initializer, optional) – regularization applied to the weights.

  • bias_regularizer (str, or a tf.keras.initializer, optional) – regularization applied to the bias.

  • depthwise_constraint (str, or a tf.keras.initializer, optional) – constraint applied to the weights.

  • bias_constraint (str, or a tf.keras.initializer, optional) – constraint applied to the bias.

  • quantizer (cnn2snn.WeightQuantizer, optional) – the quantizer to apply during the forward pass.

Methods:

call(inputs)

Evaluates input Tensor.

get_config()

Returns the config of the layer.

call(inputs)

Evaluates input Tensor.

This applies the quantization on weights, then evaluates the input Tensor and produces the output Tensor.

Parameters

inputs (tensorflow.Tensor) – input Tensor.

Returns

output Tensor.

Return type

tensorflow.Tensor

get_config()

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Returns

Python dictionary.

QuantizedDense

class cnn2snn.QuantizedDense(*args, **kwargs)

A quantization-aware Keras dense layer.

Inherits from Keras Dense layer, applying a quantization on weights during the forward pass.

__init__(units, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, kernel_constraint=None, bias_constraint=None, quantizer=<cnn2snn.quantization_ops.WeightFloat object>, **kwargs)

Creates a quantization-aware dense layer.

Parameters
  • units (integer) – the number of neurons.

  • use_bias (boolean, optional) – whether the layer uses a bias vector.

  • kernel_initializer (str, or a tf.keras.initializer, optional) – initializer for the weights matrix.

  • bias_initializer (str, or a tf.keras.initializer, optional) – initializer for the bias vector.

  • kernel_regularizer (str, or a tf.keras.regularizer, optional) – regularization applied to the weights.

  • bias_regularizer (str, or a tf.keras.regularizer, optional) – regularization applied to the bias.

  • kernel_constraint (str, or a tf.keras.constraint, optional) – constraint applied to the weights.

  • bias_constraint (str, or a tf.keras.constraint, optional) – constraint applied to the bias.

  • quantizer (cnn2snn.WeightQuantizer, optional) – the quantizer to apply during the forward pass.

Methods:

call(inputs)

Evaluates input Tensor.

get_config()

Returns the config of the layer.

call(inputs)

Evaluates input Tensor.

This applies the quantization on weights, then evaluates the input Tensor and produces the output Tensor.

Parameters

inputs (tensorflow.Tensor) – input Tensor.

Returns

output Tensor.

Return type

tensorflow.Tensor

get_config()

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Returns

Python dictionary.

QuantizedSeparableConv2D

class cnn2snn.QuantizedSeparableConv2D(*args, **kwargs)

A quantization-aware Keras separable convolutional layer.

Inherits from Keras SeparableConv2D layer, applying a quantization on weights during the forward pass.

__init__(filters, kernel_size, strides=(1, 1), padding='valid', use_bias=True, depthwise_initializer='glorot_uniform', pointwise_initializer='glorot_uniform', bias_initializer='zeros', depthwise_regularizer=None, pointwise_regularizer=None, bias_regularizer=None, depthwise_constraint=None, pointwise_constraint=None, bias_constraint=None, quantizer=<cnn2snn.quantization_ops.WeightFloat object>, quantizer_dw=None, **kwargs)

Creates a quantization-aware separable convolutional layer.

Parameters
  • filters (integer) – the number of filters.

  • kernel_size (tuple of integer) – the kernel spatial dimensions.

  • strides (integer, or tuple of integers, optional) – strides of the convolution along height and width.

  • padding (str, optional) – One of ‘valid’ or ‘same’.

  • use_bias (boolean, optional) – Whether the layer uses a bias vector.

  • depthwise_initializer (str, or a tf.keras.initializer, optional) – initializer for the depthwise kernel.

  • pointwise_initializer (str, or a tf.keras.initializer, optional) – initializer for the pointwise kernel.

  • bias_initializer (str, or a tf.keras.initializer, optional) – initializer for the bias vector.

  • depthwise_regularizer (str, or a tf.keras.regularizer, optional) – regularization applied to the depthwise kernel.

  • pointwise_regularizer (str, or a tf.keras.regularizer, optional) – regularization applied to the pointwise kernel.

  • bias_regularizer (str, or a tf.keras.regularizer, optional) – regularization applied to the bias.

  • depthwise_constraint (str, or a tf.keras.constraint, optional) – constraint applied to the depthwise kernel.

  • pointwise_constraint (str, or a tf.keras.constraint, optional) – constraint applied to the pointwise kernel.

  • bias_constraint (str, or a tf.keras.constraint, optional) – constraint applied to the bias.

  • quantizer (cnn2snn.WeightQuantizer, optional) – the quantizer to apply during the forward pass.

  • quantizer_dw (cnn2snn.WeightQuantizer, optional) – a distinct quantizer for the depthwise kernel, as shown in the signature above.

Methods:

call(inputs)

Evaluates input Tensor.

get_config()

Returns the config of the layer.

call(inputs)

Evaluates input Tensor.

This applies the quantization on weights, then evaluates the input Tensor and produces the output Tensor.

Parameters

inputs (tensorflow.Tensor) – input Tensor.

Returns

a Tensor.

Return type

tensorflow.Tensor

get_config()

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Returns

Python dictionary.

ActivationDiscreteRelu

class cnn2snn.ActivationDiscreteRelu(*args, **kwargs)

A discrete ReLU Keras Activation.

Activations are quantized to 2^bitwidth values in the range [0, 6].

__init__(bitwidth=1, **kwargs)

Creates a discrete ReLU for the specified bitwidth.

Parameters

bitwidth (int) – the activation bitwidth.

Methods:

get_config()

Returns the config of the layer.

quantized_activation(x)

Evaluates the activations for the specified input Tensor.

get_config()

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Returns

Python dictionary.

quantized_activation(x)

Evaluates the activations for the specified input Tensor.

Parameters

x (tensorflow.Tensor) – the input values.

QuantizedReLU

class cnn2snn.QuantizedReLU(*args, **kwargs)

A Trainable Quantized ReLU Keras Activation.

Activations will be clipped to a trainable range and quantized to a number of values defined by the bitwidth: N = 2^bitwidth - 1 values plus zero.

More specifically, this class uses two trainable variables:

  • t0_k represents the lower bound of the activation range,

  • gamma_k represents the step between two quantized activation values.

The activation range is therefore [t0_k, tN_k], with:

tN_k = t0_k + N * gamma_k, with N = 2^bitwidth - 1

In other words:

  • inputs below t0_k result in no activation,

  • inputs between t0_k and tN_k are ceiled to the nearest t0_k + n * gamma_k and result in an activation of n * gamma_k,

  • inputs above tN_k result in an activation of N * gamma_k.
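The three cases above can be sketched in pure Python (a deliberate simplification: t0 and gamma are plain floats here, whereas in the layer they are trainable variables):

```python
import math

def quantized_relu(x, t0=0.0, gamma=0.5, bitwidth=2):
    """Sketch of the activation rule above (t0 = t0_k, gamma = gamma_k)."""
    n_levels = 2 ** bitwidth - 1          # N quantized values plus zero
    if x <= t0:
        return 0.0                        # below the range: no activation
    n = math.ceil((x - t0) / gamma)       # "ceiled" to the next step index
    return min(n, n_levels) * gamma       # above the range saturates at N
```

With bitwidth = 2 and gamma = 0.5, the possible outputs are 0, 0.5, 1.0 and 1.5.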

__init__(bitwidth=1, **kwargs)

Creates a QuantizedReLU for the specified bitwidth.

Parameters

bitwidth (int) – the activation bitwidth.

Methods:

get_config()

Returns the config of the layer.

quantized_activation(x)

Evaluates the activations for the specified input Tensor.

get_config()

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Returns

Python dictionary.

quantized_activation(x)

Evaluates the activations for the specified input Tensor.

Parameters

x (tensorflow.Tensor) – the input values.

Classes:

ActivationDiscreteRelu(*args, **kwargs)

A discrete ReLU Keras Activation.

MaxPerAxisQuantizer(*args, **kwargs)

A quantizer that relies on maximum range per axis.

MaxQuantizer(*args, **kwargs)

A quantizer that relies on maximum range.

QuantizedConv2D(*args, **kwargs)

A quantization-aware Keras convolutional layer.

QuantizedDense(*args, **kwargs)

A quantization-aware Keras dense layer.

QuantizedDepthwiseConv2D(*args, **kwargs)

A quantization-aware Keras depthwise convolutional layer.

QuantizedReLU(*args, **kwargs)

A Trainable Quantized ReLU Keras Activation.

QuantizedSeparableConv2D(*args, **kwargs)

A quantization-aware Keras separable convolutional layer.

TrainableWeightQuantizer(*args, **kwargs)

A trainable weight quantizer.

WeightFloat(*args, **kwargs)

A pass-through quantizer that performs no quantization; it can be used for floating-point training.

WeightQuantizer(*args, **kwargs)

A uniform quantizer.

Functions:

check_model_compatibility(model_keras, …)

Checks if a Keras quantized model is compatible for cnn2snn conversion.

convert(model[, file_path, input_scaling, …])

Simple function to convert a Keras model to an Akida one.

create_trainable_quantizer_model(quantized_model)

Converts a legacy quantized model to a model using trainable quantizers.

load_partial_weights(dest_model, src_model)

Loads a subset of weights from one Keras model to another

load_quantized_model(filepath[, compile])

Loads a quantized model saved in TF or HDF5 format.

merge_separable_conv(model)

Returns a new model where all depthwise conv2d layers followed by conv2d layers are merged into single separable conv layers.

quantize(model[, weight_quantization, …])

Converts a standard sequential Keras model to a CNN2SNN Keras quantized model, compatible for Akida conversion.

quantize_layer(model, target_layer, bitwidth)

Converts a specific layer to a quantized version with the given bitwidth.