Upgrading to Akida 2.0

This tutorial targets Akida 1.0 users who are looking for advice on how to migrate their Akida 1.0 models to Akida 2.0. It also lists the major differences in model architecture compatibility between 1.0 and 2.0.

1. Workflow differences

[Figure: Akida 1.0 and 2.0 workflows — 1.0 vs. 2.0 flow]

As shown in the figure above, the main difference between the 1.0 and 2.0 workflows lies in the quantization step, which was based on CNN2SNN and is now handled by QuantizeML.

Provided your model architecture is 2.0 compatible (the next section lists the differences), upgrading to 2.0 comes down to replacing the cnn2snn.quantize call with a quantizeml.models.quantize call. The code snippets below show the two calls.

import keras

# Build a simple model that is cross-compatible
input = keras.layers.Input((32, 32, 3))
x = keras.layers.Conv2D(kernel_size=3, filters=32, strides=2, padding='same')(input)
x = keras.layers.BatchNormalization()(x)
x = keras.layers.ReLU(max_value=6.0)(x)
x = keras.layers.Flatten()(x)
x = keras.layers.Dense(units=10)(x)

model = keras.Model(input, x)
model.summary()
Model: "model_6"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 input_6 (InputLayer)        [(None, 32, 32, 3)]       0

 conv2d_2 (Conv2D)           (None, 16, 16, 32)        896

 batch_normalization_2 (Batc  (None, 16, 16, 32)       128
 hNormalization)

 re_lu_2 (ReLU)              (None, 16, 16, 32)        0

 flatten_1 (Flatten)         (None, 8192)              0

 dense_1 (Dense)             (None, 10)                81930

=================================================================
Total params: 82,954
Trainable params: 82,890
Non-trainable params: 64
_________________________________________________________________
import cnn2snn

# Akida 1.0 flow
quantized_model_1_0 = cnn2snn.quantize(model, input_weight_quantization=8, weight_quantization=4,
                                       activ_quantization=4)
akida_model_1_0 = cnn2snn.convert(quantized_model_1_0)
akida_model_1_0.summary()
                Model Summary
______________________________________________
Input shape  Output shape  Sequences  Layers
==============================================
[32, 32, 3]  [1, 1, 10]    1          2
______________________________________________

_______________________________________________________
Layer (type)           Output shape  Kernel shape

============ SW/conv2d_2-dense_1 (Software) ===========

conv2d_2 (InputConv.)  [16, 16, 32]  (3, 3, 3, 32)
_______________________________________________________
dense_1 (Fully.)       [1, 1, 10]    (1, 1, 8192, 10)
_______________________________________________________
import quantizeml

# Akida 2.0 flow
qparams = quantizeml.models.QuantizationParams(input_weight_bits=8, weight_bits=4,
                                               activation_bits=4)
quantized_model_2_0 = quantizeml.models.quantize(model, qparams=qparams)
akida_model_2_0 = cnn2snn.convert(quantized_model_2_0)
akida_model_2_0.summary()
/usr/local/lib/python3.8/dist-packages/quantizeml/models/quantize.py:400: UserWarning: Quantizing per-axis with random calibration samples is not accurate. Set QuantizationParams.per_tensor_activations=True when calibrating with random samples.

1024/1024 [==============================] - 1s 735us/step
                Model Summary
______________________________________________
Input shape  Output shape  Sequences  Layers
==============================================
[32, 32, 3]  [1, 1, 10]    1          3
______________________________________________

__________________________________________________________
Layer (type)                 Output shape  Kernel shape

========== SW/conv2d_2-dequantizer_7 (Software) ==========

conv2d_2 (InputConv2D)       [16, 16, 32]  (3, 3, 3, 32)
__________________________________________________________
dense_1 (Dense2D)            [1, 1, 10]    (8192, 10)
__________________________________________________________
dequantizer_7 (Dequantizer)  [1, 1, 10]    N/A
__________________________________________________________

Note

Here we use 8/4/4 quantization to match the CNN2SNN flow above, but most users will typically rely on the default 8-bit quantization that comes with QuantizeML, as shown below.
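For reference, the default call looks like the following minimal sketch, which reuses the model defined above; with no qparams argument, quantize applies its default 8-bit scheme:

# Quantize with QuantizeML defaults (8-bit weights and activations)
quantized_model_default = quantizeml.models.quantize(model)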

The QuantizeML quantization API is close to the legacy CNN2SNN quantization API; further details on how to use it are given in the global workflow tutorial and the advanced QuantizeML tutorial.

2. Model architecture differences

2.1. Separable convolution

In Akida 1.0, a Keras SeparableConv2D was quantized as a QuantizedSeparableConv2D and converted to an Akida SeparableConvolutional layer. These three layers each perform a “fused” operation, where the depthwise and pointwise operations are grouped together in a single layer.

In Akida 2.0, support for the fused separable layer has been dropped in favor of the more commonly used unfused operation, where the depthwise and pointwise operations are computed in independent layers. The akida_models package offers a separable_conv_block with a fused=False parameter that creates the DepthwiseConv2D and the pointwise Conv2D layers under the hood. This block is then quantized as a QuantizedDepthwiseConv2D and a pointwise QuantizedConv2D before conversion to Akida DepthwiseConv2D and pointwise Conv2D layers respectively.

Note that while the resulting model topography is slightly different, the fused and unfused mathematical operations are strictly equivalent.
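This equivalence can be checked directly in Keras. The sketch below builds a fused SeparableConv2D and its unfused DepthwiseConv2D + pointwise Conv2D counterpart, copies the kernels across, and verifies that both produce the same output (the layer indices and variable names are specific to this illustrative snippet):

import numpy as np
import keras

# Fused separable convolution and its unfused counterpart
inputs = keras.layers.Input((8, 8, 4))
fused = keras.layers.SeparableConv2D(filters=6, kernel_size=3,
                                     padding='same', use_bias=False)
dw = keras.layers.DepthwiseConv2D(kernel_size=3, padding='same',
                                  use_bias=False)
pw = keras.layers.Conv2D(filters=6, kernel_size=1, use_bias=False)

fused_model = keras.Model(inputs, fused(inputs))
unfused_model = keras.Model(inputs, pw(dw(inputs)))

# SeparableConv2D stores its depthwise and pointwise kernels separately
dw_kernel, pw_kernel = fused_model.layers[1].get_weights()
unfused_model.layers[1].set_weights([dw_kernel])  # DepthwiseConv2D
unfused_model.layers[2].set_weights([pw_kernel])  # pointwise Conv2D

# Both models compute the same result
x = np.random.rand(1, 8, 8, 4).astype(np.float32)
np.testing.assert_allclose(fused_model.predict(x, verbose=0),
                           unfused_model.predict(x, verbose=0), atol=1e-6)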

To ease migration of existing models from 1.0 to 2.0, akida_models offers an unfuse_sepconv2d API that takes a model with fused layers and transforms it into an unfused equivalent. For convenience, an unfuse CLI action is also provided:

akida_models unfuse -m model.h5
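The same transformation can be applied from Python. This is a minimal sketch, assuming the helper is importable from the package root (adjust the import to your akida_models version otherwise) and that model is a Keras model containing fused SeparableConv2D layers:

from akida_models import unfuse_sepconv2d

# Replace each fused SeparableConv2D with DepthwiseConv2D + pointwise Conv2D
unfused_model = unfuse_sepconv2d(model)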

2.2. Global average pooling operation

The supported position of the GlobalAveragePooling2D operation has changed in Akida 2.0: it must now come after the ReLU activation (when there is one). In Akida 1.0, the layers were organized as follows:

  • … > Neural layer > GlobalAveragePooling > (BatchNormalization) > ReLU > Neural layer > …

In Akida 2.0, the supported sequence of layers is:

  • … > Neural layer > (BatchNormalization) > (ReLU) > GlobalAveragePooling > Neural layer > …

This can also be configured using the post_relu_gap parameter of akida_models layer_blocks.
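As an illustration, a minimal 2.0-compatible head built with plain Keras layers would look like the sketch below, with the pooling placed after the activation (names here are illustrative):

import keras

# Akida 2.0 ordering: GlobalAveragePooling2D comes after the ReLU
inputs = keras.layers.Input((32, 32, 3))
x = keras.layers.Conv2D(filters=32, kernel_size=3, padding='same')(inputs)
x = keras.layers.BatchNormalization()(x)
x = keras.layers.ReLU(max_value=6.0)(x)
x = keras.layers.GlobalAveragePooling2D()(x)  # pool after the activation
x = keras.layers.Dense(units=10)(x)
model_2_0_head = keras.Model(inputs, x)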

To migrate an existing model from 1.0 to 2.0, it is possible to load 1.0 weights into a 2.0-oriented architecture using the Keras save and load APIs, because the position of the global average pooling has no effect on the model weights. However, the 1.0 and 2.0 sequences are not mathematically equivalent, so the model might need to be fine-tuned or even retrained.
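A minimal sketch of such a transfer, assuming a tf.keras (Keras 2.x) environment like the one used in this tutorial, and where model_1_0 and model_2_0 are hypothetical 1.0 and 2.0 variants with matching layer names:

# Save the 1.0 weights and load them into the 2.0-oriented architecture.
# by_name=True matches layers by name, which tolerates the repositioned
# (weight-less) GlobalAveragePooling2D layer.
model_1_0.save_weights('model_1_0_weights.h5')
model_2_0.load_weights('model_1_0_weights.h5', by_name=True)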

3. Using AkidaVersion

It is still possible to build, quantize and convert models towards a 1.0 target using the AkidaVersion API.

# Reusing the previously defined 2.0 model but converting to a 1.0 target this time
with cnn2snn.set_akida_version(cnn2snn.AkidaVersion.v1):
    akida_model = cnn2snn.convert(quantized_model_2_0)
akida_model.summary()
                Model Summary
______________________________________________
Input shape  Output shape  Sequences  Layers
==============================================
[32, 32, 3]  [1, 1, 10]    1          2
______________________________________________

_______________________________________________________
Layer (type)           Output shape  Kernel shape

============ SW/conv2d_2-dense_1 (Software) ===========

conv2d_2 (InputConv.)  [16, 16, 32]  (3, 3, 3, 32)
_______________________________________________________
dense_1 (Fully.)       [1, 1, 10]    (1, 1, 8192, 10)
_______________________________________________________

Note the different Akida layer types, as detailed in the Akida user guide.
