DS-CNN CIFAR10 inference

This tutorial uses the CIFAR-10 dataset (60k images in 10 object classes: 50k for training and 10k for test) for a classic object classification task with a network built around the Depthwise Separable Convolutional Neural Network (DS-CNN) introduced by Zhang et al. (2018).

The goal of this tutorial is to provide an example of a complex model that can be converted to an Akida model and run on the Akida NSoC with an accuracy similar to that of a standard Keras floating-point model.

1. Dataset preparation

from tensorflow.keras.datasets import cifar10

# Load CIFAR10 dataset
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

# Reshape x-data
x_train = x_train.reshape(50000, 32, 32, 3)
x_test = x_test.reshape(10000, 32, 32, 3)
input_shape = (32, 32, 3)

# Set aside raw test data for use with Akida Execution Engine later
raw_x_test = x_test.astype('uint8')

# Rescale x-data; the same (a, b) pair is reused later as the input
# scaling when converting the model to Akida
a = 255
b = 0

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train = (x_train - b) / a
x_test = (x_test - b) / a

Out:

Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz

170500096/170498071 [==============================] - 7s 0us/step

2. Create a Keras DS-CNN model

The DS-CNN architecture is available in the Akida models zoo along with pretrained weights.

Note

The pretrained weights were obtained by training the model with unconstrained float weights and activations for 1000 epochs.

from tensorflow.keras.utils import get_file
from tensorflow.keras.models import load_model

# Retrieve the float model with pretrained weights and load it
model_file = get_file(
    "ds_cnn_cifar10.h5",
    "http://data.brainchip.com/models/ds_cnn/ds_cnn_cifar10.h5",
    cache_subdir='models/ds_cnn_cifar10')
model_keras = load_model(model_file)
model_keras.summary()

Out:

Downloading data from http://data.brainchip.com/models/ds_cnn/ds_cnn_cifar10.h5

10838016/10836232 [==============================] - 3s 0us/step
Model: "ds_cnn_cifar10"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 32, 32, 3)]       0
_________________________________________________________________
conv_0 (Conv2D)              (None, 32, 32, 128)       3456
_________________________________________________________________
conv_0_BN (BatchNormalizatio (None, 32, 32, 128)       512
_________________________________________________________________
conv_0_relu (ReLU)           (None, 32, 32, 128)       0
_________________________________________________________________
separable_1 (SeparableConv2D (None, 32, 32, 128)       17536
_________________________________________________________________
separable_1_BN (BatchNormali (None, 32, 32, 128)       512
_________________________________________________________________
separable_1_relu (ReLU)      (None, 32, 32, 128)       0
_________________________________________________________________
separable_2 (SeparableConv2D (None, 32, 32, 256)       33920
_________________________________________________________________
separable_2_BN (BatchNormali (None, 32, 32, 256)       1024
_________________________________________________________________
separable_2_relu (ReLU)      (None, 32, 32, 256)       0
_________________________________________________________________
separable_3 (SeparableConv2D (None, 32, 32, 256)       67840
_________________________________________________________________
separable_3_maxpool (MaxPool (None, 16, 16, 256)       0
_________________________________________________________________
separable_3_BN (BatchNormali (None, 16, 16, 256)       1024
_________________________________________________________________
separable_3_relu (ReLU)      (None, 16, 16, 256)       0
_________________________________________________________________
separable_4 (SeparableConv2D (None, 16, 16, 512)       133376
_________________________________________________________________
separable_4_BN (BatchNormali (None, 16, 16, 512)       2048
_________________________________________________________________
separable_4_relu (ReLU)      (None, 16, 16, 512)       0
_________________________________________________________________
separable_5 (SeparableConv2D (None, 16, 16, 512)       266752
_________________________________________________________________
separable_5_maxpool (MaxPool (None, 8, 8, 512)         0
_________________________________________________________________
separable_5_BN (BatchNormali (None, 8, 8, 512)         2048
_________________________________________________________________
separable_5_relu (ReLU)      (None, 8, 8, 512)         0
_________________________________________________________________
separable_6 (SeparableConv2D (None, 8, 8, 512)         266752
_________________________________________________________________
separable_6_BN (BatchNormali (None, 8, 8, 512)         2048
_________________________________________________________________
separable_6_relu (ReLU)      (None, 8, 8, 512)         0
_________________________________________________________________
separable_7 (SeparableConv2D (None, 8, 8, 512)         266752
_________________________________________________________________
separable_7_maxpool (MaxPool (None, 4, 4, 512)         0
_________________________________________________________________
separable_7_BN (BatchNormali (None, 4, 4, 512)         2048
_________________________________________________________________
separable_7_relu (ReLU)      (None, 4, 4, 512)         0
_________________________________________________________________
separable_8 (SeparableConv2D (None, 4, 4, 1024)        528896
_________________________________________________________________
separable_8_BN (BatchNormali (None, 4, 4, 1024)        4096
_________________________________________________________________
separable_8_relu (ReLU)      (None, 4, 4, 1024)        0
_________________________________________________________________
separable_9 (SeparableConv2D (None, 4, 4, 1024)        1057792
_________________________________________________________________
separable_9_BN (BatchNormali (None, 4, 4, 1024)        4096
_________________________________________________________________
separable_9_relu (ReLU)      (None, 4, 4, 1024)        0
_________________________________________________________________
separable_10 (SeparableConv2 (None, 4, 4, 10)          19456
_________________________________________________________________
separable_10_global_avg (Glo (None, 10)                0
=================================================================
Total params: 2,681,984
Trainable params: 2,672,256
Non-trainable params: 9,728
_________________________________________________________________

Keras model accuracy is checked against the first n images of the test set.

The table below summarizes the expected results:

#Images   Accuracy
100       96.00 %
1000      94.30 %
10000     93.60 %

Note

Depending on your hardware setup, the processing time may vary.

import numpy as np

from sklearn.metrics import accuracy_score
from timeit import default_timer as timer


# Check Model performance
def check_model_performances(model, x_test, num_images=1000):
    start = timer()
    potentials_keras = model.predict(x_test[:num_images])
    preds_keras = np.squeeze(np.argmax(potentials_keras, 1))

    accuracy = accuracy_score(y_test[:num_images], preds_keras)
    print("Accuracy: " + "{0:.2f}".format(100 * accuracy) + "%")
    end = timer()
    print(f'Keras inference on {num_images} images took {end-start:.2f} s.\n')


check_model_performances(model_keras, x_test)

Out:

Accuracy: 94.30%
Keras inference on 1000 images took 0.72 s.
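
The other rows of the expected results table can be reproduced by passing a different num_images value to the same helper, at the cost of a longer run time. For example:

# Reproduce the remaining rows of the expected results table
check_model_performances(model_keras, x_test, num_images=100)
check_model_performances(model_keras, x_test, num_images=10000)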

3. Quantized model

Quantizing a model is done with the CNN2SNN quantize function. After the call, all layers have 4-bit weights and 4-bit activations.

This model therefore satisfies the Akida NSoC requirements, but suffers a drop in accuracy due to quantization, as shown in the table below:

#Images   Float accuracy   Quantized accuracy
100       96.00 %          96.00 %
1000      94.30 %          92.60 %
10000     93.66 %          92.58 %

from cnn2snn import quantize

# Quantize the model to 4-bit weights and activations
model_keras_quantized = quantize(model_keras, 4, 4)

# Check Model performance
check_model_performances(model_keras_quantized, x_test)

Out:

Accuracy: 92.60%
Keras inference on 1000 images took 0.73 s.

4. Pretrained quantized model

The Akida models zoo also provides a helper that loads a pretrained quantized model, obtained by running the tune action of the akida_models CLI on the quantized model for 100 epochs.

Tuning the model, that is, training it again with a lowered learning rate, allows it to recover performance close to the initial floating-point accuracy (a sketch of such a tuning step is given after the table below).

#Images   Float accuracy   Quantized accuracy   After tuning
100       96.00 %          96.00 %              97.00 %
1000      94.30 %          92.60 %              94.20 %
10000     93.66 %          92.58 %              93.08 %
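
For illustration, a minimal Keras sketch of such a tuning step is given below. The optimizer, learning rate, batch size and epoch count are assumptions made for the example; they are not the exact settings used to produce the pretrained weights, which were obtained with the akida_models CLI as described above.

from tensorflow.keras.losses import SparseCategoricalCrossentropy
from tensorflow.keras.optimizers import Adam

# Illustrative sketch only: the hyper-parameters below are assumptions, not
# the settings behind the pretrained quantized weights.
model_keras_quantized.compile(
    optimizer=Adam(learning_rate=1e-4),  # lowered learning rate
    loss=SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])

# Resume training ("tuning") the already-quantized model on CIFAR-10
model_keras_quantized.fit(x_train,
                          y_train,
                          batch_size=64,
                          epochs=10,
                          validation_data=(x_test, y_test))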

from akida_models import ds_cnn_cifar10_pretrained

# Use a quantized model with pretrained quantized weights
model_keras_quantized_pretrained = ds_cnn_cifar10_pretrained()

# Check Model performance
check_model_performances(model_keras_quantized_pretrained, x_test)

Out:

Downloading data from http://data.brainchip.com/models/ds_cnn/ds_cnn_cifar10_iq4_wq4_aq4.h5

10747904/10741016 [==============================] - 3s 0us/step
Accuracy: 94.20%
Keras inference on 1000 images took 0.74 s.

5. Conversion to Akida

5.1 Convert to Akida model

When converting to an Akida model, we simply pass the Keras model and the input scaling used during training to the CNN2SNN convert function. Here, (a, b) = (255, 0) are the same rescaling parameters defined in section 1, so the Akida model can be fed the raw uint8 images directly.

from cnn2snn import convert

model_akida = convert(model_keras_quantized_pretrained, input_scaling=(a, b))

5.2 Check hardware compliance

The Model.summary() method provides a detailed description of the model's layers.

It also indicates hardware incompatibilities, if there are any. Hardware compatibility can also be checked manually using model_hardware_incompatibilities (a sketch of this manual check is given after the summary output below).

model_akida.summary()

Out:

                                       Model Summary
___________________________________________________________________________________________
Layer (type)                           Output shape   Kernel shape
===========================================================================================
conv_0 (InputConvolutional)            [32, 32, 128]  (3, 3, 3, 128)
___________________________________________________________________________________________
separable_1 (SeparableConvolutional)   [32, 32, 128]  (3, 3, 128, 1), (1, 1, 128, 128)
___________________________________________________________________________________________
separable_2 (SeparableConvolutional)   [32, 32, 256]  (3, 3, 128, 1), (1, 1, 128, 256)
___________________________________________________________________________________________
separable_3 (SeparableConvolutional)   [16, 16, 256]  (3, 3, 256, 1), (1, 1, 256, 256)
___________________________________________________________________________________________
separable_4 (SeparableConvolutional)   [16, 16, 512]  (3, 3, 256, 1), (1, 1, 256, 512)
___________________________________________________________________________________________
separable_5 (SeparableConvolutional)   [8, 8, 512]    (3, 3, 512, 1), (1, 1, 512, 512)
___________________________________________________________________________________________
separable_6 (SeparableConvolutional)   [8, 8, 512]    (3, 3, 512, 1), (1, 1, 512, 512)
___________________________________________________________________________________________
separable_7 (SeparableConvolutional)   [4, 4, 512]    (3, 3, 512, 1), (1, 1, 512, 512)
___________________________________________________________________________________________
separable_8 (SeparableConvolutional)   [4, 4, 1024]   (3, 3, 512, 1), (1, 1, 512, 1024)
___________________________________________________________________________________________
separable_9 (SeparableConvolutional)   [4, 4, 1024]   (3, 3, 1024, 1), (1, 1, 1024, 1024)
___________________________________________________________________________________________
separable_10 (SeparableConvolutional)  [1, 1, 10]     (3, 3, 1024, 1), (1, 1, 1024, 10)
___________________________________________________________________________________________
Input shape: 32, 32, 3
Backend type: Software - 1.8.9
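
For reference, a minimal sketch of the manual compatibility check mentioned above is shown below. The import path and return value are assumptions and may differ between releases, so check the API reference of your installed akida/cnn2snn version.

# Sketch only: the import location of model_hardware_incompatibilities is an
# assumption and may vary between akida/cnn2snn releases.
from akida.compatibility import model_hardware_incompatibilities

# Assumed to return a list of messages describing layers that cannot be
# mapped to the Akida NSoC; an empty list means the model is compatible.
incompatibilities = model_hardware_incompatibilities(model_akida)
if incompatibilities:
    for message in incompatibilities:
        print(message)
else:
    print("The model is hardware compatible.")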

5.3 Check performance

We check the Akida model accuracy on the first n images of the test set.

The table below summarizes the expected results:

#Images   Keras accuracy   Akida accuracy
100       96.00 %          97.00 %
1000      94.30 %          94.00 %
10000     93.66 %          93.04 %

Due to the conversion process, predictions may differ slightly between the original Keras model and Akida on some specific images.

This explains why, when testing on a limited number of images, the accuracy numbers for Keras and Akida can differ noticeably. On the full test set, however, the two models' accuracies are very close.

num_images = 1000

# Check Model performance
start = timer()
results = model_akida.predict(raw_x_test[:num_images])
accuracy = accuracy_score(y_test[:num_images], results)

print("Accuracy: " + "{0:.2f}".format(100 * accuracy) + "%")
end = timer()
print(f'Akida inference on {num_images} images took {end-start:.2f} s.\n')

# For non-regression purposes
if num_images == 1000:
    assert accuracy == 0.94

Out:

Accuracy: 94.00%
Akida inference on 1000 images took 20.46 s.

Activation sparsity has a great impact on Akida inference time. One can look at the average input and output sparsity of each layer using Model.get_statistics(). For convenience, it is called here on a subset of the dataset.

# Print model statistics: the statistics objects returned by get_statistics()
# are updated during the predict() call that follows
print("Model statistics")
stats = model_akida.get_statistics()
model_akida.predict(raw_x_test[:20])
for _, stat in stats.items():
    print(stat)

Out:

Model statistics
Layer (type)                  output sparsity
conv_0 (InputConvolutional)   0.59
Layer (type)                  input sparsity      output sparsity     ops
separable_1 (SeparableConvolu 0.59                0.52                62696438
Layer (type)                  input sparsity      output sparsity     ops
separable_2 (SeparableConvolu 0.52                0.55                146144939
Layer (type)                  input sparsity      output sparsity     ops
separable_3 (SeparableConvolu 0.55                0.61                273823580
Layer (type)                  input sparsity      output sparsity     ops
separable_4 (SeparableConvolu 0.61                0.65                119144917
Layer (type)                  input sparsity      output sparsity     ops
separable_5 (SeparableConvolu 0.65                0.69                212892409
Layer (type)                  input sparsity      output sparsity     ops
separable_6 (SeparableConvolu 0.69                0.68                46301354
Layer (type)                  input sparsity      output sparsity     ops
separable_7 (SeparableConvolu 0.68                0.74                49046160
Layer (type)                  input sparsity      output sparsity     ops
separable_8 (SeparableConvolu 0.74                0.84                19555155
Layer (type)                  input sparsity      output sparsity     ops
separable_9 (SeparableConvolu 0.84                0.83                24714698
Layer (type)                  input sparsity      output sparsity     ops
separable_10 (SeparableConvol 0.83                0.00                269800

5.4 Show predictions for a random image

import matplotlib.pyplot as plt

label_names = [
    'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse',
    'ship', 'truck'
]

# prepare plot
barWidth = 0.75
pause_time = 1

fig = plt.figure(num='CIFAR10 Classification by Akida Execution Engine',
                 figsize=(8, 4))
ax0 = plt.subplot(1, 3, 1)
imgobj = ax0.imshow(np.zeros((32, 32, 3), dtype=np.uint8))
ax0.set_axis_off()
# Results subplots
ax1 = plt.subplot(1, 2, 2)
ax1.xaxis.set_visible(False)
ax0.text(0, 34, 'Actual class:')
actual_class = ax0.text(16, 34, 'None')
ax0.text(0, 37, 'Predicted class:')
predicted_class = ax0.text(20, 37, 'None')

# Take a random test image
i = np.random.randint(y_test.shape[0])

true_idx = int(y_test[i])
pot = model_akida.evaluate(np.expand_dims(raw_x_test[i], axis=0)).squeeze()

rpot = np.arange(len(pot))
ax1.barh(rpot, pot, height=barWidth)
ax1.set_yticks(rpot - 0.07 * barWidth)
ax1.set_yticklabels(label_names)
predicted_idx = pot.argmax()
imgobj.set_data(raw_x_test[i])
if predicted_idx == true_idx:
    ax1.get_children()[predicted_idx].set_color('g')
else:
    ax1.get_children()[predicted_idx].set_color('r')
actual_class.set_text(label_names[true_idx])
predicted_class.set_text(label_names[predicted_idx])
ax1.set_title('Akida\'s predictions')
plt.show()
[Figure: random CIFAR10 test image alongside a bar chart of Akida's per-class predictions]

Total running time of the script: ( 0 minutes 41.190 seconds)
