Akida examples

To learn how to use the Akida Execution Engine and the CNN2SNN toolkit, and to check the Akida processor's performance on the MNIST, CIFAR10, ImageNet and Google Speech Commands (KWS) datasets, please refer to the sections below.
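Most examples follow the same overall flow: quantize a pre-trained Keras model with the CNN2SNN toolkit, convert it to an Akida model, and run inference with the Akida Execution Engine. The sketch below is only indicative: it assumes the cnn2snn quantize and convert functions and a hypothetical pre-trained Keras file (ds_cnn_cifar10.h5); refer to the tutorials listed below for the exact, tested workflows.

    import numpy as np
    from tensorflow.keras.models import load_model
    from cnn2snn import quantize, convert

    # Load a pre-trained Keras model (hypothetical file name, for illustration only).
    keras_model = load_model("ds_cnn_cifar10.h5")

    # Quantize weights and activations to 4 bits; quantization-aware retraining
    # is usually needed afterwards to recover accuracy (see the tutorials).
    quantized_model = quantize(keras_model,
                               weight_quantization=4,
                               activ_quantization=4)

    # Convert the quantized Keras model to an Akida model. Depending on how the
    # original model was trained, an input scaling may have to be passed here.
    akida_model = convert(quantized_model)

    # Run inference with the Akida Execution Engine on 8-bit image data.
    images = np.random.randint(0, 256, size=(10, 32, 32, 3), dtype=np.uint8)
    outputs = akida_model.predict(images)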

General examples

  • GXNOR/MNIST inference
  • DS-CNN CIFAR10 inference
  • MobileNet/ImageNet inference
  • DS-CNN/KWS inference
  • Regression tutorial
  • Transfer learning with MobileNet for cats vs. dogs
  • YOLO/PASCAL-VOC detection tutorial

CNN2SNN tutorials

  • CNN conversion flow tutorial
  • Advanced CNN2SNN tutorial

Edge examples

  • Akida vision edge learning
  • Akida edge learning for keyword spotting
  • Tips to set Akida learning parameters

Download all examples in Python source code: examples_python.zip

Download all examples in Jupyter notebooks: examples_jupyter.zip

Gallery generated by Sphinx-Gallery


© Copyright 2020, BrainChip Holdings Ltd. All Rights Reserved.