MetaTF 2.2.2
  • Overview
  • Installation
    • Requirements
    • Quick installation
    • Running examples
  • User guide
    • Getting started
      • For beginners
      • For users familiar with deep learning
    • Akida user guide
      • Introduction
        • Akida layers
        • Input Format
        • A versatile machine learning framework
      • The Sequential model
        • Specifying the model
        • Accessing layer parameters and weights
        • Inference
        • Saving and loading
        • Input layer types
        • Data-Processing layer types
      • Model Hardware Mapping
        • Devices
        • Model mapping
        • Advanced Mapping Details and Hardware Device Usage
        • Performance measurement
      • Using Akida Edge learning
        • Learning constraints
        • Compiling a layer
    • CNN2SNN toolkit
      • Overview
        • Conversion workflow
        • Typical training scenario
        • Design compatibility constraints
        • Quantization compatibility constraints
        • Command-line interface
      • Layer Considerations
        • Supported layer types
        • CNN2SNN Quantization-aware layers
        • Training-Only Layers
        • First Layers
        • Final Layers
      • Tips and Tricks
    • Akida models zoo
      • Overview
      • Command-line interface for model creation
      • Command-line interface for model training
        • UTK Face training
        • KWS training
        • YOLO training
        • AkidaNet training
      • Command-line interface for model evaluation
      • Layer Blocks
        • conv_block
        • dense_block
        • separable_conv_block
    • Hardware constraints
      • InputConvolutional
      • Convolutional
      • SeparableConvolutional
      • FullyConnected
    • Akida version compatibility
      • Upgrading models with legacy quantizers
  • API reference
    • Akida runtime
      • Model
      • Layer
        • Layer
        • Mapping
      • InputData
      • InputConvolutional
      • FullyConnected
      • Convolutional
      • SeparableConvolutional
      • Layer parameters
        • LayerType
        • Padding
        • PoolType
        • LearningType
      • Sequence
        • Sequence
        • BackendType
        • Pass
      • Device
        • Device
        • HwVersion
      • HWDevice
        • HWDevice
        • SocDriver
        • ClockMode
      • PowerMeter
      • NP
      • Tools
        • Sparsity
        • Compatibility
    • CNN2SNN
      • Tool functions
        • quantize
        • quantize_layer
        • convert
        • check_model_compatibility
        • load_quantized_model
        • Transforms
        • Calibration
      • Quantizers
        • WeightQuantizer
        • LinearWeightQuantizer
        • StdWeightQuantizer
        • StdPerAxisQuantizer
        • MaxQuantizer
        • MaxPerAxisQuantizer
      • Quantized layers
        • QuantizedConv2D
        • QuantizedDense
        • QuantizedSeparableConv2D
        • QuantizedActivation
        • ActivationDiscreteRelu
        • QuantizedReLU
    • Akida models
      • Layer blocks
        • conv_block
        • separable_conv_block
        • dense_block
      • Helpers
        • BatchNormalization gamma constraint
      • Knowledge distillation
      • Pruning
      • Training
      • Model zoo
        • AkidaNet
        • MobileNet
        • DS-CNN
        • VGG
        • YOLO
        • ConvTiny
        • PointNet++
        • GXNOR
  • Examples
    • General examples
      • GXNOR/MNIST inference
        • 1. Dataset preparation
        • 2. Create a Keras GXNOR model
        • 3. Conversion to Akida
      • AkidaNet/ImageNet inference
        • 1. Dataset preparation
        • 2. Create a Keras AkidaNet model
        • 3. Quantized model
        • 4. Pretrained quantized model
        • 5. Conversion to Akida
        • 6. Hardware mapping and performance
      • DS-CNN/KWS inference
        • 1. Load the preprocessed dataset
        • 2. Load a pre-trained native Keras model
        • 3. Load a pre-trained quantized Keras model satisfying Akida NSoC requirements
        • 4. Conversion to Akida
        • 5. Confusion matrix
      • Regression tutorial
        • 1. Load the dataset
        • 2. Load a pre-trained native Keras model
        • 3. Load a pre-trained quantized Keras model satisfying Akida NSoC requirements
        • 4. Conversion to Akida
        • 5. Estimate age on a single image
      • Transfer learning with AkidaNet for PlantVillage
        • Transfer learning process
        • 1. Dataset preparation
        • 2. Get a trained AkidaNet base model
        • 3. Add a float classification head to the model
        • 4. Freeze the base model
        • 5. Train for a few epochs
        • 6. Quantize the classification head
        • 7. Compute accuracy
      • YOLO/PASCAL-VOC detection tutorial
        • 1. Introduction
        • 2. Preprocessing tools
        • 3. Model architecture
        • 4. Training
        • 5. Performance
        • 6. Conversion to Akida
    • CNN2SNN tutorials
      • CNN conversion flow tutorial
        • 1. Load and reshape MNIST dataset
        • 2. Model definition
        • 3. Model training
        • 4. Model quantization
        • 5. Model fine tuning (quantization-aware training)
        • 6. Model conversion
      • Advanced CNN2SNN tutorial
        • 1. Design a CNN2SNN quantized model
        • 2. Weight Quantizer Details
        • 3. Understanding quantized activation
        • 4. How to deal with overly high scale factors
    • Edge examples
      • Akida vision edge learning
        • 1. Dataset preparation
        • 2. Prepare Akida model for learning
        • 3. Edge learning with Akida
      • Akida edge learning for keyword spotting
        • 1. Edge learning process
        • 2. Dataset preparation
        • 3. Prepare Akida model for learning
        • 4. Learn with Akida using the training set
        • 5. Edge learning
      • Tips to set Akida learning parameters
        • 1. Akida learning parameters
        • 2. Create Akida model
        • 3. Estimate the required number of weights of the trainable layer
        • 4. Estimate the number of neurons per class
  • Model zoo performance
    • Image domain
      • Classification
      • Object detection
      • Regression
      • Face recognition
    • Audio domain
      • Keyword spotting
    • Time domain
      • Fault detection
      • Classification
    • Point cloud
      • Classification
  • Changelog
  • Support
  • License
Overview: module code

All modules for which code is available (a brief usage sketch follows the list):

  • akida.compatibility.conversion
  • akida.convolutional
  • akida.core
    • akida.core.soc
  • akida.fully_connected
  • akida.input_convolutional
  • akida.input_data
  • akida.separable_convolutional
  • akida.sparsity
  • akida.virtual_devices
  • akida_models.cwru.model_convtiny
  • akida_models.detection.generate_anchors
  • akida_models.detection.map_evaluation
  • akida_models.detection.model_yolo
  • akida_models.detection.processing
  • akida_models.distiller
  • akida_models.filter_pruning
  • akida_models.gamma_constraint
  • akida_models.imagenet.model_akidanet
  • akida_models.imagenet.model_akidanet_edge
  • akida_models.imagenet.model_mobilenet
  • akida_models.imagenet.model_mobilenet_edge
  • akida_models.imagenet.model_vgg
  • akida_models.imagenet.preprocessing
  • akida_models.kws.model_ds_cnn
  • akida_models.kws.preprocessing
  • akida_models.layer_blocks
  • akida_models.mnist.model_gxnor
  • akida_models.modelnet40.model_pointnet_plus
  • akida_models.modelnet40.preprocessing
  • akida_models.training
  • akida_models.utk_face.model_vgg
  • akida_models.utk_face.preprocessing
  • cnn2snn.calibration.adaround
  • cnn2snn.calibration.bias_correction
  • cnn2snn.calibration.calibration
  • cnn2snn.converter
  • cnn2snn.quantization
  • cnn2snn.quantization_layers
  • cnn2snn.quantization_ops
  • cnn2snn.transforms.batch_normalization
  • cnn2snn.transforms.equalization
  • cnn2snn.transforms.reshape
  • cnn2snn.transforms.sequential
  • cnn2snn.utils
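
The modules above map onto a single workflow: build or load a Keras model (akida_models), quantize and convert it (cnn2snn), then run it on the Akida runtime (akida). The sketch below illustrates that flow. It is a minimal, illustrative example assuming MetaTF 2.2.2: it uses akidanet_imagenet from the model zoo, quantize and convert from the CNN2SNN toolkit, and Model.predict from the Akida runtime, all documented in the API reference; the quantization bitwidths and the random input batch are illustrative choices, not recommendations.

    import numpy as np
    from akida_models import akidanet_imagenet
    from cnn2snn import quantize, convert

    # Instantiate a float AkidaNet model from the zoo
    # (224x224 RGB inputs, 1000 ImageNet classes by default).
    keras_model = akidanet_imagenet(input_shape=(224, 224, 3))

    # Quantize to 4-bit weights and activations, with 8-bit weights
    # for the input layer (see the CNN2SNN toolkit user guide for
    # the quantization compatibility constraints).
    quantized_model = quantize(keras_model,
                               input_weight_quantization=8,
                               weight_quantization=4,
                               activ_quantization=4)

    # Convert the quantized Keras model to an akida.Model.
    akida_model = convert(quantized_model)
    akida_model.summary()

    # Run inference on 8-bit inputs (a random one-image batch here;
    # real use would feed preprocessed images and a trained model).
    inputs = np.random.randint(0, 256, size=(1, 224, 224, 3), dtype=np.uint8)
    potentials = akida_model.predict(inputs)

Each import corresponds to a module listed above: akida_models.imagenet.model_akidanet for the zoo model, cnn2snn.quantization and cnn2snn.converter for quantization and conversion, and akida.core for the runtime.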

© Copyright 2022, BrainChip Holdings Ltd. All Rights Reserved.