    Model Compression

    • Knowledge distillation with NNI model compression
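    The linked article combines knowledge distillation with NNI's model compression tools. As a quick orientation only, here is a minimal, generic distillation-loss sketch in plain PyTorch; it is not NNI's API, and the temperature T and mixing weight alpha are illustrative placeholder values.

        import torch.nn.functional as F

        def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
            # Soft-target term: KL divergence between temperature-softened
            # distributions, scaled by T*T so gradient magnitudes stay
            # comparable across temperatures.
            soft = F.kl_div(
                F.log_softmax(student_logits / T, dim=1),
                F.softmax(teacher_logits / T, dim=1),
                reduction="batchmean",
            ) * (T * T)
            # Hard-label term: ordinary cross-entropy against ground-truth labels.
            hard = F.cross_entropy(student_logits, labels)
            return alpha * soft + (1.0 - alpha) * hard

    In a typical setup the teacher runs in eval mode under torch.no_grad() to produce teacher_logits, and the (pruned or quantized) student is optimized against this blended loss.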