Conference paper, 2021

Benchmarking Quantized Neural Networks on FPGAs with FINN

Abstract

The ever-growing cost of both training and inference for state-of-the-art neural networks has led the literature to look for ways to reduce resource usage with minimal impact on accuracy. Using lower numerical precision is one such way, typically costing only a negligible loss in accuracy. While training a neural network may require a powerful setup, deploying it must be possible on low-power and low-resource hardware architectures. For a given application, reconfigurable architectures have proven to be more powerful and flexible than GPUs. This article assesses the impact of mixed precision applied to neural networks deployed on FPGAs. While several frameworks provide tools to deploy neural networks at reduced precision, few of them assess the importance of quantization or the quality of the framework itself. Our benchmark is built on top of FINN and Brevitas, two frameworks from Xilinx labs, to assess the impact of quantizing weights to 2- to 8-bit precision under several parallelization configurations. The benchmark set up in this work is available in a public repository (https://github.com/QDucasse/nn_benchmark).
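To illustrate the kind of reduced-precision networks the benchmark targets, the sketch below defines a small quantized model with Brevitas layers. It is a minimal sketch, not a model from the paper: the layer sizes and the 4-bit default are hypothetical and only show how a weight/activation bit width in the 2-8 bit range is selected.

```python
# Minimal sketch (hypothetical architecture): a quantized MLP built with
# Brevitas layers, parameterized by a bit width in the 2-8 bit range.
import torch.nn as nn
from brevitas.nn import QuantIdentity, QuantLinear, QuantReLU

class QuantMLP(nn.Module):
    def __init__(self, bit_width=4):
        super().__init__()
        self.model = nn.Sequential(
            QuantIdentity(bit_width=bit_width),      # quantize the input activations
            QuantLinear(784, 64, bias=True,
                        weight_bit_width=bit_width), # quantized weights
            QuantReLU(bit_width=bit_width),          # quantized activations
            QuantLinear(64, 10, bias=True,
                        weight_bit_width=bit_width),
        )

    def forward(self, x):
        return self.model(x)
```

A network trained this way in Brevitas can then be exported and compiled for an FPGA with the FINN flow, which is the deployment path the benchmark evaluates.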
Main file
article_IEEE.pdf (811.41 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03085342, version 1 (21-12-2020)

Identifiers

  • HAL Id: hal-03085342, version 1

Cite

Quentin Ducasse, Pascal Cotret, Loïc Lagadec, Rob Stewart. Benchmarking Quantized Neural Networks on FPGAs with FINN. DATE Friday Workshop on System-level Design Methods for Deep Learning on Heterogeneous Architectures, Feb 2021, Grenoble, France. ⟨hal-03085342⟩