Floating-Point Roundoff Error Analysis in Artificial Neural Networks
Hussein Al-Rikabi, Balázs Renczes
- Abstract:
- In this paper, roundoff errors in Artificial Neural Networks (ANNs) are analyzed on a model for Solid-State Power Amplifiers (SSPAs). Calculations are carried out in 32-bit Floating-Point (FP32) arithmetic, and the results are verified against a 64-bit floating-point reference. Besides modeling the quantization noise at every operation, error propagation is also taken into account when calculating the cumulative Quantization Noise Power (QNP) after each stage and at the final output. By this means, the predictability of roundoff errors in the ANN is demonstrated. Consequently, it can be determined whether FP32 arithmetic is sufficient instead of applying the computationally more demanding 64-bit calculations. (A minimal sketch of such an FP32-versus-FP64 comparison is given after the event details below.)
- Download:
- IMEKO-TC4-2022-15.pdf
- DOI:
- 10.21014/tc4-2022.15
- Event details
- IMEKO TC:
- TC4
- Event name:
- TC4 Symposium 2022
- Title:
25th IMEKO TC4 Symposium and 23rd International Workshop on ADC and DAC Modelling and Testing (IWADC)
- Place:
- Brescia, Italy
- Time:
- 12 September 2022 - 14 September 2022
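As a rough illustration of the abstract's methodology, the following Python sketch runs the same forward pass of a small dense network in both FP32 and FP64 and estimates the output Quantization Noise Power (QNP) as the mean-square deviation of the FP32 result from the FP64 reference. The network architecture, layer sizes, tanh activation, and the QNP-as-mean-square-error estimate are illustrative assumptions, not the paper's SSPA model or its per-operation error-propagation analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights, biases, dtype):
    """Dense layers with tanh activations, computed entirely in `dtype`."""
    h = x.astype(dtype)
    for W, b in zip(weights, biases):
        h = np.tanh(h @ W.astype(dtype) + b.astype(dtype))
    return h

# Hypothetical layer sizes; the paper's SSPA model may differ.
shapes = [(8, 16), (16, 16), (16, 1)]
weights = [rng.standard_normal(s) for s in shapes]
biases = [rng.standard_normal(s[1]) for s in shapes]

x = rng.standard_normal((1000, 8))

y32 = forward(x, weights, biases, np.float32)  # FP32 computation
y64 = forward(x, weights, biases, np.float64)  # FP64 reference

# QNP estimate: mean-square deviation of the FP32 output
# from the FP64 reference output.
qnp = np.mean((y32.astype(np.float64) - y64) ** 2)
print(f"Estimated output QNP: {qnp:.3e}")
```

If the estimated QNP stays well below the accuracy requirement of the application, this kind of comparison supports the paper's conclusion that FP32 arithmetic can replace the computationally more demanding 64-bit calculations.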