
FINITE PRECISION ARITHMETIC IN SINGULAR VALUE DECOMPOSITION ARCHITECTURES

Posted on: 1988-01-23
Degree: Ph.D.
Type: Thesis
University: Cornell University
Candidate: DURYEA, ROBERT ARTHUR
Full Text: PDF
GTID: 2470390017957488
Subject: Engineering
Abstract/Summary:
The singular value decomposition (SVD) is an important matrix algorithm with many applications in signal processing. However, its use has been limited by its computational complexity. Several architectures have been proposed to compute the SVD using arrays of parallel processors. In this thesis we derive requirements for the precision of the arithmetic units (AUs) used in SVD arrays and compare the resource requirements of several architectures.

Our analysis shows that essentially the same number of bits is needed for either the Hestenes or the Jacobi SVD algorithm. If the matrix has been scaled to prevent overflows and properly rounded arithmetic is used, CORDIC and fixed-point AUs require 8 fewer bits than floating-point AUs. Our computations indicate that 32-bit floating-point AUs are useful only for small arrays of 8-bit data; for 100-by-100 arrays of 16-bit data, 40-bit floating-point AUs are needed. 32-bit fixed-point AUs can be used in SVD arrays for large 8-bit matrices or moderate-size 16-bit arrays.

We describe five SVD architectures, two "linear" structures and three "quadratic" arrays, and compare their resource requirements with floating-point and CORDIC AUs. Our comparison shows that the total resource requirements of the linear designs are lower than those of the quadratic arrays for matrices of all sizes. The speed of the linear structures is competitive with the quadratic arrays for matrices up to size 200-by-200, even though the linear designs require far fewer AUs. CORDIC AUs simplify the architectures, but they double the resource requirements and increase the computation times. We conclude that a linear array with floating-point or fixed-point AUs is the best design for implementation with current VLSI technology.

Our results are based on the assumption that we are operating on matrices of quantized data. Since the matrices have quantization errors, we show that their singular values will have quantization errors which can be as large as the data errors.
To compute the number of bits needed in SVD AUs, we require that the AUs have enough bits to keep the round-off errors of the SVD computation smaller than the quantization errors.
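This design criterion can be illustrated with a small numerical sketch (not taken from the thesis; the matrix size n, the word length b, and the rounding model are illustrative assumptions). Quantizing a matrix to b bits perturbs each singular value by at most the spectral norm of the quantization error (Weyl's inequality), and an SVD computed in sufficiently precise arithmetic keeps its round-off well below that quantization floor:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                              # assumed matrix size (thesis considers up to 200-by-200)
A = rng.standard_normal((n, n))
A /= np.abs(A).max()                # scale into [-1, 1] to prevent overflows

b = 16                              # assumed data word length in bits
step = 2.0 ** (1 - b)               # quantization step for b-bit data in [-1, 1]
E = np.round(A / step) * step - A   # quantization error actually committed
Aq = A + E                          # the quantized matrix the SVD array would see

# Singular values of the "true" and the quantized data.
sv_true = np.linalg.svd(A, compute_uv=False)
sv_quant = np.linalg.svd(Aq, compute_uv=False)

# Weyl's inequality: each singular value moves by at most ||E||_2.
weyl_bound = np.linalg.norm(E, 2)
quant_err = np.max(np.abs(sv_quant - sv_true))

# Round-off of a lower-precision SVD of the same quantized data:
# a float32 computation compared against a float64 reference.
sv32 = np.linalg.svd(Aq.astype(np.float32), compute_uv=False).astype(np.float64)
roundoff_err = np.max(np.abs(sv32 - sv_quant))
```

With these parameters the float32 round-off stays below the quantization-error bound, which is the thesis's criterion for an AU having enough bits; shrinking the AU precision or growing n eventually violates it.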
Keywords/Search Tags: SVD, AUs, Singular, Quantization errors, Architectures, Floating point, Arrays, Resource requirements