Abstract
A computer-implemented algorithm for dividing numbers subtracts the divisor from the dividend to generate a first intermediate result, which is then shifted by N bits to obtain a remainder value. A portion of the remainder and a portion of the divisor are used to select one or more multiples from a look-up table, and each multiple is multiplied by the divisor to generate a corresponding second intermediate result. The second intermediate results are subtracted from the remainder to generate corresponding third intermediate results. The largest multiple whose third intermediate result has the smallest non-negative value is the quotient digit, and that third intermediate result becomes the remainder for the next iteration.
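The iterative digit-selection scheme described above can be illustrated with the following sketch. Note that this is a hypothetical software model, not the claimed hardware implementation: it performs fractional radix-2^N division (dividend < divisor), and for clarity it tries every possible digit rather than reading a small set of candidate multiples from a look-up table indexed by leading bits of the remainder and divisor, as the abstract describes.

```python
def divide(dividend: int, divisor: int, n_bits: int = 4, iterations: int = 8):
    """Hypothetical model of iterative radix-2**n_bits division.

    Assumes 0 <= dividend < divisor. Returns (quotient, remainder), where
    the quotient is a fixed-point fraction with n_bits * iterations bits,
    i.e. dividend * 2**(n_bits * iterations) == quotient * divisor + remainder.
    """
    remainder = dividend
    quotient = 0
    for _ in range(iterations):
        # Shift the remainder by N bits for the next digit position.
        remainder <<= n_bits
        # In the described scheme, only a few candidate multiples would be
        # selected from a look-up table using portions of the remainder and
        # divisor; this model simply tries all radix digits. For each
        # candidate m, remainder - m * divisor is the "third intermediate
        # result"; the largest m leaving the smallest non-negative result
        # is the quotient digit.
        best = max(m for m in range(1 << n_bits)
                   if remainder - m * divisor >= 0)
        quotient = (quotient << n_bits) | best
        # The winning third intermediate result is the next remainder.
        remainder -= best * divisor
    return quotient, remainder
```

For example, `divide(1, 3, n_bits=4, iterations=2)` yields a quotient of 85 with remainder 1, i.e. 85/256 ≈ 0.332, the 8-bit fixed-point approximation of 1/3.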