Posted by Xeda112358
Single-Precision Natural Logarithm Using Borchardt-Gauss-Carlson
« on: September 27, 2017, 01:18:08 pm »
Hi all, I lost my previous references and example programs, and it took me all morning to track down this algorithm, digest it, and spew out my own version.  I looked on all of my calculators and Omni first, so I'm posting it here for when it happens again :P

Anyways, this is one of my favorite algorithms for evaluating logarithms:

Code: [Select]
;Natural logarithm on [0.5, 2]
;Single precision
a = 0.5*(1+x)       ;a0 = arithmetic mean of 1 and x
g = 0.5*(a+x/a)     ;Newton step toward sqrt(x); half-precision divide
g = 0.5*(g+x/g)     ;second Newton step; full-precision divide. g0 ~ sqrt(x)
b = a               ;save a0 for the extrapolation
a = 0.5*(a+g)       ;a1 = (a0+g0)/2
c = 0.5*(a+g)       ;initial guess (a1+g0)/2 for sqrt(a1*g0)
g = 0.5*(c+a*g/c)   ;Newton step; full-precision divide. g1 ~ sqrt(a1*g0)
c = a               ;save a1
b = a - b/4         ;first extrapolation term: a1 - a0/4
a = 0.5*(a+g)       ;a2 = (a1+g1)/2
c = a - c/4 - b/16  ;second extrapolation term: (a2 - a1/4) - (a1 - a0/4)/16
return (x-1)*(1-1/4)*(1-1/16)/c
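
If you want to poke at this on a PC first, here is a rough C translation for sanity-checking (a sketch: plain double stands in for the target single-precision format, and bgc_ln is just a name I made up):

Code: [Select]
#include <math.h>
#include <stdio.h>

/* Borchardt-Gauss-Carlson natural log on [0.5, 2].
   Sketch only: double stands in for the target float format. */
static double bgc_ln(double x)
{
    double a = 0.5 * (1.0 + x);     /* a0 = (1+x)/2 */
    double g = 0.5 * (a + x / a);   /* Newton step toward sqrt(x) */
    g = 0.5 * (g + x / g);          /* g0 ~= sqrt(x) */

    double b = a;                   /* save a0 */
    a = 0.5 * (a + g);              /* a1 = (a0+g0)/2 */
    double c = 0.5 * (a + g);       /* initial guess for sqrt(a1*g0) */
    g = 0.5 * (c + a * g / c);      /* g1 ~= sqrt(a1*g0) */

    c = a;                          /* save a1 */
    b = a - b / 4.0;                /* a1 - a0/4 */
    a = 0.5 * (a + g);              /* a2 = (a1+g1)/2 */
    c = a - c / 4.0 - b / 16.0;     /* (a2 - a1/4) - (a1 - a0/4)/16 */

    return (x - 1.0) * (1.0 - 1.0 / 4.0) * (1.0 - 1.0 / 16.0) / c;
}

int main(void)
{
    /* Scan [0.5, 2] against libm's log() to eyeball the error. */
    double worst = 0.0;
    for (double x = 0.5; x <= 2.0; x += 1.0 / 1024.0) {
        double err = fabs(bgc_ln(x) - log(x));
        if (err > worst) worst = err;
    }
    printf("worst absolute error on [0.5,2]: %g\n", worst);
    return 0;
}
(Build with something like cc -O2 test.c -lm.)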
  • It achieves single-precision accuracy (at least 24.4996 bits) for inputs on [.5,2].
  • During range reduction, x is typically reduced to some value on [c,2c]; a sketch of one way to do this follows the cost list below.
    • The best precision is found when c=.5sqrt(2) (the range is [.5sqrt(2),sqrt(2)]), achieving at least 31.5 bits.
    • I prefer c=2/3, since 2/3 and 4/3 are equidistant from 1, which makes it easier for me to analyze time complexity. This still offers at least 29.97 bits, better than single precision.
  • Cost is:
    • amean: 7 (an 'amean' costs the same as an add for binary floats)
    • half-precision divide: 1
    • full-precision divide: 3
    • multiply: 1
    • shift-by-2: 3
    • shift-by-4: 2 (slightly more efficient on the Z80 than four single shifts when the values are in RAM)
    • add/sub: 5
    • add/sub by a constant: 1
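
As promised above, range reduction to [2/3,4/3) can be as cheap as a frexp and one conditional doubling. A sketch (ln_full is my name, bgc_ln is the routine from the C translation above, and ln(2) would be a stored constant):

Code: [Select]
#include <math.h>

/* Reduce x > 0 to m in [2/3, 4/3) with x = m * 2^k, so that
   ln(x) = ln(m) + k*ln(2).  Sketch; bgc_ln is defined above. */
double ln_full(double x)
{
    int k;
    double m = frexp(x, &k);        /* m in [0.5, 1), x = m * 2^k */
    if (m < 2.0 / 3.0) {            /* push m up into [2/3, 4/3) */
        m *= 2.0;
        k -= 1;
    }
    return bgc_ln(m) + k * 0.6931471805599453;  /* + k*ln(2) */
}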

I derived this algorithm from this wonderful paper, which is annoyingly never at the top of a Google search and in fact took me a loooong time to stumble upon, sadly.

This paper greatly accelerates the classic Borchardt-Gauss algorithm, putting it on par with the AGM algorithm. At their core, both algorithms iterate an arithmetic mean and a geometric mean, but AGM requires them to be done in parallel (emulated on single-core processors by some simple variable juggling), whereas B-G does them sequentially. AGM also achieves quadratic convergence or better (I've seen exponential convergence in some special cases), whereas classic B-G usually converges only linearly. Carlson's version of B-G adds O(log(n)^2) additions and O(log(n)) space to reach quadratic convergence (where n is the number of desired bits of accuracy).
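
For reference, here is the iteration spelled out the way I read it (my notation; the $d_k^{(n)}$ are the usual Richardson extrapolation table):

$$a_{n+1} = \frac{a_n + g_n}{2}, \qquad g_{n+1} = \sqrt{a_{n+1}\,g_n}, \qquad a_0 = \frac{1+x}{2},\quad g_0 = \sqrt{x}.$$

Both sequences converge to $(x-1)/\ln x$, with the error shrinking by roughly a factor of 4 per step, and the extrapolation

$$d_0^{(n)} = a_n, \qquad d_k^{(n)} = \frac{d_{k-1}^{(n)} - 4^{-k}\,d_{k-1}^{(n-1)}}{1 - 4^{-k}}$$

cancels the error one power of 4 at a time. The code above computes $d_2^{(2)}$, deferring the $(1-4^{-k})$ normalizations to the final multiply by $(1-1/4)(1-1/16)$.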

I like the B-G-C algorithm since I can easily obtain the inverse trig functions and inverse hyperbolic functions as well as the natural logarithm.
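
For example, feeding different seeds into the same accelerated loop gives arctangent (a sketch under my reading of the paper: with $a_0 = 1$, $g_0 = \sqrt{1+x^2}$, the common limit is $x/\arctan(x)$; bgc_atan is my name, and as with the log you would range-reduce x toward 0 first to keep the two-step accelerated version accurate):

Code: [Select]
#include <math.h>

/* Same accelerated Borchardt loop as the log, different seeds.
   Sketch: a0 = 1, g0 = sqrt(1+x^2) converge to x/atan(x). */
double bgc_atan(double x)
{
    double a = 1.0;
    double g = sqrt(1.0 + x * x);   /* g0 */
    double b = a;                   /* save a0 */
    a = 0.5 * (a + g);              /* a1 */
    double c = 0.5 * (a + g);       /* guess for sqrt(a1*g0) */
    g = 0.5 * (c + a * g / c);      /* g1 ~= sqrt(a1*g0) */
    c = a;                          /* save a1 */
    b = a - b / 4.0;                /* a1 - a0/4 */
    a = 0.5 * (a + g);              /* a2 */
    c = a - c / 4.0 - b / 16.0;     /* extrapolated term, unnormalized */
    return x * (1.0 - 1.0 / 4.0) * (1.0 - 1.0 / 16.0) / c;
}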