This can be answered directly but a bit lengthily.

Instead let me ask a seemingly unrelated (but actually much the same) question.

Here are two power functions:

def powI(x, n):
    result = 1
    for i in range(0, n):
        result = result * x
    return result

def powR(x,n): return 1 if (n==0) else (x * powR(x, n-1))

What are their space/time complexities?

Which do you prefer?

They're pretty similar actually. If you rework the first one to not use range() but instead have a more classic C style of loop, they'll be almost identical:

def powI(x, n):
    result = 1
    while n > 0:
        result *= x
        n -= 1
    return result

There's a subtle difference in that powR as you've coded it will infinite-loop if given a negative exponent, while powI in any form will simply return 1 (neither is technically correct, fwiw). Other than that, the only difference is that one keeps its state on the call stack, the other uses a temporary variable.
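For what it's worth, here is a sketch (mine, not code from the thread) of making the iterative form actually handle negative exponents, by inverting the positive-power result:

```python
def pow_signed(x, n):
    """Integer power that also accepts negative exponents."""
    if n < 0:
        return 1.0 / pow_signed(x, -n)   # x^-n == 1 / x^n
    result = 1
    while n > 0:
        result *= x
        n -= 1
    return result
```

For example, pow_signed(2, -2) gives 0.25 instead of looping forever or silently returning 1.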

Performance would probably benefit from a special syntax meaning "recurse", which would avoid the LOAD_GLOBAL for the function's name (plus, things break if you pass recursive functions around after compiling them - for instance, powR2=powR; def powR(x,n): os.exit() - but if you do that, then you should expect to see nice bullet-holes in your foot). However, I do not believe that Python would overall benefit from any such thing.
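The rebinding hazard described above can be demonstrated directly; a small sketch of mine, using a harmless rebinding in place of os.exit():

```python
def count_down(n):
    """Recursive function; the recursive call looks up the global name each time."""
    if n == 0:
        return 'done'
    return count_down(n - 1)

alias = count_down                    # keep a second reference to the original
count_down = lambda n: 'hijacked'     # rebind the global name

# `alias` still runs the original body, but the body's recursive call
# goes through the (now rebound) global name `count_down`.
result = alias(3)
```

Here result is 'hijacked', not 'done': the recursion followed the global name, not the object it started in.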

Chris Angelico

> But the recursive solution has time complexity of O(log n). The iterative
> solution has time complexity of O(n). That's a significant difference for
> large n - a significant benefit of the recursive version.

Are you sure? It will produce n function calls and n multiplications, how is that O(log n)?

Chris Angelico

> But the recursive solution has time complexity of O(log n). The iterative
> solution has time complexity of O(n). That's a significant difference for
> large n - a significant benefit of the recursive version.

It's linear as written. I think you're talking about an implementation like this one:

def powR(x, n):
    if n < 0:
        return 1.0 / powR(x, -n)
    elif n == 0:
        return 1
    else:
        half_n = n // 2
        root = powR(x, half_n)
        result = root * root
        if half_n + half_n == n - 1:   # n was odd: one factor of x left over
            result = result * x
        return result

That's O(log n), but it was harder to write, and it could just as easily be iterative. In fact it might actually have been a bit simpler to write it iteratively.
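For comparison, an iterative square-and-multiply version (my sketch, not code from the thread) that walks the bits of n:

```python
def powI_log(x, n):
    """Iterative O(log n) power: scan the bits of n, squaring as we go."""
    if n < 0:
        return 1.0 / powI_log(x, -n)
    result = 1
    while n:
        if n & 1:        # this bit of n is set: fold the current square in
            result *= x
        x *= x           # square for the next bit
        n >>= 1
    return result
```

Same O(log n) multiplication count, no call stack at all.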

:-) Ok so you beat me to it :D

I was trying to arrive at an answer to the question (by M Harris) about why anyone would do something like fib recursively. I used power since that illustrates the case more easily, viz:

A trivial power -- recursive or iterative -- is linear time -- O(n)

A more clever power can be O(log(n))

This more clever power can be trivially derived from the recursive one as follows:

The recursive O(n) function:

def powR(x,n): return 1 if (n==0) else (x * powR(x, n-1))

is just Python syntax for the school-algebra (or Haskell) equations:

x^0 = 1

x^(n+1) = x * x^n

Adding one more equation:

x^(2*n) = (x^2)^n = (x*x)^n

Re-python-ifying we get:

def pow(x, n):
    return 1 if (n == 0) else pow(x*x, n//2) if even(n) else x*pow(x, n-1)

def even(n): return n % 2 == 0

Can this be done iteratively? Of course, but it's not so trivial.

[If you believe it is, then try writing a log(n) fib iteratively :D ]

What I'm surprised at is that nobody has pointed out that the log(n) version is also generally more accurate, given traditional floats. Usually getting the answer accurate (given the constraints of finite-precision intermediates) is more important than performance, but in this case they go hand in hand.

DaveA

And that, Your Honour, is why I prefer bignums (especially for integers) to floating point. Precision rather than performance.

Chris Angelico

I'm intrigued by your comment "especially for integers", which implies that you might use bignums for non-integers. Out of curiosity, how would you calculate:

sin(sqrt(7)*pi/31)

using bignums?

--

Steven

On Tue, May 3, 2011 at 10:49 PM, Steven D'Aprano wrote:

REXX uses decimal bignums, although I don't have a high-performance math library (for sin) that uses anything more than IEEE double precision. If I coded my own sin algorithm in REXX, I could go to whatever precision memory (and patience) would allow.

Chris Angelico
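In Python, the decimal module makes the same "code your own sin" exercise straightforward; a minimal Taylor-series sketch of mine (not Chris's REXX code), with no argument reduction, so only sensible for moderate |x|:

```python
from decimal import Decimal, getcontext

def dec_sin(x, digits=50):
    """sin(x) by its Taylor series, to `digits` decimal digits.

    Note: mutates the global decimal context's precision.
    """
    getcontext().prec = digits + 5        # a few guard digits
    x = Decimal(x)
    term = x                              # first term: x^1 / 1!
    total = term
    k = 1
    while abs(term) > Decimal(10) ** -(digits + 2):
        # each step multiplies the previous term by -x^2 / ((2k)(2k+1))
        term *= -x * x / ((2 * k) * (2 * k + 1))
        total += term
        k += 1
    getcontext().prec = digits
    return +total                         # unary + rounds to final precision
```

For example, dec_sin("0.5") agrees with math.sin(0.5) to double precision and well beyond.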


It took me a while, but this one seems to work:

from collections import namedtuple

Triple = namedtuple('Triple', 'hi mid lo')
Triple.__mul__ = lambda self, other: Triple(
    self.hi * other.hi + self.mid * other.mid,
    self.hi * other.mid + self.mid * other.lo,
    self.mid * other.mid + self.lo * other.lo,
)

def fib(n):
    f = Triple(1, 1, 0)
    a = Triple(1, 0, 1)
    while n:
        if n & 1:
            a *= f
        f *= f
        n >>= 1
    return a.mid

-- HansM

>     self.hi * other.hi + self.mid * other.mid,
>     self.hi * other.mid + self.mid * other.lo,
>     self.mid * other.mid + self.lo * other.lo,
> )
>
> def fib(n):
>     f = Triple(1, 1, 0)
>     a = Triple(1, 0, 1)
>     while n:
>         if n & 1:
>             a *= f
>         f *= f
>         n >>= 1
>     return a.mid
>
> -- HansM

Bravo! Can you explain this?

The tightest way I knew so far was this:

The 2x2 matrix

    0 1
    1 1

raised to the nth power gives the nth Fibonacci number. [And then use a logarithmic matrix mult]
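A sketch of that approach (my code, not from the thread): the nth power of [[0,1],[1,1]] is [[F(n-1),F(n)],[F(n),F(n+1)]], so F(n) sits in the off-diagonal entries.

```python
def mat_mul(a, b):
    """2x2 matrix product, matrices as ((a,b),(c,d))."""
    return ((a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]),
            (a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]))

def fib_matrix(n):
    """F(n) via logarithmic (square-and-multiply) matrix exponentiation."""
    result = ((1, 0), (0, 1))   # identity
    m = ((0, 1), (1, 1))
    while n:
        if n & 1:
            result = mat_mul(result, m)
        m = mat_mul(m, m)
        n >>= 1
    return result[0][1]         # off-diagonal entry is F(n)
```

HansM's Triple is essentially this with the symmetry of the matrix factored out, saving one of the four entries.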

Your version is probably tighter than this.

Yet one could argue that this is 'cheating' because you (and I) are still solving the power problem.

What I had in mind was to use fib results like:

f_(2n+1) = f_n^2 + f_(n+1)^2

and use these in the same way (from first principles) as we use the equation

x^(2n) = (x*x)^n

to arrive at a logarithmic power algo.
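A sketch (mine) of what that derivation looks like in code, using the "fast doubling" identities F(2k) = F(k)*(2*F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2:

```python
def fib_doubling(n):
    """F(n) from first principles via the doubling identities, O(log n) steps."""
    def pair(n):
        # returns (F(n), F(n+1))
        if n == 0:
            return (0, 1)
        a, b = pair(n >> 1)       # (F(k), F(k+1)) with k = n // 2
        c = a * (2 * b - a)       # F(2k)
        d = a * a + b * b         # F(2k+1)
        if n & 1:
            return (d, c + d)     # (F(2k+1), F(2k+2))
        return (c, d)             # (F(2k), F(2k+1))
    return pair(n)[0]
```

No matrix (or power function) in sight, yet it halves n at every step just like the logarithmic pow.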
