## Some simple performance tests (long)

### Some simple performance tests (long)

"The advantage of xrange() over range() is minimal (since xrange()
still has to create the values when asked for them) except when a very
large range is used on a memory-starved machine or when all of the
range's elements are never used (such as when the loop is usually
terminated with break)." - from the Python Library Reference.

I decided to measure the performance of range and xrange. I did it with
the following functions:

import time

def rprint( n ):
    a = time.time()
    for i in range(n): print i
    print time.time() - a

def xrprint( n ):
    a = time.time()
    for i in xrange(n): print i
    print time.time() - a

def rpass( n ):
    a = time.time()
    for i in range(n): pass
    print time.time() - a

def xrpass( n ):
    a = time.time()
    for i in xrange(n): pass
    print time.time() - a

The results were as follows:

    n        rprint      xrprint

    10^4       0.37 s      0.34 s   <- (1)
    10^5       4.26 s      4.25 s
    10^6      42.57 s     42.57 s
    10^7     431.94 s    438.32 s   <- (2)

    n        rpass       xrpass

    10^4     0.0012 s    0.0011 s
    10^5     0.0220 s    0.0139 s
    10^6     0.1463 s    0.1298 s
    10^7     1.4818 s    1.1807 s

The values are the average times printed by the tested functions.

Conclusions:

1) According to (1), xrange might be somewhat faster than range for
lower numbers of iterations.
2) According to (2), xrange might be slower than range for higher
numbers of iterations.

The test with pass is not as important as the test with print (because
we usually do something inside loops). So although xrpass beat
rpass, I would say that range is not slower than xrange (especially for
higher numbers of iterations). The final conclusion is: if you want
speed, you should use xrange provided that there aren't many
iterations. If you want less memory usage, you should use xrange.
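A lower-noise way to measure this kind of thing is the timeit module,
which repeats a statement and reports the total time. A minimal sketch
in Python 3 syntax (Python 3 has no xrange; its range() is already
lazy, so iterating a pre-built list stands in for Python 2's range()):

```python
import timeit

N = 10 ** 5

# Iterating a materialized list mimics Python 2's range(), while the
# lazy range() of Python 3 mimics the old xrange().
eager = timeit.timeit("for i in lst: pass",
                      setup="lst = list(range(%d))" % N, number=100)
lazy = timeit.timeit("for i in range(%d): pass" % N, number=100)

print("eager (list) : %.4f s" % eager)
print("lazy  (range): %.4f s" % lazy)
```

The print calls stay outside the timed statements, so the numbers
reflect only the looping cost.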

I've also made more tests. The code was as follows:

-----------------------------
import array, random, time

def test1( n, size ):
    a = time.time()
    for i in xrange(n):
        l = []
        for j in xrange(size):
            l.append( random.randint( 1, 10 ) )
        e = sum(l) / float(size)  # just for taking some time
    print time.time() - a

def test2( n, size ):
    a = time.time()
    l = []
    for i in xrange(n):
        del l[:]
        for j in xrange(size):
            l.append( random.randint( 1, 10 ) )
        e = sum(l) / float(size)
    print time.time() - a

def test3( n, size ):
    a = time.time()
    l = range(size)
    for i in xrange(n):
        for j in xrange(size):
            l[j] = random.randint( 1, 10 )
        e = sum(l) / float(size)
    print time.time() - a

def test4( n, size ):
    a = time.time()
    l = array.array( 'L', xrange(size) )
    for i in xrange(n):
        for j in xrange(size):
            l[j] = random.randint( 1, 10 )
        e = sum(l) / float(size)
    print time.time() - a

def test5( n, size ):
    a = time.time()
    ind1 = range(size)
    ind2 = range(size)
    for i in xrange(n):
        des1 = []
        des2 = []
        point = random.randint( 1, size-1 )
        des1 = ind1[:point] + ind2[point:]
        des2 = ind2[:point] + ind1[point:]
    print time.time() - a

def test6( n, size ):
    a = time.time()
    ind1 = range(size)
    ind2 = range(size)
    des1 = []
    des2 = []
    for i in xrange(n):
        del des1[:]
        del des2[:]
        point = random.randint( 1, size-1 )
        des1 = ind1[:point] + ind2[point:]
        des2 = ind2[:point] + ind1[point:]
    print time.time() - a
-----------------------------

### Some simple performance tests (long)

The number of replicates and standard deviations would be useful in
analyzing these results.
--
Michael Hoffman

### Some simple performance tests (long)

"TPJ" < XXXX@XXXXX.COM > wrote in message
news: XXXX@XXXXX.COM ...

On one machine, with one binary. Relative performance on different systems
can vary by, say, 10%. But of course, if you are optimizing a program to
run on just one machine, then the results on that machine are all that
matters.

To compare two things, one wants to remove all possible noise factors. If
one wants relative performance (ratios, percentages) then constant factors
should also be removed, when possible, to make the apparent difference as
close to the real difference as possible. Print is a large constant +
large noise factor that you *DO NOT* want for the stated comparison. It
requires a call through lots of OS code with variable execution times.

This is the right comparison for the reasons noted.

These times with print are about 300x the results below without print.
They are useless for comparing range and xrange. The differences above are
differences in printing (display) times and not in range versus xrange.

Think about it. When the differences between runs with print are several
times as large as the total time without, then those large differences
cannot have anything to do with the minor difference in looping method.

The second table looks more coherent.

You could, but you would be wrong. The second table shows that xrange is
slightly faster on your machine over the range tested.

This is completely backwards, for the reasons already given. The point
about doing something inside loops is relevant insofar as it says that
the minor difference between range and xrange, whichever way it goes
on a particular system, will matter relatively even less in application.
So the choice hardly matters unless space is a consideration. Which is
what the quoted docs more or less said.

To reduce random noise from random, the generator should be reinitialized
with seed(someint) at the beginning of each function. This is the
purpose of seed(x). But for the comparisons you are making, randomness is
irrelevant and the time for calls to random a nuisance.
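In today's Python 3 syntax, the reseeding idea can be sketched like
this (the helper name `fill` is just illustrative):

```python
import random

def fill(size, seed=42):
    # Reseeding makes every call draw the identical sequence, so the
    # generator's state stops being a source of run-to-run variation.
    random.seed(seed)
    return [random.randint(1, 10) for _ in range(size)]

# Two timed runs now do exactly the same work:
assert fill(1000) == fill(1000)
```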

This is exactly what you DO NOT WANT TO DO for comparing anything else.

This is trivially different. To test 'l = []' versus 'del l[:]', test just
that.
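Isolating just that difference with timeit might look like the
following Python 3 sketch (l.clear(), added in Python 3.3, is the
modern spelling of the same operation):

```python
import timeit

number = 10 ** 5
# Rebinding to a fresh list versus clearing the existing one in place.
new_list = timeit.timeit("l = []", number=number)
del_slice = timeit.timeit("del l[:]", setup="l = []", number=number)
clear = timeit.timeit("l.clear()", setup="l = []", number=number)

print("l = []    %.5f s" % new_list)
print("del l[:]  %.5f s" % del_slice)
print("l.clear() %.5f s" % clear)
```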

It is not surprising that allocating a list just once and filling it in is
faster than starting empty and expanding and copying multiple times. In
any case, if you want to test that, test just that without all the noise.

I did not look at the slicing stuff.

Terry J. Reedy

### Some simple performance tests (long)

How much RAM does your machine have? The main point is "except when a
very large range is used on a memory-starved machine".

Run

    x = range(10 ** 6)

and look at the memory usage of Python.
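You can also see the difference from inside the interpreter. A sketch
in Python 3 syntax (where range is lazy, like the old xrange), using
sys.getsizeof for the shallow size of each object:

```python
import sys

eager = list(range(10 ** 6))   # materializes a million int references
lazy = range(10 ** 6)          # stores only start, stop and step

print(sys.getsizeof(eager))    # megabytes for the list alone (CPython)
print(sys.getsizeof(lazy))     # a small constant, tens of bytes
```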

What happens when you run this program:

import time

def t(func, num):
    s = time.time()
    for x in func(num):
        pass
    return time.time() - s

def run(func, num):
    times = []
    for x in range(5):
        times.append(t(func, num))
    return min(times), max(times), sum(times) / 5

def main():
    x = 10 ** 6
    while 1:
        print "trying", x
        for s, f in ('xr', xrange), (' r', range):
            print s + " %.3f %.3f %.3f" % run(f, x)
        x *= 1.5
        x = int(x)

if __name__ == "__main__":
    main()

I get (columns are min/max/average):

trying 1000000
xr 0.110 0.115 0.111
r 0.101 0.186 0.119
trying 1500000
xr 0.082 0.087 0.083
r 0.152 0.158 0.154
trying 2250000
xr 0.124 0.138 0.128
r 0.228 0.235 0.230
trying 3375000
xr 0.184 0.189 0.186
r 0.344 0.352 0.346
trying 5062500
xr 0.276 0.284 0.279
r 0.515 0.528 0.519
trying 7593750
xr 0.415 0.421 0.416
r 0.774 0.795 0.779
trying 11390625
xr 0.623 0.634 0.626
r 1.163 1.246 1.180
trying 17085937
xr 0.934 0.941 0.937
Killed

The "Killed" is from the Linux OOM killer killing the Python process.
Notice that xrange for that number worked fine.
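This memory blow-up is exactly what later Python versions removed: in
Python 3 there is only range, and it is lazy like xrange was, so the
span that killed the process above costs almost nothing to represent.
A small sketch:

```python
import sys

# The span that got the list-based version killed by the OOM killer:
big = range(17085937)

print(sys.getsizeof(big))   # a small constant, regardless of the span
print(big[-1])              # indexing works without materializing a list
```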