World: r3wp

[Profiling] Rebol code optimisation and algorithm comparisons.

Maxim
29-Oct-2009
[20x2]
here is a screen dump of iteration vs native MAXIMUM-OF use....  goes 
to show the speed difference between binary and interpreted code!!


>> a: [1 1 1 2 2 2 2 2 3 3 3 3 3 4 4 4 4 4 5 5 5 5 5 6 6 6 6 6 7 
7 7 7 7 8 8 8 8 8 9 9 9 9 9 10 10 10 110 110]

>> report-test  profile :maximum-of reduce [a]
----------------
performed: 10000000 ops within 0:00:09.781 seconds.
speed: 1022390.34863511 ops a second
speed: 34079.678287837 ops a frame
----------------

>> report-test  profile :get-largest reduce [a]
----------------
performed: 10000 ops within 0:00:01.86 seconds.
speed: 5376.34408602151 ops a second
speed: 179.21146953405 ops a frame
----------------

we are talking 190 TIMES faster here
btw


'PROFILE is a handy function I built which accepts ANY function with 
ANY args and repeats the test until it takes longer than one second.


You can adjust its loop scaling by varying the amplitude and magnitude 
of the loop count at each iteration.


'REPORT-TEST simply dumps human-readable, easy-to-compare stats of 
the calls to profile (which returns a block of info on the test).
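For illustration, a minimal sketch of what such a pair could look like (not the actual script; a simple doubling scale stands in for the amplitude/magnitude controls mentioned above):

    profile: func [
        "Repeat a call until one run takes over a second; return [count time]"
        f [any-function!] "function to test"
        args [block!] "argument values, spliced after the function"
        /local code count start elapsed
    ][
        code: compose [(:f) (args)]      ; e.g. becomes [maximum-of a]
        count: 1
        forever [
            start: now/precise
            loop count [do code]
            elapsed: difference now/precise start
            if elapsed > 0:00:01 [break]
            count: count * 2             ; assumed scaling: double until > 1 second
        ]
        reduce [count elapsed]
    ]

    report-test: func [result [block!] /local secs][
        secs: result/2/hour * 3600 + (result/2/minute * 60) + result/2/second
        print "----------------"
        print ["performed:" result/1 "ops within" result/2 "seconds."]
        print ["speed:" result/1 / secs "ops a second"]
        print "----------------"
    ]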
Steeve
29-Oct-2009
[22x2]
actually, you are not sorting or traversing a long series.
>> reduce [a]
== [[1 1 1 2 2 2 ...]]

your series contains only one value.

I suggest doing a COPY A instead
tired ?
Maxim
29-Oct-2009
[24x2]
hehe
but no, the way profile handles its second argument (here reduce 
[a]) is that it uses the second argument AS the argument spec for 
the function... 

loop i compose/only [ (:func) (args)]

so in the end, the test becomes:

loop i [maximum-of a]
Steeve
29-Oct-2009
[26]
ok
Maxim
29-Oct-2009
[27x4]
so I can test functions with any number of args; as long as I don't 
use refinements, I'm ok.
but I guess I could try using paths as the function argument... that 
might actually work too.
I'm working on isometric rendering of 3D polygonal gfx in AGG, so 
profiling is currently quite high on my list   :-)
Steeve
29-Oct-2009
[31]
Do you use the SKEW command in DRAW? Or are you calculating the 
coordinates of your 3D objects for each layer?

(I think SKEW allows simulating isometric rendering in AGG, but it's 
just an assumption, I never tried it)
Sunanda
29-Oct-2009
[32]
If you are trying to find the largest in a series of not-strictly 
comparable items, then be aware that R2 behaves differently to R3:
     b: reduce [1 none 12-jan-2005 unset 'a copy []]
     last sort b       ;; r2 and r3 agree
     maximum-of  b   ;; r3 has a headache
== [[]]
Steeve
29-Oct-2009
[33]
maximum-of uses the func GREATER? in R3
To me it makes sense, because in Rebol, blocks are really great!
:-)
Maxim
29-Oct-2009
[34x5]
Steeve, I replied in the (new) !SCARE group
I'm doing an in-depth analysis of various looping funcs... and discovering 
some VERY unexpected results amongst the various tests... will report 
in a while when I'm done with the various loop use cases.
the main one being that foreach is actually the fastest series iterator!
and remove-each is 90 times faster if it always returns true rather 
than false!
(probably exponentially faster as the series grows)
Maxim
30-Oct-2009
[39x2]
wow I'm already at 7kb of output text with notes and proper header 
... I haven't done half the tests yet!
did you know that FOR is 60x ... let me write that out ... SIXTY 
TIMES slower than REPEAT  !!!
Geomol
30-Oct-2009
[41]
Yeah, there's often a huge difference between a mezzanine function 
and a native. In R2, FOR is mezz, REPEAT is native.
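A quick way to see that gap yourself (a rough sketch, not the benchmark harness; absolute numbers depend on the machine):

    t: now/precise
    repeat i 1000000 [i]               ; REPEAT is native in R2
    print ["repeat:" difference now/precise t]

    t: now/precise
    for i 1 1000000 1 [i]              ; FOR is a mezzanine built on natives
    print ["for:" difference now/precise t]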
Maxim
30-Oct-2009
[42x10]
the comment above about remove-each is false... it was a coding error.
but I'm discovering a lot of discrepancies in things like string 
vs block speed of certain loops... 
and a lot of other neat things like:

pick series 1   

is  15% faster  than

not tail? series
1000 < i: i + 1     is     10%  faster than    (i: i + 1) > 1000
and it's not because of the paren... I checked that....
(i: i + 1) > 1000       same speed as      i: i + 1   i > 1000
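To make those comparisons concrete, the equivalent forms side by side (a sketch; note the PICK form only suits series that never hold none or false values, since PICK past the tail returns NONE):

    data: [1 2 3 4 5]

    s: data
    while [not tail? s] [s: next s]    ; explicit tail test

    s: data
    while [pick s 1] [s: next s]       ; stops when PICK past the tail returns NONE

    i: 0
    until [1000 < i: i + 1]            ; the form that tested ~10% faster

    i: 0
    until [(i: i + 1) > 1000]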
profiling almost done... my machine has been looping series and indexes 
non-stop for about 8 hours now  :-)


be ready for the most in-depth analysis on loops ever done for R2 
 ;-)
it will be nice to do the same exercise on R3
See who the overall winner is in this REBOL iterator slugfest!!!
   

over 8 hours of practically non-stop CPU cycling over a wide variety 
of exit conditions, datasets and ALL iterators in REBOL 2 

(loop, repeat, for, forever, foreach, remove-each, forskip, forall, 
while, until )

20 kb of data, statistics, comments and test details.

INVALUABLE data for people wanting to optimize their REBOL code.


http://www.pointillistic.com/open-REBOL/moa/scream/rebol-iterator-comparison.txt
I would like a few peer reviews so I can continue to evolve this document 
in order to make it as precise/useful for everyone.
Steeve
30-Oct-2009
[52x3]
One thing should be noted:
repeat and foreach do a bind/copy of the evaluated block.

Even if they are the fastest loops, they should not be used too intensively 
because they will pollute the memory.

It's particularly sensitive for graphics applications or services 
that linger in memory. 


So that's why I advise using only LOOP, WHILE and UNTIL for intensive 
repeated looping, if you don't want to blow up the memory used by 
your app.
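For illustration, the kind of rewrite this advice points at (a sketch; PROCESS is just a placeholder for whatever the body does):

    data: [1 2 3 4 5]
    process: func [item][item]           ; placeholder body

    ; FOREACH binds/copies its body block on every call:
    foreach item data [process item]

    ; a plain WHILE walk avoids that per-call copy in hot paths:
    s: data
    while [not tail? s][
        process first s
        s: next s
    ]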
Your bench doesn't take into account the time taken by the GC to recycle 
the memory.
Some functions pollute the memory, some others don't.
You should add the time needed to recycle after each test.
but perhaps I'm wrong and you do take it into account
Maxim
30-Oct-2009
[55x5]
thanks Steeve, I'm accumulating all comments

First revision of the benchmarks will include:
	- RAM stats
	- empty vs filled-up loops: many words vs a single func with the same content called from the loop
	- GC de-activated tests + recycle time stats (rough sketch below)
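Roughly what such a GC-controlled pass might look like (an assumed sketch, not the actual benchmark script; TEST-CODE is a placeholder):

    test-code: [next [1 2 3]]            ; placeholder for one test body

    recycle/off                          ; suspend automatic GC during the timed pass
    before: stats
    loop 1000000 [do test-code]
    print ["allocated:" stats - before "bytes"]

    recycle/on                           ; re-enable the GC ...
    t: now/precise
    recycle                              ; ... then force a collection and time it
    print ["recycle time:" difference now/precise t]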
as noted in the document test notes:

I specifically didn't do any GC control, because I wanted, at this 
point, to see how the loops react under normal REBOL execution.  

the GC normally is pretty aggressive, and when you look at the tests, 
most loops roll several hundred thousand times, so the GC will 
have kicked in... if it can.
I did note that there is a HUGE memory leak which probably occurs 
in the actual benchmark procedure itself.


although I keep no reference to any of the data or transient test 
blocks and funcs, they are kept somewhere, and my rebol.exe process 
keeps growing and growing.... I caught it at 500MB!! but it didn't 
make any difference in actual speeds... after a few tests.... because 
I was a bit scared.
this will also have to be investigated further (the leak)
I tried manually recycling... but it didn't do anything.
Steeve
30-Oct-2009
[60]
what do you mean? it does it here:


>> recycle  s: stats
>> loop 1000000 [foreach a [1 2 3] [a: a]]
>> print stats - s
1569504   ; memory allocated by the loop
>> recycle  print stats - s
-320      ; after the recycle
Maxim
30-Oct-2009
[61]
>> stats
== 541502965
>> recycle
>> stats
== 272784493


but that's just for about 10% of the tests... the more tests I do, 
the more RAM stays "stuck" somewhere inside the interpreter.
Steeve
30-Oct-2009
[62x3]
yes, I noticed that too, it's a problem with R2
R3 is better with that
and if you activate recycle/on, does that make any difference?
Maxim
30-Oct-2009
[65]
I think R2 GC can't determine co-dependent unused references... in 
some situations.
ex:
blk: reduce [ a: context [b: none] b: context [c: a] a/b: b ]
blk: none


in this case both a and b point to each other, and clearing blk doesn't 
tell a or b that they aren't used anymore... that is my guess.
Steeve
30-Oct-2009
[66]
yep, but your tests don't seem to have such cases
Maxim
30-Oct-2009
[67x3]
I reduce a block which is the test... and since foreach does a copy/deep, 
and there is NO word ever referring to the content of the referred 
block, I think the contents of the blocks prevent the blocks and 
the data they contain from being collected... 


the block contains words which the GC doesn't count as zero-reference, 
so nothing gets de-allocated...

that's just my guess.
not sure I'm making sense... in how I explain it.
in any case I want to build a single script which does all the tests, 
statistics, and eventually graphics and HTML pages of all results 
in one (VERY) long process.

so I can better control how the tests are done and avoid the automated 
test creation I am doing now.