World: r3wp
[!REBOL3-OLD1]
BrianH 7-Feb-2009 [10654] | Think of these as a standard library of helper functions that you don't have to use if you don't need to. If you do use them, you can count on them working as correctly as the REBOL experts can make them work, and as efficiently. Either way REBOL is better. |
[unknown: 5] 7-Feb-2009 [10655x3] | Yes Brian, but the two exists functions above are necessary because a change has been made to the operation of query. In those cases it is necessary to modify mezzanines. |
Yeah, I understand the point behind mezzanines, which is why I maintain a good quantity of them outside of the REBOL distribution. | |
To me, Parse is the greatest strength of REBOL. | |
BrianH 7-Feb-2009 [10658] | Re 3 msgs back, I don't get your point. The new QUERY is better. The mezzanines work the same on the outside (in theory). So? |
[unknown: 5] 7-Feb-2009 [10659x2] | Yes, I don't dispute that the new query is better at all. |
what is your undirize function? | |
BrianH 7-Feb-2009 [10661x2] | So mezzanines are different on the inside. As long as they work the same on the outside, your code doesn't need to change. That is why the mezzanines are there. And code that is not part of the REBOL distribution is not mezzanine code, just REBOL code. If you want it to be mezzanine code (with all of the optimization benefits mezzanine code gets), submit it :) |
I posted it above as FILEIZE, but here:
undirize: func [
    {Returns a copy of the path with any trailing "/" removed.}
    path [file! string! url!]
][
    path: copy path
    if #"/" = last path [clear back tail path]
    path
] | |
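For reference, a quick usage sketch of the copying version above, based on the code as posted (the %some/dir/ and %data/ values are just made-up examples):

    probe undirize %some/dir/   ; == %some/dir
    path: %data/
    probe undirize path         ; == %data
    probe path                  ; still %data/ -- the argument was copied, not modified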
[unknown: 5] 7-Feb-2009 [10663x2] | undirize: func [file [file! sring! url!]][if #"/" = last file [reverse remove reverse file]] |
typo | |
BrianH 7-Feb-2009 [10665] | Ouch, two reverses :( |
[unknown: 5] 7-Feb-2009 [10666x2] | yeah |
Works well. | |
BrianH 7-Feb-2009 [10668] | I don't doubt it. It is modifying rather than copying, but it looks like it works. |
[unknown: 5] 7-Feb-2009 [10669] | Yeah, and with fewer evals than yours. |
BrianH 7-Feb-2009 [10670] | head clear back tail is much faster than reverse remove reverse. All of that reversing is series copying, as is remove from the head of a series. If you don't need your function to copy, change reverse remove reverse to clear back tail. |
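A rough side-by-side of the two idioms under discussion (just a sketch; the %some/dir/ value is a made-up example, and both forms assume the series really ends in the character being dropped):

    p: copy %some/dir/
    clear back tail p    ; p is now %some/dir -- one step at the tail

    p: copy %some/dir/
    reverse p            ; the trailing "/" is now at the head
    remove p             ; drop it from the head
    reverse p            ; p is back to %some/dir, after two full traversals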
[unknown: 5] 7-Feb-2009 [10671] | See, already hammering out better code by talking about it. |
BrianH 7-Feb-2009 [10672] | Yup :). Also, the return value of mine matters, as it does with DIRIZE, while yours is tossed. You wouldn't be able to use yours as a swap-in replacement for DIRIZE for non-dirs. Mine is a function, while yours is more of a procedure (making the Pascal distinction). |
[unknown: 5] 7-Feb-2009 [10673x3] | I wouldn't use mine at all for myself ;-) |
I'm getting to where I use fewer and fewer mezzanines. | |
At least for the simpler things. | |
BrianH 7-Feb-2009 [10676x3] | If you add file at the end of the function, you would have a useful return value. Then the only difference would be the copying. |
My approach is to improve the mezzanines to the point where it actually makes sense to use them instead of optimizing them away, or at least to the point where their code is good enough to inline. If I don't use it in highly optimized code, it doesn't go in. | |
The simpler and faster I can make them the better. If this means improvements to the natives to make the mezzanines better, then any code you write that also uses the natives will also be better. And you get good library functions too :) | |
[unknown: 5] 7-Feb-2009 [10679] | ; Just using remove
undirize: func [file [file! string! url!]] [
    if #"/" = last file [remove back tail file]
    file
] |
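A quick usage note on the version just posted (assumed behavior, from reading the code): because file is the last expression in the body, the function now returns its argument, but it still modifies that argument in place rather than copying:

    f: %some/dir/
    probe undirize f   ; == %some/dir
    probe f            ; also %some/dir -- f itself was changed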
BrianH 7-Feb-2009 [10680] | We should profile to see which is faster: remove or clear. |
[unknown: 5] 7-Feb-2009 [10681] | The remove is better. |
BrianH 7-Feb-2009 [10682x2] | They are within variance of each other in this case. Interchangeable. Over multiple runs, each one sometimes gets faster times than the other. |
Which is weird, because REMOVE does more work than CLEAR, what with the refinement checking. | |
[unknown: 5] 7-Feb-2009 [10684] | I think it is the amount of movement via the index that is time consuming for the other method. |
BrianH 7-Feb-2009 [10685] | That would be the same with both. Well, remove is easier to understand than clear, so it's a good choice. |
[unknown: 5] 7-Feb-2009 [10686] | Clear might have a lot of underlying code for ports use as well which may be the reason why remove is better. |
BrianH 7-Feb-2009 [10687x2] | Nah, both are actions so there is no type-specific overhead that affects use with other types. |
And I didn't find remove to be better consistently. Clear won half the time with the same code. | |
[unknown: 5] 7-Feb-2009 [10689x2] | I don't know. That is why we profile. ;-) |
The remove is more CLEAR to understand. Pun intended. | |
BrianH 7-Feb-2009 [10691x2] | I ran a dozen profiles of each, and they were 50/50 on which was faster. That is well within the profiler variance. |
I submitted a tweak to dp that improves the accuracy, but the profiler is too inconsistent to time differences this small well enough. | |
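For anyone wanting to repeat the REMOVE vs. CLEAR comparison, a crude timing sketch along these lines should do (manual loop timing with NOW/precise rather than DP, to keep it self-contained; time-it and the test values are made up for illustration):

    time-it: func [code [block!] /local start] [
        start: now/precise
        loop 100000 [do code]
        difference now/precise start
    ]

    print time-it [p: copy %some/dir/ remove back tail p]
    print time-it [p: copy %some/dir/ clear back tail p]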
[unknown: 5] 7-Feb-2009 [10693] | Well, you definitely want to make sure your profiler works. |
BrianH 7-Feb-2009 [10694x2] | It works for big differences well enough (based on my testing). |
For instance, that /into proposal was based on huge differences picked up by the profiler. If implemented it could eventually lead to user-visible reductions in overhead. That's a big deal. | |
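The /into idea in a nutshell: instead of a function like REDUCE allocating a fresh block for its results, you hand it an existing buffer to insert into, avoiding the intermediate allocation. A minimal sketch, assuming the refinement as it was proposed for REDUCE (buf is a made-up name):

    buf: make block! 8
    reduce/into [1 + 2 3 * 4] buf
    probe buf   ; == [3 12] -- results landed in buf, no throwaway block created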
[unknown: 5] 7-Feb-2009 [10696] | When profiling traversal operations, I have experienced skewed results. |
BrianH 7-Feb-2009 [10697] | Interesting. Examples? |
[unknown: 5] 7-Feb-2009 [10698x2] | Well, my get-block function is an example. I used it on a series of block data and got different results that don't seem to jibe with my expectations. |
Some more complex reads actually resulted in better performance than less complex reads. | |
BrianH 7-Feb-2009 [10700] | Are you talking about file access? |
[unknown: 5] 7-Feb-2009 [10701x2] | I had done a test where I read a small 5000-record file and compared it to a 100000-record file, and the 100000-record file provided better performance than the smaller one. |
After the file read. | |
BrianH 7-Feb-2009 [10703] | Sounds like cache is a factor here. |