r3wp [groups: 83 posts: 189283]
World: r3wp

[Core] Discuss core issues

[unknown: 5]
18-Dec-2008
[11642]
I'll use read-io for my contiguous stuff, but not for this portion
Steeve
18-Dec-2008
[11643]
Each time you do a copy/part, a new internal buffer is created; it's 
better to always use the same buffer. Easy to understand.

On top of that, the profiling script says it's faster to read 
16 KB once than to read 16 bytes twice.
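Steeve's buffer-reuse point can be sketched like this (REBOL 2 syntax; the file name and sizes are made up for illustration):

```rebol
port: open/binary %data.bin

; COPY/PART allocates a fresh 16-byte binary! on every call:
segment: copy/part port 16

; READ-IO can fill one pre-allocated buffer in place instead:
buffer: make binary! 16
clear buffer                 ; empty the buffer before reusing it
read-io port buffer 16       ; returns the number of bytes actually read

close port
```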
BrianH
18-Dec-2008
[11644]
In that case, read-io would not work well for you directly without 
implementing your own buffering. Barring that, the buffering built 
into REBOL and used by COPY would probably be good enough. If you 
want better performance you might want to make your own buffers.
[unknown: 5]
18-Dec-2008
[11645x2]
Steeve, why should I read 16 KB when I only want 16 bytes? If that 
were the case, I would then have to copy/part the portion of the 
buffer that I want just to get the first 16 bytes of the 16 KB.
Correct Brian.
BrianH
18-Dec-2008
[11647]
You would have some memory overhead if you go the COPY route because 
of what Steeve said, but that may be dwarfed by the disk overhead 
savings.
[unknown: 5]
18-Dec-2008
[11648x2]
correct.
It's the speed of the read that will have the most impact.
Steeve
18-Dec-2008
[11650]
Because each time you do a copy/part you create new buffers in memory, 
which are not immediately erased by the recycler, you should consider

always using the same buffer, especially if you do thousands and thousands 
of accesses in one second
[unknown: 5]
18-Dec-2008
[11651]
I would still have to use copy/part on the buffer if I went with 
Steeve's idea.
BrianH
18-Dec-2008
[11652]
You would get better performance improvements by figuring out how 
to queue your reads so you get better disk locality, which would 
lead to better internal buffer usage.
[unknown: 5]
18-Dec-2008
[11653]
But you don't get it, Steeve: I would still be using copy/part on 
the buffer filled by your read-io to get just the 16 bytes I want.
Steeve
18-Dec-2008
[11654]
memory overhead is the point
[unknown: 5]
18-Dec-2008
[11655]
performance is the point.
BrianH
18-Dec-2008
[11656]
Not in this case, Steeve.
[unknown: 5]
18-Dec-2008
[11657]
I don't think Steeve gets that I don't need more than 16 bytes
Steeve
18-Dec-2008
[11658]
You will get the performance too with a well-sized buffer
[unknown: 5]
18-Dec-2008
[11659x3]
Let me put it another way, Steeve: I will not be reading more than 
16 bytes per request (because I don't need any more than that), and 
it isn't a case of one 16-byte segment sitting next to another 16-byte 
segment. Really, I'm moving back and forth all over the file to get 
16-byte segments each time.
That's for this particular portion of what I'm doing.
read-io is optimal for my other needs.
BrianH
18-Dec-2008
[11662x2]
Steeve, he is doing random access. Unless he figures out how to queue 
his accesses, it won't matter if he uses the internal buffering or 
his own. The only way he could improve performance is through queueing.
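BrianH's queueing suggestion amounts to something like this hypothetical sketch: gather the requested offsets, sort them, and read in ascending order so the disk sweeps in one direction (the offsets and file name are made up):

```rebol
offsets: [90000 128 512000 4096]    ; offsets gathered from pending requests
sort offsets                        ; ascending order gives better disk locality

port: open/binary/seek %data.bin
foreach offset offsets [
    segment: copy/part at port offset + 1 16    ; series positions start at 1
    ; ... handle the 16-byte segment here ...
]
close port
```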
Paul, there are other ways to reduce memory overhead that will get 
you more benefit, like using INSERT/part.
[unknown: 5]
18-Dec-2008
[11664x2]
in what way?
You can't read with insert/part
BrianH
18-Dec-2008
[11666]
Not here. I was just using an example.
[unknown: 5]
18-Dec-2008
[11667x2]
ok
I can't think of a better option than copy/part at this point
BrianH
18-Dec-2008
[11669]
Without read queueing, neither can I.
[unknown: 5]
18-Dec-2008
[11670x4]
what about pick?
It would be nice to have pick/part
without the overhead of copy
pick already works on /seek ports
BrianH
18-Dec-2008
[11674]
Do you need to open/direct with read-io?
[unknown: 5]
18-Dec-2008
[11675x3]
no
but the head moves as you read with /direct
much like forall does
BrianH
18-Dec-2008
[11678]
It is not COPY that is buffering internally, it is OPEN without /direct.
[unknown: 5]
18-Dec-2008
[11679x5]
I might do a test to see how fast 4 successive picks would be
and 16 picks
/seek is not buffered
it is like /direct
The difference is that the index stays at the head as you reference it.
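The pick test Paul mentions might look something like this (assuming pick on a /seek port returns a single byte as an integer, with no series allocation; file name made up):

```rebol
port: open/binary/seek %data.bin

; Four successive picks, one byte each, no intermediate series:
b1: pick port 1
b2: pick port 2
b3: pick port 3
b4: pick port 4

; Versus one COPY/PART, which allocates a new 16-byte binary!:
segment: copy/part port 16

close port
```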
BrianH
18-Dec-2008
[11684]
How often do you need to do these reads, and can they be sorted in 
batches?
[unknown: 5]
18-Dec-2008
[11685]
They do get sorted, but they are done often, and the sorted batch 
is randomly sized depending on the request
BrianH
18-Dec-2008
[11686]
You are using variable-length records rather than fixed length?
[unknown: 5]
18-Dec-2008
[11687x2]
yes ;-)
You're wondering how, right?
BrianH
18-Dec-2008
[11689]
That is a lot slower. I am not wondering how; I've followed the discussions 
in your group.
[unknown: 5]
18-Dec-2008
[11690]
slower than what?
BrianH
18-Dec-2008
[11691]
Fixed length. Databases usually work in increments of disk pages 
because it is faster.
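With fixed-length records the file offset of record N is plain arithmetic, which is one reason it is faster; a hypothetical sketch (the record size and file name are made up):

```rebol
record-size: 16

; Offset of record N: (N - 1) * record-size, as a 1-based series position.
read-record: func [port [port!] n [integer!]] [
    copy/part at port ((n - 1) * record-size) + 1 record-size
]

port: open/binary/seek %records.dat
rec: read-record port 3    ; bytes 33..48 of the file
close port
```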