World: r3wp
[Core] Discuss core issues
eFishAnt 25-Apr-2006 [4050] | each block adds exactly 64 bytes... is that what you mean? |
Maxim 25-Apr-2006 [4051] | yes. |
eFishAnt 25-Apr-2006 [4052] | seems like an empty block should only take 2 bytes...but then that is just the ASCII representation...binaries are bigger than their source code...I would expect some list link pointers, some datatype info...dunno. Maybe try a small program like REBOL [] probe empty: [] and inspect the RAM of it. |
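A rough way to try that without poking RAM directly is to watch the interpreter's own memory counter. A minimal sketch (my own, not from the thread), assuming the stats native of REBOL/Core 2.x reports total bytes in use:

    REBOL []
    ; estimate the per-block overhead by measuring how much the interpreter's
    ; memory footprint (stats) grows while allocating many empty blocks;
    ; the pool allocator makes single measurements noisy, so divide by count.
    ; note: the holder block itself also grows, so this is an upper bound.
    count: 100000
    before: stats
    holder: copy []
    loop count [append holder copy []]
    after: stats
    print ["approx bytes per empty block:" (after - before) / count]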
Maxim 25-Apr-2006 [4053] | hehe, you're the embedded hw expert ;-) probing RAM directly... man, I haven't done that since I used the Amiga! |
eFishAnt 25-Apr-2006 [4054] | would be a good extension of a hex-editor...to just load a memory address and peek around and look at things...have REBOL inspect itself. |
Pekr 25-Apr-2006 [4055x2] | .... and hackers hack in :-) |
if I understand it correctly, we should get it with debug hooks ... a powerful debugger would help ... | |
eFishAnt 25-Apr-2006 [4057] | I don't have a Lauterbach debugger for x86 ... only for ARM, PPC, XSCALE, and some other MCUs...or that WOULD be cool. |
Maxim 25-Apr-2006 [4058x2] | I am starting to understand the challenges "big" app developers encounter when trying to solve complex data models. |
I think I will have to design a cluster engine for liquid quite rapidly for some of my projects. | |
eFishAnt 25-Apr-2006 [4060] | for very structured data, when space is at a premium, and if it is packed tight, you could always do it through a dynamically linked library. |
Maxim 25-Apr-2006 [4061x2] | yeah, I intend to recode liquid as a C module eventually, and probably use that natively in Python and make a DLL for it in REBOL. |
but I'm waiting for R3 for that... I'd like to allow liquids to be datatypes. | |
eFishAnt 25-Apr-2006 [4063] | ...but how much does the size increase when you populate the empty blocks with data? |
Maxim 25-Apr-2006 [4064x3] | by the size of that data, linearly. |
thus integers add 2 bytes per int, etc. | |
maybe a little bit more for linkage data. | |
eFishAnt 25-Apr-2006 [4067] | (also, you might find ways to structure your data in REBOL which reduce the number of blocks...like making them into structured objects ... just a wild guess, maybe that would compact it more?) |
Maxim 25-Apr-2006 [4068x3] | objects take way more... hehe, have you ever even tried to allocate 1 million objects which have 40 methods? REBOL won't even be able to allocate 20 thousand of them. |
I had to use shared methods (like face/feel) | |
and I can still allocate 20,000 nodes a second on my trusty 1.5GHz laptop. | |
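A minimal sketch of the shared-methods pattern Maxim mentions (like face/feel); the names node-methods and make-node are illustrative, not liquid's actual code:

    ; the methods live once, in a single shared context...
    node-methods: context [
        describe: func [node] [print ["node value:" node/value]]
        ; ...the other shared functions would go here
    ]
    ; ...and every node holds only its data plus a reference to that context,
    ; instead of carrying its own copy of 40 functions.
    make-node: func [val] [
        context [
            value: val
            methods: node-methods   ; shared reference, not a per-node copy
        ]
    ]
    n: make-node 42
    n/methods/describe n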
eFishAnt 25-Apr-2006 [4071] | does an array help? |
Maxim 25-Apr-2006 [4072x2] | do you mean the array function? |
cause it allocates blocks ;-) | |
eFishAnt 25-Apr-2006 [4074] | seems like some improvements could be made by storing more things in each block. |
Maxim 25-Apr-2006 [4075] | each node is an atom. |
eFishAnt 25-Apr-2006 [4076] | but it seems it would be a speed/size tradeoff. |
Maxim 25-Apr-2006 [4077x3] | and needs its own block to determine its dependencies. |
note that I am allocating 1 million objects... we are talking enterprise solutions here. | |
with more RAM I can go to 10 million... that's without data... | |
eFishAnt 25-Apr-2006 [4080] | the fallacy of db-centric programming ... is having data centralized, rather than distributed. If you split it apart into separate messaging agents, then you might be able to reduce the bottlenecks created by having the data centralized. |
Maxim 25-Apr-2006 [4081x5] | with bedrock, each cell is its own individual atom. there are no tables. |
each cell is a store of the associations it has with other things in the db. being bi-directional nodes, I can create sets or groups of nodes, and nodes can backtrack them to query related information. | |
so in theory, when I add offloading of nodes (to disk or remote computers) within liquid, we'll be able to scale the db infinitely. | |
well, limited by 64 bits of addressing at a time. | |
(that's if R3 has a 64-bit mode) | |
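A toy illustration (my own code, not bedrock's) of the bi-directional association cells described above: linking two cells records the association on both sides, so either cell can backtrack to everything related to it.

    make-cell: func [val] [context [value: val links: copy []]]
    link: func [a b] [append a/links b append b/links a]
    invoice:  make-cell "invoice #42"
    customer: make-cell "ACME Corp"
    link invoice customer
    foreach rel customer/links [print rel/value]   ; backtrack from the customer side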
eFishAnt 25-Apr-2006 [4086x2] | bits or bytes? |
oh, is that what you are worried about? I thought you were saying an empty block takes 64 bytes. | |
Maxim 25-Apr-2006 [4088x6] | two different issues. |
64-bit-wide addressing, block lengths, etc., would allow liquid to scale more easily to petabyte-sized management | |
where you have n distributed/remote machines storing infinitely large amounts of tidbits of data, decentralised. | |
is anyone aware that inserting at the head of a block is EXTREMELY slow? | |
just paste the following (inserting seems to be exponentially slow!)
do [
    ;----- appending -----
    print "^/^/^/==============="
    print "START APPENDING: one million times"
    blk: []
    s: now/precise
    loop 1000000 [
        insert tail blk none ; append
    ]
    print ["completed: " difference now/precise s]

    ;----- inserting -----
    print "^/==============="
    print "START INSERTING: ten thousand times"
    blk: []
    s: now/precise
    loop 10000 [
        insert blk none
    ]
    print ["completed: " difference now/precise s]

    ;----- inserting -----
    print "^/==============="
    print "START INSERTING: twenty thousand times"
    blk: []
    s: now/precise
    loop 20000 [
        insert blk none
    ]
    print ["completed: " difference now/precise s]
] | |
shows:
===============
START APPENDING: one million times
completed: 0:00:00.942
===============
START INSERTING: ten thousand times
completed: 0:00:00.47
===============
START INSERTING: twenty thousand times
completed: 0:00:01.863 | |
Gabriele 25-Apr-2006 [4094x2] | insert at the head should be O(n), so inserting n times should be O(n^2) |
the GC and reallocation may complicate things though. but that happens on appending too. | |
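A small workaround sketch (my addition, not something suggested in the thread): when the final order needs new items at the head, append them (cheap at the tail) and reverse once at the end, instead of paying the head-insertion cost on every iteration.

    blk: copy []
    s: now/precise
    loop 20000 [append blk none]   ; tail inserts, no shifting of earlier elements
    reverse blk                    ; one pass at the end restores head-first order
    print ["append + reverse, twenty thousand times:" difference now/precise s]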
Maxim 25-Apr-2006 [4096] | why is inserting slower than appending? aren't blocks internally represented as linked lists? |
BrianH 25-Apr-2006 [4097] | No, that's the list! type. Blocks are arrays internally. |
Maxim 25-Apr-2006 [4098x2] | but why are they faster at the tail? |
I guess it's because the allocation engine uses pools, and pre-allocates data larger than the empty block? | |
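To put BrianH's point in numbers, a rough comparison sketch (my own, not from the thread), assuming REBOL/Core 2.x where list! is the linked-list series type: head-insertion into a block! shifts every element after the insertion point, while inserting into a list! does not.

    ; time n head-insertions into any series value
    bench: func [series n /local s] [
        s: now/precise
        loop n [insert series none]
        difference now/precise s
    ]
    print ["block!:" bench copy [] 20000]
    print ["list!: " bench make list! [] 20000]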