AltME groups: search


Results: 969 hits (r4wp: 90, r3wp: 879); showing 901-969 on this page.

world-name: r3wp

Group: Parse ... Discussion of PARSE dialect [web-public]
Maxim:
29-Apr-2011
did you do any kind of speed differences?
Sunanda:
1-Nov-2011
My test data was heavily weighted towards the live conditions I expect 
to encounter (average text length 2000. Most texts are unlikely to 
have more than 1 named entity).


All three scripts produced the same results -- so top marks for meeting 
the spec!


Under my test conditions, Ladislav was fastest, followed by Geomol, 
followed by Peter.


Other test conditions changed those rankings... so nothing is absolute.


Using a Hash! contributed a lot to Ladislav's speed -- when I tried it as a Block! it was only slightly faster than Geomol's... What a pity R3 removes hash!


Thanks for contributing these solutions -- I've enjoyed looking at 
your code and marvelling at the different approaches REBOL makes 
possible.
Ladislav:
1-Nov-2011
Using a Hash! contributed a lot to Ladislav's speed -- when I tried it as a Block! it was only slightly faster than Geomol's... What a pity R3 removes hash!
 - no problem, in R3 you can use map!
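Ladislav's suggestion can be sketched in R3, where map! provides the hashed lookup that hash! offered in R2 (a minimal sketch; the keys and values are invented for illustration):

```rebol
; R3: map! gives hashed key lookup, replacing R2's hash!
entities: make map! [
    "London" city
    "Rebol"  language
]
select entities "London"   ; == city
select entities "Paris"    ; == none (key absent)
```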
Ladislav:
1-Nov-2011
Another solution is to use a sorted block and a binary search, which 
should be about the same speed as hash
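A sorted block searched with a binary chop, as suggested, might look like this (a minimal sketch, not code from the thread; SORT keeps the block ordered so each probe halves the search range):

```rebol
; binary search over a sorted block; returns the index or none
bfind: func [data [block!] value /local lo hi mid] [
    lo: 1
    hi: length? data
    while [lo <= hi] [
        mid: to integer! (lo + hi) / 2
        case [
            value = pick data mid [return mid]
            value < pick data mid [hi: mid - 1]
            true                  [lo: mid + 1]
        ]
    ]
    none
]

names: sort ["delta" "alpha" "charlie" "bravo"]
bfind names "charlie"   ; == 3
```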
Ladislav:
14-Nov-2011
Sorry for not continuing with it, Sunanda, but when I gave it a second 
thought, it did not look like a possible speed-up could be worth 
the source code complication.
BrianH:
15-Nov-2011
O(n) isn't bad if n is small, especially compared to other parts 
of the process. Most of my apps are bound by database or filesystem 
speed.
BrianH:
19-Dec-2011
Twice the speed using your method :)
Group: #Boron ... Open Source REBOL Clone [web-public]
JaimeVargas:
12-Jul-2006
That is the simplest. I just need to get meme9 to provide logs for it, so people can get up to speed.
JaimeVargas:
13-Jul-2006
Anton, In order to speed coding you could just target cloning.
Group: !REBOL2 Releases ... Discuss 2.x releases [web-public]
Geomol:
28-Nov-2006
In new 2.7.4 build for OSX:
>> do http://www.rebol.com/speed.r
** Script Error: query has no value
** Where: halt-view
** Near: querying: to logic! query


Is networking turned off or something? It seems I can't run any script over the net with this latest 2.7 build on OSX. It works with the old REBOL/View 1.3.2 Core 2.6.3 build.
Maxim:
24-May-2007
but the main limiting factor is still view refresh speed (view and 
AGG that is)
BrianH:
2-Jun-2009
Copied here: Answer: currently, there are two primary random generators:

1. old method, high speed, simple multiplicative:
    next = next * 1103515245L + 12345L;
    tmp = next & 0xffff0000;
    next = next * 1103515245L + 12345L;
    tmp |= (next >> 16);


2. much slower, SHA1() of a repeating pattern of the integer bytes 
(20 bytes total).
Gregg:
3-Jan-2010
Joanna, there aren't a lot of docs on serial ports, but the basics 
are easy enough.


; port speed data-bits parity stop-bit (order is not significant IIRC)
port-spec: serial://port2/38400/8/none/1

p: open/no-wait port-spec

p/rts-cts: off  ; THIS IS REQUIRED TO TURN OFF HARDWARE HANDSHAKING!
update p

Then you can just INSERT and COPY on the port.
TomBon:
15-Apr-2010
like this?
The CLI connector uses the CLI component that nearly all major databases deliver. The connection is made via REBOL's

call/wait/info/output/error and a simple parse afterwards for the resultset.
I am using this prototype mainly for a quick & dirty connect
to mysql/postgresql/monetdb/sqlite. On my list are also connectors for
firebird/oracle/greenplum/sybase/ingres/infobright/frontbase and cassandra.

pros:
1. very fast for single requests
2. no rewrite of code needed if a new version or protocol is out
3. easy 'data migration' between the DBs
4. adding new DBs is a matter of hours only (see the CLI spec, that's all)
5. fast prototyping and testing for new DBs
6. robust, never had any trouble with CLIs, even with bigger resultsets
7. should be perfect also for traditional CGI (the process-starting overhead is minimal, except your name is Facebook)
8. very small footprint (~120 lines for connecting to 4 DBs, could be half that)

With a nice TCP-server component like RebService, the CLI multi-connector could be very useful as a client/server connector. I made a test with 2,000 concurrent calls (simple select) on a 4 GB quad-core; the CPU was only close to 50%, a good value.

cons:
1. slow if you have very many serial inserts (unless you shape them into one SQL query)
2. need to start a CLI process for every request
3. needs a TCP server for non-local connections
4. some more, but who cares ;-)

With a solution to keep the CLI open from RebService, these cons could disappear, and the speed overhead compared to a memory-based lib could be marginal.
Geomol:
8-Jun-2010
ICarii, as mentioned, I made a floodfill in my paint program. You 
can try it with:

do http://www.fys.ku.dk/~niclasen/rebol/canvas099.r
(Works best under Windows.)


On a modern computer, it fills at about the same speed in REBOL as DPaint did on an A500 computer 20+ years ago. I also made a rebcode version, which fills the entire screen almost instantly. That version isn't out there.
ICarii:
8-Jun-2010
Very nice Geomol!  Impressive speed - quite a bit faster than mine 
(i need to see what i messed up ;) )
Group: Profiling ... Rebol code optimisation and algorithm comparisons. [web-public]
Maxim:
27-Jan-2011
btw the first version is exactly the same speed as in R2, on my system.
Group: !REBOL3 GUI ... [web-public]
Robert:
5-Mar-2011
text-table: This is a very good and powerful style already. It solves 
80% of all use-cases in commercial apps. It's lean, fast and supports 
some advanced features out of the box. Speed of implementation is the 
focus here, not configurability. The auto-filters are XLS-inspired 
but the semantics are extended. You see all available values divided 
into from current shown list (above divider) and from all data (below 
the divider).
Henrik:
6-Mar-2011
For creating new contexts, we would need to have an advantage to using them. In a way, I don't mind it, as it could be used to apply particular groups of facets in a standardized way, so you know that a particular context will have these items. But FACETS technically does not prevent this. From a technical perspective, I'm not sure I can find an advantage, since there is no difference from R3's perspective in speed or efficiency when accessing size, color or hinting information.

However, I can see the following groupings or contexts:

- color
- hint
- size
Ladislav:
18-Mar-2011
much like putting a rock under the gas pedal to prevent the kid from 
going more than 30 mph in dad's car

 - that is a nice metaphor, but I would use a different one. In many 
 contemporary cars there are means to limit the revolutions of the 
 engine. They are there, and I do not think they are meaningless. 
 They limit the revolutions of the engine exactly like a rock under 
 a gas pedal eventually could (rock under a gas pedal certainly cannot 
 limit the speed).
Group: !REBOL3 Host Kit ... [web-public]
BrianH:
2-Jan-2011
That might also require some conversion, but at least then the conversion 
would be there to use. R3 uses UCS for strings internally for speed 
and code simplicity, though strangely enough words are stored in 
UTF-8 internally, since you don't have to access and change words 
on a character basis.
Pekr:
5-Jan-2011
Cyphre - what was the result of your acceleration experiment (not talking JITter here)? I do remember it provided some further speed-up to some folks here (not to me, on an Intel chipset). Will it be added to the hostkit by default, if it works under all Windows versions?
Pekr:
5-Jan-2011
nice ... even some xy percent speed-up is worth it :-) What I am interested in is smooth, not CPU-hungry scrolling; not sure if OGL can help here though :-)
Group: ReBorCon 2011 ... REBOL & Boron Conference [web-public]
Dockimbel:
27-Feb-2011
Btw, I'm currently connecting from the train on the road back to 
Paris at 200km/h using the train's wireless network! (I guess it's 
using a satellite connection, because latency is high and upload 
speed very very low). Nice technological achievement anyway... if only the train could get to its destination without being, on average, 20 min late, that would be even greater! ;-)
Bas:
27-Feb-2011
I see AltME is also working on high speed trains
Group: Core ... Discuss core issues [web-public]
onetom:
26-May-2011
wow, i didn't know you can do that! where is it documented? i just remember get-modes in relation to setting binary mode for the console or parity and speed settings for the serial port...
Maxim:
26-May-2011
Geomol, using copy/deep by default would be extremely bad for speed and memory. In most of the processing, you don't need to copy the deep content of a block, just the wrapper block itself, so you can change the order or filter it.


IIRC using copy/deep also causes cyclical references to break-up 
so using it by default would be disastrous.  


just look at how often we really need to use copy/deep compared to 
not and you'll see that the current behaviour is much more useful.
Ladislav:
22-Sep-2011
Also, once the directives are defined, there is no difference between "standard" and "user-defined" ones as far as speed or other issues are concerned.
Endo:
13-Oct-2011
There is a huge speed difference: (my benchmark function executes 
given block 1'000'000 times)
>> i: 0 benchmark [i: i + 1]
== 0:00:00.25
>> i: 0 benchmark [++ i]
== 0:00:02.578
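Endo's BENCHMARK helper is not shown in the thread; a function of that shape might look like this (the name and the 1'000'000 loop count come from his description, the implementation itself is an assumption):

```rebol
; run a block one million times and return the elapsed time
benchmark: func [code [block!] /local start] [
    start: now/precise
    loop 1000000 [do code]
    difference now/precise start
]

i: 0
benchmark [i: i + 1]   ; direct addition
i: 0
benchmark [++ i]       ; mezzanine ++ is much slower in R2
```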
BrianH:
13-Oct-2011
The ++ and -- functions are in R2 for convenience and compatibility, 
not for speed.
Maxim:
9-Feb-2012
The problem with R3 right now is that it isn't yet compiled in 64-bit; we still have the 1.6 GB RAM limit per process, which is the biggest issue right now. I have blown that limit a few times already, so it makes things a bit more complex and doesn't allow me to fully optimize speed by using more pre-generated tables and unfolded state rules.
Maxim:
9-Feb-2012
Our datasets are huge and we optimise for performance by unfolding and indexing a lot of stuff into rules... for example, instead of parsing by a list of words, I parse by a hierarchical tree of characters. It's much faster, since the speed is linear in the length of the word instead of in the number of items in the table, i.e. the typical O*n vs. O*O*n type of scenario. Just switching to parse was already 10 times faster than using hash! tables and using find on them...


In the end, we had a 100-times speed improvement from before parse to compiled parse datasets. This means going from 30 minutes to less than 20 seconds... but this comes at a huge cost in RAM: a 400 MB overhead, to be precise.
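The unfolding Maxim describes - replacing a flat word list with a tree of shared character prefixes - might be sketched like this (hypothetical data, not his actual rules):

```rebol
; flat form: FIND scans the whole list, cost grows with list size
words: ["speed" "spell" "spend"]
find words "spend"

; unfolded form: shared prefixes are factored out, so matching cost
; is tied to the word length, not to the number of words
rule: ["spe" ["ed" | "ll" | "nd"]]
parse "spend" rule   ; == true
parse "spoon" rule   ; == false
```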
Maxim:
9-Feb-2012
O*O*n
  == a typo  :-)

I guess I really meant  something like O(n*n) 


It's the kind of dramatic linear vs. logarithmic scaling difference when we unfold our datasets into parse.


But it's not exactly that kind of scaling, since the average topology of the sort tree has a lot of impact on the end result. For example, in my system, when I try to index more than the first 5 characters, the speed gain is so insignificant that the ratio is quickly skewed, when compared to the difference which the first 3 letters give.


It's 100% related to the actual dataset I use. In some, going past 2 is already almost useless; in others I will have to go beyond 5 for sure. Some other datasets we unfold using hand-picked algorithms per branch of data, and others are a pure, brute-force, huge RAM gobbler.
Oldes:
19-Feb-2012
In my case I will have just a few key-value pairs.. so the speed is not a problem; more important is to be sure that the key will not be mixed with values.
Ladislav:
20-Feb-2012
Well, sure, the speed may not be the most important property.
Group: !REBOL3 Proposals ... For discussion of feature proposals [web-public]
Maxim:
28-Jan-2011
I know, but everything that's done in the C side will save on speed 
and memory, since the C doesn't have to go through the GC and all 
that.  in tight loops and real-time continual processing, these details 
make a big difference in overall smoothness of the app.
Oldes:
30-Jan-2011
You talk about it so much that someone could have written an extension in the same time and given real proof :) What I can say: using an additional series is a big speed enhancement. At least it was when I was doing the colorizer.
Ladislav:
30-Jan-2011
You talk about it so much that someone could have written an extension in the same time and given real proof :) What I can say: using an additional series is a big speed enhancement.

 - actually, it has been proven already, just look at the performance 
 of the UNIQUE, etc. functions
Group: Red ... Red language group [web-public]
BrianH:
28-Feb-2011
Yes, for situations that the function type specs can't handle. It's 
mostly used for making code more bulletproof, but it can speed things 
up too. Even more so in Red since the runtime overhead could be eliminated 
when unnecessary.
Dockimbel:
10-Mar-2011
I needed to prototype a direct native emitter for the upcoming JIT-compiler. 
So compilation speed was critical.
Dockimbel:
7-Apr-2011
The exact frontier between REBOL features that will be supported 
in Red, and the ones left aside is not yet accurately defined. In 
fact, it is possible to support almost every feature of REBOL, but 
the performance (and maybe memory footprint) to pay might be too 
high. For example, supporting dynamic scoping and BIND-ing at runtime 
is possible in Red, but the speed impact would be so high, that the 
compiled version wouldn't run much faster than the interpreted one.
Dockimbel:
7-Apr-2011
The compiler could be made smart enough to do that without altering 
the original REBOL syntax (or maybe just marginally). The question 
is, can it be done without making the compiler code too complex to 
maintain. :-) There's also the JIT speed constraint; the compiler would need to be fast enough for that case too.
Dockimbel:
13-Apr-2011
Peter: strictly speaking, they are not needed because they can be 
emulated by math operations. However, on CPUs, the speed difference 
with true shifts is huge (one order of magnitude at least). I can't 
say yet how much shifts will be used in Red's runtime (mainly in 
memory manager) because in some cases, bitfield operators could be 
used instead.
Kaj:
10-Aug-2011
Maybe I imagined the speed improvement. My previous experiments were 
the night before, and I don't have those executables anymore
PeterWood:
5-Sep-2011
Having more people testing red/system would probably help speed things 
up a little.
Kaj:
28-Sep-2011
I don't know if anyone has tested it, but Red/System should approach 
the speed of C. So that would be somewhere around the speed of C++ 
and such somewhat higher level languages
Steeve:
29-Dec-2011
for SWITCH I can see the need (computing labels in an array to support indirect threading), a speed issue.

But why do you need to implement CASE in Red/System? It's only a sequence of if/else statements.
BrianH:
29-Dec-2011
I hope you have a CASE/all option. We used the CASE/all style in 
the R3 module system code to great benefit, mostly in maintainability 
but also in speed, when compared to the nested IF and EITHER code 
it replaced. It enabled us to add features and simplify code at the 
same time.
Ladislav:
24-Jan-2012
To Geomol: the speed should not be bad, but that is not the main 
reason why this approach is good.
GrahamC:
6-Feb-2012
Is there anything others can do to speed up the development of Red?
Pekr:
6-Feb-2012
Kaj - donation is not a problem imo. I donated and will donate again in March. At least a bit, but the question is if it can speed anything up, apart from the fact that Doc will eventually be able to work on Red fulltime. I think that Graham might be in a position of needing to work on new stuff, and is deciding which tool he should use. In such a case, it is a bit premature to consider Red imo. But who knows how quickly Red itself can be written.
Pekr:
6-Feb-2012
I am also not sure that in the case of Red, an eventual R3 source release would help to speed things up, as Red "natives" are going to be written in Red/System, not C - or so is my understanding of the platform.
Kaj:
6-Feb-2012
I don't think any R3 development could speed up Red, but R2/Forward 
may
Henrik:
6-Feb-2012
I don't think any R3 development could speed up Red

 - perhaps only design decisions already taken, as design can take time and mistakes can be made.
PeterWood:
6-Feb-2012
A few points relating to recent posts:

Nenad is already working fulltime on Red.


He has already accepted contributions to the Red/System compiler 
from both Andreas and Oldes.


Finding bugs by using Red/System will help speed up the process of Red, especially as Nenad's current design is to generate Red/System code from Red.
Dockimbel:
7-Feb-2012
Speed up the process: you'll be able to easily add new datatypes to Red. I won't code all the 50+ REBOL datatypes myself, only the basic and important ones. For example, I would very much appreciate it if date! and time! could be contributed by someone. The basic types will be quick to add as soon as the underlying Red runtime (think MAKE, LOAD and a few basic natives) is operational.
Evgeniy Philippov:
13-Feb-2012
My approach would also decrease the number of layers by one. This greatly reduces the complexity and greatly improves compilation speed.
Pekr:
14-Feb-2012
As for compilation time: I don't know, guys, but Red/System compiles REALLY fast. I remember my DOS days, where compiling took 30 secs. Where's the problem? It's not imo about the compilation speed, at least for me - it is about the process of building an app vs. dynamic prototyping using an interpreter. I don't like the compile/build/run process much, but I expect something like the R2 console for prototyping to appear in the distant future for Red...
Dockimbel:
14-Feb-2012
Compilation time: absolute speed is not the issue currently; we're talking about the relative speed of the preprocessing compared to the whole compilation time.
Group: Topaz ... The Topaz Language [web-public]
Gabriele:
20-Jul-2011
Brian: a clarification, Topaz is both an interpreter and a compiler, 
and although you can compile whenever you need speed (so in principle 
there won't be much need to manually optimize functions for the interpreter), 
in most cases you're running in an interpreter very like REBOL.
Pekr:
22-Nov-2011
Oldes in the Other languages group: "Hm.. i gave it a try and must say that Topaz is much more interesting." So, I would like to ask - is there any progress lately? Is Topaz already usable for real-life code? And what is the speed overhead of doing some app in Topaz in comparison to direct JS execution?
Gabriele:
23-Nov-2011
Progress: I added the action! datatype, and am preparing to write 
the "real" compiler. i was hoping to start that this week but it's 
starting to seem very unlikely. sleep is starting to seem unlikely 
this week. :)

Being usable: no.

Speed: currently, you can use the "Fake Topaz" dialect and map 100% 
to JS; the interpreter is of course much slower. When 1.0 is ready: 
i don't think there will be reasons to worry about performance.
Group: World ... For discussion of World language [web-public]
Steeve:
30-Nov-2011
Did you do some speed benchmarks? (R3 vs R2 vs World)
Geomol:
2-Dec-2011
Q: Does World compile into bytecodes (a la java) or machine languages?

A: Into bytecodes for the virtual machine. Each VM instruction is 
32 bytes (256 bits) including data and register pointers.

Q: Can you do operators with more or less than 2 arguments?

A: Not yet. I've considered post-fix operators (1 argument), and 
it shouldn't be too hard to implement. To motivate me, I would like 
to figure out some really good examples. With more arguments, I can 
only think of the ternary operator ("THE ternary operator"). I'm 
not sure, World needs that.

Q: Is range! a series! type?

A: No, range! is a component datatype. It has two components just 
like pair!.

Q: What platforms are supported?

A: For now Mac OS X (64 bit), Linux (32 bit) and Windows (Win32). 
The code is very portable. It took me a few hours to port to Linux 
from OS X and just a few days to Windows.

Q: What platforms do you plan to support in the future?

A: It would be cool to see World on all thinkable platforms. I personally 
don't have time to support all. World is not a hobby project, and 
I'm open for business opportunities to support other platforms. The 
host depending code is open source. I mainly think 64-bit.


Q: I'm a little sorry to see the R2-style port model instead of the 
R3 style. Are all ports direct at least?

A: Yes, ports are direct (no buffering). The ports and networking 
are some of the most recent implemented. More work is needed in this 
area. I would like to keep it simple and fast, yet flexible so we're 
all happy.


Q: What in the world is going on with the World Programming Language? 
This looks like something that must have been under wraps for a long 
time. What's getting released?

A: I didn't speak up about this until I was sure there were no show-stoppers. The open alpha of World/Cortex is being released as executables for Mac OS X, Linux and Windows (Win32), as are the platform-dependent sources and initial documentation. World implements 74 natives and more than 40 datatypes. The Cortex extension (cortex.w) implements 100 or so mezzanine functions and some definitions. The REBOL extension (the REBOL dialect in rebol.w) implements close to 50 mezzanine functions (not all functionality) and some definitions.

Q: Did you do some speed benchmarks? (R3 vs R2 vs World)
A: Yes:

(All tests under OS X using R2 v. 2.7.7.2.5 and R3 v. 2.100.111.2.5)

- A mandelbrot routine (heavy calculations using complex! arithmetic) is 6-7 times faster in World than code doing the same without complex! in R2, and 11-12 times faster than R3. If using the same code, it's 2.5 times faster in World than R2 and 4.2 times faster than R3.
- A simple WHILE loop like:
n: 1000000 while [0 < n: n - 1] []

is 1.8 times faster in World than in R2 and 2.8 times faster than 
in R3.

- I tested networking in two ways. One sending one byte back and 
forth between client and server task 100'000 times using PICK to 
get it, and another sending 1k bytes back and forth 10'000 times 
using COPY/PART to get it from the port. Both were around 3 times 
faster in World than in R2. (I didn't test this in R3.)

- I tested calling "clock" and "tanh" routines in the libc library. 
I called those routines 1'000'000 times in a loop and subtracted 
the time of the same loop without calling. Calling "clock" is 2.4 
times faster in World than in R2. Calling "tanh" (with argument 1.0) 
is 5.9 times faster in World than in R2. (I didn't test this in R3.)


(Some functions are mezzanines in World which are natives in REBOL, so in most cases they'll be slower in World.)
Steeve:
2-Dec-2011
Well, the claimed speed improvement is confusing me.
R3 slower than R2 on Geomol's computer, huh!

And sorry, but I also think that the memory footprint of the bytecodes is outrageous :-)
Maxim:
2-Dec-2011
it's more like I want to link my C version of liquid rather than use an interpreted one. The speed/memory impact is tremendous (10 million node allocations a second on the latest early prototype).
Kaj:
6-Dec-2011
John, you may want to try -Os. Optimising for size often leads to the best speed too on modern architectures, due to caching efficiency. OS X is also compiled that way.
Kaj:
9-Dec-2011
It's actually a lot like Linux. Every distro has something you need, 
but none of them has everything you need. If I want to build the 
Russian Syllable website, I can only use R3. If I need system integration 
and speed, I can only use Red. If I need to write web apps, only 
Topaz targets that. If I need open source, I can only use half of 
them. If I need dynamic binding, I can only use the interpreters. 
If I need infix operators, I can't use Boron, although I could use its predecessor. Et cetera, ad nauseam.
Pekr:
13-Feb-2012
Well, a trade-off :-) It is about getting the most expected result, preferably, vs. your mentioned speed :-)