AltME groups: search
results summary
world | hits |
r4wp | 90 |
r3wp | 879 |
total: | 969 |
results window for this page: [start: 701 end: 800]
world-name: r3wp
Group: Core ... Discuss core issues [web-public] | ||
Maxim: 17-May-2010 | btw, in my find-fast above... when the search matches approx 1 in 10 times, it ends up being twice as slow as the foreach... so speed will depend on the dataset, as always. | |
Ladislav: 18-May-2010 | Terry: "foreach is the winner speed wise.. as a bonus, If i use foreach, I don't need the index?" - it's unbelievable how you compare apples and oranges without noticing | |
Ladislav: 18-May-2010 | Terry: "I don't care" - you should, since you are comparing speed of code adhering to different specifications. If you really want to find the fastest code for a given specification, that is not the way to take. | |
Ladislav: 18-May-2010 | You are certainly entitled to do whatever you like, but saying "foreach is the winner speed wise..." is wrong, since you did not allow parse to do what you allowed foreach to do. | |
Carl: 16-Jul-2010 | Regarding all the above discussion regarding MOLD... there is an assumption, one that I do not believe. The assumption is this: that any function of REBOL could ever produce the perfect output storage format for all data. Or, stated another way: part of the design of building any application that stores persistent data is to build the storage formatters (savers) and storage loaders. I don't think there's any way around this, because storage formats have specific requirements, such as speed of loading, indexing keys, resolution of references (words or other), construction of contexts, etc. | |
DideC: 25-Aug-2010 | I see several UNIQUE bugs in CureCode too. And this one is already posted (by you, Henrik): http://curecode.org/rebol3/ticket.rsp?id=726&cursor=22 So far, I understand your answering speed ;-) | |
Maxim: 15-Sep-2010 | this function will often be used in loops, and the fewer function calls and the less REBOL stack usage you have, the bigger the speed gains. | |
Gabriele: 24-Sep-2010 | Oldes, if it's about speed, this is going to be even faster: loop 1000000 bind [e: e + 1] a/b/c/d | |
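Gabriele's one-liner works because BIND resolves the words in the block to the target context once, before the loop runs, so each iteration no longer pays for path evaluation. A minimal sketch of the idea (R3 semantics; the nested objects and the counter `e` are assumptions matching the names in the message, not code from the thread):

```rebol
; Hypothetical nested contexts matching the a/b/c/d path above
a: make object! [b: make object! [c: make object! [d: make object! [e: 0]]]]

; Path version: a/b/c/d/e is re-evaluated on every iteration
loop 1000000 [a/b/c/d/e: a/b/c/d/e + 1]

; Bound version: the words in the block are resolved to the d context once,
; so the loop body is just a word set and an addition
a/b/c/d/e: 0
loop 1000000 bind [e: e + 1] a/b/c/d
print a/b/c/d/e
```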
Andreas: 25-Sep-2010 | ... bla bla bla ... speed ... bla bla bla | |
Graham: 25-Sep-2010 | has anyone done any testing on whether the speed is that much different? | |
Ladislav: 26-Sep-2010 | {See UNIQUE. This function is not widely used in my apps, just because of that. Useless, because when we deal with huge series, we don't want to pay the COPY cost.} - while this looks like a reasonable argument at first sight, it actually isn't. The reason is based on the way UNIQUE is (and has to be) implemented. You cannot gain any speed worth the name by allowing an implementation to do the "UNIQUE job" in place. | |
Ladislav: 4-Oct-2010 | Hi, did somebody also notice the speed difference between Vista and 7, as below?
Benchmark run 27-Aug-2009/16:16:06+2:00. Rebol 2.7.6.3.1
Computer: 100Mega Athlon II X2 250/4G DDR3
OS: Windows Vista 64
Precision: 0.05
Empty block: 104000000.0Hz
Eratosthenes Sieve Prime (size: 8191): 54.0Hz, result: 1899 primes
Four-Banger test (+,-,*,/): 150000.0Hz, result: 10.0
Integral (icount: 10000) of sin(x) 0<=x<=pi/2: 42.7Hz, result: 1.00000000000003
Integral (icount: 10000) of exp(x) 0<=x<=1: 60.2Hz, result: 1.71828182845896
Merge Sort (500 elements): 68.4Hz
Benchmark run 4-Oct-2010/17:00:19+2:00. Rebol 2.7.7.3.1
Computer: 100Mega Athlon II X2 250/4G DDR3
OS: Windows 7 Professional 64-bit
Precision: 0.05
Empty block: 131000000.0Hz
Eratosthenes Sieve Prime (size: 8191): 69.0Hz, result: 1899 primes
Four-Banger test (+,-,*,/): 188000.0Hz, result: 10.0
Integral (icount: 10000) of sin(x) 0<=x<=pi/2: 49.7Hz, result: 1.00000000000003
Integral (icount: 10000) of exp(x) 0<=x<=1: 74.8Hz, result: 1.71828182845896
Merge Sort (500 elements): 90.4Hz | |
Ladislav: 4-Oct-2010 | (I checked, that the speed difference between 2.7.6 and 2.7.7 is not that big) | |
Group: SVG Renderer ... SVG rendering in Draw AGG [web-public] | ||
Cyphre: 13-Oct-2009 | re 1/ yes, I don't think this is a problem of DRAW but more a problem of unit conversion. DRAW works with pixels, as it is a low-level dialect (not only for rendering the SVG format). So the higher-level code (the SVG parser) should be responsible for this, unless I am missing something.
re 2/ gradients for outlines were planned as an addition, so I hope this will be in the R3 final release ;-)
re 3/ transparent outlines are a known problem. (BTW, is this working properly in other SVG renderers? I'd like to see the results.) This is because we are using a rasterizer which draws one shape over another. IMO the solution could be to replace the rasterizer with a different one (for example, like Flash) which simulates 'Constructive Solid Geometry'. But this would need major changes in the current internal implementation (and in fact also a switch to a higher version of AGG). My guess is it could also speed up the rendering in some cases... I started to investigate this area, but it needs more time, which I currently don't have :-/ | |
Maxim: 14-Oct-2009 | for me it's not speed as much as massively useful (and free) functionality. | |
Group: !RebDB ... REBOL Pseudo-Relational Database [web-public] | ||
Ashley: 5-Feb-2010 | All new RebDB v3 released for REBOL3. To take it for a spin, try this:
import http://idisk.me.com/dobeash/Public/rebdb.r
help db-
test: db-load http://idisk.me.com/dobeash/Public/test.bin
help test
sql select test
Extensive documentation in the works (within a week) ... actually, a large part of the doc deals with db design [from my POV], covering the trade-offs of fixed vs variable length records/fields, typed vs untyped columns, and RAM vs speed optimization. Needless to say, I think I've got the balance about as good as is possible with pure REBOL mezz code. This has been a long time in the making ... | |
Group: !REBOL3-OLD1 ... [web-public] | ||
BrianH: 5-May-2009 | It's good for speed and memory saving, and better binary conversions. Once we have vectors, we will have fewer people complaining about the lack of rebcode, except for the people who never take good enough for an answer :( | |
Pekr: 6-May-2009 | Steve - how could vectors help speed up operations on images, if image! is a separate datatype? | |
BrianH: 14-May-2009 | It's a speed/simplicity thing. | |
Steeve: 28-May-2009 | If we don't have the "chaining" behavior with reduce/into, then we will lose some speed in loops (because of the need to update the index of the series separately after each iteration). | |
Steeve: 28-May-2009 | Actually, the first optimization I do in my scripts which need speed and low memory usage is to remove all the reduce and compose usage. It's always the first thing I do | |
Pekr: 28-May-2009 | Steeve - what is the speed compared to R2? | |
BrianH: 29-May-2009 | Steeve, REBOL gets its speed of programming by preferring unshifted characters to speed typing of code, and English-like naming to speed reading. The syntax and naming conventions of REBOL were carefully chosen for good reason. | |
Steeve: 30-May-2009 | But I've not fully tested it back (I only tested the speed of APPEND) | |
Carl: 2-Jun-2009 | BTW, I estimate DO/next overhead about 30% of full speed DO. | |
Carl: 2-Jun-2009 | In other words, it is running at 66% of full speed of REDUCE!! | |
BrianH: 3-Jun-2009 | The days of hardware getting faster are going away - now the hardware is staying the same speed, but more hardware (more cores) is being added. You get more speed through parallelism. Slow languages are getting made faster or dying out. | |
Maarten: 3-Jun-2009 | But, if you want them to have reasonable speed... some native support helps... | |
shadwolf: 6-Jun-2009 | I started a deep, deep thinking process (which is a heavy task for an idiot brain like mine) concerning the future of viva-rebol and where I want to lead it. If you have a little interest in what I'm doing, you know that I'm currently working on two projects: viva-rebol and rekini. I'm interested in transforming viva-rebol into a real-time collaborative project manager/editor - something like Wave, but done in REBOL, to create REBOL applications. The idea that comes to my brain is to mix IRC and viva-rebol. IRC would supply the shared real-time document content, and viva-rebol would be both the manager and the renderer that catalyses the information collected over IRC. Why IRC? First, because it has lots of control features which can allow anyone to join and see a document or script under shared creation, and just look at it without active participation. That allows a hierarchy with master, co-writer, and viewer-only roles, and lets the master select who participates in the creation. We saw with area-tc that REBOL, VID, and the dialect concept are really fit to handle uncommon text handling at light speed, so it appears clear to me that this is the next step to take. Some people will say to me "but it's more important to have an advanced rich-text editor tool", to which I answer that it's boring to do, and the resulting gain in notoriety for REBOL is close to 0. So instead of cloning MS Word using VID, I prefer to move to the next step, which I hope will lead us to make people see all the potential of REBOL. It took me a looooooooooong time (6 years, in fact) to see how to merge all the interesting parts of REBOL into a big project which we could all be proud of. Our community is small, and working together to advance the projects is the obvious way if we want our projects to be recognised.
If we all work separately on our own projects, achieving high quality for those projects is hard. So externally we only show the world projects that look more like teasers than completed projects, and that's not a good thing for REBOL promotion. We can say all we want about the way REBOL is run by Carl, but as a community whose goal is to spread REBOL worldwide, we have a part of the responsibility in that too. | |
shadwolf: 6-Jun-2009 | this project, which will start as a community-oriented project, can then be adopted as-is by companies to speed up the way they work - and not only for REBOL scripting purposes. | |
Maxim: 11-Jun-2009 | its a question of speed... | |
Gregg: 12-Jul-2009 | But APPEND changes the spec block, which caused the speed issue Peter pointed out. | |
Pekr: 12-Jul-2009 | Because of the speed issue? | |
Pekr: 15-Jul-2009 | IIRC Carl posted some short notice on that, but can't find it. I would like to start testing it, e.g. the speed of page build in comparison to R2. | |
BrianH: 22-Jul-2009 | Datatype conversions: I think that once TO-HEX is removed for most datatypes the conversion issues of the TO-* set will be done. The rest will be handled by proper conversion functions, that we don't need to write immediately. We should probably wait on implementing those as natives until the APIs are worked out in REBOL versions, or plugin code. We can speed them up later once their behavior is agreed on. | |
BrianH: 14-Aug-2009 | Since extensions can't be statically linked with R3, I can wrap an LGPL JIT like libjit. It should work great. I'll be stuck with the RX data model, but that could be plenty for the types of functions you would write just for speed. | |
Maxim: 26-Aug-2009 | maybe with R4, after all the goodies this opening will have brought, he will be able to contemplate opening up a bit more. There is always a risk that letting go of *total* control can warp your creation into something you don't like. But my experience in a decade of REBOL shows that stuff which isn't "sanctified" by RT has a lot of difficulty picking up speed. When you (i.e. Carl) spend 10 years on a project and it doesn't take off, in part because the responsibility of keeping control stymies its growth to a pace slower than that of the industry, IMHO you realize that the possible upside of *total* control is definitely dwarfed by having a mass of like-minded peers who move along with you. Obviously no one sings exactly the same tune, but you need to try out stuff in order to know if it's really a good or a bad idea... I'd rather have 100 people doing this, and then select the obvious clear winners, than try to muse about it, try a single idea, and finally realize it wasn't a good one. Plus, what is good philosophy for RT isn't good for everyone... the proof is that the PITS model isn't enough for everyone. Even RT had to acknowledge this. | |
BrianH: 26-Aug-2009 | Maxim, the reason that custom datatypes can't extend the syntax is technical, not a control issue. When TRANSCODE/on-error was proposed, Carl revealed that TRANSCODE can't call out to external code on syntax exceptions without making it drastically slower, too slow for use. This is why the /error option was implemented instead: it doesn't use hooks or callbacks. We do have custom datatype hooks for the serialized syntax constructors, but those are passed the preparsed REBOL data inside the #[ ]. Custom syntax hooks for ordinary literals would require a complete redesign of the parser, and that redesigned parser would be much worse, in terms of resource usage (speed, memory). | |
Pekr: 9-Sep-2009 | well, everything is important. But how much speed-up do we get via new replace? How often is that used in our code? :-) | |
Pekr: 10-Sep-2009 | our killer app lies initially in the plugin: showing the world a few presentations, the ease of GUI creation, and some real service wrapping, doing a tour comparing sources. Wrap OSnews.com, where you post articles. Wrap Gmail, or later on their Wave, and show the code difference; compare sizes, compare speed. And at the end of the presentation, do a bundle - show a stand-alone app called gmail, not needing a browser ... | |
Maxim: 16-Sep-2009 | and redoing the UAE and actually adding new concepts to the GUI instead of just making it shiny... Vista had no single new GUI concept over XP, apart from that mini app bar on the right (which sucks sooo much energy out of your PC that you can actually notice the speed difference when it's running!). | |
Henrik: 23-Sep-2009 | Indeed VID3.4 is far from done. You can probably use it for a few things, like getting a name from a user in a text field or submitting a very simple form, but not much more than that. To reiterate the state of the UI:
- No unicode yet in graphics (when Cyphre gets around to it).
- Resizing acts like a drunken sailor. (Carl)
- Skin is not published. (Me)
- Style tagging is not implemented. (Carl)
- Reasonable requesters are not yet implemented. (Carl or me)
- Layers are not yet implemented. (Carl)
- Guides are not yet implemented. (Carl)
- Better font rendering. We are not taking advantage of what AGG can do. (Cyphre again)
- Event system is from Gabriele's VID3. (Carl)
- Many features are untested, like drag&drop. (Me, I guess)
- Proper material management for skin. (Me)
- Many styles are not implemented, especially lists. (Me)
- More elaborate animation engine. (Carl or Me)
- Form dialect. (Carl talked about this)
- More/better icon artwork. (Me)
Plus, Maxim has some ideas for DRAW, to greatly speed up rendering, but I don't know if they can be implemented. The overall design of the GUI engine is very good. Whenever a change or addition is made, you alter 3-5 lines of code in one place, and it works. I doubt the entire engine will be rewritten. You won't see GUI bug reports in CureCode for a while. There could easily be 2-300 reports, once we get to that point. My work regarding skins is rather big: I need to work out the basic styles first, so we have a reasonable way to build compound styles. These are being done using a very simple, but pixel-accurate GUI using plain colored surfaces. This is easier for testing, as the draw blocks are small, but as Pekr likes to complain: they are not pretty to look at. Once the real skin goes into place, the draw blocks will grow a lot. I would love to see a low-level GOB management dialect, like Gabriele's MakeGOB. | |
shadwolf: 23-Sep-2009 | hum, but in general you do your best to select the best 3D file format to go with your custom-made 3D engine, to get the best compromise between real-time rendering speed and quality. | |
BrianH: 28-Sep-2009 | The victory would be in speed and the fact that your replacement code was missing a :here, but your suggestion is nicer I suppose. | |
Maxim: 18-Oct-2009 | it could even go a step further and check if the shared series is used in the block of code being molded, but that would hamper speed a little bit... a /compact refinement could be used to switch on this extra verification. | |
BrianH: 14-Nov-2009 | The new op! behavior has allowed us to speed up DO quite a bit overall. It won't be changed. If you want fast math, use prefix functions or better yet extensions. | |
Geomol: 14-Nov-2009 | I tested under OS X with prefix math, and the same picture is seen. If it's because R3 isn't compiled for speed, then that might be the answer, so this isn't an issue. | |
BrianH: 14-Nov-2009 | It's a big picture balance thing. The optimizations were rebalanced in the change from R2 to R3 in order to increase overall power and speed of REBOL. REBOL has never been a math engine (not its focus), but now it can be because of extensions. Everything is a tradeoff. | |
Pekr: 21-Nov-2009 | Geomol - my question was rhetorical. I think I do understand what Gabriele means, I just don't agree with the outcome. There are clear places to post, easy as that. It is sometimes a bit difficult to get Carl's attention, but 80 tickets a month do get such attention. The development process of R3 might look chaotic, jumping from one area to the other, but if we want, and we care, we know how to get that attention. I for one asked Carl privately about your concern regarding R3 speed in certain situations. And you know what? I got an answer too. I asked Carl to comment on your ticket, and he did so. In a few hours. You could do just the same, no? It is very easy to become a naysayer, to express some worries, etc., but another thing is to actually act, not just talk, and then your saying applies - "less noise and more thinking (and acting) would be good for a change" :-) .... and please - I think I don't need any guidance on what I should comment on, or not. But the fact is that I don't want to let anyone dismiss the hard work which is being put into R3. I don't care about myself at all, but I see it as at least dishonest to those who really try to bring R3 out, and we have a few such friends here ... | |
Geomol: 24-Nov-2009 | Also, DO http://www.rebol.com/speed.r shows an increase in REBOL-Hertz. | |
Cyphre: 24-Nov-2009 | Here is a slight comparison with the latest R3 release. I used this script identically for all tests: http://cyphre.mysteria.cz/tests/mandelbrot-int.r
Results on AMD Athlon 1.4GHz, 1GB RAM:
REBOL2, partially JIT compiled version: 0.471s (1.0 speed ratio)
REBOL2 (REBOL/View 2.7.6.3.1 14-Mar-2008): 12.15s (25.8 x slower)
REBOL3 (r3-a95.exe): 13.87s (29.45 x slower)
REBOL3 (r3-a94.exe): 17.54s (37.24 x slower) | |
Geomol: 24-Nov-2009 | Hm, I need to test some more, I guess, because I initially see a speed increase, but your results show differently. | |
BrianH: 19-Dec-2009 | You might also consider primes: make block! max-value instead of 10000 for speedup of prime calculation of max-value over 10000. Trade memory for speed. | |
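BrianH's suggestion is the classic memory-for-speed trade: pre-sizing the block to max-value means the block never has to be grown and reallocated mid-sieve. A minimal sketch of a sieve using that idea (`sieve` and its locals are hypothetical names for illustration, not code from the thread):

```rebol
sieve: func [max-value [integer!] /local flags primes i j] [
    ; pre-size both blocks up front: memory traded for speed,
    ; so APPEND and POKE never trigger a reallocation
    flags: make block! max-value
    insert/dup flags true max-value
    primes: make block! max-value
    i: 2
    while [i <= max-value] [
        if pick flags i [
            append primes i
            ; mark every multiple of i as composite
            j: i * i
            while [j <= max-value] [poke flags j false j: j + i]
        ]
        i: i + 1
    ]
    primes
]
probe sieve 30   ; [2 3 5 7 11 13 17 19 23 29]
```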
Steeve: 31-Dec-2009 | but there are several criteria for optimizing something:
- Best speed
- Shortest code
- Smallest memory overhead
- Best ratio of the above criteria | |
Group: !Cheyenne ... Discussions about the Cheyenne Web Server [web-public] | ||
Dockimbel: 22-Aug-2009 | I've tested only with stunnel, but nginx is also a very good option if you want to speed-up static files serving. | |
Dockimbel: 11-Sep-2009 | CGI handling: two different strategies are applied:
- if the target script is a REBOL script, the process is already a REBOL session, so there is no need to start a new one (avoiding the startup cost, so you get FastCGI speed). The shebang line is ignored in that case.
- if the target script is not a REBOL script (no REBOL header), the classic approach is used: setting environment variables, CALLing the executable from the shebang line, sending input, catching output, ... | |
Dockimbel: 15-Oct-2009 | benchmarks of TCP between two processes: no, I never needed to measure that speed because, anyway, I don't have real alternatives. | |
Maxim: 15-Oct-2009 | well... the main call to remark is ... compile hehehe... I do plan on making an XML document model eventually. basically, it would convert XML elements into your current Remark document models, so you can leverage the same code base but with another data model as input. optionally you could build direct XML document models for a bit more speed. all that needs to be done to make it easy for you guys is to build a few simple base XML tags which allow you to build the dialect based on xml element names. | |
Dockimbel: 23-Dec-2009 | I looked at Node a few days ago, interesting choice to see JS at server-side. I guess that the included 8k lines of C code help a lot having a decent speed. ;-) | |
Dockimbel: 3-Jan-2010 | That would certainly boost pure static resource serving speed going close to Apache and in some cases, close to Lighttpd, but for the RSP scripts, that won't change anything. Only threading could bring a good speed boost there. | |
Graham: 29-Jan-2010 | The other issue is that many of the open source projects are just too "hard" for the casual user to contribute. It requires lots of documentation to get up to speed and often that is lacking. | |
Janko: 14-Mar-2010 | that static form for localised strings is a great idea #[...] does localisation affect performance much? I suppose it increases RAM usage, since the app holds the two hashtables in memory (which is needed, I know, for speed) | |
Dockimbel: 8-May-2010 | Nodejs: I'm not sure if JS in V8 executes much faster than R2; I guess that V8's JIT would give it a significant raw speed advantage over Cheyenne. On the scalability side, Nodejs will scale much better than Cheyenne for a high number of concurrent connections, just because it can use polling and kernel queues instead of the non-scalable select( ) used in WAIT by REBOL. This is an important aspect if long-lasting connections are used. | |
Graham: 9-Jul-2010 | And Moore's law solves the speed problem | |
Dockimbel: 9-Jul-2010 | Graham: that was just an example of forbidden usage to *REBOL (as long as it is interpreted); performance is still relevant despite current CPU speeds. | |
Graham: 9-Jul-2010 | That's why people use interpreted languages/scripting languages .. to speed up the development cycle not for writing time critical apps | |
Pekr: 8-Dec-2010 | At least a memory constraint is a good one. That should prevent memory leakage. I personally don't like eval at all, as my brain is not mature enough to guess how many cycles my script will ideally need. I would welcome a time constraint as well; it was proposed, but not accepted. Here's what does not work yet:
# If the program quits, the exit code is set to 101, the same as any security termination; however, we may want to use special codes to indicate the reason for the quit.
# Some types of loops are not yet checked, but we will add them. For example, PARSE cycles are not yet counted.
# Time limits are not yet supported, but may be added in the future. However, the cycle limit is better for most cases, because it is CPU speed independent. | |
Maxim: 15-Apr-2011 | my client had one console for each process on his Cheyenne setup... I need to see what is happening in each one independently of the others. Trying to follow 5 high-speed traces in a single file is like using AltME without any groups - it would be impossible. ;-) If one worker has a fuck-up, I need to be able to see why *it* is failing. | |
Maxim: 27-Apr-2011 | wrt API server speed: adding up all the required Cheyenne server handling and TCP transfer, we get:
>> s: chrono-time read http://localhost:81/echo.xml?value=tadam difference chrono-time s
connecting to: localhost
== 0:00:00.010442948 | |
Kaj: 25-May-2011 | You're right about the speed relations. We run ancient hardware, but what usually matters is network speed | |
Janko: 23-Nov-2011 | Endo, thanks for the code. I will need something similar for sqlite. I just got a first "db is locked" error with it yesterday at UsrJoy. What I'm trying to log is side-info (like usage info), so I don't want to impact the speed of response by having an additional disk write in the request-response process (it has to be async). Doc: I have used the debug functions for various info logging too until now (and I do a diff on the trace in cron and send myself an email if there is any difference), but I am trying to log more things, and I would like to leave trace.log for errors only. I was interested in whether the existing functionality that serializes debug output to trace.log could be used to log to some other file, like info.log. That would complicate the app code the least... otherwise I was thinking of what Kaj proposed: to have some queue to send data over TCP, which would write it to disk in intervals. That would bring another dependency into the app code... something like redis could automatically work like this, I think. | |
Maxim: 12-Feb-2012 | Gregg, I know your take on optimisation... ;-) <start rant ;-) > If I had the same opinion, liquid would still be 10 times slower than it is now. Each little part of the changes adds up, and after years it really adds up. I have some new changes which will probably shave off another 5-10% when they are done. It requires several changes (some probably removing less than a %). It's been like that since the beginning. The relative impact of any optimisation is always bigger the more you do it. The first 1% looks like nothing when you compare it to the original 100%, but after you've removed 25% it's now 1.33%... and when your app is 10 times faster, that 1% is now 10% of the new speed. BTW, I'm not trying to justify this specific optimisation; I'm trying to balance the general REBOLer consensus that optimisation (in speed, code size, and RAM use) isn't important when compared to other REBOLing tasks... Have you (using "you" in the general sense, not Gregg specifically) ever looked at Carl's code? It's some of the most optimised and dense code out there... it hurts the brain... and it's very enlightening too. All the little variations in the series handlers are often there by design, each balanced against the others. Carl uses all of those little variations profusely... to me it's in the very essence of REBOL to be optimal and constantly be refined and improved, and this usually means shrinking it in all vectors. <end of rant ;-) > | |
GrahamC: 13-Feb-2012 | It doesn't really matter if you get a 133% increase in the speed of glass if it has been abandoned for Rebol! lol | |
Gregg: 14-Feb-2012 | I believe in optimizing on a case-by-case basis, as most do. And I believe in optimizing different things in any given case. Size, speed, flexibility, and readability are all fair game for optimization. As far as AltME and other slow REBOL UIs, I remember Carl saying once that View is a miser, saving pennies, while VID is the government and spends millions. I think whoever designed the list model used in AltME and other apps (e.g. IOS conference and messenger) chose to make the implementation small and quick to write, knowing that it might not be fast. They may also not have imagined how far it would be pushed. | |
Pekr: 14-Feb-2012 | AltME is consistent speed-wise. I have not reinstalled my Vista for 3 years, so once my notebook boots, it takes 5-7 minutes to be usable. I click on many apps to start, and AltME definitely starts first. Stuff like Outlook, Firefox, etc. usually takes tens of seconds! | |
Group: Profiling ... Rebol code optimisation and algorithm comparisons. [web-public] | ||
Maxim: 17-Sep-2009 | integer to pair conversion speed tests:
>> s: now/precise loop 1000000 [to-pair 2] print difference now/precise s
0:00:00.547
>> s: now/precise loop 1000000 [1x1 * 2] print difference now/precise s
0:00:00.219
>> s: now/precise loop 1000000 [to pair! 2] print difference now/precise s
0:00:00.328
>> s: now/precise loop 1000000 [as-pair 2 2] print difference now/precise s
0:00:00.937 | |
Maxim: 29-Oct-2009 | here is a screen dump of iteration vs native maximum-of use... goes to show the speed difference of binary vs interpreted!
>> a: [1 1 1 2 2 2 2 2 3 3 3 3 3 4 4 4 4 4 5 5 5 5 5 6 6 6 6 6 7 7 7 7 7 8 8 8 8 8 9 9 9 9 9 10 10 10 110 110]
>> report-test profile :maximum-of reduce [a]
----------------
performed: 10000000 ops within 0:00:09.781 seconds.
speed: 1022390.34863511 ops a second
speed: 34079.678287837 ops a frame
----------------
>> report-test profile :get-largest reduce [a]
----------------
performed: 10000 ops within 0:00:01.86 seconds.
speed: 5376.34408602151 ops a second
speed: 179.21146953405 ops a frame
----------------
we are talking 190 TIMES faster here | |
Maxim: 30-Oct-2009 | but I'm discovering a lot of discrepancies in things like string vs block speed of certain loops... and a lot of other neat things like: pick series 1 is 15% faster than not tail? series | |
Maxim: 30-Oct-2009 | (i: i + 1) > 1000 same speed as i: i + 1 i > 1000 | |
Maxim: 30-Oct-2009 | yeah, and with the faster function calling in R3, liquid should get an immediate boost in speed... it does A LOT of tiny function calls to manage the graph, as if they were first-level types with accessors, instead of using a controller which loops over the graphs and limits flexibility. | |
Group: !REBOL3 Priorities ... Project priorities discussion [web-public] | ||
Pekr: 3-Nov-2009 | Btw - for the future, to speed up some developments, I propose the bounty system - http://bounties.morphzone.org/.... we would just need to define a few rules, e.g.: - the ability to merge bounties - the ability to predefine a possible implementor - not everybody's code can realistically be accepted, etc. I think that way we can speed up some developments too ... | |
GiuseppeC: 13-Nov-2009 | Geomol, last year I wrote the same thing, but this year a lot has happened. Once the alpha is finalized and VID is complete, expect a boost in development. Also, I suppose REBOL is short of money and programmers, so they cannot speed up the project. | |
Henrik: 14-Nov-2009 | If it's math heavy it will probably be around the same. If you use graphics, the better scalability of having many GOBs will help speed up certain operations. DRAW is currently around the same speed. If you use it as a C extension, then you will of course get C speeds. There are a few tricks in R3 to reduce the need for copying as well as some functions that have gone from mezzanine to native. | |
Henrik: 14-Nov-2009 | The key is that if we want real speed, we can do it in C now. | |
PeterWood: 14-Nov-2009 | Geomol: "So we can expect R3 to be slower than R2, when it comes to calculations?" No, I wouldn't expect R3 to have slower calculations. From what Carl has said, the R3 Alphas are not optimised for speed when compiled. | |
Maxim: 19-Nov-2009 | sure, its not for the host, but its still not huge, and makes for a nice feature I'd add in any of my speed-critical applications, if I had access to it. | |
Group: !REBOL3 Schemes ... Implementors guide [web-public] | ||
Graham: 14-Jan-2010 | I haven't used IMAP4 for years so ... for me it's going to take time to get up to speed again. | |
Steeve: 28-Nov-2011 | Not only that, AltME has a lot of room for speed improvements :) | |
Group: !REBOL3 ... [web-public] | ||
WuJian: 18-Jan-2010 | !REBOL3 -OLD1 20905 messages Core 15498 messages So, start a new group after 20000 messages? For higher access speed | |
Maxim: 20-Jan-2010 | R3 is picking up speed. USERS are starting to take charge of various projects. I just wanted to make everyone realize just how different & better the R3 ecosystem is today than it ever was for R2. congrats everyone :-) | |
sqlab: 8-Feb-2010 | Recently I made some tests comparing the speed of R3 to R2. After getting results where R3 was up to 100 times slower than R2, and more, I found that parse is now almost unusable for simple parsing under some circumstances. I will add a CureCode ticket later. | |
BrianH: 14-Feb-2010 | No rebcode in R3. Rebcode got its speed from certain tricks that don't work as well in R3 due to the changes in the context model. However, you can make your own rebcode as an extension if you like (one of my pending projects). | |
Paul: 14-Feb-2010 | Yeah was looking for the speed. | |
BrianH: 4-Mar-2010 | It might be hard to believe, but R3 has gotten so efficient that BIND/copy overhead is really noticeable now in comparison. In R2 there were mezzanine loop functions like FORALL and FORSKIP that people often avoided using in favor of natives, even reorganizing their algorithms to allow using different loop functions like FOREACH or WHILE. Now that all loop functions in R3 are native speed, the FORALL and FORSKIP functions are preferred over FOREACH or FOR sometimes because FOREACH and FOR have BIND/copy overhead, and FORALL and FORSKIP don't. The functions without the BIND/copy overhead are much faster, particularly for small datasets and large amounts of code. | |
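The difference BrianH describes comes from how the two loops are written: FOREACH introduces a loop variable, so it must BIND/copy its body into a new context, while FORALL simply advances the series position of an existing word. A hedged sketch of the two styles (R3; `data` and the sum words are assumed example names, not from the message):

```rebol
data: [1 2 3 4 5]

; FOREACH: convenient, but the body is BIND/copy'd to the context
; holding x - overhead that grows with the size of the body
sum1: 0
foreach x data [sum1: sum1 + x]

; FORALL: no new binding; the word blk itself walks the series
sum2: 0
blk: data
forall blk [sum2: sum2 + first blk]

print [sum1 sum2]   ; both sums are 15
```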
Ladislav: 5-Mar-2010 | and, the speed difference for Return and Exit may still exist, but only if the respective function does not have any parameter | |
Steeve: 2-May-2010 | Struct! in R2 is great, especially to optimize (memory and speed) data conversions. I give my consent :) | |
Maxim: 4-May-2010 | yes, 'resolve will be extensively used in liquid R3, especially in my graphic kernel, which uses labeled inputs exclusively. But it still needs to use class-based path access for simple speed and memory requirements. Each class currently gobbles up about 20k of RAM once bound; rebinding on the fly would be excessively slow. liquid's lazy evaluation provides vastly superior optimization in any case: functions aren't even called in the first place ;-) | |
Maxim: 4-May-2010 | then the spec really behaves like a class. So it's a tradeoff between speed and RAM use. You decide. | |
PeterWood: 5-May-2010 | There doesn't seem much to choose between foreach and forall in terms of speed:
>> dt [loop 100000 [foreach gob d/pane [x: gob]]]
== 0:00:00.125035
>> dt [loop 100000 [gobs: d/pane forall gobs [x: first gobs]]]
== 0:00:00.133837
Using a do-it-yourself repeat loop, courtesy of Rebolek, seems a little, but not much, faster:
>> dt [loop 100000 [repeat i length? d [x: d/:i]]]
== 0:00:00.115478 | |
Group: !REBOL3 /library ... An extension adding support for dynamic library linking (library.rx) [web-public] | ||
Maxim: 12-Feb-2010 | @ Robert, the reason for this /library (which is an extension) is that most REBOLers do not want to mess around with compiling C stuff (most probably don't even know where to begin), and for most tasks the speed hit isn't really noticeable. |