World: r3wp
[Web] Everything web development related
older newer | first last |
[unknown: 9] 2-Feb-2006 [1045x2] | Messages are coming in at about 1 per second. |
Oops, sorry that was for a different group. I like choice, and I need to download Opera so we can make sure we are compatible, but it is really fighting us. | |
Sunanda 2-Feb-2006 [1047] | As a backstop, you could try getting some older versions of Opera from a browser vault: http://browsers.evolt.org/?opera/mac Maybe then an old version of Opera will update itself to the latest for you.... |
Ashley 2-Feb-2006 [1048] | "guys, you are unbelievable bashers of Mozilla" I'm not! ;) I've been testing four different browsers on my Mac (Safari, Opera, Firefox and Firefox PPC - http://www.furbism.com/firefoxmac/) and while the PPC build is 9.5MB compared to Opera's 5.5MB (which also includes M2 mail), it is noticeably faster than the other browsers and has not crashed once since I installed it 2 weeks ago. The only problem I've encountered is with my !@#$%^& bank's IE-only site (even with Opera I have to change spoof modes depending upon which particular page of the site I'm at, and Safari works fine except when the site tries to open a PDF statement within the browser using an Adobe Reader plugin – never mind the fact that Mac handles PDF natively ... !@#$%&). |
[unknown: 9] 2-Feb-2006 [1049] | Thanks Sunanda, I will try that. Ashley, yeah, we have been testing IE 7; same thing: banks! |
PhilB 3-Feb-2006 [1050] | Got to agree with Petr .... Firefox works fine for me ... even my banking sites .... I can't remember the last time I had to fire up IE. |
Geomol 4-Feb-2006 [1051] | I mostly use Safari on Mac these days. It works with my bank too. :-) When I'm on Windows, I mostly use Opera. I used to use Mozilla, and I still use Firefox from time to time, both under Windows and Mac. I very, very rarely use IE. Safari can be used for 99+% of the sites I visit. Today I had a problem, because I wanted to watch the 2 Danish Super Bowl updates our reporters sent from the US. And a Danish TV channel, TV2 Sputnik, requires IE6 under Windows to run, and only that. Argh! |
Henrik 4-Feb-2006 [1052x2] | Geomol, have you tried Flip4Mac yet? It works impressively with WMV video |
though not TV2 Sputnik | |
Geomol 5-Feb-2006 [1054] | Nope, haven't tried that one. |
Joe 9-Feb-2006 [1055x2] | Has anybody experimented with emulating web continuations in REBOL? Some info on the Ruby approach is here (http://www.rubygarden.org/ruby?Continuations) and Factor (http://factorcode.org/cont-responder-tutorial.txt) |
http://lisp.tech.coop/Web%2FContinuation for lots of reference info on the topic | |
JaimeVargas 9-Feb-2006 [1057] | The technique came from Scheme. But for this technique to work, the language needs to support continuations natively. REBOL 1.0 was able to, but continuations were removed in 2.0; maybe 3.0 will have them again. We hope to be able to incorporate them in Orca, with the addition of tail recursion and other goodies ;-) |
Carl 9-Feb-2006 [1058] | Yes, we took them out. REBOL ran a lot faster as a result. I used to be a huge fan of continuations 20 years ago. But, continuations do not provide enough benefit for the performance hit on evaluation speed and memory usage. (Stop and think about what is required internally to hold in an object for any period of time the entire state of evaluation.) It's more of a programmer play toy than a useful extension. |
JaimeVargas 9-Feb-2006 [1059] | I believe this is the paper with the original work: 'Modeling Web Interactions and Errors' http://www.cs.brown.edu/~sk/Publications/Papers/Published/kfgf-model-web-inter-error/paper.pdf |
Pekr 10-Feb-2006 [1060x3] | Are continuations the basis for tasking/threading? |
Do whatever you want with Orca; it is just that when I mentioned continuations a few years ago on the ML, Carl got on steroids and posted something like been-there, done-that, don't try to teach me how I should design the language :-) |
If they are not really that useful, then fine; I am not the one who calls for "language purity" just for the sake of language purity itself ... | |
Geomol 10-Feb-2006 [1063] | Hm, to me continuations are reminiscent of GOTOs, which can't be good! |
Joe 10-Feb-2006 [1064x2] | I am not asking for native continuations but a way to emulate them in web applications. |
Geomol, the real advantage of continuations is for handling web forms and ensuring users get a consistent experience. Check the paper Jaime points out | |
JaimeVargas 10-Feb-2006 [1066] | Joe, I guess you could emulate continuations. But it is not an easy task. I have done some work in this direction by creating a Monadic extension. But it is not yet complete. |
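[Editor's note: the thread is about REBOL, which has no native continuations after 1.0; as an illustration of the kind of emulation Jaime describes, here is a minimal sketch in Python, where generators stand in for real continuations. The flow, function names, and prompts are all hypothetical.]

```python
import uuid

# Table of suspended "continuations": continuation id -> generator.
suspended = {}

def flight_search():
    # A multi-step web interaction written as straight-line code;
    # each yield suspends the computation until the next HTTP
    # request resumes it with the user's answer.
    origin = yield "Where are you flying from?"
    dest = yield "Where are you flying to?"
    yield "Searching flights " + origin + " -> " + dest

def start(flow):
    # Begin a flow: run to the first yield and hand back a
    # continuation id the client can embed in the reply link.
    gen = flow()
    prompt = next(gen)
    k_id = uuid.uuid4().hex
    suspended[k_id] = gen
    return k_id, prompt

def resume(k_id, answer):
    # Re-enter the suspended computation where it left off.
    gen = suspended[k_id]
    return gen.send(answer)
```

Each request/response round-trip is one suspend/resume pair; the server keeps the paused generator, which is a weaker (single-shot, non-serializable) cousin of a first-class continuation.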
Joe 10-Feb-2006 [1067x8] | The problem I am trying to solve is strictly for web programming, e.g. ensuring there are no inconsistencies in a shopping cart, etc ... |
The approach I have is that every session has a cookie and disk storage associated with the cookie. When I define a web form, the action method gets a continuation id as a CGI parameter, so if at that point you clone the browser window, you as a user have two continuation ids |
This approach is not very scalable; it's just a start, waiting for better ideas and input |
When the user posts a form, the form CGI stores the continuation id and a REBOL block with name-value pairs |
If you also post the second form (something you would do, e.g., when checking flights in a reservation engine, as Jaime's reference paper suggests), a second continuation id and REBOL block would be stored for the same session |
So basically the continuations are ensured by using both the cookie with its associated storage and the continuation id that is added to the links as a CGI GET parameter |
I'll stop now so that I get more input from others. I imagine many of the gurus here have done something like this as this is the thorny issue with web apps | |
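[Editor's note: an in-memory Python sketch of the scheme Joe outlines above; his version uses disk storage and REBOL blocks, and all names here are made up. The key point is that each rendered form carries its own continuation id, so a cloned browser window gets a distinct one.]

```python
import secrets

# session cookie -> {continuation id -> saved form fields}
store = {}

def new_session():
    # One cookie per session, with storage keyed by that cookie.
    cookie = secrets.token_hex(8)
    store[cookie] = {}
    return cookie

def render_form(cookie):
    # Each rendered form embeds a fresh continuation id as a CGI
    # parameter; cloning the window means rendering again, so the
    # clone gets its own id.
    k_id = secrets.token_hex(8)
    store[cookie][k_id] = None      # reserved, not yet posted
    return k_id

def post_form(cookie, k_id, fields):
    # Posting stores the name-value pairs under that continuation id;
    # a second form posted in the same session is stored independently.
    store[cookie][k_id] = dict(fields)

def lookup(cookie, k_id):
    return store[cookie][k_id]
```

Because each posted form is keyed by its own continuation id rather than by the session alone, two windows of the same session cannot overwrite each other's state, which is the shopping-cart inconsistency the approach is meant to prevent.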
Sunanda 10-Feb-2006 [1075] | What you are doing, Joe, is what we old-timers call pseudoconversational processing. Usually, you can kick much of the complexity upstairs if you have a TP monitor supervising the show. Sadly, most web apps don't (a webserver doesn't quite count). People have been doing this sort of thing for decades in languages without continuations support; so, though it's a nice-to-have feature, it is not a show-stopper. |
[unknown: 9] 10-Feb-2006 [1076] | Joe, you are asking a question that finds its answer in a completely different model. It reminds me of the joke "What I meant to say was, 'Mother, would you please pass the salt'" (look it up). The answer is to throw away the brochure (page) model of the web and move to Web 2.0, where there is a cohesive (continuous) model. The UI is completely separated from the backend, and the UI is a single entity that is persistent during the session. Everything else is simply a pain. Most sites are horizontal (shallow) as opposed to vertical (deep). And most are still modeled on the brochure (page) as opposed to the space (like a desktop). |
Oldes 13-Feb-2006 [1077] | I'm administrating some pages where a lot of text articles are published. And because 50% of the traffic comes from robots such as the Google crawler, I'm thinking that I could serve the content of the page in REBOL format (a block). The robot would get the text for indexing, and I would lower the amount of data transferred with each robot request, because I don't need to generate the design and some web parts which are not important for the robot. What do you think, should I include a REBOL header? |
Sunanda 13-Feb-2006 [1078] | That's a form of cloaking. Google does not like cloaking, even "white hat" cloaking of the sort you are suggesting: http://www.google.com/support/bin/answer.py?answer=745 Better to respond to Google's if-modified-since header -- it may reduce total bandwidth by a great deal: http://www.google.com/webmasters/guidelines.html Also consider supplying a Google Sitemap -- that can have modification dates embedded in it too. It may reduce googlebot's visits to older pages: http://www.google.com/webmasters/sitemaps/login |
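[Editor's note: the If-Modified-Since mechanism Sunanda recommends works like this sketch in Python; the `respond` helper is hypothetical, but the header formats and the 304 status code are standard HTTP conditional-request behaviour.]

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

def respond(last_modified, if_modified_since=None):
    # Handle a conditional GET: a crawler that sends If-Modified-Since
    # gets "304 Not Modified" and no body when the page is unchanged,
    # saving the bandwidth of re-sending the whole article.
    headers = {"Last-Modified": format_datetime(last_modified, usegmt=True)}
    if if_modified_since is not None:
        since = parsedate_to_datetime(if_modified_since)
        if last_modified <= since:
            return 304, headers
    return 200, headers
```

The crawler echoes back the Last-Modified value it saw on its previous visit, so unchanged pages cost only the headers, with no cloaking involved.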
Oldes 13-Feb-2006 [1079] | But it's not just Google who is crawling; at this moment I recognize 11 crawlers that check my sites regularly. |
Sunanda 13-Feb-2006 [1080] | Some of them are just bad -- ban them with a robots.txt. Some (like MSNbot) will respond to the (non-standard) crawl-delay in robots.txt: that at least keeps them coming at a reasonable speed. Some are just evil and you need to ban their IP address by other means... like flood control or .htaccess. REBOL.org has a fairly useful robots.txt: http://www.rebol.org/robots.txt |
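[Editor's note: a robots.txt along the lines Sunanda describes might look like the following; the bot names are only examples, and crawl-delay is a non-standard extension that only some crawlers honour.]

```
# Slow down a bot that honours the non-standard crawl-delay
User-agent: msnbot
Crawl-delay: 30

# Ban an unwelcome bot outright (only works if it obeys robots.txt)
User-agent: BadBot
Disallow: /

# Everyone else: full access
User-agent: *
Disallow:
```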
Oldes 13-Feb-2006 [1081] | So you think I should not serve a different (less rich) version of the page to robots. |
Sunanda 13-Feb-2006 [1082] | You could try that as a first step: -- Create a robots.txt to ban the *unwelcome* bots who visit you regularly. -- Many bots have a URL for help, and that'll tell you if they honour crawl-delay... If so, you can get some of the bots you like to pace their visits better. If that doesn't work: you have to play tough with them. |
Oldes 13-Feb-2006 [1083] | I don't need to ban them:) I would prefer to play with them:) Never mind, I will probably make the REBOL formatted output anyway. If I have RSS output, why not have REBOL output as well. Maybe it could be used in the future, when REBOL is able to display rich text. |
Sunanda 13-Feb-2006 [1084] | Check if you can turn HTTP compression on with your webserver. It saves bandwidth with visitors who are served the compressed version. |
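[Editor's note: one common way to do what Sunanda suggests, assuming an Apache 2.x server with mod_deflate loaded; the exact directives depend on your webserver.]

```
# Compress text responses for clients that send Accept-Encoding: gzip
AddOutputFilterByType DEFLATE text/html text/plain text/xml
```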
Oldes 13-Feb-2006 [1085] | The bandwidth is not such a problem now:) I was just wondering if it could be used somehow to make REBOL more visible. |
Sunanda 13-Feb-2006 [1086] | Having REBOL formatted output is / can be a good idea: REBOL.org will supply its RSS that way if you ask it nicely: http://www.rebol.org/cgi-bin/cgiwrap/rebol/rss-get-feed.r?format=rebol But *automatically* supplying a different version to a bot than the one you would show to a human is called cloaking, and the search engines don't like it at all. If they spot what you are doing, they may ban you from their indexes completely. |
Oldes 13-Feb-2006 [1087x3] | Do you specify a content-type when you produce the output? It doesn't look good if you open it in a browser; it should look better than XML for newbies. |
I hope I'm looking better than XML :))) | |
Sunanda 13-Feb-2006 [1090] | Yes. If you clicked the link I gave above, then you saw a page served as text/html [probably should be text/plain -- so I've changed it]. If you try format=rss then you get a page served as text/xml. In both cases, the output is not meant for humans: one format is for REBOL and one for RSS readers. |
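[Editor's note: a Python sketch of the content-type selection Sunanda describes; the rebol.org script itself is a REBOL CGI, and the `serve_feed` helper here is hypothetical.]

```python
def serve_feed(fmt, body):
    # Pick a Content-Type header to match the requested feed format:
    # a REBOL block is served as plain text, RSS as XML.
    types = {
        "rebol": "text/plain",
        "rss": "text/xml",
    }
    content_type = types.get(fmt, "text/plain")
    # CGI output: headers, blank line, then the body.
    return "Content-Type: " + content_type + "\r\n\r\n" + body
```

Serving the REBOL variant as text/plain means a browser shows it as readable source rather than trying to parse it as markup, which is why it "looks better than XML" to a casual visitor.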
Oldes 13-Feb-2006 [1091] | yes, now it's ok:) |
Sunanda 13-Feb-2006 [1092] | Good to know, thanks. Sometimes changes like that break in other browsers. |
Oldes 13-Feb-2006 [1093x2] | I know it's not for human readers, but for example this chat is public, and if someone clicks on the link, now it looks much better. Don't forget that REBOL should be human friendly:) |