World: r3wp
[!Cheyenne] Discussions about the Cheyenne Web Server
BrianH 3-Dec-2010 [9301] | We advise Carl on stuff all the time as well, even on the core design. He is always looking for new ideas. |
Pekr 3-Dec-2010 [9302] | hmm, so I turned it into advocacy anyway :-) Wrong channel again, sorry for that ... |
Henrik 3-Dec-2010 [9303] | Switching to advocacy. |
Dockimbel 3-Dec-2010 [9304] | "Doc stated, why he will not port Cheyenne to R3, that is all ... and it is his free will to state that": I said I won't port it now, because doing that now would be like shooting myself in the foot several times. I never said that I'll never port it in the future. When R3 reaches R2's level of features and stability, I might port it (irony not intended). |
BrianH 3-Dec-2010 [9305] | We look forward to that, seriously. But I at least understand that it is not appropriate to port it now. I hope to get advice from you about HTTP support in R3, when such advice is needed - that would be great :) |
Dockimbel 3-Dec-2010 [9306] | HTTP is one of the simplest Internet protocols to support, nothing that a skilled developer like you can't handle. :-) Anyway, I would be pleased to answer questions, as my free time permits. |
BrianH 3-Dec-2010 [9307] | Cool, thanks. And I expect that the Q&A will likely be asynchronous :) |
Dockimbel 3-Dec-2010 [9308] | Probably :) |
Cyphre 3-Dec-2010 [9309] | Doc, yes, that was nothing against you. It was more about Pekr's response to your decision not to port to R3 at the moment (which is understandable). |
Steeve 5-Dec-2010 [9310] | Doc, I wonder: it shouldn't be that hard to use R3 for the CGI part only, right? |
Oldes 5-Dec-2010 [9311] | I guess it's no harder than using Perl as CGI or PHP as FastCGI under Cheyenne. |
Steeve 5-Dec-2010 [9312x2] | Is that material up-to-date? http://www.rebol.net/r3blogs/0182.html |
Oldes 5-Dec-2010 [9314] | It's working here under Windows with R3A110 (when I change the first line, of course). |
Dockimbel 5-Dec-2010 [9315] | I confirm it's working ok with a110 under Win7. |
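The standard CGI contract that these R3 scripts rely on is language-agnostic: the server passes request metadata in environment variables, and the script writes response headers, a blank line, then the body to stdout. A minimal sketch in Python (the handler name and the body text are illustrative, not Cheyenne's or R3's API):

```python
import os
import sys

def handle_cgi():
    # Standard CGI: request metadata arrives in environment variables
    # (REQUEST_METHOD, QUERY_STRING, ...); the response is written to
    # stdout as headers, a blank line, then the body.
    method = os.environ.get("REQUEST_METHOD", "GET")
    sys.stdout.write("Content-Type: text/plain\r\n\r\n")
    sys.stdout.write(f"method={method}\n")
```

The same shape applies whether the interpreter behind the web server is Perl, PHP, R2, or R3; only the first line (the interpreter path) of the script changes.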
Kaj 5-Dec-2010 [9316] | That's standard CGI, but what would be needed to give R3 a FastCGI interface? |
Dockimbel 5-Dec-2010 [9317] | A FastCGI server protocol implementation in R3. |
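Such an implementation starts with the record framing defined in the FastCGI 1.0 specification: every message on the socket is an 8-byte big-endian header (version, record type, request id, content length, padding length, reserved byte) followed by the payload. A minimal Python sketch of that framing (function names are illustrative; only a few of the spec's record-type constants are shown):

```python
import struct

FCGI_VERSION = 1
# A few record types from the FastCGI 1.0 spec:
FCGI_BEGIN_REQUEST = 1
FCGI_STDIN = 5
FCGI_STDOUT = 6

def pack_record(rec_type: int, request_id: int, content: bytes) -> bytes:
    """Frame one FastCGI record: 8-byte big-endian header + payload."""
    header = struct.pack(">BBHHBB", FCGI_VERSION, rec_type,
                         request_id, len(content), 0, 0)
    return header + content

def unpack_record(data: bytes):
    """Parse one record off the front of a byte stream.
    Returns (record type, request id, payload, remaining bytes)."""
    _ver, rec_type, request_id, length, padding, _rsv = \
        struct.unpack(">BBHHBB", data[:8])
    body = data[8:8 + length]
    rest = data[8 + length + padding:]
    return rec_type, request_id, body, rest
```

On top of this framing, a server has to handle the BEGIN_REQUEST/PARAMS/STDIN sequence and answer with STDOUT and END_REQUEST records.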
Kaj 5-Dec-2010 [9318] | Are requests serialised to a FastCGI server, or is the server supposed to multi-task them? |
BrianH 5-Dec-2010 [9319] | Both are supported by the protocol, afaik, depending on the particular server settings. |
Kaj 5-Dec-2010 [9320] | Serialised operation would be easier, until R3 gets proper tasking |
Dockimbel 5-Dec-2010 [9321] | FastCGI requests are serialized by Cheyenne, but without tasking, R3 will still process one at a time, so any blocking call or long processing will block the whole server. |
Kaj 5-Dec-2010 [9322] | The R3 server, right, not Cheyenne? |
BrianH 5-Dec-2010 [9323x2] | In theory FastCGI can support app pools, for multiprocessing rather than multithreading (iirc from the docs). |
You might have to roll your own app pools though, like Cheyenne does for RSP requests. | |
Kaj 5-Dec-2010 [9325] | App pools running in separate processes? |
BrianH 5-Dec-2010 [9326] | Separate R3 interpreter instances. |
Kaj 5-Dec-2010 [9327x2] | Yes. I'm not very interested in those, because if I were to make an app server, it would be to cache my app. I probably wouldn't want multiple processes of the app server, because the cached data would be duplicated in every process. |
That model is already supported in Cheyenne. The drawback is that it has to be written in R2. |
BrianH 5-Dec-2010 [9329] | The main plus of FastCGI for REBOL is to cut down on startup overhead. You can do things with persistent state, too. |
Kaj 5-Dec-2010 [9330] | Yes, plus the startup overhead of the app |
BrianH 5-Dec-2010 [9331] | Yup. The downside to app pools is that you have to coordinate access to the data amongst multiple interpreter instances (less cache, more disk access), but that's not as big a problem for mostly-read apps. |
Kaj 5-Dec-2010 [9332] | Yeah, I'm mostly concerned about optimising the read case |
Dockimbel 5-Dec-2010 [9333] | Cheyenne has some experimental support for multiple FastCGI server instances (multiprocessing), but this has never been really tested. The balancing is very simple, distributing requests using a round-robin method. Pool management is minimal: all instances are started at launch and killed when Cheyenne quits, with no restarting or failover handling. |
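Round-robin balancing of the kind described here fits in a few lines; this is a simplified Python model of the idea, not Cheyenne's actual code:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Distribute requests over a fixed set of worker instances in turn."""

    def __init__(self, workers):
        self._workers = cycle(workers)  # endless iterator over the pool

    def pick(self):
        # Each call returns the next instance, wrapping around at the end.
        return next(self._workers)
```

Round-robin spreads load evenly but is oblivious to how busy each instance actually is, which is why it pairs naturally with the minimal pool management Doc describes.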
Kaj 5-Dec-2010 [9334] | Interesting. So unlike UniServe, there's a fixed number of instances, and requests to one instance are serialised? That would be quite workable |
Dockimbel 5-Dec-2010 [9335x2] | UniServe worker processes don't support multiplexed requests (multiplexing is the right word used in FastCGI specs IIRC). The FastCGI multiplexing mode requires a form of multitasking support to be able to handle all the incoming requests in a multiplexed way. Without that, you'll end up with just multiprocessing, which is what UniServe+Taskmaster are doing for CGI and RSP scripts. |
For example, PHP in FastCGI mode supports multiplexed requests over a single FastCGI socket, while dispatching the load internally between several PHP processes (or thread under Windows). In that mode, the main PHP process manages the thread/process pool alone, and starts new instances when required. | |
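Multiplexing in the FastCGI sense means records belonging to several requests are interleaved over one socket, keyed by request id, and the receiver reassembles each request's stream before dispatching it. A toy Python sketch of that demultiplexing step (records are simplified here to `(request_id, chunk)` pairs, ignoring record types):

```python
def demultiplex(records):
    """Reassemble per-request byte streams from interleaved FastCGI
    records. Each record is simplified to a (request_id, chunk) pair."""
    streams = {}
    for request_id, chunk in records:
        # Append each chunk to the buffer of the request it belongs to.
        streams.setdefault(request_id, bytearray()).extend(chunk)
    return {rid: bytes(buf) for rid, buf in streams.items()}
```

Reassembly itself is easy; the hard part Doc points out is that once several requests are in flight on one socket, the application needs some form of multitasking to actually service them concurrently.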
Pekr 5-Dec-2010 [9337] | Also remember the session affinity patch, which was added to FastCGI later, ensuring that the same session goes to the same process instance. |
Kaj 5-Dec-2010 [9338] | Speaking of session affinity, how does Cheyenne do that generally? If you serialise the requests for one session to one UniServe task master, they must be queued, right? |
Dockimbel 6-Dec-2010 [9339x2] | There's no need for session affinity internally in Cheyenne: the session context is carried with the request to any worker process. CGI/RSP requests are dispatched to any available worker process. If all are busy, a new one is forked, up to the maximum number (8 by default). If the maximum number of workers is reached and all are busy, the request is queued (the queue is global). Once a worker becomes available, it gets a new request assigned at once. |
Btw, worker processes are not equal wrt the load. The first in the list gets the most jobs, especially if requests can be processed quickly (which should be the case for most well-coded scripts). So you get a natural "affinity" to the first worker (which has the most code and data cached) for new incoming requests. In the long run, you can look at the workers' memory usage to see how the workload is spread: the first one will have the biggest memory footprint, the last one the smallest. The last one is a good indicator of whether the number of workers needs to be raised: if its memory footprint hasn't changed since the last restart, your server is handling the load fine. I should expose some worker and job-queue usage stats to help fine-tune the number of workers. |
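The dispatch rules just described (prefer an idle worker, fork new ones up to a maximum of 8, otherwise queue globally, and hand a freed worker a queued request at once) can be modeled compactly. This is a simplified Python sketch, not Cheyenne's code; note how preferring the lowest-numbered idle worker produces the natural affinity to the first worker:

```python
from collections import deque

class WorkerPool:
    """Toy model of the dispatch scheme: idle worker first, then fork
    up to max_workers, then a single global request queue."""

    def __init__(self, max_workers=8):
        self.max_workers = max_workers
        self.spawned = 0          # how many workers have been forked
        self.idle = []            # idle worker ids, lowest id first
        self.busy = set()
        self.queue = deque()      # global request queue

    def submit(self, request):
        """Return the worker id handling the request, or None if queued."""
        if self.idle:
            worker = self.idle.pop(0)      # lowest-numbered idle worker
        elif self.spawned < self.max_workers:
            worker = self.spawned          # "fork" a new worker process
            self.spawned += 1
        else:
            self.queue.append(request)     # all busy and at the cap
            return None
        self.busy.add(worker)
        return worker

    def done(self, worker):
        """Worker finished: give it a queued request at once, else idle it."""
        self.busy.discard(worker)
        if self.queue:
            self.queue.popleft()
            self.busy.add(worker)
            return worker
        self.idle.append(worker)
        self.idle.sort()                   # keep low ids preferred
        return None
```

Under light, fast-completing load, worker 0 keeps returning to the front of the idle list, so it handles most requests and accumulates the biggest cache, exactly the asymmetry Doc describes.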
Kaj 6-Dec-2010 [9341x2] | Thanks, good to know |
I do see the asymmetry on our server. I have also had cases, though, where the number of workers went above eight or to zero. I'm not sure if that is still happening with the recent version | |
Dockimbel 6-Dec-2010 [9343] | The worker number doesn't decrease unless you're using the -w command-line option (or unless your code or a native bug crashes some of them badly). Having more than 8 workers is possible if some of them are blocked (in an endless loop, or waiting for something forever). If you quit Cheyenne in that case, they'll remain there and will need manual killing. Cheyenne could do a better job of handling those non-responding workers in future versions. |
Steeve 6-Dec-2010 [9344] | R3 is better suited for that: secure [memory integer!] and secure [eval integer!] allow quitting from runaway loops. (Not from a forever loop that does nothing, though.) |
Gregg 6-Dec-2010 [9345] | And perhaps secure [time time!] |
Steeve 6-Dec-2010 [9346x2] | Yeah, perhaps... is there a ticket for that request? |
I guess Carl didn't want to offer this by default, because the slowdown may be drastic. |
Gregg 8-Dec-2010 [9348] | I don't know if there's a ticket. I could live with relatively coarse granularity, which would be much better than nothing, if that at least made it possible. |
Pekr 8-Dec-2010 [9349] | At least a memory constraint is a good one; that should prevent memory leakage. I personally don't like eval at all, as my brain is not mature enough to guess how many cycles my script will ideally need. I would welcome a time constraint as well; it was proposed, but not accepted. Here's what does not work yet: 1) If the program quits, the exit code is set to 101, the same as any security termination; however, we may want to use special codes to indicate the reason for the quit. 2) Some types of loops are not yet checked, but we will add them; for example, PARSE cycles are not yet counted. 3) Time limits are not yet supported, but may be added in the future; however, the cycle limit is better for most cases, because it is CPU-speed independent. |
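The cycle-budget idea behind secure [eval integer!] can be approximated in other interpreted languages. This Python sketch uses a trace hook that counts executed lines and aborts past a budget; it is not R3's mechanism, and all names here are illustrative. It also illustrates Steeve's point about the cost: every executed line pays the tracing overhead.

```python
import sys

class CycleLimitExceeded(Exception):
    """Raised when the evaluation budget is exhausted."""

def run_with_cycle_limit(func, limit):
    """Run func(), aborting once more than `limit` lines have executed.
    Roughly analogous in spirit to R3's secure [eval integer!] budget."""
    count = 0

    def tracer(frame, event, arg):
        nonlocal count
        if event == "line":
            count += 1
            if count > limit:
                raise CycleLimitExceeded(f"budget of {limit} lines exhausted")
        return tracer  # keep tracing nested frames

    sys.settrace(tracer)
    try:
        return func()
    finally:
        sys.settrace(None)  # always restore normal (untraced) execution
```

A coarse budget like this can stop an endless busy loop, but, as noted above, it cannot interrupt a loop that is blocked waiting on something and executes no lines at all.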
Steeve 8-Dec-2010 [9350] | You can quit/return any exit code |