AltME groups: search
world-name: r3wp
Group: !Cheyenne ... Discussions about the Cheyenne Web Server [web-public] | ||
Dockimbel: 2-Dec-2010 | Graham: you should try by adding the following header to your RSP script outputting a PDF file in order to display it in the browser : response/header 'Content-disposition {inline;filename="doc.pdf"} | |
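As a sketch, a whole RSP script serving an inline PDF along the lines of that suggestion might look like this (the %files/doc.pdf path is hypothetical; the response/header call is the one shown above, while emitting binary content with PRIN is an assumption about the RSP API, not confirmed behaviour):

```rebol
<%
    ; headers from the suggestion above; the PDF path is illustrative
    response/header 'Content-type "application/pdf"
    response/header 'Content-disposition {inline;filename="doc.pdf"}
    ; emitting the binary this way is an assumption, not confirmed API
    prin read/binary %files/doc.pdf
%>
```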
GrahamC: 2-Dec-2010 | This is what I am sending:
HTTP/1.1 200 OK
Server: Cheyenne/0.9.20
Date: Thu, 02 Dec 2010 15:33:25 GMT
Content-Length: 475
Content-Type: application/vnd.adobe.xfdf
Connection: Keep-Alive
Content-Encoding: deflate
Set-Cookie: RSPSID=XESYTVZSEFSXPQHCTITGRDQG; path=/md; HttpOnly
Cache-Control: private, max-age=0
Expires: -1
Firefox opens up the PDF referenced in the xfdf file that is downloaded. Chrome just downloads and saves the content. So maybe it's just Chrome's PDF "plugin" that doesn't understand what to do ... | |
Pekr: 3-Dec-2010 | /shell is available, just less powerful. /wait was added at least. /output is not there, but in such a case /wait is sufficient - you can redirect to a file and read it after the return from the call/wait ... it is just a note, I don't want to lobby for an R3 port, that would be premature :-) ... at least until a concurrency model is available, it would be worthless ... | |
Dockimbel: 3-Dec-2010 | "What is Cheyenne using the DLL interface for?"
- UNIX: CGI support, running as a user instead of root, management of external servers (like PHP)
- Windows: CGI support, external servers (PHP), desktop detection (for hiding working files), NT services support, multiple instances support, systray menu
"DLLs are generally not cross-platform too, no?" DLLs are not, but the mappings to the DLLs can be written easily in REBOL code, no need to go down to C. I see that as a big advantage in simplicity and maintainability.
"/shell is available, just less powerful." Cheyenne requires /info, /output, /input and /error.
"/output is not there, but in such a case /wait is sufficient - you can redirect to file, and read it after the return from the call/wait" If you want to have the slowest CGI support in the world, that's a good way for sure! | |
GrahamC: 29-Dec-2010 | and I'm getting this error in trace.log:
29/12-01:14:18.696-[DEBUG] c: [pain "9" ptgl "8" fn "10" rapid3 "7" fatigue "4" ros "6" ems "5" patient "7"]
29/12-01:14:18.696-[RSP] ##RSP Script Error:
URL = /md/add-rapid3.rsp
File = /r/pwp/www/md/add-rapid3.rsp
** Script Error : patient has no value
** Where: context
** Near: [patient] | |
amacleod: 6-Jan-2011 | I can't get Cheyenne to serve an Ajax JSON request. I can get it to read the array as a local file, but it does not seem to work through a URL request. I played with Content-Type: application/json, which I read was needed, but I don't know if I'm on the right track. | |
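By analogy with the Content-Disposition example earlier in this group, the JSON content type could be set from the RSP script itself; a minimal sketch (the response/header call pattern is taken from Dockimbel's earlier message, and the hand-written JSON string is purely illustrative):

```rebol
<%
    ; assumption: Content-Type can be set the same way as Content-Disposition
    response/header 'Content-type "application/json"
    print {{"status": "ok", "items": [1, 2, 3]}}
%>
```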
Kaj: 18-Mar-2011 | Is that modeled after Amiga catalogs? The file structure is exactly the same in Syllable :-) | |
Maxim: 15-Apr-2011 | my client had one console for each process on his cheyenne setup... I need to see what is happening independently from each other. trying to follow 5 high-speed traces in a single file, is like using altme without any groups it would be impossible. ;-) if one worker has a fuck up i need to be able to see why *it* is failing. | |
Maxim: 15-Apr-2011 | but that is a single file for several threads... it's impossible to use when there is traffic. | |
Dockimbel: 16-Apr-2011 | To open a console window for each worker process, you need to change the CALL command line in %UniServe/uni-engine.r:
call join form to-local-file system/options/boot [" -qws " cmd]
Replace:
* CALL with CALL/SHOW
* -qws with -s
Also, in that case it is more practical to reduce the worker number to 1 using Cheyenne's command line option: -w 1 | |
Dockimbel: 17-Apr-2011 | No way to redirect those log files yet. You can change their path by patching the %cheyenne.r file. You can also just delete them once Cheyenne has started; if all goes well, they shouldn't reappear until the next restart. | |
onetom: 17-Apr-2011 | Script: "Encap virtual filesystem" (21-Sep-2009)
make object! [
    code: 500
    type: 'access
    id: 'cannot-open
    arg1: {/Users/onetom/rebol/cheyenne-server-read-only/Cheyenne/httpd.cfg}
    arg2: none
    arg3: none
    near: [conf: load either exists? file]
    where: 'read
] | |
Maxim: 17-Apr-2011 | it would be nice to have a few options in the cfg file... something like -trace-log %/path/to/folder/ | |
Dockimbel: 17-Apr-2011 | I won't make a per-log-file configuration option, that wouldn't make much sense. When I add such an option, it will redirect all log files to the same folder. I still need to finish reading onetom's proposition to see if there's a better solution. | |
Dockimbel: 17-Apr-2011 | Forking is done when Cheyenne starts, more precisely when the task-master service is loaded by UniServe. It starts with a default of 4 worker processes. If more than 4 simultaneous RSP or CGI requests are received, one or several more workers are forked (up to 8). You can change those values in %cheyenne.r (search for pool-start and pool-max). It is planned to expose those values in the config file for easier access. | |
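The values referred to presumably look something like this in %cheyenne.r (the names and defaults come from the message above; the surrounding code and exact form are an assumption):

```rebol
pool-start: 4    ; worker processes forked at startup
pool-max:   8    ; upper bound when simultaneous RSP/CGI load grows
```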
Dockimbel: 17-Apr-2011 | Cheyenne just needs to be able to pick the config file from a remote folder instead of just the current one. | |
Dockimbel: 17-Apr-2011 | then how could Cheyenne know where to look for the config file? | |
Dockimbel: 17-Apr-2011 | the only way I see is passing the config file path as command-line argument, something like: cheyenne -p 8001 -conf /home/devel1/app/current/ | |
onetom: 17-Apr-2011 | any relative path in a config file is normally calculated from there | |
onetom: 17-Apr-2011 | i was thinking of making a webapp which returns on-page-end the to-json context load rejoin [%. request/parsed/file %.r] | |
onetom: 18-Apr-2011 | sometimes it seems it can detect the file change, sometimes it doesn't.. is cheyenne checking the modification date of the source file? | |
onetom: 18-Apr-2011 | File = ./to-json.r ** Script Error : body-of has no value | |
Dockimbel: 18-Apr-2011 | "i have a couple of ideas how to hack this, but what would be the 'correct' way?" You could try using the ALIAS config option. Add this to the httpd.cfg file in a domain section (not sure if it would work in a webapp context):
alias "/rest" %rest.rsp
and call your REST resources with: /rest/asd /rest/qwe
In %rest.rsp, you can put your URL-parsing code to produce a "REST routing" and return the JSON object. It might also work with "/" (untested). | |
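A sketch of what %rest.rsp could then contain (the request/parsed/url field is an assumption by analogy with the request/parsed/file field seen elsewhere in this group; the resource names are the ones from the example):

```rebol
<%
    ; hypothetical REST routing: dispatch on the part after /rest/
    resource: find/tail request/parsed/url "/rest/"
    response/header 'Content-type "application/json"
    switch/default resource [
        "asd" [print {{"resource": "asd"}}]
        "qwe" [print {{"resource": "qwe"}}]
    ][
        print {{"error": "unknown resource"}}
    ]
%>
```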
Maxim: 18-Apr-2011 | Doc, one thing I have not yet fully mapped out in my mind wrt handlers.
Q1: how do the handlers actually compile/get their source code... is it sent over tcp and run there, or does the handler load a file on its own?
Q2: when exactly does this happen?
Q3: can I configure the handler source or data in some way before it gets compiled/executed (at the mod conf parsing stage)? I need the handler to share some data with the mod which manages it in some way. I don't want to send this config data via the request, at each request (makes no sense) | |
Maxim: 18-Apr-2011 | so I guess, the best way to configure the handlers through cheyenne's httpd.cfg system, is to save out a temporary configuration file on the fly when the conf parser is run. this file would be executed by my handler when it is launched by any of the worker processes. | |
Dockimbel: 19-Apr-2011 | Temporary conf file: right, that should be the simplest solution. | |
Dockimbel: 19-Apr-2011 | You need to change them in %cheyenne.r (I should add them to the config file some day). | |
Dockimbel: 19-Apr-2011 | Use 'debug/probe or 'debug/print to emit debug logs in %trace.log file. | |
Maxim: 22-Apr-2011 | ok, so I promised a little announcement about work I have been doing in/with/for cheyenne... I have built a web service module (mod) for cheyenne.
----------------------- features/highlights -----------------------
* extremely fine-tuned to cause the least cpu hit on the server process, since ALL processing is done in worker processes.
* it uses an arbitrary number of rebol script files which you assign to any host in the config file (even the default).
* once assigned, these files are compiled dynamically (as one app) by the mod and are exposed via http by the server.
* only the functions you *choose* are *ever* visible on the web, allowing you to include support libs, data and functions right in your server-side api.
* no direct execution of code occurs, *ever*, from the client to the server; all input is marshaled, and parameters are typed to your function specs.
* allows ANY type of web api to be delivered, from REST to SOAP-like interfaces.
* output is programmable, so that you can output as json, xml, html, txt, etc.
* interface is also programmable, so that you can provide GET params, POST forms, POST (XML, JSON, REBOL native data)
* automatic API documentation via source scanning and function help strings. there will also be some form of comments which will be used by documentation.
* no support for sessions. this is part of your application layer, which should use https and session keys in the submitted data, if you require it.
* it takes literally 5 minutes to convert your internal rebol code into web services which obey internet standards.
* system is auto-reconfiguring... i.e. you don't need to close cheyenne to update the service, just restart the workers. | |
Maxim: 22-Apr-2011 | maybe I just make the config file in brainfuck language, just to make it look like its a sophisticated thing... I mean, when its too easy, it can be taken for granted ;-) | |
Maxim: 22-Apr-2011 | yeah, but you still have to put the code behind it. the web-api mod provides an interface automatically based on what is actually being served. you could easily build a little WSDL-to-REBOL api file converter: just load the XML, extract the methods and the parameters, and build an equivalent rebol function stub. Then all you'd have to do is implement the function body... the only detail is the xml datatypes, which don't all map 1:1 within rebol, but that can usually be pretty well cornered within the code itself. | |
Dockimbel: 23-Apr-2011 | I still need to add some documentation to the wiki for .r file handling in RSP context and to mention the new cgi-conf script (from onetom). | |
Maxim: 27-Apr-2011 | system-libs-root: rejoin [to-rebol-file get-env "systemroot" %"/system32/"]
kernel32: load/library join system-libs-root %kernel32.dll
user32: load/library join system-libs-root %user32.dll
advapi32: load/library join system-libs-root %advapi32.dll
shell32: load/library join system-libs-root %shell32.dll
iphlpapi: load/library join system-libs-root %iphlpapi.dll | |
onetom: 27-Apr-2011 | Maxim: the path notation works on file!-valued variables too:
>> f: join to-rebol-file get-env "HOME" %/system32
== %/Users/onetom/system32
>> f/some32.dll
== %/Users/onetom/system32/some32.dll | |
onetom: 2-May-2011 | 2/5-19:59:12.259281-## Error in [conf-parser] : Error in conf file at: ! does this look familiar to anyone? there is no ! in the httpd.cfg of course | |
onetom: 2-May-2011 | btw, that error at ! is logged even if my cfg file is only:
$ cat httpd.cfg
modules [ internal extapp static action rsp alias ]
globals [
    listen [8080]
    bind-extern RSP to [ .r ]
] | |
Dockimbel: 2-May-2011 | I wonder why you guys are making things harder by trying to debug your apps under production conditions? Why don't you make a local development setup using Cheyenne from sources, launching it from a console in verbose mode to have a direct look at everything that could go wrong? The only log file I need to look at during Cheyenne development is the %trace.log file (and even this one is accessible from your browser in RSP 'debug mode...). | |
onetom: 2-May-2011 | because - normally - i shouldn't care about the webserver. that would be one of the great things about cheyenne. it's just a 1-file-webserver | |
onetom: 2-May-2011 | ## Error in [conf-parser] : Error in conf file at: ! was caused by writing a debug word outside of a vhost definition... | |
Dockimbel: 2-May-2011 | You should search in the Cheyenne console logs for "[HTTPd] Translated file:" to see what file Cheyenne is trying to read. That should give you a clue about what is causing the difference. | |
onetom: 2-May-2011 | Translated file: %./jsondb/to-json.r in the case of the source version; so there is no difference there. | |
onetom: 3-May-2011 | anyone wrote some memcached or tmp file storage for sessions? | |
Dockimbel: 3-May-2011 | They are persisted on disk if you specify in globals section of config file: persist [sessions] | |
Maxim: 3-May-2011 | doc, I've been able to add a config for the worker count (min/max) within the httpd.cfg file. it took some doing and a lot of file navigation to understand the deep secrets of cheyenne's startup process ;-) | |
onetom: 4-May-2011 | "Most comments are lies. They don't start out to be lies, but gradually they get out of date. You might say that developers should be disciplined enough to maintain the comments; but this is not reasonable. Comments have no detectable binding to what they describe. If you modify a function, you have no way of knowing that 1000 lines above you (or perhaps in a different file) there is a comment that contradicts what you are doing. So comments slowly degrade and turn into misinformation." -- http://www.coderanch.com/t/131147/Agile/Clean-Code-Handbook-Agile-Software u just demonstrated this principle very well ;) | |
Dockimbel: 4-May-2011 | Max: I am very busy today, I am not sure I will have time to review your code now (you should send me a copy of the changed files first, BTW). As you could see, supporting such a feature at the config file level is complex because the config file is loaded only when the HTTPd service starts (for historical reasons). I am not sure that initializing the HTTPd service ahead of time is a clean solution (the boot/ line has become a bit hard to read with this /no-start flag that loads and inits the HTTPd service...). The solution I had in mind was to extract the whole config file loading from %HTTPd.r and put it in %cheyenne.r. This is a deep and complex change; that is why I was waiting to have enough time to do it in a single working session. Anyway, I will examine your solution first. | |
onetom: 6-May-2011 | ok, probably im trying to do something forbidden there:
6/5-17:27:41.971934-[Logger] New request: T6/5-17:27:41.948903-## Error in [task-handler-55484] :
Make object! [
    code: 312
    type: 'script
    id: 'cannot-use
    arg1: 'path
    arg2: 'none!
    arg3: none
    near: [switch debug-banner/opts/error [
        inline [html-form-error err file]
        popup [debug-banner/rsp-error: make err [src: file]]
    ]]
    where: 'protected-exec
] ! | |
onetom: 6-May-2011 | hmm... it's a .r file but it's in rsp format, so no rebol header but <% %> tags. so i guess it's the right behaviour | |
Kaj: 6-May-2011 | So do you think preparing the RSP interface for each request would still be faster than reading a small rebol-fast-cgi file every request? | |
onetom: 7-May-2011 | if i edit the file above while cheyenne is running, it starts to work; to-object is found | |
onetom: 7-May-2011 | this is my test.r file and im calling it from a webapp | |
onetom: 7-May-2011 | until yesterday the very same script was within <% %> tags actually and this effect was still observable. just the latest svn version didn't accept the tags in a .r file anymore, that's why im showing it as a REBOL[] script. otherwise, i was just following common sense and trying to get things done in an ultra primitive way :) | |
GrahamC: 7-May-2011 | So, now I load the db from a disk file each time ... | |
GrahamC: 7-May-2011 | I have multiple users using a web app on different ports. Each has their own vhost and their own pages, but to keep things simple, each web app is the same. I include a config file with the db definition each time an rsp page is loaded, instead of in app-init.r where it did not work. | |
Kaj: 7-May-2011 | 7-May-2011/20:07:17+2:00 : make object! [
    code: 311
    type: 'script
    id: 'invalid-path
    arg1: 'mod-rsp
    arg2: none
    arg3: none
    near: [if exists? file: service/mod-list/mod-rsp/sessions/ctx-file [try-chown file uid gid]]
    where: 'set-process-to
] | |
Dockimbel: 8-May-2011 | Did you see any error in Cheyenne log file (chey-pid-*.log or crash.log)? | |
onetom: 8-May-2011 | why is it giving me an error page? because of the debug option in the config file or because of the -vvvv? | |
Dockimbel: 8-May-2011 | "i'd be interested in looking into the sessions during runtime too.. can i do it on the cheyenne console by pressing escape?" You can access them in 3 different ways:
1) if run from sources, press escape in the console, then enter: probe uniserve/services/httpd/mod-list/mod-rsp/sessions/queue. Type do-event when you want to resume Cheyenne.
2) run %clients/rconsole.r from the source archive; you will have a remote console connected to your local Cheyenne process (try on the prompt: netstat)
3) add this to the config file in the globals section: persist [sessions]. When you want to look at the sessions, just stop the Cheyenne process; a .rsp-sessions file is created holding the session objects. | |
onetom: 8-May-2011 | $ cat jar
# Netscape HTTP Cookie File
# http://curl.haxx.se/rfc/cookie_spec.html
# This file was generated by libcurl! Edit at your own risk.
#HttpOnly_guan-huat FALSE / FALSE 0 RSPSID MTXVGMVOMYMVGDZKFURKPQKK | |
Kaj: 8-May-2011 | The encapped version of Cheyenne needs to be started from the directory of the configuration file in order to find it. Then it produces its log files also in that directory. This is both against the structure of Unix and modern Windows systems | |
Kaj: 8-May-2011 | To produce the logs, Cheyenne needs write access to the folder with the configuration file and (future) logs. This becomes a problem when it gives up its root capabilities, and the mechanism to adapt the privileges of the log files still doesn't seem to work | |
Dockimbel: 9-May-2011 | "This is both against the structure of Unix and modern Windows systems." UNIX filesystem layouts are not identical. Here are the Apache error log locations in just 3 UNIX flavours (among dozens):
* RHEL / Red Hat / CentOS / Fedora Linux Apache error file location - /var/log/httpd/error_log
* Debian / Ubuntu Linux Apache error log file location - /var/log/apache2/error.log
* FreeBSD Apache error log file location - /var/log/httpd-error.log
and here are the possible locations of the configuration file:
* /usr/local/etc/apache22/httpd.conf
* /etc/apache2/apache2.conf
* /etc/httpd/conf/httpd.conf
Notice how the file name changes too (both for the log and conf files). BTW, I personally prefer the GoboLinux approach ;-).
On the Windows front, it is barely better. The registry database is fine for storing parameters (name/value couples), but not a REBOL dialect file. A common way is to store files created at runtime in %USER%/AppData/Local/<appname>/. Cheyenne stores all its files (including the config file) either in the local folder or in %ALL_USERS%/Cheyenne/. Storing them in the %USER% hierarchy would be better.
Taking into account every OS's specificities (or oddities) is not always a good choice for a cross-platform product. I know that Cheyenne needs to be gentle with OS best practices, so I am willing to improve it whenever possible, but without sacrificing the default behaviour (because that is the way I want it to work for me). BTW, I am also willing to test the centralized logging approach, but it has to be a cross-platform solution. So an abstraction layer needs to be built with connectors for the UNIX syslog daemon and the Windows Event Logger (there are two types to support: the pre-Vista system and the new Vista/7 one). Has anyone already worked on such wrappers with REBOL?
I personally need the log files to be in exactly the same format and, if possible, at the same place across platforms to make my life easier, so this will keep being the default anyway. The current -f internal Cheyenne command-line option (Windows-specific currently) could be extended to work on UNIX too (and no Max, this one cannot go into the config file, because it indicates where the config file is located ;-)). | |
Kaj: 9-May-2011 | - Configuration data. The httpd.cfg file | |
Dockimbel: 9-May-2011 | PID file can be redirected as well. | |
Dockimbel: 9-May-2011 | As I understand it, this looks like Cheyenne will need a per-UNIX-system install script? Or will we let users spread the files across the filesystem as they want and use options to redirect each file class properly to the right folders? | |
Kaj: 9-May-2011 | The only things really missing are paths to the configuration file and the main logs area | |
Dockimbel: 9-May-2011 | It is absolutely not simple:
- Cheyenne binaries use a memory-based virtual file system.
- When run from sources, files' internal relative paths depend on where the REBOL binary is run from.
- REBOL on Windows has 2 different working folders (one for REBOL, one for the system), while on UNIX, it seems that there is only one (from the REBOL POV).
Make a cross product of these items and you'll have all the possible combinations to manage. | |
Kaj: 10-May-2011 | Does the encapper modify the file scheme? Couldn't that be extended? | |
Dockimbel: 10-May-2011 | No, the encapper has no effect on file scheme. | |
Dockimbel: 10-May-2011 | File scheme is native, so no, it can't be extended. | |
Kaj: 10-May-2011 | It would also have been good if the file scheme in REBOL were hackable. Per-app namespaces are always useful | |
Dockimbel: 25-May-2011 | Adding transparent compression for static resources was also planned, but:
- it is not easy to support efficiently
- when static file serving performance really matters, a fast front-end like nginx is preferable | |
onetom: 25-May-2011 | so for us it would be freaking efficient :) otherwise we are veeeery far from hitting the raw static file throughput ... | |
onetom: 25-May-2011 | and a plain filesize check, like if 10'000 < select info? file 'size [call "gzip..."] | |
Dockimbel: 25-May-2011 | if you have to re-compress the file on each request | |
onetom: 25-May-2011 | such automatic compression makes sense only for mid-sized files, where the file needs to be seamlessly uncompressed on the other side. if the file is bigger, u want to be more specific about the compression method anyway... | |
onetom: 25-May-2011 | the small instance im running this shit from is a small ec2 instance. it compresses the mentioned file in 44ms for the 1st time, then ~28ms subsequently. no matter how i look at it, it is worth supporting this for the usual text mime types, especially within the 10kB - 10MB size range | |
Dockimbel: 25-May-2011 | FYI, I plan to work this Sunday on:
- adding proper log file relocation ability for UNIX platforms
- making a draft mod for testing static file compression support | |
onetom: 29-May-2011 | Dockimbel: could you work on the log file location / compression stuff? | |
Dockimbel: 29-May-2011 | New revision 144:
- added -f option to change working folder location
- added -c option to load the config file from a new location
See %changelog.txt for more details. | |
Dockimbel: 29-May-2011 | Re compression: after a deeper look, there is no way to support "on-the-fly" compression of static files without totally killing Cheyenne's performance. If it is done in a mod, it will totally freeze Cheyenne on every CALL. If it is done in a handler, most of the benefits of the async I/O main engine are lost, and Cheyenne will be limited to serving small files only (they will be transferred from workers to the main process) and limited to 1 request per worker (so with 4 workers, maximum simultaneous requests = 4; all others will be queued and waiting). So, the only way to support static file compression is by pre-compressing files and adding some config options to let Cheyenne know which files are compressed (a file suffix alone is not enough). Pre-compressing means having to manage compressed versions of static files (When and how to compress? Where to put the files? When to delete them? etc...) | |
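The pre-compression approach could start as a standalone maintenance script run outside Cheyenne, along these lines (the folder, suffix list, and size threshold are illustrative; CALL/SHELL is used so the shell handles the output redirection, and this assumes a UNIX gzip in the PATH):

```rebol
; sketch: pre-compress large static text files once, offline
www: %www/
foreach file read www [
    if all [
        find [%.html %.css %.js %.txt] suffix? file
        10000 < size? join www file        ; threshold is illustrative
    ][
        local: to-local-file join www file
        ; shell redirection writes file.gz next to the original
        call/shell/wait reform ["gzip -c" local ">" join local ".gz"]
    ]
]
```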
Dockimbel: 30-May-2011 | I have successfully tested config file relocation/renaming and Cheyenne working directory relocation, both on Windows and Linux. Let me know if something is not working as expected. | |
Dockimbel: 19-Jun-2011 | But you need to run Cheyenne from sources at least once to force the generation of the %.cache.efs file (result of the preprocessing). | |
Dockimbel: 19-Jun-2011 | About %httpd.cfg file, it will be written down on disk if not present, so if you want to avoid that, you need to patch the sources. For that, just edit %Cheyenne/misc/conf-parser.r and comment line 69: ; write file data | |
Dockimbel: 22-Jun-2011 | Henrik, if you haven't solved your configuration issue yet, you should send me your working directory archived along with your configuration file, so I can see where the issue is. | |
Henrik: 22-Jun-2011 | The testapp script makes this error:
22/6-21:54:04.832205-[RSP] ##RSP Script Error:
URL = /show.rsp
File = /home/henrikmk/sites/testapp/show.rsp
** Script Error : empty? expected series argument of type: series port bitset
** Where: rsp-script
** Near: [either empty? session/content [
    print "<LI>No session variables</LI>"
][
    foreach [name value] session/content [
        print [<LI> <B> name ":" </B> mold value </LI>]
    ]
]] | |
Dockimbel: 19-Nov-2011 | Cheyenne doesn't return HTML when a session times out, it only returns a 301 (or 302, I don't remember) to the URL you've specified in the config file after AUTH. | |
Janko: 23-Nov-2011 | I want to log certain events from the webapp. What would be the most efficient way to do this (by this I mean one that would have the least impact on server responsiveness)? I would like to use something similar to debug/probe, debug/print... Is it possible to use the existing logging functionality to log custom stuff to a custom log? (or is this just a normal file append)? | |
Endo: 23-Nov-2011 | I think it's OK to use append if it's not a heavily loaded web site. I did this to prevent a possible file access problem if it happens at the same time in very rare cases:
unless 'ok = loop 3 [
    if not error? try [
        save voter-file append voters session/content/username
    ][
        break/return 'ok
    ]
    wait 0:0:0.1
][
    response/redirect "error.rsp?code=error-on-voting"
] | |
Dockimbel: 23-Nov-2011 | You can use the debug/* logging functions, but they will only log into the %trace.log file. Writing directly to a log file from an RSP script is unsafe (unless you take great care about concurrent accesses). So, if you want to have custom logs from RSP scripts, you should use the OS syslog service for a really reliable solution. The debug/* log functions use their own solution for serializing disk writes: they pass data to the Cheyenne main process, which does the writing to disk. | |
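For the syslog route, one low-tech sketch is to shell out to the standard UNIX logger(1) utility via CALL (the tag name and message are illustrative; whether the per-request CALL overhead is acceptable is exactly the trade-off discussed above):

```rebol
log-info: func [msg [string!]][
    ; logger(1) hands the line to the local syslog daemon; -t sets the tag
    call reform ["logger -t cheyenne-app" msg]
]
log-info "vote recorded for user 42"
```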
Janko: 23-Nov-2011 | Endo, thanks for the code. I will need something similar for sqlite. I just got my first "db is locked" error yesterday with it at UsrJoy. What I'm trying to log is side info (like usage info), so I don't want to impact the speed of response by having an additional disk write in the request-response process (it has to be async). Doc: I have used debug functions for various info logging too until now (and I do a diff on the trace in cron and send myself an email if there is any difference), but I am trying to log more things and I would like to leave trace.log for errors only. I was interested in whether the existing functionality that serializes debug to trace.log could be used to log to some other file, like info.log. That would complicate the app code the least... otherwise I was thinking of what Kaj proposed: to have some queue to send data over via tcp, and it will write it to disk in intervals. That would bring another dependency into the app code... something like redis could automatically work like this, I think. | |
Dockimbel: 23-Nov-2011 | It could be possible to extend the debug object to handle an /info refinement that would log to an %info.log file, but that would put some burden on the Cheyenne main process when in production. I thought about writing an OS logging service wrapper, but never found the time for that. I usually do all my writes from webapps into databases that are able to handle concurrent accesses reliably (so, not sqlite). | |
Kaj: 23-Nov-2011 | It makes heavy requirements on the file locking of the operating system for that, and it does have a documentation section that explains how operating systems are buggy and badly documented, so that doesn't exactly instill confidence | |
Dockimbel: 23-Nov-2011 | Reliable and efficient file locking is hard to achieve, I agree with that. That's why I went for a syslog-like solution for Cheyenne. | |
Dockimbel: 23-Nov-2011 | "When any process wants to write, it must lock the entire database file for the duration of its update. However, client/server database engines (such as PostgreSQL, MySQL, or Oracle) usually support a higher level of concurrency and allow multiple processes to be writing to the same database at the same time. This is possible in a client/server database because there is always a single well-controlled server process available to coordinate access. If your application has a need for a lot of concurrency, then you should consider using a client/server database." | |
Dockimbel: 24-Nov-2011 | Yes, but in the Cheyenne context, having to maintain a cross-platform C lib for that would be really annoying. It would be the end of Cheyenne as a one-file server. Also, it wouldn't run on Core anymore. | |
Endo: 25-Nov-2011 | when I encap embed-demo.r, embed-demo.exe gives this error:
** Script Error: select expected series argument of type: series object port
** Where: get-cache
** Near: select cache file
Do I need to do something else? I uncommented "embed" in httpd.cfg. | |
Dockimbel: 25-Nov-2011 | You would also need to patch the engine/add-file method where the RSP scripts are really loaded. | |
Endo: 29-Nov-2011 | the encapped embed-demo.exe application gives the following error:
** Script Error: select expected series argument of type: series object port
** Where: get-cache
** Near: select cache file | |
Endo: 29-Nov-2011 | I think it is also possible to include the mod-embed.r file. Currently embed-demo.exe requires the mods/mod-embed.r file (probably the other mods as well) |