r3wp [groups: 83 posts: 189283]

World: r3wp

[MySQL]

amacleod
29-Apr-2009
[1117]
I'm hosting a large number of text docs, broken up by sections, in 
a MySQL db. Once all the material is online, the changes (and thus 
downloads) will be small for the most part. But in the meantime large 
amounts of material are still to be uploaded to the db, and when a 
client finds this new material it will download it along with any changes 
to previous material. The material contains a lot of image data too, 
so it could add up. Right now I have about 40 megs of data and I 
have about 10X more to upload.


If it's too tough for me I will just display some animated GIF, but 
I would prefer something more tangible..

I'll look into your suggestion, Thanks Doc...


BTW, I tried the latest mysql-protocol and it broke my app. I have 
not had a chance to look into it, but I think you changed some syntax 
for selects?
I'm using Version: 1.1.2 at the moment.
Dockimbel
29-Apr-2009
[1118]
Thanks for the additional info. The main change in 1.2.0 is the new 
READ syntax, and it's documented here: http://softinnov.org/rebol/mysql-usage.html#sect2.2.
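For reference, a hedged sketch of what the 1.2.x one-shot READ form looks like — the URL, credentials, table and query below are placeholders, and the usage page above is the authoritative reference for the exact syntax:

```rebol
;-- Hypothetical database URL; /custom passes a query block with
;-- ? placeholders instead of reading a whole table
result: read/custom mysql://user:pass@localhost/mydb [
    "SELECT id, title FROM docs WHERE id=?" 1
]
probe result
```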
amacleod
9-May-2009
[1119]
Doc, I played around with read-net but no luck.

Would not read-net need to know the total size of the query result to work?
Does the server send this info first?


Also, read-net wants a URL. Where am I pointing it to? Is this the 
same as the db port I'm using?
Dockimbel
9-May-2009
[1120]
I'm only now understanding what you're trying to do. READ-NET is 
not what you want to use - it's for downloading files! You have to 
write your own progress bar function. As for the total size of the 
result set, AFAIK the MySQL server doesn't send that information; it 
just marks the last record of a result set.
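A rough sketch of how such a progress function could look. The one-row-at-a-time fetch via PICK is an assumption based on the driver's usage notes, and the URL and table are made up; since the server doesn't announce a total, only a rows-so-far count can be shown, not a percentage:

```rebol
db: open mysql://user:pass@localhost/mydb    ;-- placeholder credentials
insert db "SELECT id, data FROM docs"
count: 0
while [row: pick db 1] [                     ;-- assumed incremental row fetch
    count: count + 1
    ;-- crude progress feedback every 100 rows
    if zero? remainder count 100 [print ["rows so far:" count]]
]
close db
```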
amacleod
10-May-2009
[1121]
Thanks anyway...
Graham
14-May-2009
[1122x2]
Has anyone released a sphinx Rebol client api implementation?
There is this API document being worked on http://sphinxclient.codeplex.com/SourceControl/ListDownloadableCommits.aspx
Janko
14-May-2009
[1124x2]
I have made a primitive client lib for the SOLR search engine, if 
that helps anyone.
It's not much, but I can share it without problem if anyone 
needs it.
Graham
14-May-2009
[1126x2]
Can you put it on rebol.org in the library?
This is interesting: http://sphinxsearch.com/news/37.html .. you can 
now connect to the sphinx searchd daemon using the MySQL binary protocol 
... so Doc's MySQL driver can also now connect to Sphinx??
Janko
14-May-2009
[1128]
I will put it on rebol.org
Will
14-May-2009
[1129]
Maarten has one that's waiting for Reichart's approval to get released 
8)
Graham
14-May-2009
[1130x2]
That was a year ago ...
Here's an interesting conversation !  http://www.sphx.org/forum/view.html?id=1210
Maarten
14-May-2009
[1132]
Interesting indeed. I'll pick this up with Reichart; we are all in 
favour, it's merely a matter of time (pressure). The libs are nice, 
but I am sure others can help improve them.
Graham
14-May-2009
[1133]
Is Doc's MySQL protocol the binary one they talk about?
Dockimbel
14-May-2009
[1134]
I think so.
Graham
14-May-2009
[1135]
wow ... two for the price of one!
RobertS
22-May-2009
[1136]
.
TomBon
25-May-2009
[1137]
Hi Doc, 
is there any trick or encoding to prevent 
insert errors due to strings containing 
special characters like ' or / etc.?
Dockimbel
25-May-2009
[1138]
Hi TomBon, when using prepared statements, special characters are 
encoded in the passed values.


send-sql db ["UPDATE table SET field=?" string]	; string value will be encoded here
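A slightly fuller sketch of the same prepared-statement form — the table and column names are hypothetical, the point being that each value after the query string fills one ? placeholder in order and is escaped by the driver:

```rebol
title: {O'Brien's notes}               ;-- embedded quote, safe via placeholder
body:  "text with / and ' characters"
send-sql db ["INSERT INTO docs (title, body) VALUES (?,?)" title body]
```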
TomBon
25-May-2009
[1139]
Ahh... thanks Doc. I formed the insert classically, without question marks.
Dockimbel
25-May-2009
[1140]
There's also an alternative approach: the SQL-ESCAPE encoding function 
of the MySQL driver is exported in the global context, so you can use 
it when forming SQL queries directly:

send-sql db join "UPDATE table SET field=" sql-escape string


The prepared statement approach is recommended, because the driver 
will take care of that for you automatically (so less risk that you 
forget to encode one value, opening a hole for SQL injection attacks).
TomBon
25-May-2009
[1141]
Yes, I will use the prepared statement; it is also more elegant. With 
escaping I need to handle all fields to be safe.

BTW, many thanks for providing such a cool, free and very important driver!
Dockimbel
25-May-2009
[1142]
Thanks :-)
amacleod
29-May-2009
[1143]
Need some advice on db structure.
I'm not sure which would be better for this scenario:


I want to store user generated data which could consist of thousands 
of rows. But I also will want to be able to search across each users 
data.


I could create a separate table for each user and join tables when 
searching through all user data, or I could make one large database 
with a user field to separate out that user's data when needed.

There could be many hundred users.

What is SOP in such a case?
Sunanda
29-May-2009
[1144]
Is the users' data freeformat, or fixed fields?

ie do they each have the same data fields (but with different values), 
or can they define whatever data they want to store?
amacleod
29-May-2009
[1145x2]
Each table will have the exact same structure
I guess fixed fields
Sunanda
29-May-2009
[1147]
Then it'd be normal to have one large table.

But (and I guess this is your question): are there worthwhile (perhaps 
essential) performance improvements from denormalising? Perhaps partitioning 
different users onto different drives or servers.
Sadly, normalisation theory is silent on that subject.

Can you experiment with some generated full-size data and see?
amacleod
29-May-2009
[1148]
No, not really...

One table would be much easier and I do not think the db would get 
so large that I would see major performance problems. (I always underestimate 
the speed of these db's.)

Thanks for the advice, Sunanda.
Sunanda
29-May-2009
[1149]
One table will be much easier to manage. Two thoughts:


If you use Views to create "virtual tables" for each user, then you 
can (fairly) easily change the underlying structure without disrupting 
the application too much should you need to for performance reasons.


If your data does grow large, then partitioning may help performance 
-- ie having one table split across multiple drives according to 
data values (such as user name):
http://dev.mysql.com/doc/refman/5.1/en/partitioning-overview.html
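The Views idea above, sketched through the driver — the table, view and user names are made up, and SEND-SQL is the helper from the driver's usage notes:

```rebol
;-- One view per user over the single shared table; the app then
;-- queries the view like a table, so the underlying storage can be
;-- reorganized later without touching the application queries
send-sql db {CREATE VIEW smith_docs AS SELECT * FROM docs WHERE user = 'smith'}
probe send-sql db "SELECT * FROM smith_docs"
```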
amacleod
29-May-2009
[1150]
Nice, Thanks.
amacleod
7-Jun-2009
[1151]
Is there any reason MySQL will not accept a nested block in a text 
field?
I have no problem with SQLite storing the same data...
Janko
7-Jun-2009
[1152x3]
you mean molded nested block - text?
no, where can I find it?
oops.. wrong channel
amacleod
7-Jun-2009
[1155]
for example:
[["edfefwf" 3.4.5]["gdegerg" 4.5.6]]
Janko
7-Jun-2009
[1156x2]
do you "mold" it? what kind of field do you try to store it in?
maybe sqlite molds it automagically ?
amacleod
7-Jun-2009
[1158x2]
I've tried text and varchar
text has no issues in sqlite
Janko
7-Jun-2009
[1160]
The SQL command is a string, so IMHO you have to mold it and quote it 
as normal text, and then there should be no problems .. what kind 
of errors does it throw?
amacleod
7-Jun-2009
[1161x2]
I molded it and it inserted into MySQL, but now I need to see if, 
after selecting it, it behaves properly in my app....
thanks for the help
yeah, my app crashes - I need to convert it back to a block...
Janko
7-Jun-2009
[1163]
yes you need to deserialize it after retrieval.
amacleod
7-Jun-2009
[1164x2]
I'm using 'mold/only' when saving to MySQL and
'to-block' to re-block it...
is there a cleaner way, like an 'unmold'?
Also, I have some datestamp issues.
REBOL attaches the zone code to NOW when getting the date/time,

but when using a MySQL timestamp I do not get a time zone attached, 
and it's screwing me up.
Is there a way to add a time zone to a datestamp in MySQL?
Janko
7-Jun-2009
[1166]
hm.. I think you should use LOAD to get REBOL data back from the string
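A minimal round-trip sketch of that suggestion — MOLD on the way into the text field, LOAD on the way out, so no 'unmold' is needed:

```rebol
data: [["edfefwf" 3.4.5] ["gdegerg" 4.5.6]]
text: mold data        ;-- serialize the nested block to a string for the text field
;-- ... INSERT text into the db, then SELECT it back later ...
restored: load text    ;-- LOAD rebuilds the original REBOL value
probe restored         ;-- same nested block as DATA
```

With MOLD/ONLY the outer brackets are dropped; LOAD of a string holding several values returns them wrapped in a block, so the MOLD/ONLY + LOAD pairing round-trips as well.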