World: r3wp

[Parse] Discussion of PARSE dialect

Maxim
11-Dec-2009
[4683]
I'd gladly give back a few $ for their efforts
Reichart
11-Dec-2009
[4684]
Jack, Parse is my fav REBOL command.  If I ever have time, this is the one function I would like to create hundreds of examples for in a Wiki.
WuJian
11-Dec-2009
[4685]
newbie's solution, without PARSE:
>> s2: {1 ''2 '3 4 ' '5 ''6 '7 8 9 '0'}

>> replace/all s2 {''} {'}  replace/all s2 {'} {''}  print str
1 ''2 ''3 4 '' ''5 ''6 ''7 8 9 ''0''
>> str == s2
== true
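For comparison, a possible PARSE version of the same quote-doubling (just a rough sketch, assuming R2's parse/all; the word mark is an arbitrary name, not from the original post):

s2: {1 ''2 '3 4 ' '5 ''6 '7 8 9 '0'}
parse/all s2 [
	any [to {'} [{''} | {'} mark: (mark: insert mark {'}) :mark]]
	to end
]
print s2   ; 1 ''2 ''3 4 '' ''5 ''6 ''7 8 9 ''0''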
Maxim
12-Dec-2009
[4686x4]
I just adopted a new notation standard for parse rules... the goal is to make rules a bit more verbose as to the type of each rule token... I find this reads well in any direction, since we encounter the "=" character whether reading from left to right or from right to left... and parse rules often have to be read from right to left.

example:

=terminal=: [
	=quote= copy terminal to =quote= skip (print ["found terminal: " terminal])
]


on very large rules, and with the syntax highlighting in my editor making the "=" signs very distinct, I can instantly detect which parts of my rules are other rules or character patterns... it also helps out in the declarations... I see quite instantly when blocks are intended to be used as rules, wherever they are in my code.


in my current little parser, I find I can edit my rules almost twice as fast and lose MUCH less time scanning my blocks to find the rule tokens and switching them around.

wonder what you guys think about it...
another example.... in this dense block of text, I can spot the =eol= 
 (end of line) token instantly in both x and y dimensions of the 
rule paragraph:

=line-comment=: [
	=comment-symbol= [
		[thru =eol= (print "comment to end of line")]
		|[to end]
	]
	(print "success")
]
when using rules in other contexts, they also stick out...

=alphabet=: rejoin [=digit= =letter= bits "_"]


here I immediately see that bits isn't a rule, but a function or an ordinary word.
with syntax highlighting it's quite amazing how   bits   stands out ... in my editor at least.
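A minimal runnable sketch of the convention (my example; =quote= and the input string are made up, the terminal rule follows the one above):

=quote=:    {'}
=terminal=: [
	=quote= copy terminal to =quote= skip
	(print ["found terminal:" terminal])
]
parse "'abc'" =terminal=   ; prints: found terminal: abc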
Graham
12-Dec-2009
[4690]
use color instead :)
Maxim
12-Dec-2009
[4691]
what do you mean color?
Graham
12-Dec-2009
[4692x3]
Use an editor that colorises the words
Gab uses the == in his literate editor ..
Chuck Moore uses color extensively in his colorForth .. to replace other types of syntactic markup.
Maxim
12-Dec-2009
[4695x2]
syntax highlighting colorizes words ... stuff is colorized... but user words aren't colorised and they all get mixed up between functions, variables and rules... and having colors which are too strong next to each other, and in relative distribution ... it cancels out.
stuff is colorized... (*in my editor*)
Graham
12-Dec-2009
[4697x2]
so you could write a parser that reads your rules and colorises them 
...
without the need for all those = signs everywhere
Maxim
12-Dec-2009
[4699]
but not while I'm coding... this is not for presentation, it's for coding... I'm writing rules twice as fast now... just cause I'm not wasting time "searching" for the keywords within all of that text.
Graham
12-Dec-2009
[4700]
exactly ... for coding.
Maxim
12-Dec-2009
[4701]
unfortunately what you say isn't feasible, even if you can technically do it.  who is going to program a parser to colorise code which is useful for only one application? it's actually going to take more time to write your color parser for each piece of code than to write the code itself  :-P


so bottom line, Graham doesn't like this syntax. any others care to comment?
Graham
12-Dec-2009
[4702]
Max, just do what ever suits you.
Maxim
12-Dec-2009
[4703]
I'm just trying to get a feel for what others think about the idea, and sharing a bit of a discovery at the same time, if it may help others. the goal isn't to be popular or convince others... and sorry if my last line may have looked harsh, it wasn't.  :-)


I was just summarizing your reaction plainly and relaunching the question to make sure others realize I want a few opinions.
Graham
12-Dec-2009
[4704]
it's not a syntax  but a convention ...
Maxim
12-Dec-2009
[4705]
true  :-)
PeterWood
12-Dec-2009
[4706]
any others care to comment?


I'm afraid it looks very messy to me and reminded me of Perl for some reason.
Maxim
12-Dec-2009
[4707x2]
yay,  I've got the BNF grammar done... it's ripping through a C language BNF grammar definition...  :-)

now I've just got to make a parse rule emitter ... easy enough.
(all in R3, but not using the newer parse stuff, cause it's not required)
Maxim
13-Dec-2009
[4709]
the new parse rejection system is VERY cool.  (it can simplify the structure of some rules a lot  :-)
Gregg
13-Dec-2009
[4710]
For a long time I've added = to the end of my parse rules, and = 
to the beginning of parse variables. I think it matches the production 
rule grammar well, and also emulates set-word/get-word syntax.
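A tiny sketch of that convention (my example, not Gregg's code):

digit=:  charset "0123456789"
number=: [copy =num some digit=]   ; rule names end in "="
parse "42" number=                 ; == true
print =num                         ; parse variables start with "=" -> 42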
Maxim
13-Dec-2009
[4711x3]
I'll try that, it's a good variant, even better since we then clearly identify the 3 different parse constructs separately.
I've used word=  for other things before and I liked it.
finished the rewrite of the BNF parser... funny... there is more documentation & comments than code.
Maxim
14-Dec-2009
[4714]
one strange thing I realised is that most people who write BNF will write the rules in exactly the opposite order of what parse needs...


they'll put the smallest pattern first, so that if applied in parse directly, it always short-circuits the other rules following it.
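A quick illustration of the ordering issue (my sketch, not from the thread):

bnf-op:   ["=" | "=="]    ; BNF habit: smallest pattern first
parse-op: ["==" | "="]    ; PARSE needs the longest alternative first

parse "==" bnf-op     ; == false ("=" matches and the rest is left over)
parse "==" parse-op   ; == true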
Gregg
14-Dec-2009
[4715]
Yup. Different mindset.


I just looked at your BNF compiler earlier. Good stuff. I did an 
ABNF-to-parse generator some time back. ABNF is used in a lot of 
IETF RFCs and such.
Maxim
14-Dec-2009
[4716x2]
what is the difference?
is ABNF == EBNF  ?
Gregg
14-Dec-2009
[4718]
There are a lot of differences, unfortunately. It's not terrible, 
just different. It's not EBNF.

http://en.wikipedia.org/wiki/Augmented_Backus%E2%80%93Naur_Form
Maxim
14-Dec-2009
[4719]
that is nice, is your ABNF parser still accessible somewhere?  it could improve the quality and ease of integrating the protocols into R3 IMO.

ABNF also seems much more aligned to parse
Gregg
14-Dec-2009
[4720]
Generating PARSE rules wasn't too hard. It is a nice fit. Same issue 
with existing grammars though, in that you have to fix some things 
up manually, or we have to make the generator smarter.


I'll zap you what I have. Can't remember where I've posted it elsewhere.
Maxim
14-Dec-2009
[4721]
sure.
Maxim
15-Dec-2009
[4722]
I've been rewriting BNF-generated parse rules (which are often a bit cryptic) into properly ordered parse rules for 3 days now... <sigh>

C is sooo complex for what it really does.  I've discovered a few quite mind-boggling language capabilities...
stuff like:    

char *( *(*var)() )[10];


it takes 7 steps to define what that really is and there are other 
"fun" examples which end up being interpretation nightmares, but 
look really simple.
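For reference, working outward from the identifier, that one decodes roughly as: var is a pointer, to a function, returning a pointer, to an array of 10, pointers, to char; in other words, a pointer to a function returning a pointer to an array of 10 pointers to char.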


one thing is certain at this point... although I will be able to 
build a C to rebol converter with relative precision under specific 
goals, some of the crazy stuff just will have to be finished manually 
by humans.


at least I rarely see such twisted C code in most of what I've been 
reading so far.
BrianH
16-Dec-2009
[4723x3]
BNF is just a syntax form, with a *lot* of variation. The real difference 
that matters between Yacc and PARSE is the parsing model. Yacc implements 
an LR parser (or some variant thereof), and PARSE implements a variant 
of TDPL parsing (related to PEG), though more powerful and with a 
completely different syntax. How you structure the parse rules depends 
on the parsing model, not the syntax.


For instance, LR parsers tend to do recursion rather than iteration, and when they recurse the recursive call tends to be on the left, with the distinguishing clause on the right. For PEG parsers, recursion goes the other way. This is not an error, this is a difference in parsing model.


If you are translating from Yacc to PARSE, it's not just a syntax 
change. You have to reorganize the rules to match the new model. 
And watch out: Certain patterns are easier to express in some parsing 
models than in others. Some patterns aren't supported at all in some 
models, and so no amount of translation will help you. We chose the 
TDPL model for PARSE because it is more expressive than the LR model, 
so in theory you should be able to translate LR rules to PARSE with 
some topological twists (redoing the sturcture of the rules). However, 
there are patterns that you can express in PARSE that can't be translated 
to LR, even with topological changes.
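A sketch of the kind of reorganization meant here (my example, not BrianH's): an LR grammar would write a list left-recursively, as in list : list "," item | item, which would recurse forever if fed to PARSE directly; the PARSE/PEG version uses iteration instead:

digit: charset "0123456789"
item:  [some digit]
list:  [item any [#"," item]]

parse "1,22,333" list   ; == true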
Unfortunately, the C grammar was designed with LR parsers in mind.
You might be better off translating a C grammar for a PEG or TDPL parser generator into PARSE - fewer topological shifts needed.
Maxim
16-Dec-2009
[4726]
well, considering that I just finished the basic rule re-organisation... eheheh I think I'll apply the unit testing phase right now to test if all the rules perform as they should using input text.  there is probably going to be about 100kb of unit test code for what is now about 12kb of parse rules.
BrianH
16-Dec-2009
[4727x2]
Sounds about right.
Are you sure you have enough test code/data?
Maxim
16-Dec-2009
[4729x2]
all in all, there are only two or three rules whose transformation I'm unsure of, as some aspects of the C syntax are a bit obscure to represent.
you are being sarcastic, right? :-)
BrianH
16-Dec-2009
[4731x2]
No, really. The syntax of C is so complex that you would need a lot 
of data to test all of the common variations.
"data" in this case being C source.