r3wp [groups: 83 posts: 189283]

World: r3wp

[Core] Discuss core issues

Geomol
1-Jan-2007
[6556]
LOL Wow! That's nice, Gabriele!

Also a fine solution, Sunanda! That might be one of the fastest ways 
to do it. Maybe I'll just use RANDOM in my noise-generating routine, 
but if that's not good enough, I'll probably use one of these suggestions.
Robert
1-Jan-2007
[6557]
Cool. This trick could be used to implement interval arithmetic: 
use a PAIR to store the upper/lower bounds of a range. Then we only 
need special operator implementations for * and / to handle all cases.
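A minimal sketch of the idea (hypothetical IMUL; untested, and note 
that pair! components are integers, so fractional bounds truncate):

	; interval stored as a pair!: x = lower bound, y = upper bound
	imul: func [a [pair!] b [pair!] /local p][
		p: reduce [a/x * b/x  a/x * b/y  a/y * b/x  a/y * b/y]
		to-pair reduce [first minimum-of p  first maximum-of p]
	]

	imul 2x3 -4x5  ; [2,3] * [-4,5] -> lower bound -12, upper bound 15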
Geomol
1-Jan-2007
[6558]
I was trying to implement this function in REBOL:

function IntNoise(32-bit integer: x)

    x = (x << 13) ^ x;

    return ( 1.0 - ( (x * (x * x * 15731 + 789221) + 1376312589) & 0x7fffffff)
             / 1073741824.0);

end IntNoise function


Using pair didn't do the job, I guess because of truncating along 
the way. Sunanda's method works.
Robert
1-Jan-2007
[6559x2]
Question: I have a block that represents several records (a fixed 
number of fields per record). Now I need to extend each record by 
one column, i.e. insert a new entry every X entries in the block. 
IIRC there is a special function to deal with fixed-size blocks for 
such a thing, like REMOVE-EACH but more generic.
The simplest way I've come up with is:
	FORSKIP series record-size [
		APPEND result COPY/PART series record-size
		APPEND result new-value
	]


But this copies the series. Is there a nice in-place extension solution?
Volker
1-Jan-2007
[6561x2]
parse series [any [record-size skip p: (p: insert p new-value) :p]]
But that shifts a lot. I would use:
	insert clear series result

And for speed there is INSERT/PART, to avoid all the small temp blocks. 
There is nothing built in.
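A sketch of the INSERT/PART variant (untested; assumes SERIES, 
RECORD-SIZE and NEW-VALUE as in the question):

	result: make block! (length? series) + ((length? series) / record-size)
	forskip series record-size [
		insert/part tail result series record-size  ; copy one record, no temp block
		insert tail result new-value
	]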
Gregg
1-Jan-2007
[6563]
I have a DELIMIT function that will do it, changing the series in 
place, with the exception of the trailing element. So the final result 
would look like this:

	append delimit/skip series new-value record-size new-value

The basic idea you want is this:

	series: skip series size 
	series: head forskip series size + 1 [insert/only series value]


My DELIMIT func also works with list! and any-string! types correctly, 
which the simple code above doesn't account for (the + 1 part is simplified).
Geomol
3-Jan-2007
[6564x2]
To move a file, one solution is:
write/binary <destination> read/binary <origin>
delete <origin>

If you leave out the DELETE, you've got a file-copying routine.
To copy large files, Carl gave some ideas in this blog:
http://www.rebol.net/article/0281.html
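The chunked approach from that article looks roughly like this 
(untested sketch; the buffer size is arbitrary):

	inp: open/binary/direct %origin
	out: open/binary/direct/new %destination
	while [data: copy/part inp 100000][insert out data]  ; COPY/PART returns none at end
	close inp
	close out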
CharlesS
3-Jan-2007
[6566]
Cool thanks
BrianH
3-Jan-2007
[6567]
If you just need to move a file within the same hard drive, there 
may be some tricks with renaming or calling external commands that 
will likely be faster. Be sure to check those out too.
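For example (hypothetical file names; RENAME works within one 
directory, and CALL shells out, so it is platform-specific):

	rename %report.txt %report-old.txt          ; same-directory "move"
	call {mv /tmp/report.txt /var/report.txt}   ; POSIX external command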
Anton
9-Jan-2007
[6568]
Given some data which I might load from a file:

	data: [
	
		[code description]
	
		["CC" "Crazy Cuts"]
		["DD" "Dreadful Dingo"]
		
	]

I can process it this way:


	format: data/1 ; get the format block, which is known to be first
	
	use format [
	
		foreach blk next data [ ; skip over the format block
			
			set format blk
			
			if code = "CC" [print description]
		
		]
		
	]

with the disadvantage that I set a word ('FORMAT).
I could put that in another USE context but then I would have
yet another level of nesting in the code. (There is already
one level of nesting more than I want.)

What I would prefer to write is something like:

	USE-FOREACH (data/1) (blk) (next data) [
	
		if code = "CC" [print description]
		
	]
	
Therefore, an implementation is called for.
Any comments before I begin an implementation?
Chris
9-Jan-2007
[6569]
What is the second argument for?  -- (blk)
Gabriele
9-Jan-2007
[6570x2]
Anton, you need a BIND there somewhere :)
foreach would work as is if it wasn't for the subblocks
Anton
9-Jan-2007
[6572x7]
Chris, sorry it's supposed to be unparenthesized, as a word, to have 
access to the original data.
Gabriele, you're right. And it's not easy to add. Which is why this 
is so uncomfortable.
This works:
data: [
	
		[code description]
	
		["CC" "Crazy Cuts"]
		["DD" "Dreadful Dingo"]
		
	]

	format: data/1
	
	foreach blk next data [

		use format compose/only [
			
			set (format) blk
		
			if code = "CC" [print description]
			
		]
		
	]
I prefer the USE to be on the outside, but it's more code:
use format compose/only [
			
		foreach blk next data (compose/only [

			set (format) blk
		
			if code = "CC" [print description]
			
		])
		
	]
But if packaged nicely..	
	
	use-foreach: func [
		words [block!]
		'word [word! block!] 
		data 
		body 
		/local 
	][
		use words compose/only [
		
			foreach (word) data (compose/only [
				
				set (words) (word)
				
				do (body)
			])
		]
	]
	
	use-foreach data/1 blk next data [
		if code = "CC" [print description ?? blk]
	]
Chris
9-Jan-2007
[6579x4]
Some variations:
If the record blocks are of fixed size, you could do:

use-foreach: func [words records body][
    foreach record records [foreach :words record body]
]
Or use join (probably 'ouch' for many records):

use-foreach: func [words records body][
    foreach record records [
        use words join [set words record] body
    ]
]
Except the second one doesn't work :(
Anton
9-Jan-2007
[6583x3]
Tricky, isn't it?
Not bad ideas though. I think I prefer to use SET, rather than FOREACH, 
to set the record values, to keep the variable length ability.
Mmm... maybe USE-FOREACH better deserves your functionality, and 
USE-FORALL could be a function which steps through the DATA block, 
allowing access to it.
Anton
10-Jan-2007
[6586x2]
This works:
	use-forall: func [
		words data body
	][
		use words compose/deep [
			forall data [
				set [(words)] bind data/1 'data
				(bind body 'data)
			]
		]
	]
	
	; test
	
	data: [
	
		[code description]
	
		["CC" "Crazy Cuts"]
		["DD" "Dreadful Dingo"]
		
	]
	
	use-forall data/1 next data [
		print index? data 
		if code = "CC" [print description]
	]
>> use-forall [leg arm hair] [["long" "skinny" black] ["short" "fat" blonde]] [?? arm]
arm: "skinny"
arm: "fat"
== "fat"
Chris
10-Jan-2007
[6588]
I think this is pretty sturdy -- an interesting exercise in context 
wrestling:

use-foreach: func [words records body][
    bind body first words: use words reduce [words]
    foreach record records [set words record do body]
]
Anton
10-Jan-2007
[6589]
That's Brazilian bindology, Chris.
Ladislav
10-Jan-2007
[6590]
why Brazilian, Anton?
Gabriele
10-Jan-2007
[6591]
anton, chris' way is how i'd probably do it.
Anton
10-Jan-2007
[6592x8]
Ladislav, a reference to Brazilian ju-jitsu, which is wrestling.
Wait, I haven't shown you this yet.
	; my second version, more like Chris's version, using FORALL
	use-foreach: func [
		words [block! word!]
		data [series!] "The series to traverse"
		body [block!] "Block to evaluate each time"
	][
		use words compose/deep [
			forall data [
				set [(words)] data/1
				(body)
			]
		]
	]
It uses fewer words and is easier to understand, but it has a more 
brittle dependency chain: the FORALL mezzanine uses the FORSKIP mezzanine, 
which uses the WHILE native.
Oh, and although it uses fewer words, it spreads them out across more 
lines.
Chris' version does not support the WORDS argument as a word! - but 
I don't think it's that important to support.
Gabriele, since you say that, Chris' version does remind me of your 
style of coding.
- Nested blocks in the user code might require BIND/COPY.
- [throw] (?)
- [catch] (?)
- RETURN and BREAK ...
Volker
10-Jan-2007
[6600]
I opt for Chris's version: it's a loop, and FOREACH is faster. But 
add a version with more lines, for explanation ^^.
Chris
10-Jan-2007
[6601x3]
Anton, would it add too much to change the 'forall loop to a 'while 
loop?
Fundamentally, I don't see much difference in our separate approaches. 
Using 'compose certainly makes it easier to read. (I'm not sure why; 
I seem to have an inherent aversion to 'compose.)
Using compose, I could get the word! argument working:

bind body first words: use words compose/deep [[(words)]]
Anton
10-Jan-2007
[6604x2]
Volker, yep, documentation last, as usual!
Chris, no, I would do it: