r3wp [groups: 83 posts: 189283]

World: r3wp

[!REBOL3-OLD1]

Pekr
1-Jul-2009
[15846]
Well, summer is going to be a slow time anyway, for many of us ...
BrianH
1-Jul-2009
[15847]
If necessary, put off releasing the plugins until after the vacation, 
in case perspective is needed :)
Ladislav
1-Jul-2009
[15848x3]
Peter once (Rambo#3518) objected to some things being unequal. 
I could use more opinions on this.
This is a test that succeeds in R2 as well as in R3; but is it really 
supposed to?

		a-value: first ['a/b]
		parse :a-value [b-value:]
		not equal? :a-value :b-value
(it is related to the Rambo#3518 ticket)
BrianH
1-Jul-2009
[15851]
I like the idea of non-datatype-specific EQUAL? considering datatypes 
in the any-string! family to be equal to each other, and also the 
any-block!, any-word! and number! families. I'm a little wary of 
the potential breakage, though I have no idea of its scale. Is 
anyone aware of any code that would be broken if EQUAL?, =, NOT-EQUAL?, 
<> and != changed in this way?
Ladislav
1-Jul-2009
[15852x3]
;This is an example by Geomol:
d: to-decimal to-binary 1023    ; reinterpret the bits of 1023 as a decimal (a tiny denormal)
blk: []
insert/dup blk 0 1024           ; 1024 counters, all zero
random/seed now
loop 1000000 [
    i: to-integer to-binary random d    ; the deviate's bit pattern is an integer in 0..1023
    blk/(i + 1): blk/(i + 1) + 1        ; count a hit for that bucket
]
print [blk/1 blk/512 blk/1024]  ; hits for 0.0, an interior value, and the max value
showing that my "original" implementation of the uniform deviates 
actually generated both endpoints of the specified interval, the 
0.0 as well as the given value, but each with half the frequency 
of any interior value
It was corrected yesterday (ticket #1027), but Geomol seems 
to dislike the fact that the correction excluded the given value. 
Does anybody want to express their preferences?
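The endpoint-halving behaviour Ladislav describes can be modelled outside REBOL: if a deviate on [0, max] is produced by rounding a continuous uniform draw to the nearest point of an evenly spaced grid, each interior grid point collects hits from a full step around it, while the two endpoints only collect from half a step. A minimal Python sketch of that model (an illustration, not REBOL's actual generator):

```python
import random

# Model: a deviate on [0, N] produced by rounding a continuous uniform
# draw to the nearest point of the grid 0, 1, ..., N.  Interior points
# collect hits from a full unit interval around them; the endpoints 0
# and N collect from only half an interval, so they get half the hits.
N = 4
trials = 400_000
counts = [0] * (N + 1)
random.seed(1)
for _ in range(trials):
    counts[round(random.uniform(0.0, N))] += 1

print(counts)  # counts[0] and counts[N] come out near half the interior counts
```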
Geomol
1-Jul-2009
[15855]
I find that the first version, with both endpoints (0.0 and the input 
value) as possible output at half the frequency of the other possible 
values in between, makes the most sense. I think of a number line going 
from 0.0 to the given value, and a random number is picked on the 
line.
Ladislav
1-Jul-2009
[15856x2]
well, for some applications (Fourier analysis) it makes the most 
sense that way
just for the record: a variant yielding all values in the interval 
including the endpoints with equal frequency is possible too (just 
the generating formula is a bit different)
Geomol
1-Jul-2009
[15858]
RANDOM is a distribution. Getting random integers, the mean value 
is well defined as:

(max + 1) / 2

So e.g.

random 10

will give a mean of 5.5. What is the mean of

random 10.0
or
random 100.0
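Both cases are easy to check empirically. A quick Python sketch, using Python's random module as a stand-in for REBOL's RANDOM (an assumed equivalence of semantics, not REBOL's actual generator):

```python
import random

random.seed(1)
trials = 1_000_000
# Integer case: `random 10` yields an integer 1..10, so the exact mean
# is (10 + 1) / 2 = 5.5.
int_mean = sum(random.randint(1, 10) for _ in range(trials)) / trials
# Decimal case: a uniform deviate on [0.0, 10.0] has mean 10.0 / 2 = 5.0.
dec_mean = sum(random.uniform(0.0, 10.0) for _ in range(trials)) / trials
print(round(int_mean, 2), round(dec_mean, 2))
```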
Ladislav
1-Jul-2009
[15859]
5.0 and 50.0 (or, do you mean, it is only "roughly" 5.0 and 50.0?)
PeterWood
1-Jul-2009
[15860]
Ladislav: I reported bug #3518 in Rambo mainly because the behaviour 
of the '= function is not consistent. My thinking was: if 1 = 1.0, 
why doesn't #"a" = "a"?

It appears that the '= function acts as the '== function unless the 
types are number!. 

I have come to accept that Rebol has been designed pragmatically 
and, understandably, may be inconsistent at times. I think this makes 
the need for accurate documentation essential. I would hope that 
the function help for = can be changed to accurately reflect the 
function's behaviour.
Ladislav
1-Jul-2009
[15861]
actually, my task now is to define the desired results of such comparisons 
for R3 (which may serve as documentation too)
PeterWood
1-Jul-2009
[15862x2]
The R2 behaviour has the advantage that it is easy to define and 
understand (especially if the function help text were improved). If 
other options are to be considered, defining the desired results 
will be more difficult. No wonder you are taking this on.
I believe that there needs to be some restriction on the datatypes 
on which the '= function will work. It seems to make no sense to 
compare a URL! with an email! (unless you define a URL! to be equal 
to an email! if they refer to the same IP address or domain name. 
Perhaps that's something for SAME?).

It's harder to say whether an issue! can be equal to a binary!, but 
what about an integer! with a binary!?
Maxim
1-Jul-2009
[15864x2]
I WANT PLUGINS !!!!!    :-)
wouldn't it be cool to load a rebol instance as a plugin within rebol? 
 :-)

this could be the basis for an orthogonal REBOL kernel  :-)
Anton
1-Jul-2009
[15866]
Peter, are you sure you would never want to compare an email with 
a url?
What about urls like this?
http://some.dom/submit?email=[somebody-:-somewhere-:-net]


I might want to see if a given email can be found in the query string.
Anton
2-Jul-2009
[15867]
Ladislav, your "parsing a lit-path" example above looks ok to me 
for the proposed ALIKE?/SIMILAR? operator, and EQUAL? if it's been 
decided that EQUAL? remains just as ALIKE?/SIMILAR?, but not ok if 
EQUAL? is refitted to name its purpose more accurately (i.e. EQUAL? 
becomes more strict).
BrianH
2-Jul-2009
[15868x2]
Peter, in response to the suggestions in your last message:

- issue! = binary! : not good, at least in R3. Perhaps issue! = to-hex 
binary!

- integer! = binary! : not good, at least in R3. Use integer! = to-integer 
binary!


Actually, anything-but-binary! = binary! is a bad idea in R3, since 
encodings aren't assumed. The TO whatever conversion actions are 
good for establishing what you intend the binary! to mean though, 
especially since extra bytes are ignored - this allows binary streams.
Anton, we decided that making EQUAL? more worthy of its name would 
break too much code that depends on it being loose. Oh well :(
PeterWood
2-Jul-2009
[15870]
Brian H: My "suggestions" are not suggestions, merely questions.
Anton
2-Jul-2009
[15871]
BrianH, oh well, it's a pity.
Geomol
2-Jul-2009
[15872x2]
Ladislav wrote: "5.0 and 50.0 (or, do you mean, it is only "roughly" 
5.0 and 50.0?)"


Yes, the mean must be slightly below 5.0 and 50.0 with the new random. 
With your first version, it is exactly 5.0 and 50.0.
With the new random, 0.0 will also get a lot more hits than numbers 
close to 0.0. That's because the distance between different decimals 
is small for numbers close to zero, while the distance gets larger 
and larger for higher and higher numbers (because of the IEEE 754 
implementation). So the max value will get a lot more hits than a 
small number, and all hits on the max value get converted to 0.0.

I wouldn't use the new random function with decimals.
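Geomol's density argument can be quantified: the gap between adjacent IEEE 754 doubles (the ulp, unit in the last place) grows with magnitude, so representable decimals crowd together near 0.0 and thin out near 10.0. A short Python illustration:

```python
import math

# math.ulp(x) is the gap from x to the next representable double.
# Doubles are far denser near zero than near 10.0.
near_zero = math.ulp(0.001)   # spacing of doubles around 0.001
near_ten = math.ulp(10.0)     # spacing of doubles around 10.0
print(near_zero, near_ten, near_ten / near_zero)
# adjacent doubles around 10.0 are thousands of times farther apart
# than those around 0.001
```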
Ladislav
2-Jul-2009
[15874x2]
slightly below 5.0 and 50.0
 - certainly, but the difference is "undetectable" in these cases
moreover, this version is quite standard - see e.g. Wikipedia, or 
the Dylan programming language, etc.
Geomol
2-Jul-2009
[15876]
If you do a lot of

random 2 ** 300

, the mean will be a lot below 2 ** 300 / 2.
Ladislav
2-Jul-2009
[15877]
yes, in that case, sure
Geomol
2-Jul-2009
[15878]
You're doing a good job, I just don't agree with Carl's view on this.
Ladislav
2-Jul-2009
[15879x2]
hmm, but I am still not sure; you would have to use a denormalized 
number as an argument to be able to detect the difference
I think that the "main problem" may be that the uniform deviates 
are only rarely what is needed; quite often it is necessary to transform 
them to normal, lognormal, exponential, or otherwise differently 
distributed deviates
Geomol
2-Jul-2009
[15881]
If you did e.g.

random 10.0


many, many times, wouldn't you get a result where 0.0 has a lot of 
hits, the first number higher than 0.0 gets close to zero hits, 
and then the number of hits grows up to the number just below 
10.0, which will have almost as many hits as 0.0?
Ladislav
2-Jul-2009
[15882x2]
no, the hits are expected to be uniformly distributed, i.e. the same 
number of hits for 0.0 as for any interior point is expected
(if it does not work that way, then there is a bug)
Geomol
2-Jul-2009
[15884]
But the numbers lie much closer around zero than around 10.0.
Ladislav
2-Jul-2009
[15885]
aha, yes, the numbers aren't uniformly distributed; well, can you 
test it?
Geomol
2-Jul-2009
[15886]
So when you do the calculation, going from a random integer and dividing 
to get a decimal, you get a result between zero and 10.0. If the 
result is close to zero, there are many, many numbers to give the 
result, while if the result is close to 10.0, there are far fewer 
possible numbers to give the result.
Ladislav
2-Jul-2009
[15887]
fewer numbers (lower density of numbers) = higher hit count per number
Geomol
2-Jul-2009
[15888x2]
yes
The result will look strange around zero: many, many counts for 0.0 
and very few counts for the following numbers.
Ladislav
2-Jul-2009
[15890]
but, that cannot influence the mean value
Geomol
2-Jul-2009
[15891x2]
yes, it does. I would say, of course it does. :)
You take a lot of hits for the max value and convert them to 0.0.
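The size of the effect Geomol points out can be estimated with a model: round a continuous draw on [0, max] to a grid of n steps (so both endpoints are reachable, each at half frequency), then remap every hit on the max to 0.0. The mean then drops by about max/(2n); for IEEE doubles the effective n is astronomically large, which is why the difference is practically undetectable. A Python sketch of the model (an illustration, not REBOL's actual generator):

```python
import random

random.seed(1)
max_v, n, trials = 10.0, 64, 200_000
step = max_v / n          # 10/64 is exactly representable, so equality below is safe
total = 0.0
for _ in range(trials):
    v = round(random.uniform(0.0, max_v) / step) * step
    if v == max_v:        # hits on the max value get converted to 0.0
        v = 0.0
    total += v
print(round(total / trials, 3))  # about max_v/2 - max_v/(2*n) = 4.922
```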
Ladislav
2-Jul-2009
[15893x2]
...it could influence the mean value only if the compiler used an 
"insensible" way of rounding
aha, more hits for the max...sorry, did not take that into account
Geomol
2-Jul-2009
[15895]
It could take a long long time to run an example showing this.