r3wp [groups: 83 posts: 189283]
World: r3wp

[!REBOL3-OLD1]

Geomol
1-Jul-2009
[15858]
RANDOM is a distribution. Getting random integers, the mean value 
is well defined as:

(max + 1) / 2

So e.g.

random 10

will give a mean of 5.5. What is the mean of

random 10.0
or
random 100.0
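The integer case is easy to check empirically. A minimal Python sketch (not REBOL, but the property is language-independent): uniform integers 1..max have mean (max + 1) / 2.

```python
import random

# REBOL's `random 10` yields uniform integers 1..10; the mean of a
# uniform integer distribution on 1..max is (max + 1) / 2 = 5.5 here.
samples = [random.randint(1, 10) for _ in range(200_000)]
mean = sum(samples) / len(samples)
print(mean)  # close to 5.5
```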
Ladislav
1-Jul-2009
[15859]
5.0 and 50.0 (or, do you mean, it is only "roughly" 5.0 and 50.0?)
PeterWood
1-Jul-2009
[15860]
Ladislav: I reported bug #3518 in Rambo mainly because the behaviour 
of the '= function is not consistent. My thinking was if 1 = 1.0 
why doesn't  #"a" = "a"?


It appears that the '= function acts as the '== function unless the 
types are number!. 


I have come to accept that Rebol has been designed pragmatically 
and, understandably, may be inconsistent at times. I think this makes 
the need for accurate documentation essential. I would hope that 
the function help for = can be changed to accurately reflect the 
function's behaviour.
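For comparison, Python's `==` shows a similar numeric-only looseness (an analogy only, not REBOL semantics): it coerces across numeric types but not between text-like types.

```python
# int and float compare equal across types...
print(1 == 1.0)        # True
# ...but a str and the "same" bytes do not coerce
print("a" == b"a")     # False
```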
Ladislav
1-Jul-2009
[15861]
actually, my task now is to define the desired results of such comparisons 
for R3, (which may serve as documentation too)
PeterWood
1-Jul-2009
[15862x2]
The R2 behaviour has the advantage that it is easy to define and 
understand (especially if the function help text was improved). If 
other options are to be considered, defining the desired results 
will be more difficult. No wonder you are taking this on.
I believe that there needs to be some restriction on the datatypes 
on which the '= function will work. It seems to make no sense to 
compare a URL! with an email! (Unless you define a URL! to be equal 
to an email! if they refer to the same IP address or domain name. 
Perhaps that's something for same?).


It's harder to say whether an issue! can be equal to a binary!, but 
what about an integer! with a binary!?
Maxim
1-Jul-2009
[15864x2]
I WANT PLUGINS !!!!!    :-)
wouldn't it be cool to load a rebol instance as a plugin within rebol? 
 :-)

this could be the basis for an orthogonal REBOL kernel  :-)
Anton
1-Jul-2009
[15866]
Peter, are you sure you would never want to compare an email with 
a url?
What about urls like this?
http://some.dom/submit?email=somebody@somewhere.net


I might want to see if a given email can be found in the query string.
Anton
2-Jul-2009
[15867]
Ladislav, your "parsing a lit-path" example above looks ok to me 
for the proposed ALIKE?/SIMILAR? operator, and EQUAL? if it's been 
decided that EQUAL? remains just as ALIKE?/SIMILAR?, but not ok if 
EQUAL? is refitted to name its purpose more accurately (i.e. EQUAL? 
becomes more strict).
BrianH
2-Jul-2009
[15868x2]
Peter, in response to the suggestions in your last message:

- issue! = binary! : not good, at least in R3. Perhaps issue! = to-hex 
binary!

- integer! = binary! : not good, at least in R3. Use integer! = to-integer 
binary!


Actually, anything-but-binary! = binary! is a bad idea in R3, since 
encodings aren't assumed. The TO whatever conversion actions are 
good for establishing what you intend the binary! to mean though, 
especially since extra bytes are ignored - this allows binary streams.
Anton, we decided that making EQUAL? more worthy of its name would 
break too much code that depends on it being loose. Oh well :(
PeterWood
2-Jul-2009
[15870]
Brian H: My "suggestions" are not suggestions, merely questions.
Anton
2-Jul-2009
[15871]
BrianH, oh well, it's a pity.
Geomol
2-Jul-2009
[15872x2]
Ladislav wrote: "5.0 and 50.0 (or, do you mean, it is only "roughly" 
5.0 and 50.0?)"


Yes, the mean must be slightly below 5.0 and 50.0 with the new random. 
With your first version, it is exactly 5.0 and 50.0.
With the new random, 0.0 will also get a lot more hits than numbers 
close to 0.0. That's because the distance between adjacent decimals 
is small for numbers close to zero, while the distance gets larger 
and larger for higher numbers. (Because of the IEEE 754 representation.) 
So the max value will get a lot more hits than a small number, and 
all hits on the max value get converted to 0.0.

I wouldn't use the new random function with decimals.
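The uneven spacing Geomol describes is easy to see in any IEEE 754 implementation; a Python sketch using `math.ulp` (the gap to the next representable double):

```python
import math

# IEEE 754 doubles are not evenly spaced: the gap between adjacent
# representable numbers (the ulp) grows with the magnitude.
print(math.ulp(0.001))  # tiny gap near zero
print(math.ulp(10.0))   # far larger gap near 10.0
```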
Ladislav
2-Jul-2009
[15874x2]
slightly below 5.0 and 50.0
 - certainly, but the difference is "undetectable" in these cases
moreover, this version is quite standard - see e.g. Wikipedia, or 
the Dylan programming language, etc.
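The standard construction Ladislav refers to can be sketched as follows (an assumption about the details: scale a random integer in [0, N] by max / N with N = 2 ** 53, so both endpoints 0.0 and max are attainable):

```python
import random

# Hypothetical sketch of the endpoint-inclusive uniform deviate:
# a random integer in [0, N] scaled by maximum / N.
N = 2 ** 53  # number of steps; an assumed choice, not REBOL's actual one

def random_decimal(maximum):
    return random.randint(0, N) / N * maximum

x = random_decimal(10.0)
print(0.0 <= x <= 10.0)  # True
```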
Geomol
2-Jul-2009
[15876]
If you do a lot of

random 2 ** 300

the mean will be a lot below 2 ** 300 / 2.
Ladislav
2-Jul-2009
[15877]
yes, in that case, sure
Geomol
2-Jul-2009
[15878]
You're doing a good job, I just don't agree with Carl's view on this.
Ladislav
2-Jul-2009
[15879x2]
hmm, but I am still not sure; you would have to use a denormalized 
number as an argument to be able to detect the difference
I think the "main problem" may be that uniform deviates are only 
rarely what is needed; quite often it is necessary to transform 
them to normal, lognormal, exponential, or otherwise differently 
distributed deviates
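One such transformation, as a Python sketch: the inverse-CDF method turns a uniform deviate into an exponential one (the name `exponential` and rate `lam` are illustrative, not from the discussion).

```python
import math
import random

# Inverse-CDF transform: if U is uniform on [0, 1), then
# -ln(1 - U) / lam is exponentially distributed with rate lam.
def exponential(lam):
    return -math.log(1.0 - random.random()) / lam

samples = [exponential(2.0) for _ in range(100_000)]
print(sum(samples) / len(samples))  # near 1 / lam = 0.5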
Geomol
2-Jul-2009
[15881]
If you did e.g.

random 10.0


many many times, wouldn't you get a result, where 0.0 has a lot of 
hits, the first number higher than 0.0 will get close to zero hits, 
and then the number of hits will grow up to the number just below 
10.0, which will have almost as many hits as 0.0?
Ladislav
2-Jul-2009
[15882x2]
no, the hits are expected to be uniformly distributed, i.e. the same 
number of hits for 0.0 as for any interior point is expected
(if it does not work that way, then there is a bug)
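Ladislav's expectation can be tested directly; a Python sketch (assuming an ordinary uniform generator stands in for the REBOL one): hits per equal-width interval come out uniform, even though far more distinct doubles exist near zero.

```python
import random

# With a correct uniform deviate over [0, 10), every equal-width bin
# should collect about the same number of hits, regardless of how
# densely the representable doubles are packed inside it.
counts = [0] * 10
for _ in range(100_000):
    counts[min(int(random.random() * 10.0), 9)] += 1
print(counts)  # each bin near 10_000
```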
Geomol
2-Jul-2009
[15884]
But the numbers lie much closer together around zero than around 10.0.
Ladislav
2-Jul-2009
[15885]
aha, yes, the numbers aren't uniformly distributed; well, can you 
test it?
Geomol
2-Jul-2009
[15886]
So when you do the calculation starting from a random integer and 
dividing to get a decimal, you get a result between zero and 10.0. 
If the result is close to zero, there are many many numbers that 
can give the result, while if the result is close to 10.0, there 
are far fewer possible numbers that can give the result.
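The density difference can be counted exactly: the bit patterns of positive IEEE 754 doubles are ordered like integers, so subtracting patterns counts the representable values in a range. A Python sketch:

```python
import struct

# Reinterpret a double's 64-bit pattern as an unsigned integer; for
# positive doubles, pattern order matches numeric order, so the
# difference of patterns counts the representable values in between.
def bits(x):
    return struct.unpack("<Q", struct.pack("<d", x))[0]

near_zero = bits(1.25) - bits(0.0)   # doubles in [0.0, 1.25)
near_ten  = bits(10.0) - bits(8.75)  # doubles in [8.75, 10.0)
print(near_zero > near_ten)  # True: vastly denser near zero
```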
Ladislav
2-Jul-2009
[15887]
less numbers (lower density of numbers) = higher hit count per number
Geomol
2-Jul-2009
[15888x2]
yes
The result will look strange around zero. Many many counts for 0.0 
and very few counts for the following numbers.
Ladislav
2-Jul-2009
[15890]
but, that cannot influence the mean value
Geomol
2-Jul-2009
[15891x2]
yes, it does. I would say, of course it does. :)
You take a lot of hits for the max value and convert them to 0.0.
Ladislav
2-Jul-2009
[15893x2]
...it could influence the mean value only if the compiler used an 
"insensible" way of rounding
aha, more hits for the max...sorry, did not take that into account
Geomol
2-Jul-2009
[15895]
It could take a long long time to run an example showing this.
Ladislav
2-Jul-2009
[15896x2]
but, anyway, I am still pretty sure, that such a difference actually 
is undetectable
yes, you would have to run the test "almost forever" to see anything
Geomol
2-Jul-2009
[15898]
A test could be to run
random 1.0

many many times and save the results coming close to zero. Like my 
other test, checking if the result is within the first 1024 numbers 
close to zero. After running the test many times, a picture can be 
seen.
Ladislav
2-Jul-2009
[15899x2]
for random 1.0 you cannot find any irregularities, there aren't any
(the random numbers you may obtain are regularly spaced in that case)
Geomol
2-Jul-2009
[15901]
ah yes. :) You need larger numbers. Up to what number is the distance 
between numbers the same? 2.0?
Ladislav
2-Jul-2009
[15902x3]
for sure, if the difference should be detectable, then you should 
try random 2 ** 1023
- in that case the chances are much higher, that you could detect 
any irregularities
do I understand correctly, that you prefer the variant including 
both endpoints?
Geomol
2-Jul-2009
[15905]
yes
Ladislav
2-Jul-2009
[15906]
...and your main reason is, that you know the endpoints exactly?
Geomol
2-Jul-2009
[15907]
I'm in doubt about the distribution of numbers. Is this R2 function 
calculating ulps between two numbers ok?

ulps: func [
    value1 
    value2 
    /local d d1 d2
][
    d: make struct! [v [decimal!]] none 
    d1: make struct! [hi [integer!] lo [integer!]] none 
    d2: make struct! [hi [integer!] lo [integer!]] none 
    d/v: value1 
    change third d1 third d 
    d/v: value2 
    change third d2 third d 
    print [d1/hi d1/lo d2/hi d2/lo] 
    print [d1/hi - d2/hi * (2 ** 32) + d1/lo - d2/lo]
]