Tuu Munz

0 is greater than 0? Why?


LSL is calculating:

default
{
    touch_start(integer total_number)
    {
        float X = 0.10;
        float Y = 0.00;

        Y += X;
        Y += X;
        Y += -X;
        Y += -X;
          
        llOwnerSay("Y is " + (string)Y);
        
        if (Y > 0.000000)
        {
            llOwnerSay("Y is greater than 0.000000");
        }
        else if (Y == 0.000000)
        {
            llOwnerSay("Y is equal to 0.000000");
        }
        else if (Y < 0.000000)
        {
            llOwnerSay("Y is less than 0.000000");
        }
        else
        {
            llOwnerSay("Sorry, I don't know.");  
        }        
    }
}


and gives response:
[04:19] Object: Y is 0.000000
[04:19] Object: Y is equal to 0.000000

Ok.

But when LSL is calculating:

default
{
    touch_start(integer total_number)
    {
        float X = 0.10;
        float Y = 0.00;

        Y += X;
        Y += X;
        Y += X;
        Y += -X;
        Y += -X;
        Y += -X;
        llOwnerSay("Y is " + (string)Y);
        
        if (Y > 0.000000)
        {
            llOwnerSay("Y is greater than 0.000000");
        }
        else if (Y == 0.000000)
        {
            llOwnerSay("Y is equal to 0.000000");
        }
        else if (Y < 0.000000)
        {
            llOwnerSay("Y is less than 0.000000");
        }
        else
        {
            llOwnerSay("Sorry, I don't know.");  
        }        
    }
}


is response:
[04:20] Object: Y is 0.000000
[04:20] Object: Y is greater than 0.000000

Is this a known bug (which I haven't known), or is this a feature?
Or am I stupid?


Wow!

Just tried it myself and got the same result ... yet if you switch to alternating adds/subtracts

        Y += X;
        Y += -X;
        Y += X;
        Y += -X;
        Y += X;
        Y += -X;

it tells you that it is equal to zero .. as it should ... no idea though what is happening.

Posted (edited)

Turn the "else if"s into "if"s to see exactly what it does. Maybe ">" is "greater than or equal to" in SL, what would be ">=" in other languages ... just tested: nope, it does what it should.

http://wiki.secondlife.com/wiki/LSL_Operators tells a different story though...

Also try it with Mono and without Mono:

default
{
    touch_start(integer total_number)
    {
        float X = 0.10;
        float Y = 0.00;

        Y += 3*X;
        Y += -3*X;
    
        llOwnerSay("Y is " + (string)Y);
        
        if (Y > 0.000000)
        {
            llOwnerSay("Y is greater than 0.000000");
        }
        if (Y == 0.000000)
        {
            llOwnerSay("Y is equal to 0.000000");
        }
        if (Y < 0.000000)
        {
            llOwnerSay("Y is less than 0.000000");
        }
             
    }
}

yields different results in Mono and non-Mono LOL

-> we call it numerics (look up machine epsilon) ... conclusion: don't do rocket science in LSL ;)

PS: that's why you do math in Fortran...

R does, Python does, Matlab does ... every theoretical physicist I know does ... there's a reason for doing math that needs to be precise in Fortran - being able to fool around with your float setup (mantissa vs. exponent bit length) is one of them ;)

 

Edited by Fionalein

Posted (edited)

I'd guess rounding errors, and the fact that a float converted to a string has a precision of six digits while the internal float has a precision of something like seven and a bit digits. Or something like that.

Edited by KT Kingsley
1 hour ago, KT Kingsley said:

I'd guess rounding errors, and the fact that a float converted to a string has a precision of six digits while the internal float has a precision of something like seven and a bit digits. Or something like that.

That would explain a difference in the output .. but not the math ... the code is adding float 0.1 to a float .. and the comparison is to a float. It shows that if we add 0.1 three times .. then subtract 0.1 three times .. we do not get back to zero .. as we should. I understand losing precision with deep decimals .. but really, this is pretty basic math ...
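The effect is easy to reproduce outside LSL too. Here is a quick Python sketch (Python uses 64-bit doubles where LSL uses 32-bit floats, but the behaviour is the same in kind): three adds of 0.1 followed by three subtracts leaves a tiny positive residue, while alternating add/subtract cancels exactly.

```python
x = 0.1
y = 0.0
for _ in range(3):  # add 0.1 three times
    y += x
for _ in range(3):  # then subtract 0.1 three times
    y -= x
print(y)         # a tiny positive residue, not 0.0
print(y > 0.0)   # True

# Alternating add/subtract cancels each rounding error immediately:
z = 0.0
for _ in range(3):
    z += x
    z -= x
print(z == 0.0)  # True
```

The residue appears because each intermediate sum is rounded to the nearest representable binary value, and the rounding in the "add three, then subtract three" order does not cancel out.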

3 hours ago, Tuu Munz said:

LSL is calculating:

        Y += X;
        Y -= X;
        Y += X;
        Y -= X;
        Y += X;
        Y -= X;
Posted (edited)

This is expected behaviour!!!

A decimal floating point number like 0.1 or 1.0 or whatever is stored and computed in a binary format.

And 0.1 in decimal notation cannot be converted into a binary format without using an infinite number of digits. So the number is cut off after 32 or 64 bits or whatever float format is used. If you calculate with this number it is NOT 0.1, it's just very close to 0.1.

If you compare 2 different calculations it's very probable that they are NOT equal even if they should be.
That's why I never compare 2 floats for equality - I compare whether the difference is less than 0.0001, for example, or whatever precision I need in the specific case.
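That tolerance comparison can be sketched like this (Python for illustration; the helper name and the 0.0001 tolerance are arbitrary choices - in LSL the same test would be llFabs(a - b) < tolerance):

```python
def nearly_equal(a, b, tolerance=0.0001):
    """Treat two floats as equal if they differ by less than the tolerance."""
    return abs(a - b) < tolerance

y = 0.1 + 0.1 + 0.1 - 0.1 - 0.1 - 0.1
print(y == 0.0)              # False: exact comparison trips on rounding error
print(nearly_equal(y, 0.0))  # True: tolerance comparison behaves as intended
```

Pick the tolerance to match the precision the specific case actually needs; it must be larger than the accumulated rounding error but smaller than any difference you care about.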

Edited by Nova Convair
Posted (edited)

Remember that computers are binary, not decimal (there were decimal computers long ago). In binary floating point, the decimal number 0.1 has no finite representation. Like pi, or 1/3 in decimal, no matter how many digits/bits you list, you're still not right. If you start out with truncation errors, subsequent calculations with their inherent rounding will only make things worse. As Nova says, you must treat floating point numbers as approximations and you must compare them with an appropriate error band. And do make sure that your error band is not smaller than your rounding error, which depends on the floating point precision and the number and nature of the math operations you do prior to making your comparison.

https://www.exploringbinary.com/why-0-point-1-does-not-exist-in-floating-point/
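One way to see the stored value directly is to round-trip 0.1 through the 32-bit single-precision format (the format LSL floats use). Python's struct module can emulate that; this is an illustration in Python, not LSL code:

```python
import struct

# Pack 0.1 into a 32-bit float and read it back: the nearest representable
# binary value is close to 0.1, but not equal to it.
stored = struct.unpack('f', struct.pack('f', 0.1))[0]
print(stored == 0.1)  # False
print(repr(stored))   # 0.10000000149011612
```

Every LSL calculation with "0.1" is really working with that slightly-too-large value, which is why repeated adds and subtracts drift.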

Edited by Madelaine McMasters

Is this statement helpful: when you do arithmetic using binary approximations of decimal fractions, bits fall off the end. But when you reverse the arithmetic, the bits you lost stay lost?
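That picture can be made concrete: add a tiny number to a huge one and the tiny number's bits fall off the end of the mantissa, and subtracting the huge number back does not recover them. A small Python illustration (64-bit doubles; the same thing happens, sooner, with LSL's 32-bit floats):

```python
# 0.1 is far smaller than one bit of precision at 1e20, so adding it
# changes nothing - and the lost bits do not come back on subtraction.
lost = (0.1 + 1e20) - 1e20
print(lost)  # 0.0 - the 0.1 is gone for good
```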

28 minutes ago, KT Kingsley said:

Is this statement helpful: when you do arithmetic using binary approximations of decimal fractions bits fall off the end. But when you reverse the arithmetic the bits you lost stay lost?

Well they don't stay lost. They magically arrive somewhere else in the universe, much to the consternation of programmers over there. They are the digital age equivalent of lost socks.


Thank you all very much for comments!

2 hours ago, Nova Convair said:

This is expected behaviour !!!

A decimal floating point number like 0.1 or 1.0 or whatever is stored and computed in a binary format.

And 0.1 in decimal notation cannot be converted into a binary format without using an infinite number of digits. So the number is cut off after 32 or 64 bits or whatever float format is used. If you calculate with this number it is NOT 0.1, it's just very close to 0.1.

If you compare 2 different calculations it's very probable that they are NOT equal even if they should be.
That's why I never compare 2 floats for equality - I compare whether the difference is less than 0.0001, for example, or whatever precision I need in the specific case.

 

2 hours ago, Madelaine McMasters said:

Remember that computers are binary, not decimal (there were decimal computers long ago). In binary floating point, the decimal number 0.1 has no finite representation. Like pi, or 1/3 in decimal, no matter how many digits/bits you list, you're still not right. If you start out with truncation errors, subsequent calculations with their inherent rounding will only make things worse. As Nova says, you must treat floating point numbers as approximations and you must compare them with an appropriate error band. And do make sure that your error band is not smaller than your rounding error, which depends on the floating point precision and the number and nature of the math operations you do prior to making your comparison.

https://www.exploringbinary.com/why-0-point-1-does-not-exist-in-floating-point/

So this is a feature.
(Always nice to be wiser today than yesterday 😋.)

2 hours ago, KT Kingsley said:

Is this statement helpful: when you do arithmetic using binary approximations of decimal fractions bits fall off the end. But when you reverse the arithmetic the bits you lost stay lost?

kinda... the mantissa only has a certain number of bits... beyond those the processor usually just clips them

2 hours ago, Madelaine McMasters said:

Remember that computers are binary, not decimal (there were decimal computers long ago). In binary floating point, the decimal number 0.1 has no finite representation. Like pi, or 1/3 in decimal, no matter how many digits/bits you list, you're still not right. If you start out with truncation errors, subsequent calculations with their inherent rounding will only make things worse. As Nova says, you must treat floating point numbers as approximations and you must compare them with an appropriate error band. And do make sure that your error band is not smaller than your rounding error, which depends on the floating point precision and the number and nature of the math operations you do prior to making your comparison.

https://www.exploringbinary.com/why-0-point-1-does-not-exist-in-floating-point/

There has never been a 'decimal computer'. Even Babbage's used base 2.

Posted (edited)
10 hours ago, steph Arnott said:

There has never been a 'decimal computer'. Even Babbage's used base 2.

Though Leibniz invented the binary arithmetic system now used in digital computers, the first computers were decimal. Babbage's engines used ten-position gears to store and calculate. One of the earliest and most famous electronic computers was the ENIAC. It was decimal, using decimal ring counters to store and compute.

When Dad was in engineering school, he used an IBM 1620 computer. It was decimal. By the time I was old enough to remember such things, it had been retired to the university's museum, where it played primitive tunes through a speaker connected to some internal circuit node. So, I have seen and touched a decimal computer.

The first binary computer was Konrad Zuse's Z1, which arrived 100 years after Babbage's early efforts.

For a more complete history of decimal computers: https://en.wikipedia.org/wiki/Decimal_computer

ETA: The ENIAC used decimal ring counters, the Harwell used dekatron tubes.  I posted a video of that computer in operation elsewhere in the thread.

Edited by Madelaine McMasters
Correct ENIAC from dekatron tubes to decimal ring counters (similar function)
3 minutes ago, Madelaine McMasters said:

Though Leibniz invented the binary arithmetic system now used in digital computers, the first computers were decimal. Babbage's engines used ten-position gears to store and calculate. One of the earliest and most famous electronic computers was the ENIAC. It was decimal, using dekatron tubes to store and compute.

When Dad was in engineering school, he used an IBM 1620 computer. It was decimal. By the time I was old enough to remember such things, it had been retired to the university's museum, where it played primitive tunes through a speaker connected to some internal circuit node. So, I have seen and touched a decimal computer.

The first binary computer was Konrad Zuse's Z1, which arrived 100 years after Babbage's early efforts.

For a more complete history of decimal computers: https://en.wikipedia.org/wiki/Decimal_computer

Babbage's engines used ten-position gears to store and calculate, but it was still base 2. The Bletchley Park machines were base 2. The 1620 was BCD, which was base 2. They all used base 2.

2 minutes ago, steph Arnott said:

Babbage's engines used ten-position gears to store and calculate, but it was still base 2. The Bletchley Park machines were base 2. The 1620 was BCD, which was base 2. They all used base 2.

I think you are confusing comparison with computation.

11 minutes ago, Madelaine McMasters said:

Though Leibniz invented the binary arithmetic system now used in digital computers, the first computers were decimal. Babbage's engines used ten-position gears to store and calculate. One of the earliest and most famous electronic computers was the ENIAC. It was decimal, using dekatron tubes to store and compute.

When Dad was in engineering school, he used an IBM 1620 computer. It was decimal. By the time I was old enough to remember such things, it had been retired to the university's museum, where it played primitive tunes through a speaker connected to some internal circuit node. So, I have seen and touched a decimal computer.

The first binary computer was Konrad Zuse's Z1, which arrived 100 years after Babbage's early efforts.

For a more complete history of decimal computers: https://en.wikipedia.org/wiki/Decimal_computer

I thought some of the old stuff used octal, back then the extra 8th bit was just a “checksum bit”. I’m too lazy to google it because..octal.


No. But if you believe decimal computers ever existed then that is up to you.

1 minute ago, Love Zhaoying said:

I thought some of the old stuff used octal, back then the extra 8th bit was just a “checksum bit”. I’m too lazy to google it because..octal.

It was, but the number crunching was binary. Base 2 is the simplest method because it only relies on ON/OFF. Even the Enigma machine used base 2.

