# 0 is greater than 0? Why?

## Recommended Posts

LSL is calculating:

```
default
{
    touch_start(integer total_number)
    {
        float X = 0.10;
        float Y = 0.00;

        Y += X;
        Y += X;
        Y += -X;
        Y += -X;

        llOwnerSay("Y is " + (string)Y);

        if (Y > 0.000000)
        {
            llOwnerSay("Y is greater than 0.000000");
        }
        else if (Y == 0.000000)
        {
            llOwnerSay("Y is equal to 0.000000");
        }
        else if (Y < 0.000000)
        {
            llOwnerSay("Y is less than 0.000000");
        }
        else
        {
            llOwnerSay("Sorry, I don't know.");
        }
    }
}
```

and gives response:
[04:19] Object: Y is 0.000000
[04:19] Object: Y is equal to 0.000000

Ok.

But when LSL is calculating:

```
default
{
    touch_start(integer total_number)
    {
        float X = 0.10;
        float Y = 0.00;

        Y += X;
        Y += X;
        Y += X;
        Y += -X;
        Y += -X;
        Y += -X;

        llOwnerSay("Y is " + (string)Y);

        if (Y > 0.000000)
        {
            llOwnerSay("Y is greater than 0.000000");
        }
        else if (Y == 0.000000)
        {
            llOwnerSay("Y is equal to 0.000000");
        }
        else if (Y < 0.000000)
        {
            llOwnerSay("Y is less than 0.000000");
        }
        else
        {
            llOwnerSay("Sorry, I don't know.");
        }
    }
}
```

is response:
[04:20] Object: Y is 0.000000
[04:20] Object: Y is greater than 0.000000

Is this a known bug (which I haven't known), or is this a feature?
Or am I stupid?
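The puzzle reproduces outside LSL too. As a quick sketch, here is the same arithmetic in Python (Python uses 64-bit doubles rather than LSL's 32-bit floats, so the residue is far smaller, but the behaviour matches):

```python
X = 0.10

# First script: two adds, then two subtracts.
Y = 0.0
Y += X
Y += X
Y += -X
Y += -X
print(Y)        # 0.0 -- this sequence happens to land back on zero exactly

# Second script: three adds, then three subtracts.
Y = 0.0
for _ in range(3):
    Y += X
for _ in range(3):
    Y += -X
print(Y)        # a tiny positive residue, not 0.0
print(Y > 0.0)  # True -- "Y is greater than 0.000000"
```

The intermediate sums round differently on the way up and on the way down, so the second sequence does not retrace its steps exactly.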

##### Share on other sites

Wow!

Just tried it myself and got the same result ... yet if you switch to alternating adds/subtracts

```
        Y += X;
        Y += -X;
        Y += X;
        Y += -X;
        Y += X;
        Y += -X;
```

it tells you that it is equal to .. as it should ... no idea though what is happening.
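A sketch of why the alternating version works, in Python for illustration: each `+X` starts from the same value and is immediately undone by `-X`, so the rounding cancels pair by pair and Y really is 0.0 at the end.

```python
X = 0.10
Y = 0.0
for _ in range(3):
    Y += X   # 0.0 + X is exactly the stored value of 0.1
    Y += -X  # subtracting that same stored value returns exactly 0.0
print(Y == 0.0)  # True
```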

##### Share on other sites
Posted (edited)

Turn the "else if"s into plain "if"s to see exactly what it does. Maybe ">" means "greater than or equal to" in SL, what would be ">=" in other languages. Just tested: nope, it does what it should.

http://wiki.secondlife.com/wiki/LSL_Operators says otherwise though...

also try mono and without mono

```
default
{
    touch_start(integer total_number)
    {
        float X = 0.10;
        float Y = 0.00;

        Y += 3*X;
        Y += -3*X;

        llOwnerSay("Y is " + (string)Y);

        if (Y > 0.000000)
        {
            llOwnerSay("Y is greater than 0.000000");
        }
        if (Y == 0.000000)
        {
            llOwnerSay("Y is equal to 0.000000");
        }
        if (Y < 0.000000)
        {
            llOwnerSay("Y is less than 0.000000");
        }
    }
}
```

yields different results in Mono and non-Mono LOL

-> we call it numerics (look up machine epsilon) ... conclusion: don't do rocket science in LSL

PS: that's why you do math in Fortran...

R does, Python does, Matlab does ... every theoretical physicist I know does ... there's a reason for doing math that needs to be precise in Fortran - being able to fool around with your float setup (mantissa vs. exponent bit length) is one of them

Edited by Fionalein

Precision fail.
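For the "look up machine epsilon" pointer: a minimal Python sketch that finds it by repeated halving. Python doubles give about 2.2e-16; a 32-bit float like LSL's has a much larger epsilon, about 1.2e-7, which is why LSL hits these effects so quickly.

```python
import sys

# Machine epsilon: the gap between 1.0 and the next representable float.
# Keep halving until adding the candidate no longer changes 1.0.
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2

print(eps)                            # 2.220446049250313e-16
print(eps == sys.float_info.epsilon)  # True -- matches the runtime's value
```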

##### Share on other sites
Posted (edited)

I'd guess rounding errors, and the fact that a float converted to a string has a precision of six digits while the internal float has a precision of something like seven and a bit digits. Or something like that.

Edited by KT Kingsley
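That matches what the scripts show: printed to six decimal places the residue disappears, while the stored value is still (barely) nonzero. A Python illustration (doubles rather than LSL's 32-bit floats, but the same idea):

```python
residue = 0.1 + 0.1 + 0.1 - 0.1 - 0.1 - 0.1

print(f"{residue:.6f}")  # 0.000000 -- what a six-digit string shows
print(repr(residue))     # the tiny nonzero value actually stored
print(residue > 0.0)     # True
```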

##### Share on other sites
1 hour ago, KT Kingsley said:

I'd guess rounding errors, and the fact that a float converted to a string has a precision of six digits while the internal float has a precision of something like seven and a bit digits. Or something like that.

That would explain a difference in the output .. but not the math ... the code is adding 0.1 float + float .. and the comparison is to a float. It shows that if we add 0.1 three times .. then subtract 0.1 three times .. we do not get back to zero .. as we should. I understand losing precision based on deep decimals .. but really, this is pretty basic math ...

##### Share on other sites
3 hours ago, Tuu Munz said:

LSL is calculating:

```
Y += X;
Y -= X;
Y += X;
Y -= X;
Y += X;
Y -= X;
```

##### Share on other sites
Posted (edited)

This is expected behaviour !!!

A decimal floating point number like 0.1 or 1.0 or whatever is stored and computed in a binary format.

And a 0.1 in decimal notation cannot be converted into a binary format without using an infinite number of digits. So the number is cut off after using 32 or 64 bits or whatever float format is used. If you calculate with this number it is NOT a 0.1, it's just very close to a 0.1.
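You can see the cut-off value directly. In Python (64-bit doubles; LSL's 32-bit floats are cut off even sooner, so the error is larger) the number actually stored for 0.1 is:

```python
from decimal import Decimal
from fractions import Fraction

# The exact value of the double nearest to 0.1 -- close, but not 0.1.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# As an exact ratio it is a dyadic fraction (denominator 2**55), not 1/10.
print(Fraction(0.1))  # 3602879701896397/36028797018963968
```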

If you compare 2 different calculations it's very probable that they are NOT equal even if they should be.
That's why I never compare 2 floats for equality - I compare if the difference is less than 0.0001, for example, or whatever precision I need in this specific case.

Edited by Nova Convair
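Nova's comparison rule, sketched in Python (the 0.0001 tolerance is the example value from the post; pick whatever precision the case needs):

```python
import math

a = 0.1 + 0.1 + 0.1
b = 0.3

print(a == b)               # False -- exact equality is unreliable
print(abs(a - b) < 0.0001)  # True  -- difference-below-tolerance test
print(math.isclose(a, b))   # True  -- stdlib relative-tolerance helper
```

The LSL equivalent of the middle test is `if (llFabs(a - b) < 0.0001)`.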

##### Share on other sites
Posted (edited)

Remember that computers are binary, not decimal (there were decimal computers long ago). In binary floating point, the decimal number 0.1 has no finite representation. Like PI, or like 1/3 in decimal, no matter how many digits/bits you list, you're still not exactly right. If you start out with truncation errors, subsequent calculations with their inherent rounding will only make things worse. As Nova says, you must treat floating point numbers as approximations and you must compare them with an appropriate error band. And do make sure that your error band is not smaller than your rounding error, which depends on the floating point precision and the number and nature of math operations you do prior to making your comparison.

##### Share on other sites

Is this statement helpful: when you do arithmetic using binary approximations of decimal fractions, bits fall off the end. But when you reverse the arithmetic, the bits you lost stay lost?
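Pretty much, yes. One Python sketch of "lost stays lost": near 1e16, doubles are spaced 2 apart, so adding 1 changes nothing, and undoing the addition cannot recover it.

```python
big = 1e16       # doubles near 1e16 are 2 apart
x = big + 1.0    # the +1 falls off the end of the mantissa
print(x == big)  # True -- the bit was never stored
print(x - big)   # 0.0, not 1.0 -- reversing the arithmetic can't bring it back
```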

##### Share on other sites
28 minutes ago, KT Kingsley said:

Is this statement helpful: when you do arithmetic using binary approximations of decimal fractions, bits fall off the end. But when you reverse the arithmetic, the bits you lost stay lost?

Well they don't stay lost. They magically arrive somewhere else in the universe, much to the consternation of programmers over there. They are the digital age equivalent of lost socks.

##### Share on other sites

So it's your lost bits that've been messing up my scripts all this time!

##### Share on other sites
15 minutes ago, KT Kingsley said:

So it's your lost bits that've been messing up my scripts all this time!

...beams!

##### Share on other sites
3 minutes ago, Madelaine McMasters said:

...beams!

You are nefarious.

##### Share on other sites

If your bits are floating, try another attachment point.

##### Share on other sites
Posted (edited)
5 hours ago, Tuu Munz said:

Forget that.

Edited by steph Arnott

##### Share on other sites

Thank you all very much for comments!

2 hours ago, Nova Convair said:

This is expected behaviour !!!

A decimal floating point number like 0.1 or 1.0 or whatever is stored and computed in a binary format.

And a 0.1 in decimal notation cannot be converted into a binary format without using an infinite number of digits. So the number is cut off after using 32 or 64 bits or whatever float format is used. If you calculate with this number it is NOT a 0.1, it's just very close to a 0.1.

If you compare 2 different calculations it's very probable that they are NOT equal even if they should be.
That's why I never compare 2 floats for equality - I compare if the difference is less than 0.0001, for example, or whatever precision I need in this specific case.

2 hours ago, Madelaine McMasters said:

Remember that computers are binary, not decimal (there were decimal computers long ago). In binary floating point, the decimal number 0.1 has no finite representation. Like PI, or like 1/3 in decimal, no matter how many digits/bits you list, you're still not exactly right. If you start out with truncation errors, subsequent calculations with their inherent rounding will only make things worse. As Nova says, you must treat floating point numbers as approximations and you must compare them with an appropriate error band. And do make sure that your error band is not smaller than your rounding error, which depends on the floating point precision and the number and nature of math operations you do prior to making your comparison.

So this is a feature.
(Always nice to be wiser today than yesterday 😋.)

##### Share on other sites
2 hours ago, KT Kingsley said:

Is this statement helpful: when you do arithmetic using binary approximations of decimal fractions, bits fall off the end. But when you reverse the arithmetic, the bits you lost stay lost?

Kinda... the mantissa only has a certain number of bits... after those, the processor usually just clips them.
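The clipping point is easy to probe. For Python's 64-bit doubles the mantissa holds 53 significant bits, so:

```python
print(1.0 + 2 ** -53 == 1.0)  # True  -- the 54th bit is clipped (rounded away)
print(1.0 + 2 ** -52 == 1.0)  # False -- the last mantissa bit still fits
```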

##### Share on other sites
6 hours ago, Tuu Munz said:

0 is greater than 0? Why?

Or am I stupid?

Welcome to the wonderful world of floating point computations.

##### Share on other sites
2 hours ago, Madelaine McMasters said:

Remember that computers are binary, not decimal (there were decimal computers long ago). In binary floating point, the decimal number 0.1 has no finite representation. Like PI, or like 1/3 in decimal, no matter how many digits/bits you list, you're still not exactly right. If you start out with truncation errors, subsequent calculations with their inherent rounding will only make things worse. As Nova says, you must treat floating point numbers as approximations and you must compare them with an appropriate error band. And do make sure that your error band is not smaller than your rounding error, which depends on the floating point precision and the number and nature of math operations you do prior to making your comparison.

There has never been a 'decimal computer'. Even Babbage's used base 2.

##### Share on other sites
Posted (edited)
10 hours ago, steph Arnott said:

There has never been a 'decimal computer'. Even Babbage's used base 2.

Though Leibniz invented the binary arithmetic system now used in digital computers, the first computers were decimal. Babbage's engines used ten-position gears to store and calculate. One of the earliest and most famous electronic computers was the ENIAC. It was decimal, using decade ring counters to store and compute.

When Dad was in engineering school, he used an IBM 1620 computer. It was decimal. By the time I was old enough to remember such things, it had been retired to the university's museum, where it played primitive tunes through a speaker connected to some internal circuit node. So, I have seen and touched a decimal computer.

The first binary computer was Konrad Zuse's Z1, which arrived 100 years after Babbage's early efforts.

For a more complete history of decimal computers: https://en.wikipedia.org/wiki/Decimal_computer

ETA: The ENIAC used decimal ring counters, the Harwell used dekatron tubes.  I posted a video of that computer in operation elsewhere in the thread.

Correct ENIAC from dekatron tubes to decimal ring counters (similar function)

##### Share on other sites
3 minutes ago, Madelaine McMasters said:

Though Leibniz invented the binary arithmetic system now used in digital computers, the first computers were decimal. Babbage's engines used ten-position gears to store and calculate. One of the earliest and most famous electronic computers was the ENIAC. It was decimal, using dekatron tubes to store and compute.

When Dad was in engineering school, he used an IBM 1620 computer. It was decimal. By the time I was old enough to remember such things, it had been retired to the university's museum, where it played primitive tunes through a speaker connected to some internal circuit node. So, I have seen and touched a decimal computer.

The first binary computer was Konrad Zuse's Z1, which arrived 100 years after Babbage's early efforts.

For a more complete history of decimal computers: https://en.wikipedia.org/wiki/Decimal_computer

Babbage's engines used ten-position gears to store and calculate, but it was still base 2. The Bletchley Park machines were base 2. The 1620 was BCD, which is base 2. They all used base 2.

##### Share on other sites
2 minutes ago, steph Arnott said:

Babbage's engines used ten-position gears to store and calculate, but it was still base 2. The Bletchley Park machines were base 2. The 1620 was BCD, which is base 2. They all used base 2.

I think you are confusing comparison with computation.

##### Share on other sites
11 minutes ago, Madelaine McMasters said:

Though Leibniz invented the binary arithmetic system now used in digital computers, the first computers were decimal. Babbage's engines used ten-position gears to store and calculate. One of the earliest and most famous electronic computers was the ENIAC. It was decimal, using dekatron tubes to store and compute.

When Dad was in engineering school, he used an IBM 1620 computer. It was decimal. By the time I was old enough to remember such things, it had been retired to the university's museum, where it played primitive tunes through a speaker connected to some internal circuit node. So, I have seen and touched a decimal computer.

The first binary computer was Konrad Zuse's Z1, which arrived 100 years after Babbage's early efforts.

For a more complete history of decimal computers: https://en.wikipedia.org/wiki/Decimal_computer

I thought some of the old stuff used octal, back then the extra 8th bit was just a “checksum bit”. I’m too lazy to google it because..octal.

##### Share on other sites

No. But if you believe decimal computers ever existed then that is up to you.

1 minute ago, Love Zhaoying said:

I thought some of the old stuff used octal, back then the extra 8th bit was just a “checksum bit”. I’m too lazy to google it because..octal.

It was, but the number crunching was binary. Base 2 is the simplest method because it only relies on an ON/OFF. Even the Enigma machine used base 2.
