Tuu Munz Posted January 9, 2019

LSL calculates:

default
{
    touch_start(integer total_number)
    {
        float X = 0.10;
        float Y = 0.00;
        Y += X;
        Y += X;
        Y += -X;
        Y += -X;
        llOwnerSay("Y is " + (string)Y);
        if (Y > 0.000000) { llOwnerSay("Y is greater than 0.000000"); }
        else if (Y == 0.000000) { llOwnerSay("Y is equal to 0.000000"); }
        else if (Y < 0.000000) { llOwnerSay("Y is less than 0.000000"); }
        else { llOwnerSay("Sorry, I don't know."); }
    }
}

and gives this response:

[04:19] Object: Y is 0.000000
[04:19] Object: Y is equal to 0.000000

OK. But when LSL calculates:

default
{
    touch_start(integer total_number)
    {
        float X = 0.10;
        float Y = 0.00;
        Y += X;
        Y += X;
        Y += X;
        Y += -X;
        Y += -X;
        Y += -X;
        llOwnerSay("Y is " + (string)Y);
        if (Y > 0.000000) { llOwnerSay("Y is greater than 0.000000"); }
        else if (Y == 0.000000) { llOwnerSay("Y is equal to 0.000000"); }
        else if (Y < 0.000000) { llOwnerSay("Y is less than 0.000000"); }
        else { llOwnerSay("Sorry, I don't know."); }
    }
}

the response is:

[04:20] Object: Y is 0.000000
[04:20] Object: Y is greater than 0.000000

Is this a known bug (one I hadn't heard of), or is it a feature? Or am I stupid?
Wandering Soulstar Posted January 9, 2019

Wow! Just tried it myself and got the same result ... yet if you switch to alternating adds/subtracts

Y += X;
Y += -X;
Y += X;
Y += -X;
Y += X;
Y += -X;

it tells you that it is equal, as it should ... no idea though what is happening.
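The order dependence above is easy to reproduce outside LSL. Here is a Python sketch for illustration (Python uses 64-bit doubles where LSL uses 32-bit floats, so the leftover is smaller, but the effect is the same):

```python
# Grouped: add 0.1 three times, then subtract it three times.
grouped = 0.0
for _ in range(3):
    grouped += 0.1
for _ in range(3):
    grouped -= 0.1

# Alternating: add then immediately subtract, three times.
alternating = 0.0
for _ in range(3):
    alternating += 0.1
    alternating -= 0.1

print(grouped)      # a tiny positive leftover, not 0.0
print(alternating)  # exactly 0.0: each subtraction undoes its own add
```

In the grouped version the intermediate sums (0.2, 0.3, ...) each pick up their own rounding, and those roundings do not cancel exactly on the way back down; in the alternating version the value returns to an exactly representable number after every pair.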
Fionalein Posted January 9, 2019 (edited)

Turn the "else if"s into plain "if"s too, to see exactly what it does. Maybe ">" is "greater than or equal to" in SL, what ">=" is in other languages... just tested: nope, it does what it should. http://wiki.secondlife.com/wiki/LSL_Operators says otherwise anyway. Also try it with Mono and without Mono:

default
{
    touch_start(integer total_number)
    {
        float X = 0.10;
        float Y = 0.00;
        Y += 3*X;
        Y += -3*X;
        llOwnerSay("Y is " + (string)Y);
        if (Y > 0.000000) { llOwnerSay("Y is greater than 0.000000"); }
        if (Y == 0.000000) { llOwnerSay("Y is equal to 0.000000"); }
        if (Y < 0.000000) { llOwnerSay("Y is less than 0.000000"); }
    }
}

yields different results in Mono and non-Mono. LOL -> we call it numerics (look up "machine epsilon") ... conclusion: don't do rocket science in LSL.

PS: that's why you do math in Fortran... R does, Python does, MATLAB does ... every theoretical physicist I know does ... there's a reason for doing math that needs to be precise in Fortran - being able to tune your float setup (mantissa vs. exponent bit length) is one of them.

Edited January 9, 2019 by Fionalein
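For anyone wondering what "machine epsilon" means in practice, here is a quick Python sketch (64-bit doubles; a single-precision float like LSL's has a much larger epsilon, about 1.19e-07):

```python
import sys

# Machine epsilon: the gap between 1.0 and the next larger representable float.
eps = sys.float_info.epsilon
print(eps)  # 2.220446049250313e-16 for 64-bit doubles

# Anything much smaller than eps simply vanishes when added to 1.0.
lost = 1.0 + eps / 4
kept = 1.0 + eps
print(lost == 1.0)  # True: the tiny addend was rounded away
print(kept == 1.0)  # False: eps is just big enough to register
```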
Love Zhaoying Posted January 9, 2019

Precision fail.
KT Kingsley Posted January 9, 2019 (edited)

I'd guess rounding errors, and the fact that a float converted to a string has a precision of six digits while the internal float has a precision of something like seven and a bit digits. Or something like that.

Edited January 9, 2019 by KT Kingsley
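That guess is easy to check: six-digit formatting prints a tiny nonzero float as 0.000000 while comparisons still see the residue. A Python sketch of the same effect (the exact leftover differs from LSL's, since LSL uses 32-bit floats):

```python
# The leftover from the original script's add/subtract sequence.
y = 0.1 + 0.1 + 0.1 - 0.1 - 0.1 - 0.1

# Six decimal places, like LSL's (string) cast, hide the residue entirely.
print("%f" % y)   # prints "0.000000"
print(y > 0.0)    # True: the comparison still sees the residue
print(repr(y))    # the full value: a tiny positive number around 1e-17
```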
Wandering Soulstar Posted January 9, 2019

1 hour ago, KT Kingsley said: I'd guess rounding errors, and the fact that a float converted to a string has a precision of six digits while the internal float has a precision of something like seven and a bit digits. Or something like that.

That would explain a difference in the output .. but not the math ... the code is adding and subtracting 0.1, float to float, and the comparison is to a float. It shows that if we add 0.1 three times and then subtract 0.1 three times, we do not get back to zero, as we should. I understand losing precision with deep decimals .. but really, this is pretty basic math ...
steph Arnott Posted January 9, 2019

3 hours ago, Tuu Munz said: LSL is calculating:

Y += X;
Y -= X;
Y += X;
Y -= X;
Y += X;
Y -= X;
Nova Convair Posted January 9, 2019 (edited)

This is expected behaviour! A decimal floating point number like 0.1 or 1.0 or whatever is stored and computed in a binary format, and 0.1 in decimal notation cannot be converted into a binary format without using an infinite number of digits. So the number is cut off after 32 or 64 bits or whatever float format is used. If you calculate with this number it is NOT 0.1, it's just very close to 0.1. If you compare 2 different calculations it's very probable that they are NOT equal even when they should be. That's why I never compare 2 floats for equality - I check whether the difference is less than 0.0001, for example, or whatever precision I need in the specific case.

Edited January 9, 2019 by Nova Convair
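Nova's tolerance comparison, sketched in Python (the 0.0001 tolerance is her example figure, and the helper name is mine; pick whatever precision your case needs):

```python
def float_eq(a, b, tol=0.0001):
    """Treat two floats as equal if they differ by less than tol."""
    return abs(a - b) < tol

# The residue from the original add/subtract sequence.
y = 0.1 + 0.1 + 0.1 - 0.1 - 0.1 - 0.1
print(y == 0.0)          # False: exact equality trips over the residue
print(float_eq(y, 0.0))  # True: within tolerance, it's zero
```

The equivalent test in LSL would be something like `if (llFabs(Y) < 0.0001)`.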
Madelaine McMasters Posted January 9, 2019 (edited)

Remember that computers are binary, not decimal (there were decimal computers long ago). In binary floating point, the decimal number 0.1 has no exact representation. Like 1/3 in decimal, its expansion repeats forever: no matter how many digits/bits you list, you're still not exact. If you start out with truncation errors, subsequent calculations with their inherent rounding will only make things worse. As Nova says, you must treat floating point numbers as approximations and you must compare them with an appropriate error band. And do make sure that your error band is not smaller than your rounding error, which depends on the floating point precision and the number and nature of the math operations you do before making your comparison. https://www.exploringbinary.com/why-0-point-1-does-not-exist-in-floating-point/

Edited January 9, 2019 by Madelaine McMasters
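You can actually display the exact binary approximation that gets stored in place of 0.1. A Python sketch (Python's 64-bit doubles; a 32-bit LSL float stores an even coarser approximation):

```python
from decimal import Decimal

# Decimal(0.1) shows the exact value of the double nearest to 0.1.
stored = Decimal(0.1)
print(stored)
# 0.1000000000000000055511151231257827021181583404541015625

# It is slightly MORE than 0.1, which is why repeated adds drift upward.
print(stored > Decimal("0.1"))  # True
```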
KT Kingsley Posted January 9, 2019

Is this statement helpful: when you do arithmetic using binary approximations of decimal fractions, bits fall off the end. But when you reverse the arithmetic, the bits you lost stay lost?
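That statement can be sketched in Python: add a value too small to fit in the result's mantissa, then reverse the arithmetic, and the lost bits do not come back:

```python
big = 1.0e16          # near the limit of a double's 53-bit mantissa
small = 1.0           # too small to fit alongside big

forward = big + small      # the 1.0 falls off the end of the mantissa
recovered = forward - big  # reversing the arithmetic can't bring it back

print(forward == big)  # True: the addition changed nothing
print(recovered)       # 0.0, not the 1.0 we added
```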
Madelaine McMasters Posted January 9, 2019

28 minutes ago, KT Kingsley said: Is this statement helpful: when you do arithmetic using binary approximations of decimal fractions, bits fall off the end. But when you reverse the arithmetic, the bits you lost stay lost?

Well they don't stay lost. They magically arrive somewhere else in the universe, much to the consternation of the programmers over there. They are the digital age equivalent of lost socks.
KT Kingsley Posted January 9, 2019

So it's your lost bits that've been messing up my scripts all this time!
Madelaine McMasters Posted January 9, 2019

15 minutes ago, KT Kingsley said: So it's your lost bits that've been messing up my scripts all this time!

...beams!
Ivanova Shostakovich Posted January 9, 2019

3 minutes ago, Madelaine McMasters said: ...beams!

You are nefarious.
Love Zhaoying Posted January 9, 2019

If your bits are floating, try another attachment point.
steph Arnott Posted January 9, 2019 (edited)

5 hours ago, Tuu Munz said:

Forget that.

Edited January 9, 2019 by steph Arnott
Tuu Munz Posted January 9, 2019 Author

Thank you all very much for the comments!

2 hours ago, Nova Convair said: This is expected behaviour! A decimal floating point number like 0.1 or 1.0 or whatever is stored and computed in a binary format, and 0.1 in decimal notation cannot be converted into a binary format without using an infinite number of digits. [...]

2 hours ago, Madelaine McMasters said: As Nova says, you must treat floating point numbers as approximations and you must compare them with an appropriate error band. [...]

So this is a feature. (Always nice to be wiser today than yesterday 😋.)
Fionalein Posted January 9, 2019

2 hours ago, KT Kingsley said: Is this statement helpful: when you do arithmetic using binary approximations of decimal fractions, bits fall off the end. But when you reverse the arithmetic, the bits you lost stay lost?

Kinda... the mantissa only has a certain number of bits... beyond those the processor usually just clips them.
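You can watch the clipping happen by round-tripping 0.1 through a 32-bit float, the size LSL uses. A Python sketch using the struct module:

```python
import struct

# Pack 0.1 into an IEEE 754 32-bit float and read it back: everything
# past the 24-bit mantissa is clipped, leaving a coarser approximation.
clipped = struct.unpack('f', struct.pack('f', 0.1))[0]
print(clipped)         # 0.10000000149011612
print(clipped == 0.1)  # False: the clipped bits are gone for good
```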
Arduenn Schwartzman Posted January 9, 2019

6 hours ago, Tuu Munz said: 0 is greater than 0? Why? Or am I stupid?

Welcome to the wonderful world of floating point computations.
steph Arnott Posted January 9, 2019

2 hours ago, Madelaine McMasters said: Remember that computers are binary, not decimal (there were decimal computers long ago). [...]

There has never been a 'decimal computer'. Even Babbage's used base 2.
Madelaine McMasters Posted January 9, 2019 (edited)

Though Leibniz invented the binary arithmetic system now used in digital computers, the first computers were decimal. Babbage's engines used ten-position gears to store and calculate. One of the earliest and most famous electronic computers, the ENIAC, was decimal, using decade ring counters to store and compute. When Dad was in engineering school, he used an IBM 1620 computer. It was decimal. By the time I was old enough to remember such things, it had been retired to the university's museum, where it played primitive tunes through a speaker connected to some internal circuit node. So, I have seen and touched a decimal computer. The first binary computer was Konrad Zuse's Z1, which arrived 100 years after Babbage's early efforts. For a more complete history of decimal computers: https://en.wikipedia.org/wiki/Decimal_computer

ETA: The ENIAC used decimal ring counters; the Harwell used dekatron tubes. I posted a video of that computer in operation elsewhere in the thread.

Edited January 10, 2019 by Madelaine McMasters (corrected ENIAC from dekatron tubes to decimal ring counters, a similar function)
steph Arnott Posted January 9, 2019

3 minutes ago, Madelaine McMasters said: Though Leibniz invented the binary arithmetic system now used in digital computers, the first computers were decimal. [...]

Babbage's engines used ten-position gears to store and calculate, but it was still base 2. The Bletchley Park machines were base 2. The 1620 was BCD, which is base 2. They all used base 2.
Madelaine McMasters Posted January 9, 2019

2 minutes ago, steph Arnott said: Babbage's engines used ten-position gears to store and calculate, but it was still base 2. [...]

I think you are confusing comparison with computation.
Love Zhaoying Posted January 9, 2019

11 minutes ago, Madelaine McMasters said: Though Leibniz invented the binary arithmetic system now used in digital computers, the first computers were decimal. [...]

I thought some of the old stuff used octal; back then the extra 8th bit was just a "checksum bit". I'm too lazy to google it because... octal.
steph Arnott Posted January 9, 2019

No. But if you believe decimal computers ever existed then that is up to you.

1 minute ago, Love Zhaoying said: I thought some of the old stuff used octal; back then the extra 8th bit was just a "checksum bit". [...]

It was, but the number crunching was binary. Base 2 is the simplest method because it only relies on ON/OFF. Even the Enigma coder used base 2.