
Integers and floats


steph Arnott



It's not laziness, Steph. In scientific circles, you don't specify more precision than you need. "1.0" implies that the tenths are important; if they aren't, you'd just say "1". And when you type "1.0", that's a string of three ASCII bytes, which must be converted to a number before conversion to binary, rather than "1", which is only one byte. That conversion from ASCII to a number takes far more time than any float<->integer conversion. Once the byte code program has been compiled, the numbers are all in binary and there is no more conversion overhead as the program runs.

And with a good compiler, it's possible that an integer arithmetic operation will be carried out in floating point, because the  compiler may see that the processor's floating point unit is available to do the computation in parallel with the integer arithmetic unit. The sophistication of modern compilers can make it difficult to know exactly how to craft code for best performance.


Sorry sis, I do not agree. What one does in their own code is up to them; what is posted on the wiki is confusing.

ADDED: Also, highly competent programmers have made that fundamental mistake here in the posts. A float is a float, not an integer. LSL has to convert an integer to a float, usually downgrading to the next whole number.


I'm not certain what your concern is, Steph. The bottom line is that an explicit conversion pretty much requires the same overhead as an implicit one does. The only real question is whether implicit conversion is allowed, which is the case between floats and integers.

 

In fact, my code can correctly have either vector doodad = <1, 1, 1>; or vector doodad = <1.0, 1.0, 1.0>; or vector doodad = <TRUE, TRUE, TRUE>; or even vector doodad = <0.10000E1, 0.10000E1, 0.10000E1>; and ALL of these compile to the exact same thing, with the first example requiring the least work of the compiler (in that it has the fewest characters needed to be read from the code to form the value, which will internally always be 3 floats within the composite type vector).

 

The one chosen may be just a matter of personal style. Or it may most accurately reflect how the vector is being used within the program (such as it containing three Boolean flags with <TRUE, TRUE, TRUE>), which would far outweigh any other consideration.



ALL of these compile to the exact same thing,

For compilers smart enough to recognize assignment of a constant value, yes; for LSL2, not so much. Try compiling to LSL the following bit of nonsense and running it a few times. (To restore one's faith in compilers, bump the repetitions by an order of magnitude, compile to Mono, and try again.)

integer REP_COUNT = 50000;
vector doodad;
integer thisRep;
float endTime;

default
{
    touch_start(integer total_number)
    {
        thisRep = REP_COUNT;
        llResetTime();
        while (--thisRep)
            doodad = <1.0, 1.0, 1.0>;
        endTime = llGetTime();
        llOwnerSay("vector assignment with floats: " + (string)endTime);

        thisRep = REP_COUNT;
        llResetTime();
        while (--thisRep)
            doodad = <1, 1, 1>;
        endTime = llGetTime();
        llOwnerSay("vector assignment with implicit integer conversion: " + (string)endTime);

        thisRep = REP_COUNT;
        llResetTime();
        while (--thisRep)
            doodad = <1.0, 1.0, 1.0>;
        endTime = llGetTime();
        llOwnerSay("(again) vector assignment with floats: " + (string)endTime);

        thisRep = REP_COUNT;
        llResetTime();
        while (--thisRep)
            doodad = <1, 1, 1>;
        endTime = llGetTime();
        llOwnerSay("(again) vector assignment with implicit integer conversion: " + (string)endTime);
    }
}

 

 



Qie Niangao wrote:

ALL of these compile to the exact same thing,

For compilers smart enough to recognize assignment of a constant value, yes; for LSL2, not so much. 
Try compiling to LSL the following bit of nonsense and running it a few times. (To restore one's faith in compilers, bump the repetitions by an order of magnitude, compile to Mono, and try again.)
integer REP_COUNT = 50000;
vector doodad;
integer thisRep;
float endTime;

default
{
    touch_start(integer total_number)
    {
        thisRep = REP_COUNT;
        llResetTime();
        while (--thisRep)
            doodad = <1.0, 1.0, 1.0>;
        endTime = llGetTime();
        llOwnerSay("vector assignment with floats: " + (string)endTime);

        thisRep = REP_COUNT;
        llResetTime();
        while (--thisRep)
            doodad = <1, 1, 1>;
        endTime = llGetTime();
        llOwnerSay("vector assignment with implicit integer conversion: " + (string)endTime);

        thisRep = REP_COUNT;
        llResetTime();
        while (--thisRep)
            doodad = <1.0, 1.0, 1.0>;
        endTime = llGetTime();
        llOwnerSay("(again) vector assignment with floats: " + (string)endTime);

        thisRep = REP_COUNT;
        llResetTime();
        while (--thisRep)
            doodad = <1, 1, 1>;
        endTime = llGetTime();
        llOwnerSay("(again) vector assignment with implicit integer conversion: " + (string)endTime);
    }
}

 

 

LMAO!

 

Oh wow, I'm just SO glad I came along after Mono was implemented. That shows the improvement the transition gave us twice over! Thank you for sharing that, made my day.



steph Arnott wrote:

LSL, for reasons I do not know, does not convert vectors to floats accurately.

 

I'm not sure how you mean that, but if you're referring to what seems to be a loss of precision when you output vectors compared to floats, then that is just a formatting difference; the internal precision is the same for both.

Observe this to see what I mean:

default
{
    touch_start(integer total_number)
    {
        vector doodad = <1.1234567, 1.1234567, 1.1234567>;
        llOwnerSay((string)doodad);
        llOwnerSay((string)doodad.x);
    }
}
/*
// Results in:
// Floats within vectors are rounded to 5 decimal places:
[06:26] Object: <1.12346, 1.12346, 1.12346>
// But a float extracted from that vector, to 6 decimal places:
[06:26] Object: 1.123457
*/

 



steph Arnott wrote:

So internal is correct and output rounded? I assumed the output was the same, just with the trailing numbers chopped off.

OK, I see the error you're making here. Let me try to explain it.

 

The numbers we write and those we see chatted are always just strings of characters. When the compiler encounters a string of characters and it is expecting them to represent a float or an integer value, it reads those characters and then converts them to a binary (base 2) representation of our decimal (base 10) digits.

 

Now, this is somewhat straightforward with integers. But even then, you have written it out using the characters for decimal digits, 0-9, while it is internally stored as binary digits, 0-1. And internally, the number of bits that represent an integer is fixed, always the same regardless of the value of the integer. One bit (in LSL) is used to keep track of its sign, so it knows whether it's a positive or negative integer. And that leads directly to being able to say that we can use any integer from −2,147,483,648 to +2,147,483,647 in our scripts.
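Those limits fall straight out of two's-complement arithmetic; a quick sketch in Python (rather than LSL, purely for illustration):

```python
# One of the 32 bits holds the sign, leaving 31 bits of magnitude
# in two's-complement form, which gives the familiar LSL integer range.
bits = 32
int_min = -(2 ** (bits - 1))
int_max = 2 ** (bits - 1) - 1
print(int_min)  # -2147483648
print(int_max)  # 2147483647
```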

 

But with floats, which are used to represent real numbers, it gets a bit complicated, and I'm not going into how complex it is. You can refer to http://www.cprogramming.com/tutorial/floating_point/understanding_floating_point_representation.html if you're curious, but I recommend one doesn't; there's all sorts of high magic done to accomplish it.

 

The point is, what you write and what you see output is not what is stored internally. We write numbers using base 10, decimal, characters. The computer, for either integers or floats, stores the representation of the string we wrote as a binary equivalent. And when it needs to chat it out, it calls up this binary representation and converts it to decimal characters so we can read it. Even if one had a complete understanding of how floats are stored, it would be a wee bit difficult to make sense of that string of 0's and 1's otherwise.
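A small illustration in Python (double precision rather than LSL's single precision, but the principle is identical): the decimal string "0.1" is stored as the nearest binary fraction, and printing with extra decimal places reveals what the round trip actually produced.

```python
# "0.1" is parsed into the nearest binary fraction; printing with
# extra decimal places shows the stored value isn't exactly 0.1.
x = 0.1
print(f"{x:.20f}")  # 0.10000000000000000555
```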

 

And, when it does the conversion to output a float value, LSL also formats these decimal characters in a specific way; where we are given a string of characters having 6 decimal places if it stands alone or a string with 5 decimal places when it's being written within a vector or rotation.

 

I hope that makes sense now and I haven't confused you even more.


Setting all other considerations aside, if I want, for example, to use llSetPos and llSetRot to tell an object where to position itself, will I achieve more precise positioning if I feed the vector and quaternion values to it as a string, using chat, and then convert them in the listen event or if I hard code them in the script?   Or doesn't it make any difference?

I know it used to with LSL2, as Qie reminded us, but I'm unsure about what happens with Mono.


I just coded something akin to Qie's test and built it on three compilers I've used in production, one for an ARM processor, one for a digital signal processor and one for a small 8-bit microcontroller. As I expected, whether I use 1 or 1.0 to initialize either an integer or float variable produces no difference in the resulting code. The versions are byte for byte identical. I can't imagine a situation in which 1 would produce a different result than 1.0 for the compilers I use. (That doesn't mean there isn't one, it's just that I can't imagine it.)

This has no bearing on LSL, of course, but does explain why programmers like me, and even the good ones, might use integer constants in floating point situations. The compilers figure it out. I do recall being admonished to put decimal points in constants I intended to be float, but that was by my Dad, whose PDP-11 compilers weren't as smart as those I use today.

One might make a case for using 1.0 rather than 1 to telegraph to the reader that something is a float, but that becomes a stylistic issue, not a technical one. The compilers I use do the right thing, even if I'm lazy. And if being lazy gets me the desired result, can you blame me?

On another note, I have used compilers that were certified correct, running on hardware that was certified correct. I warned of the danger of allowing me to code on such a system, but I was ignored. At least LSL gives me reason to think I might not be the weakest link in the system.

;-)



steph Arnott wrote:

Why do people become lazy and type 1 instead of 1.0 when the compiler has to convert it?

An integer is a whole number and a float is a floating point number.

People don't become lazy.  They ARE lazy :-)

Please check the difference between 1/3 and 1/3.0 in almost any compiler you like to use.  In most cases the compiler is 'clever' enough to spot that only (fast) integer division is required in the first case - with the (integer) result 0.  In the second case the calculation 'ought to' use floating point division = 0.33333...

There is no excuse for being lazy.

Unless you know your compiler and don't care about any other use case.
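The 1/3 versus 1/3.0 distinction above is easy to see outside a C-style compiler; in Python the two kinds of division have explicit operators, but the effect is the same one just described (a sketch for illustration only):

```python
# Integer division truncates toward zero, like C's integer 1/3;
# float division keeps the fractional part.
print(1 // 3)   # 0
print(1 / 3.0)  # 0.3333333333333333
```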

 



Innula Zenovka wrote:

Setting all other considerations aside, if I want, for example, to use llSetPos and llSetRot to tell an object where to position itself, will I achieve more precise positioning if I feed the vector and quaternion values to it as a string, using chat, and then convert them in the listen event or if I hard code them in the script?  
Or doesn't it make any difference?

I know it used to with LSL2, as Qie reminded us, but I'm unsure about what happens with Mono.

I seriously doubt there would be any difference in precision under Mono, since the compiler is server side now, though Qie could say better since he has worked with the code in SL longer than I have. But either way, I can't think of anything that wouldn't be "good enough" for SL except for equality comparison, as with:

 

default
{
    touch_start(integer total_number)
    {
        float x = 11;
        float y = llSqrt(x);
        y = y * y;
        if (x == y)
            llOwnerSay("Square root is exact");
        else
            llOwnerSay("Difference is " + (string)(x - y));
    }
}
// Outputs "Difference is -0.000001"

 

And I say that because the numbers we use in SL are clustered around zero, where the approximation error for floats is the least. But if you required (say) a result of exactly 2,000,000,000,000, a 32-bit float could only hold a close approximation to it, 1,999,999,991,808.0, which may not be "good enough".
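That approximation can be reproduced outside SL; a sketch in Python that round-trips the value through IEEE 754 single precision, the same 32-bit format LSL floats use:

```python
import struct

def to_float32(x):
    # Pack as a 32-bit IEEE 754 float, then unpack to see what was stored.
    return struct.unpack(">f", struct.pack(">f", x))[0]

print(to_float32(2_000_000_000_000.0))  # 1999999991808.0
```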

 



Madelaine McMasters wrote:

I just coded something akin to Qie's test and built it on three compilers I've used in production, one for an ARM processor, one for a digital signal processor and one for a small 8-bit microcontroller. As I expected, whether I use 1 or 1.0 to initialize either an integer or float variable produces no difference in the resulting code. The versions are byte for byte identical. I can't imagine a situation in which 1 would produce a different result than 1.0 for the compilers I use. (That doesn't mean there isn't one, it's just that I can't imagine it.)
[1]

This has no bearing on LSL, of course, but does explain why programmers like me, and even the good ones, might use integer constants in floating point situations. The compilers figure it out. I do recall being admonished to put decimal points in constants I intended to be float, but that was by my Dad, who's PDP-11 compilers weren't as smart as those I use today.

One might make a case for using 1.0 rather than 1
to telegraph to the reader
that something is a float, but that becomes a stylistic issue, not a technical one. The compilers I use do the right thing, even if I'm lazy. And if being lazy gets me the desired result, can you blame me?
[2]

On another note, I have used compilers that were certified correct, running on hardware that was certified correct. I warned of the danger of allowing me to code on such a system, but I was ignored. At least LSL gives me reason to think I might not be the weakest link in the system.

;-)

You bring up 2 points worth looking at here;

1. LSL is a strongly typed language, which means when a variable is declared, its memory is allocated according to its type definition. This is the same amount for both an integer and a float, but those 4 bytes will store "1" within them very differently, as set by the type definition. In other words, it makes no difference if your code says 1 or 1.0 when initializing either a float or an integer variable; both will be read, tested to see if it's a valid number string, and (if so) converted to the type definition of the variable for storage.
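A sketch in Python (for illustration; not LSL) of how differently those 4 bytes store "1" under each type definition, assuming IEEE 754 single precision for the float:

```python
import struct

def bits32(fmt, value):
    # Pack the value into 4 bytes with the given format code,
    # then reinterpret those bytes as an unsigned int to show the raw bits.
    (n,) = struct.unpack(">I", struct.pack(fmt, value))
    return format(n, "032b")

print(bits32(">i", 1))    # 00000000000000000000000000000001
print(bits32(">f", 1.0))  # 00111111100000000000000000000000
```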

 

2. "One might make a case for using 1.0 rather than 1 to telegraph to the reader that something is a float, but that becomes a stylistic issue, not a technical one. The compilers I use do the right thing, even if I'm lazy. And if being lazy gets me the desired result, can you blame me?" That strikes me to be the bottom line here, in so many ways.

Steph needs to realize that a program is only compiled once. If it's compiled successfully, the server doesn't consider your code again, it just runs the resulting byte code. Another way of saying that is, a running program will not "remember" that a float was initialized with an integer string originally and somehow "penalize" you for that during execution. Because once it's compiled, that variable is for sure going to be a float from that point on!

Also, readability is the second most important thing in coding. When Madelaine says "to telegraph to the reader", she is speaking about things like whether it's better to use 1 and 0 rather than TRUE and FALSE because the integers use fewer key strokes and thus reduce the "chance of error" while writing your code. That is far outweighed later, when you're going back over the code: seeing those Boolean constants immediately conveys much more information about what is happening than looking at 1's and 0's sprinkled around.

And vectors are used in so many ways, it sometimes helps to take advantage of implicit conversion of integers within them to improve readability. For instance, if I'm using one to hold and move around whole numbers as a sort of mini integer array, I'm going to write the values within the vector as integers, so when I read the code later it'll be readily apparent to me that's what it's about.

 

And, on top of all that, I am lazy and actually consider the compiler to be my slave. And, for it being so mulish in making me write things out "just so" to avoid that uninformative "Syntax Error", I admit I just might get a bit of perverse satisfaction in making it do a bit of extra work at times.

As long as the code remains readable. :)


