
Goal for Amount of Free Memory?


Love Zhaoying



I am currently working on a "suite" of large scripts, and for most of these scripts I manage to have an initial amount of free memory of between 20-32K (depending on the script).

This amount of initial free memory really should be enough for these scripts, since there is no recursion, no large lists (or ANY lists except as parameters for some calls), no large strings, etc. All possible due to the magic of JSON and LinksetData ("LSD").  The scripts already have major functionality "segregated" to a single script, with as few user-defined functions as possible, etc.
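To illustrate the kind of approach described above, here is a minimal, hypothetical sketch of keeping working state in LinksetData as JSON rather than in in-script lists (the key and field names are illustrative, not from the actual scripts):

```lsl
// Hypothetical sketch: store a record in LinksetData as JSON so it
// doesn't occupy script memory. Key/field names are made up here.
saveRecord(string lsdKey, string name, integer score)
{
    string json = llList2Json(JSON_OBJECT, ["name", name, "score", score]);
    llLinksetDataWrite(lsdKey, json); // stored outside script memory
}

integer readScore(string lsdKey)
{
    string json = llLinksetDataRead(lsdKey);
    return (integer)llJsonGetValue(json, ["score"]);
}
```

The point of the pattern is that only the one record currently being worked on occupies heap; everything else lives in LSD.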

My question: 

Is there any specific reason I should try to get my initial free memory significantly higher than 20-32K?  It would probably mean splitting scripts and a significant loss in performance (due to required additional inter-script communication).  If so, what may that "goal" be?  

We can certainly get into a discussion of "best practices", although note that "programming styles" is somewhat off-topic.  (Yes, I went there!)

Thanks in advance,

Love


16k~32k at startup would be my goal as well, usually that's where I end around when I have a large script I actually need to look at memory usage for.

1k is what I would regard as a 'minimum safety buffer' for if/when your scripts are actually using memory. mono can get a bit 'fuzzy' with memory allotment, but I wouldn't toe too close to the line.


3 minutes ago, Quistess Alpha said:

16k~32k at startup would be my goal as well, usually that's where I end around when I have a large script I actually need to look at memory usage for.

1k is what I would regard as a 'minimum safety buffer' for if/when your scripts are actually using memory. mono can get a bit 'fuzzy' with memory allotment, but I wouldn't toe too close to the line.

Thanks.

One of my current work-in-progress scripts "only" has 23k available (AFTER refactoring), and I have to really stop and think if it is worth the extra effort to try and free more initial memory.

If I can keep the "data chunks" in LSD at about 1K max, there would be no reason for this particular script to drop below say, 17k-18k free during use.
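As a sketch of that chunking idea (the key naming and chunk count here are assumptions for illustration, not from the actual script):

```lsl
// Illustrative: process LSD records one ~1K chunk at a time, so the
// working set never holds more than a single chunk in script memory.
processAllChunks()
{
    integer n = 8; // assumed number of stored chunks
    integer i;
    for (i = 0; i < n; ++i)
    {
        string chunk = llLinksetDataRead("chunk_" + (string)i);
        // ... handle this chunk ...
    } // chunk goes out of scope each iteration, freeing its memory
}
```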


I have a set of scripts I've been tracking in different regions that reliably hit 300~800 bytes of free memory when checked on a weekly basis. (They were designed to have much more free mem than that; it turns out the issue was they were set to LSO instead of Mono.) I'm not the one testing them, but I take it on good faith that they do fail, though it's rather rare. 1k or more should be safer from 'blue moon failure'.


it depends on the data. Its overall size, the size of each data element, and as LSL is CopyByVal then also the size of the return of each function/assignment within a scope

my rule of thumb approach is to calculate how much data memory is consumed by each event, sum the total of all events and then double it to give the least amount of memory needed by data should all events have fired before the garbage collector does its work

this gives us a ballpark memory total to allow, and then look at reducing the total down where we can identify scope instances where this can be done without incurring a stack-heap collision
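One rough way to apply that rule of thumb is to snapshot used memory before and after building an event's working data (a hedged sketch; the event and data shape are just examples):

```lsl
// Sketch: measure the data cost of one event by comparing used
// memory before and after building the event's working data.
default
{
    link_message(integer sender, integer num, string msg, key id)
    {
        integer before = llGetUsedMemory();
        list working = llJson2List(msg); // this event's data
        integer cost = llGetUsedMemory() - before;
        llOwnerSay("event data cost: " + (string)cost + " bytes");
        // sum this across all events, then double it, per the rule of thumb
    }
}
```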


8 minutes ago, Wulfie Reanimator said:

To approach the question from another angle:

Does it matter how much free memory the script has, if it's not crashing?

No! But, the less memory available, the more constrained I am in "unit of work" size. Thus, the reason in this case to try and limit LSD entry size to about 1k.


43 minutes ago, elleevelyn said:

it depends on the data. Its overall size, the size of each data element, and as LSL is CopyByVal then also the size of the return of each function/assignment within a scope

my rule of thumb approach is to calculate how much data memory is consumed by each event, sum the total of all events and then double it to give the least amount of memory needed by data should all events have fired before the garbage collector does its work

this gives us a ballpark memory total to allow, and then look at reducing the total down where we can identify scope instances where this can be done without incurring a stack-heap collision

Luckily, I design this specific type of script to only use the link_message() event, with data sizes no more than about 1k.


9 hours ago, Quistess Alpha said:

16k~32k at startup would be my goal as well, usually that's where I end around when I have a large script I actually need to look at memory usage for.

1k is what I would regard as a 'minimum safety buffer' for if/when your scripts are actually using memory. mono can get a bit 'fuzzy' with memory allotment, but I wouldn't toe too close to the line.

Perhaps GC is kicking in and saving you, occasionally?


12 minutes ago, Love Zhaoying said:

Perhaps GC is kicking in and saving you, occasionally?

Hard to say. Funny enough though, in that particular script, it llSetMemoryLimit's to max memory-1 and then to max memory again, which should trigger garbage collection (OTOH, if it's in a lso script. . .) and records the used memory before and after: no difference.
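A sketch of the nudge trick described above (Mono only; LSO scripts have fixed memory, so it would do nothing there):

```lsl
// Shrink the memory limit by one byte, then restore it. On Mono this
// should trigger a garbage-collection pass; returns bytes apparently
// reclaimed. Function name is illustrative.
integer nudgeGC()
{
    integer before = llGetUsedMemory();
    llSetMemoryLimit(llGetMemoryLimit() - 1); // shrink by one byte...
    llSetMemoryLimit(llGetMemoryLimit() + 1); // ...and restore
    return before - llGetUsedMemory();
}
```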


6 minutes ago, Quistess Alpha said:

Hard to say. Funny enough though, in that particular script, it llSetMemoryLimit's to max memory-1 and then to max memory again, which should trigger garbage collection (OTOH, if it's in a lso script. . .) and records the used memory before and after: no difference.

In this one, I'm going to totally avoid "forcing" GC with llSleep(), etc. until I see there's some need. 


  • 1 month later...
On 4/8/2023 at 9:55 AM, elleevelyn said:

as LSL is CopyByVal then also the size of the return of each function/assignment within a scope

I don't believe this to be true.  Simple values (integers, floats, I think vectors and rotations too) absolutely are.  But the big ones (strings and lists) are passed by reference.  (Or rather, a reference is passed by copy — same as in just about every other language.)

Where you get bitten — and I think where most people get confused here — is on immutability, and in forgetting that the caller to your function still exists.  If you pass a value into a function, and then modify that value within said function, the original value is still held by the caller, as well as the callee now having its own modified copy.  That can appear as pass by value, but it's not.

I'm also still not convinced poking the GC is actually effective; in every case I have run into where maybe it possibly might have been, there's also been a very good chance I just plain went over memory, and no amount of GC poking would have helped.  On the flip-side, I have scripts where I'm certain they are regularly flying well over the limit for short bursts, and I at least know they've created temporary values that together absolutely would have gone over if they hadn't been collected, and they don't crash…  That said, the LL's won't give us the facts (mostly coz they know we'll abuse them if they do), so there could be some weird corner cases where it does still do the biz…


6 hours ago, Bleuhazenfurfle said:

Where you get bitten — and I think where most people get confused here — is on immutability

Speaking of the "immutability" of strings… That reminds me: one way I've saved a good amount of memory lately is, if I have a constant but need a corresponding string, to just store the string in LSD and look it up when needed, instead of:

if (iConstant == 1)
    StringValue = "Value1";
else if (iConstant == 2)
    StringValue = "Value2";

etc.

One bonus factor for this is, since I have a few related scripts that need the same string constants and values, they all save memory by using the shared LSD lookup.  And, I only have to add / change the values in LSD (or the notecard that contains the "base" LSD data) instead of changing each script and making sure each script is "in sync".
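A minimal sketch of that shared lookup, assuming the strings have been written to LSD once (e.g. by the notecard loader) under keys derived from the constant — the "str_" key prefix is illustrative:

```lsl
// Setup, done once by whichever script loads the "base" LSD data:
//   llLinksetDataWrite("str_1", "Value1");
//   llLinksetDataWrite("str_2", "Value2");

// Any script in the linkset can then resolve a constant to its string
// without carrying the string table in its own memory:
string lookupString(integer iConstant)
{
    return llLinksetDataRead("str_" + (string)iConstant);
}
```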

 


On 5/23/2023 at 5:48 PM, Bleuhazenfurfle said:

I don't believe this to be true.  Simple values (integers, floats, I think vectors and rotations too) absolutely are.  But the big ones (strings and lists) are passed by reference.  (Or rather, a reference is passed by copy — same as in just about every other language.)

you might be right that the Linden Mono compiler does this

with the standard Mono compiler ByVal means make a copy of the data and then pass the copy to the function. We have to specifically use ByRef (or pointer) to do by reference, any changes we make to the data is on the origin

is true tho that ByVal in the way you are saying is how the Microsoft .NET compiler does ByVal. The Microsoft compiler is smart enough to do this kind of late binding. And it may very well be that Linden compiler does the same


It's pretty easy to see there's some shallow copying and referencing going on. Generate a 256-length list by duplicating a seed list:

default
{
    state_entry()
    {
        integer i;
        list a = [0];
        for(i = 0; i < 8; ++i)
            a += a;
        llSleep(0.01); // ensure memory count is accurate
        llOwnerSay((string)llGetListLength(a)); // =256
        llOwnerSay((string)llGetUsedMemory()); // =5044
    }
}

As opposed to generating a 256-length list by adding individual values:

default
{
    state_entry()
    {
        integer i;
        list a;
        for(i = 0; i < 256; ++i)
            a += 0;
        llSleep(0.01); // ensure memory count is accurate
        llOwnerSay((string)llGetListLength(a)); // =256
        llOwnerSay((string)llGetUsedMemory()); // =8100
    }
}

I prefer to use the latter to be certain the reported memory use is reliable and stays static after rewriting pieces of the list, even if the initialization takes longer.

Also you can use the former to see yourself going above 64k script memory used, at least briefly:

default
{
    state_entry()
    {
        integer i;
        list a = [0];
        for(i = 0; i < 14; ++i)
            a += a;
        llSleep(0.01); // ensure memory count is accurate
        llOwnerSay((string)llGetListLength(a)); // =16384
        llOwnerSay((string)llGetUsedMemory()); // =69556
        llOwnerSay("how is this still running"); // might crash, might not, depends on if the engine catches on to being over limit on this execution frame
    }
}

 


On 5/26/2023 at 7:49 PM, elleevelyn said:

is true tho that ByVal in the way you are saying is how the Microsoft .NET compiler does ByVal. The Microsoft compiler is smart enough to do this kind of late binding. And it may very well be that Linden compiler does the same

It's not really a case of "late binding"…  You're making it more complicated than it needs to be.  A string or a list are dynamically sized by their very nature, the others are always the same fixed size.  The fixed size ones, get passed by value.  The dynamically sized ones, get a reference (managed pointer, probably effectively a "smart pointer" of sorts) to them passed by value instead of the thing itself.

Lists make things a little trickier…  An LSL list is a dynamically sized array of object pointers (which we can call references, given they are "managed", in that you can't actually access them), wrapped by a thing containing at least a pointer to that array, and its length.  That wrapper thing is probably passed by value, being it has a known fixed size, and isn't terribly large (and would otherwise need to be wrapped in turn, with yet another pointer being passed by value), but itself forms a reference (or "smart pointer") to the thing we think of as "the list" (actually a dynamic array).  In the case of a string, it's basically exactly the same, except it's an array of characters, instead of an array of object pointers.  That wrapping happens in the case of every item in your list also, in order to move it onto the heap (though maybe not for strings, which may already count as a wrapper, but it definitely does for basic pass-by-value types like integer).  That's about as complex as you need to care about in LSL.

Trying to be any more precise takes you down the rabbit hole…  Pointers can be "smart pointers" (basically a managed pointer, with one or two other pieces of information such as length), passing a dynamic object "ByVal" may actually cause the object to get copied on the heap, and then the copy is just passed by reference instead of the original reference, and whenever those pointers are "managed" (you don't get to see them) then they're called references.  C++ has about a dozen (or more) types of "smart pointer", as well as half a dozen syntactic constructs (I'm not certain I'm exaggerating, either), not to mention the whole compiler type infrastructure, in order to handle every conceivable combination (and in D, it's even worse — or better, depending on your perspective — with its extensive use of "Voldemort types", which is among the things C++ is trying really hard to copy).  C# seems to be stack based (as opposed to C++'s register, with overflow going to the stack), and probably has most of those capabilities too, with its ByRef and ByVal being a messy heuristic sugar coating (being the all-consuming wanna-do-everything Microsoft style language it is).  And then "by value" itself really isn't — it is, in reality, "by stack" (esp. C#), or "by register" (esp. C++), and in the case of a compiled language like C++ or D, also "by type" and/or "by optimisation" (ie. the compiler determines the information is constant, and so doesn't have to pass it at all).  And let's not even get into in and out parameters (outs are basically just another kind of ByRef — usually a reference to appropriate uninitialised space), return values being implemented using out parameters (very common optimisation for non-trivial return value types), and so forth.

It's also for these reasons, I tend to avoid any specific language's definitions, and just go with the basic concepts (especially in terms of LSL); references are managed pointers to things, and are used when the thing is too big or dynamic to fuss with copying onto the stack.  Trying to describe it in terms of C#'s ByVal or ByRef, brings in C# semantics which may not map precisely to LSL's, leading you up the garden path at night, without a torch.


31 minutes ago, Bleuhazenfurfle said:

Lists make things a little trickier… 

Lists are one of the only actual cases of "late binding" (even if it's technically not really that) in LSL, because only at runtime does LSL check the condition "Lists cannot contain Lists".

So even trying to avoid "definitions" for other languages, I think it is fair to say that the "Lists cannot contain Lists" error is akin to "late binding": Checking a type at Runtime.

 


45 minutes ago, Love Zhaoying said:

Lists are one of the only actual cases of "late binding" (even if it's technically not really that) in LSL, because only at runtime does LSL check the condition "Lists cannot contain Lists".

I'd love to see your references for that.  I've written an LSL emulator (or three *cough*) — which is why I know this stuff at the depth I do.

And I've never actually needed to check at runtime…  (More specifically, the checks are there, but they're hard errors that indicate a fault in my parser, and stop happening once it's fixed.)  The syntax and type systems don't allow it, so it can't happen, and thus doesn't actually need to be checked late (ie. at runtime).  There is pretty much no "late binding" going on in LSL, it's just not that complex a language.  (Even my IDE, whenever I try, it slathers them in red squiggles and error messages quicker than I can press Save.)

I guess at a stretch, you could consider llListFindList to be "late binding", ditto for llList2xxx, and the likes…  But that's just basic object-oriented stuff, something along the lines of (in JS): llList2String = (myList, index) => myList[index].toString();


24 minutes ago, Bleuhazenfurfle said:

I'd love to see your references for that.  I've written an LSL emulator (or three *cough*) — which is why I know this stuff at the depth I do.

Mere observation, sorry!

I'm only working on my second emulator now. Hopefully won't need a 3rd!

To clarify: the "lists cannot contain lists" error does not sound like something akin to ("like") late binding?  Ok!  You're right — really, it's not "binding", just checking at runtime the type that is being inserted into the list.  From that perspective, in LSL most type checking is done at "compile time", not at runtime.  So that's what I mean 🙂 

Like you said, definitions and terminology are quite out of scope for this discussion!

 


19 minutes ago, Coffee Pancake said:

Published ?

That should be interesting, if they have an "external" LSL Interpreter!

Mine will only be "LSL-to-Interpreted Language" or "Language-to-LSL" written IN LSL for use in-world (script written IN LSL for interpreting programs written IN LSL or BASIC or whatever).  

Currently still working on LSL-to-LSL (for LSL "Extension") and BASIC-to-LSL Schemas.

 


i tested this and can confirm that LSL Mono compiler passes lists as Bleu stated, same as the Microsoft .NET compiler does 

f(list a)
{
   llOwnerSay("function: Mem: " + (string)llGetFreeMemory() + 
    "\n Len list a: " + (string)llGetListLength(a));
      
}

default
{
    state_entry()
    {   
        list a = [0,0];
        list f = [0,0];           
        integer i = 10;
        while (--i)
        {
            f += f;
        }
        f += f + f + f + f + f;  // f = 6144 integer elements 

        i = 13;
        while (--i)
        {
            a += a;  // a = 8192 integer elements
        }
        
        llOwnerSay("state_entry: Mem: " + (string)llGetFreeMemory() + 
            "\nLen list a: " + (string)llGetListLength(a) +
            "\nLen list f: " + (string)llGetListLength(f)); 

        // free memory = 2992 at this point
        // pass a list with 8192 integer elements
        f(a);
            
    } 
}

 

 


Clarification on my Original Post (first post in this thread):

For my current projects, I do NOT use the "list" type except where absolutely necessary.  I use JSON, and convert to a list only if I absolutely must, because of llFunctions() that require a list, or needing to process a JSON_ARRAY as a list for efficiency purposes.

So, just letting you know that how lists work is not actually helpful in my decisions about the original topic question!
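For what it's worth, a hedged sketch of that convert-only-when-forced pattern (the function and key names here are illustrative):

```lsl
// Illustrative: keep data as a JSON_ARRAY, converting to a list only
// at the point an ll* function actually requires one.
sortAndStore(string jarray) // jarray holds a JSON_ARRAY, e.g. from LSD
{
    list tmp = llJson2List(jarray);  // convert only when required
    tmp = llListSort(tmp, 1, TRUE);  // an ll* call that needs a list
    llLinksetDataWrite("sorted", llList2Json(JSON_ARRAY, tmp));
}   // tmp goes out of scope here, so its memory can be reclaimed
```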

I'm more than fine with any and all discussion though! I welcome all discussion! 🙂 

Thanks,

Love

 


1 hour ago, Coffee Pancake said:

Published ?

Should have known that was coming…  Nah.  Though version 3 is planned to be…  Eventually.  I stopped doing most heavy coding over the past few years due to health issues.  I started this thing when my health had plateaued for long enough that I was starting to get used to it enough to do bits and pieces again…  So I tend to work on this thing for a few weeks, then put it aside for a couple months until I feel up to pulling it out again.  Soo, not putting an ETA on it.  But…  it is tantalisingly close to my first MVP milestone…!  So maybe … possibly in time for SL30…

Straightforward emulators/compilers are soooo much easier than this monstrosity, though…  And because I chose to use it as my inspiration for finally learning Typescript (which has some really very frustrating gaps here and there, much like JavaScript that underpins it), I made a few missteps early on, like not implementing proper backtracking in the parser…  Still, the emulation side is passing a whole bunch of test cases covering most of the LSL math, string, and list functions, as well as a couple LSL functions I have in a little "library" I've thrown together (some mine, some stuff from the wiki I like but can never find when I want it) to test the emulation of actual LSL code…  Getting it to pull in Sei's list of LSL functions was also a major step towards usability, filling in the (HUGE) gaps where I haven't implemented emulation yet with syntax and tooltip info at least.  (Currently emulates a grand total of 37 of LSL's built in functions!)  The remaining "minimum viable product" milestones are: no hard errors during regular scripting, getting my "library" to pass the remaining tests (and writing about 10,000 more, considering the 500-ish it's already got), and fixing a current variable reference counting issue with respect to loops (variable assignments at the end of loops get flagged as "unused").  Then I'll see about figuring out how to go about publishing it…  It's already shaping up rather well — being able to mouse over a variable or function call in your script, and see its computed value, is pretty darned awesome…!  (Though also pretty fragile, it rather readily throws up its hands and says, "dunno".)

 

54 minutes ago, Love Zhaoying said:

Mine will only be "LSL-to-Interpreted Language" or "Language-to-LSL" written IN LSL for use in-world (script written IN LSL for interpreting programs written IN LSL or BASIC or whatever).  Currently still working on LSL-to-LSL (for LSL "Extension") and BASIC-to-LSL Schemas.

Yeah, my Version 2 was a pretty complete "external" LSL interpreter — at least, for the scripting I was doing (didn't have any character stuff, for example, and I didn't even bother trying to simulate physics, and don't intend to in V3, either — would be easier to just graft on an OpenSim instance or something).  And had been seen by a couple people in scripting groups while I was fishing for real life test code to throw at it — so it wasn't complete take-my-word-for-it vapour-ware.

Wrote the initial implementation of that over a weekend around 2012-2014 sometime, to track down an annoying bug I was having somewhere amidst a dozen scripts…  So it let me set a conditional breakpoint in the simulator code where the wonky value would show up (the variable assignment code, if I recall correctly), letting me pinpoint where it first appeared, and from there, fix the problem.  After the success of the initial implementation (which was pure emulation), I'd grafted on minified script generation, then optimisation, and gradually ratcheting up the optimisation over the following year, got it a touch better than Sei's optimiser (a worthy milestone!), and was using it actively through into 2017, I think…  And then I broke it, trying to shoehorn in a rather invasive new feature that was needed to sort out some issues it had.  (That was one of the first things I implemented into V3.)

Couldn't imagine doing it IN LSL, though…  Interpreting BASIC in LSL wouldn't be hard — after all, BASIC was designed for machines with less memory than an LSL script gets.  But LSL or better, in LSL…  that just sounds painful.  My emulator thing is somewhere about 5k lines of Typescript at last count, and Version 2 was about 8kLOC of Python (and that wasn't counting lines without actual code — no blank, comment, or brace-only lines)…  Granted, an awful lot of that is just reproducing LSL's many weirdnesses in another language…

[Image: listDivideExample.jpg]

Small example of where V3 is at right now: Successfully shows the result of evaluating a real LSL function, an incorrect warning that the Ai variable is "assigned but not used", two errors for trying to put a list into a list on that bottom line (though the emulator still handles it, and the assertEq built-in testing function uses it to verify the value returned from the function, matches the value on the right which was produced by that exact function in a real LSL script in SL), and it's not showing the refs count for functions (number of places from which the function is called).  The "print" function is also emulated using JS's console.log, which can be kinda handy, since llOwnerSay doesn't actually do anything yet (have to figure out how to open "chat" window)…


43 minutes ago, Bleuhazenfurfle said:

I made a few missteps early on, like not implementing proper backtracking in the parser

I'm avoiding that by using a Tokenizer prior to Parsing. The Tokenizer only needs to "backtrack" to check for Combo Tokens (++, /*, etc.).  The Parser is iterative / recurses data using a data stack in LSD, so its "back-tracking" is limited to when a child checks its parent's data, or when control iterates back to a parent Schema from a Child Schema.

