LSL memory leak? Fragmentation?


animats

Recommended Posts

I deleted my previous post because I never explained it well. So I'll try again after a good sleep :)

In other .NET/Mono languages, a list type has a Capacity property which is used to allocate memory for the list's element pointers. list.Capacity is always greater than or equal to list.Count (the length).

An example in another language (pseudocode):

list.Create();

list.Add([1 item]);     // list.Count = 1.  list.Capacity = 1;
list.Remove(0, 0);      // list.Count = 0. list.Capacity = 1;

list.Add([100 items]);  // list.Count = 100. list.Capacity = 100;
list.Add([2 items]);    // list.Count = 102. list.Capacity = 102;
list.RemoveAll();       // list.Count = 0. list.Capacity = 102; 
list.Add([1 item]);     // list.Count = 1. list.Capacity = 102;

list.Destroy();         // all memory allocated to list is freed

Once list.Capacity memory within an event or function is allocated, it is not freed until list.Destroy() is called or list.Capacity is set to 0.

In LSL we can't call .Destroy() or set .Capacity directly, which we can in other languages.

Also, and quite importantly, in LSL the pointer memory allocated to variables declared in events (as opposed to function variables) is not freed when the event exits. The memory remains allocated until the running instance of the script is destroyed.

What we can do in LSL to force the equivalent of list.Destroy(), freeing list.Capacity from an event, is to wrap the list in a function. Example:

process(string text)
{
   list data = llParseString2List(text, ...);

   // data.Destroy() is called when the function exits
}

event_(string text)
{
    process(text);
}

 

An example test script that shows what is happening with pointer memory:


integer mem_begin;
integer mem_event;
integer mem_func1;
integer mem_func2;


process(string text)
{
   list d = llParseString2List(text, ["|"], []); 
   
   // d.Destroy() is called when the function exits   
}


default
{
    state_entry()
    {
       mem_begin = llGetFreeMemory();
    }
    

    touch_end(integer n)
    {
       string text = "0|1|2|3|4|5|6|7|8|9";
       list data = llParseString2List(text, ["|"], []);   // list data.Capacity is 10
       mem_event = llGetFreeMemory();
       
       process(text);
       mem_func1 = llGetFreeMemory();
       
       process(text);
       mem_func2 = llGetFreeMemory();
       
       data = [];  // sets .Count to 0.  .Capacity is 10
                   // data.Capacity memory is not freed  
       
        
       llOwnerSay("begin: " + (string)mem_begin + " event: " + (string)mem_event + " func1: " + (string)mem_func1 + " func2: " + (string)mem_func2 +  " data = [] " + (string)llGetFreeMemory());
       
       llSetTimerEvent(2.0);   
    }
    
    timer()
    {
       llSetTimerEvent(0.0);
       llOwnerSay("TIMER " + (string)llGetFreeMemory());
    }
}

 


44 minutes ago, animats said:

Try building up the list by doubling, as I did.

Getting 4 bytes per element hints that there's a more efficient representation that it uses some of the time.

On this: it's that when we add to a list, a copy of the new, larger list is created before the old list is freed. Doubling uses less memory during creation of the new list than adding items one at a time.
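To illustrate, here is a minimal sketch (filler content only, since for a memory test only the allocation pattern matters) of building an n-element list by doubling versus one element at a time:

// builds a filler list of about n elements by repeated doubling;
// each += still copies the list, but only O(log n) copies are made
// and the peak old-list + new-list footprint is lower
list build_by_doubling(integer n)
{
    list result = [0];
    while (llGetListLength(result) * 2 <= n)
        result += result;               // one copy doubles the length
    while (llGetListLength(result) < n)
        result += [0];                  // top up the remainder
    return result;
}

// builds the same-sized list one element at a time;
// every += copies the whole list built so far
list build_one_at_a_time(integer n)
{
    list result = [];
    integer i;
    for (i = 0; i < n; ++i)
        result += [0];
    return result;
}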

 

  


40 minutes ago, Mollymews said:

On this: it's that when we add to a list, a copy of the new, larger list is created before the old list is freed. Doubling uses less memory during creation of the new list than adding items one at a time.

That makes sense.

All this has been very useful.

I've written a path planner which does paths of any length. If the goal is far and the route is complicated, it can run out of memory creating a long list of waypoints. The planning is done in steps, so I had the planner stop and return a partial path when low memory was detected. The next planning cycle will pick up from the new position and continue to head for the goal, but redoing it repeats some work, makes it slower, and the motion pauses.  That helped with the stack/heap collision problem.

But llGetFreeMemory kept returning smaller and smaller values; memory looked tight forever thereafter. So the first really long path left the planner stuck doing the job in little pieces until the next script reset.

Now, when the planner sees low free memory, it uses the forced GC trick:

integer memlimit = llGetMemoryLimit(); // how much are we allowed?
llSetMemoryLimit(memlimit-1); // reduce by 1 to force GC
llSetMemoryLimit(memlimit); // set it back

and then checks llGetFreeMemory again. So, after handling an extra-long path, it now goes back to normal until the next long path. No more stack/heap collisions from that script, and it doesn't get stuck in slow mode.
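Wrapped up as a helper, a small sketch using the same trick (the function name is just illustrative) looks like this:

// forces a garbage collection by briefly lowering the memory limit,
// then returns a realistic free-memory figure
integer gc_and_get_free()
{
    integer memlimit = llGetMemoryLimit();
    llSetMemoryLimit(memlimit - 1);   // shrinking the limit forces a GC
    llSetMemoryLimit(memlimit);       // restore the original limit
    return llGetFreeMemory();
}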

So, working for now. Thanks, everybody.


3 hours ago, Mollymews said:

Also, and quite importantly, in LSL the pointer memory allocated to variables declared in events (as opposed to function variables) is not freed when the event exits. The memory remains allocated until the running instance of the script is destroyed.

This is indeed important. Is it documented?

I'm not sure how literally to take "pointer memory" as opposed to overall list memory.

Earlier in this thread I got a little stymied by the fact that the original code was gradually exhausting free memory through successive function calls.* Even knowing this distinction between local variables in functions and in event handlers, that function sure seems to be leaving garbage behind. Is it even worse for event handlers? That is, after they exit, do they leak allocated memory outright, as "pointer memory... is not freed" implies, rather than merely leaving garbage on the heap?

This is subject to experimentation, of course, but it's possible I'm just misunderstanding the whole premise.

________________
* My original emphasis on gc had tacitly -- and falsely -- assumed global variables, despite the OP clearly stating otherwise.


In regular languages, all local variables are freed when a function exits because, unless they are explicitly allocated on the heap, they are traditionally created directly on the stack. Input variables are passed via the stack, and output values are returned via the stack. When the function returns, it rolls the stack back to its starting point, pushes any return values, and exits to the caller.


1 hour ago, Qie Niangao said:

I'm not sure how literally to take "pointer memory" as opposed to overall list memory.

 

Typically, the way lists are implemented on VMs is (in pseudocode):

Node {
   ptr Data;  // points to the data value of the node
}

List {
   Nodes [array] of Node pointers;
}

When we write in LSL: list += ["First"];

then the Nodes array is increased by one element. A Node is allocated in memory and is pointed to by the newly added array element.

When we follow this with: list = [];

then Free(Nodes[0]) is called, which frees only the memory used by the node. However, the Nodes array is not reduced by one element; it stays the same size (Capacity), and the freed element is simply marked as available. The reason this is done on most VMs is efficiency: if we shrank the Nodes array, we would have to MemMove all the remaining Node pointers.

On some VMs, when an event is exited the event is treated like a function and everything is cleared. On the LSL VM this doesn't seem to be the case, and now I'm speculating:

It appears from the behaviour that the memory allocated to event variables is not freed. If so, that can make sense from a performance perspective.

Or it's just a bug, and List.Clear() (which frees each Node's memory and the Nodes array memory) is not called when the event exits.


After much struggling, I have things working, but memory is still extremely tight in one script. Notes:

  • There's no memory leak. These busy scripts can run for days. The way llGetFreeMemory reports free memory creates the illusion of a memory leak, but if you force a garbage collection, llGetFreeMemory then returns a realistic number.
  • Passing link messages around creates some problems. Link messages are broadcasts to every script in the prim. Scripts can ignore messages, but a big incoming message can cause a stack/heap collision in a script that's going to discard the message. So every script has to have enough free memory for the longest message sent by any script. Putting scripts in different prims is a possible workaround; you can address messages by prim, but not by script.
  • The 64 event queue limit is real, and it's exactly 64, as documented. Two scripts exchanging messages rapidly while another script in the same prim is doing a long task can overflow the queue for the script with the long task. Then events are lost. I hit that.
  • A stack/heap overflow produces a message on DEBUG_CHANNEL, but only within normal 20m "talk" range. For a while, I was running around after my NPCs to be in range when they crashed. I finally wrote a little script that repeats DEBUG_CHANNEL error messages to llOwnerSay, so I can see the crash info anywhere in the same sim. Since you can't listen to yourself, that has to go in a different prim than the ones that might crash. (A minimal repeater sketch follows this list.)
  • A script can't reset itself after a stack/heap overflow, but another script can reset it with llResetOtherScript. So I now have a stall timer: if all progress stops, the script with the stall timer resets all the other scripts and recovers. (A watchdog sketch also follows this list.)
  • This was a huge pain to write, but it's working reasonably well.
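A minimal sketch of the DEBUG_CHANNEL repeater mentioned above, assuming it sits in a different prim from the scripts being watched (the wording of the relayed message is just illustrative):

default
{
    state_entry()
    {
        // script errors are emitted on DEBUG_CHANNEL; listen to everything on it
        llListen(DEBUG_CHANNEL, "", NULL_KEY, "");
    }

    listen(integer channel, string name, key id, string message)
    {
        // relay the error to the owner, who hears llOwnerSay anywhere in the region
        llOwnerSay("DEBUG from " + name + ": " + message);
    }
}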
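And a sketch of the stall-timer idea (the script name, heartbeat string, and timings are made up for illustration): the worker script calls llMessageLinked(LINK_THIS, 0, "HEARTBEAT", NULL_KEY) whenever it makes progress, and this watchdog script in the same prim resets it if the heartbeats stop.

integer last_beat;   // unix time of the last heartbeat seen

default
{
    state_entry()
    {
        last_beat = llGetUnixTime();
        llSetTimerEvent(30.0);                     // check every 30 seconds
    }

    link_message(integer sender, integer num, string msg, key id)
    {
        if (msg == "HEARTBEAT")
            last_beat = llGetUnixTime();
    }

    timer()
    {
        if (llGetUnixTime() - last_beat > 60)      // no progress for a minute
        {
            llResetOtherScript("worker script");   // placeholder script name
            last_beat = llGetUnixTime();
        }
    }
}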

2 hours ago, animats said:
  • There's no memory leak. These busy scripts can run for days. The way llGetFreeMemory reports free memory creates the illusion of a memory leak, but if you force a garbage collection, llGetFreeMemory then returns a realistic number.

Did you ever figure out what was generating garbage, despite all dynamic allocation happening in functions (and hence, it would seem, on the stack)? I have a half-baked theory that I never bothered to test: that merely calling a function that returns a value of string or list type (even into a local variable) uses some variable-sized chunk of heap, allocating new blocks whenever the required space fluctuates above the largest previously freed block. I think I got this idea from the Calling Functions section of the (very unofficial) LSL Script Memory wiki page, but now that I read it again I may have misinterpreted it (I also thought this was happening when passing parameter values, which that page isn't saying at all). But unless it's something like this, it's a mystery whence the garbage can arise, right?
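For what it's worth, a sketch of the sort of test that could check this (the list size and repeat count are arbitrary): compare llGetFreeMemory before the calls, after the calls, and after a forced GC. If the middle figure drops but the post-GC figure recovers, the calls are producing collectible garbage rather than leaking.

list make_list()
{
    // hypothetical callee that returns a list value into a local
    return llParseString2List("0|1|2|3|4|5|6|7|8|9", ["|"], []);
}

integer gc_free()
{
    // free memory after a forced GC (same llSetMemoryLimit trick as above)
    integer memlimit = llGetMemoryLimit();
    llSetMemoryLimit(memlimit - 1);
    llSetMemoryLimit(memlimit);
    return llGetFreeMemory();
}

default
{
    touch_end(integer n)
    {
        integer before = llGetFreeMemory();
        integer i;
        for (i = 0; i < 100; ++i)
        {
            list tmp = make_list();   // returned value assigned to a local, then discarded
        }
        llOwnerSay("before: " + (string)before
            + "  after: " + (string)llGetFreeMemory()
            + "  after GC: " + (string)gc_free());
    }
}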

In passing: Including a monitoring script to reset other scripts that mysteriously stop running is pretty common in LSL applications that need availability; it's not uncommon to have one of the monitored scripts also monitor and reset that monitoring script, all for Murphy's Law compliance.


I've been watching this thread with interest, because I too have a project slowly fermenting that is running into similar issues through having a large amount of data to manipulate.

I'm SO glad somebody else has already done the painstaking research on it, and I can add these LSL-specific workaround techniques to my own toolbox :)

 

Thank you, @animats and the other contributors here :)

