Arduenn Schwartzman

Revived JIRA for increase in memory limit


Hi folks,

I just reissued a (slightly altered) feature request for increased script memory. Most of you can probably imagine the potential benefits of such an increase.

The old JIRA (closed, unimplemented) can be found here: https://jira.secondlife.com/browse/BUG-134167

As a solution to potential abuse of high-memory scripts, a way of pay-per-script is proposed, as follows:

  • A creator pays L$500 for a 128-KB script to appear in their inventory (L$2,000 for a 256-KB script, etc.?).
  • This script remains No Trans for the creator (though Copy and Mod) until a 'finalize' feature is activated via menu or checkbox and a confirmation dialog.
  • Upon 'finalize', the script becomes Copy/Trans, yet No Mod, for the creator and anyone else. The creator can then set it to either No Copy or No Trans.
  • Optionally, limit this feature to Premium members.

A simplified version, which eliminates a flaw (pointed out by @Innula Zenovka) that would allow copying the script without payment:

  • Checking a '128 KB' checkbox in the script window enables 128 KB and turns the script No Trans for the creator.
  • Clicking a 'Finalize' button in the script window brings up a payment dialog.
  • Payment irreversibly turns the script No Mod/Copy/Trans for the creator.

The new JIRA is here: https://jira.secondlife.com/browse/BUG-226311

Any suggestions for or against it in this thread (since not every scripter visits the JIRA site on a regular basis)?

Edited by Arduenn Schwartzman


I like the idea of more script memory; it is about time.
I don't like the idea of paying per script for it, and I am not sure LL would really make enough money from it to be another viable source of income.
I am OK with the idea of it being part of a Premium perk, but I am sure that will not be popular with non-Premium members; though perhaps anyone could buy the perk for an extra couple of USD without Premium too?
I like better the idea that once you have the perk, you can use more memory in your scripts automatically, without having to make special scripts. The key part is that a script has to be created by someone who has the perk at the time of creation to access the additional memory. If it is transferred and then edited by a new owner (if modify) who doesn't have the perk at the time of saving changes, then the script loses access to the additional memory.


@Gabriele Graves Remember that the JIRA proposing increased memory for Premium users was already rejected by LL.

My pay-per-script is kind of a last attempt to motivate LL to implement it anyway.

11 minutes ago, Gabriele Graves said:

I am not sure LL would really make enough money from it to be another viable source of income.

I'm not sure if it will either. It's more intended as a means to limit abuse.

7 minutes ago, Fionalein said:

Generating more lag

Arguments for how an increased memory limit can generate less lag can be found in the first link. Generating less lag is actually one of the reasons for the feature request.

Edited by Arduenn Schwartzman

1 minute ago, Arduenn Schwartzman said:

@Gabriele Graves Remember that the JIRA proposing increased memory for Premium users was already rejected by LL.

My pay-per-script is kind of a last attempt to motivate LL to implement it anyway.

I'm not sure if it will either. It's more intended as a means to limit abuse.

Fair enough, and I wasn't aware that LL had rejected that already. Still, I cannot get behind the idea of pay-per-script, and compared to other L$ sinks it is very high. In my experience, the people who want to abuse stuff usually have plenty of money to do so and are quite happy if it gives them an advantage.

Perhaps it should be a perk that everyone gets for free but that LL has to enable for your account; if you get caught abusing it, it can be switched off.

8 minutes ago, Arduenn Schwartzman said:

Arguments for generating less lag by increasing memory limit can be found in the first link.

Those arguments only work with the introduction of one additional limit: cap total attached script memory. Otherwise we will just end up with more memory used by bigger scripts, causing even bigger lag spikes whenever some script hoarder enters a sim.

Edited by Fionalein
10 minutes ago, Fionalein said:

Cap total attached script memory

I am so for that too. Totally agree.

Or limit 128-KB scripts to non-attached objects only. But capping attached script memory seems a more elegant and better idea in general, even regardless of a script memory increase.

Edited by Arduenn Schwartzman

64 KB and the ability to have multiple scripts is enough. Anything requiring more can be done externally. Most lag is caused by badly written scripts or certain script types. Also, nearly all attached scripts are not using 64 KB anyhow. Script counters cannot access actual script memory usage; they just count how many scripts the agent has and produce a count that is meaningless. They cannot even determine whether a script is disabled. I would be happier if LL actually did what it said it would and implemented such things as non-owner llTeleportAgentGlobalCoords. Also, L$500 is about US$2.00, which would pay for nothing of worth, and LL does not do things that cost more than is paid for.

39 minutes ago, steph Arnott said:

64 KB and the ability to have multiple scripts is enough.

I disagree for many reasons (also stated in that first JIRA, https://jira.secondlife.com/browse/BUG-134167 - please, people, read it first before coming up with arguments here).

The biggest reason is the widespread practice of circumventing the memory limit by splitting code across multiple scripts that exchange data via llMessageLinked and link_message. The result: less efficient, and therefore laggier, scripts; less reliable scripts due to asynchronicity between them; more complex script design with more potential for bugs; more time and energy wasted developing a complex communication structure between scripts; and more scripts to be transported across regions, each with its own CPU overhead.

Briefly stated, one 128-KB script is dramatically superior to two 64-KB scripts having to work in concert.

Personally, the 64-KB limit forces me to omit features in many products that I make (to many customers' grief).

39 minutes ago, steph Arnott said:

LL do not do things that cost more than paid for..

They do. A lot. Financially, they take a lot of risks introducing new features in SL. For instance, I'm pretty sure that, so far, LL hasn't gotten any ROI on Pathfinding. Nonetheless, I find it very comforting that LL puts selfless effort into new ideas by taking these risks. One thing they get back for it is consumer and creator confidence in the SL platform. At least, from yours truly.

Edited by Arduenn Schwartzman
1 minute ago, Arduenn Schwartzman said:

I disagree for many reasons.

The biggest reason is the widespread practice of circumventing the memory limit by splitting code across multiple scripts that exchange data via llMessageLinked and link_message, resulting in less efficient and therefore laggier scripts, less reliable scripts, more complex script design, more time and energy wasted developing a complex communication structure between scripts, and more scripts to be transported across regions, each with its own CPU overhead.

Briefly stated, one 128-KB script is dramatically superior to two 64-KB scripts having to work in concert.

Personally, the 64-KB limit forces me to omit features in many products that I make (to many customers' grief).

They do. A lot. Financially, they take a lot of risks introducing new features in SL. For instance, I'm pretty sure that, so far, LL hasn't gotten any ROI on Pathfinding. Nonetheless, I find it very comforting that LL puts selfless effort into new ideas by taking these risks. One thing they get back for it is consumer and creator confidence in the SL platform. At least, from yours truly.

'Less efficient and therefore laggier scripts': the only lag would be the server trudging through the script looking for the relevant data called for. Have a 128 KB script and it will take forever. Also, I use sorted memory scripts, so the main script only has to go to the one which contains, e.g., D to F.

'One 128-KB script is dramatically superior to two 64-KB scripts having to work in concert': and your proof of that is?

Also, LL do nothing if there is no profit. Sansar was paid for by speculating investors.

24 minutes ago, steph Arnott said:

Have a 128kb script and it will take forever.

The situation now is: have two 64-KB scripts and it will take even longer.

24 minutes ago, steph Arnott said:

and your proof of that is?

In two 64-KB scripts:

  • Step 1: Script 1 dumps parameters into a request string
  • Step 2: Script 1 sends the request string to Script 2
  • Step 3: Script 2 parses the request string into parameters
  • Step 4: Script 2 extracts or processes data from a list using said parameters
  • Step 5: Script 2 sends data or a confirmation signal back to Script 1
  • Step 6: Script 1 parses the response and acts accordingly
  • (Events triggered: 2; actually 4, since scripts also receive their own llMessageLinked)

In one 128-KB script:

  • Step 1: Script 1 pulls data from the list and acts accordingly
  • (Events triggered: 0)
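The two-script round trip above can be sketched in LSL. This is a minimal illustration only; the message numbers 42/43 and the pipe-delimited request format are arbitrary choices, not anything mandated by the platform:

```lsl
// Script 1 of 2: packs parameters into a request string and
// waits for Script 2's reply (Steps 1, 2 and 6 above).
default
{
    touch_start(integer total_number)
    {
        // Steps 1-2: dump parameters into a request string, send to Script 2
        llMessageLinked(LINK_THIS, 42,
            "get_anim|" + (string)llDetectedKey(0), NULL_KEY);
    }

    link_message(integer sender, integer num, string msg, key id)
    {
        // Step 6: parse the response and act on it. The filter on num is
        // needed because this script also hears its own request (num 42),
        // which is part of the extra event overhead counted above.
        if (num == 43)
        {
            llOwnerSay("Reply from Script 2: " + msg);
        }
    }
}
```

```lsl
// Script 2 of 2 (separate script in the same prim): parses the request,
// looks up the data, and replies (Steps 3-5 above).
default
{
    link_message(integer sender, integer num, string msg, key id)
    {
        if (num == 42)
        {
            // Step 3: parse the request string back into parameters
            list req = llParseString2List(msg, ["|"], []);
            key av = (key)llList2String(req, 1);
            // Step 4 would look up data for 'av' here (omitted)
            // Step 5: send the result back to Script 1
            llMessageLinked(LINK_THIS, 43, "result_for_" + (string)av, NULL_KEY);
        }
    }
}
```

In a single 128-KB script, all of this would collapse into one direct list lookup with no link_message traffic at all.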

Another example:

One script has a minimal CPU time, say .001 ms. Ergo, two 64-KB scripts have a minimal CPU time of at least .002 ms, whereas a single 128-KB script has a minimal CPU time of .001 ms. The same reasoning holds for average CPU time.

Edited by Arduenn Schwartzman

1 minute ago, Arduenn Schwartzman said:

The situation now is: have two 64-KB scripts and it will take even longer.

In two 64-KB scripts:

  • Step 1: Script 1 dumps parameters into a request string
  • Step 2: Script 1 sends the request string to Script 2
  • Step 3: Script 2 parses the request string into parameters
  • Step 4: Script 2 extracts or processes data from a list using said parameters
  • Step 5: Script 2 sends data or a confirmation signal back to Script 1
  • Step 6: Script 1 parses the response and acts accordingly

In one 128-KB script:

  • Step 1: Script 1 pulls data from the list and acts accordingly

Another example:

One script has a minimal CPU time, say .001 ms. Ergo, two 64-KB scripts have a minimal CPU time of at least .002 ms, whereas a single 128-KB script has a minimal CPU time of .001 ms. The same reasoning holds for average CPU time.

LOL. Whatever you say.


Script-to-script communication within the same object is definitely an overhead I wish I could avoid.

But at the same time, most scripts don't need 128 KB, and without a reasonable incentive it will go exactly the same way priority-4 anims and 1024x1024 textures did: "why use less?"

41 minutes ago, Kyrah Abattoir said:

"why use less?"

Hence the 'less and pay L$0, or more and pay L$500'.

I would not mind LL charging L$20 (oh heck, L$40) for uploading 1024 textures either, by the way. But that's in hindsight. The grid would revolt for sure.

Edited by Arduenn Schwartzman
28 minutes ago, Arduenn Schwartzman said:

Hence the 'less and pay 0, or more and pay L$500'

For only L$500 I can get away with some quick, sloppy programming instead of good programming? Deal!

Edited by Fionalein
2 hours ago, Fionalein said:

For only L$500 I can get away with some quick, sloppy programming instead of good programming? Deal!

I don't mind quick at all. But to which of the following two does 'sloppy' apply?

In two 64-KB scripts:

  • Step 1: Script 1 dumps parameters into a request string
  • Step 2: Script 1 sends the request string to Script 2
  • Step 3: Script 2 parses the request string into parameters
  • Step 4: Script 2 extracts or processes data from a list using said parameters
  • Step 5: Script 2 sends data or a confirmation signal back to Script 1
  • Step 6: Script 1 parses the response and acts accordingly

In one 128-KB script:

  • Step 1: Script 1 pulls data from the list and acts accordingly

7 minutes ago, Arduenn Schwartzman said:

I don't mind quick at all. But to which of the following two does 'sloppy' apply?

In two 64-KB scripts:

  • Step 1: Script 1 dumps parameters into a request string
  • Step 2: Script 1 sends the request string to Script 2
  • Step 3: Script 2 parses the request string into parameters
  • Step 4: Script 2 extracts or processes data from a list using said parameters
  • Step 5: Script 2 sends data or a confirmation signal back to Script 1
  • Step 6: Script 1 parses the response and acts accordingly

In one 128-KB script:

  • Step 1: Script 1 pulls data from the list and acts accordingly

And multiple scripts that are copies of the original do bytecode sharing.

Edited by steph Arnott


These days I do all my info storage on notecards or on my server. Very little gets stored in the script itself. I haven't had script memory issues in a long time.
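For context, pulling stored data back out of a notecard in LSL is asynchronous: each llGetNotecardLine call returns via a dataserver event, one line per round trip. A minimal sketch, where the notecard name "config" is an arbitrary assumption:

```lsl
// Reads a notecard named "config" line by line via dataserver events.
integer gLine = 0;
key gQuery;

default
{
    state_entry()
    {
        // Request the first line; the result arrives asynchronously
        gQuery = llGetNotecardLine("config", gLine);
    }

    dataserver(key query_id, string data)
    {
        if (query_id != gQuery) return; // ignore unrelated dataserver events
        if (data == EOF)
        {
            llOwnerSay("Done reading notecard.");
            return;
        }
        llOwnerSay("Line " + (string)gLine + ": " + data);
        // Request the next line
        gQuery = llGetNotecardLine("config", ++gLine);
    }
}
```

The one-event-per-line round trip is why the thread notes below that no sit system reads notecard data on the fly for each animation change.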

16 minutes ago, steph Arnott said:

And multiple scripts that were copies of the original do bytecode sharing.

How is that relevant to the request for more memory? It's not about running more copies of the same script sharing the same bytecode; this request is about putting more unique code and unique data in a single script.

Edited by Arduenn Schwartzman

10 minutes ago, Gadget Portal said:

These days I do all my info storage on notecards or on my server

If you were selling something like AVSitter scripts, would you store the positions and rotations of individual animations, and the user IDs of all the AVSitter-like furniture out there on the grid, on your server? Right now, the average sofa needs four scripts to handle and keep the data in-world. Also, probably less than 1% of all furniture owners back up their data in notecards. And definitely no sit system reads notecard data on the fly for each animation change; notecards totally lack the dynamic capabilities and speed.

Edited by Arduenn Schwartzman

1 minute ago, Arduenn Schwartzman said:

How is that relevant to the benefit of more script memory at all? It's not about running more copies of the same script sharing the same bytecode; this request is about putting more unique code and unique data in a single script.

Because ten identical scripts that are copies of the compiled original act as if they are one script; filling up one and moving to the next has already achieved what you want. Also, LL ran a feasibility study and concluded that giving people the ability to use whatever they required threw up an issue that Fionalein pointed out: most will just write bad scripts. Also, PHP (if you know how to use it) is vastly superior to increasing the script memory allocation. Personally, I do not do anything that needs resorting to an external server.
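The external-server route mentioned here typically goes through llHTTPRequest and the http_response event on the LSL side. A minimal sketch; the URL is a hypothetical placeholder and the server-side PHP is not shown:

```lsl
// Stores a key/value pair on an external server via HTTP POST.
// "https://example.com/store.php" is a placeholder, not a real endpoint.
key gRequest;

default
{
    touch_start(integer total_number)
    {
        gRequest = llHTTPRequest(
            "https://example.com/store.php",
            [HTTP_METHOD, "POST",
             HTTP_MIMETYPE, "application/x-www-form-urlencoded"],
            "av=" + (string)llDetectedKey(0) + "&pose=sit01");
    }

    http_response(key request_id, integer status, list metadata, string body)
    {
        if (request_id != gRequest) return; // ignore unrelated responses
        if (status == 200) llOwnerSay("Stored: " + body);
        else llOwnerSay("Server error: " + (string)status);
    }
}
```

This trades script memory for network latency: the data no longer lives in the script, but every lookup is an asynchronous HTTP round trip.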

3 minutes ago, Arduenn Schwartzman said:

If you were selling something like AVSitter scripts, would you store the positions and rotations of individual animations, and the user IDs of all the AVSitter-like furniture out there on the grid, on your server? Right now, the average sofa needs four scripts to handle and keep the data in-world. Also, probably less than 1% of all furniture owners back up their data in notecards. And definitely no sit system reads notecard data on the fly for each animation change; notecards totally lack the dynamic capabilities and speed.

That's a fair point. I've made very little furniture, so maybe I just don't know what I'm talking about. The furniture I have used, though, never needed more than one or two scripts.

If you've got so many poses in a single couch or bed that you're hitting memory limits, I'd be extremely impressed.

8 minutes ago, Arduenn Schwartzman said:

Where did you get that from?

That was done eight-odd years ago. LL are not going to raise the limit from 64 KB. It was only set at that to keep the old LSO scripts running.

