Has anyone tried the LZW Script? Any thoughts?


Love Zhaoying

You are about to reply to a thread that has been inactive for 2022 days.

Please take a moment to consider if this thread is worth bumping.

Recommended Posts

Today I found this: http://wiki.secondlife.com/wiki/LZW_LSL_Implementation

I am thinking: If it performs OK and results in a good compression, I could use it for such things as:

1) Storing "compressed" information in a notecard to reduce load time / increase capacity

2) Storing "compressed" data in a script (hard-coded) and "decompressing" it when needed, to be able to store more data (no notecard needed).

Any thoughts?


Without going through the process of testing this (I assume you would do that yourself to see if it's viable for your uses):

  • This will not reduce load times.
    You are just adding more executable code to get the data you want. De/compression takes time.
     
  • A notecard can hold 65536 bytes of data with no impact on script memory usage (scripts have the same 64KB limit).
    Storing compressed data in a notecard is unnecessary; you can always add another notecard.
     
  • Storing compressed data within the script memory is the real use-case. (Especially in script-to-script communication.)
    But if you have the decompression code in the same script, you probably won't save as much as you think.
Edited by Wulfie Reanimator

9 minutes ago, Wulfie Reanimator said:

This will not reduce load times.
You are just adding more executable code to get the data you want. De/compression takes time.

Yes, but... aren’t notecard reads throttled? If so, this may be faster.

9 minutes ago, Wulfie Reanimator said:

But if you have the decompression code in the same script, you probably won't save as much as you think

The compression / decompression script as presented was designed to be called via linked message as a separate script.


How do you want to compress the notecard? As a whole? Then you need to read the whole notecard first, then decompress it and split it into lines again. (This requires double memory consumption during the process, but fewer lines to read.)

Or line by line? The maximum line length (for script reading) is 256, so I see no way to speed things up here.

I never had the wish to speed up notecard reading, but shouldn't it work to simply fire 10 ReadLine commands at once and sort the incoming lines in the dataserver event? Or read from multiple notecards at once? (I won't test that at the moment.)
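That multi-read idea can be sketched roughly like this (untested; the notecard name "DataCard" and the line count are assumptions). Note that llGetNotecardLine still carries a forced 0.1-second delay per call, so this mainly overlaps server latency rather than removing the throttle:

```lsl
string NOTECARD = "DataCard"; // hypothetical notecard name
integer COUNT = 10;           // lines to fetch in one burst
list gQueries;                // query key at index i belongs to line i
list gLines;                  // lines filled in as replies arrive
integer gReceived;

default
{
    state_entry()
    {
        integer i;
        for (i = 0; i < COUNT; ++i)
        {
            // Fire all requests; replies may arrive in any order.
            gQueries += llGetNotecardLine(NOTECARD, i);
            gLines += [""];
        }
    }

    dataserver(key query_id, string data)
    {
        // Match the reply to its line number via the query key.
        integer i = llListFindList(gQueries, [query_id]);
        if (i == -1) return;
        gLines = llListReplaceList(gLines, [data], i, i);
        if (++gReceived == COUNT)
            llOwnerSay(llDumpList2String(gLines, "\n"));
    }
}
```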

 


23 minutes ago, Nova Convair said:

Then you need to read the whole notecard first, then decompress it and split it into lines again.

...or save the compressed output to a new notecard, so there are fewer lines to read.

I will do some experiments with big notecard read times vs. decompress hard-coded data times.
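A minimal sketch of that kind of timing test (untested; "BigData" is a hypothetical notecard name). It reads lines sequentially until EOF and reports the elapsed time; the same llGetTime() bracketing could wrap the decompression call for comparison:

```lsl
string NOTECARD = "BigData"; // hypothetical notecard name
integer gLine;
float gStart;

default
{
    touch_start(integer total_number)
    {
        gLine = 0;
        gStart = llGetTime();
        llGetNotecardLine(NOTECARD, gLine);
    }

    dataserver(key query_id, string data)
    {
        if (data == EOF)
        {
            // Report total wall-clock time for the whole read.
            llOwnerSay("Read " + (string)gLine + " lines in "
                + (string)(llGetTime() - gStart) + " seconds");
            return;
        }
        llGetNotecardLine(NOTECARD, ++gLine);
    }
}
```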


If you don't mind your data being exposed to the world, you can use the prim media parameters, specifically the home URL, current URL, and whitelist string fields. Be aware that Oz Linden recently added schema enforcement to the string literals, something that didn't exist for any cast type in any other function prior.

Also be aware that these fields are face-dependent. If you use it on a legacy prim and reduce the face count below where data is stored, you will lose said data.
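For anyone who wants to try this, a rough sketch (untested) of stashing a string in a face's media whitelist field and reading it back. The whitelist field is used here rather than the URL fields, since those are the ones affected by the schema change mentioned above; the whitelist has its own size limits, so large data would need chunking across faces:

```lsl
// Store a string in face 0's media whitelist field.
// Anyone who can inspect the prim's media can read this.
store(string data)
{
    llSetLinkMedia(LINK_THIS, 0, [PRIM_MEDIA_WHITELIST, data]);
}

// Read it back from the same face.
string fetch()
{
    return llList2String(
        llGetLinkMedia(LINK_THIS, 0, [PRIM_MEDIA_WHITELIST]), 0);
}
```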


2 hours ago, Love Zhaoying said:

The compression / decompression script as presented was designed to be called via linked message as a separate script.

I understand that, the wiki page says as much, but that still doesn't lower the total memory usage.

That's not necessarily important though. When you decide to start compressing things, you are trading time for space -- you can't lower both at the same time. Which is more important?

2 hours ago, Love Zhaoying said:

Yes, but... aren’t notecard reads throttled? If so, this may be faster.

Reading a line has a forced sleep of 0.1 seconds, yes, but that doesn't go away due to "compressed data." What this does add is script-to-script communication (which isn't even close to instant) with the added compression process.

Even if you hard-coded the values instead of reading from a notecard, I'm very skeptical about the "llMessageLinked + decompress + llMessageLinked" round-trip taking less time than just reading the line, especially enough for it to be worth the added complexity.

1 hour ago, Love Zhaoying said:

I will do some experiments with big notecard read times vs. decompress hard-coded data times.

I'd be genuinely curious to hear what you find out, let us know how you tested it too!


6 minutes ago, Wulfie Reanimator said:

I'd be genuinely curious to hear what you find out, let us know how you tested it too!

Will do! If it’s slow, all bets are off! 

By the way, I noticed the script had a lot of “pre-Mono” conventions that used to help with memory, such as list = (list=[]) + list + ...


2 minutes ago, Love Zhaoying said:

Will do! If it’s slow, all bets are off! 

By the way, I noticed the script had a lot of “pre-Mono” conventions that used to help with memory, such as list = (list=[]) + list + ...

Now that you mention it, the wiki page was created and last updated in September 2008. Mono was only a month old back then.

Don't use LSO though, Mono runs much faster regardless of code.


5 minutes ago, Wulfie Reanimator said:

Now that you mention it, the wiki page was created and last updated in September 2008. Mono was only a month old back then.

Don't use LSO though, Mono runs much faster regardless of code.

Well, duh. Any harm in removing all those extra list=[] though?


21 hours ago, Lucia Nightfire said:

Be aware that Oz Linden recently added schema enforcement to the string literals, something that didn't exist for any cast type in any other function prior.

 

Do you have a link to what Oz changed for string literals? I'm not finding anything by googling or searching the forums. Thanks!


22 hours ago, Lucia Nightfire said:

If you don't mind your data being exposed to the world, you can use the prim media parameters, specifically the home URL, current URL, and whitelist string fields. Be aware that Oz Linden recently added schema enforcement to the string literals, something that didn't exist for any cast type in any other function prior.

Also be aware that these fields are face-dependent. If you use it on a legacy prim and reduce the face count below where data is stored, you will lose said data.

I missed this. Yes, I saw all the posts about HTTP.  That’s not a direction I want to go for this project. But thanks!


38 minutes ago, Lexia Moonstone said:

Do you have a link to what Oz changed for string literals? I'm not finding anything by googling or searching the forums. Thanks!

I think it was the posts about llHTTPRequest(). Some changes were made that broke a lot of stuff. Lucia can correct me if that’s not what she was referring to.


5 hours ago, Lexia Moonstone said:

Do you have a link to what Oz changed for string literals? I'm not finding anything by googling or searching the forums. Thanks!

This change, like the change Oz made that blocked control-0 characters from being used in URLs, was done without any official blog or forum post. It took the community making posts and JIRAs about the negative effects for said changes to come to light. Wiki page edits for llHTTPRequest() were made months after the change. The wiki pages for llSetPrimMediaParams() & llSetLinkMedia() have yet to be updated to cover the schema change.

When setting PRIM_MEDIA_CURRENT_URL or PRIM_MEDIA_HOME_URL with llSetPrimMediaParams() or llSetLinkMedia(), "https://" now gets inserted at the beginning of the string if it wasn't already present.


On 10/4/2018 at 7:35 AM, Love Zhaoying said:

2) Storing "compressed" data in a script (hard-coded) and "decompressing" it when needed, to be able to store more data (no notecard needed).

Any thoughts?

in this case I would go with a hard-coded dictionary tuned for the app, and decode strings of indexes, with an ESC value >= the dictionary's list length for literals not found in the dictionary
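A hedged sketch of that decoder (untested; the dictionary entries and the exact ESC convention are assumptions about one possible layout). Indexes below the dictionary length look up the dictionary; ESC means the next list element is a literal string:

```lsl
list DICT = ["the ", "and ", "ing ", "tion "]; // app-tuned entries
integer ESC = 4; // any value >= llGetListLength(DICT)

string decode(list tokens)
{
    string out;
    integer i;
    integer n = llGetListLength(tokens);
    for (i = 0; i < n; ++i)
    {
        integer t = llList2Integer(tokens, i);
        if (t == ESC)
            out += llList2String(tokens, ++i); // literal follows ESC
        else
            out += llList2String(DICT, t);     // dictionary lookup
    }
    return out;
}
```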


8 minutes ago, ellestones said:

in this case I would go with a hard-coded dictionary tuned for the app, and decode strings of indexes, with an ESC value >= the dictionary's list length for literals not found in the dictionary

...and the library script only needs the decoder function, so it will have plenty of memory left for the data.


yes

a simple encoder and decoder for the indexes is something like:

integer index = someint;
string encode = llBase64ToString(llIntegerToBase64(index));
integer decode = llBase64ToInteger(llStringToBase64(encode));
if (decode != index) error;

 


 Share
