
Discussion - Would increasing llGetNotecardLine()'s return byte maximum cause issues with existing content?


Lucia Nightfire



I was present at the user group meeting when that topic came up and recall that Rider Linden seemed to respond positively to it. At first glance, he didn't think it would break anything (though he may have been speaking from a server code perspective rather than a user content perspective). He mentioned that they use "z-strings" (array of chars terminated by a 0 byte) as opposed to "l-strings" (which have the length encoded at the start of the string).

As far as user created content goes, I guess it's always a possibility. But I think in this case it might be rare - or at least easier to correct. I imagine most existing content that depends on fetching more than 255 bytes at a time would be structured to read multiple lines of up to 255 bytes each. Even if that content isn't programmed defensively to account for potentially larger lines, the target notecard could be edited (or replaced) to adhere to the old limit/format. And if it can't be replaced because it's referenced by UUID or the object is no-edit, then it's likely already following the old format anyway.
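For reference, a minimal sketch of that multi-line reading pattern (the notecard name "config" and the per-line handling are placeholders; today each dataserver result is capped at 255 bytes, so the script just keeps requesting the next line until EOF):

string  gCard = "config";   // placeholder notecard name
integer gLine;
key     gQuery;

default
{
    state_entry()
    {
        gQuery = llGetNotecardLine(gCard, gLine);   // request line 0
    }

    dataserver(key query_id, string data)
    {
        if (query_id != gQuery) return;
        if (data == EOF) return;                    // no more lines

        // today 'data' is at most 255 bytes; longer source lines arrive truncated
        llOwnerSay("line " + (string)gLine + ": " + data);

        gQuery = llGetNotecardLine(gCard, ++gLine); // request the next line
    }
}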

Edited by Fenix Eldritch
typos

Personally, I'm in favor of updating the old function rather than adding a new one. The only way I could see it breaking user content is if the returned long string were to overflow script memory, or if someone was (ab)using the 255-byte limit to embed comments into a setup notecard (I've not seen notecards set up like that).

However, if a new function were to provide a range limit like:

llGetNotecardLineRange(string NCname, integer line, integer start, integer end); // fetch line of notecard from start'th character to end'th.

that would of course merit a new function.
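To make the proposal concrete, a hedged fragment of how it might be called (this will not compile today; llGetNotecardLineRange is only a suggestion, and paging through one long line in chunks is an assumed use, not a defined behaviour):

// hypothetical only: fetch characters 0-254 of line 0, then 255-509 on the next request
key query = llGetNotecardLineRange("config", 0, 0, 254);
// ... and later in dataserver: llGetNotecardLineRange("config", 0, 255, 509);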

  • Like 2

On the face of it, returning longer strings seems like a reasonable idea.  I started moving away from using notecards when KVP became available, though, so this is not likely to make much difference for me personally. However, that string limit is there for other functions too. I'm not sure whether you can change the limit for llGetNotecardLine without also changing it for those functions.  I don't know what the implications may be for efficient data handling in dataserver events either, so I can't comment on that.

Edited by Rolig Loon
  • Like 3

I can't think of any breakage that would result from being able to read longer lines than before, as long as the scripts and the notecards stay the same.*
* Unless, as @Quistess Alpha pointed out, the notecard already contained lines longer than 255 bytes and relied on not being able to read more than that... which I think must be very, very niche.

I would vote yes for it.

And while we're at it, I wish for being able to read multiple lines in one call. (Lines returned as a list, without the ending newline.)

 

8 hours ago, Fenix Eldritch said:

He mentioned that they use "z-strings" (array of chars terminated by a 0 byte)

C-strings 😉


I have a few personal NC reader scripts that interpret a 255 length string as a line that most likely couldn't be read to completion, and mark it as an error in reading the NC.

So, I can imagine there may be a few scripts out there that may be confused by lines >= 255 besides just the potential for script overflow.  I lean toward Quistess's or Wulfie's proposal for a new function call to play it safe.
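For context, the kind of check described above is roughly this (a fragment for inside a dataserver handler; gLine is an assumed line counter, and the test assumes single-byte characters, since llStringLength counts characters rather than bytes):

    if (llStringLength(data) == 255)
    {
        // heuristic: a line that comes back exactly 255 characters long
        // was most likely clipped by the current 255-byte limit
        llOwnerSay("Notecard error: line " + (string)gLine + " looks truncated");
        return;
    }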

  • Like 1
  • Thanks 1

1 hour ago, Phate Shepherd said:

I have a few personal NC reader scripts that interpret a 255 length string as a line that most likely couldn't be read to completion, and mark it as an error in reading the NC.

An important point though, is that increasing the number of characters returned probably wouldn't break a script-notecard combination that was already working;

case A) the notecard does not contain any long lines: no change in functionality.

case B) the notecard contains long lines and the script is set to fail on lines exactly 255 long: a script that would have failed perhaps does not, which is a change in behavior, but perhaps a beneficial one.

case C) the notecard contains long lines and the script is set to fail on lines >= 255 characters: no change in behavior for the old script-notecard combination.

  • Like 1

I have a current project that stores a lot of output data in notecards .. longer lines would be appreciated, especially for localization .. German words are just so very extra long.

2 hours ago, Phate Shepherd said:

I have a few personal NC reader scripts that interpret a 255 length string as a line that most likely couldn't be read to completion, and mark it as an error in reading the NC.

So, I can imagine there may be a few scripts out there that may be confused by lines >= 255 besides just the potential for script overflow.  I lean toward Quistess's or Wulfie's proposal for a new function call to play it safe.

In practical terms, I'm not sure a new function would be required; all existing scripts will have shipped with notecards limited to 255 characters per line.

Creators & scripters updating notecards to >255 char per line can update the scripted warning to the new limit in the process (if they even have one).

End users editing notecards without the ability to modify the script (one that bothers to check line length) just have a script that doesn't like lines over 255 characters and (if it bothers) already complains about them. No loss of function or change in existing usage in either case.

I don't see any case where an existing script suddenly fails to operate.

  • Like 1

2 hours ago, Phate Shepherd said:

I have a few personal NC reader scripts that interpret a 255 length string as a line that most likely couldn't be read to completion, and mark it as an error in reading the NC.

So, I can imagine there may be a few scripts out there that may be confused by lines >= 255 besides just the potential for script overflow.  I lean toward Quistess's or Wulfie's proposal for a new function call to play it safe.

i vote for a new function as well

and if there is to be one, then I prefer Quistess's suggestion

10 hours ago, Quistess Alpha said:

llGetNotecardLineRange(string NCname, integer line, integer start, integer end); // fetch line of notecard from start'th character to end'th.

this way, when we want to read lines > 255, we have to use the new function, as the older llGetNotecardLine will still only work on < 256

llGetNotecardLineRange would also be able to work on a string of any length

and I would like negative indexing as well please: llGetNotecardLineRange(somestring, somelinenum, 0, -1);
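For what it's worth, those start/end and negative-index semantics would mirror how llGetSubString already addresses characters, so the convention is familiar (a trivial fragment; the statements would sit inside an event handler):

string line = "0123456789";
llOwnerSay(llGetSubString(line, 2, 5));   // "2345" : start'th to end'th character, inclusive
llOwnerSay(llGetSubString(line, 0, -1));  // "0123456789" : negative index counts from the end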

  • Like 2

2 hours ago, Coffee Pancake said:

You know what would be really sexy .... llGetNotecardLineRange without a data server.

What .. I can dream!!

I've brought up dedicated local memory and prim memory aspects for synchronous R/W. LL is scared to death of the notion. (or at least Oz Linden was)

@Rider Linden Maybe you can also include this feature request if this one here gets any priority. 😉

Edited by Lucia Nightfire

4 hours ago, Phate Shepherd said:

I have a few personal NC reader scripts that interpret a 255 length string as a line that most likely couldn't be read to completion

In applications I've made that try to detect whether a return might have been truncated, I required a limit of 251 bytes.

That way there is no guessing whether a 1, 2, 3 or 4 byte character got clipped or not.
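A sketch of how that 251-byte cap can be checked in LSL, assuming the usual base64 trick for counting a string's UTF-8 bytes (llStringToBase64 encodes the string's UTF-8 bytes, and every 4 base64 characters, padding excluded, represent 3 bytes):

// UTF-8 byte length of a string, via its base64 encoding
integer utf8Bytes(string s)
{
    string b64 = llStringToBase64(s);
    integer len = llStringLength(b64);
    while (len > 0 && llGetSubString(b64, len - 1, len - 1) == "=")
        --len;                      // drop '=' padding characters
    return (len * 3) / 4;
}

// in the dataserver handler: with legitimate lines capped at 251 bytes,
// any return of 252-255 bytes can only be a clipped longer line,
// no matter whether a 1-, 2-, 3- or 4-byte character straddled the cut
if (utf8Bytes(data) > 251)
    llOwnerSay("Notecard error: line looks truncated");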


12 hours ago, Quistess Alpha said:

The only way I could see it breaking user-content, is if the returned long-string were to overflow script-memory,

You might be able to crash things which allow notecards to be dropped on them.

This is why I want stack space and heap/code space counted separately in scripts. There are some events and calls which can return unexpectedly large amounts of data.

But LL is stuck. The sim code is still 32-bit, so they have a hard memory limit at 4GB/process. This limits growth in various areas.

  • Like 2

I'll vote for increasing the line length that can be read and praying existing content doesn't break.

The only instance I can see of it breaking stuff is very far-fetched: somebody is reading a notecard line, assuming that they're only getting the first 255 characters, and therefore might get more to process than they were expecting.

(As an aside, I think we're being a little too tremulous about breaking existing content; take two actual examples.)

Python 2 to Python 3: the developers decided they were going to break older Python 2 scripts regardless.

Processing 2 to Processing 3: the developers not only decided they were going to break older Processing 2 scripts and libraries, but justified part of it with "some people have been doing things they shouldn't have been, so we're going to put a stop to it" (paraphrased).

The gist of it is: it's OK to break a small proportion of older content if it's (a) not going to affect too many people and (b) the improvements justify the pain.

 

ETA (colour me tongue-in-cheek for that last part :)

Edited by Profaitchikenz Haiku
  • Like 1

Just a thought: if the scripting language could accept optional arguments, why not add an optional argument that, when present, allows unlimited line length during reads but, if not present, clamps the line length to a 255-byte maximum?

llGetNotecardLine(name, line) works as normal

llGetNotecardLine(name, line, optional integer toMax) returns up to that maximum length, so specifying 1024 reads a full line (up to 1024 bytes), while 255 redundantly specifies existing behaviour

Or even make the optional argument true/false; if not present it would default to false, in the way uninitialised variables default to 0?

 

Edited by Profaitchikenz Haiku

So here's my two cents on why we should just update the function, rather than create a whole new one.

llGetNotecardLine creates a query that attempts to get a line from a notecard. If that line is longer than 255 bytes, it is truncated to the first 255 bytes. Assuming that scripters use this function as is, all notecards must already have been pre-formatted to keep lines at or under 255 bytes. Otherwise, anything longer than 255 bytes is actually broken content already, without any useful error or information to indicate why.

If a creator is creating a product that can be configured by an end user, there's more of a chance the end user will not understand this limit. Allowing llGetNotecardLine to expand to 1024 bytes or more will actually FIX existing products in that regard.

Phate mentioned that they have personal reader scripts that check for exactly 255 bytes. Sure, there could be a product or two like that in circulation, and if so, anything longer than 255 bytes would not error out with such an exact check. It is a fringe case within a fringe case.

I am for creating other functions that ADD to the functionality of LSL (because there are a couple of good suggestions in this thread), but I don't foresee a small change like this warranting a completely new function.

  • Like 2

20 hours ago, Quistess Alpha said:

An important point though, is that increasing the number of characters returned probably wouldn't break a script-notecard combination that was already working;

case A) the notecard does not contain any long lines: no change in functionality.

case B) the notecard contains long lines and the script is set to fail on lines exactly 255 long: a script that would have failed perhaps does not, which is a change in behavior, but perhaps a beneficial one.

case C) the notecard contains long lines and the script is set to fail on lines >= 255 characters: no change in behavior for the old script-notecard combination.

The biggest issue I see with using the existing function: any script that relies on NC line lengths <256 chars could stack-heap with longer lines. The last thing you want is a script that is running close to the memory wall suddenly reading a longer-than-expected line and POOF... dead.

Honestly, any function that has the potential to read 64k in one shot without user-controlled constraints is fraught with potential problems. NC's are frequently used for end-user configuration, and what might have once been read as a truncated paragraph could now be a super long line read by a script that simply doesn't have the memory left to deal with it.
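A defensive sketch of that memory concern (the 2x character-count estimate for Mono's UTF-16 strings and the 4096-byte safety margin are assumptions, not canon; llGetFreeMemory is only a snapshot):

// inside the dataserver handler, before concatenating or parsing the line
if (llGetFreeMemory() < llStringLength(data) * 2 + 4096)
{
    llOwnerSay("Not enough free memory to process this line; skipping it.");
    return;
}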

Edited by Phate Shepherd

16 minutes ago, Phate Shepherd said:

NC's are frequently used for end-user configuration, and what might have once been read as a truncated paragraph could now be a super long line read by a script that simply doesn't have the memory left to deal with it.

That's a good point.  At the same time, though, the worst that happens is that the script locks up and the user has to reset it.  After a couple of times doing that, a smart user will finally say, "Gosh, I don't think I can have the script read my novel after all."  Memory is the ultimate limitation, but a reset is a pretty good way out when the script hits the wall.

  • Like 1

16 minutes ago, Rolig Loon said:

That's a good point.  At the same time, though, the worst that happens is that the script locks up and the user has to reset it.  After a couple of times doing that, a smart user will finally say, "Gosh, I don't think I can have the script read my novel after all."  Memory is the ultimate limitation, but a reset is a pretty good way out when the script hits the wall.

Very true. I do have to think about scripts that reset themselves and now crash without any new user intervention because of the change.

I think it comes down to this:

Does extending the functionality of the existing function provide any benefit to existing scripts in the wild? If not, and a recompile will be needed to take advantage of the new functionality, then there is no penalty and ultimate compatibility in using a new function call.

Edited by Phate Shepherd
  • Like 1

A 64k line length would be cool for loading data in one read, but not that useful, since it would instantly crash every script and would definitely require setting a maximum read length.

With a 1024-byte line length I see nearly no problems for existing scripts.
Scripts that need to keep an eye on memory have to do it after every line anyway, but a tight calculation may trip it up.
Scripts that don't truncate lines themselves but expect llGetNotecardLine to do it will fail. Anyone ever doing that? Of course - at least one will - it's SL.

Truncation is a documented feature though - other changes that happened to LSL were to never-documented features, or were just expansions - so LL may be pickier about this change.
I vote for just expanding the existing function, but for 100% safety a new function is needed.

 

  • Like 1

