Everything posted by Bleuhazenfurfle

  1. That response suggests a degree of misunderstanding, and so also, still not the point. So I'll try to explain one more time, then let the matter drop. I tested that (and the rest of the functions I emulate) by running them through a bunch of unit tests in my emulator, which check that the result matches character-for-character against the output of actual real LSL script (with only a few exceptions — such as doing higher-precision floating point math). That piece of actual code does in fact answer the questions you were asking. Again, the purpose (as I expressed in a portion you chose not to quote) was in large part to answer questions you were asking. That line I posted wasn't a schema or anything of the sort. At the risk of repeating myself, that was the actual (and entire) piece of TypeScript code implementing the emulation of the llDetectedKey function, taken (almost — I simplified "info.currentEvent" down to just "event") directly from within my Dooberwatsit™ (hasn't yet been officially named). You were asking questions about the function, and my response encapsulated the answers in a form which I assumed you, specifically as a fellow parser/compiler/emulator/whatsit creator, would get. That line of actual code is quite comprehensively tested (one half of the actual point of the first fragment you quoted above) to match the actual behaviour in LSL in every way I could think of to test — but more importantly, it rather elegantly (I think, from a parser/compiler/emulator/whatsit creator perspective) expresses a couple of very pervasive elements of LSL that repeat all through the language. On a more personal note, I have to say you (and a particular other also, sadly) seem to be responding to my posts somewhat defensively. Should that indeed be the case, I wish to say there's really no need. One of my teachers back in school had a saying I quite like written permanently across the top of their board: "Those who can, do. Those who can't, teach. Those who can't teach, teach teachers." At this point in my life — as was the case with that teacher, who had previously worked in the industry he was now teaching — I have health issues that have moved me part way from the first group towards the second (and, disturbingly, seem to hover much too frequently around the third). In your case specifically, I do also quite appreciate a fellow parser/compiler/emulator/whatsit creator, and have taken an interest in watching your progress through the glimpses offered — I (like many on this forum) remember asking many of those same questions myself; for me it was a bit over a decade ago now (where relevant), when I had written a "fairly simple" LSL pre-processor to implement some higher-level functionality I was spending a LOT of time writing the rather tedious boilerplate for, over and over and over and over… (My current Dooberwatsit is the third generation of that project.) Further, you appear to be doing something that I'd considered in the past myself, but never pursued because it was too limiting/distracting in specific directions I wanted to go — so what you're doing is likewise just not my thing. But it's close enough that I'm quite certain my experience is still very much relevant, and so I am nonetheless eager to lend what insights I think of that you may find helpful, to hopefully see you get where I chose not to go — perhaps with a few fewer of the mistakes I made along my way.
  2. That looks like the unicode-character-split-across-a-block-boundary fault — it causes the script to die any time it's being unpacked. It's a known and very annoying error (I know one person who claims to have Jira'd it, but I can't remember if I've seen a Jira for it), and it's bitten quite a few people I know of personally since we identified it (probably many more before we knew what to look for). LL seem to have a few issues with deserialising unicode strings. Good luck with that one. Can't say why it only just started happening for you. Could be something else changed, causing that string to move in memory relative to those block boundaries. If I were you, I'd check that test script on a couple of sims (aim for different versions), and if it holds reliably, shove that test script into another Jira to let LL know it's still a thing, and yet another person has been bitten by it.
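For the curious, the underlying failure mode can be sketched outside SL entirely. This is not LL's serialisation code, just a demonstration in TypeScript (using the standard TextEncoder/TextDecoder APIs) of what happens when a multi-byte UTF-8 character straddles a fixed-size block boundary and each block is decoded in isolation:

```typescript
// Simulate a multi-byte character being split across two fixed-size blocks.
const encoder = new TextEncoder();
const bytes = encoder.encode("ab✓"); // "✓" is 3 bytes in UTF-8: e2 9c 93
const blockA = bytes.slice(0, 3);    // "ab" plus only the FIRST byte of "✓"
const blockB = bytes.slice(3);       // the remaining two bytes of "✓"

// Naive per-block decoding corrupts the split character: each half becomes
// one or more U+FFFD replacement characters.
const naive = new TextDecoder().decode(blockA) + new TextDecoder().decode(blockB);
// naive !== "ab✓"

// A streaming decode carries the partial byte sequence across the boundary:
const dec = new TextDecoder();
const streamed = dec.decode(blockA, { stream: true }) + dec.decode(blockB);
// streamed === "ab✓"
```

The failure only shows up when a multi-byte character happens to land on a boundary, which would explain why it appears and disappears as the surrounding data shifts.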
  3. That wasn't really the point of sharing it — that one line answers pretty much all the questions you had about it. Am I over-thinking it? (Yes.) What happens if it's a non-detection event? (NULL_KEY) What happens if you attempt to query the key of the 10th touch when there was only 1? (NULL_KEY) Does negative indexing work? (NULL_KEY — admittedly, this one requires some JavaScript knowledge, because JS sometimes does negative indexing.) Is there a runtime error? (NULL_KEY) Am I the only person who's investigated LSL to an insane and somewhat neurotic level of detail, so as to write themselves some form of LSL parsing and/or emulation? (No.) And the others are implemented similarly, of course. For all LSL's flaws, it's at least more consistent than JS (and has none of that === operator nonsense).
  4. It's actually easier than that: llDetectedKey = (n : IntegerNode) => context.event.detected?.[n.value]?.key || KeyNode.nullValue; That's my entire implementation of llDetectedKey.
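For readers without the surrounding project, here's a self-contained sketch of that same pattern. The NULL_KEY constant and the event shape below are simplifications (the real code uses IntegerNode/KeyNode wrapper types); the optional-chaining trick is the point:

```typescript
// Minimal, self-contained sketch of the emulation pattern.
const NULL_KEY = "00000000-0000-0000-0000-000000000000";

interface Detected { key: string }
interface EventContext { detected?: Detected[] }

// Optional chaining turns every out-of-range case into `undefined`,
// which `||` then collapses to NULL_KEY, matching LSL's behaviour.
const llDetectedKey = (event: EventContext, n: number): string =>
  event.detected?.[n]?.key || NULL_KEY;

const touch: EventContext = { detected: [{ key: "aaaa-1111" }] };
llDetectedKey(touch, 0);  // "aaaa-1111"
llDetectedKey(touch, 9);  // NULL_KEY: only one touch detected
llDetectedKey(touch, -1); // NULL_KEY: arr[-1] is undefined in JS
llDetectedKey({}, 0);     // NULL_KEY: non-detection event
```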
  5. In compiled code, the branchless version absolutely wins out due to pipelining and cache coherency benefits — in an expression that simple, the variable is read into the processor just once, and you get the math ops basically overlapping each other, with a write of the final result at the end. Though LSL shows no sign of being optimised or even just JIT'd, so with VM overheads, I doubt any of that matters. I wouldn't be at all surprised if the fewer variable references of the branch version win out. Never cared enough to actually try and measure it, though. The difference will be way down in the noise… Write it so you can still read it in six months' time.
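To make the comparison concrete, here's a sketch (my example, not from the post) of the same computation written both ways, clamping a value to a byte range:

```typescript
// "Branch" version: conditional jumps, but fewer reads of x in an interpreter.
const clampBranch = (x: number): number =>
  x < 0 ? 0 : x > 255 ? 255 : x;

// "Branchless" version: min/max typically compile to conditional-move style
// ops with no jumps, which helps pipelining in compiled code.
const clampBranchless = (x: number): number =>
  Math.min(Math.max(x, 0), 255);

// Both produce identical results; under a VM the performance difference
// is almost certainly lost in interpreter overhead.
```

Either way, as the post says: readability should win when the difference is down in the noise.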
  6. What I wanted right from the start of LSD (before it was even out for people to try) was a function whereby you give it a key, and it gives you the next one back (or an empty string). That simple. (Well, plus a variant that applies BUG-233078… Can also do reverse versions…) You won't miss any, it won't foul up on deletions or additions, and it's absolutely trivial to use in a loop. Offset indexes are just simply utterly broken for a shared-access database. You can wave it off as "only editing LSD in one script", but that's a huge part of LSD right there. And we don't HAVE flow control, either. There are ways to fake database locking, but they're kludgy and fragile, and remarkably hard to get right — plus all the scripts have to agree to abide by them. And your full-index approach requires the ENTIRE LSD store be effectively locked even during simple searching and reading. The present method, where you find a bunch of keys, lets you index within that set to your heart's content using a simple llList2String — the problem being that the list can quite readily exceed your script's total memory. But indexing off that snapshot list, or iteration without a list, both protect you from everything going to hell in a hand-basket just because another script decides to add or remove a few items. Which is where BUG-233819 comes in (either with or without the part referring to Henri, if those issues could be figured out).
Or, another option I put forward AGES ago (and several times since), of the same sort of thing, but llDetected style… That has the added benefit that they can keep the snapshot list as a simple array of pointers to the existing key string data (or maybe even the entire items themselves, depending on how it's implemented), and only convert the key and/OR value names as needed (if you never actually care about the keys, they never need to even convert them for the script) — though hopefully without that exceedingly stupid 16-entry limit… For offset indexes to be useful, you need to be able to either lock the store from anyone else accessing it (won't happen as a built-in thing, because that would add blocking semantics), or snapshot the set of keys currently in effect (which means potentially allocating large arrays). Iteration doesn't have those problems — its problem is a lack of random access, but that's not generally an actual problem for most uses. An option for locking could be to request a lock and have it come through as an event, during which all other scripts get an error response from LSD access — but that would break existing scripts (of which there are already a lot), so also not going to happen. Oh, I did at one point also offer a solution for snapshotting without creating large arrays, too… Implement versioning in the store. You just version the store when needed any time an update is performed, and collapse old versions as their associated events conclude. A given version can also be shared by any number of scripts, if no edits have been performed in the meantime. In the current setting, indexing the LSD store is just plain broken. At least it's presently only an issue for searches. Could you imagine searching for an item, getting its index, and then a frame delay happens, and when you go to update or delete that item, it's the totally wrong item…?!?
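To illustrate why the cursor style survives concurrent edits, here's a toy model in TypeScript. llLinksetDataNextKey is the hypothetical function being asked for (it does not exist in LSL), modelled here over a plain in-memory map sorted by key:

```typescript
// Toy stand-in for the LSD store.
const store = new Map<string, string>();

// The requested primitive: return the first key strictly greater than
// `after` (pass "" to start), or "" when iteration is done. Because the
// cursor is a key, not an integer offset, concurrent adds/deletes can
// never make it point at the "wrong" item.
function llLinksetDataNextKey(after: string): string {
  const keys = [...store.keys()].sort();
  for (const k of keys) if (k > after) return k;
  return "";
}

store.set("alpha", "1");
store.set("beta", "2");
store.set("gamma", "3");

const seen: string[] = [];
for (let k = llLinksetDataNextKey(""); k !== ""; k = llLinksetDataNextKey(k)) {
  seen.push(k);
  // If another script deleted "beta" mid-loop, the cursor would simply
  // move past it; it can never be silently retargeted the way an
  // integer offset can.
}
// seen: ["alpha", "beta", "gamma"]
```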
  7. FIFO ordering is what you want for forming a cache — and it's usually implemented using a timestamp at the front of the key. (Also, BUG-233965.) Though, I don't get most of what you're saying there. I imagine it's along the lines of what I've said in a couple of Jira comments, and ranted about in Scripts several times, among other things: the offset integer is just plain broken in the presence of other scripts… If another script adds items, you'll get duplicates, and if another script deletes items, you'll miss some; and while you can track linkset_data to try to adjust your offset as you go, that still suffers from races. Alpha-sorting the keys is actually better for this, IF we could use a key as the index rather than an integer offset. DBs regularly used the last key found as their index (before they added cursors), for a reason. (You'll still miss items added at or before your current cursor, but it's completely stable otherwise.)
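The timestamp-prefix trick mentioned above can be sketched like so (illustrative only; the padding width is an arbitrary choice for millisecond timestamps):

```typescript
// Zero-pad the timestamp to a fixed width so that alphabetical key order
// equals insertion order ("9" must not sort after "10").
function fifoKey(timestampMs: number, name: string): string {
  return `${String(timestampMs).padStart(13, "0")}:${name}`;
}

const keys = [
  fifoKey(1700000000123, "first"),
  fifoKey(1700000005000, "second"),
  fifoKey(900, "ancient"),
];
keys.sort(); // plain alphabetical sort now matches time order
// keys[0] is the oldest entry, so a FIFO cache evicts from the front
```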
  8. It's probably better to ask if any added feature didn't have unanticipated consequences… After all, we're using it.
  9. I realise I'd missed explaining something there. What other scripts see. The handle is great within the script doing the deletion, but not so much use to any of the others. However, fitting the regex in there without a new event is going to be ugly (there are ways, but none of them are very nice). Another alternative, of course, is to just send the regex alone, without the counts at all — though doing it that way, I think I'd prefer just sending through a free string parameter. Ewww! icky. evil. nope. It wouldn't be toooo bad if there was such a flag that only inhibited that specific script from sending linkset_data events… It would cover (badly) a few other cases of flooding other scripts with LSD nonsense, too… But it still doesn't address the problem of what if they DO need to know? Paging the delete into smaller chunks at least gives them the option — as long as they're not running behind.
  10. And this is more of why I was asking for an iterative approach way back early on, or at the very least, handling it more along the lines of the llDetected functions. (Why they insist on returning lists all the time, given a script's woefully limited memory…) I get that they don't want scripts tying up simulator memory, but there are better ways, and this way hobbles their ability to do better, as well. Two alternatives come to mind: The obvious alternative, of course, is to "page" the deletion — the same way as everything else (sigh). I suspect the typical approach would be to delete an initial batch (as per your danger threshold), and then delete one more for every event that rolls in, possibly with a timer running in case some went missing at some point. (Being able to specify a negative count to remove them from the end instead of the start could also be useful … to someone.) However, since you can already do that right now by doing a search and then deleting the items you found… I'm all for llLinksetDataDeleteFound doing a "bulk deletion", with a LINKSET_DATA_BULK_DELETE action that just reports the numbers — and perhaps using a handle (I'd think an integer, llListen style, rather than a key) to link the two. There is existing precedent for returning several values as CSV, and returning a handle gets around having to pass the regex itself in the event (which won't work with CSV — the next option would be JSON, but that's horrible, and a handle should do fine). Such a bulk delete event could additionally be combined with the llDetected method to allow the script to query the key names (and maybe even the removed values) if needed — saving the greedy conversion of key name strings, if nothing else. That could also be bolted on later, too.
The combination of paged deletion for when you care (be it manual as it is now, or by paginating the new function), and bulk delete for when you don't (also, a page size of 0, if it goes that way), seems to provide the best coverage to me. (Pssst, LL: not giving us these options actually doesn't save you very much, it just hobbles your ability to do it better, and forces us to burn more of that precious memory and script time working around the limitations.)
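The "initial batch, then one more per event" paging idea can be sketched against a mock store like this (the event stream is simulated with a plain while loop, and the threshold value is arbitrary):

```typescript
// Mock LSD store with 25 matching entries.
const store = new Map<string, string>();
for (let i = 0; i < 25; i++) store.set(`item:${i}`, String(i));

const DANGER_THRESHOLD = 10; // initial batch size before yielding

// Delete up to `limit` keys matching `pattern`; return how many went.
function deleteMatching(pattern: RegExp, limit: number): number {
  let removed = 0;
  for (const key of [...store.keys()]) {
    if (removed >= limit) break;
    if (pattern.test(key)) { store.delete(key); removed++; }
  }
  return removed;
}

// Initial batch...
let total = deleteMatching(/^item:/, DANGER_THRESHOLD);
// ...then one more deletion per linkset_data event (simulated here),
// giving other scripts the chance to observe each change as it happens.
while (deleteMatching(/^item:/, 1) > 0) total++;
// total === 25 and the store holds no more "item:" keys
```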
  11. Do you mean https://github.com/Sei-Lisa/kwdb/blob/master/database/kwdb.xml perchance…?
  12. Oh, I was also just today reminded of something I recognised a decade or two back, but have never myself put into words… First you do a job, and in the process, figure out what needs to be done. Then you do it again (but better), and in the process, figure out how it should be done. Finally you do it again, the way you should have done it originally. Version 2 was better than Version 1. So I should have it done right this time. Except that changing languages reset me back to step 1, so, Version 5 should be good. 🤪
  13. Oh, no… Tokenizing generally just tries to recognise the longest symbol first, and doesn't do second chances… +++ tokenises as ++ and +, for example, and doesn't try to switch it around if it doesn't make sense. I don't have to do any better than LSL will, so that's not the issue. Well, not until I add customised language extensions, anyhow — then I might, but the LSL parser has to parse like LSL would. (Code gen also has to watch that it doesn't generate token sequences that will be mis-parsed by the compiler — I think I had V2 inject a $ sign when needed, just coz it was more funner than a space.) If you expect good code, and you're just throwing an error and stopping when you hit something bad — as I did with V1/2, and suspect you'll be doing in yours — then it's not such an issue, and you can get away without decent backtracking, or anything else overly fancy. But if you want to be able to handle incorrect code, and have a hope of keeping going, proper backtracking and other tricks become far more important… Some of the proper parsers I've looked at can do things like recognising that a type name will be followed by an identifier, and then either "(" for a function, or "=" or ";" for a variable, and will therefore defer deciding which type of node it is until then. Mine doesn't do that, to keep it flexible when I add extensions later — but I could have done it the same way they do, if I'd done the parser properly in the first place (or was just using one of them). Even then, however… if you see the three-token sequence "type identifier identifier", it's probably the missing "=" of a variable definition with initialiser. But if you see "type identifier type", is that the missing ";" of a variable, or the missing "(" of a function? You have to be able to keep looking even further ahead to have a hope of figuring it out, and then come all the way back again.
A simple direct parser like I used (because I was too busy trying to get a handle on TS's weird gaps — a couple of which it's only recently filled in) can't do that. Some better parsers will look further ahead, and then try to back-fill the portions they're unsure of once the parser finds tokens that do make syntactic sense. I know how to do that — though I've never done it in the context of a code parser before… I just… didn't. And I regret it. But I don't feel like ripping up such core code and reimplementing it. So if there's ever a Version 4, I think I might just start with one of those better parsers — which have had many more people working on them, likely at least a few of whom are simply more experienced in that specific type of thing. That said, my current scheme seems to be … adequate … but there are a few things it should be able to handle far better than it does. Seriously, graceful handling of incorrect code is significantly harder than parsing correct code and just bailing if it hits an error — the amount of weirdness that arises just from trying to handle a missing ";"…!
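The "longest symbol first, no second chances" behaviour (often called maximal munch) can be shown with a toy tokenizer; this is an illustration, not the post's actual code:

```typescript
// Longest operators must be listed first so they are tried first.
const OPERATORS = ["++", "--", "+", "-", "="];

function tokenize(src: string): string[] {
  const tokens: string[] = [];
  let i = 0;
  outer: while (i < src.length) {
    for (const op of OPERATORS) {
      if (src.startsWith(op, i)) {
        tokens.push(op);
        i += op.length;
        continue outer; // greedily took the longest match; never reconsider
      }
    }
    i++; // skip anything unrecognised in this tiny sketch
  }
  return tokens;
}

tokenize("+++"); // ["++", "+"], and it never backtracks to try "+", "++"
```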
  14. Yeah… But… Whichever Linden was talking about it, was also talking about webby stuff with LSL… So I'm hoping there's actually a non-zero chance. And the function's pretty pointless if that's NOT the intention. (Plus there'll be a whole bunch of people tying them down and tickling them till they relent.) Also, if I recall the OP, this is in the context of MoaP…
  15. Should have known that was coming… Nah. Though version 3 is planned to be… Eventually. I stopped doing most heavy coding over the past few years due to health issues. I started this thing when my health had plateaued for long enough that I was starting to get used to it enough to do bits and pieces again… So I tend to work on this thing for a few weeks, then put it aside for a couple of months until I feel up to pulling it out again. Soo, not putting an ETA on it. But… it is tantalisingly close to my first MVP milestone…! So maybe … possibly in time for SL30… Straightforward emulators/compilers are soooo much easier than this monstrosity, though… And because I chose to use it as my inspiration for finally learning TypeScript (which has some really very frustrating gaps here and there, much like the JavaScript that underpins it), I made a few missteps early on, like not implementing proper backtracking in the parser… Still, the emulation side is passing a whole bunch of test cases covering most of the LSL math, string, and list functions, as well as a couple of LSL functions I have in a little "library" I've thrown together (some mine, some stuff from the wiki I like but can never find when I want it) to test the emulation of actual LSL code… Getting it to pull in Sei's list of LSL functions was also a major step towards usability, filling in the (HUGE) gaps where I haven't implemented emulation yet with syntax and tooltip info at least. (Currently emulates a grand total of 37 of LSL's built-in functions!) The remaining "minimum viable product" milestones are: no hard errors during regular scripting, getting my "library" to pass the remaining tests (and writing about 10,000 more, considering the 500-ish it's already got), and fixing a current variable reference counting issue with respect to loops (variable assignments at the end of loops get flagged as "unused").
Then I'll see about figuring out how to go about publishing it… It's already shaping up rather well — being able to mouse over a variable or function call in your script, and see its computed value, is pretty darned awesome…! (Though also pretty fragile; it rather readily throws up its hands and says, "dunno".) Yeah, my Version 2 was a pretty complete "external" LSL interpreter — at least, for the scripting I was doing (didn't have any character stuff, for example, and I didn't even bother trying to simulate physics — and don't intend to in V3, either; it would be easier to just graft on an OpenSim instance or something). And it had been seen by a couple of people in scripting groups while I was fishing for real-life test code to throw at it — so it wasn't complete take-my-word-for-it vapour-ware. Wrote the initial implementation of that over a weekend around 2012–2014 sometime, to track down an annoying bug I was having somewhere amidst a dozen scripts… It let me set a conditional breakpoint in the simulator code where the wonky value would show up (the variable assignment code, if I recall correctly), letting me pinpoint where it first appeared, and from there, fix the problem. After the success of the initial implementation (which was pure emulation), I grafted on minified script generation, then optimisation, and gradually ratcheting up the optimisation over the following year, got it a touch better than Sei's optimiser (a worthy milestone!), and was using it actively through into 2017, I think… And then I broke it, trying to shoehorn in a rather invasive new feature that was needed to sort out some issues it had. (That was one of the first things I implemented in V3.) Couldn't imagine doing it IN LSL, though… Interpreting BASIC in LSL wouldn't be hard — after all, BASIC was designed for machines with less memory than an LSL script gets. But LSL or better, in LSL… that just sounds painful.
My emulator thing is somewhere about 5k lines of TypeScript at last count, and Version 2 was about 8kLOC of Python (and that wasn't counting lines without actual code — no blank, comment, or brace-only lines)… Granted, an awful lot of that is just reproducing LSL's many weirdnesses in another language… Small example of where V3 is at right now: it successfully shows the result of evaluating a real LSL function, an incorrect warning that the Ai variable is "assigned but not used", two errors for trying to put a list into a list on that bottom line (though the emulator still handles it, and the assertEq built-in testing function uses it to verify that the value returned from the function matches the value on the right, which was produced by that exact function in a real LSL script in SL), and it's not showing the refs count for functions (the number of places from which the function is called). The "print" function is also emulated using JS's console.log, which can be kinda handy, since llOwnerSay doesn't actually do anything yet (have to figure out how to open a "chat" window)…
  16. I'd love to see your references for that. I've written an LSL emulator (or three *cough*) — which is why I know this stuff at the depth I do. And I've never actually needed to check at runtime… (More specifically, the checks are there, but they're hard errors that indicate a fault in my parser, and stop happening once it's fixed.) The syntax and type systems don't allow it, so it can't happen, and thus doesn't actually need to be checked late (ie. at runtime). There is pretty much no "late binding" going on in LSL, it's just not that complex a language. (Even my IDE, whenever I try, it slathers them in red squiggles and error messages quicker than I can press Save.) I guess at a stretch, you could consider llListFindList to be "late binding", ditto for llList2xxx, and the likes… But that's just basic object-oriented stuff, something along the lines of (in JS): llList2String = (myList, index) => myList[index].toString();
  17. I seem to recall there being talk of being able to serve notecards as HTTP responses… If that happens, you'll presumably be able to put your HTML, CSS, and JS into notecards, much for this purpose.
  18. It's not really a case of "late binding"… You're making it more complicated than it needs to be. A string or a list is dynamically sized by its very nature; the others are always the same fixed size. The fixed-size ones get passed by value. The dynamically sized ones get a reference (a managed pointer, probably effectively a "smart pointer" of sorts) to them passed by value instead of the thing itself. Lists make things a little trickier… An LSL list is a dynamically sized array of object pointers (which we can call references, given they are "managed", in that you can't actually access them), wrapped by a thing containing at least a pointer to that array and its length. That wrapper thing is probably passed by value, it being of known fixed size and not terribly large (and it would otherwise need to be wrapped in turn, with yet another pointer being passed by value), but it itself forms a reference (or "smart pointer") to the thing we think of as "the list" (actually a dynamic array). In the case of a string, it's basically exactly the same, except it's an array of characters instead of an array of object pointers. That wrapping happens in the case of every item in your list also, in order to move it onto the heap (though maybe not for strings, which may already count as a wrapper, but definitely for basic pass-by-value types like integer). That's about as complex as you need to care about in LSL. Trying to be any more precise takes you down the rabbit hole… Pointers can be "smart pointers" (basically a managed pointer, with one or two other pieces of information such as length), passing a dynamic object "ByVal" may actually cause the object to get copied on the heap, and then the copy is just passed by reference instead of the original reference, and whenever those pointers are "managed" (you don't get to see them) they're called references.
C++ has about a dozen (or more) types of "smart pointer", as well as half a dozen syntactic constructs (I'm not certain I'm exaggerating, either), not to mention the whole compiler type infrastructure, in order to handle every conceivable combination (and in D, it's even worse — or better, depending on your perspective — with its extensive use of "Voldemort types", which is among the things C++ is trying really hard to copy). C# seems to be stack-based (as opposed to C++'s register-based, with overflow going to the stack), and probably has most of those capabilities too, with its ByRef and ByVal being a messy heuristic sugar coating (being the all-consuming wanna-do-everything Microsoft-style language it is). And then "by value" itself really isn't — it is, in reality, "by stack" (esp. C#) or "by register" (esp. C++), and in the case of a compiled language like C++ or D, also "by type" and/or "by optimisation" (i.e. the compiler determines the information is constant, and so doesn't have to pass it at all). And let's not even get into in and out parameters (outs are basically just another kind of ByRef — usually a reference to appropriate uninitialised space), return values being implemented using out parameters (a very common optimisation for non-trivial return value types), and so forth. It's also for these reasons that I tend to avoid any specific language's definitions, and just go with the basic concepts (especially in terms of LSL): references are managed pointers to things, and are used when the thing is too big or dynamic to fuss with copying onto the stack. Trying to describe it in terms of C#'s ByVal or ByRef brings in C# semantics which may not map precisely to LSL's, leading you up the garden path at night, without a torch.
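The wrapper-plus-heap-array model described above can be sketched in TypeScript, using object references to stand in for managed pointers. This is a conceptual model of the semantics being described, not LL's actual implementation:

```typescript
// Every list element is heap-boxed (wrapped), per the description above.
interface Boxed { value: number | string }

// The fixed-size wrapper: a "pointer" to the dynamic array, plus its length.
interface ListRef {
  items: Boxed[];
  length: number;
}

// Passing the wrapper "by value" copies the wrapper, not the array it points to.
function passByValue(l: ListRef): ListRef {
  return { items: l.items, length: l.length }; // shallow copy of the wrapper
}

const original: ListRef = { items: [{ value: 1 }, { value: "two" }], length: 2 };
const copy = passByValue(original);
// copy !== original: two distinct fixed-size wrappers were made,
// but copy.items === original.items: both reference the same heap array,
// which is exactly the "reference passed by value" behaviour described.
```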
  19. Just as another idea: I once used a triangular prim with the top clipped off (because I couldn't be bothered with doing any fancy textures for it anyhow), which gave me a square "knob" and two tracks (the angled sides — which look flat because HUD), and then moved the "knob" by messing with the skew. The shape math, and handling click position on the steeply angled faces, got slightly gnarly… but not too bad, and neither was the result. It also doesn't have the issue of sliding off the knob (except for sideways), and you could still just click the track to set a specific position. All with one prim and no textures, because I've got the graphic artist skills of a turnip. (You could probably handle a little sideways slip too, with a suitably distorted track texture.) I use a few of them for things like the adjustable speed/position joggers on a camera save-and-restore HUD I tossed together, because sometimes you have just the right scenic view (honest!) in a club, but then you wanna have a quick look around the room…
  20. Small(-ish) note: llSetPrimitiveParamsFast is not "faster" — it simply lacks the small built-in delay that comes along with llSetPrimitiveParams. But that delay is there for a reason, and removing it can cause its own problems — most often when you have several of them running in quick succession. The way to look at it is basically this (and I wouldn't be the slightest bit surprised if this is how it's actually implemented internally):

llSetPrimitiveParams(list params) {
    llSetLinkPrimitiveParams(LINK_THIS, params);
}

llSetLinkPrimitiveParams(integer link, list params) {
    llSetLinkPrimitiveParamsFast(link, params);
    llSleep(0.2);
}

llSetLinkPrimitiveParamsFast(integer link, list params) {
    …do all the magical things…
}

As a general rule, try to work the delay into any animation or other stuff you have going; it lets the sim actually send out the updates rather than have them just sit around filling buffers until they overflow. Another time to use llSPPF is when it's pretty much immediately followed by another function that also has a delay — doubling up on the delays can be a pain… That said, it is reasonable to use llSPPF followed by a shorter delay, like just 0.1s or something, also. Just don't reach for llSPPF just because it's "fast"; that causes as many problems as it fixes, and far too many people see the word "fast" and go "oh, that's gotta be better than not-fast, right?!?" (and over the years, I've actually heard several people expressing it very much along those lines), and that's where their thinking stops. And then they go on to tell everyone else to do that too… (Much the same as my other favourite rant about the ++'s.)
Also, I recently spotted another post on this here forum thingo from a bit back (by someone I can't remember, but do remember thinking they seemed like someone who probably actually would know), saying basically the same thing I have been, and I think I've seen a few others also… (It's nice validation for something I felt I was the only person saying, for a long time.) The bottom line: I've fixed people's problems just by getting them to try llSPP instead, and I've seen llSPPF masking other problems people have spent hours trying to fix; one time, they'd come into the group just before I went to bed, and were still trying to fix it when I got up the next morning — and it was solved by switching to llSPP instead of llSPPF. (I think they eventually went back to llSPPF again for some of it, but that little built-in delay worked wonders on multiple fronts.) So, rule of thumb: try to make llSPP your default, and only switch to llSPPF when you actually need to (I personally consider it a minor failure — it helps keep my scripting honest).
  21. I don't believe this to be true. Simple values (integers, floats, I think vectors and rotations too) absolutely are. But the big ones (strings and lists) are passed by reference. (Or rather, a reference is passed by copy — same as in just about every other language.) Where you get bitten — and I think where most people get confused here — is on immutability, and in forgetting that the caller of your function still exists. If you pass a value into a function, and then modify that value within said function, the original value is still held by the caller, with the callee now having its own modified copy. That can appear as pass by value, but it's not. I'm also still not convinced poking the GC is actually effective; in every case I have run into where maybe it possibly might have been, there's also been a very good chance I just plain went over memory, and no amount of GC poking would have helped. On the flip side, I have scripts where I'm certain they regularly fly well over the limit for short bursts — I at least know they've created temporary values that together absolutely would have gone over if they hadn't been collected — and they don't crash… That said, LL won't give us the facts (mostly coz they know we'll abuse them if they do), so there could be some weird corner cases where it does still do the biz…
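The "reference passed by copy, plus immutability" behaviour can be demonstrated in TypeScript, where strings behave much the same way:

```typescript
// "Modifying" an immutable value inside a function rebinds the callee's
// local variable to a NEW value; the caller's original is untouched.
function shout(msg: string): string {
  msg = msg + "!"; // builds a new string; does not mutate the caller's
  return msg;
}

const greeting = "hello";
const loud = shout(greeting);
// greeting is still "hello"; loud is "hello!".
// A reference was passed by copy, and immutability did the rest,
// which is why it LOOKS like pass-by-value from the outside.
```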
  22. Yeah, attached the whole time. Inventory "freezes" the script, complete with its current state. That doesn't count as "stopped". Same thing happens when crossing sim borders or TPing with a running script. Its current state gets packaged up into a blob, sent along with it, and unpackaged on the other side. But, if stopped… SL just goes, "yeah, nah, not gonna bother". Also untested: stopping, then taking to inventory and re-rezzing, with and without a sim change. Could be interesting, but not worth actually testing.
  23. So, I went and did a very quick test… (In case it's changed recently — has been known to happen.) Made a script in a box that counts, attached it, still counting. TP'd to another sim, still counting. Stopped it, TP'd home, started it again, and it resets and counts from scratch. I've been wearing a "general purpose" HUD I made like over a decade ago now (always makes me feel old, saying things like that), that stops most of its scripts once they've announced themselves, until they're needed again (they register their menu entry, and subscribe to a couple wakeup events as needed, etc.), and I rather distinctly remember using another script as a "settings store" for them (though only a few actually needed it)… I plan to rework that whole thing; LSD for one will let me ditch the settings store script. Plus I've just plain gotten better at LSL in general, over that time. (It's on the TODO list… the very very long TODO list…) But yeah, TPs also. Basically, any time you bring a stopped script into a sim, it doesn't bring its current state blob with it. And not having its current state seems to basically be what a script reset is — it's gotta build a new one from scratch, next time it gets to run. (Not sure about a rezzed object, and a sim restart… Kinda hard to test that one.)
  24. Maybe a technicality depending on use-case, but… I think that depends on when/where it was set to not running. A script set to running, and then not running, does in fact retain its memory, and its entire state. If you set it running again, it'll resume where it left off. But take it out of the sim (like by TP, or derez, or whatever) while it's not running, and then start it again, and it has to start from scratch — suggesting the script state/memory (at least, other than bytecode) is not retained in that situation. Which is good news for HUDs being worn by someone who does a lot of sim-hopping, or an object being rezzed with several "modules" and then starting only the one appropriate to the situation. That still leaves the question of whether a script that has not been running at all in the current sim session occupies any memory. Does the sim even bother to fetch its bytecode, if it's not yet been set to running? If not, then until it's started at least once in the current sim session, it only occupies the small amount of space taken up by the inventory entry.
  25. Mmmmmmm… mini-serving of buzzword soup… Yummy. Was trying not to hijack this thread, but, okay, since you bring it up… TL;DR: I'd agree with you but for one point… llSleep. State machines on their own can't suspend mid-state-function (not to be confused with LSL states), so it's at least more than just that. (It's mostly a problem with a little thing called a stack…) And asynchronous applies only insofar as there's an event system present; it also does not address llSleep. That one little function is the bogeyman of your response, and pretty much all solutions are indeed quite fancy. Interestingly, that prompted me to go get my favourite error message to slap you around with (gently, of course). But it's changed from what I remember… And the new one doesn't lend the same insight. Whether that means they've switched from fibers to something else (hopefully async/gen — and may now indeed add new yield points), or not, I cannot infer. This isn't an exhaustive list, and terms get reused and mixed up often, not to mention they've changed over the years (seems like everything used to be just "multitasking"), but I'll try to lay out the important ones hopefully reasonably clearly… There is somewhat of a conceptual hierarchy here: networking -> multitasking -> threading -> fibers -> generators -> state machines -> functions. (The ordering of generators vs. state machines is particularly arguable, but I think this ordering fits in here, and of course the whole lot is somewhat recursive with networking usually being implemented using a state machine, etc. — but a state machine usually won't be implemented using networking.) "Fibers" allow a normal stack-based program to essentially have multiple stacks, through "stack switching". Those stacks are created explicitly (I've heard them referred to as "fiber boundaries"), which in this case, would almost certainly be the script as a whole (hence the granularity I mentioned).
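To make the fiber mechanics concrete: pure Python has no real stack switching, but you can fake the idea — multiple live stacks, with control moving only at explicit switch points — by gating OS threads on events. A sketch of the concept only, not a claim about how LSL's engine works:

```python
import threading

log = []

class Fiber:
    """Toy 'fiber': an OS thread gated by an Event, so only one runs at a
    time and control moves only at explicit switch points. (Illustrative;
    real fibers swap stacks directly, without threads.)"""

    def __init__(self, func):
        self.ready = threading.Event()
        self.thread = threading.Thread(target=func)

    def switch_to(self, other):
        # Suspend this fiber (its whole call stack stays intact, parked on
        # its own thread) and resume the other one.
        self.ready.clear()
        other.ready.set()
        self.ready.wait()

def run_a():
    a.ready.wait()      # wait to be scheduled the first time
    log.append("A1")
    a.switch_to(b)      # suspend mid-function, resume B
    log.append("A2")
    b.ready.set()       # done; hand control back so B can finish

def run_b():
    b.ready.wait()
    log.append("B1")
    b.switch_to(a)
    log.append("B2")

a = Fiber(run_a)
b = Fiber(run_b)
a.thread.start()
b.thread.start()
a.ready.set()           # kick off fiber A
a.thread.join()
b.thread.join()
print(log)              # ['A1', 'B1', 'A2', 'B2']
```

Note how each function picks up exactly where it left off, local variables and all. That "pick up mid-function" ability is precisely what llSleep needs, and what a plain event-handler state machine doesn't give you.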
The old error message strongly suggested this was the method in play in LSL. This is quite fancy, and also the method I used when I wrote my first "multitasking library" some 30-odd years ago in good ol' Borland Pascal, and involves what C/C++ call a "longjmp". This idea is basically the same as "threads" (from the term multi-threading), which are simply the OS version of fibers, and usually not co-operative (used to be, and can be, but the OS will additionally also step in periodically and suspend you, and in some OSes will even do so on pretty much every kernel call), and then "processes" in turn encapsulate a group of threads together, along with other attached resources (the set of allocated memory most notably). And that's not even mentioning multi-core systems, which seem to have taken over the definition of "multi-tasking". Asynchronous means the script doesn't just stop the world to wait for external tasks to finish, but rather starts the action, and makes a note of it to come back and attend to it later when it's done. In LSL's case, that's handled by the script engine, and the insertion of a suitable event into the script's queue; the script engine subsequently re-invokes the script to handle it at a later time. It has no bearing on the script's ability to suspend mid-run, as llSleep does. The script "finishes" at the end of each event, returning control to the engine (and thus unwinding its stack), before a new event can begin to be processed. In this regard, it is very much a state machine, with each state being represented by an event handler (again, not to be confused with LSL states), but at no point does this address a state being suspended mid-execution. A common implementation is that of "promises", but it still requires some kind of external polling/checking mechanism, or "main loop", to which it returns (assuming nothing fancy). Generators are a newer alternative to fibers; this idea is about getting the compiler to do it for you — much more fancy.
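(Quick detour before the generators bit.) That run-to-completion event model can be sketched in a few lines. The event names here (touch, http_response) are made up for illustration; the point is the shape of the engine's loop:

```python
from collections import deque

# Sketch of a run-to-completion event model: the "engine" pops events off
# a queue and calls the matching handler, which must return (unwinding its
# stack) before the next event can be dispatched.

events = deque()
log = []

def on_touch(detail):
    log.append(f"touched by {detail}")
    # Start an asynchronous action: rather than blocking here waiting for
    # the result, the engine queues a completion event for later.
    events.append(("http_response", "200 OK"))

def on_http_response(detail):
    log.append(f"got response: {detail}")

handlers = {"touch": on_touch, "http_response": on_http_response}

events.append(("touch", "Bleu"))
while events:
    name, detail = events.popleft()
    handlers[name](detail)   # handler runs to completion, every time

print(log)  # ['touched by Bleu', 'got response: 200 OK']
```

And there's the rub: a blocking sleep inside one of those handlers stalls the entire loop — which is why llSleep needs something fancier than this model on its own.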
One method is for the compiler to kind of turn a function inside out, effectively converting it into its own little state machine — everyone does this themselves the manual way, every time they use an asynchronous function, and it is the method often employed by compiled languages. (This is also roughly related to how every functional language ever does it.) I'm also most familiar with Python's version, though, in which each function invocation carries with it its own private little perfectly sized mini-stack and "instruction pointer" (the package as a whole, called a generator), allocated on the heap along with all your other values (much less state-machiney, though) — most often employed by bytecoded languages such as Python (and perhaps Mono, though it feels more geared towards being just a brief stop on the way to compilation). One quirk of generators (as opposed to fibers) is that yields can only occur in the bytecoded portions (not exactly, but close enough), and not within any C/compiled library functions it may call (also not exactly, but close enough — Python for example has a mechanism to allow it, but you have to do the state-machinery part yourself, essentially the same as what every LSL scripter does). Either way, async (not to be confused with asynchronicity) is then typically implemented on top of the generator mechanism by utilising a generator yield point (quite often literally the yield keyword) to pass control back to the underlying script engine, rather than returning a value as such, as would typically be done with a generator. The use of "async" and "await" keywords kind of wraps up the generator so it doesn't behave as one, both providing and capturing the yielded values for its own use (there's nothing strictly preventing an "async generator", but that's a level of convolution that's likely to make heads explode).
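Here's what that looks like in actual Python: a generator suspends mid-function at each yield, keeping its own little mini-stack and instruction pointer, while a tiny driver loop interprets the yielded values as requests. The "sleep" request is roughly how an llSleep-style suspend could sit on top of generators — purely illustrative, no claim this is what LSL's engine actually does:

```python
# A generator-based "script" that suspends mid-function, plus a toy driver
# loop ("engine") that interprets what it yields. Time is simulated so the
# example runs instantly.

def script():
    yield ("sleep", 2.0)   # "llSleep(2.0)" -- suspend right here, mid-function
    yield ("say", "done sleeping")
    yield ("sleep", 0.5)
    yield ("say", "bye")

def run(gen):
    clock = 0.0
    said = []
    for action, arg in gen:          # each next() resumes the generator
        if action == "sleep":
            clock += arg             # a real engine would reschedule, not spin
        elif action == "say":
            said.append(arg)
    return clock, said

clock, said = run(script())
print(clock)  # 2.5
print(said)   # ['done sleeping', 'bye']
```

Between each yield, the script's locals and position are preserved on the heap — no stack switching required, which is exactly the appeal.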
In terms of granularity, the yield boundary for generators tends to be the individual function, and usually doesn't include functions it may call (hence fibers still have a place, unless you enforce a turtles-all-the-way-down policy with a little extra layering). Hope that clears things up. Any more questions? 🤪
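P.S. That yield-boundary point in concrete Python, since it trips people up: a plain function call cannot suspend its caller's generator; each nested level has to opt in explicitly (with yield from), which is the "turtles all the way down" layering:

```python
# The yield boundary in practice: yields don't propagate through ordinary
# calls; every level of nesting must opt in with "yield from".

def inner():
    yield "inner suspended here"

def outer_broken():
    inner()                  # just creates a generator object and drops it
    yield "only outer yields"

def outer_layered():
    yield from inner()       # inner's yields pass through to our driver
    yield "then outer"

print(list(outer_broken()))   # ['only outer yields']
print(list(outer_layered()))  # ['inner suspended here', 'then outer']
```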