
Bleuhazenfurfle

Everything posted by Bleuhazenfurfle

  1. TL;DR — So is it "multithreaded"… "kinda"? But really, no. LSL appears to use a kind of co-operative multitasking, more commonly referred to as "fibers" these days. A fiber can yield control back to the script engine, allowing the next script in the queue to pick up where it left off — but there is indeed at most one script executing at any given time. And even more than that, there's probably a further limit: only one part of a given script can be executing at a given time (a common limitation with fibers; they tend to be fairly "granular"). In LSL, this yielding seems to come into play only in llSleep, the delays attached to many functions, at the start of functions and loops, and maybe other similar spots. My personal suspicion is that all of those are actually the exact same thing — essentially a call to the backing function of llSleep — and for some reason (possibly something to do with using icky actual fibers rather than much cleaner async/generators), LL seem highly reluctant to add any new yield points. (I recently suggested one in the form of llAwaitDataserver, but I've seen others given the same response, too.) Anyhow, sorry to belabour an off-topic point, but I've seen this come up a number of times, and I thought it worth the clarification.
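The co-operative model described above can be sketched with Python generators, which behave much like fibers: a "script" runs until it hits an explicit yield point, and a scheduler resumes scripts one at a time. This is a toy illustration of the concept, not LL's actual implementation; all names here are made up.

```python
# Sketch of co-operative multitasking ("fibers") using Python generators.
# Each toy "script" yields control at explicit yield points (the analogue
# of llSleep and the built-in function delays); the engine resumes exactly
# one script at a time, round-robin.

from collections import deque

def script(name, steps):
    """A toy script: does `steps` slices of work, yielding between each."""
    for i in range(steps):
        # ... one slice of work happens here ...
        yield f"{name}:{i}"   # explicit yield point, like a zero-length llSleep

def run(scripts):
    """Round-robin scheduler: at most one fiber executing at any given time."""
    queue = deque(scripts)
    trace = []
    while queue:
        fiber = queue.popleft()
        try:
            trace.append(next(fiber))  # resume where it left off
            queue.append(fiber)        # requeue for another timeslice
        except StopIteration:
            pass                       # script finished, drop it
    return trace

print(run([script("A", 2), script("B", 2)]))
# interleaved execution: ['A:0', 'B:0', 'A:1', 'B:1']
```

Note that a script which never yields would hog the engine forever here, which is exactly why the yield points baked into llSleep and the function delays matter.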
  2. Should have also mentioned in my prior post: LSL often lets you go forwards using backwards (negative) indexing. This can be a solution to the tiny little second issue I mentioned, for times when you really do actually want to go forwards. If you start your loop at -length and progress "forwards" to 0, LSL's negative indexing turns that into regular forward looping without having to explicitly remember the end point.
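Python happens to share LSL's negative-index convention (index -n means "n from the end"), so the trick translates directly. A minimal sketch, with an illustrative function name of my own:

```python
# Forward iteration using negative indices: start at -len(items) and count
# up towards 0. Index -n means "n from the end", so the loop walks the list
# front-to-back, and the stop test is simply "has i reached 0" -- no stored
# end point needed, even if the list length were to change along the way.

def visit_forwards(items):
    out = []
    i = -len(items)
    while i:             # stops when i hits 0
        out.append(items[i])
        i += 1
    return out

print(visit_forwards(["a", "b", "c"]))  # ['a', 'b', 'c']
```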
  3. Don't underestimate iterating lists backwards. We tend to think forwards, with increasing numbers. But sooo very often it makes no actual difference, and counting backwards often makes the task far simpler. An example being if you're adding or removing items from a list; going forwards, you have to account for the added or removed items in your loop counter. Going backwards you don't. Another is that 0 is much easier to detect. Going forwards, you need to compare against some arbitrary stopping point, which you'll either need to keep handy in another variable, or keep referring back to the source for (especially if it changes because you're adding or removing items). Going backwards, 0 is 0 (and even -1 is still simple, and it stays put).
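The delete-while-iterating case is where this really pays off. A small Python sketch of the point (function name is mine, purely for illustration): deleting backwards never disturbs the indices you haven't visited yet, and the stop condition is just zero.

```python
# Deleting matching items from a list while iterating. Going backwards,
# each deletion only shifts indices *above* i, which have already been
# visited; the indices still to come are untouched. And the loop ends at
# a fixed 0 rather than a moving end point.

def remove_all(items, victim):
    i = len(items)
    while i:
        i -= 1
        if items[i] == victim:
            del items[i]   # safe: remaining (lower) indices don't shift

data = [1, 2, 3, 2, 4, 2]
remove_all(data, 2)
print(data)  # [1, 3, 4]
```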
  4. Animation duration should also be added to llRequestInventoryData (as per BUG-232877)… along with the priority, I'd say (separated by a simple space). Adding any of the other stuff would be getting over-complicated.
  5. That is … a completely different idea…! This here is about invoking mathematical operators in bulk (à la llSIMD), not invoking any random function (what is the function id of the + operator?). I did want to take the opportunity to give it a hearty +1, but it's still just off topic. That said… you've just described llApply (close cousin to llCall — which wouldn't actually be possible in present LSL), I think. The prior op() was basically function references (my personal preference, for a bunch of reasons). What you're suggesting, if it was meant to be in the context of this thread, would lean more towards an llMap (built on top of llApply). Also all fantastic ideas that I've been wanting in LSL for years…! Soooooo much tedious code would be saved by llApply alone. (And don't forget llPartial!) And if they existed in LSL, I'd soooo include llApply in my llRpnMonstrosity. But alas, still essentially off-topic, I'd think.
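For readers unfamiliar with the names being thrown around: llApply, llMap, and llPartial are hypothetical LSL functions, but they correspond to well-known higher-order-function ideas. A rough Python sketch of what each would do, with semantics assumed from the discussion rather than from any real LSL API:

```python
from functools import partial

# Rough Python equivalents of the *hypothetical* llApply / llMap /
# llPartial discussed above. None of these exist in LSL; the semantics
# here are my assumptions, sketched for illustration only.

def ll_apply(fn, args):
    """llApply: invoke one function with a list of arguments."""
    return fn(*args)

def ll_map(fn, values):
    """llMap: apply a function to every element, collecting the results."""
    return [fn(v) for v in values]

def ll_partial(fn, *bound):
    """llPartial: pre-bind leading arguments, returning a new callable."""
    return partial(fn, *bound)

add = lambda a, b: a + b
print(ll_apply(add, [2, 3]))                    # 5
print(ll_map(ll_partial(add, 10), [1, 2, 3]))   # [11, 12, 13]
```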
  6. That's basically what I was calling llListMathematics, to go with llListStatistics, though I'd put the op first, personally. I would suggest one small change: if one list is shorter than the other, then cycle its items in a ring. i.e. (obviously not LSL):

     integer Al = length(A), Bl = length(B), i = max(Al, Bl);
     list Z[i];
     while (i--) Z[i] = op(A[i % Al], B[i % Bl]);

     This gives the effect of the SIMD N:1 form, along with some tricks like the sign inversions common in matrix math, without having to feed in an entire matrix's worth of 1's and -1's.

     When I was in school, we used to call it BODMAS… And this "new math" business is just what new people learn in school, while the grown-ups' "old math" is just that thing more commonly referred to as "algebra". Here's the secret though: ÷ and × work just the same in algebra as they do in "new math" — I guess people just think "new math" makes them sound smarter than "kids' math".

     Aaaanyhow… Since you bring it up, a quick summary of what I came up with: True RPN has a couple of issues; it looks ugly, it's unfamiliar, it's complicated with its own set of operations just to manage the stack, it's hard to keep track of everything, it's virtually unreadable, and generally just a PITA. Also, it doesn't work given the data types we have; do you REALLY want to have to use strings for your operators?!? But if you use integers, then how do you tell them apart from the values? And what if you want to let it work with strings, vectors, and rotations too? (You can add strings, at least.)

     So right off, we swap the parameters around to resemble llGetPrimitiveParams rules. We now have something that looks much better — but you still actually can't read it. Which is why I mentioned registers before. I ended up augmenting the stack with named registers, with the "null register" (an empty string) being the traditional RPN stack, and every instruction got a destination register, so it ends up looking more like assembly language.
And if you have an empty string variable named STACK, and a column index where appropriate (1-based, with negative indexes forming a horizontal slice instead of a vertical one), you get this:

     RPN_INPUT, /*result table*/ "input", /*stride*/ 2, /*length*/ 18, /*data*/ "A",5,"B",7,"C",9,"D",3,"E",1,"F",4,"G",2,"H",6,"I",8,
     RPN_REGISTER, /*result table*/ STACK, /*source column*/ "input",-2, // need to take a horizontal slice (one row with the 9 scores)
     RPN_GEOMETRIC_MEAN, /*result column*/ STACK,0, /*source table*/ "input", // horizontally collapses the entire table (one row, so one value)
     RPN_GREATER_THAN, /*result column*/ STACK,0, /*lhs column*/ "input",2, /*rhs column*/ STACK,1, // single-row rhs is repeated cyclically
     RPN_FILTER, /*result table*/ STACK, /*source table*/ "input", /*selection column*/ STACK,1 // filters entire table rows, output left on stack

     So that's what I ended up with before I put the idea aside. It's still RPN-esque, still has a stack so you don't have to name EVERYTHING, but avoids the worst of the stack fiddling (you never need DUP or SWAP or the rest, because you just use named registers for the annoying ones), and you can skip the stack entirely with a few creatively reusable names like "A", "B", "C"…

     Of interest: GEOMETRIC_MEAN (also ADD, and similar) horizontally collapses an entire table. Since we want to process a column of numbers instead of a table of rows of numbers, the REGISTER allows us to "slice" the values we want out of the source data, choosing whether to leave it as a column (positive index) or form it into a row instead (negative index, which is also why it doesn't get a destination column index). And functions that return a single column can write that column back into an existing table, or replace it entirely by specifying column 0 (in the case of STACK, it pushes a new table), which also makes my OCD happier about the 1-based indexing.
These basic operations were also kind of interesting, since they horizontally collapse tables into a single column, but sometimes you want them to work on the columns of several distinct tables. I had a table builder function (basically, a mass llList2List/llListReplaceList combo taking a range of source table columns and using them to replace a range of destination table columns), and another one that slurps up the top N entries off the stack (N=0 to just take them all) and glues them into a single table. But using either (possibly several times) before each of those operations seemed kind of icky — though it definitely seems the most flexible option, while being syntactically clean. (For example, [RPN_STACK_BUILDER, 3, RPN_ADD] would take the top three entries off the stack, glue them together into one big table, and then collapse it horizontally by summing all the values in each row.) I guess it even actually looks more RPN-ish like that (this would be horrible if it was building the new table just to be consumed, but using the iterator stacking method I don't think it actually makes a practical difference). All in all, I was mostly rather happy with the result. Still don't think it'd ever become a thing, though. (Also, I rather like my name, too.)
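The ring-cycling rule from the pseudocode at the top of this post (the `i % length` trick) is easy to demonstrate concretely. A Python sketch, with the function name mine rather than anything proposed above:

```python
import operator

# Elementwise binary op over two lists, cycling the shorter one in a ring
# via index-mod-length. A one-element list then behaves like the SIMD N:1
# broadcast form, and a [1, -1] list gives alternating sign inversion
# without feeding in a whole matrix of 1's and -1's.

def list_math(op, a, b):
    la, lb = len(a), len(b)
    n = max(la, lb)
    return [op(a[i % la], b[i % lb]) for i in range(n)]

print(list_math(operator.mul, [1, 2, 3, 4], [1, -1]))  # [1, -2, 3, -4]
print(list_math(operator.add, [1, 2, 3], [10]))        # [11, 12, 13]
```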
  7. I'm not so sure it's DOA… As I said in my Jira response, it still beats the pants off doing it in a loop in LSL. And as was pointed out above, SIMD maybe could still come into play, possibly, but it's certainly not going to be more than an implementation detail. The list thing, though, that is a problem. Of which I had two-ish minds:

     First, adding a list-outputting version of llListStatistics (llListMathematics?), taking two input lists and an operator, and producing one output list. Not going to get much efficiency out of it, but still much better than a loop in LSL — after all, llListStatistics doesn't provide anything we can't already do in LSL either, but I'm frequently sure glad it's there. Also, llListMathematicsStrided (and perhaps a matching llListStatisticsStrided) for doing the same on a strided list — the major benefit being it could potentially handle both horizontal and vertical striding (negative stride length indicating vertical rather than horizontal, perhaps) across multi-argument operations (e.g. sum groups of 5 numbers, average over groups, find the maximum among each group, etc.). The major disadvantage, of course, is that you'll usually have to compose that strided list for each individual operation — hence allowing a choice of stride direction. Even better if we can provide a separate list for the striding (similar to the specifiers list for llGetJSON), allowing it to pluck the numbers out of an existing strided list containing other data (hopefully, often entirely avoiding the need to recompose a new strided list).

     Second, an entire RPN-style bulk calculator with internal register heap, à la llGetPrimitiveParams.
This form would have a fighting chance of getting the efficiency, but I doubt it'd ever fly because it could REALLY rack up memory something shocking — UNLESS… the memory horrors of the RPN-calculator style could be mitigated by trading memory for time instead; essentially, the distinct operations build a stack of iterators, which only at the end gets evaluated, with the result returned as a list. It's going to be about as far away from SIMD efficiency as you could possibly get, but probably STILL better than a loop in LSL. (Seriously, have you ever timed those things…?!?) For reference, D's use of ranges, and pretty much every functional language ever, behaves much like this. (Along with the benefit of a decent optimising compiler, but still…)

     The problem is that it lends itself to the same objection that was floating around for why we didn't get regex until recently — this one command can burn a frightful amount of time. Since regex is a thing in LSL now (at least minimally, with interest in more), I presume either they've figured out how to make it suspendable if it overruns the script's timeslice, or they figure the worst case isn't too bad and it should be safe enough. In the former case, this might still have a chance. In the latter case, keeping it strictly stack-based might be sufficient.

     Another thing that would go a long way towards making this more sane is a mixing function to construct a strided list from two source lists, or from the concatenation of two or more source lists. Of some amusement, my idea for the RPN calculator function could actually fill that role, also.
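The "stack of iterators" idea maps neatly onto generator pipelines, which is essentially how D's ranges work too. A Python sketch under those assumptions (all function names invented for illustration): each operation wraps the previous one lazily, and no intermediate list is ever materialised until the final evaluation.

```python
# "Stack of iterators" evaluation: each bulk operation wraps the previous
# one in a lazy generator, so chaining operations allocates no intermediate
# lists. Only the final materialisation walks the whole pipeline, once.

def scale(source, factor):
    for x in source:
        yield x * factor        # computed on demand, one value at a time

def offset(source, delta):
    for x in source:
        yield x + delta

def evaluate(pipeline):
    """Only here is memory for the result actually allocated."""
    return list(pipeline)

data = [1, 2, 3, 4]
pipeline = offset(scale(iter(data), 10), 5)  # nothing computed yet
print(evaluate(pipeline))  # [15, 25, 35, 45]
```

The trade-off the post describes falls straight out of this shape: memory use stays flat no matter how many operations are stacked, but the single `evaluate` call is where all the time gets burnt, which is exactly the timeslice concern raised above.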
  8. Sounds like they may have simply meant collision_start and collision_end, perhaps…? Much along the lines of what Qie just described, I was thinking you could do it in mouselook with an attachment that figures out what you're looking at, and llRegionSayTo's a message on a known channel (should totally be channel 35270522) to let them know when you start and stop "hovering"… But that's all quite nasty (you gotta get them to wear said attachment, for one), and of severely limited (if any) actual practical use. Would be a fun project, though… Going to have to make all those plywood boxes in my build space run away from you when you look at them in mouselook.