Everything posted by Bleuhazenfurfle

  1. That depends on who's using it, and what it's for — I don't recall there being enough information to make assumptions here. The "average user" of a HUD may not even be aware whether the texture they're applying is mod or not. If these textures are going no further than the faces on which they're placed, then there's no good reason to restrict their mod-ness (like, a teleporter may accept an image for each destination — since text labelling sucks in LSL).
  2. Are you sure? I rather clearly remember having problems of this sort back in the early 2010's (and have avoided it ever since), while doing a "picture fade" by having the front face shift between transparent and opaque. My solution at the time was to nudge the face slightly back from either fully opaque or fully transparent (like, 1-99%, rather than 0-100%) a second before the transition, so the texture had a chance to load — and it seemed to happen at both ends. Things may have changed since then — I can imagine viewers having added a "low priority" texture queue, or something, which is why I'm asking too (also having been mostly text-bound for the last half a decade). I do also recall textures not loading on fully contained (unseen) objects, due to occlusion culling.
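In case it helps, here's a minimal sketch of that preload trick — the face number and the one-second delay are assumptions for illustration, not anything from the original script:

    // Minimal sketch of the preload workaround described above: nudge the
    // face to 1% visible so the viewer starts fetching the texture, then
    // snap to fully opaque a moment later.
    integer FACE = 1;

    default
    {
        touch_start(integer n)
        {
            llSetAlpha(0.01, FACE);   // barely visible; viewer begins loading
            llSetTimerEvent(1.0);     // give it a moment to fetch
        }

        timer()
        {
            llSetTimerEvent(0.0);
            llSetAlpha(1.0, FACE);    // now do the real transition
        }
    }
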
  3. Isn't that going to run into issues with no-mod textures? If all your faces report NULL_KEY for their texture, that's not going to help you very much. Would think it better to drop the texture in as an inventory item, having the script notice and pop up a message asking you to select the spot into which to load the texture (rough sketch below). (And then remember that, so you can remove the texture again if it gets replaced later.) Unfortunately, LSL doesn't have any drag-and-drop support — like, you can't indicate which faces are droppable — so a slip of the mouse and your entire HUD has now been repainted (I presume that's at least part of that "very error-prone").
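For what it's worth, a rough sketch of that drop-in flow — the prompt wording is arbitrary, and a real HUD would track which inventory item is actually new rather than just grabbing one:

    // Rough sketch: a texture dropped into the prim's contents is noticed
    // via CHANGED_INVENTORY, then applied to whichever face the owner
    // touches next.  This just grabs the last texture name alphabetically;
    // tracking the newly added item is left out for brevity.
    string pending;   // texture waiting to be placed

    default
    {
        changed(integer change)
        {
            if (change & CHANGED_INVENTORY)
            {
                integer n = llGetInventoryNumber(INVENTORY_TEXTURE);
                if (n > 0)
                {
                    pending = llGetInventoryName(INVENTORY_TEXTURE, n - 1);
                    llOwnerSay("Touch the face to place \"" + pending + "\" onto.");
                }
            }
        }

        touch_start(integer total)
        {
            integer face = llDetectedTouchFace(0);
            if (pending != "" && face != TOUCH_INVALID_FACE)
            {
                llSetTexture(pending, face);
                pending = "";
            }
        }
    }
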
  4. Until "half the work remaining" hits the limit of the simulation's floating point number format, and rounds down to zero — and then you're done. (Unless it was made by Intel, then all bets are off.)
  5. Sighs. And you seemed so decent, too. Well, let's try to get this over with. I came here to try and raise up what features people foresee needing of such a function, because I don't claim to be able to speak for everyone (I quite frequently reference my experience specifically, with a dash of what I've picked up from the many people I've helped over the years), and this is a fairly important function, and I think doing it right the first time (because there's plenty of evidence the sun may well swallow the planet whole before we get a second shot at it) really is worth the effort. With that said… First, why are you repeating something I myself said? Second, you're talking about a language intended to be used by people who almost certainly do not have the skills to craft a "properly made regex". Many people struggle with understanding LSL (and that includes a rather large contingent of "designers", who can't script, but can't afford to hire someone who can, either), let alone this second new (and arguably even more cryptic) language creeping into it (I have a suggestion in another Jira of my own from a while back that would go a long way to helping with that) — but they can mostly handle a few options which can be readily explained to them, and a nice simple pattern of its usage. Further, saying that to me right on the heels of your response to Love Zhaoying above strikes me as significantly hypocritical at best, but, being the new year, and in deference to Wulfie Reanimator's response just below that, I will try to refrain from raising the other half a dozen similarly personal critiques that sprang to mind from that response. (You guys did realise that "heh" on the end was to signify humour, right…? right?) Third, because it's particularly dumb and you made such a big point of it; such ardent cries of "show me a use case" are fairly typical of people who know they have nothing and are just trying to deflect for personal reasons. I put a good deal of thought into most things I write, especially on Jira, so how about you actually refute the points I made, rather than blatantly ignoring them and trying to deflect in such a transparent and lazy way. And yes, I know I tend to get a little excessive in my suggestions — I like to try and set the bar nice and high to begin with, giving voice to all the relevant issues I can think of, so even after a good lowering of that bar, we hopefully still have something reasonably useful (and have still been all too frequently disappointed — especially when they accept an idea, promptly ignore it, and then complain that "well, we can't change it now"). That 80/20 thing is kind of just a standard (and effective) rule of thumb across a good many industries — really wasn't expecting to have to explain it, it's kind of just a given. Along with other variations such as the ever amusing (and a little unsettlingly accurate); "the first 20% of the work takes 80% of the time, the remaining 80% of the work takes 80% of the time, and the last 20% of the work takes 80% of the time". And your last comment there is entirely wrong in every way: most blatantly, how about you show me actual numbers for your 95-98%? Actual data rather than just handwaving make-believe numbers you pull out of thin air to dramatize the situation. (And gotta admit, 95-98% is waaaay more dramatic than a mere 80% — which pretty much anyone with any actual experience ought to have recognised as the rule of thumb that it is, rather than a serious estimate.)
With that hopefully out of the way… My entire suggestion there — even in its full bloated glory — is literally like, several if()'s in a loop for the main worker function, and then you call it once from llRegex, and call it in a loop from llRegexFind. (At least combining them saves on a bunch of argument checking.) Seriously, the hardest part specific to implementing my suggestion over yours is deciding which of the two forms to implement (which I was rather hoping people might weigh in on, because I can't decide which one to push over the other). That's a far cry from "too much effort". And as for that "not enough return"… LSL is a memory-constrained environment… How many of the newer functions lately are quite specifically aimed at dealing with that fact… and specifically not having to deal with those "other facilities"… and that those other facilities don't work reliably once you get beyond simple regex cases (there's probably a reason you're using regex in the first place)… and a real general llRegex function will be getting applied to much more than LSD keys — tasks like scraping web pages, for example, which people already routinely struggle to fit into a script's memory and still have room to work with it. The best first options for reducing the memory requirements of these functions — especially when dealing with large inputs — and having some hope of your script not dying in a stack/heap collision, are getting the indices instead of strings, and iteration instead of greedily filling a list with an unknown number and/or size of items (there's a rough sketch of that loop just below). Any function that returns a list of indeterminate size (such as llRegexFind) is just asking for a stack/heap collision. There's also a few other ideas floating around which potentially make not having to rely on those "other facilities" even more critical — like if we get uncounted event arguments (or function returns), it's very likely the last thing you'll want to do is modify it in any way (since that immediately makes a copy, which most certainly will be counted), so the start offset becomes absolutely critical (and as I pointed out previously, it's also absolutely trivial to implement even if their regex implementation doesn't offer it as a feature — it's trivial to do by other means, so it doesn't need library support!). Is that enough for you? Can we move on now? Also, I cut out a fair bit to get it this "short", so we can keep going if you'd like, though if it's all the same to you, personally I'd rather not. I really have no intention of being nasty or insulting or anything, but I get so tired of the constant measuring contests (yes, I get I'm still fairly new to this forum, but I'm most certainly not new to the topic), and people defending their ideas more than thinking about them.
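To make the iteration idea concrete, here's what that loop might look like — noting that llRegexFind(subject, pattern, start) returning [start_index, end_index] is purely the hypothetical function being proposed here, not anything that exists in LSL today:

    // Purely hypothetical sketch: llRegexFind(subject, pattern, start)
    // returning [match_start, match_end] (or [] for no match) is the
    // proposal, not a real LSL function.  Feeding the previous end index
    // back in as the next start avoids re-scanning the head of the string,
    // and avoids building one huge list of every match at once.
    integer start = 0;
    list hit = llRegexFind(subject, pattern, start);
    while (hit != [])
    {
        integer s = llList2Integer(hit, 0);
        integer e = llList2Integer(hit, 1);
        // work with llGetSubString(subject, s, e - 1) here...
        start = e;                                   // resume just past this match
        hit = llRegexFind(subject, pattern, start);
    }
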
  6. If primerib1 wants to talk about bikeshedding… THAT is some serious bikeshedding. heh Though I personally agree… my skill at naming things is … lacking … so, I chose to stay clear of that one.
  7. I'm not aiming for perfect either. But there is a lot of room for extensibility here; good regex matching functions do quite a lot. So, we at least need some measure of options. I also went through the top several distinct uses for regex matching I do on a regular basis in coming up with my response. And then finally, I considered that regex is itself a relatively heavy function, so it can withstand the additional weight of an options list (the metric I'm using there is a fairly simple "the complexity of the arguments should not exceed the complexity of the function itself" type thing). I do think that start location, though, is of great importance, and negligible additional complexity. (Including the end offset, however, is a convenience.) This is an extremely simple way to avoid an extra repeated string construction, which is good both for efficiency and memory use (the latter being a critical resource in LSL). And even if it doesn't match the .Net interface, it's as simple as a string view before the regex call. (I remember correctly that .Net has string views, right?) Also, we don't know that they're using the .Net one, do we? I got the impression they're using C++, and binding C++ functions through the Mono interface, so it could be coming from either side. Lastly, while LL can implement a new function later to fix up gaps… The fact that it is "another" function immediately kicks it way down the priorities list — and we have plenty of evidence for this happening in practice, plus strong implication by LL's own words on at least one occasion I can recall. I for one would vastly prefer we take a little more time to get the first one 80% there, rather than 20% there and wait until the sun swallows the planet for the next one. Besides, since they're on break, now's the perfect time to be discussing it — they can come back to work, implement my suggestion, and the regex party can begin…! 😁 No, actually. That doesn't account for the possibility that the text on either side of the last match now itself forms a new match. You'd want to either replace it with a string that cannot appear within a match, or cut the already-searched head off — which was the purpose of including the end index in the match results. Whether you do it by cutting the string, or passing that index back in through a start offset, it neatly and trivially avoids exactly this issue. (It's also exactly what many regex implementations are doing internally when they expose a match iterator.) And I still think we'd be better served by a single function as I described in my Jira comment — it's not a complicated fold, presents some significant utility I've wanted myself a couple of times, and saves a LOT of duplication that would be necessary to obtain equivalent functionality. Worse, if the second function doesn't cover the first, we end up having to just fall back to using the first in a loop anyhow. There are MANY occasions with regex when you want a sub-group of the match, rather than the match itself — in fact, I'd go as far as to say most of my regex use falls into that category. Which actually reminds me, if we're stuck with one result for each match, then I'd like to see it return group 1 if there is one, otherwise return group 0. The reason being group 0 will often (as noted above) contain excess you don't want, and may well need yet another regex to remove. Group 1, on the other hand, can be made equal to group 0 trivially, and a subsequent group can be moved up by making any earlier ones non-capturing.
  8. Yeah, that's what I thought he might have been thinking, too… but urgh. no. just no. *shudders* That'd be an absolute abomination — even for LL… The whole point of the (?) syntax is that it's not otherwise legal regex, and so stands out as a directive — and gets used for a whole heap of stuff! Also, (?flags) generally affects the pattern from that point on, so it is often used as a prefix, but can appear elsewhere. The group version is generally (?flags:pattern). I wouldn't be opposed to them only supporting it as a prefix, if they absolutely have to — it'd be trivial to pluck it off the start of the pattern string and convert it into a set of flags internally. Though of course, I'd be sad if that's all they gave us… (but they could still expand it out to its full glory at a later date, so there'd still be hope!) Not otherwise sure what you mean by "global syntax"… There doesn't appear to be any. The matches are "find first", requiring you to delimit the pattern with ^ and $ if you want a "full match" (more efficient than wrapping it in .*s). And it's not really a tangent, either. Such a core work-horse function as this clearly needs some options — one of those Lindens also recently stated they wish they could change a function signature to add options that were being discussed, but they can't. And I was particularly dismayed he was talking about a function I had been saying needed options, back while it was still an idea (I forget off-hand which, and whether it was one of my already accepted Jiras — again — or not). The point being, though, if we get the wrong implementation (eg. over-simplified), we're stuck with it forever, and it's unlikely to get fixed this side of 2050. So it's well worth discussing this stuff.
  9. They've been talking about it, so hopefully we will in the near future, and as I just said, case-insensitive regex searching has already been discussed. Also, "not already in the Linkset Data functions" isn't necessarily a limitation; assuming they didn't write their own regex engine (please say it isn't so), it's just a matter of giving us access to the existing features. Indexes and such, especially, are an existing feature of any sane regex library (for good reason). Also, the LSD version is only a "does it match" boolean test (the same would be true of eKVP, list searching, etc.), so it should be very easy for them to sprinkle a few more around the place. The string version, however, absolutely should be a more complete expression of regex matching, with the groups and find-all and what-not.
  10. This reminds me of the discussion on how to do case-insensitivity, and the (?i) syntax (vs. Rider's confusing suggestion of a /i, which only makes sense in the presence of regex-native syntax). I don't seem to be able to find my comments on it in my bookmarks (did I actually post it, or lose it in a PC crash? — I did a fairly significant check of common regex options across several languages, and which ones make sense for LSL). But, that's at least two flags that would be very useful. And as I point out in my comments on this Jira, indexes are often extremely useful, and "find all" is better handled with the start index (and a matching end index in the result list), since you may want those groups, "all" the matches may be too many, and in either case all those matched strings could be problematic (considering match groups can overlap). That said, a "find all" would be convenient too, but I'd think only as a "bonus feature" option of the main function.
  11. Yeah, the leashing itself doesn't care much for LSD and the like. But there's permissions, and settings, and stuff around it that is often rather limited for script memory reasons. That could all do with an update in many items. I know what Qie Niangao means about removing LSD from things, though… While everyone was gushing over how LSD was the saviour of every problem ever, I had experimentally added it into a couple of things, and was basing something new I was starting to work on around it. And then realised, no, it really just isn't needed and doesn't fit, and all that string thunking just makes some things more complicated, and yanked it right back out again. In one of them I did later re-introduce LSD, in a different, less "central" role; it is great for caching (although I really wish they'd made that a feature, rather than having to mess about with numbering keys and watching available space!!!). That said, I also have something I need to rework (it's about a decade overdue already) that WILL greatly benefit from LSD… The freeing up of eKVP has been WAY more of a boon to me than LSD… Soooooo many convoluted eldritchian horrors have been defeated by a little eKVP (one of the worst, I'd only just finished crafting, too, like the week before they announced they were doing it — it turned out well, surprisingly, but I'm not sorry about throwing it right back out again!).
  12. I feel an XKCD moment coming on… https://xkcd.com/927/ Is much sad to see this situation hasn't improved in the past decade since I last looked at it… After seeing the mess, I ended up just basically hacking what I wanted into the OC freebie grabby post and calling it done.
  13. A one-time read (plus re-reading if you detect a change in the notecard's UUID) is definitely the way to go. If they're in nice clean blocks like that, a separator line like [causes] and [effects] would be easier to deal with than a prefix on each line. You read till you hit a separator, then keep reading till a blank line or the next separator; that gives you your range. This also has the benefit of letting you get the full available line width, and not having to trim them each time you read one. Otherwise, if you're stuck with the current format, or they're all mixed in together, or there might be blank lines or comments or whatever, saving only the line numbers is easier on script memory than saving the lines themselves. But, the option I'm amazed no one's chimed in with yet: you can just load the whole thing into LSD (rough sketch below). Run a cause counter and an effect counter, and as you hit each one, you bump its counter and store the trimmed line into an LSD key "causes/<number>" (or whatever key format you prefer). Keep the two counters around and you can just pick a line and yank it out at will. Also, LSD is bigger than notecards, and synchronous, so no more dataserver event juggling once it's been read in…
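A rough sketch of that load-into-LSD approach — the notecard name, the [causes]/[effects] separator lines, and the LSD key layout are all assumptions for illustration:

    // Sketch only: reads a notecard once, filing trimmed lines into LSD
    // under "causes/<n>" and "effects/<n>", with a counter per section.
    string NC = "lines";       // assumed notecard name
    integer line;
    string section;            // "causes" or "effects" once a separator is seen
    key query;

    default
    {
        state_entry()
        {
            llLinksetDataDeleteFound("^causes/", "");    // clear any old entries
            llLinksetDataDeleteFound("^effects/", "");
            line = 0;
            query = llGetNotecardLine(NC, line);
        }

        dataserver(key id, string data)
        {
            if (id != query) return;
            if (data == EOF)
            {
                llOwnerSay("Loaded.");
                return;
            }
            data = llStringTrim(data, STRING_TRIM);
            if (data == "[causes]") section = "causes";
            else if (data == "[effects]") section = "effects";
            else if (data != "" && section != "")
            {
                // bump this section's counter and file the line under it
                integer n = (integer)llLinksetDataRead(section + "/count") + 1;
                llLinksetDataWrite(section + "/count", (string)n);
                llLinksetDataWrite(section + "/" + (string)n, data);
            }
            ++line;
            query = llGetNotecardLine(NC, line);
        }
    }

Picking a random cause later is then just a matter of rolling a number from 1 to the stored causes/count and reading that key back synchronously.
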
  14. Which makes it rather cheeky that every single instance of the script counts the size of the bytecode out of its memory allowance… Gotta love cloud economics. Though, just imagine how much of a mess it'd be if it didn't…! Maybe we should ask for bytecode-pooling rebates that we can apply to future scripts?
  15. Was my first thought too. But doing it that way, you run into the problem of balancing leniency with people getting freebies if they stalk the limit. What I've done in similar circumstances is essentially that, but going one step further by "slipping" the "day" by only half an hour any time the collection is early (sketch below)… So set the minimum nice and lenient (like that 18 hours), but then any time you collect your next one less than 24 hours after the prior, instead of recording the time it was collected, you bump up the last collection time by just 23.5 hours (instead of 24)… This allows for huge leniency like that 18 hours, without giving you an extra one every four days.
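A minimal sketch of that slipping-day idea — the 18-hour minimum and the 23.5-hour bump are the numbers from the post, while the per-avatar LSD key and the touch-to-collect wrapper are assumptions:

    // Sketch only: per-avatar last-collection times kept in LinksetData.
    integer MIN_GAP  = 64800;    // 18 hours  — earliest allowed collection
    integer FULL_DAY = 86400;    // 24 hours
    integer SLIP_DAY = 84600;    // 23.5 hours — the "slipped" day

    integer try_collect(key av)
    {
        string k = "last/" + (string)av;
        integer now  = llGetUnixTime();
        integer last = (integer)llLinksetDataRead(k);
        if (now - last < MIN_GAP) return FALSE;                // too soon
        if (now - last < FULL_DAY)
            llLinksetDataWrite(k, (string)(last + SLIP_DAY));  // early: slip the day forward
        else
            llLinksetDataWrite(k, (string)now);                // on time: record normally
        return TRUE;
    }

    default
    {
        touch_start(integer n)
        {
            key av = llDetectedKey(0);
            if (try_collect(av)) llRegionSayTo(av, 0, "Here you go!");
            else llRegionSayTo(av, 0, "Too soon — try again later.");
        }
    }
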
  16. At least it used ++i… That's already a step ahead of many human scripters!
  17. The for() condition works exactly as per while() and if(). for() doesn't, however, actually care what the init and post clauses are, and they can also be any type. They're optional, and if given, don't have to have any relation to any other part of the loop. They are just free open expressions whose return value is totally ignored — they just have to compile and run successfully. It literally just lowers as (this was also added to the wiki page at some point):

    init;
    while (cond) {
        body;
        post;
    }

It'll even accept the nothing (void) type for either init or post — what gets returned from a function returning "nothing"… (Of some amusement, you get an "Internal server compile error" if you try to shove that into a list; most other cases are blocked by type checking.) And for keys, be aware you also need to check that it "looks like a key"; remember that NULL_KEY isn't actually a key (it's a string — so if(NULL_KEY) is actually true, while if((key)NULL_KEY) is false), and the initial value of a key is "" not NULL_KEY (but that still works because it doesn't "look like a key").
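Since those key quirks trip people up constantly, here's a tiny throwaway demo of them (the touch event is only there to make it a complete, runnable script):

    // Demonstrates the key quirks mentioned above.
    default
    {
        touch_start(integer n)
        {
            key k = llDetectedKey(0);
            if (k) llOwnerSay("a valid, non-null key tests true");

            if (NULL_KEY) llOwnerSay("NULL_KEY is a non-empty string, so this prints");
            if ((key)NULL_KEY) llOwnerSay("never prints: a null key tests false");

            key uninitialised;    // starts life as "", not NULL_KEY
            if (uninitialised) llOwnerSay("never prints: \"\" doesn't look like a key");
            if (uninitialised == NULL_KEY) llOwnerSay("never prints either");
        }
    }
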
  18. And it should not. The idea is not to lose information unless you explicitly tell it to — the set of decimal numbers (what floats are intended to represent) is a superset of the set of integer numbers, therefore it's only safe to implicitly cast in one direction. Of course it's not actually that simple in practice — the assumption in the language design is that we're using 40-bit floats, where you have a full 32 bits of mantissa to match the corresponding integers. That's not the case in LSL, where we only get 24 bits of mantissa, but it's where that difference in casting comes from. And even if you accept implicit rounding, then comes the question of how to do that rounding: round towards zero, round away from zero, round towards larger, round towards smaller, round towards even, or odd, etc. (I'm probably forgetting a few, too), plus the truncation versions of all of those ("rounding" is specifically what you do as a tie breaker on a perfect 0.5; truncation is simply moving the threshold to the integers instead of the centre-point between them) — all perfectly legitimate options, and I have seen a few math libraries that offer the full set. So, implicit casting from float to integer is prevented to leave it to you to specify how you want those decimals to round. (I once implemented a math library where you could parameterise the floating type with a rounding method to allow exactly such implicit casts, and never actually used it because it was mostly pointless.) To be "correct", implicit casting of integers to floats should also be prevented unless the compiler can statically determine that the number will fit, but in practice most 32-bit integers that are subject to implicit float casting do actually fit in 24 bits, and it's assumed that you're aware floats are only approximations to begin with, so someone at some point long before SL was a thing decided to let that one slide.
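For anyone following along in LSL terms, a quick demo of the point — the float-to-integer direction is explicit and you choose the rounding, while integer-to-float goes through silently:

    // The stock rounding choices LSL gives you once you make the cast explicit.
    default
    {
        state_entry()
        {
            float f = -2.7;
            llOwnerSay((string)((integer)f));   // -2  explicit cast truncates toward zero
            llOwnerSay((string)llFloor(f));     // -3  toward smaller
            llOwnerSay((string)llCeil(f));      // -2  toward larger
            llOwnerSay((string)llRound(f));     // -3  to nearest

            integer i = 5;
            float g = i;                        // fine: integer to float is implicit
            // integer j = g;                   // compile error: must cast explicitly
            llOwnerSay((string)g);
        }
    }
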
  19. I'm inclined to agree on llGOD's return generally just acting like it doesn't exist yet… it's what I would assume, too. Still. Figured it a question worth asking. I do think there may be value in being able to query the "rez progress", though… That was a contrived example, but those are the kinds of examples that often have to be considered — if one of us can easily contrive an example, someone else will undoubtedly run into it in actual practice. So, if it's easy to slip in somewhere… it could be handy, and you may well wish it had been some day. Was also mildly hoping there might be enough constructed early that it can be put within sight of llGOD, in which case that would come along for free by virtue of the object only having been partially constructed. Most likely a vain hope, but…
  20. I mostly find llSensor good for "is there anyone nearby"… or when you're watching for one specific person. For much else, that limit it has is woeful. (Also good for "is there NOT anyone nearby"… don't forget that handy no_sensor event…!)
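The bare-bones version of that "anyone nearby?" use, for anyone who hasn't met no_sensor yet (the range, arc, and 30-second repeat are arbitrary choices):

    // Periodic "is anyone around?" scan, reacting in both sensor() and
    // the easily forgotten no_sensor() event.
    default
    {
        state_entry()
        {
            // repeat the scan every 30 seconds; any avatar within 20 m
            llSensorRepeat("", NULL_KEY, AGENT, 20.0, PI, 30.0);
        }

        sensor(integer n)
        {
            llOwnerSay((string)n + " avatar(s) nearby");
        }

        no_sensor()
        {
            llOwnerSay("Nobody nearby");
        }
    }
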
  21. As expected, then. Would have been quite surprised if they'd become sync… (And yes, you mentioned the object_rez_fail receiving NULL_KEY from those two in the first post.) Which leaves: 1) So what will llGOD return prior to the associated object_rez or object_rez_fail event? (Or is that yet to be decided.) And is there a definitive means to query whether the object is rezzed yet? (Thinking, as in communicating the UUID to another script/object — i.e. one that won't be receiving the event — and then the object for some reason takes a weirdly long time to appear.) At least, please just nail down one parameter of llGOD as returning a value useful for this purpose, and document it in the Wiki. 2) Can you explain any more about why you can't change the existing signature to add a return value to a command that does not presently have one? I'm honestly curious, because I can't see any problems syntax-wise. My best guess is that it'd cause a stack crashing issue with existing scripts, or something? Is a shame… 3) We need a REZ_FLAG_USE_CENTER, so it totally covers the existing two commands.
  22. From what I understood, the rez command was unable to return the key of the new object because the actual rez happens asynchronously. (Pretty sure that was, at one point, actually stated in response to requests for them to return the key up front as this command appears to do.) If that is no longer the case, can we then immediately use llGOD to inspect the newly rezzed object? Or does this just reserve a UUID and get the ball rolling, but the bulk of the rez operation still happens ASAP? I don't see any mention of object_rez? Will this new command invoke it? Particularly in the case where a UUID is reserved but the rez still happens asynchronously, object_rez will still be useful as a means to indicate to the script that the rezzed object actually exists now. And can the existing commands now be altered to match? They presently don't return anything, so making them return a key using the same arcane magics this command uses should not break anything; you can't even return the return of a function that returns nothing from another function that returns nothing — which is much sad — but it means those commands can't have been used in a context where this change could cause an issue. In any case, I am personally particularly glad for REZ_FLAG_TEMP, REZ_FLAG_DIE_ON_NOENTRY, and the existing commands invoking object_rez_failure. This new family relationship should also bring with it much new funs to be had.
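For context, this is the long-standing pattern being contrasted with — llRezObject() returns nothing, so the only way to learn the new object's key has been to wait for object_rez (the inventory name "widget" is just a placeholder):

    // The existing rez-then-wait pattern: the key only arrives via object_rez.
    default
    {
        touch_start(integer n)
        {
            llRezObject("widget", llGetPos() + <0.0, 0.0, 1.0>,
                        ZERO_VECTOR, ZERO_ROTATION, 0);
        }

        object_rez(key id)
        {
            llOwnerSay("Rezzed object key: " + (string)id);
        }
    }
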
  23. I had really hoped llIsFriend would be fleshed out a little more, as I'd put forth in my Jira issue (BUG-234404)… Even if not to that extent. (Insert sad expression) But llListFindListNext sounds hopeful… And hopefully the first of several. llSubStringIndexNext is desperately needed too, if my guess as to that function's purpose is remotely on the mark (I haven't yet watched the last meeting, if it was discussed there).
  24. The intent is useful, absolutely — we want the same thing, in fact. I was referring only to the specific method you are asking for. And I had hoped the little story in the middle of my last post would have made my understanding of your issue clear, though perhaps I cut it back a little too much — my post was about twice as long as I'd originally written it… And you're making some unfounded and distasteful assumptions there, which is always a bad footing from which to present an argument — try not to presume what someone else does or does not know, unless you at least actually know them. The little story I had in the middle of my last post was intended to signify that yes, I get the issue quite clearly — back in 2012 when I wrote my picture frame thing, I faced precisely that issue, as I suspect many others have before and since — you are not the first person to discover this issue. (Nor even to suggest that specific remediation, I should add.) Subsequent to this feature's implementation, all NEW images should be tagged appropriately, and it would not be an issue. Likewise, for the older ones, if the original size information is present and merely not yet exposed, then it also will not be an issue. And whether it gets stored in the description as you suggest, the name as can be done right now, or another internal field, is largely irrelevant within the specific and very limited use case you have presented. My counter-argument was merely that the description is a bad place for it to begin with (for reasons I have only briefly touched on, and which you appear to be unaware of), and fails to support the more general cases for which a path is already mostly clear and accepted. To make that one point clear: for any images prior to this feature being implemented where size information is absent, then yes, neither your suggestion nor mine is going to magically add size information to existing textures that lack it, and you would hence indeed have to rename those pictures to manually include it (or otherwise make some other equally manual alteration to the image inventory metadata). So I'm afraid that most of your argument there is utterly redundant and void — we will continue to do what we must, as we always have, ere this feature gets implemented.
  25. That is exactly what I was saying, yes. A texture can be applied by UUID alone — where we don't have the original name, description, or anything of the sort. And I'm not sure we want to let people read the description, even if it had one (presumably one given to the texture at upload time). Having size on the inventory item alone seems mostly pointless. Handy when you're building, sure, but in that case you'll usually have access to the original — or should… and you can always include the size in the name of the image; that's what I generally do (because I know I'm not going to remember it — there's a rough sketch of parsing it back out below). Don't need anything extra for that at all. (And I'm no artist, but even I know that'd go double for "art" destined for a gallery — how an artist can fail to note it in the image name is mind-boggling.) The missing piece is for scripts and viewers. I think most of us have probably made a photo frame at some point that at least shows a random image dropped into its inventory… that was like the second or third script I ever wrote when I was first learning, followed not long after by another much better one that ended up being the basis of a wedding present for an SL friend… (Like you see those billboards with the slats that rotate to show the next image, with a little staggering so it went across as a wave, and then would load the next image onto the previous faces…) As it was, she stretched it to a shape that suited the images she wanted to show, and then clipped and re-uploaded a few to fit properly. Would have loved to be able to adapt to each image's aspect ratio. (I remember having that discussion, and offering to make it scale according to a code in the image name, but she said that sounded unnecessarily complicated, and I wasn't particularly inclined to argue the point.) And the reason my Jira mentions size, with aspect ratio as an afterthought, was exactly recognition that knowing the actual image size could be useful for some weird esoteric cases, and since we know how the images get scaled, we can figure things out when needed, so long as we have all the information… But then the question of formatting comes up ("x" is pretty standard, but CSV would probably be better), what if they change the way they do the scaling… And even just the question of what format to use could well block the feature (if it hasn't already)… So I included the suggestion of a simple aspect ratio as a fall-back. The one point raised here that I hadn't accounted for in recent thinkings on the topic… is that images are stored scaled, but rectangular. I know better, but for some reason I was thinking they were all stored square when I came up with my "use '1' for default" idea. Still… If it uses the usual 6 decimal places for real aspect ratios, and only 5 or fewer for the defaulted ones… that'd work just fine (the "guess" will be wrong by far more than that, anyhow, unless the image was actually a power of two originally). Not ideal, and I'm not sure it even matters, it being both a totally transparent indicator and likely adding next to nothing to the complexity.
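And since the size-in-the-name trick came up, here's a small sketch of pulling an aspect ratio back out of a name like "sunset 1024x512" — the name format and the demo in state_entry are assumptions:

    // Returns width/height parsed from a "<w>x<h>" token in the name,
    // or 0.0 if no usable size token is found.
    float aspect_from_name(string name)
    {
        list words = llParseString2List(name, [" "], []);
        integer i;
        for (i = 0; i < llGetListLength(words); ++i)
        {
            list wh = llParseString2List(llList2String(words, i), ["x"], []);
            if (llGetListLength(wh) == 2)
            {
                float w = (float)llList2String(wh, 0);
                float h = (float)llList2String(wh, 1);
                if (w > 0.0 && h > 0.0) return w / h;
            }
        }
        return 0.0;
    }

    default
    {
        state_entry()
        {
            // e.g. reports 2.000000 for a texture named "sunset 1024x512"
            llOwnerSay((string)aspect_from_name(llGetInventoryName(INVENTORY_TEXTURE, 0)));
        }
    }
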