Everything posted by Bleuhazenfurfle

  1. Just a quick glance (I mostly run text, so very rarely have any reason to look at the viewer source), and it looks like that's unrelated. The "ll" there looks like a standard prefix, something along the lines of "Linden Library", in this case, just wrapping a few methods of the boost regex library to "lindenize" it, for a few uses unrelated to scripting… (Most easily recognised, by those instances all being #include's, among a bunch of others also all starting with the "ll" prefix.) Scripting commands have very little reason to show up in the viewer, outside of some example scripts and tooltip text, and stuff — it's all handled on the server (where it's safe from tampering).
  2. I kinda had that problem in three of my first year Uni classes… With C++… I rocked up to the very first lecture 5 minutes before it ended (the room was sandwiched between two others, and only accessible from a stairwell at the back of the building, which took me forever to figure out, being I was a little late and had missed seeing everyone else being guided around the back by the lecturer), then queried the lecturer on why he was teaching such an inefficient version of a construct — turns out the better version was in a lecture a couple weeks later, and after quizzing me a little, he strongly encouraged me to do another subject I was going to skip because its lectures happened to collide with his (it was a known issue, and we were advised to take his as the mandatory subject of the two). He didn't "get mad", he just said as long as I did the prac classes (and he was also the tutor, so he could check that I knew the material being taught that week), and of course passed the exam (which I did with ease), he didn't "need" (he clearly meant "want") to see me in his lecture classes that semester.
Also to a lesser extent, with first semester Digital Logic; the class was basically building a logic simulator version of the 6502 we've been talking about here — which goes to show how simple a processor it is. We weren't told what we were making (or even that we were "making" anything in particular), we were just taught a bunch of logic stuff, built up some simple modules, and then told to connect the modules together, and finally had to design the instruction decode and control logic to manipulate them instead of driving the control lines manually, and at the end of the semester, we magically had a whole 6502 processor which passed if it ran a piece of test code provided on the day (though the lecturer had swapped a couple opcodes around and stuff to try to hide what it was — and sworn those few of us who recognised the architecture to secrecy, once he realised we were racing off ahead of the class. He also gave us a second marginally bigger processor to implement as an ungraded "challenge", mostly just to keep us busy and quiet during the classes — it worked).
And then again with another class on the CompSci side, where we were rather conveniently writing a 6502 assembler and emulator (in C, this time). I almost got an exemption from the whole subject, but I didn't immediately recognise two of the terms he quizzed me on — TLD's, and cache consistency — annoying part is it turns out I knew about them, I just didn't know the proper terminology — so ended up sleeping a lot. And I'd already messed with writing emulators, so I went a little overboard and did a fully interactive debugger, as well as implementing a bunch of extra assembler stuff that was barely taught, let alone required — and since I'd already done most of it before my first prac class, just for funsies (they had to teach a bunch of stuff before the students could start working on it), the lecturer let me off the otherwise mandatory classes because I'd just be disrupting the rest of the class with bothersome thumb twiddling anyhow (plus, it allowed me to resolve yet another class collision — since I was doing the CompSci stuff alongside my actual course).
In that case, it was the prac tutor who was rather surprised when I rocked up to the very last class to get my work checked off (which was otherwise a "catch up" class for anyone who'd fallen a little behind), though (the lecturer had already seen it, but he wasn't allowed to just magically pass students' prac work, it still had to go through the process which included checking for plagiarism and other kinds of cheating). The best part though, is because I kept falling asleep in the guy's lectures, I placed myself front and centre one time, hoping it'd keep me awake. It didn't. He had the good grace to ask me "if I was tired, or if his lecture was just that boring", and smiled when I waved the question off with a, "bit of both". The only subjects that actually gave me any trouble that semester, were Maths (9am on a Monday morning — plus a couple other mornings) and Analog Electronics (basically, slightly simpler Maths, but with other stuff too). I had to actually study for those…
  3. There is no llRegex. It's been debated a few times (because it's looong overdue), but LL haven't implemented such a thing yet. We only presently have regex in: https://wiki.secondlife.com/wiki/Category:LSL_LinksetData
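For reference, a rough sketch of the only place regex currently shows up, the pattern argument of the LinksetData search functions (the key names here are just made up for the example):
    default
    {
        state_entry()
        {
            // Store a few linkset data keys to search against.
            llLinksetDataWrite("cfg.speed", "5");
            llLinksetDataWrite("cfg.color", "red");
            llLinksetDataWrite("user.name", "Bleu");
            // The pattern parameter is a regular expression, not a plain prefix:
            // find keys starting with "cfg." (up to 10 matches, from the first).
            list cfgKeys = llLinksetDataFindKeys("^cfg\\.", 0, 10);
            llOwnerSay("Matched: " + llDumpList2String(cfgKeys, ", "));
        }
    }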
  4. Damn. Knew about mouse steering, but I'd forgotten the nametag thing, so haven't been using it — with a tail, wings, and long hair, finding a clear spot for that mouse steering can be a little tricky… I also keep forgetting you have to click on the avatar (because a game I've been playing a bit lately uses it too, but a little differently), and end up accidentally selecting some random pie menu option. Haven't, like, accidentally deleted or returned a friend's house yet, or anything, but, I'm rueing the day… (Is that even how that word works? meh, nvm)
  5. I personally love seeing LSL being used to build computers. Haven't been much bothered with that myself (except that time I made a breadboard simulator with physical LED's and resistors that get rezzed and moved into position on a breadboard prim to build a circuit — didn't take it any further, was mostly just messing with the idea)… but I enjoy building up logic-level simulations of my own architectures, complete with matching assembler, mixing together interesting bits of various architectures I've used… I'm not quite such a masochist as to do that in LSL, though… I guess a script could simulate a smallish FPGA okay … could fit a couple LUTs into an integer … damn you all!!!
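For the curious, a throwaway illustration of that "couple of LUTs in an integer" aside (entirely my own example, nothing SL-specific): a 4-input LUT is just 16 truth-table bits, so two of them fit in one 32-bit LSL integer, and 0x6996 happens to be the truth table for 4-input XOR/parity.
    // A 4-input LUT is 16 bits of truth table; two of those fit in a 32-bit integer.
    // 0x6996 is the truth table for 4-input XOR (parity).
    integer lut4(integer table, integer a, integer b, integer c, integer d)
    {
        integer index = (d << 3) | (c << 2) | (b << 1) | a;   // the inputs form a 4-bit address
        return (table >> index) & 1;                          // look up that bit of the table
    }
    default
    {
        state_entry()
        {
            llOwnerSay((string)lut4(0x6996, 1, 0, 1, 1));   // 1 ^ 0 ^ 1 ^ 1 = 1
        }
    }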
  6. Even more completely useless expansion to that; it's actually very common on most architectures that offer division, because they both just fall out of the process, so you're throwing one away each time otherwise. Even worse, unlike multiplication which is often single-cycle even on smaller modern processors, division is very often still multi-cycle on any but the largest (a single-cycle divider dwarfs a multiplier, which itself dwarfs an adder — the original largest component of an arithmetic unit). That's why a lot of languages also offer some form of divmod command which captures both results in one go. It's even more fun in something like risc-v, where you do only get one result (they do 3-register instructions, but not 4), but implementations (especially those which don't do single-cycle division) are specifically recommended to cache the result so a subsequent matching instruction can return the other, without having to re-do the (multi-cycle) division. So yeah, it really, actually, literally is division in disguise.
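In LSL terms the point is just that / and % are two views of the same division, so a divmod-style helper (the name is mine, LSL has no built-in one) hands you both results at once:
    // Both results of the one division: the quotient (/) rounds toward zero,
    // and the remainder (%) is whatever is left over.
    list divmod(integer a, integer b)
    {
        return [a / b, a % b];
    }
    default
    {
        state_entry()
        {
            llOwnerSay(llDumpList2String(divmod(17, 5), ", "));   // "3, 2"
        }
    }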
  7. I have a thing that rezzes a wall full of panels, and because I'm constantly tinkering with the script, it llRemoteLoadScript's them. But because that's so slow, once ready, they take it in turns to help load more. So it quickly rezzes a wall full of blank glowing panels, then the rezzer starts loading the script into the first one. Once loaded, the script cancels the glow, and puts up the texture it's supposed to be showing. The rezzer then starts loading the second panel, but now also the first panel starts loading the third. Now you have three, loading up the next three. And then six loading the next six. I'll sometimes just sit there turning it on and off a few times, because it's kind of fun watching it work.
  8. This is pretty much exactly it. Within or between scripts, events are just shoved directly into the target scripts queue, and will happen in order. But as soon as it touches the network... Like llSay (also llOwnerSay and friends), prim updates, etc., that's where things can get out of order (all the UDP stuff Wolfie mentioned). If I'm writing a lot of text (even llOwnerSay), I'll generally shove a 0.1s sleep between them — and still things occasionally get through out of order. So try to make it order independent.
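A tiny sketch of that habit, nothing clever: a short sleep between sends to shorten the odds of reordering, plus a sequence number in the text itself so it still reads fine if the viewer does shuffle it.
    default
    {
        touch_start(integer n)
        {
            integer i;
            for (i = 0; i < 5; ++i)
            {
                // Number each line so the output still makes sense out of order.
                llOwnerSay("[" + (string)i + "] progress report " + (string)i);
                llSleep(0.1);   // shortens the odds of reordering; doesn't eliminate them
            }
        }
    }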
  9. This is also what the "dots hud" idea is about. You add your own button to each avatar in view, so you can click that. (Gets kinda difficult in a crowd, though)
  10. Yup, that all was completely as expected, zero surprises there. (Thankfully.) But was great to get confirmation of two main points: Assume the notecard may vanish during your loop, and it can occasionally take a weirdly long time to get to cache (so assuming it'll happen during the llGetNotecardLine / llGetNumberOfNotecardLines is a bad idea). Shame they didn't answer the question of cache policy, but not having countered the comment, """ after the last usage of [any of the] 3 nc functions """, rather suggests LRU though, which is good (assuming they didn't either just not see it, or decide to keep that detail internal by just not saying anything). Could probably test that by reading a notecard every couple seconds, alongside the existing duration testing that's been done.
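That probe could be about as simple as a timer poking the notecard with a sync read and reporting whether it still hits the cache (the notecard name here is an assumption, and this is just the test idea, not a claim about how the cache behaves):
    string NOTECARD = "config";   // assumed notecard name
    default
    {
        state_entry()
        {
            llGetNotecardLine(NOTECARD, 0);   // one async read to get it cached initially
            llSetTimerEvent(2.0);             // then poke it every couple of seconds
        }
        timer()
        {
            // If the cache is LRU these reads should keep it alive indefinitely;
            // under FIFO it should still drop out eventually despite being touched.
            if (llGetNotecardLineSync(NOTECARD, 0) == NAK)
                llOwnerSay((string)llGetUnixTime() + ": no longer cached");
            else
                llOwnerSay((string)llGetUnixTime() + ": still cached");
        }
    }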
  11. Actually, it is. You're talking about a fair chunk of additional complexity they'd have to add. As pointed out by others, the sim doesn't know if the notecard even exists, it has to send off a message to the asset server to get it. The asset server will hopefully respond either with the notecard, or a negative. But it may not respond at all. Networks sometimes lose packets, the asset server could be down. Or it might just take a freakishly long time to get the response. Pretty sure the current implementation cares about none of that. It sends the request, and then forgets about it ("fire-and-forget", the way the internet was designed to work). The only one who still remembers (maybe), is you (but clearly not Anna Salyx), by hanging onto that key it gave you. To wait, and generate a NAK, means adding a timeout, and then likely switching over to guaranteed message delivery (or adding in retry and falloff logic), and deciding on timeouts, and the cost of all that and then cleaning everything up again in the very common case of the response coming back normally and none of that extra stuff was even needed. This is the benefit of fire-and-forget; there's no set-up (apart from generating a key), clean-up, or on-going costs (apart from sending that key across the network twice). Especially when the client (you) don't actually care that much (sometimes happens) — and when you do, you know better than the sim how much you care, and can take steps accordingly.
As I was explaining earlier, it depends a lot on the caching mechanism used, how much stuff is fighting for that cache space, and whether your script gets paused during its looping. Caches are often (at least partially) optimistically drawn from the system's memory, meaning the cache shrinks if the memory is needed for other tasks. As an example: a bunch of script lag caused by a lot of scripts using a lot of memory, would both shrink the cache, and introduce longer pauses into your scripts (because all those other scripts can't all fit into the same timeslice, so you find yourself waiting a couple frames in the run queue each time), and if many of them are causing other notecards to be loaded into that reduced cache… And now you're causing your notecard to be loaded into that same cache, over and over… I'm sure you can see where that ends up. (I don't think this is a likely scenario any more, memory has grown over the years, but it's still an example.)
And yes, there's that other stuff that can happen, too. We also know that one physical machine generally runs more than one region, and we really don't know how much headroom each region is given, or whether it's even fixed (even VM's often have a mix of fixed and shared memory). And are these caches shared across all the regions on that one box? (Probably, because balancing multiple caches on the same machine is kinda daft.) But the results seem to suggest the notecards remain cached a nice long time, so it should be very much uncommon — but, uncommon isn't the same as never, so…
And for the most part, it really is as simple as that. Sprinkle a bit of llGetNotecardLineSync into your existing notecard code, and you're done. If you don't mind that initial 0.1s delay (and I know people want their scripts to run FAST, but I'd suggest that it's rare that faster in this way is actually beneficial), then that's really all you need to do to make full use of this — the bonus speed, same reliability, all for barely any additional effort.
(And getting rid of that 0.1s initial delay typically just means wrapping the line handling in a function.)
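A rough sketch of that shape, with the line handling pulled out into a function so the sync call gets first crack and the old dataserver path only runs when it answers NAK (the notecard name and process_line() are placeholders, not anything official):
    string NOTECARD = "config";   // assumed notecard name
    integer line;                 // next line to read
    process_line(string data)     // placeholder for whatever your script already does per line
    {
        llOwnerSay(data);
    }
    read_lines()
    {
        // Fast path: keep reading synchronously for as long as the notecard stays cached.
        string data = llGetNotecardLineSync(NOTECARD, line);
        while (data != EOF && data != NAK)
        {
            process_line(data);
            ++line;
            data = llGetNotecardLineSync(NOTECARD, line);
        }
        // Slow path: not cached (or evicted mid-loop), so queue one old-style read
        // and let the dataserver event pick things up from there.
        if (data == NAK)
            llGetNotecardLine(NOTECARD, line);
    }
    default
    {
        state_entry()
        {
            line = 0;
            read_lines();   // no initial 0.1s delay when the notecard is already cached
        }
        dataserver(key id, string data)
        {
            // (A fuller script would also check id against the request key.)
            if (data == EOF) return;
            process_line(data);
            ++line;
            read_lines();   // back to the fast path for the next line
        }
    }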
  12. With the method I've been saying (and Frionil Fang demonstrated just before EDIT: actually, not quite), worst case, like, even if the caching fails completely somehow and it never finds the notecard in the cache (like they mess up an update and it's looking for an uppercase key instead of a lowercase key <insert innocent whistling>), it just degrades to what we've known and loved since forever. There's no bad, just a little sad. The bad only kicks in if you start making assumptions that the lines will be there when you want them to be — and that will always come back to bite you sooner or later, because you're not in charge of what caching scheme LL chose to apply.
  13. My point was mostly, what happens if it gets cached for 60 seconds, but you happened to last access that notecard around 59.9 seconds ago? And, if they're using a FIFO caching scheme (actually more common than I'd ever assumed), then everything gets shoved out eventually, and a brief pause in your script could allow another to receive a new resource which shoves yours out. Getting confirmation that they're using LRU, would drastically shift those odds in favour of not having to check. But even then, caching mechanisms are the kind of thing they'd want to be able to change without asking us first. I think I'm going to be scripting like it's a FIFO cache with ~1s churn — especially when it's so easy to do.
  14. A practical example I did quite some years ago, was a "dots hud", that puts a little dot over everyone's head. You click the dot to do stuff with that avatar as the target (in this case, it was mostly used for the "magic spells" function of the hud — basically just easier than typing their name in a message, or finding them on a list).
  15. Why? The chances are rather low, for sure, but scripts get paused every bunch of iterations through a loop, especially if they're spending time actually processing the lines they're reading — which gives ample opportunity for the notecard to get unloaded. Unless they're using an LRU caching scheme (and perhaps even then), it will happen sooner or later. As they say, in an infinite universe, everything happens at least once, somewhere. And, the example presented above by Frionil Fang should handle it just fine, especially with the tweak I mentioned for anyone actually wanting to use that approach. And the natural pattern for the dataserver event version is just as simple — basically just wrapping it in a do-while statement. But yes, encouraging one of you to ask in the next meeting was kinda the intent behind my "would be nice to know" earlier… 😁 (I'd have asked myself already, if I could actually get to the darned meetings… they're smack in the middle of when I'm typically sleeping, so if I'm not fast asleep already, then I'm struggling to not fall asleep on the keyboard.)
  16. I know that's easy and requires less thought, but I actually disagree. I kinda like knowing when functions will return (even, THAT functions will return!), and I'd rather it didn't just lock up my script for ages (or crash it with an error) if the asset server goes down or something. This way, we get to choose how long to wait, how to handle a persistent error, etc. That said, I know an awful lot of people don't… There's a lot of scripts out there that just lock up and die if the asset server goes down, and for that and other similar reasons, it's not uncommon to just delete a thing and rez a new one whenever it stops working, and put llResetScript in your on_rez handler. And the amount of effort LL would have to put in to making llGetNotecardLineSync behave that way, without catching on any of those edge cases, just isn't worth it, especially when we can simply do something like above, or, a simple "if not sync, then slow". Just a little tweak to the way we've been doing it already, gives us the best of both worlds. I think that's what I'll be doing.
  17. That's totally horrible…! (And also totally expected.) Was talking to someone about exactly just that the other day. And because we all *know* someone is going to do exactly that… The typical use case for a function like that, is to be able to assume that it worked, and have it return the value — which your retry bail doesn't allow. Changing that `return FALSE` to just `llSleep(retries)`, with a lower minimum retry count (like, 3) would probably be adequate — try forever, but there's clearly something drastically wrong, and we don't want to contribute to the problem, so slow down drastically until it clears. (Another option would be just `llSleep(++retries/10)`, with no if() at all.) Would be nice to know the average time to actually load a notecard, though. And to get some idea of how long notecards will typically remain cached for (LL should know, if they've been caching them as long as I was recently reminded).
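A hedged guess at the shape being suggested (the original function isn't shown, so the name, notecard, and structure here are all mine): retry forever, but back off hard once it's clearly not a transient hiccup, instead of bailing out with FALSE. It also assumes the one old-style llGetNotecardLine request is what pulls the notecard into the sim cache, so the sync read eventually starts succeeding.
    string NOTECARD = "config";   // assumed notecard name
    string get_line_eventually(integer n)
    {
        llGetNotecardLine(NOTECARD, n);   // ask the sim to fetch it (the dataserver event is ignored here)
        integer retries = 0;
        string data = llGetNotecardLineSync(NOTECARD, n);
        while (data == NAK)
        {
            ++retries;
            if (retries > 3)
                llSleep(retries);   // something is clearly wrong: slow right down, but keep trying
            else
                llSleep(0.1);       // normal case: just give the fetch a moment
            data = llGetNotecardLineSync(NOTECARD, n);
        }
        return data;                // the line itself, or EOF; never FALSE
    }
    default
    {
        state_entry()
        {
            llOwnerSay(get_line_eventually(0));
        }
    }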
  18. My guess is it's much more like:
    if (notecard in cache)
        return line
    else {
        /* -- maybe: fetch notecard from datastore (and cache) */
        /* -- but not: if (fetch successful in X time) return line else */
        return NAK
    }
One key thing to realise is there is exactly one function to pause a script — llSleep — every other LSL built-in function is essentially synchronous + delay (which so far as I can tell, is basically also llSleep). Pretty much all the "asynchronous" commands are actually a small "fire-and-forget" synchronous one, with the asynchronicity being little more than an intentional side effect, and an echoed parameter — so it's exceedingly unlikely there'll be any "in X time" in there.
  19. Sigh… Just noticed a rather important typo… that first llGetNotecardLineSync was supposed to be a regular llGetNotecardLine. The basic patterns of how a notecard is used, what they contain, where their contents get stored, etc., don't change in the slightest. It's still just a function that reads lines of a notecard. Neither the notecards, nor what you do with them, has changed.
Should also be noted however, that much of the simplicity many seem to be expecting is negated by the expectation that the notecard can get evicted from the sim's cache at some point. That's less likely to happen in a tight processing loop just screaming through the notecard, but random reads over time (like a "joke of the day" script that says a random line every so often) will almost certainly occasionally fail and have to resort to the usual slow method — so for random reads, it may actually be easier to just keep using the regular slow version since that way you don't need to handle two separate execution paths (one immediate, the other via a dataserver event). Just like LSD, this isn't a solution to every conceivable notecard issue ever, and in fact, this adds a small amount of additional complexity which you'll need to deal with to make use of the extra speed it offers (unless you're happy with it failing from time to time).
What has changed is that once we know the notecard is cached on the sim, we can do a fast loop that processes the notecard very quickly, rather than waiting 0.1s, plus an event invocation, between each and every line. And we can probably (but we don't actually know) assume that the notecard will remain cached during that loop — however, loops pause for a frame or so periodically, and so far as any of us know, there is therefore the possibility the notecard could get evicted from the cache during such a loop. (LL may be able to offer some insight on that possibility, or, more likely, we'll just have to wait and see.)
Which was what I was trying to get at in my previous post — we likely still have to write our scripts assuming the notecard will not happen to be in memory when we want it, and perhaps even assuming those fast loops won't get all the way through before they have to fall back to another dataserver event. So I expect to mostly see llGetNotecardLineSync within the regular dataserver events, not replacing them.
  20. We'll have to see how it goes in practice, but I suspect the common pattern will be; Start off each time with either llGetNotecardLineSync, or llGetNumberOfNotecardLines, and then in the dataserver event, go nuts reading it with llGetNotecardLineSync. Your old-style read can likely just be a dummy read, you don't actually need to use the returned data, just use the dataserver event as telling you the notecard is safely loaded into the sim's cache. There's a bunch of potential edge cases there, and I haven't seen any hint of whether or how LL are dealing with them, but an awful lot of scripts about today will just lock up and die should a single dataserver event fail to arrive (the most common example codes I see everywhere have that issue), and they mostly continue to work, so, I expect this to at least be better than that (a little extra hand-holding might actually be an improvement).
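A rough sketch of that pattern, with the typo correction from post 19 applied (the warm-up read is a regular llGetNotecardLine, used purely as an "it's cached now" signal; the notecard name and the llOwnerSay stand in for whatever your script actually does):
    string NOTECARD = "config";   // assumed notecard name
    integer line;
    default
    {
        state_entry()
        {
            line = 0;
            // Dummy warm-up read: the returned data isn't important in itself, the
            // dataserver event just tells us the notecard is now in the sim's cache.
            llGetNotecardLine(NOTECARD, line);
        }
        dataserver(key id, string data)
        {
            // Go nuts with the sync version for as long as the cache holds.
            while (data != EOF)
            {
                llOwnerSay("line " + (string)line + ": " + data);   // stand-in for real processing
                ++line;
                data = llGetNotecardLineSync(NOTECARD, line);
                if (data == NAK)
                {
                    // Evicted mid-loop: fall back to one old-style read and let this
                    // event fire again when the line (and the notecard) arrives.
                    llGetNotecardLine(NOTECARD, line);
                    return;
                }
            }
            llOwnerSay("done: " + (string)line + " lines read");
        }
    }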
  21. For my part, I don't forget that other people don't know how to code, but I will admit to not really understanding it. I kind of do, though: I can't grok natural languages. I know English reasonably well, but my attempts to learn anything else have likewise failed. But that's because I have no memory, and natural languages make no sense. Programming languages are simple and straightforward by comparison, so I don't get why anyone has any trouble with them. 🤪
  22. Coooool…! I am glad someone actually did that. I was going to at one point, even made a start on it (think I might have mentioned that in one of my earlier posts), but got distracted by another project. I can scrub that idea off my TODO list now (I think it might be below the "TODO list horizon", though — that's the TODO list version of the cosmic horizon, the list goes on, but you'll never see any of it because items keep getting added above that point quicker than you can ever hope to check them off).
  23. Casting rounds towards zero, where llFloor rounds towards -Infinity. Just use the one that suits your requirements. Why use llSubStringIndex? You already know where the decimal point is; it's at -7. And remember you can change the truncate into a round by adding half the truncation place.
It's actually the fixed method (as opposed to the "float" method). Truncation (integer cast, floor, or ceil), or rounding (adding a half before the truncation), is just one more of several tuneable parameters of the same method. Another tuneable parameter is the use of decimal digits, or something like hex, or b64, for example. LSL being as limited as it is, only offers a few combinations that work well, like b64 is much better than decimal, but there's also no easy way in LSL to get a variable-length b64 encoding — which the decimal form does for you. (Note that b64 simply encodes 6 bits per digit, hex is 4, and decimal is the same idea but encodes a weird 3.3-ish bits per digit.)
So, an integer is just a "fixed" format (as opposed to "float") of 0 decimal places. The pre-multiply (and associated post-divide) makes it a fixed of 2 decimal places instead — either way, the exponent is still present, it's just converted from bits in the output, to complexity in your head (since LSL doesn't give us objects, which would typically be used to encode it into code). But yes, it is a time-honoured tradition for saving a few bits (and by "time honoured", it literally predates floating point numbers).
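A small sketch of that "fixed, 2 decimal places" version: pre-multiply, round by adding a half, truncate with the cast, and post-divide on the way back out (the +0.5 rounding as written assumes non-negative values; the function names are mine):
    // Fixed-point with 2 decimal places: the exponent lives in your head (and in
    // the 100.0 constants) rather than in the stored bits.
    integer pack_fixed2(float value)
    {
        return (integer)(value * 100.0 + 0.5);   // pre-multiply, round, truncate toward zero
    }
    float unpack_fixed2(integer stored)
    {
        return stored / 100.0;                   // post-divide to get the value back
    }
    default
    {
        state_entry()
        {
            integer stored = pack_fixed2(1.2345);   // 123
            llOwnerSay((string)stored + " -> " + (string)unpack_fixed2(stored));   // "123 -> 1.230000"
        }
    }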
  24. Best way these days, is to take the string of the float, turn the 0's into spaces (llReplaceSubString), then use llStringTrim, and turn them back again. But, if you want to preserve some decimal places, you end up having to add them back on again. You also have to remember to deal with a hanging decimal point… If you're happy with just fewer (fixed) decimal places, then add half the decimal place you want to clip it at (for rounding), and then just cut the string at a negative index according to the number of digits you want to remove (there's always 6 to start with).
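A sketch of that zeros-to-spaces trick, including the hanging decimal point cleanup it needs (trim_float is my name for it; only the trailing spaces get trimmed, so the interior zeros safely turn back into zeros afterwards):
    // Strip trailing zeros from a float's string form, e.g. "10.500000" -> "10.5".
    string trim_float(float f)
    {
        string s = llReplaceSubString((string)f, "0", " ", 0);   // zeros -> spaces: "1 .5      "
        s = llStringTrim(s, STRING_TRIM_TAIL);                   // drop only the trailing ones
        s = llReplaceSubString(s, " ", "0", 0);                  // interior spaces -> zeros again
        if (llGetSubString(s, -1, -1) == ".")                    // deal with a hanging decimal point
            s += "0";                                            // (or chop it off, to taste)
        return s;
    }
    default
    {
        state_entry()
        {
            llOwnerSay(trim_float(10.5));   // "10.5"
            llOwnerSay(trim_float(2.0));    // "2.0"
        }
    }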
  25. Gotta say, I was rather hoping for a hands-free SL controller…