Frionil Fang
Resident · 385 posts

Everything posted by Frionil Fang

  1. There's a string indexing error in the function, making it return FALSE unless the strings match exactly. That wouldn't cause a math error, but still. You're passing the substring length as the ending index to llGetSubString, but it returns the characters *inclusively* between the indices, so the index should be decremented. For example, your beginsWith("abcd", "abc") would get string length 3 for the prefix, and would then compare llGetSubString("abcd", 0, 3) => "abcd" to "abc", failing every time unless the strings match exactly and the overflowing ending index compensates. To fix it, check for the string length being 0 before comparing (in a separate if clause so the string function isn't evaluated when the length is 0; LSL's logical AND/OR aren't short-circuiting, so every part of a statement gets executed), and then use length-1 as the ending index:

     integer beginsWith(string str, string substr)
     {
         integer ss_length = llStringLength(substr);
         if(!ss_length) return FALSE;
         if(llGetSubString(str, 0, ss_length - 1) == substr) return TRUE;
         return FALSE;
     }
  2. If you're talking about streamed parcel music fading in and out, that's been happening for some 7 years now across 3 different computers and several ISPs. I do some lowkey DJing and it happens once or twice per set for me. The viewer uses a premade library for handling audio streams and it's just configured weird or buggy, occasionally running out of data and fades out for a moment. Never found any good manual fix or explanation, and it's clearly not high priority on LL's list of bugs, dunno if it even happens for everyone. Edit: parcel media/media on prim like youtube do not use the parcel audio stream, so they wouldn't suffer from that bug.
  3. Games often include a brightness/gamma setting, but here's the thing... they're games, made with consistent art direction, the content in scenes designed to work together, so there's not much need to fiddle with things to make mix'n'match work. There's the reference environment you could design on (https://github.com/Jenna-Huntsman/Second-Life-Resources/tree/main/PBR/HDRi) but I don't see it being very foolproof in practice, not that baking in lighting and eschewing any materials or using old Phony-Bling materials is any more so.
  4. The supported character set could be changed, but you'd have to change the typeface textures as well, etc. As seen above, the set defined in the code is as follows (though it appears to be missing a couple of the line-drawing characters for some reason, maybe the version I checked is old or something):
  5. Yes, that was my gist; sorry, I'm a bit extra verbally challenged today. The unfortunate design here is that the ambient occlusion is packed into the same texture as roughness and metallic, with no way to separately tile/offset the different packed parts, so they're permanently stuck to each other. You can't change only one without creating a new texture asset. Personally I'd love to have a separate scale/offset for each of the packed ORM channels, but that's probably not going to happen.
  6. Alpha blended surfaces are often displayed in the wrong order. Imagine you had those occlusion-shadows as a "decal" on an object, and then a person with alpha blended hair stands in front of it; it's very possible for the hair to be drawn *behind* the occlusion shadow. It's aggravated by the objects overlapping or at least being close to each other in some sense: my neighbor has alpha blended palm trees far outside my window and I can't find an angle where the order goes wrong, but the prefab glass of the window that I covered with a blended effect layer will display the dull grayish glass on top about half the time, and my own enhancement the other half. It's not a SL fault; alpha blend sorting is a hard problem in general (making the sorting reliable is expensive, and usually not worth it) and if you pay attention you can see it happening in many games. More modern games can use different tricks to mitigate/circumvent the issue, but they're probably not very applicable to SL. If multiple alpha blended surfaces never occupy the same pixels on your screen, there won't be unstable ordering.
  7. It also has the downside that SL doesn't actually do "decals": you're just slapping down separate alpha blended geometry, which is asking for blending order problems. I'd imagine a more specialized game engine might have a decal mode that renders in a predictable manner, so bullet holes and blood splatters on a wall are displayed at the wall's depth instead of on top of other alpha blended objects in front of the wall.
  8. You have to use the PBR channel radio buttons: Complete material sets all the channels' scale/rot/offset at once, otherwise you change only one of them.
  9. DDR memory is generally installed in pairs; it's probably just a marketing label (see the 2x4 GB: each module is 4 GB, but sold as a 2x kit). You can install a single DDR memory module, but it won't get the speed boost of a dual-channel configuration.
  10. Reflection probes behave like a panoramic camera: they capture a snapshot of what they see, and use that for illumination/reflections inside their volume. They do "see" beyond their dimensions, but their effect is limited to their volume. A probe that sits inside a room and doesn't reach the internal walls *will* still display the walls in the reflection, but the ambient lighting will likely mismatch between the covered and uncovered areas. Conversely, if the probe leaks outside the area it's supposed to be probing, you can end up with errant reflections or mismatched lighting on the outside.
      The probe here is a box: the white "floor", the purple sphere and the mirror wall are within the probe; the rest of the room is not, but it's still caught by the probe in the reflection. Note the sky/surroundings being visible in the non-probe-covered slice on the left; that part is left to the always-existing automatic probe. On the opposite side of the mirror wall, we have a small mirror cube. That mirror cube sees the purple sphere thanks to the probe (from the probe's point of view, the mirror wall is not in the way), and the mirror wall sees the sphere's reflection on the cube but not the sphere itself.
      Multiple probes with the same ambience will mostly coexist while overlapping, but there can be some artifacts on surfaces caught in the overlapping area: each probe sees the world from a slightly different position and might not blend seamlessly. Still, with how imperfect and approximate the reflections are to begin with (this isn't ray tracing), it's probably okay for many situations.
  11. You configure your marketplace store with an ANS callback URL. That means you'd have to have some kind of static server; you couldn't just slap it in an inworld script with its ephemeral URLs.
  12. Pathfinding has many caveats about where it can be used at all and is generally pretty poor, and llSetPos is simple but not server efficient. Keyframed motion is by far the cheapest method; it's not "free", but once a keyframed motion has been started, it runs with quite low resource usage. It isn't perfectly reliable, though: the movement may drift over time, so you'd be wise to stop it once in a while and use standard positioning to ensure your object is where it should be, then restart the animation. You'd use the same circle equation for the path as you would with llSetPos, except you'd precompute the keyframes into a list instead of evaluating the formula directly as a function of time.
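A minimal sketch of the precomputed-circle approach described above; the radius, step count and period values are arbitrary examples, and the keyframes are position deltas (not absolute positions) since that's what llSetKeyframedMotion expects:

```lsl
// Build a list of translation keyframes approximating a circle.
// Each entry is a position delta plus the time to travel it.
list circle_keyframes(float radius, integer steps, float period)
{
    list kf;
    float dt = period / steps;          // time per segment; keep it >= ~0.1 s
    vector prev = <radius, 0.0, 0.0>;   // starting point on the circle
    integer i;
    for(i = 1; i <= steps; ++i)
    {
        float a = TWO_PI * i / steps;
        vector p = <radius * llCos(a), radius * llSin(a), 0.0>;
        kf += [p - prev, dt];           // delta from the previous point
        prev = p;
    }
    return kf;                          // deltas sum to zero: a closed loop
}

default
{
    touch_start(integer n)
    {
        // 2 m radius, 24 segments, one revolution per 12 seconds, looping
        llSetKeyframedMotion(circle_keyframes(2.0, 24, 12.0),
            [KFM_DATA, KFM_TRANSLATION, KFM_MODE, KFM_LOOP]);
    }
}
```

More steps give a rounder circle at the cost of a longer keyframe list; to correct drift, stop with llSetKeyframedMotion([], []), llSetPos back to the start point, and restart.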
  13. As per the test scripts in this thread, including mine, yes, it returns EOF. I don't know where this sudden misconception that you need the line count comes from; you just need to call one of the async functions to cache the notecard, you don't need to handle the async event or do anything with what's returned. I'm certainly going to (ab)use that style; adding the -Sync version will be a nice drop-in speed boost to existing dataserver-event based notecard reading, but having to service an async event to be able to use a sync function sounds like it's not very synchronous at all... and sometimes you just might want to get the data right there, right now (even if it takes a retry or two to get a proper return value). If the asset server is so knackered it can't fulfill that, I'm pretty sure there are more pressing issues, and old-style async reading wouldn't be working great either.
  14. With some (very) crude testing, the speed is quite comparable between the two of them. I didn't do any fine-grained or huge dataset testing, but reading a notecard with 10 lines of 1024 bytes completes within one frame (llGetTime() returns 1/45 seconds); reading the same data from linkset data (keys = line numbers, values = lines) fluctuated more, but the median was 2 frames. Reading 80 lines of 128 bytes, sync notecard reads completed within 2-3 frames (median 2), LSD reads within 1-2 frames (median 2). That's clearly a bug on the test server release, since it breaks the old notecard reads too; you can very definitely read the last line of a notecard on actual production servers. Possibly why the functions aren't available on the main grid yet, even on RC servers?
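For reference, a minimal sketch of the linkset-data layout used in the comparison above (keys = line numbers, values = lines); the "line_" key prefix and the sample strings are just illustration:

```lsl
// Store notecard-style lines in linkset data, then read them back in order.
default
{
    state_entry()
    {
        llLinksetDataWrite("line_0", "first line of data");
        llLinksetDataWrite("line_1", "second line of data");

        integer i;
        string s;
        // llLinksetDataRead returns "" for a missing key, ending the loop
        while((s = llLinksetDataRead("line_" + (string)i)) != "")
        {
            llOwnerSay((string)i + ": " + s);
            ++i;
        }
    }
}
```

Unlike notecards, LSD is writable at runtime, but it's capped per linkset and empty-string values are indistinguishable from missing keys in this simple loop.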
  15. When you edit a notecard, it gets a whole new UUID; but much like uploading an identical duplicate of a texture yields the same UUID, saving an identical duplicate of an existing notecard (i.e. you undid your edits and resaved; the old asset wasn't deleted, just unlinked from the notecard's inventory entry) gets the original UUID back, and that was still cached. This was news to me too until just now, but you can confirm it via right click + Copy Asset UUID: notecards with the same data in them have the same UUID, even if they weren't direct copies.
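You can also check this from a script with llGetInventoryKey; a small sketch, assuming two full-permission notecards named "Note A" and "Note B" are in the prim's inventory (llGetInventoryKey returns NULL_KEY for items that aren't full perm):

```lsl
// Compare the asset UUIDs of two notecards: identical contents => same UUID.
default
{
    touch_start(integer n)
    {
        key a = llGetInventoryKey("Note A");
        key b = llGetInventoryKey("Note B");
        if(a == NULL_KEY || b == NULL_KEY)
            llOwnerSay("One of the notecards is missing or not full perm");
        else if(a == b)
            llOwnerSay("Identical contents: shared asset " + (string)a);
        else
            llOwnerSay("Different assets: " + (string)a + " vs " + (string)b);
    }
}
```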
  16. For actual use you should probably try to ensure the asset exists in the first place etc., but it's just about the same as with async reading. I wasn't bothering with that just to sate my curiosity, so I made it give up instead of throttling (beyond the 0.1 s built-in delay on llGetNumberOfNotecardLines; on every new uncached notecard the data was available on the first retry). But yes, the most reliable way to ensure the existence would be to successfully get to the dataserver event; at least then you can be sure there's something there.
  17. The function is available on the beta grid in the Cloud Sandbox region, so I went to snoop a bit. The constant NAK consists of 3 characters, 0x0A 0x15 0x0A, which translates to newline, NAK, newline (compare to EOF, which is newline, newline, newline). You do not need to use the dataserver event at all; you just have to make the call to cache the notecard, and you can simply ignore the result and try the same line again afterwards. This example script also loads the notecard via the standard method for a very loose timing comparison: loading and printing a 110 line* notecard that was not cached took ~0.6 seconds with the sync method, ~14.8 s with the classic one. *NOTE: at the time of this writing, both llGetNotecardLineSync AND llGetNotecardLine are broken on this beta server: they never return the last line and return EOF prematurely.

     integer sync_read(string name)
     {
         integer i;
         string line;
         integer retries;
         while((line = llGetNotecardLineSync(name, i)) != EOF)
         {
             if(line == NAK)
             {
                 llGetNumberOfNotecardLines(name); // load into cache, ignore the dataserver event
                 // safeguard against repeat errors: this could happen if you load by UUID
                 // that is invalid; it will never give a script error, nor give up returning NAK
                 // normally it will be available after the first attempt
                 if(++retries > 10) return FALSE;
             }
             else
             {
                 llOwnerSay((string)i + ": " + line);
                 ++i;
             }
         }
         llOwnerSay("Used " + (string)retries + " retries");
         return TRUE;
     }

     // for normal async read
     string nc_name;
     key nc_handle;
     integer nc_line;

     default
     {
         state_entry()
         {
             nc_name = "40fb312a-289f-47ac-d987-7fa05a1df787";
             llOwnerSay("--- Sync read ---");
             llResetTime();
             if(sync_read(nc_name))
             {
                 llOwnerSay("Took " + (string)llGetTime() + " seconds");
             }
             else
             {
                 llOwnerSay("Notecard could not be read");
             }
             llOwnerSay("--- Async read start ---");
             llResetTime();
             nc_handle = llGetNotecardLine(nc_name, nc_line);
         }

         // only used by the async read; the dataserver request by the sync read is ignored
         dataserver(key id, string data)
         {
             if(id != nc_handle) return;
             if(data == EOF)
             {
                 llOwnerSay("Took " + (string)llGetTime() + " seconds");
                 return;
             }
             llOwnerSay((string)nc_line + ": " + data);
             nc_handle = llGetNotecardLine(nc_name, ++nc_line);
         }
     }
  18. For some reason I thought it was ~5 minutes, but since the objects were created before the test started the numbers shouldn't be *too* wrong in this case. Thanks, good to know for future.
  19. I made a very crude test: a prim that runs a sensor repeat on avatars, once per second, in a 20 meter radius (with 1 avatar, myself, in said radius), and another prim with volume detect enabled and only a collision_start event, using keyframed motion so that it bumps into me once per second. Both prims do nothing with the info but keep a float text counter so that I can see they are correctly detecting me. The sensor repeat has no no_sensor event; the volume detect has no collision_end event. After running them for a couple of minutes, I checked what the script time info says: 0.0015 ms for the sensor repeat, 0.0011 ms for the volume detect. Next I moved outside the range of both of them and let them run another couple of minutes, then checked the script time again: 0.0003 ms for the sensor repeat (rounded down), 0.0003 ms for the volume detect (rounded up). Conclusion: with a single avatar in range, the difference is negligible but in favor of the volume detect approach. If I had to guess, more avatars would sway the advantage further toward volume detect, but 0.0015 ms for the single avatar case is very, very little, and 0.0003 ms for no avatars is basically nothing.
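The two test prims can be sketched roughly like this (two separate scripts, one per prim; the float text counter and the 20 m / 1 s parameters match the test above, the keyframed bumping motion is omitted):

```lsl
// Script 1: sensor repeat prim
integer hits;
default
{
    state_entry()
    {
        // any avatar, 20 m radius, full sphere arc, once per second
        llSensorRepeat("", NULL_KEY, AGENT, 20.0, PI, 1.0);
    }
    sensor(integer n)
    {
        llSetText((string)(++hits), <1.0, 1.0, 1.0>, 1.0);
    }
}

// Script 2: volume detect prim (goes in the other object)
/*
integer hits;
default
{
    state_entry()
    {
        llVolumeDetect(TRUE); // phantom, but fires collision events
    }
    collision_start(integer n)
    {
        llSetText((string)(++hits), <1.0, 1.0, 1.0>, 1.0);
    }
}
*/
```

With no no_sensor or collision_end handlers, neither script does any work at all while nothing is in range, which matches the near-zero idle script times measured.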
  20. Try Freesound.org. You can filter by license: Creative Commons 0 means you can use the sound however you please, Attribution means you must mention the original source but otherwise aren't limited in how you use the sounds. There are others too, explained in the site's help section on licenses.
  21. I tested it with an alt present who certainly had not seen some random textures: I put them on a 100% transparent cube, which would emit them as particles on click. The textures were immediately available whether they were on the object upon the alt's arrival, or changed via the script before firing the particles. If I turned off the "preload" part, using a new texture resulted in gray loading particles. That's just one data point though; maybe there are situations where a 100% transparent object is optimized away. I think I've had my distrust about it in the past too, and left some texture cachers at minimum alpha instead of 0.
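A rough sketch of that preloader setup; the inventory texture name and the particle parameters are placeholders, and the key assumption being tested is that viewers still fetch a texture applied to a fully transparent face:

```lsl
// Fully transparent prim that applies a texture (so viewers fetch it)
// and emits it as particles on click.
string tex = "my preload texture"; // assumed name of a texture in the prim's inventory

default
{
    state_entry()
    {
        llSetTexture(tex, ALL_SIDES); // the "preload" part
        llSetAlpha(0.0, ALL_SIDES);   // invisible, but the texture should still load
    }
    touch_start(integer n)
    {
        llParticleSystem([
            PSYS_SRC_TEXTURE, tex,
            PSYS_PART_START_SCALE, <1.0, 1.0, 0.0>,
            PSYS_PART_MAX_AGE, 3.0,
            PSYS_SRC_BURST_RATE, 0.2
        ]);
    }
}
```

If the preload is skipped (no llSetTexture in state_entry), the first burst shows gray loading particles instead, as described above.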
  22. Peeve: mortality. I learned today that one of the people I started SL with passed away a couple months ago. We had drifted apart, but I considered him a friend nonetheless and there's many memories. Rest easy, dude.
  23. Well, I can't claim I'm clever enough to give the exact how and why, especially why only a child prim, as I don't see any obvious special handling for child prim omega, but it seems like a case of traditional floating point inaccuracy. The viewer keeps track of an "accumulated angular velocity" quaternion: each update, it takes the time passed since the last update, generates a rotational change quaternion from it, and composes this rotation with the accumulated velocity. Repeated composition of quaternions is almost certain to start to drift; there is only so much precision. If my logic is right, this also means the drift gets worse the better your framerate: more frames = more updates = more repeated quaternion compositions. Supporting this is that I couldn't quite get the drift to happen by letting a spinny light stay in the background, but once I turned off the background framerate throttling, it became apparent after a couple of minutes. Edit: the relevant code is in indra/newview/llviewerobject.cpp, void LLViewerObject::applyAngularVelocity(F32 dt)
  24. That explains why I saw nothing when I left a test object running for half an hour. Mental note to check the viewer code, out of curiosity, for how the local prim omega effect is computed. If I had to guess, the child prim rotation is constantly referencing the root prim in some way that slowly accumulates error and turns the axis towards -z (the position probably doesn't change).