
Jenna Huntsman

Resident
  • Posts

    672
  • Joined

  • Last visited

Posts posted by Jenna Huntsman

  1. 14 hours ago, Qie Niangao said:

    First, a huge thank you for looking at this and giving feedback; I was so hoping you specifically would take a look. Now I'm thinking I too will end up doing some "empirical study" to better inform my next pass at this.

    It seems, if I'm reading your other notes correctly, that my laziness got me in trouble: I was trying to avoid putting labels inside swatches that would change colors depending on the environment in which the probe was being set. As a result, I put the labels outside those swatches along a horizontal line, so they erroneously appear to label the level of that line, not the swatch above or below it as intended. It was all to avoid finding a contrasting and readable label color at runtime, but now I think simple black or white will work for inside-swatch labels depending on the value of the swatch color.

    The "EEP ambiance" label text doesn't appear to work anyway, and I guess "Environment Ambient" needs to squeeze in there somehow.

    There's also a nagging problem with the very horizontality of that 0-1 range sum of Environmental Ambient and indirect "Irradiance" lighting. I need to show that there's a simple proportional amount of each, but that suggests there's a y-value that's constant over that range and there's not. I stared at that for a long time and just couldn't find a practical, less confusing alternative. (This is related to why I find the single float so confusing: it doesn't monotonically adjust any perceptible quantity except over piecemeal ranges, and even there it's a non-obvious assortment of quantities being adjusted. Well, non-obvious to me, that is.)

    I did some additional testing just now, here's what I found:

    total_ambient is a combination of all things that contribute to ambient lighting directly, so that means:

    • EEP Ambient Color
    • Cloud color (likely multiplied by cloud coverage value)

    This means that, with newer PBR presets, total_ambient may read ZERO_VECTOR if their EEP ambient color is set to black ( <0,0,0> ) and their cloud coverage is set to zero, even though the actual observed ambient light value is something else - hence why I say that fade_color with an additional clamping mechanism is a better measurement (among other reasons).
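
    For illustration, a minimal LSL sketch of the clamping I have in mind (the helper name is mine; it assumes you've already read the fade_color value out of the environment, e.g. via llGetEnvironment):

        // Hypothetical helper (name is mine): clamp each component of a colour
        // vector - e.g. a fade_color value read from the environment - back into
        // the valid 0.0 - 1.0 range before using it as an ambient estimate.
        vector clampColor(vector c)
        {
            if (c.x < 0.0) c.x = 0.0; else if (c.x > 1.0) c.x = 1.0;
            if (c.y < 0.0) c.y = 0.0; else if (c.y > 1.0) c.y = 1.0;
            if (c.z < 0.0) c.z = 0.0; else if (c.z > 1.0) c.z = 1.0;
            return c;
        }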

     

    Anyway - Some feedback on the graph you made, just from personal opinion, and I'm not sure I have any good solutions for these (As I write, I've had a few drinks and can't think straight enough haha!)

    • The graph suffers from non-linear range issues, meaning the 0-1 scale, the 1-4 scale and the 4-100 scale are each represented by the same distance on the axis
    • Not sure what the "Current EEP minimum" slider is meant to do. Could be that my inebriation is inhibiting me from understanding, but clarification would be appreciated.
    On 1/19/2024 at 9:12 PM, Qie Niangao said:

    ² If a product includes multiple reflection probes in the linkset with potentially different settings for each, there'd also need to be a way of navigating among them (at least by name).

    The way that I've handled this up until this point is to put the name of the room that the reflection probe is in into the description field, which can then be read later by a script. Generally you want all reflection probes in the same room to share the same ambiance value.
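
    As a rough sketch of how a script can then navigate among the probes (the function name and output are mine, not from any product): walk the linkset and read each prim's description, which is where the room name lives:

        // Rough sketch (function name is mine): report each child prim's description
        // so a menu script could offer the probes grouped by room name. Assumes the
        // probe prims carry the room name in their description, as described above.
        listProbeRooms()
        {
            integer links = llGetNumberOfPrims();
            integer i;
            for (i = 2; i <= links; ++i) // child prims start at link 2
            {
                string desc = llList2String(llGetLinkPrimitiveParams(i, [PRIM_DESC]), 0);
                if (desc != "")
                {
                    llOwnerSay("Link " + (string)i + " is in room: " + desc);
                }
            }
        }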

    • Thanks 3
  2. 8 minutes ago, Qie Niangao said:

    Above I threatened to create a UI for the scripted function that can adjust a reflection probe's ambiance (with PRIM_REFLECTION_PROBE attributes), so I took a shot at a design. This concoction made me realize just how much I didn't know, which means this is sure to include errors, so it's a DRAFT strawman very much in need of review:

    [Screenshot: draft UI mockup for the probe ambiance tool]

     

    Some feedback:

    • "EEP" ambiance isn't a constant value as this changes with the sky setting, so there should be a swatch showing the ambient setting.
    • Any probe ambiance value above 1 is a multiplier on the irradiance contribution - values above 4 make for extra sky contribution (so, "extra sky" should be clarified). There's a quick scripted sketch of adjusting this value after this list.
      3 hours ago, Qie Niangao said:

      The total_ambient attribute of SKY_LIGHT (which I'm calling "extra sky" in hopes it's the color of the wiki's "only indirect lighting received from the sky")

       

    • My own testing of that function has told me that fade_color is usually a more accurate approximation of indirect light produced by the environment preset. (It does require being clamped into a valid range, however).
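
    On the scripted side of this, a minimal sketch of adjusting a probe's ambiance from LSL (I'm assuming a PRIM_REFLECTION_PROBE parameter order of [active, ambiance, clip distance, flags] - double-check that against the wiki before relying on it):

        // Minimal sketch only - the parameter order here is my assumption; verify it
        // against the PRIM_REFLECTION_PROBE documentation before use.
        setProbeAmbiance(integer link, float ambiance)
        {
            llSetLinkPrimitiveParamsFast(link, [
                PRIM_REFLECTION_PROBE,
                TRUE,      // keep the probe active
                ambiance,  // per the note above, values above 1 act as an irradiance multiplier
                10.0,      // clip distance (placeholder value)
                0          // flags (placeholder: default sphere influence volume)
            ]);
        }
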
    • Thanks 3
  3. 1 hour ago, Conifer Dada said:

    that it's possible to disable PBR by going to 'advanced' > 'rendering types' and unchecking PBR. Doing this made no difference to performance or graphics.

    That's because this option doesn't really disable PBR. It just stops any surface with a PBR material from being rendered, for debugging purposes. You can disable Blinn-Phong in the same way, too.

  4. 8 minutes ago, Arielle Popstar said:

    Someone @Coffee Pancake? recently posted a pingable address for the CDN servers. Think there was mention that they were slow at least back then.

    These aren't the CDN servers, but there's a list of IPs which are pingable for the AWS datacenters around the world -

    http://ec2-reachability.amazonaws.com/

    I've run a few tests to these addresses, as it seems a few of the AWS nodes in US-West-2 are encountering problems which result in packet loss that can be seen in SL on some regions. It seems an issue happened at AWS over the holidays which has yet to be fixed.

    • Like 1
    • Thanks 2
  5. 4 minutes ago, Henri Beauchamp said:

    Well, I'm afraid the Linden in question did not quite understand how the texture fetcher worked, and was misled by the yo-yo effect in the texture discard bias algorithm.

    First and foremost, when you have the full-size texture downloaded, you do not need to re-download a lower resolution of that texture, since the JPEG2000 format allows deriving all lower LODs from a higher LOD: once the higher LOD is in cache, the texture fetcher just reuses that cached raw image to decode a lower LOD from it, and there is no need to fetch anything from the network. Equally, when a low LOD gets downloaded, the HTTP fetch is simply done on part of the JPEG2000 texture file, and if a higher LOD is needed, the HTTP fetch is resumed from the point where the next LOD starts in the file, thus not requiring everything to be re-downloaded.

    As for the "infinite loop", it is only happening due to the yo-yo in the texture discard bias algorithm: this yo-yo happens when the texture memory (or VRAM) fills up to the point where the discard bias (which determines what lower LOD to use for textures when memory gets short) rises to its maximum (5.0 in the old algorithm), causing a "panic" mode in which all textures are re-decoded at their lowest LOD (to free the VRAM and texture memory as fast as possible), before having their priority reevaluated and a proper LOD (taking into account the higher discard bias) applied.

    The thing is, the original discard bias algorithm was very crude and did not anticipate anything. It was also not amortized at the code level, i.e. it simply reacted to the filling up and freeing of the memory, without taking into account the memory usage variations from one frame to the next, nor extrapolating the usage for the next frame, and it relied entirely, for amortizing any yo-yo effect, on the texture download and/or decoding delays. To be fair, that algorithm was designed in a time when network bandwidth was small (ADSL, with 512Kbps at best) and CPUs were much slower to decode textures (not to mention they were mono-core CPUs), so the yo-yo effect (AKA texture thrashing) was not happening as much as it can happen today.

    However, once the discard bias algorithm is rewritten, with finer-grained discard bias steps, extrapolation of memory usage based on past variations, and proper amortizing of the bias variations (e.g. do not decrease the bias when texture memory has not decreased yet, and do not increase it when memory usage is in the process of decreasing), the prioritized texture fetching works very well, as the Cool VL Viewer can demonstrate to the skeptics...

    There's also another part which I forgot to include in that quote (my fault):

    Quote
    the way jpeg 2000 works, you can essentially request whatever resolution you want and it will download just enough data to decode to that resolution

    Anyway - If things are as you say, then all I can say is that I'm sure LL would appreciate a PR for your fix.

  6. 2 hours ago, Henri Beauchamp said:

    The problem is that, instead of addressing the actual issue by avoiding making every boosted texture "no delete" and refining the texture bias algorithm (which was very prone to "yo-yo" oscillations), LL adopted a naive algorithm which also falls short for many people in terms of rezzing speed.

    On my side, I tried to address the shortcomings of the old algorithm, and if you do try the Cool VL Viewer, my bet is that you will have to admit it performs quite well in terms of memory usage, including in complex scenes, while not ruining the rez time (much to the contrary). 😜

    Myself, I haven't really noticed any difference between the two, but I don't really teleport around enough to notice anyway (so YMMV) - but, for the sake of completeness, this is the quote I got from the Lindens:

    Quote
    HTTP resources are allocated to whatever is highest priority at the moment the HTTP resource becomes available, but what it doesn't do that it used to do is allow a single texture to issue a request before the previous request finishes
    that was making an infinite loop of dropping data on the floor when moving the camera around
    i.e. one texture would ask for its 1024 version, then decide it didn't need it and would ask for the 512 version
    the 1024 version would finish downloading, then it would drop that data and the 512 version would get downloaded
    which was insane

     

    • Thanks 1
  7. 1 minute ago, Arielle Popstar said:

    sacrificed the animation smoothness for better looking FPS numbers, if that's possible.

    No, because animations are formed of sparse keyframes, which the viewer interpolates to play the animation back at your current framerate. (This has always been the case, PBR or not.)

    3 minutes ago, Arielle Popstar said:

    mentioned here: https://community.secondlife.com/forums/topic/506421-pbr-wow/?do=findComment&comment=2668368 is that the change in loading priority is being considered as a feature not a bug, by the Lab. 

    I briefly spoke with one of the Lindens about this, and the answer is a lot more nuanced than that. The old way of doing things actually fell over often: by the time something was pushed to the top of the fetch queue, it would often need to drop to a lower mip level, meaning the texture would fall to the bottom of the queue again while rapidly filling your cache with unused textures.

    That's not to say there isn't merit to what Henri has said, but that the algorithm was changed for a reason.

    • Thanks 1
  8. 40 minutes ago, Jenna Huntsman said:

    So long as the dome is single-sided (i.e. when the camera is moved outside the dome, you can still see back inside), it shouldn't cast a shadow, so sunlight should still enter the dome.

    Figured I'd take some photos to illustrate this in action - this is using a hollow prim sphere as an example.

    Outer face visible (double sided):

    [Snapshot: hollow prim sphere, outer face visible]

    Outer face transparent (single sided):

    [Snapshot: hollow prim sphere, outer face transparent]

    • Like 1
  9. 1 minute ago, Luna Bliss said:

    Is it true that Firestorm and the SL viewer's default will be with ALM enabled, and without the ability to turn it off?

    Yes, and the documentation has said as much for a while:

    https://wiki.secondlife.com/wiki/PBR_Materials#Removal_of_Advanced_Lighting_Model_.28_ALM_.29_Graphics_Option

    2 minutes ago, Luna Bliss said:

    I'm concerned about my store inside a skybox and the park underneath with a dome sky -- these cast shadows that make environments too dark.

    So long as the dome is single-sided (i.e. when the camera is moved outside the dome, you can still see back inside), it shouldn't cast a shadow, so sunlight should still enter the dome.

    • Thanks 1
  10. 15 minutes ago, BilliJo Aldrin said:

    don't forget they are going to force advanced lighting on everyone. When it's turned on on my computer, it destroys any reasonable playability. Add that to PBR, and it will basically render Second Life unplayable for a lot of people.

    This is something that's often said, but also very hyperbolic.

    If you look at the metrics from the Firestorm viewer, you'll see that the bulk of users are on hardware modern enough that they are capable of running the PBR viewer.

    Besides, if you're in the camp of being unable to run it, then Henri's Cool VL Viewer offers the ability to turn off ALM while not being out-of-date.

    Henri has mentioned that the option to turn off ALM likely won't be around forever, and will be maintained only for as long as it makes sense to do so.
    • Like 1
    • Thanks 1
  11. 18 minutes ago, Qie Niangao said:

    In theory, the outdated Pipelining asset streaming could hit PBR assets more if there are simply more of them to stream, but that wouldn't seem to be the situation (at least not yet).

    It's definitely a contributing factor.

    In viewers with ALM disabled, the viewer would only need to fetch diffuse textures for objects. This is pretty much the bare minimum amount of assets needed to display the scene.

    In viewers with ALM enabled, you effectively multiply that number by 3.

    In the PBR viewer, where ALM is always enabled (and cannot be disabled*), you multiply that number by 4 if the object is using a PBR material.

    Pipelining suffers in proportion to the number of assets that need to be loaded, as the assets must be returned in the order they were requested. This means that if you're downloading a 4MB asset, and the rest of the assets in the scene are, say, 300KB each, and that 4MB asset takes a while to download or even gets stuck, the rest of the scene can't load until that download finishes.

    Multiplexing fixes this issue (among a bunch of others), as assets can be returned in any order, not just the order they were requested in - so if that 4MB asset gets stuck, the rest of the 300KB assets can continue loading.

    • Thanks 1
    • Sad 1
  12. 13 minutes ago, Conifer Dada said:

    I have a fast broadband connection (578 Mb/s)

    It's worth noting that in addition to what Henri said, you can have the best connection speed ever and still have SL slow to load.

    In part, this is because the method SL uses to fetch assets (HTTP 1.1 Pipelining) is a long-deprecated method of asset streaming which is known to cause a lot of problems at both the consumer and ISP level. HTTP/2 Multiplexing solves this issue, but isn't enabled on SL's CDN (although this should hopefully change this year).

    This isn't a PBR issue (i.e. viewers 6.x and below encounter the same issue).

    • Thanks 1
  13. 2 hours ago, NellyYui said:

    The "wacky scaling stuff" is the core of my question. How can we calculate an accurate offset at any scale.

    https://gyazo.com/819b502746749656cc7176c82d24e6df

    Okay, so patching to allow scale to be arbitrary (no longer commented out):

        // Relies on globals defined elsewhere in the original script: scl (the texture repeats vector) and cross_tex (the crosshair texture).
        touch(integer num_detected) {
            integer link = llDetectedLinkNumber(0);
            integer side = llDetectedTouchFace(0);
            //llOwnerSay("side: " + (string)side);
            vector st = llDetectedTouchST(0);
    
            if (side == 0) { //Face is Blinn-Phong.
                vector zo = <0.5 * scl.x, 0.5 * scl.y, 0> + <-st.x * scl.x, -st.y * scl.y, 0>;
                llSetLinkPrimitiveParamsFast(link, [
                    PRIM_TEXTURE, side, cross_tex, scl, zo, 0
                ]);
            }
            else { //Face is using a PBR material, so make a glTF transform for it.
                st = <st.x,1-st.y,st.z>; //llDetectedTouchST origin point is at the bottom left, so make origin point upper left for glTF transform use.
                st = <st.x*scl.x,st.y*scl.y,st.z>; //Adjust transform by texture scale, so scale is now arbitrary.
                vector zo = <0.5-(st.x),0.5-(st.y),st.z>; //0.5 is the base offset (to have the texture "centered" at the crosshair), then add the mouse offset.
                llSetLinkPrimitiveParamsFast(link, [
                    PRIM_GLTF_BASE_COLOR, side, "", scl, zo, "", "", "", "", "", ""
                ]);
            }
        }

     

    • Thanks 1
  14. 18 minutes ago, NellyYui said:

    Still unsolved ....

    Here's my quick patch for what I think you were trying to do. Not sure what the wacky scaling stuff was about, so I commented it out for now.

        touch(integer num_detected) {
            integer link = llDetectedLinkNumber(0);
            integer side = llDetectedTouchFace(0);
            //llOwnerSay("side: " + (string)side);
            vector st = llDetectedTouchST(0);
    
            if (side == 0) {
                vector zo = <0.5 * scl.x, 0.5 * scl.y, 0> + <-st.x * scl.x, -st.y * scl.y, 0>;
                llSetLinkPrimitiveParamsFast(link, [
                    PRIM_TEXTURE, side, cross_tex, scl, zo, 0
                ]);
            }
            else {
                vector zo = <0.5+(-st.x),0.5+(st.y),st.z>; //0.5 is the base offset (to have the texture "centered" at the crosshair), then add the mouse offset. X axis is inverted.
                llSetLinkPrimitiveParamsFast(link, [
                    PRIM_GLTF_BASE_COLOR, side, "", scl, zo, "", "", "", "", "", ""
                ]);
            }
        }

     

    • Thanks 1
  15. 52 minutes ago, Qie Niangao said:

    (That scale randomization is just to drive the user crazy, right?)

    Anyway, I think you'll find that, manually operating the build tool, the glTF "v" offset is sign-inverted from the Blinn-Phong "y" offset, so for the glTF Materials sides, instead of
    vector zo = st;
    try
    vector zo = <0.5 * scl.x, 0.5 * scl.y, 0> + <-st.x * scl.x, st.y * scl.y, 0>;

    which just swaps the y sign of the zo vector assignment you're using for Blinn-Phong. At least it seems to work for me.

    This is pretty much correct - the glTF transforms in SL follow the glTF spec with respect to transforms, where the Y axis is inverted compared to standard OpenGL (i.e. glTF uses the same texture origin point as Vulkan).

    See here for more info:

    https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html#images

    Specifically:

    Quote

    The origin of the texture coordinates (0, 0) corresponds to the upper left corner of a texture image. This is illustrated in the following figure, where the respective coordinates are shown for all four corners of a normalized texture space:
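
    In script terms, the practical upshot of that origin difference is just a sign flip on the vertical offset - a trivial sketch (the helper name is mine):

        // Illustrative helper: convert a Blinn-Phong texture offset into the
        // equivalent glTF offset by inverting the vertical (V) component.
        vector bpOffsetToGltf(vector bp_offset)
        {
            return <bp_offset.x, -bp_offset.y, bp_offset.z>;
        }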

     

    • Thanks 2
  16. 3 minutes ago, Zalificent Corvinus said:

    This is because SL's PBR isn't "real standard PBR", it doesn't comply with the "pure" standard for ACES tone mapping,

    Strictly speaking, that's because tonemapping isn't a hard requirement - to that point, the glTF spec doesn't have any guidance on it. But in reality, tonemapping is required because you can't tell users to go out and buy a top-end HDR reference monitor able to display an insane dynamic range, so tonemapping is used to map colors back into SDR space (while preserving the illusion of higher dynamic range).

    Because it's not standardized, tonemapping can be done in whichever way you like, so long as there's a tonemapper present. For example, the Alchemy viewer implements a couple of different tonemapper options to choose from beyond just modified ACES.
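
    As a toy illustration of what any tonemapper is doing (this is the classic Reinhard curve written as an LSL-style function purely for readability; it is not SL's modified ACES shader): it takes an unbounded HDR channel value and compresses it into the 0-1 SDR range instead of clipping the highlights.

        // Toy example only: the classic Reinhard operator, not SL's actual tonemapper.
        // Maps an HDR channel value in [0, infinity) into the SDR range [0, 1).
        float reinhard(float hdr_value)
        {
            return hdr_value / (1.0 + hdr_value);
        }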

    6 minutes ago, Zalificent Corvinus said:

    this is an exercise in compatibility with things

    This is true - the glTF project is mostly about interoperability with other platforms (pretty much every major game engine and content creation tool), as SL is problematic for creators: there are just no tools that create content the way SL expects any more.

    7 minutes ago, Zalificent Corvinus said:

    Since LL PBR is a botched attempt at a cut down version of the most basic spec for PBR

    This isn't really true - to that point, the actual PBR shaders are a straight OpenGL conversion of this reference model:

    https://github.com/SaschaWillems/Vulkan-glTF-PBR

    (This isn't really a half-measure either, as it removes any guesswork or questions of 'is this right?' for how PBR materials are rendered - it also accelerates the Vulkan port of the viewer, as this code is already available for Vulkan!)

    The reason why SL's PBR doesn't include every bell and whistle available is that LL's goal is compatibility with the core glTF 2.0 spec, excluding extensions. (Extensions may be added to SL at a later date, but they aren't part of the core specification.)

    The core spec can be found here:

    https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html

    And to see what could be supported, see all of the ratified extensions here:

    https://github.com/KhronosGroup/glTF/blob/main/extensions/README.md

    • Like 4
    • Thanks 2
  17. 32 minutes ago, Sam Bellisserian said:

    I don't know anything about texturing so I am wondering if PBR can be turned off like full bright or if it's built in?

    No - that would assume that a PBR material is the same as a Blinn-Phong material, which it isn't (there are a lot of major technical reasons; the two just aren't compatible).

     

    10 minutes ago, Scylla Rhiadra said:

    Again, if you look to the Materials editing dialog to the left, under "Metallic-Roughness," you can see that I've set the "Metallic Factor" to 0.990.

    The factors are essentially a multiplier on the Metallic-Roughness map - so if your metalness channel is black (i.e. 0,0,0), a metallic factor of 1 will look the same as a metallic factor of 0 (because 1 x 0 is 0, and 0 x 0 is 0). If the channel is white (255,255,255), then 1 x 255 is 255, if you see what I mean. A white channel with a factor of 0.5 would result in 128,128,128 (0.5 x 255).
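
    Put as a formula (the function name is mine, just to make the arithmetic explicit), the effective per-texel metalness is simply the map value scaled by the factor:

        // Illustration of the arithmetic above. Both inputs are normalized 0.0 - 1.0
        // (a channel value of 255 in an 8-bit map corresponds to 1.0 here).
        float effectiveMetalness(float map_value, float metallic_factor)
        {
            return map_value * metallic_factor;
        }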

    • Like 1
    • Thanks 1
  18. 3 minutes ago, Henri Beauchamp said:

    This would totally kill SL for everyone but people with 16GB or more VRAM and 64GB or more RAM !

    So long as LL can nail the texture mipmapping system, the resolution of any given texture wouldn't matter.

    I would say that the viewer is not ready as things stand now. But if the work to get the mipmapping system on-point is performed, I don't see why higher resolution textures would pose much of an issue.

    • Like 2
  19. 13 minutes ago, Scylla Rhiadra said:

    I am hoping that this move will open the way, eventually, to other visual enhancements that I will find more useful.

    It certainly will.

    As an example: Glass isn't great in PBR. In my opinion, well-made Blinn-Phong is actually better than PBR for glass right now.

    But, that will change when / if the glTF extensions for Transmission ( https://github.com/KhronosGroup/glTF/blob/main/extensions/2.0/Khronos/KHR_materials_transmission/README.md ) and IOR ( https://github.com/KhronosGroup/glTF/blob/main/extensions/2.0/Khronos/KHR_materials_ior/README.md ) are implemented.

    • Like 1
    • Thanks 3
  20. 3 minutes ago, Gabriele Graves said:

    One question rattling around in my mind is do they even get stats from TPVs as well as the official viewer?

    They do. All TPVs that are in the TPV directory submit telemetry data to LL.

    One such recent use of the telemetry data was the decision to kill 32-bit support for the viewer, as LL found that less than 0.1% of the userbase genuinely needed it, and anyone else who was running a 32-bit viewer didn't actually need to.

    LL's QA team tests on a variety of hardware configurations, including configurations which match the most common low-end setups seen in SL. They mainly test that the viewer won't corrupt your system, and that performance aligns with expectations (that doesn't mean you'll get 60fps on ultra settings on a potato, because that's not a reasonable expectation).

    • Like 5
    • Thanks 1
  21. 3 minutes ago, Qie Niangao said:

    For it to be about animation, it must be some unrelated change in the 7.mumble code base.

    It's actually caused by some major changes to the inventory services which snuck in with the inventory thumbnails viewer, which was released as a 6.x viewer.

    I'm told there are quite a few bugs with the new changes, and we're mostly waiting on LL getting back to work after the holidays to fix them.

    • Like 2
    • Thanks 1
  22. 8 hours ago, animats said:

    Which version? I was using 7.1.1. A few versions back, it worked. Released 6.6.10 works on the same machine. So, somewhere between 6.6.10 and 7.1.1, something was broken.

    I can live with LL not supporting a full Linux version, but it ought to at least work under Wine, like most other games.

    Ref to 2021 JIRA: https://jira.secondlife.com/browse/BUG-230918

    Checking now, Second Life Release 7.0.0.580782 (64bit) seems to work under Wine on my system.
