
Henri Beauchamp


Posts posted by Henri Beauchamp

  1. 1 hour ago, Qie Niangao said:

    Guess I'm still missing something. If the embedded script I created is set no-mod only for Next Owner then the whole no-mod object is still mod-ok when taken into my Inventory. (Of course if I actually transfer the script so the Next Owner permission is applied making it really no-mod, transferring it back to its creator doesn't change that permission to make it mod-ok again.) I'm sorry, I'm still confused.

    Perhaps I have not been clear enough in my first post... Using a mod-ok object as the container, with it rezzed in-world:

    1. Adding a no-mod item you do not own (e.g. a no-mod script made by someone else) makes the container no-mod for yourself when taken back to your inventory, while you can still modify it when rezzed in-world (of course the object is also made no-mod for ”next owner”), and you cannot change its mod permissions from the inventory.
    2. Adding a no-mod item you created (and therefore for which you have mod-ok permission yourself) makes the container no-mod for the next owner when you take it back to your inventory (but then, you may change this ”next owner” permission from the inventory item properties floater).
  2. 18 minutes ago, Qie Niangao said:

    the viewer won't let me ”edit the inventory object permissions to make it mod-ok (for the next owner) again” while the object is still in inventory, but maybe that wasn't the intent (of course it's still mod-ok if I rez it again).

    For it to work in the inventory, the object itself must be mod-ok for you, and you must be the creator of all no-mod items you have put inside it (e.g. no-mod scripts you wrote).

    • Like 1
  3. 1 hour ago, Suzanna Soyinka said:

    Also keep in mind I can see that latency spike at this homestead too, I have seen it, and of course then my frame rate dives down into the sub 10 range.

    You do not seem to understand... The ”latency spikes” are not network latency at all: network latency CANNOT reduce your frame rate. If you are not convinced, turn off your network interface and watch the frame rate: it won't go down, and might even go up (because there are no network packets for the viewer to process during the frame render loop)!

    The ”ping time” as shown in the Statistics floater is totally bogus and misleading, because it includes the frame rendering time. If you want to measure the true network ping, look at the sim host address in the About floater, and ping that host from a terminal.

    You are confusing the cause with the effect !!!

    Your problem is a frame rate issue (which, as a result, makes the Statistics floater ”ping” look bad).

    As a side note: I highly discourage overriding the viewer renderer settings (AA, ambient occlusion or Vsync) with the driver settings: this usually leads to conflicts (at best) and may sometimes cause hiccups and ”crashes” (watchdog timeouts).

    • Like 1
  4. Your Statistics floater shows a rate of only 2 frames per second, meaning the viewer executes its main thread (which is also the rendering thread) loop in 500ms: no surprise then that you see a 540ms ”ping time” in the Statistics floater. This ”ping time” must be taken with a grain of salt (and here, a huge amount of salt): since it is computed once per frame, it can never be smaller than the frame rendering time (which actually adds to the real, network ping time). It is OK-ish when you render at 100fps (the error is then only 10ms), but certainly not at 2fps! If anything, subtract the frame time from the ping time in the Statistics floater (here: 0.540 − 1/2.1 = 540ms − 476ms ≈ 64ms).
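    To make the correction above explicit, here is a tiny sketch of the arithmetic (the function name is mine, just for illustration):

```python
def true_ping_ms(stats_ping_ms, frames_per_second):
    """Subtract the frame time (in ms) from the Statistics floater 'ping',
    since that figure is only sampled once per rendered frame."""
    frame_time_ms = 1000.0 / frames_per_second
    return stats_ping_ms - frame_time_ms

# The example from the post: 540 ms shown while rendering at ~2.1 fps.
print(round(true_ping_ms(540, 2.1)))  # 64 -> ~64 ms of real network latency
```

    At 100fps the same correction only removes 10ms, which is why the displayed figure is tolerable at high frame rates.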

    Note that your frame rate also affects how fast textures and meshes are downloaded, because here again, the texture and mesh updates are performed once per frame in the main thread (even if the downloads happen in a child thread), meaning that the slower the frame rate, the fewer textures and meshes get rezzed (rendered) per second. This also affects how fast their required LOD is updated: when your camera is moving, such as when your avatar is riding a vehicle, the LODs lag far behind the objects' importance to the camera, and the viewer is late to trigger a better texture or mesh LOD download to match the increased importance of the objects that got closer since the last frame was rendered. This is also why you should never, ever use the Vsync (or even adaptive sync) feature with an SL viewer: it would ruin your rezzing time and texture/mesh LOD update speed; always prefer triple buffering to fight (and suppress) tearing.

    As another note, iGPUs (Intel's) or even APUs (AMD's) are notoriously too weak. Use a discrete graphics card instead: even an old GTX 460 will perform way better than any existing iGPU/APU!

    And finally, once you have a decent enough GPU, the SL viewers' rendering engine will see its bottleneck move to the single-core performance of your CPU: the higher the single-core performance, the better the frame rate (the frame rate increase becomes linearly proportional to the CPU single-core performance increase, once the GPU is taken out of the equation).

    To summarize: you do not have a network issue, but a frame rate issue. It is because the frame rate ”clears up”, as you say, that your (Stats floater) ”ping time” sees its (inaccurate/flawed) figure going ”back to normal”, not the other way around (the frame rate is actually independent of the network speed)!

    • Like 1
  5. 10 hours ago, Pixels Sideways said:

    So whyyyyyyyy does SL mock block me re no mod or no mod/no copy or variations on perms when placing items - in this case notecards - in an attached HUD prim but has no issue moving same items into a rezzed prim?

    .../...

    Anyone know why this is set up like this

    This is because inventory permissions and rezzed object permissions are not inherited while an object is attached to your avatar.

    For the inventory permissions to be updated with regards to which items are added to (or removed from) the object contents, SL needs the sim server to transmit the object data to the inventory server, and this is done when you derez the object to your inventory (or take a copy, when it is copy-ok).

    Currently, and given how things work, allowing a no-copy or no-transfer item to be added inside a worn attachment which is itself copy-ok or transfer-ok would let you bypass the permissions (since the object in your inventory would not have its permissions changed: you would then be able to copy or transfer it with the no-copy or no-transfer item inside). This is why SL forbids it.

    There is however indeed a ”quirk” that I consider a genuine bug in SL permissions inheritance, and it relates to no-mod items added to the contents of a mod-ok object: currently, when you do that (with an object rezzed in-world), the object itself becomes no-mod when you take it back to your inventory. This is of course totally stupid and uncalled for, since a mod-ok object may perfectly well contain no-mod items, and nothing you can do with that object at the inventory level (like changing the object name, copying it or giving it away, as long as it is copy-ok or transfer-ok) would change the no-mod permission (or name, or asset data) of the no-mod items it contains. Note that as long as you are the object creator, you can still edit the inventory object permissions to make it mod-ok (for the next owner) again, but if you pass a copy to another avatar, they will lose that mod-ok permission as soon as they rez the object in-world and take it back to their inventory, and they won't be able to make the inventory object mod-ok again.
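    The ”folding” behaviour described above can be sketched with a few lines of Python. This is an illustration only (not LL's actual server code); the bit values, however, do match LSL's PERM_* constants:

```python
# LSL permission bit values (these constants are real).
PERM_TRANSFER = 0x00002000
PERM_MODIFY   = 0x00004000
PERM_COPY     = 0x00008000

FOLDED_BITS = PERM_TRANSFER | PERM_MODIFY | PERM_COPY

def fold_permissions(container_perms, item_perms_list):
    """AND the copy/transfer/modify bits of every contained item into the
    container's effective permissions, as happens when the container is
    taken back to inventory (illustrative sketch)."""
    folded = container_perms
    for item_perms in item_perms_list:
        # Only the copy/transfer/modify bits of the items are folded in.
        folded &= item_perms | ~FOLDED_BITS
    return folded

full_perms = PERM_TRANSFER | PERM_MODIFY | PERM_COPY
no_mod_script = PERM_TRANSFER | PERM_COPY  # a no-mod (but copy/tfr-ok) item

# A mod-ok container holding one no-mod item comes back without its mod bit:
assert fold_permissions(full_perms, [no_mod_script]) & PERM_MODIFY == 0
```

    The bug discussed above is precisely that the mod bit takes part in this folding at all, when logically only the copy and transfer bits need to.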

    • Like 2
  6. Nice idea(s) !

    I thoroughly hate blur, however. Even though, given my age, I wear glasses, I always see things hyper-crisp when I look at anything, be it in the distance or right before my nose.

    Blur is a defect of camera lenses and is in no way a normal way to see things with your eyes (your eyes adjust to the subject distance, so there is never any blur, except in peripheral vision): why the hell people think it is so cool to reproduce camera defects is totally beyond me!

    • Like 1
    • Thanks 3
  7. 22 minutes ago, Love Zhaoying said:

    Since it is not in a ”recoverable” state - I disagree. A ”Not recoverable” state is indistinguishable from a ”crash”.

    You are wrong... A crash is when a program hits an illegal instruction or operation, causing an exception. You then get a crash dump.

    In the case of a disconnection, the viewer is sitting there (and you can still operate its UI) but unable to do anything that requires an active connection to the agent sim. The viewer won't crash, even if you keep it like that for hours.

    • Like 1
  8. 3 hours ago, Love Zhaoying said:

    I'm curious if technically, when connection is lost, the viewer ”crashes” since the needed stuffs for reconnect are missing!

    Again, the viewer does not crash (if it does, it's a bug in the viewer you are using). It simply loses its connection to the sims, and since there is no order (nor a protocol to transmit such an order) from a grid server to reconnect to another sim, it is simply left ”in the blue” and cannot continue without an ”agent region” (a sim where the agent is present/connected).

  9. 3 hours ago, Profaitchikenz Haiku said:

    and have not once had a TP crash-to-desktop.

    If you crash to desktop (i.e. the viewer actually crashes and vanishes all of a sudden), then it is a viewer bug!... When a TP fails, you may get disconnected, but the viewer should stay up and running (with the 3D view frozen and greyed out), alert you about the disconnection, and leave you the choice between closing the session or continuing to read IMs, notifications, chat and anything you have not yet read.

    If you actually crash, then please report that crash to the viewer developer(s)/team, with the crash info (logs, crash dump, etc.)...

    • Like 2
    • Thanks 1
  10. It would be even better if the servers (and viewers, since it would involve a protocol/data exchange between them and the grid) had a fallback mechanism for failed TPs or sim restarts, like automatically teleporting your avatar to a safe place (with the proper maturity rating, matching that of the region you were trying to TP to or were logged into before the restart), just like what happens when you try to log in to a region which is currently down.

    There is no reason for being disconnected at all from the grid as long as the network is not the cause for the sim connection loss...

    • Like 6
  11. 15 hours ago, Coffee Pancake said:

    my only issue with it is .. well .. it's email. Who uses email.

    I do!!! And no, I do not use any ”smart” phone (I do not even own a mobile phone), any ”social media” (Twitter, Facebook & Co), or any messaging application, so email is the primary means of communication for me on the Internet!

    However, I'd like this additional ”security” feature (*) to be made optional and opt-in only: I do not want to be bothered with unsolicited emails (AKA SPAM) when I connect with another ”computer” (most often on another VM on the same computer), or just another OS, or with a new dynamic IP, another viewer, or whatever !

    (*) I really do not care about it, like I do not care about (and do not want to be bothered with) MFA: I am paranoid enough that I took all necessary security measures on my end for years already !

    • Like 2
    • Thanks 1
  12. 2 hours ago, Beq Janus said:

     To answer the question ”why turn off ALM?” the following two use cases quickly appear. 

    1) Some people do it because they think it helps their frame rate (which while things are undoubtedly faster with ALM off, the improvement is mostly down to the side-effect that disabling ALM kill shadows, which is the majority of any scene rendering cost). These people would have a better visual experience by reducing their shadows but keeping ALM active (or at least trying these in phases rather than On/Off). In some cases turning off ALM (when shadows are already disabled) is a backward step.

    2) Others, however, do this because their networks are awful and they'd rather not suck down the additional textures/meshes. The satellite downlink users are one example, but there are many other cases. SL has a global reach and is not limited to those with good network infrastructure.

    3) Deferred rendering (ALM) gives blurry textures and edges when compared with direct rendering (ALM off), which provides a crisp 3D display. While I enjoy a fast viewer on pretty powerful hardware (plenty powerful enough to render with ALM and even shadows on), I always keep ALM off for this very reason...

    • Haha 1
  13. @NiranV Dean I am afraid you are wrong. The viewer mesh repository does a pretty good job of quickly downloading the needed LODs to display the scene; there is no point in downloading megabytes (mesh assets can be huge!) of data to display a mesh in the distance at LOD1.

    Granted, LL's code can be (and has been, by some TPV devs) optimized for even better performance (try the Cool VL Viewer, and you will see what a fast rendering viewer is).

    As for download failures, they do happen, especially with buggy libcurl versions when using the HTTP pipelining feature, which is great for speeding up requests but sadly can lead to corrupted downloads with all libcurl versions greater than v7.47 (and I am speaking of LL's patched libcurl v7.47, because the official/upstream libcurl v7.47 also had HTTP pipelining bugs). This is why I keep using that old libcurl in the Cool VL Viewer... If your viewer uses a newer libcurl and you are experiencing failed mesh LOD downloads (and/or rainbow textures), disable HTTP pipelining.

    • Thanks 1
  14. There is a lot of confusion about mesh levels of details (LOD) and the volume LOD factor. The LOD factor (the one you can change in the graphics settings via the ”Objects LOD” slider which is linked to the RenderVolumeLODFactor debug setting) determines how fast you switch from one LOD to the next as the distance to camera decreases.

    As for meshes, while Niran is right in saying they are a single asset file, he is wrong in saying that the viewer cannot request certain LODs of a mesh: that file can indeed be requested only in part, meaning that you can download only the mesh header and the lowest LOD and not the highest LOD, for example. However, as your camera gets closer to that mesh, new LODs get loaded, and in the end you can obtain a fully downloaded mesh asset file cached on your hard disk (next time you see this mesh, the viewer will fetch it from the cache whatever the selected LOD).
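    To illustrate partial asset fetching: the mesh header lists where each LOD block lives inside the file, so the viewer can ask the server for just those bytes. The header layout, field names and offsets below are made up for the sake of the example; only the byte-range technique itself is the point:

```python
def lod_byte_range(header, lod_name):
    """Return the (first, last) byte positions to request for a given LOD,
    suitable for an HTTP 'Range: bytes=first-last' request header."""
    block = header[lod_name]
    return block["offset"], block["offset"] + block["size"] - 1

# Hypothetical header for a mesh asset: offsets/sizes of each LOD block.
header = {
    "lowest": {"offset": 256,    "size": 1024},
    "low":    {"offset": 1280,   "size": 4096},
    "medium": {"offset": 5376,   "size": 16384},
    "high":   {"offset": 21760,  "size": 262144},
}

first, last = lod_byte_range(header, "lowest")
# A distant mesh only needs "Range: bytes=256-1279" (about 1 KB) instead of
# the whole quarter-megabyte asset; closer LODs are fetched later as needed.
print(first, last)  # 256 1279
```

    Once every LOD block has been fetched this way, the concatenated result is the complete asset file, which can then be cached whole on disk.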

    Poor meshes (with poor ”medium” and ”low” LODs) often do not display properly until their highest LOD is selected (and this selection is based on the camera distance and the LOD factor). This is why some creators recommend pushing the RenderVolumeLODFactor setting beyond its normal value. Be aware however that, by doing so, you risk seeing groups of objects suddenly disappear when you get close to them: there is a limit on the vertex buffer size a render group can use (governed by the RenderMaxNodeSize debug setting), and the higher the selected LODs for these groups of objects, the higher the number of vertices to render... When the limit is reached, the viewer pulls the whole group from the render pipeline.
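    As a rough sketch of how the LOD factor shifts the switching distances (this is a simplification, not the viewer's exact formula; the thresholds are illustrative): the selected LOD depends on the object's apparent size, which grows with its radius and the LOD factor and shrinks with camera distance:

```python
def select_lod(distance, object_radius, lod_factor):
    """Pick a LOD from 0 (lowest) to 3 (highest) using a simplified
    apparent-size heuristic (thresholds chosen for illustration only)."""
    apparent = object_radius * lod_factor / max(distance, 0.001)
    if apparent > 0.24:
        return 3  # high
    if apparent > 0.06:
        return 2  # medium
    if apparent > 0.015:
        return 1  # low
    return 0      # lowest

# Doubling the LOD factor doubles the distance at which a given LOD holds,
# which is exactly why pushing RenderVolumeLODFactor 'fixes' poor meshes...
assert select_lod(20.0, 2.0, 1.25) == select_lod(40.0, 2.0, 2.5)
```

    ...and also why it inflates the vertex count of every render group at once, eventually tripping the RenderMaxNodeSize limit described above.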

    • Thanks 2
  15. 29 minutes ago, Kathrine Jansma said:

    i see the use of a camera for more expressiveness (e.g. control of mesh head facial expressions) as a good idea,

    Wait until you try it while role-playing... The camera will capture your concentrated expression as you are typing your post in chat, mouth closed, eyebrows furrowed, while your previous post was indicating that your avatar was giggling... Your role-play partner will be even more confused by your avatar's expression, because they were likely doing something else IRL while you were giggling at the camera earlier (ridiculously, the real-life witnesses around you will think! 🤣), and they just see your oh-so-serious avatar expression now as you elaborate your next paragraph-long post. O.o

    The avatar expression was one of those things ”left to my imagination” (based on the role-play posts), and LL is about to rob me of it (well, likely not, because my viewer will allow turning it off on my screen for avatars using it).

    Also, I'm impatient to see (just to have a good laugh) the result of this feature for someone using, say, a horse avatar! 🤣

    I suppose the feature could have some use in virtual meetings and voice chats, but otherwise, it will see a little hype at its debut before its usage dwindles and it ends up in the category of SL's gadget features...

    • Thanks 1
  16. Thinking that swapping the render engine for an AAA game engine would lead to wider adoption and usage of SL is a total mistake, or so is my opinion.

    First of all (and as pointed out in some of the above replies to the OP), the SL render engine is not about dealing with pre-made optimized content, but about rendering anything its users could have decided to create or upload, and then throw at it! We are speaking here of totally random meshes, textures, animations, and combinations thereof, in any variable amount/concentration at any given place.

    As also already pointed out, SL's existing content cannot be auto-magically adapted to render engines such as UE, and SL's main advantage over any other emerging ”metaverse” is precisely the enormous amount of existing user-made assets it contains, and the virtually unlimited amount of new assets the said users could add. LL cannot decide to just throw away (even part of) the existing content on the pretext that it is no longer compatible with a new render engine... Nor can they commit the same mistake (again) as they did with Sansar and adopt an engine that requires expensive or complicated external software to create new content, with intricate procedures to get the new content validated/uploaded. They would shoot themselves in the foot if they did, since they would lose an advantage (and many long-time users) over the emerging competition!

    In my opinion, the future of SL might be brighter than some may think. But Sansar and High Fidelity utterly failed, and lessons should be drawn from these failures (so that bad decisions are not made about SL's future).

    I can only speak for myself, but I do not seek ultra-realism (like UE is geared towards) in SL. Since I came to SL to role-play (which is, of course, only one of the many possible SL usages), I am happy to have some minimal realism (the fact that the avatars are not cartoonish, at least not until you want them to be, is a big plus to me when compared to other virtual worlds), but I do not want SL to become as realistic as real life, and I enjoy having at least a few things left to my imagination when I role-play.

    So I would say that, as it is, the SL render and physics engines are ”good enough” for me. The render engine just could (and should) allow for (much) better performance, by being adapted to modern hardware (including mobile hardware, since it is one of the keys to more widespread SL usage/adoption). Solutions exist: threading and Vulkan. LL has already started pushing more tasks into child threads so as to alleviate the load of the main thread (which is also the only render thread). With Vulkan, it would be possible to implement a multi-threaded renderer, but LL cannot just throw away OpenGL before all SL users can do Vulkan on their devices!

    My advice to LL would be:

    • Do not try to do what other metaverse competitors are trying to do. Keep SL's philosophy, i.e. user anonymity, no private data gathering, no profiling to push ads, etc.: just keep monetizing the services via land fees, premium accounts, Marketplace sales, etc. (the residents must not become the product, like in the so-called ”free” services by Google & Co; instead they pay for added services/bonuses and in exchange have their freedom and privacy guaranteed by LL).
    • Do not lose your time on fancy hyper-realism features. I can already predict (just like I predicted Sansar's failure) that the new project about avatar expressiveness (based on a camera capturing the user's expression and reproducing it on the avatar) will be just as much (or rather, as little) used (and probably by the same persons) as ”voice morphing”... I.e. only by a tiny proportion of SLers. We (SLers at large) want second life avatars, not real life clones (I do not even use voice; it's a total turn-off and mood killer for any genuine role-player)!
    • Add more build tools to the viewer (we sorely lack tools to at least include simple pre-made meshes in in-world builds, or even to model the said simple pre-made meshes like clay with in-viewer build tools); it is important that we regain (at least part of) this marvelous feature that I discovered when I joined SL back in 2006 (no mesh, not even a sculpty back then: everything could be created with the viewer's built-in tools). (*)
    • Invest in Linux support because, you know, Android is ”Linux by Google”, iOS is ”BSD by Apple”, and mobile support will indirectly involve Linux-like support anyway. Very little effort is needed to achieve this: just one Linux devel to hire (if I could develop and maintain the Cool VL Viewer for over 15 years already, alone and in my free time only, then a single Linux devel should be able to do it full time for LL's Linux viewer), and the work already done by Linux TPV developers to reuse ”as is” for a start.
    • Develop a multi-threaded Vulkan renderer for the viewer. The per-CPU-core performance improvement curve has already been flattening for several years (because the clock frequency cannot increase forever, and the architectural optimizations also have limits; we are approaching the physical limits of the quantum world); the future is more cores, i.e. the need for more threads to exploit all those cores...
    • Once the two above points are accomplished, invest in mobile platform support (even though I do not even own a smartphone, this is the future of SL for all but old farts like me). And here I am speaking about rendering (even in a simplified way) the 3D world on a smartphone (i.e. it's not just about a chat/IM application).

    These were my two cents.

    (*) One tool I would love to see implemented would be some ”mesh hull generation” tool. Imagine selecting a prim-based build and asking the viewer (or server) to ”Make hull from object”, and you'd end up with a (preferably well optimized) mesh that would exactly enclose the object.

    • Like 7
  17. 11 hours ago, NiranV Dean said:

    FSAA and FXAA are not mutually exclusive but cannot be run at the same time.

    MSAA and FXAA are mutually completely exclusive. Having MSAA (16x) completely overwrites FXAA and breaks it (as expected for something that is forced driver-side).

    This is why I wrote, in my very first reply to you in this thread, that you had to disable ALM to see SSAA/FSAA at play!!!

    Quote

    That being said, in order to get FSAA to even work you not only have to relog but you also have to have Deferred disabled BEFORE you relog.

    WRONG !

    While you do need to restart the viewer after changing the FSAA setting, like I wrote in my first post in this thread, you do not need to touch ALM.

    Again: FSAA is requested at OpenGL window/context creation; it does NOT involve any shader to function, and at creation time, the driver has no clue about your shader settings (the shaders are not even initialized at this point). Deferred rendering (ALM) is a shader-based rendering method, and its FXAA shader only intervenes when starting to render a 3D scene (not the login screen) with ALM on.

    If your viewer (or other viewers) somehow fails to transmit the FSAA setting to the llwindow call for the window/context creation when the viewer is configured to use ALM, then this is a BUG in those viewers' code. Mine certainly transmits the FSAA setting unconditionally, and as a result, I can switch off ALM (even after starting the viewer and logging in with ALM on) and enjoy proper FSAA (either 2x, 4x or 8x) after I turn ALM off later in the session. Yes, they cannot work together, but you can switch back and forth between the two without any need to restart the viewer (i.e. FSAA is automatically switched off when the FXAA ALM shader is used, but comes back as soon as you stop using the latter)!

    End of the discussion as far as I am concerned (no more time to lose on a non-existent ”issue”).

  18. 35 minutes ago, NiranV Dean said:

    Any AA above 2x has stopped working for me around Viewer 2.7 (which was around the time FXAA was introduced and replaced MSAA). As far as i can tell from the commits MSAA support was disabled completely: 

    https://bitbucket.org/lindenlab/viewer/commits/2dd8ce53e4e0d14f2bc20796eb6bdf1ef12a65df

    https://bitbucket.org/lindenlab/viewer/commits/7ee10ae1def26708fa44c25355982aa56195d5f9

    It should not be working, the Viewer has no other AA implementation either.

    Again, you are confusing FSAA (requested via the OpenGL window/context creation, dealt with in the llwindow implementation) with the FXAA shader used exclusively in ALM mode, in the render pipeline. The two commits you quote changed nothing in the OpenGL context/window creation, where the FSAA mode is still set.

  19. 8 hours ago, NiranV Dean said:

    Did you check and make sure you have no graphics driver profile for the SL Viewer and also not globally set to ”extend the application” for AA solutions?

    I never use ”application profiles”. I never impose AA settings from the driver control panel/settings. I get the same result under Linux (where the driver configuration is much simpler, exported environment variables being the preferred way to configure it from the applications' wrapper scripts).

    I have always had the same results with regards to AA in 15+ years of SLing, always used FSAA/SSAA, and always chose 4x, since it has always been the best option, by far.

    Quote

    As you can see there is zero difference. Also the ground texture becomes more blurred starting at 2x and stays the same way (due to FXAA being used)

    This could be the driver imposing a different, pseudo (i.e. not super-sampling) AA mode on you... Double- (triple-, quadruple-) check your driver settings.

    Or... Your GPU cannot do true AA any more (this might be what the future has in store for us, with both AMD and NVIDIA pushing their new fancy AA methods down our throats, sacrificing quality for (hopes of) more speed)...
