
Henri Beauchamp

Posts posted by Henri Beauchamp

  1. 5 hours ago, animats said:

    The problem is peak processing per frame, not total processing time. Throttling spreads out the load over multiple frame times.

    You do not seem to understand (or I do not express myself clearly enough)... Whatever the messaging protocol, this processing time (which does not depend on the protocol implementation itself) will be the same. And there is no throttling for the HTTP texture fetcher, for example (and there, an awful lot of processing time is involved in rezzing textures, way more than in decoding UDP objects data), or for the mesh repository fetcher (HTTP too).

    For example, I can easily reach 150Mbps (with less than 1Mbps of UDP bandwidth: all the rest is HTTP) when rezzing a scene after a TP into a non-cached sim with the Cool VL Viewer: yes, the frame rate drops during rezzing, but it only lasts a few seconds, seeing how fast everything is decoded and rezzed.

    Again, the UDP bandwidth throttling has nothing to do with "spreading out the load over multiple frames".

    • Like 1
  2. 14 hours ago, animats said:

    When you enter a new un-cached region, there's a huge blast of full object update and compressed object update messages. Processing them is a fair amount of work. For example, all the geometry for prims is generated on packet reception.

    The processing of the contents of the messages (i.e. updating objects data, or even decoding textures back when they were sent via UDP) has nothing to do with the messaging protocol itself: it would have to be performed whatever the transport (UDP or TCP), and whichever protocol you used, you would see the same amount of time spent processing the message contents.

    However, the load of the UDP protocol itself, as it is implemented in SL, is negligible and certainly not the reason for the throttling.

  3. 5 hours ago, animats said:

    So, too much incoming UDP data during one frame time would reduce FPS. That's the real reason for the throttle.

    No, not really... UDP message processing is not at all taxing on the frame rate (compared to the render pipeline load, it is negligible).

    The reason was most likely that, back in 2003 when SL was born, the Internet connectivity of the servers was much less beefy than it is today. Back then, leased lines cost a small fortune, and a single sim server was likely (but it would be interesting to get this inference of mine confirmed by a knowledgeable Linden) very limited in the bandwidth available to serve all users in the sim (a few Mbps per sim server, at most), so you could not have those users sucking up UDP messages at more than 500Kbps or so (which was not even a big deal, since back then, ADSL down-links were limited to 512Kbps at best).

  4. On 10/14/2021 at 5:58 PM, Elissa Taka said:

    Ok I restarted the viewer with the external debugging console (developer -> Console window on next run) enabled. Yikes! When I rename an item the console is flooded with messages like:

    2021-10-14T15:35:06Z WARNING #Inventory# newview/llinventorymodel.cpp(1799) LLInventoryModel::notifyObservers : Call was made to notifyObservers within notifyObservers!
    2021-10-14T15:35:06Z WARNING #Inventory# newview/llinventorymodel.cpp(1835) LLInventoryModel::addChangedMask : Adding changed mask within notify observers!  Change will likely be lost.
    2021-10-14T15:35:06Z WARNING #Inventory# newview/llinventorymodel.cpp(1846) LLInventoryModel::addChangedMask : Category **JPK Tobacco Backpack Gacha (Listerine) BOX
    2021-10-14T15:35:06Z WARNING #Inventory# newview/llinventorymodel.cpp(1835) LLInventoryModel::addChangedMask : Adding changed mask within notify observers!  Change will likely be lost.
    2021-10-14T15:35:06Z WARNING #Inventory# newview/llinventorymodel.cpp(1846) LLInventoryModel::addChangedMask : Category **JPK Tobacco Backpack Gacha (Listerine) BOX
    2021-10-14T15:35:06Z WARNING #Inventory# newview/llinventorymodel.cpp(1835) LLInventoryModel::addChangedMask : Adding changed mask within notify observers!  Change will likely be lost.
    2021-10-14T15:35:06Z WARNING #Inventory# newview/llinventorymodel.cpp(1846) LLInventoryModel::addChangedMask : Category **JPK Tobacco Backpack Gacha (Listerine) BOX
    2021-10-14T15:35:06Z WARNING #Inventory# newview/llinventorymodel.cpp(1799) LLInventoryModel::notifyObservers : Call was made to notifyObservers within notifyObservers!


    On 10/14/2021 at 7:38 PM, Monty Linden said:

    Thanks for digging.  This does smell a bit...

    It does smell... I spotted this bug back when I backported the Marketplace code to the Cool VL Viewer. Here is the culprit code and my comment about it.

    #if 0	// This is a bogus thing to do here (because updateCategory() would
    		// change the modify masks and that change would have all risks to be
    		// ignored, simply triggering the warning above), and should not be
    		// needed any more now that I fixed the observer code for the
    		// marketplace (by moving changes to the inventory structure out of
    		// the observer event code and into an idle callback). HB
    		if (LLMarketplace::contains(referent))
    			LLMarketplace::updateCategory(referent, false);

    Note that since my backport is actually a partial re-implementation of LL's code, you will need to do more than just comment out the corresponding line in LL's viewer sources (see my comment about moving inventory structure changes out of the observer code).

    The corresponding line in LL's viewer code for the recursive notifyObservers() call you observe is:

            update_marketplace_category(referent, false);

    Around line 1697 of newview/llinventorymodel.cpp.

    • Like 2
  5. 3 hours ago, Coffee Pancake said:

    I've hammered the fire out of a few cheap SSDs and been very surprised by their actual performance and longevity; of course they aren't a patch on more expensive drives, but they are worlds better than a spinning platter, and despite my best efforts, I've not managed to kill one yet (although I have managed to kill quite a few HDDs).

    A $20 quick scratch drive is a no-brainer, especially when it can be used as a dumping ground for IO intensive junk that saves wear on the better more expensive drives.

    Buying second hand ram is a minefield, there is a lot of bad ram kicking about and the fix is to hawk it on eBay and make it someone else's problem.

    I have a lot of computers, ranging from new high end to vintage junk. eBay ram is the bane of my hobby.

    You also have to factor in typical SL hardware, it's mostly mid range & business laptops running well beyond their capabilities and service life. 

    Apparently, we do not share the same experience on these topics...

    I have HDDs that are over 10 years old and still spinning just fine (granted, they were not cheap HDDs either), even if others did die over time (I had an especially bad experience with the IBM "Deathstar" in the distant past, and with a triplet of Seagate 7200.11s in a RAID5 more recently). I cannot speak about cheap SSDs since I never bought any, but seeing how much data has already been written to my older (MLC) SSDs, the cheap ones would long since have hit their max write cycles and exhausted their NAND provision, and faster than even the lame 7200.11s died on me.

    As for second-hand RAM modules, I bought quite a few over the years to upgrade old computers, from old SDRAM to DDR3 modules, and never had any issue with them. As long as they pass the memtest I always submit them to on arrival, there is no reason for them to suddenly die (and if they do not pass it, just return the module to the seller as faulty)...

  6. 14 hours ago, Coffee Pancake said:

    You can buy a cheap Inland 120GB SSD for $20, see your local microcenter or Amazon.

    I won't use a cheap SSD for caching purposes (and, truth be told, as far as I am concerned, for any purpose at all): they are especially fragile, with low endurance (low max write cycle counts), while a cache disk needs high write endurance. They are also slow (you will find QLC or non-V-NAND TLC drives in this "cheap" category, and their write speed is abysmal).

    I'd rather use a RAM-disk, and if RAM is scarce on my system (less than 16GB), I'd rather consider adding more RAM (you can find 8GB DDR4 modules for around 30-50 bucks, and you can even buy second-hand RAM modules, while buying second-hand SSDs is quite unwise) so as to be able to create a RAM-disk; the added RAM will also benefit all applications, the SL viewer included, more than a cheap SSD would.

  7. For several consecutive releases now (i.e. quite a few weeks), the server release notes have been unavailable, always leading to an "Access Denied" XML page. This week, yet again, I get for Second Life Server 2021-10-01.564394:

    <Message>Access Denied</Message>

    Could you please, LL, get it right once and for all? All it takes is verifying that the release notes URL works whenever you update your servers... What about adding this step to the check-list for server updates?...

  8. 4 hours ago, afrodziaq said:

    Faulting module name: libcef.dll_unloaded, version:, time stamp: 0x5eac9798

    Not a surprise... As I explained in my message above, all CEF versions between 77 and 89 are affected by those random crashes. The current CEF releases (90 and newer) have it fixed.

  9. 30 minutes ago, Istelathis said:

    At around 2gb, it does have a noticeable impact on my restart time, for me it is almost 2 minutes from shutdown to rebooting to the login screen on windows 10.  I'm okay with that, since I reboot infrequently. 


    With 4gb of ramdisk, and about 3gb used, with a back up on my HDD it took nearly six minutes to do an entire reboot.. that was a bit too much for my taste, thus I switched over to SSD and brought down the size of my cache.

    This sounds way too long to me... A few seconds should suffice to back up 4 GB of data to your SSD... And yes, a 2GB cache would be plenty to store the data for at least 4 of your preferred/most visited places: increasing the size hardly brings any benefit (if you travel a lot on the mainland, the cached files will get evicted from the cache at some point anyway, whatever its size).

    I do not know what RAM-disk software (or script) you are using, but judging by the time it takes, it looks like it uses compression: just disable the latter and store the data uncompressed, and if possible, as a single file (as a 'tar' file under Linux, or an equivalent for other OSes: zip can do it with the "do not compress" '-0' option, for example).
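As a concrete sketch of that uncompressed, single-file approach: the directories below are created with mktemp purely so the demo runs anywhere, where a real setup would use the RAM-disk mount point and a directory on the SSD.

```shell
# Demo stand-ins for the RAM-disk and the SSD backup location:
ramdisk=$(mktemp -d)
ssd=$(mktemp -d)
echo "cached texture data" > "$ramdisk/texture.bin"

# Save the cache as one uncompressed archive: tar merely concatenates the
# files, so the copy is I/O-bound rather than CPU-bound.
tar -cf "$ssd/viewer_cache.tar" -C "$ramdisk" .

# Restore it (e.g. after a reboot, once the RAM-disk has been recreated):
restored=$(mktemp -d)
tar -xf "$ssd/viewer_cache.tar" -C "$restored"

# The zip equivalent with compression disabled ('-0' means "store only")
# would be:  zip -0 -r "$ssd/viewer_cache.zip" "$ramdisk"
```

Storing the cache as a single archive also avoids per-file metadata overhead when thousands of small cache files are involved.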


    I think I might just disable the ramdisk from saving my cache to the SSD entirely as it really is not necessary.  My internet speed is fast enough that it doesn't take long to fill it back up. 

    Things such as the inventory list cache take time to load, even with a fast Internet link, and without a loaded inventory, your avatar will stay a cloud: you might want to at least save and restore the inventory cache files...

    • Like 1
  10. 25 minutes ago, Profaitchikenz Haiku said:

    OK, surely though it's the directory block(s) that have such information rather than the areas containing the file data and they're going to be frequently updated this way anyway?

    Under Linux with an ext4 file system, you use the "noatime" mount option (at the minimum; I personally use lazytime,noatime,nobarrier,delalloc,discard) to prevent "last accessed" time stamp updates when using an SSD. So the cache defeats that mechanism...
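For illustration, such options go into /etc/fstab; the device and mount point below are placeholders, and some options (nobarrier in particular) may not be accepted by all kernel versions, so verify each one against your own system before using it.

```shell
# Example /etc/fstab entry (placeholders; adapt device, mount point and
# option set to your system):
#
#   /dev/sda2  /home  ext4  defaults,lazytime,noatime,discard  0  2
#
# noatime suppresses the "last accessed" time stamp write on every read,
# lazytime batches the remaining time stamp updates in RAM, and discard
# enables online TRIM on the SSD.
```

`mount -o remount,noatime /home` lets you try an option without rebooting.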

    Not to mention that it is not even the "last access time" which is updated on reads in the "simple cache", but the "last write time" (because you need to make sure this time stamp is indeed the result of a cache code access, and not of some random read by the file explorer, wsearch daemon or whatnot). Even with the old VFS-based cache (a single large cache file holding a virtual file system), that "last used" info had to be stored as bytes inside the VFS file...


    Would there be any benefit to splitting up the cache so that the textures (and other fast-frequent access files such as animations) could be stored in a RAM disk rather than the whole cache?

    There are several separate caches in use by the viewer:

    • The textures cache.
    • The assets cache (animations are among them, but meshes are by far the largest stored assets in it) or VFS.
    • The objects cache (per-sim files).
    • The inventory list cache.
    • Some other minor cache files (names cache, mute list, decoded sound files, etc).

    You could conceivably separate the textures, assets and objects caches from the rest, but I do not really see any benefit in doing it...

  11. 1 hour ago, Profaitchikenz Haiku said:

    I'm interested to know just how much an SSD is likely to have its operating life shortened by being used for the cache: once you've visited all your normal spots, there's not going to be a lot of writing to the cache; mostly it's going to be reads?

    Even for reads, the cached files need to have their last access time updated, so that the cache code knows which files to evict first when it needs to make room for new files (it of course first evicts the files that have not been accessed for the longest time).
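To make that eviction order concrete, here is a tiny sketch using GNU find; the demo builds a throwaway directory, where a real run would point at the viewer's actual cache directory.

```shell
# Demo cache directory with two files, one of which gets an old access time:
cache=$(mktemp -d)
echo a > "$cache/oldest.tex"
echo b > "$cache/newest.tex"
touch -a -d '2020-01-01 00:00:00' "$cache/oldest.tex"

# List files from least- to most-recently accessed (GNU find's -printf):
# the head of this list is what an LRU eviction pass would delete first.
find "$cache" -type f -printf '%A@ %p\n' | sort -n
```

Note that this only works if the file system actually records access times; with noatime mounts (see elsewhere in this thread), the cache code has to track "last used" itself.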

    On HDDs, this is not an issue, but on SSDs, where writes are done at block granularity (i.e. you cannot just write a couple of bytes in the Flash memory: you must erase and rewrite an entire block containing the few updated bytes), it can cause excessive wear. While MLC and multi-layer TLC Flash is usually durable enough, I won't use any single-layer TLC or multi-layer QLC drive for a cache (not to mention that TLC and QLC writes are slow and/or cause excessive "write amplification", due to the fact that each block is first written in an area used as pseudo-SLC flash, for speed, and rewritten later in the TLC/QLC "standard" area)!

    44 minutes ago, KT Kingsley said:

    The LL RC viewer, Simplified Cache is available here: https://releasenotes.secondlife.com/viewer/ I've no idea how, or if, it improves on the existing system.

    It is just simpler (and faster, at least under Linux and macOS; Windows is another story, especially just after a reboot, when the OS does not yet have a cached directory tree in RAM). It does not however solve SSD wear (even though it attempts to mitigate it by not writing the access time at every read if the last read timestamp is recent enough).

    38 minutes ago, Istelathis said:

    I wonder if setting up a ramdisk would be a decent option, it would be erased every time you restart your computer though.

    A proper RAM-disk saves its contents on disk on OS shut down and restores it on reboot...

    23 minutes ago, Profaitchikenz Haiku said:

    I did try this option back in 2010, I copied the cache to the ram disk at startup, and back to disk after logging out. It worked well enough back in 2010 when 512M was sufficient for the cache, but doing it now with 1-2G would be tedious.

    Not at all. Here (under Linux), it takes just a couple of seconds to save a 2GB RAM-disk, and even less to restore it: I do not even bother compressing the RAM-disk data (which would take more time); I just 'tar' it to the SSD and voilà.
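A minimal sketch of that workflow under Linux, shown as comments since mounting requires root; the mount point, size and backup path are all placeholders to adapt to your own setup.

```shell
# A tmpfs entry in /etc/fstab recreates the RAM-disk at every boot
# (its contents live in RAM and vanish on power-off):
#
#   tmpfs  /mnt/ramdisk  tmpfs  size=2G,mode=0755  0  0
#
# A shutdown script can then archive it, uncompressed, to the SSD:
#   tar -cf /ssd/ramdisk-backup.tar -C /mnt/ramdisk .
#
# and a boot script restores it once the tmpfs is mounted again:
#   tar -xf /ssd/ramdisk-backup.tar -C /mnt/ramdisk
```

Dedicated RAM-disk utilities automate exactly this save/restore step around shutdown and boot.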

    • Thanks 1
  12. If you have a sufficient amount of RAM (16GB, and preferably more), your best bet is to use a RAM-disk for the viewer cache: it is the fastest option, by several orders of magnitude, and will avoid wearing out SSDs and HDDs.

    A 2GB RAM-disk is enough, at least for one viewer (be aware that different viewers normally use different cache directories, so their cache sizes would add up in your RAM-disk).

    • Like 2
  13. 23 minutes ago, Beq Janus said:

    Of course CPU is not the whole story, but the stats I am using are effectively the proportion of each frame spent rendering a given avatar's geometry and any shadows, etc. I'll be adding more...

    The CPU is indeed only part of the problem, and neglecting the GPU might give you entirely false render cost figures on systems where the GPU is the actual bottleneck (*)... It would be great to get figures with CPU + GPU render time for each avatar. Not sure it is at all feasible, however (perhaps by rendering only a given avatar and nothing else for a few frames, in order to get its render cost).

    (*) Typically, systems using iGPUs, since those are especially weak; with a discrete GPU, even as old as a GTX 660, the CPU becomes the bottleneck in SL.

    • Like 1
  14. There is no risk of "frying your computer".

    At worst, your CPU or GPU would throttle down its frequency to avoid overshooting its max rated operating temperature (itself a few degrees Celsius below the absolute max temperature). Typically, a CPU that would fry above 105°C will throttle down its frequency to stay at 95°C or below, and should it reach 105°C regardless (e.g. because you removed the cooler), it would perform a safety shutdown to avoid burning...

    There have been examples of "frying laptops" in the past, but they were all due to battery failures (and only because of a bad design of the latter).

    • Like 2
  15. 1 hour ago, Aishagain said:

    @Henri Beauchamp: Well, you might well be correct, Henri, I won't argue. However, one of the changes that EEP addresses is parcel environment settings at different altitudes, and this is something that could NOT be set with Windlight rendering.

    Wrong again... The Cool VL Viewer can render EE settings in WL rendering mode, or WL settings in EE rendering mode (the settings are translated automatically). This includes altitude-based environment settings. Of course, some EE-only parameters (e.g. non-standard Moon orbits, or non-standard Sun and Moon textures) cannot be rendered in WL mode, but it won't totally break your personal experience, and won't affect at all the "shared experience" as seen by other users around your avatar.


    That being unavailable, I would have thought that if, say, I am running a Windlight rendering viewer and visit a friend's skybox, and this friend has set an altitude-dependent environment, I would not see it: surely that breaks the shared experience rules?

    You still did not get what the rule about not "altering the shared experience of the virtual world" means: it does not imply that you should have the exact same rendering of a scene in all viewers in a given place, but just that, by its mere usage, one viewer won't break the experience as seen by other users around (as happened in the past with non-standard attachment points, which in turn prompted the implementation of that rule).

    • Like 1
  16. 19 hours ago, Aishagain said:

    I will just, and totally unofficially, restate that the use of EEP rendering was made mandatory by LL, and any viewer significantly deviating from the default system is deviating from the "shared experience" that LL is so fond of restating.

    Wrong!

    Once more, the "shared experience" rule (chapter 2k of the TPVP (*)) is only there to avoid breaking things (how they render or work) in viewers other than the one breaking that rule. This was the case, in a distant past, when a TPV implemented secondary attachment points (back then, there were far fewer such points, and each point could only have one object attached to it), which caused those supplementary attachments to appear floating around the avatar (or attached to their butt, which could look rather funny) in other viewers not supporting that non-standard feature. This is the one and only reason why this rule got implemented.

    On the other hand, LL could not care less about what you render in your own viewer (as long as you do not come back to them to complain about a "bug" when it is just an unusual feature their official viewer does not implement).

    If you wish to render with DoF or without, with shadows or without, with ALM or without, with a fixed, custom or a parcel environment, with an EE or WL renderer, you can do it without any issue whatsoever.

    About EE and WL, the Cool VL Viewer offers you the choice between both sets of shaders (and their matching renderer), and this does not break the "shared experience" of other avatars around yours, since this is exclusively a viewer-side feature that does not have any impact outside your own screen.

    And if you want an example of such a viewer-side, non-"rule-2k-breaking" feature in Firestorm, there is the parcel Windlight settings based on a string in the parcel description (which LL later elaborated into the Extended Environment); this feature was not supported by LL's viewer, and yet LL never invoked rule 2k...

    (*) "2k. You must not provide any feature that alters the shared experience of the virtual world in any way not provided by or accessible to users of the latest released Linden Lab viewer." (emphasis mine).

    • Like 2
  17. 16 hours ago, Dahlia Bloodrose said:

    In cinematic capture mode, the viewer would let you log in as a bot that automatically followed another avatar.  The avatar could be a floating cinematic camera (or invisible, I suppose but that's creepy).  The key thing is that cinematic capture mode would not have to do any rendering, at all.  No processor cycles would be spent on anything other than capturing and logging the traffic required for rendering.


    In playback mode it would let you pick time slices and render them based on the captured stream.  After the fact camera movement would be entirely possible (you wouldn't be entirely locked into the original camera angles).

    Problems would have to be solved first for the "capturing and logging the traffic required for rendering" step: you must understand that the viewer does not grab (nor gets sent) everything around your avatar, but only what it needs to render the scene; this depends on your avatar position, the configured draw distance, and the camera angle and focus point. It means that your wish to allow camera movements on replay is not really feasible "as is" (though one could imagine continuously rotating the camera around in capture mode to trigger "interest list" updates covering all surrounding objects, obtaining a 360° field of view).

    You would also need to store permanently (as part of the captured data) all the textures, meshes and animation data, but also particle system parameters, environment data changes, etc. I am not sure how this would be considered from a legal point of view, but it might be perceived as a form of content ripping/copy-botting (and could certainly be abused in such a way)...

    I am skeptical about the feasibility of such a project (and not so much on the strictly technical aspect as on the legal one).

    As for the benefits in "lag" terms on replay, you might be disappointed in the end, for the replay viewer would still need to decode textures, meshes and object data at the proper LOD, run animations, etc., meaning the main loop won't be any faster than what happens after you have everything downloaded and cached with a normal viewer (at which point, the exact same amount of time would be spent in your replay viewer and in a normal viewer to render the scene)...

  18. 37 minutes ago, Semirans said:

    Where do I find the "texture console"?

    It depends on the viewer... Normally somewhere in the Advanced menu ("Advanced" -> "Consoles" -> "Texture console" for the Cool VL Viewer). The keyboard shortcut for the toggle is usually CTRL SHIFT 3.


    The items rez properly; it is only after I "refresh or reload" the textures that this issue happens. If I open the "edit" window or reopen the texture, it resolves, until I refresh the texture again.

    Which would indeed point to a lower LOD selection, maybe because of the discard bias. Reloading the texture or editing the face causes the best LOD to be forced, regardless of the bias... As soon as the Edit floater is closed, or the importance to the camera is reevaluated (as a result of a camera zoom change, for example), another LOD may get selected...

    You may also use the "LOD info" debug feature ("Advanced" -> "Rendering" -> "Info displays" -> "LOD info" for my viewer) to see what LOD the textures are rendered at; beware, it is "spammy", and you will likely have to remove all the attachments you are wearing but the one you want to test, to see the proper hover text without zooming (zooming would affect the LOD, see below).


    I am confounded about why it should suddenly start happening a week ago and why it happens on multiple computers.

    A possible explanation would be that you changed your default camera settings (the farther the camera is zoomed out from your avatar, the lower the selected texture LOD)...

  19. 2 hours ago, animats said:

    I'm working on an experimental rendering part of an SL viewer, using the latest and greatest technologies. Some TPV developers and graphics Lindens know what I'm trying and have seen video, but I'm not showing anything publicly. If you're seriously interested and understand this kind of thing, send me a message and we can talk. This is a high risk approach; it might work, or it might not. So far, frame rates are great because the GPU and a refresh thread are doing all the work. Update rates not so much, because the asset fetch and LOD system are still very basic. It's just move and view; you can't do anything in world yet.


    while an aggressive SL viewer would happily suck assets from the asset server at well over 50Mb/s if allowed to do so.

    You got me curious... A Vulkan renderer?... I'd love to try and adapt it to my viewer, which you would likely classify as "hyper aggressive", since I regularly see it suck up assets (meshes and textures, mostly) at over 250Mbps (true TCP/IP observed bandwidth) on a 1Gbps (ATM bandwidth) FTTH link after TPs into un-cached regions...

  20. 20 hours ago, Semirans said:

    This issue happens with a basic cube that I rez in game and attach to my avatar. I've also been making some of these items for 3 years and never had this problem, so I don't think my particular issue is related to badly designed products. Thank you for your input, however.

    It is possible that your viewer is badly configured, or that it has too little memory to rez textures at the proper level of detail. Watch the "discard bias" number in the texture console. Ideally (when properly configured and with enough available memory), it should be 0. A higher bias means lower texture LODs (i.e. blurry textures) on rezzed objects.

    This said, have a try with the Cool VL Viewer, and see how it fares...

    • Like 1
  21. The Cool VL Viewer is (and has always been) primarily a Linux viewer, since 2007.

    It can do everything the official SL Windows viewer can do (and more, such as Lua viewer-side scripting), including uploading meshes with physics decompositions (based on HACD, which replaces Havok for this task), with the sole exception of the "path finding capsule" visualization (because there is no Open Source alternative to the closed-source Havok code it uses).
