Henri Beauchamp

Resident
  • Posts: 1,306
  • Joined
Everything posted by Henri Beauchamp

  1. The cache location is a global setting, so it cannot be used to keep a different cache location for each running instance of the same viewer "brand".
  2. The problem with running multiple simultaneous instances of the same viewer is that the texture cache gets shared between them, and care must be taken that only one of the running instances can write to it (normally the first launched instance gets cache write rights, while the others can only read it) in order to avoid conflicts; depending on the viewer, the detection of other running instances may be glitchy or plainly bogus (it relies on marker files, their stamping and their locking). This is why LL does not support multiple running instances in their viewer by default. I worked a lot on this for the Cool VL Viewer, so you could try it and see whether it solves your issue. Another solution is to use a different viewer for each of your simultaneous logins, since each viewer "brand" uses its own cache files/directories and each will therefore write to a different part of your disk without any risk of collision. Of course, this means you will eat up much more disk space for the caches...
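
For what it is worth, here is a minimal sketch of the kind of marker/lock file mechanism described above, granting cache write rights to the first launched instance only (my own POSIX-only illustration with an example lock path, not the actual Cool VL Viewer code):

```cpp
// Hypothetical sketch (POSIX only) of "first instance gets cache write
// rights": each instance tries to take an exclusive, non-blocking lock on a
// marker file in the cache directory; only the first one succeeds and opens
// the cache read-write, the others fall back to read-only access.
// This is NOT the actual viewer code.
#include <sys/file.h>   // flock()
#include <fcntl.h>      // open()
#include <unistd.h>     // close()
#include <cstdio>

// Returns true when this process obtained the exclusive cache write lock.
bool acquireCacheWriteLock(const char* marker_path) {
    int fd = open(marker_path, O_CREAT | O_RDWR, 0644);
    if (fd < 0) return false;
    if (flock(fd, LOCK_EX | LOCK_NB) != 0) {
        close(fd);      // Another running instance already owns the cache
        return false;
    }
    // Keep 'fd' open for the whole session: the lock is released automatically
    // when the process exits (even after a crash), avoiding stale markers.
    return true;
}

int main() {
    // The path is an example only; a real marker would live in the cache dir.
    bool writer = acquireCacheWriteLock("/tmp/example_viewer_cache.lock");
    std::printf("This instance opens the texture cache %s.\n",
                writer ? "read-write" : "read-only");
    return 0;
}
```

An OS-level lock like this has the advantage over a plain marker file that it is released automatically when the owning process dies, so it cannot go stale after a crash.
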
  3. This is an old race condition (between server messages and the viewer renderer) that was introduced together with the "interest list" changes, years ago... There are workarounds for it, but not all viewers have them. The Cool VL Viewer provides "Refresh visibility of objects" in its "Advanced" -> "Rendering" menu (the shortcut is ALT SHIFT R), and auto-triggers this workaround on login, far TPs and region border crossings, meaning you rarely ever need to trigger it manually...
  4. Calling cards for your friends are re-created when missing:
     • At each login with v3+ viewers (IIRC, that auto-recreation feature appeared somewhere in the v2.x viewers; I am not sure which minor "x" number exactly), meaning that for these viewers, the calling card for a new friend will only appear after you relog, once you have befriended them.
     • On demand (via the "Resync friends calling cards" entry in the inventory floater's "File" menu) in recent Cool VL Viewer releases (the Cool VL Viewer *never* modifies your inventory all by itself, and always notifies you when it does so implicitly as the result of an event, such as during rebakes for the COF: a status bar icon appears when the COF gets modified).
     However, such calling cards are considered "lesser" ones (with fewer options available from their context menu in the inventory of old v1 viewers): instead of bearing the UUID of your friend as their creator, like a "genuine" calling card given by the sim server to newly befriended residents (and which you can still manually give to other residents via the avatar/name tag context (pie) menu), they bear that UUID in their description and are marked as created by you (i.e. with your avatar UUID as their creator). Regardless, your friend should still appear in your friends list even after they have changed their name (the UUID never changes, and the friends list is based on UUIDs). If you still have an old calling card ("genuine" or "lesser" alike) for one of your renamed contacts (whether a resident in your friends list or not), double-clicking on it should still automatically open their profile (with, of course, their new name in it). If you do not have an old calling card (or the UUID) for your contact, well... there are ways to find their new name, but I won't post them publicly since that would give stalkers an easy way to resume their harassment.
  5. And what about using the new mesh body with the SL default avatar head ? I am currently using this solution, i.e. a BoM-compatible mesh body (because it looks much nicer) and the SL avatar head, which is the exact same head (same sliders) I made on day one in SL (because I do not want my avatar's traits to change). But it relies on the body having a neck part that can connect properly with the SL avatar head/neck (and bake with it, via BoM)... Is the NUX (*) avatar able to do that ? (*) Or LiNUX in my case... :-D
  6. I myself stopped providing (and allowing to compile) 32-bit builds of the Cool VL Viewer, because:
     • After a poll, it turned out that no Cool VL Viewer user was using the 32-bit builds any more.
     • With the migration from the LL-patched boost::coroutine code to the newer boost::fiber code, 32-bit Linux builds always crashed on startup (I suspect a bug in the boost context library, but since no one uses it any more on 32-bit Linux, it never got fixed).
     Frankly, it is time to say goodbye to 32-bit hardware (which would be based on a CPU as old as, or older than, an Athlon XP or an old single-core Pentium 4), and to 32-bit OSes (which cannot provide more than 3GB of memory to any running application, at best: way too little for today's SLing).
  7. The Cool VL Viewer (please, name it right: "Viewer" is *part* of the name, just like "Tower" in Eiffel Tower) has existed for the past 15 years, and will keep existing for years and decades to come (well, unless I die sooner, or SL closes down...). I never ever said I "won't support EEP": I really do not know where you got this fake news from !!! In fact, my viewer was the very first TPV to implement it, but it had (and still has, for the legacy v1.28.2 branch) support for a dual renderer that can do either WL or EE (*), at the flip of a check box (the first experimental EE branch was released on 2020-05-30, and before that the stable branch could deal with and convert EE settings to WL since 2019-08-03) ! The reason for keeping the WL renderer for so long, alongside the EE one, was that when it got released (or rather rushed out) by LL, the EE renderer was super slow and very glitchy (occlusions were largely broken, for example, and only got properly repaired in the performance viewer). Things have changed now, and the current stable branch of the Cool VL Viewer (v1.30.0) dropped the WL renderer and only runs the EE one, in its "performance viewer" incarnation, which is also faster than WL... -------------- (*) EE now and not EEP, since the "P" was for "Project", and since Extended Environment is now live, it is no longer a project... 😛
  8. Sadly, and given the model of the PC (IDE drives !), the cute kitties have already exhausted their nine lives and are in cat paradise now... And blockchain did not exist back at that PC's release date (that PC would even have had immense difficulties dealing with a blockchain). Luckily for the kitties, they did not have to face the Nefarious Fake Trading hype ! 😝
  9. Yep, Vulkan Beta Driver 515.49.05 for Linux works fine ! \o/
  10. I reported the issue with the Linux driver v515.48.07 a couple of days after it was released (it came out on the 31st of May), which is the time it took me to install it and notice the bug; I only use ALM occasionally myself, and mainly to test the Cool VL Viewer code in this mode after a change I made, so the bug went unnoticed by me at first. NVIDIA replied to my report 6 days later (they are usually faster than this, but, well...). The bug is referenced as 3676172 in their (internal) bug tracker. In the meantime, and for Linux, you may still use the 515.43.04 beta driver, which works fine.
  11. Premium Plus is totally unappealing to me. Waaaaay too expensive for very moderate "benefits", especially after you add the VAT (that's $300 a year with VAT in the EU)... I'll keep my Premium plan, thank you !
  12. Use a viewer that properly loads textures and has a texture memory setting (found in the Graphics settings of the Preferences floater) going above the 512MB maximum offered by LL's viewer... You do not need 30 minutes to load everything (2 minutes should be more than enough). The Cool VL Viewer is a super fast rezzer (with additional texture rezzing boost features: see the Advanced -> Rendering -> Textures sub-menu). Be careful however: increasing the texture memory too much may also cause large hiccups and slowdowns (and sometimes seconds-long freezes !). I usually use 2048MB to 3072MB maximum for texture memory. Depending on how texture-heavy the sim is, you might have to adjust the draw distance so that all textures can load within the configured memory while keeping the texture discard BIAS at 0.00 in the texture console (CTRL SHIFT 3 to open it, normally, or find the entry for that console in the advanced menus, depending on the viewer); if the BIAS increases above 0.00, some textures will be downgraded to a lower Level Of Detail (LOD) and appear blurry (that's why LL's viewer cannot cope with only 512MB of texture memory).
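
To illustrate the draw distance / texture memory / discard BIAS relationship described above, here is a rough sketch of how a discard bias can be derived from a memory budget (a simplified illustration of the general technique, with made-up numbers, not the actual viewer code):

```cpp
// Hypothetical illustration of a texture discard bias: when the textures in
// view no longer fit in the configured memory budget, a global bias is raised,
// which drops every texture to a lower mip level (each level divides the
// memory used by a texture by 4). This is NOT the actual viewer code.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Texture {
    int full_width, full_height;   // Full-resolution dimensions
    size_t bytesAtBias(float bias) const {
        // Each whole unit of bias halves both dimensions (one mip level less).
        int levels_dropped = static_cast<int>(bias);
        int w = std::max(1, full_width >> levels_dropped);
        int h = std::max(1, full_height >> levels_dropped);
        return static_cast<size_t>(w) * h * 4;   // RGBA, 4 bytes per texel
    }
};

// Raise the bias (in whole mip levels here, for simplicity) until the
// textures fit within the budget, or a maximum bias is reached.
float computeDiscardBias(const std::vector<Texture>& textures, size_t budget_bytes) {
    for (float bias = 0.f; bias <= 4.f; bias += 1.f) {
        size_t total = 0;
        for (const Texture& t : textures) total += t.bytesAtBias(bias);
        if (total <= budget_bytes) return bias;
    }
    return 4.f;  // Worst case: everything heavily downgraded (very blurry)
}

int main() {
    std::vector<Texture> in_view(600, Texture{1024, 1024});  // A texture-heavy scene
    size_t budget = 512u * 1024u * 1024u;                    // 512MB budget
    std::printf("BIAS at 512MB: %.2f\n", computeDiscardBias(in_view, budget));
    std::printf("BIAS at 3072MB: %.2f\n", computeDiscardBias(in_view, 6 * budget));
    return 0;
}
```

With the larger budget the bias stays at 0.00 and the textures keep their full resolution; with the 512MB budget they get dropped by two mip levels and turn blurry, which is the effect described above.
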
  13. For Linux, you may use the last (working) beta version (515.43.04), or one of the legacy versions (for older cards): 470.129.06, 390.151, etc... The list is available here.
  14. Stop it, please... You are being thick. I do not tell you how your viewer works, so please don't tell me how mine has been working for the past 15 years !... NO, my viewer DOES NOT list ANY instant message in the chat history floater. It *may* list them in the chat console in the "IM: First Last: Text" form, but this is just a configuration option ("Preferences" -> "IM & Logs" -> "Include IM in chat console"), and by the way this option existed in v1.1x viewers as well.
  15. Typically a race condition between viewer spatial partition updates and server "interest list" message sending... This may also be worked around by right-clicking on the empty place where you know some object should be showing: it instantly causes the object to "pop" into existence. This race condition bug has existed since the "interest list" got implemented by LL. Some viewers (the Cool VL Viewer is one of them, of course) allow you to "rebake" or "refresh the visibility" of objects: ALT SHIFT R is the shortcut for the Cool VL Viewer, which also auto-triggers this "object rebake" after login, far TPs and region border crossings (since these are also victims of this race condition). However, I noticed that LL's latest "performance viewer" was more prone to NOT rezzing some objects until you got your camera closer to them (I noticed this when comparing FPS performances with my viewer: the texture consoles showed that only half the number of textures were loaded by LL's viewer compared with mine, and looking closer, I indeed found out that some objects simply were not rezzing at all until I zoomed in on them)...
  16. Heat should make no difference whatsoever in regard to the number of rendered objects: it is the CPU that selects which object data is sent to the GPU (even if the latter is also used to "cull" objects hidden behind others, that culling result is still processed by the CPU side of the code before the final draw calls are sent to the GPU). GPU overheating would cause bugs, either in the form of graphics "artifacts" (bogus textures on objects, distorted objects, parasites/noise on the screen) or outright crashes. Note however that all modern GPUs will simply and automatically lower their frequency when reaching temperatures close to their "absolute maximum ratings"; artifacts seen nowadays are usually the result of excessive overclocking of the GPU or VRAM, not of overheating. If your GPU overheats (or rather heats up too much for your taste), then consider improving the air exhaust from your computer (for desktop computers), lowering the operating frequency of your GPU (for laptops), or using the frame-limiting features of your viewer (the simplest and most performance-hurting method being to enable V-sync: it sadly slows everything else down, and not just the frame rate); I recently added a frame rate limiting feature to the Cool VL Viewer because it became so fast that it made my GPU fans spin noisily for no purpose (rendering at 300+ fps is not really useful). This feature is however coded smartly enough to avoid hurting network/rezzing performances (the "free time" being used to rez textures and pump the network instead of having the CPU just waiting and twiddling its thumbs for the next V-sync pulse).
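
Here is a minimal sketch of such a "smart" frame limiter (my own illustration with stand-in work functions, not the actual Cool VL Viewer code): instead of sleeping away the time left before the next frame deadline, the loop spends it on small units of useful work and only sleeps when there is nothing left to do:

```cpp
// Hypothetical sketch of a "smart" frame-rate limiter: the time left before
// the next frame deadline is spent on useful work (texture decoding, network
// pumping...) instead of being slept away. This is NOT the actual viewer code.
#include <chrono>
#include <cstdio>
#include <thread>

using Clock = std::chrono::steady_clock;

// Stand-ins for real viewer work, here only to make the sketch compile and run.
static void renderOneFrame() { std::this_thread::sleep_for(std::chrono::milliseconds(2)); }
static bool doOneUnitOfBackgroundWork() {
    static int pending = 1000;               // Pretend-queue of textures to decode
    if (pending <= 0) return false;
    --pending;
    std::this_thread::sleep_for(std::chrono::microseconds(200));
    return true;
}

int main() {
    const double max_fps = 60.0;
    const auto frame_budget = std::chrono::duration_cast<Clock::duration>(
        std::chrono::duration<double>(1.0 / max_fps));
    auto next_deadline = Clock::now() + frame_budget;
    for (int frame = 0; frame < 120; ++frame) {  // Two seconds worth of frames
        renderOneFrame();
        // Use the time left in this frame for useful work rather than idling.
        while (Clock::now() < next_deadline) {
            if (!doOneUnitOfBackgroundWork()) {
                std::this_thread::sleep_until(next_deadline);  // Nothing left: spare the CPU
                break;
            }
        }
        next_deadline += frame_budget;
    }
    std::printf("Done: rendered 120 frames capped at %.0f fps\n", max_fps);
    return 0;
}
```
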
  17. Once and for all: there is no such mix ! But better than words, here is a screenshot with a legend for the UI elements: As a genuine role player, I make a clear distinction between chat (for In Character exchanges between avatars) and IMs (Out Of Character discussions between role players), and separating the two kinds of textual exchanges is essential. It should also be noted that the v1 UI is by far the one making the best use of screen real estate, with high information density and small floaters (unlike the super-cumbersome v2 UI floaters).
  18. The "not made at home" syndrome which has always struck LL... They hardly ever accept a contribution "as is", even one better coded and more performant than their own. 🙄 For the mini-map, I would cite the better (way faster) algorithm for drawing parcel boundaries, for example... Instead of reusing what has existed for years in some TPVs (IIRC, this was first seen in Catznip), i.e. a fast image overlay recalculated only every few frames, LL implemented them using a slow algorithm that redraws every single border segment with GL lines at each frame !... Just laaaaaaaame !
  19. This is just a configuration choice: you may perfectly well keep IMs out of the chat floater and console (and yes, this is what I have always done)... Note also that the Cool VL Viewer offers a great deal of configuration options for the chat and IM floaters (e.g. with or without a chat input line for the chat floater) and the chat console. Also, there are two kinds of "v1 chats": the v1.18.0 one, i.e. the "good one" (the one I have always been using for the Cool VL Viewer), with separate Friends and Groups floaters and no chat input line (v1 viewers have a separate chat input line, so you really do not need one in the chat floater), and the ruined v1.18.2 (first "voice viewer") "chatterbox", which is just as horrible as v2 and newer... The v2 chat is a horrendous thing that I thoroughly hate !... To each their own taste (or lack thereof)... 😛
  20. Which prompted a change in how my viewer (the Cool VL Viewer, a v1 viewer which did not have the v2+ viewers' auto-creation of calling cards on login) handles calling cards. Quoting the release notes:
  21. If you recently updated to NVIDIA's driver 515.48.07 for Linux, be aware that it is totally borked and fails to render properly in ALM mode, with culling issues (disappearing or flickering objects & avatars, bad shadows and reflections, etc.). The bug has already been reported on NVIDIA's forum by a couple of people. Until NVIDIA fixes it, revert to the previous driver (515.43.04, the former beta driver, which works just fine).
  22. Check with the task manager for background tasks that would eat up your CPU and/or GPU: right click on the task bar, choose "Task manager" (or whatever it is called in your language), then "More details" in it (if not already in detailed view). You will get something that will look like this (in French here, sorry): Sorting by clicking on the column title for the CPU or GPU will allow you to see the top consumers; here, there is just the SL viewer running and consuming lots of CPU and GPU, and this is what you should ideally see as well. You may also use the "Performance" tab in the task manager, which will tell you how much of your CPU, GPU, memory, etc. is consumed in total (it will also allow you to check the actual frequency of your CPU, in case it fails to go into turbo mode due to a bad configuration or overheating, for example)...
  23. Your PC specs are just fine and should allow viewers (even slow ones) to run at 50+ fps. Check that there is not another program using the CPU and/or GPU in the background (mining malware for example)...
  24. The commit for SL-17244 (a crash fix) won't explain any slow down (and I certainly did not notice any slow down in my viewer after backporting it)...