Henri Beauchamp

Resident · 1,286 posts
Everything posted by Henri Beauchamp

  1. Wow, LL did update these, at (long) last !... Sadly, the Linux requirements are all wrong: Hey, LL, wake up !!! 64-bit Linux viewers have been available for years, and for many months now no TPV has bothered providing 32-bit Linux builds any more. You do not need any 32-bit compatibility environment installed (not even for SLVoice, which most TPVs now run in its 64-bit Windows incarnation under Wine, for lack of an up-to-date Linux SLVoice version). Nope !... Linux viewers need SSE3 as a minimum (see the sketch below for what such a check amounts to) because of a newer CEF version (currently 108, and soon 109 in the Cool VL Viewer Linux builds) than in the current Windows/macOS LL official builds, but the latter will soon need SSE3 too, when their old, bug- and security-hole-riddled CEF 91 version finally gets updated. Also: the Linux TPVs are compatible with all Second Life features, with one exception: the path-finding capsule display, which requires the proprietary Havok library. Could we please have a Linden fix these misleading Linux "requirements" ?
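For what it is worth, here is a minimal C++ sketch of what an SSE3 requirement boils down to at startup, using the compiler CPUID intrinsics; the function name is illustrative, not actual viewer code (SSE3 is reported in CPUID leaf 1, ECX bit 0):

```cpp
#include <cstdio>
#if defined(_MSC_VER)
# include <intrin.h>
#else
# include <cpuid.h>
#endif

// Illustrative helper: returns true when the CPU supports SSE3.
static bool hasSSE3()
{
#if defined(_MSC_VER)
    int regs[4];
    __cpuid(regs, 1);                   // CPUID leaf 1: feature flags
    return (regs[2] & 1) != 0;          // ECX bit 0 = SSE3 (PNI)
#else
    unsigned eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) return false;
    return (ecx & 1) != 0;              // ECX bit 0 = SSE3 (PNI)
#endif
}

int main()
{
    std::printf("SSE3 %s\n", hasSSE3() ? "supported" : "NOT supported");
    return 0;
}
```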
  2. Note that the "creator" of an item/object might not be the person to hold responsible (a mod-ok object could be modified by a griefer, made no-mod, and sent to you, and the creator name won't change, even if the original object was perfectly safe and legit while the modified object sent to you is nefarious): the "last owner", on the other hand, is definitely the field to look at, since this is the person who sent you that object.
  3. From the dialog screenshot, the listed executable path, and the OS version number, it is pretty obvious it is Windows 10... 😛 There are other questions to ask: Is this crash happening as a result of the usage (or closing) of the CEF browser plugin ? If not, is this crash happening in a specific place in SL, and what media are available/playing there ?
  4. This is a money debit/credit permission request for a scripted item: you do not need to "Open" such a scripted item to be asked for this permission. Vendor items of course use this permission so that their script can work and, from your screenshot, this is a Capservend vendor, so it is not surprising at all that it asks for this kind of permission. This is the problem, and if it is not a vendor you bought, you should indeed be wary about such objects: a griefer could just as well send you a scripted object disguised as a well-known vendor and steal your money as soon as you give it the permission. So, your reaction was quite suitable if you don't know where or from whom this object came. You cannot see the object's properties with the Edit tools floater when the object is no-modify (like this one) and you are not its creator. To see an object's creator for an item in your inventory, use the "Properties" entry in the context menu for the inventory item in question. To see it for a rezzed object, use the "Inspect" entry in the context/pie menu for that object. This is a viewer-side permission dialog (it is orange in v1 viewers, to easily distinguish it from the blue dialogs of scripted objects; v2+ viewers' dialogs are much less explicit in this respect). It is safe to "Deny" it, but closing it works the same.
  5. It uses basic C-library calls (or Boost library ones in some viewers, such as LL's, which might actually prove slower), which are indeed fast enough for such a small amount of file reads/writes as seen in the viewer (we are not in the same order of magnitude as what is seen with database servers, for example), especially once you push those file operations to a thread (see the sketch below). This said, the use of io_uring/IOCP could (slightly) improve things (but requires that your file operation can indeed be threaded/async, meaning it won't help for the synchronous reads/writes still sometimes needed in the viewer main thread)... With an SSHD, you may encounter two slow-down issues: 1.- When the SSD storage part of the disk is full, the controller will write to the hard disk platters instead, causing a noticeable slow-down in file operations. 2.- When the firmware managing the SSD storage part does not use the TRIM command after a sector gets erased, subsequent writes to that sector will be slower. It is easier to do proper TRIMing with genuine SSDs, since this can be configured at the OS driver level. For the viewer cache, and provided you have enough RAM (32GB or more), you could use a RAM-disk instead, which would spare you any slow-down and preserve your SS(H)D endurance.
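As an illustration of the point about threading file operations, here is a minimal C++ sketch using only standard library calls; the file name and structure are illustrative, not actual viewer code:

```cpp
#include <fstream>
#include <future>
#include <iostream>
#include <string>
#include <vector>

// Reads a whole file into memory; this runs on a worker thread.
static std::vector<char> readFile(const std::string& path)
{
    std::ifstream in(path, std::ios::binary | std::ios::ate);
    if (!in) return {};
    std::vector<char> buf((size_t)in.tellg());
    in.seekg(0);
    in.read(buf.data(), (std::streamsize)buf.size());
    return buf;
}

int main()
{
    // Launch the read asynchronously: the main thread stays free to render.
    auto pending = std::async(std::launch::async, readFile, "texture.j2c");

    // ... the main loop would keep rendering frames here ...

    std::vector<char> data = pending.get();  // join when the data is needed
    std::cout << "Read " << data.size() << " bytes\n";
    return 0;
}
```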
  6. "Modern" CPUs (and by "modern", I mean all CPU models released in the past 20 years or so) have an auto-throttling mechanism protecting them from damage due to overheating. A badly cooled CPU will just run slower... Recent CPUs (Anything-Lake for Intel, Zen 3-4 for AMD) are also factory-overclocked and heat up insanely fast (up to 95 to 105°C, depending on the CPU generation and "absolute maximum ratings") and, more often than not, are hitting that throttling limit: this is considered "normal" operating conditions by their makers... So, no, you do not risk damaging your CPU via overheating. However, any piece of silicon does age, and its performance degrades over time. This aging is faster as the operating temperature increases, but it also depends on other factors (such as maximum local currents for the electromigration effect, or the operating voltage for the charge trapping effect, etc.). I did see the effect of such aging on my good old Sandy Bridge 2500K, which I operated for 7 years (18h/day on average) at an overclocked frequency of 4.6GHz (i.e. +900MHz over the rated turbo frequency) locked on all cores: after all these years, it started hanging at idle (!) time, roughly once per day, and I had to reduce the frequency to 4.5GHz to prevent this from happening... Today, yet more years later, it is still working just fine at 4.5GHz in my second PC. In short: it is likely that even an "overheating" CPU will get replaced by a new model (because it simply became too weak for your needs) before it even starts to show signs of wear. But good cooling is always a Good Thing (TM) to preserve your CPU... or to allow pushing it a few dozen MHz higher ! 😜
  7. More than the height of your avatar (mine is also as tall as I am myself in RL, at 1.77m, and that's about the only characteristic my avatar and I share, besides the gender), you might be encountering issues due to your overall avatar's look: big eyes (baby-like or anime-like ones), a rounded baby-like face, a childish body frame, childish clothing, etc., can quickly make people categorize your avatar as an underage one, regardless of its actual size. Not to mention the role-played behavior (should you role-play your avatar as a childish person, you would of course have it categorized as a child). This said, I did have issues with my own 1.77m-tall avatar in the distant past (during the "age-play" issue in SL), and had to clear things up in my avatar profile (avatar size, portrayed age, actual player age) to help things along. Nowadays, I am no longer faced with such issues, but it may also be related to the fact that my avatar itself is 16 SL-years old (meaning people won't worry any more about interacting with an underage real person).
  8. The problem is that the UI itself is rendered using OpenGL... You'd end up with two OpenGL-rendered windows. It's doable, but wasteful in resources. When I first launched a SL viewer, back in 2006, the first thing that shocked me was the fact that the whole UI was drawn in OpenGL... Back then, with the weak GPUs we had, it was extremely wasteful, and UI rendering could be seen in the "Fast timers" as using up a significant part of the render time (especially when you consider that each text character in the viewer UI is in fact a "glyph" that must also be rendered as some sort of tiny texture under OpenGL). I would have expected the viewer to use the OS or its toolkit(s) (GTK+ or Qt under Linux, for example) to draw its "floaters", menus, etc., which would also have allowed moving the said "floaters" out of the 3D view on your screen: those take no GPU power and very little CPU power to draw, since they are basically "static" and only updated whenever the user interacts with them (or the application needs them updated to reflect changing info). However, there are benefits for games in using real-time rendered UI elements:
     • You can make up specialized UI elements that do not exist in the OS- or toolkit-provided UI (e.g., for SL, pie menus, multi-sliders, or the XY vector and track ball recently introduced with the EEP viewer).
     • Your UI elements can be updated in real time (at each frame if need be), without needing any threading and cross-thread syncing mechanism (if you look at how the file selector is coded in LL's viewer, which uses the OS/toolkit one, and how it is coded in the Cool VL Viewer, where I implemented an XUI file selector, you will see how much more complex (and glitchy/hacky) the former is compared with the latter).
     • It is easier to create interactions between the UI elements and the 3D world objects (pie/context menus, drag and drop from inventory to objects, etc.).
     • The UI can be kept uniform across all the OSes your game runs under (no special case needed in the code for each OS, and a more uniform/easier experience for the users... once they get acquainted with your UI).
There was one attempt, years ago, by a TPV developer to use toolkit-based (GTK, IIRC) floaters, but it never really developed into something usable. There have also been a couple of attempts by LL to reduce the UI drawing overhead: UI floater occlusion (so that the objects hidden by the floaters do not need to be rendered at all) and, recently, sparse UI updates (so that the UI can be drawn in a separate buffer and only updated when actually needed, that buffer being overlaid over the 3D view at each frame; a minimal sketch of that idea follows this post). Neither of those led to adopted code, probably because the performance benefits are not really worth the added code complexity (today's GPUs can draw the UI elements really fast anyway)... As for the screen estate issue, your best bet is to use a viewer with a dense UI layout... My main grief against LL's v2+ UI is precisely that it wastes an awful lot of 3D view space by using enormous floaters with a low information density. That's why I find the v1 UI I kept using (and kept improving too) in the Cool VL Viewer way more suitable (not to mention the productivity aspect, with fewer clicks and mouse moves per performed action). But you can find at least one viewer (Black Dragon) with a much denser v3+ UI, which could suit your needs as well, if the v1 UI feels/looks "too old" for you... 😛 Not sure what you mean here...
The GPU brand is not an issue at all for a two-window 3D view...
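Here is what the "sparse UI updates" idea mentioned above could look like, as a minimal sketch rather than the actual code from LL's experiment: the UI is redrawn into an off-screen buffer only when flagged dirty, then copied over the 3D view every frame. It assumes a GL 3.0+ context with GLEW already initialized; draw3DScene() and drawUI() are assumed hooks, and the plain blit is a simplification (a real overlay would alpha-blend a screen-sized textured quad, so the 3D view shows through the UI transparency):

```cpp
#include <GL/glew.h>

void draw3DScene();   // assumed hook: renders the 3D view every frame
void drawUI();        // assumed hook: renders the whole UI with GL calls

static GLuint uiFBO = 0, uiTex = 0;

void initUIBuffer(int width, int height)
{
    glGenTextures(1, &uiTex);
    glBindTexture(GL_TEXTURE_2D, uiTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glGenFramebuffers(1, &uiFBO);
    glBindFramebuffer(GL_FRAMEBUFFER, uiFBO);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, uiTex, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void renderFrame(bool uiDirty, int width, int height)
{
    draw3DScene();                       // full 3D render, every frame

    if (uiDirty)                         // re-draw the UI only when needed
    {
        glBindFramebuffer(GL_FRAMEBUFFER, uiFBO);
        drawUI();
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }

    // Overlay the cached UI buffer onto the default framebuffer. A plain
    // blit overwrites the pixels; a real implementation would instead draw
    // a screen-sized textured quad with alpha blending enabled.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, uiFBO);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
}
```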
  9. You definitely did not try the Cool VL Viewer, which is immune to this issue (or, more exactly, it has a workaround for it)...
  10. In the viewer release notes, I can see that a new PBR project viewer release is available; however, when clicking on the link to get to it, I get an XML error page instead: You may however work around it by replacing the build number in the link for the former release (link in Arton's post above) with the new one listed in the viewer release notes page, which, for 64-bit Windows, gives this, for example. Yet, could someone at LL fix the PBR viewer release notes page, please ?
  11. It's way more responsive, and it even speeds up texture rezzing (since it uses the "free" time to do more tasks at the CPU level while it waits for the minimum delay between two frames to elapse); it only actually sleeps the CPU when there is no more work to do (see the sketch below). 😛
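A minimal sketch of such a "smart" limiter loop, with illustrative hook names (renderFrame draws the scene; doOneTask performs one small unit of deferred work, e.g. one texture decode, and returns false when nothing is pending); this is not the actual viewer code:

```cpp
#include <chrono>
#include <functional>
#include <thread>

using Clock = std::chrono::steady_clock;

void frameLoop(double maxFPS,
               const std::function<void()>& renderFrame,
               const std::function<bool()>& doOneTask)
{
    // Per-frame time budget derived from the fps cap.
    const auto budget = std::chrono::duration_cast<Clock::duration>(
        std::chrono::duration<double>(1.0 / maxFPS));
    for (;;)
    {
        const auto deadline = Clock::now() + budget;
        renderFrame();
        // Spend the "free" time on useful work instead of sleeping it away.
        while (Clock::now() < deadline && doOneTask())
            ;
        // Only sleep the CPU once there is nothing left to do.
        std::this_thread::sleep_until(deadline);
    }
}
```

The key design point is the order of operations: work first, sleep last, so the frame cap never starves the background tasks.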
  12. From the description of the symptoms, I think it is likely due to the RenderMaxNodeSize limit: this limit was implemented to prevent the huge memory consumption seen with some griefing objects, but it also affects places where a lot of highly detailed meshes (meshes with a lot of vertices) are placed close together. In this case, the "spatial group" in which they are placed by the render pipeline (this is a group of objects that get rendered together, regardless of their type, so you can also find non-mesh objects grouped with mesh ones) can exceed the RenderMaxNodeSize limit, causing the entire group to get skipped by the render pipeline. Workarounds range from reducing the RenderVolumeLODFactor (so that fewer vertices get rendered, with the hope that it will suffice to stay below the limit) to increasing RenderMaxNodeSize (which works better, but will cause a higher memory consumption at both the CPU and GPU levels). You may also try the Cool VL Viewer: it has a workaround for this that gets enabled whenever you push the "Mesh objects boost factor" multiplier above 1.0 (the corresponding slider is in the Graphics preferences); this multiplier is used both to multiply RenderVolumeLODFactor for meshes only (*), and to give mesh object vertices a "discount" when computing the total amount of vertices in the spatial group before comparing it to RenderMaxNodeSize (see the sketch below)... (*) Meaning you can keep a normal value for RenderVolumeLODFactor instead of having to push it to unreasonable values just to render some badly designed meshes, affecting non-mesh objects as well as a result (which is bad, and adds to the max render group size issue).
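To make the workaround concrete, here is a minimal sketch of the budget check with the mesh "discount" idea; the names and the exact discount formula are illustrative, not the actual viewer code, and the real RenderMaxNodeSize accounting may well differ:

```cpp
#include <vector>

struct Drawable
{
    int  vertexCount;
    bool isMesh;
};

// Returns true when the spatial group fits under the node size limit.
// meshBoost > 1.0 grants mesh vertices a discount, so dense mesh clusters
// do not get the whole group skipped by the render pipeline.
bool groupFitsBudget(const std::vector<Drawable>& group,
                     int renderMaxNodeSize, float meshBoost)
{
    float total = 0.f;
    for (const Drawable& d : group)
    {
        // Hypothetical discount: divide mesh vertex counts by the boost.
        total += (d.isMesh && meshBoost > 1.f)
                     ? d.vertexCount / meshBoost
                     : (float)d.vertexCount;
    }
    return total <= (float)renderMaxNodeSize;
}
```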
  13. Once you have a powerful enough GPU, the bottleneck in the SL viewer renderer is at the CPU level and, more exactly, at the single-core performance level (because the renderer is mono-threaded, even though recent graphics drivers help a bit by using one or two threads at their own level, loading one more core at 50-100%). So the higher the single-core performance of your CPU, the higher your frame rates, and it is almost exactly proportional. As for the GPU load, there is no doubt that in the simplest scenes even the most powerful GPU will be loaded at 100%, because then the CPU (which has almost nothing to do for such simple scenes) can throw frames at the GPU as fast as it can absorb them, and your FPS rates will skyrocket (with the Cool VL Viewer, I am seeing frame rates in excess of 800fps in my skybox, with my brand new RTX 3070 and my middle-aged 9700K, causing a 200+W power draw from the GPU). The solution is to use a (smart, if available in your viewer) frame rate limiter to avoid ludicrous fps rates and the resulting excessive power consumption/heating/noise. Limiting the fps rate to your monitor's vertical refresh rate is the way to go, but do avoid VSync, which is by far the worst way to limit the frame rate.
  14. It happens for the sim where you log in: if you log in to a sim that did not get restarted in the past 2 days and 21 hours, there will be missing "online" friends in your friends list, wherever those actually-online friends are in SL (including in the same sim !)...
  15. That won't work... The Cool VL Viewer build system is a standalone one, while the other viewers all rely on LL's fancy "autobuild" (not the GNU autobuild) Python module... I made my viewer build system so that it is easy to use, without the need for exotic dependencies, and once you have the basic required tools (compiler, cmake, Python), all you have to do (whether under Linux, Windows or macOS) is to type a single command in a terminal to build it. 8GB of RAM should be enough to build it... But you can also use the published builds for all OSes... It will also run much faster than any other viewer on your potato computer. 😄
  16. For Windows, the Dullahan plugin of Firestorm 6.6.3 (67470) is exactly the same as LL's, as is the case for most other TPVs; it is a version based on an old CEF 91 binary that has not changed for over a year now. I just tested Firestorm 6.6.3 on a Windows 7 Ultimate 64-bit ESU partition and did not encounter any problem, apart from a bug in the browser floater, which sometimes fails to load the page (as for the Firestorm Wiki in the help menu) and requires changing the address manually in the URL line, or via its drop-down menu, to trigger the loading, after which everything works correctly in the browser. But the login screen page, the Web search, etc., load correctly right from opening. No problem either with the redirection to the system browser (Pale Moon in my case). So the problem you are encountering is probably due to your Windows 7 installation. It could be a corrupted CEF cache; to reset the cache, delete the folder: C:/Users/<your login>/AppData/Local/Firestorm_x64/cef_cache/ (a tiny sketch of doing this programmatically follows below). If that still does not work, and for further investigation, you will need to ask the Firestorm team for help, providing them with the necessary data (in particular the log: C:/Users/<your login>/AppData/Roaming/Firestorm_x64/logs/Firestorm.log). You could also take a look at the "Windows Logs" in the "Event Viewer" under the "Administrative Tools" of the Windows Control Panel: the error preventing CEF from working properly might be listed there...
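For reference, a minimal C++ sketch of clearing that cache programmatically with std::filesystem (run it while Firestorm is closed; the path matches the one given above, resolved via the standard %LOCALAPPDATA% environment variable):

```cpp
#include <cstdlib>
#include <filesystem>
#include <iostream>

int main()
{
    namespace fs = std::filesystem;
    // %LOCALAPPDATA% resolves to C:/Users/<your login>/AppData/Local
    const char* local = std::getenv("LOCALAPPDATA");
    if (!local) return 1;
    fs::path cache = fs::path(local) / "Firestorm_x64" / "cef_cache";
    std::error_code ec;
    auto removed = fs::remove_all(cache, ec);  // deletes the whole folder
    if (ec)
        std::cerr << "Failed: " << ec.message() << '\n';
    else
        std::cout << "Removed " << removed << " entries\n";
    return 0;
}
```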
  17. This bug is tracked with this JIRA issue: BUG-232037. It happens systematically 2 days and 21 hours after each sim restart in my home sim, and is darn annoying (since it means the sim gets restarted thrice a week: once on the rolling restart and twice after it, to "fix" this missing-friends-on-login issue). I regularly nag Lindens (@Rider Linden and @Simon Linden) at the SUG. Sadly, and despite the added burden it imposes on SL support (with the resulting many sim restart requests), it does not seem to be very high on LL's priority list. ☹️ Feel free to nag with me (post in the JIRA, come and complain at the SUG, etc.) !...
  18. Your best bet, when you notice some weird new thing happening after a driver update, is to roll back to the former version... It is not unusual for new versions to introduce new (and sometimes very painful) bugs... The latest is not always the greatest ! For people running (or trying to run) Windoze (yuck !), I'd recommend using NVCleanstall to install their drivers for NVIDIA cards: it makes it easy to roll back to any version, and it also allows disabling unwanted "features" among all the mess the normal installer otherwise piles onto your system disk and runs in your memory without your approval (which includes so-called "telemetry" spying stuff)...
  19. Saving the VAT on the membership fee is indeed a relief but, clearly, land ownership in SL stays more expensive for an EU resident than for a US one... So, please, do "reconsider it in the future" ! 😛 Also, it would have been great to add intermediate mainland use fees (instead of keeping one level for each doubling of the surface), because the gap to jump from, say, 4096m² to 8192m² is large (around $130 a year with the VAT), and I would have appreciated (and likely used, as a result of the saving I will be making with the membership fee VAT removal) a plan for 6144m², for example... Finally, there is something unclear in the knowledge base about stipend grandfathering. I have been considering for a long time changing from the quarterly to the annual membership plan, but this knowledge base article, if accurate, means I would lose my L$400 stipend for a L$300 one by doing so (losing roughly $20 a year in the process), because the knowledge base says that the L$400 stipend is reserved for Premium plans opened before November 1, 2006, and I'm pretty sure mine (which does have the L$400 stipend) was opened after that (likely early 2007, since I joined SL with a basic account in October 2006)...
  20. The current version can be found by following the link for the project viewer GLTF PBR Materials on the Alternate Viewers page. Note that this first alpha release lacks occlusion culling, so it is slower than it should be: to do a fair comparison with LL's current release viewer, you must disable occlusion culling in the latter. But as you will see, the PBR viewer is in no way faster than the release viewer in ALM mode, and it is certainly not able to match the latter with ALM off on "weak" GPUs... I do not expect Firestorm to do anything but strictly follow what LL is doing in this respect, but I am not involved in Firestorm development.
  21. You are totally missing the point... It's not about holding back progress, but just about keeping something which exists and allows people to enjoy SL, when they won't be able to enjoy it any more if it is removed !!! First, I have always been an eager early adopter and, since you mention meshes, may I remind you that I backported mesh rendering to the Cool VL Viewer only a few weeks after LL released their own mesh viewer, when some people said it was an impossible task... Here, it is not even such a hard thing to do: it's just about keeping the existing code and shaders along the new ones, just like I did for the WL+EE renderer (and in the latter case, things were much more complex, not because of the renderers, but because I had to implement real-time translation between WL and EE day/sky/water settings so that both types would render fine in both renderers: no such issue here). And so far, Dave has failed to deliver, based on what you can already experience today in the first alpha release of the PBR viewer. I would love to be proven wrong, but I'm afraid I won't be...
  22. The Cool VL Viewer can indeed run on a Pi 4B or a RockPro64, for example... And yes, that's thanks to the forward rendering mode. 😜 Running it on Chromebooks would likely be doable, after the necessary changes to adapt the Linux code and port it to Android... They can run SL, even if painfully. But what would change, should LL persist in their suicidal course of removing the forward renderer, is that they won't be able to run it at all any more (or so slowly, or in such a degraded way, that nobody would stand running SL on them). Add to this the issues people currently face when upgrading or buying a new computer (computer part prices, financial constraints on your budget due to inflation), and you can see how bad a timing it is to raise the hardware entry level for SL... Also, the future of SL depends on whether a true client (with a full 3D renderer) will be ported to mobile platforms or not... Anything that makes the viewer unable to run on modest hardware makes this goal more difficult to attain, or outright impossible... Thing is, the "effort" needed to just keep (freeze) the forward renderer as it is, while still developing ALM for PBR and more, is close to zero, as I already demonstrated with the Cool VL Viewer v1.28, when I kept the WL renderer along the EE one (because EEP got pushed to release status too soon, and was so much slower, until, at last, the performance viewer fixed the broken EE renderer months later; WL was a lifesaver for slow hardware then).
  23. Exactly my fear, and what is likely to happen if LL removes forward rendering, making entry-level laptops unusable (or miserable) at rendering SL. Hey, LL, I told you so ! I do really hope @Vir Linden and the other Lindens involved in the viewer development will read this !
  24. The "good rate" is equal to or above your monitor's refresh rate. Mine is a 60Hz VSync monitor, and anything above 60fps is good and smooth. However, you must also account for frame rate drops, which happen a lot when you move around since, while rezzing new objects and decoding textures, the CPU load increases a lot (the tasks linked to rezzing and texture decoding also take some time to perform during each viewer renderer "frame" loop, so even if they are partly threaded, there is still a longer time needed to render a frame in the viewer while rezzing is in progress).
  25. I have a GTX 1070 Ti, and it has strictly no issues with SL graphics, unless I switch on shadows (at which point the fps rate might fall below 60 in various scenarios, which I would find unacceptable). And I'm using my viewer (the Cool VL Viewer, of course) with graphics settings maxed out, and with the draw distance set to 256m (in land sims with neighbouring sims) or 512m (in islands, or while sailing or flying). But I also have a good CPU (9700K locked @ 5.0GHz on all cores), and this is why my system works fine with SL since, as long as you have a good enough GPU, the bottleneck of the mono-threaded renderer found in the viewer is actually at the CPU level ! If you change your graphics card for something super-powerful (and an RTX 3060 would fall in that category, for SL) without changing your CPU, then you will indeed see little to no difference in fps rates (though, in my case, I would likely get better rates with shadows on). A balanced system is the key: do not put an over-sized GPU in a system with an old CPU, and vice versa.