Everything posted by Henri Beauchamp

  1. ”Modern” CPUs (and by ”modern”, I mean all CPU models released in the past 20 years or so) have an auto-throttling mechanism protecting them from damage due to overheating. A badly cooled CPU will just run slower... Recent CPUs (Anything-Lake for Intel, Zen 3-4 for AMD) are also factory-overclocked and heat up insanely fast (up to 95 to 105°C, depending on the CPU generation and its ”absolute maximum ratings”) and, more often than not, are hitting that throttling limit: this is considered ”normal” operating conditions by their maker... So, no, you do not risk damaging your CPU via overheating. However, any piece of silicon does age, and its performance degrades over time. This aging is faster as the operating temperature increases, but also depends on other factors (such as maximum local currents for the electromigration effect, or the operating voltage for the charge trapping effect, etc). I did see the effect of such aging on my good old Sandy Bridge 2500K, which I operated for 7 years (and 18 hours/day on average) at an overclocked frequency of 4.6GHz (i.e. +900MHz over the rated turbo frequency) locked on all cores: after all these years, it started hanging at idle (!) time, roughly once per day, and I had to reduce the frequency to 4.5GHz to prevent this from happening... Today, and yet more years later, it is still working just fine at 4.5GHz in my second PC. In short: it is likely that even an ”overheating” CPU will get replaced by a new model (because it simply became too weak for your needs) before it even starts to show signs of wear. But good cooling is always a Good Thing (TM) to preserve your CPU... or to allow pushing it a few dozen MHz higher ! 😜
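     If you want to see whether your own CPU is hitting that throttle limit, comparing the current core frequency with the rated maximum while watching the temperature is enough. Here is a minimal, Linux-only sketch; it assumes the usual sysfs paths, which can vary between kernels and machines:

     ```python
     # Minimal Linux sketch: compare the current core frequency with the rated
     # maximum and show the thermal zone temperatures. The sysfs paths used
     # here are the common ones and may differ on your system.
     from pathlib import Path

     def read_int(path):
         return int(Path(path).read_text().strip())

     cur = read_int("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq")  # kHz
     top = read_int("/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq")  # kHz
     print(f"cpu0: {cur / 1e6:.2f} GHz out of {top / 1e6:.2f} GHz max")

     for zone in sorted(Path("/sys/class/thermal").glob("thermal_zone*")):
         temp = read_int(zone / "temp") / 1000  # value is in milli-degrees Celsius
         print(f"{zone.name}: {temp:.0f}°C")
     ```

     A core sitting well below its maximum frequency while the temperature is parked at the 95-105°C limit is that throttling mechanism at work.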
  2. More than the height of your avatar (mine is also as tall as I am myself in RL, at 1.77m, and that's about the only characteristic my avatar and I share, besides the gender), you might be encountering issues due to your overall avatar's look: big eyes (baby-like or anime-like ones), a rounded baby-like face, a childish body frame, childish clothing, etc, can quickly make people categorize your avatar as an underage one, regardless of its actual size. Not to mention the role-played behavior (should you role-play your avatar as a childish person, you would of course have it categorized as a child). This said, I did have issues with my own 1.77m tall avatar in the distant past (during the ”age-play” issue in SL), and had to clear things up in my avatar profile (avatar size, portrayed age, actual player age) to help things along. Nowadays, I am no longer faced with such issues, but it may also be related to the fact that my avatar itself is 16 SL-years old (meaning people no longer fear they might be interacting with an underage real person).
  3. The problem is that the UI itself is rendered using OpenGL... You'd end up with two OpenGL-rendered windows. It's doable, but wasteful in resources. When I first launched a SL viewer, back in 2006, the first thing that shocked me was the fact that the whole UI was drawn in OpenGL... Back then, with the weak GPUs we had, it was extremely wasteful, and UI rendering could be seen in the ”Fast timers” as using up a significant part of the render time (especially when you consider that each text character in the viewer UI is in fact a ”glyph” that must also be rendered as some sort of tiny texture under OpenGL). I would have expected the viewer to use the OS or its toolkit(s) (GTK+ or Qt under Linux, for example) to draw its ”floaters”, menus, etc, which would also have allowed moving the said ”floaters” out of the 3D view on your screen: those take up no GPU power and very little CPU power to draw, since they are basically ”static” and only updated whenever the user interacts with them (or the application needs them updated to reflect changing info). However, there are benefits for games in using real-time rendered UI elements:
     • You can make up specialized UI elements that do not exist in the OS- or toolkit-provided UI (e.g., for SL, pie menus, multi-sliders, or the XY vector and track ball recently introduced with the EEP viewer).
     • Your UI elements can be updated in real time (at each frame if need be), without needing any threading and cross-thread syncing mechanism (if you look at how the file selector is coded in LL's viewer, which uses the OS/toolkit one, and how it is coded in the Cool VL Viewer, where I implemented an XUI file selector, you will see how much more complex (and glitchy/hacky) the former is compared with the latter).
     • It is easier to create interactions between the UI elements and the 3D world objects (pie/context menus, drag and drop from inventory to objects, etc).
     • The UI can be kept uniform across all the OSes your game runs under (no special case needed in the code for each OS, and a more uniform/easier experience for the users... once they get acquainted with your UI).
     There was one attempt, years ago, by a TPV developer to use toolkit-based (GTK, IIRC) floaters, but it never really developed into something usable. There have also been a couple of attempts by LL to reduce the UI drawing overhead: UI floaters occlusion (so that the objects hidden by the floaters do not need to be rendered at all) and, recently, sparse UI updates (so that the UI can be drawn in a separate buffer and only updated when actually needed, that buffer being overlaid over the 3D view at each frame; a small sketch of this idea follows this post). Neither of those led to adopted code, probably because the performance benefits are not really worth the added code complexity (today's GPUs can draw the UI elements really fast anyway)... As for the screen real estate issue, your best bet is to use a viewer with a dense UI layout... My main grief against LL's v2+ UI is precisely that it wastes an awful lot of 3D view space by using enormous floaters with low information density. That's why I find the v1 UI I kept using (and kept improving too) in the Cool VL Viewer way more suitable (not to mention the productivity aspect, with fewer clicks and mouse moves per performed action). But you can find at least one viewer (Black Dragon) with a much denser v3+ UI, which could suit your needs as well, if the v1 UI feels/looks ”too old” for you... 😛 Not sure what you mean here... 
The GPU brand is not an issue at all for a two-window 3D view...
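     To make the ”sparse UI updates” idea a bit more concrete, here is a minimal, GL-free sketch (plain Python, with purely illustrative names, not actual viewer code): the UI is redrawn into its own buffer only when something changed, and that cached buffer is composited over the freshly rendered 3D frame on every iteration of the frame loop:

     ```python
     # Illustrative sketch of "sparse UI updates": the UI is only re-rendered
     # into its own buffer when marked dirty; the 3D scene is rendered every
     # frame and the cached UI buffer is composited on top of it.
     class UIOverlay:
         def __init__(self):
             self.buffer = None      # stands in for an off-screen UI texture
             self.dirty = True       # force an initial draw

         def invalidate(self):       # call on clicks, text changes, etc.
             self.dirty = True

         def draw(self, widgets):
             if self.dirty:          # expensive path, taken only when needed
                 self.buffer = " | ".join(widgets)
                 self.dirty = False
             return self.buffer      # cheap path: reuse the cached buffer

     def render_frame(scene, overlay, widgets):
         frame = f"[3D: {scene}]"             # re-rendered every frame
         return frame + " " + overlay.draw(widgets)

     overlay = UIOverlay()
     print(render_frame("skybox", overlay, ["chat", "inventory"]))  # UI drawn
     print(render_frame("skybox", overlay, ["chat", "inventory"]))  # UI reused
     overlay.invalidate()                                           # user clicked
     print(render_frame("skybox", overlay, ["chat", "map"]))        # UI redrawn
     ```

     In a real renderer the ”buffer” would of course be an off-screen texture overlaid on the 3D view, but the dirty-flag logic is the whole point.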
  4. You definitely did not try the Cool VL Viewer, which is immune to this issue (or more exactly, it got a workaround for it)...
  5. In the viewer release notes, I can see that a new PBR project viewer release is available; however, clicking on the link to get to it, I get an XML error page instead. You may however work around it by replacing the build number in the link for the former release (link in Arton's post above) with the new one listed in the viewer release notes page, which, for 64-bit Windows, gives this, for example. Still, could someone at LL fix the PBR viewer release notes page, please ?
  6. It's way more responsive, and it even speeds up texture rezzing (since it uses the ”free” time to do more tasks at the CPU level while it waits for the minimum delay between two frames to expire); it only actually sleeps the CPU when there is no more work to do. 😛
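     For the curious, here is a rough sketch of what such a ”smart” limiter amounts to (illustrative Python only, not the actual viewer code): instead of sleeping away the whole remainder of the frame budget, the loop keeps pulling pending work (texture decoding, etc.) until the budget is nearly spent, and only then sleeps for whatever is left:

     ```python
     import time
     from collections import deque

     TARGET_FPS = 60
     FRAME_BUDGET = 1.0 / TARGET_FPS          # seconds per frame

     def run_frames(render, pending_work, frames=60):
         """render(): draws one frame; pending_work: deque of small callables."""
         for _ in range(frames):
             start = time.monotonic()
             render()                          # the actual frame rendering
             # Use the "free" time to do useful work instead of just sleeping.
             while pending_work and (time.monotonic() - start) < FRAME_BUDGET * 0.9:
                 pending_work.popleft()()      # e.g. decode one texture chunk
             # Only sleep away whatever budget is actually left.
             remaining = FRAME_BUDGET - (time.monotonic() - start)
             if remaining > 0:
                 time.sleep(remaining)

     # Trivial stand-ins for rendering and texture-decoding work:
     work = deque([lambda: sum(range(1000))] * 500)
     run_frames(render=lambda: None, pending_work=work)
     ```

     A ”dumb” limiter (or VSync) would simply block for the whole remainder, which is exactly the time a smart one spends rezzing and decoding instead.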
  7. From the description of the symptoms, I think it is likely due to the RenderMaxNodeSize limit: this limit was implemented to prevent the huge memory consumption seen with some griefing objects, but it also affects places where a lot of highly detailed meshes (meshes with a lot of vertices) are placed close together. In this case, the ”spatial group” in which they are placed by the render pipeline (this is a group of objects that get rendered together, regardless of their type, so you can also find non-mesh objects grouped with mesh ones) can exceed the RenderMaxNodeSize limit, causing the entire group to get skipped by the render pipeline. Workarounds range from reducing the RenderVolumeLODFactor (so that fewer vertices get rendered, with the hope that it will suffice to stay below the limit) to increasing RenderMaxNodeSize (which works better, but will cause a higher memory consumption at both the CPU and GPU levels). You may also try the Cool VL Viewer: it got a workaround for this that gets enabled whenever you push the ”Mesh objects boost factor” multiplier above 1.0 (the corresponding slider is in the Graphics Preferences); this multiplier is used both to multiply RenderVolumeLODFactor for meshes only (*), and to give mesh object vertices a ”discount” when computing the total amount of vertices in the spatial group before comparing it to RenderMaxNodeSize... (*) Meaning you can keep a normal value for RenderVolumeLODFactor instead of having to push it to unreasonable values just to render some badly designed meshes, which would affect non-mesh objects as well (which is bad, and adds to the max render group size issue).
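     To make the mesh ”discount” idea more concrete, here is a rough, hypothetical sketch of the kind of check described above (names and numbers are illustrative only, not the actual viewer code): the spatial group's vertex total is compared against the limit, with mesh vertices weighted down when the boost factor is above 1.0:

     ```python
     # Hypothetical sketch of a RenderMaxNodeSize-style check, with mesh
     # vertices "discounted" when a boost factor above 1.0 is in use.
     def group_renderable(objects, max_node_size, mesh_boost=1.0):
         """objects: list of (vertex_count, is_mesh) tuples for one spatial group."""
         total = 0
         for vertices, is_mesh in objects:
             if is_mesh and mesh_boost > 1.0:
                 total += vertices / mesh_boost   # discount for mesh vertices
             else:
                 total += vertices
         return total <= max_node_size            # False => whole group skipped

     group = [(120_000, True), (80_000, True), (5_000, False)]
     print(group_renderable(group, max_node_size=128_000))                  # False: group skipped
     print(group_renderable(group, max_node_size=128_000, mesh_boost=2.0))  # True: group rendered
     ```

     The point is that the same cluster of heavy meshes can fall back under the limit once the discount applies, without having to raise RenderMaxNodeSize (and the memory consumption that goes with it).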
  8. Once you got a powerful enough GPU, the bottleneck in the SL viewer renderer is at the CPU level and, more exactly, at the single CPU core performance level (because the renderer is mono-threaded, even though recent graphics drivers will help a bit by using one or two threads at their own level, loading one more core at 50-100%). So the higher the mono-core performance of your CPU, the higher your frame rates, and it is almost exactly proportional. As for the GPU load, there is not the slightest doubt that in the simplest scenes, even the most powerful GPU will be loaded at 100%, because then the CPU (which has almost nothing to do for such simple scenes) can throw frames at the GPU as fast as it can absorb them, and your FPS rates will skyrocket (with the Cool VL Viewer, I am seeing frame rates in excess of 800fps in my skybox, with my brand new RTX 3070 and my middle-aged 9700K, causing a 200+ W power draw from the GPU). The solution is to use a (smart, if available in your viewer) frame rate limiter to avoid ludicrous fps rates and the resulting excessive power consumption/heating/noise. Limiting the fps rate to your monitor's vertical refresh rate is the way to go, but do avoid VSync, which is by far the worst way to limit the frame rate.
  9. It happens for the sim where you log in: if you log into a sim that did not get restarted in the past 2 days and 21 hours, some ”online” friends will be missing from your friends list, wherever those actually-online friends are in SL (including in the same sim !)...
  10. That won't work... The Cool VL Viewer build system is a standalone one, while the other viewers all rely on LL's fancy ”autobuild” (not the GNU autobuild) Python module... I made my viewer build system so that it is easy to use, without the need for exotic dependencies, and once you got the basic required tools (compiler, cmake, Python), all you have to do (whether under Linux, Windows or macOS) is to type a single command in a terminal to build it. 8GB of RAM should be enough to build it... But you can also use the published builds for all OSes... It will also run much faster than any other viewer on your potato computer. 😄
  11. For Windows, the Dullahan plugin of Firestorm 6.6.3 (67470) is exactly the same as LL's, as it is for most other TPVs; it is a version based on an old CEF 91 binary which has not changed for more than a year now. I just tested Firestorm 6.6.3 on a Windows 7 Ultimate 64-bit ESU partition and did not run into any problem, apart from a bug in the built-in browser floater which sometimes fails to load the page (as for the Firestorm Wiki entry of the help menu) and requires changing the address manually in the URL line, or via its drop-down menu, to trigger the loading; after that, everything works correctly in the browser. The login screen page, the Web search, etc, load correctly right away, though. No problem either with the redirection to the system browser (Pale Moon in my case). So the problem you are running into is likely due to your Windows 7 installation. It could be a corrupted CEF cache; to reset the cache, delete the folder: C:/Users/<your login>/AppData/Local/Firestorm_x64/cef_cache/ If it still does not work, and for further investigation, you will have to ask the Firestorm team for help, providing them with the necessary data (in particular the log: C:/Users/<your login>/AppData/Roaming/Firestorm_x64/logs/Firestorm.log). You could also have a look at the ”Windows Logs” of the ”Event Viewer” in the ”Administrative Tools” of the Windows control panel: the error preventing CEF from working properly may be listed there...
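     If you'd rather not hunt for that folder by hand, here is a tiny, hypothetical helper doing the same cleanup (close the viewer first; the path is the one given above, resolved through the %LOCALAPPDATA% environment variable):

     ```python
     # Deletes the Firestorm CEF cache folder mentioned above (Windows only).
     # Make sure the viewer is closed before running this.
     import os, shutil

     cache = os.path.expandvars(r"%LOCALAPPDATA%\Firestorm_x64\cef_cache")
     if os.path.isdir(cache):
         shutil.rmtree(cache)
         print(f"Removed: {cache}")
     else:
         print(f"No cache folder found at: {cache}")
     ```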
  12. This bug is tracked with this JIRA issue: BUG-232037. It happens systematically 2 days and 21 hours after each sim restart in my home sim, and is darn annoying (since it means the sim gets restarted thrice a week: once on rolling restart and twice after it, to ”fix” this missing-friends-on-login issue). I regularly nag Lindens (@Rider Linden and @Simon Linden) about it at the SUG. Sadly, and despite the added burden it imposes on SL support (with the resulting many sim restart requests), it does not seem to be very high on LL's priority list. ☹️ Feel free to nag with me (post in the JIRA, come and complain at the SUG, etc) !...
  13. Your best bet, when you notice some weird new thing happening after a driver update, is to roll back to the former version... It is not unusual for new versions to introduce new (and sometimes very painful) bugs... The latest is not always the greatest ! For people running (or trying to run) Windoze (yuck !), I'd recommend using NVCleanstall to install their driver for NVIDIA cards: it makes it easy to roll back to any version, and also allows disabling unwanted ”features” among all the mess the normal installer otherwise piles onto your system disk and runs in your memory without your approval (which includes so-called ”telemetry” spying stuff)...
  14. Saving the VAT on the membership fee is indeed a relief, but clearly, land ownership in SL stays more expensive for an EU resident than for a US one... So, please, do ”reconsider it in the future” ! 😛 Also, it would have been great to add intermediate mainland use fees (instead of keeping one tier for each doubling of the area), because the gap to jump from, say, 4096m² to 8192m² is large (around $130 a year with the VAT), and I would have appreciated (and likely used, thanks to the saving I will be making with the membership fee VAT removal) a plan for 6144m², for example... Finally, there is something unclear in the knowledge base about stipend grandfathering. I have been considering for a long time changing from the quarterly to the annual membership plan, but this knowledge base article, if accurate, means I would lose my L$400 stipend for a L$300 one by doing so (losing roughly $20 a year in the process), because the knowledge base says that the L$400 stipend is reserved for Premium plans opened before November 1, 2006, and I'm pretty sure mine (which does have the L$400 stipend) was opened after that (likely early 2007, since I joined SL with a basic account in October 2006)...
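     For what it's worth, the ”roughly $20 a year” figure checks out if you assume an exchange rate of about L$250 per US dollar (an approximation; the actual rate fluctuates):

     ```python
     # Rough check of the stipend loss when dropping from L$400 to L$300 per week.
     # Assumes about L$250 per US$, which is only an approximation of the market rate.
     old_stipend = 400          # L$ per week (grandfathered plan)
     new_stipend = 300          # L$ per week (current plan)
     rate = 250                 # L$ per US$ (approximate)
     loss_usd_per_year = (old_stipend - new_stipend) * 52 / rate
     print(f"~US${loss_usd_per_year:.0f} per year")   # ~US$21 per year
     ```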
  15. The current version can be found by following the link for the project viewer GLTF PBR Materials on the Alternate Viewers page. Note that this first alpha release lacks occlusion culling, so it is slower than it should be: to do a fair comparison with LL's current release viewer, you must disable occlusions in the latter. But as you will see, the PBR viewer is in no way faster than the release viewer in ALM mode, and certainly not able to match the latter with ALM off on ”weak” GPUs... I do not expect Firestorm to do anything but strictly follow what LL is doing in this respect, but I am not involved in Firestorm development.
  16. You are totally missing the point... It's not about holding back progress, but just about keeping something which exists and allows people to enjoy SL, people who won't be able to enjoy it any more if it is removed !!! First, I have always been an eager early adopter, and since you mention meshes, may I remind you that I backported mesh rendering to the Cool VL Viewer only a few weeks after LL released their own mesh viewer, when some people said it was an impossible task... Here, it is not even such a hard thing to do: it's just keeping the existing code and shaders along the new ones, just like I did for the WL+EE renderer (and in the latter case, things were much more complex, not because of the renderers, but because I had to implement real-time translation between WL and EE day/sky/water settings so that both types would render fine in both renderers: no such issue here). And so far, Dave failed to deliver, based on what you can already experience today in the first alpha release of the PBR viewer. I would love to be proven wrong, but I'm afraid I won't be...
  17. The Cool VL Viewer can indeed run on a Pi 4B or a RockPro64, for example... And yes, that's thanks to the forward rendering mode. 😜 Running it on Chromebooks would likely be doable too, after the necessary changes to adapt the Linux code and port it to Android... They can run SL, even if painfully. But what would change, should LL persist in their suicidal way of removing the forward renderer, is that they won't be able to run it at all any more (or so slowly, or in such a degraded way, that nobody would stand running SL on them). Add to this the current issues you face when upgrading or buying a new computer (computer parts prices, financial constraints on your budget due to inflation), and you can see how bad a timing it is to raise the hardware entry level for SL... Also, the future of SL depends on whether a true client (with a full 3D renderer) will be ported to mobile platforms or not... Anything that makes the viewer unable to run on modest hardware makes this goal more difficult to attain, or outright impossible... Thing is, the ”effort” needed to just keep (freeze) the forward renderer as it is, while still developing ALM for PBR and more, is close to zero, as I already demonstrated with the Cool VL Viewer v1.28, when I kept the WL renderer along the EE one (because EEP got pushed too soon to release status, and was so much slower, until, at last, the performances viewer fixed the broken EE renderer, months later; WL was a life saver for slow hardware then).
  18. Exactly my fear, and what is likely to happen if LL removes forward rendering, making entry-level laptops unusable (or miserable) at rendering SL. Hey, LL, I told you so ! I do really hope @Vir Linden and the other Lindens involved in the viewer development will read this !
  19. The ”good rate” is equal to or above your monitor refresh rate. Mine is a 60Hz VSync monitor, and anything above 60fps is good and smooth. However, you must also account for frame rate drops, which happen a lot when you move around, since the CPU load increases a lot while rezzing new objects and decoding textures (the tasks linked to rezzing and texture decoding also take some time to perform during each viewer renderer ”frame” loop, so even if they are partly threaded, it still takes longer to render a frame in the viewer while rezzing is in progress).
  20. I got a GTX 1070 Ti and it got strictly no issue with SL graphics, unless I switch on shadows (at which point the fps rate might fall below 60 in various scenarios, which I would find unacceptable). And I'm using my viewer (the Cool VL Viewer, of course) with graphics settings maxed out, and with the draw distance set to 256m (in mainland sims with neighbouring sims) or 512m (in islands, or while sailing or flying). But I also got a good CPU (9700K locked @ 5.0GHz on all cores), and this is why my system works fine with SL, since as long as you got a good enough GPU, the bottleneck of the mono-threaded renderer found in the viewer is actually at the CPU level ! If you change your graphics card for something super-powerful (and an RTX 3060 would fall in that category, for SL) without changing your CPU, then you will indeed see little to no difference in fps rates (though, in my case, I would likely get better rates with shadows on). A balanced system is the key: do not put an over-sized GPU in a system with an old CPU, and vice versa.
  21. I suppose you mean ALM (advanced lighting model, AKA deferred rendering). Be aware that LL is currently planning to remove the forward rendering mode (i.e. the ”ALM off” mode) in the future PBR viewer (whose alpha release already has it removed), and although I am fighting this bad move, it would make such a computer totally helpless at rendering even the simplest scenes at acceptable frame rates (above 15 fps), even with the shortest draw distances, the lowest graphics settings and just a few avatars on screen. For a new purchase (and should I fail to convince LL to keep the forward rendering mode in their future viewers), you should really avoid any computer with integrated graphics (an Intel iGPU, such as the Iris in this model, or even AMD APUs), and instead buy one with a discrete NVIDIA GPU (even a mobile GTX 1060 will do); also avoid AMD GPUs, because their OpenGL performance is sub-par.
  22. This protocol is just one for encapsulating communications data (it is a data exchange format/protocol): it does not deal in the least with the hardware infrastructure and architecture needed for these communications, nor with the network layer involved in the transmission of the data and its own communications protocol. As such, it got strictly nothing to do with scalability or performance, and won't solve SL's issues with IM chat and groups.
  23. Zen 4 is indeed too expensive for now; I'm waiting until next year to upgrade my system, likely with a Zen 4, but not until AMD gets more reasonable with their prices, which they might soon be forced to do if they don't want Intel to grab all the desktop PC market share with their Raptor Lake CPUs... Plus, Zen 4 = DDR5, so that is expensive too, at least for now... Again, next year might bring lower prices for DDR5. Finally, the Zen 4 motherboards (with the X-chipsets) are for now also too expensive (we will see what their B-chipset boards will sell for when they hit the market)... That's why I mentioned the 5800X (Zen 3, DDR4, reasonably priced MBs), which also happens to run neck and neck with the 12400F (or even slightly better, when overclocked) in mono-core performance, while offering more cores and threads (good for rezzing faster in SL), and more cache (good for holding more viewer code in the caches and running it faster as a result)...
  24. Ah, forum post signatures ?... I have these turned off (no cruft on my screen !). In any case, I won't trust any spec given that way: what if you are running the viewer on another computer ? Or forgot to update the specs in your signature after upgrading one of your computer parts ? The convention is to show the hardware specs via the About floater, in the screen shots you post, or to list the current specs of the PC you are using in the text of your post...
  25. For OpenGL applications/games, and therefore for the SL viewers, NVIDIA beats AMD hands down in performance (at equivalent card prices), and its OpenGL drivers are also 100% OpenGL compatible (meaning no need for ”workarounds” in the code like what happens with AMD and Intel drivers). The RTX 3060 will work like a charm for SL, and with such a powerful card, the bottleneck will be on the CPU side anyway (for a given CPU generation, the more GHz on the CPU core running the viewer main thread, the more fps you will get; the improvement is proportional to the frequency increase). So I'd put part of the $140 you considered adding for a 3060 Ti into the CPU instead... And the CPU could as well be an AMD... A Ryzen 5800X would probably perform better than the 12400F (equivalent IPC, better turbo frequency, more cores to run more threads while rezzing stuff and decoding images in SL).