Everything posted by Henri Beauchamp

  1. I suppose you mean ”10.13.6” (macOS High Sierra) ?... This is the minimum requirement as of last summer. So, it should work... If it does not, open a JIRA issue for it, providing all the info you can (a stack trace is most helpful), and in the meantime, you could use one of the TPVs' macOS builds instead...
  2. They would run just fine under x86 emulation on ARM, just like what Apple did with Rosetta (and no, I do not like Apple at all, due to their closed-source, closed-hardware and Open Source-hostile policies, not to mention their tax-evading practices, but when they do something ”right”, they deserve my recognition for it)...
  3. Well, currently, Linux (and its forks, such as Android) and BSD (and its forks, such as macOS) are immensely more common than Windows on ARM hardware. So... 😛 As for ARM vs x86, here is my stance: as a Linux user, should we get ARM-based desktop PCs with the same performance as existing high-end desktop x86 PCs, it would not bother me in the least to switch to the ARM solution, quite the contrary ! x86 is an antediluvian, poorly designed ISA (*), which is today impaired by all the compatibility modes it must keep to run old software, causing bloated CPUs full of (mostly) useless parts needed to run those legacy 16- and 32-bit modes, support segmented memory (eeek !) and whatnot... ARM would at least allow us to get rid of all this old cruft, once and for all (and yes, at the cost of using emulators for the old software, but the latter won't suffer from the emulation since it was designed to work on waaaaay slower hardware). (*) If only IBM had chosen Motorola and its gorgeous m68k ISA (32-bit registers, flat memory model, memory-mapped I/O), instead of Intel and the 8088 (8-bit bus, segmented memory, separate I/O ports), we would have gained 10 years in OS design for PCs, instead of waiting until Windows 95 (1995) and Linux (1992) could finally offer true multitasking 32-bit OSes for x86; back in the 80s, we already had genuine 32-bit OSes on the Macs, the Sinclair QL, the Amigas, the ATARIs, and genuine preemptive multitasking on the QL and Amigas...
  4. In fact, you perfectly can... Micro$oft hid them, but the options to install (or upgrade to) Windows 11 do exist, even on super-old hardware; I installed it on my 4th PC (the ”main PC” I was using back in 2008-2012), which is based on an old Core 2 Quad Q6600 with 8GB RAM and a GTX 460, without EFI (thus with neither a TPM nor ”secure boot”), and therefore with an MBR disk. The simplest way to do it is probably to use Rufus to set up a bootable USB installation drive. Another option is to install Linux (as a dual boot, if you really want to keep Windoze after having tasted the speed, stability, safety and freedom brought by the Penguin's OS 😜 ); it would squeeze the best possible performance out of your aging PC.
  5. The cache misses will happen, especially on data, but for instructions, it is likely that all the critical parts of the viewer code (the parts that run at every frame) will be kept in the CPU caches: they will of course migrate between the L1, L2 and L3 caches (especially during context switching by the OS scheduler), but today's L3 caches are so large that the probability of seeing this critical code evicted from them is very slim... Also, be careful: L1 and L2 caches are per (full, i.e. non-virtual) CPU core (so the total quoted amount is to be divided by the number of cores, and by two again to get the actual per-core L1 instruction/data cache sizes), while L3 is shared (though maybe split in two, e.g. for AMD's Zen4 CCDs, with 2x32MB for non-3D parts and 32MB/96MB for 3D ones); Intel recently made it even more confusing, with their ”smart cache”, which may allow cores to use some L2 cache from other inactive cores... It would be interesting to do some benchmarking on a Zen4 with 3D V-Cache (and two CCDs) and see whether assigning the viewer main thread to a (full = 2 SMT virtual cores) core on the 3D CCD gives better results than when assigned to a core on the other CCD (with the smaller L3 cache)... This can easily be done with the Cool VL Viewer, using the ”MainThreadCPUAffinity” setting (see the small thread affinity sketch after this list of posts).
  6. There is nothing ”stupid” in sharing info with persons you trust in ”public” places (not so public for SL chat, just like for a RL pub; their audience is limited and known to the chatters at the moment they are chatting). Besides, you cannot use others' ”stupidity” as an excuse to violate a TOS (here, the SL TOS). I never said otherwise, but what I say is that if a place uses such a relaying tool, there must be an unmissable warning for everyone visiting that place that their chat will be relayed and recorded outside of SL (one of the aspects of Discord I personally dislike). Not saying no one should use such tools, but just saying they must be used in accordance with the SL TOS... And I do not see how anyone could disagree with this ! 'nuff said !
  7. The 3D V-Cache would not bring any benefit to SL viewers. The latter are already small enough to fit all the critical parts of their code into the L3 cache of the non-3D counterparts, and those non-3D parts boost higher in frequency (the SL viewer is very sensitive to mono-core CPU performance). I myself bought a Ryzen 7900X for my new main PC, which replaced one with a 9700K. I'm not interested in a 7900X3D; the 7900X performs beautifully in SL, and is a beast for compiling large programs. E.g. the Cool VL Viewer compiles in less than 3 minutes on it, which is a huge time saver when compared to the 10 minutes it took on the 9700K. Even CEF (the embedded browser library used for the viewer web plugin) builds (from scratch, with downloads, git ”deltas” etc, which take about 35 minutes to complete) in less than 1h35 (against 2h10 for the 9700K)...
  8. It may contain private matters, especially during adult role-plays, e.g. about your kinks, which in turn can hint at your sexual preferences, RL gender, etc... It may also contain hints about your RL location (where you live), your political opinions, or other things you would share with persons you chat with in SL (and not necessarily during a RP, but about RL matters instead) when ”face to face” with their avatar, after you came to ”know them” better, or even more private things you would chat about with a group of SL friends, and that you would never share ”in the open” on the Internet. You cannot make any assumption about what a chat post will contain, and whether or not it could lead to private info disclosure (even indirectly). You therefore cannot relay the chat for everyone to see on an external service (especially one that exposes that chat publicly) without first ensuring the chatting persons are aware of it. Here is a RL analogy: you go get a drink in a pub with a group of buddies, and you chat in that pub about stuff (politics, private relations, your health, etc): you do so in a ”public” place (and other patrons could overhear your conversation), yet you would sue the owner of that pub should they record your conversation and publish it on the Internet... Same thing for SL !
  9. I fly and sail with 512m draw distance. 256m is not enough to see where you are going. Not an issue with the Cool VL Viewer though (it is fast enough). 😜
  10. Whatever you wrote earlier is not the issue I reacted to (and yes, I did read your former messages, and even visited the sites you linked to). What I reacted to is the dangerous shortcut in the very message I replied to (what you wrote before that message did not shock me), and more precisely the part of the sentence of yours that I cited in my last post: this shortcut of yours means ”LL TOS approval = Discord TOS approval”, and it is all wrong. This is exactly what I reacted to. Period. And there is strictly nothing ”rude” in my post, so please do not be ”rude” in your turn...
  11. Re-read my message above. The SL TOS is not the Discord TOS !... A SL Resident obviously agreed to the SL TOS, but who tells you they also agreed to Discord's (I certainly did not, which is why I am not using Discord) ? As per the SL TOS and Community Standards, you are not allowed to retransmit outside of SL, in any way, the chat you have with someone in SL, unless all the chatting persons have explicitly and beforehand agreed to this. Now, if you are using your chat relaying tools for specific purposes, in a place (e.g. a shop, a private sim, etc) where any entering resident would be forewarned that you are retransmitting their chat, this is acceptable (if they are not happy about it, they can move on to another place)... Same thing for a group for which you would relay the IMs, on the condition that this is clearly specified in the charter of the said group (if they joined the group, they agreed to its charter).
  12. I do, and I get at the very minimum +10% fps rates under Linux when compared with Windows 11, both optimized to the extreme, with the same clocks on CPU and GPU, and, for the bloated Windoze 11, every useless/superfluous ”service” turned off (or outright uninstalled/removed/destroyed), including Search, Defender, etc. The difference in fps rates can go up to +25% in favour of Linux, depending on the scene being rendered. But what is most impressive is the difference in smoothness: under Windows, the same viewer (whatever the viewer, provided it has both native Linux and Windows builds) will experience way more ”hiccups” and unstable frame rates than under Linux. The difference there is massive. Not to mention stability, especially with some Windows drivers (I am in particular thinking about AMD's OpenGL drivers here)... Have a look at this post for some OpenGL performance comparisons.
  13. Do not even think about trying Linux, or it will make you cry over how much faster, smoother and more stable it is compared with the other (lesser) OSes... 🤣 macOS has always had a very lame/ancient/partial/bogus OpenGL implementation, so it is no surprise at all that it is so much slower with SL viewers. If you want good performance in SL out of Macintosh hardware, then install Linux on it !
  14. There is one. I am not using it myself (I hate Discord), but exhuming a chat log taken during an Open Source meeting, here is the info about it: Channel: https://discord.gg/gP7H7XVAP3 Invite request form: https://docs.google.com/forms/d/1I0jtI2N_od9MxkECctnjpFa-W8Vc5Qke41gJcf0v5Yg/ It was set up to discuss content creation and stuff, but there are likely other ”rooms” (or whatever they call them in Discord) for other stuff...
  15. Today, I made an immense effort (no, I'm not even kidding here), and filed a JIRA, which took me a lot of precious time (that I could have much better spent developing my viewer instead) and made my old-fart self grumble and rail against this poorly designed piece (to stay polite) of web site, with my password forgotten by the JIRA (again, and as pretty much every time I use it), small text boxes for filling in the form when I need a wall of text, no ”draft” saving for that form, which would let you gather any missing data from another OS (with a reboot needed) and come back to editing the form once you have that data, etc, etc... 😞 So, here you go, Linden Lab: https://jira.secondlife.com/browse/BUG-234564 It will not be said that I do not make every effort to help improve SL...
  16. This is good for the timeout part, then (and proof that libcurl is the culprit for those silent retries we get in C++ viewers). Nope, not for poll requests... IIRC, only a few capabilities were configured with HTTP keep-alive (e.g. GetMesh2). However, and even though you get the proper server-side timeouts at your Rust code level (which is indeed a good thing), you still have the issue with the race condition occurring during the timed-out HTTP poll request tear-down (as explained by Monty in the first posts of this very thread): you are then still vulnerable to this race condition, unless you use the same kind of trick I implemented... or Monty fixes that race server-side... or we get a new ”reliable events” transmission channel implemented (I still think that reviving the old, for now blacklisted, UDP messages would be the simplest way to do it, and it would be plenty reliable enough).
  17. The timeout happens server-side after 30s without an event. If you do not observe this at your viewer code level with a 90s configured timeout, then you are also the victim of ”silent retries” by your HTTP stack. Fire up Wireshark (with a filter such as ”tcp and ip.addr == <sim_ip_here>”), launch the viewer and observe: when nothing happens in the sim (no event message) for 30s after the last poll request is launched, you will see the connection closed (FIN) by the server, and there, the Rust HTTP stack is likely doing just what libcurl is doing, ”silently” retrying the request with SL's Apache server (see the minimal libcurl sketch after this list of posts for a way to observe the raw behaviour outside of a viewer)... Note that you won't observe this in OpenSim; I think this weird behaviour is due to the 499 or 500 errors ”in disguise” (you get a 499/500 reported in the body, but a 502 in the header) we often get from SL's Apache server (you can easily observe those by enabling the ”EventPoll” debug tag in the Cool VL Viewer: errors are then logged with both the header error number and the body)...
  18. No, it's an entirely different issue, and I wish we could get back to my original post, which is all about shadows (or the lack thereof)...
  19. The documentation is in the code... 😛 OK, it is not so easy to get a grasp on it all, so here is how it works (do hold on to your hat ! 🤣 ) : I added a timer for event poll age measurement; this timer (one timer per LLEventPoll instance, i.e. per region) is started as soon as the viewer launches a new request, and is then free-running until a new request is started (at which point it is reset). You can visualize the agent region event poll age via the ”Advanced” -> ”HUD info” -> ”Show poll request age” toggle. For SL (OpenSim is another story), I reduced the event poll timeout to 25 seconds (configurable via the ”EventPollTimeoutForSL” debug setting), and set HTTP retries to 0 (it used to be left unconfigured, meaning the poll was previously retried ”transparently” by libcurl until it decided to time out by itself). This allows the viewer to time out on poll requests viewer-side, before the server itself would time out (as it would after 30s). Ideally, we should let the server time out on us and never retry (this is what is done, and works just fine, for OpenSim), but sadly, even when setting HTTP retries to 0, libcurl ”disobeys” us and sometimes ”transparently” retries the request once (probably because it gets a 502 error from SL's Apache server, while it should be a 499 or 500, and does not understand it as a timeout, thus retrying instead), masking the server-side timeout from our viewer-side code. This also involved adding ”support” for HTTP 499/500/502 errors in the code, so that these are not considered actual errors but just timeouts. In order to avoid sending TP requests (the only kind of event the viewer is the originator of, and may therefore decide to send as it sees fit, unlike what happens with sim crossing events, for example) just as the poll request is about to time out (causing the race condition, which prevents the viewer from receiving the TeleportFinish message), I defined a ”danger window” during which the TP request by the user shall be delayed until the next poll request for the agent region is fully/stably established. This involves a delay (adjustable via the ”EventPollAgeWindowMargin” debug setting, defaulting to 600ms), which is subtracted from the configured timeout (”EventPollTimeoutForSL”) to set the expiry of the free-running event poll timer (note: expiring an LLTimer does not stop it, it just flags it as expired), and is also used after the request has been restarted, as a minimum delay before which we should not send the TP request either (i.e. we account for the time it takes for the sim server to receive the new request, which depends on the ”ping” time and the delay in the Apache server); note that since the configured ”EventPollAgeWindowMargin” may be too large for a delay after a poll restart (I have seen events arriving continuously with 200ms intervals or so, e.g. when facing a ban wall), the minimum delay before we can fire a TP request is also adjusted to be less than the minimum observed poll age for this sim, and I also take into account the current frame rendering time of the viewer (else, should the viewer render slower than events come in, we would not be able to TP at all). Once everything is properly accounted for, this translates into a simple boolean value returned by a new LLEventPoll::isPollInFlight() method (true meaning ready to send requests to the server; false meaning not ready, must delay the request). In the agent poll age display, an asterisk ”*” is added to the poll age whenever the poll ”is not in flight”, i.e.
we are within the danger window for the race condition (a simplified sketch of this check is given after this list of posts). I added a new TELEPORT_QUEUED state to the TP state machine, as well as code to allow queuing a TP request triggered by the user whenever isPollInFlight() returns false, and to allow sending it just after it returns true again. With the above workaround, I could avoid around 50% of the race conditions and improve the TP success rate, but it was not bullet-proof... Then @Monty Linden suggested to start a (second) poll request before the current one would expire, in order to ”kick” the server into a resync. This is what I did, this way: when the TP request needs to be queued because we are within the ”danger window”, the viewer now destroys the LLEventPoll instance for the agent region and recreates one immediately. When an LLEventPoll instance is deleted, it nevertheless keeps its underlying ”LLEventPollImpl” instance alive until the coroutine which runs within this LLEventPollImpl finishes, and it sends an abort message to the llcorehttp stack for that (suspended, since waiting for the HTTP reply for the poll request) coroutine. As it is implemented, the abort will actually only occur on the next frame, because it goes through the ”mainloop” event pump, which is checked at the start of each new render frame. So, the server will not see the current poll request closed by the viewer until the next viewer render frame, and as far as it is concerned, that request is still ”live”. Since a new LLEventPoll instance is created as soon as the old one is destroyed, the viewer immediately launches a new coroutine with a new HTTP request to the server: this coroutine immediately establishes a new HTTP connection with the server, then suspends itself and yields/returns back to the viewer main coroutine. Seen from the server side, this indeed results in a new event poll request arriving while the previous one is still ”live”, and this triggers the resync we need. With this modification done, my workaround is now working beautifully... 😜
  20. The diagram is very nice, and while it brings some understanding of how things work, especially sim-server side, it does not give any clue about the various timings and potential races encountered server-side (sim server, Apache server, perhaps even the Squid proxy ?)... You can have the best designed protocol at the sim server level, but if it suffers from races due to communications with other servers and/or because of weird network routing issues (two successive TCP packets might not take the same route) between viewer and servers, you will still see bugs in the end. What we need is a race-resilient protocol; this will likely involve redoing the server and viewer code to implement a new ”reliable” event transmission (*), especially for essential messages such as the ones involved in sim crossings, TPs, and sim connections. I like Animats' suggestion to split the message queues; we could keep the current event poll queue (for backward compatibility's sake, and to transmit non-essential messages such as ParcelProperties & co), and design/implement a new queue for viewers, with the necessary support code, where the essential messages would be exchanged with the server (the new viewer code would simply ignore such messages transmitted over the old, unreliable queue). (*) One with a proper handshake, and no timeout, meaning a way to send ”keep-alive” messages to ensure the HTTP channel is never closed on timeout. Or perhaps... resuscitating the UDP messages that got blacklisted, because the viewer/server ”reliable UDP” protocol is pretty resilient and indeed reliable ! Try the latest Cool VL Viewer releases (v1.30.2.32 & v1.31.0.10): they implement your idea of restarting a poll before the current poll would time out, and use my ”danger window” and TP request delaying/queuing to ensure the request is only issued after the poll has indeed been restarted anew. It works beautifully (I did not experience a single TP failure in the past week, even when trying to race it and TPing just as the poll times out). The toggle for the TP workaround is in the Advanced -> Network menu. 😉
  21. Well, PBR is already ”live” on the main grid (in a few test regions), and Firestorm already has an alpha viewer with PBR support... So it's time for you to look at it ! 😛 I'm afraid not... LL opted to do away entirely with the old renderer (EE ALM and forward modes alike), and there will be no way to ”turn it off”. The only settings you will be able to play with are the ones for the reflections (reflection probes are extremely costly in terms of FPS rates, and won't allow ”weak” PCs to run PBR decently when turned on). Of course, you will be able to use the Cool VL Viewer, which already has (in its experimental branch) a dual renderer (legacy ALM+forward, and PBR, switchable on the fly with just a check box), but it will not stay like this forever (at some point in the future, everyone will have to bite the bullet and go 100% PBR, especially if LL finally implements a Vulkan renderer, which is very desirable on its own)...
  22. Citation from the blog: Well, it would be all nice and dandy (with indeed a tone mapping that is at last ”viewable” on non-HDR monitors), if there were not a ”slight” issue with the new shaders: they ate up the shadows ! Demonstration (taken on Aditi in Morris, with Midday settings): first, the current release viewer v6.6.15.581961; second, the newest PBR RC viewer v7.0.0.581886, Midday with HDR adjustments; and even worse for the shadows (but better for the glass roof transparency), with the same RC viewer and legacy Midday (no HDR adjustment). Notice all the missing (or almost wiped out) shadows (trees and avatar, in particular), as well as how bad the rest of the few shadows now look when compared to the ”standard”... I raised this concern as soon as I backported the commit responsible for this fiasco to the Cool VL Viewer (and immediately reverted it), but I was met with chirping crickets... Let's see if crickets chirp here too, and whether residents care about shadows at all.
  23. It should not crash in the first place: report that crash either via the JIRA for SL official viewers, or to the developer(s) for TPVs (the support channel will vary from one TPV to another, but all TPVs should have a support channel), providing the required info (crash dump/log, viewer log, etc) and repro steps where possible.
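
Regarding the CPU affinity benchmarking idea in post 5 above: a minimal, purely illustrative sketch follows (Linux-only; this is not the Cool VL Viewer's ”MainThreadCPUAffinity” code, and the core numbers are assumptions that depend on how your kernel enumerates cores and CCDs). It simply shows how a thread can be pinned to a chosen pair of SMT siblings with the standard pthread affinity API:

```cpp
// Minimal Linux sketch (build with: g++ -pthread pin_main_thread.cpp)
// Core IDs below are assumptions; check lscpu or lstopo for your CPU topology.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE     // for CPU_ZERO/CPU_SET and pthread_setaffinity_np()
#endif
#include <pthread.h>
#include <sched.h>
#include <cstdio>

static int pin_to_cores(const int* cores, int count)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int i = 0; i < count; ++i)
    {
        CPU_SET(cores[i], &set);
    }
    // Restrict the calling thread to the given logical cores.
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main()
{
    // Hypothetical pair: core 0 and its SMT sibling (often core_count/2 or
    // core_id + 1, depending on the kernel's enumeration).
    int cores[] = { 0, 12 };
    if (pin_to_cores(cores, 2) != 0)
    {
        perror("pthread_setaffinity_np");
        return 1;
    }
    printf("Main thread pinned; run your benchmark loop here.\n");
    return 0;
}
```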
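About the silent retries and server-side timeouts discussed in posts 16 and 17 above, here is a minimal sketch of a single long-poll style request with an explicit client-side timeout and no retry logic of any kind, so that whoever closes the connection first shows up directly in the result code. The URL and body are placeholders (a real event poll would POST an LLSD body to the region's EventQueueGet capability URL), not actual SL endpoints:

```cpp
// One long-poll style request, performed exactly once, with a client-side
// timeout shorter than the ~30s server-side one.
// Build with: g++ poll_once.cpp $(curl-config --libs)
#include <curl/curl.h>
#include <cstdio>

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl)
    {
        return 1;
    }
    // Placeholder capability URL and LLSD body.
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.invalid/cap/EventQueueGet");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "<llsd><map /></llsd>");
    // Time out client-side after 25s, before the server-side timeout, so a
    // server close or a client timeout is reported instead of being retried.
    curl_easy_setopt(curl, CURLOPT_TIMEOUT, 25L);

    CURLcode res = curl_easy_perform(curl);     // no retry whatsoever
    long status = 0;
    curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);
    printf("curl result: %s - HTTP status: %ld\n", curl_easy_strerror(res), status);

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```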
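Finally, for post 19 above, a much simplified sketch of the ”danger window” logic: this is not the actual Cool VL Viewer implementation (the class and member names are made up, std::chrono stands in for LLTimer, and the frame render time adjustment is omitted), it only illustrates the idea behind isPollInFlight():

```cpp
// Simplified illustration of the "danger window" check (not the actual
// Cool VL Viewer code; names are invented and std::chrono replaces LLTimer).
#include <algorithm>
#include <chrono>

class EventPollAgeSketch
{
public:
    using clock = std::chrono::steady_clock;

    // timeout_s plays the role of "EventPollTimeoutForSL" (25s for SL) and
    // margin_s the role of "EventPollAgeWindowMargin" (600ms by default).
    EventPollAgeSketch(double timeout_s, double margin_s)
    :   mTimeout(timeout_s), mMargin(margin_s), mMinObservedAge(timeout_s),
        mGotFirstPoll(false)
    {
        mStart = clock::now();
    }

    // Call this each time a new poll request is launched for the region.
    void onPollRestarted()
    {
        if (mGotFirstPoll)
        {
            // Remember the shortest interval between two polls for this sim.
            mMinObservedAge = std::min(mMinObservedAge, age());
        }
        mGotFirstPoll = true;
        mStart = clock::now();      // the timer is free-running; just reset it
    }

    // Age in seconds of the poll request currently in flight.
    double age() const
    {
        return std::chrono::duration<double>(clock::now() - mStart).count();
    }

    // true = safe to send a TP request now; false = queue it for later.
    // (The real implementation also accounts for the frame render time.)
    bool isPollInFlight() const
    {
        double a = age();
        if (a >= mTimeout - mMargin)
        {
            return false;   // inside the danger window: poll about to time out
        }
        // Just after a restart, wait until the new request had time to reach
        // the server, but never longer than the shortest poll age seen so far.
        return a >= std::min(mMargin, mMinObservedAge);
    }

private:
    clock::time_point mStart;
    double mTimeout, mMargin, mMinObservedAge;
    bool mGotFirstPoll;
};
```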