
Henri Beauchamp


Everything posted by Henri Beauchamp

  1. Quoting from the blog: Well, it would be all nice and dandy (with, indeed, a tone mapping that is at last ”viewable” on non-HDR monitors), if there were not a ”slight” issue with the new shaders: they ate up the shadows ! Demonstration (taken on Aditi in Morris, with Midday settings): First, the current release viewer v6.6.15.581961: Second, the newest PBR RC viewer v7.0.0.581886, Midday with HDR adjustments: And even worse for the shadows (but better for the glass roof transparency), with the same RC viewer and legacy Midday (no HDR adjustment): Notice all the missing (or almost wiped out) shadows (trees, avatar, in particular), as well as how bad the rest of the few shadows look now, when compared to the ”standard”... I raised this concern as soon as I backported the commit responsible for this fiasco to the Cool VL Viewer (and immediately reverted it), but I was met with chirping crickets... Let's see if crickets chirp here too, and whether residents care at all about shadows.
  2. It should not crash in the first place: report that crash, either via the JIRA for SL official viewers, or to the developer(s) for TPVs (the support channel varies from one TPV to another, but all TPVs should have one), providing the required info (crash dump/log, viewer log, etc.) and repro steps where possible.
  3. Currently, such a race will pretty much never happen viewer-side in the agent's region... The viewer always keeps the LLEventPoll instance it starts for a region (LLViewerRegion instance) on receipt of the EventQueueGet capability URL, until the said region gets farther than the draw distance, at which point the simulator is disconnected, the LLViewerRegion instance is destroyed, and the LLEventPoll instance for that region with it; as long as the LLEventPoll instance is alive, it keeps the last received message ”id” on its coroutine stack (in the 'acknowledge' LLSD).
However, should EventQueueGet be received a second time during the connection with the region, the existing LLEventPoll instance would be destroyed and a new one created with the new (or identical: no check is done) capability URL. For the agent's region, I have so far never, ever observed a second EventQueueGet receipt, so the risk of seeing the LLEventPoll destroyed and replaced with a new one (with a reset ”ack” field on the first request of the new instance) is pretty much nonexistent. This could however possibly happen for neighbour regions (sim capabilities are often ”updated” or received in several ”bundles” for neighbour sims; I am not too sure why LL made it that way), but I am not even sure it happens for EventQueueGet. I of course do not know what the LLAgentCommunication lifespan is server-side but, if a race happens, it can currently only be because that lifespan does not match the lifespan of the connection between the sim server and the viewer.
In fact, ”ack” is a very badly chosen key name. It is not so much an ”ack” as a ”last received message id” field: unless the viewer receives a new message, the ”ack” value stays the same for each new poll request it fires that does not result in the server sending any new message before the poll times out (this is very common for poll requests to neighbour regions). Note also that, as I already pointed out in my previous posts, several requests with the same ”ack” will appear server-side simply because these requests have been retried ”silently” by libcurl on the client side: the viewer code does not see these retries. For LLEventPoll, a request will not be seen timing out until libcurl has retried it several times and given up with a curl timeout: with neighbour sims, the timeout may only occur after 300s or so in LLEventPoll, while libcurl will have retried the request every 30s with the server (easily seen with Wireshark), and the latter will have seen 10 requests with the same ”ack” as a result.
Also, be aware that with the current code, the first ”ack” sent by the viewer (on first connection to the sim server, i.e. when the LLEventPoll coroutine is created for that region, which happens when the viewer receives the EventQueueGet capability URL) will be an undefined/empty LLSD, and not a ”0” LLSD::Integer ! Afterwards, the viewer simply repeats the ”id” field it gets in an event poll reply into the ”ack” field of the next request. To summarize: viewer-side, ”ack” means nothing at all (its value is not used in any way, and the type of its value is not even checked), and can be used as the server sees fit.
Easy to implement, but it will not be how the old viewers work, so... Plus, it would only be of use should the viewer restart an LLEventPoll with the sim server during a viewer-sim (not viewer-grid) connection/session, which pretty much never happens (see my explanations above).
That hardening part is already in the Cool VL Viewer for 499, 500 and 502 HTTP errors, which are considered simple timeouts (just like the libcurl timeout) and trigger an immediate relaunch of a request. All other HTTP errors are retried several times (and that number of retries is doubled for the agent region: it was of invaluable help a couple of years ago, when poll requests were failing left and right with spurious HTTP errors for no reason, including in the agent region). This is already the case in the current viewers code: there is a llcoro::suspendUntilTimeout(waitToRetry) call for each HTTP error, with waitToRetry increased with the number of consecutive errors.
Already done in the latest Cool VL Viewer releases, for duplicate TeleportFinish and duplicate/out-of-order AgentMovementComplete messages (for the latter, based on its Timestamp field).
Frankly, this should never be a problem... Messages received via poll requests from a neighbour region that reconnects, or from a region the agent left a while ago (e.g. via TP) and comes back in, are not ”critical” messages, unlike messages received from the current agent region the agent is leaving (e.g. TeleportFinish)...
I do not even know why you bother counting those... As I already explained, you will get repeated ”ack” fields at each timed-out poll request retry. These repeats should simply be fully ignored; the only thing that matters is that one ”ack” does not suddenly become different from the previous ones for no reason (see the sketch of the ack/id contract below).
That is a very interesting piece of info, and I used it to improve my experimental TP race workaround, albeit not with an added POST like you suggest: now, instead of just delaying the TP request until outside the ”danger window” (during which a race risks happening), I also fake an EventQueueGet capability receipt for the agent's sim (reusing the same capability URL, of course), which causes LLViewerRegion to destroy the old LLEventPoll instance and recreate one immediately (the server then receives a second request while the first is in the process of closing (*), and I do get the ”cancel” from the server in the old coroutine). I will refine it (I will add ”ack” field preservation between LLEventPoll instances, for example), but it seems to work very well... 😜 (*) Yup, I'm using a race condition to fight another race condition ! Yup, I'm totally perverted ! 🤣
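To make the ”ack”/”id” contract described above concrete, here is a minimal stand-alone C++ sketch (toy types and a simulated transport, not the actual LLEventPoll/LLSD code): the first request carries an empty ack, each reply ”id” is echoed in the next request, and a timed-out request simply repeats the same ack.

    #include <cstdio>
    #include <optional>
    #include <string>

    struct PollReply { int id; };   // server reply; "events" payload elided

    // Toy stand-in for the server + libcurl transport: every third poll
    // "times out" (no reply), otherwise a new message id is delivered.
    static std::optional<PollReply> postEventPoll(const std::optional<int>& ack)
    {
        static int nPolls = 0, nextId = 1;
        std::printf("poll #%d, ack=%s\n", ++nPolls,
                    ack ? std::to_string(*ack).c_str() : "<empty>");
        if (nPolls % 3 == 0) return std::nullopt;  // timeout: repeated ack next
        return PollReply{ nextId++ };
    }

    int main()
    {
        std::optional<int> ack;     // undefined/empty LLSD on the first request
        for (int i = 0; i < 9; ++i) {
            if (auto reply = postEventPoll(ack)) {
                ack = reply->id;    // simply echo "id" as the next "ack"
            }                       // on timeout, ack is left unchanged
        }
        return 0;
    }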
  4. See this old post of mine: AMD drivers may not be the only culprits (though running a viewer ”fixed” for VRAM leaks won't cure leaks happening at the OpenGL driver level, when the said driver has bugs).
  5. I am fully aware of this; however, we (animats & I) offered you a ”free lunch”: implementing those dummy poll reply messages server-side (a piece of cake to implement, and which won't break anything, not even in old viewers) to get fully rid of HTTP-timeout-related race conditions. Then we will see how things fare with TeleportFinish already, i.e. will it always be received by viewers ?... There is nothing to lose in trying this, and it could possibly solve a good proportion of failed TPs... If anything, and even should it fail, it would allow us to eliminate a race condition candidate (or several), and reverting the code server-side would be easy and without any consequence.
  6. This is not the issue at hand, and not what I am observing, nor what would cause the race condition I do observe and am now able (thanks to the new ”request poll age” debug display in the Cool VL Viewer) to reproduce at will; this is really easy with the configured defaults (25s viewer-side timeout, and experimental TP race workaround disabled): wait until the poll age display gets a ”*” appended, which occurs at around 24.5s of age, and immediately trigger a TP: bang, TP fails (with timeout quit) !
The issue I am seeing in ”normal” viewers (viewers with LL's unchanged code, which my changes only allow to reproduce artificially and ”reliably”) is a race at the request timeout boundary: the agent sim server (or Apache behind it) is about to time out (30s after the poll request has been started viewer-side, which will cause a ”silent retry” by libcurl), and the user requests a TP just before the timeout occurs, but the TeleportFinish message is sent by the server just after the silent retry occurred, or while it is occurring. The TeleportFinish is then lost. What would happen in this case is:
  • The sim server sent a previous message (e.g. ParcelProperties) with id=N, and the viewer replied with ack=N in the following request (with that new request not yet used, but N+1 being the next ”id” to be sent by the server).
  • The user triggers a TP just as the ”server side” (be it at the sim server or Apache server level, this I do not know) is about to time out on us, which happens 30s after it received the poll request from the viewer. At this point a Teleport*Request UDP message is sent to the sim server.
  • The poll request started after ParcelProperties receipt by the viewer times out server-side, and Teleport*Request (which took the faster UDP route) is also received by the sim server. What exactly happens at this point server-side is unknown to me: is there a race between Apache and the sim server, a race between the Teleport*Request and the HTTP timeout causing a failure to queue TeleportFinish, or is TeleportFinish queued in the wrong request queue (the N+1 one, which the viewer did not even start, because the sim server would consider the N one dead) ?... You'll have to find out.
  • Viewer-side, libcurl gets the server timeout and silently retries the request (unknown to the viewer code in LLEventPoll), and a ”new” request (actually the same request, retried ”as is” by libcurl) with the same ack=N is sent to the server (this is likely why you get 3 million ”repeated acks”: each libcurl retry reuses the same request body).
  • The viewer never receives TeleportFinish, and never started a new poll request (as seen from LLEventPoll), so it is still at ack=N, with the request started after ParcelProperties still live/active/valid/waiting for a server reply, from its perspective (since it was successfully retried by libcurl).
With my new code and its default settings (25s viewer-side timeout, TP race workaround OFF), the same thing as above occurs, but the request times out at the LLEventPoll level (meaning the race only reproduces after 24.5s or so of request age) instead of server-side (and then being retried at the libcurl level); the only difference you will see server-side is that a ”new” request (still with ack=N) from the viewer arrives before the former timed out server-side (which might not be much ”safer” either, race-condition-wise, server-side).
This at least allows a more deterministic ”danger window”, hence the ease of reproducing the race, and my attempt at a TP race workaround (in which the sending of the UDP message corresponding to the user's TP request is delayed until outside the ”danger window”; see the sketch below), which is sadly insufficient to prevent all TP failures. As for the ack=0 issues, they too are irrelevant to the cases where TPs and region crossings fail: in these two cases, the poll request with the agent region is live, and so is the one for the neighbour region involved in a region crossing. There will be no reset to ack=0 from the viewer in these cases, since the viewer never kills the poll request coroutines (on whose stack the ack is stored) for the agent region and close ( = within draw distance) neighbour regions. But I want to reiterate: all these timeout issues/races would vanish altogether, if only the server could send a dummy message when nothing else needs to be sent, before the dreaded 30s HTTP timeout barrier (say, one message every 20s, to be safe).
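Here is a simplified sketch of that workaround's queuing logic (assumed/simplified names, not the actual Cool VL Viewer sources): the Teleport*Request UDP send is deferred until the agent region's poll is inside the ”safe window”.

    #include <functional>
    #include <queue>

    // Stand-in for the agent region's LLEventPoll: isPollInFlight() is true
    // when the current poll request's age is within the "safe window".
    struct EventPoll
    {
        bool mInSafeWindow = false;
        bool isPollInFlight() const { return mInSafeWindow; }
    };

    class TPRequestQueue
    {
    public:
        // Called when the user asks for a TP: defer the UDP send if unsafe.
        void requestTeleport(const EventPoll& poll, std::function<void()> sendUdp)
        {
            if (poll.isPollInFlight()) sendUdp();     // safe: send at once
            else mPending.push(std::move(sendUdp));   // unsafe: queue it
        }

        // Called every frame: flush queued requests once back in the window.
        void update(const EventPoll& poll)
        {
            while (!mPending.empty() && poll.isPollInFlight()) {
                mPending.front()();  // send the queued Teleport*Request message
                mPending.pop();
            }
        }

    private:
        std::queue<std::function<void()>> mPending;
    };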
  7. Depending on your window manager (Sawfish can do it, but some others can too), you could perhaps add a rule to it to disable window decorations (title bar, buttons, borders) for FS...
  8. LL's current viewer code considers these cases as errors, which are only retried a limited number of times before the viewer gives up on the event polls for that sim server; these should therefore not happen in ”normal” conditions, and they do not happen, simply because the code currently lets libcurl retry and time out by itself, at which point the viewer gets a libcurl-level timeout, which is considered normal (not an error) and retried indefinitely. A sketch of such an error policy follows below.
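As an illustration, here is a stand-alone sketch of such an error policy (simplified code, not the actual Cool VL Viewer sources), where 499/500/502 are treated as plain timeouts and other errors get a limited, backed-off retry budget:

    #include <chrono>
    #include <thread>

    enum class PollAction { RetryNow, RetryAfterWait, GiveUp };

    PollAction onPollError(int httpStatus, int& consecutiveErrors,
                           bool isAgentRegion, int maxRetries = 10)
    {
        if (httpStatus == 499 || httpStatus == 500 || httpStatus == 502) {
            consecutiveErrors = 0;         // treat as a normal timeout
            return PollAction::RetryNow;   // relaunch a request immediately
        }
        // Genuine errors: limited retries (doubled for the agent's region).
        int budget = isAgentRegion ? 2 * maxRetries : maxRetries;
        if (++consecutiveErrors > budget) {
            return PollAction::GiveUp;     // stop polling this sim server
        }
        // Growing back-off, akin to llcoro::suspendUntilTimeout(waitToRetry)
        // with waitToRetry increased with the number of consecutive errors.
        std::this_thread::sleep_for(std::chrono::seconds(consecutiveErrors));
        return PollAction::RetryAfterWait;
    }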
  9. You can now increase the timeout to 45s with the Cool VL Viewer but, sadly, in some regions (*) this translates into a ”spurious” libcurl-level retry after 30s or so (i.e. a first server-side timeout gets silently retried by libcurl) before you do get a viewer-side timeout after the configured 45s delay; why this happens is unclear (*) but, sadly, it does happen, meaning there is no way, for now, to always get a genuine server-side timeout in the agent region (the one that matters), nor to prevent a race during the first ”silent retry” by libcurl...
(*) I would need to delve into libcurl's code and/or instrument it, but I saw cases (thus ”in some regions”) where there were no silent retries by libcurl and I did get a proper server-side timeout after 30s, meaning there might be a way to fix this issue server-side, since it looks like it depends on some server(s) configuration (at the Apache level, perhaps... Are all your Apache servers configured the same ?)...
I already determined that a duplicate TeleportFinish message could possibly cause a failed TP with the existing viewers code, because there is no guard in process_teleport_finish() against a TeleportFinish received after the TP state machine moved to another state than TELEPORT_MOVING, and process_teleport_finish() is the function responsible for setting that state machine to TELEPORT_MOVING... So, if the second TeleportFinish message (which is sent by the departure sim) is received after the AgentMovementComplete message (which is sent by the arrival sim, itself connected on the first TeleportFinish occurrence, and which sets TELEPORT_START_ARRIVAL), you get a ”roll back” in the TP state machine from TELEPORT_START_ARRIVAL to TELEPORT_MOVING, which causes a failure by the viewer to finish the TP process properly (a sketch of the missing guard follows below). So, basically, a procedure must be put in place so that viewers without the future hardened/modified code will not get those duplicate event poll messages. My proposal is as follows:
  • The server sends a first (normal) event poll reply with the message of interest (TeleportFinish in our example) and registers the ”id” of that poll reply for that message.
  • The viewer should receive it and immediately restart a poll request with that ”id” in the ”ack” field; if it does not, or if the ”ack” field contains an older ”id”, the viewer probably missed the message, but the server cannot know for sure, because the poll request it receives might be one started just as it was sending TeleportFinish to the viewer (request timeout race condition case).
  • To make sure, when the ”ack” field of the new poll does not match the ”id” field of the TeleportFinish message it sent, the server can reply to the viewer's new poll with an empty array of ”events”, registering the ”id” of that empty reply.
  • If the viewer's next poll still does not contain the ”id” of the TeleportFinish reply but does contain the ”id” of the empty poll, then obviously it did not get the first TeleportFinish message, and it is safe for the server to resend it...
EDIT: but the more I think about it, the more I am persuaded that the definitive solution to prevent race conditions is to entirely suppress the risk of poll request timeouts anywhere in the chain (sim server, Apache, libcurl, viewer). This would ”simply” entail implementing the proposal made above.
By ensuring a dummy/empty message is sent before any timeout can occur, we ensure there is no race at all, since closing the HTTP connection is then exclusively the initiative of the sim server (via the reply to the current event poll request, be it a ”normal” message or a ”dummy” one when there is nothing to do but prevent a timeout), and initiating the poll request HTTP connection only happens at the viewer code level.
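For reference, here is what the missing guard could look like (a simplified, stand-alone sketch with a stripped-down TP state machine; not the actual viewer sources, and the exact set of ”acceptable” states is a design choice):

    // Stripped-down version of the viewer's TP state machine.
    enum ETeleportState
    {
        TELEPORT_NONE, TELEPORT_START, TELEPORT_REQUESTED,
        TELEPORT_MOVING, TELEPORT_START_ARRIVAL, TELEPORT_ARRIVING
    };

    struct Agent
    {
        ETeleportState mTPState = TELEPORT_NONE;
        ETeleportState getTeleportState() const { return mTPState; }
        void setTeleportState(ETeleportState state) { mTPState = state; }
    };

    void process_teleport_finish(Agent& agent /*, message data elided */)
    {
        // The guard: ignore a duplicate TeleportFinish arriving after
        // AgentMovementComplete already set TELEPORT_START_ARRIVAL; without
        // it, the state machine would be rolled back to TELEPORT_MOVING and
        // the TP would never complete properly.
        if (agent.getTeleportState() != TELEPORT_REQUESTED &&
            agent.getTeleportState() != TELEPORT_MOVING)
        {
            return;  // stale or duplicate message
        }
        agent.setTeleportState(TELEPORT_MOVING);
        // ... connect to the arrival sim, etc.
    }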
  10. Cool VL Viewer releases (v1.30.2.28 and v1.31.0.6) published, with my new LLEventPoll code and an experimental (partial) race condition workaround for TP failures. The new goodies work as follows:
  • LLEventPoll was made robust against the 499 and 500 errors often seen in SL when letting the server time out on its side (which is not the case with LL's current code, since libcurl retries long enough and times out by itself). 502 errors (which were already accepted for OpenSim) are now also treated as ”normal” timeouts for SL. It will also retry 404 errors (instead of committing suicide) when they happen for the agent's sim (the agent sim should never be disconnected spuriously, or at least not after many retries).
  • LLEventPoll now sets HTTP retries to 0 and a viewer-side timeout of 25 seconds by default for SL. This can be changed via the ”EventPollTimeoutForSL” debug setting, whose new value is taken into account on the next start of an event poll.
  • LLEventPoll got its debug messages made very explicit (with human-readable sim names, detailed HTTP error dumps, etc). You can toggle the ”EventPoll” debug tag (from ”Advanced” -> ”Consoles” -> ”Debug tags”) at any time to see them logged.
  • LLEventPoll now uses an LLTimer to measure the poll request age. The timer is started/reset just before a new request is posted. Two methods have been added: one to get the event poll age (getPollAge(), in seconds) and a boolean one (isPollInFlight()) which is true when a poll request is waiting for server events and its age is within the ”safe” window (i.e. when it is believed to be old enough for the server to have received it and not too close to the timeout). The ”safe window” is determined by the viewer-side timeout and a new ”EventPollAgeWindowMargin” debug setting: when the poll request age is larger than that margin and smaller than the timeout minus that margin, the poll is considered ”safe enough” for a TP request to be sent to the server without risking a race condition. Note that, for the ”minimum age” side of the safe window, EventPollAgeWindowMargin is automatically adjusted down if needed for each LLEventPoll instance (by measuring the minimum time taken by the server to reply to a request), and the frame time is also taken into account (else you could end up never being able to TP, when the events rate equals the frame rate or is smaller than EventPollAgeWindowMargin). See the sketch below.
  • The age of the agent region event poll can be displayed in the bottom right corner of the viewer window via the ”Advanced” -> ”HUD info” -> ”Show poll request age” toggle: the time (in seconds) gets a ”*” appended whenever the poll request age is outside the ”safe window”.
  • An experimental TP race workaround has been implemented (off by default), which can be toggled via the new ”TPRaceWorkAround” debug setting. It works by checking isPollInFlight() whenever a TP request is made and, if not in the safe window, ”queuing” the request until isPollInFlight() returns true, at which point the corresponding TP request UDP message is sent to the server.
  • To debug TPs and log their progress, use the ”Teleport” debug tag.
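For those curious, here is the gist of the poll age / ”safe window” logic in stand-alone form (std::chrono standing in for LLTimer; the automatic margin adjustment is left out for brevity, and the margin value shown is only an assumption):

    #include <chrono>

    class EventPollAge
    {
        using Clock = std::chrono::steady_clock;
        Clock::time_point mStart = Clock::now();
        double mTimeout = 25.0;  // EventPollTimeoutForSL default (seconds)
        double mMargin = 2.0;    // stand-in EventPollAgeWindowMargin value

    public:
        // Start/reset the timer just before a new request is posted.
        void onRequestPosted() { mStart = Clock::now(); }

        // Poll request age, in seconds.
        double getPollAge() const
        {
            return std::chrono::duration<double>(Clock::now() - mStart).count();
        }

        // True while the request is believed received by the server and not
        // too close to the timeout: margin < age < timeout - margin.
        bool isPollInFlight() const
        {
            double age = getPollAge();
            return age > mMargin && age < mTimeout - mMargin;
        }
    };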
  11. Thank you for a really useful paper ! It indeed explains a lot of the things I could observe here with my improved LLEventPoll logging and my new debug settings for playing with poll request timeouts... As well as some so far ”unexplainable” TP failure modes that resist my new TP queuing code (queuing until the next poll has started, when too close to a timeout). Tomorrow's releases of the Cool VL Viewer (both the stable and experimental branches) will have all the code changes I made and will allow you to experiment with them. I will post details about the debug settings and log tags here after release. Looking forward to the server changes. I'll have a look at what duplicated messages could entail viewer-side (especially CrossedRegion and TeleportFinish, which could possibly be problematic if received twice) and whether or not it would mandate viewer code changes.
  12. Unigine Superposition is far from optimized for OpenGL... You'd get better results under Windows and DirectX than Linux and OpenGL, even though Windows' OpenGL performance with it is indeed abysmal. So yes, better not to trust its OpenGL results too much. The results of Valley are however perfectly in line with what I get with the viewer: around +10% fps in favour of Linux. In fact, you'd get better results with Windows 7/8 (less overhead than Win10 or Win11)... The problem being that you won't have valid drivers for it with such a modern GPU...
  13. When he is giving the finger, yes, he definitely looks like the stupidest man in the world... Linus Torvalds is no god and, while quite intelligent, he can also prove totally stupid at times, like everyone (us included): giving the finger to people, for whatever reason, is one of the stupidest and most pointless things to do (and will likely achieve the exact opposite of what the person giving the finger would expect/hope) ! Oh, and what would that be, please ?.... I have been using NVIDIA cards and their proprietary drivers for over 19 years (my first NVIDIA card was a 6600GT), and never missed a single feature ! Settle down yourself, pretty please... I am not the person spreading FUD... I already replied to this question but, of course, if you only read the first sentence of my previous post, you missed it... Read again: it was in the second sentence... 🫣
  14. This is only the case in the #RLV folder, and you may disable this behaviour... This is strictly how RLV is supposed to work for no-mod attachments and how Marine Kelley specified it (see the text after ”HOW TO SHARE NO-MODIFY ITEMS”); the Cool VL Viewer uses my own fork of Marine's implementation, which abides strictly by her specifications. Attachments get renamed when they are in #RLV, to add the joint name to their name (this avoids accidentally detaching attachments on locked joints when you change outfits, and prevents the detach/auto-reattach sequence that would ensue and could break some scripts or trigger anti-detach alarms in some objects); for no-mod attachments (which cannot be renamed), RLV instead moves them into a newly created sub-folder bearing the joint name. However, and since some people are used to the RLVa viewers' way of doing things (RLVa is a rewrite of RLV and differs in many subtle and less subtle ways from RLV), I implemented a setting to disable the auto-renaming of attachments in #RLV (which also stops the viewer from creating sub-folders for no-mod attachments): the toggle is ”Advanced” -> ”RestrainedLove” -> ”Add joint name to attachments in #RLV/”. A simple question on the Cool VL Viewer support forum would have given you the answer...
  15. Rightly maligned ?... Only by stupid people, I'm afraid... The proprietary NVIDIA drivers under Linux work beautifully (and around 10% faster than under Windows), with first-class, super-long-term support: all the bugs I reported to NVIDIA in the 19+ years I have been using their drivers have been addressed, most of them quite promptly (first class indeed, especially when compared with the AMD and ATI cards I owned in the distant past, for which Linux support was abysmal), and I can still run my old GTX 460 (a 13-year-old card !) today with the latest Linux LTS kernels and the latest Xorg version. They are also super-stable, and adhere strictly to the OpenGL specs. The Vulkan drivers and the CUDA stack are great too (with CUDA much faster and often actually better supported under Linux than OpenCL: e.g. with Blender, which only recently started implementing support for OpenCL, when CUDA has been supported for years). It should also be noted that NVIDIA open-sourced their drivers for their recent GPUs and that, while AMD and Intel (used to) contribute more Open Source to Linux, they still rely on the Mesa folks for their Linux drivers (meaning less performance than a closed-source driver, because the Mesa maintainers do not have access to all the secret architecture details of the GPUs), and you still need closed-source software ”blobs” to run their GPUs under Linux...
  16. What things exactly ? O.O My viewer is in fact MUCH safer than any other viewer, since it never touches your inventory structure unless you manually trigger an action it offers (such as consolidating base folders, or recreating missing calling cards), unlike what even LL's viewer does behind your back (consolidation and calling card recreation are systematic at each login with LL's v2+ viewers and all the TPVs forked from them). It also has safeguards against deleting or moving essential folders (such as the COF: deleting or moving it could get you into BIG trouble), while allowing you to delete (if and only if you so wish, and do it yourself) some unnecessary folders that were introduced with v2+ viewers and are just clutter for v1 viewer old-timers like me. As for its consolidation algorithm (only triggered on demand), it is more elaborate than LL's and also able, sometimes, to repair ”broken” inventories (inventories with duplicate base folders, for example). The fact that the inventory is presented differently (like a v1 viewer does) does not mean ”terrible things” were done to it !
  17. In fact, you can use MFA in SL without a smartphone, but it is rather complicated and I wish LL would provide MFA via email... Here is the procedure I described in the opensource-dev mailing list (at the end of that archived email).
  18. This is likely due to an inventory server issue: the ”Marketplace Listings” folder is created in a merchant's inventory as soon as they connect to SL for the first time as a merchant. In LL's original code, any failure to create that folder (which may happen, in case of inventory server issues) triggers an LL_ERRS(), which deliberately crashes the viewer... Rather user-unfriendly, and not very helpful either (see the sketch below). Try and connect with the Cool VL Viewer instead: it won't crash and, should it also fail to create the Marketplace Listings folder, you can try disabling AISv3 (an HTTP-based inventory protocol, which sometimes goes mad and stops working for a while) from the ”Advanced” -> ”Network” menu (un-check the ”Use AISv3 protocol for inventory” entry), then relog and check the ”Inventory” floater (there is no separate Marketplace floater: it is all v1-like UI, with all inventory folders showing in the Inventory floater, even though you may choose to hide the Marketplace Listings folder via the corresponding entry in the ”Folder” menu of the Inventory floater). It will also log useful diagnostic messages (CTRL SHIFT 4 to toggle the log console) when something goes wrong, instead of crashing... Once the Marketplace Listings folder has been successfully created, you can relog with any other viewer and it won't crash any more (at least not at this place in that crude code 😜 )
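To illustrate the difference (a simplified sketch, not the actual sources; LL_ERRS()/LL_WARNS() are the viewer's logging macros, with LL_ERRS() aborting the viewer by design, and the function name here is hypothetical):

    void onMarketplaceListingsFolderCreated(bool success)
    {
        if (!success)
        {
            // LL's original code does, in essence:
            //   LL_ERRS() << "Failed to create folder" << LL_ENDL; // crash !
            // A friendlier approach: log, then carry on without marketplace
            // support for this session.
            LL_WARNS() << "Failed to create the Marketplace Listings folder;"
                       << " marketplace features disabled for this session."
                       << LL_ENDL;
            return;
        }
        // ... proceed with marketplace initialization
    }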
  19. Which would be an argument in favour of @animats' suggestion to send an empty array of events instead of letting the request time out... Of course, it means more work for the sim server (monitoring the request timing for each connected viewer and sending an empty event when it gets close to an HTTP timeout, to avoid the latter), but it should not prove too overwhelming either...
  20. As I wrote above, it has always been extremely finicky: some driver and monitor (*) combinations more or less work, others cause crashes (usually with the stack trace pointing deep into the OpenGL driver), or a failure to set the proper resolution or image ratio: your screen shot looks to me as if the ratio is not properly set (look at the oblong shape given to the camera control ball, for example)... (*) Yup, it also depends on the monitor, which transmits (or not, or late) its characteristics to the driver via the EDID protocol. Here is what I get with the Cool VL Viewer in full screen mode under Linux: notice the proper aspect ratio of the circular UI elements (in the camera controls), the HUD radar at the bottom right, and the bicycle wheels.
  21. That would not be a timeout, but a periodic empty message sent by the server before the request would actually time out at the HTTP stack level. It means that, if it did not send any message to a viewer with an active request in the past 28 seconds or so (to avoid the 30s HTTP timeout, counting the ”ping” time, the frame time, and possible server-side lag at the next frame), it must send a reply with an empty events array (see the sketch below). But yes, it would work with the current code used by viewers, and would definitely prevent some race conditions on TP (the race happening when the TP request is sent just as the server times out and libcurl silently retries the request, with TeleportFinish sent by the server too soon, before libcurl could reconnect).
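Server-side, the whole keepalive logic could be as simple as this sketch (assumed names; obviously not the actual simulator sources):

    #include <chrono>

    using Clock = std::chrono::steady_clock;

    struct PendingPoll
    {
        Clock::time_point started = Clock::now();  // when the request arrived
        bool replied = false;

        void replyEmpty(int nextId)
        {
            // Send { "id": nextId, "events": [] } as the HTTP reply, so the
            // connection is always closed cleanly by the server, never by a
            // timeout anywhere in the chain.
            replied = true;
            (void)nextId;
        }
    };

    // Called periodically (e.g. every server frame) for each connected viewer.
    void keepaliveTick(PendingPoll& poll, int nextId)
    {
        using namespace std::chrono;
        // 28s leaves headroom (ping time, frame time, server lag) before the
        // dreaded 30s HTTP timeout barrier.
        if (!poll.replied && Clock::now() - poll.started > seconds(28))
        {
            poll.replyEmpty(nextId);
        }
    }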
  22. Well, the viewer indeed keeps the event poll alive after the agent has left the region, which is needed to keep region communications alive in the case where the region border was simply crossed, or in the case of a ”medium range” TP to a neighbour region still within draw distance. Of course, in the ”far TP” case, the viewer will keep polling until it finds out the region is to be disconnected, so it might restart a poll after a far TP, acknowledging (again) the last received message id (same id as the previous poll)... Double acks will also happen whenever a poll request ”fails” (or simply times out) for a live region and the viewer restarts a second poll: here again, the ”id” of the last received message is repeated in the ”ack” field of the poll request.
  23. IIRC, Ubuntu has Wayland enabled by default... Firestorm (like almost all other Linux viewers) uses X11, and the Xwayland compatibility layer provided by Wayland is known to be bogus in many respects. What happens if you disable Wayland usage in Ubuntu 22 ? Note that full screen mode has always been extremely finicky and crashy in SL viewers. For the Cool VL Viewer, I fully reworked it so that, when enabled, the viewer goes full screen from the very start, instead of attempting to switch from windowed to full screen (i.e. fully restarting GL from scratch) on login: it solves many issues (mainly OpenGL-driver-level crashes, but also resolution detection issues). For Linux, it also gained an optional ”full desktop” mode, instead of genuine full screen (i.e. the viewer uses your current desktop resolution with a decoration-less window, the bonus being that it can also run with other managed windows on top of it). Finally, it may be possible, depending on the window manager in use, to add a rule to the latter so that it does not decorate the viewer window and forces it full screen; you could then run the viewer in ”windowed” mode, but full screen and border-less, similar to what you would get in ”full desktop” mode.
  24. I'm rather under the impression that you are seeking just one favourable testimony to use as an excuse to follow your personal feeling/belief that an AMD card would be better suited for you... Just go ahead, buy whatever suits your own needs/preferences, and accept the consequences. Just don't come back here to complain that ”we” gave you bad advice, should you find out you made a mistake. 😜 As for graphics card prices, it might be wiser/smarter to wait a little: NVIDIA's cards are already seeing a price adjustment as a result of AMD's newest card releases (competition is a Good Thing ™). It will take some time to propagate to France (but you could just as well buy a card from a more reactive German supplier), but prices are going to drop a bit in the coming weeks. The second half of October is usually a good time to buy computer hardware (long enough after people's return from Summer vacations, soon enough before Christmas). There is also the option of waiting for a sale/opportunity on the previous card generation (even an RTX 3070 is plenty powerful enough for SLing).