
Beq Janus

Resident
  • Posts

    606
  • Joined

  • Last visited

Everything posted by Beq Janus

  1. 100% agreed. To restate what we've said here: VSync is not "capping your fps at your monitor's refresh rate", it will cap at some unit fraction of your refresh rate depending on the actual frame time of the scene you are rendering, and it has the potential to cause significant jitter as a result. The built-in limiter achieves the result in a far more consistent manner and gives a better all-round experience.
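For a concrete example of the "unit fraction" point: on a 60Hz monitor the only frame rates VSync can give you are 60/n fps. A scene that renders in 17ms just misses the 16.7ms vblank and locks to 30fps; at 34ms per frame you drop to 20fps. You never get the ~58fps or ~29fps your hardware could actually manage.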
  2. For my money, VSync is a poor frame rate limiter. I have posted why on another thread, with caveats to say that I may be interpreting this totally the wrong way. What you will most likely get with VSync on is far choppier performance. (As noted on the other thread, I would be very happy to have this proven incorrect, and my misunderstanding of the OpenGL documentation clarified.) It will certainly save your fans and electricity bills; I remain unconvinced that it will give you a nice experience while doing so. If you want to let your machine breathe, use the viewer's own frame limiter, which forces the viewer to take a little pause for breath between each frame (a sketch of the idea follows below). Of course, the downside of VSync off is the increased risk of frame tearing. I'm either immune to it or just don't suffer from it, but having buffer swaps happen independently of VSync can lead to this. You now have choices. Mine is "VSync stays off, and frame limit manually as you feel the need".
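To illustrate what a frame limiter does, here is a minimal sketch of the general technique (not Firestorm's actual code; renderFrame() is a hypothetical stand-in for the viewer's render step):

```cpp
#include <chrono>
#include <thread>

void renderFrame() { /* hypothetical stand-in for drawing one frame */ }

// Render as fast as we can, then sleep away whatever is left of the
// per-frame budget. Unlike VSync, a slow 25ms frame simply takes 25ms;
// it is never rounded up to the next vblank multiple.
void runCappedLoop(double maxFPS)
{
    using clock = std::chrono::steady_clock;
    const std::chrono::duration<double> budget(1.0 / maxFPS);
    for (;;) // loop forever (sketch)
    {
        const auto start = clock::now();
        renderFrame();
        const auto elapsed = clock::now() - start;
        if (elapsed < budget)
            std::this_thread::sleep_for(budget - elapsed); // the "pause for breath"
    }
}
```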
  3. It's a default that we inherited (having VSync enabled) and I am not entirely sure I agree with it. The main problem I have with it is a lack of clarity over what it achieves. The underlying implementation uses an OpenGL extension, "wglSwapIntervalEXT", which is set to 1 for "vsync on" and 0 for "vsync off"; in the preferences it is exposed as a simple toggle. The questions I have about this are as follows, and are based on my understanding of the OpenGL documentation:
1) If I have two monitors, one running at 144Hz and one running at 60Hz, does it correctly detect the one that I am running on? What if I am (presumably because I just like being awkward) straddling the two monitors?
2) What happens when I am slower than the cap? Let's assume (for the sake of my brain near midnight) that my monitor is running at 50Hz. This means I have 20ms per frame.
Case 1: higher FPS. If I draw my screen in 15ms (~67fps), the card waits 5ms for the frame interval to start over and then swaps the buffers, forcing the 20ms frame time and giving me 50fps updates.
Case 2: lower FPS. If I draw my screen in 25ms (40fps), then what happens? Based on the OpenGL documentation, the driver waits until the next vsync event (known as a vblank). This means that you are now waiting two whole frame intervals, giving you 25fps! The chances are that it would not be quite so black and white either: the sleep on one frame allows a faster frame next time, leading to all kinds of weird stutter potential.
My advice, at least until someone slaps me sensible with clear proof that I am wrong, is to turn off VSync; if you want to limit the fps then use the old-fashioned fps limiter and let the viewer have micro-sleeps. It is (at this stage) more predictable. A sketch of how the toggle maps onto the extension follows below.
BTW, this whole set of scenarios is further muddied by adaptive sync technologies such as NVidia's G-Sync and AMD's FreeSync. While we're here though... hey, isn't this a great problem to have? 😄
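For the curious, this is roughly how such a toggle maps onto the extension on Windows (a minimal WGL sketch assuming a current GL context; not the viewer's actual code):

```cpp
#include <windows.h>

// WGL_EXT_swap_control: interval 0 = swap immediately (vsync off),
// interval 1 = wait for the next vblank before swapping (vsync on).
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

void setVSync(bool enabled)
{
    // Must be called on a thread with a current OpenGL context.
    auto wglSwapIntervalEXT = reinterpret_cast<PFNWGLSWAPINTERVALEXTPROC>(
        wglGetProcAddress("wglSwapIntervalEXT"));
    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(enabled ? 1 : 0);
}

// With interval = 1 on a 50Hz display, a 15ms render is held to the 20ms
// vblank (50fps), but a 25ms render misses it and waits for the *second*
// vblank: 40ms per frame, i.e. 25fps, exactly the quantisation described above.
```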
  4. I think RezMela (mela with an el, not meta with a tee) in OpenSim has a system that would in theory work in SL too; it supports scripted builds using predefined components that are rezzed on an in-world base plate. I think there are videos around from Ramesh, the founder, but this page gives an idea. Agreed, and I am pretty sure there was one of the start-ups, possibly Sinewave or HiFi back in the day, that had modular building like the Blender add-ons have, where you have sliders for door size, windows, etc. It is (I think) what @ChinRey has been advocating for years now: "better prims".
  5. Has the creator of the mesh floor made a proper physics shape? If you are using Firestorm you can check this by clicking the "show physics" (eye) icon while editing the floor. https://i.gyazo.com/13883e2576f6ad05a2e1d800613ab959.gif If not then you have two choices: 1) make your own floor physics that is a better match to the visible mesh, and make it a transparent prim as Rolig described; 2) contact the creator and ask them to make a proper physics shape for the floor. Poorly fitted physics shapes on mesh are a significant cause of user problems. This is quite an old video now, but it shows the tool being used and how bad physics causes issues. https://vimeo.com/manage/videos/240771266
  6. A few images of the mesh upload floater would help here, but I am guessing that you are either zeroing or somehow ignoring the LOD model boxes at upload time. You need to either create LOD models or let the viewer generate them for you at upload time in order for the mesh to be rendered properly at a distance.
  7. It was a deliberate choice we made, not without some gnashing of teeth. This release includes MFA and also fixes a heap of issues with Apple that have plagued recent updates. While MFA is mostly nothing to do with us, it was expected to cause enough confusion among users that we didn't want to conflate it with the performance updates and swamp our poor support team. On the plus side, with the current release out of the door, the focus is on turning the next release around, so it is coming... promise.
  8. The reality is that LL do not freak out about it and are looking to implement some form of area search. To my knowledge we have never, in all the years of area search existing, had a complaint from the Lab about it. Ideally, they'll provide a more efficient means of achieving the same thing one day. Unless you changed the viewer in the last 3 days, it must be something that you are doing. No, that's conspiracy-theory-level BS; if there were anything untoward in area search then LL would have asked us to stop years ago, and we would have quite happily complied. What Coffee is explaining is the mechanism through which area search works. The method is entirely legitimate and above board, but it is not as efficient as would be preferred. Many things are not; the world has not ended.
  9. I'm struggling to work out what people are actually measuring here. I have tried the following measurements, and my results do not appear to correspond with what is being said. Perhaps I am measuring incorrectly? Please post photos, similar to mine, showing the prim height, the avatar, etc. 1) the measurement in metres reported by the viewer appearance floater; 2) the height reported by a height detection script; 3) the height as measured against a prim. The results are as follows:

Viewer | Appearance | Script | Prim comparison
FS     | 1.88m      | 1.63m  | 1.86m
LL     | 1.63m      | 1.63m  | 1.85m

For Firestorm I have this image showing the numbers. The LL viewer does not easily allow you to have both the shape and the build floater open at the same time, so I can't get a single snapshot; however, we can take it at face value that the LL viewer parrots the incorrectly reported server-side height. A visual alignment shows that the avatar remains at ~1.86m (the small discrepancy being on account of the pose, as the FS avatar is stood upright). In both cases it is very clear that the reported height from LSL and the appearance menu is nonsense. What I think is confusing people here, though, is the difference between the physics height that the region understands and the visible height that you actually see. The physics height is based on a capsule that is scaled on the server and does not correlate to the visual avatar (either as Ruth or wearing a mesh body/head). Ultimately you need to decide which measurement you are looking for: the "max headroom" approach that says "this is the minimum height of a doorway I can walk through without physics blocking me", or the visual approach, "this is the height that people see me as, relative to known scaled objects."
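For reference, the "script" column presumably comes from something like llGetAgentSize(), whose z component reports the server's shape-derived height; it knows nothing about the mesh body or head you are actually wearing, which is why it tracks the capsule rather than the visible avatar.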
  10. Sadly, I doubt there was anything so scientific as "usage data" behind it. It is odd that it is not even shown in the preferences of the LL viewer.
  11. As suggested above, true 32-bit hardware is super rare now. I ran some stats in March based on Firestorm logins to SL, and the number of machines that were actual 32-bit hardware was vanishingly small. However, there is a small but noticeable proportion of the SL user base running the 32-bit Firestorm viewer on 64-bit hardware and a 64-bit OS... Why would you do this? As @Profaitchikenz Haiku noted, there are some who have limited RAM (and limited scope to upgrade), and it is undoubtedly true that the 32-bit viewer has a smaller memory footprint. I am surprised that the LL viewer has just dropped the 32-bit viewer without an explicit statement to that effect; perhaps @Vir Linden or @Alexa Linden can confirm that this is so and not just a mistake in the updater.

It is also worth noting that it is not just super ancient machines that fall into this camp. Just last week I answered someone who was asking for help getting FS/SL to run on their new Intel-based Chromebook. This is a machine that is fundamentally ill-equipped for SL, but it is a modern machine, and there are even guides on how to install SL on it if you google for them. In this case, the person was well aware of the problems they were buying into. They were after a cheap backup machine for general purposes (I am told some people have lives outside of SL!!) and only expected to use it rarely, for example when travelling on business or vacation and just needing to pop in and pay rent or catch up with friends late at night in a hotel.

There are also people who get advised that such machines are suited to their needs, and the fault for this lies as much with Linden Lab, I am sad to say, as with the retailers. It boils down to the woefully outdated minimum specifications that many of us have been asking the Lab to either update or simply remove for many years. It is not unheard of for a non-tech-savvy SL user to go into an electronics store and ask advice on buying a new machine. "What do you need it for?" the helpful sales assistant asks. "Oh, a bit of web browsing, email, and I like to play Second Life." The assistant cheerfully googles "Second Life minimum specs" and finds an entirely misleading set of details (which under UK law is probably a breach of advertising standards in itself!). The assistant unwittingly sells the customer the cheapest low-end Windows laptop without knowing that the information is woefully inadequate. https://secondlife.com/system-requirements

In reality, the future of 32-bit, and of weaker machines in general, is looking rather grim. It has been stated by LL in recent developer meetings that they intend to remove the ability to turn off advanced lighting (ALM). A developer meeting is probably the single worst place to make such an announcement, as no TPV developer can function on a machine this restricted; it is the end users who need to be forewarned, yet you don't see this kind of news discussed at SLB. I have no doubt that @Inara Pey will have reported on it, however. Removing the ability to disable ALM will force everyone to use the deferred rendering features. The argument being made is that other performance increases make "the need" to turn off ALM a thing of the past. This is true in many cases, but I don't think it is a true reflection of why some people turn ALM off: memory is one reason that is not directly to do with performance; network bandwidth (slow satellite feeds and metered connections, for example) is another.
However, it is almost certainly true that SL loses (or fails to retain) in a few months more "new" users for being out of date and ugly than it would lose in "old" users by forcing the bar higher and cutting off those on truly low-end hardware. It makes life easier for all of us who have to try to maintain things, too. That doesn't make it pleasant for those affected, who are often the more vulnerable of the SL user base, and in the current economic climate saying "suck it up and go buy a new machine" doesn't sit comfortably with any of us, I suspect 😞. The stark reality is that, coming down the pipeline, we have performance improvements (which frequently come from making harder use of the computer's resources - more threads, for example) and improved graphical capabilities (higher quality graphics that eat more RAM and make more demands on your GPU), and dragging an ever-lengthening tail of underpowered hardware behind us is a significant constraint.

So, what if you are on a limited-RAM machine and your access to 32-bit builds dies? What can you do?
  • See if there is any option to uplift your machine at all. The minimum viable RAM is probably 8GB, so if there is any chance of getting a little extra RAM it might go a long way. With older machines, second-hand electronics stores (like CEX in the UK) may be able to help you for a very modest fee.
  • Batten down the hatches. Reduce your memory footprint as much as possible. Don't run any other software at the same time; modern operating systems will quite happily let you use more than the available RAM, but they do so by swapping your programs in and out of memory on demand, and this makes things rather slow.
  • In Firestorm we allow you to force the texture size to be no more than 512x512. This of course reduces the graphics quality a bit, but every 1024x1024 that is reduced to 512x512 is 25% of its original size, and every little counts. You'll find this setting in preferences->graphics->rendering.
  • Keep all your graphics settings low, and make sure features like shadows are disabled. (Honestly, I am struggling to believe anyone needing these notes can have advanced lighting enabled.)

There is an amazing amount of energy being invested in SL at the moment; it is really a very exciting time for the platform. This progress will come at a cost.
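To put rough numbers on the texture saving: an uncompressed 1024x1024 RGBA texture is 4MB in memory (plus roughly a third again for mipmaps); the same texture at 512x512 is 1MB, a quarter of the footprint, and the saving repeats for every texture in the scene.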
  12. As @Henri Beauchamp notes, this is not currently possible; the setting is global (meaning it applies to all accounts) and, by design, the second and subsequent instances have read-only cache access. @Ardy Lay has a good suggestion, though think carefully as you do this, because the implication is that your accounts are now entirely different users, with no shared settings. When you change a "global configuration" it will apply to that Windows user and any Firestorm instances that they run. I've not tested this, but it sounds entirely plausible and a reasonably decent solution if you want that separation.
  13. In my experience, it is a very mixed bag. When we do contribute things, they often get reframed in the context of the LL viewer (where in FS we like our configurability and options 🙂, LL will tend to be more plain), so from what we contributed, something derived and altered gets merged in, and thus what then comes back downstream to us has to be merged again. But this is still preferable to them recreating the code without reference to prior art, as by doing that they make our job of integrating even harder, since there is no commonality on which to base the merge.
  14. The voodoo is extensive and the results are very pleasing. As Monty confirms, part of the boost comes from moving more things to threads; though TPVs had already done some of this work before, the LL work adds some extra offloading that is helpful. The biggest wins are undoubtedly in the handling of rigged mesh. I think you've followed some of my reports on the disruptive nature of segmented mesh bodies; well, the updates go a long way to resolving that by allowing many of those segments to be grouped together. Those bodies will still be slower, but by nothing like the massive margin they are at present. This makes an enormous difference in crowd scenes. We will of course be bringing these updates to you in Firestorm in due course; bear with us, though. At the moment it is likely to be the release after next, because we're currently trying to get code signing on Apple and a few other things sorted out. As soon as those are done, the priority will be to get the multi-factor auth support out in the wild ASAP, and then we can focus on hammering the lumps out of the performance improvements. If the former takes longer than expected then we may end up rolling them all together, which is not very desirable from a support perspective, as there are an awful lot of changes in the performance viewer (as you can tell) and we really want to make sure it is heavily tested before shipping.
  15. I don't think that you can, short of clicking on each link.
  16. This doesn't make much sense to me; as people have noted, the viewer should be entirely out of scope for script events. What was the "first" case of differing events? Something unexpected is happening here. Hover height was suggested, which is plausible, I guess. I'll see if I can run some tests when I get a moment.
  17. Yeah, that would do it, but LOD name matching is the only way to guarantee the correct model association across files when you have complex scenes. If you do not have _LODn suffixes in the model names then it falls back to the old "best guess" matching, which can lead to all kinds of weirdness. I've made an attempt to address this in a way that I hope can cater to all. I've long been of the view that too much of our "user interface" exposes underlying details that a user should not have to deal with. As such, I plan to allow you to set your preference for LOD naming. This preference will apply to both filenames and model names within those files, and I hope it will help. It still needs some further testing; on which subject, if anyone has a Collada file with Unity-style LOD names that they would care to share with me for testing, that would be useful and would ensure I am not going down the wrong path here. Because the LOD matching happens at a level somewhat detached from the UI itself, the changes are not per upload, but would be set as a preference that matches your personal workflow. I hope that is useful. The ability to change it on the fly for each upload is possibly useful, but given the likelihood of significant changes in the uploader in the near future for PBR support and glTF, it is probably not worth the extra time. A sketch of the kind of suffix matching involved follows below. https://gyazo.com/ffd6e32fbd8aaf3fcbe4a66765d2869b
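For anyone curious what name-based matching amounts to, this is an illustrative sketch (my own simplification, not the actual uploader code): strip a trailing "_LOD<n>" to recover the base model name and the LOD slot it targets.

```cpp
#include <optional>
#include <string>

struct LodName
{
    std::string base; // model name with the suffix removed
    int lod;          // 0 = lowest ... 3 = high, per the SL convention
};

// Returns the base name and LOD slot for names like "chair_LOD1";
// returns nullopt for unsuffixed names, which are treated as the high LOD.
std::optional<LodName> parseLodSuffix(const std::string& name)
{
    const std::string tag = "_LOD";
    const auto pos = name.rfind(tag);
    if (pos == std::string::npos || pos + tag.size() + 1 != name.size())
        return std::nullopt;               // no single-digit _LODn suffix
    const char digit = name[pos + tag.size()];
    if (digit < '0' || digit > '3')
        return std::nullopt;               // SL only has LOD slots 0..3
    return LodName{name.substr(0, pos), digit - '0'};
}
```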
  18. I have what is possibly a silly set of questions, mostly musing... 1) When the physics is misbehaving, does rescaling the building slightly larger fix it? 2) If so, what altitude is this happening at? Does it happen at ground level? Basically, could part of the physics be on the cusp of the 0.5m dimension limit? If so, does it get worse with the distance from origin that we often see with mesh due to the quantisation? @polysail has been chasing that bug doggedly https://jira.secondlife.com/browse/BUG-225742
  19. Yep, in a system where different faces have different rendering needs (they go through different parts of the pipeline and end up in different shaders), ensuring that meshes split easily into their "renderable units" is pretty standard practice.
  20. In the same sense that LL's choice is their own, Unity's and UE's proprietary choices are not some kind of standard either; that is just how those engines work. The LOD numbering convention for SL was defined way back in its early days, when there was very little prior art (prims have LODs, remember). In terms of "standards", while we could argue that those two are de facto standards in the game-engine space now, it was not always so, and outside of game engines, but still squarely in the 3D modelling space, we have examples that continue to use the LL-style convention. The ordering in SL makes sense technically: we stream the content and pull in the smallest, least detailed models first; those are LOD0. Then, if and when we need to improve the quality (as we get nearer, for example), the other LODs are appended to the list (LOD1-3). In theory you could extend this concept to allow a "closeup" LOD with super-high poly (reality check: we can't, because of other constraints), but that is at least part of the thought process behind a zero-based incremental LOD numbering system. It is very common in arch-vis and geospatial data representation for LOD0 to be the lowest LOD, more or less just the ground-plan model, and to build up from there. https://3dbuildings.medium.com/how-building-data-works-level-of-detail-e9bad0b61baa However, justifying a naming convention by an underlying technical implementation is rather weak. I personally always used to name my files, and the models inside them, _HIGH, _MED, _LOW, etc., because that made sense to me. I changed when I started to write viewer code and realised how it was set up. My Blender add-on uses the LL convention by default, though you can change that in the settings. Here's a question, though: how helpful would it be if the naming style was selectable? After all, it exists in a visible form only at upload, so it would be possible. In theory I could add a dropdown to select the naming convention, and a setting for the preferred default. Would that be useful?
  21. Without pictures we are just guessing. The uploader, however, will split any vertex that is shared between triangles in different material faces. You can test this with a simple cube: 8 verts / 12 tris, but if you set each face to be a different material slot then it becomes 6 submeshes as far as SL is concerned, the shared verts get split, and you end up with 24 verts and 12 tris. The sketch below shows the counting.
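Here is a toy illustration of that counting (my own sketch of the general principle, not the uploader's code): a vertex survives as one vertex per material that touches it, so the cube's 8 corners become 24.

```cpp
#include <cstdio>
#include <set>
#include <utility>

int main()
{
    // A cube: 12 triangles, each triple indexing the 8 shared corners
    // (two triangles per face), with one material id per face.
    int tris[12][3] = {
        {0,1,2},{0,2,3}, {4,5,6},{4,6,7}, {0,1,5},{0,5,4},
        {2,3,7},{2,7,6}, {1,2,6},{1,6,5}, {0,3,7},{0,7,4}};
    int material[12] = {0,0, 1,1, 2,2, 3,3, 4,4, 5,5};

    // Each distinct (material, corner) pair must become its own vertex,
    // because each material is uploaded as a separate submesh.
    std::set<std::pair<int,int>> split;
    for (int t = 0; t < 12; ++t)
        for (int c = 0; c < 3; ++c)
            split.insert({material[t], tris[t][c]});

    printf("verts after split: %zu\n", split.size()); // 24: 6 faces x 4 corners
}
```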
  22. Mainly because I don't think I saw it here. The "official" Ruth and Roth store on Marketplace is here: https://marketplace.secondlife.com/stores/228512 @Ai Austin how regularly is that store updated? Is V4 the latest version that is "released"? It is probably extra work for someone, and that's always a stumbling point with free/open-source projects, but having a Ruth and Roth "latest" package that delivers an update whenever the online version changes would be really useful. I think that Marketplace can do this now.
  23. The scale/cost thing has nothing to do with fairness; those are the rules as they stand, and nobody said they were fair 🙂. The rules, however, are out of step with current rendering technology, but that's a slightly different argument. The system and concept are simple enough: a small object is likely only to be seen up close; a large object will be seen from afar and switches LOD later, meaning small details on the large object are likely to contribute to the render time for more people, more camera positions and directions, and thus it is charged more to encourage better efficiency. This catches people out because it seems peculiar that a house frontage with 3 windows and a door and all their furnishings costs less LI when the windows and doors are separate than when they are all linked, but the expectation is that the separate windows will switch to a lower LOD sooner, and as such their higher-density mesh parts will be rendered from fewer locations. (A sketch of how the switch distance scales with size follows below.)

The flip side of that is the reality that, as of this moment, most rendering is bottlenecked on the CPU and the draw calls drive the majority of the render cost. And within certain parameters, the draw call counts remain the same at all LODs, making the triangle obsession somewhat moot. However... when the performance updates land, rigged mesh batching becomes more of a thing. At that point we find that the cost will be less observable (it is harder to measure the impact on the GPU than on the CPU) but that the bottleneck may well move from CPU to GPU. If the cost moves to the GPU, that means the CPU will be waiting for the GPU to catch up, and the GPU will be choking on all the triangles being jammed into it. The problem here is known as overdraw: having multiple triangles all fighting over the same on-screen pixel means that you waste a lot of time drawing and redrawing the same point on the screen. The denser the mesh, the more overdraw, and the longer the GPU spends. At the moment this is largely hidden because of the heinous draw-call-induced CPU bottleneck.

As for batching... render batches reduce the draw call overhead to some extent, though batches are limited by texture: things with the same texture batch together, and some render passes can batch more than one texture. I think it may also only apply within the same linkset. I'd have to go and recheck...
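To make the scale argument concrete, here is an illustrative sketch. The 0.24/0.06/0.03 divisors are the LOD switch thresholds as I recall them from the mesh streaming-cost code, so treat the exact numbers as an assumption; the point is that every switch distance is proportional to the object's radius:

```cpp
#include <cstdio>

int main()
{
    // Distances (in metres) at which a mesh of the given bounding radius
    // drops from high to mid, mid to low, and low to lowest LOD.
    const float radii[] = {0.25f, 2.0f, 16.0f}; // ornament, window, house-sized
    for (float r : radii)
    {
        printf("radius %5.2fm: high until %6.1fm, mid until %6.1fm, low until %7.1fm\n",
               r, r / 0.24f, r / 0.06f, r / 0.03f);
    }
    // A 16m-radius build shows its high LOD out to ~67m; a 0.25m ornament
    // only to ~1m, which is why dense detail on big linksets is charged more LI.
}
```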
  24. As @Rolig Loon and @Quarrel Kukulcan note, this would seem to be confusion arising from unfamiliarity with the vagaries of SL. The shader that we use in the mesh upload preview is a very basic single-pass shader that has no alpha support. Once uploaded and rezzed in-world, your problems should vanish. You have two ways to test this without incurring costs. 1) Use the beta/test grid, Aditi (see here for how to connect: https://wiki.secondlife.com/wiki/Preview_Grid); there you can upload anything without incurring real costs. 2) Use the local texture feature. This is more complex, but while the instructions seem long-winded, it is (I think) fairly obvious once you have done it once. Instructions for local textures: upload the mesh asset without textures, rez it in-world by dragging and dropping it from your inventory, then right-click it and select edit. From the dialogue box that appears, click the Texture tab and click the little thumbnail. In the texture picker you'll see a set of radio toggles: Inventory/Local/Bake. Inventory shows the textures that you have paid to upload; Local shows the textures that you have imported locally for testing (nobody but you can see these). At first this is empty; pick "add", then find the texture on your hard drive and select it. That texture will now be listed on the right and can be clicked on to be applied to the mesh... phew. Now that you have this, every time you save/export that texture in Photoshop/Substance etc. it will auto-refresh in-world. Note: these are local to you; anyone in-world with you will not be able to see what you are working on. The following gif shows the basic workflow up to the point of hitting 'add' for the local. Stupidly, I picked an item with no transparency 🙂. The gif is a bit sparse so the mp4 may be a little clearer: https://i.gyazo.com/73e107ae9ca6b756d9716b2f7047a815.mp4