Everything posted by Beq Janus

  1. Thank you, I'm ever so glad to hear that these features are helping people and, bit by bit, making the workflow slightly less painful. I need to be careful here, as I know you are going to be trying to break it 🙂 but my belief is that it is pretty robust. I've not yet had a Jira or heard any whispers of there being issues. It may of course mean that I'm simply being sworn about in circles I don't frequent 😉 If you can and do find gaps then please do raise a JIRA, explain the steps you used, and I will endeavour to get that fixed as always.

As to how: the old matching code would create two lists, one for the materials in the High LOD and another for the materials in the LOD being validated. It would tie itself in knots and ultimately just expect the two lists to match in length, and that every name in the LOD being validated could be found in the High LOD. These two rules in combination meant that we had to have every material present. I ended up re-writing it. The new code is (I hope) more straightforward: it seeks a matching material by name in the High LOD and associates it with the corresponding material in the lower LOD.

The "trick" to this was in fact prior art. Ironically, the much maligned GLOD has a habit of eliminating faces entirely, and nestled inside the GLOD support code was a tiny clause which said "if we have no geometry then we add a collapsed triangle placeholder". This happens at the "frontend" of the uploader, where we convert the Collada into an interim form that can be rendered in the preview; it is the rendering aspect that NEEDS the collapsed triangle. Then at the backend, when we take the interim form and gently cook it into the Second Life Mesh (SLM) asset format, there is another check: "if we have a material for which the only geometry is a single collapsed triangle, then replace it with a 'NOGEOMETRY' marker in the asset file". Thus when it gets sent up to the Lab, the triangle is gone and this special marker is there. (There is a 3rd step, that happens later in the story. When the asset is unpacked by a viewer for display in a scene, it finds the special marker and creates a single collapsed triangle in its place, to keep the rendering pipeline happy, or at least to reduce the number of "special" checks that have to be done.)
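For anyone curious how the by-name matching and the placeholder handling fit together, here is a minimal sketch in plain Python. It is purely illustrative: the function names, data shapes and the NO_GEOMETRY constant are my own stand-ins, not the actual viewer code.

```python
NO_GEOMETRY = "NoGeometry"  # illustrative marker name, not the real constant

def match_materials(high_lod, lower_lod):
    """Associate each lower-LOD material with the High LOD material of the same name."""
    matched = {}
    for name, triangles in lower_lod.items():
        if name in high_lod:
            matched[name] = triangles
    # any High LOD material with no counterpart gets a collapsed-triangle placeholder,
    # which keeps the preview renderer happy
    for name in high_lod:
        matched.setdefault(name, [((0, 0, 0), (0, 0, 0), (0, 0, 0))])
    return matched

def serialise(matched):
    """At the back end, swap single collapsed triangles for the special marker."""
    asset = {}
    for name, tris in matched.items():
        if len(tris) == 1 and len(set(tris[0])) == 1:   # one degenerate triangle
            asset[name] = NO_GEOMETRY
        else:
            asset[name] = tris
    return asset
```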
  2. Like others, it is not clear to me what the problem the OP is reporting actually is; there are a few anomalies to choose from, none of which seem particularly unusual. The most likely one I see is the mismatched texturing, but that may be intended. "These are two prim objects" immediately makes the discussion of LOD Factor somewhat moot, for while there is LOD behaviour in prims, the behaviour is well defined and largely volume maintaining, features which good mesh should aspire to. Along with reducing DD and other general good practice, I would check how close to one another the two problematic objects are; there are well-documented issues with Second Life accuracy at altitude due to the rounding of numbers that occurs. Checking if this happens lower to the ground should be a part of any fault finding excursion. Certainly, as others have said, have some friends verify what you are seeing: is it just you, something about your settings or hardware, or does everyone see it? Make sure to mix NVidia users with AMD users and see if you can determine a pattern. If there is a pattern (and you haven't simply got the prim surfaces too close, with altitude occlusion fighting happening) then it may well be worthy of a JIRA: https://jira.firestormviewer.org. For good measure, always check whether the same or similar happens on other viewers, in particular the LL viewer; if so, throw them a Jira too.

OK, deep breath... as we continue to hijack the OP's thread. And so to the recurring LOD Factor discussion... I don't have much more to say on the LOD Factor argument. We clamped the setting back in 2018 to curtail the worst behaviours of lazy and ill-informed merchants, whilst not breaking things utterly for those who wish to take photographs and have to deal with the poor quality assets in the scene in some fashion. Meanwhile, those that subscribe to the "LOD 4 never did me no harm" view are essentially missing the point (you may be justified in the observations, more on that later); at this stage, a decade since we saw mesh introduced to SL, you really should not need to have your LOD high, because there is no good reason for a creator to blame the viewer or SL for the crumpled garbage they sold you. There are many tutorials out there on what makes a good asset for real-time 3D engines; no credible creator can plead ignorance of this (and remain credible). Amateur and developing creators, of course, are still finding their way; the beauty of SL is in the democratisation of creativity, but at the same time, the keen amateur is not going to be the one flooding the market with poor assets, they are likely the only person ever to see what they make. It is the "big" commercial producers to which that blog was addressed, and in particular those who excused their laziness by expecting you and your machine to work harder (and frequently slower) while they get busy on next month's product. Think before you buy. As long as people continue to buy sub-standard assets, other people will continue to make them (secretly laughing at you too, I bet).

The original reason for LOD Factor hackery was to address the tendency of sculpties to explode into mangled triangles at the slightest distance, and since there were very limited means by which a lower LOD of any "volume maintaining" substance could be presented, allowing people to force them to render at full detail for longer made sense. However... we stand at an interesting juncture, which may pitch this discussion into new realms.
There is a lot of work going on at the moment to improve the rendering speed of the viewer (I am about to post a new blog, inspired by this thread and some recent news, discussing some of the implications of the advances we are beginning to see). At the moment, the vast majority of us are CPU bound in Second Life. If you have a dedicated GPU and it is of NVidia GTX 970/GTX 980 or above quality, then the chances are that it rarely gets pushed to its limits. In many cases it will be pretty much idling while your poor CPU is choking to death on sliced and diced mesh bodies. This may well change, and with it will change the impact of LOD Factor. Read my blog for the details (or not), but for now let's assume that a magic wand is being waved that could, for many of us, remove that CPU bottleneck to some extent and thus start to make those GPUs we paid so much for earn their keep.

It is here, on the GPU, that high LOD and dense mesh brings bad news. Your screen is made up of a finite number of pixels, and each of those can be just one colour at a time. When the viewer is sending stuff to be drawn it tries to optimise things so that ideally we only draw any given pixel once. Life is not ideal though, and there is always some element of what is called "overdraw", where pixels get drawn then redrawn. Every redraw is wasted time. Now... if you are sending a highly triangle-dense mesh through to the GPU to be drawn, even though the item is just a small cluster of pixels on the screen, then you can be certain significant overdraw time wastage is occurring; at some point the positions will switch around and we'll have the CPU twiddling its bits waiting for the GPU to "get on with its job already" as the GPU adds the 20th coat of pixel paint to the overdraw canvas. Some overdraw is inevitable, but highly dense mesh and a high LOD factor (keeping dense mesh around when a simpler one would suffice) will add to the delays, slowing the GPU and increasing the data that has to shuffle back and forth between the GPU and CPU etc.

I would argue quite strongly that LOD as we have it today is problematic. There are those that will argue that we should be slaves to the original Linden viewer default LOD Factor and not the higher Firestorm default; I don't honestly believe that is the right discussion or the right argument to be having at all (it is rather moot at this stage). With screen resolution growing (and in some cases the physical dimensions too), the potential resolution/quality of a middle distance object is higher than it used to be, and while I could argue that this is a case for reviewing the default LOD Factors, the reality is that a LOD factor based on virtual distance and scale is not the right solution; it probably never was. The entire LOD swap algorithm should be replaced with one that is based on the on-screen scale and resolution of an object, thus adapting to the individual's machine and circumstances and in turn removing this arcane LOD Factor lever once and for all.

So TL;DR summary... Right now, LOD Factor is probably not affecting you too much. Why? Because you are already hamstrung by other issues that dwarf that impact. As we pick off those issues (and it is happening) this problem will return to bite us hard in our many-segmented, densely triangulated arses. All that said, I might argue that it will be a nice problem to have. I long for the day when someone is complaining that they only get 50FPS in a nightclub 😄
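To make the distance-versus-screen-coverage point concrete, here is a minimal sketch of the two selection strategies. This is purely illustrative: the thresholds and function names are mine, and neither function is how the viewer actually computes LOD.

```python
import math

def lod_by_distance(radius, distance, lod_factor):
    # roughly the shape of today's scheme: a bigger LOD Factor keeps higher
    # LODs alive at greater distances (thresholds are illustrative only)
    ratio = radius * lod_factor / max(distance, 0.001)
    if ratio > 0.30: return 3   # High
    if ratio > 0.12: return 2   # Medium
    if ratio > 0.05: return 1   # Low
    return 0                    # Lowest

def lod_by_screen_coverage(radius, distance, fov_radians, screen_height_px):
    # the alternative argued for above: choose the LOD from the object's
    # projected size in pixels, so it adapts to resolution and field of view
    projected_px = radius / (distance * math.tan(fov_radians / 2)) * (screen_height_px / 2)
    if projected_px > 400: return 3
    if projected_px > 150: return 2
    if projected_px > 40:  return 1
    return 0

print(lod_by_distance(1.0, 10.0, 2.0))                          # distance-based pick
print(lod_by_screen_coverage(1.0, 10.0, math.radians(60), 1080)) # pixel-coverage pick
```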
  3. Hi Niran, this is good. I know a bunch of people who would love to see this. I presume that this will only work if the attachment is properly attached to the head or rigged to the head bones? A small price to pay, I imagine (ideally creators would attach things properly in the first place!). For rigged mesh attachments it is easy to deal with the shadows and the non-shadows as they are separate calls in the drawpool; for non-rigged you will probably be best served checking the sShadowPass variable and acting accordingly. That might prove a bit trickier in practice, though not too bad I would hope.
  4. The message you are seeing indicates that the name lookup is failing. You can turn on the detailed logging in the "log" tab and it'll show you the lookup attempts (admittedly they are not that easy to read). If you are using the most recent Firestorm then the error message on the log tab and in the log file (irrespective of the "show detailed logging") should be something like "Model blahblah has no High LOD (LOD3)". This only happens if the viewer has failed to find the expected high LOD model for an object referenced in the DAE. The steps the viewer goes through to "find" the high LOD are probably beyond the scope of this reply, but it broadly follows one of two paths. If you have followed the recommended naming standard then for any object in your Medium LOD DAE file called "blahblah_LOD2" it will expect to locate a model in the high LOD DAE file called "blahblah" (with no suffix). If you are not following this pattern then the viewer resorts to making guesses (which are not very smart, frankly), basically saying "OK, so the first entry in the medium LOD file is the LOD2 for the first entry in the High LOD file" (even if they have nothing else in common). TL;DR of the above: for it to think there is no high LOD, either the high LOD itself must have failed OR there are more objects in the medium LOD export than there are in the High LOD and thus it is unable to do the index-based matching. I did a series of cleanups of this error handling but it is far from over; there remain many corner cases where something silently fails and gets caught later on as a slightly different error. It could well be that you are seeing one of these. I am happy to take a look at the DAEs that you are using (send them to beqjanus@firestormviewer.org and let me know here to go look for them). You might also want to double check the usual suspects: 1) each DAE export for a given LOD only has the models for that LOD in it, 2) there are no negative scales, 3) the names are properly assigned (if you are using the proper naming method) - to ensure this you need to make sure that the underlying mesh object has the intended name, NOT the top level Blender object. If this is a corner case then I will have an example to use next time I am poking around that code. If not, I hope to be able to explain why it is failing.
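As a rough sketch of those two paths (name-based matching first, index-based guessing as the fallback), assuming hypothetical data structures rather than the viewer's real ones:

```python
import re

LOD_SUFFIX = re.compile(r"_(LOD[0-2]|PHYS)$")

def find_high_lod_match(lod_name, lod_index, high_lod_models):
    """high_lod_models: ordered dict of {name: model} parsed from the High LOD DAE."""
    # preferred path: strip the _LODn/_PHYS suffix and look the base name up directly
    base = LOD_SUFFIX.sub("", lod_name)
    if base in high_lod_models:
        return high_lod_models[base]
    # fallback: pair entries up purely by position in the file (the "guessing")
    names = list(high_lod_models)
    if lod_index < len(names):
        return high_lod_models[names[lod_index]]
    # more models in this LOD file than in the High LOD: nothing left to match,
    # which is when you get the "has no High LOD (LOD3)" style of error
    return None

high = {"blahblah": "<model>", "other": "<model>"}
print(find_high_lod_match("blahblah_LOD2", 0, high))    # matched by name
print(find_high_lod_match("unrelated_name", 1, high))   # falls back to position
```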
  5. And yeah, it was my slip. I even thought about it! I typed low then (incorrectly) corrected myself at the time. Thank you Rey and @Aquila Kytori for the examples, and I agree that that usage is valid. I am still reluctant to include it because people would go "oooh, tiny weeny physics, yes" and use it for all the inappropriate things where a cube would be a far better choice. Ultimately we can't stop stupidity, but I try to avoid handing out free passes too. I'll revisit this once we get a little closer to being allowed to merge in the mesh optimiser branch that LL have. I'll be looking at that once I have my avatar render time and "auto tune" features mature enough.
  6. All good input, thank you. I'll try to take this into account when I am next poking at the uploader. One question though: why would anyone want a triangle? I am confused as to why this is (apparently) popular, as it appears to me to have no value. Indeed, it serves only to make problems. 1) It stores up problems in terms of raycasting and collision detection. 2) You are effectively adding a "useless" physics shape, when you might as well just leave it untouched and keep the default convex hull. 3) It makes Beq sad. Is there a use case I am missing where the triangle is an important and valid option?
  7. We share our laziness 🙂 I had a long debate (with myself, it gets that way sometimes) over including the "user defined" option, worrying that the risk of additional confusion was likely higher than the number of people who'd actually use it. Now that it is out there, I wonder what the general consensus is: worthwhile option or confusing distraction?
  8. Such schemes would do little in any case. As has been noted here and in threads like this before, a commercial creator cares nothing about a few lindens in the cost, as they make it back on their sales. Those it impacts more are the small-time creators who (to be frank) we should not be worrying about at all; if you are only selling a few units, or making things for yourself or your RP group or whatever, then you are not really part of the problem. The problem lies in the economics of supply and demand in SL. To sell many units you have to "look gorgeous", and taking pride in optimisation etc. does not typically lead to more sales. A few years ago now I added the LOD preview ability to FS; the idea was to allow people to easily see how badly designed meshes were, and I know of many people that (like me) use this as one of the determinants before a purchase. It hasn't really made much difference overall though (sadly). My current objective is to arm people with far better tools to evaluate the quality of items, to easily see what slows them down, and to allow them to make informed choices. Will it make a difference? I doubt it, but perhaps, just perhaps, a few creators will start to think carefully about how they structure their items. I don't have high expectations, but every little counts. If we could get to a point where people (consumers specifically) compared items on the basis of looks and efficiency and made informed choices then we'd really have turned the corner.
  9. I doubt it. More than likely they'd put it in prefs, where it will be searchable and easy to find. We'd make it a dismissible "stop nagging me" box, I expect, findable in the prefs. I added the notice to the rigging tab to ask people to stop attaching everything to the right hand. That was not a popup, it just used some available space. That had to go when the Lab added the new weights list box.

"Is this true for Second Life? SL does generate mipmap textures automatically but based on the data from FS' object inspect function I understood that these are not stored in the VRAM."

This is a misunderstanding. MipMaps and discards are two separate things. MipMaps (true mipmaps) are GPU hosted and frequently GPU generated. In SL we have discards, which are CPU-side versions of MipMaps; whilst they are similar beasts I would be wary of considering them the same, because the GPU has nothing to do with discards. As for memory... you are right, in so far as perhaps the inspect is wrong. The VRAM and TRAM numbers are calculated based solely on the size and number of components (RGB & A); they do not account for discards or any such thing, which is not really very correct. That said, it might reflect a misunderstanding on my part, so I won't say categorically that it is wrong, but I would consider adding a third to all those numbers to be safe. I will make a note to revisit those sometime. If you feel like raising me a Jira to remind me to look at it again please do; they serve as a good TODO list, not that I have any shortage of TODO items 🙂 . Having said all that, the Tom's Hardware thread seems to be scattered with miscalculations. The cost of discards/MipMaps is ~33%, not 50%: each smaller size is 25% of the previous, thus 100% + 25% + 6.25% + 1.56% + 0.39% + ... ≈ 133%.
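A quick back-of-the-envelope check of that ~33% figure, in plain Python and assuming an uncompressed RGBA texture (real JPEG2000 assets differ in absolute size, but the ratio of the series is the same):

```python
def mip_chain_bytes(side=1024, bytes_per_pixel=4):
    """Total memory for a full power-of-two mip chain, uncompressed."""
    total, levels = 0, 0
    while side >= 1:
        total += side * side * bytes_per_pixel
        side //= 2
        levels += 1
    return total, levels

base = 1024 * 1024 * 4           # 4 MiB for a single 1024x1024 RGBA texture
chain, levels = mip_chain_bytes()
print(levels, chain / base)      # 11 levels, ratio ≈ 1.333: the extra cost is ~33%, not 50%
```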
  10. It is already an option. In 32-bit it is mandatory, due to the memory limitations, but in 64-bit you can set it from the rendering tab in the preferences.
  11. One idea that I have proposed before is to give all the sizes for a single upload fee. Given that (excuse the broad sweeping generalisation here...) most people upload at the highest resolution, the space cost of providing all the lower power-of-two textures is an extra ~33%. How does this help? It allows people, in one fell swoop, to upload a texture and try the lower resolutions out to see whether the full size one was really needed or not. At the moment, those of us who try to be sensible with textures will do this either through local textures or perhaps the beta grid. More often than not we'll just upload a range of sizes and try them out. Making them readily available would be a big incentive to creators to try them out, as they would incur no additional cost or upload time/hassle. Perhaps we could even buy items that allow us to pick the texture resolution we'd like to have in use?
  12. @Coffee Pancake is quite right that, if incorrectly applied, the incentive would have the opposite effect to that desired, but I think the intention of the OP is correct; addressing the obsessive use of the 1024 texture where it adds no value is a noble cause, but it is only a small part of a larger problem. Take the small textures and coalesce them into a single texture atlas. We've had a number of threads of late about performance this and that, in which I bang the drum about drawcalls and mesh bodies. In a recent discussion, it was noted by @ChinRey that drawcalls do not appear to have a significant impact on non-rigged items, and this is because of batching done in the viewer. Faces that share the same set of rendering parameters can be batched together, lowering the overheads. One of the key components of "sharing the same parameters" is having the same textures bound. Thus if we take four 512 textures that are used on separate mesh faces on the same item and merge them into one 1024, we have just reduced the rendering cost to 25% of what it had been before. Fewer textures and more reuse lead to better performance.

Smaller textures will have some benefit too, but mostly in conjunction with reducing the numbers. It is a lot less clear, and indeed far more subjective. My blog posts show in detail how the drawcall overhead affects everyone from the fastest to the slowest machines. It is a very direct assault on your FPS that can be measured. Texture load is rather less obvious. Yes, we have to download them, we have to decompress them and we have to hold them in both RAM and VRAM, but how that affects your FPS is much less clear. Fetching and decompressing of images happens in parallel to the main thread; image size affects the time it takes for an item to go from dirty grey to full colour, but it does not slow you down in terms of frame rate. Where the water gets muddy is in the shuttling of data to and from the GPU; there is undoubtedly a cost to this and smaller is better. How much better is not something I have been able to measure. The main cost is thus the bandwidth, and with increased VRAM size and the viewer's ability to use it, that can be addressed to some extent. Yes, the viewer has what are known as "discard" levels; these are effectively LODs for textures, or CPU-side mipmaps. In fact when the viewer is requesting a texture it does so by pulling progressively higher resolution versions as needed. The "need" is determined by screen space. However, I remain unconvinced that this actually works properly; testing this is on my TODO list in fact. I have a suspicion that the viewer is getting the size wrong and does not scale back properly as things get smaller. The prioritisation of which textures to draw also seems rather open to improvement.
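As a concrete illustration of the atlasing idea — not a tool I ship anywhere, just a sketch using the Pillow imaging library; the file names are made up and the UV maths assumes a bottom-left UV origin:

```python
from PIL import Image   # pip install pillow

# pack four 512x512 textures into one 1024x1024 atlas (hypothetical file names)
tiles = ["wood.png", "metal.png", "glass.png", "cloth.png"]
atlas = Image.new("RGBA", (1024, 1024))
positions = [(0, 0), (512, 0), (0, 512), (512, 512)]
for path, (x, y) in zip(tiles, positions):
    atlas.paste(Image.open(path).resize((512, 512)), (x, y))
atlas.save("atlas_1024.png")

def remap_uv(u, v, tile_index):
    """Shift a face's 0..1 UVs into its quadrant of the atlas.
    Assumes a bottom-left UV origin; flip the v offset if your tool uses top-left."""
    col, row = tile_index % 2, tile_index // 2
    return (u + col) / 2.0, (v + row) / 2.0
```

The four faces then all bind the one atlas texture, so they can share a batch instead of forcing four separate texture binds.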
  13. As @Coffee Pancake points out, the bandwidth slider has a very different purpose today from what it had in the past. The slider is a throttle to protect not only the viewer but also the server, and was a specific control back in the day when all assets were streamed from the server directly. It applies only to UDP traffic; today this is traffic such as the messaging from the server to the viewer informing it of updates to positions, and transferring your avatar "context" between regions when you TP. UDP is an unreliable protocol; this means that data packets can fail to arrive and will not be recovered automatically. Managing the bandwidth minimised the packet loss by reducing the contention between packets and other traffic. Today, the majority of our assets are pulled from the CDN over HTTP. HTTP is a reliable protocol, meaning that lost packets are automatically recovered. The fact that they come from the CDN means that they are not using the server bandwidth, but they are using yours, and that bandwidth is not constrained by the bandwidth slider.

So what does it do today? It still manages the size of the "pipe" between the server and the viewer. Most of the time this is very low bandwidth, and setting it up to be high is (for the most part) pointless. There are potentially some edge cases to this, however. The UDP protocol is used to "squirt" all your avatar info between regions (and to some extent your viewer) when you cross borders/TP. The size of this "squirt" can be quite large, and there is an entirely unproven theory that increasing the size of the pipe reduces the risk of disconnects during TP/sim crossing. With this in mind I asked (around 12 months ago) for the Lab to review the current settings with a view to updating the defaults in Firestorm. At the time SL was mid-transition to AWS and it was agreed that making any changes to Firestorm and/or the grid was not a good idea. I raised it again earlier this year and got no conclusive response back; it is not considered a worthwhile task for the server team at present, and I have no grounds to disagree, as this is, after all, fuelled by user feedback and speculation rather than testable evidence.

Why wouldn't FS just upgrade their default? We could, but I am worried that if we did so we would have a ripple effect that would cause problems. It is all very well individuals saying "oh but I made mine 89432849328904MB and it was fiiine"; that does not mean that everyone following suit would see the same. If we were to make it larger for every FS user then that potentially has an impact on the server side. Consider this scenario (the numbers are made up for illustrative purposes, and it is putting "real values" to those numbers that I feel needs to happen). Thought exercise: at the present time all viewers in a busy region are using a total of 2% of a server's bandwidth; if there are 20 regions on a shared machine or sharing a network device, then combined traffic is now peaking at 40% on that device. If we were to double or triple the individual viewer bandwidth we are now at 80% or 120%, potentially denying service, causing widespread packet loss, etc. Is this realistic? If not, what would the implication of this be? Nobody knows, and as such I am rather wary of just slapping in a change to the viewer and hoping it has no bad side-effects. As Firestorm carries the vast majority of users we need to consider the cumulative effect of things.
A more likely scenario is a related issue where the short term bursts of traffic caused by sim crossings push over a limit, and the increase that benefits a handful right now ends up overall worse when we all have it. Having said all that, there is every likelihood that the viewer setting has little to no actual effect on the server side, that the server will simply ignore it, making changes on the viewer meaningless. However, without proper engagement from the server team at LL we can only guess. Overall I am with @Coffee Pancake on this: the setting should simply be removed if it has no impact, but I don't believe we have all the information we need to answer that. Perhaps @Simon Linden could discuss this at the server meeting tomorrow. It happens at a bad time for me RL-wise but I will try to attend.
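The thought exercise above in a few lines of Python, with the same made-up numbers, just to show how quickly a per-viewer increase could compound across a shared machine:

```python
per_region_share = 0.02       # made-up: each busy region's viewers use 2% of the box
regions_per_machine = 20      # made-up: regions sharing the machine / network device

for multiplier in (1, 2, 3):  # today's setting, doubled, tripled
    total = per_region_share * regions_per_machine * multiplier
    print(f"x{multiplier}: {total:.0%} of the shared device's bandwidth")
# x1: 40%   x2: 80%   x3: 120%  -- the last one is outright oversubscription
```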
  14. It is, I think, off-topic for this post, or at least the original intent of it. But if you want to see the evidence I collected then please do take a moment to read one of my two blog posts (actually it takes more than a moment, to be fair); those, I hope, put the context to the drawcall overhead. Drawcalls are absolutely a major source of lag, the single highest source of CPU-bound lag in fact, BUT this is in rigged mesh. Rigged mesh does not have the ability to consolidate drawcalls into draw batches; as such, I suspect it is likely that your observations were made with unrigged mesh, in which case you are indeed correct, the impact then is far lower. Here's an illustration of just how damaging the worst bodies are, using a tool I am working on. The times shown and the lengths of the bars are the render times of the avatars. This particular chap is wearing a Gianni body until I block the asset.
  15. @bigmoe Whitfield essentially the problem is that shadows require everything to be drawn multiple times, and also with less culling (objects behind you can still cast a shadow). The cost of rendering with shadows is approximately 3:1 compared to without. As such, "shadows slow things down" is always going to be true; the objective is therefore to get to a point where running with shadows is fast enough that you don't feel the need to turn them off. For most people, a crowd of 10-15 "typical" people with shadows will result in single digit FPS. Disabling shadows will lift that to, say, 15-20 fps. The problem here is not so much that shadows slow us down; it is that they slow us down from an already poor framerate to a completely unacceptable framerate. If we can get to a point where the "with shadows" performance is closer to 30fps than 3fps, then the need to turn them off will largely go away.

If you've read my recent rants about alpha cut bodies, the answer to the problem lies largely in there; it is a content problem as much as anything. It literally takes 10 times longer to render a body made of ten parts than one of 1 part, and when we multiply that up with shadows the impact becomes even more pronounced. When an avatar is made of 10 cuts, it will require 30+ calls to draw it; a popular mesh body has 200 parts and thus needs 600 draw calls. That is 50 times slower. You don't typically see those extremes when you see it in the middle of a scene, but it can be clear.

From the viewer side, the Lab are working on a series of performance enhancements, some of which you see here, and in parallel I am working on a system that selectively manages shadows to try to balance the aesthetic and the FPS, to give you something in between the all or nothing we have today. I believe Catznip are working on some rendering things that might also benefit us all (@Coffee Pancake can comment more knowledgeably on that). There are lots of things being done to make it better; however, doing something 300 times that should only need to be done 3 times is not fixed by making things twice as fast, it is fixed by doing less work in the first place. Doing that lower workload faster is just icing on the cake. From the user side, you can also help by insisting on content that is better suited to the platform; the more people that can be persuaded to adopt non-alpha cut bodies (I realise that you specifically don't fall into this camp 🙂 as tinies and similar tend to be low overhead), and heads and single component rigged hair, the better the performance for you and everyone nearby. Part of the aim of the newer tools we all hope to have is to better equip people both to manage their FPS and to understand the causes of slowness. Longer term, a more modern pipeline will allow better efficiency, though it remains the case that shadows require a lot more work and will likely always have an impact.
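The arithmetic behind those numbers, as a hedged sketch: the "three passes" figure below is just the approximate 3:1 shadow cost quoted above, not an exact count of the viewer's shadow passes.

```python
PASSES_WITH_SHADOWS = 3      # ~3:1 cost of shadows vs none, per the figure above

def draw_calls(parts, shadows=True):
    # every part is its own draw call, and every extra pass repeats the bill
    passes = PASSES_WITH_SHADOWS if shadows else 1
    return parts * passes

for parts in (1, 10, 200):
    print(parts, draw_calls(parts, shadows=False), draw_calls(parts, shadows=True))
# 1 part        ->   1 /   3 calls
# 10 cuts       ->  10 /  30 calls
# 200-part body -> 200 / 600 calls
```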
  16. That looks very like the alpha bug introduced about 18 months back as part of a KDU update.
  17. Sadly, the Jeremy Linden "uploading a mesh model" page has been wrong more or less since it was written, as attested to by the comment from Drongle. Sadly, despite many attempts to get LL to correct this, including submitting a fully annotated review as a Google doc highlighting each and every point about 2 years ago, nothing has been changed. It remains not only wrong but increasingly wrong, as the entire look of the mesh uploader has changed a few times now. The primary error is in the description of the naming, which is so badly wrong as to cause most people who try to do the right thing to just give up and persist in using the old school "guess and hope" matching. This is garbage... utter nonsense. This is the actual truth:

The mesh uploader preferably uses strict naming rules for meshes within LOD files. A separate DAE file must be used for each set of LODs. For a model composed of three High LOD meshes named MyTable, MyChair, MyVase, we might have a DAE file called MyTableChairSet.dae. The Medium LOD file is ideally called MyTableChairSet_LOD2.dae, and if so it will automatically be detected when the High LOD file is selected for upload. It can be named anything you like if you do not care about the auto detection. The mesh models within the Medium LOD DAE file should be named identically to the High LOD meshes but with a _LOD2 suffix, thus MyTable_LOD2, MyChair_LOD2, MyVase_LOD2. The same applies for each LOD, with _LOD1 for Low and _LOD0 for Lowest. Therefore:
file MyTableChairSet_LOD1.dae: MyTable_LOD1, MyChair_LOD1, MyVase_LOD1
file MyTableChairSet_LOD0.dae: MyTable_LOD0, MyChair_LOD0, MyVase_LOD0
This extends to the physics, where _PHYS can be used:
file MyTableChairSet_PHYS.dae: MyTable_PHYS, MyChair_PHYS, MyVase_PHYS

To re-iterate, because of the confused nonsense perpetuated by out of date misinformation in the "official documentation": the file name is entirely your choice; it is only in the last 18 months or so that we have auto detected the LOD suffix on the file names. The object naming is also your choice, BUT if you do not follow it then the viewer quite literally tries to guess based on order. If you do use the correct naming convention (for the meshes) - Blender users in particular please note, it is the meshes, NOT the objects, that have to have this name, so if you rename the Cube.001 object to be MyMonkeyHead, that is not enough, you need to change the underlying mesh name to MyMonkeyHead - then when the viewer processes the LOD files it will find MyTable in the High LOD and search for MyTable_LOD2, MyTable_LOD1, MyTable_LOD0 and MyTable_PHYS in turn and correctly associate them, 100% accurately, no guessing. If you enable verbose logging in the new log tab, you will see streams of this info swirling past as it tries to find the right matches. The thing that trips most people up is the "deep rename" needed when exporting from the 3D app. It definitely applies to Blender; I am not so sure about other apps as I do not use them.
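For Blender users, a tiny snippet run from the Scripting workspace will do that "deep rename" for every selected object, copying the object name onto its underlying mesh datablock. This is ordinary bpy usage, not part of the uploader, and it assumes you have already given the objects the names you want exported.

```python
import bpy

# copy each selected object's name onto its mesh datablock, so the name the
# Collada exporter writes is the one the uploader will search for
for obj in bpy.context.selected_objects:
    if obj.type == 'MESH':
        obj.data.name = obj.name
```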
  18. Nothing to do with the mesh optimiser. I would assume that it is the new weights list in the uploader, which has to be populated, and by default we now have "show weights" auto-enabled for rigged mesh. I would suggest two steps, neither of which addresses the underlying problem, but they may help ease the pain:
1) Turn off the auto enable weights and the auto-preview weights options on the upload preferences panel. This will mean that the weights panel is only filled when you deliberately enable weights. It does not make the problem go away, you will still have to wait for the list to be filled, but it will be at a moment that you choose, which may make it slightly less inconvenient.
2) Simplify your meshes. Most reports I have seen seem to show that people are trying to use meshes with 100K+ triangles. This is right on the limit of upload capability, and if it is loading so slowly that it is hanging then you might want to consider re-topologising and reducing the triangle count. That is not in any way intended as a brush-off, just a simple statement that the amount of work that the viewer is having to do here is proportional to the mesh complexity, and that same load issue can be extrapolated to rendering performance too, i.e. slow load here is likely an indicator of slower draw inworld.
Over-complex or not, though, I would like to help do something about this, and I would love to have some example meshes that cause the slowdown/hang so that I can work on optimising this for a future release. If you have an example mesh that causes you to hang and are willing to share it so that I have a test mesh, then please either raise a Jira or contact me. Beq
  19. It is undoubtedly a very tricky road to go down; a very tricky and, I suspect, ultimately destructive path, as you say. As soon as payments get involved things get complicated, people have expectations and demands (not that some people don't already treat TPVs as if we owe them something lol), and a proper commercial relationship between a TPV and LL would change the entire working relationship. I don't think you meant it explicitly, but just to be clear as it is raised from time to time: there is no "Firestorm fund" per se. As a registered non-profit that line is very carefully trodden or things become extremely complex. We've had occasional fundraisers in the past; these typically involve a short term sale of some item of merchandise in order to raise very specific targeted funds, and those are to cover out of pocket expenses such as the web servers and certain commercial software licenses that we otherwise have to fund ourselves. Not one penny passes to any individual in the FS team. This is a central tenet of how FS operates; it avoids complicated taxation (consider that the team is international, but the non-profit has to exist in some legal regime - for FS it is as a US non-profit), which would otherwise mean accountants, a host of legal issues and, perhaps most importantly, "entitlement". We'll all have seen posts in group chats or here or elsewhere, from time to time, from users that clearly feel entitled to some bug fix or other; if any payments are ever involved then people start to feel that they have rights that they simply do not, and which we, as an entirely volunteer organisation (like many other TPVs), could never meet. @NiranV Dean quite famously makes it clear on his website that his viewer is exactly that, "his viewer", and nobody but he gets to call the shots. When payments get involved that independence is increasingly hard to maintain, and as such it is avoided.

Having said that, in general, when LL ask for a contribution the TPV will provide it if it is possible to do so and simple enough to be worth the extra effort. However, as I have discovered, this "extra effort" can be far more demanding than writing a given feature in the first place, and in some cases, due to inter-dependencies on another developer's code, it can prove impossible to give the Lab what they need in a legally acceptable way. There are some TPV devs that refuse to sign a contribution agreement (which is an entirely valid position - it does require you to give over personal details and technically allows the Lab to approach your RL employer, which may not end well). There are, of course, former TPV devs that are no longer contactable for all kinds of reasons. And yes, with the viewer code being open source you'd think that it was just free to pick and choose, and for other TPVs this is to some extent true. We TPV devs and TPV projects in the wider sense have personal codes of conduct and general agreements whereby we don't typically use another TPV's features without them having publicly released it or granted explicit permission. It is only fair, after all, that they should get the credit for their innovation, and we ours. Most (but not all) TPVs follow this. It is of course also important that attribution is given; you will often see on commits from myself, Ansa etc. on FS, credits for code coming from Rye (Alchemy) or Niran etc., and they likewise include us in either their comments or commit messages or both.
(We also credit the Lab of course; every FS release note has an extensive section that details those fixes and features that come from the upstream source.) For the Lab though, as a commercial entity, it is different; they should never lift code directly from the open source domain, it is risky and the lawyers would be severely displeased. There are many issues with them doing this, but mostly it ties their hands should they ever want/need to change the licensing terms or sell the rights to the code, only to find that someone asserts ownership of some key component and blocks them. The Lab are not alone in this by any means. In my RL I have worked for large international corporations that have similar policies and thus similar struggles (I have had to have team members stay away from the office due to HR mess-ups where their contract renewals had not properly overlapped); every line of code that enters a corporate code base has to be accounted for if they are ever to be able to assert ownership. When we as TPV devs sign the contribution agreements we have to give up all rights to the code we contribute and demonstrate that we are in a position to do so (hence the note above about potentially contacting your RL employer). Anyhow, that was a very long way of saying that I agree with Niran and that the paid-for contribution path is a minefield. There are ways through that minefield, but it almost certainly involves employing a dev directly, and that may not be feasible for many different reasons. It would also remove that person from the TPV for a period, and given that there are remarkably few of us, that may not be in the best interests of things.
  20. Then back out the change locally. You are choosing to build from a live development branch, not from the last release branch; as such you are exposed to all kinds of daily churn. You also have the freedom to ignore any updates you don't think you need.
  21. It would be a pretty lame grief if it were, but no, the example you give would not work in the way you describe, because the optimisation works for rigged mesh and transparency overlay does not work on rigged meshes anyway. I think when you say material, you mean mesh/submesh (i.e. a parcel of stuff for the GPU). In all cases, if you are drawing less, then the frametime will drop, absolutely. Look at the example I showed of the multipose mesh feet; look at the trend in the last year or so towards multistyle hair. These are not the massive disaster that you would expect, purely because we side-step the issue with the "transparency trick". Would I advise going down that route...? It depends. In most cases there are more efficient means to achieve a result. I would look into how you can achieve the same without that; be inventive. In the long term, once we've all got rid of the 1000 lb gorillas on our backs, I may well be whining about how unnecessary transparent meshes are causing cache churn and thus slowing things down. Everything has an impact somewhere.

Keep in mind that frametime lag is not the same as lag in the broadest sense. As I noted in the blogs, I am measuring only the outright render cost, the most direct impact on the FPS. If you want to consider the time it takes to download, to cache, to unpack, then that extraneous mesh will have an impact (the file is bigger, after all). Those aspects do not, however, directly impact FPS because they are not conducted by the main thread. And I want to be clear on this: just because I showed that the triangle cost is not a driver of the CPU render time, that is not a green light to slam things full of triangles. Far from it. Triangles do have an impact (especially on the low end machines, as my graphs indicate); they are just not detectable because the drawcall load is such a heinous thing. Once we remove those overheads then the smaller costs will show up more clearly and may well nuance "efficient" from "very efficient". And while I know that this is now talking about something else entirely, please keep in mind that the drawcall burden is a large part of why almost everyone is CPU bound in Second Life (that is to say, for a typical scene with avatars in it, your GPU is not working as hard as it could because the CPU is too choked up with pushing boxes of body parts around to keep it busy). Once we get rid of that bottleneck then we're likely to see more people constrained by GPU limits, and one of the worst offenders there is "overdraw" (painting the same pixel more than once), which is caused by many factors and of which tiny thin triangles are a big source. We are getting deep into "future problems", and they'll be nicer problems to have in that regard, as we should (hopefully) be in a generally happier place. 😄

No. Not in anything like the same way. The reason that the submesh diffuse colour transparency test works so well is that no matter how large or small the mesh, it is a single value test: "is the face alpha set to 0.0?". You cannot make the same assertion for arbitrary textures; we would have to check whether every pixel was transparent to draw the same conclusion. What we could do is test a known UUID such as the default transparent texture (somewhere in a thread over the last week or so we examined a Jira that spoke of this). We could... but why pollute things? If we have one perfectly good "off switch" we'd need a strong use case to want to start adding more variants.
Never say never, but a compelling reason why the existing method is not adequate, and proof that a new method has no unexpected gotchas (always a worry), is needed. Finally, and I use this as a summary of all the above: yes and yes. The key term here is "without much guilt". I honestly do not feel comfortable suggesting that having invisible meshes around is a good thing. In the end it is not; if you can find a better way, please please please do. However, they are not leading to anything like the kind of problems the slices cause. In the end, it would be better all round if we didn't have invisible floaty stuff being worn; it'll bite us on the bum in the long run, no doubt. But if you need to pick one then it is by far the lesser of two evils. Also, all things in moderation. A few slices here and there to enable some cool feature is fine. Hundreds of them for no real benefit is (I am arguing) very much not fine.
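To illustrate why the single-value test scales so well compared with inspecting a texture, here is a hedged sketch. The Face class, its attribute names and the placeholder UUID are all mine for illustration; they are not the viewer's real structures.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

DEFAULT_TRANSPARENT_TEXTURE = "replace-with-the-real-transparent-texture-UUID"  # placeholder only

@dataclass
class Face:                      # hypothetical stand-in for one submesh/face
    diffuse_alpha: float
    texture_id: str = ""
    texture_pixels: List[Tuple[int, int, int, int]] = field(default_factory=list)

def face_is_invisible(face: Face) -> bool:
    if face.diffuse_alpha == 0.0:                         # the existing off switch: one scalar test
        return True
    if face.texture_id == DEFAULT_TRANSPARENT_TEXTURE:    # a single UUID comparison would also be cheap
        return True
    # proving an arbitrary texture is invisible means scanning every pixel's alpha,
    # which is exactly the per-mesh cost the single-value test avoids
    return bool(face.texture_pixels) and all(px[3] == 0 for px in face.texture_pixels)
```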
  22. That's certainly not a requirement imposed by the uploader. Do you have an example screenshot to show us what you are trying to achieve? To answer your original question, how to create a physics model for a complex model: the basic answer is to box out the areas that you need to be able to sit upon/hide behind. Keep things as coarse as possible; SL will never properly react to a physics shape more accurately than about half a metre in any reliable sense. I have an example image from a couple of years back. This was a statue-type model, a large snow sculpture for a fun snowman competition. I wanted to be able to sit on it and hide behind the legs etc., but it also had a land impact budget, so that dictated a very economical physics shape. I tried various sets of hulls and boxes, and in the end the best fit for a balance of physics silhouette and land impact was the following mix of planes and simple mesh hulls (you cannot see the back-facing panels because they get culled by the shader).
  23. Its sole manifestation at present is the performance floater project viewer. I would recommend having a look and giving feedback on the UI; that's what project viewers are for. I hate a couple of aspects of the UI and really like some others. The content of it, so very flawed, I've covered; let's not go there, I can feel my blood pressure rising 😉
  24. Yes, indeed. I was referring to the linear change in frametime, whereas FPS is the reciprocal of frametime and so changes non-linearly.
  25. Pretty much, yes. That was, to some extent, the surprise to me. Based on what we are all used to hearing, I expected to see something less than linear because I was expecting a larger impact from the triangles, and thus as the tris per drawcall reduced you'd see a "trade off". The only time I see any indication of that is on the laptops without dedicated GPUs, and even then it only held true for the first few.
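A tiny illustration of why that distinction matters (the per-avatar cost is a made-up number): frametime grows linearly with each extra render burden, while the FPS you perceive falls away as its reciprocal.

```python
base_frame_ms = 10.0        # hypothetical cost of the scene with no avatars
per_avatar_ms = 2.5         # hypothetical render time added by each avatar

for avatars in (0, 5, 10, 20):
    frame_ms = base_frame_ms + avatars * per_avatar_ms   # linear in frametime
    print(f"{avatars:>2} avatars: {frame_ms:5.1f} ms/frame -> {1000.0 / frame_ms:5.1f} FPS")
# the ms column climbs in equal steps; the FPS column does not
```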