Beq Janus

  1. Ha! Yes, I never spotted that; good catch. I can explain why it happens. It is because people rely too often on the "old mesh asset" rules, and then it bites them in the bum. For each component in a link set in the HIGH model, the viewer tries its best to find a matching model in each of the object sets passed for the remaining LODs. It prefers to do this by following explicit naming rules: if a link in the HIGH is called "donkey", it will look for a link in the Medium called donkey_LOD2, in the Low called donkey_LOD1, and in the Lowest called donkey_LOD0. This is also the only way to get physics to associate properly with linksets (donkey_PHYS). If that fails, then it falls back to legacy matching, which expects the linksets to be in the same order. As Rey noted, that was almost certainly not the case here. Animats notes that you are not always guaranteed the same results; this is true in practice but fairly random. There are technical reasons why (to do with the storage method used in the viewer) that are really beyond this thread, and there may be extenuating circumstances that make it worse or more likely. I may get back to a round two of "improve the uploader" sometime. I do have a nearly working Blender addon that does the exports, but it is sitting in my todo queue behind viewer work and RL.
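The matching rules described above can be sketched roughly like this. This is a hypothetical illustration, not the viewer's actual C++; the function and variable names are made up for clarity:

```python
# Illustrative sketch of LOD link matching: prefer the explicit
# _LODn / _PHYS naming rules, fall back to legacy order matching.
SUFFIXES = {"medium": "_LOD2", "low": "_LOD1", "lowest": "_LOD0", "physics": "_PHYS"}

def match_links(high_names, lod_names, lod):
    """Map each link name in the HIGH model to its counterpart in one LOD set."""
    suffix = SUFFIXES[lod]
    matches = {}
    for i, name in enumerate(high_names):
        wanted = name + suffix
        if wanted in lod_names:
            matches[name] = wanted          # explicit naming rule
        elif i < len(lod_names):
            matches[name] = lod_names[i]    # legacy fallback: same link order
    return matches

high = ["donkey", "cart"]
medium = ["donkey_LOD2", "cart_LOD2"]
print(match_links(high, medium, "medium"))
# {'donkey': 'donkey_LOD2', 'cart': 'cart_LOD2'}
```

Note how the fallback silently pairs links purely by position, which is exactly why a reordered linkset "bites you in the bum" when the names don't follow the convention.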
  2. Out of interest, @MSTRPLN and @Chic Aeon, am I correct in noting that the model does not appear to be triangulated in 3DS (and in the cases where you have observed such things, Chic)? In Blender, you can choose to triangulate in the app, triangulate on export, or just leave the quads/ngons and let SL sort it out. I have found that on occasion SL's attempts to triangulate can be bizarre, so I typically triangulate before I export so I know where I am at. I have no idea how the 3DS workflow stands on this matter, but could it be that you are hitting that same issue: SL triangulation going off on some weird adventure?
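To make concrete what "triangulating before export" actually does, here is a minimal fan-triangulation sketch, the naive scheme for splitting quads/ngons; tools like Blender offer smarter variants (beauty, clip), and this toy function is mine, not any exporter's:

```python
def fan_triangulate(face):
    """Split one quad/ngon (a list of vertex indices) into triangles
    by fanning out from the first vertex -- the simplest possible scheme."""
    return [(face[0], face[i], face[i + 1]) for i in range(1, len(face) - 1)]

quad = [0, 1, 2, 3]
print(fan_triangulate(quad))   # [(0, 1, 2), (0, 2, 3)]
```

Different tools can pick different diagonals for the same quad, which is why an uploader's automatic triangulation can "go off on a weird adventure" compared to what you saw in your modelling app.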
  3. There are a lot of variables in performance, so this is going to be a case of going through it bit by bit. A few things to double-check first, though. 1) If you have anti-virus or anti-malware software, ensure that the cache folders for the viewer are white-listed. Ask in the Firestorm support group in world about white-listing and someone there will help you and point you to the right info on our wiki. You mentioned choppiness, and this is frequently the cause, because every time the cache gets updated your anti-virus software decides it has to go and inspect it. If there were ever a #1 culprit for people finding that things suddenly deteriorate, it is probably that. 2) Do ask in Firestorm support too; they are far more experienced in helping people troubleshoot various faults, and more interactive than the forums. I am not all that familiar with AMD and Radeon, so I cannot tell what the expected performance/capability is, but with your settings down that low there is something wrong. It is a common misconception that turning off atmospheric shaders and advanced lighting will help; it can often have the opposite effect. You should compare a session with and without those enabled (leaving all other settings alone) and see how you fare. Having them off matters if the bottleneck is in the GPU; in your case you are forcing more on to the CPU, and if that happens to be the bottleneck then you are not helping matters. All that said, a quick google would suggest that your hard drive, which is quite slow, is the weakest point. Given that you have RAM to spare, you might want to consider testing out a RAMDisk and putting your cache on that. It would save you spending on an SSD only to find that it still made no difference.
If you fancy trying this, I would read https://www.ghacks.net/2017/04/03/the-best-free-ramdisk-programs-for-windows/ or one of the other similar threads out there in the interwebz, set up a RAMDisk of, say, 3GB (make sure your cache setting in the viewer is smaller than that) and assign your cache to the new drive (remember the white-list exclusions when you move the folder, though). If this helps, then you can proceed with a bit more confidence that the HDD is one of the issues. Regards Beq
  4. Peculiar... How long are the rails? If you import the DAE for just that LOD into 3DS or another 3D program, does it look fine?
  5. As has been noted, the actual improvement to your complexity numbers depends somewhat on whether the mesh creator has entered into the spirit of BOM and actually taken steps to reduce the onion layers. Where they have done so, your complexity will decrease. I switched from my legacy Slink body to the Slink Redux and saw a nice improvement, with the added benefit of having perfect control over the alpha now (should I choose). However, I think the reduction to your complexity is a small reward for "doing the right thing". The long-term benefit of people moving to BOM is that rendering a scene with lots of avatars in it becomes far less work for the viewer. In moving to BOM, with a more efficient BOM body/head, you are contributing to the greater good of everyone shopping/dancing with you. The complexity numbers do not tell the whole story; while we can argue back and forth about what the "correct" complexity is, we need to accept that it is really just a guide, and the actual benefit to an individual depends on their computer, video card, the amount of RAM, etc. What we can say with some certainty is that with BOM the viewers that are drawing you will have to do less work to render you. A pre-BOM mesh body is made up not only of multiple mesh layers, but also of tiny fragments of mesh that are organised to allow you to selectively hide bits of the mesh by making them invisible. As a result, a typical pre-BOM body is made up of tens of different meshes. Thus you have two distinct aspects of the mesh body/head that affect your complexity: the alpha slicing and the onion-skin layering. BOM addresses both of these, if your mesh creator chooses to fully embrace it. Sidean from SLink went all out: the Redux body models (which are offered alongside the legacy pre-BOM models) use the bake system to reduce the onion skins required, and at the same time use the bake system's "alpha" masks to eliminate the need for sliced-and-diced meshes.
As a result the body retains the same mesh structure and shape, but dramatically reduces the drawing overhead. There is an added benefit too: the alpha can be precisely defined by a clothing creator, giving far more freedom than the old legacy models had. In the old system a designer had to ensure that the lines of the clothing followed the body segments, so that when setting an alpha you didn't have a hole in your body on show. Other "BOM" body models have for the most part just allowed BOM textures to be applied, leaving their avatars just as complex as before. As a result they are not going to give you the same kind of reduction in complexity; in fact, probably none at all. I suspect/hope that this will be an interim step while the creators do all the rework that is required. It is worth noting too that while SLink have stolen a march here and really set the target for BOM bodies, the existing pre-BOM "legacy" SLink models are still shipped alongside, and there is good reason for this.

BOM pros and cons:

PROS
1) Far lower rendering overhead; less work means more frames per second and lower lag (but realistically only when it is widespread).
2) Perfect alphas, but only when your creator makes one or you create one. Creators can and will provide these, especially as demand grows; many never stopped. Those that support standard sizing and non-mesh avatars, and those that remember pre-2011, will know all about this.
3) No more texture-hogging, laggy HUDs that eat up all your resources.

CONS
1) No simple auto-alpha. Outfit folders can get you some of the way; RLV can help too (if you have it).
2) No HUDs means no fat packs of appliers with a simple UI.

Hopefully Maitreya (as the clearly dominant female body at present) will follow in the footsteps of SLink and allow people to choose between old and new depending on their needs; more people would then have a chance to enjoy a new, slimmer, more lightweight Second Life in the new year 🙂.
At the moment I can choose to wear the original SLink models where I need appliers or don't have time to make an alpha, or I can use the BOM edition when I want to feel less of a social burden and reduce the impact I have on others at shopping events. Beq x
  6. It's one way to do it, and certainly not a bad way. The Medium LOD is typically the one that is visible to most of the people most of the time, as such it is worth making sure you get it right.
  7. Not saying that this is right or wrong, but for something like the object here I would tend to do the following:

For the Medium LOD, remove all the horizontal surfaces; a few meters away, when it drops to Medium, those are too small to warrant inclusion. You need to be the judge of what can and can't be removed, but be ruthless.

For the Low LOD, I'd drop to a single column and have a fight with myself over whether it should be the thickness of the base and cap or the pillar, and probably settle on the pillar. In an object this simple, though, I'd probably decide I didn't care so much and leave the Low and the Medium as the same 24-triangle object.

For the Lowest LOD, an 8-triangle box is going to be fine, but you could go the whole hog, use the 8 triangles in a cross (when seen from above) and then use an imposter texture; use the Low/Med as the model for the imposter, though, so that the Lowest is not showing detail that was not present in the preceding LOD. Honestly, though, making the viewer download a texture, no matter how small, and carrying the extra triangle in the other LODs to support it, is not (in my opinion) the right trade-off in this specific case; just go with the box.
  8. In time I'll revisit the uploader. Those messages aren't from Firestorm; they are the mesh asset validation errors sent back from the server. They are put in the log because, at present, there is no simple route back to the mesh upload dialogue from the arbitrary callback that is triggered. It is not impossible to plumb them in, but I've not looked into it. Ideally it could be something that @Vir Linden could sweep into the changes they are making. My reorganisation of the mesh uploader, with the resizeable preview, paves the way for more verbose messages; there is space on the left, if needed, to place a scrollable widget with all the validation failures. I'm wary, though, of making the uploader easier to abuse with poorly constructed mesh; every change we make should steer us towards fostering a better creation culture.
  9. OK, so... we're all kinda right. @animats is right that the viewer is able to generate mesh that the user is not (currently) allowed to upload, kind of. Yours truly is right that it is impossible to upload a mesh asset that does not have matching material counts for each LOD. @Aquila Kytori is on the right track: the "unused" materials are assigned a placeholder. The placeholder is a single triangle of zero size, which somewhere along the way must be decaying to a single vert in the export. I'm not inclined to chase that squirrel just yet. Here is what is happening at upload. I've documented the mesh asset format on my blog in the past, pulling in the (essential reading) prior art of @Drongle McMahon; see my blog post "When is a triangle not a triangle" for discussion and links off to other distractions. Skimming through, I think it still holds true to what I know now (it was written before I was a viewer dev). The mesh header prefixes the model and defines the global parameters, including the set of materials to be used. That array of materials is fixed in the header, which is why I can be so certain of my assertion that each LOD has to have a placeholder array: without it, if material 6 was missing, the viewer would see material 7 as 6, 8 as 7, and fall off the end of the list looking for 8. The way we always deal with this as creators is to insert tiny single triangles to cover the missing material slots. My assumption (which I am about to prove incorrect) was that the viewer did that on upload if GLOD decided to eliminate a material entirely. In fact, the viewer does something a little unexpected: it creates a special placeholder label.
Here is a fragment of the decoded asset:

    000005 [ ARRAY (8 elements)
    000010   { MAP (len:1)
    000025     Key:NoGeometry
    000027   }
    000032   { MAP (len:1)
    000047     Key:NoGeometry
    000049   }

I've put a full annotated decoding of my 8-material plane (similar to the one Aquila used) at https://pastebin.com/W7uN3UBu The entry of a key with the string "NoGeometry" creates an entry in the mesh asset for that material, thus fulfilling my requirement above; there is no way for a user to create this themselves. So... that's all very good, Beq, but what about @Aquila Kytori's stray "vertices"? Well, that happens at the other end of the lifecycle. When the viewer is decoding a mesh asset ready to present it to the pipeline, it faces a similar problem to the one described before. You really, really need to have actual mesh to render, or it would require all kinds of additional sanity checks and edge cases, so the viewer creates some dummy geometry to act as a sentinel. Sentinels are a not uncommon way to simplify bounds checking; software engineers may well have seen them in many algorithms, akin to an "Elephant in Cairo". Here is the viewer, placing its elephant: "It's a triangle Jim, but not as we know it". The viewer creates a triangle composed of three indices into the vertex array, each entry set to 0; the vertex array is of length 1 and has coords (0,0,0) in the normalised mesh domain (-32768 to +32767), i.e. the origin, for both positions and normals, as well as (0,0) for the UV. Conclusion: we are all a little wiser than we were, I hope. There is indeed a path that the human creator cannot access; however, that path does not allow them to avoid the materials. The question is, how would the user access this? The viewer can do it because it "knows" which material it has eliminated and thus uses a placeholder. The placeholder is really no different to what we do manually, though arguably a little more compact.
In order to do this in the manual case, the viewer would need to be able to follow the unordered material lists through the discrete models and work out which ones were missing. It sounds doable; I effectively do this in my Blender Addon, as shown here. The Addon (yep, it will be up on github as soon as I get the chance) has been used here to auto-populate the lower LODs from the original 8-material plane. I have then manually reassigned the materials on a couple of faces in the Lowest LOD in order to trigger the error shown. The "caution" marker is highlighting the fact that no geometry is present for the expected material in that model. I will have a "fix me" button at some point, which will create an arbitrary triangle, but it would be better if we could come up with a solution that allows the uploader to recognise this without making it harder for users to debug their genuine material cockups. I need to think more about it and consider the inverted cases, such as imposters, where you effectively have a null material in the High that acquires mesh data in the lower LODs. For the terminally curious, here's a short video of me fiddling with that in SLender (the Addon). Quite appropriately, the video shows a bug at the end, as the High LOD shouldn't have shown an error :-) https://i.gyazo.com/9950f318cd3d27fa36d1bf14e0a10b20.mp4
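The two placeholder mechanisms discussed above can be sketched like this. To be clear, this is an illustrative toy, assuming made-up structures and names; it is not the real mesh asset format, the viewer's code, or my Addon:

```python
# Sketch of (a) the creator-side workaround: pad every material slot
# in a LOD with a degenerate triangle so the slot array stays aligned
# with the header's material list; and (b) the viewer-side decode
# step that turns a "NoGeometry" placeholder into a sentinel triangle.

def pad_missing_materials(header_materials, lod_submeshes):
    """Return one submesh per header material slot, padding gaps
    with a zero-area triangle (three references to one vertex)."""
    padded = []
    for mat in header_materials:
        if mat in lod_submeshes:
            padded.append(lod_submeshes[mat])
        else:
            padded.append({"verts": [(0, 0, 0)], "tris": [(0, 0, 0)]})
    return padded

def decode_submesh(entry):
    """Viewer side: a 'NoGeometry' placeholder becomes a sentinel
    triangle at the origin, so the pipeline never sees an empty mesh."""
    if entry == "NoGeometry":
        return {"verts": [(0, 0, 0)], "tris": [(0, 0, 0)]}
    return entry

lod = {"wood": {"verts": [(0, 0, 0), (1, 0, 0), (0, 1, 0)], "tris": [(0, 1, 2)]}}
result = pad_missing_materials(["wood", "metal"], lod)
print(len(result))   # 2 -- "metal" got a degenerate placeholder
```

Without the padding step, a missing slot would shift every later material down by one, which is exactly the "material 7 seen as 6" failure described earlier.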
  10. I'll have a look at how it works when I get a chance. But yeah, as far as I can tell there is absolutely no way to upload a mesh without having at least one triangle per submesh (where a submesh is what we're calling a material). What you are seeing as vertices are probably very tiny triangles, is my guess. However, until I look, that remains conjecture, so I'll take a peek when I can.
  11. No, that Jira is nothing to do with materials at all. It is to do with linksets and the fact that all but the root prim of an uploaded multipart mesh are called "Object", even though the scene file and the internal mesh held by the viewer retain the full label, i.e. the same issue that you mentioned in the sentence I cited. The final upload to the server has no such label because it is not defined in the upload format; thus it cannot be fixed by the viewer alone.
  12. If you can find a mesh that reproduces this then I'll definitely dig into it. I don't believe it is the case because the internal mesh asset format has no way to deal with such an eventuality. I believe that at least one of the MAV missing level of detail messages is due to this. I will have a look when I get some spare cycles.
  13. I've tinkered a little with that. The Principled BSDF should "in theory" get us some of the way if PBR is explicitly avoided; splitting out and recombining is all doable, but the lighting is the hardest part, I suspect. There is a Jira for that: https://jira.secondlife.com/browse/BUG-202864 The fix is not as simple as I first thought. The viewer has the information available to it, and it is in fact saved, but there is no space for it in the upload format, requiring a protocol/format change. I've added my suggestion to Liz's Jira. That is a far tighter integration than any game engine I know of, even more than Unity or UE. The problem you have is that none of those tools are "game asset designers"; they are one part of a chain of tools. You may start in ZBrush, move to a retopology tool, create a normal map baked down from the high-poly to the retopologised HIGH LOD model, then take that map and perhaps a colour map out into Substance Painter. You probably upload the mesh (sans textures) and then use the local texture feature of the viewer to tweak the exports from Substance to get something that looks right in SL. The fact that the uploader supports "textures" is increasingly quaint when materials ought to be being used to try to keep the vertex monster in check. Blender covers an increasingly large part of the spectrum well, and for free, and is probably far more common here in SL than in the wider "professional" market where Autodesk holds sway (even more so if you only count the legitimately licensed versions of Autodesk products). I'd love to see Linden Lab as a corporate sponsor, and perhaps get some directed effort towards shader and asset import limitations (https://fund.blender.org/corporate-memberships/). Related, but separate and on my wishlist, is having a materials asset type, which would contain a bundle of maps and their settings; you'd then apply that to a face. I don't think that it has any real place in the mesh uploader, though, unless you were to move to a full-blown asset importer.
  14. Firstly, feature requests should go in Jiras, so once this thread runs the usual course and we filter out the trolls-and-wastrels stuff, turn this into a Jira or set of Jiras. I'm always happy to consider ideas and will contribute back to SL from FS once implemented, but the best place to put this might be the LL Jira in the first instance. Taking your bugs in order. * What should the root prim be? The problem is that the DAE format has no concept of a root prim, and thus there is no marker. Any toolchain-specific encoding/ordering cannot be guaranteed; the uploader has to work for Maya, Blender, 3DS, etc. The models are read from an XML DOM, and there is no order guarantee, I don't believe. If there is a way that makes sense and can be consistently enforced without breaking people's current workflows, then I am happy to look at it. See below, though: I don't think alpha sorting is the answer. * Texture upload works just fine. Quick demo video: https://i.gyazo.com/b6f284220b9039343f6ec34c661186d9.mp4 Result: https://gyazo.com/91e0acf145bba8869a7c3fa0a049ab13 * I don't believe your assertion is correct, though I am willing to be convinced. A subset should be allowed, but the uploader would still need to create a placeholder, I believe. The internal mesh format requires the same number of slots. Each material is a separate drawable mesh; these are indexed into a vector, and thus if a mesh does not exist with at least one triangle in it, then when the LOD switches the pipeline would have to deal with that. It doesn't. That does not mean it can't, just that it is not a simple alteration in the current state of things, and the gain probably does not warrant the invasiveness of such a change. Feature request: the root prim should be the first in alpha ranking? Really? That makes little sense to me, given that the object is named after the root prim. The last thing I need is all my objects being called AAA_Object 🙂; in a complex build that actually makes it very hard to achieve.
That said, I've never found it to be a problem switching the link order, so my pain threshold is different. What does the uploader do when the conglomeration of pieces is not linkable? I have frequently uploaded objects that are greater than the linkable range, and it gives me a perfectly nice cluster of items. What behaviour would you expect here? Making the centre relative to the root is a whole different ball game. There is already a long-awaited change to allow arbitrary pivots, and there are a number of issues that arise from that. Is the centre of the root prim going to be the new pivot? That is not a particularly useful state of affairs for people making single-mesh items who have been waiting for the ability to define the pivot. Given the constraints that arise from the root of the object in terms of resolution, it needs to be clear whether the pivot and the centre are the same thing (this is not of interest for linksets but may be important for single meshes). I agree that there are a lot of ways this could be improved, and I'm happy to have a go at some that make sense to me and others, but we have to be very careful too. Whenever I have made changes in the past, there have been others that wanted them reverted. The majority use case for uploading meshes is not complex multipart objects with textures pre-loaded. Having said that, "what is the typical upload use case?" Beq
  15. Not sure asking here is going to get you an answer any better than a Jira might. The llerror module is straight from LL; there are some performance optimisations, but log usage is from the Lab. The default SYSLOG setting is Critical, but it seems to be overridden by whatever your log setting is. So I guess "yes, it is on purpose", but I have no idea what that purpose was.