Beq Janus

  1. Yes, it is totally their responsibility. Governments are by definition entitled to set the laws for the citizens they govern; it is quite literally their job to do so. The Lab have first and foremost to comply with US federal law, and then (potentially) with the legal regimes of the countries their users are based in. Any system/service that allows money to be converted back and forth to other currencies and forms is a vector for money laundering. If I have ill-gotten dollars I can use them to buy Linden dollars, use those Linden dollars to buy services/products from myself, then withdraw the cash as legitimate revenue. Anti-money laundering (AML) is a mandatory practice for every business operating in the global marketplace. I would not suggest that this is the reason why texture uploads incur a fee, however; that has far more to do with putting a sensible amount of friction in the system to protect performance and storage costs in the long term. I think the question is not so much "why don't they allow it?" but more "why would they want to?" The benefit to most users and to the platform is tiny.
  2. I am absolutely certain that the Lab will be announcing the changes well in advance; they have always done so in the past. However, in order to lay the groundwork for more flexibility in the packages that the Lab provide, they have needed to add some specific new "hooks", as @Alexa Linden noted. The Lab viewer has rolled out with those new hooks in place, but with TPVs such as Firestorm, who have a less frequent release cadence, these things have to be scheduled well ahead of public announcements, and more importantly the Lab have limited control over our release cycle. We are not allowed to merge features until the Lab have them in a beta release; in practice we don't merge them until they release, or we end up constantly re-merging changes that are still in flux. Once we are ready, and we consider that enough time has passed since the last release, we begin our QA/beta cycles, and based on that feedback we may have a number of iterations until finally delivering a release. It would be hard for the Lab to plan around our schedule, so I don't find it surprising that they aren't able to put a date to things now. Ideally they will want people to have access to the new features before launching the new structures; as such these need to be seeded early to give us a chance. This is not to say that they'll wait indefinitely, of course; if FS was not able to release for whatever reason, in the end the Lab will simply push ahead, but they have to put all of this into their planning considerations. That said, the "premium plus" conversation has been ongoing in the public user group meetings for a while now. I would expect that bloggers such as @Inara Pey have mentioned these in their weekly meeting update posts. The changes were also noted in the release notes for the LL viewer: https://releasenotes.secondlife.com/viewer/ What you saw in my images was the Firestorm version of those, which we will have in our release notes "when" the time comes.
The confusion is therefore mostly my fault; I was previewing a future release feature for you.
  3. The (technical) changes rolled out this week on the LL viewer, so LL viewer users will already be seeing this. All other (maintained) viewers will follow suit in due course. I say "technical" because all that is in place now are the UI and server-side hooks for a future where there will be differentiation at the subscription level for the costs of various things. To my knowledge the plans to put these in place are in process, but they are unlikely to appear for a little while; they want to make sure that the changes themselves are well established before fiddling with the levels. I am sure a passing Linden such as @Grumpity Linden could give a more correct answer. Newer viewers will incorporate these changes and reflect the proper charges as they appear; the next version of Firestorm, for example, will. Hence you see the new text, because my changes are part of that build as well. Older viewers that do not have the changes will still work (for the most part), but you, the user, will get misleading indications of costs, because the viewer will estimate the costs based on "old knowledge" while the server will charge you the correct amounts under the new regime.
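To make the old-viewer pitfall concrete, here is a minimal sketch of the split described above: the viewer can only estimate from the fee table it was built with, while the server's charge is authoritative. All names and fee numbers here are mine and purely hypothetical, not LL's actual schedule or code.

```python
# Hypothetical sketch: an outdated client estimates fees from its
# baked-in table, while the server charges from the current schedule.
# Fee values are invented for illustration only.

OLD_CLIENT_FEES = {"texture_upload": 10}   # what an outdated viewer "knows"
SERVER_FEES = {"texture_upload": 15}       # the (hypothetical) new schedule

def client_estimate(action: str) -> int:
    """What the old viewer displays: only ever an estimate."""
    return OLD_CLIENT_FEES[action]

def server_charge(action: str) -> int:
    """What you are actually billed: the server is authoritative."""
    return SERVER_FEES[action]

action = "texture_upload"
print(f"viewer shows L${client_estimate(action)}, "
      f"server charges L${server_charge(action)}")
```

The design point is simply that the displayed number and the billed number come from two different sources, so they can disagree until the client is updated.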
  4. To drive home the "it depends" with an example: a single fully utilised 1024x1024 is typically preferable to 4 separate (equally populated) 512x512s (which give you the same pixel real estate); however, in practice you may want to separate out textures onto different mesh faces to facilitate colour variations or other such customisations. Consider the furniture example from Chic, perhaps; it really depends on how you expect the product to be used. I could use a single baked 1024x512 because I know that the product is to be present in a fixed set of configurations, but I might equally split it and have a 512 of wood/metal trims and cushions alongside a 512 for the rest of the upholstery; this gives me a little more flexibility and arguably gets better reuse. Imagine a scene where I have two or more armchairs with the same upholstery but different cushions (nobody ever said I had good taste 😉 ). In this case the viewer would be able to reuse the common upholstery texture shared between the armchairs in the scene and only have to deal with the two smaller "differences", compared to a single bake of a 1024x512 which would require a new (unique) texture download for each variant. Note too that due to the way things are rendered it is good practice to ensure that any transparency is kept on a separate texture face from things which are fully opaque. The thing to avoid, and where the common cry over excessive texture load comes from, is using a higher resolution texture where it is never (or rarely) going to be seen, and/or wasting space in a UV map so that you need two texture sheets where with a little bit of packing you could have achieved one. My favourite (bad) example is a nipple ring I own that is plain gold. It has a 1024x1024 "gold" texture, plus a blank 1024x1024 normal map (required only for the alpha channel) and a 1024x1024 specular map.
All of this in an item that is little more than a few centimetres in size and (with a few exceptions) is not going to get more than a passing glance at the best of times.
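The armchair reuse argument can be made concrete with some rough arithmetic. This is a sketch under my own assumptions (uncompressed RGBA at 4 bytes per pixel, mipmaps ignored, though they add roughly a third in practice); it is not viewer code.

```python
# Rough in-memory texture cost for the two-armchair example:
# option A bakes one unique 1024x512 per variant, option B shares a
# 512x512 upholstery texture and adds one 512x512 cushion/trim texture
# per variant. Assumes uncompressed RGBA (4 bytes/pixel), no mipmaps.

def texture_bytes(width: int, height: int, bytes_per_pixel: int = 4) -> int:
    """Uncompressed in-memory size of a single texture."""
    return width * height * bytes_per_pixel

MiB = 1024 * 1024
variants = 2  # two armchairs with different cushions

# Option A: each variant carries its own full 1024x512 bake.
single_bake = variants * texture_bytes(1024, 512)

# Option B: one shared upholstery 512 plus a smaller 512 per variant.
split_bake = texture_bytes(512, 512) + variants * texture_bytes(512, 512)

print(f"single bake : {single_bake / MiB:.1f} MiB")  # 4.0 MiB
print(f"split bake  : {split_bake / MiB:.1f} MiB")   # 3.0 MiB
```

The saving grows with every extra variant sharing the common texture, which is the reuse point made above.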
  5. It was drawn to my attention today that a proportion of content creators are operating under a misguided understanding of how textures work in Second Life, and when I explain why, you'll understand how this came to be. I should note that this "revelation" was an eye opener to me, one of those biases we all have whereby we end up assuming people know something because we know it and believe that knowledge to be universal. So let me put this problem to rest. In Second Life, textures are lossily compressed. That's it, goodnight. OK, so that is not the full story, but it really is the truth for almost every texture you upload/download in SL. Textures in Second Life are compressed by the creator's viewer at the time of upload. They are compressed as JPEG2000 using a lossy algorithm; this means higher compression and lower download times, at the cost of fidelity. Many of you are nodding and know this to be the case, but it would seem that a number of people don't realise it. Moreover, they believe that their textures are uploaded as lossless. "Wait, what? Why?" I hear you cry. Well, here's the problem. Oh look, there's a 512x512 texture about to be uploaded. Underneath it we see a nice little checkbox. It is disabled, so we can't actually do anything with it, but it has a tick in it, so *yay*, we're going to get lossless uploads, right? Right? Nope. Sadly it has been that way for about 12 years, so long that I've never given it another thought until today. The lossless upload option applies only to small images (see below for clarification on "small"); it was introduced to help avoid compression artefacts making a mess of sculpt maps, and the option never applied to large textures. Unless you happen to know why that box is there, and the history of it, you could quite reasonably surmise that that tick box is telling you the exact opposite of what is really going to happen when you click upload.
So what are the facts? 1) Images with 16K (16,384) pixels or less (that is, a maximum of 128x128, 256x64, 512x32 or 1024x16) are optionally uploaded as lossless. 2) Images with more than this have no option; they are always lossy, no ifs, no buts. <--- this means pretty much every texture you are likely to use. What is being shown to you is actually a global setting in the viewer that means "I would like small images to be uploaded lossless please". It means nothing more. It is shown as disabled because, back in the day when this was implemented, the developer making the change decided that disabling it conveyed the meaning that it was not applicable to this texture, an ambiguity that has persisted forever, it seems. As of the next Firestorm release, this option will no longer appear for large images, to which it can never apply. I have also raised a Jira with the Lab to get a fix for this into their viewer. Other viewers may already have cleaned this up (I've not checked). The tick box will still appear for small images only, and you'll be able to toggle it for them. Hopefully this will help rid us of this misinformation. In the meantime, if you hear this falsehood being perpetuated, feel free to correct the individual; you'll be saving them a lot of heartache and head scratching.
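The size rule in the two facts above can be sketched in a few lines. The 16K pixel threshold and the example sizes come from this post; the function name is my own, not anything in the viewer source.

```python
# Sketch of the lossless-eligibility rule: the option can only apply
# to images of 16,384 pixels or fewer; everything larger is always lossy.

LOSSLESS_MAX_PIXELS = 128 * 128  # 16,384 pixels

def can_upload_lossless(width: int, height: int) -> bool:
    """True if the lossless-upload option can apply to this image at all."""
    return width * height <= LOSSLESS_MAX_PIXELS

for w, h in [(128, 128), (256, 64), (512, 32), (1024, 16),
             (512, 512), (1024, 1024)]:
    verdict = "optional lossless" if can_upload_lossless(w, h) else "always lossy"
    print(f"{w}x{h}: {verdict}")
```

Note that all four "small" shapes pass because they contain exactly 16,384 pixels; the everyday 512x512 and 1024x1024 textures fail the test, which is the whole point of the post.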
  6. Your wish is going to come true-ish in the near-ish future. LL just promoted the benefits viewer changes. This is a set of changes that will allow them to vary the fees for certain activities based on an individual's subscription level, and it so happens that image uploading is one of those tweakable things. So you can now choose to upgrade your membership and get cheaper texture uploads (and other perks), or stay as you are and, uhm, not. Note that the changes are live (in the LL viewer) but the fee adjustments and other aspects of this are not yet in place.
  7. Ha! Yes, I never spotted that, good spot. I can explain why it happens. It is because people rely on "old mesh asset" rules too often, and then it bites them in the bum. For each component in a link set in the HIGH model, the viewer will try its best to find a matching model in each of the object sets passed for the remaining LODs. It prefers to do this by following explicit naming rules: if a link in the HIGH is called "donkey", it will look for a link in the medium called donkey_LOD2, in the low called donkey_LOD1 and in the lowest called donkey_LOD0. This naming is also the only way to get physics to associate properly with linksets (donkey_PHYS). If that fails, then it falls back to legacy matching, which is an expectation that the linksets are in the same order. As Rey noted, this was almost certainly not the case here. Animats notes that you are not always guaranteed the same results; this is true, and in practice pretty random. There are technical reasons why that are beyond this thread really (it is to do with the storage method used in the viewer), and there may be extenuating circumstances that make it worse/more likely. I may get back to have a round two of "improve the uploader" sometime. I do have a nearly working Blender addon that does the exports, but it's sitting in my todo queue behind viewer stuff and RL.
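The two matching strategies described above can be sketched roughly like this. Function and variable names are mine for illustration; this is a simplified model of the behaviour described in the post, not the actual viewer source.

```python
# Sketch of LOD link matching: explicit name matching first
# (donkey -> donkey_LOD2 in the medium file), then a fallback to
# legacy positional matching when no explicit name is found.

LOD_SUFFIX = {"medium": "_LOD2", "low": "_LOD1",
              "lowest": "_LOD0", "physics": "_PHYS"}

def match_lod_links(high_names, lod_names, lod_level):
    """Map each HIGH link name to its counterpart in one LOD file."""
    suffix = LOD_SUFFIX[lod_level]
    matches = {}
    for index, name in enumerate(high_names):
        explicit = name + suffix
        if explicit in lod_names:
            matches[name] = explicit            # preferred: explicit naming
        elif index < len(lod_names):
            matches[name] = lod_names[index]    # legacy: same link order
        else:
            matches[name] = None                # nothing left to match
    return matches

# "donkey" matches explicitly; "cart" has no cart_LOD2, so it falls
# back to position 1 and silently picks up "wheel" instead.
print(match_lod_links(["donkey", "cart"], ["donkey_LOD2", "wheel"], "medium"))
```

The example shows exactly the failure mode in the post: without explicit names, a reordered linkset pairs the wrong models together.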
  8. Out of interest, @MSTRPLN and @Chic Aeon, am I correct in noting that the model does not appear to be triangulated in 3DS (and in the cases where you have observed such things, Chic)? In Blender, you can choose to triangulate in-app, triangulate on export, or just leave the quads/ngons and let SL sort it out. I have found that on occasion SL's attempts to triangulate can be bizarre, so I typically triangulate before I export so I know where I stand. I have no idea how the 3DS workflow handles this, but could it be that you are hitting that same issue, with SL triangulation going off on some weird adventure?
  9. There are a lot of variables in performance, so it is going to be a case of going through this bit by bit. A few things to double check first, though. 1) If you have anti-virus and anti-malware software, ensure that the cache folders for the viewer are white-listed. Ask in the Firestorm support group in world about white-listing and someone there will help you and point you to the right info on our wiki. You mentioned choppiness, and this is frequently the cause, because every time the cache gets updated your virus software decides it has to go and inspect it. If there were ever a #1 culprit for people finding that things suddenly deteriorate, it is probably that. 2) Do ask in Firestorm support too; they are far more experienced in helping people troubleshoot various faults, and more interactive than the forums too. I am not all that familiar with AMD and Radeon, so I cannot tell what the expected performance/capability is, but with your settings down that low there's something wrong. It is a common misconception that turning off atmospheric shaders and advanced lighting will help; it can often have the opposite effect. You should compare a session with and without those enabled (leaving all other settings alone) and see how you fare. Having them on or off is important if you think the bottleneck is in the GPU; in your case you are forcing more on to the CPU, and if this happens to be the bottleneck then you are not helping matters. All that said, a quick Google would suggest that your hard drive, which is quite slow, is the weakest point. Given that you have spare RAM, you might want to consider testing out a RAMDisk and putting your cache on that. It would save you spending on an SSD and finding that it still made no difference.
If you fancy trying this, I would read https://www.ghacks.net/2017/04/03/the-best-free-ramdisk-programs-for-windows/ or one of the other similar threads out there in the interwebz, set up a RAMDisk of say 3GB (make sure your cache setting in the viewer is smaller than that) and assign your cache to the new drive (remember the white-list exclusions when you move the folder, though). If this helps, then you can proceed with a bit more confidence that the HDD is one of the issues. Regards, Beq
  10. Peculiar... How long are the rails? If you import the DAE for just that LOD into 3DS or another 3D program, does it look fine?
  11. As has been noted, the actual improvement to your complexity numbers depends somewhat on whether the mesh creator has entered into the spirit of BOM and actually taken steps to reduce the onion layers. Where they have done so, your complexity will decrease. I switched from my legacy Slink body to the Slink Redux and saw a nice improvement, with the added benefit of having perfect control over the alpha now (should I choose). However, I think that the reduction to your complexity is a small reward for "doing the right thing". The long term benefit to people moving to BOM is that rendering a scene with lots of avatars in it becomes far less work for the viewer. In moving to BOM, with a more efficient BOM body/head, you are contributing to the greater good of everyone shopping/dancing with you. The complexity numbers do not tell the whole story; while we can argue back and forth about what the "correct" complexity is, we need to accept that it is really just a guide, and the actual benefit to an individual depends on their computer and video card, the amount of RAM, etc. What we can say with some certainty is that with BOM the viewers that are drawing you will have to do less work to render you. A pre-BOM mesh body is made up not only of multiple mesh layers, but also of tiny fragments of mesh that are organised to allow you to selectively hide bits of the mesh by making them invisible. As a result a typical pre-BOM body is made up of tens of different meshes. Thus you have 2 distinct aspects of the mesh body/head that affect your complexity: the alpha slicing and the onion skin layering. BOM addresses both of these if your mesh creator chooses to fully embrace it. Sidean from SLink went all out, and the Redux body models (which are offered alongside the legacy pre-BOM models) use the bake system to reduce the onion skins required and at the same time use the bake system "alpha" masks to eliminate the need for sliced and diced meshes.
As a result the body retains the same mesh structure and shape, but dramatically reduces the drawing overhead. There is an added benefit too: the alpha can be precisely defined by a clothing creator, giving far more freedom than the old legacy models had. In the old system a designer had to ensure that the lines of the clothing followed the body segments, so that when setting an alpha you didn't have a hole in your body on show. Other "BOM" body models have for the most part just allowed BOM textures to be applied, leaving their avatars just as complex as before. As a result they are not going to give you the same kind of reduction in complexity, in fact probably none at all. I suspect/hope that this will be an interim step while the creators do all the rework that is required. It is worth noting too that while SLink have stolen a march here and really set the target for BOM bodies, the existing pre-BOM "legacy" SLink models are still shipped alongside, and there is good reason for this.
BOM pros and cons:
PROS 1) Far lower rendering overhead; less work means more frames per second and lower lag (but realistically only when it is widespread). 2) Perfect alphas, but only when your creator makes one or you create one. Creators can and will provide these, especially as demand grows; many never stopped. Those that support standard sizing and non-mesh avatars, and those that remember pre-2011, will know all about this. 3) No more texture hogging, laggy HUDs that eat up all your resources...
CONS 1) No simple auto alpha; outfit folders can get you some of the way, and RLV can help too (if you have it). 2) No HUDs means no fat packs of appliers with a simple UI.
Hopefully Maitreya (as the clearly dominant female body at present) will follow in the footsteps of SLink and allow people to choose between old and new depending on their needs; more people would then have a chance to enjoy a new, slimmer, more lightweight Second Life in the new year 🙂.
At the moment I can choose to wear the original SLink models where I need appliers or don't have time to make an alpha etc., or I can use the BOM edition when I want to feel less of a social burden and reduce the impact I have on others at shopping events etc. Beq x
  12. It's one way to do it, and certainly not a bad way. The Medium LOD is typically the one that is visible to most of the people most of the time; as such, it is worth making sure you get it right.
  13. Not saying that this is right or wrong, but for something like the object here I would tend to do the following. For the medium LOD, remove all the horizontal surfaces; a few metres away, when it drops to medium, those are too small to warrant inclusion. You need to be the judge of what can and can't be removed, but be ruthless. For the low LOD, I'd drop to a single column, and have a fight with myself over whether it should be the thickness of the base and cap or the pillar, and probably settle on the pillar. In an object this simple, though, I'd probably decide I didn't care so much and leave the low and the medium as the same 24 triangle object. For the lowest LOD, an 8 triangle box is going to be fine, but you could go the whole hog and use the 8 triangles in a cross (when seen from above) and then use an imposter texture; use the LOW/MED as the model for the imposter though, so that the lowest is not showing detail that was not present in the preceding LOD. Honestly though, making the viewer download a texture, no matter how small, and carrying the extra triangles in the other LODs to support it, is not (in my opinion) the right trade off in this specific case; I'd just go with the box.
  14. In time I will revisit the uploader. Those messages aren't from Firestorm; they are the mesh asset validation errors sent back from the server. They are put in the log because at present there is no simple route back to the mesh upload dialogue from the arbitrary callback that is triggered. It is not impossible to plumb them in, but I've not looked into it. Ideally it could be something that @Vir Linden could sweep into the changes they are making. My reorganisation of the mesh uploader with the resizeable preview paves the way for more verbose messages; there is space on the left, if needed, to place a scrollable widget with all the validation failures. I'm wary, though, of making the uploader easier to abuse with poorly constructed mesh; every change we make should steer us towards fostering a better creation culture.
  15. OK, so... we're all kinda right. @animats is right that the viewer is able to generate mesh that the user is not (currently) allowed to upload, kind of. Yours truly is right that it is impossible to upload a mesh asset that does not have matching material counts for each LOD. @Aquila Kytori is on the right track: the "unused" materials are assigned a placeholder. The placeholder is a single triangle of zero size, which somewhere along the way must be decaying to a single vert in the export. I'm not inclined to chase that squirrel just yet. Here is what is happening at upload. I've documented the mesh asset format on my blog in the past, pulling in the (essential reading) prior art of @Drongle McMahon; see my blog post "When is a triangle not a triangle" for discussion and links off to other distractions. Skimming through, I think it still holds true to what I know now (it was written before I was a viewer dev). The mesh header prefixes the model and defines the global parameters, including the set of materials to be used. That array of materials is fixed in the header, and that is why I can be so certain of my assertion that each LOD has to have a placeholder array; without it, if material 6 was missing, the viewer would see material 7 as 6, 8 as 7, and fall off the list looking for 8. The way we always deal with this as creators is to insert tiny single triangles to cover the missing material slots. My assumption (which I am about to prove incorrect) was that the viewer did that on upload if GLOD decided to eliminate a material entirely. In fact, the viewer does something a little unexpected: it creates a special placeholder label.
000005 [ ARRAY (8 elements)
000010   { MAP (len:1)
000025     Key:NoGeometry
000027   }
000032   { MAP (len:1)
000047     Key:NoGeometry
000049   }
I've put a full annotated decoding of my 8 material plane (similar to the one Aquila used) at https://pastebin.com/W7uN3UBu The entry of a key with the string "NoGeometry" creates an entry in the mesh asset for that material, thus fulfilling my requirement above; there is no way for a user to create this themselves. So... that's all very good, Beq, but what about @Aquila Kytori's stray "vertices"? Well, that happens at the other end of the lifecycle. When the viewer is decoding a mesh asset ready to present it to the pipeline, it faces a similar problem to the issue expressed before. You really, really need to have actual mesh to render, or it would require all kinds of additional sanity checks and edge cases, so it creates some dummy geometry to act as a sentinel. Sentinels are a not uncommon way to simplify bounds checking; software engineers may well have seen them in many algorithms, akin to an "Elephant in Cairo". Here is the viewer, placing its elephant: "It's a triangle Jim, but not as we know it". The viewer creates a triangle composed of three indexes into the vertex array, each entry set to 0. The vertex array is of length 1 and has coords (0,0,0) in the normalised mesh domain (-32768 to +32767), i.e. the origin, for both positions and normals, as well as (0,0) for the UV. Conclusion: we are all a little wiser than we were, I hope. There is indeed a path that the human creator cannot access; however, that path does not allow them to avoid the materials. The question is, how would the user access this? The viewer can do it because it "knows" which material it has eliminated and thus uses a placeholder. The placeholder is really no different to what we do manually, though arguably a little more compact.
In order to do this in the manual case, the viewer would need to be able to follow the unordered material lists through the discrete models and work out which ones were missing. It sounds doable; I effectively do this in my Blender addon, as shown here. The addon (yep, it will be up on GitHub as soon as I get a chance) has been used here to auto-populate the lower LODs from the original 8 material plane. I have then manually reassigned the materials on a couple of faces in the lowest LOD in order to trigger the error shown. The "caution" marker is highlighting the fact that no geometry is present for the expected material in that model. I will have a "fix me" button at some point, which will create an arbitrary triangle, but it would be better if we can come up with a solution that allows the uploader to recognise this without making it harder for users to debug their genuine material cockups. I need to think more about it and consider the inverted cases, such as imposters, where you effectively have a null material in the High that acquires mesh data in the lower LODs. For the terminally curious, here's a short video of me fiddling with that in SLender (the addon). Quite appropriately, the video shows a bug at the end, as the High LOD shouldn't have shown an error :-) https://i.gyazo.com/9950f318cd3d27fa36d1bf14e0a10b20.mp4
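The placeholder scheme described in this post can be sketched in a few lines: the header fixes the material list, every LOD must supply one entry per slot, empty slots get a "NoGeometry" marker, and the decoder later swaps that marker for sentinel geometry at the origin. All names here are mine; this is a simplified illustration, not viewer source.

```python
# Sketch of the NoGeometry placeholder behaviour. Each LOD must carry
# one entry per material slot declared in the header, otherwise later
# materials would shift down (a missing material 6 would make 7 appear
# as 6, and so on).

NO_GEOMETRY = {"NoGeometry": True}

def fill_material_slots(present, material_count):
    """Return one entry per header material slot.

    `present` maps slot index -> geometry entry; slots with no geometry
    get the NoGeometry placeholder so every material keeps its index.
    """
    return [present.get(i, dict(NO_GEOMETRY)) for i in range(material_count)]

def decode_entry(entry):
    """On decode, swap a NoGeometry placeholder for sentinel geometry:
    one vertex at the origin and one degenerate triangle indexing it
    (the viewer's "Elephant in Cairo")."""
    if "NoGeometry" in entry:
        return {"positions": [(0, 0, 0)], "indices": [(0, 0, 0)]}
    return entry

# A lowest LOD that only kept materials 0 and 7 of an 8-material plane:
slots = fill_material_slots({0: {"indices": [(0, 1, 2)]},
                             7: {"indices": [(0, 1, 2)]}}, 8)
print(sum("NoGeometry" in s for s in slots))  # 6 placeholder slots
print(decode_entry(slots[3]))
```

The key design point is that the placeholder preserves slot indices in the fixed header array, while the sentinel keeps the render pipeline free of "empty material" edge cases.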