
Beq Janus


Everything posted by Beq Janus

  1. Nothing to do with the meshoptimiser. I would assume that it is the new bone weights list in the uploader, which has to be populated, and by default we now have "show weights" auto-enabled for rigged mesh. I would suggest two steps, neither of which addresses the underlying problem, but they may help ease the pain: 1) Turn off the auto-enable weights and the auto-preview weights options on the upload preferences panel. This means the weights panel is only filled when you deliberately enable weights. It does not make the problem go away; you will still have to wait for the list to be filled, but it will be at a moment that you choose, which may make it slightly less inconvenient. 2) Simplify your meshes. Most reports I have seen suggest that people are trying to use meshes with 100K+ triangles. This is right on the limit of upload capability, and if loading is so slow that it is hanging, then you might want to consider re-topologising and reducing the triangle count. That is not in any way intended as a brush-off, just a simple statement that the amount of work the viewer has to do here is proportional to the mesh complexity, and that same load issue can be extrapolated to rendering performance too; i.e. a slow load here is likely an indicator of slower drawing inworld. Over-complex or not, though, I would like to help do something about this, and I would love to have some example meshes that cause the slowdown/hang so that I can work on optimising this for a future release. If you have an example mesh that causes you to hang and are willing to share it so that I have a test mesh, then please either raise a Jira or contact me. Beq
  2. It is undoubtedly a very tricky road to go down. It would be a very tricky and, I suspect, ultimately destructive path, as you say. As soon as payments get involved things get complicated: people have expectations and demands (not that some people don't already treat TPVs as if we owe them something lol), and a proper commercial relationship between a TPV and LL would change the entire working relationship. I don't think you meant it explicitly, but just to be clear, as it is raised from time to time: there is no "firestorm fund" per se. As a registered non-profit that line is very carefully trodden or things become extremely complex. We've had occasional fundraisers in the past; these typically involve a short-term sale of some item of merchandise in order to raise very specific targeted funds, and those are to cover out-of-pocket expenses such as the web servers and certain commercial software licenses that we otherwise have to fund ourselves. Not one penny passes to any individual in the FS team. This is a central tenet of how FS operates; it avoids complicated taxation (consider that the team is international, but the non-profit has to exist in some legal regime - for FS it is as a US non-profit), which would otherwise mean accountants, a host of legal issues, and perhaps most importantly "entitlement". We'll all have seen posts in group chats or here or elsewhere, from time to time, from users who clearly feel entitled to some bug fix or other; if any payments are ever involved then people start to feel that they have rights that they simply do not have, and that, as an entirely volunteer organisation, we (and many other TPVs) could never meet. @NiranV Dean quite famously makes it clear on his website that his viewer is exactly that, "his viewer", and nobody but he gets to call the shots. When payments get involved that independence is increasingly hard to maintain, and as such it is avoided.
Having said that, in general, when LL ask for a contribution the TPV will do so if it is possible and simple enough to be worth the extra effort. However, as I have discovered, this "extra effort" can be far more demanding than writing a given feature in the first place, and in some cases, due to inter-dependencies on another developer's code, it can prove impossible to give the lab what they need in a legally acceptable way. There are some TPV devs who refuse to sign a contribution agreement (which is an entirely valid position - it does require you to give over personal details and technically allows the lab to approach your RL employer, which may not end well). There are, of course, former TPV devs who are no longer contactable for all kinds of reasons. And yes, with the viewer code being open source you'd think that it was just free to pick and choose, and for other TPVs this is to some extent true. We TPV devs and TPV projects in the wider sense have personal codes of conduct and general agreements whereby we don't typically use another TPV's features without them having publicly released them or granted explicit permission. It is only fair, after all, that they should get the credit for their innovation, and we ours. Most (but not all) TPVs follow this. It is of course also important that attribution is given; you will often see on commits from myself, Ansa etc. on FS, credits for code coming from Rye (Alchemy) or Niran etc., and they likewise include us in either their comments or commit messages or both. (We also credit the lab of course; every FS release note has an extensive section that details those fixes and features that come from the upstream source.) For the lab though, as a commercial entity, it is different: they should never lift code directly from the open source domain; it is risky and the lawyers would be severely displeased.
There are many issues with them doing this, but mostly it ties their hands should they ever want/need to change the licensing terms or sell the rights to the code, only to find that someone asserts ownership of some key component and blocks them. The lab are not alone in this by any means. In my RL I have worked for large international corporations that have similar policies and thus similar struggles (I have had to have team members stay away from the office due to HR mess-ups where their contract renewals have not properly overlapped); every line of code that enters a corporate code base has to be accounted for if they are ever to be able to assert ownership. When we as TPV devs sign the contribution agreements we have to give up all rights to the code we contribute and demonstrate that we are in a position to do so (hence the note above about potentially contacting your RL employer). Anyhow, that was a very long way of saying that I agree with Niran and that the paid-for contribution path is a minefield. There are ways through that minefield, but it almost certainly involves employing a dev directly, and that may not be feasible for many different reasons. It would also remove that person from the TPV for a period, and given that there are remarkably few of us, that may not be in the best interests of things.
  3. Then back out the change locally. You are choosing to build from a live development branch, not from the last release branch; as such, you are exposed to all kinds of daily churn. You also have the freedom to ignore any updates you don't think you need.
  4. It would be a pretty lame grief if it were, but no, the example you give would not work in the way you describe, because the optimisation works for rigged mesh and transparency overlay does not work on rigged meshes anyway. I think when you say material, you mean mesh/submesh (i.e. a parcel of stuff for the GPU). In all cases, if you are drawing less, then the frametime will drop, absolutely. Look at the example I showed of the multipose mesh feet; look at the trend in the last year or so towards multistyle hair. These are not the massive disaster that you would expect, purely because we side-step the issue with the "transparency trick". Would I advise going down that route...? It depends. In most cases there are more efficient means to achieve a result. I would look into how you can achieve the same without that; be inventive. In the long term, once we've all got rid of the 1000lb gorillas on our backs, I may well be whining about how unnecessary transparent meshes are causing cache churn and thus slowing things down. Everything has an impact somewhere. Keep in mind that frametime lag is not the same as lag in the broadest sense. As I noted in the blogs, I am measuring only the outright render cost, the most direct impact on the FPS. If you want to consider the time it takes to download, to cache, to unpack, then that extraneous mesh will have an impact (the file is bigger after all). Those aspects do not, however, directly impact FPS because they are not conducted by the main thread. And I want to be clear on this. Just because I showed that the triangle cost is not a driver of the CPU render time, that is not a green light to slam things full of triangles. Far from it. Triangles do have an impact (especially on the low-end machines, as my graphs indicate); they are just not detectable because the drawcall load is such a heinous thing.
Once we remove those overheads then the smaller costs will show up more clearly and may well nuance "efficient" from "very efficient". And while I know that this is now talking about something else entirely, please keep in mind that the drawcall burden is a large part of why almost everyone is CPU bound in Second Life (that is to say that, for a typical scene with avatars in, your GPU is not working as hard as it could because the CPU is too choked up with pushing boxes of body parts around to keep it busy). Once we get rid of that bottleneck then we're likely to see more people constrained by GPU limits, and one of the worst offenders there is "overdraw" (painting the same pixel more than once), which is caused by many factors and of which tiny thin triangles are a big source. We are getting deep into "future problems", and they'll be nicer problems to have in that regard as we should (hopefully) be in a generally happier place. 😄 No. Not in anything like the same way. The reason that the submesh diffuse colour transparency test works so well is that no matter how large or small the mesh, it is a single value test: "Is the face alpha set to 0.0?" You cannot make the same assertion for arbitrary textures; we would have to check whether every pixel was transparent to draw the same conclusion. What we could do is test a known UUID such as the default transparent texture (somewhere in a thread over the last week or so we examined a Jira that spoke of this). We could... but why pollute things? If we have one perfectly good "off switch", we'd need a strong use case to want to start adding more variants. Never say never, but a compelling reason why the existing method is not adequate, and proof that a new method has no unexpected gotchas (always a worry), is needed. Finally, and I use this as a summary of all the above: yes and yes.
The key term here is "without much guilt". I honestly do not feel comfortable suggesting that having invisible meshes around is a good thing. In the end it is not; if you can find a better way, please please please do. However, they are not leading to anything like the kind of problems the slices cause. In the end, it would be better all round if we didn't have invisible floaty stuff being worn; it'll bite us on the bum in the long run, no doubt. But if you need to pick one then it is by far the lesser of two evils. Also, all things in moderation. A few slices here and there to enable some cool feature is fine. Hundreds of them for no real benefit is (I am arguing) very much not fine.
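To make the single-value test above concrete, here is a minimal sketch (hypothetical function names, not the actual viewer code) contrasting the cheap per-face alpha check with what proving an arbitrary texture invisible would require:

```python
# Hypothetical sketch (not viewer code) of why the per-face alpha test
# is cheap while a per-texture test would not be.

def face_is_invisible(face_alpha):
    # Single-value test on the texture entry's colour alpha,
    # the value set by llSetAlpha(0.0, face) in LSL.
    return face_alpha == 0.0

def texture_is_invisible(pixels):
    # For an arbitrary texture we would have to prove that EVERY
    # pixel is transparent: O(width * height) instead of O(1).
    return all(a == 0 for (_r, _g, _b, a) in pixels)

print(face_is_invisible(0.0))                  # one float comparison
print(texture_is_invisible([(255, 0, 0, 0)]))  # a full pixel scan
```

The asymmetry is the whole point: the face test costs the same whether the mesh has 10 triangles or 100K, whereas the texture test grows with resolution.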
  5. That's certainly not a requirement imposed by the uploader. Do you have an example screenshot to show us what you are trying to achieve? To answer your original question, how to create a physics model for a complex model: the basic answer is to box out the areas that you need to be able to sit upon/hide behind. Keep things as coarse as possible; SL will never properly react to a physics shape more accurately than about half a metre in any reliable sense. I have an example image from a couple of years back. This was a statue-type model, a large snow sculpture for a fun snowman competition. I wanted to be able to sit on it and hide behind the legs etc. But it also had a land impact budget, and that dictated a very economical physics shape. I tried various sets of hulls and boxes, and in the end the best fit for a balance of physics silhouette and land impact was the following mix of planes and simple mesh hulls (you cannot see the back-facing panels because they get culled by the shader):
  6. Its sole manifestation at present is the performance floater project viewer. I would recommend having a look and giving feedback on the UI; that's what project viewers are for. I hate a couple of aspects of the UI and really like some others. The content of it is, as I've covered, so very flawed; let's not go there, I can feel my blood pressure rising 😉
  7. Yes, indeed. I was referring to the linear change in frametime; FPS, being the reciprocal of frametime, does not change linearly with it.
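To illustrate that relationship with plain arithmetic (nothing viewer-specific here): because FPS is the reciprocal of frametime, the same frametime saving produces very different FPS changes depending on where you start.

```python
# FPS is the reciprocal of frametime, so equal frametime savings
# produce very unequal FPS gains.

def fps(frametime_ms):
    return 1000.0 / frametime_ms

# Shaving 5 ms off a slow 50 ms frame:
print(fps(50), "->", fps(45))   # 20.0 -> ~22.2 FPS
# Shaving the same 5 ms off a fast 20 ms frame:
print(fps(20), "->", fps(15))   # 50.0 -> ~66.7 FPS
```

This is why frametime, not FPS, is the honest unit for comparing render costs: a change that looks dramatic at high FPS can be the same amount of work as one that looks negligible at low FPS.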
  8. Pretty much, yes. That was, to some extent, the surprise to me. Based on what we are all used to hearing, I expected to see something less than linear, because I was expecting to see a larger impact from the triangles, and thus as the tris per drawcall reduced you'd see a "trade-off". The only time I see any indication of that is on the laptops without dedicated GPUs, and even then it only held true for the first few.
  9. Indeed, for now that is the case. That does not make it right. The reason I say it is worse than useless is that it quite literally makes people do the wrong thing on many occasions. There is a separate thread in the viewer forums where I shared a snapshot of the so-called worst offenders based on their ARC alongside their actual render time on the machine I was running on. There was next to no correlation between the two. Here it is. The above image shows the Beq-hacked version of a new tool inside the viewer that the lab are planning to release. In mine I have altered the white line graphs to show the actual rendering time, but it remains (in this image) sorted by ARC; you can very clearly see that ARC has no bearing on actual FPS performance. Here is the Lab's original, same UI, just that the white line is indicating the ARC instead. Sorry for the blur; I cropped it from a video clip I made. Notice here how by sliding the, uhm, slider we are "causing the top of the list people to be jelly dolled"? Great idea, in theory, but when you look at the image from my version, where the white lines are the true render cost, you can see that by jellydolling the top ARC people you are 1) de-rendering the innocents and 2) not really achieving the intended result. In fact, in my example, just removing that one FPS hog person at the top would have saved more FPS than jellydolling the top 6 or maybe 7 using ARC, and with the ARC option you'd still be left with the worst offender. You might as well just pick avatars based on their hair colour. I want to be clear here though. The tool itself is reasonably well formed and the intention is great. I like the UI shown above; I am now lobbying and asking LL at every opportunity not to release it until there is something better than ARC in place. I will be updating my blog with my progress on what I hope will be an improved version of this tool.
The trouble is that ARC is not just for the jellydolls; it is used elsewhere. People such as yourself use it to decide if their products are good (which is a really great thing, we need to take care over such things), but giving people the wrong advice is depressing and counter-productive. We desperately need tools. What we don't need is more badly misleading tools.
  10. The viewers test the "texture entry" attribute's colour transparency value. That is essentially the "Transparency" box on the build/edit floater, or the texture face alpha property in LSL terms: llSetAlpha(0.0, face); So in that regard it won't affect the complexity. Complexity itself is a worse-than-useless value, and while I realise that (too) many people still consider it the arbiter of good-quality products, it is ever so badly flawed, so don't place too much stock in what complexity says.
  11. Part 2 just landed - https://beqsother.blogspot.com/2021/09/find-me-some-body-to-lovebenchmarking.html It's not as easy on the eyes as the last one. Pretty much. No major overhaul is likely until we get a completely new pipeline
  12. I've been meaning to get this all written up for a while now. Here is a two part blog on why we need to ditch alpha cuts and start making alpha layers again. The first part should be consumable by most. The second is more of a deep dive into the numbers and probably less interesting to most. part 1 - https://beqsother.blogspot.com/2021/09/why-crowds-cause-lag-why-you-are-to.html part 2 - https://beqsother.blogspot.com/2021/09/find-me-some-body-to-lovebenchmarking.html
  13. Part #1 of my blog is finally up. I'll start writing up the more numbers-based second part today.
  14. 😄 For the most part the shaders in FS correspond directly to those in LL. There are some cases where we have small tweaks and changes, typically either to fix a bug/problem that has not been addressed upstream or to maintain some feature or other that FS users want us to hang on to (and in fact the vast majority of those things tend to be outside the pipeline). If there is a repeatable use case I'm happy to look at it. Raise me a Jira to chew on. Outside of hard fact I cannot investigate speculation, or at least it is not generally a valuable use of my time to try to. The biggest difference in shaders introduced with EEP is in the water rendering, where the new refraction and reflection rendering is very costly in comparison to before (it is arguably also more accurate - but whatever). If you are seeing problems since EEP you might want to look at turning down the reflections in your settings and see if that fixes it. Try opaque water too.
  15. All of the above. At the pipeline level, every texturable face is a separate "mesh"; unfortunately the word gets overloaded here, so at times it can be hard to know what we are referring to. When I talk about a mesh for the rest of this reply, I will be talking about a single texture face on an object. Every rigged mesh is processed separately. The render pipeline is constructed of multiple passes, and different types of mesh need to be in different passes; if you have alpha blend transparency you go into a different set of passes than when you have an opaque texture, for example. The implication of this is that every mesh gets dispatched to the GPU separately. It is this "dispatching" that I refer to as a draw call. Because the drawcall overhead is so high, the more of them you have the slower things go. I will write up a full explanation tomorrow if I can. It is 1am now, so time to sleep, not to start a deep technical dive 🙂 That Jira is actually on about something slightly different. I also happen not to believe that IMG_INVISIBLE does anything valuable at present; it is mostly used within the bake system, not more generally. However, all the viewers, certainly all the TPVs, have code that will drop fully transparent "meshes" (see above note on what I mean by mesh) before they get rendered. This does not fully eliminate the overhead, but in rendering terms it mitigates the vast majority of the cost. This is not really the case. SLink Redux uses BOM exclusively; it has no alpha slicing and yet it supports breast and buttock mods quite happily. Such meshes, which are enabled/disabled through transparency, fall into the same category as multipose feet and multistyle hairs. Which is to say that so long as the unused ones are fully transparent, the worst of the performance issues are avoided.
I want to be very clear here that should some magic wand be waved and all of a sudden these disastrous bodies were all removed and replaced with lower-segment versions, we'd most likely be able to see the downside of these transparent ghosts; but right now, in a world of heinous mesh body designs, they are a very minor evil. (i.e. even if it is not being drawn it is using RAM and takes CPU time to load and process; right now that cost is lost in the screaming nightmare of alpha cuts.) For this you'd need to be able to wear and unwear an alpha layer. This would achieve the effect; in fact you have far greater control over things using this method, as you are not limited to where a body creator has placed the cuts. The problem is that I do not think we can add alpha layers from a script at present without the use of RLV, even with an experience. This is the only change that you'd need. Going back to the general "Beq advocates removing alpha cuts": yes, Beq totally does, but that is not to say it is an absolute thing. The trick that you can see with SLink Redux or Inithium Kupra, both of which are pure BOM bodies, is that they minimise the number of meshes and as a result are far more efficient to draw. I am not at all saying "thou shalt make all things as a single mesh or be forever damned"; it is: use as few meshes as possible to achieve what you need. If one mesh body is made up of 240 meshes and another is made up of 24 meshes, the 24-mesh body will draw 10x faster. It is quite literally that simple and quite literally that linear for most people on most hardware. Liz and I worked on a full set of benchmarks to test all of this and the results are pretty compelling, but they also need a lot of explaining as there's a lot of data in there. I will do my best to finish my blog post on this and link it here tomorrow or Sunday, RL permitting. I've been trying to get this out for a few months though 😞
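The "quite literally that linear" claim above can be sketched with a toy cost model; the constants below are illustrative assumptions I have chosen for the sketch, not measured viewer values:

```python
# Toy cost model with assumed constants (not real viewer measurements):
# a large fixed dispatch overhead per draw call plus a tiny per-triangle cost.

DISPATCH_US = 40.0    # assumed CPU overhead per draw call, microseconds
PER_TRI_US = 0.001    # assumed marginal cost per triangle, microseconds

def render_cost_us(draw_calls, triangles):
    return draw_calls * DISPATCH_US + triangles * PER_TRI_US

sliced_body = render_cost_us(240, 60_000)  # 240 meshes, heavily alpha-cut
bom_body = render_cost_us(24, 60_000)      # 24 meshes, same triangle budget

# Ratio tracks the 10x draw call ratio almost exactly, because the
# triangle term is lost in the noise of the dispatch overhead.
print(round(sliced_body / bom_body, 1))
```

As long as the per-call overhead dominates, the triangle count barely moves the ratio, which is why the benchmark results come out nearly linear in draw calls.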
  16. It's really just a hack though. It does no harm and I don't particularly worry about it; there is a slight argument that it will load faster, but that's unproven and it would certainly not be noticeable. The reason this "works" is simply that the download cost of mesh is based upon the size of each LOD in the asset file. The asset file "zip" compresses the mesh data, and by sorting it in different ways you can achieve better or worse compression. For most meshes it makes little to no difference (it might be enough to shift you from 3.7 to 3.4 and thus save 1LI in the inworld sense). A few months back I actually got around to "making stuff" instead of fiddling with code. I made a bunch of hand tools for New Babbage, where I live. I wanted each of these to work well enough that you could have them on a tool bench or in a workshop scene in the background, or be carrying them in your hand, and ideally I wanted them to be 1LI (though I don't typically wed myself to these figures, as people make far too many sacrifices for the sake of 1LI as it is). The problem we have is that the current Land Impact system (deliberately) penalises triangle usage in the lower LODs harshly, especially so for small items such as these. As a general (sweeping) approximation, you have at most 20 triangles in the lowest LOD before you have no chance of getting your prized 1LI. Imposters are the way forward, not the lazy crumpled-triangle nonsense of the item in the OP's image, which sadly we see far too much of (and which is of course the very reason I wrote the LOD viewer capability for FS in the first place). For the wrench, which is adjustable, making an imposter per segment allowed the impostered view to remain correctly adjusted (a detail nobody but me would likely ever notice!!) https://i.gyazo.com/709b8bc22060110c18ffbdcf9b7fe993.mp4 My process for making LODs is typically to start from the high, or occasionally the medium, and deliberately cut away smaller sections of mesh.
Get used to thinking about how small they will be on screen when they are at a given LOD, in order to drive your design decisions. This can be tough, and the trade-off used by @Chic Aeon, of either padding the bounding box to inflate the radius (which will affect the LI) or just accepting that the item is not going to be seen at that distance, is entirely valid (I swear my RL house keys have the lowest LOD zeroed, as I frequently fail to see them when they are right there on the table). Here is a small video of me quickly whizzing through the attempts I made at making viable LOD models. https://i.gyazo.com/f1d06b356e38500465cf44ff71072357.mp4 Note here that most of the tools I am making are well suited to imposters; you see them side-on for the most part, and if they are not side-on (on a tool rack or similar) then you probably don't care much (they'll be lost against the avatar holding them etc.). The exception is something like the oil can, which was a total pain because it needs a proper volumetric LOD model and is asymmetrical, meaning I cannot use the "plant pot" imposter trick of a star mesh. In the end each one was 1LI, and each one is a single texture. I use a 512 for all except (apparently - I just looked) the wrench and the matches, to which I have granted a 1024, but I think that's because I haven't bothered to try the 512s yet! I also consider those a guilty pleasure given that I don't sell stuff much, so I'm not polluting anyone else's Second Life 😉
  17. Indeed, I meant something else entirely 🙂 but as a texture accounting scheme this is ok. In one sense I am not entirely sure what I am looking for here either. When we render a mesh, most of the CPU focus is on preparing the geometry for the GPU. This includes associating textures with faces and binding them. It is not clear to me whether the binding cost happens synchronously, i.e. at the time we do the bind in the code, or later on, when OpenGL decides to flush things. In the synchronous case we should be good; my stats have it covered. If it is happening asynchronously then that cost would be missing from my stats and almost certainly un-attributable to the avatar. I am not a graphics expert by any definition, so a lot of the time there are nuances that are missed. I am happy with these stats as I can prove that, while they may not be 100% fully accountable, they are highly representative of the costs. It's always nice to have a fuller picture though.
  18. I'll be sharing my implementation once I have it in a sensible form; the main objective is to get something to LL that can make sure that this new floater adds some value. This is literally the first step. I have a related "overlay" mode which is fun, but of course it does not work well with realtime stats, as the overlay itself is changing the avatar render 😄 Indeed, and therein lies a problem: we can do those kinds of things in benchmarks, but it is not an end-user tool. I don't think you can easily extract per-avatar costs at the GPU level. I'll take a look into that once I get further on with this. The other problem, which is related, is textures. I'd be interested to hear from other devs as to whether there is a way to fully account for the texture transfer cost per avatar. I don't think we can. What I do have (not yet integrated) is measurements of the swapbuffer latency, which to some extent relates to the volume of information being pushed to the card, but it is per frame and cannot be easily subdivided as far as I know. Even so, this is a step in the right direction, I hope. The more information we give users, the better choices they can make.
  19. Whoo, it works!! It needs some proper loving, but essentially it works. This (very early test build of Firestorm) has a measurement of the actual cost of rendering avatars. It builds upon the existing performance floater from LL, but I have integrated some proper accounting that measures the actual time spent on the CPU for each avatar. Of course CPU is not the whole story, but the stats I am using are effectively the proportion of each frame spent rendering a given avatar's geometry, any shadows, etc. I'll be adding more... For the purposes of this test I left the list sorted by complexity (ARC) and updated it so the graph (that white line) shows the true render cost. As a result you can see that the people at the top of the list are not typically the problem. Now, this being a first run-through of this integration, the values might be misleading, so the following video shows a test. https://i.gyazo.com/c98b325ebd4bfb22699458db20be7fb1.mp4 First I disable the Bea person. Notice how the FPS noticeably goes up (a couple of FPS when we're only running at 12). I re-enable them. Down it drops. I then disable Poyi; there is no change, and even with snowflake, arguably the second-largest render cost, we see only a slight change by comparison. Early days, but I think this highlights my concerns pretty well.
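The accounting idea described above can be sketched roughly like this (a hypothetical Python model, not the actual Firestorm C++ code): wrap each avatar's render work in a timer and accumulate the elapsed CPU time against that avatar's id, then sort by the measured cost rather than by a proxy like ARC.

```python
import time
from collections import defaultdict

# Hypothetical sketch of per-avatar frame-time accounting.
render_time_s = defaultdict(float)

def timed_render(avatar_id, render_fn):
    # Accumulate the wall-clock time spent rendering this avatar.
    start = time.perf_counter()
    render_fn()  # geometry, shadows, etc. for this avatar
    render_time_s[avatar_id] += time.perf_counter() - start

def worst_offenders():
    # Sort by actual measured cost, not by a proxy like ARC.
    return sorted(render_time_s, key=render_time_s.get, reverse=True)

# Simulated frame: one expensive avatar, one cheap one
# (names borrowed from the test video above).
timed_render("Bea", lambda: time.sleep(0.05))
timed_render("Poyi", lambda: time.sleep(0.001))
print(worst_offenders())   # expensive avatar listed first
```

The viewer version naturally has to deal with shared work (shadow passes, attachment batching) that cannot be attributed so cleanly, which is part of why the stats are representative rather than exact.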
  20. I totally agree. The problem is that your body is not good relative to the hair. It will almost certainly be 100 times worse (making an assumption that a mesh hair is likely to be <5 draw calls). We desperately need these tools to raise awareness of the problems we have; unless this gets fixed we will be teaching people the wrong lessons. This is my concern: you should be empowered to make choices based on good data. If you have a higher complexity because of an item, but you are content with that item, then you are making an informed choice. The problem is that already you are feeling dissuaded from the perfectly good unrigged hair towards the rigged hair, because the tool is misleadingly showing it as being better. It almost certainly is not. There is a reason why the unrigged hair shows higher. The unrigged hair has to have LODs that are populated, because it switches LODs correctly. Rigged hair, due to ancient unfixed bugs, does not switch LODs when it "should", and as a result the creators can choose not to provide lower LOD models; this reduces the complexity score. What it does not do is change any aspect of the rendering time. In both cases the viewer will be drawing a few (<5) batches of data totalling ~25K triangles (or whatever) of hair. If anything, the otherwise identical rigged hair will take longer to display, because there is a lot more mathematics involved in applying the weights to deform it to the rig. These are techy details that most people won't want to understand, and the tools should be saying "this one good", "this one not so good" in a way that we can trust. Right now it is blatantly misinforming us. 😞
  21. I have very, very mixed feelings about this viewer. It needs a radical change to avoid making problems worse. A great example. This is exactly why this viewer cannot be released. It has just convinced you to change something, quite possibly for the worse. The 12 ARC is not that badly skewed; you can get hair with 10 or 20 times that (incorrectly calculated). In this case it may be that you have a rigged hair with multiple styles incorporated and a number of textures. If it were higher, I would more confidently guess that your "problematic" hair was unrigged. Unrigged hair is penalised by ARC, unequally compared to its rigged equivalents. Given the same base mesh, the rigged ones will take at least as long to render, and yet their ARC can be an order of magnitude lower. Thus the entire premise of the data displayed in this floater is flawed. Back to the floater itself. I very much dislike the renaming of existing variables such that they have different names in preferences to those in the floater; this means that anything learned in the new floater does not easily transfer to the main preferences. These should be made the same. The avatar list is a nice display; however, it comes to the very heart of my dismay. It uses ARC, which is so fundamentally wrong as to be misleading. The Maitreya body, as an example (but not the worst), is a significant rendering overhead due to it clinging on to alpha segmentation. This is not a poke at Maitreya; ALL multipart bodies are very bad performers. In the case of Maitreya, a full body will require in the region of 300-400 draw calls (batches of triangles sent to the GPU); Legacy is the worst, with Belleza close behind. Male bodies are typically worse than female ones too.
Compare this to bodies that have sensibly embraced BOM properly, such as Slink Redux or the Kupra, which have far fewer (typically 10-15% of the number of draw calls). The render time in my experiments is almost linear in terms of drawcalls (almost irrespective of the triangle count, even on older laptops without GPUs). ARC does not come close to reflecting this; it is mired in concerns of triangles. Triangle counts do have an impact; it is just hidden in the noise when draw calls are over-used. The result is that you have a sorted list with supposedly the "worst" lagging avatars at the top, but which is generally misleading and in many cases utterly wrong. You can easily get into the situation where someone wearing a comparatively efficient SLink or Kupra body and a relatively low-overhead unrigged mesh hair will appear with a very long line at the top of your list, meanwhile a person with a body full of alpha segments and a rigged mesh hair is way down lower. By using the tool you eliminate the efficient avatar and keep the inefficient one. Utter madness. Moreover, we'll see this used to persuade people, either through finger pointing by their friends/associates, or through good-willed self-improvement, to ditch their efficient outfits for worse ones, achieving the opposite effect to that intended. I am in the process of developing a way of scoring these that will alter this and give more accurate data, personalised to your machine. Assuming I can get it to work, I'll be contributing it to the lab in the hope that we can avoid this new "finger pointing" disaster. TL;DR: The presentation of the floater is not bad and the idea good, but the information that it provides is completely flawed and in many cases counter-productive.
  22. Do you have a screenshot to show what you mean? It does sound like you've messed up anti-aliasing somewhere. If you have changed your driver settings externally so that the driver settings override the application, then that can prevent the viewer from overriding them. One option is to hit the little "recycle" arrow on the right of the performance/quality slider in the viewer preferences. This will reset to the default for your hardware and should (hopefully) wipe out any weirdness. Then you'll have to go through and fiddle with the settings again to get them how you want, of course.
  23. Seems reasonable (given that I thought that was how it was working). I'll try to remember to look at this next time I'm in that code. If you feel like raising me a Jira, that'll make sure I don't forget.
  24. "right click - render -> fully" *should* do that