Beq Janus

  1. A shortish account of my 2.8 experiences so far. The giant troll at my Fantasy Faire build was entirely sculpted in Blender 2.8. In fact, without 2.8 he would probably never have come into existence. Hugh (as he became known) was my second ever sculpture (I use the word sculpture here because, even though it sounds clumsy, it avoids the fact that in SL the word sculpt has been polluted 🙂 ). My first was a little "snow fox" that I made in early Jan. I grabbed 2.8 to see how badly it broke my add-ons (the answer is: completely) and to work out how I'd go about redesigning them. My add-ons used the old layers system, so they need to migrate to collections, along with some other changes (a minimal sketch of that change follows at the end of this post). While I was there I thought I'd give sculpting a go; there was a "competition" to build a snowman/creature in New Babbage, so I made a lumpy snow fox. It turned out OK. My artistic abilities are generally pretty woeful, so "OK" was a step up 🙂
One of the problems I had, though, was retopology: none of the decent retopo tools appear to have been migrated yet (I need to check the latest status). The solution I found was to run both 2.79 and 2.80; you can then copy/paste between the two far more easily than attempting to export/import. I then retopologised it in 2.79.
I decided to make sculpting in 2.8 my challenge for doing the Faire this year, and thus Hugh was born. Because of his size, and the fact that he is effectively disposable, not for sale or general use, with judicious use of decimation tools I didn't need to worry too much about full-on retopology; keeping the polycount sensible was enough. Sensible, in this case, was more challenging than you'd think. On one hand, you have a vast mountain of a sculpture, so low resolution is where you want to be targeting, but at the same time people were going to be standing in his mouth (we hosted poetry and writing open mic sessions and readings/interviews with authors, such as the award-winning SF and fantasy author Elizabeth Bear, right inside the cavern that was Hugh's mouth). The installed Hugh (you can see him in my video linked on this thread in the Machinima forum) came in at about 300K triangles. Because of the scale, he had to be sliced up into SL-bite-size chunks of 64m, resulting in around 40 separate pieces, which also adds to the geometry of course. I wrote a custom 2.8 add-on tool for doing this and I will be releasing that at some point (once I make it safe for people other than me to fly it). I actually produced a single mesh version that retained the same UVs etc. and came in at a more respectable 32K triangles, again without "proper" retopology tools, just the judicious use of selective decimation.
Sculpting in 2.8 with my Wacom tablet was brilliant: highly responsive to input, and intuitive enough for me to be comfortable. Note that you cannot freely exchange blend files back and forth: 2.8 will import 2.79 files, but it converts them to use collections and saves them as such, and that, of course, cannot be read by older versions.
Conclusion: Blender 2.8 is brilliant, don't wait for the "stable release". You can run it alongside your 2.79 if you want, but the "release" is not going to be a magical transformation from the nightly beta builds; it will be whatever the nightly build was on the day they seal the release. Things to consider:
Many of the larger, more complex add-ons are still being updated. You can often find a work in progress on the add-on's GitHub page. All add-ons should have a source repo available because Blender is GPL licensed (commercial add-ons exist, but in effect you are paying for the support, typically by backing the author via Gumroad or Patreon).
If you are not ready to migrate yet, you can still ease your migration: switch 2.79 over to left-click select and retrain that muscle memory. I did this quite by chance last year because I was fed up with switching tools and clicking the wrong button. 2.79 works quite adequately with left mouse; it also makes using a Wacom tablet easier, as it happens. You can, of course, choose to run 2.8 with old-style right click too.
Finally, for no reason other than showing a silly image: me reading a human-scale (stone) book, lying on a stone book that is resting on a stone book that is..........
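Purely as an illustration of the kind of change the add-ons need (this is not code from my actual add-ons, and the collection name here is made up), the 2.79 habit of assigning obj.layers gives way to linking objects into collections in 2.8:

    import bpy

    def move_to_collection(obj, name="MyAddonOutput"):
        # Link obj into a named collection (Blender 2.80+), creating it if needed.
        # Replaces the old 2.79 pattern of obj.layers = [i == n for i in range(20)].
        coll = bpy.data.collections.get(name)
        if coll is None:
            coll = bpy.data.collections.new(name)
            bpy.context.scene.collection.children.link(coll)
        # Unlink from any collections it is already in, then link to ours.
        for c in list(obj.users_collection):
            c.objects.unlink(obj)
        coll.objects.link(obj)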
  2. Fantasy Faire has finished and I'm busily packing away our region into rezzers in case we should ever want to recreate it. As a memento, mostly for myself, I thought I would make a film and set it to the music of Grieg, which played a part in the creation of the region, and of the Troll, Hugh. The pieces featured are, in order of play:
Scene 1: Morning Mood - sunrise over Trollhaugen.
Scene 2: Wedding Day at Troldhaugen - a slow sweep around Hugh and his book/town.
Scene 3: In the Hall of the Mountain King - a nighttime trek through the woods and vales and the curious creatures that live there.
My footage from other regions will be part of a forthcoming episode of Designing Worlds alongside that of Elrik Merlin and Saffia Widdershins. Thanks, Beq x
  3. Sorry, late to this, I've been ignoring everything until the Fantasy Faire was done and my RL time debt at least partly paid back 🙂
Alpha masking has a lower overhead in rendering because a pixel is either transparent or not; there is no colour mixing to be done, it is (effectively) a simple stencil. Highly recommended for all plants and grasses and, frankly, anything and everything you can get away with it on. Of course, it leaves a lot to be desired when you truly need blending.
As @ChinRey noted, the averaging that occurs when the viewer produces the lower resolution versions of the texture for use at distance will indeed result in artefacts, and the best way to avoid that is to solid fill the alpha channel. The result will be a blocky looking texture in some cases, but in reality that is a true reflection of how it will be rendered. Of course, on the edges of the masked areas you still have this issue, and thus an appropriate diffuse colour there makes sense. For those I would strongly recommend the kind of practice employed for alpha blending (and UV islands for that matter), which is to blur the diffuse out into the "dead space". A great guide to this that has served very well for many years is Robin Sojourner's guide to alpha masks, which uses a set of free plugins for Photoshop. You can find it at the following link: Alpha Channels with No White Halo. The section of most interest is on page 2.
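If it helps to visualise the "stencil" behaviour, here is a tiny, purely illustrative Python/Pillow sketch (the file name and cutoff are just examples) that hard-thresholds a texture's alpha channel the way a masked surface is rendered:

    from PIL import Image

    def preview_alpha_mask(path, cutoff=128):
        # Force every pixel fully opaque or fully transparent at the cutoff,
        # with no blending in between - roughly what alpha masking does.
        img = Image.open(path).convert("RGBA")
        r, g, b, a = img.split()
        hard_a = a.point(lambda v: 255 if v >= cutoff else 0)
        return Image.merge("RGBA", (r, g, b, hard_a))

    preview_alpha_mask("leaves_diffuse.png").save("leaves_mask_preview.png")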
  4. Nothing sophisticated. It is literally looping once through all the objects in the scene and, if they happen to be an avatar, it loops through all of the currently playing animations, stops them and starts them again.
One thing of note: it is explicitly checking for avatars and does not have any code specific to Animesh. I suspect that Animesh, as scripted objects, may need to be handled differently. It would be worth creating a Jira for this; I don't have time to look into it for a while, but it could warrant a closer look when time allows.
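In pseudocode terms it amounts to something like the following (the names here are invented for clarity; the real code is C++ inside the viewer):

    def restart_all_avatar_animations(scene_objects):
        # Illustrative pseudocode only - not the actual viewer source.
        for obj in scene_objects:
            if not obj.is_avatar():        # Animesh objects are not caught here
                continue
            for anim_id in list(obj.playing_animations()):
                obj.stop_animation(anim_id)
                obj.start_animation(anim_id)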
  5. Interesting reading. I don't 100% agree with some of the assertions, but I don't 100% disagree with any of it 🙂
I would note that fitted mesh and rigged mesh are no different at all in the viewer; in the end, a vertex can have 4 influences from a set of 110 bones used with the mesh as a whole. Whether those influences are derived from so-called collision bones or regular bones makes no difference. In all cases, the matrix palette for the transforms is recalculated at least every frame, for every drawable. One of the optimisations that I introduced for Animesh, which has a benefit beyond Animesh, is a caching of the matrix palette so that it is calculated just once per drawable per frame. Prior to this release, it was recalculated every render pass, which added a significant additional maths overhead to advanced rendering for things like shadows and materials. These are all computed on the CPU, by the way, then passed down and unpacked in the shaders, though I should note that I am far from an expert in the rendering pipeline; Animesh was my first real foray into that space and a large learning curve to even get to where I did. It would not surprise me to find that the full answer is more nuanced, but right now I don't believe that there is very much magic happening on the GPU for this. Just as I was writing my answer 🙂 but yes, I concur... I don't think enough happens on the GPU, but at the same time it is not always the case that just because it could be done on the GPU it should be (but that is a different tale).
A few comments on the blog itself.
Re: asset serving. While this is true, it is worth keeping in mind that both sculpts and mesh have an additional fetch overhead. This is not quite the same as the asset server discussion, really; the point is that the data fetched for a mesh and a sculpt is in two parts, asset data, then "mesh" data, where "mesh" can be defined either as a sculpt image or as a triangular mesh.
From the CPU section: I've not tested this, so I will bow to your assertions in the absence of any other credible argument. However, prim rendering is not that efficient, and it would be my guess that creating an exact equivalent of a given prim in mesh could well be faster, because in mesh it is explicitly formed, whereas prims are in large part procedural. I am speaking here from a point of "informed ignorance", by which I mean I am stating what ought to be the case given what I know, but it is an area I have not paid close attention to. That said, taking the metadata description of a prim and drawing it will be slower than slapping a bunch of triangles into a vertex buffer (see my comment on memory pressure though). Another example of such informed ignorance is my belief that because prims have a number of faces, and these are to some extent hard-coded (to some extent because while a cube prim has 6 material faces, a hollow cube has more), a prim will always be rendered in as many parts as it has material slots, even if those are textured identically. While the same is true of mesh, the creator has explicit control over the number of faces and can choose to simplify/optimise. Thus (to give a somewhat stupid example) a plywood default prim will likely be drawn as 6 faces, whereas a fully plywood mesh cube could be constructed with a single texture face and thus render faster. Extending this point, it is always worth remembering that what we might think of as a single mesh is drawn as up to 8 meshes, one for each texturable surface.
There are good opportunities to optimise UV and VRAM use, and rendering performance, by managing these texture slots, and to echo your sentiments on alpha: if you have alpha on one texture face, consider giving it a separate texture sheet, because it will remove shader passes from the faces that don't have the alpha present and avoids some of the glitches too. I can't say "always" here because there are arguments for texture reuse where the data handling benefits may outweigh the rendering cost. Nobody ever said this was going to be easy 🙂
Finally, a word on bottlenecks. In your blog post on the CPU, you bridge a little into the subject of the GPU because of their relationship. The simple fact here is that "your mileage may vary". I alluded to an optimisation I made in the Animesh release of Firestorm: it slashed the number of matrix calculations the CPU makes per frame, especially for those of us running with shadows and ALM. This was a CPU saving that was not inconsiderable, but did it result in a significant speedup for everyone? No, not really... The answer as to why is "because CPU was not your bottleneck". This is evident, in fact, if you look at your CPU utilisation: it is not generally thrashing a core at 100% (people on Linux report otherwise, but I think that is a peculiarity of the viewer on Linux more than anything else); mostly your CPU is busy but it is not running flat out. If it is... then good news, my change WILL have helped you. But if your machine is like mine, then the bottleneck is not CPU and not GPU, it is IO and memory/cache utilisation. In profiling my system I found that for more than 50% of elapsed time the CPU was waiting for DRAM to catch up; pulling things from RAM into cache was/is causing the CPU to stall and wait. So I might get better frame rates if I buy faster RAM (yay for me, but it doesn't help the general case), or perhaps by finding a way to ensure better cache locality in the viewer pipeline (if only it were that easy).
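For anyone curious what "once per drawable per frame" means in practice, here is a minimal sketch of that caching pattern (the class and method names are invented for illustration; this is not the viewer code):

    class DrawableSkinCache:
        # Build the skinning matrix palette once per drawable per frame and
        # reuse it across render passes (shadows, materials, etc.).
        def __init__(self):
            self._palette = None
            self._frame_stamp = -1

        def get_palette(self, frame_number, joints, bind_matrices):
            if self._frame_stamp != frame_number:
                # The expensive part: one matrix multiply per influencing bone.
                self._palette = [joint.world_matrix() @ bind
                                 for joint, bind in zip(joints, bind_matrices)]
                self._frame_stamp = frame_number
            return self._palette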
  6. Necro-ing this thread because of a conversation I had today, which taught me something new that I think is worth sharing for the future.
The BVH format is essentially unit-less. There is no "units" section to define what the unit of measurement is; the best I can find is a comment that the units are "world units", which is equally meaningless without context. Perhaps the best summary of this is in the much-cited work by Meredith and Maddock, titled "Motion Capture File Formats Explained". However, as @Aquila Kytori and @Cathy Foil observed here, there is an implicit use of inches when importing from BVH; the reasons for this are lost in time. Perhaps @Vir Linden may have some insight into the history of this? There is a useful nugget "hidden in plain sight", as is often the case: http://wiki.secondlife.com/wiki/BVH_Hip_Location_Limits. My guess is that whatever sample animations the original importer was tested with were using inches, and that somehow became a de facto standard.
Note, however, that the inches unit is only relevant when reading BVH; the Second Life internal animation format (.anim) is "effectively" metric, using an unsigned 16-bit integer to represent 0-5m in 1/65535 increments. The .anim format is documented here: http://wiki.secondlife.com/wiki/Internal_Animation_Format. This little gotcha is quite well hidden; as far as I can tell it appears just once in the viewer code, during the conversion of imported BVH data to a KeyFrameMotion object (which is the data to which we, as users, then add things such as priority, ease_in and ease_out, etc.).
The BVH format itself has no units, as I said above; however, I did manage to uncover the original specification, and it had an "enlightening" statement about the assumed neutral pose, which goes a long way to explaining why we ever ended up with the illogical -Y forward orientation for animations. That stated "neutral pose" also explains the lack of orientation encoded in the BVH itself.
Hopefully, this little tidbit of info sitting here on the forums will make it slightly more likely that a person seeking this knowledge in future will find it. Thanks to @LichtorWright for pointing me at the Hip Locations info in the first place.
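To make the units point concrete, here is a purely illustrative sketch of the conversion described above (the constant and the 0-5m quantisation follow the description in this post, not the viewer source, so treat it as an approximation):

    INCHES_TO_METERS = 0.0254   # the implicit unit assumed when importing BVH

    def bvh_offset_to_anim_u16(value_in_inches, max_offset_m=5.0):
        # Convert a BVH translation (implicitly inches) to the unsigned
        # 16-bit quantised value used by the internal .anim format.
        metres = value_in_inches * INCHES_TO_METERS
        metres = max(0.0, min(max_offset_m, metres))   # clamp to the representable range
        return round(metres / max_offset_m * 65535)

    print(bvh_offset_to_anim_u16(39.37))   # roughly 1 metre -> about 13107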
  7. The materials list has to match per object, and the names need to be unique and not have spaces (unless you are using the preprocess DAE setting). In my experience, the objects in the DAEs have to be in the same order to load without explicit named LOD matching. Materials are dumped into a map as they are loaded and then referenced; if you have multiple materials of the same name, the map picks up a different permutation of materials and says "hang on, these aren't the same".
As for the single high LOD model (used as physics and low LOD too): I don't understand what you are showing. The high LOD model would seem to have failed to parse there, but without knowing more I can't say what's going on at all. What is the object naming? As arton suggests, remove anything that might be confused from the mesh name; note it is the name of the mesh, not the object. If it believes that you are attempting to use named LOD matching, it may fail. The parsing is finicky at best.
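If you want to sanity-check the material lists yourself before uploading, a quick script like this (the file names are just examples) will pull the material names out of each DAE in document order so you can compare them:

    import xml.etree.ElementTree as ET

    NS = {"c": "http://www.collada.org/2005/11/COLLADASchema"}

    def material_names(dae_path):
        # Material names in document order, as the uploader will see them.
        root = ET.parse(dae_path).getroot()
        return [m.get("name") or m.get("id")
                for m in root.findall(".//c:library_materials/c:material", NS)]

    high = material_names("chair_HIGH.dae")
    low = material_names("chair_LOW.dae")
    if high != low:
        print("Material lists differ between LOD files:", high, "vs", low)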
  8. Yep, I agree. I'm just thinking out loud a bit, and not convinced that there is no scale advantage, because (as you note elsewhere) my example is illustrative at best, trying to convey in human terms how much more work it is for them to tell where they collide. It's actually a poor analogy for the problem; I probably ought to have only subdivided on one axis! However, once the potential collisions have been identified through judicious use of spatial partitions and bounding boxes etc., you still have to check the hulls. This is where I start to suck my teeth a little cos, I dunno... I'm in two minds: yes, it's the same amount of work (50 hulls is 50 hulls is 50 hulls), but stretched long and thin by a rescale, are they not subject to the same concerns? One day, when all the other jobs are cleaned up, I'll summon up enough energy to look 😉
  9. A great summary, and a worthwhile pilgrimage. I take a small exception to the two points quoted. The LI calculation is literal and well-defined, though it may not make sense without understanding the proper context; it is not easy to find out what it is, and that gives it the air of mystique. That is just for the streaming cost; it applies ten times over for the physics cost element. One thing that is also very true is that the current LI calc is old, and what once held true is not quite so black and white now.
The size aspect makes perfect sense and has little to nothing to do with the server. The server certainly has no additional stress due to the larger scale of an object. In the past, assets were served by the region, and it could be argued that that caused more load, but that makes an assumption about people requesting simultaneously that is hard to quantify. In any case, this has not been the situation for a long time now; all mesh and texture assets are served via the CDN. However, the scale aspect makes sense without that. A large object is visible from further afield at high detail, thus a scene consisting of lots of large, highly detailed objects is more complex to render than the same scene consisting of an equal number of smaller versions of the same objects, because some of the smaller objects will have decayed to the lower LOD models. Fewer triangles to render means less load and faster rendering. In theory, this extends to streaming, because an object that is far away only has the lower LOD model data requested, thus the viewer downloads less data. In practice, with modern networks, the use of the CDN, and larger local caches, this is of less value, and that is why I think we'll see those metrics revamped under Project ArcTan when it appears.
The size has an inverse relationship with physics. If you are using a triangular mesh physics shape, then the smaller the triangles, the more tests have to be done to determine a collision. This DOES impact the server, because that is where the physics engine runs. The reason for the 0.5 limit (which is rather arbitrary and possibly too large, but that's a separate argument) is to prevent the physics cost going asymptotic as the scale tends towards zero. I do struggle to justify, based on this argument, the fixed cost for analysed hull physics, but again, that's another matter for another day.
This animation tries to illustrate the problem with an extreme example. In the best case, the physics engine tests 2 triangles for intersection with the ball in order to decide if they collided. In the worst case example here, it may have to test up to 1024 triangles; clearly that is a lot more work. (Taken from http://beqsother.blogspot.com/2017/10/blue-sky-thinking-physics-view-explained.html)
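The way the triangle count grows is easy to see with a little arithmetic; this is only an illustration of the scaling, not of how the physics engine actually partitions space:

    def plane_triangle_count(n):
        # A flat surface split into an n x n grid of quads holds 2 * n * n triangles.
        return 2 * n * n

    for n in (1, 4, 16, 23):
        print(n, "x", n, "grid ->", plane_triangle_count(n), "triangles")
    # 1 x 1 -> 2 triangles (the best case above)
    # 23 x 23 -> 1058 triangles (roughly the worst case above)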
  10. No apologies needed; we all learn every day, and if we don't, then we probably weren't trying anything new.
  11. Wait, if I upload a texture with only one channel it won't add more channels to it, will it? I mean, there is absolutely no reason to have more than one channel for a specularity map, and that's not new; that will have been the case ever since SL introduced specularity maps, whenever that was. Does it really just inflate the textures? o.o
While the viewer has support for various image formats with varying numbers of channels, by the time we are rendering, such things are lost. I'd have to dig around a lot to prove that we can even upload a single-channel greyscale image these days. In theory you can, but in practice I am less certain; it may well get converted to 24-bit RGB colour space irrespective of what you give us.
However, the statement that "it's a specular map so it is just one channel" needs some clarifying, because that's not at all how it works, and never has been. In Second Life there are 5 channels that control specularity. First we have the 4 channels of what we call the spec map. The first 3 channels (RGB) control the per-texel colour of the reflection; these get combined with the overall tint specified in the specularity settings for the material. Then we have environmental reflectivity, which is more or less the equivalent of traditional "shininess" and is controlled by the alpha channel; this gets combined with the environment intensity setting. The 5th channel is stored as the alpha of the normal texture and is more or less a roughness map. It controls the sharpness of reflections (i.e. how much light is scattered by that surface) and gets combined with the per-material glossiness value. There is no way to control just an individual channel, which does cause overhead for some types of material, but that's how materials work. This link explains it fully: http://wiki.secondlife.com/wiki/Material_Data#Texture_Channel_Encoding
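As a purely illustrative example of that channel layout (the file names are made up, and your texture pipeline may differ), packing greyscale maps into the two textures might look like this with Pillow:

    from PIL import Image

    def pack_material_maps(spec_colour, env_intensity, normal, glossiness):
        # Spec map: RGB = specular colour, A = environment intensity.
        spec = Image.open(spec_colour).convert("RGB")
        spec.putalpha(Image.open(env_intensity).convert("L"))
        spec.save("packed_specular.png")

        # Normal map: RGB = normals, A = glossiness (reflection sharpness).
        nrm = Image.open(normal).convert("RGB")
        nrm.putalpha(Image.open(glossiness).convert("L"))
        nrm.save("packed_normal.png")

    pack_material_maps("spec_colour.png", "env_mask.png", "normal.png", "gloss.png")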
  12. While the conversation has moved on quite a bit, one of the initial questions asked was "Is this to do with the move to cloud?" The simple answer is no, the primary reason being that none of that work has hit the grid properly yet. The less simple answer is that your textures have not come from Linden Lab servers for a number of years; they went to "the cloud" quite some time ago, but it is a different use of cloud than what is being played with at the moment.
Assets are served by a Content Delivery Network (CDN). These are services designed to manage (and optimise) bandwidth use on the internet and are typically much closer to you than Linden Lab servers are (of course, for a small number of people that is not necessarily true, Arizona residents specifically). A CDN has the assets uploaded and replicated globally, typically across its major data centres; it then has smaller edge nodes that provide localised caching nearer to you. The use of a CDN gives you faster access and takes the load away from the sims. In the past, assets were delivered by UDP from the sim, but this changed to HTTP distribution some time ago. There are certain asset types that can still use UDP, but even they will be disabled this week, and at that point all content will come via the CDN. http://wiki.secondlife.com/wiki/Release_Notes/Second_Life_RC_Magnum/19#19.02.21.524633
  13. Most likely? I'm far from trusting that these error messages mean anything that serves the hapless user; more like the house's "good luck, sucker". As to the horrifyingly tiny triangles, taken in scale with the total size of the build (avatar scaled), they just may be. That top landing's odd shape consists of vertices at each corner that have to be joined somehow. The stand-alone stairs I started with were all nice and lovely before I had to integrate them into a 45-degree stepped wall. Why? I'm asking that same question.
@Chic Aeon is exactly right. A degenerate triangle is one that is long and thin to the point where the length of the shortest edge is far smaller than the other two. You appear to be using an older Firestorm, but if you have a look at the upload using the latest Firestorm it (should) block you from even attempting that upload and highlight the problem areas in RED. The older versions of FS use the same "highlighting" as other viewers; you can see the problems "highlighted" in your image, it just makes a very poor fist of showing them. As has been noted, the solution is to simplify the shape as much as possible; a slope rather than steps is the typical solution.
As you observe, the bounding box MUST match that of the visible mesh; if not, the physics model is stretched (or shrunk) to match. In the case where the stairs have a bannister or balustrade that is not part of the physics model and thus makes the visible model far taller, you have to place a single vertex (it is often easier to place a single triangle) in the physics model that corresponds with the full extent of the visible mesh. This then prevents the stretching.
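If you want to hunt these down before uploading, a rough check based on that "long and thin" definition is easy to write; this is purely illustrative and not the exact test the uploader performs:

    import math

    def is_degenerate(a, b, c, ratio_limit=1e-3):
        # Flag a triangle whose shortest edge is tiny compared to its longest,
        # or which has collapsed entirely.
        edges = sorted([math.dist(a, b), math.dist(b, c), math.dist(c, a)])
        return edges[2] == 0.0 or edges[0] < edges[2] * ratio_limit

    print(is_degenerate((0, 0, 0), (10, 0, 0), (10, 0.001, 0)))   # True: a sliver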
  14. This I can believe, which comes back to the simple fact that this feature is a debug setting that reads values directly from the image buffer. It then displays them in the most common normalised form for representing 8-bit colours, which matches what you would most likely use when setting RGB values in Photoshop or similar. The problem here comes back to the fact that people continue to point users to functions that are not intended for their use, give poor advice, and then we get blamed for it being obscure and hidden or (apparently) too easy to find and not well enough documented.
The section you are missing is in the page I gave you that documents the glReadPixels() call, where it mentions that the values are converted to the range 0-255 (2^8-1). A more comprehensive list of conversions is given on the OpenGL4 page: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glReadPixels.xhtml
I think the documentation is perfectly adequate given the intended purpose of the feature. We do not provide any guarantees as to what those values should be or can be used for. In this instance, such advice should be documented by whoever it is that is suggesting users do this ill-advised sampling. As I stated before, disabling ALM will give you more consistent results, as it removes some of the nuanced lighting effects that will be at play, but I don't hold much confidence in the results being any better than doing this by eye. Colour matching is a poor relation to having a proper texture applier; modern skin textures are not a single tone, and as such matches will always be, at best, approximate.
I do find it amusing that some users feel so strongly that they are entitled to make demands over where we (myself, other developers/testers/support staff/mentors, etc.) should be spending the time we give freely. You'd do better to contact the creator you are trying to match to and ask them to provide a sampled average tone for an area around the joint that you are matching to, based on the diffuse texture that they have.
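For clarity, the 0-255 form is just the normalised floating-point channel value scaled by 2^8-1, for example:

    def to_8bit(channel_value):
        # Scale a normalised [0.0, 1.0] colour channel to the familiar 0-255 form.
        return round(max(0.0, min(1.0, channel_value)) * 255)

    print(to_8bit(0.5))   # 128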