
Beq Janus

Resident
  • Content Count: 203
  • Joined
  • Last visited

Community Reputation: 422 Excellent

2 Followers

About Beq Janus
  • Rank: Advanced Member

Recent Profile Visitors

421 profile views
  1. Beq Janus

    BVH Hip Translate Moving 2.5 times More Than Expected

    Necro-ing this thread because of a conversation I had today, which taught me something new that I think is worth sharing for the future.

    The BVH format is essentially unit-less. There is no "units" section to define what the unit of measurement is; the best I can find is a comment that the units are "world units", which is equally meaningless without context. Perhaps the best summary of this is in the much-cited work by Meredith and Maddock, titled "Motion Capture File Formats Explained". However, as @Aquila Kytori and @Cathy Foil observed here, there is an implicit use of inches when importing from BVH; the reasons for this are lost in time. Perhaps @Vir Linden may have some insight into the history of this? There is a useful nugget "hidden in plain sight", as is often the case: http://wiki.secondlife.com/wiki/BVH_Hip_Location_Limits. My guess is that whatever sample animations the original importer was tested with used inches, and that somehow became a de facto standard.

    Note, however, that the inches unit is only relevant when reading BVH; the Second Life internal animation format (.anim) is "effectively" metric, using an unsigned 16-bit integer to represent 0-5m in 1/65535 increments (see the sketch below). The .anim format is documented here: http://wiki.secondlife.com/wiki/Internal_Animation_Format

    This little gotcha is quite well hidden; as far as I can tell it appears just once in the viewer code, during the conversion of imported BVH data to a KeyFrameMotion object (which is the data to which we, as users, then add things such as priority, ease_in, ease_out, etc.).

    The BVH format itself has no units, as I said above; however, I did manage to uncover the original specification, and it had this "enlightening" statement, which goes a long way to explaining why we ever ended up with the illogical -Y forward orientation for animations. This stated "neutral pose" also explains the lack of orientation encoded in the BVH itself.

    Hopefully, this little tidbit of info sitting here on the forums will make it slightly more likely that a person seeking this knowledge in future will find it. Thanks to @LichtorWright for pointing me at the Hip locations info in the first place.
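    To make the unit handling concrete, here is a minimal Python sketch of the two conversions described above: treating BVH hip translations as inches, and quantising a 0-5m position into the unsigned 16-bit value the .anim format uses. This is purely illustrative, not the viewer's actual importer code.

```python
# Illustrative sketch of the conversions described above; not the viewer's actual code.

BVH_UNITS_TO_METERS = 0.0254   # the importer implicitly treats BVH values as inches
ANIM_POSITION_RANGE_M = 5.0    # .anim positions cover 0-5 m
U16_MAX = 65535                # stored in 1/65535 increments

def bvh_to_meters(bvh_value: float) -> float:
    """Convert a BVH hip translation (implicitly inches) to meters."""
    return bvh_value * BVH_UNITS_TO_METERS

def meters_to_anim_u16(meters: float) -> int:
    """Quantise a 0-5 m position into the unsigned 16-bit value used by .anim."""
    clamped = max(0.0, min(ANIM_POSITION_RANGE_M, meters))
    return round(clamped / ANIM_POSITION_RANGE_M * U16_MAX)

# Example: a 10-unit hip translate in a BVH file
print(bvh_to_meters(10.0))         # 0.254 m
print(meters_to_anim_u16(0.254))   # ~3329
```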
  2. Beq Janus

    Mesh upload of linkset with explicit low-LOD model

    Materials lists have to match per object, and the names need to be unique and not have spaces (unless you are using the preprocess DAE setting). In my experience, the objects in the DAEs have to be in the same order to load without explicit named LOD matching; materials are dumped into a map as they are loaded and then referenced, and if you have multiple materials of the same name, the map picks up a different permutation of materials and says "hang on, these aren't the same".

    The single high LOD model (called physics and low lod): I don't understand what you are showing. The high LOD model appears to have failed to parse, but without knowing more I can't say what's going on at all. What is the object naming? As arton suggests, remove anything that might be confused from the mesh name; note that it is the name of the mesh, not the object. If the uploader believes that you are attempting to use named LOD matching, it may fail. The parsing is finicky at best.
  3. Beq Janus

    Weird download impact calculations on mesh upload

    Yep, I agree. I'm just thinking out loud a bit, and not convinced that there is no scale advantage, because (as you note elsewhere) my example is illustrative at best, trying to convey in human terms how much more work it is for them to tell where they collide. It's actually a poor analogy for the problem; I probably ought to have only subdivided on one axis! However, once the potential collisions have been identified through judicious use of spatial partitions, bounding boxes, etc., you still have to check the hulls. This is where I start to suck my teeth a little because, I dunno... I'm in two minds: yes, it's the same amount of work (50 hulls is 50 hulls is 50 hulls), but stretched long and thin by a rescale, are they not subject to the same concerns? One day, when all the other jobs are cleaned up, I'll summon up enough energy to look 😉
  4. Beq Janus

    Weird download impact calculations on mesh upload

    A great summary, and a worthwhile pilgrimage. I take a small exception to the two points quoted. The LI calculation is literal and well-defined, though it may not make sense without understanding the proper context; it is not easy to find out what it is, and that gives it the air of mystique. That is just for the streaming cost; it counts ten times over for the physics cost element. One thing that is also very true is that the current LI calc is old, and what once held true is not quite so black and white now.

    The size aspect makes perfect sense and has little to nothing to do with the server; the server certainly has no additional stress due to the larger scale of an object. In the past, assets were served by the region, and it could be argued that that caused more load, but that makes an assumption about people requesting simultaneously that is hard to quantify. In any case, this has not been the situation for a long time now; all mesh and texture assets are served via the CDN.

    However, the scale aspect makes sense without that. A large object is visible from further afield at high detail, thus a scene consisting of lots of large, highly detailed objects is more complex to render than the same scene consisting of an equal number of smaller versions of the same objects, because some of the smaller objects will have decayed to the lower LOD models. Fewer triangles to render means less load and faster rendering. In theory, this extends to streaming, because an object that is far away only has the lower LOD model data requested; thus the viewer downloads less data. In practice, with modern networks, the use of the CDN, and larger local caches, this is of less value, and that is why I think we'll see those metrics revamped under project ArcTan when it appears.

    Size has an inverse relationship with physics. If you are using a triangular mesh physics shape, then the smaller the triangles, the more tests have to be done to determine collision. This DOES impact the server, because that is where the physics engine runs. The reason for the 0.5 limit (which is rather arbitrary and possibly too large, but that's a separate argument) is to prevent the physics cost going asymptotic as the scale tends towards zero. I do struggle to justify the fixed cost for analysed hull physics based on this argument, but again, that's another matter for another day.

    This animation tries to illustrate the problem with an extreme example. In the best case, the physics engine tests 2 triangles for intersection with the ball in order to decide if they collided. In the worst case shown here, it may have to test up to 1024 triangles; clearly that is a lot more work (see the sketch below). (Taken from http://beqsother.blogspot.com/2017/10/blue-sky-thinking-physics-view-explained.html)
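    As a rough illustration of that last point, here is a sketch using a naive uniform grid and my own illustrative numbers (not the physics engine's actual broadphase) showing how the number of candidate triangles under the same ball grows as the floor's triangles shrink:

```python
import math

# Illustrative sketch: how many triangles of a flat, triangulated floor sit under
# a ball's bounding box as the floor is subdivided more finely. Grid alignment is
# ignored and this is not the physics engine's actual logic; it only shows the trend.

def triangles_under_ball(floor_size: float, subdivisions: int, ball_diameter: float) -> int:
    """floor_size x floor_size floor split into subdivisions x subdivisions quads,
    two triangles per quad; count the triangles in cells the ball's box spans."""
    cell = floor_size / subdivisions
    cells_spanned = max(1, math.ceil(ball_diameter / cell))  # cells covered along one axis
    return 2 * cells_spanned ** 2

# A 10 m floor colliding with a 0.5 m ball:
for n in (1, 10, 80, 320, 640):
    print(f"{n:4d} subdivisions -> {triangles_under_ball(10.0, n, 0.5):5d} candidate triangles")
# 2, 2, 32, 512 and 2048 triangles respectively: same ball, far more tests.
```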
  5. Beq Janus

    Weird download impact calculations on mesh upload

    No apologies needed; we all learn every day, and if we don't, then we probably weren't trying anything new.
  6. Beq Janus

    Weird download impact calculations on mesh upload

    Wait, if I upload a texture with only one channel it won't add more channels to it, will it? I mean, there is absolutely no reason to have more than one channel for a specularity map, and that's not new; that will have been the case ever since SL introduced specularity maps, whenever that was. Does it really just inflate the textures? o.o

    While the viewer has support for various image formats with varying numbers of channels, by the time we are rendering it such things are lost. I'd have to dig around a lot to prove that we can even upload a single channel greyscale image these days. In theory you can, but in practice I am less certain; it may well get converted to 24-bit RGB colour space irrespective of what you give us.

    However, the statement that "it's a specular map so it is just one channel" needs some clarifying, because that's not at all how it works, and never has been. In Second Life there are 5 channels that control specularity. First we have the 4 channels of what we call the spec map. The first 3 channels (RGB) control the per-texel colour reflection; these get combined with the overall tint specified in the specularity settings for the material. Then we have environmental reflectivity, which is more or less the equivalent of traditional "shininess" and is controlled by the alpha channel; this gets combined with the environment intensity setting. The 5th channel is stored as the alpha channel of the normal texture and is more or less a roughness map. It controls the sharpness of reflections (i.e. how much light is scattered by that surface) and gets combined with the per-material glossiness value (see the sketch below). There is no way to control just an individual channel, which does cause overhead for some types of material, but that's how materials work. This link explains it fully: http://wiki.secondlife.com/wiki/Material_Data#Texture_Channel_Encoding
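    To illustrate how those five channels combine, here is a minimal Python sketch based on the description above; the names and the simple multiplicative blend are mine, not the viewer's actual shader code.

```python
# Minimal sketch of how the five specularity-related channels combine, per the
# description above. Names and the simple multiplies are illustrative only.

def combine_specularity(spec_rgb, spec_alpha, normal_alpha,
                        material_tint, environment_intensity, material_glossiness):
    """spec_rgb / spec_alpha: RGBA of the spec map texel (0-1 floats).
    normal_alpha: alpha channel of the normal map (roughness-like, 0-1).
    The remaining arguments are the per-material settings."""
    specular_colour = tuple(c * t for c, t in zip(spec_rgb, material_tint))
    environment_reflectivity = spec_alpha * environment_intensity
    reflection_sharpness = normal_alpha * material_glossiness
    return specular_colour, environment_reflectivity, reflection_sharpness

# Example: a neutral grey spec map texel, full alpha, mid roughness
print(combine_specularity((0.5, 0.5, 0.5), 1.0, 0.5,
                          material_tint=(1.0, 1.0, 1.0),
                          environment_intensity=0.8,
                          material_glossiness=0.6))
```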
  7. While the conversation has moved on quite a bit, one of the initial questions asked was "Is this to do with the move to cloud?" The simple answer is no, the primary reason being that none of that work has hit the grid properly yet. The less simple answer is that your textures have not come from Linden Lab servers for a number of years; they went to "the cloud" quite some time ago, but it is a different use of cloud than what is being played with at the moment.

    Assets are served by a Content Delivery Network (CDN). These are services designed to manage (and optimise) bandwidth use on the internet and are typically much closer to you than Linden Lab servers are (of course, for a small number of people that is not necessarily true, Arizona residents specifically). A CDN has the assets uploaded and replicated globally, typically across its major data centres; it then has smaller edge nodes that provide localised caching nearer to you. The use of a CDN gives you faster access and takes the load away from the sims.

    In the past, assets were delivered by UDP from the sim, but this changed to HTTP distribution some time ago. There are certain asset types that can still use UDP, but even they will be disabled this week, and at that point all content will come via the CDN. http://wiki.secondlife.com/wiki/Release_Notes/Second_Life_RC_Magnum/19#19.02.21.524633
  8. Beq Janus

    MAV_FOUND_DEGENERATE_TRIANGLES

    Most likely? I'm far from trusting that these error messages mean anything that serves the hapless user; more like the house's "good luck, sucker". As to the horrifyingly tiny triangles, taken in scale with the total size of the build (avatar scaled), they just may be. That top landing's odd shape consists of vertices at each corner that have to be joined somehow. The standalone stairs I started with were all nice and lovely before I had to integrate them into a 45 degree stepped wall. Why? I'm asking that same question.

    @Chic Aeon is exactly right. A degenerate triangle is one that is long and thin to the point where the length of the shortest edge is far smaller than the other two (see the sketch below). You appear to be using an older Firestorm, but if you have a look at the upload using the latest Firestorm it (should) block you from even attempting that upload and highlight the problem areas in RED. The older versions of FS use the same "highlighting" as other viewers; you can see the problems "highlighted" in your image, it just makes a very poor fist of showing them.

    As has been noted, the solution is to simplify the shape as much as possible; a slope rather than steps is the typical solution. As you observe, the bounding box MUST match that of the visible mesh; if not, the physics model is stretched (or shrunk) to match. In the case where the stairs have a bannister or balustrade that is not part of the physics model and thus makes the visible model far taller, you have to place a single vertex (it is often easier to place a single triangle) in the physics model that fully corresponds with the visible mesh. This then prevents the stretching.
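    For anyone curious what "long and thin" means in practice, here is a rough Python sketch of one way to flag such triangles; the ratio test and the threshold are illustrative, not the uploader's actual validation code.

```python
import math

# Rough sketch of one way to flag "long and thin" (degenerate) triangles.
# The ratio test and threshold are illustrative, not the uploader's actual check.

def edge_lengths(a, b, c):
    """a, b, c are (x, y, z) vertices; returns the three edge lengths."""
    return [math.dist(p, q) for p, q in ((a, b), (b, c), (c, a))]

def is_degenerate(a, b, c, ratio_threshold=0.01):
    """Flag a triangle whose shortest edge is tiny compared to its longest edge."""
    edges = edge_lengths(a, b, c)
    longest = max(edges)
    if longest == 0.0:
        return True  # all three vertices coincide
    return min(edges) / longest < ratio_threshold

# A healthy triangle vs. a sliver joining two nearly coincident corners
print(is_degenerate((0, 0, 0), (1, 0, 0), (0, 1, 0)))        # False
print(is_degenerate((0, 0, 0), (1, 0, 0), (1, 0.001, 0)))    # True
```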
  9. Beq Janus

    Idle thoughts for idle builders

    Isn't that Minecraft?
  10. This I can believe, which comes back to the simple fact that this feature is a debug setting that reads values directly from the image buffer. It then displays them in the most common normalised form for representing 8-bit colours, which matches what you would most likely use when setting RGB values in Photoshop or similar. The problem here comes back to the fact that people continue to point users to functions that are not intended for their use, give poor advice, and then we get blamed for it being obscure and hidden or (apparently) too easy to find and not well enough documented.

    The section you are missing is in the page I gave you that documents the glReadPixels() call, where it mentions that the values are converted to the range 0-255 (2^8 - 1); a quick sketch of that conversion follows below. A more comprehensive list of conversions is given on the OpenGL 4 page: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glReadPixels.xhtml

    I think the documentation is perfectly adequate given the intended purpose of the feature. We do not provide any guarantees as to what those values should be or can be used for. In this instance, such advice should be documented by whoever it is suggesting users do this ill-advised sampling. As I stated before, disabling ALM will give you more consistent results, as it removes some of the nuanced lighting effects that will be at play, but I don't hold much confidence in the results being any better than doing this by eye. Colour matching is a poor relation to having a proper texture applier; modern skin textures are not a single tone, and as such matches will always be at best approximate.

    I do find it amusing that some users feel so strongly that they are entitled to make demands over where we (myself, other developers/testers/support staff/mentors, etc.) should be spending the time we give freely. You'd do better to contact the creator you are trying to match and ask them to provide a sampled average tone for an area around the joint that you are matching, based on the diffuse texture that they have.
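    For reference, that conversion works roughly like this; a minimal sketch of the fixed-point mapping the OpenGL reference page describes for unsigned byte reads, not viewer code.

```python
# Minimal sketch of the normalised-float to 8-bit conversion described in the
# glReadPixels documentation for unsigned byte reads. Not viewer code.

def float_to_byte(c: float) -> int:
    """Map a normalised colour component in [0.0, 1.0] to an integer in 0-255."""
    c = max(0.0, min(1.0, c))
    return round(c * (2 ** 8 - 1))

print(float_to_byte(0.0))   # 0
print(float_to_byte(0.5))   # 128 (the mid grey you would type into Photoshop)
print(float_to_byte(1.0))   # 255
```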
  11. As has been noted, it's a debug tool and it is pretty useless as a means of colour matching. As it happens, it does exactly what the Firestorm-provided documentation says it does. The fact that this is useless for the specific task of colour matching does not make the documentation incorrect. https://wiki.phoenixviewer.com/fs_quick_preferences The documentation is available by clicking the '?' on the quick prefs window. Personally, I would not have added that to quick prefs; I don't know the history of those choices. It should be noted that it is an ancient setting that has become more and more irrelevant over the years as the shading technology has advanced. If you disable ALM you may get a slightly "better" result, depending on what you are trying to do.

    The code that Whirly posted is the correct code. Roughly translated, it says the following:

        IF the "show colours under cursor" debug setting is enabled (and we don't have NVidia debugging running) THEN
            get the current cursor position in the screen image (note this is a 2D location in the "bit map" you see in your window)
            read the pixel values of the colour at that point on the screen
            show the values as text

    So what it is reading is the pixel data on the screen, showing the user the RGB and Alpha values stored by OpenGL for the pixel currently beneath the cursor. Exactly as it states it does; however, people interpret this as different things. To some extent this relies on the way that we process things in our own minds too. It is a distant relation to the object colour, having been blended and lit based on the viewing angle, lighting, shadows, alpha, etc., yet when we see a white cube in shadows, lit from above, we process that as "oh look, there's a white cube over there in the shadows". If we didn't, none of this tech would work!

    https://www.khronos.org/registry/OpenGL-Refpages/es2.0/xhtml/glReadPixels.xml is a place to better understand the calls being used (see the sketch below for the gist). Some things to keep in mind: firstly, it is a single pixel sample, so any dithering or blending happening in the shaders is not captured by this; it is a point sample and nothing more. Furthermore, I doubt the alpha value has any valid meaning for a typical user's use of this feature, for while the alpha channel will exist in the framebuffer (the storage used for the screen), and alpha channels are processed by many of the operations that occur in the shaders, the alpha channel is disabled by default in the framebuffer and not used, and if you think about it, unless your desktop was showing through the window, the total summed alpha of that point on the screen render must be 0. This may further raise the question as to why it is shown, to which the answer is "it's a debug setting"; just because that value may mean nothing when sampling a standard scene, it may be valuable when debugging and developing.
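    The gist of that logic, sketched in Python with PyOpenGL: this assumes a current OpenGL context and a cursor position already converted to window pixel coordinates (origin at the bottom-left, as glReadPixels expects). It is an illustration of the call involved, not the viewer's actual C++ code.

```python
# Sketch of the "show colours under cursor" logic described above, using PyOpenGL.
# Assumes a current OpenGL context and a cursor position in window pixel coordinates
# (origin bottom-left). An illustration only, not the viewer's actual C++ code.
from OpenGL.GL import glReadPixels, GL_RGBA, GL_UNSIGNED_BYTE

def colour_under_cursor(x: int, y: int):
    """Read the single framebuffer pixel at (x, y) and return (R, G, B, A) as 0-255."""
    pixel = glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE)
    r, g, b, a = bytes(pixel)[:4]   # flatten whatever container PyOpenGL returned
    return r, g, b, a

# e.g. print("colour under cursor:", colour_under_cursor(cursor_x, cursor_y))
```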
  12. The mathematical basis is linear; the "bi" represents the dimensionality. Bilinear is a combination of two linear functions, in the same way that trilinear is applied in 3D space. When dealing with raster images, bilinear makes the most sense and is most familiar to people, but perhaps they deliberately avoided the term in case of silliness from lawyers (can you patent-troll maths?) or to deal with pedantic users making 1 pixel high images 🙂 Compare that with Lanczos, which is only ever referred to as Lanczos, not "bi-Lanczos", hence my *shrug*. You could, therefore, argue that citing the underlying mathematics is more consistent. That said, I don't know of Lanczos being used outside of video image processing, though I am sure it is somewhere.

    Calling them bicubic makes sense in the same way that bilinear does, because they are both the result of combining two one-dimensional functions. An example of a cubic function in linear space is when a curve is fitted to a set of data points by creating a continuous mathematical function for that data; a Bezier curve is a cubic function. In the bicubic case the curves are fitted in two dimensions, allowing the points to be interpolated, whereas in the linear case it is a far more rudimentary weighted average that is used, giving a simple gradient (see the sketch below).
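    As a concrete illustration of "a combination of two linear functions", here is a minimal sketch; real resamplers also handle edge clamping and repeat this for every output pixel.

```python
# Minimal sketch of bilinear interpolation as two 1D linear interpolations.
# Real resamplers also handle edge clamping and iterate over every output pixel.

def lerp(a: float, b: float, t: float) -> float:
    """1D linear interpolation between a and b, with t in [0, 1]."""
    return a + (b - a) * t

def bilinear(p00, p10, p01, p11, tx, ty):
    """Interpolate between the 4 nearest samples: lerp along x twice, then once along y."""
    top = lerp(p00, p10, tx)      # along x on the upper row
    bottom = lerp(p01, p11, tx)   # along x on the lower row
    return lerp(top, bottom, ty)  # then along y between the two results

# Sampling a point one third of the way across and half way down a 2x2 neighbourhood
print(bilinear(0.0, 1.0, 0.0, 1.0, tx=1/3, ty=0.5))  # 0.333...
```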
  13. All images in Second Life carry roughly 33% overhead baggage for all the discard levels (effectively CPU-side mipmaps). Which discard level is shown depends upon the screen-space resolution of the texture: discard 0 is full size, discard 1 half, discard 2 quarter, and so on... On the whole the viewer does an OK job at that, OK but not better than OK. It's briefly mentioned here: http://wiki.secondlife.com/wiki/Image_System (a quick back-of-the-envelope calculation follows below).
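    A quick back-of-the-envelope sketch of where that ~33% figure comes from, assuming each discard level halves the width and height as described above:

```python
# Where the ~33% overhead figure comes from: each discard level halves the
# width and height, so each level holds a quarter of the pixels of the one above.

def discard_overhead(base_resolution: int = 1024, levels: int = 5) -> float:
    """Extra pixels stored in discard levels 1..levels, as a fraction of discard 0."""
    base_pixels = base_resolution ** 2
    extra = 0
    for level in range(1, levels + 1):
        size = base_resolution >> level          # halve width/height per level
        print(f"discard {level}: {size}x{size}")
        extra += size ** 2
    return extra / base_pixels

print(f"overhead: {discard_overhead():.1%}")     # ~33% (1/4 + 1/16 + ... ≈ 1/3)
```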
  14. In case you didn't find it (though I would guess you have by now), there is no need to poke around in debug to disable upload charges. Use the preferences search and type "upload"; you'll find the following highlighted. 😉
  15. We've seen a number of questions already arising about a "debug setting that miraculously improves texture quality". I'd like to explain the background and the underlying facts. Firstly, though, let's establish a couple of facts:

      • There is no magic button or debug setting to improve the resolution or quality of all textures.
      • There is no way to display textures of a resolution greater than 1024x1024 in Second Life.

    So what is all this muttering about, and is there any substance to it? The "muttering" stems from some investigation by @Frenchbloke Vanmoer that was published by @Hamlet Au on his New World Notes blog with the title "How To Display Extremely High-Res Textures In SL's Firestorm Viewer", and in spite of the headline's conflict with the facts listed at the head of this post, yes, there is substance to this news as it happens.

    I'll keep this post relatively short. If you want to see more rambling on how and why @Frenchbloke Vanmoer hit upon something interesting, you can read about my subsequent investigation in my blog post, compression depression - tales of the unexpected. The bottom line is that, whether by luck or by judgement, the Second Life viewer uses a bilinear resampling algorithm when it resizes images. Until yesterday I, like many others, and I would suspect most of you reading this, had somewhat slavishly followed the generally accepted advice that bicubic resampling gave better results, more specifically that bicubic-sharper was the ultimate "best for reduction" choice. The evidence that Frenchbloke stumbled upon goes contrary to that advice and, in all my tests so far, for the purpose of texturing in Second Life, where you typically want to retain high contrast details, bilinear gives better results.

    I should re-assert here: you do not need ANY debug setting. The original article used an obscure debug setting, but it was only a means to an end; you are in general far better off, and have far more flexibility, if you use your photo tools as you always have.

    So what are bilinear and bicubic, and why do we care? When you downsize an image, information (detail) gets discarded; deciding which information to keep and which to lose is behind these choices. All resampling methods try to decide which data to keep, or how to blend the data into some kind of average value that will please most people. Put simply, a bilinear sample takes the 4 nearest points to the current pixel and produces a weighted average of those as the new value for the resulting output pixel. Bicubic takes this further, using 16 adjacent points to form its result. By virtue of the larger sample you get a smoother average, which ultimately is why it fails us when we want to preserve details. On the flip side, for smooth gradients you may find more "banding" using bilinear sampling.

    Why should we not use the debug setting? Firstly, as a general rule, debug settings are not a good thing to go playing with. They can frequently have side-effects that you do not realise, and we often find that people tweak some random setting because "XYZ person recommended it"; perhaps it achieves their goal at that time, or, as is often the case, it seems to fix things but doesn't really. In any case, they forget the changes and move on. A week or so later they are furious because things don't work anymore; they've forgotten all about the debug changes, of course.

    More importantly in this case, if you use the max_dimension setting to force the viewer to rescale for you, then you will only see the benefit in 1024x1024 images. 1024x1024 is appropriate to large texture surfaces, but not so much for smaller objects. If you can use a 512x512 you are using a quarter of the memory of a 1024x1024; that can make quite a difference to the performance of a scene. Many people remark that using a 1024 is the only way to get the detail that they feel they need. I urge you all to take the lesson here as an opportunity to increase the clarity and sharpness of lower resolution textures by resizing from the large-form originals directly to the target size in your photo tool of choice (see the sketch below). Don't forget, and this may sound obvious, you need to have high-resolution images to start with; you cannot create something from nothing. And whatever you do, don't save the resized image to disk as JPEG before uploading; use TGA or PNG, both of which are lossless.

    Give it a try today, and raise a glass to Frenchbloke while you marvel at the increased detail.

    A quick example: my blog post above shows a worked example, but I thought I would show you another, on a natural scene. Here, an original high-resolution image has been resampled down to an SL-friendly 1024x1024 using both methods (entirely within Photoshop, to avoid all doubt around various other compression factors). First is the bilinear: https://gyazo.com/545a21efb514ed16051f791ea9d527c4 Second, I give you the bicubic: https://gyazo.com/a969ec986746ced323e6f2f0ddbda0e8 On their own, they don't look that different, but the bilinear shows a lot more detail, which is most noticeable in areas of high contrast such as the steps on the hillside.
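    If you want to try the comparison outside of Photoshop, here is a small sketch using the Pillow imaging library. The filenames are placeholders, and Pillow's BICUBIC is not identical to Photoshop's "bicubic sharper", so treat this as an approximation of the experiment above rather than an exact reproduction.

```python
# Small sketch of the bilinear vs bicubic comparison using Pillow.
# Filenames are placeholders; Pillow's BICUBIC is not identical to
# Photoshop's "bicubic sharper", so treat this as an approximation.
from PIL import Image

original = Image.open("high_res_source.png")          # your large-form original

bilinear = original.resize((1024, 1024), Image.BILINEAR)
bicubic = original.resize((1024, 1024), Image.BICUBIC)

# Save losslessly (PNG or TGA) before uploading; never re-save as JPEG.
bilinear.save("downsized_bilinear.png")
bicubic.save("downsized_bicubic.png")
```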