polysail

Everything posted by polysail

  1. Thank you for the JIRA link Lucia. Commenting on that JIRA made me realize that my test wasn't entirely thorough here: as you will note in the code snippet above, it does not contain a "collision" event, only a collision_start event, as that was the topic of this thread. If you add "collision" and "collision_end" event handlers to the "Bonk" prim as well (a sketch follows below) and do some tests, a new picture of what's happening emerges.

If you drop a physical cube on top of a prim with a collision-detect event set up in it, the collision_start event triggers on impact, collision continues to issue updates until the physics engine brings the cube to rest, and then the collision_end event happens. Working as intended. Avatars w/o a collision HUD do the same thing: when the avatar hits the prim, collision_start triggers, the collision event continues until the avatar is stationary and at rest, then collision_end triggers. Same as the prim. Working as intended.

Avatars WITH a collision HUD behave differently. When an avatar WITH a HUD hits the prim, collision_start triggers, then collision events continue to trigger 8 times per second, no matter what the avatar is doing. The avatar can be perfectly still and motionless for hours, and it will still be triggering "collision" 8 times per second. This happens both in the HUD collision event and in the "Bonk" prim. Both scripts stream updates at 8 updates per second with their "collision" event until the avatar vacates the "Bonk" prim, either by walking off or flying, whereupon the collision_end event triggers. This may be closer to how the original poster wanted collision events to be handled, but it is not what the system is designed to do. The reason collision_start and collision_end stop "randomly happening" while the avatar moves around on top of the surface is that it's trapped in an endless (erroneous) collision loop, from which there is no escape.

TLDR; Avatar collision HUDs don't "suck up all the sim time" and prevent new events from happening; they throw the ground prim into the same error state that they're in.
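For anyone who wants to reproduce this, here's a minimal sketch of the extended "Bonk" prim described above, with all three collision handlers (condensed from my scripts in the next post ~ drop it in a large flat prim and compare a dropped physical cube against avatars with and without a collision HUD):

    default
    {
        collision_start( integer num )
        {
            llOwnerSay( "Bonk START " + llGetTimestamp() );
        }
        collision( integer num )
        {
            // streams ~8x per second while something is in contact
            llOwnerSay( "Bonk MID " + llGetTimestamp() );
        }
        collision_end( integer num )
        {
            llOwnerSay( "Bonk END " + llGetTimestamp() );
        }
    }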
  2. So ~ this has been nagging at the back of my mind for quite a bit now... I'd proved to myself beyond a doubt, before even making my first post here, that it wasn't a Firestorm-specific thing. I'd gotten a large number of collision events on the SL Release Viewer, but I did notice a rather stark disparity between wearing an AO HUD and not wearing one, and I could not figure out why that would be the case. This led me down a bit of a rabbit hole ~ I'll try to keep it fairly concise, but this is a bit of a story.

All tests for this were done on Second Life Release 6.6.0.571939 (64bit), so that completely removes Firestorm from the equation. However, it'll come back into this later~

I made the following code for the ground prim; it just says "Bonk" anytime someone walks on it. Putting this code in a large flat prim and standing on it creates what I call a "Bonk" prim, as it frequently goes "Bonk" when you walk around on top of it.

    default
    {
        state_entry()
        {
        }
        collision_start( integer bonk )
        {
            llOwnerSay( "Bonk " + llGetTimestamp() );
        }
    }

Interestingly enough, when walking around on this prim without a HUD-based AO (i.e. a ZHAO-style Animation Override HUD) I got tons of random collision events (as you can see in a few posts above). However, when I wore an AO HUD, the prim I was walking on no longer gave me random "Bonk" messages as I was walking over it. This was the mystery that had been bothering me... I solved it today by making the following script, putting it in a basic prim, and wearing it as a HUD:

    vector UNIT_VEC = < 1.0, 1.0, 1.0 >;
    default
    {
        state_entry()
        {
        }
        collision_start( integer hudBonk )
        {
            llSetText( "HUD BONK Start " + llGetTimestamp(), UNIT_VEC, 1.0 );
        }
        collision( integer hudBonk )
        {
            llSetText( "HUD BONK MID " + llGetTimestamp(), UNIT_VEC, 1.0 );
        }
        collision_end( integer hudBonk )
        {
            llSetText( "HUD BONK END " + llGetTimestamp(), UNIT_VEC, 1.0 );
        }
    }

This is what it looks like while running: https://gyazo.com/a756c77d1bdd7ed19e64e63d574d9afa

Note how the llSetText on the prim is constantly being triggered by the HUD collision code. It's updating every fraction of a second. While wearing a HUD with the above code running in it, if you walk on the prim with the "Bonk" code in it, events cease to trigger. I am assuming that the HUD is monopolizing every single collision cycle the server is willing to give that avatar, and thus the prim on the ground can't manage to trigger an event, except for very large collisions, as it seems not all collision events are treated equally. Large "Bonks" get event priority over tons of small ones.

"But Liz, what does this have to do with Animation Overrides?" Let's have a look inside the ZHAO code, ZHAO-II-Core lines 1038-1048:

    collision_start( integer _num )
    {
        checkAndOverride();
    }
    collision( integer _num )
    {
        // checkAndOverride();
    }
    collision_end( integer _num )
    {
        checkAndOverride();
    }

This is the main update loop for a ZHAO (and ZHAO-derivative) AO. This architecture was used by most AOs prior to the AO scripting updates, and as such many are still widely in use today. So ZHAO AOs flood the server with constant collision update checks too... This is all provable with code ~

Now back to the problem at hand... This brings us back to Firestorm. Firestorm has a built-in AO that does not saturate the server with constant collision event update requests. This leaves the server with cycle time for that avatar to process other collision events, such as our "Bonk" prim.
Not all Firestorm users use the built-in AO, but I'm willing to bet that enough of them do that it becomes "statistically relevant": on average, Firestorm users are less likely to be using ancient ZHAO-style AOs. So ~ statistically, Firestorm users are flooding out their collision events less often than non-Firestorm users. Sooo ~ I'm guessing that Firestorm users "cause more random collision events" on "Bonk" prims because the prim can get enough server cycle time to process its collision events, whereas it's drowned out for anyone wearing an old HUD-based AO. Ironically ~ this "problem" of random collision event triggers exists because Firestorm fixed something... (A sketch of the timer-driven alternative follows below.) Anyhow !~! That's all from me ~ Take Care! - Liz
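For illustration only ~ this is not how Firestorm's viewer-side AO actually works, and checkAndOverride() here is just a stub standing in for ZHAO's real logic ~ a sketch of the lighter-weight script architecture: drive the override check from a timer instead of from collision events, so the avatar's collision queue is left alone:

    // Sketch: timer-driven override check instead of collision-driven.
    checkAndOverride()
    {
        // ZHAO's animation-state logic would live here.
    }

    default
    {
        state_entry()
        {
            llSetTimerEvent( 0.5 ); // poll twice a second, no collision spam
        }
        timer()
        {
            checkAndOverride();
        }
    }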
  3. Except collision_start has always worked this way, for as long as I can remember. Sometimes it just generates hit events constantly for things walking on the prim. Not hypothetically at all. https://gyazo.com/83519adcd8f031c818e2dee643efb6db It's simply not a "viewer thing".
  4. Collision events trigger exactly the same way regardless of what viewer the user is on. You often get two collision_start triggers as an avatar wanders onto a prim: once when it hits the edge of the prim, then a second time when it lands on the top surface. https://gyazo.com/39451e795fb1e018a203abe0cb23cab8 If the prim is close to the sim terrain and the terrain is uneven, the avatar might collide with the terrain and get jittered up and down by the terrain height rather than the prim's bounding box, also making them 'tap' the top of the detection prim frequently. (A tiny logger for spotting the double trigger follows below.) This, again~ doesn't have anything to do with the viewer that the person triggering the collision_start event is on. The SL standard viewer triggers just as many "random" collision events as Firestorm does. It's always done this, and quite frankly I think this is an excellent example of how innate biases get repeatedly "confirmed" when no one bothers to check the facts. If you dislike X, suddenly everything that goes wrong is X's fault. Doesn't matter whether the boogeyman is Firestorm, the Federal Government ~ or "thems aliens that's always watching us". No one is saying you have to love Firestorm, but please keep your confirmation bias out of this.
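If you want to watch the double trigger happen, here's a tiny hypothetical logger ~ it just reports the gap between successive collision_start events, so the edge-hit followed by the top-surface hit shows up as a sub-second interval:

    float gLastHit; // llGetTime() value of the previous collision_start

    default
    {
        collision_start( integer num )
        {
            float now = llGetTime();
            llOwnerSay( "collision_start ~ " + (string)( now - gLastHit ) + "s since the last one" );
            gLastHit = now;
        }
    }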
  5. It is an absolutely ancient bug as well; my 2018 bug report more or less duplicates https://jira.secondlife.com/browse/BUG-5591, which was filed back in 2014.
  6. I believe Rider took a look at it and said what he found gave him nightmares, and he gave up. I'm not sure it'll ever be fixed.
  7. Creating an entire piece of secondary external software is a largely obtuse solution to an easily solvable problem. The notion of "Local Mesh" has been discussed, and even partially developed, by a number of Third Party Viewer devs, to allow people to view a DAE file inside SL without actually uploading it. This would allow for the addition of textures as well. An entire external software package is largely a silly idea when it's not terribly difficult to just add a temporary upload feature. Yes, the content creation pipeline needs work, but not via a separate software package.
  8. I have been advocating for doing a lighting and reflections overhaul before any new content-creation-pipeline graphics changes, loudly, stubbornly, even belligerently, for quite some time. It's been met with a deafening silence from the decision makers at the Lab. The only break in that silence was Vir, after much indignant protestation on my part, asking in what I perceived to be a somewhat irritated tone, "what would updating reflections [[ and by extension lighting ]] actually help with", while we were discussing the topic of PBR at the most recent Content Creator Meeting... or something to that general effect. I don't remember the exchange perfectly word for word, so my recounting of it might be somewhat flawed... Either way, my advocacy for such a change has not waned.

As far as I'm concerned, there is not much to be lost from creating a new "Experimental High Definition" render mode with a fairly short list of new features:

  • Better sky interpolation, and an actual reflection model that isn't "Environment Shiny overwrites Specular Color".
  • Lumen (LLumen?) based lighting values. Though we're still in SL, so there will have to be some auto-convert option from 0-1.0 intensity into a lumen scale; add a "Lumens" spinner next to the Intensity dial on the UI that spins with the Intensity slider. This might force us to allow greater than 1.0 intensity, but I'm pretty sure that's okay ~ 'cause in the present system, if I put two "1.0" intensity lights right next to each other, I get a nice 2.0-intensity light... We can fiddle with it; this is largely just a UI change. The nice part is that we get to just make up a lumen value for 1.0 intensity, so we can implement a proper inverse-square lighting falloff (spelled out below) and have things not look too poo.
  • A user-adjustable "default world glossy value" that fits as a stand-in for all assets that are diffuse-only. Does it lack a glossiness parameter? That's fine ~ make one up. I nominate 0.9 roughness, aka 0.1 glossy, for the world default value. That's mostly matte but will reflect a small amount of light, and it'll do Fresnel things, which is cool. Most fabric, unpolished wood, asphalt, brick, unpolished stone, and boring paint will render out quite fine with a 0.1 glossy value.
  • Auto camera exposure (like Joe highlighted in the Cyberpunk 2077 walkthrough).
  • Fresnel reflections on surfaces where the lighting angle requires it (which we STILL don't have, despite the current SL renderer being littered with code that has this notion in mind).
  • Possibly some sort of reflection probe, or a 360-snapshot cube map "volume prim" that can be plopped down in an area as an override for skymap reflections. (People already make these sorts of things with kind of hacked-together methods using 6 projection lamps, one from each side.)

Doing this removes the expectation on the part of the users that "yes, your stuff will look the same", which was the downfall of EEP. No more of that; new model, it's gonna look different!! It SHOULD look different!! If you don't like it? No big deal ~ just turn it off and use regular ALM. Call it EEP Try #2 ~ or something, who knows... though that might leave a bad taste in people's mouths~

None of this stuff is OpenGL or Vulkan or Metal or DirectX specific. It's a short list of fixes, and it lays the groundwork for everything else. It's just simple math additions to the existing environment. (Okay, not that simple..
blending between cubemap spaces is actually kinda annoying and difficult to implement~ but that's the only 'actually hard thing' that requires a change to the actual SL environment assets. Everything else is low-hanging fruit.)

This lays the groundwork for PBR. It should be done first. It's not a huge massive project; it's a small bite-size thing that the Lindens can toss a few hundred man-hours at and (hopefully?) get results. Or Beq could probably do most of it in her rare moments of spare time, in... like, a couple of months ~ 'cause she's awesome like that. It completely boggles my mind that this is not the 'next project'.

EEP 1.0: Make sure we get parity for everyone who wants it.
EEP 2.0: Blow people away with cool new graphics stuff. Woooo! EEP 2.0 will be required for viewing PBR assets.

"But Liz, what about all the people who don't like it ~ then we have a new asset type that can't.." ~~ Yeah yeah, I know. That's why I've also been demanding that any implementation of PBR include a "diffuse-only baked lighting slot" in which the content creator fills in a DIFFUSE texture fallback for their PBR asset.

Why isn't this the roadmap? It's so obvious to me that this is how you do this...
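For what it's worth, the "proper inverse-square falloff" mentioned in the list above is just the point-source illuminance equation. Assuming (my assumption, not an LL spec) that intensity 1.0 maps to some nominal flux $\Phi_0$ in lumens, an intensity-$I$ light at distance $d$ gives an illuminance of

$$E(d) = \frac{I\,\Phi_0}{4\pi d^{2}}$$

lux. The only free choice is $\Phi_0$; everything after that is fixed physics, which is exactly what makes the lighting values portable between applications.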
  9. That's the thing, Joe ~ I've been fairly mute about it myself until now, but now that the Lindens are loudly tooting their horns about "we're going to be implementing PBR", seemingly without any understanding of what that actually entails ~ I am starting to get increasingly pedantic about what PBR means, and Beq is as well. PBR by definition isn't just 'a set of maps' and 'a reflection model' and 'light is calculated in lumens'. When someone sticks a "PBR" label on something, they are promising to the world that:

"If I feed this system the PBR standardized values for glossy varnished wood, and shine 800 lumens of soft yellow light on it, it will look as close as it possibly can to a wooden surface with a 60 watt bulb shining on it in Real Life."

"If I feed the system the PBR standardized values for a polished gold material in 20,000 lumens per square meter, to simulate broad sunlight, and I snap a screenshot of that render with a shutter speed, ISO, etc. to match a real-life camera, and I compare it to an actual photo, taken with a camera at that rating, of a gold ring in broad sunlight where the light meter for the photo registers 20,000 lumens, it's going to match up."

I'll be able to composite those two images together and it has a prayer of looking believable. PBR ~ at its core ~ is an equation. I give you a known data value for a material, I add it to a known render environment, and I get a pretty picture that matches how it should look in the real world. The promise is ~ that in "this PBR world we've created":

2 (accurate PBR texture data) + 2 (the accurate rendering environment) == 4 (the correct pretty picture)

The promise of PBR is that 2 + 2 = 4, it always will be, and that's what makes it PBR. If a system has the "PBR" label, it means that if I take these values and cram them into the input of this lighting model, I'm going to get known results. Rend3 is a PBR system; if you feed it correct values and correct lighting information, it will produce those known results.

However, when it comes to SL, we don't have light measured in lumens, and the texture input data you're using is back-converted Diffuse, Normal, and Spec data that is littered with pre-baked lighting information ((which, by the way, to echo Beq's sentiment ~ I am so so so very impressed that you were able to do. The adaptability you display is continually astounding, both with your pathfinding project and this project.)), and you're shining SL sun and SL lighting info on it, which are entirely guesswork numbers. Looking at what the PBR system of Rend3 spits out when you feed it nonsense inputs can be fun!! ~~ Beautiful even! But it's not "implementing PBR for Second Life". We don't have a reflection system to reflect the surroundings, we don't have light measured in lumens; all we're doing is feeding the Rend3 PBR systems a "mystery meat" data dump and seeing what it does with it. It lacks the promise of equivalence, which by its very definition means it's not PBR.

I would never bother to correct you on a detail so minute were it not for the fact that the Lindens are now looking and saying "well, Joe implemented PBR for Second Life, so we can too", when the fundamental promise that the data and math of PBR offers ~ "wood on a sunny day in RL is going to look like wood on a sunny day in SL" ~ is entirely absent.
The way the Lindens are talking about implementing "PBR" right now is basically "yes, what you see in the Substance Painter PBR lighting environment and what you see in SL will be totally different, but we're going to call it 'PBR' anyway." That breaks the fundamental mathematical promise of PBR. The Lindens offering to 'implement PBR without doing a lighting and reflections overhaul' is tantamount to saying:

2 + Y = N

where Y is somewhere between 0 and infinity, and N might be somewhere near 4 if you're lucky, but it might be 6 or 15 or 0.002, 'cause the environment is nonstandardized, so who @#)% knows? They don't get to call this "PBR"; it breaks the fundamental promise of PBR, and calling it such is misleading at best, and false advertising, prosecutable in court, at worst.

What you're doing is the opposite ~ you have the Rend3 environment, so:

X + 2 = X + 2

At least your equation is internally consistent, but what X is ~ is entirely a mystery. You can feed actual PBR data (2) into X and get 4, but by definition, since you're using SL data as inputs, you don't know when X will actually be 2, so the promise is broken as well. Neither model offers the required solidity of 2 + 2 = 4. If 2 + 2 sometimes equals 5, then that's not PBR, and I'm getting somewhat exhausted trying to explain that to the Lindens.

I'm terribly sorry to bring that exhausted irritation to your doorstep, as I really do admire the work you're doing. It's fantastic! It looks amazing! You're cramming SL data through a Rube Goldberg device you built, so that on the other end it produces pretty pictures. The Rube Goldberg device in and of itself is fascinating; the fact that it makes pretty pictures is frankly astounding. It's a magic trick of the highest order. But it's not PBR. The fact that you keep calling it that makes my life difficult when I have to explain to the Lindens that when they talk about "doing half of a PBR project", that's not PBR either. (For reference, the actual equation at the heart of all this is sketched below.)
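For reference ~ the "equation" at the heart of this is the standard microfacet BRDF that essentially every PBR pipeline (Substance, glTF, Rend3, etc.) shares. This is the textbook Cook-Torrance form, not anything SL-specific:

$$f(\mathbf{l},\mathbf{v}) = \frac{\rho_d}{\pi} + \frac{D(\mathbf{h})\,F(\mathbf{v},\mathbf{h})\,G(\mathbf{l},\mathbf{v})}{4\,(\mathbf{n}\cdot\mathbf{l})(\mathbf{n}\cdot\mathbf{v})}$$

Feed it calibrated material inputs ($\rho_d$, roughness, metalness) and calibrated light (lumens), and the output is deterministic. That determinism is the whole 2 + 2 = 4 promise.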
  10. Aww... Now what am I going to do with my Wednesday mornings?
  11. I'm using gmail and as of a day ago it was working just fine. So I don't think it's a gmail problem.
  12. Does each of these "Child Users" come with its own camera? Or is all of this data fed into the MVP matrix of the single user camera?
  13. Yes ~ rendering distant elements with a secondary camera, with a different near/far clip plane and a lower FPS, is how most games handle composing massive vistas. This helps with rounding errors as well as performance, since it eliminates huge Z-buffer distances. (Rough numbers below.) The Z axis in this case is relative to the camera and represents the overall depth of the scene ~ not to be confused with Z height in the SL world, which is how high up things are, and which also causes jitter due to the fact that things are rendered in world-coordinate space ~ these are compounding errors... but yeah... Like I said ~ massive changes to SL render code would be required ~ but technically all of this "is possible". The question is (as with all possible things) "would it actually materially improve anything?" I don't know the answer to that. SL is a very specific use case ~ and as animats has pointed out ~ it's not clear that rounding errors at these relatively short 1-3 km viewing distances would be material enough to cause notable jitter. I don't actually know. I'm not a graphics programmer ~ I just play one on TV.
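Rough numbers on the Z-buffer point, using the usual approximation for a perspective projection (not SL-measured figures): with a $b$-bit depth buffer, near plane $n$, and a far plane much greater than the near plane, the smallest resolvable depth separation at eye distance $z$ is about

$$\Delta z \approx \frac{z^{2}}{n \cdot 2^{b}}$$

So pushing the near plane of the secondary "vista" camera way out is what buys the precision back ~ $\Delta z$ shrinks in direct proportion as $n$ grows.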
  14. Making image planes for entire sims, in lieu of actually displaying the content of those sims, simply does not work, due to the nature of such planar representations.

They don't parallax correctly ~ meaning if you had such a thing while, say, riding a motor vehicle across the mainland ~ the trees and houses up until the image plane would move correctly ~ then the trees on the image plane would not. They also don't do vertical changes correctly ~ so if there is any Z-height difference, due to terrain ~ or say from a flying vehicle of some sort ~ or just a flying avatar ~ the illusion would break.

The thing is ~ the people who actually do want their draw distances turned up to 500+ meters are all the members of the SL community who are very interested in boating ~ flying aircraft ~ etc. You can't simply doodle their runway that's next to a mountain onto an image plane and expect them to be happy about it ~ these people care about realism and accuracy to the degree that they want to make sure the landing lights on their runway pulse at the correct number of flashes per minute to mimic their real-life counterparts.

They don't do lighting correctly either. If I place a house on a hill on a horizon line while the sun is setting, it reflects light properly, in a manner that tells someone ~ even a kilometer away ~ "there's a box there with a roof shape on it". If you replace that with an image of a house at some given time of day ~ it will necessarily look incorrect at pretty much every other time of day. Even if you try to mitigate this issue with, say, four different image sets ~ this still won't account for the differentiation in environment settings. Just using the library EEP settings "[NB] P-Haze" vs "Dynamic Richness" will yield totally different sun angles and color tones at the same "time of day". You simply cannot use baked lighting in a dynamically lit environment. It just doesn't work.

It's for the above reasons that I didn't really take the "let's image-plane an entire sim" idea seriously ~ among the myriad of other (sort of proposed??) notions ~ and instead focused on other steps to improve rendering efficiency / calculation spaces in order to improve the SL user experience. Image plane impostors don't work for anything besides foliage and other similarly constructed organic creations that have a central core with branched-out components ~ the moment you try to simulate anything with a vaguely solid form ~ the impostor breaks down catastrophically. There is a reason these are not used in modern game engines. Image planes won't keep SL relevant into the 2020's any more than having pose-ball based animations will.
  15. Yes ~ the "make sim surrounds on private islands a standardized feature" idea makes a lot of sense, I think ~ it's also the kind of small incremental change LL seems to be comfortable with. Viewing / impostoring adjacent sims / mainland sims is a bit less so ~ but as I said in my original reply ~ that's a very different ask from the notion of "make SL able to do AAA-type 'Big Worlds'" ~ which implies a necessary full coordinate-space rework.
  16. Uhm~ I'm not sure how to reply to this ~ you tell me I'm incorrect ~ then proceed to explain ~ rather precisely ~ exactly how I'm correct? Maybe my explanation wasn't clear?? Yes, SL has the 'open world' split up into multiple integrated coordinate spaces. That's a sim. When you step over a sim crossing, you go from +255 in one coordinate space to 0 in the next. But within the bounds of each sim, everything is calculated in world coordinate space. Every bone movement, every object movement, every ~ everything. You can see the errors of this visibly start to show by ~ as you indicated ~ flying to 2000 meters and watching your eyeballs shake in your head due to floating-point precision errors. (A worked example follows below.) This is in stark contrast to how most modern games handle this problem ~ by calculating the world relative to the player ~ meaning that nothing suffers from precision errors unless it's in the distance ~ which is precisely the paradigm change that I was referring to.
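A quick worked example of that eyeball shake, from standard float32 arithmetic (nothing SL-specific): a 32-bit float carries a 24-bit significand, so for a coordinate $z \in [1024, 2048)$ meters, the spacing between representable values is

$$\mathrm{ulp}(z) = 2^{10-23}\ \mathrm{m} \approx 0.12\ \mathrm{mm}$$

Tiny on its own ~ but each multiply/add in the bone-transform chain can round by up to half an ulp, and those compounding errors through the whole world-space pipeline are what actually show up as visible jitter at altitude.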
  17. There are two very different requests here. 1: "Put junk off-sim to replace the old sim-surround megaprim hacks." 2: "Have an entire exquisite horizon-line vista that is renderable from any point (and, in the case of all of these triple-A game titles showcased ~ able to be walked to as well)."

These are two incredibly different things. One is expanding upon a stop-gap hacky measure to have matte-paint type stuff exist outside a sim's numerical coordinate system ~ the other... well... the other is complicated...

SL is rendered in world space. Every entity in SL ~ including ones that you'd think had a local transform, such as a rigged avatar ~ simply doesn't have a local coordinate space. So ~ when you're swinging your arm on your avatar, while SL is aware of the skeletal hierarchy in principle ~ as far as the sim code and renderer are concerned ~ SL is going "move bone mElbow at sim location 140.444, 22.634992, 44.9294" to denote the location your elbow is moving to in world-coordinate space. Because of this lack of coordinate spaces ~ SL simply can't do the AAA-game "walk into the distance" ~ "look at all the pretty houses on the horizon line" thing; the underpinning maths simply aren't there. Floating-point numbers only have so many digits of precision. In order to fix this ~ we'd literally have to re-invent the entirety of the SL coordinate space to include object space, and update the render code. At that point ~ we might as well just re-invent the rest of SL as well. Which I'm not against in principle ~ but essentially, to 'properly address this' ~ you'd literally have to make SL 2.0 (not Sansar).

As for "seeing neighboring estates" ~ I'm a bit confused about this ask. They're private estates; that was the entire marketing point of them ~ that you don't see the adjacent sims. That's what makes them "not the Mainland". Or are you proposing that each region come with an extra surrounding 8 regions of 'make it pretty' space?
  18. Thank you Beev~ it was a lot of effort (mostly on Beq's part). I managed to erroneously (partially?) convince Beq that pretty much every step of the code was wrong before we managed to convince ourselves that it actually wasn't. But the change won't be that ground-breaking for SL. The average mesh in SL fits more or less just fine in a cube, or a half-cube volume (think dresses, beds, sofas, small bushes, rocks, etc.). The object ratio is only a factor of 2-5 off of a perfect cube. So the tangents would be off, yes, and look sliiiightly weird, but ~ not really in a manner that would be immediately noticeable. That being said ~ I am immensely pleased it wasn't just 'in my head' that something has been 'off' all these years ~ As I said in my first (very incorrect) JIRA report ~ I've been chasing this bug, in some form or another ~ for the last 5 years ~ so it's a personal victory for me, and it will help make people's lives in SL just a little bit better ~ which is nice! So thank you again @ZedConroy for pointing me in the correct direction that I needed. I was very much "interested". One last thing: Scale Matrices Suck. I routinely get them wrong... Thankfully Beq doesn't. 😆
  19. No, they are not. This is why I explicitly stated that 3ds Max is NOT a reliable tool for analyzing this. Objects taken on a tour through SL behave 100% identically to import if they originate in an inverse-normals piece of software such as Maya. I can take my test shape in Maya, import the "in a Box" version of it and the non-enclosed copy of it into SL, and see that the debug normals tool tells me they're totally wrong. IGNORE THAT. Export them ~ re-import them into Maya and compare all my vertex normals, and not a single one will be deformed. This, in combination with the code exploration of SL's vertex normal code turning up nothing but inverse_scale calcs, leads me to believe that SL is handling normals correctly for most cases, but is simply displaying in the debug tool that it's doing it incorrectly. Which is... all kinds of confusing.

However ~ we're not out of the woods yet ~ so to speak~ If I do this same experiment in 3ds Max, MANY things can change this. If I have a scale applied to my object in 3ds Max at a transform (object) level, 3ds Max will handle this with its bizarre normals * object scale matrix calc... and, as best I can tell, export those... which will require a similar parity normals * object scale matrix inside SL to get them back into parity with the system. (Which is what Beq's optional patch addresses, in addition to adjusting how normal maps are rendered ~ but it's not a true fix.) However, if you Apply XForm in 3ds Max prior to export, you will note that the moment you do this, 3ds Max recalculates all the vertex normals with normals * inverse object scale, bringing it into parity with SL and Maya.

However~ if you started off with a 14.0, 1.0, 1.0 sized object that has (1.0, 1.0, 1.0) scale (XForm applied in 3ds Max) ~ and then take this (1.0, 1.0, 1.0) scale object and import it into SL ~ SL will compress it into an internal unit-cube .SLM file. That is akin to taking your mesh object in any 3D application, scaling it down to fit into a (1.0, 1.0, 1.0) sized cube and APPLYING THAT TRANSFORM, making the object effectively a 1.0-sized cube with (1.0, 1.0, 1.0) scale, then scaling it back up to object size at the transform (object) level. In the case of our 14 meter tall box ~ regardless of what software it was sourced from ~ it is now a (1.0, 1.0, 1.0) sized object with a (14.0, 1.0, 1.0) scale. If you import that into 3ds Max, we're back to an object with unapplied XForm data, which uses normals * object scale to draw its normals in 3ds Max, and they LOOK WRONG until you Apply XForm ~ returning the object to its original 14.0, 1.0, 1.0 size with a unit identity transform. That does not mean this is how it's handled in SL. (Despite it being how Render Debug Normals indicates it is being handled in SL... it's... there are many steps to this.)

On top of this ~ absolutely NONE of the above addresses the original concern: that normal maps (note: not vertex normals themselves) in SL are displayed in a manner that is completely consistent with how the debug tool (apparently incorrectly) draws the vertex normals. This bug is weird. VERY weird. Also, it has nothing to do with the handed-ness / RGB channels of normal maps ~ an arbitrary planar normal map displays incorrectly on the side of a flattened cylinder in SL. That's not a problem with the normal map; it was baked in planar space with all the correct color channels and magnitudes, but when you stick it on a cylinder squashed flat, it makes the side of the cylinder render as if it has its vertex normals (nothing to actually do with how the normal map was created) squashed to match the bounding box, just like the render debug normals in SL seem to indicate they are (wrongly), and in parity with how 3ds Max handles un-applied object transforms. This is directly contrary to all the other inverse_scale normal calcs, both in mesh packing and unpacking. If you doubt me, try doing the same test as I did ~ ignore debug normals, turn off all atmospheric shaders, and just look at how objects reflect light. They do so in a manner consistent with having their vertex normals handled correctly (in an inverse_scale manner).
  20. Yeah. I've been down that entire rabbit hole, and came out the other side. (I think)... Remember, my very first going-in position on this was "Scale Matrices Suck, I routinely get them wrong." So my confidence level in all of this has been fluctuating wildly between "pretty high ~ but not certain" all the way down to "I have no idea what I'm doing".

Maya handles vertex normals with inverse object scale. This is NOT how 3ds Max handles it (as best I can tell). However, the only way to get 3ds Max to render vertex normals is to use the old Editable Mesh asset type instead of Editable Poly. So I'm really not entirely sure how the software handles this internally. 3ds Max has a lot of bizarre intricacies behind the scenes; this might just be 'one of those things' it does the '3ds Max way'. Which can be kinda "speshul" sometimes~

That being said: we've found code in SL now for object storage (squishification) and subsequent expansion for object rendering, both of which take the normals and multiply them by inverse scale. As long as these two operations use the same maths, then (in theory) everything regarding mesh storage and recall is actually fine. Conversely, if both calculations used object scale (like 3ds Max appears to), it would also be "okay"; however, rendering scaled objects would then have to be handled in a vastly different manner ~ like I assume it is handled in 3ds Max. But this is not presently the case inside SL. SL clearly has the maths to use inverse_scale for both calculations. However, the display of vertex normals in SL, using the debug tool (the little blue lines we look at), clearly uses object scale, not inverse scale. What this means... I honestly have no idea what is going on at this point.

If I turn off all atmospheric shaders, disregard rendered debug normals, and just analyze this with an ambient-dark environment and a single point light, SL seems to render surface normals correctly. But I can't be 100% certain that this is the case, because, again, normal maps and shaders are clearly still borked. The only two things I am 100% certain of are:

1: Inside SL, the display of a normal map on a curved object scaled flat is incorrect. WHY this is the case is not something I understand yet. Still digging on that one.

2: The display of vertex normals (rendering of debug-type info, aka drawing little lines out of the vertices), both in 3ds Max and in Second Life, is unreliable, and should not be used as a deterministic tool to decide what is going on ~ even though I used it as such in my JIRA. I realize now that may have been in error. (The textbook rule for how normals should transform is below, for reference.)
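For reference, the textbook rule all of this hinges on: for a model transform $M$, vertex normals transform by the inverse transpose,

$$\mathbf{n}' = \left(M^{-1}\right)^{\!\top} \mathbf{n}$$

and for a pure scale $S = \mathrm{diag}(s_x, s_y, s_z)$ that collapses to multiplying each normal component by $1/s_x, 1/s_y, 1/s_z$ ~ i.e. exactly the "inverse scale" maths found in SL's pack/unpack code. Multiplying by $S$ itself (the "object scale" behavior described above for 3ds Max and the debug display) is only consistent if every stage of the pipeline does the same thing, which is precisely the consistency point being made here.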
  21. The scale multiplier in the 3rd tab is a universal omnidirectional scale value (it applies to all axes equally ~ X, Y, Z), so it doesn't actually affect the normals data at all. Also, you can't zero it out ~ so there's no concern about 0-magnitude normals.
  22. @ZedConroy Thank you for puzzling through the first part of this. This has been driving me bonkers for the better part of half a decade. https://jira.secondlife.com/browse/BUG-228952
  23. Okay ~ I've done some preliminary testing. Nothing 100% conclusive yet ~ but by all indications (at least for meshes originating in Autodesk software) ~ for any meshes that aren't perfect cubes ~ during the quantization process ~ it seems to be re-calculating their surface normals using an inverse scale matrix instead of a scale matrix ~ meaning the thinner and flatter your object is, the more the vertex normals of the object are going to be distorted ~ by not only the ratio of the difference from the mesh to a standard cube ~ but by that ratio AGAIN beyond that (the arithmetic is sketched below) ~~ meaning if your initial object measures 0.25m x 1m x 1m ~ the surface normals are being calculated in a manner in which ~ in order to get them to match the original shape ~ your object must be scaled to 4.0m x 1m x 1m ~ 16 times the original mesh dimension in the axis that was "off". If my testing is correct ~ and this is the mistake...... Holy @#*%*@
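Sketching the arithmetic behind that "16 times": if an axis is scaled by $s$, the correct normal transform multiplies that component by $1/s$, so computing with $s$ instead of $1/s$ leaves the normals off by a factor of

$$\frac{1/s}{s} = \frac{1}{s^{2}}, \qquad s = 0.25 \;\Rightarrow\; \frac{1}{0.25^{2}} = 16$$

which is the "that ratio AGAIN beyond that" compounding described above.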
  24. Actually, once I hit post ~ I remembered. I'm pretty sure just exporting a mesh into the DAE file format "explicitly defines" its vertex normals. DAE is a simple format and does not allow for edge smoothing. Again, I haven't tested this, but I have a fuzzy recollection that if I export a mesh with 'regular' handling of vertex normals from either 3ds Max or Maya, just immediately re-importing it will require them to be "unlocked" again. It's just a limitation of the DAE format.
  25. Beq is 100% correct ~ all meshes are stored internally in a 'cube-like' form ~ where the mesh is scaled to fill the volume of a unit cube. (A sketch of the quantization follows below.) How this affects things ~ and whether that's "expected behavior" ~ that's... that's another question~

After reading this entire thread a couple of times ~ I'm thinking this is actually a long-buried bug. I haven't tested this personally yet, but after reading everything here and seeing the examples: SL should be capable of preserving the correct vertex normal directions while doing whatever scaling operation it needs to during the quantization process (where it converts the DAE into an internal 'Second Life mesh'). This looks like pretty definitive evidence that it does, in fact, NOT do that. Scale matrices suck. I routinely get them wrong. My guess is that the mesh uploader is also handling this incorrectly. Perhaps it's just usually not that noticeable ~ but it gets progressively worse the more your object deviates from a cubic shape? (Though that last sentence is making me ponder ~ how does it handle flat planar objects? Are they special cases?) I'm not really sure.

SL does take in the original object's vertex normals and apply them to the uploaded mesh; it doesn't simply recalculate them all from nothing. At least, I've seen it do that in cases where the vertex normals are "explicitly defined". What I mean by "explicitly defined" is the case Optimo referred to in his post, where, if you're in your 3D software attempting to edit your normals, the usual face/edge tools that set up hard/soft transitions seemingly have no effect on the object until you "unlock" them. I'm not sure of the precise mechanics of the how and why of this situation across all the different programs discussed in this thread, but I know SL does acknowledge them as inputs. How those inputs are handled ~ and whether they are handled correctly ~ well, that's a question worth asking.

It shouldn't matter what the source application is. 3ds Max solves vertex normals mostly behind the scenes, much like Maya does, unless it gets asked to import a file with "explicitly defined" vertex normals ~ if my memory serves me correctly... this is identical to how Maya handles it (I'm not 100% sure on this)? However, the behavior noted here ~ regardless of the source application ~ explains a substantial amount of the frustration I've had with the inconsistencies of how normal maps function inside SL. I've not tried setting cubic bounding boxes for all my meshes to see if it fixes all the normal map issues that I've been struggling with, but that will definitely be something I try from now on with objects that are not naturally cubic. I wonder how this all calculates with rigged meshes?
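On the 'cube-like' internal form, as I understand the SL mesh asset format (my reading of it ~ treat this as an assumption, not a spec quote): positions are quantized per component to 16-bit integers inside the mesh's bounding box, roughly

$$v_q = \operatorname{round}\!\left(\frac{v - v_{\min}}{v_{\max} - v_{\min}} \cdot 65535\right), \qquad v \approx v_{\min} + \frac{v_q}{65535}\,(v_{\max} - v_{\min})$$

That rescale is plenty precise for positions ~ it's what happens to the normals during the same squish-and-expand round trip that this whole thread is about.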