  1. Aww... Now what am I going to do with my Wednesday mornings?
  2. I'm using Gmail, and as of a day ago it was working just fine, so I don't think it's a Gmail problem.
  3. Does each of these "Child Users" come with its own camera? Or is all of this data fed into the MVP matrix for the single user camera?
  4. Yes ~ rendering distant elements with a secondary camera that has a different Near / Far clip plane and a lower FPS is how most games handle composing massive vistas. This helps with rounding errors as well as performance, since it eliminates huge Z-buffer distances. The Z-axis in this case is relative to the camera ~ it represents the overall depth of the scene ~ not to be confused with Z height in the SL world ~ which is how high up things are ~ which also causes jitter, due to the fact that things are rendered in world-coordinate space ~ these are compounding errors... but yeah... like I said ~ massive changes to SL render code would be required ~ but technically all of this "is possible". The question is ( as with all possible things ) "would it actually materially improve anything?" I don't know the answer to that. SL is a very specific use case ~ and as animats has pointed out ~ it's not clear that rounding errors at these relatively short 1-3 km viewing distances would be material enough to cause notable jitter. I don't actually know. I'm not a graphics programmer ~ I just play one on TV.
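The Z-buffer win from splitting the view into passes can be sketched numerically. This is a generic back-of-the-envelope calculation, not SL's actual render code ~ the near/far plane values and 24-bit depth buffer are made-up assumptions for illustration, using the standard [0,1] perspective depth mapping:

```python
def depth_step(z, near, far, bits=24):
    """Approximate world-space distance covered by one bucket of a
    `bits`-deep depth buffer at eye depth z, assuming the common
    [0,1] perspective mapping d = far/(far-near) * (1 - near/z)."""
    dd_dz = (far * near) / ((far - near) * z * z)  # derivative of d w.r.t. z
    return (1.0 / 2 ** bits) / dd_dz

# One camera covering 0.1 m to ~4 km: whole metres of depth collapse into a
# single bucket 2 km out, so distant surfaces z-fight.
far_step_single = depth_step(2000.0, 0.1, 4096.0)

# A dedicated far pass whose near plane starts where the main pass ends:
# sub-millimetre depth resolution at the same distance.
far_step_split = depth_step(2000.0, 256.0, 4096.0)

print(far_step_single, far_step_split)
```

Perspective depth is distributed hyperbolically, concentrating nearly all precision just past the near plane ~ which is why pushing the far pass's near plane way out recovers so much resolution in the distance.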
  5. Making image planes for entire sims in lieu of actually displaying the content of those sims simply does not work, due to the nature of such planar representations. They don't parallax correctly ~ meaning if you had such a thing while ~ say ~ riding a motor vehicle across the mainland ~ the trees and houses up until the image plane would move correctly ~ then the trees on the image plane would not. They also don't handle vertical changes correctly ~ so if there is any Z height difference, due to terrain ~ or say from a flying vehicle of some sort ~ or just a flying avatar ~ the illusion would break. The thing is ~ the people who actually do want their draw distances turned up to 500+ meters are all the members of the SL community who are very interested in boating ~ flying aircraft ~ etc. You can't simply doodle their runway that's next to a mountain onto an image plane and expect them to be happy about it ~ these people care about realism and accuracy to the degree that they want to make sure the landing lights on their runway pulse at the correct number of flashes per minute to mimic their real-life counterparts. Image planes also don't do lighting correctly. If I place a house on a hill on a horizon line while the sun is setting, it reflects light properly in a manner that tells someone ~ even a kilometer away ~ "there's a box there with a roof shape on it". If you replace that with an image of a house at some given time of day ~ it will necessarily look incorrect at pretty much every other time of day. Even if you try to mitigate this issue with ~ say ~ four different image sets ~ this still won't account for the differentiation in environment settings. Just using library EEP settings ~ "[NB] P-Haze" vs "Dynamic Richness" ~ will yield totally different sun angles and color tones at the same "time of day". You simply cannot use baked lighting in a dynamically lit environment. It just doesn't work.
It's for the above reasons that I didn't really take the "let's image plane an entire sim" idea seriously ~ among the myriad of other ( sort of proposed ?? ) notions ~ and instead focused on other steps to improve rendering efficiency / calculation spaces in order to improve the SL user experience. Image plane impostors don't work for anything besides foliage and other similarly constructed organic creations that have a central core with branched-out components ~ the moment you try to simulate anything with a vaguely solid form ~ the impostor breaks down catastrophically. There is a reason these are not used in modern game engines. Image planes won't keep SL relevant into the 2020s any more than pose-ball-based animations will.
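The parallax failure described above is easy to quantify. A hypothetical sketch ~ the 300 m object distance, 256 m impostor plane, and 10 m of sideways camera travel are all made-up numbers for illustration, not anything from SL:

```python
import math

def apparent_angle_deg(cam_x, point_x, depth):
    """Horizontal viewing angle to a point, for a camera at lateral
    offset cam_x looking straight down the depth axis."""
    return math.degrees(math.atan2(point_x - cam_x, depth))

obj_depth = 300.0    # where the real tree beyond the sim edge would be
plane_depth = 256.0  # impostor plane sitting at the sim boundary

# Viewed head-on, both line up at 0 degrees.  Now drive 10 m sideways:
err = apparent_angle_deg(10.0, 0.0, plane_depth) - apparent_angle_deg(10.0, 0.0, obj_depth)
print(abs(err))  # roughly a third of a degree of angular "swim"
```

A third of a degree sounds small, but at typical desktop FOVs and resolutions that's on the order of ten pixels of visible sliding between the real geometry and the painted backdrop ~ exactly the broken-parallax effect the post describes.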
  6. Yes ~ the "make sim surrounds on private islands a standardized feature" idea makes a lot of sense, I think ~ it's also the kind of small incremental change LL seems to be comfortable with. Viewing / impostoring adjacent sims / mainland sims is a bit less so ~ but as I said in my original reply ~ that's a very different ask from the notion of "Make SL able to do AAA-type 'Big Worlds'" ~ which implies a necessary full coordinate-space rework.
  7. Uhm ~ I'm not sure how to reply to this ~ you tell me I'm incorrect ~ then proceed to explain ~ rather precisely ~ exactly how I'm correct? Maybe my explanation wasn't clear?? Yes, SL has the 'open world' split up into multiple integrated coordinate spaces. That's a sim. When you step over a sim crossing you go from +255 in one coordinate space to 0 in the next. But within the bounds of each sim, everything is calculated in world coordinate space. Every bone movement, every object movement, every ~ everything. You can see the errors of this visibly start to manifest by ~ as you indicated ~ flying to 2000 meters and watching your eyeballs shake in your head due to floating point precision errors. This is in stark contrast to how most modern games handle this problem ~ by calculating the world relative to the player, meaning that nothing ever suffers from precision errors unless it's in the distance ~ which is precisely the paradigm change that I was referring to.
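The world-space vs. camera-relative difference can be shown in a few lines. A minimal sketch using NumPy's float32 to stand in for GPU single precision ~ the 2000 m altitude echoes the example above, and the 0.05 mm offset is an arbitrary illustrative value:

```python
import numpy as np

cam = np.float64(2000.0)       # camera at 2000 m, tracked in double precision
vert = np.float64(2000.00005)  # a vertex 0.05 mm away from the camera

# World-space pipeline: both positions become float32 *before* subtracting.
world_relative = np.float32(vert) - np.float32(cam)

# Camera-relative pipeline: subtract in double, then hand float32 to the GPU.
cam_relative = np.float32(vert - cam)

print(world_relative)  # 0.0 -- the offset vanished: the vertex snapped onto the camera
print(cam_relative)    # ~5e-05 -- the offset survives intact
```

At 2000 the float32 grid spacing is about 0.00012, so the 0.00005 offset rounds away entirely in the world-space path ~ while the camera-relative path keeps full precision near the viewer and only degrades far off in the distance.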
  8. There are two very different requests here. 1: "Put junk off-sim to replace the old Sim-Surround Megaprim Hacks". 2: "Have an entire exquisite horizon-line vista that is renderable from any point ( and, in the cases of all of these triple-A game titles showcased ~ able to be walked to as well )". These are two incredibly different things. One is expanding upon a stop-gap hacky measure to have matte-paint-type stuff exist outside a sim's numerical coordinate system ~ the other... well... the other is complicated... SL is rendered in world space. Every entity in SL ~ including ones that you'd think had a local transform ~ such as a rigged avatar ~ simply doesn't have a local coordinate space. So ~ when you're swinging your arm on your avatar, while SL is aware of the skeletal hierarchy in principle ~ as far as the sim code and renderer are concerned ~ SL is going "move bone mElbow at sim location 140.444 , 22.634992 , 44.9294" to denote where your elbow is moving in world-coordinate space. Because of the lack of coordinate spaces ~ SL simply can't do the AAA game "walk into the distance" ~ "look at all the pretty houses on the horizon line" thing ~ the underpinning maths simply aren't there. Floating point numbers only have so many degrees of precision. In order to fix this ~ we'd literally have to re-invent the entirety of the SL coordinate space, to include object space, and update the render code. At that point ~ we might as well just re-invent the rest of SL as well. Which I'm not against in principle ~ but essentially, to 'properly address this' ~ you'd literally have to make SL 2.0 ( Not Sansar ). As for "Seeing neighboring estates" ~ I'm a bit confused about this ask. They're private estates ~ that was the entire marketing point of them ~ that you don't see the adjacent sims; that's what makes them "Not the Main-Land". Or are you proposing that each region come with an extra surrounding 8 regions of 'make it pretty' space?
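"Only so many degrees of precision" can be made concrete: the spacing between representable float32 values grows with magnitude. A quick illustration ~ nothing SL-specific here, just IEEE-754 single precision via NumPy:

```python
import numpy as np

# Smallest representable step ("ULP") of a float32 at different magnitudes.
ulp_at_origin = float(np.spacing(np.float32(1.0)))     # ~1.2e-07 m near the origin
ulp_at_2000m  = float(np.spacing(np.float32(2000.0)))  # ~1.2e-04 m at 2000 m

# Positions 2000 m out quantise about a thousand times more coarsely than
# positions near the origin -- and every world-space transform in the chain
# ( bones, objects, camera ) compounds that error.
print(ulp_at_2000m / ulp_at_origin)  # 1024.0
```

This is why camera-relative ( or "floating origin" ) schemes work: they keep the numbers that matter small, so the fine end of the float32 grid is always centred on the player.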
  9. Thank you Beev ~ it was a lot of effort ( mostly on Beq's part ). I managed to erroneously ( partially? ) convince Beq that pretty much every step of the code was wrong before we managed to convince ourselves that it actually wasn't. But the change won't be that ground-breaking for SL. The average mesh in SL fits more or less just fine in a cube, or a half-cube volume. ( Think dresses, beds, sofas, small bushes, rocks, etc. ) The object ratio is only 2-5 off of a perfect cube. So the tangents would be off, yes, and look sliiiightly weird, but ~ not really in a manner that would be immediately noticeable. That being said ~ I am immensely pleased it wasn't just 'in my head' that something has been 'off' all these years ~ As I said in my first ( very incorrect ) JIRA report ~ I've been chasing this bug, in some form or another ~ for the last 5 years ~ so it's a personal victory for me, and it will help make people's lives in SL just a little bit better ~ which is nice! So thank you again @ZedConroy for pointing me in the correct direction ~ I needed it. I was very much "interested". One last thing: Scale Matrices Suck. I routinely get them wrong... Thankfully Beq doesn't. 😆
  10. No, they are not. This is why I explicitly stated that 3ds max is NOT a reliable tool for analyzing this. Objects taken on a round trip through SL come back 100% identical to the import if they originate in an inverse-normals piece of software such as Maya. I can take my test shape in Maya, import the "in a Box" version of it and the non-enclosed copy of it into SL, and see that the debug normals tool tells me they're totally wrong. IGNORE THAT. Export them ~ re-import them into Maya and compare all my vertex normals, and not a single one will be deformed. This, in combination with the code exploration of SL's vertex normal code turning up nothing but inverse_scale calcs, leads me to believe that SL is handling normals correctly for most cases, but is simply displaying in the debug tool that it's doing it incorrectly. Which is... all kinds of confusing. However ~ we're not out of the woods yet ~ so to speak ~ if I do this same experiment in 3ds max, MANY things can change this. If I have a scale applied to my object at a transform ( object ) level in 3ds max, 3ds max will handle this with its bizarre normals * object scale matrix calc... and, as best I can tell, export those... which will require a similar parity normals * object scale matrix inside SL to get them back into parity with the system. ( Which is what Beq's optional patch addresses, in addition to adjusting how normal maps are rendered ~ but it's not a true fix. ) However, if you Apply XForm in 3ds max prior to export, you will note that the moment you do this, 3ds max recalculates all the vertex normals with normals * inverse object scale, bringing it into parity with SL and Maya.
However ~ if you started off with a 14.0 , 1.0 , 1.0 sized object that has ( 1.0, 1.0, 1.0 ) scale ( XForm applied in 3ds max ) ~ and then take this ( 1.0, 1.0, 1.0 ) scale object and import it into SL ~ SL will compress it into an internal unit-cube .SLM file. This is akin to taking your mesh object in any 3D application, scaling it down to fit into a ( 1.0, 1.0, 1.0 ) sized cube and APPLYING THAT TRANSFORM ~ making the object effectively a 1.0-sized cube with ( 1.0, 1.0, 1.0 ) scale ~ then scaling it back up to object size at the transform ( object ) level. In the case of our 14-meter box ~ regardless of what software it was sourced from ~ it is now a ( 1.0, 1.0, 1.0 ) sized object with a ( 14.0 , 1.0 , 1.0 ) scale. If you import that into 3ds max, we're back to an object with unapplied XForm data, which uses normals * object scale to draw its normals in 3ds max, and they LOOK WRONG until you Apply XForm ~ returning the object to its original 14.0 , 1.0 , 1.0 size ~ with a unit identity transform. That does not mean this is how it's handled in SL. ( Despite it being how Render Debug Normals indicates it is being handled in SL... it's... there's many steps to this. ) On top of this ~ absolutely NONE of the above addresses the original concern: that normal maps ( note: not the vertex normals themselves ) in SL are displayed in a manner that is completely consistent with how the debug tool ( apparently incorrectly ) draws the vertex normals. This bug is weird. VERY weird. Also, it has nothing to do with the handedness of the RGB channels of normal maps ~ an arbitrary planar normal map displays incorrectly on the side of a flattened cylinder in SL.
That's not a problem with the normal map ~ it was baked in planar space with all the correct color channels and magnitudes ~ but when you stick it on a cylinder squashed flat, it makes the side of the cylinder render as if its vertex normals ( nothing to actually do with how the normal map was created ) were squashed to match the bounding box ~ just like the Render Debug Normals in SL seem to ( wrongly ) indicate they are ~ and in parity with how 3ds max handles unapplied object transforms. This is directly contrary to all the other inverse_scale normal calcs in both mesh packing and unpacking. If you doubt me, try doing the same test as I did ~ ignore debug normals, turn off all atmospheric shaders, and just look at how objects reflect light. They do so in a manner consistent with having their vertex normals handled correctly ( in an inverse_scale manner ).
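The inverse-scale rule these tests keep pointing at is standard graphics math: tangents transform by the model matrix itself, but normals must transform by its inverse-transpose, or they stop being perpendicular to the surface under non-uniform scale. A minimal NumPy sketch ~ generic math, not SL or 3ds max code; the 14x1x1 scale just echoes the box example above:

```python
import numpy as np

# A slanted surface: tangent t lies in it, normal n is perpendicular to it.
t = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
n = np.array([-1.0, 1.0, 0.0]) / np.sqrt(2.0)

S = np.diag([14.0, 1.0, 1.0])        # non-uniform object scale

t_world = S @ t                       # tangents use the matrix itself
n_scale = S @ n                       # "normals * object scale" (3ds max-style calc)
n_inv   = np.linalg.inv(S).T @ n      # "normals * inverse scale" (inverse-transpose)

print(t_world @ n_scale)  # about -97.5: badly non-perpendicular, lighting breaks
print(t_world @ n_inv)    # ~0: still perpendicular to the surface
```

This is exactly why lighting looks right when the renderer uses inverse_scale, and why a debug overlay that draws `normals * object scale` lines can look wrong even when the shading underneath is fine.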
  11. Yeah. I've been down that entire rabbit hole, and came out the other side. ( I think. ) ... Remember, my very first going-in position on this was "Scale Matrices Suck, I routinely get them wrong." So my confidence level in all of this has been fluctuating wildly between "pretty high ~ but not certain" all the way down to "I have no idea what I'm doing". Maya handles vertex normals with inverse object scale. This is NOT how 3ds max handles it ( as best I can tell ). However, the only way to get 3ds max to render vertex normals is to use the old Editable Mesh asset type, instead of Editable Poly. So I'm really not entirely sure how the software handles this internally. 3ds max has a lot of bizarre intricacies behind the scenes ~ this might just be 'one of those things' it does the '3dsmax way'. Which can be kinda "speshul" sometimes ~ That being said: we've found code in SL now for object storage ( squishification ) and subsequent expansion for object rendering. Both of these take the normals and multiply them * inverse scale. As long as these two operations use the same maths, then ( in theory ) everything regarding mesh storage and recall is actually fine. Conversely, if both calculations used object scale ( like 3ds max appears to ), it would also be "okay" ~ however, rendering scaled objects would have to be handled in a vastly different manner ~ like I assume it is handled in 3ds max. But this is not presently the case inside SL. SL clearly has the maths to use inverse_scale for both calculations. However, the display of vertex normals in SL, using the debug tool ( the little blue lines we look at ), clearly uses object scale, not inverse scale. What this means... I honestly have no idea what is going on at this point. If I turn off all atmospheric shaders, disregard rendered debug normals, and just analyze this with an Ambient Dark environment and a single point light, SL seems to render surface normals correctly.
But I can't be 100% certain that this is the case, because, again, normal maps and shaders are clearly still borked. The only two things I am 100% certain of are: 1: Inside SL, the display of a normal map on a curved object scaled flat is incorrect. WHY this is the case is not something I understand yet. Still digging on that one. 2: The display of vertex normals ( rendering of debug-type info... aka drawing little lines out of the vertices ), in both 3ds max and Second Life, is unreliable, and should not be used as a deterministic tool to decide what is going on ~ even though I used it as such in my JIRA. I realize now that may have been in error.
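The "same maths on both ends" argument can be sanity-checked directly: if packing into the unit cube and expanding back out both apply the inverse-transpose of their respective scale matrices, the normal comes back unchanged. A toy round trip ~ my own sketch of the idea, not SL's actual .SLM packing code:

```python
import numpy as np

dims = np.array([14.0, 1.0, 1.0])   # object bounding-box dimensions
squish = np.diag(1.0 / dims)        # pack geometry down into a unit cube
expand = np.diag(dims)              # restore it to full size at render time

n = np.array([0.3, 0.8, 0.52])
n /= np.linalg.norm(n)

n_packed = np.linalg.inv(squish).T @ n      # normals * inverse scale on pack
n_out = np.linalg.inv(expand).T @ n_packed  # normals * inverse scale on unpack
n_out /= np.linalg.norm(n_out)

print(np.allclose(n_out, n))  # True -- the two inverse-scale steps cancel
```

The same cancellation would hold if both steps used plain object scale instead ~ which is the "it would also be okay, as long as both sides agree" point: consistency between pack and unpack is what matters, and a debug overlay that uses the *other* convention will lie to you either way.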
  12. The scale multiplier in the 3rd tab is a universal omnidirectional scale value ( it applies to all axes equally ~ X Y Z ), so it doesn't actually affect the normals data at all. Also, you can't zero it out ~ so there's no concern about 0-magnitude normals.
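Why a uniform scale is harmless to normals: the inverse-transpose of s·I is just (1/s)·I, which only changes the normal's length, and that washes out when the normal is renormalised. A two-line check ~ generic math, not viewer code:

```python
import numpy as np

n = np.array([0.2, -0.5, 0.84])
n /= np.linalg.norm(n)

s = 3.0                               # uniform scale, identical on X Y Z
n_scaled = (np.eye(3) / s) @ n        # inverse-transpose of s * identity
n_scaled /= np.linalg.norm(n_scaled)  # renormalise, as shaders do

print(np.allclose(n_scaled, n))  # True -- direction is untouched
```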
  13. @ZedConroy Thank you for puzzling through the first part of this. This has been driving me bonkers for the better part of half a decade. https://jira.secondlife.com/browse/BUG-228952
  14. Okay ~ I've done some preliminary testing. Nothing 100% conclusive yet ~ but by all indications ( at least for meshes originating in Autodesk software ) ~ for any meshes that aren't perfect cubes ~ during the quantization process ~ it seems to be re-calculating their normals using an inverse scale matrix for the surface normals ~ instead of a scale matrix ~ meaning the thinner and flatter your object is, the more its vertex normals are going to be distorted ~ by not only the ratio of the difference from the mesh to a standard cube ~ but that ratio AGAIN beyond that ~ meaning if your initial object measures 0.25m x 1m x 1m ~ the surface normals are being calculated in a manner in which ~ in order to get them to match the original shape ~ your object must be scaled to 4.0m x 1m x 1m ~ 16 times the original mesh dimension in the axis that was "off". If my testing is correct ~ and this is the mistake...... Holy @#*%*@
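The "that ratio AGAIN beyond that" compounding works out as the square of the per-axis scale ratio. A trivial arithmetic check of the 0.25 m example ~ the numbers follow the post; nothing here is SL code:

```python
s = 0.25               # object flattened to 0.25 m on one axis vs a 1 m cube
correct = 1.0 / s      # inverse-scale factor the normals should get: 4.0
mistaken = s           # plain scale factor applied by mistake instead: 0.25

# Ratio between the two treatments along that axis -- i.e. how far you would
# have to stretch the object for the wrong normals to match the right shape.
compensation = correct / mistaken
print(compensation)  # 16.0 -- scale 0.25 m up to 4.0 m, 16x the original
```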
  15. Actually, once I hit post ~ I remembered. I'm pretty sure just exporting a mesh into the DAE file format "explicitly defines" its vertex normals. DAE is a simple format, and does not allow for edge smoothing. Again, I haven't tested, but I have a fuzzy recollection that if I export a mesh with 'regular' handling of vertex normals from either 3ds max or Maya ~ just immediately re-importing it will require them to be "unlocked" again. It's just a limitation of the DAE format.