
About polysail


  1. I'm far less of an expert than Beq on this topic, so I will defer to her "informed ignorance". I just wanted to point out that prims appearing before everything else in a scene has nothing to do with their render impact, but rather with the fact that they're constructs of the viewer itself. People tend to think that means they're more efficient, since they just poof into existence. From what I understand of the viewer, you don't have to download a prim's mesh every time you want to look at one; you just get info from the server ~ "prim box, these inputs" ~ so prims appear in the scene before everything else. Which, as Beq noted ~ doesn't mean they induce less GPU lag than a similarly constructed box that someone uploaded.
  2. Getting a dino to look good shouldn't take more than 1-2 1024 maps if you use symmetry and UV packing well. If you UV-stack the left and right sides on top of each other and split the body across two 1024x1024 maps, you should be able to get good normal map resolution with minimal seam distortion down the back and underbelly. Additionally, along that specific ridge on the top of the back, you can add extra ( non-normal-map ) geometry to help your silhouette. It'll help distract from the natural normal map defects you get from ( relatively ) smaller normal map sizes. As a general rule I'd avoid making super-tiny spines like the ones you have along the neck; opt instead for something a little coarser and built to help the silhouette ~ it will also hide the normal map seam, especially if it runs from neck to tail-tip. You can put these spines on their own UV islands that break symmetry and sprinkle them into the crevices of your existing UV maps to maximize space utility and keep the symmetry from being too stark. ~Cheers! -Liz Edit! :: A dino like that can be well executed with roughly 25-55K triangles ( no subdivisions needed ). Just got to break some edge loops up near the spines and head and do your topology right!
  3. When exporting a file from 3ds Max, if the scene is set to cm and the exporter is set to convert cm to meters, the item will appear in world at its appropriately converted scale. However, the rigging data will be incorrect, as it won't have been converted properly ~ if you look in the DAE file, the values are all off. ( I think? .. it's a vague recollection I have of this. )
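The pitfall above can be sketched in a few lines. This is purely illustrative Python ( `convert_scene` is a made-up helper, not 3ds Max or SL code ): the point is that a correct cm-to-meters export has to scale the skeleton's bind-pose data by the same factor as the mesh vertices, or the rigging ends up off by 100x.

```python
CM_PER_M = 100.0

def convert_scene(vertices, bind_translations):
    """Convert BOTH the mesh vertex positions and the skeleton's
    bind-pose translations from cm to meters. Forgetting the second
    list reproduces the 'mesh looks right, rigging is off' symptom."""
    verts = [tuple(c / CM_PER_M for c in v) for v in vertices]
    binds = [tuple(c / CM_PER_M for c in t) for t in bind_translations]
    return verts, binds

# A vertex of a 50 cm cube, and a bone sitting 120 cm up the spine:
verts, binds = convert_scene([(50.0, 50.0, 50.0)], [(0.0, 0.0, 120.0)])
```

If only the vertex list were converted, the mesh would rez at 0.5 m but every bind translation would still read 100x too large ~ matching the broken-rigging behavior described above.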
  4. Based on my testing of all this sort of stuff awhile ago ( and my memory is a bit hazy on all of it ~ ) this is how things work ( I think? ) ~ so please correct me if any of this tests out to be incorrect. The in-world avatar BB (bounding box) size is altered based on the SL dimensions of the rigged asset and where it is attached ~ regardless of how it visually looks. That is to say, the same formula for altering an avatar's bounding box is used regardless of the worn item's rigged status. The alterations to the avatar bounding box are based on the worn position and scale of the unrigged parameters of the worn item ~ whether rigging data is present for that item, or it's simply a giant box prim, the same calculation is used. It's the same issue I originally dredged up when assessing the validity of the ARC calculations with respect to rigged assets. Rigged mesh assets have a bounding box size based on their unrigged scale. IE if I rig a 0.5 m ( per side ) six-sided box to my head bone and upload it, the bounding box size of the rigged item ( and also the effect that wearing it has upon my overall avatar bounding box ) is based solely upon what size that object is in SL when it's placed on the ground. If I place it on the floor and scale it down to 0.01 m, then its associated bounding box on the avatar when worn will be that size... however visually it will appear as a 0.5 m rigged mesh box over the avatar's head, because its vertex data dictates that's where it will appear. The ARC costs will be calculated as if I were wearing an unrigged asset that size ~ and the overall avatar bounding box does not grow. If I scale that same box up to 64 m per side and wear it, it will now inflate the avatar's overall bounding box to be absolutely ginormous, and its effective ARC cost for the box alone is raised.
However, the associated ARC costs for all the other worn attachments ~ whose LOD swap rates have now been changed by the fact that I'm wearing a 64 meter box ( making my avatar bounding box absolutely gigantic ) ~ are not raised. Which is probably not intended behavior either. It's important to keep in mind that in both of these cases the box appears as a 0.5 m cube on the avatar's head at all times; it's just different "under the hood". This however has nothing to do with how rigged assets are stored and rendered by SL ~ which I think was the original question! All vertex data is stored as an offset from the avatar skeleton. The offset of these vertices is stored in a rather generic system unit ~ which, based on what Beq tells me, is an integer distance that's divided down and interpreted as meters ~ effectively making your vertex distances from the avatar skeleton stored as measurements that are ~ for the sake of argument: meters. This offset position is multiplied at render time by all of the skeleton's bone scale modifiers ~ IE joint sliders, or ~ in the cases of giant avatars and tinies alike ~ a scale applied directly to the mPelvis bone ( which I confess I'm still not entirely certain how this scale modifier actually makes it in world and is applied! I've just seen it in action enough times to know that it works. ) This vector-based offset position is updated with skeleton deformations etc., and that's how our rigged content moves. However: if you export your scene from your external 3D app in cm without doing the proper conversion to meters, such that your vertex data is measured in cm, SL will still interpret those values as if they were meters. This will effectively set every vertex offset from its parent bone to be off from your intended value by a factor of 100.
There are internal limits to cap these values, and you'll wind up with a giant puffy avatar that's just a near-spherical blob of vertices, each following its respective bone ~ jiggling about like some sort of terrifying, vaguely humanoid koosh ball. This behavior, however, will have zero effect on the avatar's ARC cost or bounding box when compared to a correctly scaled and exported avatar that has the same SL object scale ( size on the ground ). It is worth mentioning as a final aside, though, that SL will interpret an upload that is 50 cm to a side as a 0.5 meter cube, but at the same time it will translate the vertex data from the object at upload time into meters, creating a 0.5 m cube in world whose rigged vertex offset data is specified at 50 meters away from a bone ( which hits the integer limit for maximum allowable distance from a bone ) and makes things a mess. Note: this effect is something I've noticed while browsing DAE files, but it's possible that different DAE-writing export plugins might do the unit conversion differently such that SL interprets the data correctly, despite doing an export in cm.
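A toy model of the factor-of-100 misread, assuming ( as described above ) that the importer reads offsets as meters and clamps them at some maximum distance from the bone. `MAX_OFFSET_M` is a stand-in value I made up for illustration, not the real internal constant:

```python
MAX_OFFSET_M = 64.0  # hypothetical clamp; the real internal limit may differ

def interpret_offset(raw_offset):
    """Read a rigged vertex offset as meters and clamp each component,
    the way an importer with a hard distance limit might."""
    return tuple(max(-MAX_OFFSET_M, min(MAX_OFFSET_M, c)) for c in raw_offset)

# A vertex meant to sit 0.25 m from its bone, authored in cm, never converted:
as_read = interpret_offset((25.0, 0.0, 0.0))    # read as 25 m ~ 100x too far
blobbed = interpret_offset((5000.0, 0.0, 0.0))  # pinned at the clamp
```

Every vertex lands 100x too far out ( or pinned at the clamp ), each still following its own bone ~ hence the puffy koosh-ball blob.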
  5. Just to add a little addendum to Beq's lovely essay describing physics: the avatar collision skeleton is primarily used for resolving clicks ( left and right ) on an avatar. For example, when you want to right-click on your friend to see their profile update, in order for the viewer to "solve" that you've clicked on your friend, it uses the collision volumes. These collision volumes are also what allow you to "cam onto" someone and have the camera stick. The camera uses the collision volume to resolve where to point and what to follow around. This is what allows you to "stick" your camera to your friend's head to look at their face, or stick it to another region of their body to examine that. I believe this is slightly different from "bounding box" resolving, which is how we bump into things in the world and what responses are given for llCastRay collision hits from such things as weapons and combat scripts. This means that if your avatar is playing an animation that has your hands extended in front of you like a zombie, the arm collision volumes do not dictate when you bump into the wall; rather, the extended avatar bounding box does. As best I can tell, the avatar collision bounding box does change based on a number of things including your present height, sitting status, etc., but it does not change based on what animations you are presently in. In fact, I'm pretty sure that you can walk your collision volumes completely out of your bounding box via an animation, thus visually being clickable in one location while your bounding box is in fact in another.
  6. Dohhhh Whirls... what kind of flame fest did you drag me into... stop baiting me into messes like this, I have more important things to waste my time on, like playing Don't Starve. Feh... fine. @Klytyna - It's not a Linden that came up with the idea to allow the serverside baked output textures to be applied to rigged mesh avatars; it was mine. The reason is pretty simple. It's not for reviving millions of system layer clothing assets ( though that will happen too as a byproduct ), but rather for giving users a reliable way of applying skins, tattoos and stocking/lingerie layers to mesh bodies without the mesh body designers having to use additional polygon shells. Presently all standard mesh bodies contain 3-5 copies of the original body "shelled" or "layered" overtop the base layer. Each copy of the body is used solely for applying a tattoo layer or a freckle layer or a piece of lingerie/clothing to that mesh body. This means that anyone viewing that avatar has to render four copies of the entire mesh body at all times, making that mesh body 4x as laggy. Additionally, on top of the polygon count cost, there's the cost of keeping individual textures in memory ~ all at 1024x1024 pixel size. This means a fully tattooed mesh avatar with system layer lingerie on will have: 3x 1024 textures for the skin layer ( head, torso and legs ), 3x 1024 textures for the tattoo layer alpha-blended overtop that ( head, torso and legs ), and 2x 1024 textures for the lingerie layer ( torso and legs ). This leads to a grand total of 8x 1024 textures stuck in memory ~ before we even get to whatever mesh clothing is presently being worn on top of that body. Now.
Using the serverside baked textures on all body parts eliminates the need for the extra three layers of "shelled" copies of the mesh body, making the rez time for the mesh asset 4x faster than it presently is, but it also culls the texture count down from 8x 1024x1024 textures to just 3x ( baked head, body & legs ), more than doubling the efficiency there as well. This texture efficiency boost doesn't even take into account the massively reduced render complexity that comes from no longer layering alpha-blended tattoo textures overtop skin layers. But because of the way the serverside baker operates, it doesn't have to stop there. Allowing use of server-compounded textures would let users wear multiple tattoo layers, multiple stocking and lingerie layers. A lot of people love to customize every single detail of their avatar, down to single tattoos, single moles, single scars, etc. This ability was lost with the arrival of mesh bodies, but would be regained by this feature. The original notion of this thread was that someone lamented the arrival of mesh because of the aberrant load times. This is a very valid concern, one that the Lindens actually take rather seriously. So, this proposal that you're so thoroughly upset about is actually a massive ( 4-5x ) optimization, one that would allow him to continue to enjoy SL in a much more ideal way ~ as well as make SL events 4-5x less laggy.
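For a rough sense of scale, here's the texture arithmetic above, assuming uncompressed 32-bit RGBA in memory ( an assumption ~ SL actually streams JPEG2000-compressed textures, so real figures differ, but the ratio holds ):

```python
def texture_mib(count, size=1024, bytes_per_pixel=4):
    """Memory for `count` square textures of the given size, in MiB."""
    return count * size * size * bytes_per_pixel / (1024 * 1024)

shelled = texture_mib(8)  # 3 skin + 3 tattoo + 2 lingerie layers
baked = texture_mib(3)    # one baked texture each for head / body / legs
print(shelled, baked)     # 32.0 12.0
```

Eight resident 1024s versus three ~ the same better-than-2x saving described above, before counting the three dropped body shells.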
  7. This is still on my project list ~ but during the course of Bento, something much more pressing finally stole the majority of my build-time, namely the inability of my native software suite to produce SL-related animations / skeletons. That project is finally nearing "Live" status. I'm still VERY interested in establishing this sort of thing, but I don't yet have the mesh head and saleable content I was hoping to have at this time myself. So, I've lagged a bit behind in that regard.
  8. Hi~ you seem upset! First off, disconnecting during teleports has nothing to do with Bento ~ that's just the servers being finicky, or your internet connection being bad. I'm sure it'll settle out in no time. As for "things looking exactly the same as they did before", that is precisely the point. If the Lindens did their job correctly, all existing content should look exactly the same!! This is an affirmation that things are in fact working correctly! New content, however, will look different ~ and you may enjoy it sometime soon. I hope this makes you less upset! Have a happy new year!
  9. Thus far physics has only been applied to "volume" bones, rather than skeletal animation bones. All the new bones are skeletal animation bones, rather than volumetric. So sadly physics can't really be applied to them directly; however, something like a "dragging tail" can be faked with animations. Alternatively there are prim ray-cast script solutions that can mimic the behavior you're requesting, but it's a tad late in the Bento project to be adding collision physics. Edit: Also, it's worth mentioning that making things move with the physics engine drastically reduces the number of alternative uses a bone can be repurposed for ~ as it's suddenly being influenced by the environment.
  10. Okay, so it turns out twist is a special case ~ because bone orientation is specified by everything but the axis twist of a bone. Think about a look-at constraint: it never spins the camera on its "look at" axis. I'm guessing something in SL treats this as a special case.
  11. If, instead of parenting your bones to one another in a proper skeletal hierarchy, you parent each of them to a matching bone in another copy of the skeleton, then their local transforms relative to their parent nodes will still be [1,0,0] [0,1,0] [0,0,1] [0,0,0], but they'll be in the correct positions in world space ~ as defined by the parent hierarchy. Then you can move them around without altering their identity transform [1,0,0] [0,1,0] [0,0,1] [0,0,0] definition. I dunno!! ~ it works in 3ds Max this way ~ but again, I haven't tried the twist case. Which is still going to generate horrid topology once the limb is reoriented to fit the SL T-pose.
  12. You need to completely disassociate the skeletal hierarchy for upload but still preserve local rotational values at zero. This has the downside of making "export with joint positions" completely non-functional. But with a non-directly-parented skeleton you can twist any of the parts around any which way to make a "bind pose", and as long as the joint rotations are all still locally zeroed, your file will produce an intact result. I haven't tested the "twist" case though, and my code is currently under some rather heavy revision ~ so I don't really have the means of doing so. It should also be noted that the "twist" case is topologically a NIGHTMARE and should never be used. There are some solutions you don't want to have work ~ and that's one of them. The forearm topology and UVs will be a distorted twirl of a mess.
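The proxy-parenting idea from the last two posts reduces to a translation-only toy model ( real joint transforms are full 4x4 matrices; this sketch of mine just shows why the exported local transform stays identity ):

```python
def world_of(parent_world, local_translation):
    """World position of a node = parent's world position + local offset
    (a translation-only stand-in for parent_matrix * local_matrix)."""
    return tuple(p + l for p, l in zip(parent_world, local_translation))

IDENTITY = (0.0, 0.0, 0.0)

# Pose the proxy copy of the skeleton however you like; the joint that's
# parented to it inherits that placement while its OWN local transform
# (the thing that lands in the DAE) remains identity:
proxy_bone = (0.2, 1.1, 0.5)
joint_world = world_of(proxy_bone, IDENTITY)
```

Moving `proxy_bone` moves the joint in world space, but the joint's exported local transform never leaves [1,0,0] [0,1,0] [0,0,1] [0,0,0].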
  13. This project adds stuff to the existing ~ completely invisible ~ avatar skeleton. It does not change anything for anyone unless you are a content creator that "makes rigged mesh". For everyone else: if you have purchased something that "uses Bento", you will need the new viewer to see your new content properly ( much like you would need a mesh-enabled viewer to see mesh content ). Other than that nothing has changed; there are no new options for you to worry about except for a few new "attachment points" to wear stuff on. That's all! Enjoy!
  14. Are those sliders position based? ~ Are the animations playing there adjusting the position of the bones? If so ~ it's been publicly stated over and over again that translation ( position ) based animations will negate the effects of any translation-based slider control. If these animations don't use translation, then this might well be a bug. Edit: Reset Skeleton is something done locally on each viewer. One person pressing "Reset Skeleton" does NOT reset the skeleton for everyone else viewing that avatar. Therefore it cannot be "scripted" into your animations. 2nd Edit: Here is a list of bones that are partially or completely affected by sliders that move their position, and therefore may have those slider values overridden by translation animations: https://wiki.secondlife.com/wiki/Project_Bento_Skeleton_Guide#Bones_Currently_Affected_By_Positional_Sliders
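The slider-versus-animation interaction can be modeled like so ( my own toy model, not viewer code ): both the positional slider and a translation keyframe write to the same bone-position channel, and the animation wins whenever it supplies a value.

```python
def bone_position(rest, slider_offset, anim_translation=None):
    """Final bone position: a translation keyframe, when present,
    replaces the slider's contribution entirely."""
    if anim_translation is not None:
        return tuple(r + a for r, a in zip(rest, anim_translation))
    return tuple(r + s for r, s in zip(rest, slider_offset))

rest = (0.0, 0.0, 1.0)
slider = (0.0, 0.0, 0.2)  # e.g. a positional neck-length slider
posed = bone_position(rest, slider)                     # slider applies
negated = bone_position(rest, slider, (0.0, 0.0, 0.0))  # slider negated
```

Even a zero-valued translation keyframe wipes out the slider's offset, which matches the "translation animations negate positional sliders" statement above.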
  15. Beq and AndryK seem to have ironed out the problems with the fix, which is awesome news, and the fix itself was actually fairly simple. I feel compelled to mention, however, that the bug itself was incredibly bizarre, as it was rooted in the way mesh display behavior piggybacks on top of prim base types. Bugs that are simple to locate don't last 5 years ~ nor do bugs that are simple to solve. This particular one was insanely difficult to locate, but simple to solve. The vast majority of the rest of the LOD problems sadly fall under the "simple to locate, but nearly impossible to solve" umbrella. That being said, while there are a LOT of bugs in the LOD system, it's still 'passably functional' as it presently stands. Remember: the Lindens operate as a business should ~ cost-benefit analysis ~ which means solving really convoluted pain-in-the-ass problems that have no drastic impact on the world really isn't high on their to-do list. The majority of the problems come from the way the upload system presently incentivizes uneducated creators to create content with single-triangle low LODs. And while I do hold the Lindens partially accountable for not doing more to at least clarify in the uploader what is "good" and "bad" for SL, at the end of the day there's no amount of enforcement that can stop an unskilled creator from creating bad content without limiting the creative freedom of a good content creator to create something impressive. That being said ~ SL documentation, especially on mesh, has been abysmally lacking. As for actually fixing things ~ I've submitted a number of random LOD-related bug reports to LL ~ and they've seemed very receptive to at least trying to figure out a solution. After looking through the JIRAs of the last 5-7 years, I can say with reasonable confidence that a lot of longstanding SL bugs exist predominantly because no one has filed a report that reliably points to "this specific thing is broken."
~ Most of it is "when I go here and do this thing, half the time this other strange thing happens that I didn't expect." While that's helpful ~ as someone who's recently taken up trying to write code junk ~ I can tell you it's not always quite good enough. Things seem to be heading in the right direction though! Even if it's a bit painfully on the slow side.