Posts posted by Fluffy Sharkfin

  1. 14 hours ago, Theresa Tennyson said:

    I can think of two ways - there may be more, I'm not really an expert in this.

    One way would be to have "placeholder" textures on the object that the viewer would automatically replace with its owner's appropriate latest bake. This would be the most similar thing to how system bodies work now but would require everyone's viewer to be updated, and would have little use for non-avatar applications.

    The other way would be to have an applier-like object that would manually send the baked texture to pre-configured faces when clicked. The texture would then be treated like any other texture and be part of that object until deliberately updated. This would have the most compatibility with old viewers and be most useful for non-avatar applications but raises permissions issues. If this could be done the texture probably will have to be no-modify and no-transfer automatically - I don't know if there's any way for a baked texture to inherit permissions from its component parts.

    I think the placeholder texture option would probably be ideal given the limited scope of what they're trying to accomplish with this feature; I can even see it getting some limited use on other attached objects, despite the lack of material support.

    Honestly, on reflection, while I'd love to see some sort of support for layered textures in SL I wouldn't want to see them try and extend this feature much further than what they're already trying to accomplish simply because the use of wearable system avatar layers doesn't make nearly as much sense when applied to non-attached objects.  Of course it would be nice if they could find some way to support materials with this, but if they can't then the fact it will provide a way to composite wearable "alpha layers" and apply them to mesh bodies is still a notable improvement on what we currently have.

  2. 51 minutes ago, Pussycat Catnap said:

    Ponder what program that is. I've been struggling for a long time to figure out making some textures I've wanted to put on my outfits... and spinning my wheels...

    I'd love it if I knew of something that could help me out like this.

    If I had to guess I'd say it was Filter Forge, which has a huge library of user-created procedural textures that can be used to generate near-infinite variations (in most cases complete with appropriate normal map, etc.).

  3. 2 minutes ago, Penny Patton said:

    An alternate approach to this issue would be for LL to investigate next-gen building tools, similar to EverQuest Next, which allows people to build using basic prim shapes, then bake those shapes into a single mesh object (removing hidden vertices/faces) and apply a softening feature to edges to round them out for more organic shapes.

    The voxel building tools in Landmark were actually very impressive, and if LL were to implement a similar in-world building system where people could create things using basic voxel building tools and then have a mesh object automatically generated based on a similar process to the "wrap" method they use to calculate physics models in the mesh uploader (you could even use the same mesh upload window for generating the LOD models), that alone would be an amazing addition to in-world creativity.  Of course you'd have the issues of UV mapping, but even if it were limited to planar mapping only it would allow users to create all sorts of organic shapes and objects for landscaping and scenery, etc. that you simply can't do efficiently with prims.

  4. 1 hour ago, Theresa Tennyson said:

    These composite textures are already assigned UUIDs and information is included that currently tells the viewer to apply them to the system mesh. The system will be changed so that the viewer will be able to apply them to a mesh object instead of the avatar mesh. These textures could be routed to the appropriate faces of the mesh the same way applier systems do now.

    The rest of your post made sense, but this part seems a little vague?  The viewer will be able to apply the baked texture to a mesh object instead of the avatar mesh, and will be able to route it to the appropriate faces of the mesh the same way applier systems do now... but who tells the viewer which faces are the appropriate ones? Are they going to present the user with a list of faces 0-7 and let them guess which one, or will the information about which faces to apply the baked texture to be somehow embedded in the system avatar layers that the user wears?

    I think the idea of getting rid of multiple faces for alpha cuts and reducing the use of onion skinned meshes for tattoo layers etc is great and hopefully creators of mesh bodies will adopt the feature and change the way they make their bodies accordingly.  Having clothing layers for mesh bodies as a separate attachment would not only solve the issue of having to select which faces a baked texture is applied to but would also give us an easy way to reduce the number of excess polygons we wear, so that's also great.  However if their intention is to extend this to other types of mesh items (which seemed to be the case based on what was said at previous meetings) then they'll need to support things like linksets and multiple faces and in order to do that I think they're going to have to come up with something a lot more complex than what you're suggesting.

  5. 41 minutes ago, Theresa Tennyson said:

    The baking service creates a texture. Freckles and tattoos are never seen in the world as independent things. The texture that is created would be applied to a mesh just the same way as any other texture is applied to it. This is also why your idea of unlocking and tiling textures going into the bake won't be a significant improvement in performance - those tiled textures would never be seen by anyone else in-world, only the completed bake.

    I'm curious as to how you envisage these textures being selected for baking.  Currently for system avatars it's based on what clothing layers (i.e. the asset type clothing layers in inventory, not the mesh onion skin thing) we wear, but I just don't see that working for mesh bodies because there HAS to be some way to apply these to select faces rather than the entire object.  The UUID for the baked texture would have to be accessible via script in order for it to be applied, creators won't simply be able to provide a "layer" asset with mesh items that a user can wear and magically have it applied to the correct faces of the correct attachment.

  6. 10 minutes ago, Theresa Tennyson said:

    Show me where I ever said that clothing layers wouldn't be used. I said at least twice that they still are useful. They'll be a lot less of an issue when they're two faces rather than being dozens of faces. And when layers for clothing are still used all your complaints about materials not being bakeable are invalid. I did mention that baking on clothing would be a possibility, but that would strictly be for low-impact layering purposes.

    So if a mesh body has clothing layers attached to it, those would be either additional faces or alternatively part of the same linkset as the mesh body, which then raises the question of how we are going to be able to specify which faces/parts of the linkset the clothing layers we wear will be applied to, unless you want freckles and tattoos on your shirts and jackets as well as your skin?!

     

    16 minutes ago, Theresa Tennyson said:

    In Second Life, materials are a completely separate thing. With most animals, the texture of their fur or scales is independent of their coloration. For instance, a horse's hair will generally have the same texture whether or not it has a blaze or pinto spots. Right now the Waterhorse rideable horse needs a separate layer for spots and blazes. With the ability to use baked diffuse textures that would be unnecessary, while the separate hair specular and normal layers don't even need to know that they are there.

    I've already explained to you in this thread why that assertion is wrong: normal maps may be unaffected by colors in the diffuse map, but the same cannot be said for specular maps!

  7. 4 minutes ago, Theresa Tennyson said:

    Here's the thing. They could still do exactly that. I have never advocated getting rid of clothing layers. I just don't appreciate having to make the philosophical decision of whether to use the tattoo layer for body freckles or a certain patch of hair I am wont to wear from time to time because? Can't wear both. Not to mention that either may vanish under sheer clothing anyway. Or, of course, that I can't get a, well, tattoo.

    Besides, this is the more likely scenario when a resident gets dressed with a mesh body:

    They put on a piece of mesh clothing, then they grumble that part of their body is sticking out so they go to their big honkin' hud which realistically needs to be worn at all times. They then try to find a combination of the dozens of alpha cuts of their body to hide what they need hidden. Then, if this is to be repeatable, it needs to be saved into a scripted something.

    (Oh yes, I do know - some clothing is scripted to automate this. Which is why my aunt has a nightie that turns off a piece of her mesh that doesn't need to be turned off and she has a hole in her chest.)

    Meanwhile, this system means that their avatar needs to be made up of a collection of multiple meshes with scripts to turn each mesh face on and off. Dozens of faces. I imagine some bodies have over a hundred faces when you take all the layers into account. That, of course, means the avatar needs to be made of x/8 meshes, with x being the number of faces.

    Or... they can put on the piece of clothing and an alpha layer made for that clothing and be good to go.

    If the avatar doesn't need to be sliced into alpha cuts the main portion of the body and each clothing layer can be reduced to two faces each.

    As far as what goes where - it works the same way it does now. A script. Just a much simpler one.

    Okay, if you're talking about the new feature being used purely as a way to apply the equivalent of a body alpha to mesh bodies, and using it to apply simple decals like tattoos and freckles etc., thereby reducing the number of layers of polygons used and the numbers of sub-objects/faces used to accommodate alpha layers in HUDs, then yes I'd agree this feature will probably work quite well (although I still think the issue of having to choose which attached mesh it's applied to and therefore having to apply it multiple times if you have separate hands and feet is going to be potentially unpopular).

    If that's really all that LL are trying to accomplish by implementing this new feature then I guess it will more or less achieve their goal, however I still think it would be nice if this, at some point in the future, led to them expanding the functionality of the texture baker to include more advanced methods of texturing which would allow creators to optimize and further improve the quality of content in general.

  8. 2 hours ago, Theresa Tennyson said:

    I just don't see a realistic need to make the significant changes to the baking system to provide materials support, especially because materials are already a separate thing and much of the same function could be done by generic materials appliers.

    While the same functionality can be achieved using clothing layers and separate applier HUDs, the effect on usability should also be considered.

    Using the current system, a resident purchases an item of clothing, opens their inventory, wears the HUD, selects the item they wish to wear, and with the press of a single button all three required textures are applied to their avatar.

    Using the new system with clothing layers and corresponding HUD, a resident purchases an item of clothing, opens their inventory and finds the correct version of the clothing layer for the item they want to wear (we'll just assume that creators will be helpful here and not name the different color variations with whimsically abstract names like "Springtime", "Frappuccino" and "Zowie!"), they then right click the layer and choose which of the meshes currently attached to them they wish it to be applied to, then they wear the HUD for the item and find the button that corresponds to the clothing layer they just applied and press that to apply the other textures required for materials, overwriting any other materials that are necessary for any other layers they may be wearing, since there's no texture baking for normal or specular maps so only one of each can be applied at a time.

    The above is assuming that this feature will be used solely for mesh bodies, because without LL somehow adding in the ability to select which faces/parts of a linkset the layer is going to be applied to I don't see this feature getting much use for any other type of mesh items, and since we currently can't specify names for each individual material/face when uploading mesh I can't imagine how they could even begin to incorporate that functionality into clothing layers.

    None of this seems like a step forward in terms of usability, and I don't see the majority of residents welcoming this added complexity in the process of dressing themselves considering one of the most common complaints about SL is its steep learning curve.  Also, I'm still unclear on how exactly people are going to specify which order the layers appear on the items, unless in order to wear a layer beneath other layers they're expected to strip down and reapply them all in the correct order.  And what about residents that wear a mesh body but then use separate hands and feet, if they try to wear a layer that should cover both will they have to apply that clothing layer to each attached object separately?

    I'm not saying that the entire feature is pointless, but just because something is technically viable and would improve performance you can't just assume that people will adopt the feature regardless of how much more complicated it makes things or how much it impedes their ability to use existing features that they're already accustomed to.  Usability may not seem important in comparison to improvements to performance but the fact is that if end users (i.e. customers) don't like it, how long do you think it will be before creators (i.e. merchants) refuse to use it?  Frankly if they can't add some functionality which allows creators to somehow improve the quality of the content they produce then from an end-user's perspective this feature is going to seem like a whole lot of stick with very little carrot attached!

  9. 3 minutes ago, Prokofy Neva said:

    I disagree that the maker whose tree you are showing "never really made it". And you're selecting two trees out of the many made by multiple top-selling merchants and not showing the best sellers. In fact, the people who make the best-looking stuff sell the most.

    Assuming that they make things for the purpose of selling them, marketing is just as important as creative ability in SL these days.  Not everyone is motivated by profit, especially when you're talking about artists and those driven to creativity rather than drawn by greed!

  10. 13 minutes ago, ChinRey said:

    It would only work if the baked texture could be tiled though and I can't see how that can be possible for an avatar. A tiled seed texture will only give a marginal speed increase for the baking process and not affect the rendering at all.

    Ahh, no I think you misunderstand my meaning.  The tiling of seamless textures would occur pre-bake so that the resulting baked texture would still be a single non-tiled texture, but any of the layers used to bake that texture could contain low-resolution, seamlessly tiled textures (in the case of a dress it could be patterned fabric, in the case of a static object a brick texture), then on top of that you could apply a single non-tiled texture with windows, shadows, etc., or even a texture as a decal with tiling turned off and the scale and offset set so it appears in a certain place.  This would mean that instead of needing a 1024x1024 texture containing the repeated seamless brick texture complete with windows, shadows, etc. you could make do with a 128x128 seamless brick texture and a single window texture tiled several times for the windows, with a non-repeating decal map for the door (or alternatively think plaid patterns, buttons, stitching, etc.).  Basically a way to have more control over the way the textures of each layer are handled before the final bake is created and passed to the object.
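    As a rough illustration of the texture-budget saving this could allow (all sizes and byte counts here are illustrative assumptions, not actual SL figures):

```python
BYTES_PER_PIXEL = 4  # uncompressed RGBA

def texture_bytes(width, height):
    """Uncompressed size of one texture in bytes, ignoring mipmaps."""
    return width * height * BYTES_PER_PIXEL

# One monolithic 1024x1024 facade texture:
monolithic = texture_bytes(1024, 1024)

# Versus a 128x128 tiling brick texture plus a 256x256 window
# texture and a 256x256 door decal:
tiled = (texture_bytes(128, 128)
         + texture_bytes(256, 256)
         + texture_bytes(256, 256))

print(monolithic // 1024, "KiB vs", tiled // 1024, "KiB")  # 4096 KiB vs 576 KiB
```

    The baked output would still be one full-size texture, of course; the saving is in the source layers that creators would have to make, upload and store.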

     

    21 minutes ago, ChinRey said:

    As for the performance cost of polys vs textures, I think we should put that into perspective.

    I agree that for scene objects the excessive numbers and resolutions of textures are the main cause of lag and it certainly adds a lot to avatar lag too.

    But even so: A scene with about 300,000 active static polys (active as in actually rendered on the screen) is moderately heavy to render. You don't really want to go much higher than that and fortunately we hardly ever do, so the lag caused by geometry is usually less significant than the lag caused by textures.

    I use a Maitreya mesh body most of the time. It's a fairly low lag mesh body but it still has more than 300,000 polys all on its own. And since those are all dynamic polys, they are much harder for the GPU to manage than the ones which just sit still and never move or change shape or size. Not all of those polys are active of course, but enough of them are that they add significantly to the overall render cost of the scene. Three or four mesh avatars in a picture can easily add more to the GPU load than the entire scene they're in. With baked textures the polycount for mesh bodies can potentially be cut to a quarter of what it is today. Even if they retain separate layers for skin and clothes, it's still a 50% reduction of the polycount. That is a significant performance improvement.

    I'll admit I don't have enough technical knowledge to be able to predict which would cause more lag, though I suspect it depends on the type of performance issues you're talking about.  While a high polygon count would probably cause more of a slowdown when rendering, wouldn't a high volume of texture data cause more of an issue with RAM than the number of polygons?
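    A back-of-the-envelope comparison suggests the texture side can indeed dominate memory use (every figure below is an assumption for illustration, not a measured SL value):

```python
def texture_mib(count, size, bytes_per_pixel=4):
    """Approximate memory for `count` uncompressed square textures, in MiB."""
    return count * size * size * bytes_per_pixel / 2**20

def geometry_mib(polys, verts_per_poly=3, bytes_per_vertex=32):
    """Approximate memory for unindexed triangle geometry, in MiB."""
    return polys * verts_per_poly * bytes_per_vertex / 2**20

# A hypothetical avatar wearing twenty 1024x1024 textures...
tex = texture_mib(20, 1024)   # 80.0 MiB
# ...versus ChinRey's 300,000-poly mesh body:
geo = geometry_mib(300_000)   # ~27.5 MiB
```

    Rendering cost is a different question from memory cost, though, which is probably why both camps in this thread have a point.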

  11. 12 minutes ago, Theresa Tennyson said:

    With most other "companies whom you're paying to provide a service," when you ask them to make some major, impractical, ill-thought out change largely to make yourself happy you'll end up either having a pleasant conversation with someone in a cubicle in Bangalore or have your E-mail answered with the Form Letter to That Kind of Guy/Girl/Frog/Bovine.

    At least with Second Life there's the possibility that you can do the work yourself.

    Provided, of course, that you can do the work.


    Yes, and if I'd come onto this forum and started a thread demanding some major, impractical, ill-thought out change purely for my own amusement/benefit then I could understand your response.  However the whole topic of this thread is the fact that LL have announced this new feature, and I'm simply trying to suggest ways in which it could be expanded on so that it may be of maximum benefit to everyone.

    I really don't think giving people a link to the source code and telling them "go do it yourself" is adding anything constructive to the conversation.

    I can understand how you may get tired of people who continually make unreasonable demands, but since I rarely even bother to post on these forums I don't see how you can justify acting that way towards someone who's basically a complete stranger and just trying to engage in a civilized discussion!

  12. 1 minute ago, ChinRey said:

    Isn't that how SL always has worked?

    Seriously, I'm sure I'm not the only one who agrees with Penny that dynamically baked normal and specular maps would be absolutely wonderful. But can you imagine how much work it would take to develop something like that? My head is spinning just thinking about it.

    It is, but that doesn't mean we have to like or encourage it! :)

    As for clothing layers and materials I agree completely, especially when it comes to normal maps, since you can't just slap one normal map on top of another using the alpha channel as a mask. For a start, the alpha channel of the normal map is, I believe, already in use as the specular exponent channel (I guess you could cheat and use the same alpha you're using for the diffuse though). But it would also just look weird, since the height differences of the clothing layers wouldn't be present automatically, so they'd have to add additional height information to each layer in order to mimic the effect of layers of clothing.

    The concept of unlocking the UV mapping on clothing layers so that instead of it always being set to 1:1 it could be scaled and then repeated or used as a decal may be equally complicated, but it's about the most basic addition to the new feature I could think of which would allow a greater scope of use and potentially greater impact when it comes to improving performance beyond just reducing the sheer number of polygons and textures being displayed on some avatars.

  13. 33 minutes ago, Theresa Tennyson said:

    My post was largely a response to the sort of person whose stock reply to everything is, "Well, they should have done that too. Stoopids," without taking into account the amount of effort required, existing conditions, having a coherent plan or offering to do significant work - basically, it's a hat/cattle ratio problem which, considering some people who show this tendency, can be a bit ironic.

    Yes I caught the overall tone of your post but opted to ignore the snark since I assume it's a product of some weird sort of virtual world Stockholm Syndrome.  I really can't think of any other possible reason for someone to suggest that it's the customer's job to do the work of those employed by the company they're paying to provide a service!?  Or are you suggesting that the continued viability and growth of SL as a platform is purely of benefit to its users and in no way is LL profiting from its existence?

  14. 46 minutes ago, Theresa Tennyson said:

    Clothes? Use layers. Already done.

    By "layers" do you mean mesh layers?  Isn't one of the primary reasons LL are implementing this feature to allow creators to stop using mesh layers?

     

    49 minutes ago, Theresa Tennyson said:

    Scars, scales, muscles and fur textures? Applier. Already done. Because these things are generally constant and independent of the color of what they're modeling.

    Since the purpose of a specular map is to vary the color and brightness of the light being reflected off the object, and those change based on the colors in the diffuse map, you need to be able to apply a new specular map when you change the diffuse map if you include any color information in the specular map.
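    A toy sketch of why that coupling exists (this is generic Blinn-Phong-style arithmetic, not SL's actual shader code; the function name is mine):

```python
def specular_highlight(spec_tint, light_color, intensity):
    """Per-channel specular term: specular tint * light color * strength."""
    return tuple(s * l * intensity for s, l in zip(spec_tint, light_color))

white_light = (1.0, 1.0, 1.0)

# A gold-tinted specular map produces yellowish highlights...
gold_highlight = specular_highlight((1.0, 0.85, 0.3), white_light, 0.8)

# ...so recoloring the diffuse to silver without also swapping the
# specular map would leave gold-colored highlights on a silver surface.
```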

     

    While I can see this new feature reducing the number of polygons required for mesh bodies and lowering the number of textures being displayed per avatar, it does seem a little underwhelming when compared to other systems that have multiple texture layers, separate mask channels/maps, different blending/tiling options, etc.  

    It would be nice if they added some extra functionality like the ability to set texture scale on a "clothing layer" so we can use smaller seamless tiling textures as base textures and the ability to set texture offsets and toggle tiling on and off so we can use additional layers for decal maps like detailing, shading, logos, etc.  Adding that sort of functionality would at least give creators a viable alternative to using 1024x1024 textures on everything, which I suspect would have as much impact on performance as removing the additional polygons used to create mesh layers on bodies, and if they were to extend the feature to non-worn items too then it could potentially be used to improve upon/optimize a great deal of existing SL content.

  15. 3 hours ago, Zazaaji said:

    2 - X-Axis read error. Sadly the only fix I've found for this is to delete half the avatar down the middle, and remirror it when perfectly aligned with the axis.

    3 - Mirroring vertices read error. If you mirrored a part of it, make sure all the vertices on the mirrored axis are properly merged.

    Haven't encountered either of these myself, but you may find that deleting the history for the object (Edit > Delete by Type > History) helps.

     

    3 hours ago, Zazaaji said:

    4 - Scale properties of the object do not match the scale properties of the rigging. I do not know how to fix it in Maya sadly.

    5 - Same as 3, but rotation instead of scale.

    Both these issues can be fixed by freezing the transformations on your object (Modify > Freeze Transformations).

  16. 33 minutes ago, Lucia Nightfire said:

    Afaik, supplemental animations are only going to be used with new animation override functions. You might want to confirm that at the next Content Creation meeting.

    Ah, thanks for the clarification.  While I'd heard mention of it at a couple of the meetings I didn't realize it was only going to be implemented for use in the new AO function.  It's a pity though, since being able to simultaneously run multiple animations on different bones within the same rig would allow us to add a much wider range of movement to a single animated mesh using fewer animations, instead of having to create a unique animation for every possible combination of motions the object could perform.

  17. 1 hour ago, Lucia Nightfire said:

    Animated mesh is still an object linkset that has a root link and will allow linking and delinking.

    Legacy prims will perform just as they do with any linkset. They just won't be a part of any animations since those are only rendered with mesh links that have rigging data.

    This will allow ease of building scenes, settings, buildings, furniture, vehicles, etc. and combining them with animated mesh, especially with existing content. It will allow packaging them for rezzers as well.

    Animated mesh is not an only-rigged-links exclusive.

    Ah, you were referring to having actual legacy prims as part of the linkset, I understand now, sorry. :)

    1 hour ago, Lucia Nightfire said:

    As far as land impact based on bones actually being used, wouldn't the animation itself dictate what bones are "dead" or "freewheeling" and not the skeleton? As far as we know there will be no skeleton customization offered.

    Since the introduction of Bento the number of bones in the rig has exceeded the max limit for mesh uploads, so they changed things to allow mesh to be uploaded with only the bones to which it's rigged. Afaik you still have to include the entire skeleton when uploading Bento animations (animations without the "Hip" bone as the root don't seem to work); it's just that any bone which doesn't have transformations applied to it by the animation isn't affected when it's running, so is "freewheeling".  I'd assume that it would be easier to determine how many bones are being used in an animated mesh by looking at how many bones a mesh is rigged to rather than how many bones in the animation have transformations applied to them.

    The other reason for using the mesh to calculate the land impact rather than the animation would be that since we'll be able to add multiple animations to animated meshes and trigger them via script it would mean the LI for those meshes would change depending on which animation is currently playing and I'd imagine that would cause all sorts of problems.

    Since they were also recently discussing the ability to trigger multiple animations on avatars, it will be interesting to see if they do the same for animated mesh so we can have one animation playing to control one part of the mesh and trigger a different animation to move other bones which are rigged to a different area of the same mesh (or perhaps another mesh in the same linkset).

  18. 15 minutes ago, Lucia Nightfire said:

    Personally I don't think land impact should need anything special other than a minimum value, sort of how pathfinding characters are done, but with legacy prim land impact being honored instead.

    How exactly would legacy prim land impact work with animated mesh when it's mesh, not prims?

    57 minutes ago, Lucia Nightfire said:

    So with min([32 or 64,actual land impact]) that only increases with prim torture, material changes or size changes like any normal linkset, I don't see anything else special is needed.

    A minimum is the only thing needed to curb abuse as far as rezzed animated mesh is concerned. That is to prevent rezzing thousands of 1 land impact meshes for tiny critters like bugs or flowers, etc.

    I think keeping the existing formula for calculating mesh land impact and just increasing it based on the number of bones the mesh contains would be a good solution, so that simple animated objects that only require a couple of bones to perform smooth animations like doors, curtains, rocking chairs, etc. would still have relatively low land impact compared to say a fully animated mesh character or creature.
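    Something like the following sketch of that idea — the base cost, the per-bone surcharge and the function name are all hypothetical; LL has not published any such formula:

```python
def animesh_land_impact(base_li, rigged_bones, per_bone_cost=0.5):
    """Scale an object's existing land impact by how many bones its mesh
    is rigged to (all constants here are made-up illustration values)."""
    return base_li + int(rigged_bones * per_bone_cost)

# A simple two-bone animated door stays cheap...
door = animesh_land_impact(base_li=2, rigged_bones=2)        # 3 LI
# ...while a fully rigged creature using most of the skeleton costs more:
creature = animesh_land_impact(base_li=30, rigged_bones=60)  # 60 LI
```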

    There are a lot of potential uses for animated mesh that have nothing to do with NPCs; it would be a shame if we lost sight of that and nerfed other forms of content with unnecessarily high land impact, etc. because we were focusing solely on the most advanced possible application (especially since we don't have any idea what sort of impact animated mesh has on performance yet).

  19. 5 hours ago, Medhue Simoni said:

    We already have impostors for rezzed meshes. They are called LODs. These are going to be really important to animated meshes. 

    Avatar impostors and LODs aren't exactly the same thing though, which is why we have LOD models for rigged attachments to reduce the complexity of the 3D avatars over distance and impostors to limit the number of 3D avatars we see on screen at any one time.  LODs are still simplified 3D models whereas impostors are 2D sprite representations of 3D models which are, one would assume, far less resource intensive.

     As for LODs being really important to animated meshes I'm sure we can all agree that, for responsible content creators, LODs are really important for all meshes not just the animated ones (in fact, depending on how they handle land impact for animated mesh, I'd say they're still more important for worn rigged-mesh since attachments aren't governed by land impact at all).

  20. 1 hour ago, Love Zhaoying said:

    *raises hand* I know! I know! More client-side lag!! :ph34r:

    I guess that depends on how it's implemented, but I'd imagine that it will be less laggy than having every frame of an animation as a separate mesh all linked together as one object with a script constantly switching alphas on and off, and since these will be rezzed objects they'll be subject to the restrictions of land impact, which means they shouldn't be anywhere near as laggy as a lot of current avatars, some of which wear millions of rigged, animated polygons at a time without any restrictions (although that does raise the question of whether LL will extend the "jellydoll" feature to apply to animated mesh as well as avatars, but I suspect the performance issues will be as much about the number of animated meshes on screen as about their complexity, so maybe they'll also implement impostors and a "max # of non-impostor animesh" setting similar to the one they have for avatars).

  21. 1 hour ago, Love Zhaoying said:

    Ugh, I just realized this will be used for "breedables" and pets to make them more "lifelike". Not sure why that annoys me.

    I'm not sure why either!  Animated mesh would mean smoother, more realistic motion and less lag, what could possibly go wrong? :D ¬¬

  22. 8 hours ago, Lucia Nightfire said:

    One thing I'm still going to continue to ask to be considered is the ability, by script, to assign a body shape and avatar physics from the object's inventory so we can use existing fitted mesh content and not have to reinvent the wheel faking it with non-intuitive, completely redesigned, custom animations, clothing and bits. The adult industry is another huge market for this and it will demand the use of standard fitted mesh content.

    I must admit I have very little idea about how the internals of avatar physics layers work in relation to the avatar skeleton so can only guess whether that part would be possible, but if it is then it could potentially be a basic form of flexible mesh, since you could re-position the bones that are affected by avatar physics, rig the mesh to those and then adjust the settings on the physics layer to control how the "fleximesh" behaves when it moves.

    As for avatar shapes, since they're basically just a list of bone transformations one would think that it should be possible to apply them to any skeleton in which those bones exist.

    One "must have" feature for animated mesh, in my opinion, would be variable animation speeds with interpolation between frames, so that we can control the speed at which animations will play while still having them play smoothly.  A simple example of this would be an animated mesh vehicle with an animation containing one full revolution of the wheels.  Using a simple formula like FPS = (speed / circumference) * total_frames, you could calculate the exact speed that the animation should be playing at for the wheels to "roll" along the ground rather than spinning or sliding.  A similar formula could be applied to any animated mesh object that walks, crawls, rolls or slithers in-world and would remove the need to create and upload multiple animations in order to provide a range of movement speeds for animated meshes.
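    The rolling-wheel formula can be sketched like this (the function name, wheel size and frame count are illustrative assumptions):

```python
import math

def wheel_animation_fps(speed, wheel_radius, total_frames):
    """Playback rate (frames/sec) so one animated wheel revolution
    rolls without slipping at the given linear speed (m/s)."""
    circumference = 2 * math.pi * wheel_radius
    return (speed / circumference) * total_frames

# A 0.5 m radius wheel animated over 30 frames, on a vehicle moving at 5 m/s:
fps = wheel_animation_fps(speed=5.0, wheel_radius=0.5, total_frames=30)  # ~47.7
```

    The same calculation generalizes to any gait cycle: swap circumference for the distance one loop of the animation covers on the ground.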

    Having a way to control the speed an animation plays at would mean more realistic movement for animated meshes in-world, less work for creators, and less lag and bandwidth usage for end users (fewer animations to download, no lag when one animation ends before the next has loaded, etc.).

  23. 1 hour ago, Medhue Simoni said:

    How many flies do you think I can rig to the skeleton? Probably dozens! All with moving legs and flapping wings! Imagine this swarm buzzing around your next SL picnic.

     

    As a big fan of Ashley A. Adams' work, especially some of her latest phobia series bug-monster sculpts, I'm looking forward to seeing a whole host of creepy and disgusting looking things in SL once this feature is implemented.

    I must admit that upon hearing that they weren't going to support custom rigs I was initially a little disappointed, but given the number of bones available for use in the Bento skeleton I think we should have plenty of options for creating weird and wonderful new things, and perhaps replacing some of the inefficient, lag-inducing weird and wonderful old things too.

  24. 6 minutes ago, Love Zhaoying said:

    Y'all are sounding like a Dilbert strip with your "Use Cases". :P

    LOL!  I've been completely nerding out ever since this got announced, I'm actually having trouble sleeping at night because I lie awake thinking about all the awesome things that can potentially be done with this new feature. :D
