
Fluffy Sharkfin


Everything posted by Fluffy Sharkfin

  1. The recent email that Qie just quoted from made mention of "new products with more flexible pricing" which is probably the basis for the assumption, but yes at the moment it is all just speculation.
  2. By "server scripts" I mean scripts for network vendor servers, or any other type of script that sends data to and receives data from other objects around the grid. As for code rewrites for virtual pets, it won't really matter what code is in the scripts: if an on-demand sim goes offline when there are no avatars present, then that sim would no longer be available to run scripts, so no code would be executed and anything attempting to communicate with objects on that sim wouldn't receive a response.
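As an aside, any script that talks to objects on an on-demand region would have to treat "no reply within a timeout" as a failure rather than waiting forever. SL scripts are written in LSL, so this is only an illustrative Python sketch of that timeout-guarded request pattern; every name and value here is hypothetical:

```python
import time

def send_request(region_online):
    """Hypothetical stand-in for a cross-region message; an offline
    on-demand sim simply never answers, modelled here as None."""
    return "pong" if region_online else None

def query_with_timeout(region_online, timeout=5.0, poll=1.0):
    """Retry until a reply arrives or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        reply = send_request(region_online)
        if reply is not None:
            return reply
        time.sleep(poll)
    return "offline"  # give up: the sim is presumably asleep

print(query_with_timeout(True))                          # pong
print(query_with_timeout(False, timeout=0.2, poll=0.1))  # offline
```

The point is only that the sender, not the sleeping sim, has to carry the fallback logic.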
  3. My guess would be a sim which is only running when avatars are connected to it? The only reason you'd need a sim to keep running when there are no avatars on it is if you have server scripts or virtual pets, etc. I'd imagine on-demand sims would be an ideal solution for a lot of residents who want to own an island but don't want to pay a fortune for it.
  4. Given the popularity of gacha resales (at least I'm assuming they're popular based on the ratio of gacha items vs regular products listed on the marketplace some days) I'd imagine that the first few people to list items from newly opened events would have less trouble/make more profit reselling their unwanted duplicates than those listing them once the event has been open a while, the marketplace has become saturated and the inevitable price wars have begun. That's about the only sensible reason I can think of for needing to get into events as soon as they open, but I suspect for the majority of people it's more about being the first to wear it in public so they can complain about how everyone is copying their style once all their friends start wearing the same thing.
  5. What is it? Animesh is a new feature which will allow creators to animate any rigged mesh object without it needing to be attached to an avatar (essentially you can rez a pair of pants, drop an animation in them and make them walk/dance/whatever). It could potentially be used to create anything from simple animated objects like doors, rocking chairs, bouncing balls, etc. to full-blown animated NPCs and pets, as well as visual effects like smoke, fire, water, etc. (and, knowing SL, no doubt a whole bunch of other bizarre stuff too). Basically, anything that's currently animated using alpha swapping or scripted movement could benefit from being converted to animesh, although there's no way of telling what it will actually be useful for until it's implemented and we find out what limitations are going to be placed upon it, i.e. land impact, etc.
Where to get it? Currently the only people with access to animesh are the Lindens working on developing the feature; however, there are a lot of very eager creators waiting to get their hands on a project viewer, so once animesh gets released I expect there'll be a surge in available animesh content. I know I'm very much looking forward to playing with the new feature (as Peter Quill/Star-Lord so eloquently put it in Guardians of the Galaxy Vol. 2... "I'm gonna make some weird *****!")
  6. Maybe I'm jumping to the wrong conclusion but I assumed the OP meant to say "i have been blamed for removing these items which i didn't do so wanted to find out why it shows (???)"
  7. You may have more luck looking at the Data Transfer Modifier. Drongle McMahon gives a very good explanation of the process in this thread Blender & normal editing - rounded edges
  8. Yes, my comment wasn't really relevant to the OP's problem, but I thought it worth mentioning in case anyone read it and assumed that what you see in Maya is what you'll get in SL.
  9. A logical assumption, but that isn't always the case with Maya. Maya materials can utilize a second texture for transparency, as is the case when using the Transfer Maps feature to bake texture information from a high-poly model to a low-poly "game ready" model. Since Transfer Maps doesn't include transparency in the Diffuse map, you have to bake a separate Alpha map, which Maya then automatically attaches as an input for the transparency channel of the new shader. When viewing the resulting shader in Maya the transparency appears as normal, even though the Diffuse texture is only 24-bit and the alpha is being handled by a completely separate texture, so in order to upload it to SL you'd need to manually composite the RGB channels with the alpha channel in Photoshop first. I find it's usually a good idea to open up your textures in Photoshop and check your alpha channel there before uploading to SL.
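For anyone doing that compositing step outside Photoshop, the operation itself is trivial: append each pixel of the separate Alpha map to the corresponding RGB pixel of the Diffuse map to get the single 32-bit RGBA image SL's uploader expects. A minimal, purely illustrative Python sketch, using a hypothetical 2x2 texture as flat pixel lists rather than real image files:

```python
# Hypothetical 2x2 textures, flattened to lists of pixels.
diffuse = [(180, 40, 40)] * 4   # 24-bit RGB diffuse bake
alpha   = [255, 128, 128, 0]    # separate greyscale alpha bake

# Composite: attach each alpha value to its RGB pixel, producing
# 32-bit RGBA data, i.e. the alpha channel merged into the diffuse.
rgba = [(r, g, b, a) for (r, g, b), a in zip(diffuse, alpha)]

print(rgba[1])  # (180, 40, 40, 128)
```

In practice you'd do the same merge with an image editor or an imaging library rather than raw pixel lists; the channel arithmetic is identical.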
  10. Yeah, I think the last two times I've been to their site it's been the final day of a sale, but I already signed up for lifetime upgrades to the pro version a year or two back. It's definitely worth exploring the filter editor rather than just playing with the settings for existing filters; with the latest version you can even perform some pretty advanced image manipulation in addition to creating procedural textures.
Also, there is a fourth way to use Filter Forge to create textures for SL. Most of the time I use it to generate a set of base textures (diffuse, height, specular, ambient occlusion, etc.) then import those into an app like Substance Painter or 3D Coat, where they can be used to create PBR (Physically Based Rendering) shaders which use the geometry of the model to calculate where the edges and indents, etc. are, allowing you to automatically apply wear to edges, dirt to cracks and crevices, and so on. Being able to generate multiple versions of the same texture with additional effects like dirt, weathering, corrosion, etc. is ideal for a PBR workflow, since you can then simply apply each on a separate layer and tweak the settings to control where on the model a particular version of a texture appears and how it blends into the layers above and below. Those layers/shaders can then all be baked to a single set of textures (diffuse, normal & specular) and uploaded and applied to your mesh in SL.
The latest texturing applications are quite advanced and will do a lot of the work for you in some respects; however, it still requires some knowledge of the concepts of lighting & contrast, color theory and so forth to know how to set up all these shaders and what settings to use in order to get the best results. And of course, as with any automated process based on procedural textures, the quality of the end result can always be improved upon significantly with a final pass by a skilled texture artist.
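The mask-driven layering those tools do can be sketched in a few lines: a "dirt" layer shows through over a base colour wherever a greyscale mask (for instance a baked ambient-occlusion map) says so. This is only an illustrative toy, not any tool's actual pipeline; all names and colours are made up:

```python
def blend(base, layer, mask):
    """Per-pixel linear blend: mask 0 keeps base, 255 shows layer."""
    out = []
    for (br, bg, bb), (lr, lg, lb), m in zip(base, layer, mask):
        t = m / 255
        out.append(tuple(round(b * (1 - t) + l * t)
                         for b, l in zip((br, bg, bb), (lr, lg, lb))))
    return out

base = [(200, 60, 60)] * 3   # brick-red base colour, 3 sample pixels
dirt = [(40, 35, 30)] * 3    # dirt layer colour
mask = [0, 128, 255]         # e.g. an AO bake: crevices get the dirt

print(blend(base, dirt, mask))
```

Real texturing apps stack many such masked layers (curvature, AO, grunge maps) before baking everything down to one diffuse/normal/specular set.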
  11. Doh, sorry! Still, at least I didn't let on about the 80% sale they're having that ends today!
  12. I think the placeholder texture option would probably be ideal given the limited scope of what they're trying to accomplish with this feature, I can even see it getting some limited use on other attached objects, despite the lack of material support. Honestly, on reflection, while I'd love to see some sort of support for layered textures in SL I wouldn't want to see them try and extend this feature much further than what they're already trying to accomplish simply because the use of wearable system avatar layers doesn't make nearly as much sense when applied to non-attached objects. Of course it would be nice if they could find some way to support materials with this, but if they can't then the fact it will provide a way to composite wearable "alpha layers" and apply them to mesh bodies is still a notable improvement on what we currently have.
  13. If I had to guess I'd say it was Filter Forge, which has a huge library of user created procedural textures which can be used to generate near-infinite variations (in most cases complete with appropriate normal map, etc).
  14. The voxel building tools in Landmark were actually very impressive. If LL were to implement a similar in-world building system where people could create things using basic voxel building tools and then have a mesh object automatically generated, based on a similar process to the "wrap" method they use to calculate physics models in the mesh uploader (you could even use the same mesh upload window for generating the LOD models), that alone would be an amazing addition to in-world creativity. Of course you'd have the issues of UV mapping, but even if it were limited to planar mapping only it would allow users to create all sorts of organic shapes and objects for landscaping and scenery, etc. that you simply can't do efficiently with prims.
  15. The rest of your post made sense, but this part seems a little vague. The viewer will be able to apply the baked texture to a mesh object instead of the avatar mesh, and will be able to route it to the appropriate faces of the mesh the same way applier systems do now... but who tells the viewer which faces are the appropriate ones? Are they going to present the user with a list of faces 0-7 and let them guess, or will the information about which faces to apply the baked texture to be somehow embedded in the system avatar layers that the user wears? I think the idea of getting rid of multiple faces for alpha cuts and reducing the use of onion-skinned meshes for tattoo layers etc. is great, and hopefully creators of mesh bodies will adopt the feature and change the way they make their bodies accordingly. Having clothing layers for mesh bodies as a separate attachment would not only solve the issue of having to select which faces a baked texture is applied to but would also give us an easy way to reduce the number of excess polygons we wear, so that's also great. However, if their intention is to extend this to other types of mesh items (which seemed to be the case based on what was said at previous meetings) then they'll need to support things like linksets and multiple faces, and in order to do that I think they're going to have to come up with something a lot more complex than what you're suggesting.
  16. I'm curious as to how you envisage these textures being selected for baking. Currently for system avatars it's based on what clothing layers (i.e. the asset type clothing layers in inventory, not the mesh onion skin thing) we wear, but I just don't see that working for mesh bodies because there HAS to be some way to apply these to select faces rather than the entire object. The UUID for the baked texture would have to be accessible via script in order for it to be applied, creators won't simply be able to provide a "layer" asset with mesh items that a user can wear and magically have it applied to the correct faces of the correct attachment.
  17. So if a mesh body has clothing layers attached to it, those would be either additional faces or part of the same linkset as the mesh body, which raises the question of how we're going to specify which faces/parts of the linkset the clothing layers we wear will be applied to, unless you want freckles and tattoos on your shirts and jackets as well as your skin?! I've already explained in this thread why that assertion is wrong: normal maps may be unaffected by colors in the diffuse map, but the same cannot be said for specular maps!
  18. Okay, if you're talking about the new feature being used purely as a way to apply the equivalent of a body alpha to mesh bodies, and using it to apply simple decals like tattoos and freckles etc., thereby reducing the number of layers of polygons used and the number of sub-objects/faces used to accommodate alpha layers in HUDs, then yes, I'd agree this feature will probably work quite well (although I still think the issue of having to choose which attached mesh it's applied to, and therefore having to apply it multiple times if you have separate hands and feet, is going to be potentially unpopular). If that's really all that LL are trying to accomplish by implementing this new feature then I guess it will more or less achieve their goal; however, I still think it would be nice if this, at some point in the future, led to them expanding the functionality of the texture baker to include more advanced methods of texturing which would allow creators to optimize and further improve the quality of content in general.
  19. While the same functionality can be achieved using clothing layers and separate applier HUDs, the effect on usability should also be considered. Using the current system, a resident purchases an item of clothing, opens their inventory, wears the HUD, selects the item they wish to wear, and with the press of a single button all three required textures are applied to their avatar. Using the new system with clothing layers and corresponding HUD, a resident purchases an item of clothing, opens their inventory and finds the correct version of the clothing layer for the item they want to wear (we'll just assume that creators will be helpful here and not name the different color variations with whimsically abstract names like "Springtime", "Frappuccino" and "Zowie!"), they then right click the layer and choose which of the meshes currently attached to them they wish it to be applied to, then they wear the HUD for the item and find the button that corresponds to the clothing layer they just applied and press that to apply the other textures required for materials, overwriting any other materials that are necessary for any other layers they may be wearing, since there's no texture baking for normal or specular maps so only one of each can be applied at a time. The above is assuming that this feature will be used solely for mesh bodies, because without LL somehow adding in the ability to select which faces/parts of a linkset the layer is going to be applied to I don't see this feature getting much use for any other type of mesh items, and since we currently can't specify names for each individual material/face when uploading mesh I can't imagine how they could even begin to incorporate that functionality into clothing layers. 
None of this seems like a step forward in terms of usability, and I don't see the majority of residents welcoming this added complexity in the process of dressing themselves, considering one of the most common complaints about SL is its steep learning curve. Also, I'm still unclear on how exactly people are going to specify which order the layers appear in, unless in order to wear a layer beneath other layers they're expected to strip down and reapply them all in the correct order. And what about residents that wear a mesh body but then use separate hands and feet: if they try to wear a layer that should cover both, will they have to apply that clothing layer to each attached object separately? I'm not saying that the entire feature is pointless, but just because something is technically viable and would improve performance you can't assume that people will adopt it regardless of how much more complicated it makes things or how much it impedes their ability to use existing features that they're already accustomed to. Usability may not seem important in comparison to improvements in performance, but the fact is that if end users (i.e. customers) don't like it, how long do you think it will be before creators (i.e. merchants) refuse to use it? Frankly, if they can't add some functionality which allows creators to somehow improve the quality of the content they produce, then from an end-user's perspective this feature is going to seem like a whole lot of stick with very little carrot attached!
  20. Assuming that they make things for the purpose of selling them, marketing is just as important as creative ability in SL these days. Not everyone is motivated by profit, especially when you're talking about artists and those driven to creativity rather than drawn by greed!
  21. Ahh, no, I think you misunderstand my meaning. The tiling of seamless textures would occur pre-bake, so that the resulting baked texture would still be a single non-tiled texture, but any of the layers used to bake that texture could contain low-resolution, seamlessly tiled textures (in the case of a dress it could be patterned fabric, in the case of a static object a brick texture). Then on top of that you could apply a single non-tiled texture with windows, shadows, etc., or even a texture as a decal with tiling turned off and the scale and offset set so it appears in a certain place. This would mean that instead of needing a 1024x1024 texture containing the repeated seamless brick texture complete with windows, shadows, etc., you could make do with a 128x128 seamless brick texture and a single window texture tiled several times for the windows, with a non-repeating decal map for the door (or alternatively think plaid patterns, buttons, stitching, etc.). Basically, a way to have more control over the way the textures of each layer are handled before the final bake is created and passed to the object. I'll admit I don't have enough technical knowledge to be able to predict which would cause more lag, though I suspect it depends on the type of performance issues you're talking about. While a high polygon count would probably cause more of a slowdown when rendering, wouldn't a high volume of texture data cause more of an issue with RAM than the number of polygons?
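The savings are real in pixel terms: a 1024x1024 map holds 1,048,576 pixels versus 16,384 for a 128x128 tile, a factor of 64. The pre-bake "tile a small base, then stamp a decal at an offset" idea can be sketched as a pure-Python toy, with single characters standing in for pixels; all sizes and offsets here are illustrative:

```python
# 2x2 seamless base tile (think: tiny brick texture) and a 1x1
# decal with tiling off (think: the door texture).
TILE  = [["b", "B"],
         ["B", "b"]]
DECAL = [["d"]]

W, H = 8, 4  # size of the final baked texture

# Step 1: repeat the base tile across the whole bake.
baked = [[TILE[y % 2][x % 2] for x in range(W)] for y in range(H)]

# Step 2: stamp the non-tiled decal at a fixed offset (3, 1).
ox, oy = 3, 1
for dy, row in enumerate(DECAL):
    for dx, px in enumerate(row):
        baked[oy + dy][ox + dx] = px

for row in baked:
    print("".join(row))
```

The output shows the repeating base pattern with the single decal pixel embedded, which is exactly the compositing order described above: tiled layers first, decals on top, one flat result passed to the object.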
  22. Yes, and if I'd come onto this forum and started a thread demanding some major, impractical, ill-thought-out change purely for my own amusement/benefit, then I could understand your response. However, the whole topic of this thread is the fact that LL have announced this new feature, and I'm simply trying to suggest ways in which it could be expanded on so that it may be of maximum benefit to everyone. I really don't think giving people a link to the source code and telling them "go do it yourself" is adding anything constructive to the conversation. I can understand how you may get tired of people who continually make unreasonable demands, but since I rarely even bother to post on these forums I don't see how you can justify acting that way towards someone who's basically a complete stranger and just trying to engage in a civilized discussion!
  23. It is, but that doesn't mean we have to like or encourage it! As for clothing layers and materials, I agree completely, especially when it comes to normal maps, since you can't just slap one normal map on top of another using the alpha channel as a mask. For a start, the alpha channel of the normal map is, I believe, already in use as the specular exponent channel (I guess you could cheat and use the same alpha you're using for the diffuse, though), but it would also just look weird, since the height differences of the clothing layers wouldn't be present automatically; they'd have to add additional height information to each layer in order to mimic the effect of layers of clothing. The concept of unlocking the UV mapping on clothing layers, so that instead of always being set to 1:1 it could be scaled and then repeated or used as a decal, may be equally complicated, but it's about the most basic addition to the new feature I could think of which would allow a greater scope of use and potentially greater impact when it comes to improving performance, beyond just reducing the sheer number of polygons and textures being displayed on some avatars.
  24. Yes, I caught the overall tone of your post but opted to ignore the snark, since I assume it's a product of some weird sort of virtual-world Stockholm Syndrome. I really can't think of any other possible reason for someone to suggest that it's the customer's job to do the work of those employed by the company whom they're paying to provide a service!? Or are you suggesting that the continued viability and growth of SL as a platform is purely of benefit to its users and that LL is in no way profiting from its existence?
  25. By "layers" do you mean mesh layers? Isn't one of the primary reasons LL are implementing this feature to allow creators to stop using mesh layers? Since the purpose of a specular map is to vary the color and brightness of the light being reflected off the object, and those change based on the colors in the diffuse map, you need to be able to apply a new specular map when you change the diffuse map if you include any color information in the specular map. While I can see this new feature reducing the number of polygons required for mesh bodies and lowering the number of textures being displayed per avatar, it does seem a little underwhelming when compared to other systems that have multiple texture layers, separate mask channels/maps, different blending/tiling options, etc. It would be nice if they added some extra functionality like the ability to set texture scale on a "clothing layer" so we can use smaller seamless tiling textures as base textures and the ability to set texture offsets and toggle tiling on and off so we can use additional layers for decal maps like detailing, shading, logos, etc. Adding that sort of functionality would at least give creators a viable alternative to using 1024x1024 textures on everything, which I suspect would have as much impact on performance as removing the additional polygons used to create mesh layers on bodies, and if they were to extend the feature to non-worn items too then it could potentially be used to improve upon/optimize a great deal of existing SL content.