
Cyrule Adder


Posts posted by Cyrule Adder

  1. On 5/11/2021 at 7:30 AM, arabellajones said:

     

    3: LOD models. These can be generated in the import process, which may be good enough. For some things a very simple mesh may be enough for the lowest LOD. Maybe an inside wall for a house. Who is ever going to see it, through other walls, at the lowest LOD? It's still needed, but it needs to have the same number of texture faces.

    4: UV mapping: how the usually non-flat surface of the model is split up and fitted onto 1 or more flat textures. Compare how a spherical prim is UV mapped to what the flexibility of mesh can give you, something like a Goode homolosine projection.

    [image: Goode homolosine projection]

    I don't think the standard spherical prim uses a Mercator projection but the difference is getting a bit picky. There's a huge amount of stretching at the poles. This may be the most complicated part of making a mesh. But, when it is done well, it can let you use bump maps for detail.

    None of this needs Blender-level tools.

    The homolosine projection is not well suited for UVs. The reason it is not used for any sort of UV projection is that it wastes a lot of texture space. Notice how much of that image is just negative space, where a typical UV sphere covers the entire image? An efficient UV layout not only makes better use of space, it can let you reduce the size of the image file (and thus the strain on people's computers), and it increases the texel density, allowing more detail to be seen the closer you get without increasing the image's size.
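    To put rough numbers on the texel-density point, here's a quick sketch. The function and all of the coverage figures are hypothetical, just to show how a tightly packed UV layout lets a smaller texture compete with a bigger, wasteful one:

```python
# Rough texel-density comparison: pixels of texture per meter of surface.
# All numbers are illustrative, not measurements of any real mesh.

def texel_density(uv_area_fraction, texture_size_px, surface_area_m2):
    """Average texels-per-meter for a UV island.

    uv_area_fraction: share of the 0..1 UV square the island covers
    texture_size_px:  width/height of the (square) texture
    surface_area_m2:  real-world area the island maps to
    """
    texels = uv_area_fraction * texture_size_px ** 2
    return (texels / surface_area_m2) ** 0.5  # px per meter

# A packed layout using 90% of a 512 texture vs. a wasteful layout
# using only 45% of a 1024 texture, on the same 1 m^2 of surface:
tight = texel_density(0.90, 512, 1.0)
loose = texel_density(0.45, 1024, 1.0)
print(round(tight), round(loose))  # 486 687
```

    In that example the packed 512 texture reaches about 70% of the texel density of the half-wasted 1024 one, at roughly a quarter of the memory cost.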

    The purpose of the homolosine projection is that it preserves the relative areas and shapes of the landmasses, where the normal projection of the globe onto a UV sphere will stretch and distort the continental landmasses. Unnoticeable on an actual 3D model, but useless for scientific purposes.

    Quote

    The Goode homolosine projection is a pseudocylindrical, equal-area, composite map projection used for world maps. Normally it is presented with multiple interruptions. Its equal-area property makes it useful for presenting spatial distribution of phenomena.




    So the ability to make your own UVs is actually incredibly important in the world of CG. I suppose there could be an argument for writing them by hand if you truly want to.

    As for LOD models: this is not wrong, but it is also not always correct. The issue with LODs generated by SL is that the generator does not actually try to preserve your shape. Paid tools often do a much better job of estimating which polygons can be reduced at various camera distances, based on the area of the triangles. Additionally, generating LODs directly in SL is not always the best choice, depending on what the model is for.

    For example, if you try to reduce the end cap of a sphere, it will look like a cube when you LOD it through SL. If you hand-edit it, you can actually reduce the polycount by 80% more than SL can, and still retain the look of roundness from all angles.

    Lastly, there's the issue with animated meshes. Animated meshes generally cannot go through generation methods, because those methods do not respect the original topology of the model that allowed the mesh to bend and contort evenly over the bones. If you read some of the topology-for-animation articles, you can see that there are typically special topology patterns used for joints, the spine, the rib cage, places that need to keep their general shape while bending, and even smaller things like dimples that become visible when you smile.

    • Like 1
  2. 5 hours ago, UnPandaRojo said:

    Wow! I call that  memory and performance savings (sarcasm)

    Kinda impressive when the CATWA head shapes aren't that unique from the competition...

     

    But what you've got there is fine. 5k polygons is still very efficient. If you're still concerned about performance, the rest of it would be in the LODs, where you can start removing loops, deleting the teeth and tongue, etc.

    • Thanks 2
  3. 8 minutes ago, Wulfie Reanimator said:

    This is kind of a disingenuous point. While it's true, the environments those bodies are in are completely different. Context matters.

    Final Fantasy 14 (a 'massively multiplayer' game) for example has a standard body type that is used for almost all races (and the two recently new races that don't use it are suffering from lacking outfits), and the outfits will outright replace parts of the body with optimized pieces. This means that, for example, a top with exposed shoulders won't have any topology for the chest or arms. Any exposed skin is part of the top itself, not the original avatar.

    Meanwhile in Second Life the body and outfit are generally made entirely separate from each other, by different people with independent goals. One creator puts all their effort into producing a generalized naked body with its own set of features, while another creator puts all their effort into producing an outfit with its own set of features and retrofitting it to one or more of those bodies. This means that while you can hide parts of the body, that technically unnecessary topology is still there.

    This is true, but the point I was making was more about Maitreya's awful level of optimization. It's not exactly possible to compensate for the user's additional geometry without asking them to cut off the parts of the body they don't use (which are in fact removable, since many bodies split off pieces for the alpha layers).

    • Haha 1
  4.   

      

    On 4/12/2021 at 10:29 PM, Gabriele Graves said:

    Sorry but if you think that the Kemono body proves that 30K looks as good as a M. Lara, you just don't see what myself and the rest of the market does.  This is what I mean about the subjective difference between what one group of people would be happy with versus another.  I remain to be convinced it would be enough to recreate M. Lara indistinguishably from the 176K it is today.

    The majority of AAA games will have hero characters of 15k to 40k polygons on average. This is for the entire body, including head, hands, feet, and clothing. Higher polygon counts do not mean quality, nor do they make a mesh look "smoother". Especially when we consider that some MMOs like Final Fantasy, Guild Wars, and so forth have characters with polygon counts no higher than 20k in total. And despite the advance of technology, those polygon counts have not changed much. Not because GPUs cannot support more, but because the methodology is: use them where you need them.

    The issue with Maitreya's 176k polygon count (with no LODs) is that most of the polygons are absolutely not necessary.
    Does it lend to the smoothness of the model? No. If I can draw a single line and it's tangent to 4 or 5 polygon loops with barely a noticeable change, then the excess polygons are unnecessary.

    The second issue with Maitreya's massive polygon count is micro-triangles: triangles that become smaller than a pixel. The problem with these is that they may have no impact on the final image, or cause the same pixel to be redrawn multiple times, yet the GPU still has to do all the math and lighting for them. For a single avatar this is nothing. But consider how popular the body is: you can easily have 12 Maitreya users on screen at a time, and in many cases up to nearly 80, which will start dumpstering people's computers no matter how good the hardware is.
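    A back-of-the-envelope sketch of the micro-triangle problem. The FOV and resolution below are illustrative assumptions, not any viewer's real settings:

```python
import math

# Approximate on-screen size of a triangle edge seen face-on.
# Illustrative numbers; real viewers differ in FOV and resolution.

def projected_pixels(edge_m, distance_m, screen_h_px=1080, vfov_deg=60.0):
    """Approximate height in pixels of an edge at a given distance."""
    half_fov = math.radians(vfov_deg) / 2
    return (edge_m / distance_m) * (screen_h_px / 2) / math.tan(half_fov)

# A 1 mm triangle edge (dense body mesh) viewed from 10 m away
# covers well under one pixel, yet the GPU still shades it:
print(projected_pixels(0.001, 10.0))
```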

     

    TLDR: a 30k model will look just as good as a Maitreya.

     

    On 4/14/2021 at 3:45 AM, Gabriele Graves said:

    It would be a pretty unprecedented move.  A technical limitation would be difficult to do without affecting lots of other content and has the added danger of killing the golden goose of the fashion economy.  I probably wouldn't stick around, there is only so much going back to square one a person can take and that would be a pretty massive one.

    Doing it via policy would need policing and we have all seen how effective that is in other areas as it needs a serious investment to provide enough staff to make it effective.

    If it didn't affect the bodies in people's inventories then people just wouldn't upgrade.  M. Lara still has lots of v4.1 people from a couple of years back, heck there are still people using v3.5 from at least 4 years ago.

    So yeah they "could" do it.  I doubt that will happen personally considering the road behind us.

    Yeeaaaaah. Unfortunately the most they can do is set some guidelines, but they can't really enforce them without hurting themselves. And that might not change much, especially since I can name at least 100 items on the marketplace where a screw no larger than 3 cm has a 1024x1024 texture and roughly 3k polygons, and these objects are advertised as "HD".
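    For scale, here's the simple arithmetic on what a texture like that costs in VRAM. This is plain uncompressed math with a rough mipmap allowance, not a claim about how any particular viewer allocates memory:

```python
# Uncompressed VRAM cost of a square RGBA texture, with roughly one
# third extra for the mipmap chain. Simple arithmetic only.

def texture_vram_mb(size_px, bytes_per_px=4, mipmaps=True):
    base = size_px * size_px * bytes_per_px
    if mipmaps:
        base = base * 4 // 3  # mip chain adds about one third
    return base / (1024 * 1024)

print(texture_vram_mb(1024))  # ~5.33 MB for one screw's texture
print(texture_vram_mb(128))   # ~0.08 MB would be plenty at that size
```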

    • Haha 1
  5. 52 minutes ago, OptimoMaximo said:

    Adding on top of that, remember that UDIM is good for ease of work, but then SL as any other game engine does not support that feature, and you must assign materials prior to upload to reflect the texture separation and be able to assign them to the correct set of faces.

    Actually, the export handles that for you. The only thing you need to be mindful of is that if your mesh uses more than 8 materials, you will need to start splitting up the mesh.

    Adding onto what Optimo said: you can set up UDIMs, which will allow you to paint on two or more textures at once rather than needing to swap materials. You'll find this is relatively important if you want to minimize the appearance of seams, especially given that the SL UV/Omega UV sets have seams all over the place.

    For a mesh set of nails? I'd say 20 dollars. Rigging the nails honestly wouldn't take more than thirty minutes. The reason I say twenty dollars is that this is a minor item someone could honestly have learned to do themselves. They can either pay you a relatively high price for a low-effort job, or learn to do it themselves.

    Without looking at your textures, this is my best guess at what is happening.

    When you bake your textures, you're not adding the 'skirt' (also called padding or margin), which basically tells the software to extend the pixels at the edge of the UV seams so they can be blended in.

    Additionally, it's also possible that your UVs are just horrid in general. And yes, how you UV your mesh does actually matter. If the UVs are not evenly distributed, you will start seeing issues like these arise near the edges as well, due to one plane having significantly lower texel density than the other. It also helps to hide your UV seams, to ensure the edges can't be seen if they cannot be fixed.

    158 MB of VRAM, what in the *****...

    But yeah... LI in Second Life is heavily influenced by the LODs. A common hack people use to save LI, at the expense of lower-spec machines, is to simply upload a single triangle as their lowest LOD and nothing else. So an LI of 50 suddenly drops to four or two. You can further reduce the LI by simply not having a collision mesh at all.
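    A sketch of why that single-triangle trick bites small objects hardest. The thresholds and LOD-factor value below are illustrative stand-ins, not the viewer's actual constants; the point is only that swap distance scales with the object's bounding radius and the user's LOD factor:

```python
# Toy model of LOD switching: an object drops to a lower LOD when its
# apparent (screen) size falls below a cutoff. Cutoffs here are made up.

def lod_level(radius_m, distance_m, lod_factor=1.25):
    """Return 0 (high) .. 3 (lowest) for an object at a given distance."""
    apparent = radius_m * lod_factor / max(distance_m, 0.001)
    thresholds = (0.30, 0.12, 0.03)  # illustrative cutoffs, not SL's
    for level, cutoff in enumerate(thresholds):
        if apparent > cutoff:
            return level
    return 3

# At 30 m, a 0.5 m object is already at its lowest LOD (a lone
# triangle, if the creator cheaped out), while a 5 m one is not:
print(lod_level(0.5, 30))  # 3
print(lod_level(5.0, 30))  # 1
```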

    Basic English breakdown for a more direct answer.

    Here's the settings you need.

    Diffuse map: the RGB channels are your normal diffuse. The alpha channel is for glow maps, alpha transparency, etc.

    In your normal map, the normals go in the RGB channels. But the alpha is actually important here: for some f*cking reason, Second Life has been designed so that the specular map lives in the normal map's alpha channel. Go figure.

    The RGB channels of the specular map are actually the specular color. If you're doing anything non-metallic, for realism this should be white, but you can do whatever you want with this channel. The environment map is located in the alpha channel of the specular map.

    The specular map affects the highlights you receive from projectors, meaning it becomes useless in sunlight or no light.
    The environment map affects the reflectivity of the surface. This also means you get highlights from the sun and moon, as well as reflections of the environment.

    To convert the Substance diffuse to a usable SL texture, you need to multiply the AO map onto the surface. Using concavity maps, curvature maps, and what have you can also go a long way. And you may need to paint false lighting information onto the surface to give it oomph where the SL renderer cannot (trust me, it will look flat without that aid).
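    The AO-multiply step is just a per-pixel multiply blend, the same thing a Multiply layer does in Krita or Photoshop. A minimal sketch in plain byte math:

```python
# Multiply-blend: darken a diffuse pixel by a grayscale AO value.
# One pixel here stands in for iterating over the whole map.

def multiply_ao(diffuse_px, ao_value):
    """Multiply-blend one (R, G, B) pixel by an AO value in 0-255."""
    return tuple(c * ao_value // 255 for c in diffuse_px)

# Fully lit AO leaves the color alone; darker AO darkens the diffuse.
print(multiply_ao((200, 150, 100), 255))  # (200, 150, 100)
print(multiply_ao((200, 150, 100), 128))  # (100, 75, 50)
```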

    • Thanks 1
  11. On 12/27/2020 at 7:57 PM, Quarrel Kukulcan said:

    Once you've created a normal map, you won't see its effect on your low-poly model unless you use it as part of the low-poly model's display parameters/material/whatever your software calls it (or upload the normal map texture and assign it to your low-poly mesh to SL). (Also, SL won't show it unless you have Advanced Lighting on, which you probably do unless you're on a slow laptop -- I'm just covering all the bases here.)

    It helps to do something that seems brainlessly simple when trying a new technique, like here: I'd start from an empty file, make a rectangle, sculpt a gouge into it, then turn that into a normal map on a single low-poly quad. If you get that working you know you've got the basics and can handle it for clothes.

    Adding onto this: the normal map alone is not always enough, depending on the lighting system you are dealing with. A good deal of the heavy lifting has to be done in the textures, and normal maps are only one half of this. You'll want to bake an AO for your low-poly model as well, and use it in a multiply layer to add some lighting information into your diffuse. Because clothing generally won't be super reflective, you don't need to worry too much about adding highlights unless the particular garment is just absurdly dark.

    1. Yes. You need to understand how to weight your piercings correctly for them to follow the body in motion. Different bodies will normally have different weights, so there rarely is a catch-all solution to anything. As for applying to mesh body developers for their kits, that's only some of the bodies. The big-name bodies, Maitreya, Belleza, and what have you, have this requirement. It's ridiculous in my opinion, but whatever. Your typical furry bodies such as Kemono, Avatar 2.0, Regalia, Snaggletooth, Develin, etc. all tend to have their dev kits open to the public. I honestly don't bother with the popular bodies.

    2. My best guess is that they don't want people on SL to steal their stuff. Another possibility is that the exclusivity means they can control the quality of items made for their bodies, which only helps sales. Normally, for any body to do this would drastically hurt support. But in their case, since they already have the popularity, they can control who gets their dev kits to their standards. However, as much of a hot take as this is, the quality of Maitreya is like looking at the project of someone who only guessed at anatomy and is capitalizing on the fact that they have six-digit polygon counts.

    3. If the nipple does not have jiggle physics, you do not need weighting information to attach it. If it has jiggle physics, or the breasts are being animated for whatever reason, say sex furniture, meme animations, or sexual gestures, you need to weight the nipple piercings to the breast.

    4. No, but it makes life easier for you if you do. The mesh-to-SL process is absurdly annoying without a tool assisting you.

    5. Already answered above. If the piercing is not weighted, it will not move with body physics.

    • Thanks 1
    • Confused 1
    This is a good week or so later, but I want to add a few more details for you that haven't already been mentioned in the answers above.

    Firstly, and this will probably become important for you later as you start playing around with Blender's sculpting functions to make more detailed jewelry: generally, you only want to use as many polygons as necessary to correctly display your model at its intended viewing distance and importance. This distance varies massively based on the object's size. For example, you shouldn't need 3000 polygons on a button you view from a meter away; you probably wouldn't even need 200 at the distance it would take to fill the screen. Basically, anything that adds definite volume to an object most likely needs polygons to help define the shape, and how big those polygons can get can get pretty technical.

    That being said, when you start sculpting, you will have a massive number of polygons. Fortunately, you can simply encase that jewelry sculpt in simpler geometry and use Blender's internal tools to bake a normal map. These normal maps will make an object appear far more detailed than it really is, and can even make objects appear more rounded than they are.

     

    1. There are some cases in which it is better not to join objects together. If you want objects to be more easily scaled to better fit a model, or perhaps customized by the user, then uploading objects unjoined is definitely not bad if the components need to maintain a size to look reasonable. Another case where you might not want to join objects is when you make more complicated objects like avatars, cars, homes, etc. Prim count no longer matters in Second Life; it's now land impact, and land impact can change for better or worse when you join objects. This will be something for you to ask yourself, but land impact is not something you need to worry about for accessories.

    4. LODs are very important, but oftentimes simply ignored by developers, which causes massive framerate issues in Second Life, as well as visual problems. If you set your mesh LOD to the default value, which I believe is one, you will see that a lot of objects in the SL world suddenly either disappear or look like garbage. What happens is that many developers simply upload a single triangle for an LOD and expect users to set their LOD settings to the max.

    LOD basically means that as your camera gets further and further away from an object, it swaps to each level. Each level is expected to have a much lower polygon count, making it easier to render. If an object only takes up 20 pixels on your screen, you probably don't need to render a triangle that covers only a fraction of a pixel, right? And that triangle has a massive cost to render for only a single pixel.

    So here's an example of a good LOD design. You have a furry SL avatar with ear piercings and teeth, which you can easily see at about two meters. Fifteen meters away, those details are so small they are almost invisible. So the developer, being the mindful dev he is, decides he should delete the polygons for the teeth as well as the piercings, and maybe even delete the toes of the feet or greatly simplify them to PS2 levels.

     

    In your case, your model is incredibly low in polygon count. You can honestly get away with using the same model for multiple levels. But if that model hits, say, 1k polygons, you would want to look into making LODs.

  14. On 11/4/2020 at 1:12 PM, Penny Patton said:

    I think Cyrule is asking if this is a feasible feature LL could add to EEP in the future. And it's not a bad suggestion, either. There are sim surround terrain skyboxes, and they have their uses, but they also have their own limits and setbacks.

    It would be an interesting feature to have. And the problem with sim surrounds is exactly those limits and setbacks. If I am not mistaken, it's also an additional expense, which is why most people work around it by other means.

    Diffuse in the RGB. The alpha may be alpha transparency or an emission map.

    The specular map is a bit weird. The RGB of the specular map is the specular color; the alpha represents the environment reflections (yeah I know, wtf Linden Lab).

    The normal map's RGB is the normals. But the alpha of the normal map is the actual specular highlight map (seriously, wtf Linden).


    To get a correct diffuse map, you will need to add lighting data for Second Life. Usually combinations of AO, edge maps, and curvature maps will be good enough. You will also need to color your metals, as PBR relies on specular highlights to provide that color. Baking a mostly even lighting scenario onto the diffuse map is also not a bad idea, as it makes things look better on lower-end systems and assists Second Life's lighting system by providing shadows it normally would not be able to make.

    Specular is an oddball. The specular RGB is the specular color map, which is fine. This will be predominantly white for non-conductive materials (most materials in the world). For metals, iridescent and exotic materials, and what have you, you'll want to add color to the specular highlights. The environment reflection map is the alpha channel of the specular texture; it reflects the environment around you.

    Normal maps: the RGB is a standard OpenGL tangent-space normal map. The alpha channel, however, holds the actual glossy/specular map. This is the map we all know to be dependent on point lights as well as the sun; it is responsible for how the object is illuminated with respect to light.
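    Boiled down, the packing above is just: gloss grayscale into the normal map's alpha, environment grayscale into the specular map's alpha. A minimal per-pixel sketch with made-up sample values:

```python
# Channel packing sketch: combine an RGB pixel and a grayscale value
# into one RGBA pixel. An editor or script would do this per pixel
# over whole maps; the sample values below are invented.

def pack_rgba(rgb_px, alpha_value):
    """Combine an (R, G, B) pixel and a grayscale value into RGBA."""
    r, g, b = rgb_px
    return (r, g, b, alpha_value)

normal_px = (128, 128, 255)      # flat tangent-space normal
gloss_px = 200                   # from the glossiness map
spec_color_px = (255, 255, 255)  # white for non-metals
env_px = 40                      # mild environment reflection

normal_map_px = pack_rgba(normal_px, gloss_px)      # (128, 128, 255, 200)
specular_map_px = pack_rgba(spec_color_px, env_px)  # (255, 255, 255, 40)
print(normal_map_px, specular_map_px)
```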

     

    You will need to faff about with the glossiness and environment values to get a correct-looking effect in all lighting conditions.

    • Like 3
    As the title states, is it possible to add terrain to the EEP skybox? The main reason I am asking here is that prim-based skyboxes generally don't work very well. The idea behind a skybox is that it's far enough away from the camera that the user gets a sense of a greater world around them, but doesn't see the Looney Tunes-style perspective shift when they are up close and about to run into a wall.

    Think of it like a modern game where the map is small, but you see mountains and forests off in the background that are all simply part of the skybox, helping make the world feel more complete.

    • Like 1
    If you have the sculpt map, AND permission from the original creator.

    Sculpt maps are textures with per-vertex positional data, where each vertex is represented by a single pixel. That being said, you can convert sculpties back to a mesh in Blender via vector displacement maps. However, you will need the exact Second Life geometry object, as that is not native to Blender. You can probably find some blend files with that out in the ether or something. And I am not sure it still works, as I remember the sculpty days mostly from when Blender was in 1.x.
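    For the curious, the encoding itself is simple: each pixel's RGB is an 8-bit XYZ coordinate inside the prim's bounds. A simplified sketch (the scale handling here is my own simplification, not the viewer's exact math):

```python
# Decode one sculpt-map pixel into a local vertex position.
# Each 0..255 channel maps to -0.5..+0.5 of the prim's dimensions.

def decode_sculpt_pixel(r, g, b, scale=(1.0, 1.0, 1.0)):
    """Map an (r, g, b) pixel in 0..255 to a local XYZ position."""
    return tuple((c / 255.0 - 0.5) * s for c, s in zip((r, g, b), scale))

# Mid-gray decodes to (roughly) the center of the prim's bounds:
print(decode_sculpt_pixel(127, 127, 127))  # ~ (0, 0, 0)
print(decode_sculpt_pixel(255, 0, 127))    # (+0.5, -0.5, ~0)
```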

    I should note that it might not be worth the time, as sculpties are not like normal geometry. They are products of pushing and pulling vertices into non-manifold patterns, which means the rigging process would be utterly painful, and the results could be just bad all around.

  18. So... an explanation for the cost. This is only my theory, but I think it has to do with the fact that the physics mesh is not convex.

    The reason this matters is that physics computation is much faster when your meshes are convex, meaning the surface bulges outward everywhere, with no holes or cavities. When you start introducing concavity, you start getting issues with computation, and it grows much more expensive. However, if you were to stack a bunch of building blocks to closely resemble the shape, that is actually computationally cheaper by magnitudes.

    I believe you can get around this if you make the physics mesh for objects with holes out of separate planes and rectangular prisms. Remember, you only need to approximate your mesh in the world of physics.

    Finally, you'll also need to keep the size of your triangles in mind for physics models, as a triangle that's too large can start causing errors. As long as no single triangle is larger than 5 meters on any side, it should be fine, I suspect.
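    If you want to sanity-check a physics mesh against that rule of thumb, the test is just the longest edge per triangle (the 5-meter figure is my own guess, as said above):

```python
import math

# Find the longest edge of a triangle so it can be checked against a
# maximum-edge rule of thumb for physics meshes.

def max_edge(tri):
    """Longest edge length of a triangle given as three (x, y, z) points."""
    a, b, c = tri

    def dist(p, q):
        return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

    return max(dist(a, b), dist(b, c), dist(c, a))

tri = ((0, 0, 0), (4, 0, 0), (0, 3, 0))
print(max_edge(tri))  # hypotenuse is 5.0, right at the suggested limit
```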

    Step 1. Model something. Most of the techniques used here are software-agnostic with basic tools, but each piece of software does have its own collection of tools and workflows: sculpting, pushing around vertices, and what have you. Make sure the result is game-ready. That is to say, you're only using enough polygons to reasonably see the detail, but not so many that when you turn on wireframe it looks solid.

    Step 2. Texture that something. Photoshop is still pretty much king here. You can use Substance Painter as well, but that is for a PBR pipeline, and it's generally a pain in the ass to convert PBR into traditional, as you'll be going back and forth between an image-editing program and SL to see how it looks. You can also use Krita, which is what I use, or GIMP, which has had some major improvements lately.

    Step 3. Export as a Collada file. You might need to use a special Second Life exporter or something, but generally speaking any Collada exporter should work, and they come standard in all software.

    Step 4. Upload to second life.

  20. Just upgrade to 2.8. The features are stable enough that you can follow the tutorials just fine.

     

    Other than that: you need to select your armature and mesh with shift-click.

     

    Go into weight paint mode. You can select your bone by holding control and clicking. 

     

    The standard transformation shortcuts will move the bone.

  21. On 8/17/2020 at 1:33 AM, Darksteelhorse said:

    Thanks, that helps figuring out the odd ramping effect is saw when i tracked down the video for the garment  Now if you don't mind me taking a bit more of your time, they seem to be able to get a very precise set of ruffles in her dress, is that done with internal lines?

     

     

    As Nomius mentioned, yes. They are using internal lines with fold settings to achieve this effect. 
