
Cyrule Adder

Resident
  • Posts

    40
  • Joined

  • Last visited

Reputation

24 Excellent


  1. The Homolosine projection is not well designed for UVs. The reason it isn't used for any sort of UV projection is that a lot of texture space in the image is wasted. Notice how much of that image is just negative space, where a typical UV sphere will cover the entire image? An efficient UV not only makes better use of space, it lets you reduce the size of the image file (and thus the strain on people's computers), and it increases the texel density, allowing more detail to be seen the closer you get without increasing the image's size. The purpose of the Homolosine projection is that it normalizes the shapes of the landmasses based on their distances, whereas the normal projection of the globe onto a UV sphere will stretch and distort the continental landmasses. That distortion is unnoticeable on an actual 3D model, but useless for scientific purposes. So the ability to make your own UVs is actually incredibly important in the world of CG. I suppose there could be an argument for laying it out by hand if you truly want to. As for LOD models: this is not wrong, but it is also not always correct. The issue with LODs generated by SL is that the generator does not actually try to preserve your shape. Paid tools often do a much better job of estimating which polygons or features can be reduced at various camera distances based on the area of triangles. Additionally, generating LODs directly in SL is not always the best choice depending on what the mesh is for. For example, if you try to reduce the end cap of a sphere, it will look like a cube when you LOD it through SL. If you hand-edit it, you can reduce the polycount by 80% more than SL can and still retain the look of roundness from all angles (see the Decimate sketch after this list for a scripted starting point). Lastly, there is the issue of animated meshes. Animated meshes generally cannot go through generation methods, because generation methods do not respect the original topology that allowed the mesh to bend and contort evenly over the bones. If you read some of the topology-for-animation articles, you can see that there are typically special topology patterns used for joints, the spine, rib cages, places that need to keep their general shape while bending, and even smaller things like dimples that become visible when you smile.
  2. Kinda impressive, when the CATWA head shapes aren't that unique compared to the competition... But what you've got there is fine. 5k polygons is still very efficient. If you're still concerned about performance, the rest of it would be in the LODs, where you can start removing loops, deleting the teeth and tongue, etc.
  3. This is true, but the point I was making was more about Maitreya's awful level of optimization. It's not exactly possible to compensate for the user's additional geometry without asking them to cut off parts of the body that they do not use (which are in fact removable, since many bodies split pieces off for the alpha layers).
  4. The majority of AAA games will have hero characters of 15k to 40k polygons on average. This is for the entire body, including head, hands, feet, and clothing. Higher polygon counts do not mean quality, nor do they make a mesh look "smoother", especially when we consider that some MMOs like Final Fantasy, Guild Wars, and so forth have characters with polygon counts no higher than 20k in total. And despite the advances in technology, those polygon counts have not changed much. Not because GPUs cannot support more, but because the methodology is to use polygons only where you need them. The issue with Maitreya's 176k polygon count (with no LODs) is that most of the polygons are absolutely not necessary. Do they lend to the smoothness of the model? No. If I can draw a single line and it tangents 4 or 5 polygon loops with barely a noticeable change, then the excessive polygons are unnecessary. The second issue with Maitreya's massive polygon count is micro-triangles, where triangles become smaller than a pixel. The problem with these is that they may have no impact on the final image, or cause the same pixel to be redrawn multiple times, yet the GPU still has to do all the math and lighting for them. For a single avatar this is nothing, but given how popular the body is, you can easily have 12 Maitreya users on screen at a time, and in many cases nearly 80, which will start dumpstering people's computers no matter how good the hardware is. TLDR: a 30k model will look just as good as a Maitreya. Yeeaaaaah. Unfortunately the most they can do is set some guidelines, but they can't really enforce them without hurting themselves. And that might not change much, especially since I can name at least 100 items on the Marketplace where a screw no larger than 3 cm has a 1024x1024 texture and roughly 3k polygons, and these objects are advertised as HD.
  5. Actually, the exporter handles that for you. The only thing you need to be mindful of is that if your mesh makes use of more than 8 materials, you will need to start splitting up the mesh.
  6. Adding onto what Optimo said: you can set up UDIMs, which will allow you to paint on two or more textures at once rather than needing to swap materials (there's a short Blender sketch after this list). You'll find that this is relatively important if you want to minimize the appearance of seams, especially given that the SL UV/Omega UV sets have seams all over the place.
  7. For a mesh set of nails? I'd say 20 dollars. Rigging the nails honestly wouldn't take more than thirty minutes. The reason I say twenty dollars is that this is a minor item that someone honestly could have learned to make themselves. They can either pay you a relatively high price for a low-effort job, or learn to do it themselves.
  8. Without looking at your textures, this is my best guess at what is happening. When you bake your textures, you're not adding in the 'skirt' (margin), which basically tells the software to extend the pixels at the edges of the UV seams so they can be blended in (see the bake-margin sketch after this list). Additionally, it's also possible that your UVs are just horrid in general. And yes, how you UV your mesh does actually matter: if the UVs are not evenly distributed, you will start seeing issues like these arise near the edges as well, due to one face having significantly lower texel density than the other. It also helps if you work to hide your UV seams, to ensure that the edges can't be seen if they can't be fixed.
  9. You can log in to the beta server. Uploads to the beta grid are free, but they do not transfer to the main grid. From there you can test your meshes and make sure everything works correctly.
  10. 158 MB of VRAM, what in the *****... But yeah, land impact (LI) in Second Life is heavily influenced by the LODs. A common hack people use to save LI, at the expense of lower-spec machines, is to simply put a single triangle as their lowest LOD and nothing else, so an LI of 50 suddenly drops to four or two. You can reduce the LI further by simply not having a collision shape at all.
  11. Basic English breakdown for a more direct answer. Here are the settings you need. Diffuse map: the RGB channels are your normal diffuse; the alpha channel is for glow maps, alpha transparency, etc. Normal map: the normal goes in the RGB, but the alpha is actually important here, because for some f*cking reason Second Life has designed it so that the specular map lives in the normal map's alpha channel. Go figure. The RGB channels of the specular map are actually the specular color; if you're doing anything non-metallic, for realism this should be white, but you can do whatever you want with this channel. The environment map is located in the alpha channel of the specular map. The specular map affects the highlights you receive from projectors, meaning it becomes useless in sunlight or with no light at all. The environment map affects the reflectivity of the surface, which also means you get highlights from the sun and moon as well as reflections of the environment. To convert the Substance diffuse to a usable SL texture, you need to multiply the AO map onto the surface. Using concavity maps, curvature maps, and what have you can also go a long way, and you may need to paint false lighting information onto the surface to help give it oomph where the SL renderer cannot (trust me, it will look flat without that aid). There's a small channel-packing sketch after this list.
  12. What's the problem with making a new shape with the same parameters that you can transfer?
  13. Adding onto this... the normal map alone is not always enough, depending on the lighting system you are dealing with. A good deal of the heavy lifting has to be done in the textures, and normal maps are only one half of this. You'll want to bake an AO map to your low-poly model as well, and use it as a multiply layer to add some lighting information into your diffuse (see the AO-multiply sketch after this list). Because clothing generally won't be super reflective, you don't need to worry too much about adding highlights unless the particular garment is just absurdly dark.
  14. 1. Yes. You need to understand how to weight your piercings correctly for them to follow the body in motion. Different bodies will normally have different weights, so there rarely is a catch-all solution to anything (there's a weight-transfer sketch after this list). As for applying to mesh body developers for their kits, that's only some of the bodies. The big-name bodies, Maitreya, Belleza, and what have you, have this requirement. It's ridiculous in my opinion, but whatever. Your typical furry bodies such as Kemono, Avatar 2.0, Regallia, Snaggletooth, Develin, etc. all tend to have their dev kits open to the public. I honestly don't bother with the popular bodies. 2. My best guess is that they don't want people on SL to steal their stuff. Another possibility is that the exclusivity means they can control the quality of items made for their bodies, which only helps sales. Normally, for any body to do this would drastically hurt support; but in their case, since they already have the popularity, they can control who gets their dev kits to their standards. However, as much of a hot take as this is, the quality of Maitreya is like looking at the project of someone who only guessed at anatomy and is capitalizing on the fact that they have six-digit polygon counts. 3. If the nipple piercing does not need to follow jiggle physics, you do not need weighting information to attach it. If it has jiggle physics, or the breasts are being animated for whatever reason (sex furniture, meme animations, or sexual gestures), you need to weight the nipple piercings to the breasts. 4. No, but it makes life easier for you if you do. The mesh-to-SL process is absurdly annoying without a tool assisting you. 5. Already answered above: if the piercing is not weighted, it will not move with body physics.
  15. If you mean an existing SL avatar, you need a developer kit from the devs themselves. Otherwise, you would need full permissions in order to download the avatar from the SL servers, and that downloaded data does not normally include skinning information, I think.
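Regarding the LOD discussion in post 1: below is a minimal Blender Python sketch for roughing out LOD copies with the Decimate modifier. The object name "Body" and the ratio values are placeholders, and a hand-edited LOD will usually still beat the result, as argued above.

```python
import bpy

src = bpy.data.objects["Body"]  # assumed name of the high-detail mesh

# Make one decimated copy per LOD level; tweak the ratios per asset.
for suffix, ratio in (("_LOD1", 0.5), ("_LOD2", 0.25), ("_LOD3", 0.1)):
    lod = src.copy()
    lod.data = src.data.copy()              # duplicate the mesh datablock
    lod.name = src.name + suffix
    bpy.context.collection.objects.link(lod)

    mod = lod.modifiers.new(name="Decimate", type='DECIMATE')
    mod.ratio = ratio                       # fraction of faces to keep
```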
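For the UDIM setup mentioned in post 6, here is a rough Blender Python sketch that creates a tiled image and assigns it to a material so two UV tiles can be painted in one session. The names and resolution are illustrative, and the details may differ between Blender versions.

```python
import bpy

# Create a tiled (UDIM) image; tile 1001 exists by default, so add a second tile.
img = bpy.data.images.new("BodyPaint", width=1024, height=1024, tiled=True)
img.tiles.new(tile_number=1002)

# Assign the tiled image to an Image Texture node in an assumed material.
mat = bpy.data.materials["UpperBody"]
mat.use_nodes = True
tex = mat.node_tree.nodes.new("ShaderNodeTexImage")
tex.image = img
```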
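On the 'skirt' mentioned in post 8: in Blender this is the bake margin. A minimal sketch, assuming your object, UVs, and an active image texture node are already set up:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.render.bake.margin = 16   # pixels of padding dilated past each UV island edge

# Bake the active object's diffuse colour into its active image texture node.
bpy.ops.object.bake(type='DIFFUSE', margin=16)
```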
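For the channel layout described in post 11, this is a hedged Pillow sketch that packs a grayscale map into the alpha channel of an RGB map. The file names are placeholders, and the pairings simply follow the breakdown above (diffuse + glow/alpha mask, normal + specular, specular colour + environment mask).

```python
from PIL import Image

def pack_alpha(rgb_path, alpha_path, out_path):
    """Merge an RGB texture with a grayscale map as its alpha channel."""
    rgb = Image.open(rgb_path).convert("RGB")
    alpha = Image.open(alpha_path).convert("L").resize(rgb.size)
    r, g, b = rgb.split()
    Image.merge("RGBA", (r, g, b, alpha)).save(out_path)

pack_alpha("diffuse.png",    "glow_mask.png",   "sl_diffuse.png")
pack_alpha("normal.png",     "specular.png",    "sl_normal.png")
pack_alpha("spec_color.png", "environment.png", "sl_specular.png")
```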
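And for the AO multiply step from posts 11 and 13, a minimal Pillow sketch (file names are placeholders):

```python
from PIL import Image, ImageChops

diffuse = Image.open("diffuse.png").convert("RGB")
ao = Image.open("ao_bake.png").convert("RGB").resize(diffuse.size)

# Multiply blend: darkens the diffuse wherever the baked AO is dark.
ImageChops.multiply(diffuse, ao).save("diffuse_with_ao.png")
```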
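Finally, for the piercing weighting in post 14, a rough Blender Python sketch of copying the body's weights onto a piercing with the Data Transfer modifier. The object names are placeholders, the piercing is assumed to sit flush against a dev-kit body, and it still needs to be parented to the same armature afterwards.

```python
import bpy

body = bpy.data.objects["DevKitBody"]         # assumed dev-kit body with correct weights
piercing = bpy.data.objects["NipplePiercing"]

mod = piercing.modifiers.new(name="CopyWeights", type='DATA_TRANSFER')
mod.object = body
mod.use_vert_data = True
mod.data_types_verts = {'VGROUP_WEIGHTS'}
mod.vert_mapping = 'POLYINTERP_NEAREST'       # interpolate from the nearest body face
mod.layers_vgroup_select_src = 'ALL'

# Create the matching vertex groups on the piercing, then bake the weights in.
bpy.context.view_layer.objects.active = piercing
bpy.ops.object.datalayout_transfer(modifier=mod.name)
bpy.ops.object.modifier_apply(modifier=mod.name)
```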