
Cyrule Adder

Resident
  • Content Count

    40
  • Joined

  • Last visited

Everything posted by Cyrule Adder

  1. The Homolosine projection is not well designed for UVs. The reason it isn't used for any sort of UV projection is that there's a lot of wasted texture space in the image. Notice how much of that image is just negative space? A typical UV sphere layout, by contrast, covers the entire image. An efficient UV not only makes better use of space, it lets you reduce the size of the image file (and thus the strain on people's computers), and it increases the texel density, allowing more detail to be seen the closer you get without increasing the image's size. The purpose of the homolosine proje
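The texel-density point above can be sketched with some quick arithmetic. This is a minimal illustration, not a production tool; the specific numbers (a 1024px map, 40% vs. 85% UV coverage, a 4 m² surface) are made-up assumptions:

```python
import math

def texel_density(texture_px: int, uv_coverage: float, surface_area_m2: float) -> float:
    """Texels per meter: how much texture resolution a surface actually receives.

    texture_px      -- width of a square texture in pixels (e.g. 1024)
    uv_coverage     -- fraction of the 0..1 UV space the islands actually fill
    surface_area_m2 -- real-world surface area the islands are mapped onto
    """
    usable_texels = (texture_px ** 2) * uv_coverage
    return math.sqrt(usable_texels / surface_area_m2)

# A 1024px map where the islands fill only 40% of the image (lots of
# negative space, as with a homolosine-style layout), on a 4 m^2 surface:
sparse = texel_density(1024, 0.40, 4.0)

# The same surface with the islands repacked to fill 85% of the image:
packed = texel_density(1024, 0.85, 4.0)

print(round(sparse), round(packed))  # roughly 324 vs 472 texels/m
```

Same file size, same surface: packing the islands tighter buys you noticeably more texels per meter for free.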
  2. Kinda impressive when the CATWA head shapes aren't that unique from the competition... But what you got there is fine. 5k polygons is still very efficient. If you're still concerned about performance, the rest of it would be in LODs, where you can start removing loops, deleting the teeth and tongue, etc.
  3. This is true, but the point I was making was more about Maitreya's awful level of optimization. It's not exactly possible to compensate for the user's additional geometry without asking them to cut off the parts of the body they do not use (which are in fact removable, since many bodies split pieces off for the alpha layers).
  4. The majority of AAA games will have hero characters of 15k to 40k polygons on average. This is for the entire body, including head, hands, feet, and clothing. Higher polygon counts do not mean quality, nor do they make a mesh look "smoother". Especially when we consider that some MMOs like Final Fantasy, Guild Wars, and so forth have characters with polygon counts no higher than 20k in total. And despite the increase in technology, the polygon counts have not changed much. Not because GPUs cannot support them, but because the methodology is to use them only where you need them. The issue with M
  5. Actually, the export handles that for you. The only thing you need to be mindful of is that if your mesh makes use of more than 8 materials, you will need to start splitting up the mesh.
  6. Adding onto what Optimo said: you can set up UDIMs, which will allow you to paint on two or more textures at once, rather than needing to swap materials. You'll find that this is relatively important if you want to minimize the appearance of seams, especially given that the SL UV/Omega UV sets have seams all over the place.
  7. For a mesh set of nails? I'd say 20 dollars. Rigging the nails honestly wouldn't take more than thirty minutes. The reason I say twenty dollars is that this is a minor item someone honestly could have learned to make themselves. They can either pay you a relatively high price for a low-effort job, or learn to do it themselves.
  8. Without looking at your textures, this is the only guess I have as to what is happening: when you bake your UVs, you're not adding the 'skirt', which basically tells the software to extend the pixels at the edges of the UV seams so they can be blended in. Additionally, it's also possible that your UVs are just horrid in general. And yes, how you UV your mesh does actually matter. If the UVs are not evenly distributed, you will start seeing issues like these arise near the edges as well, due to one plane having significantly lower texel density than the other. It also helps if yo
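The 'skirt' (edge padding) idea is easy to show on a toy grid. This is a rough sketch of one dilation pass on a single-channel "bake" where `None` marks empty background texels; real bakers run several passes and work on full RGB images:

```python
def dilate_once(grid):
    """One pass of edge padding: fill each empty texel (None) with the value
    of any filled 4-neighbour, extending the UV island outward by one pixel
    so bilinear filtering near the seam doesn't blend in background colour."""
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            if grid[y][x] is None:
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] is not None:
                        out[y][x] = grid[ny][nx]
                        break
    return out

# Tiny 1-channel bake: a 2-pixel island surrounded by empty background.
tex = [
    [None, None, None, None],
    [None, 200,  180,  None],
    [None, None, None, None],
]
padded = dilate_once(tex)
print(padded[0][1], padded[1][0])  # island colour bleeds outward: 200 200
```

Without that bleed, any texture filtering that samples just past the island edge pulls in whatever junk colour sits in the empty space, which is exactly the seam artifact described above.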
  9. You can log in to the beta server. Uploads to the beta grid are free, but they do not transfer to the main grid. From there you can test your meshes and make sure everything works correctly.
  10. 158MB of VRAM, what in the *****... But yeah... LI in Second Life is heavily influenced by the LODs. A common hack people use to save LI, at the expense of lower-spec machines, is to simply put a single triangle as their lowest LOD and nothing else. So an LI of 50 suddenly drops to four or two. You can further reduce the LI by simply not having a collision mesh at all.
  11. Basic English breakdown for a more direct answer. Here are the settings you need. Diffuse map: the RGB channels are your normal diffuse; the alpha channel is for glow maps, alpha transparency, etc. In your normal map, the normal goes in the RGB, but the alpha is actually important here: for some f*cking reason, Second Life has designed it so that the specular map is located in the normal map's alpha channel. Go figure. The RGB of the specular map is actually the specular color. If you're doing anything non-metallic, for realism this should be white. But you can do
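That channel layout is the kind of thing you'd normally script rather than do by hand. Here's a minimal per-pixel sketch of packing a grayscale specular map into the normal map's alpha, per the SL convention described above; the function name and the plain tuple-list representation are just for illustration (in practice you'd do this with an image library):

```python
def pack_sl_normal(normal_rgb, spec_gray):
    """Pack per-pixel data the way Second Life's material system expects:
    the normal in RGB, and the grayscale specular map in the alpha channel.

    normal_rgb -- list of (r, g, b) tuples, 0-255
    spec_gray  -- list of grayscale values, 0-255, same length
    """
    return [(r, g, b, a) for (r, g, b), a in zip(normal_rgb, spec_gray)]

# A flat "up" normal (128, 128, 255) combined with a half-strength specular:
packed = pack_sl_normal([(128, 128, 255)] * 4, [127] * 4)
print(packed[0])  # (128, 128, 255, 127)
```

The takeaway: your normal map must be exported as RGBA, with the alpha carrying the specular data, or SL will ignore half your material work.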
  12. What's the problem with making a new shape with the same parameters that you can transfer?
  13. Adding onto this... The normal map alone is not always enough, depending on the lighting system you are dealing with. A good deal of the heavy lifting has to be done in the textures, and normal maps are only one half of this. You'll want to bake an AO to your low-poly model as well, and use that in a multiply layer to add some lighting information into your diffuse. Because clothing generally won't be super reflective, you don't need to worry too much about adding highlights unless the particular garment is just absurdly dark.
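The "AO in a multiply layer" step is just per-channel multiplication, the same math Photoshop's Multiply blend mode uses. A quick sketch on two example texels (the colour values are made up):

```python
def multiply_blend(diffuse, ao):
    """Multiply an AO bake into a diffuse map (8-bit values, per channel):
    result = diffuse * ao / 255. Occluded areas darken; fully lit texels
    (ao == 255) are left untouched."""
    return [
        tuple((c * a) // 255 for c in px)
        for px, a in zip(diffuse, ao)
    ]

# A red fabric texel in open light vs. the same colour deep inside a fold:
diffuse = [(200, 40, 40), (200, 40, 40)]
ao      = [255, 96]  # 255 = unoccluded, 96 = heavily shadowed
print(multiply_blend(diffuse, ao))  # [(200, 40, 40), (75, 15, 15)]
```

This is why the bake adds "lighting information": the diffuse itself now carries the soft shadowing that SL's lighting would otherwise fail to produce.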
  14. 1. Yes. You need to understand how to weight your piercings correctly for them to follow the body in motion. Different bodies will normally have different weights, so there is rarely ever a catch-all solution to anything. As far as applying to mesh body developers for their kits, that's only some of the bodies. The big-name bodies, Maitreya, Bezella, and what have you, have this requirement. It's ridiculous in my opinion, but whatever. Your typical furry bodies such as Kemono, Avatar 2.0, Regallia, Snaggletooth, Develin, etc. all tend to have their dev kits open to the public. I honestly don't
  15. If you mean an existing SL avatar, you need a developer kit from the devs themselves. Otherwise, you would need full permissions in order to download the avatar from the SL servers. But that download data does not normally include skinning information, I think.
  16. This is a good week or so later, but I want to add a few more details that haven't already been mentioned in the answers above. Firstly, and this will probably become important for you later as you start playing around with Blender's sculpting functions to make more detailed jewelry: generally, you only want to use as many polygons as necessary to correctly display your model at its intended viewing distance and importance. This distance varies massively based on the object's size. For example, you shouldn't need to use 3000 polygons on a button if you are viewing it fr
  17. It would be an interesting feature to have. The problem with Sim Surround is its limits and setbacks. If I am not mistaken, it's also an additional expense, which is why most people work around it by different means.
  18. Diffuse in the RGB; the alpha may be alpha transparency or an emission map. The specular map is a bit weird: the RGB of the specular is the specular color, and the alpha represents the environment reflections (yeah I know, wtf Linden Lab). Normal map RGB is the normals, but the alpha of the normal map is the actual specular highlight map (seriously, wtf Linden). To get a correct diffuse map, you will need to add lighting data for Second Life. Usually a combination of AO, edge maps, and curvature will be good enough. You will also need to color your metals, as PBR relies on specular highlights to prov
  19. As the title states, is it possible to add terrain to the EEP skybox? The main reason I am asking here is that prim-based skyboxes... generally don't work very well. The idea behind a skybox is that it's far enough away from the camera that the user is able to get a sense of a greater world around them, but not see the Looney Tunes-style perspective differential when they are up close and about to run into a wall. Think of it like a modern game, where the map is small, but you see mountains and forests off in the background that are all simply part of the skybox to help make the world
  20. If you have the sculpt map, AND permissions from the original creator. Sculpt maps are textures with per-vertex positional data, where each vertex is represented by a single pixel. That being said, you can convert the sculpties back to a mesh in Blender via vector displacement maps. However, you will need to find the exact Second Life geometry object, as that is not native to Blender. You can probably find some blend files with that out in the ether or something. And I am not sure if it can... as I remember the sculpty days mostly from when Blender was in 1.x. I should note that it mig
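The "each vertex is a pixel" encoding can be sketched in a couple of lines. This assumes the common convention of mapping each 0..255 channel linearly onto -0.5..0.5 of the prim's bounding box (R→X, G→Y, B→Z); verify the exact range and axis mapping against whatever tool you're decoding with before relying on it:

```python
def decode_sculpt_pixel(r, g, b):
    """Decode one sculpt-map pixel into a vertex position.

    Assumed convention: each 8-bit channel maps linearly to -0.5..0.5 of the
    prim's bounding box, so (0, 0, 0) is one corner and (255, 255, 255) the
    opposite corner. Scale by the prim's dimensions for world-space offsets.
    """
    return tuple(v / 255.0 - 0.5 for v in (r, g, b))

# Mid-gray on all channels sits near the centre of the bounding box:
print(decode_sculpt_pixel(127, 127, 127))  # roughly (0.0, 0.0, 0.0)
```

This is also why sculpties look "melted" at a distance: lower texture LODs blur the pixels, which literally blurs the vertex positions.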
  21. So... an explanation for the cost. This is only my theory, but I think it has to do with the fact that the physics mesh is not convex. The reason this matters is that physics computation is much faster when your meshes are convex. This means a complete mesh surrounds the entire object with no holes or cavities. When you start introducing concavity, you start getting issues with computation and it grows to be much more expensive. However... if you were to stack a bunch of building blocks to closely resemble the shape, that is actually computationally cheaper by magn
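The convex/concave distinction is easy to make concrete in 2D. A shape is convex when every turn along its boundary goes the same way, which is what keeps collision tests cheap and predictable; this sketch checks that with edge cross products (the polygons are made-up examples):

```python
def is_convex(polygon):
    """Check whether a 2D polygon (list of (x, y) vertices in order) is
    convex: the cross products of consecutive edges must all share a sign.
    Any sign flip means a reflex vertex, i.e. a concave notch."""
    n = len(polygon)
    sign = 0
    for i in range(n):
        ax, ay = polygon[i]
        bx, by = polygon[(i + 1) % n]
        cx, cy = polygon[(i + 2) % n]
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False  # turn direction flipped: concave
    return True

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
arrow  = [(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]  # concave notch at (1, 1)
print(is_convex(square), is_convex(arrow))  # True False
```

For a convex shape, a point is inside iff it's on the inner side of every edge, one cheap test per face. Concave shapes lose that guarantee, which is why physics engines prefer convex hulls or decompositions into multiple convex pieces (the "stack of building blocks" above).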
  22. Step 1. Model something. Most of the techniques used here are software agnostic with basic tools, but each software does have its own collection of tools and workflows: sculpting, pushing around vertices, and what have you. Make sure that it is game-ready. That is to say, you're only using enough polygons to reasonably see detail, but not so many that when you run wireframe it looks solid. Step 2. Texture that something. Photoshop is still pretty much king here. You can use Substance Painter as well, but that is for a PBR pipeline, and it's generally a pain in the ass to convert PB
  23. Did you check to make sure the bones weren't locked?
  24. Just upgrade to 2.8. The features are stable enough that you can follow the tutorials just fine. Other than that, you need to select your armature, then your mesh, with shift-click. Go into weight paint mode. You can select a bone by holding Ctrl and clicking. The standard transformation shortcuts will move the bone.
  25. As Nomius mentioned, yes. They are using internal lines with fold settings to achieve this effect.