Cyrule Adder

Community Reputation

17 Good

About Cyrule Adder

  • Rank
    Advanced Member

  1. Without looking at your textures, this is my best guess at what is happening. When you bake your textures, you're not adding any edge padding (the 'skirt'), which tells the software to extend the pixels at the edge of the UV seams so they can be blended in. It's also possible that your UVs are just horrid in general. And yes, how you UV your mesh does actually matter: if the UVs are not evenly distributed, you will start seeing issues like these near the edges too, because one face has significantly lower texel density than its neighbor. It also helps if yo
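A minimal sketch of that edge padding ('skirt') in Python, assuming the bake is available as a numpy array plus a coverage mask. The function name and iteration count are my own; real bakers expose this as a "margin" or "padding" setting, and note `np.roll` wraps at the texture border, which a real implementation would avoid:

```python
import numpy as np

def pad_uv_islands(texture, mask, iterations=8):
    """Bleed edge pixels of UV islands outward so seams still sample
    valid color after filtering/mipmapping.
    texture: (H, W, 3) float array; mask: (H, W) bool, True where a
    UV island actually covers the texel."""
    tex = texture.copy()
    filled = mask.copy()
    for _ in range(iterations):
        # Grow the filled region one texel in each of 4 directions,
        # copying colors into still-empty neighbors.
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            src = np.roll(filled, (dy, dx), axis=(0, 1))
            src_tex = np.roll(tex, (dy, dx), axis=(0, 1))
            grow = src & ~filled
            tex[grow] = src_tex[grow]
            filled |= grow
    return tex
```

Each iteration widens the skirt by one texel; 8–16 texels is a typical margin at 1024x1024.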
  2. You can log in to the Beta Server. Uploads to the beta grid are free, but they do not transfer to the main grid. From there you can test your meshes and make sure everything works correctly.
  3. 158 MB of VRAM, what in the *****... But yeah, land impact (LI) in Second Life is heavily influenced by the LODs. A common hack people use to save LI, at the expense of lower-spec machines, is to put a single triangle as their lowest LOD and nothing else, so an LI of 50 suddenly drops to four or two. You can reduce the LI further by simply not having a collision shape at all.
  4. A basic English breakdown for a more direct answer. Here are the settings you need. Diffuse map: the RGB channels are your normal diffuse; the alpha channel is for glow maps, alpha transparency, etc. In your normal map, the normals go in the RGB, but the alpha is actually important here: for some f*cking reason, Second Life has designed it so that the specular map is located in the normal map's alpha channel. Go figure. The RGB channels of the specular map are actually the specular color. If you're doing anything non-metallic, for realism this should be white. But you can do
  5. What's the problem with making a new shape with the same parameters that you can transfer?
  6. Adding onto this... The normal map alone is not always enough, depending on the lighting system you are dealing with. A good deal of the heavy lifting has to be done in the textures, and normal maps are only one half of this. You'll want to bake an AO for your low-poly model as well, and use it in a multiply layer to add some lighting information into your diffuse. Because clothing generally won't be very reflective, you don't need to worry too much about adding highlights unless the particular garment is just absurdly dark.
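The AO-multiply step can be sketched in a few lines of numpy. The `strength` knob is my own addition, mirroring a multiply layer's opacity slider in an image editor:

```python
import numpy as np

def multiply_ao(diffuse, ao, strength=1.0):
    """Bake an ambient-occlusion map into the diffuse via multiply.
    diffuse: (H, W, 3) floats in 0..1; ao: (H, W) floats in 0..1.
    strength blends between no effect (0) and full multiply (1)."""
    ao = ao[..., None]                          # broadcast over RGB
    factor = (1.0 - strength) + strength * ao   # lerp(1, ao, strength)
    return np.clip(diffuse * factor, 0.0, 1.0)
```

At `strength=1` this is a straight multiply; dialing it back keeps some albedo in deep crevices so the garment doesn't go pitch black.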
  7. 1. Yes. You need to understand how to weight your piercings correctly for them to follow the body in motion. Different bodies will normally have different weights, so there is rarely a catch-all solution to anything. As for applying to mesh body developers for their kits, that's only some of the bodies. The big-name bodies, Maitreya, Belleza, and what have you, have this requirement. It's ridiculous in my opinion, but whatever. Your typical furry bodies such as Kemono, Avatar 2.0, Regalia, Snaggletooth, Develin, etc. all tend to have their dev kits open to the public. I honestly don't
  8. If you mean an existing SL avatar, you need a developer kit from the devs themselves. Otherwise, you would need full permissions in order to download the avatar from the SL servers, and that downloaded data does not normally include skinning information, I think.
  9. This is a good week or so later, but I want to add a few more details that haven't already been mentioned in the answers above. Firstly, and this will probably become important for you later as you start playing around with Blender's sculpting functions to make more detailed jewelry: generally, you only want to use as many polygons as necessary to correctly display your model at its intended viewing distance and importance. This distance varies massively based on the object's size. For example, you shouldn't need to use 3000 polygons on a button if you are viewing it fr
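To put rough numbers on "intended viewing distance", here is a pinhole-camera estimate of how many screen pixels an object covers. This is illustrative only, not SL's actual camera math, and the default FOV and resolution are assumptions:

```python
import math

def screen_height_px(object_size_m, distance_m, fov_deg=60.0, screen_px=1080):
    """Approximate vertical pixels covered by an object of the given
    size (meters) at the given camera distance, for a vertical FOV of
    fov_deg on a screen screen_px pixels tall."""
    angular = 2.0 * math.atan((object_size_m / 2.0) / distance_m)
    fraction = angular / math.radians(fov_deg)
    return screen_px * fraction
```

For instance, a 1 cm button seen from 3 m away covers only a few pixels of a 1080p screen, so putting 3000 triangles on it buys nothing, while a 2 m avatar at the same camera distance fills most of the frame and can justify real detail.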
  10. It would be an interesting feature to have. The problem with sim surrounds is the limits and setbacks; if I am not mistaken, they're also an additional expense, which is why most people work around this by other means.
  11. Diffuse in the RGB; the alpha may be alpha transparency or an emission map. The specular map is a bit weird: its RGB is the specular color, and its alpha represents the environment reflections (yeah I know, wtf Linden Lab). The normal map's RGB is the normals, but the alpha of the normal map is the actual specular highlight map (seriously, wtf Linden). To get a correct diffuse map, you will need to add lighting data for Second Life; usually a combination of AO, edge maps, and curvature is good enough. You will also need to color your metals, as PBR relies on specular highlights to prov
  12. As the title states, is it possible to add terrain to the EEP skybox? The main reason I am asking here is that prim-based skyboxes generally don't work very well. The idea behind a skybox is that it's far enough away from the camera that the user gets a sense of a greater world around them, but doesn't see the Looney Tunes-style perspective shift when they are up close and about to run into a wall. Think of it like a modern game, where the map is small, but you see mountains and forests off in the background that are all simply part of the skybox to help make the world
  13. If you have the sculpt map, AND permission from the original creator. Sculpt maps are textures with per-vertex positional data, where each vertex is represented by a single pixel. That being said, you can convert sculpties back to a mesh in Blender via vector displacement maps. However, you will need the exact Second Life geometry object, as that is not native to Blender. You can probably find some blend files with that out in the ether or something. And I am not sure if it can... as I remember the sculpty days mostly from when Blender was in 1.x. I should note that it mig
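For the curious, the per-pixel position encoding is simple enough to sketch. This is my recollection of how sculpt maps store positions (R, G, B bytes mapping to X, Y, Z across the prim's bounding box, centered on the origin); verify against the Second Life wiki before relying on it:

```python
import numpy as np

def sculpt_map_to_vertices(sculpt_rgb, scale=(1.0, 1.0, 1.0)):
    """Decode a sculpt map into vertex positions: each pixel's R, G, B
    bytes become that vertex's X, Y, Z, remapped from 0..255 to
    -0.5..0.5 of the prim's bounding box and scaled.
    sculpt_rgb: (H, W, 3) uint8 array; returns (H*W, 3) floats."""
    rgb = np.asarray(sculpt_rgb, dtype=np.float64) / 255.0
    verts = (rgb - 0.5) * np.asarray(scale, dtype=np.float64)
    return verts.reshape(-1, 3)
```

Going the other way (positions baked into pixels) is why lossy JPEG2000 compression of a sculpt map visibly dents the resulting geometry.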
  14. So... an explanation for the cost. This is only my theory, but I think it has to do with the fact that the physics mesh is not convex. The reason this matters is that physics computation is much faster when your meshes are convex, meaning the mesh surrounds the entire object with no holes or cavities. Once you start introducing concavity, the computation gets much more expensive. However, if you were to stack a bunch of building blocks to closely resemble the shape, that is actually computationally cheaper by magn
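The convex/concave difference is easy to see in 2D: a polygon is convex exactly when every consecutive pair of edges turns the same way, and that uniformity is what lets a physics engine use cheap tests (e.g. point containment is just "on the inner side of every edge") with no concave special cases. A small sketch:

```python
def is_convex(polygon):
    """Return True if a simple 2D polygon (list of (x, y) vertices in
    order) is convex: all cross products of consecutive edge pairs
    share one sign, i.e. the boundary always turns the same way."""
    n = len(polygon)
    sign = 0
    for i in range(n):
        ax, ay = polygon[i]
        bx, by = polygon[(i + 1) % n]
        cx, cy = polygon[(i + 2) % n]
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False   # turn direction flipped: concave
    return True
```

An L-shaped wall fails this test, which is why engines either decompose it into convex pieces or fall back to slower per-triangle collision, and why the "stack of blocks" physics shape is cheaper.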
  15. Step 1: Model something. Most of the techniques used here are software-agnostic with basic tools, but each package has its own collection of tools and workflows: sculpting, pushing around vertices, and what have you. Make sure your models are game-ready, that is, use only enough polygons to reasonably show the detail, but not so many that the wireframe view looks solid. Step 2: Texture that something. Photoshop is still pretty much king here. You can use Substance Painter as well, but that is for a PBR pipeline, and it's generally a pain in the ass to convert PB