Everything posted by Kyrah Abattoir

  1. Yup, you got it, and you can draw masks by hand or generate them on the fly (various noises, pointiness, cavities, etc.). For all intents and purposes, node graphs are essentially visual shaders: you are building a mathematical equation that drives the properties of each rendered pixel (rough sketch below).
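     To make that "equation per pixel" idea concrete, here is a minimal Python/NumPy sketch of what a mix-by-mask node boils down to; the array names and sizes are placeholders, not any particular API.

     import numpy as np

     def mix_by_mask(base, detail, mask):
         # Linear blend of two layers, driven by a 0..1 mask value per pixel.
         return base * (1.0 - mask) + detail * mask

     base   = np.zeros((512, 512, 3))        # e.g. a flat colour layer
     detail = np.ones((512, 512, 3))         # e.g. a noise / cavity layer
     mask   = np.random.rand(512, 512, 1)    # hand-drawn or generated mask
     result = mix_by_mask(base, detail, mask)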
  2. A mask is typically a black & white texture (but not necessarily) that you feed into your node graph to combine multiple paths into one on a per-UV-pixel basis. You obviously need to give "some kind" of unwrap to your high-polygon mesh if you plan to do that on it. Example below
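     For reference, this is roughly what that setup looks like when built through Blender's Python API instead of the node editor; the material and image names are placeholders, and it assumes the textures are already loaded.

     import bpy

     mat = bpy.data.materials["Leather"]          # placeholder material name
     mat.use_nodes = True
     nodes, links = mat.node_tree.nodes, mat.node_tree.links

     mask_tex = nodes.new("ShaderNodeTexImage")
     mask_tex.image = bpy.data.images["mask.png"] # the hand-drawn or baked mask

     mix = nodes.new("ShaderNodeMixRGB")          # the mask drives the mix factor
     links.new(mask_tex.outputs["Color"], mix.inputs["Fac"])
     # Plug the two texture paths into Color1/Color2, then feed the Mix
     # output into the shader's Base Color as usual.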
  3. For smaller details like stitches I usually draw a mask & a bump map in Krita and integrate them into my Blender material. And yes, if you use normal/specular maps, only people with ALM enabled will get the full effect. But unless policies have changed at Linden Lab, the non-ALM renderer is on its way out. You can still bake "some" details into your diffuse map for those users (not all, or you will fight against the normal map). My position, realistically, is that if you can't run ALM, you are very unlikely to have the RAM/VRAM required to support larger textures anyway. As for your question about blurriness, this is something I'm still tweaking, but essentially: baking at your final resolution, rather than at 2x/4x and then shrinking it down, might look grainy in Photoshop, but it will look fine in SL. Trust what Blender shows you, and use temporary textures to double-check in SL.
  4. Make a checker material for your object where the check count is equal to your texture resolution if you want to get a feel for the distortion. Do keep in mind that transitions between pixels are filtered, unlike what Photoshop/GIMP shows us, so a stretched pixel might not necessarily be that visible.
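     A rough bpy sketch of that checker setup, assuming a 1024 target texture and that the active object already has a material slot; the names and the resolution are just examples.

     import bpy

     mat = bpy.data.materials.new("UV_Checker")
     mat.use_nodes = True
     nodes, links = mat.node_tree.nodes, mat.node_tree.links

     checker = nodes.new("ShaderNodeTexChecker")
     checker.inputs["Scale"].default_value = 1024     # one check per texel at 1024px

     texco = nodes.new("ShaderNodeTexCoord")
     links.new(texco.outputs["UV"], checker.inputs["Vector"])
     links.new(checker.outputs["Color"], nodes["Principled BSDF"].inputs["Base Color"])

     bpy.context.object.active_material = mat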
  5. Pretty much, the extra data of your normal map makes up for the difference, and the high poly subdivided model essentially never touches SL.
  6. Yup, you use the full memory that your texture occupies regardless of whether you are actually using every piece of that texture or not. It's not always possible, but try to aim for 90% coverage, unless you are using a tiling texture, in which case you obviously want your UVs to conform to the tiling instead. This is one of my most recent UV maps:
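     As a side note on that coverage figure, here is a rough way to put a number on the waste (assuming an uncompressed 1024x1024 RGBA texture, so about 4 MB either way):

     def texture_vram_mb(size_px, bytes_per_pixel=4):
         # Uncompressed in-memory footprint; the cost is the same no matter
         # how much of the texture your UVs actually cover.
         return size_px * size_px * bytes_per_pixel / (1024 * 1024)

     cost = texture_vram_mb(1024)                       # ~4 MB
     for coverage in (0.5, 0.9):
         print(f"{coverage:.0%} coverage -> ~{cost * (1 - coverage):.1f} MB of it sits unused")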
  7. @Butler Offcourse Since you are going to bake anyway, I'd recommend creating a separate low-polygon model of your shoe, using surface snapping with your high-poly as a reference so you don't have to guess the smooth shape too much. Try to reconstruct a "close approximation" without using subdivision, only adding geometry where it is strictly necessary and where the normal map alone won't convey it well enough.
  8. @Butler Offcourse A passing comment on your UV: you are wasting a massive amount of texture space by having your object unfolded like this. You should consider introducing seams to break it up further and make better use of that space.
  9. It is a problem that exists in pretty much every game under the sun; some are better at mitigating it than others. Alpha-blended surfaces are rendered in reverse depth order (back to front) to account for multi-layered surfaces.
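     A toy Python sketch of that back-to-front ordering (painter's algorithm); this is not the viewer's actual code, just the general idea of why nearby blended layers can end up sorted "wrong":

     def draw_transparent(surfaces, camera_pos):
         # Sort farthest-first so nearer layers blend over what is behind them.
         def dist_sq(s):
             return sum((a - b) ** 2 for a, b in zip(s["position"], camera_pos))
         for surface in sorted(surfaces, key=dist_sq, reverse=True):
             print("draw", surface["name"])   # stand-in for the actual blend pass

     draw_transparent(
         [{"name": "window glass", "position": (0, 0, 5)},
          {"name": "hair layer",   "position": (0, 0, 2)}],
         camera_pos=(0, 0, 0),
     )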
  10. Yeah, it is called "edge splitting". Edges get split for various reasons: material changes, UV seams, and hard edge normals. A hard-shaded cube, for example, ends up as 24 vertices rather than 8, because each face needs its own copies of the corners.
  11. A word on strided lists: I'm not sure they are beneficial in any way compared to just using two lists. Since lists used as arguments are copied around, it seems that two lists would move around less memory than a single strided list?
  12. Pro tip: there is an interesting little pattern you can use for flip-flop systems: on = !on;
  13. Yeah, I think that normal map is already there, but there is something wrong with your model's shading, or with how you baked those normals, because all that flat blue indicates "no normal differences", which is simply not possible given that your upload is smooth shaded. Speaking of which, why does it look as if the bake target was flat shaded?
  14. https://jira.secondlife.com/browse/BUG-227448 There is definitely something wrong with how meshes are either stored or rendered.
  15. It is definitely avoidable; it could be that the creator didn't double-check their triangulation before uploading (which you really should do if you're making layers). Only they can fix this sort of problem.
  16. I never know which one to use because we use "octets" here. The problem is that any time they go see their engineers and show them Second Life, they end up going "why does this even work?". The original people at Linden Lab were extremely proficient technically. Philip Rosedale was RealNetworks's former CTO, so someone extremely competent at the time when it comes to streaming tech, and I can only assume he brought the very best in with him when this all began. These are people who literally added scripting to SL "over a weekend". I'm not trying to deify those people or anything, but this is very different from the Second Life we have today, and from any company with the funds to pursue this type of venture.
  17. But the platform IS the tech, not your desire for it to just swallow anything you throw at it. There is a reason no one is currently doing what SL is doing, and why all those who tried have failed. The reason no company has pushed LL into irrelevance is that what LL has done is insane, unique, and extremely complex to design and maintain; as a result, it also moves very slowly. But blaming the tools doesn't lead us anywhere.
  18. Another thing to keep in mind is that this is "just" for textures, and while it can be increased on third-party viewers, you can never allocate the full amount of onboard VRAM to them because you also need it for other things, like geometry.
  19. They also do a lot of manual culling, such as unloading the floors above & below you, and a lot of games also use different assets for cutscenes; once your brain has been imprinted with those details, you don't notice when they are missing in 90% of the game.
  20. Was Lumiya doing any of the rendering on the phone itself at all?
  21. This is unrelated to your question obviously, and something no one likes to hear... but you have to realize that memory doesn't exactly grow on trees. Performance plummets once a scene doesn't fit into your video card anymore, and it craters once the viewer begins to swap textures in and out of the disk cache because you don't have enough RAM to hold them either >_<. At 3-4 MB per 1024x1024 texture, 44 textures eat 132-176 MB of VRAM. You only get 768 MB of texture memory to play with on the official viewer, so do the math.
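     Doing that math, assuming ~4 MB for an uncompressed 1024x1024 RGBA texture and the 768 MB budget mentioned above:

     per_texture_mb = 1024 * 1024 * 4 / (1024 * 1024)   # 4 MB per 1024x1024 RGBA
     total_mb = 44 * per_texture_mb                      # 176 MB for 44 of them
     budget_mb = 768
     print(f"{total_mb:.0f} MB used, {total_mb / budget_mb:.0%} of the texture budget")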
  22. Just upload them separately (like... basically everyone?)
  23. A good texture memory saving technique too.
  24. What, the lightmap packer? In general, people need to stop thinking there is a button for everything they don't want to do themselves.