Everything posted by Fluffy Sharkfin

  1. I could have used an AO map generated by Filter Forge (as well as the colour map, etc.), but I wanted to illustrate how to create a diffuse texture using nothing but a normal map since, depending on the source, you may not have access to corresponding diffuse/albedo maps or AO maps. Also, 3D Coat will automatically generate AO and curvature maps when you activate conditional masking options or smart materials, and while you can substitute your own external AO map, it's normally easier just to let 3D Coat handle both. (The RGB curvature map is relatively uncommon, and I'm not sure an external curvature map would be entirely compatible even if you could find an app other than 3D Coat to generate one.)
  2. I'd very much like to see LL provide some detailed information on recommended practices, along with examples of content created using the latest features accompanied by in-depth tutorials, etc. I'm just not sure the forums are the ideal venue for such material given all the dissenting opinions and random fights which tend to break out in otherwise useful threads. Yes, trim sheets and modular building techniques are extremely useful for keeping 3D environments optimized; unfortunately the impact those types of optimizations would have on performance in SL is probably not as dramatic as you may imagine.

     Still, rather than focusing on the negative, here's a quick rundown of how to use a normal map to create a completely new base/diffuse texture from scratch... For this example I'm using 3D Coat; Substance Painter is also capable of achieving similar results but the workflow is quite different (I personally prefer 3D Coat since it's quite similar to Photoshop).

     First I loaded up a simple blank plane with a UV island that covered the entire UV map area and imported a normal map I created using Filter Forge. I then used the normal map to calculate an ambient occlusion map and a curvature map, which enabled me to use 3D Coat's Conditional Masking in conjunction with the Paint Tools, Stroke Modes, and Layers & Blending Modes to start adding details to the base texture that correspond to the surface detail information stored in the normal map.

     By utilizing the conditional masking to isolate certain areas of the surface I added some extra depth and "fake" AO between the blocks, a little extra wear, roughness and discoloration to the edges of the blocks, and some dirt and a few extra cracks. I then composited them all together with a stone base texture using the various layer blending modes available. (One area in which 3D Coat has a clear advantage over Photoshop is that you can work on multiple maps at once, meaning you can paint using colour, depth, roughness/specularity and metalness simultaneously, so the cracks and edge wear add additional depth to the normal map as well as adding colour to your diffuse texture.)

     The majority of the detail being added to the diffuse map is based on the curvature and AO maps (which were calculated from the original normal map), so the details should align perfectly with the normal map. And since most of that detail is edge wear, dirt & cracks, etc. rather than the highlights and shadows you would get by adding directional lighting information, the finished diffuse map still works perfectly well with a normal map and there are no conflicting highlights or shadows. Add a little dynamic (moving) lighting and this is the end result...
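In case anyone wants to see roughly what "using the normal map to calculate a curvature map" involves under the hood, here's a rough sketch in Python with numpy and Pillow (the filenames are placeholders): it approximates a curvature mask by taking the divergence of the normal directions. 3D Coat's own implementation will differ, but the principle is similar.

```python
import numpy as np
from PIL import Image

# Decode a tangent-space normal map from 0-255 RGB back into the -1..1 range.
nm = np.asarray(Image.open("normal_map.png").convert("RGB"), dtype=np.float32) / 255.0
nx = nm[..., 0] * 2.0 - 1.0   # X direction from the red channel
ny = nm[..., 1] * 2.0 - 1.0   # Y direction from the green channel (may need flipping
                              # depending on whether the map uses the OpenGL or DirectX convention)

# Divergence of the surface normals: convex edges become bright, crevices become dark.
curvature = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)

# Remap around mid-grey so it can be used as a mask for edge wear / cavity dirt.
curv = np.clip(curvature * 4.0 + 0.5, 0.0, 1.0)
Image.fromarray((curv * 255).astype(np.uint8)).save("curvature_mask.png")
```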
  3. Learning the more advanced techniques can be confusing and frustrating to begin with, but once you've figured out which boxes to tick and which particular settings to use to generate results that work with SL, you'll find the process is pretty simple and little more than an excuse to walk away and make a cuppa while you wait for the software to do its thing.

     I have Substance Painter myself but very rarely use it, since I prefer to use 3D Coat for the vast majority of the modelling and texturing process, so I can't offer any specific advice (although I'd be surprised if there isn't a fix for your problem with having to bake each piece individually). However, looking at your normal map it does seem that you have a lot of unused space on your UV map. It's a little hard to offer any advice on how you could improve your results without examining the low poly model and UV mapping, etc., but I suspect that making some changes to the way your UVs are unwrapped may help quite a bit. For example, all the jewels on the choker appear to be identical, so if your low poly model contains polygons representing each jewel (rather than being a purely flat surface with no additional geometry for each jewel) you could simply generate the normal map for a single jewel and stack the UVs for all the jewels on top of each other so they all use the same part of the texture.

     Working out how best to lay out the UVs of your low poly model in order to maximize the number of pixels you have to work with, and ensuring you take every opportunity to save space by reusing the same pixels on identical parts of your model, is something that takes time and practice, but once you're able to unwrap UVs efficiently you'll find your end results will be a lot more satisfactory.
  4. To be clear, the results you get from attempting to extract a normal map from a regular texture are not even remotely comparable to what can be achieved by baking details from a high poly model to a low poly model in a 3D app. You can get reasonable results if your original image meets certain criteria; however, you're invariably much better off baking your normal maps using an app like Substance Painter, 3D Coat, Marmoset Toolbag or Blender if you have access to both the high and low poly models. Here's a useful article that outlines why filtering photographs isn't the best approach: How NOT To Make Normal Maps From Photos Or Images (it also gives some details on how to generate normal maps by creating your own depth maps, and has a ton of links to additional resources at the beginning of the article which provide a thorough explanation of how to create and manipulate normal maps, along with details on some of the various tools available).
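As a companion to the "create your own depth maps" approach the article mentions, here's a minimal sketch (Python with numpy and Pillow, placeholder filenames) of how a hand-made grayscale height map can be converted into a tangent-space normal map. Dedicated bakers will do a far better job, but this is the basic idea:

```python
import numpy as np
from PIL import Image

# Load a grayscale height/depth map into the 0..1 range.
height = np.asarray(Image.open("height_map.png").convert("L"), dtype=np.float32) / 255.0

strength = 4.0                 # exaggerates or flattens the apparent bumps
dy, dx = np.gradient(height)   # slope of the height field along each image axis

# Build un-normalized normals from the slopes, then normalize per pixel.
nx, ny, nz = -dx * strength, -dy * strength, np.ones_like(height)
length = np.sqrt(nx * nx + ny * ny + nz * nz)
normals = np.stack([nx / length, ny / length, nz / length], axis=-1)

# Encode from -1..1 into 0-255 RGB (some tools expect the green channel inverted).
rgb = ((normals * 0.5 + 0.5) * 255).astype(np.uint8)
Image.fromarray(rgb, mode="RGB").save("generated_normal_map.png")
```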
  5. I sometimes like to play "what if" and imagine what SL would be like with all the bells and whistles of modern game engines. For example, if SL were using Unity I like to think this would be what teleporting looks like... Outfit changes would look pretty awesome too...
  6. Quick Tip: when adding an image, double-clicking it will open a window where you can set the size at which it will appear in your post.
  7. I think an additional and seldom mentioned side-effect of the tendency to butcher LOD models in favour of lower land impact is that having lower land impact means you can rez more items simultaneously, which in turn means you have more textures to load. Sure, it's great being able to rez 4 times as much stuff on your land, but when you consider that each of your neighbours is doing the same, and realise that it all adds up to potentially hundreds or even thousands of extra textures in every region, you have to question whether cheating the LI restrictions is really such a good idea after all.

     I'm very rigid in my determination to make proper LOD models whenever necessary, but then, as previously mentioned, I enjoy the challenge of working out exactly how all the pieces of a model fit together, so I will happily spend time laying out my high LOD UVs so that when I come to make the low LOD I can just delete large groups of polygons and seamlessly replace them with a flat plane or two. I'll also quite often start by creating the mid or even low LOD model and then create the higher LODs afterwards, but as I said it depends a lot on the type of asset I'm working on.

     I mostly use the default LL viewer and never change my LOD settings, since I want to see how the things I'm creating look using the defaults, so I often see people wandering around bald and naked aside from some randomly placed triangles. And yes, your screenshot contains an impressive lack of random floating triangles, you deserve multiple gold stars! */me hands you a whole packet of gold stars with which to adorn yourself*
  8. I'm not going to deny that SL has major performance issues, and a large part of that is simply down to the way people use the platform and a general lack of understanding when it comes to how best to manage resources in order to optimize performance. I'm also in agreement with you that, without access to proper resources and clear examples of how to use advanced features like materials, the majority of residents are just going to cause more performance issues. As you pointed out in your previous thread on the subject...

     This is because the AO map contains no directional lighting information, and of all the maps in both the existing and upcoming materials systems it is the easiest to composite, since the AO map contains (lack of) lighting information which can be easily interpreted visually.

     The normal map is unique in that, unlike the diffuse/albedo, roughness and metalness maps, it doesn't contain information about the intensity of surface properties; it contains information about the angles of surface details (i.e. geometry), which, while it can be used to fake additional texture detail under certain circumstances (seamless tiling textures, for example), is really only useful when correctly converted from tangent space into world space and combined with information about the surfaces to which it's applied and the lights being reflected off those surfaces. In the case of seamless tiling textures, since they are created by baking geometric detail onto a flat plane where the initial surface normals are all facing the same direction, it's a little easier to manipulate the data contained in the red and green channels of a normal map to produce potentially useful results, and with a little creative use of blending modes and adjustment of the levels and curves of each channel you can even isolate specific ranges of surface angles in order to apply directional lighting and shading effects.

     However the truth is, like it or not, SL is about to get an update that will make normal maps visible to everyone, which means you'll get far greater benefit from simply using them as intended than you will from using even the most advanced techniques to try and fake the same effect on a diffuse texture.

     Anyway, I could go into great detail about the various ways normal maps & PBR materials could be used to actually improve performance, but I don't want to hijack your thread any further with this tangent ("tangent", get it... like "tangent space"!?... okay, nevermind... ). As I said previously, some of the other points you made were good, and I think we've spent enough time focusing on normal maps when there's a ton of other good advice which could be shared in this thread. For example, it would be nice if we could find a way to educate some of these creators who insist on using ten 1k textures for a single house on the benefits of modular design and trim sheets; they could save themselves some production time and everyone else a whole lot of VRAM.
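For anyone wondering why the AO map gets singled out as the easiest one to composite, here's a tiny illustration (Python with numpy and Pillow, placeholder filenames): because it only stores how much ambient light reaches each texel, baking it into a diffuse texture is nothing more than a per-pixel multiply.

```python
import numpy as np
from PIL import Image

# Load the diffuse texture and a grayscale AO map, both scaled into 0..1.
diffuse = np.asarray(Image.open("diffuse.png").convert("RGB"), dtype=np.float32) / 255.0
ao = np.asarray(Image.open("ao.png").convert("L"), dtype=np.float32) / 255.0

# Multiply blend: occluded areas (dark AO values) darken the diffuse colour.
combined = diffuse * ao[..., None]

Image.fromarray((combined * 255).astype(np.uint8)).save("diffuse_with_ao.png")
```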
  9. Apply the diffuse and normal map to a flat plane in any 3D app that supports texture baking, set up your desired light sources and bake the additional lighting information to the diffuse texture! Even isolating the directional lighting information contained in the red and green channels of the normal map and then using it to create masks in order to manually apply highlights and shadows gives you more control over the end results than simply desaturating, inverting the colours and playing with blending modes. I'm not even sure how you can call "Go wild here and try all sorts of things" a method, but frankly, if that's the best solution you can find then far be it from me to stop you; "go wild" and waste as many hours trying it as you like!
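To make the masking idea a little more concrete, here's a rough sketch (Python with numpy and Pillow; filenames, light direction and blend amounts are all made up for illustration) of how the red and green channels of a tangent-space normal map can be turned into directional masks and used to lighten or darken a diffuse texture. It demonstrates the principle only, not a recommended production workflow.

```python
import numpy as np
from PIL import Image

# Decode the normal map's X/Y channels into the -1..1 range.
nm = np.asarray(Image.open("normal_map.png").convert("RGB"), dtype=np.float32) / 255.0
nx = nm[..., 0] * 2.0 - 1.0   # -1 = facing left, +1 = facing right
ny = nm[..., 1] * 2.0 - 1.0   # up/down, depending on the green-channel convention

# Fake a light coming from the upper-left: highlight texels facing that way,
# shadow the texels facing away from it.
highlight_mask = np.clip((-nx + ny) * 0.5, 0.0, 1.0)
shadow_mask = np.clip((nx - ny) * 0.5, 0.0, 1.0)

diffuse = np.asarray(Image.open("diffuse.png").convert("RGB"), dtype=np.float32) / 255.0
lit = diffuse * (1.0 - 0.4 * shadow_mask[..., None]) + 0.3 * highlight_mask[..., None]
Image.fromarray((np.clip(lit, 0, 1) * 255).astype(np.uint8)).save("diffuse_faked_lighting.png")
```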
  10. Thanks, I'll definitely take a look once some of the more serious issues are resolved. I'm currently using a mini tower which is limited to a 230W power supply, which in turn means a 2GB GT 1030 with passive cooling is the best option available, so I'm not expecting miracles. It will certainly be interesting to see how well the new viewer performs on a $100 graphics card in a $250 refurbished office PC. 😅
  11. I don't believe there's really a "best way" to manually create LOD models; the ideal method will vary depending on the software & workflow that you're using, as well as the type of asset you're making and its intended use. It's the "intended use" part that makes creating content for SL tricky, because while you can create a pair of shoes with the intent that people will wear them on their feet, you can't guarantee that someone won't try to scale one up so it's 40 meters tall and then try to build a house on top of it. Not knowing how an asset will be used, at what scale, distance or angles it will be viewed, what other assets it will be used in conjunction with, etc. can make creating well-optimized and effective LOD models extremely difficult.

     I actually quite enjoy creating LOD models (but then I also enjoy retopology and UV mapping so I may just be a masochist). I tend to treat each project as a puzzle to be solved rather than following the same workflow every time I create something. It gives me the opportunity to learn new techniques and tools while forcing me to really think about how I'm creating each asset and how to achieve the best results in terms of both aesthetics and performance; also, my brain tends to stay more engaged and focused when not repeating a familiar process, and an active mind means an active imagination. Most importantly, for me at least, I find that it makes creating a lot more fun if you get to experiment a little and try something different now and then.
  12. Yeah, I assumed that performance would be high on the priority list and that, since the project is still in early alpha stages and the list of known issues includes "downgraded performance & stability", improvements in that area are still very much ongoing. My own PC is kind of a potato so I'm curious as to how it will cope but haven't gotten around to testing it as yet. I'm also very excited about the new reflection probes so may have to go and experiment soon (assuming I can still get onto the beta grid, it's been a while since I tried).
  13. Yes, regardless of whether your normal maps use the correct formula, the other maps will need to be converted from PBR specular to PBR metallic and repacked into the correct channels. However, since you can edit materials in the viewer, it should be possible to upload a new material with no normal map and then apply a previously uploaded normal map afterwards (assuming that the previously uploaded normal map uses MikkTSpace and doesn't include the specular exponent/alpha channel). But you're absolutely right: given the number of maps the new PBR system utilizes which are not part of SL's current materials system, creators are going to need to rebake their textures before uploading anyway, so other than normal maps that happen to use MikkTSpace there isn't much that could be salvaged and repurposed for use with the new PBR materials.
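The repacking itself is the easy half of that conversion. In glTF, roughness is read from the green channel and metalness from the blue channel of the metallic-roughness texture, while the occlusion texture reads from red, so the three are commonly packed into a single "ORM" image. A minimal sketch of that merge (Python with numpy and Pillow, placeholder filenames; actually converting specular/glossiness values into metallic/roughness is a separate step not shown here):

```python
import numpy as np
from PIL import Image

# Three separate grayscale maps, assumed to be the same resolution.
ao        = np.asarray(Image.open("ao.png").convert("L"))
roughness = np.asarray(Image.open("roughness.png").convert("L"))
metallic  = np.asarray(Image.open("metallic.png").convert("L"))

# Pack: R = occlusion, G = roughness, B = metallic (the usual "ORM" layout).
orm = np.stack([ao, roughness, metallic], axis=-1).astype(np.uint8)
Image.fromarray(orm, mode="RGB").save("orm_packed.png")
```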
  14. I noticed it was listed under the Resolved Issues section of the release notes, but I assumed that LL's original caveat concerning performance still applies and the decision to remove it hadn't been finalised yet.
  15. And yet your proposed method for compositing normal maps with diffuse textures is remarkably crude. Even the basic information contained in the video I posted provides clues to a more reliable and accurate method for transferring the lighting information to a diffuse texture (not to mention the multiple correct ways in which you could use a normal map to fake lighting on a diffuse texture). As is the possible removal of forward rendering, which leads me to question why on earth you would try to convince people to go to such lengths to create content which is incompatible with the new features LL are planning to implement. As has already been pointed out, SL has already implemented them (for several years now) and the new PBR materials will have them too. Regardless of what you claim to know about normal maps you seem quite unfamiliar with the current materials system in SL, perhaps you should refrain from offering advice on how to (mis)use it? On this we can agree, which is why it's important to try and educate creators (and residents) on the best way to utilize the new PBR materials system and why advice like "just load your normal map into photoshop and tweak the blending mode until it looks pretty!" is so counter-productive!
  16. Channel packing isn't really a problem since 3D Coat provides an export constructor allowing the user to assign specific maps to each of the RGB channels when exporting a set of textures (it also has support for glTF export, but I always like to check the export constructor as well so I know exactly which channels my individual maps are hiding in). As for per-fragment vs per-vertex tangent computation, I would assume that since both methods are used in the two most popular game engines (Unreal Engine uses per-fragment while Unity does not), most modern apps capable of baking textures will provide support for both. It's unfortunate that it was necessary to make such drastic changes to the mesh asset format; I imagine a few people are going to be miffed that they have to re-upload all their assets in order to make use of PBR but then, as the saying goes, if you want to make an omelette you have to break a few eggs.
  17. It certainly wouldn't make much sense to so drastically limit the versatility of PBR materials... unless LL have finally had enough of hosting millions of images and are planning to remove traditional texture uploads at the same time they remove the forward renderer and limit everyone to materials in glTF format only?! 😮 🤣 I'm intrigued as to how treating PBR materials as a brand new asset type will play out, and how conflicts between the permissions of materials and objects will impact the ability to modify objects with PBR materials applied to them, but I don't really keep up with the progress of the test viewer, etc. since I enjoy surprises, so I figure I'll just wait until something exciting happens and then read about it on the forums.
  18. Yes, although using the wrong formula to generate your normal map will still produce incorrect shading, it's far less noticeable on large smooth-shaded UV islands. It becomes more noticeable at UV seams because the values contained in the normal map are incompatible with the formula used by the shader and produce incorrect surface normal angles which don't align correctly between the various UV islands, resulting in hard edges where they should be smooth. From what I understand, the creation of material assets will initially only be possible using an external editor prior to uploading, so it may be that you end up having to re-upload all those MikkTSpace normal maps anyway. 😲
  19. It's more complicated than that. Whereas a swizzle is simply a matter of ensuring that the Y-axis data in the green channel is correctly oriented, there are multiple formulas which can be used to calculate the tangent basis, and if the formula used when generating the normal map doesn't match the one the shader uses to display the end result, the model won't be shaded correctly. Here's a link to the Polycount wiki entry on normal maps, and specifically the section on tangent basis: Normal Map Technical Details - Tangent Basis
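For contrast, the swizzle fix really is trivial: converting between the two green-channel conventions (often labelled DirectX-style vs OpenGL-style) is just an inversion of the Y data. A quick sketch (Python with numpy and Pillow, placeholder filename):

```python
import numpy as np
from PIL import Image

# Load the normal map and make the array writable.
nm = np.asarray(Image.open("normal_map.png").convert("RGB")).copy()

# Invert the green channel to flip the Y axis of the encoded normals.
nm[..., 1] = 255 - nm[..., 1]

Image.fromarray(nm, mode="RGB").save("normal_map_flipped_y.png")
```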
  20. @EliseAnne85 I don't know why you're having trouble finding examples of PBR in modern games, pretty much all modern games using a 3D engine utilize PBR. Since you're looking for something that isn't just "shiny or bumpy", here's a clip from Red Dead Redemption 2; everything you see in this clip (and in the game overall) is done using PBR shaders/materials...
  21. The specific software used is less important than ensuring you use the correct tangent space formula, etc. to generate the normal maps. Normal maps in the current SL materials system use Eric Lengyel's formula, but once the new PBR materials are introduced normal maps will be using the much more widely supported MikkTSpace formula.
  22. It really depends on how you create that "smoothness"; there are right ways and wrong ways, and for the purposes of SL, adding a ton of extra geometry to your final model is invariably one of the wrong ways. The most useful tool a developer has for creating seemingly high poly objects using just a few triangles is baking normal maps, and it's definitely the best approach if you're using ZBrush to add detail to your models. I already posted this short YouTube video in another thread but it is, at least in my opinion, information that bears repeating...
  23. This is one of the areas in which PBR materials may actually help to improve matters (if creators can be persuaded to take advantage of it). Unlike diffuse textures (which may contain information derived from a combination of the roughness/metalness of the surface and diffuse lighting & shading), albedo maps contain purely the base colour of the surface they're applied to, so by using a desaturated/grey-scale albedo map you can specify luminance values of the surface without adding any colour information and then allow the user to select the colour by tinting parts of the asset instead. Since all the highlights and shading on objects are the result of the roughness, metalness, normal & AO maps, tinting the desaturated albedo map of a PBR material should have a more realistic effect than can be achieved with a diffuse map. Certain games use a very similar approach for user-customizable content, although rather than individual faces/materials they employ an RGB mask to specify which areas of a texture/model should be colourized.
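As a quick illustration of the tinting idea (Python with numpy and Pillow; the filename and tint colour are arbitrary examples): with a grayscale albedo that only stores luminance, a user-selected tint is just a per-pixel multiply, while the roughness, metalness, normal and AO maps continue to provide all the shading on top.

```python
import numpy as np
from PIL import Image

# Grayscale albedo storing only luminance, scaled into 0..1.
albedo_gray = np.asarray(Image.open("albedo_gray.png").convert("L"), dtype=np.float32) / 255.0

# Example tint colour, e.g. picked by the end user when customizing the item.
tint = np.array([0.35, 0.55, 0.85])

# Per-pixel multiply: luminance * chosen colour gives the tinted base colour.
tinted = albedo_gray[..., None] * tint[None, None, :]
Image.fromarray((tinted * 255).astype(np.uint8), mode="RGB").save("albedo_tinted.png")
```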
  24. No, just.... no! You make some good points in your post (and some not so good ones), but blending normal maps with your diffuse in a photo editor to try and simulate lighting and shadow is honestly a terrible idea! This short video explains exactly what normal maps are, how they work, and why using them correctly will greatly improve the visual quality and detail of both mesh and prims.
  25. @DulceDiva You can find a full explanation of the concept of subdivision levels in ZBrush in the official documentation here: Subdivision Levels | ZBrush Docs (I'd also recommend reading the following chapter on Dynamic Subdivision). If you search YouTube for "zbrush subdivision" you'll find plenty of explanations and tutorials on how to use it to solve your problem with facets on your base mesh; here's a nice short one that summarizes things quite well... I'd also recommend checking out the YouTube channels of Flipped Normals and Michael Pavlovich, two excellent resources for ZBrush users.

     I don't personally use ZBrush so can't give you any specific assistance, but 3D Coat works on a very similar basis so I'm pretty familiar with the workflow. One tip I can offer is to keep in mind the resolution of your final textures and how much texel density/UV space each part of your base mesh has to work with when sculpting additional details. You can add a few million polygons to a mesh and go crazy adding fine detail like stitching on seams, etc., but if the UV mapping of the base mesh for the part of the model you're sculpting on only occupies 256x256 pixels of the final texture, all that extra detail is going to look like a grainy, pixelated mess when you bake it onto the low poly model.
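If it helps, here's a back-of-the-envelope way to sanity-check that last point (plain Python; all the numbers are made up for illustration): given how much of the UV map a part occupies and roughly how large that part is, you can estimate how many pixels per metre you actually have before committing to sculpting fine detail like stitching.

```python
import math

def texels_per_metre(texture_res: int, uv_area_fraction: float, surface_area_m2: float) -> float:
    """Approximate texel density for one part of a model."""
    texels = (texture_res ** 2) * uv_area_fraction    # pixels assigned to this part
    return math.sqrt(texels / surface_area_m2)        # pixels per metre along one axis

# e.g. a part using ~6% of a 1024x1024 texture and covering ~0.5 square metres:
print(round(texels_per_metre(1024, 0.06, 0.5)))       # ~355 px/m, i.e. roughly 2.8 mm per texel
```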