OptimoMaximo
Resident
  • Posts: 1,810
  • Joined
  • Last visited
  • Days Won: 3
Everything posted by OptimoMaximo

  1. @Chic Aeon Ah well, we tried everything we could... fortunately some meaningful improvement came out of it, although not as significant as I initially hoped.
  2. @Chic Aeon Well, over there you can find the AA (anti-aliasing) samples, which might help improve the pixelation issue on diagonals.
  3. @Chic Aeon Out of curiosity, I downloaded Blender 2.78. Did you play with these settings down here? They look VERY similar to the render settings Maya offers within the Arnold renderer. If you haven't, let me know, I can explain how this thing works, as I'm playing with it and it's pretty consistent with Arnold's working method. EDIT: turning on the "Square Samples" checkbox makes it work exactly as Arnold does.
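For reference, this is roughly where those settings live in Blender's Python API, assuming 2.78's Branched Path Tracing mode (the property names are from that version and may differ in later releases):

```python
import bpy

cy = bpy.context.scene.cycles
cy.progressive        = 'BRANCHED_PATH'  # per-component sampling, the Arnold-like mode
cy.use_square_samples = True             # the "Square Samples" checkbox mentioned above
cy.aa_samples         = 4                # camera (AA) samples; squared, so 16 effective
cy.diffuse_samples    = 3                # per-component samples, also squared
cy.glossy_samples     = 3
```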
  4. Well, perhaps they might be more important when you have an indoor scene to bake. Don't forget to try it when you get a chance. On a side note, just a floor like in your picture doesn't really help much if you don't have any walls, as the chance of light bouncing back towards the object isn't that high this way, especially with the light being perfectly perpendicular to the floor. But hey, you got an improvement, so thumbs up! I'd love to see what you come up with next using your newly acquired knowledge, the improvement stats, and the actual finished item too =)
  5. @Chic Aeon Try whether there's a noticeable difference when you change the bounces in the render settings to lower/higher values while keeping this same lighting setup. That might also be the answer to removing noise =)
  6. I don't get why a round zero is not accepted, but it's OK.

Checkboxes: Reflective is what goes with the Gloss shader, Refractive is what would go with IOR. The latter is really needed only if you use IBL (Image Based Lighting), which means hooking up an environment HDRI image (something I STRONGLY recommend doing). Unless you have thick, lens-shaped glass, Refraction isn't really needed, but you should test it; metals may benefit from it too.

Bounces side: Max is set to the same value as whichever property below it has the highest value. Transmission actually means translucency (example: foliage or wax) or simple non-refractive glass. If you don't have either of these two material types, dump it to zero. Now for the core properties you want to use for baking: Diffuse is set to 4, but the higher you go, the better lit (and less noisy) the diffuse colors should be. When you have pretty dark and noisy corners, setting this higher improves both luminosity and noise. The same goes for Glossy, which is the important bit for metals since they don't have a diffuse component, so if a metal reflection looks noisy in its color, this is the setting to touch, not the diffuse. So I would suggest recycling those spare 12 samples from the unused Transmission over to these two properties, setting them to maybe 6 each. Once you've found a good balance, the highest value among those should be set as Max bounces. Min bounces should stay at 3 (really the bare minimum, but you can raise this to maybe 4). I'd like to know what Filter G:0.00 is in its extended version.

Now here: samples from this light look like the default for an open-world type of environment, and to me it looks way overkill if set to 1024 max bounces. Samples from this light would just flood the scene and, after a few bounces, travel into the void and never stop being accounted for. So something more realistic for this may be around 10 or 12, and I would go higher only if it's set to Portal mode. However, I don't really know how this property was actually coded within Blender, so you'd better test this. I'm talking from a Maya/Arnold perspective and the Blender devs might have had a different measurement unit in mind.

Portal mode: this is something that goes along with an environmental HDRI, so that a light set as a portal doesn't actually add to the total light in the scene; it just funnels the lighting from the HDRI into an INDOOR area. A great example is to put portals on the windows, pointing indoors. This mode is especially useful when you render an interior and the environment lighting isn't needed outside of your mesh, so the exterior doesn't get calculated. You can mix normal-mode area lights for the exterior and portal-mode ones for the interior, saving a lot of resources and increasing the quality brought in from the HDRI environment.

This is called remapping, texture transfer, or texture warp, and it's basically the method I was explaining in my previous post, with the difference that I suggest re-mapping everything you've got on one high-res texture onto smaller textures spread across multiple UV tiles. This isn't really supported in Blender, but there is a workaround: it implies the use of a second UVSet (UVMap in Blender, within the Mesh Data panel, the triangle icon). This also relates to my method: I do a very high-res texture, free from worries about texture space, then remap it to the new UV map with the optimized UVs, organized across more UV tiles, because... ...
which means that for each texture Blender starts to bake, the whole calculation process RESTARTS from SCRATCH, since Blender doesn't have lightmap caching features. That is why Light Path is better on one single image: you do your tests with low samples, and only for the properties you actually intend to use (dumping the useless ones), to TEST the result on a smaller-res texture so it takes less time. You can then check how it looks in regard to lit areas, whether it looks correct or not, without worrying about quality at this first stage. At this point you shouldn't care if it's noisy; you should check how the lighting hits and bounces in general. When it looks OK, you can launch the final render on the single high-res texture, which involves calculating everything only once. When all is good, do the texture transfer to the other UVSet, the one split into multiple textures. This transfer doesn't require an actual render and is very fast. This way, the same high-res result that would have taken a long time to bake shrinks down quite noticeably in time, but with far greater texture quality and a directly proportionally weaker headache.
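For what it's worth, the Light Path values suggested earlier in this post, as a minimal bpy sketch (again assuming Blender 2.78 property names; later releases dropped min_bounces):

```python
import bpy

lp = bpy.context.scene.cycles
lp.diffuse_bounces      = 6   # brighter, less noisy diffuse in dark corners
lp.glossy_bounces       = 6   # cleaner reflections, the important bit for metals
lp.transmission_bounces = 0   # no glass/translucency in the scene, so dump it
lp.volume_bounces       = 0   # no volumetric shaders either
lp.max_bounces          = 6   # match the highest of the per-type values above
lp.min_bounces          = 3   # bare minimum; can be raised to 4
```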
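And a rough bpy sketch of the UVSet-to-UVSet transfer just described, assuming Blender 2.78/Cycles. Names like "FinalUV", "HighResBake" and the 1024 tile size are only illustrative, and this handles a single target tile, so it would be repeated per tile of the optimized layout:

```python
import bpy

obj  = bpy.context.object              # the mesh, selected and active, Cycles as render engine
mesh = obj.data

# Second UV set with the optimized layout (Mesh Data panel > UV Maps)
if "FinalUV" not in mesh.uv_textures:
    mesh.uv_textures.new("FinalUV")
mesh.uv_textures.active = mesh.uv_textures["FinalUV"]
mesh.uv_textures["FinalUV"].active_render = True   # make sure the bake writes against this layout

# Temporary emission material reading the finished high-res bake through the ORIGINAL UV map
mat = bpy.data.materials.new("TransferMat")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

uv_node = nodes.new("ShaderNodeUVMap")
uv_node.uv_map = "UVMap"                            # the first/original UV set
src = nodes.new("ShaderNodeTexImage")
src.image = bpy.data.images["HighResBake"]          # the single high-res bake result
emit = nodes.new("ShaderNodeEmission")
out = nodes.new("ShaderNodeOutputMaterial")
links.new(uv_node.outputs["UV"], src.inputs["Vector"])
links.new(src.outputs["Color"], emit.inputs["Color"])
links.new(emit.outputs["Emission"], out.inputs["Surface"])

# Target image node: it must be the ACTIVE node so the bake writes into its image
dst = nodes.new("ShaderNodeTexImage")
dst.image = bpy.data.images.new("Tile_01", 1024, 1024)
nodes.active = dst

mesh.materials.clear()
mesh.materials.append(mat)

bpy.ops.object.bake(type='EMIT')                    # pure texture copy, no light calculation involved
```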
  7. Chic, Volume sampling is related to volumetric shaders, like self-contained smoke or atmospheric scattering, and I would suggest always turning that to zero unless you're doing scenery renders. Dithering instead should do something, but it's most likely unnoticeable when applied to <2K render images, but... Exactly. This is ground for the type of approach I was trying to explain in the other thread, where I was using 2 UVSets: one for the main bake within the regular UV space (0-1, big image) and a second UVSet using more UV tiles/texturable faces. The problem with baking a huge texture is that, no matter what you do, after resizing you get some texture degradation as mandatory collateral damage to the image. That's why you should try using the Light Path feature instead of the general sampling I can see you refer to. That way you can tell your software not to waste samples on features you're not using, like SSS or Volume, and you can also set (if I remember Blender correctly) how many bounces are allowed before a sample dies off and stops being accounted for. Moreover, the sample bounce values imply that there should be some geometry to bounce off, so setting up a sort of huge floor with walls at a distance helps avoid wasting samples that fall into the void and never stop being calculated because they never bounce. I don't know whether lights in Blender have their own samples, but if they do, set those to a high value and keep the scene render sampling relatively lower. These three things help improve render/bake time in relation to a bigger output size.
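On the per-light samples question: in 2.7x Cycles, lamps do carry their own sample count (used with Branched Path Tracing), roughly like this; bpy.data.lamps is the 2.7x collection name and the values below are just placeholders:

```python
import bpy

scene = bpy.context.scene
scene.cycles.progressive = 'BRANCHED_PATH'
scene.cycles.aa_samples  = 8          # keep the overall scene sampling relatively low

for lamp in bpy.data.lamps:           # renamed to bpy.data.lights in later releases
    lamp.cycles.samples = 4           # push the sampling work onto the lights instead
    # For indoor bakes lit through windows, area lamps can also be flagged as portals:
    # lamp.cycles.is_portal = True
```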
  8. I don't do any human stuff either, so I hear you there. The turns are really unnatural, as the hip location is totally stationary and only the legs and torso move. My latest avi is a quadruped dragon and the turns using the serverside animations work flawlessly, as opposed to using the llStartAnimation function, where, regardless of my custom turn animation's priority being set to 6, the default turn still kicks in for half a second, which is very irritating. Using the serverside function for those two animations completely solves the problem with a regular priority 3, as it should be. So when you make anything with legs, keep this in mind; maybe a hybrid solution is the best route to take, at some point.
  9. Maya users do get the ability to have filtering applied at bake time, using the appropriate settings. Not using filtering gives you pixelation anyway, and going overboard with these filters lengthens the bake time and decreases texture sharpness, like applying a fairly strong blur in Photoshop.
  10. Yep. Indeed, you either force the jump by applying the impulse vertically and forcing the state change, or you end up sliding in pre-jump for 2-3 seconds. Not ALL animations kick in abruptly, but a few do, like the landing, which ends up blending into the default stand (although it should be overridden?!) unless you use a higher-priority version of your custom stand. Other animations are fine; the old-school AO has quite a lot of trouble with the turning animations regardless of the priority you set on your custom turns, while the serverside AO makes them work well. I ended up mixing the serverside animation overrider with avatar state controls to override problematic animation transitions using the old-school llStartAnimation system =\ quite meh, confusing and time consuming.
  11. I know them too, and you know what? They only care about selling and getting your money, giving you what you blindly ask for, regardless of whether that's good or bad for the platform. They don't care, as shown by their adoption of high poly count meshes (instead of optimizing them) and by their LoD policies, gaming the system to appear "professional" with high surface smoothness at apparently no higher LI/complexity cost. So if the assumption is that those individuals adopting BoM means it is good, please realize that they only want "to be first" and sell more before all the others.
  12. The point is that the multiplier is subordinate to the map value to begin with. Black is zero gloss: you can set the multiplier to 255 and still have no glossiness showing up on the zero-gloss areas defined in the map.

Being the exploitation of a BUG, my dear, can you understand that I doubled the UV count and not the polygons? As the result of a BUG, the final result IS BUGGY, and therefore miscalculation is an obvious consequence. Reuploading the same model gives different LI, WAY different (higher and lower), because the mesh file fools the interpretation done by the uploader. Therefore, in that regard, this statement from you is nonsense, because I got the effect by feeding "nonsense" to the uploader, which in turn gave me that effect with a "nonsense based" calculation. What I'm advocating is a development in that same direction, without tricks on the uploaded mesh data (read below).

Five sets of materials, 15 textures in total, plus 1 AO for the whole build altogether. Now read intently, and don't skip what you don't like to hear: to have the same visual resolution with bakes, each side of those cubes would need more than one texture face, and baked textures nowadays are also coupled with their corresponding set of material textures to go with the bake itself. So in the end, to get the same visual resolution, where I got away with a total of 5 sets of materials (logs, regular wood, stone blocks and 2 versions of the hay, plus 1 single AO for the whole build to layer on top), there would be the need for many more baked diffuse textures WITH their own corresponding material textures (normals and specular). Therefore, a texture baking workflow always results in more maps than a layered system, which is why all platforms/engines/3D softwares converge on that solution rather than on pre-rendered textures: it looks dynamic because the materials used can be randomized in their placement, giving us less obvious repetition and more variety using the same set of textures, higher resolution from the tiling approach, and less texture memory consumption. Example: to get a final visual resolution of 256 px/sqm, using baked textures requires a ton of them, which in the end keep showing repetition and blurring, as opposed to the layering system, which relies on tiling and COVERS obvious repetition by using different layers, resulting in more varied textures with the same set of materials, not more sets.

The wrong thing is that it applies only to mesh avatars, neglecting all other content types, while the approach I describe can also be applied to avatars and can be flexible enough to allow different types of workflow, single-tile textures as well as tiled textures. Moreover, BakesOnMesh (bodies) applies the same separations as the default avatar, also limiting the available slots to what the classic avatar offers, forcing you to recycle the skirt, for example, to allow left/right arms to get different texturing. This works, PERHAPS, for mesh avatars that comply with the classic avatar's standards, but those that have proprietary UVs and/or different UV arrangements are cut off from this anyway. Using the classic avatar's UVs "for compatibility" was and still is a drawback on the possibilities that mesh avatars have, and it was done just to be able to, again, recycle older content instead of making new and better (also higher-res, if you will) content.
After all this is said, couple that with mipmapping being properly implemented, reducing the texture size as LoDs would, based on viewing distance. Currently even that is non-standard mipmapping; as it is right now, it's just a progressive increase of resolution while the texture is being downloaded, but then it stays at full resolution regardless of the viewing distance.

To conclude: the problem with Bakes on Mesh is that it won't solve the performance problems that SL currently has. What I'm advocating is a DEVELOPMENT (= IMPROVEMENT) of the current system, making sure that the shader begins to work on a layering-system basis AND textures are scaled in relation to viewing distance (materials are applied via a shader, which is a piece of code that defines the material per se and how textures should be interpreted; please don't reply that SL has no shaders: it's got only one, which WE can't work on, and WE get only the input slots to add our textures). I think it was Klytyna who was saying that mips would have to be included within the textures themselves and are therefore not a viable solution because of the final compressed texture size; that was true in the past, but there is no need to include mips in the main resolution texture nowadays, as most games compute them on the fly because hardware and code now support this. You feed a final resolution and the resizing is automated without cluttering texture memory, which instead gets relief from doing this, discarding the unnecessary allocated memory that the smaller texture version needs compared to the bigger ones, and just downloading the missing bits when a "scale up" is necessary instead of the full texture.

This is why BoM is a quick sop for you LL fanboys. You don't realize how this project's aim is to make a fool out of the user base and creators altogether, to avoid serious development while still being able to say "we're working for you and the platform's improvement" and justify the latest increase in cash-out fees, while there's NO proportion between their increased income and the actual work they do in development. Because what they're doing is patching an old feature, one that only remotely resembles the common standard anywhere else, to work on newer content, instead of DEVELOPING a newer base feature (materials) to include NEW functionality.
  13. I knew that over a year ago, when I told you and the other Bake-Fail propaganda spewers that bake-fail was a bloody stupid idea. It's STILL a bloody stupid idea, that's only supported by those tech-illiterate enough to believe the propaganda, and by those who think it will resurrect their failing "system clothing and skins" businesses, by allowing them to sell their 10 year old back-catalog to future generations of tech-illiterate noobs. Is there no end to your hypocrisy and cluelessness... One of the "advantages" of bake-fail that YOU specifically have been so keen to stress is doing away with the need for meshed layers close to the skin, but a translucent latex catsuit, for example, that was MODELED in 3D would basically be a copy of the body, with the minor modification of spanning across the cleavage and butt crack, just like... the clothing layer on your materials-applier-friendly mesh onion-layer body. So you are basically saying, wear an onion layer... And then you make the mistake of calling for MESHED seams and fine wrinkles, apparently oblivious to the massive increase in poly count resulting from this, and apparently oblivious to the FACT that much of the mesh-based clothing these days USES NORMAL MAPS to add such details rather than modeling them. I'd suggest you turn on Advanced Lighting Model, wear a Maitreya body, and try a demo from one of the materials-based latex applier vendors, there are several, so you can see just how WRONG you are... Again... Here's a product shot from ONE of the materials-based applier makers, note the fine wrinkles and seam lines in the suit, the kind of detail that NOBODY would try to model into a mesh, unless they were completely ignorant of the technical realities of mesh-making and rendering costs. ... Let's sum up YOUR position so far. 1. Bake-Fail uber alles, because onion-layered mesh is EVIL (PS. You can wear all your 10 year old system rags again) 2. Matte-finish flat "body paint" looks better than spec/normal mapped materials. 3. Appliers for materials on onion-skin bodies are obsolete because... you could wear a 3rd party skin-tight mesh suit with a massive poly count and rendering cost, that uses materials, instead of simply applying materials to a layer on your body... ... Facts tend to disagree with YOUR opinion. Hmmm, so... Fact-free Pro-Bake-Fail Propaganda Materials-Hating SL-Fossil Hypocrisy much?

I MUST take back all I previously said about you, Klytyna. This post of yours actually sums it all up. Please take a look at the video I posted in the feedback thread to add my points to yours, since they pretty much go well together. I HONESTLY no longer think you're a 3D/materials illiterate. These fanboys deserve that title in your stead.
  14. Already proved you wrong in the other thread about this BS you're saying. The glossiness map is inside the normal map's alpha channel, and the parameter in the build tool is a multiplier to modulate it. PERIOD. BakeFail on MeshBodiesOnly is a crippled, useless feature that shouldn't have come to life in the first place. Materials layering is the answer to overall complexity reduction, from polycount to texture load. BoM is BS.
  15. A point you clearly aren't able to grasp. Done, but you keep quoting me. And we all know how LL operates. So BoM won't solve onion-skinned avatars' high complexity for "a time", which won't be short, if we're lucky enough to get them to make a follow-up project. Good luck with making appliers for all possible combinations of normal maps and specular maps you don't have access to because you don't own them (like the skin: how do you make an applier for all possible skins to be put under your texture garment?)
  16. Sure thing, you work on one frame to apply the editing to the whole sequence, no question about that. Can you export node setups for that in order to reuse them in another project? You would then just have to tweak it instead of re-doing all the node piping.
  17. Antialiasing is basically edge detection and blurring along the line, and the filter type determines how it gets blurred (plus how many pixels get involved around each sampled pixel). Indeed: that is the post-processing usually done for video editing, with nodes that operate like Photoshop filters. However, if you're willing/able to invest a few hundred bucks, VRay is a good choice and supports Blender. It takes a bit of learning but it's definitely a great option.
  18. When setting up the pose, make sure to keyframe the COG bone for both Location and Rotation. Select the COG, then (hovering your mouse in the 3D view) hit the "i" key on your keyboard and choose LocRot from the drop down menu you get.
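The same step can also be done from a script; a tiny bpy sketch, assuming the bone is literally named "COG" (adjust to your rig) and the armature is the active object in Pose Mode:

```python
import bpy

arm = bpy.context.object                    # the armature object
cog = arm.pose.bones["COG"]                 # assumed bone name, rename to match your rig
cog.keyframe_insert(data_path="location")
cog.keyframe_insert(data_path="rotation_quaternion")  # or "rotation_euler", matching the bone's rotation mode
```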
  19. Well, I knew you were working on it from the thread you mention in your post, but I didn't know how it turned out for you. There should be a feature request to the Blender devs for a baking improvement to cover this, then. @Spinell Chic's suggestion of manual post-processing is the best way to go at this point.
  20. Ehm no, Maya is another 3D software. You should look for the antialiasing settings and filter types available in Blender in order to get less pixelation.
  21. Alright, took me a few minutes but there you go: I exported both Male and Female Bento skeletons, compliant with BVH standards, in FBX format. They can be found in my GoogleDrive. The assumption is Y axis UP upon import. Please let me know IF these work for Poser. Avatar meshes are NOT included since, in reality, they're NOT bound to this specific skeleton. But at least you have a base to start from. EDIT: forgot to mention, it's a zipped folder in .7z format, so you'll need 7-Zip to open it.
  22. Actually, the skeleton scale implied for BVH export is quite different from the meter scale you get in Blender, and therefore animations will be quite a bit off after import. Moreover, if the BVH file exported from Poser also contains joint positions, the inworld avatar would end up becoming quite a bit bigger (and distorted) because the joint offsets are off from the default offsets. In general, the skeletal assumptions for the two departments of work (rigging and animation) suffer from BIG discrepancies in regard to world orientation, overall character scale and implied measurement units, plus the root joint's name is assumed to be different.
  23. It looks like unfiltered anti-aliasing. Most likely @Chic Aeon has encountered this issue too, as I remember from a past thread, but I don't know how she managed to fix it. In Maya this would be fixed by using a higher anti-aliasing value in combination with a better filter type (the most common is Gaussian, but the best in my opinion is Blackman-Harris). Good luck =)
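On the Blender side, Cycles does expose equivalent knobs for renders (whether they get honored at bake time is exactly the open question in this thread); assuming 2.78, the bpy paths would be roughly:

```python
import bpy

cy = bpy.context.scene.cycles
cy.pixel_filter_type = 'BLACKMAN_HARRIS'   # or 'GAUSSIAN' / 'BOX'
cy.filter_width      = 1.5                 # how far each sample spreads across neighboring pixels
```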
  24. The problem is that BoM should never have been brought into existence in the first place, and that LL should have done what can actually refresh their obsolete platform. BoM is just a quick sop that will never get follow-up projects to include materials and, moreover, doesn't benefit content in the broader sense; it works only on mesh avatars. EVEN IF it were to get a follow-up project, it would remain a crippled, backward feature that neglects the tools introduced over the years and really doesn't solve scene complexity. Onion-skin avatars are not the only problem; it's also, very importantly, texture load (for the reason I explained in one of the posts above), which MY POINT tends to solve for both avatars and scenery, polycount and texture load. Now stick your fingers in your ears and shout "lalalalala".
  25. Yeah, let us all down with a backward feature.