
Jake Koronikov


Everything posted by Jake Koronikov

  1. Drongle McMahon wrote: In case anyone is prepared to put up with the tapering/pinching effect, it is also possible to use the selected-to-active bake to transform the prebaked map. In this case, both meshes are identical except for their UV maps. This is actually a very interesting feature. I had never noticed that Blender is really able to convert the tangent basis between two UV layouts. As I understood your example, you loaded the original NM into a texture slot with image sampling set to "normal map", and then baked "selected to active" with bake mode "normals". This does the tangent basis conversion! Amazing : ) Very useful indeed. Baking normals from a hi-poly to a low-poly mesh is very clear, but I never thought Blender could do the bake from the NM texture channel too.
  2. D'oh! I just tested the same kind of situation... you are right, the normals break in the object-space normal map too. Actually, I should have thought a bit more: of course they go wrong, because the object space isn't the same anymore with the new object and new UV... I can't figure out any other solution than converting the very original normal map into a grayscale bump map, using the Photoshop normal map filter or the GIMP normal map tools. Then baking with the grayscale bump maps and converting them back to a normal map using GIMP or PS...
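The grayscale-to-normal-map conversion mentioned above can be sketched in a few lines. This is a minimal illustration of the same idea the Photoshop/GIMP normal map filters use (central differences over a height map, then encoding the normal into RGB); the function name and the `strength` parameter are my own, not any tool's actual API.

```python
import math

def height_to_normal(height, strength=1.0):
    """Convert a 2D grid of heights (0..1) into RGB tangent-space normals (0..255)."""
    h = len(height)
    w = len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # Central differences, clamped at the borders.
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * strength
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * strength
            # Tangent-space normal is (-dx, -dy, 1), normalized.
            length = math.sqrt(dx * dx + dy * dy + 1.0)
            n = (-dx / length, -dy / length, 1.0 / length)
            # Encode [-1, 1] -> [0, 255]; a flat area becomes the familiar (128, 128, 255) blue.
            row.append(tuple(int(round((c * 0.5 + 0.5) * 255)) for c in n))
        normals.append(row)
    return normals
```

A completely flat height map comes out as the uniform light-blue color that all tangent-space normal maps share, which is a quick sanity check for this kind of converter.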
  3. As far as I know, this cannot be done in Blender. However, you might try XNormal and its "Tools" menu. There is a tool called "Object/Tangent space converter". Maybe you can convert your very original wood-plank normal map into an object-space normal map (using a simple plane mesh fed into the XNormal tool) and then start working with these generated object-space normal maps inside Blender. Do all the bakings with object-space normal maps and finally convert them back into a tangent-space normal map using the XNormal converter tool. I did not make any practical test of this, but I think it might be a good starting point for your problem.
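The math behind an object/tangent space conversion like XNormal's tool is, per texel, just a rotation of the tangent-space normal by the surface's TBN (tangent, bitangent, normal) basis. A minimal sketch, with illustrative vectors and my own function name:

```python
def tangent_to_object(n_ts, tangent, bitangent, normal):
    """Rotate a tangent-space normal into object space via the TBN basis.

    n_ts is the decoded normal-map vector; the three basis vectors come
    from the mesh surface at that texel.
    """
    return tuple(
        n_ts[0] * tangent[i] + n_ts[1] * bitangent[i] + n_ts[2] * normal[i]
        for i in range(3)
    )
```

For a flat +Z-facing surface the conversion is the identity, and for any other face the "flat" tangent normal (0, 0, 1) maps onto the face's own object-space normal, which is why object-space maps look rainbow-colored.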
  4. I wanna say my words too, for Blender. In my opinion, Blender has the most logical 3D tool UI that I have ever seen. The way Blender is organized is just superior, even compared to Maya. Blender is also very usable with only one display. To use Maya efficiently, you need another display connected to your computer; the various pop-up and set-up windows all around the workspace just do not work with only one monitor. And Blender has the fastest polygonal modelling interface there is. Well, Maya is more sophisticated regarding teamwork, rendering, complex production pipelines and integration with other software. Go with Blender, there is no need to confuse one's head with several tools. Blender does it all.
  5. How about taking the "Image Texture" color output and creating a "Math, Multiply" node? Then connect the multiply output value into the "Material Output" displacement slot, and play with the multiply node to get the correct effect. Dunno... this was just a quick theoretical idea : D
  6. Nobody took the challenge? :matte-motes-silly: I made a very simple blend file. It contains a shorts-kind-of-a-model extracted from the standard male avatar. You will find the blend file here: http://s000.tinyupload.com/?file_id=29492499795713467961 The sample pants already have weight groups. They were generated from Avastar using "Adjust Weight Groups", "Copy from Avastar" and the "Fitted Mesh" settings on. This is a very good starting point for fitted mesh rigging, in my opinion. The blend file does not include the SL avatar, but you can add (if you wish) the Avastar tool at the 0,0,0 position, and the pants will be in the correct position. Set the Avastar to a male body. After that, just bind the pants to the Avastar with the keep-weight-groups option on. After that: rigging and testing in Avastar with "Attach Sliders", or in-world. ( *hits his head on the desk several times* ) What do I want to show with this example? There is no way to make fitted mesh rigging for this kind of pants with only one size. The solution is to make at least 4 separate sizes, depending on the avatar's butt and body fat values. That's it. *desperately waiting for a new Second Life platform to come*
  7. Aww, thanks Gaia. I can make a sample blend file later this week, but as an example, see the photo below. Take the default male avatar lower body as a starting point, and cut the mesh at about the same level as the black GreasePencil stroke in the photo above (near the waist line). Now we have very simplified pants, or jeans, or whatever. [ETA: Sorry about the silly and quickly made photo, the GreasePencil stroke is the thick black line, not the thin one] Now, how to rig these male pants so that in-world: * the Butt Size, Body Fat, Leg Muscles, Body Thickness and Saddle Bags sliders can have some reasonable values between about 10..80 (we don't even need to go up to 100 for this example), and almost any combination of those values. As we know, male bodies can have quite many combinations in-world. We can forget the Package slider in this case; there is no solution for that slider anyway. If we play with the above-mentioned sliders in-world, there are far too many combinations where the waistline just does not work. It goes inside the body, or gets badly deformed, or over-deforms at extreme settings of those sliders. The main problem here is of course that the mesh edge is very clearly visible. If this were a female dress, the problems could be solved with alphas.
  8. From a creator's point of view (and eventually a customer's point of view), the fitted mesh concept just does not work. That is the only correct answer. No matter what kind of tools there are, the idea does not work. I do not mean to criticize Avastar or anyone making those tools, you have done great work. We see those problems in Medhue's tutorial video starting from about 35:00. Medhue starts to play with the problematic areas and finally more or less gives up in the video, and I do understand that. The female body ends up somewhat in the state that is needed, but with the male avatar body, the area starting from the upper leg and ending around the belly is just impossible to rig so that it looks even fairly OK with all possible shape slider combinations. Well, at least when we talk about attachments created for the male body. Still, there is hope. Some day, when we see SL 2.0, all these problems will be solved. One interesting page to visit is http://www.lindenlab.com/careers With a short peek you can tell that they really are developing something that has never been seen before in the scene of "Massively Multiplayer Online Games".
  9. arton Rotaru wrote: Splitting UVs on hard edges where possible is good advice. I think splitting UVs on hard edges is needed only when a normal map will be included. With only a diffuse texture, the splitting is not needed.
  10. *makes a confession* my obsession is specular maps. I could spend half an hour sitting in a public toilet trying to figure out how various materials should be specular mapped :smileyembarrassed:
  11. Jenni Darkwatch wrote: erm. SL only has texture(diffuse)+normal map+specular map. Sure. I was referring to the diffuse texture baking process. To preserve or generate a diffuse texture from an HP model to an LP model, you can (if you wish, not necessary tho) take advantage of various mapping methods: cavity maps, occlusion maps, dirt maps, displacement maps and so on : )
  12. With organic mesh: no matter what sculpting method you use, always do a retopology pass on the mesh. Retopo the high-density mesh into a low-density mesh that has a quad structure and even edge loops. After retopo, LOD modelling is a very easy and straightforward process: it is done by just deleting a suitable number of edge loops. To get all the high-density mesh details into the low-density one, use normal maps, ambient occlusion maps, cavity maps, dirt maps, or whatever mapping technique preserves the details. The correct way to make game assets is to let your creativity fly while sculpting, but after that go into engineering mode: retopo, design the LODs and optimize texture usage. That is all you need to do.
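The "delete edge loops per LOD" workflow above amounts to shrinking a triangle budget by a roughly constant factor at each LOD step. A tiny sketch of such a budget, assuming (my assumption, not SL's documented behavior) a reduction factor of about 1/4 per step, similar to what the SL uploader's auto-LOD tends toward:

```python
def lod_tri_counts(high_tris, levels=4, factor=0.25):
    """Return an illustrative triangle budget for each LOD level.

    factor=0.25 (quarter the triangles per step) is an assumed example
    value, not a fixed rule; tune it per asset.
    """
    return [max(2, int(high_tris * factor ** i)) for i in range(levels)]
```

So an 8,000-triangle high LOD would target roughly 2,000 / 500 / 125 triangles for the medium, low and lowest LODs under this assumption.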
  13. Drongle McMahon wrote: Normal and spec maps have their own LOD known as mipmaps that would hide that detail. I don't think that can be generally true. The switches are not coordinated, and simply interpolating the higher res normal map doesn't necessarily make the right kind of adjustment. Also, it depends on the nature of the LOD meshes..... I would agree with Drongle on this, simply because the NM is baked against the high-LOD geometry. Of course, the low-LOD geometry is different; the render will not be the same, as you stated above, and will have a small artefact. The artefact might be fixed if we had two NMs and the low-LOD model used a hidden material/face/UV that renders a special low-LOD NM (utilizing the same trick as described earlier in this thread regarding the diffuse texture). But I think using this trick would introduce some new artefacts when the LOD changes on screen: the hidden NM would probably take some time to load and cause its own kind of temporary artefacts.
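The point that mip switches and mesh LOD switches are not coordinated is easy to see from how a mipmap chain is built: the GPU picks a mip level from on-screen texel density, independently of which LOD mesh is displayed. A minimal sketch of the chain itself (standard halve-until-1x1 behavior; the function name is mine):

```python
def mip_chain(width, height):
    """Return the resolutions of a full mipmap chain, e.g. 1024x1024 -> ... -> 1x1."""
    levels = [(width, height)]
    while width > 1 or height > 1:
        width = max(1, width // 2)   # each level halves, clamped at 1
        height = max(1, height // 2)
        levels.append((width, height))
    return levels
```

A 1024x1024 normal map has 11 mip levels, and which one is sampled depends only on how big the surface is on screen, not on the mesh LOD, which is why the two "LOD" systems can disagree.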
  14. leliel Mirihi wrote: Jake Koronikov wrote: The SL shader system does not have any special solution for these kinds of objects. Some other game platforms have special shaders to render those. At least those shaders can be coded. Those games use alpha masking instead of alpha blending, SL now has alpha masking as well. Give it a few more years and most well made content will use it making over draw much less of a problem. Both alpha masking and alpha blending have to be maintained in a future SL2; they are two quite different methods. The problem is that when residents (=content creators) have the freedom... they usually use alpha blending. Alpha blending is just like the old-school alpha texture that we see all over SL: the alpha (amount of see-through) is coded in the alpha channel as 0-255. High-FPS games also use alpha blending, but they have the possibility to code their own custom shaders so that the expensive light calculations can be reduced on alpha objects. This helps increase FPS. They give cheaper shaders to alpha-intensive objects. They can even define how deep the alpha blending will go (z-rejection), meaning the maximum number of layered alpha textures the shader will ever calculate. But if the SL resident chooses the shaders... I am afraid they will choose the most beautiful and most GPU-expensive ones.
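The per-pixel difference between the two methods discussed above can be shown in two tiny functions. This is a single-channel sketch; the 0.5 cutoff is an illustrative value (SL's alpha masking cutoff is settable per face, not fixed):

```python
def alpha_blend(src, dst, alpha):
    """Classic 'over' blend: every layered pixel is a read-modify-write,
    so cost stacks up with each overlapping transparent surface."""
    return src * alpha + dst * (1.0 - alpha)

def alpha_mask(src, dst, alpha, cutoff=0.5):
    """Masking: the pixel is either fully kept or fully discarded, so the
    GPU can depth-test it like opaque geometry (no layering cost)."""
    return src if alpha >= cutoff else dst
```

With blending, a 25%-opaque white pixel over black produces a 25% gray; with masking, the same pixel is simply discarded, which is exactly why masking is so much cheaper for foliage-style textures.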
  15. IvanBenjammin wrote: .... Once a texture is cached, it isn't going to be much of a performance drag. Its a quick hop from your cache to VRAM, but the download TO your cache is the bottleneck for textures. For mesh, its a little different. It too gets cached and loaded into RAM, but the GPU is having to calculate its vertex positions relative to camera position for every frame it renders. On the face of it, it might sound like mesh is incredibly render intensive if its doing that 30+ times a second, but modern GPUs ('modern' in this instance being anything less than 10 years old) can crunch those numbers blazingly fast. Even onboard graphics setup is very quick at this process, its just using the system resources to do it rather than its own. ......... Very good explanation. But there is one thing that nobody seems worried about, and which I would say is the most significant single cause of FPS drop in SL: alpha overdraw. Even old GPUs can handle an amazing number of triangles and still keep the FPS at about 50 or above. But even the latest GPUs cannot handle overlaid alpha textures well. That is called alpha overdraw. To put it simply, when you render a dozen alpha-blended textures over each other, the GPU has to do "Photoshop-level" graphics work with every draw call... In my experience, most of the laggy sims in SL are simply filled with trees and grass that have dozens if not hundreds of alpha textures showing through each other, at different angles and with different semi-transparent textures. The SL shader system does not have any special solution for these kinds of objects. Some other game platforms have special shaders to render them; at least such shaders can be coded. This issue is gonna be one big pain in the butt when they develop the mobile and tablet (plus all sorts of pods and tubes) version of SL2, because those poor devices can never, ever handle the alpha inferno that content creators will create...
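A back-of-the-envelope sketch of why overdraw hurts: with opaque geometry drawn front to back, the early depth test rejects hidden fragments, so each covered pixel is shaded roughly once; with alpha blending, every overlapping layer must be shaded. The numbers and function are purely illustrative:

```python
def fragments_shaded(covered_pixels, layers, blended):
    """Rough fragment count for a stack of overlapping surfaces.

    Opaque (blended=False): early-z rejects hidden layers -> ~1 shade per pixel.
    Alpha-blended (blended=True): all layers shade -> cost scales with depth.
    """
    return covered_pixels * (layers if blended else 1)
```

So a screen region covered by 20 overlapping alpha-blended tree textures shades about 20x the fragments that the same region would as opaque or alpha-masked geometry, which matches the "alpha inferno" described above.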
  16. Drongle McMahon wrote: There can be layers for small detailed objects that will disappear faster. For large buildings LOD handling can be such that they are visible from a distance. In SL we do not have this kind of functionality... I'm not sure I understand what you mean here. If you design your LOD meshes appropriately, you have quite a lot of control over when details, or whole models, disappear. You can also add another level of control with invisible geometry and/or joining/splitting objects. Of course, if you stick to the automatically generated LODs, that is pretty hopeless. I did once do a jira asking for an object-by-object settable LOD distance modifier factor - so I must have agreed with you a bit, but the existing system is pretty flexible. My apologies, I didn't express myself very clearly above. What I meant is this: http://docs.unity3d.com/Manual/class-LODGroup.html The properties section in the above link explains the idea I was looking for. I was not criticizing the SL implementation, I was more considering the possibilities on other platforms. And I agree with you that an object-by-object settable LOD would be a great idea.
  17. I am not answering any specific question here, just some considerations:
  - In games generally, the average triangle count in a scene is anything between 50,000 and 1,000,000, depending on FPS requirements. In an action game the scene never goes as high as 1 million; in a slow-paced strategy game it may.
  - Avatars are usually quite dense. It is not rare for an avatar to have 20K tris, or more. But usually in games there will not be dozens of avatars in a scene at once. In SL there are...
  - In games, there are possibilities to handle geometry more efficiently. In SL, all the objects are loaded over the network at sim setup. In games, scenes can be optimized so that only what the camera is facing is loaded; behind your back, low-poly substitutes can be used for physics and so on.
  - In general, one big mesh object is less expensive than several smaller objects linked together, from the GPU's point of view.
  - In game engines, LOD handling can differ per object set. There can be layers for small detailed objects that disappear faster, while large buildings stay visible from a distance. In SL we do not have this kind of functionality...
  - To my knowledge, SL uses a method called frustum culling, so even objects behind large objects consume GPU. In games, there is the possibility to use occlusion culling, which is less GPU-expensive: objects behind large objects are not drawn at all.
  I believe LL has faced a lot of problems during the development of SL: how to combine a reasonable user experience and still keep things high-FPS. SL is not an action game, it is a very slow game where people mostly just stand still, zoom and admire amazingly detailed builds.
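The per-object LOD control discussed above (Unity's LODGroup style) boils down to picking a mesh variant from the camera distance or screen coverage. A minimal distance-based sketch; the threshold values are made-up examples, not SL's actual switch distances (in SL those depend on object size and the viewer's RenderVolumeLODFactor):

```python
def select_lod(distance, thresholds=(10.0, 40.0, 120.0)):
    """Return a LOD index for a camera distance: 0 = high detail,
    len(thresholds) = lowest detail. Threshold values are illustrative."""
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)  # beyond the last threshold: lowest LOD
```

With these example thresholds, an object 5 m away renders at LOD 0, at 50 m it drops to LOD 2, and past 120 m it sits at the lowest LOD; a per-object multiplier on the thresholds would give exactly the jira-requested control mentioned above.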
  18. Hi, I know exactly how you feel. Male body fitted mesh rigging is not a straightforward process. There are a couple of bad issues I have run into. First: the butt area cannot be rigged perfectly with only one size. To make the garment fit every possible body shape, you just have to import at least 3 or 4 additional sizes. This is especially true when the mesh edge is around the pelvis area (for example: trousers, jeans, shorts, or quite long shirts that fall over your butt area). Second: the arm/shoulder area volume bones do not work quite as expected. To get around that issue, you have to weight the arms/shoulders partly to volume bones and partly to the traditional bones. There is also some fuzziness with the HANDLE, LOWER_BACK and UPPER_BACK bones. Depending on your garment's behaviour, you have to rig it with different methods: tight-fitting t-shirts, for example, require different rigging than a loose-fit leather jacket. I have never found any document or tutorial about male body rigging. I have made a sort of "template mesh" for myself, which I can use for transferring weights to my meshes. If you wish, I could share some weight paint photos in this thread.
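When splitting a vertex's weights between volume bones and traditional bones as described above, the influences still have to sum to 1.0 per vertex or the mesh deforms wrongly. Here is a minimal sketch of the normalization step that rigging tools perform under the hood; the bone names in the usage are illustrative, and the 4-influence limit is a common rigged-mesh constraint:

```python
def normalize_weights(weights, limit=4):
    """Keep the `limit` strongest bone influences and rescale them to sum to 1.0.

    `weights` maps bone name -> raw weight; returns a normalized dict.
    """
    strongest = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)[:limit]
    total = sum(w for _, w in strongest)
    if total == 0:
        return {}
    return {bone: w / total for bone, w in strongest}
```

For example, raw weights of 2.0 on PELVIS and 1.0 each on two leg volume bones normalize to 0.5 / 0.25 / 0.25, and a vertex painted with five influences gets trimmed to its four strongest before rescaling.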
  19. Whoops, I didn't see the diffuse map image in the original post. Yes, definitely, the UV layout will not work for a 1024 image. You can forget my advice about the sampling method at this point. First, the UV layout redesign.
  20. I am not familiar with Max, but I have had texture bake issues (very bad issues, actually) after starting to use Maya 2015. The previous version baked very well. It took a lot of time to find the reason, but it turned out to be the Mental Ray "Unified Sampling" option. Unified Sampling is the default in Mental Ray in my case, but it just did not work: results were very blocky and there were a lot of black spots all around the texture, no matter how high the quality settings were. After setting the sampling mode to Legacy, everything worked fine. I see this Unified Sampling in your Mental Ray settings. How about trying some other sampling method? My apologies, I am not familiar with Max, but I think Mental Ray is about the same in both Maya and Max. MR quality settings:
  21. That sounds interesting, and the cause might be in your graphics settings. Does your graphics card have any advanced settings? The z-buffer thingie is usually implemented in the graphics card and/or the graphics driver software or firmware. I know some graphics cards do have options to manipulate z-buffer size and handling, but most do not...
  22. There is no simple solution; the phenomenon is called z-fighting. Two polygons fight for their position in a place called the z-buffer, and the larger the scene, the worse the z-fighting gets. And as we know, SL scenes are large... Usually a procedure called the z-test returns an almost random polygon as the topmost. One solution in your card stack might be to add some more geometry: maybe twist the cards using a couple of extra edge loops. (You know, old cards are kinda curved after they have been tortured by players.) That would add natural distance between the polygons...
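The scene-size effect above can be made concrete: a 24-bit depth buffer divides the near-far range into a finite number of buckets, and the world-space size of one bucket grows with the square of the distance. A small sketch using the standard perspective depth mapping; the near/far values are illustrative (far = a plausible SL draw distance), not the viewer's actual settings:

```python
def window_depth(z, near=0.1, far=512.0):
    """Standard perspective window depth in [0, 1]: 0 at the near plane, 1 at far."""
    return (far * (z - near)) / (z * (far - near))

def depth_resolution(z, near=0.1, far=512.0, bits=24):
    """Approximate world-space size of one depth-buffer bucket at distance z.

    Since d(z) = k * (1 - near/z) with k = far/(far-near), the derivative is
    d'(z) = k*near/z**2, so one quantization step covers (1/2**bits) / d'(z).
    """
    k = far / (far - near)
    return (1.0 / (2 ** bits)) / (k * near / (z * z))
```

With these example values, one depth bucket at 100 m is several millimetres wide, so two flat cards a millimetre apart can land in the same bucket and flicker; giving them real geometric separation (the curved-card trick) moves them into different buckets.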
  23. The base avatars can be downloaded here: http://lecs.opensource.secondlife.com/fittedmesh/SecondLifeAvatarSkeleton.zip (OBJ, DAE, Maya and Blender formats are included for both male and female.) To be honest... ZBrush is not a suitable tool for rigged mesh creation. You definitely need something else; Blender is a great free tool for that. You can of course design the garment in ZBrush, but the final rigging has to be done somewhere else. I would start by learning how to use Blender, and after you have learned the basics of rigging, go back to ZBrush with an exported base mesh. Sculpt the garment and bring it back to Blender for the final rigging. (Don't forget, creations have to be low-poly.)
  24. I definitely agree that ZRemesher is quite an amazing tool. However, as Ashasekayi Ra stated, other tools are needed for fine-tuning and adjusting edge loops. One imaginary situation might be an avatar with quite generic forms but, for example, small cavities between the toes or fingers or behind the ears. Or the long, thin tentacles of some creature. With ZRemesher it tends to be time-consuming to fine-tune dense mesh areas where detailed geometry is needed; the result is usually either too blocky or too dense in those special areas. Using ZRemesher's density paint feature is not very exact... it either generates too many polygons or does not use them at all... For those kinds of situations the two tools in the OP could be very useful: general retopo using ZRemesher, and fine-tuning complex geometry with the easy-workflow Blender tools.