
Jake Koronikov

Resident
  • Posts: 98
  • Joined
  • Last visited

Reputation: 11 Good


  1. Drongle McMahon wrote: In case anyone is prepared to put up with the tapering/pinching effect, it is also possible to use the selected-to-active bake to transform the prebaked map. In this case, both meshes are identical except for their UV maps. This is actually a very interesting feature. I had never noticed that Blender is really able to convert the tangent basis between two UV layouts. As I understood your example, you loaded the original NM into a texture slot with image sampling set to "Normal Map", and after that baked "Selected to Active" using bake mode "Normals". This does the tangent-basis conversion! Amazing : ) Very useful indeed. Baking normals from hi-poly to low-poly is very clear, but I never thought Blender could do the bake from the NM texture channel too. A rough script version of this setup is sketched below.
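A rough bpy version of that workflow (Blender 2.7x, Blender Internal baking; the object names, image path and resolution here are placeholders, not from the original post):

```python
# Sketch: re-bake an existing tangent-space NM onto a new UV layout
# using "Selected to Active" (Blender 2.7x, Blender Internal).
import bpy

src = bpy.data.objects["source"]   # mesh with the prebaked normal map
dst = bpy.data.objects["target"]   # identical mesh, different UV map

# Feed the existing NM into a texture slot, image sampling "Normal Map"
img = bpy.data.images.load("//prebaked_nm.png")
tex = bpy.data.textures.new("prebaked_nm", type='IMAGE')
tex.image = img
tex.use_normal_map = True                      # "Normal Map" sampling
slot = src.material_slots[0].material.texture_slots.add()
slot.texture = tex
slot.use_map_color_diffuse = False
slot.use_map_normal = True

# Give the target's UV faces a blank image to receive the bake
baked = bpy.data.images.new("rebaked_nm", 1024, 1024)
for face in dst.data.uv_textures.active.data:
    face.image = baked

# Bake mode "Normals", tangent space, "Selected to Active"
scene = bpy.context.scene
scene.render.bake_type = 'NORMALS'
scene.render.bake_normal_space = 'TANGENT'
scene.render.use_bake_selected_to_active = True
src.select = True
dst.select = True
scene.objects.active = dst
bpy.ops.object.bake_image()
```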
  2. D'oh! I just tested the same kind of situation... you are right, the normals break in the object-space normal map too. Actually, I should have thought a bit more: of course they go wrong, because the object space isn't the same anymore with the new object and new UV... I can't figure out any other solution than converting the very original normal map into a grayscale bump map, using the Photoshop normal map filter or the GIMP normal map tools, then baking with the grayscale bump maps and converting them back to a normal map using GIMP or PS... (a sketch of what that conversion does is below)
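For what it's worth, this is roughly what such a "normal map to grayscale bump" conversion has to do under the hood: turn the normals back into slopes and integrate them into a height field. A minimal numpy sketch using Frankot-Chellappa frequency-domain integration (my own illustration, not from the original post; channel conventions and normalization are assumptions, and the result is only defined up to scale):

```python
# Sketch: recover a grayscale height (bump) map from a tangent-space
# normal map via frequency-domain integration of the implied slopes.
import numpy as np

def normals_to_height(nm):
    # nm: float array (H, W, 3), channels remapped from 0..255 to -1..1,
    # with Z (blue) pointing out of the surface.
    nx, ny, nz = nm[..., 0], nm[..., 1], nm[..., 2]
    nz = np.clip(nz, 1e-3, None)            # guard against division by 0
    gx, gy = -nx / nz, -ny / nz             # slopes implied by the normals
    h, w = gx.shape
    u = np.fft.fftfreq(w)[None, :]          # frequency grids
    v = np.fft.fftfreq(h)[:, None]
    denom = u ** 2 + v ** 2
    denom[0, 0] = 1.0                       # keep the DC term harmless
    Hf = (-1j * u * np.fft.fft2(gx) - 1j * v * np.fft.fft2(gy)) / denom
    Hf[0, 0] = 0.0                          # height is defined up to a constant
    height = np.real(np.fft.ifft2(Hf))
    # normalize to a 0..1 grayscale bump map
    return (height - height.min()) / (height.ptp() + 1e-9)
```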
  3. As far as I know, this cannot be done in Blender. However, you might try XNormal and its "Tools" menu. There is a tool called "Object/Tangent space converter". Maybe you can convert your very original wood plank normal map into an object-space normal map (feeding a simple plane mesh into the XNormal tool) and then start to work with these generated object-space normal maps inside Blender. Do all the bakings with object-space normal maps, and finally convert them back into a tangent-space normal map using the XNormal converter tool. I haven't made any practical test of this, but I think it might be a good starting point for your problem.
  4. I want to put in a word for Blender, too. In my opinion, Blender has the most logical 3D tool UI that I have ever seen. The way Blender is organized is just superior, compared even to Maya. Blender is also very usable with only one display. To use Maya efficiently, you need a second display connected to your computer; the various pop-up and set-up windows all around the workspace just do not work with only one monitor. And Blender has the fastest polygonal modelling interface there is. Granted, Maya is more sophisticated regarding teamwork, rendering, complex production pipelines and integration with other software. Go with Blender; there is no need to confuse one's head with several tools. Blender does it all.
  5. How about taking the "Image Texture" color output and creating a "Math: Multiply" node, then connecting the multiply output value into the "Material Output" displacement slot, and playing with the multiply node until you get the correct effect? Dunno... this was just a quick theoretical idea : D (see the node-script sketch below)
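The same wiring as a quick bpy script, for anyone who prefers building nodes by code (Cycles; the material name and multiply factor are placeholders, not from the original post):

```python
# Sketch: Image Texture -> Math(Multiply) -> Material Output displacement
import bpy

mat = bpy.data.materials["MyMaterial"]       # placeholder material
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

tex = nodes.new('ShaderNodeTexImage')        # "Image Texture" node
mult = nodes.new('ShaderNodeMath')           # "Math" node
mult.operation = 'MULTIPLY'
mult.inputs[1].default_value = 0.1           # strength - play with this
out = nodes.get('Material Output')           # default output node

links.new(tex.outputs['Color'], mult.inputs[0])
links.new(mult.outputs['Value'], out.inputs['Displacement'])
```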
  6. Nobody took the challenge? :matte-motes-silly: I made a very simple blend file. It contains a shorts-kind-of-a-model extracted from a standard male avatar. You will find the blend file here: http://s000.tinyupload.com/?file_id=29492499795713467961 The sample pants already have weight groups. They were generated from Avastar using "Adjust Weight Groups", "Copy from Avastar" and the "Fitted Mesh" settings on. This is a very good starting point for fitted mesh rigging, in my opinion. The blend file does not include the SL avatar, but you can add (if you wish) the Avastar tool at the 0,0,0 position - the pants are located in the correct position. Set the Avastar to a male body. After that, just bind the pants to the Avastar with the keep-weight-groups option on. After that: rigging and testing in Avastar with "Attach Sliders", or in-world. ( *hits his head into the desk several times* ) What do I want to show with this example? There is no way to make fitted mesh rigging for this kind of pants with only one size. The solution is to make at least 4 separate sizes, depending on the avatar's butt and body fat values. That's it. *desperately waiting for the new Second Life platform to come*
  7. Aww, thanks Gaia. I can make a sample blend file later this week. But as an example, a photo below. Take the default avatar lower male body as a starting point, and cut the mesh at about the same level as the black GreasePencil stroke in the photo (near the waist line). Now we have very simplified pants, or jeans, or whatever. [ETA: Sorry about the silly and quickly made photo; the GreasePencil stroke is the thick black line, not the thin one.] Now, how do you rig these male pants so that in-world the Butt Size, Body Fat, Leg Muscles, Body Thickness and Saddle Bags sliders can have any reasonable values between about 10..80 (we don't even need to go up to 100 for this example), and almost any combination of those values? As we know, male bodies can have quite many combinations in-world. We can forget the Package slider in this case; there is no solution for that slider anyway. If we play with the above-mentioned sliders in-world, there are far too many combinations where the waistline just does not work. It goes inside the body, or gets badly deformed, or over-deforms at extreme settings of those sliders. The main problem here is of course that the mesh edge is very well visible. If this were a female dress, the problems could be solved with alphas.
  8. From a creator's point of view (and eventually a customer's point of view), the fitted mesh concept just does not work. That is the only correct answer. No matter what kind of tools there are, the idea does not work. I do not mean to criticize Avastar or anyone making those tools; you have done great work. We see those problems in Medhue's tutorial video starting from about 35:00. Medhue starts to play with the problematic areas and finally more or less gives up in the video, and I do understand that. The female body ends up somewhat in the state that is needed, but with the male avatar body, the area starting from the upper leg and ending around the belly bone is just impossible to rig so that it looks even fairly OK with all possible shape slider combinations. Well, at least when we talk about attachments created for the male body. Still, there is hope. Some day, when we see SL 2.0, all these problems will be solved. One interesting page to visit is http://www.lindenlab.com/careers With a short peek you can tell that they really are developing something that has never been seen before in the scene of "Massively Multiplayer Online Games".
  9. arton Rotaru wrote: Splitting UVs on hard edges where possible is good advice. I think splitting UVs on hard edges is needed only when a normal map will be included; the baked tangent space is discontinuous across a hard edge anyway, so a UV seam there costs nothing and hides the break. With only a diffuse texture the splitting is not needed.
  10. *makes a confession* my obsession is specular maps. I could spend half an hour sitting in a public toilet trying to figure out how various materials should be specular mapped :smileyembarrassed:
  11. Jenni Darkwatch wrote: erm. SL only has texture(diffuse)+normal map+specular map. Sure. I was referring to the diffuse texture baking process. To preserve or generate a diffuse texture from a HP model for a LP model, you can (if you wish, it's not necessary though) take advantage of various mapping methods: cavity maps, occlusion maps, dirt maps, displacement maps and so on : )
  12. With organic mesh: no matter what sculpting method you use, always run the mesh through a retopology process. Retopo the high-density mesh into a low-density mesh that has a quad structure and even edge loops. After retopo, LOD modelling is a very easy and straightforward process - it is done by just deleting a suitable number of edge loops (a scripted version is sketched below). To get all the high-density mesh details into the low-density one, use normal maps, ambient occlusion maps, cavity maps, dirt maps, or whatever mapping technique preserves the details. The correct way to make game assets is to let your creativity fly with sculpting, but after that go engineering: retopo, design the LODs and optimize the texture usage. That is all you need to do.
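One way to script the "delete edge loops" step is the Decimate modifier in Un-Subdivide mode, which strips loops from a clean quad retopo. A hedged bpy sketch (Blender 2.7x API; the object and LOD names are placeholders, not from the original post):

```python
# Sketch: derive LOD meshes from a quad retopo by removing edge loops
# with the Decimate modifier's Un-Subdivide mode.
import bpy

def make_lod(obj, iterations, name):
    """Duplicate obj and un-subdivide the copy into a LOD mesh."""
    lod = obj.copy()
    lod.data = obj.data.copy()
    lod.name = name
    bpy.context.scene.objects.link(lod)
    mod = lod.modifiers.new("lod_decimate", type='DECIMATE')
    mod.decimate_type = 'UNSUBDIV'
    mod.iterations = iterations   # 2 iterations ~ one subdivision level
    return lod

base = bpy.data.objects["retopo_mesh"]   # the retopologized quad mesh
make_lod(base, 2, "LOD_medium")          # one subdivision level removed
make_lod(base, 4, "LOD_low")             # two levels removed
```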
  13. Drongle McMahon wrote: Normal and spec maps have their own LOD known as mipmaps that would hide that detail. I don't think that can be generally true. The switches are not coordinated, and simply interpolating the higher res normal map doesn't necessarily make the right kind of adjustment. Also, it depends on the nature of the LOD meshes... I would agree with Drongle on this, simply because the NM is baked against the high-LOD geometry. Of course, the low-LOD geometry is different; the render will not be the same, as stated above, and will have a small artefact. The artefact might be fixed if we had two NMs and the low-LOD model used a hidden material/face/UV that renders a special low-LOD NM (utilizing the same trick as described earlier in this thread regarding the diffuse texture). But I think using this trick would introduce some new artefacts when the LOD changes on screen: the hidden NM would probably take some time to load and cause its own kind of temporary artefacts.
  14. leliel Mirihi wrote: Jake Koronikov wrote: The SL shader system does not have any special solution for these kind of objects. Some other game platforms have special shaders to render those. At least those shaders can be coded. Those games use alpha masking instead of alpha blending, SL now has alpha masking as well. Give it a few more years and most well made content will use it, making overdraw much less of a problem. Both alpha masking and alpha blending have to be maintained in the future SL2; they are two quite different methods. The problem is that when residents (= content creators) have the freedom... they usually use alpha blending. Alpha blending is just like the old-school alpha texture that we see all over SL: the amount of see-through is coded in the alpha channel as 0-255. High-FPS games also use alpha blending, but they have the possibility to write their own custom shaders so that the expensive light calculations can be reduced on alpha objects. This helps increase FPS. They give lower-quality shaders to alpha-intensive objects. They can even define how deep the alpha blending will go (z-rejection), meaning the maximum number of layered alpha textures the shader will ever calculate. But if the SL resident chooses the shaders... I am afraid they will choose the most beautiful and most GPU-expensive ones.