
Jake Koronikov


Posts posted by Jake Koronikov

  1. Below is another example of how to use tris. This is a fairly low-poly shirt-type model. With both sides visible, the model has 792 faces (1580 tris).

    There are two small triangles on the shoulder area (only the front tri is visible in the photo below; the back tri is at the same location behind the body). The purpose of those little tris is to subdivide only the very shoulder-area quads. This helps to reduce blockiness in the final user experience. We do not need to subdivide the whole edge loop around the body, only the short distance between shoulder front and shoulder back.

    The illustration below clarifies this.

    quads-tris.png

    Other advantages of quads in this case:

    * Very easy to UV-unwrap

    * Very easy weight painting process due to natural edge flow (almost like human muscles)

    * Very smooth final appearance inworld.

    This model could be made with far fewer faces, but to achieve good fitted-mesh results we need some extra edge loops on the belly, arms and so on.
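    The face/tri arithmetic above can be double-checked with a tiny sketch (the 788/4 split is inferred from the stated totals, since with both sides visible the two shoulder tris are duplicated on the inner side):

```python
def tri_count(quads: int, tris: int) -> int:
    """Each quad renders as 2 triangles; a triangle stays 1."""
    return quads * 2 + tris

# 792 faces total: 788 quads plus 4 small triangles
print(tri_count(788, 4))  # -> 1580 tris, matching the numbers above
```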


  2. I just wonder if the switched pictures on the wall on the right are on purpose, or by accident? :matte-motes-smile:

    Maybe it is to make the work derivative, to be sure not to violate intellectual property rights o.O (...that was a joke. Don't kill me :matte-motes-silly:)


  3. Jake Koronikov wrote:

    Jira is now filed:

     

    Regarding this old thread and the Jira above:

    Geenz Spad (from the Exodus viewer project) wrote an interesting comment on this Jira. I think it might be a good idea to share his knowledge here too; it explains what is happening inside SL (warning: the embedded link below contains mathematics, and may cause serious health issues for living organisms :matte-motes-silly:):

    "

    Geenz Spad added a comment - 01/Oct/14 12:13 PM

    What I suspect is happening here is how the viewer generates a special set of coordinates called tangents.

    Tangents are required for normal mapping to function properly. What you're seeing here looks to me like the tangents your normal map was baked with aren't lining up with what the viewer is generating. This isn't surprising, as most 3D modeling packages have their own, often times proprietary, way of calculating tangents on a surface. The method that the viewer uses (or at least, what the viewer's implementation is based off of) can be found here: http://www.terathon.com/code/tangent.html

    What this can result in is edge discontinuities like you're experiencing in SL. As for other applications like Toolbag, those applications actually import the tangents that were generated by the 3D package you exported from to ensure that normal maps all look correct when viewed within those applications.

    Second Life does not do this for the sake of file size, and instead generates them on the fly for meshes and most other geometry.

    You likely won't be seeing a fix for this any time soon, as it basically requires that the SL mesh format support tangents which it currently does not."
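    For the curious, the per-triangle tangent accumulation the linked Terathon page describes goes roughly like this (a simplified sketch, not the viewer's actual code; it skips the handedness and orthogonalization steps of the full method):

```python
import numpy as np

def compute_tangents(positions, uvs, triangles):
    """Accumulate per-vertex tangents from per-triangle UV gradients,
    then normalize -- the core idea of Lengyel's method."""
    tangents = np.zeros_like(positions, dtype=float)
    for i0, i1, i2 in triangles:
        e1 = positions[i1] - positions[i0]
        e2 = positions[i2] - positions[i0]
        du1, dv1 = uvs[i1] - uvs[i0]
        du2, dv2 = uvs[i2] - uvs[i0]
        r = du1 * dv2 - du2 * dv1
        if abs(r) < 1e-12:
            continue  # degenerate UVs: no well-defined tangent
        t = (e1 * dv2 - e2 * dv1) / r  # direction of increasing u
        for i in (i0, i1, i2):
            tangents[i] += t
    norms = np.linalg.norm(tangents, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    return tangents / norms
```

    Because the result depends on how triangles share vertices and UVs, two tools running even this same idea can disagree at UV seams, which is exactly the edge-discontinuity problem described above.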

     


  4. MilaaMiami wrote:

    i think too that 1600 is over for a simple torus..can you please give me some advises on how to reduce triangles .. i use maya as software ?

    Select every second edge loop in your geometry (double-click the first loop, then Shift-double-click the following ones).

    Press Ctrl+Delete to delete the selected edge loops.

     


  5. gustav2005 wrote:

    ....Simply do I just have to wait patiently till the SL weather changes looking up the sky and praying?


    Oh yes, my fellow designer. Sometimes I very quietly sneak to my laptop in the middle of the night or early morning. The Supreme Beings are asleep at those times, and wow, the rain suddenly stops for a while ;D Though sometimes I get a punishment for that sneaking: a double amount of upload problems the next week o.O


  6. After you make separate physics shapes for your multi-object mesh model:

    Remember to name the physics objects so that they are in the same alphabetical order as the actual objects in your multi-object upload :)

    That means they appear in the same order inside the physics .dae file as the corresponding objects in the mesh .dae file.

    Well, if the physics shapes are all simple boxes, then it does not matter. But if the shapes are custom for every part, then it matters.
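    If you want to sanity-check the ordering, the geometry names can be read straight out of both .dae files. A hypothetical checker (the file contents below are made up for illustration):

```python
import xml.etree.ElementTree as ET

COLLADA_NS = "{http://www.collada.org/2005/11/COLLADASchema}"

def geometry_names(dae_text):
    """Return <geometry> names in the order they appear in the file."""
    root = ET.fromstring(dae_text)
    return [g.get("name") for g in root.iter(COLLADA_NS + "geometry")]

# minimal made-up files, just to show the check
mesh_dae = """<COLLADA xmlns="http://www.collada.org/2005/11/COLLADASchema">
  <library_geometries>
    <geometry id="g1" name="Door"/>
    <geometry id="g2" name="Wall"/>
  </library_geometries>
</COLLADA>"""
physics_dae = mesh_dae  # physics file should list the same names, same order

assert geometry_names(mesh_dae) == geometry_names(physics_dae)
```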


  7. If you are starting (or learning) to create content for SL, then Blender is the better starting point. Not because of its features, but because of the large user community and support. Gaia's Avastar, for example, is a great tool for SL avatar and clothing work.

    Personally, I find Blender much easier to work with for polygonal models: hard-surface work or any models that are made using just vertices and such. That is my personal opinion; I find the Maya modelling interface a bit slower to work with.

    On the other hand, Maya has some far better features for animation, fluid and physics simulation. Maya is also superior when dealing with massive projects that use a lot of memory and CPU, or that need complex production pipelines and caching of processed data. Or when multi-processor capabilities are needed. But are those superior features needed in SL? Well, not necessarily.

    Where is Maya better? Rendering and texture bakes, I find. The Maya license includes the mental ray and Turtle renderers. Mental ray has a huge number of options for Final Gather, Global Illumination, photon simulation, advanced light and shadow rendering, transparency, translucency, anisotropic reflections, displacement mapping... a never-ending list. The final quality of renders is more realistic than in Blender.

    (One example of mental ray's power is a commercial video. Would it be too time-consuming to make that with Blender? Or even possible? I don't know...)

    But Maya is not free. It requires some enthusiasm to make such an investment in software.



  8. Coby Foden wrote:

    I have noticed the same thing with some rigged mesh clothes. It is if the light somehow "leaks" through or bends around the edge. However I have also noticed that by enabling shadows the light does not do this and everything looks ok.

    Interesting: turning shadows on does seem to remove the issue. You are right.

    This shading somehow originates from the rest pose. I noticed that in my case the pants are such a loose fit that in the rest pose the legs go "inside" each other and, umh, the connected faces turn inside out. I placed the cam inside the pants in the rest pose, as in the photo below. The rest-pose shading seems to stay in the A-pose too. Weird.

    pants5.JPG

    ETA: So, basically, I think I should re-model this thingie so that there are no inside-out faces in the rest pose. But the technical shading issue is still quite interesting.

  9. I have encountered the following issue, and I just can't figure out what is causing it. I have rigged mesh pants, no normal map, no specular map. When wearing the pants, the area between the legs (yeah..) gets shaded wrong.

    There is a photo below. Mid-day sunlight is coming from straight up, so the areas facing downward should be shaded dark, but they seem to get very bright. As bright as the faces pointing up towards the light source, the sun.

    pants1.JPG

    I am absolutely sure there are no double vertices, and the face normals are pointing correctly.

    Below is a photo from Blender. (There is some bad topology at this point, which will be fixed later. But that is not the issue here.)

    pants4.JPG

     

    The weird thing is that if I upload this object as a static mesh, the shading works normally and the downward-facing areas are darker, just as they should be.

    Any idea what I am doing wrong here?

  10. Comparing these two images:

    http://mindseye.scifi-art.com/august99/guests/mk84_4.jpg

    https://slm-assets2.secondlife.com/assets/8536376/view_large/mesh_type9_01_log.jpg?1381179015

    These two meshes are not the same. The texture and shape have similarities, but the mesh is different; I would even say definitely different meshes.

    The only relevant question here is the Star Trek trademark. And that question... is tricky. In principle this product violates the Star Trek trademark. But in reality (if we forget the principles..) big trademark owners do not pay attention to small-business violations. The above ship creator makes about 1.90 USD per sold ship, and in closed markets like SL the yearly numbers are quite small.

    The story would be different if the ship creator had a real-world toy store in a real-world city...

  11. Hey. I am not very familiar with MD, but I know it makes a very dense tri-mesh.


    You can definitely avoid spiral loops in ZRemesher, I am sure about it:

    Use several ZRemesherGuide curves, or use CreaseCurves and after that ZRemesherGuide -> Frame Mesh -> Creased Edges. That is easier than pure guide lines.

    CreaseCurves will always close the holes, but do not worry about it, because you can Ctrl-Shift-click the polygroups on the closed areas and then hit "DelHidden" in the Modify Topology palette. That is how you delete the closed areas' polygons. After that you will see your original mesh shape without the closings.

    ETA: You have to do this polygroup deletion before you hit the ZRemesher button. And do not put the ZRemesher "curve strength" up to 100, because that will cause ZRemesher to make weird triangles.

    I am sure the spiral edge flows will disappear. :)

    I could make you a short step-by-step video about the workflow, if you need.

  12. Oh yes, I forgot to mention the inworld material settings. They are all defined here: http://wiki.secondlife.com/wiki/Material_Data

    Specular "roughness", or how tight or sharp the specular is, can be controlled using the normal map's alpha channel. The whiter an alpha channel pixel is, the sharper the specular effect is at the corresponding spec map pixel. The base value for this is the one you give in the material settings' "Glossiness" parameter.

    However:
    I would not use the normal map alpha channel if it really is not needed. This is because of resources: for example, a 1024 x 1024 normal map without an alpha channel consumes 3 MB of graphics memory, while the same map with an alpha channel consumes 4 MB. Also, a normal map alpha channel forces the graphics engine to make an expensive alpha check on every draw. That is unnecessary if we really do not need to modulate the glossiness parameter across the surface.

    To save a TGA image without an alpha channel, a 24-bit image should be used. If saved with an alpha channel, a 32-bit image is used.

    Instead of normal map alpha, black in the specular map means the same thing as no specular at all.
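    The memory figures above are simple arithmetic (uncompressed, one byte per channel, and ignoring mipmaps, which add roughly one third on top):

```python
def texture_mb(width, height, channels):
    """Uncompressed texture size in MB at one byte per channel."""
    return width * height * channels / (1024 ** 2)

print(texture_mb(1024, 1024, 3))  # 3.0 MB -- 24-bit, no alpha
print(texture_mb(1024, 1024, 4))  # 4.0 MB -- 32-bit, with alpha
```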


  13. In my opinion (this is just an opinion, not a scientific fact), specular maps are somewhat different between hard-surface models and organic models.

    Hard-surface models could be something like houses, spaceships, robots, hard-surfaced furniture. Organic could be something like clothes and avatars.

    The normal map is a good starting point when dealing with hard-surface models. The video mentioned earlier is a very good tutorial for hard-surface models. The idea is to take the hard edges found using the normal map and Photoshop's "Find Edges" filter, and start to develop specular effects around those areas. By making those areas white, we mimic a very bright specular on hard corners. That is true for hard-surface models, but not necessarily true for soft and organic models.

    Let's look at a real-world example of a leather sofa (image below). We see that the specular effect is mostly visible where light falls on the surface and reflects back. On the other hand, the shadow areas have no specular at all.

    leather sofa and specular effects in real life.jpg

    When we model clothes in SL, the shadows/ambient occlusion are usually baked into the diffuse texture, just as in Fernanda's baked texture example above. To make a specular map (again, in my opinion), we should make the specular effect very strong in bright areas and almost black in shadow areas. This is quite a different approach from hard-surface work, where we play with strong edges rather than baked shadows.

    In Photoshop/GIMP you could take the black-and-white version of the baked diffuse texture and play with "Levels" or an s-curve to bring out the very bright areas. Also make the shadows almost black. That would be the base for the specular map. Another method is to use an ambient occlusion or cavity map to control the dark areas, by multiplying it over the spec map.

    Then just go artistic and try to think about what the specular effect looks like in the real world; it looks different on leather, silk, latex....

    Below are two examples: first, a diffuse texture of a non-human avatar skin; second, a specular map. The diffuse and a cavity map were used when tweaking the speculars. A lot of hand-painting was used too.

    (Note: these examples are under development, but this is just an example. Inworld, you would see a green skin and an alien-looking blue specular effect when moving the cam around the body. There are also black spots in the spec map to bring out some dark areas in the specular reflections, but not in the diffuse itself.)

    DIFFUSE_SAMPLE.png


    SPEC_SAMPLE.png

    (ETA: edited all sorts of details)
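    The Levels / s-curve trick described above can be sketched with NumPy (a hypothetical helper; the logistic curve and its strength are my own stand-ins for Photoshop's Curves dialog, not a fixed recipe):

```python
import numpy as np

def spec_from_diffuse(gray, strength=4.0):
    """Map a grayscale diffuse (0..1) to a specular base: push bright
    areas toward white and shadow areas toward black with an s-curve."""
    return 1.0 / (1.0 + np.exp(-strength * 4.0 * (gray - 0.5)))

gray = np.array([0.1, 0.5, 0.9])  # shadow, mid-tone, highlight
spec = spec_from_diffuse(gray)    # shadows drop toward 0, highlights rise toward 1
```

    An AO or cavity map can then be multiplied on top (`spec * ao`) to force the dark areas down further, as suggested above.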

  14. If I understood right, you want a mesh object that is rigged to the mElbow and mWrist bones, so that the mesh follows your hand and arm smoothly.

    mWrist does not deform the mesh as fitted mesh, but mElbow does. If you change the avatar's arm length, the mesh will stretch according to that slider in the mElbow area. You cannot avoid that.

    I would say the only way to avoid it is to use a non-rigged mesh (as Gaia suggested earlier) and attach it to your hand or something.

    Or: it might be possible to replace the mElbow bone with exported joints and an mSkull bone relocated into the arm... but I haven't tested it. mSkull should not stretch anything with a slider. Might it work? Does anyone know?

     

    (ETA: corrected a lot of typos)

     


  15. Monti Messmer wrote:

    Can the viewer determine copies of the same ?

    If yes then by name or .... because even copies have their own UUID.

    Would be interesting to get insights from the programmers. LL or maybe Firestorm.

    Monti

    Interesting question. Is there maybe a hidden ID telling the viewer that an object has already been downloaded from the server once? Maybe the Firestorm developers could answer that.

    Anyway, theoretically speaking, after all the assets have been downloaded from the SL servers to the viewer:

    The actual object is more the graphics card's problem at this point. For the graphics engine there is no difference whether the object is a duplicate in SL or not; it has to draw the polygons in the scene as separate objects anyway. So, thinking of this purely as graphics lag, there is no difference.

  16. If I understood right, you want a shiny material instead of a matte one.


    There are many ways to do it. The best way is to use an SL inworld normal map and specular map.

    But if you wish to try baking them into the diffuse texture, you could try starting (in Maya) with:

    - mental ray activated: select Window -> Settings -> Plug-in Manager and load Mayatomr.mll

    - in Hypershade, assign mia_material_x to the object

    - select some glossy preset from the mia_material_x presets

    - set up some lights in the scene


    Start playing with the various mia_material_x settings and build up your material. Remember to use mental ray when rendering tests and baking the final texture.

    Shiny materials definitely require some surface detail to pop: folds, bump maps, displacement maps or something. Or some fabric texture used to control the glossy or shiny channels.


    Anyway, quite a complex topic to handle in a short answer :)

     

  17. It is a pleasure to follow the sophisticated argumentation between sophisticated people in this thread :)

    Some of my opinions about shadows and ambient occlusion:


    Direct light comes directly from the light source.

    Indirect light comes from the same light, but via bounces off other surfaces. The surfaces do not need to be reflective; a matte Lambertian surface also sends light rays back.

    Global illumination is a method for calculating scene lighting that takes into account both direct and indirect lighting.

    A shadow is an area of darkness (not necessarily black) created when a source of light, either direct or indirect, is blocked.

    Ambient occlusion is a method for *simulating soft global-illumination shadows* caused by white ambient light in the scene.


    There is no AO in real life. AO is used only in computer graphics, to mimic the real-life situation where light rays bounce between surfaces; computing those bounces properly involves complex mathematics, sometimes called global illumination calculation.

    For a real-life example, think of a long cylinder with open ends. Take it outside on an overcast day. An overcast day is a good example of pure ambient light: the light rays come equally from all directions. Look inside the cylinder and you will see the middle parts of the cylinder darker than the parts closer to the ends.

    The darkening is not caused only by shadows, but also because the light rays bouncing inside the cylinder are absorbed, bounced back outside, destroyed, turned into some weird quantum electron energies and so on. Not all the light rays ever reach the middle of the cylinder. The light rays bouncing inside the cylinder are sometimes called indirect lighting; the term "indirect" is used because the rays are not coming directly from the light source, but bouncing off another surface.

    An AO calculation does not compute these bounces and absorbed light rays. Instead, it tries to mimic the same kind of result without ray-tracing. The AO calculation finds out how much geometry there is around each point, and gives the point a color between white and black. The result is a simplified, simulated global-illumination effect.

    An example where the AO calculation goes totally wrong:
    Take another cylinder outside, this one made of very bright, scratch-free steel. Look inside it: bright areas are everywhere, and you might even find the brightest spots in the middle of the cylinder. A global illumination calculation would solve this, but the simplified AO calculation does not.

    (Well, in principle, multiplying AO over other texture maps is a somewhat wrong method. AO should not be multiplied over specular highlights, glossy highlights, reflections and so on.)

    So, to put it simply: an AO map carries information about both shadows and global light rays, with very simplified assumptions about material properties.
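    The "how much geometry is around each point" idea can be illustrated with a toy hemisphere-sampling sketch (hypothetical code with sphere occluders; real AO bakers work on meshes and are far more refined):

```python
import numpy as np

def ray_hits_sphere(origin, direction, center, radius):
    """Ray/sphere intersection test (any hit with t > 0)."""
    oc = np.asarray(origin, float) - np.asarray(center, float)
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return False
    sq = np.sqrt(disc)
    return (-b - sq) > 1e-6 or (-b + sq) > 1e-6

def ambient_occlusion(point, normal, occluders, n_samples=256, seed=0):
    """Fraction of hemisphere directions not blocked by occluders:
    1.0 = fully open (white), 0.0 = fully enclosed (black)."""
    rng = np.random.default_rng(seed)
    open_rays = 0
    for _ in range(n_samples):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)
        if np.dot(d, normal) < 0:
            d = -d  # flip into the hemisphere above the surface
        if not any(ray_hits_sphere(point, d, c, r) for c, r in occluders):
            open_rays += 1
    return open_rays / n_samples

# a point with a big sphere hanging above it is darker than an open point
p_open  = ambient_occlusion(np.array([5.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]), [])
p_shade = ambient_occlusion(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                            [(np.array([0.0, 0.0, 2.0]), 1.5)])
```

    Note that the sketch only asks "is something there?"; it never follows the bounced rays, which is exactly why it fails on the shiny steel cylinder described above.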

  18. Commenting on the last normal-map bake question:

    You can bake a normal map inside ZB. There is a nice tool called Multi Map Exporter, located in the plugin menu.

     

    The idea is to use the highest subdiv level and bake the maps down to the lowest subdiv. Then you only need to take the low-poly model into Blender and use the ZB-baked maps. My opinion is that xNormal is a better tool than ZB when it comes to map baking...

  19. You ask tricky questions : ).

    Usually I go with the Standard brush, and the very basic Alpha 01 is good too. A lot of lazy mouse effect makes freehand work easier.

    One trick is to alternate Zadd and Zsub between strokes, so folds come up from the surface and the next valley goes below the neutral surface. Sometimes I play with Brush -> Depth -> Gravity Strength; this setting applies a gravity effect to the brush stroke.

    A Smooth or DamStandard brush pass after each stroke is useful too.

    Sometimes it is easier to draw folds using Stroke -> Curve -> Curve Mode, and set the curve modifiers to give some intensity and size falloff. This causes the curve to fade away by itself.

    Anyway, your question is hard. There are a lot of different approaches on YouTube; if you search for "cloth folds ZBrush" you might find some ideas.

  20. A very complex question as a whole. There are so many ways to do it. One is to paint in GIMP or Photoshop or anything.


    My workflow has been something like this:

    1. Sculpt the details in ZB

    2. In ZB, bake out the normal map and displacement map

    3. Export the low-poly model out of ZB (the hi-poly is not needed anymore from this point on)

    4. Import the low-poly into the baking software (be it Blender or Maya or whatever)

    5. Set up the lighting (requires a lot of experimenting...)

    6. Bake out the final diffuse texture, using the normal map and displacement map as inputs. Materials and shading are set up in the baking software

    The baked diffuse texture includes the ambient occlusion baked in. If SL materials are to be used, the normal map can be used; the specular map is made manually in painting software.

    Inside ZB you can do about the same thing using the "Bake out matcaps" plugin. It requires a lot of tweaking, though.

    How about this kind of workflow? :)

  21. Oh yes, it seems to close all the holes.

    How about deleting the extra polygroups:

    1. Ctrl-Shift-click the dress polygroup (red in your image). Now only the main dress polygroup is selected and visible.

    2. In the Tool palette choose Geometry -> Modify Topology -> Del Hidden

    This should delete all the other polygroups except the one you left visible in step 1.
