Everything posted by Drongle McMahon

  1. Oh yes. I should have looked first - more than 12000 triangles. So it won't get very low whatever you do with the LODs.
  2. Concerning the LI, if you need to reduce it... Not sure, as it depends on the exact size, but I suspect the low LOD (rather than the lowest) will be the most important to simplify for low LI. You can probably remove the wires completely at that LOD, and maybe the upright banisters too. If you want to be sophisticated, you can replace the wires and banisters with planes (four, to face both ways for each side) carrying an alpha texture of the wires and banisters, for the low and lowest LODs (see the sketch below). It has to be on its own material, of course. Use a hidden triangle for that material at the higher LODs. That's the way railings were done before mesh, and at the low LOD distance it should be adequate. This is the same technique used with window frames, fences, etc. It can provide very low LIs with very acceptable appearance. The alpha glitch is unlikely to be a problem at the low LOD distances.
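For Blender users, a minimal bpy sketch of setting up two of those alpha planes on their own material (the material name, plane size, and the 2.8+ API are my assumptions; the alpha texture itself would be baked or painted separately):

```python
import math
import bpy

# Dedicated material for the alpha-textured railing planes, as described
# above; the wires and banisters go into its alpha texture.
mat = bpy.data.materials.new("railing_alpha")   # hypothetical name
mat.blend_method = 'CLIP'                       # alpha-masked preview shading

# One vertical plane; a flipped duplicate faces the other way, since SL
# culls backfaces and the railing must be visible from both sides.
bpy.ops.mesh.primitive_plane_add(size=1.0, rotation=(math.pi / 2, 0, 0))
front = bpy.context.object
front.data.materials.append(mat)

back = front.copy()
back.data = front.data.copy()
back.rotation_euler[2] += math.pi               # rotated copy facing the rear
bpy.context.collection.objects.link(back)
```

At the higher LODs the same material would sit on a single hidden triangle, as described above, so the material list matches across all LODs.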
  3. Just in case... Is your model made of several Blender objects? If so, the physics model needs the same number of objects, each being the shape for the corresponding visual mesh, each having the same bounding box as the corresponding visible mesh, and with corresponding meshes in the Collada file in the same order in the visual and physics model files (done by appropriate naming and checking "Sort by object name" in the exporter). That's a tall order for something like this staircase, and you would probably be better off joining it as one object, unless there is a special need for separate objects in SL. Then follow Aquila's advice for the physics model.
  4. "I think this has bearing on your question..." Yes it does. However, it looks like that was settled out of court while under appeal, which I think means the question was not legally determined there. Also, I don't think that was a question of activity being outside the scope of the safe harbor provision, was it? The relevant part of the DMCA* is (sorry, I don't know the official way of formatting legal citations) Title II, Section 512, ©(1)(B), which requires that the service provider ".. does not receive a financial benefit directly attributable to the infringing activity, in a case in which the service provider has the right and ability to control such activity;" I suppose we can't even consider that until we know exactly what LL may be planning tom do with their new rights. The "directly attributable" may be the significant point. If it's going to be just taking a commision, then In guess it stays within the safe harbor, since that's already the situation with the marketplace. I suppose that must be considered indirect? This does seem the most likely, all things considered. * http://www.copyright.gov/legislation/pl105-304.pdf
  5. "The ISSUE for me is that even though they have made this rule and many of us are complying -- there are just as many blatantly stolen items up on the Marketplace and fraud is rampant." Indeed. Does the ToS really absolve them of all liability if they step beyond the scope of protection by the DMCA? Would it really be plausible for them to claim ignorance of the extent of violations?
  6. I am sure Arton is right. That is also consistent with your description of other cases. The lower LODs of each mesh get stretched or squeezed to fit the bounding box of the high LOD. So if the wrong ones are associated at different LODs, the lower LODs come out distorted as well as moved. In case you are using Blender, here's how you get the naming right: name each mesh with a name shared between LODs, then a postfix for the LOD, as in left_antler_hi, right_antler_hi; left_antler_med, right_antler_med; and so on. Then check the "Sort by Object Name" option in the export settings (this should be automatic in the SL-specific modes).
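A rough bpy sketch of applying that naming scheme to the selected objects (the suffixes come from the post; everything else, including running it once per LOD set, is my assumption):

```python
import bpy

LOD_SUFFIXES = ("_hi", "_med", "_low", "_lowest")
TARGET = "_med"  # run once per LOD set, with the matching suffix

for obj in bpy.context.selected_objects:
    if obj.type != 'MESH':
        continue
    name = obj.name
    for s in LOD_SUFFIXES:            # strip any existing LOD suffix first
        if name.endswith(s):
            name = name[: -len(s)]
    obj.name = name + TARGET          # e.g. "left_antler" -> "left_antler_med"
    obj.data.name = obj.name          # keep the mesh datablock name in step
```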
  7. Thanks. I think that may be where I read it. I'll quote the salient extract here, for the record... "Changed prim accounting for legacy prims which use the new accounting system: All legacy-style prims have their streaming cost capped at 1.0 (except for sculpts, which will be capped at 2.0). This provides the benefit of not penalizing prim-based creators for optimizing their content by opting into the new system and will make the streaming cost more reflective of the true network cost of the objects. Server cost will be adjusted to MIN{ (0.5*num_prims) + (0.25*num_scripts), num_prims }. This preserves the current value for unscripted linksets and reduces the cost for linksets containing fewer than 2*num_prims scripts. It provides the benefit of rewarding creators for reducing the number of scripts in their objects."
  8. "The official formula for the server weight is num_prims*0.5 + num_scripts*0.25." Arton, is there an official source for this? I remember reading about the changes, but I can't find it anywhere. The wiki still has the old calculation in. Somebody should change it.
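For concreteness, a tiny sketch of that formula with the cap from the release notes quoted above (the function and variable names are mine):

```python
def server_weight(num_prims: int, num_scripts: int) -> float:
    # MIN{ (0.5*num_prims) + (0.25*num_scripts), num_prims }
    return min(0.5 * num_prims + 0.25 * num_scripts, num_prims)

print(server_weight(10, 0))   # 5.0 -- unscripted linkset keeps 0.5 per prim
print(server_weight(10, 30))  # 10  -- 30 >= 2*10 scripts, so capped at num_prims
```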
  9. What puzzles me is what tools there are for editing the normals. Doing them one by one would be impractical in most cases. So there must be operations that handle collective editing. How do you set up the adjustments you showed here in the Autodesk tools?
  10. I got it down to 192. There's a tiny change on the corners, barely noticeable. Any more seems to mess up the shading badly. I also tried using inset faces, but that left horrible triangulation artefacts because of trapezoidal faces.
  11. Absolutely. That's why we need the normal editing in Blender, as you said. When the joints are as sharp as Aquila's though, I guess it doesn't really matter.
  12. Yes. You are right, of course. As long as you can keep pointy corners, that is much better. The effect is the same as massive subdivision followed by redundant loop removal, as in this picture. The triangle counts for this, and a final optimisation, are shown there. That's probably the best solution for the OP's case too. (Gaps kept wide so you can see them.)
  13. Yes. That's what the MWA from the article is... the average of (normalised) face normals, weighted by the angles between the edges connecting the face to the vertex.
  14. "Split Normals arrived in Blender today" Good news. I hope we don't lose Split in the Edges menu though. I use that a lot to dissect away parts of my meshes. Or maybe there's another way?
  15. "The eventual triangulation of quads (whether in Blender shading or upload to SL) inevitably twists the normals." I don't think it's the triangulation that does the twisting. If you look at these, you can see that the normals don't change at all when the triangulation changes. In particular, with the central normal in the two at the right of the second row, first there are three horizontal and four vertical triangles, then six horizontal and two vertical triangles. Nevertheless, the normal stays exactly the same. So it certainly isn't as simple as a straightforward averaging of the face normals (components) from each contacting triangle. Does anyone know the mathematics involved in making these vertex normals*? The triangulation does affects the details of the shading errors, because it changes which three normals are used for interpolating at any point, but the twist in the normals is independent of the triangulation. I added the top right model to illustrate the effect of simple subdivision of the whole mesh, two levels here. This confines the shading errors to the tiny triangles right in the corner. With sufficient subdivision, they becone essentially invisible. I guess this is one of the reasons for the popularity of excessively high poly models, and the costs they incur. *spent a few hours googling this, which was quite interesting. The majority suggest weighting the averaging by the areas of the triangles using the vertex (sum of cross products of edges defining triangles, followed by normalisation). That isn't what's used in Blender, because the normals don't change with the areas (lowering the top of the vertical faces). Instead, it seems to be using weighting by the angles between the defining edges, which is consistent with the behaviour in Blender as far as I can see. For anyone with mathematical bent, I found the article below which compares a variety of methods for calculating the vertex normals, including the two above (MWAAT and MWA). MWA, which Blender appears to be using, scored well in their more general tests, although it was not so good at highly regular sphere and torus models. http://users.tricity.wsu.edu/~bobl/personal/mypubs/2003_vertnorm_tvc.pdf
  16. Don't eat them!!
  17. There is always this kind of problem with complex corners like this. Others may have better solutions; meanwhile, I made a simple model to illustrate. The pictures are Matcap display, to make the unwanted shading effects more obvious. This helps, as long as you look at it from plenty of angles. In both pictures, the top row is in Edit Mode with vertex normals turned on. The same objects are shown in the second row in Object Mode, to remove the distraction of the edges. They are all smooth shaded throughout, but with some edges split*.
If you look at the first two models, you can see where the problem arises. In the first, all the internal edges are split, so that each quad is a separate mesh with its own unshared vertices. In that case, all the vertex normals are identical to the face normal. When two adjacent faces are coplanar, you don't really need the split edge between them to get flat shading, because their face normals are already identical, and so will be the vertex normals. You can see the different vertex normals coming out of the split vertices, and you will notice that they are all perpendicular to the faces of the quads they belong to (if a quad isn't flat, that's a different matter).
In the second model, none of the edges are split. Now each vertex has just one normal, which is the average of the face normals of the faces using that vertex. Inside each triangle, the normal at any point is a linear interpolation of the three vertex normals, giving the variation in shading across the triangle. Because the triangles in a quad have only two shared normals, the difference in the third gives rise to the diagonal discontinuities in each quad. The normals are different at opposite edges of the quads, so the overall effect is shading as if the quad were twisted.
The third model is the same as the second, except that the triangulation of each quad is along its other diagonal, as suggested by Gaia. This changes the direction of the triangular artefacts, and in some cases this may be sufficient, but in this model the vertex normals are still skewed, so the twisting effect is still there.
In models 4 and 5, just the highlighted edges are split. In model 4, this leaves the right-hand flap, equivalent to your b and c, isolated; because their face normals are the same, so are all the vertex normals, and they are coplanar and flat. However, we may want smooth shading across the 45 degree crease, which we don't get with this edge split. In model 5, that edge is not split, so the shading along it is smooth. However, the vertex normal at the nearer two vertices along the crease is at 22.5 degrees (half the 45 degrees, from averaging), while the third vertex, which is now unattached because of the split, is free, so its normal is at 45 degrees. So there are still different vertex normals at the two ends of the second quad, and the twisting, giving the diagonal shading artefact, is still there in that quad.
The last model uses Leprekhaun's hidden plane extension. You can just see that the horizontal flap adjoining the 45 degree crease is now completed with another hidden quad. Now the third vertex along the crease has exactly the same two face normals on either side of it as do the first two vertices, so the resulting vertex normals are all the same. The smooth shading along the crease is preserved, but now there is no twisting artefact. That's fine, as long as we don't want smooth shading between, say, the 45 degree face and the vertical face. That is more tricky.
The left model in the second picture has yet another hidden plane, this time going vertically down from the first, with the normal pointing to the rear. This does make the three normals along the crease the same, so at first sight it looks like we have achieved the effect. However, the normals between the 45 degree flap and the vertical quad are completely wrong, and after looking a bit longer, it is clear that the shading is all wrong, even though it is free of twisting artefacts. Perhaps the only solution here is to resort to real geometry, by bevelling the 45 degree crease. This is shown on the right of the second picture. The edges where the vertical and horizontal edges cut through the bevel have to be split; otherwise there are other horrible shading artefacts there. This works as wanted, but only at the cost of a lot of extra geometry.
The ideal solution to this kind of situation would be the facility to edit the vertex normals directly, optionally overriding the automatic averaging of the face normals. This is something I think Gaia is trying to get added to Blender. That might allow much more elegant solutions to this kind of problem. It still won't be easy.
*This can be done with the Edge Split modifier, where it affects shading straight away but doesn't actually split the vertices until the modifier is applied, or immediately by using Mesh->Edges->Split with the edges selected (see the sketch below). It has exactly the same effect as using flat shading. For SL, the uploader will do the necessary splitting for flat shaded faces.
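A minimal bpy illustration of the two splitting routes in that footnote (the object name is hypothetical, and the modifier settings shown are just one plausible configuration):

```python
import bpy

obj = bpy.data.objects["Corner"]  # hypothetical object from the example

# Route 1: the Edge Split modifier changes shading immediately, but the
# vertices are only really split when the modifier is applied.
mod = obj.modifiers.new("SharpSplit", 'EDGE_SPLIT')
mod.use_edge_angle = False    # ignore the angle threshold...
mod.use_edge_sharp = True     # ...and split only edges marked as sharp

# Route 2: split the selected edges immediately in Edit Mode,
# i.e. Mesh -> Edges -> Split:
# bpy.ops.mesh.edge_split()
```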
  18. It depends whether the asset database includes an uploader ID. If it does, they can easily check whether they accepted the new ToS. If not, then I guess it couldn't be retroactive because they would have to rely on the datestamp to know that it was uploaded by someone who had accepted the ToS. If there's no uploader ID or datestamp, then I don't see how they could tell at all. Maybe some other time-indicating versioning information? I would guess they have both uploader ID and datestamp, but I could certainly be wrong.
  19. "The parts of the TOS that I felt had changed ... had nothing to do withi 2.7" Ah. That explains it. Is there anywhere that keeps up-to-date records of the changes?
  20. Oh dear. From your title, I thought you were going to reassure us that the SL servers don't use (a relevant version of) OpenSSL, but you didn't. Does anyone know?
  21. I think that bit of 2.7 hasn't changed since the copy I have from August 2013. CC Attrib does retain rights for the originator, but it gives anyone permission to do just about anything with the work, EXCEPT to remove the attribution. So the question is whether the uses contemplated by the ToS imply removal of attribution. If they do, then you can't upload it. If a requirement for attribution is inconsistent with "unconditional" and/or "unrestricted", then they do, and you can't upload it. If they don't, then I don't see why you can't. My own, ill-informed interpretation is that the explicit waiver of the uploader's own right to attribution implies that the contemplated use does involve removal of attribution, and therefore that it is unsafe to upload CC BY stuff (if you care about complying with the ToS). CC0 and public domain stuff is rare. So that is severely restrictive.
  22. Thanks Gaia. Clarification on this and other related issues "from the horse's mouth"* is what we all really need. *This is an English idiom meaning from the most reliable or definitive source.
  23. "you cant use lossy compression on them" If they are bigger than 128x128, you have no choice, they are always lossy (irrespective of what the checkbox says).
  24. "The key words are unconditional and unrestricted. Requiring an attribution is a restricting condition." I guess that is a reasonable interpretation, in which case I am glad I have been cautious. "The ToS are not retroactive and could not be, from a legal standpoint. It only applies to content uploaded after the new ToS went up." That was my initial assumption too, although the consensus of discussion, here and from the (apparently) legally qualified panel that held an open discussion, was that it was indeed retrospective - more or less on the grounds that you were not forced to accept the ToS if you didn't want to, or were not entitled to, provide rights to existing content, and that there is nothing in the ToS that restricts its effect to content by the date of uploading. The term "all or any portion of your User Content" does not say anything about when it was uploaded. The definition, ""User Content" means any Content that a user of the Service has uploaded, published, or submitted to or through the Servers, Websites or other areas of the Service.", even appears to refer explicitly to past uploading. However, I am certainly not legally qualified, so I will defer to your opinion. I would certainly expect difficulties for a retrospective effect if it were ever to come to a court, but then it probably never will, as the ToS also binds us to accept arbitration.
  25. I did some experiments. Very instructive, and the runaway winner is Ivan's floating geometry combined with Kwak's material trick.
The high poly model is a cylinder with sixteen segments around and 15 top-to-bottom. All the quads are square. Every fourth square has a rivet. In the first version, the rivets are connected to the cylinder, and their bottom edge is sharp, following a tight 5-segment bevel. The edge points are connected to the nearest corners of the square (trying other arrangements didn't make any useful difference). In the second, the floating geometry version, the squares with rivets are the same as the others, and the rivet is separate, with no bevel and its bottom edges coplanar with the square. The low poly model is just the cylinder without rivets. Its UV map is a simple tiling of the unit UV space, and the squares that will carry rivets are a different material. For each version, normal maps for the whole model were baked onto the low poly cylinder at three resolutions. Then one of the rectangles to receive a rivet was remapped to the whole UV space and the normal map was baked onto that at 64x64 resolution. Everything except the attached rivet edges was smooth shaded throughout.
The picture shows the attached rivet version on the left and the floating rivet version on the right. At the top of each column is the high poly mesh, with blank normal map and spec map. Note that there are severe artefacts in the normals of the squares with the rivets sitting on them. These artefacts are absent from the floating geometry version. The next three, top to bottom, are the low poly mesh with the whole-model normal maps baked at 256x256, 512x512 and 1024x1024. The normal map pixelation effects are much as they were in the flat case. The nasty normals in the squares with attached rivets do not improve (hardly surprising, as they are there in the model itself). At the bottom, the materials are used. The rivet-bearing squares use the 64x64 single-rivet normal map baked from the floating rivet model, repeated 16x15, with a 0.5 offset. The rest of the cylinder is either using the blank normal map (left) or the 256x256 floating rivet map (right). These are very nearly as good as the 1024x1024 full-model normal map, while using only a small fraction of the texture data.
It was easy to set the repeat and offset here, because the model was designed with that in mind. It could be very difficult to achieve this in more realistic and useful cases. However, I guess it might be possible to obtain the desired effects by moving the targeted UV islands around instead when they start out in random positions.
ETA: Of course a real object like this might well have flattened patches to get the rivet to bind better, in which case the "bad" version might be more realistic!
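For anyone wanting to preview that repeat/offset in a recent Blender before upload, a hedged node-setup sketch (in SL itself these values would be set per-face in the texture tab; the image path, material name and offset interpretation are illustrative assumptions):

```python
import bpy

mat = bpy.data.materials.new("rivet_squares")       # hypothetical material
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

tex = nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("//rivet_64_normal.png")  # hypothetical path
tex.image.colorspace_settings.name = 'Non-Color'    # normal maps are data

mapping = nodes.new("ShaderNodeMapping")
mapping.inputs["Scale"].default_value = (16.0, 15.0, 1.0)   # 16x15 repeats
mapping.inputs["Location"].default_value = (0.5, 0.5, 0.0)  # the 0.5 offset

uv = nodes.new("ShaderNodeTexCoord")
links.new(uv.outputs["UV"], mapping.inputs["Vector"])
links.new(mapping.outputs["Vector"], tex.inputs["Vector"])

nmap = nodes.new("ShaderNodeNormalMap")
links.new(tex.outputs["Color"], nmap.inputs["Color"])
links.new(nmap.outputs["Normal"], nodes["Principled BSDF"].inputs["Normal"])
```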