Drongle McMahon

Everything posted by Drongle McMahon

  1. "The sphere and the capsule are the two most efficient shapes - we do not have access to either at all" True for uploaded mesh, but these Havok primitives do get used elsewhere, if I remember correctly. I think the avatar collision shape uses a capsule. The primitives are also used for the physics shapes of legacy prims - spheres, cylinders and boxes - as long as they are not distorted and are scaled only in ways that keep them the primitive shape. The primitives have a constant physics weight of 0.1. That means you can make very cheap physics by using only linked undistorted prims. You can see the switch from primitive to triangle-based physics in the physics shape view. If you make sets of ten linked objects, you can see the differences in physics weight using the More Info link. These don't get used as they are in the LI, though. Here are some examples, all 0.5x0.5x0.5 cubes, giving Prim/CH physics weights for ten linked: plain cube 1.0/1.0; hollow 50% 27.0/3.6; twist 180 32.9/6.8; taper 0.1,0 3.6/3.6.

The Blender physics system includes all the Havok physics primitives, but I don't think the exporter exports them. Unfortunately, the mesh uploader can't recognise the perfect Havok primitives if you put them in your mesh, so you can't take advantage of them in uploaded physics shapes. Collada actually has capsules, spheres, cylinders and boxes in its physics section, as does Blender, which would provide the possibility of using them, but I don't think the Blender Collada exporter knows about them, and neither does the SL uploader.

ETA: There's another interesting thing that happens when you increase the size of the twisted cube. Up to 5x5x5, its behaviour is as expected: the CH weight stays at 0.7, while the Prim weight decreases to 0.5, from 3.3 at 0.5x0.5x0.5. Then the complexity of the underlying geometry increases, affecting both weights, so a 10x10x10 version has a CH weight of 1.3 and a Prim weight of 2.1.
  2. Solidify will double the number of (selected) vertices.
  3. I think it's the 10Mb limit on a single text node, which comes from the library used by the uploader (libxml2): #define XML_MAX_TEXT_LENGTH 10000000. If you google your error message, you will see it occurring in many other pieces of software. Exactly what that corresponds to in the SketchUp model before it's exported, I have no idea, but it's probably too much geometry of one kind or another in one piece. So the overall size of the file may not be the relevant factor.
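If you want to find the offending piece before going back to SketchUp, you could inspect the exported dae with R and the xml2 package (which also sits on top of libxml2; its "HUGE" parser option lifts the limit so the file can at least be read). This is only a rough sketch, and the file name is a placeholder:

library(xml2)
# read the dae with the parser limits relaxed, then measure the big text nodes;
# in a typical export these are the <float_array> and <p> elements
doc   <- read_xml("bigmodel.dae", options = c("HUGE", "NOBLANKS"))
nodes <- xml_find_all(doc, "//d1:float_array | //d1:p")
sizes <- nchar(xml_text(nodes))
data.frame(id = xml_attr(nodes, "id"), characters = sizes)[order(-sizes), ]
# any single node over 10,000,000 characters is what trips XML_MAX_TEXT_LENGTH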
  4. "you get higher LI if you don't keep your vertices straight" Yes. I guess we were just thinking about how they annoyingly get unstraight in the first place. I suppose the reason the strightness helps is because it improves the compressibility of the uploaded data. Is that your impression too?
  5. Not sure what you did here ... i.e. what step you scaled at. Meanwhile, I can confirm that 1.100000023841858 is exactly what you get when you convert the nearest possible 32 bit binary representation of 1.1 to decimal. It's not exact because dividing by 10 (multiplying by 0.1) gives a recurring binary fraction, which gets rounded up to the value we see. In binary, the significand should be 1.000110011001100110011001,1001100110011...., where the comma marks the last available digit. So that gets rounded up to 1.000110011001100110011010, and that is the predictable rounding error. If you move the point by an amount that is exactly representable in a 32 bit float, such as 0.5 or 0.25, there will be no error. I imagine rotations are done by matrix multiplication, which is again subject to rounding errors, and that's where the other errors come from. Of course, as these all accumulate on top of each other, the effects can get larger. That may account for the "wobble" we are seeing. ETA: in the case of 1.100000023841858, the error is too small to show up in the 7 significant digits produced by the collada exporter. So this particular vertex shows up as 1.1.
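If anyone wants to check that figure without repeating the binary arithmetic, the rounded significand works out to the 24 bit integer 9227469 over 2^23, and a double can hold that fraction exactly, so plain R reproduces it:

# 9227469 is binary 100011001100110011001101, the rounded 24 bit significand of 1.1;
# dividing by 2^23 puts the binary point back, and a double stores the result exactly
sprintf("%.15f", 9227469 / 2^23)   # "1.100000023841858"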
  6. That was all probably rubbish. :matte-motes-sick: I found a detailed source for all this. It seems 2^-24 is also the minimum rounding error for a 32 bit floating point value that is supposed to be 1. That is, it is half the value of the least significant digit in the significand (mantissa), referred to as 0.5 ulp. Of course, it's much more complicated than that. Having read about a quarter of the indicated document, it appears that 0.5 ulp is the target for error in much more complicated calculations than addition and subtraction, which is achieved by using extra bits in intermediate values. So that remains the expected minimum rounding error. What happens if you scale the plane up by 100? Does the funny number scale up too? The 7 significant decimal digits used by the Blender exporter are more than enough for the 16 bit integer representations used by the SL uploader, but that document mentions that nine decimal digits are required for the originating 32 bit (single precision) binary number to be recovered when it is read back in.
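For what it's worth, the 0.5 ulp figure itself is just the spacing of 32 bit floats around 1.0, which is quick to check in R (nothing here is specific to Blender or SL):

# a 32 bit float keeps 23 stored fraction bits, so the gap (ulp) between 1.0 and
# the next representable value is 2^-23; half that gap is the 0.5 ulp bound
2^-23        # 1.192093e-07, one ulp at 1.0
2^-23 / 2    # 5.960464e-08, i.e. 2^-24 again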
  7. 5.960464477539063e-08 is 2^-24. It is the smallest denormalised number that can be represented in the 16 bit IEEE floating point format. Denormalised numbers* allow values smaller than the minimum that can be represented with full precision, at the cost of reduced precision. They are used (unless excluded by a compiler directive) because they can avoid division-by-zero errors produced by rounding of intermediate values from the subtraction of two floating point numbers that differ by less than the minimum normal number. So this number is as close as the format can get to 0 without actually being 0. Why they are generated by your particular manipulations, I don't know. *A link with some explanations, albeit for 32 bit numbers. The Blender collada exporter does use seven decimal digits of precision for export, which is consistent with the use of 32 bit floats (single precision) for internal data, which I have read elsewhere is the case. So there doesn't seem to be any reason for a number which is the natural consequence of using 16 bit (half precision) floats, and I am rather confused about that. I suppose it could be using SSE/SIMD instructions**, which can do parallel calculations on registers carrying multiple lower-precision numbers, making it faster. If that used 16 bit segmentation of the registers, that could account for this effect. Maybe someone with knowledge of Blender internals can enlighten us?

The SL uploader converts the floating point data to 16 bit integers, with a scaling factor calculated so that the whole range (-32768..32767) is used over the size of the bounding box in each dimension. This gives even precision over the whole range, unlike floating point numbers. 6*10^-8 would almost always become zero in that conversion (i.e. unless the mesh was extremely small!). I have to suspect that the other small, but larger, numbers, and the differences that creep in, are the results of similar precision errors. **ETA: Or maybe it's using 16 bit calculations on the graphics processor? Surely not worth the loss of precision for either? ETA: Or maybe it's the Python that's doing it?
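ETA again: both numerical points are easy to illustrate in R. The quantisation function below is only my sketch of the scheme described (even spacing of the signed 16 bit range across the bounding box); the uploader's actual formula may differ, and the function name and example values are made up:

# why 2^-24 points at half precision: binary16 keeps 10 fraction bits and has a
# minimum exponent of -14, so its smallest positive (denormalised) value is 2^(-14-10)
2^(-14 - 10)                          # 5.960464e-08

# sketch of mapping a coordinate onto the signed 16 bit range across a bounding box
quant16 <- function(x, lo, hi) round((x - lo) / (hi - lo) * 65535) - 32768
quant16(5.960464477539063e-08, 0, 1)  # -32768, the same bucket as an exact 0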
  8. I'm going to make a better one that can iron out small variations without being as drastic as rounding. Maybe that could do positions as well as normals. It would be better if someone could do it in Python, as anyone with Blender automatically has Python. Even better would be an addon modifier that could be put last on the stack in Blender to do it! Too hard for me at the moment.
  9. "The reason si that it'll take so long to clean up the file after the modifier..." Are you saying that all Blender modifiers suffer from this "wobble" effect? I haven't investigated that. Auto-smooth, which does, isn't, technically, a modifier. Anyway, I wrote an R function that does the rounding of the normals I described (13,000 or so of them) in about half a second (on ssd disc). That's fast enough for me. daetrim<-function(infile,idkeywd="normal",decdigs=4,outfile=infile){ library(xml2) adoc<-read_xml(infile) fanodes<-xml_find_all(adoc,"//d1:float_array") nodeids<-sapply(xml_attrs(fanodes),function(x){x["id"]}) normidx<-grep(idkeywd,nodeids,ignore.case=TRUE) for(i in 1:length(normidx)){ norms<-as.numeric(strsplit(xml_text(fanodes)[normidx[i]]," ")[[1]]) norms<-round(norms,decdigs) anode<-fanodes[normidx[i]] xml_text(anode)<-paste0(as.character(norms),collapse=" ") } write_xml(adoc,outfile)} In case anyone is unfamiliar with R, you can find it at R-project.org. It's the most commonly used open source software used by academic statisticians, and is especially good at handling large arrays/vectors. I should add some error trapping code to this!
  10. It turns out to be more complicated than I had imagined. First, I was forgetting the effect I have commented on before, a long time ago. If you don't have a UV map in the dae file, the uploader uses uninitialised data, effectively random UV mapping. That usually leads to a lot of separated islands. The seams between these split the vertex list just as different normals do. Furthermore, it can be different each time you upload, so that the vertex count varies. The way round this is to make a UV map, but to collapse it all to one point. Then all the UV coordinates are identical and no splitting can be induced on that account. That reduced the vertex count considerably, but only down to 6500 or so.

So I looked at the dae files in detail. When you use data transfer to modify the normals in Blender, you have to turn Auto-smooth on (set to 180 deg, so that it has no effect), or the unmodified normals get used instead*. However, any time you turn auto smooth on, some "wobble" gets introduced into the exported normal data values, and that affects the transferred normals. If that wobble is sufficient, it can split vertices because the normals are detected as different. In the case of the problem model, rounding the normals to four decimal places eliminated the wobble and produced an uploaded model with the expected 4320 vertices. The number increased as the rounding was made less stringent. Here is a section of the normal data before and after rounding...

> nmlsq[101:160]
 [1]  9.945234e-01 -1.323220e-05 -3.090524e-01  9.510451e-01 -4.528400e-06 -3.090322e-01
 [7]  9.510517e-01  9.924170e-06 -3.089848e-01  9.510670e-01  6.020850e-06 -1.045139e-01
[13]  9.945235e-01 -1.913310e-05  3.090501e-01  9.510459e-01 -5.126000e-06  3.090020e-01
[19]  9.510614e-01  1.814960e-05  6.691052e-01  7.431678e-01  2.948870e-06  6.691429e-01
[25]  7.431339e-01 -1.642110e-05 -6.691060e-01  7.431671e-01 -2.892380e-06 -6.691432e-01
[31]  7.431335e-01  1.680850e-05  8.089966e-01  5.878134e-01  3.188850e-06  8.090273e-01
[37]  5.877711e-01 -1.564620e-05 -8.089970e-01  5.878129e-01 -4.617800e-06 -8.090271e-01
[43]  5.877715e-01  1.776220e-05  9.135313e-01  4.067686e-01  3.395910e-06  9.135525e-01
[49]  4.067209e-01 -1.585480e-05 -9.135320e-01  4.067672e-01 -4.293090e-06 -9.135530e-01
[55]  4.067196e-01  1.671910e-05  9.781405e-01  2.079452e-01  2.592020e-06  9.781511e-01
> nmlsqr[101:160]
 [1]  0.9945  0.0000 -0.3091  0.9510  0.0000 -0.3090
 [7]  0.9511  0.0000 -0.3090  0.9511  0.0000 -0.1045
[13]  0.9945  0.0000  0.3091  0.9510  0.0000  0.3090
[19]  0.9511  0.0000  0.6691  0.7432  0.0000  0.6691
[25]  0.7431  0.0000 -0.6691  0.7432  0.0000 -0.6691
[31]  0.7431  0.0000  0.8090  0.5878  0.0000  0.8090
[37]  0.5878  0.0000 -0.8090  0.5878  0.0000 -0.8090
[43]  0.5878  0.0000  0.9135  0.4068  0.0000  0.9136
[49]  0.4067  0.0000 -0.9135  0.4068  0.0000 -0.9136
[55]  0.4067  0.0000  0.9781  0.2079  0.0000  0.9782

This is from R, which I used to do the rounding. However, having discovered this, I proceeded to try to reproduce the effect, starting with a new copy of the mesh. This time it imported straight away with 4320 vertices. There was wobble in the normal data again, but apparently not enough to show up (I haven't checked explicitly). So I am still confused. I need to try to reproduce the model with the excessive wobble. Anyway, it does appear that the auto-smooth function in Blender, which has to be applied to export transferred normals, introduces "wobble" in the values for the custom normals that can be sufficient to cause artefactual splitting of vertices.

I would have to regard this as a bug (or feature) in Blender (although the uploader could be modified to work around it by doing less stringent matching in the code that eliminates duplicates). I suspect the wobble must be there in the internal Blender data, rather than being introduced by the exporter. Whether there is a sensible reason for the wobble, I will not speculate, as I don't know enough about the code and/or its motivating principles.

*That's very strange, because smooth shaded meshes without data transfer behave the opposite way round. If you turn auto smooth on for these, they get exported as flat shaded although they look smooth shaded in Blender. You have to turn it off to get the smooth shaded normals exported. When it's on, the normals, specifying the unwanted flat shading, still have the wobble.
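Going back to the splitting itself: a quick way to see how much of the wobble survives as distinct normals is to count unique rows before and after rounding. A rough sketch, assuming nmlsq holds the raw normal components in x,y,z order as read from the dae (as in the listing above), using base R only:

# group the flat vector into one row per normal, then count the distinct rows;
# every row that differs only by the wobble is a potential extra vertex in SL
nrm <- matrix(nmlsq, ncol = 3, byrow = TRUE)
nrow(unique(nrm))             # raw values: the wobble keeps near-duplicates apart
nrow(unique(round(nrm, 4)))   # rounded to 4 places: those near-duplicates collapse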
  11. "Can I ask what the difference is in vertex count" Good question. The answer is: Edge-split 4568 vertices, bevelled 10240 vertices. More than twice as many! As you will no doubt be, I was surprised by this. In case others don't see why, I show the vertex normals of a typical corner of the mesh in the two cases, edge split above and bevelled below. Althought the edfge split version has only two vertex positions, these are split into three because the SL format vertex list includes the normals at each vertex. So it has to list them each time they appear with a different normal, even though the positions are the same. So that's six vertices for SL. Now the bevelled version actually has six vertex positions, but each has only one normal. So that should still be six vertices in SL.  Indeed, in Blender, both versions have the same vertex count, 4320. My initial thought was that the differences were the result of the UV mapping. There were a lot of seams, and these cause duplication of vertices along them in much the same way as do different normals, more in the bevelled version. However, removing the UV maps only changed the couints to 4320 (the same as in Blender) for the edge-spli version, and 7050 for the bevelled version. This must mean something peculiar is goiung on in the bevelled version that is causing tiny differences between what should be identical normals in the exported file, or that the uploader code that is supposed to avoid duplication of identical vertices is not working quite as intended. It's also possible that removing the UVs is not giving the same data as not having them in the first place. I will have to do some more experiments to find out.
  12. I did the experiments... As I suspected, if you make a completely flat downward-facing triangle, it gets placed half way up the bounding box. So that's no good. Adding a micro-triangle works OK, as long as you make the tiny triangle at the top small enough that it's less than one pixel when the LOD switches to it. It can be anywhere as long as it's above the downward triangle, because the uploader will stretch the whole thing so that it's at the top and the downward triangle is at the bottom.

The other methods depend on a quirk of the viewer code that I reported as a bug some time ago. It's still working in the latest viewer, so I doubt that it's going to be changed, but it can't be guaranteed to keep working. However, if it is changed, that will not affect meshes that have already been uploaded, as the uploader has already been fooled for those. It works because the uploader initialises the bounding box limits with the first value it finds in the list of vertex positions in the collada file. Then it changes them as it goes through the vertices referenced in the triangle list. So if that first vertex is never referenced in a triangle, and any of its coordinates is outside the range of the vertices used for triangles, it will remain there as the bounding box limit. So all we have to do is put a suitable vertex first in the position list (this can also be used to offset the pivot for doors, which is also the centre of the bounding box, albeit at the cost of increasing the size and therefore the LI).

Method 1 is to edit the collada file directly. The result is shown below for a single flat triangle (one material) file. The position count (blue) has been increased by three to account for the three inserted numbers (green) which define a vertex higher up, Z being greater than for the vertices of the triangle. As with the micro-triangle, it doesn't matter how high, as the bounding box will be stretched. We also have to increment by one (red) the count in <accessor>, and all the indices in the polylist (<p> tag) that refer to the position source, because we have shifted all the vertices by adding the special one.

<library_geometries>
  <geometry id="Cube_001-mesh" name="Cube.001">
    <mesh>
      <source id="Cube_001-mesh-positions">
        <float_array id="Cube_001-mesh-positions-array" count="">
          0.2 0.4 -1 0.4 0.2 -1 0.4 0.4 -1</float_array>
        <technique_common>
          <accessor source="#Cube_001-mesh-positions-array" count="" stride="3">
          .....
      <polylist material="Material-material" count="1">
        <input semantic="VERTEX" source="#Cube_001-mesh-vertices" offset="0"/>
        <input semantic="NORMAL" source="#Cube_001-mesh-normals" offset="1"/>
        <vcount>3 </vcount>
        <p> 0 0 0</p>
      </polylist>
    </mesh>
  </geometry>
</library_geometries>

Fortunately, there is a much easier way to do this in Blender. The picture shows two versions of the flat triangle with an extra vertex added at the end of an edge attached to one of its vertices (the edge makes it easier to see). To make sure this extra vertex appears first in the exported collada, simply select it and then snap the cursor to it (Mesh->Snap->Cursor to Selected). Then select all the vertices and sort elements by closeness to the cursor (Mesh->Sort Elements->Cursor Distance). Now when the file is exported, the extra vertex will appear first, followed by the more distant triangle vertices. In the left hand version, the extra vertex is directly above the triangle, made by simply extruding one of the triangle vertices. In this case, the triangle will be stretched in the horizontal plane to the extents of the bottom of the bounding box. If you want the triangle to remain small, you can move the extra vertex to the opposite corner of the bounding box, as shown on the right.

You can check that these have worked by dialling RenderVolumeLODFactor down to zero, which will always render the lowest LOD, even when you are on top of the object. To see whether a micro-triangle is small enough not to render, you should test by zooming out with RenderVolumeLODFactor set to 1. The flat downward triangle of any size is all right for things on the ground, but for things that might be seen from below, you should be able either to use two micro-triangles, or to use a single micro-triangle with an extra vertex.

ETA: Note that, because of the bounding box stretch/squeeze, one dae file can be used for any model with the same number of materials, as long as the material names are the same. So if you adopt a convention for naming your materials (e.g. mat1, mat2, ....) then you won't have to keep making new lowest LOD files.

ETA: Just to recap the context here - this is about methods for using a one-triangle-per-material flat polygon lowest LOD that makes the object disappear completely at the lowest LOD, thus avoiding the ugly triangles sometimes produced by the automatic LOD generator when the triangle count is set to zero. It deals initially with a downward-facing polygon, which will be invisible for objects on the ground. Extension to ungrounded objects is then mentioned. None of this implies a recommendation for the use of minimal-geometry lowest LODs. It is simply to help avoid aesthetically unpleasant results for those who choose to use them.
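For anyone who would rather script Method 1 than hand-edit the file, here is a rough sketch in R with the same xml2 package as the normal-rounding function above. It assumes a simple Blender-style export like the one shown - a single positions <source> whose id contains "positions", and polylists with just VERTEX (offset 0) and NORMAL (offset 1) inputs - so anything with extra inputs (e.g. TEXCOORD) or several meshes needs more care. The function name and the coordinates of the extra vertex are made up; any point above the triangle will do.

library(xml2)
add_bb_vertex <- function(infile, outfile, xyz = c(0.3, 0.3, 1)) {
  doc <- read_xml(infile)
  # prepend the unreferenced vertex to the positions array and bump its count by 3
  pos <- xml_find_first(doc, "//d1:source[contains(@id,'positions')]/d1:float_array")
  xml_text(pos) <- paste(c(xyz, xml_text(pos)), collapse = " ")
  xml_attr(pos, "count") <- as.character(as.integer(xml_attr(pos, "count")) + 3)
  # the accessor now describes one more vertex
  acc <- xml_find_first(doc, "//d1:source[contains(@id,'positions')]/d1:technique_common/d1:accessor")
  xml_attr(acc, "count") <- as.character(as.integer(xml_attr(acc, "count")) + 1)
  # shift every VERTEX index in each <p> up by one; with VERTEX at offset 0 and
  # NORMAL at offset 1, the VERTEX indices are every second number
  pnodes <- xml_find_all(doc, "//d1:polylist/d1:p")
  for (i in seq_along(pnodes)) {
    p <- pnodes[[i]]
    idx <- as.integer(strsplit(trimws(xml_text(p)), "\\s+")[[1]])
    vpos <- seq(1, length(idx), by = 2)
    idx[vpos] <- idx[vpos] + 1
    xml_text(p) <- paste(idx, collapse = " ")
  }
  write_xml(doc, outfile)
}
# add_bb_vertex("lowest_lod.dae", "lowest_lod_extra.dae")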
  13. It needs to have one triangle for each material. However, now that I think about it, it's going to be stretched to fill the bounding box of the high LOD mesh. So that's not going to work the way I was suggesting. Maybe you have to put a very tiny triangle at the top too. There's also a way of fooling the uploader with a vertex not assigned to a triangle. Maybe I can use that, but that might require editing the collada file. I'll have to do some experiments.
  14. "I almost never see that triangle" More often the case than not. Here is a set of four table bases. Left is high LOD throughout; then my manual LODs; then one with triangle count settings close to the ones you used (gave 3803, 3808, 476, 3 tris); then default LODs. First two rows either side of mediu->low LOD switch. Bottom row just after low->lowest. All at RVLF=1.125. You can just about see the a single triangle in the third one, but you probably wouldn't notice it if you weren't looking for it. It will depend on what triangles GLOD happens to end up with, and that can vary each time you upload the same file. It might sometimes be bigger. It's really pretty easy to make the single downward-facing polygon for the lowest LOD instead of hoping GLOD will be nice to you. So I would stil suggest that.  It's interesting to note here that even the first model, highest detail at all LODS, looks broken up in the bottom row. This must be simpoly because too many triangloes are smaller than one pixel. It does show why using too detailed models at lowest LODs is always going to be a waste of resource.
  15. Yes. It's too dark. Here's a lighter close-up. Given the effort involved in getting it to show up, I guess the answer should be that it's probably not worth it, at least for daylight. Under local lights, the highlights become more prominent.
  16. While I had the table, I tried one more thing that demonstrates how little effect the high LOD complexity has on the LI for something this size. I made a single-segment bevel on all the sharp edges (made with the edge split modifier in the original high LOD model) and then transferred the normals from the previous high LOD version. This has the effect of adding edge highlights under advanced lighting, as you can see in the left panel of the picture, compared with the right panel. Of course, these are only visible when you are this close. This raised the high LOD triangle count to 9136, nearly two and a half times as many as in the original. So that's two and a half times the work for the renderer (is it?). The download weight and LI were still 1. I would be interested to know people's views about whether all that extra geometry is worthwhile for this improvement. Doing something similar with much bigger objects, like a house, would not be so cheap, as the high LOD has an increasing effect on the LI. (9 am default sunlight, with shadows.)
  17. Indeed, I was not trying to disagree with you. My intention was to show the poor quality of the default LODs, to add to the context of your model. It was too tempting because this is the type of model GLOD does its worst with. However, since you raise the issue of GLOD with selected triangle counts vs. hand-made LODs, I will make the following comparisons.

First, your medium LOD uses the high LOD. This is not unusual in a model of this size, as the effect on download weight will be very small. However, it does mean that, compared with the hand-made LODs, the viewer has to download and display about eight times the number of triangles when the object is displayed at medium LOD. This is a load on the download and rendering system, compared with using the hand-made medium LOD, that might be considered unnecessary if you find the hand-made medium LOD's appearance acceptable.

Second, you have used the zero-triangle technique at the lowest LOD. The result is unpredictable because it depends on which few triangles GLOD decides to preserve, and that's not even always the same for the same input. If they happen to be visible, this is unlikely to be satisfactory (for those using low RVLF). This model is a good example of where the minimal triangle technique can be used effectively with a custom lowest LOD mesh. A single polygon with one triangle for each material can be placed so that it's on the ground facing downwards. Then it will never be visible, and the table is guaranteed to disappear completely at the lowest LOD. You can't arrange that with the GLOD-generated zero-triangle mesh.

I wasn't trying to make an exact comparison, but anyway, I added a top and bottom to the model (mine is 30-sided cylinders). I also added UV mapping, which increases the data size (although it's not needed if everything is blank textured). These didn't really make much difference. The triangle counts were 3803, 472, 188, 28, and the download weight went up to 1.00. I think these considerations do convince me that there are advantages to using hand-made LODs instead of using GLOD. However, I would guess it took me two to three times the work of just using GLOD. With many models, the extra work is much more than that. So I would not disagree that it is perfectly reasonable to decide that it's not worth the extra investment in cases where you can achieve acceptable results without it. I guess I just prefer messing around in Blender to fiddling with the uploader settings. I also enjoy the challenge of optimisation. So, of course, I am biased towards the hand-made LODs. In other words, I am still not trying to say you are wrong.

Oh - and a note about the size of the picture: there isn't really any choice if you want to show the model at the transition points as they happen in the viewer. These were original-size screen captures from a viewer session at full screen on a 1600 x 900 pixel monitor. So they show exactly what you see in the viewer. There is a point in the zooming system where you get one LOD if you are zooming in, and the next if you are zooming out, while the overall size is the same. That's what I call the transition point. So this kind of picture shows exactly what you see in the viewer at the extreme range of each LOD with whatever RenderVolumeLODFactor setting you are using. It's not really universally exact, however, because it will still be different with different resolution displays. If my monitor had twice the resolution, the pictures would be twice the size. Using RVLF=2 is a compromise.
Obviously, the pictures would be larger, and the lower LODs would look worse, if the equivalent points were captured with RVLF=1.125 (default on LL viewer).
  18. Interested in your example. So I made a thing like the base of your table and uploaded it with the default LODs generated by the uploader, and with hand-made LODs. Then I looked at it with RenderVolumeLODFactor=2. The triangle counts for the auto-LODs were 3576, 894, 220, 110. For my hand-made LODs they were 3576, 452, 168, 24. Here they are at the transition points, where you can get both flanking LODs at the same size by zooming in and out a bit. The auto-LOD is on the left, mine in the middle, and the one on the right is using the high LOD at all LOD steps. The download weights are 3.5, 0.6 and 126 at 1.2m high.
  19. Good start, except that I think an elephant would have a voice at least two octaves lower! :matte-motes-evil:
  20. Much of the difficulty with shiny things under Advanced Lighting in SL comes from the limitations of the two types of specular reflection. There is pure specular reflection, which is added to the diffuse texture, but only reflects the sun, moon and local lights. Then there is the environmental reflection, which attempts to mimic reflection of indirect light, but comes only from the sky (and a bit from the sun). Instead of being added to the diffuse (texture + color) reflection, the environmental reflection replaces it. So as you increase the environmental reflection, you lose the texture and/or color that was there without it. You can investigate the effects of these settings with a set of objects like those in this picture. These have a blank white texture and a fully saturated pure red color for the diffuse reflection. They also all have a blank white texture added for the shininess (click the shininess radio button and select Blank texture). The settings for Glossiness (left to right) and Environment (front to back) are 0, 50, 100, 150, 200 (100% would be 255). They are shown at one angle under default lighting at midnight, midday and 6pm (top to bottom). Of course, this is only a tiny set of possible lighting conditions. If you want to choose the best combination for your use, then you will need to make such a set of objects and look at them in all the lighting conditions you expect them to be used in. If you want highlights without advanced lighting, then you will have to use baked (or painted) highlights. These will be unresponsive to lighting conditions, which may be incongruous, and will conflict with highlights from Advanced Lighting if it is turned on. It is possible to reach a compromise that is acceptable in either condition, but that is very much a matter to be determined by personal choice.
  21. Probably best to stick to one scale for now. Essentially, for intermediate-sized objects (5m..30m, say) the same decisions and LI consequences apply, but they just move up the LOD scale to higher LOD switches. Then for the very largest (60m+) objects, LOD has no effect on either the visible object or the LI. Then there is the whole area of breaking large objects into smaller pieces to change their LOD behaviour and LI, including splitting interior and exterior parts of buildings. That's a whole other story, but it depends on understanding the basic behaviour of single objects first. So that is probably best left to another video, if you feel like it after the first one :matte-motes-dont-cry:.
  22. "the medium level has almost no effect on your ending land impact." Well, that actually depends on the size of the object. I was really simplifying to keep it simple, and had trees in mind. The same sort of considerations apply with the single-triangle trick at different LODs depending on the object size. It's fair to say it's more common at the lower LODs. I'm looking forward to seeing your video, wondering how you will cover such potentially large subject. :matte-motes-grin:
  23. Sorry. I guess that's a biased adjective that I should have avoided. My view is that setting the second (medium) LOD to a single triangle is essentially an abuse of the intended application of the facility to define LODs, because it forces users to use viewer settings outside the normal range to preserve acceptable appearances. I do realise that others don't share this view, and that they are as entitled to that opinion as I am to mine. I'll change it. ETA: In fact I could go further. If LOD meshes are provided, instead of using the default LOD generator, it is possible to make sure that a single triangle is invisible (e.g. upside-down, underground). The effect is that the object disappears entirely, rather than collapsing into an ugly state. Although I don't think this is as good as a properly crafted LOD, it is more acceptable than the ugly collapse cases.
  24. In the LL viewer, the Object detail slider will only raise RenderVolumeLODFactor to 2.00, the default for Ultra graphics. That isn't generally enough for the products of the extreme LOD settings used by some creators.
  25. Perhaps I have misunderstood the problem here, because I find no obstacle to baking out the normal map that contains the normals generated by applying a procedural texture to the displacement input of a Material Output node. Here is an example setup. The image "Untitled" is selected in the node editor so that that is where the baked normal map goes. The 3D view is in "Rendered" mode, to reveal the displacement effect. The inset at lower left is the same cylinder with a blank texture, pink colour and the baked normal map applied.