
Drongle McMahon

Everything posted by Drongle McMahon

  1. PS. I guess you will need a door somewhere? That's ok. Just cut it out of a wall in the right place. It won't increase the physics weight much, especially if you make it a gap the whole height of the wall. The secret for low physics weight with triangle-based shapes is to avoid small/narrow triangles.
  2. The physics mesh you show is the right kind for making a hull-based shape, which you do by clicking "Analyze". As far as I can see, it should work with all the default parameters. It should then say that you have 15 hulls with 122 vertices. (This will give a prim-type weight of 5.48, so the LI will be 5 or more.) Is that what the uploader says - the numbers are just above the bottom buttons? If not, something is wrong. Are you sure all the slabs have all their normals pointing outwards? You describe this as the shape for the inner walls, but it sounds as if your upload is combined with the other walls etc. Note that every (Blender) object in the uploaded scene has to have its own physics shape. If any does not, then it will get the default single convex hull for its own mesh. Thus the convex hull you are looking at may not belong to the inner walls*. Maybe it's the physics of the root of the linkset (yellow highlight). Try unlinking and relinking with the inner wall as the root (selected last before linking). Then set all the other parts to "None" and the inner walls to "Prim", and see what you get. This isn't the long-term solution, but it will show you what's going on. While you have made the right sort of mesh for an "Analyzed" shape, in this case you would probably get a lower physics weight by using a triangle-based shape, especially as your outside walls can also have their own physics so that there is no need for thickness here. Then the inner walls' physics can just be a single plane for each wall. Same thing for the outside, keeping the normals as they are for the walls. Anything else, such as window frames, if they are separate objects, can have their default physics shape and be set to type "None" inworld.
There is one more thing you need to be careful about when uploading a multi-object model. The visual objects and their physics counterparts must be in the same order in the exported file. To ensure this in Blender, name them with a postfix, such as "obj1_hi"...., and "obj1_ph". Then for each file, make sure the "Sort by object name" option is selected in the exporter. That will ensure each visible mesh gets associated with the right physics mesh. Otherwise all sorts of horrible effects can be discovered. *ETA: in fact it appears that the blue convex hull doesn't reach to the bottom of the inner walls.
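For illustration, here is a minimal Blender Python sketch of that renaming step (just an illustration; the "_hi"/"_ph" suffixes are only the example convention above - select the visible meshes, run it, then repeat with "_ph" for the physics meshes):

    import bpy

    suffix = "_hi"  # use "_ph" when the physics copies are selected
    for obj in bpy.context.selected_objects:
        if not obj.name.endswith(suffix):
            obj.name += suffix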
  3. Unfortunately, the sculpt map image is the only input you have for a sculpty. It only defines the xyz positions of vertices (as rgb colour) within the bounding box of the object. There is no way of changing the size other than stretching/squeezing the object. This is one of the inherent limitations of the sculpty format. You would have to use mesh to achieve control over the size within the upload. If you can get the bounding box dimensions from 3Ds, then you could just type those into the edit dialog, rather than having to match things by eye.
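As a rough illustration of that encoding (my own sketch, and the exact scaling is an assumption; the point is just that each 8-bit colour channel is read as a fraction of the bounding box, so the map carries no absolute size):

    # hypothetical sketch: one sculpt-map pixel (r, g, b in 0..255) -> vertex position
    def sculpt_pixel_to_position(r, g, b, bbox_min, bbox_size):
        return (
            bbox_min[0] + (r / 255.0) * bbox_size[0],  # red   -> X
            bbox_min[1] + (g / 255.0) * bbox_size[1],  # green -> Y
            bbox_min[2] + (b / 255.0) * bbox_size[2],  # blue  -> Z
        )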
  4. Oh well, not that then. I did test my theory though: Made a mesh with stacks of 12-vertex ngon disks (10 tris each), displaced to make flat-shaded normals different for adjacent triangles. Triangulated, it uploaded fine with 65550 vertices. Without triangulating it failed, as predicted. It never got as far as showing a vertex count, as with your mesh, but the error message wasn't the one you got. It just gave the red cross and "Missing level of detail". Taking off two of the disks, dropping the vertex count to 65530, it uploaded fine without triangulation, although I had to use Generate Normals and set the crease angle to 0 to see the full vertex count - otherwise it treats the triangulated ngons as smooth shaded, I guess. That last observation raises the possibility that my explanation isn't completely accurate, but the main prediction is right - with untriangulated ngons it is possible to be hit by the 65534 vertex limit, but triangulation before upload can overcome it. I guess there must still be something else in your case if the triangulation doesn't work. Since the error message I got was different, I guess that also says it's something else.
  5. "Note that I do not do mine AT ALL in the way Drongle does" I don't know why you say that. For the picture-bearing face we both used project from view (bounds). That's the important thing. I did mapped the frame and back in one go using default unwrap, which is quicker, but requires knowing how to make and use seams. My frame is a bit more complicated because the woodgrain of the edges continues on the back, and the picture is inset, but those details don't really matter, I think. We both used face inset to make the frame (although there are plenty of alternatives).
  6. Here is a simple frame with two materials. The lighter is the picture area. Look directly down on the frame (Numpad 7 if it's horizontal, as here). Select just the picture face, then press the U key and select "Project from View (Bounds)" from the options. You will get the UV map of that face as shown at the top, filling the whole UV space exactly. Now a texture dropped on it inworld will also exactly fill that face. The mapping of the rest of this frame is superimposed in the lower picture after selecting everything. The details of that don't matter for your question, but note that there is only one UV map because SL will only ever use one. The mappings of different materials can overlap like this.
  7. Ah. I might have seen how this can happen. The code looks at each polygon and triangulates it before checking the numbers, first the number of vertices (>=65535 gives the error), then the number of indices in the triangle table. It does these checks after each polygon. There are always three indices per triangle, and the check is whether it's >=65532. There can never be more vertices than indices in the triangle table. So as long as the input is all triangles, that is to say all polygons have only three vertices, the first check can never fail before the second one. The second check does not produce an error. Instead it starts a new material (prim face in SL), which resets the counts to zero. So triangulated meshes with more than 65535 vertices can always be uploaded.
However, if there are ngons (even possibly just quads) then a polygon loads more than one triangle between each series of checks. Then the number of vertices can exceed 65534 before the checks are run. In that case the first check will produce the error. Generally, this will only happen if the mesh is flat shaded without any coplanar adjacent faces (and/or the UV map is totally fragmented*), because otherwise many of the indices in the triangle table will point to already stored vertices and the vertex count will increase more slowly than the triangle index count.
So can this explain why going via Blender allows the mesh to be uploaded? Yes! Firstly, the default setting (for the SL upload preset) is for the exporter to triangulate the mesh. If that's active, we are back to triangles only, and the first check should never be triggered. Even if it isn't active, so there are still ngons and/or quads, the order of polys will be different. That could change things so that the second check is triggered exactly at the end of a polygon, allowing the upload to proceed.
This presents a possible solution - just triangulate the mesh before you export it from Maya. If I am right, it will then upload. It's better to triangulate before uploading anyway, because that gives you more control than leaving it to the uploader. The details of the triangulation can affect shading if polys are non-planar. It's going to get triangulated by the uploader if you don't do it anyway, because SL only uses triangles. Just make sure you can undo the triangulation after exporting, or save an untriangulated copy, because editing triangulated meshes is horrible.
Incidentally, this thing with the vertex count limit being pre-empted by the triangle index count check is the subject of a very old bug report in the jira (BUG-1001). Maybe I need to add to it with this new facet of unexpected behaviour. Let me know if the triangulation works.
*Note - this can also happen if there is no UV map at all, because the uploader then uses uninitialised data for the UV map!
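For anyone who wants the logic spelled out, here is a rough, hypothetical Python sketch of the check order as described above (not the actual uploader code; only the limits and the ordering follow the description):

    MAX_VERTS = 65535    # vertex check: reaching this gives the hard error
    MAX_INDICES = 65532  # index check: reaching this silently starts a new material

    def fan_triangulate(polygon):
        # an ngon contributes (n - 2) triangles in one go
        return [(polygon[0], polygon[i], polygon[i + 1]) for i in range(1, len(polygon) - 1)]

    def add_polygon(polygon, verts, indices):
        for tri in fan_triangulate(polygon):
            for corner in tri:
                if corner not in verts:
                    verts.append(corner)
                indices.append(verts.index(corner))
        # both checks run only after the whole polygon has been added:
        if len(verts) >= MAX_VERTS:
            raise ValueError("too many vertices")   # the error you are seeing
        if len(indices) >= MAX_INDICES:
            verts.clear()     # start a new material (prim face in SL),
            indices.clear()   # which resets both counts to zero

With triangles only, the index count always hits its limit first and just splits the material; an ngon can push the vertex count past its limit before either check gets a chance to run.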
  8. If you mean you did
Maya->Collada->SL : ERROR
Maya->Collada->Blender->Collada->SL : OK
then that is strange indeed. The files will be far too big to look for anything by hand. The error message you see seems to come from an XML file. I don't know how that gets displayed. So I can't look for where it's triggered in the code. There are error messages embedded in the code that are easier to locate. I think they (or at least some) come out in the log file if you have the right logging option. Before you log in (LL viewer) you can set that in the menu Debug->Set Logging Level->Error. I think that might be enough. Then if you log in, immediately start the upload, and log out as soon as the error appears, you might find an error message in the log file which might tell us a bit more. In particular, if it comes from the code, it might be possible to see where it's triggered at the lowest level. Of course that might not reveal what the problem is in the collada file. When you deleted some of the faces, do we know whether it was just the number left, or whether something nasty got deleted by chance? Does it still work if you delete a completely different set of faces?
  9. Me->Preferences->Advanced->Show Developer Menu, then Develop->Render Metadata->Physics shapes. That will show you the physics shapes inworld, but you may not understand what you see, and you can't edit them. You have to get them right in the upload (unless you opt for the linked invisible prim method). Where is your mesh? I could have a look if you like. It does sound to me as if you do have multiple objects. This means that the default physics for each will be the convex hull of its low LOD, which can be very bad. If you did upload a physics shape, then maybe the objects are in the wrong order, which can create all sorts of anomalies. In case it's relevant, Blender does have its own physics, which does get exported to collada, but none of that is used by the SL uploader. It only uses ordinary mesh geometry. Any Blender physics is completely irrelevant to SL. There are few cases where the visible mesh will make acceptable physics, especially for buildings. So you generally have to make a physics mesh. If each wall etc. is a separate object, then it might work, but that makes the LI higher.
  10. There's a lot to learn about making physics/collision shapes for your houses. It makes a lot of difference whether your house is all one mesh object or several. If it's several objects, then each will get its own physics shape. If you do nothing on the physics tab of the uploader, then it will make a default physics shape for each object, which will be the convex hull* of the low-LOD mesh. Whether it corresponds to the visible mesh shape depends on whether the low-LOD mesh does, which is unlikely if you rely on the auto-generated LODs. This default convex hull is the collision shape you get when the "physics shape type" is set to "Convex Hull" inworld. If you haven't specified another shape, that is the only choice other than "None".
If you specify a mesh on the physics tab, one of the LOD meshes or a new dae file, then that will be used to make the default convex hull. It's still a convex hull with no holes. As long as you don't click "Analyze", this will also make a triangle-based physics shape which will be used instead when the physics shape type is set to "Prim" inworld. Note that the uploader will only tell you the physics weight of the convex hull, and you may get nasty surprises when you switch to "Prim" with a triangle-based shape if it has small/narrow triangles (see below). This kind of collision shape basically consists of the triangles in the specified mesh.
If you do click "Analyze", the uploader will try to make a set of smaller convex hulls that approximate the shape of your mesh. There are lots of parameters that control this process, but it's never very good for buildings. You either get very expensive (in LI) shapes or you get the open spaces filled in. The best thing is to make a new model that already consists of only convex hulls, and keep small gaps between them so that they never overlap. That way you have the most control over the "Analyzed" shape. Note that you still have to set the shape type to "Prim" to use this shape. Otherwise it will use the single filled-in default convex hull.
Here are three physics shapes I made for an octagonal greenhouse/shed some years ago. The cheapest (in LI) is the top one, which is just 22 triangles, so that the physical walls have no thickness. This is used without "Analyze" to make a triangle-based shape. To get more accurate collisions, you can use a triangle-based shape with double-layer walls, at twice the cost in physics weight. If you look carefully, you will see that there are no faces on any of the narrow edges of the walls or roof parts. This is because these would be narrow triangles, which greatly increase the physics weight. The fewer and bigger the triangles the better. The lower model is for using "Analyze" to make a convex-hull-based shape. Now the narrow edge faces are all there. Each panel is a simple convex box and there are gaps between them everywhere. If I remember correctly, this is the most expensive of the three (physics weight should be 4.76, LI>=5).
It's important to know that the physics shape for each object will be stretched to fit the xyz bounding box of the high-LOD visible mesh. So if it doesn't fill it already, you will need to add to it so that it doesn't get distorted to mismatch with the visible shape. Also, if you have multiple mesh objects in the visible model, then you need one object for each in the physics model and they need to be in the same order in the collada file (use the sort-by-object-name exporter option to control the order).
It is also possible to make a physics shape from simple regular prims - link them to the mesh, make one of them the root of the linkset, make them invisible by setting their texture to "Default Transparent", set them to physics shape type "Convex Hull" and set the visible mesh to "None". I think that's an unsatisfactory solution, because they can get unlinked, but it does work. You still need to be careful with prim parameters because the physics weights will get very high if you start using them. Keep to simple box prims if possible. You can see my octagonal shed/greenhouse thing on top of the yellow house in Siona. Use Develop->Render Metadata->Physics shapes to see the physics. If the block in your model was a default cube, that probably means you forgot to use the "Selected only" option in the exporter, or selected it by mistake.
*i.e. the mesh filled in until it has no holes or concavities.
ETA - :matte-motes-shocked: What have they done to the image interface - Ugh!!!!
  11. I didn't answer this because I don't know where this error message gets triggered. In the code there is supposed to be a limit of 65536 vertices, but the trap for that should never trigger because another limit, on the triangle count per material, always gets triggered first, as far as I know. The latter error doesn't stop the upload. Instead it just secretly starts new materials. I just tried uploading a mesh with more than 100,000 vertices with the current LL release viewer and there was no problem. So I suppose it has to be something specific to your mesh. You say you tried uploading old versions of the mesh that worked before, but did you avoid the slm file mechanism? This is the system that keeps settings for the last time a file with the same name was used. If something happened with those settings and the new file, it would then affect the old file too. You can stop the slm thing by (a) deleting the .slm file from the directory where your mesh file is, (b) clicking the reset button on the uploader dialog, or (c) permanently disabling the relevant debug setting (I think it's MeshImportUseSLM). I have it turned off permanently because it can cause several different problems. If that doesn't change anything, can you say exactly when in the sequence of actions the error message appears. Also, what you did with the physics tab (if anything) and LOD slots, and whether the mesh was rigged (in which case I won't be able to help unless it also happens when uploaded as a static object). Also, what were the vertex and triangle counts displayed in the uploader (if it got that far)?
  12. "Makes links menu (ctrl+ L) and choosing Modifier, this modifier is copied to all the objects selected." I didn't know that one - brilliant. I can't count the number of times I've been frustrated with repeating the same modifier setup over and over. "how you ever work out what all the options do in these advanced panels of Blender" Experiment. Like the proverbial monkey with a typewriter. I'm still a long way from understanding the data transfer modifier though ... what is a "Layer" in the context used there. anyone? ETA: I guess it wasn't as useless as I thought because you only had a couple of orientations to deal with. If they had been all over the place, it would probably have been much more difficult.
  13. Hmm. I was over-optimistic there. It turns out that the data transfer modifier only worked because the shackles in your example file were in the same orientation. That means you would have to rotate them to match the source before doing the transfer, then back again afterwards. That's more than twice as much work as simply deleting all but the mapped one and replacing each one with a duplicate. So it's completely useless. Here's the evidence. First I tried some very simple objects, using the make-link-transfer-UV method, to mirrored versions, and they all worked perfectly. So maybe it requires a certain complexity before the matching algorithms run into trouble. So I resorted to the coffee cup. The one on the left is the "original", which was UV mapped after the others were duplicated and mirrored and normal-flipped in edit mode, then separated (actually, one mirrored one was made that way, then it was duplicated in object mode). A and B use Make-links-Transfer UV from the original; A was without doing anything else, B was after mirror/flip-normals/transfer/mirror/flip-normals. If you look at the UV maps, these two look as if they have about the same amount wrong, but from the texturing it is clear that A is a complete disaster, while B has quite a lot of properly matched faces. Neither is useable. C was made by rotating the whole object 180 degrees before using the data transfer modifier (then rotating it back again). The transfer is perfect, but if you look carefully you can see that the result is not a mirrored version of the original. If you don't do the rotation, you get D, which is as useless as A or B.
  14. Inspecting even the transferred map of anchor-3 using synched selection and selecting a face at a time reveals that it is worse than it looks. Some of the polys are mapped very wrong even when they look normal. You can also see this with a test grid pattern. Most of the errors seem to be use of the wrong one from a set of split vertices (split by UV seams). The sorting works a bit, but not well enough. The time I tested it, there was only one misplaced vertex, but that isn't reproducible. However, the Data Transfer modifier does give much better control of the matching between source and destination meshes. Surprisingly, it seems that you need to transfer the UV map, which is a face/corner property, using the "Nearest Face & Nearest Matching Face Normal" option. Here are pictures of anchor-1 (top) and anchor-4 (which is much worse than anchor-3) before (middle) and after (bottom) applying the data transfer modifier set up as shown. It looks about perfect. It wasn't good with other options. Note that the option for evaluation in global space (button with sphere and cube at the end of the source name box) is off. The "nearest" is then evaluated in local space for each object. For that to work, the origin has to be in the same place relative to the geometry. I made sure of that before adding the modifier by Transform-Origin-to-Geometry for each object. I also agree that this is still MUCH more work than mapping before duplicating!
  15. Still a bit tedious, but I could make it work in your example by sorting the mesh elements (Select all, Mesh->Sort Elements->xxx, then choose Vertices/Edges/Faces in the tool panel). If they all have different orientations, then I snapped the cursor to the same point in each mesh, then used "Cursor Distance" for the xxx choice. After the sort, the UV map transferred as expected (or at most one vertex to move manually). I did all three sorts (V/E/F) in case. So I don't know which are required. I assume this means the mirroring or other manipulation changed the element orders and that the transfer is dependent on the order in some way. It's also possible that you might be able to get the right effect by using a Data Transfer modifier, with the right methods of matching source and destination elements, but that would be tedious to work out and to apply.
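In case it helps, here is a hypothetical Blender Python sketch of that sorting step (an assumption on my part: it expects Edit Mode on the object, the 3D cursor already snapped to the matching point, and depending on your Blender version the operator may need the 3D View active to run):

    import bpy

    bpy.ops.mesh.select_all(action='SELECT')
    # do all three sorts, as above, by distance from the 3D cursor
    for elem in ('VERT', 'EDGE', 'FACE'):
        bpy.ops.mesh.sort_elements(type='CURSOR_DISTANCE', elements={elem})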
  16. Don't know if this will help, but it's possibly relevant that SL will import only one UV map, even when the Collada contains multiple UV maps. I'm not sure how it selects, but in the experiments I have done it has always been the last one in the file. If your rearrangement of the UV spaces involves multiple maps that get exported into the Collada, that could be the problem. In that case, you need to eliminate all but the map you did the painting with. (Don't ask me how, I only use Blender where you can tell it which map to export). Why do you need non-overlapping UVs for different materials? Is that a limitation imposed by Mudbox? If so, that's a pity because it limits the pixel density of the texture on each face.
  17. "...starting out with Maya was it would give them the most ability to stress test..." That's what I find most strange. As far cas I can see, excluding other sources cannot possibly increase the range of stresses. Surely the opposite is true - excluding other sources excludes stresses that may be uniquely presented by the excluded sources. So it can only reduce the scope of stresses instead of maximising it. No? eta: It does. however. mean that they can avoid "invalid" stresses that might arise from "non-compliance" issues. By sticking to those that lrgitimately control what "compliant" means, such issues are definitively sidestepped. Medhu has already pointed out the consequent vulnerabilities that might result when (if?) the platform is opened up to other sources.
  18. I understand your point about excessive content control being too expensive and limiting the market. I wasn't really imagining individual content filtering, but rather the filtering of the creators, as you suggest. Your anecdotes suggest that might still not be effective. I suppose the alternative is indirect control with something like the LI system, but much more effective. It will be fascinating to see what they come up with. I suppose it will need to burden the merchant directly, not just his customer, with the penalty for "bad" content. Perhaps an LI-equivalent-based per-sale tax, so that excesses mean uneconomic prices. By the way, I thought Ebbe's earliest statements indicated that Sansar would be using an existing engine, and that Unity was mentioned in that context although no decisions had been taken. Indeed it seems highly unlikely that they would want to reinvent the wheel as far as the underlying technology is concerned. In that case, much more of their efforts will be devoted to the layers above, which, amongst a host of other things, would include controlling content parameters. Thus Sansar would really be a meta-platform. Some constraints would be imposed by the underlying engine. Have we heard any more about that?
  19. Unless something has changed since I last looked into it, FBX is still the proprietary format of Autodesk, and is without any public definitive specification. So they are free to change it as and when they like, so that compliance can only be defined by them. In those circumstances, it is impossible for anyone else to claim compliance, because there is nothing to comply with. Thus it may be a desire to test only with guaranteed compliant content that leads them to allow only Maya users to participate. Unfortunately, you are right that this will likely result in irreversible embedding of code that will never work properly with the products of other software. This is absolutely in conflict with the stated objective of "...eliminating the complicated challenges that today limit the medium to professional developers with significant resources...". Instead, quite apart from giving precisely that group the competitive advantage of early access, it will increase that limitation instead of eliminating it. Of course that is only strange if you take the stated intention seriously. As myself and others have pointed out before, the huge increases in participation that must be the aim are probably best served by eliminating the amateur (in both senses) from the creation process. Consequently, I take that intention with a very large pinch of salt. I do not expect Sansar to be a place for people like me, not a professional creator and not a willing consumer. I think there are sufficiently few of us that we don't matter at all.
  20. SL is a real-time dynamic rendering system. It has to render everything in a scene many times per second. In contrast, Blender can spend minutes rendering each object. This limitation of SL means that it cannot get anywhere near the quality that you can achieve with either rendering engine in Blender. It can't do ray tracing. It can't do proper reflection or refraction. So your model will never look as good in SL as it can in Blender. To get as near as possible to the rendered object in Blender, you can use baked textures that include some of the available lighting and material effects. The problem there is that they can conflict with the dynamic lighting in SL. Unless use of baked effects is restrained, this can lead to undesirable effects. It is best used with fixed local lighting and static object positions. Given your description of what you are doing now, it may be that you could get the best return for your efforts by becoming familiar with the peculiarities of SL normal and specular maps (in case you are not already). It takes a lot of experimentation, but these can provide substantial improvements in rendered appearance, provided you are using advanced lighting. Using local textures means you can experiment without upload costs, even on the main grid.
  21. "Hopefully SL can make it so you can have more faces in the future" In the development viewer Project-Import, this is possible, but it works by automatically splitting your mesh into multiple objects until each has no more than eight materials. It then gets imported as a linkset. Unfortunately, so far this has some rather unpleasant consequences. As usual for the automated uploader functions, you can do much better job by doing the splitting it yourself. So whether this will ever make it into the release viewer is unpredictable. My own view is that, since each material means another texture and so more loss of performance, it would be better to avoid it. We don't need chairs uploading a couple of dozen 1024x1024 textures before they can be rendered!
  22. Probably you still have the texture on all the sides of the object. Try setting the texture of the whole thing to "Default Transparent Texture" (type that in the edit box after clicking the little texture preview on the texture tab of the edit dialog). Then put your texture on just the right surface, either by selecting the "Select Face" radio button, clicking on the face and choosing the texture, or by simply dragging the texture from inventory onto the right face. If that doesn't work, there may be a non-transparent edge to your texture, due to feathering or something. Try setting the horizontal and vertical repeats to 0.99 to see if that's the problem.
  23. Sorry about perimeter. I should perhaps have said Profile, as that's what it's called in the code and the edit thingy. The usage is a bit inconsistent though. Prims are supposed to be a profile swept along a path, but the distinction can get ambiguous, especially for the cube prim. I'm not aware of any documentation. However, I did find this, which has some interesting things I didn't expect (holes aren't really holes!). It appears the author is an original designer of prims. So if anyone can answer definitively.....
  24. If you are in Europe and the euro-politicians get their way, you are going to be breaching the copyright of the architect who designed the shop before you even get close enough to see the picture, never mind the artist's copyright!
  25. Yes, it's tricky, and I may not be right. As you change the pathcut ends, a texture applied to the inside of the hollow doesn't shift at all. However, if you look in wireframe mode, the mesh (at high LOD) always has 5x3 faces on each of the sides of the triangle or square. These get smaller as you increase the cut, but are always equally spaced. So it must be recalculating the U-dimension ends of the UV map so that the texture doesn't move, then dividing it in three. So the question is what happens if it uses angle division instead of perimeter division in that context.
From the UV maps below, you can see that, except for the perimeter subdivision, the segments of the map have to be different across these inside faces for the texture to appear flat. In fact, even with these maps there would be some distortion, because the variation of stretch is continuous, while the UV map approximates that with a series of discrete segments within each of which the stretch is constant. The distortion would be more severe with only the three segments of the inworld prim's mesh, even though the ends have been recalculated. There would also be a jump when it switched to lower LODs (with fewer segments). So I think it is necessary to use the perimeter subdivision in order to keep even stretching of the texture at all, and probably even more so to avoid it shifting with changing pathcut. That is in addition to avoiding the complicated calculations that would be needed to adjust the UV endpoints when the fitting to the texture space is variable across those inside faces. (Actually, that all applies equally to outside faces, where I don't think we would expect anything other than even stretching of texture along the perimeter, would we?)
For good measure, here are three alternative models. These have 40 segments around inside and outside, many more than the prim meshes (12 for rectangle, 9 for triangle), which makes the potential for distortion more evident, I think. First the models: all have the zero point of the perimeter on the right. The middle one uses equal subdivisions of both triangle and square perimeters. On the left, the square is still equally divided, but the triangle has divisions radially aligned with the divisions of the square. On the right, both are divided at equal intervals around a bounding circle.
Here are the natural UV maps - segment widths proportional to the face widths in the model, to achieve the closest approximation to even stretching of the texture over the surface. A seam, becoming the left and right edges in the map, is at the zero point. At the bottom is the inside of the middle model, with even divisions around the triangle perimeter. The highlighted edges are the extra ones where the corner of the triangle doesn't coincide with a 40th of the perimeter. Next above is the triangle from the right hand model, subdivided at equal angles. At the top is the equally divided square, which is the same in the first two models, and below it is the square with subdivision at equal angles.
However, since the UV maps are generated after the cut end points are calculated, and these are always equal divisions of the perimeter, like the mesh, then the texture will always be evenly stretched. So there won't be a distortion problem. The only problem would be converting the cut angle into the right point in the perimeter/UV space, which would be a bit complicated because the relationship is non-linear, but not impossible.
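As a small numerical illustration of why equal-angle division gives uneven stretch (my own sketch, nothing from the prim code): divide the boundary of a square profile at 40 equal angles and compare the resulting segment lengths with the equal segments that equal-perimeter division gives.

    import math

    N = 40
    def square_point(theta):
        # where a ray at angle theta meets the boundary of a 2x2 square centred at the origin
        c, s = math.cos(theta), math.sin(theta)
        m = max(abs(c), abs(s))
        return (c / m, s / m)

    pts = [square_point(2 * math.pi * i / N) for i in range(N)]
    seg = [math.dist(pts[i], pts[(i + 1) % N]) for i in range(N)]
    print(round(min(seg), 3), round(max(seg), 3))  # about 0.158 vs 0.273
    # equal-perimeter division would give 8 / 40 = 0.2 for every segment

So to keep the texture evenly stretched with equal-angle division, the UV segment widths would have to vary by that same factor, which is exactly the complication that dividing the perimeter equally avoids.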