Drongle McMahon

Everything posted by Drongle McMahon

  1. What instancing means .... same drawcall ... I can't claim to have looked at the code, but I would be surprised if there's any instancing at that level in the standard viewer, even with legacy prims. Others would know better. There is instancing in the sense of downloading the mesh assets (geometry etc.) only once, independently of texturing, provided the only differences between objects are simple transformations (and provided they aren't uploaded together with different textures, getting different UUIDs). I'm not sure whether that even applies to the generation of geometry from sculpt maps for sculpties that share a map. I think that happens at the object level, not per map. (You do see coordinated snapping-in of objects with shared maps, but I think that's the map download/cache fetch.)
  2. Medhue, where is the lowest LOD?* At this size, the LI is dominated by the lowest LOD. Apart from that, your models look OK. I guess I would have had less detail all the way down, and more drastic reductions, but that's a matter of choice - I'm not so interested in guns that I would be looking closely at them. It would be interesting to know the actual triangle and vertex counts (uploader's version). The default targets for the auto-LOD are 1/4 of the vertices at each step. Yours looks like a lot less pruning than that, but that doesn't mean much. I haven't used the Blender decimator for ages. It used to destroy the UV maps, so I only use Dissolve now, which takes longer but keeps the maps intact. Does the decimator preserve UV maps now? Before Dissolve, I used to spend much more time fixing UV maps than reducing the geometry. *One triangle? Actually, for this case, two triangles (for both sides) with the right shape and a suitable texture might be quite reasonable. Maybe four, so that you still have the magazine from head-on.
  3. In reality, once a mesh object is downloaded, all other copies of it should have a download weight of zero. That would be right if the downloading burden were the only consideration. It would make a huge difference, and it would encourage more efficient re-use of geometry. For example, I once made a tree where all the branches were instances of the same mesh at different scales, but under the LI system it was horribly expensive. However, I think instancing doesn't help (much) with the GPU effort required to render multiple objects. So when they decided that that was the equally or more important target for limitation with LI, the idea of rewarding instancing no longer fitted the aim. You're also right about textures. I have no real data, which would be really interesting to see if it exists, but I do get the impression that both downloading and rendering large textures contribute more to the slow rezzing problem than geometry does, especially now that we have normal and specular maps too. On the other hand, I think good textures/materials can contribute much more to visual quality than geometric detail can.
  4. I don't disagree. Poor choices by consumers are as much to blame as the producers who choose to exploit those choices. No difference between RL and SL there, except maybe that RL does have a few laws to eliminate the worst abuses. Still, the cynical exploiters can make the most money (ever heard of Gerald Ratner?). Fortunately for me, I'm not trying to sell anything, so I can ignore the destructive realities of commerce.
  5. an extremely bad prim/land impact system I should have qualified my defence of the LI system by restricting it to the download weight component. The triangle-based physics weight system, on the other hand, is entirely deserving of your assessment!
  6. Ah. I think I see what that's about - the ability to coordinate the LODs of several objects together. That would be a very nice addition. The uncoordinated switching of objects that are part of the same thing, because of either size or distance differences, can be very disconcerting and difficult to mitigate. It might be less easy to implement LOD groups in a system like SL, where, unlike a Unity game, stuff from different makers gets mixed up and the contents of a scene are not all controlled by one author. If we are right in assuming that the SL successor will be using something like Unity, or a comparable engine, it may well inherit much more flexible LOD control.
  7. The one-triangle medium LOD trick should have been prevented somehow. I'm not sure if I'm familiar with that trick. Please tell! Setting the max triangle count for the medium/low/lowest LODs to zero, so that you get one triangle per material at all but the highest LOD. It is often recommended here for LI reduction for stuff that is always inside buildings, on the basis that you never see it further away than the first LOD switch. But if you have RenderVolumeLODFactor set lower than the designer did, this can lead to horrible collapses to random triangles while stuff is still within view. The earlier collapse of smaller objects is also often forgotten, and woe betide those who like to look through windows. If done really well, with explicit one-triangle LOD models where the triangle is always invisible (e.g. facing into the ground), this can work well, as long as you can accept things disappearing altogether, but the zero auto-LOD method will never work like that. As long as there are examples inworld, you can inspect the LOD behaviour of items you consider purchasing by dialling in a lower RenderVolumeLODFactor while looking at them. You can't do that for things only shown on the Marketplace. That is a serious limitation of the latter that facilitates bad practices.
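As an aside on the mechanics: the distance at which each LOD switch happens scales with the object's bounding radius times RenderVolumeLODFactor, which is why small objects collapse so much sooner. A rough sketch, using the switch ratios (0.24, 0.06, 0.03) I believe the LL viewer uses - treat those constants as assumptions:

```python
import math

def lod_switch_distances(scale_xyz, lod_factor=1.0):
    """Approximate camera distances (m) beyond which the medium, low and
    lowest LOD meshes are displayed. Ratios are assumed viewer defaults."""
    radius = math.sqrt(sum(s * s for s in scale_xyz)) / 2.0
    return {lod: radius / ratio * lod_factor
            for lod, ratio in (("medium", 0.24), ("low", 0.06), ("lowest", 0.03))}

# A 0.5 m ornament at lod_factor 1.0 drops to its medium LOD under 2 m away;
# raising the factor to 4 pushes every switch distance out fourfold.
d = lod_switch_distances((0.5, 0.5, 0.5), lod_factor=1.0)
```

So a creator testing with a high factor and a customer running at 1.0 literally see different objects at the same distance.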
  8. I would look at it as a result of an extremely bad prim/land impact system, which forces creators to adjust LODs to get land impacts to a reasonable level. It doesn't force anybody to do anything. What it does is penalise those who don't pay attention to making things efficient for real-time use. The intended penalty was increased LI. Unfortunately, it also allowed the alternative of terrible LOD behaviour, and many took that way out because it was easier and/or more lucrative. Allowing that was the mistake. The original intent, reflected in the size-related calculations, was that the LI cost would reflect the burden of data downloading, which uses both server and client bandwidth. Later, the developers shifted emphasis toward the rendering strain on the GPU, but retained the calculations, as the two factors were (claimed to be) highly correlated. If I remember correctly, the LI calculation was based on a scene triangle budget (excluding avatars) for a viewer with certain low graphics settings (effectively RenderVolumeLODFactor=1 and FarClip=188). I can't remember the figure, but it was well below a million and over 100,000; maybe 150,000 or 250,000*. In fact, the calculation was rather generous. It effectively assumed that content was uniformly distributed in two dimensions. In reality, cameras and objects are spatially correlated, and distributed in three dimensions (albeit less spread vertically). Accounting for either of these effects more accurately would have increased LI and made it even more drastically dependent on size. I originally thought the LI calculation was overly drastic. However, having seen the extent to which every opportunity to avoid the effort of making efficient content gets exploited by at least a few, I can appreciate that some form of control was absolutely necessary. Without it, the world would be unusable for anyone without the fastest broadband and the latest GPU (even with it, for some - almost there now for my 1.75Mb/s connection). The system adopted was at least based on reasonably rational quantitative criteria. Of course it's not ideal, but I don't think it's worthy of "extremely bad". *The figure is hidden in the old content meeting minutes somewhere.
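For the curious, the download-weight component can be sketched in code. This follows the streaming-cost formula as I recall it from the published material and the viewer source; all the constants (512 m max distance, the 0.24/0.06/0.03 switch ratios, 16 bytes per triangle, the 250,000-triangle budget, the final scale factor) are remembered defaults, not verified against current servers:

```python
import math

def download_weight(bytes_per_lod, radius, triangle_budget=250_000):
    """bytes_per_lod: (high, medium, low, lowest) compressed sizes in bytes;
    radius: half the object's bounding-box diagonal, in metres.
    Constants are assumed defaults, not verified."""
    METADATA_DISCOUNT, MINIMUM_SIZE, BYTES_PER_TRIANGLE = 128.0, 16.0, 16.0
    MAX_DISTANCE, MAX_AREA, MIN_AREA = 512.0, 102_944.0, 1.0

    # Estimate a triangle count for each LOD from its byte size.
    tris = [max(b - METADATA_DISCOUNT, MINIMUM_SIZE) / BYTES_PER_TRIANGLE
            for b in bytes_per_lod]

    # Distances at which medium/low/lowest take over (at LOD factor 1).
    d_mid, d_low, d_lowest = (min(radius / r, MAX_DISTANCE)
                              for r in (0.24, 0.06, 0.03))

    # Each LOD is charged for the annulus of ground area in which it is drawn.
    high_a = min(math.pi * d_mid ** 2, MAX_AREA)
    mid_a = min(math.pi * d_low ** 2, MAX_AREA)
    low_a = min(math.pi * d_lowest ** 2, MAX_AREA)
    lowest_a = MAX_AREA - low_a
    low_a -= mid_a
    mid_a -= high_a
    areas = [max(MIN_AREA, min(a, MAX_AREA))
             for a in (high_a, mid_a, low_a, lowest_a)]

    # Area-weighted average triangle count, scaled against the scene budget.
    total_area = sum(areas)
    weighted_tris = sum(t * a for t, a in zip(tris, areas)) / total_area
    return weighted_tris / triangle_budget * 15000.0
```

Note that for a small object the lowest-LOD annulus covers nearly the whole circle, which is exactly why, as in the gun example above, the lowest LOD dominates the LI of small items.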
  9. There can be layers for small detailed objects that will disappear faster. For large buildings LOD handling can be such that they are visible from a distance. In SL we do not have this kind of functionality... I'm not sure I understand what you mean here. If you design your LOD meshes appropriately, you have quite a lot of control over when details, or whole models, disappear. You can also add another level of control with invisible geometry and/or by joining/splitting objects. Of course, if you stick to the automatically generated LODs, that is pretty hopeless. I did once file a jira asking for an object-by-object settable LOD distance modifier factor - so I must have agreed with you a bit - but the existing system is pretty flexible.
  10. Good illustration. This sort of thing happens mostly/worst when people try to cheat the LI system, not caring about those who can't afford, or don't know how, to set a high RenderVolumeLODFactor. The one-triangle medium LOD trick should have been prevented somehow. Too late now. For me, the challenge of making good LODs is part of the enjoyment of making mesh. I guess others don't share that, and it is hard work.
  11. I did not use a higher poly model to bake normals--should I do that? I don't think it's a matter of "should". I was really just mentioning it for completeness, not to say it's necessarily even a good idea. It's a lot of work even if you already have a high poly model - more if you have to make it, and not easy even when you know how. There are two possible reasons for doing it, I guess. My suggestion of using it to reduce vertex count, by having all-smooth shading and correcting the shading problems with a normal map, is probably not a good idea in SL because many people will not be using advanced lighting. For them, the normal map will be ignored, and they will see the artefacts, which can be ugly. So you are better off starting by making textures that work without advanced lighting, as you have done. Then you have the option of adding a normal map that works with that texture to give more realistic detail for those with advanced lighting. Whether that is worth the effort, only you can decide. The same goes for a specular map. You also have to decide whether the extra maps are worth the performance cost of the extra textures.
  12. Nice house. "Object" is the default name given after the uploader tries several ways to find a name from the data loaded from the collada file by the collada library. So it seems to be able to find the Blender object name for just one mesh object (collada <geometry>), but not for subsequent ones. So the rest all get called "Object". I don't know why. Nor can I remember (if I ever knew) what determines which object is the one whose name gets used. The Blender exporter certainly puts all the object names in the collada file. You can override the single chosen name by putting a name in on the upload options tab, but that doesn't solve the "Object" naming for the others.
  13. Another one: If you are going to use a normal map, then you can keep more (all?) edges soft, and compensate for the shading when you bake the normals from the high poly model. This will reduce the vertex count in the uploader, which splits* vertices at sharp edges where faces meet with different normals. You may also get some reduction by welding UV islands where that's possible, because vertices are also split* across UV seams. *In the internal format, every vertex in the vertex list has position, normal and UV coordinates. So when the same geometric vertex appears with different normals, at a sharp edge, and/or different UV coordinates, across a UV seam, a whole new vertex has to be created.
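To see why, here is a minimal illustrative sketch (not the actual uploader code) of the splitting rule: a renderer-style vertex is the unique triple (position, normal, UV), so the same geometric corner is duplicated once per distinct normal or UV coordinate it carries:

```python
def count_render_vertices(corners):
    """corners: iterable of (position, normal, uv) tuples, one per face corner.
    Returns the number of unique GPU-style vertices after deduplication."""
    return len({(pos, nrm, uv) for pos, nrm, uv in corners})

# One cube corner shared by three faces, all with the same UV coordinate.
p = (0.0, 0.0, 0.0)
uv = (0.0, 0.0)
# Smooth shading: one averaged normal, so the corner collapses to 1 vertex.
smooth = [(p, (0.577, 0.577, 0.577), uv)] * 3
# Sharp edges: three face normals force 3 copies of the same position.
sharp = [(p, (1.0, 0.0, 0.0), uv),
         (p, (0.0, 1.0, 0.0), uv),
         (p, (0.0, 0.0, 1.0), uv)]
assert count_render_vertices(smooth) == 1
assert count_render_vertices(sharp) == 3
```

The same doubling happens when uv differs across a seam, which is why welding islands helps.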
  14. Drongle McMahon

    LOD?

    It's an odd one because I know many people who randomly find it reset to the default. I have to check it periodically. For me it is not necessarily with every relog. There is no other setting I have that problem with. Perhaps it is getting reset every time a new viewer version is auto-installed?
  15. that's cheaper than send some up from the ground. Yes. For common stuff, that's how it works - for space construction/use, the cost of getting stuff into orbit from the earth is so huge that getting it from low-gravity asteroids can compete. According to the Wikipedia article, "A comparatively small M-type asteroid with a mean diameter of 1 kilometer (0.62 mi) could contain more than two billion metric tons of iron-nickel ore, or two to three times the annual production of 2004". For rare metals, provided the cost of refining isn't too high, it might be able to compete with supply to the surface too. The problem now is that that much is not going to be needed in space for a very long time. It might be just too long-term an investment to be attractive.
  16. it's quite another to accept that we need it so much that getting it, manned or not, is worth the cost. Apparently that can be estimated. The worked example presupposes a demand in earth orbit, which wouldn't seem to be there yet, but the potential profitability may be surprising. Note that there is a misprint in the return date. I assume it should be 2024, not 2014!
  17. ...or in the SL import. There is a Generate Normals button, with an associated crease angle, which will smooth edges where the faces meet at less than the crease angle, and make the others sharp. However, this doesn't work as well as doing it in your 3D program. If I remember correctly, it leaves all UV seams sharp. It might do as a stop-gap, and to diagnose the problem, but I wouldn't recommend it for general use.
  18. I don't want to use standalone textures to import in SL. May I ask why not? There are several disadvantages of uploading with the model, which I have listed before. Making good metallic effects is quite hard. This is especially the case if you want something that will work with and without advanced lighting. You really need to use normal and specular maps for advanced lighting. If the metal is smooth, you can get away with using the blank versions of these. Using the diffuse map (if you have one) as the specular map sometimes works better, as the colour of specular reflection from metals is the same as the diffuse colour. For more realistic metal, you need to make these maps. As far as I know, you can't upload normal and specular maps with the model. So you are going to have to upload separate textures unless you stick to the blank maps. Here is one of my attempts to make something metallic, gold and silver. The maps here were not baked. They were all made by manipulating the diffuse texture in Gimp. One quarter shown here, as they are symmetrical. Unless your metal is smooth, I think it is essential to use the alpha channels of the normal map (specular exponent) and the specular map (environment reflection). Otherwise you get the overall glazed/wet look. You can experiment with the Glossiness and Environment slider values, which have strong effects, but do that under different lighting conditions, because their effects are very dependent on them. Here is a prim cylinder with these maps inworld. Top row is default lighting at 3pm: advanced lighting on at the left and off at the right. In the bottom row, advanced lighting is on and a light source (not projected*) is added at the lower left; 3pm on the left, midnight on the right. You can bake normal and specular maps from Cycles. I have not baked normal maps in Cycles, only with the Blender render, so I can't comment on that.
For the specular map, use the Glossy color bake, which just does the reflectiveness of the surface. The other Glossy bakes are for highlights, which depend on the lighting. They also bake what a camera looking back along the normal would see at each point, which does not correspond to any real camera view. Unfortunately, these bakes will not give you the separated specular exponent (inversely related to roughness) or the separate environmental reflection map, which you need for the best results. I have tried to get these maps by baking the roughness parameter as if it were a colour, but have not met with much success. Then you still have to put them together in Gimp/PS. Maybe someone has worked this out? I also tried to make a node setup that would mimic the SL lighting in Blender, using the alpha channels as in SL, but I couldn't find how to get the environmental reflection to work. That would be very useful. Meanwhile, testing the maps in Blender, as Cristhiana said, is the best you can do. Then use local textures to experiment, so that you don't waste upload fees. *note: projected lights are also included in the environmental reflection. There's a problem with this in the current viewer, but a tested fix is on the way. Then they should make good metallic effects.
  19. 1. To see if it's "tint", simply go into the Texture tab of the edit dialog inworld, and check that the diffuse texture color is set to pure white. 2. Are the edges across the curved part smooth-shaded? I guess in Maya that is having them in the same "smoothing group". (Also re-check the normals - should be ok as you made it by bevelling). 3. There is/was an artefact caused by inworld ambient occlusion (not fixed as far as I know). You can see if that is affecting your model by turning it on and off in Preferences (assuming you have advanced lighting on). However, I think your model is probably too small to have this problem.
  20. Too many triangles. The uploader secretly creates new materials if it reaches 21844 triangles in any material. Described in this jira (BUG-1001).
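A plausible (unconfirmed) reason for the odd-looking limit: GPU index buffers commonly use 16-bit indices, and 21844 triangles sits just inside the range such indices can address. A hedged guess, not taken from the viewer code:

```python
# 16-bit indices can address 65536 vertices (values 0..65535). Three indices
# per triangle gives 21844 * 3 = 65532, which still fits within that range;
# the limit is presumably chosen to stay safely inside it. (A hedged guess
# about the motivation, not a fact from the uploader source.)
INDEX_VALUES_16BIT = 65536
TRIANGLE_LIMIT = 21844
assert TRIANGLE_LIMIT * 3 == 65532
assert TRIANGLE_LIMIT * 3 < INDEX_VALUES_16BIT
```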
  21. Yes. The sickness industry is much more profitable than the health industry. No doubt there are comparable perverse motivators at work in the space industry. Nevertheless, there's no doubt that both have brought us useful things too.
  22. But i think i go learn in blender That is the best solution in the end. There are many problems with the way Sketchup makes meshes that are difficult to solve in SL. It is also difficult for us to know which problem is the cause. Often the best solution is to edit the Sketchup mesh in Blender. So you might as well use Blender from the beginning. (mesh = maille ???) !! cedilla not permitted in this community ??
  23. In all probability, all the originators of the mission were interested in is origins, and the rest was added in order to get the funding for it. Cynical thinking, yes, but I bet it's not all that far off the mark. On the contrary, I think you are generous. My guess would be that their primary concern was the advancement of their personal careers. Sadly, that has to be the priority for survival, in science as much as elsewhere. More noble motives are a secondary luxury dependent on success with this overriding one. In principle, though, that should not affect the assessment by the funders. It's something that never occurred to me until you wrote it It didn't occur to me until then either. Just goes to show the value of debate in driving the refinement of views. HS2: Now that is a rather different question. The benefits there are supposed to have been explicitly identified and calculated. The validity of those calculations is certainly open to dispute, as is the question of who the beneficiaries are. There is always reason to be wary of benefit to "the economy" evaluated without reference to how it is distributed. The same applies to "economic recovery". However, this is digressing too far. I will let it rest.
  24. and they did it without it costing the country almost half a billion quid True of Rutherford, but irrelevant in context. I am using the example to illustrate the fact that benefits are unpredictable, not to argue about whether that work should have been funded. Not true concerning accelerator development, which is very expensive, although medical applications were foreseen quite early on. I guess my real difficulty with your argument is that there will always be immediate needs that are more important than investment in knowledge, irrespective of prevailing economic conditions, and in someone else's eyes if not your own. So applying your absolute criterion instead of a proportional weighting will always mean zero investment in knowledge, and loss of the benefits that flow from it. It also seems to me inconsistent with your claim to be in support of the pursuit of knowledge in general. You have to keep allocating some proportion of expenditure to maintain it. It happens that our present (UK) government have weighted the long-term research benefit highly, as they have protected research expenditure from the swingeing cuts made elsewhere. In the past, the science budget has often been the first to suffer in strained circumstances. The commitment to expenditure on Rosetta was, of course, made in economic circumstances very different from those we are in now, and most of it had probably already been spent before 2008. Do you think the original commitment was unjustified under the prevailing conditions? Or was cancellation mid-expenditure required after circumstances changed in 2008, consigning the past expenditure to the dustbin? The latter sounds drastic, but there is plenty of precedent in just about all government IT "investment"!
  25. Drongle McMahon

    LOD?

    I can't speak for the SL viewer The setting of RenderVolumeLODFactor is persistent in the current (and past) LL viewers. It stays set across logins whether you alter it directly or via the Object Detail slider. It even stays set if you relog after setting the slider but don't close the Preferences dialog. It may sometimes change when the viewer gets updated. The persistent value is stored in settings.xml (in users\....\AppData\Roaming\SecondLife\user_settings\ on Windows 7). So it will also change if you use a different computer or log on to the same computer as a different user.
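    For reference, the persisted entry in settings.xml is an LLSD fragment along these lines. This is a sketch: the key/map layout follows the viewer's settings-file format, but the Comment text here is illustrative, not copied from an actual file:

```xml
<key>RenderVolumeLODFactor</key>
<map>
  <key>Comment</key>
  <string>Multiplier for mesh level-of-detail switch distances</string>
  <key>Type</key>
  <string>F32</string>
  <key>Value</key>
  <real>2.0</real>
</map>
```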