
leliel Mirihi

Resident
  • Posts: 928
  • Joined
  • Last visited

Everything posted by leliel Mirihi

  1. PyroSteel wrote: A couple of things are going on here. The user created stuff isn't the problem. In a dense population area my vert buff is only hitting 5-7k. <-- this is super low. On a modern game, the main character can be 12-25k verts, while the whole scene can go as high as 2-5 million verts. You either hang out in empty sims or you misread something. Heavily built up sims are usually pushing 700k-1m tris without shadows. The sim I'm standing in right now has 3.3m tris on ultra and that's with no avatars in view other than my own (draw weight of 14336). Get your facts straight before you start pointing fingers.
  2. Drongle McMahon wrote: Normal and spec maps have their own LOD known as mipmaps that would hide that detail. I don't think that can be generally true. The switches are not coordinated, and simply interpolating the higher res normal map doesn't necessarily make the right kind of adjustment. I didn't mean to say the switches would be coordinated. The two LOD systems were created to solve two very different problems and as such are implemented separately. As for making the right kind of adjustments, that's a rather subjective matter that I think needs to be taken into consideration when making the normal map, just like any of the other limitations of normal maps. I was more pointing out that downsampling 4:1 on each step of the mipmap is pretty much guaranteed to average out fine detail very quickly.
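To make the 4:1 averaging point concrete, here's a minimal sketch (plain Python, not the viewer's actual mipmap code) of how each mip level box-filters the level above, so an isolated fine feature fades toward the background value within a couple of levels:

```python
def downsample(img):
    """Box-filter a square 2^n x 2^n image down one mip level (4 pixels -> 1)."""
    n = len(img) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(n)] for y in range(n)]

# One channel of a 4x4 "normal map": flat surface (0.5) with one sharp bump (1.0).
level0 = [[0.5] * 4 for _ in range(4)]
level0[1][2] = 1.0

level1 = downsample(level0)  # the bump is already diluted to 0.625
level2 = downsample(level1)  # and now to ~0.53, nearly flat
```

Real GPU mipmap generation may use fancier filters than a plain box, but the averaging effect on fine detail is the same.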
  3. Drongle McMahon wrote: Yes. They stay on the same faces, just like the diffuse texture. That can be a problem with the normal map because the map specifies the difference between the geometric normal and the normal to be used by the shader. If your lower LOD removes detail, or removes edge loops used to sharpen corners, then the normal map for the high LOD won't be correct for the lower LOD. That's just an extra thing you need to check if you are using materials. Normal and spec maps have their own LOD known as mipmaps that would hide that detail. The best way to see exactly what's going on is to dial in lower/higher RenderVolumeLODFactor on the Show Debug Settings dialog, rather than zooming. If it's hard to see the switches, you can use wireframe view to check (Develop->Render->Wireframe). Then, to see what the LODs actually look like with the LOD switches at a particular factor, you can either zoom or stretch/squeeze the object (that's how I did the bed ones). As far as appearance is concerned, it's the same thing. While this is good advice for looking at geometry LOD, keep in mind as per above that you would not be seeing the correspondingly correct texture LOD. You would have to either modify the SL viewer or write a custom model viewer to see them both at the same time. I point this out mostly to say you should take what you see with a grain of salt, as that's not how it would be viewed "in the wild".
  4. ChinRey wrote: So SL has several mechanisms for reusing data although it can hardly be called true instancing. That sounds like a no true Scotsman argument to me. How about you define what true instancing is first, then claim the viewer doesn't do it. I would assume you're talking about functions like glDraw*Instanced*(). The viewer doesn't use those because they don't make much sense within the context of the kind of content we have in SL. Then perhaps you're talking about glMultiDraw*Indirect(). The viewer doesn't use those because they were only added in OpenGL 4.3 two years ago and a significant portion of the user base doesn't have hardware or drivers new enough for them. None of that really matters though. As soon as you have fewer inputs than outputs you're doing some kind of instancing, and the viewer has plenty of that going on.
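The "fewer inputs than outputs" point can be illustrated with some back-of-the-envelope arithmetic (the object sizes here are made up for illustration): any form of instancing means N drawn copies share one set of vertex data, so you hold one mesh plus N transforms instead of N full meshes.

```python
# Hypothetical sizes, just to show the shape of the saving.
VERT_BYTES = 48   # 40 B position/normal/color + 8 B texcoord, per the figures elsewhere in the thread
MAT4_BYTES = 64   # one 4x4 float transform per drawn copy

def naive_bytes(verts, copies):
    """Every copy carries its own full vertex buffer."""
    return verts * VERT_BYTES * copies

def instanced_bytes(verts, copies):
    """One shared vertex buffer plus a small per-copy transform."""
    return verts * VERT_BYTES + copies * MAT4_BYTES

naive = naive_bytes(1000, 50)       # a 1000-vert chair drawn 50 times
shared = instanced_bytes(1000, 50)  # ~47x less data for the same 50 chairs
```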
  5. The problem with this kind of stuff is there's no feedback loop. When people see performance problems there's no obvious cause for them, and the community basically indoctrinates people to blame LL for everything. Most people don't even realize it's their own content that's causing the problems. When you try to bring up the issue many people take it as a personal insult and downplay, brush off, shift blame or just get hostile. Increasing public awareness through frequent educational campaigns while expanding the performance metric systems in the viewer is the only long term solution I can think of. The performance metrics displays in the viewer were originally made for debugging but they need to be brought to the forefront and made easier to use and understand. Content creators and consumers can't make informed decisions without the proper data, and they can't get the proper data without the appropriate tools.
  6. Works for me and plenty of other people. Rare problems are just that, rare.
  7. Jake Koronikov wrote: The SL shader system does not have any special solution for these kinds of objects. Some other game platforms have special shaders to render those. At least those shaders can be coded. Those games use alpha masking instead of alpha blending. SL now has alpha masking as well. Give it a few more years and most well made content will use it, making overdraw much less of a problem.
  8. Medhue Simoni wrote: I'll show an example, and then you all can rip me apart, and tell me to stick to animating. Triangles - High 2814, Mid 1406, Low 350 This model was one of my first decent models, but I recently decided to sell it for Unity also. It's a little hard to judge from that one picture so take this with a grain of salt. High LOD looks fine for the most part. The only problem I see is the receiver; it was literally just a box with stuff bolted on, and yours is too high poly for that. Mid LOD seems a little higher than it needs to be, but I'd have to see how it behaved in SL to properly judge it. Low LOD is way higher than it needs to be. Nobody is going to see those curves on the hand grips or stock at those distances, the barrel could just be a prism for all anyone cares, etc. Some artistic criticism if you want as well. The front hand grip is in the right place but the wrong shape and missing some parts. Also the overall gun seems a bit off, assuming you're modeling the original Thompson and not the M1928 and variants.
  9. Pamela Galli wrote: I am not bitter about it, quite the contrary. I agree with the above points about LI accounting, but I don't find it that hard to work around. Sorry if it seemed like I was calling you out. I was replying to the thread as a whole and your post just happened to be the last one at the time, so I clicked reply to it.
  10. I think a lot of you guys are asking too much from the LI system. One of the design constraints of LI was that it wasn't too radical of a change from the prim system. LI doesn't count textures because the prim system didn't, LI doesn't include avatars because the prim system didn't, LI doesn't account for instancing because the prim system didn't. LL wanted a mesh object to have an LI of roughly the same as an equivalent prim object, but that's never going to happen if the LI system is counting all kinds of crazy things the prim system never did. There's also the issue of user training. It's taken years to get people to understand LI; arguably we're still a long way from it being widely understood. And it's a simple system! If they had added all that stuff people would have just given up and stopped caring. I'd argue it's happening now with the type of objects that started this thread. If you're saying that LI didn't go far enough to stop the rampant resource abuse that has plagued SL for a decade, then you're right. But being bitter about that isn't very helpful.
  11. You have no idea how avatar imposters work, do you? What it's trying to accomplish or how effective it is. I'll give you a hint: you are the problem avatar imposters is trying to mitigate. In other news, tragedy of the commons is now playing in a theater near you.
  12. Flexi predates Qarl by a year. Also, while you could blame some of that on Qarl, LL has made the same mistake dozens of times throughout all aspects of SL. The company has a history of rolling out proofs of concept that were poorly thought out and often caused more problems than they solved. I think it's more a structural problem within the organization than the failings of any one person, other than their inability to fight the system.
  13. Kwakkelde Kwak wrote: Anyway, I am talking about the maximum of 512 MB reserved for textures. No matter how much memory a vertex uses, it doesn't qualify as or get used as texture memory. If it is, the term is chosen poorly at best. Texture memory is an artificial construct of the viewer that is completely disconnected from reality. On the hardware side, dedicated texture memory fell out of favor almost 15 years ago. The reason why the viewer can't use all the vram modern cards have is because it still clings to this outdated concept. LL knows this system is broken and needs to be removed, but they're lazy, so we should light a fire under them by abusing it as much as possible.:matte-motes-evil-invert: Anyway, the entire point is: sometimes it's better to use geometry, sometimes it's better to use normal maps. It's always good to use as little as possible. Agreed.
  14. Drongle McMahon wrote: "That means your normal map is too big." I don't think I can entirely agree with that assertion, so far. The density of geometry can be highly variable. In my example, it's mostly in rounded bolt heads that occupy less than 2% of the surface area. The remainder doesn't use much geometry, but the normal map has to be uniform and of sufficient resolution to give the required detail for the bolt heads. In other words, the normal map resolution is determined by the finest detail to be represented. If the distribution of detail is non-uniform, that means it has to carry a great deal of redundant information. In contrast, the geometry only contains detail where it is required. You're right about that, I suppose I should clarify. Normal maps have a high base cost but a low cost per feature. So they work best with larger objects with lots of detail, which allows you to amortize the base cost across the whole object. Also, while the detail may be non-uniform on the mesh, it doesn't have to be on the UV map.
  15. Kwakkelde Kwak wrote: Of course when you compare a high poly model to a low poly model with normal maps, the low poly will be better memory wise. Normal maps are textures though, and Second Life has a very limited amount of texture memory. In other words, if you have a lot of VRAM, SL will run out of memory before your graphics card does. Vertex data takes memory too. Develop -> Show Info -> Show Render Info to see how much. I'm sure everyone has heard about the GDC presentation Approaching Zero Driver Overhead by now. It's worth pointing out that many of their examples were bandwidth limited by the PCIe bus transferring vertex data. The left object is rather heavy, with 4410 tris and 8820 verts (if it was uploaded to SL). The right object 18 and 28. Baked onto a normal map, there wouldn't be any difference though (apart from the fact you can of course tile the map of the left object, but this is an example). I'm pretty sure using a normal map for the object on the right, rather than geometry, wouldn't do you any good memory wise. Obviously there's always exceptions to everything, that goes without saying. Though I don't think a shape as simple as a pyramid is a good example of that. btw, doesn't a normal map always use the fourth channel's memory, even if it's not used, taking up 4 bytes per pixel? No. Normal map != normal gbuffer. The viewer will use 4 bytes for normals and spec exp even if you don't have a normal or spec map, because the normal gbuffer is for the whole screen. The same as how the "framebuffer" (diffuse/albedo gbuffer) always has an alpha channel even if there isn't a single alpha texture visible.
  16. I meant that vertices use more memory for the same amount of detail. Given two objects with the same visible quality, the one made with pure geometry will use significantly more memory than the one that uses a low poly base mesh with a normal map. That is to say that normal maps are more efficient at storing high detail than vertex meshes are. That's what they were invented for and why people use them. The key point is to use the right size normal map. In your example you say you'd use fewer vertices to make that detail than what the normal map has in pixels. That means your normal map is too big. Granted I did gloss over a few things, such as the index buffers that define how the vertices combine to form triangles (6 bytes per tri). And how normal maps are not the same as the normals buffer. LL also missed the boat on some optimizations: with tangent space normal maps you only need the X & Y channels, the Z channel can be reconstructed in the shader, and they could have used compression as well, which in total would cut the per pixel cost down to 1 byte.
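The X/Y-only trick mentioned above works because a tangent-space normal always points out of the surface (positive Z), so Z can be rebuilt from the other two components of a unit vector. A quick sketch, with plain Python standing in for what would be a couple of shader instructions:

```python
import math

def reconstruct_z(x, y):
    """Rebuild Z for a unit-length tangent-space normal stored as (X, Y) only.

    The clamp guards against tiny negative values from filtering/rounding.
    """
    return math.sqrt(max(0.0, 1.0 - x * x - y * y))

# A normal tilted in X: storing just (0.6, 0.0) is enough to recover Z = 0.8.
z = reconstruct_z(0.6, 0.0)
```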
  17. Kwakkelde Kwak wrote: Jenni Darkwatch wrote: IIRC normalmaps are merely a shader operation on a GPU. As far as calculations go, but they do use more VRAM than geometry of course. Then again, with a good normal map you can probably get away with a smaller diffuse map. Vertices use significantly more memory than normal maps. Vertices use 40 bytes for position, normal and color, plus 8 bytes for texcoord and however many for weights for skinning (note color and texcoords use floats). Normal maps are just 3 bytes per pixel. A 512x512 normal map has 262k pixels but takes up the same space as 19.6k vertices @ 40B each. Although be warned there isn't a 1:1 ratio of normal map pixels to pixels on your screen; there's a whole lot of filtering and interpolating going on before you see the final result.
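The 512x512 comparison above can be spelled out in a few lines (using the post's own per-vertex and per-pixel sizes, which deliberately ignore index buffers and compression):

```python
BYTES_PER_VERT = 40          # position + normal + color, per the figures above
BYTES_PER_NORMAL_PIXEL = 3   # uncompressed RGB normal map

pixels = 512 * 512                              # 262,144 pixels
map_bytes = pixels * BYTES_PER_NORMAL_PIXEL     # 786,432 bytes
equivalent_verts = map_bytes // BYTES_PER_VERT  # ~19.7k vertices at 40 B each
```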
  18. First off, you're right, SL only supports 2 shadow casting lights; I was wrong about that. As for the extra triangles, I suspect some of them are full screen quads used for multiple render passes. In a deferred renderer, on the first pass you write out the data to multiple targets (normally diffuse + alpha, normal + spec, depth + stencil for 3 "framebuffers" total, although other arrangements are also used). Then on the second pass you draw a "full screen quad", which could either be one triangle scaled up to cover the whole screen or two triangles that cover the screen, and the texture you map to the quad is the framebuffer from the previous pass. Then in the fragment shader on the 2nd pass you do the lighting calculations using the depth and normals. Post processing effects (SSAO, DOF, motion blur, etc.) are done in the same manner with a full screen quad for each one (sometimes more). Of course that can't account for all the extra triangles you're seeing; you'd have to dig into the viewer source code to find out exactly what's going on. I suspect the viewer is doing some dumb things here and there. The shadowing code especially looks rather naive and full of low hanging fruit that could be better optimized.
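The two-pass structure described above can be sketched in a few lines (toy data, not viewer code): pass 1 fills per-pixel G-buffers, then pass 2 walks the "full screen quad" doing one lighting calculation per pixel, so its cost scales with screen resolution rather than scene complexity.

```python
# Pass 1 output for a tiny 2x2 "screen": per-pixel albedo and surface normal.
gbuffer = [
    {"albedo": 0.8, "normal": (0.0, 0.0, 1.0)},
    {"albedo": 0.8, "normal": (0.0, 1.0, 0.0)},
    {"albedo": 0.5, "normal": (0.0, 0.0, 1.0)},
    {"albedo": 0.5, "normal": (1.0, 0.0, 0.0)},
]

LIGHT_DIR = (0.0, 0.0, 1.0)  # directional light shining down the Z axis

def dot(a, b):
    return sum(i * j for i, j in zip(a, b))

# Pass 2: simple Lambert (N.L) lighting per pixel, reading only the G-buffer.
lit = [px["albedo"] * max(0.0, dot(px["normal"], LIGHT_DIR)) for px in gbuffer]
```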
  19. Drongle McMahon wrote: It seems to be a multiplier of about 5x, because a mesh with fewer triangles gives roughly proportionally fewer shadow increase. No idea why that should be, but it would certainly explain the effect of shadows on fps if it's actually rendering that many extra triangles! Does anyone know how the shadows are made? Shadow maps are just depth maps from the point of view of the light. When you render the scene you transform the fragment by the light's projection matrix and do a depth test; if it fails the fragment is in shadow, if it passes it's lit. The 5 maps would be the 3 shadow casting lights SL supports plus sun and moon. This is why shadows cost so much: you have to render the scene from the point of view of each light that casts shadows. Although it's just an untextured depth render, so it doesn't cost as much as, say, reflections would.
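The depth test itself is simple; here's a minimal sketch (a 1D shadow map for brevity, with the usual small bias to keep surfaces from shadowing themselves):

```python
# Nearest depth seen from the light at each shadow map texel (pass 1 output).
shadow_map = [2.0, 5.0, 9.0]

def is_shadowed(texel, frag_depth_from_light, bias=0.01):
    """A fragment is in shadow if something nearer the light wrote a smaller depth."""
    return frag_depth_from_light > shadow_map[texel] + bias

in_shadow = is_shadowed(0, 6.0)  # an occluder at depth 2.0 blocks this fragment
lit = not is_shadowed(1, 5.0)    # this fragment is itself the nearest surface
```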
  20. I agree that the bugs turned features and other compatibility issues are very annoying and definitely holding the platform back. I'd love to just rip out many parts of SL and replace them with something modern, but I know if that ever happened most users would grab their pitchforks and torches and storm LL's headquarters. It's kind of sad that not breaking user content will eventually kill SL due to stagnation. Having a certified builders program is an interesting idea, but I don't think it would work in practice. People generally don't like being told they're bad. As for configurable shaders, that's exactly what glow, shiny, fullbright and the like are. You're really just asking for more of them. Anyway, I'll repeat what I said a few years ago. The cause of SL's success (user made content) is also the cause of its problems (horrendously bad performance) and also its eventual downfall (legacy compatibility).
  21. Put a 100lb weight in the trunk of your car and drive around for a bit and I doubt you'll even notice it. Put 40 100lb weights in the trunk of your car and drive around for a bit and you'll notice a massive difference. Your tests are completely unrealistic since nobody stands around in an empty sim wearing only one attachment. You're approaching the problem from the wrong angle; think about how this item will actually be used in the real world. People wear lots of attachments and they also tend to congregate. Let's do some quick calculations to see how things can add up fast. Say the average avatar wears 10 attachments (shirt, pants/skirt, hair, shoes, feet, hands, breasts/butt, jewelry, head, more jewelry, ears, tail, guns) and they're all 22486 tris like your object. Now let's say you're in a club with 20 such avatars, which isn't that uncommon. So already we're at just shy of 4.5 million tris just for the attachments. I don't really feel like going on since this is actually a pretty complicated example, so I'll just stop here and let you think about these numbers so far.
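Spelling that calculation out (all of these numbers are the assumptions stated above, not measurements):

```python
TRIS_PER_ATTACHMENT = 22486   # the object under discussion
ATTACHMENTS_PER_AVATAR = 10   # assumed typical load-out
AVATARS = 20                  # a moderately busy club

total_tris = TRIS_PER_ATTACHMENT * ATTACHMENTS_PER_AVATAR * AVATARS
# 4,497,200 triangles from attachments alone, before buildings,
# terrain or the avatar bodies themselves are counted.
```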
  22. Jenni Darkwatch wrote: The answer to that would be "no, you can't really see any benefit from that". SL has been hamstrung in many ways, the graphics engine is pretty oooooooold Strongly disagree. The viewer's rendering engine has been updated a hell of a lot over the last 10 years, you just haven't noticed it. Look at materials: That stuff has been basic for computer game graphics since around the time SL launched. Yet we barely got support for it. And we still have no support for shaders, no support for displacement/heightmaps, no support for a lot of things computer games have taken for granted for years. Which has more to do with performance problems due to user made content than anything else. We can't even trust average joe the builder to make something out of prims that won't lag half the grid to death, and you wonder why we don't have all the fancy stuff games made by professionals do. I mean seriously, no support for shaders? Do you really want the average builder in SL to be able to run arbitrary code on your GPU with god knows what kind of performance? Anyway, implementing these things in the viewer wouldn't be all that hard, relatively speaking. LL even tried to do texture arrays 4 years ago in viewer 2.0, but back then OpenGL didn't have the necessary features so they had to hack together a home made system. It didn't work so well though, which is why it got removed and thus you probably never heard of it. The main problem, and why we won't be seeing this in SL anytime soon, is that all this stuff requires OpenGL 4.2 or higher, and some of it isn't even fully supported on all hardware yet. Whereas many people in SL are using old and decrepit hardware that barely supports OpenGL 2.0.
  23. There is no such thing as "real anti-aliasing". Aliasing artifacts are a byproduct of rasterizing lines onto a grid of pixels; if the line isn't parallel to the grid you end up with aliasing. All anti-aliasing methods, whether they're the down sampling variety (msaa, ssaa, etc.) or the image filtering variety (fxaa, mlaa, etc.), do the same basic thing, which is blur pixels on the line to make it not look so jaggy. The only true "fix" for aliasing is to buy a monitor with a higher pixel density than your eyes can perceive.
  24. Jenni Darkwatch wrote: In game engines the impact is typically also minimized by aggressively limiting the reflection range. I.e. only reflect things that are relatively close to the mirror, and only if the mirror itself is not too far away from the viewer. I just watched the vid about Tofu Linden's code - it seems to me that it uses a somewhat similar approach, aggressively blurring objects that are too far from the mirrored surface. Granted though, it would be a great way to radically slow down viewers That would help a lot, however it's still ignoring the elephant in the room known as user made content. A lot of people like to just hand wave it away or pretend it doesn't exist, but it's here, and it's the defining feature of SL; everything has to be designed around the assumption that poorly informed hobbyists are going to be using it. There is a large contingent of people in SL that don't know, and don't want to know. They will just make everything reflective then blame LL for the horrible performance. Personally I think it would be a good idea to add this to the viewer for all shiny surfaces, similar to light&shadow at deferred rendering: Turned off by default, let users turn it on if they like it. In other words: More stuff for people who can handle it without impacting people who can't handle it. Or at least minimally impact people. I don't think turning shiny into reflections is a good idea. A lot of the time shiny is used for specular highlights (technically the same as reflections, but whatever); making a change like that would break a lot of content, for various definitions of break. I think a better route would be to leave the old shiny as it is and move forward with spec maps in the materials system and possibly reflections in the future.
  25. Tateru Nino wrote: At some point a few years ago, I was given a replacement shader file that made all textures that were flagged 'shiny' into reflecting mirrors. It was quite cool (although, yes, it worked the card very hard). For that small of a change it more than likely was just using a single environment cube map for all reflective surfaces, which is really not much more than what the current shiny code does. For true reflections you need to render the scene from the point of view of each reflective surface, the same as you do with lights casting shadows, except you have to do a fully textured render instead of just depth like with shadow maps. So imagine how much of a load shadows put on your computer, then multiply that by 5-20 and you'll get an idea of how much reflections cost. There's your answer for why we don't have reflections. If you're wondering how games can get away with it, the answer is no user made content and complete control over where the player can look / move the camera. We don't have that kind of control in SL so we'll never be able to use the same optimizations that games use.