leliel Mirihi

Everything posted by leliel Mirihi

  1. Kwakkelde Kwak wrote: leliel Mirihi wrote: Because you're plastering everything with high res textures! According to whom?? I never use textures bigger than needed, I always make them fit the object. ( Your whole example sounds to me like "How can I use a high res texture for tiny objects without it increasing memory use". Maybe I'm reading more into it than there really is but that's the way it looks to me.) Then you clearly don't hear what I am saying. I am talking about lowering the texture use on the lower LoDs, I never mentioned increasing the high one anywhere. To what end? What are you trying to save by doing this? It's the full size image that's taking up all the ram; cutting back on some image halfway down the mip chain isn't going to do much of anything. We're talking about extensive modifications to the viewer to override how the GPU does mipmaps to save, what, 5%? ETA: It's not even close to 5%. Here's the breakdown for a 1024x1024 texture with a full mip chain (each level is a quarter the area of the one above it): 75% for 1024, 18.75% for 512, 4.69% for 256, 1.17% for 128, 0.29% for 64, 0.07% for 32. Who cares about such small savings? The fastest, easiest, and biggest savings come from using smaller textures. No amount of mipmap trickery can work around that. There are much lower hanging fruit to worry about. Depends on how you use the textures. If you want 512x768 for the table and 512x768 for the chair, yes. The results would probably not be that much better than two 512x512s. And some people might want only the table, no chairs. Then you really have unused texture space. [...] I'm not sure it does, but I am sure it's possible, since a LoD change means a model change. One copy of one object is what you'll be looking at in most cases, and in SL most objects have unique textures. If you build grass or trees it's probably not a wise choice. If you are building pretty much anything else you won't often see duplicates.
This is very different from what you'll see in video games, where reusing textures is the norm for obvious reasons. This just highlights the differences between SL and a real game. In a real game you can make these sorts of guarantees about objects, so doing these kinds of optimizations makes sense. Point taken... but that's how it works now, not how it has to work. The viewer would have to keep track of all the objects all the time, it would need some smarts for times when people zoom in and out so it doesn't keep loading and unloading textures, etc. It's possible, but it's also an easy way to add bugs. I can see the jira issues now about the viewer being confused about how far an object is and refusing to load the high res texture and so on.
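A quick sketch (mine, not from the thread) of how vram is split across a mip chain, assuming the standard layout where each level is a quarter the area of the one above:

```python
# Memory share of each level in a full mipmap chain for a square texture.
# Each mip level has half the width and height of the previous one, so a
# quarter of the area; the whole chain sums to at most 4/3 of the base level.

def mip_share(base=1024):
    sizes = []
    s = base
    while s >= 1:
        sizes.append(s * s)  # texel count at this level
        s //= 2
    total = sum(sizes)
    # map edge length -> fraction of total chain memory
    return {base >> i: px / total for i, px in enumerate(sizes)}

shares = mip_share(1024)
for size, frac in shares.items():
    if size >= 32:
        print(f"{size}x{size}: {frac:.2%}")
```

The base level dominates at roughly 75% of the chain, which is why trimming anything below it barely moves the total.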
  2. Kwakkelde Kwak wrote: Ok, now you have me confused. In two ways. Why would I have more memory than will fit the vid memory? Because you're plastering everything with high res textures! See the part below about the whole texture staying in vram. That's what's happening in SL right now; seriously, go log in and turn on the texture console and render info console and you'll see what I mean. ( Your whole example sounds to me like "How can I use a high res texture for tiny objects without it increasing memory use". Maybe I'm reading more into it than there really is but that's the way it looks to me.) And why would using atlases be better for memory use? One 1024x1024 texture uses as much memory as two 512x1024s to my best knowledge. I thought the big advantage was the textures on an object loading all at once rather than piece by piece. If there was no need for the entire set of materials on all LoDs, you could use two 512x1024s, on a lower LoD only one of them. I am tempted to say that is a reduction of memory use by 50%. One atlas for more than one object, i.e. the table and chair use the same texture. This works best for non square objects that would otherwise have a large amount of wasted texture space. I don't know where I said they are the same and can be treated the same way. You didn't say it but your example is based on the same idea (i.e. nonlinear drops). Also as I understand you, the entire texture is loaded into vram, this is not what you said earlier when you said the viewer uses a lower res version of that texture when far away before sending it to the GPU. The viewer only loads the low res texture when the object starts out far away; once you zoom in on the object it has to load the full texture, which stays in vram even if you zoom back out. If you fake the mipmapping by using 1024 for LoD0, 512 for LoD1 etc... and you walk away from an object, the biggest offender can be dumped from memory.
The GPU will do its mipmapping with a smaller version, giving the exact same results visually, but using a fraction of the memory. When closing in on an object, the same will happen. What makes you so sure the viewer will dump the large texture? The object is still in view after all. And this whole trick will only work if you have one copy of the object in view; if you have multiple copies at different LODs then you're now using way more vram than a normally made object.
  3. Kwakkelde Kwak wrote: A "big" texture on the highest LoD won't be visible on lower LoDs/bigger distances/smaller sized objects as you keep explaining. You say both the viewer and graphics card make sure of this. The problem is you now have more data you want to store in vram than what you can fit. My video card has 1GB of ram and I routinely see the viewer eat all of it up and still want more. That's because people use high res textures way more often than they need to. They don't use texture atlases as much as they should (mesh now makes this easier). 100 pixels on your screen is most definitely not 100 pixels of texture. On something like a necklace you only get to see maybe 20-30%. I gave the 10 meters just as "a" number... I didn't test it. Use 2 meters instead then if it makes a difference, or 3 or 6, any distance where the GPU would pick a multicoloured 64x64 mipmapped texture. The point is you're really not going to save much in the way of memory bandwidth doing this, and that's not what's in short supply (in this case). These big textures take up more vram and that's what we're short on. Geometry and texture LOD may work similarly but they're intended to solve very different performance problems. Applying the theory of one to the other does not work so well. ETA: I suppose I should expand on this. The way LOD is done for geometry and textures is similar but they are handled very differently. Texture LOD is done by the GPU whereas geometry LOD is done by the program. When you switch between geometry LODs the viewer is actually unloading one list of vertices and loading the next; the two don't stay in vram at the same time. With texture LOD it's all done by the GPU and the whole texture mipmap stays in vram the whole time, no matter how far the object is from the camera.
SL is a bit of a special case since the viewer only loads the texture up to what is currently needed, but once you zoom in on the object the whole texture is in vram and stays there for as long as the object is in range. The GPU cannot unload part of a texture and the viewer never does.
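To illustrate the point that texture LOD is driven by screen coverage rather than raw distance, here's a back-of-the-envelope sketch (my own simplification, not the exact formula GPUs use): the mip level is roughly the log2 of how many texels land on each screen pixel.

```python
import math

# Rough approximation of GPU mip level selection: if a texture's texels
# outnumber the screen pixels they cover, the GPU steps down the mip chain
# by about log2(texels per pixel). This is a simplification; real hardware
# computes it per-pixel from texture coordinate derivatives.

def mip_level(texture_size, screen_pixels):
    """Approximate mip level for a square texture of edge `texture_size`
    covering roughly `screen_pixels` pixels on screen."""
    if screen_pixels <= 0:
        raise ValueError("object must cover at least one pixel")
    texels_per_pixel = texture_size / math.sqrt(screen_pixels)
    return max(0, math.log2(texels_per_pixel))

# The necklace example from the thread: a 1024 texture covering ~100
# screen pixels lands around mip level 6-7 (a 16x16 or 8x8 image).
print(mip_level(1024, 100))  # about 6.7
```

Note the distance never appears directly; a huge object far away can still sample the top mip level if it fills the screen.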
  4. Kwakkelde Kwak wrote: leliel Mirihi wrote: What? Either we're having a major miscommunication or you don't understand what mipmapping is. I know what mipmapping is. You said the viewer sends a lower res texture when not close to an object. That gives essentially the same result as mipmapping then. Or at least a first step in that process. I think we've completely lost each other on this one. Did you read my post at all? The example? On for example small objects, where all detail is lost at a not-so-big distance, you can use a 1x1 pixel image (well, if SL would support such small ones, but 32x32 with only one colour will do, the "blank" texture). The small object will be viewed up close though, then a high res texture can be preferred. This is exactly why I say a human being will make a better choice than some piece of machinery. I expect the GPU to adjust the texture to the size of the object on screen. Object twice as small, next step in mipmap. But some objects escape our attention when not in our face, so the texture size could regress faster than that. I also don't think mipmapping takes LoD changes into account. In a real game a human would decide it's a waste to use such a high res texture for something you'd have to zoom into in order to see. One of the problems in SL is that the residents often decide the exact opposite and plaster the smallest of trinkets with the biggest texture they can, just on the off chance people will cam in on it all day long. ETA: In your example above of the necklace covered with one texture, no repeats. By the time your camera is 10m away the necklace is only taking up 100 pixels at most; the GPU would have long since dropped to the 32x32 mip level or lower.
  5. Kwakkelde Kwak wrote: You're mixing two things up now. According to what you said earlier the GPU doesn't have to do any mipmapping, or hardly any, since the viewer sends the reduced texture in the first place. Still a question what happens if you walk away from an object instead of walking towards it, since then your viewer has received the bigger texture, maybe there's some mipmapping then. What? Either we're having a major miscommunication or you don't understand what mipmapping is. Here's an OpenGL tutorial that explains how all the texture filtering options work and shows you why it's a good idea to use them. Anyway, the example. You have a necklace made out of silver chainlinks and you use one full unwrap on it, no repeats (which would in most cases be stupid, but it's an example). As soon as people are 10 meters away from you (probably a lot closer even), they won't see any detail in the texture, but the entire texture area is still pretty big. In that case I would say it's better to make the necklace a plain color than a reduced version of the high res one. Again..again..again... it's all very academic, since the performance gain would be so small it won't be noticeable next to the other textures. Why not just use a low res texture to begin with?
  6. On linux you don't really install anything so you don't have to roll back anything either. Just download whatever version you want and run it (disable auto updates if need be). I normally have 5 or more versions of the viewer in my downloads folder.
  7. Kwakkelde Kwak wrote: That's where the difference between a GPU and a human brain is important. The GPU only has numbers to work with. The end result of an object on screen isn't determined by numbers though, but by how it looks. A human can see that, so I'm pretty sure that human will make the better decision. If all the GPU does is mipmapping, it means it uses the high res texture as a base, with all its colours and details. Use a smaller texture and less colour variation, possibly to a point where there is no variation at all, and the GPU can mipmap with that. I don't see how this technique has got anything to do with the highest LoD texture being too large. I'm still not seeing it. Can you give an example of how this would work and not just turn into a blurry mess? I think you're focusing so much on optimizing mipmaps without realizing that mipmaps are the optimization. Back in the old days GPUs would use the full texture for everything; every single frame the GPU would downscale the texture to the needed size. But that used up a lot of memory bandwidth. So someone came up with the bright idea of storing pre-scaled versions of the texture along with it; that way the GPU only has to fetch the closest one and scale from it, saving a large amount of bandwidth.
  8. Kwakkelde Kwak wrote: I'm pretty sure the GPU does its job better than I would and certainly a lot faster. The thing is, as you said so yourself, it of course has to work with what it's given. I as a human being on the other hand can decide whether something looks good enough at a certain distance, so I can reduce and reduce the texture the GPU has to work with to a point where it still looks good enough. Then the GPU can go from there with a much simpler task. Again, the gain would be overshadowed by the inefficient texture use in SL, so it's a bit academic and probably not worth the effort in this environment. A 5% gain for one texture (just guessing something) won't be very noticeable when someone teleports in with 20 unique 1024 textures on their avatar. I think we drifted into talking about different things. At first you were talking about simulating mipmaps using separate materials, which honestly is a horrible idea (except for tricks like using a billboard for the lowest LOD of an object). Whereas now you seem to be talking about tweaking how the GPU does texture filtering, which is a better idea. I don't agree with your example though. If a texture has such low detail that you can get away with using a significantly smaller image than what a given mip level would call for, then wouldn't that mean the full res texture is too large? I do wonder about the streaming. Does the server send the full resolution texture or the reduced (mipmapped) one to the viewer? On a busy sim that can make a lot of difference I think. (no idea how well those jpg2000's compress, normal jpg's don't compress very well) Next to the streaming there's the cache. Does it store a load of full res textures or just the reduced ones that have actually been used? And another thing, how does texture load on the server-viewer system compare to the script and geometry updates and chat and voice and all the other things?
JPEG2000 inherently creates mipmaps due to how it works (wavelet transforms). The spec takes advantage of this to enable some cool features, the main one being partial downloads. You can download part of the file and decode it to a lower res image than the full one. This is in fact the main reason LL chose jpeg2k over other formats despite its high processing requirements at the time. The viewer uses partial downloads so it only has to get textures up to the resolution it needs, instead of having to download the full res version of every texture in sight. The viewer caches however much of the texture it has downloaded so far. As for quality, jpeg2k is a significant improvement over jpeg1994, and 12 years later is still considered one of the better image codecs. With one caveat however: they were both made for compressing images of the real world, they don't do as well with artificial images. I don't know how much of a load sending textures is on the servers, you'd have to ask the lindens about that. For the viewer, texture loading and decompressing is the second hardest thing after rendering.
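A tiny sketch of the resolution side of that partial-download idea (the function name is illustrative, not from any real jpeg2k library): each wavelet decomposition level you discard halves the decoded resolution, so downloading less of the file still yields a usable smaller image.

```python
# Sketch of how JPEG2000 discard levels map to decode resolutions,
# assuming the usual dyadic wavelet decomposition. Discarding the finest
# d levels halves the width and height d times.

def discard_resolutions(width, height, levels=5):
    """Resolution available after discarding the finest d decomposition
    levels, for d = 0..levels."""
    return [(max(1, width >> d), max(1, height >> d))
            for d in range(levels + 1)]

for d, (w, h) in enumerate(discard_resolutions(1024, 1024)):
    print(f"discard {d}: decode to {w}x{h}")
```

This lines up with the mip chain idea above: the viewer can stop downloading at whatever resolution the on-screen size calls for.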
  9. Kwakkelde Kwak wrote: Btw I'm pretty sure I have a better brain than a GPU... if the GPU renders a low LoD model a couple of pixels wide, I'm sure something a lot smaller and cost effective than what the GPU can spit out will do just fine. Again, for now it makes no difference. I'm sure you are a lot smarter and more creative than the GPU but it has a lot more information than you do. The GPU knows exactly how many pixels each triangle will take up and what angle it is relative to the camera, so it can sample and filter the textures to perfectly match every triangle on the screen. You on the other hand have no such luck; all you can do is make a best guess as to how large a texture should be based on some rough estimates that are guaranteed to not be right in all cases. There is also the fact that even if you did do the mipmaps yourself you'd still have to turn on some texture filtering just so they'd look good.
  10. Kwakkelde Kwak wrote: Anyway, if the viewer does send the textures in different resolutions as you say, that mystery is solved, thanks. I don't know if the viewer just gives the GPU the full res texture and tells it to auto generate the mipmap or if it fills in the mipmap with the discard levels from jpeg2k, but in the long run it doesn't really matter. Not that it would have made any difference in how to texture the mesh, since adding is just adding with the "all LODs need all materials" way of uploading. Using separate textures to simulate mipmaps would cause a noticeable performance drop when done sim wide. Mostly because you have no way of telling the viewer not to do mipmapping for those objects. There's also the question of why you'd want to do that by hand when the GPU can figure it out for you and do a better job of it. :smileysurprised:
  11. The GPU already does that for you, when told to. You don't even need to give it the mip levels as it can auto generate them, when told to. The GPU can also do some fancy linear interpolation between mip levels, when told to. And even anisotropic filtering when the texture is viewed at oblique angles, when told to. In short, the GPU can do all of that for you and the viewer does tell it to do that stuff.
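The "linear interpolation between mip levels" part (trilinear filtering) can be shown with a toy one-sample version. A real GPU blends two full bilinear samples per pixel; this sketch of mine just shows the blend between levels, with each mip reduced to a single flat value:

```python
import math

# Toy trilinear filtering: blend samples from the two nearest mip levels
# by the fractional part of the computed level. mip_values[i] stands in
# for a (bilinear) sample taken from mip level i.

def trilinear(mip_values, level):
    lo = min(int(math.floor(level)), len(mip_values) - 1)
    hi = min(lo + 1, len(mip_values) - 1)
    t = level - lo if hi != lo else 0.0  # fractional blend weight
    return mip_values[lo] * (1 - t) + mip_values[hi] * t

# A sample halfway between mip 1 and mip 2 blends the two equally:
print(trilinear([1.0, 0.5, 0.25], 1.5))  # 0.375
```

Without this blend (plain "nearest mip" filtering) you get visible bands where the mip level switches, which is why the viewer turns it on.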
  12. Alicia Sautereau wrote: We are talking about a pc that needs to outlast 2 years, do it right from the start if you have the budget and not end up replacing part by part. [...] 1500 watt, because it did 4 5870s before the upgrade [to] 580... lolwut? Don't believe in following your own advice I guess.
  13. Ceka Cianci wrote: i'm also kicking around maybe getting this one really..i'm really liking what i am seeing in it.. The MSI GT783-625US http://www.msimobile.com/level3_productpage.aspx?id=347 I think you'll be rather underwhelmed with the performance of a $2400+ laptop. For that price you'd think you'd be getting a high end machine but in fact it's a fairly pedestrian mid range machine that would only cost around $800 if it was a desktop.
  14. As far as I can tell Ceka is looking for a desktop. ETA: Given the price difference between laptops and desktops I think I'd still recommend a desktop even if she was looking for a laptop. Many people don't even need a laptop, they just let it sit on their desk. And there's a good chance that buying a cheap but faster desktop and paying a teenager minimum wage to carry it for you would end up costing less than the laptop.
  15. Alicia Sautereau wrote: There is a difference between ultra and deferred rendering enabled with high settings Everyone says they run "great" on ultra, but cringe when they enable deferred and their fps drops to 12 :matte-motes-mad: I did mean deferred. In order to spend 3k you'd have to get an SB-E system and run quad SLI which IMHO is a massive waste for sl.
  16. Ceka Cianci wrote: i basically have 3,000 minimum for a system..i am not spending less than that hehehe if i can only top out at $2500 in parts because that is where technology has us at, then i have 500 more to spend on whatever else i can for it.. I could build 2 systems with 2 monitors each for that much that would both run sl on ultra just fine, with enough money left over to buy dinner for the whole family for a week. Are you sure you need to spend that much?
  17. Rolig Loon wrote: Oh, good. Thanks, Innula. You saved me doing the experiment to check. I love how no one believed me even though I've been looking at these broken invisiprims for months. Why do you think I posted what I did?
  18. Rolig Loon wrote: That's sad. I'll probably find myself in the same boat sooner or later, Ceera. I have loads of favorite shoes and other products that I bought no-mod, in the pre-alpha layer days and will hate to lose. Doesn't matter if the shoes are no mod because invisiprims are already invisible so you don't need to remove them. You just need to make or find an alpha mask for them.
  19. Lyra Blackthorne wrote: Some of the mesh hair is really nice, but the problem is having to use a mesh viewer...over half of SL still has not moved to the newer viewers; so, most people see mesh as a big blob. [12:25] Mumbles (charlar.linden): Currently we see about 60% of all sessions are with a mesh-enabled viewer. [12:25] Mumbles (charlar.linden): and about 2/3 of unique users are running a mesh viewer on a given day. [12:25] Mumbles (charlar.linden): It's growing, the biggest bump was when Phoenix released support. Source.
  20. Chosen Few wrote: I'm not sure it's that simple. Is there any single vertex on the avatar's head that is always the highest point, no matter what the configuration of the morphs? I don't know that there is. I also don't know for certain that there is any particular vertex on the foot that is always the lowest point. There's a vertex at the top of the skull that's always the highest point, I don't know if the hair mesh has a similar vertex but I don't think it should be counted. For the feet you could use the vertex at the back of the heel, it's always the lowest point for bare feet and is what the shoe base extends. Some of the morphs can make it so they aren't the only vertices at the highest or lowest point, but that doesn't really matter. The only problem with measuring like this is that it's viewer side, the sim doesn't "see" the avatar mesh so it wouldn't be so easy to add an avatar height function to LSL.
  21. Chosen Few wrote: Penny Patton wrote: Nyx Linden would disagree. I don't pretend to know what conversations you might have had with Nyx. I do know that the LSL functions to return avatar height in scripted measuring devices were explained eight years ago as returning eye level height, not skull cap height (since the avatar doesn't actually have a skull cap), and have not changed since. The height values now reported by the viewer are identical to these, are they not? Nobody is doubting whether or not the agent height function returns the skeleton height; the question is should that number be used as the avatar height and displayed in the avatar shape editor. The height of the avatar's skin is an entirely different measurement, which is more complicated to calculate, since every single morph has to be accounted for, in addition to the bone positions. Not that hard at all, just measure the distance between a vertex at the top of the head and a vertex at the bottom of the feet. The only hard part is agreeing on which vertices to use; as you pointed out, should hair count as part of an avatar's height?
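That measurement is simple enough to sketch. The vertex data below is made up for illustration; a real implementation would read the morphed vertex positions out of the avatar mesh on the viewer side:

```python
# Sketch of the height measurement described above: take the vertical
# span between the highest and lowest vertices of the (morphed) mesh.

def mesh_height(vertices):
    """vertices: iterable of (x, y, z) tuples, z pointing up."""
    zs = [v[2] for v in vertices]
    return max(zs) - min(zs)

# Toy stand-ins for the skull-top and heel vertices:
avatar = [(0.0, 0.0, 0.02), (0.1, 0.0, 1.78), (0.0, 0.1, 0.9)]
print(round(mesh_height(avatar), 2))  # 1.76
```

Since this runs over viewer-side mesh data, it also illustrates why the sim, which never sees the morphed mesh, can't easily offer the same number to LSL.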
  22. Evangeline Arcadia wrote: Search is abominable -I don't find it user friendly at all. What was wrong with the old one? If it ain't broke don't fix it!!!!!!! For example, I tried to look at listings under Arts and Culture in 'Places'. In the older viewer I would have seen a list of all the relevant places, so I could happily pick some to go explore - you can't do that with this viewer. It seems you have to put something in search field - but if I don't know what's out there how can I put something in the field to search for it!! Search is for searching, if you just want to browse then look at the destination guide.
  23. Penny Patton wrote: Second, think of how making everything larger affects Level of Detail. When objects are far away the engine renders them with lower detail models to save processing power. Smaller objects are downgraded to lower detail models more quickly than larger objects, because you will notice it more in larger objects like buildings if they suddenly drop to a lower detail model. If everything is larger, it's being rendered at higher detail, more polygons, over greater distances. Right there is a hit to your framerate. LOD has to do with the amount of area an object takes up on the screen, not how large it is or its distance from the camera. The reason being that micro and degenerate triangles are very costly to rasterize and cause a large amount of overdraw in the fragment shaders. A large object at the highest LOD actually costs slightly less to render (relatively speaking) as it gets closer to the camera due to less overdraw in the fragment shaders. I can use my poor photoshop skills to whip up some illustrations if you want.
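A rough sketch of screen-coverage-driven LOD selection; the small-angle projection and the cutoff values are my own invented illustration, not SL's actual formula:

```python
import math

# Pick a geometry LOD from the object's projected size on screen rather
# than its raw distance. Thresholds are made up for illustration.

def projected_radius_px(object_radius, distance, fov_deg=60, screen_h=1080):
    """Approximate screen-space radius in pixels (small-angle approximation)."""
    if distance <= 0:
        return float("inf")
    angular = object_radius / distance
    pixels_per_radian = screen_h / math.radians(fov_deg)
    return angular * pixels_per_radian

def pick_lod(object_radius, distance):
    px = projected_radius_px(object_radius, distance)
    for lod, threshold in enumerate((200, 60, 15)):  # made-up cutoffs
        if px >= threshold:
            return lod
    return 3

# A 5m building at 50m keeps a higher LOD (lower number) than a 0.5m
# trinket at the same distance, because it covers far more screen area:
print(pick_lod(5.0, 50.0), pick_lod(0.5, 50.0))
```

This captures the point in the post: two objects at the same distance switch LODs at different times because their screen footprints differ.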
  24. Looking at the specs I'd say there's nothing worthwhile in the whole machine. I'd suggest you keep this one for a little while and save up to buy a new machine all at once. If you won't be able to buy a new machine soon and want to upgrade this one to hold you over (which I don't recommend), your only option is to replace the video card. You have a GeForce 4 MX440 which is not all that great. See if you can get an old AGP card from a friend; buying a new or used one isn't worth the price.
  25. I will admit that fragmentation and vendor specific extensions would be somewhat of a problem with this system, but I don't think insurmountable. I intentionally modeled this system after the extension systems that are used in OpenGL and web browsers. Both of them have been doing it this way for over a decade but still managed to keep things together (for the most part). First and foremost, the viewers are open source, so there's nothing stopping viewer X from incorporating an extension from viewer Y if it's good. That is in fact the goal of this system: the features will prove their worth by people requesting that their favorite viewer pick them up and the developers responding accordingly. Second, it would allow time for user feedback to improve the extension before it gets set in stone by LL and required to be supported forever. Viewer X adds experimental extension N, early adopter content creators try it out and give feedback, viewer X comes out with a new version that works well and proves to be popular, viewer Y picks it up, and so on. Third, it's content creators that are going to be using these features, but it can be hard to know what will work best until you try things out. If we went through the normal process LL could end up implementing features that a year or two down the line we could end up regretting. This would give us a chance to experiment and figure out what works first. Granted, I will admit a material system doesn't necessarily need to be all that grandiose. But it's the only way I can think of to maintain both backwards and forwards compatibility without having to regret past choices in hindsight.