
About CoffeeDujour

  1. CoffeeDujour

    What the fitmesh LoD bug actually means

    Yes, but as with script counts, there won't ever be a practical alternative. None of the script count / memory tools would gently inform people that maybe they should, at their leisure, work on using fewer scripts. They were all coded to kick. There is no reason to presume a scripted way to check someone's ARC would result in different end products, unless the Lab specifically added it to the ToS .. which they won't.
  2. CoffeeDujour

    This is why we can't have nice things.

    Imagine a highway with many lanes, one for each CPU core. There are many vehicles on the road and they are all driven by idiots (especially all those Chrome dump trucks). Some of the cars flying along are texture decode processes, and in each one sits a hamster making a sandwich. There is a bendy-bus on the highway; that is the main thread. At some point, the hamsters in the cars have to pass their sandwiches to the old goat who runs the bar at the back, who for reasons best left out of this tale of woe is a pedantic jerk. He will accept one sandwich at a time and only wants the exact sandwiches ordered by his currently seated guests (penguins, probably). The number of seats varies, everyone has to sit and place an order, and no one can start eating till everyone has their sandwich and grace has been said. It's often lamented that hamsters are terrible drivers: they get caught up in all kinds of traffic, don't arrive when they are expected in an orderly fashion, sometimes run each other off the road, crash, or screw up the order and hand in a half-chewed ball of bread covered in mayo.

    * Passing sandwiches from moving cars to a bus, in traffic, is fiendishly complicated. Hamsters only have short arms. Larger critters with longer arms are slower to get going and tend to get themselves wrapped around the bus's wheels.

    ** Fitting the bus with a hopper into which sandwiches can be tossed fails because hamsters can't throw very well and the old goat has better things to do than continually check the hopper to see what's appeared in it. Likewise, attempts to give everything over to an ever-increasing fleet of hamsters tend to only result in squabbles over condiments.

    *** Sandwich ingredients are procured by a separate fleet of cars driven by rabbits pulling off the highway, buying a CDN-brand Happy Meal and then throwing out the pickles.

    **** A Kitty did experiment with having multiple hamsters in multiple cars making the same sandwich, and while they performed admirably, dinner was always late.

    ***** The Kitty suggested an assembly line would make a better analogy, but I felt a tale of hamsters in a sweatshop being bossed about by a possibly fictitious cat to be a little dark.

    If this tale has you more baffled than ever, that's intentional. I hope the confusion you're now feeling adequately communicates the complexities of multi-threaded coding.
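    Under the fur, the tale describes a classic producer-consumer pipeline: worker threads decode concurrently, results arrive out of order, and the single main thread consumes them one at a time. A minimal Python sketch of that shape (all names hypothetical, not actual viewer code):

```python
import queue
import threading

def decode_worker(jobs, results):
    """Worker 'hamster': does the expensive decode off the main thread."""
    while True:
        tex_id = jobs.get()
        if tex_id is None:      # sentinel: shut down
            break
        decoded = f"decoded:{tex_id}"   # stand-in for an expensive decode
        results.put((tex_id, decoded))  # hand the sandwich to the bus
        jobs.task_done()

jobs = queue.Queue()
results = queue.Queue()
workers = [threading.Thread(target=decode_worker, args=(jobs, results))
           for _ in range(4)]
for w in workers:
    w.start()

for tex_id in range(8):
    jobs.put(tex_id)
jobs.join()  # wait until every decode has been handed in

# Main thread (the old goat) drains results one sandwich at a time.
finished = {}
while not results.empty():
    tex_id, decoded = results.get()
    finished[tex_id] = decoded

for _ in workers:
    jobs.put(None)
for w in workers:
    w.join()

print(sorted(finished))  # [0, 1, 2, 3, 4, 5, 6, 7] - arrival order varies
```

    The queue is the part the analogy says is hard: it is the one synchronised hand-off point between many fast producers and one picky consumer.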
  3. CoffeeDujour

    This is why we can't have nice things.

    The net code lags behind the decode pipeline; your bandwidth is not being fully used.
  4. CoffeeDujour

    Which Do You Prefer COPY or TRANSFER?

    Don't even joke ... FREE THING !! (asks for debt perms, charges L$20 per use).
  5. CoffeeDujour

    This is why we can't have nice things.

    That's why texture thrashing is noticeable. A texture gets dropped from the GPU and then re-added to the decode queue (as it's still required by an asset in the scene). It decodes through each of the discard levels .. the final one loads (as it's a major part of the scene), pushes past the memory limit and immediately gets canned. Round and round we go. On a separate thread does not mean independent of the main thread. Depending on FPS, a certain number of textures are decoded each frame (and a single decode level counts as 1) .. if you're struggling with single-digit FPS then the number of decodes per frame is also single digits. We did try threading the decode of individual textures, but the threading overhead made it slower. Textures in SL aren't big enough to benefit.
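    The round-and-round cycle can be shown with a toy eviction model (illustrative only, not viewer code): a capped pool evicts a still-needed texture to admit a new one, the scene re-requests it next frame, and the expensive decode repeats forever.

```python
# Toy model of texture thrashing: two textures that cannot coexist
# under the pool limit get re-decoded every single frame.
POOL_LIMIT = 100                   # arbitrary memory units
scene_needs = {"A": 60, "B": 50}   # both required every frame

pool = {}     # tex_id -> size currently resident on the "GPU"
decodes = 0   # count of full decodes, the expensive part

def request(tex_id, size):
    global decodes
    if tex_id in pool:
        return                     # already resident, free to use
    decodes += 1                   # decode from scratch
    while pool and sum(pool.values()) + size > POOL_LIMIT:
        victim = max(pool, key=pool.get)  # bin the biggest, arbitrarily
        del pool[victim]                  # ...though the scene needs it
    pool[tex_id] = size

for frame in range(5):
    for tex_id, size in scene_needs.items():
        request(tex_id, size)

print(decodes)  # 10: one decode per texture per frame, 2 x 5 frames
```

    With a saved-decode cache, the re-admission would be a cheap load instead of a full decode, and the same access pattern would stop hurting.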
  6. CoffeeDujour

    When will the 38 attachment limit be increased?

    omg so many things. You know you can pull scripts from things and put them in other things? Passed you a full-perm gesture from Marine Kelly's cuffs that fixes this when you press F9 .. it only happens when you sit on something anyway.
  7. CoffeeDujour

    Game is crazy laggy even with good system/internet.

    PACKET LOSS !! Press SHIFT+CTRL+1 ... third item from the top. If this says anything other than 0.0% .. you're going to have a bad time.

    Step 1: Go to the mainland region KARA - this is an empty full region and will rule out problems with whatever regions you're hanging out on.

    Step 2: Reduce the bandwidth slider in prefs to about 200 .. it doesn't apply to most of the data SL downloads; it's badly named and a little misleading.

    Step 3: Troubleshoot your connection - use wired LAN over WiFi, reboot your router and other network gear, and if the problem persists call your ISP and tell them you have problems with UDP traffic. Be persistent.
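    For reference, the figure in the statistics bar is just the share of expected packets that never arrived. A tiny sketch of that arithmetic (the helper name is mine, not the viewer's):

```python
def packet_loss_pct(sent, received):
    """Loss percentage in the style of the viewer's statistics bar."""
    if sent == 0:
        return 0.0
    return round(100.0 * (sent - received) / sent, 1)

print(packet_loss_pct(1000, 1000))  # 0.0 -> healthy connection
print(packet_loss_pct(1000, 985))   # 1.5 -> you're going to have a bad time
```

    Even a percent or two of UDP loss hurts badly, because lost object updates and texture packets have to be noticed and re-requested.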
  8. CoffeeDujour

    This is why we can't have nice things.

    All of these problems are caused by the specific way SL handles texture downloading, local storage, decompression and moving to the GPU. Those points alone account for 99% of the overhead. Once the texture is on the GPU, it's basically free. Your GPU's primary job is to shuffle textures around and it's damn good at it, way better at this than geometry and lighting (this is incidentally why Nvidia have the Quadro line). The decline in framerate comes from the progressive downloading, local file I/O and the huge amount of decoding that gets done. It's a circular catch-22: when things are stressed, this makes it worse, goto 10.

    If you have the memory, and it's available to SL, there is no penalty to using it. Inspecting objects to get their full-resolution texture usage doesn't match viewer behavior when you're not looking at them. It's more like .. this object has a certain max memory footprint, but in actual use it can easily be below that. If you do everything you can to force the viewer to load everything full rez, then you're going to have a bad time.

    The cache is getting a rework by LL. It does generally work, but it has some pretty harsh pitfalls: decoding textures is expensive, the cache doesn't save decodes, and every texture must go through multiple progressive decodes. Under normal use (when you're not inspecting everything) it will stop at a level based on screen space, distance, etc., typically using a fraction of the memory that the full-resolution image might require. The issue is that this is very slow. Hopefully the new cache will save decoded images. That alone will dramatically improve performance, and there are a lot of other tricks that can be used to boost this even further.

    This is nothing to do with textures, more with how the mesh package is structured. The viewer gets and renders the geometry before it knows about the rigging; essentially it just assumes it's a static mesh till it's told otherwise.

    Right now we have a perfect storm: I/O-heavy processes with an expensive decode, combined with a render engine that's weighted heavily towards getting stuff on your screen as fast as possible.

    Dropping down a step in texture resolution does not happen correctly. The viewer waits till it's running out of VRAM (the bias figure in the texture console), then bins an arbitrary (typically full-resolution) texture from the pool in order to make space for the new item. The viewer then re-adds the removed and forgotten texture to the decode pool and starts from scratch. The way the texture to drop gets chosen is not ideal, and the result is thrashing as one part of the viewer bins it, another screams "but mooom we need that one" and puts it back, only for it to get immediately re-binned and forgotten. Rinse and repeat. It is done this way because the decode phase is stupid expensive. Right now, making sure each frame has only the ideal-resolution textures would cripple the viewer with I/O and decodes.

    An updated cache (if it saves decoded information) will have a dramatic impact on performance. Even loading a large decoded texture from a slow HDD and passing it to the GPU will be significantly faster than the decode phase. Other possibilities include keeping lower-resolution decodes in system memory so the swapping can happen instantly (effectively putting a small secondary cache of low-resolution versions of active textures into a viewer-managed RAM drive). Without the super expensive decode phase, stepping texture resolution down on the fly, one level at a time, becomes a real possibility ... because in case I've not made it clear by this point, decoding textures is stupid expensive.

    Till we get the new cache, you can improve texture performance by not having the texture console open (it has a non-trivial impact on how the decode pipeline works) and not inspecting everything and everyone, forcing items to be loaded full rez. Textures are not automatically decoded to full resolution every time under all circumstances, but inspecting or jamming your camera up super close is a surefire way to make this happen.
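    The "stop at a level based on screen space" idea can be sketched as picking a discard level: each level halves the texture's resolution in each dimension, and the viewer only needs enough resolution to cover the texture's on-screen footprint. This is an illustrative heuristic, not the viewer's actual formula, which also weighs distance and other factors:

```python
def desired_discard_level(full_dim, onscreen_dim, max_level=5):
    """Pick a discard level: level 0 is full resolution, and each
    further level halves each dimension. Stop discarding once the
    next level would drop below the on-screen size. (Illustrative.)"""
    if onscreen_dim <= 0:
        return max_level
    level = 0
    while level < max_level and full_dim // (2 ** (level + 1)) >= onscreen_dim:
        level += 1
    return level

print(desired_discard_level(1024, 1024))  # 0: camera jammed up close, full rez
print(desired_discard_level(1024, 100))   # 3: 128px is enough, 64px is not
print(desired_discard_level(1024, 1))     # 5: distant speck, maximum discard
```

    A 1024 texture at discard level 3 is effectively 128x128, which is why normal use consumes a fraction of the worst-case memory footprint, and why forcing everything to level 0 hurts.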
  9. CoffeeDujour

    Just Ignore and let this one die

    Home alone recharge time is essential .. kinda why I routinely stay up half the night while everyone else sleeps.
  10. CoffeeDujour

    Please. Can we have more physics.

    Water 2.0 is certainly possible .. it would however fall foul of LL's shared experience rules.
  11. CoffeeDujour

    This is why we can't have nice things.

    There is no way to know what a texture will be used for prior to upload; smaller textures might be better individually, but using one large texture as an atlas is significantly faster than 4 x 512s (etc). There is no reason that texture memory use cannot be entirely solved in software, allowing for unlimited texture detail on everything. I'd very much like to see 4K textures and some code to better handle texture degradation ... but that will have to wait till we have the cache changes from LL, as they are quite literally the wheels this fun-bus rides on. We have some back-of-napkin ideas, but the exact method depends on how the cache performs, the worst-case scenario being that the new cache is as terrible as the old cache and we leverage large amounts of system RAM as a buffer. (Yes, 4K textures, or bigger, really really, bring it on, challenge accepted.) Mesh detail on the other hand .. eeeeehhhh, unless someone comes up with a generic FOSS GPU-based decomposition library that performs exceptionally well in all circumstances...
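    Why one atlas beats four separate 512s: one download, one decode and one GPU bind instead of four, with each face's UVs simply remapped into its quadrant of the shared sheet. A minimal sketch of that remapping (illustrative math only, the function name is mine):

```python
def remap_uv(u, v, tile_index, tiles_per_row=2):
    """Map a face's (u, v) in [0, 1] into its tile of a square atlas.

    Packing four 512x512 textures into one 1024x1024 atlas (a 2x2
    grid) trades four decodes and binds for one; each tile occupies
    a 1/tiles_per_row slice of the atlas in each axis.
    """
    scale = 1.0 / tiles_per_row
    col = tile_index % tiles_per_row
    row = tile_index // tiles_per_row
    return (col + u) * scale, (row + v) * scale

print(remap_uv(0.5, 0.5, 0))  # (0.25, 0.25): centre of the first tile
print(remap_uv(1.0, 1.0, 3))  # (1.0, 1.0): far corner of the last tile
```

    The trade-off is that atlased textures degrade together: drop the atlas one discard level and every face on it loses resolution at once.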
  12. CoffeeDujour

    Which Do You Prefer COPY or TRANSFER?

    Copy / Transfer ... Who cares. MODIFY >> Always - On Everything << PLEASE !
  13. I really can't believe we're still debating this.
  14. CoffeeDujour

    What the fitmesh LoD bug actually means

    Automated "offending avatar" removal systems shouldn't ever be part of a solution. We had a huge mess with the script-time slap fight that was based almost entirely on ignorance about how the script engine worked, combined with numbers a decade out of date. Creating a revolving door by removing people as they arrive on a region for "arbitrary reason" places a huge load on the region. Script access to avatar/attachment complexity would be a terrible idea, as we would just tie it to parcel ejection and create the same mess we had last time. The ideal solution is that the viewer would render avatars with lower complexity in order to keep the frame rate up, which is impossible with current clothing & avatar accessory creation workflows and their dependency on manually created LOD models. The only thing I can think of is that the viewer will end up having to ignore LOD models and do decomposition on the fly.