
Selc

Resident
  • Posts: 23
  • Joined
  • Last visited

Reputation: 23 Excellent

1 Follower


  1. Auto-generated texture atlases could be used for avatars, objects with more than one texture UUID, and complex linked objects, drastically lowering draw calls. Sometimes this can mean the difference between 5 fps and 60 fps for a given scene. If a resident manually modifies an avatar/object/linked object in any way, the viewer should notify the server; automatic updates to that avatar/object/linked object's (multiple) texture atlas cache could be rate-limited to once every 15 seconds by default. If possible, the server should rewrite only the delta to any updated texture atlas, and nearby residents should re-download only that delta. An algorithm could automatically exclude any face or set of faces that changes its texture UUID(s) more than once every 15 seconds (due to an auto-updating script) from its avatar/object/linked object's (multiple) static texture atlas cache. Animat says the mipmapping code may be sub-optimal in ZerahMordly's thread here; maybe future mipmap fixes can work in tandem with texture atlases. Texture bleed at atlas seams could easily be fixed by giving residents an edge-masking allowance for each article of clothing, or any mesh in general.
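The rate-limiting and exclusion policy in the item above can be sketched roughly. The 15-second default comes from the suggestion itself; the class, the linkset/face IDs, and every method name are hypothetical, not anything from actual Second Life server code:

```python
from collections import defaultdict

REBUILD_INTERVAL = 15.0  # seconds between automatic atlas rebuilds (suggested default)

class AtlasScheduler:
    """Hypothetical server-side sketch: rate-limit atlas rebuilds per linkset
    and exclude faces whose texture UUID flips faster than the interval."""

    def __init__(self, interval=REBUILD_INTERVAL):
        self.interval = interval
        self.last_rebuild = defaultdict(lambda: float("-inf"))  # linkset -> last rebuild time
        self.last_face_change = {}   # (linkset, face) -> time of last texture-UUID change
        self.excluded = set()        # faces left out of the static atlas

    def face_changed(self, linkset, face, now):
        """Record a texture-UUID change; exclude faces that change too often."""
        key = (linkset, face)
        prev = self.last_face_change.get(key)
        if prev is not None and now - prev < self.interval:
            self.excluded.add(key)   # scripted texture animation: skip atlasing this face
        self.last_face_change[key] = now

    def should_rebuild(self, linkset, now):
        """True if the linkset's atlas may be rebuilt now (delta written, delta sent)."""
        if now - self.last_rebuild[linkset] >= self.interval:
            self.last_rebuild[linkset] = now
            return True
        return False
```

A face that updates twice inside the window lands in `excluded` and would keep its own texture slot outside the atlas, so the static atlas never has to be rewritten for it.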
  2. It seems these distant matte backgrounds could work for rendering the contents of distant sims, but the clientside calculation of the polygons behind those matte backgrounds needs to be throttled past certain distances. Currently, even the clientside calculation of the polygons behind avatar impostors isn't being shoved in a rocket and launched into the sun. Any mesh beyond a certain polygon threshold could be flagged for this treatment automatically, and mesh houses with sensitive edges could be unflagged.
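One way the distance and polygon-threshold gating described above might look. Every threshold, name, and mode string here is an illustrative assumption, not an actual viewer setting:

```python
def render_mode(tri_count, distance_m, opted_out=False,
                poly_threshold=100_000, matte_distance_m=256.0):
    """Pick how to draw an object: full mesh, distant matte, or impostor.

    All thresholds are illustrative guesses, not real viewer defaults.
    """
    if opted_out:                    # e.g. a mesh house with sensitive edges, unflagged
        return "full"
    if distance_m >= matte_distance_m:
        return "matte"               # distant sim content: flat background, no per-frame polygons
    if tri_count > poly_threshold:
        return "impostor"            # heavy nearby mesh: cheap billboard, geometry skipped
    return "full"
```

The key point of the post is the middle branch: once an object is drawn as a matte or impostor, its source polygons should stop being processed entirely, not merely rendered at lower resolution.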
  3. My friend just ran another test:
     1. He set max # of non-impostor avatars to 1 and maximum complexity to 20,000.
     2. Went to a different club to see his RTX 2080 Ti calculating 18,000,000+ polygons with 36 people; the result was 11 fps.
     3. Blocked and derendered the other 35 people in the club.
     4. 18,000,000+ polygons were still being calculated at 11 fps, although no avatars were being rendered on-screen (everyone was invisible).
     5. Disabled camera constraints and zoomed all the way out into the sky, looking up.
     6. His fps instantly shot back up to 150+.
     7. Pressed Escape to zoom back in.
     8. 18,000,000+ polygons were still being calculated although no avatars were being rendered on-screen, and he plunged all the way back down to 11 fps.
     I experience similar results when I run the same tests. We've tried both the default viewer and Firestorm. Proof avatar impostors don't do anything, even for the latest gaming computers.
  4. Wondering if it's possible for the devs to implement an adaptive LOD feature that automatically and seamlessly reduces the polycount of avatars and static objects at different zoom levels or graphics settings, something that lets the user override the current four static manual LOD levels for mesh. It should also be enabled by default. The complexity update did nothing to fix user framerates enough to make VR viable in SL, because currently every mesh creator exploits the unfixed complexity loophole that allows low-complexity but super-highpoly avatars (everyone is guilty, including me and my hair team). Even the highest LOD of most rigged mesh in SL gets rendered at the lowest graphics setting when zoomed out; it's everyone's fault. We really want to see Second Life thrive again, even if it's 17 years old and long shunned by anyone not banned from Chuck E. Cheese's. And everyone wants to see VR re-implemented.
     My techie friend helped me benchmark some framerates with his RTX 2080 Ti: 200+ fps with three 40k-poly avatars (normal maps generated from highpoly models) in a beautiful mesh house on a quiet private island, which is more than enough fps for VR. But he gets only 6-15 fps in a crowded club full of highpoly mesh avatars, even with max # of non-impostor avatars set to 1 and maximum complexity set to 20,000. That would probably translate to only 2-7 fps in VR without adaptive LOD. The impostor avatars don't hinder the calculation of polygons: although the 26,000,000+ polygons in the clubroom with 30-40 avatars aren't being rendered at full resolution, they're still being calculated (enough polygons for 433 fully-detailed PlayStation 4 characters on your screen).
     Automatic, customizable adaptive LOD is a feature that could retroactively fix all of these issues, make framerates viable for VR again, and significantly boost fps on the average resident's laptop, which currently runs SL like a flipbook in crowded areas.
     [Image example: an average SL avatar today, at over half a million polys]
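The adaptive LOD idea above could, for instance, tie an object's triangle budget to its projected screen size instead of the four fixed levels. A minimal sketch, assuming an illustrative triangles-per-pixel density and clamping floor (none of these constants come from the viewer):

```python
import math

def adaptive_triangle_budget(full_tris, radius_m, distance_m,
                             fov_deg=60.0, screen_height_px=1080,
                             tris_per_px=0.5, floor_tris=500):
    """Scale a mesh's triangle budget continuously with its on-screen size.

    `tris_per_px` and `floor_tris` are assumed tuning knobs, not real settings.
    """
    # Projected height of the object's bounding sphere, in pixels.
    pixels = (2 * radius_m / max(distance_m, 1e-6)) \
             * screen_height_px / (2 * math.tan(math.radians(fov_deg) / 2))
    budget = int(pixels * pixels * tris_per_px)   # budget scales with screen area
    return max(floor_tris, min(full_tris, budget))
```

A half-million-poly avatar filling the screen would keep most of its triangles, while the same avatar across a clubroom would drop toward the floor value, which is the behavior the four static LOD levels approximate only coarsely.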
  5. Hiring an expert scripter (must have > 5 years LSL experience) to help with debugging mesh heads and HUDs. PM/notecard me your rates & past work.
  7. We at NewSea & rezology have some hairstyle photos that need to be updated. Send a notecard to Selc with examples of your photography and desired compensation per photo. Each photo needs to be shot at a high resolution (CMD+SHIFT+S Mac / CTRL+SHIFT+S Windows) then scaled down to 700 x 525 pixels.
  8. "We are hard at work upgrading all of the SL infrastructure" What does this mean? That sims will be cheaper and have fewer latency issues? "make SL more performant" What does this mean? Higher frames per second for everyone?
  9. I don't watch the Super Bowl, but what would be the perfect Super Bowl advertisement for Second Life? Something that would at least break even for Linden Lab and generate new users who would stay (unlike the 99% that left after the CSI: NY publicity).
  10. Kevin O'Leary bundled his software with printers in joint-venture deals. What would Linden Lab need to do to get Second Life bundled onto Windows PCs?
  11. Bitsy Buccaneer wrote: "Spent much of yesterday reinforcing my appreciation of Blender's Alt Merge function. It's well nifty. Am curious to know what the OP needs to reduce that much and losslessly. dae files not intended for SL use? A creator who's found his or herself on the wrong side of the complexity numbers? And of course this super ninja trick would be brilliant to learn about. Every little helps."
     Alt Merge is not lossless for planar hair. I'm looking for the same secret technique that CATWA is using for their mesh heads. CATWA Jessica, for example, has 30+ mesh heads linked together (all but the current animation frame's head invisible at any given time), yet the total complexity is only around 5,000 when worn. How do they bring something that should be 300,000+ complexity down to 5,000?