
It's time for a place for technical discussions on viewers


animats



Suddenly, there are more viewers. Many are not based on the Linden code. There's the Unity-based viewer, there's a new viewer written in C#, and there's my viewer written in Rust. Plus LL's new mobile viewer. We need some place to discuss the obscure technical issues of inter-operating with Linden Lab and Open Simulator servers. Where should something like that be? Here, or do we need a forum outside the LL systems?

Dull and boring stuff like:

  • Exactly what info needs to be sent to the server to tell it what the viewing frustum is. There's a body position and orientation, a head position and orientation, and a field of view message. This tells the simulator what the viewer needs to know about. How do those all interact with the interest list and with object kill messages? I've noticed that my field of view is set too narrow, and when moving objects are near the edge of the screen, they get updated less frequently. How much effect does the direction you're looking have on the interest list? What happens when the viewpoint is moving? Good coordination between viewer and server on this is important for good performance. (A rough sketch of the relevant message fields follows this list.)
  • Exactly what the rules for terrain elevation vs. terrain texture are. They're not quite what the documentation says they are. Different viewers show different ground textures for the same region.
  • Exactly how varregions (sizes other than 256x256) work in Open Simulator. What are the allowed cases for mixed region sizes? How do you find adjacent regions when the edges are of different length? Does that even work?
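
For concreteness, here is a rough sketch of the view-related fields involved. The field names follow the public message template for AgentUpdate, but the struct itself is only an illustration, not code from any particular viewer; the field of view itself goes out in the separate AgentFOV message.

```rust
// Sketch only: the camera/view information a viewer reports to the simulator
// in the AgentUpdate UDP message. Field names follow the public message
// template; the struct layout here is illustrative, not wire format.
#[allow(dead_code)]
struct AgentUpdate {
    agent_id: [u8; 16],         // AgentID (UUID)
    session_id: [u8; 16],       // SessionID (UUID)
    body_rotation: [f32; 4],    // BodyRotation (quaternion)
    head_rotation: [f32; 4],    // HeadRotation (quaternion)
    state: u8,                  // State
    camera_center: [f32; 3],    // CameraCenter: viewpoint position
    camera_at_axis: [f32; 3],   // CameraAtAxis: view direction
    camera_left_axis: [f32; 3], // CameraLeftAxis
    camera_up_axis: [f32; 3],   // CameraUpAxis
    far: f32,                   // Far: draw distance, which bounds the interest list
    control_flags: u32,         // ControlFlags: movement keys, fly, etc.
    flags: u8,                  // Flags
}

fn main() {
    // Just show that the sketch compiles and has a plausible size.
    println!("size of AgentUpdate sketch: {} bytes", std::mem::size_of::<AgentUpdate>());
}
```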

This is an exciting time for SL/Open Simulator technology. More people are doing serious development. The metaverse hype had an effect.


53 minutes ago, Coffee Pancake said:

Probably not here if opensim is to be part of the discussion, it's inherently off topic and therefore against the rules.

OK. Setting up a Discord server, "metaverse tech", and will be sending out invites.


19 hours ago, animats said:

OK. Setting up a Discord server, "metaverse tech", and will be sending out invites.

Sorry, I do not use Discord at all. Privacy (*) issues and, to add to the already bad picture, web compatibility issues with my browser (Pale Moon).

(*) E.g. (these are just a couple articles, and you will find many more in the same vein with a web search):

https://www.techradar.com/opinion/discords-ai-features-violate-our-privacy-and-its-time-we-pushed-back

and:

https://cybernews.com/privacy/discord-privacy-tips-that-you-should-use/

 


Well, Discord doesn't exactly have the best track record for GDPR compliance.

https://www.cnil.fr/en/discord-inc-fined-800-000-euros

But it seems Discord is the tool everyone defaults to these days. Popular often wins...

/me feels old considering Usenet groups, IRC and Jabber/XMPP servers as antediluvian options

Did anyone consider something like GitHub Discussions? It might have some benefits, as it makes it easy to link directly to the Linden viewer code on GitHub and to other code-focused tools that might help when discussing code-related topics.

Communicating on GitHub - GitHub Docs

 

 


20 minutes ago, Kathrine Jansma said:

/me feels old considering Usenet groups, IRC and Jabber/XMPP servers as antediluvian options

Discord is kind of the modern IRC; Reddit is Usenet.

Iono .. I miss old school usenet more than anything. Thanks googs...

20 minutes ago, Kathrine Jansma said:

Did anyone consider something like GitHub Discussions? It might have some benefits, as it makes it easy to link directly to the Linden viewer code on GitHub and to other code-focused tools that might help when discussing code-related topics.

Communicating on GitHub - GitHub Docs

Chat is likely to end up pretty vague, and expressly avoiding Linden code might be important to some people.


On 4/18/2023 at 3:25 PM, Love Zhaoying said:

I was somehow only aware of the mobile Unity viewer, and surprised that the Unity viewer is separate! 

Berry Bunny is working on the Crystal Frost TPV in Unity, together with several other devs. She created a Discord group for it. She also has a Patreon page to support development, as it's a long-term project for her.

Open Sim and Linux support is on the table, but not a high priority for now. SL comes first.


More progress on my Sharpview viewer.


Video from Fantasy Faire. (Click for video)

A few SL users are now testing this viewer. It's still experimental and not ready for general use yet.

It's move, view, and chat only. Avatars are blocks. Requires a gamer PC. Much more work to do.

We can have nice things. When I started this project two and a half years ago, the conventional wisdom in Second Life was that AAA game-quality responsiveness and visual quality were impossible with user-created content. I said that was merely a technical problem to be overcome. I think I've made my point.

I notice that the other viewers have improved quite a bit since I started publishing videos of my work.


7 hours ago, animats said:

I notice that the other viewers have improved quite a bit since I started publishing videos of my work.

Indeed... Get the Cool VL Viewer, enable the multi-threaded GL image worker ("Preferences" floater, "Graphics" tab, "GPU/GL features" sub tab, set the "OpenGL worker threads" spinner to -1) and restart. Then set "Advanced" -> "Rendering" -> "Textures" -> "Boost proportional to active fetches", and enable the smart frame rate limiter: in the "Preferences" floater, "Cool features" tab, "Misc." sub tab, set the "Frame rate limit" spinner to 60 (or your monitor VSync rate) and enjoy!

Want even faster rezzing when moving around? Reduce "TextureFetchBoostRatioPerFetch" (defaulting to 200) to, say, 50 or even 20 (not recommended for non-gamer PCs, however: suitable for 12+ core CPUs only, if you do not want stutter caused by excessive core load).

Oh, and do enable the SMAA anti-aliasing shader to replace the lame/blurry FXAA one ("Preferences" floater, "Graphics" tab, "Renderer settings" sub tab); courtesy of Rye Mutt (I backported his SMAA implementation from Alchemy).

 

I find your approach (rewriting a viewer from scratch) quite admirable and even heroic, but IMHO the amount of work and time you invest in this would likely be more profitable if invested in the current viewer code base (which can still be improved to a very large extent, especially if the renderer is migrated to Vulkan).

For example, and without even touching the renderer itself, it would be great if you could contribute your method to evaluate the required texture LoD in real time; my optimizations are "just" about pushing more fetching/decoding/caching work to (more) threads and increasing the number of required LoD calculations per frame in smart ways (such as during the frame limiting delays, or when a lot of textures are being fetched), but the real solution would be to have the required LoD evaluated in a thread.
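
For concreteness, a rough sketch of the kind of per-face calculation in question: estimate the on-screen pixel coverage of a textured face and derive the discard level actually needed. The formula and names are illustrative only, not any viewer's actual algorithm.

```rust
// Illustrative only: estimate how many texture pixels a face really needs on
// screen, then convert that into a JPEG2000 discard level (each level halves
// the resolution). Not taken from any existing viewer.
fn required_discard_level(
    face_radius_m: f32,    // rough world-space radius of the face
    distance_m: f32,       // camera-to-face distance
    screen_height_px: f32, // viewport height in pixels
    fov_vertical_rad: f32, // vertical field of view
    full_res_px: u32,      // texture's full resolution (e.g. 1024)
) -> u32 {
    // Pixels per meter of world space at this distance, for this FOV and viewport.
    let px_per_m = screen_height_px / (2.0 * distance_m * (fov_vertical_rad / 2.0).tan());
    // Approximate on-screen size of the face, in pixels.
    let needed_px = (2.0 * face_radius_m * px_per_m).max(1.0);
    // Number of halvings we can afford before the texture is too small; clamp to a sane range.
    let levels = (full_res_px as f32 / needed_px).log2().floor();
    levels.clamp(0.0, 5.0) as u32
}

fn main() {
    // Example: a 2 m wide face, 20 m away, 1080p viewport, 60 degree vertical FOV, 1024 px texture.
    let d = required_discard_level(1.0, 20.0, 1080.0, 60.0_f32.to_radians(), 1024);
    println!("discard level needed: {d}");
}
```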

Edited by Henri Beauchamp

IMO, a 'walking around' viewer already has a few choices. From my perspective, what I really want is a machinima viewer - one that works with gamer PCs. I really like the Lindens' viewer now as far as performance goes, but it falls far, far short of working for machinima work. And by that I don't mean on an empty sim with one or 2 avatars, but a regular sim with 50+ avis on it. Firestorm has always been the overall best viewer for that, but it is still slow and sluggish a lot of the time. I just don't think machinima-based use cases are among the use cases developers work from.


10 minutes ago, Jackson Redstar said:

IMO, a 'walking around' viewer already has a few choices. From my perspective, what I really want is a machinima viewer - one that works with gamer PCs. I really like the Lindens' viewer now as far as performance goes, but it falls far, far short of working for machinima work. And by that I don't mean on an empty sim with one or 2 avatars, but a regular sim with 50+ avis on it. Firestorm has always been the overall best viewer for that, but it is still slow and sluggish a lot of the time. I just don't think machinima-based use cases are among the use cases developers work from.

Machinima would be better served if you could separate recording events from rendering those events.

Parts of the SL platform scale linearly and that will always outpace the ability to render at a constant frame rate for film making.
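
For illustration, a rough sketch of what that record/replay split could look like: capture timestamped scene events during a live session, then step through them offline at whatever output frame rate the film needs. All types and names here are hypothetical.

```rust
// Hypothetical record/replay split: live capture writes timestamped events,
// an offline pass replays them at a fixed output frame rate and renders.
use std::time::Duration;

struct RecordedEvent {
    at: Duration,       // time since recording started
    object_id: u32,     // which object changed
    position: [f32; 3], // where it moved to
}

// Step through the recording one output frame at a time, handing each frame's
// events to the renderer. Offline, the renderer can take as long as it likes.
fn replay(events: &[RecordedEvent], fps: f64, mut render_frame: impl FnMut(&[RecordedEvent])) {
    let frame = Duration::from_secs_f64(1.0 / fps);
    let mut t = Duration::ZERO;
    let mut next = 0;
    while next < events.len() {
        t += frame;
        let start = next;
        while next < events.len() && events[next].at <= t {
            next += 1;
        }
        render_frame(&events[start..next]); // events landing in this output frame
    }
}

fn main() {
    let events = vec![
        RecordedEvent { at: Duration::from_millis(10), object_id: 1, position: [0.0, 0.0, 0.0] },
        RecordedEvent { at: Duration::from_millis(40), object_id: 1, position: [0.5, 0.0, 0.0] },
    ];
    replay(&events, 24.0, |batch| println!("frame with {} event(s)", batch.len()));
}
```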


23 minutes ago, Henri Beauchamp said:

I find your approach (rewriting a viewer from scratch) quite admirable and even heroic, but IMHO the amount of work and time you invest in this would likely be more profitable if invested in the current viewer code base (which can still be improved to a very large extent, especially if the renderer is migrated to Vulkan).

IIRC, @animats - isn't your new viewer in Rust or something other than C++?

Having a "from scratch" viewer in a "different language" certainly has its benefits, all other issues aside! 

I say that BECAUSE I'm a big C++ fan.


5 hours ago, Love Zhaoying said:

IIRC, @animats - isn't your new viewer in Rust or something other than C++?

Yes. 36,000 lines of safe Rust. Zero Linden Lab C++ code.

5 hours ago, Henri Beauchamp said:

I find your approach (rewriting a viewer from scratch) quite admirable and even heroic, but IMHO the amount of work and time you invest in this would likely be more profitable if invested in the current viewer code base (which can still be improved to a very large extent, especially if the renderer is migrated to Vulkan).

That approach requires staying reasonably close to the twenty-year-old architecture of the LL viewer.

5 hours ago, Coffee Pancake said:

Parts of the SL platform scale linearly and that will always outpace the ability to render at a constant frame rate for film making.

Not really. The fundamental rule of Sharpview is that the render thread doesn't wait for much. Here's the basic structure (a rough code sketch follows the list):

  • The render thread. This just draws, plus some housekeeping. It waits for almost nothing.
  • The incoming UDP message thread. This acknowledges incoming UDP messages and enqueues them for the update thread.
  • The update thread. This does what the server tells it to do. It's possible for it to fall behind, and its work queue can build up. It may be necessary to have two work queues - high priority, for scene updates near the camera, and low priority, for everything else.
  • The movement thread. This is started once per frame, and does all per-frame object movement. It's running on another CPU while rendering is running. It's supposed to finish before frame rendering does, but at present it sometimes falls behind by a frame. You see this in the video above where the main avatar's block sometimes appears in front of the camera for one frame. It's a lock contention problem, and I am working on that.
  • The asset queue system. This has several threads. These manage texture and mesh loading. Every 2 seconds, the loading priorities for queued assets are recomputed, based on where the camera is now. There's a thread dedicated to that. Then, there are 16 more threads (1.5x the number of CPUs) making requests of the asset servers. Up to six of them (half the number of CPUs) can be doing JPEG 2000 decodes. Those threads all run at low priority. When a region loads, the number of asset requests grows to a few thousand for a busy region, then drops as the assets load.
  • The cache purge thread. There's a thread purging the asset cache on disk of old files. That runs at startup, at a very low background priority.
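
For illustration, here's a minimal sketch of that thread split using standard-library channels. All names and payloads are made up for the example; this is not Sharpview's actual code.

```rust
// Minimal sketch of the thread split described above. Illustrative only.
use std::sync::mpsc;
use std::thread;

// Placeholder for decoded server messages.
#[allow(dead_code)]
enum SceneUpdate {
    ObjectUpdate { id: u32 },
    KillObject { id: u32 },
}

fn main() {
    // UDP thread -> update thread: messages are acked, then queued here.
    // A second, high-priority channel for updates near the camera could be
    // drained first, as suggested above.
    let (tx, rx) = mpsc::channel::<SceneUpdate>();

    // Incoming UDP message thread (stubbed: sends one update and stops).
    let udp_thread = thread::spawn(move || {
        tx.send(SceneUpdate::ObjectUpdate { id: 1 }).ok();
    });

    // Update thread: does what the server tells it to do, possibly falling behind.
    let update_thread = thread::spawn(move || {
        while let Ok(update) = rx.recv() {
            match update {
                SceneUpdate::ObjectUpdate { id } => { let _ = id; /* mutate scene state */ }
                SceneUpdate::KillObject { id } => { let _ = id; /* remove the object */ }
            }
        }
    });

    // Render loop (main thread): spawn per-frame movement work on another
    // core, draw, then join; the render thread waits on almost nothing else.
    for _frame in 0..1 {
        let movement = thread::spawn(|| { /* per-frame object movement */ });
        /* draw the frame here */
        movement.join().ok();
    }

    udp_thread.join().ok();
    update_thread.join().ok();
}
```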

Key concept: You can get more CPUs. You can get GPUs with more compute units. You can't get faster individual CPU cores; clock speeds hit 3 to 4 GHz a decade ago and have not advanced much since. So, if you want more performance, you have to do work on many CPU cores.

Performance is measured by frame rate and the length of the queues. The performance metrics are much more independent than in the C++ viewers. If the GPU is overloaded, the frame rate will drop, and this is independent of update rate. (Right now, it isn't, due to a locking problem at the WGPU level which others are now fixing.) The goal is constant frame rate independent of scene complexity.

If the updater is overloaded, the update queue will build up. If the movement system is overloaded by too many moving objects in the area, the movement thread will fall behind. If the network is slow or losing packets, that shows in the network statistics. Those are independent of each other. One of my current sub-projects is to have live graphs in the viewer to expose this info. Right now, I have to run Sharpview under Tracy, the profiler, and look at log files to find performance bottlenecks.
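
For illustration, the sort of per-frame metrics record those live graphs would display might look like this. The names are purely illustrative; the real instrumentation currently goes through Tracy and log files, as noted above.

```rust
// Illustrative per-frame metrics: each one is largely independent of the
// others, so overload in one subsystem shows up in its own number.
#[derive(Default, Debug)]
struct FrameMetrics {
    frame_time_ms: f32,       // GPU overload shows up as a rising frame time
    update_queue_len: usize,  // builds up if the update thread falls behind
    movement_lag_frames: u32, // nonzero when the movement thread misses a frame
    packets_lost: u64,        // network problems show up in the UDP statistics
    assets_pending: usize,    // asset fetch backlog, large while a region loads
}

fn main() {
    let metrics = FrameMetrics::default();
    println!("{metrics:?}");
}
```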

This is nothing like how the C++ viewers work. The architecture is totally different. It's hard to make changes this drastic to the LL-based viewers without being LL. You need control of the code base, or what you do gets broken every time they release a new version. Then you need a big team just to keep up.

Edited by animats

Right now, I have Linux and Windows versions available to a few testers. If some Mac developer would like to help with the macOS port, and knows Rust, they can download my open-source ui-mock, compile it with Rust, and try that on a macOS machine. Ui-mock is a cross-platform graphics library test. It uses the same stack of standard Rust graphics crates as Sharpview. In theory, those should run on macOS, using Metal. They work on Windows and Linux. Ui-mock brings up some dummy menus and draws a 3D cube, using all the heavy machinery needed for a larger system. Once ui-mock is running on a platform, Sharpview will probably run. Send me a message if you're interested and know some Rust.

Edited by animats

Here's a related subject of interest: the look of physically based rendering.

Here's a real estate ad appearing elsewhere on the forums today:


SL real estate ad, some standard viewer. Shadows off, apparently.


Same location, rendered in Sharpview.

Everything is nice and sharp. Details stand out better, but the softness of the original image has been lost, partly because this is full daylight with no environment filters.

This is what you get by applying physically-based rendering to existing Second Life content. LL has chosen not to do that. The new LL PBR viewer runs the old content through the old rendering pipeline, and new PBR content through a new PBR pipeline. We'll have to see how that looks when the LL PBR viewer comes out.

(The waves are somewhat off because Sharpview doesn't have animated textures yet.)

 


Crack Den. Note the fine detail on the trash on the ground. The cracked paint on the fence. You can see the surface detail on the discarded red beer cups.

Needs a bit of tone-mapping to lighten the background. That's coming.

Whether we want hyper-realism is a good question. This is harder-edged rendering than GTA V. It's great technically, but artistically?

If you've got hyper-realism, you can tone it down with filters. If you start from blurry, you're stuck with blurry.


This isn't from SL. This is from Chicago. Added for comparison. Crack Den's builders are getting the look right.

