Physically based rendering, and all that


animats

You are about to reply to a thread that has been inactive for 477 days.

Please take a moment to consider if this thread is worth bumping.


1 hour ago, Sammy Huntsman said:

Not gonna lie, I wish the lab would hold creators accountable for not optimizing their items. I mean video game companies don't even use overly high poly items in their games. 

My experience is that non-clothing object complexity can be handled in the viewer, given a GPU with enough resources. I've posted videos of walking around New Babbage, which is very detailed, using my own renderer. The LI limits keep the triangle count down to manageable levels.

Clothing complexity, though, remains a big problem. And it's hard.

The big players are working on it. I've mentioned Roblox, Ready Player Me, and recently there's MetaTailor. For a long time, Second Life had the only layered clothing system. But now others are doing it.

I've discussed "baking of mesh", as the possible next step to Bakes On Mesh. The idea is when you change clothes, something grinds away on a server somewhere and combines all the worn mesh pieces into simpler meshes. Hidden triangles are discarded, and simpler LODs are auto-generated. Just as, when you change clothes, a server grinds away and combines all the texture layers into one layer. So the job gets done once per clothing change, not once per frame. This is hard to do, but the other players in the industry are starting to do it.

Creators can't do this. They don't get to optimize the entire outfit and skin together. It has to be done when you get dressed.
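As a toy illustration of the bake step described above (not LL's algorithm; every name here is hypothetical), imagine each worn layer reduced to tagged triangles. A bake pass walks the layers outermost-first and discards inner-layer triangles that sit under opaque outer geometry:

```python
from dataclasses import dataclass

@dataclass
class MeshLayer:
    name: str
    order: int        # higher = worn further out
    triangles: list   # (body_region, opaque) pairs standing in for real geometry

def bake_outfit(layers):
    """Merge worn layers, dropping triangles hidden under opaque outer layers."""
    covered = set()   # body regions already covered by opaque geometry
    baked = []
    for layer in sorted(layers, key=lambda l: -l.order):  # outermost first
        for region, opaque in layer.triangles:
            if region in covered:
                continue  # hidden: discard instead of rendering it every frame
            baked.append((layer.name, region))
            if opaque:
                covered.add(region)
    return baked

skin  = MeshLayer("skin",  0, [("torso_front", True), ("hands", True)])
shirt = MeshLayer("shirt", 1, [("torso_front", True)])
print(bake_outfit([skin, shirt]))  # skin torso dropped, hands kept
```

A real implementation would work on actual triangles with occlusion tests, then regenerate LODs for the merged mesh, but the payoff is the same: the cost is paid once per clothing change rather than once per frame.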

  • Like 3
  • Haha 1

1 hour ago, Love Zhaoying said:

I missed the "why". The "user-created content"?

"The idea of building things -- be they social relationships or objects or just building the world around yourself -- is pretty broadly appealing, in particular to soccer moms or teenage girls or people you might not think of conventionally as videogamers," Rosedale said.

Found in wiki.

Edited by Ron Khondji
  • Sad 1

12 minutes ago, Ron Khondji said:

"The idea of building things -- be they social relationships or objects or just building the world around yourself -- is pretty broadly appealing, in particular to soccer moms or teenage girls or people you might not think of conventionally as videogamers," Rosedale said.

Found in wiki.

One would hope he doesn't feel that way anymore about the demographics.


1 hour ago, Love Zhaoying said:

I missed the "why". The "user-created content"?

.. because the fix would be an involved project touching every part of the viewer internals, probably require some server tuning, and it would be a major ongoing development commitment for multiple high skill developers from LL for at least a year.

 

  • Like 2

16 minutes ago, Coffee Pancake said:

.. because the fix would be an involved project touching every part of the viewer internals, probably require some server tuning, and it would be a major ongoing development commitment for multiple high skill developers from LL for at least a year.

 

Sounds like a good investment for the potential gains.


4 hours ago, animats said:

I've discussed "baking of mesh", as the possible next step to Bakes On Mesh. The idea is when you change clothes, something grinds away on a server somewhere and combines all the worn mesh pieces into simpler meshes. Hidden triangles are discarded, and simpler LODs are auto-generated. Just as, when you change clothes, a server grinds away and combines all the texture layers into one layer. So the job gets done once per clothing change, not once per frame. This is hard to do, but the other players in the industry are starting to do it.

Creators can't do this. They don't get to optimize the entire outfit and skin together. It has to be done when you get dressed.

Yes, agree. Would be very nice to have this.

It's not easy to implement though, as you say. Which isn't to say it couldn't/shouldn't be done, just that the path is long and rocky.

  • Like 1

7 hours ago, Coffee Pancake said:

At this point, building an SL preview plugin for Blender is far simpler than building Blender into the SL client, even if that client doesn't log in or have social tools (which, due to the way the viewer works, would actually be a huge amount of work - the viewer is a 3D dumb terminal, no more).

This is also an option.

I think it's more about how Linden can/could/should/maybe/somehow move 'optimisation' earlier in the process of asset production.

And I think if this were done, there could be a 'standard' for creating SL-compatible mesh. With a standard (a baseline) to which people are working, ideas like the Baked Mesh Model would, I think, be a little less arduous to implement.

 

 


5 hours ago, Coffee Pancake said:

We've been saying that for a decade, and even louder and with greater urgency since Apple announced they were deprecating OpenGL, but here we are.

We need a complete and fully multithreaded overhaul of the asset fetch render pipeline, and we needed it years ago. This is a huge project and would take a significant amount of time to implement and test; it is well beyond the scope of any unfunded 3rd party viewer project (even the mighty FS).. and now you know why we don't have it.

 

3 hours ago, Coffee Pancake said:

.. because the fix would be an involved project touching every part of the viewer internals, probably require some server tuning, and it would be a major ongoing development commitment for multiple high skill developers from LL for at least a year.

 

So in the meantime, encourage creators to optimize the assets they submit to the platform.

  • Like 1

14 minutes ago, Codex Alpha said:

 

So in the meantime, encourage creators to optimize the assets they submit to the platform.

This is the base of the topic: what is meant by "optimize"?

This is not to say it isn't being thought about by residents or by Linden. Resident creators pretty much do the best they can; it's just that "optimize" means different things depending on the product.

For example, indoor furnishings (behind walls, so not seen from long distances) can have less complex LODs than outdoor furnishings (which can be seen from long distances). In this case both opposites are optimal.

Which leads to: what is the Linden standard for optimisation? At the moment the standard is pretty broad. It's whatever is accepted by the uploader code.

And if there were to be a standard, what would it be? That remains an open question.


With modern rendering technology and GPUs, complex non-clothing content within current SL LI limits is just not a big problem. Both textures and meshes have levels of detail. Space keeps everything from being in the same place.


Apothecary shop, Babbage Palisade. Experimental renderer. All those bottles are highly detailed.


You can get close to every label on every bottle, or the books on the bookshelves, and it all looks good.

This is running about 52 FPS on an NVidia 3070, by the way. FPS should improve over the next few months.

Clothing, though. A crowd of well-dressed people in Second Life creates a severe viewer overload, as we all know. Levels of detail don't help. Everything you're wearing gets the same level of detail, determined by the size of the avatar. So a finely detailed bracelet will be rendered with all its glorious triangles even when it's across the room. There's no land impact limit for clothing, allowed complexity limits are very high, and all the layers and skin you can't see are being rendered.
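A small sketch shows why this hurts. The function below picks a level of detail from apparent screen size; the thresholds and `lod_factor` are made-up illustrative numbers, not SL's actual values. Judged by the avatar's bounding size, a 3 cm bracelet ten meters away renders at the avatar's LOD instead of the lowest level its own size would select:

```python
def lod_for(radius, distance, lod_factor=1.25):
    """Pick a LOD (3 = highest) from apparent size; thresholds are illustrative."""
    if distance <= 0:
        return 3
    apparent = radius * lod_factor / distance
    if apparent > 0.24:
        return 3
    if apparent > 0.06:
        return 2
    if apparent > 0.03:
        return 1
    return 0

avatar_lod = lod_for(1.0, 10.0)    # ~2 m avatar across the room: medium LOD
bracelet   = lod_for(0.03, 10.0)   # bracelet judged on its own size: lowest LOD
print(avatar_lod, bracelet)        # the gap between these is the wasted work
```

Because attachments inherit the avatar's LOD, the bracelet is actually drawn at `avatar_lod` with all its triangles; per-attachment LOD selection, or a bake pass over the whole outfit, would let it drop to the lowest level.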

That's where content optimization efforts need to be focused - on clothing.

 

  • Like 4
  • Thanks 6

  • 1 month later...

I have heard officially that PBR is coming to SL soonish and is already working in internal viewers rather well, this is just one of the changes they have been working on of late since Philip returned as a consultant. Things are about to get much better!


On 7/14/2022 at 4:05 PM, NaraPhox said:

I have heard officially that PBR is coming to SL soonish and is already working in internal viewers rather well.

It's been discussed at Creator User Group. The idea is to have more layers one can add, using GLTF format. This allows some new effects. But as of the last mention, subsurface scattering isn't one of the new layers. Subsurface scattering is the feature needed to make skin look right. With the current setup, you have to pick some reflectance between "dead" and "shiny", and it won't look right under varying light conditions.

  • Thanks 1

  • 1 month later...

I am interested in the PBR viewer and regularly check the official repository to see what kind of movement is occurring.

I recently found a change in the DRTVWR-559 branch in the official repositories that enables PBR by default for medium and higher quality images, so I'd like to build and test it.

I wish there was an official viewer for testing, but I can't test with the official viewer because I can't subscribe to the Discord channel.


As long as this thread is back in circulation, I wonder if anybody has a reading on the practical implications of this:

Quote

In order to be compliant with glTF, tangents are going to have to be generated in MikkTSpace, where normal maps are applied. This means that existing normal maps within Second Life / normal maps generated without using MikkTSpace may not look correct when rendered via the PBR pipe.

from Inara Pey's blog summary of the September 1st Content Creation User Group. Do we know how badly broken existing content will be?


On 9/5/2022 at 10:54 AM, Qie Niangao said:

Do we know how badly broken existing content will be?

It is not yet set in stone how legacy normal maps will be treated. The current plan is to keep legacy materials as-is, but maybe with a debug setting to force MikkTSpace on all meshes. There are a few things that still need clarification.

The MikkTSpace implementation for PBR is currently underway.

  • Like 1
  • Thanks 1

  • 1 month later...
  • 4 weeks later...

According to the SUG, the rollout is to use glTF, but just the materials part at first, so we're looking at JSONs that contain a metallic map, roughness map, diffuse map, normal map, occlusion map, and emissive map. Interestingly, the normal map has a scaling factor in the materials JSON, and the occlusion map has a strength level. Whether we can adjust these in Second Life's implementation, and whether editing the JSON before upload does anything, remains to be seen.

Source: a PNG overview of the glTF standard; the right side explains the materials JSON. https://github.com/KhronosGroup/glTF/blob/main/specification/2.0/figures/gltfOverview-2.0.0b.png
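For reference, this is roughly what such a materials entry looks like per the glTF 2.0 spec. The field names below come from the spec, including the normal `scale` and occlusion `strength` mentioned above; whether SL's implementation honors each field is a separate question.

```python
import json

# Minimal glTF 2.0 material, following the Khronos spec linked above.
material = {
    "pbrMetallicRoughness": {
        "baseColorTexture": {"index": 0},           # the "diffuse" map
        "metallicRoughnessTexture": {"index": 1},   # roughness in G, metallic in B
        "metallicFactor": 1.0,
        "roughnessFactor": 1.0,
    },
    "normalTexture": {"index": 2, "scale": 1.0},        # the scaling factor
    "occlusionTexture": {"index": 3, "strength": 1.0},  # the strength level
    "emissiveTexture": {"index": 4},
    "emissiveFactor": [0.0, 0.0, 0.0],
}
print(json.dumps({"materials": [material]}, indent=2))
```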


1 hour ago, Jabadahut50 said:

According to the SUG, the rollout is to use glTF, but just the materials part at first, so we're looking at JSONs that contain a metallic map, roughness map, diffuse map, normal map, occlusion map, and emissive map. Interestingly, the normal map has a scaling factor in the materials JSON, and the occlusion map has a strength level. Whether we can adjust these in Second Life's implementation, and whether editing the JSON before upload does anything, remains to be seen.

Source: a PNG overview of the glTF standard; the right side explains the materials JSON. https://github.com/KhronosGroup/glTF/blob/main/specification/2.0/figures/gltfOverview-2.0.0b.png

The public alpha testing phase of the GLTF Materials implementation for Second Life is currently underway. You will find the current project viewer and some additional information here:
https://releasenotes.secondlife.com/viewer/7.0.0.576331.html

The normal scaling factor and the occlusion strength level aren't part of the implementation, though.

  • Like 1

21 minutes ago, arton Rotaru said:

The public alpha testing phase of the GLTF Materials implementation for Second Life is currently underway. You will find the current project viewer and some additional information here:
https://releasenotes.secondlife.com/viewer/7.0.0.576331.html

The normal scaling factor and the occlusion strength level aren't part of the implementation, though.

Oh nice... appreciate it.


9 minutes ago, Jabadahut50 said:

Oh nice... appreciate it.

But be aware, it's ALPHA! The GLTF Materials do work quite well, though. But a lot of things around it are still WIP. The more people test it and file Jiras, the sooner we may get it released.

  • Like 1

  • 1 month later...

In the viewer's release notes, I can see that a new PBR project viewer release is available; however, clicking on the link to get to it, I get an XML error page instead:

[Image: XML error page]

You may however get around it by replacing the build number in the link for the former release (link in Arton's post above) with the new one listed in the viewer's release notes page, which, for 64-bit Windows, gives this, for example.

Yet, could someone at LL fix the PBR viewer release notes page, please?

  • Like 2

  • 3 weeks later...

“…They say smart materials are fair. ‘Tis a truth, I can bear them witness. And good to have—’tis so, I cannot reprove it. And useful, but for thy implementation. By my troth, it is no addition to their function—nor no great argument of the lack of it, for we will be horribly in love with them.”
                                                      -Shakespeare (kinda) Much Ado about Nothing

Here is the skinny on import/export of glTF 2.0 via Blender, should anyone want it:

https://docs.blender.org/manual/en/latest/addons/import_export/scene_gltf2.html

Edited by Maxwell Graf

