
Imposters!


anitabush

Recommended Posts

Does anyone have any information on making imposter objects for the lowest LOD? For simple objects like pictures in frames the process is simple (if time-consuming to do manually), but I'm struggling with more complex shapes.
 

I don't know if I'm just being dense, but I can't find any guides or videos on making imposters, apart from links to addons like instant imposters that don't seem compatible with SL. I use Blender, but anything would be helpful at this point.

For small complex objects the lowest LOD kicks in so quickly that I’m really struggling to make something acceptable.


38 minutes ago, anitabush said:

Does anyone have any information on making imposter objects for the lowest LOD? […]

As Wulfie points out, Second Life doesn't have support for imposters, so you basically have to cheat and make them manually by creating flat planes and generating 32-bit textures to apply to them (either by baking the information from the high LOD model or simply rendering low-resolution images from different angles). You'll probably need at least 2 or 3 intersecting planes and, depending on the item in question and what it's going to be used for, you may want to make the planes double-sided by duplicating them and flipping the faces/normals.

It's helpful to set up a separate material for the imposters so that you can adjust their properties separately from the other LOD models. Bear in mind that these imposters are only ever seen from a distance, so you really don't need a lot of pixels for the textures. If the texture for your high LOD model already has an alpha channel then, with a little creative arrangement of the UV mapping, you can probably sneak the imposter textures into some of the wasted space. It's a good idea to set the alpha settings to Alpha Masking for the faces on your imposter models. And if you only add an alpha channel to your main texture so you can fit the imposter textures on the same image, while the textures for the higher LOD models don't actually use transparency, then set the alpha settings for those higher-LOD faces to None to avoid any alpha swapping issues.
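If it helps, here's a rough Blender (bpy) sketch of that kind of setup: two crossing planes sharing a dedicated imposter material, each duplicated and flipped to make it double-sided. Treat it as a starting point only (it assumes Blender 2.8+, and the material name "impostor" is just a placeholder):

import bpy, math

# two crossing vertical planes to act as the lowest-LOD imposter
for rot_z in (0.0, math.pi / 2):
    bpy.ops.mesh.primitive_plane_add(size=1.0, rotation=(math.pi / 2, 0.0, rot_z))
    plane = bpy.context.active_object
    # a separate material so the imposter faces can get their own alpha settings
    mat = bpy.data.materials.get("impostor") or bpy.data.materials.new("impostor")
    plane.data.materials.append(mat)
    # double-side the plane by duplicating it and flipping the duplicate's normals
    bpy.ops.object.duplicate()
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.mesh.flip_normals()
    bpy.ops.object.mode_set(mode='OBJECT')

You'd still bake or render the textures and do the UV layout by hand, of course.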


1 hour ago, Wulfie Reanimator said:

Maybe @animats can give some input here, but the short answer is that currently we can't make 2D sprite/image impostors for mesh LODs.

At present, we can't really do sprite impostors in SL. I have a demo which fakes it.


Impostor demo. One stone lion is a model with 20,000 faces. The other is a sprite impostor with 2 faces. Which is which? Visit Vallone/123/11/36 and see.

This is a proof of concept. The impostor object senses the nearest avatar, turns the impostor plane to face the avatar, and selects one of 8 images from a texture atlas to display. This approach works for only one avatar at a time. If you go to look at it, walk around rather than camming; it can't tell where the camera is, but can tell where the avatar is. The size of the fountain pool discourages people from getting too close to the impostor.
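The frame-picking part boils down to simple math. The demo does it in LSL, but here's a rough Python sketch of the idea (made-up names, not the demo's actual code):

import math

FRAMES = 8  # one pre-rendered view per 45 degrees

def frame_towards(impostor_pos, avatar_pos):
    # horizontal bearing from the impostor to the nearest avatar
    dx = avatar_pos[0] - impostor_pos[0]
    dy = avatar_pos[1] - impostor_pos[1]
    bearing = math.atan2(dy, dx) % (2 * math.pi)
    # snap to the nearest of the 8 camera angles the pictures were taken from
    return int(round(bearing / (2 * math.pi) * FRAMES)) % FRAMES

The impostor plane is rotated to the same bearing, so the selected picture always faces the avatar.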

The images were made with a test fixture I built. I have a stage with a turntable, a green background, and a red frame. There's a chair fixed some distance from the stage. You link the object to the turntable, sit in the chair, and click on the stage to start. The turntable rotates through eight positions, and beeps at each stop, to tell the user to push the "take picture" button. Then the 8 pictures go into a Python program which trims them to the size of the red frame and makes the green background transparent. That produces the impostor images. Those are automatically trimmed to fit the outline of the object and assembled into one image with 8 frames. The appropriate frame is selected by changing the image UV offset from LSL.
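The post-processing could look roughly like this (a sketch of the approach only, assuming Pillow; file names, thresholds and atlas layout are placeholders, not the real script):

from PIL import Image

def key_out_green(path, threshold=80):
    # make the green-screen background transparent (crude chroma key)
    img = Image.open(path).convert("RGBA")
    px = img.load()
    w, h = img.size
    for y in range(h):
        for x in range(w):
            r, g, b, a = px[x, y]
            if g > 150 and g - max(r, b) > threshold:
                px[x, y] = (0, 0, 0, 0)
    return img

def build_atlas(frames):
    # paste the 8 keyed frames side by side into a single texture
    w, h = frames[0].size
    atlas = Image.new("RGBA", (w * len(frames), h))
    for i, frame in enumerate(frames):
        atlas.paste(frame, (i * w, 0))
    return atlas

atlas = build_atlas([key_out_green(f"view_{i}.png") for i in range(8)])
atlas.save("impostor_atlas.png")

In-world, showing frame i then just means setting the horizontal repeat to 1/8 and offsetting the texture to the matching eighth.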

It's a cool demo, but not a useful approach. It does let you see how far away you have to be to maintain the impostor illusion.

The viewers do this now for avatars. Avatar impostors are generated in the viewer, which means it has to drag in all the avatar's textures and objects to generate the impostor. So it doesn't help with initial appearance time at all. You're still stuck in pink cloud mode.

Good impostors would take more machinery. You'd like to be able to impostor big objects, such as buildings, with all their contents. Then you could look at large cityscapes. But SL doesn't have a way to talk about groups of objects in that way.

I'm not sure this is a win with a modern GPU. Triangles just aren't that expensive any more. As the LL viewers go to retained mode, and maybe even to Vulkan, it becomes less important to keep the triangle count down. I've shown that with my experimental viewer, where I have all mesh at full resolution but reduce texture resolution based on screen area covered. Beq Janus points out that most of the data volume sent to the viewer is textures, not meshes.

The real win is showing the world beyond draw distance. I'm looking into impostoring entire regions for my experimental viewer. Take pictures of each region from 4 or 8 directions, and from above. These would be stored on a server, like the map tiles. The viewer would assemble these into a sim surround out beyond your draw distance. You'd be able to see distant shores when sailing, and distant airports when flying. Like my old slippy map of SL.

Above Second Life. Those are the regular Second Life map tiles, viewed with a slippy map program. When you're flying, distant regions should look at least that good.

(If BUG-226530 from 2019 is ever fixed, I'll make that slippy map freely available again.)

SL is such a good big world, yet it's really hard to see it big. Few people have ever seen the Snowlands or Mt. Campion from a distance.

 


Some excellent answers already but to add a little bit, sometimes it's possible to use the same texture for the main model as for the impostor. I've posted two examples of it here earlier:

This trick doesn't work very often but it's very useful when it does.

 

3 hours ago, anitabush said:

For small complex objects the lowest LOD kicks in so quickly that I’m really struggling to make something acceptable.

Yes, LL didn't think things through when they decided that fixed, object-radius-dependent swap distances were a good idea. It's good for prims, but for meshes it really only works for mid-sized (say 1-2 m object radius) objects. Small objects swap way too early and big ones way too late. Of all the content creation related mistakes LL has ever made, I think this is one of the top three... I mean four... no, five.

But we can trick the viewer into thinking an object is bigger than it is. Look at this:

[Screenshot: the vase with a single loose vertex floating above it]

That little dot at the top is a loose vertex. What it does is double the nominal height of the vase, significantly increasing the swap distances. You have to sort the vertices so the loose one is number 1 on the list to prevent it from being culled by the uploader, but that's easy to do in Blender. Select the loose vertex only and:

[Screenshot: the vertex sorting step in Blender]

You can achieve the same by substituting a degenerate or 100% transparent tri for the loose vertex, but I prefer this method. It's cooler, more elegant, and it does save one tri and two vertices from the model.
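If you'd rather script it than click through the menus, a rough bpy/bmesh version of the same trick looks something like this (my own sketch; it assumes the base of the mesh sits at z = 0 and that the Sort Elements operator puts the selected vertex first, so double-check the vertex really ends up at index 0 before exporting):

import bpy, bmesh

obj = bpy.context.active_object  # the LOD model, assumed active
bpy.ops.object.mode_set(mode='EDIT')
bm = bmesh.from_edit_mesh(obj.data)

# add a loose vertex at twice the current top to double the nominal height
top = max(v.co.z for v in bm.verts)
loose = bm.verts.new((0.0, 0.0, top * 2.0))

# select only the loose vertex...
for v in bm.verts:
    v.select = False
loose.select = True
bmesh.update_edit_mesh(obj.data)

# ...and sort it to the front of the list so the uploader doesn't cull it
bpy.ops.mesh.sort_elements(type='SELECTED', elements={'VERT'})
bpy.ops.object.mode_set(mode='OBJECT')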


On 1/5/2023 at 8:03 PM, ChinRey said:

But we can trick the viewer into thinking an object is bigger than it is. […]

Thank you very much for this, it solved my issue! I’d been pondering trying to fool the system into treating it as a larger object but my ideas were a lot more complex than a single vertex.

On 1/5/2023 at 6:45 PM, animats said:

At present, we can't really do sprite impostors in SL. I have a demo which fakes it. […]

 

That's a really cool demo; it's a shame something like this can't work for more than one person.

On 1/5/2023 at 4:54 PM, Fluffy Sharkfin said:

As Wulfie points out, Second Life doesn't have support for imposters, so you basically have to cheat and make them manually […]

 

On 1/5/2023 at 4:10 PM, Wulfie Reanimator said:

Maybe @animats can give some input here, but the short answer is that currently we can't make 2D sprite/image impostors for mesh LODs.

If you could share a screenshot of what your object looks like, and an idea of how small of a scale we're talking, maybe we can give more advice.

Thank you both for the tips. I was hoping I had just missed something with sprite imposters but at least I know not to bother with that approach now.

I appreciate all the information very much, thank you everyone.


On 1/5/2023 at 8:03 PM, ChinRey said:

 

You can achieve the same by substituting a degenerate or 100% transparent tri for the loose vertice but I prefer this method. It's cooler, more elgant and it does save one tri and two vertices from the model.

Sorry to intervene here, but I have to disagree with these statements.

This method might be cooler; coolness is subjective. Technically, however, it's not elegant, and it may introduce exceptions in the exported files and in how they're handled.

First, think of the mesh as a polylist: with a stray vertex you're breaking the standard and introducing an exception, a listed vertex that doesn't belong to any face. Indeed, you need to make sure it is vertex 0 or it gets pruned by the uploader, so you're basically brute-forcing it into the definition. There is no context where brute-forcing is cool or elegant.

Secondly, with this method you extend the bounding box along just one or at most two axes, and in the latter case you also offset the pivot point sideways.

Using a stray triangle instead gives only benefits.

The first benefit is that you keep the definition as a polylist.

The second benefit: if you make it a degenerate triangle (a zero-area polygon, for whoever is wondering what that is: basically a triangle in which two vertices overlap) it doesn't render, but it lets you span a cube diagonal, extending the bounding box in all directions while leaving the pivot point untouched, if done carefully.

The second benefit also gives us a third one: vertex normals don't have to be recalculated during upload, so the mismatched-normals problem we had until some time ago wouldn't appear in the first place. I know a fix was introduced by Beq Janus, and I believe it has made it into the official LL viewer too. Still, if you can avoid the recalculation to begin with, so much the better, because that fix only works when hardware shaders are on. With hardware shaders off, the mesh surfaces still show the vertex normal inconsistencies.

The fourth benefit of a cubic extension is that the LOD switching happens the same way, just with a better volume distribution around the object and a minimized offset from the surface in all directions. This benefit might be subjective and depend on the specific use case, so a case where a side-offset bounding box is preferable to the evenly distributed version is perfectly admissible... Still, a triangle is better than a stray vertex.

The argument about saving one triangle and two vertices, though, is ridiculous, no offense meant. I mean, excessive load is never going to come from that one extra triangle, come on.
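For anyone who wants to try it, here's a bare-bones bmesh sketch of the degenerate triangle setup (my own illustration; the half-extents are a placeholder you'd match to the bounding box you actually want):

import bpy, bmesh
from mathutils import Vector

obj = bpy.context.active_object
bm = bmesh.new()
bm.from_mesh(obj.data)

half = Vector((1.0, 1.0, 1.0))  # placeholder half-extents of the desired box

# three distinct vertices, two of them at the same spot: a zero-area triangle
a = bm.verts.new(-half)
b = bm.verts.new(half)
c = bm.verts.new(half)
bm.faces.new((a, b, c))  # spans the full cube diagonal but never renders

bm.to_mesh(obj.data)
bm.free()

Because the triangle reaches from one corner of the box to the opposite one, the bounding box grows evenly on all axes and the pivot stays where it was.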


8 minutes ago, OptimoMaximo said:

First, think of the mesh as a polylist: with a stray vertex you're breaking the standard and introducing an exception, a listed vertex that doesn't belong to any face. Indeed, you need to make sure it is vertex 0 or it gets pruned by the uploader, so you're basically brute-forcing it into the definition.

I have to correct myself here. The loose vertex is removed by the uploader, but only after the bounding box has been defined:

[Screenshot: the mesh uploader preview showing the bounding box still extended by the loose vertex]

This is basically the same principle as LL used for their infamous 8+ faces mesh upload function.

 

22 minutes ago, OptimoMaximo said:

Secondly, with this method you extend the bounding box along just one or at most two axes, and in the latter case you also offset the pivot point sideways.

You mean a small object right in the middle of a big bounding box like this?

[Screenshot: a small object centered in a much larger bounding box]

That's possible too, but it's a completely different trick that I definitely won't recommend, among other things because this method does retain one loose vertex.

If you need to keep the pivot point at the center you have to use hidden tris of course, but how often is that factor relevant? In the few cases where the pivot point is a significant factor, chances are you don't want it at the object's center anyway.

When it comes to LOD control there's no need to extend along more than one axis. Look at the formula for calculating the swap distance from high to mid:

d = √(x² + y² + z²) / 0.6 × L

(L is the viewer's LOD factor)

You can see even at a glance that unless the x, y and z sizes are fairly similar, it's really only the largest of them that significantly affects the swap distances, so there's no point in extending the bounding box in several directions (unless, as I said, you need to preserve the pivot point).
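To put some purely illustrative numbers on that (assuming a LOD factor of 1.125, one of the viewer defaults):

import math

def mid_swap_distance(x, y, z, lod_factor=1.125):
    # d = sqrt(x^2 + y^2 + z^2) / 0.6 * L
    return math.sqrt(x*x + y*y + z*z) / 0.6 * lod_factor

print(mid_swap_distance(0.2, 0.2, 0.2))  # ~0.65 m: a small trinket swaps almost immediately
print(mid_swap_distance(0.2, 0.2, 0.4))  # ~0.92 m: a loose vertex doubling the height
print(mid_swap_distance(0.2, 0.2, 1.0))  # ~1.95 m: the largest axis dominates the result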

 

34 minutes ago, OptimoMaximo said:

The second benefit also gives us a third one: vertex normals don't have to be recalculated during upload

There is no reason why the loose vertex should affect the normals of the vertices that end up being uploaded.

 

36 minutes ago, OptimoMaximo said:

The fourth benefit of a cubic extension is that the LOD switching happens the same way, just with a better volume distribution around the object and a minimized offset from the surface in all directions. This benefit might be subjective and depend on the specific use case, so a case where a side-offset bounding box is preferable to the evenly distributed version is perfectly admissible... Still, a triangle is better than a stray vertex.

I have absolutely no idea what you're talking about.

 

1 hour ago, OptimoMaximo said:

The argument about saving one triangle and two vertices, though, is ridiculous, no offense meant. I mean, excessive load is never going to come from that one extra triangle, come on.

Yes, that was a bit tongue-in-cheek. ;)


1 hour ago, ChinRey said:

There is no reason why the loose vertex should affect the normals of the vertices that end up being uploaded

Well, it's not because of the loose vertex. Having a cube-extended bounding box made with a triangle forces the mesh to be treated as a cube-proportioned mesh, and during the conversion in the uploader the vertex normals don't get squashed, which is what requires the recalculation I mentioned.

Then, about the swap distance: do you realize that the first part of the formula is actually the vector magnitude? Of course having one of the components bigger than the other two affects the bounding box significantly, but you can still get the same magnitude with smaller vector components taken collectively, and that reduces the single-axis offset.
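For example (numbers purely illustrative):

√(0.2² + 0.2² + 1.0²) = √1.08 ≈ 1.04  (one long axis)
√(0.6² + 0.6² + 0.6²) = √1.08 ≈ 1.04  (spread over all three axes)

Same magnitude, and therefore the same swap distance, but the second box stays evenly distributed around the object.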

