
How can Linden Lab encourage better content?


Kyrah Abattoir

44 minutes ago, animats said:

We had to unlink the model and resize everything to real world sizes.

I know we're not supposed to promote our own products here but this is a freebie and it isn't actually mine. I only updated it to handle objects larger than 10 m and made it available on MP:

https://marketplace.secondlife.com/p/MBAGR-Object-Resizer-v-521/6622985


1 hour ago, animats said:

That's a bug. File a JIRA.

It's not a bug; it's intended behavior on LL's side. I wanted to make an exporter from Maya to the encoded mesh asset, and I understand it was done because of the different standards in different packages (e.g. Maya uses centimeters; 3ds Max and Blender use meters). It's not a matter of the embedded arbitrary measurement units from years ago, which Maya and 3ds Max never had, by the way. I agree it shouldn't be handled that way; there are many enforced rules/limitations already, so I can't see a reason not to enforce one measurement unit over any other too.

 

1 hour ago, animats said:

Mis-scaling makes builds look tacky.

Not if you build with a conversion in mind anyway. Blender units have equaled meters for quite a long time now. The issue sits with the unaware modeler who builds by eyeballing from a reference: relative to each other, the objects are all scaled correctly, just not to meter scale.


1 hour ago, animats said:

and internal coordinates in 3D modeling programs were 32-bit integers.

Actually, they still are. That's why Python was integrated into Maya (whose embedded scripting language can't convert to any other C type) and 3ds Max got C type conversion in MAXScript as well.

Edit: 32-bit floating point, not integers, my bad

Edited by OptimoMaximo

7 minutes ago, OptimoMaximo said:

Blender units have equaled meters for quite a long time now.

Actually, you get to specify what units you want to use in Blender. It's in a menu item. Then the conversion factor, if present, goes into the COLLADA file. The uploader should use the conversion factor in the COLLADA file if it is present.

Sometimes you need to work smaller in Blender, because it has zoom limits. You might need to do jewelry or clocks in centimeters.
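
For reference, the conversion factor lives in the COLLADA <asset><unit> element. Here's a minimal sketch of reading it, assuming the standard COLLADA 1.4 schema; the function name and file handling are mine, not the uploader's actual code:

# Minimal sketch: read the scale factor a COLLADA file declares, assuming
# the standard <asset><unit meter="..."/> element (COLLADA 1.4 schema).
import xml.etree.ElementTree as ET

NS = "{http://www.collada.org/2005/11/COLLADASchema}"

def collada_meters_per_unit(path):
    """Return how many meters one file unit represents (1.0 if unspecified)."""
    root = ET.parse(path).getroot()
    unit = root.find(f"./{NS}asset/{NS}unit")
    if unit is None:
        return 1.0  # spec default: 1 unit = 1 meter
    return float(unit.get("meter", "1.0"))

# A file exported in centimeters declares meter="0.01", so an uploader
# honoring it would multiply every coordinate by 0.01 on import.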


1 hour ago, animats said:

Actually, you get to specify what units you want to use in Blender. It's in a menu item. Then the conversion factor, if present, goes into the COLLADA file. The uploader should use the conversion factor in the COLLADA file if it is present.

Sometimes you need to work smaller in Blender, because it has zoom limits. You might need to do jewelry or clocks in centimeters.

Switch the Blender units to meters and the grid doesn't change size, unlike with centimeters or any other unit available there. For small detailing and close-ups, the camera clipping distance is the way to go, not the scene scale.

Edited by OptimoMaximo

1 hour ago, animats said:

Everybody uses real-world units now. "Softimage units" and "Blender units" are from the era before computers had fast 64-bit FPUs and internal coordinates in 3D modeling programs were 32-bit integers.

That statement is, technically, "a load of tech-illiterate, fact-free twaddle"...

The use of generic, unidentified "units" in 3D file formats wasn't because we lived in a 32-bit CPU dark age, now magically improved by 64 bits...

It was because EVERY 3D modeling app maker thought you should only use THEIR app, and because every app had a feature to select the units you used in the app.

C4D and Amapi, European, defaulted to metric units; 3D Studio and Max, American, originally defaulted to imperial units; Poser went with 1 unit = 8 feet (96 inches), then switched at a later date to a "metric 8 feet" (100 inches); and so on.

The convention was 1 unit = 1 unit = 1 unit...

Conversion rates were always problematic, depending on whether the makers had decided to use 25 mm to the inch, or 25.4 mm to the inch, or just 24 mm to the inch.

That Poser changeover from a 96-inch unit to a 100-inch unit, for example, caused Poser content makers real problems: when they did it, they didn't bother making a lot of noise about it, so items based on the 96-inch unit ended up the wrong size...

Certain file formats now include what the unit is named in the spec, for ease of exporting/importing, but many still do not.

It had NOTHING to do with 32-bit CPUs vs 64-bit CPUs, and everything to do with competing products trying to protect their market share and making assumptions based on the nationality of their dev teams' coders.

Just as one app's default skin shader looked like crap because the non-3D-artist coder who wrote it thought zero-percent-body-fat bodybuilders in Utah were "how normal people's skin normally looks".

 

Edited by Klytyna

I do more CAD/CAM than I do animation, so I'm more familiar with how CAD/CAM does units. It's taken a while, but now all the programs take units seriously. You can go from SolidWorks to Inventor to Fusion with the units intact. Metal-cutting people get very annoyed if the part gets made the wrong size.

COLLADA import should obey the <unit> element if present.


On 5/26/2018 at 7:11 AM, Klytyna said:

In addition, why the obsession with "constant FPS", hmmm let me guess, still thinking SL is a "game" for shooting and racing and all that console gamer 60 fps based action?

You want a constant FPS because if your FPS is constantly jumping up and down it looks noticeably janky.

It's also related to how monitors work. A typical monitor refreshes 60 times a second. If you're rendering 60 FPS in SL (or a videogame), that's 1 frame per refresh. 30 FPS means 1 frame every 2 refreshes. If your FPS is bouncing all around, frames remain on screen for inconsistent lengths of time, which looks choppy even if it's technically a higher FPS, and it also manifests as screen tearing.

This is why even though my FPS can get as high as 50-60fps (in sims I've optimized myself) I tend to lock my FPS at 30 when I'm not building. It ends up looking smoother than when my FPS is constantly jumping from 30-50 and back.

I've also noticed that when I leave my fps unlocked my fps will sometimes dip down to as low as 25, but if I lock my FPS at 30 in the very same sim that stops. I might not get up as high as the 50's and 60's anymore, but my FPS won't fall down into the 20's anymore either. I'm guessing, and anyone with more technical knowledge can feel free to correct me, that this is because of the reduced number of frames in the frame buffer using less memory.
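
For anyone curious, this is all a frame limiter fundamentally has to do; a minimal sketch of the general technique (not any viewer's actual code): render, then sleep off the rest of a fixed time budget so every frame occupies the same slice of wall-clock time.

# Minimal sketch of a frame limiter (the general technique, not viewer code).
import time

TARGET_FPS = 30
FRAME_BUDGET = 1.0 / TARGET_FPS  # ~33.3 ms per frame

def run_capped(render_one_frame):
    while True:
        start = time.perf_counter()
        render_one_frame()
        elapsed = time.perf_counter() - start
        # Sleep away what's left of the budget so each frame occupies the
        # same wall-clock slice and frame pacing stays even on a 60 Hz display.
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)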

 

And speaking of memory, I wanted to comment on CoffeeDujour's post earlier where she was pointing at a screenshot showing how much memory SL was using: SL will only ever use so much memory. That's intentional, because SL's texture renderer is buggy, a problem I've heard LL is trying to fix so they can increase the amount of memory SL will use. But because of this, it's easy to look at the amount of memory SL is using and conclude that memory isn't a bottleneck. The numbers are misleading: you can still have plenty of VRAM available while SL has maxed out the memory it will use and is taking a performance hit, like a man dying of dehydration right next to a swimming pool full of purified drinking water.

 At the same time I totally agree with the point she was making that there is no one single silver bullet to getting better performance out of SL. Model complexity, textures, and how SL uses GPU/CPU and memory all factor in. I'm not disagreeing with that at all, it's completely correct.

Edited by Penny Patton

8 hours ago, animats said:

COLLADA import should obey the <unit> element if present.

To some extent, it does. It uses it to calculate the initial size (for display only) and the subsequent rescale factor in the uploader's last tab. The problem is what Klytyna points out about arbitrary software units, here applied in reverse: the conversion into the binary format doesn't account for any linear unit, with all the multiples and submultiples that implies. The 16-bit integer conversion of vertex locations implies an arbitrary unit derived, as I was saying earlier, from the need to make the model fit into a box: you get 65536 (integer) subdivisions per axis and that's it. The actual metric size isn't accounted for there, because that's the job of the transform node that contains the geometry node, similar to how in Blender, when you scale the geometry up/down in edit mode only, no scaling shows up in object mode.
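
A sketch of the quantization I'm describing, with illustrative function names (the actual asset format has more to it, but this is the idea):

# Vertex positions stored as 16-bit integers inside the mesh's bounding
# box: 65536 steps per axis, no linear unit anywhere in the binary data.
def quantize(x, lo, hi):
    """Map a coordinate to one of 65536 integer steps along its axis."""
    return round((x - lo) / (hi - lo) * 65535)

def dequantize(q, lo, hi):
    """Recover an approximate coordinate; the metric size comes from the
    transform node, not from the stored integers."""
    return lo + (q / 65535.0) * (hi - lo)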

Edited by OptimoMaximo

56 minutes ago, Penny Patton said:

I've also noticed that when I leave my fps unlocked my fps will sometimes dip down to as low as 25, but if I lock my FPS at 30 in the very same sim that stops. I might not get up as high as the 50's and 60's anymore, but my FPS won't fall down into the 20's anymore either. I'm guessing, and anyone with more technical knowledge can feel free to correct me, that this is because of the reduced number of frames in the frame buffer using less memory.

Penny is absolutely right about maintaining an average FPS to save on resources during realtime rendering. I do the same myself; I capped my Firestorm at 30 FPS, which is more than enough to run animations and simulations.

A lower, stable FPS gives an overall smoother experience than chasing a higher FPS, and for what? Physics is capped at 45 anyway, which applies to raycast calculations too; hence, for a gaming experience with shooting and whatnot, a higher FPS is relatively useless unless you care about your camera movement (machinima may benefit from a higher FPS, but is the whole SL userbase machinima makers?).

Avatars are moved around using physics, and animations are encoded in such a way that the original FPS doesn't count when they're decoded and played back: the animation curves are rebuilt from time steps relative to the start and end of the animation file, not from the viewer's FPS. Animation stuttering is either due to the animation's design itself, or to the viewer not quite keeping up with the whole rendering process, which treats animation playback as one of the last priorities, so it's likely to fall behind.
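
To illustrate, here's a minimal sketch of time-based sampling, the general technique rather than SL's actual decoder: keys carry times relative to the clip, and playback interpolates by elapsed time, so the authoring frame rate drops out entirely.

# Sketch of time-based playback: only key times matter, not frame counts.
def sample(keys, t):
    """keys: list of (time_seconds, value) sorted by time; t: playback time."""
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return v0 + f * (v1 - v0)  # linear interpolation between keys

# A clip authored at 5 fps just has sparse keys; sampled at 30 or 60 fps it
# still plays smoothly, because the curve is rebuilt from times, not frames.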

As the scene to be rendered increases in complexity, chances are that the lower the FPS you set on your viewer, the better it performs. There is a hard limit to this, of course: under 15 FPS the human eye can catch the frame changes. That's a rendering issue, though: animations smoothly made at 5 FPS look smooth in-world at 30 or 60 FPS too, because the encoding/decoding/curve-rebuilding system accounts for the animation's TIME, not its frames. So the problem sits in the TIME it takes to render a scene relative to the scene's complexity, which includes scenery and avatars.

It doesn't matter what engine or simulator runs the game or what data streaming is required: if content is unoptimized (unnecessarily high-poly) and a scene is loaded with such content, there's no way to get a stable, very high FPS with draw distance (AKA culling distance), antialiasing, anisotropic filtering, shadow quality, and object, water and terrain detail all set to the max. SL doesn't have the culling tools other engines use to help with this problem; those were introduced to work around the limits of a simple culling distance like SL's. It's like remaking a triple-A game without all of those and then wondering why it performs badly. Those games can afford "modern standards" because of these "tricks", while in SL the "trick" is to optimize content as much as possible.

 


5 hours ago, Penny Patton said:

I've also noticed that when I leave my fps unlocked my fps will sometimes dip down to as low as 25, but if I lock my FPS at 30 in the very same sim that stops. I might not get up as high as the 50's and 60's anymore, but my FPS won't fall down into the 20's anymore either.

That's setting "Max FPS?" Or are there other parameters? Can that actually increase the frame rate? SL viewers should try harder to keep the frame rate up, reducing LOD thresholds or texture sizes or draw distance if necessary, but I didn't think that was implemented.

Link to comment
Share on other sites

 

34 minutes ago, animats said:

That's setting "Max FPS?" Or are there other parameters? Can that actually increase the frame rate? SL viewers should try harder to keep the frame rate up, reducing LOD thresholds or texture sizes or draw distance if necessary, but I didn't think that was implemented.

The feature limits the FPS the viewer tries to achieve, because it already tries really hard to push framerate to the maximum possible. The problem with that approach is fluctuation: the rendering process "falls behind" on some aspects, and framerates drop while it tries to catch up.

[Screenshot: the viewer's "Limit Framerate" setting in the graphics preferences]

You can see that this feature is correctly called "Limit Framerate": it works as a cap that tells the viewer not to even try to go above this number. This way, the viewer won't attempt to run things too fast only to find itself overloaded to the point of being forced to lower the framerate to recover the processes that fell behind in the meantime. My machine is capable of staying quite stable between 50 and 60 FPS, but why bother? What benefit would that give me? It's just more taxing on my GPU, while the physics simulation and everything related to it won't benefit at all. I prefer better responsiveness in the overall viewer to a fluctuating performance that may well drop below acceptable standards trying to keep up the framerate. Hopefully that makes sense.

EDIT: I guess I'd explain this in other words for simplicity. Say you're a great typist and can type 600 letters per minute at your top performance. If you go ahead and try to do that constantly for any length of time, you will eventually have to come back and correct punctuation and mistypes here and there, which means losing time fixing the mistakes. However, since you're capable of that feat, if you hold yourself to 300 letters per minute you will still be quite fast at typing, but chances are you'll be in less of a stressful hurry and, given your full ability, that rate will leave you more relaxed and less inclined to mistypes and punctuation mistakes because, well, you're typing "slow". Something similar happens to the viewer.

Edited by OptimoMaximo

19 hours ago, animats said:

SL viewers should try harder to keep the frame rate up, reducing LOD thresholds or texture sizes or draw distance if necessary, but I didn't think that was implemented.

Like Klytyna pointed out, this simply is not realistic.

The problem is, if you're playing a videogame then everything you see on screen was created by professionals working very hard to create an experience with consistent FPS. They have resource budgets for various aspects of the game (the game should never use more than X amount of texture memory, or exceed an amount of Y polygons), and the game engines come with various tools to help make that happen (such as culling elements you don't see, pre-baked lighting, fixed camera positions or otherwise restricting the camera to where the developers want you to be able to see). On top of that, game devs try to be smart about how areas of a game are put together. Level maps will be broken into segments with hidden loading areas where the area you left is removed from memory and the area you're approaching takes its place. In large, open world games you still have multiple separate maps. The outdoor area will be a series of maps with hidden loading areas. Interior environments will be their own separate maps separated completely from the outdoor environment.

Contrast this with Second Life, where most of the content is made by people entirely unaware of resource management, or even people who believe crazy things like "textures don't use memory" or "high poly counts aren't really a factor in rendering anymore" (these are actual things actual SL users are prone to saying). We don't have the option of hidden loading areas, we can't directly control what SL holds in memory, and we can't set up a sim so assets you no longer see are removed from memory. Those tools are unavailable to us, as is pre-baked lighting. While we could totally do like videogames and separate interior/exterior environments or break environments into multiple pieces, many content creators and SL users in general balk at the thought, clinging to a weird idea that building interiors MUST be inside those buildings or their immersion will be shattered. (I've seen people get angry, yes angry, at the very thought of using skyboxes for building interiors.) In SL there are no real limits on the amount of resources a single avatar can use (and SL residents go into incoherent fits at the mere suggestion that SL avatars should have such limits), so a single avatar could wander into view and send your FPS into a sudden nosedive. Jelly dolls (another feature SL residents complain about) help with this a lot (and yet still ignore textures, so some of those low-LI avatars that don't get turned into brightly coloured pixels are actually WORSE for your FPS than those that were culled), but at the end of the day you're trading poor performance for ugly pixel blotches on your screen.

Since there is no "resource budget" for a given scene in SL, all of this fluctuates wildly and at random. There is no way for the SL viewer to predict what settings will need to be changed at any given moment. And even if you had a prescient AI capable of changing your settings on the fly to compensate for what will appear on your screen before it affects your FPS, you'd still end up with your draw distance jumping from 256m down to 0, your avatar rendering cap jumping all over the place, and all of your graphics settings changing so frequently and so drastically that the experience would probably set off epileptic fits.

 

If you want consistent FPS in SL you need to force content creators and casual users alike to be more conscious of the resources they use, encourage good design habits while discouraging the worst trends, and after all that limit your FPS to whatever your most consistent low end FPS is.

And we might see these changes. LL's plan to change how Land Impact costs are calculated now has a name (ARCTan) and is something they are actively working on. As far as I know LL is still planning to add in tools to jelly doll avatars that use excessive amounts of textures. And just on the whole the current development team at LL seems more acutely aware of the issue of resource use in SL content than LL has been in the past.


44 minutes ago, Penny Patton said:
20 hours ago, animats said:

SL viewers should try harder to keep the frame rate up, reducing LOD thresholds or texture sizes or draw distance if necessary, but I didn't think that was implemented.

Like Klytyna pointed out, this simply is not realistic.

The problem is, if you're playing a videogame then everything you see on screen was created by professionals working very hard to create an experience with consistent FPS. They have resource budgets for various aspects of the game (the game should never use more than X amount of texture memory, or exceed an amount of Y polygons), and the game engines come with various tools to help make that happen (such as culling elements you don't see, pre-baked lighting, fixed camera positions or otherwise restricting the camera to where the developers want you to be able to see). On top of that, game devs try to be smart about how areas of a game are put together. Level maps will be broken into segments with hidden loading areas where the area you left is removed from memory and the area you're approaching takes its place. In large, open world games you still have multiple separate maps. The outdoor area will be a series of maps with hidden loading areas. Interior environments will be their own separate maps separated completely from the outdoor environment.

Once again, I'll point to games' modding resources, since everyone can access that material and see what's going on in there to compare with SL. In the following, I'll explain how Skyrim and Unreal Engine 4 manage a few of these things, the former chosen solely because it's easily and readily available to examine in its working, finished state.

Aside from systems like mip-maps (the texture's own LoD equivalent for texture memory saving), there are quite a few different assumptions to begin with.

Huge terrains obey LoDs too, first of all, and only then are they divided into "cells" of roughly 64-96 meter squares (depending on the area; devs have control over this cell sizing). Your graphics card renders only the cell where you stand and, once you reach a certain threshold distance from the cell's border, the adjacent one gets rendered. Quite easy to accomplish in an open world where rocks and trees (with PROPER LODs) do a lot of the hiding job. All other cells keep simulating mathematically, storing changes as data only. Graphics assets belonging to an area aren't put in the scene at all unless the player enters the cell.
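
A minimal sketch of that cell logic, with illustrative cell size and threshold values (real engines are far more elaborate, but the idea is this):

# Only the player's cell is rendered; a neighbor gets loaded when the
# player comes within a threshold of the shared border.
CELL = 64.0       # meters per cell side (illustrative)
THRESHOLD = 16.0  # start loading a neighbor this close to its border

def cells_to_render(px, py):
    cx, cy = int(px // CELL), int(py // CELL)
    active = {(cx, cy)}
    # Check the player's distance to each edge of the current cell.
    if px - cx * CELL < THRESHOLD:
        active.add((cx - 1, cy))
    if (cx + 1) * CELL - px < THRESHOLD:
        active.add((cx + 1, cy))
    if py - cy * CELL < THRESHOLD:
        active.add((cx, cy - 1))
    if (cy + 1) * CELL - py < THRESHOLD:
        active.add((cx, cy + 1))
    return active  # every other cell keeps simulating as data only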

Cameras aren't free like in SL, so the sick idea that everything should be smooth and at max texture/polygon resolution on every surface everywhere is removed altogether.

All object types get 2 different UV channels: one for the texturing work, the second solely for baking lighting, most often (if not always) grouping tens of objects into one single texture via a shared LightMap UV set multiplied over the main texture UV set. As an example, in Skyrim all house types are made of separate meshes; each one has its own set of textures on the main UV set (diffuse, normal and specular), but ALL of them in a given building style share the same LightMap UV set. This makes it easier to use tileable textures while keeping the advantage of pre-rendered lighting on top, regardless of the tiling values set on the main textures.

Most interiors (as Penny already pointed out above) require a loading screen, as they're not part of the open world itself. In Skyrim's modding tools you can see that when you load a dungeon, it's just a floating cave in the void. When you play the game, you may notice that distant rooms are NEVER in a straight line from the entrance, and where there is a long straight-line distance, chances are you'll see particle fog or mist to confuse the view and make LoD degradation far less noticeable.

Particle systems are volume based: they're boxes you can stretch to your liking, but the "fluid simulation" is contained within them only; it's precalculated and loops continuously, with the container box keyframe-animated to hide or confuse the obvious looping.

Culling tools like AreaCulling (a "wall" that culls whatever is beyond it until the player gets there, usually put around corners) and VolumeCulling (a "cube" that culls everything within it until the player walks into the cube, then culls whatever is outside the cube's volume) are essential for heavily loaded, almost-open areas.

Dynamic lighting (casting shadows) is limited to an established number on a per-scene or per-culling-tool basis; the rest of the lights are point lights until the player enters/leaves a culling area or volume.

And the most important thing of all, because it applies to all graphics content: LoDs! Everything has its LoDs! Textures have their LoDs too, called mip maps!

So I ask: how can one claim "modern standards" when none of these features are available in SL, except for LoDs? There is absolutely NO ground for comparison. And changing settings viewer-side on an automated basis is most likely to consume more resources than the scene it's trying to save resources on.


1 hour ago, Penny Patton said:

And we might see these changes. LL's plan to change how Land Impact costs are calculated now has a name (ARCTan) and is something they are actively working on. As far as I know LL is still planning to add in tools to jelly doll avatars that use excessive amounts of textures. And just on the whole the current development team at LL seems more acutely aware of the issue of resource use in SL content than LL has been in the past.

As I see the issue now, I would prefer the LoD system to work consistently with what it is supposed to do, at the cost of "breaking" existing content (a greatly deserved punishment for those whining c***s who bypass and/or circumvent rules and limitations with their "design choice" of having no LoDs), with the addition of mip mapping for the excess texture load BEFORE jellydolls come into play, the latter showing up as a signal to the user saying "this thing shouldn't have been uploaded at all in the first place".


Focusing on "bad content" is a bit of a witch hunt that is only half the picture.

Yes, we could all make better content with better LOD and fewer textures. But this is not the end of the story.

Due to SL's dynamic nature, a lot of the math that games get to do in advance of the level ever being rendered can not be done. This includes lighting and occlusion.

The biggest rendering deficit in SL locations is lighting. A few real-time point lights cannot compete with pre-calculated global illumination. So we overcompensate a little, baking a little lighting into textures for slightly higher VRAM usage. Ever notice how almost all SL furniture now has the same neutrally shaded, off-white style? This is why. The viewer attempts to do a little shading of faces too, but it's limited. Disable advanced lighting and all this extra workload goes away; your frame rate skyrockets .. for the exact same content with the exact same model and texture detail.

Next up, occlusion (don't render things behind things) .. Games get to calculate occlusion in advance by building a map of the level, calculating what can be seen from every point the player camera is likely to be. SL can't do this as the scene isn't static and we can stuff the cam everywhere. So again, we're back to real-time calculations, which aren't as thorough or aggressive as what games get to do when execution time doesn't matter. As such, object occlusion in SL is limited and tends to only really help by not rendering avatars you can't actually see.

Yes, we could all make better content with better LOD and fewer textures, or better single-object atlasing (omg this .. one 1024 for your object is WAY better than four 512s; heck, I'd advocate 2048 maps for this purpose).
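
The back-of-envelope arithmetic behind that, for the curious: the texel counts come out identical, so the win isn't memory, it's one texture bind and one batch instead of four (assuming uncompressed RGBA with a full mip chain):

# One 1024 atlas holds exactly the texels of four 512 maps.
one_atlas = 1024 * 1024        # 1,048,576 texels, 1 texture bind
four_maps = 4 * (512 * 512)    # 1,048,576 texels, 4 texture binds
mb = one_atlas * 4 * (4 / 3) / 2**20   # RGBA bytes, ~1.33x for mips
print(one_atlas == four_maps, round(mb, 1))  # True 5.3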

Better building and scene layout is way more important than the never-ending subjective assault on content creators, who are entirely beholden to market forces. We all buy mesh content and onion-skinned mesh bodies (etc., etc.); we are in no position to grumble. However, better scene construction is something we can ALL do. VRAM is only an issue when you run out .. so put less stuff in one scene and many of the problems caused by object density go away.


3 hours ago, CoffeeDujour said:

We all buy mesh content and onion skinned mesh bodies (etc..etc), we are in no position to grumble.

YOU buy that stuff; I make my own avatars. And the onion-skinned bodies could easily go away by allowing a second UV set instead of that Bakes on Mesh necro-feature crap, as I asked billions of times back when I used to waste my time at the content creator meetings.

 

3 hours ago, CoffeeDujour said:

The biggest rendering deficit in SL locations is lighting. A few real-time point lights cannot compete with pre-calculated global illumination.

Actually, game engines do a LightMap for AO, then they have a light volume within which illumination is calculated in real time. Unity perhaps still doesn't, being sub-industry-standard like Sansar, but Unreal Engine 4 showcases that really well.

3 hours ago, CoffeeDujour said:

Next up, occlusion (don't render things behind things) .. Games get to calculate occlusion in advance by building a map of the level, calculating what can be seen from every point the player camera is likely to be.

Nope, that's culling, and it's calculated in real time; there's no precalculation of ALL possible camera and/or player positions whatsoever, or the loading screen would take forever. Read my post above about culling objects. Object-to-object occlusion (the "don't render this item if it's behind another") is done on camera raycast, again at runtime. Not to mention that you can have multiple camera layers, but that's another story (post-processing).

3 hours ago, CoffeeDujour said:

so put less stuff in one scene and many of the problems caused by object density go away.

This is truly correct. Add to that: less, yet better-optimized, stuff.


5 hours ago, OptimoMaximo said:

addition of mip mapping

SL does have mip-mapping. That's why textures first appear blurry and later become clear. Assets (at least some of them) are stored in JPEG 2000, which allows the reader to get reduced-resolution versions before reading the full thing. Textures, once decompressed, are stored in the viewer's "fast texture cache" file at some resolution, not necessarily full. It's even possible for a texture to be pulled from the graphics card, reduced in resolution, and put back. That's done when texture memory is very scarce.

Look at class "LLTextureCache" in the viewer to see the machinery behind this. The code intermixes policy with machinery, which makes changing the policy algorithm difficult. The policy on what textures to load at what resolution could probably be improved, but the policy is not all in one place in the code.


11 minutes ago, animats said:

SL does have mip-mapping. That's why textures first appear blurry and later become clear. Assets (at least some of them) are stored in JPEG 2000, which allows the reader to get reduced-resolution versions before reading the full thing. Textures, once decompressed, are stored in the viewer's "fast texture cache" file at some resolution, not necessarily full. It's even possible for a texture to be pulled from the graphics card, reduced in resolution, and put back. That's done when texture memory is very scarce.

Look at class "LLTextureCache" in the viewer to see the machinery behind this. The code intermixes policy with machinery, which makes changing the policy algorithm difficult. The policy on what textures to load at what resolution could probably be improved, but the policy is not all in one place in the code.

Alright, then I correct myself on that: the addition of working mip mapping, as I can't see any resolution degradation happening as distance increases, except for Linden Water. Which reminds me that refraction is available for that, but no refraction is possible on any other content. That's a matter of shaders, though, and I don't believe LL is going to tackle anything in the materials code in the foreseeable future.

EDIT: I missed a part: "That's done when texture memory is very scarce." This is not the typical behavior of mip mapping as intended in general applications: mip levels are meant to switch exactly like mesh LoDs. What you're describing is only a faint resemblance of true mip mapping.

Edited by OptimoMaximo

Here's a useful comment in the code:

// Introduction
//
// This is an attempt to document what's going on in here after-the-fact.
// It's a sincere attempt to be accurate but there will be mistakes.
//
//
// Purpose
//
// What is this module trying to do?  It accepts requests to load textures
// at a given priority and discard level and notifies the caller when done
// (successfully or not).  Additional constraints are:
//
// * Support a local texture cache.  Don't hit network when possible
//   to avoid it.
// * Use UDP or HTTP as directed or as fallback.  HTTP is tried when
//   not disabled and a URL is available.  UDP when a URL isn't
//   available or HTTP attempts fail.
// * Asynchronous (using threads).  Main thread is not to be blocked or
//   burdened.
// * High concurrency.  Many requests need to be in-flight and at various
//   stages of completion.
// * Tolerate frequent re-prioritizations of requests.  Priority is
//   a reflection of a camera's viewpoint and as that viewpoint changes,
//   objects and textures become more and less relevant and that is
//   expressed at this level by priority changes and request cancelations.
//
// The caller interfaces that fall out of the above and shape the
// implementation are:
// * createRequest - Load j2c image via UDP or HTTP at given discard level and priority
// * deleteRequest - Request removal of prior request
// * getRequestFinished - Test if request is finished returning data to caller
// * updateRequestPriority - Change priority of existing request
// * getFetchState - Retrieve progress on existing request
//
// Everything else in here is mostly plumbing, metrics and debug.

This is from lltexturefetch.cpp (viewer source, Firestorm version). Note that someone added this comment long after the original code was written. Code comments within the viewer are rather weak.

This is the texture-fetching side of MIP-mapping. The caller sets the request priority and "discard level". Each discard level halves the resolution, as in "take this 1024x1024 texture down to 128x128" (discard level 3). That's the part of the viewer's mip-mapping system that actually pushes the bits around. Policy on what gets shown at what resolution is elsewhere. Not sure what the policy is; haven't had to look at that yet. The texture fetcher was causing crashes in the all-open-source version of Firestorm, so I had to look at this.
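
The discard-level math, as described (a sketch, not the viewer's code):

# Each discard level halves both texture dimensions.
def resolution_at_discard(full_size, discard_level):
    return max(1, full_size >> discard_level)

assert resolution_at_discard(1024, 0) == 1024  # full resolution
assert resolution_at_discard(1024, 3) == 128   # 1024 -> 512 -> 256 -> 128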

There's a separate system for deciding when to remove texture data out of display memory. Texture data can be shrunk - reduced in size and resolution - at that point as well. Not clear on what the policy is for when to use what resolution. The policy part may need some tuning.

So the heavy machinery for MIP-mapping is there. It's complicated by the need to fetch textures from a server, subject to bandwidth and connection-count limits. This thing is frantically fetching a huge number of small files, one for each individual texture. The server end for this is basically a web server. The textures come from a content delivery network (Akamai?), not the sim servers. To the CDN, this looks like a web page loading a huge number of images. You know how long that usually takes. The CDN has a cache in front of it (nginx, I think, from the error messages in the Firestorm log). But that doesn't help much unless many avatars are in the same area, requesting the same textures. The cache hit rate is probably low. So somewhere out there in CDN land, rotating disks are doing a lot of random-access seeks. It may not even help to request only a low-rez version of the image, because the drive has to seek to the file, which takes longer than transferring a few megabytes. I wonder if this stuff is on solid-state drives yet. Probably the caches are, but the primary storage is not. Fastly, one of Akamai's competitors, does it that way. This usage pattern is very different from video streaming, which is the main job, by volume, that CDNs handle.

Potentially, one could have a region-aware caching server where the first request to a new region results in downloading all the textures for that region into a server side cache at low resolution from one big pre-built file. That would be more like traditional mip-mapping. Then a new sim would become visible at low-rez very fast. But that's not the service you get from a standard CDN.

Not sure when textures are served over UDP, or from where. That looks like a legacy feature from back when the sim servers did everything.

All this is probably too technical for this forum.

Edited by animats
Grammar

On the topic of occlusion, most game engines allow map designers to assign "zones" that can be connected to each other and are used to speed up occlusion of large areas.

The people who built Firewatch mention that they use a sector system for the game to load/unload large sections of the world based on the area the player is currently in.

The classic case is to unload all exteriors when the player is inside a cave and, likewise, unload cave interiors when the player is outside.
On older games like Quake II you could get very aggressive with this, down to the point of having doors opening/closing trigger visibility changes between zones.
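
A minimal sketch of that zone idea, a deliberately tiny model rather than any engine's actual implementation: zones are graph nodes, doors are edges, and the render set is whatever is reachable from the player's zone through open portals.

# Zone/portal visibility: closing a door prunes everything beyond it.
from collections import deque

def visible_zones(start, portals):
    """portals: dict mapping zone -> list of (neighbor, is_open)."""
    seen, queue = {start}, deque([start])
    while queue:
        zone = queue.popleft()
        for neighbor, is_open in portals.get(zone, []):
            if is_open and neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

world = {"cave": [("tunnel", True)], "tunnel": [("outside", False)]}
print(visible_zones("cave", world))  # {'cave', 'tunnel'} - outside is culled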

 

It's something I submitted to the JIRA a long time ago; it might be worth re-submitting given that LL has changed a lot in the past few years. It's not really difficult for a builder to grasp, and it could even be used for privacy.

Edited by Kyrah Abattoir

1 hour ago, Kyrah Abattoir said:

On the topic of occlusion, most game engines allow map designers to assign "zones" that can be connected to each other and are used to speed up occlusion of large areas.

The people who built Firewatch mention that they use a sector system for the game to load/unload large sections of the world based on the area the player is currently in.

Kyrah, please read my post above, the section about terrains and their splitting into cells; it's exactly what you're describing for Firewatch. The fact is that anyone who works or has worked with a game engine using a huge terrain has had to do this procedure.

1 hour ago, Kyrah Abattoir said:

On older games like Quake II you could get very aggressive with this, down to the point of having doors opening/closing trigger visibility changes between zones.

AreaCulling objects, as in my post above about culling tools.

(I myself have worked on commercial titles, as well as doing freelance and indie game jobs, extensively using game engines since the time of CryEngine 2, which I believe was called that because it makes anyone cry when it comes to setting up terrains.)

Link to comment
Share on other sites

1 hour ago, OptimoMaximo said:

Kyrah, please read my post above, the section about terrains and their splitting into cells; it's exactly what you're describing for Firewatch. The fact is that anyone who works or has worked with a game engine using a huge terrain has had to do this procedure.

AreaCulling objects, as in my post above about culling tools.

(I myself have worked on commercial titles, as well as doing freelance and indie game jobs, extensively using game engines since the time of CryEngine 2, which I believe was called that because it makes anyone cry when it comes to setting up terrains.)

I can't find the sources, but Firewatch doesn't use the patch-grid approach of typical open-world games; they designed the world in such a way that there are natural points where the player's view is constrained enough that they can load/unload specific chunks of the game world into the main scene unnoticed.

Here is a link to the GDC talk with the timecode: YouTube -> hTqmk1Zs_1I?t=823

