
How can Linden Lab encourage better content?


Kyrah Abattoir


11 hours ago, animats said:

SL does have mip-mapping. That's why textures first appear blurry and later become clear. Assets (at least some of them) are stored in JPEG 2000, which allows the reader to get reduced-resolution versions before reading the full thing.

That isn't mip-mapping; that's a case of "here's a fuzzy preview while the rest of the file loads", which is not the same thing. Even semi-on-the-fly resizing isn't traditional mip-mapping.

If you want to see mip-mapping, look at "game format" image file formats...

JPEG 2000: "Progressive transmission by pixel and resolution accuracy

These features are more commonly known as progressive decoding and signal-to-noise ratio (SNR) scalability. JPEG 2000 provides efficient code-stream organizations which are progressive by pixel accuracy and by image resolution (or by image size). This way, after a smaller part of the whole file has been received, the viewer can see a lower quality version of the final picture. The quality then improves progressively through downloading more data bits from the source."

Compared to mip-map formats...

They may well use lossy compression, because "hell with it, we're gonna resize it down anyway and then run anisotropic filtering", and the mips, each image half the dimensions of the previous, are precalculated when the image is converted and saved. So all the software has to do is read the mip level out of the file and slap it on something. Tends to be fast and smooth to render at high fps, while counting on you being too busy dodging bullets, lasers and monsters to notice that the textures are in fact a bit on the crap side.

File sizes for mip-mapped formats may well be LARGER than the original texture, even with more aggressive lossy compression, because the file stores not only the 1024x1024 but also the 512x512, the 256, the 128, 64, 32, 16, 8, 4, 2 and 1x1 pixel versions. So you'd suck down a bigger file and then only use part of it: less texture thrashing, but a lot more bandwidth usage and "rez, damn it" world-of-grey on bad network days.

That works OK for games preloaded to your HD during install, and bloody awful for randomly streamed content from 1000 TB of assorted stuff in setups like... SL.

That's why it's JPEG 2000 and not, say, DDS with DXT5. JPEG 2000 can basically interlace an image on the fly without storing an additional 11 separate reduced-resolution versions of the image, with the consequent hit to network bandwidth and download times.
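To put a rough number on that storage overhead, here's a quick back-of-the-envelope in Python (illustrative only, nothing SL-specific):

```python
# Total texels for a full mip chain (1024x1024 down to 1x1) vs. the base image.
def mip_chain_texels(size: int) -> int:
    total = 0
    while size >= 1:
        total += size * size  # each mip level is half the dimensions of the last
        size //= 2
    return total

base = 1024 * 1024
full = mip_chain_texels(1024)
print(f"full chain is {full / base:.3f}x the base image")  # ~1.333x, i.e. ~33% extra
```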
 


1 hour ago, Kyrah Abattoir said:

I can't find the sources, but Firewatch doesn't use the patch-grid approach of typical open-world games. They designed the world in such a way that there are natural points where the player's view is constrained enough that they can load/unload specific chunks of the game world into the main scene unnoticed.

I've just watched the video you pointed out, and I can find just a difference in terminology from what I listed above: it was developed in Unity while I used a more UnrealEngine4 type of nomenclature for the exact same features. I'll attach relevant screenshots from the video and relate them to the nomenclature I used. I made sure to capture the time in the video so that you can listen to her explanations:

They call working on patches of the world "sectors", to avoid bad merging; this is a pipeline method to create assets in parallel without overwriting content.

[Screenshot 1]

VolumeCulling, which they call "nearby loading". Essentially they make the engine render only a specific part of a volume:

[Screenshot 2]

This is AreaCulling, which they call "door and portal streaming": essentially, objects that trigger the next area to load once the player passes through them. I made an example earlier about placing them around a corner, but of course, as you pointed out for Quake 2, doorways are the classic case:

[Screenshot 3]

I called the following "splitting the world into cells"; they call it sectors again, to facilitate world streaming. She admits there are no doors or volumes, so they had to be "extra smart"...

[Screenshot 5]

...so they added these toggle-switch "walls" to load and unload the areas the player enters/leaves. Note that this is not what I called AreaCulling; this is the "distance threshold from adjacent world patches" that starts loading a specific cell/sector.

[Screenshot 6]

But what they did is really something that Unity, being sub-industrial-standard crap, can't quite manage the way UnrealEngine4 does (do I show my bias towards UE4? xD), and indeed they had to rely on a 3rd-party plug-in for this system to work, which made a sort of hybrid with an AreaCulling (in my UE4 terminology) object to serve as a distance threshold detector for each sector:

 

 


Too many images, new post:

[Screenshot 7]

And finally, the distance culling that bypasses the other triggers here, to let the player look into the distance. Faster loading just by using LoDs (hint hint):

[Screenshot 8]
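To make the distance-threshold idea above concrete, here's a minimal sketch in Python; the sector names, coordinates and radii are all made up, and Firewatch's real system is far more involved:

```python
import math

# Hypothetical distance-threshold sector streaming: each sector starts loading
# when the player comes within LOAD_RADIUS of it, and unloads again once the
# player moves past UNLOAD_RADIUS. The gap between the two avoids flip-flopping.
SECTORS = {"lake": (120.0, 40.0), "tower": (300.0, 220.0)}  # sector -> center (x, y)
LOAD_RADIUS = 128.0
UNLOAD_RADIUS = 160.0

loaded = set()

def update_streaming(px: float, py: float) -> None:
    for name, (sx, sy) in SECTORS.items():
        dist = math.hypot(px - sx, py - sy)
        if dist < LOAD_RADIUS and name not in loaded:
            loaded.add(name)            # kick off async asset loading here
            print(f"loading sector {name}")
        elif dist > UNLOAD_RADIUS and name in loaded:
            loaded.remove(name)         # free meshes/textures here
            print(f"unloading sector {name}")

update_streaming(100.0, 50.0)   # near "lake": it loads
update_streaming(500.0, 500.0)  # far from both: "lake" unloads
```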

If you wish I can continue like this for the whole rest of the video =) but I hope I've given you an idea that I really do also work on real game engines and I seriously know what I'm talking about. SL is my free-time social 3D personal enjoyment and a little cash help at the end of the month.


35 minutes ago, Kyrah Abattoir said:

I never implied anything about your competence, I just wanted to highlight small differences.

Kyrah, I didn't say you were implying anything about my competence. What I did was show how the small differences were only in the nomenclature of tools used in production; the main concept behind it all is the same. Analyzing features and pipelines shouldn't rely on terminology, it should rely on methodology. You can call a feature "JohnFoe" and still do the same thing another engine calls "Godzilla" or "DistanceBlindFolder", for that matter.

 

1 hour ago, OptimoMaximo said:

and I can find just a difference in terminology from what I listed above: it was developed in Unity while I used a more UnrealEngine4 type of nomenclature for the exact same features. I'll attach relevant screenshots from the video and relate them to the nomenclature I used. I made sure to capture the time in the video so that you can listen to her explanations:

See? I just went on explaining using the info source you pointed out, relating it to methodology rather than terminology. I understand that I might sound stern or disappointed while making my point, but that's the nature of text communication, which tends to be misinterpreted even when it hides the best intentions =)


23 hours ago, CoffeeDujour said:

Focusing on "bad content" is a bit of a witch hunt that is only half the picture.

Yes, we could all make better content with better LOD and fewer textures. But this is not the end of the story.

Yes, poorly optimized content is only part of the picture, I agree, but it is still a significant part of it. You can't ignore it; it's an enormous problem. Build a sim with optimized content and you will see a huge performance increase over what people typically expect in SL, consistently and across varying levels of hardware.

Content creators need to be better about optimizing their content. 

23 hours ago, CoffeeDujour said:

The biggest rendering deficit in SL locations is lighting. A few real-time point lights cannot compete with pre-calculated global illumination, so we overcompensate a little, baking some lighting into textures for slightly higher VRAM usage. Ever notice how almost all SL furniture now has the same neutrally shaded off-white style? This is why. The viewer attempts to do a little shading of faces too, but it's limited. Disable advanced lighting and all this extra workload goes away; your frame rate skyrockets .. for the exact same content with the exact same model and texture detail.

Sure, but here's the thing: we can't just wish that bottlenecks like the performance impact of enabling shadows would get some whiz-bang fix from the fine folks at Linden Lab. We have to work around the constraints that are there. And guess what, if you optimize the memory consumption of assets in a scene, you see an even more massive FPS boost with ALM and shadows on than you do by making the same optimizations with ALM and shadows off.

Now, I fully admit that the technical aspects are outside my area of expertise, so maybe Optimo or others who know a bit more about this angle can help me fill in the gaps.

Like I pointed out before, SL is designed to use only so much of your computer's memory, regardless of how much you have available. I've seen a lot of people look at how much system memory they have free and mistakenly believe that proves memory consumption in SL is not an issue. This part I do know: LL has commented on it themselves, including yesterday, when Oz Linden brought it up at the Viewer Development meeting.

OK, with this in mind: I have noticed that when you have advanced lighting and shadows enabled and you reduce the amount of texture memory in an environment, you see a much more substantial performance increase than when making the same optimizations with ALM and shadows off.

My guess, and it is a guess, is that this occurs because lighting and shadows also use VRAM; that's where shadow maps and the like are stored.

My reasoning goes that, because of unoptimized content, SL is almost always maxing out the VRAM LL allows it to use even with these features off, so when you throw in more rendering features that also need that restricted pool of VRAM, you tank FPS pretty hard. That "slightly higher VRAM usage" is SL using up the remainder of what it can use, because, again, it can only use so much of your memory regardless of how much more you have free. If you reduce the memory use of assets in a scene to something more reasonable, then these other features have memory readily available and can perform their functions without waiting in line and impacting FPS so much.
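To illustrate the budget arithmetic I'm describing, here's a hypothetical sketch; the 512 MB cap, texture count and shadow-buffer cost are invented numbers, not the viewer's actual limits:

```python
# Back-of-envelope sketch of the fixed-texture-budget reasoning above.
# Every number here is hypothetical, not the viewer's actual limit.
BUDGET_MB = 512.0  # assumed hard cap the viewer enforces on texture memory

def texture_mb(w: int, h: int, bytes_per_texel: int = 4) -> float:
    """Uncompressed RGBA size of one texture, in MB."""
    return w * h * bytes_per_texel / (1024 * 1024)

scene_mb = 120 * texture_mb(1024, 1024)  # 120 baked 1024s in view: 480 MB
shadow_mb = 64.0                         # assumed cost of shadow/light buffers
free_mb = BUDGET_MB - scene_mb

msg = "constant eviction and thrashing" if shadow_mb > free_mb else "fits fine"
print(f"scene {scene_mb:.0f} MB, free {free_mb:.0f} MB, shadows {shadow_mb:.0f} MB: {msg}")
```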

On 6/1/2018 at 12:56 PM, CoffeeDujour said:

Better building and scene layout is way more important than the never-ending subjective assault on content creators, who are entirely beholden to market forces. We all buy mesh content and onion-skinned mesh bodies (etc., etc.); we are in no position to grumble.

Like Optimo, I can grumble because I also avoid this, and I get and agree with your meaning here. However, it doesn't mean we should just accept the problem either.

I've seen content creators insist over and over that they cannot optimize their texture use because customers will see their content as inferior. That's nonsense. Straight up nonsense. Optimo is totally correct that SL content creators need to learn more from game modding resources on how to optimize work without sacrificing quality.

At the same time, I'm also going to say that the state of content creation in SL is not simply the fault of content creators. I've said over and over that it is unreasonable to expect every SL content creator to understand, on their own, all of these aspects of resource use and design. It's just not going to happen, unless LL wants to send every content creator to school for game design. Linden Lab needs to be the one discouraging the worst habits and rewarding the good ones, to push content creators naturally towards producing better content, regardless of whether those same creators understand why it's important.

Limit how much texture memory an object or attachment can use by giving it a tangible resource cost (i.e., making texture use affect the Land Impact cost of an object) and content creators will, on their own, learn how to make better use of fewer/smaller textures. Likewise, give avatars a resource pool similar to LI, so we can't each wear half a sim's worth of prims and gigabytes of textures, and content creators will learn how to make better-optimized avatar content that looks just as good as what they're producing right now.
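As a purely hypothetical illustration of the first idea, texture memory feeding into LI might look something like this (the 1 LI per 4 MB weighting is invented, not anything LL has proposed):

```python
# Purely hypothetical: fold an object's texture memory into its Land Impact.
# The weighting (1 LI per 4 MB of uncompressed RGBA) is invented for
# illustration; it is not LL's formula or anything LL has proposed.
MB_PER_LI = 4.0

def texture_land_impact(textures):
    """textures: list of (width, height) for every unique map on the object."""
    total_mb = sum(w * h * 4 / (1024 * 1024) for (w, h) in textures)
    return total_mb / MB_PER_LI

print(texture_land_impact([(1024, 1024)] * 4))          # four 1024s -> 4.0 LI
print(texture_land_impact([(1024, 1024), (512, 512)]))  # leaner set -> 1.25 LI
```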

In short, we should be talking about content creation and educating people on optimization, but ultimately the onus for the situation we find ourselves in lies at LL's feet. (To be fair, they do seem to finally recognize this and are working on improvements to address it. It will just take time.)

 And, at the same time, I do totally agree with this:

On 6/1/2018 at 12:56 PM, CoffeeDujour said:

However, better scene construction is something we can ALL do. VRAM is only an issue when you run out .. so put less stuff in one scene and many of the problems caused by object density go away.

And Optimo's comment on it:

20 hours ago, OptimoMaximo said:

This is truly correct. Also: add less, yet better optimized, stuff.

Maybe this back and forth is necessary, because at the end of the day I do think we all tend to agree that we're not looking at one single problem with one single solution. 3D rendering is complex and involves a number of variables. To make SL run better you have to take a bunch of things into account.

When I'm building a sim, I optimize the content I use as much as I can. That goes for both polygon density and texture memory use, because you can't optimize one side while ignoring the other; you need to optimize both. At the same time, I spread my content out. I put building interiors in skyboxes. I break outdoor areas up into separate maps connected via teleporters. Doing any one of these things will give people in these environments higher framerates. Doing all of them together means I can get a stable 30fps in a highly detailed RP sim despite running SL on an 8-9 year old computer.


7 hours ago, OptimoMaximo said:

Essentially they make the engine render only a specific part of a volume:
...so they added these toggle-switch "walls" to load and unload the areas the player enters/leaves. Note that this is not what I called AreaCulling; this is the "distance threshold from adjacent world patches" that starts loading a specific cell/sector.

Umbra 3D, which Sansar uses, claims to do much of that automatically. There's a lot of automatic preprocessing: polygon reduction, impostor generation, level-of-detail model generation, texture reduction, and some occlusion culling. Then there's viewer-side support to request the assets in an order that looks good. Microsoft acquired Simplygon so they could have a level-of-detail solution too; it's just preprocessing for individual meshes, not an entire system.

The trend in the game industry seems to be heavy automation of level-of-detail work. Using armies of artists and human mesh editors to do this by hand isn't cost-effective. There's been a lot of "blame the creators" in this thread, but that's not the right answer any more. That belongs to the past.

SL's version of this preprocessing step is the mesh reduction in the viewer's uploader, which just discards random triangles. That's way, way behind the state of the art. SL needs an upgrade at that point in the asset pipeline. How good is Simplygon? You can use it for free if you're willing to let Microsoft look at your models. Has anyone tried?


3 hours ago, Penny Patton said:

I've seen content creators insist over and over that they cannot optimize their texture use because customers will see their content as inferior. That's nonsense. Straight up nonsense. Optimo is totally correct that SL content creators need to learn more from game modding resources on how to optimize work without sacrificing quality.

Packing UV maps as tightly as possible, and splitting into multiple textures based on what your product offers in the way of customization, goes a long way: you not only get more bang from the textures you are already using, you also reduce wasted space.

A certain couch (that I won't name) not only uses a hefty number of 1024x1024 textures already, but if you change the texture of the accent trim, it essentially loads an entire second copy of the main textures, because base and accents were put together on the same map.

You see this on a lot of products: parts that have different states sharing a texture with the rest of the model, forcing the viewer to load twice as much texture real estate when those parts end up textured differently.
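Some rough numbers for that couch scenario, assuming uncompressed RGBA in VRAM and illustrative map sizes:

```python
# Rough numbers for the couch example above (uncompressed RGBA in VRAM).
def mb(w, h):
    return w * h * 4 / (1024 * 1024)

# Base and accent trim baked onto one shared 1024 map: retexturing the trim
# means a second full copy of everything gets loaded.
combined = mb(1024, 1024) * 2              # original + recolored copy = 8.0 MB
# Accent trim split onto its own small map: only that map gets duplicated.
split = mb(1024, 1024) + mb(256, 256) * 2  # base + two trim variants = 4.5 MB
print(combined, split)
```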

3 hours ago, animats said:

SL's version of this preprocessing step is the mesh reduction in the viewer's uploader, which just discards random triangles.

Even Blender's Decimate modifier is a significant step up, and assuming that's as far as you're willing to go LOD-creation-wise, it's going to take you 10 minutes at most to dial it in (and eventually clamp vertex weights) and export custom LODs, so why not do it?
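And if even 10 minutes is too much, the dialing can be scripted. A minimal sketch for Blender 2.8+ (the ratios are starting points to tune per model, and for rigged meshes you'd still want to check the weights afterwards):

```python
import bpy

# Quick-and-dirty LOD generation with the Decimate modifier: duplicate the
# selected mesh once per LOD, decimate each copy, and apply the modifier.
RATIOS = {"LOD2": 0.5, "LOD1": 0.25, "LOD0": 0.1}  # name -> triangle ratio

src = bpy.context.active_object  # run with your high-LOD mesh selected

for name, ratio in RATIOS.items():
    lod = src.copy()
    lod.data = src.data.copy()
    lod.name = f"{src.name}_{name}"
    bpy.context.collection.objects.link(lod)

    mod = lod.modifiers.new(name="Decimate", type='DECIMATE')
    mod.ratio = ratio
    bpy.context.view_layer.objects.active = lod
    bpy.ops.object.modifier_apply(modifier=mod.name)

# Then export each *_LOD* object as Collada for the SL uploader as usual.
```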


15 hours ago, Kyrah Abattoir said:

Even Blender's Decimate modifier is a significant step up, and assuming that's as far as you're willing to go LOD-creation-wise, it's going to take you 10 minutes at most to dial it in (and eventually clamp vertex weights) and export custom LODs, so why not do it?

There are also tools that automate X number of LoDs and copy the weights for rigged meshes automatically; the procedure really doesn't take that long.

The problem with this, as I see it as modeler/rigger for a little brand, is that if I go the LoD route with rigged meshes, it all ends up in tons of files, since when I rig those items I have to do it on 4, 5 or even 6 different bodies. It becomes a bloodbath of files that I have to send to the general manager (who now just assembles color HUDs and sets everything up for sale), who went crazy uploading that stuff last time. I kept making LoDs and started uploading them myself, until he told me not to: "who cares about the LoDs, everyone just zeroes them out, they will never show up" (a statement I don't agree with). But I think for this reason many uploading people (note that I'm not saying "designers" or "creators") won't waste that amount of HD space on it, considering that rigged mesh Collada files, especially complex ones, end up weighing quite a few megabytes.


2 minutes ago, OptimoMaximo said:

it all ends up in tons of files

Yeah, tell me about it. Each of my (individual) project folders is a mess, but it is getting better.

Supposedly, the new calculation that is going to be used (at least) on animesh is that you essentially "pay" (LI/complexity) for the highest possible LOD, and all the other LODs are "free" unless they are more than 50% of the weight of the LOD above them.

That, and a more conservative logic to decide when to switch.
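If I'm reading that rule right, the charging logic would be something like this sketch (whether the whole lower LOD or just the excess gets charged is my assumption):

```python
# Sketch of the proposed animesh-style LOD charging rule as described above:
# you pay for the highest LOD; each lower LOD is free unless it exceeds 50%
# of the weight of the LOD above it, in which case (assumption) the excess counts.
def lod_cost(weights):  # weights: [highest, ..., lowest], arbitrary units
    cost = weights[0]
    for above, below in zip(weights, weights[1:]):
        if below > 0.5 * above:
            cost += below - 0.5 * above  # charge only the excess (assumption)
    return cost

print(lod_cost([100, 45, 20, 8]))   # well-behaved chain: pays 100
print(lod_cost([100, 90, 80, 70]))  # barely-reduced LODs start costing extra
```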

 

It's funny how, by trying to encourage people to make the lower LODs light, they basically encouraged everyone not to use them at all.


18 minutes ago, Kyrah Abattoir said:

It's funny how, by trying to encourage people to make the lower LODs light, they basically encouraged everyone not to use them at all.

I think programmers lay out steps for their procedures thinking "here is a sequential procedure; logic says go through all the steps", while users see the steps as annoyances that separate them further from getting what they want. Reading instructions is overrated, apparently; then they shout "bug!" if something doesn't work as they expected. A concept that seems pretty widespread to me is the demand for a "magic button that does it all for me", regardless of the fact that each user's situation is NOT the same and proceduralizing everything may not work as expected.



I think it's worth understanding that the content creators who come to SL, especially lately, are younger artists who want to create and earn income without being tied down by difficult, tedious things that have already been solved in other engines.

(Yes, I get the argument that this isn't a 100% solution, but it's better than what SL currently provides. I'd rather have LOD generation that works than one that doesn't, or have to hand-make LODs every time.)

https://docs.unrealengine.com/en-us/Engine/Content/Types/StaticMeshes/HowTo/AutomaticLODGeneration
 


If there were a way to automatically generate good LOD meshes on the fly with a user-selectable bias, I would say dump the existing mess in a heartbeat, delete all existing auto-generated mesh LOD models, and let the client figure out the magic.

Till then, I'd like to see some hard, actual numbers from LL as to what LOD meshes should contain, and some basic tutorial content. For example: each LOD needs to have 30% of the polys of the previous level.

AND provide a substantially reduced Land Impact to people who manually create all of the LOD levels meeting those guidelines. No bonuses for using the built-in decomposition tool, and set a minimum LI of 5 for everything that isn't 100% manually created.
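A hard guideline like that would at least be trivially checkable. A sketch using the 30% figure from the post above (an example number, not an actual LL rule):

```python
# Check a LOD chain against a hard "each level has at most 30% of the
# triangles of the level before it" guideline. 30% is the example from the
# post above, not an official number.
def meets_guideline(tri_counts, max_ratio=0.30):
    """tri_counts: [high, med, low, lowest] triangle counts."""
    return all(lower <= max_ratio * higher
               for higher, lower in zip(tri_counts, tri_counts[1:]))

print(meets_guideline([20000, 6000, 1800, 540]))  # True: clean 30% steps
print(meets_guideline([20000, 19000, 18000, 1]))  # False: lazy LODs
```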


8 hours ago, CoffeeDujour said:

AND provide a substantially reduced Land Impact to people who manually create all of the LOD levels meeting those guidelines. No bonuses for using the built-in decomposition tool, and set a minimum LI of 5 for everything that isn't 100% manually created.

To me this sounds good and fair enough.

11 hours ago, MIstahMoose said:

I think it's worth understanding that the content creators who come to SL, especially lately, are younger artists who want to create and earn income without being tied down by difficult, tedious things that have already been solved in other engines.

The main problem is that people should work with what they're given and follow the provided workflow, like it or not. Don't like it? No problem, go create on those other engines then.

It doesn't matter if the younger artist wants an easier shortcut to earn some money; if you take it seriously, as any job should be taken, it all boils down to expertise = proportionate pay. A 3D generalist whose job is to model secondary assets for a game's environment will get substantially lower pay than the main characters' rigger/animator, proportionate to the level of responsibility for the main "in the user's face" content and the know-how they contribute to the production. If your manager says "do this, this way", you can't simply skip the annoying/tedious steps because all you want is your paycheck and your name in the credits.

I know that other engines have pretty much solved a lot of these "issues", but for the example you bring up, UE4, you should also consider that upgrading every time a new version comes out means taking the big risk of previous work within the project getting broken and needing rework. Many developers therefore ponder the move for a long time before blindly going to the newer, shinier version. And that's for individual games, which SL can't be compared to, for the sole fact that its servers are full of content from every stage of its life that might get broken: thousands of assets of every kind on the marketplace (and likely in someone's inventory) from a multitude of different people. That's quite different from a downloadable patch that replaces files in one game, limited in asset count and in the number of people working on its development. The latter case allows for sanity checks and testing, with the ability to rework and fix what's broken before the new version ships to users. LL doesn't have that level of control over the content, and even if they did, it's not a feasible task before each new release.


On 6/1/2018 at 6:56 PM, CoffeeDujour said:

Disable advanced lighting and all this extra workload goes away; your frame rate skyrockets .. for the exact same content with the exact same model and texture detail.

The thing is, realtime shadows, while a little more taxing on the GPU (depending on model complexity, really), are overall a more efficient way to do things than making every single object's textures unique by burning shadows into them.

Houses and skyboxes in SL use ridiculous amounts of texture memory because people bake everything onto many, many 1024x1024 textures. And yes, other engines have lightmapping, but how lightmapping is actually rendered is completely different from simply burning shadows into textures.

The problem with turning advanced lighting off is that it is like switching to a completely different rendering engine. It also encourages creators to ignore materials for their detail work and to keep using oversized textures with baked shadows and fully detailed meshes, both of which are what typically send the viewer over the edge when advanced lighting is turned on.

A self-fulfilling prophecy, basically.

18 hours ago, MIstahMoose said:

I think it's worth understanding that the content creators who come to SL, especially lately, are younger artists who want to create and earn income without being tied down by difficult, tedious things that have already been solved in other engines.

I work on a game project in Unity in my spare time. These tedious aspects are not really a Second Life exception; it's just that when making your own game you can pretend that there are no constraints.

Until they hit you in the face.

EDIT: I'd like to add that you can point to engine X and Y all you want, however they are just that: engines. What matters is what ends up happening when people have built a full-blown game on top of one, because that's what SL should be compared to, not an empty engine.


4 hours ago, Kyrah Abattoir said:

The thing is, realtime shadows, while a little more taxing on the GPU (depending on model complexity, really), are overall a more efficient way to do things than making every single object's textures unique by burning shadows into them.

In SL's case, realtime shadows double* the poly count. While this doesn't double the GPU load, it does slow things down some, assuming you have slack capacity, which with a 780 I do... If you're on a lower-end or integrated card, shadows can easily push you beyond your GPU's intended load and bring SL to its knees.

My suggestion about poly counts and the mesh uploader is mainly about favoring content that actually degrades cleanly, rather than raw poly count. Objects popping out of existence or turning into a wonky mess is the core motivation for jacking the LOD slider in the viewer to the max regardless of hardware (less than max isn't usable). Allowing the existing LOD features of SL's render engine to work as designed would go a long way to keeping SL running on the majority of lower-end hardware.

Screenshot with Ktris** with and without shadows ..

[Screenshot: Ktri counts with and without shadows]

* The doubling isn't a literal doubling of raw polys; it's technically multiple render passes, one for each shadow-casting light source, each cheaper than the primary render pass due to differences in texture handling.

** It's showing a little higher than double because this is SL and I didn't take any steps to isolate myself from others on the region, so the diff is probably @Kitty Barnett arriving between snaps at double her raw poly count :P
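A rough way to reason about footnote *, with an assumed per-pass cost factor (the 0.4 is invented for illustration, not a measured number):

```python
# Rough model of footnote *: geometry is re-submitted once per shadow-casting
# light, but each shadow pass is cheaper than the main pass.
def relative_frame_cost(shadow_lights: int, pass_factor: float = 0.4) -> float:
    return 1.0 + shadow_lights * pass_factor  # 1.0 = the primary render pass

print(relative_frame_cost(0))  # 1.0, shadows off
print(relative_frame_cost(1))  # 1.4, sun/moon shadows on
```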

This is not quite the same as baking shadows and lighting into textures. Texture re-use died with prims, as every mesh asset now has its own dedicated UV-painted map; it's pretty safe to say that the days of buying a texture and slapping it on half a dozen things are long dead. So in that sense, as everything has its own texture, there is nothing bad about baking lighting onto it, aside from the baked lighting needing to be fairly neutral.

Objects needing multiple maps in order to maintain resolution makes a solid case for 2048 or even 4096 texture maps, as every face of a single object sharing the same map can be drawn in the same batch.

SL's woes cannot be boiled down to a single overarching factor like raw VRAM usage; it's more that the engine was designed for a specific content type (prims) that has been, for all intents and purposes, superseded. There has also been precious little evolution of the render engine since the last iteration years ago, mainly because LL need to hire more wizards.


22 minutes ago, CoffeeDujour said:

Texture re-use died with prims, as every mesh asset now has its own dedicated UV-painted map; it's pretty safe to say that the days of buying a texture and slapping it on half a dozen things are long dead. So in that sense, as everything has its own texture, there is nothing bad about baking lighting onto it, aside from the baked lighting needing to be fairly neutral.

I know; that's why pretty much every residential area hits you with 200 megs of texture per house, for that same "off-white" look.

I'd love to have lightmapping in SL, but the last time I requested it, I'm not even sure the Lab looked at it before closing it as "won't do". And to be fair, it's the only proper way to do this stuff; lightmaps are way more efficient, and they're what every game designed to run on a potato uses.

I've asked several times if LL had any plans to axe the non-material renderer, just so everyone sees the world the same way, and so the entire grid has to face this problem instead of toggling it off. But the idea seems to throw the Lindens into a small panic.

I love this quote and I think it's fitting, even if it's from a movie:


Ian: Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.

To me, that's where the "quest for profit" has led us in Second Life: complete, unashamed misuse of tools and resources to one-up each other. And in a way it's just a mirror of real life, really. We are destroying the world we live in because making a buck now is the only scale of success.


3 minutes ago, Kyrah Abattoir said:

I know; that's why pretty much every residential area hits you with 200 megs of texture per house, for that same "off-white" look.

I'd really like to see per-face texture post-processing added, and set up so that it could be changed even if the object was no-mod. Being able to adjust brightness, saturation, etc. would go a long way to letting creators step out of the shabby-chic box, because items could be matched to each other and their setting.


@CoffeeDujour and @Kyrah Abattoir Your points about texture reuse being dead are one of the main factors that make me advocate for a materials layering system. I read that Kyrah works in Unity in her spare time, so I'm quite sure she can see its use; I don't know about CoffeeDujour, but I'm prone to believe she understands this too. Those 200 MB worth of textures per house most likely also come from the use of dedicated normal/specular material textures for each dedicated diffuse color texture, in my opinion.

Now, if we could lay out a whole house in one single UV map, split in pieces yet referencing the same texture coordinates, a sort of lightmapping is definitely possible, provided that a material layering system can take at least one single diffuse (no normal/specular) plugged in alone on the top layer for the baked lighting. (Ideally a specific AO channel, so that it also affects specularity as happens in PBR shaders, but that's dreaming big, so I won't claim it, considering that the top layer could "cover up" the lower layers and dim out their specularity.) This would bring back tileable, material-enabled textures, saving a lot of baked dedicated textures while still retaining the lighting.
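A toy stand-in for what such a layered shader would do per pixel: a small tileable diffuse repeated across the surface, multiplied by a baked light/AO map on its own second UV set (all names and numbers here are illustrative):

```python
# Toy illustration of the layering idea above, in plain Python: a tileable
# diffuse is repeated across the surface, then multiplied by a baked light/AO
# map that uses its own (non-overlapping) second UV set.
def shade(u, v, tile_texture, lightmap, tiling=8.0):
    tu, tv = (u * tiling) % 1.0, (v * tiling) % 1.0   # tiled UVs for detail
    r, g, b = tile_texture(tu, tv)                    # e.g. a 256x256, reused everywhere
    light = lightmap(u, v)                            # unique, low-res bake
    return (r * light, g * light, b * light)

# Hypothetical samplers: flat grey tile, light that darkens toward a corner.
tile = lambda u, v: (0.8, 0.8, 0.8)
bake = lambda u, v: 0.3 + 0.7 * min(u + v, 1.0)
print(shade(0.05, 0.05, tile, bake))  # shadowed corner, same tile texture
```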


7 hours ago, Kyrah Abattoir said:

I work on a game project in Unity in my spare time. These tedious aspects are not really a Second Life exception; it's just that when making your own game you can pretend that there are no constraints.

Until they hit you in the face.

EDIT: I'd like to add that you can point to engine X and Y all you want, however they are just that: engines. What matters is what ends up happening when people have built a full-blown game on top of one, because that's what SL should be compared to, not an empty engine.

I don't think comparing SL to a game is that easy, because in a game you have full control over what goes into it. I just advocate for as much automation as possible because there is no control over who is creating content or what quality of content goes into SL. So if you can control at least a small fraction of that, you'd hope to see a small performance boost for newer content.
I think it's fair to point to other engines, because LL has had plenty of money and time to continue developing SL; even without "game breaking" engine changes, they should be a lot further along by now. The promises they made years ago still have not become reality.

Alternatively, they could have just sunk it all into Sansar, which is fair, but Sansar is not quite ready or attractive enough to users yet. Plus, I'm expecting this whole argument to happen all over again if Sansar ever does take Second Life's place in the social game world, since Sansar has no restrictions either, it appears.

 

11 hours ago, OptimoMaximo said:

The main problem is that people should work with what they're given and follow the provided workflow, like it or not. Don't like it? No problem, go create on those other engines then.

It doesn't matter if the younger artist wants an easier shortcut to earn some money; if you take it seriously, as any job should be taken, it all boils down to expertise = proportionate pay. A 3D generalist whose job is to model secondary assets for a game's environment will get substantially lower pay than the main characters' rigger/animator, proportionate to the level of responsibility for the main "in the user's face" content and the know-how they contribute to the production. If your manager says "do this, this way", you can't simply skip the annoying/tedious steps because all you want is your paycheck and your name in the credits.

I know that other engines have pretty much solved a lot of these "issues", but for the example you bring up, UE4, you should also consider that upgrading every time a new version comes out means taking the big risk of previous work within the project getting broken and needing rework. Many developers therefore ponder the move for a long time before blindly going to the newer, shinier version. And that's for individual games, which SL can't be compared to, for the sole fact that its servers are full of content from every stage of its life that might get broken: thousands of assets of every kind on the marketplace (and likely in someone's inventory) from a multitude of different people. That's quite different from a downloadable patch that replaces files in one game, limited in asset count and in the number of people working on its development. The latter case allows for sanity checks and testing, with the ability to rework and fix what's broken before the new version ships to users. LL doesn't have that level of control over the content, and even if they did, it's not a feasible task before each new release.

You're right, it doesn't matter if they want to take a shortcut, because they can and they will. I don't think content creation can be treated the same as payroll. More content + more community = more money. More time spent doing tedious things that slow down the rate of content = less money.
I agree that people should follow a required workflow for a game engine, but when there isn't one in place... well, you can't really fuss at the artists who are making the community happy. And that community gives LL $$$.

That's why I feel these things should be automated. That, or figure out a way to create an approval system. Though even with automation, people will find ways to get around the constraints and shortcut things.

I didn't suggest this as an overhaul of every piece of content on the server, but rather as a new system to force new models to at least fit the constraints a bit, so maaaybee the community can see a small performance boost.

Also, if they could delete everything from before mesh, that would be nice too..

 


20 minutes ago, MIstahMoose said:

You're right, it doesn't matter if they want to take a shortcut, because they can and they will.

 

20 minutes ago, MIstahMoose said:

That's why I feel these things should be automated. That, or figure out a way to create an approval system. Though even with automation, people will find ways to get around the constraints and shortcut things.

And this is why I agree with CoffeeDujour about some way to penalize the shortcut-takers in one way or another, as also proposed and mentioned by Elizabeth Jarvinen at the content creator meetings. I can't see how "artists" can make the community happy with content that breaks performance while clearly caring only about the $$$ you mention. Yes, this would also slow down their production, because if fast production rate = crappy-performance content, the platform suffers the consequences.


