
Second Life declining player base



Recommended Posts

I think SL looks pretty good considering it’s 16 years old. Some people have made some builds that rival a AAA game. When viewed with a viewer like Black Dragon....wow.

The problem is, a lot of people don't use the right lighting. Lighting makes a big difference in SL, and a ridiculous number of people aren't using ALM, or just use CaWL or Nam's, or even Midday all the time... they just never change it.

I’d love to see SL with an updated engine and all that good stuff. But the way it looks now is pretty danged good depending on some variables.

  • Like 3

2 hours ago, Penny Patton said:

Content creators need to rein in texture use. When you do that you can see enormous performance improvements in SL right now.

If the Lab were to put a couple of simple texture-editing tools into the build menu, that would make it so much easier and more likely to happen: something like a basic cropping tool and one to reduce image sizes, for example. It would make it so much more convenient to experiment with smaller sizes without having to upload several different versions to do it manually. But if LL isn't going to get serious about reducing complexity by giving people the tools, why should creators and users, who are more concerned with how good their product looks?

  • Like 1

1 minute ago, Arielle Popstar said:

If the Lab were to put a couple of simple texture-editing tools into the build menu, that would make it so much easier and more likely to happen: something like a basic cropping tool and one to reduce image sizes, for example. It would make it so much more convenient to experiment with smaller sizes without having to upload several different versions to do it manually. But if LL isn't going to get serious about reducing complexity by giving people the tools, why should creators and users, who are more concerned with how good their product looks?

There is already a reduction: if you upload a 2048 image, I think it automatically reduces it to 1024. As for a texture-resize tool in SL, that won't work, because it relies on the ability to edit the textures, and with the "it's no-mod because it's my artwork and you have no right to destroy it" craze it just won't happen.

Additionally, most things are meshed externally now and therefore should be UV mapped properly, meaning such a feature would be more trouble than it's worth. On a 1024 texture you should be able to fit the entire texture for a full avatar with very little blank space.

It's more about educating the users. Why people still make a ring with a 1024 texture for the band and another for each individual gem is beyond me.

  • Like 1

9 minutes ago, Drayke Newall said:

 With the "its no-mod cause its my artwork and you have no right to destroy it" craze it just wont happen.

Yes, that no-mod thing is a huge PITA imo. It makes SL a frustrating experience in that regard. Technically it isn't even modifying the texture, but simply reducing its size and complexity for a better experience.

Additionally, most things are meshed externally now and therefore should be UV mapped properly, meaning such a feature would be more trouble than it's worth. On a 1024 texture you should be able to fit the entire texture for a full avatar with very little blank space.

OK, I wouldn't have thought the placement of a UV map would differ between 1024x1024 and 512x512, but I'm not familiar enough to know for sure.

It's more about educating the users. Why people still make a ring with a 1024 texture for the band and another for each individual gem is beyond me.

Probably for the simple fact that at 1024x1024 one KNOWS it's going to look the best it can be, and if every size of texture costs the same to upload, why risk uploading a size too small?

 

 


12 minutes ago, Arielle Popstar said:

OK, I wouldn't have thought the placement of a UV map would differ between 1024x1024 and 512x512, but I'm not familiar enough to know for sure.

The issue is that most SL creators are uploading textures that look like this:

[Image: example of a wasteful texture upload, with the UV islands filling only a small corner of the 1024x1024 canvas]

They're automating the UV process and not combining their maps together. So you'll have an object that is covered in 1024x1024 textures, but none of them are actually using more than 256x256. This, again, is one of the reasons why the people screaming that optimization will lead to uglier content are so off base. These content creators would be able to combine all of their textures, reducing the memory cost of their content to a tiny fraction, without sacrificing any detail whatsoever. And it's not difficult to do! They just never bother to figure it out because LL never bothered to put any restrictions on texture use.
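To put rough numbers on that (a back-of-the-envelope sketch; the face counts and resolutions below are made-up assumptions, not measurements of any actual product):

```
# Rough texture-memory comparison: many mostly-blank 1024s vs. one combined atlas.
# Uncompressed RGBA in VRAM is width * height * 4 bytes; a mip chain adds roughly a third.

def texture_mb(size, count=1, mipmaps=True):
    bytes_per = size * size * 4
    if mipmaps:
        bytes_per = int(bytes_per * 4 / 3)
    return bytes_per * count / (1024 * 1024)

# Hypothetical object: 8 faces, each given its own 1024x1024 texture,
# but each face's UV islands would actually fit inside a 256x256 region.
separate = texture_mb(1024, count=8)   # ~42.7 MB
combined = texture_mb(1024, count=1)   # ~5.3 MB: all 8 face maps packed into one atlas

print(f"8 separate 1024s: {separate:.1f} MB, 1 combined atlas: {combined:.1f} MB")
```

Same pixel detail on the object either way, since each face was only using a 256x256 worth of it to begin with, at roughly an eighth of the memory.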

Edited by Penny Patton
  • Like 4
  • Thanks 1

1 hour ago, Drayke Newall said:

Though that said, prims in themselves have far more polygons than what can be uploaded with mesh. An uploaded mesh cube can have 2 triangles per side and stretch to the maximum size of 64m without any difference in texture sharpness, whereas a normal prim cube stretched to 64m will have many more. So I suppose it comes down to how optimised is optimised.

An untwisted prim cube has 108 triangles at full LoD regardless of its size. Although I do agree it's too much, I wouldn't call it "far more" than a mesh cube's 12. Usually it's easy enough to fix anyway, just the slightest hint of tapering and the prim's triangle count drops down to 12.

 

1 hour ago, Drayke Newall said:

So I suppose it comes down to how optimised is optimised.

Nobody makes better optimised SL content than me.

But I see it even in a completely empty sim. Although it's not laggy enough to cause any problems, I've compared various Second Life (and OpenSim) regions to the Unigine benchmarks, and the SL viewer definitely has a poorer scene complexity/fps ratio than Unigine, at least.

With that being said, as users we can't do much about the overhead inherent in the software. We just have to try to make the most of what we have, and there is quite a lot we can do there.


1 hour ago, Drayke Newall said:

On a 1024 texture you should be able to have the entire texture of a full avatar on it with very little blank space.

I went to a user group meeting where Oz Linden told us LL had seriously considered introducing 1024x1024 textures for avatar eyes. Apparently they had abandoned the idea eventually but I left the meeting anyway just to be sure - it might have been contagious.

  • Haha 2

There are lots of things that could be done to speed up SL. Everybody who knows how game engines work internally and has looked at SL knows this. But with Sansar getting more development resources, nothing is happening. There seem to be only three or so SL server developers. When Simon Linden is on vacation, server releases stop. This is a staffing problem, not a technical problem. SL's job postings include several people for Sansar's back end, but none for Second Life's back end.

(On the technical front, here's my personal list of things that ought to get fixed.)

  • Idle scripts must use zero compute time. Not less time, zero. Idle script time is a huge drain on server resources. I once suggested to Oz Linden that it was using half of sim CPU resources. He indicated it wasn't quite that high. But it's high. Main cause of sim overload for sims with few avatars present. Probably requires someone who can get into the internals of Mono. Might need a Mono consultant to make some changes to Mono's CPU dispatching.
  • Collect all the viewer code that decides which textures to load at which resolutions into a policy module. Smarten up the policy. Classify recent avatar movement as "stationary/moving/moving fast" and "fixed look direction/looking around". Pick desired texture resolution to show based on that (a rough sketch of such a policy follows this list). If you're stationary and not changing look direction, top priority is high texture resolution on what's in front of you, so you can see all the detail at Fashion Week. If you're moving fast, go for moderate texture resolution in the direction of travel out to draw distance, with lower resolution off to the side, so you can see where you're going. The same criteria should affect LOD decisions. Also, for each object, store one color, chosen by reducing the textures to 1x1, with the object. Objects rez in that color rather than grey. So trees in the distance are green blobs, houses in the distance are brown blobs, and roads in the distance are grey blobs. Add a little blur during rendering when in blob mode, and the result will look like fog in the distance. Much better visual effect than all grey blobs.
  • Once level of detail and texture control are behaving better, adjust them automatically to maintain a minimum frame rate. This will make the system feel livelier.
  • Build some good tools to log, and decode into a clear form, the message streams from sim to viewer and sim to sim. The existing tools tell you an object update went by, but that's about it. This would help chase down intractable and intermittent bugs that have been unfixed for years. Good task for a new hire; they'll learn the system but can't break much.
  • Better LOD generation. The uploader's LOD generator is awful. Replace it. Once something good has been found, go through the existing assets, find awful low and lowest LODs generated by the old generator, and replace those. Good background task to run at low priority on AWS.
  • Impostor generation. Every major 3D game has impostors, 2D billboards that replace distant objects with flat pictures of them. Except SL, which only has impostors for avatars. Hierarchies of impostors - object, building, parcel, sim. See for kilometers. Feel the big world.
  • More modern rendering. Principled BRDF? Leave that to a rendering expert.
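As a rough sketch of the texture-policy idea above (nothing from the actual viewer; the movement thresholds, resolution choices, and function names are all illustrative assumptions):

```
from enum import Enum

class Movement(Enum):
    STATIONARY = 0
    MOVING = 1
    MOVING_FAST = 2

def classify_movement(speed_m_s, look_rate_rad_s):
    """Bucket recent avatar motion; the thresholds are made up for illustration."""
    if speed_m_s < 0.5:
        moving = Movement.STATIONARY
    elif speed_m_s < 8.0:
        moving = Movement.MOVING
    else:
        moving = Movement.MOVING_FAST
    looking_around = look_rate_rad_s > 0.5
    return moving, looking_around

def desired_texture_size(moving, looking_around, in_front, distance_m, draw_distance_m):
    """Pick the largest mip (as a pixel size) worth fetching for one texture."""
    if distance_m > draw_distance_m:
        return 0                                  # outside draw distance: skip entirely
    if moving is Movement.STATIONARY and not looking_around:
        return 1024 if in_front else 256          # parked at Fashion Week: full detail ahead
    if moving is Movement.MOVING_FAST:
        return 512 if in_front else 128           # travelling: moderate ahead, low to the sides
    return 512 if in_front else 256               # walking or looking around

moving, looking = classify_movement(speed_m_s=0.1, look_rate_rad_s=0.05)
print(desired_texture_size(moving, looking, in_front=True, distance_m=20, draw_distance_m=128))  # 1024
```

The same classification could feed the LOD and "rez as a 1x1-colour blob" decisions, so the load-order heuristics live in one place instead of being scattered through the code.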

 

 

  • Like 2
  • Thanks 1

 

1 hour ago, Penny Patton said:

The issue is that most SL creators are uploading textures that look like this:

[Image: example of a wasteful texture upload, with the UV islands filling only a small corner of the 1024x1024 canvas]

They're automating the UV process and not combining their maps together. So you'll have an object that is covered in 1024x1024 textures, but none of them are actually using more than 256x256.

This is not true

When you over exaggerate something to the point of absurdity you undermine the entire case you're trying to make.

 

We have progressively encoded textures right now. Only the texture pipeline is like a pinewood derby where everyone only gets one wheel. The decode overhead is massive and accounts for the bulk of the processing SL does, so much so that doing anything else while in the decode phase is pointless. Even balancing how much texture decoding to do per frame is utterly broken as once performance starts circling the drain it can only get worse.

This is why your SL goes slow when you TP anywhere or move the camera.

The new cache removes this for textures previously seen. The cache will store files that can be dumped straight to VRAM.

Right now, to get a lower-resolution image the viewer throws everything away and decodes the texture from scratch; this is insanely expensive. The new cache will mean it can just step down with no decoding at all. No magical woo-woo required .. the file will probably still be cached by the OS if you have a reasonable amount of RAM.
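To illustrate the difference being described (a toy model only; the class names and relative costs are made up, and this is not how the viewer or the new cache is actually structured):

```
# Toy model: the only thing that matters is where the expensive JPEG2000 decode happens.

DECODE_COST_FULL = 64   # made-up relative cost of decoding a 1024 texture from the source file

class OldPipeline:
    """Every resolution change throws the data away and re-decodes from the source file."""
    def __init__(self):
        self.decode_work = 0
    def set_resolution(self, size):
        self.decode_work += DECODE_COST_FULL      # full decode, then downscale

class NewCachePipeline:
    """GPU-ready data is cached after the first decode; stepping down is just a read."""
    def __init__(self):
        self.decode_work = 0
        self.cached = False
    def set_resolution(self, size):
        if not self.cached:
            self.decode_work += DECODE_COST_FULL  # decoded once, then kept on disk
            self.cached = True
        # later changes read the right mip straight from the cache / OS page cache

old, new = OldPipeline(), NewCachePipeline()
for size in (1024, 512, 256, 512):                # camera moves, priorities change
    old.set_resolution(size)
    new.set_resolution(size)

print(old.decode_work, new.decode_work)           # 256 vs 64 units of decode work
```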

 

2 hours ago, Drayke Newall said:

It's more about educating the users. Why people still make a ring with a 1024 texture for the band and another for each individual gem is beyond me.

The workflow for baking multiple objects, each with multiple textures, onto a single map is cumbersome in Blender without an addon. It can get very labor-intensive and time-consuming.

  • Thanks 2

26 minutes ago, animats said:

When Simon Linden is on vacation, server releases stop. This is a staffing problem, not a technical problem.

But, according to those who are supposed to know these things somehow, there isn't a staffing shortage/problem and hasn't been since that rather large layoff of (I think it was) 30% of the workforce. Supposedly the two aren't connected in any way. I'm not sure how that conclusion was reached, but it is what it is. I just don't buy into it.


11 minutes ago, Selene Gregoire said:

But, according to those who are supposed to know these things somehow, there isn't a staffing shortage/problem and hasn't been since that rather large layoff of (I think it was) 30% of the workforce. Supposedly the two aren't connected in any way. I'm not sure how that conclusion was reached, but it is what it is. I just don't buy into it.

If you go to Server User Group regularly, it's clear that important things are not being addressed because of staff shortages. Or look at the JIRA, for bugs long marked as "accepted" but not fixed.

  • Like 1

56 minutes ago, animats said:

There are lots of things that could be done to speed up SL. Everybody who knows how game engines work internally and has looked at SL knows this. But with Sansar getting more development resources, nothing is happening. There seem to be only three or so SL server developers. When Simon Linden is on vacation, server releases stop. This is a staffing problem, not a technical problem. SL's job postings include several people for Sansar's back end, but none for Second Life's back end.

 

Sorry to cut your great post short, but I guess that explains why the whole EEP thingy has stalled. Maybe for the better even, that they didn't release it yet, because it was in a deplorable state, really.

  • Like 1

11 minutes ago, animats said:

If you go to Server User Group regularly, it's clear that important things are not being addressed because of staff shortages. Or look at the JIRA, for bugs long marked as "accepted" but not fixed.

Just looking at the JIRA alone, at some of the fixes that are really needed (technically), a few possibly desperately needed, and at how long they've been "in queue", it's kind of obvious they don't have the needed staff. The top showstopper is employee budget. It's not just salaries employers have to pay; employers also have to pay employee taxes like Social Security, Worker's Comp, and a slew of other expenses I'm not going to list. It's not hard to connect the dots and follow the money to Sansar. But I really don't want to get into a discussion or debate or argument about it, because people are going to believe what they want to believe and nothing will change that.


2 hours ago, CoffeeDujour said:

When you over exaggerate something to the point of absurdity you undermine the entire case you're trying to make.

Is she doing that? I told her a million times not to exaggerate!

  • Haha 2

My real point is this: It is far easier and cheaper to bring SL up to modern game technology levels than to try to start up a new virtual world and get users to use it.

Failed new virtual worlds include High Fidelity, Sinespace, Sansar, Worlds Adrift, Facebook Spaces... Two have shut down and the other three have few users. It's just not working out.

(On the technical front, I'd previously mentioned Nostos, which was supposed to be a big shared world running on Spatial OS, operated by NetEase from China. So that was going to be the real test of new big-world technology. It launches next month. Turns out, "shared" is smaller than expected: "Nostos is an open world multiplayer game set in a vast tranquil wilderness scarred by an ancient war. 4-20 players create persistent sessions..." So it's sharded like Sansar, but unlike SL. So far, nobody but SL has a big shared world that works, despite Improbable spending $500,000,000 trying. So fixing up SL remains promising, vs. attempting a new start.)

(I watched the final moments of Worlds Adrift. In the end, nothing but clouds remained.)

  • Like 3

5 hours ago, animats said:

(On the technical front, here's my personal list of things that ought to get fixed.)

  • Idle scripts must use zero compute time. Not less time, zero. Idle script time is a huge drain on server resources. I once suggested to Oz Linden that it was using half of sim CPU resources. He indicated it wasn't quite that high. But it's high. Main cause of sim overload for sims with few avatars present. Probably requires someone who can get into the internals of Mono. Might need a Mono consultant to make some changes to Mono's CPU dispatching.

This is supposed to be the case, but for some reason it doesn't work like that. Also, once again, it comes down to user editing as well in some cases. For example, you can sometimes remove the script from an object once the script has already performed its function, i.e. texture movement will continue even with the script removed (just don't then copy the object). I would be very surprised if the script were still recognised as running once it has been removed. When I did this, all indications were that script load was reduced, though this was a few years ago when I had a sim, so maybe something has changed.

The other issue is the fad now of having a script in every object. How many people who landscape a sim remove the seasonal texture-change script from the plants? Instead they keep it in there despite never using it. Sure, if you have 2 trees on your sim and you change those, it's manageable, but when it is 50+ trees the scripts are pointless, and it would be easier and less time-consuming just to replace the tree entirely with the seasonal type.

Quote

Collect all the viewer code that decides which textures to load at which resolutions into a policy module. Smarten up the policy. <snipped remainder>

This has already been looked into to some degree. They did release code that makes it so you render first whatever is in front of you. That said, it could definitely use improvements, especially, as you mentioned, to the LOD system, which is just too easy for users to ignore or abuse. Once again, a lot falls to the user, as they have, upon mesh upload, the option to specify the LOD. This should be done automatically by the viewer through algorithms based on the complexity of the object. The only time it should be up to the user is if they upload a separate set of LOD meshes, but as usual even this will be abused.

Also, LL should just remove the ability to change "RenderVolumeLODFactor" in the debug settings. I remember long ago when it was recommended (for some reason) to bump this up to max, but that was when sculpts were the only thing. Now, with mesh, it is not needed, yet users still bump it to max, then complain it's reducing performance.
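For context, a toy model of why that setting is so easy to abuse. The commonly described behaviour is that LOD switch distance scales with object radius times the LOD factor; the constants and names below are illustrative assumptions, not the viewer's actual formula:

```
def lod_switch_distances(radius_m, lod_factor):
    """Approximate distances (m) at which an object drops to lower LOD meshes.
    The divisors are illustrative only; the real viewer math differs in detail."""
    base = radius_m * lod_factor
    return {"medium": base / 0.24, "low": base / 0.06, "lowest": base / 0.03}

print(lod_switch_distances(1.0, 1.125))  # a 1 m ornament starts dropping detail within ~5 m
print(lod_switch_distances(1.0, 4.0))    # maxed out, it keeps higher LODs ~3.5x farther away
```

The higher the factor, the farther out every object keeps its high-triangle LODs, which is exactly the performance hit people then complain about.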

Quote

Impostor generation. Every major 3D game has impostors, 2D billboards that replace distant objects with flat pictures of them. Except SL, which only has impostors for avatars. Hierarchies of impostors - object, building, parcel, sim. See for kilometers. Feel the big world.

Once again, whilst a fix to the LOD system would be great, at the moment this falls on the user to know how to handle, based on LOD levels. The problem we have at the moment is that a person can upload a complicated mesh object and give it little to no LOD. How many people who upload mesh actually go to the trouble of making a separate physics shape? Instead we get uploaded mesh where people then find the physics doesn't work properly, so they just tack on invisible prims to solve the problem.

Look at a sphere. There are 2 ways of making it look round. You either upload a complicated, high-poly mesh and just use that, impacting performance, or you bake the texture from a high-poly mesh using smoothing, then upload a lower-poly mesh and apply the texture to the low-poly object. Both results look perfectly fine. Game creators use the latter, baked-texture option; which do you think is more common in SL?
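Rough triangle counts for that sphere example (just arithmetic on a standard UV-sphere layout; the segment and ring choices are arbitrary illustrations):

```
def uv_sphere_triangles(segments, rings):
    """Triangles in a UV sphere: quad bands between the poles plus two triangle fans."""
    return 2 * segments * (rings - 2) + 2 * segments

dense = uv_sphere_triangles(64, 32)   # "just upload the high-poly mesh": 3968 triangles
baked = uv_sphere_triangles(16, 8)    # low-poly mesh + baked/smoothed shading: 224 triangles

print(dense, baked)                   # roughly 18x fewer triangles, near-identical on screen
```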

People seem to blame the viewer for everything. Cache? Sure, that's bad at the moment. LOD? Sure, that can be tweaked. But not everything is viewer-based. If I create a game in Unreal Engine using the same methods some people use when uploading content to SL, that game will be just as crippled as SL.


7 hours ago, CoffeeDujour said:

This is not true

<snip>

Sure, maybe when (if) the new cache gets implemented, but for now we are stuck with the current cache, so saying things like this is pointless and just contributes to the whole mess.

The point of the matter is that when someone uses a texture like the one Penny posted for one part of a mesh, then uses another texture (again using only the upper-left corner) for a different face on the same mesh object, your GPU, CPU, viewer, everything has to process twice the number of textures. If one texture contains all the maps for the whole object, it's processed once and is only one texture in the cache, not two, taking up less room. You can even use the same texture for 2 objects, with each object using a different area of the texture. This is done all the time in any other game or game engine.
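A tiny sketch of that reuse point (the scene is made up; the only thing it shows is that download/decode/cache cost follows unique textures, not how many faces reference them):

```
from collections import Counter

# Hypothetical scene: which texture each textured face references.
faces = {
    "house_wall_n": "tex-brick", "house_wall_s": "tex-brick",
    "house_wall_e": "tex-brick", "house_wall_w": "tex-brick",
    "shed_wall":    "tex-brick",   # the same brick atlas reused on another object
    "house_roof":   "tex-roof",
    "shed_roof":    "tex-roof",
}

usage = Counter(faces.values())
print(f"{len(faces)} textured faces, {len(usage)} textures to download, decode and cache")
# 7 textured faces, 2 textures to download, decode and cache
```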

That is the main problem. Using textures smartly will solve quite a lot of the issues. When I had a sim designed by myself and another game-creator friend, I don't think there was ever a time, loading into Second Life, that I saw a texture actually load, unless I had been sim hopping and filling up my cache. This was with the entire sim fully material-mapped and using lighting properly (using light textures rather than just ticking "light"). But then, he made the mesh and textures like you would in a game, and I designed it like a game area. The design of the sim we came up with used object barriers (rooms) that gradually moved you to larger open areas, giving the viewer time to load everything before you got there. Additionally, assets were recycled, as were textures.

The comments we received all the time were that the sim didn't reduce FPS, didn't lag, and loaded insanely quickly. You could have your draw distance set to 32 and, because the textures were recycled or used in multiple areas, they were already processed and therefore "popped" in instantly.

Quote

The workflow for baking multiple objects, each with multiple textures, onto a single map is cumbersome in Blender without an addon. It can get very labor-intensive and time-consuming.

I'm sorry, that's just a poor excuse for laziness. Download the addon, just like most people have to do with any 3D creation software.

Edited by Drayke Newall

24 minutes ago, Drayke Newall said:

Also, LL should just remove the ability to change "RenderVolumeLODFactor" in the debug settings. I remember long ago when it was recommended (for some reason) to bump this up to max, but that was when sculpts were the only thing. Now, with mesh, it is not needed, yet users still bump it to max, then complain it's reducing performance.

It can't be removed. That debug setting is the same value that the object detail slider in the prefs changes. Debug settings for the most part are just global variables.

.. and even if it was locked down in some way by LL, FS would just ignore those changes and carry on regardless.

24 minutes ago, Drayke Newall said:

Instead we get uploaded mesh where people then find the physics doesn't work properly, so they just tack on invisible prims to solve the problem.

A prim based physics model is not bad .. SL physics has been built around prims from the outset and depending on your structure can be cheaper.

24 minutes ago, Drayke Newall said:

If I create a game in Unreal Engine using the same methods some people use when uploading content to SL, that game will be just as crippled as SL.

This is because SL is entirely server-side; your viewer is dumb as rocks and only does what the region tells it. A game will handle all client input and physics and update the server; SL tells the server you pressed up and waits for the server to tell it what changed.

Creating the same scene 1:1, you will get significantly lower performance in SL than in a game engine. There is no way to change this. A game engine is a lightweight race car designed to move a very specific load as fast as it can. SL is a garbage truck; it will move tons of random stuff.

The difference is down to a key element that differentiates SL from all games. In SL you can change the entire scene, in every way, from one frame to the next, with new previously unseen content. You can't do that with a game engine.

14 minutes ago, Drayke Newall said:

Sure, maybe when (if) the new cache gets implemented, but for now we are stuck with the current cache, so saying things like this is pointless and just contributes to the whole mess.

It's when, not if. 

The only thing that's pointless is ranting about the state of the texture pipeline now and blaming it all on LL for not imposing limits. That accomplishes nothing unless someone has a time machine handy (and intends to do more than travel back just to shout at the developers).

14 minutes ago, Drayke Newall said:

The point of the matter is that when someone uses a texture like the one Penny posted for one part of a mesh, then uses another texture (again using only the upper-left corner) for a different face on the same mesh object, your GPU, CPU, viewer, everything has to process twice the number of textures. If one texture contains all the maps for the whole object, it's processed once and is only one texture in the cache, not two, taking up less room. You can even use the same texture for 2 objects, with each object using a different area of the texture. This is done all the time in any other game or game engine.

The kind of horrendous texture wastage that Penny was citing is super rare. It's a tiny edge case.

The vast majority of textured objects in SL are "fine". Not excellent, not mostly blank, but quite "fine".

The single biggest sin is one of object density and clutter. It's very easy to fill a space with a hundred 1Li objects and then blame the hundred 1024 textures that come with them.

The vast majority of SL is built by picking and placing things made by other people. Thanks to Li being directly linked to cost, and FS-enabled hacking around the LOD system to get objects rezzed as cheaply as possible, people pick and place way more things than they should.

21 minutes ago, Drayke Newall said:

I'm sorry, that's just a poor excuse for laziness. Download the addon, just like most people have to do with any 3D creation software.

Which addon should we download for Blender 2.8 ?

  • Like 1

4 minutes ago, CoffeeDujour said:

It can't be removed. That debug setting is the same value that the object detail slider in the prefs changes. Debug settings for the most part are just global variables.

.. and even if it was locked down in some way by LL, FS would just ignore those changes and carry on regardless.

Locking it is what I meant: removing people's access to change it. As to FS ignoring it, well, if that's the case, at least we can say "don't use Firestorm" 😛

Quote

A prim based physics model is not bad .. SL physics has been built around prims from the outset and depending on your structure can be cheaper.

Oh, I agree; if it is just a somewhat square object like a statue, sure. But I have bought so many houses or the like where half of the mesh has decent physics and then one part doesn't, and a wall can be walked through, so they just place a prim in front of it.

Quote

This is because SL is entirely server-side; your viewer is dumb as rocks and only does what the region tells it. A game will handle all client input and physics and update the server; SL tells the server you pressed up and waits for the server to tell it what changed.

Creating the same scene 1:1, you will get significantly lower performance in SL than in a game engine. There is no way to change this. A game engine is a lightweight race car designed to move a very specific load as fast as it can. SL is a garbage truck; it will move tons of random stuff.

The difference is down to a key element that differentiates SL from all games. In SL you can change the entire scene, in every way, from one frame to the next, with new previously unseen content. You can't do that with a game engine.

I'll rephrase, as I'm not talking about server-to-client performance; I agree that is different and a grin-and-bear-it issue. Nor am I talking about the dynamic nature of it.

Take the Bethesda Creation Engine, as this is the most commonly modded game engine that allows it. I can create a mesh object so polygon-dense, similar to how some might make it for SL, such as a single coin with a 4K texture, then add that to the game, and it can crash the game due to the increased load - a single coin. The same goes for 4K textures. Whilst yes, SL is different, the performance hit a person takes by loading poorly optimised content will still affect performance, whether in a dynamically generated environment such as Second Life or a static one such as Skyrim, as the computer needs more resources to process the coin irrespective of which "game" it is in.

Quote

The only thing that's pointless is ranting about the state of the texture pipeline now and blaming it all on LL for not imposing limits. That accomplishes nothing unless someone has a time machine handy (and intends to do more than travel back just to shout at the developers).

I agree, though not imposing limits will still create the same issues even with a new cache method, as unlike a game studio the average user isn't as adept at creating game assets. People are still going to limit how big their cache is despite any new changes, and by the sounds of the new cache, you're going to need an even bigger one to accommodate it.

Quote

The kind of horrendous texture wastage that Penny was citing is super rare. It's a tiny edge case.

I'm not too sure about that. I've seen some pretty terrible texture wastage, and quite often; a lot of it is similar to Penny's example, maybe not as extreme as a quarter, but half, definitely.

Quote

The single biggest sin is one of object density and clutter. It's very easy to fill a space with a hundred 1Li objects and then blame the hundred 1024 textures that come with them.

The vast majority of SL is built by picking and placing things made by other people. Thanks to Li being directly linked to cost, and FS-enabled hacking around the LOD system to get objects rezzed as cheaply as possible, people pick and place way more things than they should.

That's my point as to why they should lock down the LOD system, and to be honest LL could just stop FS from hacking around it. I'm sure it wouldn't be the first time Linden Lab has stopped a TPV from doing something; just ask Niran.

I do agree on the other things people commented on, which is why in my posts I have said that making your own is the best way.

As to the hundreds-of-1Li-objects example, once again it comes down to the user, not Linden Lab or the viewer. If those 1Li objects share only 10 textures between them, then that's good use of resources. The case we have now is that most people would have 100 unique textures, one for each object. That would be poor use of resources. Who do I blame for that? I would say the creators of the objects. But as you say, if people use other people's creations then you will have multiple textures everywhere, though then it once again comes down to the creators to ensure that their objects are as optimised as possible to resolve situations like this.

Quote

Which addon should we download for Blender 2.8 ?

No idea, I don't use Blender. I only suggested it because you mentioned it wasn't possible without an addon, so I assumed one was available. If there isn't, then learn how to do it manually and grin and bear it. I have to do that for a lot of my content created IRL if there isn't an easier way of doing it.

  • Like 1

1 minute ago, Drayke Newall said:

As to FS ignoring it, well, if that's the case, at least we can say "don't use Firestorm" 😛

Yeah, that battle was won and lost a long time ago. All hail the orange viewer, may its checkboxes be plentiful.

1 minute ago, Drayke Newall said:

Oh, I agree; if it is just a somewhat square object like a statue, sure. But I have bought so many houses or the like where half of the mesh has decent physics and then one part doesn't, and a wall can be walked through, so they just place a prim in front of it.

I've built things both ways and am on the fence as to which is my favorite as a consumer.

I've had houses, for example, that I've broken down into exterior and interior spaces; prim physics makes ripping things apart much simpler.

1 minute ago, Drayke Newall said:

I'll rephrase, as I'm not talking about server-to-client performance; I agree that is different and a grin-and-bear-it issue. Nor am I talking about the dynamic nature of it.

I wish the SL client were authoritative and focused around the individual user's perception/experience rather than a uniformly janky shared experience. It's never going to happen for SL. It would be too much of a change.

1 minute ago, Drayke Newall said:

Take the Bethesda Creation Engine, as this is the most commonly modded game engine that allows it. I can create a mesh object so polygon-dense, similar to how some might make it for SL, such as a single coin with a 4K texture, then add that to the game, and it can crash the game due to the increased load - a single coin. The same goes for 4K textures. Whilst yes, SL is different, the performance hit a person takes by loading poorly optimised content will still affect performance, whether in a dynamically generated environment such as Second Life or a static one such as Skyrim, as the computer needs more resources to process the coin irrespective of which "game" it is in.

The difference is SL is built to handle that coin. The garbage truck analogy is very fitting, we can put in most anything, good or garbage, and it just keeps going.

1 minute ago, Drayke Newall said:

I'm not too sure about that. I've seen some pretty terrible texture wastage, and quite often; a lot of it is similar to Penny's example, maybe not as extreme as a quarter, but half, definitely.

I'm not saying it doesn't happen, just that it is an extreme edge case. Which given the nature of SL is par for the course. Can't base any solutions around edge cases, only accept that the system must be able to handle them gracefully.

Less CPU time spent decoding puts a lot of options back on the table, even some of LL's own abandoned attempts might be viable.

1 minute ago, Drayke Newall said:

As to the hundreds-of-1Li-objects example, once again it comes down to the user, not Linden Lab or the viewer. If those 1Li objects share only 10 textures between them, then that's good use of resources. The case we have now is that most people would have 100 unique textures, one for each object.

That's an edge case. 100 1Li objects (probably from the arcade) is entirely normal. See the Linden Home decor threads.


13 hours ago, CoffeeDujour said:

When you over exaggerate something to the point of absurdity you undermine the entire case you're trying to make.

I see textures like this EVERYWHERE in SL. If it's not most SL content creators, then it's still an extremely common practice.

13 hours ago, CoffeeDujour said:

This is why your SL goes slow when you TP anywhere or move the camera.

Something that doesn't happen to me in sims I made myself, or the handful of others I know where the creators took time to optimize texture use. 

13 hours ago, CoffeeDujour said:

The new cache removes this for textures previously seen. The cache will store files that can be dumped straight to VRAM.

Right now, to get a lower-resolution image the viewer throws everything away and decodes the texture from scratch; this is insanely expensive. The new cache will mean it can just step down with no decoding at all. No magical woo-woo required .. the file will probably still be cached by the OS if you have a reasonable amount of RAM.

And VRAM remains a bottleneck. That's the part you're ignoring. That's the part where I'm saying you expect too much of progressively encoded textures. They are not a magic wand to make memory issues go away.
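As a back-of-the-envelope illustration of that bottleneck (the scene numbers below are invented; the per-texture math is just uncompressed RGBA plus a mip chain):

```
def vram_mb(size, count, mipmaps=True):
    bytes_per = size * size * 4             # uncompressed RGBA
    if mipmaps:
        bytes_per = int(bytes_per * 4 / 3)  # mip chain adds roughly a third
    return bytes_per * count / 2**20

# Hypothetical busy club: 40 avatars wearing a few 1024s each,
# plus a few hundred 1024s and 512s on the build itself.
scene = vram_mb(1024, 40 * 4) + vram_mb(1024, 200) + vram_mb(512, 300)
print(f"~{scene:.0f} MB of texture data")   # roughly 2.3 GB before geometry or framebuffers
```

Stepping mips down cheaply helps feed the card, but the total still has to fit somewhere, which is the point about memory budgets.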

13 hours ago, CoffeeDujour said:

The workflow for baking multiple objects, each with multiple textures, onto a single map is cumbersome in Blender without an addon. It can get very labor-intensive and time-consuming.

It does take time, but it's not that bad. When you over exaggerate something to the point of absurdity you undermine the entire case you're trying to make.


14 hours ago, animats said:

Failed new virtual worlds include High Fidelity, Sinespace, Sansar, Worlds Adrift, Facebook Spaces... Two have shut down and the other three have few users. It's just not working out.

Feels a bit unfair to keep calling Sansar failed; it's still getting its foot in the door in an unfortunately slightly crowded room.

Personally, to some extent I feel it has probably given the best support to the desktop side, which has been left rather forgotten by the other new VR-centric social games with desktop tacked on that have cropped up.

And it's got a new avatar system in the works, along with a little "Quest" system that could come in handy for game-themed worlds.

Edited by Digit Gears
  • Haha 1

18 hours ago, CoffeeDujour said:

The workflow for baking multiple objects, each with multiple textures, onto a single map is cumbersome in Blender without an addon. It can get very labor-intensive and time-consuming.

@CoffeeDujour @Penny Patton Is this what you guys were referring to?

Blender 2.8 introduced the ability to have multiple separate objects in edit mode, so you can see the UVs for each of the selected objects all at once and move them around. Just select multiple objects and press Tab for edit mode. From the quick, basic, rough test I show below, I had to copy the Image Texture node (the bake target) to each of the materials on each object; otherwise, only a single object would bake to the texture. Although I think you still need to bake the individual diffuse, normal, etc. passes, unless there's an addon that lets you choose various bake-type options.

  1. 3 separate objects each with their own material and UV map (this is a terrible rough example for quick testing)
  2. Same Image Texture node (the bake to texture node) on each object material
  3. Bake (I had 3 objects, so I saw 3 bake progress bars in sequence)

EDIT: It seems baking multiple separate objects to a single texture is possible with 2.79. But multi-object editing is new with 2.8, and it lets you work on the UVs of multiple objects at once.

[Image: screenshot of the Blender test setup described above, with the three objects sharing one bake-target Image Texture node]
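For anyone who would rather script those steps, here is a minimal bpy sketch of the same idea. It only mirrors the manual workflow described above and is a starting point: it assumes Cycles, that the objects to bake are selected, and the image name and size are made up.

```
import bpy

# Create the shared bake target once.
img = bpy.data.images.new("shared_bake", width=1024, height=1024)

for obj in bpy.context.selected_objects:
    if obj.type != 'MESH':
        continue
    for slot in obj.material_slots:
        mat = slot.material
        if mat is None or not mat.use_nodes:
            continue
        nodes = mat.node_tree.nodes
        node = nodes.new('ShaderNodeTexImage')  # same Image Texture node in every material
        node.image = img
        nodes.active = node                     # Cycles bakes into the active image node

bpy.context.scene.render.engine = 'CYCLES'
bpy.ops.object.bake(type='DIFFUSE')             # repeat per pass (DIFFUSE, NORMAL, ...)
```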

 

Edited by Kurshie Muromachi
  • Thanks 1

9 hours ago, CoffeeDujour said:

The difference is SL is built to handle that coin. The garbage truck analogy is very fitting, we can put in most anything, good or garbage, and it just keeps going.

The fact that such things cause performance issues says otherwise. Sure, SL may keep going, but the experience the user has with the huge performance drop is what one should think about, as this negatively affects user retention.

Quote

I'm not saying it doesn't happen, just that it is an extreme edge case. Which given the nature of SL is par for the course. Can't base any solutions around edge cases, only accept that the system must be able to handle them gracefully.

I think you're underestimating the UV mapping issue, as it is not an edge case at all. There are plenty of examples.

Quote

Less CPU time spent decoding puts a lot of options back on the table, even some of LL's own abandoned attempts might be viable.

Agreed. However, while the viewer is how it is and sees only very slow, incremental improvements to the drain it has on a user's computer, I would far prefer creators to make things properly to reduce all of that drain. If that were done, we would see a much speedier increase in SL's performance than anything LL can muster up in the near future.

  • Like 2


This topic is now closed to further replies.
