
So I had a curious question: with the RTX graphics cards out now, more and more games supporting them, and Second Life adding new things like Bento and Animesh, will there ever be raid tracing support? I am looking into getting an RTX 2060, as all I really do is Second Life and a few games. I can't even imagine how good SL would look with raid tracing support. Is that even something that's possible? I know a lot of stuff is viewer specific.


Let me start by saying it's RAY tracing, not raid tracing.

Is it possible? Everything is possible when you have the will, the time and the money.
Is it gonna happen? No.

For a start, and putting it simply because any other way would be pointless and counterproductive: LL doesn't control the assets in SL. That means they cannot control their parameters, and that is a major hindrance for something like this.

Realistically best case scenario you'd have ray traced shadows and water reflections in SL.

Most games today use Screen Space Reflection for their reflections. The downside of this technique is that only objects on camera can be reflected; the upside is that, compared to other options, it is very light on system resources.
Second Life uses the good old "mirror all the assets" approach, which is why you can see objects in water reflections even when they are off camera. Needless to say, this technique is very heavy. As heavy as actual ray tracing? No.
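Both techniques ultimately start from the same bit of vector math: a view ray is reflected about the surface normal, and the approaches differ only in how that reflected ray gets resolved against the scene (reading back the current frame, re-rendering mirrored geometry, or tracing it for real). A minimal sketch of that shared step, as illustration only:

```python
# Reflection of a view vector about a surface normal: R = V - 2(V.N)N.
# This same formula underpins screen-space reflections, SL's mirrored-geometry
# water pass, and ray-traced reflections; the techniques differ only in how
# the reflected ray is resolved against the scene.

def reflect(v, n):
    """Reflect vector v about unit normal n."""
    d = sum(a * b for a, b in zip(v, n))  # V . N
    return tuple(a - 2 * d * b for a, b in zip(v, n))

# A ray looking straight down at a flat water plane (normal +Z) bounces straight up.
print(reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0)))  # (0.0, 0.0, 1.0)
```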

As for shadows, they would be better and more realistic, but considering the number of people that run SL with them off to begin with...

And that's the problem here, adoption numbers. While you say "more and more games support ray tracing" the reality is I only need one hand to count all of those games.

And how many people that run SL use RTX cards? I'd be amazed if it was more than 100, so why would LL invest a significant amount of money to support a technology that most SL users don't even use?
You likely won't even see ray tracing on Sansar, let alone SL.

On 4/30/2019 at 8:40 PM, Dean Haystack said:

And how many people that run SL use RTX cards? I'd be amazed if it was more than 100

Prepare to be absolutely blown away then.
There are plenty of content creators and gaming enthusiasts who also play Second Life.
I can name 8 people, 8 who just happen to be people I hang out with inworld.
If that's any statistic to go by, then I'd be relatively certain in saying that your metric is off by a landslide.


Oh and OP: The community will deliver. There are plenty of us enthusiasts around to make the dream come true, especially since a few viewer devs (including myself) have a vested interest in making SL look as pretty as possible.

  • Like 2

On 5/18/2019 at 2:33 AM, Vixus Snowpaw said:

(...)If that's any statistic to go by, then I'd be relatively certain in saying that your metric is off by a landslide.(...)

So you're using anecdotal evidence of a small sample size to prove my estimate wrong, by a landslide no less!

I can always present you mine: a few hundred SL users that I've helped set up SL for over the years. Part of that help includes them sending me their system specs, and over 95% of them have laptops with no discrete GPU.
So my sample pool being vastly larger than yours puts the statistical evidence on my side, which, since LL doesn't do hardware surveys (or at least doesn't make them public), is the only thing we have to go by. But I would love to see you prove there are over 100 concurrent users of SL right now running it on RTX cards.


The Firestorm people would have a bigger sample than LL does, but I don't know if they can release what they know. @Whirly Fizzle, does Firestorm keep data on the number of users who have each common type of graphics cards/chips?

My guess is the percentage of RTX users is pretty low ... the cards are new, overpowered, and expensive for the typical SL user. From what I've seen in the forums, laptops with weak graphics capability are commonly used in SL.


As a totally useless data point, there are only 9 reporters on the Firestorm JIRA that have an RTX card (including support tickets) & 7 on the LL JIRA.
So at least 16 people have one  :P

 

  • Like 2
  • Thanks 1


Very few people are running RTX cards, and ray tracing will definitely not play well with the way most of SL is laid out. The whole "unoptimized user content" ordeal would only give you worse performance once you include ray-traced lighting in the mix.

When this game runs decently on high-end hardware at all, and when the technology is common in all consumer graphics cards, that's when it might be viable to add the functionality to SL. But when you consider the very small number of people who use RTX cards, or who even have GPUs capable of non-slideshow ray tracing otherwise (780 Ti / R9 290X or higher, really), with the majority of users being on more average hardware, there's just no point.

"hey, LL here, have this feature that maybe 1% of you can actually use, and for most of that 1% it's going to be a sub-20fps nightmare"

  • Like 1


Yet another RTX owner here ...

It should be mentioned that ray tracing works on cards other than just the RTX series.

I can't say how much work would be involved in adding it to the viewer. If I had to make a finger-in-the-wind guess at performance, I'd expect a solid improvement versus existing rendering with shadows enabled, as it shifts a lot of processing away from the CPU. Faster than without shadows? Probably not. It also depends on whether it's used for just point lighting, or for global illumination and/or shadows.

Pretty for sure.

  • Like 1


Also consider that ray tracing in general is extremely hardware intensive, and even games like Minecraft with ray tracing require very, very high-end computers to get playable framerates.

>> https://youtu.be/5jD0mELZPD8?t=62

That's an RTX 2080 Ti with an i7 7900X getting 60fps at 1080p with that graphical mod, and a Ryzen 1700X with a GTX 1070 (still a very high-end system) at 720p to get 30fps. Even titles designed for ray tracing end up with 1080p/720p sub-30fps performance if you want it to actually look good. Battlefield V is a good example, where a 1070 and an HEDT-type i7 or Ryzen 7 is going to give you 1080p medium at 30fps with ray tracing enabled.

A game like SL, where you have a lot of detailed objects and shadows being drawn to begin with, lots of lights, and lots of non-static or interactive physics objects, is a game where ray tracing is going to decimate performance. And really, most of this game's users just don't have the hardware to make it even close to playable. SL can already look pretty impressive, but work needs to be put into optimizing rendering in the game, because as many other discussions have covered, there's no real reason for SL to perform as badly as it does. But that's another topic.

Ray tracing just isn't going to happen for SL unless there's a pretty big jump in graphical hardware performance at a more economical price. Currently the Turing cards are the first generation with dedicated RT cores; as time goes on, if the technology sticks at all, I'm sure there will be more powerful ray-tracing-capable cards at a price the masses can buy. As of right now, most users are running laptops and integrated graphics, and even among dedicated graphics most people use something like a GTX 1060, 1050, or performance equivalent, or even lower end. I use Steam's hardware survey for these kinds of stats. There is a similar portion of users on Steam using a GT 730 as using an RX 580...

They're usually a pretty good representation of the PC gaming sphere. SL's userbase is probably a little different but likely follows the same overall trends. So that brings it back to my original statement: there just aren't enough people who would ever use ray tracing for it to be a good idea to add it to the game. And if it were added, almost nobody who could run it would get performance that wasn't completely unbearable.


The Minecraft demo is not making any use of the RTX cores; it's software path tracing (which they explain if you watch the video with the sound on). The only thing the 2080 is doing is being much faster than his friend's 1070... which it is.

10 hours ago, cykarushb said:

Because of many other times where this has been discussed, there's no real reason for SL to perform as badly as it does.

You say this a lot, and disagree with those of us who work on viewer code. So please, by all means: get the source and show us how it's done. Or a friend, or a friend of a friend, or anyone really, anyone at all. It's only been open source forever at this point, and here we are.

  • Like 1

5 hours ago, CoffeeDujour said:

... so please, by all means, get the source and show us how it's done. Or a friend, or a friend of a friend, or anyone really, anyone at all. It's only been open source forever at this point and here we are.

There's a position open on Linden Lab's careers page. It's been there for a while now. It would be nice if someone filled it, assuming it's still open. If someone knows someone, let them know, please.

  • Like 1


Posted it on the SL sub. If we can't do the job, the least we can do is share the hell out of it.

  • Thanks 1

On 5/27/2019 at 1:25 AM, cykarushb said:

Also consider that ray tracing in general is extremely hardware intensive and even games like minecraft with ray tracing require very, very high end computers to even get playable framerates.

>> https://youtu.be/5jD0mELZPD8?t=62

Nvidia to publish open source version of Quake II RTX

https://hexus.net/gaming/news/pc/129617-nvidia-publish-open-source-version-quake-ii-rtx/

 

 

It looks like ray tracing might give Second Life a facelift, and older video cards support it somewhat? It only says basic effects for non-RTX cards.

Ray Tracing, Your Questions Answered: Types of Ray Tracing, Performance On GeForce GPUs, and More

https://www.nvidia.com/en-us/geforce/news/geforce-gtx-dxr-ray-tracing-available-now/

 

  • Haha 1

On 5/17/2019 at 9:33 PM, Vixus Snowpaw said:

Prepare to be absolutely blown away then.
There are plenty of content creators and gaming enthusiasts who also play Second Life.
I can name 8 people, 8 who just happen to be people I hang out with inworld.
If that's any statistic to go by, then I'd be relatively certain in saying that your metric is off by a landslide.


Oh and OP: The community will deliver. There are plenty of us enthusiasts around to make the dream come true, especially since a few viewer devs (including myself) have a vested interest in making SL look as pretty as possible.

The same content creators that fill their products with unnecessarily large textures, geometry, and bloated scripts, in a game that penalizes people more for geometry count than for texture data, which is orders of magnitude more expensive to store and stream? The same content creators that either rehash work by Linden Lab, or give up and say that the rendering engine is too big and scary and the code too obfuscated?

Anyways, you can get ray tracing working with Second Life; it's just a bit of a pain in the ass and is limited to depth-buffer/screen-space effects for the moment, at least with ReShade.

https://i.gyazo.com/c918495088980042e9de2c70fe974c19.mp4

https://i.gyazo.com/ae671903e97e5ca0a3fabbe5598a936c.mp4

https://i.gyazo.com/f3a1a73bdd075bd7df8b48b80e97263c.mp4

https://i.gyazo.com/418304ccea6490f3988578e0fdd5018e.mp4

I was exploring the idea of allowing the dynamically generated cubemap that windlight settings create, which is later sampled by the environmental reflection engine in SL, to be overridden. I managed to ***** up the sampling and texture scaling on it, so it ended up pulling from garbage areas of memory. My plan of attack was going to revolve around using stb_image and shunting in a custom cubemap for the environmental reflections. I had to give up due to work and school *****. I'd imagine it could be possible to sample full-perm textures or UUIDs from ingame and then somehow feed those into the engine as well. A kind of kludgy cubemap or reflection capture node or something.
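For anyone curious what such an override would actually have to feed: an environment cubemap is indexed by a 3D direction, by picking the face whose axis has the largest magnitude and projecting the other two components into that face's UV space. The sketch below is a generic illustration of that lookup, not the viewer's actual code, and face/UV conventions vary by API:

```python
# Minimal sketch of cubemap sampling: given a 3D direction, pick the face
# whose axis has the largest magnitude, then project the remaining two
# components into [0, 1] UV space on that face. A custom environment map
# (e.g. one loaded with stb_image in a viewer) would be indexed this way.

def sample_face(direction):
    """Return (face_name, u, v) for a direction vector."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:            # +X / -X face dominates
        face, sc, tc, ma = ("+x" if x > 0 else "-x"), (-z if x > 0 else z), -y, ax
    elif ay >= az:                       # +Y / -Y face
        face, sc, tc, ma = ("+y" if y > 0 else "-y"), x, (z if y > 0 else -z), ay
    else:                                # +Z / -Z face
        face, sc, tc, ma = ("+z" if z > 0 else "-z"), (x if z > 0 else -x), -y, az
    u = (sc / ma + 1.0) / 2.0            # map [-1, 1] onto [0, 1]
    v = (tc / ma + 1.0) / 2.0
    return face, u, v

print(sample_face((0.0, 0.0, 1.0)))      # looking straight down +Z: ('+z', 0.5, 0.5)
```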
 

https://i.gyazo.com/c1a78312b6e96e5f9e4a59827806192c.mp4
 


Edited by Biggay Koray


Ray tracing for worlds as complex as SL's is a ways off. But there is a way.

The whole graphics system of SL needs modernization, but that's a huge job. One new hire isn't enough. The next new hire will probably be stuck trying to move the viewer off OpenGL and onto Vulkan, with Metal on Apple hardware. Until recently, you could just use OpenGL on all platforms, but that's changing. Vulkan seems to be the future: there's Vulkan for Windows, for Mac (through an open source adapter called "MoltenVK", if that works), and for Linux.

This is a job I would consider "hard", "not fun", and "write once, debug everywhere". This is far harder than EEP, and look what a mess that turned into.

Maybe, if we're really lucky, we get physically based rendering out of this. Principled BSDF, which is a Disney/Pixar standard and which Blender understands, is probably the way to go. It's more texture layers. Right now, SL has diffuse, emissive, specular, and normal ("bump") textures. This adds more textures to the mix:

[image: Blender Cycles Principled BSDF node inputs]

Putting all those layers together is something modern GPUs can do fast. Much faster than ray tracing. The main use cases are for skin, for which the "subsurface" layer contributes to realism, and automotive paint jobs, where "clearcoat" and "clearcoat roughness" matter. Most of the time, you don't use all those textures at the same time.
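As a rough illustration of why layer blending is cheap: per pixel it boils down to a handful of multiply-adds over the sampled layers. The toy model below is deliberately simplified (Lambert diffuse plus a crude blended specular, nothing like a production BRDF), just to show the shape of the computation:

```python
# Toy per-pixel combination of PBR layers (base color, metallic, roughness,
# emissive) into a final color. This is a deliberately simplified Lambert +
# blended-specular model, not any engine's actual BRDF; it just shows that
# layer blending is a handful of multiply-adds the GPU does per pixel.

def shade(base_color, metallic, roughness, emissive, n_dot_l):
    """Blend PBR layers for one pixel; all inputs in [0, 1]."""
    diffuse = tuple(c * (1.0 - metallic) * n_dot_l for c in base_color)
    spec_strength = (1.0 - roughness) * n_dot_l     # rough surfaces -> dull highlight
    specular = tuple((metallic * c + (1.0 - metallic) * 0.04) * spec_strength
                     for c in base_color)           # metals tint their highlight
    return tuple(min(1.0, d + s + e) for d, s, e in zip(diffuse, specular, emissive))

# A fully rough, non-metal red surface lit head-on: pure diffuse red.
print(shade((1.0, 0.0, 0.0), metallic=0.0, roughness=1.0,
            emissive=(0.0, 0.0, 0.0), n_dot_l=1.0))  # (1.0, 0.0, 0.0)
```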

From an SL perspective, it's almost all viewer side. The server just has to tell the viewer the UUIDs and URLs from which to get the texture images, and some numbers associated with how they're assembled. There's no wiring up of shader plumbing, as with Cycles render. It's close to the way SL represents materials now. So LL might be able to pull this off.

At lower graphics settings, the viewer would skip some of the more subtle layers. Think of this as "Advanced Lighting Model, Boss Level". You'll need a good GPU. Time moves on and GPUs get better and cheaper.

What do the graphics people think of this? It's worrisome that, with all the tutorials on "Principled BSDF" for Blender, very few show photorealistic humans. Lots of shiny things, garden gnomes, etc., but not many humans.

 


Edited by animats
  • Like 1

4 hours ago, animats said:

Ray tracing for worlds as complex as SL's is a ways off. But there is a way.

The whole graphics system of SL needs modernization, but that's a huge job. One new hire isn't enough. The next new hire will probably be stuck trying to move the viewer off OpenGL and onto Vulkan, with Metal on Apple hardware. Until recently, you could just use OpenGL on all platforms, but that's changing. Vulkan seems to be the future: there's Vulkan for Windows, for Mac (through an open source adapter called "MoltenVK", if that works), and for Linux.

This is a job I would consider "hard", "not fun", and "write once, debug everywhere". This is far harder than EEP, and look what a mess that turned into.

Maybe, if we're really lucky, we get physically based rendering out of this. Principled BSDF, which is a Disney/Pixar standard and which Blender understands, is probably the way to go. It's more texture layers. Right now, SL has diffuse, emissive, specular, and normal ("bump") textures. This adds more textures to the mix:


Putting all those layers together is something modern GPUs can do fast. Much faster than ray tracing. The main use cases are for skin, for which the "subsurface" layer contributes to realism, and automotive paint jobs, where "clearcoat" and "clearcoat roughness" matter. Most of the time, you don't use all those textures at the same time.

From an SL perspective, it's almost all viewer side. The server just has to tell the viewer the UUIDs and URLs from which to get the texture images, and some numbers associated with how they're assembled. There's no wiring up of shader plumbing, as with Cycles render. It's close to the way SL represents materials now. So LL might be able to pull this off.

At lower graphics settings, the viewer would skip some of the more subtle layers. Think of this as "Advanced Lighting Model, Boss Level". You'll need a good GPU. Time moves on and GPUs get better and cheaper.

What do the graphics people think of this? It's worrisome that, with all the tutorials on "Principled BSDF" for Blender, very few show photorealistic humans. Lots of shiny things, garden gnomes, etc., but not many humans.

 


PBR could be doable, but I would really, really emphasize that they move to a fully fledged deferred rendering engine and ditch the legacy crap. Also, OpenGL is hardly dead, and though Vulkan would be a huge improvement, they would likely have to raise the minimum system spec considerably, and porting from OpenGL to Vulkan is not trivial. The last time I peeked at the rendering code, they were still using a ton of early-ass OpenGL crap that they extended with GL ARB calls to maintain backwards compatibility.

Also, to clarify, PBR materials are more than just adding extra texture channels: parts of the shaders are reworked to handle energy preservation approximations, which factors into specular highlights, Fresnel, IOR, etc. Additionally, it looks like this was started, or kind of attempted, in the way parts of the environmental shading system and time of day were implemented, at least if I'm understanding things correctly.

The other thing to take into account is that one need not be limited to the texture channels strictly provided by conventional PBR workflows; you can implement stuff like parallax occlusion mapping, subsurface scattering, etc. Almost all modern triple-A game engines use custom-spun PBR rendering engines with extra perks thrown in, like fancy subsurface scattering for skin and translucency, or parallax and height info for fancy depth details.

An example of PBR workflow taken to 11 in UE4 without ray tracing.

Blender PBR realtime engine.

 

General Rant Time About the Engine
A big gripe of mine is how a lot of assets are handled, which seems to encourage poor content design practices and likely costs LL a good chunk of change. From a data storage and streaming standpoint, they punish players for using a lot of geometry, which is relatively inexpensive even by today's standards, while there is no penalty for plastering 1k texture maps over everything. Then they turn around and make geometry mandatory that usually entails extra color data or preassigned textures which players seldom remove. This makes me shake my head for several reasons when compared to modern graphics engines or anything that handles lots of streamed data, those being:

1. PNGs for 1k maps are typically on the order of several megabytes, compared to a few dozen kilobytes for meshes, yet people are punished with land impact based primarily on geometry count rather than VRAM usage. This is puzzling to me since it ends up being more expensive for LL, and can result in people going overboard and producing content with preferably lower-resolution meshes (though not always) that can still end up having ridiculously sized textures on them. (Yes, modern GPUs have more VRAM, but I will get back to this.)

2. With the above point in mind, LL partially encourages this behavior by having a single UV channel, which results in people pre-baking lighting into the diffuse textures they want to look pretty, so they use huge resolutions. To avoid this, modern engines use tileable, procedural, and conventional texture solutions in the first UV channel, and a secondary UV channel for lower-resolution lightmapped data when it's needed. Though you increase the texture calls with this approach, you are still streaming and loading less data into the GPU, and if people get clever with their lightmapping techniques or texture tiling methods, you can still optimize away some of the additional texture calls.

3. Alright, getting to VRAM usage and people with thicc boi HBM and GDDR6 video cards. Yes, modern GPUs can handle more textures; however, by having to stream or decompress all these damn things, especially if you are sim hopping or people are loading in, you are introducing a bottleneck into the render pipeline. The textures first have to be downloaded, decompressed, and then sent to the graphics processor. That is a lot of waiting and intermediary steps, and the juicier your textures are, the longer you end up having to wait. Additionally, there has to be some kind of scheduling or polling system that checks for, or notifies the client about, new content to load, which also slows crap down. Now, before all the smart asses with fast internet connections attack me: every draw call, every intermediary step, and every texture that has to load is going to have an impact on frame time, and render engines really don't give a rat's ass if you have gigabit internet when they need to go and fetch something, because every millisecond or fraction of a millisecond counts. Don't believe me? Go load up Rage and see how megatextures turned out in their first incarnation, and now imagine that for a more dynamic environment.

4. We need to move to a fully deferred lighting engine and ditch this legacy ***** once and for all. There is so much prebaked bull***** in this game that tries to emulate richer and more complex lighting environments, and it drives me crazy because everybody wants to bake that ***** into 1k diffuse maps or slap alphas all over the place. What does a deferred lighting system do? It allows you to have ass loads more light sources, especially static ones (which could totally be implemented alongside the dynamic ones), that get calculated after your geometry data is loaded for the shaders. This lets you care only about what geometry is on screen and do only the lighting calculations necessary for what you, the player, would actually see; the engine can ignore all the other bull*****. From what I could understand about the current engine, SL kind of does this already, but it's half-baked. Going fully deferred would also dramatically reduce dev time, because you wouldn't be maintaining more or less two render engines.

5. LL needs to not be afraid of *****ing breaking some functionality. Day and night cycle and environmental effects? ***** it. Focus on one thing and do it well. Looking at this code is insane. I have sat down with notebooks and *****ing dependency chart graphing tools, and I still can't fully figure out where this *****ing mess starts or ends. Get us a stripped-down client with just a simple skybox, some geometry, and some textures to put on that geometry. Build up your engine from there.

6. Get rid of sculpts and figure out a way to convert the preexisting ones to mesh. If someone has managed to convert prims, which I suspect are some freaky BSP thing, to mesh, then there should be a way to retroactively convert all those sculpt maps to mesh - go grab Moy or something. It will save storage on the servers and reduce the amount of insanity the client has to handle. You could then merge all that crap into the existing system that handles conventional mesh and streamline the engine a bit: less ***** to maintain and worry about, and easier to improve.

7. Decals and nodes. Now, I can see why lighting and particles are attached to in-world objects, for obvious reasons like script control, dynamic stuff, etc. However, especially in the case of lighting, having this be the only option is *****ing dumb. Let's break down what it takes for a player to place a static light in a scene:

  • Hmm I want a light source here.
  • I need to rez an object: which contains positional data, likely color and texture data (because I'm lazy or have no idea what I'm doing), and data specifying who it belongs to and permissions. Note: we have several vectors and possibly megabytes worth of array data here for something that might not even be visible. This is insane.
  • I now need to choose what kind of light I will have, projector or standard, and all of its properties. More vectors and a potential array.

Now I ask: if the person owns the sim or has some kind of elevated land-editing permission, why do they need to waste so much data on a static object? All they need is position, permissions and ownership, and the properties of the light source. The same thing applies to particle effects or decals, which are usually implemented as alphas on prims that waste unnecessary geometry and texture calls. LL could save a great deal of processing and bandwidth, and boost performance, by implementing some kind of empties or node system for things like this. This could also have added benefits down the line for things like reflection map capture sources or cubemaps, as well as simple in-world scripts and sensors that don't need geometry.
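Some quick, purely illustrative arithmetic backs up both size complaints in this post. None of these figures are SL's actual wire or storage formats; the field sizes are plausible guesses for illustration only:

```python
# Back-of-the-envelope byte counts for the cost arguments above. All figures
# are illustrative, not SL's actual wire or storage formats.

# A 1024x1024 RGBA texture, uncompressed in VRAM:
texture_bytes = 1024 * 1024 * 4                        # 4,194,304 (~4 MB)

# A 10,000-triangle mesh: ~5,000 vertices (position + normal + UV as floats)
# plus 3 x 16-bit indices per triangle:
mesh_bytes = 5_000 * (3 + 3 + 2) * 4 + 10_000 * 3 * 2  # 220,000 (~0.2 MB)

# A full prim rezzed just to hold a light: transform, color, texture ref,
# owner/permissions, plus light parameters (rough guesses at field sizes)...
prim_light_bytes = 3 * 12 + 16 + 16 + 32 + 6 * 4       # ~124 bytes of metadata
                                                       # ...plus its geometry + texture
# A bare "light node": position, owner, and light properties only:
node_light_bytes = 12 + 16 + 6 * 4                     # 52 bytes

print(texture_bytes // mesh_bytes)                     # the texture is ~19x the mesh
```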

I am going to stop here because I have stomach flu or something and need to rest, but I could continue. Disclaimer: I am not an expert and have limited graphics programming experience. In fact, I am a huge nooby, and my *****ing C is rusty as *****. However, I may have had the pleasure of working for a certain company heavily involved in games and graphics for a year very, very recently, and perhaps witnessed how development, testing, and optimization worked for applications ranging from games to 3D suites to self-driving cars. Although I commend Linden Lab for the amazing work they have done and how amazing this crazy-ass game is, the rendering system needs some serious love.

  1. Really map the thing out. - Figure out how it works
  2. Cut down as much unnecessary crap as possible and streamline. - Make your life easier
  3. Begin building from there. - Start fixing and improving it.

(Side grumbles about possible opencl or gpu accelerated particles and multi threading)

 

Edited by Biggay Koray
  • Like 1
  • Thanks 1

1 hour ago, Biggay Koray said:

what does a deferred lighting system do? Well it allows you to have ass loads more lighting sources

Give SL's render engine a lighting boost and people will still favor the content we have now simply because they have no idea how to effectively light anything. Lighting in SL is either massively neglected (to the point people make things full bright - because it's dark), used so badly it's painful to look at, or abused to make up for cube map deficiencies.

1 hour ago, Biggay Koray said:

LL needs to not be afraid of *****ing breaking

Backward compatibility is not something LL is going to compromise on, at least up until such time as they decide maintaining something is no longer feasible.

Sculpts could easily be converted to regular mesh; the blocking issue is LI accounting. And while most sculpts would come out fine, there are edge cases (like giant off-region landscaping) that won't. SL is a rat's nest with another edge-case gotcha around every corner.

 

SL without the legacy content or design decisions .. that's supposed to be Sansar.

9 hours ago, CoffeeDujour said:

Give SL's render engine a lighting boost and people will still favor the content we have now simply because they have no idea how to effectively light anything. Lighting in SL is either massively neglected (to the point people make things full bright - because it's dark), used so badly it's painful to look at, or abused to make up for cube map deficiencies.

Backward compatibility is not something LL are going to compromise on, at least up until such time as they decide maintaining something is no longer feasible.

Sculpts could easily be converted to regular mesh; the blocking issue is LI accounting. And while most sculpts would come out fine, there are edge cases (like giant off-region landscaping) that won't. SL is a rat's nest with another edge-case gotcha around every corner.

 

SL without the legacy content or design decisions .. that's supposed to be Sansar.

I am reinvigorated from sleeping and am ready to spit some fire about this here rendering engine. 
 

Alright, first point there. We already have a deferred lighting engine, kind of, sort of, with advanced lighting.

  • Full bright (or shadeless) can work in deferred lighting systems just *****ing fine, and already does with advanced lighting on, which, as previously stated, is a franken-hybrid deferred system. Nearly every major game engine has been doing this just fine for years; see unlit shaders in Unreal and Unity, or light-path shadeless materials in Blender.

 

  • Traditionally, full bright materials are usually considered "emissive" without lighting other objects and appear unshaded. This can either be done via a light path check using a ray and the current camera position before shaders are calculated, or if you want to get really lazy, by making the object emissive like a standard glow object and cranking down the funky bias and fuzzy glow effect around it. If you want to get really basic, have a default fallback shader that is unlit and just applies a uniform amount of light to every pixel that is visible in the current frame on a full bright object. All of this could be implemented without breaking the current full bright content in the game and likely is to some degree already - would need to look at the source some more. Again, this is one of the reasons why fully deferred systems can be highly advantageous as they can be very scalable, as after your geometry fetch, the amount of shaders and data you need to fetch and move can be highly attenuated and threaded accordingly.

 

  • Don't give me this argument that people are too dumb to figure out how to light a scene. If I can explain to a biology major who has never worked with lighting nodes before, let alone a game engine, what, when, and how lighting nodes are used in UE4, the same can apply to Second Life. A big reason Second Life's lighting and scene setup is so horrible is the painful user experience and the bad practices that come from using and interacting with the game. People simply don't know what to learn or don't fully comprehend the effects of their actions, and as much as I love Torley and his cheerful videos, this game needs much better documentation; I think the UE4 dev docs would be a good place to start for inspiration. Additionally, a stronger emphasis should be placed on rewarding good content creation practices and explaining in-world content creation and manipulation in as many ways as possible. Not all people coming to this game are highly technical or possess computer-savvy backgrounds, and they need context. (Don't get me *****ing started on coders or engineers who can't write instructions for regular human beings or add some *****ing pictures. The number of amazing CS students and engineers I have met that can't make a pitch to a VC or explain their ***** to a regular layman is maddening. The budding cognitive scientist in me is screaming: embrace multi-aptness and elegance.)
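The unlit fallback described in the bullets above fits naturally into a deferred pipeline: tag full-bright pixels in the G-buffer and skip the light accumulation loop for them. A toy sketch of that idea, with a hypothetical data layout that is nothing from the actual viewer:

```python
# Sketch of a deferred lighting pass with a full-bright fallback: geometry is
# first rasterized into a G-buffer (albedo + shading terms + an "unlit" flag
# per pixel), then lights are accumulated only for lit pixels, while
# full-bright pixels pass their albedo straight through.

def light_pass(gbuffer, lights):
    """gbuffer: list of per-pixel dicts with 'albedo', 'n_dot_l', 'unlit'."""
    frame = []
    for px in gbuffer:
        if px["unlit"]:                     # full bright: skip lighting entirely
            frame.append(px["albedo"])
            continue
        lit = 0.0
        for light in lights:                # accumulate every light's contribution
            lit += light["intensity"] * max(0.0, px["n_dot_l"])
        frame.append(min(1.0, px["albedo"] * lit))
    return frame

gbuf = [{"albedo": 0.8, "n_dot_l": 1.0, "unlit": False},
        {"albedo": 0.8, "n_dot_l": 1.0, "unlit": True}]
print(light_pass(gbuf, [{"intensity": 0.5}]))   # [0.4, 0.8]
```

The win is that lighting cost scales with visible pixels times lights, not with scene geometry, which is exactly why deferred systems tolerate many more light sources.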

 

Phew. Alright, backwards compatibility and sculpts.

Don't give me this horse *****. It is costing the company more money in server and bandwidth upkeep. Change land impact and innovate a little. Hell, you are in California: grab some interns and pay them in pizza, or grab some bored 80s hot shots who were pushed out over their age. (I literally got a job at a quantum computing lab once by breaking into it, walking into the director's office, and asking for a position; then my mother began dying, I was stressed as ***** at school, and I tried to kill myself. Have some courage. If someone tells you it's impossible or has too many gotchas, ask them why, and keep asking why.)

 Here are some ideas:

  • Do some data mining and get a sense of how complex and texture-heavy something should be relative to its size in the game world, both from a hardware and from a desired-fidelity standpoint.
  • Punish, or "encourage", content creators to adhere to this (favoured advertisement and promotion, reduced land impact, or some exclusive features and bells and whistles).
  • Look into dynamic cache management that, alongside the current mipmapping techniques, generates and stores lower-res assets locally and pulls them first, so the engine has to work a little less.
  • Geometry instancing! (Not sure whether this exists already, but it could be huge for sim surrounds and highly tileable, repeating geometry.)
  • Embrace heterogeneous computing. This game is data-heavy and dynamic as hell. Some form of virtual memory system, or the ability to swap ***** more efficiently back and forth between the CPU and GPU, or to use both to varying degrees, could make a big difference. If I were to make a case for Vulkan, this would be one of the big reasons: OpenCL and OpenCL-like mixed computing can be done much more readily within it. Depending on load, what's going on in the sim, and hardware availability, preference could be given to whichever processor is fastest, most appropriate for the task, or simply available.
  • FOR THE LOVE OF GOD, SOME BETTER POST PROCESSING. THROW SOME SHARPENING FILTERS IN THERE, OR A FEW LIGHTWEIGHT AND SCALABLE IMAGE-PROCESSING HEURISTICS. (If people can click a button to make something look sharper or better, they will likely do that instead of adding more geometry or more texture resolution, especially if they are punished for doing so.)
  • Better and more procedural content. Take a look at what nodes can do in UE4 or Substance Designer. (This can extend beyond textures and can be combined with ***** like geometry instancing.)
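The sharpening filter mentioned above is genuinely cheap. Here's a toy C++ sketch of a classic 3x3 sharpen kernel (the unsharp-style [0,-1,0; -1,5,-1; 0,-1,0]) on a tiny grayscale buffer; in practice this would run as a fragment shader over the frame buffer, and the image size and helper names here are made up for illustration:

```cpp
#include <array>
#include <algorithm>
#include <cassert>

// Toy 3x3 sharpen pass on a tiny grayscale image. The real thing
// would be a one-tap-per-neighbour post-process shader; this shows
// only the math.
constexpr int W = 4, H = 4;
using Image = std::array<float, W * H>;

float at(const Image& img, int x, int y) {
    // Clamp at the borders so edge pixels still have neighbours.
    x = std::clamp(x, 0, W - 1);
    y = std::clamp(y, 0, H - 1);
    return img[y * W + x];
}

Image sharpen(const Image& src) {
    Image dst{};
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // Centre weighted 5, four neighbours weighted -1:
            // flat regions pass through, edges get boosted.
            float v = 5.0f * at(src, x, y)
                    - at(src, x - 1, y) - at(src, x + 1, y)
                    - at(src, x, y - 1) - at(src, x, y + 1);
            dst[y * W + x] = std::clamp(v, 0.0f, 1.0f);
        }
    }
    return dst;
}
```

Note the kernel weights sum to 1, which is why uniform areas are left untouched; only local contrast is amplified.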

 

 

Edited by Biggay Koray


The client is open source (for the most part), and someone reverse engineered part of the simulator code years ago (that's how we got OpenSim), so go build it yourself.

Then come back and tell us how it went - and no, you don't get to start from scratch.

Edited by Solar Legion

Realize how tiny the SL dev team is. If you go to Server User Group, you'll meet them all eventually. Simon Linden seems to be the only one who works on hard server side problems, and he's way overworked. UE4 probably has 20x as many developers.

OpenSim is down to one developer, the last time I checked.

When LL gives up on the Sansar resource drain and reassigns the 30 or so people over there to working on SL, SL might improve. We, as SL customers, pay for that debacle.

5 hours ago, Biggay Koray said:

I am reinvigorated from sleeping and am ready to spit some fire about this here rendering engine. 

Or, you know... do something productive, brush up on the ole C++, and have at it.

5 hours ago, Biggay Koray said:
  • Don't give me this argument that people are too dumb to figure out how to light a scene.

Sadly, this is the state of play. Have you not been to mainland?

5 hours ago, Biggay Koray said:
  • If I can explain to a biology major who has never worked with lighting nodes before, let alone a game engine, what, when, and how lighting nodes are used in UE4, the same can apply to Second Life.

I wish you the best of luck, when can we expect your classes to begin?

5 hours ago, Biggay Koray said:
  • Dont give me this horse *****. It is costing their company more money with server and bandwidth upkeep costs. 

That's not what I said. LI cost calculations are arbitrary and exist only as a way to tie usage to land to billing. Once you create a mesh from a sculpt, it stops being a sculpt and falls under different LI calculations. If you make an exception for converts, people will take that as a route to evade LI.
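For context, mesh land impact is driven by the highest of several per-object weights, which is exactly why a sculpt-to-mesh convert lands in a different accounting scheme. A rough C++ sketch of that selection (the real weight formulas are considerably more involved than this, and the function name is made up):

```cpp
#include <algorithm>
#include <cmath>
#include <cassert>

// Rough sketch of mesh land impact: the final LI is the highest of
// the download (streaming), physics, and server weights, rounded,
// with a small floor so nothing costs zero. The individual weight
// formulas are far more detailed in practice.
int landImpact(double downloadWeight, double physicsWeight,
               double serverWeight) {
    double w = std::max({downloadWeight, physicsWeight, serverWeight});
    return static_cast<int>(std::lround(std::max(0.5, w)));
}
```

The "max of the weights" shape is the key point: optimizing only one weight doesn't lower LI if another weight dominates.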

 Seriously, none of what you have posted is news. The viewer is open source; have at it, come to the TPV meetings, get a contribution agreement on file, and submit your work back to LL.

