
Polymath Snuggler

Resident
  • Posts

    257
  • Joined

  • Last visited

Reputation

203 Excellent

2 Followers


  1. Thank you for the JIRA link, Lucia. Commenting on that JIRA made me realize that my test wasn't entirely thorough here: as you will note in the code snippet above, it does not contain a "collision" event, only a collision_start event, as that was the topic of this thread. If you add "collision" and "collision_end" event handlers to the "Bonk" prim as well and do some tests, a new picture of what's happening emerges.

If you drop a physical cube on top of a prim with a collision-detect event set up in it, the collision_start event triggers on impact, collision continues to issue updates until the physics engine brings the cube to rest, and then the collision_end event happens. Working as intended.

Avatars w/o a collision HUD do the same thing. When the avatar hits the prim, collision_start triggers, the collision event continues until the avatar is stationary and at rest, then collision_end triggers. Same as the prim. Working as intended.

Avatars WITH a collision HUD behave differently. When an avatar WITH a HUD hits the prim, collision_start triggers, then collision events continue to trigger 8 times per second, no matter what the avatar is doing. The avatar can be perfectly still and motionless for hours, and it will still be triggering "collision" 8 times per second. This happens both in the HUD collision event and in the "Bonk" prim. Both scripts stream updates at 8 per second from their "collision" event until the avatar vacates the "Bonk" prim, either by walking off or flying, whereupon the collision_end event triggers. This may be closer to how the original poster wanted collision events to be handled, but it is not what the system is designed to do.

The reason collision_start and collision_end stop "randomly happening" while the avatar moves around on top of the surface is that it's trapped in an endless (erroneous) collision loop, from which there is no escape.

TL;DR: Avatar collision HUDs don't "suck up all the sim time" and prevent new events from happening; they throw the ground prim into the same error state that they're in.
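For anyone who wants to reproduce this ~ a minimal sketch of the extended "Bonk" prim script with all three handlers. The exact body is my reconstruction from the test described above, not a paste of the original:

// Extended "Bonk" prim: logs the full collision lifecycle to the owner.
default
{
    collision_start(integer num)
    {
        llOwnerSay("Bonk START " + llGetTimestamp());
    }

    collision(integer num)
    {
        // With a collision-HUD avatar standing on the prim, this fires
        // ~8x per second indefinitely, even while the avatar is motionless.
        llOwnerSay("Bonk MID " + llGetTimestamp());
    }

    collision_end(integer num)
    {
        llOwnerSay("Bonk END " + llGetTimestamp());
    }
}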
  2. So ~ this has been nagging at the back of my mind now for quite a bit... I'd proved to myself beyond a doubt, before even making my first post here, that it wasn't a Firestorm-specific thing. I'd gotten a large number of collision events on the SL Release Viewer, but I did notice a rather stark disparity between wearing an AO HUD and not wearing an AO HUD, and I could not figure out why that would be the case. This led me down a bit of a rabbit hole ~ so I'll try and keep it fairly concise, but this is a bit of a story.

All tests for this were done on Second Life Release 6.6.0.571939 (64bit), so that completely removes Firestorm from the equation. However, it'll come back into this later ~

I made the following code for the ground prim; it just says "Bonk" anytime someone walks on it. Putting this code in a large flat prim and standing on it creates what I call a "Bonk" prim, as it goes "Bonk" quite frequently when you walk around on top of it.

// Minimal "Bonk" prim: announce every collision start.
default
{
    state_entry() { }

    collision_start(integer bonk)
    {
        llOwnerSay("Bonk " + llGetTimestamp());
    }
}

Interestingly enough, when walking around on this prim without a HUD-based AO (i.e. a ZHAO-style Animation Override HUD), I got tons of random collision events (as you can see in a few posts above). However, when I wore an AO HUD, the prim I was walking on no longer gave me random "Bonk" messages as I was walking over it. This was the mystery that has been bothering me... I solved it today by making the following script, putting it in a basic prim, and wearing it as a HUD:

// Collision HUD: display hover text for every stage of the collision lifecycle.
vector UNIT_VEC = <1.0, 1.0, 1.0>;

default
{
    state_entry() { }

    collision_start(integer hudBonk)
    {
        llSetText("HUD BONK Start " + llGetTimestamp(), UNIT_VEC, 1.0);
    }

    collision(integer hudBonk)
    {
        llSetText("HUD BONK MID " + llGetTimestamp(), UNIT_VEC, 1.0);
    }

    collision_end(integer hudBonk)
    {
        llSetText("HUD BONK END " + llGetTimestamp(), UNIT_VEC, 1.0);
    }
}

This is what it looks like while running: https://gyazo.com/a756c77d1bdd7ed19e64e63d574d9afa

Note how the SetText on the prim is constantly being triggered by the HUD collision code. It's updating every fraction of a second. While wearing a HUD with the above code running in it, if you walk on the prim with the "Bonk" code in it, events cease to trigger. I am assuming that the HUD is monopolizing every single collision cycle the server is willing to give that avatar, and thus the prim on the ground can't manage to trigger an event, except for very large collisions, as it seems not all collision events are treated equally. Large "Bonks" get event priority over tons of small ones.

"But Liz, what does this have to do with Animation Overrides?" Let's have a look inside ZHAO code (ZHAO-II-Core, lines 1038-1048):

collision_start( integer _num )
{
    checkAndOverride();
}

collision( integer _num )
{
//    checkAndOverride();
}

collision_end( integer _num )
{
    checkAndOverride();
}

This is the main update loop for a ZHAO (and ZHAO-derivative) AO. This architecture was used widely by most AOs prior to the AO scripting updates, and as such many are still widely in use today. So ZHAO AOs flood the server with constant collision update checks too... This is all provable with code ~

Now back to the problem at hand... This brings us back to Firestorm. Firestorm has a built-in AO that does not saturate the server with constant collision event update requests. This leaves the server with cycle time for that avatar to process other collision events, such as our "Bonk" prim.

Not all Firestorm users use the built-in AO, but I'm willing to bet that enough of them do that it becomes "statistically relevant": on average, Firestorm users are less likely to be using ancient ZHAO-style AOs. So ~ statistically, it's likely that Firestorm users aren't flooding out their collision events as often as non-Firestorm users. Sooo ~ I'm guessing that Firestorm users "cause more random collision events" on "Bonk" prims because the prim can get enough server cycle time to process its collision events, whereas it's drowned out for anyone wearing an old HUD-based AO. Ironically ~ this "problem" of random collision event triggers exists because Firestorm fixed something... Anyhow !~! That's all from me ~ Take Care! - Liz
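(A hedged aside, not something from my tests above: if a HUD really must hook collision events, LSL's llMinEventDelay can at least cap how often the script's handlers run. Whether that also frees up the avatar's server-side collision cycles for ground prims is purely my assumption ~ I have not verified it.)

// Hypothetical throttled collision HUD ~ a sketch, not a verified fix.
default
{
    state_entry()
    {
        llMinEventDelay(0.5); // deliver at most ~2 events per second to this script
    }

    collision_start(integer num)
    {
        llSetText("HUD BONK " + llGetTimestamp(), <1.0, 1.0, 1.0>, 1.0);
    }
}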
  3. Except collision_start has always worked this way, for as long as I can remember. Sometimes it just generates hit events constantly for things walking on the prim. Not hypothetical at all. https://gyazo.com/83519adcd8f031c818e2dee643efb6db It's simply not a "viewer thing".
  4. Collision events trigger exactly the same way regardless of what viewer the user is using. You often get two collision_start triggers as an avatar wanders onto a prim: once when it hits the edge of the prim, then a second when it lands on the top surface. https://gyazo.com/39451e795fb1e018a203abe0cb23cab8 If the prim is close to the sim terrain and the terrain is uneven, then the avatar may collide with the terrain, which jitters them up and down based on the terrain height rather than the prim's bounding box, also making them 'tap' the top of the detection prim frequently. This, again ~ doesn't have anything to do with the viewer that the person triggering the collision_start event is on. The SL Standard Viewer triggers just as many "random" collision events as Firestorm does. It's always done this, and quite frankly I think this is an excellent example of how innate biases get repeatedly "confirmed" when no one bothers to check the facts. If you dislike X, suddenly everything that goes wrong is X's fault. Doesn't matter whether the boogeyman is Firestorm, the Federal Government ~ or "thems aliens that's always watching us". No one is saying you have to love Firestorm, but please keep your confirmation bias out of this.
  5. It is an absolutely ancient bug as well; my 2018 bug report more or less duplicates https://jira.secondlife.com/browse/BUG-5591, which was filed back in 2014.
  6. I believe Rider took a look at it and said what he found gave him nightmares, and he gave up. I'm not sure it'll ever be fixed.
  7. Creating an entire piece of secondary external software is a largely obtuse solution to an easily solvable problem. The notion of "Local Mesh" has been discussed, and even partially developed, by a number of Third Party Viewer devs to allow people to just view a DAE file inside SL without actively uploading it. This would allow for the addition of textures as well. Entire external software packages are largely a silly idea when it's not entirely difficult to just add a temporary upload feature. Yes, the content creation pipeline needs work, but not via a separate software package.
  8. I have been advocating for doing a lighting and reflections overhaul before any new content-creation-pipeline graphics changes ~ loudly, stubbornly, even belligerently ~ for quite some time. It's been met with a deafening silence from the decision makers at the Lab, the only break in which was Vir, after much indignant protestation on my part, asking in what I perceived to be a somewhat irritated tone, "what would updating reflections [[ and by extension lighting ]] actually help with", while we were discussing the topic of PBR at the most recent Content Creator Meeting... or something to that general effect. I don't remember the exchange perfectly word for word, so my recounting of it might be somewhat flawed... Either way, my advocacy for such a change has not waned.

As far as I'm concerned, there is not much to be lost from creating a new "Experimental High Definition" render mode, with a fairly short list of new features actually:

  • Better sky interpolation and an actual reflection model that isn't "Environment Shiny overwrites Specular Color".

  • Lumen (LLumen?) based lighting values. Though we're still in SL, so there will have to be some auto-convert option from 0-1.0 intensity into a lumen scale ~ but add a "Lumens" spinner next to the Intensity dial on the UI that spins with the Intensity slider. This might force us to allow greater than 1.0 intensity, but I'm pretty sure that's okay ~ 'cause in the present system, if I put two "1.0" intensity lights right next to each other, I get a nice 2.0-intensity light... We can fiddle with it; this is largely just a UI change. The nice part of this is we get to just make up a lumen value for 1.0 intensity, so that we can implement a proper inverse-square lighting falloff and have things not look too poo. (Rough worked numbers at the end of this post.)

  • A user-adjustable "Default World Glossy Value" that fits as a stand-in for all assets that are diffuse only. Does an asset lack a glossiness parameter? That's fine ~ make one up. I nominate 0.9 roughness, aka 0.1 glossy, for the world default value. That's mostly matte but will reflect a small amount of light, and it'll do Fresnel things, which is cool. Most fabric, unpolished wood, asphalt, brick, unpolished stone, and boring paint will render out quite fine with a 0.1 glossy value.

  • Auto camera exposure (like Joe highlighted in the Cyberpunk 2077 walkthrough).

  • Fresnel reflections on surfaces where the lighting angle requires it (which we STILL don't have, despite the current SL renderer being littered with code that has this notion in mind).

  • Possibly some sort of reflection probe, or a 360-snapshot cubemap "volume prim" that can be plopped down in an area as an override for skymap reflections. (People already make these sorts of things with kind-of hacked-together methods using 6 projection lamps, one from each side.)

Doing this removes the expectation on the part of the users that "yes, your stuff will look the same" ~ which was the downfall of EEP. No more of that; new model, it's gonna look different!! It SHOULD look different!! If you don't like it? No big deal ~ just turn it off and use regular ALM. Call it EEP Try #2 ~ or something, who knows... though that might leave a bad taste in people's mouths ~

None of this stuff is OpenGL or Vulkan or Metal or DirectX specific. It's a short list of fixes, and it lays the groundwork for everything else. It's just simple math additions to the existing environment. (Okay, not that simple... blending between cubemap spaces is actually kinda annoying and difficult to implement ~ but that's the only 'actually hard thing' that requires a change to the actual SL environment assets. Everything else is low-hanging fruit.)

This lays the groundwork for PBR. It should be done first. It's not a huge massive project; it's a small bite-size thing that the Lindens can toss a few hundred man-hours at and (hopefully?) get results. Or Beq could probably do most of it in her rare moments of spare time in... like, a couple months ~ 'cause she's awesome like that. It completely boggles my mind that this is not the 'next project'.

EEP 1.0: Make sure we get parity for everyone who wants it.
EEP 2.0: Blow people away with cool new graphics stuff. Woooo!

EEP 2.0 will be required for viewing PBR assets. "But Liz, what about all the people who don't like it ~ then we have a new asset type that can't..." ~~ Yeah, yeah, I know. That's why I've also been demanding that any implementation of PBR include a "Diffuse-only Baked Lighting Slot" that the content creator will fill with a DIFFUSE texture fallback for their PBR asset.

Why isn't this the roadmap? It's so obvious to me this is how you do this...
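As promised in the lumens bullet above ~ rough worked numbers, with the caveat that the 800 lm anchor for 1.0 intensity is a value I'm making up purely for illustration (roughly a 60 W incandescent bulb):

\[ \Phi(1.0\ \text{intensity}) := 800\ \text{lm}, \qquad E(d) = \frac{\Phi}{4\pi d^2} \]

which gives \( E(1\,\text{m}) \approx 63.7\ \text{lux} \) and \( E(2\,\text{m}) \approx 15.9\ \text{lux} \) ~ an honest inverse-square falloff, with intensities above 1.0 simply mapping to more lumens.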
  9. That's the thing, Joe ~ I've been fairly mute in mentioning it myself until now, but now that the Lindens are loudly tooting their horns about "we're going to be implementing PBR" ~ seemingly without any understanding of what that actually entails ~ I am starting to get increasingly pedantic about what PBR means, and Beq is as well.

PBR by definition isn't just 'a set of maps' and 'a reflection model' and 'light is calculated in lumens'. When someone sticks a "PBR" label on something, they are promising to the world that:

"If I feed this system the PBR-standardized values for glossy varnished wood, and shine 800 lumens of soft yellow light on it, it will look as close as it possibly can to a wooden surface with a 60-watt bulb shining on it in Real Life."

"If I feed the system the PBR-standardized values for a polished gold material in 20,000 lumens per square meter, to simulate broad sunlight, and I snap a screenshot of that render with a shutter speed, ISO, etc. to match a real-life camera, and I compare it to an actual photo taken with a camera at that rating ~ of a gold ring, in broad sunlight, where the light meter for the photo registers 20,000 lumens ~ it's going to match up."

I'll be able to composite those two images together, and it has a prayer of looking believable.

PBR, at its core, is an equation. I give you a known data value for a material, I add it to a known render environment, and I get a pretty picture that matches how it should look in the real world. The promise is ~ that in "this PBR world we've created":

2 (accurate PBR texture data) + 2 (the accurate rendering environment) == 4 (the correct pretty picture)

The promise of PBR is that 2 + 2 = 4, it always will, and that's what makes it PBR. If a system has the "PBR" label, it means that if I take these values and cram them into the input of this lighting model, I'm going to get known results. Rend3 is a PBR system; if you feed it correct values and correct lighting information, it will produce those known results.

However, when it comes to SL, we don't have light measured in lumens, and the texture input data you're using is back-converted Diffuse, Normal, and Spec data that is littered with pre-baked lighting information ((which, by the way, to echo Beq's sentiment ~ I am so so so very impressed that you were able to do. The adaptability you display is continually astounding, both with your pathfinding project and this project.)). Shining SL sun and SL lighting info on it ~ which are entirely guesswork numbers ~ then looking at what the PBR system of Rend3 spits out when you feed it nonsense inputs can be fun!! ~~ Beautiful, even! But it's not "implementing PBR for Second Life". We don't have a reflection system to reflect the surroundings, we don't have light measured in lumens; all we're doing is feeding the Rend3 PBR systems a "mystery meat" data dump and seeing what it does with it. It lacks the promise of equivalence, which by its very definition means it's not PBR.

I would never bother to correct you on a detail so minute were it not for the fact that the Lindens are now looking and saying "well, Joe implemented PBR for Second Life, so we can too", when the fundamental promise that the data and math of PBR offers ~ "wood on a sunny day RL is going to look like wood on a sunny day in SL" ~ is entirely absent.

The way the Lindens are talking about implementing "PBR" right now is basically "yes, what you see in the Substance Painter PBR lighting environment and what you see in SL will be totally different, but we're going to call it 'PBR' anyways." That breaks the fundamental mathematical promise of PBR. The Lindens offering to 'implement PBR without doing a lighting and reflections overhaul' is tantamount to saying:

2 + Y = N

where the value of Y is somewhere between 0 and infinity, and N might be somewhere near 4 if you're lucky ~ but it might be 6 or 15 or 0.002, 'cause the environment is nonstandardized, so who @#)% knows? They don't get to call this "PBR"; it breaks the fundamental promise of PBR, and calling it such is misleading at best, and false advertising and prosecutable in court at worst.

What you're doing is the opposite ~ you have the Rend3 environment, so:

X + 2 = N

At least your equation is correct, but what X is ~ is entirely a mystery. You can feed actual PBR data (2) into X and get 4, but by definition, since you're using SL data as inputs, you don't know when it will actually be 2, so the promise is broken as well. Neither model offers the required solidity of 2 + 2 = 4. If 2 + 2 sometimes equals 5, then that's not PBR, and I'm getting somewhat exhausted trying to explain that to the Lindens.

I'm terribly sorry to bring that exhausted irritation to your doorstep, as I really do admire the work you're doing. It's fantastic! It looks amazing! You're cramming SL data through a Rube Goldberg device you built so that on the other end it produces pretty pictures. The Rube Goldberg device in and of itself is fascinating; the fact that it makes pretty pictures is frankly astounding. It's a magic trick of the highest order. But it's not PBR. The fact that you keep calling it that makes my life difficult when I have to try and explain to the Lindens that when they're talking about "doing half of a PBR project", that's not PBR either.
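(To be concrete about "PBR is an equation" ~ the equation underneath all of this is the standard rendering equation, with PBR standardizing the material term f_r and the lighting term L_i in physical units. A textbook sketch, not anything SL-specific:)

\[ L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (n \cdot \omega_i)\, d\omega_i \]

Fix f_r (accurate material data) and L_i (a calibrated lighting environment), and L_o ~ the picture ~ is fully determined. That determinism is the 2 + 2 = 4 promise; leave either input uncalibrated and the output is anyone's guess.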
  10. Aww... Now what am I going to do with my Wednesday mornings?
  11. I'm using Gmail, and as of a day ago it was working just fine. So I don't think it's a Gmail problem.
  12. Do each of these "Child Users" come with their own camera? Or is all of this data fed into the MVP matrix for the single user camera?
  13. Yes ~ rendering distant elements with a secondary camera, with different near/far clip planes and a lower FPS, is how most games handle composing massive vistas. This helps with rounding errors as well as performance, since it eliminates huge Z-buffer distances. The Z axis in this case is relative to the camera ~ it represents the overall depth of the scene ~ not to be confused with Z height in the SL world, which is how high up things are, and which also causes jitter due to the fact that things are rendered in world-coordinate space ~ these are compounding errors... but yeah... Like I said ~ massive changes to SL render code would be required ~ but technically all of this "is possible". The question is (as with all possible things): "would it actually materially improve anything?" I don't know the answer to that. SL is a very specific use case ~ and as animats has pointed out ~ it's not clear that rounding errors at these relatively short 1-3 km viewing distances would be material enough to cause notable jitter. I don't actually know. I'm not a graphics programmer ~ I just play one on TV.
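(To put math behind the rounding-error point ~ assuming a standard OpenGL-style perspective projection with near plane n and far plane f, normalized device depth as a function of eye-space distance d is:)

\[ z_{ndc}(d) = \frac{f+n}{f-n} - \frac{2fn}{(f-n)\,d}, \qquad \frac{dz_{ndc}}{dd} = \frac{2fn}{(f-n)\,d^2} \]

Depth resolution falls off with the square of distance, so almost all Z-buffer precision is spent near the near plane. Giving the distant pass its own camera, with its near plane pushed way out and a tighter far/near ratio, hands that pass a sane precision budget ~ which is exactly why the secondary-camera trick helps. Whether SL scenes at 1-3 km actually land on the bad end of that curve is, as I said, the open question.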
  14. Making image planes for entire sims, in lieu of actually displaying the content of those sims, simply does not work, due to the nature of such planar representations.

They don't parallax correctly ~ meaning if you had such a thing while, say, riding a motor vehicle across the mainland, the trees and houses up until the image plane would move correctly ~ then the trees on the image plane would not. They also don't handle vertical changes correctly ~ so if there is any Z-height difference, due to terrain, or say from a flying vehicle of some sort, or just a flying avatar, the illusion would break.

The thing is ~ the people who actually do want their draw distances turned up to 500+ meters are all the members of the SL community who are very interested in boating, flying aircraft, etc. You can't simply doodle their runway that's next to a mountain onto an image plane and expect them to be happy about it ~ these people care about realism and accuracy to the degree that they want to make sure the landing lights on their runway pulse at the correct number of flashes per minute to mimic their real-life counterparts.

They don't do lighting correctly. If I place a house on a hill on a horizon line while the sun is setting, it reflects light in a manner that tells someone ~ even a kilometer away ~ "there's a box there with a roof shape on it". If you replace that with an image of a house at some given time of day, it will necessarily look incorrect at pretty much every other time of day. Even if you try to mitigate this issue with, say, four different image sets ~ this still won't account for the differentiation in environment settings. Just using the library EEP settings "[NB] P-Haze" vs "Dynamic Richness" will yield totally different sun angles and color tones at the same "time of day". You simply cannot use baked lighting in a dynamically lit environment. It just doesn't work.

It's for the above reasons I didn't really take the "let's image-plane an entire sim" idea seriously ~ among the myriad of other (sort of proposed??) notions ~ and instead focused on other steps to improve rendering efficiency / calculation spaces in order to improve the SL user experience. Image-plane impostors don't work for anything besides foliage and other similarly constructed organic creations that have a central core with branched-out components ~ the moment you try to simulate anything with a vaguely solid form, the impostor breaks down catastrophically. There is a reason these are not used in modern-day game engines. Image planes won't keep SL relevant into the 2020's any more than pose-ball-based animations will.
  15. Yes ~ the "make sim surrounds on private islands a standardized feature" idea makes a lot of sense, I think ~ it's also the kind of small incremental change LL seems to be comfortable with. Viewing / impostoring adjacent sims / mainland sims is a bit less so ~ but as I said in my original reply, that's a very different ask from the notion of "make SL able to do AAA-type 'Big Worlds'", which implies a necessary full coordinate-space rework.