Everything posted by Frionil Fang

  1. Yes and no: textures on fully transparent faces are loaded (at least if the object is "seen"; not sure what would happen if it was completely hidden inside something else) and cached, and as long as the texture is still on the object, you can be relatively certain it'll stay loaded and ready for particle use. If you just cycle the textures like your example does, though, they will get cached, but since they're no longer being used after they get swapped out, they'll soon get unloaded. This means they'll have to be fetched again (this time from the cache), and there will still be a period of gray boxes instead of your desired particles, just a shorter one. The textures need to stay on the object to be more certain they're available for particles with as few unsightly loading delays as possible.
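      A minimal sketch of the idea, assuming two hypothetical inventory textures named "particle_a" and "particle_b": keep them applied to fully transparent faces of the emitter so they stay resident, and only switch which one the particle system references.

          // Sketch: hypothetical texture names; keep them on the prim, fully transparent,
          // so the viewer keeps them loaded and ready for particle use.
          list gTextures = ["particle_a", "particle_b"];
          integer gIndex;

          default
          {
              state_entry()
              {
                  integer i;
                  for (i = 0; i < llGetListLength(gTextures); ++i)
                      llSetLinkPrimitiveParamsFast(LINK_THIS,
                          [PRIM_TEXTURE, i, llList2String(gTextures, i), <1,1,0>, ZERO_VECTOR, 0.0,
                           PRIM_COLOR, i, <1,1,1>, 0.0]); // one texture per face, alpha 0
              }

              touch_start(integer n)
              {
                  // cycle which already-resident texture the particles use
                  llParticleSystem([PSYS_SRC_TEXTURE, llList2String(gTextures, gIndex),
                                    PSYS_PART_MAX_AGE, 2.0,
                                    PSYS_SRC_BURST_RATE, 0.2]);
                  gIndex = (gIndex + 1) % llGetListLength(gTextures);
              }
          }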
  2. The normal map is not "level": compared to the neutral normal-map-colored square at the bottom right it's visibly different in overall brightness/tone, so it will respond differently depending on the light direction; the entire surface is slanted, in other words. It looks like it was saved with an incorrect color profile or something along those lines. Normal maps should never use a color profile and should be stored as linear color, not sRGB. If I manually butcher it to ~0.82 gamma with a levels adjustment, it more or less matches the neutral color overall and retains its general shade while rotating the object, but that's a dirty, eyeballed hack job. Ideally you'd use normal maps that aren't flawed to begin with. Well, *ackshually*, (-1, -1, 1) is not a valid unit/normal map vector; the magnitude should be one, not the individual components. The correct normal vector would be (-1/√3, -1/√3, 1/√3), corresponding approximately to the RGB color (54, 54, 201), see the quick check below. SL does accept a non-normalized normal map, though, and you might get away with it without visible issues, since the values are normalized before use. That may be part of it: PBR material rendering compensates for negative scaling and rotation of the normal map, non-PBR does not. The only proper setting is a positive scale on both axes and 0 rotation, but a 180-degree rotation (or both axes negatively scaled) will at least be "correct" as in physically sensible, only inverted in direction.
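      A quick check of that encoding, assuming the usual 8-bit normal map mapping $c = \mathrm{round}\!\left(255 \cdot \frac{n+1}{2}\right)$:

          $\hat{n} = \frac{(-1,\,-1,\,1)}{\sqrt{(-1)^2 + (-1)^2 + 1^2}} = \left(-\tfrac{1}{\sqrt{3}},\; -\tfrac{1}{\sqrt{3}},\; \tfrac{1}{\sqrt{3}}\right), \qquad c \approx (54,\; 54,\; 201)$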
  3. You can actually drag a full-perm object out of a full-perm object directly inworld, but some kind of permission combination here blocks it (I tested multiple perms; only full-perm and -m+c+t could be rezzed directly from object inventory, -m+c-t and -m-c+t both failed). There's probably a design reason for this, but beats me. It's just how it works.
  4. There is only one lighting environment being rendered, even if there's multiple defined at different altitudes. They're just swapped in on demand, they don't "exist at the same time". Edit: additionally, things obviously have to be in draw distance to cast shadows, so a build at 1000 m when your draw distance is 128 m from the ground is not going to ever cast a shadow.
  5. Daily reminder that just because shadows require ALM/PBR rendering to be enabled, doesn't mean you have to enable shadows. Just turn them off to save performance and keep scenes that are not designed with shadows and interior lighting brighter.
  6. On non-PBR materials, you're not supposed to have negative scaling or rotation on the normal map, or else it'll be wrong. The rotation and scaling are not accounted for when computing the light direction, so in the best case (both X and Y scaling negative, or the normal map rotated 180 degrees) you get an inverted direction, and for anything else a non-physical direction that makes no sense with how light lands on the surface. If you use the "synchronize materials" checkbox as suggested, then you also cannot rotate or negatively scale the diffuse, because that will, as it says, apply the same negative scaling and rotation to the normal map.
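      A minimal scripted sketch, assuming a hypothetical inventory texture named "metal_normal": apply the Blinn-Phong normal map with positive repeats and zero rotation, the only combination the non-PBR lighting handles correctly.

          default
          {
              state_entry()
              {
                  llSetLinkPrimitiveParamsFast(LINK_THIS,
                      [PRIM_NORMAL, ALL_SIDES, "metal_normal", // hypothetical texture name
                       <2.0, 2.0, 0.0>,   // positive repeats only
                       ZERO_VECTOR,       // offsets are fine to use
                       0.0]);             // keep rotation at 0 (180 degrees would merely invert)
              }
          }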
  7. Just toggle PRIM_POINT_LIGHT. The projector parameters (texture, FOV, focus, ambient) are not affected by the point light call, so they remain the same if you change the light state, color, intensity or radius.
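      A minimal sketch: toggling only the point light leaves the projector texture, FOV, focus and ambiance (set in the build tools) untouched.

          integer gLit;

          default
          {
              touch_start(integer n)
              {
                  gLit = !gLit;
                  // on/off flag, color, intensity, radius, falloff; projector params are unaffected
                  llSetLinkPrimitiveParamsFast(LINK_THIS,
                      [PRIM_POINT_LIGHT, gLit, <1.0, 0.9, 0.8>, 1.0, 10.0, 0.75]);
              }
          }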
  8. I'm not going to be deleting my existing generic BP tileable texture sets and will be using them going forward, so being prevented from using them on new objects would be strange. Not to mention that I have little faith in LL actually ever implementing PBR extensions, so some "less realistic" materials that require specular colors that aren't derived from the base color can only be accomplished with BP.
  9. Commenting specifically on this part: if the PBR material has texture rotation, the normal map stops working correctly if applied directly on a BP material. The latter does not account for the rotation of the normal map, so the lighting becomes "wrong and unphysical" for any angle other than 0 or 180 degrees, and the 180 degree rotation inverts the apparent direction of the surface detail. PBR materials' normal map has correct lighting regardless of angle. BP materials also have the specular glossiness encoded in their normal map alpha channel, which is unused in PBR materials. If you were doing some run-time baking you might consider taking the roughness from ORM, inverting and using it as the BP glossiness channel, but they're not exact inverses.
  10. The UV maps between the models don't need to have any relation. Use Blender's "bake selected to active" mode and you can bake any maps from any kind of highpoly onto any kind of lowpoly. You may have to use the cage/extrusion and max ray distance parameters to ensure everything gets baked correctly, but you'd have to read up on the documentation and experiment yourself; I'm really not versed enough in the finer points. You can choose any kind of bake; for the demonstration I used diffuse without light contributions. The "low poly" is a default cube, the "high poly" is a default UV sphere with a noise texture, so they certainly don't have any matching UVs. Resulting texture on the cube itself: [screenshot]
  11. It's not a mystery, you can just test, you know! Firestorm non-PBR, plain blank textures with an alpha channel: [screenshot] LL Viewer, PBR material with blank textures + alpha channel: [screenshot] The green cube's end wall shows through the red cube's side wall just the same from the same viewpoint.
  12. The script never requests animation permission. Even if it's auto-granted for attachments, you still must request it. See https://wiki.secondlife.com/wiki/LlRequestPermissions for PERMISSION_TRIGGER_ANIMATION; you should request that in an attach event.
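      A minimal sketch of that pattern, with a hypothetical animation named "wave_anim" in the attachment's inventory:

          default
          {
              attach(key id)
              {
                  if (id != NULL_KEY) // only request when actually attached
                      llRequestPermissions(id, PERMISSION_TRIGGER_ANIMATION);
              }

              run_time_permissions(integer perm)
              {
                  if (perm & PERMISSION_TRIGGER_ANIMATION)
                      llStartAnimation("wave_anim"); // hypothetical animation name
              }
          }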
  13. Sadly, I tested it and the viewer uses the old BC3/DXT5 compression. It ranges from sort-of imperceptible to terrible depending on the image. Test image: gradient, normal map, pixel art, photograph. Saved as DXT5 in Paint.net, using error dithering with the uniform error metric: [screenshot] At the default zoom it doesn't look too bad, but you can already see the gradient is banded. Zoom into the normal map and pixel art, left original, right DXT5: [screenshot] That normal map is not happy, and that pixel art is just blasted. The original image inworld, with texture compression enabled, zoomed in, looks just about the same as the DXT5-compressed file. Ignore the blurring; SL forces bilinear interpolation on textures and it's not part of the compression: [screenshot] As for the discussion above... I'll stick with my opinion that texture compression might have its uses in certain situations. The cost in smoothness might be a killer for some, the cost in quality for others, but if it makes the difference between having a tolerable experience at all... it's still there as an option. And yes, I'd certainly like other viewers to have some of the efficiency of Cool VL Viewer. Request for LL: enable BPTC compression if available, BC7 would look so much better for the situations where people absolutely must use texture compression!
  14. The subject of the viewer's "lossy texture compression" has come up recently and I wanted to see what I could find out. The compression is an OpenGL feature. I'm not enough of an expert to figure out exactly which compression format is being used here, since the viewer requests a "generic, nicest image quality" compression, which to my understanding lets the driver decide, but I think the options are either "S3TC", the older format (DXT5/BC3), or "BPTC", a newer format (BC7) that requires OpenGL 4.2 or higher. Both appear to have a fixed 4:1 compression ratio; it's not lossy like JPEG where the ratio and quality vary depending on settings, but BPTC is "smarter" and can squeeze better quality out of the same fixed-size block.
      I didn't actually try to do any subjective quality testing for now, but the formats are certainly lossy so you're bound to lose detail. I found some comparison images which are probably still valid, even if they're from 2012: https://www.reedbeta.com/blog/understanding-bcn-texture-compression-formats/ - BC7 produces a lot better quality overall, so assuming it can be used when available, it would definitely explain people not noticing real visual degradation these days. It might hurt normal maps more than other textures (normal maps are really raw numbers rather than a "picture", so compressing them in a lossy, perceptual manner may be trouble).
      For the performance side, I didn't test any heavy scenes, just my own home with the exact same settings: 1.3 GB VRAM after everything had finished loading without texture compression, 0.55 GB with compression, a ~2.4:1 reduction. Some textures in the viewer, like light maps, are specifically prevented from using compression, and other things than textures are stored in video memory as well, so a 4:1 ratio probably won't happen in practice. Ideally the textures would be compressed beforehand and stored on disk as such, then loaded directly; since the viewer requests runtime compression, I saw a *lot* of stuttering during rez as it was loading and compressing the textures. After everything was done loading the stutter went away; the texture decompression should be handled pretty invisibly by the GPU. In my simple test scene there was no measurable FPS difference between compression on and off, since neither the VRAM nor the memory throughput was being strained, but compressed textures should help performance by both taking less memory and requiring less memory bandwidth to move textures around.
      TL;DR it's worth checking out, maybe the performance boost is worth the rez-stutter and lower quality?
  15. Oops, you're right, it is indeed the "glossiness", which serves a function roughly opposite to roughness. Brain fart. The SL scaling is a basic bilinear scaling implemented in the file indra/llimage/llimage.cpp; the code is a little difficult to follow since it's quite optimized, but from what I can decipher it doesn't do anything special. Most image editors have bilinear as an option among others, but having more choices lets you tailor the scaling to the needs of the image: bilinear certainly *can* be the best, but it's generic and soft, and other modes can preserve details better or even sharpen them. I will always advocate for scaling to the correct resolution yourself.
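      For reference, this is the standard bilinear interpolation formula such a scaler evaluates per output pixel (a generic statement of the method, not a transcription of the llimage.cpp code), with $u, v \in [0, 1]$ the fractional position between the four nearest source pixels $f_{00}, f_{10}, f_{01}, f_{11}$:

          $f(u, v) = f_{00}(1-u)(1-v) + f_{10}\,u\,(1-v) + f_{01}(1-u)\,v + f_{11}\,u\,v$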
  16. Mostly, except calling a normal map a bump map would be a misnomer. If it looks like a normal map with the overall blue look, then it's probably OK. If it's grayscale, then it's a height/bump map and won't work for normals. Diffuse, as used in old materials, is supposed to be a baked representation of the lighting, so it's not the same as base color. I've certainly seen lots of "PBR libraries" advertised here and elsewhere where the base color/albedo texture is just a diffuse map, so it's not technically correct... but it might look just fine anyway. An actual diffuse could be thought of as base color + occlusion, but it's not quite that simple. Alpha is the transparency on the base color texture, i.e. its alpha channel. Occlusion/AO is the red channel of the roughness-metal texture; it's more accurately occlusion-roughness-metal. Displacement and height are probably misnomers (height and bump are a grayscale height map, not the same as a normal map). Displacement is not used in SL, but it could be in either height or normal map format, and it actually displaces the vertices instead of just altering the shading. Specular could be specular strength (old materials: the normal map's alpha channel) or color (old materials: the specular texture itself). Not used in SL PBR. Oops, mobile quoting changed the order. There's no simple answer; which channels you must include depends on what you're trying to accomplish. AO can be useful to define surface detail, and since it's part of the O-R-M texture it's gonna be uploaded along with the other two anyway. Normal maps get uploaded with lossless compression if you upload a glTF directly. Otherwise there's no difference; if your normal map is 128x128 or smaller, or the material doesn't care about compression artifacts, you can upload them separately.
  17. KVP storage calls are already throttled. If you hit the throttle, you know you're going too hard; if you don't hit it, it should be fine.
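      A minimal sketch of detecting the throttle (the key "score" and its value are hypothetical; the script must be running under an experience for KVP calls to work):

          key gReq;

          default
          {
              touch_start(integer n)
              {
                  gReq = llUpdateKeyValue("score", "42", FALSE, "");
              }

              dataserver(key req, string data)
              {
                  if (req != gReq) return;
                  list parts = llCSV2List(data); // "1,value" on success, "0,error" on failure
                  if (llList2Integer(parts, 0) == 1)
                      llOwnerSay("KVP write succeeded");
                  else if (llList2Integer(parts, 1) == XP_ERROR_THROTTLED)
                      llOwnerSay("Hitting the KVP throttle - back off"); // you're going too hard
                  else
                      llOwnerSay("KVP error: " + llGetExperienceErrorMessage(llList2Integer(parts, 1)));
              }
          }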
  18. That doesn't seem to be the case: completely plain, blank-textured old prims linked together still count as 1 LI per prim for me. If you put PBR materials on said prims (or diff-spec materials, or change the alpha mode from the default), it has the same effect as switching to convex hull, making them count with mesh-style impact calculation. The LI discount only applies to the download cost anyway, as far as I recall from the announcement. Prims have almost no download cost and their impact is almost all physics/server, so unless the total LI of the object/linkset is determined by download cost, there won't be a visible change.
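      A minimal sketch for checking where an object's cost actually comes from (land impact is roughly the largest of the three totals, rounded up):

          default
          {
              touch_start(integer n)
              {
                  // inspect this object's own cost breakdown; any rezzed object's key works too
                  list costs = llGetObjectDetails(llGetKey(),
                      [OBJECT_STREAMING_COST, OBJECT_PHYSICS_COST, OBJECT_SERVER_COST]);
                  llOwnerSay("download: " + llList2String(costs, 0)
                           + "  physics: " + llList2String(costs, 1)
                           + "  server: " + llList2String(costs, 2));
              }
          }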
  19. I can't pass commenting on "IBL" making me think "irritable bowel lighting" and that actually seems very accurate considering the tummyaches getting things settled into the "new normal" causes.
  20. Part of the reason is that all the other default environments don't have the same reflection probe ambience: Midday uses 1, others use "almost 0" but it's not exactly 0 since that would mean legacy mode. You can see the difference by having personal lighting open while switching between the environments: if it says "brightness" instead of "HDR scale", it's the so-called legacy mode. Add to that the whole probe ambience/environment ambience mixing and yes, Midday is actually darker than Midnight in some situations, *especially* if a reflection probe is being used. It's great, I love it. /s
  21. Nothing stops you from making PBR materials by hand. Don't let the "you're supposed to use professional workflows and tools" attitude stop you from experimenting; just because you *can* do things in a more industry-standard way doesn't mean you have to. Yes, the base color (albedo) + occlusion-roughness-metal division is more complicated: you're not supposed to draw one single estimate of how the thing looks into one texture, but instead separate the components (base color is literally that, the base color, with no baked-in shading from surface details, no gloss, etc.) and let the engine combine them in a more scene-appropriate way. But that is just a matter of understanding which detail goes into which texture, not some impossibility that can only be accomplished with a 3D painting tool. Pencil drawings aren't obsolete because photographs and sculptures exist. If the pencil drawing does the job you want it to do, great, you won't need a camera or to lug around a heavy object just to convey every detail more accurately.
  22. Advanced lighting never forced you to use shadows (and that was probably a big disconnect: people turning on advanced lighting and shadows, then getting their performance absolutely ****canned instead of mildly reduced), and nothing has changed in that regard. Turning them off in the PBR viewer gives you a spicy performance boost as always (lazy test in a very plain, uncomplicated scene: roughly 30-50% more FPS), and running with shadows on in scenes that are not designed for it is a subpar experience, so... make use of those quick graphics presets.
  23. You could of course build a list of prims once on start-up/on a changed-link event and assign those to a global variable. Or you could skip doing that entirely, based on what your example shows: you've already gotten the link number by matching it in the touch_start event, so you can pass that to a generic engine toggle function. Also, instead of a separate engine state flag for each engine, you can use a list and pass the list index along in the call.

          key owner;
          list engine_active = [FALSE, FALSE]; // holds the activation state for all engines

          EngineToggle(integer engine_num, integer link)
          {
              integer active = llList2Integer(engine_active, engine_num); // get the chosen engine's active state
              list params = llGetLinkPrimitiveParams(link, [PRIM_DESC]);
              list detailz = llParseStringKeepNulls((string)params, [":"], [""]);
              integer detail_num = 0; // the only difference between calls is the list entry being used,
              if (active)             // so choose the correct detail based on the active state
                  detail_num = 1;
              // if you were to put the transparency change back in, you could also just determine that based on the active flag
              llSetLinkPrimitiveParamsFast(link,
                  [PRIM_POS_LOCAL, (vector)llList2String(detailz, detail_num),
                   /* PRIM_COLOR, 0, <0.596, 0.596, 0.596>, 1, */
                   PRIM_PHYSICS_SHAPE_TYPE, PRIM_PHYSICS_SHAPE_NONE]);
              engine_active = llListReplaceList(engine_active, [!active], engine_num, engine_num); // store the flipped state
          }

          default
          {
              state_entry() { owner = llGetOwner(); } // Don't mind this.. just part of an Access System.

              touch_start(integer total_number)
              {
                  key id = llDetectedKey(0);
                  if (llGetOwnerKey(id) == llGetOwner())
                  {
                      integer link = llDetectedLinkNumber(0); // store & pass along the prim number
                      string linkname = llGetLinkName(link);
                      integer engine_num = -1; // match link name to an engine cell number, -1 means not valid
                      if (linkname == "Engine Cell 1") engine_num = 0;      // Cell 1 (#0)
                      else if (linkname == "Engine Cell 2") engine_num = 1; // Cell 2 (#1)
                      // more else-ifs for additional engine cells would go here
                      if (engine_num >= 0) // only toggle if it's a valid engine cell
                          EngineToggle(engine_num, link);
                  }
              }
          }

      There's probably more to optimize and turn into a more generic, expandable form, but generally speaking: the fewer function definitions and calls, the better.
  24. This is one of my least favorite things about how PBR is implemented. IMO they absolutely should've reused tint and transparency instead of hiding everything behind layers of material asset stuff and overrides. I don't foresee actually using PBR materials unless absolutely necessary i.e. mostly shiny metal things, the environmental reflections and such already help diff-spec materials look spicier. My second least favorite thing is renaming the old system "Blinn-Phong", couldn't we just have called it diffuse-specular which is already enough technobabble, instead of going for scientifically accurate names? My third least favorite thing is the flaky performance, sure the PBR viewer pushes 300-400 fps like non-PBR Firestorm, but the instant the camera moves the framerate stutters to ~10 periodically no matter how many times I try to cam around to make sure things are loaded and cached (non-PBR viewer keeps up the framerate without a hitch). Guess my PBR enthusiasm has kinda waned, "it's fine I guess", could've used a little longer in the oven.