Theresa Tennyson


Everything posted by Theresa Tennyson

  1. "Deformers" as the term is generally used mean objects that play animations that adjust avatar bones constantly. They wouldn't work to make Lara clothes work on a LaraX body because the deformer would deform the clothes in the same way. Really, every rigged mesh you wear has no idea about any other rigged mesh you're wearing. "Deformers" as the term the people behind the Legacy body used it were actually separate, differently shaped sections of mesh. Maitreya provides a perfect "deformer" using this definition by including the original Lara body with the LaraX package. The only other thing they could have done was provide separate upper and lower halves and they'd probably have to meet at the waist, which is one of the wonkiest parts of the original Lara body
  2. People who do this do it because they enjoy watching people get upset. "Confronting" them tells them what they're doing is working.
  3. No, that encourages not being a cloud - it's one of the required components.
  4. What do you think LeLutka would sue you for in that case?
  5. I've been pleasantly surprised with how well the PBR viewers are working for me. However, I spend most of my time in areas that are set up with the default environment. I've always relied on placed light sources, and I haven't messed with reflection probes yet. But yes - if I go to places whose environments have custom EEPs made for the old lighting engine, then indeed, sometimes it looks GAAHHH!!! If that "realistic ambient" lighting isn't made for how lighting works now, it won't look realistic. And there was one place I made myself that was just nightmarishly bright even with the default environment - until I realized I'd placed three lights blasting out maximum power in a small space. In a real-world situation someone walking in there would be frizzle-fried in seconds. I rolled them down from intensity 1.0 to intensity 0.6 and things looked fine afterward (a sketch of that change is below). Under the old renderer the constrained contrast meant I never realized how over-lit it was. Yes, there are problems with the viewer, but they're being fixed pretty quickly, and to be honest I'm adopting it a lot faster than the original materials viewer, which broke a lot of things with lighting in its first few iterations. And some of the "problems" here come down to not knowing how things work, which for some people will be a learning process and for others, dare I say it, a case of, "Dinnae mess wi' things ye ken nothin' about."
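     A minimal LSL sketch of that fix. The color, radius, and falloff values here are illustrative assumptions, not the actual build's settings:

         // Roll this prim's point light down from full to 60% intensity.
         // PRIM_POINT_LIGHT takes: on/off, color, intensity (0.0-1.0),
         // radius (meters), and falloff.
         default
         {
             state_entry()
             {
                 llSetLinkPrimitiveParamsFast(LINK_THIS, [
                     PRIM_POINT_LIGHT,
                     TRUE,             // light on
                     <1.0, 1.0, 1.0>,  // white light (assumed)
                     0.6,              // intensity, rolled down from 1.0
                     10.0,             // radius in meters (assumed)
                     0.75              // falloff (assumed)
                 ]);
             }
         }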
  6. I'd consider a vendor deciding not to allow someone to use their product because of their behavior a "check/balance."
  7. Then you can show me some evidence that's the case, I take it. Note that I never said what you were saying was wrong.
  8. When it comes to UV maps, one company will never have a "monopoly" because layouts aren't copyrightable. I think a certain Cat Company might have found that out a while ago. You'd need a patent and that process would be way too slow and complicated for the stakes. I DO think there's room for standardization - the various "new" channels for the bake service are crying out for that, but getting the Lab involved is just going to create a big delay. One of the real successes in standardization in SL was "Standard Sizing" for pre-mesh-body mesh clothing. Yes, there were outliers but it really worked pretty well - and that was completely user-driven.
  9. There is a standard. It's the original SL avatar mapping. That's what LL uses on Senra heads. But some others had different ideas. So now, do you want LL to ban other mappings?
  10. You mean like how they posted that textures are delivered by HTTP now and they were surprised to discover that UDP delivery is still around but only as a glacially-slow fallback? As in, "HTTP from Svalbard would be faster"?
  11. So, these are all UDP packets, huh?
  12. I'm not an expert on reflection probes by any means, but this thread suggests they work like setting the exposure on a manual camera. Our eyes automatically compensate for the amount of light coming into them - that's why your pupils get bigger and smaller. With a camera there needs to be some way of doing this. Most people today are used to cameras that adjust themselves automatically, but cameras used to have to be adjusted manually.

      Let's say I'm using my old manual Rolleiflex. If I'm standing outside at midday and want a precise exposure, I'll get out an exposure meter. That meter will look at the amount of light coming in, and the needle will move to a point representing the settings that make the scene render like a gray surface reflecting about 18% of the light hitting it - an amount that generally produces a well-exposed picture. It'll tell me I need a fast shutter speed and that I need to stop the iris diaphragm down so the opening is very small. If I take a picture outside, I'll probably get good results.

      Now, if I walk into a room and try taking a picture with the same settings, I'll have a problem - at midday the difference in light levels between outdoors and indoors is huge. The room will be dark as the inside of a cat, with only the window exposed properly. To get a usable picture I'd need to re-measure the light inside the room and set the camera to let more light in. If I set my camera outside at midnight, though, I'd have to open the diaphragm as far as it would go and use a very slow shutter speed to get a usable picture. But if I then went indoors and took a picture with the same settings, it wouldn't look that different, because the difference in light levels isn't as large - in fact, it would probably look "brighter" than the indoor picture taken at midday with the noon outdoor settings. In other words, things would look like the first pictures Arduenn showed us in this thread. (There's some rough arithmetic on this below.)

      This suggests that when Arduenn set a reflection probe that extended outside of the room, it used the outside lighting to set the exposure - hence, dark as the inside of a cat at midday, and properly exposed but low-contrast at midnight. One of the big differences in the new viewer, though, is that it automatically compensates for light levels, much like our eyes do. If Arduenn had taken a picture with no reflection probe at all, it would have been properly exposed. It sounds like the viewer was doing exactly what Arduenn was telling it to do - though in his defense, the documentation should have let him know what was going to happen.

      So, in response to your cave question - I'm guessing that if you'd like the cave to consistently look dark, you should set up a reflection probe inside the cave near the door. If you want it to look bright enough to move around in, with a blown-out highlight from the opening, you'd set the reflection probe farther back in the cave.
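      To put rough numbers on that camera analogy (the luminance figures here are generic textbook values, not anything measured in SL): a reflected-light meter effectively computes an exposure value

      \[
      EV = \log_2\!\left(\frac{L \cdot S}{K}\right), \qquad K \approx 12.5
      \]

      where \(L\) is scene luminance in cd/m² and \(S\) is the ISO speed. At \(S = 100\), an outdoor midday scene at roughly \(L = 4000\) meters at \(EV = \log_2(32{,}000) \approx 15\), while a daylit interior at roughly \(L = 50\) meters at \(EV = \log_2(400) \approx 8.6\). That's a gap of about 6.3 stops - the outdoor scene is \(2^{6.3} \approx 80\) times brighter - so no single fixed exposure can serve both. At midnight both luminances drop toward the bottom of the scale and the outdoor/indoor gap shrinks, which is exactly the effect described above.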
  13. There is nothing stopping another viewer developer from continuing to support forward rendering. I imagine Henri will do it until the heat death of the universe - or at least of Second Life.
  14. If you set up your reflection probe OUTSIDE of the room... And you don't really need a reflection probe at all.
  15. Who in the name of Pete's blue bicycle is forcing people to do anything? There are a variety of supported viewers that use older technology.
  16. In Second Life you could set up the most perfectly realistic static scene you want, and then the second things start to move it would all go kablooey, as avatars bend into "natural" human positions that make them look like balloon animals and float over the land and furniture instead of the two deforming against each other realistically.
  17. Are you saying that excessive verbiage is ineffective in getting one's point across?
  18. Advanced lighting doesn't require using shadows.
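      For what it's worth, in the pre-PBR viewers those were controlled by separate debug settings. This is from memory, so treat the exact names and values as assumptions to verify in your own viewer:

          RenderDeferred       TRUE   // turns the Advanced Lighting Model on
          RenderShadowDetail   0      // 0 = no shadows, 1 = sun/moon, 2 = sun/moon + projectors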
  19. If a single hobbyist viewer developer can do it, and wants to do it, why should the company behind SL pay somebody to do the same thing when they could be doing something else? Now, yes, you should allow the hobbyist's viewer to connect to the environment, but that hasn't really been a problem. There are people who love to say, "Second Life isn't a game, it's a 'virtual world'." But when you say that, what do you mean? In my mind, it's the "place" formed by the pre-existing and newly added items, and the items that could still be added. There's a reason that viewers are called viewers - they're just a way of looking at the "world." And there are a lot of ways you can look at that world; that's one of the reasons Second Life has stayed around so long.
  20. The question is how far you have to go to do this. One of the leading voices for the sans-culottes here is literally using a computer with a broken video card, and they haven't installed the non-broken replacement they already have because... well, there you go.
  21. Note the edit on my post before your reply.
  22. https://wiki.secondlife.com/wiki/Release_Notes/Second_Life_Release/5.1.5.515811 That would be the "Love Me Render" viewer, released on May 31, 2018, several months before your "first thread" on EEP. ETA: Actually there have been a number of "Love Me Render" viewers to fix a variety of rendering issues, both before and after EEP came out, and they weren't generally "EEP specific" in their nature. https://releasenotes.secondlife.com/categories/viewer.html
  23. EEP came out in 2020. Love Me Render came out in 2018.