
leliel Mirihi

Everything posted by leliel Mirihi

  1. Faye Feldragonne wrote: Morgaine Christensen, I wasn't aware of this setting but boy does it make a difference in Dodge! I wish "somehow" we all knew about these things! For 2 years I've seen that dang flicker in things and never "knew" there was a setting! Dooh! It's not a real fix. You'll see what I mean soon enough.
  2. Corin Clary wrote: In both viewers when I moved my avatar the FPS would drop a lot. Probably because I cannot get high enough network speed with my ISP. Your 7-year-old CPU may have something to do with it as well. Just saying that pairing up a CPU from 2005 with a GPU from 2010 is not something any gamer would do.
  3. Chosen Few wrote: leliel Mirihi wrote: Doesn't matter one bit in a deferred renderer, the lighting equations are completely decoupled from the geometry. Ah, I hadn't considered that. Thanks. It's actually not quite as easy as I let on. Doing normal maps on alpha tested particles isn't hard, but doing normal maps on alpha blended particles is. Especially since doing alpha blending in a deferred renderer is pretty hard in itself. Although since people have figured out how to do volumetric shadow mapping on particle clouds in real time, we aren't that far off from it.
  4. Morgaine Christensen wrote: You might want to post the information on the product pic itself, or where it is highly visible. I strictly run Phoenix; I don't care what anyone says or their logic... I like it... am used to it... and will run it till they take it from me. Many still run Phoenix and you could end up with unhappy customers, especially if selling the product on Marketplace where you would be rated, which might affect future sales of the items. Then again, if a product is restricted to a single named viewer, I probably wouldn't buy the product either. I would be really mad if I bought something that was restricted to a single viewer and was not informed beforehand. Is there anything you can do creative-wise to reduce the flicker in the type V1 viewers? Maybe sell/include two version types? I know I have seen some designers do this in things I have purchased and it is much appreciated. I continue to patronize them because of this extra effort they have made. It demonstrates to me they are cognizant of what they are creating, the limitations of the various viewers, conscious of the needs/wants of their customers, and tells me they want my Lindens. So in other words you're unwilling to compromise on your choice of viewer but at the same time demand that creators compromise by making multiple versions of their objects. Anyway, to stay on topic: in v1 this option was known as Fast Alpha and should be in Advanced -> Rendering. Although the v1 version isn't as good as the one in v2 and later.
  5. Chosen Few wrote: You'd better hope they never allow normal mapping of particles. It would be difficult to imagine that adding so many more lighting vectors to objects that can spawn by the thousands would be a good idea. Doesn't matter one bit in a deferred renderer; the lighting equations are completely decoupled from the geometry. Josh Susanto wrote: Interesting problem. What about alphas, though? Shouldn't normal mapping of alphas essentially provide a kind of rudimentary holography? No. Normal mapped alpha textures would look no different from normal mapped opaque textures. Holograms are a much more difficult and interesting problem since they appear to reflect light where there is no geometry at all. Things are obviously more complicated than that.
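To illustrate the decoupling point, here is a toy Python sketch (not real viewer or GPU code; the function names and data layout are invented for the example). The lighting pass reads only per-pixel G-buffer attributes, so its cost scales with screen pixels rather than with how many triangles or particles produced them:

```python
# Toy deferred-shading sketch: the geometry pass writes per-pixel attributes
# to a G-buffer, and the lighting pass reads only those pixels. It never
# sees the triangles, which is why adding normal maps to particles would not
# add per-geometry lighting work.

def geometry_pass(fragments):
    """Each fragment carries the attributes the lighting pass needs."""
    gbuffer = {}
    for (x, y), albedo, normal in fragments:
        gbuffer[(x, y)] = (albedo, normal)  # depth etc. omitted for brevity
    return gbuffer

def lighting_pass(gbuffer, light_dir):
    """Cost depends only on pixel count, not on scene geometry."""
    image = {}
    for pixel, (albedo, normal) in gbuffer.items():
        # Simple Lambertian term: N . L, clamped to zero
        ndotl = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
        image[pixel] = tuple(c * ndotl for c in albedo)
    return image

# Two fragments with different normals, lit by a light pointing along +Z
frags = [((0, 0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)),
         ((1, 0), (1.0, 0.0, 0.0), (0.0, 0.0, -1.0))]
img = lighting_pass(geometry_pass(frags), (0.0, 0.0, 1.0))
```

The front-facing fragment gets full albedo and the back-facing one goes black, regardless of what geometry generated them.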
  6. SL doesn't have any client side scripting built in. The only thing close is the third party extension RLV but I doubt it has anything remotely like what you need. And keep in mind that even if you do get this working with some client side scripting extension it will only work for people that also have that extension and willingly run your scripts. This will not be a seamless effect that just happens to anyone that comes within view of the object.
  7. Josh Susanto wrote: How many data points does Second Life normally want to apply to your mesh models? Chosen already told you like 6 pages back. Really that kind of detail isn't very important; the limits are high enough that no reasonable object would ever hit them. I guess it's my turn to say it now, but stop thinking about sculpties.
  8. Josh Susanto wrote: >"This seems to be the root of your misunderstanding... each triangle is considered to be independent." The explanation is simpler than that, I see now. 1) The mushroom didn't load out as a 128 .obj. It loaded out at the resolution Cel Edman expects to be used for SL. I just didn't realize that because I have always exported sculpts as .png, not as .obj. 2) If there is a default export resolution assignment for meshes, no one cares to discuss that. If there isn't one, then maybe we're really talking about "LOD". Are we? Default resolution of what? Talking in riddles and leaving important details out doesn't make your posts more profound. I think I understand now why this thread is 10 pages of hot air.
  9. The 195.x driver was released a year before the GTS 450 came out so I'm surprised it worked at all. The first driver to support the GTS 450 was 260.x.
  10. Josh Susanto wrote: So what is the kind of default resolution available with Blender? One vert per surface pixel? something like that? My mushroom loaded at less than that, and it was only a 128x128 sculpt. This seems to be the root of your misunderstanding so let me try to clear some things up. Textures are not pictures. Textures are not addressed by pixels. Textures are addressed with normalized floats in the range of [0,1]. Everything, and I do mean everything, about textures is interpolated; they go through some very heavy filtering before they end up on your screen. UV maps do not exist; they're a human construct to make the life of 3d modelers easier. From the GPU's point of view objects are made up of a list of vertices, a list of texture coordinates and some other things we don't care about. The coordinates are matched one to one with the vertices; it doesn't matter what they are because quite honestly your GPU doesn't care. Each texture coordinate is completely independent from the others. The texture coordinates for two triangles can overlap or cross in the texture space in any way that you can come up with, because once again your GPU doesn't care; each triangle is considered to be independent.
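The "normalized floats, everything interpolated" idea can be sketched in plain Python (an illustrative toy sampler, not how any real GPU is implemented): the same (u, v) pair addresses a texture of any resolution, and the result is blended from neighboring texels.

```python
# Toy bilinear texture sampler: coordinates are normalized floats in [0, 1],
# not pixel addresses, and every lookup is interpolated from nearby texels.

def sample_bilinear(texture, u, v):
    """texture: 2D list of floats; (u, v) in [0, 1] regardless of resolution."""
    h, w = len(texture), len(texture[0])
    # Map normalized coords into continuous texel space
    x = u * (w - 1)
    y = v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Blend the four surrounding texels
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bot = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bot * fy

# The same (u, v) works for a 2x2 texture or a 2048x2048 one
tex = [[0.0, 1.0],
       [0.0, 1.0]]
mid = sample_bilinear(tex, 0.5, 0.5)  # halfway between the texel columns
```

Real GPUs add mipmapping, anisotropic filtering, and wrap modes on top of this, but the addressing model is the same.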
  11. Kaitlyn Tearfall wrote: just getting so sick of wearing sculpties and having everyone IM me telling me that my shoes or my top aren't rezzing properly. RenderVolumeLODFactor only controls what you see; it has nothing, repeat nothing, to do with what other people see. If they're complaining about your clothes not rezzing it's because of something on their end, not yours. Tell them to STFU and figure it out themselves. I totally agree that Linden Lab needs to change the default value of this. If you knew what the setting did you would more likely be demanding that content creators stop making bad sculpties, which force everyone to raise the setting and get lower frame rates just to cover up for the creator's ignorance.
  12. Drongle McMahon wrote: Now the jpeg2000 in SL may be completely different, but unless it is, this means that jpeg2000 compresses the interpolated version best, even though it has more different pixel values (someone please explain that!). Easy: jpeg & jpeg2k were designed to compress photographs of the real world. In the real world you almost never see large blocks of solid color; there are always gradients of color from lighting variations, imperfections in the material and surface, and random noise from the camera or your eyes. So jpeg sacrifices the uncommon case so it can better optimize the common case. Try comparing jpeg2k vs. png on a high res photo.
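The flip side is easy to demonstrate with deflate, the lossless compressor inside PNG (this uses Python's zlib as an illustrative stand-in, not SL's actual codec path): solid-color data collapses to almost nothing, while photo-like noise barely shrinks at all.

```python
# Illustrative experiment: deflate (what PNG uses) loves long runs of
# identical bytes, while noisy photo-like data is nearly incompressible.
import random
import zlib

random.seed(0)
n = 64 * 64  # one channel of a 64x64 image

flat = bytes([200] * n)                                  # solid-color block
noisy = bytes(random.randrange(256) for _ in range(n))   # camera-noise stand-in

flat_ratio = len(zlib.compress(flat)) / n
noisy_ratio = len(zlib.compress(noisy)) / n
# flat_ratio is a tiny fraction; noisy_ratio stays near (or above) 1.0
```

That asymmetry is why PNG wins on flat UI art and texture atlases while jpeg-family codecs win on photographs.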
  13. The Particle Laboratory has a great tutorial on using particles in sl. As for other tutorials it depends on what you want to learn and how far down the rabbit hole you're willing to go.
  14. Yes you can do something like that with some extensive modifications to both the viewer and the sim; the point is why do we need that? Aside from your one example what exactly would be using this? We're talking about tens of thousands of dollars' worth of development work for your pet project. Why can't you build this thing with the tools that are already available? And I'm sorry for being blunt but you need to learn a lot more about how this stuff works so you can start asking the right questions. Some of the things you're asking or presuming are just way out there; it makes it very hard to give you a proper answer because I'd have to type out several pages to explain the necessary background info.
  15. Why do you keep insisting that it has to be an object that follows the avatar? Go and try to build what you want with particles first before you ask for a new and rather weird object type that would have very limited uses (i.e. almost none).
  16. face2edge wrote: So, how can I get an object in the virtual world to use client side rendering in order to look at each viewer's avatar? This is my current question. If communication was allowed to take place between a ghost particle, the viewer, and another object, would you be able to track the orientation of the particle and feed that information back to the orientation of an object inside the world. You would still have to use client side rendering so each viewer sees a unique view of the object orientation. So, the next question is about that point. Can you use client side rendering on objects other than HUDs and particles? If you could enable client side rendering at will, could you make higher level of detail objects to render outside of the second life system? Can this be turned on for an entire sim so all models are hardware accelerated? First, everything in SL is rendered in the client. Second, what you're asking for is the ability to treat 3d objects as if they were particles, which is a bit silly considering particles were invented to do things you couldn't do with 3d objects. Particles were created to simulate fuzzy objects that don't have a solid shape and are thus difficult or impossible to create with 3d objects. Another way of putting it is that 3d objects are used for solid matter, whereas particles are used for gases and fluids, or things that behave like them. Originally they simulated particulate clouds such as smoke, water mist, rain/sand/dust clouds, fire, sparks, etc. More advanced particle systems, which SL does not have, can simulate things like hair, fur, grass and such. Particles used in conjunction with a fluid simulation can also be used to mimic flowing water. It's best to think of a particle as a flat sheet of paper that always faces the camera, because that's what it is. But just because it's a piece of paper doesn't mean you can't trick people into thinking it's something else.
I suggest you spend some time playing around with sl's particle system to see what it can do and how it reacts to the various settings. This will give you a much better idea of what you can and can't do with them and will effectively answer your question for you.
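The "flat sheet of paper that always faces the camera" behavior comes down to a little vector math done every frame. Here is a simplified Python sketch (function names invented for the example; it ignores roll and the other alignment modes real particle systems offer):

```python
# Camera-facing billboard sketch: the quad's right/up axes are rebuilt from
# the camera direction every frame, so the particle always shows its face.
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def billboard_axes(particle_pos, camera_pos, world_up=(0.0, 1.0, 0.0)):
    """Return (right, up) axes for a quad at particle_pos facing camera_pos."""
    to_camera = normalize(tuple(c - p for c, p in zip(camera_pos, particle_pos)))
    right = normalize(cross(world_up, to_camera))
    up = cross(to_camera, right)
    return right, up

# Camera straight out along +Z from the particle: the quad lies in the XY plane
right, up = billboard_axes((0.0, 0.0, 0.0), (0.0, 0.0, 10.0))
```

The quad's four corners are then particle_pos plus or minus scaled right and up vectors, which is why a particle has no "back side" to walk around.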
  17. Josh Susanto wrote: I've already been running a bunch of 1024's in one frame without any detectable lag. How can that be? You won't see much, if any, lag until you go over the limit for your video card, and then your frame rate will drop by 90% because it will be swapping textures from system memory.
  18. Josh Susanto wrote: I understand that most people would consider 4096 or larger to be total overkill, but overkill is subjective. This is not subjective. 4096x4096 at 24 bits per pixel, plus ~33% for the mipmap chain, is 64MB for just one wall. That wall by itself would be a lag monster for a significant number of people in SL. If I can make an 8192 version of the right kind of photo image, 256MB for one wall! I expect there could be some demand for that, at least as a wall in a sex club or something like that. I expect not, once people figure out how badly it causes them to lag. At a 31 prim limit, I should be able to produce a framed photo at 6144x5120 that an avatar can carry around attached. Most people don't max out their avatar prim limits as quickly or consistently as they do land prim limits, and not everyone has land, but everyone has an avatar. I strongly recommend you reconsider this idea. A large number of people in SL do not have enough VRAM for this kind of thing.
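The arithmetic behind those figures is easy to check (a small helper written for this post; the ~33% term comes from the mipmap chain summing to one third of the base level):

```python
# Uncompressed texture memory: width * height * bytes-per-pixel, plus a full
# mipmap chain, which adds roughly 1/3 on top (1/4 + 1/16 + ... -> 1/3).

def texture_mib(width, height, bytes_per_pixel=3, mipmaps=True):
    base = width * height * bytes_per_pixel
    total = base * 4 / 3 if mipmaps else base
    return total / (1024 * 1024)

# The numbers from the post above:
wall_4096 = texture_mib(4096, 4096)  # 64 MiB for one 24-bit 4096 wall
wall_8192 = texture_mib(8192, 8192)  # 256 MiB for the hypothetical 8192 one
```

In practice viewers store textures compressed or at 32 bits with alpha, so real usage differs, but the order of magnitude is the point.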
  19. Chelsea Malibu wrote: If I am reading this right, your concern is over lag. Not to be sarcastic but yeah, I think they'll get all over that lol. Lag has been an issue since this game started. That's the nature of this game. The challenge of walking through lag. Taking away the lag would be like having NPCs in WoW that don't attack you. Friendly monsters who want hugs not war. What fun is that? The OP is talking about latency. What is generally referred to as lag is actually much, much more complicated than most people realize.
  20. Chaos Borkotron wrote: I'd like a setting that stops the sun/moon light from affecting the texture/prim while still allowing local lights to affect them. Could you give us a use case for that which doesn't involve faking indoor environments?
  21. Kaluura Boa wrote: Hmmm... cd ~/.secondlife/cache ;rm -R ./* WARNING: These 2 lines are a loaded gun pointing at your foot. When you launch a terminal, it always opens in your $HOME. So, if the first line to go to the cache directory does not work as expected, the second line will just wipe your $HOME clean... and you will really, really regret it. Instead of shooting your 10 toes one by one and adding a bullet in each ankle, I strongly suggest going to Preferences >> Network in your client. The cache directory is shown on this tab. Point your file browser to this directory and then you can delete everything by hand. (Don't worry, your client will recreate all the folders it needs on next start.) Or you could just do this and not worry about it: rm -rf ~/.secondlife/cache
  22. Phil Deakins wrote: Ceera Murakami wrote: Use the mini location bar and you gain an additional quarter inch of screen height. You actually see *less* of the world when using the mini bar, so it's a false way of gaining more view. You can see this by going to a quiet location with the full bar on, and noticing exactly what's at the edges of the viewing part of the screen. Then change to the mini bar, and you'll see that the world is slightly magnified, so that the edges are now inside where they were, and you actually see less of the world. It's called angle of view. Change the vertical angle and you have to make a corresponding change in the horizontal angle or things will look distorted.
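That coupling between the two angles can be written down. Assuming a standard perspective projection (this is textbook camera math, not SL viewer code), the horizontal and vertical angles of view are linked through the aspect ratio:

```python
# For a perspective projection: tan(hfov/2) = tan(vfov/2) * aspect, where
# aspect = width / height. Change one angle without the other and the image
# stretches or squashes.
import math

def horizontal_fov(vertical_fov_deg, aspect):
    """aspect = width / height; angles in degrees."""
    v = math.radians(vertical_fov_deg)
    return math.degrees(2 * math.atan(math.tan(v / 2) * aspect))

square = horizontal_fov(60.0, 1.0)      # square viewport: both angles match
wide = horizontal_fov(60.0, 16 / 9)     # 16:9: roughly 91.5 degrees
```

So when the viewer shrinks the vertical viewport (full bar vs. mini bar) while keeping the horizontal angle behavior consistent, the apparent magnification Phil describes falls out of this relationship.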
  23. It should perform fairly well on mid/high settings.
  24. Qie Niangao wrote: I believe you're correct about that, but a couple of points: You've no doubt noticed that the ARC score changes on your own machine as you cam in and out, even for your own avatar. I don't know what's happening there; maybe it's an effect of LOD, or maybe it's because some objects stop rendering altogether at some distance. But it's not as simple as just the summed effects of all attached object properties. Maybe somebody intimate with the viewer code that calculates ARC could comment. That was a problem with ARC but has, I believe, been entirely fixed with display weights in v3. It now accounts for all of an object's LODs, so the only variation you should see is when the viewer doesn't know about all the objects in a link set. Also, the fact that ARC is not hardware-specific reveals one of the big problems with any such measure: it's at best some weighted average of the effects different properties have on a wide range of graphics hardware with which they may be viewed. GPU manufacturers are forever touting advantages of one over another, so we know they don't all handle all features the same way, and therefore won't be equally affected by them. I think that's looking at the problem from the wrong angle. The differences between GPUs can be summed up as the faster one being able to do a given job faster. How it's faster would go into some nitty gritty details that are way beyond the scope of this forum, and there really hasn't been as much improvement in that area as you'd think. Modern GPUs are in many ways just faster versions of past GPUs. Most of the advances we've seen in current games are due to clock speed and memory bandwidth increases along with algorithmic improvements in how games render things, and very little to do with any behind-the-scenes optimizations that GPUs do.
(I'm talking about the last 5 years here; before that there were large differences between each generation of GPUs.) Really you should say that hardware config X should be able to render any scene with a display weight of N at roughly the same frame rate. Any large variation you see would be something the display weight system isn't accounting for. From there you can go on to say that you need a machine that can handle a DW of Y in order to run on mid graphics settings, and so on, which will allow you to roughly rank systems. And that's saying nothing of the complexity of weighing viewer-side rendering cost versus sim lag effects that are sometimes caused by "too many scripts" -- the topic of this thread. You're certainly right there, but this is a related and interesting question: can you AR someone for too high a display weight? From the average user's perspective sim lag and viewer lag are the same thing, and they're both often caused by an overuse of resources.
  25. Void Singer wrote: ARC is problematic in that it's calculated relative to the local hardware, and the same things seen by two people can have wildly different scores, and the scores don't seem to translate from one user to another.... [citation needed] Seriously, that goes directly against the definition of ARC. The equation used to calculate ARC (or now draw weight) just gives an object various amounts of points depending on what prim properties it has; at no point does it factor in your hardware.