Everything posted by Fenix Eldritch

  1. Hi imacrabpinch, welcome to content creation in SL! Regarding the discrepancy between tutorials/forums and popular stuff inworld, I think a lot of what you see results from content creators who are either ignorant of or apathetic toward good building practices when it comes to optimizing content. Further compounding this are the consumers, most of whom are equally (often more) unaware of why unoptimized content is bad in a real-time online virtual environment like SL. I'd wager the average user gravitates to poorly optimized content simply because it "looks good" - but they are unaware of the hidden price they pay for using it (in the form of low frame rates, lag, and general performance drops). Or they might be using the Land Impact metric as their only guide to gauge efficient content, not realizing that it is not absolute, doesn't account for textures, and can be manipulated/gamed.

     For example, if someone creates a heavy-poly mesh but puts the lower LODs at single-digit tri counts, that can artificially lower the LI. But that doesn't make the higher LODs any more optimized. Improper LOD use often leads to LIs that don't reflect how complex the object truly is, and you still have poorly optimized content that additionally looks awful as the LODs step down. Some people advise players to amp up the viewer's LOD factor setting, which essentially forces content to stay at higher LODs longer. While it might look nice, it comes at a heavy performance cost and basically negates any benefit properly made LODs are supposed to impart.

     So to circle back to your original question: I don't have any hard numbers on LI or polycount (such numbers are situational to begin with). But speaking in very general terms on whether it is acceptable to have extremely high-poly content... well, just look at the complaints about performance. Using your example above, is it acceptable to be noticeably lagging from a single piece of clothing?
     Many don't realize that is the cause and simply blame SL as having poor graphics capabilities (when really, I think it's just the opposite, considering how well it manages to run with so much unoptimized content already floating around). Additionally, few things exist in a vacuum, and with the completely dynamic nature of SL, the potential to have lots of content in a scene is always there. So it behooves us to optimize wherever we can. Professional developers who make content for games are always trying to make it as efficient as possible in order to deliver consistent performance. I see no reason why creators in SL shouldn't strive for that same ideal.

     Aim to make your content as efficient and optimized as you can. If someone makes a pair of mesh pants at 70k triangles, but you can make a similarly detailed one at 7k (or less), do that! Make every polygon count; use only what you need and no more. Make every LOD only detailed enough to keep the general silhouette at whatever distance it triggers. Use as few textures as possible, as small as possible, and make use of every last bit of space on the UV map.

     SL is full of novice/hobbyist creators, and I'm not trying to throw shade on those who simply don't know, or are misinformed or inexperienced. But I think we really should be trying to improve ourselves and the content we create. Because when more and more people use better optimized content, SL as a whole benefits. Everyone's performance grows by leaps and bounds.
  2. There are four scenarios I can think of that might cause your script to simply not work:

     1. You don't have modify rights to the target object.
     2. The target object is sitting on a scripting-disabled parcel.
     3. The script itself has been set to not-running, or has crashed and needs to be reset.
     4. The script actually is running, but might be written in a way that isn't flexible enough to work with your particular target object.

     The first case is pretty self-explanatory, and you'd know right away if that were the problem, as you wouldn't even be able to drop a script into the object in the first place. The second case is usually just as obvious, but not always. If you bring up the About Land window, it should draw parcel boundaries, and you can verify that you and your object are in the same parcel and subject to the same land settings. The third case has caught me a few times. Near the bottom of the script editor window, there is a checkbox labeled "Running". Make sure that is checked. If it isn't, the script is turned off and will not reactivate until you check that box. It may also be that the script previously crashed and is in a bad state. Hitting the reset button will reboot it and may help (though ideally you'd want to ensure you don't get into that situation in the first place). Finally, add some simple instrumentation commands to your code, like llOwnerSay("test"); at the beginning of your touch_start event, state_entry event, or your on_rez event if you have it. That way you can at the very least confirm that your script is running. The task then becomes figuring out why you're not getting any other output, which depends on how the script is written.
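     A minimal sketch of that instrumentation idea (the event names are standard LSL; the messages themselves are just placeholders to prove each event fires):

```lsl
default
{
    state_entry()
    {
        // Fires when the script starts or is reset: confirms it is running at all
        llOwnerSay("state_entry: script is running");
    }

    on_rez(integer start_param)
    {
        // Fires each time the object is rezzed from inventory
        llOwnerSay("on_rez: object rezzed, script still alive");
    }

    touch_start(integer num_detected)
    {
        // Fires on click: confirms the event you care about is actually reached
        llOwnerSay("touch_start: touched by " + llDetectedName(0));
    }
}
```

     If state_entry never speaks, suspect cases 2 or 3; if it speaks but touch_start never does, the problem is in how the script itself is written (case 4).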
  3. Thanks for the responses, everyone! The linked discussion was pretty much exactly what I was looking for. So the consensus seems to be favoring a fully connected topology with slightly more faces instead of fewer faces with overlapping edges. But just to be safe, I want to make sure it is understood that I don't have any duplicate vertices in my picture. There are no extra verts that could be removed or merged. So in that light, does it still hold true that creating faces with a few additional overlapping edges can lead to generating MORE verts in SL's internal model format, as indicated by the linked thread?
  4. I'm reworking a building facade to be more efficient. Originally the windowed wall was not connected to the columns and ran behind them. Now I'm removing the extra verts and having the wall's edge use the column's edge. As I fill in the wall, I am trying to use as few faces as possible and have a dilemma. The picture below shows the two ways I've filled the wall between the window and column. The right side uses more faces, but creates fully contiguous (is that the word?) geometry. The left side uses fewer faces (yes, I know I haven't triangulated the quads yet, but it's still fewer than the other side). I'm actually doing the same thing with the sections between the windows too... but unlike the right side, I don't connect each and every vertex. So my question is this: is there any reason why I shouldn't use the method on the left (and center) for keeping the face count down? Is there any drawback to not having the geometry completely connected at every vertex? (Is there a phrase for this?)
  5. When you attempted to delete the script, did you use the delete key? If you did, you may have inadvertently deleted the host prim by mistake. When removing inventory from prims, I play it safe and right-click the target item and select "delete" from the context menu.
  6. I would be careful using Dropbox like this. There are bandwidth restrictions for its public folders. Last I heard, free Dropbox accounts may not use more than 10 GB of bandwidth per day. That sounds like a lot, but it can quickly add up. Every time someone in SL encounters your object with its MOAP set, their viewer downloads the file to their cache, consuming a bit of your bandwidth quota for that day. The bigger the file (a full song, for example), the more bandwidth is consumed by a single encounter. If you encounter enough people, you will exhaust your bandwidth, and all of the files in your public folders will be unreachable.
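     The arithmetic behind "it can quickly add up" can be sketched like this (the 10 GB/day cap is the figure quoted above; the file sizes are purely illustrative):

```python
# Rough estimate: how many unique viewer downloads fit under a daily
# public-folder bandwidth cap, given the size of the shared file.
def max_daily_downloads(file_mb, cap_gb=10):
    """Whole downloads of a file_mb-sized file that fit in cap_gb per day."""
    return int((cap_gb * 1024) // file_mb)

print(max_daily_downloads(5))   # a ~5 MB song: 2048 encounters/day
print(max_daily_downloads(50))  # a ~50 MB video: only 204 encounters/day
```

     On a busy parcel, a few hundred avatars passing through can burn that quota surprisingly fast.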
  7. llAvatarOnSitTarget() returns the key of the detected avatar, and since you have already stored it in the av variable, you can further tighten up your code by using: llRequestPermissions(av, PERMISSION_TRIGGER_ANIMATION); instead of calling llAvatarOnSitTarget() a second time.
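     A minimal self-contained sketch of that pattern, assuming (as is typical) that the sit is detected in a changed event and the animation name is a placeholder:

```lsl
default
{
    state_entry()
    {
        llSitTarget(<0.0, 0.0, 0.5>, ZERO_ROTATION);
    }

    changed(integer change)
    {
        if (change & CHANGED_LINK)
        {
            key av = llAvatarOnSitTarget();
            if (av != NULL_KEY)
            {
                // Reuse the stored key rather than calling
                // llAvatarOnSitTarget() a second time
                llRequestPermissions(av, PERMISSION_TRIGGER_ANIMATION);
            }
        }
    }

    run_time_permissions(integer perm)
    {
        if (perm & PERMISSION_TRIGGER_ANIMATION)
        {
            llStartAnimation("sit");  // placeholder animation name
        }
    }
}
```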
  8. If relogging fixes the problem for you, then it indicates network issues on your end. Any time an object is modified in SL, it happens on the server first, and then that update must be streamed over the internet to all viewers in the region. It sounds like the update-packet failed to reach your viewer, so it never knew the poseball had changed to be visible. In effect, you were out of sync with the SL servers. Relogging is one way to fix it, but that's overkill. In this case, selecting the object in edit mode is often enough to refresh it and get back in sync.
  9. Wow... that does seem to work a lot better! My knowledge of this is almost nonexistent, but why is taking the cross product of UP%FWD%UP more accurate than just llRot2Fwd(llGetRot())?
  10. So I've been having more success with the casting-while-in-flight approach. Still not perfect, but I'm getting there. I'd like to backtrack a little and ask another question: looking at my code in the original post, does anyone see problems with the alignment process? I did originally say it's working "well enough", but I wonder if it could be better. Even in controlled tests (using a touch_start to align instead of being in physical motion), I would find that the object isn't perfectly aligned with the target surface normal. Depending on the orientation of the projectile, it is sometimes very noticeably misaligned with the target surface. I am using default prim cubes at various rotations for my test targets, though I can often see this with non-rotated targets too. The projectile is created such that at zero rotation, its top faces positive Z and its left faces negative X. When fired, it is rezzed moving "bottom first" and the ray always shoots out the bottom for alignment purposes. I assumed that the normal of the target would effectively be perfect to use as the new UP vector and I could just use the current FWD to calculate the rest for llAxes2Rot. Is there something else I need to compensate for?
  11. Thanks for the suggestions, Rolig and Innula! The actual object is a very small <0.14, 0.14, 0.03> cube. I tried increasing the depth of the child prim for the collision shape, but the results weren't much better than before. It did occur to me, though, that I can probably scrap the child prim and instead make the raycast start further back (relative to the object) and go a tad deeper. This seems to improve the detection rate a little bit - if the object buries itself in the surface, there's a good chance the ray will still start outside to compensate. I think a lot of the remaining error boils down to the elasticity of the object. If I can make it less bouncy, I think it would help a great deal. I've spent a while this afternoon experimenting with the gravity/friction/density/restitution parameters. Not much info on the wiki, but a restitution of 0 seems like a good first step. I did consider doing a series of raycasts while in flight, but was reluctant to go that route because I wanted to put as little stress on the server as possible. But then again, doing a short-range cast on a reasonably paced timer and aborting the iteration if nothing is detected probably wouldn't be that big a load. And I suppose that would be the most accurate solution: detecting the surface and acting before the collision itself has a chance to alter the projectile and thus make its ray miss after the fact. Raycasts are pretty lightweight, right?
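     The casting-while-in-flight idea could be sketched roughly like this (the 0.1 s polling rate and 0.5 m cast range are assumptions to be tuned, not values from the thread):

```lsl
// Short ray cast ahead of the projectile on a fast timer;
// each iteration aborts early if nothing is detected ahead.
default
{
    on_rez(integer start_param)
    {
        llSetTimerEvent(0.1);   // assumed polling rate
    }

    timer()
    {
        vector pos = llGetPos();
        // Cast a short distance along the current direction of travel
        vector end = pos + llVecNorm(llGetVel()) * 0.5;
        list results = llCastRay(pos, end,
            [RC_DATA_FLAGS, RC_GET_NORMAL,
             RC_REJECT_TYPES, RC_REJECT_AGENTS | RC_REJECT_PHYSICAL,
             RC_MAX_HITS, 1]);
        if (llList2Integer(results, -1) <= 0) return;  // nothing ahead yet

        llSetTimerEvent(0.0);
        llSetStatus(STATUS_PHYSICS, FALSE);  // halt BEFORE the impact happens
        // ...then re-orient/re-position using the returned normal and hit
        // position, the same way as in the collision_start version.
    }
}
```

     Acting before the physics engine gets to bounce the object sidesteps the whole elasticity problem.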
  12. I'm working on a small thrown object. The idea is that when the user "throws" it, the object will fake-attach itself to the surface it collided with. In practice, it will shoot out a ray, grab the normal, re-orient itself based on the surface normal and its own rotation, and then re-position itself to be right up against the surface. I've got it working well enough in controlled tests. The problem I have is during live tests. The object doesn't seem to stop itself fast enough in the collision_start event. It bounces off, or even clips through, the impact surface before the ray can be cast, and thus it finds nothing to adhere to. I've tried lowering the initial rez velocity to 5 m/s and adding a larger child prim for a slightly bigger collision buffer. Even with this, it still messes up half the time.

      collision_start(integer count)
      {
          llSetStatus(STATUS_PHYSICS, FALSE);
          vector startUp = llGetPos(); //will reuse these two vectors for llAxes2Rot below
          vector endFwd = startUp - (<0.0,0.0,0.3>*llGetRot());
          list results = llCastRay(startUp, endFwd,
              [RC_DATA_FLAGS, RC_GET_NORMAL,
               RC_REJECT_TYPES, RC_REJECT_AGENTS|RC_REJECT_PHYSICAL,
               RC_MAX_HITS, 1]);
          if (llList2Integer(results, -1) == 0) { llOwnerSay("no target!"); return; }
          llOwnerSay("hit object: " + llKey2Name(llList2Key(results, 0))); //debug: name of object hit

          //Given our rotation, and the vector normal of the face we hit,
          //combine the two to make us perpendicular.
          startUp = llList2Vector(results, 2); //reuse the start vector for "UP"
          endFwd = llRot2Fwd(llGetRot());      //reuse the end vector for "FWD"
          llSetLinkPrimitiveParamsFast(LINK_THIS, [
              PRIM_ROTATION, llAxes2Rot(endFwd, startUp%endFwd, startUp),
              PRIM_POSITION, llList2Vector(results, 1) + <0.0,0.0,0.016>*llGetRot()
          ]);
      }

      The very first thing I do is disable physics! How is that not fast enough? Any ideas on how to halt this thing the instant it collides? Trying to predict the impact surface by ray casting on_rez isn't an option, because the thrown object is physical and is affected by gravity...
  13. Seeing the entire texture is actually how it appears to the server: with repeats of <1.0, 1.0> and offsets of <0.0, 0.0>. So it could be that when the animation "breaks", the viewer defaults to those settings. Another thing: selecting or de-selecting a prim/object will restart the animations. When you link the objects, does the animation immediately stop, or does it happen when you de-select the object? The bug may be that for some reason the animation doesn't restart, and instead reverts to what the server sees as far as texture repeats/offsets.
  14. Yes, that is how texture animations work in SL. You basically have an image containing each frame of the animation in succession. The function actually works by telling the viewer to manipulate the texture repeats/offsets behind the scenes. It's mostly a client-side effect. That is why you can't set the repeats/offsets of an animated face while the animation is playing. For reference, this is only one of several animation methods. You can also have animation modes where the image smoothly slides or rotates across the face. But they all work with a static image.
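     A minimal sketch of the frame-sheet style (the 4x4 layout and 10 fps rate are just example values; the texture applied to the face is assumed to contain 16 frames in a grid):

```lsl
// Play a 4x4 frame sheet on every face at 10 frames per second.
// The viewer steps the repeats/offsets through the grid client-side.
default
{
    state_entry()
    {
        llSetTextureAnim(ANIM_ON | LOOP, ALL_SIDES,
                         4, 4,      // frames across (X) and down (Y)
                         0.0, 0.0,  // start frame; length 0.0 = all frames
                         10.0);     // rate in frames per second
    }
}
```

     For the sliding/rotating modes mentioned above, you'd swap in the SMOOTH or ROTATE flags instead; the same single static image is used either way.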
  15. If that's the script for all prims, then you could get around this problem by linking the signs first and then using scripts to apply the animations as desired. I would still recommend using a single script with llSetLinkTextureAnim() so you can easily control which child gets which animation without worrying about conflicts from other scripts elsewhere in the linkset. Also, if you only need to set the animation, and don't need to turn it on or off later on, then you can safely delete the script(s) from your object. Why your animations break upon linking is puzzling to me - it may be a bug. If you can recreate the problem reliably, you may want to consider reporting it with the bug-tracking tool JIRA.
  16. It's hard to say without seeing the scripts involved. Also, what do you mean when you say the child prims "revert to the original image"? Is the texture changing to something else, or is the animation simply stopping? Do any of your scripts have a changed() event that triggers on CHANGED_LINK? Texture animation is a property of the prim which generally persists once it is set. You can even remove the script and the animation will continue to play. I believe shift+drag copying is one of the few cases where the property is lost and not applied to the copy. But if you say the animation is stopping when you link the prims, my suspicion is that you have something in your script which messes up when the linkset is formed: either a changed event, or llSetLinkTextureAnim() with a link number target of zero. One alternative I can suggest (without knowing more) is to get rid of the child scripts. You most likely don't need them. Instead, just have one script and use llSetLinkTextureAnim() to set the animation for each child prim. Fewer scripts is almost always more desirable and efficient.
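     The one-script alternative could look like this (link numbers 2 and 3 and the animation parameters are assumptions for illustration; you'd match them to your own linkset):

```lsl
// A single root-prim script driving different animations on specific
// children, replacing per-child scripts entirely.
default
{
    state_entry()
    {
        // 4x4 frame sheet on link 2, 10 fps
        llSetLinkTextureAnim(2, ANIM_ON | LOOP, ALL_SIDES,
                             4, 4, 0.0, 0.0, 10.0);
        // smooth horizontal scroll on link 3
        llSetLinkTextureAnim(3, ANIM_ON | LOOP | SMOOTH, ALL_SIDES,
                             1, 1, 0.0, 1.0, 0.25);
    }
}
```

     Since texture animation persists as a prim property, this script can be deleted afterward if you never need to change the animations again.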
  17. Ah ha! Thank you Arton and Drongle! Using the Mesh->Edges->Edge Split option in Blender on the cylinder caps achieves the shading effect I was after. Just for clarification, in this context "sharp" and "hard" mean the same thing, right? And marking an edge as hard is akin to marking a seam, in the sense that both are used in preparation for another action? (Seam is to unwrapping as hard is to the Edge Split modifier.)
  18. I'm recreating a prim-based avatar as mesh components. For the most part, it's going well. However, I have a problem with the shading on the mesh counterparts. Pictured below is a rear view of the head and torso components, mesh on the left and original prims on the right. The shading is considerably darker or lighter on the cylinders that make up the head, neck, and shoulder bar. (This picture was taken with Advanced Lighting + Ambient Occlusion enabled, but the issue is even more apparent without it.) Originally I had everything using smooth shading, and that had similar problems with other areas of the mesh parts. I was able to solve some of that by using flat shading on the more boxy shapes. But doing the same for these cylinders won't work as nicely, because the faces will become too noticeable. Is there a trick to making a mesh cylinder shade similarly to a prim?
  19. You might still be able to use the burst and rate settings. How about having the height of the particle elongate over time? Could ribbons help here?
  20. I believe it's also worth mentioning that llPlaySound only supports one sound at a time. So if your script makes a second llPlaySound call before the first one finishes playing, it will cut off the first as it starts the second. Unattached sounds will not experience this and can be layered. Including llSetSoundQueueing(TRUE) would cause llPlaySound calls to let the previous one finish before starting the next.
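     A minimal sketch of the queueing behavior (the sound names are placeholders for sounds in the prim's inventory):

```lsl
// With queueing enabled, the second llPlaySound call waits for the
// first attached sound to finish instead of cutting it off.
default
{
    touch_start(integer num_detected)
    {
        llSetSoundQueueing(TRUE);
        llPlaySound("chime_one", 1.0);
        llPlaySound("chime_two", 1.0);  // queued; plays after chime_one ends
    }
}
```

     Note that the queue is short (historically only one pending sound), so this helps with back-to-back pairs rather than long playlists.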
  21. Doh! That was it. I didn't even realize I had any scale on the object. After applying it, the spin worked like a charm. Thanks both!
  22. Much thanks, Drongle! However... I'm afraid I'm still doing something wrong; the resulting spin is slightly squished. The radius is fine along the Y axis, but it almost looks like it's about half on the X axis. I set the pivot point to 3D cursor, and the cursor itself at the "center" of what will be the spun object. Testing it out by setting the degrees to 360 confirms that everything is "centered" as it should be, but it doesn't expand the correct distance on the X axis...
  23. I'm trying to model a building similar to this: [link] So far so good, but I am having difficulty with the rounded corner on the left side. Due to the windows on that part, I modeled the section as a straight, un-curved segment and intend to use the curve modifier to gracefully bend it 90 degrees. I've added enough vertical subdivisions to support a reasonable curve. However, I am having a hard time being "exact" when applying the modifier. That is to say, I've already modeled the front part of the building and want to align the curved part such that the near edge snaps to the front wall, and the "far" edge is perfectly 90 degrees from the other edge - so I may extrude that out to start working on the side wall. So far, every tutorial I've looked at seems to treat the curve modifier as an inexact process. Am I going about this the wrong way?
  24. If used excessively and improperly, yes, 1024x1024 textures are BAD. And the sad truth of the matter is that in SL, 1024x textures are most definitely used excessively and improperly. The best thing content creators can do moving forward is to produce more and more efficient content. Use fewer resources. SL's performance will improve for everyone.

      A 1024x1024 texture consumes 3MB of video card memory. If the texture also has an alpha channel (transparency) in it, bump that number up to 4MB. That's a lot for one texture - especially when you take into consideration that this is on top of EVERYTHING ELSE being rendered on your computer's GPU. So you really must think long and hard before deciding to use a large texture like that. What is the purpose of this object? Will it be large and generally viewed up close? If so, then maybe you could consider using a large texture. But if this is going to be a "small" object, like an avatar, vehicle, prop, or worn accessory, then you probably should not go with a 1024 texture.

      Take to heart what LepreKhaun said with regard to relative island size/placements. Now that we have total control over UV mapping, we can cram every last corner of a texture with imagery. The important parts that need to be clear can be sized bigger in the texture, giving them priority and more crisp detail. Anything else can be scaled down to take up less space on the texture. Consolidating multiple textures into one for a single mesh is great, but if that means using a 1024x texture (or four 512x's), you should really consider trying to bring that size down first. There should be as little unused space in the texture as possible.
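      The memory figures above come from simple arithmetic: width times height times bytes per pixel (3 for RGB, 4 for RGBA), as a sketch:

```python
# Uncompressed VRAM footprint of a texture, matching the 3 MB / 4 MB
# figures quoted above for a 1024x1024 image.
def texture_vram_mb(width, height, alpha=False):
    bytes_per_pixel = 4 if alpha else 3  # RGB vs. RGBA
    return width * height * bytes_per_pixel / (1024 * 1024)

print(texture_vram_mb(1024, 1024))              # 3.0 MB
print(texture_vram_mb(1024, 1024, alpha=True))  # 4.0 MB
print(texture_vram_mb(512, 512) * 4)            # four 512s: also 3.0 MB
```

      Note that four 512x512 textures cost the same memory as one 1024 - which is why consolidating alone isn't a win unless you also shrink the result.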
  25. Have not played with it enough to say - but what you describe sounds like server lag to me... hard to say, though. As a workaround, you could automate the respawn code to pause for a second after the initial teleport and then check coordinates. If the target avatar is standing in the respawn area, all is good. Else, retry the respawn code.
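      That pause-and-verify loop might be sketched like this (RESPAWN_POS, the 5 m tolerance, and the respawn() helper are all assumptions for illustration, not code from the thread):

```lsl
// After triggering a respawn, wait one second, then verify the avatar
// actually arrived at the respawn area; retry if not.
vector RESPAWN_POS = <128.0, 128.0, 25.0>;  // assumed respawn point
key gTarget;

respawn(key av)
{
    gTarget = av;
    // ...your existing teleport/move logic goes here...
    llSetTimerEvent(1.0);  // check the result in one second
}

default
{
    timer()
    {
        llSetTimerEvent(0.0);
        list details = llGetObjectDetails(gTarget, [OBJECT_POS]);
        vector pos = llList2Vector(details, 0);
        if (llVecDist(pos, RESPAWN_POS) > 5.0)  // assumed tolerance
            respawn(gTarget);                   // didn't arrive; retry
    }
}
```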