Everything posted by Fenix Eldritch

  1. As per the caveats on the wiki page for llGetObjectDetails, this function doesn't return information about items within an object's inventory. Unfortunately we don't have access to that kind of metadata directly. However, you can approximate it indirectly by keeping track of when the notecard's UUID changes - a notecard gets a new UUID whenever you save an edit to it.

     default
     {
         state_entry()
         {
             llOwnerSay("pre-edit uuid: "+(string)llGetInventoryKey("test")
                 +"\npre-edit time: "+llGetTimestamp());
         }

         changed(integer change)
         {
             if (change & CHANGED_INVENTORY)
             {
                 llOwnerSay("post-edit uuid: "+(string)llGetInventoryKey("test")
                     +"\npost-edit time: "+llGetTimestamp());
             }
         }
     }
  2. Via the official Map API, yes. You could make an HTTP request to a utility URL specifying the global coordinates, and the response (after some formatting) would be the region name. See https://wiki.secondlife.com/wiki/Linden_Lab_Official:Map_API_Reference#Utility_URLs Note you would need to divide the value returned by llGetRegionCorner by 256 to get the proper region offset. Once you have the region name, you can construct a SLURL or a clickable chat link with the Viewer URI formatting.

     Edit: Played around inworld and I think it's quite doable. Here's the basic idea: given any arbitrary global coordinate, use the Map API to get the region name. Find the region's corner (you can use the other utility URL, or llRequestSimulatorData with DATA_SIM_POS). Subtract the region corner from the arbitrary global coordinate to get the local position within the region. Use the discovered (escaped) region name and the discovered local XYZ position to create the URI link.
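The coordinate arithmetic described above can be sketched outside of LSL. This is an illustrative Python sketch (the helper names `region_grid_coords` and `local_position` are mine, not part of any API), assuming the standard 256m x 256m region grid:

```python
# Sketch of the coordinate math described above, assuming 256m x 256m regions.

def region_grid_coords(region_corner):
    """Convert a region corner (meters) to map-grid coordinates for a utility URL."""
    x, y, _ = region_corner
    return (int(x) // 256, int(y) // 256)

def local_position(global_pos, region_corner):
    """Local position within the region = global position minus region corner."""
    return tuple(g - c for g, c in zip(global_pos, region_corner))

# Example: a global coordinate inside a region whose corner is (256000, 256256, 0)
corner = (256000.0, 256256.0, 0.0)
print(region_grid_coords(corner))                           # grid coords for the map API
print(local_position((256128.5, 256300.0, 22.0), corner))   # position within the region
```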
  3. Asking to find existing products should be done in the Wanted forum. That said, what you are seeking can be accomplished with the command llSetScriptState. One of the caveats is that the function can only target scripts in the same prim. So you need to set up a communication framework between the HUD and the script that uses llSetScriptState - which in turn will set the state of the target script. Here is an extremely quick example using 3 scripts:

     // Object A
     // contains this script. Touching it will broadcast the START or STOP message
     integer toggle = FALSE;
     default
     {
         touch_start(integer total_number)
         {
             if(toggle = !toggle)
             {
                 llSay(1, "STOP");
             }
             else
             {
                 llSay(1, "START");
             }
         }
     }

     // Object B
     // contains this script and the "target script".
     // Receives the start/stop message and sets the state of the "target script" accordingly
     default
     {
         state_entry()
         {
             llListen(1, "", "", "");
         }

         listen(integer channel, string name, key id, string msg)
         {
             llOwnerSay("received signal: "+msg);
             if(msg=="STOP")
             {
                 llSetScriptState("target script", FALSE);
             }
             else if(msg=="START")
             {
                 llSetScriptState("target script", TRUE);
             }
         }
     }

     // Object B
     // The target script that will be set active/inactive by the other scripts.
     // For this example it's just the default hello world script.
     // But make sure to rename this script to "target script" so the other one can find it.
     default
     {
         state_entry()
         {
             llSay(0, "Hello, Avatar!");
         }

         touch_start(integer total_number)
         {
             llSay(0, "Touched.");
         }
     }

     Drop the first script into object A, and the other two into object B. Make sure to rename the 3rd script to "target script" since that's what the 2nd script is looking for. When you click Object B, it should say "Touched." as per usual. Clicking on object A will send the START or STOP signal, which will be heard by the 2nd script in object B. When the STOP signal is sent, you will notice that clicking on object B no longer does anything - because that script has been suspended. Clicking object A again to send the START signal will reactivate it.
  4. That is... less than optimal. You can (and should) combine all of those into one script per button. Heck, you honestly can (and in my opinion really should) combine everything into a single HUD script. It isn't as daunting as you may think.

     Yes, that is precisely what I was going to say. You can simply add a new set of variables for additional links/faces/whatever and reference them in the new messages following the original. That is a better way than essentially duplicating the script for each message.

     Going a few steps further, you can combine everything into one script in the HUD root. For example, suppose you name the buttons on your HUD something like "Button1", "Button2", "Button3"... and so on. Then you could put a script like this in the root and it would react to whatever button was clicked (provided the name matched):

     default
     {
         touch_start(integer total_number)
         {
             string buttonName = llGetLinkName(llDetectedLinkNumber(0));
             llOwnerSay("user clicked on "+buttonName); //debug readout
             if(buttonName == "Button1")
             {
                 //do stuff for button1
                 //you can put your messages for button1 here
             }
             else if(buttonName == "Button2")
             {
                 //do stuff for button2
                 //you can put your messages for button2 here
             }
             else if(buttonName == "Button3")
             {
                 //do stuff for button3
                 //you can put your messages for button3 here
             }
         }
     }

     You could then take those llSay commands you have in each button and copy them into the appropriate sections of this script instead. Make sure you also define all the needed variables as well.
  5. Your HUD script is basically constructing a custom message to send to the receiver. So sure, you could alter it to include additional link and face numbers in a single message. However, in doing that you would need to similarly alter the receiver script to understand the new message format and parse out the multiple link/face values from the single message. Alternatively, you might be able to just send extra messages for each link and/or face depending on the situation. That would probably work without needing to modify the receiver, providing you maintain the same format.

     As an aside, you can't define multiple discrete values in a single integer variable. Well... I mean, you could encode multiple values into the integer (deep down it's just 32 bits of data), but I imagine that's overkill for an application like this. The typical approach would be to use a list or some other kind of delineated string.

     Generally speaking, when you have a function that takes a link number as an input parameter, you are limited to either a specific individual link or a handful of special LINK_* constants that can define certain links or a limited range (self, root, the entire linkset, everyone else, or all child links). For faces, it's even more limited: you can only specify the single target face or ALL_SIDES. In order to target an arbitrary subset of link numbers (that aren't covered by the LINK_* constants) at the same time, you would need to use PRIM_LINK_TARGET. That signals that all the parameters that follow will apply to whatever link you specified there. You can have multiple PRIM_LINK_TARGET instances in the same parameter list to target other links in this manner.
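For the curious, the "encode multiple values into one 32-bit integer" aside would look something like this. Purely illustrative (the `pack`/`unpack` helpers are made up for this sketch), and as said above, a list or delimited string is usually the saner choice:

```python
# Illustrative only: packing a link number and a face number into one integer
# using bit shifts, as alluded to above.

def pack(link, face):
    """Store link in the high 16 bits and face in the low 16 bits."""
    return (link << 16) | (face & 0xFFFF)

def unpack(value):
    """Recover the (link, face) pair from the packed integer."""
    return (value >> 16, value & 0xFFFF)

packed = pack(3, 5)      # link 3, face 5 in a single integer
print(unpack(packed))    # (3, 5)
```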
  6. Down the rabbit hole we go! I also created a test plane in Blender and uploaded it with the following traits:

     - highest LOD was the model (default Blender 2x2 meter plane: 4 verts, 2 tris, no UV)
     - all other LODs used "use LoD above"
     - physics shape used high LOD, did not analyze or do anything else
     - rezzed plane is using PRIM physics type (and I scaled the size down slightly to better fit in my raytracer scene)

     I then sat it down in front of my in-world raytracer and performed some visibility tests. The raytracer can "render" what it sees in terms of physics shapes. Colors for the backdrop walls and floor are hard-coded, but anything else in the viewfinder gets a random color based on UUID. The grey prim in the background registers as a mirror when rendered. My avatar is also in the frame for reference (it appears as a purple coffin-shaped thingy). The raytracer shoots 1 primary ray into the scene and then shoots a secondary ray from the point of impact to the light source to determine shadows. Both rays have a maximum hit count of 1.

     Picture 1: the plane is facing the raytracer's camera (highlighted by me in edit mode). I would have expected the plane to render as a solid color. The speckled dark spots you see on it are a known issue. Basically, the ray did in fact hit the plane's surface, but the secondary ray that tries to rebound to the light source got interrupted, so the raytracer thinks that point is under shadow and shades it as such. I've seen this occur on torus and sphere prims under certain conditions. See BUG-202695. This seems to be a problem specific to my use case, but the important part is that the primary ray did in fact hit the plane at all points sampled.

     Picture 2: Oh... what? Those previously shadowed points are gone and now a lot of rays are shooting through the plane entirely and hitting the scene's backdrop behind it. What changed? The "thickness" of the object. Notice the Z component of the size in the edit tool: I reduced it to the minimum 0.010m. For some reason, this now allows many of the rays to punch clean through.

     Picture 3: Same as #2, but with the object rotated 180 degrees so the backface points towards the raytracer. Same results as #2.

     Picture 4: Repeating test #1 (front facing the raytracer's camera, default prim "thickness" of 0.75m) but with the physics shape type now set to Convex Hull. Now THIS is what I would have expected for all the other tests: a clean render with all rays hitting the surface and rebounding to the light source.

     For the sake of brevity, I'll skip the rest of the pictures and just report that repeating the other tests with convex hull produced the same results: when the prim thickness (Z size) was reduced to 0.010, some of the rays would manage to punch through the plane. Now, this was with the physics model being the high LOD itself, without any other options like analyzing, converting to hulls, or simplifying in the uploader.

     Sooooo... Re-uploading the same plane, setting physics to highest LOD and analyzing with the Solid method, Normal quality, no Smooth, and checking the Close Holes box (not that closing holes should matter in this case). The results shown by the analyzer report Triangles: 2, Vertices: N/A, Hulls: N/A (the same as for the original plane I uploaded). Interesting to note that the simplification section was greyed out at all times - perhaps this model is too simple? All tests using this model produced results identical to the first model - except the new plane rendered in a stylish red color.

     Here's the older thread Wulfie referenced. And if anyone wants to experiment, I've left my raytracer rezzed in my workshop - see profile for slurl.
  7. Hard to say without more info... What are the actual llCastRay command parameters you're using? What is the physics shape type of the plane? And as a sanity check, what does the plane's physics shape look like? You can get a quick visual by going to Develop > Render Metadata > Physics Shapes. You'll probably want to do this in the sky to reduce visual clutter from nearby objects.
  8. By detect, I presume you mean something to the effect of whether your avatar is currently in or under the water. The only command that deals with water is llWater. It returns a float which is the height of the water level at the object's current position plus an offset. Since the object is an attachment, its position is the same as our avatar's position, so we can just specify ZERO_VECTOR as the offset.

     To tell if your avatar is "in" the water, you would want to compare the value returned by llWater(ZERO_VECTOR) with the Z component of your avatar's current position vector returned by llGetPos. (You can access the individual X, Y, or Z components of a vector by appending .x, .y, or .z respectively when referencing the variable.) If your vertical position is less than or equal to the water level, then you can conclude that you are in the water.

     Note that the position returned by llGetPos relates to the "center" of the avatar, which is about waist-high. So if you want to get the coordinates of, say, your feet, you would need to compensate for that. One way is to subtract half the avatar's height from its current position. Something like this:

     default
     {
         touch_start(integer total_number)
         {
             vector av = llGetAgentSize(llGetOwner()); //get (approximate) size of avatar
             vector p = llGetPos()-<0,0,(av.z/2)>; //get our position & subtract half avatar height from z component (this gives the position of our feet)
             float w = llWater(ZERO_VECTOR); //get water height at our position
             w = p.z-w; //subtract water height (w) from our feet height (p.z)
             if (w <= 0)
             {
                 llOwnerSay("you are in water");
             }
             else
             {
                 llOwnerSay("you are "+(string)w+" meters above the water");
             }
         }
     }

     To have your attachment react to the avatar contacting the water, it would need to run that check on a repeating timer, perhaps with an interval of once every second or greater. Note that this is only viable for "Linden Water"... it won't work for user-made prims that are meant to look like water.
  9. Yup, there are several caveats to KFM, and that is one of them (see the wiki page I linked above for all of them). You can't use scripted commands to move any part of the linkset while the KFM is going. You need to wait for it to fully complete or stop it with one of the KFM commands. In fact, that's how several of the smooth door rotation examples work if I recall correctly.
  10. Keyframed Motion may be another option, depending on the scenario.

      float rotationtime = 2.0;
      default
      {
          touch_start(integer total_number)
          {
              llSetKeyframedMotion(
                  [llEuler2Rot(<0,0,90> * DEG_TO_RAD), rotationtime],
                  [KFM_DATA, KFM_ROTATION]
              );
          }
      }
  11. If you do happen to get your hands on that lamp post, shoot me a copy. I'd love to have a piece of history like that.
  12. Hmm, well that rules that out then. I just realized how I could find the answer! Since all icons are in a single atlas file, either the HTML or CSS of the web profile should have some definitions referencing them. Saving a local copy of the page, I found some familiar strings buried deep in the CSS... (repeating the image for reference)

      icon-beta_resident
      icon-charter_member
      icon-concierge
      icon-lifetime_member
      icon-linden_lab_employee
      icon-premium
      icon-resident

      (These were mixed in with all the other icon references, but they appeared in the same order as the icons visually.)

      So Lindal was correct, #7 wasn't for mentors as I had guessed earlier. It turns out to be an abandoned regular resident badge. And Premium members were also at one point intended to have a badge - but that too apparently was abandoned, confirming Rowan's remarks. The Knowledge Base article for Accounts makes a passing reference to Concierge and that residents who have access to it can get special live chat. So I'm guessing this badge would have been for those concierge agents. Thanks for everyone's input!
  13. Ok, I found another one in the wild. The key icon (#4) represents "Lifetime Member". I forgot about that one! I also found a screenshot of the lime person icon (#7) circa January 2011 here: https://danielvoyager.wordpress.com/2011/07/03/look-back-at-previous-redesigns-of-the-second-life-profiles/ Viewing the same profile today does not show the icon. So I'm thinking it might have been either the default resident icon, or for mentors, as the screenshot does reference them being an ex-mentor. Can anyone recall if mentors, greeters, live helpers, etc. had any kind of special account status?
  14. If the icon is displayed, it'll be to the right of the avatar's name on the web profile, just above the gear options button. I found the set of seven by viewing the image in its own tab, only to discover it was a texture atlas of all buttons/icons in the same png file. Very odd that you found a Charter account that doesn't display as such... Unless there is a way to suppress that?
  15. I've long known that there are various kinds of SL accounts (not talking about premium vs non-premium), but was recently reminded that they have special icons identifying them on the web profiles. I did a little digging and pulled out what I think are seven different icons. I recognize some of them:

      Beta Resident
      Charter Member
      Concierge*
      Lifetime Member
      Linden Lab Employee
      Premium*
      Resident*

      Does anyone know what the other ones are? I don't believe they're for regular or premium residents, as that doesn't show in the web profiles (unless it's since been deprecated).

      Edit: identified Concierge, Lifetime Member, Premium, and Resident by looking at the web profile's CSS file.
  16. My guess is that it is another example of objects that were created during the alpha/beta phase of SL which were then migrated over to the main grid. The metadata references the creator, whose same account was recreated on the main grid and thus has a newer creation date than the object itself. The same thing can be observed with "The Man Statue". https://secondlife.com/destination/the-man As it happens, the creator of the lamppost referenced in the OP is listed as one of the Beta Contributors on the monument in Plum.
  17. Just to be clear, when you say "20 list entries", are you talking about individual elements in the list, or the strided "records" that are each made up of 4 list elements (timestamp, avatar name, online_status, previous_timestamp)? Because I also tried removing 10 records (40 list elements) and that still produced a crash when the memory limit was 6k. When you're pruning the list, make sure you're removing the proper amount.
  18. Did a little more tinkering and found that simply deleting 80 list entries (20 full records) in your prune code block seems to be enough to keep the script going, even if you keep your original memory cutoff point at 6000 bytes.

      lHistory = llDeleteSubList(lHistory, 0, 79); // Remove the oldest 20 records (1 rec = 4 list elements) from history

      I'm a bit more confused now about the overhead of llDeleteSubList, but it seems that with about 6k remaining, it wasn't able to operate on your list to return that massive list minus 4 entries. Taking out 80 at a time seems to be more manageable for it at that point.
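The strided-record arithmetic here is easy to get wrong, so here's the same pruning logic sketched in Python for clarity (the `prune_oldest` helper is illustrative, not part of any API):

```python
# Mirror of the pruning logic above: the history list is strided, with
# 4 elements per record (timestamp, avatar name, online_status, previous_timestamp).
STRIDE = 4

def prune_oldest(history, records_to_drop):
    """Drop the oldest records_to_drop records, i.e. records_to_drop * STRIDE elements."""
    return history[records_to_drop * STRIDE:]

# 30 fake records of 4 elements each
history = [f"r{i}f{j}" for i in range(30) for j in range(STRIDE)]
pruned = prune_oldest(history, 20)   # equivalent of llDeleteSubList(history, 0, 79)
print(len(pruned) // STRIDE)         # 10 records remain
```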
  19. I modified your script slightly to have some additional diagnostic reports:

      - Added global integer variable "count" which tracks the number of full records added.
      - count is incremented after adding a new record.
      - count is decremented when removing the oldest record.
      - Added llOwnerSay("prune:"+(string)llGetFreeMemory()+" "+(string)count); to the start of the code block for removing the oldest record.
      - Added llOwnerSay("prune end"); to the end of the code block for removing the oldest record.
      - Added llSetText((string)count,<1,1,1>,1.0); after adding a full record to display total contents visibly on the prim.

      Next I used this companion script to drive the test:

      integer toggle = FALSE;
      integer count;
      default
      {
          touch_start(integer total_number)
          {
              if(toggle = !toggle)
              {
                  llOwnerSay("started");
                  count = 0;
                  llSetTimerEvent(0.075);
              }
              else
              {
                  llOwnerSay("stopped");
                  llSetTimerEvent(0.0);
                  llMessageLinked(2, 99, "", "");
              }
          }

          timer()
          {
              count++;
              if(count%100 == 0)
              {
                  llMessageLinked(2, 99, "", "");
              }
              llMessageLinked(2, 2, "Fenix Eldritch,1,"+(string)llGetUnixTime(), "");
          }
      }

      It will rapid-fire populate your history script and also initiate a status report about every 100 entries. Within a minute I got the stack-heap collision:

      [06:38] A: started
      [06:38] B: Hello from the History Server Script, Free Memory: 50190 / Lines recorded: 99
      [06:38] B: Hello from the History Server Script, Free Memory: 40790 / Lines recorded: 199
      [06:38] B: Hello from the History Server Script, Free Memory: 31390 / Lines recorded: 299
      [06:38] B: Hello from the History Server Script, Free Memory: 21694 / Lines recorded: 399
      [06:38] B: Hello from the History Server Script, Free Memory: 12294 / Lines recorded: 499
      [06:39] B: prune:5996 566

      It blew up on the very first prune. It didn't even get to the end of the prune block, which indicates to me that it's running out of memory while trying to remove the oldest entry for the very first time. LSL passes its variables "by value", meaning it creates a temporary copy of them in memory when passing them to functions. So when you call llDeleteSubList(lHistory,0,3), for a brief moment you have two copies of that list - which pushes you well over the memory limit.

      I think you need to perform your cleanup much earlier, at a point where you have more memory to work with the lists. I doubled the memory cutoff limit from 6000 to 12000 and that seemed to work. But I would strongly suggest you also remove more than just one entry. Otherwise you're going to be doing cleanup for every new entry added after you hit the initial threshold - and that's a lot of thrashing that can easily be minimized. Perhaps remove 400 list entries (100 full records) when you prune. That will give you some more breathing space.

      Edit: I must be incorrect about the list duplication; that might only apply to user functions? Because at the point when you perform your prune, you're using about 53kb. And with 12k remaining, that's not enough to hold a 2nd copy, but it is enough to keep the script going through the prune process. Not exactly sure what is going on under the hood...
  20. I'm a bit green to this area, but here's what I think is going on (please correct me if I get anything wrong):

      UTF-8 can represent characters using a variable number of bytes, between 1 (8 bits) and 4 (32 bits). Common ASCII characters, for example, only need 1 byte, while the more interesting Unicode symbols can require up to 4. Base64 works by expanding 3 bytes of arbitrary binary data into 4 bytes (padding any extra space with "=" characters). The output is printed as alphanumeric characters plus a few other symbols. But keep in mind that it's always going to output in multiples of 4 characters.

      So the bespoke code snippet is taking your input string and encoding it to base64. It then takes a substring of that from index 0 to 31... basically taking the first 32 characters of that newly encoded base64 string. It then converts that substring back to UTF-8. As it turns out, a base64 string 32 characters long is just enough to contain the first 24 bytes' worth of the original UTF-8 input string. Consider this breakdown:

      string theString = "abcdefghijklmnopqrstuvwxyz";
      llOwnerSay("1:"+theString);
      llOwnerSay("2:"+llStringToBase64(theString));
      llOwnerSay("3:"+llGetSubString(llStringToBase64(theString), 0, 31));
      llOwnerSay("4:"+llBase64ToString(llGetSubString(llStringToBase64(theString), 0, 31)));

      /*
      1:abcdefghijklmnopqrstuvwxyz
      2:YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXo=
      3:YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4
      4:abcdefghijklmnopqrstuvwx
      */

      The input string is 26 ASCII characters (and 26 bytes) long, so the resulting base64 string is 36 characters long. Remember that base64 encodes 3 bytes into 4-byte segments. Breaking it down further...

      abc  def  ghi  jkl  mno  pqr  stu  vwx  yz
      └┬┘  └┬┘  └┬┘  └┬┘  └┬┘  └┬┘  └┬┘  └┬┘  └┬┘
      ┌┴─┐ ┌┴─┐ ┌┴─┐ ┌┴─┐ ┌┴─┐ ┌┴─┐ ┌┴─┐ ┌┴─┐ ┌┴─┐
      YWJj ZGVm Z2hp amts bW5v cHFy c3R1 dnd4 eXo=

      We can see how each block of 3 characters (which are 1 byte each) fits into its corresponding 4-byte base64 segment. So we can take the first 8 segments (32 characters/bytes) and convert them back to UTF-8, which gives us the original first 24 bytes, and as such, the first 24 ASCII characters of the input string. And this could be considered the "worst" case, where every character of the input string was 1 byte. If the input string had characters that were larger than 1 byte, the resulting conversion would give us fewer characters in the end. I hope I explained that correctly...
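The same arithmetic can be checked outside of LSL with any base64 implementation; here's the equivalent of the breakdown above using Python's standard library:

```python
import base64

# Equivalent of the LSL breakdown above: 26 ASCII bytes -> 36 base64 chars,
# and the first 32 base64 chars decode back to exactly 24 bytes.
the_string = "abcdefghijklmnopqrstuvwxyz"
encoded = base64.b64encode(the_string.encode("utf-8")).decode("ascii")
print(encoded)            # YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXo=

truncated = encoded[:32]  # first 8 four-character segments
decoded = base64.b64decode(truncated).decode("utf-8")
print(decoded)            # abcdefghijklmnopqrstuvwx
print(len(encoded), len(decoded))  # 36 24
```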
  21. The wiki does list a turning speed of 90 degrees per second. https://wiki.secondlife.com/wiki/Voluntary_Movement_Speeds I was curious, so I threw together my own crappy measurement script to try and verify the numbers shown there.

      integer toggle = FALSE;
      key av;
      vector lastDeg;
      vector currDeg;

      default
      {
          state_entry()
          {
              av = llGetOwner();
          }

          touch_start(integer total_number)
          {
              if(toggle = !toggle)
              {
                  llOwnerSay("ON");
                  lastDeg = llRot2Euler(llGetRot())*RAD_TO_DEG;
                  llSetTimerEvent(1.0);
              }
              else
              {
                  llOwnerSay("OFF");
                  llSetTimerEvent(0.0);
              }
          }

          timer()
          {
              integer avState = llGetAgentInfo(av);
              string s;
              if(avState & AGENT_WALKING){s="Walking: ";}
              if(avState & AGENT_ALWAYS_RUN){s="Running: ";}
              if(avState & AGENT_FLYING){s="Flying: ";}
              if(avState & AGENT_CROUCHING){s="Crouching: ";}
              currDeg = llRot2Euler(llGetRot())*RAD_TO_DEG;
              lastDeg = lastDeg-currDeg;
              s += (string)llVecMag(llGetVel()) + "\nRot: "+(string)llFabs(lastDeg.z);
              lastDeg = currDeg;
              llSetText(s,<1,1,1>,1.0);
          }
      }

      When active, my script polls every 1.0 seconds. The reported numbers seem to be pretty close to what's listed on the wiki. As far as the rotation is concerned, it does seem to be about 90 degrees every second (the difference between recorded values alternates between around 90 and 270).

      Edit: As an aside, if you hold down SPACE, it acts as a sort of handbrake for avatar movement - though it doesn't stop you completely. Here are the speed values for each state when SPACE is applied (note that when flying or falling, all control is arrested and the avatar slowly descends at 0.21 meters/second).

      state      speed   SPACE
      -----------------------------
      Walking     3.20    0.80
      Running     5.13    1.28
      Crawling    2.00    0.50
      Flying     16.00    0.21
      Fly Up     16.00    0.21
      Fly Down   22.87    0.21
      Falling    52.83    0.21

      And rotation reports all zeroes, despite you still appearing to turn on your screen. I wonder if you appear to stop rotating to others when doing this?
  22. Then I'm afraid that at present you're probably limited to the kludge I posted above, or Frionil's suggestion. But your feature request seems like a pretty simple and iterative expansion, so I think it might have a good chance at getting accepted. Of course, don't expect it to be implemented soon even if it is.

      Speaking of, there is something you should probably be mindful of. When a particle stream targets an avatar, it zeroes in on the agent's center, which would be the avatar's... um... crotch. So even in the best scenario where your request is implemented, you might have to contend with that.

      Edit: Frionil's suggestion would probably be the way to go then. You'd be wearing both the emitter and target prims and using some tricks to offset the worn emitter to be more or less at the target avatar's position.
  23. How does it fail? What does your combined script look like? As a matter of course, check out the documentation on the wiki regarding the IF THEN ELSE flow control in LSL. It will give you examples with explanations.

      Breaking the logic down by writing out what you want to happen can help (as you've already done). The next step is to translate that into LSL bit by bit. "IF the avatar that touched the object has the same active group, THEN do stuff. ELSE warn them with a message." Your first script does exactly that, except the "stuff" it does is a single command of offering inventory. So you would want to replace that with the commands from the other script. When you want your IF statement to perform multiple commands on a given branch, you should enclose each branch within curly braces {} to define its scope - basically, its contents. (Notice how each event uses curly braces to define its scope in a similar way.)

      if(condition)
      {
          //do stuff if condition evaluated to TRUE
      }
      else
      {
          //otherwise do this stuff if condition evaluated to FALSE
      }

      In this case, your condition would be (llDetectedGroup(0)). The TRUE branch would contain the 2nd script's commands to define the channel, open the listener, and generate the textbox. And the FALSE branch would have the warning message. And of course, you would copy over the rest of the stuff from the 2nd script: the global variable and the entire listen event.

      If you are very new to scripting (or programming in general), I highly recommend spending some time reading through the tutorials on the wiki. They have some good beginner documents that can get a newcomer up to speed on the basics. https://wiki.secondlife.com/wiki/Category:LSL_Tutorials
  24. It's possible in a very limited way. By using an ANGLE or ANGLE_CONE pattern combined with a tight angle begin/end slice, a burst radius, and having the emitter target itself, you can make the particles generate at a distance from the emitter and flow into it instead of the other way around.

      default
      {
          state_entry()
          {
              llParticleSystem([
                  PSYS_PART_START_SCALE, <0.0,0.2,FALSE>,
                  PSYS_PART_END_SCALE, <0.2,0.0,FALSE>,
                  PSYS_PART_START_COLOR, <1.0,1.0,1.0>,
                  PSYS_PART_END_COLOR, <0.431,0.823,1.0>,
                  PSYS_PART_START_GLOW, 0.0,
                  PSYS_PART_END_GLOW, 1.0,
                  PSYS_PART_START_ALPHA, 1.0,
                  PSYS_PART_END_ALPHA, 1.0,

                  //=======Blending Parameters:
                  //PSYS_PART_BLEND_FUNC_SOURCE, 0,
                  //PSYS_PART_BLEND_FUNC_DEST, 0,

                  //=======Production Parameters:
                  PSYS_SRC_BURST_PART_COUNT, 5,
                  PSYS_SRC_BURST_RATE, 0.1,
                  PSYS_PART_MAX_AGE, 2.0,
                  PSYS_SRC_MAX_AGE, 0.0,

                  //=======Placement Parameters:
                  PSYS_SRC_PATTERN, 8, // 1=DROP, 2=EXPLODE, 4=ANGLE, 8=ANGLE_CONE

                  //=======Placement Parameters (for any non-DROP pattern):
                  PSYS_SRC_BURST_SPEED_MIN, 0.0,
                  PSYS_SRC_BURST_SPEED_MAX, 0.0,
                  PSYS_SRC_BURST_RADIUS, 5.0,

                  //=======Placement Parameters (only for ANGLE & CONE patterns):
                  PSYS_SRC_ANGLE_BEGIN, 0.00,
                  PSYS_SRC_ANGLE_END, 0.1,
                  //PSYS_SRC_OMEGA, <0,0,0>,

                  //=======After-Effect and Influence Parameters:
                  PSYS_SRC_TARGET_KEY, llGetKey(),
                  PSYS_SRC_ACCEL, <0.0,0.0,0.0>,

                  PSYS_PART_FLAGS, 0
                      |PSYS_PART_INTERP_COLOR_MASK
                      |PSYS_PART_INTERP_SCALE_MASK
                      |PSYS_PART_FOLLOW_VELOCITY_MASK
                      |PSYS_PART_TARGET_POS_MASK
                      |PSYS_PART_EMISSIVE_MASK
              ]);
          }
      }

      But as said above, this is extremely limited. You would have to continually update the orientation of the emitter to face the target avatar, as well as update the radius based on how far away the target avatar was. It would be much saner (and look better) to have your emitter positioned near the target avatar and have the resulting particles target your attachment. Either Frionil's method, or making the emitter a small, invisible, floating, phantom vehicle that could essentially follow the target, would yield better results.