Myrmidon Hasp

Everything posted by Myrmidon Hasp

  1. There's something wrong with that link; the video gives an error. In trigonometry, theta is the acute angle of interest in a right triangle; in my diagram it is the angle formed between the red and black dashed lines. I'll see if I can find a decent tutorial or scratch one up to explain it better. The secondary object is only needed once, to accurately figure out the distance from the camera to the HUD before it gets hard-coded as cam2hud. I'm pretty sure that distance effectively changes between different fields of view and screen resolutions. The code I posted isn't going to work without an accurate value for cam2hud, and it needs adjustments similar to Wulfie's applied to the values from llDetectedTouchST() to compensate for the HUD prim's dimensions and to move the origin from the bottom-left corner to the center. See the sketch below.
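     A minimal sketch of that adjustment, assuming the touched face is oriented so that s maps to the prim's Y dimension and t to its Z dimension (the mapping varies by face, so check yours):

         // call this from inside a touch event
         vector TouchOffset(integer num)
         {
             vector st = llDetectedTouchST(num); // <s, t, 0>, each component 0..1 across the face
             vector size = llGetScale();         // the HUD prim's dimensions
             // shift the origin from the bottom-left corner to the face's center
             // and scale the result to meters
             return <(st.x - 0.5) * size.y, (st.y - 0.5) * size.z, 0.0>;
         }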
  2. The snippet to get the rotation and apply it should look something like this:

         float cam2hud = 1;  // the distance from the camera origin to the HUD
         vector touched;     // the vector of the detected touch (I'm assuming it's in relationship to screen center)
         vector euler;       // the euler we're going to convert into a rotation
         rotation touchrot;  // the rotation that points at touched from the camera position

         euler.x = 0;
         euler.z = -llAtan2(touched.x, cam2hud); // horizontal theta (yaw about Z) given opposite and adjacent
         euler.y = -llAtan2(touched.y, cam2hud); // vertical theta (pitch about Y) given opposite and adjacent
         touchrot = llEuler2Rot(euler);          // converts the euler into a quaternion rotation

         // apply touchrot in the camera's local frame first, then the camera's rotation
         llCastRay(llGetCameraPos(), llGetCameraPos() + <50,0,0> * touchrot * llGetCameraRot(), []);

     I can't remember where the origin of detected touches originates, so you may need to adjust so it relates to screen center. As for the test device, it would first need an object with a known position in relation to your camera's in-world position, following my previous diagram, to calculate thetas in-world with llAtan2(). Then, without moving or rotating the camera, use your HUD and touch the center of the object to get that touch offset. Use that offset to calculate cam2hud with:

         float cam2hud = touched.x / llTan(HorizontalTheta);
         // or
         float cam2hud = touched.y / llTan(VerticalTheta);

     It might be easier to script an object to position itself at a preset offset in relation to your detected camera's position and rotation than to set things up by hand and mess around figuring out offsets and rotations afterwards. Kinda like this:

         llSetPos(llGetCameraPos() + <10,5,4> * llGetCameraRot());
         float HorizontalTheta = llAtan2(5,10);
         float VerticalTheta = llAtan2(4,10);

     Edit: not entirely sure if I got the axes or signs right in llAtan2(); I haven't tested any of this stuff yet and haven't used the function in over a decade. Damn rust.
  3. It looks like the ending point is only moving slightly in relation to the touch offset because you're just moving it forward 50m and shifting by the touch offset, when you need to determine the rotation between the camera position and the touched position and then multiply that by <50,0,0>. I scratched up a diagram that may help to explain things:

     The point at the bottom of the vertical dashed line is the camera position in world and the camera's origin in relationship to the HUD/screen.
     The smaller horizontal line represents the face of your HUD.
     The square represents the object in world that you visually click on.
     The larger horizontal line represents the width of the screen in world at the distance of the clicked object.
     The large X represents the point on the HUD that is touched.
     The vertical dashed line represents the line along llGetCameraRot(), which runs between the camera position and screen center.
     The dotted line represents the edge of the field of view.

     The script you've posted appears to set the llCastRay() along the solid red line. The dashed red line is what you need; it represents your <50,0,0> offset multiplied by the rotation between the camera's origin and the clicked point on the HUD. I don't know the effective distance between the camera and the HUD, but it is possible to work it out mathematically using the camera's and object's in-world positions in relation to the camera's and HUD click's positions. Speaking in trigonometry terms, we can calculate thetas from the in-world opposite and adjacent lengths, then use those thetas and the HUD click's opposite lengths to find the distance between the HUD and the camera. I'm not too familiar with 3D rendering principles, so there might be more to it, but I think this approach should get you close enough to what you're after.
  4. You'd need to use llGetCameraPos() for the llCastRay() starting vector. Then use llGetCameraPos(), llGetCameraRot(), llDetectedTouchPos(), llGetAttached() and some math to figure out the ending vector. You can use llGetCameraPos() plus a vector offset multiplied by llGetCameraRot() to get a point far into the distance from screen center. The tricky part is using the attachment-point-adjusted touch data to generate a rotation that accommodates off-center touches. I can't remember the axes offhand, but you'd probably want to compare the center of the HUD's vector to a normalized vector of the detected touch's horizontal and vertical components with a preset high-magnitude third component. This should get close to where you've clicked. There should be a way to calculate things more accurately by taking the camera's viewing angle into account, but I don't know the calculations or a way to grab the needed data.
  5. If your ultimate destination is further than the llMoveToTarget() limit, you just have to run it in a loop or a timer event with smaller steps that move you toward your goal. llVecNorm() multiplied by a scalar and added to llGetPos() should break it into a small enough step, as in the sketch below.
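     A minimal sketch of that timer approach, assuming a physical object and a hypothetical destination:

         vector goal = <200.0, 128.0, 25.0>; // hypothetical destination
         float step = 30.0; // step length, kept well inside llMoveToTarget()'s range limit

         default
         {
             touch_start(integer total_number)
             {
                 llSetTimerEvent(0.5);
             }
             timer()
             {
                 vector here = llGetPos();
                 if (llVecDist(here, goal) <= step)
                 {
                     llMoveToTarget(goal, 0.5); // final hop
                     llSetTimerEvent(0.0);
                 }
                 else
                 {
                     // one normalized step toward the goal
                     llMoveToTarget(here + llVecNorm(goal - here) * step, 0.5);
                 }
             }
         }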
  6. Keyframe motion is smoother in my experience. However, it only moves or rotates an object; it can't change its size.
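     For comparison, a minimal llSetKeyframedMotion() sketch, assuming a non-physical object (the distances and times are arbitrary examples):

         default
         {
             touch_start(integer total_number)
             {
                 // two 2-second keyframes: each moves 2.5m along X and yaws 45 degrees
                 llSetKeyframedMotion(
                     [<2.5, 0.0, 0.0>, llEuler2Rot(<0.0, 0.0, PI / 4>), 2.0,
                      <2.5, 0.0, 0.0>, llEuler2Rot(<0.0, 0.0, PI / 4>), 2.0],
                     []);
             }
         }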
  7. Pretty sure you can't spoof a UUID. You can make yourself a set of gestures to eliminate the need to type things out every time. It may also be worthwhile to IM the creator and ask politely if they would change the script to allow HUD communications. If the boat's scripts are modifiable you could even add in the functionality yourself.
  8. You may be able to use llCastRay() to check whether your turret's firing solution will hit your ship, and hold fire for that cycle. Feed your turret's emitter location and your firing target's location into llCastRay() with the appropriate filters, and test the returned keys against the key of your ship (see the sketch below). If you're using llSensor() to get your target's location, you can limit the detection area to exclude your ship. I'm not sure how useful this is nowadays, but you can also precalculate a volume that contains your ship and test it against your turret's local rotation. That basically does a cast ray the hard way, without having to poll the sim; it came in handy when llCastRay()s were unavailable or failed more frequently.
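     A minimal sketch of that check, assuming hypothetical emitter/target vectors and that shipKey has already been set to your ship's key:

         key shipKey; // your ship's key, set elsewhere (e.g. from a sensor or chat)

         integer ClearToFire(vector emitter, vector target)
         {
             list hits = llCastRay(emitter, target,
                 [RC_REJECT_TYPES, RC_REJECT_LAND, RC_MAX_HITS, 3]);
             integer status = llList2Integer(hits, -1); // hit count, or negative on error
             if (status <= 0) return TRUE; // nothing in the way (handle errors as you see fit)
             integer i;
             for (i = 0; i < status; ++i)
             {
                 // results come in pairs: key, position
                 if (llList2Key(hits, i * 2) == shipKey) return FALSE; // hold fire this cycle
             }
             return TRUE;
         }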
  9. llCastRay() returns a filtered list of keys and positions along a ray between start and end vectors. It depends on whether you want to use it for the tracking or for the shooting; I don't think it would be particularly efficient for any tracking tasks. It can be useful when simulating piercing projectiles from a turret and using the returned data to report hits to some other game object. Any other auto-turret tasks are better realized with physical projectiles, llGetObjectDetails() or regular sensor calls.
  10. Took about a week to get mine straightened out. Things went quickly once they got up to me in the queue.
  11. No need to convert your position vector with llEuler2Rot(). When you multiply or divide a vector by a rotation, it rotates that vector. This should get your child prim moving in the direction you want, but it will not change its rotation:

          vector localpos; // the local position you want, before the root's rotation is applied
          llSetLinkPrimitiveParamsFast(LINK_THIS, [PRIM_POS_LOCAL, localpos / llGetRootRotation()]);
  12. llSetScriptState() or llRemoveInventory() will accomplish those tasks. You'd also need to set up a communication system between the HUD and the script you want to control. llSay() and llListen() would be the easiest, and you may want to add some conditionals to specify which objects or scripts you want to control, as in the sketch below. Here's a link to the wiki's inventory subcategory: http://wiki.secondlife.com/wiki/Category:LSL_Inventory And here's the communication subcategory: http://wiki.secondlife.com/wiki/Category:LSL_Communications
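      A minimal sketch of the receiving side, assuming a hypothetical channel number and a script named "target script" sitting in the same prim; the HUD would just llSay() the matching commands on that channel:

          integer CONTROL_CHANNEL = -51234; // hypothetical channel shared with the HUD

          default
          {
              state_entry()
              {
                  llListen(CONTROL_CHANNEL, "", NULL_KEY, "");
              }
              listen(integer channel, string name, key id, string message)
              {
                  if (llGetOwnerKey(id) != llGetOwner()) return; // only obey our own objects
                  if (message == "pause") llSetScriptState("target script", FALSE);
                  else if (message == "resume") llSetScriptState("target script", TRUE);
                  else if (message == "delete") llRemoveInventory("target script");
              }
          }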
  13. My memory is a little fuzzy, but I think if you divide the local position by the root prim's global rotation you can get the child prim to move where you want. If I'm remembering correctly, the local position of a child prim always gets rotated by the root prim's global rotation, and dividing by a rotation applies the opposite effect of multiplying. So dividing by the root's rotation in advance cancels out the rotation that gets applied.
  14. You could always file a JIRA for it, but I think it has been requested many times already. I don't remember exactly, but I think the limitation has something to do with how a worn object's position doesn't really get tracked by the server's physics engine, on account of avatar animations being mostly client-side.
  15. There's no way to sit on an attachment. You've pretty much outlined all the ways I'm aware of to fake it, though: use animations and either an object that is sat upon to line up the poses or a movement script to keep them aligned. Edit: typo
  16. I did a little bit of research on that "Le Massosein" unit. Apparently it's for firming and not enlargement. The cup pressurizes with cold water and causes temporary tissue shrinkage to deliver a firming effect. There were similar-looking devices at the time that did use water to create a mild vacuum. I can't imagine either system being particularly comfortable, but they sound better than the high-voltage electrostimulation. Here's the site; the Le Massosein details are under the "Douching" subsection: https://cosmeticsandskin.com/ded/bust.php EDIT: Almost forgot to warn that the boob enhancement site contains depictions of bare boobs.
  17. You need to keep track of which step you're on. You need to test for the correct message and advance the step if it matches. You need to reset the step when an incorrect message is received. And you need to trigger the ultimate effect when step three is complete. So you'll need something like this in your listen event:

          if (step == 0 && message == "step1") ++step;
          else if (step == 1 && message == "step2") ++step;
          else if (step == 2 && message == "step3")
          {
              DoTheThing();
              step = 0;
          }
          else step = 0; // any wrong message resets the sequence

      You'll want to declare step as a global (integer step;) outside of the listen event so it doesn't get reset each time a message is received. The conditions also need to be chained with else-ifs; with independent ifs, the final else would reset the step even after a correct message. Edit: forgot to mention some caveats
  18. The full text gets revealed with mouseover. It goes up to Pad_Button15.
  19. I've always used shortcut-keyed gestures with off-channel chat commands to expand into extra keyboard input. It's not as responsive as llTakeControls() and only allows for a pressed event, but it's usually good enough for most secondary actions.
  20. If your light and switch are different prims, you'd need to search for the light. You could step through the link numbers and test for the description of the prim you want to light up, like you've done with the switch, then store that number in a variable and feed it into your llSetLinkPrimitiveParamsFast(). Use llGetNumberOfPrims() to set the high end of your loop. See the sketch below.
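      A minimal sketch of that loop, assuming (hypothetically) that the light prim's description is "light" and that lighting it up means enabling PRIM_POINT_LIGHT:

          integer FindLinkByDesc(string desc)
          {
              integer i;
              integer count = llGetNumberOfPrims();
              for (i = 1; i <= count; ++i) // link numbers start at 1 in a linkset
              {
                  if (llList2String(llGetLinkPrimitiveParams(i, [PRIM_DESC]), 0) == desc)
                      return i;
              }
              return -1; // not found
          }

          default
          {
              touch_start(integer total_number)
              {
                  integer lightLink = FindLinkByDesc("light");
                  if (lightLink != -1)
                      llSetLinkPrimitiveParamsFast(lightLink,
                          [PRIM_POINT_LIGHT, TRUE, <1.0, 1.0, 1.0>, 1.0, 5.0, 0.5]);
              }
          }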
  21. Add the additional key's bitflag into a mask with the bitwise | operator, then test the masked value against the whole mask. A plain chained & won't work, because CONTROL_FWD & CONTROL_ROT_LEFT is always zero (the two flags share no bits):

          integer mask = CONTROL_FWD | CONTROL_ROT_LEFT;
          if ((edge & mask) == mask) // both controls changed during this event

      Here's a link to a good tutorial on how to use the operators: http://lslwiki.digiworldz.com/lslwiki/wakka.php?wakka=bitwise (I'm trying to find a good explanation of edge and level; will update when I find it or make one myself.) They're all a mess, here's my best shot:

      Level and edge correspond to a control's particular input signal as it is polled by the computer.
      level denotes where the button is: 1 when pressed, 0 when unpressed.
      edge denotes when it moves: 1 when changing, 0 when stable.

      It is oftentimes useful to test the following four conditions instead of just the pressed or unpressed state of a button. For example, if you only want to generate one event per press even when a button is held, you'd want to use the start condition; if only the level value is checked, events will be generated constantly while the button is held down.

          Point A: integer untouched = ~(level | edge); // both level and edge are false
          Point B: integer held = level & ~edge;        // level is true and edge is false
          Point C: integer start = level & edge;        // both level and edge are true
          Point D: integer end = ~level & edge;         // level is false and edge is true

      Signal diagram:

                   _____        _____
              ____|     |______|     |____
               ^     ^         ^     ^
               A     B         C     D

      EDIT: added the edge and level portion.
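      A minimal sketch using the start condition in a control event, assuming a worn attachment; "forward" then triggers once per press instead of repeating while held:

          default
          {
              attach(key id)
              {
                  if (id != NULL_KEY)
                      llRequestPermissions(id, PERMISSION_TAKE_CONTROLS);
              }
              run_time_permissions(integer perm)
              {
                  if (perm & PERMISSION_TAKE_CONTROLS)
                      llTakeControls(CONTROL_FWD | CONTROL_ROT_LEFT, TRUE, TRUE);
              }
              control(key id, integer level, integer edge)
              {
                  integer start = level & edge; // Point C from the diagram above
                  if (start & CONTROL_FWD)
                      llOwnerSay("forward pressed"); // fires once per press
              }
          }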
  22. Before they expanded the inworld toolset, you'd have to set many of the weirder settings via script. IIRC, depending on how you switched from one primitive type to another you could glitch them into really fantastic shapes.
  23. I've been meaning to try making one of those, but never got around to it for the past 10 years. Is there a specific range of height values at which prims will render onto the world map, or is it just whatever is highest?
  24. Wait.... drive, WhatsApp, messenger, contacts... Are they trying to download the installer onto a mobile device?