
Myrmidon Hasp

Resident
  • Posts: 33
  • Joined
  • Last visited

Reputation: 19 Good


  1. There's something wrong with that link; the video gives an error. In trigonometry, theta is the acute angle of interest in a right triangle; in my diagram it is the angle formed between the red and black dashed lines. I'll see if I can find a decent tutorial or scratch one up to explain it better. The secondary object is only needed once, to accurately figure out the distance from the camera to the HUD before it gets hard-coded as cam2hud. I'm pretty sure that distance effectively changes with field of view and screen resolution. The code I posted isn't going to work without an accurate value for cam2hud, and it needs some adjustments similar to Wulfie's, applied to the values from llDetectedTouchST(), to compensate for the HUD prim's dimensions and to move the origin from the bottom-left corner to the center; see the sketch below.
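     A minimal sketch of that llDetectedTouchST() adjustment (untested), assuming the HUD's visible face spans the prim's local Y (horizontal) and Z (vertical) dimensions:

         default
         {
             touch_start(integer num)
             {
                 vector st = llDetectedTouchST(0);  // 0..1 across the touched face
                 vector size = llGetScale();        // HUD prim dimensions
                 vector touched;                    // offset from the face's center
                 touched.x = (st.x - 0.5) * size.y; // horizontal; assumes the face spans local Y
                 touched.y = (st.y - 0.5) * size.z; // vertical; assumes the face spans local Z
             }
         }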
  2. The snippet to get the rotation and apply it should look something like this:

         float cam2hud = 1;  // the distance from the camera origin to the HUD
         vector touched;     // the vector of the detected touch (assumed relative to screen center)
         vector euler;       // the Euler angles we're going to convert into a rotation
         rotation touchrot;  // the rotation that points at touched from the camera position

         euler.x = 0;
         euler.z = -llAtan2(touched.x, cam2hud); // horizontal theta (yaw) given opposite and adjacent
         euler.y = -llAtan2(touched.y, cam2hud); // vertical theta (pitch) given opposite and adjacent
         touchrot = llEuler2Rot(euler);          // converts the Euler angles into a quaternion rotation

         // apply the touch rotation first, then the camera rotation
         llCastRay(llGetCameraPos(),
             llGetCameraPos() + <50.0, 0.0, 0.0> * touchrot * llGetCameraRot(), []);

     I can't remember where the origin of detected touches sits, so you may need to adjust so it relates to screen center. As for the test device, it would first need an object with a known position in relation to your camera's in-world position, following my previous diagram, to calculate thetas in world with llAtan2(). Then, without moving or rotating the camera, use your HUD and touch the center of the object to get that touch offset. Use that offset to calculate cam2hud with:

         float cam2hud = touched.x / llTan(HorizontalTheta);
         // or
         float cam2hud = touched.y / llTan(VerticalTheta);

     It might be easier to script an object to position itself at a preset offset from your detected camera's position and rotation than to set things up by hand and mess around figuring out offsets and rotations afterwards. Kinda like this:

         // note: llSetPos() only moves up to 10m per call, so a large offset
         // may need repeated calls to fully arrive
         llSetPos(llGetCameraPos() + <10.0, 5.0, 4.0> * llGetCameraRot());
         float HorizontalTheta = llAtan2(5.0, 10.0); // opposite (left) over adjacent (forward)
         float VerticalTheta = llAtan2(4.0, 10.0);   // opposite (up) over adjacent (forward)

     Edit: I'm not entirely sure I got the axes and signs right in the Euler mapping (yaw should be rotation about Z and pitch about Y, but the signs depend on the touch coordinate convention). I haven't tested any of this yet and haven't used these functions in over a decade. Damn rust.
  3. It looks like the ending point is only moving slightly in relation to the touch offset because you're just moving it forward 50m and shifting by the touch offset, when you need to determine the rotation between the camera position and the touched position and then multiply <50,0,0> by that rotation. I scratched up a diagram that may help explain things:
     • The point at the bottom of the vertical dashed line is the camera position in world, and the camera's origin in relation to the HUD/screen.
     • The smaller horizontal line represents the face of your HUD.
     • The square represents the object in world that you visually click on.
     • The larger horizontal line represents the width of the screen in world at the distance of the clicked object.
     • The large X represents the point on the HUD that is touched.
     • The vertical dashed line represents the line along llGetCameraRot(), which runs from the camera position through screen center.
     • The dotted line represents the edge of the field of view.
     The script you've posted appears to set the llCastRay() along the solid red line. The dashed red line is what you need; it represents your <50,0,0> offset multiplied by the rotation between the camera's origin and the clicked point on the HUD. I don't know the effective distance between the camera and the HUD, but it is possible to work it out mathematically using the camera's and the object's in-world positions in relation to the camera's and the HUD click's positions. In trigonometric terms, we can calculate thetas from the in-world opposite and adjacent lengths, and then use those thetas and the HUD click's opposite lengths to find the distance between the HUD and the camera. I'm not too familiar with 3D rendering principles, so there might be more to it, but I think this approach should get you close enough to what you're after.
  4. You'd need to use llGetCameraPos() for the llCastRay() starting vector. Then use llGetCameraPos(), llGetCameraRot(), llDetectedTouchPos(), llGetAttached(), and some math to figure out the ending vector. You can add a vector offset multiplied by llGetCameraRot() to llGetCameraPos() to get a point far into the distance from screen center. The tricky part is using the attachment-point-adjusted touch data to generate a rotation to accommodate off-center touches. I can't remember the axes offhand, but you'd probably want to compare the center of the HUD's vector to a normalized vector built from the detected touch's horizontal and vertical components with a preset high-magnitude third component. That should get close to where you've clicked. There should be a way to calculate things more accurately by taking the camera's viewing angle into account, but I don't know the calculations or a way to grab the needed data.
  5. If your ultimate destination is farther than the llMoveToTarget() limit, you just have to run it in a loop or a timer event with smaller steps that move you toward your goal. llVecNorm() of the direction to the goal, multiplied by a scalar and added to llGetPos(), should break it into small enough steps; see the sketch below.
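     A rough sketch of that timer loop (untested; the goal vector and 10m step size are placeholders):

         vector goal = <128.0, 128.0, 25.0>; // final destination; placeholder value

         default
         {
             state_entry()
             {
                 llSetStatus(STATUS_PHYSICS, TRUE); // llMoveToTarget() needs a physical object
                 llSetTimerEvent(0.5);
             }
             timer()
             {
                 vector here = llGetPos();
                 float dist = llVecDist(here, goal);
                 if (dist < 0.5)
                 {
                     llStopMoveToTarget();
                     llSetTimerEvent(0.0);
                 }
                 else
                 {
                     float step = dist;
                     if (step > 10.0) step = 10.0; // step up to 10m per tick, well inside the limit
                     llMoveToTarget(here + llVecNorm(goal - here) * step, 0.5);
                 }
             }
         }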
  6. Keyframed motion is smoother in my experience. However, it only moves or rotates an object; it can't change its size. Something like the sketch below.
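     A hedged sketch (untested) that glides a non-physical object 5m along its own X axis over 10 seconds:

         default
         {
             touch_start(integer num)
             {
                 // each keyframe is a relative <position delta>, <rotation delta>, time triple
                 llSetKeyframedMotion(
                     [<5.0, 0.0, 0.0> * llGetRot(), ZERO_ROTATION, 10.0],
                     []);
             }
         }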
  7. Pretty sure you can't spoof a UUID. You can make yourself a set of gestures to eliminate the need to type things out every time. It may also be worthwhile to IM the creator and ask politely if they would change the script to allow HUD communications. If the boat's scripts are modifiable you could even add in the functionality yourself.
  8. You may be able to use llCastRay() to check whether your turret's firing solution will hit your own ship, and hold fire for that cycle. Feed your turret's emitter location and your firing target's location into llCastRay() with the appropriate filters and test the first returned key against the key of your ship; a sketch follows below. If you're using llSensor() to get your target's location, you can limit the detection area to exclude your ship. I'm not sure how useful this is nowadays, but you can also precalculate a volume that contains your ship and then test it against your turret's local rotation. That basically does a cast ray the hard way without having to poll the sim; it came in handy when llCastRay() was unavailable or failed more frequently.
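     A sketch of that check (untested); emitter, target, and shipKey are placeholders you'd fill in from your own turret logic:

         integer ClearToFire(vector emitter, vector target, key shipKey)
         {
             // only the first thing the ray strikes matters for a friendly-fire check
             list hits = llCastRay(emitter, target,
                 [RC_REJECT_TYPES, RC_REJECT_LAND, RC_MAX_HITS, 1]);
             integer status = llList2Integer(hits, -1); // hit count, or a negative error code
             if (status <= 0) return TRUE;              // nothing in the way (or the ray failed)
             return (llList2Key(hits, 0) != shipKey);   // hold fire if our own ship is hit first
         }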
  9. llCastRay() returns a filtered list of keys and positions along a ray between start and end vectors. It depends on whether you want to use it for the tracking or for the shooting. I don't think it would be particularly efficient for any tracking tasks. It can be useful for simulating piercing projectiles from a turret and using the returned data to report hits to some other game object; see the sketch below. Any other auto-turret tasks are better realized with physical projectiles, llGetObjectDetails(), or regular sensor calls.
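     A sketch of the piercing-projectile case (untested); the channel number and the 50m reach are hypothetical:

         integer HIT_CHANNEL = -77341; // hypothetical channel your game objects listen on

         FirePiercing(vector emitter, vector aimpoint)
         {
             // extend the ray 50m past the emitter along the aim direction
             vector end = emitter + llVecNorm(aimpoint - emitter) * 50.0;
             list hits = llCastRay(emitter, end,
                 [RC_REJECT_TYPES, RC_REJECT_LAND, RC_MAX_HITS, 8]);
             integer n = llList2Integer(hits, -1); // hit count, or a negative error code
             integer i;
             for (i = 0; i < n; i++)
             {
                 // default data is a [key, position] pair per hit, so keys sit at even indices
                 llRegionSayTo(llList2Key(hits, i * 2), HIT_CHANNEL, "hit");
             }
         }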
  10. Took about a week to get mine straightened out. Things went quick once they got up to me in the queue.
  11. No need to convert your position vector with llEuler2Rot(). When you multiply or divide a vector by a rotation, it rotates that vector. This should get your child prim moving in the direction you want, but it will not change its rotation:

          llSetLinkPrimitiveParamsFast(LINK_THIS, [
              PRIM_POS_LOCAL, localpos / llGetRootRotation()
          ]);
  12. llSetScriptState() or llRemoveInventory() will accomplish those tasks. You'd also need to set up a communication system between the HUD and the script you want to control. llSay() and llListen() would be the easiest, and you may want to add some kind of conditional to specify which objects or scripts you want to control; a minimal sketch follows below. Here's a link to the wiki's inventory subcategory: http://wiki.secondlife.com/wiki/Category:LSL_Inventory And here's the communication subcategory: http://wiki.secondlife.com/wiki/Category:LSL_Communications
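      A minimal sketch of the listening side (untested); the channel number and the "WorkerScript" name are placeholders:

          integer CONTROL_CHANNEL = -51234; // placeholder; must match the HUD's llSay() channel

          default
          {
              state_entry()
              {
                  llListen(CONTROL_CHANNEL, "", NULL_KEY, "");
              }
              listen(integer chan, string name, key id, string msg)
              {
                  if (llGetOwnerKey(id) != llGetOwner()) return; // only obey our owner's objects
                  if (msg == "pause")       llSetScriptState("WorkerScript", FALSE);
                  else if (msg == "resume") llSetScriptState("WorkerScript", TRUE);
                  else if (msg == "remove") llRemoveInventory("WorkerScript");
              }
          }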
  13. My memory is a little fuzzy, but I think if you divide the local position by the root prim's global rotation you can get the child prim to move where you want. If I'm remembering correctly, the local position of a child prim always gets rotated by the root prim's global rotation, and dividing by a rotation applies the opposite effect of multiplying. So dividing your target position by the root's rotation ahead of time cancels out the rotation that gets applied.
  14. You could always file a JIRA for it, but I think it has been requested many times already. I don't remember exactly, but I think the limitation has something to do with how a worn object's position isn't really tracked by the server's physics engine, since avatar animations are mostly client-side.