ThalesMiletus

Resident

  • Posts: 9
  • Joined
  • Last visited

Reputation: 1 Neutral


  1. Fenix, thanks for taking the time. I'm aware it can be made to work with llDetectedTouchST. Based on Wulfie's feedback, for llDetectedTouchPos, there may not be a way to get an accurate transformation from the region coordinate frame to the attached local-root coordinate frame, which is too bad. Another LSL limitation.
  2. Wulfie, thanks for an actual explanation. 😀 And does the same also apply to getting the avatar position/rotation using llGetObjectDetails? So would it possibly work if I accepted the limitation of sitting and attaching at the avatar center, used a dedicated root prim just for attaching with a zero vector/rotation, and then offset the rest of the linkset from that? Not a great solution, but better than nothing.
  3. I have a simple test example with llDetectedTouchST that works on an attached linkset. So it is very much possible without a HUD. But I'm trying to get it to work with llDetectedTouchPos because that's what my boardgame script uses and I don't want to take on new assumptions about faces/textures for llDetectedTouchST and then have to retest everything. I tried to make that clear in my original post. I'll wait to see if anyone else has some input on this.
  4. Actually, llDetectedTouchST does work to accomplish this without having to resort to a HUD, but it gives the touch position local to the texture/face. Conceptually, there is no reason why the region coordinates given by llDetectedTouchPos can't be transformed to the frame of the attached root prim, unless the various LSL routines just don't provide enough information to accomplish this.
  5. I think you've misunderstood what I'm trying to do.
  6. Can you explain why you think that? I don't see anything in the wiki to support it, and I just did a quick test to confirm that llDetectedTouchPos() in an attached prim gives me valid regional touch coordinates, the same coordinates as when I touch an unattached prim overlapping it. If you are suggesting that there is no way to get from a regional touch position to an attached, local-to-root position, that seems odd to me. I would think the data is there somewhere in the LSL routines; you just need to know how to get the correct info and apply the correct transformations to go from the region frame to the attached root frame. Intuitively, it feels like perhaps two sets of translation/rotation need to be applied: one to go from the region frame to the avatar-center frame, and a second to go from there to the root prim, based on the root prim's position/rotation relative to the attach point.
  7. I'm not sure what you mean. It's not only meant for HUDs. I'm getting a touch position fine, just can't figure out the correct transformation to local-root when attached. From the wiki for llDetectedTouchPos:
  8. I'm still having the same issue calculating the correct transformation even if standing still. If there are other issues specific to sitting, that might also be a problem but I'm not even there yet.
  9. I have created a boardgame script (for various simple boardgames) that is all working, and I don't want to make any major changes to it. But I discovered that if I attach a boardgame to my avatar somewhere I don't have rez rights, the touch position when trying to move pieces is off. (BTW, this would be just for a sitting avatar, so I can ignore any issues with the avatar moving.) The script runs in the root prim; the root can be a playing surface or a separate prim (whatever makes more sense).

     Currently, when unattached, I'm using the following to transform from region coords to local coords relative to the root: vector localPos = (llDetectedTouchPos(0) - llGetPos()) / llGetRot(); This works perfectly fine unattached. I'm having a hard time determining what exact 3D transformation I need to apply to llDetectedTouchPos() when the boardgame linkset is attached, so that I still get a touch point relative to the root. I'm fine with the limitation that the attach point would have to be the avatar center, but I would need to be able to position the boardgame from there.

     I've created a simple linkset with a few pieces to play around with the transformations, using things like llGetLocalPos/Rot, llGetRootPosition/Rotation, or the avatar position/rotation from llGetObjectDetails(), but can't seem to figure out the right combination. I've had people suggest I use llDetectedTouchST/UV, but that is a different approach to getting the touch position and would require a number of code changes and retesting everything, which I'm not prepared to do. I just need the correct transformation to get from the region touch coordinates to the local-relative-to-root-prim coordinates when attached. Of course, any self-contained lines of code that use llDetectedTouchST/UV but give exactly the same position relative to the root as the 'localPos' line above are definitely acceptable.
It just seems unlikely because of assumptions to do with the faces and textures of the playing surfaces.
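The two-stage intuition from post 6 (region frame → avatar-center frame → root-prim frame) can be sketched numerically. This is a minimal Python sketch, not LSL: it assumes the linkset is attached at the avatar center, represents rotations as plain 3×3 matrices rather than LSL quaternions, and the helpers (`rot_z`, `region_to_root_local`) are illustrative names, not SL functions. Dividing by a rotation in LSL corresponds here to multiplying by the inverse (transpose) matrix.

```python
import math

def rot_z(deg):
    """Rotation matrix for a yaw of `deg` degrees about the region Z axis."""
    r = math.radians(deg)
    c, s = math.cos(r), math.sin(r)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def apply(m, v):
    """Multiply 3x3 matrix m by vector v."""
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def transpose(m):
    """Transpose of a rotation matrix == its inverse (LSL's 'divide by rotation')."""
    return [[m[j][i] for j in range(3)] for i in range(3)]

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def region_to_root_local(touch_region, avatar_pos, avatar_rot,
                         root_local_pos, root_local_rot):
    """Two-stage transform sketched in post 6 (assumed, not verified in-world).

    Stage 1: region frame -> avatar-center frame
             (LSL analogue: (touch - avatarPos) / avatarRot)
    Stage 2: avatar-center frame -> root-prim frame
             (LSL analogue: (p - llGetLocalPos()) / llGetLocalRot())
    """
    in_avatar = apply(transpose(avatar_rot), sub(touch_region, avatar_pos))
    return apply(transpose(root_local_rot), sub(in_avatar, root_local_pos))

# Example: avatar at (10, 20, 30) yawed 90 degrees, root prim offset (1, 0, 0)
# from the attach point; a touch at region (9.75, 21.5, 30) should land at
# (0.5, 0.25, 0) in the root prim's frame.
print(region_to_root_local((9.75, 21.5, 30.0), (10.0, 20.0, 30.0),
                           rot_z(90.0), (1.0, 0.0, 0.0), rot_z(0.0)))
```

In LSL terms this would correspond to something like ((llDetectedTouchPos(0) - avatarPos) / avatarRot - llGetLocalPos()) / llGetLocalRot(), with avatarPos/avatarRot taken from llGetObjectDetails; whether those routines return values in the frames this sketch assumes is exactly the open question in the thread, so treat it as a hypothesis to test, not a confirmed solution.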