Touch position from region to local coordinates


ThalesMiletus

I have created a boardgame script (for different simple boardgames) that is all working, and I don't want to make any major changes to it. But I discovered that if I attach a boardgame to my avatar somewhere I don't have rez rights, the touch position when trying to move pieces is off. (BTW, this would be just for a sitting avatar, so I can ignore any issues with the avatar moving.)

The script runs in the root prim. The root can be a playing surface or a separate prim (whatever makes more sense).

Currently, when unattached, I'm using the following to do the appropriate transformation from region to local coords relative to root: 

vector localPos = (llDetectedTouchPos(0)-llGetPos())/llGetRot();

This works perfectly fine unattached. I'm having a hard time determining what exact 3D transformation I need to do to llDetectedTouchPos() when the boardgame linkset is attached, so that I still get a touch point relative to root. I'm fine with the limitation that the attach point would have to be the avatar center but I would need to be able to position the boardgame from there. I've created a simple linkset with a few pieces to play around with the transformations using things like llGetLocalPos/Rot, or llGetRootPosition/Rotation, or avatar position/rotation from llGetObjectDetails(), but can't seem to figure out the right combination.

I've had people suggest I use llDetectedTouchST/UV, but that is a different approach to getting the touch position, and it seems it would require a number of code changes and retesting everything, which I'm not prepared to do. I just need the correct transformation to get from the region touch coordinates to the local-relative-to-root-prim coordinates when attached.

Of course, any self-contained code that uses llDetectedTouchST/UV but gives exactly the same position relative to root as the 'localPos' line above would definitely be acceptable. It just seems unlikely, because of assumptions to do with the faces and textures of the playing surfaces.

Edited by ThalesMiletus

I'm not sure what you mean. It's not only meant for HUDs. I'm getting a touch position fine, just can't figure out the correct transformation to local-root when attached. From the wiki for llDetectedTouchPos:

Quote

Returns the vector position where the object was touched in region coordinates, unless it is attached to the HUD, in which case it returns the position in screen space coordinates.

 


3 minutes ago, ThalesMiletus said:

I'm not sure what you mean. It's not only meant for HUDs. I'm getting a touch position fine, just can't figure out the correct transformation to local-root when attached. From the wiki for llDetectedTouchPos:

 

If it is attached it only works as a HUD. Or to be precise, attached to a HUD.

Edited by steph Arnott

Can you explain why you think that? I don't see anything in the wiki to support it, and I just did a quick test to confirm that llDetectedTouchPos() in an attached prim is giving me valid regional touch coordinates, the same coordinates I get when I touch an unattached prim overlapping it. If you are suggesting that there is no way to get from a regional touch position to an attached-local-to-root position, then that seems odd to me. I would think the data is there somewhere in the LSL routines and you just need to know how to get the correct info and apply the right transformations to go from the region frame to the attached root frame. Intuitively, it feels to me that perhaps two sets of translation/rotation need to be applied: one to go from the region frame to the avatar-center frame, and a second to go from there to the root prim, based on the root prim's position/rotation relative to the attach point.
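To make that concrete, this untested sketch is the kind of combination I mean. I'm assuming here that llGetObjectDetails() really does return a usable position/rotation for the wearer, and that llGetLocalPos()/llGetLocalRot() in the attached root give its offset from the attach point (which would only line up with the avatar frame when attached to Avatar Center):

default
{
    touch_start(integer total_number)
    {
        //Step 1: region frame -> avatar frame, using the wearer's position/rotation
        list av = llGetObjectDetails(llGetOwner(), [OBJECT_POS, OBJECT_ROT]);
        vector avPos = llList2Vector(av, 0);
        rotation avRot = llList2Rot(av, 1);
        vector inAvatarFrame = (llDetectedTouchPos(0) - avPos) / avRot;

        //Step 2: avatar frame -> root prim frame, dividing out the root's
        //position/rotation relative to the attach point
        vector localPos = (inAvatarFrame - llGetLocalPos()) / llGetLocalRot();

        llOwnerSay("local touch: " + (string)localPos);
    }
}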


2 minutes ago, ThalesMiletus said:

Can you explain why you think that? I don't see anything in the wiki to support it, and I just did a quick test to confirm that llDetectedTouchPos() in an attached prim is giving me valid regional touch coordinates, the same coordinates I get when I touch an unattached prim overlapping it. If you are suggesting that there is no way to get from a regional touch position to an attached-local-to-root position, then that seems odd to me. I would think the data is there somewhere in the LSL routines and you just need to know how to get the correct info and apply the right transformations to go from the region frame to the attached root frame. Intuitively, it feels to me that perhaps two sets of translation/rotation need to be applied: one to go from the region frame to the avatar-center frame, and a second to go from there to the root prim, based on the root prim's position/rotation relative to the attach point.

You have to construct it as a HUD.


Actually, llDetectedTouchST does work to accomplish this without having to resort to a HUD. But it gives touch position local to the texture/face. Conceptually, there is no reason why the region coordinates given by llDetectedTouchPos can't be transformed to the frame of the attached root prim unless the various LSL routines just don't provide adequate information to accomplish this.


1 minute ago, ThalesMiletus said:

Actually, llDetectedTouchST does work to accomplish this without having to resort to a HUD. But it gives touch position local to the texture/face. Conceptually, there is no reason why the region coordinates given by llDetectedTouchPos can't be transformed to the frame of the attached root prim unless the various LSL routines just don't provide adequate information to accomplish this.

Once again, moving child prims as an in-world object the way you describe won't work. SL is not capable. You have to treat it as a HUD. Also, it would be llGetLocalPos/Rot, because the agent is the root.


I have a simple test example with llDetectedTouchST that works on an attached linkset. So it is very much possible without a HUD. But I'm trying to get it to work with llDetectedTouchPos because that's what my boardgame script uses and I don't want to take on new assumptions about faces/textures for llDetectedTouchST and then have to retest everything. I tried to make that clear in my original post. I'll wait to see if anyone else has some input on this. 


4 minutes ago, ThalesMiletus said:

I have a simple test example with llDetectedTouchST that works on an attached linkset. So it is very much possible without a HUD. But I'm trying to get it to work with llDetectedTouchPos because that's what my boardgame script uses and I don't want to take on new assumptions about faces/textures for llDetectedTouchST and then have to retest everything. I tried to make that clear in my original post. I'll wait to see if anyone else has some input on this. 

Do as you wish; SL is not capable of doing what you want. Good day.


I got curious, and after some experimentation I did manage to accomplish the task using llDetectedTouchST. Since the function returns a vector with the x and y components in the range of 0 to 1, they are in essence a percentage of how far the touched point is from the corner of the face. Knowing that, we can scale them by the board's dimensions and shift them so they are measured from the center of the game board prim, and offset the piece accordingly.

The demo below assumes you have a gameboard prim as the root and a linked child (link number 2) as the game piece. It also assumes the game board's top face (0) is what will be clicked on.

default
{
    touch_start(integer total_number)
    {
        //Only allow touching the top face of the root prim for this demo
        if (llDetectedLinkNumber(0) != 1 || llDetectedTouchFace(0) != 0) {return;}
        
        vector boardDimensions = llGetScale();  //size of the root (board) prim in meters
        vector localPos = llDetectedTouchST(0); //x and y are 0..1 across the touched face

        //Scale to the board's dimensions and shift so <0,0> is the board's center rather than its corner
        localPos.x = boardDimensions.x*localPos.x - boardDimensions.x*0.5;
        localPos.y = boardDimensions.y*localPos.y - boardDimensions.y*0.5;

        //Move the game piece (z stays 0, i.e. at the board's mid-height)
        llSetLinkPrimitiveParamsFast( 2, [PRIM_POS_LOCAL, localPos] );
    }
}

When I tested this, the object was attached to my Avatar center, but since these are local coordinates, that shouldn't matter. I even saw that this works ok when not attached as well.

A solution using llDetectedTouchPos may yet be possible, but this was the first approach I tried.
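If your playing surface is divided into an actual grid, you could also snap the result to the nearest square. Here is an untested variation of the same idea; the 8x8 grid size is just a hypothetical example, and the same root/child and top-face assumptions apply:

default
{
    touch_start(integer total_number)
    {
        //Only allow touching the top face of the root prim for this demo
        if (llDetectedLinkNumber(0) != 1 || llDetectedTouchFace(0) != 0) {return;}

        integer gridSize = 8;                   //hypothetical 8x8 board
        vector boardDimensions = llGetScale();
        vector st = llDetectedTouchST(0);       //x and y are 0..1 across the touched face

        //Work out which square was touched (0..gridSize-1 on each axis)
        integer col = (integer)(st.x * gridSize);
        integer row = (integer)(st.y * gridSize);
        if (col >= gridSize) col = gridSize - 1;    //guard against a touch exactly on the far edge
        if (row >= gridSize) row = gridSize - 1;

        //Convert the center of that square back into an offset from the board's center
        vector localPos;
        localPos.x = boardDimensions.x * ((col + 0.5) / gridSize - 0.5);
        localPos.y = boardDimensions.y * ((row + 0.5) / gridSize - 0.5);

        llSetLinkPrimitiveParamsFast( 2, [PRIM_POS_LOCAL, localPos] );
    }
}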

Edit: Whoops, that if/else condition for each axis seems to be unnecessary since both branches are doing the same thing. I think that's a remnant from an earlier attempt when I was using a different thought process. I've removed it to further simplify the demo.

Edited by Fenix Eldritch
removed unnecessary if/else branch in demo

@ThalesMiletus Please disregard Steph's advice, they're often misinformed, misinterpreting, or coming up with their own observations without backing any of it up. Then eventually you'll get the iconic "I'm right and you're wrong. Good day."

But to answer your question, this isn't really doable, because of two BIG limitations in SL/LSL.

  • While llDetectedTouchPos correctly detects which region coordinate an attachment was touched (very easy to test)...
  • llGetPos will give you the avatar's position in region coordinates, not the attachment's position. (So this will cause you to have an inaccurate offset to begin with.)
  • llGetRot will give you an approximation of the avatar's rotation (it is not accurate unless the avatar is sitting or in mouselook).

The only possible way to get llDetectedTouchPos to work in an attached object is to have the attachment attached to Avatar Center at ZERO_VECTOR. Then your avatar must also be sitting on an object. (I'm unsure if groundsitting works.)
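If you do accept those restrictions, the avatar's frame and the attachment root's frame should coincide, so (untested, and only as a sketch) your original line may even work unchanged, because llGetPos/llGetRot in the attachment then effectively describe the root's own frame:

//Untested: only while the wearer is seated and the root is attached
//to Avatar Center at <0,0,0> with no rotation (inside touch_start)
vector localPos = (llDetectedTouchPos(0) - llGetPos()) / llGetRot();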

Edited by Wulfie Reanimator

35 minutes ago, Wulfie Reanimator said:

@ThalesMiletus Please disregard Steph's advice, they're often misinformed, misinterpreting, or coming up with their own observations without backing any of it up. Then eventually you'll get the iconic "I'm right and you're wrong. Good day."

But to answer your question, this isn't really doable, because of two BIG limitations in SL/LSL.

  • While llDetectedTouchPos correctly detects which region coordinate an attachment was touched (very easy to test)...
  • llGetPos will give you the avatar's position in region coordinates, not the attachment's position. (So this will cause you to have an inaccurate offset to begin with.)
  • llGetRot will give you an approximation of the avatar's rotation (it is not accurate unless the avatar is sitting or in mouselook).

Really? The script will lag and stop. Still, you think the 'lookat' is in the body and not the head.


Wulfie, thanks for an actual explanation. 😀 And the same also applies to getting the avatar position/rotation using llGetObjectDetails? 

So would it possibly work if I accepted the limitation of sitting and attaching to Avatar Center, used a dedicated root prim just for attaching at zero vector/rotation, and then offset the rest of the linkset from that? Not a great solution, but better than nothing.

Edited by ThalesMiletus

16 minutes ago, ThalesMiletus said:

Wulfie, thanks for an actual explanation. 😀 And the same also applies to getting the avatar position/rotation using llGetObjectDetails? 

So would it possibly work if I accepted the limitation of sitting and attaching to Avatar Center, used a dedicated root prim just for attaching at zero vector/rotation, and then offset the rest of the linkset from that? Not a great solution, but better than nothing.

Have fun; soon you will realize the sim server will stall it, as it is too heavy.


Fenix, thanks for taking the time. I'm aware it can be made to work with llDetectedTouchST. Based on Wulfie's feedback, for llDetectedTouchPos, there may not be a way to get an accurate transformation from the region coordinate frame to the attached local-root coordinate frame, which is too bad. Another LSL limitation. 

Edited by ThalesMiletus
