I have created a boardgame script (for several different simple boardgames) that all works, and I don't want to make any major changes to it. But I discovered that if I attach a boardgame to my avatar somewhere I don't have rez rights, the touch position is off when trying to move pieces. (BTW, this would only be for a sitting avatar, so I can ignore any issues with the avatar moving.)
The script runs in the root prim. The root can be a playing surface or a separate prim (whatever makes more sense).
Currently, when unattached, I'm using the following to do the appropriate transformation from region to local coords relative to root:
vector localPos = (llDetectedTouchPos(0)-llGetPos())/llGetRot();
This works perfectly fine unattached. I'm having a hard time determining exactly what 3D transformation I need to apply to llDetectedTouchPos() when the boardgame linkset is attached, so that I still get a touch point relative to the root. I'm fine with the limitation that the attach point has to be the avatar center, but I do need to be able to position the boardgame relative to that point. I've created a simple linkset with a few pieces to experiment with the transformations, using things like llGetLocalPos/Rot, llGetRootPosition/Rotation, and the avatar position/rotation from llGetObjectDetails(), but I can't seem to find the right combination.
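For anyone attempting this, here is a hedged sketch of the transformation I'm after, assuming the attach point is the avatar center and the script is in the root. The key assumptions (which I haven't fully verified) are that llGetLocalPos/llGetLocalRot on an attached root prim return its offset and rotation relative to the attach point, and that OBJECT_POS/OBJECT_ROT from llGetObjectDetails() give the wearer's region-frame position and rotation. The function name TouchToRootLocal is just illustrative:

```lsl
// Sketch, untested: convert a region-frame touch position to
// root-local coordinates while the linkset is worn.
// Assumes: attach point = avatar center, script in the root prim.
vector TouchToRootLocal(vector touchPos)
{
    // Wearer's region-frame position and rotation. Caveat: for a
    // seated/animated avatar, OBJECT_POS may not match the avatar's
    // visual position exactly.
    list det = llGetObjectDetails(llGetOwner(), [OBJECT_POS, OBJECT_ROT]);
    vector   avPos = llList2Vector(det, 0);
    rotation avRot = llList2Rot(det, 1);

    // While attached, llGetLocalPos/llGetLocalRot on the root prim
    // give its pose relative to the attach point (assumption).
    vector   rootRegionPos = avPos + llGetLocalPos() * avRot;
    rotation rootRegionRot = llGetLocalRot() * avRot;

    // Same form as the unattached case, with the root's effective
    // region pose substituted for llGetPos()/llGetRot().
    return (touchPos - rootRegionPos) / rootRegionRot;
}
```

In touch_start this would be used as `vector localPos = TouchToRootLocal(llDetectedTouchPos(0));`, and one could branch on llGetAttached() to fall back to the original one-liner when unattached.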
I've had people suggest I use llDetectedTouchST/UV, but that is a different approach to getting the touch position, and it seems it would require a number of code changes and retesting everything, which I'm not prepared to do. I just need the correct transformation from region touch coordinates to local-relative-to-root-prim coordinates when attached.
Of course, any self-contained lines of code that use llDetectedTouchST/UV but give exactly the same position relative to the root as the 'localPos' line above would definitely be acceptable. It just seems unlikely, because of assumptions to do with the faces and textures of the playing surfaces.
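To illustrate what such a self-contained ST-based replacement might look like: the sketch below assumes the playing surface is the flat top face of a box prim with default texture mapping (no repeats, offsets, or texture rotation), so the ST coordinates map linearly onto the face. It also assumes PRIM_POS_LOCAL/PRIM_ROT_LOCAL for a child link are relative to the root, which is what makes this approach attachment-independent. The function name STToRootLocal and the face/axis conventions are illustrative guesses, not tested code:

```lsl
// Sketch, untested: derive a root-local touch point from
// llDetectedTouchST, assuming the board is the top face of a box
// prim with default texture mapping.
vector STToRootLocal()
{
    integer link = llDetectedLinkNumber(0);
    vector  st   = llDetectedTouchST(0);   // s,t run 0..1 across the face
    if (st == TOUCH_INVALID_TEXCOORD) return ZERO_VECTOR;

    // Size and root-relative pose of the touched prim.
    list p = llGetLinkPrimitiveParams(link,
                 [PRIM_SIZE, PRIM_POS_LOCAL, PRIM_ROT_LOCAL]);
    vector   size = llList2Vector(p, 0);
    vector   pos  = llList2Vector(p, 1);   // child links: relative to root
    rotation rot  = llList2Rot(p, 2);

    // Point on the top face in the touched prim's own frame:
    // ST (0,0) is one corner of the face, (1,1) the opposite corner
    // (assumed orientation: s along local x, t along local y).
    vector inPrim = <(st.x - 0.5) * size.x,
                     (st.y - 0.5) * size.y,
                      size.z * 0.5>;

    // Touched the root itself: already in the root's frame.
    if (link <= 1) return inPrim;

    // Transform from the touched child prim's frame to the root's frame.
    return inPrim * rot + pos;
}
```

Because everything here is expressed in link-local terms, the same result should come out whether the linkset is rezzed or worn, which is presumably why people keep suggesting this route.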