HUD Clicks to World Position


Shymus Roffo

Hello, I have been experimenting with a HUD lately. I've been attempting to get positions in the world when a spot is clicked on the HUD.

I've run into a lot of problems; here is what I've managed to get so far, only because it was easy. Sadly, nothing I have made has come close to the result I've been looking for.

Right now the HUD is an invisiprim that takes up the entire screen. The intended behavior is: when a user clicks somewhere on the HUD, it's as if they are clicking somewhere in the world, and the script reports the position of that simulated world click. I know that llCastRay() would need to be used; the problem is I can't figure out how to offset the ray so it shoots to the correct spot.


You'd need to use llGetCameraPos() for the llCastRay() starting vector. Then, use llGetCameraPos(), llGetCameraRot(), llDetectedTouchPos(), llGetAttached() and some math to figure out the ending vector. 

You can use llGetCameraPos() added to a vector offset multiplied by llGetCameraRot() to get a point that's far into the distance from screen center.

The tricky part is using the attachment-point-adjusted touch data to generate a rotation that accounts for off-center touches. I can't remember the axes offhand, but you'd probably want to compare the HUD's center vector to a normalized vector built from the detected touch's horizontal and vertical components plus a preset high-magnitude third component. This should get close to where you've clicked.

There should be a way to calculate things more accurately by taking the camera's viewing angle into account, but I don't know the calculations or a way to grab the needed data.
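The "offset multiplied by the camera rotation" step can be sanity-checked outside LSL. Here's a minimal Python sketch (not LSL), assuming a camera that is only yawed, so the rotation reduces to a 2D rotation matrix; the position and angle are made up for illustration:

```python
import math

def rotate_yaw(v, angle):
    """Rotate a 3D vector around the vertical (z) axis, mimicking
    LSL's offset * llGetCameraRot() for a camera that is only yawed."""
    x, y, z = v
    c, s = math.cos(angle), math.sin(angle)
    return (x * c - y * s, x * s + y * c, z)

cam_pos = (128.0, 128.0, 25.0)   # hypothetical llGetCameraPos()
yaw = math.radians(90)           # camera turned 90 degrees
offset = (50.0, 0.0, 0.0)        # "far into the distance" along the view axis

# Ray end point: 50 m along the camera's facing direction from the camera.
end = tuple(p + o for p, o in zip(cam_pos, rotate_yaw(offset, yaw)))
print(end)
```

With the camera yawed 90 degrees, the forward offset swings from +x to +y, so the end point lands 50 m along the region's y axis from the camera.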


Yeah, all the camera tracking is good; that was easy, thankfully. The major issue I've been having is the math: getting the ray casts to offset correctly from the HUD.

In the scripts that I tested, my cast ray worked from the center of the HUD. The only negative is that it was not offsetting. If you test it below with the debug prim, you'll notice the prim only moves a little bit.

This is the script that I have so far that attempts to find the position. (mind my math, I suck at it...)

Basic HUD: place this within a prim, wear it as a HUD, scale it to fit your window, then set transparency to 75-100.

integer c = -54354;
float fov = 1.7320508075688774;
vector offset = <50,0,0>;
vector touch_offset = ZERO_VECTOR;

default {
    state_entry() {
        llRequestPermissions(llGetOwner(), PERMISSION_TRACK_CAMERA);
    }
    touch(integer n) {
        if(llDetectedKey(0) == llGetOwner()) {
            vector c_pos = llGetCameraPos();
            rotation c_rot = llGetCameraRot();
            vector touch_pos = llDetectedTouchPos(0);
            touch_offset = llVecNorm(<fov,touch_pos.y,touch_pos.z>);
            vector offset_pos = (c_pos+(offset+<0,touch_offset.y,touch_offset.z>)*c_rot);
            list t = llCastRay((c_pos+(<0,touch_offset.y,touch_offset.z>)*c_rot),(c_pos+(offset+<0,touch_offset.y,touch_offset.z>)*c_rot),[
                RC_REJECT_TYPES,  RC_REJECT_AGENTS|RC_REJECT_PHYSICAL,
                RC_MAX_HITS, 4,
                RC_DATA_FLAGS, RC_GET_NORMAL,
                RC_DETECT_PHANTOM, FALSE
            ]);
            // Used for debugging below. 
            llRegionSay(c,llList2String(t,1)+","+(string)c_rot);
            llSetText(llList2CSV(t),<1,1,1>,1);
        }
    }
    touch_end(integer n) {
        touch_offset = ZERO_VECTOR; // Used to put the debug prim back, serves no purpose.
    }
}

Debug Prim: this isn't really needed; it just lets me know where the script's ray cast is landing.

integer c = -54354;
quickPosRot(vector pos, rotation rot) {
    llSetLinkPrimitiveParamsFast(LINK_THIS, [PRIM_POSITION, pos, PRIM_ROTATION, rot]);
}
default {
    state_entry() {
        llListen(c,"","","");
    }
    listen(integer c, string n, key i, string m) {
        if(llGetOwnerKey(i) == llGetOwner()) { 
            list t = llCSV2List(m);
            vector pos = (vector)llList2String(t,0);
            rotation rot = (rotation)llList2String(t,1);
            if(pos != ZERO_VECTOR) quickPosRot(pos,rot);
        }
    }
}

 


This was my attempt at it, before reading your script. It... kinda works? I can't get the accuracy quite right; it either undershoots or overshoots.

My "HUD" was size <1.0, 1.92385, 0.2>, rotated <0, 90, 0>, and attached to HUD Center.

// HUD
integer channel = 2935;
float range = 10;

default
{
    state_entry()
    {
        if(llGetAttached()) llRequestPermissions(llGetOwner(), PERMISSION_TRACK_CAMERA);
    }

    touch_start(integer n)
    {
        // Since touchST covers the face of the prim and
        //  our screen is NOT square, vertical FOV needs to be reduced.
        // These values are the two sides of the prim.
        float verticalFOV = ((1.92385 - 1) / 1.92385); // 0.480208956 ratio

        float fov = 30; // Adjust this for a wider range of movement.

        // Changes ST range from [0.0, 1.0] to [-1.0, 1.0]
        vector STOffset = ((llDetectedTouchST(0) * 2) - <1,1,0>);
        rotation Hud2Rot = llEuler2Rot(<0, STOffset.x*verticalFOV, -STOffset.y> * fov * DEG_TO_RAD);

        list data = llCastRay(
            llGetCameraPos(),
            llGetCameraPos() + <range,0,0> * llGetCameraRot() * Hud2Rot,
            []);

        llWhisper(channel, llList2String(data, 1));
    }
}

Maybe this isn't any help, but it was fun to tinker.
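For what it's worth, the ST recentering in the script above can be checked with a few lines of Python (not LSL). llDetectedTouchST() returns components in [0, 1] with the origin at the prim's lower-left corner, and (st * 2) - <1,1,0> moves the origin to the center:

```python
# Map a [0, 1] texture-style coordinate to a [-1, 1] screen-centered one,
# mirroring the (llDetectedTouchST(0) * 2) - <1,1,0> line in the LSL above.
def recenter(s, t):
    return (s * 2 - 1, t * 2 - 1)

assert recenter(0.5, 0.5) == (0.0, 0.0)    # prim center -> screen center
assert recenter(0.0, 0.0) == (-1.0, -1.0)  # lower-left corner
assert recenter(1.0, 1.0) == (1.0, 1.0)    # upper-right corner
```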


It looks like the ending point is only moving slightly relative to the touch offset because you're just moving it forward 50 m and shifting by the touch offset, when you need to determine the rotation between the camera position and the touched position and then multiply <50,0,0> by that.

I scratched up a diagram that may help to explain things:


The point at the bottom of the vertical dashed line is the camera's position in world and the camera's origin in relation to the HUD/screen. The smaller horizontal line represents the face of your HUD. The square represents the object in world that you visually click on. The larger horizontal line represents the width of the screen in world at the distance of the clicked object. The large X represents the point on the HUD that is touched. The vertical dashed line represents the line along llGetCameraRot(), which runs from the camera position to screen center. The dotted line represents the edge of the field of view.

The script you've posted appears to set the llCastRay() along the solid red line. The dashed red line is what you need; it represents your <50,0,0> offset multiplied by the rotation between the camera's origin and the clicked point on the HUD.

I don't know the effective distance between the camera and the HUD, but it is possible to work it out mathematically using the camera's and object's in-world positions in relation to the camera's and HUD click's positions. In trigonometric terms, we can calculate thetas from the in-world opposite and adjacent lengths, and then use those thetas together with the HUD click's opposite lengths to find the distance between the HUD and the camera.
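The triangle described above can be sketched numerically. This is Python rather than LSL, and the distances are invented purely for illustration: an object 10 m ahead of the camera and 5 m to the side, with a hypothetical touch offset of 0.25 screen units:

```python
import math

adjacent = 10.0   # in-world distance along the view axis (made up)
opposite = 5.0    # in-world sideways offset of the object (made up)
theta = math.atan2(opposite, adjacent)  # horizontal angle to the object

# If the same click lands 0.25 "screen units" from center on the HUD,
# the effective camera-to-HUD distance follows from the same triangle:
touched_x = 0.25
cam2hud = touched_x / math.tan(theta)
assert abs(cam2hud - 0.5) < 1e-9  # 0.25 / (5/10) = 0.5
```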

I'm not too familiar with 3D rendering principles so there might be more to it, but I think this approach should get you close enough to what you're after.


The snippet to get the rotation and apply it should look something like this:

float cam2hud = 1; // this is the distance from the camera origin to the hud.
vector touched; // this is the vector of the detected touch. (I'm assuming it's relative to screen center)
vector euler; //this is the euler we're going to convert into a rotation.
rotation touchrot; //this is the rotation that points at touched from the camera position.

euler.x = 0;
euler.y = llAtan2(touched.x, cam2hud); // sets the horizontal theta given opposite and adjacent
euler.z = llAtan2(touched.y, cam2hud); // Sets the vertical theta given opposite and adjacent

touchrot = llEuler2Rot(euler); // converts euler into quaternion rotation

llCastRay(llGetCameraPos(),llGetCameraPos() + <50,0,0> * llGetCameraRot() * touchrot, []);

 

I can't remember where the origin of detected touches is, so you may need to adjust it so it's relative to screen center.

 

As for the test device, you'd first need an object with a known position in relation to your camera's in-world position, following my previous diagram, to calculate the thetas in-world with llAtan2().

Then, without moving or rotating the camera, use your HUD and touch the center of the object to get that touch offset. Use that offset to calculate cam2hud with:

float cam2hud = touched.x / llTan(HorizontalTheta); // or

float cam2hud = touched.y / llTan(VerticalTheta);

 

It might be easier to script an object to position itself at a preset offset relative to your detected camera's position and rotation than to set things up by hand and mess around figuring out offsets and rotations afterwards. Kinda like this:

llSetPos(llGetCameraPos() + <10,5,4> * llGetCameraRot());

float HorizontalTheta = llAtan2(5,10);
float VerticalTheta = llAtan2(4,10);
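As a round-trip check of that calibration idea (in Python, not LSL, with hypothetical touch readings): with the object placed at the <10,5,4> offset above, both the horizontal and vertical components should agree on cam2hud:

```python
import math

# Thetas from the known in-world offset: 10 m forward, 5 m left, 4 m up.
horizontal_theta = math.atan2(5, 10)
vertical_theta = math.atan2(4, 10)

# Suppose clicking the object's center on the HUD reads back these
# (hypothetical) screen-centered touch offsets:
touched_x, touched_y = 0.35, 0.28

# Both components should then yield the same camera-to-HUD distance:
cam2hud_h = touched_x / math.tan(horizontal_theta)  # 0.35 / 0.5 = 0.7
cam2hud_v = touched_y / math.tan(vertical_theta)    # 0.28 / 0.4 = 0.7
assert abs(cam2hud_h - cam2hud_v) < 1e-9
```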

 

Edit: not entirely sure I got the axes right in llAtan2(). I haven't tested any of this stuff yet and haven't used the function in over a decade. Damn rust.

Edited by Myrmidon Hasp

20 minutes ago, Myrmidon Hasp said:

As for the test device, it would first need an object with a known position in relation to your camera's position in world following my previous diagram to calculate thetas inworld with llAtan2(). 

Then, without moving or rotating the camera, use your HUD and touch on the center of the object to get that touch offset. Use that offset to calculate cam2hud with:


float cam2hud = touched.x / llTan(HorizontalTheta); // or

float cam2hud = touched.y / llTan(VerticalTheta);

 


I am not really too sure how to calculate a theta, or what a theta is.

39 minutes ago, Myrmidon Hasp said:

It might be easier to script an object to position itself to a preset offset in relation to your detected camera's position and rotation than set things up by hand and mess around figuring out offsets and rotations afterwards. Kinda like this:


llSetPos(llGetCameraPos() + <10,5,4> * llGetCameraRot());

float HorizontalTheta = llAtan2(5,10);
float VerticalTheta = llAtan2(4,10);

 


Sadly, the calculations need to stay within the HUD (assuming this goes into the debug object). The debug item is just there so I can see where the ray lands.

This is the script that I have so far from your example (assuming this is what you meant). Sadly, I did something wrong somewhere, as I was not getting the expected result.

integer channel = -500;

default {
    state_entry() {
        if(llGetAttached()) llRequestPermissions(llGetOwner(), PERMISSION_TRACK_CAMERA);
    }

    touch(integer n) {
        float cam2hud = 1; // this is the distance from the camera origin to the hud.
        vector touched = llDetectedTouchST(0); // this is the vector of the detected touch. (I'm assuming it's relative to screen center)
        vector euler; //this is the euler we're going to convert into a rotation.
        rotation touchrot; //this is the rotation that points at touched from the camera position.
        
        euler.x = 0;
        euler.y = llAtan2(touched.x, cam2hud); // sets the horizontal theta given opposite and adjacent
        euler.z = llAtan2(touched.y, cam2hud); // Sets the vertical theta given opposite and adjacent
        
        touchrot = llEuler2Rot(euler); // converts euler into quaternion rotation
        
        list t = llCastRay(llGetCameraPos(),llGetCameraPos() + <50,0,0> * llGetCameraRot() * touchrot, []);
        
        
        llRegionSay(-54354,(string)llList2String(t,1));
    }
}

Basically, the final result I'm looking for is what's in this YouTube video. I have done everything except placing the waypoints, which requires the ray cast.

https://www.youtube.com/watch?v=9mPc_9yX2mM


There's something wrong with that link; the video gives an error.

When used in trigonometry, theta is the acute angle of interest in a right triangle. In my diagram it is the angle formed between the red and black dashed lines. I'll see if I can find a decent tutorial, or scratch one up, to explain it better.

The secondary object is only needed once, to accurately figure out the distance from the camera to the HUD before it gets hard-coded as cam2hud. I'm pretty sure that distance effectively changes with different fields of view and screen resolutions.

That code I posted isn't going to work without an accurate value for cam2hud, and it needs adjustments similar to Wulfie's applied to the values from llDetectedTouchST(): compensate for the HUD prim's dimensions and move the origin from the bottom-left corner to the center.
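On the field-of-view dependence: if the touch offset is normalized so the screen edge sits at ±1.0, standard perspective projection gives cam2hud = 1 / tan(FOV / 2). This is a hedged sketch in Python (not LSL); the 60-degree figure is the viewer's approximate default camera angle, and it may not be a coincidence that 1/tan(30°) = √3 ≈ 1.732, the fov constant in the first script in this thread:

```python
import math

# Effective camera-to-HUD distance for screen coordinates normalized to
# [-1, 1], from standard perspective projection: d = 1 / tan(fov / 2).
def cam2hud_from_fov(fov_degrees):
    return 1.0 / math.tan(math.radians(fov_degrees) / 2)

d = cam2hud_from_fov(60.0)           # ~default SL camera angle (assumption)
assert abs(d - math.sqrt(3)) < 1e-9  # 1 / tan(30 deg) = sqrt(3) ~ 1.732
```

Zooming the camera changes the FOV, so a hard-coded cam2hud would only hold at one zoom level.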


By the way, I don't think it's a reasonable assumption to say that there's a distance between "camera origin" and the "HUD."

The HUD is rendered orthographically onto the surface of the viewport; it has no depth. If you place an object at llGetCameraPos(), it will be at the exact spot your view/HUD is, not behind it. Basically, adjust Myrmidon's drawing by moving the camera point to the center of the HUD width.

Edited by Wulfie Reanimator

This problem may only arise much further on, but it's worth knowing about because I found it perplexing at first (and it completely stalled a project I'd worked on for months): llCastRay() intersects with the physics shape of objects the ray passes through. So, if Object A is nestled in a concave portion of Object B, and Object B has a Convex Hull physics shape, Object A may be occluded by the invisible convex hull of Object B.


8 hours ago, Qie Niangao said:

This problem may only arise much further on,  but it's worth knowing about because I found it perplexing at first (and has completely stalled a project I'd worked on for months): llCastRay() intersects with the physics shape of objects the ray passes through. So, if Object A is nestled in a concave portion of Object B, and Object B has a Convex Hull physics shape, Object A may be occluded by the invisible convex hull of Object B.

Yeah, I noticed that as well; I've had that problem in the past. Luckily, we're not really looking to distribute the HUD this will work with, so we won't have to deal with helping others make it work in their mesh sims. It'll most likely be used to help us with our pathfinding waypoints in our sim, so that we have a way of visualizing the AI's path. Since most of our sim is still built out of prims and we don't really use mesh in any of our builds, that isn't a big problem for us. Most of the mesh items we do use are given a basic cube physics shape.

After all my attempts to find a way to do this, I'm rather confused. Multiple attempts have left me in the dust.

