You are about to reply to a thread that has been inactive for 1535 days.

Please take a moment to consider if this thread is worth bumping.

Recommended Posts

Posted

Looking at the big, complex HUDs of products like the ones provided with mesh bodies or mesh heads, I cannot figure out how they could work.
I mean, there are a lot of clickable areas with very complex shapes (not just a square or a circle), and very close to each other (with no gap between them).
As a noob scripter I cannot figure out how they can function: there are no invisible clickable prims, they don't (I think) compare mouse coordinates against a matrix of points to detect what you are clicking, and as far as I can see they don't pick up a color from the picture or do anything else I can imagine.

Does anyone know how they get that result?
If it's too complex an explanation to write down, could you provide a link or an info source I can read / study to find the answer?

Thank you (in advance) !

Posted

The function you'd use for that is llDetectedTouchUV. This returns a vector with the X and Y coordinates of the point on the texture that was clicked. Then you use a shedload of if/elses to compare those coordinates to the areas of the texture corresponding to the various controls. 
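A minimal sketch of that approach (the button layout here is hypothetical; real HUDs map many more regions, matching the rectangles to where the controls sit on the texture):

```lsl
default
{
    touch_start(integer total_number)
    {
        // UV coordinates of the touched point on the face's texture,
        // each component in the range 0.0 - 1.0.
        vector uv = llDetectedTouchUV(0);
        if (uv == TOUCH_INVALID_TEXCOORD) return; // viewer couldn't report a position

        // Hypothetical layout: two buttons occupying the top half of the texture.
        if (uv.y > 0.5)
        {
            if (uv.x < 0.5) llOwnerSay("Left button clicked");
            else            llOwnerSay("Right button clicked");
        }
        else
        {
            llOwnerSay("Clicked lower half at " + (string)uv);
        }
    }
}
```

For irregular (non-rectangular) controls the comparisons just get more elaborate, which is where the "shedload of if/elses" comes in.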

Posted

Some HUDs also contain complex mesh components, so the different touch areas are really different faces (mesh "materials") of arbitrary shape, detected with llDetectedTouchFace(). You can tell if that's what's going on by mousing over them in the edit tools with "Select Face" enabled. This is particularly handy when the response is to highlight the clicked part to show it has toggled, because there's a separate face to paint.
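A sketch of the face-based approach (the face numbers here are hypothetical; check your own mesh's numbering with "Select Face" in the edit tools):

```lsl
default
{
    touch_start(integer total_number)
    {
        integer face = llDetectedTouchFace(0);
        if (face == TOUCH_INVALID_FACE) return; // viewer couldn't report the face

        if (face == 0)
        {
            llOwnerSay("Power button clicked");
        }
        else if (face == 1)
        {
            // Tint the clicked face to show it has toggled on.
            llSetLinkPrimitiveParamsFast(llDetectedLinkNumber(0),
                [PRIM_COLOR, face, <0.5, 1.0, 0.5>, 1.0]);
        }
    }
}
```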

