
Detecting areas on prim faces.


Pedlar Decosta

You are about to reply to a thread that has been inactive for 2020 days.

Please take a moment to consider if this thread is worth bumping.

Recommended Posts

To add on to what Rolig posted: I found in the past that llDetectedTouchUV caused issues when used across different monitor resolutions. I'm not sure why, since in theory the texture coordinates should be the same no matter the resolution, as they are tied to the prim face. Not everyone has had this issue, but I did for a long time, and I now exclusively use llDetectedTouchST to detect prim face coordinates.
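For anyone following along, here is a minimal sketch of llDetectedTouchST in a touch event. ST coordinates are relative to the face itself (0,0 at the bottom-left, 1,1 at the top-right), independent of the texture's repeats or rotation:

```lsl
default
{
    touch_start(integer num)
    {
        // Face-relative coordinates of the touch; unaffected by
        // the texture's repeats, offsets, or rotation.
        vector st = llDetectedTouchST(0);
        if (st == TOUCH_INVALID_TEXCOORD)
        {
            // The viewer could not supply face coordinates for this touch.
            llOwnerSay("No touch coordinates available.");
            return;
        }
        llOwnerSay("Touched at s=" + (string)st.x + ", t=" + (string)st.y);
    }
}
```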


That does sound peculiar... I also wouldn't expect screen resolution to have any effect on the results of llDetectedTouchUV. What issues did you see? Is this something you can still reproduce? If so, list the steps - I want to see if I can hit the problem too. If either of us can reliably reproduce the problem, it would be worth creating a jira bug report.

The wiki page for llDetectedTouchUV does mention that the returned results will be different if the face's repeats and/or rotation settings are not at the default values, but I'm assuming you had already taken that into account, correct?
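To see that caveat in action, a quick side-by-side sketch: touch a face whose texture has non-default repeats or rotation, and the UV result diverges from the ST result while ST stays face-relative:

```lsl
default
{
    touch_start(integer num)
    {
        vector st = llDetectedTouchST(0); // face-relative; ignores repeats
        vector uv = llDetectedTouchUV(0); // texture-relative; scaled/shifted by
                                          // the face's repeats, offsets, rotation
        llOwnerSay("ST: " + (string)st + "  UV: " + (string)uv);
    }
}
```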


2 minutes ago, Fenix Eldritch said:

Hence why I asked them to clarify. I wanted to see if it was a misunderstanding or an actual issue. There is no harm in performing a sanity check.

There is no point. The issue described is user induced. The sanity check is simply reading the function's parameters.


Look at it this way: it could very well be a user error, but we don't yet have that information definitively. We don't know the exact conditions under which ItHadToComeToThis experienced their unexpected results. Again, it could have resulted from user error, but we don't know just yet. Rather than dismiss them out of hand and assume they tripped over the repeats/rotation caveat, I'd like to verify a little deeper.

You may believe it's a waste of time, which is fine. I'm the one asking for the extra info, so it's my time to waste. Even if it ultimately turns out to be user error, if nothing else it may help ItHadToComeToThis come away with a better understanding of the function and feel comfortable using it again. I'd say that's worth the time spent investigating.


  • 2 weeks later...

Ok. Much as I appreciate the help so far, I have realized that llDetectedTouchST and llDetectedTouchUV aren't going to work with projectile collisions. So I tried to use face detection to get the desired result, but using the example on the llDetectedTouchFace page, I get a -1 result. I tried a couple of different viewers, which didn't help. I'm trying to make a low-prim target using a mesh object. Can anyone offer advice or point me in the right direction?


You can't reliably detect which face of an object something has collided with, or where on that face.

At best, as approximations, you can use llDetectedPos in collision_start and move that closer to your target on one axis (because llDetectedPos gives you the center of the projectile, which may be some distance away from your target when the collision happens). This works best if your target is a simple shape, preferably flat.

Alternatively, you can use llCastRay from llDetectedPos towards the target itself. This is "more accurate" and works with more complex shapes, but has other quirks and drawbacks (listed on the llCastRay wiki page).
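A rough sketch of that llCastRay approach, assuming the script runs in the target itself: cast a ray from the projectile's detected center toward this object's own position and read off where the ray enters the surface. Exact list indices depend on the data flags requested:

```lsl
default
{
    collision_start(integer num)
    {
        // Center of the projectile at the moment of collision, which may be
        // some distance away from the target's actual surface.
        vector from = llDetectedPos(0);

        // Cast toward this object's center to approximate the entry point.
        list hits = llCastRay(from, llGetPos(),
            [RC_MAX_HITS, 1, RC_DATA_FLAGS, RC_GET_NORMAL]);

        // Last element is the status code: > 0 means that many hits.
        if (llList2Integer(hits, -1) > 0)
        {
            vector hitPos = llList2Vector(hits, 1);  // surface hit position
            vector normal = llList2Vector(hits, 2);  // surface normal at hit
            llOwnerSay("Hit surface at " + (string)hitPos);
        }
    }
}
```

This inherits llCastRay's quirks and throttling, as noted above, so treat it as an approximation rather than a guarantee.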

Edited by Wulfie Reanimator

3 hours ago, Pedlar Decosta said:

Ok. Much as I appreciate the help so far, I have realized the functions llDetectedTouchST or llDetectedTouchUV aren't going to work with projectile collisions.

Right.  I am sorry not to have made that clear in my initial response to your question.  The various llDetectedTouch functions only work in touch* events. You have to be much trickier to get reliable results from collisions.  If you're designing a target that people are always going to be shooting at, of course, you can make it a linkset with a gridwork of smaller sub-target areas (like a dartboard or an archery target) and then record a collision with one of them.  If your target is some unexpected enemy's random battleship, though, that's not a likely solution.
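The linkset approach above can be sketched like this: put the script in the target's root prim and identify which sub-target prim was struck via its link number. The ring names here are illustrative, not from the original post:

```lsl
default
{
    collision_start(integer num)
    {
        // Which child prim of the target linkset the projectile struck.
        integer link = llDetectedLinkNumber(0);

        // Assumes the builder named each ring prim, e.g. "bullseye", "outer".
        string ring = llGetLinkName(link);
        llOwnerSay("Hit ring: " + ring + " (link " + (string)link + ")");
    }
}
```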


Touches, you can locate. Collisions, you can't. This is a gap in LSL.

(I ran into this with pathfinding characters. They have one collision object, a capsule, and you can't tell what part of it hit something. You get collisions with the ground and any objects you're walking over. Distinguishing those from hits on obstacles is hard. I've used llCastRay pointed straight down, and that takes care of most cases. But when the character crosses onto a new surface like a door threshold or a rug, there's a hit that's not directly below the character. All this is part of a workaround for pathfinding bugs, though. It should be unnecessary.)
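The downward-ray workaround described above could look something like this sketch: treat a collision as a walking surface if it lies close to whatever is directly beneath the character. The 2.0 m ray length and 0.5 m tolerance are illustrative values, not from the original post:

```lsl
// Returns TRUE if collisionPos is near the surface directly below us,
// i.e. probably ground/floor rather than an obstacle.
integer isWalkingSurface(vector collisionPos)
{
    vector here = llGetPos();
    list hits = llCastRay(here, here - <0.0, 0.0, 2.0>, [RC_MAX_HITS, 1]);
    if (llList2Integer(hits, -1) <= 0) return FALSE;  // nothing below us
    vector below = llList2Vector(hits, 1);            // position of hit
    return (llVecDist(collisionPos, below) < 0.5);
}
```

As noted above, this handles most cases but still misfires on threshold-style hits that aren't directly below the character.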


Rolig said: "Right. I am sorry not to have made that clear in my initial response to your question."

That is quite alright, Rolig. The question was about touches AND collisions, for a couple of different projects, so the touch commands were very relevant. The archery target is just one of those projects. I have a working target already, but I was hoping to save prims by using mesh or even just a flattened cylinder or sphere. But thanks to everyone for clarifying that. I'll have a look at llCastRay for future reference at the very least.

