
Wulfie Reanimator


Everything posted by Wulfie Reanimator

  1. GTFO was interesting; it recently became available for purchase. Oh, not the same GTFO game?
  2. Apparently, and it doesn't surprise me that somebody did it first. My function is based on very basic graphics-programming mathematics. The viewer uses basically the same calculation for displaying the HUD objects themselves. The math in that version is quite a bit more obscure, though.
  3. To put it in simple terms, there are many different kinds of "lag." The term is ambiguous on its own. There's low framerate, there's delay in inputs/actions, there's slow/weak connection, etc. Some of it can be fixed by you, some of it can be fixed by LL, some of it can be your ISP's fault, and some of it can only be fixed by the people making stuff for SL.
  4. All it actually "does" is convert a 3D coordinate to 2D screen space. The main purpose for this is to show something on the HUD that relates directly to something in-world. For example, here's a simple HUD that overlays prims onto avatars when they are within view. (I used a repeating sensor and looped through each avatar.) - https://giant.gfycat.com/RepentantKlutzyBoar.webm
  5. It really depends on what your intention is. If you want to absolutely prevent an avatar from moving faster than X meters per Y time, create a "cage" that is slightly bigger than the avatar's hitbox and make it snap to the avatar's position every time the avatar touches the cage. This way, the avatar can kind of "push" the cage in the direction they want to go, but cannot go past it even if they have some kind of movement enhancers. I've only seen this approach in combat sims though, where oppressive tactics are kind of the norm. Another alternative for complete movement control is llMoveToTarget, where the time to reach some nearby position is long enough to be slow and smooth. This only really works on flat ground or for flying (and if the script is attached to the avatar), though. Detection of upward/downward slopes gets too fiddly with raycasting.
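     The llMoveToTarget approach might look something like this minimal sketch (the destination offset and damping time are placeholder values, and the script is assumed to be attached to the avatar):

     ```lsl
     // Minimal sketch: pull the wearer smoothly toward a nearby point.
     // A larger tau (damping time, in seconds) makes the movement slower and smoother.
     default
     {
         attach(key id)
         {
             if (id != NULL_KEY)
             {
                 vector dest = llGetPos() + <2.0, 0.0, 0.0>; // 2 m along region X (placeholder)
                 llMoveToTarget(dest, 2.0);
             }
             else
             {
                 llStopMoveToTarget();
             }
         }
     }
     ```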
  6. I'm like 30% sure I've posted something similar here before, but it's not in the past 10 pages at least... Here's a pretty simple function that takes any position in the world/region and converts it to a useful 2D position (relative to the attachment point) that can be used on the HUD.

     ```lsl
     vector World2HUD(vector region_pos)
     {
         vector cam_pos = llGetCameraPos();
         rotation cam_rot = llGetCameraRot();

         // Calculate the offset from camera to region_pos.
         // Positive X is the forward-distance to region_pos.
         // Y = 0 and Z = 0 are at the center of the screen.
         // +Y is left, +Z is up, when the HUD is at ZERO_ROTATION.
         vector relative = (region_pos - cam_pos) / cam_rot;

         vector hud;
         if (relative.x > 0) // Ahead of the camera
         {
             // "Perspective division"
             // Here, the forward-distance is used to divide the two other
             // components to "map" them to a lower dimension. (3D -> 2D)
             hud.y = relative.y / relative.x;
             hud.z = relative.z / relative.x;
         }

         return (hud * 0.87); // FOV ratio fix. ZERO_VECTOR if behind the camera.
     }
     ```

     An alternative calculation can be used to map the 2D coordinates so that +X is right and +Y is down (like it would be for OpenGL, etc.):

     ```lsl
     hud.x = relative.y / -relative.x;
     hud.y = relative.z / -relative.x;
     ```

     Essentially, if you're going to use this function in a HUD attachment, make sure the HUD is attached to Center or Center 2 and not rotated. If you want to use the alternative calculation, the HUD must be rotated <0, 90, 270>. Changing the orientations is not terribly difficult if you edit your attachment and display the local coordinates, so that you can tell which way the axes go. Here is an example that tracks an avatar (you must fill in the avatar's key yourself):

     ```lsl
     key target; // The key of the avatar to track.

     default
     {
         state_entry()
         {
             llRequestPermissions(llGetOwner(), PERMISSION_TRACK_CAMERA);
         }

         touch_start(integer n)
         {
             list target_data = llGetObjectDetails(target, [OBJECT_POS]);
             vector target_pos = llList2Vector(target_data, 0);
             vector hud_pos = World2HUD(target_pos);
             llSetLinkPrimitiveParamsFast(1, [PRIM_POS_LOCAL, hud_pos]);
         }
     }
     ```
  7. I'm sure the situation has been explained to you, whether you acknowledged it or not. If LL's systems detect some kind of suspicious activity on an account, it can be locked while they investigate it. This could be (for example, as you said) a sudden spike in spending from an account that hasn't been active for a good while, or big discrepancies in login locations, etc. The best things you can do are to ask if there's a way for you to prove your identity, or just wait while they investigate.
  8. I assume you mean the Marketplace? You're not guaranteed to have your ad show up immediately and for the whole duration of the subscription. At least it wouldn't make sense if that was the case, because there are lots of other merchants doing the same thing. Or maybe you're talking about something else.
  9. While it looks decent, it's very impractical if you expect any avatar to be walking in the rain. The planes block mouse clicks, so you cannot click on anything, even your own avatar, if there's a rainy surface between your target and your camera.
  10. Do you want smooth color transitions on a surface? Don't use repeating timers or complicated color math. Create a texture with the desired color gradient, and smoothly slide it across the surface. You can have as many colors as you want, it's easy to control, and it works great with small textures! You could even create a greyscale texture so it becomes tintable, while giving you control of the color's brightness.
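      A gradient slide like that can be done with a one-line texture animation, for example (the texture name "gradient" and the rate are placeholders):

      ```lsl
      // Minimal sketch: smoothly scroll a gradient texture across face 0.
      default
      {
          state_entry()
          {
              llSetTexture("gradient", 0); // assumed to be in the prim's inventory
              // SMOOTH scrolls the texture offset instead of flipping frames;
              // 0.05 is the scroll rate in texture-widths per second (placeholder).
              llSetTextureAnim(ANIM_ON | SMOOTH | LOOP, 0, 1, 1, 0.0, 1.0, 0.05);
          }
      }
      ```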
  11. Sure, if you first compare the value of llDetectedKey(0) with a specific avatar's key, in the touch_start event, you can block anybody else from getting the item. http://wiki.secondlife.com/wiki/LlDetectedKey
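      A minimal sketch of that check (the key and the inventory item are placeholders):

      ```lsl
      key allowed = "00000000-0000-0000-0000-000000000000"; // The avatar allowed to take the item.

      default
      {
          touch_start(integer n)
          {
              if (llDetectedKey(0) == allowed)
              {
                  // Give the first object in this prim's inventory (placeholder choice).
                  llGiveInventory(allowed, llGetInventoryName(INVENTORY_OBJECT, 0));
              }
          }
      }
      ```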
  12. But I love nits! And at least we don't have spammy gestures here on the forum. Signatures get close, but at least they're disabled by default.
  13. Opening a profile and clicking notices causes a UI sound to play. It could be that something on your computer is adjusting/balancing/prioritizing sounds, and thinking that SL is more important than your music stream (even if you're listening to it through SL).
  14. The placement of the mesh while not attached to an avatar is not accurate, because it is rigged, so its visual placement is based on the avatar's "skeleton," which is affected by your shape. You need to adjust the piercings after you have linked and worn the ears. (Your piercings won't move with the ears either, if the ears are animated, because your piercings aren't rigged to the ears. They might follow one of the ears, if the attachment point itself is animated.)
  15. There are two ways to do it.

      1. You combine the two linksets, then change the script so that it moves a list of specific prims. This method has a problem: if you link something new to the linkset, the link order will change and the moving part will break, unless you name all the moving links something specific and your script searches for which parts should be moving.

      2. You keep the two linksets separate, but each linkset has a script, and the scripts communicate with each other so that the moving part can align itself correctly with the static part, even if the static part is moved later. This is the worse option because it requires two scripts and two listens.

      The reason it didn't work how you wanted is that the script doesn't know which links belonged to which linksets before they were combined.
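      For option 1, the name-based search might be sketched like this (the prim name "moving" is a placeholder):

      ```lsl
      // Minimal sketch: collect the link numbers of every prim with a given name,
      // so the script survives link-order changes.
      list FindLinksByName(string name)
      {
          list links;
          integer i;
          integer count = llGetNumberOfPrims();
          for (i = 1; i <= count; ++i)
          {
              if (llGetLinkName(i) == name) links += i;
          }
          return links;
      }

      // Usage: list moving_links = FindLinksByName("moving");
      ```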
  16. I see you're a man of good taste in hand-writing. People look at me weird when I give them my 0.25 mm. If you replace llDetectedTouchPos and coordinate-checking (which is hard to maintain) with llDetectedLink, this is the correct way to do a multi-button HUD with one script. This way, you can have one texture for the HUD (or one per HUD page, if you don't want to figure out offsets) and only need to move the buttons around to get the exact placement/size you need.
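      A minimal sketch of the llDetectedLink approach (the button names and actions are placeholders):

      ```lsl
      // One script handles every button by the name of the link that was touched.
      default
      {
          touch_start(integer n)
          {
              string name = llGetLinkName(llDetectedLink(0));
              if (name == "button_next") llOwnerSay("Next page");           // placeholder action
              else if (name == "button_prev") llOwnerSay("Previous page");  // placeholder action
          }
      }
      ```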
  17. The easiest way to use rotations is to... not. More specifically, life gets easier when you don't touch quaternions (the "rotation" type, with 4 values) directly. Instead, you should define a rotation in normal XYZ degrees, like so:

      ```lsl
      vector relative_rot = <0, 0, 45> * DEG_TO_RAD;
      // "DEG_TO_RAD" is a conversion from "degrees to radians."
      // Rotations use radians, but don't think about it.
      // Just do the conversion and forget about it.
      ```

      So, now you have relative_rot in a format that can be given to llEuler2Rot, which converts the XYZ rotation into a proper rotation.

      ```lsl
      vector relative_rot = <0, 0, 45> * DEG_TO_RAD;
      rotation relative_r = llEuler2Rot(relative_rot);
      ```

      Now you have relative_r, which you can use to correctly apply a 45-degree rotation around the Z axis. Similarly, if you want to adjust some object's existing rotation, you can do the conversion in the opposite direction:

      ```lsl
      rotation object_r = llGetRot();
      vector object_rot = llRot2Euler(object_r) * RAD_TO_DEG;
      object_rot.z += 45; // Add 45 degrees to the Z rotation.
      llSetRot(llEuler2Rot(object_rot * DEG_TO_RAD));
      ```

      This doesn't directly answer your question, but hopefully this can guide you towards easier rotations. If not, I can get back to you later tomorrow.
  18. I don't think that's correct, but it's close. How do you calculate start? For example, if start is llGetPos (avatar pos at <10,10,10>), you're calculating:

      ```lsl
      raycast_start = <10,10,10> + <0.5, 0.0, 0.5>;
      raycast_end   = <10,10,10> + <60.0, 0.0, 0.5> * llGetCameraRot();
      ```

      Your raycast starts 0.5 meters above and to the East of the avatar, regardless of where they're facing. The ray might hit the user itself. Add the camera rotation to raycast_start as well and then it's correct. But if start is llGetPos * llGetCameraRot, you'd be calculating:

      ```lsl
      raycast_start = <10,10,10> * llGetCameraRot() + <0.5, 0.0, 0.5>;
      raycast_end   = <10,10,10> * llGetCameraRot() + <60.0, 0.0, 0.5> * llGetCameraRot();

      // When avatar rotation is <0, 0, 90> degrees:
      raycast_start = <-10,10,10> + <0.5, 0.0, 0.5>;
      raycast_end   = <-10,10,10> + <60.0, 0.0, 0.5> * llGetCameraRot();
      ```

      Which would be just completely bonkers. Debugging raycasts is definitely hard since there's no way to automatically visualize them. What I do is rez prims at the starting point, facing towards the calculated direction.
  19. You would get multiple results, but raycast doesn't cause damage on its own. Like @Fenix Eldritch said, you would have to process the results in a way that makes sense for you. If you used 3 rays in a triangle shape, for example, you could have 3 separate variables for each ray's result, and check those results in some set order to choose which ray is the "most important" for a hit: if RayA has a result, damage that avatar and ignore the rest; if RayA didn't hit, check RayB, etc. Also, regardless of how many rays you shoot (as few as possible! Rays can fail to cast completely if the sim is busy), you should figure out the expected maximum range for your weapon, then figure out the "width" of your raycasted shot, and remember that you're shooting at a target about 0.2 - 0.4 meters wide. Your maximum spread should not be much greater than the width of your target (<0.5 m) at maximum range. Parallel rays are the simplest solution: you won't need to calculate an angle, and you keep the full width of the shot regardless of distance. Ah, I had only tested it with an array of "visual lasers" that used raycast to determine their length. They seemed to form a spherical shape; close enough, I suppose, but interesting to see. This one even I didn't know about!
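      The priority-order idea might be sketched like this (the range, spread, and what you do with the hit are all placeholders):

      ```lsl
      // Minimal sketch: three parallel rays checked in priority order (A, B, C).
      key FirstHit(list ray)
      {
          // llCastRay's last element is the hit count, or a negative error code.
          if (llList2Integer(ray, -1) > 0) return llList2Key(ray, 0);
          return NULL_KEY;
      }

      default
      {
          touch_start(integer n)
          {
              vector origin = llGetPos();
              rotation rot = llGetRot();
              vector fwd  = <50.0, 0.0, 0.0> * rot; // 50 m forward (placeholder range)
              vector side = <0.0, 0.15, 0.0> * rot; // lateral spread, well under 0.5 m

              key hit = FirstHit(llCastRay(origin, origin + fwd, [RC_MAX_HITS, 1]));
              if (hit == NULL_KEY)
                  hit = FirstHit(llCastRay(origin + side, origin + side + fwd, [RC_MAX_HITS, 1]));
              if (hit == NULL_KEY)
                  hit = FirstHit(llCastRay(origin - side, origin - side + fwd, [RC_MAX_HITS, 1]));

              // "hit" is now the highest-priority target, or NULL_KEY if every ray missed.
          }
      }
      ```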
  20. Here's a small wrench to throw in: If the message that's received is NOT a valid vector, vReceived_vector will get ZERO_VECTOR as its value. So, if your script is always listening, or might also receive non-vector messages, you may want to verify that the message that's received is actually supposed to be a vector. The absolute simplest thing to start with is to check whether the message begins with < and ends with >, to at least have an idea of whether it could be a valid vector. The useful snippets section on the wiki has a convenient function to do everything.
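      The bracket check might be sketched like this (a real script should still use the wiki's full validation afterwards):

      ```lsl
      // Minimal sketch: cheap pre-check before casting a string to a vector.
      // (vector)"garbage" silently becomes ZERO_VECTOR, so filter first.
      integer LooksLikeVector(string msg)
      {
          return (llGetSubString(msg, 0, 0) == "<") &&
                 (llGetSubString(msg, -1, -1) == ">");
      }
      ```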
  21. Two fun facts: An avatar that is standing up is shaped like an oblong sphere for raycast. This sphere is smaller than the avatar's hitbox (from render metadata). An avatar that is "sitting on ground" is shaped like a pyramid with a flat tip. Shoot multiple rays, either parallel to each other or starting from the same point but diverging by some degrees.