Kagehi Kohn

Everything posted by Kagehi Kohn

  1. Well, again, the rug doesn't work; it requires another prim, which I need to avoid if at all possible. Note, as a good example of why this is a better solution: it could be used to test "where" you are, for a HUD, so you can do special maneuvers. Right now, for example, not only does SL not support, say, making something a climbing surface, but scripting every damn thing you might want to climb is just absurd. But let's say you have a HUD which, when you enter the sim, talks to a server, which hands it a set of places where specific animations/actions can be chosen, such as being able to actually climb the ladders... Now, "some" things you could trigger using "within X distance from" plus which way you are facing. Say, instead, you want to be able to "creep" past a location automatically, like in a stealth game: you create a virtual bounding box, which your HUD detects you entering and leaving, lets you opt to crouch, and "triggers" the animation needed (as well as, say, keeping you moving in the right direction, unless you persist, in which case it lets you stand up instead). Also, all the prior code is dead. Here is a different version (it still requires translating the AV position to test, but is much simpler, and actually should work). No complete code yet, but here is the idea:

     // Move our AV's location, so that the equations can "assume" that our
     // positions are relative to <0,0,0>. (x, y, z below are AVPos.x, AVPos.y, AVPos.z.)
     AVPos = llDetectedPos(0) - llGetPos();
     // Bounding box planes: right 5, left 3, top 2, bottom 0, front 4, back 1,
     // i.e. 8x2x5 meters in size.
     right  =  1*x + 0*y + 0*z - 5;
     left   = -1*x + 0*y + 0*z - 3;
     top    =  0*x + 1*y + 0*z - 2;
     bottom =  0*x - 1*y + 0*z - 0;
     front  =  0*x + 0*y + 1*z - 4;
     back   =  0*x + 0*y - 1*z - 1;
     // Not tested in-world, but with the plane normals written this way, each
     // expression comes out <= 0 when the point is on the inner side of that plane:
     if (right <= 0 && left <= 0 && top <= 0 && bottom <= 0 && front <= 0 && back <= 0)
         llOwnerSay("Inside box.");
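     The same plane idea can also be written with dot products, so the plane normals do not have to lie along the world axes. This is just a sketch of the technique, untested in-world; the function name and the list layout are mine, not from the post (in LSL, vector * vector is the dot product):

     // Returns TRUE when p is on the inner side of every plane.
     // Each plane is a normal n plus an offset d, inner side meaning n*p - d <= 0.
     integer insideHalfSpaces(vector p, list planeNormals, list planeOffsets)
     {
         integer i;
         integer count = llGetListLength(planeNormals);
         for (i = 0; i < count; ++i)
         {
             vector n = llList2Vector(planeNormals, i);
             float d = llList2Float(planeOffsets, i);
             if (n * p - d > 0.0) return FALSE; // outside this plane
         }
         return TRUE;
     }

     // The 8x2x5 box above would be the six planes with
     // normals <1,0,0>, <-1,0,0>, <0,1,0>, <0,-1,0>, <0,0,1>, <0,0,-1>
     // and offsets 5.0,   3.0,     2.0,     0.0,      4.0,     1.0.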
  2. Sigh.. OK, for anyone that didn't see it (like me): my original code doesn't work at all, because I am testing "two planes/sides", while the original math only handled a single plane. I.e., it dealt with only X and Y, not Z. I made the mistake, in directly copying the code, of only using the X and Y of each point, even when I **should be** testing the Z... Need to rethink how to fix that...
  3. Yeah, well. There is that possibility. I had considered making myself one, and may still, but for now I am using a prebuilt I got from the market. Strictly speaking, there is no reason not to use the box trick I am trying, though. It has advantages, in that, if I can make it work right, the "box" can be at any orientation or location, not even "connected" to the prim involved at all. My only worry was that while the math method works regardless of which way the box is pointed, in this specific case the problem wasn't "is it in the box?" so much as "where are the corners of the box?" And, in fact, I suppose I could have multiplied the offsets by the rotation to get my box corners, then tested against that, using the math. The non-math version, unless you "unrotate" it, I suspect has a different problem, illustrated below. Green is where the actual box you are trying to test is, while red is what you end up with if you don't "unrotate" the box, and the AV location, so they are parallel to the axis you are testing. In other words, unless your object is facing exactly +/-X and +/-Y, without rotation, your "test" is going to be looking at a box that isn't actually the size your corners suggest. The math version doesn't care how the box is rotated, just whether the place you are trying to find is in or out of the resulting box. But, as I said, in that case your problem is figuring out where your corners should be, since they are not going to be directly on a +/-X and +/-Y, never mind +/-Z, axis line either. But, thinking on it, it's probably as simple as doing "corner = center + (offset * rotation)". I.e., the direct opposite of what is being done in the examples people gave. I think... Umm. Note, in the case of using it for "any" orientation, of course, I need six corners: four for the base, then two more for at least one side. The only reason I can get by with two, otherwise, is that my box isn't rotated.
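     The "corner = center + (offset * rotation)" idea translates directly to LSL, where multiplying a vector by a rotation rotates it into world space. A minimal sketch, untested; the half-size numbers are illustrative, not from the post:

     vector center = llGetPos();
     rotation rot = llGetRot();
     vector half = <4.0, 1.0, 2.5>; // half the box dimensions (example values)

     // Rotate each local corner offset by the prim's rotation, then add the center.
     vector cornerA = center + (<-half.x, -half.y, -half.z> * rot);
     vector cornerB = center + (< half.x, -half.y, -half.z> * rot);
     vector cornerC = center + (< half.x,  half.y, -half.z> * rot);
     vector cornerD = center + (<-half.x,  half.y, -half.z> * rot);
     // ...plus two more corners with +half.z, for the six corners mentioned above.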
  4. This is a prim-heavy sim, where the area being used is "rented", and every extra prim counts. So, sorry, but no, adding extra prims is usually the worst option. If I could find a way to make the thing 0 prims, instead of 1, I would. lol
  5. Not really. That gives the bounding box for the object. Since I am not sitting on the object, it doesn't include the AV. And since I am only using a single prim (which is to say, the transparent blue box is not "real" or "attached" to the build, but merely virtual), the bounding box will be the size of the prim, not the size of the detection box. However, the "isInPrim" code seems to do the same thing that Talia's does, more or less, just using the existing bounding box instead. In principle, it's the same thing, just the wrong box. But I think it gives me two options. One option is to use the math method, and the other, a bloody lot of if/then statements. Hard to say which one is better, in terms of script execution. Way back, when I was dealing with compiled languages, or even interpreted instead of "run on demand", the math would have been slightly faster. lol In any case, I think this one single line is, in fact, the bit I was missing (and it's much simpler than I imagined it was going to be):

     vPos = (vPos - llGetPos()) / llGetRot();
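     Put to use, that one line reduces the whole problem to an axis-aligned comparison in the prim's local frame (dividing a vector by a rotation un-rotates it). A sketch of how it might look in a touch handler; untested in-world, and the box size and function name are mine:

     vector boxSize = <2.0, 5.0, 3.0>; // assumed size of the virtual detection box

     integer isInDetectionBox(vector worldPos)
     {
         // Translate so the prim is the origin, then undo the prim's rotation.
         vector local = (worldPos - llGetPos()) / llGetRot();
         return (llFabs(local.x) <= boxSize.x / 2.0)
             && (llFabs(local.y) <= boxSize.y / 2.0)
             && (llFabs(local.z) <= boxSize.z / 2.0);
     }

     default
     {
         touch_start(integer total_number)
         {
             if (isInDetectionBox(llDetectedPos(0)))
                 llOwnerSay("Inside box.");
         }
     }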
  6. Hmm. Maybe. It seems you are doing what I was suggesting, which is, in effect, "removing" the rotation, then testing against a box at ZERO_ROTATION (<0,0,0,1>). It's probably doing the same thing without math, maybe. Will have to do some testing.
  7. Hrrm.. Not sure if this works, but it does with a square, and by testing two sides. It "may" work with any orientation, but I would still need to compute the correct locations for my box corners, which will have to be related to the prim's rotation, while the pseudo-code I have assumes the thing is at a known angle.

     float triangleArea(vector A, vector B, vector C)
     {
         return (C.x*B.y - B.x*C.y) - (C.x*A.y - A.x*C.y) + (B.x*A.y - A.x*B.y);
     }

     integer isInsideSquare(vector A, vector B, vector C, vector D, vector P)
     {
         if (triangleArea(A,B,P) > 0 || triangleArea(B,C,P) > 0 ||
             triangleArea(C,D,P) > 0 || triangleArea(D,A,P) > 0)
             return FALSE;
         return TRUE;
     }

     integer isInsideBox(vector bbox1, vector bbox2, vector bbox3, vector bbox4)
     {
         vector P = llDetectedPos(0);
         vector A1 = <bbox1.x, bbox2.y, bbox1.z>;
         vector B1 = <bbox2.x, bbox2.y, bbox1.z>;
         vector C1 = <bbox2.x, bbox1.y, bbox1.z>;
         vector D1 = <bbox1.x, bbox1.y, bbox1.z>;
         vector A2 = <bbox3.x, bbox4.y, bbox3.z>;
         vector B2 = <bbox4.x, bbox4.y, bbox3.z>;
         vector C2 = <bbox4.x, bbox3.y, bbox3.z>;
         vector D2 = <bbox3.x, bbox3.y, bbox3.z>;
         return isInsideSquare(A1, B1, C1, D1, P) && isInsideSquare(A2, B2, C2, D2, P);
     }

Apparently, this is supposed to work because if P is "inside" the squares, the result of each area calculation will be negative (or zero); a positive result means it is outside the squares, and thus not inside the box. Or at least according to http://www.emanueleferonato.com/2012/03/09/algorithm-to-determine-if-a-point-is-inside-a-square-with-mathematics-no-hit-test-involved/ where I adapted this code from. So, assuming it does work, I just need some way to figure out where bbox1, bbox2, bbox3, and bbox4 "should be" based on the proper rotation. Hmm. Multiply them by the vector normal, for the direction, maybe? Only... wouldn't that, in some cases, result in multiplying some of the numbers by zero, and losing the result? Like I said, rotation stuff is "not" what I am good at.
lol Might be easier to figure out what the "unrotated" locations would be for the AV and prim, then compute the above?
  8. Well, actually, I want this to be more universal, so I can borrow it for, say, doors. If you have something the size of, say, a garage door, or bigger, then an "arc" is going to leave areas on either edge of the door which will fail the test. That is why I was looking at using a bounding-box type test. I think, if I had the vector and distance to the AV, and the rotation of the object running the test, then I could "normalize" the result, by rotation, into a state where, in effect, everything is exactly lined up, as though it had no rotation at all; then the bounding test is simple. It's only complex if/when the prim you are testing against is at an angle. Just... my understanding of rotation stuff in SL sucks, like, more than anything else I know how to do (or rather, don't). lol
  9. Umm. I think you missed most of my point. That will detect which side they are on, not "if they are in the box". I.e., it is "one part" of the problem. Maybe an image would help. The guy with the blue head is "in" the detection box. The two red guys are a) on the wrong side of the counter (green), in one case, and b) not in the detection box, in the other case; but all three are on the "correct side", and "close enough to be in range". So only the blue guy should actually "be" allowed to access the safe. Also, since it's detecting touch, the sensor really isn't necessary. It's going to be using the detected position from the toucher (or, if needed, by getting that info some other way than by sensor, which is kind of redundant, since I already know who I am looking for).
  10. Ok, seems like trying to get help in-world can be a pain with some of this stuff, and I don't have an immediate need to fix this, so I will ask here. I recently created a script for a safe, which lets you drop cards in, or remove them, as a means of tracking what gets put in by faction members, or, if someone gets the code/figures it out, what gets stolen from inside of it. However, while I have some script sitting someplace that can, say, tell "which side" of the resulting object they are on (I used it in a door design), and how close they are, I want something more precise for this, since it's behind a counter, and I want it to work even if I later re-orient the safe to point a different direction. So, here was what I was thinking, as pseudo-code:

     vector pos = llGetPos();
     float left = 0.0;
     float right = 2.0;
     float length = 5.0;
     float top = 1.5;
     float bottom = 1.5;
     // Assuming that the "front" is in the Y direction.
     // float boxlen = llGetScale().y / 2;
     // vector bbox1 = <pos.x - left, pos.y + boxlen, pos.z - bottom>;
     // vector bbox2 = <pos.x + right, pos.y + boxlen, pos.z + top>;
     // vector bbox3 = <pos.x - left, pos.y + boxlen + length, pos.z - bottom>;
     // vector bbox4 = <pos.x + right, pos.y + boxlen + length, pos.z + top>;
     // cpos = llDetectedPos(0);
     // {stuff to figure out if cpos is actually in the box formed by bbox1, bbox2,
     //  bbox3, and bbox4, even if it's bloody rotated 32.5356 degrees, or something..}

I mean, I know how you go about it if the thing is at a nice 90-degree angle, or something, but how do you manage it if it's not? Well, other than the near insanity of, say, normalizing all the variables into a form in which all of them, including the AV location, are oriented so they all line up with such a nice, neat direction? lol This is, definitely, a bit more complicated than just asking "are they in front of the thing, and close enough to be opening it?", since, well, for consistency, I don't want them doing that if they are a) on the right side, b) close enough, but c) on the wrong damn side of the counter. ;) And that is assuming I don't end up having to move it, messing up all of the resulting assumptions about where the places you can't be are. Heck, in all honesty, it's also, logically, a problem with, say, a door in a hallway. And detecting them in an "arc" of the door means the closer you are to the door, and the wider the door, the more of a "gap" you have where they could be standing practically on top of said door and still be "not in the arc". So, testing a script-defined bounding box would be better, if I can work out how to do that.
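     For what it's worth, here is a hedged sketch of one way that pseudo-code could be completed: move the toucher's position into the safe's local frame first, so the zone stays put even if the safe is later re-oriented. Untested; the function name is mine, and the numbers are the example values above:

     float left = 0.0;
     float right = 2.0;
     float length = 5.0;
     float top = 1.5;
     float bottom = 1.5;

     integer toucherInZone(vector worldPos)
     {
         // Undo the safe's position and rotation, so the zone is axis-aligned
         // in local coordinates no matter which way the safe is turned.
         vector local = (worldPos - llGetPos()) / llGetRot();
         float front = llGetScale().y / 2.0; // the safe's front face, in local Y
         return (local.x >= -left)   && (local.x <= right)
             && (local.y >= front)   && (local.y <= front + length)
             && (local.z >= -bottom) && (local.z <= top);
     }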
  11. This would be a very nice thing to have. Though, I would definitely like the next application that gets it to be Wings3D, simply because of two factors: 1) its interface is simple, and 2) while there are some quirks and bad assumptions it makes, like assuming you don't want to work with floating surfaces, and can't UV them (like you do to create a working door in Blender), there is a version that is slowly improving that includes Carve3D functions, so you can produce CSG unions, differences, and intersections with it (makes building a lot less of a hassle, since you just cut a hole in something with something else). I just got through spending several days creating three distillery stacks, which didn't *quite* come out as good as I would have liked, and which, once resized properly (I do wish the exporter could set units..), ended up with an impact of 39. One of them I can probably drop; one of them ran into a glitch where it refused to upload without physics set to maximum; and the third, I think the exporter fouled (it's supposed to have a hole, and be hollowed out inside, but part of the hole is blocked with mesh that isn't supposed to be there). I was expecting... I don't know, maybe 20? lol The ability to select an object, and run it through something that could have said, "About 7 for that one, 15 for the next one, 16 for the one you went nuts on when creating a handle on it...", would have told me, "You need to rethink this just a tad." lol
  12. Ooooh! I take that back, there is an ability to use custom camera shapes in the new Beta version. It's even possible to use a mesh to do this. So, in principle, you could import your mesh as both the object and the camera, then "bake" the texture you add to the mesh via the process. This could be just what I am looking for.
  13. Still seems, unfortunately, to be something "hideous" about it. I hope they come up with something more logical in linking your texture with what you are doing in the model. It's driving me nuts. Also, I found that all the "drawing" modes are, for my purposes, just about useless. Vertex is only helpful if you have a lot of them; otherwise it isn't very helpful. Texture seems to paste textures on, not "draw", exactly, and that isn't even close to what I wanted. The other one that is supported is just about as pointless. What I was hoping to find is something more like having the mesh unwrap to a "draw area", then letting you layer stuff onto that, so that you can slide things around, or draw on the image, rather than the model. In other words, more like if you were painting a house, and "unwrapped" the whole house to a flat surface, while still being able to see a version of it in the original form, in the same application. The closest Blender supports is unwrapping the mesh to a template, saving that, then using another application to edit the resulting image. That works, sort of, but you lose the ability to clearly see "how" it's been unwrapped, and where on the thing you are drawing. For any object that has a sort of amorphous shape, or no clearly distinct details by which you can really tell which side you are dealing with, not being able to see, and undo, a mistake as you draw, when you are drawing on the wrong thing, is bad. And, of course, due to how the existing drawing functions work, there doesn't seem to be a sort of "just fill this face with this color" function. It's literally the one thing missing. You can slap a texture in there, but you can't, say, make your template by making everything on the left side blue, the right red, etc. The only function that "sort of" does that is the vertex paint, and it "overlaps" into other areas, instead of just painting the faces. Sigh.. I'll figure something out.
I actually thought of a solution, maybe, but it's a) probably not as feasible as I am thinking, and b) maybe isn't even supported. Someone came up with a trick to try to generate sculpties via POV-Ray. It works, sort of. You: 1. Create a spherical camera at the center. 2. Place your object in the center, with the "no image" option set, so the camera can't "see" it. 3. Make the object ambient, with a texture that changes color based on the "normal" at each point. 4. Place a large sphere around this, which has reflection. Since the object does produce a reflection, just not an image, the camera records a perfect sculpty map, sort of. It doesn't work well with anything too complex, and such maps have too few points to produce sane results, most of the time, without a lot of masterful tweaking. But I had the thought: if you could make a camera with the same basic shape as your object, and a reflector, such that you are "recording" a more exact match of the original, you would get the more complex details... Well, in theory anyway. Still, even without that, you could produce an image that was, say, 800x800, instead of 32x32, then parse that bitmap through something that converted the colors into a mesh, and then edit the result to trim it down to something manageable. The real value of having a custom camera shape, though, would be, in principle, being able to generate two images: one the "displacement", which you convert to a mesh; the other a "texture render", which would give you a perfect UV map, with all of the detail. And since you would be generating it without the need to even open Photoshop, you could make "all" of the details using purely 3D objects, including the texture itself (i.e., add in really fine things, like screws, panels, etc., by rendering them in the "texture" step, while only producing the less detailed "mesh" from the much simpler, general object you plan to apply all those bits to).
Unfortunately, I don't think you can use such custom shapes for the camera... And there may be any number of other problems making it work. Still, it would be a seriously interesting trick, if it worked.
  14. Yeah. Hadn't looked recently. When I got the book, they were "getting there". Good to know.
  15. Nice. Funny thing is, I have a book for 2.6, which isn't out yet. lol Need to look at how it works. The prior versions of Blender gave me headaches. The new one seems like they finally got the idea that people need to be able to use the damn thing, not just say they have it installed.
  16. "Now about layering. I always thought that most of the 3D programs would support texture layering and layer baking. I can only tell for Blender, and there it is possible. With the Blender 2.5 release I believe that the number of stackable textures is arbitrary and only limited by your computer power. So you could create several texture layers, apply them as you like (use as filters, or just mix, add, multiply, whatever) and then bake the whole thing to your final texture."

Right. So, bad and good news. lol Being able to, say, slap something with the "object" I want onto a layer, where every other part is transparent, then baking a final, is not too different from what I would have done in POV-Ray. The only big issue, in both cases, has always been making sure you layer them in the right order. However, at the same time, as you say, without going to some super high end, and expensive, application, it's non-trivial to "paint" on the object itself. Blender does, I think, support it, but as near as I can tell, it's probably not worth much beyond generating a "template", which you would then take back into Photoshop (precision seems to be a handicap when it comes to drawing tools in 3D apps; you can whitewash the fence, so to speak, but not paint a mural on it) to generate the actual image. Mind, I had some discussion of exactly this sort of thing with the people on the POV-Ray forum, in terms of what would be nice to be able to do. Someone had modified the code to have it generate mesh data (very preliminary), based on the mathematical intersections generated during the program's pre-trace. This meant that, basically, you could produce a sphere using real math, and have it generate an "approximate" of that true sphere in mesh, and thus any other structure, in principle, including very complex objects, or complex math, like isosurfaces.
Examples, generated from "single equations": http://home.no/t-o-k/povray/Isosurface-Rotated_Noise.jpg http://www.3dplumbing.net/tutorials/rockspov/index.html There are better ones, but finding them, since I don't remember where they were... lol My thought was that it would be nice if it also produced the texture map from the result, and there was a "level of detail" function, where anything below a certain threshold was mapped to texture instead of mesh. The former isn't possible; the latter kind of works already, since one of the newer features is "object textures". In principle, this is a single-color, no-other-texture "mapping" of the lighting and other effects onto the final surface, based off of the 3D object/objects you use as the "texture". You could think of it a bit like rendering, say, the pipes that need to be inside of a space as an image, then mapping that image onto the final object, so it "looked" like it was really there. Mind, it's still possible to do something like that, in a roundabout way, but you run into considerations on how to "get" the image you need, in the right size, etc., so you can then apply it separately. In general, I know the tricks for a lot of this, in principle, but in practice... they can be a serious pain when you have no experience "applying" them, and as I have been using things like POV-Ray, off and on, experimentally, since back around version 1.0, I have gotten real sick of, "Well, you can't do that, but if you do A, then F, then M, then export to Q, and do S to it, you can sort of get B, which almost gets you to X, where you intended to be in the first place." Whimper!! lol
  17. Ok. The closest I have ever come to mesh, so far, has been using Wings to produce sculpties, and then trying to "hack" a texture out in Photoshop, without relying too much on something like unwrapping the thing. Now that mesh is supported, I have a few questions: 1. Just what texture options actually work with SL? Just UV-mapped, or does it support things like bump maps and other attributes? The information on how to get mesh out of a 3D app and into SL doesn't exactly cover what is supported, and what isn't. 2. Is there any *sane* way to apply bits of stuff to a UV map (i.e., build the map in the 3D app), rather than having to export some complex, hard to understand, and non-trivial "unwrapped mesh"? The second one annoys the hell out of me, frankly, because you have, say, a texture for a panel, which you want to add to the final texture; but while it's fairly trivial to slap that in place in Photoshop, it's way less than trivial to slap it onto what may be a curved surface of a UV map layout. This is one of those things that would be, in principle, trivial in a real raytracer: just slap the texture on, as a layer, then adjust the position until it's in the right place, then render. The problem, of course, with something like SL, is that you have to "render" the texture into a map *before* you apply the thing. The second problem is that most applications, as far as I know, don't let you layer textures, so you can't just map your base on, then something else, then your other, finer details, then "bake" that into a final result. Or, can you? Frankly, I am clueless about mesh, with respect to texturing the dang things.
Nearly all 3D I have ever done has involved avoiding it, in favor of either basic colors with no texture, procedurals, or full raytrace systems, where Constructive Solid Geometry and layering make getting things right bloody trivial (especially since you don't have to worry about making all the detail work as texture, instead of just "building" the details as they would really exist). Wrapping my head around how to wrap a texture around something, where the "texture" is flat and the object isn't, is just... nuts to me. lol
  18. I think people are so used to producing bad design that finding good mesh is going to take a while. lol That said, there are some people working on some insane stuff: http://ccs-gametech.com/news_tabs.php?cat=65&readmore=115 It's just going to take time for people to get a handle on being able to build stuff that actually works now. And, sadly, lighting it, using projected images, and a few other things are going to require some people finally getting better hardware. My own system isn't bad, but it can't pull a sane frame rate *and* also do all the new illumination and stuff (hell, last time I logged in I was ruthed for an hour, with the client saying it was below 10 frames, but I hadn't rebooted XP in a few days either).
  19. I keep hoping for a time when this becomes less of a problem. However, "sorting" is basically a shortcut. While you can still get things like coincident surfaces in full raytracing, you don't get alpha sorting problems. The reason is that you hold the "whole scene" in memory at once in those systems, and then trace back, based solely on where the surfaces are compared to the camera, mathematically. With graphics cards and other "real time" systems, you are sorting out things you can't see, and re-sorting every surface, before running the math. You are not really "tracing" anything; you are only picking out the bits that are "visible". Everything else is thrown out by the engine. Hence, if the method used to "sort" which bits are visible throws out the wrong things, or puts them in the wrong order, etc., things go wrong. Unfortunately, despite all the advances made, there are a) no cards that use true tracing, and b) nothing that can bypass this issue in anything like real time, though the chip makers keep working towards a sort of compromise system that "kind of" does so. It just can't, yet, get the frame rate needed. Ironically, since SL rarely, if ever, runs at anything like the frame rate a normal game does, we could probably get by with it, if it were even available. lol