
Helium Loon

Everything posted by Helium Loon

  1. There are a few pieces of software out there designed to take an image and 'wrap' it onto a 3D object appropriately. Most are a bit too pricey for hobby or one-time use, though. One of the apps from the Kanae Project, Somato, projects flat textures onto a sculpt or OBJ file and generates the resulting textures. If you have the right pictures of your face (and a copy of the SL avatar's head object), you can project your face image onto it and build it yourself. It may require some editing, due to hidden surfaces and such, but it can be done. And Somato isn't terribly pricey. http://kanae.net/secondlife/somato.html
  2. Sorry to say, you can't. The way SL handles tinting is by taking the prim color and multiplying it with the texture color, whether the texture has any transparency or not.
  3. I've been considering pre-ordering the Dev Kit and SDK. I agree, this would be amazing with SL (as well as quite a few MMOs and FPS games.) I think the Oculus developers need to add a mic and headphones into the design (or at least a convenient way to plug in earbuds and a mic) so voice can be used, as the keyboard will NOT be viewable. And having a 3D controller (like the spaceball/spacemouse/spacepilot stuff) is almost mandatory. Having support for those in the viewer would be a big benefit as well.
  4. You may be able to find someone to do some of that, but I'm sorry to say that some of it cannot be done with LSL scripting. LSL cannot write notecards, so generating custom 'receipts' with amount paid and date/time in them isn't possible. While tracking payment time, amount, and user is easy, LSL is limited to 64k for data and program space; without an external database, keeping that data in the prim will quickly get too big. As long as the lists can be cleared after each email is sent, and the 'regular' periods are short enough, that part isn't a problem. The first two requirements are easy. It's 3 and 4 that have problems. The problems with 3 are not insurmountable, but 4 is just not possible with LSL.
  5. I think the question is whether you can have one animated texture play through, then a different one play through, and so on. Yes, it's possible, but it means you need to use a timer script. llSetTextureAnim() takes both a length and a rate, so your script already knows how 'long' the animation is. Start it playing without looping, and set a timer for the total length. In the timer, set it to the next animation (and reset the timer to the new animation's 'length'.) If you have more than two, you can use a list of animations and parameters for them. Then just keep a counter to track which one is currently playing.
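A minimal sketch of that timer approach, assuming placeholder texture UUIDs, frame layouts, and rates (the real values come from your own animated textures):

```lsl
// Cycle through several animated textures, playing each once through.
// Stride-4 list: texture UUID, X frames, Y frames, rate (frames/sec).
// The UUIDs and frame counts below are placeholders.
list gAnims = [
    "texture-uuid-1", 4, 4, 10.0,
    "texture-uuid-2", 8, 2, 12.0
];
integer gCurrent = 0;

startAnim(integer idx)
{
    string  tex  = llList2String(gAnims, idx * 4);
    integer xF   = llList2Integer(gAnims, idx * 4 + 1);
    integer yF   = llList2Integer(gAnims, idx * 4 + 2);
    float   rate = llList2Float(gAnims, idx * 4 + 3);
    llSetTexture(tex, ALL_SIDES);
    // No LOOP flag, so it plays once through
    llSetTextureAnim(ANIM_ON, ALL_SIDES, xF, yF, 0.0, (float)(xF * yF), rate);
    // Total play time = number of frames / rate
    llSetTimerEvent((xF * yF) / rate);
}

default
{
    touch_start(integer n)
    {
        gCurrent = 0;
        startAnim(gCurrent);
    }
    timer()
    {
        // Advance to the next animation, wrapping around the list
        gCurrent = (gCurrent + 1) % (llGetListLength(gAnims) / 4);
        startAnim(gCurrent);
    }
}
```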
  6. You can't change the mesh itself via scripting. Changing the textures is relatively easy, simply using llSetTexture(), llSetLinkTexture(), or llSetLinkPrimitiveParamsFast(). Each 'material' in the mesh is a separate texture 'face' on the resulting prim. Changing makeup and such (i.e., the textures) would just require a library of texture UUIDs the script could switch between. If it needed to change shape, one could create 'copies' of the head (each consisting of ONE material) and link them so each has a different material. Then, by setting all but the 'active' one to 100% transparent, you can 'change' the appearance of the mesh. To do any of this from a HUD would require the HUD to communicate with the mesh object's script via llSay() or llRegionSayTo().
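A sketch of the texture-switching side, assuming a made-up channel number and placeholder UUIDs. This goes in the worn mesh; the HUD would llSay() an index on the same channel:

```lsl
// Worn mesh script: listens for makeup-change commands from a HUD.
// The channel number and texture UUIDs are placeholders.
integer CHANNEL = -482915;   // assumed private channel shared with the HUD
list gMakeup = [ "makeup-uuid-1", "makeup-uuid-2", "makeup-uuid-3" ];

default
{
    state_entry()
    {
        llListen(CHANNEL, "", NULL_KEY, "");
    }
    listen(integer chan, string name, key id, string msg)
    {
        // Only accept commands from objects owned by the wearer
        if (llGetOwnerKey(id) != llGetOwner()) return;
        integer idx = (integer)msg;   // HUD sends the library index as text
        if (idx >= 0 && idx < llGetListLength(gMakeup))
            llSetLinkTexture(LINK_THIS, llList2String(gMakeup, idx), ALL_SIDES);
    }
}
```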
  7. Considering the breadth of the experience you are asking for, and for full-time dedication, you might want to indicate what kind of salary range you are willing to entertain. What you are describing is not just a scripter, but also an IT professional and developer. If you really want full-time dedication and performance, you can expect to pay professional wages. Are you prepared to pay $30k-$60k USD (or equivalent) per year, DOE? If not, you might want to also include the expected time involvement on a weekly or monthly basis to earn what you are planning to offer. Most professional developers contract at $45-$90 USD per hour, higher for very short-term work or such. If you are planning on paying a percentage of sales to offset the dev cost, how much? For how long? You're looking for a lot, so you might want to give a few more details......
  8. As has been noted already, the xCite people have done this. Not that a little healthy competition would be a bad thing! To have the remote USB device controlled from within an LSL script in world: (1) LSL script opens a connection to a webserver somewhere (via an llHTTPRequest() call), passing data to it in the body or header. (2) Webserver acts on the data passed into it from the LSL script, changing values in a server-side DB. (3) USB device driver uses IP to make queries over HTTP to a webpage front-end on the same webserver, retrieving updates in the server-side DB for THAT particular registered device. (4) USB driver alters the function of the connected USB "peripheral". Obviously, this is something where having authentication is probably VERY important.....so there will be some additional steps in some of those. But that's the basics of it. Alternatively, you can have an in-world script set up as an HTTP server, which is then accessed much like in (3) above, but that limits some flexibility and also introduces various other problems.
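Step (1) might look something like this in LSL, with a made-up URL and parameter names standing in for whatever the real web service expects:

```lsl
// Sketch of step (1): push device state to an external web service.
// The URL and the form field names here are assumptions, not a real API.
string SERVICE_URL = "https://example.com/device/update";

default
{
    touch_start(integer n)
    {
        key user = llDetectedKey(0);
        // POST the avatar key and a value for the server to store in its DB
        llHTTPRequest(SERVICE_URL,
            [HTTP_METHOD, "POST",
             HTTP_MIMETYPE, "application/x-www-form-urlencoded"],
            "avatar=" + (string)user + "&intensity=50");
    }
    http_response(key req, integer status, list meta, string body)
    {
        if (status != 200)
            llOwnerSay("Server error: " + (string)status);
    }
}
```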
  9. I've never actually played the game in question, but knowing the particular region, it's very probably an RLV-based game. That means the player needs to have (a) a viewer which supports RLV, AND (b) an RLV relay worn, so external objects can affect the wearer's RLV. Without those, many RLV-based game areas cannot trap, teleport, or affect the player, so they won't let them proceed through the game.
  10. A HTTP server is just that. A server. ANY client can connect to it. What the server DOES in response to those requests is up to the person implementing the server. llRequestURL() just requests a server URL for the automatic sim HTTP server to respond to, and assigns a specific UUID key to identify requests as being for/from that URL and script. What that script does with any requests to that URL (i.e., authentication, encryption/decryption, etc.) is up to the scripter.
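A bare-bones sketch of such a server script, where the token check is just an assumed stand-in for whatever authentication the scripter decides on:

```lsl
// In-world HTTP server. What it does with each request is entirely up
// to this script; the "secret" token here is a placeholder for real
// authentication, not a recommendation.
string gSecret = "swordfish";

default
{
    state_entry()
    {
        llRequestURL();   // ask the sim for a URL for this script
    }
    http_request(key id, string method, string body)
    {
        if (method == URL_REQUEST_GRANTED)
        {
            llOwnerSay("Server up at: " + body);   // body holds the URL
            return;
        }
        if (method == URL_REQUEST_DENIED)
        {
            llOwnerSay("No URLs available: " + body);
            return;
        }
        // An actual client request: honor it only with the right token
        if (llGetHTTPHeader(id, "x-query-string") == "token=" + gSecret)
            llHTTPResponse(id, 200, "OK: " + body);
        else
            llHTTPResponse(id, 403, "Forbidden");
    }
}
```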
  11. Could be a couple of things..... First, remember that even rigged mesh does not respond to most of the shape sliders in a shape. This means that if your avatar's shape is anything but the default you are modelling against, it will likely not fit right. This is why most mesh designers make multiple sizes, to accommodate different shapes. Second, if some of the vertices in your mesh aren't correctly weighted, they may not 'move' to the correct places in relation to the shape of the avatar. Verify that the vertices in those areas are all correctly weighted.
  12. Make sure to try clearing your viewer's cache. Mesh objects get cached, just like textures do. If the download gets corrupted, the viewer may think it has a valid object in the cache when it doesn't, and when it tries to load it from the cache, it gets garbage (which doesn't render correctly, or at all).
  13. touch events in LSL only occur due to someone clicking on a prim in an object. There is no way to 'simulate' this via another script. If the script is mod, then it can be re-written to utilize the techniques above (such as the collision volume detect and such) to avoid having to 'click' on the door. Otherwise, it would have to be completely re-scripted, or if the whole object is no-mod, replaced with a door that utilizes these other techniques.
  14. If you really want an excellent reference book, the one I mentioned is good. The best is still the quintessential textbook, "Computer Graphics: Principles and Practice" by Foley and van Dam. Both are based more in algorithms and theory than in how modern hardware and APIs handle implementing them, but they are excellent material to know to understand what is going on 'under the hood' in most modern graphics cards. Getting back to the original topic.....what LL needs to do is implement another object parameter (they are already working on normal maps) that provides an environment map and a reflectivity parameter. A simple modification to the existing shaders would allow those to be used to generate static reflections, which would be a HUGE improvement. Even better if they had a button on there that would render the current environment to a texture from the prim's position. Then you could create and adjust, then click that button, upload the result, and apply it to the prim as an environment map. Boom. Matching reflections (though no dynamic objects in them). Of course, I'd like to see them also give us access to a few nice vertex displacement shaders as well.....that could increase flexibility and interesting effects by an order of magnitude.
  15. Shadow mapping can still produce these results. It isn't raytracing. It works by using a shadow buffer (similar to a stencil buffer) that is rendered along with the color channels. It's based on Williams' work in 1978. It works with the standard z-buffer render algorithms which most render pipelines in hardware are now based on. It renders the scene from the viewpoint of each light source (which aren't unlimited, naturally) into a depth buffer. This is pretty fast, and depending on how precise the shadow information needs to be, it can be considerably reduced in resolution. Using coordinate transforms, you can map from the scan-line point to the shadow z-buffers and determine how much light from each source actually reaches the pixel, and which ones don't. To quote from "Fundamentals of Computer Graphics": The algorithm is a two-step process. A scene is 'rendered' and depth information is stored into the shadow Z-buffer using the light source as a view point. No intensities are calculated. This computes a 'depth image' from the light source of those polygons that are visible to the light source. The second step is to render the scene using a Z-buffer algorithm. This process is enhanced as follows. If a point is visible, a coordinate transformation is used to map (x,y,z), the coordinates of the point in three-dimensional screen space (from the view point), to (x', y', z'), the coordinates of the point in screen space from the light point. If z' is greater than the value stored in the shadow Z-buffer for that point, then a surface is nearer to the light source than the point under consideration and the point is in shadow, and thus a shadow 'intensity' is used; otherwise the point is rendered as normal. There are other shadow algorithms around. But this (or one of its many variants) is how most modern graphics systems perform shadow calculations, since they use a built-in Z-buffer system and the hardware is highly optimized to render in this fashion.
Now, that said, most modern scan-line rendering engines (that aren't strictly hardware) allow for much more granular control over which pixels use which algorithms to render sections of the image. Many use very complex math functions to compute procedural textures, ray-trace certain pixels because the objects that occupy those pixels are marked for ray-traced reflection, or use caustics simulations, radiosity, or any number of other systems for a given set of pixels. Things like RenderMan, or LightScape, or any number of other renderers used to generate the 'production' images instead of the real-time display, use these and more.
  16. Just a note.... Regardless of the graphics API (OpenGL or Direct3D or etc.), generation of reflections or refractions is the job of the developer, not the API. While Vertex/Pixel/Fragment shaders have done a lot to automate this, it is still just a texture being applied. That said, that texture does not have to be fixed. It can be generated dynamically each frame, at a lower resolution, by having a second rendering context which renders the same scene from the point of view of the reflective object, and then using this texture as its environment map. SL actually HAD this at one point, LONG ago, in a beta viewer.......actual real reflections. But it was WAY too slow (especially on the hardware at that time), and too many reflective objects in the scene brought the client programs to their knees. With today's modern graphics hardware, I think they should re-examine bringing in dynamic reflections.....with the advent of shaders, as well as the general increase in rendering performance, I think it could be done reasonably (and be a checkbox in preferences that only enables at high or ultra levels.) Also, shadows in scanline renders are usually generated via shadow mapping, which does NOT use raytracing. Projected shadows DO use raytracing of a sort (it's a simplified version, and sub-sampled).
  17. You'll need to adjust the wand tool settings, or possibly shrink the selection. What is likely happening is you are partially selecting some of the surrounding pixels (because of anti-aliasing settings and such.)
  18. I think the 'real numbers' part is that the LOD factor doesn't control WHICH LOD is seen, but the distances at which the LODs switch. Put simply, at a RenderLODFactor of 4.0, the distance at which you will see the highest detail LOD of a sculpty/mesh is 4 TIMES the normal distance at which it would switch.....which means that by the time it is far enough away to switch to the 'medium' LOD, it's so small on your screen that you can't even notice it. Same with the switches to 'low' and 'very low' LODs. While the actual calculations/distances involved are VERY complex, a simplified example: Say that, at 1.0 RenderLODFactor, an object switches from "high" detail to "medium" detail at a distance of 10.0 meters, and from "medium" to "low" at 30.0 meters. Switching to a RenderLODFactor of 3.5 would change the distances at which the switching occurs to 35.0 meters and 105.0 meters, respectively. And if the object is large enough, those distances get even bigger (larger items switch further away than small objects, which is why they have a higher Land Impact), and the effect of RenderLODFactor becomes even greater......great enough that by the time the object is far enough away to 'switch' to a lower LOD, it is outside your view distance!
  19. The big problem is using a comma as your separator, since you are passing vectors and rotations.....which contain commas inside them. You need to use a different separator, both in the command that is sent and in the parsing of received commands. Currently, params = "My message,<0,0,0>,<0,0,0,0>" would be parsed by llParseString2List(params, [","], []) into a list like: [ "My message", "<0", "0", "0>", "<0", "0", "0", "0>" ] ....which isn't what you want, I believe. Try a separator you are pretty certain will never occur in the string params EXCEPT as a separator. The "|" and "~" are common choices. Then also construct your message using it, so that: params = "My message|<0,0,0>|<0,0,0,0>" which is then parsed using llParseString2List(params, ["|"], []) and results in a list like: [ "My message", "<0,0,0>", "<0,0,0,0>" ] which is what I think you are looking for. Using llCSV2List() does preserve the vects and rots, but the reverse (llList2CSV()) isn't guaranteed to. Better to escape/unescape the strings on both sides of the conversions, i.e.: llCSV2List(llUnescapeURL(params)) and for(i=0; i<llGetListLength(lparams); i++) enc_lparams += [llEscapeURL(llList2String(lparams,i))]; llList2CSV(enc_lparams);
  20. Something along these lines should work:

integer IsInteger(string var)   // This is from the wiki
{
    integer j;
    for (j = 0; j < llStringLength(var); ++j)
    {
        if (!~llListFindList(["1","2","3","4","5","6","7","8","9","0"], [llGetSubString(var, j, j)]))
            return FALSE;
    }
    return TRUE;
}

string ReverseString(string var)
{
    string retval = "";
    integer i;
    for (i = llStringLength(var) - 1; i >= 0; i--)
    {
        retval += llGetSubString(var, i, i);
    }
    return retval;
}

string ReverseWords(string in)
{
    list outlist = [];
    list words = llParseString2List(in, [" "], []);
    integer i;
    for (i = 0; i < llGetListLength(words); i++)
    {
        if (IsInteger(llList2String(words, i)) == FALSE)
        {
            outlist += [ ReverseString(llList2String(words, i)) ];
        }
        else
        {
            outlist += [ llList2String(words, i) ];
        }
    }
    return llDumpList2String(outlist, " ");
}

Then, wherever you want to reverse the non-number words in the string, simply call ReverseWords(whatever) and it should do it.
  21. Any chance the updates to the shaders can include the ability to mix shiny and transparent? The fact the two are mutually exclusive currently is a big limitation.
  22. I'll have to post a dissenting opinion there, and hope that Marine won't lock it down so hard. Reasons being: (A) Without a relay on auto, the object will not be able to temp attach without the Relay asking permission to take control for the object (so it can execute the @acceptpermission command.) (B) The object that attaches in this fashion cannot live through a logout, and therefore it CANNOT reassert any restrictions when the player logs back in. Unlike items that live in the players inventory, such an item does not require 'cheating' to escape from it. I don't see the danger. Only the potential for those who enjoy certain kinds of RLV play to not have to constantly click dialogs.
  23. "@acceptpermission=add" will cause RLV to auto-accept animation and attach requests from the object sending it. (Relays will add the object that sent the request, not themselves.) Since an object has to be attached to directly send RLV commands, this doesn't mean much.......until recently (more below). With a relay, if it isn't on auto, it will pop up a dialog asking permission for the object to take control, as usual. Some third-party viewers have an option to Auto-Accept inventory items.....I'm guessing it may be in the main viewer as well (it used to have one for textures/notecards only). There is currently a bug with having that enabled AND accepting to #RLV, but fixes have already been worked out. And the new llAttachToAvatarTemp() functionality actually allows us to use this as well, so if acceptpermissions is set, it will not prompt to attach (you gave permissions automatically). So you CAN get some very interesting potential on this (which I've been exploring lately.....) But to answer the OP, RLV doesn't do animations.......just normal scripts. An attached or sat-on object will automatically grant animation permissions (though in the script you still have to request them.....but NO dialog will pop up asking for permission to animate you). Note, this is for the same prim you are sitting on....if the script is in a different prim, it DOES prompt you. Check the "automatically granted when..." column in the table on the wiki page: run_time_permissions event So, to answer the question: To force an animation to play (without any pop-up permission dialog) the object must either be (A) already sat upon by the avatar, (B) already attached to the avatar, or © have "@acceptpermission=add" executed by the avatar's relay or RLV for this object. [edit] To clarify the original question the OP asked: You don't play animations via RLV. Your script simply requests permissions to play animations on the targeted avatars key. 
If you don't want the pop-up, you send the relay the "@acceptpermission=add" command first, then request perms. Once you have them, simply call llStartAnimation(animname), and it will play it on the avatar that last granted permissions (which in this case would be the target).
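A minimal sketch of that request-then-play flow, using a built-in animation name as a placeholder (the permission dialog is only skipped in the cases listed above):

```lsl
// Request animation permission on a target avatar, then play an
// animation once granted. The touched avatar is used as the target
// here purely for illustration; any avatar key works.
key gTarget;

default
{
    touch_start(integer n)
    {
        gTarget = llDetectedKey(0);
        llRequestPermissions(gTarget, PERMISSION_TRIGGER_ANIMATION);
    }
    run_time_permissions(integer perms)
    {
        // Fires when permission is granted (silently or via the dialog)
        if (perms & PERMISSION_TRIGGER_ANIMATION)
            llStartAnimation("sit_ground");   // built-in animation name
    }
}
```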
  24. Also, remember that we have link functions now, so we don't HAVE to use linked messages for a lot of prim functions. What I do: Create a list of the prim numbers that constitute a particular 'part' I want to change visibility/color/etc. Write a function that takes the list and the new value, and call it as needed. That function loops over the list and calls the appropriate llSetLinkWhatever() function.....or, if not available in that kind of function, calls llSetLinkPrimitiveParamsFast(). Voila! So in this case, it would be something like:

list prims_for_lock = [ 5, 6, 7, 9, 12 ];
// These could change, so you could write a function to populate
// the list based on the prim name or description fields (I scan for
// specific prefixes or suffixes in my versions, and name the prims
// appropriately.)

ChangeColorAndAlpha(list prims, vector color, float alpha)
{
    integer i;
    for (i = 0; i < llGetListLength(prims); i++)
    {
        // The if's are there so you can call this function and NOT
        // change one of the two if needed. And if you need black, just
        // pass in a color of <0.0, 0.0, 0.001>
        if (color != ZERO_VECTOR)
            llSetLinkColor(llList2Integer(prims, i), color, ALL_SIDES);
        if (alpha > 0.0)
            llSetLinkAlpha(llList2Integer(prims, i), alpha, ALL_SIDES);
    }
}
  25. That was it. ZHAO-II needed to check that to properly recognize the change in owner. Not sure a whole initialize() is needed, maybe just updating that global variable. I'll fiddle with it some more. Thanks again, Qie!