
Helium Loon

Resident
Everything posted by Helium Loon

  1. Could be a couple of things..... First, remember that even rigged mesh does not respond to most of the shape sliders in a shape. This means that if your avatar's shape is anything but the default you are modelling against, it will likely not fit right. This is why most mesh designers make multiple sizes, to accommodate different shapes. Second, if some of the vertices in your mesh aren't correctly weighted, they may not 'move' to the correct places in relation to the shape of the avatar. Verify that the vertices in those areas are all correctly weighted.
  2. Make sure to try clearing your viewer's cache. Mesh objects get cached, just like textures do, and if the download gets corrupted, it may think it has a valid object in the cache, when it doesn't, and when it tries to load it from the cache, it gets garbage (which doesn't render correctly, or at all).
  3. Touch events in LSL only occur when someone clicks on a prim in an object. There is no way to 'simulate' this via another script. If the script is mod, it can be re-written to use the techniques above (such as the collision/volume-detect approach; see the sketch below) to avoid having to 'click' on the door. Otherwise, it would have to be completely re-scripted, or, if the whole object is no-mod, replaced with a door that uses these other techniques.
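     A minimal sketch of the volume-detect idea (not the OP's script; the channel number and the "open"/"close" messages are made up for illustration): an invisible trigger prim sits in the doorway and tells the door script to open when an avatar walks into it, so no clicking is needed.

     // Trigger prim placed in the doorway. llVolumeDetect(TRUE) makes the prim
     // non-solid, but it still receives collision events from avatars passing through.
     integer DOOR_CHANNEL = -472931;   // hypothetical channel the door script listens on

     default
     {
         state_entry()
         {
             llVolumeDetect(TRUE);
         }

         collision_start(integer num)
         {
             if (llDetectedType(0) & AGENT)          // only react to avatars
                 llRegionSay(DOOR_CHANNEL, "open");  // door script swings open on this message
         }

         collision_end(integer num)
         {
             llRegionSay(DOOR_CHANNEL, "close");     // close again once they have passed through
         }
     }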
  4. If you really want an excellent reference book, the one I mentioned is good. The best is still the quintessential textbook, "Computer Graphics: Principles and Practice" by Foley and van Dam. Both are based more on algorithms and theory than on how modern hardware and APIs handle implementing them, but they are excellent material for understanding what is going on 'under the hood' in most modern graphics cards. Getting back to the original topic.....what LL needs to do is implement another object parameter (they are already working on normal maps) that provides an environment map and a reflectivity parameter. A simple modification to the existing shaders would allow those to be used to generate static reflections, which would be a HUGE improvement. Even better if they had a button that would render the current environment to a texture from the prim's position. Then you could create and adjust, click that button, upload the result, and apply it to the prim as an environment map. Boom. Matching reflections (though no dynamic objects in them). Of course, I'd like to see them also give us access to a few nice vertex displacement shaders as well.....that could increase flexibility and interesting effects by an order of magnitude.
  5. Shadow mapping can still produce these results. It isn't raytracing. It works by using a shadow buffer (similar to a stencil buffer) that is rendered along with the color channels. It's based on Williams' work in 1978, and it fits the standard Z-buffer render algorithms that most hardware render pipelines are now based on. It renders the scene from the viewpoint of each light source (which can't be unlimited, naturally) into a depth buffer. This is pretty fast, and depending on how precise the shadow information needs to be, it can be considerably reduced in resolution. Using coordinate transforms, you can map from the scan-line point into the shadow Z-buffers and determine how much light from each source actually reaches the pixel, and which sources don't reach it.
     To quote from "Fundamentals of Computer Graphics":
     The algorithm is a two-step process. A scene is 'rendered' and depth information is stored into the shadow Z-buffer using the light source as a view point. No intensities are calculated. This computes a 'depth image' from the light source of those polygons that are visible to the light source. The second step is to render the scene using a Z-buffer algorithm. This process is enhanced as follows. If a point is visible, a coordinate transformation is used to map (x,y,z), the coordinates of the point in three-dimensional screen space (from the view point), to (x', y', z'), the coordinates of the point in screen space from the light point. If z' is greater than the value stored in the shadow Z-buffer for that point, then a surface is nearer to the light source than the point under consideration and the point is in shadow, and thus a shadow 'intensity' is used; otherwise the point is rendered as normal.
     There are other shadow algorithms around, but this (or one of its many variants) is how most modern graphics systems perform shadow calculations, since they use a built-in Z-buffer system and the hardware is highly optimized to render in this fashion. Now, that said, most modern scan-line rendering engines (that aren't strictly hardware) allow much more granular control over which pixels use which algorithms to render sections of the image. Many use very complex math functions to compute procedural textures, ray-trace certain pixels because the objects that occupy those pixels are marked for ray-traced reflection, or use caustics simulations, radiosity, or any number of other systems for a given set of pixels. Things like RenderMan, or LightScape, or any number of other renderers used to generate the 'production' images instead of the real-time display, use these and more.
  6. Just a note.... Regardless of the graphics API (OpenGL or Direct3D or etc.), generation of reflections or refractions is the job of the developer, not the API. While vertex/pixel/fragment shaders have done a lot to automate this, it is still just a texture being applied. That said, that texture does not have to be fixed. It can be generated dynamically each frame, at a lower resolution, by having a second rendering context which renders the same scene from the point of view of the reflective object and then uses the result as its environment map. SL actually HAD this at one point, LONG ago, in a beta viewer.......actual real reflections. But it was WAY too slow (especially on the hardware at that time), and too many reflective objects in the scene brought the client programs to their knees. With today's modern graphics hardware, I think they should look again at bringing in dynamic reflections.....with the advent of shaders, as well as the general increase in rendering performance, I think it could be done reasonably (and be a checkbox in preferences that only enables at high or ultra levels). Also, shadows in scanline renders are usually generated via shadow mapping, which does NOT use raytracing. Projected shadows DO use raytracing of a sort (it's a simplified version, and sub-sampled).
  7. You'll need to adjust the wand tool settings, or possibly shrink the selection. What is likely happening is you are partially selecting some of the surrounding pixels (because of anti-aliasing settings and such.)
  8. I think the 'real numbers' part is that the LOD factor doesn't control WHICH LOD is seen, but the distances at which the LODs switch. Put simply, at a RenderLODFactor of 4.0, the distance at which you stop seeing the highest detail LOD of a sculpty/mesh is 4 TIMES the normal distance at which it would switch.....which means that by the time it is far enough away to switch to the 'medium' LOD, it's so small on your screen that you can't even notice it. Same with the switches to the 'low' and 'very low' LODs. While the actual calculations/distances involved are VERY complex, a simplified example: say that, at 1.0 RenderLODFactor, an object switches from "high" detail to "medium" detail at a distance of 10.0 meters, and from "medium" to "low" at 30.0 meters. Switching to a RenderLODFactor of 3.5 would change the distances at which the switching occurs to 35.0 meters and 105.0 meters, respectively. And if the object is large enough, those distances get even bigger (larger items switch further away than small objects, hence why they have a higher Land Impact), and the effect of RenderLODFactor becomes even greater......great enough that by the time the object is far enough away to 'switch' to a lower LOD, it is outside your view distance!
  9. The big problem is using a comma as your separator, since you are passing vectors and rotations.....which contain commas inside them. You need to use a different separator, both in the command that is sent and in the parsing of received commands. Currently, params = "My message,<0,0,0>,<0,0,0,0>" would be parsed by llParseString2List(params, [","], []) into a list like:
     [ "My message", "<0", "0", "0>", "<0", "0", "0", "0>" ]
     ....which isn't what you want, I believe. Try a separator you are pretty certain will never occur in the string params EXCEPT as a separator; "|" and "~" are common choices. Then also construct your message using it, so that:
     params = "My message|<0,0,0>|<0,0,0,0>"
     which is then parsed using llParseString2List(params, ["|"], []) and results in a list like:
     [ "My message", "<0,0,0>", "<0,0,0,0>" ]
     which is what I think you are looking for (a full send/receive sketch follows below). Using llCSV2List() does preserve the vectors and rotations, but the reverse (llList2CSV()) isn't guaranteed to. Better to escape/unescape the individual elements on both sides of the conversions: escape each element with llEscapeURL() before calling llList2CSV() on the sending side, and unescape each element with llUnescapeURL() after calling llCSV2List() on the receiving side.
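     A minimal send/receive sketch of the "|"-separated approach (the channel number is made up, and llRegionSay()/llListen() are just one way to move the string between objects): the sender packs a string, a vector, and a rotation; the receiver splits on "|" and casts the pieces back to their types.

     // Sender (e.g. in a touch_start, or wherever the command is issued):
     //     string params = "My message" + "|" + (string)llGetPos() + "|" + (string)llGetRot();
     //     llRegionSay(COMM_CHANNEL, params);

     integer COMM_CHANNEL = -98765;   // hypothetical channel shared by sender and receiver

     default
     {
         state_entry()
         {
             llListen(COMM_CHANNEL, "", NULL_KEY, "");
         }

         listen(integer channel, string name, key id, string message)
         {
             list parts = llParseString2List(message, ["|"], []);
             string   text = llList2String(parts, 0);
             vector   pos  = (vector)llList2String(parts, 1);
             rotation rot  = (rotation)llList2String(parts, 2);
             llOwnerSay(text + " @ " + (string)pos + " / " + (string)rot);
         }
     }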
  10. Something along these lines should work:

      integer IsInteger(string var) // This is from the wiki
      {
          integer j;
          for (j = 0; j < llStringLength(var); ++j)
          {
              if (!~llListFindList(["1","2","3","4","5","6","7","8","9","0"], [llGetSubString(var, j, j)]))
                  return FALSE;
          }
          return TRUE;
      }

      string ReverseString(string var)
      {
          string retval = "";
          integer i;
          for (i = llStringLength(var) - 1; i >= 0; i--)
          {
              retval += llGetSubString(var, i, i);
          }
          return retval;
      }

      string ReverseWords(string in)
      {
          list outlist = [];
          list words = llParseString2List(in, [" "], []);
          integer i;
          for (i = 0; i < llGetListLength(words); i++)
          {
              if (IsInteger(llList2String(words, i)) == FALSE)
              {
                  outlist += [ ReverseString(llList2String(words, i)) ];
              }
              else
              {
                  outlist += [ llList2String(words, i) ];
              }
          }
          return llDumpList2String(outlist, " ");
      }

      Then, wherever you want to reverse the non-number words in the string, simply call ReverseWords(whatever) and it should do it.
  11. Any chance the updates to the shaders can include the ability to mix shiny and transparent? The fact the two are mutually exclusive currently is a big limitation.
  12. I'll have to post a dissenting opinion there, and hope that Marine won't lock it down so hard. Reasons being: (A) Without a relay on auto, the object will not be able to temp attach without the Relay asking permission to take control for the object (so it can execute the @acceptpermission command.) (B) The object that attaches in this fashion cannot live through a logout, and therefore it CANNOT reassert any restrictions when the player logs back in. Unlike items that live in the players inventory, such an item does not require 'cheating' to escape from it. I don't see the danger. Only the potential for those who enjoy certain kinds of RLV play to not have to constantly click dialogs.
  13. "@acceptpermission=add" will cause RLV to auto-accept animation and attach requests from the object sending it. (Relays will add the object that sent the request, not themselves.) Since an object has to be attached to directly send RLV commands, this doesn't mean much.......until recently (more below). With a relay, if it isn't on auto, it will pop up a dialog asking permission for the object to take control, as usual. Some third-party viewers have an option to Auto-Accept inventory items.....I'm guessing it may be in the main viewer as well (it used to have one for textures/notecards only). There is currently a bug with having that enabled AND accepting to #RLV, but fixes have already been worked out. And the new llAttachToAvatarTemp() functionality actually allows us to use this as well, so if acceptpermissions is set, it will not prompt to attach (you gave permissions automatically). So you CAN get some very interesting potential on this (which I've been exploring lately.....) But to answer the OP, RLV doesn't do animations.......just normal scripts. An attached or sat-on object will automatically grant animation permissions (though in the script you still have to request them.....but NO dialog will pop up asking for permission to animate you). Note, this is for the same prim you are sitting on....if the script is in a different prim, it DOES prompt you. Check the "automatically granted when..." column in the table on the wiki page: run_time_permissions event So, to answer the question: To force an animation to play (without any pop-up permission dialog) the object must either be (A) already sat upon by the avatar, (B) already attached to the avatar, or © have "@acceptpermission=add" executed by the avatar's relay or RLV for this object. [edit] To clarify the original question the OP asked: You don't play animations via RLV. Your script simply requests permissions to play animations on the targeted avatars key. If you don't want the pop-up, you tell the relay the "acceptpermission=add" command first, then request perms. Once you have them, simply call llStartAnimation(animname), and it will play it on the avatar that last granted permissions (which in this case would be the target).
  14. Also, remember that we have link functions now, so we don't HAVE to use linked messages for a lot of prim functions. What I do: create a list of the prim numbers that constitute a particular 'part' whose visibility/color/etc. I want to change, write a function that takes the list and the new value, and call it as needed. That function loops over the list and calls the appropriate llSetLinkWhatever() function.....or, if one isn't available for that parameter, calls llSetLinkPrimitiveParamsFast(). Voila! So in this case, it would be something like:

      list prims_for_lock = [ 5, 6, 7, 9, 12 ];
      // These could change, so you could write a function to populate
      // the list based on the prim name or description fields (I scan for
      // specific prefixes or suffixes in my versions, and name the prims
      // appropriately).

      ChangeColorAndAlpha(list prims, vector color, float alpha)
      {
          integer i;
          for (i = 0; i < llGetListLength(prims); i++)
          {
              // The if's are there so you can call this function and NOT
              // change one of the two if needed. And if you need black, just
              // pass in a color of <0.0, 0.0, 0.001>.
              if (color != ZERO_VECTOR)
                  llSetLinkColor(llList2Integer(prims, i), color, ALL_SIDES);
              if (alpha > 0.0)
                  llSetLinkAlpha(llList2Integer(prims, i), alpha, ALL_SIDES);
          }
      }
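      For example (hypothetical values), to turn those lock prims solid red at full opacity you would just call:

      ChangeColorAndAlpha(prims_for_lock, <1.0, 0.0, 0.0>, 1.0);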
  15. That was it. ZHAO-II needed to check that to properly recognize the change in owner. Not sure a whole initialize() is needed, maybe just updating that global variable. I'll fiddle with it some more. Thanks again, Qie!
  16. As an addition, I've tested a little further. The scripts are still running (at least one of them, so I'd assume all of them, and the 'Script Info' shows all three as active) and the menus respond......it's just no animations seem to fire.
  17. I've been playing around with llAttachToAvatarTemp(), and ran into some odd behavior, which isn't in the wiki..... I created an object with a ZHAO-II AO in it. I'm having this object rezzed by another object, and it attaches to the person who touched the object that rezzed it. That all works fine. I'm testing it with an alt, while I am online as well. But when the person who it is attached to TPs to a different sim, the AO stops. I've checked and the scripts still appear to be running. What's stranger.....TP back to where I am, and the AO starts up again. But even stranger. TP away, then I (the creator) TP to them, and it starts up again when I enter the sim they are on! Any ideas on WHY this would be happening?
  18. The main RLV specification is from Marine Kelley's Restrained Love viewer. RLVa is done by Kitty Barnett (I believe) and is an extension of the main RLV spec. If you think a capability is missing, I recommend talking to them about the possibility of adding it.
  19. Yes, I know that LL doesn't closely monitor the forums, nor do they prioritize bugs based on user feedback.....but I'm just curious as to the particular bugs that are annoying (and have been annoying) and frustrating the scripters here the most. You can link to a JIRA if there is one, or just tell us. Right now, my biggest annoyance is that llSetSoundRadius( float radius ) fails to limit sound to the given radius. It's been bugged for years, but never fixed. LL even implemented a new function that limits sound to a bounding box, llTriggerSoundLimited (small sketch below), but still hasn't fixed this one. And it has been delaying a project I've been designing and planning for a LONG time. What LSL bugs are most annoying/frustrating to you?
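      A small sketch of the bounding-box workaround mentioned above (the sound name and the 10m box size are placeholders): llTriggerSoundLimited() clips a one-shot sound to an axis-aligned box, which is currently the closest thing to a working llSetSoundRadius().

      default
      {
          touch_start(integer n)
          {
              vector p = llGetPos();
              llTriggerSoundLimited("my_sound",            // sound name or UUID in inventory (assumed)
                                    1.0,                   // full volume
                                    p + <5.0, 5.0, 5.0>,   // top/north/east corner of the box
                                    p - <5.0, 5.0, 5.0>);  // bottom/south/west corner of the box
          }
      }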
  20. Well, it wouldn't necessarily have to be done EVERY time the view changed. Nor would it necessarily take a 'monster' chunk of memory. Reflection/environment maps are well defined in gfx pipelines these days. Highly optimized shaders exist to handle them. The only real requirement for 'real-time reflections' is to render an environment map for EACH reflective object in the view. Now, obviously, that needs some control, since it could get out of hand quickly. Limit the resolution for each based on the projected bounding box in the viewscreen (meaning most objects would only get a 128x128 resolution env. map each time). Also, EVERY frame isn't a necessity. Every other, or every third frame, or even more, would still look good. Debug settings could control just how often and at what maximum resolution such maps could be generated. Doing a 128x128 pixel render of the world from the view of the object would take a very small amount of time (compared to a typical 1024x768 or higher main viewport render), so the hit is really from the number of such 'reflective' objects in a scene. Put a limiter on that as well (another debug setting) and, much like lighting, only handle the biggest 10 or so. Suddenly, real reflections aren't so unreachable or resource hungry. Allow disabling them for those with weaker systems. I wouldn't make shiny use the new shading system, since that would QUICKLY overblow it; add it as a new shading option. The trick of course is the sub-renders of the maps. Doing them as cube maps is unneeded, since half of each would never be visible. Better to do a spherical map render, but that's a little more complex, mathematically speaking. And the maps for any given prim type could be optimized..... It's not a simple thing to do, but it COULD be done.
  21. I would say a single prim per gate (or element, for those composed of multiple gates), with particle emitters for the various connections to other prims. Anything but a simple circuit is quickly going to get VERY prim heavy. But you can have examples of the 'building blocks' as individual gates, to show HOW they work, then have ICs that do the work of multiple gates in a single prim. So you show how a single-bit adder is built, then you have a 'single-bit adder with carry-out' IC......which you show how to chain to make an n-bit adder. Then you have a, say, 8-bit adder IC which you use in the actual CPU. Each prim has a script which scans for nearby scripted objects, filters based on name, and presents that as a list of possible connections (a rough sketch of such a scan is below). It shouldn't be hard to script most gates and elements. And by switching the texture on the particles that show a connection, it can change appearance to show whether it is at high logic or low logic.
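      A rough sketch of the connection scan (the "GATE_" naming convention, the 10m range, and the touch trigger are assumptions for illustration, not part of any existing build): each gate looks for nearby scripted objects that follow the naming convention and lists them as candidate connections.

      list gCandidates;

      default
      {
          touch_start(integer n)
          {
              gCandidates = [];
              llSensor("", NULL_KEY, SCRIPTED, 10.0, PI);   // scripted objects within 10m, all around
          }

          sensor(integer num)
          {
              integer i;
              for (i = 0; i < num; i++)
              {
                  // Only keep objects that follow our (hypothetical) naming convention.
                  if (llSubStringIndex(llDetectedName(i), "GATE_") == 0)
                      gCandidates += [ llDetectedName(i) + "|" + (string)llDetectedKey(i) ];
              }
              llOwnerSay("Possible connections:\n" + llDumpList2String(gCandidates, "\n"));
          }
      }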
  22. Cathy Foil wrote:
      "I have a problem with, 'I bought a 3D model (or got it from a friend or downloaded it free from a website), and I made a lot of changes and improvements to it. It looks really different now. Can I sell it or distribute it for free? No. That's called a "derivative work." That means you took something that was protected by copyright and derived a new work from it (adapted it to make something new). But the copyrights still apply to the original work, even if your changes make the model unrecognizable.', especially the unrecognizable part I have a problem with."
      And you should. According to Title 17 USC, a "derivative work" must bear a 'substantial', 'striking', or 'probative' similarity to the original work to be considered infringing. This is a case of the FAQ of a commercial website being misleading. And under certain Fair-Use conditions, even then it would NOT be infringing in the court's eyes. TurboSquid has a vested interest both in not hosting infringing content and in ensuring that content isn't 'diluted' by copying......so they bias their FAQ to make it look much worse than it is in the actual laws.
      Cathy Foil wrote:
      "At some point it ceases to be the same design and becomes something new. With this logic a clay manufacture could copyright the clay cubes they sell and anything a sculptor creates from that cube they bought the copyright would be actually owned by the clay manufacture. It would also mean that anything created in SL from regular prims LL would own the copyrights. This leads to the second one I have a problem with, 'I got a model for free, and there was nothing that said it's copyrighted. I changed it and improved it, so you can hardly tell it's the same model. Can I sell my version of it? No. No. No.'. According to this logic even if LL did not copyright their regular prims you can not create things with regular prims and sell them or really own the copyrights to your creation."
      Again, as above: STRIKING or SUBSTANTIAL SIMILARITY. This would NOT be an infringing case. Yet TurboSquid implies it would be...... As to the rest, I've noted in prior posts in this thread (and others) that the original purpose and value of copyrights have been corrupted by dishonest and greedy corporations and politicians. So we agree there! It is important to note (since it appears a lot of cross-confusion is going on for a lot of people) that Copyright, Trademark, and Patent laws are ALL DIFFERENT. And while the mechanisms are similar, they have wildly different conventions on how they apply tests and the criteria they use to determine when infringement occurs. And it seems like a lot of people (both here and elsewhere) are confusing the elements of the three.
  23. Cathy Foil wrote:
      "P.S. Pamela I really really have a lot of respect for you and admire your creativity and talent but I have to point out the irony that the text you copied and pasted from Turbosquid was copyrighted and you may have just violated that copyright. OMG and I copied and pasted part of the same text from you!!! When will the madness end!!! LOL :smileylol:"
      The really funny thing is....you used an excerpt (which is protected, under the Fair-Use sections), but Pamela posted the entirety of the page exactly, which IS a violation of copyright.
  24. Unfortunately, that scenario isn't very realistic.....possible? Yes. But highly unlikely, for the following reasons:
      1) Person B would in all likelihood (being someone who would 'rip off' a mesh and resell it as their own work) not be skilled or knowledgeable enough to direct a contracted worker (who he doesn't want to know the details of the work, since it is possible they could be subpoenaed) to completely 'duplicate' the effort and structure of differences needed. There are a lot of steps along the way, and upload dates to external websites and/or servers can be used as evidence.
      2) The court would very rarely dismiss a case like this, as it is a civil tort, not a criminal one. The onus (used to be) on the rights-holder to prove infringement. Now, due to excessive spread of the laws involved, it is often more a case that the defendant has to prove they didn't infringe. Regardless, the usual process in civil law is still innocent until proven guilty (assuming they do not default the judgement, which can't happen in a criminal case, but OFTEN happens in civil ones, due to sneaky filings and such), so the plaintiff will try to arrange for the defendant to be unaware, or unable to appear to defend themselves, thus earning a default judgement. Dismissal? HIGHLY unlikely.
      3) If Person A can show that the models are identical, vertex for vertex, and can show an earlier date of upload ANYWHERE......they have copyright. That upload timestamp would be conclusive evidence. Then it becomes a question of whether it was willful infringement or not.
      Don't get me wrong. Current copyright laws have gone WAY beyond the original intent when they were created, and I disagree with that broadening of scope, duration, penalties and enforcement. That doesn't mean we have to throw the baby out with the bathwater. Simply eliminating copyright would cause just as many problems as we have now. But serious reform is needed.
  25. Citing the FAQ from a commercial site which makes its profits from distributing creative works is a lot like asking a senator if Congress should be able to vote for their own raises. A much better source is to get the information directly from the governmental sites devoted to it. The quoted FAQ is heavily biased; anyone who has actually READ Title 17 of the USC will recognize this. For example, the FAQ quoted seems to try to dismiss the Fair-Use doctrine sections as being almost impossible to rely on, citing that using even 1% of a work can make it infringing. While not untrue, the wording clearly shows the bias. One could just as easily show (using the law itself) that using 50% of a work may NOT be infringing. Both are true statements. But one shows bias one way, one the other. Check out: Fair-Use in Copyright and the rest of the US Copyright website (for those with a truly masochistic or legal bent, feel free to read the entire section of the code at USC Title 17, Copyright Law).