Posts posted by LindaB Helendale

  1. No clue what the problem might be, but one thing to check is that you don't use the .SLM files while uploading; they usually mess up loading the same model again.

    You can either delete the <model>.SLM file in the folder before uploading, or set the debug setting MeshImportUseSLM to FALSE.



  2. Drongle McMahon wrote:

    One thing puzzles me - why don't we see the scale changes in the client? Either the server (where the script runs) or the client must be dropping the updates that are rapidly reverted? Or does the client do them so fast that they can't be seen? It seems to me it must be the server not sending them. Otherwise there would surely occasionally be network delays long enough to see a change?

    I see a rapid flash of the 40 m mesh; it may depend on the client settings and network.

    I found it unexpected that the land impact numbers can be requested right after llSetScale with no delay, while the editor / More Info display takes several seconds before it shows the numbers.
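
    For instance (a sketch; the OBJECT_* detail constants are the documented llGetObjectDetails flags), the numbers can be read back immediately after the resize:

    llSetScale(<40.0, 1.0, 1.0>);
    // the cost figures are available right away, no delay needed
    list costs = llGetObjectDetails(llGetKey(),
        [OBJECT_PRIM_EQUIVALENCE, OBJECT_STREAMING_COST, OBJECT_PHYSICS_COST, OBJECT_SERVER_COST]);
    llOwnerSay(llDumpList2String(costs, " / "));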


  3. teuclase wrote:

    Hmm... wouldn't that make the color change only happen while you're holding down a click on the object? I'm looking more to have the color change triggered or stopped by clicking the object once.

     

    Oh, I guess I didn't read well enough what you want *blushes*

    To interact with the script you need an event loop (as new events won't interrupt the running event handler). I can think of two ways: a timer is one, as you said; the other option is to have an event handler for the color change (see the changed() event). Each color change would trigger a new changed() event, so your while loop would become a changed() event loop. When you click the object, touch_start would get through and you could change the loop.

    EDIT: you could set the interval at which it can be interrupted by adjusting how many steps of your current sweep over the rainbow are done in one changed() event.
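
    A minimal sketch of that changed()-loop idea (the rainbow math is just illustrative, and it relies on CHANGED_COLOR firing for script-set colors):

    integer running;  // TRUE while the sweep is active
    integer step;

    default
    {
        touch_start(integer num)
        {
            running = !running;  // one click starts or stops the sweep
            // kick off the first changed() event (assumes the color actually changes)
            if (running) llSetColor(<1.0, 0.0, 0.0>, ALL_SIDES);
        }

        changed(integer change)
        {
            if (running && (change & CHANGED_COLOR))
            {
                step = (step + 1) % 12;
                float h = TWO_PI * step / 12.0;
                // each llSetColor triggers the next changed() event, forming the loop
                llSetColor(<0.5 + 0.5 * llCos(h),
                            0.5 + 0.5 * llCos(h - TWO_PI / 3.0),
                            0.5 + 0.5 * llCos(h + TWO_PI / 3.0)>, ALL_SIDES);
            }
        }
    }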

     

  4.  

    touch_start() will be triggered when you click the object, but there's no way inside the event to see whether the agent is still touching (holding the mouse button down), as all the llDetected* functions refer to the moment the touch started.

    You might look at the touch event (http://wiki.secondlife.com/wiki/Touch); it is triggered for as long as the mouse button is held down (meaning that as soon as your script's event handler returns, there's a new touch() event in the queue), so you can use the touch() event instead of the while loop.
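
    For example (a sketch, with a toy color effect), the body of the loop can simply live in touch():

    default
    {
        touch(integer num)
        {
            // runs over and over for as long as the mouse button is held down
            vector c = llGetColor(0);
            llSetColor(<c.z, c.x, c.y>, ALL_SIDES);  // rotate the RGB components
        }
    }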

     

  5.  

    I made an LSL script to measure the mesh sizes needed in the Land Impact calculation, available here: https://wiki.secondlife.com/wiki/User:LindaB_Helendale/getMeshLODsize , along with code to calculate the streaming cost for a given radius. Mainly it is just a nice-to-know thing, to see exactly how the streaming/download cost is calculated, but the script may have more serious use in optimizing the land impact.

    The streaming/download cost is based on the size of the mesh in bytes at each LOD. The format is in the wiki (http://wiki.secondlife.com/wiki/Mesh/Mesh_Asset_Format) but the actual size of the mesh asset is not available (and as the data is gzipped, it's not directly predictable from the number of vertices/triangles/etc.)

    There was a thread about making a plug-in to predict the land impact (http://community.secondlife.com/t5/Mesh/Land-Impact-Estimate-in-Blender-via-addon-scripts/m-p/1204967/highlight/true#M8664) and these tools could be used either to build some kind of statistical predictor or, if someone deciphers the asset size from the asset format, to check the results.
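
    For reference, the weighting goes roughly like this (a sketch; the constants are the wiki-documented values of the time and may have changed, and the per-LOD byte sizes must be measured):

    float fmin(float a, float b) { if (a < b) return a; return b; }

    // radius: half the bounding-box diagonal; bHigh..bLowest: asset bytes per LOD
    float streamingCost(float radius, float bHigh, float bMid, float bLow, float bLowest)
    {
        float maxArea = 102944.0;                     // visibility-area cap
        float dMid    = fmin(radius / 0.24, 512.0);   // LOD switch distances
        float dLow    = fmin(radius / 0.06, 512.0);
        float dLowest = fmin(radius / 0.03, 512.0);
        float aHigh   = fmin(PI * dMid * dMid, maxArea);
        float aMid    = fmin(PI * dLow * dLow, maxArea);
        float aLow    = fmin(PI * dLowest * dLowest, maxArea);
        float aLowest = maxArea - aLow;               // beyond the lowest switch
        aLow -= aMid;                                 // make the bands disjoint
        aMid -= aHigh;
        // bytes -> triangle estimates after a fixed metadata discount
        float tHigh   = (bHigh - 384.0) / 16.0;
        float tMid    = (bMid - 384.0) / 16.0;
        float tLow    = (bLow - 384.0) / 16.0;
        float tLowest = (bLowest - 384.0) / 16.0;
        float avg = (tHigh * aHigh + tMid * aMid + tLow * aLow + tLowest * aLowest) / maxArea;
        return avg / 250000.0 * 15000.0;              // triangle budget -> cost units
    }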

  6. I did quite a bit of research on terraforming when I made a tool to build the terrain from any height map with the LSL terraforming functions. Here's what I found out, but of course I can't say I know it for sure ;)

    The ringing you see is noise the viewer adds to the rendered terrain. It's not part of the real terrain (as you can see with llGround() or by letting physical objects rest on the land). The noise is added near steep edges, reaching 4-8 m from the edge, and the amplitude depends on the height of the slope and especially on how sharp the crease is between the plateau and the slope.

    I assume it is there to make steep cliffs look more natural. 

    About the interpolation: the ground is defined at grid points, with the origin at (0,0), and it is linearly interpolated, each square meter being split into two triangles diagonally in the SW-NE direction. So grid points 0 to 255 are in the sim, and the last meter is interpolated from the coordinate-0 line of the next sim.
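
    A quick way to see the server-side values for yourself (a sketch; llGround() offsets are relative to the prim position):

    default
    {
        touch_start(integer n)
        {
            vector p = llGetPos();
            // offset pointing at the nearest SW grid corner of this square
            vector corner = <(float)llFloor(p.x), (float)llFloor(p.y), 0.0> - p;
            llOwnerSay("grid corner: " + (string)llGround(corner));
            // the square centre lies on one of the two interpolation triangles
            llOwnerSay("square centre: " + (string)llGround(corner + <0.5, 0.5, 0.0>));
        }
    }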

    You can find more in the manual of my terraformer, see Chapter 4 (https://d44ytnim3cfy5.cloudfront.net/assets/3384190/original/EdenScape%20User%20Manual.pdf?1302781935), but if it's not OK to link to material related to commercial products here, please let me know and I'll remove the link.

     

  7. Hint to save texture faces (materials)

    I was running low on texture faces with a mesh and needed to set the bounding box and pivot point. I tested whether it's possible to add markers to the bounding box corners that are so small they won't show, so that they can be part of any existing texture face.

    I used corner marker triangles of size 0.01 mm, so on a 2000-pixel screen the camera needs to be closer than 2 cm to a marker to make it fill one pixel, which makes them practically invisible no matter what color they have. Still, the tiny triangles defined the bounding box correctly.

    (The size was chosen so that the difference of the corner marker coordinates can still be computed as non-zero with the SL 32-bit floats, though I don't know if the server and mesh uploader use 32-bit floats like LSL does. A 32-bit float has 23 bits of precision, about seven significant digits, hence 0.01 mm is OK for sizes up to 64 m, as 64.00001 - 64 requires 22.6 bits to give a non-zero result.)
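
    A quick check of that precision claim in LSL itself (a sketch):

    default
    {
        state_entry()
        {
            // float spacing near 64 is 2^-17 ~ 7.6e-6 m, less than the 1e-5 m marker offset
            llOwnerSay((string)(64.00001 - 64.0));  // prints 0.000008, i.e. non-zero
        }
    }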

     


  8. Dora Gustafson wrote:

    I am not sure about who has got what rights with respect to editing.

    I am premium and have been for more than 4 years.

    Nothing special about my account and I can edit and add.

    It must be written somewhere.

    Maybe someone out there will help. HELP!!!

    If you have written it all down in the form you want to present, maybe someone (like me) could add it to the wiki :)

    ADDED: Try reading these

    Me blonde... I tried to add a comment about this on the discussion page and it said I need special permission, so I thought it was the same for the main page, but it was possible to edit that page.

    Additional tests show that translation is not affected by collision with an avatar; only the angular movement is. I added it to the Caveats.

     


  9. Rolig Loon wrote:

    I don't see this as unusual or unexpected behavior. A KeyframedMotion object is physical, so its collision with an avatar is like any physical object's collision. I made a pair of double doors with llKeyframedMotion last week. If I stand in the way of the doors as they open, they swat me out of the way. If I make tiny doors, though, they don't have enough oomph to move me, so they stall. That's the way with any physical object I've ever worked with.

    The way it works is quite different from what the wiki page says:

    "Collisions with physical objects will be computed and reported, but the keyframed object will be unaffected by those collisions. (The physical object will be affected, however.)"

    So it should swat you out of the way, but the door should not stall, according to that description.

    The keyframed object is not physical, but part of the physics is simulated: "This function can only be called on NON-physical objects. In the future it could be extended to support physical objects, but this is more complicated as collisions could prevent the object from reaching its goal positions on time."

    The implemented part of the physics in the interaction between a keyframed object and a physical task differs from normal physical objects, depending on whether the other party is an object or an avatar. If there's a big physical object in the way, the keyframed object just throws it aside and won't stall, while an avatar affects the keyframed object, causing the problem mentioned above, which is presumably why it is not yet implemented for physical objects too.

     

     


  10. Dora Gustafson wrote:

    If you think it should be mentioned, you should do it.

    All residents can log in and make changes and additions to the wiki.

    There are some rules to follow of course.

    You can read them when you log in.

    I think it is worth mentioning and I think you should do it :)

    I tried to: I logged in, and when I tried to save, it complained about the user not having rights to edit the page.

    If you think it should be possible, I'll try again ;)  

     

  11. The wiki page http://wiki.secondlife.com/wiki/LlSetKeyframedMotion states that collisions with physical objects will be computed and reported, but the keyframed object will be unaffected. There's no mention of collisions with avatars, but in tests avatars work differently from other physical tasks.

    Collision with an avatar affects the moving object, and it won't reach the final destination.

     

    The test was a simple bar that makes a 360 degree rotation on click, with this code:

    touch_start(integer num){
      float FT=45.0;
      list opt=[KFM_MODE, KFM_FORWARD, KFM_DATA, KFM_ROTATION];
      integer i;
      integer N=4;
      float Trev=2.0; // secs per rev
      float t=(float)llRound(Trev/(float)N*FT)/FT; // integer multiple of 1/45 sec
      list keyFrames;
      for(i=0;i<N;i++) {
        keyFrames += [llEuler2Rot(<0,0,360.0/N>*DEG_TO_RAD), t];
      }
      llSetKeyframedMotion(keyFrames, opt);
    }

    and when I stand in the way of the bar, it slows down and won't complete the 360 degree turn.

    Should this be mentioned in the wiki as a feature (someone with edit rights there?), or is it a bug, or something intentional on the way to making keyframed motion interact with physical tasks?

     

    UPDATE:

    Hovering avatars do not appear to affect the keyframed motion, while dropping an anvil on a hovering avatar causes normal collision effects.

    The effect of a collision with an avatar on the moving object seems to depend on the object's mass, like with normal physics.

     

  12.  

    Just add a distance test to any giver script you find. If it uses a collision or sensor event, the test would be

    if (llVecDist(llDetectedPos(i), llGetPos()) < 10.0) {
        // give it
    }

     

    llDetectedPos(i) is the position of the detected avatar, llGetPos() is the position of the prim holding the script, and llVecDist gives the distance between the two positions.

     

     


  13. Void Singer wrote:

    I'm going to recommend against that for at least two reasons

    1) channel commands are normally (but not always) typed with a space after the channel. The space IS read as a literal and would cause a failure to trigger. When numbers are involved it's worse, since the lack of a space will cause a leading number to be read as part of the channel (again causing failures). Both of these can be avoided by using string trim in the listen event.

     

    The space between the channel and the first character, and any white space after the last character, is trimmed off automatically when the message is sent to the listen event.

    "/1 2" will come in as "2", and

    "/1                     hi                         " will come in as "hi",

    so a listen for "hi" on channel 1 will trigger for "/1             hi     " as well as for "/1hi".

     

    2) fixed word listens do not ignore capitalization, so if it expects "hi" and you type "Hi" (or vice versa) it fails, but this can also be caught in the listen event with llToLower or llToUpper.

    Yes, if you have a fixed list with several versions, they need to be listed.

     

    3) a specific message is the last item checked by the region when filtering listens, so the savings vs versatility is extremely low. I only recommend using that format if you have no filters for name/key and are operating on the public channel (which you should already be avoiding at all costs), or it's a very specific limited-use channel command (a few maybe, I'd say not more than a handful personally, since you still have to sort them on the other end.)

    All the previous checks are done for both a fixed listen and an open listen, so being the last item does not matter when comparing those two, and the server code will do the filtering more efficiently than LSL code can. Basically the same work has to be done either way, either in the server code before triggering the event, or by queueing the event, running the script, and doing the filtering in LSL, so for every false positive that the LSL code would reject, rejecting it without triggering the event saves resources.

    But of course it's useful only if the list of words is limited. Yet so many scripts listen in public chat for fixed commands ("hide", "show", etc.) with an open listen and filter them in the listen event.

     

     
  14.  

    Definitely a good idea. From the information in the wiki, it is possible to calculate the streaming/download cost component of the land impact as a function of the radius of a sphere enclosing the mesh, for any given LOD scheme, assuming you know the size of the mesh in bytes for each LOD.

    The byte size of the stored mesh depends on many things (number of faces, number of vertices, UV maps, etc.), and it seems that there is some compression of the data too. Anyway, a statistical model with some confidence intervals could be estimated.

    I have reported some experiments here: http://community.secondlife.com/t5/Mesh/Prims-Land-Impact-Whatever/m-p/1195631#M8403

     

     

  15.  

    One more note on multiple listeners.

    If you listen for fixed lines like "hey" or "hi", and you don't need to catch arbitrary lines starting with the word (such as "hi guys" or "oi there"), it is more efficient to register a listen for each word, up to 65 listens per script. That way, matching the chat line against each word is executed on the server when it decides whether to send the message to your script as a listen event, instead of your LSL code doing it.

    You can register the listens in a for loop over a list easily:

    integer nWords = llGetListLength(wordList);
    integer i;
    for (i = 0; i < nWords; i++) {
        llListen(channel, "", NULL_KEY, llList2String(wordList, i));
    }

     

  16. Yup, it's just a convenience, similar to having several mesh objects in the same file, but of course it is convenient only if the feature is supported in the 3D tools.

    I don't know much about what Blender or other such tools offer out of the box; I make my meshes with math tools and write the collada files directly according to the collada specification, so I'm just curious about which collada features work and how, and which don't. Definitely not worth suggesting a feature that people could not use.

     

  17.  

    All strings in Mono scripts use the UTF-16 encoding, with one character taking 2 bytes, while all communication (llSay, llEmail, etc.) uses UTF-8, with one character taking one, two or three bytes.

    I made a helper function to check the string length in bytes in UTF-8 encoding:

    http://wiki.secondlife.com/wiki/User:LindaB_Helendale/UTF8StringLength
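
    The same can be done with a short idiom along these lines (a sketch): llStringToBase64 converts the string to UTF-8 first, so the byte count falls out of the base64 length and its '=' padding.

    integer utf8Bytes(string msg)
    {
        string b64 = llStringToBase64(msg);          // UTF-8 bytes, base64 encoded
        integer bytes = llStringLength(b64) / 4 * 3; // 4 base64 chars per 3 bytes
        if (llGetSubString(b64, -1, -1) == "=") bytes--;  // subtract the padding
        if (llGetSubString(b64, -2, -2) == "=") bytes--;
        return bytes;
    }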

     

    The limits on the communication channels are hard: the message is clipped at the byte limit even if that is in the middle of a two- or three-byte character, so the last character may change if the message is clipped.

     

  18. Thanks Drongle :)

    I would think LL has to have a record of the current specs for the collada importer, to keep the software development under control (it can't be only in the heads of the developers, I hope...), and it would be a win-win for LL and the user community to see the specs, instead of tedious and uncertain reverse engineering.

     

    Natales Urriah wrote:

    Drongle gave a good explanation... but to put it bluntly... what you do to the Collada data is far less relevant than what the Lab does to their Collada importer.

    I don't think you understood the question; there are the collada specs and there is the internal mesh format, but the question was about the specs for the translation of the collada to the mesh asset.

    In general I think what content creators do with the collada data (they create the meshes) is no less relevant than what LL does with the Collada importer, and knowing how the Collada importer translates the data to the mesh is the key to utilizing the features LL provides.

     

  19. If anyone has a link to the specifications of the SL collada uploader, I would much appreciate it... and while I have no reference, I'll continue by asking if anyone knows.

    Collada lets you specify the LOD in the same file as a proxy for a node, for example:

    <node id="NODE0"/>
    <node id="NODE1"/>
    <node id="NODE2"/>


    <node id="LOD1">
      <instance_node url="#NODE1" proxy="#LOD2"/>
    </node>
    <node id="LOD2">
      <instance_node url="#NODE2"/>
    </node>
    <visual_scene>
      <node>
        <instance_node url="#NODE0" proxy="#LOD1"/>
      </node>
    </visual_scene>

    So the mid LOD would be the proxy of the high LOD, the low LOD the proxy of the mid LOD, and the lowest LOD the proxy of the low LOD. But the collada specification does not say that the application must use the proxy as LOD; it's just a way it's often used:

    Because the mechanism and use of this attribute are application defined, more information about how applications can decide which path to follow should be stored in the <extra> element of <instance_node>.

    It's a bit of work to make a script that bundles the LODs in this format just to test whether it is supported, and if it is, the <extra> element likely needs some info to tell the uploader to take the LODs from the proxy nodes.

    Would anyone know whether this is supported and what <extra> it needs, and/or where to find the SL collada uploader specs?

     

  20.  

    The face numbers come from the order in which the faces are defined in the collada file. Each <triangles> and <polylist> defines a (part of a) face; if you have the optional material attribute, the parts with the same material form one face.

    But how you can control the order of the elements in the ZBrush collada exporter, I haven't the faintest clue.

     

    See this for more info: http://community.secondlife.com/t5/Mesh/Multiple-materials/m-p/1187185#M8278

     

  21.  

    Uploading animations that have a root other than HIP was disabled in Viewer 2, but you can still upload them with Viewer 1 based clients, like Phoenix and Imprudence.

    It's weird that LL allowed making deformer animations for years, and just when they would be needed with rigged avatars, they were disabled ;) Hopefully LL will provide some other means to change the skeleton; maybe I just haven't seen the news...

     

     

  22.  

    /me facepalms 

    You are right; disregard all that about mirroring. I had the 4x4 matrix transposed (in column-vector-on-the-right form) and it messed up the script I used to check it... so the problem is not in the transformation but something else.

     

  23.  

    I use this algorithm
    http://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation#Conversion_to_and_from_the_matrix_representation
    to switch between collada transformation matrices and Second Life rotation quaternions, and that algorithm confirms that the transformation matrix in the odd mesh

    1 0 0 0
    0 0 1 0
    0 -1 0 0
    78.6674 0 0 1

    does not correspond to any rotation + scaling + translation (as the mirroring turns the right-handed coordinate system left-handed and you can't rotate the object to flip the handedness), and it might be that the Second Life collada uploader is simply not prepared to handle all possible transformations correctly.

     

     

  24. The matrix is 

     1.0000   0        0        0
     0        0        1.0000   0
     0       -1.0000   0        0
    78.6674   0        0        1.0000

    In collada the transformation matrices are in row-vector-on-the-left form, so

    v' = [x y z 1] * matrix

    so this matrix has the translation <78.6674, 0, 0>,

    and it picks the original Z coordinate as Y and negates it (second column),

    and it picks the original Y coordinate as Z (third column).
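
    Multiplying it out under that convention gives, for a point [x y z 1]:

    [x y z 1] * matrix = [x + 78.6674, -z, y, 1]

    i.e. the new Y is the old -Z and the new Z is the old Y.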

     

     

     

     

     
