

Posts posted by LindaB Helendale

  1.  

My bad if the table is not clear. The first two columns are what a scripter may need: the left column tells how much key-value quota is charged for storing a key-value pair that fits in the payload (the second column).

The third column is just nice-to-know stuff, listing the chunk added to the previous block size to make the next size; the memory is thus added in increments of [240, 256, 512, 1024, 2048, 4096] bytes.

Oh *blush*, I just realized a typo there in the second block size; no wonder the numbers didn't make sense.

I dug out the memory usage and memory allocation for experience key-value storage, and here are the results.

Should this info be in the wiki, and if so, where? Comments are welcome.

     

    Strings use UTF-8 encoding

The data is stored with UTF-8 encoding: basic 7-bit ASCII characters take one byte, extended ASCII and a few others take two bytes, and fancy characters (ℒℬ⇄ ⇈㐦 㐧) take three bytes. (See http://wiki.secondlife.com/wiki/Combined_Library#Byte_Length_of_UTF-8_Encoded_String for checking the string length in UTF-8.)

The limits are in bytes; for example, the Value part can fit 4095 7-bit ASCII characters, 2047 two-byte characters, or 1365 three-byte Unicode characters.
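As a quick check outside LSL, the byte-length rule above can be verified in a few lines of Python (a sketch; the function name is mine):

```python
# Byte length of a string in UTF-8, as used by the key-value size limits.
def utf8_len(s):
    return len(s.encode("utf-8"))

print(utf8_len("A"))   # 1 byte: 7-bit ASCII
print(utf8_len("é"))   # 2 bytes: extended character
print(utf8_len("ℒ"))   # 3 bytes: "fancy" character

# How many three-byte characters fit in the 4095-byte Value limit:
print(4095 // utf8_len("ℒ"))   # 1365
```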

    Memory allocation

There's a pretty good amount of memory available (thanks LL), but it's always good to be aware of memory allocation, like overhead and fragmentation issues.

The record overhead is 125 bytes: each record needs 125 bytes + the length of the Key in UTF-8 + the length of the Value in UTF-8.

Memory for the record is allocated in blocks. Key and Value share the same block, so only the total length of Key + Value affects the block size. (A Key of 100 bytes + a Value of 400 bytes uses the same amount of memory as a Key of 400 bytes + a Value of 100 bytes.)

    The memory block sizes are 

    Block size   Payload  Added size

    240          115      240
    496          371      256
    1008         883      512
    2032         1907     1024
    4080         3955     2048
    8176         8051     4096

For longer records there's a chance of considerable fragmentation: if the key+value won't fit in the payload of the 4080-byte block (3955 bytes), the record is allocated 8176 bytes (out of which only 1011 + 4095 bytes are available storage, due to the llCreateKeyValue limits on the Key and Value lengths).

Each record has a separate memory block, so the total memory usage is the sum of the record block sizes. For example, if one key-value pair has key+value length 884 bytes, it gets a 2032-byte allocation (since the payload of the 1008-byte block is only 883 bytes), and if another key-value pair has 5 bytes, it gets a 240-byte block; the total usage is then 2272 bytes.
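The allocation rule above can be sketched in Python (the block sizes and the 125-byte overhead come from the table; the function name is mine):

```python
# Block sizes from the table above; payload = block size - 125-byte overhead.
BLOCKS = [240, 496, 1008, 2032, 4080, 8176]
OVERHEAD = 125

def allocated_bytes(key_len, value_len):
    """Smallest block whose payload fits key+value (both in UTF-8 bytes)."""
    need = key_len + value_len
    for block in BLOCKS:
        if need <= block - OVERHEAD:
            return block
    return None  # cannot happen within the llCreateKeyValue limits

# The example from the text: an 884-byte record just misses the 1008-byte
# block (payload 883) and gets 2032 bytes; a 5-byte record gets 240 bytes.
print(allocated_bytes(0, 884))   # 2032
print(allocated_bytes(2, 3))     # 240
```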

     

     

     

EDITS:
- fixed a typo in the 3rd column of the 496-byte block (was 512, should be 256)
- added an explanation of the usable storage limit due to the Key and Value size limits

I made more comprehensive tests by choosing different return list lengths (4095, 4096, 4097, 4098) and setting the key length so that the list length is a multiple of (keyLength+1), which is the length of each key plus its comma delimiter, excluding the status flag "1".

The result is that the whole list can be 4097 characters (8194 bytes), and the actual list excluding the status flag can be 8 kB, or 4096 characters. For example, 128 keys of length 31 characters.
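The arithmetic above can be written out as a small Python sketch (assuming, as tested, a 4096-character list excluding the status flag, and one comma delimiter per key):

```python
# Max keys llKeysKeyValue can return in one call, given a fixed key length,
# per the 4096-character limit on the list excluding the status flag "1".
def max_keys_per_call(key_len):
    return 4096 // (key_len + 1)   # +1 for the comma delimiter

print(max_keys_per_call(31))   # 128 keys of 31 characters, as tested above
```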

    I added it in the wiki
    http://wiki.secondlife.com/wiki/LlKeysKeyValue

This function will attempt to retrieve the number of keys requested but may return fewer if there are not enough to fulfill the full amount requested or if the list is too large. The length of the returned list is limited to 4097 characters (the success flag "1" and 4096 characters). The order in which keys are returned is not guaranteed, but it is stable between subsequent calls as long as no keys are added or removed.

Could someone who knows the wiki policies check that it's in the right place and in a suitable format (and move/edit it if needed)? I was not sure if it should be in the definition or in 'Caveats'.

     

     

Thanks Rolig, so there may not be a throttle on key-value requests.

I made some tests (which are pretty inconclusive, as it goes with LSL), but it seems that the limiting factor is the event queue. Depending on the length of the available event queue (it's at least 64, but if the server is happy or has idle resources or whatever, it can be much longer), I was able to store up to 473 key-value pairs sent in a for loop. After testing a while, the event queue shrank back to the nominal value of 64, but there were no throttle errors.

For reading the keys, llKeysKeyValue returned 600 out of 1000 keys in one call, with a return value size of 8182 bytes, which suggests that the max key list size could be 8 kB. To test that, I made longer keys, so that 8 kB can hold 66 keys with the comma delimiters, and llKeysKeyValue returned 66 keys, taking 8166 bytes.

Conclusion: the size of the key list returned by llKeysKeyValue is limited to 8 kB.

     

I have two questions about experience tools for which I haven't been able to find any info on the LSL wiki, JIRA, or the forum.

There appears to be a quota for experience key-value requests; at least there's an error code for it:
XP_ERROR_THROTTLED (1, "exceeded throttle"): The call failed due to too many recent calls.

    Would anyone know about the quota and throttling? 

     

The other question is about requesting the key list:
    http://wiki.secondlife.com/wiki/LlKeysKeyValue

The returned list may contain fewer keys than requested "if there are not enough to fulfill the full amount requested or if the list is too large". Does anyone know what 'too large' means there?

The keys may be up to 1011 bytes, so if the script has to assume they can all be max size, the script can request around 60 keys, max. If the keys are shorter (such as compressed keys, 18 bytes), a script can handle a list of 3000 keys, but since there's no way to filter the keys in the request, some other script in the experience may add longer keys and cause a script assuming short keys to crash. Is 'too large' related to the script's available memory (and if so, how?), or is it some fixed size?

     

     

     

     

When I log on to the wiki and try to add a new page or edit the existing pages on my SL wiki user page

    http://wiki.secondlife.com/wiki/User:LindaB_Helendale

I get an error saying that only Administrators have permission to edit it. Also, the 'edit' button on the top menu bar is missing.

The wiki still says that everyone in the group Users can create and edit pages. Am I missing something, or what could this be about?

     

     

     


  7. Kwakkelde Kwak wrote:


    LindaB Helendale wrote:

    LI (Land Impact) is server side cost, and transparent textures won't increase it, Display Cost is viewer side cost where it shows up.

    LI is determined by the combination of all the weights, or rather the highest weight. server cost is server side cost, so is physics weight. display weight is indeed viewer side. In other words, you can have something with little impact on the servers and still get a high LI.

Download weight, server weight and physics weight all measure server-side load: download weight is the bandwidth the server needs to send the object out for viewing, physics weight is the cost of the physics simulation on the server, and server weight is the prim bookkeeping cost on the server.

     

  8. I made a simple in-world calculator to see the effect of mesh LOD reduction scheme on Download Cost in Land Impact. It prints a table of total LI and the contribution of each LOD on the LI for different mesh radii.

    The script is here: http://wiki.secondlife.com/wiki/User:LindaB_Helendale/meshLODschemeCalculator

     

    Note (or caveat actually):

Download cost depends on the byte size of each LOD in the asset server, including the vertex and face data, UV vertices and faces, etc., and the data is compressed by gzip. In general it's not easy to know those numbers, but roughly the triangle counts are of the same order of magnitude as the numbers used in this script. The LI results should be taken as relative, showing the relative contribution of the different LODs and the effect of the mesh scale for a given scheme, such as 2000 : 500 : 120 : 60.

To see the actual mesh LOD sizes in the asset server for an uploaded mesh, and to get absolute LI values, see http://wiki.secondlife.com/wiki/User:LindaB_Helendale/getMeshLODsize

Part of the information this script prints can be found in Drongle McMahon's graphs for some LOD schemes (http://community.secondlife.com/t5/Mesh/Download-weight-and-size-Some-graphs/m-p/1058609#M5642), but this script lets you display any LOD scheme.

I don't know Maya at all, but sometimes problems in loading the same mesh again are related to the cached copy the uploader saves as an .SLM file, which it tries to reuse. Unset the debug setting MeshImportUseSLM or delete the .SLM file before uploading the mesh to check if that's the problem.


I've been doing rather a lot of testing now. If the search keyword matches the outfit name (subfolder name) and some item (link) inside the folder, the listing of course shows the folder and the matching items inside it. But if the search does not match any item inside the folder, it is not listed.

Often outfits are named after the items, so it finds those, but I use a lot of outfit names that allow better searching with no matches from items (I use Finnish outfit words), and now those outfits are not listed at all. In 3.2.4 and before it worked OK.

The only debug setting I found that might have anything to do with this is 'ShowEmptyFoldersWhenSearching', but it had no effect.

     

To test this, make an outfit folder with a name that has no words in common with any of the items inside, and search for the outfit name.

     

Yes, I did a full reinstall, and the issue is still there.

I also tested by saving my outfit as testoutfit and trying to search for testoutfit in the inventory; it only found the link to the outfit folder inside Current Outfit, but it didn't find the folder testoutfit under My Outfits. The folder shows OK in the inventory listing and in the Recent tab, but search can't find it.

     


I installed the latest release version 3.2.5, and it seems that the inventory search does not find any My Outfits subfolder names, so you can't search for the outfit names. It finds the item names (links) inside the folders OK.

I have had this same issue in the two latest DEV viewers too, and bugs are kind of OK in a DEV viewer ;), but this is annoying in the release viewer.

Before filing a JIRA, could someone else check too: is it the viewer, or something wrong in my settings/installation?

So, can you search for outfit names in 3.2.5?

     

Technically it is possible, but I have no clue if it can be done in Blender or such tools... likely not.

The Collada file contains the inverse bind matrices (INV_BIND_MATRIX) for each bone, which transform each bone to its standard position/rotation from the bind shape (with the origin at the pivot point), and during animations the rotations are applied to these transformed bones. You would need to replace the T-pose inverse bind matrices with the inverse bind matrices of your current shape. (The inverse means that the forward matrix moves the bone from the bone's own position/rotation to the bind shape position/rotation, and the inverse moves it back.)
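If you want to compute the replacement inverse bind matrices yourself, the useful fact is that a bone's bind matrix is a rigid transform [R | t] (rotation R, translation t), whose inverse is [Rᵀ | -Rᵀt]. A minimal Python sketch (names are mine, not from any Collada tool):

```python
# Invert a rigid transform given as a 3x3 rotation R and translation t.
# The inverse bind matrix in the DAE is exactly this inverse of the
# bone's bind-pose transform.
def invert_rigid(R, t):
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]  # transpose of R
    t_inv = [-(Rt[i][0]*t[0] + Rt[i][1]*t[1] + Rt[i][2]*t[2])
             for i in range(3)]                           # -R^T t
    return Rt, t_inv

# Pure translation: the inverse just negates the offset.
R_id = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
Rt, t_inv = invert_rigid(R_id, [1.0, 2.0, 3.0])
print(t_inv)  # [-1.0, -2.0, -3.0]
```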

If you can set the bind shape somehow in Blender... no idea.

     


  14. Kwakkelde Kwak wrote:

    Hmm....someone said something about "polygons" in the DAE....if the DAE stores quads, it SHOULD be possible SL does the triangulation differently....never looked in a DAE file really... Still my guess is the triangulation is the same, makes sense to me the vertex with the lowest number is where all the triangles meet, or originate from...

Hmm... I have no idea how to add XML stuff in this text, it thinks the tags are HTML and removes them all... really awfully awkward editor on this forum :(

    "polylist" means <__polylist__ >  with __ removed in the following.... omg it cant be this hard...   

     

Collada has three basic formats for the polygons: "triangles", which contains the list of each triangle's vertex indices; "polygons", where the vertex indices of each polygon are listed separately (it's not recommended, as it takes a lot of text in the file); and the most generic, "polylist", which I use to save almost all my meshes. A polylist contains a list "vcount" that lists all the polygon orders (for example, for triangles it would be "vcount" 3 3 3 3 3 3 3 ... "/vcount", but any of the polygons can have any number of vertices, and they don't need to be planar), and it contains a list "p" that lists the vertex indices + normal indices + UV map vertex indices. As each polylist produces one texture face (= material in Blender), polylists are a handy way to save meshes for SL, with no need to use materials at all (unless the uploader is changed in the future to use the actual material attributes instead of the triangles and polylist blocks to set the texture faces).
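A toy Python sketch of reading a polylist as described above (simplified to a single index stream; real polylists interleave vertex/normal/UV indices according to their input declarations):

```python
import xml.etree.ElementTree as ET

# A toy polylist: "vcount" gives each polygon's vertex count, "p" the
# flattened index stream (one index per vertex here, for simplicity).
dae = """<polylist count="2">
  <vcount>3 4</vcount>
  <p>0 1 2  2 1 3 4</p>
</polylist>"""

root = ET.fromstring(dae)
vcount = [int(n) for n in root.find("vcount").text.split()]
p = [int(n) for n in root.find("p").text.split()]

# Slice the flat index stream into one index list per polygon.
polys, pos = [], 0
for n in vcount:
    polys.append(p[pos:pos + n])
    pos += n
print(polys)  # [[0, 1, 2], [2, 1, 3, 4]] -> one triangle and one quad
```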

My comment on the triangulation of faces with more than 3 vertices was about how the SL uploader triangulates polylists.


  15. Jenni Darkwatch wrote:

    LindaB's post made me wonder just how smart the uploader really is, so I created a collada file by hand with a text editor to add a few edge cases.

At least the uploader is smart enough to merge duplicate vertices and to remove vertices not referenced in the face list.

     


  16. Kwakkelde Kwak wrote:

    If they are not flat we're not talking about polygons... or are you talking about multiple flat polygons like this resulting in a curved surface?

Hopefully we don't get stuck with vague terminology. In math, polygons are plane figures, but for example the Collada format specification uses the term polygon for a generalized polygon: "Polygon primitives that contain four or more vertices may be non-planar as well." Similarly, the LL avatar mesh (in Wavefront OBJ format) has 638 triangle faces and 3274 faces with four vertices (which some people might call quads), and very few of them are planar.

The Maya documentation also talks about both planar and non-planar polygons:

    http://download.autodesk.com/us/maya/2010help/index.html?url=Polygons_overview_Planar_and_nonplanar_polygons.htm,topicNumber=d0e201272

     

I don't know which terminology we should use, but I thought the concepts in the Collada specs would be more relevant than pure math.


  17. Kwakkelde Kwak wrote:

    Ok, educate me here. The behaviour of lighting on this hexagon would be different from one where the lower triangles would be mirrored horizontally, but I really don't see why this is "not so good". When it's flat I see no disadvantage at all to be honest.

If it's flat it won't matter much, but in general the polygons will not be flat, and then the triangulations in the middle row of Chosen Few's pic produce triangles with less sharp angles (and shorter crease edges), which can give a flatter patch and more natural-looking reflections. That's why those triangulations are commonly used.

I think (but I don't know) that the polygon triangulation methods wouldn't add vertices, so the ones in the right column of Chosen Few's pic would require manual triangulation.

    Kwakkelde Kwak wrote:

    LindaB Helendale wrote:

    Quads are done the same way, so for quads you could let the mesh uploader do it, but not for any polygon with more than four vertices.

    This is just not true. As both Chosen and I said earlier, the way light hits a quad when bent, is affected by the orientation of the triangles within it.

Two quads with the same vertex pulled up; as you can see, they are nothing alike.

Yes, of course the result depends on the direction the triangulation is done, but for quads the only way to do it is to split between two opposing vertices, so unless you specifically need it done in the other direction (from vertex 1 to 3, not from 0 to 2), the SL uploader does it the same way as any other tool.

I definitely agree that doing it yourself gives better control than letting the SL mesh uploader do it.

     

     


  18. Chosen Few wrote:

    However, even if you know you'll be working with a platform that doesn't care what the heck you feed it, it's still always best to triangulate your model by hand, before exporting it.  That's the only way to know for certain what your final poly count is going to be, and what the model's appearance will be. 

    Even though you know a quad will break down into two triangles, you can't ensure which direction the dividing edge will run, unless you put it there yourself.  This can be incredibly important when a model is supposed to be symmetrical, for example, or when cleaning up any non-planar quads.  (Non-planar quads are BAD!) 

    For more complex polygons than quads, all bets are off unless you control the triangulation yourself.  Is a pentagon three tris or five?  Is an octagon three quads, or is it six tris that all meet at one corner, or is it eight tris that all meet in the middle, or some other combination?  It's never a good idea to leave decisions like that up to the computer.

     

The SL mesh uploader triangulates the polygons (specified by a <polylist> block) by drawing edges from the first vertex in the list to all the other vertices, which is not a very good way to do it.

    A hexagon is triangulated like this

[Image: a hexagon triangulated with all edges fanning out from the first vertex]
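That fan triangulation can be written in a couple of lines of Python (a sketch of what the uploader appears to do, not its actual code):

```python
# Fan triangulation from the first vertex: edges are drawn from vertex 0
# to every other vertex, giving n-2 triangles for an n-gon.
def fan_triangulate(indices):
    v0 = indices[0]
    return [(v0, indices[i], indices[i + 1])
            for i in range(1, len(indices) - 1)]

print(fan_triangulate([0, 1, 2, 3, 4, 5]))
# hexagon -> [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 5)]
```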


    Quads are done the same way, so for quads you could let the mesh uploader do it, but not for any polygon with more than four vertices.

     
