Jenna Felton

Everything posted by Jenna Felton

  1. That is the simplest approach. Both 'LSD containers' must have scripts installed, with an access API between them. The API must implement access rules (who is allowed to read or write the store, which keys are read-only) and also specify the formatting of the data, especially when access must be allowed between items of different owners and creators. But I'd rather wait with this until it is set in stone that LL is not going to add outside access to the LSD storage to LSL. When that happens, the API for this access should be negotiated and added to the SL wiki. But nothing prevents anybody from inventing such an API for their very own applications.
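A minimal sketch of such a chat-based access API between two containers, using only existing LSL functions; the channel number and the "read <key>" / "write <key> <value>" command format are assumptions invented for this sketch:

```lsl
// Server side: exposes this linkset's LSD store over a chat API.
// CHANNEL and the command format are made up for this example.
integer CHANNEL = -7421;

default
{
    state_entry()
    {
        llListen(CHANNEL, "", NULL_KEY, "");
    }

    listen(integer chan, string name, key id, string msg)
    {
        // Example access rule: only talk to objects of the same owner.
        if (llGetOwnerKey(id) != llGetOwner()) return;

        list parts = llParseString2List(msg, [" "], []);
        string cmd = llList2String(parts, 0);
        string k   = llList2String(parts, 1);

        if (cmd == "read")
            llRegionSayTo(id, CHANNEL, llLinksetDataRead(k));
        else if (cmd == "write")
            llLinksetDataWrite(k,
                llDumpList2String(llList2List(parts, 2, -1), " "));
    }
}
```

The client container would send "read score" or "write score 42" on the same channel and listen for the reply.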
  2. That is doable: keep a free attachment slot, then, when some important data has been collected, attach a backup storage item and send the data to it, or, when restoring data, read it back via chat commands, and detach the item after saving or reading. With RLV this even works automatically; otherwise it needs a user action, which is a little inconvenient. I had hoped it would work automatically from the attachment itself, with any viewer and without an extra backup item. And without an experience, because that is not something every resident can afford. There is another Jira which was accepted and which maybe (probably not, but nice to dream of) was the reason to invent the LSD: a grid-wide KVP database to store data persistently. That could be another solution, if it is ever implemented.
  3. Happy new year everyone Actually I wanted to discuss this at the next SL Server meeting, but explaining the matter in a few sentences would be difficult, and it is easier to read here; and maybe I will not get around to asking there, so we can discuss it here instead. It happens repeatedly (at least once every few weeks) that the viewer crashes, attachments are rolled back, and some important data is lost. And it is not always data you can restore in a few seconds. I was hoping LSD would survive crash rollbacks, but apparently it was not designed for this. Maybe we can use LSD to establish such persistence anyway, so I wanted to ask a few questions before making a feature request, if one makes sense at all.

Question 1: Is it correct that attachment data is loaded from the asset servers, then passed from sim to sim, and stored back to the asset servers only when the attachment is detached or the owner logs out or crashes? Back in this thread it was claimed to be so, but not by a Linden, so I'd better ask to confirm. If it is not correct, the rest is probably irrelevant. Actually it is natural to do it this way: attachments change their data constantly, and it is wise to save the data when the attachments stop collecting it, i.e. on detach or on crash. However, while I assumed crash detection works reliably, attachments still seem to lose data to crashes sometimes, so there appear to be conditions that prevent saving attachments on crash which LL cannot fix. Hence we can try data loss prevention instead.

First attempt: when the avatar leaves a region, the region stores the attachment data to the asset servers if that data was changed while on the region. Not practicable: the relevant attachments change their state dozens of times in every region, and we would overload the asset servers every time we leave a region. The system cannot know which data is volatile and must not be saved, and which has to be persistent.
We need a way to tell the system which data must be kept persistent and which can be lost without problems. To do so, we could use the LSD storage.

Attempt 2: when an attachment leaves a region after its LSD store was changed, the server stores the content of the LSD storage back to the asset servers. Whatever scripts want to keep persistent, they simply put in the LSD storage, and when the attachment is attached, they load the actual data from there. As long as scripts keep the important data in the LSD storage, crashes will not wipe it out. Better, but still not fully practicable: some applications will use LSD storage to replace inter-script communication, and those attachments will load the asset servers needlessly, just as in attempt 1, because their LSD storage will be changing constantly in every region.

Hence attempt 3, 'persistent' keys:

integer llLinksetDataWritePersistent(string name, string value);

Creates or changes a persistent key. A key created this way is marked as persistent and can only be changed with this function; calling llLinksetDataWrite() for this key fails. A persistent key can be read normally with llLinksetDataRead(). Persistent keys are not protected. To have a key both persistent and protected, we can write the data protected and use a secondary persistent key that changes every time the protected key does (like a revision number for persistent-protected data; it could actually even be a boolean which flips). Now, when the attachment leaves a region after a persistent key was changed on it, the server stores the entire LSD storage, including this key, back to the asset servers. Changing non-persistent keys does not mark the LSD storage as 'has to be stored'. This way scripts can change the LSD storage without loading the asset servers, but once a persistent key has been changed, the first region departure writes the data to the asset servers.
A bad crash then loses only the data of the last region, not the progress of the entire day. Sometimes we can run into race conditions here: leaving a region, then changing the persistent key after arriving and before crashing. The region being left detects the crash as well and saves the LSD storage after the region being entered already did, thereby invalidating the LSD content. To prevent this, the LSD storage would have a revision number. When the attachment is rezzed, the revision number is reset to 0 on the asset server and on the rezzed linkset. Every time a persistent key changes, the revision number is increased. When a region server tries to store the LSD storage to the asset server, the data is only accepted when the revision number is newer than the one on the asset server. This avoids overwriting the stored LSD data with older content.

Now Question 2: Is it possible to write the LSD storage to the asset server separately from the rest of the attachment? Can the structure of the attachment asset be organized in such a way that the LSD storage can be stored separately, faster and more efficiently than saving the entire attachment? If this is not the case, then probably the whole attachment must be stored together with the LSD storage for persistence.

Question 3: Is the suggested automatic LSD backup feasible to tackle? If not, maybe it is preferable to have a function that triggers the backup manually, e.g. after important data has changed:

integer llLinksetDataStore();

The function would store the content of the LSD storage to the asset servers, or, when the LSD storage cannot be saved separately, the whole linkset. The function could have a per-attachment throttle and fail when called too frequently (say, more often than every 15 minutes) anywhere in the linkset.
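A sketch of how a script might use this proposal; llLinksetDataWritePersistent and llLinksetDataStore are the hypothetical functions suggested above, not part of LSL today:

```lsl
// Hypothetical usage of the proposed API. The two *Persistent/*Store
// functions do not exist in LSL; they are the suggestion of this post.
saveProgress(string progress)
{
    // Volatile working data: ordinary key, never forces an asset save.
    llLinksetDataWrite("work.scratch", progress);

    // Important data: persistent key, marks the store as
    // 'has to be stored', so the next region change preserves it.
    llLinksetDataWritePersistent("save.progress", progress);

    // Or, with the manual variant from Question 3, trigger the
    // backup explicitly (subject to a throttle):
    // llLinksetDataStore();
}
```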
  4. I tried to keep the design of the new functions close to the current ones, so that extending the system won't be too difficult. However, I realized that scripts might want to close access to the LSD store even to scripts whose owner has modify rights to their linkset. Then the script would need to set a pin (not my idea, actually, but I cannot find the post):

llLinksetDataSetPublicAccessPin(integer pin); // when pin = FALSE, updating the LSD from outside is only open when the linkset is modifiable; otherwise it requires the same pin.

Edit: Actually, the pin could also be a regular parameter of all public-access functions that update the LSD store from outside. When no pin is set on the LSD store, the access functions use FALSE as the pin and succeed when the linkset is modifiable. Otherwise they use a pin number and succeed when it is the correct one, regardless of whether the linkset is modifiable or not. This way no additional write, delete, or reset functions are necessary.

And one more thing, also not my idea: I would like to see the action parameter of the linkset_data event built of flags instead of being just a number. This way you can combine the actions WRITE, DELETE, RESET with additional parameters like PROTECTED (a protected slot of the LSD was updated), PUBLIC, CREATED (the key was created upon writing, not just updated), etc., and so you get a vast number of possible actions while still being able to check them all easily.
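A sketch of how such flag testing could look inside the (real) linkset_data event; the combinable LSD_* flag constants are invented here for illustration — today the event passes plain action numbers (LINKSETDATA_UPDATE etc.), not flags:

```lsl
// Hypothetical flag constants for the proposal above.
integer LSD_WRITE     = 0x01;
integer LSD_DELETE    = 0x02;
integer LSD_RESET     = 0x04;
integer LSD_PROTECTED = 0x10;
integer LSD_CREATED   = 0x20;

default
{
    linkset_data(integer action, string name, string value)
    {
        if (action & LSD_WRITE)
        {
            if (action & LSD_CREATED) llOwnerSay("new key: " + name);
            else                      llOwnerSay("updated key: " + name);

            if (action & LSD_PROTECTED) llOwnerSay("(a protected slot)");
        }
    }
}
```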
  5. This topic is still hot, so I thought maybe I'd add some more functions to the LSD family. First, one that is not irreplaceable, but it would help scripts to prepare the LSD store without resetting the whole thing. And if reducing the event load as in BUG-232792 is off the table, it will at least reduce the number of generated link messages:

integer llLinksetDataDeleteKeys(list keys, string password);

The function takes a list of keys to delete and deletes those it can. It will delete unprotected keys, protected keys and public keys (I come back to those later). When it could delete at least one key, it triggers only one linkset_data event, with the CSV of the deleted keys in the name parameter and the CSV of the keys it failed to delete in the value parameter. The return value could be either the number of deleted keys or, more generally, one of three constants meaning full success, partial success, or full failure. When an unprotected key is to be deleted, the password parameter is ignored. A protected key is deleted if the password matches the one the key is protected with (otherwise the deletion of this key fails). Public keys are also protected with passwords, so they are handled the same as protected keys.

Public keys. Here the fun begins for all the people waiting for access from outside the linkset:

integer llLinksetDataWritePublic(string name, string value, string password);
string llLinksetDataReadPublic(string name, string password);
integer llLinksetDataDeletePublic(string name, string password);

These functions create, update, read and delete public keys in the LSD store attached to the linkset the script runs in. I.e. 'from inside' the linkset these keys behave quite similarly to protected keys.
But these keys are also available for public access:

integer llLinksetDataWriteInto(key target, string name, string value, string password);
string llLinksetDataReadFrom(key target, string name, string password);
integer llLinksetDataDeleteFrom(key target, string name, string password);
integer llLinksetDataResetInto(key target);
list llLinksetDataListKeysFrom(key target, integer first, integer count);
list llLinksetDataFindKeysFrom(key target, string regex, integer first, integer count);
list llGetObjectDetails(key target, [OBJECT_DATA_AVAILABLE, OBJECT_KEYS_COUNT]);

These functions take the key of an object and operate on the LSD store attached to its root prim. The naming is not important, and using llGetObjectDetails is not copyrighted either However, the functions operate only on the public keys in that LSD store. In particular, resetting wipes out only the public keys and leaves the protected and unprotected keys untouched, so the scripts inside the target do not lose any data. Listing keys, and counting them, also ignores protected and unprotected keys. Only DATA_AVAILABLE returns the total free space, because the script cannot know how much of it will be used for coming public keys. Write access 'from outside' is only available when the script has modify permission on the target linkset (in the next post I introduce a pin which lifts this restriction). The functions trigger the appropriate linkset_data events, to be handled by the scripts running within the target linkset. The accessing script will not need any event (I think the return value is sufficient). Read access from outside would have the same restrictions the llGetObjectDetails function has. Finally, issuing llLinksetDataWriteInto creates a key in the LSD store when it was not there yet, and this key can then be read 'from inside' the linkset via llLinksetDataReadPublic, as if it had been created via llLinksetDataWritePublic.
This way scripts can provide data for free or secured access by other objects on the same region without using chat messages. We could use scripted and unscripted objects to create environments or experiences by providing the required data in the LSD stores of these objects, in ways the object creators were not even aware of. Even automatically, e.g. via scene rezzers.
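A sketch of the scene-rezzer idea using the proposed inter-object functions; llLinksetDataWriteInto and llLinksetDataReadFrom are the hypothetical functions from this post, and the key name and password are invented:

```lsl
// Hypothetical: a scene rezzer tagging a rezzed object with quest
// data via the proposed public-access functions (not in LSL today).
key    questItem;
string PASSWORD = "quest-secret";  // invented for this sketch

default
{
    object_rez(key id)
    {
        questItem = id;
        // Write a public key into the rezzed object's LSD store.
        llLinksetDataWriteInto(questItem, "quest.stage", "1", PASSWORD);
    }

    touch_start(integer n)
    {
        // Any script with the password could read it back from outside.
        llOwnerSay("stage: " +
            llLinksetDataReadFrom(questItem, "quest.stage", PASSWORD));
    }
}
```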
  6. I think the feature as it is will cover a wide range of applications. Now that it is released we will be able to write those applications, and then, when we get ideas for applications which the LSD store in this definition cannot (or can hardly) support, feature requests for enhancing it will follow. For example, I can imagine a function that would wipe out a whole bunch of keys protected with the same password. This way a plugin script leaving the build would not have to remove each key separately and overflow the event queue. This can be added after the release. Maybe I am wrong, but I think the project is in a phase where it would only accept last-minute changes without which the LSD store definition would break future applications. But maybe I just wish it live before Christmas
  7. If you do not want your neighbor to eat your yogurt from the commonly used fridge, do not let them live with you. But when they move out and leave their pudding behind, you want to be able to clean the fridge, even though there are yogurts and puddings with other people's names on them. Something like that Meaning: do not install scripts that call llLinksetDataReset despite being plugins, but the main script must be able to clean the database this way when plugin scripts have left protected keys behind after de-installation.
  8. I think most of what was asked here can be done with a script in the linkset used as an API: chat messages to access the store from outside, and link messages to access it from within the linkset (when the keys are protected). The script would respect the access level of the keys and allow or block the requested operation. Only when we want to avoid an extra script in every linkset using an LSD store, or the overhead of messages between scripts when the scripts could read and write the store directly, do we need inter-object access, access levels and pins. I think for now all applications using the LSD store will have a script in them anyway, and so the current definition is sufficient. But I have a strong feeling that requests about opening access from the viewer UI or from other objects will keep coming, and with them the access pins and levels, because this would save the extra scripts and messages and allow interesting use cases. And because of that, I want to throw something else, but related, into the round. When the LSD store is only under the control of one installed script, that script can define the names of the keys entirely as it wants. But when we want to give access to different scripts made by different creators, or to let the viewer access the store, then we need to invent a simple semantics. This way you can also link LSD stores and the keys will mostly not interfere. I suggest building key names from hierarchical prefixes connected by a dot. The first (main) prefix identifies the instance that (mainly) uses the key. For example, say I made a HUD and I'd like it to remember its position on screen for every HUD slot. The script is the 'core' script, so I call the keys "core.pos.ltop", "core.pos.center", and so on. A plugin script for this HUD that is meant to style my attachments would use the main prefix "style", and its keys would be e.g. "style.color.theme" or alike.
If I want to allow others to write plugin scripts for my HUD, I'd have to define a system that lets the creators name the keys so that they do not clash. The simplest and safest way might be to use their avatar key or login name as the main prefix. I would then define such a system in the user manual for my HUD, or ship it with the dev kit I release. However, I'd also suggest reserving the name of each TPV from the TPV list as a prefix to avoid, and the prefix "viewer" as a common prefix for use by all viewers. The keys with these main prefixes would be used by the viewer if viewers ever get direct or indirect access one day. If by then there were objects on the market using those prefixes, the viewer could damage those objects. As for viewer-access applications: for example, the viewer could store the last region and last position under the keys "viewer.last.region" and "viewer.last.position". Then not only Firestorm would be able to rez an object at the position it was picked up, and even when the object was picked up with another viewer or by someone else (as long as the keys remain stored). Also, every TPV could then store its own information in the LSD store of a linkset under its own main prefix.
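The prefix convention can be shown with the existing LSD functions; the key names and values here are of course just examples:

```lsl
// Hierarchical dotted key names with the existing LSD API.
// "core" is the main prefix of this (hypothetical) HUD core script.
default
{
    state_entry()
    {
        llLinksetDataWrite("core.pos.ltop",   "<0.1, 0.9, 0.0>");
        llLinksetDataWrite("core.pos.center", "<0.5, 0.5, 0.0>");

        // A plugin by another creator keeps to its own prefix:
        llLinksetDataWrite("style.color.theme", "dark");

        // List only this script's keys via the regex find function.
        list mine = llLinksetDataFindKeys("^core\\.", 0, -1);
        llOwnerSay("core keys: " + llDumpList2String(mine, ", "));
    }
}
```

Because each script filters on its own prefix, linking two objects with LSD stores mostly keeps the key sets from interfering.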
  9. I was actually expecting that when a server finds some data on an object that it cannot handle, this data remains attached to the object and is passed on to the next region, because that region can probably handle it. This way, if LL deploys e.g. a server that for some reason forgot how to handle materials, walking into this region while wearing a super expensive, super rare gacha suit full of materials will not lose the materials; you can safely visit the region. But if the server dropped the unknown properties, your suit would be done for. Losing unknown object properties puts extra pressure on LL not to deploy servers to Agni before every bug that might destroy worn assets is fixed. Such bugs should not be deployed to Agni anyway, but they can happen. Passing unknown object properties through, like the LSD store, would reduce the danger of these bugs.
  10. I've tested out my 'crash plan'; all points passed. I tried some of the forced errors (Disconnection, llError, Bad Memory, Driver Crash) and gave up testing them all because of the same result. Crashing as soon as possible after adding keys to the LSD store, or after a teleport, did not lose the content of the store or the new keys. However, I could not force a crash which would lose changes to the script itself, and thus cannot say whether the LSD store will be more persistent than the linkset and the scripts in it. But I have high hopes about this.
  11. Thank you Rider, having protected keys is cool, but I'd like a way to get access to protected keys to some degree without having a password. Because there are applications extensible via plugins, be it scripts installed into the application or (never seen, but possible) prims linked to the core prim. The core script and the plugin script can be by different creators. Now the core script may want to maintain some keys which plugin scripts can only read, unless the developer or the core script has made the write or deletion password known to the plugin script. I would suggest creating a protected key like this:

llLinksetDataCreateProtected(string name, string pass, integer level);

where level defines whether the key is read protected, write protected or delete protected. Read protection implies write protection (a read-protected key also needs the password to be changed), and write protection implies delete protection. Write-protected keys can be read without a password but not changed; delete-protected keys can be read and changed without a password but not deleted.

Edit: Better to separate the creation of the key from the actual writing, because this avoids changing the protection level every time you change the content. After the key is created, it can be changed via llLinksetDataWriteProtected as defined.
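A sketch of the proposed creation function in use; llLinksetDataCreateProtected and the LSD_PROTECT_* level constants are invented in this post, while llLinksetDataWriteProtected is the existing function:

```lsl
// Hypothetical protection levels for the proposal above.
integer LSD_PROTECT_DELETE = 0;  // free read/write, password to delete
integer LSD_PROTECT_WRITE  = 1;  // free read, password to change
integer LSD_PROTECT_READ   = 2;  // password needed even to read

default
{
    state_entry()
    {
        // Core script: plugins may read the version but not change it.
        llLinksetDataCreateProtected("core.version", "s3cret",
                                     LSD_PROTECT_WRITE);

        // Subsequent updates go through the existing protected write.
        llLinksetDataWriteProtected("core.version", "1.4", "s3cret");
    }
}
```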
  12. I have created a test script which lets me reset the LSD store, and store and read keys in it, and it remembers the number of keys it has stored. My test plan, which I'd like to see pass at every point, is this:

Reset the LSD store, add 2 keys, list them - the script must know there are 2 keys in the store, and the store must have them (pass)
Add 2 keys, detach, reattach the HUD - the script must know there are 4 keys in the store, and the store must have them (pass)
Add 2 keys, teleport out, teleport back - the script must know there are 6 keys in the store, and the store must have them (pass)
Add 2 keys, crash the viewer, relog - the script may think there are 6 keys in the store, but the store must have 8 keys
Add 2 more keys, crash the viewer after 1 minute, relog - the script may think there are 8 keys in the store, but the store must have 10 keys
Add 2 more keys, crash the viewer after teleporting out, teleport in, relog - the script may think there are 10 keys in the store, but the store must have 12 keys
Add 2 more keys, crash the viewer after 15 min, relog - the script may think there are 12 keys in the store, but the store must have all 14 keys

I do not know yet which viewer error I will use for crashing it, maybe I will try more than one, but for now the test has already failed on a TP into a region that does not support LSD and back (as reported in this JIRA, where you can also find the script and the test build). Why all the fuss? Sometimes my viewer crashes or my internet disconnects. When I have bad internet, it can happen within 5 minutes of online time, and a few of my friends have it quite often like this. When this happens, the servers fail to write the state of the linkset to the asset servers, resulting in a loss of data, and this data can be very important. And it is hard for the server to know which data on a particular linkset is important and which is not. For example, say I bought a dress and use a texture HUD to style it.
I crash after styling, the texture is lost, but after logging in I use the HUD again and all is back and proper. No big deal to lose this texture to a crash. Now imagine the creator of the dress invented a revolutionary system where the textures are applied in their store after I paid for them. I paid 1000L$, the dress got all styled, I tp home, crash, log in at an infohub, and all textures are gone. I may pay another 1000L$ now. With Linkset Data we've got an excellent tool to tell the server which part of the linkset is important and has to be saved as soon as possible, while the rest is of lower priority and need not survive a crash. The texture receiver in my dress stores the textures into the LSD and then checks in the on_rez() event whether the stored data is newer than what it knows; if so, it reads and reapplies it. Crashes are not that bad any more. At the last user meeting I was told that Linkset Data was meant to be persistent, but I am not sure whether my question was understood, or whether the answer meant that the LS Data was meant to survive script crashes and resets, but not the linkset's crashes. If this level of persistency was not planned originally, I intend to file a feature request to ask for it.
Edit: The version in the Jira does not remember the number of keys (which represents the linkset state not covered by the LSD system), so if interested, please use this script in a 3-prim HUD:

integer rows = 0;

resetRows()
{
    rows = 0;
    llLinksetDataReset();
    listRows();
}

listRows()
{
    llOwnerSay("Num Rows (in script): " + (string)rows);
    integer size = llLinksetDataCountKeys();
    list keys = llLinksetDataListKeys(0, -1);
    string text = "Num Keys (in store): " + (string)size;
    integer i;
    for (i = 0; i < size; i++)
    {
        string name = llList2String(keys, i);
        text += "\n\t" + name + " -> " + llLinksetDataRead(name);
    }
    llOwnerSay(text);
}

addRow()
{
    string name = (string)(llLinksetDataCountKeys() + 1);
    string value = llGetTimestamp();
    integer add = llLinksetDataWrite(name, value);
    rows = llLinksetDataCountKeys();
    llOwnerSay("add row (" + name + " -> " + value + "): " + (string)add);
}

default
{
    state_entry()
    {
        llSetLinkPrimitiveParams(1, [PRIM_TEXT, "add",   <1,1,1>, 1]);
        llSetLinkPrimitiveParams(2, [PRIM_TEXT, "list",  <1,1,1>, 1]);
        llSetLinkPrimitiveParams(3, [PRIM_TEXT, "reset", <1,1,1>, 1]);
    }

    on_rez(integer start_param)
    {
        listRows();
    }

    touch_start(integer num)
    {
        num = llDetectedLinkNumber(0);
        if (num == 3) resetRows();
        else if (num == 2) listRows();
        else addRow();
    }
}

Edit 2: Lucia answered in the Jira and named another region that supports the system. Teleports between these regions retained the LSD data, so point 3 is a pass. I will continue with the test tomorrow.
  13. I quite like this suggestion. Maybe the LSD would be attached not to the root link but to every prim in the linkset? 64kB would be the allotment per object (linkset), not per prim. To access the LSD, a script would first open it via the function:

integer llLinksetDataOpen(integer link, integer writemode, string password);

The function would allow opening the LSD of any prim in the linkset; using LINK_THIS would access the LSD of the same prim when that is really needed, and would allow linking prims without the scripts damaging the LSD of another prim after linking. Delinking a prim would take the attached LSD along with the prim as well. The return value (the LSD index) would address the LSD the script has opened (0 = FALSE means the script provided the wrong password and cannot use the data functions). When you want access to the LSD of multiple prims at the same time, e.g. to add values from different stores, this LSD index needs to be provided in the data-accessing functions. It would also be good to have a way to lock the LSD against changes by other scripts. For example, when a script has opened the LSD in write mode, no other script can do so, and is delayed until the LSD is closed via:

llLinksetDataClose(integer index);

As an alternative, instead of using the number returned by llLinksetDataOpen, the data functions and llLinksetDataClose could accept the link number as the reference to the LSD, since no link can have two LSDs attached.

Edit: I did not realize yesterday that letting scripts lock multiple LSD stores can lead to a circular lock, where two or more scripts try to lock the same 2 or more stores in different orders and lock each other out. To avoid this, scripts should only be able to lock stores in the order of the LSD-associated link numbers. Or a script trying to open a locked store would not be delayed but would receive an error response, so the script can lift its lock on the other store and try again after a random while.
Or we drop hard locking completely (its purpose being to ensure that two scripts do not change the same store at the same time, destroying the data) and instead use soft locking in the manner of the Experience Tools:

integer llLinksetDataUpdate(integer store, string name, string value, integer checked, string original_value);

This would not prevent two scripts from damaging the same LSD store, but they could detect whether a store is in use and, with some discipline on the scripter's end, refrain from overwriting.
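A sketch of the soft-locking (check-and-set) variant in use; llLinksetDataUpdate, its store index and its return semantics are all hypothetical here, modeled loosely on the Experience Tools' llUpdateKeyValue:

```lsl
// Hypothetical check-and-set update: write only succeeds if nobody
// changed the value since we read it. Not part of LSL today.
integer store;  // LSD index from the (also hypothetical) llLinksetDataOpen

bumpCounter()
{
    string old  = llLinksetDataRead("shared.counter");
    string next = (string)((integer)old + 1);

    // checked = TRUE: fail unless the stored value still equals 'old'.
    if (!llLinksetDataUpdate(store, "shared.counter", next, TRUE, old))
    {
        // Someone else wrote in between; back off and retry.
        llSleep(llFrand(1.0));
        bumpCounter();
    }
}
```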
  14. I could not test it yet (I was just lucky to find in BUG-232751 that this feature is live in Mauve on Aditi), but in that Jira it is said Is it important to sort the keys in these functions? Because when I am just interested in the list of the keys, without any need for order, the sorting is overhead. If the sorting is not used internally, maybe it is better to drop it, and when you need the keys sorted, you just wrap the output in llListSort?

Edit: OK, now that I know the parameters, the sorting is indeed needed. The functions return keys out of a certain range, and for that you need a defined order to make the output deterministic.
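For reference, sorting the key list yourself is trivial with the existing functions:

```lsl
// The functions return keys sorted already; if they did not, the
// caller could sort the returned list with llListSort.
default
{
    state_entry()
    {
        list keys = llLinksetDataListKeys(0, -1);  // start, count (-1 = all)
        keys = llListSort(keys, 1, TRUE);          // stride 1, ascending
        llOwnerSay(llDumpList2String(keys, ", "));
    }
}
```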
  15. Cathy Foil wrote: Those making and selling animations will want to clearly label their animations as "Rotations Only", "Rotations and Translations" or "Translations Only"... I think that is a good suggestion. But I think mesh head creators will also have to classify their meshes by what sort of expressions the mesh will accept. The customer buying the mesh head will rarely understand why a particular expression does not work with the mesh, but they can learn that some expressions will work and some will not. They need not even understand what a rotation and a translation are, but when they know this mesh fails on expressions using translation, then they know which expressions are worth looking at and which are not worth the time of trying the demos. Also, when the mesh head or full body uses the expression override (be it polysail's suggestion or LL implementing the feature request for it), this must be explained too, since that is a big plus: having predefined best-looking expressions (made by the mesh creator themselves) and probably simple-to-use new expressions made by third parties. However, there will be a load of new information falling on the head of the mesh wearer, and I think they have to be prepared for it. So I think there must be an explanatory post somewhere, targeting the customers, explaining that there is a sliders vs. expressions problem, that some expressions cannot work on some meshes, what an expression override is (if there will be a ready-to-use solution), and probably a simple-to-understand video about what translations and rotations are, how they affect the mesh, and why some meshes cannot accept translations in expressions. A bold request, sorry, but I myself am not good enough to make an easy-to-understand explanation while also understanding the topics deeply on the technical side. Cathy Foil wrote: I probably am not explaining myself well enough.
The program that is set up to use your web cam watches your face for expressions, and when it recognizes an expression, say when you smile, it recognizes that you are smiling and sends a signal to the viewer you are logged in with to play a pre-recorded smile animation. So it wouldn't matter how big your smile was in real life; it would simply play the animation named "Smile". This could be done by an expression overrider (assuming one is available). The web cam program would just report the recognized expressions, and they would trigger the overriding animations. It needs a service that sends signals from the client computer to the LL server running the agent. This could be done by LL, or probably by a web service which the expression HUD worn by the avatar offers and which the client's web cam program registers with.
  16. This thread is getting very long already, and every time I have time to read it, it has gained 10 new pages But I am glad how it is going, and that there are debates, since it shows the topic is so important, and I am glad the Bento team is still waiting with the release until all issues are resolved. The rest of this post may be pointless, as I am neither an avatar designer nor an animator (although I have the intention to make my very own full Bento mesh body myself some day), so you can ignore the following if it is pointless. However, when I read the debates about sliders vs. expressions, I get the feeling the goal is to have sliders and expressions to the same extent the default avatar shape allows. And I believe there is a mistake in this, at least in theory. There are exactly two default meshes, a male and a female mesh. They are changed by the shape sliders or animations. Some sliders morph the shape, but I think that is irrelevant, as we want to emulate morphs via bones. The point is, the default avatar mesh is one of only two meshes that everyone uses and changes via sliders and animations. Everyone has one of them. Meanwhile there will be legions of avatar designers, and each of them will create dozens of different mesh bodies. Now, if we want to change all the mesh bodies via sliders and animations/expressions in the same range the default avatar mesh allows, then I believe we get into a situation where you can take a mesh body (or head) from designer A, replace the skin, use the shape sliders, and end up with the look of the mesh body from designer B. When that happens, the mesh bodies become interchangeable. And then you can imagine what happens: people will take the cheaper bodies when they look like the more expensive ones. Prices will probably fall down to dollarbie level. Or DMCA reports will come snowing down. Neither of these I'd like to see. Expressions are good. Sliders are good also.
But I think shape sliders should change a mesh only slightly, just enough to make it unique while the mesh still stays recognizable. And when the range over which the sliders change the mesh is limited, perhaps expressions will still work well on it and not destroy the mesh? If that is possible, I think the goal should be limited to that. I am not sure whether the mesh format allows restricting the range of sliders and animations. If it cannot, I think there is still a simple way for mesh creators to restrict the sliders: put a notecard with the mesh listing the bounds of every relevant slider. There is such a notecard with the mesh body and the mesh head I wear, and I have no problem following the instructions. Animators will have to get the same information from the mesh designer in the same or some other way. Just some thoughts I got from reading the next 10 pages.
  17. Thank you (Bento team) very much for opening Bento on Agni; I expected it soon, but not this soon, and that is cool. And for updating the skeleton files. However, there is a problem, or better to say still is, because I have noticed it for a long time. When I import the .dae files (mesh + skeleton) into Blender, the skeleton comes in broken: some bones are minimized, and their head tips do not meet the tail tips of their child bones in the chain. You can see it by comparing the leg chains with the hind chains. I have the same problem when I import into Blender 2.75 and into Blender 2.77a. Is there a way to import it unbroken, or do I have to correct this manually? It does not seem too hard, although I am not sure I can do it without a mistake.
  18. Hello DS, a response to you, although it is really one for the whole human avatar subproject. That looks cool. Yes, I played with the ManuelLab addon a little; the customization is really good, with very many options, and it looks as if Manuel made a long study of body types and shapes, which makes me respect that work. And when I look at the hands and feet, those are a dream of hands, really. Well, the body needs more complexity in some places, and there are a number of places that are far too complex, like eyebrows, eyelashes, eyeballs or teeth; they use much too many vertices for SL. However, apparently Manuel will make a simpler version, or mesh designers could simplify the body themselves.

I really support the idea of a resident-driven standard for human and humanoid avatars, so that avatar and clothing designers can work together very much like the legacy avatar mesh designer (that is, LL) worked together with the legacy clothing designers (that is, the system-layer clothing). That would improve SL very much if it were possible. But I think the part about an open-source avatar based on ManuelLab really needs a separate thread, which I'd suggest Gaia opens herself and posts a link here. I am sure there would be interest in talking about the standard, implementations and everything else around it, but that topic only touches the Bento topic and is not part of it. I hope that does not sound rude; it was just an idea.

Something more around the idea of a resident-driven humanoid avatar standard: it need not remain a humanoid one only. I am not sure at the moment, but perhaps Bento could give birth to a few other standards, for horses, pets, animals, dragons and more. They could be handled in a similar way: a standardized open-source mesh that avatar designers adapt to their own needs, while accessories for the final avatars would be similarly usable on all avatars based on that standard one. Perhaps it is too early to talk about that yet.
  19. Not sure if it is too late to add bones, but I was just talking with a friend about haircuts and it came up that you could make changeable hair styles if there were bones for hair in the skeleton. I think 2 or 3 chains of 3 bones parented to mSkull would allow dynamic hair and changing hair styles. At least 2 chains for tails left and right, one or both of which you could use for a ponytail. They need to be different bones than those used for, for example, wings or the face, because you probably want to wear hair together with wings or a mesh head. Just an idea. I am not an avatar designer, so this suggestion needs to be verified by those who mesh avatars and/or hair.
  20. Thank you Code for liking the contribution. I added the "[BENTO]" prefix to the Jira name; I was not sure at the time of writing whether I should. The post addresses an issue that has existed since mesh avatars, not just since Bento, and it is also a scripting issue. But Bento has the ability to solve it, so I decided to add the prefix; thank you for reminding me of it. Edit: Managed to miss the meeting; I forgot about daylight saving time... Since no one brought up the topic today, LL has more time to reply.
  21. "Something that doesn't seem to have been covered in this thread is how Bento will affect existing content." This is true; I realized it too after reading this thread. The idea of polysail is also good and has a good chance of being implemented, I think, if LL does not accept your idea about extending the animation override. However, I dared to think about it, as I am also a scripter rather than a mesher or builder, and posted a feature request for it: Animation Override for built-in animations. I extended it to all built-in animations because not only facial expressions are affected, but possibly hand poses too, and, if there are any, animations that use morphs on parts other than the face or hands. I also proposed a second way to implement this, parallel to llSetAnimationOverride, because although we also want to override avatar animations, it may be something different internally. But I hope it will be as easy as copying the llSetAnimationOverride code and pasting it into new functions. Tomorrow is a Simulator User Group, which is a good place to discuss the idea and which way is better to go. If I manage to come and there is time, I'll bring it up for discussion. However, whichever way it goes, I'd suggest opening a new page in the wiki, similar to the one for the RLV protocol, for maintaining the communication protocol (if it will be the polysail solution), both scripts (the avatar-side script plus the expression API script for furniture etc.) and the selection of expressions/poses to override. I think such a protocol / standardized access to custom expressions is a necessity; otherwise a Bento avatar is not complete.
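For anyone unfamiliar with the existing function, here is a minimal sketch of how llSetAnimationOverride works today, and (as a comment) what the feature request would add. The animation names "MyStand" and "MyBentoSmile" are made-up placeholders, and the commented-out expression override does not exist in LSL; it only illustrates the proposal.

```lsl
// Minimal animation-override sketch. The override permission must be
// granted by the wearer before llSetAnimationOverride can be called.
default
{
    state_entry()
    {
        llRequestPermissions(llGetOwner(), PERMISSION_OVERRIDE_ANIMATIONS);
    }

    run_time_permissions(integer perm)
    {
        if (perm & PERMISSION_OVERRIDE_ANIMATIONS)
        {
            // Existing API: replace the default stand with an animation
            // named "MyStand" from this prim's inventory.
            llSetAnimationOverride("Standing", "MyStand");

            // The feature request would extend the same pattern to
            // built-in expressions, e.g. (hypothetical, not in LSL today):
            // llSetAnimationOverride("express_smile", "MyBentoSmile");
        }
    }
}
```

Today the function only accepts the locomotion states ("Standing", "Walking" and so on); the request is essentially to widen that list to the built-in expressions and hand poses.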
  22. Thank you for your answer, Nalates. Yes, that is correct, waiting 24 or even 48 hours (two days) is generally a good idea. Also it is important to log in to the beta grid with the SL viewer (and not with the SL Beta viewer, despite the name) first; the second login can be with any viewer. This worked for me when I once had the problem that the Aditi session was somehow not stored. Yes, as far as I was also told, the inventory on Aditi is managed by weaker hardware than the inventory on Agni. This was the main reason I started the thread.

Now, after thinking more, I believe the idea of packaging stuff was correct. As far as I know there are shared asset servers keeping the content of the assets, for example the content of notecards or the pictures. The inventory itself is then just an index telling who has which assets in their inventory: simply a large list of names and asset keys organized in folders, like the contents page of a book. During the inventory transfer from Agni to Aditi only this index is transferred, not the files themselves (those are still on the shared asset servers). And the smaller this index is, the less work the inventory database should have. Hence, when I package, say, all the furniture I have into a box, only this box is in my inventory and takes a single entry in my inventory database, and the inventory transfer brings only this entry to Aditi. When I rez the package box on the ground, the box comes from the shared asset server, and as long as I don't unpack it into my inventory (on Aditi), nothing goes to the inventory server.

This means the idea of rezzing all the package boxes and picking them up after the inventory transfer can work, but has little benefit. If I do it, the box of furniture does not go to my Aditi inventory. But I can also leave the box in my inventory, let the transfer routine bring it to Aditi, and as long as I do not unpack it, my Aditi inventory has just a single entry for the box, which probably does not hurt.

10,000 items per folder is a crazy number. I think I have just a few folders with one or two thousand items, for unorganized objects and landmarks, but I am working on them: when I open such a folder I get a huge scrolling problem. I suppose a folder with 10,000 items would put the viewer into a real winter sleep once I opened it.
  23. Good evening. I know there is a routine that exports all our inventory from Agni (main grid) to Aditi (beta grid); to initiate it, all you need to do is change your password. The question is: is there a way to organize my inventory so that the export routine has less work, or so that the inventory servers on Aditi can handle the inventory more easily?

An example to make the question more concrete: when I package all my stuff in boxes (you know, by rezzing a prim, putting my stuff in it, taking the box and removing the stuff from my inventory), then at first glance I have fewer items in my inventory, but the packaged stuff is part of the boxes that are in my inventory; it is still present and available somehow. Now when I change my password, the export routine brings the boxes to Aditi, and when I rez the boxes I should be able to take the exported stuff out of them. Hence the packaged stuff was still exported to Aditi.

Why am I asking this? There is plenty of stuff I need on the main grid but not on the beta grid: most clothes, landmarks and notecards, skins and shapes, furniture, vehicles etc. That is more than 20,000 items I do not need on Aditi, but they would put a load on the Aditi inventory servers when exported. My first idea was to package all the stuff into boxes and then change my password. But I guess the export routine will still bring all the stuff to Aditi. If so, then probably the only way to avoid the full export is to rez the boxes somewhere for 2 days and remove them from my inventory, then pick the boxes up again after the export. Is this correct, or is packaging the stuff into boxes still a good idea for the Aditi inventory servers even if the boxes are exported with their content, so I could leave the package boxes in my inventory before I change my password? Best regards, Jenna
  24. The avatar whose ping you want to check must have an RLV-enabled viewer and possibly an active relay. Unless you implement the check in a device and give them the device; then the device can communicate with the viewer directly, without a relay. But their viewer must understand the RLV commands. Your own viewer does not need to be RLV-capable. This is actually an interesting idea, to implement such a thing and test on myself how much the ping value calculated this way differs from the ping value shown in the statistics bar. If I get time I'll try that.
  25. In principle it seems possible to test how good an avatar's connection is, and how fast the avatar's machine is, by using an RLV-enabled viewer. RLV has a number of commands that await a response from the viewer, for example the "@version=channel" command. The protocol is this: a script in the avatar's object (e.g. a relay) opens a chat listen on a channel, e.g. 222, then issues the command llOwnerSay("@version=222"); This message is sent by the script directly to the viewer used by the avatar. The string "@version=222" is thus passed over the server-to-viewer connection and is delayed according to the connection speed. The viewer receives the message and understands it as a command to reply with the viewer's version on channel 222 (if the viewer supports RLV; if not, it just displays the message). The response is again sent over the viewer-to-server connection and arrives the faster the better the connection is. The script in the scripted device receives the message and calculates the "ping" value.

However, in most cases you have a device that is owned by you and not by the avatar. In that case there is also a step 0, where your device sends an RLV Relay message to the relay worn by the avatar; this message is the command to request the viewer's version, and it adds a delay from the script-to-script connection. From step 1 on, there are three delays until you receive the viewer's response: those in steps 2 and 4 are caused by the viewer-to-server connection and are as bad as the ping of the avatar's machine; the delay in step 3 is caused by the machine itself and shows how fast the machine is, but can also happen because of a virus scanner and similar loads at the moment. So with this technique you can estimate the avatar's ping, but not measure an exact value; in principle, though, it is possible.

PS. Two links about that RLV version-checking command: RLV Relay specification
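The no-relay variant (script in an attachment owned by the avatar being tested) can be sketched like this. The channel number and timeout are arbitrary choices; llGetTime has limited resolution and the measured round trip includes server-side scheduling, so treat the result as a rough estimate, as described above.

```lsl
// Rough RLV "ping" sketch: send @version=<channel> to the owner's viewer
// and time how long the reply takes to come back on that channel.
integer CHANNEL = 222;
float start;

default
{
    state_entry()
    {
        // RLV replies arrive as chat from the owner's avatar
        llListen(CHANNEL, "", llGetOwner(), "");
    }

    touch_start(integer n)
    {
        start = llGetTime();
        llOwnerSay("@version=" + (string)CHANNEL);
        llSetTimerEvent(10.0); // give up if the viewer is not RLV-enabled
    }

    listen(integer chan, string name, key id, string msg)
    {
        llSetTimerEvent(0.0);
        float rtt = llGetTime() - start;
        llOwnerSay("Viewer replied '" + msg + "' after "
            + (string)rtt + " s round trip");
    }

    timer()
    {
        llSetTimerEvent(0.0);
        llOwnerSay("No reply; the viewer probably has RLV disabled");
    }
}
```

For the relayed variant, step 0 would replace the llOwnerSay with a message on the relay channel, adding the script-to-script delay the post describes.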