Kadah Coba

Resident
  • Posts

    207
  • Joined

  • Last visited

Posts posted by Kadah Coba

  1. What's with all the human mesh bodies needing to be cursed?

    Skimming over the talk around this one, it kinda sounds like they might have confused the list of general complaints about other mesh bodies for a design spec. The only possible upside I'm seeing is that it's maybe easier to get access to the devkit, but this appears to be offset by restrictive and limiting license terms.

    Couldn't we have commissioned a few permissive open source bodies instead?

    • Like 7
    • Thanks 1
  2. 2 hours ago, Jaylinbridges said:

    Another issue if you add your registered agents to the region admin list, is that list is limited to 15 avatars.  A club I work at has 14 slots already used up for the staff and owners, to give them region ban rights. There is no more slots to add the registered agents that have been in the club as a band, for the last 15 years. So the bots will keep visiting the club, causing lag to an already crowded region, because the owner will not ban his own AI bots and has no room to add them to the region admin list.

    Why do the bots need EM rights versus just being allowed via the allowlist?

  3. Awesome, gonna enable this on my estate. And maybe the policy change to further increase the requirement to have bot accounts flagged properly actually has some effect.

    I hope something like this eventually comes to the parcel level. I have some mainland which has been seeing an exponential increase in roaming bots visiting with unknown purposes. It's nearly useless to ban them individually as repeat names are rarely seen. One of our rental parcels has seen over a dozen bots in the past 24 hours; this was typically 1-3 per day max just last week, going back over a year. No clue what's behind the recent massive uptick in bot traffic. And this is just what our parcels alone are seeing; the regions themselves might be getting even more bot traffic that we're not checking for. :s

    • Like 6
  4. On 3/11/2023 at 2:46 PM, animats said:

    If the new code is integrated into the old code, yes, they do have to release any new proprietary code. The existing viewers are LGPL licensed. This is unusual for an executable program; LGPL is usually for libraries that you link into a larger program. If LL creates a viewer based on pieces of the existing viewers, they're stuck with licensing it under LGPL and making the code available.

    Here's library code of mine licensed under LGPL 2.1. That's a little library for reading and writing LLSD (Linden Lab Serial Data). It supports the XML and binary representations of LLSD, but not the "notation" format, which isn't used much. Anyone can use that library freely. But if someone were to add support for "notation" format (which puppetry is using), they'd have to release the code for that. Preferably as a Github "pull request" to merge it into the main branch, which is how this is normally done today.

    This is all pretty much settled and noncontroversial today. The open source enforcement lawsuits were years ago now.

    Contributors grant LL a license which allows LL to change the license in the future. There were actually various issues in the distant past, which was part of the reason LL changed the license to LGPL from GPLv2. The current CLA for reference.

    The license they release the code under is what everybody else can use it under. Internally, their license is full ownership.

    However, LL is just unlikely to do a restrictive license change as they are not arseholes.

    • Like 1
  5. Getting super off topic but...

    The "mid sized" EV charge station I'm currently getting installed will use more electricity than our entire commercial/industrial block and the surrounding residential area. Most of our power comes from natural gas or coal, especially at the hours when it's going to be used the most. The possibly sad part is that this station is barely going to put a dent in the EV charging infra that's going to be needed.

    On the scale of "caring about the environment", it would probably go something like this for SoCal: pedestrian power, e-bike or similar, bus on a busy route, carpool, EV charged only on home solar, LNG, CNG, fuel cell, hybrid, EV charged from the grid, literally anything else, bus on a low-volume route (e.g. most buses on the majority of routes).

  6. 11 hours ago, Rick Daylight said:

    For me, hardware isn't the issue right now. I'm on the last generations that could run Windows 7, at least without major driver headaches. I could run 10; everything except my scanner and my  printer (which is an ancient but unkillable big office machine) will work with Windows 10, although most of my electronics stuff (MCU and FPGA programmers and test equipment for instance) won't work either. I don't really do that stuff much now though.

    Run Win7 in a virtual machine, then run whatever currently supported OS you want on the host. The number of hax and compromises you'll need to stay on 7 is already beyond those needed in the opposite situation, and this will only increase during this year. I'm saying this coming from a position of still having to support a bunch of Win7 machines at work while having a very strong dislike for 8/10/11.

    Also, what ancient and probably long EOL'd MCUs and FPGAs are you working with that don't have tool chains that work on anything newer than 7? I'm pretty sure every single MCU I've worked with over the past decade is supported by the current tools, though I could see that happening with some old FPGAs. xD

  7. 21 hours ago, Coffee Pancake said:

    Catznip will be making it 1:1 .. if it looks square, people will treat it as square, it should be square.

    If I were still involved with FS, I'd have pushed for just nudging it to 1:1. From what I remember, it's like 1 or so pixels off being 1:1, which makes the choice even more confusing.

    21 hours ago, Coffee Pancake said:

    I am really irked the tag cloud got to web profiles and then just died on the vine.. that would actually be useful in the new profiles.

    The entire web profile project was like that from what I saw. After [I forget his name, the one that was lead on web profiles] left LL, the whole project quickly came to a halt and what had been in beta testing was moved to release as-is. The only thing I ever saw using the tag cloud was that one email campaign trying to do match-making, seemingly on accounts with few friends and/or infrequent use and/or at random.

    21 hours ago, Coffee Pancake said:

    There should be a way to select a few languages from a list and set one as the preferred language, and this list should be accessible to LSL so scripted objects can present UI the user can understand.

    Freeform type isn't the way to go.

    Good point, being able to grab the preferred language in LSL could be handy. Though without some means to store a lot of translations effectively and easily in LSL, it's possible the feature wouldn't see much use.

    I'm only saying free text field if the only other option is a single selection from a predefined list.

    21 hours ago, Coffee Pancake said:

    If anything we needed more website links.

    Markdown maybe ?

    It's not-quite-markdown, something like [url Text for link]

    Some actual markdown support in text would be nice to have.

    • Like 1
  8. When I contributed the viewer profiles to LL, it included the interests and web tabs. Sometime in the 2-3 years they were working on it, those got removed. I do not know any details as to why.

    Similarly, I do not know why we ended up with yet another profile image aspect ratio. I had reused either v2's or v1's (I forget since that was years ago); web profile's would have also made sense, but they went for "not quite 1:1" for whatever reason. lol

     

    I can kinda understand dropping most of the fields in the interest tab as they were too limited.

    Web profiles kinda tried to have similar-ish functionality with its interest tag cloud thing, which, as far as I saw, was highly underutilized and possibly only fed the short-lived match-maker email campaign. Had that or something like it been carried over, given some use in search, and been redesigned to be used more, it could have been neat.

    One thing web profiles and the LL viewer profile did miss is the language field. I feel the boat left on that being a "should have been kept" back when it was not added to web profiles; which is possibly for the best as, back then, I would have fully expected them to have made it a single-choice drop-down instead of free text, or at least a multi-select.

    The loss of the web tab, specifically the website field, was rather annoying. A lot of residents made use of that for linking to their MP store or other useful links.

     

    Largely, a lot of my complaints are minimized by the increased character length of the bio fields to 64KB and LL adopting parsing of links in bios as officially supported. An easier method in-viewer to create/format links would be nice, as you otherwise have to locate and RTFM the markup documentation for that on your own (getting a WYSIWYG editor control would be a pretty big request).

    The one key thing the legacy interests tab did was to prompt many residents to share a certain set of common data points. So I'm a hard +1 to bring back the "languages" field and a soft/"sure why not" +1 to bring over something like web profile's interest tag cloud, under the stipulation that it works for search, since it is otherwise kinda useless.

    • Like 1
  9. 5 minutes ago, Love Zhaoying said:

    But..what if..your list is..a list of..strings?

    Dun dun dunnnnn!

    (Assuming you didn't need a list of numbers in the first place.)

    It's possibly about as bad as lists in that example, though lists have the upside that duplicate values are stored as references, though that is kind of an edge case to make happen. There might also be some referencing happening on retrieved values; I can't remember, it's been years since I tested this.

  10. 29 minutes ago, Love Zhaoying said:

    Great!

    Yeah, I think JSON in LSL is generally a couple orders of magnitude better than lists for most things - not exaggerating!

     

    Native binary-stored JSON within LSL would be nice; the stringized form we have now is very memory inefficient. It's fine for saving/storing, comms to other scripts, and for interfacing with external services, but will eat a lot of mem if used much within a script, compared to a list. xD

  11. 18 hours ago, Love Zhaoying said:

    Using JSON with LSD works out the same, at least for me..

    I found out last night that the JSON functions are stupidly fast. By an order of magnitude, this is the fastest way to convert lists to/from strings while keeping types. Will have some benchmarks on this "eventually".
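    A quick sketch of the round trip, in Python purely for illustration (llList2Json / llJson2List would be the LSL pair; the point is that one JSON string carries the types):

```python
import json

# Stand-in for llList2Json(JSON_ARRAY, l) / llJson2List in LSL:
# serializing a mixed-type list to one JSON string keeps each element's type.
data = [1, -2.5, "text", "01234567-89ab-cdef-0123-456789abcdef"]
encoded = json.dumps(data)     # a single string, safe to store or pass around
decoded = json.loads(encoded)  # ints, floats, and strings all survive
assert decoded == data
```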

    9 hours ago, Quistess Alpha said:

    Having support for lists natively in LSD, or other places outside of an individual script runtime, isn't currently possible for technical reasons. It would be nice, but we're gonna have to wait for a bunch of other changes first.

    • Thanks 1
  12. On 1/20/2023 at 11:09 PM, primerib1 said:

    Indeed. In script-space you can represent them as 4 integers then use strided list.

    But since we're in the LSD thread, there's an additional limitation: You have to convert that into string as LSD can only store strings.

    So all the "packing" discussion in this thread is just us discussing about the most efficient, reversible way of storing 4 integers in LSD.

    If you simply stringify the integers, you can end up with 47 characters per encoded UUID when you're unlucky... (4 * 11 chars [if your integer is *very negative*] + 3 separators)

    (Example of 'very negative' integer: -1564371818 = 11 characters)
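    The worst case quoted above checks out; a quick sketch (Python just for the arithmetic):

```python
# Four 32-bit ints stringified naively, comma-separated.
# str() of a large negative int like -1564371818 costs 11 characters.
worst_int = -1564371818
assert len(str(worst_int)) == 11

encoded = ",".join(str(worst_int) for _ in range(4))
assert len(encoded) == 4 * 11 + 3  # 47 characters, as stated above
```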

    I think what I meant was, once you get data down to being a list of ints, encoding generally becomes easier to manage. x3

    I really wish there was a way to transmit binary data between scripts, and that LSD could have supported lists (the TL;DR I got is that it would have required a much bigger project, i.e. it wouldn't have happened). All the stringization we have to do because code has to be divided up amongst multiple scripts is painful, and all of it eats up resources that could be going toward doing more with less. Larger scripts would be great, but there's a lot that needs to change on the backend before that could happen...

    If the data is going to be mostly large ints, IMO llIntegerToBase64 is still the best option for speed and code size; the fixed 6-char length makes indexing a lot simpler. I used this method for storing large amounts of binary data via media faces (something which could be directly ported to LSD).
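    A sketch of that fixed-width packing, in Python for illustration only (struct/base64 stand in for llIntegerToBase64 / llBase64ToInteger; the UUID value is arbitrary):

```python
import base64
import struct
import uuid

def int_to_b64_6(num: int) -> str:
    """Roughly llGetSubString(llIntegerToBase64(num), 0, 5):
    a big-endian 32-bit int becomes a fixed 6-char Base64 block."""
    return base64.b64encode(struct.pack(">i", num))[:6].decode()

def b64_6_to_int(block: str) -> int:
    """Roughly llBase64ToInteger: pad back to a full group and decode."""
    return struct.unpack(">i", base64.b64decode(block + "=="))[0]

# A UUID is just four 32-bit ints, so packing each one gives a
# fixed 24-char key, which keeps indexing into a big string trivial.
u = uuid.UUID("01234567-89ab-cdef-0123-456789abcdef")
ints = struct.unpack(">4i", u.bytes)
packed = "".join(int_to_b64_6(i) for i in ints)
assert len(packed) == 24

# Round trip: slice the fixed-width blocks back out and rebuild the UUID.
blocks = [packed[k:k + 6] for k in range(0, 24, 6)]
restored = uuid.UUID(bytes=struct.pack(">4i", *(b64_6_to_int(b) for b in blocks)))
assert restored == u
```

The fixed 6-char width is the whole appeal: any element can be sliced out by offset without scanning for separators.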

    Currently kicking around a few ideas for non-fixed length encoding with arbitrary lists. Code for fixed structures would be more efficient, but a pain to have to customize for each data exchange type.

  13. 20 hours ago, Anna Salyx said:

    There are some edge cases where it really does come in handy though.  I can store* ~1,771 unpacked keys into the data store. Using Mollymews llOrd/llChar packing method I can up that to ~2,383, (roughly ~600 more).  Everyday use scenarios, you're right and it probably wouldn't be worth the extra overhead to pack and unpack. But it's worth noting that it's use in some cases can be significant, especially when offloading to a DB prim is not feasible. 

    Added note: Watching the discussion is interesting.  I'm not sure I'd need any method more advanced than what I'm using here, but it's always good to have options.

     

    (* llLinksetDataWrite(key, "0"); )

    7 hours ago, Coffee Pancake said:

    I think the issue at this point isn't how many keys you can store, but what can be meaningfully done with more than a thousand of them within the processing and output conditions of LSL.

    I don't think there would ever be much call for needing that many keys at once in live memory. It's more about storing a lot of data for lookup when needed, or to step through for larger processes where size is of greater importance than raw speed.

    The most obvious common use-case I could see would be for caching modified poses in furniture. With limits on floating point values (either fixed point or split decimal), you should be able to store poses for quite a lot of residents.

    The use-case we've had is storing level/map data. While it may not have many UUIDs in the structure, a UUID is essentially just 4 integers with a minorly inconvenient periodic hyphenation.

    Those have been the two targets I've been working on supporting in a generic/template-able library.

     

    19 hours ago, Quistess Alpha said:

    I haven't tried it, but my back-of-the envelope calculation says you could fit about 3120 keys if they were compressed using base 95 (20 bytes per key + 1 for an unused value) or ~700 more than Molly's method. If you could get rid of the value, that'd only give you another ~100 keys.

    Theoretically true, but in order to leverage the existing Base64 functions in LSL in the most obvious way, you have to encode in 32-bit -> 6-character blocks (4 ints in a key = 24 characters). I might see if I can get the byteshifting to work it down to 20+1.5 next time I have some free time, though.

    ETA: Actually I think that's basically what you said in your last sentence after I parsed it a few times...
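    The capacity figures being traded here line up with a 64 KiB LSD store where each entry is charged len(key) + len(value) bytes; that accounting is my assumption, and the store size is the one current when this thread was written:

```python
# Assumed accounting: each LSD entry costs len(key) + len(value) bytes
# against a 64 KiB store; every key carries a 1-byte "0" placeholder value.
STORE_BYTES = 64 * 1024

def max_keys(key_len: int, value_len: int = 1) -> int:
    """How many fixed-width keys fit under the assumed accounting."""
    return STORE_BYTES // (key_len + value_len)

assert max_keys(36) == 1771  # plain 36-char UUID string keys (Anna's figure)
assert max_keys(20) == 3120  # base-95 packed, 20 bytes/key (Quistess' estimate)
assert max_keys(24) == 2621  # 6-chars-per-int Base64 packing, 24 chars/key
```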

    17 hours ago, primerib1 said:

    The key is to grab only 3 bytes at a time (6 hex digits), rather than 4 bytes. So the bytes meshes nicely with how Base64 works (3x8 bits => 4x6 bits). No need for bit twiddling 😉

     

    EDIT: Just because I'm bored out of my mind at the office, I made a code. Down to 22 chars, and URL-safe.

    I'll test this later if I remember, but it might only be slightly more execution time to do a higher base instead and maybe still be URL-safe too. Either Base85 or basE91 seems like a decent option, as they are somewhat standard, which will make them easier to support on a remote server.
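    For reference, the 22-char result above is exactly what URL-safe Base64 over the raw 16 bytes gives once padding is dropped. A Python sketch of the same arithmetic (primerib1's LSL version gets there by consuming 3 bytes / 6 hex digits per Base64 group; the UUID here is arbitrary):

```python
import base64
import uuid

def pack_uuid(u: uuid.UUID) -> str:
    """16 raw bytes -> ceil(16 * 8 / 6) = 22 URL-safe Base64 chars
    once the two '=' padding chars are dropped."""
    return base64.urlsafe_b64encode(u.bytes).rstrip(b"=").decode()

def unpack_uuid(packed: str) -> uuid.UUID:
    """Re-pad to a multiple of 4 chars and decode back to the UUID."""
    return uuid.UUID(bytes=base64.urlsafe_b64decode(packed + "=="))

u = uuid.UUID("01234567-89ab-cdef-0123-456789abcdef")
packed = pack_uuid(u)
assert len(packed) == 22
assert unpack_uuid(packed) == u
```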

     

    • Like 1
  14. On 1/16/2023 at 9:57 PM, primerib1 said:

    Well, we currently have to avoid the first 32 characters of Unicode (\u0000 to \u0019) due to BUG-233015, so we can't just simply use 7 bits for encoding.

    So yeah, Base95 seems good. But the implementation probably will get hairy, and likely a bit slow.

    Or use one of the higher-efficiency encodings in this list here, which already has implementations we might be able to adapt to LSL: https://en.wikipedia.org/wiki/Binary-to-text_encoding#Encoding_standards

    EDIT: OMG, I see you have actually implemented the Base95 algo! Ahaha, well done!

    EDIT 2: All being said and done, standard usage of LSD should not need a packed_uuid; it's only when you really need to eke out every last byte of LSD that you should consider using a packed_uuid. Explore other way of adding more LSD space, like having a separate prim (MUST be separate/unlinked so it has its own LSD store) and talk to that prim using standard messaging. Shape it like a server hard disk, make it phantom, and plug it into your main object. You can then identify the object with something like "/dev/sda", "/dev/sdb" and so on... 😁

    I have a few I've been working on; I was planning to get them out last month, but I've been sick so much this winter that everything is a mess.

    Ones I have so far:

    Base64
    Just a bunch of utility functions that wrap the built-in Base64 support. By far the fastest executing method.

    basE91
    The capital E is significant. This one is compatible with the existing basE91 implementations out there. Requires a lookup table due to the non-ordered alphabet used by this standard.

    Base91
    Base127 (which is bugged due to BUG-233015)
    Any base from 91 through 127 can be done with the same method by just swapping out a couple constants and a magic number. I'd have to go back through my workbook, but I think <Base91 wasn't worth the trade-offs over using the built-in Base64, and >127 is getting into "bad for UTF-8" territory. If staying within UTF-16, larger bases might make sense (I might have already figured that out, and which ones; I do need to check that workbook again...).

    Base32k/Base32768
    Not as space efficient as the others, but from what I remember of the benchmarking, it was faster due to being a power of 2, which makes being unable to use Base128 more sad.

    Base1T/Base1099511627776
    Kinda similar to the last one, with one of the ideas from the non-square bases. This was mostly an experiment to see if it was possible. It stores 40 bits of data per two Unicode chars, so the number of actual bytes used varies depending on whether it's UTF-8 or UTF-16, and per char depending on what codeblock it's in. Code was pretty fast due to how simple it is; the most complicated thing is that it uses 3 different blocks of Unicode.

    BaseN
    I've had a generic BaseN implementation for years. It takes a list of integers of one arbitrary base and converts it to another arbitrary base. Can get some good packing for binary data, but it's like O(n^2), lol. With small lists, it's not bad, but it becomes insane pretty quickly.
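    A generic digit-list rebase like that is typically repeated long division, which is where the roughly O(n^2) cost comes from. A minimal Python sketch of the technique (not the library's actual code):

```python
def convert_base(digits, src_base, dst_base):
    """Convert a big-endian digit list from src_base to dst_base by
    repeated long division: one full pass over the digits per output
    digit, hence quadratic in the digit count."""
    digits = list(digits)
    out = []
    while digits:
        remainder = 0
        quotient = []
        for d in digits:  # long division of the digit list by dst_base
            acc = remainder * src_base + d
            quotient.append(acc // dst_base)
            remainder = acc % dst_base
        while quotient and quotient[0] == 0:
            quotient.pop(0)  # drop leading zeros of the quotient
        out.append(remainder)
        digits = quotient
    return out[::-1] if out else [0]

# 0x100 as base-256 digits is [1, 0]; the same value in base 10 is [2, 5, 6].
assert convert_base([1, 0], 256, 10) == [2, 5, 6]
```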

    If speed is more important, stick with llBase64ToInteger(str) / llGetSubString(llIntegerToBase64(num),0, 5).

    I'll try to finish and publish this library with benchmarks SOON-ish.

    • Thanks 2
  15. On 1/13/2023 at 4:37 PM, Anna Salyx said:

    The key differences, in my own understanding, is that the script is a "living" process. When a script moves from sim to sim it must be registered on the new host as a running process.  It's byte code loaded (if needed) and it's current stack and event queue applied, and finally given place in the queue do it's thing. Where as a mesh/prim object by itself is just a static thing and all that needs to be done is to provide the receiving sim a packed (I assume) copy of it's current properties alongside it's asset ID. The client/viewer renders it and Bob's your uncle.  Yes there is going to be some overhead in the LSD store, but that store is not an action item that requires VM registration and time slices. 

    And as LZ pointed out (below), in the scheme of things it's not *that* much really.   If you're carrying 38 attachments each chock full to brim with LSD keys, maybe then it'll have an impact, but if all you've got is a small set of objects, 1 to 3 maybe, each with only a handful of keys moving around with you, that might not be even noticed. 

     

    if I'm wrong on my admittedly limited info assessment on how things move from region to region, I'll be happy corrected so I can learn :)

    From my understanding, the LSD store is a standard data type on the sim object. It probably adds only a fairly trivial amount of extra time to the (de)serialize of the object on region change.

    We could likely test this if the viewer already has a teleport-time metric we can access (or if we add one): just do a bunch of TPs with and without full LSD stores.

    • Thanks 1
  16. FYI,  we have an actual feature request filed for remote LSD access: https://jira.secondlife.com/browse/BUG-233201

     

    9 hours ago, Love Zhaoying said:

    Yep, so I won't be surprised if it is the REST of my code/logic.  A few too many hoops and calls and checks to / within its own functions, for instance.

    All the string processing that's required in LSL is really painful. I would not be surprised if your script is getting random sleeps during that. I was working on some data packing code a couple weeks back, and randomly some cycles would just take 10x longer. That behavior will only be worse the more loaded a region is, which is extra FUN. :p

    • Thanks 1
  17. 16 hours ago, primerib1 said:

    It's the "LKG" (Last Known Good) principle.

    The server has no idea why you disconnected uncleanly. Could be network issues, could be something you picked up messed things up on your server side. If the server commits your state, you might be trapped in a non-working state and that will be hairy.

    So, the server simply rolls your state to "LKG" state, the state where you actually logged in, rezzed, and exist in-world properly.

    Edit: An analogy is SQL "Transaction". When you logged in, every attachment issues "Begin Transaction". When you detach an item, the detached item issues "Commit Transaction". An orderly Quit results in all attachments issuing "Commit Transaction" but without changing the attachment state.

    That was my assumption. Had forgotten about that term for it. x3

    • Like 2
  18. On 1/2/2023 at 1:42 PM, Jenna Felton said:

    It happens repeatedly (at least once every few weeks) that the viewer crashes and attachments are rolled back and some important data is lost. And it is not always data you can restore in few seconds. I was hoping LSD will persist crash rollbacks but apparently it was not designed for this but maybe we can use LSD to establish such a persistence. So I wanted ask few questions before I make a reasonable feature request if any.
     
    Question 1: Is it correct that attachment data is load from asset servers and then passed from sim to sim and stored back to the asset servers only  when the attachment is detached or the owner logs out or crashed? Back in this thread it was claimed to be so, but not by a Linden, so I better ask to confirm. When it is not correct, the rest is maybe irrelevant.
     
    Actually it is naturally to do it this way: Attachments change their data permanently and it is wise to save the data when the attachments stop collecting it, i.e. on detach or on crash. However, I was thinking that detecting a crash works precise but because attachments seem to loose data to crashes sometimes, it seems that there are conditions preventing saving attachments on crash and LL can not fix them.  
     
    Hence we can try the data loss prevention. First attempt: When the avatar is leaving a region, the region stores the attachment data to the asset  servers when this data was changed while on the region. Not practicable because the relevant attachments change their state dozens of times on every region and we would overload the asset servers every time we leave a region.

    An attachment's state is only committed back to asset on detach/clean logoff (there is no difference; logoff = detach).

    What does "Attachments change their data permanently" mean? An argument could be made for any single object attribute being permanent or temporary based entirely on how it's being used in a particular application.

    I do not know the reasons why a timeout (i.e. crash or forced logout) does not result in a detach. It's possible this is to avoid the unwanted results of a commit at an unexpected point, e.g. it's safer to assume that the last intentional detach/logout is going to be consistent with what the user wants/expects.

    Committing back to asset on region change would increase the time it takes to do region changes. Detach everything then reattach it, now add that time to every TP and region cross, and ask if you would want that additional delay.

     

    If you are crashing/disconnecting often enough that this is an actual issue, you may want to address what is causing that to happen. Generally, the loss of changes on attachments on DC hasn't been that big of a problem; it's more of a possible annoyance when it happens at an inconvenient time, like forgetting/not knowing to do a detach cycle after making a lot of changes before the rest of the session does stuff that has a higher risk of ending in a timeout/DC.

    • Like 2