bunboxmomo

Resident
  • Posts: 101
  • Days Won: 1

Everything posted by bunboxmomo

  1. This is an inaccurate representation of what is being asked, and also a misrepresentation of how flags work in SL.

Region and Parcel Flags

What is being asked is the following:

  • Region flags: Region flags need to exist. An estate owner MUST be able to make decisions at region level.
  • Parcel flags: Parcel flags allow the owner of a parcel to decide, *for their own parcel*, whether they wish to deny access to bots or not.

In Second Life we already have examples of this. In Region settings we can allow or deny permissions that are *also* available in Parcel settings. When a contradiction happens between Region and Parcel settings, the highest-level *negative* is the one that is applied. For example:

  • Region: Allow Fly / Parcel: Deny Fly: flying is disabled in the parcel.
  • Region: Deny Fly / Parcel: Allow Fly: flying is disabled in the parcel.
  • Region: Allow Fly / Parcel: Allow Fly: flying is allowed in the parcel.
  • Region: Deny Fly / Parcel: Deny Fly: flying is disabled in the parcel.

Negative permissions, whether at region level or parcel level, *always* override positive permissions at parcel level. To relieve any worry that this is somehow unprecedented: this is how Second Life works and has always worked, and a hypothetical parcel flag should behave the same way, and if implemented likely would, just like every other flag we already have. (The current region-only approach is the outlier that breaks step with how every other flag in SL works, and it prevents individual users from deciding they don't want bots on their parcels when a region owner is fine with it. A user should have that choice, just as region owners should have the choice for their regions.)

The result is that a region owner can choose to deny bots on the entire region regardless of parcel owner wishes (this is intended and good), but where a region owner chooses to allow bots on their region, parcel owners could still choose to disallow them on their own parcel. That is currently not possible, and it is this addition that people asking for parcel flags want. They are not asking for it as a replacement for region flags (although I'm sure some here or there are, and I would side against that). A deny_bots at region level would override all settings at parcel level, but on a region that allows bots, parcel owners could have their own individual deny_bots flag for more granular control.

Changes I would like to see to LSL and viewers regarding visibility of data about agents in invisible parcels

As for your second claim, that access to a parcel is not required, you are absolutely correct, and that is why parcel flags should also come with changes to how "invisible" parcels are handled. Currently, if you are on a parcel that is set to be invisible to people outside it, all that's really happening is that you're being de-rendered. Viewer floaters still show you in the parcel, you are still shown on the map, and LSL can still poll everything it could want to (see the small sketch below). You don't even need a bot to do this. Personally, speaking as a scripter: when a parcel is set to invisible, if I am not an EM, or my script is not in an object deeded to the relevant group on group-owned land, then neither I in my viewer nor my script should be able to see information about agents in that invisible parcel while I or the script is outside of it.
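A minimal sketch of what I mean by that current behaviour, using only documented LSL calls: from anywhere scripts can run in a region, you can enumerate every agent and their position, and no parcel privacy setting changes the result.

// Illustrative only: list every agent in the region and where they are.
default
{
    touch_start(integer total_number)
    {
        list agents = llGetAgentList(AGENT_LIST_REGION, []);
        integer i;
        for (i = 0; i < llGetListLength(agents); ++i)
        {
            key agent = llList2Key(agents, i);
            vector where = llList2Vector(llGetObjectDetails(agent, [OBJECT_POS]), 0);
            llOwnerSay(llKey2Name(agent) + " is at " + (string)where);
        }
    }
}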
That restriction could potentially also be extended to objects as a consideration, not just agents, but I think that may be disruptive in a destructive way to almost two decades of existing sim programmatic infrastructure. If it were possible with a transition period, I'd be a fan of seeing it apply to objects as well. For now, though, at the very least I really shouldn't be able to see information about agents in invisible parcels, in my viewer or via LSL, when I am not an EM or my script is not deeded to the group.

This is a thread about changes in policy, in fact a change that is highly significant and disruptive (that doesn't mean it's bad, it just means it has a huge footprint in impact). As a result, it's not really relevant what the limitations are right now, because this is a discussion about changing them, in light of changes that have already happened. The people asking for parcel flags are fully aware of the issues you raise in your second claim, and are asking for those to be changed too to make this viable, rather than what currently feels like a very rushed and blunt approach to placating the fears some residents have about bots.

Just so it's clear and the last sentence doesn't get taken out of context: I'm 100% in favour of the deny_bots flag, even as someone who doesn't have an issue with scripted agents as a concept. But I think it didn't go far enough, and its implementation is clumsy and half-done in a way that doesn't really fit with the rest of the flags we have for land management in Second Life. It is a reasonable expectation that users should be able to decide they don't want to allow bots on their land, even if a region owner has said they're fine with it, and should be able to apply parcel-level denial, just like we can with everything else in region/parcel interaction. And yes, of course, region denial always takes priority over parcel allow.

A personal note

On a more personal note, I'm disappointed in the Lab and how they've handled this. What feels like a rushed half-implementation leaves parcels out in the cold and stokes conflict between region owners and parcel owners, and the announcement, while admittedly neutral, was written in a way that failed to take into account the underlying attitudes, tensions and assumptions of malice. The resulting propagation of dehumanising attitudes risked being (and ultimately was) emboldened and validated by what felt like the Lab "taking a side". Obviously the Lab is not taking a side, but a lot more awareness could have gone into how the announcement was written to pre-empt that. I'm sure someone will bring up "yeah, but BonnieBots in previous threads...", and I'd say that while they certainly did not help their case, and while they should have been better about this, they are still residents. The Lab is the Lab, and there's a different degree of expectation there. I'm not going to make a big ongoing deal of this; this will be the only post I make on this aspect, but I do feel I need to express it.
I've been quite put off by some of the things expressed in this thread. Speaking as a counsellor, I feel the feature announcement was delivered in a way that failed to take into account the underlying, years-old tension surrounding bots in SL, which has resulted in the slow accumulation of distorted characterisations and stereotypes held by a number of residents in both directions (some bot developers' views of residents opposed to bots, and some of those residents' views of bot developers). These characterisations have sat under the surface and are easily brought up when *perceived* as validated by the Lab in either direction, and I can't help but feel that either the rushed writing of the announcement, or an over-cautious sticking to imagined neutrality through "not trying to say anything" rather than acknowledging either group's concerns, has ultimately inflamed this.

So if I were to hypothesise how the announcement could have been better worded, in my opinion something like:

<//// NOT AFFILIATED WITH THE LAB. ////>
<//// I'M MERELY WRITING A HYPOTHETICAL FROM THEIR POV IF I WERE THE ONE WRITING IT ////>

"Scripted Agents (otherwise known as "bots") have been a controversial topic for the residents of Second Life and a topic of discussion on our forums for over a decade. Over the years, Scripted Agents have allowed a number of our talented residents to find solutions to challenges, enabling the creation of products and services that directly, and often invisibly, contribute to the grid and Second Life as a whole. This is why we have always encouraged the use of our scripted agent status by bot developers, recognising the work that goes into these less conventional solutions while still making a distinction between residents and scripted agents."

"However, Scripted Agents function as a black box, separate from our own code base, while still having access to the same level of information (if not more) than is typically available to a resident through a viewer. While this is also true of a number of LSL functions that return information about a region and its contents, in the case of LSL we can make adjustments whenever we feel we need to, in the interest of a better experience for residents of Second Life. With Scripted Agents, since the code base running these bots is not our own and is not built on LSL, we are unable to apply the same steering to running code as we can with our own codebase or scripts developed using it. Instead we have maintained a list of best practices and guidance for bot developers, and imposed limits in SL where we've felt we needed to, such as the daily cap on sending IMs and Notices in response to advertisement bots several years ago."

"While we are happy with how this has worked out, it ultimately means that the nature of Scripted Agents and their role in Second Life is one fundamentally built on good-faith trust. We at the Lab are comfortable with that, but it also means residents of Second Life are in the position of having to trust bot developers to act in their interests, without the same degree of trust they feel they have with us at the Lab. We at the Lab are beholden to our residents in a way that establishes the basis for the relationship between the Lab and our residents.
As bot developers are residents themselves first and foremost, this basis of trust is instead very one-directional, in a way that makes a percentage of residents uncomfortable or even concerned about the presence of bots that come and go across their land, or that they encounter in their travels in Second Life."

"The nature of Scripted Agents in Second Life is subject to a long-standing debate among our residents, and one unlikely to end soon. But one position we firmly believe is that residents deserve the ability to decide for themselves on what terms, and where, they are comfortable with Scripted Agents in their day-to-day experience of Second Life, and the ability to make this decision should be as easy and accessible as possible, without having to explain or justify their choice to have it respected."

"While some bot developers offer opt-out capabilities, this still asks a resident to contact a bot developer and trust them with their information to request an opt-out, if the bot developer even offers one. We feel this is far from ideal. In the interest of empowering our residents to easily make these decisions for themselves, without having to trust a third party to respect that wish, we are pleased to announce the new addition of...."

<the rest of the announcement as it was released by the lab>

<//// NOT AFFILIATED WITH THE LAB. ////>
<//// I'M MERELY WRITING A HYPOTHETICAL FROM THEIR POV IF I WERE THE ONE WRITING IT ////>

Personally, I feel something along those lines, or anything else that better prefaced the announcement itself, would have gone a long way towards preventing the kind of characterisation of people who make things that I've seen ignited in this thread. It's left me, as a developer myself, feeling pretty uncomfortable about how safe SL is as an environment to be a developer in when this kind of characterisation can descend on a person. And as a counsellor, I feel uncomfortable seeing the way these characterisations and attributions of ill intent have only intensified over the course of the thread. Of course there will always be a few who hold such views, and of course BonnieBots played their own role in fomenting this in the most recent bot controversy, from what I understand from other residents, but I still feel the Lab's announcement, and its lack of due context and consideration prefacing it, has in turn contributed to this and leaves me a bit disappointed.

PHEW. OK. Bit of a long post, but I wanted to get my thoughts out in one go so I can set them aside, because I've noticed my difficulty feeling comfortable engaging in this thread after seeing this happen last night, and I wanted to get my views on that out, along with my own thoughts on the nature of region and parcel flags. (Morning everyone!)
  2. GridSurvey has been running for over a decade with no monetisation. This is true for pretty much all the bot networks.
  3. That's basically my point: at its core, people on either side of this are genuinely doing what they think is best for SL. It's not a matter of side vs side; we're all residents who care about SL and want to make it better, but we all have different ideas of what that means, and naturally sometimes these clash. You and I have different opinions, Love, and you don't, for example, displace onto me the idea that I'm some moustache-twirling evil scripter. I just don't think it's fair to vilify bot operators when there is genuine passion in a lot of them.
  4. I certainly am going to be taking Soft's advice about a disclaimer on phone-home or data-transmit functions. I think that's actually a good policy. None of my scripts do that at production stage; my pre-release public alphas and public betas of stuff do, but I already add a disclaimer for those. It was the headers on llHTTPRequest that were the "uh-oh" for me.
  5. Sparkle, this is a thread announcing a significant change to decades-old policy and operation of Scripted Agents in Second Life, and I'm a scripter. Yes, of course I came out of nowhere; this is the first time I've been interested enough in the topic to actually read the forums. Don't assume ill intent like that, come on.
  6. That might have actually been us, and I do still agree. But I am still saying that while I understand people's fears about it, it's not fair to displace that onto other people. I'm not saying that as a put-down or a dismissal of those concerns; I'm saying it's important to recognise they are one's own concerns, instead of vilifying people over them. Essentially, it isn't fair to assume ill intent and prescribe malice onto a third party to justify a concern. The concern doesn't need to be justified; it is already valid without having to do that to a person.
  7. "Everyone I don't like is BonnieBots" Oooooooooookay. Inquisitor Sparkle out to expose me for techno-heresy of not having a problem with bot networks. Spicy.
  8. Under GDPR, encrypted personal data is still personal data, so this isn't necessarily an example of a workable solution. GDPR cares more about whether data has been anonymised, as far as I'm aware.
  9. I don't think it's fair to blame others for people's anxious concerns. "This is new and scary and I don't like it", while totally valid, is the responsibility of the person with that response, not the person who triggered it. When I use the term anxious, I don't mean it as a put-down, just in the proper sense of a triggered and sustained alarm response. From what I understand, they made efforts to reach out, explain this and resolve concerns, but past a certain point an assumption of threat acts as a filter on anything they try to say. You are perfectly entitled to your opinions of them and their operation, but it's not fair to blame them for your own fears, discomfort and worries when those were fuelled by speculation. That doesn't make your point invalid; just don't displace that onto others.
  10. Worth mentioning that most visitor boards also email daily visit lists (I know the SHX-VWB does for example).
  11. Actually, based on responses in this thread from people who maintain BonnieBots, they did talk to LL about it and did have the go-ahead, from what I understand. It was apparently LL's idea to even talk about the opt-out here in this forum, hoping it would help win residents over and allay those fears, but it instead had the opposite effect. It's OK to not like BonnieBots, but let's not engage in revisionism to assume our position is an objective truth. Subjective opinions about whether you like it or not can be, and are, equally valid without having to do that.
  12. Well, here's another. Second Life has a discoverability problem. The way its search algorithm is currently programmed, popular regions have an easier time staying popular than they do losing popularity; likewise, unpopular, new or unknown regions have a harder time gaining popularity. With the traffic system as it is, each successive point of traffic is easier to get than the point before, since alongside other things it improves search ranking, as well as people going "oh, it has high traffic, it must be good!". As a result it is much harder to innovate, create new experiences and find a community for them organically, versus simply hanging out at the same old place.

Well, one bot network came up with a solution to this, and many in this thread won't like to hear it, but its name was BonnieBots. BonnieBots maintains an API endpoint of popular regions, updated every 60 minutes. What makes this really good is that it does not simply look at avatar count, nor at traffic; rather it uses an algorithm similar to how reddit handles voting a post up and then decaying that vote, but it does so with popularity in the form of people being there *and new people coming and going* as factors (see the rough sketch after this post for the general shape I mean). As a result, you reliably see regions that have a high number of people coming to them, rather than simply a high number of people *in* them. This is a much better way to find popular places, because it shows you places that people *currently* want to go to, rather than where they already are.

This algorithm is brilliant for discoverability and massively offsets the advantage the LL traffic algorithm gives to established communities, which in turn gives new experiences and new places a much better chance at life. That benefits SL as a whole, making a more diverse and rich experience across all of its regions by massively lowering the barrier to accessibility and viability for public land experiences. It's important to note that this depends on widespread adoption of BonnieBots as a way to find cool places to meet people, which the "botphobia" has naturally hamstrung. There's another example for you of what bot networks enable us to do.
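To make the decay idea concrete, here is a toy sketch of the general shape I mean: an arrival-weighted score that decays over time. The names, half-life and numbers are entirely my own guesses for illustration, not anything BonnieBots has published.

// Toy example only: a decaying, arrival-weighted popularity score.
float gScore = 0.0;        // running score for one region
float HALF_LIFE = 3600.0;  // assumed: an arrival's weight halves every hour

registerArrival(float secondsSinceLastUpdate)
{
    // decay what we had, then add one point for the new arrival
    gScore = gScore * llPow(0.5, secondsSinceLastUpdate / HALF_LIFE) + 1.0;
}

default
{
    state_entry()
    {
        // three fresh arrivals ten minutes apart score higher than a crowd
        // that has been parked in place for hours with nobody new arriving
        registerArrival(600.0);
        registerArrival(600.0);
        registerArrival(600.0);
        llOwnerSay("Score: " + (string)gScore);
    }
}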
  13. Well, to pluck an example out of the ether, GridSurvey provides information about the grid that a lot of random sim teleporters have relied on since as far back as 2012 or so. It allowed us, for example, to make decisions based on various region statistics about whether a destination was likely to be a dead teleport and, if so, "reroll". It also allowed us to retrieve UUIDs for textures relating to maps, which I'm pretty sure a lot of people have used. But that is just me plucking one out of the ether; there's far, far more. It's important to remember this is an API; it's not about complete solutions. These APIs are RESTful services that can be called via outgoing llHTTPRequest and the data used in LSL (a minimal sketch of the pattern is below). There's a lot of things you use that probably, on some level, use data from an external API partly populated by data from bots; you just don't see it as the end user, because it's all backend script functionality.
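For non-scripters, this is roughly what "call an external API from LSL and use the data" looks like. The URL below is a placeholder of my own, not GridSurvey's real endpoint, and the response handling is deliberately simplified.

// Sketch: ask an external web service about a region, then act on the answer.
key gReq;

default
{
    touch_start(integer total_number)
    {
        gReq = llHTTPRequest("https://example.com/simquery?region=Ahern",
                             [HTTP_METHOD, "GET"], "");
    }

    http_response(key request_id, integer status, list metadata, string body)
    {
        if (request_id != gReq) return;
        if (status == 200)
        {
            // parse the body here and decide, e.g., whether to reroll a destination
            llOwnerSay("Region data: " + body);
        }
        else
        {
            llOwnerSay("Lookup failed (" + (string)status + "), rerolling.");
        }
    }
}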
  14. Exactly. These are people too, kind ones at that for the most part, who run free services at their own cost that enable us to do things in SL we otherwise could not. These aren't villains to be vilified or reviled; they are residents who genuinely care about SL and love SL.
  15. People who run bots are residents just like yourself, and create things with the best of intentions just like anyone else. With regards to something like GridSurvey, many of the things you use every day in SL will be making use of their ongoing survey of the grid and its regions. I'm not saying worship bot owners, but the vilification is a bit much.
  16. Well, if your primary concern is seeing bots, then yes, deny_bots will mean you see fewer of them, but that will come with a cost. This is a list of all the "known" roaming bot networks as of February 2023. As you can see, there are far more on here than just BonnieBots, which I believe were the bots that created resident concern, due to fears about how they *may* have operated rather than concerns about how they *did* operate. That aside, BB isn't the point here. The point is there are many other networks that are now unable to provide reliable information for their purpose. Perhaps one of the most interesting ones here, in my opinion, is SurveyTeam, a bot network that operates on behalf of GridSurvey, a long-time pillar of the community in the valuable information it provides about SL, which has generously provided an API that countless scripts have made use of over the past decade and a half, if not more. That dataset can no longer be relied upon to production and mission-critical standards of code, due to the patchiness of the data. That means algorithms need to be changed to account for it, if they can be at all. This is true not just for that bot network, but for multiple ones.

I'm not saying "don't add deny_bots"; personally, I think it's a good thing, but it should be parcel level with region override on negative parcel permissions, and the invisible setting on parcels should also hide residents and objects from the viewer and from scripts, in my opinion. Personally, I think it didn't go far enough, but it could have been a bit less of a blunt-hammer approach to the situation. My support of that position is not informed by a fear of bots or a spookiness about them. I'm a scripter myself; I'm not the least bit concerned when I see a bot come and go, because I know they really don't do anything spooky. I support the policy change because I believe users have a right to decide for themselves how to manage their own land without having to justify their choices about it. Code is unfortunately a black box to many users, and by the nature of our own human psychology we conjure imagined threats and fears about things when we don't know exactly what they do, and bots become the boogeyman under the bed. So while I do think that for a lot of users this "botphobia", as I've seen it described by others in this thread, is informed by a misunderstanding of what these bots even do that has blown out of control, I still think people should have the right to decide for themselves on their own land.

However, despite being in support of that, I'm also not going to pretend this won't have a significant impact on the grid; that's the nature of what I'm getting at here. This is a major policy change that will fundamentally alter much about the grid as we know it, even if that is not immediately obvious yet, and it will do so in ways far beyond how many bots you see every day. While I'm sure LL is fully aware of this and will have considered it, on the resident side of things, where many of us are people who just come to hang out and may not know these things in depth, we should be ready for that and keep it in consideration, rather than coming at this from a hopeful utopia where the bots are gone but everything else stays the same. Otherwise we run the risk of cheering on potentially devastating changes that can take a while to re-adjust from, assuming project maintainers are even still active in SL.
The sudden workload may even push a number of people to decide SL isn't worth it for them, and I'm not just talking about bot operators.

With regards to degradation of service, the first part of what I said was about the deny_bots flag. It means that datasets are now unreliable, so code that accesses this data via APIs, not just BonnieBots' but any of them, can no longer be safely relied on, which means algorithms have to be changed. This takes time, and there is no guarantee equal functionality will be viable. This has to be an accepted cost of such a change, and I hope the people who were pushing for it were aware of that.

In terms of the llHTTPRequest aspect, again, Soft's statement is a good one and it is good to hear, however it still does not change the fact that, as it currently stands, the nature of llHTTPRequest, and any handling of UUID36 for that matter (one of the most important data types in LSL), is in the realm of uncertainty for the foreseeable future, pending future statements from LL. While, as Soft said, we will not see action taken or enforcement against existing scripts and services, this does mean that developers, myself included, will be a lot more hesitant for now to continue development of anything using these functions (which, again, are some of the most important ones in LSL) until this uncertainty is cleared. In addition, projects that might have been started likely now never will be, given the unknown future status of the function and of the use of UUIDs. This will have an impact on the grid, even with enforcement by LL pending, due to a slowdown in development out of caution, to avoid wasted work-hours on algorithms we'd have to redo.

Again, I am in favour of these changes (a bit spicy on some aspects of llHTTPRequest though), but I wouldn't say it's a good idea to pretend to ourselves that the only impact this will have is on how many bots we see popping in and out over the course of a day. We should be prepared to accept the consequences of the policy change that has been asked for, by people who I can only hope knew what it would entail. Just my two cents on the whole "yeah, this is great!"
  17. I don't think that's a fair assessment. There are many services residents take for granted that are now impacted by this change. While residents are unlikely to see the direct cause, they are likely to see degradation of service over the coming weeks, as data that many scripters rely on for various widely used products in SL will now be increasingly unreliable for its purpose. This is an aspect residents will notice, and even if you feel this is fine, I do think it's important to consider it as an impact of the change. The llHTTPRequest aspect, depending on how it plays out, could have a much larger impact in the time ahead, but the Lindens are going to discuss that, as Soft stated in our conversation above. When it comes to serious topics like this, it's best to consider things for all their impacts, rather than just the ones we're directly concerned about.
  18. https://wiki.secondlife.com/wiki/Linden_Lab_Official:Using_Personal_Data

"Agents" in LSL refers to users, so this would be the UUID36. Under the PII policy as written, we cannot transmit your UUID36 or your legacy name (username) out of SL, or send any transmission that includes this information embedded in it (such as headers), without explicitly informed consent on that transmission being part of the script. The context of the above conversation is that both your UUID36 and your username are in the headers of llHTTPRequest by default (and I'm not sure they can even be removed by the scripter), which presents a sort of blocking issue here. (There is also additional information, such as position data, which, while not in the PII category, relates to the spirit of the concerns from some residents that informed the fear around bots and the recent changes.)

As Soft has said, consent needs to be obtained, which means explicitly asking for consent for any script sending any HTTP request. Soft has said they are going to discuss this internally at LL and we will learn of the outcome, if any, at a later point. There's the summary of the above conversation and why those SL identifiers are coming up in the PII discussion.
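For what "explicitly asking for consent" could look like in practice, here is a rough sketch. The wording, the channel number and the endpoint are my own placeholders, not anything LL has prescribed.

// Sketch: ask before transmitting anything that identifies the user.
integer CONSENT_CHANNEL = -7243991; // arbitrary dialog channel
integer gListen;

default
{
    touch_start(integer total_number)
    {
        key toucher = llDetectedKey(0);
        gListen = llListen(CONSENT_CHANNEL, "", toucher, "");
        llDialog(toucher,
            "This object would like to send your avatar key and username to an external web service to provide its feature. Allow?",
            ["Allow", "Decline"], CONSENT_CHANNEL);
    }

    listen(integer channel, string name, key id, string message)
    {
        llListenRemove(gListen);
        if (message == "Allow")
        {
            // only transmit after the user has agreed
            llHTTPRequest("https://example.com/service",
                          [HTTP_METHOD, "POST"], "user=" + (string)id);
        }
        else
        {
            llOwnerSay("No data was sent.");
        }
    }
}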
  19. Yeah, that's the paradox I'm potentially seeing here. On one hand you're telling us we should get informed consent, and when it comes to the body of the request I agree, 100%. It'll be a cultural change, but an honestly good one. But headers are a little different, and like you said, changing those functions could do a whole lot of damage to the grid, given how integral llHTTPRequest is and how long it has carried those headers. Changing that can break a lot of functionality very fast. So it's a bit of a catch-22 as far as I can tell. The only real solutions seem to be to either ignore the PII advice, due to the constraint that this data exists in headers, or just not use llHTTPRequest, until either the function is changed or the policy is changed or clarified, perhaps with an exception for header transmission and the onus instead put on the context of the use case on the remote side.

The other thing is that those headers also contain snapshots of XYZ position, rotation and velocity, which as far as I can tell conflicts with the spirit of the intent behind the recent changes regarding scripted agents, even if not the letter of them. I'm not sure how that one could be worked around without risking significant damage to scripts residents have built over the decade, but good luck with that, genuinely. I'm sure you'll work something out over at LL.

I wouldn't say I'm anxious about it; I just want to be able to be responsible with my scripting, but I can't help but see a conflict between the way the policy functions and the way the function... functions? So that's my point of concern. I am a bit concerned about whether this was an oversight in the drafting of the policy, or whether it was known about but just not included in the wording. Since llHTTPRequest is such an important function for web integration, which is a big part of the modern web and of SL's ability to integrate with it, contradictions concerning that function and its future status are naturally concerning from the LSL developer side, along with whether this might mean future difficulty interfacing with APIs from the grid as part of the modern internet. I've been encouraged by the discussion on recent Lab Gabs, but part of me feels this change clashes with the stated vision of SL's future, given how core this function is to SL's ability to fit into an increasingly microservice-driven web. Essentially, given the importance of that function in that context, its future status after today's communications is something I was concerned about, and as much as I appreciate you taking the time to reply, I do hope for some clarification regarding the status of llHTTPRequest going forward, though I know you probably aren't able to give that right now on a whim, without it being discussed as you mentioned. Appreciated, thank you!
  20. Soft, what you're saying makes sense for the first half. I think that's pretty reasonable and I hope other scripters choose to do it too. I don't "phone home" in secret in any of my scripts and never intend to. I like the way you phrased the disclaimer; it's pretty good and I'll make use of it in any future scripts of mine that do need to phone home for whatever purpose.

However, the second half of your comment doesn't really address the concern. Included below is an image showing several (but not all) of the default headers included in any outgoing request made by llHTTPRequest(). This is how the function works today, as written by Linden Lab. As it currently is, any outgoing request will by default include, unless explicitly *removed* (not added) by the user from the default LL headers, if that's even possible:

  • The username of the owner of the object (considered PII as of today).
  • The UUID of the owner of the object (considered PII as of today).
  • The name and key of the object itself (something I feel some residents might find concerning).
  • XYZ position data, rotation data and even velocity of movement (which, for non-scripters, includes direction of movement). This is far more information than BonnieBots ever collected, from my understanding.

The problem and concern being raised is that an LL function, by default, as implemented in the LSL library, is giving out this information with every request, regardless of whether a scripter is employing good practices or not, because the information is in the header, not the body. As a result, any llHTTPRequest, no matter how benign and regardless of good-faith efforts to comply with the PII policy, will still contain data that is considered protected *and*, on top of that, data that was the basis of residents' concerns with BonnieBots. This makes it functionally impossible to comply with the PII good-practices/suggestions document, and it raises significant concerns about the use of llHTTPRequest in its current form, and about whether the PII policy as written is even viable while username and UUID data sit in the headers, including basic operation within SL as it is in terms of UUIDs. We need clarification on this, either by editing the policy or by editing the function, because as it currently stands this is not possible to comply with, due to this blocking issue in the implementation of llHTTPRequest in LSL. It should also be mentioned that this affects over a decade's worth of code.
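If anyone wants to verify this for themselves entirely inside LSL, a rough sketch: give a script an HTTP-in URL, call that URL with llHTTPRequest (from the same script, or a second one if you prefer), and read back the documented x-secondlife-* headers that arrived. The handful printed here is not the complete set.

// Sketch: observe the default headers llHTTPRequest sends with every request.
string gURL;

default
{
    state_entry()
    {
        llRequestURL(); // ask the region for a temporary HTTP-in URL
    }

    http_request(key id, string method, string body)
    {
        if (method == URL_REQUEST_GRANTED)
        {
            gURL = body;
            // call our own URL so we can inspect what rides along by default
            llHTTPRequest(gURL, [HTTP_METHOD, "GET"], "");
        }
        else if (method == "GET")
        {
            llOwnerSay("owner-name: " + llGetHTTPHeader(id, "x-secondlife-owner-name"));
            llOwnerSay("owner-key: " + llGetHTTPHeader(id, "x-secondlife-owner-key"));
            llOwnerSay("object-name: " + llGetHTTPHeader(id, "x-secondlife-object-name"));
            llOwnerSay("object-key: " + llGetHTTPHeader(id, "x-secondlife-object-key"));
            llOwnerSay("position: " + llGetHTTPHeader(id, "x-secondlife-local-position"));
            llOwnerSay("velocity: " + llGetHTTPHeader(id, "x-secondlife-local-velocity"));
            llHTTPResponse(id, 200, "ok");
        }
    }
}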
  21. Necrobumping this thread because it is currently the top result on Google for this error, and it's worth clarifying that the answers above misunderstand the issue, so people looking for help don't come to this thread and start questioning their own perception of what's happening.

The comments above from Rolig Loon and Alyona Su incorrectly assume this was a case of the OP paying the wrong user by mistake, or some other error occurring between the OP making a payment and the recipient receiving the money. That is not the case. The error the OP is reporting is that the toast notification from Second Life shown in the viewer is "You failed to pay [Resident Name] [Amount]". This is not the result of any trickery or some other error after the initial payment attempt; rather, the initial payment event itself has failed, yet the L$ appears to have been deducted from the paying user. Nor is it a client-side mismatch between actual and reported L$ balance from this transaction itself, as refreshing the L$ balance does not fix it. Transaction history also does not report this transaction, in either a successful or a failed state; quite literally, there is no record in any way, shape or form that any transaction even occurred.

Upon investigation, the issue appears to be caused by a prior transaction in the current viewer session, in the following scenario:

  • The user begins with an L$ balance of N.
  • The user receives an L$ payment of X, updating the balance server-side and client-side to (N+X).
  • In rapid succession, the user makes an L$ payment of Y, updating the server-side balance to ((N+X)-Y), while the client-side balance remains at (N+X).
  • The user attempts a payment of Z, where Z is greater than ((N+X)-Y) but less than (N+X). The server rejects this for insufficient funds, and the client throws the error "You failed to pay [Resident Name] L$Z".
  • The standard payment sound plays and the client-side balance is updated to match the server-side value, but this is not presented as such to the user; the client looks and sounds as if a payment has occurred.

Admittedly, this is the first time I've seen this issue in about ten years of Second Life, so the mistake is understandable, but it is a real issue that can occur, especially seeing as I just encountered it myself. If the above doesn't help explain what has happened, see below for an explanation and suggested actions to trace the "missing" funds.

To allay any anxiety for those searching for this issue: your balance has not actually been changed by the failed payment; rather, your client was falsely reporting your balance to begin with. This happened because, in your current session, you made a payment almost instantly after being the recipient of another payment. That payment went through successfully, but your viewer's displayed balance was not updated accordingly. The error then occurred because you attempted to pay an amount greater than your actual current balance. Your transaction history will not report this failed payment because, as mentioned, it never actually happened.
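To put concrete numbers on that scenario (values made up for illustration): you start with L$1,000, receive L$500, and almost instantly pay out L$400. The server now holds L$1,100, but your viewer still shows L$1,500. If you then try to pay someone L$1,200, the server rejects it for insufficient funds, the viewer plays the payment sound, shows "You failed to pay [Resident Name] L$1200", and snaps your displayed balance down to L$1,100, which looks like money vanishing even though this failed attempt deducted nothing.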
However, you should look through your recent transaction history: the L$ has not actually been eaten, but was used in a prior transaction you made. It is likely you have already paid for the thing you are currently attempting to pay for, or alternatively you have a scripted object with permission to manage your wallet that made a scripted payment upon receiving funds earlier, with the rapid update causing a desync between your client's reported balance and your actual balance. Hope this helps!