Lucia Nightfire Posted September 19, 2023
A few minutes ago, all my apps grid-wide that use URLs continuously began having their URL lifetimes end prematurely, generating 499 error codes. Anyone else seeing this?
Aishagain Posted September 19, 2023 (edited)
Not that precisely, but about 15 minutes ago I was getting random errors, mostly script errors without consequences, on Teeglepet horses when freshly rezzed. It may not be connected, but it IS unusual. Some appeared to be the consequence of the item not being able to communicate with a central server, so maybe those ARE connected. Nothing on the GSP yet... hardly a surprise, though.
ETA: this was on the LeTigre RC channel with the new build number.
Krafties Posted September 19, 2023
Yes, our players are experiencing this exact error in some regions. It seems that other regions are unaffected, though. We haven't found a pattern yet.
Gayngel Posted September 19, 2023
All my rental systems are going crazy here. "***Error*** Maybe the system is in maintenance. Please try again later."
Ronnie Pawpad Posted September 19, 2023
Yes, getting HTTP error 499 when scripts try to open an HTTP request, so all scripts that make an HTTP/S connection to an outside server (Casper systems, roleplay HUDs, update checks, etc.) are currently timing out.
Gunwald Constantine Posted September 19, 2023
On the AWS portal - the Amazon backend is having issues:

Network Connectivity Issues

[05:21 PM PDT] We continue to work toward resolving the increased networking latencies and errors affecting Availability Zones (usw2-az1 and usw2-az2) in the US-WEST-2 Region. We have successfully applied an update to the subsystem responsible for network mapping propagation to address resource contention. We have seen network mapping propagation times stabilize but they have not yet begun to trend towards normal levels. We expect that to begin over the next 30 minutes, at which time we expect latencies and error rates to improve. We will continue to keep you updated on our progress towards full recovery.

[04:20 PM PDT] We continue to progress toward resolving the increased networking latencies and errors affecting Availability Zones (usw2-az1 and usw2-az2) in the US-WEST-2 Region. At this time, we are approximately 50% completed with the update to address resource contention within the subsystem responsible for network mappings propagation in the usw2-az2 Availability Zone. Once we complete the update in usw2-az2, we will then move on to usw2-az1. Our current expectation is to have both Availability Zones fully resolved within the next 60 to 90 minutes, and we will continue to provide updates as recovery progresses.

[03:33 PM PDT] We continue to make progress towards resolving the increased networking latencies and errors affecting Availability Zones (usw2-az1 and usw2-az2) in the US-WEST-2 Region. In the last 30 minutes, we’ve continued applying an update to address resource contention within the subsystem responsible for network mappings propagation and are seeing early signs of improvement. We will continue to monitor before deploying this change more broadly and will continue to provide updates.
[03:02 PM PDT] We continue to make progress towards resolving the increased networking latencies and errors affecting Availability Zones (usw2-az1 and usw2-az2) in the US-WEST-2 Region. In the last hour, we applied an update to address resource contention within the subsystem responsible for network mappings propagation and are seeing early signs of improvement. We will continue to monitor before deploying this change more broadly and will continue to provide updates.

[02:22 PM PDT] We continue to make progress towards resolving the increased networking latencies and errors affecting Availability Zones (usw2-az1 and usw2-az2) in the US-WEST-2 Region. While we continue to make progress in addressing the issue, we wanted to provide some more details on the issue. Within Amazon Virtual Private Cloud (VPC) any changes to the network configuration - including launching an EC2 instance, attaching an Elastic IP address or Elastic Network Interface - need to be propagated to the underlying hardware to ensure that network packets can flow between source and destination. We call this network configuration “network mappings”, as it contains information about network paths or mappings. Starting at 10:00 AM PDT this morning, we have been experiencing a delay in the propagation of these mappings within a single cell (part of the Availability Zone) in usw2-az1 and usw2-az2 Availability Zones. The root cause appears to be increased load to the subsystem responsible for the handling of these network mappings. We have been working to reduce the load on this service to improve propagation times, but while we have made some progress, mapping propagation latencies have not returned to normal levels. We continue to work to identify all forms of resource contention that could be leading to load, and have a few additional updates that we are currently working on.
Gunwald Constantine Posted September 19, 2023
[05:56 PM PDT] We continue to work toward resolving the increased networking latencies and errors affecting Availability Zones (usw2-az1 and usw2-az2) in the US-WEST-2 Region. While network mapping propagation times have remained stable, we have not yet seen the improvement in propagation latencies that we had hoped for. In parallel, we are working on several other updates to address the resource contention within the subsystem responsible for network mapping propagation. We will continue to keep you updated on our progress towards full recovery.
Judy Starchild Posted September 19, 2023
Problems with RC Magnum and Blue Steel. Second Life Server 2023-08-24.581535 seems to be unaffected.
Lucia Nightfire Posted September 19, 2023 (Author)
1 hour ago, Judy Starchild said:
Problems with RC Magnum and Blue Steel. Second Life Server 2023-08-24.581535 seems to be unaffected.
2023-08-24.581535 is also affected. Apparently, not all regions are experiencing problems, though.
Aishagain Posted September 19, 2023 (edited)
Reading some of the above posts, it seems likely that the issue is another AWS-related glitch. As such, we are unlikely to see any comment from LL via the GSP unless the problems increase markedly. The current spate of AWS issues is concerning to a wider userbase than just SL, so socks will need to be pulled up quickly at AWS unless they want some heavyweight complaints.
ETA: I should know better! It is now on the GSP.
Anastasia Horngold Posted September 19, 2023
It was just posted on the grid status page.
Monty Linden (Linden) Posted September 19, 2023
Just to confirm... AWS got a bit wobbly starting at around 11:00slt. AWS is still working on it. HTTP-Out is heavily impacted.
Update: Numbers looking better from 21:30slt.
Love Zhaoying Posted September 19, 2023
2 hours ago, Monty Linden said:
AWS got a bit wobbly starting at around 11:00slt.
If an AWS gets wibbly wobbly because of server times not matching, does that make it wibbly wobbly timey wimey?
Vicious Hollow Posted September 19, 2023
3 hours ago, Monty Linden said:
Just to confirm... AWS got a bit wobbly starting at around 11:00slt. AWS is still working on it. HTTP-Out is heavily impacted. Update: Numbers looking better from 21:30slt.
As of 12:50 am slt, we're still having the issues in Badger, Remonta and Symmetry.
TonyStark Aristocrat Posted September 19, 2023
I miss April Linden. She always gave us REALLY good reports to keep us informed, and with compassion. We need that now that AWS says their stuff is resolved:
"Engineers worked to identify the root cause and resolve the resource contention affecting the specific subsystem. By 9:15 PM PDT, the propagation time for network mappings had returned to normal levels. The issue has been resolved, and the service is operating normally."
Do SL regions just need the Tuesday restart to be back on their feet? We're seeing inconsistency between in-world Linden balances and web Linden balances, too; we have no idea what is happening. Some information and some clarification, perhaps just a touch of compassion, would be lovely from the Lab.
Aishagain Posted September 19, 2023
@TonyStark Aristocrat dream on.
Monty Linden (Linden) Posted September 19, 2023
Information is still coming in (*really* looking forward to AWS' explanation). HTTP-Out was running at elevated levels (including higher error rates) from 21:30slt yesterday until 2:45slt today. That's now running as expected. Teleports remained unreliable (~80% successful) until around 6:30slt today. They've now recovered. Lingering issues are likely, and we do want to hear about them. Please contact support.
Paul Hexem Posted September 19, 2023
That explains why a bunch of my servers started emailing me with errors last night.