Sasun Steinbeck

Cap invocation rate exceeded?


Anyone have any clue at all what this error message might mean? "Cap invocation rate exceeded:" followed by a mysterious UUID. I'm seeing this from some HTTP-based inworld servers that are getting a non-excessive number of requests.

Some background... no, it's not http rate limiting, I detect and handle that. The servers in question have been stress tested to limits far above what they are at now (an order of magnitude larger) without ever seeing that message.

I've seen it only once before, coming from an inworld server connected to a kiosk in an extremely laggy region (a very busy fashion store). The error came from the non-lagged server in a different region, not the lagged kiosk. I am wondering if the lag is causing the kiosk to be unresponsive to the server in some way... causing that error. What's weird is that it is most definitely not http rate limiting. We're talking a handful of http requests per minute. My guess is that the server is making an http request to the kiosk in the ultra lagged area and it gets that error. In this most recent case the server got a 503 error, so it looks like it's possibly an http request to an object in a laggy area.

I can't find a darn thing about this error and it's extremely sporadic. It happens once... then never again. But now it's cropping up again on some very important servers and I need to get this resolved asap.

Edited by Sasun Steinbeck


I am extremely familiar with all those limits and have dealt with just about every single one (sometimes painfully, lol) at some point or another, but I'm asking specifically about the error I posted about. None of the documentation addresses what that error might actually mean. Any idea?


I couldn't tell from your post whether you were familiar with the full set of caps, which are more than a little confusing. That's the only reason I pointed to the wiki. I have never seen the error message that you quoted, so I've apparently never tripped over the same stumbling block, but I've probably bumped into most of those caps at one time or another. I'm afraid all I can suggest is that you gather any information that would help LL reproduce the error and submit a JIRA report on it.

On 4/8/2018 at 12:46 AM, Sasun Steinbeck said:

Anyone have any clue at all what this error message might mean? "Cap invocation rate exceeded:" followed by a mysterious UUID. I'm seeing this from some HTTP-based inworld servers that are getting a non-excessive number of requests.

I don't recognize the exact text, but it probably means that the limit on the rate of inbound HTTP requests to objects in the region has been exceeded. We don't document the exact value because we don't want people to try to "get the most we can without hitting it". The 503 response has a Retry-After header that tells you how many seconds to wait before your next request.
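For a requester outside Second Life, honoring that header is straightforward. As a minimal sketch (the header-dict shape and the 5-second fallback are assumptions, not documented behavior):

```python
def retry_after_seconds(headers, default=5):
    """Return the Retry-After delay in seconds from a response-header dict.

    Retry-After can also carry an HTTP-date; this sketch handles only the
    delta-seconds form and falls back to `default` (an assumed value) when
    the header is missing or unparseable.
    """
    value = headers.get("Retry-After")
    if value is None:
        return default
    try:
        return max(0, int(value))
    except ValueError:
        return default
```

A caller would sleep for that many seconds after a 503 before issuing the next request.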

However... right now, the throttle is applied to all scripts in the region. We are looking into two changes (which are currently scheduled to roll to a small RC this week):

  • The limit will apply to all scripts in the region owned by the same user. This should prevent your scripts from being throttled because of requests to someone else's scripts.
  • The limit will be raised slightly for Skilled Gaming regions (because of the first change, the increase may in practice be quite large).

We have baseline stats on how frequently this error is occurring now, and will measure how that changes as it goes through the release channel process.



In my case the remote object sends a small request to my inworld server, then the server replies with an "ok" response and immediately makes an HTTP request back to the remote with some data. The only reason it does that, instead of just sending the data back in the response, is the response size limit. Sometimes the data can be a largish payload, so it has to turn around and make a fresh HTTP request. Maybe a little pause between the response and the following request back would do the trick... but the actual number of responses + requests to the remote is just 2 or 3; there's no barrage of requests. Unless the quota somehow factors in the payload size. And in one case (and probably the second) where this happened, the remote region was heavily populated and lagged, so it was probably an issue with other scripts in the remote region gobbling up all the quota... so that makes sense.
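The two-step exchange described above can be sketched roughly like this (the 2048-byte cap and the `reply`/`send_back` callbacks are placeholders for illustration, not actual Second Life limits or APIs):

```python
RESPONSE_LIMIT = 2048  # assumed cap on the in-world response body, not a documented value

def handle_request(payload, reply, send_back):
    """Acknowledge the incoming request, then push the real data in a
    fresh outbound request whenever it won't fit in the response body."""
    if len(payload) <= RESPONSE_LIMIT:
        reply(payload)      # small enough: answer directly in the response
    else:
        reply("ok")         # acknowledge now...
        send_back(payload)  # ...then deliver via a separate HTTP request
```

The follow-up request is what lands in the remote (possibly lagged) region, so it is the one that competes for that region's quota.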

Changing this quota to "same owner" would be great. At least I can watch for a 503 and the Retry-After header and know when it's hit.

Thank you all for the info, that was very helpful.



@Oz Linden one followup, how can I get the Retry-After header in the 503 response? The request is coming from an in-world script... llGetHTTPHeader() in an http_response event doesn't work. I can't seem to get any of the usual expected response headers that way.

21 hours ago, Sasun Steinbeck said:

@Oz Linden one followup, how can I get the Retry-After header in the 503 response? The request is coming from an in-world script... llGetHTTPHeader() in an http_response event doesn't work. I can't seem to get any of the usual expected response headers that way.

At present, there's no way to read any HTTP headers from an http_response event. The header value is useful if your request comes from an outside web client, though.

The throttle allows many requests per second (no, I won't tell you exactly how many because it may change and we don't want people trying to see if they can get in just under the limit) to the scripts owned by the same owner in the same region.
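A per-owner, per-region throttle of that shape is commonly implemented as a token bucket. Here is a toy sketch; the actual limit and algorithm are unpublished, so everything below (rate, capacity, the algorithm itself) is illustrative only:

```python
import time

class TokenBucket:
    """Toy token-bucket throttle: each request spends one token, tokens
    refill at a fixed rate, so short bursts pass but sustained floods
    get rejected."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill based on elapsed time, capped at the bucket's capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Under a scheme like this, requests from all of one owner's scripts in a region would draw from the same bucket, which matches the behavior described above.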

A good general strategy is to wait a few seconds (possibly with a little randomness added in) before retrying, and if that fails, increase the wait by multiplying it by a small factor before retrying again. If all your requesters follow a method like this, they'll eventually get through.
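That retry strategy is exponential backoff with jitter. A minimal sketch (the starting delay, multiplier, and jitter range are arbitrary example values, not recommended settings):

```python
import random

def backoff_delays(base=2.0, factor=2.0, jitter=1.0, attempts=5):
    """Yield successive retry delays: start near `base` seconds, multiply
    by `factor` after each failure, and add up to `jitter` seconds of
    randomness so a crowd of requesters doesn't retry in lockstep."""
    delay = base
    for _ in range(attempts):
        yield delay + random.uniform(0, jitter)
        delay *= factor
```

A caller would sleep for each yielded delay between attempts and give up once the generator is exhausted.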


