
LSL HTTP Changes Coming


Oz Linden


A few of the features of LSL HTTP usage will be changing slightly as a part of the migration to using cloud hosted simulators. Our hope is that these changes will not cause any problems, but hope and testing are two different things, so...

If you are the creator of LSL scripts that use any of the features discussed below, or you use scripts that rely on external HTTP services that were created by someone else, you should test them as soon as possible and report any problems in Jira. 

As sandboxes where you can test with these changes are deployed, we will post notices in this thread. Some of what is described below is pretty geeky - mostly that doesn't matter because if you can test your scripts and they work (on the new systems), then you didn't make any of the available mistakes and don't need changes. If you are not able to figure out why your scripts fail with these changes, file a Jira and we'll try to help.

llHTTPRequest - Outbound HTTP

The interface to this method is not changing (aside from one additional error check - see below), but some of its behavior on the network will change in ways that may confuse servers that are doing inappropriate checks. 

HTTP requests from LSL to external systems have always been routed through an HTTP proxy, and that will still be true, but in the past it was a proxy dedicated to each simulator host; now the proxy will be in a pool of separate servers as shown here:

[Diagram: HTTP requests from simulators routed through a shared pool of proxy servers]

This means that:

  • The IP address of the HTTP client, as seen by the HTTP server, will not be the same as the IP address of the simulator the request was sent from; the hostname returned by looking up the client (proxy) address will not be the simulator host.
  • Some timeout behaviors for servers that do not respond quickly enough may change slightly.
  • The body of some error responses from the proxies may change, including the content type.
  • Different requests from a given simulator may be routed through different proxies, and requests from different simulators may go through the same proxy (potentially even on the same TCP connection).
  • Scripts that make more than one HTTP request that overlap in time may see changes in the order that those requests arrive at the server and/or the order in which the responses are returned (ordering has never been guaranteed).

The IP addresses for simulators will be in different ranges and unpredictable; if your server is checking whether or not requests are coming from the simulator addresses in our current datacenter, you will need to remove those checks. We will not be able to provide documentation of the IP addresses of simulators or the proxies.

None of this should affect the use case of one script using HTTP to communicate with another script; those requests already loop through the same proxies even now, but note the llRequestURL section below.

The llHTTPRequest parameter HTTP_CUSTOM_HEADER may not be used to set values for the 'Connection' or 'Host' headers; a check is being added so that any call that attempts to do so will throw an error and abort the request. (At present these header names are allowed, but the values are usually not actually sent to the remote server, which is misleading.)
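
For illustration, a minimal sketch of a request that will fail the new check once it is deployed (the URL and payload here are hypothetical):

default
{
    state_entry()
    {
        // Setting 'Connection' (or 'Host') via HTTP_CUSTOM_HEADER will throw
        // an error and abort the request once the new check is in place.
        llHTTPRequest("https://example.com/hook",   // hypothetical URL
            [HTTP_CUSTOM_HEADER, "Connection", "keep-alive"],
            "payload");
    }
}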

llRequestURL or llRequestSecureURL

Note this earlier post on not checking the domain part of the URL allocated to your script. A hostname lookup for a simulator IP address will not return the same name that appears in URLs returned by these methods. That is - looking up the IP address for a simulator name will return the correct address for that simulator, but looking up the name for that address will return a different name. This sort of asymmetry is not unusual, and will be the norm for many of our services in the cloud.
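
To illustrate, a minimal sketch of treating the granted URL as an opaque token (the registration endpoint is hypothetical): the script hands the whole URL to its server and never parses or verifies the hostname.

default
{
    state_entry()
    {
        llRequestURL();
    }
    http_request(key id, string method, string body)
    {
        if (method == URL_REQUEST_GRANTED)
        {
            // body is the granted URL; send it to your server unmodified
            // rather than inspecting or validating its domain part.
            llHTTPRequest("https://example.com/register",   // hypothetical endpoint
                [HTTP_METHOD, "POST"],
                body);
        }
    }
}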
 


Is the new proxy pool implemented as a round-robin? Last night I experienced a production region that was reporting a 502 "ERROR: The requested URL could not be retrieved" proxy error with outbound HTTP that went away after the region was restarted. Curious whether a round-robin proxy pool would have at least mitigated the issue. If the proxies are no longer tied to the region, would we have to open a ticket when/if one of the proxies in the pool started spewing 500 errors? Will there be any metadata to report which proxy in the pool was at fault?

Edited by Phate Shepherd

  • Lindens

I wouldn't call AWS load balancing round-robin, but it will definitely be a more broadly shared resource.  Instead of a handful of regions being stuck with a bad proxy, everyone will cycle through it and move on until it's replaced (quickly, is our plan).  But a 502 on HTTP-Out isn't necessarily sourced from Linden; it can be generated anywhere up the connection into the origin server.  Sending back hints about error origin to the script is an interesting idea.  Or at least coming up with better alibis...  :)  I may have an idea...


Could you set up a name server at a specific address that does nothing but verify the IP was recently used as an HTTP-out proxy, and respond with NXDOMAIN if it was not?

I think this would be the simplest solution for people that are just trying to do some source verification on scripts expecting traffic only from SL simulators...

Just thought I'd throw that out there.  I don't have anything in SL that does this now, but I have used the name servers for a first stage of authenticating the source in the past.  It sounds like that is all that is being "broken" here.

It wouldn't have to return the "correct" name, in this case.  Just return "doesn't exist" if it's not one of the proxies that has been used recently.


  • Lindens

Peer verification is a challenge.  Should it be based on layer 3, 6, or 7?  How to do revocation, updates, or other maintenance if working in layer 7?  Etc...  AWS itself strongly discourages IP-based schemes, but they do publish some ideas for those who insist: AWS Documentation on IP Ranges.  I'm inviting comments on that page, particularly on the later section about SNS subscriptions.


9 hours ago, Madman Magnifico said:

Could you set up a name server at a specific address that does nothing but verify the IP was recently used as an HTTP-out proxy, and respond with NXDOMAIN if it was not?

I think this would be the simplest solution for people that are just trying to do some source verification on scripts expecting traffic only from SL simulators...

Using IP addresses and DNS names as an authentication method isn't a good idea even when it appears to work.

A much better approach would be to put a secret string in your script and send a hash of that secret and the message along with the message. The server knows the secret, so it can verify that the message is authentic. This verifies not only that it's coming from some simulator, but that the source knows the secret.
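
As a minimal sketch of that scheme on the script side (the header name, URL, and secret below are placeholders, not anything the grid provides):

string SECRET = "replace-with-your-own-secret";   // shared out-of-band with your server

send_authenticated(string message)
{
    // Hash of the secret plus the message; the server, which also knows
    // the secret, recomputes this and rejects requests that don't match.
    string signature = llSHA1String(SECRET + message);
    llHTTPRequest("https://example.com/endpoint",   // hypothetical server
        [HTTP_METHOD, "POST",
         HTTP_CUSTOM_HEADER, "X-Signature", signature],   // hypothetical header name
        message);
}

default
{
    state_entry()
    {
        send_authenticated("hello from my script");
    }
}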


13 minutes ago, Oz Linden said:

Using IP addresses and DNS names as an authentication method isn't a good idea even when it appears to work.

A much better approach would be to put a secret string in your script and send a hash of that secret and the message along with the message. The server knows the secret, so it can verify that the message is authentic. This verifies not only that it's coming from some simulator, but that the source knows the secret.

I generally agree with that, and even when I did do this, it was always just one of many checks performed.  My suggestion was more of an idea that could help make it easier for those people who have made (or paid other people to make) scripts that do this.  It seems it would be doable from your end, and fairly trivial for most people to just direct that reverse lookup to another server.  Just throwing ideas out since I'm guessing that quite a few in-world systems will be affected by the change and not everyone is all that capable of implementing alternatives.


6 minutes ago, Madman Magnifico said:

I generally agree with that, and even when I did do this, it was always just one of many checks performed.  My suggestion was more of an idea that could help make it easier for those people who have made (or paid other people to make) scripts that do this.  It seems it would be doable from your end, and fairly trivial for most people to just direct that reverse lookup to another server.  Just throwing ideas out since I'm guessing that quite a few in-world systems will be affected by the change and not everyone is all that capable of implementing alternatives.

Unfortunately, implementing your suggestion would not be nearly as easy as you might think, and in the end it would not provide strong assurance. This is why we're making a special effort to publicize these changes well before they start appearing on the main grid.

We are well aware that making changes to LSL behavior that are potentially not backwards-compatible has the potential to break content, so we're very careful about doing so. In particular, changes that require scripts to be updated are especially problematic, since scripts are so often relied upon by people who can't update them. In this case, though, it's the remote service that requires adjustment, so our hope is that most will be maintained by someone capable of removing checks like those (after all, someone is paying for the web server, so they presumably can update it).


3 hours ago, Oz Linden said:

Using IP addresses and DNS names as an authentication method isn't a good idea even when it appears to work.

A much better approach would be to put a secret string in your script and send a hash of that secret and the message along with the message. The server knows the secret, so it can verify that the message is authentic. This verifies not only that it's coming from some simulator, but that the source knows the secret.

Unfortunately, this is not feasible for open-source scripts such as AVsitter, which I maintain. The aim of the check is to prevent abuse of the service by blocking IP addresses that do not belong to Linden Lab.

Is there any other approach that could be used for this purpose?

What's the difficulty in implementing a reverse DNS that resolves the proxy IPs to an address within lindenlab.com?


47 minutes ago, Sei Lisa said:

What's the difficulty in implementing a reverse DNS that resolves the proxy IPs to an address within lindenlab.com?

We don't own the IP addresses (AWS does), so we can't create an authoritative reverse lookup for them. 

There are lots of DDOS protection services out there for web sites, but for reasons I hope are obvious we can't make recommendations.

Oh ... and also, we're mostly stopping using 'lindenlab.com' for the backend service domains (it's mostly being replaced by 'secondlife.io' or some subdomain of that).

Edited by Oz Linden
additional note about the use of lindenlab.com

(Sorry if this was covered long ago, but only just now checking into any of this.)

Is this the end of the Viewer "Setup" preference to "Use the built-in browser for Second Life links only" ?

With that setting I did a cursory test on Aditi Cloud Sandbox 3 and it used the system browser as if an in-world HTTP server were external, in contrast to the way Agni used the built-in browser, treating it as a Second Life link.

(It's not a big deal either way for me. I might have some now obsolete documentation floating around, but nobody ever reads the manual anyway, right?)


 

5 hours ago, Oz Linden said:

We don't own the IP addresses (AWS does), so we can't create an authoritative reverse lookup for them. 

There are lots of DDOS protection services out there for web sites, but for reasons I hope are obvious we can't make recommendations.

Oh ... and also, we're mostly stopping using 'lindenlab.com' for the backend service domains (it's mostly being replaced by 'secondlife.io' or some subdomain of that).

Would it be possible for the proxy itself to insert a hash into the http header that is based on the script creator UUID and a shared secret that the creator can change themselves? I hesitate to suggest a hash made from the creator's UUID and their password, but something that the creator already has control over, and can't be reversed to reveal their plaintext password. (At worst, it could be reversed to reveal the shared secret, but the creator could change it, and update the server code.)

Doing it this way, existing content could continue to work as only a server side verification of the hash would be needed to know the comms came from a LL proxy by verifying the hash matches the concatenation of known creator UUIDs and the shared secret.

(Now that I think about it more, it wouldn't help open source scripts that have a creator different than the server being communicated with.)

Edited by Phate Shepherd

I don't have it set up currently, but I used to have my own server here running Linux and MySQL for some items I had in Second Life.  I found my hard drive activity light blinking like mad on it one night, and when I checked the log, I found I was being hammered by something over in China.  At that point, I tightened my firewall to only allow IP addresses from Linden Lab, and that halted the drive activity.  Most of the drive activity was probably from logging all those requests.  It looks like in that scenario I'd have to change.  Can you get the botnets in China to shut down?


Hi, I want to do some URL tests on Aditi. I started yesterday and the sandbox has an auto-return of 4 hours.  As I want to test whether the new URLs will be stable over time, I would need a sandbox with no auto-return.  I would leave my test cube there for a while and see how things turn out.   Would that be possible?


13 hours ago, Phate Shepherd said:

Would it be possible for the proxy itself to insert a hash into the http header that is based on the script creator UUID and a shared secret that the creator can change themselves? I hesitate to suggest a hash made from the creator's UUID and their password, but something that the creator already has control over, and can't be reversed to reveal their plaintext password. (At worst, it could be reversed to reveal the shared secret, but the creator could change it, and update the server code.)

The script can add a custom header that has that signature in its value; there's no need for the proxy to be involved at all.


1 hour ago, Oz Linden said:

The script can add a custom header that has that signature in its value; there's no need for the proxy to be involved at all.

I was trying to come up with a solution to existing content that wouldn't require replacing all in-world items that talk to servers with source verification.

Edited by Phate Shepherd

17 hours ago, Basil Wijaya said:

Hi, I want to do some URL tests on Aditi. I started yesterday and the sandbox has an auto-return of 4 hours.  As I want to test whether the new URLs will be stable over time, I would need a sandbox with no auto-return.  I would leave my test cube there for a while and see how things turn out.   Would that be possible?

We can probably lengthen that (assuming that doesn't result in unmanageable litter); I'll look into it on Monday.


21 hours ago, Qie Niangao said:

Is this the end of the Viewer "Setup" preference to "Use the built-in browser for Second Life links only" ?

With that setting I did a cursory test on Aditi Cloud Sandbox 3 and it used the system browser as if an in-world HTTP server were external, in contrast to the way Agni used the built-in browser, treating it as a Second Life link.

It doesn't change that setting at all (at least not yet). The point of that setting is to use the internal browser for 'trusted' content created by the Lab. The fact that web servers implemented by Residents in LSL ended up in a subdomain of 'lindenlab.com' was actually a flaw in that design, and part of why we're (nearly) eliminating the use of the corporate domain for addressing of things within SL (to reassure viewer developers: we're not changing the domain of the login servers - that would be too much trouble).

Whether or not there should be a setting that adds services within 'secondlife.io' to the list for the internal browser is an interesting question.


6 hours ago, Phate Shepherd said:

I was trying to come up with a solution to existing content that wouldn't require replacing all in-world items that talk to servers with source verification.

Really, "source authentication" that was just an IP address check wasn't worth much to begin with. Yes, it would filter out some of the annoying but unsophisticated generic attacks on web servers, but it didn't really tell you that requests were coming from the scripts you expected (anyone on any region with any script would have the same IP range that legit scripts did), or that it wasn't an attack. If that's what you had, no change is needed to the in-world content to keep it working because just removing the IP address check at the server will allow the in-world scripts to work.

Adding a more secure verification that requests are really coming from the specific scripts you expect will be a bit more work, but that would have been worthwhile anyway.


4 hours ago, Oz Linden said:

We can probably lengthen that (assuming that doesn't result in unmanageable litter); I'll look into it on Monday.

If you're doing that, I'd appreciate it if you also turn on "object entry/everyone" for those sandboxes. It's on for some sandboxes and off for others on the main and beta grids, somewhat randomly.


In response to Oz Linden's comment:
Adding a more secure verification that requests are really coming from the specific scripts you expect will be a bit more work, but that would have been worthwhile anyway.


I know that this is a specialized way and doesn't address most cases, but it fits my needs.

I use OAuth2 in my scripts. When a script starts, it presents client credentials to an authentication server.

It receives an access token back, which the script remembers (below). This oauth2header is added to all the HTTP requests from the script.

list oauth2header = [HTTP_CUSTOM_HEADER, "Authorization", "Bearer " + llJsonGetValue(body, ["access_token"])];

The benefit I receive is that middleware on my server rejects all requests that don't carry a valid access token.
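
For context, a minimal sketch of the token exchange that produces that header (the token endpoint and credentials below are placeholders):

key token_request;
list oauth2header;

default
{
    state_entry()
    {
        // Present client credentials to the authentication server.
        token_request = llHTTPRequest("https://auth.example.com/token",   // hypothetical endpoint
            [HTTP_METHOD, "POST",
             HTTP_MIMETYPE, "application/x-www-form-urlencoded"],
            "grant_type=client_credentials&client_id=YOUR_ID&client_secret=YOUR_SECRET");
    }
    http_response(key id, integer status, list metadata, string body)
    {
        if (id == token_request && status == 200)
        {
            // Remember the access token; attach oauth2header to every
            // subsequent llHTTPRequest the script makes.
            oauth2header = [HTTP_CUSTOM_HEADER, "Authorization",
                "Bearer " + llJsonGetValue(body, ["access_token"])];
        }
    }
}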
 

Edited by SophieJeanneLaDouce
