Oz Linden

Resident
  • Posts

    424
  • Joined

  • Days Won

    2

Posts posted by Oz Linden

  1. If you have reason to believe that uplift has adversely affected something on your region, you should file a Support request to ask that it be moved back to the datacenter (but keep reading to find out how to do so in a way that will work).

    You can tell whether or not a region has been uplifted by going there and looking at the simulator hostname in the Help > About dialog: if it ends in ".lindenlab.com" it's in the datacenter, if it ends in ".amazon.com" or ".secondlife.io" it's uplifted.

    Be aware of a couple of things before you make a request to be moved back:

    1. I have asked Support to collect specific information about exactly what has failed and exactly how it has failed for any such request. If we don't get this information, we can't fix whatever it is and we're likely to leave the region uplifted until we can get the data we need.
    2. We are trying to move all of the regions as soon as we can (consistent with carefully listening, and watching all our internal metrics, for problems after each move). This process is going to move as quickly as we can make it happen, so your region(s) will be uplifted pretty soon. We encourage you to actively participate in this process rather than hide from it and hope someone else solves your problem before it affects you. If your region is among the last to move, the time available to address your problem may be very short; it will be better to have detected it early. Lucia has been testing on the beta grid and other early-uplift regions, and we're very much aware of and working on solutions for the particular problem they have reported; follow Lucia's example.

    If you report a problem and we can observe it and we think it likely that moving back will fix it (there is some chance that the problem is not a result of the simulator uplift), then we'll move it back while we work on a fix. We have the vast majority of our development resources focused on this migration, but if we don't know about a problem we won't fix it. If we don't fix it, there will come a time in the near future when moving it back will no longer be an option.

    Overall, this is going very well; as April said in her post last night - please be patient with us. Really, this is going to be better very soon. 

    PS. That note about possible performance problems is true... it's possible ... but mostly what we've seen in the regions uplifted so far is that it's better.

    • Like 2
    • Thanks 6
  2. 15 hours ago, Afkgirl101 said:

    Hello! I apologize in advance if this is in the wrong forum category! Can someone let me know if there is a way to sync my 24 hour day and time into SL to make it more realistic? So when it's early morning for me, it's early in SL and when it's night for me, it's night in SL. Is this possible to do?

    If you're a landowner, yes.

    In World > About Land you can set the day length and the offset; this shows how I've set them for my in-world office space to match my home office in New Hampshire:

    [Screenshot: About Land day cycle settings — Screen Shot 2020-10-22 at 12.05.16.png]

    • Like 1
  3. 2 hours ago, Darling Brody said:

    If there is ever another active permission exploit to open scripts, how do we protect our servers?

    I'm afraid that I don't have a suggestion for you there. 2009 was before my time ... I can assure you that protecting script sources is very much top-of-mind around here.

    • Thanks 1
  4. 7 hours ago, Darling Brody said:

    Two issues that may be related to this need consideration for in-world and off-world validation & security:

    [...]

    Vending machines and product upgrade delivery scripts need to filter out requests coming from the beta grid and other invalid sources to make sure people are not spending beta grid money to get stuff delivered to their main grid accounts without paying.

    [...]

    With llHTTPRequest() we have >> string llGetHTTPHeader( key request_id, string header );  -- With a result like https://sim3015.aditi.lindenlab.com:12043/cap/a7717681-2c04-e4ac-35e3-1f01c9861322

    This will need to be maintained or else you will have a repeat of the HUGE content theft/exploitation issues we had 10 years ago that drove away so many content creators. ( BTW my content from that time is STILL being passed around for free )

    Also off-world servers need to make sure a request is coming from a secondlife domain and not some other external source. (Read: attempted hack)

    For example: in PHP you might use something like this to ensure all requests are coming from SecondLife's main grid.

        ///////////////////////////////////////////////////////////////
        //
        // Validate that the request originated from the SL main grid
        //
        $hostname = gethostbyaddr($_SERVER['REMOTE_ADDR']); // reverse-resolve the client IP
        if (substr($hostname, -18) != "agni.lindenlab.com")
        {
            die("ERROR: Request not from Main Grid!"); // reject requests that are not from the main SL grid
        }

    This thread exists specifically to alert you to the fact that measures like those will no longer work (indeed, they will cause your service to fail) and give you a chance to replace those checks with something more secure.

    For example, if you have an HTTP GET operation to an external server, you can create your own authentication signature with something like:

      // The SharedSecret value is known by the server as well
      string SharedSecret = "a975c295ddeab5b1a5323df92f61c4cc9fc88207";
      string request_params;
      //
      // ... code that builds the request_params string
      //
      string url = "https://myserver.example.com/api/2.0/operation";

      // The timestamp ensures that even if the request_params are all the
      // same as some earlier request, the authenticator will be different
      string timestamp = llGetTimestamp();
      string authenticator = llSHA1String( timestamp + SharedSecret + request_params );

      key request_id = llHTTPRequest( url + "?" + request_params,
                                      [ HTTP_CUSTOM_HEADER, "X-Script-Auth", timestamp + "," + authenticator ],
                                      "" );


    When the server gets the request, it can read the X-Script-Auth header and the parameter string, do the same SHA1 hash, and compare the authenticators.
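
    The server-side half of that check might look like the following Python sketch (the header name X-Script-Auth and the secret mirror the LSL example above; how you read the header from your web framework is up to you):

```python
import hashlib
import hmac

# Must match the SharedSecret value embedded in the LSL script
SHARED_SECRET = "a975c295ddeab5b1a5323df92f61c4cc9fc88207"

def is_authentic(auth_header, request_params):
    """Recompute the SHA-1 authenticator and compare it to the one sent.

    auth_header is the X-Script-Auth value: "<timestamp>,<authenticator>".
    request_params is the raw query string the script appended to the URL.
    """
    try:
        timestamp, authenticator = auth_header.split(",", 1)
    except ValueError:
        return False  # malformed header
    expected = hashlib.sha1(
        (timestamp + SHARED_SECRET + request_params).encode()
    ).hexdigest()
    # constant-time comparison avoids leaking information via timing
    return hmac.compare_digest(authenticator, expected)
```

    Note that llSHA1String returns lowercase hex, as hexdigest() does here, so the two sides compare directly; you may also want to reject timestamps that are too old to limit replay.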

    An approach like the above is far more secure than using the IP address or hostname of the requestor as an authenticator, since it proves that the requesting script is not merely some script running in SL -  it's the script you wrote that has your secret value in it. 

     

    • Like 2
    • Thanks 1
  5. On 9/20/2020 at 5:21 AM, VirtualKitten said:

    Will all the other parts of the external HTTP request operators function as they did on non-cloud-based services, or will they need a rewrite to run with cloud? Can you please let me know if some of the other changes submitted on Jira have been considered, like opening a stream to pass information in its song headers?

    The cloud uplift version does not add or subtract any features. There are some changes that may affect the remote service - see this post

    • Like 1
  6. 1 hour ago, animats said:

    Why can't Linden Lab get a validation signature from Apple? It's not iOS, where Apple wants a 30% cut of revenue.

    We're working on it. We only recently got our tools updated sufficiently to do so (some part of the most recent Xcode is needed).

    • Like 2
    • Thanks 2
  7. 6 hours ago, Phate Shepherd said:

    I was trying to come up with a solution to existing content that wouldn't require replacing all in-world items that talk to servers with source verification.

    Really, "source authentication" that was just an IP address check wasn't worth much to begin with. Yes, it would filter out some of the annoying but unsophisticated generic attacks on web servers, but it didn't really tell you that requests were coming from the scripts you expected (anyone on any region with any script would have the same IP range that legit scripts did), or that it wasn't an attack. If that's what you had, no change is needed to the in-world content to keep it working because just removing the IP address check at the server will allow the in-world scripts to work.

    Adding a more secure verification that requests are really coming from the specific scripts you expect will be a bit more work, but that would have been worthwhile anyway.

  8. 21 hours ago, Qie Niangao said:

    Is this the end of the Viewer "Setup" preference to "Use the built-in browser for Second Life links only" ?

    With that setting I did a cursory test on Aditi Cloud Sandbox 3 and it used the system browser as if an in-world HTTP server were external, in contrast to the way Agni used the built-in browser, treating it as a Second Life link.

    It doesn't change that setting at all (at least not yet). The point of that setting is to use the internal browser for 'trusted' content created by the Lab. The fact that web servers implemented by Residents in LSL ended up in a subdomain of 'lindenlab.com' was actually a flaw in that design, and part of why we're (nearly) eliminating the use of the corporate domain for addressing of things within SL (to reassure viewer developers: we're not changing the domain of the login servers - that would be too much trouble).

    Whether or not there should be a setting that adds services within 'secondlife.io' to the list for the internal browser is an interesting question.

    • Thanks 1
  9. 17 hours ago, Basil Wijaya said:

    Hi, I want to do some URL tests on Aditi. I started yesterday and the sandbox has an auto-return of 4 hours. As I want to test whether the new URLs will be stable over time, I would need a sandbox with no return. I would leave my test cube there for a while and see how things turn out. Would that be possible?

    We can probably lengthen that (assuming that doesn't result in unmanageable litter); I'll look into it on Monday.

  10. 13 hours ago, Phate Shepherd said:

    Would it be possible for the proxy itself to insert a hash into the http header that is based on the script creator UUID and a shared secret that the creator can change themselves? I hesitate to suggest a hash made from the creator's UUID and their password, but something that the creator already has control over, and can't be reversed to reveal their plaintext password. (At worst, it could be reversed to reveal the shared secret, but the creator could change it, and update the server code.)

    The script can add a custom header that has that signature in its value; there's no need for the proxy to be involved at all.

  11. 47 minutes ago, Sei Lisa said:

    What's the difficulty in implementing a reverse DNS that resolves the proxy IPs to an address within lindenlab.com?

    We don't own the IP addresses (AWS does), so we can't create an authoritative reverse lookup for them. 

    There are lots of DDOS protection services out there for web sites, but for reasons I hope are obvious we can't make recommendations.

    Oh ... and also, we're mostly stopping using 'lindenlab.com' for the backend service domains (it's mostly being replaced by 'secondlife.io' or some subdomain of that).

  12. 18 hours ago, animats said:

    Didn't want to put this in Oz's pinned topic, but that's what it's relevant to.

     

    It would be easier if any questions or updates were all in that one thread.

    In any event, thanks for your testing.

    I don't know whether or not this build has all the latest region-crossing updates.

  13. 6 minutes ago, Madman Magnifico said:

    I generally agree with that, and even when I did do this, it was always just one of many checks performed.  My suggestion was more of an idea that could help make it easier for those people who have made (or paid other people to make) scripts that do this.  It seems it would be doable from your end, and fairly trivial for most people to just direct that reverse lookup to another server.  Just throwing ideas out since I'm guessing that quite a few in-world systems will be affected by the change and not everyone is all that capable of implementing alternatives.

    Unfortunately implementing your suggestion would not be nearly as easy as you might think and would in the end not provide a strong assurance. This is why we're making a special effort to publicize these changes well before they start appearing on the main grid.

    We are well aware that making changes to LSL behavior that are potentially not backwards-compatible has the potential to break content, so we're very careful about doing so. In particular, changes that require scripts to be updated are especially problematic since they're so often relied upon by people who can't update the script. In this case though, it's the remote service that requires adjustment, so our hope is that most will be maintained by someone capable of removing any checks like those (after all, someone is paying for the web server, so they presumably can update it).

  14. 9 hours ago, Madman Magnifico said:

    Could you set up a name server at a specific address that does nothing but verify the IP was recently used as an HTTP-out proxy, and respond with NXDOMAIN if it was not?

    I think this would be the simplest solution for people that are just trying to do some source verification on scripts expecting traffic only from SL simulators...

    Using IP addresses and DNS names as an authentication method isn't a good idea even when it appears to work.

    A much better approach would be to put a secret string in your script and send a hash of that secret and the message along with the message. The server knows the secret, so it can verify that the message is authentic. This verifies not only that it's coming from some simulator, but that the source knows the secret.

  15. A few of the features of LSL HTTP usage will be changing slightly as a part of the migration to using cloud hosted simulators. Our hope is that these changes will not cause any problems, but hope and testing are two different things, so...

    If you are the creator of LSL scripts that use any of the features discussed below, or you use scripts that rely on external HTTP services that were created by someone else, you should test them as soon as possible and report any problems in Jira. 

    As sandboxes where you can test with these changes are deployed, we will post notices in this thread. Some of what is described below is pretty geeky - mostly that doesn't matter because if you can test your scripts and they work (on the new systems), then you didn't make any of the available mistakes and don't need changes. If you are not able to figure out why your scripts fail with these changes, file a Jira and we'll try to help.

    llHTTPRequest - Outbound HTTP

    The interface to this method is not changing (aside from one additional error check - see below), but some of its behavior on the network will change in ways that may confuse servers that are doing inappropriate checks. 

    HTTP requests from LSL to external systems have always been routed through an HTTP proxy, and that will still be true, but in the past it was a proxy dedicated to each simulator host; now the proxy will be in a pool of separate servers as shown here:

    [Diagram: simulators routing outbound HTTP requests through a shared pool of proxy servers]

    This means that:

    • The IP address of the HTTP client, as seen by the HTTP server, will not be the same as the IP address of the simulator that sent the request; the hostname returned by looking up the client (proxy) address will not be the simulator host.
    • Some timeout behaviors for servers that do not respond quickly enough may change slightly.
    • The body of some error responses from the proxies may change, including the content type.
    • Different requests from a given simulator may be routed through different proxies, and requests from different simulators may go through the same proxy (potentially even on the same TCP connection).
    • Scripts that make more than one HTTP request that overlap in time may see changes in the order that those requests arrive at the server and/or the order in which the responses are returned (ordering has never been guaranteed).

    The IP addresses for simulators will be in different ranges and unpredictable; if your server is checking whether or not requests are coming from the simulator addresses in our current datacenter, you will need to remove those checks. We will not be able to provide documentation of the IP addresses of simulators or the proxies.

    None of this should bother the use case of a script using HTTP to communicate with another script; those requests just loop through the same proxies even now, but note the llRequestURL section below.

    The llHTTPRequest parameter HTTP_CUSTOM_HEADER may not be used to set values for the 'Connection' or 'Host' headers; a check is being added so that any call that attempts to do so will throw an error and abort the request (at present these header names are allowed, but the values are usually not actually used or sent to the remote server, which is misleading).

    llRequestURL or llRequestSecureURL

    Note this earlier post on not checking the domain part of the URL allocated to your script. A hostname lookup for a simulator IP address will not return the same name that appears in URLs returned by these methods. That is - looking up the IP address for a simulator name will return the correct address for that simulator, but looking up the name for that address will return a different name. This sort of asymmetry is not unusual, and will be the norm for many of our services in the cloud.
     

    • Like 3
    • Thanks 6