
Don't check HTTP-in URL domains


Oz Linden

Recommended Posts

This is a heads-up for anyone who is using llRequestURL or llRequestSecureURL...

It has come to our attention that some users may be validating that the returned URLs are in the domain they expect, presumably by matching them against something like 'sim.*\.agni\.lindenlab\.com'. These checks may have been inspired by simulator bugs that at one time or another have caused URLs to be returned that didn't work because some part of the domain name was missing.

You should not attempt to validate the contents of the URL. The contents, including the domain name, returned by either of those methods will change when we begin using simulators in the cloud, and possibly sooner. The URLs returned will work (they already do in our own internal testing), but you should not assume anything about the URL contents - including the domain name, port number, or anything else.

If you need to be sure that the URL as sent to some client is working, we suggest that you implement a simple health check capability in the handler for your inbound requests, rather than attempting to predict, through any examination of the URL contents, whether or not it will work.
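
For illustration, here is a minimal sketch of the checking side of such a health check, in PHP; the "/ping" path and the "pong" reply are conventions invented for this example, not anything the simulator provides. The in-world script would answer the ping itself in its http_request handler (for example with llHTTPResponse(id, 200, "pong")), and the server simply probes the URL it was given before relying on it:

    <?php
    // Minimal health-check sketch (the "/ping" convention is ours, not an SL API):
    // probe the URL the in-world script registered with us before trusting it.
    function url_is_alive(string $url): bool
    {
        $ch = curl_init($url . "/ping");               // hypothetical ping path
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 5);          // fail fast if the region is gone
        $body = curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);
        return $status == 200 && $body === "pong";     // expected reply is our own convention
    }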

If you have URL content checks in your system now, we suggest that you remove them as soon as possible.

  • Like 2
  • Thanks 10

Thanks for coming through and offering the heads-up on this particular issue.

I hope to see more of these kinds of announcements in the future if/when LL plans on changing things that might cause content breakage or interruption in script-dependent services.

@Oz Linden I would pin this post temporarily at least for a few months, else it will get lost in a sea of new posts.

Edited by Lucia Nightfire
  • Like 1

There are places inside the viewer code that may think they know a little too much about SL URLs:

grep "secondlife.com" *.cpp
fsgridhandler.cpp:const char* MAIN_GRID_SLURL_BASE = "http://maps.secondlife.com/secondlife/";
fsslurl.cpp:const char* LLSLURL::MAPS_SECONDLIFE_COM         = "maps.secondlife.com";
fsslurl.cpp:                // (or its a slurl.com or maps.secondlife.com URL).
llappviewer.cpp:        LL_ERRS() << "Viewer failed to find localization and UI files. Please reinstall viewer from  https://secondlife.com/support/downloads/ and contact https://support.secondlife.com if issue persists after reinstall." << LL_ENDL;
llappviewer.cpp:    // https://releasenotes.secondlife.com/viewer/2.1.0.123456.html
llfloaterland.cpp:    // the search crawler "grid-crawl.py" in secondlife.com/doc/app/search/ JC
llfloatermodelpreview.cpp:    //    validate_url = "http://secondlife.com/my/account/mesh.php";
llfloatermodelpreview.cpp:        validate_url = "http://secondlife.com/my/account/mesh.php";
llfloatermodelpreview.cpp:            if (num_hulls > 256) // decomp cannot have more than 256 hulls (http://wiki.secondlife.com/wiki/Mesh/Mesh_physics)
llimprocessing.cpp:        indx = msg.find(" ( http://maps.secondlife.com/secondlife/");
llmarketplacefunctions.cpp:        std::string domain = "secondlife.com";
llmeshrepository.cpp://       http://wiki.secondlife.com/wiki/Mesh/Mesh_Asset_Format)
llmeshrepository.cpp:// See wiki at https://wiki.secondlife.com/wiki/Mesh/Mesh_Asset_Format
llslurl.cpp:const char* LLSLURL::MAPS_SECONDLIFE_COM         = "maps.secondlife.com";
llslurl.cpp:                // (or its a slurl.com or maps.secondlife.com URL).
llstartup.cpp:        gSavedSettings.setString("MapServerURL", "http://test.map.secondlife.com.s3.amazonaws.com/");
llviewercontrol.cpp:    // AO - Phoenixviewer doesn't want to send unecessary noise to secondlife.com
llviewercontrol.cpp:    //if((std::string)test_BrowserHomePage != "http://www.secondlife.com") LL_ERRS() << "Fail BrowserHomePage" << LL_ENDL;
llviewernetwork.cpp:const std::string SL_UPDATE_QUERY_URL = "https://update.secondlife.com/update";
llviewernetwork.cpp:const std::string MAIN_GRID_SLURL_BASE = "http://maps.secondlife.com/secondlife/";
llviewernetwork.cpp:const std::string MAIN_GRID_WEB_PROFILE_URL = "https://my.secondlife.com/";
llviewernetwork.cpp:    // This file does not contain definitions for secondlife.com grids,
llviewernetwork.cpp:                  "https://secondlife.com/helpers/",
llweb.cpp:        substitution["GRID"] = "secondlife.com";
llweb.cpp:        //boost::regex pattern = boost::regex("\\b(lindenlab.com|secondlife.com)$", boost::regex::perl|boost::regex::icase);
llweb.cpp:        boost::regex pattern = boost::regex("\\b(lindenlab.com|secondlife.com|secondlifegrid.net|secondlife-status.statuspage.io)$", boost::regex::perl|boost::regex::icase);
llwebprofile.cpp: *    -> GET https://my-demo.secondlife.com/ via LLViewerMediaWebProfileResponder
llwebprofile.cpp: *    -> GET "https://my-demo.secondlife.com/snapshots/s3_upload_config" via ConfigResponder
llxmlrpctransaction.cpp:    std::string uri = "http://support.secondlife.com";

Some of those are OK, and some may need attention.

 


Speaking of llRequestSecureURL(), I ran into an issue recently:

curl https://simXXXXX.agni.lindenlab.com:12043/cap/4700d12c-7c84-580a-892c-1f997899a73b
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.

The workaround is to disable the check, but then it's less secure of course.


1 hour ago, Twisted Pharaoh said:

Speaking of llRequestSecureURL(), I ran into an issue recently:


curl https://simXXXXX.agni.lindenlab.com:12043/cap/4700d12c-7c84-580a-892c-1f997899a73b
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.

What's using "curl"?

You probably need a version of "curl" that uses a more recent root certificate store. See:

https://jira.secondlife.com/browse/BUG-228848

https://www.ssl.com/blogs/addtrust-external-ca-root-expired-may-30-2020/

 

 


10 minutes ago, animats said:

What's using "curl"?

curl 7.58.0 (x86_64-pc-linux-gnu) libcurl/7.58.0 OpenSSL/1.1.1g zlib/1.2.11 libidn2/2.3.0 libpsl/0.19.1 (+libidn2/2.0.4) nghttp2/1.30.0 librtmp/2.3
Release-Date: 2018-01-24
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtmp rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IDN IPv6 Largefile GSS-API Kerberos SPNEGO NTLM NTLM_WB SSL libz TLS-SRP HTTP2 UnixSockets HTTPS-proxy PSL

This is the stock curl that comes with Ubuntu 18.04; the system is up to date. I've done a dist-upgrade as suggested by your link, but that did not help, as I do one regularly anyway. However, I'll move to Ubuntu 20.04 when it becomes official (should be next month) and will check again then. Thanks for the links.

 


Wait, when did this happen? All the servers just got a more recent SSL certificate store, because an important root cert expired June 1. It should work now with a current "curl", but it wouldn't have worked for a few days last week.

Try putting the URL into one of the SSL certificate chain checker sites, like https://www.sslchecker.com/sslchecker and see what error messages you get.


2 hours ago, Twisted Pharaoh said:

The workaround is to disable the check, but then it's less secure of course.

The real solution is to download and install the Linden Lab root Certificate Authority certificate and add it to the CA store on your system.  Simulator certificates are signed by our internal CA cert, which is included in the viewer.

You can download it from https://bitbucket.org/lindenlab/llca/raw/master/LindenLab.crt
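
For anyone hitting the certificate error shown earlier from a server-side script, here is an untested sketch of applying that certificate from PHP's cURL bindings without turning verification off (the file path is an example; the URL is the one from the earlier post). The command-line equivalent is curl --cacert LindenLab.crt <url>.

    <?php
    // Sketch: verify a simulator certificate against the downloaded LindenLab.crt
    // instead of disabling peer verification. The path below is just an example.
    $ch = curl_init("https://simXXXXX.agni.lindenlab.com:12043/cap/4700d12c-7c84-580a-892c-1f997899a73b");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);             // keep the check on
    curl_setopt($ch, CURLOPT_CAINFO, "/etc/ssl/LindenLab.crt"); // trust the LL root CA
    $response = curl_exec($ch);

Note that CURLOPT_CAINFO replaces the default CA bundle for that handle, so use a dedicated handle for simulator requests.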

  • Thanks 3

  • 1 month later...
On 6/9/2020 at 3:28 PM, Oz Linden said:

The real solution is to download and install the Linden Lab root Certificate Authority certificate and add it to the CA store on your system.  Simulator certificates are signed by our internal CA cert, which is included in the viewer.

You can download it from https://bitbucket.org/lindenlab/llca/raw/master/LindenLab.crt

I recently came across this very issue, so it's convenient how recent this thread is. Perhaps adding some kind of note about this on wiki/LlRequestSecureURL could be useful for future users who want secure URLs interacting with their own servers?

  • Like 1

  • 2 weeks later...
  • 1 month later...

Will all the other parts of the external HTTP request operators function as they did on non-cloud-based services, or will they need a rewrite to run in the cloud? Can you please let me know if some of the other submitted changes on JIRA have been considered, like opening a stream to pass information in its song headers?


On 9/20/2020 at 5:21 AM, VirtualKitten said:

Will all the other parts of the external HTTP request operators function as they did on non-cloud-based services, or will they need a rewrite to run in the cloud? Can you please let me know if some of the other submitted changes on JIRA have been considered, like opening a stream to pass information in its song headers?

The cloud uplift version does not add or subtract any features. There are some changes that may affect the remote service - see this post.

  • Like 1

On 9/20/2020 at 4:21 AM, VirtualKitten said:

... Can you please let me know if some of the other submitted changes on JIRA have been considered, like opening a stream to pass information in its song headers?

Oh hell no, do not expose Linden Lab to that legal liability.  That will be a fast way to see parcel media stream support stripped from the platform.


  • 2 weeks later...
On 6/9/2020 at 3:24 AM, Oz Linden said:

It has come to our attention that some users may be validating that the returned URLs are in the domain they expect, presumably by matching them against something like 'sim.*\.agni\.lindenlab\.com'. These checks may have been inspired by simulator bugs that at one time or another have caused URLs to be returned that didn't work because some part of the domain name was missing.

Two issues that may be related to this need consideration for in-world and off-world validation and security:

1) In-World

Vending machines and product upgrade delivery scripts need to filter out requests coming from the beta grid and other invalid sources to make sure people are not spending beta grid money to get stuff delivered to their main grid accounts without paying.

With email we have >> The prim's email address is its key with "@lsl.secondlife.com" appended, llGetKey() + "@lsl.secondlife.com"

With HTTP_Request() we have >> string llGetHTTPHeader( key request_id, string header ); -- with a result like https://sim3015.aditi.lindenlab.com:12043/cap/a7717681-2c04-e4ac-35e3-1f01c9861322

This will need to be maintained or else you will have a repeat of the HUGE content theft/exploitation issues we had 10 years ago that drove away so many content creators. ( BTW my content from that time is STILL being passed around for free )

 

2) Off-World

Also, off-world servers need to make sure a request is coming from a Second Life domain and not some other external source. (Read: attempted hack)

For example: in PHP you might use something like this to ensure all requests are coming from Second Life's main grid.

    ///////////////////////////////////////////////////////////////
    //
    // Validate origin was SL main grid
    //
    $hostname = gethostbyaddr($_SERVER['REMOTE_ADDR']); // reverse-resolve the requester's IP
    if (substr($hostname, -18) != "agni.lindenlab.com") // compare the domain suffix
    {
        die("ERROR: Request not from Main Grid!"); // ignore requests that are not from the main SL grid
    }

 

Without the above test you cannot trust other values returned, such as $_SERVER['HTTP_X_SECONDLIFE_OWNER_KEY'], because you cannot be sure the origin was Second Life and not someone spoofing the values.

 This is one of the most critical layers that must be maintained.

Darling Brody

Edited by Darling Brody

7 hours ago, Darling Brody said:

Two issues that may be related to this need consideration for in-world and off-world validation and security:

[...]

Vending machines and product upgrade delivery scripts need to filter out requests coming from the beta grid and other invalid sources to make sure people are not spending beta grid money to get stuff delivered to their main grid accounts without paying.

[...]

With HTTP_Request() we have >> string llGetHTTPHeader( key request_id, string header ); -- with a result like https://sim3015.aditi.lindenlab.com:12043/cap/a7717681-2c04-e4ac-35e3-1f01c9861322

This will need to be maintained or else you will have a repeat of the HUGE content theft/exploitation issues we had 10 years ago that drove away so many content creators. ( BTW my content from that time is STILL being passed around for free )

Also, off-world servers need to make sure a request is coming from a Second Life domain and not some other external source. (Read: attempted hack)

For example: in PHP you might use something like this to ensure all requests are coming from Second Life's main grid.

    ///////////////////////////////////////////////////////////////
    //
    // Validate origin was SL main grid
    //
    $hostname = gethostbyaddr($_SERVER['REMOTE_ADDR']); // reverse-resolve the requester's IP
    if (substr($hostname, -18) != "agni.lindenlab.com") // compare the domain suffix
    {
        die("ERROR: Request not from Main Grid!"); // ignore requests that are not from the main SL grid
    }

This thread exists specifically to alert you to the fact that measures like those will no longer work (indeed, they will cause your service to fail) and give you a chance to replace those checks with something more secure.

For example, if you have an HTTP GET operation to an external server, you can create your own authentication signature with something like:

      // The SharedSecret value is known by the server as well
      string SharedSecret = "a975c295ddeab5b1a5323df92f61c4cc9fc88207";
      string request_params;
      //
      // ... code that builds the request_params string
      //
      string url = "https://myserver.example.com/api/2.0/operation";

      // The timestamp ensures that even if the request_params are all the
      // same as some earlier request, the authenticator will be different
      string timestamp = llGetTimestamp();
      string authenticator = llSHA1String( timestamp + SharedSecret + request_params );

      key request_id = llHTTPRequest( url + "?" + request_params,
                                      [ HTTP_CUSTOM_HEADER, "X-Script-Auth", timestamp + "," + authenticator ],
                                      "" );

When the server gets the request, it can read the X-Script-Auth header and the parameter string, do the same SHA1 hash, and compare the authenticators.
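
On the server side, the comparison might look like the following PHP sketch (the header name and authenticator layout follow the LSL example above; hash_equals() avoids timing leaks):

    <?php
    // Sketch of the verification described above: recompute the SHA-1 over
    // timestamp + shared secret + query string and compare authenticators.
    $SharedSecret = "a975c295ddeab5b1a5323df92f61c4cc9fc88207"; // same value as in the script

    $parts = explode(",", $_SERVER["HTTP_X_SCRIPT_AUTH"] ?? "");
    $timestamp = $parts[0] ?? "";
    $authenticator = $parts[1] ?? "";

    $expected = sha1($timestamp . $SharedSecret . $_SERVER["QUERY_STRING"]);
    if (!hash_equals($expected, $authenticator)) {
        http_response_code(403);
        die("bad authenticator");
    }
    // Optionally reject stale timestamps here to blunt replay of captured requests.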

An approach like the above is far more secure than using the IP address or hostname of the requestor as an authenticator, since it proves that the requesting script is not merely some script running in SL -  it's the script you wrote that has your secret value in it. 

 

  • Like 2
  • Thanks 1

Hi Oz,

Thank you for that code snippet.  It is similar to what I already do.  I encrypt the messages with a pre-shared key in a very similar way.

The reason I had to implement a check that the message was coming from SL was all the permissions exploits that permitted no-mod scripts to be opened, compromising any shared secrets and encryption keys. While I am not aware of any active exploits that can still force open a no-mod script, it is something I would like to be able to protect against just in case a new exploit is accidentally created. Back in 2009-ish someone deleted my entire customer database after compromising the permissions on one of my customer registration scripts to obtain the encryption key I use to communicate with the server.

Here is what I was defending against in detail:

  • Someone cracks open my script with a permission exploit and copies my shared secret, thus allowing them to send my server correctly encrypted messages.
  • They discover I am also checking against the owner of the prim to reject messages from prims that are not owned by me, so they send the message from outside SL with a fake value loaded into HTTP_X_SECONDLIFE_OWNER_KEY.
  • This is where knowing the message comes from SL, where they can't fake headers, is very important, as I can reject messages from outside SL that may contain spoofed headers.

If there is ever another active permission exploit to open scripts, how do we protect our servers?

Suggestions?

 


2 hours ago, Darling Brody said:

If there is ever another active permission exploit to open scripts, how do we protect our servers?

I'm afraid that I don't have a suggestion for you there. 2009 was before my time ... I can assure you that protecting script sources is very much top-of-mind around here.

  • Thanks 1

  • 2 weeks later...
  • 3 weeks later...

Doing at least a basic validation of the URL has important use cases in open source APIs. We use an API where users use an OAuth-like system to supply their own delivery boxes, and they are able to customize the code if they want to. That means the server must check that it is actually sending requests to Linden Lab, so the user cannot use our server to DDoS or make requests to unseemly places on the internet. We recently added a list of whitelisted domains like secondlife.com, lindenlab.com, and secondlife.io, as sketched below.
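
For what it's worth, a sketch of that whitelist check in PHP (the helper name and exact domain list are ours; keep the list configurable given the changes discussed earlier in this thread):

    <?php
    // Sketch: only allow outbound requests to hosts under a whitelisted domain,
    // so user-supplied delivery URLs cannot point our server at arbitrary hosts.
    function host_is_whitelisted(string $url): bool
    {
        $whitelist = ["secondlife.com", "lindenlab.com", "secondlife.io"];
        $host = strtolower((string) parse_url($url, PHP_URL_HOST));
        foreach ($whitelist as $domain) {
            // accept the domain itself or any subdomain of it
            // (str_ends_with() needs PHP 8; use substr_compare() on older versions)
            if ($host === $domain || str_ends_with($host, "." . $domain)) {
                return true;
            }
        }
        return false;
    }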

Edited by Tonaie

  • 3 years later...

Hi Oz, any ideas please? As this did not work, I thought I could try url = "https://secondlife.com/search?query_term=Galadriel%27s&search_type=standard&collection_chosen=events&maturity=gma"; but this did not work either, yet it does in a browser. Any ideas?


Edited by VirtualKitten

  • Lindens

Necropost! Doubly so as Oz is no longer here. A quick check with 'curl' shows that that URL needs to go through about seven 3XX redirects to finally produce output, and the final response is about 24KB. I suspect http-out limits are generating a 499 error, and extended error status would provide more information. (I haven't checked this - just throwing it back out there.)
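
If you want to reproduce that check without the curl command line, a rough PHP equivalent (using the URL from the post above) is:

    <?php
    // Sketch: follow the redirect chain and report how many hops it takes,
    // roughly reproducing the curl check described above.
    $ch = curl_init("https://secondlife.com/search?query_term=Galadriel%27s"
                  . "&search_type=standard&collection_chosen=events&maturity=gma");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_MAXREDIRS, 10);
    $body = curl_exec($ch);
    printf("redirects: %d, final size: %d bytes\n",
           curl_getinfo($ch, CURLINFO_REDIRECT_COUNT),
           strlen((string) $body));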

  • Thanks 1

Monty Linden

Thank you for your kind reply. We are currently paying 200L for every single event each month; this is expensive, and we thought we could get at these events from inside Second Life, and there seems no reason why not!

I have already tried limiting the response, as below, which returns the same 499. Using it without http or https says it's not available. It was providing a response saying it returned nothing.

integer bodylength = 2048;
url = "http://search.secondlife.com/?query_term=Galadriel%27s&search_type=standard&collection_chosen=events&maturity=gma";
http_request_id = llHTTPRequest(url,
    [HTTP_METHOD, "GET",
     HTTP_USER_AGENT, "LSL_Script(Mozilla Compatible)",
     HTTP_MIMETYPE, "application/x-www-form-urlencoded",
     HTTP_VERIFY_CERT, FALSE,
     HTTP_BODY_MAXLENGTH, bodylength], "");

The full source code is here:

 


  • Lindens

I think I agree that this should be more readily available. The support ticket is one way to try to kick things along. You might also want to go to https://feedback.secondlife.com/ and file a bug or feature request on this.

I noticed the redirection chain is going through an auth endpoint and is doing a good amount of cookie churn to pass context along.  There may be a way to start in the middle of the chain and get a search result back easily.

  • Thanks 1
