Oz Linden

Lindens
Everything posted by Oz Linden

  1. I think you underestimate the ability of SL Residents to adapt. For some history of why we made this change, see my posts in https://community.secondlife.com/forums/topic/406941-dj-board-display-not-working-properly/ I regret that it wasn't reasonable to make an automatic adjustment in the simulator to correct this, but sometimes that's the way it works out. We try to keep disruptions like this to a minimum, and I'm making a special effort on this one to reach out to the authors of scripts that are affected. Frankly, my biggest regret is that the URL hack was not noticed long ago and a better solution (like the current one) provided then (well before my time, but that's an imperfect excuse) - it should never have worked, and it's surprising to me that it ever did. Software surprises you that way sometimes. We have a couple of other improvements to llHTTPRequest in the pipeline (ones that won't break anything); watch this thread for announcements when they're available for testing.
  2. In the latest main channel simulator version (2017-07-11T22:13:46.327548), we deployed some updates that changed the llHTTPRequest call. These were made necessary by updates to some of the underlying HTTP libraries we use, and are intended to make HTTP use from LSL easier to debug and more robust, but in a few cases they have broken some existing scripts. This note summarizes the changes and provides guidance on how to update your scripts.

URLs

The most important change, and the one that seems to be responsible for most of the problems that have been reported, is in the handling of the URL parameter. In previous versions, the URL was not checked, and some scripts had done things that shouldn't have worked (and now they don't):

Control characters in the URL. Specifically, this has usually been newline characters. There were widely used hacks that inserted newlines in order to get around restrictions on the use of some headers, and those hacks also truncated other important headers that the simulator inserts in all requests. This will now produce a run-time error without sending the request. The fix is to remove the newlines and any additional headers they were inserting; if it was the User-Agent header, there is a new parameter you can use to provide a value for that header.

Spaces in the URL. Space characters are not allowed in URLs, but because many scripts insert them, we have put in special handling to convert them to %20 in most cases. If your script is passing values returned from other LSL calls that may return spaces, you should do the replacement before putting them into the URL, like:

    string URL = "http://example.com/path?region=" + llEscapeURL(llGetRegionName());

Depending on how your server works, you may need to substitute a plus sign ('+') for the spaces in parameter values rather than the '%20'.
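A minimal sketch of that replacement, assuming a hypothetical server at example.com and a hypothetical query parameter named region (substitute your own host and parameter names):

```lsl
// Sketch only: build a query URL from a value that may contain spaces.
// llEscapeURL percent-encodes reserved characters, turning each space into %20.
string region = llEscapeURL(llGetRegionName());

// If your server expects form-style encoding instead, replace %20 with '+'
// (llParseStringKeepNulls/llDumpList2String is the usual LSL substitution idiom).
region = llDumpList2String(llParseStringKeepNulls(region, ["%20"], []), "+");

string URL = "http://example.com/path?region=" + region;  // example.com is a placeholder
```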
Header Value Changes

We added a new HTTP_USER_AGENT parameter that lets you append to the User-Agent header value; in some cases, servers look for key words in this header.

We made requests shorter and more compatible with more servers by sending one long Accept header listing all the allowed MIME types, rather than many headers with one type each. As far as we know this has not caused any problems.

The default User-Agent server token value was changed from 'Second Life LSL' to 'Second-Life-LSL'; this appears to have caused problems for some servers that were checking for the old value.

Problems

If you are having problems with these changes, you may reply on this thread or file a JIRA and we'll make an effort to help you. Please include the part of the script that is making the call to llHTTPRequest and the values that are being passed to it, and describe any errors you are getting on the Debug chat channel or from your web server.
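As a sketch of the new parameter described above ("My-Board/1.0" and the URL are placeholders, not values from this thread):

```lsl
// Sketch only: HTTP_USER_AGENT appends to the simulator's own User-Agent value,
// so the request identifies both your script and the server version.
key req = llHTTPRequest("http://example.com/stats",  // placeholder URL
                        [HTTP_USER_AGENT, "My-Board/1.0 (Mozilla Compatible)"],
                        "");
```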
  3. The 'ping' value on the status bar cannot be compared to the times for a traceroute or ping command; it's an entirely different mechanism with quite different performance. The viewer displays the times for application messages that are processed by the simulator; those other commands display times for network packets that are handled at a much lower level and so are normally significantly faster. The login servers are separate from the simulators. To measure connectivity to login, the name to test is login.agni.lindenlab.com (which is really any one of many redundant servers). With as little as there is in your post to go on, it's impossible to guess what your problem might be but we monitor the responsiveness of login servers very very closely and they are fine, so chances are that the problem is either something in your system or in the network between you and the server.
  4. This change is rolling to the main channel now, and will roll to any RC channels it isn't already on in tomorrow's roll. Apologies for the delays and difficulties.
  5. Due to an unrelated issue, the scheduled roll of this version to the rest of the grid will not happen this week.
  6. To be clear, that's L$30. Note also that enabling Search also gives you a Place Page - a Linden-hosted web page linked to your land that you can customize.
  7. This script, which requests a JSON response and then uses llJsonGetValue to parse it, worked with that server:

    key kSentRequest;
    string host = "stardust.wavestreamer.com:8062";
    string path = "/stats?sid=1&json=1";
    string stream_status;

    default
    {
        state_entry()
        {
            string URL = "http://" + host + path;
            llOwnerSay("URL:" + URL);
            kSentRequest = llHTTPRequest(URL, [], "");
        }

        http_response(key kRecRequest, integer intStatus, list lstMeta, string strBody)
        {
            string title = llJsonGetValue(strBody, ["songtitle"]);
            llOwnerSay(title);
        }

        on_rez(integer start_param)
        {
            llResetScript();
        }
    }

In this case, adding a User-Agent was not needed, but apparently some Shoutcast servers can be configured to either require or prohibit certain user agents.
  8. I tried that stardust wavestreamer manually with a few different variations; all returned 403. That status normally means "I know who you are, and you're not allowed to see that". Maybe you could ask the server owner why not?
  9. Spaces are not valid in a User-Agent token. Try [HTTP_USER_AGENT, "XML-Getter (Mozilla Compatible)"]; even better would be [HTTP_USER_AGENT, "XML-Getter/1.0 (Mozilla Compatible)"].
  10. The latest version of the fix is on the Magnum and Cake regions now: 2017-06-29T17:02:05.327400
  11. Because the X-SecondLife headers are useful in debugging the server and region (finding misbehaving objects), we won't provide an option to suppress them, but if it turns out that there are enough servers out there that the new fix (being rolled to the RC channels as I'm typing this) doesn't support, we'll consider other steps to take. Making that decision will require reports about what's not working. So far, I have not seen a report of a Shoutcast v1 server this doesn't fix (v2 does not support this API). The fix being rolled now still does not allow underscores in the host name; I expect that we'll add that in some future version, but would not block this release from the main channel for that (I don't have the last word on that, though).
  12. The Shoutcast stream API has changed, and the /7.html is no longer supported. See http://wiki.shoutcast.com/wiki/SHOUTcast_DNAS_Server_2_XML_Reponses#Equivalent_of_7.html
  13. The problem is that the underscore character is not valid in a hostname. We'll see if we can relax the restriction. Watch for new simulator releases; we'll update the Magnum channel Wednesday with the version that's now on those beta regions, and in time the fix will make it to the main channel.
  14. It's our judgement that this wouldn't be a good idea. We think we've made changes that should allow updated scripts to work with most servers (see yesterday's post).
  15. I'm not support; see https://support.secondlife.com/
  16. We have made a change that we hope will reduce the request sizes enough to allow some more servers to work. We plan to put that up next week on the same RC channels on Wednesday, but you can test it now on the Beta grid Aditi. Go to any of Leafeon, Sylveon, Umbreon and Glaceon on Aditi, or you can look for server version 17.06.28.327400 on http://aditi.coreqa.net/gridtool.php (we'll be adding some more regions shortly). If you've never used the Beta grid, see http://wiki.secondlife.com/wiki/Aditi#How_do_I_log_in_to_Aditi.3F
  17. Emmerich ... your server (secondstream.de) is one of those that is behaving badly. At the moment, there is no change you could make to your script that would work with it. We are experimenting with ways to modify our requests that might allow it to work with servers like yours that reject large requests, but I must emphasize that any change we make is liable to be fragile with such poor quality server software, so trying to find a way to improve the streaming server you use is strongly advised. To be clear: it will never work to put any control characters in any header value or in the URL; that's what is invalid about the User Agent value you supplied. The fact that it used to be allowed was a bug (and an embarrassing one). We'll try to put up a region on Aditi later today for more testing... watch this thread.
  18. Ok ... we're continuing to learn new things ... with the pain that sometimes brings. What follows is going to get a little technical, but bear with me. If you're feeling swamped, skip to the numbered steps at the bottom of this post.

The fix is now behaving the way we intended, and sending perfectly good HTTP requests. We've found that some stream servers (including the ones we were doing our QA with) work just fine with the new option. Unfortunately, some others don't, but now we believe we know why not. The broken servers we've found (including those above) are failing because they won't accept requests as large as we send. If we artificially truncate our requests, they work.

An HTTP request consists of the request line with the URL, followed by a number of header lines. Most of the headers we send are the X-SecondLife-something headers that identify the requesting object, where it is in SL, and who owns it. Lots of scripters use these in their web applications; they're very useful and also can help us to track down misbehaving objects.

So... why did these scripts work with these servers before? We missed one other effect of the old hack:

    llHTTPRequest(URL + "/7.html" + " HTTP/1.0\nUser-Agent: XML Getter (Mozilla Compatible) \n\n", [], "");

What everyone thought was important about this hack was that it added "Mozilla Compatible" in a User-Agent header. That may have been and may still be important to some servers, but our new option provides for that (we can see in our traces that your scripts are sending your new user agent value with that in it). Note, however, the "\n\n" at the end of the hacked-up URL parameter. We didn't realize its importance until just a bit ago. It turns out that that is what's making these broken servers work, because it prematurely terminates the request headers (in HTTP, the headers are terminated by a blank line) - truncating the header.

The rest of the headers our server generates (including standard ones and those useful X-SecondLife-* headers) were being treated as part of the request body by the server, but because nothing in the truncated headers tells the server to expect a body, and these primitive stream servers don't expect one, they were just ignored. Now that we're not cutting them off artificially, the stream server reads the full set of headers and eventually decides that the request is too big and returns an error. Experimentally, it looks as though our standard requests are a couple of hundred bytes over what the broken servers we've tested against this afternoon will accept.

So what will we do? The short answer is "we don't know yet"; I'm writing this just a few minutes after we've diagnosed the problem, and we've got a bit more experimenting to do to work out the limits of these broken servers.

I know, I keep saying "broken servers" as though I'm trying to shift blame. Not really, or at least not exclusively. We should have figured this out a little earlier, and I'll personally take most of the blame for that. We missed the importance of the extra newline, and since the servers we'd picked from the earliest reports of this problem happen not to have the extreme size limitation, our tests all worked. But limiting request sizes to such very low values, and then not returning an error that indicates why the request is being refused, is very poor behavior - there's no excuse for either, really. Compared to what browsers send routinely, these requests are not large at all. So "broken servers" is fair.

So what should you do?

 1. Test your scripts (use a Magnum or Cake region; they've got the new option).
 2. Do make the change to use the new HTTP_USER_AGENT option; it's permanent and may be important.
 3. See if you can update or locate server software that doesn't have the request size limitation, or increase the limit if that's possible. Making the limit 2K bytes would leave lots of headroom.
 4. If you find server software that works, or learn how to configure it so that it does, please share that knowledge here as soon as possible.

We're going to experiment some more and think about the problem. Stay tuned for more updates.

Note Well: It is possible that we'll decide that because it has other changes that are important to Second Life as a whole (and it does), we'll need to promote this version to the rest of the grid, even though scripts in it won't work with some stream servers, so please treat the steps above as Very Important.
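To illustrate the truncation mechanism described in that post (this is an annotated sketch, not a real capture: header names below the blank line are the kinds of headers discussed in this thread, but the values are invented placeholders and most headers are elided):

```
GET /7.html HTTP/1.0
User-Agent: XML Getter (Mozilla Compatible)
                                    <-- the "\n\n" from the hack ends here: a blank
                                        line, which tells the server the headers
                                        are finished
Accept: text/html, ...              <-- everything below was treated as an unread
X-SecondLife-Object-Name: Stream Board  body and silently ignored by the server
X-SecondLife-Owner-Name: Example Resident
...
```

With the hack removed, all of these headers are actually read as headers, and a server with a very small request-size limit gives up and returns an error instead.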
  19. Anything that worked before (including test scripts) should still work on any region that is not in the Magnum or Cake channel; if it doesn't, the problem is not our server change. If you can find a simple test script that works on the older releases now and fails on the Magnum or Cake channel, let us know.
  20. The old way certainly won't work. Does this test script with these URLs work on other simulator versions? When I try those URLs with CURL locally, I get the same results you describe here, so I don't see why they should work from LSL.
  21. The simulator/LSL change is scheduled to roll tomorrow (6/21) to the Magnum RC channel for another try; simulator version 2017-06-19T17:18:00.327192. Thank you all for your patience. This release fixes the two things that were responsible for the rollback:

 1. Our previous change had broken adding custom headers; those should work again.
 2. We modified the server to properly escape space characters in the URL value. Technically, the script should already have done that, but we didn't enforce it before, so there are a lot of scripts out there that send them; since we could make the change in a backwards-compatible way, we did.
  22. We rolled this back for a little rework... stay tuned.
  23. It would be more useful to show us your script.
  24. Ok.... I've updated the wiki page for llHTTPRequest to show the parameter that will be available with the new server roll. If you previously invoked the method like this to hack a User-Agent header into the URL parameter:

    HTTPRequest = llHTTPRequest(URL + "/7.html HTTP/1.0\nUser-Agent: LSL Script (Mozilla Compatible)\n\n", [], "");

then it won't work on the new version, because spaces and newlines are not allowed in the URL (that's actually why it broke in the current Magnum server already). You'll need to change your script to do something like:

    HTTPRequest = llHTTPRequest(URL + "/7.html", [HTTP_USER_AGENT, "Stream-Script/1.0 (Mozilla Compatible)"], "");

The User-Agent value you provide will be added to the one provided by the server, so both your script and the server version will be identified. As has been noted above, this change is scheduled to roll to the Magnum RC regions tomorrow.