
Oz Linden


Everything posted by Oz Linden

  1. This change is rolling to the main channel now, and will roll to any RC channels it isn't already on in tomorrow's roll. Apologies for the delays and difficulties.
  2. Due to an unrelated issue, the scheduled roll of this version to the rest of the grid will not happen this week.
  3. To be clear, that's L$30. Note also that enabling Search also gives you a Place Page - a Linden-hosted web page linked to your land that you can customize.
  4. This script, which requests a JSON response and then uses llJsonGetValue to parse it, worked with that server:

key kSentRequest;
string host = "stardust.wavestreamer.com:8062";
string path = "/stats?sid=1&json=1";
string stream_status;

default
{
    state_entry()
    {
        string URL = "http://" + host + path;
        llOwnerSay("URL:" + URL);
        kSentRequest = llHTTPRequest(URL, [], "");
    }

    http_response(key kRecRequest, integer intStatus, list lstMeta, string strBody)
    {
        string title = llJsonGetValue(strBody, ["songtitle"]);
        llOwnerSay(title);
    }

    on_rez(integer start_param)
    {
        llResetScript();
    }
}

In this case, adding a User Agent was not needed, but apparently some Shoutcast servers can be configured to either require or prohibit certain user agents.
  5. I tried that stardust wavestreamer manually with a few different variations; all returned 403. That status normally means "I know who you are, and you're not allowed to see that". Maybe you could ask the server owner why not?
  6. Spaces are not valid in a User Agent token. Try [HTTP_USER_AGENT, "XML-Getter (Mozilla Compatible)"] - even better would be [HTTP_USER_AGENT, "XML-Getter/1.0 (Mozilla Compatible)"]
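For context, a minimal call using a token of that form might look like the sketch below. The URL and the "XML-Getter/1.0" product token are placeholders; substitute your own script name and version, keeping the token itself free of spaces.

```lsl
// Sketch: request with a spec-compliant User-Agent token.
// "example.com" and "XML-Getter/1.0" are illustrative placeholders.
default
{
    state_entry()
    {
        llHTTPRequest("http://example.com:8000/stats?sid=1",
            [HTTP_USER_AGENT, "XML-Getter/1.0 (Mozilla Compatible)"],
            "");
    }
}
```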
  7. The latest version of the fix is on the Magnum and Cake regions now: 2017-06-29T17:02:05.327400
  8. Because the X-SecondLife headers are useful in debugging the server and region (finding misbehaving objects), we won't provide an option to suppress them. If it turns out that there are enough servers out there that the new fix (being rolled to the RC channels as I'm typing this) doesn't support, we'll consider other steps to take; making that decision will require reports about what's not working. So far, I have not seen a report of a Shoutcast v1 server this doesn't fix (v2 does not support this API). The fix being rolled now still does not allow underscores in the host name; I expect that we'll add that in some future version, but I would not block this release from the main channel for that (I don't have the last word on that, though).
  9. The Shoutcast stream API has changed, and the /7.html endpoint is no longer supported. See http://wiki.shoutcast.com/wiki/SHOUTcast_DNAS_Server_2_XML_Reponses#Equivalent_of_7.html
  10. The problem is that the underscore character is not valid in a hostname. We'll see if we can relax the restriction. Watch for new simulator releases; we'll update the Magnum channel Wednesday with the version that's now on those beta regions, and in time the fix will make it to the main channel.
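Until that restriction is relaxed, a script can at least warn its owner up front instead of failing silently. A small sketch, with an invented host name used purely for illustration:

```lsl
// Sketch: detect an underscore in the stream host before requesting.
// "my_stream.example.com" is a made-up host used only for illustration.
string host = "my_stream.example.com:8000";

default
{
    state_entry()
    {
        if (llSubStringIndex(host, "_") != -1)
        {
            llOwnerSay("Warning: '" + host
                + "' contains an underscore, which is not valid in a hostname.");
        }
        else
        {
            llHTTPRequest("http://" + host + "/stats?sid=1", [], "");
        }
    }
}
```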
  11. It's our judgement that this wouldn't be a good idea. We think we've made changes that should allow updated scripts to work with most servers (see yesterday's post).
  12. I'm not support; see https://support.secondlife.com/
  13. We have made a change that we hope will reduce the request sizes enough to allow some more servers to work. We plan to put that up next week on the same RC channels on Wednesday, but you can test it now on the Beta grid, Aditi. Go to any of Leafeon, Sylveon, Umbreon and Glaceon on Aditi, or you can look for server version 17.06.28.327400 on http://aditi.coreqa.net/gridtool.php (we'll be adding some more regions shortly). If you've never used the Beta grid, see http://wiki.secondlife.com/wiki/Aditi#How_do_I_log_in_to_Aditi.3F
  14. Emmerich ... your server (secondstream.de) is one of those that is behaving badly. At the moment, there is no change you could make to your script that would work with it. We are experimenting with ways to modify our requests that might allow it to work with servers like yours that reject large requests, but I must emphasize that any change we make is liable to be fragile with such poor quality server software, so trying to find a way to improve the streaming server you use is strongly advised. To be clear: it will never work to put any control characters in any header value or in the URL; that's what is invalid about the User Agent value you supplied. The fact that it used to be allowed was a bug (and an embarrassing one). We'll try to put up a region on Aditi later today for more testing... watch this thread.
  15. Ok ... we're continuing to learn new things ... with the pain that sometimes brings. What follows is going to get a little technical, but bear with me. If you're feeling swamped, skip to the numbered steps at the bottom of this post.

The fix is now behaving the way we intended, and sending perfectly good HTTP requests. We've found that some stream servers (including the ones we were doing our QA with) work just fine with the new option. Unfortunately, some others don't, but now we believe we know why not. The broken servers we've found (including those above) are failing because they won't accept requests as large as the ones we send. If we artificially truncate our requests, they work.

An HTTP request consists of the request line with the URL, followed by a number of header lines. Most of the headers we send are the X-SecondLife-something headers that identify the requesting object, where it is in SL, and who owns it. Lots of scripters use these in their web applications; they're very useful, and they can also help us to track down misbehaving objects.

So... why did these scripts work with these servers before? We missed one other effect of the old hack:

llHTTPRequest(URL + "/7.html" + " HTTP/1.0\nUser-Agent: XML Getter (Mozilla Compatible) \n\n", [], "");

What everyone thought was important about this hack was that it added "Mozilla Compatible" in a User-Agent header. That may have been, and may still be, important to some servers, but our new option provides for that (we can see in our traces that your scripts are sending your new user agent value with that in it). Note, however, the "\n\n" at the end of the hacked-up URL parameter. We didn't realize its importance until just a bit ago. It turns out that that is what was making these broken servers work: in HTTP, the headers are terminated by a blank line, so the "\n\n" was prematurely terminating the request headers, truncating the header.

The rest of the headers our server generates (including standard ones and those useful X-SecondLife-* headers) were being treated as part of the request body by the server, but because nothing in the truncated headers tells the server to expect a body, and these primitive stream servers don't expect one, they were just ignored. Now that we're not cutting them off artificially, the stream server reads the full set of headers, eventually decides that the request is too big, and returns an error. Experimentally, it looks as though our standard requests are a couple of hundred bytes over what the broken servers we've tested against this afternoon will accept.

So what will we do? The short answer is "we don't know yet"; I'm writing this just a few minutes after we diagnosed the problem, and we've got a bit more experimenting to do to work out the limits of these broken servers.

I know, I keep saying "broken servers" as though I'm trying to shift blame. Not really, or at least not exclusively. We should have figured this out a little earlier, and I'll personally take most of the blame for that. We missed the importance of the extra newline, and since the servers we'd picked from the earliest reports of this problem happen not to have the extreme size limitation, our tests all worked. But limiting request sizes to the very low values these servers do, and then not returning an error that indicates why they are giving up, is very poor behavior; there's no excuse for either, really. Compared to what browsers routinely send, these requests are not large at all. So "broken servers" is fair.

So what should you do?

1. Test your scripts (use a Magnum or Cake region; they've got the new option).
2. Make the change to use the new HTTP_USER_AGENT option; it's permanent and may be important.
3. See if you can update or locate server software that doesn't have the request size limitation, or increase the limit if that's possible. Making the limit 2K bytes would leave lots of headroom.
4. If you find server software that works, or a way to configure it so that it does, please share that knowledge here as soon as possible.

We're going to experiment some more and think about the problem. Stay tuned for more updates.

Note Well: It is possible that we'll decide that, because this version has other changes that are important to Second Life as a whole (and it does), we'll need to promote it to the rest of the grid even though scripts on it won't work with some stream servers, so please treat the steps above as Very Important.
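To make the size issue concrete, a simulator request looks roughly like the sketch below. Everything here is illustrative rather than a captured trace: the path, host, object names, and keys are invented, and the exact header set and ordering vary, but the X-SecondLife-* header names are the documented ones.

```
GET /stats?sid=1 HTTP/1.1
Host: stream.example.com:8000
User-Agent: Second Life LSL/2017-06 (http://secondlife.com) Stream-Script/1.0 (Mozilla Compatible)
Accept: */*
X-SecondLife-Shard: Production
X-SecondLife-Object-Name: Radio Tuner
X-SecondLife-Object-Key: 01234567-89ab-cdef-0123-456789abcdef
X-SecondLife-Region: Example Region (256256, 256256)
X-SecondLife-Owner-Name: Example Resident
X-SecondLife-Owner-Key: 01234567-89ab-cdef-0123-456789abcdef
X-SecondLife-Local-Position: (128.0, 128.0, 25.0)
```

Those identifying headers add up to a few hundred bytes on their own, which is why a server with a very small header-size limit gives up partway through reading them.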
  16. Anything that worked before (including test scripts) should still work on any region that is not in the Magnum or Cake channel; if it doesn't, the problem is not our server change. If you can find a simple test script that works on the older releases now and fails on the Magnum or Cake channel, let us know.
  17. The old way certainly won't work. Does this test script with these URLs work on other simulator versions? When I try those URLs with CURL locally, I get the same results you describe here, so I don't see why they should work from LSL.
  18. The simulator/LSL change is scheduled to roll tomorrow (6/21) to the Magnum RC channel for another try; simulator version 2017-06-19T17:18:00.327192. Thank you all for your patience. This release fixes the two things that were responsible for the rollback. First, our previous change had broken adding custom headers; those should work again. Second, we modified the server to properly escape space characters in the URL value. Technically, the script should already have done that escaping, but we didn't enforce it before, so there are a lot of scripts out there that send spaces; since we could make the change in a backwards-compatible way, we did.
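If you'd rather not rely on that server-side escaping, llEscapeURL can be applied in the script itself. A small sketch, with an invented host and query value; note that llEscapeURL percent-encodes non-alphanumeric characters, so it should be applied to individual components, never to the whole URL (it would escape the slashes and colon too).

```lsl
// Sketch: escape a query value that may contain spaces before requesting.
// "stream.example.com" and the song name are illustrative placeholders.
string host = "stream.example.com:8000";

default
{
    state_entry()
    {
        string song = "my favorite song";  // value containing spaces
        string url = "http://" + host + "/search?q=" + llEscapeURL(song);
        llOwnerSay(url);  // the spaces appear as %20
        llHTTPRequest(url, [], "");
    }
}
```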
  19. We rolled this back for a little rework... stay tuned.
  20. It would be more useful to show us your script.
  21. Ok.... I've updated the wiki page for llHTTPRequest to show the parameter that will be available with the new server roll. If you previously invoked the method like this to hack a User Agent header into the URL parameter:

HTTPRequest = llHTTPRequest(URL + "/7.html HTTP/1.0\nUser-Agent: LSL Script (Mozilla Compatible)\n\n", [], "");

then it won't work on the new version, because the spaces and newline are not allowed in the URL (that's actually why it broke in the current Magnum server already). You'll need to change your script to do something like:

HTTPRequest = llHTTPRequest(URL + "/7.html", [HTTP_USER_AGENT, "Stream-Script/1.0 (Mozilla Compatible)"], "");

The User Agent value you provide will be added to the one provided by the server, so both your script and the server version will be identified. As has been noted above, this change is scheduled to roll to the Magnum RC regions tomorrow.
  22. I should have been more clear... we are aware that it's not just one script, and the fix we're working on will be usable by any script. I expect to have specifics in the next couple of days. In the meantime, the scripts still work if the region is on the main channel, or on the Bluesteel or Le Tigre release channels. Support can move your region if needed.
  23. We have diagnosed the problem with this script, and are reaching out to the author to define a change to the script and the server to restore the functionality.
  24. We're wary of the term "NPC" (whether expanded or not) because it seems to be understood very differently by different people. What we're working on isn't an automated avatar, which would imply a great many other things that have nothing to do with animation at all. We are excited about seeing what our talented creators will do with what we are doing though.