
Oz Linden


Everything posted by Oz Linden

  1. The Shoutcast stream API has changed, and the /7.html is no longer supported. See http://wiki.shoutcast.com/wiki/SHOUTcast_DNAS_Server_2_XML_Reponses#Equivalent_of_7.html
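The new API returns stream statistics as XML rather than the old 7.html comma-separated line. As a rough illustration (in Python, since a full LSL example can't be run here), this is how the commonly used fields might be pulled out of a DNAS 2 style response; the element names below are assumptions based on the documented /stats response, so check them against your own server's output:

```python
# Sketch: parsing the SHOUTcast DNAS 2 XML stats that replace /7.html.
# The element names (CURRENTLISTENERS, SONGTITLE, ...) are assumed from
# the DNAS 2 /stats response format; verify against your server.
import xml.etree.ElementTree as ET

SAMPLE = """<SHOUTCASTSERVER>
  <CURRENTLISTENERS>7</CURRENTLISTENERS>
  <MAXLISTENERS>32</MAXLISTENERS>
  <STREAMSTATUS>1</STREAMSTATUS>
  <SONGTITLE>Artist - Track</SONGTITLE>
</SHOUTCASTSERVER>"""

def parse_stats(xml_text):
    """Extract the fields scripts used to scrape from /7.html."""
    root = ET.fromstring(xml_text)
    return {
        "listeners": int(root.findtext("CURRENTLISTENERS")),
        "max_listeners": int(root.findtext("MAXLISTENERS")),
        "stream_up": root.findtext("STREAMSTATUS") == "1",
        "title": root.findtext("SONGTITLE"),
    }

stats = parse_stats(SAMPLE)
print(stats)
```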
  2. The problem is that the underscore character is not valid in a hostname. We'll see if we can relax the restriction. Watch for new simulator releases; we'll update the Magnum channel Wednesday with the version that's now on those beta regions, and in time the fix will make it to the main channel.
  3. It's our judgement that this wouldn't be a good idea. We think we've made changes that should allow updated scripts to work with most servers (see yesterday's post).
  4. I'm not support; see https://support.secondlife.com/
  5. We have made a change that we hope will reduce the request sizes enough to allow some more servers to work. We plan to put that up on the same RC channels next Wednesday, but you can test it now on the Beta grid, Aditi. Go to any of Leafeon, Sylveon, Umbreon, and Glaceon on Aditi, or you can look for the server version on http://aditi.coreqa.net/gridtool.php (we'll be adding some more regions shortly). If you've never used the Beta grid, see http://wiki.secondlife.com/wiki/Aditi#How_do_I_log_in_to_Aditi.3F
  6. Emmerich ... your server (secondstream.de) is one of those that is behaving badly. At the moment, there is no change you could make to your script that would work with it. We are experimenting with ways to modify our requests that might allow it to work with servers like yours that reject large requests, but I must emphasize that any change we make is liable to be fragile with such poor quality server software, so trying to find a way to improve the streaming server you use is strongly advised. To be clear: it will never work to put any control characters in any header value or in the URL; that's what is invalid about the User Agent value you supplied. The fact that it used to be allowed was a bug (and an embarrassing one). We'll try to put up a region on Aditi later today for more testing... watch this thread.
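The rule about control characters is plain HTTP: a header field value may contain visible ASCII, space, and horizontal tab, but never CR, LF, or other control bytes. A minimal check in that spirit (a Python sketch of the RFC 7230 field-value rule, not Second Life's actual validation code) makes it easy to see why the old hacked value is rejected:

```python
# Sketch: why control characters in a header value are invalid HTTP.
# Per the RFC 7230 field-value grammar, a value may contain visible
# ASCII (0x21-0x7E), space, and horizontal tab -- never CR, LF, or
# other control bytes (which is what the old "\n" hack smuggled in).
def valid_header_value(value: str) -> bool:
    return all(ch == "\t" or 0x20 <= ord(ch) <= 0x7E for ch in value)

ok = valid_header_value("XML Getter (Mozilla Compatible)")
bad = valid_header_value("XML Getter (Mozilla Compatible) \n\n")
print(ok, bad)  # the hacked value fails because of the newlines
```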
  7. Ok ... we're continuing to learn new things ... with the pain that sometimes brings. What follows is going to get a little technical, but bear with me. If you're feeling swamped, skip to the numbered steps at the bottom of this post.

The fix is now behaving the way we intended, and sending perfectly good HTTP requests. We've found that some stream servers (including the ones we were doing our QA with) work just fine with the new option. Unfortunately, some others don't, but now we believe we know why not. The broken servers we've found (including those above) are failing because they won't accept requests as large as the ones we send. If we artificially truncate our requests, they work.

An HTTP request consists of the request line with the URL, followed by a number of header lines. Most of the headers we send are the X-SecondLife-something headers that identify the requesting object, where it is in SL, and who owns it. Lots of scripters use these in their web applications; they're very useful, and they also help us track down misbehaving objects.

So... why did these scripts work with these servers before? We missed one other effect of the old hack:

llHTTPRequest(URL + "/7.html"+" HTTP/1.0\nUser-Agent: XML Getter (Mozilla Compatible) \n\n", [], "");

What everyone thought was important about this hack was that it added "Mozilla Compatible" in a User-Agent header. That may have been, and may still be, important to some servers, but our new option provides for that (we can see in our traces that your scripts are sending your new user agent value with that in it). Note, however, the "\n\n" at the end of the hacked-up URL parameter. We didn't realize the importance of that until just a bit ago. It turns out that that is what was making these broken servers work, because it prematurely terminated the request headers (in HTTP, the headers are terminated by a blank line), truncating the header section. The rest of the headers our server generates (including the standard ones and those useful X-SecondLife-* headers) were being treated as part of the request body by the server, but because nothing in the truncated headers tells the server to expect a body, and these primitive stream servers don't expect one, they were just ignored. Now that we're not cutting them off artificially, the stream server reads the full set of headers and eventually decides that the request is too big, and returns an error. Experimentally, it looks as though our standard requests are a couple of hundred bytes over what the broken servers we've tested against this afternoon will accept.

So what will we do? The short answer is "we don't know yet"; I'm writing this just a few minutes after we diagnosed the problem, and we've got a bit more experimenting to do to work out the limits of these broken servers.

I know, I keep saying "broken servers" as though I'm trying to shift blame. Not really, or at least not exclusively. We should have figured this out a little earlier, and I'll personally take most of the blame for that. We missed the importance of the extra newline, and since the servers we'd picked from the earliest reports of this problem happen not to have the extreme size limitation, our tests all worked. But limiting request sizes to such very low values, and then not returning an error that indicates why the server is giving up, is very poor behavior; there's no excuse for either, really. Compared to what browsers send routinely, these requests are not large at all. So "broken servers" is fair.

So what should you do?

1. Test your scripts (use a Magnum or Cake region; they've got the new option).
2. Do make the change to use the new HTTP_USER_AGENT option; it's permanent and may be important.
3. See if you can update or locate server software that doesn't have the request size limitation, or increase the limit if that's possible. Making the limit 2K bytes would leave lots of headroom.
4. If you find server software that works, or a way to configure yours so that it does, please share that knowledge here as soon as possible.

We're going to experiment some more and think about the problem. Stay tuned for more updates.

Note Well: It is possible that we'll decide that, because this version has other changes that are important to Second Life as a whole (and it does), we'll need to promote it to the rest of the grid even though scripts in it won't work with some stream servers, so please treat the steps above as Very Important.
  8. Anything that worked before (including test scripts) should still work on any region that is not in the Magnum or Cake channel; if it doesn't, the problem is not our server change. If you can find a simple test script that works on the older releases now but fails on the Magnum or Cake channel, let us know.
  9. The old way certainly won't work. Does this test script with these URLs work on other simulator versions? When I try those URLs with curl locally, I get the same results you describe here, so I don't see why they should work from LSL.
  10. The simulator/LSL change is scheduled to roll tomorrow (6/21) to the Magnum RC channel for another try; simulator version 2017-06-19T17:18:00.327192. Thank you all for your patience. This release fixes two things that were responsible for the rollback:

1. Our previous change had broken adding custom headers; those should work again.
2. We modified the server to properly escape space characters in the URL value. Technically, the script should already have done that, but we didn't enforce it before, so there are a lot of scripts out there that send them; we could make the change in a backwards-compatible way, so we did.
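The escaping fix in the second item is ordinary percent-encoding. A quick Python sketch of the idea, using the standard library rather than any actual simulator code:

```python
# Sketch: the server-side fix percent-encodes unsafe characters (such
# as spaces) in the URL before sending, so old scripts that pass raw
# spaces keep working. Standard-library illustration, not SL code.
from urllib.parse import quote

raw_path = "/7.html extra stuff"   # a path with unescaped spaces
escaped = quote(raw_path)          # "/" is left intact by default
print(escaped)                     # spaces become %20
```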
  11. We rolled this back for a little rework... stay tuned.
  12. It would be more useful to show us your script.
  13. Ok.... I've updated the wiki page for llHTTPRequest to show the parameter that will be available with the new server roll. If you previously invoked the method like this to hack a User Agent header into the URL parameter:

HTTPRequest=llHTTPRequest(URL + "/7.html HTTP/1.0\nUser-Agent: LSL Script (Mozilla Compatible)\n\n",[],"");

then it won't work on the new version, because the spaces and newlines are not allowed in the URL (that's actually why it broke in the current Magnum server already). You'll need to change your script to do something like:

HTTPRequest=llHTTPRequest(URL + "/7.html",[HTTP_USER_AGENT, "Stream-Script/1.0 (Mozilla Compatible)"],"");

The User Agent value you provide will be added to the one provided by the server, so both your script and the server version will be identified. As has been noted above, this change is scheduled to roll to the Magnum RC regions tomorrow.
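The post says the script-supplied User Agent value is added to the server's own, so both are identified. A tiny Python sketch of that combining step (the simulator identifier string below is made up for illustration; the real one will differ):

```python
# Sketch: how the final User-Agent header might be assembled -- the
# script-supplied HTTP_USER_AGENT value is appended to the server's
# own identifier. The server string here is a made-up placeholder.
def build_user_agent(server_agent: str, script_agent: str) -> str:
    if not script_agent:
        return server_agent
    return server_agent + " " + script_agent

combined = build_user_agent(
    "SecondLife-Simulator/2017-06-19",            # hypothetical server value
    "Stream-Script/1.0 (Mozilla Compatible)",     # value from the script
)
print(combined)
```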
  14. I should have been more clear... we are aware that it's not just one script, and the fix we're working on will be usable by any script. I expect to have specifics in the next couple of days. In the meantime, the scripts still work if the region is on the main channel, or on the Bluesteel or Le Tigre release channels. Support can move your region if needed.
  15. We have diagnosed the problem with this script, and are reaching out to the author to define a change to the script and the server to restore the functionality.
  16. We're wary of the term "NPC" (whether expanded or not) because it seems to be understood very differently by different people. What we're working on isn't an automated avatar, which would imply a great many other things that have nothing to do with animation at all. We are excited about seeing what our talented creators will do with what we are doing though.
  17. Debugging issues with the CDN isn't very easy, but it's impossible if we don't get very careful and thorough reporting. For example, including where you are, how you are connected to the network (who is your ISP), exactly what sort of problem you had loading (images, inventory, mesh objects, whatever), what region you were on, what time it was (including the timezone or explicitly in SLT). We have occasionally been able to diagnose and correct problems when we had enough information to go on....
  18. Whenever you're asking about a Viewer problem, it's helpful if you include at least the viewer version information from About Second Life (even better is all of what's in that box... we even have a handy button for copying it). As a part of the 64-bit viewer project, we're revamping how upgrades are downloaded and installed in an attempt to make them more robust and quicker (and incidentally to make sure that Windows users get the 64- or 32-bit version that will work best on their system). That component isn't in the Project Viewer yet, but we are integrating it now, so it will be in an update soon.
  19. Have you tried the VLC viewer for your file? It uses much newer media handling. You can download it from http://wiki.secondlife.com/wiki/Linden_Lab_Official:Alternate_Viewers There will be an update to that viewer posted this evening, but the current version should work.
  20. Of course we read the forum ... we're just quiet and retiring by nature
  21. Solving that problem would require major rewrites to lots of viewer code; it is not on our roadmap at present.
  22. Just FYI ... there's nothing magical about 80,000; that's the default for the Medium graphics setting (and most Macs). The default you get will vary depending on the viewer's assessment of your GPU - high-end ones will default higher, and some may default lower. It's a great goal, though. I have my own viewer set to around 100K most of the time, which really improves performance by cutting off the very expensive ones.
  23. This is fixed in the current default viewer, and that fix will be in the next Bento Project Viewer update