
Sarah Passerine

Resident
  • Posts

    4
  • Joined

  • Last visited

Reputation

2 Neutral


  1. This sounds related to https://jira.secondlife.com/browse/BUG-227756 as well. I've seen similar weirdness on Dolly Dreams, when someone couldn't apply fingernail polish to their Maitreya hands because the scripts in one hand had stopped working.
  2. Yes, it's a heavily scripted sim, and that's why it might be a good test bed to show where the new way of handling scripts is not working. So far, this particular issue has caused my product update to break no-copy items owned by others due to scripts failing to communicate, and caused general uncertainty with all sorts of gadgets, both worn and rezzed.
  3. Is there a chance that the update here is making it so that some scripts don't execute at all, or are delayed indefinitely? I've noticed a bunch of RLV things breaking on Dolly Dreams, which gets fixed with a region restart. But when things stop working, it's like script roulette. To be clear, these are scripts without a race condition, but simple things like a menu not popping up when requested by a link message spawned from a touch event, as if the script were not executing at all. Are you sure this is working as intended, and every script is subscribed to the appropriate listens?
  4. I think the new Firestorm beta, found at http://www.firestormviewer.org/downloads/, contains the viewer fix for this issue. Thanks so much, Monty, for all of your help here.
  5. I finally got a moment to make that change in lltexturefetch.cpp, and it works much better than the one in http policy, so far. Textures are loading, and doing so very quickly. Thanks! This might just be the thing that fixes it for other tethering users as well, and it would be wonderful to add it to the main viewer code.
  6. So, it appears that the Content-Length and Content-Range headers disagree in a curl grab of www.example.com as well.
  7. $ curl -H 'Range: bytes=0-33554431' -D headers.txt -o body.txt http://www.example.com/
       % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                      Dload  Upload   Total   Spent    Left  Speed
       0 32.0M    0  1270    0     0     43      0  9d 00h  0:00:28  9d 00h      0
     curl: (18) transfer closed with 33553162 bytes remaining to read

     $ more headers.txt
     HTTP/1.1 206 Partial Content
     Accept-Ranges: bytes
     Cache-Control: max-age=604800
     Content-Range: bytes 0-1269/1270
     Date: Mon, 30 Sep 2013 14:42:34 GMT
     Etag: "359670651"
     Expires: Mon, 07 Oct 2013 14:42:34 GMT
     Last-Modified: Fri, 09 Aug 2013 23:54:35 GMT
     Server: ECS (atl/FCAA)
     X-Cache: HIT
     Content-Type: text/html
     Content-Length: 33554432

     $ curl -H 'Range: bytes=0-33554431' -D headers2.txt -o body2.txt http://www.example.com/
       % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                      Dload  Upload   Total   Spent    Left  Speed
       0 32.0M    0  1270    0     0     48      0  8d 02h  0:00:26  8d 02h      0
     curl: (18) transfer closed with 33553162 bytes remaining to read

     $ more headers2.txt
     HTTP/1.1 206 Partial Content
     Accept-Ranges: bytes
     Cache-Control: max-age=604800
     Content-Range: bytes 0-1269/1270
     Date: Mon, 30 Sep 2013 14:47:09 GMT
     Etag: "359670651"
     Expires: Mon, 07 Oct 2013 14:47:09 GMT
     Last-Modified: Fri, 09 Aug 2013 23:54:35 GMT
     Server: ECS (atl/FCAA)
     X-Cache: HIT
     Content-Type: text/html
     Content-Length: 33554432
  8. Thanks a lot for the more refined code! It took me a while to trace exactly where I could inject a little hack, and that spot does the trick perfectly for me. But yes, I understand it can't be in the general viewer, as that error is legitimate for others. When it comes to the carriers getting their act together, I'm not really going to hold my breath. A debug setting, as was suggested, may be nice in this case, to give others with this problem a way to pick their poison. An even better way to handle this might be to tweak libcurl so that if it notices a Content-Length header set to its maximum possible value, it assumes the number is bogus and does not flag the partial-file error.
  9. For what it is worth, I bit the bullet and compiled my own viewer with a hack to work around this problem. I suspect the server is sending the correct headers, but our data providers are modifying them somewhere in the middle. For those with the know-how: edit _httppolicy.cpp and, at line 353, add:

     // If partial content, try to use it anyway, because Content-Length header can't be trusted.
     if (! op->mStatus)
     {
         if (op->mStatus.toString() == "Transferred a partial file")
         {
             op->mStatus = HttpStatus();
         }
     }
  10. I am not running a local HTTP cache. I'm fairly sure the carriers are doing some sort of in-between work, because if you exceed your allotted tethering amounts, website viewing winds up being sent to their upsell page.
  11. Also, oddly enough, baking was working on LeTigre for a week, until yesterday.
  12. Well, it's nice to know it isn't T-Mobile specific. Perhaps it is more likely to be fixed.
  13. This is why the conversation has moved to the jira. The issue has been traced to the viewer receiving bad Content-Length headers from http://bake-texture.agni.lindenlab.com. I don't know if this is happening on LL's side, T-Mobile's side, the NSA's side, or whatever, but at least the problem has been identified!
  14. If it is some sort of unfixable problem due to the network type, then at least we know the new method isn't as robust as the previous way of doing things.
  15. Logs uploaded at the bug filed at: https://jira.secondlife.com/browse/BUG-3323 Please forgive me if I did something wrong; this was my first report! Also, penfold83, the connection was a steady HSPA+ 21.6 Mb/s link. When it is spotty, SL gets incredibly weird, with messages posting out of order, and multiple times too! In this case, something else is going on.