LSL HTTP Changes Coming


Oz Linden


Recommended Posts

On 1/12/2021 at 1:50 PM, Monty Linden said:

I'm going to assume you mean the desktop app is failing with that message.  One thing LSL doesn't yet provide is a fixed point where pieces of a distributed, scripted system can go to find the other pieces.  This usually means a small service on the internet where the pieces can register and search for other pieces (and do license checks, etc.).  Looking at the installation instructions for the HUD, I'd guess that this function is associated with the 'AOS-Extreme: Object-DNS successfully updated:[200]' line in Local Chat.  If you are still getting this status, there's a good chance that part is working.  If not, the HUD needs a patch.

If the HUD is fine, the problem will be with either the service or the desktop app, or both.  There are some debug modes and interesting buttons ('CM') on the app.  Look for clues there as to what the app thinks it is trying to connect to and where data may be corrupted.  Look out for truncated hostnames in particular.  If the app lacks sufficient debug capabilities, you'll need to look at environmental tools to tell you what is happening:  Wireshark, syscall tracers, etc.  Those will inform possible actions.
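To make that "fixed point" idea concrete, here is a minimal sketch (an illustration only, with a placeholder registry URL, not the HUD's actual code) of a script obtaining an HTTP-in URL and registering it with an external service so the other pieces can find it:

string REGISTRY = "https://example.com/register";   // hypothetical registry endpoint

default
{
    state_entry()
    {
        llRequestURL();   // ask the region for a temporary HTTP-in URL
    }

    http_request(key id, string method, string body)
    {
        if (method == URL_REQUEST_GRANTED)
        {
            // body holds the granted URL; publish it so the other pieces (e.g. a desktop app) can find this script
            llHTTPRequest(REGISTRY,
                [HTTP_METHOD, "POST", HTTP_MIMETYPE, "application/x-www-form-urlencoded"],
                "url=" + llEscapeURL(body));
        }
        else if (method == URL_REQUEST_DENIED)
        {
            llOwnerSay("No URL available: " + body);
        }
    }

    changed(integer change)
    {
        // granted URLs are lost on region restart or teleport, so re-register
        if (change & (CHANGED_REGION_START | CHANGED_REGION | CHANGED_TELEPORT))
            llRequestURL();
    }
}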

 

Yes, it's the desktop app that fails.

 


On 1/12/2021 at 3:38 PM, Lucia Nightfire said:

Just an FYI, such setups can definitely miss things because they use sensors, which have arc-based ranges (spheres or segments), so unless all the scanned areas of interest overlap completely you will have blind spots. Also, there is a limit to the number of returns a sensor can make, so anything beyond that limit could be missed.

It would help if we had a script function to scan a parcel with filtering options and specified object key return limit.

Thomas did a fine job. His sim scanner didn't miss anything at all.  I used it extensively. I've moved entire sims by going down the Excel sheet and moving each object, either by "restore to last position" or by drag and drop then edit for the no-copy ones.


On 1/12/2021 at 1:50 PM, Monty Linden said:

I'm going to assume you mean the desktop app is failing with that message.  One thing LSL doesn't yet provide is a fixed point where pieces of a distributed, scripted system can go to find the other pieces.  This usually means a small service on the internet where the pieces can register and search for other pieces (and do license checks, etc.).  Looking at the installation instructions for the HUD, I'd guess that this function is associated with the 'AOS-Extreme: Object-DNS successfully updated:[200]' line in Local Chat.  If you are still getting this status, there's a good chance that part is working.  If not, the HUD needs a patch.

If the HUD is fine, the problem will be with either the service or the desktop app, or both.  There are some debug modes and interesting buttons ('CM') on the app.  Look for clues there as to what the app thinks it is trying to connect to and where data may be corrupted.  Look out for truncated hostnames in particular.  If the app lacks sufficient debug capabilities, you'll need to look at environmental tools to tell you what is happening:  Wireshark, syscall tracers, etc.  Those will inform possible actions.

 

Thank you, this makes total sense. 


On 1/13/2021 at 8:33 PM, Rolig Loon said:

My own scan object, which I have sold in MP for years, is clunky and brute force, but it works.  It's just a spherical object, essentially a drone, that hops all over the region, following a grid pattern and doing overlapping 16m scans.  It stores everything internally, cleans up duplicates, and dumps a report out at the end.  I made a second one that makes a series of vertical hops from ground level to 4000m.  I've never taken the time to make either version very sophisticated, so they don't send data to a remote server or anything.  Designing something like that is pretty simple.
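A rough sketch of that grid-hopping approach (an illustration of the idea only, not the product described above; the spacing, altitude, and output format are arbitrary choices):

float SCAN_RADIUS = 16.0;
float STEP = 20.0;          // hop spacing, small enough that the 16 m spheres overlap
list seen;                  // object keys already recorded
integer gx = -1;
integer gy = 0;

scanNext()
{
    gx += 1;
    if ((gx + 0.5) * STEP > 256.0) { gx = 0; gy += 1; }
    if ((gy + 0.5) * STEP > 256.0)
    {
        llOwnerSay("Done: " + (string)llGetListLength(seen) + " unique objects.");
        return;
    }
    // a real scanner would also follow terrain height and make vertical hops
    llSetRegionPos(<(gx + 0.5) * STEP, (gy + 0.5) * STEP, 40.0>);
    llSensor("", NULL_KEY, ACTIVE | PASSIVE, SCAN_RADIUS, PI);
}

default
{
    state_entry() { scanNext(); }

    sensor(integer n)   // at most 16 returns per scan, hence the overlapping grid
    {
        integer i;
        for (i = 0; i < n; ++i)
        {
            key k = llDetectedKey(i);
            if (llListFindList(seen, [k]) == -1)   // crude de-duplication
            {
                seen += k;
                llOwnerSay(llDetectedName(i) + " | owner " + (string)llDetectedOwner(i)
                    + " | " + (string)llDetectedPos(i));
            }
        }
        scanNext();
    }

    no_sensor() { scanNext(); }
}

Script memory for the seen list is the limiting factor; a full-region scan would need to dump results to chat or a server as it goes.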

TYSM,  I'll check it out.

As an aside, why doesn't Area Search work better? Is it really not possible to find all objects within a specified range without using drones?

There's nothing in SL that does what Conover's program did, so far as I can tell.  An Excel sheet that lists every object in a sim, giving locations, owner, and group assignment, is just fantastic when moving parcels, repairing areas, finding objects, or seeing what other people have littered on the land.

[attached image: Conover.jpg]


2 hours ago, Knobs Slade said:

As an aside, why doesn't Area Search work better?

That would be a question for Firestorm developers, wouldn't it?

AFAIK, the nearest approximation in the Linden viewer (and present in all others too, I suppose) is Build / Pathfinding / "Region Objects", which can list all the stuff you could move around (mostly your own stuff). Catznip (at least) also has World / "My Objects...", which is really limited to your own stuff but has more handy filters. These don't find all the other folks' stuff, but I've nonetheless found them very useful for finding items I've accidentally strewn about the place. Unfortunately, whether Linden or third-party, SL viewers seem to eschew APIs at all cost, and even (for now?) only rarely offer text copy-to-clipboard at all. Not gonna get a spreadsheet that way.

There would be enormous utility in an LSL function that could fetch batches of all objects in a region, filtered by owner and parcel and probably other "Area Search"-inspired criteria, without needing to rez a probe and push it around while managing llSensor ranges and angles to try to stay under the 16 item return limit.


  • 3 months later...
On 9/15/2020 at 9:28 PM, SophieJeanneLaDouce said:

In response to Oz Linden's comment:
I know that this is a specialized way and doesn't address most cases, but it fits my needs.

I use OAuth2 in my scripts.
When a script starts, it presents client credentials  to an authentication server.

It receives an access token back, which the script remembers (below). This oauth2header is added to all the HTTP requests from the script.


list oauth2header = [HTTP_CUSTOM_HEADER,"Authorization","Bearer " + llJsonGetValue(body, ["access_token"])];

The benefit I receive is middleware on my server rejects all requests that don't have a valid access token.
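A minimal sketch of that flow, assuming a hypothetical token endpoint and placeholder credentials (the real URLs, client id, and secret are not shown here):

string TOKEN_URL = "https://example.com/oauth/token";   // placeholder authentication server
string API_URL = "https://example.com/api/ping";        // placeholder protected endpoint
list oauth2header;
key token_req;

default
{
    state_entry()
    {
        // 1. present client credentials to the authentication server
        token_req = llHTTPRequest(TOKEN_URL,
            [HTTP_METHOD, "POST",
             HTTP_MIMETYPE, "application/x-www-form-urlencoded",
             HTTP_ACCEPT, "application/json"],
            "grant_type=client_credentials&client_id=MY_ID&client_secret=MY_SECRET");
    }

    http_response(key id, integer status, list meta, string body)
    {
        if (id == token_req && status == 200)
        {
            // 2. remember the access token and attach it to every later request
            oauth2header = [HTTP_CUSTOM_HEADER, "Authorization",
                "Bearer " + llJsonGetValue(body, ["access_token"])];
            llHTTPRequest(API_URL, [HTTP_METHOD, "GET"] + oauth2header, "");
        }
    }
}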
 

Hi, does this actually work? Currently it appears that the Authorization header is part of the blocked list, so this comes back as a denied custom header. I've also noticed that if you don't set HTTP_ACCEPT then it silently fails. Apologies if this has been discussed already, but I would really love to use this feature. Currently the only way I can see this working is if I intercept the incoming request on my server and set the Authorization header manually based on a token input.


  • Lindens

It should work.  'Authorization' is not on the blocked list.  Check the error and your custom header list.  As for 'Accept', I haven't checked this, but if you don't specify it, we'll default to a long list of acceptable types, which may startle your server.


  • 3 months later...

Has anything changed regarding HTTP requests during the last couple of weeks, or the last couple of sim rollouts? I ask because, starting somewhere around 10 days to 2 weeks ago, about 1.25% of all my requests fail with a 502 error (or a 499 error if it was an https request). This problem does not depend on load, script, time of day, day of week, or region: about 1.25% of all requests fail, spread out 24/7, with no discernible pattern to the failures.

I ran a test where I pinged my server every 30 seconds, for hours and hours on end, from a client outside of the AWS cloud, with no lost requests or errors at all. That same ping from inside SL results in the mentioned 1.25% loss due to 502 or 499 errors.

I don't know what to make of it, or how to troubleshoot it further.

The server setup is a dedicated Linux i3-9100 server running nginx, PHP, and MySQL. The internet connection is 50 Mb/s up and down over fiber. It had been running with no errors for over a year (up until 2 weeks ago).
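For reference, a sketch of the kind of 30-second probe described above, tallying failed requests so the error rate can be measured (the URL is a placeholder):

string PING_URL = "https://example.com/ping";   // placeholder
integer sent;
integer failed;

default
{
    state_entry() { llSetTimerEvent(30.0); }

    timer()
    {
        ++sent;
        llHTTPRequest(PING_URL, [HTTP_METHOD, "GET"], "");
    }

    http_response(key id, integer status, list meta, string body)
    {
        if (status >= 400)   // includes 499, the script-side failure/timeout status
        {
            ++failed;
            llOwnerSay("Failure " + (string)failed + " of " + (string)sent
                + " requests (status " + (string)status + ")");
        }
    }
}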


4 hours ago, M Peccable said:

Has anything changed regarding HTTP requests during the last couple of weeks, or the last couple of sim rollouts? I ask because, starting somewhere around 10 days to 2 weeks ago, about 1.25% of all my requests fail with a 502 error (or a 499 error if it was an https request). This problem does not depend on load, script, time of day, day of week, or region: about 1.25% of all requests fail, spread out 24/7, with no discernible pattern to the failures.

I ran a test where I pinged my server every 30 seconds, for hours and hours on end, from a client outside of the AWS cloud, with no lost requests or errors at all. That same ping from inside SL results in the mentioned 1.25% loss due to 502 or 499 errors.

I don't know what to make of it, or how to troubleshoot it further.

The server setup is a dedicated Linux i3-9100 server running nginx, PHP, and MySQL. The internet connection is 50 Mb/s up and down over fiber. It had been running with no errors for over a year (up until 2 weeks ago).

@Monty Linden mentioned something about http-out changes in the last server group meeting. Maybe he can elaborate.


  • Lindens

DM or email (monty @) information to narrow the search:  target URL, date, time (and timezone), region where requests were launched.  Both for the normal script and test script cases.

The usual answer is that dealing with 5xx responses is something to be expected, though there are exceptional cases where we can show that the endpoint is behaving badly or that a sea cable is involved.  HTTP Out generates over 10K 5xx status returns a minute; it's very normal.
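Since some 5xx responses are unavoidable, a simple retry-with-backoff sketch (the URL and retry limits are illustrative only):

string URL = "https://example.com/api";   // placeholder
integer tries;

send() { llHTTPRequest(URL, [HTTP_METHOD, "GET"], ""); }

default
{
    state_entry() { send(); }

    http_response(key id, integer status, list meta, string body)
    {
        if (status >= 500 && tries < 3)
        {
            ++tries;
            llSetTimerEvent(5.0 * tries);   // wait a little longer before each retry
        }
        else
        {
            tries = 0;
            // handle the response (or give up) normally here
        }
    }

    timer()
    {
        llSetTimerEvent(0.0);
        send();
    }
}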


12 minutes ago, Monty Linden said:

Cable ingress has traditionally been a choke point where traffic may be shed.  I've frequently seen problems on cable hops.  There are no alternatives but the new cables are getting faster and faster (bandwidth, not latency).

Ah, the dreaded "oversubscribed" issue, I see.  UDP traffic, if SL is still using it, is often what gets early discard-eligible marking when entering a switching platform, and subsequently what gets discarded when an egress interface is congested.  I like how carriers tell people their data travels at "the speed of light".  What some people do not know, and it's not their fault, is that, because glass is denser than vacuum and air, light is slower when traveling through it!  Light in optical fiber is even slower than signals in coaxial cable!  I had some fun setting up a demonstration of this for our CEO 21 years ago.  I used two reels of cable of the same length, 1,000 meters.  I set up my time-domain reflectometry test set to run tests on both.  He boggled at the result, then started looking at cable specifications.  "Oh, so that's what 'velocity factor' means!", he shouted a few minutes later.  He took notes, did the math, then showed it to me.
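Rough numbers, assuming a fiber core index of about n = 1.47: v = c / n ≈ 204,000 km/s, a velocity factor of roughly 0.68.  So 1,000 meters of fiber takes about 4.9 µs one way, versus about 3.3 µs in vacuum, and about 3.9 µs in foam-dielectric coax with a velocity factor of 0.85.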

Bandwidth is not speed (velocity) alone.

@Monty Linden I wonder if some sort of tunnel could be used by subscribers to encapsulate their UDP traffic within a long-life TCP flow, to get away from the "UDP is expendable" attitude coded into much of the equipment that makes up the Internet?


  • Lindens
13 minutes ago, Ardy Lay said:

I wonder if some sort of tunnel could be used by subscribers to encapsulate their UDP traffic within a long-life TCP flow, to get away from the "UDP is expendable" attitude coded into much of the equipment that makes up the Internet?

Using a *TCP* VPN could be a way of improving things where capacity is otherwise available.  But it can all go wrong still:

  • It (partially) moves the choke point from the cable to the VPN ingress as UDP packet loss is turned into TCP retransmits and backpressure.  If that ingress prioritizes stupidly or has other shortcomings, the experience won't be good.
  • If you don't pay attention and use a UDP-based VPN protocol, you turn your TCP into UDP and make the entire experience worse.

I'd really love to see experiments done and results shared here.  I expect results to vary wildly based on location, local ISP, backbone carrier, VPN, and time.


  • Lindens
42 minutes ago, Ardy Lay said:

"Oh, so that's what 'velocity factor' means!", he shouted a few minutes later.  He had taken notes, did the math, then showed it to me.

This reminds me of a story I read when very young and probably recall incorrectly now.  When one of the first sea cables was being laid, a problem came up with the electrical performance of the cable.  (This may have been the cable laid by Brunel's Great Eastern.)  So they asked Edison to come in and have a look at the problem.  Being a tinkerer and no fan of Maxwell, he did what he knew and wired up a telegraph set:  battery, key, spool, sounder.  Oh, and the spool was one continuous run of 500 or 1000 miles of cable in a massive hold.  Edison checked his work and closed the key and...  nothing.  Open, close, open, close... nothing happened.  He finally just closed the key and waited.  Several hours later, the sounder closed.  The inductance of that coil was such that it took that long to build enough of a magnetic field to allow sufficient current to close the sounder.


3 hours ago, Monty Linden said:

This reminds me of a story I read when very young and probably recall incorrectly now.  When one of the first sea cables was being laid, a problem came up with the electrical performance of the cable.  (This may have been the cable laid by Brunel's Great Eastern.)  So they asked Edison to come in and have a look at the problem.  Being a tinkerer and no fan of Maxwell, he did what he knew and wired up a telegraph set:  battery, key, spool, sounder.  Oh, and the spool was one continuous run of 500 or 1000 miles of cable in a massive hold.  Edison checked his work and closed the key and...  nothing.  Open, close, open, close... nothing happened.  He finally just closed the key and waited.  Several hours later, the sounder closed.  The inductance of that coil was such that it took that long to build enough of a magnetic field to allow sufficient current to close the sounder.

https://spectrum.ieee.org/the-first-transatlantic-telegraph-cable-was-a-bold-beautiful-failure


HEY!  Instead of all this bull***** that doesn't help beginner scripters, how about somebody leave some script examples, since you all decided to break all my updaters?  I mean you guys just go off and change whatever you want and pay no mind to stuff that's been working perfectly fine for half a decade ... no big deal, right?
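In the meantime, a generic version-check pattern (an illustrative sketch only, not tied to any particular product or to whatever specific change broke the updaters above; the URL is a placeholder):

string VERSION_URL = "https://example.com/latest-version";   // placeholder
string MY_VERSION = "1.0";

default
{
    state_entry()
    {
        llHTTPRequest(VERSION_URL, [HTTP_METHOD, "GET"], "");
    }

    http_response(key id, integer status, list meta, string body)
    {
        if (status != 200)
        {
            llOwnerSay("Version check failed with status " + (string)status);
            return;
        }
        if (llStringTrim(body, STRING_TRIM) != MY_VERSION)
            llOwnerSay("An update is available: " + body);
    }
}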

