
Monty Linden

Everything posted by Monty Linden

  1. jackpilson225 wrote: Anyone tell me about java programming language? Best to avoid. Friends and family will shun you. PDP-10 macro assembly... now that people will respect.
  2. It enrages us, too. Fixing it is in the backlog. /me eyes backlog angrily...
  3. Mimaaah wrote: I live in Perth, Western Australia and I'm currently with iinet.com. Do you know a way I can fix this? The network ping and tracert are reasonable. Perth to Phoenix is about 15,000 km: a 100 ms round trip at the speed of light, 200 ms in practice. The 200+ ms jump to adl.on.ii.net is a little odd. If that is Adelaide, ii.net may be playing with traffic and you might want to get technical with them. Sim Ping is another beast and adds the time to service a UDP request in the simulator. This should be 'a bit' higher than network ping time, and 1000 ms is not really reasonable. High values can be caused by simulator load, network congestion, and packet loss (which I think gets folded in - need to verify that someday). A good test is to teleport to an unloaded simhost (approximated by TPing or region-crossing to a quiet region or island).
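The Perth-to-Phoenix numbers above are easy to sanity-check. A minimal sketch of the arithmetic (the 2/3 fiber-propagation factor is a rule-of-thumb assumption, not a measured value for this route):

```python
# Back-of-envelope check of the Perth-to-Phoenix latency figures.
# Light in fiber travels at roughly 2/3 of c in vacuum, which is one
# reason real-world RTT sits well above the vacuum floor.
C_VACUUM_KM_S = 299_792          # speed of light in vacuum, km/s
FIBER_FACTOR = 2 / 3             # typical propagation speed in fiber (assumption)

def rtt_ms(distance_km: float, factor: float = 1.0) -> float:
    """Round-trip time in milliseconds for a given one-way distance."""
    return 2 * distance_km / (C_VACUUM_KM_S * factor) * 1000

vacuum = rtt_ms(15_000)                 # ~100 ms, the theoretical floor
fiber = rtt_ms(15_000, FIBER_FACTOR)    # ~150 ms, closer to the observed 200 ms
print(f"vacuum floor: {vacuum:.0f} ms, fiber estimate: {fiber:.0f} ms")
```

Routing detours, queuing, and middleboxes account for the remaining gap between the ~150 ms fiber estimate and the ~200 ms seen in practice.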
  4. There are some... oddities... in how large messages are broken up in chat. Since chat is getting some attention right now, this would be a good time to get a Jira filed.
  5. This has been an intriguing thread...
  6. Hallelujah! Very glad this is improving. Would like to hear from some of the other corners of the Earth. Now we can move on to those interest list bugs (objects disappearing behind your back). Or maybe bringing back pie menus...
  7. SaraCarena wrote: @Monty Did the deployment of CDN make those throttles irrelevant? Probably not. I think they existed for reasons other than asset download, having to do with scene description (i.e. interest list) and some heavy-weight operations in the simulator. @Carol Yes, the network architecture of SL has changed quite a bit in the past year. So it's time for a new picture! :matte-motes-evil: What follows gives some details on the three main components (viewer, servers, and now CDN), the communication between them, and which viewer debug settings affect which communication streams. To the left (in red) are pieces of the viewer. To the right (in blue) are simhosts/simulators and other backend services. At the bottom (in green) are the new CDN services. Solid lines with arrowheads are communication paths, either UDP or TCP/HTTP. Dashed lines indicate legacy communication paths that are now, or soon will be, deprecated, obsoleted, and/or deleted. Ball-and-stick objects between a communication path and a text label indicate a viewer debug setting and the communication path or paths that setting influences. These, too, come in solid and dashed flavors, the latter indicating obsolescence. And as always, at least one error crept into my diagram. In this case, the 'HttpPipelining' setting only influences mesh and texture communications. Inventory is currently unaffected by this setting. [image has been corrected - ed] Generally, things are moving in the direction of simplification and less resource conflict. The mesh and texture HTTP traffic, which is usually the greatest load, tends to part ways with the UDP traffic a few network hops after a user's router or modem. Lacking TCP's throttling mechanism, UDP often wins in a fight (give or take the efforts of fairness algorithms along the path). Allowing UDP to overrun the path between viewer and simulator still degrades the experience, and the bandwidth setting remains an effective tool for avoiding this problem. 
Other settings should generally be left alone. A lot of bad advice was spread around in the community in an effort to work around throughput problems. We're trying to undo that history and get back on track with more typical (albeit aggressive) HTTP patterns.
  9. SaraCarena wrote: I don't think meshmaxconcurrentrequests is active anymore, it should be Mesh2MaxConcurrentRequests now. That's correct. The older setting became ineffective around a year ago. OP may be experiencing CDN issues as reported here and elsewhere.
  9. Samual Wetherby wrote: Think such hacks are best for those wishing to pirate download stuff or other malicious activities. Nothing routes around network breakage as quickly as a 14-year-old looking for porn.
  10. DOCSIS channels are something below the IP layer. They're analogous to choosing the type of Ethernet you want to use (10BaseT, 100BaseTX, 1000BaseT, etc.). It's the multi-channel aspect of 3.X that makes near-GigE speeds possible between cable head and a customer's home. Possible. Concurrent TCP connections are a separate thing and should be more relevant to this problem. Yes, the viewers (Linden and TPV) have been bad network citizens in this area. SL has historically launched approximately 100 connections on login with a cold cache. Some very, very bad advice in the community had people bumping this up to over 500. The current SL viewer uses 16 plus a number of utility connections, some persistent, some not. I hope this number drops a bit more, and one of the workarounds I suggested earlier dropped that 16 to eight. It's effective in some cases (such as very low-end or buggy hardware), just not this one. Work continues behind the scenes with positive results. I wish I could say more and give a timeline but I can't. There are other areas of experimentation possible in a search for an effective workaround. All of the ones I mention below are fragile (subject to DNS changes, network changes, service providers not liking the interesting new traffic on their service, and users forgetting that they've played with things) and I don't recommend pursuing them. But if you must and are comfortable and familiar with these approaches, you might consider: DNS hacks. Use a local HOSTS file to redirect CDN requests to a PoP in a different area. Proxy hacks. The HTTP traffic in the viewer can be redirected to an HTTP proxy located in another country. VPN hacks. VPN services are increasingly available with the goal of presenting a user as being in another country. I've wanted to play with this myself for a number of reasons, just haven't had time.
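For concreteness, the HOSTS-file redirect mentioned above would look something like this. The IP address is a placeholder from the TEST-NET-3 documentation range, and which CDN hostname the viewer actually resolves is an assumption here (asset-cdn.agni.lindenlab.com is mentioned elsewhere in this thread):

```
# Windows: C:\Windows\System32\drivers\etc\hosts
# Linux/macOS: /etc/hosts
# Redirect a CDN hostname to a hand-picked PoP address.
# 203.0.113.10 is a placeholder; you would substitute a real PoP IP.
203.0.113.10    asset-cdn.agni.lindenlab.com
```

Remember to remove the entry afterward; a stale HOSTS override is exactly the "users forgetting that they've played with things" failure mode described above.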
  11. SwiftXShadow wrote: thats just wrong... it was on LOW before and 5-8 FPS... now ultra 40-50 x.x Glad you're sorted out and happier. But we'll make this a learning moment while I'm still awake. The i7-2630QM is a quad-core processor and SL is (historically, mostly, accidentally) a single-threaded app. Under certain particularly active conditions it will use about 2.5 cores but steady-state is 1.0 plus-or-minus. And that will look like 25% CPU in many system tools. The 'Performance' tab of 'Windows Task Manager' will give more detail and you'll probably see one core pinned to the ceiling.
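The "25% CPU" observation is just arithmetic, but it trips people up often enough to be worth spelling out:

```python
# Why a single-threaded app reads "25% CPU" on a quad-core machine:
# most task managers report utilization as a fraction of ALL cores,
# so one fully busy core on a quad-core shows as 25% overall.
cores = 4
busy_cores = 1.0                      # SL steady-state per the post above
overall = busy_cores / cores * 100
print(f"{overall:.0f}% overall CPU")  # one pinned core looks like 25%
```

The per-core view in Task Manager's 'Performance' tab avoids this averaging and shows the single pinned core directly.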
  12. Thanks for running a test. And the log file is full of timeout errors (8 retries)? Clearly not a successful workaround for you. There is another setting to try, though it is mostly a factor when connecting via inexpensive, questionable tethering gear (I'm looking at you, Android): HttpPipelining (set to false). Combined with the previous two, it may be a very slow load, but I'm interested in error-free first.
  13. Sean Heying wrote: Resepctfully Monty, No worries, I can take a bit of heat. In no way is this intended to dismiss the problems people are seeing. They are real and work is progressing behind the firewall on getting the experience up to expectations. I'm just offering an expedient treat-the-symptom approach. Something to alleviate some pain now and get sufferers closer to where they want to be. It may also give some insight into where we might go with service monitoring or adaptive behavior in the client.
  14. Samual Wetherby wrote: 8pm SLT Tons of Fetching Errors and retry timeouts to CDN server. Tracing route to cds.y8a2h6u5.hwcdn.net [] over a maximum of 30 hops: Traceroute/tracert are useful tools but they have limitations. They are useful for finding layer 2, 3, and 4 problems (low-level networking, routing issues, firewalls). But they're not so good at layer 7 problems (weird or wrong HTTP protocol handling). And when multiple companies are involved, everyone wants to toss the potato to someone else. A better tool or diagnostic approach, one that is harder to repudiate, needs to be developed. But that doesn't help you now. For you, I have an experiment to try. Something to see if you can get around the errors. It involves changing two debug settings, clearing cache, and restarting the viewer. I'm going to assume you know how to do these things, so the information you need is the two settings: Mesh2MaxConcurrentRequests (set to 3) and TextureFetchConcurrency (set to 3). Start the viewer, go to your test region, allow everything to load, then examine the log file for *any* texture or mesh failures (other than 404/NotFound). If it is clean or substantially improved, you have a workaround and you have some additional information for your ISP that might get them interested. (The detail being six versus 16 sustained, pipelined HTTP connections.)
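A minimal sketch of the "scan the log, ignore 404s" step described above. The substrings matched here are illustrative assumptions, not the viewer's actual log format:

```python
# Hypothetical log scan: keep texture/mesh fetch failures, but drop
# 404/NotFound lines since those are expected misses, not transport errors.
def fetch_failures(lines):
    failures = []
    for line in lines:
        lowered = line.lower()
        if ("texture" in lowered or "mesh" in lowered) and "fail" in lowered:
            if "404" in line or "notfound" in lowered:
                continue  # expected miss, not a transport error
            failures.append(line)
    return failures

sample = [
    "WARNING: texture fetch failed: timeout after 8 retries",
    "WARNING: mesh fetch failed: 404 NotFound",
    "INFO: texture fetch complete",
]
print(fetch_failures(sample))  # only the timeout line remains
```

An empty result after a full region load is the "clean" outcome the post is asking for.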
  15. Drake1 Nightfire wrote: hmmm.. Well, Blizzard has multiple online MMoRPGs.. Aeria has several as well.. Oh wait, Sony.. they make playstation, TVs, cameras, movies, TV shows... Pretty sure they blow LL out of the water, profit wise. Sony certainly blows us out of the water revenue-wise. But we may actually beat them profit-wise...
  16. Ardy Lay wrote: It was fixed. That may be stuck in your texture cache. Do try this. If you find that fixes the image then kudos to Linden for a texture cache that works!
  17. Whirly Fizzle wrote: Ref: http://wiki.secondlife.com/wiki/Simulator_User_Group/Transcripts/2013.09.24 See timestamp [12:22] onwards. Monty borked something, did he? :-)
  18. Magnet Homewood wrote: Really, you couldn't make up stuff better than what goes on at the Lab. We thought the same... :-)
  19. Thanks for the followup on the resolution. Sometimes it really is the hardware. :-)
  20. Whirly Fizzle wrote: I made a JIRA filter for all reports that appear to be CDN/Pipelining related which I'll keep up to date: https://jira.secondlife.com/issues/?filter=16679 Quick note to let everyone know that we are finding concrete causes to the problems people are reporting. The help we've been getting from the field has been very valuable. Right now, we're focusing on areas where the most variability in the experience is found. More to come...
  21. Perrie Juran wrote: So for anyone reading, this 10,000kpbs setting is a recommended "test set up" for HTTP Viewer. LINK. So it is specific to this Viewer. Hmm, that was probably not a good setting suggestion. There is an outstanding Jira on values over 3Mbps. Keeping it at or below 1.5Mbps is probably most reliable right now.
  22. I don't have an answer to your question, I'm afraid (I suspect it will require some viewer mods). But I would *love* to see the results of such an experiment.
  23. SaraCarena wrote: You can check that you're connected to the CDN (which is purely for texture/mesh fetching, apparently taking a big load off the sim) when you're in an RC snack region by typing "ping asset-cdn.agni.lindenlab.com" in windows cmd.exe. You'll be able to ping that DNS name whether the viewer is using the CDN or not. So that really isn't a valid test. Operationally, we can switch a region back and forth as needed and so any static document will never be 100% reliable. The grid is the truth. Using 'netstat', you can get some clues. Connections to 216/8 port 12046 point at non-CDN. Connections to port 80 to the CDN service-of-the-moment point to CDN. (All of this subject to change - it's not a formal declaration of a fixed API.)
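The netstat heuristic above can be sketched as a small classifier. This mirrors the post's informal clues (216/8 on port 12046 versus port 80) and, as the post itself says, is not a fixed API:

```python
# Rough classifier for remote endpoints seen in netstat output,
# following the informal clues above: 216.x.x.x:12046 suggests direct
# simulator asset fetching; port 80 suggests the CDN of the moment.
def classify(remote_ip: str, remote_port: int) -> str:
    first_octet = int(remote_ip.split(".")[0])
    if first_octet == 216 and remote_port == 12046:
        return "non-CDN (simulator asset fetch)"
    if remote_port == 80:
        return "CDN (HTTP asset fetch)"
    return "unrelated"

print(classify("216.82.1.5", 12046))   # non-CDN (IP is a made-up example)
print(classify("203.0.113.7", 80))     # CDN (placeholder documentation IP)
```

Since regions can be switched back and forth operationally, treat any single observation as a snapshot, not a permanent answer.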
  24. MBeatrix wrote: Any ideas? Thanks. Recommend a Jira with SecondLife.log file attached. In cases where this is consistently failing, I often see a problem with inventory setup time after login being excessive. If you find something like a 60+ second gap in logging around the following line: 2014-07-17T20:34:35Z INFO: idle_startup: Creating Inventory Views You'll often get a communications timeout leading to the original message. Faster machine/network (borrowed or begged), reorganized inventory, etc. can help clear that hurdle.
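Finding the "60+ second gap in logging" described above can be automated. A hypothetical helper, assuming the timestamp format of the sample line quoted in the post:

```python
# Hypothetical helper: find 60+ second gaps between consecutive
# timestamped lines in a SecondLife.log-style file. The timestamp
# format ("2014-07-17T20:34:35Z ...") matches the quoted sample line.
from datetime import datetime

def find_gaps(lines, threshold_s=60):
    gaps, prev = [], None
    for line in lines:
        try:
            ts = datetime.strptime(line[:20], "%Y-%m-%dT%H:%M:%SZ")
        except ValueError:
            continue  # not a timestamped line; skip it
        if prev is not None and (ts - prev).total_seconds() >= threshold_s:
            gaps.append((prev, ts))
        prev = ts
    return gaps

sample = [
    "2014-07-17T20:34:35Z INFO: idle_startup: Creating Inventory Views",
    "2014-07-17T20:36:00Z INFO: idle_startup: Inventory Views created",
]
print(find_gaps(sample))  # one 85-second gap
```

A gap bracketing the "Creating Inventory Views" line is the pattern that tends to precede the communications timeout described above.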