Too Many 1024 Textures and the NEW (please hurry) Land Impact rules


Chic Aeon

Now I seldom agree wholeheartedly with @Penny Patton   BUT I have a little mini-rant here and she will likely appreciate it. 

We have been pretty quiet about, let's call it, "laggy mesh" these last couple of months. After that thread that went on forever with the disappearing trees and the 1024 texture for MATCH STICK HEADS, most of us probably shelved the issue for later. We can do what WE do to make changes but we can't influence others all that much. 

This week I had several  OMG moments in the texture field. There may have been mesh issues also but these were overshadowed by the overuse of huge textures.  No names, not even hints but here is the scenario.

SUPER cute outdoor build that included some rugs and pillow things to sit on. I wanted to change the color of one blanket for color coordination (it was mod, I could do that). It turned out that JUST the top of the blanket (not trim, not underside) was FOUR TEXTURES. I can't think of any logical reason for that. None.  I did some more inspecting and sent the OHSOCUTE item into the trash. 

A few days later I got another super snazzy new release which included a storage space (not all that large). In trying to tint that I discovered (upon Firestorm inspection) that it was made up of 11 textures (I would have used one). The item didn't even have any grain; it was simply white with shadows. There were some parts of the set that were customizable, and one tiny piece (about six inches in real life) had a 1024 texture on it. 

Now my head is shaking a bit here. These were both well-known designers, not beginners by any means. 

 

MY POINT IS --- (well eventually I would get to that)

When the new land impact rules come into being, they SO NEED to have the texture load figured in along with the vertex count. I sincerely hope there is SOME way that this can happen. 

 

We need some better RULES since obviously we aren't policing ourselves. 

 

 

  • Like 2
  • Thanks 4

One really good new feature in Firestorm we haven't discussed much here yet is the VRAM readout. Right-click on an object, select Object -> Inspect and you get something like this:

[Screenshot: the Firestorm inspect floater showing the VRAM readout]

The VRAM figure is simply the number of texture pixels in the linkset. It doesn't tell the whole story of course, but it's still a very good indicator of the CPU lag the item causes.

The significance of texture lag is a bit tricky because it's largely a question of the total number of textures in the whole scene. High resolution textures are always laggy, but once we get to the point where we run out of VRAM, it gets seriously bad because the computer has to swap textures back and forth between VRAM and the HD cache. A good rule is to keep the total texture load below 1 GB (about 1,000,000 KB) - that's for everything the viewer has to keep track of, not only a single item.

Some VRAM numbers for common (and some not so common) texture resolutions:

Resolution   VRAM (KB)
64x64               16
128x128             64
256x256            256
512x512          1,024
1024x1024        4,096
2048x2048       16,384
4096x4096       65,536
128x256            128
128x512            256
256x512            512
256x1024         1,024
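These figures are just raw 32-bit pixel data: width x height x 4 bytes. A minimal sketch of the calculation (the function name is mine, not Firestorm's):

```python
# Uncompressed texture footprint, assuming 4 bytes per pixel (RGBA).
# Firestorm appears to count 4 bytes per pixel even for textures
# without an alpha channel.
def vram_kb(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel // 1024

# Reproducing a few rows of the table:
print(vram_kb(64, 64))      # 16
print(vram_kb(512, 512))    # 1024
print(vram_kb(1024, 1024))  # 4096
print(vram_kb(4096, 4096))  # 65536
```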

 

One thing worth noticing is that according to Firestorm there is no difference in VRAM usage between 24 bit (non-alpha) and 32 bit (alpha) textures, and the numbers are consistent with the raw data requirements for alpha textures. If this is correct, the viewer is reserving up to a quarter of the VRAM for non-existent alpha channels.

Whether LL is going to do something about it, remains to be seen. For now, let's count our blessings and be grateful they had the sense to limit uploads to 1024x1024.

Edited by ChinRey
  • Like 3
  • Thanks 1

I liked the matchstick thread :P I learnt a heck of a lot to improve my own mesh.

I do expect the change is going to be a bit painful for everyone when it's done (just as LOD4 to LOD2 was for me), so we may as well rip the entire bandaid off quickly.

As the Lab works on the figures I do hope they put it through a public comment phase and some in-world meetings. It's likely one of the ones I would wake up at 3am for.

  • Like 1

Second try. No posting on button pushing :D.

We DID talk about this a bunch at that same time (pretty sure it was on that match stick thread), but it's often good to repeat things. 

And I talked about it a bit in my blog post on the new Firestorm tools: https://chicatphilsplace.blogspot.com/2018/02/the-new-firestorm-building-and-shopping.html

Quote

The improvements in the inspect floater including VRAM usage, contributed by Chalice Yao and Arcane Portal, based on code originally by Cinder Roxley.

To MY memory though, this is new and good to know:

One thing worth noticing is that according to Firestorm there is no difference in VRAM usage between 24 bit (non-alpha) and 32 bit (alpha) textures, and the numbers are consistent with the raw data requirements for alpha textures. If this is correct, the viewer is reserving up to a quarter of the VRAM for non-existent alpha channels.

Edited by Chic Aeon
  • Like 2

.. keep in mind the viewer doesn't load everything at full resolution and will attempt to drop to lower resolution versions of a texture when VRAM is under pressure.

The performance downside for high VRAM use comes during the decoding phase and that happens before the textures are placed in VRAM. Inspecting an object will override SL's intended texture loading and force the full resolution textures to be loaded. Not everything is loaded fully under normal operation. (madly inspecting everything and running around like chicken little makes it worse)

I would fully support LL boosting max texture sizes right up to 4096. VRAM usage is less important than atlasing - every face with the same texture can be rendered in the same pass, so a single HUGE texture is way faster to render than many small ones.
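The batching argument can be illustrated with a toy count (a sketch of the idea, not actual render code): faces that share a texture can be drawn in one batch, so one atlas beats many unique small textures.

```python
from collections import defaultdict

# Toy illustration: count render "passes" needed if every distinct texture
# forces its own batch. Twelve faces sharing one atlas collapse into a
# single pass; twelve faces with twelve unique textures need twelve.
def render_passes(face_textures):
    batches = defaultdict(list)
    for face, tex in enumerate(face_textures):
        batches[tex].append(face)
    return len(batches)

print(render_passes(["atlas"] * 12))                  # 1
print(render_passes([f"tex{i}" for i in range(12)]))  # 12
```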

Unless you are seeing textures swapping in and out, you don't have an issue with VRAM use. Using more doesn't make SL slow on its own; the associated pre-processing does that. Reduce draw distance and you universally lighten the load.

Textures swapping in and out highlights a flaw in the viewer code. Loaded texture resolution is based on the amount of screen area an object occupies (as it should be), BUT when VRAM is under pressure the viewer will junk textures trying to free up memory and invariably picks something really obvious to unload. The screen-space-based allocator kicks in again saying "wait, that's huge, put it back" .. and your floor starts bouncing in and out. This is further undercut by some creators screwing around with an object's bounding box to make it appear much larger to the render engine than the actual pixels rendered on screen. 

The viewer is kneecapped in the amount of VRAM it can access. But as LL have been finding out the hard way, many graphics cards lie about the amount of VRAM they actually have, and Windows and other applications will be using a sizable chunk of it (games get away with high VRAM use as they presume they are the only focused application, yank everything and force Windows to swap out the stuff you can't see because the game covers it up).

If you have low VRAM and texture thrashing ...  crank VRAM to the max your TPV of choice will allow, disable shadows and CLOSE CHROME (plenty of other apps use VRAM too, but browsers are the worst).

In my own testing, it's rare to find a location that pushes VRAM use over 1.5GB .. adding more just means the viewer doesn't have to bother unloading textures for things you can't actually see this frame (like stuff behind your cam).

 

  • Like 2

12 hours ago, Chic Aeon said:

When the new land impact rules come into being, they SO NEED to have the texture load figured in along with the vertex count. I sincerely hope there is SOME way that this can happen. 

 

We need some better RULES since obviously we aren't policing ourselves. 

Recalculating Li based on texturing will cause people to instantly go over budget with objects already rezzed in world, and auto return will send half the grid back. It won't ever happen.


44 minutes ago, CoffeeDujour said:

Recalculating Li based on texturing will cause people to instantly go over budget with objects already rezzed in world, and auto return will send half the grid back. It won't ever happen.

Hence my comment in another thread wondering if Patch's info that they were thinking about upping the LI per square meter again might coincide with the new Land Impact rules. Again, Patch did NOT say that was the case; I can just see the logic there. 

And "some" (OK, a very few) of us who have been paying attention to this (or use very old prim builds, and there are still a lot of those with 256 repeating textures out there) would be OK.   

Do agree though ---- that is a huge concern, and that certainly was the case when the last land impact system was introduced. Some folks had very full Lost and Found folders. Again, that makes a new prim-per-meter allotment seem like the only likely workaround for the introduction. LATER, people could (would, fingers crossed) replace some of the "laggy objects" with more streamlined ones -- hopefully from some of the same makers who had changed their methods.   


Also wanted to note that there have been platforms where textures were counted into a "sim's allotment of assets" so to speak. That may never be the case here, but it worked VERY WELL in Cloud Party. 

Along with the number of mesh assets you could have, there was a number of textures you could have in your area (four sims in size in CP) - determined in part by the size of the textures, so really a pixel budget. Once you USED a texture, it did not count again on its reuse - neither did the mesh (instancing). It was a challenge. It was doable. I would LOVE to go back there now if it was around and see what I could make with those same rules. SO much smarter now :D.

Agree it is too late to close that barn door. Sometimes though I wish it wasn't. 

 

 


I think the lack of such limitations accounts for a lot of SL's success and longevity, the platform is so open ended that even if a parcel texture limit was imposed we would work out a way around it in a matter of hours (and our solution would be a million times worse than the issue such a limitation was intended to correct).

  • Haha 1

1 hour ago, CoffeeDujour said:

I would fully support LL boosting max texture sizes right up to 4096. VRAM usage is less important than atlasing - every face with the same texture can be rendered in the same pass, so a single HUGE texture is way faster to render than many small ones.

I agree. However, the lack of a second UV set to map lighting information renders this feature quite useless in SL and prone to exploitation of texture resources, dumping atlasing in favor of dedicated textures. Those can accommodate much more surface area of course, but I doubt it would be used this way.

  • Like 2

5 hours ago, CoffeeDujour said:

.. keep in mind the viewer doesn't load everything at full resolution and will attempt to drop to lower resolution versions of a texture when VRAM is under pressure.

That appears not to be the case, I'm afraid. There have been some discussions here about whether SL uses mip-mapping or not and, if I understood correctly, the conclusion was that it doesn't.

There is a very simple test anybody can do that should be a good indicator:

  • Texture a few prims with 1024s and scale them down so that each surface only takes up a few pixels on the screen.
  • Check the fps.
  • Retexture with 512s.
  • Recheck fps.

The fps will be lower with 1024s than with 512s, and it shouldn't be if the textures were scaled down early in the process, before they were stored in VRAM.

Another, even simpler but not quite as conclusive, test is to add a high resolution texture to a surface, slowly zoom in on it, and watch the texture change each time the viewer switches to a higher resolution version.

Also, it's worth noticing that even in an extremely low lag environment a 2048 can take several seconds longer to load than a 1024.

 

5 hours ago, CoffeeDujour said:

I would fully support LL boosting max texture sizes right up to 4096.

They've already answered that:

https://jira.secondlife.com/browse/BUG-20171

Also, please read this JIRA:

https://jira.secondlife.com/browse/BUG-20125 I'm not sure if that bug has been fixed yet.

 

5 hours ago, CoffeeDujour said:

VRAM usage is less important than atlasing - every face with the same texture can be rendered in the same pass, so a single HUGE texture is way faster to render than many small ones.

Do you honestly believe content creators who think nothing of using a 1024 for a matchstick head or a shadow prim will be considerate enough to reserve their 4096s for texture atlases? ;)

It may work, but only if there is some function to limit the total texture usage in a scene. Incorporating textures into the land impact calculation may help there, but then there are avatars. Even a low lag system avatar will usually have more than ten textures, and a fully loaded mesh avatar (the kind that complains about too few attachments allowed) may well have 30 or more.

Just for fun, let's make a moderately extreme - but not at all unrealistic - example of avatar texture use:

  • Mesh body: 10 textures (skin, three clothes layers and nails - the clothes layer textures are still there even when set to full transparency)
  • Mesh head: 3 textures (skin, tattoo layer and lips, isn't it?)
  • Mesh eyes: 2 textures (we don't want to use the same texture for both eyes, do we?)
  • Eyelashes: 4 textures
  • Hair: 3 textures
  • Mesh clothes: 16 textures (four pieces, each with four textures but really, the sky is the limit here)
  • Shoes: 5 textures
  • Jewelry etc.: 20 textures (again, the sky is the limit)

63 textures, make them 4096s and we have 4 GB of VRAM. Put ten of them together in a club inside a 3 GB VRAM house (yes, they do exist)...

It would still work if SL had mip-mapping and/or scaled down textures before they were stored in the VRAM but, as I said, that appears not to be the case.
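The arithmetic behind the 4 GB figure, spelled out (assuming all 63 textures are 4096x4096 at 4 bytes per pixel, uncompressed):

```python
# 63 textures at 4096x4096, 4 bytes per pixel (uncompressed RGBA).
textures = 63
bytes_each = 4096 * 4096 * 4              # 64 MiB per texture
total_gib = textures * bytes_each / 2**30
print(round(total_gib, 2))                # 3.94 GiB, i.e. roughly 4 GB
```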

Edited by ChinRey
  • Like 1

There's some rework coming from LL in the texture caching area, but I don't know the details.

The way it seems to work, from looking at the viewer code, is that (some? all?) textures are translated to progressive JPEG 2000 server-side. The first part of the file gives you a low-rez texture, and as you read more, you get the  higher-rez versions.  Or the reader can just read a low-rez version and stop there. In the viewer, there's a separate thread which manages the "fast cache" of textures, where they're stored in a local file in uncompressed form, not necessarily at full resolution. The thread has a to-do list of textures the viewer wants, and what resolution it wants them at, and that list changes as the avatar moves and looks, so a texture's load may be canceled before it comes in. The loading policy is spread out across the viewer code, so it's hard to understand the priority rules in use. (Comments? What comments? Somebody went back later and tried to add comments, but they admit in the comments they're guessing about some parts. I was fixing a bug that crashed the LGPL version of Firestorm and had to figure some of this out. Not fun.)

Then the textures that have reached the viewer are put into VRAM, which is a limited resource. There's a priority calculation to decide which textures at what resolution go out to the graphics card. It's even possible to pull a texture from VRAM, make it lower-rez and smaller, and put it back, although the code comments indicate this is not done much. There's a lot of heavy machinery in there.

So just because the creator uploaded a 1024 texture doesn't mean it gets delivered to the graphics card. It's up to the viewer's policy how much of that resolution gets used. If the viewer computes that there are more texture pixels than screen pixels, given view distance and size, then it should only load a lower-rez version. It's not clear how well current policy does this. LL's "Project Arctan" may improve that.
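The "more texture pixels than screen pixels" idea can be sketched as a toy calculation (an illustration of the principle, not the viewer's actual algorithm; the function and the level cap are mine): each JPEG 2000 discard level halves the texture's dimensions, so the viewer only needs enough levels dropped to bring it down to roughly the on-screen size.

```python
import math

# Toy policy sketch (not viewer code): choose how many JPEG 2000 resolution
# levels to discard so the decoded texture is no wider than the screen area
# the surface covers. Each discard level halves each dimension.
def discard_level(tex_dim, screen_dim, max_discard=5):
    if screen_dim >= tex_dim:
        return 0  # already at or below on-screen size, decode in full
    levels = math.ceil(math.log2(tex_dim / screen_dim))
    return min(levels, max_discard)

# A 1024 texture on a surface covering only ~64 pixels of screen:
print(discard_level(1024, 64))  # 4 -> decode at 64x64 instead of 1024x1024
```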

Back at the server side, there are LL's asset servers (on AWS?), which have the original of the texture. These feed the content delivery network (currently Akamai?) which has disk farms front-ended by solid-state drives front-ended by nginx caches spread around the world. The CDN is a big shared cache system; it's just mirroring recently used content from LL's servers.

What's supposed to happen is that you get a low-rez texture for everything in view, very fast. Then the nearer stuff fills in with higher-rez textures, and finally the background becomes high-rez, until VRAM is full. All the code is there to do that. But it doesn't seem to deliver that user experience.

When you move to a new area, it often takes about a minute before all the textures are loaded. If you watch the network traffic, there's a peak in the first few seconds, and then the download traffic declines as the remaining textures trickle in. The CDN isn't able to keep the textures flowing at full speed. That's to be expected. SL doesn't have much texture sharing - each user has their own textures, and sims are rarely densely populated. So the odds are that only a small fraction of the texture files will be on the CDN servers. The CDN has to ask the LL servers for them, which is probably the source of the bottleneck.

CDNs work best when a million people are trying to access the same thing. All that distributed caching is a huge win when something goes viral on Twitter. SL is 50,000 people all accessing different things. The CDN doesn't help much.

Since the CDN probably requests the entire texture file, if the viewer only reads the first 64x64 piece of a 1024x1024 texture file, the CDN will request the whole thing from LL's servers, because the CDN is designed for web content. This may be loading LL's servers unnecessarily.

So, as well as choking on VRAM, you can choke at LL's asset servers, at the CDN, on the CDN to viewer connections, on compute power in the cache management thread, and on disk bandwidth to the local disk where the viewer's fast cache file lives. If your disk light is on solid, it's probably the fast cache file.

It looks to me like the big bottlenecks are being local disk limited (an SSD drive would help here, because it's lots of small accesses), and LL's asset servers, plus policy not being as smart as it could be. 

Edited by animats
  • Thanks 3

13 minutes ago, animats said:

(Comments? What comments? Somebody went back later and tried to add comments, but they admit in the comments they're guessing about some parts. I was fixing a bug that crashed the LGPL version of Firestorm and had to figure some of this out. Not fun.)

  • A TRUE Klingon Warrior does not comment his code!

(Klingon Programmer's Code of Honor, #1)

Edited by ChinRey
  • Like 1
  • Haha 3

1 minute ago, ChinRey said:
  • A TRUE Klingon Warrior does not comment his code!

(Klingon Programmer's Code of Honor, #1)

Two words. "Technical debt." Some of the things I see in the SL viewer code are brilliant. Some good programmers years ago thought through the performance problems and wrote code to deal with them. But those people are long gone from LL. Content has more detail now, there's a CDN in the middle, users have more network bandwidth, and comparable modern systems provide a better user experience. The system needs re-tuning. Now, Oz Linden's team has to try to understand that code and figure out why the system as a whole is underperforming. In the absence of comments and adequate documentation, this is tough.

(I see this in the region crossing area, too, but that's another story.)


1 hour ago, animats said:

Some good programmers years ago thought through the performance problems and wrote code to deal with them. But those people are long gone from LL.

Interesting quote from one of the early SL programmers:

Quote

...

the code today looks way more complicated than the code I originally wrote.

...

(Avi Bar-Zeev: How SL Primitives [Really] Work, 2008)

Cory Ondrejka was LL's first CTO and responsible for developing the original SL software. He is an absolutely brilliant programmer of course, but he may also be the one we should blame for the lack of comments and documentation. He never had time for such details and besides, he probably knew all the code by heart himself.

Edited by ChinRey

[Screenshot: Linux network loading graph]

Entering a new region. (Hangars Liquides, a large elaborate cyberpunk sim).

To illustrate the texture load, here's a Linux network loading graph for the 60 seconds after entering a new region, one not visited in weeks.

Notice the initial network ramp-up, as the region loads and object info comes in. Then there's a long tail as, for a full minute, more detail trickles in. Well, not "trickles", exactly; about 200 MB was loaded in a minute. This is a 50 Mb/s connection, or about 6.25 MB/s (capital B means bytes; 8 small-b bits make a byte), which is about where the traffic maxes out. But it falls off about 80% from peak for a while. That's probably the CDN and LL's asset servers unable to serve textures as fast as the viewer could take them, or it may be the viewer throttling its requests.
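The unit conversions in that paragraph, as a quick sketch (200 MB observed over 60 seconds on a nominal 50 Mb/s link):

```python
# Nominal link capacity: 50 megabits/s.
link_mbps = 50
capacity_mb_s = link_mbps / 8                  # 6.25 decimal megabytes/s
capacity_mib_s = link_mbps * 1e6 / 8 / 2**20   # ~5.96 MiB/s

# Observed: roughly 200 MB over the 60-second window.
observed_mb_s = 200 / 60

print(round(capacity_mb_s, 2))   # 6.25
print(round(capacity_mib_s, 2))  # 5.96
print(round(observed_mb_s, 2))   # 3.33 -> roughly half the link on average
```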

This is a lot to ask of the asset servers. Moving to a new region requires about 5x the data traffic of a Netflix stream. It's amazing that this works at all. (It also makes you ask the question, should systems like SL just send the user a video stream, instead of rendering locally. That's how Mobile Grid Client works.)

Looking at this, it seems like the asset server system is doing reasonably well. One could ask for more, and the VR people do. But really, that's moving an awful lot of data just to fill one screen. The viewer's difficult task is to bring in the assets in an order and at a resolution which provides the best visual effect when texture download can't keep up.

It's not really about creators providing a 1024^2 texture for an eyeball. It's the viewer asking for all of it when the eye is across the room. The viewer can ask for and get a 32x32 texture if it wants. The machinery is all in place. It's the strategy that needs work.

  • Thanks 2

You seem to have grabbed the wrong end of the stick again ... The CDN is more than capable of meeting the data supply demands; I believe it's run on Amazon servers.

You land on a region and the viewer (with nothing to do) requests every asset and texture it can. The texture request rate drops off because the decode list is full and the viewer is spending all its time on that and file I/O. As decodes finish, more are requested. The bottlenecks are KDU/OpenJPEG, writing data to the cache and writing data to the GPU. This is why performance takes a while to recover after a TP - it's the texture decode sucking up every cycle it can.

*Decoding also does not give up on a task until it has been completed, even if the asset that spawned that task is no longer visible because you cammed past it. There is an assumption that when you arrive at a location, you will hang about and wait for it to rez, not immediately scoot your cam to the other side of the region and get upset that a vendor you were looking at on a previous visit is refusing to load.

The intent with the cache changes is more complicated than simply the "better performance" sales pitch.

  • They hope to end up with a faster more responsive local cache that offers real benefits over having no cache at all (the cache helps, but if you're sitting on a fat pipe, it's not as good as it could be).
  • If we get it, decoding once rather than every single time a texture is needed will add a marked performance boost.
  • And finally (and perhaps the core motivation) reduce the amount of data the viewer fetches overall as currently, not everything even makes it into the cache; this will reduce the insane costs associated with having a CDN.

The cache we have now looks a little like the bins out back of Dr Frankenstein's lab, it's been terrible for a decade and new stuff has been tacked on year after year by random unconnected contractors; and as a whole Igor makes terrible decisions about when to even use the bins vs the incinerator. So in short, the patience of Frankenstein's assistant, Dr Oz (the ever living), has been tried once too often and Igor is getting roller skates, a new brain and a fresh set of colour coded bins. Should he fail to perform, Sensei's favorite new pupil, Sar-san will be set loose to wipe the land clean.

  • Like 2

4 hours ago, ChinRey said:

Cory Ondrejka was LL's first CTO and responsible for developing the original SL software. He is an absolutely brilliant programmer of course, but he may also be the one we should blame for the lack of comments and documentation. He never had time for such details and besides, he probably knew all the code by heart himself.

knowing by heart the code we write ourselves - meaning the ability to come back at a later date to read and understand code we have written ourselves, and also code written by others on our level - is a big contributory reason why code doesn't get commented. The higher up the programming complexity ladder we go, the greater the assumption that everyone else who will be hired to follow us to continue the task is on the same or a higher level than us

what makes it hard for those who follow isn't so much the ability to read and understand the code, it's the spaghettiness that can result when multiple patches to an existing codebase over a period of time are by necessity coded within tight deadlines  

  • Like 5

2 hours ago, CoffeeDujour said:

You seem to have grabbed the wrong end of the stick again ... The CDN is more than capable of meeting the data supply demands; I believe it's run on Amazon servers.

You land on a region and the viewer (with nothing to do) requests every asset and texture it can. The texture request rate drops off because the decode list is full and the viewer is spending all its time on that and file I/O. As decodes finish, more are requested. The bottlenecks are KDU/OpenJPEG, writing data to the cache and writing data to the GPU. This is why performance takes a while to recover after a TP - it's the texture decode sucking up every cycle it can.

Look at the CPU and network loading graphs above. The texture decoding is in a separate thread. This is on a 4-core CPU and there's plenty of spare CPU time available during the low network utilization around the 40 second marker. So it's not out of decoding resources. Writing to the local hard drive might be the bottleneck. But this machine has 8GB, about 60% is in use, and Linux will use the rest for I/O cache. That says the server side ran out of resources before the viewer did. It's also possible that the sim isn't feeding capabilities through fast enough, or requests for assets are being rejected and retried. (Look at a Firestorm log; asset fetch has a lot of errors.)

There's nothing magic about Amazon's servers. They use commodity hard drives like everybody else. "HDD-backed volumes—Throughput Optimized HDD (st1) and Cold HDD (sc1)—deliver optimal performance only when I/O operations are large and sequential. "  Asset fetch is not large and sequential; it's medium sized and random. Lots of seeks. If LL was willing to pay for a few terabytes of SSD (which is cheaper than it used to be, about US$100/terabyte/month on AWS right now) it might help. How big is SL's entire asset store?


1 minute ago, CoffeeDujour said:

Yes, it is out of decoding resources.. because SL isn't ever going to use all your cores and most of the time .. you have at least one core running full tilt - that's openjpeg that is.

Look at the space between the 30 and 40 second mark. Textures are coming in slowly, and there's plenty of CPU time available. JPEG decode is not in the main thread. The whole fast texture cache management system runs in its own thread. Firestorm seems to max out around 125% of one CPU.

Read the big comment at the beginning of lltexturefetch.cpp. There are five threads, in addition to the main thread, involved with texture fetching, decoding, and caching. A lot of work went into making that fast on the viewer side.


// I'm sorry

And yet - leaving aside the CDN/cache issues - there is a true and tangible fps drop if you enter a place like Fantasy Faire where so many vendors have been loaded with 1024x1024 textures. Swing your camera around in a circle and you can feel the concentrations of each group of vendors.

There is also true lag when you have a lot of H&G clutter in small spaces, that is not explained by the tris of the mesh, but can be explained by the sheer number of 1024x1024 textures on everything. Like the match-heads.

It can be demonstrated as mentioned above, texture a prim at 1024x1024, shrink to a pinhead and duplicate, swing the camera around. Do the same at 512x512 and lower.

 

I get the feeling we are going to see mass returns anyway. Too much mesh is faked with zeroed-out LODs to artificially turn a 10LI item into a 1LI item. Tackling both at once is less pain than 2 changes. Only address the zeroed-LOD side of this and it means more work in the future.

 

Edit: But yeah, we all got reamed recently by the sweet Patch for commenting on things we don't know jack about. :S

Edited by Callum Meriman
  • Like 2

Region auto return is a very blunt instrument and would result in major parts of builds being sent back. It also isn't in the least bit even-handed .. 

I can tell you for a fact that if this had happened when I was running mine, I would have been spitting fireballs at LL support over the phone along with every other region owner. Roll backs would be coming hard and fast and the new Li calculations would be on the trash fire by lunchtime.

It would be an unmitigated disaster for LL and certainly result in heads rolling. Not to mention a lot of long time customers reevaluating their commitments.

They aren't ever going to do it.

Edited by CoffeeDujour

1 hour ago, animats said:

If LL was willing to pay for a few terabytes of SSD (which is cheaper than it used to be, about US$100/terabyte/month on AWS right now) it might help

Hahaha, you know we answer this question of yours EVERY time you make one of your "why doesn't SL perform like a 30 GB pre-installed content MMORPGFPS game..." posts...

But let's answer it again, since you apparently forgot...

1 hour ago, animats said:

How big is SL's entire asset store?

MORE THAN A PETABYTE!

That's over 1,000 terabytes, and at US$100/terabyte/month that's more than $100,000 a month on AWS.

Can you guess why LL doesn't rent Cloud SSD's for the asset servers?
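That follows directly from the US$100/terabyte/month figure quoted earlier in the thread (both numbers are the thread's own rough estimates, not actual AWS pricing):

```python
# Rough SSD cost for the whole asset store at the price quoted above.
asset_store_tb = 1000     # "more than a petabyte" = 1,000+ terabytes
cost_per_tb_month = 100   # US$ per terabyte per month, from the earlier post
print(asset_store_tb * cost_per_tb_month)  # 100000 -> US$100,000+ per month
```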
 

  • Like 1
  • Thanks 1
