Firestorm Texture Memory Autodetection


Lyssa Greymoon

Recommended Posts

34 minutes ago, Lyssa Greymoon said:

I have a 4GB AMD RX 550 that Firestorm doesn't seem to be detecting correctly; it only lets me set the texture buffer to 1024MB. How can I get it to use the full 2048MB it should be able to? Thanks.

If your computer has onboard graphics, make sure your AMD RX 550 is actually the card being used; check Help->About Firestorm. If it is:

My previous laptop had a 2GB GeForce 630M whose graphics memory did not get detected correctly. I solved the problem by passing the following command-line option to Firestorm:

--set TextureMemory 2048

On Windows you must edit the shortcut target, which by default is "C:\Program Files\Firestorm-Releasex64\Firestorm-releasex64.exe" --set InstallLanguage en
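
For example (assuming the default install path above), the edited shortcut target would end up looking something like this:

"C:\Program Files\Firestorm-Releasex64\Firestorm-releasex64.exe" --set InstallLanguage en --set TextureMemory 2048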

I do not know if this still works, as I have not used the old laptop for years now.


  • 3 months later...
15 hours ago, fernando90 Magic said:

Hello! Did you manage to solve it? I have the same problem here. My video card has 2GB of memory, but it only lets me set the memory allocation up to 1024MB.

It seems to have worked itself out; some update must have fixed it. It wasn't anything I did. As someone else mentioned in the other thread, 1GB is the biggest texture cache you should be able to use on a 2GB card.

  • Like 1

On 8/8/2020 at 2:15 AM, fernando90 Magic said:

Hello! Did you manage to solve it? I have the same problem here. My video card has 2GB of memory, but it only lets me set the memory allocation up to 1024MB.

These limits are for Firestorm Viewer only: 

64bit versions only. This setting is hard limited based on the VRAM available with your graphics card. It is recommended you increase the slider to use the maximum available to prevent texture thrashing.

  • GPU 1GB = up to 768MB
  • GPU 2GB+ = up to 1024MB
  • GPU 4GB+ = up to 2048MB

32bit versions only. This setting is hard limited to a maximum of 512MB.
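
For illustration, a minimal sketch of how a viewer might derive that cap from the detected VRAM, based purely on the table above (a hypothetical helper, not Firestorm's actual code):

    #include <cstdint>

    // Hypothetical helper, not Firestorm source: map detected VRAM (in MB)
    // to the maximum texture-memory slider value, per the table above.
    std::int32_t max_texture_memory_mb(std::int32_t vram_mb, bool is_64bit_build) {
        if (!is_64bit_build) return 512;   // 32-bit builds: hard cap of 512MB
        if (vram_mb >= 4096) return 2048;  // GPU 4GB+
        if (vram_mb >= 2048) return 1024;  // GPU 2GB+
        return 768;                        // GPU 1GB
    }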

 

  • Like 3
  • Thanks 1

12 hours ago, Willow Wilder said:

These limits are for Firestorm Viewer only: 

64bit versions only. This setting is hard limited based on the VRAM available with your graphics card. It is recommended you increase the slider to use the maximum available to prevent texture thrashing.

  • GPU 1GB = up to 768MB
  • GPU 2GB+ = up to 1024MB
  • GPU 4GB+ = up to 2048MB

32bit versions only. This setting is hard limited to a maximum of 512MB.

Well ... this is the status from 8 years ago or even longer. And what about the present? In the meantime, 6 to 8GB of graphics RAM has become available, and at affordable prices. Firestorm should start doing justice to modern systems and allow flexible use of graphics memory instead of restricting it. We live in the year 2020, not in a time when you still had to make do with weak graphics cards and be happy if Second Life ran even halfway smoothly. Please wake up and keep up with modern technology instead of always catering to outdated systems.

  • Like 2
  • Haha 3

@Lillith Hapmouche I was only going by the current standard for consumer PCs, which is between 6 and 8GB of graphics RAM.

 

@Gabriele Graves I don't think so either, since I tested a competitor to Firestorm that has flexible graphics RAM usage. The differences are significant depending on the region.

Edited by Miller Thor
  • Thanks 1

16 minutes ago, Gabriele Graves said:

using more than 2Gb would not give any benefits

Spoken like somebody who would buy a "Gaming PC (tm)" and plug the monitor into the motherboard. (I know it isn't your words.)

That's like saying "why buy 16 GB of RAM, 2 is plenty!" Completely ignorant of how bad things get when memory runs out.

Granted, I do think 11 GB of VRAM is excessive for just gaming (including SL). Even 4K gaming doesn't reach very high VRAM usage, relatively speaking. The games tend to stay safely under 5 GB VRAM, plus whatever extra you'll need for all the other programs running in the background, which should be very little.

Second Life is of course a bit "special" because its content isn't exactly optimal by nature, with a single avatar being able to devour more VRAM than your average game scene as a whole.

  • Thanks 1

3 minutes ago, Miller Thor said:

I don't think so either, since I tested a competitor to Firestorm that has flexible graphics RAM usage. The differences are significant depending on the region.

Interesting to know that it gives benefits on another viewer. I would love to see what a difference that could make. I get as good performance as can possibly be had on 2GB as it is, but I would always welcome more of a boost.

Edited by Gabriele Graves
Make who I am referring to more obvious

10 minutes ago, Wulfie Reanimator said:

Spoken like somebody who would buy a "Gaming PC (tm)" and plug the monitor into the motherboard. (I know it isn't your words.)

That's like saying "why buy 16 GB of RAM, 2 is plenty!" Completely ignorant of how bad things get when memory runs out.

Granted, I do think 11 GB of VRAM is excessive for just gaming (including SL). Even 4K gaming doesn't reach very high VRAM usage, relatively speaking. The games tend to stay safely under 5 GB VRAM, plus whatever extra you'll need for all the other programs running in the background, which should be very little.

Second Life is of course a bit "special" because its content isn't exactly optimal by nature, with a single avatar being able to devour more VRAM than your average game scene as a whole.

I am not a gamer; the card was bought for an SL performance boost when I bought this PC. It was recommended as the top-of-the-line card at the time. I wasn't really aware of how much memory it had, so I definitely didn't choose it for that reason. Once I realised it had plenty, though, I was happy, because it seemed that, just as with having more system memory, it would give me some additional future-proofing, considering that SL seems to get slower and demand more from hardware as time goes by.

So the upshot is that, like many people, I had and still really have no idea how much graphics card memory is really needed for modern environments (SL, games, etc.); it just seems as though there should be some use it could be put to, even caching textures, to give a speed boost.

Edited by Gabriele Graves
corrections
  • Like 2

2 hours ago, Miller Thor said:

Well ... this is the status from 8 years ago or even longer. And what about the present? In the meantime, 6 to 8GB of graphics RAM has become available, and at affordable prices. Firestorm should start doing justice to modern systems and allow flexible use of graphics memory instead of restricting it. We live in the year 2020, not in a time when you still had to make do with weak graphics cards and be happy if Second Life ran even halfway smoothly. Please wake up and keep up with modern technology instead of always catering to outdated systems.

This topic has been discussed ad nauseam elsewhere.

But for the record, Firestorm has the largest userbase of any viewer available and we don't "always" pay attention to outdated systems. Of course we are mindful of them, just as Linden Lab is.  As a project, you don't just toss away members of your community because they don't meet the standards of a few arrogant *****s. 

P.S. Viewer team/projects are not competitors. If you've found a viewer that meets your needs, by all means use it. I can assure you no one is going to cry if you are using one viewer rather than another. That's the beauty of having choices and options. 🙂 

  • Like 3
  • Thanks 2

4 hours ago, Lillith Hapmouche said:

From texture memory to world-wide recession... well. That did escalate.

I'd claim it was a segue, not an escalation, but yes, I'm amazed by just how much it did wake the thread up :)

Niran's suggestion of saving up for parts is what I have been trying to achieve; the issue comes not with putting in a different graphics card but in finding that you can't really upgrade from Windows 7 to Windows 10 if your motherboard is capped at 4GB of RAM. This is where I am so grateful to the TPVs maintaining 32-bit versions of their viewers; I can just about scrape by that way.

 

In a bit of an anti-swerve... let's get back to texture memory @)

I am currently able to get onto sims that aren't too busy with mesh avies or hundreds of buildings carrying diffuse, bump and specular textures all at 1024x1024. In fact, I can even run his Black Dragon viewer in 4GB of RAM with care. I have previously asked about the significance of texture memory because of some graphics driver failures, but I have now discovered the GTX 1050s sold quite cheaply on eBay are chinacanery: they're not genuine, and they tend to fall over when passed a hefty list of big textures to load. I have tried drastically reducing the texture memory setting to see if that slows down the rate at which textures are given to the GPU (I was going to say thrown, but that's a bit unfair). I would welcome any advice about how to work with what I've got rather than what to save up for, because I'm now reluctant to commit to anything that might turn out to be of Chinese origin, even though the eBay seller is supposedly shipping from the UK.

Edited by Profaitchikenz Haiku
  • Like 1

3 hours ago, Gabriele Graves said:

I still don't fully understand why we cannot have both but it seems that is the case and I do get good performance that I am more than happy with.

Speaking from a programmer's background (though not exactly graphics cards), I don't know of any reason why we couldn't have both.

Memory is just memory, as long as 64-bit is supported (basically all but the lowest of the low end). Even Windows 10 no longer supports hardware that can't do it. (Such a version exists, but won't get anything but security updates.)

Slightly technical explanation: 32-bit numbers reach up to the equivalent of 4 GB of memory. It's literally impossible to represent a larger value than that with 32 bits. Meanwhile, 64-bit numbers reach up to roughly 17 billion GB, which is... a lot.
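
A quick back-of-the-envelope check of those figures (just illustrative arithmetic, not viewer code):

    #include <cstdint>
    #include <iostream>

    int main() {
        // A 32-bit address can distinguish 2^32 bytes = 4 GiB of memory.
        std::uint64_t bytes_32bit = 1ULL << 32;
        std::cout << "32-bit limit: " << (bytes_32bit >> 30) << " GiB\n"; // prints 4

        // A 64-bit address covers 2^64 bytes; expressed in GiB that is 2^34,
        // i.e. roughly 17 billion GiB (16 EiB).
        std::uint64_t gib_64bit = 1ULL << 34;
        std::cout << "64-bit limit: " << gib_64bit << " GiB\n"; // prints 17179869184
        return 0;
    }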

LL has had a 64-bit viewer since 2018, praising it as an improvement to resources, performance, and reliability. There should in theory be no additional changes needed to support any amount of memory consumers might have access to, the caveat being legacy code that relies on (or doesn't account for) 64-bit values.

Even the problem of users cranking up the VRAM limit too high can be curbed much like other "dangerous" settings you can find in the debug menus. If you crash with the setting enabled, it's restored to a safe value.

Edited by Wulfie Reanimator
  • Like 1
  • Thanks 2

I can't understand what the problem is regarding lower-spec v. higher-spec users. Maybe someone can enlighten me.

At present, Firestorm protects from silly cache settings by reading the available VRAM and capping the offered cache values at a comfortably lower level. Lower-spec users are catered for, and protected against daft settings.

But why does the table of values stop at 4GB VRAM?

Surely it is possible for that table to be extended to higher values, thus allowing larger caches to be used by users with the appropriate GPUs?

Maybe there's a sound technical reason that caches larger than x MB don't fare well? I've had a bit of a search through the forums to see if this has ever been explained but haven't found an answer so far.

We are talking about the 64bit version. (I know the 32bit version is a totally different ballgame).

Edited by Odaks
Afterthought
  • Like 1

6 hours ago, Odaks said:

Maybe there's a sound technical reason that caches larger than x MB don't fare well?

I know nothing about graphics card caches, but it's certainly true that in practice, caching algorithms can have a "sweet spot" of size above which average performance gets worse. Imagine for simplicity that retrieval performance scales by some function of cache size but 95% of hits are always in the first half-gig. Increasing size will slow those high-frequency hits enough to drag down average performance. (I guess a cache could dynamically adjust its effective size, but smarter algorithms incur overhead, too.)
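
To make that concrete, here's a throwaway toy model (all numbers invented purely for illustration): if per-hit lookup cost grows with cache size while the hit rate is already near its ceiling at half a gig, the larger cache comes out slower on average.

    #include <cstdio>

    // Toy model only: made-up numbers to illustrate the "sweet spot" idea.
    // average cost = hit_rate * per_hit_cost(size) + (1 - hit_rate) * miss_penalty
    double avg_ms(double cache_gb, double hit_rate) {
        const double miss_penalty_ms = 20.0;              // assumed cost of fetching/decoding a miss
        const double per_hit_ms = 0.1 + 0.05 * cache_gb;  // assume lookups slow down as the cache grows
        return hit_rate * per_hit_ms + (1.0 - hit_rate) * miss_penalty_ms;
    }

    int main() {
        // 95% of hits already land in the first half-gig, so growing the cache
        // barely improves the hit rate but makes every hit a little slower.
        std::printf("0.5 GB cache: %.3f ms average\n", avg_ms(0.5, 0.950)); // ~1.119 ms
        std::printf("4.0 GB cache: %.3f ms average\n", avg_ms(4.0, 0.951)); // ~1.265 ms
        return 0;
    }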

  • Thanks 1

7 hours ago, Odaks said:

I can't understand what the problem is regarding lower-spec v. higher-spec users. Maybe someone can enlighten me.

At present, Firestorm protects from silly cache settings by reading the available VRAM and capping the offered cache values at a comfortably lower level. Lower-spec users are catered for, and protected against daft settings.

But why does the table of values stop at 4GB VRAM?

Surely it is possible for that table to be extended to higher values, thus allowing larger caches to be used by users with the appropriate GPUs?

Maybe there's a sound technical reason that caches larger than x MB don't fare well? I've had a bit of a search through the forums to see if this has ever been explained but haven't found an answer so far.

We are talking about the 64bit version. (I know the 32bit version is a totally different ballgame).

If what Firestorm said is true, then their reasoning for not allowing more than 2GB (which corresponds to ~4GB+ internally unless they changed it; you can check in the texture console) is that they don't want to encourage creators to go around tossing out even more unnecessarily high-resolution textures.

That being said, it's a stupid reason: creators have been tossing giant amounts of textures at us for many years, long before we allowed more VRAM, which is why the feared "texture thrashing" was so prominent at that time and partly completely out of control. Nowadays it seems to happen much more rarely (even though we're still spammed with gigabytes' worth of textures). Ever since I added automatic VRAM management, I've had almost no reports of texture thrashing (unless the GPU was genuinely running out of memory, or the user disabled the automatic memory management and left the memory settings at default, save for an edge case or two I had to fix). Creators are going to throw large amounts of high-resolution textures at us whether your viewer supports it or not; they have little to no technical understanding of how computers and rendering work. Some of them don't even get the basics, such as that more stuff means more work, which results in less performance; they think everything is free and create content accordingly.

Technically speaking, when I allowed more VRAM to be set, I set a hard limit of 3992MB because anything beyond that was simply crashing the viewer. I don't feel like raising this value any time soon, as 4GB (for each of the two pools) is quite a huge amount, has been well more than enough for years, and probably will be for quite some time still.

Edited by NiranV Dean
  • Like 2
  • Thanks 2

When I use the Catznip viewer, the slider max for my 1050 Ti is 1360MB. So I set it to that, and TechPowerUp GPU-Z reports that the card uses approx. 2640MB.

And I never get any texture thrashing like I do on the Linden viewer, where the slider maxes out at 512MB. (I wish Linden would allow us the same slider as the TPVs; it's the main reason I don't use the Linden viewer for anything other than checking stuff.)

Also, if I had an 8GB card (which I don't), I would be quite happy with a 2GB slider for 4GB actually used. Then what I would want is a CPU+GPU CUDA JPEG decoder which could run in the 4GB above that. CUDA decoders are blisteringly fast. The textures could be downloaded, decoded and cached in native bit format. Then there would be no texture thrashing at all, ever, because the render pipeline wouldn't have to do the JPEG interlaced progressive mode (the blurry loading), which is how Linden seems to encode large textures.

 

  • Like 1
  • Thanks 1

40 minutes ago, Mollymews said:

When I use the Catznip viewer, the slider max for my 1050 Ti is 1360MB. So I set it to that, and TechPowerUp GPU-Z reports that the card uses approx. 2640MB.

And I never get any texture thrashing like I do on the Linden viewer, where the slider maxes out at 512MB. (I wish Linden would allow us the same slider as the TPVs; it's the main reason I don't use the Linden viewer for anything other than checking stuff.)

Also, if I had an 8GB card (which I don't), I would be quite happy with a 2GB slider for 4GB actually used. Then what I would want is a CPU+GPU CUDA JPEG decoder which could run in the 4GB above that. CUDA decoders are blisteringly fast. The textures could be downloaded, decoded and cached in native bit format. Then there would be no texture thrashing at all, ever, because the render pipeline wouldn't have to do the JPEG interlaced progressive mode (the blurry loading), which is how Linden seems to encode large textures.

 

Texture thrashing happens because the viewer loads textures at different mipmap levels (think LODs for textures). When texture memory is full, it starts lowering the resolution of textures it doesn't need in order to reduce texture memory usage and make space for others (most games nowadays do this); this has nothing to do with how they are encoded. Regardless of the encoder/decoder used, you'd still see texture thrashing. If the viewer didn't texture thrash, it would simply continue filling your memory (you can force this with the Full Resolution Textures debug setting) to the point where it is more busy swapping textures in and out of memory, killing your performance. Texture thrashing prevents that performance collapse because it never allows the memory to overflow, and thus allocates/deallocates textures much more carefully, leaving your performance mostly intact.
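
A rough sketch of that mechanism (hypothetical code, not the actual viewer implementation): when usage exceeds the texture budget, the lowest-priority textures get their mip level raised until things fit again.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct Texture {
        std::int64_t bytes;   // memory currently used by this texture
        int discard_level;    // 0 = full resolution, higher = smaller mips
        float priority;       // e.g. on-screen size / camera importance
    };

    void reduce_to_budget(std::vector<Texture>& textures, std::int64_t budget_bytes) {
        std::int64_t used = 0;
        for (const auto& t : textures) used += t.bytes;

        // Least important textures get downscaled first.
        std::sort(textures.begin(), textures.end(),
                  [](const Texture& a, const Texture& b) { return a.priority < b.priority; });

        for (auto& t : textures) {
            if (used <= budget_bytes) break;
            if (t.discard_level >= 5) continue;  // don't shrink below a floor
            ++t.discard_level;                   // drop one mip level
            used -= t.bytes - t.bytes / 4;       // each level down is ~1/4 the pixels
            t.bytes /= 4;
        }
    }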

  • Like 1
  • Thanks 1

23 minutes ago, NiranV Dean said:

Texture thrashing happens because the viewer loads textures at different mipmap levels (think LODs for textures). When texture memory is full, it starts lowering the resolution of textures it doesn't need in order to reduce texture memory usage and make space for others (most games nowadays do this); this has nothing to do with how they are encoded. Regardless of the encoder/decoder used, you'd still see texture thrashing. If the viewer didn't texture thrash, it would simply continue filling your memory (you can force this with the Full Resolution Textures debug setting) to the point where it is more busy swapping textures in and out of memory, killing your performance. Texture thrashing prevents that performance collapse because it never allows the memory to overflow, and thus allocates/deallocates textures much more carefully, leaving your performance mostly intact.

shows what I know 😿

I peeked in the llimagegl.cpp file, and yes, mipmaps. I don't like mipmaps because of the blurry, then not blurry, then blurry again cycle on the Linden viewer.


Thanks for all that.

We've got @Qie Niangao's "sweet spot" theorem, @NiranV Dean's suggestion that FS is gunning for those nasty creators who knoweth not what they do when splashing large textures all over the place (coupled with practical experience of the viewer crashing when the cache is too big), and @Mollymews's amazing Catznip cache, which appears to grow to nearly twice what it's set to but at least works like a dream.

My vote goes to Qie's sweet spot. That's got to be worth further investigation!😉

 
