A 3070 report and fps :D



  1. Where exactly are you getting that 1.8 GB VRAM usage number from?
  2. What exactly makes you think for even a moment that maxing out that slider is in any way a good idea?
  3. Do you have any other programs running at all while using Second Life?
  4. Do you view any sort of video content anywhere at all while using Second Life (that includes from within the Viewer itself)?

The list goes on and on.

I will add in as well that part of the takeaway for that tooltip should be telling you that what you are doing is taking VRAM away from other GPU related functions to use it solely for your Texture Buffer - meaning it is not available for anything else.

ETA: It is also telling you that the setting does not care one whit if other applications or the system needs the VRAM it is looking to dedicate for that buffer, it will force that amount to be set, to the detriment of everything else.

I've got a 6 GB GPU myself. I have that set no higher than a little over 3GB to leave room for all other applications. And they do need plenty of room.

Edited by Solar Legion

28 minutes ago, Ardy Lay said:

Yeah, wasted watt-hours, I suppose, but, the location seen by the eye is enough fresher to make a difference between winning and losing a competitive e-sports match and winning money to buy a 240Hz display to win the next one with!  I'm not making this up.  Lack of understanding does not alter facts.

And currently 360 Hz on the world's fastest gaming monitor from ASUS, though only at full HD (1920 x 1080 at 360 Hz) due to limitations of DisplayPort and HDMI. The next generation of DisplayPort and HDMI should make higher refresh rates possible at 1440p and 4K resolutions.


3 minutes ago, Solar Legion said:
  1. Where exactly are you getting that 1.8 GB VRAM usage number from?
  2. What exactly makes you think for even a moment that maxing out that slider is in any way a good idea?
  3. Do you have any other programs running at all while using Second Life?
  4. Do you view any sort of video content anywhere at all while using Second Life (that includes from within the Viewer itself)?

The list goes on and on.

I will add in as well that part of the takeaway for that tooltip should be telling you that what you are doing is taking VRAM away from other GPU related functions to use it solely for your Texture Buffer - meaning it is not available for anything else.

I've got a 6 GB GPU myself. I have that set no higher than a little over 3GB to leave room for all other applications. And they do need plenty of room.

Thanks for the feedback. I got the number from Windows Task Manager's GPU page and from CPUID's Hardware Monitor.

I am fully aware other programs also use VRAM. Normally I just run SL with Firestorm and nothing else, but from time to time I open e-mail to check/answer, or use Firefox to view Flickr or YouTube videos while in world. The load reported by Task Manager is around 18-24% on the GPU with 1.9-2.0 GB of VRAM in use (right now Firestorm is running, with Firefox showing this forum, a Flickr page and a YouTube video).

Should I do video recording, work in say GIMP with 4K images etc., I manually lower the texture buffer in FS before starting the recording or other such tasks. And yes, web browsers, especially Chrome, are memory hogs!

 


2 hours ago, Wulfie Reanimator said:

It's very hard to explain with words, but you can definitely feel the difference (in mouse movement) between a game that runs at 60 FPS vs 240 FPS, even if your monitor isn't displaying more than 60 FPS. You can also feel it when the game's framerate isn't consistent, even if it never drops below whatever your monitor can display. (That is one good reason to limit your FPS -- to keep it consistent.)

I think you and @Ardy Lay are speaking past each other, though. If the game is running at 240 but your monitor at 60, you're not going to see things faster. But if you actually had a 240Hz monitor, then it would be true.

I absolutely understand the difference between seeing a game rendered at 240Hz vs 60Hz. That's obvious. What's not obvious is why one would render 440fps for a 60Hz monitor. In that scenario only one frame in 6.7 actually makes it to the frame buffer.

If a game program can render a frame in 2.5ms, and the monitor can only display one every 16.7ms, other than because of an inability to predict frame rendering time, why wouldn't the game just wait until 2.5+safety margin ms before the next monitor frame starts, then render the scene? I realize there's nothing else for the hardware to do at that time, so I'll guess there's just no reason to be more sophisticated about this. I wonder if games on battery-powered devices can afford to be so cavalier about the use of computing resources.
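To make that concrete, here's a rough C-style sketch of the kind of "render as late as possible" pacing I'm imagining. The helper names (get_time_ms, sleep_ms, wait_for_vsync) are made up for illustration; this is not quoting any real engine.

/* Hypothetical just-in-time pacing: idle through most of the refresh interval,
   then render so the finished frame is as fresh as possible at scan-out. */
const double frame_budget_ms    = 1000.0 / 60.0;  /* 16.7 ms per refresh on a 60Hz monitor */
const double render_estimate_ms = 2.5;            /* expected time to render one frame     */
const double safety_margin_ms   = 2.0;            /* slack in case the estimate is wrong   */

while (!quit)
{
    process_input();

    double idle_ms = frame_budget_ms - render_estimate_ms - safety_margin_ms;
    if (idle_ms > 0)
        sleep_ms(idle_ms);     /* do nothing instead of re-rendering the same scene */

    render_frame();            /* finishes just before the next scan-out begins     */
    wait_for_vsync();          /* present the frame on that scan-out                */
}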


1 minute ago, Madelaine McMasters said:

If a game program can render a frame in 2.5ms, and the monitor can only display one every 16.7ms, other than because of an inability to predict frame rendering time, why wouldn't the game just wait until 2.5+safety margin ms before the next monitor frame starts, then render the scene? I realize there's nothing else for the hardware to do at that time, so I'll guess there's just no reason to be more sophisticated about this. I wonder if games on battery-powered devices can afford to be so cavalier about the use of computing resources.

applications on low power devices often only update the display frame buffer between display screen scans

with the Linden viewer this can be enabled/disabled with the Debug Setting disableVerticalSync, which Linden have set to TRUE by default. TRUE means shove updates to the display frame buffer as fast as the CPU can. FALSE effectively sets a maximum of one update per display screen scan

not sure why Linden don't have this set to FALSE as the default    
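for illustration only, in a plain OpenGL/GLFW program the equivalent switch is usually the swap interval. this is a generic sketch, not the viewer's actual handling of disableVerticalSync

#include <GLFW/glfw3.h>

/* generic sketch: swap interval 0 = push frames as fast as possible
   (like disableVerticalSync TRUE), 1 = at most one swap per display scan (like FALSE) */
void set_vsync(GLFWwindow *window, int disable_vertical_sync)
{
    glfwMakeContextCurrent(window);
    glfwSwapInterval(disable_vertical_sync ? 0 : 1);
}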


3 minutes ago, Mollymews said:

applications on low power devices often only update the display frame buffer between display screen scans

with the Linden viewer this can be enabled/disabled with the Debug Setting disableVerticalSync, which Linden have set to TRUE by default. TRUE means shove updates to the display frame buffer as fast as the CPU can. FALSE effectively sets a maximum of one update per display screen scan

not sure why Linden don't have this set to FALSE as the default    

Imagine a scenario where the viewer can produce 59 fps. If rendering is started at VerticalSync on a 60Hz monitor, it'll never be done in time for the next frame, so the update won't occur until the frame after. The result will be 30fps output. That's why it's not FALSE by default. When you can't keep up, you can't afford to wait. If you're way faster than the monitor, there's no harm in waiting. We all know where SL falls in that comparison ;-).
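Quick back-of-the-envelope version of that, just the arithmetic (assuming a strict "wait for the next vblank" policy):

/* Rough arithmetic only -- assumes every frame must wait for a vblank to be shown. */
double refresh_interval_ms = 1000.0 / 60.0;  /* 16.67 ms between vblanks       */
double render_time_ms      = 1000.0 / 59.0;  /* 16.95 ms to render one frame   */

/* The frame misses the first vblank by a fraction of a millisecond, so it waits
   for the second: 2 * 16.67 ms = 33.3 ms per displayed frame, i.e. ~30 fps.     */
int    vblanks_per_frame = (int)(render_time_ms / refresh_interval_ms) + 1;      /* = 2   */
double displayed_fps     = 1000.0 / (vblanks_per_frame * refresh_interval_ms);   /* ~30.0 */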


1 hour ago, Madelaine McMasters said:

I absolutely understand the difference between seeing a game rendered at 240Hz vs 60Hz. That's obvious. What's not obvious is why one would render 440fps for a 60Hz monitor. In that scenario only one frame in 6.7 actually makes it to the frame buffer.

If a game program can render a frame in 2.5ms, and the monitor can only display one every 16.7ms, other than because of an inability to predict frame rendering time, why wouldn't the game just wait until 2.5+safety margin ms before the next monitor frame starts, then render the scene? I realize there's nothing else for the hardware to do at that time, so I'll guess there's just no reason to be more sophisticated about this. I wonder if games on battery-powered devices can afford to be so cavalier about the use of computing resources.

I wasn't explaining a difference in what you see. It's going to be pretty difficult to talk about it in much depth because there's a ton of nuance, especially when you already missed what I was saying and I don't know how much prior knowledge you have about anything I've said so far or could say. I'll try though.

When you're developing a game, the easiest thing to do is just have one big infinite loop with a delay at the end.

while (!quit)
{
    frame_start = get_time();            // timestamp the start of this frame
    process_input();                     // sample keyboard/mouse state once per frame
    render_frame();                      // draw the scene
    wait_until_next_frame(frame_start);  // sleep until the next frame is due, capping the framerate
}

Here, your inputs are directly related to the current framerate. I think this (and simple variations of it) is the most common way games handle input, even big titles. Let's say you're playing a first-person game at 60 FPS, just walking and looking around.

Your movement is based on velocity over time. Higher FPS means less time between updates from your keyboard, which means that higher FPS gives you more precise movement regardless of how often your monitor gets updated with a new image. Your movement feels (and is) more precise.

The same applies to your mouse; the trajectory of your cursor changes as your FPS is increased. Imagine a curve that gets subdivided with more and more resolution. You get a smoother curve. Your camera-control feels (and is) more precise.
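To put numbers on the movement part, the usual per-frame update is something like this (the player/input names are just illustrative, not from any particular engine):

double dt = get_time_since_last_frame();  /* seconds since the previous update              */
player.vel_x = input_x() * move_speed;    /* input sampled once per frame                   */
player.vel_y = input_y() * move_speed;
player.pos_x += player.vel_x * dt;        /* at 240 FPS dt is ~4.2 ms, at 60 FPS ~16.7 ms,  */
player.pos_y += player.vel_y * dt;        /* so direction changes land four times as finely */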

But since the above code is pretty naive, there are of course more advanced ways to do it. You can do "subframe" processing, meaning that you might process input multiple times before drawing a new frame (for example, when the monitor refresh rate is 60 and the game can reach 240).

while (!quit)
{
    elapsed += get_time_since_last_time();  // accumulate time since the last pass through the loop
    process_input();                        // input is sampled every pass, even when no frame is drawn
    if (should_draw_new_frame(elapsed))     // e.g. a full monitor refresh interval has passed
    {
        render_frame();
        elapsed = 0;                        // reset the accumulator (a real loop might subtract the interval instead)
    }
}

At that point it becomes a question of what you want to tell the player. You could report either the actual render-rate, or how often the whole logic is run. Which one is more important (and the side effects of picking either) is subject to individual judgement.
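For example, counting both rates separately might look something like this (variable names are mine, just to show the idea):

/* Count logic passes and drawn frames over one-second windows;
   which number the FPS counter reports is the design decision above. */
int    logic_ticks = 0, frames_drawn = 0;
double counter_elapsed = 0;

while (!quit)
{
    double dt = get_time_since_last_time();
    elapsed         += dt;
    counter_elapsed += dt;

    process_input();
    logic_ticks++;

    if (should_draw_new_frame(elapsed))
    {
        render_frame();
        frames_drawn++;
        elapsed = 0;
    }

    if (counter_elapsed >= 1.0)
    {
        /* report frames_drawn as the render rate, logic_ticks as the update rate */
        logic_ticks = frames_drawn = 0;
        counter_elapsed = 0;
    }
}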

Edited by Wulfie Reanimator

35 minutes ago, Madelaine McMasters said:

Imagine a scenario where the viewer can produce 59 fps. If rendering is started at VerticalSync on a 60Hz monitor, it'll never be done in time for the next frame, so the update won't occur until the frame after. The result will be 30fps output. That's why it's not FALSE by default. When you can't keep up, you can't afford to wait. If you're way faster than the monitor, there's no harm in waiting. We all know where SL falls in that comparison ;-).

not sure of the exact details of how Linden do it

generally tho there are two buffers: the primary buffer, which is available to be rendered, and a secondary buffer, which is being built. When the secondary buffer is built/completed it is made available as the new primary for rendering, and the old primary becomes the secondary. So if the buffer can't be built in a single frame it doesn't matter; it can take as long as it needs to complete the build of the secondary buffer, which only gets rendered when it becomes the primary. If on a frame the buffer is the same as the previous frame then the buffer is not (re)rendered

 

edit to add more: while this is not a wholly technical explanation, I tend to think about FPS as the rate at which the secondary buffer is built/completed and made available for rendering
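a really rough sketch of the swap idea, with made-up names, not the viewer's actual code

/* two buffers: "front" is whatever gets displayed, "back" is whatever is being built.
   the swap only happens when the back buffer is finished, however long that took */
frame_buffer *front = &buffer_a;
frame_buffer *back  = &buffer_b;

while (!quit)
{
    build_scene_into(back);     /* may take more than one display refresh */

    frame_buffer *tmp = front;  /* finished buffer becomes the new front  */
    front = back;
    back  = tmp;

    display(front);             /* only completed frames are ever shown   */
}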

Edited by Mollymews

9 minutes ago, Wulfie Reanimator said:

I wasn't explaining a difference in what you see. It's going to be pretty difficult to talk about it in much depth because there's a ton of nuance, especially when you already missed what I was saying and I don't know how much prior knowledge you have about anything I've said so far or could say. I'll try though.

When you're developing a game, the easiest thing to do is just have one big infinite loop with a delay at the end.


while (!quit)
{
    frame_start = get_time();
    process_input();
    render_frame();
    wait_until_next_frame(frame_start);
}

Here, your inputs are directly related to the current framerate. I think this (and simple variations of it) is the most common way games handle input, even big titles. Let's say you're playing a first-person game at 60 FPS, just walking and looking around.

Your movement is based on velocity over time. Higher FPS means less time between updates from your keyboard, which means that higher FPS gives you more precise movement regardless of how often your monitor gets updated with a new image. Your movement feels (and is) more precise.

The same applies to your mouse; the trajectory of your cursor changes as your FPS is increased. Imagine a curve that gets subdivided with more and more resolution. You get a smoother curve. Your camera-control feels (and is) more precise.

But since the above code is pretty naive, there are of course more advanced ways to do it. You can do "subframe" processing, meaning that you might process input multiple times before drawing a new frame (for example, when the monitor refresh rate is 60 and the game can reach 240). At that point it becomes a question of what you want to tell the player. You could report either the actual render-rate, or how often the whole logic is run. Which one is more important (and the side effects of picking either) is subject to individual judgement.

Well, I certainly don't know how modern games are designed, but movement seems to me to be a part of the physics simulation which drives the rendering but isn't dependent on it. If this is a limitation of modern game platforms, so be it. It seems inefficient. I imagine SL animations, which are not part of the physics simulation, dice up as finely as the renderer can draw, but you still run into the monitor frame rate limit.
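(For reference, the decoupling I'm describing is usually done with a fixed-timestep loop; a generic sketch with made-up function names, not anything SL actually does:)

/* physics always advances in exact 1/120 s steps; rendering happens as often
   (or as rarely) as the display side allows, independent of the simulation. */
const double physics_dt = 1.0 / 120.0;
double accumulator = 0.0;

while (!quit)
{
    accumulator += get_time_since_last_time();
    process_input();

    while (accumulator >= physics_dt)   /* catch up in fixed-size steps */
    {
        step_physics(physics_dt);
        accumulator -= physics_dt;
    }

    render_frame();                     /* however many steps just ran  */
}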

6 minutes ago, Mollymews said:

not sure of the exact details of how Linden do it

generally tho there are two buffers: the primary buffer, which is available to be rendered, and a secondary buffer, which is being built. When the secondary buffer is built/completed it is made available as the new primary for rendering, and the old primary becomes the secondary. So if the buffer can't be built in a single frame it doesn't matter; it can take as long as it needs to complete the build of the secondary buffer, which only gets rendered when it becomes the primary. If on a frame the buffer is the same as the previous frame then the primary buffer is not rendered

https://www.anandtech.com/show/2794/2


6 minutes ago, Madelaine McMasters said:

the article gets the basics right.  I got my understanding of FPS from how FRAPS does it, as mentioned in the article

i have my NVidia set to Adaptive VSync. Which for me works out pretty ok in SL.  More here: https://www.nvidia.com/en-us/geforce/technologies/adaptive-vsync/technology/


55 minutes ago, Madelaine McMasters said:

Well, I certainly don't know how modern games are designed, but movement seems to me to be a part of the physics simulation which drives the rendering but isn't dependent on it. If this is a limitation of modern game platforms, so be it. It seems inefficient. I imagine SL animations, which are not part of the physics simulation, dice up as finely as the renderer can draw, but you still run into the monitor frame rate limit.

Input, physics, and rendering (along with many other things like audio/networking/file-handling/etc) can generally all be completely separated from each other, and often are, but there are still many modern games (from big triple-A developers for games like Need For Speed, Dark Souls, or Fallout) that do the dumbest things such as literally tie the physics to framerate. Try googling "tied to framerate" and you'll find lots of examples. It's not necessarily even that the engines used by game developers can't do it properly, it's just that people make mistakes and bad decisions.

Here are a couple posts about specific games that have subframe inputs:

That said, the SL viewer doesn't do any kind of physics handling at all. It's happening completely on LL's servers, the viewer only asks "may I please move forward?" and the server will either grab its hand or respond with "no dummy, that's a wall."

Edited by Wulfie Reanimator

9 hours ago, Rachel1206 said:

So explain and teach me why it is bad to set the Viewer Texture Memory Buffer to 4 GB when I've got a GPU with 6 GB of dedicated VRAM. Normally the dedicated VRAM usage (system + viewer) for me at home etc. is around 1.8 GB, and at high-traffic clubs around 3.2 GB.

In Firestorm the popup help says: "The minimum amount of memory to allocate for textures. This will make sure the specified amount will always be used for textures, even if it exceeds the amount of available video memory..."

How does usage of a 4 GB texture memory buffer exceed my GPU VRAM?

 

Just unimportant other things like the operating system and other applications you might be running at the same time. Or silly things like vertex buffers for the rendered objects. I know, completely unimportant compared to cranking up the texture setting as high as possible...
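Rough illustrative numbers (not measurements from any specific system) of where a 6 GB card's memory can go:

/* purely illustrative budget for a 6 GB card -- the real numbers vary per system */
int total_vram_mb      = 6144;
int desktop_compositor = 500;   /* OS / desktop compositor, roughly                */
int browser_and_video  = 700;   /* a browser tab playing video, roughly            */
int viewer_non_texture = 1000;  /* framebuffers, shadow maps, vertex buffers, ...  */

int sensible_texture_cap_mb = total_vram_mb - desktop_compositor
                            - browser_and_video - viewer_non_texture;  /* ~3.9 GB  */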


But the strange thing in SL is that you usually get great performance with a 'new' card, but over time it quite often just degrades back to about the level it was before...

BTW, I went to the Gooseberry Estate sims - nice place for sure - and with region lighting, shaders and shadows turned on, I seemed to average about 17-30 fps on each of the Gooseberry sims.

Firestorm 6.4.13 (63251) Mar  2 2021 18:51:46 (64bit / SSE2) (Firestorm-Releasex64) with Havok support
Release Notes

You are at 91.8, 154.1, 28.1 in Gooseberry Forest located at simhost-0175169592b32bd5f.agni
SLURL: http://maps.secondlife.com/secondlife/Gooseberry Forest/92/154/28
(global coordinates 135,772.0, 294,298.0, 28.1)
Second Life Server 2021-03-31.557694
Release Notes

CPU: Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz (3792 MHz)
Memory: 32672 MB
OS Version: Microsoft Windows 10 64-bit (Build 19042.867)
Graphics Card Vendor: NVIDIA Corporation
Graphics Card: GeForce GTX 1080/PCIe/SSE2
Graphics Card Memory: 8192 MB

Windows Graphics Driver Version: 27.21.14.6192
OpenGL Version: 4.6.0 NVIDIA 461.92

RestrainedLove API: (disabled)
libcurl Version: libcurl/7.54.1 OpenSSL/1.0.2l zlib/1.2.8 nghttp2/1.40.0
J2C Decoder Version: KDU v8.0.6
Audio Driver Version: FMOD Studio 2.01.08
Dullahan: 1.8.0.202007261348
  CEF: 81.3.10+gb223419+chromium-81.0.4044.138
  Chromium: 81.0.4044.138
LibVLC Version: 2.2.8
Voice Server Version: Not Connected
Settings mode: Firestorm
Viewer Skin: Firestorm (High Contrast)
Window size: 1920x1027 px
Font Used: Deja Vu (96 dpi)
Font Size Adjustment: 0 pt
UI Scaling: 1
Draw distance: 216 m
Bandwidth: 2950 kbit/s
LOD factor: 2.25
Render quality: High (5/7)
Advanced Lighting Model: Yes
Texture memory: 2048 MB (1)
VFS (cache) creation time (UTC): 2021-2-12T19:41:38 
Built with MSVC version 1916
Packets Lost: 0/50,149 (0.0%)
April 08 2021 06:04:40 SLT

Edited by Jackson Redstar

11 hours ago, Wulfie Reanimator said:

If the game is running at 240 but your monitor at 60, you're not going to see things faster. But if you actually had a 240Hz monitor, then it would be true.

Are you from the future, where frames still wait to be drawn here in the past?



19 minutes ago, Jackson Redstar said:

But the strange thing in SL is that you usually get great performance with a 'new' card, but over time it quite often just degrades back to about the level it was before...

Since it is VERY obvious that we can't expect people (well most people) to care much about how heavy their avatars and homes are -- both in triangles and textures -- this will likely ALWAYS be the case.  It certainly has been historically.  

 

When I joined oh so many years ago I had a pretty good computer for those days. I was working in multimedia web design at the time. I STILL had to get a faster graphics card in order to enjoy the pre-mesh and 256 textured SL. My friends that joined at the same time were also in the "biz" but more on the techie side. They too had to get better graphics cards.   So yes, none of this is really new. 

 

I WILL say, that the devs pretty much solved similar issues in Sansar.  When I joined in over there in the third wave of creators (open beta, different TOS :D) Medhue and his partner and I were about the only ones making streamlined mesh and texture builds. You could get to our experiences quickly (I think they were both around 17 seconds) at that time.  Other  experiences (think big sims in some cases) could take half an hour (really, not exaggerating here) to get into. Obviously those were TOO HEAVY.   But eventually, over a year or so those heavy sims were usable because of the changes the devs made, not because we all got new graphics cards :D.

 

So many things come into play here.  

But as long as folks go for the "OMG it is so very pretty I don't care if it is half a million triangles I WANT IT" items -- we will most likely always be in a race to keep up.

 

So far I haven't crashed at all with the new computer. I am still VERY TREPIDATIOUS in my demeanor. I am semi-consciously walking on eggshells EXPECTING to go to a black screen. But fingers crossed this will be a good machine. It is running at 25 C (all parts) doing normal things including SL. It cranks up some when baking large projects in Blender. If I can get it to last well for five years, it's about a dollar and a quarter a day cost-wise. I can certainly live with that.


4 minutes ago, Chic Aeon said:

Since it is VERY obvious that we can't expect people (well most people) to care much about how heavy their avatars and homes are -- both in triangles and textures -- this will likely ALWAYS be the case.  It certainly has been historically.  

 

When I joined oh so many years ago I had a pretty good computer for those days. I was working in multimedia web design at the time. I STILL had to get a faster graphics card in order to enjoy the pre-mesh and 256 textured SL. My friends that joined at the same time were also in the "biz" but more on the techie side. They too had to get better graphics cards.   So yes, none of this is really new. 

 

I WILL say, that the devs pretty much solved similar issues in Sansar.  When I joined in over there in the third wave of creators (open beta, different TOS :D) Medhue and his partner and I were about the only ones making streamlined mesh and texture builds. You could get to our experiences quickly (I think they were both around 17 seconds) at that time.  Other  experiences (think big sims in some cases) could take half an hour (really, not exaggerating here) to get into. Obviously those were TOO HEAVY.   But eventually, over a year or so those heavy sims were usable because of the changes the devs made, not because we all got new graphics cards :D.

 

So many things come into play here.  

But as long as folks go for the "OMG it is so very pretty I don't care if it is half a million triangles I WANT IT" items -- we will most likely always be in a race to keep up.

 

So far I haven't crashed at all with the new computer. I am still VERY TREPIDATIOUS in my demeanor. I am semi-consciously walking on eggshells EXPECTING to go to a black screen. But fingers crossed this will be a good machine. It is running at 25 C (all parts) doing normal things including SL. It cranks up some when baking large projects in Blender. If I can get it to last well for five years, it's about a dollar and a quarter a day cost-wise. I can certainly live with that.

Well, I am waiting for graphics card prices to normalize and hope to get a 3070 or so then. With that I should be good to go for probably at least another 7-8 years.


12 hours ago, Rachel1206 said:

So explain and teach me why it is bad to set the Viewer Texture Memory Buffer to 4 GB when I've got a GPU with 6 GB of dedicated VRAM.

Think about it this way.

Games get to run with the assumption they are the only thing running. They are designed specifically to consume everything your computer has to offer, and will happily let you push things to the point of crashing.

SL gets to run at the same time as your desktop, chrome with 40 tabs doing literally anything, photoshop, blender, another copy of SL, discord, twitch streams, whatever actual work demands, an actual game, etc etc. SL is not designed to take everything your computer has and then some, and can't take advantage of the extra anyway.

 


13 minutes ago, Coffee Pancake said:

Think about it this way.

Games get to run with the assumption they are the only thing running. They are designed specifically to consume everything your computer has to offer, and will happily let you push things to the point of crashing.

SL gets to run at the same time as your desktop, chrome with 40 tabs doing literally anything, photoshop, blender, another copy of SL, discord, twitch streams, whatever actual work demands, an actual game, etc etc. SL is not designed to take everything your computer has and then some, and can't take advantage of the extra anyway.

 

"chrome with 40 tabs " - that is usually a girl thing! lol But really this is why you should shut down everything you can while on SL if you need to keep FPS up as much as possible


40 minutes ago, Jackson Redstar said:

But really this is why you should shut down everything you can while on SL if you need to keep FPS up as much as possible

SL runs from a single thread and will barely get a modern graphics card out of bed. If you want to keep the FPS up, keep the draw distance down and avoid places with other avatars.


17 hours ago, Chic Aeon said:

GLAD THE ANSWER WAS FOUND. WHY would you want to limit your fps????? It's a puzzle why that option is even in there, but there's likely a reason. Maybe machinima, as there was a limit option built into FRAPS (when we could use FRAPS :D)

Typically you add an fps limiter to avoid silly cases like your GPU overheating and sucking massive amounts of energy while it renders the static pause screen of a game because you left the house for work with the game still open. Some games had that issue and rendered 1000+ fps on the pause screen, breaking the GPU (or burning down the house...).

After all, it's totally useless (if you go by https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem ) to render more than 2x the Hz of your display in fps (fps is basically Hz), so setting a limit in the display's update range is useful to save energy.
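The simplest form of such a limiter is just a sleep in the main loop; a generic sketch, not Firestorm's actual implementation:

/* cap the loop at target_fps by sleeping away whatever is left of each frame's budget */
const double target_fps      = 60.0;
const double frame_budget_ms = 1000.0 / target_fps;

while (!quit)
{
    double start_ms = get_time_ms();
    process_input();
    render_frame();

    double spent_ms = get_time_ms() - start_ms;
    if (spent_ms < frame_budget_ms)
        sleep_ms(frame_budget_ms - spent_ms);  /* GPU and CPU idle instead of spinning */
}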

 


56 minutes ago, Kathrine Jansma said:

After all, it's totally useless (if you go by https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem ) to render more than 2x the Hz of your display in fps (fps is basically Hz), so setting a limit in the display's update range is useful to save energy.

I think that's why I may have set my FS frame limit to 60 back before my 3070 card ... but it never got there. It's broken in FS in some way, because where I was getting 30 max in places (never the 60 I set), the minute I turned that frame limit off it hit 150 fps. If FS's frame limit check had been working, I should have been consistently at 60 all the time in that spot, unless I don't understand what the limiter is supposed to do.

Edited by Katherine Heartsong

6 hours ago, Ansariel Hiller said:

Just unimportant other things like the operating system and other applications you might be running at the same time. Or silly things like vertex buffers for the rendered objects. I know, completely unimportant compared to cranking up the texture setting as high as possible...

Yes, all that is obvious to anyone who has worked professionally with Windows and programming for many years. Anyway, I push the limits and I know what I am doing (mostly), always keeping an eye on VRAM usage and temperature.

And no, I do not have my C# developer environment open, video encoding in progress, Blender and GIMP in use, or the memory hog Chrome with 100 open tabs... - when in SL.

What I would like is precise numbers and an explanation of why 2 GB of free VRAM on a 6 GB GPU is not enough. Surely a GPU with 8/10 GB VRAM would be better.

2 hours ago, Coffee Pancake said:

SL runs from a single thread and will barely get a modern graphics card out of bed. If you want to keep the FPS up, keep the draw distance down and avoid places with other avatars.

 

I have no problems getting excellent FPS in general even with 240 meters as the default draw distance. And yes, I use presets for clubs and events (low number of non-impostors, draw distance 64/96 meters, etc.).

As I have written countless times, all modern GPUs like the GeForce GTX series with 4GB or more handle SL and OpenGL flawlessly and without problems. The highest GPU usage I have noticed when running Firestorm is 24% or so.

