Dilbert Dilweg

Normal Maps, Specularity Maps (& Diffuse coming soon)

Good news xD but unfortunately it means that I need to start designing stuff I can't even see myself (when I turn on lighting in SL, everything slows to like 1 fps :/ )

Kiyoshi Nyoki wrote:

Good news xD but unfortunately it means that I need to start designing stuff I can't even see myself (when I turn on lighting in SL, everything slows to like 1 fps :/ )

Lol. Might be time to upgrade the old dinosaur

Is the GTX 470 considered slow, or is it comparable to the GTX 600 series? I have also heard that I should not get a GTX 680 because it's de facto slower than a GTX 580, at least when it comes to Blender Cycles and CUDA...? Actually, I am more than confused when it comes to selecting the correct computer hardware: when I ask 4 experts, I get 5 differing responses.

A 740 is an onboard chip, from what I've read; it won't be nearly as fast as the high-end 600 series cards. In fact, you can't even compare the two.

A problem with CUDA in Cycles on the 600 series might be what I'm facing myself: too new a chip for the current software. I bought the 670 specifically to use its CUDA cores for the 3ds Max iray renderer, but it turns out it doesn't support iray at all, for now. Both iray and the 670 are NVIDIA, so they will fix it eventually; they said this summer, but it will be before December, when they release their professional card with the same chip type. Anyway, in theory the 680 should be superior when it comes to CUDA: it has 3 times the CUDA cores the 580 has. No idea how Blender utilises those CUDA cores.

For gaming purposes (if I played any besides SL) I'd pick a 680 over a 580 any time, as long as the funds are there. A 680 isn't exactly cheap at around 500 bucks or euros. The 670 is the wiser choice, I personally think (apart from the render debacle). Hey, at least SL runs like a dream.


Compare GTX580 to GTX680


Btw, this is the first time I've bought such a high-end system, since it's my workstation. I would never go so high end for a platform like SL. I even capped my framerates because they were through the roof in most places. You really do not need 100 fps, ever. With that in mind, you can go a lot lower than a 580, 670 or 680 and still have amazing framerates.

Apologies, I meant GTX 470 (I've corrected that in my previous post now).

From what I have learned so far from the Blender developers:

"Comparing GTX 5xx cards with GTX 6xx cards from their technical data is not so easy, because they use different technologies. And it turns out that although the GTX 6xx seems to be the much more powerful card, it actually seems to degrade massively when used in programs which make heavy use of CUDA.

But for games the GTX 6xx cards seem to be the better choice, as they have been 'optimized for gaming'."

 Whatever that means...

Both the high end 500 and 600 series will run SL just fine. I suspect you won't have any real performance issues with a 470 either though.

I don't know the specifics, but the CUDA cores in different families of cards (8-series, 9-series, ..., 500-series, 600-series) are not the same. CUDA is just a name: the 500 cards use Fermi cores, the 600 cards use Kepler cores. As I said, not all software is optimised for, or even compatible with, the new Kepler cores. So while they are probably faster, and you have a lot more of them in the 600 series, some programs can't use all that extra power. When the software "heavily uses" the CUDA cores, that can be a major issue. Again, the reason I bought the 670 was for iray rendering, which can run on a CPU but is designed for the GPU's CUDA cores. With iray and the Kepler cores unable to communicate, that leaves me with CPU rendering only, which I estimate to be about 10 times slower.

Optimised for gaming? That means the card will simply push out more fps. Faster memory, faster processing overall.

That's a pretty vague statement by the Blender devs btw, I have no clue how the CUDA cores are utilised in Blender. For on screen performance I suspect the 600 series will outperform the 500 series. Only very specific tasks will be slower with the 600, until Kepler is fully supported. For 3ds Max, the only thing I know of is the iray plugin.


Kwakkelde Kwak wrote:

Both the high end 500 and 600 series will run SL just fine. I suspect you won't have any real performance issues with a 470 either though.

I don't know the specifics, but the CUDA cores in different families of cards (8-series, 9-series, ..., 500-series, 600-series) are not the same. CUDA is just a name: the 500 cards use Fermi cores, the 600 cards use Kepler cores. As I said, not all software is optimised for, or even compatible with, the new Kepler cores. So while they are probably faster, and you have a lot more of them in the 600 series, some programs can't use all that extra power. When the software "heavily uses" the CUDA cores, that can be a major issue. Again, the reason I bought the 670 was for iray rendering, which can run on a CPU but is designed for the GPU's CUDA cores. With iray and the Kepler cores unable to communicate, that leaves me with CPU rendering only, which I estimate to be about 10 times slower.

Optimised for gaming? That means the card will simply push out more fps. Faster memory, faster processing overall.

That's a pretty vague statement by the Blender devs btw, I have no clue how the CUDA cores are utilised in Blender. For on screen performance I suspect the 600 series will outperform the 500 series. Only very specific tasks will be slower with the 600, until Kepler is fully supported. For 3ds Max, the only thing I know of is the iray plugin.


That is some interesting information, thanks for that, Kwak. I have personally been looking at getting the 670 myself.

One interesting thing you may want to look at: the latest Blender 2.6.3.17 build is compiled with ALL the CUDA kernels. So that looks promising and opens up support for newer cards.

Maybe 3ds Max will catch up soon.

http://www.graphicall.org/444
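For reference, here is a minimal sketch of selecting the CUDA kernels from Blender's Python console. Note this uses the preferences API of current Blender releases, not the 2.6x-era builds discussed in this thread (those exposed the setting under User Preferences > System instead), so treat it as an illustration rather than a recipe for that build:

```python
import bpy  # only available inside Blender's bundled Python

# Select CUDA as the Cycles compute backend (current Blender API;
# the 2.6x builds in this thread used a different preference path).
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "CUDA"
prefs.get_devices()  # refresh the detected device list

# Enable every CUDA device that was found
for dev in prefs.devices:
    dev.use = (dev.type == "CUDA")

# Tell the active scene to render on the GPU instead of the CPU
bpy.context.scene.cycles.device = "GPU"
```

This is a configuration fragment that only runs inside Blender, where the `bpy` module exists.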

Yes, that does sound interesting, but I'm not going to chuck out thousands of dollars worth of software I'm familiar with to use Blender instead :) Anyway, the iray renderer uses CUDA; in fact it was designed specifically for it. It's just that the new Kepler cores can't be used yet. Iray works fine on the 500 series CUDA cores; that's what makes them so much faster in some very specific cases.

I think NVIDIA has their update out already; it's Autodesk that still needs to update their part. I'm not sure, but I will keep looking. It's such a pain that the NVIDIA forums are closed.

 

Here is a benchmark test file:

http://blenderartists.org/forum/showthread.php?239480-2.61-Cycles-render-benchmark

And a results spreadsheet for that:

https://docs.google.com/spreadsheet/ccc?key=0As2oZAgjSqDCdElkM3l6VTdRQjhTRWhpVS1hZmV3OGc#gid=0

I took a closer look at the spreadsheet and then ran the test on my own computer. Here is the result:

 

card      OS                 reports   runtime     my value
GTX 480   Windows 7 64-bit   3         ~45.8 sec   ~46.9 sec
GTX 580   Windows 7 64-bit   6         ~45 sec     -
GTX 680   Windows 7 64-bit   1         ~52 sec     -

Well, from the values I found, it seems like the GTX 680 is slower than the GTX 580 in Blender, just as I had been told by the developers (those who are actually implementing the CUDA support for Blender).

And from the values I also conclude that I am already using something that runs well. But then I wonder why I still sometimes have issues with Second Life frame rates.
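To put the spreadsheet numbers side by side, here is a quick sketch that computes each card's render time relative to the GTX 580 (the times are the approximate medians quoted in the table above; nothing else is assumed):

```python
# Approximate median Cycles benchmark times (seconds) from the
# community spreadsheet quoted above.
times = {
    "GTX 480": 45.8,  # my own run came in at ~46.9 s
    "GTX 580": 45.0,
    "GTX 680": 52.0,
}

baseline = times["GTX 580"]
for card, t in sorted(times.items(), key=lambda kv: kv[1]):
    pct = (t / baseline - 1) * 100
    print(f"{card}: {t:.1f} s ({pct:+.1f}% vs GTX 580)")
# The GTX 680 comes out roughly 15-16% slower than the GTX 580
# on this particular Cycles scene.
```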

Yes, so for rendering in Cycles you want the 480 or 580. I'd like to see the fps while modelling though :) Or while running SL.

If you mainly use Blender for SL, the rendering times will be less critical than when you use it for very high resolution stills with tons of lights and reflections, or even for movies, with a whole lot of frames. When modelling performance is more important (and I suspect that's the case for most SL builders), I'd make that the priority. Any card mentioned will give good results, unless you are building an entire sim in Blender.

If you are a texture maker and do more baking in Blender (assuming the Cycles renderer does that as well) than modelling, it's another story.


Kwakkelde Kwak wrote:

 

If you are a texture maker and do more baking in Blender (assuming the Cycles renderer does that as well) than modelling, it's another story.

That's what sucks about Cycles: you can't bake full renders etc. You can compose one hell of a scene and a final image... but final render bakes are a different story.

 

To get an awesome, crisp image with Cycles you have to run about 2000 to 5000+ passes. My 9800GT enjoys the challenge "not" lol
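Since Cycles render time grows roughly linearly with the sample ("pass") count, you can estimate how long a 2000-5000 sample final render would take from a short test render. A minimal sketch (the 30-second / 100-sample test figures are made up for illustration, not measured):

```python
# Estimate total Cycles render time from a short test render,
# assuming time scales roughly linearly with the sample count.
def estimate_render_time(test_seconds, test_samples, target_samples):
    """Return the projected render time in seconds."""
    return test_seconds * target_samples / test_samples

test_time = 30.0     # seconds for a quick preview render (hypothetical)
test_samples = 100   # samples used in that preview (hypothetical)

for target in (2000, 5000):
    total = estimate_render_time(test_time, test_samples, target)
    print(f"{target} samples: ~{total / 60:.0f} minutes")
```

On those made-up numbers, a 2000-sample render lands around 10 minutes and a 5000-sample one around 25, which is why an older card like a 9800GT struggles with these pass counts.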

Dilbert Dilweg wrote:

My 9800GT enjoys the challenge "not" lol

At least it can enter the challenge, unlike my 670 :) Should I plug in my old 9600GT with 64 shaders instead of the 1344 in the 670?

 


One of the residents, Nala Spires, did a bunch of benchmarks to see which hardware was having the biggest impact on performance. The biggest impact came from fast memory chips on the motherboard. The limit on memory chip speed is the motherboard. So, one can only go so far with that upgrade before one is into a serious upgrade.

You can read my summary of the tests here: Viewer Performance

NVIDIA will eventually get CUDA optimized for the 600 series.

For now it seems the 500 series is the better choice for Blender. Thanks for your input and measurements.


Nalates Urriah wrote:

So, one can only go so far with that upgrade before one is into a serious upgrade.

Very true, but if you can keep the CPU, a motherboard is nowhere near as expensive as a high-end graphics card. So all the "seriousness" is in having to rip your computer apart, not in the cost.

 
