GTX 670 4gb vram GPU


rgelles

So this GPU has a lot of bang and VRAM for the buck, and I was wondering if anyone has tried one of these with Houdini yet. I know it would not be used for production since it is not supported per se, but as a hobbyist, this much VRAM and this many CUDA cores would be exorbitant in a workstation card. Yet it would seem to be pretty useful for some simulations in Houdini. (Not to mention one kick-butt gaming card.)

Rich

I have never seen that GPU simulation thing work in practice. The drivers usually don't work, and if you manage to find one that does, you'll quickly find it won't work when the sims are production-size.

Maybe if you have an exorbitant card it will work, though.

On a GTX 580 with 3 GB of RAM you can do 370^3 voxels phenomenally fast, which is not super high res, but definitely usable in some situations.
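For a rough sense of where that ceiling comes from, here is a back-of-the-envelope VRAM estimate, assuming 32-bit floats and roughly eight scalar fields' worth of data (density, temperature, fuel, the velocity components, plus a couple of solver buffers); the actual field count depends on the sim setup:

# Rough VRAM footprint of a 370^3 pyro sim (illustrative assumptions only).
res = 370
bytes_per_voxel = 4      # one float32 per voxel per field
fields = 8               # assumed field count; varies with the solver setup

voxels = res ** 3                                    # ~50.7 million voxels
total_gib = voxels * bytes_per_voxel * fields / 2 ** 30
print(f"~{total_gib:.2f} GiB")                       # ~1.5 GiB, fits inside 3 GB with headroom

Push the resolution much higher and the footprint quickly outgrows the card, which is roughly where a figure like 370^3 comes from.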

-G

Yes, from a few searches here and there, I think folks are relatively happy with the GTX 570 or GTX 580 with Houdini. That's why I'm curious about the GTX 670 or 680 models, especially with 4 GB of VRAM. I saw one post somewhere that seemed to indicate that although the GTX 600 series is far more powerful than the 570 or 580, its OpenGL performance was worse.

Yes, from a few searches here and there, I think folks are relatively happy with the GTX 570 or GTX 580 with Houdini. That's why I'm curious about the GTX 670 or 680 models, especially with 4 GB of VRAM. I saw one post somewhere that seemed to indicate that although the GTX 600 series is far more powerful than the 570 or 580, its OpenGL performance was worse.

It's the other way around -- the 600 series was designed as a graphics card first, compute second. It'll beat the 500s at graphics (OpenGL) but lose at compute (OpenCL).

Oh, I see. So are you saying you think the GTX 580 or 570 would behave better in Houdini overall, then? It seems that the OpenGL niceness is handy for building scenes and navigating, but are you saying the 600s would not be as fast at helping with pyro calculations/rendering? Considering the CUDA core count is more than 3x higher in the 600s, that seemed surprising.

Rich

Well, it gets a bit complicated, because while the 400 and 500 series are of the same generation, the 600 series is a very different GPU architecture.

The cores in the 600 series are not the same as the cores in the 400/500 series. They are simpler, doing no instruction reordering, and instead of the shaders operating at twice the GPU's frequency (the shader clock), the entire GPU runs at the same clock. So, comparing the 580 and 680:

580: 512 cores,       instruction reordering,    1.55 GHz shader clock
680: 1536 cores (3x), no instruction reordering, 1 GHz clock (~33% slower, plus ~50 MHz average GPU boost (~5%))

Overall: the 680 is 3 x 0.66 = ~2x faster, minus some variable amount for the removal of instruction reordering (??%).
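If you want to redo that estimate yourself, it's just cores times clock (the 0.66 is roughly 1 GHz over the 580's 1.55 GHz shader clock; the in-order penalty is left out since it varies by workload):

# Back-of-the-envelope raw throughput ratio from the figures above.
cores_580, clock_580 = 512, 1.55    # GHz, shader clock
cores_680, clock_680 = 1536, 1.0    # GHz, whole-GPU clock, ignoring the small boost

ratio = (cores_680 * clock_680) / (cores_580 * clock_580)
print(f"~{ratio:.1f}x")             # ~1.9x before subtracting the reordering penalty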

The reason for the clock speed decrease is that power consumption increases with the square of the clock frequency (2x freq = 4x power consumption), while adding more shaders is a linear increase (3x shaders = 3x power consumption). This prompted Nvidia to reduce power and heat by adding more shaders but clocking them lower.
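Restated as arithmetic (this is just the scaling argument above, not measured numbers):

# Two ways to chase more performance from the same baseline design:
perf_2x_clock,   power_2x_clock   = 2.0, 2.0 ** 2   # double the clock: 2x perf, ~4x power
perf_3x_shaders, power_3x_shaders = 3.0, 3.0        # triple the shaders: 3x perf, ~3x power

print(perf_2x_clock / power_2x_clock)       # 0.5 perf per watt, half the baseline's 1.0
print(perf_3x_shaders / power_3x_shaders)   # 1.0 perf per watt, same as the baseline

So tripling the shaders holds performance-per-watt roughly flat, while doubling the clock halves it.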

However, more die space is now needed for the shaders, so the shaders themselves had to give up some logic to make them smaller -- and the logic sacrificed was the instruction reordering. As it only benefits the GPU in some cases and is fairly significant in terms of die space, it was the first shader feature on the chopping block. This pushes the responsibility of generating an ideal instruction stream onto the GLSL and OpenCL compilers. This generally doesn't affect graphics much, but the loss of this feature causes OpenCL kernels to execute slower.

Finally, the way the GPU cores are arranged in high-level blocks makes the 680 the successor to the GeForce 560, which was a bit weaker at compute because of the 2:3 scheduler-to-work-unit ratio it used (2 schedulers to 3 blocks of 16 shaders). The 680 has 6 shader blocks (32 shaders/block) to 4 schedulers, giving it a 2:3 ratio as well. So sometimes all 6 blocks are running, and sometimes only 4 can run, depending on the workload. However, if you compare the 680 to the 560, it has a 4x shader advantage and a ~20-25% clock speed advantage. By comparison, the 580 uses a 2:2 scheduler-to-shader-block ratio.

So, in a nutshell, comparing GPUs of different generations by cores or clock speed isn't quite apples-to-apples, and that's the reason the 680 wins some (GL), loses some (CL).

MAlexander-

Wow, OK. I am not going to say I understood all of that, but I think I get the big picture from your explanation; thank you. So it seems that for GL stuff the 670 or 680 will make you a little happier, but for CL you might not be getting all the juice you might have thought.

That said, it does seem like this particular GPU should in theory be a nice one for Houdini overall, just maybe not as awesome as one might have expected. And it seems that if OpenGL is more important, it's good to get, but if CL is more important, sticking with a lower-cost 570 or 580 might be better value.

I have never seen that GPU simulation thing work in practice. The drivers usually don't work, and if you manage to find one that does, you'll quickly find it won't work when the sims are production-size.

Maybe if you have an exorbitant card it will work, though.

Sorry to say, but epic fail. CUDA technology is the future. I am a freelance artist, and I use my 550 Ti every day for rendering complex shots and complex situations, but not inside Houdini. Houdini has no CUDA render engine. I use Blender's renderer, Cycles, for rendering. Sometimes I'm 5 times faster than others... I make the money and they cry ;)

I can light in real time, I can shade in real time, I can animate in real time, I have a real-time material view, and I can see a problem before I have to wait. I have my own benchmark.

I render a scene of a car I made, at full HD photo quality: Houdini Mantra PBR = 4.57 hours, 3ds Max V-Ray = 6.34 hours, Blender Cycles = 9.92 minutes.

Same visible quality ;) Cheers.
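For what it's worth, if anyone wants to try the same thing, switching Cycles onto the GPU can be scripted; this is a minimal sketch against a recent Blender Python API (these property names have moved around between versions, so treat it as an assumption rather than gospel):

# Enable CUDA devices and make Cycles render on the GPU (recent Blender builds).
import bpy

prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'
prefs.get_devices()                  # refresh the detected device list
for device in prefs.devices:
    device.use = True                # enable every detected CUDA device

bpy.context.scene.render.engine = 'CYCLES'
bpy.context.scene.cycles.device = 'GPU'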

Sorry to say, but epic fail. CUDA technology is the future. I am a freelance artist, and I use my 550 Ti every day for rendering complex shots and complex situations, but not inside Houdini. Houdini has no CUDA render engine. I use Blender's renderer, Cycles, for rendering. Sometimes I'm 5 times faster than others... I make the money and they cry ;)

I can light in real time, I can shade in real time, I can animate in real time, I have a real-time material view, and I can see a problem before I have to wait. I have my own benchmark.

I render a scene of a car I made, at full HD photo quality: Houdini Mantra PBR = 4.57 hours, 3ds Max V-Ray = 6.34 hours, Blender Cycles = 9.92 minutes.

Same visible quality ;) Cheers.

I guess it's a good thing most of us don't just do shiny car renders for a living...

I guess it's a good thing most of us don't just do shiny car renders for a living...

Sorry, but you didn't think about my idea for one second. You can use Houdini, then build a pipeline within Houdini to connect directly via Python with the Blender render engine, and if not, you can easily transfer data from Houdini to Blender.
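As a rough illustration of the Blender end (the file paths here are made up, and the exact operators may differ by Blender version), you can run Blender headless, pull in a cache exported from Houdini, and kick off a Cycles render:

# blender --background --python render_cache.py
# Minimal sketch: import an Alembic cache from Houdini and render it with Cycles.
import bpy

bpy.ops.wm.alembic_import(filepath="/caches/shot010_geo.abc")   # hypothetical cache path

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.render.filepath = "/renders/shot010_"                     # hypothetical output prefix
bpy.ops.render.render(animation=True)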

Houdini's renderer, Mantra, is good, and especially good for some special cases, but you will fail in daily business.

V-Ray or Blender are the only render engines where you can push two or three buttons and get a very nice render in a short period of time. Try this in Houdini and you will fail at architecture renderings. Houdini is too complex for fast workflows. So try to combine them like the big studios do: build big networks in Houdini, then cache them out via Alembic, FBX, or OBJ sequences.
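On the Houdini side, that cache-out step can be scripted too; here is a minimal sketch using the hou module and the stock Alembic output driver (the node paths, output file, frame range, and some parameter names are assumptions and may differ between Houdini versions):

# Run in Houdini's Python shell or hython: create an Alembic ROP and write a sequence.
import hou

rop = hou.node("/out").createNode("alembic", "cache_geo")
rop.parm("filename").set("/caches/shot010_geo.abc")    # hypothetical output path
rop.parm("root").set("/obj/geo1")                      # assumed object to export; parm name may vary
rop.render(frame_range=(1, 240))                       # cache the whole sequence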

I have known Houdini for more than 8 years; I know what I'm talking about. Sorry to say it, but Mantra needs GPU support and a much simpler rendering algorithm.

cui....
