
Houdini 12 Sneak Peek


robert.magee


Please give us some more info about Mantra 12.

More info will be available in October; this really was meant to be a sneak "peek." The instancing capabilities already mentioned will be huge for Mantra rendering, and the geometry engine and other enhancements will have an impact, but more specific Mantra optimizations will be part of a later discussion.

robert


You guys are being very subtle, imho H12 SOUNDS FRIKKIN' AWESOME, MIND-BLOWIN', ROCK'N'ROLLIN' HOT STUFF!!! :)

(Sorry, but I had to get it out with the appropriate amount of caps'd letters :))

This is such good news that I can hardly wait for the release. I have always much appreciated the features that SESI implemented throughout the versions -- but there is so much new and useful stuff here that they could easily bump the version number to H14 or even higher. I would have been happy to see _any one_ of the new features in the new version, but having them _all_ is, well... see comment above :)

Anyhow, a lot of work must have gone (and be going) into this, so thank you, guys at SESI, for your efforts.

Oh, how long till October...


I was praying in my dreams for Bullet and GPU support. My XSI friend sent me a link to the H12 sneak peek this morning, and there it is, all together. No more jokes from XSI ICE/Momentum/Arnold users. I can't wait for this release, YOU GUYS ROCK!!!!!!! I had to drink some Captain Morgan to celebrate this great day, hehe.


I can't wait for Cloth and the new handles for rigging/animation, and the speed increases in geometry editing/viewing along with Fluids. It's gonna be a dream come true.

I already have one high-res character waiting to take advantage of H12.


Addition:

I'd also like to see GPU comparisons not only against Tesla but against the latest Quadro (or GeForce) cards as well.

I can't guess how fast a Tesla is!!

Does anyone know if this type of CUDA GPU acceleration can take advantage of the new generation of dual-GPU cards like the Nvidia GTX 590? It has 1024 (512x2) CUDA cores, so maybe there's no need to buy the expensive Tesla card?


Does anyone know if this type of CUDA GPU acceleration can take advantage of the new generation of dual-GPU cards like the Nvidia GTX 590? It has 1024 (512x2) CUDA cores, so maybe there's no need to buy the expensive Tesla card?

Hey Magnus,

see Peter's comment on the previous page. Tesla cards are on a par with Quadro GPU units; the difference is the amount of memory (usually 1-2GB versus 6GB).


Does anyone know if this type of CUDA GPU acceleration can take advantage of the new generation of dual-GPU cards like the Nvidia GTX 590? It has 1024 (512x2) CUDA cores, so maybe there's no need to buy the expensive Tesla card?

Yes, it shouldn't be a problem. You can also SLI/CrossFire two cards to increase the speed/memory available. I'm using a GTX 580 here and look forward to giving the new GPU features a run in H12 :)

Check this out.

http://www.tomshardware.com/reviews/geforce-gtx-590-dual-gf110-radeon-hd-6990,2898-16.html


Hey Magnus,

see Peter's comment on the previous page. Tesla cards are on a par with Quadro GPU units; the difference is the amount of memory (usually 1-2GB versus 6GB).

Ah yes, I saw that comment. I was just curious, since the GTX 590 (the new one) has two GPUs instead of one (the Tesla and GTX 580 have only one), whether there could be any problem with this new kind of architecture and GPU-accelerated CUDA stuff :)

Yes, it shouldn't be a problem. You can also SLI/CrossFire two cards to increase the speed/memory available. I'm using a GTX 580 here and look forward to giving the new GPU features a run in H12 :)

Check this out.

http://www.tomshardware.com/reviews/geforce-gtx-590-dual-gf110-radeon-hd-6990,2898-16.html

Yeah, two GTX 580s in SLI seems to give pretty nice performance! I've got one GTX 580 at home, and it's going to be fun to test it with H12.

We are going to order some GTX 590s here at work to test with Houdini and also with Mari. (Not so sure if the Mari viewport can take advantage of the two GPUs, though?)

So if you use two GTX 590s in SLI, it will be quad-GPU power! I want to see on an H12 benchmark whether the cards actually scale up, hehe!

Edit: some benchmarks of this here:

http://www.guru3d.com/article/geforce-gtx-590-sli-review/

Mostly gaming, but in 3DMark it seems to give some boost with two cards vs. one :)


..

Anyhow, a lot of work must have gone (and be going) into this, so thank you, guys at SESI, for your efforts.

..

Quoted for agreement.

Congratulations to everyone involved in making this possible!


Mostly gaming, but in 3DMark it seems to give some boost with two cards vs. one :)

Yeah, focused on frame rates, which to me isn't important, because most of the scenes we do in Houdini use proxies in the viewports anyway. So it's always real-time, even on low-end cards.

Reading between the lines of the H12 release notes, it sounds like we'll be able to tweak Pyro parameters in real time in the viewport, where that would normally require several minutes of baking on a CPU. I don't think FumeFX, Maya, or XSI can do this, but I haven't played with those much.

I'm so glad I went and put a large power supply in my machine that can handle a second GTX 580 :)


Awesome update; thank you, SESI, for such a tremendous effort to make Houdini lightning fast.

There are some questions that came to mind, and maybe it's not the time to ask them since this was just a sneak peek,

but:

1. Will the cloth solver work mutually with RBDs once again? That would be so cool.

2. GPU computation in Pyro 2.0: does it apply only to Pyro 2.0, or does it mean that some microsolvers can be GPU-accelerated so we can build our own GPU solvers? And is that GPU acceleration CUDA- or OpenCL-based?

Thanks if you can provide any info on that.


2. GPU computation in Pyro 2.0: does it apply only to Pyro 2.0, or does it mean that some microsolvers can be GPU-accelerated so we can build our own GPU solvers? And is that GPU acceleration CUDA- or OpenCL-based?

Thanks if you can provide any info on that.

^ Seconded.

The update looks absolutely brilliant; well done and thank you to SESI for the huge update!


Dear SESI, you have no idea how much these improvements will affect your sales. My studio has become more and more Max-based because of speed alone, yet Houdini is infinitely more powerful; I know it, we all know it. It is difficult for me to convince those in charge of the budget to spend money on slow software, or software that requires a lot of pipeline tweaks to work efficiently. With these improvements, I see a very, very bright future: more seats, more revenue, and more innovation moving forward.

Brilliant work SESI!

Massive improvements, congratulations to everyone involved.


Yes, it shouldn't be a problem. You can also SLI/CrossFire two cards to increase the speed/memory available. I'm using a GTX 580 here and look forward to giving the new GPU features a run in H12 :)

Check this out.

http://www.tomshardware.com/reviews/geforce-gtx-590-dual-gf110-radeon-hd-6990,2898-16.html

I may be wrong here, as I've only seen this mentioned with regard to GPU renderers (a thread on CGTalk; I can't find the link).

I read that you're bound by the amount of memory on a single card. Running SLI does not double the amount of data you can process: four 1GB cards can still only process 1GB of data in the GPU pipeline (same deal with x2 cards, etc.). If you have mixed cards, one with 512MB and one with 1GB, then both cards are limited to processing 512MB of data.

The point from the thread was that a Quadro with fewer CUDA cores but more memory could process larger data sets faster than the gaming cards with higher CUDA core counts but less memory, as it doesn't have to swap back and forth between device-bound and system-bound processors and memory.

That all makes the 3GB GTX 580s look like the most attractive low-cost option, but maybe it doesn't hold true for Houdini 12.

Can any GPU devs provide more concrete facts on this?
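To make the memory rule above concrete, here's a toy sketch (just the mirrored-memory model as I understand it from that thread, not anything Houdini-specific): since the data set gets replicated on every device, the usable working set is the *minimum* of the cards' memories, not the sum.

```python
def usable_gpu_memory_gb(device_mems_gb):
    """Mirrored-memory model: in SLI/multi-GPU CUDA setups the data set
    is replicated on every device, so the usable working set is capped
    by the smallest card's memory, not the total across all cards."""
    return min(device_mems_gb)

# Four 1GB cards still only fit a 1GB data set:
print(usable_gpu_memory_gb([1.0, 1.0, 1.0, 1.0]))  # 1.0
# Mixed 512MB + 1GB cards: both are limited to 512MB:
print(usable_gpu_memory_gb([0.5, 1.0]))            # 0.5
```

Which is exactly why the 3GB GTX 580 looks attractive: the cap is per card, so more memory per card beats more cards.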

