Why is Houdini switching to OpenGL 3.2 but not 4.3?


Hi,

I've never written a program using OpenGL (only DX), so this might be nonsensical. But why not upgrade straight to OpenGL 4.3 instead of moving step by step? One could argue that we don't need the features in 4.3, but wouldn't that make the upgrade even easier, since you wouldn't be using those parts of the library?

Also, from my experience in games, I've seen programmers upgrade an engine straight from DX9 to DX11, bypassing 10. I'm not saying this is a good example, but it's the only one I've got :)

Lastly, once SESI finalizes the upgrade to 3.2 (great work and patience from SESI), wouldn't the same long overhaul start again when they decide to go from, say, 3.2 to 4.0?

Would love to hear some insight on these. I know it's a crazy amount of work either way, no doubt.

Thanks :)

I've run into plenty of bugs with Houdini's display since they switched to GL3.2. It's gotten better since 12.0, but I would be concerned if they focused on newer implementations before fixing existing bugs. Slow and steady wins the race.

I see your point. Although if SESI had gone to 4.3 right away when they decided to upgrade the viewport, I don't think we would have more issues than we do now, and in the end we would have the very latest OpenGL. But if the viewport work is finalized entirely on 3.2, won't we go through the same steps again going from 3.2 to 4.3?

Or maybe going from 3.2 to 4.3 will be as easy as linking to a new library. Only SESI knows, I guess :)

When the viewport project was started, GL3.3 and GL4.0 drivers were a few months old, and a survey I ran on odforce (http://forums.odforce.net/index.php?/topic/11704-graphic-hardware-survey/page__mode__show) showed that only a fraction of users had GL4 hardware (1/3) compared to the number that could run GL3 (3/4). So the choice was made to build a GL3 renderer, with the option to build a GL4 renderer on top of its pipeline later.

As for the minor version choice, there didn't seem to be much in GL3.3 that was really needed, and driver stability is always a concern with new features. With the GL3.3 spec released only months before, I opted to let drivers mature a bit more. GL3.2 was chosen primarily for two core features - the geometry shader and uniform blocks. The geometry shader drives all the decoration display and is a key part of the GL3-accelerated selection in 12.5, while uniform blocks allow very fast substitution of a whole set of related uniforms (such as light or material uniforms). GL3.2 also includes important GL3.0/3.1 features like transform feedback, integer textures, and texture buffer objects.
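
To make the uniform-block point concrete, here is a minimal sketch (illustrative only, not SESI code) of the std140 layout rules that uniform blocks use: all the related uniforms are packed into one buffer at fixed offsets, so the whole set can be swapped with a single buffer bind instead of one glUniform* call per value. The offsets below follow the std140 rules for the basic types shown.

```python
# Sketch of std140 offset computation for a uniform block.
# Each member is rounded up to its base alignment: float aligns to 4,
# vec3 and vec4 align to 16 (vec3 still occupies only 12 bytes).

def std140_offsets(members):
    """members: list of (name, type) with type in {'float', 'vec3', 'vec4'}.
    Returns (dict of byte offsets, total block size padded to 16 bytes)."""
    align_size = {'float': (4, 4), 'vec3': (16, 12), 'vec4': (16, 16)}
    offsets, cursor = {}, 0
    for name, ty in members:
        align, size = align_size[ty]
        cursor = (cursor + align - 1) // align * align  # round up to alignment
        offsets[name] = cursor
        cursor += size
    return offsets, (cursor + 15) // 16 * 16  # pad block to a multiple of 16

# A hypothetical light block: one float, one vec3, one vec4.
offsets, block_size = std140_offsets([
    ('intensity', 'float'),  # offset 0
    ('position',  'vec3'),   # vec3 aligns to 16, so offset 16
    ('color',     'vec4'),   # offset 32
])
```

With these offsets, the CPU side writes the values once into a buffer object, and switching lights or materials is a single bind of a different buffer range.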

A large amount of the GL viewport work is actually common to all the GL renderers - the interface between Houdini geometry and OpenGL. The old H11 code was not scalable at all and couldn't be reused, due to the huge paradigm shift in hardware since the time the foundation for the old renderer was laid (on an SGI machine in the late 90's). So if a GL4 renderer were to appear, it would share this interface and likely be built off the GL3 pipeline. GL4 also represents a much smaller programming paradigm shift than GL1->GL2 or GL2->GL3, so it would fit rather naturally on top of the GL3 pipeline. So we aren't talking about a rewrite for a new renderer version, just incremental updates where required.

The viewport renderer version is the minimum GL version required, so if it were suddenly GL4.3, a large number of users would no longer be able to run it (including all AMD users, as their drivers currently only support 4.2). Tessellation shaders would be a key feature of such a renderer, along with other effects that would benefit from features like the compute shader or the ability to randomly write to textures. However, as static has pointed out, the GL3.2 viewport still needs some work, and beyond that it still has plenty of room for more optimizations and features without requiring GL4.
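
The "minimum required version" gate can be sketched like this (a hypothetical check, not Houdini's actual startup code): parse the major.minor pair out of a GL_VERSION-style string and compare it against the renderer's floor.

```python
# Illustrative minimum-GL-version gate. The version strings below are
# made-up examples of the GL_VERSION format, not real driver output.

def parse_gl_version(version_string):
    """Extract the leading 'major.minor' from a GL_VERSION-style string."""
    major, minor = version_string.split(' ')[0].split('.')[:2]
    return int(major), int(minor)

def can_run(renderer_min, driver_version):
    """True if the driver's reported version meets the renderer's minimum."""
    return parse_gl_version(driver_version) >= renderer_min

# A driver capped at GL4.2 meets a 3.2 minimum but not a 4.3 one,
# which is exactly why a GL4.3-only viewport would lock such users out.
print(can_run((3, 2), "4.2.12002 Compatibility Profile"))  # True
print(can_run((4, 3), "4.2.12002 Compatibility Profile"))  # False
```

Tuple comparison handles the major/minor ordering correctly, e.g. (3, 3) < (4, 0).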

Thanks Mark, your input is always helpful. When you say GL3-accelerated selection, though, is there more info on this? I'm just wondering what's accelerated in selections. Is it faster selection because of faster redraws?

Btw, you seem to be the only guy talking about the viewport, graphics, and hardware stuff. Are you solely responsible for the new viewport? If so, that's quite an undertaking :)

12.0 had accelerated object and handle selection, and 12.5 expanded this to component selection (points, polys, edges, vertices). Since the old GL selection buffer API hasn't been hardware-accelerated for years now, selecting polygons on a large polygon model had become very slow. The new GL3 selection moves this back onto the GPU, and also introduces a new "visible surface only" area selection mode (Shift+V). Lasso and paint selection of primitives and edges is also more accurate (you no longer need to include a point in the area). And finally, because the data for the model is already on the GPU, there is no transfer cost per selection except for the selection data coming back.
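
The standard technique behind this kind of GPU-side picking (a common approach, not necessarily Houdini's exact scheme) is color-ID selection: each component is rendered into an offscreen buffer with its index encoded as a unique color, and reading back the pixels under the cursor or lasso recovers the indices directly, with no CPU-side geometry tests. A minimal sketch of the encode/decode step:

```python
# Sketch of color-ID picking. Each primitive index is packed into an
# 8-bit-per-channel RGB triple for the offscreen pick render; reading
# a pixel back from that buffer and decoding it yields the index.

def id_to_rgb(prim_id):
    """Encode a primitive index (< 2**24) as an (r, g, b) triple."""
    return ((prim_id >> 16) & 0xFF, (prim_id >> 8) & 0xFF, prim_id & 0xFF)

def rgb_to_id(rgb):
    """Decode a pixel read back from the pick buffer into a primitive index."""
    r, g, b = rgb
    return (r << 16) | (g << 8) | b

# Round trip: every index below 2**24 survives encode/decode.
print(rgb_to_id(id_to_rgb(123456)))  # 123456
```

Because the pick render happens on the GPU from geometry that is already resident there, the only per-selection transfer is the small pixel readback, which matches the "no transfer cost except the selection data coming back" point above.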

I primarily handle all the low-level OpenGL rendering work in the viewport, but there is a lot more to the viewport than just rendering :)

Thanks a lot, Mark. It's clearer now. I should really try the new selection improvements in 12.5, as I haven't modeled anything since that release.

You and all the SESI devs are superstars. I haven't seen more competent people at any other company. Having a small team helps, I guess :)

Interesting thread :)

What do you guys at SESI use as your Linux distro + window manager/GPU for best performance? We're going to change OS at work now and are beta testing Ubuntu 12.10, and we seem to get very different performance in Houdini depending on which window manager we test with (the test machine is using a Quadro 4000, with a 304-something NVIDIA driver).

At home on Win8 with a GTX 580 (306.97), I get much smoother performance out of Houdini than on the much more powerful machine I have at work (except for the ghosting-mesh problem now and then).

While on viewport stuff, is there a plan to implement OpenSubdiv at some point? http://graphics.pixar.com/opensubdiv/ (I know it's been asked before, but it won't hurt to poke at it again)

*puppy eye look*

Hi Edward. Yeah, Unity is really slow, so we're not even considering it as an option. But Houdini seems more sensitive than Nuke/Maya etc. to what we run it on, so I was wondering what you guys use for a Linux distro & window manager, so we could test it out and see whether it's some other problem with the hardware configuration :)

I use GNOME/Metacity with visual effects disabled (mostly because I don't like them), and others here have been using various other window managers, like KDE or GNOME with Sawfish underneath. I find that Linux feels a bit slicker than Windows in general, but maybe that's because Aero is running.

As for OpenSubdiv, I don't know what SESI's stance on it is yet. But if it were to be integrated, it is very likely that it would be integrated into the viewport as well. I'd have to take a much closer look at its API than I have before commenting further, though.

Thanks for the information Mark!

We got around to doing some more testing with more than one computer, and it seems it was a mix of bad driver versions and the graphics card that caused the issues. The first test machine had a Quadro FX 4000, and it performs worse than our GTX 580 cards.

If anyone else has a Quadro FX 4000 card, it seems driver 304.88 is the one to use on Linux right now; other drivers had a huge impact on performance depending on which desktop manager we used.

I think SESI already has plans for OpenSubdiv, and it will work in the viewport.

There's a hidden option on every geo object: "Display as Subdivision in Viewport" (vport_displayassubdiv).

Just go to "Edit Parameter Interface" and turn on "Show Invisible Parameters". It will be the last item in the list.

I think SESI already has plans for OpenSubdiv, and it will work in the viewport.

There's a hidden option on every geo object: "Display as Subdivision in Viewport" (vport_displayassubdiv).

Just go to "Edit Parameter Interface" and turn on "Show Invisible Parameters". It will be the last item in the list.

Nice find, Fabiano, but this doesn't necessarily mean OpenSubdiv. Subdiv display has been a long-standing RFE in Houdini and might be implemented without it. Hopefully I'm wrong!

While on the topic of selection: I really appreciate not having to hit the points anymore when brush selecting. Beyond that, with the new OpenGL code, would it be a big issue now for brush selection to highlight selected polygons before mouse-up? Preferably instead of the brush trail.

Thank you,

Fabian

Which drivers were causing problems with the Quadro 4000? Newer drivers than 304.88, or older ones (or both)?

It was both. I don't remember which earlier version we tried, but I know one was a 304.xx lower than .88, and one was a 310.xx. The 319.xx (currently the latest for many GTX cards from NVIDIA, including the older Quadro 3700 that we also have on some computers here) didn't work at all, so it's clearly not a supported driver for the Quadro FX 4000.

I'm testing on a GTX 680 now and happy to say that Houdini's viewport outperforms Maya (including Viewport 2.0) and Blender in pure OpenGL performance. We did FPS comparisons on 10+ million poly sculpts from ZBrush while moving around the viewport in different modes like wireframe, smooth shaded, and smooth shaded with wires. It was also faster in selection speed for faces, points, and edges (lasso + paint selection as well).

While on the topic of selection: I really appreciate not having to hit the points anymore when brush selecting. Beyond that, with the new OpenGL code, would it be a big issue now for brush selection to highlight selected polygons before mouse-up?

The main concern was the selection update speed; for extremely large models (in the tens of millions of polys), the selection operation and the display of the highlight can really bog down the actual pick. GL drawing doesn't thread well either, so it's a bit challenging to do it asynchronously. Also, we ran out of time for 12.5, and stabilization was the main goal, with a dash of VDB.

It will likely make an appearance at some point in the future, with a display option to disable it and some heuristics to, perhaps, only do a highlight pick on huge models when the user pauses, or something along those lines.
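
The pause heuristic mentioned above could be sketched like this (an assumption about how it might work, not SESI's design): skip the expensive highlight pick while the cursor is still moving, and fire it only once the cursor has been stationary for a short interval.

```python
# Sketch of a "pick only when the user pauses" heuristic. The
# PAUSE_SECONDS threshold is a made-up value for illustration.

PAUSE_SECONDS = 0.15  # hypothetical stationary time before picking

class HighlightPicker:
    def __init__(self):
        self.last_move_time = None
        self.last_pos = None

    def on_mouse(self, pos, now):
        """Return True when the expensive highlight pick should run."""
        if pos != self.last_pos:
            self.last_pos = pos
            self.last_move_time = now
            return False  # cursor still moving: defer the pick
        return now - self.last_move_time >= PAUSE_SECONDS

picker = HighlightPicker()
print(picker.on_mouse((10, 10), 0.00))  # False: first event counts as movement
print(picker.on_mouse((10, 10), 0.05))  # False: stationary, but not long enough
print(picker.on_mouse((10, 10), 0.20))  # True: stationary for >= 0.15 s
```

A real implementation would run this per mouse event and could also gate the heuristic on model size, picking immediately on small models.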

It was both. I dont remember what earlier version we tried but i know that one was a 304.XX but less than 88. And one was 310.XX. The 319.XX (one that is currently latest for many gtx cards from nvidia, including older quadro 3700 that we also got on some computers here) didnt work at all so clearly not supported driver for the quadro fx 4000.

Thanks. I will pass this along to support.

I'm testing on a GTX 680 now and happy to say that Houdini's viewport outperforms Maya (including Viewport 2.0) and Blender in pure OpenGL performance. We did FPS comparisons on 10+ million poly sculpts from ZBrush while moving around the viewport in different modes like wireframe, smooth shaded, and smooth shaded with wires. It was also faster in selection speed for faces, points, and edges (lasso + paint selection as well).

That's good to hear. There are more performance improvements coming for the next major version as well.
