
Houdini 16 Wishlist


52 minutes ago, LukeLetellier said:

Where is that particular flag on the OpenGL node?

I get very, very different results between the two. As an example, I've attached two samples: the top one is the OpenGL render, and the bottom is my viewport/flipbook. OpenGL_ROP.jpg

ViewportFlipbook.jpg

 

You might want to share a file so we can see why it's not rendering correctly in your scene, but the OpenGL ROP is what you're after.

Also, your farm would definitely need GPUs on it. The majority of farms don't have GPUs, unless part of your queue is assigned to artist workstations during off hours (i.e. at night or over lunch), so OpenGL will not work on the farm in most cases. Any settings you specified in the flipbook options are not copied over to the ROP, so you will need to transfer them yourself.

I think your root cause is that OpenGL volumes don't look as nice once you start rendering with shaders and Mantra. Pre-15/14 there was a copy-settings option that worked well in that case. That may be the better request for SESI: reinstate that process.

On the wider aspect of the issue, what has often been the case in many different production environments and studios is that you need to adjust the final look in the final rendering output, whether that's Mantra, V-Ray, Arnold, Unreal, Unity, web formats, VR, or any other target under the sun. Not to understate your case, but the render you have doesn't matter unless it's in the final output format. Which, I understand, is frustrating when Houdini's OpenGL volumes look better and render faster than the default Houdini shaders and Mantra.

A side note for other people: the Render View and the OpenGL viewport are different renderers (the default IPR Mantra ROP and the OpenGL render view, respectively).

 

Edited by LaidlawFX

Guest tar
5 hours ago, malexander said:

The OpenGL ROP is equivalent to the Flipbook with the "Render Beauty Pass Only" flag on. Handles, grids, and guides don't make sense in the context of an OP. The OpenGL ROP can also be run in hython, so it can run on HQueue (as long as the machines have GPUs).

Perhaps OS X can do it. I opened Houdini on OS X recently without a graphics card, via Screen Sharing; About Houdini listed 'Apple Software OpenGL' or something similar. Houdini worked, but was super slow.

 

EDIT: Confirmed it works, at ~2.5 seconds a frame. The OpenGL ROP works without a video card on OS X. The CVMCompiler is working overtime ;)

 

Quote

OpenGL Vendor:            Apple Inc.
OpenGL Renderer:          Apple Software Renderer
OpenGL Version:           4.1 APPLE-11.1.6
OpenGL Shading Language:  4.10

Edited by tar
Updated


Which Mac doesn't have a graphics card? I would assume that in your case you're using a computer with an Intel integrated GPU. In the PC world, I don't think render farms typically have integrated GPUs either. Does anyone actually run a render farm of Macs?

Guest tar

It's a cheese-grater Mac Pro. I discovered it accidentally while updating the OS, when the Nvidia web driver was incompatible for a few days: you can screen-share in to install the working driver once it's released. For this latest test I yanked the card out, so there is no GPU at all, just CPUs.

It's more of a curiosity than a real-world setup, but thinking out loud: maybe there are software GL packages for Linux/Windows that could be installed on farms, and OpenGL ROPs could be rendered there.


I still have hope that Mesa's software driver(s) [1] will one day support enough GL that Houdini can run on it again. It's somewhat of a moving target, though, since Houdini's minimum bar also changes.

1. https://mesamatrix.net/
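
For anyone who wants to experiment on Linux: with a Mesa-based libGL, the software rasterizer can usually be forced with environment variables before launching Houdini (exact variable support varies by Mesa version, so treat this as a sketch):

export LIBGL_ALWAYS_SOFTWARE=1   # force software rendering even if a GPU driver is present
export GALLIUM_DRIVER=llvmpipe   # select the LLVM-based software rasterizer

Whether Houdini then meets its minimum GL requirements is exactly the open question above.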


It would be nice for the Time Scale parameter of the FLIP Solver to be synchronized by default with all the Time Scale parameters of the microsolvers inside it (like in the video below). The Source Volume DOP should also have a Time Scale parameter synchronized with the microsolvers inside it. It's not a big deal to copy and paste those channels manually, but it would be nice to have these things set up correctly out of the box.
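
In the meantime, the manual sync can be done with channel references rather than pasted values: put an expression like the one below into each microsolver's Time Scale parameter (the node name here is hypothetical and depends on your network):

ch("../flipfluidsolver1/timescale")

That way every microsolver follows the top-level Time Scale automatically instead of holding a stale copy.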

 

Edited by Vlad

On 4/27/2016 at 0:26 PM, LaidlawFX said:

You might want to share a file so we can see why it's not rendering correctly in your scene, but the OpenGL ROP is what you're after.

Also, your farm would definitely need GPUs on it. The majority of farms don't have GPUs, unless part of your queue is assigned to artist workstations during off hours (i.e. at night or over lunch), so OpenGL will not work on the farm in most cases. Any settings you specified in the flipbook options are not copied over to the ROP, so you will need to transfer them yourself.

I think your root cause is that OpenGL volumes don't look as nice once you start rendering with shaders and Mantra. Pre-15/14 there was a copy-settings option that worked well in that case. That may be the better request for SESI: reinstate that process.

On the wider aspect of the issue, what has often been the case in many different production environments and studios is that you need to adjust the final look in the final rendering output, whether that's Mantra, V-Ray, Arnold, Unreal, Unity, web formats, VR, or any other target under the sun. Not to understate your case, but the render you have doesn't matter unless it's in the final output format. Which, I understand, is frustrating when Houdini's OpenGL volumes look better and render faster than the default Houdini shaders and Mantra.

A side note for other people: the Render View and the OpenGL viewport are different renderers (the default IPR Mantra ROP and the OpenGL render view, respectively).

 

Scene File below:

https://www.dropbox.com/s/dqdbm29ek09ift4/OpenGL_ROP_Debug.zip?dl=0

Right now I'm not trying to match the look of rendering with Mantra - I just want to be able to render out an image with the OpenGL ROP and have it look like what I navigate around in my viewport. Right now the best visual I'm getting is a volume with significantly lower voxel resolution and heavy banding. I went through all the render settings in the ROP and tried cranking them all up, but it didn't improve the visual quality.

As a side note, you also have to use a DOP I/O node to get any visual at all - even if your DOP network's visibility is active and you're seeing it in your viewport, it won't be visible when rendering with the OpenGL ROP. You have to import it with a DOP I/O and render from there.

VolumeComparison.jpg

Guest tar

Yeah, that file renders badly on OS X too. The weird thing is that normal pyro sims work fine. What's the workflow that generated it? Also, please send it in to SideFX as a bug.

 

1 hour ago, marty said:

Yeah, that file renders badly on OS X too. The weird thing is that normal pyro sims work fine. What's the workflow that generated it? Also, please send it in to SideFX as a bug.

 

It's a custom pyro setup that I built by hand rather than using the shelf tools. Nothing too fancy - particles whirling around, creating a velocity field and emitting smoke.

Guest tar

Just made a custom smoke setup, and the OpenGL ROP matches the viewport.

 

EDIT: Looking closer, the OpenGL ROP does not seem to output HDR even when set to Full HDR 32-bit FP. It may be stuck at 8-bit SDR! Noticeable when the voxel resolution is upped.
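
One way to check (assuming you rendered to a .pic or .exr) is Houdini's iinfo command-line utility, which reports the stored image data format - the filename here is hypothetical:

iinfo glrop_test.0001.exr

If the color planes come back as 8-bit integer rather than 32-bit float, the ROP really is clamping the output.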

 

 

GLROp.png

CustomSmoke.hiplc

OPenGL ROP problem.png

Edited by tar
updated


Passing array attributes to shaders (with an option, or by default, to interpolate the value at each index across the vertices) would be useful for layering an arbitrary number of textures using different UV sets (tiling in particular) in shaders.
Right now my only option seems to be subdividing the mesh in SOPs, interpolating the values in VEX, and rendering that result.

It would be a lot more memory efficient if I did not have to increase my vertex count just to get more texture definition, of course :P

I've been searching for this and got one relevant hit:
https://www.sidefx.com/index.php?option=com_forum&Itemid=172&page=viewtopic&p=178653
but it doesn't seem this problem has been solved yet.

 

4 hours ago, acey195 said:

Passing array attributes to shaders (with an option, or by default, to interpolate the value at each index across the vertices) would be useful for layering an arbitrary number of textures using different UV sets (tiling in particular) in shaders.
Right now my only option seems to be subdividing the mesh in SOPs, interpolating the values in VEX, and rendering that result.

It would be a lot more memory efficient if I did not have to increase my vertex count just to get more texture definition, of course :P

I've been searching for this and got one relevant hit:
https://www.sidefx.com/index.php?option=com_forum&Itemid=172&page=viewtopic&p=178653
but it doesn't seem this problem has been solved yet.

 

A Parameter node will import any attribute you have from SOPs, just as you would in a SOP/VOP context.

In the case of UVs you can go into SHOPs and look at the UV Coords node; inside, you can see how a Shading Layer Parameter (nodes/vop/shadinglayer) works on multiple sets. This uses the older layering method - a bit out of fashion nowadays, but it works perfectly fine. If you want to pull in an arbitrary number of UV sets, you can use the Import Point Attribute node (nodes/vop/importpoint) and change the Attribute value via a parameter; this is more common now. The most UV sets I've seen pulled into a standard shader is ten, so you can make a VOP HDA that imports 10 UV sets into a switch driven by an integer menu. Standard shaders (e.g. the Principled Shader) don't really carry these much anymore, as this is usually for something like matte painting or a specialized shader for FX or camera re-projections.

UVs are a standard example, but any attribute passes through SHOP/VOPs the same as VOP/SOP: Bind, Import Attribute, Parameter, Globals - you'll have to look through them to see which is best for pulling the value in. Crack open the Pyro or Mantra shaders for some good preset examples.


Have the developers discussed any plans for specular anti-aliasing in Mantra? Is it planned, not possible, or never mentioned?
Thin speculars need outrageous numbers of pixel samples to look stable in animation.

I was wondering how to deal with the problem.
Ray variance seemed to respond well at first, but for smaller reflections its blocks are too broad.
Detecting high-frequency normals would be nice - including normal maps and shader displacement. Ideally, Mantra would send more rays there; if that's not possible, it could apply more filtering.

Currently I deal with specular aliasing by changing roughness dynamically, according to the normals' derivatives ;)
The first image is constant roughness 0.005, the second is varied 0.005 - 0.25, the third shows the derivatives.

specular_aliasing_standard.png

specular_aliasing_dynamic_roughness.png

specular_aliasing_derivatives.png
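
The dynamic-roughness trick above can be sketched in shading VEX roughly like this (base_rough and max_rough are illustrative parameter names, not anything from the original post; Du and Dv are VEX's built-in derivative functions):

// Estimate how quickly the normal varies across the shading sample
// and push roughness up where the variation is high.
vector nn = normalize(N);
float variation = length(Du(nn)) + length(Dv(nn));
float rough = fit(variation, 0.0, 1.0, base_rough, max_rough);

The effect is to pre-blur the thin highlights in the shader instead of asking Mantra to resolve them with brute-force pixel samples.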

Edited by OskarSwierad


Oh yeah, I know it's possible with a set number of UV layers, but I prefer a data-driven setup where the shader does not care at all about the number of them, and I don't want to manually slot all the UVs if I can avoid it. If the data has 34 texture slots, those should be assignable automatically :P

For my setup I'm loading in some textures - about 8 right now, but it should be able to support ridiculous amounts (if I have enough RAM) - if array attributes were better supported.

And I did get it to work in SOPs just fine: the snippet below interpolates my array attributes after subdividing my mesh with a high crease value (this has to run twice per subdivision), after transferring my UV arrays from the unsubdivided mesh with a ~0 range:

// Average a per-point float-array attribute over the given neighbours.
void LerpArray(int neigs[], nC; string arrayName; float arrayS[])
{
    int i, j, maxArLen, arrayC[], curC;
    float arrayVals[];

    // Find the longest array among the neighbours.
    maxArLen = 0;
    for (i = 0; i < nC; i++)
    {
        maxArLen = max(len(float[](point(0, arrayName, neigs[i]))), maxArLen);
    }
    resize(arrayC, maxArLen);
    resize(arrayS, maxArLen);

    for (i = 0; i < maxArLen; i++)
    {
        arrayC[i] = 0;
        arrayS[i] = 0;
    }

    // Accumulate per-index sums and counts (neighbour arrays may differ in length).
    for (i = 0; i < nC; i++)
    {
        arrayVals = point(0, arrayName, neigs[i]);
        curC = len(arrayVals);

        for (j = 0; j < curC; j++)
        {
            arrayC[j]++;
            arrayS[j] += arrayVals[j];
        }
    }

    // Divide to get the per-index average.
    for (i = 0; i < maxArLen; i++)
    {
        arrayS[i] /= arrayC[i];
    }
}

int neigs[], nC;
float outAr[];
neigs = neighbours(0, @ptnum);
nC = len(neigs);

// New points created by the subdivision have empty arrays; fill only those.
if (len(f[]@lUVx) == 0)
{
    LerpArray(neigs, nC, "lUVx", outAr);
    f[]@lUVx = outAr;
}
if (len(f[]@lUVy) == 0)
{
    LerpArray(neigs, nC, "lUVy", outAr);
    f[]@lUVy = outAr;
}
if (len(f[]@lUVz) == 0)
{
    LerpArray(neigs, nC, "lUVz", outAr);
    f[]@lUVz = outAr;
}
if (len(f[]@maskClim) == 0)
{
    LerpArray(neigs, nC, "maskClim", outAr);
    f[]@maskClim = outAr;
}

After that I can just put my textures into vertex attributes using something like this (which does not work in SHOPs):

int i, numNoiseLayers;
float maskSum;
vector outCd, curCd;
string rootFolder, map;

rootFolder = chs("rootFolder");
numNoiseLayers = detail(0, "noiseCount");

maskSum = 0;
outCd = {0,0,0};
curCd = {0,0,0};

// Blend each noise texture, looked up through the packed UV arrays,
// mixing the x/y/z projections by the polar masks and weighting by maskClim.
for (i = 0; i < numNoiseLayers; i++)
{
    map = rootFolder + split(detail(0, sprintf("noise%d", i)), ";")[1];

    curCd = colormap(map, f[]@lUVx[i*2], f[]@lUVx[i*2+1], "wrap", "repeat");
    curCd = lerp(curCd, colormap(map, f[]@lUVz[i*2], f[]@lUVz[i*2+1], "wrap", "repeat"), f@POLARFRONT);
    curCd = lerp(curCd, colormap(map, f[]@lUVy[i*2], f[]@lUVy[i*2+1], "wrap", "repeat"), f@POLARSIDE);

    outCd += curCd * f[]@maskClim[i];
    maskSum += f[]@maskClim[i];
}

// Normalize by the total mask weight, guarding against division by ~0.
if (maskSum > 0.0001)
    v@Cd = outCd / maskSum;
else
    v@Cd = {0,0,0};

 

Edited by acey195
  • Like 1

16 hours ago, LukeLetellier said:

It's a custom pyro setup that I built by hand rather than using the shelf tools. Nothing too fancy - particles whirling around, creating a velocity field and emitting smoke.

There is something wrong with your .hiplc - that file is corrupt. I couldn't even do a regular flipbook. I copied and recreated the setup in a new .hip based on your bgeo.sc files and it worked, though the 1:1 quality wasn't there, as your image showed.

You may be having half-band voxel issues that the OpenGL renderer is not interpolating correctly. What Marty did worked fine, so there is something specific to your setup that is causing the aliasing and loss of data. To find out why, you'll have to go step by step and test each section and phase to see where the breakdown is in your setup; that will be most helpful to SESI. It's still worth sending to SideFX, as there is something off about the stored data, but it looks like a two-part problem. The data you have created is somehow different from a standard pyro volume - my gut guess is the half-band voxels, or the .sc format. There is also an interpolation that the viewport is doing that the OpenGL ROP doesn't seem to be doing for this data. And since the viewport display options (d in the viewport) have gotten more updates than the flipbook and OpenGL ROP lately, the full switch to one type of OpenGL may be where the issue happened.

If you're not on a long-term project or in a big group, I would just roll with the in-scene flipbook, the Size tab resolution, and the Crop Out View Mask Overlay for the short term.

Good luck, man.

 


I have another request: import SVG.

With Houdini attracting more and more mograph users, the task of importing 2D vector art into 3D space could be easier. Even if I convert my Illustrator file to AI8 format, all the brand colors are lost when it comes in, and I still have to patch it up with a Hole SOP, which does not always work.

With all the math geniuses working at SideFX, you'd think someone would at least port the Blender SVG importer to Houdini?

  • Like 2

36 minutes ago, acey195 said:

Oh yeah, I know it's possible with a set number of UV layers, but I prefer a data-driven setup where the shader does not care at all about the number of them, and I don't want to manually slot all the UVs if I can avoid it. If the data has 34 texture slots, those should be assignable automatically :P

For my setup I'm loading in some textures - about 8 right now, but it should be able to support ridiculous amounts (if I have enough RAM) - if array attributes were better supported.

And I did get it to work in SOPs just fine: interpolating my array attributes after subdividing my mesh with a high crease value, after transferring my UV arrays from the unsubdivided mesh with a ~0 range.

After that I can just put my textures into vertex attributes (which does not work in SHOPs).

 

For texture mapping from SOPs, Material Overrides - from the Material SOP or a Geometry object - will do that by default for each material you have assigned.

A shader has to be defined somehow, at some point. One way or another, you need to define the shader before you can pump your data into it, or you wouldn't be able to assign the values you want into the slots. You can send structs of data back and forth, so you just need to define what you want in the shader: a shader with 34 albedo maps, 34 maps across the whole shader, 34 maps per layer. There is a point at which undefined elements break down. The real question you might want to ask is what you actually need in your shader; define it and then find a theoretical limit on what it is. You also need to make sure it renders efficiently. I've gotten by on a 3-layered shader - glass, with a sticker, with mud on it - in a format similar to the Principled Shader, and Material Overrides worked fine: hundreds of maps and ten thousand parameters, collapsed into a hidden UI like the Principled Shader's. I'm not sure you realize how much data you'd need to pump through your geometry to define a full shader; this is why shaders are defined separately. It separates the two processes and allows the machine to handle the data better - you don't want to unnecessarily multiply the amount of data you need to carry. IMHO, you already have access to everything you need, architecture-wise.

 

5 hours ago, LaidlawFX said:

For texture mapping from SOPs, Material Overrides - from the Material SOP or a Geometry object - will do that by default for each material you have assigned.

A shader has to be defined somehow, at some point. One way or another, you need to define the shader before you can pump your data into it, or you wouldn't be able to assign the values you want into the slots. You can send structs of data back and forth, so you just need to define what you want in the shader: a shader with 34 albedo maps, 34 maps across the whole shader, 34 maps per layer. There is a point at which undefined elements break down. The real question you might want to ask is what you actually need in your shader; define it and then find a theoretical limit on what it is. You also need to make sure it renders efficiently. I've gotten by on a 3-layered shader - glass, with a sticker, with mud on it - in a format similar to the Principled Shader, and Material Overrides worked fine: hundreds of maps and ten thousand parameters, collapsed into a hidden UI like the Principled Shader's. I'm not sure you realize how much data you'd need to pump through your geometry to define a full shader; this is why shaders are defined separately. It separates the two processes and allows the machine to handle the data better - you don't want to unnecessarily multiply the amount of data you need to carry. IMHO, you already have access to everything you need, architecture-wise.

 

I'm not just requesting this for myself :P. It's just that I found something I can do in SOPs, but the same code does not work in a SHOP network, because SHOPs don't support float arrays very well.

If I manually define all the UV sets as separate vector attributes and then bind them one by one, it does work fine, but that creates unnecessarily large networks (and even larger ones if you have to create redundant nodes you "may" use at some point). It's just not a very clean way to do it, in my opinion, as Bind does not really work with variable parameter names. Maybe it could work with an Attribute Import that way, but it would still be nice to have better array support :P

 

Edited by acey195

2 hours ago, LaidlawFX said:

There is something wrong with your .hiplc - that file is corrupt. I couldn't even do a regular flipbook. I copied and recreated the setup in a new .hip based on your bgeo.sc files and it worked, though the 1:1 quality wasn't there, as your image showed.

You may be having half-band voxel issues that the OpenGL renderer is not interpolating correctly. What Marty did worked fine, so there is something specific to your setup that is causing the aliasing and loss of data. To find out why, you'll have to go step by step and test each section and phase to see where the breakdown is in your setup; that will be most helpful to SESI. It's still worth sending to SideFX, as there is something off about the stored data, but it looks like a two-part problem.

SideFX did confirm that it is a bug and that they're working on it. :)

It figures that my setup is weird and special; I run into so many bizarre tech issues - most of them confirmed by enough people to rule out user error, but not enough to warrant a bug fix from the company that designed the software. :P

It's more of a personal project anyway - trying to sniff out all these sorts of issues on my own before committing to client work.

