Harry Posted November 18, 2014

Hi, how do I write a single multi-context VEX shader (displacement and surface) without VOPs (i.e. created via File menu > New Operator Type)?
dennis.albus Posted November 18, 2014

AFAIK that is not possible. If you check the source code generated by VOPs, you can see that it is also split into the different contexts. I might be wrong though, and would be interested to hear if and how this is possible.
Harry (Author) Posted November 18, 2014

Is there a way to compile two VEX shaders (displacement and surface) into a single multi-context material, such as the standard Mantra Surface material?
dennis.albus Posted November 19, 2014

Not that I know of. The way to do it would be to compile them to a surface shader and a displacement shader respectively and build a material from those in Houdini. That way everything gets updated automatically when you change your code. Is there a specific reason why you absolutely need to have them together? You could also try compiling your shaders with the compiler pragma #pragma optable vop and wire up a multi-context shader yourself in Houdini.
up4 Posted November 23, 2014

Funny coincidence here. I was asking myself the same question, and for me the reason for doing this was to have only one VEX source file per material, so I could put it in a Git repository for versioning.
up4 Posted November 27, 2014

And, about that: if you create a mantrasurface (which contains both surface and displacement networks) from the Material Palette and save the VEX source to disk as a .vfl file, you will see the code produced by "surfaceOutput" (the return statements for the surface shader), but there is no mention of "dispOutput" (nor of any displacement shader function anywhere). So it looks like saving the VEX source from a material only saves out the surface shader and not the displacement shader. Is that correct?
dennis.albus Posted November 27, 2014

When you show your VEX source, there is a dropdown menu which lets you choose the context for the generated code. I don't have Houdini in front of me, so unfortunately I cannot be more specific right now.
up4 Posted November 27, 2014 Share Posted November 27, 2014 (edited) Thanks! Yes, didn't see that. But then, if that is the code that gets generated and executed at render time, then it doesn't seem optimal because a lot of what only feeds the displacement outputs is present in the surface code (the result of those computations are never used in the surface shader)… Anyways, it illustrates what I'm talking about, I think. In the end, the best practice seems to prototype with the VEX builder and then save a surface VFL and a displacement VFL refactor both to use common VEX libraries or include/header files. And optimize the VEX code with a text editor. I'm a complete n00b thinking out loud here, not hoping that it will ever makes sens to anybody. ;-) Edited November 27, 2014 by up4 Quote Link to comment Share on other sites More sharing options...
dennis.albus Posted November 27, 2014 Share Posted November 27, 2014 (edited) Don't forget that you are looking at the raw conversion of your node network to vex code. The compiler will optimize the code a LOT when compiling to bytecode and get rid of most (if not all) of the stuff that does not contribute to the final output of the shader. So no need to spend a lot of time manually refactoring and optimizing the code. Edited November 27, 2014 by dennis.albus Quote Link to comment Share on other sites More sharing options...
up4 Posted November 27, 2014

Ha! Well, I'm really curious what this (and many other such things) gets compiled into:

if (0 != 0 && 0 != 0)

;-)
up4 Posted November 27, 2014 Share Posted November 27, 2014 And the another thing (for multicontext shaders, and I'm still talking out loud here) is that sharing data between geometry and surface and displacement is not so trivial (for a n00b like me). I haven't figured how to peek into vertex/point/primitive data from a surface shader or if it is even feasible (starting from the currently shaded surface and not from a hard coded network path). Is it (possible)? What would be the best way? Thanks again! Quote Link to comment Share on other sites More sharing options...
dennis.albus Posted November 28, 2014

You mean inspecting geometry and shading information at a position on the geometry other than the one you are currently shading? AFAIK this is not possible. I would love to have direct access to geometry and shading information, especially in different spaces (in UV space, for example, to do proper blurring of procedural textures). As a workaround you can send rays to the desired locations using the trace or gather functions, but it is not nearly as convenient as accessing them directly.
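For illustration, here is a rough sketch of the trace() workaround mentioned above. The probe offset, shader name, and the exact keyword arguments are assumptions for illustration, not a tested recipe:

```vex
// Surface shader sketch: sample shading information at a nearby location
// by firing a ray back at the surface instead of reading it directly.
surface
probe_nearby(float offset = 0.1)
{
    // pick a point a little way along the surface derivative direction
    vector probe_origin = P + offset * normalize(dPds);
    vector hit_cf = 0;

    // fire a ray down toward the surface and import Cf from whatever it hits;
    // trace() returns nonzero on a hit
    if (trace(probe_origin + 0.1 * normalize(N), -normalize(N), Time,
              "bias", 0.001, "Cf", hit_cf))
        Cf = 0.5 * (diffuse(normalize(N)) + hit_cf);   // blend local and probed shading
    else
        Cf = diffuse(normalize(N));                    // fallback: local shading only
}
```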
up4 Posted November 28, 2014

Actually, I'm trying (and failing) to get anything at all from the currently sampled geometry other than the globals. If I create a custom primitive attribute on the geometry in a SOP network, would using renderstate to retrieve "object:name" and feeding both that and the result of a primid call to the prim function be the best way to read the value of that attribute? If so, does prim_attribute perform smooth interpolation of the attribute in UV space? If not, what does it do compared to a simple prim call? I'm back to basics here, but it seems a little under-documented. And thanks!
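For what it's worth, a minimal sketch of the usual alternative: Mantra binds geometry attributes to shader parameters of the same name automatically, so an explicit renderstate/prim lookup is often unnecessary. The attribute name myattr here is made up for illustration:

```vex
// Surface shader sketch: if the incoming geometry carries a point, vertex,
// or primitive attribute named "myattr", Mantra binds it to this parameter
// automatically (point/vertex values arrive interpolated across the surface).
surface
show_attr(float myattr = 0.0)   // default is used when the attribute is absent
{
    // visualize the attribute value as a grayscale color
    Cf = set(myattr, myattr, myattr);
}
```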
magneto Posted December 10, 2014

I was wondering about this exact same thing as well. If you make a VOP material, it has surface, displacement, fog, etc. I can see the code for each of these separately, but how do these separate pieces of code become one at render time? Is there no way to do this with hand-written VEX code? If not, do I have to write the code for each component (surface, displacement, etc.) separately using File > New Operator Type? And if so, how do I combine all of these separate shaders in the end? I know some people here write their shaders in VEX, so how do they handle this step?
fathom Posted December 11, 2014

they don't "become one". they are individual shaders that are generated from (potentially) shared vop networks. the "material" vop is really not much more than a (slightly) fancier subnet. if you have a material that contains both displacement and surface shading, you can apply it as a "material", or you can apply it as only a surface shader or only a displacement shader. so really, the material node just collects shaders and lets you refer to them by a single common name, rather than having to come up with a way to differentiate between surface shaders and displacement shaders (like: material "bricks" vs surface shader "bricks_surface" and displacement shader "bricks_displacement").

personally, it feels like the material workflow sort of sputtered a bit in development. perhaps they implemented it just enough to give people what they wanted (the ability to share common vop logic between related shaders) without having to totally change the paradigm (surface shaders do one thing, displacement shaders do something else, etc). there are some other quirks to working with material builder nodes that i don't like, so i tend to just write separate shaders. if i need to combine them, i use subnets. it's way easier to combine two disparate shaders than it is to disentangle a material node into its constituent parts.

in terms of writing one piece of vex code that can compile to different types: you can do that by setting flags on the vcc command line. vcc will take a context flag, and you can check that with #ifdefs to rework your code so it compiles appropriately for whichever shader you're writing.
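As an illustration of that #ifdef approach, here is a hedged sketch of one source file compiled twice. The CTX_SURFACE / CTX_DISPLACE macro names and the -D defines are assumptions about how you might drive the preprocessor, not fixed vcc conventions:

```vex
// one source file, compiled twice with different defines, e.g.:
//   vcc -DCTX_SURFACE  brick.vfl    -> surface shader
//   vcc -DCTX_DISPLACE brick.vfl    -> displacement shader

// shared logic used by both contexts
float
brick_pattern(vector uv)
{
    return noise(uv * 10.0);
}

#ifdef CTX_SURFACE
surface
brick_surface(vector uv = 0)
{
    float b = brick_pattern(uv);
    // blend between mortar and brick colors based on the shared pattern
    Cf = lerp({0.6, 0.2, 0.1}, {0.8, 0.8, 0.7}, b);
}
#endif

#ifdef CTX_DISPLACE
displacement
brick_displace(vector uv = 0; float height = 0.1)
{
    float b = brick_pattern(uv);
    // push the surface along its normal and recompute N
    P += normalize(N) * b * height;
    N = computenormal(P);
}
#endif
```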
magneto Posted December 11, 2014

Thanks Miles. How would one, for example, take a VOP material like Mantra Surface, click "View Code", gather the code for each component (surface, displacement, etc.), and put it inside a single VEX-based material? I just want to do this as an experiment. If I just compile it using Houdini, then I don't see the code of the new compiled material, so I don't want that. Something like this:

My VEX material()
{
    // default mantra surface
    // surface code:
    ...
    // displacement code:
    ...
    // end
}

Is this possible? Thanks
up4 Posted December 14, 2014

Hi Ryan, from what Miles says, I don't think it is possible. But it is exactly what I was referring to myself! Also, the pragma solution is less elegant, IMHO, than just writing two separate shaders (surface, displacement), using libraries of shared #include'd code, and then combining them with a material VOP net.
fathom Posted December 24, 2014

i'm not really sure what is to be gained by mixing surface and displacement code, since they're not likely to be calculating the same things. however, if you combine a displacement shader with a sop shader, then you're talking about mostly the same code with some extra bits here and there. i have done this in the past -- tho i was basically using inline code in vops to avoid some of the wrangling.
jonp Posted December 31, 2014

When I want to use the same code in a surface shader and a displacement shader, I'll calculate the values in the displacement context, then export those values as parameters that can then be used in the surface context with the dimport function:

https://www.sidefx.com/docs/houdini13.0/vex/functions/dimport
http://www.sidefx.com/docs/houdini13.0/nodes/vop/dimport

That's actually something I don't like about the material context sometimes... global variables like N are not always identical in the surface and displacement contexts, but the interface can obscure that fact a bit.
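A minimal sketch of that displacement-export / dimport pattern. The shader names and the export name bump_amount are made up for illustration:

```vex
// Displacement shader sketch: compute a value, export it, and displace.
displacement
bumpy(export float bump_amount = 0.0; float scale = 1.0)
{
    bump_amount = noise(P * scale);          // value shared with the surface shader
    P += normalize(N) * bump_amount * 0.05;  // actual displacement
    N = computenormal(P);
}

// Surface shader sketch: read the exported value with dimport().
surface
bumpy_surf()
{
    float bump = 0.0;
    if (dimport("bump_amount", bump))    // returns 1 if the displacement shader
        Cf = set(bump, bump, bump);      // exported the value for this sample
    else
        Cf = {1, 0, 0};                  // fallback when it is not available
}
```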