
VEX Volume Procedural



Hi. I'm making some volumetric shapes which are fully procedural and rendered via the VEX Volume Procedural. I exported my volumes to the i3d texture format and then read them back in a CVEX shader. Everything worked fine until I decided to color my volumes with point color from the geometry they were generated from. So I make three volumes (Cd.x, Cd.y, Cd.z) and, with a Volume From Attribute SOP, I get my color volumes. Then I write all the volumes out to i3d texture files. The problem is: when I read "Cd" from the i3d textures in CVEX, I can't export the color attribute, only density. When exporting any attribute except density, the procedural just stops working and nothing gets rendered. Take a look at the CVEX vopnet. Maybe the problem is that the VEX Volume Procedural can't generate more than one volume primitive at render time? But the docs say: "This procedural uses a CVEX shader to define a set of 3D fields in space for volume rendering"
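In VEX terms, the vopnet boils down to something like this (just a sketch; the i3d path and the channel names are placeholders, not my exact setup):

```vex
// Sketch of the CVEX shader behind the vopnet. The i3d path and
// channel names ("density", "Cd") are assumptions about how the
// texture was written out.
cvex colored_volume(vector P = 0;            // sample position, bound by the procedural
                    export float density = 0;
                    export vector Cd = 0)
{
    string map = "$HIP/volumes.i3d";

    float  d = 0;
    vector c = 0;
    texture3d(map, "density", P, d);  // scalar density channel
    texture3d(map, "Cd", P, c);       // vector color channel

    density = d;
    Cd      = c;
}
```

The density export alone works fine; it's adding the second (Cd) export that breaks the render.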

So here is a hip file. Please take a look; any advice would be helpful. And maybe there is another, much simpler way to transfer point color to volumes? I'm sure there is. Thanks!

volume_i3d.hipnc

post-3906-131534529679_thumb.jpg

Edited by Stalkerx777

Strange, because it works here without any alteration of your file.

(I had to turn shadows off so my slow notebook can render anything, but nothing else.)

H11.1.22 XP x32

As another method, have you considered a point cloud lookup in the surface shader?

Edited by anim

Hi Tomas, thanks for the reply. I took this hip to work, and it started working... partially.

It's very strange, but to get anything in the render view, I had to click the render button a couple of times.

Sometimes it renders, sometimes not... unpredictable results.

And if I delete the color export from the CVEX shader, it works like it should.

Working...

post-3906-131537931049_thumb.jpg

Next render, without altering anything...

post-3906-131537932622_thumb.jpg

I'll try the point cloud lookup in the shader; it should work, but I'm very curious what's wrong with my setup.

Edited by Stalkerx777

There must be a proper reason to use this approach, but please help me understand: what is the difference between writing the data out to i3d textures and doing this in a Volume VOP SOP?

Hi Nick. Well, it's all about efficiency and scene management. If you need to make complex volumetric shapes like clouds, you have to deal with huge volumes in SOPs, and even if you are able to generate SDFs with millions of voxels in SOPs, it doesn't mean you can handle them efficiently and get highly detailed cloud shapes. For that reason, I generate the heavy SDF volume just once, write it out to an i3d texture, which is very efficient for handling volumes, and then add volume displacement in CVEX. In other words, all the volumes defined in the CVEX shader are generated at render time, with much more detail and control.

Edited by Stalkerx777

I've also been wondering kind of the same thing as Nick. Here comes another question as well: why would you need a super-high-res volume if you are doing per-sample displacement when rendering? Won't the displacement shader create enough detail?

As he mentioned, he writes out the SDF to have a base shape to work with.

It's very similar to using a displacement map on a normal polygon model. You still need a base model to apply the displacements to. In the case of clouds, the base model could come from a fluid sim, a bunch of particles, a point cloud from SOPs, or, for Stalkerx777, an SDF.

On that note, could you elaborate a little on why i3d? In which sense is it faster/better/more efficient? I have not used i3d a lot myself. Do you find it makes a big difference for storage/load times with large volumes?


As he mentioned, he writes out the SDF to have a base shape to work with.

It's very similar to using a displacement map on a normal polygon model. You still need a base model to apply the displacements to. In the case of clouds, the base model could come from a fluid sim, a bunch of particles, a point cloud from SOPs, or, for Stalkerx777, an SDF.

I was probably unclear, as usual, but of course I understand that you need a base shape to run the displacement on.

But you can make a mountain from a quad using a displacement shader. I was wondering why the volume would become so huge: is it something specific to the i3d method of making clouds, or is it simply a very big volume that needs to contain multiple clouds?


The problem is: when I read "Cd" from i3d textures in CVEX, I can't export the color attribute, only density. When exporting any attribute except density, the procedural just stops working and nothing gets rendered.

I ran into this a few months ago and reported it as a bug to SESI. I believe they've fixed it since, so maybe upgrading your Houdini version might work?

From the journals:

Houdini 11.0.751: Fixed a problem with the VEX Volume Procedural that would cause incorrect rendering when multiple shader exports were present.


I ran into this a few months ago and reported it as a bug to SESI. I believe they've fixed it since, so maybe upgrading your Houdini version might work?

Wow, good news brianburke. We use a somewhat outdated build at work (I guess 581); I'll try the latest build tomorrow.

Thank you guys for the replies.

I'm in the process of learning volumes in Houdini, so correct me if I'm wrong.

But you can make a mountain from a quad using a displacement shader. I was wondering why the volume would become so huge: is it something specific to the i3d method of making clouds, or is it simply a very big volume that needs to contain multiple clouds?

Well, volume displacement is different from surface displacement. During surface displacement, mantra refines the surface and then dices it into micropolygons; after that, the points are displaced.

Volumes are sets of voxels, and during rendering mantra uses ray marching to calculate the final density and opacity. Volumes can't be refined and diced; that's why we need huge volumes to get good detail. It's very difficult to keep such volumes in the scene, and moreover they need to be displaced before rendering. Considering this, using the VEX Volume Procedural is the most efficient way to do it. The density defined by the CVEX shader is generated at render time, saving disk space, memory, processing time, and probably render time.
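To make that concrete, the render-time displacement amounts to offsetting the lookup position per shading sample before reading the baked SDF, so the detail never has to exist as voxels. A sketch (the i3d path, channel name, and noise parameters are made up):

```vex
// Sketch: displace the sample position before reading the baked SDF,
// so detail is generated per shading sample at render time.
// Path, channel name, and noise settings are assumptions.
cvex displaced_cloud(vector P = 0;
                     export float density = 0)
{
    string map = "$HIP/base_sdf.i3d";

    // Per-sample displacement of the lookup position.
    vector n = noise(P * 4.0);
    vector offset = (n - {0.5, 0.5, 0.5}) * 0.2;

    float sdf = 0;
    texture3d(map, "density", P + offset, sdf);

    // Map signed distance to density: fully dense inside,
    // falling off across a thin band around the surface.
    density = clamp(-sdf / 0.05, 0.0, 1.0);
}
```

However fine the shading samples get, the noise is simply evaluated at those positions, which is why the base SDF doesn't need to carry the detail itself.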

On that note, could you elaborate a little on why i3d? In which sense is it faster/better/more efficient?

I don't have much experience with volumes, and I haven't tested bgeo vs. i3d, but for now I see the following advantages of i3d textures:

1. First of all, it stores volumes in a tiled format, so volumes can be streamed on demand. I don't know about bgeo here, but someone said that volumes in bgeo are stored in a tiled format too.

2. Very important: i3d can store an arbitrary number of channels per voxel, so we can have any number of attributes in one volume, in one file. Bgeo stores one volume per attribute.

3. When generating an i3d we can use antialiasing, and when reading it we can filter with the typical filter types. I'm not sure about the advantages here, but in my case filtering helped me get rid of some bugs.

As for file size, in my case the compressed i3d was about six times smaller than the bgeo, but uncompressed it was six times bigger.

I did the color lookup from a point cloud in the shader, so my volumes are now properly colored, but render time increased by about 30-40%. Tomorrow I'll try to generate the color volumes in CVEX; maybe it will be more efficient, we'll see.

And I have a couple of questions:

1) Can somebody share some more info on how exactly the CVEX shader works with the volume procedural? Does CVEX run per shading sample during rendering, or does CVEX first generate the whole volume and then pass it to the procedural?

2) In the CVEX shader, we evaluate the i3d texture (or a volume sample from bgeo, it doesn't matter) at position P.

post-3906-131551546696_thumb.jpg

I can't figure out where that P comes from. In a surface shader, P is the current shading point position; in a VOP SOP, P is the current geometry point. If the volume procedural is assigned to an empty geometry container in the scene... where does P come from?

3) We all know how to layer displacements on top of each other in a surface displacement shader: calculate P and N, displace P, pass the new P and N to the next iteration, and so on. But is it possible to displace volumes like that? We can get a normal from the volume, no problem, but only for the first displacement layer. To displace further, we need a new normal, right? But we can't call computeNormal()... Well, maybe I've totally misunderstood something here, but can we have something like that?

Quite a big post. Share your thoughts, guys. Thanks.


1) Can somebody share some more info on how exactly the CVEX shader works with the volume procedural? Does CVEX run per shading sample during rendering, or does CVEX first generate the whole volume and then pass it to the procedural?

I believe the CVEX code is evaluated per voxel, similarly to a VOP SOP, which evaluates per point.

2) In the CVEX shader, we evaluate the i3d texture (or a volume sample from bgeo, it doesn't matter) at position P.

post-3906-131551546696_thumb.jpg

I can't figure out where that P comes from. In a surface shader, P is the current shading point position; in a VOP SOP, P is the current geometry point. If the volume procedural is assigned to an empty geometry container in the scene... where does P come from?

I would say it seems to be the world-space position of a voxel.

3) We all know how to layer displacements on top of each other in a surface displacement shader: calculate P and N, displace P, pass the new P and N to the next iteration, and so on. But is it possible to displace volumes like that? We can get a normal from the volume, no problem, but only for the first displacement layer. To displace further, we need a new normal, right? But we can't call computeNormal()...

Normally, on volumes you rely on gradients to mimic what normals do for surfaces. If so, you would have to compute the gradient with volumegradient() after every displacement. I've never tried this, though...
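In an i3d-based CVEX setup there is no volume primitive for volumegradient() to read, so a finite-difference gradient of the texture lookup can play the same role: recompute it from the field after each displacement instead of calling computeNormal(). A sketch (the path, channel name, layer count, and noise settings are all assumptions):

```vex
// Sketch: layered volume displacement using a finite-difference
// gradient as the per-layer "normal". All parameters are assumptions.
float sample_d(string map; vector p)
{
    float v = 0;
    texture3d(map, "density", p, v);
    return v;
}

// Central-difference gradient of the field at p.
vector grad(string map; vector p; float eps)
{
    return set(sample_d(map, p + set(eps, 0, 0)) - sample_d(map, p - set(eps, 0, 0)),
               sample_d(map, p + set(0, eps, 0)) - sample_d(map, p - set(0, eps, 0)),
               sample_d(map, p + set(0, 0, eps)) - sample_d(map, p - set(0, 0, eps)))
           / (2.0 * eps);
}

cvex layered_displace(vector P = 0; export float density = 0)
{
    string map = "$HIP/base_sdf.i3d";
    vector p = P;
    for (int layer = 0; layer < 3; layer++)
    {
        // Re-derive the "normal" from the field after each displace.
        vector n = normalize(grad(map, p, 0.01));
        // Push the lookup position along it, with finer noise per layer.
        p += n * noise(p * pow(4.0, layer)) * 0.1 / (layer + 1);
    }
    density = sample_d(map, p);
}
```

Note this displaces the lookup position rather than the field itself, and the gradient can vanish in flat regions, so the normalize() would need guarding in practice.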


I believe the CVEX code is evaluated per voxel, similarly to a VOP SOP, which evaluates per point.

I would say it seems to be the world-space position of a voxel.

As far as I know:

the CVEX shader is executed per shading sample at render time.

the P position is the object-space position of the shading sample, not world space.

Edited by brianburke

As far as I know:

the CVEX shader is executed per shading sample at render time.

the P position is the object-space position of the shading sample, not world space.

Quite possible, but why a shading sample? Isn't its purpose just to set voxel values, i.e. to generate geometry procedurally? As for spaces, you're right; sorry for the confusion.


the CVEX shader is executed per shading sample at render time.

I would also guess it gets evaluated for every ray-marching step for RT and PBR (not sure if PBR does it via ray marching) and for every microvoxel for MP.

Edit: I just realized that a ray-marching step could be considered a shading sample, since the shaders are evaluated there, so I'm just confirming what you said.

Quite possible, but why a shading sample? Isn't its purpose just to set voxel values, i.e. to generate geometry procedurally? As for spaces, you're right; sorry for the confusion.

It depends on the context in which you use the CVEX shader. If you use it in SOPs, it is bound to the voxels you specified for your volume. If you do it at render time, that limitation is gone. That's the main benefit of doing it at render time: you get your procedural volume exactly as detailed as your shading, sampling, and volume step size settings need it to be.

-dennis

Edited by dennis.weil

That's the main benefit of doing it at render time: you get your procedural volume exactly as detailed as your shading, sampling, and volume step size settings need it to be.

-dennis

Yes, this would be useful, though AFAIK it is completely unreachable from the HDK side. My understanding was based purely on what HDK::VRAY_Procedural does.


As I understand it, when defining volumes for mantra, instead of defining a voxel grid you define an "evaluate" function that is called for each sample and gets passed the current shading point. That function could look up data from a voxel grid or, as I'm guessing is the case with the VEX Volume Procedural, call a CVEX function to generate data.

In the HDK, you use a VRAY_Procedural to add a VRAY_Volume to the scene, and the VRAY_Volume defines how the volume data is created. For a better explanation, check out the VRAY_DemoVolumeSphere example file:

http://www.sidefx.com/docs/hdk11.1/_v_r_a_y_2_v_r_a_y___demo_volume_sphere_8_c-example.html


As I understand it, when defining volumes for mantra, instead of defining a voxel grid you define an "evaluate" function that is called for each sample and gets passed the current shading point. That function could look up data from a voxel grid or, as I'm guessing is the case with the VEX Volume Procedural, call a CVEX function to generate data.

In the HDK, you use a VRAY_Procedural to add a VRAY_Volume to the scene, and the VRAY_Volume defines how the volume data is created. For a better explanation, check out the VRAY_DemoVolumeSphere example file:

http://www.sidefx.co..._c-example.html

Aha! Very interesting. Thanks for the link; I hadn't come across it before. It seems VRAY_Volume differs quite a bit from VRAY_Procedural. Makes sense!

