
djo

Members
  • Content count

    19
  • Donations

    0.00 CAD 
  • Joined

  • Last visited

Community Reputation

0 Neutral

About djo

  • Rank
    Peon

Personal Information

  • Name
    Joan
  • Location
    Citizen of the World
  1. On further thought, if there is no straightforward way to do this in Mantra, I'm thinking of writing out a point cloud with all this information, one point per pixel, and then finding a way to convert it to a DCM file. (A point cloud sketch of this idea follows after this list.)
  2. Hey Guys, I have been playing with a raytracing shader. The surface I apply the shader to and the actual position I am computing in the shader are quite a fair distance apart; it's like I'm looking through a window. All works well and good, however I am now looking into implementing deep data in this shader. We can read deep data with dsmpixel, but is there any way of writing it out from the shader? At the moment my deep data comes out with the data of the actual plane surface the shader is applied to, but not the objects and procedurals inside the "window" which my shader generates. Does anyone have a clue how to feed this custom data to Mantra? Is it just a simple array variable to export, or a pair of arrays, one for depth and one for color/alpha? Is it even possible? Thanks for any idea you might have. J
  3. Hmm, so you're saying to generate a VDB and apply a Volume SOP with my CVEX on it? Wouldn't that require me to crank up the voxel count to get the fine detail?
  4. Hey Fathom, I am already trying to do this, and I also wanted to couple that idea with the raymarching. But I wanted to make sure it was worth it first. As I've just tried: even if I set the output of the CVEX shader to a constant 1 inside a predetermined box and 0 outside of it (a CVEX sketch of this test follows after this list), with a volume quality of over 100 it really is faster IF there is no transparency. But if there is some transparency, then the marcher has to march through the whole cube at the specified rate to get the close-up detail, which is a shame. Thanks
  5. Thanks Sebkaine for your answer. I am looking through the documentation of these functions, however I do not see how this can achieve what I am looking for. Basically I have some CVEX code that generates a volume through the VEX Volume Procedural node, which is attached to an empty SOP, which in turn is attached to a shader. This CVEX code has the potential to generate a highly detailed animated volume that I will render as an isosurface, as I want a hard surface. I might also want that surface to be transparent, perhaps applying a glass shader, etc. In order to get the high detail I need for the CVEX code to be useful, at present I have to set the volume quality to 100+, which makes the render time escalate. However that is only needed to get the detail close to the camera; if I set the volume quality to 20 or 10 I can get away with midground stuff, and bring it down to 1 for background stuff. As the isosurface will most certainly be transparent and reflective/refractive, I am just trying to make the render manageable. What would be awesome is to be able to control the marcher so it could work in variable steps, going for example from 0.001 units close to the camera to over 1 unit at the back, where detail is less likely to be important. (A sketch of fading the detail with camera distance inside the CVEX itself follows after this list.) I have tried writing a custom shader that does this through a little raytracer within it, and it works very well; I can get very high quality detail close to the camera. However this means I am outside of the standard Mantra renderer and need to implement everything I need in the shader (reflection, refraction, opacity, lighting, etc.), which I am trying to avoid, hence trying an isosurface approach first. If I can't get my renders to a manageable level then I will have to go back and push the custom shader further. Cheers
  6. Hey Guys, I'm trying to optimize some renders and I wonder if there is any way of changing the volume quality based on distance from the camera. I have a custom CVEX volume procedural that adds lots of detail, but I need the detail at the front of the volume and less so at the back, so I'm looking to see if the renderer could march bigger steps as it progresses through the volume. Does anyone know if it's possible in Mantra?
  7. OK, I found it out. For anyone interested, I needed to tick "Get properties from VEX code" under the Node tab in the Type Properties window. This has however broken the parameters I was reading in the GLSL shader. I just need to reassign all the parameters in VEX now and they are properly transferred over to the GLSL shader. Yay
  8. Hey Guys, I've been playing around with the GLSL shader for a few days and understood how it works. I created a shader using mostly the vertex and fragment shaders and get a good result of what I am trying to achieve in the viewport. I am now trying to add the VEX code to replicate whatever I did in the GLSL shader, so it would be picked up by Mantra and render something very similar to what I have in OpenGL. However I have no clue how to bind the VEX code to the Mantra renderer, as it doesn't seem to be taken into account. The VEX pane is mostly empty, so I tried something very simple like this to begin with, to make sure the shader was actually rendering something (I called my Houdini asset glslTest):

        surface glslTest()
        {
            vector color = {1, 1, 0};
            Cf = color;
        }

     I tried this in a standard VEX SHOP and applied it to the object, and there it does get taken into account. Does the name of the procedure have to be something specific for it to be picked up? I tried main, like you would do in GLSL. Do you need to specify some pragmas? The documentation is lacking a bit on this front, given how Houdini documentation is usually outstanding and a model for how it should be done. I would love to avoid having 2 separate shaders. The documentation seems to say the GLSL shader can handle both viewport and Mantra renders. SideFX, please tell us more about this.
  9. Hmm, I see, so not out of the box like that. The other thing I'm looking into is storing the data in the volume as a separate volume primitive of just one giant voxel (see the volumesample sketch after this list). It's a bit convoluted, especially if I have several vector attributes, and it will get quite messy, but I think it might just work for me. It's just that I would need to write out my animation of parameters as a sequence of volumes, which is not ideal. I'll need to see if it's possible to automate the creation of volume primitives based on how many detail attributes there are on the volume object. Thanks for your help and insight fathom.
  10. Interesting. Indeed, rushing into building a test case on multiple objects to post here, I didn't stop and think that a detail attribute is one per object. Maybe I would then store the attributes on placeholder primitives, I guess, and change this topic to "Reading Primitive Attributes in a Volume VOP/CVEX". Or perhaps try storing an array in a detail attribute, as Houdini 14 is supposed to be able to do that, though I have never tried it yet. However I was initially trying to get it to work with a detail attribute on a single object, hence a Volume SOP per volume in my preview geometry. I did previously try rendering a single volume with a detail attribute to control some things, but I just didn't manage to get it to work. Also, I never knew that the path we put in is an actual command line; I'll investigate this, though it might be convoluted when passing a lot of data. Cheers
  11. Hey Guys, thanks for your ideas. So I installed an Apprentice version of Houdini 14 (to try ImportDetailAttribute) and built a simple scene explaining my problem. This works nicely at SOP level, previewing in the viewport. However for rendering I think I am using the wrong approach, as the CVEX rebuilds the volume fields from scratch, overriding whatever geometry was there in the object the CVEX is attached to in the first place. At render time there is no animation, because my detail attribute "anim" doesn't exist when the CVEX rebuilds the volume fields. How would one approach this problem, given that in this scene there might be just 3 cubes, but I might have a lot of them that I need to animate by hand? I am trying to stay resolution independent; I would like to zoom in pretty closely, so I am not looking at caching out the preview volumes and rendering those. Finally, I am trying to use the same CVEX for both preview and render. Thanks for your help. cvexDetailAttribute.hipnc
  12. Also, I forgot to mention that I am on Houdini 13, so ImportDetailAttribute is not accessible to me.
  13. Hey anim, thanks for the reply, but the problem here is that I need to specify a geometry path. I could set the default to op:`opfullpath(opinputpath("./",0))` for the Volume VOP, but how would this translate when used as a shader?
  14. Oops, I posted this in the wrong section; I had multiple tabs open. Sorry about this. Could anyone move it back under General Houdini Questions?
  15. Hey Guys, I have a question regarding Volume VOPs. I am trying to do an operation on the density of a volume, using detail attributes as the parameter values for my operations, and I can't for the life of me work out if it's possible. Reading a detail attribute with a Parameter node works fine for SOP operations in a VOP SOP, but it doesn't do anything in a Volume VOP; it just reads the default value of the Parameter node and doesn't care that an attribute with the same name is already set. I have tried most attribute types to no avail. I am testing this with a very basic setup: I created two volumes made from boxes and added a detail attribute for the density multiplier called densityMultiplier, which I animate independently on each volume. However, adding a Volume VOP, and within it a parameter called densityMultiplier, and multiplying the volume density by this parameter doesn't work as intended (see the Volume Wrangle sketch after this list for what I am after). Basically what I am trying to do is run an animated CVEX operation, using the Volume VOP for preview so I can animate the values properly, while using the CVEX in the shader at render time for high quality. I will have many volumes, all with different animations of the parameters, and I do not wish to duplicate my shaders/Volume VOPs. If I could store the parameter animations in detail attributes and read them straight away from the Volume VOP/CVEX it would be amazing, but I can't seem to be able to :'( Am I doing something wrong? Does anyone have an idea? Is it even possible with a Volume VOP/CVEX in Houdini, like it is possible in a VEX SOP context? Thanks in advance for any help and insights. Cheers
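
Sketch for item 1 above: a minimal idea of dumping one point per shaded sample from a VEX surface shader with pcwrite, so the samples could later be converted to deep data. The file path, channel names, and the placeholder color are assumptions; whether a point cloud is the right vehicle for this is exactly the open question in the post.

    surface dump_deep_samples(string pcfile = "deep_samples.pc")
    {
        // the shader's normal shading result (placeholder constant)
        vector col = {1, 1, 0};
        Cf = col;

        // hypothetical: one point per shaded sample, carrying position,
        // camera-space depth and color, for later conversion to deep data
        float depth = length(ptransform("space:camera", P));
        int ok = pcwrite(pcfile, "P", P, "depth", depth, "color", col);
    }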
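
Sketch for item 4 above: the constant-1-inside-a-box test mentioned there, written as a CVEX function for the VEX Volume Procedural. The bindings (P in, density out) follow the usual volume-procedural convention; the box bounds are made up.

    cvex box_density(vector P = 0;            // sample position, bound by the procedural
                     export float density = 0)
    {
        // hypothetical box bounds
        vector bmin = {-0.5, -0.5, -0.5};
        vector bmax = { 0.5,  0.5,  0.5};

        int inside = P.x >= bmin.x && P.x <= bmax.x &&
                     P.y >= bmin.y && P.y <= bmax.y &&
                     P.z >= bmin.z && P.z <= bmax.z;

        density = inside ? 1.0 : 0.0;
    }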
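
Sketch for item 5 above: one way to cheapen the detail generation at distance inside the CVEX itself, rather than changing the marcher's step size (which is the actual limitation described in the post). campos is a hypothetical parameter you would promote and feed the camera position into by hand; the noise call and the fit ranges are placeholders.

    cvex detail_by_distance(vector P = 0;
                            vector campos = 0;           // assumed: camera position, set manually
                            export float density = 0)
    {
        float d = distance(P, campos);

        // fewer noise octaves far from the camera (made-up ranges)
        int octaves = int(fit(d, 1.0, 50.0, 8.0, 2.0));

        // placeholder detail field
        density = onoise(P, octaves, 0.5, 1.0);
    }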
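
Sketch for item 9 above: reading the "one giant voxel" parameter volume back inside the CVEX with volumesample. The op: path and the volume name densityMultiplier are hypothetical; the idea is only that a single-voxel float volume can carry one animated scalar per object.

    cvex read_param_volume(vector P = 0; export float density = 0)
    {
        // hypothetical: a SOP path holding a single-voxel float volume
        // primitive named "densityMultiplier" with the animated value
        float mult = volumesample("op:/obj/box1/OUT_params", "densityMultiplier", {0, 0, 0});

        // placeholder density field, scaled by the per-object parameter
        float n = noise(P);
        density = mult * n;
    }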
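
Sketch for item 15 above: reading the detail attribute explicitly instead of through a Parameter VOP, e.g. in a Volume Wrangle (or a Snippet/Inline VOP inside the Volume VOP). This is an alternative to the Parameter-node approach described in the post, not a claim that the Parameter node should behave this way; the render-time path is hypothetical.

    // Volume Wrangle bound to the density volume:
    float mult = detail(0, "densityMultiplier", 0);   // input 0, element 0
    @density *= mult;

    // At render time the CVEX cannot read input 0; the geometry would have
    // to be named explicitly, e.g. (hypothetical path):
    //     float mult = detail("op:/obj/box1/OUT", "densityMultiplier", 0);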