
SSFX

Members
  • Content count

    43
  • Donations

    0.00 CAD 
  • Joined

  • Last visited

  • Days Won

    2

SSFX last won the day on July 16 2015

SSFX had the most liked content!

Community Reputation

10 Good

1 Follower

About SSFX

  • Rank
    Peon

Personal Information

  • Name
    Alexandre
  • Location
    Montreal

Recent Profile Visitors

1,564 profile views
  1. Bend and Break (post) Solver

    This node '/obj/bendsetup/translate_pieces/get_closest_point' should have its second input connected to this node '/obj/bendsetup/translate_pieces/polymatrix_points'. Hope this helps!
  2. volume gradient shop

    So, I realized the solution I posted before wasn't really working. There was indeed a space issue... as usual. So I added a transform to go from world to camera space, since I am using a SOP vector field (world space) that needs to be used in SHOP (camera space). Now it works, without the deformation weirdness, using the pyro displace node. With this setup you don't need to write anything to disk, which might be useful in some situations (see the displacement sketch after this post list). volumeDisplaceAlongGradient_v002.hipnc
  3. volume gradient shop

    Hi Doum! If you are doing stuff in SOPs, you might consider computing your gradient field at the SOP level so it can be passed to SHOPs later. I had the same offset you're having when trying to use the displace node; I don't really know where that offset comes from. It doesn't seem to be space-related. With volumes I tend to use the pyroDisplace node, which works fairly well for this kind of task. You can wire in your gradient field and something to drive the amount of displacement and you're good to go. Here's a working file so it's a little clearer. Alex volumeDisplaceAlongGradient_v001.hipnc
  4. simulating complex plants

    You might consider simulating only a couple of instances and scattering the simulated caches with time offsets across the surface you have to cover with plants (see the wrangle sketch after this post list). Other than that, your workflow seems to make sense. If you try to wire capture hi-res geo, it will take a long time to compute. If the motion of the sim is not too crazy, you might also try to transfer your sim over with the Lattice SOP in point mode. Far from being the most accurate solution, it's still a lot faster than the capture workflow. Alex
  5. There is actually a node for that, VDB vector merge. https://www.sidefx.com/docs/houdini14.0/nodes/sop/vdbvectormerge
  6. You can take a look at this page for info: http://www.sidefx.com/docs/houdini14.0/render/deeprasters There is a bunch of very useful AOVs you can output by using the default shaders in mantra. If you want to create custom AOVs (e.g. matte IDs), you can dive into your shader, use the "bind export" node and wire stuff to it. Give it a name and use the same name on your mantra node in the "Extra image planes" tab (see the shader sketch after this post list). There is also a tutorial at CMIVFX that covers the subject pretty well if you have an account. Hope this helps. Alex
  7. Dennis Albus | Technical Reel 2015

    No worries, take your time. I'm just really curious about the feature and how you figured it out!
  8. how to separate BSDF channels ?

    Think he's talking about per-light AOVs, right?
  9. how to separate BSDF channels ?

    It's probably not the way to go. You probably already know about it, but there is a cool thread about spectral rendering. The code is pretty simple and compact, and it works absolutely perfectly. The amount of noise is also very reasonable compared to other shaders out there. http://forums.odforce.net/topic/6925-dispersion-bsdf/page-3
  10. Dennis Albus | Technical Reel 2015

    So where is that "generator rendering engine"? I can only find vm_generatorshader deep down in the SOHO param files, but no sign of the render engine itself. Can you give me a hint? Great reel btw! Thanks
  11. Paint on Speaker WIP

    Pretty cool! Are you using any kind of custom surface tension nodes in there, or is it the default FLIP solver? How many substeps have you used in your solver?
  12. getglobalraylevel() explanation ?

    Hey Emmanuel, I'm no mantra pro by any means; I started working with mantra about a year ago. I always tend to stick with PBR since it's closer to what I already know and have extensively used in production (Arnold, Guerrilla). When I say GI, I mean the GI light in the shelf, the one using a photon map to generate indirect diffuse information without path tracing. It seems to be very close to photon mapping in V-Ray or Final Gather in Mental Ray. I've seen a lot of people using it for caustics or interior scenes, but I personally tend to avoid it if I can get away without processing a point cloud every time I move or add a light.
  13. getglobalraylevel() explanation ?

    From the tests I have done, getglobalraylevel() doesn't work in PBR mode; it always returns 0. In fact, it only seems to work with the other render engines, like raytracing and micropoly, when using the GI light. It doesn't take refraction or reflection into consideration and it only counts the diffuse bounces of the GI light. So if you want to work with PBR you can forget about getglobalraylevel(), which seems to serve a very specific purpose. Other than that, I tried modifying behaviors based on the return value of getglobalraylevel() and it took ages just to modify the color of the rays. I think this function is only used to detect whether the ray is a GI bounce or not.
  14. getglobalraylevel() explanation ?

    If you look at the VEX files provided in HFS, they only use getglobalraylevel() for global illumination. All the PBR-related code uses getraylevel(). Maybe it's an optimization for PBR, I don't really know. The docs are not really helping, that's for sure! This could be what you're looking for: http://forums.odforce.net/topic/19041-pbr-rayswitch/?hl=getraylevel#entry127410 (see the ray-switch sketch after this post list) Alex
  15. how to separate BSDF channels ?

    I don't have a solution for you, but I have some thoughts about what I've tested in your scene. I think what you're seeing in the right picture is just sampling noise. You can see it decrease right away if you bump up the AA samples. You can also clearly see that it gets noisier every time it goes down another level of refraction. I think your shader is defining 3 different specular BSDFs that will be sampled individually at render time. That would mean something in the background (could be a VEX function similar to nextsample) is generating new samples (SID) on the fly to get an artifact-free render. From what I understand, you would need a way to tell mantra to compute the BSDF once and then do your RGB splitting, or to compute the BSDF 3 times but with exactly the same samples. Sorry if this is not helping you out, it's just observations, nothing more. BTW, I'm following what you're doing in terms of shader R&D and it's really interesting. As soon as I get better at shader writing, I'll try to help you out with this! Keep up the good work! Alex
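
A minimal VEX sketch of the space fix described in the volume gradient posts above (the shader and parameter names are illustrative, not taken from the posted .hip files): the gradient comes from a SOP vector field stored in world space, while mantra shades in its current (camera) space, so the vector has to be transformed before it drives the displacement.

    // Hypothetical displacement shader: push P along a SOP-computed gradient.
    // "gradient" is assumed to be bound from a world-space SOP vector field.
    displacement displace_along_gradient(
        vector gradient = 0;    // world-space gradient field (assumed binding)
        float  amount   = 0.1;  // displacement scale
    )
    {
        // Mantra shades in "current" (camera) space, so bring the
        // world-space SOP vector into the shading space first.
        vector grad = vtransform("space:world", "space:current", gradient);

        if (length(grad) > 0)
            P += normalize(grad) * amount;

        // Recompute the normal after moving the point.
        N = computenormal(P);
    }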
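
As a rough illustration of the time-offset idea from the plant post (the attribute name and range are made up), a Point Wrangle on the scatter points could stamp a per-instance offset that a downstream Time Shift or instancing setup then reads:

    // Point Wrangle sketch: give every scattered plant copy its own time
    // offset so the cached sims don't all play back in sync.
    // "timeoffset" is a hypothetical attribute for a downstream node to read.
    f@timeoffset = rand(@ptnum + 42) * 2.0;  // random offset between 0 and 2 seconds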
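
To make the "bind export" / extra-image-plane workflow from the AOV post a bit more concrete, here is a hedged VEX equivalent (the shader and plane names are invented for the example); an export parameter is roughly what a Bind Export VOP compiles down to, and the same name goes into the "Extra image planes" tab on the mantra node.

    // Minimal PBR surface shader with one custom AOV.
    // Add an extra image plane named "matte_red" on the mantra node to get
    // the exported value as its own pass.
    surface simple_matte_aov(
        vector basecolor = {0.5, 0.5, 0.5};
        export vector matte_red = 0;  // hypothetical matte/ID AOV
    )
    {
        F = basecolor * diffuse(normalize(N));  // main BSDF output for PBR
        matte_red = {1, 0, 0};                  // constant matte written to the AOV
    }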
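
A sketch along the lines of the PBR ray-switch thread linked in the getglobalraylevel() posts (the shader name and colors are illustrative): in PBR it is getraylevel() that distinguishes camera rays from secondary bounces, while getglobalraylevel() only tracks the GI light's diffuse bounces.

    // Hypothetical PBR "ray switch": one color for camera rays, another
    // for reflection/refraction bounces.
    surface ray_switch(
        vector camera_clr    = {1, 0, 0};
        vector secondary_clr = {0, 0, 1};
    )
    {
        // getraylevel() is 0 for primary (camera) rays and increments with
        // every reflection/refraction bounce.
        vector clr = (getraylevel() == 0) ? camera_clr : secondary_clr;
        F = clr * diffuse(normalize(N));
    }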