About Maxc

  1. I seem unable to get a CVEX fisheye lens shader to render on our render farm (through Deadline). Since the lens shader is saved in the file, why doesn't every machine use it? Most importantly, how do I get the farm to use this shader? Thanks.
  2. Thanks Tomas, that's so simple I can't believe it. Amazing! For those that need to do this in the future, here's the setup.
  3. The problem with VR and fisheye rendering is that it needs "z depth" to be equal in every direction, not just along the z axis of the camera (Pz). Could anyone point me to a shader that could be used as an "omni" depth plane (AOV) in Mantra? Thanks.
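A minimal sketch of the idea behind an "omni" depth plane (my own sketch, not something from this thread): in Mantra shading contexts, P is in camera space, so the camera sits at the origin and length(P) gives the straight-line distance to the shaded point, which is the same in every direction, unlike Pz. The shader and export names here are hypothetical.

```vex
// Hypothetical surface shader exporting a radial ("omni") depth AOV.
// In Mantra shading contexts P is in camera space, so the camera is
// at the origin and length(P) is the straight-line distance to the
// shaded point -- equal in every direction, unlike Pz.
surface omni_depth(export float omnidepth = 0)
{
    omnidepth = length(P);
}
```

To get the values out, an extra image plane bound to the "omnidepth" export would be added on the Mantra ROP.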
  4. vertex animation -BBOX MAX & MIN values help

    Hi Mike, I saw your post on the new Unity update, just installed the new vertex_animation_textures_beta from https://github.com/sideeffects/GameDevelopmentToolset/tree/Development, and everything works now. I have successfully got cloth into Unreal. Thanks for the help.
  5. vertex animation -BBOX MAX & MIN values help

    Hi Mike, I have an Indie licence, so any idea how to get these MIN/MAX values? In the videos Luiz always shows a working scene, not one built from scratch. Thanks.
  6. I'm struggling to get data out of the position map; it's just grey. I'm assuming it's to do with the bbox values. In the GDC 2017 cloth example there are values here, but when I use the vertex animation node, the BBOX MAX & MIN values are not automatically filled out. Where do these values come from, and how do you get them? Out of curiosity, what are they for and what do they mean to Unreal? Thanks.
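On what the MIN/MAX values are for, a hedged sketch of the usual convention (an assumption about how such position textures are read back, not confirmed in this thread): the texture stores each point's position normalized to 0..1 inside the animation's bounding box, and the game-engine material uses the exported bounds to remap each sample back to object space.

```vex
// Sketch (assumed convention): decoding one vertex-animation position
// texture sample back into object space using the exported bounds.
// tex is the 0..1 colour sampled from the position map; bbox_min and
// bbox_max are the BBOX MIN / MAX values written at export time.
vector decoded = fit(tex, {0,0,0}, {1,1,1}, bbox_min, bbox_max);
// equivalently: decoded = bbox_min + tex * (bbox_max - bbox_min)
```

Under that assumption, missing or degenerate bounds would explain a flat grey result, since every sample would collapse to the same decoded value.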
  7. It's had some issues in the past, admittedly, but that tide may have turned. I'm quoting Andrew Hazelden, who has made a lot of lens shaders, including the Domemaster set for both Mental Ray and V-Ray, which we use for stereo equirectangular in 3ds Max and Softimage: "Mental Ray used to work with Houdini back in 2010, but no one has really ventured into using it as an option in the meantime due to lack of support from the developers and end users. The version of Mental Ray 3.14 Standalone that comes with every new paid license of Mental Ray 3.14 for Maya or 3ds Max would be able to render a Houdini-exported mental ray .mi scene file. Also, the latest mental ray release has some really nice GPU rendering modes with improved global illumination that has started to make it almost usable again, and I'm a bit partial to the mental ray lens shader support as it works on both the CPU and NVIDIA GPUs now." https://forum.nvidia-arc.com/showthread.php?15796-Mental-Ray-3-14-Standalone-and-Houdini-16&p=64071#post64071 The NVIDIA reply is that it depends on how much interest there is from the community. I think it would be a good thing, and it may well work out a cheaper option than Redshift and Octane, with the benefit of old-school CPU rendering. Andrew has some great tools for cinematic VR too: https://www.andrewhazelden.com/blog/
  8. This is answered in the video https://www.sidefx.com/tutorials/autorigging-masterclass/: you can save presets by going to "File" above the Python pane.
  9. I can't find a way to save the guide rig so you can try out different configurations. Once you generate the rig, the guide is "destroyed". Also, if you re-open a file with a guide rig, you can't generate a rig from it. Any ideas how to connect it back to the Python pane? Thanks.
  10. Hi Rich, that's brilliant. I didn't realise you have to initialise the Add point position and also add a constant ptnum in the VOP. Thank you for your help. I've still got a way to go before I understand VEX wrangles, but this helps it make sense.
  11. Hi Rich, thanks, that's great and gives me some new ideas, but it's not quite what I mean. In this case I don't want them all to have the same normal direction; I want each point to have a normal derived from its current position subtracted from the single point. In the same way, you can use a wrangle with @N = @P, which aims the normals relative to the origin. I don't quite understand how this works, admittedly; I just want to control where that "origin" is and have all the normals point there. I'm after a more efficient way of doing what my VOP already does. My issue is that I have to have the same number of points in each primitive.
  12. Is there a way to align all the normals of one primitive to a single point of another primitive? I am trying to achieve a bulge-type deformation with a Point VOP, using the normals as a vector. Currently I can align the normals of the first primitive only if the second primitive has the same number of points, but I only need one point for them to align to. Thanks. 046 displace_along_vector.hiplc
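A hedged wrangle version of the aim-normals-at-a-point setup described above (assuming the target is the first point, ptnum 0, of the geometry wired into the second input): subtracting positions gives the aim vector directly, and because point() reads from the second input, the two primitives do not need matching point counts.

```vex
// Point Wrangle: input 0 = geometry whose normals we set,
// input 1 = the single target point (assumed to live at ptnum 0).
vector target = point(1, "P", 0);   // position of the lone target point
v@N = normalize(target - v@P);      // aim every normal at the target
// Flip the subtraction (v@P - target) to point the normals away from
// the target instead, the way @N = @P relates the normals to the origin.
```

The choice of sign sets whether a later displacement along @N bulges toward or away from the target point.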