davpe last won the day on November 26

Community Reputation

109 Excellent

About davpe

  • Rank
    Houdini Master

Personal Information

  • Location
    Adelaide, AU

  1. Sometimes the Clean SOP with the Orient Polygons option enabled works. That should fix individual flipped faces, though not whole islands... a combination of methods is probably the best way to go; it really depends on the particular model.
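For the per-face case, the fix is conceptually simple. Here is a standalone plain-Python sketch (not the Clean SOP itself, and the reference normal is a hypothetical input): compute a face normal with Newell's method and reverse the winding whenever it opposes the reference direction.

```python
# Standalone sketch of what "orient polygons" does conceptually:
# detect a face whose winding opposes a reference normal and
# reverse its vertex order so the normal flips.

def face_normal(points):
    """Face normal of a planar polygon via Newell's method (unnormalized)."""
    nx = ny = nz = 0.0
    for i, (x0, y0, z0) in enumerate(points):
        x1, y1, z1 = points[(i + 1) % len(points)]
        nx += (y0 - y1) * (z0 + z1)
        ny += (z0 - z1) * (x0 + x1)
        nz += (x0 - x1) * (y0 + y1)
    return (nx, ny, nz)

def orient(points, reference_normal):
    """Reverse the vertex order if the face normal opposes the reference."""
    n = face_normal(points)
    d = sum(a * b for a, b in zip(n, reference_normal))
    return points if d >= 0 else points[::-1]
```

A counter-clockwise square in the XY plane gets a +Z normal and is left alone when the reference points up, but its winding is reversed when the reference points down.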
  2. Copied relative reference issue

    The best way would be to use a single Copy to Points SOP and put an attribute on the template points that specifies which object gets copied onto each of them. Watch this video; the part you want starts at about 06:20.
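The idea can be sketched in a few lines of plain Python (this is not the Houdini API, and names like "variant" and the source pieces are hypothetical): each template point carries an index attribute, and the copy step uses it to pick which source piece lands on that point.

```python
# Sketch of a single copy-to-points driven by a per-point "variant"
# attribute: one copy node, many different source pieces.

sources = ["rock_A", "rock_B", "tree"]  # hypothetical source pieces

template_points = [
    {"P": (0, 0, 0), "variant": 0},
    {"P": (1, 0, 0), "variant": 2},
    {"P": (2, 0, 0), "variant": 1},
]

def copy_to_points(points, sources):
    """Pair each point with the source selected by its 'variant' attribute."""
    return [(sources[pt["variant"]], pt["P"]) for pt in points]

copies = copy_to_points(template_points, sources)
# copies -> [("rock_A", (0, 0, 0)), ("tree", (1, 0, 0)), ("rock_B", (2, 0, 0))]
```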
  3. Is moving to Australia reasonable?

    I don't think you can just travel to the country and start looking for a job in VFX studios. Visas, especially in Australia, are a pretty lengthy and paperwork-intensive process, and generally your employer has to offer you a job first; then, if you come to an agreement, they'll assign you an immigration agent who will guide you through obtaining the visa. I'm pretty sure working-holiday visas don't apply to work in VFX. If you're thinking about emigrating permanently, you would have to be offered a permanent position, which is unlikely if you're not a skilled professional. Getting people into Australia is pretty expensive for companies, so they're quite picky about who they bring in, and juniors typically don't pay off to relocate from very far away. But hey, trying won't cost you anything, so contact a few companies and see whether (and what) they respond. My advice would be to build a solid basis at home first (or anywhere in Europe), have a great showreel, and work for a couple of high-end studios; your chances for Australia will then be much higher. Good luck.
  4. Convert images using icp ?

    Yeah, OK, I've never used the icp method. The thread you've linked is quite dated (2007); maybe iconvert wasn't around then?
  5. Convert images using icp ?

    Are you sure you want to use icp and not iconvert? For iconvert, maybe this video will help you:
  6. Hi, maybe like this? roundedge_to_tangent.hiplc
  7. Over time I have found two solutions that more or less work when you NEED to do blur in a shader, but both have caveats due to the fact that you actually have to raytrace the blur, and I generally bake the result into a bitmap to avoid high render times and/or artifacts from sampling noise. The poor man's trick is a gaussian random VOP added to your position data, like this: it's easy to do but may be hard to sample adequately; good in simple cases, but forget about using it in more complex shaders. Another option is the gather VOP. That's a more capable, loop-based solution, but you'll soon see it has similar caveats, since in essence you still need to raytrace the blur. Check this article for details: https://vfxbrain.wordpress.com/2018/08/28/gather-vex-function/ In general, blurring non-bitmap-based patterns seems to be a hard nut to crack. I don't know what you're doing, but if it's just blurred dots, I'd use a COP network (or Substance Designer), bring the result in as a bitmap, and work the shader up from there. Cheers.
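To illustrate the "gaussian random added to position" trick outside of VOPs, here is a minimal plain-Python sketch (the 1D step pattern is hypothetical; this is not Houdini's API): every sample looks the pattern up at a gaussian-jittered position, and averaging enough samples produces the blur, which is also why undersampling it shows up as noise.

```python
import random

def pattern(x):
    """A hypothetical sharp 1D procedural pattern: a hard step at x = 0."""
    return 1.0 if x > 0.0 else 0.0

def blurred(x, radius, samples=256, seed=0):
    """Poor man's blur: average the pattern at gaussian-jittered positions.

    Same idea as feeding (P + gaussian random) into a pattern in a shader:
    each shading sample evaluates the pattern at a randomly offset location,
    and many samples average into a soft edge.
    """
    rng = random.Random(seed)
    return sum(pattern(x + rng.gauss(0.0, radius)) for _ in range(samples)) / samples
```

Far from the step the result stays 0 or 1, while right on the step it averages to roughly 0.5, with residual noise that shrinks as the sample count grows.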
  8. Normals from Displacement ?

    The difference is the input data from which the tangent normals are calculated; by default I think it looks at the UV coords. Read the docs for details. Yes, that's exactly what I mean: just plug the displaced N into the output. It renders as expected for me; look at the picture I posted. Honestly, I don't know much about how the different contexts work, but generally if you want to export any image-plane data (including a displacement pass) you always use Surface; at least I never had to change it. Again, maybe read the docs for more info?
  9. Normals from Displacement ?

    Hi, there were more issues in your file. First of all, the bind VOP: the attached picture is pretty self-explanatory on this point, I hope. Then, your tangentnormal VOP did not have any input normal connected, and the tangent style must be set to "Use connected utan, vtan" (which are supplied by the computetan VOP wired into it, by the way). Third, in this case your displace2 VOP must be set to the "Normal" mode of operation because a tangent normal is the input value. And the last thing: I just don't understand what you're trying to do. First you use the Displace Along Normal node to compute a displaced normal in object space. Then you do some math to convert that normal into tangent space, only to connect it into another displace node that finally converts it back to object space before it's connected to the output N. That conversion is completely redundant; why didn't you simply use the first displace VOP's normal for the surface output? Cheers. A_Shader_fix.hipnc
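To see why that conversion chain is redundant, here is a tiny plain-Python sketch (not the VOP network): going object space to tangent space is just a change of basis by the orthonormal TBN frame, and converting straight back returns exactly the vector you started with.

```python
# Change of basis by an orthonormal TBN (tangent/bitangent/normal) frame.
# Converting to tangent space and straight back is the identity, which is
# why object -> tangent -> object inside one shader chain does nothing.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def to_tangent(v, t, b, n):
    """Object space -> tangent space: project onto each basis vector."""
    return (dot(v, t), dot(v, b), dot(v, n))

def to_object(v, t, b, n):
    """Tangent space -> object space: recombine the basis vectors."""
    return tuple(v[0] * t[i] + v[1] * b[i] + v[2] * n[i] for i in range(3))

# An example orthonormal frame and an arbitrary vector:
t, b, n = (0.0, 1.0, 0.0), (0.0, 0.0, 1.0), (1.0, 0.0, 0.0)
v = (0.3, 0.4, 0.5)
round_trip = to_object(to_tangent(v, t, b, n), t, b, n)  # == v
```

The round trip only makes sense when something else consumes the intermediate tangent-space value, e.g. a bind export for a normal-map AOV.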
  10. Normals from Displacement ?

    That means you're doing something wrong. I don't know; try posting your scene. It certainly works for me...
  11. Normals from Displacement ?

    The first displacement computes an object-space normal out of a height map. The resulting normal is then converted to a tangent-space normal (with a bind export called Ntangent; use that to render the normal map as an AOV). The tangent-space normal map goes to the second displacement, set to the Normal mode, which converts it back to object space to be used in the shader. Of course, you typically don't do this kind of conversion in your actual shader; you just use a height map or a normal map with a single displace node. This is just an example of how you'd go about converting height to normal. Two displacements can be used simply by combining the two textures before connecting them to a displacement node. Displace has more functionality than the simple Displace Along Normal; drop down both nodes, look at their parameters, and you'll see the difference.
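As a rough illustration of what that first step computes, here is a plain-Python sketch of the standard height-to-normal trick (the height function is a hypothetical callable; this is not the actual VOP): take finite differences of the height field and build a normal from the slopes.

```python
# Height field to normal via central differences: the normal of the
# surface z = scale * h(x, y) is (-dh/dx, -dh/dy, 1), normalized.

def height_to_normal(h, x, y, eps=1e-3, scale=1.0):
    """Normal of the surface z = scale * h(x, y) at (x, y)."""
    dhdx = (h(x + eps, y) - h(x - eps, y)) / (2 * eps)
    dhdy = (h(x, y + eps) - h(x, y - eps)) / (2 * eps)
    n = (-scale * dhdx, -scale * dhdy, 1.0)
    length = sum(c * c for c in n) ** 0.5
    return tuple(c / length for c in n)
```

A flat height field yields a straight-up normal, and a 45-degree ramp tilts it accordingly.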
  12. Normals from Displacement ?

  13. What exactly are you using takes for? Those are mainly designed for animation iterations; for other tasks they're not very practical (as you already found out). Personally I never use takes unless I've got some very specific reason to.
  14. Hi, as far as I understand, you're not rendering your geometry as packed primitives. If that's the case, do take a look at the packed-primitives workflow (really nicely explained in the docs). You should have just a single point per piece, representing your transforms, and use that to drive high-poly pieces that are delay-loaded and rendered as memory-efficient packed primitives. This way you're able to render hundreds of millions of polys with quite low memory requirements. Memory consumption depends on many factors, of course, but in general packed prims are the most memory-efficient way to render heavy geometry in Mantra (alongside polysoups, though those help more when you have a single massive mesh rather than RBD pieces). Cheers, D.
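A toy plain-Python sketch (all names hypothetical; this is not Mantra's API) of the memory model the packed workflow buys you: every piece is just a point carrying a transform plus a reference to one shared heavy mesh, so a hundred thousand copies still hold a single mesh in memory.

```python
# One shared copy of each heavy mesh; per piece, only a lightweight
# record (a "point" with a transform) that references it.

heavy_meshes = {
    "rock": ["...millions of polys, loaded once..."],  # hypothetical payload
}

# One point per piece: just a transform plus a reference to shared geo.
instances = [
    {"mesh": "rock", "translate": (float(i), 0.0, 0.0)}
    for i in range(100_000)
]

# All instances resolve to the same single mesh object in memory:
resolved = {id(heavy_meshes[inst["mesh"]]) for inst in instances}
assert len(resolved) == 1
```

The per-instance records are tiny compared to the mesh, which is the whole point of rendering one transform point per piece instead of unpacked geometry.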
  15. trying to understand how arrays work

    Hi, maybe take a look at this article: https://vfxbrain.wordpress.com/2018/06/14/using-arrays-with-vex/