About StepbyStepVFX

Personal Information

  • Interests
    VFX, Computer Graphics, AI, Deep Learning, Cinema and many more...
  1. Assemble deletes UV attribute ?

    If you have asked the Assemble SOP to output packed geometry, you won't see the UVs unless you unpack the geometry. These attributes are not lost; they are kept at the geometry level of the packed primitive. In essence, packed geometry lets you manipulate "lighter" objects: each packed primitive is treated as a single point, with its geometry stored elsewhere. This allows instancing and lightweight scene management, the geometry being only accessed at render time. http://www.sidefx.com/docs/houdini/model/packed.html
  2. Sandstorm with Billowy Smoke ?

    Hi, have a look at these topics. Hope that helps! :-)
  3. findattribval in for loop

    No: in H16 and H16.5, findattribval returns an integer, not an array, so the for loop cannot work properly (it does not know what to iterate over). If you want to learn VEX on Houdini 16 and/or 16.5, refer to the docs of those versions; do not use the H17 docs :-)

    You mean stopping them once their speed reaches a certain threshold?

    Hi, you can either use a POP Wrangle or a POP VOP and adjust the speed directly within your sim (you can even group the particles you want to slow down and wrangle only that group), keyframing a factor (or a ramp) that you apply to the speed. Or you can tweak your sim afterwards using CHOPs, which also gives you lots of control.
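    As a minimal sketch of the in-sim approach, assuming a POP Wrangle with "threshold" and "damping" float parameters added to its interface (both names are illustrative, and the damping factor can be keyframed as described above):

    ```vex
    // POP Wrangle: damp particles once they exceed a speed threshold.
    // chf() reads the float parameters added to this wrangle's interface.
    float threshold = chf("threshold");
    float damping   = chf("damping");   // e.g. 0.9, keyframeable
    if (length(v@v) > threshold)
        v@v *= damping;
    ```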
  6. findattribval in for loop

    Hi, which version of Houdini are you running? From the docs, H17 allows findattribval to return an array, but in my H16 / H16.5 docs the only signature of that VEX function returns an int. So in my opinion your Wrangle is getting an error because your for loop needs to iterate over an array while you are feeding it an integer... In this case, I would recommend working with findattribvalcount, and then iterating over findattribval(0, "point", "id", 10, i) for each index i up to that count. See their 3rd example :-)
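    The count-then-index approach above could be sketched like this in a Wrangle (H16/H16.5-style signatures; the attribute name "id" and value 10 follow the example in the post, and the "matched" attribute is illustrative):

    ```vex
    // Count how many points have id == 10, then visit each match by index.
    int n = findattribvalcount(0, "point", "id", 10);
    for (int i = 0; i < n; i++)
    {
        int pt = findattribval(0, "point", "id", 10, i);
        // do something with point pt, e.g. tag it:
        setpointattrib(0, "matched", pt, 1);
    }
    ```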
  7. fit UV into 0-1

    Brilliant, and neatly simple :-)
  8. fit UV into 0-1

    Very easy :-) You can add:
    - a first Attribute Promote SOP from vertex/point (depending on where the UVs are stored) to Detail, renaming uv.x to MaxU and using the "Maximum" promotion method;
    - a second Attribute Promote SOP, same as above, renaming uv.x to MinU and using the "Minimum" promotion method;
    - a third Attribute Promote SOP from Detail back to vertex or point (cf. supra) on the attribute MaxU;
    - a fourth Attribute Promote SOP, same as above, with MinU.
    Do the same for v (uv.y), giving MaxV and MinV. Then use a VOP SOP or a Wrangle with a fit function on uv, remapping from MinU-MaxU to 0-1, and likewise with MinV-MaxV on the other dimension.
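    The final Wrangle step could look like this (a sketch assuming the promoted attributes were named MinU/MaxU/MinV/MaxV and copied back to the points or vertices, as described above):

    ```vex
    // Point or vertex Wrangle: remap uv into 0-1 using the
    // per-element copies of the promoted min/max attributes.
    @uv.x = fit(@uv.x, f@MinU, f@MaxU, 0, 1);
    @uv.y = fit(@uv.y, f@MinV, f@MaxV, 0, 1);
    ```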
  9. fit UV into 0-1

    Okay, then try the UV Transform SOP and use the scale until the UVs fit within the appropriate range. But if you need to match a specific UV layout (i.e. if you cannot eyeball the scaling so that it fits within the 0-1 boundaries), then consider transferring UVs from an object that has the target UVs to the object you are working on. If you just need them to fit within the range, eyeballing it, UV Transform is your friend.
  10. Use a portion of $HIPNAME in ROP

    You can try doing it in VEX: put $HIPNAME in a parameter, create a string variable initialized with chs("ref_to_your_param"), then use the split function (and other VEX string functions) on this variable to keep the parts you want, use sprintf to concatenate the full path you need, and put that into an attribute at detail level. Then create another parameter on your Wrangle, in which you use detail() to fetch the path, and reference that parameter on your Mantra node inside the image filename. Alternatively, in this case, I wonder whether you could simply use substr(): `substr($HIPNAME, 0, 15)`$OS_`substr($HIPNAME, 16, 2)`_$F4.exr Try it and let us know.
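    The VEX route described above might look like this in a Detail Wrangle (the parameter name, the underscore convention, and the attribute name are all illustrative, and the split assumes at least two underscore-separated parts):

    ```vex
    // Build a filename fragment from pieces of the hip name held in a
    // string parameter on this node, then stash it in a detail attribute
    // that the Mantra node can fetch with detail().
    string hipname = chs("hipname_param");   // parameter holding $HIPNAME
    string parts[] = split(hipname, "_");    // cut on underscores
    s@render_name  = sprintf("%s_%s", parts[0], parts[1]);
    ```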
  11. fit UV into 0-1

    I may not be understanding the question well, because telling you there is a wonderful node called UV Layout (in SOPs) that packs the UV islands into the (0,0)-(1,1) UV space seems too simple :-) What do you mean by fitting the UVs into 0-1 space?
  12. Tracking, Plates and Live action General advice

    The plate is the general name for the footage you will work on, which can be "matchmoved" / "tracked" / "3D tracked" (depending on what meaning you put behind those words), and onto which you may composite 3D elements, or just clean, key, and composite 2D elements etc. with a proper compositing software.

    Tracking: this can have different meanings, but they are linked. Usually, with a compositing software, you can "track" elements, also called features (of the image), or points. These may indeed be specific patterns printed and glued on set, but they can also be other zones of contrast in your image, called "features". Tracking just means selecting a pattern in your image and trying to follow it from one frame to the next, so its shape shouldn't change too much over time if you want a good track. The result of tracking is a bunch of points: one position per frame, describing the movement of your pattern.

    3D tracking or matchmoving: you can "matchmove" 2D elements (you translate a 2D element with the result of your track above, so that it "matches" the movement of the pattern you tracked, in 2D; hence the name "matchmove"). But sometimes you need to rebuild your scene in 3D: objects and the movement of the camera. For that purpose you will need to track (see above) a bunch of points on your plate: at least a handful of good tracks, but more often a ton of them (many hundreds) when the camera movements are more complex. I won't go into detail, as there are tons of matchmoving tutorials (on FXPHD especially, for example the ones on PFTrack).

    You can track manually (you usually do this when there are specific points / patterns / zones of contrast of interest in your scene that you want to get into your 3D space), and you can let the software find points to follow by itself, but you usually need to clean those manually. For example, "false" points: when two objects that do not have the same position in space appear to cross each other on screen, they give the computer a lovely zone of contrast to follow, which it "sees" as a feature that does NOT exist in space... Once those 2D tracks are obtained, you give the software as much information about the camera as possible, and then let it "solve" the camera and the points: the computer tries to place those points in 3D space, places a virtual camera in the scene, recomputes what that camera would see given its position, focal length, and camera gate and the positions of your points in space, and then compares those recomputed 2D positions with the initial tracks you fed it. It then iteratively adjusts the point and camera positions in space to minimize the difference between the recomputed 2D positions and the initial tracks. That is how, after long computation times and a long 2D tracking process of "features", you get a cloud of points and a camera path, with an estimated focal length and so on. From there, you can rebuild your scene (using your favorite modeling package), place nice 3D creatures / FX / objects, and render them before compositing them over your initial / cleaned plate. You can matchmove in NukeX (Nuke does not have the 3D tracking features, NukeX does), which is also the reference for compositing. But for complex matchmoves, I like to use 3DEqualizer or PFTrack. There are also SynthEyes and Boujou, which many people use...

    Of course, I have simplified: you may believe you have a good "solve", but you'll see that what should be "straight" or flat may be bent etc., due to lens distortion. So you usually need to "undistort" your plate first (using, or not, reference pictures of grids taken with the same lens and camera used to film the footage). Concerning the number of features to track in 2D to get a good 3D track: it depends on the length of your shot, the complexity of the camera movements, etc. The tracks should be as long as possible, covering many successive frames. You can have thousands of tracks that follow features for only 3 to 5 frames before dying, but this may not be sufficient for complex camera moves... That's why it must be thought through on set, before filming :-) I recommend the compositing courses of Eran Dinur on FXPHD (Nuke), and they have many, many other good ones on Nuke as well, plus the courses on PFTrack (still FXPHD); there is also History of VFX by Matt Leonard (FXPHD again) and "VFX Foundations" by Tahl Niran (FXPHD again, and this guy works at Weta); and finally this book, to better understand the global VFX process: The Filmmaker's Guide to Visual Effects: The Art and Techniques of VFX for Directors, Producers, Editors and Cinematographers. Hope that answers your questions a bit :-)
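    The "compare recomputed 2D positions with the initial tracks" idea can be sketched in VEX, assuming solved 3D points carry their original 2D track position in an attribute (the camera path and the v@track attribute are illustrative):

    ```vex
    // Point Wrangle: reprojection error of a solved 3D point.
    // toNDC() projects a world-space position through the given camera;
    // v@track holds the tracked 2D position (x, y in NDC space).
    vector ndc = toNDC("/obj/cam1", @P);
    f@reproj_error = length(set(ndc.x - @track.x, ndc.y - @track.y, 0));
    ```

    The solver adjusts camera and point positions to minimize the sum of these errors over all tracks and frames.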
  13. Plus button for dynamically adding parameters

    Hi, I think I would probably do it in Python, using PyQt, to get all the "dynamic" functionality you want (adding fields etc.), and then create a script that takes all this info and creates the VOP COP node with all the proper child nodes and parameters within it.
  14. Batch render objects

    Hey, I know I will look stubborn, but check this file; it should do what you want: it gives your image a name that contains the name of the object you are rendering (using splitpath). You'll have to adapt it, as I don't know how you stored the data, but the file is easy to understand. Just use the same Wrangle, and the Null whose purpose is to capture the name of your OBJ, in your Instance node. The parameter of the Null I added is then referenced in your Mantra node, in the name of the saved image. All of that before using the Wedge that controls everything. Hope that will solve your problem for good :-) wedgeRenderObj.hip
  15. Remove self intersections after negative PolyExtrude

    Hi, I would indeed have tried IsoOffset, converting to a VDB, dilating it negatively, converting back to polygons, and then booleaning the whole thing. Or subtracting VDBs and then converting to polygons. Unless you need lots of detail inside your hollowed geo, a proper division size for your volume should not take too long to compute... maybe it was set to too small a division size?