old school

old school last won the day on November 13 2017

Community Reputation

518 Excellent

About old school

  • Rank
    Grand Master
  • Birthday 05/29/1963

Personal Information

  • Name
    Jeff Wag
  • Location
    The Great White North
  • Interests
    Houdini, maple syrup, building cool stuff out of crap, guitar, skiing (water and snow), my kids
  1. That's a genuinely tough condition for procedural tools to fill. I used a Curve SOP and then mirrored it a couple of times to fill in the holes. Those four holes will also create a polygon that is concave: not a good primitive to render with many engines, or to pass on to a game engine, as triangulating that face may cause overlaps. PolyFill sees the input as degenerate, which means there are one or more cases where things are ambiguous. Consider that there are six open conditions for it to fill: the four holes, the perimeter of the frame, and the perimeter of the extruded oval.

     Instead, I reworked your file to be more procedural. The original grid and circle are properly configured and skinned, and then there are three methods to create the extrusion: your existing PolyExtrude SOPs (a fussy approach), a single PolyExtrude using its ramp lookup for the extrusion, or a Sweep SOP sweeping a profile curve to create the profile. You can change the window outline, the padding from the circle to the perimeter, and anything else you want to drive procedurally. What's also nice is that the topology is rock solid and you can create an infinite number of variations. Begging to be turned into an HDA. manifold_geo_window_frame.hip
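The concavity issue called out above is easy to test for outside Houdini. A minimal sketch in Python (is_convex is my own helper, not a Houdini API): a simple 2D polygon is convex exactly when the cross products of every pair of consecutive edges share one sign.

```python
def is_convex(poly):
    """Return True if the 2D polygon (ordered list of (x, y) vertices)
    is convex: cross products of consecutive edge vectors all share
    the same sign (zeros from collinear points are ignored)."""
    n = len(poly)
    signs = set()
    for i in range(n):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % n]
        cx, cy = poly[(i + 2) % n]
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross != 0:
            signs.add(cross > 0)
    return len(signs) <= 1

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
arrow = [(0, 0), (2, 0), (1, 0.5), (2, 1), (0, 1)]  # concave notch on the right
print(is_convex(square))  # True
print(is_convex(arrow))   # False
```

A face that fails this test is exactly the kind of primitive that can triangulate with overlaps in a game engine.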
  2. pop* will bring in any DOP objects whose names begin with "pop". The default name for a particle system built from the POP SOP is popobject, so pop* will fetch that object along with any other simulation objects that begin with pop.

     The DOP Import SOP imports simulation objects; more specifically, any Geometry sub-data bound to those simulation objects. If you open a Spreadsheet on any DOP, you will see a listing of all the simulation objects. Opening any simulation object shows its list of sub-data, of which Geometry is one; Geometry contains all the points, polygons and any other geometry for that object. The DOP Import SOP actually has two slots. One is for the name of the simulation object(s) to bring in. A second field, usually blank, is used to import specific sub-data; blank defaults to Geometry. You could use this to bring in named fields/volumes on that simulation object as well. Any sub-data on the simulation object that is of type geometry can be imported back into SOPs.
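The pop* pattern is plain glob-style matching on object names. A rough Python illustration using fnmatch (the object names here are hypothetical, and Houdini uses its own matcher, but the idea is the same):

```python
from fnmatch import fnmatch

# Hypothetical simulation object names as they might appear in a DOP network.
objects = ["popobject", "popsolver1_debris", "rbdobject", "pyro"]

# "pop*" keeps every object whose name begins with "pop".
matched = [name for name in objects if fnmatch(name, "pop*")]
print(matched)  # ['popobject', 'popsolver1_debris']
```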
  3. Why do presenters almost ALWAYS use a Mac?

    As a Houdini presenter who has used MacBook Pros for the last 10 years, it has everything to do with video support, ruggedness, thin/light weight, and direct access to key tools such as Keynote, going back to the very first MacBook Pro released. All of that, up until now, made up for a less than ideal viewport experience with Houdini. Since I love to use Houdini live in many presentations, and with Apple moving to Metal and the Houdini viewport experience not keeping up with Windows and Linux, things have now changed for me personally. I am switching back to PC laptops and Windows 10, because higher-end PCs have come a very long way and most have Thunderbolt and DVI out for easy setup with presentation monitors and projectors. Windows 10 is what it is, but it is far more manageable. I use Cygwin for shells, so no issue there. Plus I want to run other tools for games support on the road: Unreal, Unity, Substance, Redshift, etc. Unfortunately MS still believes that MS Office has value beyond the cost of the OS, so you do have to pay, and it is a subscription now. Why, MS? Why in this day and age? PowerPoint is not Keynote, but it too has come a long way. Houdini performance is better and supports all viewport features with a decent Nvidia card, and with the Mac you have little choice here.

    Why do some Houdini presenters put down a Geometry object, dive in, blast the File SOP and start working? I don't know. I call this the seven deadly clicks, and it is the main reason we added shelf tools. Familiarity, I guess. When presenting, I use shelf tools to add primitives and then dive in, as I try to avoid needless complexity in presentations. On my own, it's a toss-up as to what I do.

    I do not have a moustache, nor do I see myself growing one in the near future, but nothing wrong with that. Had a moustache all through high school and never once was I carded, if that means anything.
  4. This is H16. To add displacements you have to build this into the core shaders, as core shaders only do surface shading. If you want to add a texture displacement, use the Displace Texture VOP (displacetexture). To build a single non-mixing shader, you add two Output VOPs, one of type surface and one of type displacement, after the skinshadercore and displacetexture. Add a Collect VOP and wire in the two outputs. You also have to add a Properties VOP. RMB (right mouse button) on the Properties VOP and choose Edit Render Properties. In the dialog that pops up, type "displace" in the search field at the bottom of the render property list, then under Mantra > Render choose all the render properties as in the snapshot image. See the attached Houdini scene file for a working skinshadercore with displacements in /mat. skinshader_with_displace.hiplc
  5. Geometry cutting methods (question - cookie sop)

    Wait a couple of weeks... H16 will have an answer for you that still blows me away every day I use it. In the meantime, yes, Cookie needs a lot of love to get results in some cases. Always try the Jitter parameter first. Cookie also has issues when the geometry is too small (less than 0.01 units) or too large (greater than 100 units), so scale into the sweet spot of 1 to 10 overall units. It also doesn't like coincident surfaces, hence Jitter. Always put the more complicated geometry into the left (first) input. Really try not to slice up manifold left-input geometry with an open grid plane; instead use a box to cut the geometry, using only one of the box's faces as the cutter. Always make sure the primitive normals are facing outward (point normals simply do not matter here, but do check the primitive normals), or use a Reverse SOP to fix them. We gave up on Cookie when building the Shatter tool as it was unpredictable; the Shatter tool uses recursive Clip SOPs to cut up the geometry reliably.
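The 1-to-10-unit sweet spot above suggests a simple pre-scale step: scale up, cut, then invert. A hedged Python sketch (cookie_safe_scale is my own helper name, not a Houdini function) of computing that uniform scale from a bounding-box size:

```python
def cookie_safe_scale(bbox_size, target=5.0):
    """Given a bounding-box size (sx, sy, sz), return a uniform scale
    factor that brings the largest dimension to `target` units --
    inside the 1-10 unit range where Cookie behaves best. Apply the
    scale, do the boolean cut, then apply the inverse afterwards."""
    largest = max(bbox_size)
    if largest == 0:
        raise ValueError("degenerate bounding box")
    return target / largest

# A tiny model, roughly 4 mm across its largest axis:
s = cookie_safe_scale((0.004, 0.002, 0.001))
print(s)        # scale factor to apply before the cut
print(1.0 / s)  # inverse scale to restore the original size
```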
  6. If you are eroding a surface VDB, you can use VDB Activate SDF to activate a wider band of voxels to erode into. You can also use the more involved VDB Activate SOP, which has far more options for activating regions of the VDB grid, including a second geometry input that acts as a mask for active regions. VDBs are efficient because only a thin band of data exists around the limit surface in SDFs, or around the region where the values vary between constants (density, usually 0 to 1) in fog volumes. The thin band defaults to 3 voxels either side of the zero value for SDFs, and surrounds the 0-1 varying band in density VDB volumes: see the Exterior Band Voxels and Interior Band Voxels parameters, exposed on the VDB from Polygons SOP. Please use the VDB Visualize SOP to see the active band; turn on the Active Voxels toggle and set it to Points to see where the active voxels are. IMHO you simply cannot sculpt VDBs without using this SOP somewhere in the chain to avoid this issue. You can't push data into a region of the VDB grid that has inactive voxels: you will get no result, which is what you are experiencing.
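To see why writes outside the active band appear to do nothing, here is a toy 1D sketch in Python. This is not OpenVDB's actual implementation, just the idea: a 3-voxel active band around the zero crossing, where edits only land on active voxels.

```python
VOXEL = 1.0
BAND = 3  # exterior/interior band width in voxels (the VDB default)

# 1D signed-distance samples to a "surface" at x = 10 on a 21-voxel grid.
sdf = [x - 10.0 for x in range(21)]
active = [abs(d) <= BAND * VOXEL for d in sdf]

def erode(sdf, active, amount):
    # Erosion only changes ACTIVE voxels; writes to inactive voxels
    # are lost, which is why eroding past the band "does nothing".
    return [d + amount if a else d for d, a in zip(sdf, active)]

eroded = erode(sdf, active, 2.0)
print(eroded[10])  # voxel at the old surface: shifted by the erosion
print(eroded[0])   # far outside the band: untouched
```

Widening the band first (the VDB Activate step above) is the 1D analogue of raising BAND before eroding.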
  7. No need to hack the shader and remove the displacement links; leave the displacement and normal wires in place. Just add the Mantra property True Displacements (vm_truedisplace) to the Properties VOP inside your shader using Edit Parameter Interface > Edit Rendering Parameters. This adds the True Displacements toggle: turn it off and the geometry will not be diced and displaced, only surface normals will be calculated. There is another interesting property you may want to look into as well: Add Bump To Ray Traced Displacements, which will do both displacement and bump when the threshold is met. This one is critical when reworking large scenes as archives using Material Style Sheets. You can control this render property right at the end of the pipeline, without touching your shader, by using material style sheets and targets with property overrides. You could even put a wrangle in the stylesheet that sets the property based on distance from camera.
  8. primitive selection using group

    Depends on what you want to do. If you need to work with the faces in SOPs, then yes, the Unpack SOP is what you need; only unpack what you need. There is one way to select faces on deferred primitive types (packed primitives, Alembic archives), and that is the Style Sheet Data Tree interface. You can use the style target option Selection from Viewport to select faces buried deep in any archive and apply shader/material and Mantra property overrides. Viewport selections can be any group or @attribute-style group selection; all selection options are available within the Style Sheet selection target tool.
  9. The number one reason to work with shells is that each shell represents its own environment. Each environment can be set up for a separate project, shot, scene, WIP test of an HDA asset, a different version of Houdini per shell, whatever. Most users in production need only concern themselves with a single task, which means launching Houdini with a single environment. The GUI works fine for that, as do a whole host of other tools: Python GUI launchers, launching through production-management software, double-clicking a .hip file. But as you get more and more things on the go, shells start to make a lot of sense. Each shell can be configured with different environment variables to point at any build of Houdini with any project environment. Another reason to look at shells as a power user is that all the launch scripts for Houdini are either csh or bash scripts, no matter how you launch Houdini; those shell scripts are what configure the Houdini environment. Convenience is a huge factor here.

     Using shells isn't for everyone. Actually, 99% of users in production don't need shells to do what they want; there are so many other ways to work. But if you aspire to be a full-on Houdini user in a demanding production environment and want complete control, there is no equivalent to using shells. It's truly liberating for power users, and it's easy to get into: Houdini ships with its own shell wrapper that you can double-click in Windows/macOS, which installs the Houdini environment. Then you need only concern yourself with a handful of commands: ls (directory listing), cd (change directory), cp (copy), mv (move), houdini, mplay. Shells are available in all OSes. Linux and macOS have Unix underpinnings, where shells are how you interface with the OS. Windows has its own shells, which are getting better, or you can install Cygwin, a third-party open-source shell environment for Windows.

     It's also interesting to see that Microsoft, after all these years, seems to be tipping its hand toward more Unix/Linux tools and collaborations. There is even rumour of a proper shell running tcsh and bash in the next release, but we'll see... As for what type of shell to use, there are quite a few, with the two most common in the CG industry being csh/tcsh (SGI roots showing) and bash, the Linux default shell. There are old-school users who still cling to csh or tcsh, but I made the switch to bash way back when Houdini was ported to Linux, and I believe most Houdini users have moved over to bash as well. New shell adopters gravitate toward bash: why fight the default? Hope this helps.
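As a concrete sketch of the per-shell-environment idea above, here is a minimal bash wrapper. All paths are examples only; adjust HFS and JOB for your own install and project.

```shell
#!/bin/bash
# Per-project Houdini launch wrapper (bash). One copy of this script
# per project/shot gives each shell its own environment.
export HFS="/opt/hfs16.0"          # which Houdini build this shell uses (example path)
export JOB="$HOME/projects/shotA"  # project root for this shell (example path)
export PATH="$HFS/bin:$PATH"       # put that build's binaries first

echo "Houdini build: $HFS"
echo "Project:       $JOB"
# houdini "$JOB/scenes/latest.hip"   # uncomment to launch when ready
```

Two copies of this script with different HFS values is all it takes to run two Houdini builds side by side.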
  10. Oh yeah, this is an asset, so things are deprecated a bit differently. In hscript, if you cd into a SOP-type directory and type opadd, you will see both versions of particlefluidsurface listed: particlefluidsurface and particlefluidsurface::2.0. Use -e to add the first version by its explicit name: opadd -e particlefluidsurface. This adds the original Particle Fluid Surface SOP into the current SOP network, which means you can have two different versions of an asset in your scene. Just MMB on the two nodes to verify that each asset points at the correct definition.
  11. Using Matrix to transform Geometry to origin

    First off, amazing example F1!!! Very nice lattice deformer. There are many ways to do this, where you take the centroid of the input geometry, move it to the origin, do your work, then move it back. Using VOPs makes it artist friendly and re-usable; using Wrangles suits those who like VEX. See the example file for both set-ups. In the set-up, I do the easy thing and use the centroid() hscript expression function to fetch the centroid of the input grouped geometry into a declared vector parameter. If you don't want to use the centroid() function (or $CEX, $CEY and $CEZ in the Transform SOP), you can use this VEX snippet, adapted from the Expression Cookbook in the help (http://www.sidefx.com/docs/houdini15.0/ref/expression_cookbook):

        v@min = {0.0, 0.0, 0.0};
        v@max = {0.0, 0.0, 0.0};
        getpointbbox(0, v@min, v@max);
        v@cent = (v@max + v@min) / 2.0;  // use v@cent.x, v@cent.y, v@cent.z

    inverse_transform_example.hip
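The same bounding-box-centroid trick translates to a few lines of plain Python (a standalone sketch, not the hou module; bbox_centroid is my own helper mirroring VEX getpointbbox plus the averaging step):

```python
def bbox_centroid(points):
    """Centroid of the bounding box of a list of (x, y, z) points,
    matching the VEX recipe: cent = (min + max) / 2 per axis."""
    mins = [min(p[i] for p in points) for i in range(3)]
    maxs = [max(p[i] for p in points) for i in range(3)]
    return [(lo + hi) / 2.0 for lo, hi in zip(mins, maxs)]

pts = [(2, 0, 0), (4, 0, 0), (4, 2, 0), (2, 2, 0)]
cent = bbox_centroid(pts)
print(cent)  # [3.0, 1.0, 0.0]

# Move to origin, do work there, then apply the inverse to move back:
at_origin = [tuple(c - o for c, o in zip(p, cent)) for p in pts]
restored = [tuple(c + o for c, o in zip(p, cent)) for p in at_origin]
assert restored == pts  # round trip is exact for these coordinates
```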
  12. Look at the help for the Scatter SOP: http://www.sidefx.com/docs/houdini15.0/nodes/sop/scatter The Tips section right at the top describes three different scenarios for scattering fixed point positions on deforming geometry, including an example file. It's very fast, and you only evaluate the Scatter SOP at the frame you set in the Time Shift SOP. By the way, you rarely want to lock operators to get a rest frame from an input deforming archive; always use a Time Shift SOP, and nine times out of ten you are setting it to frame 1 or the start frame of the sequence, which is $FSTART (in case you are dealing with simulations that start at frame 1001, for example). See the attached example file for scattering on deforming edges: it takes in deforming unpacked geometry (from your Alembic archive), uses an Ends SOP set to Unroll, then scatters and sticks with the Attribute Transfer SOP using the recipe indicated in the help. stick_scatter_points_on_deforming_geometry.hip
  13. It's still there, but hidden. In an hscript Textport, type opunhide and you will see a long list of hidden operators. Then run opunhide again with your operator of choice to unhide it for the given scene file, and you can add the operator to your scene. Are they supported? Well, yes, but there is a reason they are hidden: they have been superseded but not removed, so users like yourself can still access them, and scene files from older versions still work.
  14. VEX docs

    The confusion lies in the fact that you are using the random() variadic form vector random(vector position). @P is an internally defined vector-type variable, so random(@P) returns a vector. Variadic functions are used for a great many VEX functions, where the type of the declared variable passed as an input parameter determines which form of the function is called and what the return type is. When we create our own VOP HDAs, you can add variadic forms by using "signatures" and code references in the body of the function. https://en.wikipedia.org/wiki/Variadic_function The random() function has a great many variadic forms; have a look at the help here: http://www.sidefx.com/docs/houdini15.0/vex/functions/random This is why you use the help when you are learning to write VEX. Nothing is stopping us from sneaking in a couple of extra variadic types between releases... Again, ALWAYS USE THE HELP when writing VEX. Even old-school VEX peeps ALWAYS USE THE HELP.
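VEX signature dispatch has no direct Python equivalent, but functools.singledispatch gives a rough analogy of picking behaviour from the argument's type. This is purely illustrative (describe is a made-up function, and a tuple stands in for a VEX vector):

```python
from functools import singledispatch

@singledispatch
def describe(value):
    # Fallback when no registered type matches.
    return "unknown"

@describe.register(float)
def _(value):
    # Analogue of the float random(float) signature.
    return "float signature"

@describe.register(tuple)
def _(value):
    # Analogue of the vector random(vector) signature.
    return "vector signature"

print(describe(0.5))        # float signature
print(describe((1, 2, 3)))  # vector signature
```

Just as random(@P) silently selects the vector form, describe picks its body from the first argument's type, which is why checking the documented signatures matters.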
  15. Use stamping with the Copy SOP when you want to create new geometry for every template point, for example with an upstream Switch SOP, or when you are cracking geometry up differently for every copy. Also use stamping when you want to evaluate other remote network nodes to provide data for attributes; one example is building up complex material shading strings (though I haven't done that since style sheets in H15 started supporting CVEX routines). Don't use stamping in the Copy SOP if you only want to manipulate or orient the incoming geometry: there is a whole host of supported point attributes for scaling/orienting/transforming the copied geometry. You can also use local variables in the Copy SOP to do in-place scaling. Look at $PT (point number) and $ID (particle ID) to add randomness to your copies: rand($PT + 1.001) in the Scale X parameter will randomly scale the inputs. See the Copy SOP help examples. Do I recommend Copy stamping in H15 and beyond? Nope. Use the new For-Each SOPs instead: much faster, and no diving into subnets as with the older For Each SOP stamping methods. And for render-property and shading manipulation, use style sheets and CVEX shaders.
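The rand($PT + 1.001) idiom works because it is deterministic per point number: the same point always gets the same value, every cook. A small Python sketch of the same idea (copy_scale is my own helper; Houdini's rand() is a different generator, so the actual numbers differ):

```python
import random

def copy_scale(pt, seed_offset=1.001):
    """Deterministic per-copy random scale, in the spirit of the
    Copy SOP expression rand($PT + 1.001): the same point number
    always yields the same value, so copies don't pop between cooks."""
    rng = random.Random(pt + seed_offset)
    return rng.uniform(0.5, 1.5)

scales = [copy_scale(pt) for pt in range(4)]
# Re-evaluating gives identical results -- stable, like the expression:
assert scales == [copy_scale(pt) for pt in range(4)]
print(len(scales))  # 4
```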