
Leaderboard


Popular Content

Showing most liked content on 01/03/2015 in all areas

  1. 1 point
    Recently I discovered how to avoid Makefiles for projects that require library linking and lots of .cpp and header files. It is just a simple solution: my main.cpp file contains nothing but

    #include "SOP_mySop.cpp"
    #include "file01.cpp"
    #include "file02.cpp"
    #include "file03.cpp"
    #include "file04.cpp"

    Now just execute hcustom main.cpp and everything is fine.
  2. 1 point
    I promised that I would publish some source files, and here they are. Inside you can find some of the networks from the demo video, from pre-work to render. All assets are unlocked (I used that for git); don't pay attention to that. Happy X-mas. Tree_generator_demoscene_unlocked.hipnc
  3. 1 point
    Hi. It seems that everyone makes sand solvers these days. This is a demonstration of a custom sand solver I have developed for Melnitsa Animation. It is an erosion-based solver, mostly implemented with VEX. Part of the collision algorithm is written in C++. There are also VEX and C++ implementations of the erosion nodes; VEX is faster but C++ is more accurate. It can't produce the small, nice details that the SideFX PBD solver in Houdini 14 can, but it is faster and works in certain situations.
    Pros:
    - Fast
    - Stable
    - Produces predictable results
    Cons:
    - Not physically correct
    - Not all types of concave collision objects are supported
    - It is not an all-purpose solver
  4. 1 point
    Here is a demonstration of a bicycle tool I made a couple of months ago as a school assignment. -Bram
  5. 1 point
    Thank you Andrew. I already have motion blur on it; maybe I need to increase it, and I will add the overall blur as well.
  6. 1 point
    There is no mystery as to how Houdini works. Anything that gets done in Houdini can be expressed by a node. Whether that node is a compiled C++ operator, an operator written in VEX (or built from VOP nodes representing VEX functions), a Python operator or a Houdini Digital Asset (HDA), each node does its own bit and then caches its result. There is no lower level than nodes. The nodes in Houdini are the lowest-level atomic routine/function/program.

A SOP node, for example, takes incoming geometry and processes it all by itself, then caches its result, which you can see in the viewport, with an MMB click on the node for its stats, and in the Details View for the specific attribute values. If this is a modifier SOP, it will have a dependency on its input node. If there is an upstream change, the current node will be forced to evaluate. If there is a parameter reference to another node, and that other node is marked "dirty" and affects this node, this node will also be forced to evaluate.

To generalize the cooking structure of a SOP network: for every cook (frame change, parm change, etc.), the network starts at the Display/Render node and walks up the chain looking for nodes with changes, evaluating the dependencies for each node and querying those nodes for changes as well, until it hits the top nodes. The nodes marked dirty then cause the network to evaluate top down, cooking the dirty nodes along with the dependencies that were found. You can set a few options in the Performance Monitor to work in the older H11 way and see this evaluation tree order if you wish. It is "mandatory" that you do this if you want a deeper understanding of Houdini. You definitely need to use the Performance Monitor if you want to see how the networks have evaluated, as that is based on creation order along with the set-up dependencies. Yes, deleting and undeleting an object can and will change this evaluation order, and can sometimes get you out of a spot with crashing. If you haven't used the Performance Monitor pane, then there you go. Use it. Just remember to turn it off, as it does have an overhead performance-wise.

Another key is to use the middle mouse button (MMB) on any and all nodes to see what they have cached from the last cook evaluation: memory usage, attributes currently stored, etc. The MMB wheel on my mouse is as worn in as the LMB because I use it so much. You can see whether the node is marked as time dependent or not, which affects how it evaluates and how it affects its dependent nodes. You can RMB on the node and open up the Dependency view for that operator, which will list all references and dependencies. You can also hit the "d" key over the network and, in the display options under the Dependency tab, enable the various dependency aids (links and halos) to see the dependencies in the network.

Houdini is a file system: in memory, and on disk in the .hip "cpio" archive file. If you want, you can use a shell and, given any .hip file, run the hexpand shell command on it. This will expand the Houdini file into a directory structure that you can read and edit if you so wish. Then wrap it back up with hcollapse. If you really want to see how Houdini works at a low level, this is how it all ends up, and how it all starts: it is just hscript Houdini commands that construct the nodes, including the folder nodes themselves.
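To make that last point concrete, here is a minimal sketch (run in a Houdini Python Shell or hython; the node and parameter names are just placeholders, not anything from the post above) that builds a tiny SOP chain, asks Houdini for the script that would recreate a node, and inspects its dependencies:

    import hou

    obj = hou.node("/obj")
    geo = obj.createNode("geo", "demo_geo")   # placeholder names
    box = geo.createNode("box")
    xform = geo.createNode("xform")
    xform.setFirstInput(box)                  # wire box -> xform
    xform.parm("rx").set(45)

    # The scene really is just scripts: asCode() returns the Python that would
    # rebuild this node, parameters and wiring included (the hscript opscript
    # command does the same thing in textport form).
    print(xform.asCode())

    # Dependency and cook-state introspection, as discussed above.
    print(xform.inputs())                     # upstream nodes this SOP depends on
    print(xform.isTimeDependent())            # will it re-cook on a frame change?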
Each node is captured as three distinct files: the file that adds the node and wires it up to other nodes, the parameter file that sets the node's parameters, and another file that captures additional info on the node. If you locked a SOP, that binary information will be captured as a fourth file for that node. It is for this reason that .hip files are very small, unless you start locking SOPs, which is not wise. Better to cache to disk than lock, but nothing is stopping you. When you open up a .hip file, all the nodes are added, wired, parameters modified and nodes cooked/evaluated.

There are different types of node networks, and nodes of a specific type can only be worked on in specific directory node types. This forces you to bop all over the place, especially if you still willingly choose to use the Build desktop, which I do not prefer. You have to have a tree view up somewhere in the interface to see how the network lays out as you work. It is also very handy for navigating your scene quickly. The Technical Desktop is a good place to start when working on anyone's file, as there is a tree view and a few other panes such as the Details View, Render Scheduler and more. If you want to use the Technical Desktop and follow a video done with the Build desktop, simply swap the Network pane with the Parameter pane and the right-hand side is now the same as Build, but you can still follow the tree view and see where and when other nodes are dropped down.

A new Houdini file is an unread book, full of interesting ideas. Using a desktop that exposes a tree view pane, you can see what the user has been up to in a couple of seconds. Again, use the Technical Desktop as a start if you are still using Build (if you know me, you will know I will force you to have a tree view up). You can quickly traverse the scene and inspect the networks. If that isn't enough, you can pop open the Performance Monitor and see which nodes are doing the most work. You really don't need any videos, ultimately just the .hip file. It helps if the scene is commented and nodes are named based on intent.

Let's stick to SOPs. In Houdini, attributes are an intrinsic part of the geometry that is cached by each SOP, not some separate entity that needs to be managed. That is what makes SOPs so elegant. The wire between two SOPs is the geometry being piped from one SOP to the next, attributes and all. It is not a link per attribute (which in other software can be a geometry attribute, parameter attribute, etc.). This makes throwing huge amounts of geometry with lots of attributes around a breeze in Houdini. All SOPs will try their best to deal with the attributes accordingly (some better than others, and for those others, please submit RFEs or bugs to Side Effects to see if there is something that can be done).

You can create additional geometry attributes by using specific SOPs:
- The Point SOP creates "standard" point attributes
- The Vertex SOP creates "standard" vertex attributes
- The Primitive SOP creates "standard" primitive attributes
- Use the Attribute Create SOP to create ad-hoc attributes of varying classes (float, vector, etc.) of type point, vertex, primitive or detail
- Use VEX/VOPs to create standard and ad-hoc point attributes
- Use Python SOPs to create any standard or ad-hoc geometry attributes (a short Python SOP sketch appears a little further below, after the point/vertex discussion)

One clarification that must be made is the distinction between a "point" and a "vertex" attribute in Houdini. There is other software that uses the term vertex to mean either point attributes or prim/vertex attributes.
Games have latched on to that usage, making the confusion even deeper, but in Houdini the two are not the same thing, and you need to make the distinction between a point and a vertex attribute very early on. A point attribute is the lowest-level attribute any data type can have. For example, the vector4 P position (position plus weight for NURBS) is a point attribute that locates a point in space. If you want, that is all you need: points, no primitives whatsoever; then instance stuff to them at render time. You can assign any attribute you want to that point.

To construct a primitive, you need a point for the primitive's vertices to reference as a location and weight. In the case of a polygon, the polygon's vertices index points. You can see this in the Details View when inspecting vertex attributes: the vertex number is indicated as <primitive_number>:<vertex_number>, and the first column is the Point Num, which shows you which point each vertex is referencing for its P position and weight. Obviously you can have multiple vertices referencing a single point, and this is what gives you smooth shading by default with no vertex normals (the point normals are used and automatically averaged across the vertices sharing that point).

In the case of, say, a primitive sphere, there is a single point in space, then a primitive of type sphere with a single vertex that references that point position to locate the sphere. Then there is intrinsic data on the sphere (soon to be made available in the next major release) where you can see the various properties of that sphere, such as its bounds (from which you can extrapolate the diameter), area, volume, etc. Other primitive types that have a single point and vertex are volume primitives, metaball primitives, VDB grid primitives, Alembic Archive primitives, etc.

How does a Transform SOP, for example, know how to transform a primitive sphere as opposed to a polygonal sphere? The answer is that it has been programmed to deal with primitive spheres in a way that is consistent with any polygon geometry. The same goes for volumes: it has been programmed to deal with volumes to give the end user the desired result. This means that all properly coded SOPs will handle any and all primitive types in a consistent fashion. Some SOPs are meant only for parametric surfaces (Basis SOP, Refine SOP, Carve SOP, etc.) and others for polygons (PolySplit, etc.), but for the most part, the majority of SOPs can work with all primitive types.

What about attributes? The Carve SOP, for example, can cut any incoming polygon geometry at any given plane. It will properly bilinearly interpolate all attributes present on the incoming geometry and cache the result. It is this automatic behaviour for any and all point, vertex, primitive and detail attributes that makes working with SOPs a breeze.

How does Houdini know what to do with attributes when position P, velocity v and surface normal N need to be handled differently? When performing, say, a rotate with a Transform SOP, and the incoming geometry has surface normals N, velocity vectors v, and a position cache "rest", each attribute will be treated correctly (well, N because it is a known default attribute, but for user-defined attributes you can specify a "hint" that tells the vector to be treated as a plain vector, a 3-float position, or a surface normal). It is this automatic behaviour with attributes, and the fact that you don't need to manage attributes, that makes using SOPs so easy and very powerful without having to resort to code.
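Here is the Python SOP sketch referenced in the attribute list above: a rough, minimal example (the attribute names "mass", "uv" and "note" are just made-up examples) of creating point, vertex and detail attributes in the code of a Python SOP:

    # Inside a Python SOP: hou.pwd() is the node itself, geometry() is the geo it is cooking.
    node = hou.pwd()
    geo = node.geometry()

    # Ad-hoc attributes of different classes.
    mass = geo.addAttrib(hou.attribType.Point, "mass", 1.0)           # point class
    uv = geo.addAttrib(hou.attribType.Vertex, "uv", (0.0, 0.0, 0.0))  # vertex class
    geo.addAttrib(hou.attribType.Global, "note", "")                  # detail class

    for point in geo.points():
        # One value per point, shared by every vertex that references this point.
        point.setAttribValue(mass, point.position().length())

    for prim in geo.prims():
        for vertex in prim.vertices():
            # One value per vertex, so vertices sharing a point can still differ.
            vertex.setAttribValue(uv, (vertex.number() * 0.1, 0.0, 0.0))

    geo.setGlobalAttribValue("note", "created in a Python SOP")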
Remember that each SOP is a small program unto itself. It has its own behaviours, its own local variables (if it supports varying attributes in its code logic), its own parameters, and its own way of dealing with different primitive types (polygons, NURBS, Beziers, volumes, VDB grids, metaballs, etc.). If you treat each SOP as its own plug-in program, you will be on the right path. Each SOP has its own help card which, if it is authored correctly, will explain what this plug-in does, what the parameters do, what local variables are available (if any), which other nodes are related to this node, and finally example files that you can load into the current scene or another scene. Many hard-core Houdini users picked things up just by trawling the help example files, and this is a valid way to learn Houdini, as each node is what does the work. If we were to lock geometry into the help cards, the Houdini download would be in the gigabytes, so nodes are all that is in the help cards, and nodes are what you need to learn.

I'm not going to touch DOPs right now, as that is a different type of environment, purpose-built for simulation work. Invariably a DOP network ends up being referenced by a SOP to fetch the geometry, so in the end it is just geometry, which means SOPs.

Shelf tools are where it's at, but I hear you. Yes, there is nothing like being able to wire up a bunch of nodes in various networks and reference them all up. Do that for a scratch FLIP simulation once or twice, fine. Do that umpteen times a week, and that is where the Shelf Tools and HDAs make life quite simple. But don't be dismayed by Shelf Tools. All of those tools are simply executing scripts that place and wire operators together and set up parameter values for you (you can read any tool's script, as sketched below). It is no different than when you save out a Houdini .hip scene file. If you are uber-hard-core, then you don't even save .hip files: you wire everything from scratch, every time, each time a bit different, evolving, learning. So by the shelf tool logic you find so objectionable, if you open up an existing .hip scene file, you are also cheating. It reminds me of the woodworker argument as to what is hand built and what isn't. I say if you use anything other than your teeth and fingernails to work the wood, you are in essence cheating, but we don't do that. Woodworkers put metal or glass against wood because fingernails take too long to grow back and teeth are damaged forever when chipped. And I digress...

Contrast that with power users in other apps who clutch their code with bare white knuckles, always in fear of the next release rendering parts of their routines obsolete. With nodes, you have a type name and parameter names. If they don't change from build to build, they will load just fine. I can load files from before there were .hip files, when they were called .mot (from Sage, for those that care to remember), from 1995. They still load, well, with a few meaningless errors, but they still load. A Point SOP is a Point SOP and a Copy SOP is a Copy SOP. No fear of things becoming obsolete. Just type the "ophide" command in the Houdini textport and you will still find the Limb and Arm SOPs (wtf?). LOL!

First thing I do every morning? Download the latest build(s). Read the build journal changes. If there is something interesting in that build, work up something from scratch. Then read the forums, time permitting, and answer questions from scratch if I can. All in the name of practice.
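Going back to the shelf tools for a moment, here is a hedged sketch of how you might read a tool's script from Python (assuming the hou.shelves API behaves this way in your build; "box" is just an example tool name):

    import hou

    # Every shelf tool is a named script; look one up and print what it runs.
    tools = hou.shelves.tools()          # dict of internal tool name -> hou.Tool
    tool = tools.get("box")              # "box" is just an example name
    if tool is not None:
        print(tool.language())           # the script language the tool uses
        print(tool.script())             # the script that places and wires the nodes
    else:
        print("No tool named 'box'; some available names:", sorted(tools)[:10])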
Remember from above that a .hip file is simply a collection of script files in a folder system saved on disk. A Houdini HDA is the same thing. A shelf tool, again, is the same thing: a script that adds and wires nodes and changes parameters. It is not pounding out a bunch of geometry and saving the result in a shape node, never to know the recipe that got you there. To help users sort out what created which node, you can use the "N" hotkey in any network, and that will toggle the node names between the default label, the tool that added the node, and finally nothing. Hitting "N" several times while inspecting a network will cycle the names about. That, and turning on the dependency options in the network, will help you see just what each shelf tool did to your scene. Knowing all this, you can now trawl through the scene and see what the various shelf tools did to it.

If you like to dig even deeper, you can use the Houdini textport pane and the opcf (aliased to cd), opls (aliased to ls), and oppwf (aliased to oppwd and pwd) commands to navigate the Houdini scene via the textport as you would in a unix shell. One exercise I like to show those more interested in understanding how Houdini works is to cd to, say, /obj and then run an opls -al command to see all the nodes in a long listing. You will see stats very similar to those found in a shell listing of files, or when you RMB on any disk file and inspect its info or state. Remember, Houdini "IS" a file system, with additional elaborate dependencies all sorted out for you. There are user/group/other permissions. Yes, you can use opchmod (not aliased to chmod, but easily done with the hscript alias command) to change the permissions on nodes: opchmod 000 * will remove read/write/execute permissions on all the nodes in the current directory, and guess what? The parameters are no longer available for tweaking. Just remember to either tell your victim or fix it for them, or you may be out of a job yourself. opchmod 777 * gives the permissions back, and an opls -al will verify this (a small Python sketch of this exercise follows at the end of this post). Now you know what our licensing does to node states: a node can be set to read and execute only, and removing the write permission from any DOP or POP node is what gives you a Houdini license, while a Houdini FX license enables the write permission on all nodes in all networks.

Also knowing this, the .hip file truly is a book with a lot of history, along with various ways of inspecting who created what node and when, what tool was used to create this node, what dependencies are on this node, whether it is time dependent, and more, all with a quick inspection. After all this, learning Houdini simply becomes learning each node in turn, and practice, practice, practice. Oh, and if you haven't figured it out by now, many nodes have a very rich history (some older than 30 years now) and can do multiple things, so suck it up, read the node help cards, study the example files and move forward. The more nodes you master, the more you can see potential pathways of nodes and possibilities in your mind, the faster you work, and the better you are. The more you do this, the more efficient your choices will become. The learning curve is endless and boundless. All visual. All WYSIWYG.
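A minimal sketch of that textport exercise, driven from Python via hou.hscript(), which returns an (output, error) pair; /obj is just the example location used above, and it assumes opchmod accepts full-path patterns (if not, opcf to /obj first and use * as in the text):

    import hou

    # Long listing of the nodes under /obj, unix-style.
    out, err = hou.hscript("opls -al /obj")
    print(out)

    # Strip read/write/execute permissions from every node in /obj, verify,
    # then give them back.
    hou.hscript("opchmod 000 /obj/*")
    print(hou.hscript("opls -al /obj")[0])
    hou.hscript("opchmod 777 /obj/*")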