Welcome to od|forum




About Anti-Distinctlyminty


Personal Information

  • Name Luke
  • Location United Kingdom
  1. We can have very long shots (my current one is 3200 frames) with heavy geometry. You can see my issue, I'm sure.
  2. Unfortunately no, neither of those will work. What I'm after is some sort of underwater wavy motion. I can do this just fine, but it means I have to write geo out for every frame, which is very wasteful. Imagine you are rendering something like some underwater kelp gently swaying in the current. A simple moving displacement will work fine, but I cannot see how that can be done.
  3. Hi all, I wasn't sure whether to put this into shading or effects, so it just ended up here. The title says it all - is there a way to move points at render time? Just apply a noise or something like that, essentially a render-time VOP network. The reason is I have a heavy piece of geo that just needs to sit in the background and wobble slightly. That's all. It seems very wrong to write out geo for every frame.
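For what it's worth, the wobble described above boils down to a small, time-varying per-point offset. A minimal pure-Python sketch of the idea (the function name, amplitude, and frequencies are just illustrative, nothing Houdini-specific):

```python
import math

def wobble(points, frame, amplitude=0.05, frequency=0.8):
    """Return points displaced by a gentle sine-based 'underwater' offset.

    points: list of (x, y, z) tuples; frame: current frame number.
    The phase depends on each point's position, so neighbouring points
    move coherently rather than jittering independently.
    """
    t = frame * frequency * 0.1
    out = []
    for x, y, z in points:
        # Position-driven phase offset gives a travelling-wave feel.
        dx = amplitude * math.sin(t + y * 2.0)
        dz = amplitude * math.cos(t + x * 2.0)
        out.append((x + dx, y, z + dz))
    return out
```

In Houdini terms this is the sort of per-point evaluation a displacement shader or render-time procedural would perform, with the frame driving the phase, so the heavy geometry itself never needs to be re-written to disk.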
  4. Hi all, Is there a way to blur surface normals in a shader? I'm currently trying to find a way to make a whole bunch of primitive spheres look like they are blending together. I can't find much in the way of resources on this. My thinking was to do something like a pcfilter in the shader, and modify the surface normals, but I can't see how to use the pc functions in the shader, or modify the surface normals of the principal shader (I'd rather not rebuild that entire thing from scratch).
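The pcfilter idea mentioned above is essentially a weighted average of normals within a search radius. A small pure-Python sketch of that filtering step (plain lists standing in for a point cloud; the function name and falloff are illustrative, not real shader code):

```python
import math

def blur_normals(positions, normals, radius):
    """Average each normal with the normals of points within `radius`.

    positions/normals: parallel lists of (x, y, z) tuples.
    Uses a linear falloff by distance, then renormalises the result.
    """
    out = []
    for p in positions:
        sx = sy = sz = 0.0
        for q, n in zip(positions, normals):
            d = math.dist(p, q)
            if d <= radius:
                # Closer points contribute more to the blended normal.
                w = 1.0 - d / radius if radius > 0 else 1.0
                sx += n[0] * w; sy += n[1] * w; sz += n[2] * w
        length = math.sqrt(sx * sx + sy * sy + sz * sz) or 1.0
        out.append((sx / length, sy / length, sz / length))
    return out
```

In an actual shader the same loop would be expressed with a point-cloud lookup over N, which is what makes adjacent spheres appear to blend into one smooth surface.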
  5. Hi, Let's say I have a bunch of points that I have created, and I want to treat them as particles in a popnet - how do I do this? Is it possible to do this without emitting from the source points for one frame? I noticed that you can set the initial geometry for the popobject node, and gravity and forces do affect these points, but no pop nodes do anything at all. How do I get the pop solver to consider these points as particles? I have attached a simple example. (Note: The popsource node has the Geometry Source set to 'Use DOP Objects', but this is undocumented) Points_as_Particles.hipnc
  6. Don't forget to let me know if you do
  7. Hi Galagast, Thanks for the response...one thing that is apparent from it is that I still have much to learn when it comes to DOPs I'll work through it...slowly. I think I may have to start simpler and build up. Also, if the CHOP is unavoidable, maybe the overhead can be reduced by using some sort of sliding range ($F-5 to $F+5 or something) so that you only calculate the immediate area around the current frame.
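The sliding-range idea above can be sketched in plain Python: only frames inside a window around the current frame get evaluated, clamped to the shot range (the window of 5 mirrors the $F-5 to $F+5 suggestion; the function name is just illustrative):

```python
def sliding_range(current_frame, start, end, window=5):
    """Return the inclusive frame range to evaluate around current_frame.

    Clamps to the shot range [start, end], so the window shrinks
    near the ends instead of running off the timeline.
    """
    lo = max(start, current_frame - window)
    hi = min(end, current_frame + window)
    return lo, hi
```

So at frame 100 of a 1-240 shot only frames 95 to 105 would be considered, rather than the CHOP cooking the entire shot range on every evaluation.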
  8. Hi everyone, I'm trying to use the points created by a particle sim, copy lines to them, and then solve them as wires. However, so far I can get nothing to work at all. Nothing ever shows up in the viewport, and I don't know why. I have tried using one dop network to create the particles, and another to solve the wires, but this doesn't work. I suspect it's something to do with the fact that the newly created geometry needs to be imported into the DOP network, but I do not know how to do this. Any advice would really be appreciated. Wires_on_Particles.hipnc
  9. OK, I'm just stupid. I'm not sure how I didn't try this before my lunch break, but to get the type when you do not have access to kwargs, just use hou.pwd().type(). So the code from the previous post should read:

     import toolutils
     foo = toolutils.createModuleFromSection('bar', hou.pwd().type(), 'PFX_Exporter_Functions')
  10. Hi all, I'm attempting to write a particle cache exporter and I'm having trouble getting the Python module to use functions that are stored in a different section. My instinct was to create a new output driver type, as the category made sense, but there were a few challenges which I'll mention here for others facing the same issues. The 'Render to Disk' button's Callback Script section is greyed out. To run code when this button is pushed you need to: create a file parameter called 'soho_program' and change its default channel value to 'opdef:.?PythonModule'; then create a file parameter called 'soho_outputmode' and change its default to 2. This just runs the code in PythonModule. I have another module in which I store the functions that need to be run in PythonModule, but I cannot import these functions. The docs about asset modules state that you need to write the following to import another section as a module:

      import toolutils
      foo = toolutils.createModuleFromSection('bar', kwargs['type'], 'PFX_Exporter_Functions')

      However, because this is an output driver, there appears to be no kwargs argument, so I cannot get the kwargs['type'] value. I've tried other methods, such as hou.nodeTypeCategories()['Driver']. Strangely, the 'Driver' type is not available directly like all the others listed there. Does anyone know how to import modules from the other sections when using the Driver type?
  11. Hi all, I have a potentially difficult animation involving a rather extreme closeup of a tongue. I wanted to get some realistic motion, so my initial thought was to use FEM; however, I'm unsure how to control the model and have the FEM simulation act as a sort of secondary animation. Is there a standard procedure to achieve this? Are you supposed to animate with bones or some such first? All tutorials seem to be content with making things fall under gravity, which is rarely what I actually need.
  12. Thank you for the replies. I'm glad some are as confused as me. I spent some time and made every single type, and they should be embedded in the hip file attached. They are all laid out, but are also available via Tab > Operator Types in the network view. This was quite illustrative, as it showed what options are available for each on the Operator Type Properties window. A few notes: All the VEX types seem to have the Code tab available to write code for that particular return type, so those are relatively straightforward, but I'll need to search more in order to find what actual code to write in these sections. The most complex seemed to be the SHOP Geometry Procedural Shader, as this has no option to write code; I found from the HDK documentation that you have to write a compiled plugin with the same name, which this shader then picks up. I will not be touching this one for a while. As far as I can tell, the subnet operator types are just the same as selecting a bunch of nodes and collapsing them to a subnet, except this will be an empty subnet. The Object operator types are just containers for geometry, camera, or light setups, I think; I'm guessing that because they are those types, Houdini treats them all slightly differently. The Python Object operator seems to be the same, but with some Python code that is called for each cook. If anyone has any other insights then please do let me know. New_Operator_Types.hipnc
  13. Hi all, I'm attempting to make a new operator, and I'm still relatively new to Houdini in this regard, so every time I see this interface I am left with some questions, as there seems to be no list in the documentation explaining why the operators are broken up into these 'Operator Styles' and 'Network Types'. For example: Why are these particular 'Operator Styles' the ones available? Are they presets, or are they fundamentally different in some way? Is a 'VEX Type' 'Geometry operator' different from a 'Python Type' 'Geometry operator'? And is that different from a 'Subnet Type' 'Geometry operator'? 'Python Type' is one of the Styles, but I thought all operators were implemented in Python - does this type mean something else? The documentation on the New Operator Type dialog doesn't answer these questions. I'm currently trying to write a particle cache exporter, so I thought it needs to be an 'Output Driver Type', but if I do that the callback script option is greyed out.
  14. Yeah, this is what is supposed to happen, but you can see from the hip file that it does not happen in this case. The Wren node is connected into the Mantra node. The Wedge is set to use this Mantra node, so it should render the connected Wren node, but it renders the same thing over and over. I think this has to be a bug, but currently I cannot get through to the bug database at www.service.sidefx.com; it just says 'ERROR: The php get_magic_quotes_gpc option must be turned on.' Any suggestions would be useful.
  15. Hi all, I'm trying to render to several outputs at the same time, and perform multiple takes on those outputs. The attached example scene shows more clearly what I mean: I have two sub-takes (each shows a different primitive), and a Wedge node that is set to render both of these takes. In /out you will see a Wren node plugged into a Mantra node, the idea being that if I set the Wedge node to render using the Mantra node as an output driver, the Wren node that is connected to it should also render. But it renders using the wrong take. If you open the attached file and click on 'Render Wedges' on the /out/wedge1 node, it should render the Wren node, then the Mantra node, to MPlay. You can see that the Wren node always renders the same output. I see in the documentation there are instructions on how to wire up multiple nodes to make them all render in sequence, but this doesn't seem to work at all where takes are concerned. Am I doing something wrong, or is this a bug? Wedge.hipnc